Your DVR does not have a search feature. It has a timeline with a motion filter.
Every top result for “search security camera footage” walks you through the same DVR menu: Playback, date picker, timeline scrub, 4x playback. That workflow is not search. Search requires a pre-built index with a row per event and a semantic class label. Consumer DVRs never build that index, which is why the workflow collapses into scrubbing. This guide defines what the index has to contain, shows the nine-field record Cyrano writes per event off the DVR's HDMI multiview, and explains the one preprocessing step (overlay masking) that keeps the index from flooding with false positives on the recorder's own clock and channel-bug pixels.
See a live index query on a real DVR
What the DVR menu labelled “Search” actually does
Open any consumer DVR (Hikvision DS-7716, Dahua XVR5216, Lorex, Amcrest, Reolink NVR, Swann, Night Owl, Q-See) and the Search or Playback button lands on the same UI: a twenty-four-hour timeline slider per channel, with segments shaded in a different colour to indicate that the motion detector fired. You drag the slider. You play at 4x or 8x. You watch for the event. You rewind and watch at real speed. Then you export.
That UI is doing exactly one thing under the hood: it is running a pixel-difference check between adjacent frames and shading the segments where the difference exceeded a threshold. A plastic bag across the lot, a passing car, a tree branch in wind, a cloud shadow, and a person at the mailroom all produce the same shaded segment. There is no event table. There is no class label. There is no zone filter. The DVR never built an index at all.
This is why the retrieval workflow that every top-ranked guide describes is identical: narrow the time window, scrub the timeline, and hope. If you know the incident happened between 2:00 and 4:00 a.m., you have two hours of footage to scrub per channel. On a sixteen-channel recorder that is up to thirty-two camera-hours of material. At 4x playback that is eight hours of review, per incident.
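The scrub-time arithmetic above, as a quick sanity check. The window, channel count, and playback speed are the figures from the scenario in the paragraph:

```python
hours_window = 2            # incident window, 2:00 to 4:00 a.m.
channels = 16               # sixteen-channel recorder
playback_speed = 4          # 4x scrub

camera_hours = hours_window * channels          # 32 camera-hours of material
review_hours = camera_hours / playback_speed    # 8 hours of review, per incident
```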
What a real search index has to contain
If you are going to treat footage the way a property manager actually wants to treat it, as a queryable record of events at a site, the index needs these fields. Drop any one of them and search becomes either unreliable or brittle.
tile.label
The camera name the DVR already paints on the multiview strip (Mailroom Interior, Loading Dock NE, Lobby W). This is the stable key a property manager remembers. Channel numbers drift when cameras get reassigned; the label does not.
tile.index and tile.coords
The row-major position of the tile on the multiview grid, and the pixel rectangle inside the composite frame. Used to crop the tile cleanly before inference.
property
Site identifier. Makes portfolio-wide queries possible across mixed-brand recorders that otherwise have incompatible playback UIs.
layout_id
Recorder layout template (4x4-std, 3x3-std, 2x2-std, custom). Drives which overlay mask template gets applied.
overlay_mask
List of DVR chrome regions blanked before inference: clock, cam_name_strip, channel_bug. Without this field the classifier scores bounding boxes on the colon glyph of the clock, and the index fills with noise.
event_class
Semantic label at index time: person_in_zone, loiter, vehicle_dwell, tamper, package_tamper. This is what makes filter-by-what-happened possible.
iso8601_ts
Recorder clock time. Ties the index back to the full DVR recording so a click on a thumbnail opens the underlying clip.
latency_ms
Capture-to-delivery time. Exposed on every record so regressions are visible at query time, not buried in a monitoring dashboard nobody opens.
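As a concrete sketch of what tile.coords is for: assuming the rectangle is stored as x, y, width, height in composite-frame pixels (the exact coordinate field names are an assumption, not a documented schema), cropping a tile cleanly before inference is a single array slice.

```python
import numpy as np

def crop_tile(frame: np.ndarray, coords: dict) -> np.ndarray:
    """Crop one camera tile out of the composite multiview frame.

    `coords` is the tile.coords rectangle from the event record,
    assumed here to be {"x", "y", "w", "h"} in composite-frame
    pixels (illustrative field names).
    """
    x, y, w, h = coords["x"], coords["y"], coords["w"], coords["h"]
    return frame[y:y + h, x:x + w]

# A 1080p composite in a 4x4 grid: each tile is 480x270.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
tile = crop_tile(frame, {"x": 480, "y": 270, "w": 480, "h": 270})
```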
The actual record on disk
One row per event. Flat JSON: nine fields in all, six at the top level plus the three inside a small object for the tile. The thumbnail is a 480x270 JPEG that makes the reverse-chronological strip UI render instantly. Everything else is just columns a query can filter on.
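Here is what such a record might look like. The field names follow the article; every value, from the property slug to the thumbnail path, is made up for illustration.

```python
import json

# Illustrative event record. Field names come from the article;
# all values are invented.
event = {
    "tile": {
        "label": "Mailroom Interior",
        "index": 5,
        "coords": {"x": 480, "y": 270, "w": 480, "h": 270},
    },
    "property": "oakview-commons",
    "layout_id": "4x4-std",
    "overlay_mask": ["clock", "cam_name_strip", "channel_bug"],
    "event_class": "person_in_zone",
    "iso8601_ts": "2024-05-14T02:41:07",
    "latency_ms": 7400,
    "thumbnail": "events/2024-05-14T02-41-07_mailroom.jpg",
}

row = json.dumps(event)        # one flat row per event on disk
restored = json.loads(row)
```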
The preprocessing step nobody talks about
Every consumer DVR composites the camera feeds into a mosaic and then paints its own chrome on top of the mosaic before pushing it out the HDMI port. A live clock along the top or bottom. A camera-name banner at the corner of each tile. A channel bug per tile (CH1, CH2, CH3 in a small pill). On some recorders a recording indicator or a network-status icon.
A person-detection or vehicle-detection classifier that sees those pixels will occasionally fire a confident bounding box on them. The colon glyph inside 02:41:07 looks like a tiny vertical person at the right scale. The letterforms of Mailroom Interior can score as low-confidence text-shaped objects. None of those detections are events, but they all get a high enough score to pass a naive threshold.
For a live monitoring product this is annoying. For a search index it is catastrophic, because every false positive becomes a row, and within a week the index is mostly noise. The fix is a per-layout overlay mask template computed once at install time. The mask is a binary rectangle list that the pipeline paints black on the composite frame before any classifier runs. The overlay_mask field on the event record documents that the mask was applied and which regions were blanked, so the mask state at index time is reconstructable months later if a reviewer ever challenges an event.
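A minimal sketch of the masking step. Only the region names come from the article; the rectangle coordinates and the template structure are invented for illustration.

```python
import numpy as np

# Per-layout mask template: (x, y, w, h) rectangles in composite-frame
# pixels. Region names follow the article; coordinates are illustrative,
# and a real template would list one banner and bug rect per tile.
MASK_4X4_STD = {
    "clock":          (760, 0, 400, 40),    # top clock strip
    "cam_name_strip": (0, 1040, 480, 40),   # per-tile name banner (one shown)
    "channel_bug":    (8, 8, 72, 28),       # CH1 pill (one shown)
}

def apply_overlay_mask(frame: np.ndarray, mask: dict) -> list:
    """Paint every DVR chrome region black before any classifier runs.

    Mutates `frame` in place and returns the sorted list of region
    names that were blanked, which is what overlay_mask records.
    """
    for _name, (x, y, w, h) in mask.items():
        frame[y:y + h, x:x + w] = 0
    return sorted(mask)

frame = np.full((1080, 1920, 3), 128, dtype=np.uint8)
overlay_mask = apply_overlay_mask(frame, MASK_4X4_STD)
```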
Overlay regions masked before classification
- Top clock strip (HH:MM:SS with seconds-updating colon glyph)
- Per-tile camera-name banner (the label that becomes tile.label)
- Per-tile channel bug (CH1, CH2, CH3, small pill)
- Recording indicator (red dot, varies by recorder)
- Network / disk status icon (bottom right on most Hikvision DVRs)
- Date badge (varies, sometimes embedded in clock, sometimes separate)
The pipeline, end to end
One HDMI tap in, one event row out. The hub runs all four stages on the edge unit, so no frames are uploaded.
HDMI to searchable in 7 to 8 seconds median
What a query actually looks like
In the dashboard, a property manager picks from dropdowns. Under the hood, the dropdowns resolve to a filter against the event table. The SQL below is what the scenario “anyone standing in the mailroom overnight last week?” actually compiles to.
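A hedged sketch of what that compilation might produce, run against a toy SQLite event table. The table schema and column names are assumptions modeled on the nine-field record; the actual query surface may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        property    TEXT,
        tile_label  TEXT,
        event_class TEXT,
        iso8601_ts  TEXT,
        latency_ms  INTEGER
    )
""")
# Three illustrative rows: one overnight mailroom person event,
# one daytime vehicle event, one event on a different camera.
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?, ?)",
    [
        ("oakview", "Mailroom Interior", "person_in_zone", "2024-05-14T02:41:07", 7400),
        ("oakview", "Mailroom Interior", "vehicle_dwell",  "2024-05-14T14:10:00", 7100),
        ("oakview", "Lobby W",           "person_in_zone", "2024-05-15T03:05:22", 7900),
    ],
)

# "Anyone standing in the mailroom overnight last week?"
rows = conn.execute("""
    SELECT iso8601_ts, event_class
    FROM events
    WHERE property = 'oakview'
      AND tile_label = 'Mailroom Interior'
      AND event_class = 'person_in_zone'
      AND time(iso8601_ts) NOT BETWEEN '09:00:00' AND '20:00:00'
      AND iso8601_ts BETWEEN '2024-05-08' AND '2024-05-15'
    ORDER BY iso8601_ts DESC
""").fetchall()
```

Note that every dropdown in the dashboard maps onto exactly one WHERE clause here, which is the sense in which the UI "compiles to" a filter.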
DVR built-in search vs. an event index
The two approaches look similar from the button label. They are different products underneath.
| Feature | DVR built-in search | Cyrano event index |
|---|---|---|
| Index granularity | Pixel-difference motion, no event rows | One row per event with a class label |
| Built when? | On demand, 2 to 5 min rebuild per 24 h window | At capture time, 7 to 8 s median end-to-end |
| Search key | Channel number (CH1, CH2) | tile.label (Mailroom Interior, Loading Dock NE) |
| Survives layout or cable swaps | No. Queries break when channels are reassigned. | Yes. The label is painted into the tile itself. |
| Filter by what happened | No. All motion looks the same. | person_in_zone, loiter, vehicle_dwell, tamper, etc. |
| Handles DVR clock and channel-bug pixels | N/A (no classifier running) | overlay_mask blanks them before inference |
| Portfolio search across mixed brands | No. Each recorder has its own UI. | Yes. One table keyed on property. |
| Result UI | Timeline slider with 4x scrub | Reverse-chronological thumbnail strip |
A real search, start to finish
Tenant reports mailroom disturbance, 48 hours after the fact
Open the dashboard, pick the property
From a dropdown sourced by the property field on the event table. No VPN, no remote desktop into the DVR.
Pick tile.label from the camera dropdown
The options are the labels the DVR already paints on its own multiview (Mailroom Interior, Mailroom Exterior, Lobby W). Picked up by OCR at install time and cached.
Pick event_class
person_in_zone, loiter, vehicle_dwell, tamper, or all. Maps directly to the class label the classifier wrote at index time.
Set the time window
Standard ISO 8601 range. Filters on iso8601_ts, which is the recorder clock, same timestamp as the full DVR recording.
Skim the thumbnail strip
Reverse chronological. A 480x270 JPEG per event, pre-rendered, so the strip paints instantly no matter how many results the query returns.
Click to watch, click to export
Clicking a thumbnail opens the 30-second clip. Export button writes an MP4 plus the event-row JSON. Chain-of-custody metadata is already in the record.
The index does not care what recorder the property owns
The HDMI port is the standard interface. Any recorder that drives a monitor drives Cyrano. The overlay mask template per recorder is what makes the classifier portable across brands. The per-layout template set is the actual asset.
Three ways to tell if your current system has a real search
A quick diagnostic. If your answer to any of these is no, you do not have search, you have a scrubbable timeline.
1. Can you filter by event class?
Not motion (which is a pixel-diff). Event class: person, vehicle, loiter, tamper. If the only filter is “motion yes/no” you do not have search.
2. Do saved queries survive a camera swap?
If yesterday's search for “loading dock after 10 p.m.” silently starts returning the wrong camera after maintenance rearranged channels, the search is keyed on channel number, not camera name.
3. Can you query across properties at once?
If the only way to search a twelve-property portfolio is to remote into twelve different DVRs, there is no portfolio search. An event index makes property a filter column.
See a real event index query on a real DVR
15 minutes. We tap the HDMI on one of your recorders on the call and show you the thumbnail strip filling from live events.
Book a call →
Questions from the Reddit thread that landed here
Why do you say my DVR does not actually have a search feature?
Because what the menu labels Search is a timeline scrub with a motion filter on top. The DVR stored raw pixel differences frame by frame while recording, and when you hit Search it redraws the timeline with the motion-dense segments highlighted. There is no event row, no class label, no actor identity, no zone. A plastic bag drifting across the lot produces the same highlight as a person loitering at the mailroom. Real search requires a pre-built index where each row represents a discrete event with a label (person_in_zone, vehicle_dwell, loiter, tamper, and so on) and a timestamp, so you can filter by what happened. Consumer DVRs do not build that index, which is why the workflow collapses into scrubbing.
What is actually in Cyrano's event record, and why does the shape matter?
Nine fields plus a thumbnail. tile.label is the camera name the DVR stamps on its multiview strip (for example Mailroom Interior), which is the stable key a property manager remembers. tile.index is the row-major position on the multiview grid. tile.coords is the pixel rectangle inside the composite frame, which lets the pipeline crop the tile cleanly. property is the site identifier. layout_id is the recorder layout (4x4-std, 3x3-std, custom), which is how the overlay mask template gets picked. overlay_mask is the list of DVR chrome regions that got blanked before inference (typical: clock, cam_name_strip, channel_bug). event_class is the semantic label the classifier assigned. iso8601_ts is the recorder clock time. latency_ms is capture-to-delivery. That shape is what lets filtering work. Remove any one of those fields and search either becomes unreliable (drop overlay_mask and the detector scores boxes on the clock glyph) or brittle (drop tile.label and your queries break when maintenance swaps channels).
What is the overlay mask and why is it critical for search specifically?
A DVR composite signal is not a clean feed. Every recorder paints a clock strip, a per-tile camera name banner, and a channel bug onto the mosaic before pushing it out the HDMI port. A classifier that sees those pixels will occasionally score a confident bounding box on the colon glyph inside the clock, or on the letterforms of the camera name. For a search index that is catastrophic: every such false positive becomes an event row, the index fills with noise, and real events get buried. The overlay_mask field records exactly which overlay regions were blanked out on a given layout_id, once at install time. Once a layout is masked, the detector only sees the scene inside each tile. No consumer DVR does this because consumer DVRs do not run classifiers at all, and no third-party cloud-NVR does it because they get clean per-camera RTSP streams rather than composite video.
If the DVR already stores everything on disk, why do I need a separate index?
Because storage and search are different products. The DVR's disk is a blob of H.264 or H.265 that is addressable by (channel, timestamp). That lets you play back footage if you already know the channel and the time. It does not let you ask a question like 'show me every time someone stood in the mailroom alcove for longer than 20 seconds outside 9 a.m. to 8 p.m. last week.' To answer that you need a row per event with the zone, the dwell, and the timestamp. The raw video keeps its job, the index does its job, and the two are tied together by iso8601_ts so a click on a thumbnail in the index opens the full clip on the DVR.
How do I actually run a search once the index exists?
You filter the index. In the Cyrano dashboard the filters are: property, tile.label (the camera name the DVR already stamps on the multiview), event_class (person_in_zone, loiter, vehicle_dwell, tamper, or all), and an iso8601_ts window. The results render as a reverse-chronological thumbnail strip. Click a thumbnail and the full clip opens. No scrubbing, no 4x playback, no channel-number math. For integrators who want to script against the index, there is a SQL surface that reads the same underlying event table with the same nine fields.
Why is keying on tile.label better than keying on channel number?
Channel numbers are assigned by the DVR. When a tech swaps cameras on a re-cable, or when maintenance rearranges the multiview to put the loading dock on a bigger tile, the channel number on a given camera changes. Every saved search that referenced CH6 now points at a different camera. tile.label is the human-readable name that gets painted onto the tile itself (Loading Dock NE, Mailroom Interior, Lobby W). Property managers already know those names because that is what they see on the monitor. Cyrano reads them off the composite frame using OCR at install time, then uses them as the stable key. A search for Mailroom Interior keeps working even if the recorder channel underneath changes.
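The difference can be made concrete in a few lines. Everything here is illustrative: a saved query that stores the label keeps resolving to the right camera after a re-cable swaps the channels underneath, while a query that stored CH6 would silently point at the wrong feed.

```python
# Channel-to-label mapping as read off the multiview (illustrative data).
multiview_before = {"CH6": "Loading Dock NE", "CH7": "Mailroom Interior"}
multiview_after  = {"CH6": "Mailroom Interior", "CH7": "Loading Dock NE"}  # re-cable

# Saved search keyed on the label painted into the tile, not the channel.
saved_query = {"tile_label": "Loading Dock NE", "event_class": "vehicle_dwell"}

def resolve_channel(query: dict, multiview: dict):
    """Find which channel currently carries the camera the query names."""
    for channel, label in multiview.items():
        if label == query["tile_label"]:
            return channel
    return None
```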
How fast is an event available to search after it happens?
Median capture-to-delivery is 7 to 8 seconds on a 25-tile multiview, with a 5 to 15 second envelope. That is end-to-end: HDMI frame grab, per-tile crop, overlay mask, classifier, event-row write, thumbnail render, push to the dashboard. A person who walks into a detection zone at 2:04:12 p.m. is searchable at 2:04:20 p.m. No end-of-day batch indexing. No cloud upload waiting on a residential uplink.
Does this require replacing my DVR or my cameras?
No. Cyrano plugs into the HDMI output that already drives the monitor in the office or guard booth. One HDMI cable in, one HDMI cable out to the monitor, one network cable, one power plug. Under two minutes of physical install on a running DVR. Nothing on the recorder changes. The cameras stay exactly where they are, whether they are analog over coax, IP over PoE, or a mixed fleet. Supported recorders include Hikvision DS-7xxx, Dahua XVR and NVR, Lorex, Amcrest, Reolink NVR, Uniview, Swann, Night Owl, Q-See, ANNKE, EZVIZ, Bosch DIVAR, Honeywell Performance, and Panasonic WJ-NX, along with the long tail of rebrands. If your recorder has an HDMI port, the search index is a plug-in.
Can I search across multiple properties at once?
Yes. The event index is a single table keyed by the property field. A regional manager with 12 sites can open one dashboard, leave property unset, and filter by tile.label or event_class across the whole portfolio. Before this, searching across mixed-brand recorders (say a Hikvision at one property and a Dahua at another) was impossible at the DVR level because each recorder has its own proprietary playback UI and export format. Moving search up into the index layer makes the underlying recorder irrelevant to the search experience.
What happens to footage that is not indexed as an event?
Nothing. The DVR continues recording continuously to its own disk under whatever retention policy was already in place. The index is a parallel structure. About 99 percent of the time, a property manager can answer a review question from the index alone. For the 1 percent case (a subpoena asking for a specific two-hour window with no events, or the five minutes leading up to a flagged event) the full DVR recording is still there to pull from. Fast search for the common case, unchanged archive for the long tail.
What is the one thing a reader from a Reddit thread should actually do this week?
If you have an unsolved incident sitting on a DVR right now, stop scrubbing the timeline. Write down the two or three tile names you remember from the multiview, the rough time window, and the event class (person, vehicle, loiter, tamper). Those are the four filters a real search index needs. If your DVR does not expose those filters, the answer is not to keep scrubbing faster. The answer is to put a search index on top of the recorder, which is what Cyrano does off the HDMI output in under two minutes of install time.