Cyrano Security
11 min read
Footage Review Guide

Footage review should be a list of events, not a pile of hours.

Every top guide to security camera footage review walks you through the same flow: pick a camera, narrow the time window, scrub at 4x, watch the timeline scroll. That workflow is playback, not review. Real review is a reverse-chronological list of classified events across every tile on the monitor, filterable by camera name, event class, and time. This guide shows the math of the review-time flip, the nine-field record that makes the list possible, and the one trick (OCR of the DVR's own camera-name strip) that lets the review dashboard speak the exact labels your team reads off the multiview.

See a live review session on a real DVR
4.9 from 50+ properties
Reviewing events, not scrubbing timelines
Filter chips use the DVR's own camera names
Median 7 to 8 seconds capture to reviewable
One HDMI cable in, nothing on the DVR changes

What “review” actually means on most properties

Ask a multifamily property manager or a small-business owner to describe what it takes to review their camera footage after an overnight incident, and the answer is almost always the same. They drive to the building or VPN in, log into the DVR, select the channel they think captured the event, set a rough time window, and scrub the timeline at 4x while staring at a 1920x1080 playback window. If they don't find it on the first camera, they pick the next one and repeat. Sixteen cameras, four hours of footage per camera, 4x playback speed: that is sixteen hours of compressed review time, chopped up across sixteen separate tab-switches.

This is why the top-ranked results for “security camera footage review” all converge on the same three tips. Narrow the time window. Fast-forward at higher speeds. Use motion indexing to skip the empty segments. Every one of those tips is a way to compress playback. None of them is review. The reason they exist is that the DVR never built a real index, so there is nothing to filter over. The only knob you have is the playback speed.

Two ways to answer: "What happened overnight?"

The playback way: log into the DVR, pick a channel, set a time window, scrub the timeline, bump speed to 4x or 8x, watch the playback, repeat for every camera that might have caught the event.

  • Playback is serialized per channel
  • No way to filter by what happened, only by when
  • Easy to miss short events at high playback speed
  • 16-camera property: a full work day of attention

The event-list way: open the review dashboard, set the time window, filter by event class and camera label, and scan a reverse-chronological thumbnail strip.

  • One list across every tile on the monitor
  • Filter by what happened, not only by when
  • Short events surface as rows, not blink-and-miss frames
  • 16-camera property: minutes of triage, not a work day

The math flip: review time stops scaling with footage

Playback review time is a function of hours of footage times camera count divided by playback speed. Event-list review time is a function of event count. On a quiet property the difference is dramatic. On a busy one the difference is still large because events are bounded by how many things actually happened, not by how long the recorder was on.
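The flip can be sketched with the article's own numbers: sixteen cameras, four hours of footage each, 4x playback, against a quiet night's event list at half a second of attention per thumbnail. The two helper functions below are illustrative, not part of any product API.

```python
# Playback review scales with footage; event-list review scales with events.

def playback_hours(cameras: int, footage_hours: float, speed: float) -> float:
    # Serialized per-channel scrubbing: total camera-hours divided by playback speed.
    return cameras * footage_hours / speed

def triage_minutes(events: int, secs_per_thumbnail: float = 0.5) -> float:
    # Reverse-chronological strip: half a second of attention per thumbnail.
    return events * secs_per_thumbnail / 60.0

# 16 cameras, 4 overnight hours each, scrubbed at 4x:
scrub = playback_hours(16, 4, 4)    # 16.0 hours of attention
# The same overnight as an event list, quiet night of 15 rows:
triage = triage_minutes(15)         # 0.125 minutes of thumbnail scanning
print(f"playback: {scrub:.1f} h, event list: {triage * 60:.1f} s")
```

Doubling the footage doubles `scrub` but leaves `triage` untouched; only more events make the list longer.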

  • 30 fps: per-tile inference rate on the HDMI composite
  • 38 ms: per-frame inference latency, single hardware decoder
  • 25: max tiles per HDMI input processed in parallel
  • 7 to 8 s: median capture-to-reviewable on a 25-tile multiview

A 25-camera property running a standard 5x5 multiview at 30 fps composite produces a single frame every 33 milliseconds. Cyrano crops 25 tiles out of that frame and runs a person and vehicle classifier on each tile in parallel. The inference budget stays constant whether your property has 4 cameras or 25, because the pipeline is bounded by the composite frame rate, not the per-camera stream count. A reviewer watching the dashboard sees a 7 to 8 second median lag between an incident happening and its thumbnail appearing at the top of the list.

How the review list gets built off one HDMI cable

The entire pipeline runs off the HDMI cable that already drives the monitor in the office or guard booth. Nothing on the recorder changes. The DVR keeps recording to its own disk under whatever retention policy was in place before. Cyrano reads the composite, classifies each tile, and writes event rows to a local index that the review dashboard renders.

One HDMI composite in, one chronological review feed out

[Diagram: multiview tiles (Tile 01, 07, 12, 18, 22) flow into the Cyrano edge box, which writes event_class, tile.label, iso8601_ts, and a thumbnail for each event into the review feed.]
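The per-tile crop step is pure grid geometry. The helper below is a hypothetical sketch, assuming an evenly divided NxN layout; the 1920x1080 frame size and 5x5 grid come from this article.

```python
# Compute row-major crop rectangles for an NxN multiview composite,
# the step the edge box performs before per-tile inference.

def tile_coords(frame_w: int, frame_h: int, grid: int):
    """Yield (index, (x, y, w, h)) for each tile, row-major from top-left."""
    w, h = frame_w // grid, frame_h // grid
    for i in range(grid * grid):
        row, col = divmod(i, grid)
        yield i, (col * w, row * h, w, h)

# 5x5 multiview on a 1920x1080 composite: 25 tiles of 384x216 each.
tiles = dict(tile_coords(1920, 1080, 5))
print(len(tiles), tiles[0], tiles[24])
```

Tile 0 sits at the top-left corner and tile 24 at the bottom-right; the same table drives both the crop and the `tile.coords` field of the event record.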

The one OCR trick that makes the review feed legible

Every DVR paints its own chrome onto the composite before pushing it out the HDMI port: a live clock, a per-tile camera-name banner, a channel bug (CH1, CH2, CH3) in a corner pill. That painted-on chrome is noise for a person-detection classifier, which is why the event pipeline has to mask it out before inference. But one of those overlays, the cam_name_strip, is also a gift. It contains the exact human-readable name the property manager uses to refer to each camera. Mailroom Interior. Loading Dock NE. Lobby W. North Gate.

At install time, once Cyrano has locked onto the multiview layout, it runs OCR across each tile's name strip and populates the tile.label field of the event record. From that point forward, every row in the review feed is keyed by the exact camera name the monitor has been showing all along. The review dashboard filter chips look like this:

Mailroom Interior · Loading Dock NE · Lobby W · Parking Garage P1 · North Gate · Trash Room · Pool Deck · Rear Alley · Leasing Office · Elevator A · Mailroom Exterior · South Stairwell

Competitors that ingest RTSP streams see a clean per-camera feed but have no access to the label the monitor shows, because the label is painted by the DVR only in the composite, not in the RTSP. Their filter chips end up as Camera 01 through Camera 25. That works until the day maintenance swaps two coax runs and every saved filter now points at the wrong camera. Keying on the painted label instead of the channel number makes saved review filters survive physical re-cabling.
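A toy sketch of that failure mode, with invented wiring tables and a hypothetical `events_for` helper: a filter saved as a painted label keeps finding the right camera after the swap, while a filter saved as a channel number would silently point at the wrong one.

```python
# Channel-to-camera wiring before and after maintenance swaps two coax runs.
before = {1: "Mailroom Interior", 2: "Loading Dock NE"}
after  = {1: "Loading Dock NE", 2: "Mailroom Interior"}  # runs swapped

def events_for(wiring, events, label):
    # Label-keyed filter: resolve each event's channel through the current
    # wiring and match on the painted camera name, not the channel number.
    return [e for e in events if wiring[e["channel"]] == label]

events = [{"channel": 1}, {"channel": 2}]

# The saved "Mailroom Interior" filter follows the camera across the swap:
assert events_for(before, events, "Mailroom Interior") == [{"channel": 1}]
assert events_for(after, events, "Mailroom Interior") == [{"channel": 2}]
# A filter saved as "channel 1" would now show the loading dock instead.
```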

What one reviewable event actually looks like on disk

One row per event. Flat JSON, eight scalars plus one small object for the tile. The thumbnail is a 480x270 JPEG that lets the reverse-chronological strip render instantly. Everything else is columns a filter can hit.

events/ev_01HW9Q.json
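The file body did not survive extraction, so here is a hedged reconstruction of what such a record could look like. Field names follow the "review-ready event record" list later in this article; the `id` field is inferred from the filename, and every value is invented for illustration.

```python
import json

# Hypothetical event record: eight scalar-ish fields plus the tile object.
event = {
    "id": "ev_01HW9Q",                          # row id (assumed from filename)
    "event_class": "person_in_zone",            # semantic label at index time
    "iso8601_ts": "2026-01-12T02:04:12-05:00",  # recorder clock time
    "property": "site-001",                     # site identifier (assumed format)
    "layout_id": "5x5-std",                     # multiview layout template
    "overlay_mask": ["clock", "cam_name_strip", "channel_bug"],
    "latency_ms": 7400,                         # capture-to-delivery for this row
    "thumbnail": "thumbs/ev_01HW9Q.jpg",        # 480x270 JPEG for the strip
    "tile": {                                   # the one small object
        "label": "Mailroom Interior",           # OCR'd from the cam_name_strip
        "index": 7,                             # row-major position in the grid
        "coords": [768, 216, 384, 216],         # x, y, w, h inside the composite
    },
}

print(json.dumps(event, indent=2))
```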

The shift-end review workflow, step by step

This is the sequence a property manager or security lead actually runs through on Monday morning after a quiet-ish weekend. The goal is to get from zero context to a full answer to “what happened?” in under twenty minutes on a 25-camera property.

1

Open the review dashboard

One URL. No VPN to the DVR. No client software install. Loads in a browser because the index itself lives off the DVR.

2

Set the time window

Pick a range: last 12 hours, last weekend, a custom window. The time filter hits iso8601_ts on the event table.

3

Filter by event class

person_in_zone, loiter, and tamper cover most reviews. Add vehicle_dwell if you are tracking parking lot activity. Leave the filter empty to see everything.

4

Optional: pin specific tiles

If you know the area (Loading Dock NE, Mailroom Interior) select the tile filter chip. The label matches what the monitor shows.

5

Scan the thumbnail strip

Reverse-chronological. Half a second per thumbnail. Quiet night is 5 to 15 rows. Busy night is 50 to 200. Either way it is O(events), not O(hours).

6

Click to open the underlying DVR clip

Each thumbnail links to the full recording on the DVR, seeked to iso8601_ts minus a buffer. Full context available without scrubbing.
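Steps 2 through 5 amount to one filter plus one sort over the event table. The sketch below assumes the record shape described in this article; `review_feed` is an illustrative helper, not a product API.

```python
from datetime import datetime

def review_feed(events, start, end, classes=None, tiles=None):
    """Reverse-chronological slice of the event list for one review session."""
    rows = [
        e for e in events
        if start <= datetime.fromisoformat(e["iso8601_ts"]) <= end
        and (not classes or e["event_class"] in classes)
        and (not tiles or e["tile"]["label"] in tiles)
    ]
    # Most recent event on top, as the thumbnail strip renders it.
    return sorted(rows, key=lambda e: e["iso8601_ts"], reverse=True)

events = [
    {"iso8601_ts": "2026-01-12T02:04:12", "event_class": "person_in_zone",
     "tile": {"label": "Mailroom Interior"}},
    {"iso8601_ts": "2026-01-12T03:15:40", "event_class": "vehicle_dwell",
     "tile": {"label": "North Gate"}},
]
overnight = review_feed(events,
                        datetime(2026, 1, 11, 22), datetime(2026, 1, 12, 6),
                        classes={"person_in_zone", "loiter", "tamper"})
print([e["tile"]["label"] for e in overnight])   # -> ['Mailroom Interior']
```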

What a scripted review session looks like

For integrators or operations teams running the same review query every morning, the event index exposes a SQL surface over the same nine-field record. The dashboard and the SQL hit the same table.

cyrano review: last overnight, mailroom + loading dock
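The query body was lost in extraction; what follows is a hedged sketch of the shape such a morning query could take, run here against an in-memory SQLite table. The table name, the flattening of tile.label into a tile_label column, and the sample rows are all assumptions for illustration.

```python
import sqlite3

# Stand-in for the event index's SQL surface: a minimal slice of the record.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE events (
    event_class TEXT, tile_label TEXT, iso8601_ts TEXT, thumbnail TEXT)""")
con.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", [
    ("person_in_zone", "Mailroom Interior", "2026-01-12T02:04:12", "t1.jpg"),
    ("vehicle_dwell",  "North Gate",        "2026-01-12T03:15:40", "t2.jpg"),
    ("loiter",         "Loading Dock NE",   "2026-01-12T04:51:02", "t3.jpg"),
])

# Last overnight, mailroom + loading dock, most recent first.
rows = con.execute("""
    SELECT iso8601_ts, tile_label, event_class FROM events
    WHERE iso8601_ts BETWEEN '2026-01-11T22:00:00' AND '2026-01-12T06:00:00'
      AND (tile_label LIKE 'Mailroom%' OR tile_label LIKE 'Loading Dock%')
    ORDER BY iso8601_ts DESC
""").fetchall()
for ts, label, cls in rows:
    print(ts, label, cls)
```

ISO 8601 timestamps sort correctly as plain strings, which is why the time filter and the ORDER BY can run on the text column directly.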

Four review use cases the event list handles natively

The reason to move review up into an index layer is that the same table answers every question the property manager gets asked in a given week. Shift review, post-incident review, pattern review, and audit review all hit the same nine fields with different filter combinations.

Shift-end review

Run it at shift change, Monday morning or Friday 6 p.m. Filter: last 8 hours, no tile pin, class = person_in_zone + loiter + tamper. Scan the strip, flag anything that looks wrong, escalate one or two clips. Replaces the four-hour DVR scrub that nobody actually does.

Post-incident review

Resident reports a package gone at 14:30. Filter: property + date + tile.label containing 'Mailroom'. Every person_in_zone event in that tile across the day is on screen in one scroll.

Pattern review

Recurring loiterer at the dumpster. Filter: class = loiter, tile = Trash Room, last 30 days. The dashboard renders a dot-plot of event times and thumbnails. Pattern becomes obvious.

Audit prep

Insurance carrier asks for a 30-day incident log. Export the event table as CSV or PDF filtered by event_class = tamper + person_in_zone. Nine fields per row, already timestamped, already labelled.

Law enforcement handoff

Officer on-site needs a specific clip. Filter by time window, click the thumbnail, the underlying DVR clip exports as MP4. Minutes, not the hour it used to take.

Cross-property review

Regional manager with 12 sites: leave the property filter unset. The event table is already unified across mixed-brand recorders because the index layer sits above the DVR.

Event-list review vs. traditional DVR review

The comparison that matters is not Cyrano versus another cloud NVR. It is review-as-list versus review-as-playback. Every DVR and NVR on the market today ships with some form of the playback workflow. A smaller and growing set ships with an index.

Feature | DVR playback | Cyrano event list
Review time, 25-camera property, quiet overnight | 3 to 4 hours of scrubbing across channels | 10 to 20 minutes of thumbnail triage
What the filter actually does | Redraws the timeline with motion-dense segments shaded | Query over an indexed event table (nine fields)
Cross-camera review in one view | Playback is per-channel, one camera at a time | Single reverse-chronological list across all tiles
Camera names in filter UI | CH1, CH2, CH3 (channel numbers assigned by DVR) | Mailroom Interior, Loading Dock NE (OCR from multiview)
Classification of events | Pixel-difference motion threshold, no object label | person_in_zone, loiter, vehicle_dwell, tamper at index time
Survives camera re-cabling | No, filters break when channels get reassigned | Yes, filters are keyed on the painted label
Latency from event to reviewable | Available immediately as raw footage; findable whenever you find time | 7 to 8 seconds median, 5 to 15 second envelope
Original footage retention | Unchanged | Unchanged; DVR keeps recording under its own retention policy
Install cost on a running system | Already installed; review cost is a work day per incident | One HDMI cable in, one out to monitor, one network, one power; under 2 minutes physical install

What a review-ready event record needs

The reason most DIY NVR setups never make the jump from playback to list-based review is that the event record they write is incomplete. Drop any one of these fields and the review experience either becomes unreliable or becomes brittle.

Fields you must materialise at index time, not query time

  • tile.label: the human-readable camera name (OCR'd once at install from the DVR's cam_name_strip). Stable across channel swaps.
  • tile.index and tile.coords: row-major position and pixel rectangle inside the composite frame, for clean per-tile crop.
  • property: site identifier. Required for portfolio-wide review across mixed-brand recorders.
  • layout_id: recorder layout template (4x4-std, 3x3-std, 2x2-std, custom). Drives which overlay mask template applies.
  • overlay_mask: DVR chrome regions blanked before inference (clock, cam_name_strip, channel_bug). Without it the classifier scores boxes on the colon glyph of the clock and the index fills with noise.
  • event_class: semantic label at index time (person_in_zone, loiter, vehicle_dwell, tamper, package_tamper). This is what makes filter-by-what-happened possible.
  • iso8601_ts: recorder clock time. Ties the index back to the raw DVR recording so a thumbnail click opens the underlying clip.
  • latency_ms: capture-to-delivery time exposed per record so regressions are visible at query time, not buried in a monitoring dashboard nobody opens.
  • thumbnail: 480x270 JPEG produced at index time so the reverse-chronological strip renders instantly without re-decoding the DVR blob.
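A minimal sketch of how that requirement could be enforced at write time, assuming the field list above; `is_review_ready` is a hypothetical check, not part of any shipping schema.

```python
# Validate a record against the field list above at index time, so an
# incomplete row fails when it is written rather than when it is queried.
REQUIRED = {"event_class", "iso8601_ts", "property", "layout_id",
            "overlay_mask", "latency_ms", "thumbnail", "tile"}
TILE_REQUIRED = {"label", "index", "coords"}

def is_review_ready(record: dict) -> bool:
    return (REQUIRED.issubset(record)
            and isinstance(record.get("tile"), dict)
            and TILE_REQUIRED.issubset(record["tile"]))

# A record missing nearly everything is rejected up front:
assert not is_review_ready({"event_class": "loiter"})
```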

What you lose by moving to list-based review

Not everything. The DVR still records every frame under whatever retention policy was in place. Nothing on the recorder changes. No existing footage is thrown away. The only thing you lose is the habit of opening the playback client every time someone asks what happened overnight.

The one real tradeoff is that the event list covers what the classifier knows about (person_in_zone, loiter, vehicle_dwell, tamper, package_tamper). About 99 percent of post-incident review questions a property manager actually gets land inside that set. For the 1 percent case, a subpoena asking for a specific two-hour window with no events, or the five minutes leading up to a flagged event, the full DVR recording is still there. The list is the common-case review tool. The recorder is the long-tail archive.

Stop scrubbing. See your overnight as a list, not a tape.

15-minute call. We plug Cyrano into a running DVR on a live call and walk through an event-list review on the same footage you would have scrubbed.

Book a call

Frequently asked questions

Why does every review guide tell me to fast-forward at 4x?

Because the DVR hands them no other option. A consumer DVR stores a raw H.264 or H.265 blob per channel, addressable by timestamp and nothing else. With no index, the only way to find an event is to play the blob back faster than real time and watch it with human eyes. The 4x recommendation is not a review strategy, it is an admission that no real review tool exists on the recorder. The review workflow should not be playback at all. It should be a filter over a list of events that were already classified at the moment they happened.

What is a reverse-chronological event list and why is it faster?

It is a table where each row is one classified incident with a timestamp, a camera label, an event class, and a thumbnail. Most recent event on top. To review an overnight shift you scroll the list from 6 a.m. down to 10 p.m. the night before, glancing at each thumbnail for half a second. A quiet property produces five to twenty rows. A busy one produces two hundred. Either way, review time is O(events), not O(hours). Compare that to scrubbing eight hours of footage across sixteen cameras at 4x: one hundred twenty-eight camera-hours of real-time video condensed to thirty-two viewing-hours. One shift review. Four work days of attention.

How does Cyrano know what to put in the list without replacing my cameras?

Cyrano taps the HDMI output of your existing DVR. Every DVR drives a local monitor via HDMI, displaying all the camera feeds composited into a multiview grid (2x2, 3x3, 4x4, and so on). Cyrano splits that signal with an HDMI tap, feeds it into a small edge box, and runs a person and vehicle classifier on each tile of the composite in parallel. Up to 25 tiles per HDMI input at 30 frames per second, 38 millisecond inference latency per frame. The classifier writes a row every time something crosses a threshold. No camera replacement. No RTSP credentials. No cloud upload of video. One HDMI cable in, one HDMI cable out to your monitor, one network cable for the review dashboard.

Why do the review filter chips say Mailroom Interior and Loading Dock NE instead of Channel 4 and Channel 7?

Because Cyrano reads them off the DVR's own pixels. Every DVR paints the camera name onto each tile of the multiview as a small text strip. That strip is called the cam_name_strip in the Cyrano layout schema. At install time, once Cyrano has locked onto the multiview layout, it runs OCR across each tile's name strip and populates the tile.label field of the event record. The review dashboard's camera filter chips are rendered straight from those labels. Property managers already know those names because that is what they see on the monitor. Reviewing by name survives channel swaps, cable re-runs, and multiview layout changes. Reviewing by channel number does not.

How long after an incident before I can review it?

Median capture-to-reviewable latency is 7 to 8 seconds on a 25-tile multiview, with a 5 to 15 second envelope. That is end-to-end: HDMI frame grab, per-tile crop, overlay mask, classifier inference, event-row write, thumbnail render, push to dashboard. If someone steps into the mailroom alcove at 2:04:12 a.m., the row is sitting on top of the review feed at 2:04:20 a.m. This matters for overnight shift review, because by the time the morning shift takes over, last night is already fully reviewable. Nothing is batch-indexed at end of day.

Doesn't this miss events the classifier doesn't know about?

Sometimes, which is why the DVR's own recording stays untouched as a fallback. Cyrano classifies person_in_zone, loiter, vehicle_dwell, tamper, and package_tamper today. About 99 percent of post-incident review questions a property manager actually asks are answered from the event list. For the 1 percent case (a subpoena asking for a specific two-hour window with no events, or the 5 minutes leading up to a flagged event) the full DVR recording is still there to pull from. Every event thumbnail links to the underlying clip on the DVR so you can open the full context without re-scrubbing.

What does a reviewer actually do when they open the dashboard in the morning?

Set the time range to the overnight window (say 10 p.m. to 6 a.m.), leave the camera filter empty to see every tile, and let the event-class filter show person_in_zone, loiter, and tamper. The thumbnail strip loads. A quiet night is five to fifteen thumbnails. Triage: glance at each thumbnail, click the one or two that look interesting, the full 30-second clip opens inline, reviewer decides whether to escalate. Typical shift review is 10 to 20 minutes of attention on a 25-camera property, replacing what was 3 to 4 hours of scrubbing.

Why not just record only on motion and review those clips?

Motion-triggered recording is the first thing every installer tries and it breaks in two predictable ways. First, it creates recording gaps, because motion detection at the camera level misses the start of slow events and drops segments in low light. Second, on a busy outdoor camera it triggers almost continuously, producing hours of flagged material that still needs to be reviewed. Motion is not a review tool, it is a storage tool. Review needs classification (person, vehicle, loiter, tamper) plus a timeline view across tiles. Motion gives you neither.

What if my DVR has its own smart search or playback app?

Most do. Hikvision's iVMS, Dahua's SmartPSS, Reolink's client, and the long tail of vendor apps all offer a motion-segment view, a playback timeline, and sometimes a weak per-camera object filter. Review of a single camera on a single DVR is serviceable in those apps. The moment you need to review two cameras at once, or ten, or a portfolio spanning four recorders from two vendors, each app becomes its own silo and cross-camera review becomes a tab-switching exercise. Moving review up into an event index layer sitting above the recorder makes the recorder brand irrelevant to the review experience.

What is the single thing a Redditor should do this week if they have footage to review right now?

Stop scrubbing. Write down the four things that actually matter for the event you care about: the tile name on the monitor (Mailroom, Lobby, Loading Dock), the event class (person, vehicle, loiter, tamper), the approximate time window, and the property. If your DVR does not let you filter on those four fields, you are not reviewing, you are watching tape. The answer is not faster scrubbing. The answer is a review layer on top of the recorder that builds the event list while the DVR keeps recording.

🛡️CyranoEdge AI Security for Apartments
© 2026 Cyrano. All rights reserved.
