Cyrano Security
14 min read
Security camera cloud, reframed

You do not have to upload your video to get the security camera cloud features.

Every guide on this keyword compares Ring Protect against Nest Aware against Arlo Secure against Eufy against Reolink Cloud, all the way down to retention windows and monthly tiers. None of them say the quiet part: if you already have 16 to 25 cameras on a DVR, continuous cloud upload is not a plan, it is a bandwidth suicide pact. This page is the other path.

Run the AI on the DVR's HDMI output. Keep every pixel of video on the recorder. Ship only the filtered events and their thumbnails upstream. You still get cloud search, cloud alerts, portfolio dashboards, and ownership reporting. You just get them from 0.058% of the data.

4.9 from 50+ properties
HDMI-tap into the existing DVR or NVR, nothing else changes
412,803 candidate detections filter down to 241 emitted alerts
Under 2 Mbps average upstream vs 50 to 80 Mbps for pure cloud cameras
Install under 2 minutes on the recorder already in the office closet

What the top security-camera-cloud guides all skip

Scan the first page of results. Every guide is a feature matrix between Ring Protect, Nest Aware, Arlo Secure, Eufy Cloud, Reolink Cloud, and sometimes Verkada Command. Every guide assumes you want 24/7 continuous video on a vendor server, and then debates how long that server keeps it.

That framing leaves out the single biggest operational fact about commercial deployments. A Class B/C apartment property typically has 16 to 25 analog or HD-over-coax cameras feeding a DVR in the office closet, the DVR pipes an HDMI multiview to a wall monitor, and the property's commercial internet gives you 20 to 30 Mbps of upstream on the whole building. Pushing all that video to the cloud needs 50 to 80 Mbps sustained. The math does not work, and the cloud-camera guides never explicitly tell you that.

The reframe is simple. The feature set people actually want from a security camera cloud (natural-language search, mobile alerts, portfolio-wide incident trends, ownership reports, insurance-grade audit logs) does not require continuous video upload. It requires a structured stream of classified events and their thumbnails. That is a tiny fraction of the video bandwidth. This page is how you get there.

What leaves the property, what stays

The shape of the cloud changes. On a continuous-upload cloud camera, everything on the left ends up in the vendor's bucket. On an HDMI edge-tap, only the filtered event stream and its thumbnails cross the link. The rest stays on the recorder that was already there.

HDMI edge-tap pipeline: what crosses the link

25-tile multiview → Overlay pixels (masked out) → Per-tile motion → Candidate detections → On-device edge AI → Event JSON (9 fields) + 480x270 thumbnail → Portfolio dashboard + Mobile alert

The anchor: 0.058% of detections reach the cloud

This is the single most load-bearing number on the page. Every other claim (bandwidth, privacy posture, monthly cost, ROI) follows from it. These are counts from one 24-hour audit window at a single 16-tile Class C property in Fort Worth, Texas on a Hikvision DS-7716NI-K4 recorder with a 4x4-std multiview layout.

Candidate detections: 412,803 (generated on-device in 24h)
Dropped on device: 99.94% (never hit the network)
Alerts synced upstream: 241 (event JSON + thumbnail)
Video bytes uploaded: 0 (all clips stay on recorder)
Anchor fact

241 of 412,803 detections became cloud objects. 0 seconds of video did.

The cloud object for each alert is a 9-field event JSON plus a 480x270 JPEG crop of just the triggering tile. Average event payload is well under 50 KB. Mean upstream load for the whole property is under 2 Mbps, even during a busy delivery window. A true cloud camera setup on the same 16 feeds would need 50 to 80 Mbps sustained upload for the video alone, before you add the event stream.

The event that reaches the cloud

This is the entire cloud-side footprint of one alert. Nine fields of event JSON and a pointer to the tile-cropped thumbnail. No clip, no continuous buffer, no pre-roll, no post-roll. The recorder in the office closet still has the full video on its native retention clock. This is just enough structure to drive search, dashboards, and mobile alerts without carrying the video upstream.

cyrano.event.json
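The payload below is a reconstruction from the nine fields enumerated in the schema answer at the bottom of this page. Every value is illustrative, the `property` site ID is made up, and `thumbnail_key` is a stand-in name for the S3 pointer the event carries.

```json
{
  "iso8601_ts": "2026-01-14T02:13:41-06:00",
  "property": "ftw-class-c-01",
  "tile.label": "Loading Dock NE",
  "tile.index": 6,
  "tile.coords": { "x": 960, "y": 270, "w": 480, "h": 270 },
  "layout_id": "4x4-std",
  "overlay_mask": ["clock", "cam_name_strip", "channel_bug"],
  "event_class": "person_in_zone",
  "latency_ms": 184,
  "thumbnail_key": "events/ftw-class-c-01/20260114-021341-t06.jpg"
}
```

On a 1080p composite in a 4x4 layout, each tile is exactly 480x270 pixels, which is why the thumbnail crop and the `tile.coords` box share those dimensions.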

What the on-device suppression funnel looks like

Every detection the device drops gets logged with a reason code, locally. You can grep it after the fact to see what would have uploaded if you were running a naive cloud camera. This is one 24-hour audit on the same Fort Worth property.

suppression_audit.log
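A few illustrative lines, assuming one record per dropped candidate with a stage and a reason code. The exact column layout of the real log may differ; `overlay_masked` and `zone_miss` are the codes this page references in the tuning step, while `persist_fail` and `dedup_window` are hypothetical names for the persistence and dedup stages.

```
2026-01-13T14:02:11-06:00 tile.index=06 tile.label="Loading Dock NE" stage=overlay_mask reason=overlay_masked
2026-01-13T14:02:12-06:00 tile.index=11 tile.label="Pool Gate"      stage=zones        reason=zone_miss
2026-01-13T14:02:12-06:00 tile.index=03 tile.label="Mail Kiosk"     stage=persistence  reason=persist_fail
2026-01-13T14:02:13-06:00 tile.index=11 tile.label="Pool Gate"      stage=dedup        reason=dedup_window
```

Tallying a day's drops by reason is then a one-liner, e.g. `grep -c 'reason=zone_miss' suppression_audit.log`.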

Pure cloud cameras vs HDMI edge-tap, same 16 feeds

Everything on the left column assumes the big-name cloud camera subscription. Everything on the right is the HDMI-tap pipeline. Same 16 cameras, same property, same expected cloud features.

What the cloud actually holds at a 16-camera property

Every camera uploads continuously. The cloud holds 24/7 of every feed in a vendor bucket, with a retention window measured in days to months depending on plan. Bandwidth scales linearly with camera count. Privacy posture: all raw footage on vendor infrastructure.

  • 50 to 80 Mbps sustained upload for 16 feeds at 1080p
  • Cloud plan scales per-camera per-month
  • Every frame of tenant life is on a vendor server
  • Outage = no detection, no recording, no recovery

What the bandwidth math actually looks like

Pure cloud, 16 feeds @ 1080p: 50 to 80 Mbps
HDMI edge-tap, peak upstream: under 2 Mbps
Max feeds per Cyrano unit: 25
Alerts synced in 24h: 241

The 50 to 80 Mbps figure is why commercial properties cannot use consumer cloud cameras at scale. Most Class B/C buildings have 20 to 30 Mbps of total upstream, shared across residents and staff. You cannot saturate it with camera video without breaking everything else on the network. The sub-2 Mbps figure is all you actually need when only events leave.
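The arithmetic behind both figures is short enough to write down. A minimal sketch, assuming 3 to 5 Mbps per 1080p H.264 feed (an assumption; it is roughly what the 50 to 80 Mbps range implies) and the measured 241 events at up to 50 KB each:

```typescript
// Back-of-envelope upstream math for the same 16 feeds.
// The 3-5 Mbps per-feed bitrate is an assumption of this sketch.
const feeds = 16;

const continuousMbps = {
  low: feeds * 3,  // 48 Mbps sustained
  high: feeds * 5, // 80 Mbps sustained
};

// Event-only side: 241 events per 24h, each capped at ~50 KB
// (9-field JSON plus a 480x270 JPEG thumbnail).
const eventsPerDay = 241;
const bytesPerEvent = 50 * 1024;
const secondsPerDay = 86_400;

const eventStreamKbps =
  (eventsPerDay * bytesPerEvent * 8) / secondsPerDay / 1000;

console.log(continuousMbps);             // { low: 48, high: 80 }
console.log(eventStreamKbps.toFixed(2)); // "1.14"
```

The event stream itself averages around a kilobit per second; the sub-2 Mbps figure quoted on this page leaves headroom for on-demand live view pulls on top of it.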

Cloud features you still get

Natural-language search across every camera and every property

Search resolves to event_class, tile.label, and iso8601_ts windows against the cloud event stream. Matching events return with thumbnails and a deep link to the exact timestamp on the recorder. The video for that clip streams on demand only when you click play.

Real-time mobile alerts with context

The 241 alerts that made it through the filter land as push notifications and (optionally) SMS or phone calls, each with the 480x270 thumbnail and a one-tap recorder deep link. No alert fatigue because the four-stage filter already killed the 412,562 noisy candidates.

Portfolio incident dashboard

All properties feed one dashboard. Per-property trend lines, per-tile.label hotspots, time-of-day density. Runs off the event stream. Zero video transport.

Threat classification on the event

Each event is tagged LOW or HIGH threat on the device before sync so the operator inbox does not treat a delivery person at noon the same as a masked figure at 2am.

Ownership and insurance reporting

Exportable monthly reports of incidents by class, property, and severity. Insurance carriers read the event log, not the raw video.

How natural-language search runs without uploading the video

This is the part most readers trip on. If the video is on the recorder and only events are in the cloud, how does a query like "masked man near the gate last Tuesday night" return anything? It runs against the structured event stream, not the pixels. The cloud knows which tiles fired loiter events in that time window on a camera whose tile.label contains "gate." The thumbnails come back. The video for the specific clip is pulled from the recorder on demand when you tap it.

search_without_upload.ts
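A sketch of the resolution step described above. The natural-language parsing itself is out of scope here; what this shows is the structured filter it produces (event classes, a tile.label substring, an iso8601_ts window) running against the cloud event stream. All type names and sample data are illustrative, not the production schema.

```typescript
// Search runs against event metadata only; video is untouched until
// the user presses play on a specific hit.
type EventClass = "person_in_zone" | "vehicle_dwell" | "loiter" | "tamper";

interface CloudEvent {
  iso8601_ts: string;
  property: string;
  tileLabel: string;    // the event's tile.label field
  event_class: EventClass;
  thumbnailKey: string; // pointer to the 480x270 tile crop
}

interface ResolvedQuery {
  classes: EventClass[];
  labelContains: string;
  from: Date;
  to: Date;
}

function search(events: CloudEvent[], q: ResolvedQuery): CloudEvent[] {
  return events.filter((e) => {
    const ts = new Date(e.iso8601_ts);
    return (
      q.classes.includes(e.event_class) &&
      e.tileLabel.toLowerCase().includes(q.labelContains.toLowerCase()) &&
      ts >= q.from &&
      ts <= q.to
    );
  });
}

// Illustrative slice of an event stream:
const eventStream: CloudEvent[] = [
  { iso8601_ts: "2026-01-13T23:41:08-06:00", property: "ftw-class-c-01",
    tileLabel: "Gate SW", event_class: "loiter",
    thumbnailKey: "events/gate-sw-2341.jpg" },
  { iso8601_ts: "2026-01-13T12:05:30-06:00", property: "ftw-class-c-01",
    tileLabel: "Gate SW", event_class: "vehicle_dwell",
    thumbnailKey: "events/gate-sw-1205.jpg" },
];

// "masked man near the gate last Tuesday night" resolves to roughly:
const hits = search(eventStream, {
  classes: ["person_in_zone", "loiter"],
  labelContains: "gate",
  from: new Date("2026-01-13T20:00:00-06:00"),
  to: new Date("2026-01-14T06:00:00-06:00"),
});
// Each hit carries a thumbnail and a recorder deep link; the clip only
// streams from the on-site recorder when the user presses play.
```

The nighttime loiter event matches; the midday vehicle event falls outside both the class list and the time window.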

HDMI edge-tap rollout, end to end

1. Unplug the HDMI cable from the guard monitor

This is usually a single cable from the DVR in the office closet going to a wall-mounted monitor in the lobby or back office. Leave the monitor powered on. The Cyrano unit will pass the signal back through.

2. Plug that cable into the Cyrano unit, then pass through to the monitor

Cyrano has HDMI-in and HDMI-out. The multiview still reaches the guard monitor unchanged. Inference runs on the captured frames on-device. The guard does not notice anything changed on the monitor.

3. Connect Cyrano to the property network

Ethernet preferred, Wi-Fi supported. The unit starts auto-detecting the recorder model, layout_id, and overlay template from the multiview itself at boot. No DVR credentials needed, no RTSP, no ONVIF, no camera-side config.

4. Connect the property to the portfolio dashboard

Claim the device from the dashboard. tile.label strings get pulled from the camera-name strip on each tile. Zones default to full-tile and can be narrowed later. The event stream starts flowing upstream immediately.

5. Tune zones in weeks 2 through 4

Look at the suppression_audit log. Where is overlay_masked firing constantly? Recalibrate that tile's mask. Where is zone_miss too permissive? Redraw the zone. Zones are keyed on tile.label, so recorder layout changes do not break them.

Feature-by-feature against the cloud camera playbook

Feature | Consumer cloud camera (continuous upload) | HDMI edge-tap (event-only cloud)
What reaches the cloud | Every frame of every camera, continuously | 241 events + thumbnails per 24h (0.058% of candidates)
Upstream bandwidth load | 50 to 80 Mbps sustained, 16 feeds at 1080p | Under 2 Mbps average, up to 25 feeds per unit
Monthly cost at 16 cameras | $160 to $400 per-camera-plan aggregate | $200 flat per property, any feed count up to 25
Cameras required | Full replacement with vendor-branded cameras | None, works off existing DVR/NVR HDMI output
Install time | Weeks to months per property | Under 2 minutes per property
Internet outage behavior | No detection, no record, no recovery | Events queue locally, backfill on reconnect
Privacy posture | All raw footage on vendor servers | Raw footage never leaves the property
Natural language search | Runs against cloud-stored video | Runs against cloud-stored event stream
Portfolio-wide dashboard | Per-site app, limited aggregation | One dashboard across entire portfolio
Legacy recorder support | Throws it out and starts over | Hikvision, Dahua, Lorex, Amcrest, Reolink, Uniview, Bosch...

Recorders this already works on

The HDMI-tap only helps if the device knows where your specific recorder stamps its overlays. Cyrano ships overlay templates for the models commonly found in Class B/C multifamily DVR closets.

Hikvision DS-7xxx, Dahua XVR / NVR, Lorex, Amcrest, Reolink NVR, Uniview, Swann, Night Owl, Q-See, ANNKE, EZVIZ, Bosch DIVAR, Honeywell Performance, Panasonic WJ-NX

At one Class C multifamily property in Fort Worth, Cyrano caught 20 incidents including a break-in attempt in the first month. Customer renewed after 30 days.

Fort Worth, TX property deployment

See the event-only cloud sync on your own DVR

15-minute walkthrough. We plug Cyrano into your HDMI output, show you the suppression audit live, and open the cloud dashboard for your property with the event stream flowing.

Book a call

Frequently asked questions

What does 'security camera cloud' usually mean, and why does this page reframe it?

In the consumer market, 'security camera cloud' means a subscription (Ring Protect, Nest Aware, Arlo Secure, Eufy Cloud, Reolink Cloud) where your cameras continuously upload video to a vendor bucket and you get a retention window, clip sharing, and an app feed. That model assumes a handful of cameras in a house with plenty of upstream bandwidth. The reframe on this page is: when you have 16 to 25 cameras on a commercial DVR, continuous upload is not bandwidth-feasible, but you can still get the cloud features (AI search, remote alerts, portfolio dashboard, ops reporting) by running edge AI on the DVR's HDMI output and syncing only classified events plus their thumbnails upstream.

How much data actually leaves the property in an event-only cloud sync?

On the audited 16-tile multiview, the detector sees about 412,803 candidate detections per 24 hours. The four-stage filter pipeline (overlay_mask, tile-grid zones, multi-frame persistence, tile.label dedup) drops all but about 241 of those. What leaves the property is 241 event JSON blobs (nine fields each) and 241 thumbnails cropped to 480x270 pixels of just the triggering tile. Mean upstream load is under 2 Mbps. Compare that to a true cloud camera setup pushing 16 1080p feeds, which needs 50 to 80 Mbps of sustained upload that most commercial buildings do not have.

What exactly gets stored in the cloud versus what stays on the recorder?

The recorder keeps all video for its native retention window, usually 14 to 90 days depending on disk size. Cyrano does not touch that. The cloud holds the structured event stream: iso8601_ts, property, tile.label, tile.index, tile.coords, layout_id, overlay_mask, event_class, latency_ms, and one 480x270 thumbnail. That is enough to search events in natural language, chart incident trends across a portfolio, export ownership reports, and jump to the exact timestamp on the recorder when you need the full clip. The clip itself never leaves the building.

Why does this matter for tenant privacy and compliance?

State tenant privacy laws and property insurance riders increasingly require that continuous camera footage not leave the property without cause. A pure cloud camera setup violates that by default because every minute of video is on a vendor server 24/7. An HDMI-tap event-only sync inverts the default: the raw footage stays on the recorder in the office closet, and only discrete events (with their own retention clock and audit trail) leave. That is the posture most multifamily owners and insurance carriers are already assuming when they sign off on a camera deployment.

Do I lose remote access to live video if my cloud only has events?

No. Cyrano passes the HDMI signal through to the wall monitor untouched and exposes a remote live view to the dashboard, but live view is pulled on demand, not pushed continuously. When you open live view, the unit starts a short-duration WebRTC or similar pull for as long as you watch, then stops. That is a fundamentally different bandwidth profile from 24/7 upload. The rest of the time, the property's upstream link is idle except for the event stream.

What happens when the internet at the property goes down?

Detection keeps running. Events keep getting generated, filtered, and written to a local queue on the Cyrano device. Thumbnails are generated locally. When the link comes back up, the queue drains to the cloud in timestamp order and the portfolio dashboard backfills. A pure cloud camera setup gives you zero detections during an outage because the inference is running on someone else's server that cannot see your video. The HDMI edge-tap path is offline-durable by default.

How does natural language footage search work if the video never uploads?

Search runs against the structured event stream in the cloud, not against the video itself. When you type 'masked man near gate last Tuesday night,' the backend resolves that to event_class in {person_in_zone, loiter}, a tile.label matching a gate camera, an iso8601_ts range, and a thumbnail appearance check. The matching events come back as a list with thumbnails, each with a deep link to the exact timestamp on the recorder. You click the link, the on-site recorder cues up that clip, and it streams to your browser only at the moment you press play. This is why search works without continuous upload.

What does installing this look like compared to a cloud camera rollout?

A cloud camera rollout replaces every camera in the property, runs new cable if the existing camera count or placement is wrong, installs a new NVR, and starts a subscription per camera. That is $50,000 to $100,000+ per property and months of work. The HDMI edge-tap rollout plugs one Cyrano device into the HDMI output of the DVR you already have, passes the signal through to the guard monitor, and connects the unit to the property's network. Install is under 2 minutes per property. Hardware is $450 one-time per property. Software is $200 per month. Nothing on the camera side changes.

Which DVR and NVR brands does the HDMI edge-tap actually work on?

The overlay_mask template library covers the recorder brands commonly found in Class B/C multifamily DVR closets: Hikvision DS-7xxx, Dahua XVR and NVR, Lorex, Amcrest, Reolink NVR, Uniview, Swann, Night Owl, Q-See, ANNKE, EZVIZ, Bosch DIVAR, Honeywell Performance, and Panasonic WJ-NX. Each template knows where that recorder stamps the clock, the camera-name strip, and the channel bug so inference does not fire on overlay pixels. A custom layout outside the library gets a one-time mask calibration and then behaves the same as any supported recorder.

How is this different from a VMS with cloud relay like Eagle Eye or OpenEye?

Cloud-relay VMS platforms still push continuous video from the recorder to the cloud, just with deduplication and adaptive bitrate smoothing in between. The property still pays the upstream bandwidth cost, and the raw footage still lives on a vendor server. The HDMI edge-tap is strictly event-only upstream. There is no relay tier running continuous video transport. The only thing the cloud sees is the filtered event stream and its thumbnails. You lose the ability to scrub video through the VMS web app, but you gain bandwidth sanity, lower monthly cost, and a tenant-privacy posture most owners already want.

What does the ROI look like against a cloud camera subscription?

Back-of-envelope: a typical multi-camera cloud plan with 60-day retention runs $10 to $25 per camera per month, so a 16-camera property is $160 to $400 per month in subscription alone, plus the one-time cost of replacing all the cameras. The HDMI edge-tap is $450 one-time for the device plus $200 per month total for the property, regardless of camera count. On a 50-property portfolio that is $10,000 per month vs anywhere from $8,000 to $20,000 per month on cloud cameras, with zero capex to rip-and-replace. The ROI line usually crosses before month 3.
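That back-of-envelope reduces to a few lines of arithmetic. A sketch using the page's own figures (the $10 to $25 per-camera plan price is this page's estimate, not a vendor quote):

```typescript
// Portfolio-level ROI arithmetic from the figures above.
const properties = 50;
const camerasPerProperty = 16;

// Cloud-camera subscription: $10-$25 per camera per month.
const cloudLow = properties * camerasPerProperty * 10;  // $8,000 / month
const cloudHigh = properties * camerasPerProperty * 25; // $20,000 / month

// HDMI edge-tap: $200/month flat per property, $450 one-time hardware.
const edgeTapMonthly = properties * 200; // $10,000 / month
const edgeTapCapex = properties * 450;   // $22,500 one-time

// At the high end of cloud pricing, the monthly saving pays back the
// hardware quickly:
const monthlySaving = cloudHigh - edgeTapMonthly;     // $10,000 / month
const breakEvenMonths = edgeTapCapex / monthlySaving; // 2.25 months
```

The 2.25-month payback at the high end of cloud pricing is what the "crosses before month 3" line refers to; at the low end, the monthly totals are close and the saving comes from avoiding the rip-and-replace capex instead.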

What is the actual event JSON schema that gets synced upstream?

Nine fields plus a thumbnail: tile.label (camera name the DVR stamps on the strip, e.g. 'Loading Dock NE'), tile.index (row-major position in the grid), tile.coords (x,y,w,h in composite-frame pixels), property (site ID), layout_id (recorder layout, e.g. '4x4-std'), overlay_mask (array of overlay classes blanked, e.g. ['clock','cam_name_strip','channel_bug']), event_class (person_in_zone / vehicle_dwell / loiter / tamper), iso8601_ts (recorder clock timestamp), latency_ms (capture-to-delivery time). The thumbnail is a 480x270 JPEG crop of just the triggering tile, stored at an S3 key the event references.

🛡️ Cyrano • Edge AI Security for Apartments
© 2026 Cyrano. All rights reserved.
