Matthew Diakonov
13 min read
For property teams running more than one site

Multi-site camera monitoring is an architecture choice, not a dashboard choice.

There are three ways to watch cameras across more than one property. Cloud-centralized streams every feed up to a central server. Remote NOC services put a human in front of a wall of monitors. Edge-distributed runs inference at the site and ships only events. Most pages on this topic talk about which dashboard you sit in front of, which is the part that matters least. The part that decides whether your bill goes up or down per camera as you add the eleventh, the twentieth, the fiftieth site is where the inference physically runs and where the bandwidth physically goes. That is the part this guide is about.

If you are running 5 to 50 properties at a property-management price point and you have any mix of legacy DVR or NVR systems already installed, the answer is almost always edge-distributed. The reasoning, the numbers, and the failure modes are below.

Direct answer (verified 2026-05-02)

Multi-site camera monitoring breaks down into three architectural patterns. Cloud-centralized VMS (Verkada, Rhombus, Eagle Eye Networks) streams every feed to a central server and runs $10 to $25 per camera per month, so per-camera cost grows with the camera count. Remote NOC services watch feeds with human operators at $50 to $150 per camera per month, so per-site cost grows with the operator headcount and response time degrades past a few hundred sites. Edge-distributed runs inference at each site (the model Cyrano ships) at a flat $200 per site per month for up to 25 cameras, so only structured events leave the building and the per-camera cost curve flattens. For 5-to-50 site portfolios with mixed legacy DVR and NVR hardware already installed, the edge-distributed pattern is the only one whose unit economics do not get worse as you scale.

Key takeaways

1. A 30-property portfolio with 16 cameras per site costs roughly $4,800 to $12,000 per month on cloud VMS, $24,000 to $72,000 per month on a remote NOC service, and a flat $6,000 per month on an edge-distributed stack like Cyrano (after a one-time $13,500 in hardware across the portfolio).
2. Cloud VMS upstream bandwidth is roughly 32 Mbps sustained per 16-camera site; edge-distributed is roughly 3 MB per day per site. The contrast is about five orders of magnitude.
3. Cloud VMS pricing assumes camera replacement (Verkada quotes $1,000 to $4,000 per camera in hardware). Edge-distributed works with the existing DVR or NVR via HDMI, so the cameras already on the property stay.
4. During a WAN outage, cloud VMS goes dark; remote NOC cannot stream the feeds; edge-distributed keeps detecting and recording locally and replays alerts when the link returns.
5. The architectural decision is where the inference physically runs. The dashboard layer that everyone benchmarks is a thin shell on top of that decision.

The three architectures, side by side

The columns of the table below are not vendors; they are places the inference can physically live. Every product in this market, including Cyrano, fits into one of the three architectures above. The differentiator that matters is which column a product falls into, not how it scores on any individual row.

Feature | Cloud-centralized VMS | Edge-distributed
Where inference runs | On rented servers in a cloud region | On a small box at each property
Upstream bandwidth per 16-camera site | On the order of 32 Mbps sustained (every feed streamed up) | On the order of kilobytes per day (events + occasional clip)
Per-camera cost curve as portfolio grows | Linear; every camera adds a per-month subscription | Flat after the per-site box ($450 one-time)
Behavior during a WAN outage at a site | Site goes dark for the duration | Detection and recording continue locally; alerts queue and replay
Works with existing mixed-vintage cameras | Requires camera replacement to match the vendor's hardware | Yes; plugs into the existing DVR or NVR via HDMI
Where alerts route | To the central dashboard, then to whoever happens to be watching | Direct to the on-call person for that specific site
Per-site install time | Days to weeks (camera replacement, rewiring, NVR swap) | Under 30 minutes (one HDMI cable, one ethernet cable, configure zones)

What actually moves across the wire at each site

The single biggest difference between the three patterns is what crosses the WAN at each property. The diagram below is the flow on a Cyrano-style edge-distributed deployment. Read it left to right: cameras feed the existing DVR or NVR, the Cyrano box reads the DVR's HDMI output, runs inference locally, emits structured events, and the only thing that leaves the building is a tiny JSON payload (and a 10-second clip when an event clears the threat threshold). Compare that with a cloud VMS, where the equivalent arrow at the cameras-to-cloud boundary is a continuous video stream of every feed.

Edge-distributed: what crosses the WAN at one site

Cameras → DVR/NVR → Cyrano box (on-site) → Portfolio dashboard

• Cameras to DVR/NVR: RTSP video feeds (LAN-only)
• DVR/NVR to Cyrano box: HDMI output, all 25 feeds tiled
• On the Cyrano box: local inference (detect, track, classify)
• Cyrano box to portfolio dashboard: structured event (~800 bytes JSON); 10-second clip (only on HIGH THREAT)
• At the dashboard: join with on-call schedule, route alert

The two arrows leaving the box (the JSON event line and the occasional clip) are the only things that consume upstream bandwidth. Everything inside the property (the RTSP streams from the cameras, the HDMI feed off the DVR, the inference itself) never crosses the WAN. With a cloud VMS, by contrast, the first arrow (cameras to anywhere outside the building) is the entire problem.
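To make the size class concrete, here is a minimal sketch of what one of those structured events might look like. The field names and values are illustrative assumptions, not Cyrano's actual schema; the point is that the whole payload fits in well under a kilobyte.

```python
# Illustrative only: field names and values are assumptions, not Cyrano's
# actual event schema. The point is the size class of what crosses the WAN.
import json

event = {
    "site_id": "site-012",
    "camera": "cam-07",
    "zone": "north-parking",
    "timestamp": "2026-05-02T02:14:31-05:00",
    "event_type": "person_loitering",
    "threat_level": "HIGH",
    "dwell_seconds": 94,
    "track_id": "a3f9c2",
    "clip_ref": "local://clips/2026-05-02/a3f9c2.mp4",  # fetched on demand, not pushed
}

payload = json.dumps(event).encode("utf-8")
print(len(payload), "bytes")  # a few hundred bytes, comfortably under the ~800-byte figure above
```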

The bandwidth math at portfolio scale

Numbers below are first-principles, drawn from real Cyrano deployments at Class B/C multifamily properties. They are order-of-magnitude, not penny-precise. The point is the shape of the curve.

• Cloud-centralized: ~32 Mbps sustained upstream per 16-camera site at a moderate H.264 bitrate. Multiply by site count.
• Remote NOC: ~8 Mbps per site when the operator pulls a live view. Lower at rest, but the dispatch model assumes pull-on-demand.
• Edge-distributed: ~3 MB/day per 16-camera site in normal operation. Events plus occasional clips. Orders of magnitude below the others.

The reason this matters at portfolio scale is not the bandwidth cost itself, which is sometimes hidden in a vendor's flat per-camera-per-month price. It is the operational consequence of the underlying architecture. Cloud VMS pricing has to cover the ingest, the egress, and the storage at the cloud side, every month, forever, for every camera. Edge-distributed pricing has to cover the box (one-time, per-site) and the dashboard layer, which is processing JSON lines, not video. The cost of running the dashboard layer for a 30-property portfolio is small, because the dashboard is doing the work of an event database, not a video CDN.
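If you want to sanity-check the shape of that curve yourself, the arithmetic fits in a few lines. The only assumption beyond the figures above is the ~2 Mbps per-feed bitrate used elsewhere in this guide.

```python
# Back-of-the-envelope bandwidth comparison for one 16-camera site.
# Assumes ~2 Mbps per 1080p H.264 feed at a moderate bitrate.

CAMERAS_PER_SITE = 16
MBPS_PER_FEED = 2.0
SECONDS_PER_DAY = 86_400

# Cloud-centralized: every feed streams upstream continuously.
cloud_mbps = CAMERAS_PER_SITE * MBPS_PER_FEED           # ~32 Mbps sustained
cloud_mb_per_day = cloud_mbps / 8 * SECONDS_PER_DAY     # megabits/s -> MB/day

# Edge-distributed: only events and the occasional clip cross the WAN.
edge_mb_per_day = 3.0                                   # ~3 MB/day per site

print(f"cloud: {cloud_mbps:.0f} Mbps sustained, about {cloud_mb_per_day:,.0f} MB/day")
print(f"edge:  about {edge_mb_per_day:.0f} MB/day")
print(f"ratio: roughly {cloud_mb_per_day / edge_mb_per_day:,.0f}x")  # ~100,000x, about five orders of magnitude
```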

The dollar math at portfolio scale

Bandwidth is an indirect proxy. The number an operator actually cares about is the monthly bill. Below is a side-by-side projection at 10, 30, and 50 sites, 16 cameras per site, using current public pricing ranges from each category. Cloud VMS assumes a midrange $15 per camera per month subscription (the range across Verkada, Rhombus, Eagle Eye, Videoloft is roughly $10 to $25). Remote NOC assumes a midrange $75 per camera per month (the range is roughly $50 to $150 for professional monitoring). Edge-distributed uses Cyrano's published pricing of $450 one-time per site plus $200 per site per month for up to 25 cameras.

Portfolio | Cloud VMS ($15/cam/mo) | Remote NOC ($75/cam/mo) | Edge-distributed ($200/site/mo flat)
10 sites, 160 cameras | $2,400/mo | $12,000/mo | $2,000/mo + $4,500 one-time
30 sites, 480 cameras | $7,200/mo | $36,000/mo | $6,000/mo + $13,500 one-time
50 sites, 800 cameras | $12,000/mo | $60,000/mo | $10,000/mo + $22,500 one-time

Camera-replacement capex is excluded from the cloud VMS row. Add roughly $1,000 to $4,000 per camera if the vendor requires ripping out the existing cameras (Verkada and Rhombus typically do; Eagle Eye and Videoloft typically do not).

The shape of the curve is the headline. Cloud VMS scales linearly with the camera count. Remote NOC scales linearly with the camera count and is roughly five times more expensive per camera than cloud VMS. Edge-distributed scales linearly with the site count (not the camera count) because the limiting factor is the box at each property, which handles up to 25 feeds. At a 16-camera average per site, edge-distributed lands a bit cheaper than cloud VMS at month one, and the gap widens every time a property adds cameras without paying more.
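The projection in the table is simple enough to rerun against your own numbers. The sketch below reproduces it using the midrange price assumptions named above; swap in your own site count, camera average, and quoted prices.

```python
# Reproduces the projection table above. Price points are the midrange
# assumptions from the text, not quotes from any specific vendor.

CLOUD_PER_CAM_MO = 15      # $/camera/month, midrange cloud VMS
NOC_PER_CAM_MO = 75        # $/camera/month, midrange remote NOC
EDGE_PER_SITE_MO = 200     # $/site/month flat, up to 25 cameras
EDGE_BOX_ONE_TIME = 450    # $/site one-time hardware
CAMERAS_PER_SITE = 16

for sites in (10, 30, 50):
    cams = sites * CAMERAS_PER_SITE
    print(f"{sites} sites / {cams} cameras: "
          f"cloud ${cams * CLOUD_PER_CAM_MO:,}/mo, "
          f"NOC ${cams * NOC_PER_CAM_MO:,}/mo, "
          f"edge ${sites * EDGE_PER_SITE_MO:,}/mo + ${sites * EDGE_BOX_ONE_TIME:,} one-time")
```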

One more line item the table does not show: the cost of doing nothing. A single live security guard runs roughly $3,000 per month per site for a 12-hour overnight shift. The portfolio teams we work with use the savings against a guard line (or against a cloud VMS quote that came in at twice the budget) to fund the edge-distributed rollout. The math pencils out on the first site, not the tenth.

Want the same projection run against your actual site count and camera mix?

Walk us through your portfolio →

Where each architecture wins

None of these patterns is universally correct. Pick based on the shape of the portfolio.

1. Cloud-centralized VMS wins when you are doing a full hardware refresh anyway

If a developer is building a brand new Class A complex, has not bought cameras yet, and has a 5-year capex window for security, the cloud VMS pitch is real. Verkada and Rhombus do this well. The cost makes sense when you are not also paying to throw away working cameras you bought two years ago, and when the buyer is enterprise-shaped (an IT department, a multi-year vendor relationship, a budget for $10K-25K of camera replacement per site).

Where it loses: any portfolio where the existing cameras work fine and the math of ripping them out does not pencil out. Class B and C multifamily, almost universally.

2. Remote NOC services win at high-risk sites where human verification matters

Construction sites with active theft, multifamily properties with a known recent incident, anywhere the cost of a wrong dispatch (sending police on a false alarm) exceeds the cost of an operator's hour. The NOC operator is the human in the loop that calls the on-call manager, not the bot. For a small number of high-value sites this is the right shape.

Where it loses: at portfolio scale, where the operator-to-site ratio falls and the model degrades in response time. We routinely see customers stack a small NOC service on top of edge-distributed for the few sites where they want a human, instead of paying a NOC for every site.

3. Edge-distributed wins for 5-to-50 site portfolios with mixed existing hardware

This is the band Cyrano was built for. The portfolio already has DVRs and NVRs. The cameras are a mix of brands and ages. The owner does not want to spend $50K-100K per property on a hardware refresh. The on-call routing is per-property, not central. The events that matter at 2 a.m. need to fire even when the upstream link is down. Every one of those statements is true on a typical multifamily portfolio, and every one of them points away from cloud-centralized and toward edge-distributed.

Where it loses: enterprise-shaped sites with a procurement department that wants a single five-year vendor contract for every camera and dashboard. That is a different sales cycle and a different price band.

What the dashboard layer actually does

On every multi-site monitoring stack, regardless of architecture, there is a central dashboard. The thing that varies is what feeds it. On a cloud VMS, the dashboard is fed by a continuous video pipeline: it shows a wall of live tiles and lets the operator scrub timelines. On a remote NOC, the dashboard is the operator's workstation: it shows the wall of live tiles and the events the operator has flagged. On an edge-distributed stack, the dashboard is fed by an event stream: it shows a chronological feed of what happened, by site, with thumbnails attached and a link to fetch the 10-second clip from that site's local storage on demand.

The portfolio-level views fall out of this naturally. An incident count by site, by category, by week is a SQL query against the event table. An incident-trend report ownership wants for an insurance renewal is the same query, exported to PDF. A site-by-site comparison of false positive rates is the same query grouped by camera and by event type. None of those views require the dashboard to have ever seen the video. They require the dashboard to have seen the events, which is what a Cyrano-style box at each site sends.
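To show how little machinery that takes, here is what the incident-count query could look like. The table and column names are assumptions for illustration (the production store is Postgres; SQLite is used here only so the sketch runs standalone).

```python
# Hypothetical event table and query: incident count by site, by category,
# by week. Column names are illustrative, not Cyrano's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        site_id TEXT, camera TEXT, event_type TEXT,
        threat_level TEXT, occurred_at TEXT
    )
""")
conn.execute("INSERT INTO events VALUES "
             "('site-012', 'cam-07', 'person_loitering', 'HIGH', '2026-04-28T02:14:31')")

rows = conn.execute("""
    SELECT site_id,
           event_type,
           strftime('%Y-%W', occurred_at) AS week,
           COUNT(*)                       AS incidents
    FROM events
    GROUP BY site_id, event_type, week
    ORDER BY week, site_id
""").fetchall()
print(rows)  # one row per (site, category, week)
```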

The practical consequence is that the dashboard is small. It is a Postgres database, an event ingest endpoint, and a UI that reads the database. It is not a video CDN. The cost of running it for a 30-property portfolio is on the order of what a small SaaS company spends on its own Metabase instance. This is the third piece of the cost story, after the bandwidth and the inference compute: the central layer in an edge-distributed architecture is cheap because the heavy lifting already happened on each box.

The deployment pattern in the field

The thing nobody puts in a multi-site monitoring guide is that the rollout is not a single big-bang deployment. The teams that pick edge-distributed do it one site at a time. A typical sequence on a 30-property portfolio: pick three pilot sites that represent the range of camera vintages and risk profiles in the portfolio, install the box at each (under 30 minutes per site), wire the on-call schedule, run for two to four weeks, look at the event log and the alerts that fired, and tune the zone polygons and dwell thresholds at each site to match how the property actually operates. Then scale to the rest.

The reason for the staged rollout is not the technology, it is the per-property tuning. Every site has its own quirks: a delivery driver that arrives at 4 a.m., a maintenance van that parks in the lot every Wednesday, a stretch of fence that gets touched by tree branches in the wind. The alert quality at site twelve is a function of how long that site has been running. The staged rollout lets the early sites educate the configuration of the later ones, and lets the on-call team build a feel for what a real alert looks like before you scale the surface area.
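What that tuning produces is, in practice, a small per-site configuration. The sketch below is an assumption about its shape, not Cyrano's actual config format; it just makes concrete what "zone polygons and dwell thresholds" means as data.

```python
# Illustrative per-site tuning sketch. Structure and field names are
# assumptions; the point is that zones, dwell thresholds, and known-benign
# patterns are per-property knobs, which is why the rollout is staged.

site_config = {
    "site_id": "site-012",
    "zones": {
        "north-parking": {
            "camera": "cam-07",
            # Polygon vertices in frame coordinates for this camera's view.
            "polygon": [(120, 80), (620, 80), (620, 440), (120, 440)],
            "dwell_threshold_seconds": 60,   # loiter alert only after a full minute
        },
        "fence-line-east": {
            "camera": "cam-11",
            "polygon": [(0, 200), (640, 200), (640, 260), (0, 260)],
            "dwell_threshold_seconds": 15,
        },
    },
    # Quirks learned during the pilot weeks, silenced rather than alerted on.
    "quiet_rules": [
        {"zone": "north-parking", "between": ("03:45", "04:15"), "reason": "nightly delivery driver"},
        {"zone": "north-parking", "weekday": "wednesday", "reason": "maintenance van"},
    ],
}
```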

The shipped pattern at one Class C multifamily property in Fort Worth: 20 incidents caught in the first month, including a break-in attempt, and a customer who renewed after 30 days. That is one site. The point is the shape of what happens after the box has been on a property for two weeks of tuning, not a marketing number to extrapolate from. Apply the same shape to every site you roll out and the portfolio average converges to something the operations team can actually trust.

Walk us through your portfolio.

A 15-minute call to look at your site count, your existing DVR/NVR mix, and whether the edge-distributed pattern fits.

Frequently asked questions

What does multi-site camera monitoring actually mean in practice?

It means watching, or having something watch on your behalf, the camera feeds at more than one physical location and routing the alerts that come out of those feeds to the right person at the right site at the right time. The two halves of that sentence are doing more work than they look. Most operators starting out treat it as a dashboard problem, the question of which screen they sit in front of. Almost every real failure mode in the field is the other half: the alert fired at site twelve, the property manager for site twelve was off-call that night, the alert went to the regional inbox, and nobody opened it until Monday. Multi-site camera monitoring is mostly a routing problem layered on top of an inference problem. Pick a stack that solves both and the dashboard takes care of itself.

What are the three architectures and how do they differ?

Cloud-centralized VMS streams every camera feed from every site to a central server. Inference and recording happen in the cloud. Examples in the property-management price band include Verkada, Rhombus, Eagle Eye Networks. Bandwidth scales linearly with cameras: each site needs a continuous upload of every feed it owns. Remote NOC services put humans in front of the feeds. A monitoring company in a central command center watches a wall of monitors, dispatches when something happens. Cost scales linearly with sites and with operator count, and response time degrades as the operator-to-site ratio falls. Edge-distributed puts the inference at the property. A small box at each site (the model Cyrano ships) runs detection, tracking, and threat classification locally, and only the structured events leave the building. The three differ on where the bandwidth goes, where the inference compute lives, and what the per-camera cost curve looks like as you add sites. The dashboard layer sits on top of all three, but it is a thin layer.

Why does cloud-centralized VMS get more expensive per camera as I add sites?

Because every camera at every site has to push its feed somewhere, continuously, and that somewhere is rented infrastructure that bills per stream. A single 1080p H.264 feed at a moderate bitrate is around 2 Mbps continuous. A 16-camera property is a 32 Mbps sustained upload, which on a typical small-property internet plan saturates the link before you have run anything else. Cloud VMS vendors paper over this with constrained-resolution proxies, motion-only upload, and tiered storage, but the architectural reality remains that the feed leaves the building. Multiply by 30 sites and you have rented egress bandwidth, ingest infrastructure, and storage that scales linearly with the camera count, and you are paying a per-camera-per-month subscription fee that exists primarily to cover that infrastructure. There is no version of this architecture that costs less per camera at site fifty than at site one.

Why do remote NOC services degrade as the portfolio grows?

Because the only lever the service has against cost is the operator-to-site ratio. A live monitoring company with one operator per 50 sites has good response time. The same company growing from 50 sites to 500 either hires nine more operators (so cost per site stays flat) or spreads the existing staff thinner (so each operator now watches far more than 50 sites in rotation and the average attention any one site gets falls). Most companies do the second, and past a few hundred sites the response time on a real intrusion goes from minutes to tens of minutes. The economics of the model are bounded by the cost of the operator, which is the most expensive line item in security. Operators do not get cheaper as you scale; software does. This is the structural reason the model breaks at scale, not a dig at any specific NOC.

What are the actual bandwidth numbers for an edge-distributed setup?

On a Cyrano-style edge-distributed deployment, the only thing that crosses the WAN at each site is the structured event payload and, in the case of a confirmed high-threat event, a 10-second clip. Structured events are JSON lines on the order of 600 to 1200 bytes each. A typical Class C multifamily property emits low double digits of events per camera per day. Ten clips per day at 720p, 10 seconds, H.264 is on the order of 30 megabytes total. Run the math at portfolio scale and a 30-property portfolio with 16 cameras per site uses roughly 100 megabytes per day of upstream bandwidth in normal operation. That is an email inbox, not a video stream. The contrast with cloud-centralized is roughly five orders of magnitude.

Where does the central dashboard live in an edge-distributed architecture?

It lives in the cloud, but it is a thin layer. It receives the event stream from each site, joins it against the per-property metadata (which property, which camera, which zone, who is on call), and surfaces three views: a live event feed (what is happening across the portfolio right now), a per-property incident history, and a portfolio-level trend report (incident count by site, by category, by week). It does not stream video. When an operator wants to see the clip attached to an event, the dashboard fetches the 10-second clip from that site's local storage on demand. This is the inverse of the cloud VMS flow, where the dashboard is fed by a continuous video pipeline and events are an afterthought.

How does alert routing work across multiple sites?

On any sane multi-site system, alerts route based on three things: which site fired the event, the on-call schedule for that site at the wall-clock moment the event fired, and the threat level the local inference assigned to it. Cyrano's alert router reads the on-call file for each property (a small JSON or CSV maintained per site) and emits an SMS, a phone call, or a queued email depending on the threat level. A LOW THREAT event during business hours queues to email. A HIGH THREAT event after midnight at a residential property phones the on-call manager directly. Mistakes happen at the boundaries: routing a 2 a.m. alert to a person who is not on call, or escalating a known false positive past the silencing rule. The honest description is that this routing is a per-property configuration problem, not a one-size-fits-all dashboard setting. Any vendor who tells you otherwise has not run a 30-property portfolio.
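In code, that per-property routing decision is small. The sketch below assumes an on-call file with a daytime and an after-hours contact and the LOW/HIGH threat levels described above; the function and field names are illustrative, not Cyrano's actual implementation.

```python
# Minimal routing sketch: which channel and which recipient for one event,
# given that site's on-call file and the event's threat level.
from datetime import datetime

def route_alert(event: dict, on_call: dict) -> dict:
    fired_at = datetime.fromisoformat(event["timestamp"])
    business_hours = 8 <= fired_at.hour < 18

    # Who is on call for this site at the wall-clock moment the event fired.
    recipient = on_call["daytime"] if business_hours else on_call["after_hours"]

    if event["threat_level"] == "HIGH" and not business_hours:
        channel = "phone_call"       # wake the on-call manager
    elif event["threat_level"] == "HIGH":
        channel = "sms"
    else:
        channel = "email_queue"      # LOW THREAT during business hours queues to email
    return {"to": recipient, "channel": channel, "event": event["event_type"]}

on_call = {"daytime": "+1-817-555-0131", "after_hours": "+1-817-555-0199"}
event = {"timestamp": "2026-05-02T02:14:31", "threat_level": "HIGH", "event_type": "person_loitering"}
print(route_alert(event, on_call))   # HIGH at 2 a.m. routes as a phone call to the after-hours contact
```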

What about a portfolio that has cameras from five different brands and three NVR vintages?

This is the normal state of any multifamily portfolio that grew through acquisitions. Cloud VMS vendors handle it badly because their pricing assumes camera replacement; the per-camera price they quote includes the new hardware. Remote NOC services handle it adequately because the operator does not care which camera the feed came from. Edge-distributed handles it well, by design, because the Cyrano box plugs into the HDMI output of whatever DVR or NVR is already in the closet at each site. The brand and vintage of the cameras stop mattering at the HDMI boundary. This is the single largest practical reason a property manager picks the edge-distributed architecture over a cloud VMS: the cloud VMS pricing assumes you are willing to rip out hardware that is two years old and works fine. The edge-distributed architecture does not.

Does the central dashboard work if a site loses internet?

On a cloud-centralized stack, no. The site is dark until the link comes back. On a remote NOC stack, no. The operator cannot watch what they cannot stream. On an edge-distributed stack, yes, partly. Detection, tracking, and threat classification all keep running locally on the Cyrano box because the inference is local. Recording continues to disk. Alerts queue to a local outbox file and replay in order when the link comes back. The dashboard misses the events for the duration of the outage, but the events are not lost. This is not a marketing claim, it is a consequence of where the inference physically runs. If your portfolio has rural sites or sites with flaky cell-modem WAN links, this property is not optional.
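The queue-and-replay behavior is mechanically simple, which is part of why it is reliable. A minimal sketch, assuming the outbox is an append-only JSON-lines file on the box's local disk (the path and function names are hypothetical):

```python
# Sketch of queue-and-replay during a WAN outage: events append to a local
# outbox while the link is down and drain in order when it returns.
import json
import os

OUTBOX = "/var/lib/cyrano/outbox.jsonl"   # hypothetical local path on the box

def post_event(event: dict) -> None:
    ...  # HTTP POST to the portfolio dashboard's ingest endpoint (omitted)

def send_or_queue(event: dict, wan_up: bool) -> None:
    if wan_up:
        post_event(event)                 # normal path: ship the event upstream
    else:
        with open(OUTBOX, "a") as f:      # link down: append to the local outbox
            f.write(json.dumps(event) + "\n")

def replay_outbox() -> None:
    """Called when the link comes back: drain queued events in original order."""
    if not os.path.exists(OUTBOX):
        return
    with open(OUTBOX) as f:
        for line in f:
            post_event(json.loads(line))
    os.remove(OUTBOX)
```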

Do I really need a separate box at every site?

For the price band this guide is written for (property management, 5 to 50 sites), yes. The alternative is to backhaul every feed to a central inference server, which puts you back into the cloud-centralized architecture and its bandwidth bill. The Cyrano box at each site is a one-time $450 hardware cost that handles up to 25 feeds. For a 16-camera property the per-feed amortized hardware cost is $28. The site needs to be capable of running the box (power, ethernet to the network, HDMI cable to the DVR/NVR) and that is the entire site-side install. Installation runs under 30 minutes on every property we have shipped to. The box is the architectural unit, not a dependency you can engineer out without giving up the bandwidth and offline-resilience properties that motivated picking this architecture in the first place.

How does this compare to the live remote video monitoring services that pitch portfolio-scale coverage?

Live remote video monitoring services work well at small to moderate scale and they have a real role at higher-risk sites where a human in the loop is genuinely needed. The honest comparison is that they answer a different question. They answer 'who watches the feeds tonight,' and the answer is 'one of our operators.' Edge-distributed answers 'what watches the feeds tonight,' and the answer is 'the box at the site, by itself, all night, every night.' The two are stackable: a few of our customers run the Cyrano box for first-line filtering and overnight detection at every site, then route a small filtered subset of HIGH THREAT events to a NOC service for human verification before dispatch. That hybrid is cheaper than either alone at scale, because the NOC operator only spends time on already-filtered events.

What does the rollout actually look like for a 30-property portfolio?

It is not a single big-bang deployment. The teams that pick edge-distributed do it one site at a time. A typical rollout: pick three pilot sites, install the box at each (under 30 minutes per site), wire the on-call schedule, run for two to four weeks, look at the event log and the alerts that fired, tune the zone polygons and dwell thresholds at each site to match how the property actually operates, then scale to the rest. The reason for the staged rollout is not the technology, it is the per-property tuning. Every site has its own quirks (a delivery driver that arrives at 4 a.m., a maintenance van that parks in the lot every Wednesday, a stretch of fence that gets touched by tree branches in the wind), and the alert quality at site twelve is a function of how long that site has been running. The staged rollout lets the early sites educate the configuration of the later ones.

