Cyrano Security
13 min read
One model, N properties, one 20 KB config file per deployed system

AI powered surveillance systems are governed by a config file, not a model.

The first-page results for this phrase argue at the capability level: what the AI can detect, what it should be allowed to, how accurate it is, which vendor has the bigger model. On a portfolio of real deployed systems, the model is constant across every unit. The only thing that actually diverges between property 1 and property 10 is a 5-field JSON record written on install day, around 6 to 20 kilobytes per property. That file is the real spec of an AI surveillance system at portfolio scale. This page publishes its shape.

See the config file driving a live 25-camera deployment
  • 4.9 rating from 50+ properties
  • Every unit runs the same model binary and the same 612-byte event envelope
  • Only the per-property config differs: 5 fields, ~6 to 20 KB, JSON
  • Median install-time calibration: 3 min 47 sec per property
  • Config is cloud-mirrored; device swaps recover in ~90 seconds

The top search results have the wrong object of study

Run the query. The first page has a Wikipedia definition, a policy-advocacy piece from the ACLU, a ScienceDirect research paper on public surveillance, and three vendor pages listing capabilities in bullet form. Every single one of them treats an AI surveillance system as a singular capability, as if the question is what one of them can do. The plural in the keyword is treated as grammar, not substance.

What is missing is the portfolio question. When a property management company runs AI powered surveillance systems across ten buildings, what actually differs between system one and system ten? The model is the same. The hardware is the same. The inference pipeline is the same. The zone rule engine is the same. The delivery protocol is the same. The one thing that changes is a config file that names which cameras you have, where the DVR paints its clock on screen, which pixel regions count as sensitive, and which phone gets buzzed at 3 AM.

That config file is the operational spec of the system. It is the part that a property manager actually has to review, approve, and maintain. No SERP result writes about it because no vendor has a reason to publish theirs. This page publishes ours.

What is constant vs. what diverges across N deployed systems

A quick accounting. The first three rows below are identical across every unit in a healthy Cyrano portfolio; everything else lives inside the per-property config record.

| Feature | Across the portfolio | Per-property config record |
|---|---|---|
| Detection model weights | Same binary checkpoint, every device | (not in config) |
| Inference runtime and NPU driver | Same version, every device | (not in config) |
| Event envelope schema | Same 612-byte JSON shape | (not in config) |
| DVR make, model, firmware | Varies across portfolio | dvr_profile |
| HDMI composite layout | Varies (4x4 vs 5x5 vs 3x3) | layout_id |
| Clock / channel-bug pixel rectangles | Differs per DVR brand | overlay_mask |
| Tile-to-camera labels and zones | Every property has its own | tiles[] |
| Dwell, active hours, threat weights | Per-zone, per-property | tiles[].zones[] |
| WhatsApp delivery routes | Different on-call group per property | delivery |

The config record, as it actually looks

Abridged but real. The block below is the JSON shape a Cyrano unit loads at boot. Everything the operator touches during install ends up in this document, and nothing else. The file below is from a 25-camera multifamily property on a Hikvision DVR; on a different property every value in it would be different, while the shape is always the same.

property.config.json
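The published page renders the record inline; as a stand-in, here is an illustrative sketch of its shape, abridged to two tiles. The five top-level field names follow the fields documented on this page; every value (DVR model string, coordinates, thresholds) is hypothetical, and the thread id is elided.

```json
{
  "dvr_profile": { "make": "Hikvision", "model": "example-model", "firmware": "example-fw" },
  "layout_id": "5x5-std",
  "overlay_mask": [
    { "x": 1624, "y": 24, "w": 272, "h": 40, "label": "clock" },
    { "x": 8, "y": 1044, "w": 180, "h": 28, "label": "channel_strip" }
  ],
  "tiles": [
    { "index": 0, "camera_label": "lobby", "zones": [] },
    {
      "index": 7,
      "camera_label": "dumpster_bay",
      "zones": [
        {
          "polygon": [[0.12, 0.40], [0.88, 0.38], [0.90, 0.95], [0.10, 0.96]],
          "dwell_seconds": 6,
          "active_hours": "22:00-06:00",
          "threat_weight": 0.8
        }
      ]
    }
  ],
  "delivery": {
    "whatsapp_thread_id": "...",
    "escalation_thread_id": null,
    "dashboard_group_id": null
  }
}
```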

Two tiles are shown for illustration; a real record includes all 25. Total serialized size for this property: roughly 17 KB. Nothing in this file is video, imagery, or personal data. It is entirely structured state that describes the property to the inference pipeline.

The install, stage by stage, in under four minutes

Five stages. Each one writes exactly one section of the config. The operator does not touch the model at any point; they do not touch a camera feed directly. They describe the property to the unit, and the unit handles the rest.


Stage 1: DVR profile auto-detected from HDMI

Plug the unit into the DVR's HDMI out. The HDMI EDID handshake reveals the DVR make, model, firmware version, and signal shape. Cyrano writes dvr_profile and picks the right overlay_mask template defaults for that DVR brand. Operator does not type anything.


Stage 2: Layout confirmed from the composite signal

The unit samples the HDMI output, detects whether the DVR is painting a 2x2, 3x3, 4x4, or 5x5 grid, and writes layout_id. Tile coordinates for output slicing are derived from this single value. If the DVR is running a non-standard layout, the operator taps to adjust; otherwise the detected layout is accepted automatically.
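The derivation from layout_id to tile coordinates is simple enough to sketch. This is an illustrative Python version, assuming a 1920x1080 composite and an `NxN-std` layout id string; the unit's actual slicing code is not published.

```python
def tile_rects(layout_id: str, width: int = 1920, height: int = 1080):
    """Derive per-tile (x, y, w, h) slices from a layout id like '5x5-std'."""
    grid = layout_id.split("-")[0]                 # "5x5"
    cols, rows = (int(n) for n in grid.split("x"))
    tile_w, tile_h = width // cols, height // rows
    return [
        (col * tile_w, row * tile_h, tile_w, tile_h)
        for row in range(rows)
        for col in range(cols)
    ]

rects = tile_rects("5x5-std")
# 25 tiles; tile 0 is the top-left 384x216 slice of the 1920x1080 frame
```

One string in the config fans out into every ROI the detector sees, which is why a layout change is the only re-scan event that touches geometry.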


Stage 3: Tile labels assigned in a dropdown

For each tile in the composite, the operator picks a camera label from a dropdown (lobby, mailroom, dumpster bay, parking NW, etc.) or types a custom label. If the DVR already has human-readable tile names stored internally (many do), Cyrano reads them and pre-fills. Writes the camera_label for each entry in tiles[].


Stage 4: Zones drawn by tapping on the tiles that need them

Not every tile needs a zone. Public hallways and open-lobby cameras often run without one. For tiles that need a zone (trash alcoves, restricted doors, after-hours amenities), the operator taps vertices on the tile image to draw a polygon, picks a dwell preset (3s, 6s, 12s), and sets an active-hour schedule if the zone is time-restricted. Writes zones[] into each relevant tile entry.
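The rule the operator is configuring here can be sketched as: an event fires when a detection's centroid stays inside the polygon for at least the dwell preset. A minimal Python illustration, assuming normalized vertices and one centroid sample per second; the on-device rule engine is not published.

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def dwell_exceeded(centroids, polygon, dwell_seconds):
    """True once consecutive in-zone samples (1 per second) reach the preset."""
    run = 0
    for pt in centroids:
        run = run + 1 if point_in_polygon(pt, polygon) else 0
        if run >= dwell_seconds:
            return True
    return False
```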


Stage 5: Delivery route linked and validated

Paste or scan the WhatsApp group invite for the on-call thread. Fire a test event by having the operator walk in front of a camera with zone coverage; within two seconds the thread should buzz with a thumbnail and a label. Writes delivery.whatsapp_thread_id. Optional escalation thread and dashboard group added in the same step.

What the portfolio dashboard actually does with N config files

The cloud control plane does not run inference, does not ingest video, and does not train per-property models. It does three things, all of them downstream of the per-property config records. The diagram below shows the data shape: five installed systems flow their delivered events through their own delivery routes into one portfolio view.

Five config files, one unified portfolio view:

  • Properties 1 through 5 each flow their delivered events into the cloud control plane.
  • From those streams the control plane produces a portfolio incident trend view, cross-property event search by label, a versioned config audit log, and an exportable safety report (PDF, CSV).

The hub does not know how to detect a person. It knows how to aggregate events whose labels and zone ids were set by five config files. Swap one config from Hikvision to Dahua and the hub is undisturbed: the value inside dvr_profile changes and the indexing continues.
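The aggregation the hub performs can be sketched in a few lines: it groups delivered events by the label strings the config files wrote, nothing more. Illustrative Python, assuming events arrive as dicts; the field names are hypothetical, not Cyrano's actual envelope schema.

```python
def portfolio_trends(events, min_properties=2):
    """Surface event classes seen at multiple properties.

    Each event is a dict with 'property', 'zone', and 'event_class' keys,
    all strings originally written into per-property config records.
    """
    by_class = {}
    for e in events:
        by_class.setdefault(e["event_class"], set()).add(e["property"])
    return {
        cls: sorted(props)
        for cls, props in by_class.items()
        if len(props) >= min_properties
    }

events = [
    {"property": "oakpark-apts", "zone": "dumpster_bay", "event_class": "loiter"},
    {"property": "aspen-heights", "zone": "trash_alcove", "event_class": "loiter"},
    {"property": "elm-court", "zone": "mail_room", "event_class": "package"},
]
# portfolio_trends(events) -> {"loiter": ["aspen-heights", "oakpark-apts"]}
```

No model is consulted anywhere in this path, which is the point: the portfolio view is a pure function of config-authored labels.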

What lives in the config vs. what lives in the model

A common confusion during vendor evaluation is thinking the per-property state is in the model. It is not. The four categories below sort cleanly into config or model. If a vendor cannot tell you which side each category lives on, they are not operating under a config-first architecture.

Lives in the config (per-property, operator-editable)

Tile labels. Zone polygons. Dwell seconds. Active-hour schedules. Threat weights. WhatsApp delivery routes. Overlay mask coordinates. Layout id. DVR profile. These all get written during install and edited from the dashboard when the property changes operationally.

Lives in the model (portfolio-constant, not editable per property)

Detector weights. Class definitions. Confidence thresholds built into the model. Threat-classifier logic. These are the same binary across every property. A property cannot tune them from their dashboard because that would make the portfolio inconsistent.

Lives on the device (transient, not persisted)

The current HDMI composite frame in memory. Per-tile ROI slices. Intermediate detection tensors. Event dedup windows for the last N seconds. None of these are stored between reboots and none of them leave the property.

Lives in the cloud (mirrored, audit-friendly)

Versioned config records, one document per property. Delivered event envelopes (240 KB per HIGH event). Portfolio trend indexes. Exportable safety reports. Zero camera frames unless one was attached to a delivered event.

A device swap, on the wire

Concrete example of why the config is the durable asset and the device is replaceable. Stdout below is from a replacement unit being plugged into an existing property's DVR; it pulls the last known config from the cloud mirror and starts running events inside 90 seconds. No install operator on site is required.

cyrano.bootstrap - replacement unit on an existing property
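The rendered page embeds the unit's stdout here. As a stand-in, this is an illustrative transcript of the sequence described above; the line format, timings, and version numbers are invented for illustration, not captured device output.

```text
[ 0.0s] boot: cold start, no local config
[ 2.1s] qr:   property id scanned; pulling config from cloud mirror
[ 4.8s] cfg:  received ~17 KB, 5 fields (dvr_profile, layout_id, overlay_mask, tiles, delivery)
[ 6.0s] hdmi: EDID handshake matches dvr_profile; layout 5x5-std confirmed
[41.3s] model: warm-up complete, same binary as every other unit
[88.7s] event: pipeline live; test delivery validated on WhatsApp thread
```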
~17 KB

That is the size of one property's entire operational config on a 25-camera deployment. Every value that differs between this property and any other property in the portfolio is inside that file. The model binary is two orders of magnitude larger and identical across every deployed system.

Measured on a live 25-camera Cyrano install, JSON-serialized unminified

Four numbers that describe a portfolio of deployed systems

These are the portfolio-scale numbers that matter once you have more than one system. None of them depend on the model; they all depend on the config records.

  • Fields per config record: 5
  • KB per property (typical): ~17
  • Median install-time calibration: 3 min 47 s
  • Device-swap recovery time: ~90 s

Fields: dvr_profile, layout_id, overlay_mask, tiles, delivery. KB figure reflects a 25-camera install with zones on 9 tiles. Install-time is measured from HDMI plug to first validated WhatsApp delivery. Device-swap is measured from cold unit on bench to online with the last known config.

A checklist for evaluating any AI powered surveillance system at portfolio scale

Use this as a six-point qualification when a vendor says they do multi-property deployments. A vendor whose architecture is config-first will answer all six without hedging. A vendor who cannot articulate the per-property state is running the same undifferentiated pipeline at every customer.

Six questions to ask before deploying any AI surveillance system to more than one property

  • Show me the exact document that will be different between my two buildings, field by field.
  • Confirm that the detection model is binary-identical across every deployed unit in my portfolio.
  • Let me export that per-property document as a JSON file I can open in a text editor.
  • Version the document so every operator edit is audit-traceable with timestamp, author, and diff.
  • Mirror the document to the cloud so a device swap recovers the property's entire state in under 2 minutes.
  • Keep the document under 50 KB per property so it can be reviewed by a human, not just a dashboard.

What the portfolio looks like from an on-call manager's phone

Because the config record carries consistent labels across properties, a single on-call thread can meaningfully interleave events from 10 deployed systems without the reader losing track of which property is talking. The label strings in the notification are the same strings the operator wrote in the config during install.

  • oakpark-apts - dumpster_bay - loiter 14s - HIGH
  • river-lofts - loading_dock - vehicle 6s after-hours - HIGH
  • brook-commons - pool_deck - person 11s post-close - HIGH
  • maple-terrace - rear_entry - person tailgate - HIGH
  • elm-court - mail_room - package handoff - LOW (dashboard only)
  • cedar-run - parking_NW - vehicle circling - HIGH
  • aspen-heights - trash_alcove - loiter 8s - HIGH
  • willow-park - main_gate - vehicle 3s at gate - LOW (dashboard only)

The property slug and the zone label at the front of each event are strings pulled directly from the per-property config records. The on-call manager does not need a separate index to know which building is talking because the config names the building.

What this architecture is not good for

A config-first architecture has sharp limits worth stating directly. If a property genuinely needs a model tuned to its site (a cell-block deployment that needs a custom pose detector, a manufacturing line that needs a defect classifier trained on its own widgets), the config-first shape does not provide that. That tuning has to sit outside this architecture.

It is also not the right shape for surveillance flows that rely on continuous off-property upload of full camera streams. A cloud-ingest system inherently cannot be described by a 17 KB per-property config, because the per-property state lives in the streams themselves, not in a document. If that is the shape you want, the right vendors are cloud camera vendors like Verkada, Rhombus, or Coram. They are good at different problems.

What the config-first architecture is good for is multifamily, industrial, construction, and standalone-property surveillance where the event classes are stable across the portfolio and the source of variation is the physical property: which camera is where, which zones matter, which phone gets buzzed. That is the majority of property surveillance by volume, and it is the domain this page is about.

Walk through a live portfolio's config records, property by property.

Fifteen minutes on a video call. Open a real deployed portfolio, pull the JSON record for each property, and see exactly what differs between them.

Book a call

Frequently asked questions

Why is a per-property config file the defining spec of an AI powered surveillance system, instead of the model?

Because at portfolio scale the model is constant and the config is the only thing that varies. Every Cyrano unit across every property ships with the same binary inference runtime, the same class set, the same threat classifier, and the same 612-byte event envelope schema. If you took a unit from property A and swapped it with a unit from property B, the hardware and the software would be identical, yet the unit would immediately misbehave: the overlay mask would be aimed at the wrong clock position, the tile-to-camera map would label the lobby as the parking lot, and WhatsApp deliveries would go to the wrong on-call thread. The divergence lives in the config record, not in the model. That makes the config the operational spec of the system at the property level. Top-ranking articles on AI surveillance talk about the model because it is the photogenic part. The config is the part that actually determines what each deployed system does.

What are the five fields in a Cyrano per-property config record, in order?

Field one, dvr_profile: the DVR make, model, and firmware stamp observed from the HDMI signal at install (used to pick the right overlay mask template and tile geometry defaults). Field two, layout_id: the compositing mode the DVR drives on its HDMI output, for example 5x5-std or 4x4-std, which determines the tile coordinates for output slicing. Field three, overlay_mask: a list of pixel rectangles within the 1920x1080 frame that get zeroed before inference because they hold non-scene glyphs (the digital clock, the per-tile channel name strip, the channel bug, any static watermark). Field four, tiles: a list of 16 to 25 entries, one per tile in the composite, carrying the tile index, the camera label in English (lobby, mailroom, dumpster bay), the zone polygons drawn on that tile, the dwell thresholds per zone, and the active-hour schedule per zone. Field five, delivery: the WhatsApp thread id that HIGH events land in, plus an optional escalation thread id for after-hours and an optional dashboard group id. That is the whole record. Nothing about the model lives in it.

How long does it take to write a Cyrano config record at install time?

About four minutes per property on an operator tablet. One minute: plug in the HDMI passthrough and let the unit sample the DVR signal; it reads the DVR make/model from the EDID handshake and writes dvr_profile and layout_id automatically. One minute: the operator confirms which tile is which camera by tapping each tile and picking a label from a dropdown, or accepting the DVR's native channel names if those are human-readable. One minute: the operator draws zone polygons on the tiles that need zones (not every tile needs one; a public hallway can be left permissive) and picks a dwell preset per zone. One minute: confirm the WhatsApp thread link and fire a test event to validate the delivery route. When we profile install recordings, the median total is 3 minutes 47 seconds. The model loads, warms up, and starts running during this time; the config is the only thing the operator actively writes.

If every unit runs the same model, what exactly does the portfolio dashboard do that a single-property device cannot?

It renders the union of the delivered event stream from every config record in the portfolio, indexed by property label, camera label, zone label, and event class. The dashboard does not do inference and does not store raw frames; those live on the devices and leave the property only as the 240 KB per-event payload. What the portfolio view adds is two derived products that do not exist on a single unit. One, cross-property incident trend detection: if the same event class fires at five properties in the same week, a portfolio-level report surfaces it even though no single device saw the pattern. Two, a unified review index where a property manager investigating an incident at property 4, camera 7 can search by label across the whole portfolio without juggling eight different DVR web UIs. Neither of those exists until you have a portfolio of config files whose tile labels and zone ids are consistent enough to be aggregated.

What breaks when two properties in the same portfolio use different DVR brands, and how does the config absorb that?

Different DVR brands render the composite frame with different pixel-level chrome: the clock is in a different corner, the per-tile channel strip is a different height, the channel bug is a different glyph. Everything else downstream in the pipeline (inference model, zone rules, delivery) is indifferent to DVR brand. The config absorbs the difference entirely through dvr_profile (which picks the right overlay_mask template defaults) and the overlay_mask field (which stores the actual rectangles). A portfolio with five properties on Dahua, three on Hikvision, and two on Lorex runs ten units with the same model binary, ten different overlay_mask arrays, and ten delivery routes. Nothing else in the stack changes. The operator does not see DVR brand as a configuration dimension; it becomes a value inside one field.
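What "absorbing the difference" means mechanically: before inference, the unit zeroes the overlay_mask rectangles out of the composite frame so the DVR's clock and channel strips never reach the detector. A minimal sketch, assuming a frame stored as rows of pixel values; the on-device implementation is not published.

```python
def apply_overlay_mask(frame, rects):
    """Zero each (x, y, w, h) rectangle in place; frame is rows of pixels."""
    for x, y, w, h in rects:
        for row in frame[y:y + h]:
            row[x:x + w] = [0] * w
    return frame

# A Hikvision profile might mask a top-right clock; a Dahua profile a
# bottom-left strip. Same function either way; only the rectangles stored
# in the config differ.
```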

How is the per-property config record stored and versioned?

On-device as a local JSON record that the inference loop reads on startup, and mirrored to the cloud control plane as a versioned document. Every edit (operator re-draws a polygon from the dashboard, changes a dwell threshold, adds a zone) writes a new version with a timestamp, author, and diff. The inference loop picks up the new version on the next tick; there is no device restart. The versioning matters operationally because calibration is not a one-shot event. Properties change: a pool opens earlier for summer, a construction area gets fenced off, a mailroom moves, a new camera gets added to the DVR. The config is the running log of those changes. The cloud mirror exists for three reasons: it survives a device swap (plug a new unit into the same DVR, pull the last config from the cloud, you are back online), it makes portfolio-level audits possible, and it lets a regional manager tune zones across several properties without visiting each site.

Does the config file include the operator's camera video, or any per-property model fine-tuning?

No to both. The config holds only structured state: coordinate rectangles, labels, polygon vertex lists, numeric thresholds, timestamps, and route identifiers. No video, no images, no embeddings. No per-property model weights are trained or stored, because every unit runs the same public detector checkpoint. There is no face database, no vehicle database, no license plate index attached to the config. That is a deliberate architectural choice. The consequence is that a config record can be exported, reviewed, and audited as a plain text document, and nothing in it is private personal data that a property manager would have to treat as a protected asset. The config can be emailed to a compliance lawyer. The model weights are the same open checkpoint across every deployment.

What is the single-file size of a typical per-property config, in kilobytes?

Between 6 and 20 kilobytes, JSON-serialized and unminified. A minimal install with 16 tiles, two zone polygons drawn, and one delivery route comes in around 6 KB. A heavier install with 25 tiles, three or four zone polygons per tile, custom dwell thresholds per zone, active-hour schedules, and two escalation threads comes in around 20 KB. For reference, the composite video frame the device sees every 33 milliseconds is 6 megabytes of raw RGB. The entire operational spec of the system fits inside a single video frame by a factor of roughly 300 to 1000. That compactness is why the config can be versioned aggressively in the cloud, why a device can boot from cold with a fresh config in seconds, and why a portfolio of 20 properties occupies a fraction of a megabyte of state total.
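The 300-to-1000 figure is straight arithmetic and easy to check:

```python
frame_bytes = 1920 * 1080 * 3        # raw RGB composite frame: ~6.2 MB
config_small = 6 * 1024              # minimal config, ~6 KB
config_large = 20 * 1024             # heavy config, ~20 KB

ratio_heavy = frame_bytes // config_large   # ~303: heavy config vs one frame
ratio_light = frame_bytes // config_small   # ~1012: minimal config vs one frame
```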

How does a new camera added to an existing DVR get absorbed into an already-deployed Cyrano system?

The operator opens the dashboard for that property, clicks re-scan, the unit re-samples the HDMI composite, and the layout_id may or may not change (a new camera on an existing DVR that was running 4x4 with one empty tile is still 4x4; a DVR that bumps from 2x2 to 3x3 is a layout change). If the layout changes, the tile coordinates for output slicing re-generate and the operator confirms the new tile-to-camera mapping via dropdown. If the layout stays the same, only one line in the tiles array changes: the previously empty tile gets a camera label and optional zones. The detector does not need retraining; the new camera is just one more tile in the composite it was already processing. Total operator time to absorb a new camera is under 60 seconds.

Can I inspect my property's config record, and should a vendor let me?

Yes to both. Any operator can export their property's config as a JSON file from the dashboard and open it in a text editor. That is the right behavior and a credible shape for any AI surveillance system you are evaluating: the document that defines what each deployed unit does should be inspectable by the property that runs it. Vendors who cannot show you the per-property config are either not operating under one (in which case their system is identical-as-deployed across every customer, and every property manager gets the same false-alarm profile) or they are hiding it behind a non-auditable cloud blob. Neither is acceptable at portfolio scale. The right question to ask any AI surveillance vendor during evaluation is: show me the 20-kilobyte document that will be different between my two buildings, and walk me through every field.

Does a portfolio-level config change (say, updating dwell thresholds across all properties) apply atomically?

Not atomically in a database sense, but sequentially with a guaranteed order. A portfolio-level edit pushes a new version to each device's config record in sequence over the cloud control plane, and each unit picks up the new version on its next inference tick (within one second in typical network conditions). If one device is offline, its update queues; when it reconnects it pulls the latest version. The system records which version of the portfolio template each property is running, so a regional manager can see that nine of ten properties are on template v14 and one is still on v13, then act on the lagger. This is a deliberate design choice: it is more important that a property never goes dark during a config push than that the push be instantaneous everywhere.
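The push behavior described above can be sketched: each online device takes the new template version immediately, offline devices queue it for reconnect, and the control plane can report laggers. Illustrative Python; all names are hypothetical, not Cyrano's control-plane API.

```python
def push_template(fleet, new_version):
    """Sequentially push a template version; queue it for offline devices.

    fleet maps property slug -> {'online': bool, 'version': str, 'queued': ...}
    """
    for state in fleet.values():
        if state["online"]:
            state["version"] = new_version   # picked up on the next inference tick
        else:
            state["queued"] = new_version    # applied when the device reconnects

def laggers(fleet, target):
    """Properties not yet running the target template version."""
    return sorted(p for p, s in fleet.items() if s["version"] != target)

fleet = {
    "oakpark-apts": {"online": True, "version": "v13", "queued": None},
    "river-lofts": {"online": False, "version": "v13", "queued": None},
}
push_template(fleet, "v14")
# laggers(fleet, "v14") -> ["river-lofts"]
```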

What happens if a device fails or is replaced, in terms of the config record?

The cloud mirror of the per-property config is authoritative. Plug in a new device, scan the property QR code, the device pulls the last known config (dvr_profile, layout_id, overlay_mask, tiles, delivery), validates against the live HDMI signal, and is running events again within roughly 90 seconds. Zones do not have to be re-drawn, labels do not have to be re-typed, WhatsApp threads do not have to be re-linked. The entire per-property state is the JSON record. Hardware is treated as replaceable; the config is the durable asset. Contrast that with per-camera AI deployments where the state of the system is spread across N camera firmwares, a cloud region, and a vendor-specific dashboard, none of which can be exported as a document you own.

🛡️CyranoEdge AI Security for Apartments
© 2026 Cyrano. All rights reserved.
