At night, the AI alert pipeline does not just turn on the IR light. Three inputs change weight.
Most coverage of "AI security camera alerts at night" tells you the camera has IR illumination and an AI model that filters false positives. That is true and it is also not the part that matters. The part that matters is what the alert classifier reweights when the camera flips into IR mode at sunset, because the same six inputs that ran the day pipeline are not all reliable in the dark.
This page walks through what flips at sunset on a properly tuned night-mode classifier: which inputs lose confidence (posture), which gain weight (time window, dwell), and which environmental noise sources show up that the day pipeline never sees (insects on the IR lens, headlight glare, IR-cut filter swap). We end on the operator-asleep cost asymmetry, which is the reason the night tuning has to be tighter than the day tuning, not looser.
At night, a working AI camera alert pipeline reweights three inputs relative to the day pipeline. Time becomes a primary HIGH-threat trigger for after-hours zones, because at 03:00 there is no benign reason for an unknown person to be at the rear gate. Posture inference is downweighted because IR mode collapses the scene to monochrome and pushes the posture model under its confidence floor. The environmental-motion silencer switches profiles from day sources (foliage, shadow shifts) to night sources (insects on the IR lens, headlight glare, IR-cut filter swap). On a typical multifamily property this collapses about 30 to 60 raw detections per camera per night down to 1 to 3 paged events.
“At one Class C multifamily property in Fort Worth, this configuration caught 20 incidents including a break-in attempt in the first 30 days. The customer renewed at month one.”
180-unit Class C multifamily, Fort Worth TX, month-one deployment review
The four night-only failure modes
Before we get to the classifier, here are the four sources of night noise that do not exist in the day pipeline. Each one fires a track that looks structurally similar to a real person or vehicle, and each one has to be silenced upstream of the intent decision.
Insects on the IR lens
The 850 nanometer illuminator attracts moths, mosquitoes, and small flying insects, which land on or near the lens and produce moving high-contrast blobs in the foreground. The detector occasionally promotes one to a person at low confidence. Silenced upstream by a size-and-distance prior tied to the IR fall-off curve.
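The size-and-distance prior can be sketched roughly as below. The function name and the numeric thresholds are illustrative, not production calibration, which is tied to the measured IR fall-off curve per camera:

```python
# Hedged sketch of a size-and-distance prior for rejecting insect blobs.
# An insect on or near the lens sits inside the IR fall-off knee, so it is
# both very bright and disproportionately large in pixels; a real person
# with that apparent size would have to be much farther from the camera.

def plausible_person(blob_area_px: float, mean_ir_luma: float,
                     frame_area_px: float = 1920 * 1080) -> bool:
    """Return False for blobs whose size/brightness combination is only
    consistent with an object centimeters from the IR illuminator."""
    area_frac = blob_area_px / frame_area_px
    # Near-lens insects saturate the IR return (luma near 255) while
    # covering a large fraction of the frame.
    if mean_ir_luma > 240 and area_frac > 0.05:
        return False
    # Tiny high-contrast specks (mosquitoes mid-air) are below any
    # plausible person size at usable range.
    if area_frac < 0.0005:
        return False
    return True
```

The real prior would scale the area cutoff with the camera's IR range rather than use a fixed fraction.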
Headlight glare
A car on a perimeter road at midnight produces a 1 to 2 second column of high-luminance pixels that crosses three or four zones in sequence. The track stitcher has to recognize this as a single vehicle event, not a fast-moving person.
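A hypothetical apparent-speed gate for this case; the names and the 1.5-zones-per-second cutoff are illustrative, not production values:

```python
def classify_track(zone_crossings: int, duration_s: float) -> str:
    """Crude apparent-speed gate: a person on foot does not cross
    several camera zones in under two seconds; headlight glare does."""
    if duration_s <= 0:
        return "discard"
    zones_per_s = zone_crossings / duration_s
    # Illustrative cutoff: more than 1.5 zone boundaries per second
    # reads as a vehicle (or its glare), not a person.
    return "vehicle_or_glare" if zones_per_s > 1.5 else "person_candidate"
```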
IR hot-spot reflection
When the illuminator is mounted close to glass, a metal mailbox, or a parked car windshield, the reflection produces a bright stationary blob the detector reads as a person at 0.3 to 0.5 confidence. Silenced by a stationary-blob exclusion drawn at install.
IR-cut filter swap
At dusk and dawn the camera physically slides a filter in front of the sensor. During the 200 to 800 ms transition, the auto-gain swings the entire frame through an exposure spike. Every pixel appears to move. The classifier gates for 1 second after the bimodal luminance shift.
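One way to sketch that gate, assuming the edge unit computes a mean luma per frame. The class name and the jump threshold of 60 are illustrative; the one-second gate duration comes from the text:

```python
class IRCutGate:
    """Gate classification for one second after a frame-wide luminance
    jump, the signature of the IR-cut filter swap described above."""

    def __init__(self, fps: int = 10, gate_seconds: float = 1.0):
        self.gate_frames_total = int(fps * gate_seconds)
        self.gate_frames_left = 0
        self.prev_mean_luma = None

    def update(self, mean_luma: float) -> bool:
        """Return True when classification may run on this frame."""
        # A frame-wide mean-luma delta above 60 (illustrative value)
        # is treated as the filter-swap exposure spike.
        if (self.prev_mean_luma is not None
                and abs(mean_luma - self.prev_mean_luma) > 60):
            self.gate_frames_left = self.gate_frames_total
        self.prev_mean_luma = mean_luma
        if self.gate_frames_left > 0:
            self.gate_frames_left -= 1
            return False
        return True
```

A production version would look at the luminance histogram's shape, not just the mean, to avoid gating on a car headlight sweep.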
Sunset to sunrise, what the classifier does at each transition
The night-mode classifier is not a single state. It moves through five regimes between sunset and sunrise, and the dwell thresholds, posture weights, and environmental silencer profile change at each one. The transitions below are what a properly configured edge unit walks through on a normal evening.
Five regimes between sunset and sunrise
Sunset minus 30 minutes (golden hour)
Sensor still in color mode. Detector confidence on shape and color is normal. The classifier flags the approaching IR-cut transition based on declining ambient lux reported by the camera and prepares to gate the upcoming one-second window.
IR-cut filter transition (200 to 800 ms)
The filter slides in. Auto-gain swings and the whole frame appears to move. The classifier raises the per-frame detector confidence floor to 0.85 for one second and discards any tracks that started during the swap. No alerts fire during this window.
Full IR mode, after-hours window not yet open
Camera is monochrome with active IR illumination. The day-rule classifier still runs. Posture confidence has dropped about 25 percent because the IR scene flattens. The environmental silencer has switched from day_v3 (foliage, shadow) to night_v2 (insects, glare, hot-spot).
After-hours window open (typically 22:00 to 06:00)
The night reweighting kicks in. Time becomes a primary HIGH trigger for after-hours zones. Dwell thresholds drop by 30 to 50 percent. Posture weight is cut in half and posture can no longer override alone (it now requires a confirming dwell of at least 8 seconds). False-HIGH budget tightens to under 0.3 per camera per night.
Sunrise IR-cut transition
Mirror of sunset. The filter slides out, gain swings, the classifier gates for one second, and color mode returns. Day weights restore, the day environmental silencer (day_v3) takes over, and dwell thresholds revert. The night digest from the past eight hours is compiled into a single email and delivered to the regional manager at 06:30.
The night-mode reweighting as actual code
Below is the shape of the night-mode adjustment the production classifier applies on top of the day rules. The real source is more careful at the edges (per-camera IR calibration, dawn thrash on cloudy days, cameras whose ambient light sensor lies), but the routing logic looks like this. The six fields inside nightAdjustments() are the only things that change when IR mode flips on.
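A minimal Python sketch of that shape, keeping the nightAdjustments() name from the text. The field values come from the numbers quoted on this page; the dict schema itself is illustrative:

```python
def nightAdjustments(day_config: dict) -> dict:
    """Overlay applied on top of the day rules when IR mode flips on.
    The day config is left untouched; night returns a derived copy."""
    cfg = dict(day_config)
    cfg.update({
        # 1. Posture: half weight, no solo override authority; a
        #    posture-driven HIGH now needs a confirming dwell of 8 s.
        "posture_weight": day_config["posture_weight"] * 0.5,
        "posture_can_override": False,
        "posture_confirm_dwell_s": 8,
        # 2. The time window becomes a primary HIGH trigger for
        #    after-hours zones.
        "time_is_primary_high_trigger": True,
        # 3. Dwell thresholds drop 30-50 percent for remaining zones
        #    (a 40 percent cut shown here).
        "dwell_threshold_s": round(day_config["dwell_threshold_s"] * 0.6),
        # 4. Detector confidence floor raised during the IR-cut swap gate.
        "swap_gate_confidence_floor": 0.85,
        # 5. Environmental silencer swaps noise profiles.
        "env_silencer_profile": "night_v2",  # day profile: day_v3
        # 6. False-HIGH budget tightens because the operator is asleep.
        "false_high_budget_per_camera_per_night": 0.3,
    })
    return cfg
```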
Field 1 cuts posture weight in half and removes its override authority. Field 2 makes the time window a primary HIGH trigger for after-hours zones. Field 3 lowers the dwell threshold for the rest. Field 4 raises the detector confidence floor during the IR-cut filter swap. Field 5 swaps the environmental silencer from day to night noise sources. Field 6 tightens the false-HIGH budget because the operator is asleep. Note that the day classifier's six-input decision (class, zone, time, dwell, badge, posture) is unchanged. Night does not add inputs, it changes how they are weighted.
Why night tuning is tighter than day tuning, not looser
The most common mistake in a first deployment is to relax the alert thresholds at night because "everything that happens at night is suspicious anyway." This sounds right and is empirically wrong. The reason is the asymmetric cost of false-HIGH.
A false-HIGH at 12:30 on a Tuesday afternoon interrupts a meeting and costs the on-call manager about two minutes of context-switch. A false-HIGH at 03:15 wakes them up, costs them 20 to 40 minutes of sleep latency on the recovery, and trains them to silence the channel by week three. The cost of a missed real event is the same at any hour: the window for an action response is the same regardless of clock time. The cost of a false event is roughly an order of magnitude higher overnight.
The numerical version: target under 1.0 false-HIGH per camera per week during business hours, and under 0.3 false-HIGH per camera per night. Against the night's much smaller raw event base, that is a substantially tighter bar. The way you get there is the night reweighting above plus a confirming dwell on any posture-driven HIGH: a hands-on-door alert pages immediately during the day, but the same alert at 02:00 requires at least 8 seconds of confirming dwell before the page lands.
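The confirming-dwell rule can be sketched as a small predicate; the posture label and function name are illustrative:

```python
def posture_high_pages(posture: str, dwell_s: float, is_night: bool) -> bool:
    """Posture-driven HIGH routing: pages immediately by day, but at
    night holds until a confirming dwell of at least 8 seconds
    (the threshold quoted in the text) has accumulated."""
    if posture != "hands_on_door":  # illustrative posture label
        return False
    if not is_night:
        return True
    return dwell_s >= 8
```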
The intuitive but wrong tuning, lowering the bar at night, sounds protective but produces a channel the operator silences within a month, and the result is no overnight monitoring at all. The correct tuning raises the bar in proportion to the cost of waking someone, and recovers the missed-event signal through better gating (time-of-day, zone-of-day) rather than louder alerts.
One feature you cannot have at night, and how the search index handles it
At night, color is gone. The IR sensor path is monochrome. A natural-language search like "person in red shirt at the rear gate" resolves zero hits because the camera literally did not record color. This is not a bug in the search index, it is physics: 850 nanometer reflectance is not a function of visible-light dye chemistry, so a red shirt and a navy shirt look the same to an IR sensor.
What survives at night and is still searchable: shape (silhouette of a person versus a vehicle), gait (walking versus running), vehicle silhouette (sedan, truck, van), license plate (only if there is a dedicated LPR camera), badge state from access control, posture (with the lower confidence floor), and zone-and-time. A night-aware search index expands the query at runtime: "person in red shirt at 02:30" rewrites to "person at 02:30, color filter dropped" and returns the matching detection without crashing on the missing color feature.
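A minimal sketch of that runtime query expansion, with an illustrative feature-dict schema rather than a real index API:

```python
def rewrite_night_query(features: dict, is_night: bool) -> dict:
    """Drop color predicates from a detection query over an IR/monochrome
    window instead of letting them force zero rows. The returned query
    records which filters were dropped so the UI can disclose it."""
    if not is_night:
        return dict(features)
    q = {k: v for k, v in features.items() if k != "color"}
    q["dropped_filters"] = ["color"] if "color" in features else []
    return q
```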
This matters when an operator is reviewing footage at 09:00 the next morning, working from a daylight description a tenant gave them, and trying to search a 6-hour window of overnight detections. The right behavior is to return the night results with a small disclaimer that color was not used, not to silently return zero rows.
Retrofitting night-mode AI on a recorder you already own
Night is the easier case to retrofit, not the harder one. The reason: the recorder is already drawing every camera onto a multiview composite for the wall display, and it is already handling the IR-cut filter and the IR illuminator at the camera level. None of that has to change. An edge unit reads the recorder's HDMI multiview output, splits it into per-tile crops using a fixed tile grid template captured at install (2x2, 3x3, 4x4, or 5x5), and runs the detection plus night-mode reweighting on each tile.
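The per-tile crop computation for a uniform NxN grid can be sketched like this; the function name is illustrative, and real install templates also account for bezels and mixed-size layouts:

```python
def tile_crops(grid: int, frame_w: int = 1920, frame_h: int = 1080):
    """Split the recorder's HDMI multiview frame into per-camera crop
    rectangles for an NxN grid (2x2 through 5x5).
    Returns (x, y, w, h) tuples in row-major order."""
    w, h = frame_w // grid, frame_h // grid
    return [(c * w, r * h, w, h) for r in range(grid) for c in range(grid)]
```

Each crop is then fed to the detector independently, so the night-mode reweighting runs per camera even though the box only sees one composite signal.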
The cameras stay. The coax stays. The recorder stays. The only new piece of hardware is the edge box, which plugs in with one HDMI cable. Installation on site is roughly two minutes. The night-mode classifier is software on top of the detection layer the box was going to run anyway, so adding night coverage to a property that already has cameras costs zero camera replacements.
The unit we ship handles up to 25 camera tiles in parallel, runs entirely on-device (zero footage leaves the building, alerts do not depend on the cellular uplink), and is $450 one-time plus $200 per month from month two. That includes the night-mode classifier. There is no separate night SKU.
Want night-mode alerts on the cameras you already have?
15-minute call. We walk through the night classifier on a recording from your own building, show what would have paged at 03:00 versus what would have been silenced, and what install looks like (one HDMI cable, two minutes on site).
Frequently asked questions
How do AI cameras alert at night without flooding the operator with false positives?
By treating night as a different operating mode, not as the day pipeline with the IR illuminator turned on. A working night-mode classifier reweights three inputs at sunset. (1) The time window itself becomes a primary HIGH-threat trigger for after-hours zones, because at 03:00 a person at the rear gate is not plausibly a delivery driver or a tenant returning from work. (2) Posture inference is downweighted, because IR mode collapses the scene to grayscale, kills clothing pattern, and pushes the posture model below the confidence floor for most frames. (3) The environmental-motion silencer flips its noise model from day sources (foliage, shadow shifts, sun glare) to night sources (insects on the IR lens, headlight glare from passing vehicles, IR hot-spot reflection on glass, and the half-second of color-IR transition when the IR-cut filter switches). The result on a typical multifamily property is about 30 to 60 raw detections per camera per night collapsing to 1 to 3 paged events.
Why does posture inference fail under IR illumination?
Most posture classifiers were trained on color RGB crops. Under IR, the sensor switches to a monochrome path and the IR illuminator floods the scene with near-infrared light at roughly 850 nanometers. Three things degrade at once. Clothing patterns vanish, because dyes that look distinct in visible light reflect near-IR similarly. Skin tone disappears, so head and hand boundaries get fuzzier. And the IR illuminator itself produces a hot spot in the center of the frame and a fall-off at the edges, which warps the texture distribution the model trained on. The posture model still emits a class, but its confidence on poses like 'leaning' versus 'walking' versus 'hands-on-door' drops from a comfortable 0.85 day-time average to under 0.6 in IR. Below that floor, the right behavior is to cut the posture weight in half and let zone, time, dwell, and badge state carry more of the decision.
What environmental noise sources are specific to night that the day pipeline does not see?
Four matter in practice. (1) Insects landing on or near the IR lens. The IR illuminator attracts moths and mosquitoes, which then fly into the field of view and trigger a person-or-vehicle classifier briefly before the bounding box dissolves. Day cameras do not see this because no IR illuminator is running. (2) Headlight glare from passing vehicles. A car on the perimeter road at midnight produces a moving column of high-luminance pixels that crosses several zones in 1 to 2 seconds. The track stitcher needs to recognize this as a vehicle and not as a fast-moving person. (3) IR hot-spot reflection. When the illuminator is mounted close to a window, a glass door, or a metal mailbox, the reflection produces a bright stationary blob that the detector occasionally promotes to a person at low confidence. (4) The IR-cut filter switching itself. At dusk and dawn, the camera slides a mechanical filter into place to switch between color and monochrome modes, and during that 200 to 800 millisecond transition the whole frame goes through an exposure swing that produces a wave of motion that no real object generated.
Does the time window alone qualify as a HIGH threat trigger at night?
For after-hours-restricted zones, yes. The intent classifier's day rule for a person at the rear gate is 'unknown badge plus dwell over 60 seconds equals HIGH'. The night rule for the same zone is 'unknown badge plus any dwell equals HIGH', because at 03:00 there is no benign reason for an unknown person to be at the rear gate at all. For zones that remain semi-public at night (lobby, parking) the time window does not auto-trigger but it does lower the dwell threshold, typically by 30 to 50 percent. The classifier still requires class plus zone plus dwell to fire, but the dwell bar is lower so the alert lands faster. The reason this works is that the operator is asleep and false-HIGH at 03:00 costs them about ten times more than a false-HIGH at noon. Tighter time-based gating recovers signal at a lower volume.
What does the asymmetric cost of false-HIGH at night mean for tuning?
A false-HIGH at noon interrupts a meeting. A false-HIGH at 03:00 wakes the on-call manager, costs them 20 to 40 minutes of sleep latency on the recovery, and trains them to silence the channel by week three. The cost of a missed real event is the same at any hour, but the cost of a false event is roughly an order of magnitude higher overnight. Tuning has to reflect that. Practical numbers we hold to on production sites: target under 0.3 false-HIGH per camera per night, versus under 1.0 per camera per week in daylight. The two ways we get there are the time-window reweighting above and a stricter posture override (a 'hands-on-door at 02:00' alert requires confirming dwell over 8 seconds before paging, even though the same posture during the day pages immediately).
How does the IR-cut filter transition produce false motion, and how is it suppressed?
The IR-cut filter is a small mechanical filter that physically slides in front of the sensor. During the day it blocks IR; at night it slides out so the sensor can see IR. The transition is triggered by an ambient light sensor or a luminance threshold on the frame itself, typically firing once at sunset and once at sunrise per camera, with occasional thrash on cloudy dawns. While the filter is moving, the sensor's gain auto-adjusts and the entire frame goes through a 200 to 800 millisecond exposure swing. During that swing, every part of the frame appears to be moving, and a naive motion-difference engine fires on the whole composite. A working classifier suppresses this in two ways. First, the edge unit detects the bimodal luminance histogram shift that signals a filter swap and gates classification for one second after the event. Second, the per-frame detector's confidence floor is raised to 0.85 during that gate, so any survivor classification has to be unusually clean. The recorder's own motion engine has neither of these and will fire a wave of false events at sunset every single day.
What happens to color-based features (red shirt, white truck) at night?
They are gone. The IR sensor path is monochrome. A search query like 'person in red shirt at the rear gate' resolves zero hits at night because the camera literally did not record color. What survives at night and is still searchable: shape, posture, gait, vehicle silhouette, license plate (if there is a dedicated LPR camera in the cluster), badge state from access control, and zone-and-time. A night-aware search index expands the query at runtime, so 'person in red shirt at 02:30' becomes 'person at 02:30, color filter dropped' and returns the matching detection without crashing on the missing color feature. This matters when an operator is reviewing footage at 09:00 the next morning trying to find a specific person and types in a daylight description.
Does the alert latency change at night versus day?
On the inference side, no. The detection-to-classification path runs at the same composite frame rate whether the camera is in color or IR mode, because the model accepts whatever the sensor emits. On the messaging side, the page itself is the same SMS plus voice call to on-call. Where night does change things is the operator's ack latency. Day acks land in 10 to 30 seconds. Night acks land in 60 to 180 seconds because the manager has to wake up, find the phone, and pull the clip. We compensate for that by attaching the 10-second triggering clip to the SMS itself, so the manager can decide whether to escalate before fully sitting up. The practical effect is that the night response starts a minute or two deeper in the incident timeline, which the attached clip partly buys back.
Can I run an AI night-alerting layer on my existing DVR or NVR cameras without replacing them?
Yes, and night is actually the easier case to retrofit. The reason: the recorder is already drawing every camera onto a multiview composite for the wall display, and it is already handling the IR-cut filter and the IR illuminator at the camera level. An edge unit reads the recorder's HDMI output, splits it into per-tile crops, runs detection on each tile, and applies the night-mode reweighting in software. None of the cameras change, none of the IR illuminators change, and the recorder keeps its retention schedule. The edge unit handles up to 25 feeds, runs entirely on-device (no cloud round trip for alerts), installs in roughly two minutes (one HDMI cable from the recorder out), and adds the night-mode classifier on top of the detection layer the box was going to run anyway. The production unit we ship is $450 one-time and $200 per month from month two.
What are the practical false-alarm rates I should expect on a typical multifamily property at night?
Numbers we see on production deployments at Class B and C multifamily, 8 to 15 cameras per property, IR illumination at every exterior camera. Raw detections from the per-frame detector: roughly 30 to 60 per camera per night, dominated by insects and headlight glare. After the night-mode classifier (zone, time, dwell, badge, downweighted posture, environmental silencer): 1 to 3 paged events per camera per night, of which about 1 in 10 is a real intrusion or trespass. The remaining 9 in 10 are LOW THREAT events that go to the digest. False-HIGH rate: under 0.3 per camera per night. False-LOW rate (real event missed at HIGH): under 1 per camera per quarter, audited weekly by reviewing the digest for events that should have paged. At a 180-unit Class C property in Fort Worth, this configuration caught 20 incidents including a break-in attempt in the first 30 days; the customer renewed at the end of month one.
Adjacent reading on the same stack
AI security camera intent alerts: the six-input decision
The day pipeline this page extends. Walks through the six inputs (class, zone, time, dwell, badge, posture) the intent classifier reads on every detection and how it routes to LOW vs HIGH.
Multifamily overnight security coverage gap
Operations side: why Class B and C properties cannot justify a $3,000 per month guard, and what the layered overnight stack looks like (cameras plus AI plus access control plus a digest).
CCTV real-time intrusion alerts: which frame the alert fires on
Sister page on the day side: the four-stage intrusion sequence (approach, dwell, breach, exit) and which stages still have an action window when the alert lands.