Solution

How the AI Works

Oddity watches your existing cameras and flags likely physical aggression in real time, without recording voices or generating new content.

Oddity uses computer vision based on Convolutional Neural Networks (CNNs)—a deep-learning approach that learns patterns of physical aggression from large, labeled video datasets. The model analyzes live video from your existing cameras and outputs a confidence score for violence. It is not generative AI; it only observes what’s on camera and scores the likelihood of physical aggression.
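
For readers who want to picture the scoring step, below is a minimal sketch, assuming a small 3D CNN built with PyTorch. The architecture, input shape, and random weights are illustrative stand-ins, not Oddity's production model.

```python
# A toy violence scorer, NOT Oddity's model: layers, sizes, and weights
# are assumptions chosen only to illustrate "frames in, confidence out".
import torch
import torch.nn as nn

class ViolenceScorer(nn.Module):
    """Maps a short clip of frames to a single confidence score."""
    def __init__(self):
        super().__init__()
        # 3D convolutions capture motion across time as well as space.
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit: violence vs. not

    def forward(self, clip):  # clip shape: (batch, 3, frames, height, width)
        x = self.features(clip).flatten(1)
        return torch.sigmoid(self.head(x))  # confidence in [0, 1]

scorer = ViolenceScorer().eval()
clip = torch.rand(1, 3, 16, 112, 112)  # 16 RGB frames at 112x112 px
with torch.no_grad():
    score = scorer(clip).item()        # e.g. ~0.5 with untrained weights
```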

The pipeline


1. Ingest

Live streams come in from your existing cameras.


2. Analyze

The CNN evaluates motion patterns frame by frame and produces a confidence score.


3. Detect

A site-tuned threshold converts that score into an "alert/no alert" decision.


4. Alert

On detection, an alert with a short clip and the camera name is delivered to mobile and/or your VMS (see the sketch after these steps).
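
Taken together, the four steps form a simple loop. The sketch below shows one way that loop could look in Python with OpenCV; the stream URL, window size, threshold, and the score_clip and send_alert helpers are placeholder assumptions, not Oddity's actual API.

```python
# Illustrative pipeline loop only; every name below is a stand-in.
import collections
import cv2  # pip install opencv-python

STREAM_URL = "rtsp://camera-01.example/stream"  # hypothetical camera feed
THRESHOLD = 0.85  # site-tuned: raise it to trade recall for fewer false alerts
WINDOW = 16       # frames scored together, matching the model's input

def score_clip(frames) -> float:
    """Stand-in for the CNN scorer; a real deployment calls the model here."""
    return 0.0  # placeholder confidence

def send_alert(camera: str, score: float, frames) -> None:
    """Stand-in for delivery to mobile and/or the VMS, clip attached."""
    print(f"ALERT {camera}: confidence {score:.2f}, clip of {len(frames)} frames")

capture = cv2.VideoCapture(STREAM_URL)              # 1. Ingest
buffer = collections.deque(maxlen=WINDOW)
while True:
    ok, frame = capture.read()
    if not ok:
        break                                       # stream ended or dropped
    buffer.append(frame)
    if len(buffer) == WINDOW:
        score = score_clip(list(buffer))            # 2. Analyze
        if score >= THRESHOLD:                      # 3. Detect
            send_alert("camera-01", score, list(buffer))  # 4. Alert
capture.release()
```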

What the model learns and how we test it


Training data

A mix of public, reenacted or self-recorded, and licensed footage; the dataset is improved iteratively with new edge cases.


Definition of violence

Forceful, aggressive movement with intended or actual physical contact between two or more persons; the focus is on physical behavior, not speech.


Production targets

A true-positive rate of at least 80% and roughly 0.3 false alerts per stream per day, tuned for timely, actionable notifications.
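
To make these two numbers concrete, here is the arithmetic behind them, using invented counts from a hypothetical three-day, ten-camera evaluation:

```python
# Hypothetical evaluation counts, invented purely for illustration.
true_positives = 42    # violent incidents the system flagged
false_negatives = 8    # violent incidents it missed
false_alerts = 9       # non-violent moments flagged across all streams
streams, days = 10, 3  # size of the evaluation window

tpr = true_positives / (true_positives + false_negatives)
false_alert_rate = false_alerts / (streams * days)

print(f"True positive rate: {tpr:.0%}")                        # 84% -> meets >=80%
print(f"False alerts per stream/day: {false_alert_rate:.2f}")  # 0.30 -> on target
```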


Fairness & bias checks

We continuously run statistical bias checks during training and measure fairness with equalized odds, so that both true-positive and false-positive rates stay consistent across protected groups. Each production model ships with an automatic no-bias attestation. Learn more about our bias checks here.
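
As a sketch only: an equalized-odds check boils down to comparing true- and false-positive rates per group and bounding the gaps. The group labels, counts, and 0.05 tolerance below are invented for illustration; they are not our data or our attestation pipeline.

```python
# Illustrative equalized-odds check; all numbers are made up.
def rates(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """True-positive and false-positive rate for one group."""
    return tp / (tp + fn), fp / (fp + tn)

# Per-group confusion counts (tp, fn, fp, tn) -- hypothetical
groups = {
    "group_a": (41, 9, 14, 936),
    "group_b": (40, 10, 15, 935),
}

tprs = {g: rates(*c)[0] for g, c in groups.items()}
fprs = {g: rates(*c)[1] for g, c in groups.items()}

# Equalized odds holds when both gaps are small across groups.
tpr_gap = max(tprs.values()) - min(tprs.values())
fpr_gap = max(fprs.values()) - min(fprs.values())
print(f"TPR gap: {tpr_gap:.3f}, FPR gap: {fpr_gap:.3f}")
assert tpr_gap < 0.05 and fpr_gap < 0.05, "fails the bias check"
```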
