Oddity watches your existing cameras and flags likely physical aggression in real time, without recording voices or generating new content.
Oddity uses computer vision based on Convolutional Neural Networks (CNNs)—a deep-learning approach that learns patterns of physical aggression from large, labeled video datasets. The model analyzes live video from your existing cameras and outputs a confidence score for violence. It is not generative AI; it only observes what’s on camera and scores the likelihood of physical aggression.
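To make the scoring concrete, here is a minimal sketch of the idea: a small convolutional network that maps a short clip of frames to a single confidence score in [0, 1]. The architecture, the class name ClipScorer, the layer sizes, and the input resolution are all illustrative assumptions, not Oddity's production model.

```python
# Minimal sketch (NOT Oddity's actual architecture): a tiny 3D CNN that
# maps a short clip of RGB frames to an aggression confidence in [0, 1].
import torch
import torch.nn as nn

class ClipScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pool over time and space
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels=3, frames, height, width)
        h = self.features(x).flatten(1)                # (batch, 32)
        return torch.sigmoid(self.head(h)).squeeze(1)  # confidence in [0, 1]

model = ClipScorer().eval()
with torch.no_grad():
    clip = torch.rand(1, 3, 8, 112, 112)  # stand-in for 8 camera frames
    print(f"aggression confidence: {model(clip).item():.2f}")
```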
Live streams come in from your cameras.
The CNN evaluates motion patterns frame by frame and produces a confidence score.
A site-tuned threshold converts that score into an "alert/no alert" decision (sketched below).
On detection, an alert with a clip and camera name is delivered to mobile and/or VMS.
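The threshold step can be sketched as follows; the 0.85 threshold, the smoothing window, and the Alert shape are illustrative, not Oddity's site-tuned configuration. Averaging scores over a few frames keeps a single noisy score from triggering an alert.

```python
# Minimal sketch of the alert/no-alert decision; values are hypothetical.
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    camera: str
    score: float

def make_decider(threshold: float = 0.85, window: int = 3):
    """Average per-frame scores over a short window before deciding,
    so one noisy frame does not trigger an alert."""
    recent = deque(maxlen=window)

    def decide(camera: str, score: float) -> Optional[Alert]:
        recent.append(score)
        smoothed = sum(recent) / len(recent)
        return Alert(camera, smoothed) if smoothed >= threshold else None

    return decide

decide = make_decider(threshold=0.85, window=3)
for score in (0.2, 0.9, 0.95, 0.97):  # per-frame confidences from the CNN
    alert = decide("entrance-cam-1", score)
    if alert:
        print(f"ALERT on {alert.camera}: confidence {alert.score:.2f}")
```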
The model is trained on a mix of public, reenacted/self-recorded, and licensed footage, and the dataset is extended iteratively with new edge cases.
Oddity flags forceful aggressive movement with (intended) physical contact between two or more persons; the focus is on physical behavior, not speech.
The target is a true positive rate of at least 80% and roughly 0.3 false alerts per stream per day, tuned so notifications stay timely and actionable.
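To show how these two figures relate to an evaluation run, here is a small worked example; every count below is made up for illustration.

```python
# Hypothetical evaluation numbers; only the formulas carry over.

# True positive rate: fraction of labeled violent clips that trigger an alert.
violent_clips_total = 200      # evaluation clips containing violence
violent_clips_alerted = 168    # of those, how many produced an alert
tpr = violent_clips_alerted / violent_clips_total
print(f"TPR: {tpr:.0%}")       # 84% -> meets the >= 80% target

# False-alert rate: alerts on non-violent footage, per stream per day.
false_alerts = 9               # alerts with no violence in the footage
streams = 10                   # cameras monitored
days = 3                       # length of the evaluation window
print(f"False alerts per stream/day: {false_alerts / (streams * days):.2f}")  # 0.30
```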
We continuously run a statistical bias check during training and measure fairness with equalized odds, which requires true positive and false positive rates to stay consistent across protected groups. Each production model ships with an automated bias attestation.
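As an illustration of an equalized-odds check (not Oddity's actual pipeline), the sketch below compares per-group true positive and false positive rates and flags any gap above a tolerance; the group names, counts, and tolerance are assumptions.

```python
# Hypothetical equalized-odds check: TPR and FPR must be close across groups.
from typing import Dict, Tuple

# Per-group confusion counts: (true pos, false neg, false pos, true neg)
counts: Dict[str, Tuple[int, int, int, int]] = {
    "group_a": (84, 16, 30, 970),
    "group_b": (80, 20, 33, 967),
}

def rates(tp: int, fn: int, fp: int, tn: int) -> Tuple[float, float]:
    return tp / (tp + fn), fp / (fp + tn)  # (TPR, FPR)

tprs = {g: rates(*c)[0] for g, c in counts.items()}
fprs = {g: rates(*c)[1] for g, c in counts.items()}

TOLERANCE = 0.05  # maximum allowed gap between any two groups
tpr_gap = max(tprs.values()) - min(tprs.values())
fpr_gap = max(fprs.values()) - min(fprs.values())
passed = tpr_gap <= TOLERANCE and fpr_gap <= TOLERANCE
print(f"TPR gap {tpr_gap:.3f}, FPR gap {fpr_gap:.3f} -> {'pass' if passed else 'fail'}")
```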