This model detects spoof attempts by analyzing fine details of the input image. By spotting artifacts such as glare or the visible frame of a phone (for example, when a person holds up a phone displaying an image of someone else), the model can validate or invalidate a verification attempt.
At a threshold of 1.0, the model blocks 99.9% of all spoof attempts.
This model identifies whether someone is attempting a spoof (impersonating another individual to gain unauthorized access). It accepts an image of any size as input and outputs the likelihood that the image represents a spoof attempt, expressed as a percentage.
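The model's percentage output is typically compared against a configurable threshold to accept or reject a verification attempt. The sketch below is a minimal, hypothetical illustration of that decision logic; the function name and interface are assumptions, not part of the model's actual API, and the 1.0 default reflects the threshold cited above.

```python
def is_verification_valid(spoof_score: float, threshold: float = 1.0) -> bool:
    """Decide whether a verification attempt passes the spoof check.

    spoof_score: model output, the spoof likelihood as a percentage in [0, 100].
    threshold:   configurable cutoff; at 1.0 the model is reported to block
                 99.9% of spoof attempts. This helper is illustrative only.
    """
    # Scores at or above the threshold invalidate the attempt.
    return spoof_score < threshold


print(is_verification_valid(0.4))   # low spoof score: attempt accepted
print(is_verification_valid(37.5))  # high spoof score: attempt blocked
```

Raising the threshold admits more borderline images (fewer false rejections of genuine users) at the cost of letting more spoofs through; lowering it does the reverse.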
This model uses a Deep Neural Network (DNN) trained end-to-end on large-scale datasets covering a multitude of spoofing attack types, including cutouts, printed photos, faces displayed on phone or laptop screens, and masks.
The model was tested on large in-house datasets, achieving near-perfect scores on all spoof attack types it was trained on.