This model detects spoof attempts by analyzing fine-grained details of the image. By spotting cues like glare or the visible frame of a phone (for example, when the person is holding up a phone displaying someone else's image), it can validate or invalidate a verification attempt.
At a decision threshold of 1.0, the model blocks 99.9% of spoof attempts.
This model identifies attempts to spoof a face verification system (i.e., someone impersonating another individual to gain unauthorized access). It accepts an image of any size as input and outputs the predicted likelihood, expressed as a percentage, that the image is a spoof attempt.
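As a sketch of how a caller might act on this output, the snippet below compares the model's spoof score against a configurable threshold. The function name, the score convention, and the threshold values are illustrative assumptions, not the actual Modzy API:

```python
def is_spoof(spoof_score: float, threshold: float) -> bool:
    """Flag a verification attempt as a spoof when the model's
    predicted spoof score meets or exceeds the chosen threshold.

    Note: `spoof_score` here is assumed to be the model's output
    normalized to [0, 1]; the real output format may differ.
    """
    return spoof_score >= threshold


# A lower threshold rejects more attempts (fewer missed spoofs,
# more false alarms); a higher threshold is more permissive.
print(is_spoof(0.97, threshold=0.5))  # high-confidence spoof -> blocked
print(is_spoof(0.12, threshold=0.5))  # low score -> allowed
```

Choosing the threshold is a trade-off between the false-accept and false-reject rates, so it should be tuned against the security requirements of the deployment.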
This model uses a Deep Neural Network (DNN) trained end to end on large-scale datasets covering a multitude of spoofing attack types, such as cutouts, printed photos, a face displayed on a phone or laptop screen, and masks.
The model was tested on large in-house datasets, achieving near-perfect scores on all spoof attack types it was trained on.