Under Review – NeurIPS 2020: An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
Abstract: Machine learning models have been successfully applied to a wide range of applications, including computer vision, natural language processing, and speech recognition. A successful implementation of these models, however, usually relies on deep neural …
CVPR 2020: Robust Design of Deep Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
Abstract: Deep neural networks (DNNs) are vulnerable to subtle adversarial perturbations applied to the input. These adversarial perturbations, though imperceptible, can easily mislead the DNN. In this work, we take a control-theoretic approach to …
Abstract: Significant work is being done to develop the mathematics and tools necessary to build provable defenses, or at least bounds, against adversarial attacks on neural networks. In this work, we argue that tools from …
Modzy Labs Innovations
Experience the innovations that started in the Lab, many of which become unique features in the Modzy Enterprise AI Platform. These innovations help accelerate value creation for our customers and advance the state of artificial intelligence for everyone. Schedule a Demo to see these features in action!