Public Tech Talk: “Refutation-based Adversarial Robustness Verification of Deep Neural Networks” by Dr. Zhen Zhang
March 26 @ 1:00 pm - 2:00 pm
Advances in deep neural networks (DNNs) have increased their deployment in safety-critical systems, such as vision perception modules for autonomous vehicles and airborne collision avoidance controllers for unmanned aircraft. Modern DNNs and linear classifiers are susceptible to adversarial input perturbations: small changes to an input that cause unexpected changes in the output. It is therefore critical to verify their adversarial robustness properties to avoid undesirable and even catastrophic behaviors. Proving the adversarial robustness property is extremely challenging due to the high dimensionality of the input space and the size and complexity of modern neural networks.

In this talk, I will present a highly versatile refutation-based adversarial robustness verification framework that performs abstraction and partitioning of discretized hyperrectangular regions in the input space of classification networks. A novel technique is presented for reliable adversarial example generation with improved variance. Coupled with a gradient-based dimension ranking heuristic, it enables the framework to prioritize refuting the adversarial robustness property, efficiently eliminating unsafe regions where adversarial examples are discovered and providing incremental feedback for retraining and analysis. This work evaluates the efficiency of several dimension ranking heuristics for adversarial example generation, as well as the partitioning strategies of the proposed framework.

I will also briefly describe my other research activities, including an efficient probabilistic verification tool for the analysis of infinite-state synthetic biological models and formal verification of Network-on-Chip systems.
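To make the refutation idea concrete, here is a minimal, hypothetical sketch — not the speaker's actual framework — of the flavor of analysis described above, using a toy linear classifier (for which the worst-case corner of a hyperrectangle is computable exactly): rank input dimensions by gradient magnitude, check the worst-case corner of a hyperrectangular input region, and either refute robustness with a concrete adversarial example or certify the region, bisecting along the top-ranked dimension to partition further. All function names and the model are illustrative assumptions.

```python
# Illustrative sketch only: a toy linear classifier f(x) = w.x + b,
# with a hyperrectangle given as a list of (lo, hi) intervals.

def margin(w, b, x):
    """Signed classification margin: positive means correctly classified."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def worst_corner(w, box):
    """Corner of the hyperrectangle minimizing the margin (exact for linear f)."""
    return [lo if wi > 0 else hi for wi, (lo, hi) in zip(w, box)]

def rank_dimensions(w):
    """Gradient-based dimension ranking: for a linear model the gradient is w."""
    return sorted(range(len(w)), key=lambda i: abs(w[i]), reverse=True)

def check_box(w, b, box):
    """Refute robustness with a counterexample, or certify the region."""
    x_adv = worst_corner(w, box)
    if margin(w, b, x_adv) <= 0:
        return ("unsafe", x_adv)   # refutation: concrete adversarial example
    return ("safe", None)          # even the worst case is classified correctly

def bisect(box, dim):
    """Partition a hyperrectangle in half along one dimension."""
    lo, hi = box[dim]
    mid = (lo + hi) / 2
    left, right = list(box), list(box)
    left[dim], right[dim] = (lo, mid), (mid, hi)
    return left, right
```

For example, with `w = [2.0, -1.0]`, `b = 0.5`, and the unit box `[(0, 1), (0, 1)]`, `check_box` finds the worst corner `[0, 1]` with margin -0.5 and refutes robustness, while a smaller region such as `[(0.6, 1.0), (0.0, 0.5)]` is certified safe; `rank_dimensions` would direct further splits to dimension 0 first. A real verifier replaces the exact corner computation with abstraction over nonlinear networks, which is where the hard work lies.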
Imported from: http://calagator.org/events/1250477635