Evolving Perspectives on Adversarial Robustness for Deep Neural Networks

9 March 2022
Presented by Ben Zhao (University of Chicago)


Abstract

Despite their tangible impact on a wide range of real-world applications, deep neural networks are known to be vulnerable to numerous attacks, including inference-time attacks based on adversarial perturbations, as well as training-time attacks such as backdoors. The security community has done extensive work in recent years to explore both attacks and defenses. In this talk, I will first discuss some of our projects at UChicago SAND Lab covering both sides of the struggle between attacks and defenses, including recent work on honeypot defenses (CCS 2020) and physical domain poison attacks (CVPR 2021).
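
(The talk itself does not present code; for readers unfamiliar with adversarial perturbations, the sketch below illustrates the classic fast gradient sign method of Goodfellow et al., one of the simplest inference-time attacks. It assumes a hypothetical PyTorch classifier `model` and inputs normalized to [0, 1]; it is a generic illustration, not a method from the talk.)

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        # One-step adversarial perturbation: nudge each pixel by
        # +/- epsilon in the direction that increases the loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        # Keep the perturbed image a valid input.
        return x_adv.clamp(0, 1).detach()

Even with a small epsilon, such perturbations are often imperceptible to humans yet reliably flip the model's prediction, which is what motivates the defenses discussed in the talk.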

Unfortunately, our experience in these projects has only reaffirmed the inevitable cat-and-mouse nature of attacks and defenses. Looking forward, I believe we must go beyond the current focus on attacking and defending single static DNN models, and bring more pragmatic perspectives to improving robustness for deployed ML systems. To this end, I will present some of our early work on digital forensics for DNNs, and outline some future challenges in this new space.


See video on YouTube