2021 Convention Sijia Liu


Session Speaker: Trusted AI

Dissecting Adversarial Robustness of Deep Neural Networks: A Machine Learning and Optimization Perspective

Dr. Sijia Liu (刘思佳)

Assistant Professor

Department of Computer Science and Engineering

Michigan State University




Sijia Liu is currently an Assistant Professor in the Department of Computer Science and Engineering at Michigan State University. He received his Ph.D. (with the All-University Doctoral Prize) in Electrical and Computer Engineering from Syracuse University, NY, USA, in 2016. He was a Postdoctoral Research Fellow at the University of Michigan, Ann Arbor, from 2016 to 2017, and a Research Staff Member at the MIT-IBM Watson AI Lab from 2018 to 2020. His research spans machine learning, optimization, computer vision, signal processing, and computational biology, with a focus on developing learning algorithms and theory for scalable and trustworthy artificial intelligence (AI). He received the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). His work has been published at top-tier AI conferences such as NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AISTATS, and AAAI.


Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks, making the study of DNN robustness one of the most active research topics on the path toward trustworthy AI. In this talk, I will first present a practical threat model, the black-box adversarial attack, in which the adversary can access a victim DNN only through input-output function queries. I will illustrate how the problem of generating black-box adversarial attacks can be addressed with scalable and theoretically grounded zeroth-order optimization techniques. I will then explore a connection between adversarial robustness and network interpretability, presenting novel insights into when and how interpretability helps adversarial exploration and robustness. Building on these insights, I will introduce an interpretability-aware robust training method that outperforms state-of-the-art adversarial training methods, particularly against attacks with large perturbations. Lastly, I will present several future research directions that pose grand challenges and opportunities for the adversarial learning community.
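To give a flavor of the query-based setting the abstract describes: zeroth-order optimization replaces the true gradient (unavailable in a black-box attack) with an estimate built purely from function evaluations. The sketch below is a generic two-point random gradient estimator, not the speaker's specific algorithm; the function names, parameters, and the toy quadratic objective are all illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, num_queries=20, mu=1e-4, seed=None):
    """Estimate the gradient of f at x using only function queries.

    Implements the standard two-point random-direction estimator:
        g ~ (d / q) * sum_i [(f(x + mu*u_i) - f(x)) / mu] * u_i,
    where u_i are random unit directions, d is the dimension, and q
    is the query budget. Illustrative sketch, not the talk's method.
    """
    rng = np.random.default_rng(seed)
    d = x.size
    fx = f(x)                      # one baseline query
    g = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)     # random unit direction
        g += (f(x + mu * u) - fx) / mu * u
    return (d / num_queries) * g

# Toy check: f(x) = ||x||^2 has true gradient 2x.
x = np.array([1.0, -2.0, 0.5])
g_est = zo_gradient(lambda v: float(v @ v), x, num_queries=2000, seed=0)
```

In a black-box attack, `f` would be an adversarial loss computed from the victim model's query responses, and the estimated gradient would drive an iterative perturbation update; the query budget `num_queries` directly controls the cost-accuracy trade-off that scalable zeroth-order methods aim to improve.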

2021 CIE/USA GNYC Annual Convention