Session – Artificial Intelligence and Data

Security Vulnerabilities of Deep Neural Network Execution 

Speaker: Dr. Yunsi Fei (费运思)

Professor

Department of Electrical and Computer Engineering

Northeastern University

Boston, MA 02115

Biography:

Dr. Yunsi Fei is a Professor of Electrical and Computer Engineering at Northeastern University, Boston, where she directs the Northeastern University Energy-efficient and Secure System (NUEESS) laboratory. She received her BS and MS degrees in Electronic Engineering from Tsinghua University, China, in 1997 and 1999, respectively, and her PhD degree in Electrical Engineering from Princeton University in 2004. Her recent research focuses on hardware-oriented security and trust, side-channel attack analysis and countermeasures, and secure computer architecture and heterogeneous systems. She was a recipient of the National Science Foundation CAREER Award. She has served on the technical program committees of many conferences in hardware security, computer architecture, and EDA, including CHES, HOST, ISCA, HPCA, DAC, ICCAD, and ISLPED, and was a general co-chair of CHES (International Conference on Cryptographic Hardware and Embedded Systems) 2019. She is currently the site director for an NSF Industry-University Cooperative Research Center, the Center for Hardware and Embedded Systems Security and Trust (CHEST), where she actively engages with industry partners to address security needs arising in their products and applications.

Abstract:

Security of deep neural network (DNN) inference engines, i.e., trained DNN models deployed on various platforms, has become one of the biggest challenges in applying artificial intelligence to domains where privacy, safety, and reliability are of paramount importance. In addition to classic software attacks such as model inversion and evasion attacks, a new attack surface has recently emerged: implementation attacks, comprising both passive side-channel attacks and active fault-injection attacks, which exploit implementation peculiarities of DNNs to breach their confidentiality and integrity. This talk presents several novel passive attacks that reverse-engineer valuable DNN models, as well as an active attack that induces image misclassification. These attacks are the first of their kind and reveal a largely under-explored attack surface of DNN inference engines. Insights gained during attack exploration will provide valuable guidance for effectively protecting DNN execution against IP stealing and integrity violations.