Colloquia - Adnan Siraj Rakin, Exploring the Vulnerabilities of Modern Deep Learning Algorithms and Systems, Virtual, 4:25 - 5:25 pm

Tuesday, December 7, 2021 - 4:25 pm to 5:25 pm
Event Type: Colloquia

AI Security: Exploring the Vulnerabilities of Modern Deep Learning Algorithms and Systems

Abstract:

In recent years, Artificial Intelligence (AI) has been deployed in real-world applications because of its superior performance on various cognitive tasks. This widespread deployment of AI has raised several security issues in critical applications. A recently developed threat model, the adversarial attack, poses a potent threat: it can hijack the functionality of a deployed AI inference model by manipulating its inputs and network parameters in sensitive applications such as autonomous vehicles, robotics, and health care. These attacks can cause detrimental social, physical, and economic impacts. As a result, the study and analysis of attack threats and their corresponding defenses have become a challenging and timely mission for both industry and academia. This talk will shed light on the emerging security challenges in AI, particularly for deep learning algorithms and systems. It will cover state-of-the-art adversarial examples, weight perturbation attacks, Trojan attack algorithms, and potential defensive solutions. In addition, it will cover the hardware vulnerabilities of computing platforms (e.g., FPGAs) and the system-level implications of these novel attack frameworks.

Bio:

Adnan Siraj Rakin is a Ph.D. candidate in Computer Engineering at Arizona State University (ASU), advised by Dr. Deliang Fan. He received his B.Sc. degree in Electrical and Electronic Engineering (EEE) from the Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2016, and his Master's degree in Computer Engineering from ASU in 2021. His research focuses on the secure deployment of deep learning frameworks, covering attacks and defenses for adversarial input examples, adversarial weight attacks, the privacy of deep learning, computer vision, and efficient machine learning algorithms. He has also worked on exposing hardware- and system-level vulnerabilities in practical AI applications. He has authored or co-authored over 20 publications in top-tier IEEE/ACM journals and conferences (e.g., CVPR, ICCV, T-PAMI, USENIX Security) on the broad topic of AI security.