Colloquia - Jinyuan Jia, Machine Learning Meets Security and Privacy: Opportunities and Challenges, Virtual, 4:25 - 5:25 pm

Monday, January 24, 2022 - 4:25pm to 5:25pm

Machine Learning Meets Security and Privacy: Opportunities and Challenges


Machine learning provides powerful tools for data analytics, while security and privacy attacks increasingly involve data; machine learning therefore naturally intersects with security and privacy. In the first part of this talk, I will discuss how machine learning impacts security and privacy, in particular how (adversarial) machine learning techniques can be leveraged to protect user privacy. Machine learning-based inference attacks pose severe security and privacy concerns on the Internet. For example, in the 2018 Facebook data privacy scandal, the firm Cambridge Analytica used inference attacks to infer millions of Facebook users’ sensitive personal attributes. We built an adversarial-examples-based defense against such inference attacks. Adversarial examples are usually viewed as harmful techniques that compromise the integrity of machine learning systems; our work is the first to show that they can instead serve as defensive techniques for privacy protection.

In the second part of this talk, I will focus on how security and privacy impact the deployment of machine learning. In particular, I will discuss the security vulnerabilities of self-supervised learning, which is widely believed to be a promising approach toward general-purpose AI. Self-supervised learning pre-trains an encoder on a large amount of unlabeled data; the pre-trained encoder acts like an “operating system” of the AI ecosystem. We showed that an attacker can inject backdoors into a pre-trained encoder such that downstream classifiers built on the backdoored encoder for different downstream tasks simultaneously inherit the backdoor behavior. These attacks show that an insecure pre-trained encoder is a single point of failure for the AI ecosystem. Finally, I will briefly discuss my other projects and future research directions.


Jinyuan Jia is a Ph.D. candidate in the Department of Electrical and Computer Engineering at Duke University, advised by Prof. Neil Gong. He received an M.Eng. in Computer Engineering from Iowa State University in 2019 and a B.S. in Electrical Engineering from the University of Science and Technology of China in 2016. His research spans security, privacy, and machine learning, with a recent focus on their intersection. He has published in security venues such as IEEE S&P, USENIX Security, CCS, and NDSS, as well as machine learning and data mining venues such as ICLR, WWW, and KDD. His awards include the 2020 DeepMind Best Extended Abstract Award, a 2019 NDSS Distinguished Paper Award Honorable Mention, an IBM Fellowship, and selection as a NortonLifeLock Graduate Fellowship Finalist. His work has also been featured in popular media such as WIRED.