Security and Privacy of AI-enabled IoT Eco-Systems
Goal
Scientific Challenge
In the first phase of the project, we will address the following challenges:
- Security/privacy of AI-enabled IoT gateways in the presence of side-channel attacks
We will first perform a detailed vulnerability analysis of the impact of side-channel attacks on lightweight inference machines (such as dynamic time warping or 1-D convolutional neural network models) deployed on IoT gateways, such as the Jetson Nano, or on-board systems in autonomous vehicles; a minimal sketch of such a model appears after this list. For instance, fingerprinting attacks can potentially leak information about on-device AI/ML models employed by sensitive applications. We will then explore cross-layer detection and mitigation schemes that fuse data across hardware, network, and application-layer contexts to construct a holistic view. This has potentially huge impact on many societal-scale applications, ranging from surveillance to intelligent transportation and smart health.
- Benchmarking communication, computation, power, and security/privacy tradeoffs of distributed learning models in the AIoT eco-system
We will build a testbed that enumerates the different scenarios for learning-based IoT eco-systems. The initial training of ML models, as well as their fine-tuning with dynamic data, may be partitioned across the IoT gateway, edge devices, and the server. ML training and inference tasks incur computation costs, while reporting data and receiving ML model updates incur communication costs. We will produce detailed benchmarking results with off-the-shelf components.
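As a concrete starting point, the sketch below shows a minimal 1-D CNN of the kind we have in mind for gateway-class devices, together with a crude latency and parameter-count probe of the sort the benchmarking testbed would record. It is a minimal sketch only: the `TinyCNN1D` architecture, input shape, and timing loop are illustrative assumptions, not the project's actual models or harness.

```python
# Minimal sketch: a lightweight 1-D CNN of the kind that might run on an
# IoT gateway, plus a crude computation-cost probe. The architecture,
# input shape, and timing loop are illustrative assumptions only.
import time
import torch
import torch.nn as nn

class TinyCNN1D(nn.Module):
    """Small 1-D CNN for sensor time-series classification (hypothetical)."""
    def __init__(self, in_channels=1, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

if __name__ == "__main__":
    model = TinyCNN1D().eval()
    x = torch.randn(1, 1, 256)  # one window of a 256-sample sensor trace
    with torch.no_grad():
        for _ in range(10):      # warm-up iterations
            model(x)
        t0 = time.perf_counter()
        for _ in range(100):
            model(x)
        dt = (time.perf_counter() - t0) / 100
    n_params = sum(p.numel() for p in model.parameters())
    print(f"mean inference latency: {dt * 1e3:.2f} ms, params: {n_params}")
```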
Faculty Participants
- Chen-Nee Chuah, Electrical and Computer Engineering (Lead PI)
- Houman Homayoun, Electrical and Computer Engineering (Co-PI)
- Zubair Shafiq, Computer Science (Co-PI)
Domain Expert Collaborators:
- Jason Adams, Pulmonary and Critical Care Medicine, UC Davis School of Medicine
- Sally Ozonoff, Psychiatry and Behavioral Sciences, UC Davis School of Medicine
Noyce Fellows
- Kartik Patwari, Electrical and Computer Engineering (PhD)
- Hari Venugopalan, Computer Science (PhD)
- Chongzhou Fang, Electrical and Computer Engineering (PhD)
Noyce Fellow Alumni
- Han Wang, Electrical and Computer Engineering (PhD, graduated 2022). Current position: Assistant Professor, Temple University
- Dr. Syed Mahbub Hafiz, Computer Science (Postdoc, 2021-23). Current position: Staff Security Research Engineer, LG Silicon Valley Lab, CA
- Brijesh Vora (MS, graduated 2023). Current position: Software Engineer, SailPoint
Progress Highlights
- We demonstrated a fingerprinting attack that identifies the architecture family of a running deep neural network (DNN) model on CPU-GPU edge devices through stealthy analysis of system-level side-channel information, such as memory, CPU, and GPU usage, with only user-level privileges (see the trace-collection sketch after this list). Such attacks require neither physical access nor sudo access to the victim's device. The leaked model architecture information can be further exploited to strengthen black-box adversarial attacks against a DNN-based application. We are currently exploring detection and mitigation mechanisms.
- Modern platforms like the Jetson Nano have made it feasible to run AI/ML models on resource-constrained IoT and edge devices to support new services. However, running deep neural network (DNN) models on edge devices introduces a new security risk: the potential leakage of inference results. For instance, a victim may want to protect the inference result of a DNN-based disease diagnostic application or the investment decision of an AI-driven financial application. We have demonstrated that cache-based side-channel attacks such as Flush+Reload can be launched on victim DNN models to observe the execution of their computation graphs and deduce the inference results.
- We have built a benchmarking testbed to study the performance-security tradeoffs of DNN model compression and optimization methods that aim to make models memory-efficient for IoT/edge devices (see the pruning sketch after this list). We characterize classification accuracy in the presence of attacks, inference time, memory footprint, and CPU/GPU usage.
- Training AI/ML models or running real-time inference often involves querying a database to retrieve sensitive user data. Private information retrieval (PIR), a privacy-preserving cryptographic tool, can hide which database item a client accesses, but does not protect aggregate queries. To bridge this gap, we have built a novel general-purpose information-theoretic PIR (IT-PIR) framework that permits a user to fetch an aggregated result while hiding all sensitive sections of the complex query from the hosting PIR server in a single round of interaction (a toy sketch of the underlying PIR primitive appears after this list).
- The project provides undergraduate research training through the two-quarter course "EEC174ABY Applied Machine Learning Senior Design Projects."
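The sketch below illustrates the kind of user-privilege trace collection underlying the fingerprinting attack highlighted above: it samples system-wide CPU and memory utilization from world-readable `/proc` interfaces, with no sudo or physical access. The sampling rate and trace length are illustrative assumptions, and the downstream classifier that maps traces to model architecture families is omitted.

```python
# Minimal sketch of a user-privilege side-channel monitor: samples
# system-wide CPU and memory utilization from /proc (world-readable on
# Linux, no sudo needed). A fingerprinting attack would log such traces
# while a victim DNN runs, then classify the trace; that part is omitted.
# Sampling rate and trace length are illustrative assumptions.
import time

def read_cpu_times():
    """Return (busy, total) jiffies from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    return sum(fields) - idle, sum(fields)

def read_mem_used_kb():
    """Return used memory in kB as MemTotal - MemAvailable."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, val = line.split(":")
            info[key] = int(val.split()[0])
    return info["MemTotal"] - info["MemAvailable"]

if __name__ == "__main__":
    busy0, total0 = read_cpu_times()
    trace = []
    for _ in range(50):                   # ~5 s trace sampled at 10 Hz
        time.sleep(0.1)
        busy1, total1 = read_cpu_times()
        cpu = 100.0 * (busy1 - busy0) / max(total1 - total0, 1)
        trace.append((cpu, read_mem_used_kb()))
        busy0, total0 = busy1, total1
    for cpu, mem in trace[:5]:
        print(f"cpu {cpu:5.1f}%  mem {mem} kB")
```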
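As one illustration of a measurement in the compression benchmark, the sketch below applies L1 unstructured pruning (via `torch.nn.utils.prune`) to a toy model and reports weight sparsity and mean inference latency. The model, the 50% pruning amount, and the timing loop are illustrative assumptions; accuracy-under-attack measurements are not shown.

```python
# Minimal sketch of one benchmark measurement: prune a small model with
# L1 unstructured pruning, then report weight sparsity and mean inference
# latency. Model, pruning amount, and timing loop are illustrative
# assumptions; robustness-under-attack measurements are not shown.
import time
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Zero out the 50% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

zeros = sum(int((m.weight == 0).sum()) for m in model.modules()
            if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules()
            if isinstance(m, nn.Linear))

x = torch.randn(1, 128)
with torch.no_grad():
    for _ in range(10):                   # warm-up iterations
        model(x)
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    latency_ms = (time.perf_counter() - t0) * 10  # ms per call over 100 runs

print(f"sparsity: {zeros / total:.2%}, mean latency: {latency_ms:.3f} ms")
```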
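For intuition about the PIR primitive itself, the sketch below implements the classic two-server XOR-based IT-PIR scheme (in the style of Chor et al.), in which each non-colluding server sees a uniformly random query vector and so learns nothing about the requested record. This illustrates only the base primitive of hiding which item is fetched; the single-round aggregate-query machinery described above is not shown, and the database and record sizes are toy assumptions.

```python
# Toy two-server information-theoretic PIR (classic XOR scheme): each
# server sees a uniformly random query vector, so neither learns which
# record the client wants. Only the base primitive is shown; the
# aggregate-query extensions described above are omitted.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, query_bits):
    """XOR together the records selected by the query bit-vector."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, query_bits):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

def pir_fetch(db, index):
    n = len(db)
    # Server 1 gets random bits; server 2 gets the same bits with the
    # target index flipped. Each vector alone is uniformly random.
    q1 = [secrets.randbits(1) for _ in range(n)]
    q2 = list(q1)
    q2[index] ^= 1
    a1 = server_answer(db, q1)   # answer from (non-colluding) server 1
    a2 = server_answer(db, q2)   # answer from server 2
    return xor_bytes(a1, a2)     # XOR of answers = desired record

if __name__ == "__main__":
    db = [f"record-{i}".encode().ljust(16) for i in range(8)]
    assert pir_fetch(db, 5) == db[5]
    print(pir_fetch(db, 5))
```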
Synergistic Activities
- Noyce Fellow Kartik Patwari established a collaborative project with SonyAI, after interning there for a summer, on exploring human perception of image anonymization (PerceptAnon).
Publications
- S. M. Hafiz, C. Gupta, W. Wnuck, B. Vora, and C.-N. Chuah, "Private Aggregate Queries to Untrusted Databases," Network and Distributed System Security Symposium (NDSS), Feb. 26-Mar. 1, 2024. (Awarded NDSS Artifacts Available, Functional, and Reproduced badges)
- B. Vora, K. Patwari, S. M. Hafiz, Z. Shafiq, and C.-N. Chuah, "Establishing a Benchmark for Adversarial Robustness of Compressed Deep Learning Models After Pruning," ICML 2023 Workshop on New Frontiers in Adversarial Machine Learning, July 2023.
- K. Patwari, S. M. Hafiz, H. Wang, H. Homayoun, Z. Shafiq, and C.-N. Chuah, "DNN Model Architecture Fingerprinting Attack on CPU-GPU Edge Devices," 7th IEEE European Symposium on Security and Privacy (EuroS&P), June 6-10, 2022. [pdf]
- H. Wang, S. M. Hafiz, K. Patwari, C.-N. Chuah, Z. Shafiq, and H. Homayoun, "Stealthy Inference Attack on DNN via Cache-based Side-channel Attacks," Design, Automation and Test in Europe Conference (DATE), March 2022. [pdf]