Security and Privacy of AI-enabled IoT Eco-Systems
Goal
With the ubiquitous adoption of Internet-of-Things (IoT) and wearable devices, an increasing amount of sensitive user information is being collected, analyzed, and relayed to remote locations for training machine learning and artificial intelligence (ML/AI) models. Some modern IoT and edge devices (e.g., GPU-enabled embedded devices) have significant on-device computing power that allows them to run ML/AI models locally. Due to their high level of connectivity, maintaining privacy and assuring security is challenging for wearable/IoT devices. In this new eco-system, IoT and edge computing devices that are heterogeneous in processing/storage capabilities are leveraged to perform various tasks, from pure sensing and data aggregation to analysis, inferencing, and training deep neural network (DNN) models or running real-time DNN-based inference. Some of these devices have only a local view, while others leverage information collected from other clusters of IoTs or central servers. This creates new attack surfaces that compromise security and user privacy across the network protocol stack, hardware/software, and system levels. Our team proposes a holistic, cross-layer approach to designing robust ML/AI systems that enhance the security and privacy of wearable/IoT devices. If successful, the project will have a societal impact by providing the much-needed security and privacy assurance to enable AIoT-based applications (such as smart health services) on edge devices.
Scientific Challenge
In the first phase of the project, we will address the following three challenges:
Security/Privacy of AI-enabled IoT Gateway in the Presence of Side Channel Attacks
We will first perform a detailed vulnerability analysis of the impact of side-channel attacks on lightweight inference machines (such as dynamic time warping or 1-D convolutional neural network models) deployed on IoT gateways, such as the Jetson Nano, or on-board systems in autonomous vehicles. We will then explore cross-layer detection and mitigation schemes that fuse data across layers to construct a holistic view spanning hardware, network, and application-layer context.
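For concreteness, the sketch below shows the kind of lightweight 1-D convolution kernel such an inference machine executes on a gateway; the window size, filter weights, and input in main() are illustrative assumptions, not a deployed model.

```c
/* Minimal sketch of a lightweight 1-D convolution layer of the kind a
 * gateway-class inference machine executes. All sizes and weights below
 * are illustrative assumptions. */
#include <stdio.h>

#define IN_LEN  16                      /* length of the input window   */
#define K_LEN   3                       /* 1-D convolution kernel size  */
#define OUT_LEN (IN_LEN - K_LEN + 1)    /* valid (no-padding) output    */

/* Valid 1-D convolution followed by a ReLU activation. */
static void conv1d_relu(const float *x, const float *w, float b, float *y)
{
    for (int i = 0; i < OUT_LEN; i++) {
        float acc = b;
        for (int k = 0; k < K_LEN; k++)
            acc += x[i + k] * w[k];
        y[i] = acc > 0.0f ? acc : 0.0f; /* ReLU */
    }
}

int main(void)
{
    float x[IN_LEN] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 7, 6, 5, 4, 3, 2, 1};
    float w[K_LEN]  = {-1.0f, 0.0f, 1.0f};  /* edge-detector-like filter */
    float y[OUT_LEN];

    conv1d_relu(x, w, 0.0f, y);
    for (int i = 0; i < OUT_LEN; i++)
        printf("%5.1f ", y[i]);
    printf("\n");
    return 0;
}
```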
Benchmarking Communication, Computation, Power, and Security/Privacy Tradeoffs of Distributed Learning Models in the AIoT Eco-System
We will build a testbed that enumerates the different scenarios for learning-based IoT eco-systems. The initial training and fine-tuning of the ML models with dynamic data may be partitioned among the IoT gateway, edge devices, and the server. ML training and inference tasks incur computation costs, while reporting data and receiving ML model updates incur communication costs. We will perform detailed benchmarking with off-the-shelf components.
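As a rough illustration of the cost accounting this testbed will capture, the sketch below times a stand-in local computation and converts an assumed model-update size into an estimated transfer time; the workload, update size, and link rate are all placeholder assumptions.

```c
/* Sketch of per-task cost accounting: wall-clock computation time for a
 * local step versus estimated transfer time for one model update.
 * run_local_task(), UPDATE_BYTES, and the link rate are stand-ins. */
#include <stdio.h>
#include <time.h>

#define UPDATE_BYTES (250L * 1024L)         /* assumed model-update size */
#define LINK_BYTES_PER_S (1024.0 * 1024.0)  /* assumed 1 MB/s uplink     */

static void run_local_task(void)
{
    volatile double acc = 0.0;          /* stand-in for training/inference */
    for (long i = 0; i < 10000000L; i++)
        acc += (double)i * 1e-9;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    run_local_task();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double comp_s = (t1.tv_sec - t0.tv_sec)
                  + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double comm_s = UPDATE_BYTES / LINK_BYTES_PER_S;

    printf("computation: %.3f s, communication (est.): %.3f s\n",
           comp_s, comm_s);
    return 0;
}
```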
Designing Defense Mechanisms against Novel ML/DNN Model Fingerprinting and Adversarial Attacks
We will assess new vulnerabilities based on side-channel attacks or fingerprinting attacks that leak information about on-device DNN models employed by smart edge applications. This has a potentially huge impact on many societal-scale applications, ranging from surveillance to intelligent transportation and smart health applications.
Faculty Participants
Chen-Nee Chuah, Electrical and Computer Engineering (Lead PI)
Houman Homayoun, Electrical and Computer Engineering (Co-PI)
Kartik Patwari, Electrical and Computer Engineering (PhD)
Hari Venugopalan, Computer Science (PhD)
Chongzhou Fang, Electrical and Computer Engineering (PhD)
Noyce Fellow Alumni
Han Wang, Electrical and Computer Engineering (PhD, 2022). Current position: Assistant Professor, Temple University
Progress Highlights
We demonstrated a fingerprinting attack that identifies the architecture family of a running deep neural network (DNN) model on CPU-GPU edge devices by stealthily analyzing system-level side-channel information such as memory, CPU, and GPU usage, requiring only user-level privileges. Such attacks require neither physical access nor sudo access to the victim's device. The leaked model architecture information can be further exploited to strengthen black-box adversarial attacks against DNN-based applications. We are currently exploring detection and mitigation mechanisms.
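The sketch below illustrates why user-level privileges suffice: system-wide memory and CPU counters are world-readable under /proc on Linux, so an unprivileged process can sample them as a side channel. The sampling period is arbitrary, the Jetson GPU counter path mentioned in the comment is a platform-specific assumption, and this is only the sampling primitive, not our attack pipeline.

```c
/* Sketch of unprivileged side-channel sampling via world-readable /proc
 * counters. On some Jetson boards a GPU load counter is similarly readable
 * (e.g., /sys/devices/gpu.0/load); that path is a platform assumption. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read one labeled value (in kB) from /proc/meminfo. */
static long meminfo_kb(const char *key)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    long kb = -1;
    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            sscanf(line + strlen(key), ": %ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

/* Aggregate busy/idle jiffies from the first line of /proc/stat. */
static void cpu_jiffies(long *busy, long *idle)
{
    FILE *f = fopen("/proc/stat", "r");
    long u = 0, n = 0, s = 0, id = 0;
    if (f) {
        fscanf(f, "cpu %ld %ld %ld %ld", &u, &n, &s, &id);
        fclose(f);
    }
    *busy = u + n + s;
    *idle = id;
}

int main(void)
{
    long b0, i0, b1, i1;
    for (int t = 0; t < 20; t++) {      /* short sampling trace */
        cpu_jiffies(&b0, &i0);
        usleep(50 * 1000);              /* 50 ms sampling period */
        cpu_jiffies(&b1, &i1);
        long db = b1 - b0, di = i1 - i0;
        double cpu = (db + di) > 0 ? 100.0 * db / (db + di) : 0.0;
        printf("t=%2d  MemFree=%ld kB  cpu=%5.1f%%\n",
               t, meminfo_kb("MemFree"), cpu);
    }
    return 0;
}
```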
Modern platforms like the Jetson Nano have made it feasible to run AI/ML models on resource-constrained IoTs and edge devices to support new services. However, running deep neural network (DNN) models on edge devices introduces a new security risk: the potential leakage of inference results. For instance, a victim may want to protect the inference result of a DNN-based disease diagnostic application or the investment decision of an AI-driven financial application. We have demonstrated that cache-based side-channel attacks such as Flush+Reload can be launched on victim DNN models to observe the execution of their computation graphs and deduce the inference results.
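For reference, here is a minimal sketch of the Flush+Reload primitive on x86: the attacker flushes a monitored cache line shared with the victim (e.g., a function in a shared DNN library), lets the victim run, then times a reload; a fast reload reveals that the victim touched the line. The probe buffer below is a local stand-in, and the hit threshold is an assumption that must be calibrated per machine.

```c
/* Minimal x86 Flush+Reload sketch. The probe buffer stands in for a cache
 * line shared with the victim (e.g., a DNN library function); the cycle
 * threshold must be calibrated per machine. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, __rdtscp, _mm_mfence */

#define HIT_THRESHOLD 120       /* cycles; calibrate per machine */

static uint64_t time_reload(volatile const uint8_t *addr)
{
    unsigned aux;
    _mm_mfence();
    uint64_t t0 = __rdtscp(&aux);
    (void)*addr;                        /* the timed reload */
    uint64_t t1 = __rdtscp(&aux);
    _mm_mfence();
    return t1 - t0;
}

int main(void)
{
    static uint8_t probe[64];           /* stand-in for a shared line */

    _mm_clflush(probe);                 /* FLUSH: evict the line */
    _mm_mfence();
    /* ... the victim runs here; touching the line re-caches it ... */
    uint64_t dt = time_reload(probe);   /* RELOAD: time the access */

    printf("%s (%llu cycles)\n",
           dt < HIT_THRESHOLD ? "line accessed (cache hit)"
                              : "no access (cache miss)",
           (unsigned long long)dt);
    return 0;
}
```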
We have built a benchmarking testbed to study the performance and security tradeoffs of DNN model compression and optimization methods that aim to make models memory-efficient for IoTs/edge devices. We characterize classification accuracy in the presence of attacks, inference time, memory footprint, and CPU/GPU usage.
The project provides undergraduate research training through the two-quarter course "EEC174ABY Applied Machine Learning Senior Design Projects."
Publications, Patents, Other Reports
H. Wang, S. M. Hafiz, K. Patwari, C.-N. Chuah, Z. Shafiq, and H. Homayoun, "Stealthy Inference Attack on DNN via Cache-based Side-channel Attacks," to appear in Design, Automation and Test in Europe Conference (DATE), March 2022.
K. Patwari, S. M. Hafiz, H. Wang, H. Homayoun, Z. Shafiq, and C.-N. Chuah, "DNN Model Architecture Fingerprinting Attack on CPU-GPU Edge Devices," 7th IEEE European Symposium on Security and Privacy (EuroS&P), June 6-10, 2022.