In this talk, drawing on three separate projects, I will present my research on how deep learning systems, owing to the computational properties of DNNs, are particularly vulnerable to existing, well-studied attacks. First, I will show how over-parameterization hurts a system's resilience to fault-injection attacks [USENIX'19]. With even a single carefully chosen bit-flip, an attacker can inflict an accuracy drop of up to 100%, and half of a DNN's parameters contain at least one bit that, when flipped, degrades its accuracy by more than 10%. An adversary who wields Rowhammer, a fault attack that flips random or targeted bits in physical memory (DRAM), can exploit this graceless degradation in practice. Second, I will show how computational regularities can compromise the confidentiality of a system [ICLR'20]. Leveraging the information leaked while a DNN processes a single sample, an adversary can steal the DNN's often-proprietary architecture. An attacker armed with Flush+Reload, a cache side-channel attack, can accurately perform this reconstruction against a DNN deployed in the cloud. Third, I will show how input-adaptive DNNs, e.g., multi-exit networks, fail to deliver their promised computational efficiency in an adversarial setting [ICLR'21]. By adding imperceptible perturbations to an input, an attacker can significantly increase the computations a multi-exit network requires to produce a prediction. This vulnerability can also be exploited in resource-constrained settings, such as IoT scenarios, where input-adaptive networks are gaining traction. Finally, building on the lessons learned from these projects, I will conclude my talk by outlining future research directions for designing secure and reliable deep learning systems.
Examining Committee:
Dean's rep: Dr. Mike Hicks
Members: Dr. Dana Dachman-Soled
Dr. Leonidas Lampropoulos
Dr. Nicolas Papernot
Dr. Nicholas Carlini
Sanghyun Hong is a PhD candidate in computer science at the University of Maryland, College Park, advised by Prof. Tudor Dumitras. His research interests lie at the intersection of computer security and machine learning. His current research focuses on studying the computational properties of DNNs from a systems security perspective. He also works on identifying distinct computational behaviors of DNNs, such as network confusion or gradient-level disparity, whose quantification has led to defenses against backdooring and data poisoning. He was an invited speaker at USENIX Enigma'21, where he talked about practical hardware attacks on deep learning, and he is a recipient of the Ann G. Wylie Dissertation Fellowship. He received his BS in EECS from Seoul National University in South Korea. Sanghyun will join Oregon State University as an Assistant Professor in Fall 2021.