Academic Information


Announcement of the Academic Lecture "Deep Neural Network Security"

Published: 2018-06-11 09:04
Time: June 11, 14:00–16:30
Venue: Room 520, Art Hall
Title: Deep Neural Network Security
Speaker: Yier Jin
Speaker biography: Yier Jin received his PhD from Yale University and is a doctoral supervisor at the University of Florida, USA. His research covers the Internet of Things (IoT) and its security, in both theory and application. He has published more than one hundred papers in top international journals and at top conferences in the IoT field. He serves as an Associate Editor for five international journals, as a Guest Editor for several others, and as a proposal panelist/reviewer, and he has been a (co-)founder, (co-)chair, and organizing or technical program committee member of multiple international conferences.
Yier Jin is the Endowed IoT Term Professor in the Warren B. Nelms Institute for the Connected World and an Associate Professor in the Department of Electrical and Computer Engineering (ECE) at the University of Florida (UF). Prior to joining UF, he was an assistant professor in the ECE Department at the University of Central Florida (UCF). He received his PhD in Electrical Engineering from Yale University in 2012, after earning B.S. and M.S. degrees in Electrical Engineering from Zhejiang University, China, in 2005 and 2007, respectively. His research focuses on embedded systems design and security, trusted hardware intellectual property (IP) cores, and hardware-software co-design for modern computing systems. He is currently focusing on the design and security analysis of Internet of Things (IoT) and wearable devices, with particular emphasis on information integrity and privacy protection in the IoT era. Dr. Jin received the Department of Energy (DoE) Early CAREER Award in 2016 and the Outstanding New Faculty Award of ACM's Special Interest Group on Design Automation (SIGDA) in 2017. He also received Best Paper Awards at the 52nd Design Automation Conference in 2015, the 21st Asia and South Pacific Design Automation Conference in 2016, the 10th IEEE Symposium on Hardware-Oriented Security and Trust in 2017, ACM TODAES in 2018, and the 28th ACM Great Lakes Symposium on VLSI.
Abstract
With its rapid growth and significant success across a wide spectrum of applications, Deep Learning (DL) has been deployed in many real-world settings, including safety-critical scenarios. However, this increasing popularity also brings new security concerns. In particular, Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples, which can easily fool a DNN into producing misclassifications with high confidence. In this talk, I will first introduce previous methods for generating adversarial examples, which focus mainly on adding perturbations directly to input images. Orthogonal to these existing solutions, I will then present our research effort and proof-of-concept implementation of adversarial feature-manipulation attacks against deep learning applications. Rather than modifying the input vectors of a DNN, we craft adversarial examples based on a precise understanding of the sensitivity between the max-pooling feature representation and the final classification output. An emerging hardware-software DNN framework will also be introduced to help better understand the security vulnerabilities of DNN systems.
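As background for the input-perturbation methods the abstract mentions, the sketch below illustrates the Fast Gradient Sign Method (FGSM), one well-known way of adding a perturbation directly to an input image; it is not the feature-manipulation attack presented in the talk. This is a minimal sketch assuming PyTorch, a pretrained classifier model, and image pixels normalized to [0, 1]; the function name and the eps value are illustrative, not from the announcement.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    # Hypothetical helper: one-step FGSM on an input batch x.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # loss at the original input
    loss.backward()                          # gradient w.r.t. the pixels
    x_adv = x + eps * x.grad.sign()          # signed-gradient perturbation
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid range

A small eps keeps the perturbation visually imperceptible while still being able to flip the model's prediction with high confidence, which is the vulnerability the talk builds on.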
Organizer: School of Computer Science
                                                          June 2018