Venue: Tencent Meeting, ID 253 922 334
Title: Deep Neural Network Compression and Acceleration
Abstract: Deep neural networks (DNNs) have developed rapidly and achieved remarkable success in many artificial intelligence (AI) applications, such as image understanding, speech recognition, and natural language processing, and have become one of the central research focuses in AI. However, as DNNs have grown deeper and wider to improve performance, their parameter counts and computational complexity have increased significantly. How to compress and accelerate these large DNNs has therefore received ever-increasing attention from both academia and industry. Targeting the problem of parameter redundancy in DNNs, this talk presents general methods of low-rank decomposition, parameter pruning, and knowledge distillation for DNN compression and acceleration, with a particular focus on convolutional neural networks (CNNs).
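To make two of the techniques named in the abstract concrete, here is a minimal NumPy sketch of (a) low-rank decomposition of a weight matrix via truncated SVD and (b) magnitude-based parameter pruning. This is an illustrative toy example, not the speaker's specific algorithms; the matrix size, rank, and sparsity level are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))  # a hypothetical dense weight matrix

# (a) Low-rank decomposition: approximate W with rank-r factors U @ V,
# cutting parameters from 256*256 = 65536 to 2*256*r = 16384 for r = 32.
r = 32
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]  # absorb the top-r singular values into U
V = Vt[:r, :]
W_lowrank = U @ V          # rank-r approximation of W

# (b) Magnitude pruning: zero out the 90% of weights smallest in |value|,
# keeping only the largest-magnitude 10%.
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) > threshold, W, 0.0)
```

In practice both steps are followed by fine-tuning to recover accuracy, and pruning is often applied structurally (whole filters or channels) so that the savings translate into real speedups on hardware.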
Speaker Bio: Shaohui Lin is currently an associate researcher and a Zijiang Young Scholar in the School of Computer Science and Technology, East China Normal University (ECNU). He received his Ph.D. from Xiamen University in June 2019 and worked as a postdoctoral researcher at the National University of Singapore before joining ECNU. His research interests include computer vision, machine learning, and deep learning, especially the compression and acceleration of large-capacity models. He is the first author of about 10 scientific articles at top venues, including IEEE TPAMI, TNNLS, CVPR, IJCAI, and AAAI. He serves as a reviewer for TPAMI, IJCV, TNNLS, TMM, CVPR, NeurIPS, etc. He is the recipient of the Outstanding Doctoral Dissertation Nomination Award of the Chinese Association for Artificial Intelligence (CAAI), 2020.
Office of Scientific Research