Department of Computer Engineering
CS 590 SEMINAR
Efficient Processing of Deep Neural Networks
Deep Neural Networks (DNNs) are commonly deployed in many Artificial Intelligence (AI) applications. Because they have deep, complicated structures with multiple layers containing massive numbers of neurons and connections, they are notoriously compute- and memory-intensive. In general, DNNs require an undesirably large number of parameters, and sparse neural networks have attracted wide attention as an effective way to reduce computation and memory usage. Hence, techniques that enable energy-efficient, high-throughput processing of DNNs while maintaining accuracy are critical. Owing to the popularity of DNNs, many recent hardware platforms provide special features that target DNN processing, making hardware accelerators a key part of deep learning systems. The solution proposed in our project is to accelerate DNN processing on Intel's Knights Landing architecture, which has many cores, each of which can run vector instructions to achieve high compute throughput. To take advantage of these cores, this work will involve both multithreading and vectorization of the DNN algorithms.
DATE: 30 October, 2017, Monday @ 15:40