Title: Shape_DNN: Getting deep neural networks in shape
Training a deep neural network (DNN) involves selecting a set of hyperparameters that define the network topology and influence the accuracy of the resulting network. Often, the goal is to maximize prediction accuracy on a given data set. However, non-functional requirements of the trained network, such as inference speed, model size, and energy consumption, can be just as important. This talk presents an approach to automating the selection of a DNN topology that fulfills both the functional and non-functional requirements of an application. We also discuss the challenges that have emerged in deploying deep learning applications and how they can be addressed. In particular, the talk gives an overview of an open-source framework that enables programmers to implement efficient DNN applications across multiple hardware platforms.
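To illustrate the kind of selection problem the talk addresses, the sketch below treats topology choice as picking the most accurate candidate that stays within a deployment budget for size and latency. This is a hypothetical simplification for intuition only, not the Shape_DNN method itself; all names and numbers are invented.

```python
# Hypothetical sketch of constraint-aware topology selection (not the
# actual Shape_DNN algorithm): each candidate network is summarized by
# its measured accuracy (functional) and its resource costs
# (non-functional), and we keep only candidates within the budget.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float      # validation accuracy (functional requirement)
    size_mb: float       # model size in MB (non-functional)
    latency_ms: float    # inference latency in ms (non-functional)

def select_topology(candidates, max_size_mb, max_latency_ms):
    """Return the most accurate candidate that meets the resource budget."""
    feasible = [c for c in candidates
                if c.size_mb <= max_size_mb and c.latency_ms <= max_latency_ms]
    if not feasible:
        return None  # no topology satisfies the non-functional requirements
    return max(feasible, key=lambda c: c.accuracy)

# Invented example candidates, e.g. produced by a topology search.
candidates = [
    Candidate("wide",   accuracy=0.94, size_mb=120.0, latency_ms=45.0),
    Candidate("medium", accuracy=0.92, size_mb=40.0,  latency_ms=18.0),
    Candidate("tiny",   accuracy=0.88, size_mb=5.0,   latency_ms=4.0),
]

best = select_topology(candidates, max_size_mb=50.0, max_latency_ms=20.0)
print(best.name)  # "medium": highest accuracy that still fits the budget
```

In practice a search like this would be combined with training and on-device measurement of each candidate; the point here is only that accuracy is maximized *subject to* the non-functional constraints rather than in isolation.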