
NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps


Aimar, Alessandro; Mostafa, Hesham; Calabrese, Enrico; Rios-Navarro, Antonio; Tapiador-Morales, Ricardo; Lungu, Iulia-Alexandra; Milde, Moritz B; Corradi, Federico; Linares-Barranco, Alejandro; Liu, Shih-Chii; Delbruck, Tobi (2017). NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps. arXiv.org 1706.01406, Institute of Neuroinformatics.

Abstract

Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though Graphics Processing Units (GPUs) are most often used in training and deploying CNNs, their power consumption becomes a problem for real-time mobile applications. We propose a flexible and efficient CNN accelerator architecture which can support the implementation of SOA CNNs in low-power and low-latency application scenarios. This architecture exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across a wide range of convolutional network kernel sizes and numbers of input and output feature maps. We implemented the proposed architecture on an FPGA platform and present results showing how our implementation reduces external memory transfers and compute time in five different CNNs, ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. We show how, in RTL simulations in a 28 nm process with a clock frequency of 500 MHz, the NullHop core is able to reach over 450 GOp/s and an efficiency of 368%, maintaining over 98% utilization of the MAC units and achieving a power efficiency of over 3 TOp/s/W in a core area of 5.8 mm².
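The abstract's central idea is that post-ReLU feature maps are mostly zeros, so an accelerator can store only the non-zero activations together with a per-pixel sparsity map and skip the zero computations entirely. A minimal sketch of such an encoding, assuming a simple bitmask-plus-value-list format (the function names and exact layout here are illustrative, not the accelerator's actual hardware format):

```python
import numpy as np

def encode_sparse(fmap):
    """Encode a feature map as (packed sparsity map, non-zero values, shape).

    One bit per pixel marks whether that activation is non-zero; only the
    non-zero values themselves are stored. This is a software sketch of a
    zero-skipping sparse representation, not the exact NullHop format.
    """
    flat = fmap.ravel()
    mask = flat != 0
    return np.packbits(mask), flat[mask], fmap.shape

def decode_sparse(packed_mask, values, shape):
    """Reconstruct the dense feature map from the sparse encoding."""
    n = int(np.prod(shape))
    mask = np.unpackbits(packed_mask)[:n].astype(bool)
    flat = np.zeros(n, dtype=values.dtype)
    flat[mask] = values
    return flat.reshape(shape)

# A ReLU output is typically dominated by zeros, so the encoding is small.
fmap = np.array([[0.0, 1.5, 0.0, 0.0],
                 [0.0, 0.0, 2.0, 0.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [3.0, 0.0, 0.0, 0.0]], dtype=np.float32)
packed, vals, shape = encode_sparse(fmap)
restored = decode_sparse(packed, vals, shape)
```

The win is twofold, matching the abstract's claims: fewer bytes cross the external-memory interface (only the bitmask and the non-zero values), and the MAC units never see the zero operands, which is how utilization stays high while effective throughput exceeds the nominal rate.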


Statistics

Citations

4 citations in Microsoft Academic

Downloads

1 download since deposited on 01 Mar 2018
1 download in the last 12 months

Additional indexing

Item Type: Working Paper
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Date: 2017
Deposited On: 01 Mar 2018 11:15
Last Modified: 20 Mar 2018 00:41
Series Name: arXiv.org
ISSN: 2331-8422
OA Status: Green
Free access at: Official URL. An embargo period may apply.
Official URL: https://arxiv.org/abs/1706.01406

Download

Download PDF  'NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps'.
Content: Published Version
Filetype: PDF
Size: 1MB