Precise deep neural network computation on imprecise low-power analog hardware


Binas, Jonathan; Neil, Daniel; Indiveri, Giacomo; Liu, Shih-Chii; Pfeiffer, Michael (2016). Precise deep neural network computation on imprecise low-power analog hardware. arXiv: Computer Science/Neural and Evolutionary Computing 1606.07786, Institute of Neuroinformatics.

Abstract

There is an urgent need for compact, fast, and power-efficient hardware implementations of state-of-the-art artificial intelligence. Here we propose a power-efficient approach for real-time inference, in which deep neural networks (DNNs) are implemented through low-power analog circuits. Although analog implementations can be extremely compact, they have been largely supplanted by digital designs, partly because of device mismatch effects due to fabrication. We propose a framework that exploits the power of Deep Learning to compensate for this mismatch by incorporating the measured variations of the devices as constraints in the DNN training process. This eliminates the need for mismatch minimization strategies, such as the use of very large transistors, and allows circuit complexity and power consumption to be reduced to a minimum. Our results, based on large-scale simulations as well as a prototype VLSI chip implementation, indicate at least a 3-fold improvement in processing efficiency over current digital implementations.
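
To make the idea concrete, the following is a minimal sketch (not the authors' code) of mismatch-aware training in Python/NumPy. It assumes each analog multiplier's deviation has been measured once and stored as a fixed per-weight gain factor; the names gain1 and gain2 and the 10% mismatch spread are illustrative assumptions, not figures from the paper. The gains are folded into the forward pass so that backpropagation learns weights that compensate for the hardware's imprecision.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 64, 32, 10

# Hypothetical measured gain factors: a perfect circuit would have gain 1.0
# everywhere; fabrication mismatch makes each analog multiplier deviate.
# The 10% spread is an illustrative assumption, not a figure from the paper.
gain1 = 1.0 + 0.1 * rng.standard_normal((n_in, n_hidden))
gain2 = 1.0 + 0.1 * rng.standard_normal((n_hidden, n_out))

W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_out))

def train_step(x, y, lr=1e-2):
    """One gradient step with the mismatch folded into the forward pass."""
    global W1, W2
    # Forward pass: the circuit effectively computes x @ (W * gain),
    # so that is exactly what we train against.
    h_pre = x @ (W1 * gain1)
    h = np.maximum(0.0, h_pre)            # ReLU hidden layer
    out = h @ (W2 * gain2)
    err = out - y                         # gradient of 0.5 * squared error
    # Backward pass: the gains are constants, so gradients simply pass
    # through them, steering W to absorb the device variations.
    gW2 = (h.T @ err) * gain2
    gh = err @ (W2 * gain2).T
    gh[h_pre <= 0.0] = 0.0                # ReLU derivative
    gW1 = (x.T @ gh) * gain1
    W2 -= lr * gW2
    W1 -= lr * gW1
    return float(0.5 * np.sum(err ** 2))

x = rng.standard_normal((8, n_in))
y = rng.standard_normal((8, n_out))
for step in range(200):
    loss = train_step(x, y)
print("final training loss:", loss)

Because the deviations enter training as fixed constraints rather than random noise, each fabricated chip would receive its own weight set, tuned to that chip's measured mismatch.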

Statistics

Citations

3 citations in Microsoft Academic

Downloads

17 downloads since deposited on 26 Jan 2017
9 downloads in the past 12 months

Additional indexing

Item Type: Working Paper
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Date: 2016
Deposited On: 26 Jan 2017 11:48
Last Modified: 02 Feb 2018 11:45
Series Name: arXiv: Computer Science/Neural and Evolutionary Computing
OA Status: Green
Free access at: Official URL. An embargo period may apply.
Official URL: https://arxiv.org/abs/1606.07786

Download

Download PDF: 'Precise deep neural network computation on imprecise low-power analog hardware'
Content: Published Version
Filetype: PDF
Size: 716 kB