Incremental Learning Meets Reduced Precision Networks


Hu, Yuhuang; Delbruck, Tobi; Liu, Shih-Chii (2019). Incremental Learning Meets Reduced Precision Networks. In: 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26 May 2019 - 29 May 2019.

Abstract

Hardware accelerators for Deep Neural Networks (DNNs) that use reduced precision parameters are more energy efficient than the equivalent full precision networks. While many studies have focused on reduced precision training methods for supervised networks with the availability of large datasets, less work has been reported on incremental learning algorithms that adapt the network for new classes and on the consequences that reduced precision has for these algorithms. This paper presents an empirical study of how reduced precision training methods affect the iCaRL incremental learning algorithm. The incremental network accuracies on the CIFAR-100 image dataset show that weights can be quantized to 1 bit (2.39% drop in accuracy), but when activations are quantized to 1 bit, the accuracy drops much more (12.75%). Quantizing gradients from 32 to 8 bits affects the accuracy of the trained network by less than 1%. These results are encouraging for hardware accelerators that support incremental learning algorithms.
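
The abstract refers to training with 1-bit weights. As a rough illustration only (not the authors' exact method), the sketch below shows the common way such binarization is implemented: weights are sign-binarized in the forward pass while full-precision latent weights are kept for the optimizer, with gradients passed through via a straight-through estimator. The class names BinarizeSTE and BinaryLinear are illustrative and assume PyTorch.

    import torch

    class BinarizeSTE(torch.autograd.Function):
        """Sign-binarize a tensor in the forward pass; pass gradients
        straight through (masked outside [-1, 1]) in the backward pass."""

        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.sign(x)

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # Straight-through estimator: block gradients where |x| > 1.
            return grad_output * (x.abs() <= 1).to(grad_output.dtype)

    class BinaryLinear(torch.nn.Linear):
        """Linear layer that uses 1-bit weights in the forward pass while
        keeping full-precision latent weights for the optimizer update."""

        def forward(self, x):
            w_bin = BinarizeSTE.apply(self.weight)
            return torch.nn.functional.linear(x, w_bin, self.bias)

    if __name__ == "__main__":
        layer = BinaryLinear(16, 4)
        out = layer(torch.randn(8, 16))
        out.sum().backward()            # gradients flow to the latent float weights
        print(layer.weight.grad.shape)  # torch.Size([4, 16])

Activations and gradients can be quantized with analogous fake-quantization functions; the paper studies how each choice of bit width affects the incremental learning accuracy.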

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 29 May 2019
Deposited On: 11 Feb 2020 15:22
Last Modified: 16 Feb 2020 07:07
Publisher: IEEE
ISBN: 9781728103976
OA Status: Closed
Publisher DOI: https://doi.org/10.1109/iscas.2019.8702541
