Publication: Energy-Efficient Recurrent and Fully-Connected Neural Network Training with Bio-Inspired Temporal Sparsity
Citations
Chen, X. (2024). Energy-Efficient Recurrent and Fully-Connected Neural Network Training with Bio-Inspired Temporal Sparsity. (Dissertation, University of Zurich) https://doi.org/10.5167/uzh-259329
Abstract
The past decade has seen a resurgence of Deep Learning (DL), driven by the rapid advancement of computational power and the explosion of data. The massively parallel processing capacity of Graphics Processing Unit (GPU) and Application-Specific Integrated Circuit (ASIC) clusters in the cloud has enabled the training of large-scale Deep Neural Network (DNN) models, but these clusters consume a considerable amount of power and risk leaking private data. Local learning on edge devices is becoming increasingly important in privacy-sensitive applications …
Additional indexing
Creators (Authors): Chen, X.
Faculty
Item Type: Dissertation
Referees
Language
Place of Publication
Publication date: 2024
Date available
Number of pages
OA Status