Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences


Neil, D; Pfeiffer, M; Liu, S-C (2016). Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences. In: Neural Information Processing Systems (NIPS), Barcelona, 5-10 December 2016.

Abstract

Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.
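
To make the time-gate mechanism described in the abstract concrete, below is a minimal NumPy sketch of the gate's openness k_t as a function of time. The piecewise-linear rise/fall/leak shape and the parameters tau (oscillation period), s (phase shift), r_on (fraction of the cycle the gate is open) and alpha (leak while closed) follow the paper's formulation; the function name and the vectorized NumPy phrasing are illustrative only, not taken from an official implementation.

import numpy as np

def time_gate(t, tau, s, r_on, alpha=1e-3):
    # Phase of time t within this neuron's cycle, in [0, 1).
    phi = np.mod(t - s, tau) / tau
    # Piecewise-linear openness: linear rise over the first half of the
    # open phase, linear fall over the second half, small leak otherwise.
    rising = 2.0 * phi / r_on
    falling = 2.0 - 2.0 * phi / r_on
    leak = alpha * phi
    return np.where(phi < 0.5 * r_on, rising,
                    np.where(phi < r_on, falling, leak))

# With openness k = time_gate(t, ...), the cell update becomes
#   c_t = k * c_candidate + (1 - k) * c_prev
# so the memory cell is only substantially rewritten while the gate is
# open, which yields the sparse updates the abstract refers to.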

Statistics

Downloads

51 downloads since deposited on 23 Feb 2018
51 downloads in the past 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Event End Date: 10 December 2016
Deposited On: 23 Feb 2018 10:20
Last Modified: 31 Jul 2018 05:12
Publisher: Advances in Neural Information Processing Systems 29 (NIPS 2016)
Series Name: Advances in Neural Information Processing Systems 29 (NIPS 2016)
OA Status: Green
Free access at: Official URL. An embargo period may apply.
Official URL: https://papers.nips.cc/paper/6310-phased-lstm-accelerating-recurrent-network-training-for-long-or-event-based-sequences.pdf

Download

Download PDF: 'Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences'
Content: Published Version
Filetype: PDF
Size: 1MB