
Reconstruction of audio waveforms from spike trains of artificial cochlea models


Zai, Anja T; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii (2015). Reconstruction of audio waveforms from spike trains of artificial cochlea models. Frontiers in Neuroscience, 9:347.

Abstract

Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is particularly challenging for spikes from mixed-signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model, the AEREAR2 cochlea, and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus from the peri-stimulus time histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against that of the analogous software model using objective measures of speech quality and intelligibility, and is further tested in a word recognition task. Under low signal-to-noise ratio (SNR) conditions (SNR < –5 dB), the reconstructed audio gives better classification performance in this word recognition task than the original input at the same SNR.
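The stimulus-reconstruction method referenced above (previously applied to peri-stimulus time histograms of cortical responses) is commonly implemented as a regularized linear reconstruction filter over time-lagged spike rates. The sketch below is a minimal illustration of that general technique, not the authors' implementation: the function and parameter names (bin_spikes, n_lags, ridge) are assumptions, and the target audio is assumed to be resampled to one sample per spike-rate bin.

```python
import numpy as np

def bin_spikes(spike_times, n_channels, duration, bin_size):
    """Bin per-channel spike times (seconds) into a (n_bins, n_channels) rate matrix."""
    n_bins = int(np.ceil(duration / bin_size))
    rates = np.zeros((n_bins, n_channels))
    for ch, times in enumerate(spike_times):
        counts, _ = np.histogram(times, bins=n_bins, range=(0.0, duration))
        rates[:, ch] = counts / bin_size
    return rates

def lagged_design(rates, n_lags):
    """Stack time-lagged copies of the rate matrix into one design matrix."""
    n_bins, n_channels = rates.shape
    X = np.zeros((n_bins, n_lags * n_channels))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = rates[:n_bins - lag, :]
    return X

def fit_reconstruction_filter(rates, target_audio, n_lags=32, ridge=1e-3):
    """Ridge-regression estimate of a linear filter mapping lagged spike rates to audio.

    target_audio must already be resampled to one sample per spike-rate bin.
    """
    X = lagged_design(rates, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ target_audio)

def reconstruct(rates, weights, n_lags=32):
    """Apply a fitted reconstruction filter to (new) spike rates."""
    return lagged_design(rates, n_lags) @ weights
```

The ridge term is one standard way to keep the least-squares inverse well conditioned when neighbouring cochlea channels are strongly correlated; the specific regularization used in the paper may differ.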


Statistics

Citations

3 citations in Web of Science®
4 citations in Scopus®
3 citations in Microsoft Academic

Downloads

20 downloads since deposited on 11 Feb 2016
5 downloads in the last 12 months

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Institute of Neuroinformatics
Dewey Decimal Classification: 570 Life sciences; biology
Language: English
Date: 2015
Deposited On: 11 Feb 2016 09:27
Last Modified: 01 Jul 2018 00:28
Publisher: Frontiers Research Foundation
Series Name: Frontiers in Neuroscience
ISSN: 1662-453X
OA Status: Gold
Free access at: PubMed ID. An embargo period may apply.
Publisher DOI: https://doi.org/10.3389/fnins.2015.00347
PubMed ID: 26528113

Download

Download PDF: 'Reconstruction of audio waveforms from spike trains of artificial cochlea models'
Content: Published Version
Filetype: PDF
Size: 3MB
Licence: Creative Commons: Attribution 4.0 International (CC BY 4.0)