Publication:

Listeners use temporal information to identify French- and English-accented speech

Date
2017
Journal Article
Published version
cris.lastimport.scopus: 2025-08-13T03:33:52Z
cris.lastimport.wos: 2025-07-15T01:33:14Z
cris.virtual.orcid: 0000-0002-8494-6025
cris.virtualsource.orcid: e9e4ab17-277e-4636-b457-c5b819d87e05
dc.contributor.institution: University of Zurich
dc.date.accessioned: 2017-01-09T13:57:18Z
dc.date.available: 2017-01-09T13:57:18Z
dc.date.issued: 2017
dc.description.abstract:

Which acoustic cues can be used by listeners to identify speakers’ linguistic origins in foreign-accented speech? We investigated accent identification performance in signal-manipulated speech, where (a) Swiss German listeners heard native German speech to which we transplanted segment durations of French-accented German and English-accented German, and (b) Swiss German listeners heard 6-band noise-vocoded French-accented and English-accented German speech to which we transplanted native German segment durations. Therefore, the foreign accent cues in the stimuli consisted of only temporal information (in a) and only strongly degraded spectral information (in b). Findings suggest that listeners were able to identify the linguistic origin of French and English speakers in their foreign-accented German speech based on temporal features alone, as well as based on strongly degraded spectral features alone. When comparing these results to previous research, we found an additive trend of temporal and spectral cues: identification performance tended to be higher when both cues were present in the signal. Acoustic measures of temporal variability could not easily explain the perceptual results. However, listeners were drawn towards some of the native German segmental cues in condition (a), which biased responses towards ‘French’ when stimuli featured uvular /r/s and towards ‘English’ when they contained vocalized /r/s or lacked /r/.
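The 6-band noise vocoding used in condition (b) replaces spectral fine structure with band-limited noise while preserving each band's amplitude envelope, so only coarse spectral information survives. A minimal sketch of the general technique in Python/NumPy follows; it is not the authors' implementation, and the band edges, FFT brick-wall filters, and 10 ms envelope smoothing are assumptions:

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=6, f_lo=100.0, f_hi=7000.0, env_win_s=0.01):
    """Split `signal` into `n_bands` log-spaced frequency bands, extract each
    band's amplitude envelope, and re-impose it on band-limited white noise.
    Brick-wall FFT filters keep the sketch dependency-free."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    sig_spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # logarithmic band edges
    win = max(1, int(env_win_s * fs))               # ~10 ms smoothing window
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(sig_spec * mask, n)
        # envelope: rectify, then moving-average smoothing
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        carrier = np.fft.irfft(noise_spec * mask, n)  # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)      # normalize to avoid clipping
```

With few bands, the output is unintelligible as to segmental detail but retains the gross spectro-temporal envelope that listeners could exploit for accent identification.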

dc.identifier.doi: 10.1016/j.specom.2016.11.006
dc.identifier.issn: 0167-6393
dc.identifier.scopus: 2-s2.0-85006494115
dc.identifier.uri: https://www.zora.uzh.ch/handle/20.500.14742/124701
dc.identifier.wos: 000394397800010
dc.language.iso: eng
dc.subject: Linguistics and Language
dc.subject: Modelling and Simulation
dc.subject: Software
dc.subject: Communication
dc.subject: Computer Vision and Pattern Recognition
dc.subject: Language and Linguistics
dc.subject: Computer Science Applications
dc.subject.ddc: 490 Other languages
dc.subject.ddc: 890 Other literatures
dc.subject.ddc: 410 Linguistics
dc.title: Listeners use temporal information to identify French- and English-accented speech

dc.type: article
dcterms.accessRights: info:eu-repo/semantics/restrictedAccess
dcterms.bibliographicCitation.journaltitle: Speech Communication
dcterms.bibliographicCitation.originalpublishername: Elsevier
dcterms.bibliographicCitation.pageend: 134
dcterms.bibliographicCitation.pagestart: 121
dcterms.bibliographicCitation.volume: 86
dspace.entity.type: Publication
uzh.contributor.affiliation: University of Zurich, Université Paris-Saclay
uzh.contributor.affiliation: Université Paris-Saclay
uzh.contributor.affiliation: University of Cambridge
uzh.contributor.author: Kolly, Marie-José
uzh.contributor.author: Boula de Mareüil, Philippe
uzh.contributor.author: Leemann, Adrian
uzh.contributor.author: Dellwo, Volker
uzh.contributor.correspondence: Yes
uzh.contributor.correspondence: No
uzh.contributor.correspondence: No
uzh.contributor.correspondence: No
uzh.document.availability: none
uzh.eprint.datestamp: 2017-01-09 13:57:18
uzh.eprint.lastmod: 2025-08-13 03:33:52
uzh.eprint.statusChange: 2017-01-09 13:57:18
uzh.funder.name: SNSF
uzh.funder.projectTitle: Swiss National Science Foundation
uzh.funder.projectTitle: Gebert Rüf Stiftung
uzh.harvester.eth: Yes
uzh.harvester.nb: No
uzh.identifier.doi: 10.5167/uzh-130233
uzh.jdb.eprintsId: 17066
uzh.oastatus.unpaywall: bronze
uzh.oastatus.zora: Closed
uzh.publication.citation: Kolly, Marie-José; Boula de Mareüil, Philippe; Leemann, Adrian; Dellwo, Volker (2017). Listeners use temporal information to identify French- and English-accented speech. Speech Communication, 86:121-134.
uzh.publication.originalwork: original
uzh.publication.publishedStatus: final
uzh.scopus.impact: 5
uzh.scopus.subjects: Software
uzh.scopus.subjects: Modeling and Simulation
uzh.scopus.subjects: Communication
uzh.scopus.subjects: Language and Linguistics
uzh.scopus.subjects: Linguistics and Language
uzh.scopus.subjects: Computer Vision and Pattern Recognition
uzh.scopus.subjects: Computer Science Applications
uzh.workflow.doaj: uzh.workflow.doaj.false
uzh.workflow.eprintid: 130233
uzh.workflow.fulltextStatus: restricted
uzh.workflow.revisions: 61
uzh.workflow.rightsCheck: keininfo
uzh.workflow.source: CrossRef:10.1016/j.specom.2016.11.006
uzh.workflow.status: archive
uzh.wos.impact: 4
Files

Original bundle

Name: 130233.pdf
Size: 1.12 MB
Format: Adobe Portable Document Format
Downloadable by admins only