Monaural azimuth localization using spectral dynamics of speech

Roi Kliper*, Hendrik Kayser, Daphna Weinshall, Israel Nelken, Jörn Anemüller

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


Abstract

We tackle the task of localizing speech signals on the horizontal plane using monaural cues. We show that the monaural cues embedded in speech are efficiently captured by amplitude modulation spectrum patterns. We demonstrate that, using these patterns, a linear Support Vector Machine can exploit directionality-related information to learn to discriminate and classify sound location at high resolution. We further propose a straightforward and robust way of integrating information from the two ears: each ear is treated as an independent processor, and information is integrated at the decision level, thus resolving location ambiguity to a large extent.
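The pipeline the abstract describes — amplitude modulation spectrum (AMS) features per ear, a per-ear classifier, and decision-level fusion — can be sketched as follows. This is not the authors' code: the frame length, hop size, number of modulation bins, and the simple score-summing fusion rule are illustrative assumptions, and the per-ear classifier scores are assumed to come from a trained linear SVM (not included here).

```python
import numpy as np

def ams_features(signal, frame_len=512, hop=256, n_mod=8):
    """Amplitude modulation spectrum pattern for one ear's signal.

    Computes a short-time magnitude spectrogram, then takes the FFT of
    each frequency band's envelope along time, keeping the lowest n_mod
    modulation-frequency bins. Parameter values are illustrative.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    win = np.hanning(frame_len)
    frames = np.stack(
        [signal[i * hop:i * hop + frame_len] * win for i in range(n_frames)]
    )
    spec = np.abs(np.fft.rfft(frames, axis=1))       # (n_frames, freq_bins)
    mod = np.abs(np.fft.rfft(spec, axis=0))[:n_mod]  # (n_mod, freq_bins)
    return mod.ravel()                               # flattened AMS pattern

def fuse_decisions(scores_left, scores_right):
    """Decision-level fusion of two independent ears.

    Each ear's classifier produces one score per candidate azimuth class
    (e.g. linear SVM decision values); summing the scores and taking the
    argmax combines the two votes while leaving each ear an independent
    processor, as in the abstract.
    """
    return int(np.argmax(scores_left + scores_right))
```

A feature vector from `ams_features` would be fed to a linear classifier trained per ear; `fuse_decisions` then combines the two ears' class scores into a single azimuth decision.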

Original language: American English
Pages (from-to): 33-36
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2011
Event: 12th Annual Conference of the International Speech Communication Association, INTERSPEECH 2011 - Florence, Italy
Duration: 27 Aug 2011 - 31 Aug 2011

Keywords

  • Amplitude modulation
  • Monaural Processing
  • Speech localization

