SAGRNN: Self-Attentive Gated RNN for Binaural Speaker Separation with Interaural Cue Preservation

Ke Tan*, Buye Xu, Anurag Kumar, Eliya Nachmani, Yossi Adi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

Most existing deep-learning-based binaural speaker separation systems focus on producing a monaural estimate for each target speaker, and thus do not preserve the interaural cues, which are crucial for human listeners to perform sound localization and lateralization. In this study, we address talker-independent binaural speaker separation with interaural cues preserved in the estimated binaural signals. Specifically, we extend a newly developed gated recurrent neural network for monaural separation by additionally incorporating self-attention mechanisms and dense connectivity. We develop an end-to-end multiple-input multiple-output system, which directly maps from the binaural waveform of the mixture to those of the speech signals. The experimental results show that our proposed approach achieves significantly better separation performance than a recent binaural separation approach. In addition, our approach effectively preserves the interaural cues, which improves the accuracy of sound localization.
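The interaural cues referred to in the abstract are conventionally the interaural level difference (ILD) and the interaural time difference (ITD). The following is a minimal illustrative sketch, not code from the paper: the function names are assumptions, and the ITD is estimated with a standard cross-correlation peak search, which is one common way such cues are measured when evaluating binaural output.

```python
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Interaural level difference in dB: log energy ratio of the two channels."""
    return 10.0 * np.log10((np.sum(left ** 2) + eps) / (np.sum(right ** 2) + eps))

def itd_seconds(left, right, fs):
    """Interaural time difference: lag (in seconds) at which the right channel
    best aligns with the left, found via the cross-correlation peak.
    Positive values mean the right channel lags the left."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / fs

# Synthetic binaural pair: right channel is the left one, attenuated by half
# and delayed by 10 samples (a source on the listener's left side).
fs = 16000
rng = np.random.default_rng(0)
left = rng.standard_normal(1024)
right = np.zeros_like(left)
right[10:] = 0.5 * left[:-10]

ild = ild_db(left, right)       # close to 10*log10(4) ≈ 6.02 dB
itd = itd_seconds(left, right, fs)  # close to 10 / 16000 s
```

A cue-preservation evaluation of the kind the paper reports would compare these quantities between the clean binaural references and the estimated binaural signals, typically over short frames rather than whole utterances.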

Original language: American English
Article number: 9292089
Pages (from-to): 26-30
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 28
DOIs
State: Published - 2021

Bibliographical note

Publisher Copyright:
© 1994-2012 IEEE.

Keywords

  • Binaural speaker separation
  • interaural cue preservation
  • self-attention
  • time-domain
