Synthesizing spatially complex sound in virtual space: An accurate offline algorithm

Gilad Jacobson*, Iris Poganiatz, Israel Nelken

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

The study of spatial processing in the auditory system usually requires complex experimental setups, using arrays of speakers or speakers mounted on moving arms. These devices, while allowing precise control over the spatial attributes of sound, are complex, expensive and limited. Alternative approaches rely on virtual space sound delivery. In this paper, we describe a virtual space algorithm that enables accurate reconstruction of eardrum waveforms for arbitrary sound sources moving along arbitrary trajectories in space. A physical validation of the synthesis algorithm is performed by comparing waveforms recorded during real motion with waveforms synthesized by the algorithm. As a demonstration of possible applications of the algorithm, virtual motion stimuli are used to reproduce psychophysical results in humans and to study responses of barn owls to auditory motion stimuli.
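The core idea of virtual-space delivery is to filter the source waveform through head-related impulse responses (HRIRs) so that the signal reaching each eardrum matches what a free-field source at that position would produce. The paper's offline algorithm reconstructs this for moving sources; the minimal sketch below is not the authors' algorithm, only an illustration of the piecewise-static approximation it improves upon: the signal is split into short frames, each frame is convolved with the measured HRIR nearest to the source's current azimuth (the dictionary of HRIRs and the frame length are assumptions for the example), and the results are overlap-added.

```python
import numpy as np

def synthesize_moving_source(signal, hrirs, positions, frame_len=256):
    """Piecewise-static sketch of virtual-space synthesis for a moving source.

    signal    : 1-D source waveform (one ear; repeat per ear in practice).
    hrirs     : dict mapping azimuth in degrees -> measured HRIR (1-D array).
    positions : source azimuth for each frame (length = number of frames).
    """
    hop = frame_len
    max_ir = max(len(h) for h in hrirs.values())
    out = np.zeros(len(positions) * hop + max_ir - 1)
    azimuths = np.array(sorted(hrirs))
    for i, az in enumerate(positions):
        frame = signal[i * hop:(i + 1) * hop]
        if frame.size == 0:
            break
        # Pick the measured HRIR closest to the current source azimuth.
        nearest = azimuths[np.argmin(np.abs(azimuths - az))]
        seg = np.convolve(frame, hrirs[nearest])
        out[i * hop:i * hop + len(seg)] += seg  # overlap-add the filtered frame
    return out

# Toy example with two synthetic single-tap "HRIRs" (pure illustration):
hrirs = {0.0: np.array([1.0, 0.0]), 90.0: np.array([0.0, 1.0])}
waveform = synthesize_moving_source(np.ones(512), hrirs, [0.0, 90.0])
```

Switching HRIRs abruptly at frame boundaries introduces audible discontinuities for fast motion, which is one reason an accurate offline reconstruction (as developed in the paper) is needed rather than this naive nearest-filter scheme.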

Original language: English
Pages (from-to): 29-38
Number of pages: 10
Journal: Journal of Neuroscience Methods
Volume: 106
Issue number: 1
DOIs
State: Published - 30 Mar 2001

Bibliographical note

Funding Information:
The authors thank Professor Hermann Wagner and Nachum Ulanovsky for comments on the manuscript, and Yehoshua Yehuda for help with programming the motor. This work was supported by a grant from the German–Israeli Foundation (GIF).

Keywords

  • Auditory motion
  • Barn owl
  • HRTF
  • Physiology
  • Psychophysics
  • Sound localization
  • Sound synthesis
  • Virtual space

