Vid2speech: Speech reconstruction from silent video

Ariel Ephrat, Shmuel Peleg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

91 Scopus citations

Abstract

Speechreading is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible acoustic speech signal from silent video frames of a speaking person. The proposed CNN generates sound features for each frame based on its neighboring frames. Waveforms are then synthesized from the learned speech features to produce intelligible speech. We show that by leveraging the automatic feature learning capabilities of a CNN, we can obtain state-of-the-art word intelligibility on the GRID dataset, and show promising results for learning out-of-vocabulary (OOV) words.
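
To make the abstract's pipeline concrete, below is a minimal sketch (not the authors' architecture) of the core idea: a CNN that takes a short window of video frames and regresses an acoustic feature vector for the center frame, from which a waveform could later be synthesized. The window size, feature dimension, layer sizes, and the use of PyTorch are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of video-frames-to-speech-features regression.
# All hyperparameters below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class FramesToSpeechFeatures(nn.Module):
    def __init__(self, num_frames=9, feature_dim=32):
        super().__init__()
        # The neighboring frames are stacked as input channels.
        self.conv = nn.Sequential(
            nn.Conv2d(num_frames, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress acoustic features (e.g., spectral coefficients) for the
        # center frame of the window.
        self.head = nn.Linear(128, feature_dim)

    def forward(self, frames):
        # frames: (batch, num_frames, height, width), grayscale mouth-region crops
        x = self.conv(frames).flatten(1)
        return self.head(x)

if __name__ == "__main__":
    model = FramesToSpeechFeatures()
    window = torch.randn(4, 9, 128, 128)   # 4 windows of 9 frames each
    features = model(window)
    print(features.shape)                  # torch.Size([4, 32])
```

In this sketch the model would be trained with a regression loss against acoustic features extracted from the ground-truth audio; waveform synthesis from the predicted features is a separate step, as the abstract describes.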

Original language: English
Title of host publication: 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5095-5099
Number of pages: 5
ISBN (Electronic): 9781509041176
DOIs
State: Published - 16 Jun 2017
Event: 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - New Orleans, United States
Duration: 5 Mar 2017 - 9 Mar 2017

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Conference

Conference: 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017
Country/Territory: United States
City: New Orleans
Period: 5/03/17 - 9/03/17

Bibliographical note

Publisher Copyright:
© 2017 IEEE.

Keywords

  • articulatory-to-acoustic mapping
  • neural networks
  • speech intelligibility
  • speechreading
  • visual speech processing

