Houdini: Fooling deep structured visual and speech recognition models with adversarial examples

Moustapha Cisse, Yossi Adi, Natalia Neverova, Joseph Keshet

Research output: Contribution to journal › Conference article › Peer-review

123 Scopus citations

Abstract

Generating adversarial examples is a critical step in evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel, flexible approach named Houdini for generating adversarial examples specifically tailored to the final performance measure of the task considered, even when that measure is combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation, and semantic segmentation. In all cases, the attacks based on Houdini achieve a higher success rate than those based on the traditional surrogates used to train the models, while using a less perceptible adversarial perturbation.
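To make the setting concrete, the following is a minimal sketch of the standard gradient-based attack framework that Houdini builds on: perturbing an input in the direction that increases a differentiable surrogate loss (an FGSM-style step). This is not the Houdini loss itself; the linear model, weights, and data below are hypothetical illustrations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy of a linear model on a single example.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(w, x, y, eps):
    # Gradient of the loss w.r.t. the *input* x is (p - y) * w for this model.
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    # Step in the sign of the gradient, bounded by eps in the L-infinity norm,
    # which keeps the perturbation small and hard to perceive.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # hypothetical trained weights
x = rng.normal(size=4)   # hypothetical clean input
y = 1.0

x_adv = fgsm_perturb(w, x, y, eps=0.3)
print(loss(w, x_adv, y) > loss(w, x, y))  # the adversarial input has higher loss
```

Houdini's contribution, per the abstract, is replacing the training surrogate (here, cross-entropy) with a loss tailored to the task's true, possibly non-decomposable performance measure, so the same perturbation machinery degrades the metric that actually matters.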

Original language: English
Pages (from-to): 6978-6988
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
Volume: 2017-December
State: Published - 2017
Externally published: Yes
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017

Bibliographical note

Publisher Copyright:
© 2017 Neural information processing systems foundation. All rights reserved.
