Minimal modifications of deep neural networks using verification

Ben Goldberger, Yossi Adi, Joseph Keshet, Guy Katz

Research output: Contribution to journal › Conference article › peer-review

34 Scopus citations


Deep neural networks (DNNs) are revolutionizing the way complex systems are designed, developed and maintained. As part of the life cycle of DNN-based systems, there is often a need to modify a DNN in subtle ways that affect certain aspects of its behavior, while leaving other aspects of its behavior unchanged (e.g., if a bug is discovered and needs to be fixed, without altering other functionality). Unfortunately, retraining a DNN is often difficult and expensive, and may produce a new DNN that is quite different from the original. We leverage recent advances in DNN verification and propose a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior. Using a proof-of-concept implementation, we demonstrate the usefulness and potential of our approach in addressing two real-world needs: (i) measuring the resilience of DNN watermarking schemes; and (ii) bug repair in already-trained DNNs.
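To illustrate the kind of repair the abstract describes, here is a hypothetical sketch of a minimal, retraining-free patch to a network's final linear layer: given an input that is misclassified, it computes the smallest (Frobenius-norm) weight change that flips the prediction to the desired label. The closed-form two-row update below is only an illustration of the "provably minimal modification" idea; the paper's actual technique relies on DNN-verification queries, and the function name and margin parameter here are invented for the example.

```python
import numpy as np

def minimal_last_layer_patch(W, h, correct, margin=0.1):
    """Return dW of minimal Frobenius norm such that (W + dW) @ h
    scores class `correct` at least `margin` above the current winner.

    W: output-layer weight matrix; h: penultimate-layer activations.
    (Hypothetical helper for illustration, not the paper's method.)
    """
    z = W @ h
    wrong = int(np.argmax(z))
    if wrong == correct:
        return np.zeros_like(W)           # already correct: no change needed
    gap = z[wrong] - z[correct] + margin  # score deficit to overcome
    d = (gap / 2.0) * h / np.dot(h, h)    # split the change across both rows
    dW = np.zeros_like(W)
    dW[correct] += d                      # raise the correct class score
    dW[wrong] -= d                        # lower the competing class score
    return dW

# Toy example: a 2-class linear classifier that mislabels x as class 1.
W = np.array([[1.0, 0.0],
              [0.0, 2.0]])
x = np.array([1.0, 1.0])
dW = minimal_last_layer_patch(W, x, correct=0)
print(int(np.argmax((W + dW) @ x)))  # prints 0: the bug is repaired
```

Splitting the score gap symmetrically between the two affected rows is what makes this particular patch norm-minimal for a single linear layer; for deeper networks, guaranteeing minimality (and not breaking other behavior) is exactly where verification-based reasoning comes in.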

Original language: American English
Pages (from-to): 260-278
Number of pages: 19
Journal: EPiC Series in Computing
State: Published - 2020
Event: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning, LPAR23 2020 - Alicante, Spain
Duration: 22 May 2020 → 27 May 2020

Bibliographical note

Publisher Copyright:
© 2020, EasyChair.

