Minimal modifications of deep neural networks using verification

Ben Goldberger, Yossi Adi, Joseph Keshet, Guy Katz

Research output: Contribution to journal › Conference article › peer-review

28 Scopus citations


Deep neural networks (DNNs) are revolutionizing the way complex systems are designed, developed and maintained. As part of the life cycle of DNN-based systems, there is often a need to modify a DNN in subtle ways that affect certain aspects of its behavior, while leaving other aspects of its behavior unchanged (e.g., if a bug is discovered and needs to be fixed, without altering other functionality). Unfortunately, retraining a DNN is often difficult and expensive, and may produce a new DNN that is quite different from the original. We leverage recent advances in DNN verification and propose a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior. Using a proof-of-concept implementation, we demonstrate the usefulness and potential of our approach in addressing two real-world needs: (i) measuring the resilience of DNN watermarking schemes; and (ii) bug repair in already-trained DNNs.
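The abstract does not spell out the encoding, but the core single-layer idea can be illustrated concretely: with the earlier layers frozen, the hidden activations for a given input are fixed, so a correctness constraint on the output becomes linear in the last-layer weights, and the minimal L2 change satisfying it has a closed form. The sketch below is an illustration of that idea under these assumptions, not the paper's actual verifier-based method; all names are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def minimal_last_layer_fix(W1, W2, x, correct, wrong, margin=0.1):
    """Return a copy of W2 with the smallest L2 change such that, on input x,
    class `correct` outscores class `wrong` by at least `margin`.
    With the hidden activations h fixed, the constraint
    (W2[correct] - W2[wrong] + delta) @ h >= margin is linear in the
    weight change, so the minimal change is a closed-form rank-1 update."""
    h = relu(W1 @ x)                      # hidden activations (held fixed)
    gap = margin - (W2[correct] - W2[wrong]) @ h
    W2_new = W2.copy()
    if gap > 0 and h @ h > 0:             # modify only if the constraint is violated
        alpha = gap / (2.0 * (h @ h))
        W2_new[correct] += alpha * h      # push the correct class's score up
        W2_new[wrong]   -= alpha * h      # push the wrong class's score down
    return W2_new

# Toy example: a 2-layer network that misclassifies an input x.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
x = rng.normal(size=4)
scores = W2 @ relu(W1 @ x)
wrong = int(np.argmax(scores))
correct = (wrong + 1) % 3                 # pretend this is the true label
W2_fixed = minimal_last_layer_fix(W1, W2, x, correct, wrong)
new_scores = W2_fixed @ relu(W1 @ x)
```

The update repairs the behavior on x without retraining; because the change is norm-minimal, it is less likely to perturb the network's behavior on other inputs, which is the property the paper's verification-based approach makes provable.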

Original language: American English
Pages (from-to): 260-278
Number of pages: 19
Journal: EPiC Series in Computing
State: Published - 2020
Event: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning, LPAR-23 2020 - Alicante, Spain
Duration: 22 May 2020 - 27 May 2020

Bibliographical note

Funding Information:
In the future we plan to extend our technique so that it can be applied when the changes to a DNN occur in more than one layer. In order to overcome the highly non-linear and non-convex nature of the problem, we plan to apply compositional techniques: i.e., to break the DNN down into smaller DNNs, so that the changes to each smaller DNN will only occur in a single layer. We will then apply our technique to each smaller DNN separately, and use the results to draw conclusions regarding changes to the original DNN as a whole. In addition, we plan to identify and research additional use cases, beyond watermark resilience and bug correction, where our technique could prove useful.

Acknowledgments. The project was partially supported by grants from the Binational Science Foundation (2017662) and the Israel Science Foundation (683/18).
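The compositional plan above can be sketched abstractly: represent the DNN as a sequence of layers and split it so that the layer being modified is the last layer of a prefix sub-network, where the single-layer technique applies directly. This is a minimal illustration of the decomposition idea only; the function and variable names are assumptions, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(layers, x):
    """Apply a list of (weight-matrix, activation) layers in sequence."""
    for W, act in layers:
        x = act(W @ x)
    return x

def split_around(layers, k):
    """Split the network so that layer k is the last layer of the prefix.
    A modification confined to layer k then becomes a single-layer
    (output-layer) change in the prefix sub-network."""
    return layers[:k + 1], layers[k + 1:]

rng = np.random.default_rng(1)
identity = lambda z: z
layers = [(rng.normal(size=(8, 4)), relu),
          (rng.normal(size=(8, 8)), relu),
          (rng.normal(size=(3, 8)), identity)]
x = rng.normal(size=4)

prefix, suffix = split_around(layers, 1)   # isolate changes to layer 1
y_split = forward(suffix, forward(prefix, x))
y_full = forward(layers, x)                # decomposition preserves the output
```

Running the two sub-networks in sequence reproduces the original network's output exactly, so conclusions drawn about the prefix can be lifted back to the whole DNN.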

Publisher Copyright:
© 2020, EasyChair.


