Abstract
Deep neural networks (DNNs) are revolutionizing the way complex systems are designed, developed and maintained. As part of the life cycle of DNN-based systems, there is often a need to modify a DNN in subtle ways that affect certain aspects of its behavior, while leaving other aspects of its behavior unchanged (e.g., if a bug is discovered and needs to be fixed, without altering other functionality). Unfortunately, retraining a DNN is often difficult and expensive, and may produce a new DNN that is quite different from the original. We leverage recent advances in DNN verification and propose a technique for modifying a DNN according to certain requirements, in a way that is provably minimal, does not require any retraining, and is thus less likely to affect other aspects of the DNN’s behavior. Using a proof-of-concept implementation, we demonstrate the usefulness and potential of our approach in addressing two real-world needs: (i) measuring the resilience of DNN watermarking schemes; and (ii) bug repair in already-trained DNNs.
Original language | English |
---|---|
Pages (from-to) | 260-278 |
Number of pages | 19 |
Journal | EPiC Series in Computing |
Volume | 73 |
DOIs | |
State | Published - 2020 |
Event | 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning, LPAR23 2020 - Alicante, Spain (22 May 2020 → 27 May 2020) |
Bibliographical note
Publisher Copyright: © 2020, EasyChair.