Abstract
Deep Neural Networks have recently achieved great success, enabling breakthroughs in notoriously challenging problems. Training these networks is computationally expensive and requires vast amounts of training data. Selling such pre-trained models can, therefore, be a lucrative business model. Unfortunately, once the models are sold, they can easily be copied and redistributed. To avoid this, a tracking mechanism that identifies models as the intellectual property of a particular vendor is necessary. In this work, we present an approach for watermarking Deep Neural Networks in a black-box way. Our scheme works for general classification tasks and can easily be combined with current learning algorithms. We show experimentally that such a watermark has no noticeable impact on the primary task the model is designed for, and we evaluate the robustness of our proposal against a multitude of practical attacks. Moreover, we provide a theoretical analysis relating our approach to previous work on backdooring.
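The abstract describes the scheme only at a high level; a minimal sketch of the general idea (not the authors' exact construction) is a backdoor-based watermark: the owner embeds a secret "trigger set" of out-of-distribution inputs with arbitrarily assigned labels into the training data, and later verifies ownership in a black-box way by querying a suspect model on those triggers. All function names, the threshold value, and the toy "model" below are illustrative assumptions.

```python
import random

def make_trigger_set(n, input_dim, num_classes, seed=0):
    """Generate abstract inputs with randomly assigned labels (the secret watermark key)."""
    rng = random.Random(seed)
    return [([rng.random() for _ in range(input_dim)], rng.randrange(num_classes))
            for _ in range(n)]

def watermark_training_data(train_data, trigger_set):
    """Mix the trigger set into the training data so the model memorizes it (the backdoor)."""
    return train_data + trigger_set

def verify_watermark(predict_fn, trigger_set, threshold=0.9):
    """Black-box verification: only query access to the suspect model is needed.

    If the model agrees with the secret labels far more often than chance,
    it almost certainly carries the watermark.
    """
    correct = sum(1 for x, y in trigger_set if predict_fn(x) == y)
    return correct / len(trigger_set) >= threshold

# Toy usage: a "model" that memorized the trigger set passes verification.
trigger = make_trigger_set(n=20, input_dim=4, num_classes=10, seed=42)
memorized = {tuple(x): y for x, y in trigger}
owned = verify_watermark(lambda x: memorized[tuple(x)], trigger)
```

Because the trigger inputs are random and their labels are assigned arbitrarily, an independently trained model would match them only at chance level, which is what makes the accuracy threshold meaningful.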
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 27th USENIX Security Symposium |
| Publisher | USENIX Association |
| Pages | 1615-1631 |
| Number of pages | 17 |
| ISBN (Electronic) | 9781939133045 |
| State | Published - 2018 |
| Externally published | Yes |
| Event | 27th USENIX Security Symposium, Baltimore, United States. Duration: 15 Aug 2018 → 17 Aug 2018 |
Publication series

| Name | Proceedings of the 27th USENIX Security Symposium |
|---|---|
Conference

| Conference | 27th USENIX Security Symposium |
|---|---|
| Country/Territory | United States |
| City | Baltimore |
| Period | 15/08/18 → 17/08/18 |
Bibliographical note
Publisher Copyright: © 2018 Proceedings of the 27th USENIX Security Symposium. All rights reserved.