Sending private data to neural network applications raises many privacy concerns. The cryptography community has developed a variety of secure computation methods to address such privacy issues. As generic techniques for secure computation are typically prohibitively expensive, efforts have focused on optimizing these cryptographic tools. We instead propose to optimize the design of crypto-oriented neural architectures, introducing a novel Partial Activation layer. The proposed layer is much faster under secure computation because it contains fewer non-linear operations. Evaluating our method on three state-of-the-art architectures (SqueezeNet, ShuffleNetV2, and MobileNetV2) demonstrates significant improvements in the efficiency of secure inference on common evaluation metrics.
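The abstract's key idea, a layer that applies a non-linearity to only part of its input, can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the paper's implementation; the function name `partial_activation` and the channel-splitting ratio are assumptions made for the example.

```python
import numpy as np

def partial_activation(x, ratio=0.5):
    """Hypothetical sketch of a Partial Activation layer.

    Applies ReLU to only the first `ratio` fraction of channels and
    passes the remaining channels through unchanged, so the layer
    performs fewer non-linear operations (the expensive part under
    secure computation).
    x: array of shape (channels, ...).
    """
    num_channels = x.shape[0]
    k = int(num_channels * ratio)
    activated = np.maximum(x[:k], 0.0)  # non-linear (ReLU) branch
    identity = x[k:]                    # linear (identity) branch
    return np.concatenate([activated, identity], axis=0)
```

Under this sketch, with `ratio=0.5` only half of the channels incur the cost of a secure non-linear evaluation; the identity branch is free for the cryptographic protocol.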
|Original language||American English|
|Number of pages||5|
|Journal||Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing|
|State||Published - 2021|
|Event||2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada|
Duration: 6 Jun 2021 → 11 Jun 2021
Bibliographical note (Funding Information):
This research has been supported by the Israel Ministry of Science and Technology, by the Israel Science Foundation, and by the European Union's Horizon 2020 Framework Programme (H2020) via an ERC Grant (Grant No. 714253).
© 2021 IEEE
- NN Architecture
- Secure Inference