Reluplex: a calculus for reasoning about deep neural networks

Guy Katz*, Clark Barrett*, David L. Dill, Kyle Julian, Mykel J. Kochenderfer

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

14 Scopus citations


Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks that could be verified previously.
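The core difficulty the abstract alludes to is that ReLU, y = max(0, x), is piecewise linear but non-convex, so it cannot be handled directly by the simplex method. A standard way to reason about it, and the idea Reluplex refines, is to split each ReLU into two linear phases (inactive: x ≤ 0, y = 0; active: x ≥ 0, y = x). The sketch below is illustrative only and is not the paper's algorithm; the function names and interval representation are assumptions for this example.

```python
def relu(x):
    """Rectified Linear Unit: the activation discussed in the abstract."""
    return max(0.0, x)

def relu_cases(x_lo, x_hi):
    """Split the constraint y == relu(x), with x in [x_lo, x_hi], into its
    feasible linear phases. Each case is (name, x-interval, y-interval).
    This is a toy interval version of the case split; Reluplex itself
    works lazily inside a simplex-style procedure to avoid eager splitting."""
    cases = []
    if x_lo <= 0.0:  # inactive phase is feasible: x <= 0 forces y == 0
        cases.append(("inactive", (x_lo, min(x_hi, 0.0)), (0.0, 0.0)))
    if x_hi >= 0.0:  # active phase is feasible: x >= 0 forces y == x
        cases.append(("active", (max(x_lo, 0.0), x_hi), (max(x_lo, 0.0), x_hi)))
    return cases

# When the input interval straddles zero, both phases must be explored,
# which is the source of the exponential worst case that Reluplex mitigates.
print(relu_cases(-1.0, 2.0))
```

Note that when a neuron's input bounds already fix its sign (e.g. `relu_cases(1.0, 2.0)` yields only the active phase), no split is needed, which is one reason bound information matters for scalability.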

Original language: American English
Pages (from-to): 87-116
Number of pages: 30
Journal: Formal Methods in System Design
Issue number: 1
State: Published - Feb 2022

Bibliographical note

Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.


Keywords

  • Neural networks
  • Satisfiability modulo theories
  • Verification


