Neural Network Verification Using Residual Reasoning

Yizhak Yisrael Elboher*, Elazar Cohen, Guy Katz

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

With the increasing integration of neural networks as components in mission-critical systems, there is a growing need to ensure that they satisfy various safety and liveness requirements. In recent years, numerous sound and complete verification methods have been proposed towards that end, but these typically suffer from severe scalability limitations. Recent work has proposed enhancing such verification techniques with abstraction-refinement capabilities, which have been shown to boost scalability: instead of verifying a large and complex network, the verifier constructs and then verifies a much smaller network, whose correctness implies the correctness of the original network. A shortcoming of such a scheme is that if verifying the smaller network fails, the verifier needs to perform a refinement step that increases the size of the network being verified, and then start verifying the new network from scratch, effectively "wasting" its earlier work on verifying the smaller network. In this paper, we present an enhancement to abstraction-based verification of neural networks, by using residual reasoning: the process of utilizing information acquired when verifying an abstract network, in order to expedite the verification of a refined network. In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly, and allows it to focus on areas where bugs might be discovered. We implemented our approach as an extension to the Marabou verifier, and obtained promising results.
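The abstraction-refinement loop with residual reasoning described in the abstract can be illustrated with a toy sketch. Everything below is invented for illustration (the paper's actual implementation extends the Marabou verifier and operates on real verification queries): networks are plain functions, the search space is a finite list of case splits, and the key idea is that case splits proven safe on a coarser abstraction are carried forward and not re-verified after refinement.

```python
def verify_cases(net, cases, check):
    """Check each case split; return (cases proven safe, first counterexample)."""
    safe = set()
    for case in cases:
        cex = check(net, case)          # None if the case is safe, else a witness
        if cex is None:
            safe.add(case)
        else:
            return safe, cex
    return safe, None


def verify_with_residual_reasoning(nets, cases, check, is_spurious):
    """Verify over a chain of networks, most abstract first, reusing the
    set of case splits already proven safe across refinement steps."""
    proven_safe = set()                 # residual knowledge kept between refinements
    cex = None
    for net in nets:
        pending = [c for c in cases if c not in proven_safe]
        safe, cex = verify_cases(net, pending, check)
        proven_safe |= safe
        if cex is None:
            return "UNSAT", None        # property holds on an over-approximation
        if not is_spurious(cex):
            return "SAT", cex           # genuine counterexample on the original net
        # counterexample was spurious: move to the next (refined) network,
        # but keep everything already proven safe
    return "UNKNOWN", cex               # refinement chain exhausted


# Toy demo: property "output < 8"; the abstraction over-approximates the output.
original = lambda x: x
abstract = lambda x: 2 * x
check = lambda net, x: x if net(x) >= 8 else None
result = verify_with_residual_reasoning(
    [abstract, original], [1, 2, 3, 4, 5],
    check, is_spurious=lambda x: original(x) < 8)
# result == ("UNSAT", None); case splits 1-3, proven safe on the
# abstraction, are skipped when the refined network is verified.
```

This mirrors the shape of the scheme only loosely: in the actual setting the "cases" correspond to regions of the verifier's search space, and soundness rests on the abstraction over-approximating the refined network.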

Original language: American English
Title of host publication: Software Engineering and Formal Methods - 20th International Conference, SEFM 2022, Proceedings
Editors: Bernd-Holger Schlingloff, Ming Chai
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 173-189
Number of pages: 17
ISBN (Print): 9783031171079
DOIs
State: Published - 2022
Event: 20th International Conference on Software Engineering and Formal Methods, SEFM 2022 - Berlin, Germany
Duration: 26 Sep 2022 → 30 Sep 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13550 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 20th International Conference on Software Engineering and Formal Methods, SEFM 2022
Country/Territory: Germany
City: Berlin
Period: 26/09/22 → 30/09/22

Bibliographical note

Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Keywords

  • Abstraction refinement
  • Incremental reasoning
  • Neural networks
  • Residual reasoning
  • Verification
