gRoMA: A Tool for Measuring the Global Robustness of Deep Neural Networks

Natan Levy*, Raz Yerushalmi, Guy Katz

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Deep neural networks (DNNs) are at the forefront of cutting-edge technology, and have been achieving remarkable performance in a variety of complex tasks. Nevertheless, their integration into safety-critical systems, such as in the aerospace or automotive domains, poses a significant challenge due to the threat of adversarial inputs: perturbations in inputs that might cause the DNN to make grievous mistakes. Multiple studies have demonstrated that even modern DNNs are susceptible to adversarial inputs, and this risk must thus be measured and mitigated to allow the deployment of DNNs in critical settings. Here, we present gRoMA (global Robustness Measurement and Assessment), an innovative and scalable tool that implements a probabilistic approach to measure the global categorial robustness of a DNN. Specifically, gRoMA measures the probability of encountering adversarial inputs for a specific output category. Our tool operates on pre-trained, black-box classification DNNs, and generates input samples belonging to an output category of interest. It measures the DNN's susceptibility to adversarial inputs around these inputs, and aggregates the results to infer the overall global categorial robustness of the DNN up to some small bounded statistical error. We evaluate our tool on the popular DenseNet DNN model over the CIFAR-10 dataset. Our results reveal significant gaps in the robustness of the different output categories. This experiment demonstrates the usefulness and scalability of our approach and its potential for allowing DNNs to be deployed within critical systems of interest.
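For intuition, the following is a minimal Monte Carlo sketch of the kind of black-box, per-category measurement the abstract describes; it is not the authors' gRoMA implementation. All names (predict, samples, epsilon, n_perturbations) are hypothetical, and it assumes inputs normalized to [0, 1] and uniform random perturbations as a stand-in for whatever perturbation model the tool actually uses.

import numpy as np

def categorial_susceptibility(predict, samples, epsilon=0.01,
                              n_perturbations=100, seed=0):
    # predict: black-box callable mapping a batch of inputs (N, ...) to labels (N,)
    # samples: inputs that all belong to the output category of interest
    rng = np.random.default_rng(seed)
    flip_rates = []
    for x in samples:
        base = predict(x[None])[0]  # label of the unperturbed input
        noise = rng.uniform(-epsilon, epsilon,
                            size=(n_perturbations,) + x.shape)
        perturbed = np.clip(x[None] + noise, 0.0, 1.0)  # stay in valid input range
        flip_rates.append(np.mean(predict(perturbed) != base))
    flip_rates = np.asarray(flip_rates)
    # Aggregate into a per-category estimate; the normal-approximation
    # half-width stands in for the "small bounded statistical error".
    mean = flip_rates.mean()
    half_width = 1.96 * flip_rates.std(ddof=1) / np.sqrt(len(flip_rates))
    return mean, half_width

Running such an estimate once per output category would yield the kind of per-category robustness profile in which the evaluation on DenseNet over CIFAR-10 reveals significant gaps.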

Original language: English
Title of host publication: Bridging the Gap Between AI and Reality - 1st International Conference, AISoLA 2023, Proceedings
Editors: Bernhard Steffen
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 160-170
Number of pages: 11
ISBN (Print): 9783031460012
DOIs
State: Published - 2023
Event: 1st International Conference on Bridging the Gap between AI and Reality, AISoLA 2023 - Crete, Greece
Duration: 23 Oct 2023 - 28 Oct 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14380 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 1st International Conference on Bridging the Gap between AI and Reality, AISoLA 2023
Country/Territory: Greece
City: Crete
Period: 23/10/23 - 28/10/23

Bibliographical note

Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Keywords

  • Adversarial Examples
  • Categorial Robustness
  • Deep Neural Networks
  • Global Robustness
  • Regulation
  • Safety Critical
