Learning rules-first classifiers

Deborah Cohen, Amit Daniely, Amir Globerson, Gal Elidan

Research output: Contribution to conference › Paper › peer-review

Abstract

Complex classifiers may exhibit “embarrassing” failures in cases where humans can easily provide a justified classification. Avoiding such failures is obviously of key importance. In this work, we focus on one such setting, where a label is perfectly predictable if the input contains certain features, or rules, and otherwise it is predictable by a linear classifier. We define a hypothesis class that captures this notion and determine its sample complexity. We also give evidence that efficient algorithms cannot achieve this sample complexity. We then derive a simple and efficient algorithm and show that its sample complexity is close to optimal, among efficient algorithms. Experiments on synthetic and sentiment analysis data demonstrate the efficacy of the method, both in terms of accuracy and interpretability.
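
The setting described above — rules that fully determine the label whenever they fire, with a linear classifier covering the remaining inputs — can be illustrated with a minimal sketch. The snippet below is a toy illustration under assumed conventions (binary features, a dictionary of rule features with hypothetical names and parameters), not the authors' algorithm from the paper.

```python
# Toy "rules-first" predictor: rule features, when present, override the
# fallback linear classifier. Illustrative sketch only; names and parameters
# are hypothetical.
import numpy as np

class RulesFirstClassifier:
    def __init__(self, rules, w, b=0.0):
        # rules: dict mapping a feature index to the label (+1/-1) it forces
        # w, b: parameters of the fallback linear classifier
        self.rules = rules
        self.w = np.asarray(w, dtype=float)
        self.b = float(b)

    def predict_one(self, x):
        # If any rule feature is present, its label decides the prediction.
        for idx, label in self.rules.items():
            if x[idx] != 0:
                return label
        # Otherwise fall back to the sign of the linear classifier.
        return 1 if self.w @ x + self.b >= 0 else -1

# Hypothetical usage on two toy binary feature vectors.
clf = RulesFirstClassifier(rules={0: 1, 3: -1}, w=[0.2, -0.5, 0.1, 0.0])
print(clf.predict_one(np.array([1, 0, 0, 0])))  # rule on feature 0 fires -> +1
print(clf.predict_one(np.array([0, 1, 1, 0])))  # no rule fires -> linear sign (-1)
```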

Original language: English
State: Published - 2020
Event: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan
Duration: 16 Apr 2019 → 18 Apr 2019

Conference

Conference: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019
Country/Territory: Japan
City: Naha
Period: 16/04/19 → 18/04/19

Bibliographical note

Publisher Copyright:
© 2019 by the author(s).
