Controlling Imbalanced Error in Deep Learning with the Log Bilinear Loss.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep learning has become the method of choice for many machine learning tasks in recent years, and especially for multi-class classification. The most common loss function used in this context is the cross-entropy loss. While this function is insensitive to the identity of the assigned class in the case of misclassification, in practice it is very common to have imbalanced sensitivity to error, meaning some wrong assignments are much worse than others. Here we present the bilinear loss (and related log-bilinear loss), which differentially penalizes the different wrong assignments of the model. We thoroughly test the proposed method using standard models and benchmark image datasets.
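The key idea described in the abstract is a class-pair penalty matrix B, where B[i, j] encodes how costly it is to predict class j when the true class is i. A minimal NumPy sketch of losses in this spirit is given below; the exact formulation in the paper may differ, and the function names, the B matrix, and the log(1 - p) form of the log variant are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def bilinear_loss(y_onehot, probs, B):
    """Bilinear-style loss: weight predicted probability mass on each
    wrong class by the penalty matrix B (zeros on the diagonal, so the
    correct class contributes nothing)."""
    return y_onehot @ B @ probs

def log_bilinear_loss(y_onehot, probs, B, eps=1e-12):
    """Log variant (an assumed form): penalize mass on costly wrong
    classes on a log scale, so the loss grows without bound as the
    probability of a heavily penalized wrong class approaches 1."""
    return -(y_onehot @ B @ np.log(1.0 - probs + eps))

# Example with 3 classes: confusing class 0 for class 1 is assumed
# to be five times worse than confusing it for class 2.
B = np.array([[0., 5., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
y = np.array([1., 0., 0.])      # true class: 0
p = np.array([0.7, 0.2, 0.1])   # model's predicted probabilities
print(bilinear_loss(y, p, B))   # 5*0.2 + 1*0.1 = 1.1
```

Unlike cross-entropy, which depends only on the probability assigned to the true class, both losses change when probability mass moves between the two wrong classes, which is what lets them express imbalanced error sensitivity.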
Original language: English
Title of host publication: Proceedings of the First International Workshop on Learning with Imbalanced Domains
Subtitle of host publication: Theory and Applications
Publisher: PMLR
Pages: 141-151
Number of pages: 11
State: Published - 2017
Event: International Workshop on Learning with Imbalanced Domains: Theory and Applications - Skopje, Macedonia, The Former Yugoslav Republic of
Duration: 22 Sep 2017 - 22 Sep 2017
Conference number: 1
https://proceedings.mlr.press/v74

Publication series

Name: Proceedings of Machine Learning Research
Volume: 74
ISSN (Electronic): 2640-3498

Conference

Conference: International Workshop on Learning with Imbalanced Domains
Abbreviated title: LIDTA 2017
Country/Territory: Macedonia, The Former Yugoslav Republic of
City: Skopje
Period: 22/09/17 - 22/09/17
