Abstract
Deep learning has become the method of choice for many machine learning tasks in recent years, especially for multi-class classification. The most common loss function used in this context is the cross-entropy loss. While this loss is insensitive to the identity of the assigned class in the case of misclassification, in practice it is very common to have imbalanced sensitivity to error, meaning that some wrong assignments are much worse than others. Here we present the bilinear loss (and the related log-bilinear loss), which differentially penalizes the different wrong assignments of the model. We thoroughly test the proposed method using standard models and benchmark image datasets.
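The abstract does not reproduce the loss definition itself; as a rough illustration only, the sketch below shows one way a per-pair penalty matrix can be combined bilinearly with the predicted class probabilities so that different wrong assignments incur different costs. The names (`bilinear_style_loss`, `cost_matrix`) are hypothetical, and the paper's exact bilinear and log-bilinear formulations may differ.

```python
import torch

def bilinear_style_loss(logits: torch.Tensor,
                        targets: torch.Tensor,
                        cost_matrix: torch.Tensor) -> torch.Tensor:
    """Expected misclassification penalty under the predicted distribution.

    logits:      (batch, num_classes) raw network outputs
    targets:     (batch,) integer class labels
    cost_matrix: (num_classes, num_classes) penalty for predicting class j
                 when the true class is i (zero on the diagonal)
    """
    probs = torch.softmax(logits, dim=-1)   # predicted class probabilities
    penalties = cost_matrix[targets]        # penalty row of each true class
    return (probs * penalties).sum(dim=-1).mean()

# Example: confusing class 0 with class 2 is penalized ten times more
# than any other error (values chosen arbitrarily for illustration).
num_classes = 3
cost = torch.ones(num_classes, num_classes) - torch.eye(num_classes)
cost[0, 2] = 10.0
logits = torch.randn(8, num_classes, requires_grad=True)
targets = torch.randint(0, num_classes, (8,))
loss = bilinear_style_loss(logits, targets, cost)
loss.backward()  # differentiable, so it can stand in for cross-entropy
```

Setting the cost matrix to all-ones off the diagonal recovers a uniform penalty on errors; increasing individual entries expresses the imbalanced error sensitivity the abstract describes.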
Original language | English |
---|---|
Title of host publication | Proceedings of the First International Workshop on Learning with Imbalanced Domains |
Subtitle of host publication | Theory and Applications |
Publisher | PMLR |
Pages | 141-151 |
Number of pages | 11 |
State | Published - 2017 |
Event | International Workshop on Learning with Imbalanced Domains: Theory and Applications, Skopje, Macedonia, The Former Yugoslav Republic of; Duration: 22 Sep 2017 → 22 Sep 2017; Conference number: 1; https://proceedings.mlr.press/v74 |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 74 |
ISSN (Electronic) | 2640-3498 |
Conference
Conference | International Workshop on Learning with Imbalanced Domains |
---|---|
Abbreviated title | LIDTA 2017 |
Country/Territory | Macedonia, The Former Yugoslav Republic of |
City | Skopje |
Period | 22/09/17 → 22/09/17 |
Internet address | https://proceedings.mlr.press/v74 |