Abstract
We consider the problem of learning a similarity function from a set of positive equivalence constraints, i.e. 'similar' point pairs. We define the similarity in information-theoretic terms, as the gain in coding length when shifting from independent encoding of the pair to joint encoding. Under simple Gaussian assumptions, this formulation leads to a non-Mahalanobis similarity function which is efficient and simple to learn. This function can be viewed as a likelihood ratio test, and we show that the optimal similarity-preserving projection of the data is a variant of Fisher Linear Discriminant. We also show that under some naturally occurring sampling conditions of equivalence constraints, this function converges to a known Mahalanobis distance (RCA, Relevant Component Analysis). The suggested similarity function exhibits superior performance over alternative Mahalanobis distances learnt from the same data. Its superiority is demonstrated in the context of image retrieval and graph-based clustering, using a large number of data sets.
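The coding-length similarity described above can be made concrete. The sketch below is not the authors' code; it assumes the marginal distribution p(x) and the joint distribution p(x, x') over similar pairs are Gaussians fit by maximum likelihood, and the names `fit_coding_similarity` and `sim` are hypothetical. The returned score, log p(x1, x2) - log p(x1) - log p(x2), is the gain in coding length when moving from independent to joint encoding, which can also be read as a log-likelihood ratio.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_coding_similarity(X, pairs):
    """Fit Gaussian models for a coding-length similarity.

    X     : (n, d) array of data points, used for the marginal Gaussian.
    pairs : iterable of (i, j) index pairs given as positive
            equivalence constraints ('similar' pairs).
    """
    # Marginal Gaussian p(x), used for independent encoding.
    marg = multivariate_normal(X.mean(axis=0),
                               np.cov(X, rowvar=False),
                               allow_singular=True)
    # Joint Gaussian p(x, x') over concatenated similar pairs; each
    # pair is added in both orders so the model is symmetric in its
    # two arguments.
    Z = np.vstack([np.hstack([X[i], X[j]]) for i, j in pairs] +
                  [np.hstack([X[j], X[i]]) for i, j in pairs])
    joint = multivariate_normal(Z.mean(axis=0),
                                np.cov(Z, rowvar=False),
                                allow_singular=True)

    def sim(x1, x2):
        # Gain in coding length when shifting from independent to
        # joint encoding: log p(x1, x2) - log p(x1) - log p(x2).
        return (joint.logpdf(np.hstack([x1, x2]))
                - marg.logpdf(x1) - marg.logpdf(x2))

    return sim

# Toy usage (shapes only): 100 points in R^5, 20 similar pairs.
X = np.random.randn(100, 5)
pairs = [(2 * k, 2 * k + 1) for k in range(20)]
sim = fit_coding_similarity(X, pairs)
score = sim(X[0], X[1])  # higher = cheaper to encode the pair jointly
```

Note that because only the two Gaussians enter the score, learning reduces to estimating means and covariances from the data and the constrained pairs, which is consistent with the abstract's claim that the function is efficient and simple to learn.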
Original language | English |
---|---|
Pages | 65-72 |
Number of pages | 8 |
State | Published - 2007 |
Event | 24th International Conference on Machine Learning, ICML 2007 - Corvallis, OR, United States |
Duration | 20 Jun 2007 → 24 Jun 2007 |
Conference
Conference | 24th International Conference on Machine Learning, ICML 2007 |
---|---|
Country/Territory | United States |
City | Corvallis, OR |
Period | 20/06/07 → 24/06/07 |