Efficient learning of relational object class models

Aharon Bar Hillel*, Tomer Hertz, Daphna Weinshall

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

29 Scopus citations

Abstract

We present an efficient method for learning part-based object class models. The models include location and scale relations between parts, as well as part appearance. Models are learnt from raw object and background images, represented as an unordered set of features extracted using an interest point detector. The object class is generatively modeled using a simple Bayesian network with a central hidden node containing location and scale information, and nodes describing object parts. The model's parameters, however, are optimized to reduce a loss function which reflects training error, as in discriminative methods. Specifically, the optimization is done using a boosting-like technique with complexity linear in the number of parts and the number of features per image. This efficiency allows our method to learn relational models with many parts and features, and leads to improved results when compared with other methods. Extensive experimental results are described, using some common benchmark datasets and three sets of newly collected data, showing the relative advantage of our method.
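The abstract describes a star-shaped Bayesian network: a central hidden node carries the object's location and scale, and each part node is scored by its appearance plus its location relative to the center, so evaluating one hypothesis costs time linear in (number of parts) × (number of features). The sketch below illustrates that scoring scheme under simplifying assumptions not taken from the paper (isotropic Gaussians, a single fixed-scale center hypothesis, and invented field names such as `app_mean` and `rel_mean`); it is not the authors' implementation.

```python
import numpy as np

def log_gauss(x, mean, var):
    # Log-density of an isotropic Gaussian (a simplifying modeling assumption).
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return -0.5 * np.dot(d, d) / var - 0.5 * d.size * np.log(2 * np.pi * var)

def star_model_score(features, parts, center):
    """Score one center hypothesis under a star-shaped part model.

    features: list of (location, appearance) pairs from an interest point detector.
    parts:    list of dicts with appearance and relative-location Gaussians
              (hypothetical keys: app_mean, app_var, rel_mean, rel_var).
    The double loop is linear in len(parts) * len(features), matching the
    per-image complexity the abstract attributes to each optimization round.
    """
    score = 0.0
    for part in parts:
        # Each part is explained by its best-matching feature (max over features).
        best = max(
            log_gauss(app, part["app_mean"], part["app_var"])
            + log_gauss(np.asarray(loc) - center, part["rel_mean"], part["rel_var"])
            for loc, app in features
        )
        score += best
    return score

# Toy usage with synthetic data; classification would threshold this score.
rng = np.random.default_rng(0)
parts = [{"app_mean": rng.normal(size=4), "app_var": 1.0,
          "rel_mean": rng.normal(size=2), "rel_var": 4.0} for _ in range(3)]
features = [(rng.normal(size=2) * 10, rng.normal(size=4)) for _ in range(20)]
s = star_model_score(features, parts, center=np.zeros(2))
```

Because each part takes the maximum over features, adding a feature can only raise a part's contribution; the discriminative step in the paper then tunes these Gaussian parameters against a training-error loss rather than by maximum likelihood.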

Original language: English
Title of host publication: Proceedings - 10th IEEE International Conference on Computer Vision, ICCV 2005
Pages: 1762-1769
Number of pages: 8
DOIs
State: Published - 2005
Event: 10th IEEE International Conference on Computer Vision, ICCV 2005 - Beijing, China
Duration: 17 Oct 2005 – 20 Oct 2005

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: II

Conference

Conference: 10th IEEE International Conference on Computer Vision, ICCV 2005
Country/Territory: China
City: Beijing
Period: 17/10/05 – 20/10/05
