Shape-based object localization for descriptive classification

Geremy Heitz*, Gal Elidan, Ben Packer, Daphne Koller

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution (peer-reviewed)

Abstract

Discriminative tasks, including object categorization and detection, are central components of high-level computer vision. Sometimes, however, we are interested in more refined aspects of the object in an image, such as pose or particular regions. In this paper we develop a method (LOOPS) for learning a shape and image feature model that can be trained on a particular object class, and used to outline instances of the class in novel images. Furthermore, while the training data consists of uncorresponded outlines, the resulting LOOPS model contains a set of landmark points that appear consistently across instances, and can be accurately localized in an image. Our model achieves state-of-the-art results in precisely outlining objects that exhibit large deformations and articulations in cluttered natural images. These localizations can then be used to address a range of tasks, including descriptive classification, search, and clustering.
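The record does not include code. As a rough, hypothetical illustration of the landmark-based shape modeling idea the abstract invokes, the sketch below fits a classic point distribution model (mean shape plus PCA modes over corresponded landmarks) to synthetic outlines. This is a standard technique in the same family, not the authors' LOOPS model; LOOPS additionally learns correspondences and image features, which are omitted here. All function names and the synthetic data are invented for this example.

```python
# Illustrative only: a point-distribution shape model (PCA over aligned
# landmarks), a simpler relative of the shape component described in the
# abstract. Not the LOOPS model itself.
import numpy as np

def align_to_mean(shapes):
    """Center each shape and scale to unit norm (a crude Procrustes step)."""
    aligned = []
    for s in shapes:
        s = s - s.mean(axis=0)        # remove translation
        s = s / np.linalg.norm(s)     # remove scale
        aligned.append(s)
    return np.stack(aligned)

def fit_shape_model(shapes, n_modes=3):
    """Fit a mean shape plus principal modes of landmark variation.

    shapes: (n_instances, n_landmarks, 2) array of corresponded landmarks.
    Returns (mean, modes), with modes as the top right singular vectors.
    """
    aligned = align_to_mean(shapes)
    X = aligned.reshape(len(shapes), -1)   # flatten to (n, 2 * n_landmarks)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def project(shape, mean, modes):
    """Express a new shape as mean + a few mode coefficients."""
    x = shape - shape.mean(axis=0)
    x = (x / np.linalg.norm(x)).ravel()
    coeffs = modes @ (x - mean)
    recon = (mean + modes.T @ coeffs).reshape(-1, 2)
    return coeffs, recon

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data: 40 noisy copies of a 12-landmark ellipse outline.
    t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
    base = np.stack([2 * np.cos(t), np.sin(t)], axis=1)
    shapes = base + 0.05 * rng.standard_normal((40, 12, 2))
    mean, modes = fit_shape_model(shapes, n_modes=3)
    coeffs, recon = project(shapes[0], mean, modes)
    print("mode coefficients:", np.round(coeffs, 3))
```

A model of this kind captures how landmark positions co-vary across instances of a class; LOOPS combines such a shape prior with local image features so that the landmarks can be localized in cluttered novel images.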

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference
Publisher: Neural Information Processing Systems
Pages: 633-640
Number of pages: 8
ISBN (Print): 9781605609492
State: Published - 2009
Event: 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008 - Vancouver, BC, Canada
Duration: 8 Dec 2008 - 11 Dec 2008

Publication series

Name: Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference

Conference

Conference: 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008
Country/Territory: Canada
City: Vancouver, BC
Period: 8/12/08 - 11/12/08
