TextonBoost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context

Title: TextonBoost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context
Publication Type: Journal Article
Year of Publication: 2009
Authors: Shotton, J., Winn, J., Rother, C., Criminisi, A.
Journal: International Journal of Computer Vision
Volume: 81
Pagination: 2–23
ISSN: 0920-5691
Keywords: Boosting, Conditional random field, Context, Image understanding, Layout, Object recognition, Piecewise training, Segmentation, Semantic image segmentation, Textons, Texture
Abstract

This paper details a new approach for learning a discriminative model of object classes, incorporating texture, layout, and context information efficiently. The learned model is used for automatic visual understanding and semantic segmentation of photographs. Our discriminative model exploits texture-layout filters, novel features based on textons, which jointly model patterns of texture and their spatial layout. Unary classification and feature selection are achieved using shared boosting to give an efficient classifier that can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating the unary classifier in a conditional random field, which (i) captures the spatial interactions between class labels of neighboring pixels, and (ii) improves the segmentation of specific object instances. Efficient training of the model on large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy is demonstrated on four varied databases: (i) the MSRC 21-class database containing photographs of real objects viewed under general lighting conditions, poses and viewpoints, (ii) the 7-class Corel subset and (iii) the 7-class Sowerby database used in He et al. (Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 695-702, June 2004), and (iv) a set of video sequences of television shows. The proposed algorithm gives competitive and visually pleasing results for objects that are highly textured (grass, trees, etc.), highly structured (cars, faces, bicycles, airplanes, etc.), and even articulated (body, cow, etc.). © 2007 Springer Science+Business Media, LLC.
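The core texture-layout feature described in the abstract can be sketched as follows: the response of feature (r, t) at pixel i is the proportion of pixels with texton index t inside a rectangle r offset from i, computed in constant time via per-texton integral images. This is a minimal illustrative sketch, not the authors' released code; the texton map input (per-pixel cluster indices from a filter bank), the function names, and the rectangle convention are assumptions.

```python
import numpy as np

def texton_integral_images(texton_map, num_textons):
    """Build one integral image per texton channel.

    texton_map: (H, W) int array of per-pixel texton indices.
    Returns an array of shape (num_textons, H+1, W+1) so the count of
    texton t in any axis-aligned rectangle is an O(1) lookup.
    """
    H, W = texton_map.shape
    ii = np.zeros((num_textons, H + 1, W + 1))
    for t in range(num_textons):
        mask = (texton_map == t).astype(float)
        ii[t, 1:, 1:] = mask.cumsum(axis=0).cumsum(axis=1)
    return ii

def texture_layout_response(ii, t, pixel, rect):
    """Response v_(r,t)(i): fraction of pixels carrying texton t inside
    rectangle rect = (top, left, bottom, right), offset relative to pixel.
    """
    y, x = pixel
    top, left, bottom, right = rect
    H, W = ii.shape[1] - 1, ii.shape[2] - 1
    # Clip the offset rectangle to the image bounds.
    y0, y1 = max(y + top, 0), min(y + bottom, H)
    x0, x1 = max(x + left, 0), min(x + right, W)
    if y0 >= y1 or x0 >= x1:
        return 0.0
    count = (ii[t, y1, x1] - ii[t, y0, x1]
             - ii[t, y1, x0] + ii[t, y0, x0])
    area = (bottom - top) * (right - left)  # normalize by full rectangle area
    return count / area
```

In the paper these responses, compared against learned thresholds, form the weak learners that shared boosting selects from; the rectangle offsets and texton indices are proposed at random during training, which is the "random feature selection" the abstract mentions.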

URL: http://jamie.shotton.org/work/code.html
DOI: 10.1007/s11263-007-0109-1
Citation Key: Shotton2009