<?xml version="1.0" encoding="UTF-8"?>
<xml>
  <records>
    <record>
      <source-app name="Biblio" version="7.x">Drupal-Biblio</source-app>
      <ref-type>47</ref-type>
      <contributors>
        <authors>
          <author>Thomas Hörnlein</author>
          <author>Bernd Jähne</author>
          <author>Herbert Süße</author>
        </authors>
        <secondary-authors>
          <author>Joachim Denzler</author>
          <author>Gunther Notni</author>
        </secondary-authors>
      </contributors>
      <titles>
        <title>Boosting shift-invariant features</title>
        <secondary-title>Pattern Recognition</secondary-title>
      </titles>
      <dates>
        <year>2009</year>
      </dates>
      <publisher>Springer</publisher>
      <volume>5748</volume>
      <pages>121-130</pages>
      <language>eng</language>
      <abstract>This work presents a novel method for training shift-invariant features within a Boosting framework. Shift-invariance is achieved by features that perform local convolutions followed by subsampling. Other systems that use this type of feature, e.g. Convolutional Neural Networks, rely on complex feed-forward networks with multiple layers. In contrast, the proposed system adds features one at a time, using smoothing spline base classifiers. Feature training optimizes the base classifier cost, and the Boosting sample-reweighting scheme ensures that the features are both descriptive and independent. Our system has fewer design parameters than comparable systems, so adapting it to new problems is simple. In addition, the stage-wise training makes it highly scalable. Experimental results demonstrate the competitiveness of our approach.</abstract>
      <custom3>Lecture Notes in Computer Science</custom3>
    </record>
  </records>
</xml>