<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jafari, Omid Hosseini</style></author><author><style face="normal" font="default" size="100%">Yang, Michael Ying</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Real-time RGB-D based template matching pedestrian detection</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings - IEEE International Conference on Robotics and Automation</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2016</style></year></dates><volume><style face="normal" font="default" size="100%">2016-June</style></volume><pages><style face="normal" font="default" size="100%">5520–5527</style></pages><isbn><style face="normal" font="default" size="100%">9781467380263</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Pedestrian detection is one of the most popular topics in computer vision and robotics. To address the challenges of multiple pedestrian detection, we present a real-time depth-based template matching people detector. In this paper, we propose different approaches for training the depth-based template. We train multiple templates to handle variations in the pedestrians' upper-body orientations and the different levels of detail in the depth maps of pedestrians at various distances from the camera. We also account for the varying reliability of different regions of the sliding window by proposing a weighted template approach. Furthermore, we combine the depth detector with an appearance-based detector, used as a verifier, to exploit appearance cues and compensate for the limitations of depth data.
We evaluate our method on the challenging ETH dataset sequence and show that it outperforms state-of-the-art approaches.</style></abstract></record></records></xml>