<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Biagio Brattoli</style></author><author><style face="normal" font="default" size="100%">Uta Büchler</style></author><author><style face="normal" font="default" size="100%">Anna-Sophia Wahl</style></author><author><style face="normal" font="default" size="100%">M. E. Schwab</style></author><author><style face="normal" font="default" size="100%">Björn Ommer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">LSTM Self-Supervision for Detailed Behavior Analysis</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2017</style></year></dates><publisher><style face="normal" font="default" size="100%">(BB and UB contributed equally)</style></publisher><abstract><style face="normal" font="default" size="100%">Behavior analysis provides a crucial non-invasive and
easily accessible diagnostic tool for biomedical research.
A detailed analysis of posture changes during skilled motor tasks can reveal distinct functional deficits and their restoration during recovery. Our specific scenario is a neuroscientific study of rodents recovering from a large sensorimotor cortex stroke, in which skilled forelimb grasping is recorded. Given large amounts of unlabeled videos
that are recorded during such long-term studies, we seek
an approach that captures fine-grained details of posture
and its change during rehabilitation without costly manual
supervision. Therefore, we utilize self-supervision to automatically learn accurate posture and behavior representations for analyzing motor function. Training our model
depends on the following fundamental elements: (i) limb
detection based on a fully convolutional network is initialized solely using motion information, (ii) a novel self-supervised training of LSTMs using only temporal permutation yields a detailed representation of behavior, and (iii) back-propagation of this sequence representation also improves the description of individual postures. We establish a
novel test dataset with expert annotations for evaluation of
fine-grained behavior analysis. Moreover, we demonstrate
the generality of our approach by successfully applying it to
self-supervised learning of human posture on two standard
benchmark datasets.</style></abstract></record></records></xml>