<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Miguel Bautista</style></author><author><style face="normal" font="default" size="100%">Sanakoyeu, A.</style></author><author><style face="normal" font="default" size="100%">Sutter, E.</style></author><author><style face="normal" font="default" size="100%">Björn Ommer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">CliqueCNN: Deep Unsupervised Exemplar Learning</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2016</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://arxiv.org/abs/1608.08792</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pub-location><style face="normal" font="default" size="100%">Barcelona</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Exemplar learning is a powerful paradigm for discovering visual similarities in
an unsupervised manner.  In this context, however, the recent breakthrough in
deep learning could not yet unfold its full potential.  With only a single positive
sample, a great imbalance between one positive and many negatives, and unreliable
relationships between most samples, training of Convolutional Neural Networks is
impaired. Given weak estimates of local distance we propose a single optimization
problem to extract batches of samples with mutually consistent relations. Conflicting
relations are distributed over different batches and similar samples are grouped
into compact cliques. Learning exemplar similarities is framed as a sequence of
clique categorization tasks. The CNN then consolidates transitivity relations within
and between cliques and learns a single representation for all samples without
the need for labels. The proposed unsupervised approach has shown competitive
performance on detailed posture analysis and object classification.</style></abstract></record></records></xml>