ECCV 2016 Workshop on Datasets and Performance Analysis in Early Vision

Saturday, Oct. 8, 10am–6pm, Oudemanhuispoort, University of Amsterdam

Workshop Program

10:00 Welcome Address: Michael Goesele (TU Darmstadt)

Session 1: Questioning the datasets/metrics - What is a good dataset/metric?

10:15 Invited Talk by Oliver Zendel (AIT Vienna): "Creating Good Test Data" [slides]
11:00 Invited Talk by Jordi Pont-Tuset (ETH Zurich): "Meta Measures: How to Quantitatively Compare Evaluation Measures" [slides]

11:45 Lunch Break

Session 2: What happened so far … in tracking?

13:30 Invited Talk by Laura Leal-Taixé (TU Munich): "Benchmarking Multi-target Tracking" [slides]
14:15 Invited Talk by Matej Kristan (University of Ljubljana): "The VOT Visual Object Tracking Challenge – Four Years of Benchmarking Trackers" [slides]

15:00 Poster Session + Coffee

Session 3: Creative testing / data acquisition

15:45 Invited Talk by Hideaki Uchiyama (Kyushu University): "Tracking competitions for evaluating visual SLAM techniques" [slides]
16:30 Invited Talk by Stephan Richter (TU Darmstadt): "Playing for Data: Ground Truth from Computer Games"

17:15 Invited Talk by Matteo Ruggero Ronchi (Caltech) and Genevieve Patterson (Brown University): "The Common Visual Data Foundation (CVDF)" [slides]
17:30 Panel Discussion (all speakers)

18:00 Closing Remarks: Michael Goesele (TU Darmstadt)

Goals and Topics

In computer vision, benchmarks are indispensable for demonstrating and tracking scientific progress, as well as for sparking interest in new research areas. For this workshop, we focus on early vision communities such as stereo, flow, intrinsic images, light fields, and shape from X.

We aim to bring these communities together to share, discuss, and consolidate insights and best practices for:

  1. dataset design
  2. data generation
  3. data-aware algorithm evaluation

Our workshop aims to raise awareness of the new opportunities, diversified requirements, and scientific challenges in dataset generation and performance evaluation.

In open panel discussions with leading dataset and performance analysis experts, we plan to address research questions such as:
  • What kind of additional dataset(s) would be most valuable for the community?
  • What constitutes a good dataset for learning?
  • Should datasets for algorithm evaluation be designed differently from those for learning?
  • How should dataset peculiarities be taken into account when analyzing algorithm performance?
  • What can we learn from existing benchmarks, datasets and metrics?
  • How can we combine the benefits of real, engineered, and synthetic data?
  • Can we trust human annotations as ground truth?
  • Given a real-world application, which ground truth dataset is best for studying the performance?

Previous Workshop: CVPR 2015 Workshop on Performance Metrics for Correspondence Problems

Call for Posters

We cordially invite submissions of posters presenting work on datasets and performance analysis in stereo, flow, intrinsic images, light fields, shape from X, etc. Your work can be recently published or still in progress, as long as it has already produced exciting results that you want to share or discuss. If you want to present a poster, please submit an extended abstract (up to 2 pages in ECCV layout, incl. images/references) by Sept. 26, 11:59 pm Pacific Standard Time using the CMT. Based on the submitted abstracts, posters will be accepted on a rolling basis, i.e., if you submit your abstract before the deadline, acceptance may be decided earlier.

Of course, if you are simply interested in the workshop's topic, e.g., because you work in early vision or machine learning and want to know how others generate ground truth or evaluate results, we look forward to welcoming you to our workshop.


Organizers

Michael Goesele (TU Darmstadt)
Michael Waechter (TU Darmstadt)
Katrin Honauer (HCI Heidelberg)
Bernd Jaehne (HCI Heidelberg)

Should you have any questions, please contact us via: dpaev2016*AT*