CVPR 2015 Workshop on Performance Metrics for Correspondence Problems

A Reality Check for Stereo, Flow and Reconstruction

Important Dates

  • CVPR 2015, Boston, USA
  • Workshop Day: Thursday, June 11, 2015, afternoon, room 202
  • Call for Posters on Recent or Ongoing Work on Datasets, Performance Metrics and Evaluation: Submission Details
  • Abstract Submission for Poster Session by Sunday, May 10, Midnight PST (acceptances announced on a rolling basis)

Background

Measuring and benchmarking algorithm performance is indispensable for demonstrating and tracking scientific progress. Furthermore, benchmarks shape which aspects of a field are studied next. They can spark new interest and research in their respective fields, as the PASCAL VOC and ImageNet Large Scale Visual Recognition Challenges demonstrate.

In stereo, flow and 3D reconstruction, existing benchmarks may not suffice to keep track of current research.
More importantly, we argue that it is unclear which metrics should be used over the next three to five years. Application-driven metrics have recently emerged, such as the “Visual Turing Test for Scene Reconstruction” [Shan et al. 2013], which evaluates 3D reconstructions by the rendering resolution at which they become indistinguishable from a photograph; such metrics demonstrate that alternatives to the classical measures may be needed.
Similarly, in stereo and flow for automotive applications, the simple error metrics traditionally used may be insufficient because they do not necessarily correlate with application-level safety.
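
For concreteness, the classical metrics in question are typically the average endpoint error (EPE) for optical flow and the bad-pixel rate for stereo disparity. The following minimal Python sketch illustrates both; the array shapes and the 3 px threshold follow common benchmark conventions and are assumptions here, not any benchmark's official evaluation code.

    import numpy as np

    def average_endpoint_error(flow_est, flow_gt):
        # flow_est, flow_gt: (H, W, 2) arrays holding (u, v) flow components.
        # EPE is the mean Euclidean distance between estimated and
        # ground-truth flow vectors.
        diff = flow_est - flow_gt
        return np.sqrt((diff ** 2).sum(axis=-1)).mean()

    def bad_pixel_rate(disp_est, disp_gt, threshold=3.0):
        # disp_est, disp_gt: (H, W) disparity maps.
        # Fraction of pixels whose disparity error exceeds the threshold;
        # a 3 px threshold is a common choice in stereo benchmarks.
        return (np.abs(disp_est - disp_gt) > threshold).mean()

Both metrics reduce a dense error map to a single number, which is precisely why they may fail to reflect application-specific consequences such as safety-critical failure cases.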

Goals and Topics

In the scope of this workshop we focus on the fields of stereo, optical flow, and 3D reconstruction.
Despite solid theoretical groundwork laid in these fields in the late 1990s and remarkable recent efforts on creating datasets, available benchmarks still either fail to cover many aspects of current lines of research or fail to match the final application, for example when safety or visual perception matter.

As discussed in [1], a comprehensive performance analysis would ideally assess the actual algorithm output and additionally take into account the characteristics of the input data, the ground truth data and the algorithm itself.
Working towards this goal, a number of questions arise, such as how to …

1. input data

…combine benefits of real, engineered, and synthetic data?
…integrate input properties into the evaluation?

2. ground truth

…get representative, challenging, unbiased data?
…assess ground truth quality?
…deal with the absence of ground truth data?
…integrate known uncertainties and inaccuracies?

3. algorithm

…improve self-diagnosis and confidence measures?
…predict the performance on given data?
…value accessibility & usability of the code?

4. performance measures

…define a good performance measure?
…combine performance measures? (one rank-based scheme is sketched after this list)
…measure robustness and graceful degradation?
…include application specialties?
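
Regarding the combination of performance measures, one long-established scheme is rank aggregation as used in the Middlebury evaluation tables: rank all methods separately per metric, then report the mean rank. A minimal Python sketch, with hypothetical method names and scores:

    import numpy as np

    # Hypothetical per-method scores on three metrics, all lower-is-better,
    # e.g. average EPE, bad-pixel rate, and runtime in seconds.
    scores = {
        "method_A": [0.8, 0.05, 1.2],
        "method_B": [0.6, 0.09, 0.4],
        "method_C": [0.7, 0.04, 2.0],
    }

    matrix = np.array(list(scores.values()))  # shape: (methods, metrics)
    # Double argsort converts scores into per-metric ranks (1 = best).
    ranks = matrix.argsort(axis=0).argsort(axis=0) + 1
    for name, avg in zip(scores, ranks.mean(axis=1)):
        print(f"{name}: average rank {avg:.2f}")

Averaging ranks rather than raw scores sidesteps the problem of metrics having incommensurable units, at the price of discarding the magnitudes of the errors.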

Many of these questions remain largely unanswered in the correspondence estimation community. Since we see inspiring related research in the segmentation community, our workshop aims to bring these communities together and to raise awareness of the scientific challenge of performance evaluation in stereo and optical flow estimation as well as 3D reconstruction.

[1] Kondermann, Daniel, et al. "On performance analysis of optical flow algorithms." In Outdoor and Large-Scale Real-World Scene Analysis, pp. 329-355. Springer Berlin Heidelberg, 2012.

Abstract Submission for Poster Session on Datasets, Performance Metrics and Evaluation

If you have exciting recent or ongoing work on datasets or performance metrics in stereo, optical flow, 3D reconstruction or closely related fields, we cordially invite you to present a poster at our workshop.
Please submit an extended abstract (up to one page) through our CMT page by May 10. Posters will be accepted on a rolling basis, i.e. if you submit your abstract before May 10, acceptance may be decided before that date.
The dimensions of the poster boards are 4x8 feet. Please prepare your poster accordingly.

Workshop Program

13:30-13:45 Welcome Address: Daniel Kondermann

Invited Talks

13:45-14:15 The Middlebury Stereo Evaluation Version 3, Daniel Scharstein (Middlebury College) [slides]
14:15-14:45 Corresponding Points Performance Characterization, Robert Haralick (City University of New York) [slides]
14:45-15:15 Correspondence Performance in Medical Imaging, Christian Wojek (Carl Zeiss AG)

15:15-16:00 Poster Session & Coffee Break

Invited Talks

16:00-16:15 Why methods with bad performance measures can still be useful in practice, Henning Zimmer (Disney Research)
16:15-16:30 3D reconstruction - how accurate can it be? Pierre Moulon (core developer OpenMVG) [slides]
16:30-16:45 Performance Evaluation in the KITTI Vision Benchmark, Andreas Geiger (MPI for Intelligent Systems) [slides]
16:45-17:00 Performance Evaluation in the SINTEL Vision Benchmark, Jonas Wulff, Michael Black (MPI for Intelligent Systems)
17:00-17:30 Challenges for Evaluating Camera-Based Driver Assistance, Bjoern Froehlich, Uwe Franke (Daimler AG)

17:30-18:30 Panel Discussion
18:30-18:45 Closing Remarks: Michael Goesele (Technische Universität Darmstadt)

Program Committee

Correspondence (Stereo, Flow, Reconstruction)

John Barron (University of Western Ontario, Canada)
Steven Beauchemin (University of Western Ontario, Canada)
Michael Black (MPI Tübingen, Germany)
Gabriel Brostow (University College London, UK)
David Fleet (University of Toronto, Canada)
Jan-Michael Frahm (University of North Carolina at Chapel Hill, US)
Dieter Fritsch (University of Stuttgart, Germany)
Yasutaka Furukawa (Washington University in St. Louis, US)
Robert Haralick (Graduate Center City University of New York, US)
Adrian Hilton (University of Surrey, UK)
Andreas Kuhn (Universität der Bundeswehr, Munich, Germany)
Ferran Marqués (Universitat Politècnica de Catalunya, Spain)
Helmut Mayer (Universität der Bundeswehr, Munich, Germany)
Philippos Mordohai (Stevens Institute of Technology Hoboken, US)
Markus Murschitz (AIT Vienna, Austria)
Tomas Pajdla (Czech Technical University, Prague, Czech Republic)
Thomas Pock (TU Graz, Austria)
Stefan Roth (TU Darmstadt, Germany)
Daniel Scharstein (Middlebury College, US)
Konrad Schindler (ETH Zürich, Switzerland)
Cristian Sminchisescu (Lund University, Sweden)
Thorsten Thormählen (Philipps-Universität Marburg, Germany)
Xiaoyi Jiang (University of Münster, Germany)
Oliver Zendel (AIT Vienna, Austria)

Industry

Simon Baker (NVIDIA San Francisco, US)
Michael Bleyer (Microsoft Research Seattle, US)
Jean-Yves Bouguet (Magic Leap Mountain View, US)
Derek Bradley (Disney Research, Zurich, Switzerland)
Goksel Dedeoglu (PercepTonic Dallas, US)
Uwe Franke (Daimler Stuttgart, Germany)
Stefan Gehrig (Daimler Stuttgart, Germany)
Heiko Hirschmüller (DLR Wessling, Germany)
Sudipta Sinha (Microsoft Research, US)
Paul Springer (Sony Stuttgart, Germany)
Richard Szeliski (Microsoft Research Seattle, US)
Christian Unger (BMW Munich, Germany)
Jue Wang (Adobe San Diego, US)

Organizers

Daniel Kondermann (Heidelberg Collaboratory for Image Processing)
Michael Goesele (TU Darmstadt)
Katrin Honauer (Heidelberg Collaboratory for Image Processing)
Michael Waechter (TU Darmstadt)
Bernd Jähne (Heidelberg Collaboratory for Image Processing)