Tutorial on Graphical Models

 



An ICCV 2015 Tutorial on

Inference and Learning in Discrete Graphical Models:
Theory and Practice

December 12, 2015, 8:30-16:30, Hotel Grand Hyatt, “Loa” Hall

Slides are already online!

Organizers


Abstract:
Discrete energy minimization problems, in the form of factor graphs or Markov and Conditional Random Field models (MRF/CRF), are a mainstay of computer vision research. Their applications range from image denoising, segmentation, motion estimation and stereo to object recognition and image editing. The aim of this tutorial is to give researchers clear guidance on which types of models are tractable and which optimization methods are best suited for them. Besides theoretical aspects, we will focus on practical questions, in particular on optimization-aware modeling.
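
As a concrete toy example of such an energy minimization problem (this sketch is our own illustration and not part of the tutorial material; all model names and numbers are invented), the following Python snippet evaluates and exhaustively minimizes the energy of a tiny three-variable pairwise model with Potts terms:

    # Minimal illustration: exhaustive minimization of a tiny pairwise discrete energy
    # E(x) = sum_i unary[i][x_i] + sum_{(i,j)} pairwise[(i,j)][x_i, x_j].
    import itertools
    import numpy as np

    num_labels = 3
    unary = np.array([[0.0, 1.0, 2.0],    # theta_0
                      [1.5, 0.2, 0.9],    # theta_1
                      [0.4, 0.8, 0.1]])   # theta_2
    # Potts pairwise terms on the chain edges (0,1) and (1,2).
    potts = 0.5 * (1.0 - np.eye(num_labels))
    edges = {(0, 1): potts, (1, 2): potts}

    def energy(x):
        # energy of a full labeling x = (x_0, x_1, x_2)
        value = sum(unary[i, xi] for i, xi in enumerate(x))
        value += sum(theta[x[i], x[j]] for (i, j), theta in edges.items())
        return value

    # Brute force over all label combinations; feasible only for tiny models.
    # The tutorial is precisely about methods that avoid this exponential enumeration.
    best = min(itertools.product(range(num_labels), repeat=unary.shape[0]), key=energy)
    print(best, energy(best))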

Programme

8:30 Opening
8:40 Discrete Graphical Models

  • Applications in Computer Vision, Definitions and Notation, Overview of Existing Software Packages
9:30 Inference in Discrete Graphical Models: Part 1

  • Exact Inference Methods: Dynamic Programming, A*, Branch-and-Bound
10:00 Coffee break
10:30 Inference in Discrete Graphical Models: Part 2

  • Exact Inference Methods: Submodular Problems, Min-Cut, Multicut
  • Relaxation-Based Methods: LP Relaxation, Dual Decomposition; Sub-gradient, Bundle, Coordinate Ascent Algorithms (TRW-S)
  • Partial Optimality
  • Approximate and Move-Making Methods
  • Meta-Methods: Combining Methods for Better Overall Performance
12:00 Lunch
14:00 From Benchmarks to the Current Limits

  • Insights from Benchmark Studies
  • How to deal with Huge Models and Higher-order Potentials
  • Models with Discrete and Continuous Variables
15:00 Coffee break
15:30 Learning in Discrete Graphical Models

  • Problem Setting
  • Maximum Likelihood Based Methods
  • Prediction-Based Parameter Learning Methods
16:20 Closing




Organizers


Bogdan Savchynskyy
Computer Vision Lab Dresden,
Technische Universität Dresden, Germany

graduated from the National Technical University of Ukraine “Kiev Polytechnic Institute” in 2002 and defended his PhD thesis at the National Academy of Sciences of Ukraine in 2007. From 2009 to 2014 he was a postdoc at Heidelberg University, Germany; since 2015 he has been with Dresden University of Technology, where he delivers the lecture course Machine Learning 2, devoted to inference and learning for graphical models. His main interests are optimization problems for image and video analysis, in particular inference and learning with graphical models.
He is a co-developer of OpenGM2, a free library for inference with discrete graphical models, and a co-author of the recent OpenGM benchmark paper as well as a number of works on exact and approximate inference for graphical models. Together with Paul Swoboda, Jörg Hendrik Kappes and Christoph Schnörr, he won the Best Student Paper Award at CVPR 2014 for a new method for partial optimality in energy minimization with graphical models.


Jörg Hendrik Kappes
Heidelberg Collaboratory for Image Processing,
Heidelberg University, Germany

received his diploma degree in computer science from the University of Mannheim in 2005 and his PhD degree from Heidelberg University in 2011.
Since 2011 he has been a postdoc at Heidelberg University.
His main research interests include discrete optimization, computer vision and probabilistic graphical models.
He regularly serves as a reviewer for major computer vision and machine learning conferences (ICCV, CVPR, ECCV, ICML) and journals (TPAMI, IJCV).
He is one of the main developers of OpenGM2, a free library for inference on discrete graphical models,
and the author of several well-received papers on inference in discrete graphical models, including
a recent benchmark paper published in IJCV 2015.


Thorsten Beier
Multidimensional Image Processing Group,
Heidelberg University, Germany

received his bachelor's degree in physics from the University of Heidelberg in 2011
and his master's degree in computer science from the University of Heidelberg in 2014.
Since 2014 he has been a PhD student at the University of Heidelberg.
He is one of the main developers of OpenGM2, an open source library for
inference in discrete graphical models.
His main interests are optimization problems for multidimensional image segmentation,
and he is an author of recent papers on large-scale approximate optimization.


Sebastian Nowozin
Microsoft Research Cambridge, UK

is a researcher in the Machine Learning and Perception (MLP) group at Microsoft Research, Cambridge, UK.
He received his PhD degree summa cum laude in 2009 from the Technical University of Berlin, Germany.
His research is at the intersection of machine learning and computer vision.
He regularly serves as area chair or program committee member at major conferences (NIPS, ICML, CVPR, ICCV, ECCV),
is a reviewer for all major computer vision and machine learning journals (TPAMI, IJCV, JMLR, MLJ),
and serves as associate editor for IEEE TPAMI, IJCV, and JMLR. He has co-edited (with Suvrit Sra and Stephen Wright)
an MIT Press volume on “Optimization for Machine Learning” and (with Peter Gehler, Jeremy Jancsary, and Christoph Lampert)
an MIT Press volume on “Advanced Structured Prediction”. He also published (with Christoph Lampert) a tutorial on
structured prediction that was presented at CVPR 2011 and CVPR 2012.


Carsten Rother
Computer Vision Lab Dresden,
Technische Universität Dresden, Germany

received the diploma degree in 1999 from the University of Karlsruhe, Germany, and the PhD degree in 2003 from
the Royal Institute of Technology Stockholm, Sweden. From 2003 until 2013 he was a researcher at Microsoft Research Cambridge, United Kingdom.
Since October 2013 he has been a full professor at TU Dresden, heading the Computer Vision Lab Dresden.
One of his main research interests is modelling, optimization and learning in structured models for various application tasks,
including dense correspondence search, image segmentation, and bio-imaging. In this field he has taught courses at university
and at summer schools, and has given tutorials at conferences (CVPR 2010, ICCV 2009). His h-index is 51 and his citation count is over 13,000.
He won awards at ACCV 2014, CVPR 2013, BMVC 2012, ACCV 2010, CHI 2007, CVPR 2005, and the Indian Conference on Computer Vision 2010,
and he was awarded the DAGM Olympus Prize in 2009. He has co-edited (with Andrew Blake and Pushmeet Kohli)
an MIT Press volume on “Markov Random Fields for Vision and Image Processing”.


 



Tutorial Description

The tutorial will focus on inference in discrete graphical models and its application in computer vision.
The anticipated target audience are graduate students as well as researchers from fields where graphical models are used as a black-box tool.
We will give an overview of state-of-the-art methods together with a compact but sufficient theoretical basis, which is required to understand the advantages and restrictions of these methods.
With this at hand, we will discuss the interplay between modeling and optimization and show how one can improve
performance by using specialized methods instead of the most general ones.

In particular, we will cover globally optimal methods provided by general ILP solvers, which have recently become
significantly faster and more popular. In this context we will discuss how partial optimality
can be obtained and exploited. Furthermore, we will cover efficient exact solvers for important problem subclasses, such as max-flow for submodular energies,
dynamic programming for acyclic graphs, and perfect matching for planar graphs.
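
To give a flavour of the dynamic-programming case mentioned above, here is a small self-contained sketch (our own illustration, not tutorial code; a chain model with a single shared pairwise table is assumed for simplicity) of exact min-sum dynamic programming, which finds the global minimizer of a chain energy in O(n*K^2) time:

    # Sketch: exact min-sum dynamic programming (MAP inference) on a chain MRF.
    import numpy as np

    def chain_map(unary, pairwise):
        # unary: (n, K) array; pairwise: (K, K) array shared by all chain edges.
        # Returns the exact minimizer of
        #   E(x) = sum_i unary[i, x_i] + sum_i pairwise[x_i, x_{i+1}].
        n, K = unary.shape
        msg = np.zeros(K)                    # min-sum message from the processed prefix
        back = np.zeros((n, K), dtype=int)   # backpointers to the optimal predecessor label
        for i in range(1, n):
            # cost of the best prefix ending in each label of node i-1, extended by one edge
            cost = (msg + unary[i - 1])[:, None] + pairwise   # shape (K, K): rows x_{i-1}, cols x_i
            back[i] = cost.argmin(axis=0)
            msg = cost.min(axis=0)
        labels = np.empty(n, dtype=int)
        labels[-1] = int(np.argmin(msg + unary[-1]))          # best label for the last node
        for i in range(n - 1, 0, -1):                         # backtrack
            labels[i - 1] = back[i, labels[i]]
        return labels

    unary = np.array([[0.0, 2.0], [2.0, 0.0], [0.1, 0.0]])
    potts = 1.0 - np.eye(2)
    print(chain_map(unary, potts))   # -> [0 1 1] for this toy chain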

For linear programming relaxations we will present the fundamental theory and show the relations between the corresponding dedicated solvers.
This includes sub-gradient, smoothing and message-passing based techniques.
In this context we will also discuss tightening the underlying relaxation and obtaining sound stopping conditions for iterative inference algorithms.
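
The following toy sketch (our own, and deliberately oversimplified: in practice dual decomposition splits a model into tree-structured subproblems rather than a single variable) shows the basic subgradient update on the Lagrangian dual. A variable is duplicated across two subproblems, and the multipliers push the two copies towards agreement; once they agree, the dual lower bound is tight and the common label is optimal:

    # Sketch: dual decomposition with a subgradient update on a toy problem.
    # Goal: minimize f1(x) + f2(x) over x in {0, 1, 2} via two independent subproblems.
    import numpy as np

    f1 = np.array([3.0, 1.0, 2.0])   # costs of subproblem 1 per label
    f2 = np.array([0.0, 2.0, 1.5])   # costs of subproblem 2 per label
    lam = np.zeros(3)                # Lagrange multipliers for the copy constraint a == b

    for t in range(1, 101):
        a = int(np.argmin(f1 + lam))           # solve subproblem 1 exactly
        b = int(np.argmin(f2 - lam))           # solve subproblem 2 exactly
        lower_bound = (f1 + lam)[a] + (f2 - lam)[b]
        if a == b:                             # copies agree: the bound is tight, x = a is optimal
            break
        subgrad = np.zeros(3)
        subgrad[a] += 1.0
        subgrad[b] -= 1.0
        lam += (1.0 / t) * subgrad             # diminishing step size

    print(a, b, lower_bound)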

The part of the tutorial entitled Approximate and Move-Making Methods will be devoted to methods such as alpha-expansion
and alpha-beta-swap, as well as fusion moves and (block-)ICM.
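
As a minimal example of a move-making method, the sketch below (again our own illustration, assuming a symmetric pairwise term such as Potts) implements plain ICM, i.e. greedy single-variable moves that are repeated until no move decreases the energy:

    # Sketch: Iterated Conditional Modes (ICM) on a pairwise model with symmetric pairwise terms.
    import numpy as np

    def icm(unary, pairwise, edges, labels, sweeps=10):
        # unary: (n, K); pairwise: symmetric (K, K) table shared by all edges;
        # edges: list of (i, j) pairs; labels: initial labeling of shape (n,).
        n, K = unary.shape
        nbrs = {i: [] for i in range(n)}       # adjacency list for fast local updates
        for i, j in edges:
            nbrs[i].append(j)
            nbrs[j].append(i)
        labels = labels.copy()
        for _ in range(sweeps):
            changed = False
            for i in range(n):
                local = unary[i].copy()        # cost of every candidate label at node i
                for j in nbrs[i]:
                    local += pairwise[:, labels[j]]
                best = int(np.argmin(local))
                if best != labels[i]:
                    labels[i] = best
                    changed = True
            if not changed:                    # local minimum w.r.t. single-variable moves
                break
        return labels

    unary = np.array([[0.0, 1.0], [1.0, 0.0], [0.6, 0.5]])
    potts = 0.4 * (1.0 - np.eye(2))
    print(icm(unary, potts, edges=[(0, 1), (1, 2)], labels=np.zeros(3, dtype=int)))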

We will conclude the inference part of the tutorial by showing how different methods can be combined into meta-methods, which often have the best overall performance.

An important part of this tutorial is devoted to common use cases, problems and pitfalls, which will be covered in a separate lecture block with several live demos.

We will then turn to the problem of learning the parameters of discrete graphical models.
After describing the problem setting, we will introduce some classical learning methods and discuss issues that have to be considered in practice.


Downloads

inference.pdf (32 MB)
learning.pdf (5 MB)