Combinatorial Optimization in Machine Learning and Computer Vision

Dr. Bogdan Savchynskyy, Prof. Dr. Carsten Rother, SoSe 2020

Summary

Machine learning techniques are tightly coupled with optimization methods: many learning techniques become practical only when an efficient optimization tool supports them.

In the seminar we will discuss a number of recent articles on combinatorial optimization with applications in computer vision and machine learning.

The topic of this semester is

Neural Networks meet Combinatorial Optimization

In particular, we will consider methods for

  • training parameters of combinatorial optimization algorithms with machine learning techniques,
  • combinatorial-optimization-based loss functions for deep learning,
and their applications.
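To give a flavor of the first topic, the following is a minimal sketch (in plain NumPy, with a hypothetical toy solver) of the blackbox-differentiation idea from the first paper in the list below (Vlastelica et al.): the combinatorial solver is treated as a black box in the forward pass, and an approximate gradient is obtained by re-solving on costs perturbed in the direction of the incoming gradient.

```python
import numpy as np

def solver(costs):
    """Toy combinatorial solver: one-hot indicator of the min-cost item.
    Stands in for any blackbox solver (shortest path, matching, ...)."""
    y = np.zeros_like(costs)
    y[np.argmin(costs)] = 1.0
    return y

def blackbox_grad(costs, grad_y, lam=10.0):
    """Gradient estimate w.r.t. the costs in the style of Vlastelica et al.:
    perturb the costs by lam * (gradient of the loss w.r.t. the solution),
    re-solve, and return the negated, scaled difference of the two solutions."""
    y = solver(costs)
    y_perturbed = solver(costs + lam * grad_y)
    return -(y - y_perturbed) / lam

costs = np.array([3.0, 1.0, 2.0])
target = np.array([0.0, 0.0, 1.0])   # desired solution: select item 2
y = solver(costs)                    # current solution selects item 1
grad_y = y - target                  # gradient of 0.5 * ||y - target||^2
grad_costs = blackbox_grad(costs, grad_y)
```

A gradient step `costs -= lr * grad_costs` then raises the cost of the currently selected item and lowers the cost of the desired one, so repeated steps steer the solver toward the target solution.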

General Information

The date of the first seminar will be announced via Müsli later. Please make sure to participate.
  • Seminar: Wed, 16:00 – 18:00
  • Credits: 2 / 4 / 6 CP depending on course of study

Schedule

The seminar sessions will always start at 16:00 sharp.

Registration

Please register for the seminar in Müsli. If you have trouble registering, drop an email to lisa.kruse@iwr.uni-heidelberg.de.

Topics

Papers for presentation and discussion are in general pre-selected. A short introduction will be given at the first seminar session. The paper assignment will also take place during this seminar.

The following list of papers is incomplete and may be extended to match the number of enrolled students:

M. Vlastelica et al. - Differentiation of blackbox combinatorial solvers
Y. Kim et al. - Structured attention networks
P. Knöbelreiter et al. - End-to-end training of hybrid CNN-CRF models for stereo
P. Mohapatra et al. - Efficient optimization for rank-based loss functions
D. McAllester et al. - Direct loss minimization for structured prediction
G. Lorberbom et al. - Direct optimization through arg max for discrete variational auto-encoder
Y. Song et al. - Training deep neural networks via direct loss minimization
D. Marin et al. - Beyond gradient descent for regularized segmentation losses
M. Tang et al. - On regularized losses for weakly-supervised CNN segmentation
S. Zheng et al. - Conditional random fields as recurrent neural networks
J. Song et al. - End-to-end learning for graph decomposition
S. Wang et al. - End-to-end training of CNN-CRF via differentiable dual-decomposition
L. Chen et al. - Learning deep structured models
S. Schulter et al. - Deep network flow for multi-object tracking

Contact

Lisa Kruse
Dr. Bogdan Savchynskyy