Tamir Hazan
Probabilistic inference by randomly perturbing max-solvers
Modern inference problems can increasingly be understood in terms of discrete structures, such as arrangements of objects in computer vision or molecular structures in computational biology. In a fully probabilistic treatment, all possible alternative assignments are considered, which requires estimating exponentially many structures with their respective weights. These computational difficulties are circumvented by a variety of optimization techniques that provide max-solvers to recover the most likely structure. In this talk I will present a new approach to probabilistic inference that is based on randomly perturbing the scoring function of max-solvers, thus measuring the stability of the prediction under random perturbations.
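As background, the simplest unstructured instance of the perturb-and-maximize idea is the well-known Gumbel-max trick: adding i.i.d. Gumbel noise to each configuration's score and taking the argmax yields an exact sample from the corresponding Gibbs (softmax) distribution. The sketch below illustrates only this elementary case, not the structured construction of the talk; all names and numbers are illustrative.

```python
import math
import random

def gumbel_max_sample(scores, rng):
    """Sample index i with probability exp(scores[i]) / sum_j exp(scores[j])
    by perturbing each score with i.i.d. Gumbel(0, 1) noise and maximizing."""
    # -log(-log(U)) with U ~ Uniform(0, 1) is a standard Gumbel sample.
    perturbed = [s - math.log(-math.log(rng.random())) for s in scores]
    return max(range(len(scores)), key=perturbed.__getitem__)

# Empirically, the frequencies of the perturbed argmax match the softmax.
rng = random.Random(0)
scores = [1.0, 2.0, 0.5]
z = sum(math.exp(s) for s in scores)
softmax = [math.exp(s) / z for s in scores]
n = 50000
counts = [0] * len(scores)
for _ in range(n):
    counts[gumbel_max_sample(scores, rng)] += 1
freqs = [c / n for c in counts]
```

Repeatedly calling the max-solver on independently perturbed scores thus turns a purely deterministic argmax routine into a sampler.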
Bogdan Savchynskyy
Global MAP-optimality by shrinking the combinatorial search area with convex relaxations
We consider energy minimization for undirected graphical models, also known as the MAP-inference problem for Markov random fields. Although combinatorial methods, which return a provably optimal integral solution of the problem, have made significant progress in the past decade, they are still typically unable to cope with large-scale datasets. On the other hand, large-scale datasets are often defined on sparse graphs, and convex relaxation methods, such as linear programming relaxations, then provide good approximations to integral solutions.
We propose a novel method of combining combinatorial and convex programming techniques to obtain a global solution of the initial combinatorial problem. Based on the information obtained from the solution of the convex relaxation, our method confines the application of the combinatorial solver to a small fraction of the initial graphical model, which allows much larger problems to be solved optimally. We demonstrate the efficacy of our approach on a computer vision energy minimization benchmark.
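The confinement idea can be sketched on a toy Potts model: variables that are integral in a relaxed solution are fixed, and exact (here brute-force) search runs only over the remaining fraction. This is a hypothetical illustration under strong simplifications; the relaxed solution is supplied by hand rather than by an LP solver, and unlike the actual method, no optimality criterion for the fixed part is verified.

```python
from itertools import product

def confined_exact_min(unary, pairwise, relaxed, labels):
    """Fix variables whose relaxed solution is integral; brute-force the rest.
    unary[i][l]: cost of label l at variable i.
    pairwise[(i, j)]: Potts disagreement cost on edge (i, j).
    relaxed[i]: pseudomarginal distribution over labels for variable i.
    Toy sketch only: the real method also proves the fixed part optimal."""
    n = len(unary)
    fixed, free = {}, []
    for i, dist in enumerate(relaxed):
        best = max(range(len(labels)), key=dist.__getitem__)
        if dist[best] > 1.0 - 1e-9:       # integral in the relaxation
            fixed[i] = labels[best]
        else:                             # fractional: left to the exact solver
            free.append(i)

    def energy(x):
        e = sum(unary[i][x[i]] for i in range(n))
        e += sum(w for (i, j), w in pairwise.items() if x[i] != x[j])
        return e

    best_x, best_e = None, float("inf")
    for assignment in product(labels, repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, assignment))
        e = energy(x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Variable 0 is integral in the relaxed solution; only 1 and 2 are searched.
unary = [[0.0, 10.0], [1.0, 0.0], [0.5, 0.4]]
pairwise = {(0, 1): 0.2, (1, 2): 0.2}
relaxed = [[1.0, 0.0], [0.5, 0.5], [0.5, 0.5]]
x, e = confined_exact_min(unary, pairwise, relaxed, [0, 1])
```

The exponential enumeration now runs over 2 variables instead of 3; on sparse large-scale models the same confinement can shrink the combinatorial part dramatically.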

Tomas Werner
How Hard Is Solving the LP Relaxation of the Max-Sum Labeling Problem?
We show that solving the standard LP relaxation of the min-sum labeling problem (also known as the MAP inference problem in graphical models, discrete energy minimization, or valued constraint satisfaction) is not easier than solving any linear program. Precisely, every polytope is representable in linear time by a local marginal polytope, and every LP can be reduced in linear time to a linear optimization (allowing infinite costs) over a local marginal polytope. If time allows, we will also discuss a similar result for the attractive model and the relation of our results to fixed points of convergent message-passing algorithms.
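For reference, the local marginal polytope appearing in this statement is the standard set of locally consistent pseudomarginals; for a pairwise model on a graph $(V, E)$ with label set $X$ it can be written as:

```latex
\Lambda = \Bigl\{\, \mu \ge 0 \;\Bigm|\;
  \sum_{x_i \in X} \mu_i(x_i) = 1 \quad \forall i \in V, \qquad
  \sum_{x_i \in X} \mu_{ij}(x_i, x_j) = \mu_j(x_j) \quad \forall (i,j) \in E,\; x_j \in X
\,\Bigr\}
```

The normalization and marginalization constraints enforce only local (edge-wise) consistency, which is why the polytope is an outer relaxation of the marginal polytope yet still, by the above result, expressive enough to encode any LP.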
Thomas Schiex
Global decomposable cost functions for Cost Function Networks (CFN), aka MAP-MRF
In Constraint Satisfaction, one crucial modeling and solving component has been the introduction of global constraints. These constraints usually have polynomial-time generalized arc consistency enforcing algorithms despite unbounded arity, thanks to their precise semantics. A large number of these constraints are decomposable into a Berge-acyclic network, and in this case Generalized Arc Consistency (GAC) on the decomposition is enough to enforce Arc Consistency on the global constraint. For the Weighted Constraint Satisfaction Problem, it is also possible to consider global cost functions. There are, however, limits on the level of local consistency that can be enforced efficiently. It is also possible to relax global constraints into global cost functions and inherit decomposability from this relaxation. When the decomposition is Berge acyclic, we directly inherit the corresponding GAC result of CSP if the level of consistency enforced on the CFN is sufficient. Otherwise, a weaker result can be obtained.

Vladimir Kolmogorov
Extensions of submodularity and their applications in computer vision
Submodular functions have played a prominent role in computer vision in the last decade; in many cases they can be minimized efficiently via graph cuts. I will describe a larger class of functions that can be minimized in polynomial time by solving a Basic Linear Programming relaxation of the problem. It was recently shown that this is the only tractable class of Finite-Valued Constraint Satisfaction Problems (VCSPs), assuming that P != NP.
A special case of the new class is so-called k-submodular functions. In the second part I will describe an application of k-submodular functions to computing partially optimal solutions of certain NP-hard optimization problems. In the case of the Potts energy function there is a close connection to the approach proposed by Kovtun, which also computes a part of an optimal solution. I will show how to improve the complexity of Kovtun's approach from k to O(log k) max-flow computations, where k is the number of labels. The algorithm generalizes the method of Felzenszwalb et al. for tree metrics, and resembles a parametric max-flow algorithm.
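As background for the ordinary (binary-label) submodular case mentioned above, a pairwise term on {0,1}^2 is submodular exactly when the cost of the two "agreeing" configurations does not exceed the cost of the two "disagreeing" ones; pairwise binary energies built from such terms are the classic graph-cut-minimizable family. A minimal check (the example numbers are illustrative):

```python
def is_submodular_pairwise(theta):
    """A pairwise term theta on {0,1}^2 is submodular iff
    theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0).
    Binary pairwise energies made of such terms can be minimized
    exactly with a single max-flow/min-cut computation."""
    return theta[0][0] + theta[1][1] <= theta[0][1] + theta[1][0]

# A Potts-style disagreement penalty is submodular; its negation is not.
potts = [[0.0, 1.0], [1.0, 0.0]]
anti_potts = [[1.0, 0.0], [0.0, 1.0]]
```

k-submodular functions generalize this inequality from {0,1} to a (k+1)-valued domain with suitable meet and join operations.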
Paul Swoboda
Partial Optimality by Pruning for MAP-inference with General Graphical Models
We consider the energy minimization problem for undirected graphical models, also known as the MAP-inference problem for Markov random fields, which is NP-hard in general. We propose a novel polynomial-time algorithm to obtain a part of its optimal non-relaxed integral solution. Our algorithm is initialized with the variables taking integral values in the solution of a convex relaxation of the MAP-inference problem and iteratively prunes those that do not satisfy our criterion for partial optimality. We show that our pruning strategy is in a certain sense theoretically optimal. Empirically, our method also outperforms previous approaches in terms of the number of persistently labelled variables. The method is very general, as it is applicable to models with arbitrary factors of arbitrary order and can employ any solver for the considered relaxed problem. Our method's runtime is determined by the runtime of the convex relaxation solver for the MAP-inference problem.
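The control flow of such a scheme is a fixed-point pruning loop: start from a candidate set and repeatedly discard elements failing a set-dependent criterion until nothing changes. The sketch below is only this generic skeleton; the `keeps` predicate is a hypothetical stand-in for the actual partial-optimality criterion of the talk.

```python
def prune_to_fixed_point(candidates, keeps):
    """Iteratively remove candidates failing `keeps(x, current)` until
    the set is stable. `keeps` is a hypothetical persistency test that
    may depend on which other candidates are still present."""
    current = set(candidates)
    changed = True
    while changed:
        changed = False
        for x in list(current):
            if x in current and not keeps(x, current):
                current.discard(x)
                changed = True
    return current

# Toy criterion: keep x only if its "predecessor" x-1 also survives.
keeps = lambda x, cur: x == 0 or (x - 1) in cur
survivors = prune_to_fixed_point([0, 1, 2, 4, 5], keeps)
```

Because removals can invalidate other candidates (here, removing 4 dooms 5), the loop must run to a fixed point rather than make a single pass.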

Simon de Givry
Multi-paradigm evaluation of exact solvers in graphical model discrete optimisation
Graphical models on discrete variables make it possible to model NP-hard optimization problems in which the objective function is factorized into a set of local functions. In the graphical interpretation, each function's scope is represented by a clique. Deterministic graphical models such as the weighted Constraint Satisfaction Problem aim at minimizing the sum of all functions (or constraints if zero/infinite costs are used). Probabilistic graphical models such as Markov Random Fields aim at maximizing the product of all functions (or constraints if using zero/one probabilities). A direct transformation exists between the two frameworks, which can also be modeled as weighted MaxSAT or Integer Linear Programming. In this presentation, I will report on a large comparison of state-of-the-art exact solvers on several deterministic and probabilistic graphical models coming from the Probabilistic Inference Challenge 2011, the Computer Vision and Pattern Recognition OpenGM2 benchmark, the Weighted Partial MaxSAT Evaluation 2013, the MaxCSP 2008 Competition, the MiniZinc Challenge 2012-2013, and a library of Cost Function Networks. The results show that a few general-purpose solvers perform well across multiple paradigms, suggesting the interest of a portfolio approach. Using such a portfolio, we won the UAI 2014 Evaluation for the MAP task.
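The direct transformation between the deterministic (min-sum) and probabilistic (max-product) views is simply that maximizing a product of factors exp(-cost) is the same as minimizing the sum of costs. A minimal sketch on a toy two-variable model (names and numbers are illustrative):

```python
import math
from itertools import product

def energy(x, unaries, potts_edges):
    """Deterministic view: energy is the sum of local cost functions."""
    e = sum(unaries[i][x[i]] for i in range(len(x)))
    e += sum(w for (i, j, w) in potts_edges if x[i] != x[j])
    return e

def unnormalized_prob(x, unaries, potts_edges):
    """Probabilistic view: product of local factors exp(-cost)."""
    return math.exp(-energy(x, unaries, potts_edges))

# The two views share the same optimum: argmin energy == argmax probability.
unaries = [[0.0, 1.5], [0.7, 0.0]]
edges = [(0, 1, 0.3)]
assignments = list(product([0, 1], repeat=2))
best_min = min(assignments, key=lambda x: energy(x, unaries, edges))
best_max = max(assignments, key=lambda x: unnormalized_prob(x, unaries, edges))
```

Because exp is strictly decreasing in the energy, the argmin and argmax always coincide, which is what lets the same benchmark instances be fed to WCSP, MRF, MaxSAT, and ILP solvers alike.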
