U. Köthe: "Reliable Low-Level Image Analysis"
Habilitation Thesis, Department Informatik, University of Hamburg, 318 pages, Hamburg 2008
Abstract  PDF (10 MB)
What information do discrete images give about the continuous world?
Image analysis uses discrete methods to make statements about the continuous real world. Since an infinite amount of information is lost by digitization, it is not obvious whether or when this approach will succeed: Can one prove that certain properties of interest will be preserved, despite the information loss?
This habilitation thesis considers theories which explicitly connect continuous and discrete models, such as Shannon's famous sampling theorem and a recently discovered geometric sampling theorem. This analysis reveals important consequences regarding the necessary image quality (e.g. resolution and signal-to-noise ratio) and the resulting limits of observation. These findings are subsequently applied to a large number of low-level image analysis problems (such as edge and corner detection, segmentation, local estimation, and noise normalization), which leads to significantly improved algorithms that perform robustly and accurately in accordance with the predictions of theory.

U. Köthe:
"Generische Programmierung für die Bildverarbeitung",
PhD Thesis, Fachbereich Informatik, Universität Hamburg, 274 pages, Hamburg 2000, ISBN: 3-8311-0239-2. (in German)
Abstract 
PDF (12.5 MB)
Developing flexible image processing software has proven to be extraordinarily difficult, mainly because flexibility is often bought at the price of a significant loss in execution speed. Given the large amounts of data typically encountered in image processing, this is not acceptable.
Generic programming, which has become a core component of the C++ programming language, offers novel ways to increase the flexibility of software without sacrificing speed. This book is the first to apply these new methods systematically to image processing. It describes a comprehensive, consistent system of generic algorithms and objects for a wide range of problems, from basic image data structures to complex, graph-based segmentation methods.
Most of the concepts presented here are implemented in the freely available image processing library VIGRA, so readers can test them immediately and use them in their own projects.

C. Sommer, C. Straehle, U. Köthe, F.A. Hamprecht:
"ilastik: Interactive learning and segmentation toolkit",
In: IEEE International Symposium on Biomedical Imaging (ISBI), pp. 230-233, 2011.
Abstract  PDF
Segmentation is the process of partitioning digital images into meaningful regions. The analysis of biological high-content images often requires segmentation as a first step. We propose ilastik as an easy-to-use tool which allows users without expertise in image processing to perform segmentation and classification in a unified way. ilastik learns from labels provided by the user through a convenient mouse interface. Based on these labels, ilastik infers a problem-specific segmentation. A random forest classifier is used in the learning step, in which each pixel's neighborhood is characterized by a set of generic (nonlinear) features. ilastik supports up to three spatial plus one spectral dimension and makes use of all dimensions in the feature calculation. ilastik provides real-time feedback that enables the user to interactively refine the segmentation result and hence further fine-tune the classifier. An uncertainty measure guides the user to ambiguous regions in the images. Real-time performance is achieved by multi-threading which fully exploits the capabilities of modern multi-core machines. Once a classifier has been trained on a set of representative images, it can be exported and used to automatically process a very large number of images (e.g. using the CellProfiler pipeline). ilastik is an open source project and released under the BSD license at www.ilastik.org.
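The core loop of the abstract (generic per-pixel features, sparse brush labels, random forest prediction) can be sketched with off-the-shelf tools. This is a minimal stand-in assuming scikit-learn and SciPy; the feature set and function names are illustrative, not ilastik's actual implementation:

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, scales=(1.0, 2.0, 4.0)):
    """Generic (nonlinear) per-pixel features: smoothed intensity and
    gradient magnitude at several scales, stacked per pixel."""
    feats = []
    for s in scales:
        feats.append(ndi.gaussian_filter(img, s))
        feats.append(ndi.gaussian_gradient_magnitude(img, s))
    return np.stack(feats, axis=-1)          # shape (H, W, n_features)

def train_and_predict(img, label_mask):
    """label_mask: 0 = unlabeled, 1..K = sparse user brush labels."""
    X = pixel_features(img)
    train = label_mask > 0
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    rf.fit(X[train], label_mask[train])      # learn from sparse labels only
    return rf.predict(X.reshape(-1, X.shape[-1])).reshape(img.shape)

# toy example: a bright blob on a dark background, a few brush strokes
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (64, 64))
img[20:40, 20:40] += 1.0
labels = np.zeros((64, 64), dtype=int)
labels[28:32, 28:32] = 2                     # "foreground" strokes
labels[2:6, 2:6] = 1                         # "background" strokes
seg = train_and_predict(img, labels)
```

In the interactive setting, the user would inspect `seg`, add more strokes where it is wrong, and retrain; the random forest is fast enough that this loop feels immediate on small images.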

U. Köthe:
"Reusable Software in Computer Vision",
in: B. Jähne, H. Haussecker, P. Geissler (Eds.): Handbook of Computer Vision and Applications, Volume 3: Systems and Applications, pp. 103-132, San Diego: Academic Press, 1999.
PDF

U. Köthe:
"Edge and Junction Detection with an Improved Structure Tensor",
in: B. Michaelis, G. Krell (Eds.): Pattern Recognition, Proc. of 25th DAGM Symposium, Magdeburg 2003, Lecture Notes in Computer Science 2781, pp. 25-32, Berlin: Springer, 2003. (note: this article is © Springer-Verlag)
Abstract 
PDF – Awarded the main prize of the German Pattern Recognition Society (DAGM) 2003
We describe three modifications to the structure tensor approach to low-level feature extraction. We first show that the structure tensor must be represented at a higher resolution than the original image. Second, we propose a nonlinear filter for structure tensor computation that avoids undesirable blurring. Third, we introduce a method to simultaneously extract edge and junction information. Examples demonstrate significant improvements in the quality of the extracted features.
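For reference, the plain linear structure tensor that this paper improves upon is the Gaussian-smoothed outer product of the image gradient; its eigenvalues separate edge from corner/junction energy. A minimal NumPy/SciPy sketch (names and parameters are illustrative, not from the paper's code):

```python
import numpy as np
from scipy import ndimage as ndi

def structure_tensor(img, sigma_grad=1.0, sigma_int=2.0):
    """Baseline structure tensor: Gaussian derivatives, then Gaussian
    integration of the gradient outer product."""
    gx = ndi.gaussian_filter(img, sigma_grad, order=(0, 1))  # d/dx
    gy = ndi.gaussian_filter(img, sigma_grad, order=(1, 0))  # d/dy
    Jxx = ndi.gaussian_filter(gx * gx, sigma_int)
    Jxy = ndi.gaussian_filter(gx * gy, sigma_int)
    Jyy = ndi.gaussian_filter(gy * gy, sigma_int)
    return Jxx, Jxy, Jyy

def edge_and_corner_strength(Jxx, Jxy, Jyy):
    """Eigenvalues of the 2x2 tensor: lam1 - lam2 measures edge strength,
    the small eigenvalue lam2 measures corner/junction strength."""
    trace = Jxx + Jyy
    disc = np.sqrt(np.maximum((Jxx - Jyy) ** 2 / 4 + Jxy ** 2, 0.0))
    lam1 = trace / 2 + disc
    lam2 = trace / 2 - disc
    return lam1 - lam2, lam2

# example: a vertical step edge gives edge energy but no corner energy
img = np.zeros((32, 32))
img[:, 16:] = 1.0
edge, corner = edge_and_corner_strength(*structure_tensor(img))
```

The paper's modifications (oversampled representation and a nonlinear integration filter replacing the second `gaussian_filter` stage) address the blurring this linear version introduces at corners and junctions.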

U. Köthe:
"Integrated Edge and Junction Detection with the Boundary Tensor",
in: ICCV '03, Proc. of 9th Intl. Conf. on Computer Vision, Nice 2003, vol. 1, pp. 424-431, 2003.
Abstract  PDF
The boundaries of image regions necessarily consist of edges (in particular, step and roof edges), corners, and junctions. Currently, different algorithms are used to detect each boundary type separately, but the integration of the results into a single boundary representation is difficult. Therefore, a method for the simultaneous detection of all boundary types is needed. We propose to combine responses of suitable polar separable filters into what we will call the boundary tensor. The trace of this tensor is a measure of boundary strength, while the small eigenvalue and its difference to the large one represent corner/junction and edge strengths respectively. We prove that the edge strength measure behaves like a rotationally invariant quadrature filter. A number of examples demonstrate the properties of the new method and illustrate its application to image segmentation.

U. Köthe, M. Felsberg:
"Riesz-Transforms Versus Derivatives: On the Relationship Between the Boundary Tensor and the Energy Tensor",
in: R. Kimmel, N. Sochen, J. Weickert (Eds.): Scale Space and PDE Methods in Computer Vision, Springer LNCS 3459, pp. 179-191, 2005.
Abstract  PDF
Traditionally, quadrature filters and derivatives have been considered as alternative approaches to low-level image analysis. In this paper we show that there actually exist close connections: We define the quadrature-based boundary tensor and the derivative-based gradient energy tensor which exhibit very similar behavior. We analyse the reason for this and determine how to minimize the difference. These insights lead to a simple and very efficient integrated feature detection algorithm.

B. Andres, T. Kröger, K. Briggman, W. Denk, N. Korogod, G. Knott, U. Köthe, F.A. Hamprecht:
"Globally Optimal Closed-Surface Segmentation for Connectomics",
in: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (Eds.): 12th Eur. Conf. Computer Vision (ECCV 2012) part III, Springer LNCS 7574, pp. 778-791, 2012 (note: this article is © Springer-Verlag)
Abstract  BibTeX  PDF
We address the problem of partitioning a volume image into a previously unknown number of segments, based on a likelihood of merging adjacent supervoxels. Towards this goal, we adapt a higher-order probabilistic graphical model that makes the duality between supervoxels and their joint faces explicit and ensures that merging decisions are consistent and surfaces of final segments are closed. First, we propose a practical cutting-plane approach to solve the MAP inference problem to global optimality despite its NP-hardness. Second, we apply this approach to challenging large-scale 3D segmentation problems for neural circuit reconstruction (Connectomics), demonstrating the advantage of this higher-order model over independent decisions and finite-order approximations.

T. Beier, B. Andres, U. Köthe, F.A. Hamprecht:
"An Efficient Fusion Move Algorithm for the Minimum Cost Lifted Multicut Problem",
in: Leibe, B., Matas, J., Sebe, N., Welling, M. (Eds.): 14th Eur. Conf. Computer Vision (ECCV 2016), in press, 2016 (note: this article is © Springer-Verlag)
Abstract  PDF
Many computer vision problems can be cast as an optimization
problem whose feasible solutions are decompositions of a graph. The
minimum cost lifted multicut problem is such an optimization problem.
Its objective function can penalize or reward all decompositions for which
any given pair of nodes are in distinct components. While this property
has many potential applications, such applications are hampered by the
fact that the problem is NP-hard. We propose a fusion move algorithm
for computing feasible solutions, better and more efficiently than existing
algorithms. We demonstrate this and applications to image segmentation,
obtaining a new state of the art for a problem in biological image analysis.

B. Andres, U. Köthe, M. Helmstaedter, W. Denk, F.A. Hamprecht:
"Segmentation of SBFSEM Volume Data of Neural Tissue by Hierarchical Classification",
in: G. Rigoll (Ed.): Pattern Recognition, Proc. DAGM 2008, Lecture Notes in Computer Science 5096, pp. 142-152, Berlin: Springer, 2008. (note: this article is © Springer-Verlag)
Abstract  BibTeX  PDF
– Received a Best Paper Award from the German Association for Pattern Recognition (DAGM)
Three-dimensional electron-microscopic image stacks with almost isotropic resolution allow, for the first time, to determine the complete connection matrix of parts of the brain. In spite of major advances in staining, correct segmentation of these stacks remains challenging, because even a few local mistakes can lead to severe global errors. We propose a hierarchical segmentation procedure based on statistical learning and topology-preserving grouping. Edge probability maps are computed by a random forest classifier (trained on hand-labeled data) and partitioned into supervoxels by the watershed transform. Oversegmentation is then resolved by another random forest. Careful validation shows that the results of our algorithm are close to human labelings.

A. Kreshuk, U. Köthe, E. Pax, D. Bock, F.A. Hamprecht:
"Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks",
PLoS ONE 9(2): e87351, 2014.
Abstract  BibTeX  PDF
We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally reverified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than the state of the art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapsedetection/sstem).

B. Kausler, M. Schiegg, B. Andres, M. Lindner, U. Köthe, H. Leitte, J. Wittbrodt, L. Hufnagel, F.A. Hamprecht:
"A discrete chain graph model for 3d+t cell tracking with high misdetection robustness",
in: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (Eds.): 12th Eur. Conf. Computer Vision (ECCV 2012) part III, Springer LNCS 7574, pp. 144-157, 2012 (note: this article is © Springer-Verlag).
Abstract  BibTeX  PDF
Tracking by assignment is well suited for tracking a varying number of divisible cells, but suffers from false positive detections. We reformulate tracking by assignment as a chain graph (a mixed directed-undirected probabilistic graphical model) and obtain a tracking simultaneously over all time steps from the maximum a posteriori configuration. The model is evaluated on two challenging four-dimensional data sets from developmental biology. Compared to previous work, we obtain improved tracks due to an increased robustness against false positive detections and the incorporation of temporal domain knowledge.

M. Hanselmann, U. Köthe, M. Kirchner, B.Y. Renard, E.R. Amstalden, K. Glunde, R.M.A. Heeren, F.A. Hamprecht:
"Towards Digital Staining using Imaging Mass Spectrometry and Random Forests",
Journal of Proteome Research, 8(7):3558-3567, 2009
Abstract  BibTeX  PDF
We show on Imaging Mass Spectrometry (IMS) data that the Random Forest classifier can be used for automated tissue classification and that it results in predictions with high sensitivities and positive predictive values, even when inter-sample variability is present in the data. We further demonstrate how Markov Random Fields and vector-valued median filtering can be applied to reduce noise effects and further improve the classification results in a post-hoc smoothing step. Our study gives clear evidence that digital staining by means of IMS constitutes a promising complement to chemical staining techniques.
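The effect of post-hoc smoothing on a noisy per-pixel classification can be illustrated with a plain median filter over the label map. This is a deliberately simplified stand-in for the paper's MRF and vector-valued median smoothing; the data and parameters are synthetic:

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(4)
labels = np.zeros((32, 32), dtype=np.uint8)
labels[:, 16:] = 1                           # ground truth: two tissue classes

# simulate a classifier output with 10% salt-and-pepper label noise
noisy = labels.copy()
flip = rng.random(labels.shape) < 0.10
noisy[flip] = 1 - noisy[flip]

# post-hoc smoothing: each pixel takes the majority label of its 5x5 window
smoothed = ndi.median_filter(noisy, size=5)

err_before = (noisy != labels).mean()
err_after = (smoothed != labels).mean()
```

Isolated misclassifications are removed because a lone wrong label cannot win a majority vote in its neighborhood; only errors clustered near the class boundary can survive.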

B. Menze, B. Kelm, N. Splitthoff, U. Köthe, F.A. Hamprecht:
"On oblique random forests",
Mach. Learning and Knowledge Discovery in Databases, Springer LNCS 6912, pp. 453-469, 2011
Abstract  PDF
In his original paper on random forests, Breiman proposed two different decision tree ensembles: one generated from "orthogonal" trees with thresholds on individual features in every split, and one from "oblique" trees separating the feature space by randomly oriented hyperplanes. In spite of a rising interest in the random forest framework, however, ensembles built from orthogonal trees (RF) have gained most, if not all, attention so far.
In the present work we propose to employ "oblique" random forests (oRF) built from multivariate trees which explicitly learn optimal split directions at internal nodes using linear discriminative models, rather than using random coefficients as in the original oRF. This oRF outperforms RF, as well as other classifiers, on nearly all data sets but those with discrete factorial features. Learned node models perform distinctively better than random splits. An oRF feature importance score proves preferable over standard RF feature importance scores such as Gini or permutation importance. The topology of the oRF decision space appears to be smoother and better adapted to the data, resulting in improved generalization performance. Overall, the oRF proposed here may be preferred over standard RF on most learning tasks involving numerical and spectral data.
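The difference between an orthogonal and an oblique split can be shown on a single node. Below, a hypothetical two-class problem separated along a diagonal is split once with a learned linear discriminant (the oblique case) and once with the best threshold on a single feature (the orthogonal case); this is a sketch of the idea, not the paper's tree-induction code:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# two classes separated along x0 + x1 = 0: no single-feature
# ("orthogonal") threshold can separate them well
X = rng.normal(0.0, 1.0, (400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# oblique split: learn the split direction with a linear discriminant
lda = LinearDiscriminantAnalysis().fit(X, y)
w, b = lda.coef_[0], lda.intercept_[0]
pred = (X @ w + b > 0).astype(int)      # side of the learned hyperplane
oblique_acc = (pred == y).mean()

# orthogonal split: best axis-aligned threshold on feature 0 alone
best_axis_acc = max(((X[:, 0] > t) == y).mean() for t in np.unique(X[:, 0]))
```

An oblique tree applies this kind of learned split recursively at every internal node, which is why it adapts much better to correlated numerical features than axis-aligned thresholds do.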

U. Köthe, F. Herrmannsdörfer, I. Kats, F.A. Hamprecht:
"SimpleSTORM: a fast, selfcalibrating reconstruction algorithm for localization microscopy",
Histochemistry and Cell Biology, 141(6):613–627, 2014.
Abstract  PDF
Although there are many reconstruction algorithms for localization microscopy, their use is hampered by the difficulty of adjusting a possibly large number of parameters correctly. We propose SimpleSTORM, an algorithm that determines appropriate parameter settings directly from the data in an initial self-calibration phase. The algorithm is based on a carefully designed yet simple model of the image acquisition process which allows us to standardize each image such that the background has zero mean and unit variance. This standardization makes it possible to detect spots by a true statistical test (instead of hand-tuned thresholds) and to denoise the images with an efficient matched filter. By reducing the strength of the matched filter, SimpleSTORM also performs reasonably on data with high spot density, trading off localization accuracy for improved detection performance. Extensive validation experiments on the ISBI Localization Challenge Dataset, as well as real image reconstructions, demonstrate the good performance of our algorithm.
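The standardize-then-test idea can be sketched in a few lines: after standardization the background is approximately unit-variance white noise, so the response of a matched (PSF-shaped) filter to pure background has a known standard deviation, and spots can be kept by a statistical test at a chosen significance level. All parameters below are hypothetical fixed values; the actual SimpleSTORM calibrates them from the data:

```python
import numpy as np
from scipy import ndimage as ndi
from scipy.stats import norm

def detect_spots(img, psf_sigma=1.5, alpha=1e-6):
    # standardize: background to (approximately) zero mean, unit variance
    # (global estimate here; the real algorithm models this per pixel)
    std_img = (img - np.median(img)) / img.std()
    # matched filter: correlate with the Gaussian PSF model
    filtered = ndi.gaussian_filter(std_img, psf_sigma)
    # background response after filtering is ~N(0, s^2), where s^2 is the
    # sum of squared filter weights (measured via an impulse response)
    impulse = np.zeros_like(img)
    impulse[img.shape[0] // 2, img.shape[1] // 2] = 1.0
    s = np.sqrt((ndi.gaussian_filter(impulse, psf_sigma) ** 2).sum())
    # true statistical test at significance level alpha, not a hand-tuned value
    return filtered > norm.isf(alpha) * s

# toy frame: unit-variance noise plus one bright emitter
rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (64, 64))
img[32, 32] += 50.0
mask = detect_spots(img)
```

Weakening the matched filter (smaller `psf_sigma`) narrows its footprint, which is the trade-off the abstract describes for high spot densities.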

H. Meine, U. Köthe, P. Stelldinger:
"A Topological Sampling Theorem for Robust Boundary Reconstruction and Image Segmentation",
Discrete Applied Mathematics (DGCI Special Issue), 157(3):524-541, 2009.
Abstract  PDF
Existing theories on shape digitization impose strong constraints on admissible shapes and require error-free data. Consequently, these theories are not applicable to most real-world situations. In this paper, we propose a new approach that overcomes many of these limitations. It assumes that segmentation algorithms represent the detected boundary by a set of points whose deviation from the true contours is bounded. Given these error bounds, we reconstruct boundary connectivity by means of Delaunay triangulation and alpha-shapes. We prove that this procedure is guaranteed to result in topologically correct image segmentations under certain realistic conditions. Experiments on real and synthetic images demonstrate the good performance of the new method and confirm the predictions of our theory.
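The reconstruction step can be sketched by keeping only the short edges of a Delaunay triangulation of the boundary samples, a simplified stand-in for the paper's alpha-shape construction; the circle data and the threshold `2 * alpha` are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_edges(points, alpha):
    """Delaunay edges no longer than 2*alpha, approximating the
    boundary connectivity of the sampled contour."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            if np.linalg.norm(points[a] - points[b]) <= 2 * alpha:
                edges.add((a, b))
    return edges

# noisy samples of a circle: short edges recover the closed contour
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)] + rng.normal(0.0, 0.005, (100, 2))
edges = alpha_edges(pts, alpha=0.1)

# every sample should lie on the reconstructed closed curve,
# i.e. be incident to at least two short edges
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
```

The theorem in the paper makes this precise: when the sampling density and the point-wise error bound are compatible with the chosen alpha, the retained edges are guaranteed to have the topology of the true boundary.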

U. Köthe:
"What Can We Learn from Discrete Images about the Continuous World?",
in: D. Coeurjolly, I. Sivignon, L. Tougne, F. Dupont (Eds.): Discrete Geometry for Computer Imagery, Proc. DGCI 2008, Lecture Notes in Computer Science 4992, pp. 4-19, Berlin: Springer, 2008. (note: this article is © Springer-Verlag)
Abstract  PDF
Image analysis attempts to perceive properties of the continuous real
world by means of digital algorithms. Since discretization discards
an infinite amount of information, it is difficult to predict if and
when digital methods will produce reliable results. This paper reviews
theories which establish explicit connections between the continuous
and digital domains (such as Shannon's sampling theorem and a recent
geometric sampling theorem) and describes some of their consequences
for image analysis. Although many problems are still open, we can
already conclude that adherence to these theories leads to significantly
more stable and accurate algorithms.

P. Stelldinger, U. Köthe:
"Towards a general sampling theory for shape preservation",
Image and Vision Computing, Special Issue Discrete Geometry for Computer Vision, 23(2):237-248, 2005.
Abstract  PDF
Computerized image analysis makes statements about the continuous world by looking at a discrete representation. Therefore, it is important to know precisely which information is preserved during digitization. We analyse this question in the context of shape recognition. Existing results in this area are based on very restricted models and thus not applicable to real imaging situations. We present generalizations in several directions: first, we introduce a new shape similarity measure that approximates human perception better. Second, we prove a geometric sampling theorem for arbitrary dimensional spaces. Third, we extend our sampling theorem to 2-dimensional images that are subjected to blurring by a disk point spread function. Our findings are steps towards a general sampling theory for shapes that shall ultimately describe the behavior of real optical systems.