In addition, I’m interested in generic software that brings state-of-the-art algorithms to the end user, and I maintain the VIGRA image analysis library.
- S. Radev, F. Graw, S. Chen, N. Mutters, V. Eichel, T. Bärnighausen, U. Köthe (2021). “OutbreakFlow: Model-based Bayesian inference of disease outbreak dynamics with invertible neural networks and its application to the COVID-19 pandemics in Germany”, PLOS Computational Biology, arXiv:2010.00300. [link], [arxiv], [pdf]
- S. Radev, M. D’Alessandro, U. Mertens, A. Voss, U. Köthe, P. Bürkner (2021). “Amortized Bayesian model comparison with evidential deep learning”, IEEE Trans. Neural Networks and Learning Systems, arXiv:2004.10629. [link], [arxiv], [pdf]
- P. Sorrenson, C. Rother, U. Köthe: “Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)”, Intl. Conf. Learning Representations, 2020.
Abstract | PDF
- R. Mackowiak, L. Ardizzone, U. Köthe, C. Rother (2021). “Generative Classifiers as a Basis for Trustworthy Image Classification”, CVPR 2021 (oral presentation), arXiv:2007.15036 [arxiv], [pdf]
- S. Radev, U. Mertens, A. Voss, L. Ardizzone, U. Köthe (2020). “BayesFlow: Learning complex stochastic models with invertible neural networks”, IEEE Trans. Neural Networks and Learning Systems, doi:10.1109/TNNLS.2020.3042395, arXiv:2003.06281. [link], [arxiv], [pdf]
- S. Wolf, A. Bailoni, C. Pape, N. Rahaman, A. Kreshuk, U. Köthe, F.A. Hamprecht: “The Mutex Watershed and its Objective: Efficient, Parameter-Free Graph Partitioning”. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
Link | PDF
- L. Ardizzone, C. Lüth, J. Kruse, C. Rother, U. Köthe, “Guided Image Generation with Conditional Invertible Neural Networks”, arXiv:1907.02392, 2019.
Abstract | PDF
- S. Berg, D. Kutra, …, U. Köthe, F.A. Hamprecht, A. Kreshuk: “ilastik: interactive machine learning for (bio)image analysis”, Nature Methods, vol. 16, pp. 1226–1232, 2019.
Link
- L. Ardizzone, J. Kruse, S. Wirkert, D. Rahner, E.W. Pellegrini, R.S. Klessen, L. Maier-Hein, C. Rother, U. Köthe:
“Analyzing Inverse Problems with Invertible Neural Networks”
arXiv:1808.04730, Intl. Conf. Learning Representations, 2019.
Abstract | PDF
In many tasks, in particular in natural science, the goal is to determine hidden system parameters from a set of measurements. Often, the forward process from parameter- to measurement-space is a well-defined function, whereas the inverse problem is ambiguous: one measurement may map to multiple different sets of parameters. In this setting, the posterior parameter distribution, conditioned on an input measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task — so-called Invertible Neural Networks (INNs). Although INNs are not new, they have, so far, received little attention in literature. While classical neural networks attempt to solve the ambiguous inverse problem directly, INNs are able to learn it jointly with the well-defined forward process, using additional latent output variables to capture the information otherwise lost. Given a specific measurement and sampled latent variables, the inverse pass of the INN provides a full distribution over parameter space. We verify experimentally, on artificial data and real-world problems from astrophysics and medicine, that INNs are a powerful analysis tool to find multi-modalities in parameter space, to uncover parameter correlations, and to identify unrecoverable parameters.
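For readers unfamiliar with invertible architectures, the minimal numpy sketch below illustrates the affine coupling mechanism that makes such networks analytically invertible. It is illustrative only: the two "subnetworks" s and t are fixed random stand-ins (hypothetical placeholders), whereas the paper trains real networks and conditions them on the measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "subnetworks": in a real INN these are trained MLPs; here they are
# fixed random maps, just to demonstrate exact invertibility of the coupling.
W_s = 0.1 * rng.standard_normal((2, 2))
W_t = rng.standard_normal((2, 2))
s = lambda u: np.tanh(u @ W_s.T)   # log-scale branch
t = lambda u: u @ W_t.T            # translation branch

def coupling_forward(x):
    """Affine coupling: split x, transform the second half conditioned on the first."""
    x1, x2 = x[:2], x[2:]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    return np.concatenate([x1, y2])

def coupling_inverse(y):
    """Exact inverse in closed form; no iterative solve is needed."""
    y1, y2 = y[:2], y[2:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

x = rng.standard_normal(4)
assert np.allclose(coupling_inverse(coupling_forward(x)), x)
```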
- Stefan T. Radev, Ulf K. Mertens, Andreas Voss, Ullrich Köthe:
“Towards end‐to‐end likelihood‐free inference with convolutional neural networks”
British Journal of Mathematical and Statistical Psychology. doi: 10.1111/bmsp.12159, 2019.
Abstract | PDF
Complex simulator-based models with non-standard sampling distributions require sophisticated design choices for reliable approximate parameter inference. We introduce a fast, end-to-end approach for approximate Bayesian computation (ABC) based on fully convolutional neural networks. The method enables users of ABC to derive simultaneously the posterior mean and variance of multidimensional posterior distributions directly from raw simulated data. Once trained on simulated data, the convolutional neural network is able to map real data samples of variable size to the first two posterior moments of the relevant parameter’s distributions. Thus, in contrast to other machine learning approaches to ABC, our approach allows us to generate reusable models that can be applied by different researchers employing the same model. We verify the utility of our method on two common statistical models (i.e., a multivariate normal distribution and a multiple regression scenario), for which the posterior parameter distributions can be derived analytically. We then apply our method to recover the parameters of the leaky competing accumulator (LCA) model and we reference our results to the current state-of-the-art technique, which is probability density approximation (PDA). Results show that our method exhibits a lower approximation error compared with other machine learning approaches to ABC. It also performs similarly to PDA in recovering the parameters of the LCA model.
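The core idea, learning a reusable mapping from raw simulated data to posterior summaries, can be sketched in a few lines. The snippet below is a toy stand-in under stated assumptions: it uses a random forest regressor instead of the paper's convolutional network, a conjugate Gaussian model so the true posterior mean is known, and sorting as a cheap substitute for a permutation-invariant architecture.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy simulator: n i.i.d. draws from N(theta, 1); under a N(0, 1) prior the
# posterior mean is known analytically, so the learned estimate can be checked.
def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n)

# Amortized training set: raw simulated data -> generating parameter.
# Regressing theta with squared loss approximates the posterior mean E[theta | data].
thetas = rng.normal(0.0, 1.0, size=5000)
X = np.stack([np.sort(simulate(th)) for th in thetas])   # sorting = crude permutation invariance
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, thetas)

# Apply the trained model to an "observed" data set and compare to the analytic value.
x_obs = simulate(0.7)
analytic_post_mean = x_obs.sum() / (len(x_obs) + 1.0)
print(model.predict(np.sort(x_obs)[None])[0], analytic_post_mean)
```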
- S. Wolf, C. Pape, A. Bailoni, N. Rahaman, A. Kreshuk, U. Köthe, F.A. Hamprecht:
“The Mutex Watershed: Efficient, Parameter-Free Image Partitioning”
in: Europ. Conf. Computer Vision (ECCV’18), pp. 546-562, 2018.
Abstract | PDF
Image partitioning, or segmentation without semantics, is the task of decomposing an image into distinct segments; or equivalently, the task of detecting closed contours in an image. Most prior work either requires seeds, one per segment; or a threshold; or formulates the task as an NP-hard signed graph partitioning problem. Here, we propose an algorithm with empirically linearithmic complexity. Unlike seeded watershed, the algorithm can accommodate not only attractive but also repulsive cues, allowing it to find a previously unspecified number of segments without the need for explicit seeds or a tunable threshold. The algorithm itself, which we dub “mutex watershed”, is closely related to a minimal spanning tree computation. It is deterministic and easy to implement. When presented with short-range attractive and long-range repulsive cues from a deep neural network, the mutex watershed gives results that currently define the state-of-the-art in the competitive ISBI 2012 EM segmentation benchmark. These results are also better than those obtained from other recently proposed clustering strategies operating on the very same network outputs.
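The algorithm's core loop is short enough to sketch. The toy implementation below is an assumed, simplified interface (not the authors' released code): it processes attractive and repulsive edges in order of decreasing absolute weight and maintains mutual-exclusion constraints alongside a union-find structure.

```python
def mutex_watershed(n_nodes, edges):
    """Minimal mutex watershed sketch.

    edges: list of (u, v, weight, is_repulsive), processed by descending |weight|.
    Attractive edges merge clusters unless a mutex constraint forbids it;
    repulsive edges add a mutex constraint between clusters that are not yet merged.
    """
    parent = list(range(n_nodes))
    mutex = [set() for _ in range(n_nodes)]      # constraints, keyed by cluster root

    def find(x):                                  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v, w, repulsive in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if repulsive:
            mutex[ru].add(rv)
            mutex[rv].add(ru)
        elif rv not in mutex[ru]:                 # merge unless forbidden by a mutex edge
            parent[rv] = ru
            mutex[ru] |= mutex[rv]                # ru's cluster inherits rv's constraints
            for other in mutex[rv]:
                mutex[other].discard(rv)
                mutex[other].add(ru)
            mutex[rv] = set()

    return [find(i) for i in range(n_nodes)]

# Tiny example: 0-1 attract strongly, 1-2 repel more strongly than 0-2 attract.
print(mutex_watershed(3, [(0, 1, 0.9, False), (1, 2, 0.8, True), (0, 2, 0.5, False)]))
# -> nodes 0 and 1 share a label, node 2 stays separate
```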
- S. Wolf, L. Schott, U. Köthe, F.A. Hamprecht:
“Learned Watershed: End-to-End Learning of Seeded Segmentation”
in: Intl. Conf. Computer Vision (ICCV’17), pp. 2030-2038, 2017.
Abstract | PDF
Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm. We show, for the first time, how to train watershed computation jointly with boundary map prediction. The estimator for the merging priorities is cast as a neural network that is convolutional (over space) and recurrent (over iterations). The latter allows learning of complex shape priors. The method gives the best known seeded segmentation results on the CREMI segmentation challenge.
- C. Sommer, C. Straehle, U. Köthe, F.A. Hamprecht:
“ilastik: Interactive learning and segmentation toolkit”
In: IEEE International Symposium on Biomedical Imaging (ISBI), pp. 230-233, 2011.
Abstract | PDF
Segmentation is the process of partitioning digital images into meaningful regions. The analysis of biological high content images often requires segmentation as a first step. We propose ilastik as an easy-to-use tool which allows the user without expertise in image processing to perform segmentation and classification in a unified way. ilastik learns from labels provided by the user through a convenient mouse interface. Based on these labels, ilastik infers a problem specific segmentation. A random forest classifier is used in the learning step, in which each pixel’s neighborhood is characterized by a set of generic (nonlinear) features. ilastik supports up to three spatial plus one spectral dimension and makes use of all dimensions in the feature calculation. ilastik provides realtime feedback that enables the user to interactively refine the segmentation result and hence further fine-tune the classifier. An uncertainty measure guides the user to ambiguous regions in the images. Real time performance is achieved by multi-threading which fully exploits the capabilities of modern multi-core machines. Once a classifier has been trained on a set of representative images, it can be exported and used to automatically process a very large number of images (e.g. using the CellProfiler pipeline). ilastik is an open source project and released under the BSD license at www.ilastik.org.
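As a rough illustration of this workflow (generic nonlinear features, a random forest trained on sparse labels, dense prediction), here is a self-contained toy sketch using scipy and scikit-learn on a synthetic image. It is only a stand-in: ilastik itself provides a much richer feature set, interactivity, uncertainty guidance, and multi-dimensional data handling.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude
from sklearn.ensemble import RandomForestClassifier

# Toy image: bright disk on a noisy background (stand-in for real microscopy data).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
image = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
image += 0.3 * rng.standard_normal((64, 64))

# Generic per-pixel features at a few scales (smoothed intensity, gradient magnitude).
features = np.stack(
    [gaussian_filter(image, s) for s in (1, 2, 4)]
    + [gaussian_gradient_magnitude(image, s) for s in (1, 2, 4)],
    axis=-1,
).reshape(-1, 6)

# Sparse "user scribbles": a handful of foreground and background pixels.
fg = np.ravel_multi_index(([30, 32, 34], [30, 32, 34]), (64, 64))
bg = np.ravel_multi_index(([2, 5, 60], [2, 60, 5]), (64, 64))
X = features[np.concatenate([fg, bg])]
y = np.array([1, 1, 1, 0, 0, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
segmentation = clf.predict(features).reshape(64, 64)   # dense prediction from sparse labels
```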
- U. Köthe: “Edge and Junction Detection with an Improved Structure Tensor”
in: B. Michaelis, G. Krell (Eds.): Pattern Recognition, Proc. of 25th DAGM Symposium, Magdeburg 2003, Springer LNCS 2781, pp. 25-32, 2003.
Abstract | PDF – Awarded the main prize of the German Pattern Recognition Society (DAGM) 2003
We describe three modifications to the structure tensor approach to low-level feature extraction. We first show that the structure tensor must be represented at a higher resolution than the original image. Second, we propose a nonlinear filter for structure tensor computation that avoids undesirable blurring. Third, we introduce a method to simultaneously extract edge and junction information. Examples demonstrate significant improvements in the quality of the extracted features.
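For orientation, a plain structure tensor computation looks like the sketch below. The upsampling step only hints at the paper's first point (higher-resolution tensor representation); the proposed nonlinear integration filter and the joint edge/junction extraction are not reproduced here.

```python
from scipy.ndimage import gaussian_filter, zoom

def structure_tensor(image, sigma_grad=1.0, sigma_int=2.0, oversample=2):
    """Basic structure tensor: derivative-of-Gaussian gradients, channel-wise
    products, spatial integration. Returns [Txx, Txy, Tyy] on an oversampled grid.
    """
    gx = gaussian_filter(image, sigma_grad, order=(0, 1))   # derivative along x
    gy = gaussian_filter(image, sigma_grad, order=(1, 0))   # derivative along y
    products = [gx * gx, gx * gy, gy * gy]
    # Represent the tensor on a finer grid than the image before integration
    # (a crude approximation of the paper's oversampling argument).
    products = [zoom(p, oversample, order=1) for p in products]
    return [gaussian_filter(p, sigma_int * oversample) for p in products]
```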
- U. Köthe: “Integrated Edge and Junction Detection with the Boundary Tensor”
in: ICCV ‘03, Proc. of 9th Intl. Conf. on Computer Vision, Nice 2003, vol. 1, pp. 424-431, 2003.
Abstract | PDF
The boundaries of image regions necessarily consist of edges (in particular, step and roof edges), corners, and junctions. Currently, different algorithms are used to detect each boundary type separately, but the integration of the results into a single boundary representation is difficult. Therefore, a method for the simultaneous detection of all boundary types is needed. We propose to combine responses of suitable polar separable filters into what we will call the boundary tensor. The trace of this tensor is a measure of boundary strength, while the small eigenvalue and its difference to the large one represent corner/junction and edge strengths respectively. We prove that the edge strength measure behaves like a rotationally invariant quadrature filter. A number of examples demonstrate the properties of the new method and illustrate its application to image segmentation.
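The decomposition into boundary, edge, and corner/junction strength mentioned above follows directly from the eigenvalues of the 2x2 tensor. A small numpy sketch, which assumes the tensor components t11, t12, t22 have already been computed (e.g. by the paper's polar separable filters):

```python
import numpy as np

def boundary_measures(t11, t12, t22):
    """Closed-form eigenvalues of the symmetric tensor [[t11, t12], [t12, t22]]."""
    trace = t11 + t22
    root = np.sqrt((t11 - t22) ** 2 / 4.0 + t12 ** 2)
    lam_big = trace / 2.0 + root
    lam_small = trace / 2.0 - root
    return {
        "boundary_strength": trace,            # total boundary energy
        "corner_strength": lam_small,          # isotropic part: corners/junctions
        "edge_strength": lam_big - lam_small,  # anisotropic part: edges
    }
```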
- B. Andres, U. Köthe, M. Helmstaedter, W. Denk, F.A. Hamprecht:
“Segmentation of SBFSEM Volume Data of Neural Tissue by Hierarchical Classification”
in: G. Rigoll (Ed.): Pattern Recognition, Proc. DAGM 2008, Springer LNCS 5096 , pp. 142-152, 2008.
Abstract | BibTeX | PDF – Received a Best Paper Award from the German Association for Pattern Recognition (DAGM)
Three-dimensional electron-microscopic image stacks with almost isotropic resolution allow, for the first time, the determination of the complete connection matrix of parts of the brain. In spite of major advances in staining, correct segmentation of these stacks remains challenging, because even a few local mistakes can lead to severe global errors. We propose a hierarchical segmentation procedure based on statistical learning and topology-preserving grouping. Edge probability maps are computed by a random forest classifier (trained on hand-labeled data) and partitioned into supervoxels by the watershed transform. Over-segmentation is then resolved by another random forest. Careful validation shows that the results of our algorithm are close to human labelings.
- B. Andres, T. Kröger, K. Briggman, W. Denk, N. Korogod, G. Knott, U. Köthe, F.A. Hamprecht:
“Globally Optimal Closed-Surface Segmentation for Connectomics”
in: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (Eds.): 12th Eur. Conf. Computer Vision (ECCV 2012) part III, Springer LNCS 7574, pp. 778-791, 2012.
Abstract | BibTeX | PDF
We address the problem of partitioning a volume image into a previously unknown number of segments, based on a likelihood of merging adjacent supervoxels. Towards this goal, we adapt a higher-order probabilistic graphical model that makes the duality between supervoxels and their joint faces explicit and ensures that merging decisions are consistent and surfaces of final segments are closed. First, we propose a practical cutting-plane approach to solve the MAP inference problem to global optimality despite its NP-hardness. Second, we apply this approach to challenging large-scale 3D segmentation problems for neural circuit reconstruction (Connectomics), demonstrating the advantage of this higher-order model over independent decisions and finite-order approximations.
- T. Beier, B. Andres, U. Köthe, F.A. Hamprecht:
“An Efficient Fusion Move Algorithm for the Minimum Cost Lifted Multicut Problem”
in: Leibe, B., Matas, J., Sebe, N., Welling, M. (Eds.): 14th Eur. Conf. Computer Vision (ECCV 2016), 2016.
Abstract | PDF
Many computer vision problems can be cast as an optimization problem whose feasible solutions are decompositions of a graph. The minimum cost lifted multicut problem is such an optimization problem. Its objective function can penalize or reward all decompositions for which any given pair of nodes are in distinct components. While this property has many potential applications, such applications are hampered by the fact that the problem is NP-hard. We propose a fusion move algorithm that computes feasible solutions better and more efficiently than existing algorithms. We demonstrate this on applications to image segmentation, obtaining a new state of the art for a problem in biological image analysis.
- U. Köthe: “Reusable Software in Computer Vision”
in: B. Jähne, H. Haussecker, P. Geissler (Eds.): Handbook of Computer Vision and Applications, Volume 3: Systems and Applications, pp. 103-132, San Diego: Academic Press, 1999.
PDF
- U. Köthe, M. Felsberg:
“Riesz-Transforms Versus Derivatives: On the Relationship Between the Boundary Tensor and the Energy Tensor”
in: R. Kimmel, N. Sochen, J. Weickert (Eds.): Scale Space and PDE Methods in Computer Vision, Springer LNCS 3459, pp. 179-191, 2005.
Abstract | PDF
Traditionally, quadrature filters and derivatives have been considered as alternative approaches to low-level image analysis. In this paper we show that there actually exist close connections: We define the quadrature-based boundary tensor and the derivative-based gradient energy tensor which exhibit very similar behavior. We analyse the reason for this and determine how to minimize the difference. These insights lead to a simple and very efficient integrated feature detection algorithm.
- A. Kreshuk, U. Köthe, E. Pax, D. Bock, F.A. Hamprecht:
“Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks”
PLoS ONE 9(2): e87351, 2014.
Abstract | BibTeX | PDF
We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://ilastik.org/synapse-detection/sstem).
- B. Kausler, M. Schiegg, B. Andres, M. Lindner, U. Köthe, H. Leitte, J. Wittbrodt, L. Hufnagel, F.A. Hamprecht:
“A discrete chain graph model for 3D+t cell tracking with high misdetection robustness”
in: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (Eds.): 12th Eur. Conf. Computer Vision (ECCV 2012) part III, Springer LNCS 7574, pp. 144-157, 2012.
Abstract | BibTeX | PDF
Tracking by assignment is well suited for tracking a varying number of divisible cells, but suffers from false positive detections. We reformulate tracking by assignment as a chain graph (a mixed directed-undirected probabilistic graphical model) and obtain a tracking simultaneously over all time steps from the maximum a-posteriori configuration. The model is evaluated on two challenging four-dimensional data sets from developmental biology. Compared to previous work, we obtain improved tracks due to an increased robustness against false positive detections and the incorporation of temporal domain knowledge.
- M. Hanselmann, U. Köthe, M. Kirchner, B.Y. Renard, E.R. Amstalden, K. Glunde, R.M.A. Heeren, F.A. Hamprecht:
“Towards Digital Staining using Imaging Mass Spectrometry and Random Forests”
Journal of Proteome Research, 8(7):3558-3567, 2009
Abstract | BibTeX | PDF
We show on Imaging Mass Spectrometry (IMS) data that the Random Forest classifier can be used for automated tissue classification and that it results in predictions with high sensitivities and positive predictive values, even when inter-sample variability is present in the data. We further demonstrate how Markov Random Fields and vector-valued median filtering can be applied to reduce noise effects and further improve the classification results in a post-hoc smoothing step. Our study gives clear evidence that digital staining by means of IMS constitutes a promising complement to chemical staining techniques.
- B. Menze, B. Kelm, N. Splitthoff, U. Köthe, F.A. Hamprecht:
“On oblique random forests”
in: Mach. Learning and Knowledge Discovery in Databases, Springer LNCS 6912, pp. 453-469, 2011.
Abstract | PDF
In his original paper on random forests, Breiman proposed two different decision tree ensembles: one generated from “orthogonal” trees with thresholds on individual features in every split, and one from “oblique” trees separating the feature space by randomly oriented hyperplanes. In spite of a rising interest in the random forest framework, however, ensembles built from orthogonal trees (RF) have gained most, if not all, attention so far. In the present work we propose to employ “oblique” random forests (oRF) built from multivariate trees which explicitly learn optimal split directions at internal nodes using linear discriminative models, rather than using random coefficients as in the original oRF. This oRF outperforms RF, as well as other classifiers, on nearly all data sets but those with discrete factorial features. Learned node models perform distinctively better than random splits. An oRF feature importance score proves to be preferable over standard RF feature importance scores such as Gini or permutation importance. The topology of the oRF decision space appears to be smoother and better adapted to the data, resulting in improved generalization performance. Overall, the oRF proposed here may be preferred over standard RF on most learning tasks involving numerical and spectral data.
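The difference between an orthogonal and an oblique split can be shown in a few lines. The toy node below learns its split direction with scikit-learn's LDA, which merely stands in for the linear discriminative node models discussed in the paper; it is not the authors' oRF implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Two-class data that no single-feature threshold separates well,
# but a linear combination of the features does.
n = 200
X = rng.standard_normal((n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X += 0.2 * rng.standard_normal((n, 2))

def oblique_split(X, y):
    """One internal node of an 'oblique' tree: learn a split direction with LDA
    instead of thresholding a single feature (the 'orthogonal' split)."""
    lda = LinearDiscriminantAnalysis().fit(X, y)
    scores = X @ lda.coef_.ravel() + lda.intercept_   # signed distance to the hyperplane
    return scores > 0                                  # which child each sample goes to

left = oblique_split(X, y)
print("split purity:", max((y[left] == 1).mean(), (y[left] == 0).mean()))
```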
- U. Köthe, F. Herrmannsdörfer, I. Kats, F.A. Hamprecht:
“SimpleSTORM: a fast, self-calibrating reconstruction algorithm for localization microscopy”
Histochemistry and Cell Biology, 141(6):613–627, 2014.
Abstract | PDF
Although there are many reconstruction algorithms for localization microscopy, their use is hampered by the difficulty of adjusting a possibly large number of parameters correctly. We propose SimpleSTORM, an algorithm that determines appropriate parameter settings directly from the data in an initial self-calibration phase. The algorithm is based on a carefully designed yet simple model of the image acquisition process which allows us to standardize each image such that the background has zero mean and unit variance. This standardization makes it possible to detect spots by a true statistical test (instead of hand-tuned thresholds) and to de-noise the images with an efficient matched filter. By reducing the strength of the matched filter, SimpleSTORM also performs reasonably on data with high spot density, trading off localization accuracy for improved detection performance. Extensive validation experiments on the ISBI Localization Challenge Dataset, as well as real image reconstructions, demonstrate the good performance of our algorithm.
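The standardize-test-filter pipeline can be sketched compactly. The snippet below is a simplified stand-in (median/MAD background estimation, a fixed PSF width, and a plain z-test threshold) rather than the paper's self-calibration procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

def detect_spots(frame, psf_sigma=1.3, alpha=1e-3):
    """Toy spot detection: standardize the background to zero mean / unit variance,
    apply a PSF-matched Gaussian filter, keep pixels passing a z-test at level alpha.
    Background statistics are estimated robustly via median and MAD (a simplification)."""
    bg_mean = np.median(frame)
    bg_std = 1.4826 * np.median(np.abs(frame - bg_mean)) + 1e-12
    z = (frame - bg_mean) / bg_std                 # standardized image
    matched = gaussian_filter(z, psf_sigma)        # matched filter ~ PSF shape
    # Gaussian filtering reduces the noise std of unit-variance white noise
    # to 1 / (2 * sqrt(pi) * sigma); account for that in the test threshold.
    noise_scale = 1.0 / (2.0 * np.sqrt(np.pi) * psf_sigma)
    threshold = norm.isf(alpha) * noise_scale
    return matched > threshold
```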
- H. Meine, U. Köthe, P. Stelldinger:
“A Topological Sampling Theorem for Robust Boundary Reconstruction and Image Segmentation”
Discrete Applied Mathematics (DGCI Special Issue), 157(3):524-541, 2009.
Abstract | PDF
Existing theories on shape digitization impose strong constraints on admissible shapes, and require error-free data. Consequently, these theories are not applicable to most real-world situations. In this paper, we propose a new approach that overcomes many of these limitations. It assumes that segmentation algorithms represent the detected boundary by a set of points whose deviation from the true contours is bounded. Given these error bounds, we reconstruct boundary connectivity by means of Delaunay triangulation and alpha-shapes. We prove that this procedure is guaranteed to result in topologically correct image segmentations under certain realistic conditions. Experiments on real and synthetic images demonstrate the good performance of the new method and confirm the predictions of our theory.
- U. Köthe: “What Can We Learn from Discrete Images about the Continuous World?”
in: D. Coeurjolly, I. Sivignon, L. Tougne, F. Dupont (Eds.): Discrete Geometry for Computer Imagery, Proc. DGCI 2008, Springer LNCS 4992, pp. 4-19, 2008.
Abstract | PDF
Image analysis attempts to perceive properties of the continuous real world by means of digital algorithms. Since discretization discards an infinite amount of information, it is difficult to predict if and when digital methods will produce reliable results. This paper reviews theories which establish explicit connections between the continuous and digital domains (such as Shannon’s sampling theorem and a recent geometric sampling theorem) and describes some of their consequences for image analysis. Although many problems are still open, we can already conclude that adherence to these theories leads to significantly more stable and accurate algorithms.
- P. Stelldinger, U. Köthe:
“Towards a general sampling theory for shape preservation”
Image and Vision Computing, Special Issue Discrete Geometry for Computer Vision, 23(2): 237-248, 2005.
Abstract | PDF
Computerized image analysis makes statements about the continuous world by looking at a discrete representation. Therefore, it is important to know precisely which information is preserved during digitization. We analyse this question in the context of shape recognition. Existing results in this area are based on very restricted models and thus not applicable to real imaging situations. We present generalizations in several directions: first, we introduce a new shape similarity measure that approximates human perception better. Second, we prove a geometric sampling theorem for arbitrary dimensional spaces. Third, we extend our sampling theorem to 2-dimensional images that are subjected to blurring by a disk point spread function. Our findings are steps towards a general sampling theory for shapes that shall ultimately describe the behavior of real optical systems.