Generative Neural Networks for the Sciences

Prof. Dr. Ullrich Köthe, WS 2023/24

Nobel laureate Richard Feynman famously summarized his approach to science as "What I cannot create, I do not understand." Translated to machine learning, we can only claim full understanding of a dataset when we are able to create synthetic data that are indistinguishable from real observations. Generative neural networks are currently the most advanced approach toward this goal, and chatbots like ChatGPT or image generators like Midjourney demonstrate the tremendous progress this field has made in recent years.

This lecture will focus on generative modelling as a new paradigm for gaining insight and producing knowledge in the natural and life sciences. I will survey the most exciting recent developments in this area regarding model types, learning algorithms, quality assessment, and applications, covering both the latest literature and the results of my own research group.

The lecture belongs to the Master of Data and Computer Science program, but is also recommended for students pursuing a Master of Physics (specialization Computational Physics) or a Master in Scientific Computing, and for anyone else interested. Basic machine learning knowledge (e.g. from the lectures "Machine Learning Essentials" or "Machine Learning and Physics") is very helpful.


Lecture: Wednesdays, 11:15-12:45, Hörsaal Informatik (INF 205)
Lecture: Fridays, 11:15-12:45, Hörsaal Informatik (INF 205)
Tutorials: TBD
Please sign up for the lecture via Muesli.


  • Short recap of neural network basics
  • Generative modelling as advanced probability density estimation
  • Fundamental types of generative models and their training methods: autoencoders, GANs, diffusion models, energy models, normalizing flows, rectangular flows
  • Simulation-based inference with the BayesFlow framework: solve inverse problems, define fast surrogate models, perform model comparison and selection
  • Evaluation, quality diagnostics, uncertainty quantification
  • Incorporation of scientific prior knowledge: physics-informed neural networks (PINNs), Hamiltonian networks, equivariant networks
  • Outlier detection, learning to reject, domain generalization and adaptation
  • Explainable machine learning (XAI), disentangled representations, similarity (metric) learning
  • Applications to medical imaging, astrophysics, epidemiology, cognitive science etc.
  • Open problems (lots of them!)
  • Optional: reinforcement learning for optimal experimental design
  • Optional: causal discovery with neural networks
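To make the density-estimation view of generative modelling (second topic above) concrete, here is a minimal, self-contained sketch in plain NumPy: a one-dimensional affine "flow" x = mu + sigma * z with a standard normal base distribution, fitted by maximum likelihood via the change-of-variables formula and then sampled from. All parameter values and the toy dataset are illustrative, not part of the lecture material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 1000 observations drawn from N(3, 0.5^2)
data = rng.normal(3.0, 0.5, size=1000)

# Change of variables for x = mu + sigma * z, z ~ N(0, 1):
# log p(x) = log N(z; 0, 1) - log sigma,  with z = (x - mu) / sigma
def log_likelihood(x, mu, log_sigma):
    sigma = np.exp(log_sigma)
    z = (x - mu) / sigma
    return -0.5 * (z**2 + np.log(2 * np.pi)) - log_sigma

# Fit mu and log_sigma by gradient ascent on the mean log-likelihood.
mu, log_sigma = 0.0, 0.0
lr = 0.1
for _ in range(500):
    sigma = np.exp(log_sigma)
    z = (data - mu) / sigma
    grad_mu = np.mean(z / sigma)          # d/dmu of mean log-likelihood
    grad_log_sigma = np.mean(z**2 - 1.0)  # d/dlog_sigma of mean log-likelihood
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

# Generation = pushing base-distribution samples through the learned transform
samples = mu + np.exp(log_sigma) * rng.normal(size=1000)
```

After training, mu and sigma recover the mean and standard deviation of the data, and `samples` is synthetic data that is (in this trivial case, exactly) distributed like the observations. Real normalizing flows replace the affine map by a deep invertible network, but the training objective, the exact likelihood via the Jacobian term, and the sampling mechanism are the same.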