Computer Vision

Dr. Bogdan Savchynskyy, SoSe 2024

This seminar belongs to the Master in Physics (specialization Computational Physics, code "MVSem") and the Master of Applied Informatics (code "IS"), but is also open to students of Scientific Computing and anyone interested.

Summary

The topic of this semester is

Video-Based Scene Analysis.

We will consider inference and learning techniques for this class of problems, as well as related applications in computer vision.

General Information

Please register for the seminar in Müsli. The first seminar will take place on Thursday, April 18 at 14:00. Please make sure to participate!

  • Seminar: Thu, 14:00 – 16:00 in Mathematikon B (Berliner Str. 43), SR B128
    Enter through the door on the Berliner Straße side of the building. Ring the doorbell labelled "HCI am IWR" to be let in. The seminar room is on the 3rd floor.
  • Credits: 4/6 CP depending on the course of study; see LSF

Seminar Repository:

Slides and the schedule of the seminar will be placed in HeiBox.

Papers to Choose from:

[1] J. M. Fácil, A. Concha, L. Montesano, and J. Civera, “Single-View and Multi-View Depth Fusion,” 2017.
[2] A. Geiger, J. Ziegler, and C. Stiller, “StereoScan: Dense 3d reconstruction in real-time,” 2011.
[3] L. Koestler, N. Yang, N. Zeller, and D. Cremers, “TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo,” 2021.
[4] C. Liu, S. Kumar, S. Gu, R. Timofte, and L. Van Gool, “Single Image Depth Prediction Made Better: A Multivariate Gaussian Take,” 2023.
[5] A. Mitiche, Y. Mathlouthi, and I. Ben Ayed, “Monocular Concurrent Recovery of Structure and Motion Scene Flow,” 2015.
[6] V. Patil, W. Van Gansbeke, D. Dai, and L. Van Gool, “Don’t Forget The Past: Recurrent Depth Estimation from Monocular Video,” 2020.
[7] L. Piccinelli, C. Sakaridis, and F. Yu, “iDisc: Internal Discretization for Monocular Depth Estimation,” 2023.
[8] R. Ranftl, V. Vineet, Q. Chen, and V. Koltun, “Dense Monocular Depth Estimation in Complex Dynamic Scenes,” 2016.
[9] R. Schuster, C. Unger, and D. Stricker, “A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions,” 2020.
[10] Z. Teed, L. Lipson, and J. Deng, “Deep Patch Visual Odometry,” 2023.
[11] F. Wimbauer, N. Yang, L. Von Stumberg, N. Zeller, and D. Cremers, “MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera,” 2021.
[12] D. Xiao, Q. Yang, B. Yang, and W. Wei, “Monocular scene flow estimation via variational method,” 2017.
[13] K. Yamaguchi, D. McAllester, and R. Urtasun, “Robust Monocular Epipolar Flow Estimation,” 2013.
[14] K. Yamaguchi, D. McAllester, and R. Urtasun, “Efficient Joint Segmentation, Occlusion Labeling, Stereo and Flow Estimation,” 2014.
[15] N. Yang, L. Von Stumberg, R. Wang, and D. Cremers, “D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry,” 2020.
[16] X. Yin, X. Wang, X. Du, and Q. Chen, “Scale Recovery for Monocular Visual Odometry Using Depth Estimated with Deep Convolutional Neural Fields,” 2017.
[17] A. Schmied, T. Fischer, M. Danelljan, M. Pollefeys, and F. Yu, “R3D3: Dense 3D Reconstruction of Dynamic Scenes from Multiple Cameras,” 2023.
[18] A. P. D. Cin, G. Boracchi, and L. Magri, “Multi-body Depth and Camera Pose Estimation from Multiple Views”.
[19] J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-Scale Direct Monocular SLAM,” 2014.
[20] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, “DTAM: Dense tracking and mapping in real-time,” 2011.
[21] J. Engel, V. Koltun, and D. Cremers, “Direct Sparse Odometry,” 2016.
[22] N. Yang, R. Wang, J. Stückler, and D. Cremers, “Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry,” 2018.
[23] S. M. H. Miangoleh, S. Dille, L. Mai, S. Paris, and Y. Aksoy, “Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging,” 2021.
[24] C. Tang and P. Tan, “BA-Net: Dense Bundle Adjustment Network,” 2019.

Presentation schedule

See the HeiBox repository.

Contact

Dr. Bogdan Savchynskyy
If you contact me via email, the subject line must contain the tag [SemCV]. Emails without this tag are very likely to get lost and will therefore be ignored!