Geodesics in Latent Space 🧠
April 2025
This project explores the geometry of latent spaces in Variational Autoencoders (VAEs) by computing geodesic paths between data points. Rather than connecting latent points with straight (Euclidean) lines, we optimize smooth curves that better follow the learned data manifold.
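The idea can be sketched with a toy example. Below, a discrete curve in latent space is optimized so that its energy, measured as the sum of squared segment lengths in decoder output space, decreases; the decoder here is a random tanh map standing in for a trained VAE decoder, and the finite-difference optimizer is an illustrative stand-in for the project's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "decoder" weights (assumption: a trained VAE decoder would be used instead).
W = rng.normal(size=(2, 5))

def decode(z):
    """Toy nonlinear decoder mapping 2-D latents to 5-D outputs."""
    return np.tanh(z @ W)

def energy(curve):
    """Discrete curve energy: sum of squared decoder-space segment lengths."""
    d = decode(curve)
    return float(np.sum((d[1:] - d[:-1]) ** 2))

def geodesic(z0, z1, n_points=12, steps=200, lr=0.2, eps=1e-4):
    """Optimize the interior points of a latent curve by finite-difference
    gradient descent, accepting only energy-decreasing steps."""
    ts = np.linspace(0.0, 1.0, n_points)[:, None]
    curve = (1 - ts) * z0 + ts * z1        # straight-line initialization
    e = energy(curve)
    for _ in range(steps):
        grad = np.zeros_like(curve)
        for i in range(1, n_points - 1):   # endpoints stay fixed
            for j in range(curve.shape[1]):
                bumped = curve.copy()
                bumped[i, j] += eps
                grad[i, j] = (energy(bumped) - e) / eps
        trial = curve - lr * grad
        e_trial = energy(trial)
        if e_trial < e:
            curve, e = trial, e_trial
        else:
            lr *= 0.5                      # backtrack if the step overshoots
    return curve

z0, z1 = np.array([-2.0, 0.0]), np.array([2.0, 1.0])
straight = np.linspace(0.0, 1.0, 12)[:, None] * (z1 - z0) + z0
geo = geodesic(z0, z1)
print(energy(straight), energy(geo))  # the optimized curve's energy never exceeds the straight line's
```

Since the curve is initialized as the straight line and only energy-decreasing updates are accepted, the geodesic's energy is at most that of the Euclidean interpolation.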
We first trained a standard VAE on a subset of MNIST and computed geodesics between latent representations by minimizing curve energy. We then extended the setup with ensemble VAEs, using multiple decoders to better capture uncertainty in the learned geometry. This also allowed us to define a new energy function based on model averaging, estimated via Monte Carlo sampling across decoders.
To evaluate robustness, we trained multiple models and compared how consistent the geodesics were across runs. As expected, decoder ensembles produced more stable geodesics, especially in regions with little data support. We quantified this using the Coefficient of Variation (CoV), showing that ensembles reduce uncertainty compared to single-decoder setups.
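The CoV itself is just the standard deviation of a quantity (e.g. a geodesic's length) across runs, normalized by its mean. A minimal sketch, with purely hypothetical length values for the two setups:

```python
import numpy as np

def coefficient_of_variation(values):
    """CoV = standard deviation / mean; lower means the quantity is
    more consistent across independently trained models."""
    values = np.asarray(values, dtype=float)
    return float(np.std(values) / np.mean(values))

# Hypothetical geodesic lengths across four training runs (illustrative
# numbers, not results from the project).
single_decoder = [3.1, 2.4, 3.8, 2.9]
ensemble = [3.0, 3.1, 2.9, 3.05]

print(coefficient_of_variation(single_decoder))
print(coefficient_of_variation(ensemble))
```

Because the CoV is dimensionless, it lets runs with different mean geodesic lengths be compared on the same scale.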
The project was part of the Advanced Machine Learning course at DTU and done in collaboration with Jone Steinhoff, Mads Prip, and Petr Boska Nylander.

