Schedule for: 25w5430 - Wasserstein Gradient Flows in Math and Machine Learning

Beginning on Sunday, June 29 and ending on Friday, July 4, 2025

All times in Banff, Alberta time, MDT (UTC-6).

Sunday, June 29
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in Vistas Dining Room, top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering
Meet and Greet at the BIRS Lounge (PDC building, 2nd floor).
(PDC BIRS Lounge)
Monday, June 30
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
08:45 - 09:00 Introduction and Welcome by BIRS Staff
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(TCPL 201)
09:00 - 10:00 Matthias Erbar: Covariance-modulated Optimal Transport and Gradient Flows
The first part of the talk will give an introduction to the mathematical structure of Wasserstein gradient flows. In the second part, I will present a variant of the dynamical optimal transport problem in which the energy to be minimised is modulated by the covariance matrix of the distribution. Such transport metrics arise naturally in mean-field limits of certain ensemble Kalman methods for solving inverse problems. We show that the transport problem splits into two coupled minimisation problems: one for the evolution of mean and covariance of the interpolating curve and one for its shape. The latter consists in minimising the usual Wasserstein length under the constraint of maintaining fixed mean and covariance along the interpolation. We analyse the geometry induced by this modulated transport distance on the space of probabilities as well as the dynamics of the associated gradient flows. These flows show better convergence properties than their classical Wasserstein counterparts, exhibiting universal exponential convergence rates in the case of Gaussian targets.
(TCPL 201)
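For context, the dynamical (Benamou-Brenier) formulation of optimal transport referenced above is the standard minimisation
$$ W_2^2(\mu_0,\mu_1) = \inf_{(\mu_t,v_t)} \Big\{ \int_0^1 \!\! \int |v_t(x)|^2 \, \mathrm{d}\mu_t(x)\,\mathrm{d}t \;:\; \partial_t\mu_t + \nabla\cdot(\mu_t v_t) = 0 \Big\}, $$
and, as a hedged sketch of the variant described in the abstract, the covariance-modulated problem replaces the kinetic energy by $\int_0^1\int \langle v_t, C(\mu_t)^{-1} v_t\rangle \, \mathrm{d}\mu_t\,\mathrm{d}t$, where $C(\mu_t)$ denotes the covariance matrix of $\mu_t$ (notation introduced here only for illustration).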
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:15 Beatrice Acciaio: Absolutely continuous curves of stochastic processes
We study absolutely continuous curves in the adapted Wasserstein space of filtered processes. We provide a probabilistic representation of such curves as flows of adapted processes on a common filtered probability space, extending classical superposition results to the adapted setting. We characterize geodesics in this space and derive an adapted Benamou-Brenier-type formula, obtaining, as an application, a Skorokhod-type representation for sequences of filtered processes under the adapted weak topology. Finally, we provide an adapted version of the continuity equation characterizing absolutely continuous curves of filtered processes.
(TCPL 201)
11:30 - 13:00 Lunch
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
13:15 - 14:00 Giulia Cavagnari: Stochastic approximation as dissipative Wasserstein flow: a measure-theoretic perspective
We propose a unified probabilistic framework to study the convergence of stochastic Euler schemes—such as stochastic gradient descent—in a separable Hilbert space X. These algorithms approximate deterministic ODEs driven by dissipative vector fields arising from stochastic superposition. By interpreting their evolution in the Wasserstein space over X through Probability Vector Fields, we establish convergence to an implicit limit dynamics governed by a maximal dissipative extension of the underlying barycentric field. As a direct result, our work recovers the well-known convergence of classical stochastic schemes in X to the unique solution of the underlying deterministic ODE. This is a joint work with Giuseppe Savaré (Bocconi University - Italy) and Giacomo Enrico Sodini (Universität Wien - Austria).
(TCPL 201)
14:00 - 14:30 Rentian Yao: Learning Density Evolution from Snapshot Data
Motivated by learning dynamical structures from static snapshot data, this paper presents a distribution-on-scalar regression approach for estimating the density evolution of a stochastic process from its noisy temporal point clouds. We propose an entropy-regularized nonparametric maximum likelihood estimator (E-NPMLE), which leverages the entropic optimal transport as a smoothing regularizer for the density flow. We show that the E-NPMLE has almost dimension-free statistical rates of convergence to the ground truth distributions, which exhibit a striking phase transition phenomenon in terms of the number of snapshots and per-snapshot sample size. To efficiently compute the E-NPMLE, we design a novel particle-based and grid-free coordinate KL divergence gradient descent (CKLGD) algorithm and prove its polynomial iteration complexity. This work contributes to the theoretical understanding and practical computation of estimating density evolution from noisy observations in arbitrary dimensions.
(TCPL 201)
14:30 - 15:00 Andrew Warren: Sampling beyond the log-Sobolev class
What sort of distributions can be sampled from effectively? For many popular sampling schemes and generative models, including Langevin MC, Hamiltonian MC, and score-based generative models, existing convergence analyses require that the target distribution satisfy the log-Sobolev inequality, with the rate of convergence depending on the log-Sobolev constant. Phrased differently, such algorithms can be expected to struggle to sample from "badly multimodal" distributions, and to fail to converge altogether for distributions with multiple connected components. In this talk I will offer several perspectives on why the log-Sobolev class is a "fundamental" structural assumption in such a variety of algorithms, and outline some directions for sampling schemes that ought to work well on qualitatively different classes of distributions.
(TCPL 201)
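For reference, a target $\pi$ satisfies a log-Sobolev inequality with constant $\alpha > 0$ if (in one standard convention)
$$ \mathrm{KL}(\rho\,\|\,\pi) \;\le\; \frac{1}{2\alpha} \int \Big\| \nabla \log \frac{\rho}{\pi} \Big\|^2 \, \mathrm{d}\rho \qquad \text{for all } \rho, $$
which yields the exponential decay $\mathrm{KL}(\rho_t\,\|\,\pi) \le e^{-2\alpha t}\,\mathrm{KL}(\rho_0\,\|\,\pi)$ along the Langevin dynamics; this is why the convergence rates mentioned above degrade as the log-Sobolev constant deteriorates.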
15:00 - 15:30 Coffee Break (TCPL Foyer)
15:30 - 16:00 Omar Abdul Halim: Multi- to one-dimensional screening and semi-discrete optimal transport
We study the monopolist's screening problem with a multi-dimensional distribution of consumers and a one-dimensional space of goods. We establish general conditions under which solutions satisfy a structural condition known as nestedness, which greatly simplifies their analysis and characterization. Under these assumptions, we go on to develop a general method to solve the problem, either in closed form or with relatively simple numerical computations, and illustrate it with examples. These results are established both when the monopolist has access to only a discrete subset of the one-dimensional space of products, as well as when the entire continuum is available.
(TCPL 201)
16:00 - 16:30 Forest Kobayashi: How to approximate a blob with a curve
The following is joint work with Jonathan Hayase (UW) and Young-Heon Kim (UBC). Given an $n$-dimensional measure $\mu$, how can we "most efficiently" approximate $\mu$ via an $m$-dimensional set $\Sigma$? In this talk we discuss one possible formalization of this question in which one seeks an "optimal" balance between an OT cost (representing "goodness-of-fit") and the size of a certain Sobolev norm (loosely, a "regularization" term that keeps $\Sigma$ simple). Here, we have good news and bad news. The bad: Finding precise solutions to this problem appears fundamentally difficult, since we are able to recover an NP-hard problem in a simple limiting case. The good: By (loosely speaking) approximating a certain "constrained" gradient flow, we obtain a surprisingly effective local algorithm. We give the high-level ideas of our method, and, time permitting, discuss a novel interpretation for how regularization can improve training in certain generative learning problems.
(TCPL 201)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in Vistas Dining Room, top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 21:00 Social gathering
We will introduce ourselves and get to know each other.
(PDC BIRS Lounge)
Tuesday, July 1
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
09:00 - 10:00 Bharath Sriperumbudur: (De)regularized Wasserstein gradient flows via reproducing kernels (TCPL 201)
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:15 Youssef Mroueh: Gromov-Wasserstein Gradient Flows
The Wasserstein space of probability measures offers a rich Riemannian structure for gradient flow algorithms, but it may not always suit tasks where preserving global data structure is crucial. To address this, we explore gradient flows in the Gromov-Wasserstein (GW) geometry, which aligns more naturally with scenarios requiring global structural preservation. We focus on the inner product GW (IGW) distance, which retains data angles and provides analytical tractability. By proposing an implicit IGW minimizing movement scheme, we generate sequences of distributions that align in the GW sense. Our analysis reveals the intrinsic Riemannian structure of IGW geometry and establishes a Benamou-Brenier-like formulation for IGW flows, offering insights for potential applications in machine learning and biology. Numerical results demonstrate IGW's capacity for capturing global structures. Joint work with Zhengxin Zhang, Ziv Goldfeld, Kristjan Greenewald, and Bharath K. Sriperumbudur.
(TCPL 201)
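As a hedged sketch of the implicit minimizing movement scheme mentioned above (the functional $\mathcal{F}$ and the exact scaling are not specified in the abstract), one step with time step $\tau > 0$ takes the JKO-type form
$$ \mu_{k+1} \;\in\; \arg\min_{\mu} \Big\{ \mathcal{F}(\mu) + \frac{1}{2\tau}\,\mathrm{IGW}^2(\mu,\mu_k) \Big\}, $$
with the inner product Gromov-Wasserstein distance playing the role of the usual squared Wasserstein penalty.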
11:15 - 11:30 Group Photo
Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo!
(TCPL Foyer)
11:30 - 13:00 Lunch
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
13:15 - 14:00 Adil Salim: Theory for diffusion models with minimal data assumptions
Diffusion models constitute the leading paradigm for automatically generating images and videos. They work by viewing images and videos as samples from a probability distribution, and learning a mapping from a standard Gaussian to that distribution. In this way, each Gaussian sample can be mapped to an image or video. Unfortunately, this mapping is learned imperfectly, introducing several sources of error in diffusion models. More precisely, this mapping takes the form of a diffusion process that cannot be implemented exactly. In this talk, we will prove the convergence of diffusion models by analyzing the sources of error arising from the inexact implementation of the diffusion. Unlike concurrent works, our analysis operates under minimal data assumptions, and our complexity results are polynomial in all relevant parameters of the problem. If time allows, we will also discuss some aspects of the convergence proof that were reused in subsequent works on diffusion models. Joint work with Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, and Anru Zhang.
(TCPL 201)
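As generic background (a standard setup, not necessarily the speakers' exact one), a diffusion model runs an Ornstein-Uhlenbeck forward process and its time reversal, with the score $\nabla \log p_t$ replaced by a learned estimate $s_\theta$:
$$ \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t, \quad X_0 \sim p_{\mathrm{data}}; \qquad \mathrm{d}Y_t = \big(Y_t + 2\,s_\theta(T-t, Y_t)\big)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}\bar{B}_t, \quad Y_0 \sim \mathcal{N}(0, I_d). $$
The sources of error from inexact implementation then typically comprise the score estimation error ($s_\theta \neq \nabla \log p_t$), the initialization error ($\mathcal{N}(0, I_d) \neq p_T$), and the time discretization of the reverse SDE.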
14:00 - 14:30 Lauren Conger: Monotonicity of Coupled Multispecies Wasserstein-2 Gradient Flows
We present a notion of λ-monotonicity for an n-species system of PDEs governed by flow dynamics, extending monotonicity in Banach spaces to the Wasserstein-2 metric space. We show that monotonicity implies the existence of and convergence to a unique steady state. In the special setting of Wasserstein-2 gradient descent of different energies for each species, we prove convergence to the unique Nash equilibrium of the associated energies, and discuss the relationship between monotonicity and displacement convexity. This extends known zero-sum (min-max) results in infinite-dimensional game theory to the general-sum setting. We provide examples of monotone coupled gradient flow systems, including cross-diffusion, nonlocal interaction, and linear and nonlinear diffusion. Numerically, we demonstrate convergence of a four-player economic model for market competition, and an optimal transport problem. This is joint work with Ricardo Baptista, Franca Hoffmann, Eric Mazumdar, and Lillian Ratliff.
(TCPL 201)
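For reference, the Banach-space notion being extended is $\lambda$-(strong) monotonicity of an operator $A$,
$$ \langle A(x) - A(y),\, x - y \rangle \;\ge\; \lambda\,\|x - y\|^2 \qquad \text{for all } x, y, $$
and the talk develops an analogue for coupled flows of $n$ species with the Wasserstein-2 metric in place of the norm (the precise multispecies formulation is not reproduced here).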
14:30 - 15:00 Clément Bonet: Flowing Datasets with Wasserstein over Wasserstein Gradient Flows
Many applications in machine learning involve data represented as probability distributions. The emergence of such data requires radically novel techniques to design tractable gradient flows on probability distributions over this type of (infinite-dimensional) object. For instance, being able to flow labeled datasets is a core task for applications ranging from domain adaptation to transfer learning or dataset distillation. In this setting, we propose to represent each class by the associated conditional distribution of features, and to model the dataset as a mixture distribution supported on these classes (which are themselves probability distributions), meaning that labeled datasets can be seen as probability distributions over probability distributions. We endow this space with a metric structure from optimal transport, namely the Wasserstein over Wasserstein (WoW) distance, derive a differential structure on this space, and define WoW gradient flows. The latter enable us to design dynamics over this space that decrease a given objective functional. We apply our framework to transfer learning and dataset distillation tasks, leveraging our gradient flow construction as well as novel tractable functionals that take the form of Maximum Mean Discrepancies with Sliced-Wasserstein based kernels between probability distributions. Joint work with Christophe Vauthier and Anna Korba.
(TCPL 201)
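For reference, the Maximum Mean Discrepancy between distributions $P$ and $Q$ with kernel $k$ is
$$ \mathrm{MMD}_k^2(P,Q) = \mathbb{E}[k(X,X')] - 2\,\mathbb{E}[k(X,Y)] + \mathbb{E}[k(Y,Y')], \qquad X, X' \sim P,\ Y, Y' \sim Q, $$
and in the WoW setting the samples are themselves probability distributions, with $k$ built from the Sliced-Wasserstein distance (for instance $k(\mu,\nu) = \exp(-\mathrm{SW}_2^2(\mu,\nu)/(2\sigma^2))$, a hypothetical choice shown only for illustration; the talk's exact kernels are not specified in the abstract).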
15:00 - 15:30 Coffee Break (TCPL Foyer)
15:30 - 16:00 Jakwang Kim: Stability of adversarial training
In this talk, I will discuss the stability of adversarial training problems. Despite the revolution in AI brought about by neural networks, much of their behavior is still not clearly understood. In particular, one of the most fundamental questions about neural networks is robustness. It has been observed that neural networks are highly sensitive to small perturbations, a major obstacle to deploying AI in real-world applications. Adversarial training is one way to surmount this issue: by introducing an adversary, classifiers are trained to be more robust against small perturbations. However, it is still not known whether, in adversarial training, a classifier trained on finitely many samples eventually converges to the true one for the population distribution. I address this question and show the stability of adversarial training: after smoothing an empirical distribution, a saddle point for the smoothed empirical distribution, namely a pair of an optimal classifier and an optimal adversarial attack, converges to the saddle point for the true smoothed distribution. The idea is to leverage the equivalence between the adversarial training model and the total-variation regularization model. Based on ongoing joint work with Dohyun Kwon (University of Seoul, KIAS).
(TCPL 201)
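For reference, the adversarial training problem discussed above is commonly written as a min-max objective (a standard formulation in which the loss $\ell$, the norm, and the perturbation budget $\varepsilon$ are generic placeholders):
$$ \min_{f}\ \mathbb{E}_{(x,y)\sim\mu} \Big[ \sup_{\|x' - x\| \le \varepsilon} \ell\big(f(x'), y\big) \Big], $$
where the inner supremum plays the role of the adversary and the outer minimization trains the classifier.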
16:00 - 16:30 Sibylle Marcotte: Conservation laws for ResNets and Transformers
Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of the optimization initialization. This “implicit bias” is believed to be responsible for some favorable properties of the trained models and could explain their good generalization properties. In this talk, I will first introduce the notion of “conservation laws” that are quantities exactly preserved during the gradient flow of a model (e.g., a ReLU network) regardless of the dataset. Using Lie algebra techniques, we determine the exact number of independent conservation laws, recovering all known laws in linear and ReLU networks, and proving there are no others in the shallow case. We also fully determine all conservation laws for single attention layers and two-layer convolutional ReLU networks, and show that residual blocks inherit the same conservation laws as their underlying blocks without skip connections. We then introduce the notion of conservation laws that depend only on a subset of parameters (corresponding e.g. to a residual block). We demonstrate that the characterization of such laws can be exactly reduced to the analysis of the corresponding building block in isolation. Finally, we examine how these newly discovered conservation principles, initially established in the continuous gradient flow regime, persist under SGD. Joint work with Gabriel Peyré and Rémi Gribonval. Associated papers: https://arxiv.org/abs/2307.00144 https://arxiv.org/abs/2506.06194
(TCPL 201)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in Vistas Dining Room, top floor of the Sally Borden Building.
(Vistas Dining Room)
Wednesday, July 2
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
09:15 - 10:00 Flavien Leger: A comparison principle for variational problems, with applications to optimal transport
We study a new approach to proving comparison principles for variational problems, based on the notion of submodularity. Our synthetic method is well adapted to infinite dimensions and to non-differentiable objects, and behaves well with respect to many types of relaxations. We apply it to obtain new comparison principles in optimal transport. This is joint work with Maxime Sylvestre.
(TCPL 201)
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:15 Austin Stromme: Asymptotic log-Sobolev constants and the Polyak-Łojasiewicz gradient domination condition
The Polyak-Łojasiewicz (PL) constant for a given function exactly characterizes the exponential rate of convergence of gradient flow uniformly over initializations, and has been of major recent interest in optimization and machine learning because it is strictly weaker than strong convexity yet implies many of the same results. In the world of sampling, the log-Sobolev inequality plays an analogous role, governing the convergence of Langevin dynamics from arbitrary initialization in Kullback-Leibler divergence. In this talk, we present a new connection between optimization and sampling by showing that the PL constant is exactly the low temperature limit of the re-scaled log-Sobolev constant, under mild assumptions. Based on joint work with Sinho Chewi.
(TCPL 201)
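For reference, the two conditions being connected are, in standard conventions,
$$ \frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu_{\mathrm{PL}}\,\big(f(x) - \min f\big) \quad \text{for all } x, \qquad \mathrm{KL}(\rho\,\|\,\pi_\tau) \;\le\; \frac{1}{2\alpha_\tau} \int \Big\| \nabla \log \frac{\rho}{\pi_\tau} \Big\|^2 \, \mathrm{d}\rho, $$
where $\pi_\tau \propto e^{-f/\tau}$ is the Gibbs measure at temperature $\tau$ and $\alpha_\tau$ its log-Sobolev constant; the talk identifies $\mu_{\mathrm{PL}}$ with the low-temperature limit of a suitably rescaled $\alpha_\tau$ (the precise rescaling is not reproduced here).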
11:15 - 12:00 Daniel Lacker: Geodesic convexity and strengthened functional inequalities on submanifolds of Wasserstein space
We study geodesic convexity properties of various functionals on submanifolds of Wasserstein spaces with their induced geometry. We obtain short new proofs of several known results, such as the strong convexity of entropy on sphere-like submanifolds due to Carlen-Gangbo, as well as new ones, such as the $\lambda$-convexity of entropy on the space of couplings of $\lambda$-log-concave marginals. The arguments revolve around a simple but versatile principle, which crucially requires no knowledge of the structure or regularity of geodesics in the submanifold (and which is valid in general metric spaces): If the EVI($\lambda$) gradient flow of a functional exists and leaves a submanifold invariant, then the restriction of the functional to the submanifold is geodesically $\lambda$-convex. In these settings, we derive strengthened forms of Talagrand and HWI inequalities on submanifolds, which we show to be related to large deviation bounds for conditioned empirical measures. This is joint work with Louis-Pierre Chaintron.
(TCPL 201)
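For reference, the EVI($\lambda$) property invoked above says that a gradient flow $(\mu_t)$ of a functional $F$ in a metric space $(X,d)$ satisfies, for every comparison point $\nu$,
$$ \frac{1}{2}\,\frac{\mathrm{d}^+}{\mathrm{d}t}\, d(\mu_t,\nu)^2 + \frac{\lambda}{2}\, d(\mu_t,\nu)^2 \;\le\; F(\nu) - F(\mu_t), $$
so the principle in the abstract reads: if such a flow exists and leaves a submanifold invariant, then $F$ restricted to that submanifold is geodesically $\lambda$-convex.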
11:30 - 13:00 Lunch
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
13:30 - 17:30 Free Afternoon (Banff National Park)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in Vistas Dining Room, top floor of the Sally Borden Building.
(Vistas Dining Room)
Thursday, July 3
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
09:00 - 10:00 Katy Craig: Gradient Flows with Different Gradients: Wasserstein, Hellinger-Kantorovich, and Vector Valued Gradient Flows
Following an overview of Wasserstein and Hellinger-Kantorovich gradient flows, I will introduce the notion of vector valued gradient flows, which arise in applications including multispecies PDEs and classification of vector valued measures. Our main result is a unified framework that connects four existing notions of vector valued optimal transport, along with a sharp inequality relating the four notions. I will close by comparing and contrasting the properties of each metric from the perspective of gradient flows and linearization.
(TCPL 201)
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:15 Jose Carrillo: The Stein-Log-Sobolev inequality and the exponential rate of convergence for the continuous Stein variational gradient descent method
The Stein Variational Gradient Descent method is a variational inference method in statistics that has recently received a lot of attention. The method provides a deterministic approximation of the target distribution, by introducing a nonlocal interaction with a kernel. Despite the significant interest, the exponential rate of convergence for the continuous method has remained an open problem, due to the difficulty of establishing the related so-called Stein-log-Sobolev inequality. Here, we prove that the inequality is satisfied for each space dimension and every kernel whose Fourier transform has a quadratic decay at infinity and is locally bounded away from zero and infinity. Moreover, we construct weak solutions to the related PDE satisfying an exponential rate of decay towards equilibrium. The main novelty in our approach is to interpret the Stein-Fisher information, also called the squared Stein discrepancy, as a duality pairing between H⁻¹(ℝⁿ) and H¹(ℝⁿ), which allows us to employ the Fourier transform. We also provide several examples of kernels for which the Stein-log-Sobolev inequality fails, partially showing the necessity of our assumptions.
(TCPL 201)
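As a hedged sketch of the quantities involved (conventions may differ from the talk), the Stein-Fisher information with kernel $k$ and the Stein-log-Sobolev inequality with constant $\lambda > 0$ read
$$ \mathrm{I}_{\mathrm{Stein}}(\rho\,|\,\pi) = \iint k(x,y)\, \nabla \log \frac{\rho}{\pi}(x) \cdot \nabla \log \frac{\rho}{\pi}(y)\, \mathrm{d}\rho(x)\,\mathrm{d}\rho(y), \qquad \mathrm{KL}(\rho\,\|\,\pi) \;\le\; \frac{1}{2\lambda}\, \mathrm{I}_{\mathrm{Stein}}(\rho\,|\,\pi), $$
and the latter is what yields the exponential decay of the KL divergence along the continuous Stein variational gradient descent flow.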
11:30 - 13:00 Lunch
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
13:30 - 14:15 Sinho Chewi: Toward ballistic acceleration for log-concave sampling
The underdamped (or kinetic) Langevin dynamics are conjectured to provide a diffusive-to-ballistic speed-up for log-concave sampling. This was recently established in continuous time via the space-time Poincaré inequality of Cao, Lu, Wang, and placed in the context of non-reversible lifts by Eberle and Lörler. However, these results have so far not led to accelerated iteration complexities for numerical discretizations. In this talk, I will describe a framework for establishing KL divergence bounds for SDE discretizations based on local error computations. We apply this to show the first algorithmic result for log-concave sampling with sublinear dependence on the condition number (i.e., a partial result toward the conjectured acceleration phenomenon). At the heart of this result is a technique to use coupling arguments to control information-theoretic divergences. This technique, which we call “shifted composition”, builds on works developed with my co-authors Jason M. Altschuler and Matthew S. Zhang.
(TCPL 201)
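For reference, the underdamped (kinetic) Langevin dynamics for a potential $f$ with friction parameter $\gamma > 0$ are
$$ \mathrm{d}X_t = V_t\,\mathrm{d}t, \qquad \mathrm{d}V_t = -\nabla f(X_t)\,\mathrm{d}t - \gamma V_t\,\mathrm{d}t + \sqrt{2\gamma}\,\mathrm{d}B_t, $$
with stationary measure $\pi(x,v) \propto \exp\big(-f(x) - \tfrac{1}{2}\|v\|^2\big)$; the conjectured ballistic speed-up is relative to the overdamped dynamics $\mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t$.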
14:15 - 15:00 Li Wang: Learning-enhanced particle methods for gradient flow PDEs
In the current stage of numerical methods for PDE, the primary challenge lies in addressing the complexities of high dimensionality while maintaining physical fidelity in our solvers. In this presentation, I will introduce deep learning assisted particle methods aimed at addressing some of these challenges. These methods combine the benefits of traditional structure-preserving techniques with the approximation power of neural networks, aiming to handle high dimensional problems with minimal training. I will begin with a discussion of general Wasserstein-type gradient flows and then extend the concept to the Landau equation in plasma physics. If time allows, I will also mention our recent progress in extending this framework to operator learning.
(TCPL 201)
15:00 - 15:30 Coffee Break (TCPL Foyer)
15:30 - 16:00 Wenjun Zhao: Data analysis through Wasserstein barycenter with general factors
In this talk, we introduce an extension of Wasserstein barycenter to general types of factors. To showcase its applicability in data analysis, we propose a general framework using the barycenter problem for simulating conditional distributions and beyond. Real-world examples on meteorological time series within purely data-driven settings will be presented to demonstrate our methodology. This talk is based on joint work with the group of Esteban Tabak (NYU).
(TCPL 201)
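For reference, the classical Wasserstein barycenter of measures $\mu_1, \dots, \mu_N$ with weights $\lambda_i \ge 0$, $\sum_i \lambda_i = 1$, is
$$ \bar{\nu} \;\in\; \arg\min_{\nu}\ \sum_{i=1}^{N} \lambda_i\, W_2^2(\nu, \mu_i), $$
and the extension discussed in the talk replaces the discrete index $i$ by general types of factors, using the resulting barycenter problem to simulate conditional distributions.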
16:00 - 16:30 Aram-Alexandre Pooladian: Theoretical and computational guarantees for variational inference via optimal transport
Variational inference (VI) is a fundamental problem in Bayesian statistics, where an unnormalized posterior is approximated through an optimization problem over a prescribed family of probability distributions. First, we investigate mean-field VI (MFVI), where our approximating family is the space of product measures. Using tools from optimal transport, we provide the first ever end-to-end optimization guarantees for this setting based on first-order algorithms. For the second half, we consider the natural extension to star-structured VI (SSVI), where a root variable impacts all the other ones. We prove the first results for existence, uniqueness, and self-consistency of the variational approximation, and, in turn, we derive quantitative approximation error bounds for the variational approximation to the posterior. Based on joint work with Roger Jiang (NYU); Shunan Sheng, Bohan Wu, and Bennett Zhu (Columbia); and Sinho Chewi (Yale).
(TCPL 201)
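For reference, mean-field VI seeks the best product-measure approximation of the posterior $\pi$ in KL divergence,
$$ \min_{\rho = \rho_1 \otimes \cdots \otimes \rho_d}\ \mathrm{KL}(\rho\,\|\,\pi), $$
while star-structured VI enlarges this family by allowing a root variable on which all the other coordinates depend (the precise parametrization used in the talk is not reproduced here).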
16:30 - 17:00 Matthew (Shunshi) Zhang: Analysis of Langevin midpoint methods using an anticipative Girsanov theorem
We introduce a new method for analyzing midpoint discretizations of stochastic differential equations (SDEs), which are frequently used in Markov chain Monte Carlo (MCMC) methods for sampling from a target measure $\pi \propto \exp(-V)$. Borrowing techniques from Malliavin calculus, we compute estimates for the Radon-Nikodym derivative for processes which may anticipate the Brownian motion, in the sense that they may not be adapted to its filtration. Applying these to various popular midpoint discretizations, we are able to improve the regularity and cross-regularity results in the literature on sampling methods. We also obtain a query complexity bound of $O(\kappa^{5/4} d^{1/4}/\epsilon^{1/2})$ for obtaining an $\epsilon^2$-accurate sample in KL divergence, under log-concavity and strong smoothness assumptions on the Hessian of $V$.
(TCPL 201)
17:30 - 19:30 Dinner
A buffet dinner is served daily between 5:30pm and 7:30pm in Vistas Dining Room, top floor of the Sally Borden Building.
(Vistas Dining Room)
Friday, July 4
07:00 - 08:45 Breakfast
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
08:45 - 09:30 Hugo Lavenant: Gradient flows of potential energies in the geometry of Sinkhorn divergences
What happens to Wasserstein gradient flows if one uses entropic optimal transport in the JKO scheme instead of plain optimal transport? I will explain why it may be relevant to use Sinkhorn divergences, built on entropic optimal transport, as they allow the regularization parameter to remain fixed. This approach leads to a new flow on the space of probability measures: a gradient flow with respect to the Riemannian geometry induced by Sinkhorn divergences. I will discuss the intriguing structure and features of this flow. This is joint work with Mathis Hardion.
(TCPL 201)
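For context, the classical JKO step with time step $\tau > 0$ is
$$ \mu_{k+1} \;\in\; \arg\min_{\mu} \Big\{ \mathcal{F}(\mu) + \frac{1}{2\tau}\, W_2^2(\mu, \mu_k) \Big\}, $$
and, on a hedged reading of the abstract, the flow discussed in the talk replaces $W_2^2$ by the Sinkhorn divergence $S_\varepsilon$ built on entropic optimal transport, with the regularization parameter $\varepsilon$ kept fixed along the scheme (the exact scaling is not specified in the abstract).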
09:30 - 10:00 Garrett Mulcahy: Langevin Diffusion Approximation to Same Marginal Schroedinger Bridge
We introduce a novel approximation to the same marginal Schrödinger bridge using the Langevin diffusion. As $\varepsilon \downarrow 0$, it is known that the barycentric projection (also known as the entropic Brenier map) of the Schrödinger bridge converges to the Brenier map, which is the identity. Our diffusion approximation is leveraged to show that, under suitable assumptions, the difference between the two is $\varepsilon$ times the gradient of the marginal log density (i.e., the score function), in $\mathbf{L}^2$. More generally, we show that the family of Markov operators, indexed by $\varepsilon > 0$, derived from integrating test functions against the conditional density of the static Schrödinger bridge at temperature $\varepsilon$, admits a derivative at $\varepsilon=0$ given by the generator of the Langevin semigroup. Hence, these operators satisfy an approximate semigroup property at low temperatures. Joint work with Medha Agarwal, Zaid Harchaoui, and Soumik Pal.
(TCPL 201)
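Restating the main estimate in symbols (a hedged paraphrase of the abstract, with $b_\varepsilon$ the barycentric projection, i.e. the entropic Brenier map, of the same-marginal bridge $\pi_\varepsilon$ at temperature $\varepsilon$, and $\rho$ the common marginal density):
$$ b_\varepsilon(x) = \mathbb{E}_{\pi_\varepsilon}\big[\, Y \mid X = x \,\big], \qquad b_\varepsilon(x) \;\approx\; x + \varepsilon\, \nabla \log \rho(x) \quad \text{in } \mathbf{L}^2 \text{ as } \varepsilon \downarrow 0. $$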
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:00 Checkout by 11AM
5-day workshop participants are welcome to use BIRS facilities (TCPL) until 3 pm on Friday, although participants are still required to check out of the guest rooms by 11AM.
(Front Desk - Professional Development Centre)
10:30 - 11:15 Jan Maas: Absolutely continuous curves in kinetic optimal transport
We discuss kinetic versions of the optimal transport problem for probability measures on phase space. These problems arise from a large deviation principle and they are based on the minimisation of the squared acceleration. We argue that a natural geometry on probability measures is obtained by an additional minimisation over the time-horizon. While the resulting object is not a metric, it defines a geometry in which absolutely continuous curves of measures can be characterised as reparametrised solutions to the Vlasov continuity equation. This is based on joint work with Giovanni Brigati (ISTA) and Filippo Quattrocchi (ISTA).
(TCPL 201)
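As a hedged sketch of the kinetic transport problem described above (normalizations may differ from the talk), the cost minimizes the squared acceleration subject to the Vlasov continuity equation on phase space,
$$ \inf_{(\mu_t, a_t)}\ \int_0^T \!\! \int |a_t(x,v)|^2 \, \mathrm{d}\mu_t(x,v)\,\mathrm{d}t \qquad \text{s.t.} \qquad \partial_t \mu_t + v\cdot\nabla_x \mu_t + \nabla_v\cdot(a_t\,\mu_t) = 0, $$
with the additional minimisation over the time horizon $T$ giving the geometry discussed in the talk.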
12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)