Advancement: Measure-valued proximal recursions for learning and control

Iman Nodozi
Electrical & Computer Engineering PhD Student
Location: Virtual Event
Advisor: Abhishek Halder

Join us on Zoom: https://ucsc.zoom.us/j/95446131828?pwd=bjUvWFRkSHMySVBka0F5czlOdlg3UT09 (Passcode: 487590)

Description: We investigate convex optimization problems over the space of probability measures. Several problems in machine learning and statistics can be cast in this form, including sampling from an unnormalized prior, policy iteration in reinforcement learning, mean-field dynamics of classification, and training generative adversarial networks (GANs) via variants of stochastic gradient descent. Several problems in control theory can also be cast in this form; exemplars include prediction and nonlinear estimation of conditional joint state distributions, optimal distribution steering (a.k.a. the Schrödinger bridge), and its zero-noise limit: optimal mass transport.
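To fix ideas (our notation, not part of the announcement): each of these applications can be posed as minimizing a convex functional F over probability measures,

    \min_{\mu \in \mathcal{P}_2(\mathbb{R}^d)} F(\mu),

where, for example, sampling from an unnormalized prior corresponds to $F(\mu) = \mathrm{KL}(\mu \,\|\, \pi)$ with $\pi$ the normalized target, whose unique minimizer is $\mu^\ast = \pi$; the other problems listed above correspond to different choices of F.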

We propose theory and algorithms for solving such problems via generalized proximal recursions on the associated measure-valued functionals. A proximal map, in our context, generalizes the notion of a gradient step to the space of probability or population measures. We provide case studies from machine learning and stochastic control to highlight the performance of the proposed proximal algorithms. We then detail the theory and algorithms for solving such problems via distributed computation using what we call "Wasserstein consensus ADMM." This generalizes a variant of the standard Euclidean alternating direction method of multipliers (ADMM) to the space of probability measures, but departs significantly from its Euclidean counterpart. Being both distributed and nonparametric, such algorithms are particularly suitable for large-scale implementation via parallelization.
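To make the proximal map concrete (this is the standard Jordan-Kinderlehrer-Otto-type construction; the talk's generalization may differ): with step size $h > 0$ and the 2-Wasserstein metric $W_2$ as the proximal regularizer, the recursion reads

    \mu_k = \mathrm{prox}^{W_2}_{hF}(\mu_{k-1}) := \arg\min_{\mu \in \mathcal{P}_2} \left\{ \tfrac{1}{2} W_2^2(\mu, \mu_{k-1}) + h\, F(\mu) \right\}, \quad k = 1, 2, \ldots,

which reduces in form to the usual Euclidean proximal step when the measures are replaced by points. Below is a minimal runnable sketch of one standard way to compute such a step numerically, using entropic regularization and the Sinkhorn-style scaling iterations of Chizat et al.; the 1D grid, Gaussian target pi, step size h, and regularization eps are all assumptions made for this illustration, not the talk's specific algorithm.

import numpy as np

# Illustration only (not the talk's algorithm): entropy-regularized
# Wasserstein proximal recursion for F(mu) = KL(mu || pi) on a 1D grid.
n, eps, h = 200, 1e-2, 0.1                        # grid size, entropic reg., step size
x = np.linspace(-4.0, 4.0, n)                     # 1D state grid (assumed)
C = 0.5 * (x[:, None] - x[None, :]) ** 2          # quadratic transport cost
K = np.exp(-C / eps)                              # Gibbs kernel

pi = np.exp(-0.5 * x**2); pi /= pi.sum()          # target: standard normal (assumed)
mu = np.exp(-0.5 * (x - 2.0)**2); mu /= mu.sum()  # initial measure (assumed)

def prox_step(mu_prev, n_iter=500):
    # Solves  argmin_mu  OT_eps(mu_prev, mu) + h * KL(mu || pi),
    # an entropic surrogate of the W_2^2 proximal (JKO) step, by
    # alternating diagonal scalings of the Gibbs kernel.
    b = np.ones(n)
    for _ in range(n_iter):
        a = mu_prev / (K @ b)                     # enforce the first marginal
        s = K.T @ a                               # second marginal of the plan is b * s
        b = (pi / s) ** (h / (eps + h))           # KL-prox update on the second marginal
    return b * (K.T @ a)                          # the updated measure mu_k

for k in range(20):                               # proximal recursion mu_k = prox(mu_{k-1})
    mu = prox_step(mu)
print("L1 gap to target:", np.abs(mu - pi).sum())

Running the loop drives mu from the shifted initial density toward the target pi, mimicking a discretized Wasserstein gradient flow of the KL functional; swapping in other convex functionals F only changes the prox update on the second marginal.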