Fisher-inspired Approach to Inference and Modeling

Speaker Name
Vahid Tarokh
Speaker Title
Rhodes Family Professor of Electrical and Computer Engineering
Start Time
End Time
Location
Virtual Event and Engineering 2-192

Join us on Zoom: https://ucsc.zoom.us/j/99831240196?pwd=cVFBVW0yYXhTZ21zZVA0K05ROStsUT09

Description: It has been empirically observed that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence. In some sense, the former can be thought of as the derivative of the KL divergence with respect to a Gaussian perturbation. We use this connection to present a new set of information quantities, which we refer to as gradient information. These measures serve as surrogates for classical information measures, such as those based on logarithmic loss, Kullback-Leibler divergence, and directed Shannon information, in many data-processing scenarios of interest, and often provide significant computational advantages, improved stability, and robustness.
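For background (a standard definition and a known de Bruijn-type identity, stated here as context rather than as part of the abstract), one common convention for the Fisher divergence and its connection to the KL divergence under Gaussian perturbation is

D_F(p \,\|\, q) = \mathbb{E}_{x \sim p}\big[ \| \nabla_x \log p(x) - \nabla_x \log q(x) \|^2 \big],

\frac{d}{dt} D_{\mathrm{KL}}(p_t \,\|\, q_t) = -\tfrac{1}{2}\, D_F(p_t \,\|\, q_t),

where p_t and q_t denote the densities obtained by adding independent N(0, tI) Gaussian noise to samples from p and q, respectively (the constant in front varies across references).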

As a first example, we apply these measures to the Chow-Liu tree algorithm and demonstrate strong performance and significant computational savings on both synthetic and real data. We will then briefly discuss other applications of the Fisher divergence, including model matching, using the Fisher score (also known as the Hyvärinen score).
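As a rough illustration of the model-matching idea (a minimal sketch, not code from the talk; the one-dimensional Gaussian setup and all names are hypothetical), the Hyvärinen score can be computed and averaged over data without ever evaluating a normalizing constant:

import numpy as np

def hyvarinen_score_gaussian(x, mu, sigma2):
    # Hyvarinen score of N(mu, sigma2) at points x:
    # Laplacian of the log-density plus half the squared gradient of the log-density.
    grad = -(x - mu) / sigma2       # d/dx log p(x)
    laplacian = -1.0 / sigma2       # d^2/dx^2 log p(x)
    return laplacian + 0.5 * grad**2

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=5000)

# Compare candidate models by their average Hyvarinen score; lower is better,
# and the comparison never requires the models' normalizing constants.
for mu, sigma2 in [(1.0, 4.0), (0.0, 1.0)]:
    score = hyvarinen_score_gaussian(data, mu, sigma2).mean()
    print(f"model N({mu}, {sigma2}): avg Hyvarinen score = {score:.3f}")

In this toy run the correctly specified model N(1, 4) attains the lower average score, which is the sense in which the score can be used for model matching.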

In the generative modeling direction, we will first use the Fisher divergence to design a new class of robust generative auto-encoders (AEs), referred to as Fisher auto-encoders. Our approach is to design Fisher AEs by minimizing the Fisher divergence between the intractable joint distribution of the observed data and latent variables and that of the postulated/modeled joint distribution. In contrast to KL-based variational AEs (VAEs), the Fisher AE can exactly quantify the distance between the true and the model-based posterior distributions.
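In generic VAE-style notation (the symbols below are standard notation, not necessarily those used in the talk), the objective described above can be written as

\min_{\theta, \phi} \; D_F\big( p_{\mathrm{data}}(x)\, q_\phi(z \mid x) \;\big\|\; p(z)\, p_\theta(x \mid z) \big),

where the gradients inside D_F are taken with respect to the joint variable (x, z); it is this joint Fisher divergence that allows the gap between the true and modeled posteriors to be quantified exactly, as noted in the abstract.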

Qualitative and quantitative results on both the MNIST and CelebA datasets demonstrate the competitive performance of Fisher AEs in terms of robustness compared to other AEs, such as VAEs and Wasserstein AEs. We may also briefly discuss applications of the Fisher divergence to other generative models, such as restricted Boltzmann machines (RBMs) and deep belief networks.

Bio: Vahid Tarokh worked at AT&T Labs-Research until 2000. From 2000 to 2002, he was an Associate Professor at the Massachusetts Institute of Technology (MIT). In 2002, he joined Harvard University as a Hammond Vinton Hayes Senior Fellow of Electrical Engineering and Perkins Professor of Applied Mathematics. He joined Duke University in January 2018 as the Rhodes Family Professor of Electrical and Computer Engineering and Mathematics and a Bass Connections Endowed Professor. He was also a Gordon Moore Distinguished Research Fellow at Caltech in 2018.