Abstract: General-purpose machine learning has long been known to be infeasible for even moderately high-dimensional problems. And yet problems with millions of dimensions are routinely solved every day. Clearly these problems must contain special structure that alleviates the curse of dimensionality, and the methods employed must somehow leverage this structure. Three such types of special structure are regularity, independence, and redundancy. For example, the feature of interest may be independent of most of the variability, or most of the apparent variability may contain redundant information. Sometimes these independent or redundant nuisance variables can be removed linearly by PCA, but generically we should expect nonlinear relationships between these variables and the feature of interest. A collection of smooth nonlinear relationships between variables generically defines a mathematical structure known as a manifold, and the theory of manifold learning interprets modern machine learning algorithms under this prior assumption. In this discussion, we introduce the manifold learning paradigm, survey key results, and show how manifold learning has influenced practical machine learning methods, including the forecasting of dynamical systems. We also discuss the shortcomings of the manifold assumption as a model for real data, as well as several potential ways forward that are currently being explored.
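The contrast drawn in the abstract between linear and nonlinear structure can be illustrated with a minimal sketch (hypothetical synthetic data, not from the talk): PCA captures a linearly embedded one-dimensional feature with a single component, but the same feature wrapped onto a circle, the simplest nonlinear manifold, spreads its variance across two linear directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 1-D feature t, observed in 3-D coordinates.
t = rng.uniform(-1.0, 1.0, 500)

# Linear case: every coordinate is a linear function of t (rank-1 structure),
# so one principal component captures essentially all of the variance.
linear = np.column_stack([t, 2.0 * t, -t])

# Nonlinear case: the same feature wrapped onto a circle, a 1-D manifold
# that no single linear direction can capture.
circle = np.column_stack([np.cos(np.pi * t), np.sin(np.pi * t), np.zeros_like(t)])

def pca_variance_ratio(X):
    """Fraction of total variance captured by the first principal component."""
    Xc = X - X.mean(axis=0)                      # center the data
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values
    return s[0] ** 2 / np.sum(s ** 2)

print(pca_variance_ratio(linear))  # close to 1.0: linear structure, PCA suffices
print(pca_variance_ratio(circle))  # well below 1.0: the 1-D circle needs two components
```

This is the generic situation the abstract points to: when nuisance variability relates nonlinearly to the feature of interest, linear methods like PCA cannot separate it, motivating the manifold learning viewpoint.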
Speaker bio: Tyrus Berry received his Ph.D. from George Mason University in 2013 studying dynamical systems and manifold learning under Timothy Sauer. He then spent two fruitful years at Pennsylvania State University as a postdoc working with John Harlim before returning to George Mason as a postdoc in 2015. He is currently an assistant professor at George Mason University where he continues to focus on manifold learning research with applications to dynamical systems.