# CMI Seminar

*CMI Postdoctoral Fellow, CMS, Caltech*

Generalized Low Rank Models (GLRMs) extend the idea of Principal Components Analysis (PCA) to embed arbitrary data tables consisting of numerical, Boolean, categorical, ordinal, and other data types into a low-dimensional space. This framework encompasses many well-known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously.
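To make the idea concrete, here is a minimal sketch of the simplest special case: a quadratic-loss low rank model fit only to the observed entries of a table, via alternating ridge regressions. This recovers PCA when all entries are observed and matrix completion when some are missing; the function name and parameters are illustrative, not from any particular GLRM library, and real GLRMs generalize this by allowing a different loss and regularizer per column.

```python
import numpy as np

def fit_quadratic_glrm(A, mask, k, n_iters=100, reg=0.01, seed=0):
    """Fit a rank-k model A ~= X @ Y over observed entries only.

    Quadratic loss with ridge regularization; alternating minimization,
    where each subproblem is a small least-squares solve. `mask[i, j]`
    is True when entry A[i, j] is observed. (Illustrative sketch, not
    a production GLRM solver.)
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, k))
    Y = rng.standard_normal((k, n))
    for _ in range(n_iters):
        # Update each row of X holding Y fixed (ridge regression on
        # that row's observed entries).
        for i in range(m):
            obs = mask[i]
            Yo = Y[:, obs]
            X[i] = np.linalg.solve(Yo @ Yo.T + reg * np.eye(k),
                                   Yo @ A[i, obs])
        # Update each column of Y holding X fixed.
        for j in range(n):
            obs = mask[:, j]
            Xo = X[obs]
            Y[:, j] = np.linalg.solve(Xo.T @ Xo + reg * np.eye(k),
                                      Xo.T @ A[obs, j])
    return X, Y
```

Once fit, `X @ Y` imputes the unobserved entries, which is the "imputing missing entries" scheme mentioned above in its simplest form.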

In these two talks, we introduce the GLRM framework with an eye towards open problems, spending time at the interface between practical large scale data analysis and statistical and optimization theory. We'll discuss

- what can you model as a GLRM?
- what can they be used for?
- when can we trust them?
- how can we fit them?
- what kinds of information or control might be needed to do better?

The first talk in the series focuses on understanding the modeling flexibility and large scale data analysis applications of low rank models, and on connecting theoretical statistical results with the practical setting of data analysis.

The second talk delves further into optimization theory, and into recent results using these ideas in a "bandit" setting in which data table entries can be actively queried.