Wednesday, April 27, 2016
4:00 PM - 5:00 PM
Annenberg 105

Guaranteed Non-convex Learning Algorithms through Tensor Decomposition

Seminar in Computing & Mathematical Sciences
Anima Anandkumar, Assistant Professor, Electrical Engineering & Computer Science, U.C. Irvine
Speaker's Bio:
Anima Anandkumar has been a faculty member in the EECS Department at U.C. Irvine since August 2010. Her research interests are in the areas of large-scale machine learning, non-convex optimization, and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms for a variety of learning problems. She is the recipient of several awards, including the Alfred P. Sloan Fellowship, Microsoft Faculty Fellowship, Google Research Award, ARO and AFOSR Young Investigator Awards, NSF CAREER Award, Early Career Excellence in Research Award at UCI, Best Thesis Award from the ACM SIGMETRICS society, IBM Fran Allen PhD Fellowship, and best paper awards from the ACM SIGMETRICS and IEEE Signal Processing societies. She has been featured in a number of forums, such as O'Reilly Media, a Quora ML session, the Huffington Post, Microsoft Research Connections, and MIT News. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, and a visiting faculty member at Microsoft Research New England in 2012 and 2014.

Unsupervised learning is the challenging problem of making automated discoveries without external supervision. It requires fitting unlabeled data to large-scale latent variable models. Traditional learning approaches such as expectation maximization or variational inference are slow to converge and get stuck in local optima due to the non-convexity of the likelihood function. In contrast, we have developed a method-of-moments approach, based on decomposition of low-order moment tensors, which is guaranteed to learn the correct model under mild conditions with (low-order) polynomial sample and computational complexity. In practice, tensor methods significantly outperform previous learning approaches, both in training time and model fitting, on a wide range of problems such as document categorization, social network analysis, discovering neuronal cell types, and learning sentence embeddings. Further, we have established that tensor methods are guaranteed to find globally optimal solutions to other challenging non-convex problems, such as training multi-layer neural networks and reinforcement learning of partially observable Markov decision processes. These positive results demonstrate that many learning tasks, previously considered intractable, can be solved efficiently under mild and transparent conditions.
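To give a concrete flavor of the tensor decomposition at the heart of this approach, the sketch below (an illustration, not the speaker's implementation) recovers the components of a symmetric third-order tensor with orthonormal factors via tensor power iteration with deflation, a standard routine in this line of work. The component count `k`, dimension `d`, and weights are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric 3rd-order tensor T = sum_i w_i * a_i (x) a_i (x) a_i
# with orthonormal components a_i (the "odeco" case the theory covers).
k, d = 3, 5
A, _ = np.linalg.qr(rng.standard_normal((d, k)))  # orthonormal columns a_i
w = np.array([3.0, 2.0, 1.0])                     # positive weights
T = np.einsum('i,ai,bi,ci->abc', w, A, A, A)

def tensor_power_iteration(T, n_restarts=10, n_iters=100):
    """Recover one (weight, component) pair by the tensor power method."""
    d = T.shape[0]
    best_val, best_v = -np.inf, None
    for _ in range(n_restarts):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iters):
            v = np.einsum('abc,b,c->a', T, v, v)  # v <- T(I, v, v)
            v /= np.linalg.norm(v)
        val = np.einsum('abc,a,b,c->', T, v, v, v)  # eigenvalue T(v, v, v)
        if val > best_val:
            best_val, best_v = val, v
    return best_val, best_v

# Recover all k components: extract one, deflate it out, repeat.
est_w, est_A = [], []
for _ in range(k):
    lam, v = tensor_power_iteration(T)
    est_w.append(lam)
    est_A.append(v)
    T = T - lam * np.einsum('a,b,c->abc', v, v, v)  # deflation

print(sorted(np.round(est_w, 4), reverse=True))  # recovers weights ~ [3, 2, 1]
```

In practice, the tensor here would be an empirical moment tensor estimated from data (e.g., third-order word co-occurrences for a topic model), after a whitening step that orthogonalizes the components; the guarantees discussed in the talk concern exactly this kind of procedure under mild identifiability conditions.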

For more information, please contact Sheila Shull by phone at 626.395.4560 or by email at [email protected].