A Geometric Theory of Phase Transitions in Convex Optimization
Professor Joel A. Tropp
Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the l1 minimization method for identifying a sparse vector from random linear samples. Indeed, l1 minimization succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability.
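The phase transition described above can be observed numerically. The sketch below, which is an illustration rather than part of the talk, recovers a sparse vector by solving min ||x||_1 subject to Ax = b as a linear program (the standard split x = u - v with u, v ≥ 0), using Gaussian random measurements; the dimensions, sparsity level, and sample counts are illustrative choices, not values from the abstract.

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    """Solve min ||x||_1 subject to A x = b via linear programming.

    Split x = u - v with u, v >= 0 and minimize sum(u) + sum(v),
    which equals ||x||_1 at the optimum.
    """
    m, n = A.shape
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])           # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, s = 50, 3                            # ambient dimension, sparsity (illustrative)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Below the transition, recovery typically fails; above it, recovery
# succeeds with high probability.
for m in (6, 40):
    A = rng.standard_normal((m, n))     # m random Gaussian samples
    x_hat = l1_minimize(A, A @ x0)
    err = np.linalg.norm(x_hat - x0)
    print(f"m = {m:2d}: recovery error {err:.2e}")
```

Sweeping m over a finer grid and averaging the success indicator over many random draws traces out the empirical transition curve that the talk's theory predicts.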
This talk summarizes a rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to convex methods for denoising, to regularized linear inverse problems with random measurements, and to demixing problems under a random incoherence model.
Joint work with D. Amelunxen, M. Lotz, and M. B. McCoy.