Sparsity-based methods such as compressed sensing have led to significant technological breakthroughs in signal processing, compression, and medical imaging. Deep generative models such as GANs, VAEs, and diffusion models are data-driven signal models that are showing impressive performance. We will survey a framework in which pre-trained generative models serve as priors for solving inverse problems, such as denoising, filling in missing data, and recovery from linear projections, in an unsupervised way. We generalize compressed sensing theory beyond sparsity, extending restricted isometry conditions to the sets created by deep generative models. We will also discuss applications to accelerating MRI, fairness in imaging, and numerous open problems.
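The core recovery idea the abstract describes can be sketched numerically: given linear measurements y = A x of an unknown signal x, search the latent space of a generator G for a code z whose image G(z) matches the measurements. The sketch below is a minimal illustration, with a toy linear map standing in for a pre-trained generator (a real application would use a trained GAN or VAE decoder, and the gradient would come from backpropagation rather than a closed form); all names and dimensions here are illustrative assumptions, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained generator G: R^k -> R^n.
# Here G is linear (G(z) = W z); a real system would use a trained decoder.
k, n, m = 5, 50, 20                      # latent dim, signal dim, measurements
W = rng.standard_normal((n, k))

def G(z):
    return W @ z

# Ground-truth signal in the range of G; compressed measurements y = A x.
z_true = rng.standard_normal(k)
x_true = G(z_true)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true

# Recover by minimizing ||A G(z) - y||^2 over the latent code z
# with gradient descent; step size set from the spectral norm for stability.
lr = 1.0 / np.linalg.norm(A @ W, 2) ** 2
z = np.zeros(k)
for _ in range(1000):
    residual = A @ G(z) - y
    z -= lr * (W.T @ (A.T @ residual))   # chain rule through the linear G

x_hat = G(z)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Note that only m = 20 linear measurements of the 50-dimensional signal suffice here, because the signal lies in the low-dimensional range of the generator; this is the sense in which the generative prior replaces the sparsity assumption of classical compressed sensing.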
Alex Dimakis is a Professor at UT Austin and the co-director of the National AI Institute on the Foundations of Machine Learning. He received his Ph.D. from UC Berkeley and his Diploma degree from the National Technical University of Athens, Greece. He has received several awards, including the James Massey Award, an NSF CAREER Award, a Google research award, the UC Berkeley Eli Jury dissertation award, and several best-paper awards. He has served as an Associate Editor for several journals, including the IEEE Transactions on Information Theory, and as an Area Chair for machine learning conferences (NeurIPS, ICML, AAAI). His research interests include information theory and machine learning. He is an IEEE Fellow for contributions to distributed coding and learning.