Advances in Inverse Problems With (and Without) Data-Driven Priors
Ill-posed inverse problems appear in many signal and image processing applications, such as deblurring, super-resolution, and compressed sensing. Traditional reconstruction strategies exhibit limited performance due to the difficulty of mathematically modeling complex signals, such as natural images, which are the subject of this talk. Many recent works have mitigated this difficulty by (exhaustively) training deep neural networks to learn the inverse mappings of given observation models. However, these methods suffer a significant performance drop when the observation model used in training is inexact. In this talk, I focus on a promising line of work that uses deep learning models, such as CNN denoisers and GANs, to handle only the signal's prior in inverse problems, and is therefore fully flexible with respect to the observation model. Our contributions include a back-projection (BP) fidelity term, an alternative to the traditional least squares objective. Using the simple proximal gradient method with the BP term and off-the-shelf denoisers or GANs (a scheme that we term IDBP) gives excellent results, requires minimal parameter tuning, and is accompanied by theoretical motivation. I will also briefly discuss our image-adaptive approach, where we tune CNN denoisers or GANs at test time to specialize them on the image at hand. This approach yields a significant performance boost, especially for GANs, which often suffer from limited representation capabilities (also known in the literature as mode collapse). Another contribution that will be presented is inspired by the literature on generalized sampling. In this work, we mitigate the poor robustness of state-of-the-art CNN super-resolvers to mismatch in the downscaling kernel via a designated correction filter that modifies a given low-resolution image to mimic one obtained with another kernel. I will conclude the talk with future directions in this line of research and beyond.
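As a rough illustration of the scheme described above, the sketch below runs proximal gradient steps on a BP fidelity term f(x) = ½‖H⁺(Hx − y)‖² (whose gradient simplifies to H⁺(Hx − y)) in a toy compressed-sensing setup. The denoiser slot here is a simple soft-thresholding operator standing in for the off-the-shelf denoisers or GANs mentioned in the abstract; the problem sizes, sparse test signal, threshold, and iteration count are all illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed-sensing setup (hypothetical sizes): m < n linear measurements.
n, m = 64, 32
H = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, 5, replace=False)
x_true[idx] = np.array([1.0, -1.2, 0.8, 1.5, -0.7])  # sparse test signal
y = H @ x_true

H_pinv = np.linalg.pinv(H)  # H^+ (pseudoinverse), used by the BP fidelity term

def denoise(v, tau=0.05):
    # Placeholder prior: soft-thresholding, a sparsity-promoting "denoiser".
    # In the full scheme this slot would hold an off-the-shelf CNN denoiser.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Proximal gradient on f(x) = 0.5 * ||H^+ (H x - y)||^2 with unit step size:
# since H^+ H is an orthogonal projection, grad f(x) = H^+ (H x - y),
# so each iteration is x <- denoise(H^+ y + (I - H^+ H) x).
x = np.zeros(n)
for _ in range(200):
    x = denoise(x - H_pinv @ (H @ x - y))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

With a well-conditioned random H and a sufficiently sparse signal, the iterates recover x_true up to the small bias introduced by the thresholding; swapping the fidelity gradient for the least squares one, Hᵀ(Hx − y), gives the standard ISTA-style baseline to compare against.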
Bio: Tom Tirer is a post-doctoral researcher at NYU. He received his Ph.D. degree from the School of Electrical Engineering at Tel Aviv University in 2020, where he also served as a postdoctoral researcher during 2021. His research interests are in the (often intersecting) fields of signal and image processing, machine learning, and optimization. Alongside his academic endeavors, he has also worked for several years in industry in various engineering, algorithms, and research roles.
Last updated: 11/01/2022