EE Seminars

ML and AI for Breaking Imaging Limits


Date:  Tue, March 05, 2019
Time:  10:00am - 11:00am
Location:  Holmes 389
Speaker:  Dr. Il Yong Chun

Abstract:

"Extreme" imaging collects extremely undersampled or inaccurate measurements, and provides significant benefits to diverse imaging applications. The examples in medical imaging include highly undersampled MRI to reduce imaging time, ultra-low-dose (or sparse-view) CT to reduce radiation dose and cancer risk from CT scanning, etc. However, obtaining an accurate image within a reasonable computing time is challenging in extreme imaging.

Since 2016, researchers have been applying deep regression convolutional neural networks (CNNs) to conquer extreme imaging problems. However, existing non-recurrent regression CNNs, e.g., FBPConvNet (Jin et al., '17), carry overfitting risks. This talk introduces my contributions to mitigating these overfitting risks via recurrent CNN architectures from a model-based image reconstruction (MBIR) perspective, and to building their theoretical foundations. The first half of the talk introduces learning and optimization theory for unsupervised training of autoencoding CNNs, i.e., convolutional analysis operator learning (CAOL). Specifically, it shows that 1) using many training samples improves CAOL, and 2) the new Block Proximal Gradient method using a Majorizer (BPG-M) achieves fast and convergent CAOL. In addition, it demonstrates the benefit of the MBIR model using autoencoding CNNs trained via CAOL for sparse-view CT that collects only 12.5% of the projection measurements, along with its generalization capability.
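As a rough illustration of the filter-and-sparsify structure behind CAOL (a toy sketch of my own; the two filters, the threshold alpha, and the 1-D signal are arbitrary placeholders rather than quantities from the talk), each analysis filter is convolved with the signal and the resulting coefficients are sparsified by hard-thresholding, which is the proximal step for an l0 penalty:

import numpy as np

def hard_threshold(c, alpha):
    # Proximal operator of alpha * ||.||_0: zero out coefficients below sqrt(2 * alpha).
    return np.where(np.abs(c) >= np.sqrt(2.0 * alpha), c, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(128)                    # toy 1-D "image"
filters = [np.array([1.0, -1.0]),               # finite-difference-like analysis filter
           np.array([1.0, -2.0, 1.0])]          # second-difference-like analysis filter
alpha = 0.05                                    # sparsity level (placeholder value)

# Analysis step: filter the signal, then sparsify the filter outputs.
sparse_codes = [hard_threshold(np.convolve(x, d, mode="same"), alpha) for d in filters]
print("nonzeros per filter:", [int(np.count_nonzero(z)) for z in sparse_codes])

In CAOL itself the filters are learned from training data (with BPG-M supplying the fast, convergent optimization); the sketch only shows the role such filters play once trained.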

The second half of the talk introduces the current state-of-the-art recurrent CNN architecture, Momentum-Net, which achieves the fastest and most accurate MBIR within a finite time and can solve a wide range of inverse imaging problems. Specifically, it 1) shows that Momentum-Net resolves practical and theoretical challenges of existing recurrent CNN architectures, e.g., BCD-Net (Chun & Fessler, '18), PnP (Bouman et al., '16-18), and TNRD (Chen & Pock, '17), and 2) introduces convergence theories for Momentum-Net and their connection to those of the BPG-M-based MBIR using learned convolutional autoencoders above. In addition, it demonstrates the benefit of the trained Momentum-Net for a light-field imaging system that aims to recover 81 subaperture images from only five detectors.
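The recurrent structure described here alternates a CNN-based image-refining step with a momentum-accelerated data-consistency (MBIR) update. The toy NumPy sketch below is my simplification of that pattern: the placeholder refiner, step size, and momentum schedule are assumptions standing in for the trained refining CNN and the tuned parameters of Momentum-Net.

import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16
A = rng.standard_normal((m, n)) / np.sqrt(m)     # toy linear imaging system
x_true = rng.standard_normal(n)
y = A @ x_true                                   # undersampled measurements

def refiner(v):
    # Placeholder for a trained image-refining CNN; here just mild shrinkage toward zero.
    return 0.9 * v

step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1.0)   # conservative step size (assumption)
x = np.zeros(n)
x_prev = np.zeros(n)
for k in range(100):
    momentum = k / (k + 3.0)                     # extrapolation weight (assumed schedule)
    x_bar = x + momentum * (x - x_prev)          # momentum / extrapolation step
    z = refiner(x_bar)                           # refining step (stand-in for the CNN)
    grad = A.T @ (A @ x_bar - y) + (x_bar - z)   # data fit plus proximity to the refined image
    x_prev, x = x, x_bar - step * grad           # gradient-based image (MBIR) update
print("data-fit residual:", np.linalg.norm(A @ x - y))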

Making extreme imaging practically feasible breaks new ground in providing safe and comfortable medical imaging to patients, and in high-resolution photography that is potentially useful for autonomous cars and augmented reality.


Bio:

Il Yong Chun received the B.Eng. degree in electrical engineering from Korea University, South Korea, and the Ph.D. degree in electrical and computer engineering from Purdue University, IN, USA, in 2009 and 2015, respectively. From 2015 to 2016, he was a Postdoctoral Research Associate in Mathematics at Purdue University, IN, USA. He is currently a Research Fellow in Electrical Engineering and Computer Science at the University of Michigan, MI, USA. His research interests include regression neural networks, machine learning with big data, nonconvex optimization, and compressed sensing, applied to imaging science and neuroscience.

