Il Yong Chun (chun.ilyong at gmail)

Research Fellow in Electrical Engineering and Computer Science
under the supervision of Professor Jeffrey A. Fessler
Research interests in machine learning & artificial intelligence, compressed sensing,
nonconvex optimization, adaptive imaging,
"extreme" computational imaging, and translational neuroimaging

Find me on Scholar, LinkedIn.


Recurrent Regression CNN with (Un)supervised Training: Theory and Application

Training regression convolutional neural networks (CNNs) and applying the trained models is a rapidly growing trend in a wide range of applications in computational imaging, signal/image processing, and computer vision. For inverse imaging problems, researchers often aim to find a regression CNN that maps "noisy" images to "clean" ones. Existing non-recurrent regression CNNs (trained in a supervised way) often suffer from overfitting [6]. I constructed recurrent CNNs (each layer of which includes feedback steps) [1, 2, 4, 5] that can resolve the overfitting or underfitting problem by incorporating imaging physics: specifically, (back)projecting the output of the image-mapping CNN onto the measurement domain at each layer.
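The per-layer feedback idea can be sketched as follows. This is an illustrative simplification, not the exact architecture of [1, 2, 4, 5]: the function names are hypothetical, and the feedback step is a plain data-fit gradient step using the forward model A.

```python
import numpy as np

def recurrent_layer(x, y, A, denoise, step):
    """One layer of a recurrent regression network (illustrative sketch).

    x       : current image estimate
    y       : measured data
    A       : forward system matrix modeling the imaging physics
    denoise : image-mapping module (in practice, a trained CNN)
    """
    z = denoise(x)  # image-domain mapping
    # Feedback step: (back)project the mapped image onto the measurement
    # domain so the refined estimate stays consistent with the data y.
    return z - step * (A.T @ (A @ z - y))

# Toy usage: random linear system, identity "denoiser" for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20)) / np.sqrt(200)
x_true = rng.standard_normal(20)
y = A @ x_true
x = np.zeros(20)
for _ in range(100):
    x = recurrent_layer(x, y, A, denoise=lambda v: v, step=1.0)
```

With the identity mapping, stacking such layers reduces to gradient descent on the data-fit term; a trained denoiser replaces the identity to inject the learned image prior at each layer.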

I am a pioneer in developing fast and convergence-guaranteed unsupervised learning frameworks that train regression CNNs; examples include convolutional analysis operator learning (CAOL [1]) and convolutional dictionary learning [2]. In particular, CAOL trains autoencoding CNNs that generate hierarchical features. For fast and convergent training, I developed a block proximal gradient method using a majorizer, which sheds new light on tight majorization for solving block multi-(non)convex problems [1, 2]. In addition, I proved that big data improves filter estimates in CAOL [3]. The regression CNNs trained by these unsupervised approaches improved image recovery performance in some extreme imaging applications, e.g., sparse-view CT [1, 6] and denoising of low-SNR images [2]. Nonetheless, the corresponding iterative image recovery algorithm needs several hundred iterations to converge, detracting from its practical use.
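A minimal numerical sketch of one such block alternation is given below, for a 1-D signal and a single filter with a unit-norm constraint. This is a simplified stand-in for the CAOL formulation in [1], not the actual algorithm: the sparse-code block is minimized exactly by hard thresholding, and the filter block takes one gradient step under a tight Lipschitz majorizer before projection onto its constraint set.

```python
import numpy as np

def hard_threshold(z, alpha):
    """Proximal operator of alpha*||.||_0: zero out low-energy entries."""
    out = z.copy()
    out[z ** 2 < 2 * alpha] = 0.0
    return out

def caol_step(x, d, alpha):
    """One block alternation for a 1-D, single-filter CAOL-style sketch:
    minimize 0.5*||conv(x, d) - z||^2 + alpha*||z||_0 over the sparse
    code z (exactly, via hard thresholding), then take a majorized
    gradient step on the filter d and project it back onto the unit
    sphere (the filter constraint set)."""
    k = len(d)
    z = hard_threshold(np.convolve(x, d, mode="valid"), alpha)
    # Matrix form of the valid convolution: X @ d == conv(x, d).
    X = np.array([x[i:i + k][::-1] for i in range(len(x) - k + 1)])
    L = np.linalg.norm(X, 2) ** 2     # tight Lipschitz majorizer
    d = d - X.T @ (X @ d - z) / L     # majorized gradient step
    return d / np.linalg.norm(d), z   # unit-sphere projection

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
d = rng.standard_normal(5)
d /= np.linalg.norm(d)
for _ in range(20):
    d, z = caol_step(x, d, alpha=0.1)
```

The tightness of the majorizer L matters in practice: a looser majorizer still converges but takes smaller effective steps, which is the acceleration question studied in [1, 2].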

In addition, I am a leading expert in developing fast and convergent recurrent regression CNNs, e.g., BCD-Net [5] and Momentum-Net [4], that are trained in a supervised way to give the "best" image estimate at each layer and mitigate the aforementioned issue in the unsupervised frameworks. BCD-Net consists of image-mapping and reconstruction modules at each layer; it achieved fast and accurate iterative image recovery for highly undersampled MRI and denoising of low-SNR images [5], while guaranteeing sequence convergence. However, for other imaging modalities, e.g., low-count positron emission tomography [7], BCD-Net needs an iterative reconstruction module at each layer, which increases total image recovery time and breaks the convergence guarantee. To overcome these limitations of BCD-Net, I proposed Momentum-Net, which consists of extrapolation, image-mapping, and reconstruction modules at each layer. Momentum-Net achieved fast and accurate iterative image recovery for sparse-view CT and light-field photography using limited focal-stack data [4]. Importantly, it guarantees convergence to a fixed point for a wide range of imaging problems.
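The three-module layer structure can be sketched as follows. This is a loose illustration with hypothetical names, not the Momentum-Net architecture of [4]: the reconstruction module here is a single closed-form quadratic refinement, standing in for the non-iterative reconstruction modules that avoid inner iterations within a layer.

```python
import numpy as np

def momentum_net_layer(x, x_prev, y, A, denoise, rho, beta):
    """One Momentum-Net-style layer (illustrative sketch)."""
    # 1) Extrapolation module: momentum over the layer iterates.
    x_ext = x + beta * (x - x_prev)
    # 2) Image-mapping module: trained denoiser applied to the extrapolation.
    z = denoise(x_ext)
    # 3) Reconstruction module: one closed-form quadratic refinement that
    #    balances data fidelity ||A u - y||^2 against proximity to z,
    #    with no inner iterations within the layer.
    n = A.shape[1]
    x_next = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ y + rho * z)
    return x_next, x

# Toy usage: identity denoiser, noiseless random linear system.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
x_true = rng.standard_normal(10)
y = A @ x_true
x, x_prev = np.zeros(10), np.zeros(10)
for _ in range(30):
    x, x_prev = momentum_net_layer(x, x_prev, y, A, denoise=lambda v: v,
                                   rho=1.0, beta=0.3)
```

For modalities where A.T @ A has no convenient structure, solving this refinement exactly would itself require inner iterations, which is the BCD-Net limitation that the extrapolation-plus-single-refinement design is meant to sidestep.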

References
[1] Il Yong Chun and Jeffrey A. Fessler, "Convolutional analysis operator learning: Acceleration, convergence, application, and neural networks," submitted, Jan. 2018.
[Online] Available: http://arxiv.org/abs/1802.05584
[2] Il Yong Chun and Jeffrey A. Fessler, "Convolutional dictionary learning: Acceleration and convergence," IEEE Trans. Image Process., vol. 27, no. 4, Apr. 2018.
[3] Il Yong Chun, David Hong, Ben Adcock, and Jeffrey A. Fessler, "Convolutional analysis operator learning: Dependence on training data and compressed sensing recovery guarantees," preprint, Jul. 2018.
[4] Il Yong Chun, Hongki Lim*, Zhengyu Huang*, and Jeffrey A. Fessler, "Fast and convergent iterative signal recovery using trained convolutional neural networks," in Proc. Allerton (to appear), Oct. 2018.
[5] Il Yong Chun and Jeffrey A. Fessler, "Deep BCD-Net using identical encoding-decoding CNN structures for iterative image recovery," in Proc. IEEE IVMSP Workshop, Apr. 2018.
[Online] Available: http://arxiv.org/abs/1802.07129
[6] Xuehang Zheng*, Il Yong Chun*, Zhipeng Li, Yong Long, and Jeffrey A. Fessler, "Sparse-view X-ray CT reconstruction using ℓ1 prior with learned transform," submitted, Oct. 2017.
[Online] Available: http://arxiv.org/abs/1711.00905
[7] Hongki Lim, Yuni K. Dewaraja, Jeffrey A. Fessler, and Il Yong Chun, "Application of trained deep BCD-Net to iterative low-count PET image reconstruction," in Proc. IEEE NSS/MIC (to appear), Nov. 2018.
(The asterisks (*) indicate equal contributions.)

Go to top

Compressed Sensing and Parallel Acquisition: Theory and Application

Parallel acquisition systems are employed successfully in a variety of sensing/imaging applications (e.g., parallel MRI, multi-view imaging, wireless sensor networks, light-field imaging with multiple focal stacks, synthetic aperture radar imaging, and derivative sampling in seismic imaging) when a single sensor cannot provide enough measurements for high-quality signal recovery. Compressed sensing (CS), a random subsampling theory that exploits signal sparsity, has been used to establish the theoretical benefits of such systems: it provides recovery guarantees for which, subject to appropriate conditions, the number of measurements required per sensor decreases linearly with the total number of sensors [8, 9]. I am a pioneer in establishing the theoretical benefits of the CS parallel acquisition architecture [8, 9, 10], developing CS-based imaging fundamentals, and applying these theories in practical applications, e.g., parallel MRI (MRI using multiple receive coils [10]) and light-field photography with multiple focal stacks [11].
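A toy numerical illustration of the multi-sensor model y_c = A_c x: each of C sensors takes only a small number m of random measurements of the same sparse signal, and recovery pools all C*m measurements in one stacked system. The dimensions, the Gaussian sensing matrices, and the simple iterative soft-thresholding solver are illustrative choices, not the theorem conditions of [8, 9].

```python
import numpy as np

def ista(A, y, lam, n_iter):
    """Iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L    # gradient step on the data fit
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

# C sensors, m measurements each, of one s-sparse length-n signal.
rng = np.random.default_rng(0)
n, s, C, m = 128, 5, 8, 12
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
A = np.vstack([rng.standard_normal((m, n)) for _ in range(C)]) / np.sqrt(C * m)
y = A @ x_true                           # stacked multi-sensor data
x_hat = ista(A, y, lam=0.01, n_iter=2000)
```

Here no single sensor's m = 12 measurements would suffice on their own; pooling the 8 sensors yields 96 measurements, comfortably above what sparse recovery of an s = 5 signal typically requires.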

References
[8] Il Yong Chun and Ben Adcock, "Compressed sensing and parallel acquisition," IEEE Trans. Inf. Theory, vol. 63, no. 8, pp. 4860--4882, May 2017.
[9] Il Yong Chun and Ben Adcock, "Uniform recovery from subgaussian multi-sensor measurements," Appl. Comput. Harmon. Anal., Nov. 2018. [Online] Available: http://arxiv.org/abs/1610.05758
[10] Il Yong Chun, Ben Adcock, and Thomas M. Talavage, "Efficient compressed sensing SENSE pMRI reconstruction with joint sparsity promotion," IEEE Trans. Med. Imag., vol. 35, no. 1, pp. 354--368, Jan. 2016.
[11] Cameron J. Blocker*, Il Yong Chun*, and Ben Adcock, "Low-rank plus sparse tensor models for light-field reconstruction from focal stack data," in Proc. IEEE IVMSP Workshop, Apr. 2018.
(The asterisks (*) indicate equal contributions.)

Go to top

Adaptive Computational Imaging: Theory and Application

Most imaging devices have complicated imaging physics and suffer from various types of noise; this is the main reason that mathematical analyses of their image recovery performance are largely absent. When undersampling schemes are involved, the performance analysis becomes an even harder mathematical problem. By exploiting estimation-theoretic measures from signal processing (e.g., mean squared error (MSE) and SNR) and stochastic modeling, my third research interest is to adaptively control imaging techniques to improve the quality of reconstructed images [12, 13]. For example, I established the imaging fundamentals of (high-field) MRI using multiple transmit/receive coils and their relation to MSE, and proposed a new excitation pattern design that further reduces MSE while managing the specific absorption rate [12]. In particular, by adaptively considering the spatial information of the transmit and receive coils and the expected aliasing patterns, the proposed excitation pattern not only reduces the error variances but also suppresses the aliasing artifacts caused by extreme (i.e., 8-fold) scan acceleration.
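The MSE criterion underlying this kind of design follows from standard linear estimation: the error covariance of least-squares SENSE unfolding is the noise variance times the inverse Gram matrix of the coil sensitivities. The sketch below is loosely inspired by that idea, with an entirely hypothetical candidate-weighting loop; it is not the excitation pattern design method of [12].

```python
import numpy as np

def sense_mse(S, noise_var=1.0):
    """MSE of least-squares SENSE unfolding (illustrative sketch).

    S : (n_coils, n_aliased) matrix of coil sensitivities at the pixel
        locations folded together by an R-fold undersampled acquisition.
    The LS unfolding error covariance is noise_var * inv(S^H S); its
    trace is the MSE over one fold group (cf. the g-factor in pMRI).
    """
    return noise_var * np.trace(np.linalg.inv(S.conj().T @ S)).real

# Toy design loop: among hypothetical candidate phase weightings w, pick
# the one minimizing the MSE of the weighted sensitivity system S * w.
rng = np.random.default_rng(0)
S = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
candidates = [np.exp(1j * rng.uniform(0, 2 * np.pi, 4)) for _ in range(20)]
best = min(candidates, key=lambda w: sense_mse(S * w))
```

The point of the sketch is only that the design criterion is an explicit, computable function of the acquisition parameters, so it can be optimized before (or adapted during) the scan.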

References
[12] Il Yong Chun, Song Noh, David J. Love, Thomas M. Talavage, Stephen Beckley, and Sherman J. Kisner, "Mean squared error (MSE)-based excitation pattern design for parallel transmit and receive SENSE MRI image reconstruction," IEEE Trans. Comput. Imag., vol. 2, no. 4, pp. 424--439, Dec. 2016.
[13] Il Yong Chun, Ben Adcock, and Thomas M. Talavage, "Non-convex compressed sensing CT reconstruction based on tensor discrete Fourier slice theorem," in Proc. IEEE EMBC, Aug. 2014.

Go to top

Neuroimaging and Neuroscience

Interest has grown rapidly in the neuroscience community in evaluating brain changes caused by repetitive sub-concussive hits to the head. Using diffusion tensor MRI, I evaluated longitudinal white matter changes in high-school football players and examined how these changes may be linked to an athlete's history of accumulated head collision events during practices and games [14, 15].

On the other hand, appropriate statistical image analysis frameworks (e.g., hypothesis testing) to reliably detect subtle changes in athletes who experienced repetitive sub-concussive head blows are largely absent. I proposed a more powerful randomized hypothesis testing method that exploits both completely and incompletely paired data [15]. The method detects more significantly deviated regions in the sub-concussed brains, thereby providing stronger evidence that head impacts commonly occurring during contact sports have the potential to cause neurological injury, even when those impacts do not result in visible symptoms of neurological dysfunction.
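A minimal sketch of such a randomization test is given below. The pooled test statistic and variable names are illustrative assumptions, not the exact statistic of [15]: under the null of no pre/post change, signs of the complete-pair differences are flipped and the unpaired (incompletely paired) observations are relabeled.

```python
import numpy as np

def randomization_test(pre_p, post_p, pre_u, post_u, n_perm=2000, seed=0):
    """Randomization test pooling completely paired (pre_p, post_p) and
    incompletely paired (pre_u, post_u) data (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    d = post_p - pre_p                        # complete-pair differences
    pooled = np.concatenate([pre_u, post_u])  # incompletely paired data
    n_pre = len(pre_u)

    def statistic(diffs, pool):
        return diffs.mean() + (pool[n_pre:].mean() - pool[:n_pre].mean())

    observed = abs(statistic(d, pooled))
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=len(d))  # flip pair order
        if abs(statistic(signs * d, rng.permutation(pooled))) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)         # two-sided permutation p-value

# Toy usage: a clear pre-to-post increase in both paired and unpaired data.
rng = np.random.default_rng(1)
pre_p = rng.normal(0.0, 1.0, 20)
post_p = pre_p + 2.0 + rng.normal(0.0, 0.1, 20)
pre_u = rng.normal(0.0, 1.0, 15)
post_u = rng.normal(2.0, 1.0, 15)
p_value = randomization_test(pre_p, post_p, pre_u, post_u)
```

Pooling both data types in one statistic is what lends the test its extra power: subjects with only a pre- or post-season scan still contribute evidence instead of being discarded.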

References
[14] Il Yong Chun, Xianglun Mao, Eric L. Breedlove, Larry J. Leverenz, Eric A. Nauman, and Thomas M. Talavage, "DTI detection of longitudinal WM abnormalities due to accumulated head impacts," Dev. Neuropsychol., vol. 40, no. 2, pp. 92--97, May 2015.
[15] Sumra Bari, Il Yong Chun, Larry J. Leverenz, Eric A. Nauman, and Thomas M. Talavage, "DTI detection of WM abnormalities using randomization test with complete and incomplete pairs," in Proc. Org. for Hum. Brain Mapp. (OHBM), Jun. 2015.

Go to top


To see the complete list of publications, please click here.