Parametric comparison between sparsity-based and deep learning-based image reconstruction of super-resolution fluorescence microscopy
Authors: | Junjie Chen, Yun Chen
Affiliation: | 1. Department of Mechanical Engineering, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, USA; 2. Institute for NanoBioTechnology, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, USA; 3. Center for Cell Dynamics, Johns Hopkins University, 855 N Wolfe Street, Baltimore, MD 21205, USA
Abstract: | Sparsity-based and deep learning-based image reconstruction algorithms are two promising approaches to accelerate image acquisition in localization-based super-resolution microscopy, as they allow a higher density of fluorescing emitters to be imaged in a single frame. Despite their surging popularity, a comprehensive parametric study guiding the practical application of sparsity-based and deep learning-based image reconstruction algorithms has yet to be conducted. In this study, we examined the performance of sparsity- and deep learning-based algorithms in reconstructing super-resolution images from simulated fluorescence microscopy images. The simulated images were synthesized with varying levels of sparsity and connectivity. We found that the deep learning-based VDSR recovers images faster, with a higher recall rate and better localization accuracy, whereas the sparsity-based SPIDER recovers zero-valued pixels more truthfully. We also compared the two algorithms on images acquired in a real super-resolution experiment, obtaining results consistent with those from the evaluation on simulated images. We conclude that VDSR is preferable when accurate emitter localization is needed, while SPIDER is more suitable when estimating the number of emitters is critical.
| |
Keywords: | |