Open Access
27 February 2021

Accelerating 3D single-molecule localization microscopy using blind sparse inpainting
Abstract

Significance: Single-molecule localization-based super-resolution microscopy has enabled the imaging of microscopic objects beyond the diffraction limit. However, this technique requires imaging an extremely large number of frames of a biological sample to generate a super-resolution image, and hence a long acquisition time. Additionally, processing such a large image sequence leads to a long data processing time. Therefore, accelerating image acquisition and processing in single-molecule localization microscopy (SMLM) has been of perennial interest.

Aim: To accelerate three-dimensional (3D) SMLM imaging by leveraging a computational approach without compromising the resolution.

Approach: We used blind sparse inpainting to reconstruct high-density 3D images from low-density ones. The low-density images are generated using far fewer frames than usually needed, thus requiring shorter acquisition and processing times. Therefore, our technique accelerates 3D SMLM without changing the existing standard SMLM hardware or labeling protocol.

Results: The performance of blind sparse inpainting was evaluated on both simulated and experimental datasets. Superior reconstruction of 3D SMLM images was achieved using up to 10-fold fewer frames for simulated data and up to 50-fold fewer frames for experimental data.

Conclusions: We demonstrate the feasibility of fast 3D SMLM imaging leveraging a computational approach to reduce the number of acquired frames. We anticipate our technique will enable future real-time live-cell 3D imaging to investigate complex nanoscopic biological structures and their functions.

1. Introduction

Single-molecule localization microscopy (SMLM), such as (direct) stochastic optical reconstruction microscopy [(d)STORM],1,2 (fluorescence) photoactivated localization microscopy [(f)PALM],3,4 and other variants,5–8 has extended the imaging resolution of conventional optical fluorescence microscopy beyond the diffraction limit (∼250 nm). In these methods, a random and sparse subset of fluorophores in the sample is imaged in each diffraction-limited frame, and a large number of such frames are acquired sequentially. The detected individual fluorophores in each frame are then precisely localized, and finally, all the localization positions from these frames are assembled to generate the super-resolution image. Three-dimensional (3D) SMLM9–16 requires additional axial (z axis) information, which is obtained by using a z-dependent point spread function (PSF).17 Optically engineered PSFs such as astigmatic,9 double-helix,18 biplane,19 interferometric,20 Airy-beam,21 and tetrapod12 PSFs are commonly used in existing 3D SMLM imaging to encode the axial information of blinking fluorescent molecules. PSF shapes are generally engineered by introducing optical elements such as a cylindrical lens,9 phase mask,22 or deformable mirror15 into the imaging pathway of the microscope. In both 2D and 3D SMLM imaging, a large number of sequential diffraction-limited frames (typically >10^4) are needed to achieve sufficiently dense localizations to reveal a biological sample's details, implying a long acquisition time. Such slow imaging makes potential live-cell and high-throughput imaging challenging. In practice, acquiring such long frame sequences also degrades image quality because of photobleaching of the dyes. Furthermore, processing such a large number of image frames requires considerable time.23 Therefore, a faster SMLM technique is always desirable.

Several approaches have been reported to accelerate the imaging and data processing of SMLM. One is to increase the fluorophore blinking kinetics using a high-power laser and to use a high-speed camera (with a higher frame rate) to capture those fast-blinking single-molecule events.10,24 Huang et al.25 achieved video-rate SMLM using scientific complementary metal-oxide-semiconductor (sCMOS) cameras. These acceleration methods provide faster imaging at the cost of image-quality degradation.10,26 Specifically, high excitation laser intensity and fast detection decrease the photon count per localization, deteriorating localization precision and resolution.26 Another approach is to increase the number of active fluorophores per frame.27,28 However, high activation density causes fluorescent spots to overlap in the diffraction-limited images, making it more difficult to precisely localize the fluorophores.28 Despite this challenge, most existing techniques29–31 use a higher molecular density per frame to increase the imaging speed. Recently, deep learning has been used to accelerate SMLM methods. Typically, deep learning is used to precisely localize the 2D or 3D positions (or to separate colors in multicolor imaging) of blinking single-molecule PSFs in each frame.32–39 These methods ultimately accelerate the data processing of SMLM but still require a large number of frames. Further, deep learning has been leveraged by Ouyang et al.40 to accelerate 2D SMLM and by Gaire et al.41 to accelerate 2D multicolor spectroscopic SMLM using very few frames. However, a limitation of deep learning methods is that they require a large quantity of training data with similar structures.

Here, we present a computational approach to accelerate 3D SMLM imaging. The experimental setup, data acquisition procedure, and localization methods remain the same as in standard 3D SMLM, except that very few diffraction-limited frames are acquired, which reduces the acquisition time and ultimately accelerates imaging. The data processing time is also reduced accordingly. In standard 3D SMLM, the final image rendered from very few frames is sparse and provides too little information to extract the biological sample's fine structures. Our approach can recover those unresolved structures in the sparse, low-emitter-density image and reconstruct a high-quality 3D super-resolution image. High-density estimation for 2D SMLM imaging using blind sparse inpainting has been previously reported in detail.42 Here, we extend it to accelerate 3D SMLM imaging by introducing a sparsifying transform appropriate for the 3D structure. In our previous work, high-density 2D SMLM images were reconstructed by solving an ℓ1 minimization problem using the alternating direction method of multipliers (ADMM)43 with the curvelet transform44 as the sparsifying transform. Here, we also use ADMM but with the curvelet transform combined with an additional total variation (TV) regularization along the depth direction. We confirm the efficacy of the proposed algorithm using both simulated and experimental 3D SMLM datasets. The preliminary results of this article were reported in Ref. 45. This expanded article includes additional simulation, experimental, and quantitative evaluation results and their analysis.

2. Reconstruction Approach

In standard 3D SMLM, a large number of diffraction-limited frames (say, N frames) are imaged with a total acquisition time of NΔt, where Δt is the time to acquire a single frame (typically 10 to 30 ms), and processed to produce a high-density 3D super-resolution image. A smaller number of frames (say, Q frames with Q ≪ N) with a much shorter acquisition time of QΔt will generate a low-density 3D image (Fig. 1). Our goal is to reduce the acquisition time by reconstructing the high-density 3D super-resolution image from a low-density 3D image acquired using fewer frames, which is sparse and incomplete. For reconstruction, we need to restore the unknown fluorophore localization positions based on the available fluorophore localization points in the low-density 3D image. Thus, the restoration problem can be formulated as an image inpainting task aiming to restore the missing regions of a corrupted image and reconstruct the original image.

Fig. 1

Comparison of the blind sparse inpainting method with the existing 3D SMLM method. The 3D super-resolution image in standard SMLM is obtained by imaging and processing a large number of diffraction-limited single-molecule frames (say, N frames). The proposed method uses very few diffraction-limited frames (say, Q frames with Q ≪ N), which results in a low-density 3D image. The high-density 3D image is then reconstructed using blind sparse inpainting.


Mathematically, the relationship between the vectorized low-density 3D image x_Q from the localized emitters acquired in Q frames and the desired high-density 3D image vector x can be modeled as

Eq. (1)

x_Q = P_Q x,
where P_Q is a diagonal matrix whose elements are 1 for acquired locations and 0 for missing locations. To solve Eq. (1), we first need to estimate the unknown measurement matrix P_Q (hence "blind") based on the low-density 3D image and then reconstruct x from x_Q. Estimating P_Q is challenging because a zero-valued pixel in x_Q can be either background without any fluorophore or a location whose fluorophore was not detected in the acquired Q frames. The locations of fluorescent molecules captured in the Q frames are determined by hard-thresholding the low-density image x_Q.
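In a discretized implementation, this hard-thresholding step amounts to building a binary mask (the diagonal of P_Q) from the rendered low-density image. A minimal NumPy sketch, where the threshold value and the 2×2 toy image are illustrative assumptions rather than values from our implementation:

```python
import numpy as np

def estimate_mask(x_q, threshold=0.0):
    # Diagonal of P_Q by hard-thresholding the low-density image:
    # pixels above the threshold are treated as acquired (1),
    # the rest as missing (0).
    return (x_q > threshold).astype(x_q.dtype)

# Toy 2x2 low-density image: two detected emitters, two empty pixels.
x_q = np.array([[0.0, 0.8],
                [0.0, 0.3]])
p_q = estimate_mask(x_q, threshold=0.1)   # -> [[0., 1.], [0., 1.]]
```

Applying the mask to x_Q leaves the acquired values unchanged, which is exactly the consistency expressed by Eq. (1).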

After P_Q is obtained, x can be estimated from x_Q, which is still nontrivial because there are infinitely many possible solutions. Prior information has to be exploited as a constraint to obtain a unique reconstruction with good fidelity to the true structures. Here, we reconstruct the desired high-density 3D image by employing sparseness as an image prior. Specifically, the high-density 3D image is reconstructed by solving the following unconstrained minimization problem

Eq. (2)

min_x λ1‖P_Q x − x_Q‖_2^2 + ‖Φx‖_1 + λ2 TV(x),
where ‖·‖_1 and ‖·‖_2 represent the ℓ1 and ℓ2 norms, respectively; λ1 and λ2 are the weight parameter and regularization parameter, respectively; Φ represents a sparsifying transform; and TV(·) is a total variation regularization. The first term enforces data consistency, the second term enforces sparsity in the transform domain, and the third term promotes piecewise smoothness of the image.

The choice of sparsifying transform depends on the image content and plays a crucial role in image reconstruction. Many biological structures, such as microtubules, have an anisotropic, curve-like nature. Therefore, we use the curvelet transform as the sparsifying transform in the lateral plane. It provides sparsity, excellent directional sensitivity, and anisotropy; thus, the curvelet transform can efficiently characterize anisotropic features such as curves, edges, and arcs.46 The discrete curvelet transform was implemented using CurveLab47 with the wrapping approach, which includes four steps: 2D fast Fourier transform (FFT), windowing, frequency wrapping, and 2D inverse FFT.44 TV regularization is used in the depth direction only and is defined as TV(x) = ‖Gx‖_1, where G is the first-order finite-difference operator along the depth direction and ‖·‖_1 denotes the ℓ1 norm. More detail about the optimization algorithm is given in the next section.
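For a z-stack stored as an array of slices, the depth-direction TV term is simply the ℓ1 norm of the first-order forward differences between adjacent slices. A minimal NumPy sketch (the (nz, ny, nx) slice ordering is an assumption of this illustration):

```python
import numpy as np

def tv_depth(x):
    # Anisotropic TV along the depth (z) axis only: the l1 norm of
    # first-order forward differences between adjacent z-slices.
    g = np.diff(x, axis=0)        # G x: finite differences along depth
    return np.abs(g).sum()        # ||G x||_1

stack = np.zeros((3, 2, 2))
stack[1] = 1.0                    # one bright middle slice
tv = tv_depth(stack)              # differences of |+1| and |-1| at 4 pixels each
```

A stack that is constant along depth has zero TV, which is why this term promotes axial piecewise smoothness.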

3. Optimization Algorithm

The convex optimization problem of Eq. (2) is a standard ℓ1 minimization problem. It can be solved using efficient approaches such as variable splitting and the augmented Lagrangian method (ALM).48,49 Here, we use a specific variant of ALM called ADMM.43 We first introduce the auxiliary variables d = Φx and e = Gx in Eq. (2) to decouple the ℓ1 terms from the other parts and obtain the following equivalent form

Eq. (3)

min_x λ1‖P_Q x − x_Q‖_2^2 + ‖d‖_1 + λ2‖e‖_1   s.t.  Φx = d and Gx = e.

The scaled form of the augmented Lagrangian function (ALF) of Eq. (3) can be written as

Eq. (4)

L(x, d, e, u, v) = λ1‖P_Q x − x_Q‖_2^2 + ‖d‖_1 + λ2‖e‖_1 + (ρ/2)‖Φx − d + u‖_2^2 + (μ/2)‖Gx − e + v‖_2^2,
where u and v are the Lagrangian multipliers (scaled dual variables), and ρ and μ are the penalty parameters. The ADMM iteration scheme is

Eq. (5)

x^(k+1) = argmin_x λ1‖P_Q x − x_Q‖_2^2 + (ρ/2)‖Φx − d^k + u^k‖_2^2 + (μ/2)‖Gx − e^k + v^k‖_2^2,

Eq. (6)

d^(k+1) = argmin_d ‖d‖_1 + (ρ/2)‖Φx^(k+1) − d + u^k‖_2^2,

Eq. (7)

e^(k+1) = argmin_e λ2‖e‖_1 + (μ/2)‖Gx^(k+1) − e + v^k‖_2^2,

Eq. (8)

u^(k+1) = u^k + Φx^(k+1) − d^(k+1),

Eq. (9)

v^(k+1) = v^k + Gx^(k+1) − e^(k+1).

The x-subproblem has a closed-form solution

Eq. (10)

x^(k+1) = B(2λ1 P_Q^T x_Q + ρΦ^H(d^k − u^k) + μG^T(e^k − v^k)),
where B = (2λ1 P_Q^T P_Q + ρI + μI)^(−1). The superscripts H and T denote the Hermitian transpose and the transpose of a matrix, respectively. The optimal values of the d- and e-subproblems are obtained through the element-wise shrinkage operator48

Eq. (11)

d^(k+1) = shrink(Φx^(k+1) + u^k, 1/ρ),

Eq. (12)

e^(k+1) = shrink(Gx^(k+1) + v^k, λ2/μ),
where shrink(·) is defined as

Eq. (13)

shrink(x, γ) = (x/|x|) max(|x| − γ, 0).
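In code, this shrinkage (soft-thresholding) operator is a one-liner, with the convention shrink(0, γ) = 0. A NumPy sketch:

```python
import numpy as np

def shrink(x, gamma):
    # shrink(x, gamma) = x/|x| * max(|x| - gamma, 0), with shrink(0, .) = 0;
    # np.sign supplies the x/|x| factor and handles zero safely.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

out = shrink(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0)
# Magnitudes above gamma are pulled toward zero by gamma; the rest vanish.
```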

The algorithm terminates when the predefined maximum number of iterations is reached. The proposed ADMM optimization algorithm for blind sparse inpainting is summarized in Algorithm 1. The algorithm was implemented in MATLAB® R2018a.

Algorithm 1

Input: x_Q — low-density 3D image.
   λ1, λ2 — weight and TV regularization parameters.
   ρ, μ — penalty parameters.
   Φ — sparsifying transform operator.
   n — maximum number of iterations (stopping criterion).
Output: x — high-density 3D image.
Initialization: d^0 = 0, e^0 = 0, u^0 = 0, v^0 = 0.
for count = 1 : n do
   Solve the x-subproblem using Eq. (10).
   Solve the d-subproblem using Eq. (11).
   Solve the e-subproblem using Eq. (12).
   Update u using Eq. (8).
   Update v using Eq. (9).
end for
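The loop above can be sketched in a few lines. The following is a minimal illustration, not our MATLAB implementation: an orthonormal DCT stands in for the curvelet transform Φ, the depth-direction TV term (and hence e, v, and μ) is dropped so that the x-subproblem inverse is exact and element-wise, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shrink(x, gamma):
    # Element-wise soft-thresholding operator of Eq. (13).
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def admm_inpaint(x_q, mask, lam1=20.0, rho=50.0, n_iter=100):
    # Sketch of Algorithm 1 restricted to a single 2D slice, with an
    # orthonormal DCT in place of the curvelet transform and no TV term,
    # so B = (2*lam1*P^T P + rho*I)^(-1) is diagonal and exact.
    Phi = lambda a: dctn(a, norm='ortho')     # sparsifying transform
    PhiH = lambda c: idctn(c, norm='ortho')   # its inverse (adjoint)
    d = np.zeros_like(x_q)                    # auxiliary variable d = Phi x
    u = np.zeros_like(x_q)                    # scaled dual variable
    B = 1.0 / (2.0 * lam1 * mask + rho)       # diagonal inverse, mask = diag(P_Q)
    x = x_q.copy()
    for _ in range(n_iter):
        # x-subproblem, Eq. (10) without the TV terms
        x = B * (2.0 * lam1 * mask * x_q + rho * PhiH(d - u))
        # d-subproblem, Eq. (11)
        d = shrink(Phi(x) + u, 1.0 / rho)
        # dual update, Eq. (8)
        u = u + Phi(x) - d
    return x

# Toy demo: a constant image with 25% of the pixels missing.
mask = np.ones((8, 8))
mask[::2, ::2] = 0.0                          # unobserved locations
x_rec = admm_inpaint(mask * 1.0, mask)
```

Because a constant image is maximally sparse under the DCT, the missing pixels are filled in close to the true value; the full algorithm replaces the DCT with curvelets per lateral slice and adds the TV coupling across slices.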

All the parameters in our implementation were tuned heuristically, and the best results from the quantitative evaluations are presented. In general, the weight parameter λ1 balances the sparsity/smoothness constraint against data consistency: a smaller λ1 yields a smoother image, whereas a larger λ1 weights data consistency more heavily (preserving more of the acquired information). This balance between the sparsity constraint and data consistency in the lateral direction is also affected by the value of ρ. Similarly, the smoothness and data consistency in the axial direction are controlled by the parameter μ. Because intensity and density vary from image to image, a single value of these parameters may not work for all images. To simplify parameter tuning, the maximum intensity was truncated to 255, and the intensity values were then rescaled to the interval [0, 1]. In our implementation, we used λ1 in the range of 10 to 80 and ρ in the range of 10 to 150; similarly, λ2 = 10^−6 and μ = 0.1 or 0.01 were used. The results are insensitive to small changes in these parameter values.
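The intensity preprocessing can be sketched as follows; whether the rescaling divides by the 255 cap or performs a min-max normalization is not specified above, so the division by the cap here is an assumption:

```python
import numpy as np

def normalize_intensity(img, cap=255.0):
    # Truncate the maximum intensity to 255, then rescale to [0, 1].
    # Dividing by the cap (rather than min-max scaling) is an assumption.
    return np.minimum(img, cap) / cap

img = np.array([0.0, 100.0, 300.0])
scaled = normalize_intensity(img)   # values now lie in [0, 1]
```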

4. Results

4.1. Simulation Results

To demonstrate the performance of blind sparse inpainting reconstruction, we used two sets of simulated localization data.

For the first one, we generated a simulated 3D SMLM image in the shape of a knot as the "ground-truth" specimen. The knot had a volume of 4.02 × 4.02 × 0.18 μm³. The localization list was simulated by randomly selecting locations in the knot to mimic the activated molecules, with an activation density of approximately ten molecules per frame (0.62 molecules/μm² per frame).16 We directly recorded the localized coordinates (x, y, z) and intensities of the blinking molecules in each camera frame. Since the localization emitters were obtained directly from the true image, there were no localization errors or background noise. The localization list was then used to render the 3D image. Images of increasing density can be synthesized by combining these localization points over an increasing number of frames. The resulting high-density super-resolution 3D image (Video 1) has lateral and axial resolutions of 20 and 17 nm, respectively [Figs. 2(e) and 2(g)]. We used fewer frames to generate the low-density 3D image and then applied our algorithm to reconstruct the high-density 3D image.

Fig. 2

Blind sparse inpainting reconstruction of simulated 3D SMLM image. (a) Low-density image using 1000 frames; (b) blind inpainting reconstruction; and (c) high-density ground-truth image using 10,000 frames (also see Video 1 and Video 2). The right panel in each image shows a (y,z) slice at the position indicated by the white dashed line. The color bar shows the depth of z. Pixel size: 8 nm. Scale bars: 0.5  μm. (d) and (e) Intensity profile and FWHM at the white line segment shown in the (b) reconstructed image and (c) the ground-truth image, respectively. (f) and (g) Intensity profile and FWHM along the line segment (not shown) on z direction at white boxes on (y,z) slices in the images (b) and (c), respectively. Black dots in the intensity profiles are measured intensities, and blue curves are fitted Gaussian functions, with standard deviation σ and FWHM (double orange arrow) as indicated [Video 1, MP4, 5.5 MB [URL: https://doi.org/10.1117/1.JBO.26.2.026501.1]; Video 2, MP4, 4 MB https://doi.org/10.1117/1.JBO.26.2.026501.2].


To reconstruct the high-density 3D image from the low-density image, we constructed 22 z-slices of the low-density 3D image by grouping the localization data along the z axis with a thickness of 8 nm. ThunderSTORM,50 an open-source SMLM data analysis plugin for Fiji,51 was used to computationally render the z-stack with the 3D simulated localization list as input. Due to the simultaneous reconstruction of multiple z-slices (lateral and axial directions), reconstructing a 3D SMLM image is much more complicated than reconstructing a 2D SMLM image as in Ref. 42. The result in Fig. 2(b) shows that the blind sparse inpainting reconstruction of the low-density image, rendered with Q = 1000 frames and containing 15,910 localization points, significantly improved the density and is visually equivalent to the ground truth [Fig. 2(c)], rendered with N = 10,000 frames and a total of 96,203 fluorophore localization points. The 3D projection of Figs. 2(a)–2(c) is presented in Video 2. Additionally, the volume visualization of the simulated low-density, blind-inpainting reconstruction, and ground-truth 3D images using the Volume Viewer52 plugin in Fiji is shown in Fig. 3. Most of the incomplete and rough curvilinear structures caused by the reduced localization points in the low-density image are reconstructed almost perfectly, giving complete and continuous filament structures in excellent agreement with the ground-truth image. At some positions, where the input low-density image has very little information available, the reconstruction still deviates from the ground truth [red arrows in Figs. 2(b) and 3(b)]. Such errors can be reduced by increasing the number of frames (and thereby the number of localization points), but at the cost of reduced acceleration.

Fig. 3

Volume visualization of the simulated 3D image. (a) Low-density; (b) reconstructed; and (c) ground-truth images, respectively. The low-density image was rendered using 1000 frames and the ground-truth image was obtained using 10,000 frames.


The reconstructed image resolution was evaluated using the full width at half maximum (FWHM) of the intensity profile. The FWHM values along the lateral and axial directions for the reconstructed image are shown in Figs. 2(d) and 2(f), respectively. Similarly, Figs. 2(e) and 2(g), respectively, show the FWHM values of the ground-truth image in the lateral and axial directions. The intensity profile in the lateral direction was taken along the white line segments in Figs. 2(b) and 2(c). Similarly, line segments (not shown) along the z direction at the white boxes on the (y, z) slices of Figs. 2(b) and 2(c) were used to obtain the axial intensity profiles. The black dots in the intensity profiles are measured intensities, and the blue curves are fitted Gaussian functions, with standard deviation σ and FWHM (double orange arrow) as indicated. The FWHM values were calculated using FWHM = 2√(2 ln 2)σ ≈ 2.355σ. The FWHM values of the reconstructed image, in both the lateral and axial directions, are similar (∼2.5 nm higher) to those of the ground-truth image, indicating that the inpainted reconstruction preserves the resolution of a 3D structure. Additionally, we performed a quantitative evaluation of the reconstruction by calculating the root-mean-squared error (RMSE) between the reconstructed image and the ground-truth image, shown in Fig. 4. The RMSE value for each reconstruction is the average RMSE over all slices. Since the localization list was generated randomly, we conducted 10 simulations and calculated the average RMSE of each reconstruction for different numbers of frames. The unit of the RMSE is the same as the intensity (photons) of the image. The curve (Fig. 4) shows significant improvement in the reconstruction with >800 frames, suggesting that increasing the number of frames improves the fidelity of the reconstructed structures. The RMSE value for the reconstruction of Fig. 2 using 1000 frames was 0.0748.
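The FWHM conversion and the slice-averaged RMSE can be sketched as follows (the (nz, ny, nx) array layout in the RMSE helper is an assumption of this illustration):

```python
import numpy as np

def fwhm_from_sigma(sigma):
    # FWHM of a fitted Gaussian: 2*sqrt(2*ln 2)*sigma ~ 2.355*sigma.
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

def mean_slice_rmse(recon, truth):
    # Average over z-slices of the per-slice RMSE, as used for the
    # quantitative evaluation; arrays have shape (nz, ny, nx).
    per_slice = np.sqrt(((recon - truth) ** 2).mean(axis=(1, 2)))
    return per_slice.mean()

print(round(fwhm_from_sigma(1.0), 3))   # -> 2.355
```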

Fig. 4

Quantitative evaluation of knot simulation results using RMSE for the different number of frames.


Under experimental conditions, localization microscopy images are corrupted by noise sources such as false detections arising from background noise due to unbound dyes or out-of-focus light, or from nonspecific binding of antibodies.40 To test our method's performance under realistic simulation conditions, we used publicly available realistic 3D simulation data of microtubules from the École Polytechnique Fédérale de Lausanne (EPFL) 3D SMLM software benchmarking.53 The Alexa 647-labeled STORM data "MT0.N1.LD" consist of 19,996 frames with a molecule density of 0.2 molecules per μm². We adopted the 3D double-helix datasets and used SMLocalizer54 to process the diffraction-limited frames. Once the localization list was obtained, we used 5000 frames to generate the low-density 3D image, as shown in Fig. 5(a). To reconstruct the 3D high-density image from the low-density image, we constructed 90 z-slices of the low-density 3D image by grouping the localization data along the z axis with a thickness of 12.5 nm. The field of view (FOV) of the images in Fig. 5 was 5.62 × 5.15 μm². The overall axial range was 1.125 μm. Figure 5(b) shows the reconstruction using 5000 frames, with much smoother and improved density in both the lateral and axial directions. The result is comparable to the high-density image rendered using all frames [Fig. 5(c)]. The ground-truth image is shown in Fig. 5(d). The RMSE values (averaged over all slices) of the low-density, reconstructed, and high-density images were 0.0167, 0.0144, and 0.0202, respectively. These values show that our reconstruction deviates less from the ground-truth image than the high-density image obtained using 19,996 frames.

Fig. 5

Blind sparse inpainting reconstruction result of realistic simulation data MT0.N1.LD. The (a) low-density; (b) reconstructed; (c) high-density; and (d) the ground-truth super-resolution 3D image with color indicating the depth of z. The low-density image was rendered using 5000 frames and the high-density image was obtained using 19,996 frames. Pixel size: 12.5 nm. Scale bars: 0.5  μm.


4.2. Experimental Results

To demonstrate the performance of blind sparse inpainting reconstruction on real 3D SMLM images, we used publicly available localization lists from two microtubule datasets and one mitochondrial dataset.

The first dataset was from the EPFL SMLM software benchmarking.53 Details about the sample preparation and microscopy setup can be found in Ref. 55. In brief, microtubules in U-2 OS cells were labeled with anti-alpha-tubulin primary and Alexa Fluor 647-coupled secondary antibodies. The diffraction-limited frames (with an exposure time of 15 ms) were imaged using a dSTORM optical setup with a cylindrical lens. We used the wobble- and drift-corrected "Tubulin-A647-3D" localization list obtained from 112,683 frames and processed using the Super-resolution Microscopy Analysis Platform (SMAP)-2018.56 Since the localization list was already available, we did not process the diffraction-limited frame data but used the list directly. Isolated localization points due to background noise were filtered using density filtering. When all 112,683 frames with about 1.7 million localization points were used, we obtained a high-density super-resolution 3D image as a reference image [Fig. 6(c)]. The low-density image [Fig. 6(a)] was synthesized using 2254 frames, i.e., 50-fold fewer frames, with about 34,000 localization points from the same localization list. To reconstruct the 3D high-density image from the low-density image, we constructed 23 z-slices of the low-density 3D image, with an FOV of 37.5 × 33.4 μm², by grouping the localization list data along the z axis with a thickness of 40 nm. The overall axial range was 920 nm. The microtubule filaments could be seen in the low-density image, but the structural details were hard to discern. To reconstruct the high-density 3D image, our blind sparse inpainting algorithm was applied to the low-density 3D image. The reconstructed image is shown in Fig. 6(b). The color in Figs. 6(a)–6(c) indicates the depth in the z direction. Visual observation shows that blind sparse inpainting reconstruction significantly improves the localization density of the low-density image.
The microtubule filament structures are much denser and more clearly revealed in the reconstruction. Additionally, the reconstruction for a region of interest (ROI) (12 × 12 μm²) of the same dataset with a much smaller pixel size (24 nm) and z-slice thickness (Δz = 25 nm) is shown in Fig. 7. The superior reconstruction shows much denser and smoother microtubule structures in both the lateral and axial directions.

Fig. 6

Blind sparse inpainting reconstruction results of Tubulin-A647-3D data. The (a) low-density; (b) reconstructed; and (c) high-density super-resolution 3D image with color indicating the depth of z. The low-density image was rendered using 2254 frames and the high-density image was obtained using 112,683 frames. Pixel size: 40 nm. Scale bars: 3  μm.


Fig. 7

Blind sparse inpainting reconstruction results of an ROI of Tubulin-A647-3D data. The (a) low-density; (b) reconstructed; and (c) high-density super-resolution 3D image with color indicating the depth of z. (x,z) and (y,z) views of the regions enclosed by the white box are also shown. Pixel size: 24 nm. Scale bars: 1.5  μm.


For the quantitative evaluation of the reconstructed images of experimental data, we used the multiscale structural similarity index (MS-SSIM),57 a perceptually motivated metric, between the reference high-density image and the reconstructed image. Since ground truth was not available for the experimental data, the high-density 3D images rendered with all available frames were used as reference images. It is worth noting that this reference high-density image may still deviate from the ground truth (as seen in Sec. 4.1). Thus, the RMSE with respect to the reference image is not a proper metric for the quantitative evaluation of the reconstruction, as the pixel-value difference can be large even for a perfect reconstruction.42 Instead, we used MS-SSIM to evaluate the reconstruction's capability to capture the structural information across the slices of the reference image of the experimental datasets. The MS-SSIM index ranges between 0 and 1, with 1 being a perfect match with the reference image; a higher MS-SSIM value indicates a better match of structural information. Figure 8 shows the improvement in the MS-SSIM index of the slices of the reconstructed 3D image [Fig. 6(b)] compared with that of the input low-density 3D image [Fig. 6(a)]. It demonstrates that our method is capable of recovering the structures of microtubules with high similarity to the reference high-density image. The MS-SSIM index of the edge slices (slices 1 to 3 and 21 to 23) remains low because those slices have very low localization densities with wide gaps between the fluorophore localizations.

Fig. 8

Plot of the MS-SSIM index versus z-slice comparing the reconstruction of microtubule structures for the Tubulin-A647-3D image of Fig. 6.


To further evaluate blind sparse inpainting reconstruction on 3D SMLM experimental data, we used another publicly available microtubule localization list resulting from the Zernike Optimized Localization Approach in 3D (ZOLA-3D).58 Details about the sample preparation, imaging setup, and processing steps can be found in Ref. 13. In brief, the microtubules in a U-373 MG cell were labeled with anti-alpha-tubulin primary and Alexa-647-conjugated secondary antibodies. A total of 87,959 frames were acquired using the saddle-point PSF with a variable exposure time of 30 ms (in the early stage) to 100 ms (in the later stage). Since the localization list was already available, we used it directly. Isolated localization points due to background noise were filtered using density filtering. The high-density 3D super-resolution image [Fig. 9(c)] was generated using all frames, with around 899,600 localization points, visualizing the whole cell over an axial range of 2.3 μm. The low-density 3D image [Fig. 9(a)] was generated using 4400 frames, i.e., 20-fold fewer frames, with approximately 57,500 localization points from the same localization data. For reconstruction, we constructed 46 z-slices of the low-density image, with an FOV of 51.58 × 37.62 μm², by grouping the localization data along the z axis with a thickness of 50 nm. The low-density image was then given as input to our blind sparse inpainting algorithm. The reconstructed image is shown in Fig. 9(b). The color in Figs. 9(a)–9(c) indicates the depth in the z direction. Microtubule structures are more clearly revealed in the reconstruction, with much higher localization densities comparable to the reference high-density image, and superior reconstruction can be observed at the edge of the cell. The improvement in the MS-SSIM index, shown in Fig. 10, also verifies the higher similarity to the high-density reference image after reconstruction.
However, some fine features of dense or closely spaced structures in the high-density image were not properly resolved (red arrow) because the localization data in those regions were more isolated.

Fig. 9

Blind sparse inpainting reconstruction of microtubules data from ZOLA-3D. The (a) low-density; (b) reconstructed; and (c) high-density 3D super-resolution image with color indicating the depth of z. The low-density image was obtained using 4400 frames and the high-density image was obtained using 87,959 frames. The right panel in each image shows a (y,z) slice at the position indicated by the white dashed line. Pixel size: 37 nm. Scale bars: 5  μm.


Fig. 10

Plot of the MS-SSIM index versus z-slice comparing the reconstruction of microtubule structures for the ZOLA-3D data of Fig. 9.


Similarly, we also evaluated the reconstruction of another 3D SMLM image from ZOLA-3D. The 3D mitochondrial image in COS7 cells was obtained using the saddle-point PSF. The high-density 3D image of Fig. 11(c) was rendered using 81,578 frames (175,000 localizations after density filtering). For reconstruction, we used 5500 frames (19,500 localizations) to generate the low-density 3D image [Fig. 11(a)]. The reconstructed 3D image in Fig. 11(b) shows improvement in the density of the mitochondrial structures in both the lateral and axial directions. Due to the tubular structure of the mitochondria, the curvelet transform performed well, giving a superior reconstruction. The result demonstrates the versatility of our method in reconstructing high-quality 3D super-resolution images with a reduced number of frames.

Fig. 11

Blind sparse inpainting reconstruction of the mitochondrial 3D image from ZOLA-3D. The (a) low-density; (b) reconstructed; and (c) high-density 3D super-resolution image with color indicating the depth of z. The low-density image was obtained using 5500 frames and the high-density image was obtained using 81,578 frames. The right panel in each image shows a (y,z) slice at the position indicated by the white dashed line. Pixel size: 34 nm. Scale bars: 2  μm.


5.

Conclusion

We present a computational method based on blind sparse inpainting that reconstructs high-density 3D images from low-density 3D images synthesized from very few camera frames of standard 3D SMLM data. We demonstrate high-quality reconstructions with up to a 10-fold reduction in the number of frames for simulated 3D SMLM images and up to a 50-fold reduction for experimental microtubule 3D SMLM images. The acquisition time is thus reduced considerably, and 3D imaging is accelerated without compromising resolution. Furthermore, no change to the existing optical setup or labeling protocol is needed, and our method can be applied to the output of any existing localization algorithm. We expect that our method can further shorten the acquisition time when combined with existing higher-molecular-density labeling methods.

However, the proposed method has several limitations. First, because it uses the curvelet transform, it may be restricted to filamentous structures such as microtubules; for structures without curvature, an appropriate alternative sparsifying transform, such as the wavelet transform, can be used. Second, the reconstruction also depends on the localization algorithm: artefacts caused by background noise or incorrect localizations propagate into the reconstruction. Third, when the input image quality is limited by scarce, noisy, or nonuniform localizations, the reconstructed images may misrepresent the actual structures (e.g., broken filaments). Such misrepresentation can be alleviated by improving the input image quality using more frames, but at the cost of reduced acceleration. Finally, because the missing localization positions are estimated blindly, errors in predicting the PQ may introduce artefacts or a loss of resolution; these limitations can likewise be alleviated by using data from more frames. We anticipate that combining super-resolution optical microscopy with our blind inpainting method will enable future real-time live-cell and high-throughput 3D imaging to investigate complex nanoscopic biological structures and their functions.

Disclosures

The authors declare no conflicts of interest.

Acknowledgments

This work was supported in part by the National Science Foundation (NSF) under Grant Nos. CBET-1604531, CBET-1706642, and EFMA-1830969. The authors would like to thank the Biomedical Imaging Group at École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, and the Imaging and Modeling Lab at the Pasteur Institute, Paris, France, for making data available online.

References

1. S. Van de Linde et al., “Direct stochastic optical reconstruction microscopy with standard fluorescent probes,” Nat. Protocols, 6 (7), 991 (2011). https://doi.org/10.1038/nprot.2011.336

2. M. J. Rust, M. Bates and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods, 3 (10), 793 (2006). https://doi.org/10.1038/nmeth929

3. S. T. Hess, T. P. Girirajan and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J., 91 (11), 4258–4272 (2006). https://doi.org/10.1529/biophysj.106.091116

4. E. Betzig et al., “Imaging intracellular fluorescent proteins at nanometer resolution,” Science, 313 (5793), 1642–1645 (2006). https://doi.org/10.1126/science.1127344

5. J. Fölling et al., “Fluorescence nanoscopy by ground-state depletion and single-molecule return,” Nat. Methods, 5 (11), 943 (2008). https://doi.org/10.1038/nmeth.1257

6. R. Henriques et al., “QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ,” Nat. Methods, 7 (5), 339 (2010). https://doi.org/10.1038/nmeth0510-339

7. A. Sharonov and R. M. Hochstrasser, “Wide-field subdiffraction imaging by accumulated binding of diffusing probes,” Proc. Natl. Acad. Sci. USA, 103 (50), 18911–18916 (2006). https://doi.org/10.1073/pnas.0609643104

8. B. Dong et al., “Super-resolution spectroscopic microscopy via photon localization,” Nat. Commun., 7, 12290 (2016). https://doi.org/10.1038/ncomms12290

9. B. Huang et al., “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science, 319 (5864), 810–813 (2008). https://doi.org/10.1126/science.1153529

10. S. A. Jones et al., “Fast, three-dimensional super-resolution imaging of live cells,” Nat. Methods, 8 (6), 499 (2011). https://doi.org/10.1038/nmeth.1605

11. Y. Shechtman et al., “Optimal point spread function design for 3D imaging,” Phys. Rev. Lett., 113 (13), 133902 (2014). https://doi.org/10.1103/PhysRevLett.113.133902

12. Y. Shechtman et al., “Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions,” Nano Lett., 15 (6), 4194–4199 (2015). https://doi.org/10.1021/acs.nanolett.5b01396

13. A. Aristov et al., “ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range,” Nat. Commun., 9 (1), 2409 (2018). https://doi.org/10.1038/s41467-018-04709-4

14. S. Liu et al., “Three dimensional single molecule localization using a phase retrieved pupil function,” Opt. Express, 21 (24), 29462–29487 (2013). https://doi.org/10.1364/OE.21.029462

15. A.-K. Gustavsson et al., “3D single-molecule super-resolution microscopy with a tilted light sheet,” Nat. Commun., 9 (1), 123 (2018). https://doi.org/10.1038/s41467-017-02563-4

16. K.-H. Song et al., “Three-dimensional biplane spectroscopic single-molecule localization microscopy,” Optica, 6 (6), 709–715 (2019). https://doi.org/10.1364/OPTICA.6.000709

17. B. Huang et al., “Whole-cell 3D STORM reveals interactions between cellular structures with nanometer-scale resolution,” Nat. Methods, 5 (12), 1047 (2008). https://doi.org/10.1038/nmeth.1274

18. S. R. P. Pavani et al., “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. USA, 106 (9), 2995–2999 (2009). https://doi.org/10.1073/pnas.0900245106

19. M. F. Juette et al., “Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples,” Nat. Methods, 5 (6), 527–529 (2008). https://doi.org/10.1038/nmeth.1211

20. G. Shtengel et al., “Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure,” Proc. Natl. Acad. Sci. USA, 106 (9), 3125–3130 (2009). https://doi.org/10.1073/pnas.0813131106

21. S. Jia, J. C. Vaughan and X. Zhuang, “Isotropic three-dimensional super-resolution imaging with a self-bending point spread function,” Nat. Photonics, 8 (4), 302–306 (2014). https://doi.org/10.1038/nphoton.2014.13

22. A. S. Backer and W. Moerner, “Extending single-molecule microscopy using optical Fourier processing,” J. Phys. Chem. B, 118 (28), 8313–8329 (2014). https://doi.org/10.1021/jp501778z

23. I. Munro et al., “Accelerating single molecule localization microscopy through parallel processing on a high-performance computing cluster,” J. Microsc., 273 (2), 148–160 (2019). https://doi.org/10.1111/jmi.12772

24. Y. Lin et al., “Quantifying and optimizing single-molecule switching nanoscopy at high speeds,” PLoS One, 10 (5), e0128135 (2015). https://doi.org/10.1371/journal.pone.0128135

25. F. Huang et al., “Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms,” Nat. Methods, 10 (7), 653–658 (2013). https://doi.org/10.1038/nmeth.2488

26. R. Diekmann et al., “Optimizing imaging speed and excitation intensity for single-molecule localization microscopy,” Nat. Methods, 17 (9), 909–912 (2020). https://doi.org/10.1038/s41592-020-0918-5

27. S. J. Holden, S. Uphoff and A. N. Kapanidis, “DAOSTORM: an algorithm for high-density super-resolution microscopy,” Nat. Methods, 8 (4), 279 (2011). https://doi.org/10.1038/nmeth0411-279

28. L. Zhu et al., “Faster STORM using compressed sensing,” Nat. Methods, 9 (7), 721 (2012). https://doi.org/10.1038/nmeth.1978

29. S. Zhang, D. Chen and H. Niu, “3D localization of high particle density images using sparse recovery,” Appl. Opt., 54 (26), 7859–7864 (2015). https://doi.org/10.1364/AO.54.007859

30. M. Ovesný et al., “High density 3D localization microscopy using sparse support recovery,” Opt. Express, 22 (25), 31263–31276 (2014). https://doi.org/10.1364/OE.22.031263

31. L. Gu et al., “High-density 3D single molecular analysis based on compressed sensing,” Biophys. J., 106 (11), 2443–2449 (2014). https://doi.org/10.1016/j.bpj.2014.04.021

32. N. Boyd et al., “DeepLoco: fast 3D localization microscopy using neural networks,” (2018).

33. P. Zelger et al., “Three-dimensional localization microscopy using deep learning,” Opt. Express, 26 (25), 33166–33179 (2018). https://doi.org/10.1364/OE.26.033166

34. E. Nehme et al., “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica, 5 (4), 458–464 (2018). https://doi.org/10.1364/OPTICA.5.000458

35. M. Lu, T. Zhou and X. Liu, “3-D super-resolution localization microscopy using deep-learning method,” Proc. SPIE, 11190, 111900U (2019). https://doi.org/10.1117/12.2538577

36. T. Kim, S. Moon and K. Xu, “Information-rich localization microscopy through machine learning,” Nat. Commun., 10 (1), 1996 (2019). https://doi.org/10.1038/s41467-019-10036-z

37. E. Hershko et al., “Multicolor localization microscopy and point-spread-function engineering by deep learning,” Opt. Express, 27 (5), 6158–6183 (2019). https://doi.org/10.1364/OE.27.006158

38. E. Nehme et al., “DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning,” Nat. Methods, 17 (7), 734–740 (2020). https://doi.org/10.1038/s41592-020-0853-5

39. E. Nehme et al., “Learning an optimal PSF-pair for ultra-dense 3D localization microscopy,” (2020).

40. W. Ouyang et al., “Deep learning massively accelerates super-resolution localization microscopy,” Nat. Biotechnol., 36 (5), 460–468 (2018). https://doi.org/10.1038/nbt.4106

41. S. K. Gaire et al., “Accelerating multicolor spectroscopic single-molecule localization microscopy using deep learning,” Biomed. Opt. Express, 11 (5), 2705–2721 (2020). https://doi.org/10.1364/BOE.391806

42. Y. Wang et al., “Blind sparse inpainting reveals cytoskeletal filaments with sub-Nyquist localization,” Optica, 4, 1277–1284 (2017). https://doi.org/10.1364/OPTICA.4.001277

43. S. Boyd et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., 3 (1), 1–122 (2011). https://doi.org/10.1561/2200000016

44. E. Candes et al., “Fast discrete curvelet transforms,” Multiscale Model. Simul., 5 (3), 861–899 (2006). https://doi.org/10.1137/05064182X

45. S. K. Gaire et al., “Accelerated 3D localization microscopy using blind sparse inpainting,” in IEEE 16th Int. Symp. Biomed. Imaging, 526–529 (2019). https://doi.org/10.1109/ISBI.2019.8759209

46. A. P. Yazdanpanah and E. E. Regentova, “Compressed sensing MRI using curvelet sparsity and nonlocal total variation: CS-NLTV,” Electron. Imaging, 2017 (13), 5–9 (2017). https://doi.org/10.2352/ISSN.2470-1173.2017.13.IPAS-197

47. “curvelet.org,” (2008). http://www.curvelet.org/index.html

48. T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM J. Imag. Sci., 2 (2), 323–343 (2009). https://doi.org/10.1137/080725891

49. M. V. Afonso, J. M. Bioucas-Dias and M. A. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Trans. Image Process., 20 (3), 681–695 (2011). https://doi.org/10.1109/TIP.2010.2076294

50. M. Ovesný et al., “ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging,” Bioinformatics, 30 (16), 2389–2390 (2014). https://doi.org/10.1093/bioinformatics/btu202

51. J. Schindelin et al., “Fiji: an open-source platform for biological-image analysis,” Nat. Methods, 9 (7), 676 (2012). https://doi.org/10.1038/nmeth.2019

53. “Single-molecule localization microscopy—software benchmarking,” http://bigwww.epfl.ch/smlm/challenge2016/index.html?p=datasets

54. K. Bernhem and H. Brismar, “SMLocalizer, a GPU accelerated ImageJ plugin for single molecule localization microscopy,” Bioinformatics, 34 (1), 137–138 (2018). https://doi.org/10.1093/bioinformatics/btx553

55. Y. Li et al., “Real-time 3D single-molecule localization using experimental point spread functions,” Nat. Methods, 15 (5), 367 (2018). https://doi.org/10.1038/nmeth.4661

56. J. Ries, “SMAP: a modular super-resolution microscopy analysis platform for SMLM data,” Nat. Methods, 17 (9), 870–872 (2020). https://doi.org/10.1038/s41592-020-0938-1

57. Z. Wang, E. P. Simoncelli and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in Thirty-Seventh Asilomar Conf. Signals, Syst. and Comput., 1398–1402 (2003). https://doi.org/10.1109/ACSSC.2003.1292216

58. A. Aristov et al., “ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range,” (2018). https://github.com/imodpasteur/ZOLA-3D

Biography

Sunil Kumar Gaire received his PhD in electrical engineering from The State University of New York at Buffalo (SUNY-Buffalo), Buffalo, New York, in Fall 2020. He received his MS degree in electrical engineering from the University of North Dakota, Grand Forks, North Dakota, in 2017 and his bachelor’s degree in electronics and communication engineering from Purbanchal University, Nepal, in 2007. His research interests include optical imaging, machine learning, signal and image processing, and wireless and optical communications.

Yanhua Wang received his PhD from the Beijing Institute of Technology, Beijing, China, in 2011. After his PhD, he worked as a postdoctoral research associate at SUNY-Buffalo, Buffalo, New York, USA. Currently, he is an associate professor at the Beijing Institute of Technology. His research interests include image reconstruction, compressed sensing, magnetic resonance imaging, and radar signal processing.

Hao F. Zhang received his PhD in biomedical engineering from Texas A&M University, College Station, Texas, in 2006. Currently, he is a professor in the Department of Biomedical Engineering and the Department of Ophthalmology (by courtesy) at Northwestern University, Evanston, Illinois. His research interests include single-molecule imaging, optical coherence tomography, ophthalmic imaging, and image processing.

Dong Liang received his PhD in pattern recognition and intelligent systems from Shanghai Jiao Tong University in 2006. He is currently a professor at Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China. His research interests include image reconstruction, compressed sensing, magnetic resonance imaging, and machine learning.

Leslie Ying received her PhD in electrical engineering from the University of Illinois at Urbana-Champaign in 2003. Currently, she is the Clifford C. Furnas Professor of Electrical Engineering and a professor of biomedical engineering at SUNY-Buffalo. She is also the editor-in-chief of IEEE Transactions on Medical Imaging and a fellow of the American Institute for Medical and Biological Engineering (AIMBE). Her research interests include image reconstruction, magnetic resonance imaging, compressed sensing, machine learning, and statistical signal processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Sunil Kumar Gaire, Yanhua Wang, Hao F. Zhang, Dong Liang, and Leslie Ying "Accelerating 3D single-molecule localization microscopy using blind sparse inpainting," Journal of Biomedical Optics 26(2), 026501 (27 February 2021). https://doi.org/10.1117/1.JBO.26.2.026501
Received: 30 November 2020; Accepted: 21 January 2021; Published: 27 February 2021