Open Access
18 July 2024
Quantitative phase imaging techniques for measuring scattering properties of cells and tissues: a review—part I
Abstract

Significance

Quantitative phase imaging (QPI) techniques offer intrinsic information about the sample of interest in a label-free, noninvasive manner and have enormous potential for a wide range of biomedical applications with negligible perturbations to the natural state of the sample in vitro.

Aim

We aim to present an in-depth review of the scattering formulation of light–matter interactions as applied to biological samples such as cells and tissues, discuss the relevant quantitative phase measurement techniques, and present a summary of various reported applications.

Approach

We start with scattering theory and scattering properties of biological samples followed by an exploration of various microscopy configurations for 2D QPI for measurement of structure and dynamics.

Results

We reviewed 157 publications, presented a range of QPI techniques, and discussed suitable applications for each. We also presented the theoretical frameworks for phase reconstruction associated with the discussed techniques and highlighted their domains of validity.

Conclusions

We provide detailed theoretical as well as system-level information for a wide range of QPI techniques. Our study can serve as a guideline for new researchers looking for an exhaustive literature review of QPI methods and relevant applications.

1.

Introduction: Quantitative Phase Imaging

Determination of the 3D refractive index distribution of a thin phase object (transparent to visible light, with negligible absorption) requires precise knowledge of phase information of the scattered field. Traditional brightfield microscopy provides poor contrast for biological samples, such as single cells or thin tissue slices. Zernike phase contrast (PC) microscopy1 and differential interference contrast (DIC) microscopy2 were the two techniques developed to enhance the contrast for visualization of biological samples. The mode of operation of the above-mentioned techniques is the interference between two field components (scattered and unscattered beam for phase contrast microscopy and two laterally sheared orthogonal polarizations for DIC). Although the contrast is enhanced, the precise reconstruction of the object’s 3D refractive index distribution is still not possible using PC or DIC microscopy alone, as these techniques are qualitative in nature. Accurate phase extraction is made possible today by a wide variety of label-free, quantitative phase imaging (QPI) techniques.3,4 Due to this ability of precise phase retrieval, determination of an object’s 3D refractive index distribution can be achieved. QPI techniques are extremely sensitive to optical pathlength changes (of the order of nanometers, both spatially and temporally). These QPI techniques are proving to be highly successful in different realms of biomedicine, ranging from characterization of cell membrane fluctuations and transport of cellular material through organelles to cell growth.3

Although extremely sensitive to optical pathlength fluctuations, both spatially and temporally, QPI techniques inherently have lower specificity to the type of subcellular components as compared to fluorescence imaging. To address this issue, researchers have employed deep learning to introduce computational specificity to the phase images.5–8

In this review, we aim to discuss different realizations of 2D QPI techniques as applied to biological specimens. We start with a mathematical discussion of principles of QPI in Sec. 2. We then focus on 2D scattering and present the theoretical framework of 2D scattering from cells and thin tissue slices in Secs. 2.2.1 and 2.2.2. This is followed by a discussion of 2D QPI techniques, such as spatial light interference microscopy (SLIM),9 diffraction phase microscopy (DPM),10 gradient light interference microscopy (GLIM),11 Epi-GLIM,12 Hilbert phase microscopy (HPM),13 and other digital holographic microscopy (DHM)14 techniques in Sec. 2.2.3. Fourier transform light scattering (FTLS)15 [an elastic/static light scattering (ELS) technique] is described in Sec. 2.2.4. We conclude with a survey of reported biological applications based on the scattering measurements in Sec. 3.

2.

Principles of QPI

2.1.

Mathematical Formulation

Phase retrieval through QPI techniques is possible due to the interference of the reference and object fields. A complex object field (the field that results from the interaction of the incident-illumination field with the object) can be expressed as3

Eq. (1)

$U(x,y,t)=|U(x,y)|\,e^{i[\langle\omega\rangle t-\langle\mathbf{k}\rangle\cdot\mathbf{r}]},$
where the ensemble average of a quantity is denoted by $\langle\cdot\rangle$, the temporal frequency is denoted by $\langle\omega\rangle$, and the spatial frequency (wavevector of the field) is denoted by $\langle\mathbf{k}\rangle$.

Upon interference of the object field $U$ with a reference $U_r=|U_r|\,e^{i(\langle\omega\rangle t_r-\mathbf{k}_r\cdot\mathbf{r})}$ (assumed to be a plane wave), the resulting irradiance at a detector plane is

Eq. (2)

$I_D(x,y)=|U(x,y)|^2+|U_r|^2+2|U(x,y)||U_r|\cos[\langle\omega\rangle(t-t_r)-(\langle\mathbf{k}\rangle-\mathbf{k}_r)\cdot\mathbf{r}+\phi(x,y)],$
where ϕ(x,y) denotes the phase difference between the object and reference fields. To retrieve ϕ(x,y), a controlled modification of the total phase is required.

From Eq. (2), we note that there are two controllable quantities that can be modified to introduce desired phase modulations. The first is the temporal modulation of the reference field tr. QPI methods that exploit temporal modulations are called (temporal) phase-shifting methods [Figs. 1(a) and 1(c)].4,16 The second is the spatial modulation induced by a tilted reference field, described by wavevector kr, which gives rise to another family of QPI techniques called (spatial) off-axis interferometry methods [Figs. 1(b) and 1(d)].4,16

Fig. 1

Modes of QPI operation: (a) temporal phase-shifting setup; (b) off-axis setup; (c) phase reconstruction from four temporal phase-shifted measurements in (a); and (d) phase reconstruction from a single-shot measurement in (b). IS, imaging system; BS, beam splitter. (a), (b) Adapted from Ref. 16 under CC BY license; (c), (d) reproduced with permission from Ref. 4, © 2018, Springer Nature Limited.


QPI techniques can be compared based on four figures of merit: speed of acquisition (time required to produce one phase image), transverse resolution (maximum spatial frequency coverage), temporal optical pathlength sensitivity (the smallest detectable fluctuation in the optical pathlength at a specific spatial location over time), and spatial optical pathlength sensitivity (the smallest detectable fluctuation in the optical pathlength across the field of view at a fixed time).3 Temporal phase-shifting QPI methods have superior transverse resolution and spatial optical pathlength sensitivity but a slower acquisition rate, because multiple intensity frames are required to produce one phase image. Hence, for this class of QPI, an optimal space-bandwidth product is achieved but with a suboptimal time bandwidth.

Optimal time bandwidth (acquisition rate limited only by the acquisition rate of the camera) can be achieved in off-axis QPI methods because of their single-shot operation. The downside of off-axis QPI techniques is a reduced space-bandwidth product due to the use of a tilted reference beam. For such systems, the maximum spatial frequency coverage depends on the angle of tilt of the reference field in addition to the numerical aperture (NA) of the imaging optics.3 Thus off-axis QPI techniques provide high time bandwidth but a suboptimal space bandwidth.

Temporal phase noise refers to the temporal fluctuation per pixel in the measured image and is mainly caused by environmental fluctuations in the optical paths of the two interfering beams, due to vibrations, temperature fluctuations, air currents, etc., in addition to the detector temporal noise. To reduce this type of noise in off-axis QPI, a common-path geometry is used such that both the object and the reference fields share the same optical path. This geometry results in both fields being degraded by similar noise levels, which can then be suppressed during the phase retrieval process described later. This results in improved temporal sensitivity of the QPI system.

Spatial phase noise is defined as the spatial inhomogeneity in the pixel values of the phase image over the entire field of view, which arises due to the speckles generated by imperfections in the optical components along the beam path, such as small scatterers like dust, etc. in addition to the spatial noise introduced by the cameras. To reduce this type of noise, broadband illumination is commonly employed. Such broadband illumination sources have low temporal coherence (few microns), which diminishes the speckles and results in an improved spatial sensitivity.3

The signal-to-noise ratio (SNR) of the measured data strongly affects the performance of phase retrieval methods.17,18 For optical imaging, the amplitude SNR is defined as |I|/σ, where I is the measured image and σ is its standard deviation representing noise.3 The source of noise can be both spatial and temporal, as discussed above.19 To increase the SNR, several techniques can be employed. One method is averaging during image acquisition. For incoherent noise (where the noise is uncorrelated between successive frames), the SNR improvement is of the order of √N, where N is the number of acquisitions per frame. For coherent noise, due to the correlations between the noise in successive frames, the SNR improvement factor is less than √N and depends on the degree of correlation between the noise within repeat acquisitions. Another technique to increase SNR is spatially filtering the images. Caution must be exercised in the choice of the type of spatial filtering (low pass, high pass, or band pass) so as not to sacrifice the resolution of the image.3 SNR can also be increased using a high-power illumination source, but care must be exercised in adjusting the exposure time, as photodamage to the sample can occur. A better way to increase SNR is noise reduction, using noise-efficient or low-noise detectors20 or utilizing computational denoising techniques.21–25 An interesting development to offset temporal noise using intensity correlations as the measured parameter has recently been reported in the literature as a phase imaging method amenable to low-light conditions.26
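The √N scaling for uncorrelated noise can be illustrated with a short numerical sketch. The signal level, noise level, and frame count below are arbitrary illustrative values, not taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma, N = 100.0, 10.0, 16        # illustrative pixel value, per-frame noise, frame count

# N repeat acquisitions of the same (noisy) frame, averaged into one image
frames = signal + sigma * rng.standard_normal((N, 256, 256))
averaged = frames.mean(axis=0)

snr_single = signal / frames[0].std()     # SNR of one acquisition
snr_avg = signal / averaged.std()         # approaches sqrt(N) times the single-frame SNR
print(snr_single, snr_avg, snr_avg / snr_single)   # ratio close to 4 for N = 16
```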

2.2.

2D Scattering, QPI Geometries, Fourier Transform Light Scattering

In this section, we discuss some QPI techniques for extracting 2D light scattering signals and describe their applications.

2.2.1.

Scattering from thin tissue slices under the Born approximation

The linear interaction of light with matter results in two optical phenomena: absorption and scattering. The relative strength of these interactions is determined by the optical properties of the object, i.e., the imaginary and real parts of the refractive index as well as the object’s thickness. Solving inverse problems to determine the underlying refractive index distribution gives insight into the object structure. Here we focus on optical methodologies that can recover the scattering properties via quantitative phase measurements and estimate structural information of the object using tomographic techniques. However, before discussing these, a description of the physical meaning of the phase measurements in forward and backward geometries is provided.27

QPI techniques provide highly sensitive optical phase measurements in both transmission and reflection geometries.3 When the incident light associated with a wavevector $\mathbf{k}_i$ interacts with a scattering sample characterized by an inhomogeneous refractive index distribution $n(\mathbf{r})$ and thickness $L$, it is scattered in both forward and backward directions [Fig. 2(a)].27 The meaning of the wavefield phase in forward scattering measurements is straightforward. Under weakly scattering conditions, which are satisfied by thin biological samples, it is the pathlength accumulation encountered by the incident light upon propagation through the object that gives rise to the phase measurements. As such, the measurement in transmission, as shown in Figs. 2(b) and 2(c),27 is related to the optical pathlength, or the product of the refractive index difference and the depth of the object. Thus from forward scattering measurements, we can recover the structural information of the object with high accuracy. In contrast, the backscattered phase [Fig. 2(d)] originates from the interference of the backscattered waves from different depths within the object.27

Fig. 2

Forward and backscattering phase: (a) scattering geometry; (b) refractive index distribution of a neuron from forward scattering measurements; (c) calculated phase measurements in forward scattering mode; (d) calculated phase map for backscattering mode; and (e) phase profiles along the white lines in (c) and (d). (a)–(e) Reproduced with permission from Ref. 27, © 2017, Optical Society of America (Optica).


To mathematically describe the forward and backscattered field components, consider the geometry in Fig. 2(a).27 A plane wave with wavevector $\mathbf{k}_i=\beta_0\hat{\mathbf{z}}$ is incident on an object with refractive index $n(\mathbf{r})$ and thickness $L$. The quantities $U_b$ and $U_f$ denote the backscattered and forward scattered scalar fields, respectively. Consider the inhomogeneous Helmholtz equation:

Eq. (3)

$\nabla^2 U_1(\mathbf{r},\omega)+n_0^2\beta_0^2 U_1(\mathbf{r},\omega)=-\beta_0^2\chi(\mathbf{r},\omega)U_0(\mathbf{r},\omega),$
where $U_0(\mathbf{r},\omega)=A(\omega)e^{in_0\beta_0 z}$ is the incident plane wave along the z axis, $U_1$ is the scattered field, $n_0$ is the refractive index of the surrounding medium, $\chi(\mathbf{r},\omega)=n^2(\mathbf{r},\omega)-n_0^2$ is the scattering potential of the object, and $\beta_0=\omega/c$ is the wavenumber in vacuum. Following the Fourier transform procedure outlined in Ref. 27, the forward scattered measurement $U_f$ can be expressed as the sum of the incident and scattered light as

Eq. (4)

$U_f(\mathbf{k}_\perp,z,\omega)=A(\omega)\delta(\mathbf{k}_\perp)e^{i\beta z}-\frac{i\beta_0^2 A(\omega)e^{i\gamma z}}{2\gamma}\,\chi(\mathbf{k}_\perp,\gamma-\beta,\omega),$
where $\mathbf{k}_\perp=k_x\hat{\mathbf{x}}+k_y\hat{\mathbf{y}}$, $\gamma=\sqrt{\beta^2-k_\perp^2}$, and $\beta=n_0\beta_0$; $k_x$ and $k_y$ denote the wavevector components in the transverse ($x$ and $y$) directions, respectively, and $\delta$ denotes the Dirac delta function. Similarly, the detected field in the backscattered geometry can be expressed as

Eq. (5)

$U_b(\mathbf{k}_\perp,z,\omega)=A(\omega)\delta(\mathbf{k}_\perp)e^{i\beta z}+\frac{i\beta_0^2 A(\omega)e^{i\gamma z}}{2\gamma}\,\chi(\mathbf{k}_\perp,-\gamma-\beta,\omega)-A(\omega)\delta(\mathbf{k}_\perp)e^{i\beta z}.$

Note that there is no incident field in the backscattered measurement. Also note that Eq. (5) includes an addition and subtraction of the incident field (first and third terms). The second term on the right-hand side of Eq. (5) is the backscattered solution of Eq. (3).

Under the small-angle approximation, $\gamma\approx\beta$, transforming Eqs. (4) and (5) to the spatial domain leads to the following expressions:

Eq. (6)

$U_f(\mathbf{r}_\perp,z,\omega)=A(\omega)e^{i\beta z}-\frac{i\beta_0}{2n_0}A(\omega)e^{i\beta z}\left.\int_{-L/2}^{L/2}\left[n^2(\mathbf{r}_\perp,z,\omega)-n_0^2\right]e^{-ik_z z}\,dz\right|_{k_z=0},$

Eq. (7)

$U_b(\mathbf{r}_\perp,z,\omega)=A(\omega)e^{i\beta z}+\frac{i\beta_0}{2n_0}A(\omega)e^{i\beta z}\left.\int_{-L/2}^{L/2}\left[n^2(\mathbf{r}_\perp,z,\omega)-n_0^2\right]e^{-ik_z z}\,dz\right|_{k_z=-2\beta}-A(\omega)e^{i\beta z}.$

Since for most biological samples the refractive index contrast between the surrounding medium and the object is low, $n^2(\mathbf{r},\omega)-n_0^2$ can be approximated as $2n_0[n(\mathbf{r},\omega)-n_0]$, and Eqs. (6) and (7) can be simplified to27

Eq. (8)

$U_f(\mathbf{r}_\perp,z,\omega)=A(\omega)e^{i\beta z}e^{i\beta_0[\bar{n}(\mathbf{r}_\perp,\omega)-n_0]L},$

Eq. (9)

$U_b(\mathbf{r}_\perp,z,\omega)=A(\omega)e^{i\beta z}e^{i\beta_0\int_{-L/2}^{L/2}[n(\mathbf{r}_\perp,z,\omega)-n_0]e^{i2\beta z}\,dz}-A(\omega)e^{i\beta z}.$

Here $\bar{n}(\mathbf{r}_\perp,\omega)=\frac{1}{L}\int_{-L/2}^{L/2}n(\mathbf{r}_\perp,z,\omega)\,dz$ is the refractive index averaged along the z direction. These equations show that, in the transmission geometry, the argument of the second exponential in Eq. (8) gives the phase shift induced by the object, $\phi=\beta_0[\bar{n}(\mathbf{r}_\perp,\omega)-n_0]L$. However, from Eq. (9), there is no such straightforward interpretation of the phase in the backscattered field. From the argument of the second exponential in the first term of Eq. (9), the phase difference is the weighted axial projection of the refractive index shift, with the weighting factor $e^{i2\beta z}$, which accounts for the phase accumulated at different depths. Thus the phase in backscattering arises from the superposition of plane waves backscattered from different depths within the object.27
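As a quick numerical illustration of the forward-scattered phase in Eq. (8), the short snippet below evaluates $\phi=\beta_0[\bar{n}-n_0]L$ for assumed, representative values (a cell-like mean index of 1.38 in a medium of index 1.337, 5 μm thickness, 550 nm center wavelength); these numbers are illustrative, not taken from Ref. 27.

```python
import numpy as np

wavelength = 0.550e-6                 # center wavelength (m), assumed
beta0 = 2 * np.pi / wavelength        # free-space wavenumber
n_bar, n0, L = 1.38, 1.337, 5e-6      # mean cell index, medium index, thickness (m), assumed

phi = beta0 * (n_bar - n0) * L        # Eq. (8): forward-scattered phase shift
print(f"forward-scattered phase shift: {phi:.2f} rad")   # ~2.5 rad
```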

The 3D refractive index distribution of a sample imaged through SLIM is shown in Fig. 2(b).27 Figures 2(c) and 2(d) show the phase measurements reported in Ref. 27 for forward scattering and backscattering. SLIM images were used to obtain the forward scattered phase map and using the mathematical relations outlined in Ref. 27 and mentioned in detail in “Spatial light interference microscopy” section, the backscattered phase map was obtained. As evident in Fig. 2(e), there is a considerable difference between forward scattered and backscattered phase measurements.27

2.2.2.

Scattering phase theorem

The scattering parameters of an object include the scattering mean free path $l_s$ and the scattering anisotropy factor $g$. The scattering mean free path $l_s$ represents the mean length between two scattering events, and the anisotropy factor $g$ describes the directivity of the scattering, i.e., the average cosine of the scattering angle. Traditionally, the estimation of these parameters requires experiments combined with diffusion models or Monte Carlo simulations.28–30

The scattering phase theorem31 connects the phase measurements for thin tissue slices to the scattering parameters of the bulk object through simple mathematical formulas. Mathematical relations between the phase ϕ, scattering mean free path ls, and scattering anisotropy factor g were derived in Ref. 31. The reported results allow one to extract the corresponding ls and g maps from experimental phase measurements.31

To derive the scattering phase theorem relations, it was assumed (in Ref. 31) that there is no effect of absorption; the sample is thus a phase object and hence satisfies the first-order Born approximation. It was also assumed that the phase shift is random and follows a Gaussian distribution. Using the geometry shown in Fig. 3(a)31 and following the discussion in the previous section, the measured field in the transmission geometry, $U_f$, is the sum of the incident (unscattered) field $U_0$ and the forward scattered field $U_1$:

Eq. (10)

$U_f(\mathbf{r})=U_0+U_1(\mathbf{r})=|U_0|\,e^{i\phi(\mathbf{r})},$
where $\phi$ is the phase information extracted from the intensity measurements. The incident field $U_0$ represents a spatially homogeneous or average (DC) term (here and elsewhere in this paper, we use DC to indicate a quantity at zero spatial frequency or an average term), which represents the ballistic or unscattered component of the total detected field. The DC term $U_0$ can hence be calculated as the spatial average of the detected field $U_f$ (spatial averaging removes the dependence on the spatial coordinate and hence makes the term a constant in space), so $U_0=\langle|U_0|\,e^{i\phi(\mathbf{r})}\rangle_\mathbf{r}$.

Fig. 3

Scattering from thin tissue slices: (a) scattering geometry for a sample of thickness L; (b) phase map of entire rat liver tissue slice; (c) extracted scattering mean free path map; (d) extracted scattering anisotropy map; (e) histograms of scattering mean free path for heart versus liver; and (f) histograms of scattering anisotropy for heart versus liver. (a) Adapted with permission from Ref. 31, © 2011, Optica; (b)–(f) reproduced with permission from Ref. 32, © 2011, Optica.


Considering that the probability density function of phase shifts follows a Gaussian distribution, the DC term U0 can be calculated as

Eq. (11)

$U_0=\frac{|U_0|}{\sqrt{2\pi\langle\Delta\phi^2\rangle_\mathbf{r}}}\int e^{i\phi}\,e^{-\frac{\phi^2}{2\langle\Delta\phi^2\rangle_\mathbf{r}}}\,d\phi=|U_0|\,e^{-\frac{\langle\Delta\phi^2\rangle_\mathbf{r}}{2}},$
where $\langle\Delta\phi^2\rangle_\mathbf{r}$ denotes the variance of the phase shift. Taking the squared modulus of both sides of Eq. (11) gives the irradiance relationship $I_0=I\,e^{-\langle\Delta\phi^2\rangle_\mathbf{r}}$ (with $I=|U_0|^2$), and comparing with the Lambert–Beer law, $I_0=I\,e^{-L/l_s}$, the scattering mean free path $l_s$ can be expressed as31

Eq. (12)

$l_s=\frac{L}{\langle\Delta\phi^2(\mathbf{r})\rangle_\mathbf{r}}.$

This equation relates the scattering mean free path $l_s$ to the variance of the measured phase shift $\langle\Delta\phi^2(\mathbf{r})\rangle_\mathbf{r}$ through an inverse relationship. It implies that objects with a large phase variance have a shorter $l_s$. This draws an intuitive picture, in the sense that a high variance in phase suggests large inhomogeneity in the sample, which causes stronger scattering and a lower scattering mean free path.

The scattering anisotropy factor $g$ denotes the directionality of the scattering events and is defined as the angular mean of the cosine of the scattering angle, $g=\langle\cos\theta\rangle_\theta$. The scattering angle $\theta$ can be expressed in terms of the vector difference between the incident wavevector $\mathbf{k}_0$, with $|\mathbf{k}_0|=\beta_0$, and the scattered wavevector $\mathbf{k}_s$, with $|\mathbf{k}_s|=\beta_0$, the difference being termed the momentum transfer $\mathbf{q}$:

Eq. (13)

$\mathbf{q}=\mathbf{k}_s-\mathbf{k}_0,\qquad q=\sqrt{q_x^2+q_y^2},$

Eq. (14)

$q=2\beta_0\sin\left(\frac{\theta}{2}\right),\qquad \cos\theta=1-\frac{q^2}{2\beta_0^2}.$

The probability distribution function for the angular scattering can be expressed as31

Eq. (15)

$P(q_x,q_y)=\frac{|\tilde{U}_f(q_x,q_y)|^2}{\iint|\tilde{U}_f(q_x,q_y)|^2\,dq_x\,dq_y}.$

Since the mean of a random function $f(x)$ with probability density function $g(x)$ can be calculated as $\langle f(x)\rangle_x=\int f(x)g(x)\,dx$, using Eqs. (14) and (15), $g$ can be expressed as

Eq. (16)

$g=\frac{\int_{-1}^{1}\cos\theta\,p[\cos\theta]\,d[\cos\theta]}{\int_{-1}^{1}p[\cos\theta]\,d[\cos\theta]}=1-\frac{\langle|\nabla[\phi_{l_s}(\mathbf{r})]|^2\rangle_\mathbf{r}}{2\beta_0^2},$
where $\langle|\nabla[\phi_{l_s}(\mathbf{r})]|^2\rangle_\mathbf{r}$ is the variance of the gradient of the measured phase for an object of thickness $l_s$. For an object of thickness $L$, which is much smaller than the scattering mean-free path $l_s$, i.e., for weakly scattering samples, $g$ can be written as31

Eq. (17)

$g=1-\left(\frac{l_s}{L}\right)^2\frac{\langle|\nabla[\phi(\mathbf{r})]|^2\rangle_\mathbf{r}}{2\beta_0^2}.$

Using Eq. (12), g can be expressed in terms of phase alone as31

Eq. (18)

$g=1-\frac{1}{2\beta_0^2}\frac{\langle|\nabla[\phi(\mathbf{r})]|^2\rangle_\mathbf{r}}{\langle\Delta\phi^2(\mathbf{r})\rangle_\mathbf{r}^2}.$

This expression states that the higher the ratio of the variance of the phase gradient to the (squared) phase variance, the smaller $g$ is and the larger the scattering angle $\theta$ is.31

These relationships in Eqs. (12) and (18) that relate the measured phase and scattering parameters are referred to as the scattering-phase theorem.31 The significance of this theorem lies in the fact that to determine the scattering parameters of an object, measurement of phase perturbations introduced by the object is sufficient. The effects of frequency averaging of the measured scattering parameters due to broadband illumination and limited NA of the objective are discussed in Ref. 32.
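The two relations can be evaluated directly on a measured phase map. The sketch below computes tile-wise $l_s$ and $g$ maps from Eqs. (12) and (18); the function name, tile size, and pixel spacing are illustrative choices, not the processing used in Refs. 31 and 32.

```python
import numpy as np

def scattering_maps(phi, L, wavelength, tile=32, dx=1.0):
    """Tile-wise scattering mean free path l_s and anisotropy g from a phase map
    via the scattering-phase theorem [Eqs. (12) and (18)].
    phi: phase map in radians; L, dx, wavelength in the same length units."""
    beta0 = 2 * np.pi / wavelength
    gy, gx = np.gradient(phi, dx)                 # transverse phase gradient
    grad_sq = gx**2 + gy**2
    ny, nx = phi.shape
    ls_map = np.zeros((ny // tile, nx // tile))
    g_map = np.zeros_like(ls_map)
    for i in range(ny // tile):
        for j in range(nx // tile):
            sl = np.s_[i*tile:(i+1)*tile, j*tile:(j+1)*tile]
            var_phi = np.var(phi[sl])             # <delta-phi^2>_r over the tile
            ls_map[i, j] = L / var_phi            # Eq. (12)
            g_map[i, j] = 1 - np.mean(grad_sq[sl]) / (2 * beta0**2 * var_phi**2)  # Eq. (18)
    return ls_map, g_map
```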

Experimental corroboration of the scattering phase theorem is provided in Ref. 32, where SLIM, a form of QPI discussed in detail later, is used to quantitatively measure the phase of an entire rat liver tissue slice of 5 μm thickness. The phase map is shown in Fig. 3(b).32 Using Eqs. (12) and (18), maps of the scattering mean free path $l_s$ and the scattering anisotropy $g$ are shown in Figs. 3(c) and 3(d), respectively.32 Histograms of the scattering parameter measurements for heart and liver tissue slices are shown in Figs. 3(e) and 3(f),32 which show that the heart tissue is a stronger scatterer than the liver tissue and that the scattering in heart tissue is mostly in the forward direction, as $g$ approaches unity. These measurements are consistent with traditional diffusion scattering measurements.32

2.2.3.

Transmission/reflection geometries

Spatial light interference microscopy

SLIM is a temporal phase-shifting QPI technique that enables the extraction of the phase delay introduced by the sample.9 SLIM is based on the principles of phase contrast microscopy (for contrast enhancement of weak scattering biological samples) and Gabor’s holography (for phase extraction) and is implemented in a common path phase-shifting geometry. The optical setup for SLIM is shown in Fig. 4(a).9 A SLIM module is installed at the output port of a standard phase contrast microscope, which includes broadband annular illumination and a phase contrast objective lens. The Fourier lens L1 performs the spatial Fourier transform on the output from the phase contrast microscope. This spatial frequency content is projected on a liquid crystal phase modulator (LCPM) or a reflective, electrically addressed spatial light modulator (SLM), which provides external phase modulations between the reference and object fields. The Fourier lens L2 projects the frequency content back to spatial domain and an intensity image is captured at the camera (CCD or sCMOS) [Fig. 4(a)].9

Fig. 4

SLIM principle and operation: (a) SLIM system setup; (b) four-phase shifted intensity measurements; (c) resulting SLIM optical pathlength map; (d) spectrum of broadband illumination source; (e) autocorrelation curve for the spectrum shown in (d); (f) calibration curve showing the captured intensity versus grayscale value fed to the LCPM; (g) phase calibration curve for the LCPM extracted from (f); (h) optical pathlength fluctuation map for a sample-less field of view; and (i) histogram of optical pathlength fluctuations in (h) showing the spatial optical pathlength sensitivity of SLIM to be 0.7 nm. (a)–(g) Reproduced with permission from Ref. 9, © 2011, Optica; (h), (i) reproduced from Ref. 33 under CC BY license.


As described earlier, the complex field on the image plane (where the image forms) after passing through a weakly scattering sample can be expressed as a sum of an incident reference field U0 and a scattered object field U1(x,y).9,34

The recorded intensity at the detector (camera) plane (for collinear reference and object fields) can be expressed as

Eq. (19)

$I(x,y;\varphi)=|U_0|^2+|U_1(x,y)|^2+2|U_0||U_1(x,y)|\cos[\Delta\phi(x,y)+\varphi].$

Here $\Delta\phi(x,y)$ is the phase difference between the reference and the object fields, and $\varphi$ is the external phase shift (applied to the reference field) that is modulated in steps of $\pi/2$ to enable phase extraction [Fig. 4(b)].9 These phase modulations are temporal in nature and represent the first term of the cosine, $\omega(t-t_r)$, in Eq. (2). For each of the four temporal phase modulation steps corresponding to $\varphi=0,\ \pi/2,\ \pi$, and $3\pi/2$, individual intensity frames are recorded. The phase difference $\Delta\phi(x,y)$ is extracted using the phase-shifting algorithm35 and is expressed as9,34

Eq. (20)

$\Delta\phi(x,y)=\tan^{-1}\left[\frac{I(x,y;3\pi/2)-I(x,y;\pi/2)}{I(x,y;0)-I(x,y;\pi)}\right].$

Considering the field amplitude division factor $\xi(x,y)=|U_1(x,y)|/|U_0|$, the phase of the complex field is expressed as9,34

Eq. (21)

$\phi(x,y)=\tan^{-1}\left[\frac{\xi(x,y)\sin[\Delta\phi(x,y)]}{1+\xi(x,y)\cos[\Delta\phi(x,y)]}\right].$

Optical pathlength maps can also be extracted from the measured phase shifts (substituting the wavelength with the central wavelength for broadband illumination), as shown in Fig. 4(c).9 The broadband illumination used in SLIM suppresses speckle, which yields a uniform background in the phase/optical pathlength maps. The spectrum and the derived temporal autocorrelation for such a source are shown in Figs. 4(d) and 4(e), respectively.9
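The four-frame reconstruction of Eqs. (20) and (21) reduces to a few array operations. The following is a minimal sketch, assuming four registered intensity frames acquired at external shifts of 0, π/2, π, and 3π/2 and a known amplitude ratio ξ; the function name is illustrative.

```python
import numpy as np

def slim_phase(I0, I1, I2, I3, xi):
    """Four-frame SLIM phase reconstruction.
    I0..I3: intensity frames at external shifts 0, pi/2, pi, 3pi/2.
    xi: amplitude ratio |U1|/|U0| (scalar or array of the same shape)."""
    dphi = np.arctan2(I3 - I1, I0 - I2)                         # Eq. (20)
    phi = np.arctan2(xi * np.sin(dphi), 1 + xi * np.cos(dphi))  # Eq. (21)
    return phi
```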

During calibration, the dependence of the phase shift introduced by the SLM on the input gray level values is determined. The SLM is operated in the amplitude modulation mode by placing it in between two crossed polarizers. The SLM is fed discrete gray level values in the range of 0 to 511 and the corresponding intensity image is recorded [Fig. 4(f)].9 The sine component of the complex exponential signal is calculated using Hilbert transform of the recorded intensity, which is an interferogram with a DC and a cosine component, and the phase is extracted [Fig. 4(g)].9
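The calibration step described above amounts to recovering the phase of a one-dimensional interferogram via its analytic signal. A minimal sketch, assuming a recorded intensity trace with one value per gray level (the function name is illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def lcpm_phase_calibration(intensity):
    """Phase-versus-gray-level calibration from the recorded intensity trace,
    modeled as a DC term plus a cosine of the modulator phase."""
    ac = intensity - np.mean(intensity)     # remove the DC component
    analytic = hilbert(ac)                  # Hilbert transform -> analytic signal
    return np.unwrap(np.angle(analytic))    # phase introduced by the modulator
```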

Figures 4(h) and 4(i) show the spatial optical pathlength sensitivity of the SLIM system.33 Temporal sensitivity (defined as the standard deviation of the optical pathlength fluctuations at a specific spatial location over time) was characterized by a time-lapse measurement of the same 10×10  μm2 field of view for 256 frames and was found to be 0.03 nm.9 The spatial sensitivity (defined as the standard deviation of the optical pathlength fluctuations over space) was characterized to be <1  nm.9,33 SLIM’s high optical pathlength sensitivity makes it an ideal choice of instrument for multiscale applications, such as diffraction-limited virus imaging [Fig. 5(a)],33 single cell and intracellular organelle imaging [Fig. 5(b)], blood smears for pathology studies,36 and tissue imaging [Fig. 5(c)].

Fig. 5

SLIM applications: (a) virology: the top row is the volumetric and the bottom row is the surface reconstruction from the SLIM images of SARS-CoV-2, H1N1, HAdV, and ZIKV particles, respectively. (b) Single cell imaging and intracellular organelle detection and (c) whole tissue imaging. (a) Reproduced from Ref. 33 under CC BY license.


Fourier phase microscopy (FPM) is the predecessor of SLIM.37 Its principle of operation is similar to that of SLIM, except that SLIM uses annular illumination and a phase contrast microscope as the base instrument. FPM and its later variants were developed as an add-on module for brightfield microscopes and have been employed successfully for the evaluation and monitoring of cell growth and cellular dynamics.38

Diffraction phase microscopy

DPM is an off-axis technique and thus requires only a single-shot measurement to extract the phase map of the sample.10,39–43 It is a common-path instrument based on a Mach–Zehnder interferometric configuration, which employs spatial modulation to extract phase information. The DPM optical setup is shown in Fig. 6(a), as a module installed at the output port of a standard optical microscope.43 A grating placed at the image plane separates the light into different diffraction orders, each of which carries the full image information.10,43

Fig. 6

Diffraction phase microscopy: principle of operation, reconstruction, and application: (a) optical setup for DPM; (b) spatial frequency information of the interferogram; (c) single sideband selected by bandpass filtering (b); (d) band-passed frequency-domain image from (c) translated to DC; (e) optical pathlength map of an RBC obtained with DPM; (f) thickness profile through the center of the cell indicated by the arrow in (e); and (g) nanometer-scale temporal fluctuations in the membrane thickness observed with DPM at the three points marked in (e). (a)–(d) Reproduced with permission from Ref. 43, © 2014, Optica; (e)–(g) reproduced with permission from Ref. 10, © 2006, Optica.


The field at the diffraction grating plane can be written as43

Eq. (22)

$U_G(x,y)=U_0(x,y)+U_1(x,y)e^{i\alpha x},$
where $U_0(x,y)$ is the zeroth-order and $U_1(x,y)$ is the first-order diffracted field. At the focal plane of lens L1, a custom spatial filter is placed that passes the zeroth-order diffraction beam and filters the first-order diffraction beam, while blocking all other orders of diffraction. Lens L2 performs another Fourier transform to make the first order approximately a plane wave at the CCD plane. This homogeneous first-order diffraction beam serves as the reference beam and interferes with the zeroth-order object beam that contains the information of the sample. The interferogram is recorded, 2D Fourier transformed, and spatially filtered around $k_x=\alpha$, with a radius of $k_{max}=\beta_0\,\mathrm{NA}_{obj}$, to extract the cosine term. The extracted frequency content is translated back to the frequency origin (DC), as can be seen in Figs. 6(b)–6(d).43 The spatial filter is apodized with a Gaussian kernel to avoid ringing in the final image. This procedure is repeated for both the sample image and a sample-less background calibration image. After the inverse FFT, the two complex fields are divided to remove the background phase, and the phase information is recovered by taking the argument of the complex field.43 In addition to the above-described method for phase extraction from off-axis measurements, several other computational techniques are used in the literature, such as integral transforms and derivative methods.43–45 Thus the phase extraction is single shot, and the throughput is only limited by the acquisition rate of the CCD.10 The spatial and temporal optical pathlength sensitivities of DPM are 3 and 0.6 nm, respectively.43
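A bare-bones version of this single-shot reconstruction is sketched below. It assumes the +1 order sits at pixel coordinates (kx0, ky0) in the centered FFT and uses a hard circular filter instead of the Gaussian apodization described above; all names and parameters are illustrative.

```python
import numpy as np

def offaxis_phase(interferogram, background, kx0, ky0, radius):
    """Off-axis (DPM-style) phase retrieval: FFT, select the +1 sideband,
    translate it to DC, inverse FFT, and remove the background phase."""
    def complex_field(img):
        F = np.fft.fftshift(np.fft.fft2(img))
        ky, kx = np.indices(F.shape)
        mask = (kx - kx0)**2 + (ky - ky0)**2 <= radius**2   # bandpass around the +1 order
        cy, cx = F.shape[0] // 2, F.shape[1] // 2
        F_dc = np.zeros_like(F)
        # translate the selected sideband back to the frequency origin (DC)
        F_dc[cy - radius:cy + radius, cx - radius:cx + radius] = \
            (F * mask)[ky0 - radius:ky0 + radius, kx0 - radius:kx0 + radius]
        return np.fft.ifft2(np.fft.ifftshift(F_dc))

    # dividing by a sample-free calibration field removes the background phase
    return np.angle(complex_field(interferogram) / complex_field(background))
```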

The high optical pathlength sensitivity and single-shot measurements enable DPM to be used for the measurement of fast dynamic processes, such as measurements of membrane fluctuations over time. Popescu et al.10 were able to successfully determine the membrane fluctuations of a single RBC using DPM. Figure 6(e)10 shows the optical pathlength map of a single RBC, with the thickness profile along the direction of the arrow shown in Fig. 6(f).10 The nanometer membrane fluctuations over three points in the cell, marked as 1, 2, and 3, are shown in Fig. 6(g) for a period of 10 s.10 It was observed that the fluctuations toward the edge of the cell, quantified through the standard deviation, are more pronounced than those at the center of the RBC. This remarkable measurement was possible due to the common-path configuration of DPM, which provides nanometer sensitivity in optical pathlength measurements, and to its single-shot capability, which provides high time resolution.10

DPM can be implemented in both transmission and reflection modes. Figure 7(a) shows the transmission mode DPM where a 532 nm Nd:YAG laser is used as illumination source.43 The incident laser beam is passed through a single mode fiber to be coupled to a fiber collimation assembly that produces a collimated beam. This collimated beam passes through the collector lens in the microscope that focuses the beam into a point source at the condenser aperture. The condenser lens then transforms the point source into a collimated beam, fully illuminating the sample homogenously. Light after the sample is collected by the objective lens and relayed to the output port by the tube lens. The rest of the setup is as explained previously in this section.

Fig. 7

Geometries and variations of DPM: (a) transmission mode DPM; (b) reflection mode DPM; (c) white light DPM (wDPM); (d), (e) comparison of white light and laser DPM for sample-less images, with (f), (g) the corresponding spatial frequency information for white light and laser DPM, respectively; and (h) endoscopic DPM. (a), (b), (d), and (e) Reproduced with permission from Ref. 43, © 2014, Optica; (c), (f), and (g) reproduced with permission from Ref. 39, © 2012, Optica; and (h) reproduced with permission from Ref. 46, © 2018, Optica.


The reflection mode geometry of DPM is as shown in Fig. 7(b).43 Instead of passing through the sample, light is now reflected or backscattered by it. The DPM setup remains the same as that in the transmission mode.43

Since these DPM systems employ a laser as an illumination source, they inherently suffer from a high-contrast speckle pattern across the field of view. The source of the speckle pattern is the high coherence of the laser radiation. In Refs. 39, 42, and 47, white light was used as an illumination source in both transmission and reflection geometries to eliminate the speckle in DPM images. White light DPM increases the spatial sensitivity of the DPM images by decreasing the noise floor. Figure 7(c) shows the white light DPM (wDPM) setup, where light from a broadband source serves as the illumination beam.39 Creating a perfect plane wave reference beam is not possible with such a source. However, an approximation can be created by increasing the spatial coherence of the illumination, i.e., by closing the condenser aperture. Due to the imperfect reference beam, the image formed at the camera plane suffers from a halo artifact around the edges of objects.39,42,43,47 Figures 7(d)–7(g) show the comparison between sample-less area images obtained through wDPM and laser DPM, respectively.39,43

The spatial optical pathlength sensitivity is improved from 2.8 nm in laser DPM to 1.1 nm in wDPM. However, since white light is a broadband source, it is typically of low brightness and suffers from dispersion, which can be reduced using achromatic optics and image processing to remove artifacts.43 Endoscopic DPM, as shown in Fig. 7(h), is a very promising tool for in vivo diagnostics.46 It consists of an endoscope module constructed using a GRIN lens assembly and a standard DPM module.

DPM has been utilized for the study of various dynamic phenomena that include erythrocyte membrane fluctuations,48,49 which will be discussed in detail in Sec. 2.2.4.

Gradient light interference microscopy

Nonlinear (multiphoton) microscopy has been the traditional optical imaging method for thick, multicellular samples that are multiply scattering in nature, such as embryos and organoids. Such nonlinear microscopy methods, however, involve high peak power illumination of the sample, which can damage the sample through phototoxicity. GLIM, a low-coherence interferometry technique, was developed to counteract the problems associated with nonlinear microscopy and to extract phase information from multiply scattering samples. The operating principle of GLIM is based on DIC, low-coherence interferometry, and holography.11 Due to broadband, incoherent illumination and coherence gating, GLIM achieves strong depth sectioning. The low-coherence interferometric aspect of GLIM and the resulting depth sectioning, combined with phase-shifting reconstruction, help to suppress multiple scattering from the sample. Thus GLIM has been applied to imaging 3D samples for both structural and compositional information.11,12,50,51

GLIM has been developed as an add-on module for a standard DIC microscope [Fig. 8(a)].11 In a DIC microscope, the broadband (a broadband LED or a halogen lamp) illumination beam is divided by a Nomarski prism into two orthogonally polarized beams that are laterally sheared. The lateral separation is less than the width of the diffraction spot. After passing through the sample, these two beams carry similar image information but are phase shifted due to the lateral separation. A second Nomarski prism after the objective lens recombines the two beams. These two beams still cannot interfere due to having orthogonal polarizations. For interference to happen, in DIC operation, a polarizer is placed after the second Nomarski prism and is aligned at 45 deg to the polarizations of both the incoming beams. For GLIM operation, a liquid crystal variable retarder (LCVR) or an SLM that can introduce additional phase shifts ϕn=nπ/2, where n=0,1,2,3, to one of the beams [Fig. 8(b)11] is placed between the objective Nomarski prism and the final polarizer.11

Fig. 8

GLIM and epi-GLIM: (a) optical setup for GLIM; (b) four phase-shifted intensity detections; (c) phase gradient map extracted from the four frames in (b); (d) phase map obtained after integration of (c) along the shear direction; (e) comparison of measured and calculated phase distributions for a polystyrene bead; (f) time-lapse imaging of a bovine embryo over a week performed using GLIM; (g) optical setup for epi-GLIM; (h) four frames and the extracted phase gradient image of a bead using epi-GLIM; (i) phase distribution comparison between calculated and measured backscattered phase for the polystyrene bead; (j) phase-gradient map of neurons in a high-density well plate (1536 wells) imaged with epi-GLIM; (k) phase gradient map of a neuron imaged on a plastic substrate obtained through epi-GLIM; (l) comparison of the axial PSF in transmission and reflection GLIM geometries. (a)–(f) Reproduced from Ref. 11 under CC BY license; (g)–(l) reproduced from Ref. 12 under CC BY license.


The field at the camera plane is

Eq. (23)

$U_n(\mathbf{r})=U(\mathbf{r})+U(\mathbf{r}+\delta\mathbf{r})e^{i\phi_n}.$

Here $\delta\mathbf{r}$ is the lateral shift between the two orthogonal polarization fields, and $\phi_n=n\pi/2$, $n=0$ to 3, is the phase modulation offset introduced by the LCVR, corresponding to the temporal modulation term $\omega(t-t_r)$ in Eq. (2). $U(\mathbf{r})$ and $U(\mathbf{r}+\delta\mathbf{r})$ correspond to the two interfering fields in Eq. (2), representing the object and reference fields. The intensity on the detector plane can then be expressed as

Eq. (24)

$I_n(\mathbf{r})=I(\mathbf{r})+I(\mathbf{r}+\delta\mathbf{r})+2|\gamma(\mathbf{r})|\cos[\phi(\mathbf{r}+\delta\mathbf{r})-\phi(\mathbf{r})+\phi_n],$
where $|\gamma(\mathbf{r})|=\sqrt{I(\mathbf{r})I(\mathbf{r}+\delta\mathbf{r})}=|\langle U(\mathbf{r})U^{*}(\mathbf{r}+\delta\mathbf{r})\rangle_t|$ is the mutual intensity, i.e., the temporal cross correlation between the two laterally shifted, interfering beams at zero delay.11

The four intensity frames, each captured with one of the four successive values of $\phi_n$, are then processed using the phase-shifting algorithm35 to extract the phase gradient $\nabla_\mathbf{r}\phi(\mathbf{r})=\frac{\Delta\phi}{\delta r}\hat{\delta\mathbf{r}}$, with $\Delta\phi=\tan^{-1}\left[\frac{I(x,y;3\pi/2)-I(x,y;\pi/2)}{I(x,y;0)-I(x,y;\pi)}\right]$ [Fig. 8(c)].11 Note that since GLIM is based on the DIC configuration, which generates an intensity image proportional to the gradient of the phase delay, the extracted quantity is a gradient. In SLIM, by contrast, the four frames provide the phase delay itself [Eq. (20)], because the base modality is PC, which generates an intensity image related to the phase delay. The phase information is the integral of the gradient phase along the direction of shear [Figs. 8(d) and 8(e)]:11

Eq. (25)

$\phi(\mathbf{r})=\int_0^{\mathbf{r}}[\nabla_{\mathbf{r}'}\phi(\mathbf{r}')]\cdot d\mathbf{r}'+\phi(0),$
where $\phi(0)$ is the initial/background phase. The risk of photodamage is extremely low due to the use of low-power density illumination, and hence samples can be observed for long durations. Nguyen et al.11 employed GLIM to study embryo growth and viability over a week [Fig. 8(f)].11
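Numerically, the GLIM pipeline is the four-frame gradient extraction followed by a cumulative integration along the shear direction [Eq. (25)]. A minimal sketch, assuming the shear lies along the x axis and that the shear distance and pixel size are expressed in the same units (names are illustrative):

```python
import numpy as np

def glim_phase(I0, I1, I2, I3, shear, dx):
    """GLIM reconstruction: phase gradient from four frames (shifts 0, pi/2, pi, 3pi/2),
    then integration along the shear direction (assumed to be x)."""
    dphi = np.arctan2(I3 - I1, I0 - I2)      # phase difference between the sheared beams
    grad_phi = dphi / shear                  # phase gradient along the shear
    phi = np.cumsum(grad_phi, axis=1) * dx   # Eq. (25): integrate along x
    return phi - phi[:, :1]                  # set the background phase phi(0) to zero
```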

Epi-illumination gradient light interference microscopy

Thicker 3D objects, or objects placed on opaque substrates, are a limitation for transmission GLIM. Although nonlinear microscopy achieves a better penetration depth due to its longer wavelengths, its high illumination power requirements eventually photodamage the object. To enable imaging of much thicker samples, GLIM was implemented in reflection mode by Kandel et al.12 This new QPI instrument, known as Epi-GLIM, works on the same principle as GLIM. Figure 8(g) shows the optical setup for epi-GLIM.12 The base microscope in this case is a reflective, upright microscope configured in DIC mode. An Epi-GLIM module is attached to the output port of the microscope. Broadband illumination light is passed through a polarizer so that it is incident on the Nomarski prism at 45 deg to its axis. The light is split into two laterally sheared, orthogonal polarization components by the Nomarski prism. Light reflected from the sample is recombined into one beam with two orthogonal polarizations by the same Nomarski prism. This beam then enters the LCVR and is modulated in steps of π/2. An analyzer placed right before the camera at 45 deg to the incoming polarizations causes interference to happen at the camera plane. The reconstruction principle is the same as described for transmission GLIM.

Figures 8(h) and 8(i) show the phase reconstruction of a 1.9 μm polystyrene bead immersed in oil and placed on a reflective surface.12 Note that, since the object is placed on a reflective surface, the expected phase value is twice that obtained when the same sample is placed on glass and measured through transmission GLIM.12

Another advantage of Epi-GLIM is that it can be used to image high-density well plates, in which the well size is smaller because of the large number of wells. Liquid media form a meniscus at the edge of these wells, which causes the illumination to be nonuniform, and thus transmission imaging is not well suited for such plates.12 Figure 8(j) shows an Epi-GLIM image of one field of view from a 1536-well plate.12 The background shows no inhomogeneity. Transmission DIC is not recommended when using a plastic-bottom plate, which is birefringent and thus degrades the DIC operation. However, Epi-GLIM overcomes this problem, as the measured signal is reflected from, and does not pass through, the plastic. Figure 8(k) shows an image of a neuron cultured on a plastic substrate and measured through Epi-GLIM.12

Since in the reflection geometry the objective also acts as the condenser, the frequency coverage of the Epi-GLIM system is increased compared to the transmission geometry. The double pass through the objective effectively increases the NA of the system.12 The improvement in the axial point spread function (PSF) can be seen in Fig. 8(l).12

Digital holographic microscopy

DHM is a computed imaging technique, based on optical holography, that seeks to numerically reconstruct the complex-valued field. An interferogram is formed at the hologram plane by interference between the object and reference waves, from which the complex field is extracted. The extracted field is then numerically propagated to different axial locations by the use of wave propagation equations.14

Figure 9 shows different configurations of digital holography as discussed in detail in Ref. 14. In Gabor holography, both reference and object beams are extracted from the same incoming beam based on the weakly scattering property of the sample [Fig. 9(a)].14 The portion of the incident light scattered from the object becomes the object beam and the unscattered portion becomes the reference beam.14 In off-axis Fresnel holography [Fig. 9(b)],14,52 the reference beam is an off-axis plane wave. As explained previously in the discussion of DPM, for phase recovery, one of the diffraction orders is extracted from the Fourier transform of the hologram.53 The Fresnel transform (based on the distance between the object and the hologram plane) is applied for the numerical reconstruction of complex fields from the measured hologram.14 In Fourier transform holographic microscopy, the reference is a point source placed at the object plane. To reconstruct the image, only one Fourier transform is required [Fig. 9(c)].14,54,55 For a lens-based Fourier holography setup, the reference is a plane wave and the object and the hologram plane are both at a focal length away from the Fourier transform lens.14 Anand et al.56 discussed another geometry for Fourier transform holography where a single beam is divided into reference and object beams using an image plane aperture. In this configuration, the pinhole samples a portion of the object beam, which acts as a reference point source. The object beam is created by passing through a diffuser (placed at the same plane as the pinhole). These beams then form a hologram at the CCD plane.

Fig. 9

Configurations of digital holography: (a) Gabor holography; (b) off-axis Fresnel holography; (c) Fourier transform holography; (d) image plane holography; (e) inline holography; and (f) phase-shifting holography. (a)–(f) Reproduced with permission from Ref. 14, © 2010, SPIE.


Another holography configuration is image plane holography,57 which requires the object to be placed very close to the hologram plane.14 Figure 9(d)14 illustrates one example where the object for the hologram is a magnified object image projected at the hologram plane;57 in another configuration, the reference beam is also passed through another lens to magnify the hologram. This configuration is suitable for low-coherence interferometric techniques.14 For in-line holography [Fig. 9(e)], both the object and reference beams are coincident.14 Due to this coincidence, twin images [the intended (real) image and the unwanted, out-of-focus, complex conjugate image of the real image] and strong DC components occur because of the presence of the incident reference wave. Image processing techniques are used to remove these unwanted artifacts. The in-line geometry, however, provides an increase in resolution, as required for biological applications.14,58 Phase-shifting digital holography [Fig. 9(f)]14,59 replaces the Fourier-transform retrieval of the complex field from a single hologram with phase-shifting reconstruction using multiple holograms (as discussed for SLIM and GLIM). The phase of the reference beam is modulated in steps of π/2 to generate four holograms.59,60 The complex field is retrieved at the hologram plane using phase-shifting reconstruction (discussed previously in the presentation of SLIM/GLIM). This field can then be numerically propagated along the propagation direction. Phase-shifting digital holography has also been reported with two-step phase-shifting61–63 or three-step phase-shifting and reconstruction.64 Phase-shifting reconstruction eliminates the DC and first-order twin-image artifacts, conditioned upon accurate phase shifting.14,59,65–71

The reconstructed image in DHM can suffer from phase aberrations due to the imaging optics. A large amount of research has been devoted to numerically compensating for or eliminating the aberrations using methods such as a numerical lens,72,73 conjugated holograms,74 principal component analysis (PCA) of the phase map to extract aberrations,75 nonlinear optimization to minimize phase variations,76 geometric transformation-based aberration correction,77 least-squares fitting and segmentation,78 and wavefront correction at the hologram plane.79 Aberration correction through deep learning is addressed in Ref. 80, where a convolutional neural network estimates the background phase and the aberration is then corrected using Zernike polynomial fitting. Kim14 and Marquet et al.71 provided useful reviews of different digital holographic techniques and the numerical methods employed for reconstruction and aberration correction.

QPI based on DHM principles has been employed extensively in biomedical applications.58,71,81,82 Figures 10(a) and 10(e) show one representative optical setup for each of the reflection and transmission DHM geometries.52,71 In the reflection geometry illustrated in Fig. 10(a), light from a coherent source (a HeNe laser, 632.8 nm) is spatially filtered (using two lenses and a pinhole in a beam expander geometry); half of the beam illuminates the object upon reflection through a beam splitter and is backscattered from the object (O). The other half (R) is transmitted by the beam splitter onto a mirror (M), which tilts the beam at an off-axis angle with respect to the object beam. The tilted reference beam is then reflected toward the CCD plane, where it interferes with the backscattered object field and forms the hologram, which can then be numerically reconstructed and propagated to different axial locations.52 Figures 10(b)–10(d) show the images of a fabricated phase object measured through reflection DHM.52 Transmission DHM is shown in Fig. 10(e),71 where a collimated coherent source is focused onto the sample by the condenser lens. The other half of the collimated illumination beam is reflected by the beam splitter onto a mirror that reflects the collimated beam at an angle toward the CCD plane. The field scattered from the sample is collected by the objective lens and is relayed to the CCD plane as the object beam. The hologram from the interference of the object and reference fields is formed at the CCD plane, which can then be reconstructed using the techniques discussed above. The thickness profile of an RBC and the phase and thickness profiles of a mouse cortical neuron measured through transmission DHM are shown in Figs. 10(f) and 10(g), respectively.52,71

Fig. 10

DHM: (a) reflection mode geometry. Reflection mode DHM measurements of a pure phase object: (b) amplitude; (c) phase; and (d) 3D height distribution; (e) transmission mode geometry; (f) thickness profile of RBC; (g) phase and thickness maps of a mouse cortical neuron measured through transmission DHM. (a)–(d) Reproduced with permission from Ref. 52, © 1999, Optica; (e)–(g) reproduced from Ref. 71 under CC BY license.


We consider the illumination, sample, objective lens, and CCD configuration depicted in Fig. 10(e), where the sample is illuminated with a plane wave $U_0$ with incident wavevector $\mathbf{k}_i$. The sample is placed at a distance $d_s$ from the objective lens (of focal length $f<d_s$). The image plane is at a distance $d_i$ from the objective lens, where the field is $U_1$. The CCD is positioned between the objective lens and the image plane, at a distance $d$ from the image plane. Following the mathematical formulation in Ref. 3, the field at the image plane can be expressed as

Eq. (26)

$U_1(x,y)=\frac{1}{|M|}U_0\left(\frac{x}{M},\frac{y}{M}\right),$
where the field is a magnified version of the object with magnification $M=d_i/d_s$. This field can be back propagated to the CCD plane using the Fresnel propagation kernel in the reverse direction as

Eq. (27)

$U_D(x,y)=U_1(x,y)\ast_{(x,y)}e^{i\frac{\beta_0}{2d}(x^2+y^2)}.$

This field interferes with the off-axis reference field $U_r$ incident with wavevector $\mathbf{k}_r$ [to introduce spatial phase modulations, as indicated by the second term of the cosine, $(\mathbf{k}-\mathbf{k}_r)\cdot\mathbf{r}$, in Eq. (2)], assumed to be a plane wave here, and the resultant hologram is

Eq. (28)

$U_H(x,y)=U_D(x,y)+|U_r|e^{i\mathbf{k}_r\cdot\mathbf{r}}.$

This detected hologram is then Fourier transformed, and the complex field is retrieved by spatial filtering where the signal corresponding to one of the sinusoidal peaks is selected and the rest (DC and the twin image) is filtered out. The 2D Fourier transform of the resultant intensity gives

Eq. (29)

$\tilde{I}_H(k_x,k_y)=\tilde{I}_0(k_x,k_y)+\tilde{I}_{+1}(k_x,k_y)+\tilde{I}_{-1}(k_x,k_y),$
where

Eq. (30)

$\tilde{I}_0(k_x,k_y)=\mathcal{F}[|U_D(x,y)|^2+|U_r|^2],\qquad \tilde{I}_{+1}(k_x,k_y)=|U_r|\tilde{U}_D(k_x-k_{rx},k_y),\qquad \tilde{I}_{-1}(k_x,k_y)=|U_r|\tilde{U}_D(k_x+k_{rx},k_y).$

Here $\mathcal{F}$ denotes the 2D spatial Fourier transform, and $k_{rx}$ is the component of the reference wavevector along the $x$ direction.

After removing the DC and twin image, shifting the first-order term back to origin and taking the inverse Fourier transform, the resultant object field can be recovered as

Eq. (31)

$U_1(x,y)\propto\mathcal{F}^{-1}\left[\tilde{I}_{+1}(k_x+k_{rx},k_y)\,e^{i\frac{d}{2\beta_0}(k_x^2+k_y^2)}\right].$

Now, this field can be axially propagated over any distance through either Fresnel propagation or Huygens’s convolution with the Green’s function.
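For completeness, the Fourier-domain (paraxial) Fresnel propagation referred to above can be written in a few lines. The sketch below assumes a uniformly sampled complex field with pixel size dx; the function name and arguments are illustrative.

```python
import numpy as np

def fresnel_propagate(field, dz, wavelength, dx):
    """Propagate a complex field by a distance dz using the paraxial (Fresnel)
    transfer function in the Fourier domain; all lengths in the same units."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    beta0 = 2 * np.pi / wavelength
    # Fresnel transfer function: exp(i*beta0*dz) * exp(-i*dz*(kx^2 + ky^2)/(2*beta0))
    H = np.exp(1j * beta0 * dz) * np.exp(-1j * dz * (KX**2 + KY**2) / (2 * beta0))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```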

DHM based on a backscattering geometry has been reported to have an axial sensitivity/accuracy of 0.9 nm and a temporal stability of 0.8 nm71,83 in a dual-wavelength reflection mode instrument. To remove the object-dependent noise, interwavelength noise, and shot noise, different averaging schemes have been employed, as described in Ref. 83.

The high spatial sensitivity and low acquisition time (of the order of tens of μs) enable DHM to visualize and quantify dynamic processes occurring at the single-cell level. As an example,84 Figs. 11(a)–11(c) reveal the changes induced in single neuron cell bodies under hypotonic stress, with the left panel showing the thickness maps before stress [Fig. 11(a)], the middle panel after stress [Fig. 11(b)], and the right panel [Fig. 11(c)] showing the difference phase map between normal and stressed cell bodies.84 Thickness changes and refractive index fluctuations are decoupled using a procedure that involves the measurement of holograms while varying the refractive index of the surrounding medium.84 The corresponding phase changes are shown in Fig. 11(d), with the phase measurements averaged over the surface of the cell shown in the inset with boundaries marked in red.84 The phase measured with the first immersion medium can be expressed as84

Eq. (32)

$\phi_1^i=\beta_0(\bar{n}_i-n_s)d_c^i,$
where $\phi_1^i$ is the phase at the $i$'th spatial location measured with the first medium, $\beta_0=2\pi/\lambda$ is the free-space wavenumber, $\bar{n}_i$ is the axial projection of the refractive index of the cell at the $i$'th spatial location, $n_s$ is the refractive index of the first surrounding medium, and $d_c^i$ is the thickness of the cell at the $i$'th spatial location.

Fig. 11

Applications of DHM in single-cell dynamics. Measurement of thickness variations of a neuron cell body in (a) standard conditions (before); (b) hypotonic conditions (3 min after the onset of a hypotonic shock); (c) difference between (b) and (a); (d) refractive-index and thickness decoupling: temporal phase profiles, with solid rectangles at the bottom showing durations with the standard solution and dashed rectangles showing durations with the decoupling solution, for refractive index decoupling with different immersion media; (e) QPIU; (f) raw holographic data; (g) extracted phase profile; and (h) machine learning application based on the holographic phase data in (g) to detect Bacillus anthracis. FR, field retrieval. (a)–(d) Reproduced with permission from Ref. 84, © 2005, Optica; (e)–(h) reproduced with permission from Ref. 85 under CC-BY license, with some rights reserved; reprinted with permission from AAAS.


After replacing the immersion medium with another one having a different refractive index, the phase in the second measurement can be expressed as84

Eq. (33)

$\phi_2^i=\beta_0(\bar{n}_i-[n_s+\Delta n])d_c^i=\phi_1^i-\beta_0\,\Delta n\,d_c^i,$
where $\Delta n$ is the change in refractive index between the two solutions. Solving Eqs. (32) and (33) for $\bar{n}_i$ and $d_c^i$ yields84

Eq. (34)

$d_c^i=\frac{\phi_1^i-\phi_2^i}{\beta_0\,\Delta n}$
and

Eq. (35)

$n_c^i=n_s+\Delta n\,\frac{\phi_1^i}{\phi_1^i-\phi_2^i}.$

Hence, the measured optical phase map can be decoupled into refractive index and thickness maps by immersing the sample in two different media.71,84,86 Other decoupling procedures involve assumptions, such as a spherical cell structure, or involve measurements at two different wavelengths.86,87 An extensive review of refractive index decoupling techniques can be found in Ref. 86.
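Given two phase maps acquired with the two immersion media, Eqs. (34) and (35) decouple thickness and refractive index pixel by pixel. A minimal sketch with illustrative names (phi1 and phi2 are the two phase maps in radians):

```python
import numpy as np

def decouple_thickness_index(phi1, phi2, n_s, delta_n, wavelength):
    """Decouple cell thickness and refractive index from two phase maps acquired with
    immersion media of refractive index n_s and n_s + delta_n [Eqs. (34) and (35)]."""
    beta0 = 2 * np.pi / wavelength
    thickness = (phi1 - phi2) / (beta0 * delta_n)      # Eq. (34)
    n_cell = n_s + delta_n * phi1 / (phi1 - phi2)      # Eq. (35)
    return thickness, n_cell
```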

Self-referencing DHM is an important mode of DHM that can be implemented in both in-line and off-axis configurations.88,89 In this type of holography, a single object beam is divided into object and reference beams, as in DPM, which then interfere to form the hologram. Division of a single beam can be obtained by splitting the object beam on the basis of amplitude,89 polarization,90,91 or a lateral shear,88 or through division of wavefront,88,92 as shown in Figs. 12(a) and 12(e).88 Figures 12(b)–12(d) and 12(f)–12(g) show the corresponding holograms and reconstructed thickness maps for RBCs for the lateral shearing and wavefront division configurations, respectively.88 An example of a QPI technique based on polarization splitting is shown in Figs. 12(h) and 12(l).90,91 Through this instrument, called a quantitative phase imaging unit (QPIU),90,91 phase information can be extracted from the recorded hologram provided the two sheared object and reference beams form nonoverlapping sample images at the detector plane.90,91 Quantitative mean phase measurements of an RBC are shown in Figs. 12(i)–12(k), with Fig. 12(i) showing the mean phase map, Fig. 12(j) showing the height fluctuations represented by a standard deviation map, and Fig. 12(k) showing the temporal fluctuations of the height of the cell, all measured through QPIU.90 Figure 12(m) shows the four phase-shifted interferograms acquired for phase extraction.91 Anand et al.88 provided an informative review of self-referencing DHM techniques. Recently, Kumar et al.93 introduced a common-path, reflection mode, off-axis DHM based on the division of the illumination beam into object (90% of the incident beam) and reference (10% of the incident beam) beams using a plate beam splitter, with a reportedly increased field of view.

QPI through DHM has been used for the identification and characterization of diseases by studying the refractive index distribution of healthy and diseased cells.94 Cell sorting, or classification of cells using quantitative phase data obtained through DHM and data processing techniques, has been discussed in Ref. 95. Phase imaging can be very useful for studying erythrocytes, as the decoupling of refractive index and thickness from the phase measurements is straightforward due to the anucleate structure of these cells. There are several studies involving RBCs in the QPI field, such as differentiation of mature and immature RBCs based on phase measurements,96 measurements of cell membrane dynamics,97 etc. There are also studies that involve observation and quantification of neuron activity by monitoring water transport through membranes and its effect on phase.71,97 DHM has also been used for disease detection and quantification of the corresponding pathophysiological effects, e.g., in sickle cell disease.98

Fig. 12

Self-referencing DHM: (a) lateral shearing DHM; (b) recorded hologram; (c) zoomed-in view of the white box in (b); (d) thickness profile of an RBC measured through (a); (e) wavefront division DHM; (f) recorded hologram; (g) thickness profile of an RBC measured through (e); (h) Wollaston-prism-based polarization-splitting DHM with corresponding measurements of an RBC; (i) mean phase map; (j) characterization of mean phase fluctuations in terms of standard deviation; (k) temporal phase fluctuations in various parts of the RBC [indicated by dots in panels (i) and (j)] as compared to the background; (l) beam-displacer-based white-light DHM; and (m) corresponding four phase-shifted interferograms. (a)–(g) Reproduced from Ref. 88 under CC BY license, with subfigure (a) adapted with permission from Ref. 99, © 2012, Optica, and Ref. 100, © 2017, Optica; and subfigure (e) adapted with permission from Ref. 101, © 2012, Optica. (h)–(k) Reproduced with permission from Ref. 90, © 2014, Optica. (l), (m) Reproduced with permission from Ref. 91, © 2016, Optica.


Algorithms for numerical reconstruction, aberration correction, and image analysis have been developed alongside DHM hardware. Machine learning is another research direction that is quickly being incorporated into the microscopy field. As an example, machine learning algorithms were employed in Ref. 102 to automatically differentiate RBCs from two different organisms and to diagnose sickle cell disease in patients; that study demonstrated high accuracy in both classification tasks. Kim,14 Xu et al.,58 Marquet et al.,71 Lee et al.,81 Dardikman and Shaked,86 and Kemper et al.103 reported more examples of biological applications of DHM. Another important example relates to biowarfare security.85 In the study of Jo et al.,85 holographic data [Figs. 11(f) and 11(g)] obtained using QPIU90,91 [Fig. 11(e)] were combined with deep learning to train a network to detect and classify Bacillus anthracis spores from other cells within the Bacillus species, as shown in Fig. 11(h).85

Hilbert phase microscopy

HPM is a QPI technique that is based on off-axis digital holography.13 The optical setup is shown in Fig. 13(a).104 Light from the laser (HeNe, 632.8 nm) is coupled into a single-mode fiber and fed into a 1×2 fiber splitter that splits the beam into two paths. The object beam is collimated, passes through the sample, is collected by the objective lens, and is imaged onto the CCD plane by the tube lens. The other half of the original beam traverses a separate path, where it is collimated and focused by a lens to a point source, which is then Fourier transformed by the tube lens to form a plane wave at the CCD plane. This beam, which is slightly tilted with respect to the optical axis, serves as the reference beam for forming the interferogram.

Fig. 13

HPM: (a) setup and (b) phase measurements of a whole blood smear, with temporal and spatial optical pathlength sensitivity measurements corresponding to one point and all points, respectively, for the region inside the square marked O in the phase map. Cell topography and membrane MSD curves for (c), (e) a discocyte and (d), (f) an echinocyte. (g), (h) Cellular lysis, with volumetric and optical pathlength fluctuation measurements. Refractive index decoupling: (i) contour image and thickness measurements of a HeLa cell from reflectance confocal microscopy, (j) phase measurements of the same cell as in (i) from HPM, and (k) decoupled refractive index distribution of the same cell as in (i). (a), (b), (g), and (h) Reproduced with permission from Ref. 104, © 2005, SPIE; (c)–(f) reproduced with permission from Ref. 105, © 2006, Optica; and (i)–(k) reproduced with permission from Ref. 106, © 2009, American Chemical Society.


The interferometric image captured by the CCD can be expressed as3,13

Eq. (36)

I(\mathbf{r}) = I_0 + I_1(\mathbf{r}) + 2\sqrt{I_0 I_1(\mathbf{r})}\,\cos[\alpha x + \phi(\mathbf{r})],
where I_0 is the reference irradiance, I_1 is the object irradiance, r = (x, y), and α is the spatial modulation frequency set by the angular offset between the reference and object beams. In Eq. (36), ϕ(r) is the quantity of interest, i.e., the spatially dependent phase perturbation induced by the object. The Fourier transform of this interferogram contains three peaks: one representing the zero-frequency information and two corresponding to the cosine term. To extract the cosine term, the interferogram is high-pass filtered. This cosine term represents the real part of the complex analytic signal e^{i[αx + ϕ(r)]}. The imaginary part is related to the real part through a Kramers–Kronig-type relationship and can be calculated from it using the Hilbert transform as3,13

Eq. (37)

\mathrm{Im}\left(e^{i[\alpha x + \phi(\mathbf{r})]}\right) = \frac{1}{\pi}\, P \int_{-\infty}^{\infty} \frac{\mathrm{Re}\left(e^{i[\alpha x' + \phi(x',\,y)]}\right)}{x - x'}\, dx'.

After the extraction of the full complex analytic signal, the total phase can be retrieved as φ = αx + ϕ(r) = tan⁻¹[Im(e^{i[αx + ϕ(r)]})/Re(e^{i[αx + ϕ(r)]})], and the phase information from the object can be calculated as ϕ(r) = φ − αx.13,104 Since HPM requires only a single-shot measurement, the acquisition speed is remarkably high and is limited only by the camera frame rate. This high acquisition speed (ms) and quantitative measurement of optical pathlengths (nm) make HPM suitable for fast dynamic observations, such as membrane dynamics.104 As discussed earlier, decoupling of thickness and refractive index is straightforward in the case of RBCs because of their anucleate structure, which makes the cell essentially a homogeneous solution of hemoglobin. Figure 13(b) shows the phase map of a collection of RBCs placed between two thin glass coverslips.104 The temporal and spatial sensitivities of HPM were reported to be 1.2 and 1.73 nm, respectively.104 Note that this is not a common-path instrument; to improve the temporal stability, another HPM geometry was reported in Ref. 105, where a negative-feedback stabilization loop corrects for the vibration-induced orientation shifts of the interfering beams. The ability of HPM to quantify cell membrane fluctuations was shown in Ref. 105, where the cell topography for two types of RBCs (normal discocyte and spiculated echinocyte) is shown in Figs. 13(c) and 13(d). The time-lapse phase images of the two cell types were acquired for 1000 instances with a delay of 10.3 ms between two consecutive acquisitions. The mean squared displacement (MSD) was calculated, and Figs. 13(e) and 13(f) show the inverse-square dependence of the MSD on the wavenumber q.105 These results hint at the existence of a deterministic component in the membrane fluctuations, which had otherwise been assumed to be purely thermal (and hence random) in origin, owing to the previous lack of such nanometer-precision measurement techniques.3,105
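The retrieval procedure described above (background suppression, analytic-signal construction via the Hilbert transform, and carrier removal) can be sketched in a few lines of NumPy/SciPy. The following is a minimal, illustrative implementation; the function and variable names are ours, and a practical version would handle fringe-order filtering, windowing, and phase unwrapping more carefully:

```python
import numpy as np
from scipy.signal import hilbert

def hpm_phase(interferogram, alpha, dx=1.0):
    """Single-shot phase retrieval in the spirit of HPM [Eqs. (36) and (37)]:
    suppress the slowly varying background, build the analytic signal via a
    Hilbert transform along x, and remove the tilt carrier alpha*x."""
    ac = interferogram - interferogram.mean(axis=1, keepdims=True)  # rough high-pass
    analytic = hilbert(ac, axis=1)             # Re + i*Im of the fringe term
    wrapped = np.angle(analytic)               # alpha*x + phi(r), wrapped
    x = np.arange(interferogram.shape[1]) * dx
    phi = np.unwrap(wrapped, axis=1) - alpha * x   # object phase phi(r)
    return phi

# Synthetic check: a flat 0.5 rad phase object on a 0.8 rad/pixel carrier
ny, nx = 64, 256
x = np.arange(nx)
interferogram = 2 + 2 * np.cos(0.8 * x[None, :] + 0.5) * np.ones((ny, 1))
phi_rec = hpm_phase(interferogram, alpha=0.8)   # ~0.5 rad away from the edges
```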

In addition to membrane dynamics, many other cellular phenomena, such as lysis, can be observed and quantified by HPM. An example of an RBC undergoing lysis acquired through HPM is shown in Fig. 13(g), and the associated changes in the cell volume, calculated from the thickness measurements, are indicated by the blue curve in Fig. 13(h).104 This curve shows that, during lysis, the volume of the cell decreases as it empties its hemoglobin content into the surrounding medium. The optical pathlength at a neighboring point is also shown in the same plot (red curve), and it increases over time due to the hemoglobin released from the cell.104

Simultaneous thickness measurements from contour maps [Fig. 13(i)] acquired with a reflectance confocal microscope and phase measurements [Fig. 13(j)] from HPM aid the decoupling process, so that the axially averaged refractive index map [Fig. 13(k)] of any cell can be extracted, as discussed in Ref. 106.

HPM has also been applied to studying the scattering properties of tissues.107 When the thickness of a tissue slice is known, one can infer the axially averaged refractive index from the phase maps. Using HPM, it was shown that the axially averaged refractive index distribution varies between the organs from which the tissue slices originated, with brain tissue having the lowest mean refractive index and liver the highest.107
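Explicitly, for a slice of known thickness L mounted in a medium of refractive index n_0 (our notation; n_0 denotes the index of the mounting medium), the measured phase map converts to an axially averaged index map through the linear relation

\bar{n}(x, y) \approx n_0 + \frac{\lambda\, \phi(x, y)}{2\pi L},

so that the per-organ comparison reduces to comparing linearly rescaled phase distributions.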

2.2.4.

Fourier transform light scattering

Light scattering measurements are divided into two categories: ELS (discussed here) and quasielastic or dynamic light scattering (DLS) (discussed in part 2 of this review).15 Elastic light scattering measurements provide information about the structure of the static objects under study, as they inform on the spatial distribution of refractive index. On the other hand, DLS measurements can provide information about the temporal dynamics of the sample because they are sensitive to the diffusion coefficient of an ensemble of particles.15 Existing light scattering methods suffer either from low spatial resolution, because the scattering properties are estimated from a group of particles, or from low throughput, in the case of particle tracking measurements.15

FTLS is a technique that combines the high spatial resolution of image-plane measurements in optical microscopy with the intrinsic averaging of scattering measurements to measure elastic and dynamic light scattering from biological samples.15 It can be regarded as a spatial-domain analog of Fourier-transform infrared spectroscopy, in the sense that the measurement is performed in the image (spatial) domain and all scattering angles are retrieved simultaneously via a Fourier transform.3

The complex field is measured at the image plane using DPM (for details see “Diffraction phase microscopy” section) and is then propagated to the Fourier or scattering plane to determine the scattered intensity distribution.15

Following the reconstruction process in DPM, as discussed previously in the "Diffraction phase microscopy" section, the complex field U(r, t), comprising amplitude and phase, is extracted from a single interferometric intensity measurement. Propagation of this extracted complex field to the scattering plane can be realized through a spatial Fourier transformation:3

Eq. (38)

\tilde{U}(\mathbf{q}, t) = \int U(\mathbf{r}, t)\, e^{-i\mathbf{q}\cdot\mathbf{r}}\, d^2 r,
where the 2D integration is performed over the field of view.

Consider a group of N similar particles distributed in a finite volume, as shown in Figs. 14(a) and 14(b).15 These particles are assumed to undergo random Brownian motion such that their positions are time dependent and mutually uncorrelated. This dynamic distribution of particles can be summarized as3

Eq. (39)

U(\mathbf{r}, t) = U_F(\mathbf{r}, t) \circledast_{\mathbf{r}} \sum_{i=1}^{N} \delta[\mathbf{r} - \mathbf{r}_i(t)],
where U_F(r, t) is the field due to a single particle, r_i(t) is the time-dependent position of the i'th particle, and ⊛_r denotes spatial convolution. The scattered field is then3,15

Eq. (40)

\tilde{U}(\mathbf{q}, t) = \tilde{U}_F(\mathbf{q}, t)\, \tilde{U}_S(\mathbf{q}, t),
where Ũ_F(q, t) is called the form function, which depends on the shape of a single particle and thus represents the slowly varying spatial envelope of the total scattered field. The quantity Ũ_S(q, t) = Σ_{i=1}^{N} e^{−iq·r_i(t)} is the structure function, which describes the contribution of the particle positions to the scattered field.3,15
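To make the factorization in Eq. (40) concrete, the following minimal NumPy sketch builds the scattered intensity of N identical beads as the product of an (assumed Gaussian) form function and the structure function; all numerical values and the Gaussian particle profile are illustrative assumptions, not taken from Ref. 15:

```python
import numpy as np

rng = np.random.default_rng(0)
N, a = 20, 1.5e-6                              # number of beads, Gaussian "radius" (assumed)
pos = rng.uniform(-30e-6, 30e-6, (N, 2))       # random, uncorrelated positions r_i

q = np.linspace(0, 2e6, 500)                   # scattering wavevector magnitudes [rad/m]
qx, qy = q, np.zeros_like(q)                   # sample along one direction in the q plane

# form function: Fourier transform of the (assumed Gaussian) single-particle field
U_F_q = np.exp(-0.5 * (q * a) ** 2)

# structure function: sum of phase factors over the particle positions
U_S_q = np.exp(-1j * (np.outer(qx, pos[:, 0]) + np.outer(qy, pos[:, 1]))).sum(axis=1)

I_q = np.abs(U_F_q * U_S_q) ** 2               # scattered intensity vs q, Eq. (40)
```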

Fig. 14

Fourier transform light scattering (FTLS): (a) amplitude measurement; (b) phase measurement; (c) spatial Fourier spectra of the complex field; (d) scattered intensity versus scattering angle curves obtained from FTLS for varying number of beads as compared to the calculated curve from Mie calculations; and (e) Spectrum of fluctuations in scattering intensity for a system of beads in water for characterizing diffusion coefficient and viscosity. Extraction of scattering parameters through FTLS of rat organ tissue slice: (f) scattering mean-free path ls, (g) scattering anisotropy, (h) transport mean-free path lt. (i), (j) Phase map and angular scattering results obtained by applying FTLS on rat mammary tumor tissue slice sample over broad range of scattering angles. (a)–(e), (i), and (j) Reproduced with permission from Ref. 15, © 2008, American Physical Society; (f)–(h) reproduced with permission from Ref. 108, © 2009, Optica.

JBO_29_S2_S22713_f014.png

Time-dependent studies of dynamic systems involve time-lapse image acquisition. The squared magnitude of the temporal Fourier transform of Eq. (40) yields the power spectral density15

Eq. (41)

P(\mathbf{q}, \omega) = \left| \int \tilde{U}(\mathbf{q}, t)\, e^{-i\omega t}\, dt \right|^2.

Since the fluid dynamics can be described by a Lorentzian function of the form 1/[1 + (ω/Dq²)²], with spectral bandwidth Dq² and diffusion constant D = k_B T/(4πηr_0), where η is the viscosity and r_0 is the particle diameter,15 the diffusion coefficient, and hence the viscosity, can be obtained by fitting the acquired power spectral density data with a Lorentzian curve.
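In practice, this fit is a standard nonlinear least-squares problem. A minimal SciPy sketch is given below; the synthetic spectrum, the chosen q value, and the temperature are illustrative assumptions only:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(omega, amplitude, gamma):
    """Lorentzian spectrum with half-width gamma = D*q^2."""
    return amplitude / (1.0 + (omega / gamma) ** 2)

# Synthetic spectrum at one fixed q (illustrative values only)
omega = np.linspace(0, 200.0, 400)                       # rad/s
psd = lorentzian(omega, 1.0, 25.0) \
    + 0.01 * np.random.default_rng(1).normal(size=omega.size)

popt, _ = curve_fit(lorentzian, omega, psd, p0=[1.0, 10.0])
q = 1.0e6                                                # rad/m, spatial frequency of this spectrum
D = popt[1] / q ** 2                                     # diffusion coefficient from gamma = D*q^2
kB, T, r0 = 1.380649e-23, 293.0, 3e-6                    # Boltzmann constant, temperature, bead diameter
eta = kB * T / (4 * np.pi * D * r0)                      # viscosity from D = kB*T/(4*pi*eta*r0)
```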

An experimental study based on the above principle for a system of multiple 3 μm beads was reported in Ref. 15. There, DPM was used to measure the amplitude and phase information, as shown in Figs. 14(a) and 14(b).15 Background subtraction was performed by measuring a sample-free area and subtracting it from the acquired images. Computing the spatial Fourier transform [Fig. 14(c)] of the measured complex field and averaging along contours of constant q, where q = 2β_0 sin(θ/2), gives the scattered intensity at every scattering angle θ.15 This scattered intensity, after normalization, is known as the scattering phase function. Figure 14(d) shows the plot of scattered intensity versus scattering angle for varying numbers of beads in the field of view.15 To assess the accuracy of FTLS measurements, calculations based on Mie theory for 3 μm beads were performed in Ref. 15 and plotted alongside the measured scattered intensity. As evident in Fig. 14(d), the oscillations predicted by Mie theory agree well with the FTLS measurements for varying numbers of beads.15 These results showed that FTLS is sensitive even to single-particle scattering, as there is a clear distinction between the background FTLS signal and a single-bead FTLS signal.15 Measurements of a dynamic system of beads in water are shown in Fig. 14(e),15 where the power spectral density P(q, ω) is fitted with the Lorentzian function to extract the diffusion coefficient D and the viscosity η.
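The numerical propagation and angular averaging described here can be sketched as follows; this is a simplified illustration (function name, binning scheme, and sampling values are ours), assuming a complex image-plane field U sampled on a uniform pixel grid:

```python
import numpy as np

def ftls_phase_function(U, pixel_size, wavelength, nbins=200):
    """Propagate a measured image-plane complex field to the scattering plane
    [Eq. (38)] and average over rings of constant |q| to obtain the scattered
    intensity versus scattering angle theta, with q = 2*beta0*sin(theta/2)."""
    U_q = np.fft.fftshift(np.fft.fft2(U))
    I_q = np.abs(U_q) ** 2
    ny, nx = U.shape
    qx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size))
    qy = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size))
    Q = np.hypot(*np.meshgrid(qx, qy))
    beta0 = 2 * np.pi / wavelength
    theta = 2 * np.arcsin(np.clip(Q / (2 * beta0), 0, 1))
    bins = np.linspace(0, theta.max(), nbins + 1)
    idx = np.digitize(theta.ravel(), bins)
    sums = np.bincount(idx, weights=I_q.ravel(), minlength=nbins + 2)
    counts = np.bincount(idx, minlength=nbins + 2)
    angles = 0.5 * (bins[:-1] + bins[1:])
    return angles, sums[1:nbins + 1] / np.maximum(counts[1:nbins + 1], 1)

# e.g., for a DPM field U sampled at 0.1 um/pixel and 532 nm illumination:
# theta, p_theta = ftls_phase_function(U, 0.1e-6, 532e-9)
```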

Experimental estimation of scattering parameters from measured FTLS data was demonstrated in Ref. 108. Considering that the attenuation of the field after passing through a thin tissue slice is due to scattering, the scattering mean free path l_s can be extracted from the FTLS measurements by applying the Lambert–Beer law, I_0(L) = I_0 e^{−L/l_s}, where L is the thickness of the tissue slice and I_0 is the total incident irradiance, i.e., the sum of the scattered and unscattered irradiance.108 The quantity I_0(L) denotes the unscattered irradiance after traversing the tissue and can be calculated by integrating the scattered intensity over the DC, or diffraction, spot.108 To determine the scattering anisotropy g, as defined in Sec. 2.2.2, direct application of Eq. (17) to the full thickness of the tissue is not valid. To determine the appropriate g, the complex field can be propagated through N = l_s/L layers of tissue slices such that the normalized angular scattering distribution, or phase function, can be expressed as108

Eq. (42)

p(\mathbf{q}) \propto \left| \int [U(\mathbf{r})]^{N}\, e^{-i\mathbf{q}\cdot\mathbf{r}}\, d^2 r \right|^2.

Using Eq. (42), g can be calculated as in Eq. (16).

The range of measured angular spectrum data is limited by the NA of the imaging system (the maximum scattering wavevector q that can be captured and detected by the system), which can result in inaccurate estimates of g. For this reason, Ding et al.108 used Gegenbauer-kernel phase function curve fitting for the measured scattering anisotropy data.108 Aided by the measurements of l_s and g, the transport mean free path l_t, which determines the fate of light after traversing a highly scattering medium with multiple scattering events, can be calculated according to the relation l_t = l_s/(1 − g). Measurements of the scattering parameters l_s, g, and l_t for tissue sections from different rat organs are shown in Figs. 14(f)–14(h).108 Although the maximum angle is dictated by the optics of the measurement system, FTLS is still able to characterize the scattering behavior of thick tissue slices starting from very low angles, as shown in Figs. 14(i) and 14(j), where angular scattering measurements from angles as low as 0.01 deg were made.15 This broad range of about three decades of angular scattering data makes FTLS a unique tool for measuring scattering parameters of entire organ tissue slices.15
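Putting these steps together, a rough NumPy sketch of the l_s, g, and l_t estimation could read as follows. It is only illustrative: the handling of the unscattered (DC) component, the anisotropy estimate as an intensity-weighted ⟨cos θ⟩ over the computed phase function (standing in for Eq. (16)), and all parameter values are simplifying assumptions on our part:

```python
import numpy as np

def scattering_parameters(U, L, pixel_size, wavelength, dc_radius_px=3):
    """Rough estimates of l_s, g, and l_t from the complex field U transmitted
    through a tissue slice of thickness L, following the FTLS recipe above."""
    F = np.fft.fftshift(np.fft.fft2(U))
    I_q = np.abs(F) ** 2
    ny, nx = U.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    dc_mask = (yy - ny // 2) ** 2 + (xx - nx // 2) ** 2 <= dc_radius_px ** 2
    # Lambert-Beer on the unscattered (DC) part: I_0(L) = I_0 * exp(-L / l_s)
    l_s = -L / np.log(I_q[dc_mask].sum() / I_q.sum())
    # anisotropy from the phase function of N = l_s / L propagated layers, Eq. (42)
    N = max(int(round(l_s / L)), 1)
    p_q = np.abs(np.fft.fftshift(np.fft.fft2(U ** N))) ** 2
    qx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size))
    qy = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size))
    Q = np.hypot(*np.meshgrid(qx, qy))
    theta = 2 * np.arcsin(np.clip(Q / (2 * (2 * np.pi / wavelength)), 0, 1))
    g = (p_q * np.cos(theta)).sum() / p_q.sum()   # intensity-weighted <cos(theta)>
    l_t = l_s / (1 - g)                           # transport mean free path
    return l_s, g, l_t
```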

3.

Applications: Cell and Tissue Scattering, Scattering Parameters as Markers of Disease

QPI has found tremendous success in addressing biomedical applications. This success can be attributed to the high phase sensitivity, label-free operation, and noninvasive nature of QPI.4,34,109 Because the dry mass density is linearly proportional to the phase map, the phase obtained through QPI has enabled applications such as monitoring the growth of cells,50,110–117 understanding brain and neuronal networks,113,118–124 measurement of cellular dynamics,50,115,116,119,123,125–129 organelle detection and characterization,130 cancer tissue pathology,131–134 virology,33,135 reproductive science,136–141 etc. In addition to the applications discussed with each QPI configuration in Sec. 2, here we present some additional, general applications of QPI in the study of cells and tissues.
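The phase-to-dry-mass conversion mentioned here is commonly written as σ(x, y) = λ ϕ(x, y)/(2π γ), with γ the refraction increment of protein, typically taken as about 0.2 mL/g. The value of γ and the code below are our illustration of this general relation, not specific to the cited studies:

```python
import numpy as np

def dry_mass_density(phase_map, wavelength, gamma=0.2e-3):
    """Surface dry-mass density [kg/m^2] from a phase map [rad], using
    sigma = lambda * phi / (2 * pi * gamma), with the refraction increment
    gamma ~ 0.2 mL/g = 0.2e-3 m^3/kg (a commonly assumed value)."""
    return wavelength * phase_map / (2 * np.pi * gamma)

# Total dry mass of a cell: integrate the surface density over the cell area,
# e.g., mass = dry_mass_density(phi, 532e-9).sum() * pixel_area
```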

3.1.

Cell Sorting Through FTLS

Cells can be differentiated from each other using their angular scattering signatures. FTLS provides high sensitivity for measuring scattering signals from very weak scatterers, such as a neurite [Figs. 15(a) and 15(b)],9,142 and can cover a large range of angles, limited only by the optics of the system [Figs. 15(c) and 15(d)].15 Ding et al.142 proposed a cell sorting technique based on FTLS measurements of phase functions. Figures 15(e)–15(g) show the measured phase functions for three different cell types: RBCs, C2C12 cells, and neurons, respectively.142 Principal component analysis (PCA) was performed on the angular scattering data for 15 measurements per cell type [Figs. 15(h)–15(j)], and the results are shown in Figs. 15(k) and 15(l).142 There is a clear distinction between the scattering signals from the three cell types. Ding et al.142 reported sensitivity/specificity values of 1/1, 1/0.88, and 0.7/1 for RBCs, neurons, and C2C12 cells, respectively. These results, though obtained on a relatively simple classification problem, illustrate the benefit of intrinsic markers as compared to flow-cytometry measurements, which require fluorescently tagged cells.142
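As an illustration of this type of analysis (not the exact pipeline of Ref. 142), angular scattering signatures can be projected onto a few principal components and classified in that low-dimensional space; the data below are random placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# X: (n_cells, n_angles) matrix of angular scattering signatures; y: cell-type labels.
# Random placeholders stand in for 15 FTLS measurements of each of 3 cell types.
rng = np.random.default_rng(0)
X = rng.normal(size=(45, 180))
y = np.repeat(["RBC", "C2C12", "neuron"], 15)

pca = PCA(n_components=2)                  # project the signatures onto 2 PCs
scores = pca.fit_transform(X)
clf = KNeighborsClassifier(n_neighbors=3).fit(scores, y)   # classify in PC space
```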

Fig. 15

FTLS applications: high-dynamic-range angular scattering measurements from parts of cells, such as (a), (b) neurites, to (c), (d) whole cells. Cell sorting: phase maps and scattering phase functions for (e), (h) RBCs; (f), (i) C2C12 cells; and (g), (j) neurons. (k), (l) PCA results for cell sorting based on angular scattering measurements through FTLS. (a), (b), (e)–(l) Reproduced with permission from Ref. 142, © 2010, IEEE, with (a) reproduced with permission from Ref. 9, © 2011, Optica. (c), (d) Reproduced with permission from Ref. 15, © 2008, American Physical Society.


3.2.

Detection of Cancer in Tissues

The standard method of detecting an abnormality in a biopsy is based on the inspection of stained tissue by a trained pathologist. Variations in tissue preparation can lead to inconsistencies in inference by different pathologists, making this a potentially subjective procedure. This drawback is associated with every type of sample staining method. To avoid such variability and to infer the presence of disease accurately, quantitative markers derived from intrinsic properties of tissues, such as the refractive index distribution, are of importance. Since no human intervention is required, such quantitative markers can be used in conjunction with modern algorithms for both diagnosis and prognosis of the disease.132,143–146

The basis for such quantitative markers derived from QPI is that disease onset and development are accompanied by changes in tissue refractive index, which, in turn, induce changes in light scattering.145–147 QPI measurements provide an optical pathlength difference map that is highly sensitive, on the order of nanometers.9 For a histopathology tissue slice, the thickness is generally known (2 to 4 μm). Thus, the optical pathlength map directly corresponds to the refractive index distribution integrated along the beam path in the sample.146 Since there is a direct linear relationship between the observed phase difference map and the dry mass density of the sample,3 QPI informs not only on the refractive index distribution but also on the density or inhomogeneity of the tissue.146,147

One of the quantitative markers for cancer detection is the tissue disorder strength.147 In Ref. 147, a study involving quantitative imaging and analysis of a tissue microarray from 400 patients, containing both benign and malignant tissue samples, was presented. For each core, two adjacent slices were imaged, one stained with H & E and one unstained. From the QPI measurements, the tissue disorder strength was quantified as147

Eq. (43)

L_d(x, y) = \langle \Delta n(x, y)^2 \rangle_w\, l_c.

Here, L_d(x, y) is the tissue disorder strength, ⟨Δn(x, y)²⟩_w is the windowed spatial variance of the refractive index, and l_c is the spatial autocorrelation length.

From the measured phase map Δϕ(x, y), ⟨Δn(x, y)²⟩_w can be calculated as147

Eq. (44)

\langle \Delta n(x, y)^2 \rangle_w = \frac{\langle \Delta \phi(x, y)^2 \rangle_w}{(2\pi L / \lambda)^2}.

In Eq. (44), ⟨Δϕ(x, y)²⟩_w denotes the windowed spatial variance of the measured phase difference. From Eq. (43), the disorder strength can be expressed as147

Eq. (45)

L_d(x, y) = \frac{\langle \Delta \phi(x, y)^2 \rangle_w}{\langle \phi(x, y) \rangle_w^2}\, n_m^2\, l_c,
where ⟨ϕ(x, y)⟩_w² = (2πL/λ)² n_m² is the squared windowed spatial mean of the phase difference, and n_m = ⟨n(x, y)⟩_w is the windowed mean refractive index in the sample window. As suggested by Eq. (45), another parameter needed to characterize the tissue disorder strength is the spatial autocorrelation length, which can be extracted from the autocorrelation function of the measured phase.
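A compact numerical recipe for Eqs. (43) and (44), given a phase map, a known slice thickness L, and a window size, might look as follows; this is our sketch, and the exact windowing, background handling, and correlation-length estimator used in Ref. 147 may differ:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disorder_strength(phase_map, wavelength, L, l_c, window=32):
    """Disorder-strength map L_d(x, y) from a phase map [Eqs. (43) and (44)]:
    windowed spatial variance of the refractive index times the correlation
    length l_c (supplied externally here)."""
    mean = uniform_filter(phase_map, size=window)
    var_phi = uniform_filter(phase_map ** 2, size=window) - mean ** 2  # <dphi^2>_w
    var_n = var_phi / (2 * np.pi * L / wavelength) ** 2                # Eq. (44)
    return var_n * l_c                                                 # Eq. (43)

def correlation_length(phase_map, pixel_size):
    """Correlation length l_c: distance at which the normalized autocorrelation
    of the phase drops to 1/e of its maximum (computed via Wiener-Khinchin)."""
    p = phase_map - phase_map.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(p)) ** 2).real
    acf = np.fft.fftshift(acf) / acf.max()
    cy, cx = np.array(acf.shape) // 2
    profile = acf[cy, cx:]                     # 1D cut from zero lag outward
    below = np.where(profile < 1 / np.e)[0]
    return (below[0] if below.size else profile.size) * pixel_size
```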

Results of the study in Ref. 147 are shown in Fig. 16. Figure 16(a) shows the phase map obtained from SLIM for a tissue biopsy slice of thickness 4 μm.147 Figure 16(b) shows the normalized autocorrelation Γ(x, y) of the measured phase difference ϕ(x, y).147 Figure 16(c) presents the 1D plot of Fig. 16(b), where the spatial correlation length is defined as the distance (x axis) over which the autocorrelation falls to 1/e of its maximum value.147 Based on the expressions above, the disorder strength map was extracted in Ref. 147 from the SLIM measurements and is shown in Fig. 16(d) for benign and Fig. 16(e) for malignant tissues.147 It is evident that the tissue disorder strength is higher in the malignant tissue slice. A statistical justification of this observation is presented in Fig. 16(f),147 where the bar graph shows the measured tissue disorder strengths for 20 benign and 20 malignant tissues. The difference between the two groups was statistically significant, with a p-value of 0.0066 obtained by applying a two-sided Wilcoxon rank-sum test to the data.147 The implication of Fig. 16(f) is that the tissue disorder strength is a potentially significant intrinsic marker for malignancy in tissues.147

Fig. 16

Disorder strength as a cancer marker: (a) tissue biopsy core phase map through SLIM; (b) spatial autocorrelation of phase measurements in (a); (c) 1D plot of (b) for lc extraction; (d) disorder strength map for benign tissue; (e) disorder strength map for malignant tissue; and (f) statistically significant difference in the tissue disorder strength between benign and malignant tissue samples. (a)–(f) Reproduced from Ref. 147 under CC BY license.


Comparing Eq. (45) (for disorder strength) and Eq. (12) (for scattering mean free path), the scattering mean free path and the disorder strength are inversely proportional to each other.147 This implies that the higher the disorder strength, the lower the scattering mean free path and, hence, the stronger the scattering. The stronger scattering also indicates a high degree of inhomogeneity in the refractive index of malignant tissues.147

Another intrinsic cancer marker is the local spatial autocorrelation length.146 Figure 17 shows that the local spatial autocorrelation length [full and thresholded maps in Figs. 17(b) and 17(c), respectively], derived from phase measurements [Fig. 17(a)], can be employed to detect and differentiate between benign tissue [first column, Figs. 17(a)–17(c)] and different grades of malignant tissue [remaining three columns, Figs. 17(a)–17(c)].146 The boxplot [Fig. 17(d)] and the statistical significance test results [Fig. 17(e)] show that, except for three cases [benign–malignant (G1), malignant (G1)–malignant (G2), and malignant (G2)–malignant (G3)], all remaining combinations are significantly different.146

Fig. 17

Tissue spatial correlation length as cancer marker: (a) phase measurements through SLIM for benign and three grades of malignant tissue cores; (b) local autocorrelation length maps; (c) thresholded local autocorrelation length maps; (d), (e) differentiation of benign versus malignant and differentiation of grades of malignancy based on local autocorrelation lengths with their respective p values as an indicator of success or failure per case. (a)–(e) Reproduced from Ref. 146 under CC BY license.


3.3.

Assessment of Injuries

Like the gold-standard detection methods for cancer, the evaluation of acute kidney injury also requires the examination of H & E-stained tissue sections by a trained physician. Current imaging modalities are limited either by spatial resolution (MRI, PET) or by the need for extensive staining or sample preparation (μCT, fluorescence imaging, etc.).148 Understanding the scattering properties of injured kidney tissue and extracting intrinsic markers to study disease progression through high-spatial-resolution QPI techniques can address the shortcomings of the previously mentioned modalities. In Ref. 148, DPM was employed as a QPI instrument to extract the phase, and hence the scattering properties, associated with thin (5 μm) injured kidney tissue slices.

3.4.

Assessment of Type and Grade of Breast Cancer

Studies indicate that observable changes occur in the extracellular matrix, or stroma, of diseased tissues as the tumor progresses.149,150 The arrangement of collagen fibers in the extracellular matrix is an indicator of the progression of the tumor stage.151 Since collagen fibers have a strong second-order nonlinear susceptibility χ(2), second harmonic generation microscopy (SHGM) has been the most popular tool for assessing the fibers in relation to tumor growth. However, SHGM cannot provide a holistic view of the biopsy because centrosymmetric structures, such as epithelial tissues, do not produce a second-harmonic signal.

Thus, a modality that can detect and quantify both centrosymmetric and noncentrosymmetric structures is much more desirable. It was shown in Ref. 151 that SLIM can generate the orientation map of collagen fibers with accuracy comparable to SHGM measurements. In addition, SLIM can provide cellular information for centrosymmetric structures that SHGM cannot. SLIM also has advantages over SHGM in terms of hardware requirements and ease of measurement. While SHGM requires expensive and bulky femtosecond laser sources with inherently high peak powers, SLIM operates with broadband white-light sources, such as halogen lamps or LEDs, which are amenable to observation over longer durations since the delivered illumination power is much less than the damage threshold of the sample under study. Furthermore, SLIM is a widefield measurement method, as compared to the point-scanning SHGM modality, which in turn increases the throughput.151

To demonstrate that SLIM can provide information about collagen fibers similar to that from SHGM, a combined SLIM, H & E, and SHGM investigation of a 24-core tissue microarray (TMA), with 8 benign and 16 malignant cores of varying grades, was carried out in Ref. 151. To quantify the fiber orientations from SLIM images, isotropic contributions from centrosymmetric structures, such as epithelial tissues, were segmented out using a set of filters and image gradient-based segmentation algorithms, as discussed in Ref. 151. Figures 18(a)–18(c) show the SLIM images (top row) for two different regions of the same core [Fig. 18(b)], highlighting mostly anisotropic structures in the first column [Fig. 18(a)] and mixed structures in the third column [Fig. 18(c)].151 Figures 18(d)–18(f)151 show the segmented SLIM images in which the fibers are recovered, comparable to the SHGM images in Figs. 18(g)–18(i).151 The presence of isotropic structures in SLIM [Figs. 18(a)–18(c)], such as cells and epithelial tissue, is confirmed through the H & E images in Figs. 18(j)–18(l).151
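A generic way to obtain fiber orientation maps and to suppress isotropic regions from a phase image is the structure tensor; the sketch below is our illustration of this general approach and is not the specific filter and segmentation pipeline of Ref. 151:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fiber_orientation_and_coherence(phase_img, sigma=4):
    """Structure-tensor sketch: local orientation and a coherence measure that
    can be thresholded to suppress isotropic (e.g., epithelial) regions."""
    gx, gy = sobel(phase_img, axis=1), sobel(phase_img, axis=0)
    Jxx = gaussian_filter(gx * gx, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    orientation = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)        # local fiber angle
    coherence = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2) / (Jxx + Jyy + 1e-12)
    return orientation, coherence    # coherence ~0: isotropic, ~1: fibrous
```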

Fig. 18

SLIM application for grading cancer based on collagen information. SLIM images (first row), segmented SLIM images (second row), SHGM images (third row) and H&E images (fourth row) of (a), (d), (g), (j) stromal tissue region; (b), (e), (h), (k) full core; and (c), (f), (i), (l) stromal and epithelial tissue regions. (a)–(l) Reproduced with permission from Ref. 151, © 2017, SPIE.


3.5.

Cellular and Organelle Investigations, Integration with Machine Learning

QPI has been used extensively for characterizing cellular and organelle perturbations. A few examples include the characterization of the effect of bacteria on human primary T-cells using time-lapse DHM,152 where the T-cells displayed changes in morphology (area and circularity) and mean phase contrast in response to the bacterial stress, indicating the utility of QPI in assessing host-pathogen interplay in a label-free manner. QPI has also been employed to study intracellular components, such as lipid droplets in live microalgal culture cells.153 Giugliano et al.154 recently demonstrated the ability of QPI to detect and characterize lysosomal compartments in different populations of mouse embryonic fibroblasts. Such a characterization of lysosomal compartments can provide meaningful information about lysosomal storage diseases. Recently, QPI techniques have gained enhanced cellular and subcellular specificity with the rapid advent of deep learning.8 These computationally enhanced QPI techniques are making significant advances from basic science (label-free detection and characterization of intracellular organelles,155 study of embryos for health grading140), to pathology (detection of hematologic disorders156), and toward clinical applications (detection and classification of cancer cells157).

4.

Conclusions

In this paper, we reviewed the state of the art in QPI techniques for biomedical applications. We started with an explanation and mathematical description of phase measurements for thin tissue slices in both the forward- and backscattering cases and reviewed 2D scattering phase theory. A detailed review of 2D phase imaging techniques based on holographic and interferometric principles of measurement was provided, and Fourier transform light scattering was reviewed. Toward the end, we discussed several applications of these scattering measurement techniques for characterizing cells and tissues.

Disclosures

G.P. had financial interests in Phi Optics Inc., a company that manufactures phase imaging instruments for biomedical applications. The rest of the authors declare no competing interests.

Code and Data Availability

No new data were generated in this study. No codes were used for this study.

Acknowledgments

This study was conducted with the support from the following grant: National Institutes of Health (Grant No. R01CA238191) (previously awarded to G.P., now headed by M.A.A.). M.A.A. was also supported by NIH (Award No. P41EB031772). We dedicate this paper to our dear friend, colleague, and mentor G.P. (Gabi) who tragically passed away during the preparation of this paper. May his immense knowledge in quantitative phase imaging keep on shining eternally!

References

1. 

F. Zernike, “How I discovered phase contrast,” Science, 121 345 –349 https://doi.org/10.1126/science.121.3141.345 SCIEAS 0036-8075 (1955). Google Scholar

2. 

W. Lang, Nomarski Differential Interference-Contrast Microscopy, Carl Zeiss (1982). Google Scholar

3. 

G. Popescu, Quantitative Phase Imaging of Cells and Tissues, McGraw-Hill Education (2011). Google Scholar

4. 

Y. Park, C. Depeursinge and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics, 12 578 –589 https://doi.org/10.1038/s41566-018-0253-x NPAHBY 1749-4885 (2018). Google Scholar

5. 

Y. Rivenson et al., “PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning,” Light Sci. Appl., 8 23 https://doi.org/10.1038/s41377-019-0129-y (2019). Google Scholar

6. 

M. E. Kandel et al., “Phase imaging with computational specificity (PICS) for measuring dry mass changes in sub-cellular compartments,” Nat. Commun., 11 6256 https://doi.org/10.1038/s41467-020-20062-x NCAOBW 2041-1723 (2020). Google Scholar

7. 

Y. Rivenson, Y. Wu and A. Ozcan, “Deep learning in holography and coherent imaging,” Light Sci. Appl., 8 85 https://doi.org/10.1038/s41377-019-0196-0 (2019). Google Scholar

8. 

J. Park et al., “Artificial intelligence-enabled quantitative phase imaging methods for life sciences,” Nat. Methods, 20 1645 –1660 https://doi.org/10.1038/s41592-023-02041-4 1548-7091 (2023). Google Scholar

9. 

Z. Wang et al., “Spatial light interference microscopy (SLIM),” Opt. Express, 19 1016 –1026 https://doi.org/10.1364/OE.19.001016 OPEXFF 1094-4087 (2011). Google Scholar

10. 

G. Popescu et al., “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett., 31 775 –777 https://doi.org/10.1364/OL.31.000775 OPLEDP 0146-9592 (2006). Google Scholar

11. 

T. H. Nguyen et al., “Gradient light interference microscopy for 3D imaging of unlabeled specimens,” Nat. Commun., 8 210 https://doi.org/10.1038/s41467-017-00190-7 NCAOBW 2041-1723 (2017). Google Scholar

12. 

M. E. Kandel et al., “Epi-illumination gradient light interference microscopy for imaging opaque structures,” Nat. Commun., 10 4691 https://doi.org/10.1038/s41467-019-12634-3 NCAOBW 2041-1723 (2019). Google Scholar

13. 

T. Ikeda et al., “Hilbert phase microscopy for investigating fast dynamics in transparent systems,” Opt. Lett., 30 1165 –1167 https://doi.org/10.1364/OL.30.001165 OPLEDP 0146-9592 (2005). Google Scholar

14. 

M. K. Kim, “Principles and techniques of digital holographic microscopy,” SPIE Rev., 1 018005 https://doi.org/10.1117/6.0000006 (2010). Google Scholar

15. 

H. Ding et al., “Fourier transform light scattering of inhomogeneous and dynamic structures,” Phys. Rev. Lett., 101 238102 https://doi.org/10.1103/PhysRevLett.101.238102 PRLTAO 0031-9007 (2008). Google Scholar

16. 

V. N. Astratov et al., “Roadmap on label-free super-resolution imaging,” Laser Photonics Rev., 17 (12), 2200029 https://doi.org/10.1002/lpor.202200029 (2023). Google Scholar

17. 

F. Charrière et al., “Influence of shot noise on phase measurement accuracy in digital holographic microscopy,” Opt. Express, 15 8818 –8831 https://doi.org/10.1364/OE.15.008818 OPEXFF 1094-4087 (2007). Google Scholar

18. 

F. Charrière et al., “Shot-noise influence on the reconstructed phase image signal-to-noise ratio in digital holographic microscopy,” Appl. Opt., 45 7667 –7673 https://doi.org/10.1364/AO.45.007667 APOPAI 0003-6935 (2006). Google Scholar

19. 

P. A. Cheremkhin et al., “Shot noise and fixed-pattern noise effects on digital hologram reconstruction,” Opt. Lasers Eng., 139 106461 https://doi.org/10.1016/j.optlaseng.2020.106461 (2021). Google Scholar

20. 

P. Hosseini et al., “Pushing phase and amplitude sensitivity limits in interferometric microscopy,” Opt. Lett., 41 1656 –1659 https://doi.org/10.1364/OL.41.001656 OPLEDP 0146-9592 (2016). Google Scholar

21. 

S. Montresor and P. Picart, “Quantitative appraisal for noise reduction in digital holographic phase imaging,” Opt. Express, 24 14322 –14343 https://doi.org/10.1364/OE.24.014322 OPEXFF 1094-4087 (2016). Google Scholar

22. 

I. Shevkunov et al., “Super-resolution microscopy for biological specimens: lensless phase retrieval in noisy conditions,” Biomed. Opt. Express, 9 5511 –5523 https://doi.org/10.1364/BOE.9.005511 BOEICL 2156-7085 (2018). Google Scholar

23. 

S. Montresor et al., “Computational de-noising based on deep learning for phase data in digital holographic interferometry,” APL Photonics, 5 030802 https://doi.org/10.1063/1.5140645 (2020). Google Scholar

24. 

Z. Lin et al., “Digital holographic microscopy phase noise reduction based on an over-complete chunked discrete cosine transform sparse dictionary,” Opt. Lasers Eng., 166 107571 https://doi.org/10.1016/j.optlaseng.2023.107571 (2023). Google Scholar

25. 

J. Tang et al., “Continuous phase denoising via deep learning based on Perlin noise similarity in digital holographic microscopy,” IEEE Trans. Ind. Inf., 20 (6), 8707 –8716 https://doi.org/10.1109/TII.2024.3375375 (2024). Google Scholar

26. 

J. Szuniewicz et al., “Noise-resistant phase imaging with intensity correlation,” Sci. Adv., 9 eadh5396 https://doi.org/10.1126/sciadv.adh5396 STAMCV 1468-6996 (2023). Google Scholar

27. 

C. Hu and G. Popescu, “Physical significance of backscattering phase measurements,” Opt. Lett., 42 4643 –4646 https://doi.org/10.1364/OL.42.004643 OPLEDP 0146-9592 (2017). Google Scholar

28. 

G. Hall et al., “Goniometric measurements of thick tissue using Monte Carlo simulations to obtain the single scattering anisotropy coefficient,” Biomed. Opt. Express, 3 2707 –2719 https://doi.org/10.1364/BOE.3.002707 BOEICL 2156-7085 (2012). Google Scholar

29. 

V. Kuzmin and I. Meglinski, “Coherent effects of multiple scattering for scalar and electromagnetic fields: Monte-Carlo simulation and Milne-like solutions,” Opt. Commun., 273 307 –310 https://doi.org/10.1016/j.optcom.2007.01.025 OPCOB8 0030-4018 (2007). Google Scholar

30. 

S. T. Flock et al., “Monte Carlo modeling of light propagation in highly scattering tissues. I. Model predictions and comparison with diffusion theory,” IEEE Trans. Biomed. Eng., 36 1162 –1168 https://doi.org/10.1109/TBME.1989.1173624 IEBEAX 0018-9294 (1989). Google Scholar

31. 

Z. Wang, H. Ding and G. Popescu, “Scattering-phase theorem,” Opt. Lett., 36 1215 –1217 https://doi.org/10.1364/OL.36.001215 OPLEDP 0146-9592 (2011). Google Scholar

32. 

H. Ding et al., “Measuring the scattering parameters of tissues from quantitative phase imaging of thin slices,” Opt. Lett., 36 2281 –2283 https://doi.org/10.1364/OL.36.002281 OPLEDP 0146-9592 (2011). Google Scholar

33. 

N. Goswami et al., “Label-free SARS-CoV-2 detection and classification using phase imaging with computational specificity,” Light Sci. Appl., 10 176 https://doi.org/10.1038/s41377-021-00620-8 (2021). Google Scholar

34. 

X. Chen, M. E. Kandel and G. Popescu, “Spatial light interference microscopy: principle and applications to biomedicine,” Adv. Opt. Photonics, 13 353 –425 https://doi.org/10.1364/AOP.417837 AOPAC7 1943-8206 (2021). Google Scholar

35. 

K. Creath, “Phase-measurement interferometry techniques,” Progr. Opt., 26 349 –393 https://doi.org/10.1016/S0079-6638(08)70178-1 POPTAN 0079-6638 (1988). Google Scholar

36. 

M. Mir, K. Tangella and G. Popescu, “Blood testing at the single cell level using quantitative phase and amplitude microscopy,” Biomed. Opt. Express, 2 3259 –3266 https://doi.org/10.1364/BOE.2.003259 BOEICL 2156-7085 (2011). Google Scholar

37. 

G. Popescu et al., “Fourier phase microscopy for investigation of biological structures and dynamics,” Opt. Lett., 29 2503 –2505 https://doi.org/10.1364/OL.29.002503 OPLEDP 0146-9592 (2004). Google Scholar

38. 

G. Popescu et al., “Optical imaging of cell mass and growth dynamics,” Am. J. Physiol.-Cell Physiol., 295 C538 –C544 https://doi.org/10.1152/ajpcell.00121.2008 (2008). Google Scholar

39. 

B. Bhaduri et al., “Diffraction phase microscopy with white light,” Opt. Lett., 37 1094 –1096 https://doi.org/10.1364/OL.37.001094 OPLEDP 0146-9592 (2012). Google Scholar

40. 

C. Edwards et al., “Diffraction phase microscopy: monitoring nanoscale dynamics in materials science,” Appl. Opt., 53 G33 –G43 https://doi.org/10.1364/AO.53.000G33 APOPAI 0003-6935 (2014). Google Scholar

41. 

T. Kim et al., “White-light diffraction tomography of unlabelled live cells,” Nat. Photonics, 8 256 –263 https://doi.org/10.1038/nphoton.2013.350 NPAHBY 1749-4885 (2014). Google Scholar

42. 

M. Shan et al., “White-light diffraction phase microscopy at doubled space-bandwidth product,” Opt. Express, 24 29033 –29039 https://doi.org/10.1364/OE.24.029033 OPEXFF 1094-4087 (2016). Google Scholar

43. 

B. Bhaduri et al., “Diffraction phase microscopy: principles and applications in materials and life sciences,” Adv. Opt. Photonics, 6 57 –119 https://doi.org/10.1364/AOP.6.000057 AOPAC7 1943-8206 (2014). Google Scholar

44. 

T. M. Kreis, M. Adams and W. P. Jüptner, “Methods of digital holography: a comparison,” Proc. SPIE, 3098 224 –233 https://doi.org/10.1117/12.281164 PSISDG 0277-786X (1997). Google Scholar

45. 

C.-S. Guo et al., “Phase derivative method for reconstruction of slightly off-axis digital holograms,” Opt. Express, 22 30553 –30558 https://doi.org/10.1364/OE.22.030553 OPEXFF 1094-4087 (2014). Google Scholar

46. 

C. Hu et al., “Endoscopic diffraction phase microscopy,” Opt. Lett., 43 3373 –3376 https://doi.org/10.1364/OL.43.003373 OPLEDP 0146-9592 (2018). Google Scholar

47. 

C. Edwards et al., “Epi-illumination diffraction phase microscopy with white light,” Opt. Lett., 39 6162 –6165 https://doi.org/10.1364/OL.39.006162 OPLEDP 0146-9592 (2014). Google Scholar

48. 

Y. Park et al., “Refractive index maps and membrane dynamics of human red blood cells parasitized by Plasmodium falciparum,” Proc. Natl. Acad. Sci. U. S. A., 105 13730 –13735 https://doi.org/10.1073/pnas.0806100105 (2008). Google Scholar

49. 

Y. Park et al., “Measurement of red blood cell mechanics during morphological changes,” Proc. Natl. Acad. Sci. U. S. A., 107 6731 –6736 https://doi.org/10.1073/pnas.0909533107 (2010). Google Scholar

50. 

M. J. Fanous et al., “Effects of substrate patterning on cellular spheroid growth and dynamics measured by gradient light interference microscopy (GLIM),” J. Biophotonics, 12 e201900178 https://doi.org/10.1002/jbio.201900178 (2019). Google Scholar

51. 

Y. Wang et al., “Harmonically decoupled gradient light interference microscopy (HD-GLIM),” Opt. Lett., 45 1487 –1490 https://doi.org/10.1364/OL.379732 OPLEDP 0146-9592 (2020). Google Scholar

52. 

E. Cuche, F. Bevilacqua and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett., 24 291 –293 https://doi.org/10.1364/OL.24.000291 OPLEDP 0146-9592 (1999). Google Scholar

53. 

E. Cuche, P. Marquet and C. Depeursinge, “Spatial filtering for zero-order and twin-image elimination in digital off-axis holography,” Appl. Opt., 39 4070 –4075 https://doi.org/10.1364/AO.39.004070 APOPAI 0003-6935 (2000). Google Scholar

54. 

W. S. Haddad et al., “Fourier-transform holographic microscope,” Appl. Opt., 31 4973 –4978 https://doi.org/10.1364/AO.31.004973 APOPAI 0003-6935 (1992). Google Scholar

55. 

W. S. Haddad et al., “Fourier-transform holographic microscope,” Proc. SPIE, 1448 81 –88 https://doi.org/10.1117/12.45347 PSISDG 0277-786X (1991). Google Scholar

56. 

A. Anand et al., “Single beam Fourier transform digital holographic quantitative phase microscopy,” Appl. Phys. Lett., 104 103705 https://doi.org/10.1063/1.4868533 APPLAB 0003-6951 (2014). Google Scholar

57. 

E. Cuche, P. Marquet and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt., 38 6994 –7001 https://doi.org/10.1364/AO.38.006994 APOPAI 0003-6935 (1999). Google Scholar

58. 

W. Xu et al., “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. U. S. A., 98 11301 –11305 https://doi.org/10.1073/pnas.191361398 (2001). Google Scholar

59. 

I. Yamaguchi, Digital Holography and Three-Dimensional Display, 145 –171, Springer (2006). Google Scholar

60. 

I. Yamaguchi, T. Matsumura and J.-I. Kato, “Phase-shifting color digital holography,” Opt. Lett., 27 1108 –1110 https://doi.org/10.1364/OL.27.001108 OPLEDP 0146-9592 (2002). Google Scholar

61. 

Y. Awatsuji et al., “Parallel two-step phase-shifting digital holography,” Appl. Opt., 47 D183 –D189 https://doi.org/10.1364/AO.47.00D183 APOPAI 0003-6935 (2008). Google Scholar

62. 

J.-P. Liu and T.-C. Poon, “Two-step-only quadrature phase-shifting digital holography,” Opt. Lett., 34 250 –252 https://doi.org/10.1364/OL.34.000250 OPLEDP 0146-9592 (2009). Google Scholar

63. 

T. Tahara et al., “Experimental demonstration of parallel two-step phase-shifting digital holography,” Opt. Express, 18 18975 –18980 https://doi.org/10.1364/OE.18.018975 OPEXFF 1094-4087 (2010). Google Scholar

64. 

Y. Awatsuji et al., “Parallel three-step phase-shifting digital holography,” Appl. Opt., 45 2995 –3002 https://doi.org/10.1364/AO.45.002995 APOPAI 0003-6935 (2006). Google Scholar

65. 

C.-S. Guo et al., “Phase-shifting error and its elimination in phase-shifting digital holography,” Opt. Lett., 27 1687 –1689 https://doi.org/10.1364/OL.27.001687 OPLEDP 0146-9592 (2002). Google Scholar

66. 

P. Xia et al., “Calibrated phase-shifting digital holography based on a dual-camera system,” Opt. Lett., 42 4954 –4957 https://doi.org/10.1364/OL.42.004954 OPLEDP 0146-9592 (2017). Google Scholar

67. 

N. Yoshikawa, “Phase determination method in statistical generalized phase-shifting digital holography,” Appl. Opt., 52 1947 –1953 https://doi.org/10.1364/AO.52.001947 APOPAI 0003-6935 (2013). Google Scholar

68. 

S. Zhang, “A non-iterative method for phase-shift estimation and wave-front reconstruction in phase-shifting digital holography,” Opt. Commun., 268 231 –234 https://doi.org/10.1016/j.optcom.2006.07.039 OPCOB8 0030-4018 (2006). Google Scholar

69. 

T. Seyler et al., “Multiwavelength digital holography in the presence of vibrations: laterally resolved multistep phase-shift extraction,” Appl. Opt., 58 G112 –G119 https://doi.org/10.1364/AO.58.00G112 APOPAI 0003-6935 (2019). Google Scholar

70. 

M. Jung et al., “Color digital holography based on generalized phase-shifting algorithm with monitoring phase-shift,” Photonics, 8 241 https://doi.org/10.3390/photonics8070241 (2021). Google Scholar

71. 

P. Marquet, C. Depeursinge and P. J. Magistretti, “Review of quantitative phase-digital holographic microscopy: promising novel imaging technique to resolve neuronal network activity and identify cellular biomarkers of psychiatric disorders,” Neurophotonics, 1 020901 https://doi.org/10.1117/1.NPh.1.2.020901 (2014). Google Scholar

72. 

T. Colomb et al., “Numerical parametric lens for shifting, magnification, and complete aberration compensation in digital holographic microscopy,” J. Opt. Soc. Am. A, 23 3177 –3190 https://doi.org/10.1364/JOSAA.23.003177 (2006). Google Scholar

73. 

T. Colomb et al., “Automatic procedure for aberration compensation in digital holographic microscopy and applications to specimen shape compensation,” Appl. Opt., 45 851 –863 https://doi.org/10.1364/AO.45.000851 APOPAI 0003-6935 (2006). Google Scholar

74. 

T. Colomb et al., “Total aberrations compensation in digital holographic microscopy with a reference conjugated hologram,” Opt. Express, 14 4300 –4306 https://doi.org/10.1364/OE.14.004300 OPEXFF 1094-4087 (2006). Google Scholar

75. 

C. Zuo et al., “Phase aberration compensation in digital holographic microscopy based on principal component analysis,” Opt. Lett., 38 1724 –1726 https://doi.org/10.1364/OL.38.001724 OPLEDP 0146-9592 (2013). Google Scholar

76. 

S. Liu et al., “Automatic phase aberration compensation for digital holographic microscopy based on phase variation minimization,” Opt. Lett., 43 1870 –1873 https://doi.org/10.1364/OL.43.001870 OPLEDP 0146-9592 (2018). Google Scholar

77. 

D. Deng et al., “Phase aberration compensation for digital holographic microscopy based on geometrical transformations,” J. Opt., 21 085702 https://doi.org/10.1088/2040-8986/ab2528 (2019). Google Scholar

78. 

S. Liu, Q. Lian and Z. Xu, “Phase aberration compensation for digital holographic microscopy based on double fitting and background segmentation,” Opt. Lasers Eng., 115 238 –242 https://doi.org/10.1016/j.optlaseng.2018.12.001 (2019). Google Scholar

79. 

P. Ferraro et al., “Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging,” Appl. Opt., 42 1938 –1946 https://doi.org/10.1364/AO.42.001938 APOPAI 0003-6935 (2003). Google Scholar

80. 

T. Nguyen et al., “Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection,” Opt. Express, 25 15043 –15057 https://doi.org/10.1364/OE.25.015043 OPEXFF 1094-4087 (2017). Google Scholar

81. 

K. Lee et al., “Quantitative phase imaging techniques for the study of cell pathophysiology: from principles to applications,” Sensors, 13 4170 –4191 https://doi.org/10.3390/s130404170 SNSRES 0746-9462 (2013). Google Scholar

82. 

B. Kemper and G. Von Bally, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt., 47 A52 –A61 https://doi.org/10.1364/AO.47.000A52 APOPAI 0003-6935 (2008). Google Scholar

83. 

J. Kühn et al., “Axial sub-nanometer accuracy in digital holographic microscopy,” Meas. Sci. Technol., 19 074007 https://doi.org/10.1088/0957-0233/19/7/074007 MSTCEP 0957-0233 (2008). Google Scholar

84. 

B. Rappaz et al., “Measurement of the integral refractive index and dynamic cell morphometry of living cells with digital holographic microscopy,” Opt. Express, 13 9361 –9373 https://doi.org/10.1364/OPEX.13.009361 OPEXFF 1094-4087 (2005). Google Scholar

85. 

Y. Jo et al., “Holographic deep learning for rapid optical screening of anthrax spores,” Sci. Adv., 3 e1700606 https://doi.org/10.1126/sciadv.1700606 STAMCV 1468-6996 (2017). Google Scholar

86. 

G. Dardikman and N. T. Shaked, “Review on methods of solving the refractive index–thickness coupling problem in digital holographic microscopy of biological cells,” Opt. Commun., 422 8 –16 https://doi.org/10.1016/j.optcom.2017.11.084 OPCOB8 0030-4018 (2018). Google Scholar

87. 

J. Kühn et al., “Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition,” Opt. Express, 15 7231 –7242 https://doi.org/10.1364/OE.15.007231 OPEXFF 1094-4087 (2007). Google Scholar

88. 

A. Anand, V. Chhaniwal and B. Javidi, “Tutorial: common path self-referencing digital holographic microscopy,” APL Photonics, 3 071101 https://doi.org/10.1063/1.5027081 (2018). Google Scholar

89. 

J. Jang et al., “Self-reference quantitative phase microscopy for microfluidic devices,” Opt. Lett., 35 514 –516 https://doi.org/10.1364/OL.35.000514 OPLEDP 0146-9592 (2010). Google Scholar

90. 

K. Lee and Y. Park, “Quantitative phase imaging unit,” Opt. Lett., 39 3630 –3633 https://doi.org/10.1364/OL.39.003630 OPLEDP 0146-9592 (2014). Google Scholar

91. 

Y. Baek et al., “White-light quantitative phase imaging unit,” Opt. Express, 24 9308 –9315 https://doi.org/10.1364/OE.24.009308 OPEXFF 1094-4087 (2016). Google Scholar

92. 

S. Ebrahimi et al., “Stable and simple quantitative phase-contrast imaging by Fresnel biprism,” Appl. Phys. Lett., 112 113701 https://doi.org/10.1063/1.5021008 APPLAB 0003-6951 (2018). Google Scholar

93. 

M. Kumar, T. Murata and O. Matoba, “Live cell imaging by single-shot common-path wide field-of-view reflective digital holographic microscope,” Sensors, 24 720 https://doi.org/10.3390/s24030720 SNSRES 0746-9462 (2024). Google Scholar

94. 

I. Moon et al., “Identification of malaria-infected red blood cells via digital shearing interferometry and statistical inference,” IEEE Photonics J., 5 6900207 https://doi.org/10.1109/JPHOT.2013.2278522 (2013). Google Scholar

95. 

R. Liu et al., “Entropy-based clustering of embryonic stem cells using digital holographic microscopy,” J. Opt. Soc. Am. A, 31 677 –684 https://doi.org/10.1364/JOSAA.31.000677 JOAOD6 0740-3232 (2014). Google Scholar

96. 

M. Mihailescu et al., “Automated imaging, identification, and counting of similar cells from digital hologram reconstructions,” Appl. Opt., 50 3589 –3597 https://doi.org/10.1364/AO.50.003589 APOPAI 0003-6935 (2011). Google Scholar

97. 

P. Marquet and C. Depeursinge, “Digital holographic microscopy: a new imaging technique to quantitatively explore cell dynamics with nanometer sensitivity,” in Multi-Dimens. Imaging, 197 –223 (2014). https://doi.org/10.1002/9781118705766.ch9 Google Scholar

98. 

B. Javidi et al., “Sickle cell disease diagnosis based on spatio-temporal cell dynamics analysis using 3D printed shearing digital holographic microscopy,” Opt. Express, 26 13614 –13627 https://doi.org/10.1364/OE.26.013614 OPEXFF 1094-4087 (2018). Google Scholar

99. 

A. S. Singh et al., “Lateral shearing digital holographic imaging of small biological specimens,” Opt. Express, 20 (21), 23617 –23622 https://doi.org/10.1364/OE.20.023617 (2012). Google Scholar

100. 

S. Rawat et al., “Compact and field-portable 3D printed shearing digital holographic microscope for automated cell identification,” Appl. Opt., 56 (9), D127 –D133 https://doi.org/10.1364/AO.56.00D127 (2017). Google Scholar

101. 

V. Chhaniwal et al., “Quantitative phase-contrast imaging with compact digital holographic microscope employing Lloyd’s mirror,” Opt. Lett., 37 (24), 5127 –5129 https://doi.org/10.1364/OL.37.005127 (2012). Google Scholar

102. 

T. O’Connor et al., “Deep learning-based cell identification and disease diagnosis using spatio-temporal cellular dynamics in compact digital holographic microscopy,” Biomed. Opt. Express, 11 4491 –4508 https://doi.org/10.1364/BOE.399020 BOEICL 2156-7085 (2020). Google Scholar

103. 

B. Kemper et al., Label-Free Monitoring of Cells In Vitro, 219 –272, Springer (2019). Google Scholar

104. 

G. Popescu et al., “Erythrocyte structure and dynamics quantified by Hilbert phase microscopy,” J. Biomed. Opt., 10 060503 https://doi.org/10.1117/1.2149847 JBOPFO 1083-3668 (2005). Google Scholar

105. 

G. Popescu et al., “Observation of apparent membrane tension in red blood cells using actively stabilized Hilbert phase microscopy,” in Biomed. Top. Meet., TuH8 (2006). https://doi.org/10.1364/BIO.2006.TuH8 Google Scholar

106. 

N. Lue et al., “Live cell refractometry using Hilbert phase microscopy and confocal reflectance microscopy,” J. Phys. Chem. A, 113 13327 –13330 https://doi.org/10.1021/jp904746r JPCAFH 1089-5639 (2009). Google Scholar

107. 

N. Lue et al., “Tissue refractometry using Hilbert phase microscopy,” Opt. Lett., 32 3522 –3524 https://doi.org/10.1364/OL.32.003522 OPLEDP 0146-9592 (2007). Google Scholar

108. 

H. Ding et al., “Optical properties of tissues quantified by Fourier-transform light scattering,” Opt. Lett., 34 1372 –1374 https://doi.org/10.1364/OL.34.001372 OPLEDP 0146-9592 (2009). Google Scholar

109. 

C. Hu and G. Popescu, “Quantitative phase imaging (QPI) in neuroscience,” IEEE J. Sel. Top. Quantum Electron., 25 1 –9 https://doi.org/10.1109/JSTQE.2018.2869613 IJSQEN 1077-260X (2018). Google Scholar

110. 

M. Mir et al., “Optical measurement of cycle-dependent cell growth,” Proc. Natl. Acad. Sci. U. S. A., 108 13124 –13129 https://doi.org/10.1073/pnas.1100506108 (2011). Google Scholar

111. 

S. Sridharan, M. Mir and G. Popescu, “Simultaneous optical measurements of cell motility and growth,” Biomed. Opt. Express, 2 2815 –2820 https://doi.org/10.1364/BOE.2.002815 BOEICL 2156-7085 (2011). Google Scholar

112. 

M. Mir et al., “Highly sensitive quantitative imaging for monitoring single cancer cell growth kinetics and drug response,” PLoS One, 9 e89000 https://doi.org/10.1371/journal.pone.0089000 POLNCL 1932-6203 (2014). Google Scholar

113. 

Y. J. Lee et al., “Quantitative assessment of neural outgrowth using spatial light interference microscopy,” J. Biomed. Opt., 22 066015 https://doi.org/10.1117/1.JBO.22.6.066015 JBOPFO 1083-3668 (2017). Google Scholar

114. 

S. Sridharan Weaver et al., “Simultaneous cell traction and growth measurements using light,” J. Biophotonics, 12 e201800182 https://doi.org/10.1002/jbio.201800182 (2019). Google Scholar

115. 

Y. Li et al., “Quantitative phase imaging reveals matrix stiffness-dependent growth and migration of cancer cells,” Sci. Rep., 9 248 https://doi.org/10.1038/s41598-018-36551-5 SRCEC3 2045-2322 (2019). Google Scholar

116. 

E. C. Qin et al., “Graphene oxide substrates with N-cadherin stimulates neuronal growth and intracellular transport,” Acta Biomaterialia, 90 412 –423 https://doi.org/10.1016/j.actbio.2019.04.005 (2019). Google Scholar

117. 

M. E. Kandel et al., “Cell-to-cell influence on growth in large populations,” Biomed. Opt. Express, 10 4664 –4675 https://doi.org/10.1364/BOE.10.004664 BOEICL 2156-7085 (2019). Google Scholar

118. 

M. Fanous et al., “Label-free screening of brain tissue myelin content using phase imaging with computational specificity (PICS),” (2021). Google Scholar

119. 

M. E. Kandel et al., “Multiscale assay of unlabeled neurite dynamics using phase imaging with computational specificity,” ACS Sens., 6 1864 –1874 https://doi.org/10.1021/acssensors.1c00100 (2021). Google Scholar

120. 

M. Fanous et al., “Quantifying myelin content in brain tissue using color Spatial Light Interference Microscopy (cSLIM),” PLoS One, 15 e0241084 https://doi.org/10.1371/journal.pone.0241084 POLNCL 1932-6203 (2020). Google Scholar

121. 

C. Yin et al., “Network science characteristics of brain-derived neuronal cultures deciphered from quantitative phase imaging data,” Sci. Rep., 10 15078 https://doi.org/10.1038/s41598-020-72013-7 SRCEC3 2045-2322 (2020). Google Scholar

122. 

C. Hu et al., “Optical excitation and detection of neuronal activity,” J. Biophotonics, 12 e201800269 https://doi.org/10.1002/jbio.201800269 (2019). Google Scholar

123. 

A. Fan et al., “Coupled circumferential and axial tension driven by actin and myosin influences in vivo axon diameter,” Sci. Rep., 7 14188 https://doi.org/10.1038/s41598-017-13830-1 SRCEC3 2045-2322 (2017). Google Scholar

124. 

P. Cintora et al., “Cell density modulates intracellular mass transport in neural networks,” Cytom. Part A, 91 503 –509 https://doi.org/10.1002/cyto.a.23111 1552-4922 (2017). Google Scholar

125. 

M. E. Kandel et al., “Three-dimensional intracellular transport in neuron bodies and neurites investigated by label-free dispersion-relation phase spectroscopy,” Cytom. Part A, 91 519 –526 https://doi.org/10.1002/cyto.a.23081 1552-4922 (2017). Google Scholar

126. 

M. E. Kandel et al., “Label-free imaging of single microtubule dynamics using spatial light interference microscopy,” ACS Nano, 11 647 –655 https://doi.org/10.1021/acsnano.6b06945 ANCAC3 1936-0851 (2017). Google Scholar

127. 

S. Ceballos et al., “Active intracellular transport in metastatic cells studied by spatial light interference microscopy,” J. Biomed. Opt., 20 111209 https://doi.org/10.1117/1.JBO.20.11.111209 JBOPFO 1083-3668 (2015). Google Scholar

128. 

R. Gannavarpu et al., “Spatiotemporal characterization of a fibrin clot using quantitative phase imaging,” PLoS One, 9 e111381 https://doi.org/10.1371/journal.pone.0111381 POLNCL 1932-6203 (2014). Google Scholar

129. 

B. Bhaduri et al., “Cardiomyocyte imaging using real-time spatial light interference microscopy (SLIM),” PLoS One, 8 e56930 https://doi.org/10.1371/journal.pone.0056930 POLNCL 1932-6203 (2013). Google Scholar

130. 

Y. Ma et al., “Panoramic quantitative phase imaging of adherent live cells in a microfluidic environment,” Biomed. Opt. Express, 14 5182 –5198 https://doi.org/10.1364/BOE.498602 BOEICL 2156-7085 (2023). Google Scholar

131. 

M. Fanous et al., “Quantitative phase imaging of stromal prognostic markers in pancreatic ductal adenocarcinoma,” Biomed. Opt. Express, 11 1354 –1364 https://doi.org/10.1364/BOE.383242 BOEICL 2156-7085 (2020). Google Scholar

132. 

M. E. Kandel et al., “Label-free tissue scanner for colorectal cancer screening,” J. Biomed. Opt., 22 066016 https://doi.org/10.1117/1.JBO.22.6.066016 JBOPFO 1083-3668 (2017). Google Scholar

133. 

S. Sridharan et al., “Prediction of prostate cancer recurrence using quantitative phase imaging: validation on a general population,” Sci. Rep., 6 33818 https://doi.org/10.1038/srep33818 SRCEC3 2045-2322 (2016). Google Scholar

134. 

H. Majeed et al., “Breast cancer diagnosis using spatial light interference microscopy,” J. Biomed. Opt., 20 111210 https://doi.org/10.1117/1.JBO.20.11.111210 JBOPFO 1083-3668 (2015). Google Scholar

135. 

N. Goswami et al., “Monitoring reactivation of latent HIV by label-free gradient light interference microscopy,” iScience, 24 102940 https://doi.org/10.1016/j.isci.2021.102940 (2021). Google Scholar

136. 

L. Liu et al., “Topography and refractometry of sperm cells using spatial light interference microscopy,” J. Biomed. Opt., 23 025003 https://doi.org/10.1117/1.JBO.23.2.025003 JBOPFO 1083-3668 (2018). Google Scholar

137. 

M. Rubessa et al., “SLIM microscopy allows for visualization of DNA-containing liposomes designed for sperm-mediated gene transfer in cattle,” Mol. Biol. Rep., 46 695 –703 https://doi.org/10.1007/s11033-018-4525-9 MLBRBU 0301-4851 (2019). Google Scholar

138. 

M. Rubessa et al., “Morphometric analysis of sperm used for IVP by three different separation methods with spatial light interference microscopy,” Syst. Biol. Reprod. Med., 66 26 –36 https://doi.org/10.1080/19396368.2019.1701139 (2020). Google Scholar

139. 

M. E. Kandel et al., “Reproductive outcomes predicted by phase imaging with computational specificity of spermatozoon ultrastructure,” Proc. Natl. Acad. Sci. U. S. A., 117 18302 –18309 https://doi.org/10.1073/pnas.2001754117 (2020). Google Scholar

140. 

N. Goswami et al., “EVATOM: an optical, label-free, machine learning assisted embryo health assessment tool,” Commun. Biol., 7 268 https://doi.org/10.1038/s42003-024-05960-w (2024). Google Scholar

141. 

M. Nassir et al., “Evaluation of women’s aging influence on sperm passage inside the fallopian tube using 3D dynamic mechanical modeling,” Front. Bioeng. Biotechnol., 12 1324802 https://doi.org/10.3389/fbioe.2024.1324802 (2024). Google Scholar

142. 

H. Ding et al., “Fourier transform light scattering of biological structure and dynamics,” IEEE J. Sel. Top. Quantum Electron., 16 909 –918 https://doi.org/10.1109/JSTQE.2009.2034752 IJSQEN 1077-260X (2010). Google Scholar

143. 

M. Hunter et al., “Tissue self-affinity and polarized light scattering in the Born approximation: a new model for precancer detection,” Phys. Rev. Lett., 97 138102 https://doi.org/10.1103/PhysRevLett.97.138102 PRLTAO 0031-9007 (2006). Google Scholar

144. 

Z. Wang et al., “Tissue refractive index as marker of disease,” J. Biomed. Opt., 16 116017 https://doi.org/10.1117/1.3656732 JBOPFO 1083-3668 (2011). Google Scholar

145. 

H. Majeed et al., “Quantitative histopathology of stained tissues using color spatial light interference microscopy (cSLIM),” Sci. Rep., 9 14679 https://doi.org/10.1038/s41598-019-50143-x SRCEC3 2045-2322 (2019). Google Scholar

146. 

M. Takabayashi et al., “Tissue spatial correlation as cancer marker,” J. Biomed. Opt., 24 016502 https://doi.org/10.1117/1.JBO.24.1.016502 JBOPFO 1083-3668 (2019). Google Scholar

147. 

M. Takabayashi et al., “Disorder strength measured by quantitative phase imaging as intrinsic cancer marker in fixed tissue biopsies,” PLoS One, 13 e0194320 https://doi.org/10.1371/journal.pone.0194320 POLNCL 1932-6203 (2018). Google Scholar

148. 

S. Ban et al., “Optical properties of acute kidney injury measured by quantitative phase imaging,” Biomed. Opt. Express, 9 921 –932 https://doi.org/10.1364/BOE.9.000921 BOEICL 2156-7085 (2018). Google Scholar

149. 

Z.-L. Li et al., “Changes in extracellular matrix in different stages of colorectal cancer and their effects on proliferation of cancer cells,” World J. Gastrointest. Oncol., 12 267 https://doi.org/10.4251/wjgo.v12.i3.267 (2020). Google Scholar

150. 

J. Winkler et al., “Concepts of extracellular matrix remodelling in tumour progression and metastasis,” Nat. Commun., 11 5120 https://doi.org/10.1038/s41467-020-18794-x NCAOBW 2041-1723 (2020). Google Scholar

151. 

H. Majeed et al., “Quantifying collagen fiber orientation in breast cancer using quantitative phase imaging,” J. Biomed. Opt., 22 046004 https://doi.org/10.1117/1.JBO.22.4.046004 JBOPFO 1083-3668 (2017). Google Scholar

152. 

K. L. Vom Werth et al., “Application of digital holographic microscopy to analyze changes in T-cell morphology in response to bacterial challenge,” Cells, 12 762 https://doi.org/10.3390/cells12050762 (2023). Google Scholar

153. 

C. Yourassowsky et al., “Lipid quantification in living microalgal cultures with digital holographic microscopy,” Front. Photonics, 4 1301708 https://doi.org/10.3389/fphot.2023.1301708 (2024). Google Scholar

154. 

G. Giugliano et al., “Investigation on lysosomal accumulation by a quantitative analysis of 2D phase-maps in digital holography microscopy,” Cytom. Part A, 105 (5), 323 –331 https://doi.org/10.1002/cyto.a.24833 1552-4922 (2024). Google Scholar

155. 

Y. Jo et al., “Label-free multiplexed microtomography of endogenous subcellular dynamics using generalizable deep learning,” Nat. Cell Biol., 23 1329 –1337 https://doi.org/10.1038/s41556-021-00802-x NCBIFN 1465-7392 (2021). Google Scholar

156. 

G. Kim et al., “Learning-based screening of hematologic disorders using quantitative phase imaging of individual red blood cells,” Biosens. Bioelectron., 123 69 –76 https://doi.org/10.1016/j.bios.2018.09.068 BBIOE4 0956-5663 (2019). Google Scholar

157. 

D. Pirone et al., “Label-free liquid biopsy through the identification of tumor cells by machine learning-powered tomographic phase imaging flow cytometry,” Sci. Rep., 13 6042 https://doi.org/10.1038/s41598-023-32110-9 SRCEC3 2045-2322 (2023). Google Scholar

Biography

Neha Goswami received her PhD in bioengineering from the University of Illinois Urbana-Champaign, Illinois, United States, her master’s degree in photonics science and engineering from IIT Kanpur, India, and her bachelor’s degree in electronics and communication engineering from GEU, India. Currently, she is a postdoctoral research fellow at Cold Spring Harbor Laboratory, New York, United States. She previously worked for a year in the Structural Biology Department at Memorial Sloan Kettering Cancer Center, New York, United States. Her research interests include quantitative phase and super-resolution optical imaging in combination with multimodal machine learning for biomedical applications.

Mark A. Anastasio is the Donald Biggar Willett Professor of Engineering and head of the Department of Bioengineering at the University of Illinois Urbana-Champaign, Illinois, United States. His research interests include computational image science, inverse problems in imaging, and machine learning for imaging applications. He has contributed broadly to emerging biomedical imaging technologies, including diffraction tomography, photoacoustic computed tomography, ultrasound computed tomography, and X-ray phase-contrast imaging. He is a fellow of SPIE, IEEE, AIMBE, and IAMBE.

Gabriel Popescu was the William L. Everitt Distinguished Professor in Electrical and Computer Engineering, with an affiliation in Bioengineering, at the University of Illinois Urbana-Champaign, Illinois, United States. He was a distinguished leader in quantitative phase imaging who unfortunately passed away in June 2022. Hailing from Romania, he made pioneering contributions to advanced biomedical optics, including quantitative phase imaging and spectroscopy. He was a fellow of SPIE, Optica, and AIMBE. Known for his innovative research and inspiring mentorship, he leaves behind a great legacy in optics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Neha Goswami, Mark A. Anastasio, and Gabriel Popescu "Quantitative phase imaging techniques for measuring scattering properties of cells and tissues: a review—part I," Journal of Biomedical Optics 29(S2), S22713 (18 July 2024). https://doi.org/10.1117/1.JBO.29.S2.S22713
Received: 29 February 2024; Accepted: 20 May 2024; Published: 18 July 2024
KEYWORDS: Scattering, Tissues, Phase measurement, Phase imaging, Refractive index, Light scattering, Biological samples
