1. Introduction

Diffuse optical tomography (DOT) is a noninvasive technique for imaging biological tissues, with applications in imaging of the human brain,1–3 breast,4,5 and small animals.6,7 In brain imaging, DOT has been used for functional brain activation studies,1,3,8–10 imaging of hemorrhages at birth,2 and imaging of stroke.7,11 DOT is portable, which makes it a potential imaging method for bedside monitoring of newborn infants or adults in intensive care.8 Recently, it has also been suggested that, using high-density imaging arrays, DOT could achieve spatial resolution comparable to functional magnetic resonance imaging.12,13 DOT also has applications in cancer tumor diagnosis in humans and small animals.4–6

Absolute imaging in DOT uses a single set of measurements to reconstruct the spatially distributed absorption and scattering coefficients. However, absolute imaging is very sensitive to modeling errors, which can be caused, e.g., by an inaccurately known object shape or by unknown optode coupling coefficients or optode positions. A variety of techniques have been developed to provide tolerance toward such modeling errors. Some of the techniques rely on explicit calculation of the coupling coefficients from experiments14,15 or by computational methods.16,17 For estimating the domain shape, several registration methods are available; these typically use a finite set of measured surface points to fit a generic head model in the form of an anatomical atlas.18 However, interpolating the object shape from a few measured points does not guarantee recovering the exact surface of the patient; hence, the process might still retain modeling errors. Techniques using data from other imaging modalities, such as computed tomography (CT)19 or magnetic resonance imaging (MRI),20,21 to obtain the domain shape and optode positions have also been proposed, but such data might not always be available.
The Bayesian approximation error approach22 is an alternative computational technique in which the statistics of such model-based errors are precomputed using prior probability distributions of the unknowns and the uncertain nuisance parameters. These statistics are then used in the image reconstruction process to compensate for the modeling errors.23–28 Nevertheless, the most popular technique for in vivo DOT imaging has been difference imaging, where the objective is to reconstruct the change in the optical properties using measurements before and after the change. Conventionally, the image reconstruction is carried out using the difference of the measurements and a linearized approximation of the observation model.3,8–10,14,29–32 In the case of imaging brain activation, the reference measurement from the state before the change is typically obtained at the “rest state” of the brain.8,9 In some cases, the reference measurement can also be obtained from a homogeneous tissue-mimicking phantom.32 In Refs. 29–31 and 33, the difference imaging problem is stated as reconstructing moment-by-moment relative differences of the parameters using the mean of the time series of measured data as the reference data. One of the main benefits of the linearized difference imaging approach is its good tolerance to (invariant) modeling errors, such as inaccurately known source and detector locations and coupling, as well as an inaccurately known body shape. When reconstructing images using differences in data resulting from a change in the optical properties, several modeling errors that are invariant between the measurements cancel out (at least partially) in the subtraction of the measurement before the change from the measurement after the change.14 The performance of linear difference imaging in the presence of a mismodeled background was studied in Refs. 25 and 31, and in imaging objects of different sizes and target optical properties in Ref. 29.
The method was extended for extracting dynamic information from time series data in Refs. 33 and 34. A nonlinear (postprocessing) update method based on spatial deconvolution of the difference images was implemented in Refs. 34–39 to improve on the linearized solutions. A drawback of the linear reconstruction approach is that the difference images are usually only qualitative in nature, and their spatial resolution can be weak because they rely on a global linearization of the nonlinear observation model. The performance of the linear reconstruction also depends on the linearization point, which ideally should equal the initial state; in practice, the initial state is always unknown. According to simulations in Ref. 40, inaccurately known background optical properties combined with linear difference imaging can lead to inaccurate contrast in the reconstructed images, and in some cases the method can even fail to detect and localize the changes. To overcome the limitations of the linear reconstruction approach, we apply a new nonlinear approach for difference imaging in DOT that was initially developed for estimating the conductivity distribution in electrical impedance tomography in Refs. 41 and 42. In this approach, the optical parameters after the change are parameterized as a linear combination of the initial state and the change, and the reconstruction is based on the regularized nonlinear least-squares approach. Instead of using the difference of the data before and after the change, the measurements before and after the change are concatenated into a single measurement vector, and the objective is to estimate the unknown initial state and the change based on the combined data. This approach naturally allows modeling the spatial characteristics of the background optical parameters and of the change independently, by separate regularization functionals.
The approach also allows the restriction of the optical parameter changes to a region of interest (ROI) inside the domain in cases where the change is known to occur in a certain subvolume of the body. We test the feasibility of the method with two-dimensional (2-D) simulations using frequency domain data from a simplified head geometry and with experimental frequency domain data from a cylindrical phantom using three-dimensional (3-D) models. Since good tolerance for modeling errors, such as domain truncation, inaccurately known optode coupling coefficients, and inaccurately known domain shape, is one of the main benefits of linear difference imaging, we also study how the new nonlinear approach tolerates the same modeling errors.

The remainder of the paper is organized as follows. In Sec. 2, a brief review of light transport modeling in DOT is given; then, the absolute and difference imaging approaches in DOT and the reconstruction algorithms are explained. In Sec. 3, we describe the methods used in data simulation and in constructing the prior models; we then present the results in a 2-D geometry with and without modeling errors, describe our experimental setup, and present the reconstruction results with the experimental data. Finally, the conclusions are given in Sec. 4.

2. Forward Model of Diffuse Optical Tomography

Let Ω ⊂ ℝⁿ, n = 2 or 3, denote the object domain and ∂Ω the domain boundary. In a diffusive medium, the commonly used light transport model for DOT is the diffusion approximation to the radiative transport equation.43 In this paper, the frequency domain version of the diffusion approximation is used,

  −∇·κ(r)∇Φ(r, ω) + (μa(r) + iω/c)Φ(r, ω) = 0,  r ∈ Ω,  (1)

where Φ is the fluence, μa is the absorption coefficient, and κ is the diffusion coefficient. The diffusion coefficient is given by κ = 1/(n(μa + μs′)), where μs′ is the (reduced) scattering coefficient. Furthermore, i is the imaginary unit, ω is the angular modulation frequency of the input signal, and c is the speed of light in the medium.
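To make the frequency domain model concrete, the following is a minimal 1-D finite-element sketch of a diffusion-approximation slab with Robin-type boundaries and a unit source on the left boundary. This is an illustration only, not the 2-D/3-D solver used in the paper; the slab geometry, the 1/(2(μa + μs′)) diffusion-coefficient convention, the internal reflection parameter A = 1, and the value of c are assumptions.

```python
import numpy as np

def assemble_dot_1d(n_nodes, length, mua, mus_prime, omega, c=2.2e11, A=1.0):
    """Assemble a 1-D FE system for the frequency domain diffusion
    approximation with Robin boundary terms (units: mm, mm^-1, mm/s)."""
    h = length / (n_nodes - 1)                 # element size
    kappa = 1.0 / (2.0 * (mua + mus_prime))    # diffusion coefficient (assumed convention)
    k = mua + 1j * omega / c                   # complex absorption term of Eq. (1)
    K = np.zeros((n_nodes, n_nodes), dtype=complex)
    for e in range(n_nodes - 1):
        Ke = (kappa / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
        Me = (k * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])   # element mass
        K[e:e + 2, e:e + 2] += Ke + Me
    K[0, 0] += 1.0 / (2.0 * A)                 # Robin terms on both boundaries
    K[-1, -1] += 1.0 / (2.0 * A)
    return K

n = 101
K = assemble_dot_1d(n, length=50.0, mua=0.01, mus_prime=1.0,
                    omega=2.0 * np.pi * 100e6)
b = np.zeros(n, dtype=complex)
b[0] = 1.0 / 2.0                               # q / (2A) with unit source strength
phi = np.linalg.solve(K, b)                    # complex fluence at the nodes
log_amp = np.log(np.abs(phi))                  # log amplitude
phase = np.unwrap(np.angle(phi))               # phase
```

The solution exhibits the expected diffusive behavior: the amplitude decays and the phase lags with increasing distance from the source.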
The fluence satisfies a Robin-type boundary condition,

  Φ(r, ω) + (1/(2γn)) κ(r) A ∂Φ(r, ω)/∂n̂ = Qs(r, ω)/γn for r on the source locations es, and 0 elsewhere on ∂Ω,  (2)

where the parameter Qs is the strength of the light source at locations es ⊂ ∂Ω, s = 1, …, Ns, operating at angular frequency ω. The parameter γn is a dimension-dependent constant (γn = 1/π when n = 2, γn = 1/4 when n = 3), and A is a parameter governing the internal reflection at the boundary ∂Ω. The measurable quantity, the exitance Γs,d recorded by detector d under illumination from source s, is given by

  Γs,d = −κ ∂Φs(ξd)/∂n̂,  (3)

where n̂ is the outward normal to the boundary at point ξd, and ξd ∈ ∂Ω, d = 1, …, Nd, are the detector locations.

The numerical approximation of the forward model (1)–(3) is based on a finite-element (FE) approximation. In the FE approximation, the domain Ω is divided into nonoverlapping elements joined at vertex nodes. The photon density in the finite dimensional basis is given by Φh = Σk φk ψk, where ψk are the nodal basis functions of the FE mesh and φk are the photon densities in the nodes of the FE mesh. Furthermore, we write finite dimensional (piecewise constant) approximations for the optical coefficients, μa ≈ Σk μa,k χk and μs′ ≈ Σk μs′,k χk, where χk denote the characteristic functions of disjoint image pixels.

The measurement data for frequency domain DOT typically contain the measured log amplitude and phase for all source–detector pairs,

  y = (ln|Γ1|, …, ln|Γm|, arg(Γ1), …, arg(Γm))ᵀ ∈ ℝ²ᵐ,  (7)

where y is the data vector. The FE-based solution of Eqs. (1)–(3) is denoted by A(x), where x is the vector of discretized optical coefficients. The observation model is written as

  y = A(x) + e,  (8)

where e models the random noise in the measurements.

2.1. Absolute Imaging

In absolute imaging, the optical coefficients x are reconstructed using a single set of measurements during which the target is assumed to be nonvarying.
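The stacking of log amplitude and phase into the real data vector y, and the additive noise of Eq. (8), can be sketched as follows; the exitance values below are hypothetical.

```python
import numpy as np

def measurement_vector(gamma):
    # Stack log amplitude and phase of the complex exitances
    # (one per source-detector pair) into a real data vector y.
    gamma = np.asarray(gamma, dtype=complex)
    return np.concatenate([np.log(np.abs(gamma)), np.unwrap(np.angle(gamma))])

# hypothetical exitances for three source-detector pairs
gamma = np.array([1e-3 * np.exp(-0.20j),
                  5e-4 * np.exp(-0.35j),
                  2e-4 * np.exp(-0.50j)])
y = measurement_vector(gamma)                  # length 2 * 3
# additive noise e of Eq. (8), here with 1% relative standard deviation
rng = np.random.default_rng(0)
y_noisy = y + 0.01 * np.abs(y) * rng.standard_normal(y.size)
```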
Assuming that the additive measurement noise e is independent of the unknowns and distributed as zero-mean Gaussian, e ∼ N(0, Γe),44 the estimate amounts to the minimization problem

  x̂ = arg minₓ { ‖Le(y − A(x))‖² + p(x) },  (9)

where Le is the Cholesky factor such that LeᵀLe = Γe⁻¹, and p(x) is the regularization functional, which should be constructed based on the prior information on the unknowns. Usual choices for the regularization functional include standard Tikhonov regularization p(x) = α‖x‖², smoothness regularization p(x) = α‖Lx‖², where L is a (possibly spatially and directionally weighted) differential operator,3,45,46 total variation (TV)47–50 regularization, and so on. We note that the estimate (9) can be interpreted in the Bayesian inversion framework as the maximum a posteriori estimate from a posterior density model, which is based on the observation model (8) and a prior model for the unknowns.22,23,51

2.2. Difference Imaging

Consider two DOT measurement realizations y1 and y2 obtained from the body at times t1 and t2 with optical coefficients x1 and x2, respectively. The observation models corresponding to the two DOT measurement realizations can be written as in Eq. (8),

  y1 = A(x1) + e1,  (10)
  y2 = A(x2) + e2,  (11)

where e1 ∼ N(0, Γe1) and e2 ∼ N(0, Γe2). The aim in difference imaging is to reconstruct the change in the optical parameters, δx = x2 − x1, based on the measurements y1 and y2.

2.2.1. Conventional linear approach to difference imaging

Conventionally, the image reconstruction in difference imaging is carried out as follows. Equations (10) and (11) are approximated by the first-order Taylor approximations A(x) ≈ A(x0) + J(x0)(x − x0), where x0 is the linearization point and J(x0) is the Jacobian matrix evaluated at x0. Using the linearization and subtracting y1 from y2 gives the linear observation model

  δy = J(x0) δx + δe,  (13)

where δy = y2 − y1 and δe = e2 − e1. Given the model (13), the change in optical coefficients can be estimated as

  δx̂ = arg min_δx { ‖Lδe(δy − J δx)‖² + p(δx) },  (14)

where p(δx) is the regularization functional. The weighting matrix Lδe is defined by LδeᵀLδe = Γδe⁻¹, where Γδe = Γe1 + Γe2. The regularization functional is often chosen to be of the quadratic form p(δx) = ‖Lδx δx‖², where Lδx is the regularization matrix. In such a case, the problem (14) is linear and has the closed form solution13,52

  δx̂ = (JᵀΓδe⁻¹J + LδxᵀLδx)⁻¹ JᵀΓδe⁻¹ δy.  (15)

In this paper, we refer to Eq.
(15) as the conventional difference imaging estimate. The main benefit of difference imaging is that at least part of the modeling errors cancel out when considering the difference data δy = y2 − y1. Hence, the estimates are often, to some extent, tolerant of modeling errors. A drawback of the approach is that the difference images are usually only qualitative in nature, and their spatial resolution can be weak because they rely on a global linearization of the nonlinear observation model (8). Moreover, the estimates depend on the selection of the linearization point x0. Typically, x0 is selected as a homogeneous (spatially constant) estimate of the initial state x1. This choice can lead to errors in the reconstructions if the initial optical coefficients are not accurately known.25,31

2.2.2. Nonlinear approach to difference imaging

In this section, we formulate the reconstruction of the change of optical coefficients in the nonlinear regularized least squares framework. However, instead of reconstructing both states x1 and x2 using Eq. (9) separately for the datasets y1 and y2 and then subtracting δx = x2 − x1, we use an approach where δx is reconstructed together with x1 by using both datasets y1 and y2 simultaneously.41,42 This approach allows us to model prior information in cases where, e.g., the spatial characteristics of the initial state and the change are different (e.g., smooth x1 and sparse δx). This approach also allows, in a straightforward way, the restriction of the change into an ROI. Let us assume that the change in optical coefficients is known to occur in a subdomain Ωr ⊆ Ω, and denote the change of optical coefficients within Ωr by δxr. Then, δx = M δxr, where M is an extension mapping such that the components of δxr are mapped to the corresponding nodes of Ωr, and the remaining components of δx are zero.  (16)

Obviously, if no ROI constraint is used, we set M = I. The optical coefficients after the change can now be represented as a linear combination of the initial state and the change as

  x2 = x1 + M δxr.  (17)

Inserting Eq. (17) into Eq. (11) and concatenating the measurement vectors y1 and y2 and the corresponding models in Eqs.
(10) and (11) into block vectors, leads to an observation model41

  [y1; y2] = [A(x1); A(x1 + M δxr)] + [e1; e2],  (18)

which we write compactly as

  ȳ = Ā(x̄) + ē,  (19)

where ȳ = (y1ᵀ, y2ᵀ)ᵀ, Ā(x̄) = (A(x1)ᵀ, A(x1 + M δxr)ᵀ)ᵀ, x̄ = (x1ᵀ, δxrᵀ)ᵀ, and ē = (e1ᵀ, e2ᵀ)ᵀ. Given the model in Eq. (19), the initial state and the change can be simultaneously estimated as

  (x̂1, δx̂r) = arg min_x̄ { ‖Lē(ȳ − Ā(x̄))‖² + p̄(x̄) }.  (21)

Here, Lē is the Cholesky factor such that LēᵀLē = Γē⁻¹, where Γē = [Γe1 0; 0 Γe2] and 0 is an all-zero matrix. p̄(x̄) is the joint regularization functional of x̄, which allows for separate models for x1 and δxr as

  p̄(x̄) = p1(x1) + p2(δxr).  (22)

The estimate in Eq. (21) can be computed iteratively using, for example, a Gauss–Newton (GN) algorithm53 as

  x̄(i+1) = x̄(i) + s(i) [J̄ᵀΓē⁻¹J̄ + H p̄(x̄(i))]⁻¹ { J̄ᵀΓē⁻¹[ȳ − Ā(x̄(i))] − ∇p̄(x̄(i)) },  (23)

where s(i) is the step length, and ∇p̄ and H p̄ are the gradient and Hessian of the regularization functional p̄, respectively. The Jacobian matrix J̄ is of the form

  J̄ = [J(x1) 0; J(x2) J(x2)M],

where J(·) is the Jacobian matrix for the forward mapping A(·), 0 is an all-zero matrix, and the number of columns of the second block equals nr, the dimension of the vector δxr.

3. Results

The feasibility of the method was tested with 2-D simulations and with 3-D experimental measurement data.

3.1. Estimates

The following reconstruction approaches were considered:

Nonlinear: nonlinear reconstruction of (x1, δxr) with the proposed (nonlinear) difference imaging method by solving

  (x̂1, δx̂r) = arg min { ‖Lē(ȳ − Ā(x̄))‖² + p1(x1) + p2(δxr) }.  (24)

Linear: conventional (linear) difference reconstruction by solving

  δx̂ = arg min { ‖Lδe(δy − J δx)‖² + ‖Lδx δx‖² }.  (25)

3.2. Regularization Functionals

For modeling p1(x1) in the nonlinear difference imaging approach Eq. (24), we used a quadratic Gaussian smoothness regularization of the form

  p1(x1) = ‖L1(x1 − η1)‖².  (26)

In the construction of the Gaussian smoothness regularization functional Eq. (26), the absorption and scattering coefficients μa and μs′ were modeled as mutually independent Gaussian random fields. In the construction of the prior covariances, the random field x was considered in the form x = xin + xbg, where xin is a spatially inhomogeneous parameter with zero mean, xin ∼ N(0, Γin), and xbg is a spatially constant (background) parameter with nonzero mean. For the latter, we can write xbg = b·1, where 1 is a vector of ones and b is a scalar random variable with Gaussian distribution b ∼ N(ηb, σbg²). In the construction of Γin, the approximate correlation length was adjusted to match the expected size of the inhomogeneities, and the marginal variances of xin were tuned based on the expected variation of the optical properties in the initial state. See Refs.
22, 23, and 26 for further details. Modeling xin and xbg as mutually independent, we obtain

  Γ1 = Γin + σbg² 1 1ᵀ,

where the Cholesky factor L1 satisfies L1ᵀL1 = Γ1⁻¹ in Eq. (26). This particular Gaussian smoothness regularization has been previously used in absolute imaging in Refs. 23–25, 27, 28, and 48, and in difference imaging in Ref. 54. In this paper, the correlation length was chosen as 8 mm. The standard deviations of the background and inhomogeneities are given in Table 1.

Table 1. Parameters of the Gaussian smoothness regularization. The first two rows show the standard deviations of the background σbg,δx and the inhomogeneities σin,δx chosen for modeling δx. The next two rows show σbg,x1 and σin,x1 chosen for modeling the initial state x1. Here, x1,* is selected as a homogeneous (spatially constant) estimate of the initial state x1.
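The construction of such a smoothness prior can be sketched as follows. The exponential covariance used here is one common choice; the exact covariance family and the numerical parameter values of the paper may differ, and the node coordinates are hypothetical.

```python
import numpy as np

def smoothness_prior(coords, corr_len, sigma_in, sigma_bg):
    """Covariance of x = x_in + x_bg: a correlated zero-mean field x_in
    plus a spatially constant background with variance sigma_bg**2,
    giving Gamma_1 = Gamma_in + sigma_bg^2 * 1 1^T. Returns Gamma_1 and
    the Cholesky factor L with L^T L = Gamma_1^{-1}, as used in Eq. (26)."""
    c = np.asarray(coords, dtype=float)                     # shape (n, dim)
    d = np.sqrt(((c[:, None, :] - c[None, :, :]) ** 2).sum(-1))
    gamma_in = sigma_in ** 2 * np.exp(-d / corr_len)        # exponential covariance
    one = np.ones((c.shape[0], 1))
    gamma = gamma_in + sigma_bg ** 2 * (one @ one.T)
    inv_g = np.linalg.inv(gamma)
    L = np.linalg.cholesky((inv_g + inv_g.T) / 2.0).T       # L^T L = Gamma^{-1}
    return gamma, L

# hypothetical 1-D node coordinates (mm) with an 8 mm correlation length
coords = np.arange(0.0, 40.0, 4.0)[:, None]
gamma, L = smoothness_prior(coords, corr_len=8.0, sigma_in=0.1, sigma_bg=0.05)
```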
For modeling p2(δx) (in all the following test cases except for case 2 in Sec. 3.3.2), we used a sparsity promoting TV functional

  p2(δx) = α Σk √(‖∇δxk‖² + β),

which is a differentiable approximation of the isotropic TV functional,55 where ∇δxk is the gradient of δx at element k. β is a small parameter that ensures the TV functional is differentiable, and α is the regularization parameter. In this paper, α and β were manually selected. For systematic approaches to regularization parameter selection, see, e.g., Refs. 56 and 57. The values of α and β used in our reconstructions are listed in Table 2.

Table 2. Regularization parameters of the TV regularization for δx. The first row shows the α and β values of μa and μs′ used in the 2-D reconstructions. The second row shows the regularization parameters for the 3-D reconstructions.
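The smoothed TV functional can be evaluated on a pixel grid as below; the grid, α, and β values are illustrative only.

```python
import numpy as np

def tv_smoothed(img, alpha, beta):
    # Differentiable isotropic TV on a 2-D pixel grid:
    # TV(x) = alpha * sum_k sqrt(|grad x_k|^2 + beta)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # forward differences, x direction
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # forward differences, y direction
    return alpha * np.sum(np.sqrt(gx ** 2 + gy ** 2 + beta))

flat = np.zeros((8, 8))
step = np.zeros((8, 8))
step[:, 4:] = 1.0                            # a single sharp edge
p_flat = tv_smoothed(flat, alpha=1.0, beta=1e-8)
p_step = tv_smoothed(step, alpha=1.0, beta=1e-8)
```

A flat image has a near-zero penalty (exactly α·N·√β), while a sharp edge is penalized in proportion to its perimeter, which is why TV promotes piecewise constant, sparse changes.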
For modeling p(δx) in the conventional linear difference imaging Eq. (25), we used the same quadratic Gaussian smoothness regularization as before, except that in this case, while modeling the random field δx = δxin + δxbg, the spatially constant part δxbg had a zero mean, b ∼ N(0, σbg,δx²). The standard deviations of the background and inhomogeneities are given in Table 1. Note that the standard deviations chosen for x1 are larger than those for δx, since x1 consists of absolute values of the optical parameters, unlike δx. Also, since x1 is the “unchanging” parameter between observations y1 and y2 in the model (18), the presence of the same modeling errors in both observations y1 and y2 should mainly affect the estimate of x1 (not δxr). Thus, to allow for large variations in x1 in the presence of modeling errors, we chose larger standard deviations for the prior model of x1.

3.3. Two-Dimensional Target and Simulations

The measurements y1 and y2 were simulated using the 2-D simulation target states x1 and x2 shown in Fig. 1. The shape of the domain was extracted from a segmented adult brain CT scan. The diameter along the sagittal plane was scaled to 100 mm (approximately the size of a newborn baby head). The state x1 had three outer layers (mimicking the skin, skull, and cerebrospinal fluid) and two overlapping circular inclusions in the brain area. State x2 was the same as x1, except that one additional inclusion in μa and one in μs′ were added in the dorsal (back) part of the brain. The optical properties of the states x1 and x2 are listed in Table 3. The measurement setup consisted of 16 sources and 16 detectors modeled as 1-mm-wide surface patches located on the boundary ∂Ω. Random measurement noise, drawn from zero-mean Gaussian distributions with standard deviations specified as 1% of the simulated noise free measurement data, was added to the simulated measurement data. The noise means and covariances Γe1 and Γe2 were assumed known.

Table 3. Optical properties of the target head.
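The nonlinear estimates in the following sections are computed with the Gauss–Newton algorithm of Sec. 2.2.2. A generic damped GN solver for a regularized least squares problem can be sketched as follows; the exponential forward model is a toy stand-in, not the DOT forward solver.

```python
import numpy as np

def gauss_newton(A, jac, y, L_e, L_x, x0, n_iter=20):
    # Damped Gauss-Newton for min_x ||L_e (y - A(x))||^2 + ||L_x x||^2,
    # with a backtracking line search on the step length s.
    cost = lambda z: np.sum((L_e @ (y - A(z))) ** 2) + np.sum((L_x @ z) ** 2)
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        J = jac(x)
        W = L_e.T @ L_e
        H = J.T @ W @ J + L_x.T @ L_x                    # GN Hessian
        g = J.T @ W @ (y - A(x)) - L_x.T @ L_x @ x       # negative half-gradient
        dx = np.linalg.solve(H, g)
        s = 1.0
        while cost(x + s * dx) > cost(x) and s > 1e-8:
            s *= 0.5                                     # backtrack
        x = x + s * dx
    return x

# toy nonlinear forward model A(x) = exp(B x) with noiseless data
rng = np.random.default_rng(1)
B = 0.3 * rng.standard_normal((8, 3))
A = lambda x: np.exp(B @ x)
jac = lambda x: np.exp(B @ x)[:, None] * B
x_true = np.array([0.5, -0.3, 0.8])
y = A(x_true)
x_hat = gauss_newton(A, jac, y, np.eye(8), 1e-3 * np.eye(3), np.zeros(3))
```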
3.3.1. Using different optode arrangements on the boundary

We investigated situations where the optodes (16 sources, 16 detectors) were arranged (a) at equiangular intervals around the whole boundary of the 2-D target and (b) at equiangular intervals only in the dorsal part of the boundary of the 2-D target. In this case, we used a quadratic smoothness promoting functional for modeling x1 and a sparsity promoting TV functional for modeling δxr in the estimate (24). We used a quadratic smoothness promoting functional for modeling δx in the estimate (25). Figures 2(a) and 2(b) show the estimated optical coefficients from a simulation setup where the optodes were placed at equiangular intervals around the whole boundary. Panel (a) shows the estimates of x1 and δxr using nonlinear difference imaging, Eq. (24); the reconstructions converged after three GN iterations. Panel (b) shows the corresponding conventional linear difference imaging estimate of δx, Eq. (25). The computation times of all the remaining 2-D nonlinear and linear estimates were of the same magnitude as in this first test case. Figure 2(c) shows the estimates x̂1 and δx̂r, Eq. (24), where the optodes were placed at the dorsal (upper) half of the boundary. Figure 2(d) shows the corresponding estimate using linear difference imaging, Eq. (25). The errors in the reconstructions, ‖δx − δx̂‖, where δx̂ is the estimated change from Eq. (24) or Eq. (25) and δx is the simulated (true) target change, are listed in Table 4.

Table 4. Reconstruction errors.
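The reference estimates above use the closed form of Eq. (15), which can be sketched as follows; the sensitivity matrix and noise covariances below are a toy example, not the DOT Jacobian.

```python
import numpy as np

def linear_difference_estimate(J, dy, gamma_e1, gamma_e2, L_dx):
    # Closed-form linearized difference imaging estimate, cf. Eq. (15):
    # dx = (J^T W J + L^T L)^{-1} J^T W dy, W = (Gamma_e1 + Gamma_e2)^{-1}
    W = np.linalg.inv(gamma_e1 + gamma_e2)
    H = J.T @ W @ J + L_dx.T @ L_dx
    return np.linalg.solve(H, J.T @ W @ dy)

# toy example: random sensitivity matrix and a noiseless difference signal
rng = np.random.default_rng(3)
J = rng.standard_normal((20, 10))
dx_true = np.zeros(10)
dx_true[2] = 0.5                       # a single localized change
dy = J @ dx_true
dx_hat = linear_difference_estimate(J, dy,
                                    1e-4 * np.eye(20), 1e-4 * np.eye(20),
                                    1e-3 * np.eye(10))
```

With noiseless data and weak regularization the estimate recovers the change; with the severely ill-posed DOT Jacobian the regularization term dominates, which is one source of the spatial spreading seen in the linear reconstructions.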
As can be seen from Fig. 2 and Table 4, nonlinear difference imaging shows better localization and recovery of the change compared to the conventional linear reconstruction for both optode arrangements. The nonlinear estimates of the change are not as spatially spread as the linear estimates. The effect is especially evident when only the upper part of the boundary is covered by sources and detectors. Since only partial sensor coverage would usually be available in practical head imaging, the remaining 2-D simulations using the head domain were carried out using sources and detectors only at the dorsal part of the boundary.

3.3.2. Using different region of interest constraints and regularizations

The purpose of this test case is to demonstrate that the improvement of the proposed approach over the conventional linearized reconstruction is not only due to (1) the ROI constraint and (2) the TV regularization for δxr. The results are shown in Fig. 3. Figure 3(a) shows the nonlinear difference reconstruction Eq. (24) without any ROI constraint, i.e., M = I. Figure 3(b) shows the nonlinear difference reconstruction Eq. (24) using a quadratic smoothness promoting functional for modeling δxr. Figure 3(c) shows the conventional linear difference reconstruction Eq. (25), and Fig. 3(d) shows the conventional linear difference reconstruction Eq. (25) with the ROI constraint. We can see that the results with nonlinear difference imaging [Figs. 3(a) and 3(b)] are quite similar to those obtained when using the ROI constraint and TV regularization [Fig. 2(c)]. Also, the linear difference imaging estimates do not improve much by adding the ROI constraint. This result shows that the improvement of the reconstruction is not only due to the choice of regularization or the use of the ROI constraint alone; it is in significant part due to the specific parametrization and formulation of the nonlinear difference imaging problem.
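The ROI parametrization of Eqs. (16) and (17) can be sketched as follows; the five-node domain, ROI, and parameter values are hypothetical.

```python
import numpy as np

def extension_mapping(roi_mask):
    # Extension mapping M of Eq. (16): a change defined on the ROI
    # nodes is mapped to the full domain and is zero outside the ROI.
    roi_mask = np.asarray(roi_mask, dtype=bool)
    nr = int(roi_mask.sum())
    M = np.zeros((roi_mask.size, nr))
    M[np.flatnonzero(roi_mask), np.arange(nr)] = 1.0
    return M

# five-node toy domain with a hypothetical two-node ROI
mask = np.array([0, 1, 0, 1, 0], dtype=bool)
M = extension_mapping(mask)
x1 = np.full(5, 0.01)                  # initial absorption (mm^-1)
dxr = np.array([0.005, -0.002])        # change within the ROI
x2 = x1 + M @ dxr                      # the parametrization of Eq. (17)
```

Without an ROI constraint, `extension_mapping` on an all-true mask reduces to the identity, recovering the unconstrained case M = I.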
3.3.3. Tolerance toward modeling errors

We tested with simulations the tolerance of the nonlinear and linear difference imaging toward modeling errors. Reconstructions in the presence of (a) domain truncation, (b) unknown optode coupling, and (c) unknown object shape were considered. The reconstructions are shown in Fig. 4, and the errors in the reconstructions are listed in Table 4.

Domain truncation: the first and second rows in Fig. 4 show the estimates x̂1, δx̂r, and δx̂, Eqs. (24) and (25), in the presence of domain truncation, i.e., using a truncated domain as the model domain.

Unknown optode coupling: the second and third rows in Fig. 4 show the estimates x̂1, δx̂r, and δx̂, Eqs. (24) and (25), in the case of inaccurately known optode coupling. The coupling error was simulated as follows. Let c represent the coupling coefficients of the amplitudes and phases of the 16 sources and 16 detectors. A vector valued mapping P(c) collects, for each source–detector pair (s, d), the corresponding source and detector coupling terms for the log amplitude and the phase, where s and d are the source and detector indexes, respectively.27 The data were corrupted with the coupling error P(c), with the source and detector amplitude and phase coefficients drawn from uniform distributions. The data used in estimate (24) were the corrupted measurement vectors y1 and y2, and the data used in estimate (25) was their difference δy.

Unknown object shape: the fourth and fifth rows in Fig. 4 show the estimates x̂1, δx̂r, and δx̂, Eqs. (24) and (25), in the case of an incorrect model domain; here, an incorrectly shaped domain (obtained from a different segmented adult CT scan) was used as the model domain. The reconstruction errors for this case are not listed in Table 4, since the deformation map of the true optical properties from the measurement domain to the model domain is not known.

From the estimates shown in Fig. 4, we can observe that in most cases, the linear difference reconstruction Eq.
(25) is indicative of the location of the change, although the spatial resolution is weak. In the estimates obtained using the nonlinear difference imaging Eq. (24), the reconstructed initial state x̂1 is heavily affected by the modeling errors; however, the estimates of the change δx̂r are relatively free from artifacts. The reason why the modeling errors do not significantly affect the estimates of the change lies in the parametrization of x2 as a linear combination of the initial state and the change, with both treated as unknowns. The variable x1 is invariant between the models for y1 and y2. Consequently, when the proposed parametrization is used, the errors caused by the (invariant) modeling errors propagate mainly to the estimate of x1, which consists of the parameters that are common to the models of both observations y1 and y2.

3.4. Reconstructions in Three Dimensions with Experimental Data

3.4.1. Experimental details

The experiment was carried out with the frequency domain DOT instrument at Aalto University, Helsinki.58 The measurement domains were cylinders with radius 35 mm and height 110 mm (see Fig. 5). The cylindrical phantoms corresponding to states x1 and x2 are illustrated in Fig. 5. The background absorption and (reduced) scattering coefficients at wavelength 800 nm were the same for both phantoms. The cylindrical inclusions in x2, which both have a diameter and height of 9.5 mm, are located such that the central planes of the inclusions coincide with the central transverse plane of the cylinder domain. Inclusion 1 has elevated absorption and background scattering (i.e., purely absorption contrast), and inclusion 2 has background absorption and elevated scattering (i.e., purely scatter contrast). Absolute imaging reconstructions using the phantom with inclusions are presented in Refs. 53, 58, and 59. The source and detector configuration in the experiment consisted of 16 sources and 15 detectors arranged in interleaved order on two rings located 6 mm above and below the central transverse plane of the cylinder domain.
The locations of the sources and detectors are shown with red and blue circles, respectively, in Fig. 5. The measurements were carried out at 785 nm with an optical power of 8 mW and a fixed modulation frequency. The log amplitude and phase shift of the transmitted light were recorded, and the nearest measurement data from each source position were removed, leading to real valued measurement vectors y1 and y2. We employed an error model where the square roots of the diagonal elements (standard deviations) of the error covariances Γe1 and Γe2 were specified as 1% of the absolute values of y1 and y2, implying that the standard deviations of the measurement errors are assumed to be 1% of the measured absolute values of the log amplitude and phase.

3.4.2. Data calibration and initialization of the estimation

Raw instrument data cannot be used directly as an equivalent of simulated data from a model. The measured phase and amplitude were calibrated using the procedure described in Ref. 58, accounting for the differences between the lengths and coupling efficiencies of the different source and detector channels, as well as for the effects of detector gain adjustment during the data collection. Finally, to calibrate the forward model to the measurement setup, the initial estimates for the optical properties of the model were assumed to be homogeneous, and the measured data were used to fit global values of absorption and scattering in the model as well as global coupling factors for phase and amplitude between the model and the measurement. The coupling is intrinsically unknown a priori; therefore, the measured data are corrected by the coupling coefficients obtained from this optimization process. In a sense, the instrument data are calibrated to match the absolute values of the simulations from the forward model.
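The global coupling-factor part of this correction can be sketched as a least squares fit of one additive shift for the log amplitude data and one for the phase data; the data values below are hypothetical, and the full initialization also fits the global background optical parameters.

```python
import numpy as np

def global_offsets(y_meas, y_model, m):
    # Least squares fit of a single additive shift for the log amplitude
    # part (first m entries) and one for the phase part (last m entries);
    # for an additive-constant model the fit reduces to the mean residual.
    da = np.mean(y_meas[:m] - y_model[:m])   # global log-amplitude coupling
    dp = np.mean(y_meas[m:] - y_model[m:])   # global phase coupling
    return da, dp

# hypothetical modeled data and measured data with known global offsets
rng = np.random.default_rng(2)
m = 50
y_model = np.concatenate([-6.0 + 0.5 * rng.standard_normal(m),
                          -0.4 + 0.1 * rng.standard_normal(m)])
y_meas = y_model + np.concatenate([np.full(m, 1.2), np.full(m, 0.08)])
da, dp = global_offsets(y_meas, y_model, m)
y_cal = y_meas - np.concatenate([np.full(m, da), np.full(m, dp)])
```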
However, to avoid the unrealistic calibration setup of having access to data from a homogeneous target, the four-parameter calibration was carried out using the data y2, corresponding to the nonhomogeneous state after the change. Following the initial estimation procedure in Ref. 59, the coupling between the modeled and the measured log amplitude and phase was modeled by additive constants; in other words, we assumed that the coupling factors are constant for all source and detector fibers. Thus, the initialization step consisted of a four-parameter fit of the global background parameters μa and μs′, a global additive shift of the log amplitude data, and a global additive shift of the phase data. The initialization problem was solved by a GN optimization method with an explicit line search.53 Once the initialization was completed, the measurement data were transformed for the nonlinear estimation Eq. (24) by subtracting the recovered global offsets, and the initial parameter values for the nonlinear estimation were set to the estimated homogeneous values of μa and μs′.

3.4.3. Results

Figure 6 shows 2-D slices of the 3-D reconstructions obtained using nonlinear difference imaging Eq. (24). The ROI in this case was selected as a central section of the cylinder (see Fig. 5). Figures 6(a) and 6(b) are the absorption images, and Figs. 6(c) and 6(d) are the scattering images. Figure 7 shows the corresponding 3-D reconstructions with linear difference imaging Eq. (25). From Figs. 6 and 7, one can see that the proposed approach of nonlinear difference imaging shows better recovery of the inclusions when compared to the conventional linear difference imaging. Also, there is no “ringing” effect in the nonlinear reconstruction, while the linear reconstruction exhibits severe ringing artifacts.
These artifacts could mask true changes when more target inclusions are present. These results are in agreement with the 2-D simulation results. In the 3-D case, the CPU time needed for the reconstruction with the nonlinear approach was approximately 36 times the time needed for the linear reconstruction. This can be potentially problematic when processing long time series of data, especially if the reconstructions are needed online. However, in many cases, the requirement of better reconstruction accuracy may outweigh the disadvantage of a longer CPU time, especially if it is sufficient to obtain the results off-line. It should also be noted that, for both the linear and nonlinear approaches, the reported CPU times are based on a nonoptimized MATLAB® implementation using the TOAST toolbox. Where needed, the computations can be made faster by optimizing the implementation.

4. Conclusions

We applied a new approach to difference imaging in DOT. In the approach, the optical coefficients after a change are parameterized as a linear combination of the (unknown) initial state and the change in optical properties. The DOT measurements taken before and after the change are concatenated into a single measurement vector, and the inverse problem is stated as finding the initial optical coefficients and the change given the combined data. This model allows the use of separate spatial models for the initial state and the change in optical coefficients. Furthermore, it allows the use of an ROI constraint for the change in optical coefficients in a straightforward way. The approach was tested with 2-D simulations using different optode placement settings, also in the presence of modeling errors arising from domain truncation, unknown optode coupling, and unknown domain shape; the conventional linearized difference imaging was used as a reference approach. The approach was also tested with experimental phantom data.
The results show that the proposed approach produces better reconstructions compared to standard linear difference imaging, and that the approach is robust to modeling errors to at least a similar extent as the conventional linear reconstruction approach. We believe that the proposed approach can improve the accuracy of difference imaging compared to the linearization-based approach, which is the current standard in difference imaging.

Acknowledgments

This work was supported by the Academy of Finland (projects 136220, 272803, 269282, and 250215, Finnish Centre of Excellence in Inverse Problems Research) and the Finnish Doctoral Programme in Computational Sciences. The authors would also like to acknowledge Dong Liu for useful discussions.

References

1. J. P. Culver et al.,
“Volumetric diffuse optical tomography of brain activity,”
Opt. Lett., 28
(21), 2061
–2063
(2003). http://dx.doi.org/10.1364/OL.28.002061 OPLEDP 0146-9592 Google Scholar
2. J. C. Hebden et al., "Three-dimensional optical tomography of the premature infant brain," Phys. Med. Biol. 47(23), 4155–4166 (2002). http://dx.doi.org/10.1088/0031-9155/47/23/303
3. T. Näsi et al., "Effect of task-related extracerebral circulation on diffuse optical tomography: experimental data and simulations on the forehead," Biomed. Opt. Express 4(3), 412–426 (2013). http://dx.doi.org/10.1364/BOE.4.000412
4. B. J. Tromberg et al., "Assessing the future of diffuse optical imaging technologies for breast cancer management," Med. Phys. 35(6), 2443 (2008). http://dx.doi.org/10.1118/1.2919078
5. D. Leff et al., "Diffuse optical imaging of the healthy and diseased breast: a systematic review," Breast Cancer Res. Treat. 108(1), 9–22 (2008). http://dx.doi.org/10.1007/s10549-007-9582-z
6. G. Gulsen et al., "Combined diffuse optical tomography (DOT) and MRI system for cancer imaging in small animals," Technol. Cancer Res. Treat. 5(4), 351–363 (2006). http://dx.doi.org/10.1177/153303460600500407
7. J. P. Culver et al., "Diffuse optical tomography of cerebral blood flow, oxygenation and metabolism in rat during focal ischemia," J. Cereb. Blood Flow Metab. 23(8), 911–924 (2003). http://dx.doi.org/10.1097/01.WCB.0000076703.71231.BB
8. S. R. Hintz et al., "Bedside functional imaging of the premature infant brain during passive motor activation," J. Perinat. Med. 29(4), 335–343 (2001). http://dx.doi.org/10.1515/JPM.2001.048
9. A. Gibson et al., "Three-dimensional whole-head optical tomography of passive motor evoked responses in the neonate," NeuroImage 30(2), 521–528 (2006). http://dx.doi.org/10.1016/j.neuroimage.2005.08.059
10. J. C. Hebden et al., "Imaging changes in blood volume and oxygenation in the newborn infant brain using three-dimensional optical tomography," Phys. Med. Biol. 49(7), 1117 (2004). http://dx.doi.org/10.1088/0031-9155/49/7/003
11. T. Zhang et al., "Pre-seizure state identified by diffuse optical tomography," Sci. Rep. 4, 3798 (2014).
12. A. T. Eggebrecht et al., "A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping," NeuroImage 61(4), 1120–1128 (2012). http://dx.doi.org/10.1016/j.neuroimage.2012.01.124
13. Y. Zhan et al., "Image quality analysis of high-density diffuse optical tomography incorporating a subject-specific head model," Front. Neuroenerg. 4(6) (2012). http://dx.doi.org/10.3389/fnene.2012.00006
14. E. M. C. Hillman et al., "Calibration techniques and datatype extraction for time-resolved optical tomography," Rev. Sci. Instrum. 71(9), 3415 (2000). http://dx.doi.org/10.1063/1.1287748
15. C. H. Schmitz et al., "Instrumentation and calibration protocol for imaging dynamic features in dense-scattering media by optical tomography," Appl. Opt. 39(34), 6466–6486 (2000). http://dx.doi.org/10.1364/AO.39.006466
16. D. Boas, T. Gaudette, and S. Arridge, "Simultaneous imaging and optode calibration with diffuse optical tomography," Opt. Express 8(5), 263–270 (2001). http://dx.doi.org/10.1364/OE.8.000263
17. M. Schweiger et al., "Image reconstruction in optical tomography in the presence of coupling errors," Appl. Opt. 46(14), 2743–2756 (2007). http://dx.doi.org/10.1364/AO.46.002743
18. X. Wu et al., "Quantitative evaluation of atlas-based high-density diffuse optical tomography for imaging of the human visual cortex," Biomed. Opt. Express 5(11), 3882–3900 (2014). http://dx.doi.org/10.1364/BOE.5.003882
19. A. Gibson et al., "Optical tomography of a realistic neonatal head phantom," Appl. Opt. 42(16), 3109–3116 (2003). http://dx.doi.org/10.1364/AO.42.003109
20. A. H. Barnett et al., "Robust inference of baseline optical properties of the human head with three-dimensional segmentation from magnetic resonance imaging," Appl. Opt. 42(16), 3095–3108 (2003). http://dx.doi.org/10.1364/AO.42.003095
21. B. W. Pogue and K. D. Paulsen, "High-resolution near-infrared tomographic imaging simulations of the rat cranium by use of a priori magnetic resonance imaging structural information," Opt. Lett. 23(21), 1716–1718 (1998). http://dx.doi.org/10.1364/OL.23.001716
22. J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer, New York (2005).
23. S. R. Arridge et al., "Approximation errors and model reduction with an application in optical diffusion tomography," Inverse Probl. 22(1), 175–195 (2006). http://dx.doi.org/10.1088/0266-5611/22/1/010
24. T. Tarvainen et al., "An approximation error approach for compensating for modelling errors between the radiative transfer equation and the diffusion approximation in diffuse optical tomography," Inverse Probl. 26(1), 015005 (2010). http://dx.doi.org/10.1088/0266-5611/26/1/015005
25. T. Tarvainen et al., "Corrections to linear methods for diffuse optical tomography using approximation error modelling," Biomed. Opt. Express 1(1), 209–222 (2010). http://dx.doi.org/10.1364/BOE.1.000209
26. V. Kolehmainen et al., "Marginalization of uninteresting distributed parameters in inverse problems–application to diffuse optical tomography," Int. J. Uncertainty Quantif. 1(1), 1–17 (2011). http://dx.doi.org/10.1615/Int.J.UncertaintyQuantification.v1.i1.10
27. M. Mozumder et al., "Compensation of optode sensitivity and position errors in diffuse optical tomography using the approximation error approach," Biomed. Opt. Express 4(10), 2015–2031 (2013). http://dx.doi.org/10.1364/BOE.4.002015
28. M. Mozumder et al., "Compensation of modeling errors due to unknown domain boundary in diffuse optical tomography," J. Opt. Soc. Am. A 31(8), 1847–1855 (2014). http://dx.doi.org/10.1364/JOSAA.31.001847
29. Y. Pei, F. B. Lin, and R. Barbour, "Modeling of sensitivity and resolution to an included object in homogeneous scattering media and in MRI-derived breast maps," Opt. Express 5(10), 203–219 (1999). http://dx.doi.org/10.1364/OE.5.000203
30. A. Bluestone et al., "Three-dimensional optical tomography of hemodynamics in the human head," Opt. Express 9(6), 272–286 (2001). http://dx.doi.org/10.1364/OE.9.000272
31. Y. Pei, H. L. Graber, and R. L. Barbour, "Influence of systematic errors in reference states on image quality and on stability of derived information for DC optical imaging," Appl. Opt. 40(31), 5755–5769 (2001). http://dx.doi.org/10.1364/AO.40.005755
32. T. Austin et al., "Three dimensional optical imaging of blood volume and oxygenation in the neonatal brain," NeuroImage 31(4), 1426–1433 (2006). http://dx.doi.org/10.1016/j.neuroimage.2006.02.038
33. R. L. Barbour et al., "Optical tomographic imaging of dynamic features of dense-scattering media," J. Opt. Soc. Am. A 18(12), 3018–3036 (2001). http://dx.doi.org/10.1364/JOSAA.18.003018
34. Y. Xu, H. L. Graber, and R. L. Barbour, "Image correction algorithm for functional three-dimensional diffuse optical tomography brain imaging," Appl. Opt. 46(10), 1693–1704 (2007). http://dx.doi.org/10.1364/AO.46.001693
35. R. L. Barbour et al., "Strategies for imaging diffusing media," Transp. Theory Stat. Phys. 33(3–4), 361–371 (2004). http://dx.doi.org/10.1081/TT-200051950
36. H. L. Graber et al., "Spatial deconvolution technique to improve the accuracy of reconstructed three-dimensional diffuse optical tomographic images," Appl. Opt. 44(6), 941–953 (2005). http://dx.doi.org/10.1364/AO.44.000941
37. Y. Xu et al., "Improved accuracy of reconstructed diffuse optical tomographic images by means of spatial deconvolution: two-dimensional quantitative characterization," Appl. Opt. 44(11), 2115–2139 (2005). http://dx.doi.org/10.1364/AO.44.002115
38. Y. Xu et al., "Image quality improvement via spatial deconvolution in optical tomography: time-series imaging," J. Biomed. Opt. 10(5), 051701 (2005). http://dx.doi.org/10.1117/1.2103747
39. H. L. Graber, Y. Xu, and R. L. Barbour, "Image correction scheme applied to functional diffuse optical tomography scattering images," Appl. Opt. 46(10), 1705–1716 (2007). http://dx.doi.org/10.1364/AO.46.001705
40. J. Heiskala, P. Hiltunen, and I. Nissilä, "Significance of background optical properties, time-resolved information and optode arrangement in diffuse optical imaging of term neonates," Phys. Med. Biol. 54(3), 535 (2009). http://dx.doi.org/10.1088/0031-9155/54/3/005
41. D. Liu et al., "A nonlinear approach to difference imaging in EIT; assessment of the robustness in the presence of modelling errors," Inverse Probl. 31(3), 035012 (2015). http://dx.doi.org/10.1088/0266-5611/31/3/035012
42. D. Liu et al., "Estimation of conductivity changes in a region of interest with electrical impedance tomography," Inverse Probl. Imaging 9(1), 211–229 (2015). http://dx.doi.org/10.3934/ipi.2015.9.211
43. A. Ishimaru, Wave Propagation and Scattering in Random Media, Academic, New York (1978).
44. S. Arridge, "Optical tomography in medical imaging," Inverse Probl. 15(2), R41–R93 (1999). http://dx.doi.org/10.1088/0266-5611/15/2/022
45. A. Douiri et al., "Anisotropic diffusion regularization methods for diffuse optical tomography using edge prior information," Meas. Sci. Technol. 18(1), 87 (2007). http://dx.doi.org/10.1088/0957-0233/18/1/011
46. P. K. Yalavarthy et al., "Structural information within regularization matrices improves near infrared diffuse optical tomography," Opt. Express 15(13), 8043–8058 (2007). http://dx.doi.org/10.1364/OE.15.008043
47. K. D. Paulsen and H. Jiang, "Enhanced frequency-domain optical image reconstruction in tissues through total-variation minimization," Appl. Opt. 35(19), 3447–3458 (1996). http://dx.doi.org/10.1364/AO.35.003447
48. V. Kolehmainen, "Novel approaches to image reconstruction in diffusion tomography," University of Kuopio, Kuopio, Finland (2001).
49. N. Cao, A. Nehorai, and M. Jacobs, "Image reconstruction for diffuse optical tomography using sparsity regularization and expectation-maximization algorithm," Opt. Express 15(21), 13695–13708 (2007). http://dx.doi.org/10.1364/OE.15.013695
50. J. F. P.-J. Abascal et al., "Fluorescence diffuse optical tomography using the split Bregman method," Med. Phys. 38(11), 6275–6284 (2011). http://dx.doi.org/10.1118/1.3656063
51. J. C. Ye et al., "Optical diffusion tomography by iterative-coordinate-descent optimization in a Bayesian framework," J. Opt. Soc. Am. A 16(10), 2400–2412 (1999). http://dx.doi.org/10.1364/JOSAA.16.002400
52. D. A. Boas, A. M. Dale, and M. A. Franceschini, "Diffuse optical imaging of brain activation: approaches to optimizing image sensitivity, resolution, and accuracy," NeuroImage 23(Suppl. 1), S275–S288 (2004). http://dx.doi.org/10.1016/j.neuroimage.2004.07.011
53. M. Schweiger, S. R. Arridge, and I. Nissilä, "Gauss-Newton method for image reconstruction in diffuse optical tomography," Phys. Med. Biol. 50(10), 2365–2386 (2005). http://dx.doi.org/10.1088/0031-9155/50/10/013
54. J. Heiskala et al., "Approximation error method can reduce artifacts due to scalp blood flow in optical brain activation imaging," J. Biomed. Opt. 17(9), 096012 (2012). http://dx.doi.org/10.1117/1.JBO.17.9.096012
55. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D Nonlinear Phenom. 60(1–4), 259–268 (1992). http://dx.doi.org/10.1016/0167-2789(92)90242-F
56. Y. Mamatjan et al., "An experimental clinical evaluation of EIT imaging with l-1 data and image norms," Physiol. Meas. 34(9), 1027 (2013). http://dx.doi.org/10.1088/0967-3334/34/9/1027
57. A. S. K. Karhunen et al., "Electrical resistance tomography for assessment of cracks in concrete," ACI Mater. J. 107(5), 523–531 (2010).
58. I. Nissilä et al., "Instrumentation and calibration methods for the multichannel measurement of phase and amplitude in optical tomography," Rev. Sci. Instrum. 76(4), 044302 (2005). http://dx.doi.org/10.1063/1.1884193
59. V. Kolehmainen et al., "Approximation errors and model reduction in three-dimensional diffuse optical tomography," J. Opt. Soc. Am. A 26(10), 2257–2268 (2009). http://dx.doi.org/10.1364/JOSAA.26.002257
Biography

Meghdoot Mozumder is a doctoral student at the University of Eastern Finland. He received his MSc degree in physics from the Indian Institute of Technology, Kanpur, in 2011. His current research interests include optical imaging and inverse problems.

Tanja Tarvainen is an academy research fellow at the University of Eastern Finland. She received her PhD in 2006 from the University of Kuopio. She is the author of more than 30 journal papers. Her current research interests include diffuse optical tomography, quantitative photoacoustic tomography, and Bayesian inverse problems.

Aku Seppänen is an academy research fellow at the University of Eastern Finland. He received his PhD degree in 2006 from the University of Kuopio. He has authored 27 journal papers and two book chapters. His research focuses on inverse problems, with applications to imaging, nondestructive testing, and remote sensing.

Ilkka Nissilä is an academy research fellow at Aalto University. He received his DSc degree from the Helsinki University of Technology in 2004. His main research interests include the development of diffuse optical tomography in combination with other modalities for the neuroimaging of children.

Simon R. Arridge is a professor of image processing at the Department of Computer Science and the director of the Centre for Inverse Problems at University College London. He received his PhD in 1990 from University College London. He is an author of more than 150 journal papers. His research interests include inverse problem methods, image reconstruction, regularization, modeling, and numerical methods.

Ville Kolehmainen is an associate professor at the University of Eastern Finland. He received his PhD in 2001 from the University of Kuopio. He is the author of more than 50 journal papers and has written four book chapters. His current research interests include computational inverse problems, especially in tomographic imaging.