Significance: Image reconstruction of fNIRS data is a useful technique for transforming channel-based fNIRS into a volumetric representation and managing spatial variance based on optode location. We present an innovative integrated pipeline for image reconstruction of fNIRS data using either MRI templates or individual anatomy.

Aim: We demonstrate a pipeline with accompanying code that allows users to clean and prepare optode location information, prepare and standardize individual anatomical images, create the light model, run the 3D image reconstruction, and analyze data in group space.

Approach: We synthesize a combination of new and existing software packages to create a complete pipeline, from raw data to analysis.

Results: This pipeline has been tested using both templates and individual anatomy, and on data from different fNIRS data collection systems. We show high temporal correlations between channel-based and image-based fNIRS data. In addition, we demonstrate the reliability of this pipeline with a sample dataset of 74 children who took part in a longitudinal study in Scotland, showing good correspondence between data in channel space and image-reconstructed data.

Conclusions: The pipeline presented here makes a unique contribution by integrating multiple tools to assemble a complete pipeline for image reconstruction in fNIRS. We highlight further issues that may be of interest to future software developers in the field.
1. Introduction

In recent years, there has been a rise in the use of functional near-infrared spectroscopy (fNIRS) in neuroimaging studies.1–5 fNIRS is a safe, non-invasive neuroimaging method that measures the concentration of oxygenated and deoxygenated hemoglobin. It has become an important neuroimaging tool for developmental neuroscience and other fields where participant characteristics, naturalistic study paradigms, or costs make functional MRI difficult to use, such as in low-resource settings.6–8

A key limitation of fNIRS is that optode locations can vary, making it difficult to compare across participants within a study as well as across studies. Spatial variance in brain sampling can occur due to slight differences in cap placement on the head, but even with precise cap placement, variance can arise from subtle differences in head shape and size. This makes it challenging to draw robust inferences across participants from analyses conducted in channel space, given the potentially erroneous but strong assumption that source-detector pairs are consistently placed relative to functional brain areas.

To overcome this limitation, there have been key innovations in image-reconstructed fNIRS, which uses a head model as a spatial prior to transform channel-based fNIRS data into a volumetric representation. Such approaches either use Monte Carlo simulations of photon migration in the brain9 or solve the photon diffusion equation10 to move statistical analyses from channel space on the surface of the head into voxel space within the brain volume. Using such methods, therefore, localizes patterns of activation from fNIRS into specific brain regions, allowing for more robust comparisons across participants and groups as data are registered into a common space.

In image-reconstruction approaches, the head model used for reconstruction typically comes from an anatomical head atlas,1,11–14 although one can also use subject-specific anatomy from an individual MRI scan.15 To date, the majority of image-reconstruction approaches have used anatomical templates, where a template atlas is used for registering the digitization and transforming into subject space. While this allows for uniformity and consistency between subjects, it overlooks the role that individual anatomical differences may play in the analyses. A subject with higher-than-average cerebrospinal fluid (CSF) in a particular area of the brain, for example, will have different tissue properties in that region, which will lead to different amounts of scattering and absorption of near-infrared light. The use of individual anatomy allows such individual-level differences to inform the analyses. However, standardizing and preparing individual anatomies adds layers of complexity to the analysis, a hurdle that is addressed in the present paper.

The process of image reconstruction, particularly when using subject-specific anatomy, is complex and requires multiple steps. Optode locations must be collected and checked relative to a template, with a method for correcting any digitization errors. The digitized positions must then be formatted for use with software that can create a light model. Individual MRI images must be optimized through removal of background noise, and the image must then be segmented to create the tissue model needed for photon migration simulations. The segmented head and optode locations then need to be aligned and photon migration simulations run to create the sensitivity profiles needed for image reconstruction.
The fNIRS data then need to be pre-processed in channel space and combined with the sensitivity profiles for image reconstruction. These subject-specific results then need to be analyzed at the individual level [e.g., using a general linear modeling (GLM) approach] and aligned in a common group space for statistical analysis.

A current difficulty for this type of work is the lack of analysis pipelines that pull all of these steps together. No current software or published pipeline covers all aspects of this approach, making it difficult for users to adopt an integrated approach that includes image reconstruction within the full scope of the processing pipeline. The present paper, therefore, presents a pipeline that takes the user from the initial pre-processing and cleanup steps all the way through to group analysis, with the hope that this will lead to greater transparency of results and make data sharing easier between research groups. The tools employed in this pipeline are all publicly available online.

1.1. Overview of the Pipeline

Figure 1 shows the steps to be followed using this pipeline; these steps are detailed below. Briefly, the first step handles preparing and cleaning the digitization data, i.e., estimates of the three-dimensional (3D) positions of the fNIRS optodes on the head. Next, we discuss how to prepare the MRI images. Here, we focus on preparing individual MRI images (see left, top in Fig. 1). For studies using an atlas that already contains a segmented head (see right, top in Fig. 1), the user can skip to setting up and running Monte Carlo simulations to create the light model. Where users have a template MRI that has not yet been segmented, they may wish to use the segmentation steps described in this pipeline while bypassing the cleaning steps, similar to a user in possession of individual MRIs without background noise in a good orientation. Nonetheless, it is recommended that users read the steps on preparing individual anatomical files to determine whether their files meet the formatting requirements.

Once the MRI data are prepared, the next step is to create the light model. There are multiple software packages that handle this step; we describe one implementation using Homer2 (Ref. 16; https://homer-fnirs.org/). The pipeline is also fully compatible with Homer3. Then, fNIRS data must be pre-processed. Again, there are multiple options here, and we describe a solution using Homer2. The next step is image reconstruction. Here, we use tools from the NeuroDOT package17 (https://github.com/WUSTL-ORL/NeuroDOT_Beta). We then run an individual-level GLM on the data, again using NeuroDOT tools. Finally, we describe how to transfer the data into a common group space for a group-level analysis. We demonstrate the final data produced by the pipeline using recent data from a study looking at the development of visual working memory (VWM) in children. All code and full documentation for the pipeline steps can be found on the GitHub sites described in each section below.

2. Preparing Digitizations

2.1. Collecting Digitization Data

To locate the optodes on the head, we collect digitizations with a Polhemus Patriot FCC Class B digitizing device (https://polhemus.com/scanning-digitizing/digitizing-products/). The device consists of an electromagnetic receiver, placed on top of the head, and a wand whose tip is used to mark the placement of each optode. The device default is set to capture the locations in inches from the receiver, but we changed this to centimeters to be compatible with Homer2 (Refs. 14 and 16), which we use in subsequent pipeline steps. Typically, digitizations are assumed to be in the format of one point per row, where the first set of points are reference landmarks on the head, followed by the sources and then the detectors. Figure 2 shows a typical digitization captured with a small amount of error, pulling points out of alignment (in this case due to reaching across the head).
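To make the assumed file layout concrete, the following Matlab sketch reads one such export into labeled point sets. The file name, the number of reference points, and the nSrc/nDet counts are hypothetical placeholders, and the Polhemus ASCII export may carry extra leading columns (e.g., a point index) depending on device configuration, which is why only the last three columns are kept.

    % Hedged sketch: read one Polhemus ASCII export (one point per row) into
    % reference, source, and detector sets. File name and counts are
    % placeholders; adjust nRef/nSrc/nDet to the probe design in use.
    raw  = readmatrix('participant01.txt', 'FileType', 'text');
    xyz  = raw(:, end-2:end);                    % keep the x, y, z columns (cm)
    nRef = 5; nSrc = 12; nDet = 24;              % hypothetical counts
    refs = xyz(1:nRef, :);                       % head landmarks come first
    srcs = xyz(nRef+1:nRef+nSrc, :);             % then sources
    dets = xyz(nRef+nSrc+1:nRef+nSrc+nDet, :);   % then detectors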
2.2. Creating Digitization Templates

One of the challenges of using a digitizing pen is that errors in positioning can arise. For instance, in Fig. 2, several points are slightly mis-localized, resulting in a distorted geometry. This can be caused by a slight slippage of the digitizing pen when the participant moves, inducing a slight mis-localization of the pen tip. Such minor errors are difficult to identify on the fly, particularly when the head being digitized is on the body of a squirmy child. To combat errors in mis-localization of points, we developed a package for the R language (https://www.r-project.org/), digitizeR (www.github.com/samhforbes/digitizeR), which takes digitized input from multiple participants for a given head size, creates a digitization template, and identifies and fixes mis-localization errors in individual caps.

Once collected, the digitizations for each participant can be read into digitizeR by cap size. For ease of visualizing caps as they change, we also recommend RStudio. The digitizeR package will read in the Polhemus output files (saved in ASCII text format) and create a list for each cap size. Next, digitizeR finds the best caps to align the other caps to for each cap size. This is done using the Kabsch algorithm,18 a method of finding the optimal rotation between two matrices that minimizes the root-mean-square deviation between corresponding points. DigitizeR uses this algorithm to align each cap to all the other caps of that size. Since every digitization runs the risk of participant movement throwing off the values for any given point (or the entire cap), each cap is rotated to match every other cap. The assumption here is that all the caps that were digitized well will align with each other, and the caps that were digitized poorly will fail to align. For example, for three caps A, B, and C, if caps A and B are good caps and C is a poor digitization, A will align with B within a given 3D criterion distance and B will align with A, whereas C will align with neither A nor B.

The next step is to use this information about which caps align to create a template. In our hypothetical situation, two caps, A and B, align with each other within the criterion distance, so the template will be defined by calculating the 3D mean of each corresponding point in A and B. Using this process, a template cap is created for each cap size. Figure 3 shows a template created from a group of caps. In this instance, the template digitization was created from a group of caps with a head circumference of 48 cm. Caps were aligned using the above method with a 3D distance threshold of 10 cm. This took the reference cap with the most matches at that cap size, where no point was allowed to exceed a 3D distance of 10 cm from the reference; in this instance, the aligned caps had a mean distance of 3.91 cm from the reference cap. The mean of each of the points across all of the matching caps was then calculated to create the template.
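The Kabsch step at the core of this alignment is compact enough to sketch. The following Matlab function is an illustrative re-implementation (digitizeR contains the reference implementation in R); P and Q are assumed to be n-by-3 matrices of corresponding points from two caps.

    % Minimal Kabsch sketch: find the rotation R and translation t that best
    % map points P onto corresponding points Q in the least-squares sense.
    function [R, t] = kabsch(P, Q)
        cP = mean(P, 1); cQ = mean(Q, 1);   % centroids of each point set
        H  = (P - cP)' * (Q - cQ);          % 3x3 cross-covariance matrix
        [U, ~, V] = svd(H);
        d = sign(det(V * U'));              % guard against an improper rotation
        R = V * diag([1 1 d]) * U';         % optimal rotation
        t = cQ' - R * cP';                  % translation applied after rotation
    end

Applying aligned = (R * P' + t)' puts one cap in the other's space; the per-point 3D distances between aligned and Q can then be compared against the criterion distance described above.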
Once created, templates can be saved in digitizeR and used for all participants of that cap size if desired. This would yield an average acceptable digitization that is unique to each cap size but captures the data collected across the sample. Alternatively, as we describe in the next section, digitizeR allows using the template to clean and save modified versions of the individual digitizations, allowing the user to preserve as much of the individual-level variation in optode location and cap placement as possible.

2.3. Correcting Individual Caps

Once templates have been constructed, each participant cap can be aligned to the template for that cap size, again using the Kabsch algorithm. There are two sources of variation in the digitized locations in 3D space: true variation in cap placement and optode location due to factors such as wrinkles in the cap or the cap being off-center; and errors such as digitization of the incorrect point or movement causing the point to be unrealistically far from the head. The goal of the correction at the individual cap level is to preserve as much of the former variance as possible, which reflects the true placement of the optode on the head, while eliminating the large sources of the latter type of variance.

The digitizeR package offers three correction methods: a three-step alignment method, an iterative method, and a head-wise method. The three-step method allows the user to specify three threshold 3D distances against which the points are sequentially tested relative to the template for that cap size; points outside each threshold are removed, and the cap is re-aligned without them. As an example, a three-step alignment with distances set at 12, 8, and 6 cm would first exclude points beyond a 3D distance of 12 cm, re-align without those points, do a second pass with a threshold of 8 cm, re-align again, and then do a third pass with a threshold of 6 cm, removing the points beyond this threshold. After the final distance check, the missing points are replaced with points from the relevant template. The difficulty with the three-step method is selecting the three distances, which can depend on the data being used. The data that were cleaned using the template in Fig. 3 were cleaned using threshold distances of 12, 10, and 7 cm, with the end goal of ensuring that no more than one-third of the original data points had been replaced.

The iterative approach follows the same logic as the three-step approach but, rather than using user-specified distances, it starts with a 3D distance of 20 cm and replaces points outside that distance. It then reduces the distance by 1 cm and repeats the procedure of aligning to the template and removing the points outside the 3D distance, either until the proportion of replaced points reaches a user-specified limit or until the remaining points are within half a centimeter of the template. The head-wise method follows a similar logic but deals with entire caps rather than individual points: the mean 3D distance of all the points in the cap from the template is calculated, and caps exceeding the threshold distance are replaced with a complete template cap; otherwise, the cap remains as it was.
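As an illustration of the three-step logic, the Matlab sketch below re-expresses the procedure using the kabsch helper from the previous sketch (threestep_alignment() in digitizeR is the reference implementation). Here cap and tmpl are assumed to be n-by-3 matrices of corresponding points in centimeters, and the thresholds are those from the example above.

    % Illustrative three-step correction: progressively tighter thresholds,
    % re-aligning on trusted points only, then back-filling from the template.
    thresholds = [12 8 6];                 % cm; choose to suit the data at hand
    keep = true(size(cap, 1), 1);
    for th = thresholds
        [R, t] = kabsch(cap(keep,:), tmpl(keep,:)); % align using trusted points
        capAligned = (R * cap' + t)';               % whole cap in template space
        d3 = sqrt(sum((capAligned - tmpl).^2, 2));  % per-point 3D distances
        keep = keep & (d3 <= th);                   % drop points beyond threshold
    end
    capAligned(~keep, :) = tmpl(~keep, :);          % replace outliers from template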
In this paper, we used the three-step method, implemented in the R function threestep_alignment(), although the best approach may depend on the types of errors seen in the data. In the case of our dataset, we wanted to minimize the number of points we were replacing while ensuring that our caps were as correct as possible. While some trial and error can be necessary to determine the best distances for the caps at hand, the three-step method is a good option for this purpose. The points that are removed in this procedure are then filled in with the equivalent points from the template once the cap is aligned, yielding a complete cap that is aligned in the same space as the template cap.

We tested the three-step method by taking two templates, loading them in as caps, and moving the first point [the landmark nasion (NZ)] much further away from the other points. We then used the original templates to repair them. The test of this method is that the three-step alignment should correct that point on each of the two templates, but no other points, as all other points derive from the template itself. When tested this way, only the two bad points are corrected, having been replaced with the original template points, and all other points remain the same.

2.4. Preparing for Use with Homer2/Homer3

The final versions of the digitized caps after alignment to the template can be saved and exported from R using the built-in functions from the digitizeR package. The save_caps option automatically saves each of the caps in its own folder as a digpts.txt file, in the format required by Homer2/Homer3, which includes reference, source, or detector labels for each point. In addition, the caps are converted by default to distances in millimeters (from centimeters), making them ready for import into Homer2 or Homer3. Full documentation for any of the digitizeR functions can be found at www.github.com/samhforbes/digitizeR.
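For users preparing this file outside digitizeR, the following Matlab sketch writes a cleaned cap in a digpts.txt-style layout (one labeled point per row, coordinates in mm). The label set and ordering here are illustrative assumptions about the expected format; save_caps in digitizeR produces the file automatically, so this is only needed for custom workflows.

    % Hedged sketch: write points pts (n-by-3, in cm) as a labeled digpts.txt.
    % Labels are illustrative: reference landmarks, then sources, then detectors.
    nSrc = 12; nDet = 24;                    % hypothetical counts
    labels = [{'nz','ar','al','cz','iz'}, ...
              arrayfun(@(k) sprintf('s%d', k), 1:nSrc, 'UniformOutput', false), ...
              arrayfun(@(k) sprintf('d%d', k), 1:nDet, 'UniformOutput', false)];
    fid = fopen('digpts.txt', 'w');
    for i = 1:numel(labels)
        fprintf(fid, '%s: %.1f %.1f %.1f\n', labels{i}, pts(i,:) * 10); % cm -> mm
    end
    fclose(fid);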
3. MRI Cleanup and Segmentation

One of the current challenges with fNIRS image reconstruction is that there are few pipelines that can handle both atlas-based and individual-MRI-based registration. The latter case is particularly challenging, as individual MRIs require both cleanup and segmentation before this information can be used in the registration process. These steps can be quite challenging, particularly in developmental research, as infant and child MRI data can have unique characteristics. Thus, in this section, we describe steps in our pipeline to help researchers standardize MRI cleanup and segmentation. The segmentation process is based on having an anatomical T1-weighted image from each participant. The T1-weighted MRI images can be treated differently in this pipeline based on the age of the participants and image quality. The segmentation process uses tools from Analysis of Functional NeuroImages (AFNI; https://afni.nimh.nih.gov) to generate a label map that can readily be imported into AtlasViewerGUI, a graphical user interface contained within Homer2 (Refs. 14 and 19), providing the brain and skull surfaces used in the analysis of the fNIRS data (note that AtlasViewerGUI is also available in Homer3). The steps of the segmentation process are outlined here, along with the options that can be employed to handle scans from infants as well as low-SNR images.

The starting point for the processing pipeline is an anatomical T1-weighted image in which the nose is approximately aligned with the axis of the scanner [i.e., the nose should point forward (anterior) in an axial image]. If the orientation of the nose is more than 15 deg from this orientation (for example, in a sleeping infant), we have found it helpful to rotate the image such that the nose is roughly aligned with the y-axis before initiating the pipeline. This can be done using 3drotate from AFNI or similar functionality from other image analysis packages. Next, the image is resampled (3dresample) into a standard right-anterior-superior (RAS) orientation. This ensures that the image will be oriented properly once imported into AtlasViewer. The next step of the segmentation pipeline is bias field correction (3dUnifize); this step is optional and can be used if large variations in signal exist across the image. We then create a brain mask to define brain tissue in the image. This step is sensitive to the age of the subject being analyzed, since infants have a significantly smaller intracranial volume than adults. Thus, this step has two options for generating the brain mask: 3dSkullStrip and ROBEX.20 The 3dSkullStrip option performs well in most cases, and the user can set the initial radius and expansion rate for the sphere inflation used in the underlying algorithm. For infant scans, we have found improved performance with the ROBEX-based brain mask. For details on using these functions, see the instruction guide available at https://github.com/developmentaldynamicslab/MRI-NIRS_Pipeline.

The next step of the pipeline is to put the T1-weighted scan into anterior commissure (AC)-posterior commissure (PC) alignment. To do this, the brain mask from the prior step is used to extract the brain from the T1-weighted image, which is subsequently aligned with a skull-stripped Talairach atlas image using the @auto_tlrc command from AFNI. The rigid-body component of the resulting transform is saved and used to reorient the subject's T1-weighted scan and brain mask into AC-PC alignment. Next, we pad the image with zeros; this ensures that the surface generated will be closed when imported into AtlasViewer. We then generate a skull mask from the image by identifying an optimal threshold (3dClipLevel) for background removal. The mask generated by thresholding at this level is then filled to define all voxels within the skull. The next step of the pipeline is optional and applies a median filter to the image; this may be necessary for low-SNR images. The final step of the pipeline uses the 3dSeg command from AFNI to segment the tissue into gray matter, white matter, and CSF. The resulting segmentation is then combined with the skull segmentation to generate a label map that contains four labels: gray matter, white matter, CSF, and skull.

Difficulties in this processing pipeline can arise when the original T1-weighted MRI image has a large amount of background noise, such as when additional objects (such as the participant's arm) were captured in the scan. For these more extreme cases, we have found it helpful to align the scan to a template and then create a filled mask of the template, which is used to remove the background noise from the original scan. This leaves the image with only the head and no objects in the background. Further information is provided in the pipeline instructions in the repository.
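The core of this sequence can be scripted. The Matlab sketch below drives the AFNI command-line tools named above via system() calls; it is a hedged outline under the assumption that the AFNI binaries are on the system path, the flags shown are minimal illustrative choices, and the scripts in the pipeline repository are the authoritative version.

    % Hedged outline of the segmentation steps (illustrative flags only).
    system('3dresample -orient RAS -prefix t1_ras.nii -input t1.nii');   % standard orientation
    system('3dUnifize -prefix t1_unif.nii -input t1_ras.nii');           % optional bias correction
    system('3dSkullStrip -input t1_unif.nii -prefix brainmask.nii -mask_vol'); % brain mask option 1
    [~, clip] = system('3dClipLevel t1_unif.nii');   % optimal background threshold (returned as text)
    system('3dSeg -anat t1_unif.nii -mask AUTO');    % gray matter / white matter / CSF labels
    % The resulting tissue classes are then merged with the filled skull mask
    % to produce the four-label map imported into AtlasViewerGUI.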
Once these steps are complete, the label map can be read into Homer2/Homer3 to create the anatomical files needed for the spatial prior (i.e., the light model). The image is read in using the import MRI functionality in AtlasViewerGUI, and anatomical reference points are then selected by the user. This allows for the construction of the anatomy, which can then be combined with the cleaned digitizations from the prior step.

4. Photon Migration Simulation

The next step is to create the light model that will be used for image reconstruction. As this uses previously described tools,14 we describe these steps briefly here. We used the AtlasViewerGUI package within Homer2 or Homer3 to project the corrected digitized points for each participant onto the segmented head created by cleaning up and segmenting the participant's MRI scan. To ensure that the photon migration simulations were age-specific, absorption and scattering coefficients for the scalp, CSF, gray matter, and white matter at both wavelengths of light were provided, such that each node of the mesh was assigned optical properties determined by tissue type based on values from the literature.17,21–24 Photon migration Monte Carlo simulations with 100 million photons were run to generate sensitivity profiles for each channel and wavelength. For the data presented here, this process was carried out on the high-performance computing cluster supported by the Research and Specialist Computing Support service at the University of East Anglia, which enabled the simulations for all channels and participants to run in parallel across multiple cores. As this step is computationally expensive, we recommend using a server or other high-performance computing resource. Alternatively, users looking to reduce computation time can reduce the number of photons per simulation to 10 million or use MCX as implemented in more recent versions of AtlasViewerGUI.9 Next, the head volume and the sensitivity profiles were converted into NIFTI format for use in image reconstruction. Sample photon migration simulations are shown on a segmented head from the MRI scan of a 6-month-old infant in Fig. 4(a). The head volume and an exemplar channel for the same participant are shown in Fig. 4(b).
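For users who take the MCX route, the sketch below shows a single-source forward simulation with MCXLAB, the Matlab interface to MCX. The structure fields are standard MCXLAB configuration fields, but the optical property values are placeholders rather than the age-specific literature values used in the study, and seg, srcPos, and srcDir are assumed to come from the segmentation and digitization steps above.

    % Hedged MCXLAB sketch: fluence for one source on a segmented head volume.
    cfg.nphoton = 1e8;                     % photons per simulation (see text)
    cfg.vol     = uint8(seg);              % labeled volume (0 = air, 1..4 = tissues)
    cfg.prop    = [0     0     1    1;     % [mua mus g n] for label 0 (air)
                   0.019 7.8   0.89 1.37;  % scalp (placeholder values)
                   0.004 0.3   0.89 1.37;  % CSF (placeholder values)
                   0.02  9.0   0.89 1.37;  % gray matter (placeholder values)
                   0.08  40.9  0.84 1.37]; % white matter (placeholder values)
    cfg.srcpos  = srcPos;                  % source position (voxel coordinates)
    cfg.srcdir  = srcDir;                  % unit vector pointing into the head
    cfg.tstart  = 0; cfg.tend = 5e-9; cfg.tstep = 5e-9;  % one time gate
    flux = mcxlab(cfg);                    % fluence used to build sensitivity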
5. Pre-Processing fNIRS Data in Channel Space

For the purposes of demonstrating the quality of image reconstruction below, we used data from a TechEn CW7 system (12 sources and 24 detectors) with wavelengths of 690 and 830 nm. Fiber optic cables carried light between the machine and a customized probe geometry of 40 channels covering bilateral frontal and parietal cortices [see Fig. 4(a)]. fNIRS data in the examples provided here were pre-processed using the EasyNIRS package in Homer2. In the pipeline documentation on GitHub, we also provide guidance for users of Homer3 and the *.snirf file format.

Raw data were pruned to include channels with signals between 80 and 130 dB using the enPruneChannels function, which prunes channels based on the signal-to-noise ratio. Next, these data were converted to optical density units using the hmrIntensity2OD function. Motion artifacts were corrected using the hmrMotionCorrectPCArecurse function, which uses targeted principal components analysis25 (tMotion = 1.0, tMask = 1.0, StdevThresh = 50, and AmpThresh = 0.5). The corrected data were examined for uncorrected motion artifacts using hmrMotionArtifactByChannel (tMotion = 1.0, tMask = 1.0, StdevThresh = 50, and AmpThresh = 0.5). If these uncorrected artifacts fell within −1 to 18 s of a stimulus trigger, that trigger was removed from further data processing using the enStimRejection function. Finally, these data were bandpass filtered using the hmrBandpassFilt function with high-pass and low-pass cutoff frequencies of 0.016 and 0.5 Hz, respectively. Relevant information from these processed *.nirs files was then used in subsequent steps.
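The ordering of these calls is sketched below. The function names are the Homer2 routines cited above, but Homer2's full argument lists are longer than shown and vary between releases, so the steps whose signatures we do not reproduce are left as comments; consult the Homer2 source for the exact interfaces.

    % Hedged sketch of the channel-space stream. fs is the acquisition rate;
    % d follows the Homer2 .nirs convention (time x channels raw intensity).
    dod = hmrIntensity2OD(d);                      % raw intensity -> optical density
    % Motion correction: hmrMotionCorrectPCArecurse on dod with tMotion = 1.0,
    % tMask = 1.0, StdevThresh = 50, AmpThresh = 0.5 (targeted PCA, Ref. 25).
    % Residual artifact detection: hmrMotionArtifactByChannel with the same
    % parameters, followed by enStimRejection to drop triggers with artifacts
    % in the -1 to 18 s window around each stimulus.
    dod = hmrBandpassFilt(dod, fs, 0.016, 0.5);    % band-pass 0.016-0.5 Hz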
6. Image Reconstruction and Verification

The next step in the processing pipeline is image reconstruction. This step integrates the 3D light model created using AtlasViewer with the channel-based fNIRS data processed using Homer2 or Homer3. The image reconstruction is conducted in Matlab using NeuroDOT tools17 (https://github.com/WUSTL-ORL/NeuroDOT_Beta). In the sections below, we provide an overview of the image reconstruction approach, including details for streamlining processing steps between AtlasViewer, Homer2/Homer3, and NeuroDOT. We then show sample image-reconstructed data plotted relative to the channel-based fNIRS data.

6.1. Overview of Image Reconstruction in NeuroDOT

First, the sensitivity volumes for each source-detector pair that were calculated in AtlasViewer were converted into NeuroDOT format. In NeuroDOT, the sensitivity volumes are arranged in a single two-dimensional matrix with measurements in the first dimension and voxels in the second dimension. A structure variable info contains sub-structures that house meta-data describing the sensitivity volumes (info.tissue), the optode locations (info.optodes), the measurement list (info.pairs), the stimulus paradigm (info.paradigm), and other parameters from data acquisition (info.system). The header that contains the spatial meta-data from the NIFTI *.nii files from AtlasViewer is contained within info.tissue.infoT1. The order of the measurements corresponds to that described in the measurement list located in info.pairs. The voxels are ordered as if the entire sensitivity volume were unwrapped into a single vector. To maximize computational efficiency, only voxels with a sensitivity above a given threshold for any of the measurements are included. The indexing of these voxels relative to the entire volume is described in the variable info.tissue.dim.Good_Vox. This step typically saves around a factor of 5 to 10 in disk space and lowers computation time due to lower demand on RAM. For this study, the threshold was set to 0.01 of the maximum sensitivity within the combined volumes of all measurement sensitivity profiles.

Second, the preprocessed optical density data from Homer2 in the *.nirs file (or *.snirf file in the case of Homer3) were converted directly into NeuroDOT format. The NeuroDOT measurement list represented in the table info.pairs is populated from the SD variables in Homer2 (or snirfdata.data.measurementList in Homer3). The table info.pairs contains the source, detector, wavelength, source-detector separation in 2D and 3D, and other information for each measurement. The measurements to be used for reconstruction are listed in the info.MEAS.GI variable. The stimulus paradigm timing information is populated into info.paradigm for use in the post-reconstruction analyses (described below).

To minimize contributions from extra-cerebral tissue to the variance of the measurement data, we performed global signal regression separately for each wavelength. The mean signal across all low-noise measurement channels was regressed from all channels in a manner akin to superficial signal regression, as previously described, using the NeuroDOT function regcorr.26–28 Though short measurement channels were not utilized in these data, the mean across all measurement channels provides an estimate of the systemic physiology present in all of the data. Removing this global mean increases the contrast-to-noise ratio and the spatial specificity of the variance remaining in the measurement data.

The conversion of measurement-wise data to voxel-wise data through reconstruction significantly increases the overall size of the optical data due to the presence of many times more voxels than measurements. To help alleviate the large computational demands, the data were temporally down-sampled to 25 Hz before reconstruction using the NeuroDOT function resample_tts.

Once the optical, sensitivity, and meta-data have been converted into NeuroDOT format, the sensitivity matrix $A$ can be inverted separately for each wavelength using the Moore–Penrose generalized inverse with a Tikhonov regularization parameter $\lambda_1$ and a spatially variant regularization parameter $\lambda_2$, using the NeuroDOT function Tikhonov_invert_Amat.29 The Tikhonov regularization parameter tunes the balance between spatial smoothness and high-spatial-frequency noise in the image. Spatially variant regularization helps control for the tendency of diffuse optical tomography (DOT) methods to reconstruct too superficially and is set to optimize spatial correspondence with fMRI based on other studies.30 This process minimizes the objective function $\|y - Ax\|_2^2 + \lambda_1 \|Lx\|_2^2$, where $y$ is the measurement data and $x$ is the voxel-wise image. We include $L$ as a spatially variant regularization term, such that $L = \operatorname{diag}\big(\sqrt{\operatorname{diag}(A^T A) + \lambda_2}\big)$.26,30,31 Given these regularization steps, the direct inverse that provides the solution, $x = A^{\#} y$, is given by the Moore–Penrose generalized inverse $A^{\#} = L^{-1}\tilde{A}^T(\tilde{A}\tilde{A}^T + \lambda_1 I)^{-1}$, with $\tilde{A} = A L^{-1}$. Regularization parameters of $\lambda_1 = 0.01$ and $\lambda_2 = 0.1$ work well for a broad range of geometries and array designs. These steps are all carried out within Tikhonov_invert_Amat.m. We also recommend applying at least a modest amount of direct spatial smoothing with a Gaussian kernel to control speckle noise in the images; recommended kernel sizes range from 3 to 10 mm at their full width at half maximum. The optical data are reconstructed into the voxelated space using the NeuroDOT function reconstruct_img. Relative changes in oxygenated (HbO), deoxygenated (HbR), and total (HbT) hemoglobin concentrations are obtained from the reconstructed absorption coefficients by spectral decomposition using the extinction coefficients of HbO and HbR at the two wavelengths.32,33 Conversion from relative changes in absorption coefficients to HbO and HbR is performed with the NeuroDOT function spectroscopy_img.
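To make the inversion concrete, the sketch below is an illustrative re-implementation of the regularized inverse described above (Tikhonov_invert_Amat in NeuroDOT is the reference implementation and handles additional scaling details). A is the measurements-by-voxels sensitivity matrix for one wavelength and y the optical density data; the final line sketches the spectroscopy step under the assumption that a 2-by-2 extinction coefficient matrix E, built from tabulated values (e.g., Ref. 33), is available.

    % Illustrative Tikhonov + spatially variant regularized inversion.
    lambda1 = 0.01; lambda2 = 0.1;            % values quoted in the text
    L    = sqrt(sum(A.^2, 1) + lambda2);      % diagonal of L, computed per voxel
    At   = A ./ L;                            % A * inv(L) via implicit expansion
    AAt  = At * At';
    Ainv = (At' / (AAt + lambda1 * eye(size(AAt, 1)))) ./ L'; % voxels x meas
    x    = Ainv * y;                          % voxel-wise absorption changes
    % Spectroscopy sketch: unmix the two wavelengths into HbO/HbR per voxel,
    % with E = [eHbO(690) eHbR(690); eHbO(830) eHbR(830)] from tabulated values.
    dHb  = E \ [x690(:)'; x830(:)'];          % rows: dHbO and dHbR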
6.2. Sample Image-Reconstructed Data

Figure 5 shows samples of image-reconstructed fNIRS data from one participant plotted against the processed channel-based fNIRS data. To create these images, we first placed the centroid of a 2-cm-diameter sphere at the maximum value of each channel's sensitivity volume from the photon migration simulations for each chromophore. Note that we constrained the sensitivity volumes to the cortex to ensure that the maximum value was located in the cortex and not in the skull or surface tissues. Next, we extracted the mean reconstructed fNIRS time series from each sphere, averaging over voxels within the sphere. The red curve in each panel shows the mean image-reconstructed time series, while the black line shows the channel-based data from the associated channel. We show data from six channels in total [see brain images in Figs. 5(e) and 5(f) for the mapping of channels to spheres]. We selected these channels because they are located in parietal and ventral occipital regions of interest for our sample dataset. As can be seen in the figure, the image-reconstructed data match the channel-based data well for both HbO [Figs. 5(a) and 5(b)] and HbR [Figs. 5(c) and 5(d)]. The signals are within the same range, with the image-reconstructed data typically showing slightly greater signal intensity.

6.3. Verifying the Image Reconstruction Approach

To verify the image reconstruction approach, we correlated the average image-reconstructed time series (e.g., red lines in Fig. 5) with the channel-based time series (e.g., black lines in Fig. 5) for all runs from four sample participants. In total, we conducted 504 correlations: seven runs (over four participants) × 36 channels × 2 chromophores (7 × 36 × 2 = 504). Data from 18 channels were discarded because those channels were pruned in the initial fNIRS processing. The correlation results are shown in Fig. 6. Most of the correlations were quite high and above our minimum acceptable threshold of 0.25. In particular, 473 of 496 correlations were above this threshold; the mean value for this subset was 0.8. Thus, only 2.6% of the sample was below the criterion. In these cases, the channel-based fNIRS data were of low quality with poor SNR.

7. Individual-Level Analysis of Data (General Linear Model)

Once the participant fNIRS data have been reconstructed, a typical approach is to model these time series data using a GLM. For our test dataset, brain responses were analyzed using a GLM run with the NeuroDOT function GLM_181206. We used a hemodynamic response function (HRF) derived from DOT data, as it has been shown to be a better fit for optical data than HRFs derived from fMRI.34 The same HRF was used for HbO and HbR responses. Each event was modeled with a 10-s extent, and variable inter-trial intervals followed an exponentially distributed set of durations. We focused our analyses on the window from 1 s before each stimulus onset to 20 s post-onset. To control for variability in the number of events per run and per participant, we computed a weighted average of the betas over runs, weighted by the number of events per condition in each run. The functions included in the pipeline allow the user to use the default NeuroDOT HRF, as we did here; alternatively, the user can specify a custom HRF file.
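The run-weighting step is a simple weighted mean. The Matlab sketch below shows it with hypothetical variable names: betas holds one column of voxel-wise betas per run for a given condition, and nEvents holds the per-run event counts for that condition.

    % Event-count-weighted average of per-run GLM betas for one condition.
    % betas: voxels x runs; nEvents: 1 x runs (events of this condition per run).
    w = nEvents / sum(nEvents);   % normalized weights across runs
    betaAvg = betas * w';         % voxels x 1 weighted mean across runs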
8. Transfer to Group Space and Group-Level Analysis

This step employs a common or group space for the study. It is often convenient to use the MNI coordinate system to report functional activation, since it is the standard space used for fMRI and PET activation studies. The registration from subject space to atlas space is driven by the AC-PC-aligned image and brain mask generated in Sec. 3 and the corresponding T1-weighted image and brain mask of the template image for the group space. We used the ANTs software to implement a hierarchical image registration procedure that includes a high-dimensional diffeomorphic non-linear symmetric image normalization35 as the last step in the registration. The prior steps of the registration hierarchy include a rigid-body registration followed by an affine registration. The resulting transform was then applied to the HbO and HbR beta maps from the individual-level GLM results.

9. Demonstration of the Pipeline on an Example Dataset from NIRSport

To demonstrate the type of data the processing pipeline yields, we report sample results here from a second test dataset collected with a NIRSport (NIRx) system. This has the added utility of showing that our pipeline generalizes across NIRS systems (the prior examples in this report used a TechEn system). The data presented in this case study are part of a longitudinal study investigating the effects of schooling on neurocognitive function.36

9.1. Participants

Data from seventy-four 4.5-year-old children (34 females; mean age = 53.5 months) who participated in a longitudinal study in Scotland were used to verify the analysis pipeline. The study was conducted in the participants' homes. Parents gave written informed consent, and children gave verbal assent prior to the testing session. The research was approved by the General University Ethics Panel (GUEP 375) at the University of Stirling.

9.2. Experimental Task

VWM performance was measured as children took part in a color change detection task.37 The experimenter explained the task to the children using flash cards. Once the experimenter was satisfied that the children understood the rules, they moved on to the experiment. The experimental task was run in E-Prime V.3 software on an HP laptop with a 14-inch screen. In each trial, a fixation stimulus appeared on either side of the screen to orient the child to the screen. Then, a memory array of colored squares was presented for 2 s, followed by a blank screen for 1 s (delay period). Then, a test array of colored squares was presented and stayed on the screen until a response was made. After the presentation of the test array, the child was asked whether the colored squares were the same or different between the memory and test arrays. The experimenter registered the child's response using the keyboard. Each trial was followed by an inter-trial interval of 1 s (50% of the trials), 3 s (25% of the trials), or 5 s (25% of the trials). VWM load was manipulated from one to three items (load 1, load 2, and load 3). There were eight trials for the same condition and eight trials for the different condition at each load. Thus, there were 12 possible conditions for trials in this task: three loads (1, 2, and 3) × two trial types (same and different) × two accuracy levels (correct and incorrect).

9.3. fNIRS Data Acquisition

An NIRx system with eight sources and eight detectors was used to collect neuroimaging data from children as they engaged in the VWM task. Data were collected at 7.81 Hz, and the wavelengths of the sources were 760 and 850 nm. A probe geometry with fourteen channels overlying bilateral frontal and parietal areas [see Fig. 7(a)] was constructed based on regions of interest from the previous fMRI VWM literature.38 The head circumference of each child was measured, and a suitable cap was chosen. Four cap sizes were used across all children, and source-detector distances were scaled according to cap size (50 cm: 2.5 cm; 52 cm: 2.6 cm; 54 cm: 2.7 cm; and 56 cm: 2.8 cm). The distances from the NZ to the inion and between the peri-auricular points were measured to ensure that the cap was centrally placed on the child's head.

9.4. fNIRS Data Processing

The probe geometry was digitized to obtain the coordinates of sources and detectors for each of the four cap sizes. A 4.5-year-old MRI atlas was obtained from the Neurodevelopmental MRI database39–43 and segmented into four tissue types (scalp, CSF, gray matter, and white matter).
Monte Carlo simulations were run with 1 million photons to generate sensitivity profiles for the channels at each cap size (step 2, see Fig. 1). The head volume and sensitivity profiles were converted into NIFTI format. Next, channel-based raw fNIRS data were pre-processed, corrected for motion artifacts, filtered, and converted to optical density units. Data were also corrected for superficial scalp artifacts using global signal regression (step 3). These channel-based fNIRS optical density data were integrated with the volumetric sensitivity profiles using image reconstruction as described previously (step 4). A GLM with six regressors (for 12 conditions) was run on each voxel by convolving a modified gamma function from SPM (delay of response = 4; delay of undershoot = 15; dispersion of response = 1; dispersion of undershoot = 1; ratio of response to undershoot = 6; onset = 0; length of kernel = 16)44 with a boxcar window of duration 4.5 s (step 5). Note that this gamma function was chosen after visually comparing the duration of response onset, time to peak, and response offset of the generated HRFs against conventional channel-based block averages. Further, this boxcar window was chosen to account for a memory array of 2 s, a delay of 1 s, and the rest of the time dedicated to the test array period. Beta coefficient maps were obtained for all 12 conditions in each voxel for each chromophore and participant. Next, these individual-level beta maps were registered to MNI space (step 6).

For group analyses, the beta maps were entered into a linear mixed-effects model (using the 3dLME function in AFNI) with within-subject factors of load, trial type, and chromophore (HbO and HbR). Note that only correct trials were used for our analyses. For the current case study, we focused on the interaction between load and chromophore to examine whether HbO activation in significant clusters shows the canonical trend of increasing activation with increasing VWM load. Only those voxels that contained data from at least 60% of the subjects were included for further analyses. To control the family-wise error rate in our data, we used the 3dFWHMx function in AFNI to estimate the empirical autocorrelation function (ACF) in our fNIRS data and fit a mixed-ACF model to this function. We then used the mixed-ACF parameters (0.9939, 8.3354, 5.2485) in 3dClustSim with a voxel-wise significance threshold, a cluster-level significance threshold, and 10,000 iterations. We selected two-sided thresholding with the NN1 option (first-nearest-neighbor clustering). The cluster size criterion was 169 voxels.
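As a concrete illustration of the regressor construction in the GLM described above, the sketch below builds the modified gamma HRF with SPM's spm_hrf function (assumed to be on the Matlab path) using the parameter values quoted in the text, and convolves it with the 4.5-s boxcar. Variable names are illustrative.

    % Build one event regressor: SPM modified-gamma HRF convolved with a boxcar.
    fs  = 7.81;                      % NIRx sampling rate (Hz)
    p   = [4 15 1 1 6 0 16];         % HRF parameters quoted in the text
    hrf = spm_hrf(1/fs, p);          % HRF sampled at the acquisition rate
    box = ones(round(4.5 * fs), 1);  % 4.5-s boxcar (memory array + delay + test)
    reg = conv(box, hrf);            % design-matrix column for a single event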
9.5. Results

Two clusters showed a significant interaction between load and chromophore: the left middle frontal gyrus and the left angular gyrus. Figure 7(b) shows the two clusters in red; block-averaged changes in HbO/HbR activation extracted from these clusters are also shown. In the left angular gyrus, HbO activation increased with increasing load. In the left middle frontal gyrus, activation was highest for load 2. Activation in both regions shows the canonical difference between HbO and HbR activation, with the largest difference visible at the higher loads. Masked sensitivity profiles from channels 5, 6, 7, and 8 in the left frontal cortex and channels 12, 13, and 14 in the left parietal cortex are shown in yellow. Block-averaged activation from these channels is shown in Fig. 7(c). We see similar trends in activation with increasing load between the block averages from the significant clusters in the image reconstruction approach and the block averages from the conventional channel-based approach.

Image reconstruction, however, provides more precise localization of the source of activation in the brain: the change in activation with increasing load is more pronounced when localized to the left angular gyrus than in the three parietal channels. It is important to note that the goal of employing image reconstruction techniques is not to arrive at exactly the same result observed in channel space. Instead, image reconstruction affords the opportunity to localize activation from multiple channels to specific clusters and to compare these activation maps across participant groups and ages while accounting for key sources of spatial variance in the data.

10. Discussion/Conclusion

Image reconstruction of fNIRS data is a critical tool for standardizing the spatial location of fNIRS data from both individual and multiple channels into voxel space. Doing so allows for analysis of fNIRS data based on anatomical region rather than channel, accounting for several known sources of spatial variance in the data, including variance in the exact optode locations on any given participant. This approach also enables users to tap powerful fMRI analysis tools in the process. Despite the considerable benefits of this approach, there are complexities in the analysis that make image reconstruction difficult. These include a lack of standard approaches for cleaning digitized optode locations and creating templates, standardizing individual anatomies, creating a light model based on either template or individual data, running the GLM, moving the data into a common space, and analyzing the data. In the present paper, we described a flexible pipeline with accompanying code to work through these complexities, along with initial verification data acquired from different fNIRS systems.

In the present report, we demonstrated that fNIRS data can be reconstructed using both individual anatomical MRI images and templates in a manner that correlates well with the original channel-based data and can be analyzed in voxel space. The accompanying publicly available code can be adapted by users for their own data requirements. The code also contains additional functionality not discussed in the current paper, such as functions to cross-check GLM results against the time-course data from a given region. We also demonstrated with datasets from child fNIRS studies that the analysis and reconstruction methods are reliable and correspond with the measurements taken in channel space.

There are important limitations to consider in the reconstruction of fNIRS data. The methods and data we describe show a high level of correspondence between the reconstructed data and the channel-based data; however, it would be useful for future work to validate the pipeline using synthetic data. It would also be useful for future work to clarify how different sources of variability contribute to reconstruction accuracy, evaluating, for instance, the relative contributions of the optical properties used during photon migration versus mis-localization in optode position estimates. We also note that our pipeline is not fully automated; rather, users must make key decisions at different points. We have highlighted several decision points in this paper, and additional discussion is provided in the online documentation.
We stress that users should verify their data as they move through the pipeline to understand the effects of these decisions and to make sure, for instance, that the sensitivity profile created by the photon migration simulations matches the locations of the optodes. Finally, we note that the photon migration method we used works best with a high-performance computing cluster. To reduce this overhead, we recommend the use of MCX.9

The pipeline presented here contains several important contributions to the field. In the digitizeR package for the R language, we present a novel method of creating digitization templates and preparing them for use with other software. In addition, we provide code, suggestions, and guidelines for cleaning and segmenting individual MRIs to turn individual anatomical data into a brain model suitable for use in the creation of a light model. We then used a combination of existing software to demonstrate how to create a light model, perform a GLM, and move data into group space for analysis. While some individual aspects of this pipeline, such as the creation of a light model, feature in a number of software packages, these have not previously been combined in a way that allows users to go from raw data to final group analyses as reported here. The pipeline reported here, therefore, provides a roadmap of the functionality that users require to take their data all the way through analysis. As such, we hope this paper will be of use to software developers and others interested in creating standardized analysis approaches within a rapidly developing field.

Acknowledgments

This work was supported by Grant No. OPP1164153 from the Bill & Melinda Gates Foundation and Grant No. R01HD083287 from the National Institutes of Health. Both grants were awarded to J. P. Spencer. The authors are grateful to Aaron Buss and Jessica Defenderfer for their comments during testing of the pipeline. We also thank Meryem Yücel and Stephen Tucker for their help in making the pipeline compatible with Homer3 and the *.snirf format.

Data, Code, and Materials Availability

The code required to run the analyses is available at https://github.com/developmentaldynamicslab/MRI-NIRS_Pipeline. The code for digitizeR is available at https://github.com/samhforbes/digitizeR. Data used in this paper can be made available upon request to the authors.

References
1. A. Custo et al., "Anatomical atlas-guided diffuse optical tomography of brain activation," NeuroImage 49(1), 561–567 (2010). https://doi.org/10.1016/j.neuroimage.2009.07.033
2. H. Y. Kim et al., "Application of functional near-infrared spectroscopy to the study of brain function in humans and animal models," Mol. Cells 40(8), 523–532 (2017). https://doi.org/10.14348/molcells.2017.0153
3. A. Curtin et al., "A systematic review of integrated functional near-infrared spectroscopy (fNIRS) and transcranial magnetic stimulation (TMS) studies," Front. Neurosci. 13, 84 (2019). https://doi.org/10.3389/fnins.2019.00084
4. L. D. Reyes et al., "The functional brain networks that underlie visual working memory in the first two years of life," NeuroImage 219, 116971 (2020). https://doi.org/10.1016/j.neuroimage.2020.116971
5. B. Walsh et al., "Hemodynamics of speech production: an fNIRS investigation of children who stutter," Sci. Rep. 7(1), 4034 (2017). https://doi.org/10.1038/s41598-017-04357-6
6. S. Lloyd-Fox et al., "Habituation and novelty detection fNIRS brain responses in 5- and 8-month-old infants: The Gambia and UK," Dev. Sci. 22, e12817 (2019). https://doi.org/10.1111/desc.12817
7. S. Wijeakumar et al., "Early adversity in rural India impacts the brain networks underlying visual working memory," Dev. Sci. 22, e12822 (2019). https://doi.org/10.1111/desc.12822
8. A. K. Fishell et al., "Portable, field-based neuroimaging using high-density diffuse optical tomography," NeuroImage 215, 116541 (2020). https://doi.org/10.1016/j.neuroimage.2020.116541
9. Q. Fang and D. A. Boas, "Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units," Opt. Express 17(22), 20178 (2009). https://doi.org/10.1364/OE.17.020178
10. M. Doulgerakis-Kontoudis et al., "Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU," J. Biomed. Opt. 22(12), 125001 (2017). https://doi.org/10.1117/1.JBO.22.12.125001
11. F. Tian et al., "Test-retest assessment of cortical activation induced by repetitive transcranial magnetic stimulation with brain atlas-guided optical topography," J. Biomed. Opt. 17(11), 116020 (2012). https://doi.org/10.1117/1.JBO.17.11.116020
12. R. J. Cooper et al., "Validating atlas-guided DOT: a comparison of diffuse optical tomography informed by atlas and subject-specific anatomies," NeuroImage 62(3), 1999–2006 (2012). https://doi.org/10.1016/j.neuroimage.2012.05.031
13. S. L. Ferradal et al., "Atlas-based head modeling and spatial normalization for high-density diffuse optical tomography: in vivo validation against fMRI," NeuroImage 85, 117–126 (2014). https://doi.org/10.1016/j.neuroimage.2013.03.069
14. C. M. Aasted et al., "Anatomical guidance for functional near-infrared spectroscopy: AtlasViewer tutorial," Neurophotonics 2(2), 020801 (2015). https://doi.org/10.1117/1.NPh.2.2.020801
15. Y. Zhan et al., "Image quality analysis of high-density diffuse optical tomography incorporating a subject-specific head model," Front. Neuroenergetics 4, 6 (2012). https://doi.org/10.3389/fnene.2012.00006
16. T. J. Huppert et al., "HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain," Appl. Opt. 48(10), D280 (2009). https://doi.org/10.1364/AO.48.00D280
17. A. T. Eggebrecht and J. P. Culver, "NeuroDOT: an extensible Matlab toolbox for streamlined optical functional mapping," Proc. SPIE 11074, 110740M (2019). https://doi.org/10.1117/12.2527164
18. W. Kabsch, "A solution for the best rotation to relate two sets of vectors," Acta Crystallogr. A 32(5), 922–923 (1976). https://doi.org/10.1107/S0567739476001873
19. D. Tsuzuki et al., "Stable and convenient spatial registration of stand-alone NIRS data through anchor-based probabilistic registration," Neurosci. Res. 72(2), 163–171 (2012). https://doi.org/10.1016/j.neures.2011.10.008
20. J. E. Iglesias et al., "Robust brain extraction across datasets and comparison with publicly available methods," IEEE Trans. Med. Imaging 30(9), 1617–1634 (2011). https://doi.org/10.1109/TMI.2011.2138152
21. A. T. Eggebrecht et al., "A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping," NeuroImage 61(4), 1120–1128 (2012). https://doi.org/10.1016/j.neuroimage.2012.01.124
22. F. Bevilacqua et al., "In vivo local determination of tissue optical properties: applications to human brain," Appl. Opt. 38(22), 4939 (1999). https://doi.org/10.1364/AO.38.004939
23. A. Custo et al., "Effective scattering coefficient of the cerebral spinal fluid in adult head models for diffuse optical imaging," Appl. Opt. 45(19), 4747–4755 (2006). https://doi.org/10.1364/AO.45.004747
24. G. Strangman, M. A. Franceschini, and D. A. Boas, "Factors affecting the accuracy of near-infrared spectroscopy concentration calculations for focal changes in oxygenation parameters," NeuroImage 18(4), 865–879 (2003). https://doi.org/10.1016/S1053-8119(03)00021-1
25. M. A. Yücel et al., "Targeted principle component analysis: a new motion artifact correction approach for near-infrared spectroscopy," J. Innov. Opt. Health Sci. 7(2), 1350066 (2014). https://doi.org/10.1142/S1793545813500661
26. B. W. Zeff et al., "Retinotopic mapping of adult human visual cortex with high-density diffuse optical tomography," Proc. Natl. Acad. Sci. USA 104(29), 12169–12174 (2007). https://doi.org/10.1073/pnas.0611266104
27. N. M. Gregg et al., "Brain specificity of diffuse optical imaging: improvements from superficial signal regression and tomography," Front. Neuroenergetics 2, 14 (2010). https://doi.org/10.3389/fnene.2010.00014
28. R. B. Saager, N. L. Telleri, and A. J. Berger, "Two-detector corrected near infrared spectroscopy (C-NIRS) detects hemodynamic activation responses more robustly than single-detector NIRS," NeuroImage 55(4), 1679–1685 (2011). https://doi.org/10.1016/j.neuroimage.2011.01.043
29. A. T. Eggebrecht et al., "Mapping distributed brain function and networks with diffuse optical tomography," Nat. Photonics 8(6), 448–454 (2014). https://doi.org/10.1038/nphoton.2014.107
30. M. D. Wheelock, J. P. Culver, and A. T. Eggebrecht, "High-density diffuse optical tomography for imaging human brain function," Rev. Sci. Instrum. 90(5), 051101 (2019). https://doi.org/10.1063/1.5086809
31. J. P. Culver et al., "Diffuse optical tomography of cerebral blood flow, oxygenation, and metabolism in rat during focal ischemia," J. Cereb. Blood Flow Metab. 23(8), 911–924 (2003). https://doi.org/10.1097/01.WCB.0000076703.71231.BB
32. A. Y. Bluestone et al., "Three-dimensional optical tomography of hemodynamics in the human head," Opt. Express 9(6), 272 (2001). https://doi.org/10.1364/OE.9.000272
33. S. Prahl, "Tabulated molar extinction coefficient for hemoglobin in water," (1998). https://omlc.org/spectra/hemoglobin/summary.html
34. M. S. Hassanpour et al., "Statistical analysis of high density diffuse optical tomography," NeuroImage 85, 104–116 (2014). https://doi.org/10.1016/j.neuroimage.2013.05.105
35. B. Avants et al., "Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain," Med. Image Anal. 12(1), 26–41 (2008). https://doi.org/10.1016/j.media.2007.06.004
36. C. McKay et al., "Home assessment of visual working memory in pre-schoolers reveals associations between behaviour, brain activation and parent reports of life stress," Dev. Sci. e13094 (2021). https://doi.org/10.1111/desc.13094
37. V. R. Simmering, "The development of visual working memory capacity during early childhood," J. Exp. Child Psychol. 111(4), 695–707 (2012). https://doi.org/10.1016/j.jecp.2011.10.007
38. S. Wijeakumar et al., "Validating a new methodology for optical probe design and image registration in fNIRS studies," NeuroImage 106, 86–100 (2015). https://doi.org/10.1016/j.neuroimage.2014.11.022
39. A. C. Evans, "The NIH MRI study of normal brain development," NeuroImage 30(1), 184–202 (2006). https://doi.org/10.1016/j.neuroimage.2005.09.068
40. C. E. Sanchez, J. E. Richards, and C. R. Almli, "Age-specific MRI brain templates for healthy brain development from 4 to 24 years," (2010).
41. C. E. Sanchez, J. E. Richards, and C. R. Almli, "Age-specific MRI templates for pediatric neuroimaging," Dev. Neuropsychol. 37(5), 379–399 (2012). https://doi.org/10.1080/87565641.2012.688900
42. J. E. Richards and W. Xie, "Brains for all the ages: structural neurodevelopment in infants and children from a life-span perspective," in Advances in Child Development and Behavior, Vol. 48, pp. 1–52, Academic Press/Elsevier (2015).
43. J. E. Richards et al., "A database of age-appropriate average MRI templates," NeuroImage 124, 1254–1259 (2016). https://doi.org/10.1016/j.neuroimage.2015.04.055
44. W. Penny et al., Statistical Parametric Mapping: The Analysis of Functional Brain Images, 1st ed., Academic Press (2006).
Biography

Samuel H. Forbes completed his DPhil in experimental psychology at the University of Oxford in 2018. Currently, he is a senior research associate at the University of East Anglia. His research is concerned with cognitive and linguistic development over the first few years of life, and he is also involved in the development of methodological tools for use by infant researchers.

Sobanawartiny Wijeakumar received her BEng degree in electronic engineering from Glasgow Caledonian University in 2008 and completed her PhD in vision science in 2012. She is an assistant professor in the School of Psychology at the University of Nottingham. Her research focuses on the development of executive functions across the lifespan using experimental, real-world, and computational modeling methods.

Adam T. Eggebrecht received his PhD in physics in 2009. Currently, he is an assistant professor in the Mallinckrodt Institute of Radiology at Washington University School of Medicine. His research is focused on designing and applying diffuse optical tomography (DOT) imaging methods in the human brain to understand mechanisms of typical and atypical functional brain development. He directs development of the NeuroDOT software package for modeling and analysis of diffuse optical measurements of brain function.

Vincent A. Magnotta is a professor of radiology at the University of Iowa, where he studies neurological and psychiatric disorders using MRI. He is currently involved in the development of new MR imaging techniques and their application to bipolar disorder. He is also involved in fNIRS studies to help develop new processing pipelines and to correlate the blood flow response between fNIRS and fMRI.

John P. Spencer is a professor of psychology at the University of East Anglia in Norwich, UK. He received an ScB with honors from Brown University in 1991 and a PhD in experimental psychology from Indiana University in 1998. He is the recipient of the 2003 Early Research Contributions Award from the Society for Research in Child Development and the 2006 Robert L. Fantz Memorial Award from the American Psychological Foundation.