SWIR (short-wave infrared) imaging can be of great use in precision agriculture, food processing, and the recycling industry, among other fields. However, hyperspectral SWIR cameras are costly and bulky, preventing their widespread deployment in the field. To answer the market need for compact and cost-efficient hyperspectral cameras covering the SWIR range, imec and SCD have joined efforts to develop a novel integration approach combining imec's know-how in pixel-level patterned thin-film spectral filter technology with SCD's InGaAs detector technology. The line-scan SWIR hyperspectral camera presented here covers the 1.1-1.65 μm range with more than 100 bands and a spectral resolution better than 10 nm. The imager uses a set of patterned Fabry-Pérot interferometers processed with semiconductor-grade thin-film technology. The optical filters are integrated directly on top of the sensing side of the InGaAs detector with high accuracy and with a minimal gap between the filters and the focal plane array to limit cross-talk. The resulting line-scan camera, measuring only 70 x 62 x 60 mm and weighing below 0.5 kg, is the lightest and most compact SWIR hyperspectral camera on the market. Full sensor readout can be performed at up to 350 fps. An imec-patented SnapScan system with internal scanning was also developed, capable of acquiring data cubes of 640x512x128 pixels in one second; the maximum cube size is 1200x640x128. By selecting a subset of contiguous spectral bands and a reduced spatial resolution, the sensor can be operated at more than 1000 fps, for example enabling acquisition of a 320x512x64 cube in less than 300 ms.
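As background on how patterned Fabry-Pérot filters define the spectral bands, an ideal cavity transmits at wavelengths λ = 2nd/m, so the deposited cavity thickness selects the band centered on each pixel line. The sketch below is a simplified illustration under that textbook relation; the cavity refractive index, interference order, and band spacing are placeholder values, not device parameters.

```python
import numpy as np

# Simplified illustration: ideal Fabry-Perot transmission peaks occur at
# lambda_m = 2 * n * d / m, so the cavity thickness d that puts the
# order-m peak at a target wavelength is d = m * lambda / (2 * n).
# The cavity index below is a placeholder, not a device parameter.

def cavity_thickness_nm(target_wavelength_nm: float, n_cavity: float = 2.0,
                        order: int = 1) -> float:
    return order * target_wavelength_nm / (2.0 * n_cavity)

# 100+ bands spanning 1100-1650 nm, roughly 5 nm apart.
bands_nm = np.linspace(1100.0, 1650.0, 110)
thicknesses = [cavity_thickness_nm(w) for w in bands_nm]
print(f"cavity thickness range: {thicknesses[0]:.0f}-{thicknesses[-1]:.0f} nm")
```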
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras that make the technology attractive for industry. To calibrate the sensor, we introduce a full pixel response model that takes into account the inherent properties of the optical filters. This model is then used to derive a calibration method that enables more accurate and robust measurements of the spectral reflectance of a scene. The calibration method is then extended to take into account the normal manufacturing variations between different sensors to perform an inter-sensor calibration. We experimentally validate this method by scanning reference targets with different types of sensors and demonstrate that accurate and reproducible reflectance measurements are obtained.
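The abstract does not spell out the pixel response model; a common way to realize such a calibration for filter-on-chip sensors is to treat each band's raw signal as a linear functional of the scene spectrum and build a regularized spectral-correction matrix. The sketch below illustrates that idea with synthetic responses; the matrix sizes, filter shapes, and regularization weight are assumptions, not published values.

```python
import numpy as np

# Minimal sketch of a spectral-correction style calibration, assuming (not
# stated verbatim in the abstract) a linear pixel model: each band's raw
# signal is the scene spectrum weighted by that band's measured filter/QE
# response. All responses and sizes below are synthetic placeholders.

n_bands, n_wl = 16, 200
wl = np.linspace(470.0, 900.0, n_wl)
centers = np.linspace(500.0, 880.0, n_bands)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Measured responses: broadened peaks plus a small stray-light pedestal.
R = np.stack([gaussian(c, 12.0) + 0.05 for c in centers])
# Target responses: the idealized narrow bands we want to report.
T = np.stack([gaussian(c, 6.0) for c in centers])

# Correction matrix via regularized least squares (the weight is a tuning
# choice, not a published value): corrected = C @ raw.
lam = 1e-3
C = T @ R.T @ np.linalg.inv(R @ R.T + lam * np.eye(n_bands))

scene = 0.4 + 0.3 * np.sin(wl / 60.0)         # synthetic reflectance spectrum
raw = R @ scene
corrected = C @ raw                           # approximates T @ scene
err = np.linalg.norm(corrected - T @ scene) / np.linalg.norm(T @ scene)
print(f"relative difference between corrected and target band values: {err:.3f}")
```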
KEYWORDS: Cameras, Line scan image sensors, Sensors, Image sensors, Photonic devices, Image processing, Hyperspectral imaging, CMOS sensors, Signal to noise ratio, Imaging systems, High dynamic range imaging, Data acquisition
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Line-scan cameras are typically used in remote sensing or for conveyor-belt applications, but translating the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), which exploits internal movement of a line-scan sensor to enable fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 over the 475-925 nm spectral range). The Snapscan combines the spectral and spatial resolutions of a line-scan system with the convenience of a snapshot camera.
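To make the line-scan acquisition concrete, the sketch below shows how a hyperspectral cube can be assembled when each sensor row carries a different band filter and the sensor (or scene) advances by one line per frame. It is an illustrative toy model, not the Snapscan implementation; the array sizes and the one-line-per-frame assumption are mine.

```python
import numpy as np

# Illustrative sketch (not the Snapscan implementation) of assembling a
# hyperspectral cube from a line-scan sensor whose rows carry different
# spectral filters, as the sensor is translated one line per frame.

n_cols, n_rows_scene, n_bands = 64, 48, 16               # toy sizes
scene = np.random.rand(n_rows_scene, n_cols, n_bands)    # stand-in for the real scene

def frame_at(step: int) -> np.ndarray:
    """Simulate one readout: band b sees scene line (step - b), if visible."""
    frame = np.zeros((n_bands, n_cols))
    for b in range(n_bands):
        line = step - b
        if 0 <= line < n_rows_scene:
            frame[b] = scene[line, :, b]
    return frame

cube = np.zeros_like(scene)
for step in range(n_rows_scene + n_bands):   # sweep until every band saw every line
    frame = frame_at(step)
    for b in range(n_bands):
        line = step - b
        if 0 <= line < n_rows_scene:
            cube[line, :, b] = frame[b]

print(np.allclose(cube, scene))              # True: cube fully reconstructed
```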
Using light, we are able to visualize the hemodynamic behavior of the brain to better understand neurovascular coupling and cerebral metabolism. In vivo optical imaging of tissue using endogenous chromophores requires spectroscopic detection to ensure molecular specificity, as well as sufficiently high imaging speed and signal-to-noise ratio, so that dynamic physiological changes can be captured, isolated, and used as surrogates of pathophysiological processes. An optical imaging system is introduced using a 16-band on-chip hyperspectral camera. Using this system, we show that up to three dyes can be imaged and quantified in a tissue phantom at video rate through the optics of a surgical microscope. In vivo human patient data are presented, demonstrating that the brain hemodynamic response can be measured intraoperatively with molecular specificity at high speed.
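The abstract does not state how dye quantification is performed; a standard approach is linear spectral unmixing of the measured absorbance against known dye extinction spectra under a Beer-Lambert model. The sketch below illustrates that approach with synthetic extinction spectra and noise-free data.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch of how multiple dyes might be quantified from 16-band data,
# assuming (the abstract does not state the method) a linear Beer-Lambert
# mixing model: absorbance per band = extinction matrix @ concentrations.
# The extinction spectra below are synthetic placeholders.

rng = np.random.default_rng(1)
n_bands, n_dyes = 16, 3
E = np.abs(rng.normal(size=(n_bands, n_dyes)))   # extinction coefficients
c_true = np.array([0.8, 0.3, 0.5])               # "true" concentrations

reflectance = np.exp(-E @ c_true)                # simulated measurement
absorbance = -np.log(reflectance)

c_est, _ = nnls(E, absorbance)                   # non-negative least squares
print(np.round(c_est, 3))                        # close to c_true
```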
Brain needle biopsy (BNB) is performed to collect tissue when a precise neuropathological diagnosis is required to provide information about tumor type, grade, and growth patterns. The principal risks associated with this procedure are intracranial hemorrhage (due to clipping blood vessels during tissue extraction), incorrect tumor typing/grading due to non-representative or non-diagnostic samples (e.g. necrotic tissue), and missing the lesion. We present an innovative device using sub-diffuse optical tomography to detect blood vessels and Raman spectroscopy to detect molecular differences between tissue types, in order to reduce the risks of misdiagnosis, incorrect tumor grading, and non-diagnostic samples. The needle probe integrates optical fibers directly onto the external cannula of a commercial BNB needle and can perform measurements for both optical techniques through the same fibers. This integrated optical spectroscopy system uses diffuse reflectance signals to perform a 360-degree reconstruction of the tissue adjacent to the biopsy needle, based on the optical contrast associated with hemoglobin light absorption, thereby localizing blood vessels. Raman spectra are also measured interstitially for tissue characterization. A detailed sensitivity analysis of the system is presented to demonstrate that it can detect absorbers with diameters <300 µm located up to ∼2 mm from the biopsy needle core, for bulk optical properties consistent with brain tissue. Results from animal experiments are presented to validate blood vessel detection and Raman spectrum measurement without disruption of the surgical workflow. We also present phantom measurements of Raman spectra with the needle probe and a comparison with a clinically validated Raman spectroscopy probe.
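As a rough illustration of the vessel-detection principle (not the device's tomographic reconstruction), hemoglobin absorption depresses the diffuse reflectance collected at fibers facing a vessel, so a localized drop relative to the surrounding signal flags an angular sector to avoid. The data and threshold below are placeholders.

```python
import numpy as np

# Illustrative sketch (not the device's reconstruction algorithm): hemoglobin
# absorption lowers the detected diffuse reflectance, so a localized dip
# relative to the median signal around the needle marks a candidate vessel
# direction. Signal levels and the threshold are placeholders.

angles = np.arange(0, 360, 10)                       # fiber/rotation positions, degrees
reflectance = 1.0 + 0.02 * np.random.randn(angles.size)
reflectance[12:15] *= 0.7                            # simulated vessel at ~120-140 deg

baseline = np.median(reflectance)
contrast = (baseline - reflectance) / baseline       # fractional absorption dip
vessel_sectors = angles[contrast > 0.1]              # hypothetical threshold
print(vessel_sectors)
```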
Following normal neuronal activity, there is an increase in cerebral blood flow and cerebral blood volume to provide oxygenated hemoglobin to active neurons. For abnormal activity such as epileptiform discharges, this hemodynamic response may be inadequate to meet the high metabolic demands. To verify this hypothesis, we developed a novel hyperspectral imaging system able to monitor real-time cortical hemodynamic changes during brain surgery. The imaging system is directly integrated into a surgical microscope, using its white-light source for illumination. A snapshot hyperspectral camera is used for detection (4x4 mosaic filter array detecting 16 wavelengths simultaneously). We present calibration experiments in which phantoms made of Intralipid and food dyes were imaged. Relative concentrations of three dyes were recovered at a video rate of 30 frames per second. We also present hyperspectral recordings during brain surgery of epileptic patients with concurrent electrocorticography recordings. Relative concentration maps of oxygenated and deoxygenated hemoglobin were extracted from the data, allowing real-time studies of hemodynamic changes with good spatial resolution. Finally, we present preliminary results on phantoms obtained with an integrated spatial frequency domain imaging system to recover tissue optical properties. This additional module, used together with the hyperspectral imaging system, will allow quantification of hemoglobin concentration maps. Our hyperspectral imaging system offers a new tool for analyzing hemodynamic changes, especially in the case of epileptiform discharges. It also offers an opportunity to study brain connectivity by analyzing correlations between the hemodynamic responses of different tissue regions.
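For readers unfamiliar with mosaic snapshot sensors, each 4x4 tile of pixels samples the 16 filters once, so the raw frame can be split into 16 lower-resolution band planes by strided indexing. The sketch below shows this demosaicking step; the frame size and the band-to-position mapping are placeholder assumptions that vary by sensor.

```python
import numpy as np

# Minimal sketch of splitting a raw frame from a 4x4 mosaic snapshot sensor
# into its 16 spectral planes: pixel (r, c) belongs to band (r % 4) * 4 + c % 4.
# Frame size is a placeholder; the band-to-filter mapping varies by sensor.

raw = np.random.rand(512, 640)                       # simulated raw mosaic frame

def mosaic_to_cube(raw: np.ndarray, pattern: int = 4) -> np.ndarray:
    h, w = raw.shape
    h -= h % pattern                                 # crop to a whole number of tiles
    w -= w % pattern
    planes = [raw[r:h:pattern, c:w:pattern]
              for r in range(pattern) for c in range(pattern)]
    return np.stack(planes, axis=-1)                 # (h/4, w/4, 16)

cube = mosaic_to_cube(raw)
print(cube.shape)                                    # (128, 160, 16)
```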
We present for the first time the analytical solution for the simplified spherical harmonics equations with partial reflective boundary conditions for a point source inside a spherical homogeneous turbid medium.
Obtaining accurate quantitative information on the concentration and distribution of fluorescent markers lying at a depth below the surface of optically turbid media, such as tissue, is a significant challenge. Here, we introduce a fluorescence reconstruction technique based on a diffusion light transport model that can be used during surgery, including guiding resection of brain tumors, for depth-resolved quantitative imaging of near-infrared fluorescent markers. Hyperspectral fluorescence images are used to compute a topographic map of the fluorophore distribution, which yields structural and optical constraints for a subsequent three-dimensional hyperspectral diffuse fluorescence reconstruction algorithm. Using the model fluorophore Alexa Fluor 647 and brain-like tissue phantoms, the technique yielded estimates of fluorophore concentration within ±25% of the true value to depths of 5 to 9 mm, depending on the concentration. The approach is practical for integration into a neurosurgical fluorescence microscope and has the potential to further extend fluorescence-guided resection using objective and quantified metrics of the presence of residual tumor tissue.
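For context on the diffusion light transport model, the sketch below evaluates the standard infinite-medium diffusion quantities (diffusion coefficient and effective attenuation) and the point-source fluence, showing how strongly the signal decays over the 5 to 9 mm depths quoted above. The optical properties used are generic brain-like placeholders, not the phantom's values.

```python
import numpy as np

# Background sketch of the diffusion-approximation quantities underlying
# depth-resolved fluorescence models such as the one described above.
# Formulas are the standard infinite-medium expressions; the optical
# properties below are generic placeholder values, not the phantom's.

mu_a = 0.02       # absorption coefficient, 1/mm
mu_s_prime = 1.0  # reduced scattering coefficient, 1/mm

D = 1.0 / (3.0 * (mu_a + mu_s_prime))                # diffusion coefficient, mm
mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))   # effective attenuation, 1/mm

def fluence(r_mm: np.ndarray) -> np.ndarray:
    """Fluence of an isotropic point source in an infinite turbid medium."""
    return np.exp(-mu_eff * r_mm) / (4.0 * np.pi * D * r_mm)

depths = np.array([1.0, 5.0, 9.0])                   # mm, depths quoted in the abstract
print(fluence(depths) / fluence(depths[0]))          # relative attenuation with depth
```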
Cancer tissue is frequently impossible to distinguish from normal brain during surgery. Gliomas are a class of brain cancer that invades the normal brain. If left unresected, these invasive cancer cells are the source of glioma recurrence. Moreover, these invasion areas do not show up on standard-of-care pre-operative magnetic resonance imaging (MRI). This inability to fully visualize invasive brain cancers results in subtotal surgical resections, negatively impacting patient survival. To address this issue, we have demonstrated the efficacy of single-point in vivo Raman spectroscopy using a contact hand-held fiber optic probe for rapid detection of cancer invasion in 8 patients with low- and high-grade gliomas. Using a supervised machine learning algorithm to analyze the Raman spectra obtained in vivo, we were able to distinguish normal brain from the presence of cancer cells with sensitivity and specificity greater than 90%. Moreover, by correlating these results with pre-operative MRI we demonstrate the ability to detect low-density cancer invasion up to 1.5 cm beyond the cancer extent visible on MRI. This represents the potential for significant improvements in progression-free and overall patient survival, by identifying previously undetectable residual cancer cell populations and preventing the resection of normal brain tissue. While maximizing the extent of tumor resection is important for all grades of gliomas, the impact for low-grade gliomas can be dramatic because surgery can even be curative. This convenient technology can rapidly classify cancer invasion in real time, making it ideal for intraoperative use in brain tumor resection.
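The abstract does not name the supervised algorithm; as a generic illustration of the classification workflow (leave-one-out cross-validation with sensitivity and specificity computed from the confusion matrix), the sketch below uses a linear support vector machine on placeholder spectra.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

# Illustrative sketch of supervised classification of Raman spectra into
# "normal" vs "cancer-invaded" tissue. The abstract does not name the
# classifier; a linear SVM is used here as a generic stand-in, and the
# spectra below are random placeholders for real measurements.

rng = np.random.default_rng(2)
n_spectra, n_wavenumbers = 60, 300
X = rng.normal(size=(n_spectra, n_wavenumbers))      # placeholder spectra
y = rng.integers(0, 2, size=n_spectra)               # 0 = normal, 1 = cancer

y_pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```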
We introduce a novel approach for localizing a plurality of discrete fluorescent inclusions embedded in a thick scattering medium using time-domain (TD) experimental data. It relies on numerical constant fraction discrimination (NCFD), a signal processing technique for extracting in a stable manner the arrival time of early photons emitted by one or many fluorescent inclusions from measured photon time-of-flight (TOF) distributions. Our experimental set-up allows multi-view TD data acquisition from multiple tomographic projections over 360 degrees without contact with the medium. Fluorescence time point-spread functions (FTPSFs) are acquired all around the medium with ultra-fast time-correlated single photon counting (TCSPC) after short-pulse laser excitation. From these FTPSFs, the early-photon arrival time (EPAT) of a fluorescent wavefront at a detector position is extracted with our NCFD technique. The key to our localization algorithm is to combine EPATs from several detection positions and projections to form 3D surfaces. The digital analysis of the concavities of these surfaces makes it possible to find the 3D positions of an a priori unknown number of fluorescent inclusions located in the medium. Indocyanine green (ICG; absorption peak = 780 nm, emission peak = 830 nm) is used for the inclusions. Various experiments were conducted, and we show localization results on experimental data for up to 5 discrete inclusions distributed at arbitrary positions in the medium. We expect to extend our method to continuous distributions of fluorescence (rather than discrete inclusions) in the near future.
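Constant fraction discrimination, implemented numerically, amounts to finding the time at which the rising edge of the measured pulse crosses a fixed fraction of its peak amplitude, which makes the extracted arrival time insensitive to the overall pulse amplitude. The sketch below is a minimal illustration of that idea; the fraction and the synthetic pulse are placeholder choices, not the parameters used in this work.

```python
import numpy as np

# Minimal numerical constant-fraction discrimination (NCFD) illustration:
# find the time at which the rising edge of a TOF histogram crosses a fixed
# fraction of its peak, with linear interpolation between time bins.

def ncfd_arrival_time(t: np.ndarray, counts: np.ndarray, fraction: float = 0.1) -> float:
    threshold = fraction * counts.max()
    peak = int(np.argmax(counts))
    # Last bin below threshold on the rising edge, walking back from the peak.
    i = np.where(counts[:peak] < threshold)[0][-1]   # counts[i] < threshold <= counts[i+1]
    # Linear interpolation between bins i and i+1.
    frac = (threshold - counts[i]) / (counts[i + 1] - counts[i])
    return t[i] + frac * (t[i + 1] - t[i])

# Synthetic fluorescence TOF distribution (placeholder for a measured FTPSF).
t = np.linspace(0.0, 10.0, 1000)                     # ns
pulse = np.exp(-0.5 * ((t - 4.0) / 0.6) ** 2) * (1 - np.exp(-np.maximum(t - 2.5, 0)))
pulse += 0.01 * np.random.rand(t.size)               # background noise

print(f"early-photon arrival time ~ {ncfd_arrival_time(t, pulse):.2f} ns")
```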
We introduce an improved approach for the 3D localization of discrete fluorescent inclusions in a thick scattering medium. Previously, our approach provided accurate localization of a single inclusion, showing the potential for direct time-of-flight fluorescence diffuse optical tomography. Here, we localize various combinations of multiple fluorescent inclusions. We resort to time-domain (TD) detection of emitted fluorescence pulses after short-pulse laser excitation. Our approach relies on a signal processing technique, dubbed numerical constant fraction discrimination (NCFD), for extracting in a stable manner the arrival time of early photons emitted by one or many fluorescent inclusions from measured time-of-flight (TOF) distributions. Our experimental set-up allows multi-view tomographic optical TD measurements over 360 degrees without contact with the medium. It uses an ultra-short pulse laser and ultra-fast time-correlated single photon counting (TCSPC) detection. Fluorescence time point-spread functions (FTPSFs) are acquired all around the phantom after laser excitation. From the measured FTPSFs, the arrival time of a fluorescent wavefront at a detector position is extracted with our NCFD technique. Indocyanine green (ICG; absorption peak = 780 nm, emission peak = 830 nm) is used for the inclusions. Various experiments were conducted with this set-up in a stepwise fashion. First, single-inclusion experiments are presented to provide background information. Second, we present results using two inclusions in a plane. Then we move on to two inclusions located in different planes. Finally, we show results with a plurality of inclusions (>2) distributed at arbitrary positions in the medium. Using an algorithm we developed and tested on the acquired data, we successfully locate the inclusions. Here, results are obtained for discrete inclusions. In the near future, we expect to extend our method to continuous fluorescence distributions.
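The localization algorithm described above analyzes the concavities of EPAT surfaces built from many detection positions and projections. As a much simpler stand-in (not the authors' method), the sketch below localizes a single inclusion by converting arrival times at several detectors into distance estimates, assuming a hypothetical effective propagation speed, and solving a least-squares trilateration problem.

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified illustrative stand-in for EPAT-based localization (not the
# authors' concavity-analysis algorithm): treat each arrival time as a
# distance from inclusion to detector via an assumed effective propagation
# speed, then solve for the inclusion position by least squares.

v_eff = 70.0                                         # mm/ns, hypothetical effective speed
true_pos = np.array([5.0, -3.0, 2.0])                # mm, simulated inclusion position

# Detectors on a ring around the medium (multi-view, non-contact geometry).
angles = np.deg2rad(np.arange(0, 360, 30))
detectors = np.stack([30 * np.cos(angles), 30 * np.sin(angles), np.zeros_like(angles)], axis=1)

arrival_times = np.linalg.norm(detectors - true_pos, axis=1) / v_eff
arrival_times += 0.01 * np.random.randn(arrival_times.size)   # timing jitter

def residuals(p):
    return np.linalg.norm(detectors - p, axis=1) / v_eff - arrival_times

estimate = least_squares(residuals, x0=np.zeros(3)).x
print(np.round(estimate, 1))                         # close to true_pos
```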