An end-to-end tumor diagnosis framework comprising resolution enhancement and tumor classification is proposed. The U-Net + EDSR network significantly improves PSNR and enhances resolution beyond physical limitations, and the subsequent tumor discrimination benefits from this enhancement. Using multiple images as network input and advanced models such as generative adversarial networks is expected to further improve the imaging. Our proposed method realizes, for the first time, intraoperative lensless CFB imaging with high near-field resolution. The technique builds a bridge to techniques such as optical biopsy, multi-modal imaging, virtual staining, and computer-assisted disease diagnostics for neural signal monitoring as well as neurosurgery.
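The PSNR improvement reported above can be quantified directly from reference and enhanced images; a minimal sketch (the 8-bit peak value and the constant-error test images are illustrative assumptions, not the paper's data):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and an enhanced estimate (higher is better)."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative 8-bit images: the estimate is off by 5 gray levels everywhere.
ref = np.full((64, 64), 100, dtype=np.uint8)
est = np.full((64, 64), 105, dtype=np.uint8)
print(round(psnr(ref, est), 2))  # MSE = 25 -> 10*log10(255^2/25) ≈ 34.15 dB
```

Casting to float before subtraction avoids uint8 wraparound, a common pitfall when computing MSE on raw image arrays.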
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The Helmholtz-Zentrum Hereon operates imaging beamlines for X-ray tomography (P05 IBL, P07 HEMS) for academic and industrial users at the synchrotron radiation source PETRA III at DESY in Hamburg, Germany. The high X-ray flux density and the coherence of synchrotron radiation enable high-resolution in situ/operando/in vivo tomography experiments and phase contrast, respectively. Large amounts of 3D/4D data are collected that are difficult to process and analyze. Here, we report on the application of machine learning for image segmentation, including a guided interactive framework, multimodal data analysis (virtual histology), image enhancement (denoising), and self-supervised learning for phase retrieval.
The next leap in implantable neural interfaces requires technological advances in materials, devices, and computing paradigms. Multimodal approaches integrating optical and electrical sensing modalities can overcome the spatiotemporal resolution limits of neural sensing and open up new avenues for non-invasive neural recording. Integration of sensing, computation, and memory on a single array can enable real-time processing of neural signals for compact, low-power, and high-throughput brain-machine interfaces. Here, I will present this vision and its challenges, and discuss recent advances in transparent neural interfaces for multimodal recordings, neuromorphic approaches for on-chip neural processing, and system-level computational co-design for minimally invasive neural interfaces.
Image Science provides a framework for the objective assessment of image quality. This framework has been used in the evaluation of medical imaging devices by the FDA, with examples including iterative reconstruction algorithms for computed tomography (CT) and their potential to reduce radiation dose to patients as well as display systems optimized for specific tasks. My talk will describe the Image Science framework in general, examine its use by FDA in the evaluation of medical imaging devices, and consider how computational modeling and database development efforts can address knowledge gaps that challenge the evaluation of newer AI-enabled medical imaging applications.
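Task-based image quality in this framework is commonly summarized by the detectability of a linear (Hotelling) observer; a minimal sketch under simplified Gaussian assumptions (the signal profile, covariance, and dimensions are illustrative, not tied to any FDA evaluation):

```python
import numpy as np

# Toy detection task: a known signal s in images with Gaussian noise
# of known covariance K (here stationary and white for simplicity).
n_pix = 16
s = np.zeros(n_pix)
s[6:10] = 1.0                  # known signal: 4 pixels of amplitude 1 (assumption)
cov = 0.5 * np.eye(n_pix)      # noise covariance with variance 0.5 (assumption)

# Hotelling template w = K^{-1} s and task-based detectability SNR^2 = s^T K^{-1} s.
template = np.linalg.solve(cov, s)
snr2 = float(s @ template)
print(snr2)  # 4 signal pixels / variance 0.5 -> 4 / 0.5 = 8.0
```

The same scalar figure of merit lets one compare, e.g., a reconstruction algorithm at reduced dose against a reference, which is the spirit of the evaluations described in the talk.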
Lyme disease (LD) is a tick-borne illness caused by the bacterium Borrelia burgdorferi, which can cause severe symptoms if untreated. We present a novel diagnostic platform utilizing synthetic peptides and a deep-learning-based analytical algorithm to detect LD-specific antibodies in patient serum samples. Blinded samples acquired from the Centers for Disease Control and Prevention (CDC) were tested using our platform, achieving a sensitivity of 95% for disseminated disease and a specificity of 100% across all healthy endemic controls and cross-infected samples. Our peptide-based assay offers high sensitivity, specificity, ease of use, and cost-effectiveness, making it an attractive platform for point-of-care LD diagnosis.
Pathology underlies every facet of healthcare, influencing more than 70% of all medical decisions, every phase of pre-clinical and clinical drug development, every tumor repository and biobank, and an ever-increasing majority of standard and companion diagnostics for precision cancer care. However, such studies, whether performed traditionally via visual microscopy or via newer artificial intelligence (AI)-enhanced image analyses, are all limited by the number of markers, typically immunologic, that can be applied to dwindling, aging samples that must also be preserved for downstream multiomics analysis. In this talk, we demonstrate how we intend to transform the centuries-old practice of histopathology with a non-destructive, digitized process enabled by machine-learning-based virtual staining. The resulting fully digital, virtual multiplex tissue platform can substantially improve the quality and quantity of pathology samples by protecting sample integrity, minimizing pre-analytic degradation of target analytes, and revolutionizing the storage and processing of cancer-relevant biospecimens. We also discuss additional benefits of the technology, such as laboratory sustainability, and note that its digital outputs can be seamlessly integrated into downstream AI image-analysis software, thereby providing total characterization of cellular processes within minutes.
As deep learning continues to grow, developing adapted, energy-efficient hardware becomes crucial. Learning on a chip requires hardware-compatible learning algorithms and their realization with physically imperfect devices. Equilibrium Propagation is a training technique introduced in 2017 by Scellier and Bengio that gives gradient estimates based on a spatially local learning rule, making it both more biologically plausible and more hardware-compatible than backpropagation. This work uses the Equilibrium Propagation algorithm to train a neural network in hardware-in-the-loop simulations using hafnium oxide memristor synapses. Realizing this type of learning with imperfect and noisy devices paves the way for on-chip learning at very low energy.
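The spatial locality of the rule is what makes it hardware-friendly: each weight update needs only the activities of the two neurons it connects, contrasted between a free and a weakly clamped ("nudged") relaxed state. A minimal sketch of that update (the activations, nudging strength, and learning rate below are illustrative, not the memristor experiment itself):

```python
import numpy as np

def eqprop_update(rho_free, rho_nudged, beta, lr):
    """Equilibrium Propagation weight update for a symmetrically connected
    layer pair: the gradient estimate is the difference of local pre/post
    activity correlations between the nudged phase and the free phase,
    scaled by 1/beta (the nudging strength)."""
    corr_free = np.outer(rho_free[0], rho_free[1])
    corr_nudged = np.outer(rho_nudged[0], rho_nudged[1])
    return lr * (corr_nudged - corr_free) / beta

# Illustrative relaxed activations (pre-layer, post-layer) in both phases;
# in the nudged phase the output has moved slightly toward its target.
free = (np.array([0.2, 0.8]), np.array([0.5, 0.1]))
nudged = (np.array([0.2, 0.8]), np.array([0.6, 0.1]))
dW = eqprop_update(free, nudged, beta=0.5, lr=0.1)
print(dW.round(4))  # only weights into the nudged unit change
```

Note that the update touches only locally available quantities, which is why it can tolerate device-in-the-loop training where the "forward pass" is a physical relaxation rather than a digital computation.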
We used reservoir computing to explore changes in the connectivity patterns of whole-brain anatomical networks derived from diffusion-weighted imaging, and their impact on cognition during aging. The networks showed optimal performance at small densities. Performance decreased with increasing density, with the rate of decrease strongly associated with age and with performance on behavioural tasks measuring cognitive function. This suggests that a network core of anatomical hubs is crucial for optimal functioning, while weaker connections are more susceptible to aging effects. This study highlights the potential utility of reservoir computing in understanding age-related changes in cognitive function.
This study proposes a new approach to diagnosing Alzheimer's disease by using a generative adversarial network (GAN) applied to T1-weighted scans to predict tau pathology on positron emission tomography (PET) images. We used a cohort of 259 participants across different stages of Alzheimer's disease from the Alzheimer's Disease Neuroimaging Initiative. The proposed 3D pix2pix GAN model was more successful than other models in synthesizing regional tau-PET signals from structural brain scans. It holds great promise as a tool for multi-modal diagnosis, allowing assessment of the underlying disease pathology without exposing patients to radiation.
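The pix2pix objective combines an adversarial term with an L1 reconstruction term, L = L_GAN + λ·L1; a minimal sketch of the generator-side loss on dummy arrays (λ = 100 and the tensors below are illustrative assumptions, not the study's 3D model):

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Generator loss for pix2pix: fool the discriminator (non-saturating
    cross-entropy on patch outputs d_fake in (0,1)) plus a weighted L1 term
    tying the synthesized image to the target."""
    adv = -np.mean(np.log(d_fake + 1e-12))   # push D(fake) toward 1
    l1 = np.mean(np.abs(fake - target))      # pixel-wise reconstruction
    return adv + lam * l1

# Illustrative patch-discriminator scores and image arrays.
d_fake = np.full((4, 4), 0.5)                # D undecided on every patch
fake = np.zeros((8, 8))
target = np.full((8, 8), 0.01)
loss = pix2pix_generator_loss(d_fake, fake, target)
print(round(loss, 3))  # -log(0.5) + 100 * 0.01 ≈ 0.693 + 1.0 = 1.693
```

The large λ reflects pix2pix's design choice that the adversarial term mainly sharpens outputs while the L1 term anchors them to the paired ground truth, which matters when the target is a quantitative signal such as regional tau-PET uptake.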
Machine learning has shown great promise for modelling and analysing optical tweezers experiments. Models have been developed for particle tracking, estimating optical potentials and speeding up optical tweezers simulations. These models push the limits of what traditional techniques can achieve, and have the potential to reduce the cost and improve accessibility of accurate numerical simulations. In this talk, I will provide a brief overview of the current state of machine learning for optical tweezers simulation, current challenges, and potential solutions. In particular, I will focus on auto-encoder networks as a way to improve accuracy and reduce the required amount of training data.
Optical diffraction tomography of biological cells is commonly based on illumination scanning, which suffers from the missing-cone problem but enables accurate calibration of the projection angle. We present an AI-driven alternative using precise adaptive-optical cell rotation. A multi-core fiber is transformed into a remote phased array using a spatial light modulator and a novel phase-encoder neural network called CoreNet. The resulting high-fidelity light-field delivery enables targeted 3D cell rotation, resulting in full spatial-frequency coverage. The cell motion and rotation angle are detected automatically and in real time by a workflow based on machine learning and computer vision, leading to rapid and robust tomographic reconstruction.
Optical forces are often calculated by using geometrical optics to compute the exchange of momentum between a particle and a light beam. In geometrical optics, the light beam is represented by a certain number of rays, which sets a trade-off between calculation speed and accuracy. Here, we show that using neural networks allows us to overcome this limitation, obtaining not only faster but also more accurate simulations. We then exploit our neural-network method to study the dynamics of ellipsoidal particles in a double trap, a system that would otherwise be computationally intractable.
Three-dimensional shells can be obtained from the spontaneous self-folding of two-dimensional templates of interconnected panels, called nets. To design self-folding, one first needs to identify which nets fold into the desired structure. In principle, different nets can fold into the same three-dimensional structure. However, recent experiments and numerical simulations show that the stochastic nature of folding can lead to misfolding, and so the probability that a given net folds into the desired structure (the yield) depends strongly on the topology of the net and on experimental conditions. Here we discuss ongoing efforts to establish a relation between the structural features of nets and their folding time and probability of misfolding.
We develop an all-optical platform integrating a universal optothermal rotation technique with a standard optical microscope to drive the out-of-plane rotation of an arbitrary organism for high-resolution volumetric visualization with reduced optical shadowing, occlusion, and scattering effects. Furthermore, when coupled with machine learning for the classification of highly similar cells, our volumetric imaging technique can collect a large number of unique images for each cell and therefore reduces the sample quantities required for training. Notably, we can improve the cell-classification accuracy while using one-tenth the number of samples.
In the field of machine learning, large datasets are essential for demanding tasks, yet the performance of power-hungry processors is limited by data transfer to and from memory. Optical computing has been gaining interest as a means of high-speed computation, and here we present an optical computing framework called the scalable optical learning operator, based on spatiotemporal effects in multimode fibers. This framework is capable of performing various learning tasks, such as classifying COVID-19 X-ray lung images, speech recognition, and age prediction from face images. Our approach addresses the energy-scaling problem without compromising speed by leveraging the simultaneous linear and nonlinear interaction of spatial modes as a computation engine. Our experiments demonstrate that the accuracy of our method is comparable to that of a digital implementation.
We review the use of machine learning techniques in ultrafast fiber-optic systems. In particular, we discuss how machine learning can be used to extract useful information on the development of nonlinear instabilities from simple spectral measurements, to predict nonlinear dynamics in optical fibres for a wide range of input conditions, and to experimentally control nonlinear pulse propagation and supercontinuum generation.
Neural networks (NNs) as a computing concept were fundamentally motivated by several basic observations taken from biological brains, and the same principles founded the field of neuromorphic hardware. Today, however, concepts and hardware have drifted far apart, making it difficult to re-consolidate their original connection. The computational and economic success of NNs ultimately justified the departure from strongly bio-inspired approaches, which consequently shifted the focus towards physical computing. I will give an overview of the physical-computing trend and highlight the fundamental aspects that, in my opinion, need to be considered in order to arrive at an efficient merger of computing algorithms and physical hardware.
We show that a custom ResNet-inspired CNN architecture trained on simulated biomolecule trajectories surpasses the performance of standard algorithms in tracking biomolecules and determining their molecular weight and hydrodynamic radius in the low-kDa regime in optical microscopy. High accuracy and precision are retained even below the 10-kDa regime, constituting approximately an order-of-magnitude improvement in the limit of detection compared to the current state of the art. This enables analysis of hitherto elusive species of biomolecules, such as cytokines (~5-25 kDa), important for cancer research, and the protein hormone insulin (~5.6 kDa), potentially opening up entirely new avenues of biological research.
We present the first demonstration of unidirectional imaging that permits image formation along only one direction, from an input field-of-view to an output field-of-view, while eliminating optical transmission in the reverse direction. This unidirectional imager is formed by diffractive layers composed of isotropic linear materials spatially-coded with thousands of phase features optimized using deep learning. We experimentally tested our diffractive design using a terahertz setup and 3D-printed diffractive layers, which revealed a good agreement with our numerical simulations. The designs of these diffractive unidirectional imagers are compact and can be scaled to operate at different parts of the electromagnetic spectrum.
We demonstrate a new technique that combines holographic microscopy and deep learning to track microplankton through multiple generations and measure their 3D positions and dry mass. The method is minimally invasive and non-destructive to the plankton cells, allowing us to study their trophic interactions, feeding events, and biomass increase throughout the cell cycle. We evaluate the method on various plankton species belonging to different trophic levels, and observe the dry-mass transfer during feeding interactions and diatom growth dynamics. Our approach provides a valuable tool for understanding microplankton behaviour and interactions in the oceanic food web.
Optical proximity correction (OPC) using machine learning has emerged in recent years as a promising alternative to physical three-dimensional Maxwell solvers. The benefits are mainly reduced CPU runtime and the incorporation of resist and etching phenomena that lack proper physical models. The network architecture is key to the accuracy of the machine learning model: an appropriate architecture should capture the physics and essential features of the mask-to-resist mapping so that prediction on the test set improves. The architecture also affects the training process, where fine mask features must be fitted and reflected in the corresponding resist patterns. In this work, we use a modified U-Net with attention (Ozan Oktay et al., MIDL 2018) to construct a machine-learning model for OPC. The modification lies in the attention layers inserted where the up-sampled and cropped skip-connection data flows are combined: instead of solely using concatenation to combine them, a self-attention mechanism is shown to be effective in increasing prediction accuracy. The mask-to-resist pattern, the image-to-image dataset, is from the Canon FPA
Observing the dynamics of living cells and subcellular components is crucial for understanding fundamental biological processes. Time-lapse microscopy at the cellular and molecular level is a valuable tool for this purpose, but extracting quantitative information from these experiments can be challenging. In this talk, I will present our advances in data-driven methods for object tracking and analysis, including machine learning algorithms that offer remarkable improvements over classical methods. Specifically, I will discuss the results of an objective assessment of the performance of these methods for trajectory analysis and their follow-up applications. Furthermore, I will introduce novel strategies that we are currently developing to move beyond the tracking-by-detection paradigm. Through these methods, we hope to uncover new insights into the interactions between cellular components and their role in signaling and function regulation.
We explore the parallel information-processing capacity of a broadband diffractive optical network and demonstrate that a single diffractive network can perform a large group of arbitrarily selected, complex-valued linear transformations between its input and output fields of view at different wavelengths, accessed sequentially or simultaneously. Through deep-learning-based training of the thickness values of its diffractive features, we demonstrate that a wavelength-multiplexed diffractive processor can implement W > 180 complex-valued linear transformations with negligible error when its number of trainable diffractive features approaches 2W×I×O, where I and O refer to the number of input and output pixels, respectively.
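The scaling law above is easy to evaluate: for W wavelength channels mapping I input pixels to O output pixels, the number of trainable diffractive features approaches 2·W·I·O (the field-of-view size below is an illustrative assumption):

```python
def required_features(num_transforms, n_in, n_out):
    """Trainable diffractive features needed for W independent complex-valued
    linear transforms between n_in input and n_out output pixels: 2 * W * I * O."""
    return 2 * num_transforms * n_in * n_out

# Illustrative example: W = 180 transforms between 28x28 fields of view.
print(required_features(180, 28 * 28, 28 * 28))  # 2 * 180 * 784 * 784 = 221,276,160
```

The factor of 2 reflects that each transform is complex-valued (real and imaginary parts), so the trainable degrees of freedom must cover twice the W·I·O real parameters of the transformation matrices.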
We present LodeSTAR, an unsupervised, single-shot object detector for microscopy. LodeSTAR exploits the symmetries of the detection problem to train neural networks using extremely small datasets and without ground truth. We demonstrate that LodeSTAR is comparable to state-of-the-art supervised deep-learning methods, despite training on orders of magnitude less data and with no annotations. Moreover, LodeSTAR achieves near theoretically optimal results in terms of sub-pixel positioning of objects of various shapes. Finally, we show that LodeSTAR can exploit additional symmetries to measure additional particle properties, such as the axial position of objects and particle polarizability.
Modal crosstalk is an issue limiting the deployment of multimode fibers (MMFs) in communications. Wavefront-shaping techniques can compensate for the scrambling; however, the required coherent measurements usually need a complex optical system. In this paper, we introduce a deep-learning-based, reference-less method to undo the distortion and transmit information through an MMF. A deep neural network trained with synthetic data is able to experimentally detect both the amplitude and phase information of the light field. Using a spatial light modulator, a desired light-field distribution is obtained at the output of the MMF.
We present a time-lapse approach for image classification that significantly improves the inference of a standalone diffractive optical network. This approach utilizes the information diversity derived from controlled or random lateral displacements of the objects relative to a diffractive optical network, over a finite integration time at the image sensor, to enhance its generalization and statistical inference performance. By employing this time-lapse training and inference, we achieved a numerical blind testing accuracy of 62.03% on grayscale CIFAR-10 images, which represents the highest classification accuracy for this dataset achieved so far using a single diffractive network.
We have designed a deep neural network that can design an active metasurface antenna. The neural network provides a fast and accurate calculation of the radiation pattern of the metasurface, avoiding the miscalculations caused by the periodic approximations frequently used in calculating the unit cells of the metasurface. Using the network, we have demonstrated searches for the highest antenna gains in five different directions; the results show higher gains and lower side-lobe levels than theoretical results.
We present a novel approach to perform quantitative phase imaging (QPI) through random phase diffusers using a diffractive neural network consisting of successive diffractive layers optimized using deep learning. This diffractive network is trained to convert the phase information of samples positioned behind random diffusers into intensity variations at the output, enabling all-optical phase recovery and quantitative phase imaging of objects hidden by unknown random diffusers. Unlike traditional digital image reconstruction methods, our all-optical diffractive processor does not require external power beyond the illumination beam and operates at the speed of light propagation.
We present data class-specific transformation diffractive networks that all-optically perform different preassigned transformations for different input data classes. The visual information encoded in the amplitude, phase, or intensity channel of the input field is all-optically processed and transformed/encrypted by the diffractive network. The amplitude or intensity of the resulting field approximates the transformed/encrypted input information using the transformation matrix specifically assigned to that data class. We experimentally validated this class-specific transformation framework by designing and fabricating two diffractive networks operating at wavelengths of 1550 nm and 0.75 mm. The presented framework provides a fast, secure, and energy-efficient solution for data encryption applications.
Low-dimensional computational imaging tools with surprisingly high performance can be created by coding physical phenomena as algorithms. This discovery suggests qualitatively new ways to reimagine image and video data and to accelerate computing via analog hardware.
Machine learning methods have been widely used in subwavelength photonic structure design because they can capture the non-intuitive, nonlinear relationship between subwavelength structures and their optical responses and are significantly faster than traditional numerical simulation methods. In inverse design problems, however, machine learning models usually serve as black boxes that take the desired spectrum as input and predict the shape of the meta-atoms without elucidating the physics behind it. This makes machine learning methods difficult to apply when designing structures intended to perform complicated functions. At the same time, the multipole expansion of the scattering cross section, i.e., multipolar resonances, has been instrumental in analyzing and designing meta-atoms. In this work, we developed forward prediction models to discover hidden relationships between the scattering behavior and the shapes of meta-atoms, and an inverse design model to reconstruct meta-atoms with desired properties under the guidance of multipole expansion theory.
I will overview our work on analog neural networks based on photonics and other controllable physical systems. I will show how backpropagation can efficiently train physical neural networks (PNNs), and how to design physical network architectures for physics-based machine learning. I will review our work showing how nonlinear photonic neural networks may enhance computational sensing and how photonic neural networks may be operated robustly deep into low-energy regimes where quantum noise would ordinarily be a limiting factor. Finally, I will show that PNNs offer fundamental advantages for scaling AI models such as Transformers.
By co-designing optics and algorithms, computational cameras can do more than regular cameras: they can see in the extreme dark, measure 3D, be extremely compact, record different wavelengths of light, or capture the phase of light. These computational imagers are powered by algorithms that recover the signal from encoded or noisy measurements. Over the years, the classic methods to recover information from computational cameras have been based on minimizing an optimization problem consisting of a data-fidelity term and a hand-picked prior term. More recently, deep learning has been applied to these problems, but it often has no way to incorporate known optical characteristics, requires large training datasets, and results in black-box models that cannot easily be interpreted. In this talk, I will introduce physics-informed machine learning for computational imaging, a middle-ground approach that combines elements of classic methods with deep learning. I will demonstrate this approach through two examples on real computational cameras: a tiny, cheap lensless camera and a high-end low-light camera for nighttime videography. In each case, incorporating knowledge of the imaging system physics into neural networks can improve image quality and performance beyond what is feasible with either classic or deep methods alone. For lensless imaging, physics-informed machine learning can speed up reconstruction by an order of magnitude and improve perceptual image quality. For nighttime videography, we can learn a physics-informed noise generator that realistically synthesizes noise at extremely high-gain, low-light settings. Using this learned noise model, we can take videos of moving objects on a clear, moonless night with no external illumination (sub-millilux) for the first time, pushing the limit of what cameras can see in the extreme dark by an order of magnitude.
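As a toy version of the physics-informed idea for the lensless camera, the known convolutional forward model can be inverted by a differentiable "physics layer" such as a Wiener deconvolution, with a learned network left to refine the result. The PSF, noise level, and scene below are made-up stand-ins, not measurements from the cameras in the talk:

```python
import numpy as np

def wiener_deconv(meas, psf, snr=1e2):
    """Physics layer: invert a known convolutional forward model."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(meas) * W))

rng = np.random.default_rng(1)
scene = np.zeros((32, 32)); scene[10:20, 12:22] = 1.0
psf = np.zeros((32, 32)); psf[16, 16] = 0.6; psf[16, 18] = 0.4  # toy PSF
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                            np.fft.fft2(np.fft.ifftshift(psf))))
meas += 0.01 * rng.standard_normal(meas.shape)       # sensor noise
est = wiener_deconv(meas, psf)
# A learned network would refine `est`; the physics layer already removes the blur,
# so the network needs far less data and stays interpretable.
```

The design point is that the optics (the PSF) are handled by a known, fixed operation, and learning is reserved for what physics cannot predict, such as residual artifacts and noise statistics.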
Artificial intelligence in medical imaging involves research in task-based discovery, predictive modeling, and robust clinical translation. Quantitative radiomic analyses, an extension of computer-aided detection (CADe) and computer-aided diagnosis (CADx) methods, are yielding novel image-based tumor characteristics, i.e., signatures that may ultimately contribute to the design of patient-specific cancer diagnostics and treatments. Beyond human-engineered features, deep networks are being investigated for the diagnosis of disease on radiography, ultrasound, and MRI. The extraction of characteristic radiomic features from a region can be referred to as a “virtual biopsy”. Various AI methods are evolving as aids to radiologists, serving as a second reader, a concurrent reader, or a primary autonomous reader. This presentation will discuss the development, validation, database needs, and ultimate implementation of AI in the clinical radiology workflow, with examples from cancer, brain injuries, and COVID-19, including the creation and benefits of MIDRC (midrc.org).
Numerous applications in science and technology now use deep learning to tackle challenging computational tasks. With the increasing demand for deep learning, high-speed and energy-efficient accelerators are urgently needed. Although electronic accelerators are flexible, optical computing holds great promise due to its potential for massive parallelism and low power consumption. However, the optical computing platforms demonstrated so far have mostly been limited to relatively small-scale tasks, despite their potential for scalability. Here, we propose and demonstrate a hardware-efficient design that allows deployment of a reconfigurable deep neural network (DNN) architecture without a direct isomorphism to standard DNN designs, and that scales to larger computing tasks. Our system realizes an optical neural network (ONN) using a digital micromirror device (DMD) for encoding data and trainable parameters, a complex medium for random complex weight mixing, and a camera for nonlinear activation and optical readout. A straight-through estimator enables backpropagation even with the DMD as a binary encoding device. With this ONN as an elementary building block, and by automating the search for neural architectures, we can build complex and deep ONNs for a range of large-scale computing tasks, such as 3D medical image classification. The architecture-optimized deep ONNs are deployed by time-multiplexing data streams in one system, which enables large-scale training and inference in situ. Furthermore, we demonstrate that our system achieves task accuracies close to those of state-of-the-art benchmarks with more complex architectures implemented in silico.
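The straight-through estimator mentioned above fits in a few lines: the forward pass applies the hard binarization that the DMD physically enforces, while the backward pass passes the gradient through as if the binarization were the identity. A self-contained numpy sketch of this idea on a toy regression (our illustration, not the authors' training code):

```python
import numpy as np

def binarize_forward(w):
    """Forward: hard threshold, as enforced by binary DMD pixels."""
    return (w > 0).astype(w.dtype)

def binarize_backward(grad_out, w, clip=1.0):
    """Backward (straight-through): pass the gradient as if forward were identity,
    zeroing it where the latent weight is saturated (|w| > clip)."""
    return grad_out * (np.abs(w) <= clip)

# Toy task: fit binary weights b = binarize(w) so that x @ b matches a target y.
rng = np.random.default_rng(0)
x = rng.standard_normal((128, 8))
w_true = rng.standard_normal(8)
y = x @ binarize_forward(w_true)
w = 0.1 * rng.standard_normal(8)        # real-valued latent weights
for _ in range(200):
    b = binarize_forward(w)
    err = x @ b - y                     # loss = 0.5 * ||err||^2
    grad_b = x.T @ err / len(x)         # gradient w.r.t. the binary weights
    w -= 0.1 * binarize_backward(grad_b, w)
loss = float(np.mean((x @ binarize_forward(w) - y) ** 2))
```

The latent real weights accumulate gradient information across steps, while only their binarized values ever touch the (simulated) hardware.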
Photonic platforms for neuromorphic computing promise high-speed and low-energy computations for machine learning. However, current learning schemes in optical systems are often limited to training only a linear output layer. Here, we discuss performance gains by training input and/or internal weights of neural networks for classification tasks. We focus on optimization methods that can be directly applied to physical hardware without the need for mathematical models of the hardware or measurement of the network's state. Accordingly, we target online learning strategies that increase computational capabilities beyond reservoir computing, paving the way to more autonomous and performant photonic hardware.
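One representative model-free strategy of this kind is simultaneous perturbation stochastic approximation (SPSA), which estimates a descent direction from only two measurements of the scalar loss per step, with no model of the hardware and no access to its internal state. A sketch on a stand-in "hardware" loss (the quadratic objective is a placeholder, not a system from this work):

```python
import numpy as np

rng = np.random.default_rng(42)

def hardware_loss(w):
    """Stand-in for a measured loss on the photonic system: only the scalar
    output is observable, exactly as in a real experiment."""
    target = np.array([1.0, -2.0, 0.5, 3.0])
    return float(np.sum((w - target) ** 2))

def spsa_step(w, loss_fn, a=0.1, c=0.05):
    """Two loss evaluations give a stochastic gradient estimate."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)   # random +/-1 perturbation
    g = (loss_fn(w + c * delta) - loss_fn(w - c * delta)) / (2 * c) * delta
    return w - a * g

w = np.zeros(4)                  # e.g. internal weights applied to the hardware
for _ in range(300):
    w = spsa_step(w, hardware_loss)
final_loss = hardware_loss(w)
```

Because the cost per step is fixed at two hardware measurements regardless of the number of parameters, such schemes remain practical for training input and internal weights, not just a linear readout.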
Optical diffractive neural networks (ODNNs) have emerged as a new class of AI systems that hold promise for fast and low-energy classification of scenes. While these systems resemble electronic neural networks, they also have important differences because they must satisfy constraints imposed by the physical laws of light propagation and light-matter interactions. This raises a number of interesting fundamental questions regarding the ultimate performance that can be achieved, the optimal structure of the materials, and even how effectively they can be trained. In this presentation, we will present our efforts to address these questions. In particular, we will discuss how co-design of the diffractive material, the system architecture, and the training algorithms is essential to achieve the best performance and also reveals underlying properties. For example, a universal scaling of performance emerges that differs from traditional electronic NNs. We will also discuss how the properties of these systems differ for coherent and incoherent light. Finally, the role of depth will also be addressed.
We have developed two deep neural networks (an inverse network and a forward network) for obtaining metasurfaces operating in the visible band. Unlike other studies, the neural networks incorporate not only the geometry of the metasurface but also refractive-index information for metasurface designers. With the inverse network, we have conducted inverse designs of metasurfaces displaying specific spectra. Using the forward network, we have demonstrated a dual-mode metasurface producing a reflective image and a transmissive hologram, as well as an achromatic metalens. The networks provide engineers with vast design choices and fast calculation speed.
This study focuses on the complex relationship between aging and functional brain connectivity, and the need for advanced artificial intelligence approaches to understand it. To identify the underlying mechanisms that drive cognitive decline in aging, we present a novel graph attention network model to detect nonlinear changes in functional brain connections across the aging process. The results have the potential to improve our understanding of the complexities of aging-related diseases, such as Alzheimer's disease, and to aid in the development of effective diagnostic tools and treatments.
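The core operation of a graph attention network can be sketched compactly: each node (brain region) aggregates features from its connected neighbors with input-dependent attention weights. A minimal single-head numpy version in the style of Veličković et al.'s GAT (an illustration only; the study's architecture and features are not specified here):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(X, A, W, a_src, a_dst):
    """Single-head graph attention layer.
    X: node features (N, F); A: adjacency (N, N) with self-loops;
    W: projection (F, F'); a_src, a_dst: attention vectors (F',)."""
    H = X @ W                                          # project node features
    e = H @ a_src[:, None] + (H @ a_dst)[None, :]      # pairwise attention logits
    e = np.where(A > 0, np.maximum(0.2 * e, e), -1e9)  # LeakyReLU + edge mask
    alpha = softmax(e, axis=1)                         # normalize over neighbors
    return np.maximum(alpha @ H, 0)                    # weighted aggregation + ReLU

rng = np.random.default_rng(0)
N, F = 6, 4                                            # 6 regions, 4 features each
X = rng.standard_normal((N, F))
A = (rng.random((N, N)) > 0.5).astype(float)           # toy connectivity graph
A = np.maximum(A, np.eye(N))                           # ensure self-loops
out = gat_layer(X, A, rng.standard_normal((F, F)),
                rng.standard_normal(F), rng.standard_normal(F))
```

Because the attention weights `alpha` depend on the node features themselves, the learned coefficients can capture nonlinear, region-specific changes in connectivity rather than treating all edges uniformly.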
A novel method is developed to jointly identify functionally connected cortical-surface and deep-brain structures for TMS targeting. Anatomical information about the brain structures of interest is first used to locate enlarged candidate brain structures; a two-way clustering algorithm then partitions the candidate structures into functionally homogeneous subregions with strong functional connectivity between the cortical-surface and deep-brain structures; finally, the subregions with the strongest functional connectivity are identified as TMS targets. The method has been validated on the HCP dataset for identifying personalized sgACC and DLPFC subregions and demonstrated promising performance, outperforming the alternative methods under comparison.
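The two-way clustering step can be illustrated with a small stand-in: cluster the rows (cortical-surface voxels) and columns (deep-brain voxels) of a functional-connectivity matrix, then pick the row/column cluster pair with the highest mean connectivity. This is a generic sketch with synthetic data, not the authors' exact algorithm:

```python
import numpy as np

def kmeans2(X, iters=20):
    """Two-cluster k-means, initialized with the farthest-apart pair of points."""
    D = ((X[:, None] - X[None]) ** 2).sum(-1)
    C = X[list(np.unravel_index(D.argmax(), D.shape))]
    for _ in range(iters):
        labels = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[labels == j].mean(0) for j in range(2)])
    return labels

rng = np.random.default_rng(1)
# Synthetic FC matrix: 40 cortical voxels x 30 deep-brain voxels
FC = 0.1 * rng.random((40, 30))
FC[:15, :10] += 0.8                      # one strongly coupled subregion pair
row_lab = kmeans2(FC)                    # cluster cortical voxels by FC profile
col_lab = kmeans2(FC.T)                  # cluster deep-brain voxels likewise
scores = np.array([[FC[row_lab == i][:, col_lab == j].mean()
                    for j in range(2)] for i in range(2)])
best = np.unravel_index(scores.argmax(), scores.shape)   # target subregion pair
```

Clustering rows by their full connectivity profile (and columns likewise) is what makes the partition "two-way": each side's subregions are defined by their connections to the other side, so the selected pair is functionally homogeneous by construction.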
Amyloid-beta positron emission tomography (PET) is used for the diagnosis of Alzheimer’s disease (AD). However, the inherent radiation of radioactive tracers used for PET is potentially harmful to the human body. In this study, we present a deep-learning framework for generating high-quality standard-dose PET brain images from scans that have a simulated reduced injected dose of 12.5% of the standard injected dose, thus reducing radiation exposure without compromising image quality. This novel approach achieves remarkable similarity to full-dose images in both visual and quantitative aspects. Our method offers the potential of enabling safer and more accessible PET imaging for early Alzheimer’s disease detection.
Aging impacts brain connectivity and can lead to cognitive decline in healthy individuals, but how this occurs is not well understood. In a study of 640 cognitively normal individuals ranging in age from 18 to 88, we used deep learning and DTI MRI scans to construct individual graphs representing connectivity matrices of white-matter fiber bundles between brain regions. We then explored these connections with a graph neural network and found that age-related changes in connectivity were strongly concentrated in frontal regions, indicating that the prefrontal cortex may be particularly affected by aging. Additionally, we observed significant age-related changes in the connections of regions corresponding to the default mode network (DMN), suggesting that alterations in DMN connectivity may contribute to cognitive decline in healthy aging. These findings offer new insights into the neural mechanisms underlying cognitive aging in healthy individuals and demonstrate the potential of graph neural networks for investigating complex brain connectivity patterns.