We enhance the accuracy of a diffractive optical network through time-lapse-based inference, which exploits the information diversity obtained by introducing controlled or random lateral displacements between the object and the diffractive network. The numerical blind testing accuracy achieved using this time-lapse-based inference scheme on CIFAR-10 images reached >62%, representing the highest accuracy achieved so far on this dataset using a single diffractive network. Beyond image classification, this framework could also open doors to broader utilization of diffractive networks in tasks involving all-optical spatiotemporal information processing, paving the way for advanced visual computing paradigms.
We report deep learning-based design of diffractive all-optical processors for performing arbitrary linear transformations of optical intensity under spatially incoherent illumination. We show that a diffractive optical processor can approximate an arbitrary linear intensity transformation under spatially incoherent illumination with a negligible error if it has a sufficient number of optimizable phase-only diffractive features distributed over its diffractive surfaces. Our analysis and design framework could open up new avenues in designing incoherent imaging systems with an arbitrary set of spatially-varying point-spread functions (PSFs). Moreover, this framework can also be extended to design task-specific all-optical visual information processors under natural illumination.
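The core property exploited above can be sketched numerically: under spatially incoherent illumination, a diffractive processor maps input intensity to output intensity through a nonnegative linear operator, whose columns are the spatially varying PSFs. The matrix sizes, normalization, and random transform below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 64  # hypothetical pixel counts for the input/output fields of view

# Each column of A is the (nonnegative) spatially varying point-spread function
# produced by one input pixel; A as a whole is the intensity transformation.
A = rng.random((n_out, n_in))          # nonnegative transform matrix (toy example)
A /= A.sum(axis=0, keepdims=True)      # column-normalize: a passive, lossless toy model

I_in = rng.random(n_in)                # an arbitrary input intensity pattern
I_out = A @ I_in                       # the all-optical linear intensity transformation

# Linearity check: intensities superpose under spatially incoherent illumination.
I2 = rng.random(n_in)
assert np.allclose(A @ (I_in + I2), I_out + A @ I2)
```

The nonnegativity of `A` is what distinguishes the incoherent case from the coherent one, where the transform acts on complex fields and can carry arbitrary signs and phases.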
We directly transfer optical information around arbitrarily-shaped, fully-opaque occlusions that partially or entirely block the line-of-sight between the transmitter and receiver apertures. An electronic neural network (encoder) produces an encoded phase representation of the optical information to be transmitted. Despite being obstructed by the opaque occlusion, this phase-encoded wave is decoded by a diffractive optical network at the receiver. We experimentally validated our framework in the terahertz spectrum by communicating images around different opaque occlusions using a 3D-printed diffractive decoder. This scheme can operate at any wavelength and be adopted for various applications in emerging free-space communication systems.
As an optical processor, a diffractive deep neural network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing, completing its tasks at the speed of light propagation through thin optical layers. With sufficient degrees of freedom, D2NNs can perform arbitrary complex-valued linear transformations using spatially coherent light. Similarly, D2NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are nonnegative, acting on diffraction-limited optical intensity patterns at the input field of view. Here, we expand the use of spatially incoherent D2NNs to complex-valued information processing for executing arbitrary complex-valued linear transformations using spatially incoherent light. Through simulations, we show that as the number of optimized diffractive features increases beyond a threshold dictated by the product of the input and output space-bandwidth products, a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination. The findings are important for the all-optical processing of information under natural light using various forms of diffractive surface-based optical processors.
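The threshold mentioned above is a degrees-of-freedom count: approximating a general linear transformation between input and output apertures requires roughly as many optimizable diffractive features as there are independent entries in the transform matrix. The pixel counts and per-layer feature budget below are hypothetical numbers chosen only to make the counting concrete.

```python
import math

# Illustrative degrees-of-freedom count (all numbers are assumptions):
N_i = 8 * 8                        # hypothetical input space-bandwidth product (pixels)
N_o = 8 * 8                        # hypothetical output space-bandwidth product (pixels)
threshold = N_i * N_o              # ~number of independent entries in the transform

features_per_layer = 40 * 40       # hypothetical phase-only features per diffractive surface
layers_needed = math.ceil(threshold / features_per_layer)

print(threshold, layers_needed)    # prints: 4096 3
```

This back-of-the-envelope count explains why multi-layer designs become necessary as the target fields of view grow: the feature budget must scale with the product of the two space-bandwidth products, not their sum.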
We present a time-lapse approach for image classification that significantly improves the inference performance of a standalone diffractive optical network. This approach utilizes the information diversity derived from controlled or random lateral displacements of the objects relative to a diffractive optical network, over a finite integration time at the image sensor, to enhance its generalization and statistical inference performance. By employing this time-lapse training and inference, we achieved a numerical blind testing accuracy of 62.03% on grayscale CIFAR-10 images, which represents the highest classification accuracy for this dataset achieved so far using a single diffractive network.
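The time-lapse readout can be sketched as follows: the sensor integrates the network's output intensity while the object undergoes a sequence of lateral shifts, and the class decision is made on the accumulated signal. The random linear map `W` below is a toy stand-in for a fixed, trained diffractive network, and the shift range and frame count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_px, n_classes, T = 32, 10, 8             # image size, classes, time-lapse frames (assumed)

W = rng.random((n_classes, n_px * n_px))   # toy stand-in for a fixed diffractive network
img = rng.random((n_px, n_px))             # a test object

# Time-lapse inference: accumulate output intensities over T random lateral
# displacements of the object, mimicking a finite sensor integration time.
scores = np.zeros(n_classes)
for _ in range(T):
    dy, dx = rng.integers(-2, 3, size=2)           # random lateral shift in pixels
    shifted = np.roll(img, (dy, dx), axis=(0, 1))  # displaced object
    scores += W @ shifted.ravel()                  # detector integrates over time

prediction = int(np.argmax(scores / T))            # class with the highest averaged signal
```

The averaging over displacements is what supplies the information diversity: each shift probes the (shift-variant) optical response differently, and the integrated signal is less sensitive to any single unfavorable alignment.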
We report the all-optical and twin-image-free reconstruction of inline holograms using diffractive networks. Our numerical results reveal that these diffractive network designs, when properly trained using error back-propagation algorithms, generalize very well to reconstruct new, unseen holograms at the speed of light propagation, without any external power source (except the illumination light). These diffractive hologram reconstruction networks also exhibit improved power efficiency and extended depth-of-field. With their passive operation and orders-of-magnitude faster reconstruction speed than digital hologram reconstruction systems, diffractive networks can find numerous applications in holographic imaging and display-related applications.
We report the use of ensemble learning to achieve significant improvements in the performance of diffractive optical classifiers on the CIFAR-10 image dataset. We initially created a pool of 1252 diversely-trained diffractive network models; using a novel iterative pruning algorithm, we trimmed this down to an ensemble size of 14 diffractive networks to achieve a blind testing accuracy of 61.14% on CIFAR-10 image classification, representing an inference accuracy >16% higher than the average performance of the individual diffractive networks within the ensemble. These results signify a major advancement in all-optical inference and image classification capabilities of diffractive networks.
We report diffractive network-based all-optical reconstruction of inline holograms, eliminating twin-image artifacts without the use of a digital computer. We show that these trained diffractive networks generalize very well to reconstruct unknown holograms of new, unseen objects at the speed of light propagation through the physical diffractive network. The diffractive hologram reconstruction networks also exhibit improved diffraction efficiency and extended depth-of-field. All-optical reconstruction of holograms using passive diffractive networks can find numerous applications in holographic imaging and display-related applications due to its computer-free hologram reconstruction capability that is completed at the speed of light propagation within spatially engineered diffractive materials.
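For context, the digital baseline that such a diffractive decoder replaces is free-space back-propagation of the inline hologram, e.g. via the angular spectrum method, whose naive output still contains the twin-image artifact. The grid size, pixel pitch, wavelength, and propagation distance below are illustrative values, and the random array stands in for a measured hologram.

```python
import numpy as np

# Angular spectrum back-propagation of an inline hologram (digital baseline).
# All parameters are illustrative assumptions, not values from the paper.
n, dx, wl, z = 128, 1e-6, 532e-9, 100e-6   # grid size, pixel pitch [m], wavelength [m], distance [m]

fx = np.fft.fftfreq(n, d=dx)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 - (wl * FX) ** 2 - (wl * FY) ** 2
H = np.exp(1j * 2 * np.pi / wl * z * np.sqrt(np.maximum(arg, 0.0)))  # forward transfer function

holo = np.random.default_rng(3).random((n, n))   # stand-in for a measured hologram intensity
# Back-propagate the hologram's amplitude to the object plane (conjugate transfer function):
field = np.fft.ifft2(np.fft.fft2(np.sqrt(holo)) * np.conj(H))
recon = np.abs(field) ** 2                       # reconstruction, still carrying the twin image
```

The twin image arises because the intensity-only hologram loses the field's phase, so back-propagation recovers the object superposed with its defocused conjugate; eliminating it all-optically, without iterative digital phase retrieval, is the contribution described above.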
We improve the inference performance of diffractive deep neural networks (D2NNs) for image classification by utilizing ensemble learning and feature engineering. Through a novel pruning algorithm, we designed an ensemble of N=14 D2NNs that collectively achieve a blind testing accuracy of 61.14% on the classification of CIFAR-10 images, which provides an improvement of >16% compared to the average performance of the individual D2NNs within the ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive network design and would be broadly useful to create diffractive optical machine learning systems for various imaging and sensing needs.
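One common way to prune a model pool down to a small ensemble is greedy backward elimination on validation accuracy; the sketch below illustrates that idea and is not the paper's exact algorithm. The pool size, validation set, and random class-score predictions are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, n_val, n_classes = 20, 200, 10   # toy pool size, validation samples, classes

# Hypothetical per-model class scores on a validation set; ensembling sums the
# member networks' output signals before taking the class decision.
probs = rng.random((n_models, n_val, n_classes))
labels = rng.integers(0, n_classes, n_val)

def ensemble_acc(members):
    fused = probs[list(members)].sum(axis=0)           # sum member class scores
    return float((fused.argmax(axis=1) == labels).mean())

# Greedy iterative pruning (a sketch, assuming backward elimination): repeatedly
# drop the member whose removal leaves the best-performing ensemble.
members = set(range(n_models))
while len(members) > 14:                               # target ensemble size from the abstract
    worst = max(members, key=lambda m: ensemble_acc(members - {m}))
    members.remove(worst)

print(len(members), ensemble_acc(members))
```

Backward elimination is attractive here because ensemble accuracy is not additive over members: a model that is weak alone can still be kept if its errors are uncorrelated with the rest of the ensemble.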