Dynamic real-time optical processing has significant potential for accelerating specific tensor algebra. Here we present the first demonstration of simultaneous amplitude and phase modulation of a two-dimensional optical signal in the Fourier plane of a thin lens. Two spatial light modulators (SLMs) arranged in a Michelson interferometer modulate the amplitude and the phase while sitting simultaneously in the focal plane of two Fourier lenses. The lenses frame the interferometer in a 4f system, enabling full complex modulation in the Fourier domain of a telescope. The main sources of phase noise and loss are discussed, including the non-linear inter-pixel crosstalk native to SLMs, the variability of modulation efficiency as a function of the projected mask parameters, and the limitations of Fresnel optics. Such a system is of great utility in the rapidly progressing fields of optical computing, hardware acceleration, encryption, and machine learning, where neglecting phase modulation can lead to impractical bit-error rates.
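To make the 4f Fourier-plane operation concrete, the following minimal sketch (an illustration only, not the authors' Michelson/SLM setup; the grid size, beam profile, and mask shapes are assumed) simulates an input field transformed by a first lens, multiplied by a complex amplitude-and-phase mask in the Fourier plane, and transformed back by a second lens:

```python
import numpy as np

# Conceptual sketch of a 4f system: FFT -> complex Fourier-plane mask -> inverse FFT.
# The mask combines an amplitude pattern and a phase pattern, standing in for the two SLMs.

N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)

# Hypothetical input field: a Gaussian beam carrying a square aperture.
field_in = np.exp(-(X**2 + Y**2) / 0.1) * ((np.abs(X) < 0.4) & (np.abs(Y) < 0.4))

# Propagate to the Fourier plane of the first lens (scaling constants omitted).
spectrum = np.fft.fftshift(np.fft.fft2(field_in))

# Fourier-plane mask on the same normalized grid: an amplitude low-pass
# (one SLM) and a quadratic phase (the other SLM).
R2 = X**2 + Y**2
amplitude_mask = (R2 < 0.25).astype(float)
phase_mask = np.exp(1j * 20.0 * R2)
spectrum *= amplitude_mask * phase_mask

# The second lens performs the transform back to the image plane.
field_out = np.fft.ifft2(np.fft.ifftshift(spectrum))

print("output intensity peak:", np.max(np.abs(field_out))**2)
```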
Convolutional neural networks have become an essential element of spatial deep learning systems. In the prevailing architecture, the convolution operation is performed with Fast Fourier Transforms (FFT) electronically in GPUs. The parallelism of GPUs provides an efficiency advantage over CPUs; however, both approaches, being electronic, are bound by the speed and power limits of the interconnect delay inside the circuits. Here we present a silicon-photonics-based architecture for convolutional neural networks that harnesses the phase property of light to perform FFTs efficiently. Our all-optical FFT is based on nested Mach-Zehnder interferometers, directional couplers, and phase shifters, with backend electro-optic modulators for sampling. The FFT delay depends only on the propagation delay of the optical signal through the silicon photonics structures. Designing and analyzing the performance of a convolutional neural network deployed with our on-chip optical FFT, we find dramatic improvements of up to 10^2 compared with state-of-the-art GPUs when exploring a compounded figure-of-merit given by power per convolution over area. At a high level, this performance is enabled by mapping the desired mathematical function, an FFT, synergistically onto hardware, in this case optical delay interferometers.
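The speedup rests on the convolution theorem that any FFT engine, electronic or optical, exploits: convolution in the spatial domain becomes pointwise multiplication in the Fourier domain. A brief numerical check of this identity (illustrative only; the array sizes and kernels below are arbitrary) is sketched here:

```python
import numpy as np

# Convolution theorem: circular convolution of image and kernel equals
# ifft2( fft2(image) * fft2(zero-padded kernel) ).

rng = np.random.default_rng(0)
image = rng.random((32, 32))
kernel = rng.random((5, 5))

# Zero-pad the kernel to the image size so both spectra share one grid.
padded = np.zeros_like(image)
padded[:5, :5] = kernel

# FFT-based path: transform, multiply pointwise, transform back.
conv_fft = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

# Direct circular convolution for comparison.
conv_direct = np.zeros_like(image)
for di in range(5):
    for dj in range(5):
        conv_direct += kernel[di, dj] * np.roll(image, shift=(di, dj), axis=(0, 1))

print("max abs difference:", np.max(np.abs(conv_fft - conv_direct)))
```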
The performance shortcomings of multipurpose compute engines have stirred recent excitement in specialized processors, preempted by GPUs. Simultaneously, problems that computational complexity theory deems NP-'hard', scaling as O(n^k), require new hardware solutions. This presents an opportunity for photonic information processors (PIP) building on photonic integration through recent foundry developments. The value proposition for PIPs exists in optical parallelism, the small capacitive charging of opto-electronic devices, short propagation delays of tens of picoseconds, a natural convolution via optical interference, and an O(n)-scaling Fourier transform. Based on a recently developed photonic NxN router, here we present two photonic processors: a) the residue arithmetic nanophotonic computer (RANC), and b) a reconfigurable graph processor, the latter following a computing-in-switching (CIS) paradigm. Once the processor is configured (e.g. by setting phases), PIPs operate at time-of-flight, which is on the order of 10-100 ps given mm-scale photonic integration footprints. This high bandwidth, however, stresses the electronic-optic I/O bottleneck. To address this, we further discuss an optical front-end DAC with <100 ps delay enabled by a 2x2 electro-optic switch.
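For readers unfamiliar with residue arithmetic, the sketch below illustrates the carry-free encoding that a residue-arithmetic processor such as the RANC exploits; the moduli and operand values are arbitrary examples, and the Chinese Remainder Theorem reconstruction shown in software stands in for whatever decoding the hardware actually uses:

```python
from math import prod

# Residue number system sketch: pairwise-coprime moduli define the encoding;
# addition (and multiplication) act independently, carry-free, on each residue channel.

moduli = (5, 7, 9)            # example moduli, dynamic range = 5*7*9 = 315

def to_residues(x):
    return tuple(x % m for m in moduli)

def add_residues(a, b):
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def from_residues(r):
    # Chinese Remainder Theorem reconstruction back to an integer.
    M = prod(moduli)
    total = 0
    for ri, m in zip(r, moduli):
        Mi = M // m
        total += ri * Mi * pow(Mi, -1, m)
    return total % M

a, b = 123, 88
print(from_residues(add_residues(to_residues(a), to_residues(b))))  # -> 211
```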
Photonic neural networks (PNN) are a promising alternative to electronic GPUs for performing machine-learning tasks. The PNN value proposition originates from i) near-zero energy consumption for vector-matrix multiplication once trained, ii) short, 10-100 ps interconnect delays, and iii) the weak optical nonlinearity required, which can be provided by emerging fJ/bit-efficient electro-optic devices. Furthermore, photonic integrated circuits (PIC) offer high data bandwidth at low latency, with competitive footprints and synergies with microelectronics architectures such as foundry access. This talk discusses recent advances in photonic neuromorphic networks and provides a vision for photonic information processors. Details include 1) a comparison of compute-performance technologies with respect to compute efficiency (i.e. MAC/J) and compute speed (i.e. MAC/s), 2) a discussion of photonic neurons, i.e. perceptrons, 3) architectural network implementations, 4) a broadcast-and-weight protocol, 5) nonlinear activation functions provided via electro-optic modulation, and 6) experimental demonstrations of early-stage prototypes. The talk opens by answering why neural networks are of interest and concludes with the application regime of PNN processors, which resides in deep learning, nonlinear optimization, and real-time processing.
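As a rough numerical analogue of the broadcast-and-weight protocol and electro-optic activation mentioned above (a sketch under assumed parameters, not the hardware model presented in the talk), a single photonic-style perceptron can be written as a weighted sum followed by a Mach-Zehnder-like transfer function:

```python
import numpy as np

# Broadcast-and-weight style neuron: each input rides its own wavelength channel,
# tunable filters apply weights in [-1, 1] (balanced detection), the photodetector
# sums the weighted powers, and an electro-optic modulator supplies the nonlinearity.

def eo_activation(current, v_pi=1.0):
    # Example activation: the sine-squared transfer of a Mach-Zehnder modulator.
    return np.sin(0.5 * np.pi * current / v_pi) ** 2

def bw_neuron(inputs, weights, bias=0.0):
    summed_current = np.dot(weights, inputs) + bias   # weighted addition at the detector
    return eo_activation(summed_current)

inputs = np.array([0.2, 0.8, 0.5, 0.1])     # optical input powers (normalized)
weights = np.array([0.9, -0.4, 0.3, 0.7])   # filter-bank weights in [-1, 1]
print(bw_neuron(inputs, weights, bias=0.1))
```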
KEYWORDS: Neural networks, Neurons, Mirrors, Laser induced fluorescence, 3D vision, Human vision and color perception, 3D visualizations, Artificial neural networks, Logic, Sensors
The ability to rapidly identify symmetry and anti-symmetry is an essential attribute of intelligence. Symmetry perception is a central process in human vision and may be key to human 3D visualization. While previous work on understanding neuronal symmetry perception has concentrated on the neuron as an integrator, here we show how the coincidence-detecting property of the spiking neuron can be used to reveal symmetry density in spatial data. We develop a method for synchronizing symmetry-identifying spiking artificial neural networks to enable layering and feedback in the network. We show a method for building a network capable of identifying symmetry density between sets of data and present a digital logic implementation demonstrating an 8x8 leaky-integrate-and-fire (LIF) symmetry detector in a field programmable gate array. Our results show that the efficiencies of spiking neural networks can be harnessed to rapidly identify symmetry in spatial data, with applications in image processing, 3D computer vision, and robotics.

In conclusion, we have presented a novel algorithm for finding a scalar field representing the symmetry of points in a multi-dimensional space. We have shown how time synchronization of the input values of spiking neural networks, with the appropriate choice of threshold and spike period, results in the identification of output neurons located at points of high symmetry density relative to the network inputs. We have demonstrated an implementation of the symmetry-selective LIF neural network in common hardware, with high-speed, 2.8 MHz identification of symmetry points in an 8x8 Manhattan metric space. Our results show that utilizing only the delay and coincidence-detecting properties of a single layer of neurons in spiking neural networks naturally leads to effective symmetry identification. A greater understanding of symmetry perception in artificial intelligences will lead to systems with more effective pattern visualization, compression, and goal-setting processes.
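A minimal software analogue of the delay-and-coincidence principle (a 1-D sketch with assumed threshold and leak values, not the paper's 8x8 FPGA implementation) is shown below: spikes from mirrored input points arrive simultaneously at their midpoint neuron and drive it over threshold.

```python
import numpy as np

# Leaky integrate-and-fire neurons as coincidence detectors. Each input point
# emits a spike that reaches a candidate neuron after a delay equal to their
# separation; spikes from mirrored inputs arrive together at the midpoint neuron.

n = 16
inputs = [3, 5, 9, 11]            # input point positions (symmetric about 7)
threshold = 1.5
leak = 0.5                        # membrane decay per time step

potential = np.zeros(n)
fire_time = np.full(n, -1)

for t in range(n):
    potential *= leak
    for pos in range(n):
        # Spikes arriving at neuron `pos` at time t originate at distance t.
        potential[pos] += sum(1 for src in inputs if abs(src - pos) == t)
    newly_fired = (potential >= threshold) & (fire_time < 0)
    fire_time[newly_fired] = t

print("symmetry-point neurons:", np.nonzero(fire_time >= 0)[0])  # -> [4 6 7 8 10]
```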
In the search for low-cost wide-spectrum imagers it may become necessary to forgo the expense of the focal plane array and revert to a scanning methodology. In many cases the sensor may be too unwieldy to physically scan, and mirrors may have adverse effects on particular frequency bands. In these cases, photonic masks can be devised to modulate the incoming light field with a code over time. This is in essence code-division multiplexing of the light field into a lower-dimension channel. In this paper a simple method for modulating the light field with masks of the Archimedes' spiral is presented and a mathematical model of the two-dimensional mask set is developed.
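As a purely illustrative sketch (the paper's actual mask parameterization and measurement model are not reproduced here; the spiral pitch, arm width, and rotation schedule below are assumptions), an Archimedes'-spiral mask can be rasterized and rotated between frames to generate a temporal code, each frame projecting the scene onto a single detector value:

```python
import numpy as np

# Binary Archimedes'-spiral mask r = a * theta on a normalized grid. Rotating the
# spiral between frames yields a time-varying code multiplexing the scene onto one detector.

def spiral_mask(n=128, a=0.02, arm_width=0.03, rotation=0.0):
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x) + rotation, 2 * np.pi)
    # A pixel lies on the arm if its radius is close to a*(theta + 2*pi*k) for some turn k.
    k = np.maximum(np.round((r / a - theta) / (2 * np.pi)), 0)
    r_arm = a * (theta + 2 * np.pi * k)
    return (np.abs(r - r_arm) < arm_width).astype(float)

# Example: encode a scene through a sequence of rotated masks.
rng = np.random.default_rng(1)
scene = rng.random((128, 128))
codes = [spiral_mask(rotation=phi) for phi in np.linspace(0, np.pi, 8, endpoint=False)]
measurements = [np.sum(mask * scene) for mask in codes]
print(measurements)
```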