KEYWORDS: Telecommunications, Neural networks, Target recognition, Signal processing, Photonics, Modulation, Digital micromirror devices, Data processing, Computer hardware, System integration
A high-performance photonic reservoir, which utilizes injection locking of a semiconductor multimode laser (SML), will be developed. This innovative design allows for fully parallel and high-bandwidth operation at telecommunication wavelengths. The output of this system is projected in free space and imaged onto a digital micromirror device, which provides a readout and facilitates the hardware integration of programmable output weights. By using a highly multimode semiconductor laser, injection locking enables a large number of modes to be simultaneously locked to the high-frequency-modulated injection laser that provides the input signal, resulting in a high dimensionality of the reservoir. The hardware integration of programmable output weights enables the system to be optimized for specific tasks, improving performance and reducing power consumption.
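In reservoir computing, only the output weights are trained while the reservoir itself (here, the injection-locked laser modes) stays fixed. The following pure-Python sketch illustrates that principle on a toy random reservoir trained to recall the previous input bit; all names, sizes, and parameters are illustrative stand-ins, not the experimental system described above.

```python
import math
import random

random.seed(0)

N = 30        # number of reservoir "modes" (toy stand-in for laser modes)
T = 400       # length of the driving input sequence
LR = 0.05     # learning rate for the readout weights
EPOCHS = 200

# Fixed random reservoir: recurrent weights W and input weights w_in are never trained.
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
w_in = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Drive the reservoir with random bits and collect its nonlinear state at each step.
u = [random.randint(0, 1) for _ in range(T)]
x = [0.0] * N
states, targets = [], []
for t in range(1, T):
    x = [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + w_in[i] * u[t])
         for i in range(N)]
    states.append(x)
    targets.append(float(u[t - 1]))  # task: recall the previous input bit

# Train only the linear readout y = w_out . x by gradient descent on the MSE.
w_out = [0.0] * N

def mse():
    err = 0.0
    for s, y in zip(states, targets):
        yhat = sum(wo * si for wo, si in zip(w_out, s))
        err += (yhat - y) ** 2
    return err / len(states)

initial = mse()
for _ in range(EPOCHS):
    grad = [0.0] * N
    for s, y in zip(states, targets):
        e = sum(wo * si for wo, si in zip(w_out, s)) - y
        for i in range(N):
            grad[i] += 2 * e * s[i] / len(states)
    w_out = [wo - LR * g for wo, g in zip(w_out, grad)]
final = mse()
print(f"readout MSE: {initial:.3f} -> {final:.3f}")
```

The key point mirrored from the hardware concept: optimizing for a new task touches only `w_out` (the DMD-programmed output weights), never the fixed reservoir dynamics.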
Combining the strengths of multiple photonic and electronic concepts in one hybrid, multi-chip platform is a promising route to diversify chips for specific computing tasks and boost performance. Using additive, CMOS-compatible one-photon (OPP) and two-photon polymerization (TPP), i.e. flash-TPP printing, we create low-loss 3D-integrated photonic chips for scalable and parallel interconnects, which are challenging to realize in 2D. Here, we demonstrate the CMOS compatibility of this technology by merging polymer-based 3D photonic chips with diverse photonic platforms. We interfaced 3D waveguides on top of semiconductor (GaAs) quantum-dot micro-lasers, yielding very high emission collection efficiency into the waveguides at cryogenic temperatures (4 K). Furthermore, we integrated our technology with silicon-on-insulator (SOI) platforms by efficiently coupling light from 2D planar SiN waveguides into out-of-plane 3D waveguides. With this, we lay a promising foundation for the scalable integration of hybrid photonic and electronic platforms.
Additive fabrication, in particular direct laser writing (DLW) combined with two-photon polymerization (TPP), stands out as an innovative tool for creating intricate 3D photonic components. However, the long fabrication time associated with DLW-TPP restricts large-scale implementations. Here, we introduce an adaptive lithography strategy, flash-TPP, combining one-photon (OPP) and two-photon polymerization while adjusting the resolution of the different sections of the photonic circuit, reducing the printing time by up to 90% compared to TPP-only fabrication. Via flash-TPP, we demonstrate the fabrication of polymer-cladded single-mode photonic waveguides and adiabatic splitters, with low propagation (injection) losses of 1.3 dB/mm (0.26 dB), record optical coupling losses of 0.06 dB, and very symmetric (3.4%) splitting ratios for adiabatic couplers. The scalability of output ports addressed here can only be achieved by using all three spatial dimensions, which is challenging in 2D.
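The loss figures quoted in the abstract can be translated into transmitted power fractions with the standard decibel relation T = 10^(−L_dB/10). A quick sanity check (the values are from the abstract; the helper name is ours):

```python
def db_to_transmission(loss_db: float) -> float:
    """Convert an optical loss in dB to the transmitted power fraction."""
    return 10 ** (-loss_db / 10)

# 1.3 dB/mm propagation loss: power remaining after 1 mm of waveguide
print(f"after 1 mm: {db_to_transmission(1.3):.1%}")   # ~74.1% transmitted
# 0.26 dB injection loss and 0.06 dB coupling loss
print(f"injection:  {db_to_transmission(0.26):.1%}")  # ~94.2% transmitted
print(f"coupler:    {db_to_transmission(0.06):.1%}")  # ~98.6% transmitted
```

So the record 0.06 dB coupling loss corresponds to well over 98% of the optical power surviving each coupler transition.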
Photonic neural networks based on semiconductor lasers hold great promise for future high-performance computing at ultra-high bandwidths. We integrate such networks in a variety of semiconductor laser structures and demonstrate their scalable photonic integration. Finally, we apply our fully implemented neural networks to technologically and commercially relevant tasks in optical telecommunication, demonstrating their operation in real time and without active involvement of classical digital electronic computers.
Digital micromirror devices (DMDs) are versatile, high-performance photonic components that combine high configurability with a large number of programmable parameters and high bandwidth. These are essential features in photonic neural networks. A DMD's mirrors can optically encode information to be injected into a photonic neural network, or they can even provide configurable connections between the photonic neurons of the network itself. Their easy programmability makes them highly attractive, as through this feature DMDs act as the interface between the analogue world of the photonic neural network and the digital world of programming languages and information processing. I will introduce several such DMD-based operations in photonic AI and will sketch future possibilities for the development of the field.
Photonic neural networks are a highly promising computational system for AI-inspired future information processing. We have recently demonstrated the first fully implemented photonic neural network realized in multimode semiconductor lasers. The numerous laser modes act as the system's neurons, with carrier diffusion and intra-cavity diffraction creating recurrent connections. I will discuss our recent results, in which we push the real-time data rate of the neural network towards GHz levels and use such systems to address highly relevant photonic-technology applications.
The topology of neural networks fundamentally differs from classical computing concepts. They feature a co-location of memory and transformation of information, which makes them ill-suited for implementation in von Neumann architectures. In substrates pursuing in-memory computing, the connection topology of a neural network is encoded in the wiring of a chip, whether photonic or electronic, and this approach promises to revolutionize the efficiency of neural network computing. Equally general is that such in-memory architectures cannot be efficiently implemented in 2D substrates, where chip real estate as well as energy consumption increase with the number of neurons with an exponent larger than unity. I will discuss our recent work on using additive one- and two-photon polymerization to create 3D-integrated photonic chips that will allow us to overcome this scaling bottleneck. Our process is CMOS compatible and hence provides a direct path to a technological implementation.
Artificial neural networks (ANNs) have become a staple computing technique in many fields. Yet, they differ from classical computing hardware by taking a connectionist and parallel approach to computing and information processing. Here, we present a high-performance, scalable, fully parallel, and autonomous photonic neural network (PNN) based on a large-area vertical-cavity surface-emitting laser (LA-VCSEL). We implement more than 300 hardware nodes and train the network to perform up to 6-bit header recognition, XOR classification, and digital-to-analog conversion. Moreover, we investigate the impact of different physical parameters, namely injection wavelength, injection power, and bias current, on performance, and link these parameters to the general computational measures of consistency and dimensionality.
An efficient photonic hardware integration of neural networks can benefit from the inherent parallelism, high-speed data processing, and potentially low energy consumption of photonics. In artificial neural networks (ANNs), neurons are static and continuous-valued. In contrast, information transmission and computation in biological neurons occur through spikes, where spike timing and rate play a significant role. Spiking neural networks (SNNs) are thereby more biologically relevant and offer additional benefits in terms of hardware friendliness and energy efficiency. Considering these advantages, we designed a photonic reservoir computer (RC) based on a recurrent photonic SNN, i.e. a liquid state machine. It is a scalable proof-of-concept experiment comprising more than 30,000 neurons. This system presents an excellent testbed for demonstrating next-generation bio-inspired learning in photonic systems.
Low-loss single-mode optical coupling is a fundamental tool for most photonic networks, in both classical and quantum settings. Adiabatic coupling can achieve highly efficient and broadband single-mode coupling using tapered waveguides, and it is a widespread design in current 2D photonic integrated circuit technology. Optical power transfer between a tapered input and inversely tapered output waveguides is achieved through evanescent coupling, and the optical mode leaks adiabatically from the input core through the cladding into the output waveguide cores. We have recently shown that for advantageous scaling of photonic networks, unlocking the third dimension for integration is essential. Two-photon polymerization (TPP) is a promising tool allowing dynamic and precise 3D printing of submicrometric optical components. Here, we leverage rapid fabrication by constructing the entire 3D photonic chip combining one-photon (OPP) and two-photon polymerization in the (3+1)D flash-TPP lithography configuration, saving up to ≈ 90% of the printing time compared to full TPP fabrication. This additional photo-polymerization step provides auxiliary matrix stability for complex structures, a sufficient refractive index contrast of Δn ≈ 5×10⁻³ between waveguide core and cladding, and propagation losses of 1.3 dB/mm for single-mode propagation. Overall, we compare different tapering strategies and reduce total losses below ∼ 0.2 dB by tailoring the coupling and waveguide geometry. Furthermore, we demonstrate adiabatic broadband functionality from 520 nm to 980 nm and adiabatic couplers with one input and up to 4 outputs. The scalability of output ports addressed here can only be achieved by using all three spatial dimensions; such an adiabatic implementation is impossible in 2D.
Scalability is essential for computing, yet classical 2D integration of neural networks faces fundamental challenges in this regard. Using 3D printing via two-photon-polymerization-based direct laser writing, we overcome this challenge, create low-loss waveguides, and demonstrate dense as well as convolutional network topologies that scale linearly in size. Air-clad, high-confinement waveguides allow for high-density multimode photonic integration. Leveraging the writing laser's power as a degree of freedom in a (3+1)D printing technique, we also achieve precise control over the refractive index contrast, which enables single-mode propagation and low-loss evanescent couplers for next-generation 3D-integrated photonic circuits.
We analyze the fundamental impact of noise propagation in deep neural networks (DNNs) comprising nonlinear neurons and connections optimized by training. Our motivation is to understand the impact of noise in analogue neural network realizations. We consider the influence of additive and multiplicative, correlated and uncorrelated types of internal noise in DNNs. We find general properties of the noise impact depending on the noise type, activation function, depth, and the statistics of the connection matrices, and show that noise accumulation can be efficiently avoided. Our work is based on analytical methods predicting the noise levels in all layers of the network.
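The intuition behind avoidable noise accumulation can be illustrated with a deliberately simplified toy model (a scalar linear chain, not the analytical framework of the work above): with additive, uncorrelated per-layer noise of standard deviation σ and a contractive per-layer gain |G| < 1, the output noise variance is σ² Σ_{k=0}^{L−1} G^{2k}, which saturates at σ²/(1 − G²) instead of growing without bound. A Monte-Carlo check of that closed form:

```python
import random

random.seed(1)

G = 0.9        # per-layer gain (contractive, |G| < 1)
SIGMA = 0.05   # additive noise standard deviation per layer
LAYERS = 10
TRIALS = 20000

# Monte-Carlo: propagate a fixed input through the noisy linear chain many times.
outputs = []
for _ in range(TRIALS):
    x = 1.0
    for _ in range(LAYERS):
        x = G * x + random.gauss(0.0, SIGMA)
    outputs.append(x)

mean = sum(outputs) / TRIALS
var = sum((o - mean) ** 2 for o in outputs) / TRIALS

# Analytic prediction: var = sigma^2 * sum_{k=0}^{L-1} G^(2k),
# a geometric series that saturates at sigma^2 / (1 - G^2) for |G| < 1.
analytic = SIGMA ** 2 * sum(G ** (2 * k) for k in range(LAYERS))
print(f"measured variance {var:.5f} vs analytic {analytic:.5f}")
```

Real DNNs add nonlinear activations and trained connection matrices, which is exactly what the analytical methods in the abstract account for; the toy model only shows why depth need not imply runaway noise.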
Artificial Neural Networks (ANNs) have become a staple computing technique. Their flexibility allows them to excel in a wide range of tasks, and they benefit from a highly parallelized architecture by design. We experimentally demonstrate a fully parallel photonic neural network using spatially distributed modes of a large-area vertical-cavity surface-emitting laser (LA-VCSEL). All components of the ANN are fully realized in parallel hardware. We train the readout weights to perform 2- and 3-bit header recognition, XOR classification, and digital-to-analog conversion, and obtain low error rates for all tasks. Our system uses readily available components and is scalable to much larger sizes and to bandwidths in excess of 20 GHz.
3D two-photon polymerization has been shown to be an enabling tool allowing dynamic and precise printing of submicrometric optical components. Here, we focus on direct laser writing for the additive fabrication of 3D photonic waveguides, which are prime candidates for integrated, ultra-fast, and parallel photonic interconnects. We present a novel approach based on 3D optical splitters leveraging adiabatic coupling, which ensures a smooth single-mode transition between input and output waveguides. This unique 3D canonical architecture represents a clear breakthrough, overcoming the long-standing challenges of parallel and scalable connections with high integration density for high-speed and energy-efficient neural-network computers.
We present a novel method for constructing quantum dot arrays using optical tweezers. By optically trapping 10 nm core-shell quantum dots, we can position them with submicron precision. The quantum dots are suspended in a resin (Nanoscribe IP-G 780), which is then polymerized locally around the trapped quantum dot, fixing its position. The process of trapping and positioning is automated using a neural network to locate both free quantum dots and quantum dots already placed in the array. The ability to locate the already positioned quantum dots is essential for achieving high precision and accuracy in the placement. Automation makes the process scalable and enables the manufacturing of large arrays. As a first step, we demonstrate the construction of a 4×4 array of quantum dots.
Analogue neural networks are promising candidates for overcoming the severe energy challenges of digital neural network processors. However, noise is an inherent part of analogue circuitry, regardless of whether electronic, optical, or electro-optical integration is the target. I will discuss fundamental aspects of noise in analogue circuits and will then introduce our analytical framework describing noise propagation in fully trained deep neural networks comprising nonlinear neurons. Most importantly, we found that noise accumulation can be very efficiently suppressed under realistic hardware conditions. As such, neural networks implemented in analogue hardware should be very robust to internal noise, which is of fundamental importance for future hardware realizations.
Maximal computing performance can only be achieved if neural networks are fully implemented in hardware. Besides the potentially large benefits, such parallel and analogue hardware platforms face new, fundamental challenges. An important concern is that such systems might ultimately succumb to the detrimental impact of noise. We study noise propagation through deep neural networks with various neuron nonlinearities, trained via back-propagation for image recognition and time-series prediction. We consider correlated and uncorrelated, multiplicative and additive noise, and use noise amplitudes extracted from a physical experiment. The developed analytical framework is of great relevance for future hardware neural networks: it allows predicting the noise level at the system's output based on the properties of its constituents. As such, it is an essential tool for future hardware neural network engineering and performance estimation.
We propose a novel implementation of autonomous photonic neural networks based on optically addressed spatial light modulators (OASLMs). In our approach, the OASLM operates as a spatially non-uniform birefringent waveplate whose retardation depends nonlinearly on the incident light intensity. We develop a complete electrical and optical model of the device and investigate the optimal operating characteristics. We study both feed-forward and recurrent neural networks and demonstrate that OASLMs are promising candidates for the implementation of autonomous photonic neural networks with large numbers of neurons and ultra-low energy consumption.
Photonic systems are candidates for next-generation neural networks, promising to boost energy efficiency and speed via optical vector-matrix multiplication. We will introduce the first scalable neural network integration strategy, demonstrating a network of 999 neurons in 0.36 mm².
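Optical vector-matrix multiplication encodes the input vector as light intensities and the weights as element-wise transmissions; since intensities and transmissions are nonnegative, signed weights are commonly realized as the difference of two nonnegative passes. A minimal sketch of that differential encoding (illustrative only; helper names are ours, and a plain Python product stands in for the optical pass):

```python
def split_signed(W):
    """Split a signed weight matrix into two nonnegative parts, W = Wp - Wm,
    as required when weights are encoded as optical transmission values."""
    Wp = [[max(w, 0.0) for w in row] for row in W]
    Wm = [[max(-w, 0.0) for w in row] for row in W]
    return Wp, Wm

def matvec(M, x):
    """Plain matrix-vector product, standing in for one optical pass."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

W = [[0.5, -0.3], [-0.2, 0.8]]
x = [1.0, 2.0]  # nonnegative input intensities

Wp, Wm = split_signed(W)
# Two nonnegative optical passes, subtracted electronically after detection.
y = [p - m for p, m in zip(matvec(Wp, x), matvec(Wm, x))]
print(y)
```

The differential result equals the signed product W·x, here (−0.1, 1.4), at the cost of two nonnegative passes per multiplication.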