Mixed reality systems that integrate the real and digital worlds have gained popularity in recent years, and their development in the form of wearable devices has become a significant trend. Key challenges in designing wearable mixed reality devices include reducing power consumption, increasing processing performance, and minimizing device size. To address these challenges, neural processors, specialized hardware accelerators for convolutional neural network (CNN) processing, are being incorporated into mixed reality systems, since most environmental analysis and visualization tasks in mixed reality rely on CNNs. These energy-efficient, high-performance accelerators make a wearable device more comfortable to use by processing environmental data at high speed on compact hardware.
The paper is devoted to the design of the microarchitecture of a neural processor for hardware acceleration of CNN processing, based on the authors' processor architecture. The paper presents various microarchitectural solutions that can be used to accelerate CNN processing, and explores methods to optimize hardware resources and reduce the time required for CNN processing. To achieve high throughput in pipelined computation, different algorithms for computing convolutions on a systolic array are examined. Based on the results of this research, we provide estimates of the characteristics of a neural processor with the proposed microarchitecture.
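As a rough illustration of how convolution can be mapped onto a systolic array for pipelined computation, the following Python sketch lowers a convolution to a matrix product via im2col and models an output-stationary dataflow in which each processing element accumulates one output value while operands arrive with a one-cycle skew per row and column. This is a minimal sketch under assumed conventions; the function names and the choice of output-stationary dataflow are illustrative and not the specific microarchitecture proposed in the paper.

```python
import numpy as np

def conv2d_via_systolic(x, w):
    """Lower a valid 2D convolution (CNN-style cross-correlation) to a
    matrix product via im2col, then evaluate it on the systolic model."""
    ic, ih, iw = x.shape                       # input: channels x height x width
    oc, _, kh, kw = w.shape                    # filters: out_ch x in_ch x kh x kw
    oh, ow = ih - kh + 1, iw - kw + 1

    # im2col: each output pixel becomes one column of unrolled input values.
    cols = np.zeros((ic * kh * kw, oh * ow))
    for oy in range(oh):
        for ox in range(ow):
            cols[:, oy * ow + ox] = x[:, oy:oy + kh, ox:ox + kw].ravel()
    wmat = w.reshape(oc, ic * kh * kw)

    return systolic_matmul(wmat, cols).reshape(oc, oh, ow)

def systolic_matmul(a, b):
    """Output-stationary systolic array model: PE (r, c) keeps the partial
    sum for out[r, c]; at cycle t it consumes the term with index k = t - r - c,
    i.e. operands arrive skewed by one cycle per row and per column."""
    rows, depth = a.shape
    _, cols = b.shape
    acc = np.zeros((rows, cols))
    total_cycles = rows + cols + depth - 2
    for t in range(total_cycles):
        for r in range(rows):
            for c in range(cols):
                k = t - r - c
                if 0 <= k < depth:
                    acc[r, c] += a[r, k] * b[k, c]
    return acc

# Quick self-check of the dataflow model against a plain matrix product.
a, b = np.random.rand(4, 9), np.random.rand(9, 16)
assert np.allclose(systolic_matmul(a, b), a @ b)
```

The cycle count rows + cols + depth - 2 reflects the fill and drain phases of the pipeline; once the array is full, one column of outputs completes per cycle, which is the throughput property such dataflows are chosen for.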
The current work introduces a neural processor architecture that enables hardware acceleration of convolutional neural network (CNN) processing. The purpose of this study is to design the architecture and microarchitecture of a neural processor that can be used for environmental analysis and recognition of objects in the scene in augmented reality systems implemented as energy-efficient, compact wearable devices. The proposed architecture allows the variable parameters of the data-processing and data-storage blocks to be adjusted in order to optimize the performance, energy consumption, and hardware resources required to implement the neural processor. The article offers variants for scaling the computing and memory blocks in the neural processor architecture, which can be used to increase the performance of the end product. The paper also describes a tool that generates a neural processor from given constraints on power consumption and performance and from the structure of the convolutional neural networks to be used for data processing. The proposed tool can become a valuable product in the field of designing hardware accelerators for convolutional neural networks, as it increases the degree of automation in the synthesis of neural processors for subsequent deployment in mixed reality systems built as portable devices. Overall, this work presents tools that allow a developer of CNN-based software for mixed reality systems to synthesize energy-efficient processors that accelerate CNN processing.
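To make the idea of constraint-driven generation more concrete, the sketch below shows what such a tool's inputs and outputs might look like: the target CNN layer shapes plus power and throughput budgets go in, and a sizing of the compute and memory blocks comes out. Every name, unit, and sizing rule here is an assumption of this sketch, not the actual interface or algorithm of the tool described in the paper.

```python
from dataclasses import dataclass

@dataclass
class GeneratorConstraints:
    """Hypothetical generator inputs: power/throughput budgets and the CNN
    layers to be run, each as (out_ch, in_ch, kh, kw, out_h, out_w)."""
    power_budget_mw: float
    target_fps: float
    layers: list

@dataclass
class ProcessorConfig:
    """Hypothetical generator output: sizing of compute and memory blocks."""
    pe_rows: int
    pe_cols: int
    weight_buffer_kib: int
    activation_buffer_kib: int

def size_processor(c: GeneratorConstraints,
                   macs_per_pe_per_sec: float = 500e6,
                   mw_per_pe: float = 0.5) -> ProcessorConfig:
    """Toy sizing heuristic: derive the number of MAC units from the required
    MAC rate, clip it to the power budget, and arrange a near-square array."""
    macs_per_frame = sum(oc * ic * kh * kw * oh * ow
                         for oc, ic, kh, kw, oh, ow in c.layers)
    required_rate = macs_per_frame * c.target_fps
    n_pe = max(1, int(required_rate / macs_per_pe_per_sec))
    n_pe = min(n_pe, max(1, int(c.power_budget_mw / mw_per_pe)))
    rows = max(1, int(n_pe ** 0.5))
    cols = max(1, n_pe // rows)
    # Weight buffer sized for the largest layer's filters (8-bit weights assumed).
    largest_weights = max(oc * ic * kh * kw for oc, ic, kh, kw, _, _ in c.layers)
    return ProcessorConfig(pe_rows=rows, pe_cols=cols,
                           weight_buffer_kib=max(1, largest_weights // 1024),
                           activation_buffer_kib=64)

# Example invocation with made-up layer shapes and budgets.
cfg = size_processor(GeneratorConstraints(
    power_budget_mw=250.0, target_fps=30.0,
    layers=[(32, 3, 3, 3, 112, 112), (64, 32, 3, 3, 56, 56)]))
print(cfg)
```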