Paper (27 July 2016)
Demonstration of the suitability of GPUs for AO real-time control at ELT scales
Abstract
We have implemented the full AO data-processing pipeline on Graphics Processing Units (GPUs) within the framework of the Durham AO Real-time Controller (DARC). The wavefront sensor images are copied from CPU memory to GPU memory, the GPU processes the data, and the DM commands are copied back to the CPU. For an SCAO system of 80x80 subapertures, the rate achieved on a single GPU is about 700 frames per second (fps); this increases to 1100 fps with two GPUs and to 1565 fps with four. Jitter exhibits a distribution with a root-mean-square value of 20 μs–30 μs and a negligible number of outliers. The latency added by copying the pixel data from the CPU to the GPU has been minimized by copying the data in parallel with processing them. An alternative solution, in which the data would be moved from the camera directly to the GPU without CPU involvement, could be about 10%–20% faster. We have also implemented the correlation centroiding algorithm, which, when used, reduces the frame rate by about a factor of 2–3.
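The correlation centroiding mentioned above cross-correlates each subaperture image with a reference spot and locates the correlation peak, at the cost of several FFTs per subaperture; that extra FFT work is consistent with the reported 2–3x drop in frame rate relative to a plain centre-of-gravity. The following is a minimal illustrative sketch of the idea (not DARC's actual implementation); the spot model and grid size are made up for the demonstration.

```python
import numpy as np

def make_spot(n, cy, cx, sigma=1.5):
    """Synthetic wavefront-sensor spot: a 2-D Gaussian on an n x n grid
    (hypothetical test data, not a real sensor frame)."""
    y, x = np.indices((n, n))
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def correlation_centroid(img, ref):
    """Estimate the spot shift by FFT-based circular cross-correlation
    with a reference image, then take the correlation peak location.

    The FFTs per subaperture are the extra cost that makes correlation
    centroiding slower than a plain centre-of-gravity centroid.
    """
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    corr = np.fft.fftshift(corr)              # put zero shift at the centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre            # (dy, dx) shift in pixels

n = 16
ref = make_spot(n, n // 2, n // 2)            # reference spot at the centre
img = np.roll(ref, (2, 1), axis=(0, 1))       # same spot shifted by dy=2, dx=1
print(correlation_centroid(img, ref))         # -> [2 1]
```

In a real-time system the peak position would additionally be refined to sub-pixel accuracy (e.g. by a centre-of-gravity over a small window around the peak), but the integer-pixel version above already shows where the FFT cost comes from.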
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Urban Bitenc, Alastair G. Basden, Nigel A. Dipper, and Richard M. Myers "Demonstration of the suitability of GPUs for AO real-time control at ELT scales", Proc. SPIE 9909, Adaptive Optics Systems V, 99094S (27 July 2016); https://doi.org/10.1117/12.2234273
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Adaptive optics, Cameras, Real-time computing, Data processing, Wavefront sensors, Reconstruction algorithms, Calibration