ABSTRACT
Atmospheric turbulence aberrates the wavefronts of astronomical objects observed with telescopes from Earth, so correcting these effects with Adaptive Optics is necessary. Such corrections are classically performed with reconstruction algorithms; among them, neural networks have shown good results. Adaptive Optics for solar observation differs from nocturnal operation, which poses a particular challenge for correcting image aberrations. In this work, a convolutional approach is proposed to address this issue under SCAO configurations. A reconstruction algorithm, "Shack-Hartmann reconstruction with deep learning on solar-prototype" (proto-HELIOS), is presented to perform correction on fixed solar images, achieving an average reconstruction precision of 85.39%. These results encourage further work with these techniques towards a reconstruction technique valid for all regions of the Sun.
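The abstract does not detail proto-HELIOS's architecture, but the core operation of any convolutional reconstructor is sliding a learned kernel over the Shack-Hartmann slope maps to build feature maps. A minimal sketch of that operation, in pure Python with hypothetical sizes (an 8x8 subaperture grid and a 3x3 averaging kernel, both illustrative assumptions):

```python
# Toy sketch, NOT the paper's proto-HELIOS network: a single "valid"
# 3x3 convolution applied to a Shack-Hartmann slope map, the building
# block a convolutional reconstructor stacks to predict the wavefront.

def conv2d(image, kernel):
    """Valid 2D convolution of a square slope map with a square kernel."""
    n, k = len(image), len(kernel)
    out = n - k + 1
    result = [[0.0] * out for _ in range(out)]
    for i in range(out):
        for j in range(out):
            result[i][j] = sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(k) for v in range(k)
            )
    return result

# Hypothetical 8x8 map of x-slopes from an 8x8-subaperture sensor,
# filtered with a 3x3 averaging kernel as a first feature map.
slopes = [[float((i + j) % 3) for j in range(8)] for i in range(8)]
kernel = [[1.0 / 9.0] * 3 for _ in range(3)]
features = conv2d(slopes, kernel)
print(len(features), len(features[0]))  # 6 6
```

In a trained network the kernel weights are learned from data rather than fixed, and many such layers (with nonlinearities) map the slope maps to wavefront coefficients.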
ABSTRACT
Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor is then detailed, along with the three neural network frameworks used and the CUDA code developed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.
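The abstract does not give CARMEN's exact topology, but a fully connected tomographic reconstructor reduces at execution time to dense matrix-vector products per layer — precisely the workload the paper's CUDA code and the neural network frameworks accelerate. A minimal sketch of that forward pass, in pure Python with hypothetical layer sizes (16 wavefront-sensor slope inputs, 8 hidden units, 4 outputs — all illustrative assumptions):

```python
# Illustrative sketch only, not CARMEN's actual implementation:
# a dense-network forward pass mapping WFS slope measurements to a
# reconstructed correction, one matrix-vector product per layer.
import math

def forward(weights, biases, x):
    """Forward pass of a dense network: slopes in, correction out."""
    for layer, (w, b) in enumerate(zip(weights, biases)):
        # matrix-vector product plus bias for this layer
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        if layer < len(weights) - 1:       # hidden layers only
            x = [math.tanh(v) for v in x]  # a common hidden activation
    return x

# Hypothetical sizes: 16 slope inputs -> 8 hidden -> 4 outputs
# (e.g. a few low-order modal coefficients).
w1 = [[0.1] * 16 for _ in range(8)]
w2 = [[0.1] * 8 for _ in range(4)]
b1, b2 = [0.0] * 8, [0.0] * 4
out = forward([w1, w2], [b1, b2], [0.5] * 16)
print(len(out))  # 4
```

Scaling the reconstructor means growing these weight matrices with the number of subapertures, which is why the paper finds that reconstructor size drives both training and execution time, and why a tuned native CUDA matrix-product kernel can outperform generic frameworks.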