The Canada-France-Hawai‘i Telescope, operational since 1979, currently has five scientific instruments ranging in age from a few years to decades old, and it remains highly productive today. At this world-class facility, computing systems were built and software was developed to support some of the first and largest mosaic CCD cameras, control the telescope, transition from classical observing to queue-scheduled observing, and allow the telescope to be controlled remotely. This involved many choices of computing platforms and programming languages, and significant open-source software development. Software tools and computing infrastructure have been continually adapted, purchased, built in house, and maintained. These “life cycles” are not easy to predict at their start. A retrospective analysis of how they have played out over more than 40 years can inform future projects at CFHT and in astronomy in general. We detail the major decision points and speculate how outcomes would have differed had we taken alternative paths. We discuss a rationale for making software choices in future projects.
The SKA Observatory is an international organization whose mandate is to build and operate two multi-purpose radio telescope arrays. The SKA Low Frequency Telescope array, located at Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory in Western Australia, with an observing range of 50–350 MHz, will consist of 131,072 log-periodic antennas organized into 512 stations; the maximum distance between two stations is 65 kilometers. The SKA Mid Frequency Telescope array, located in the Karoo region of South Africa, with an observing range of 350 MHz–15 GHz, will comprise 197 offset-Gregorian dishes. The SKA Mid Telescope dishes are 15 meters in diameter, and the maximum distance between two dishes is 150 kilometers. The SKA Global Headquarters is at the Jodrell Bank Observatory, near Manchester, UK. The construction of the SKA Telescopes is under way, including the development of the Telescope Control System. The SKA Observatory, and each of the telescopes, will be delivered in stages, supporting incremental development of the collecting area, signal and data processing capacity, and the observing and processing modes. Much of the Control System functionality will be required early in construction to support integration and verification of other systems. This paper provides an overview of the SKA Telescope Control System, including the design patterns and technology choices, summarizes what has been achieved so far, and offers reflections on lessons learned to date.
The European Solar Telescope (EST) aims to become the most ambitious ground-based solar telescope in Europe. This paper summarizes the planned architecture, the software practices currently adopted for the development environment, and future lines of work. EST combines proven software from existing telescopes, where it suits the telescope requirements, with newly developed systems, CI/CD practices, and agile methodologies, among others.
The Extremely Large Telescope (ELT) is a 39-meter optical telescope under construction in the Chilean Atacama Desert. The control software is in advanced development and the system is slowly taking shape for first light in 2028. ESO is directly responsible for coordination functions and control strategies requiring astronomical domain knowledge, while industrial contractors are developing the low-level control of individual subsystems. We are now implementing the coordination recipes and integrating the local control systems being delivered by contractors. System tests are performed in the ELT Control Model in Garching while waiting for the availability of individual subsystems at the telescope. This paper describes the status of development of individual subsystems, of the high-level coordination software, and of the system integration on the ELT Control Model (ECM), focusing on testing and integration challenges.
The Caltech Submillimeter Observatory (CSO) telescope on Mauna Kea, Hawaii, USA, has been disassembled and will be transported to the Chajnantor Plateau, Chile, by the end of 2024, where it will be renamed the Leighton Chajnantor Telescope (LCT). LCT’s scientific objectives include multiwavelength imaging and spectroscopic surveys of the galactic plane, magnetic fields in star formation via dust polarimetry, and more. To achieve these objectives, the instrumentation and control systems of LCT need to be upgraded so that its technical performance improves significantly (e.g., 12 μm surface accuracy, 1 arcsec pointing accuracy, 10⁻¹⁷ W/Hz^(1/2) detection sensitivity). This necessitates a comprehensive reconstruction of the computer, software, and data systems of LCT. In this work, we propose a four-phase reconstruction scheme. The first phase is dedicated to the design of the observation data management system, the software system, and the computer and network system. The second phase focuses on infrastructure reconstruction, which involves evaluating the computing environment, unifying the operating system across all computers, and updating and developing the user interface program. The third phase focuses on application systems reconstruction to achieve higher accuracy, faster rotation speed, and better disturbance rejection. The final phase covers experiments with, and implementation of, the developed systems on LCT to discover and correct deficiencies. This work will provide a fundamental framework for upgrading LCT’s control systems, thereby enhancing its scientific capabilities.
The Dragonfly Spectral Line Mapper (DSLM) is a semi-autonomous, distributed-aperture telescope design, featuring a modular setup of 120 Canon telephoto lenses and equal numbers of ultra-narrowband filters, detectors, and other peripherals. Here we introduce the observatory software stack for this highly distributed system. Its core is the Dragonfly Communication Protocol (DCP), a pure-Python framework for standardized hardware interaction. On top of this are 120 RESTful FastAPI web servers, hosted on Raspberry Pis attached to each unit, which translate commands to the hardware and provide diagnostic feedback to a central control system running the global instrument control software. We discuss key features of this software suite, including Docker containerization for environment management, class composition as a flexible framework for array commands, and a state machine algorithm that controls the telescope during autonomous observations.
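As a rough illustration of the per-unit layer described above, the following sketch shows what a small FastAPI server for one lens unit could look like; the endpoint names, request model, and hardware hooks are hypothetical and stand in for the actual DCP calls.

    # Minimal sketch of a per-unit control server in the style described above.
    # Endpoint names and hardware-facing calls are hypothetical, not the real
    # Dragonfly Communication Protocol (DCP) API.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="dragonfly-unit")

    class ExposureRequest(BaseModel):
        exptime_s: float
        filter_angle_deg: float = 0.0

    @app.get("/status")
    def status() -> dict:
        # In the real system this would query the attached camera and filter
        # tilter through the DCP layer; here we return a stub.
        return {"camera": "idle", "filter_angle_deg": 0.0}

    @app.post("/expose")
    def expose(req: ExposureRequest) -> dict:
        # Translate the REST call into hardware commands (stubbed here).
        return {"accepted": True, "exptime_s": req.exptime_s}

A central control process can then fan such requests out to all 120 units and aggregate the responses.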
The US Naval Observatory (USNO) operates a wide range of telescopes, from historic meter-class telescopes through modern portable commercial-off-the-shelf (COTS) systems. As the number and variety of systems increase, maintaining separate control software for each telescope becomes impractical. We have implemented Telescope Control Software (TCS) in Python that allows us to quickly and safely unify interactions with, and outputs from, a large variety of telescopes. We discuss the design and operation of this TCS, including examples of how its flexible design enables similar control of vastly different system architectures and observational goals.
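One common way to achieve this kind of unification is a driver-interface pattern: the observing logic is written once against an abstract mount interface, and a thin adapter is implemented per telescope type. The sketch below illustrates the idea; the class and method names are illustrative assumptions, not the actual USNO TCS API.

    # Illustrative sketch of unifying heterogeneous mounts behind one interface.
    # Class and method names are hypothetical, not the USNO TCS API.
    from abc import ABC, abstractmethod

    class MountDriver(ABC):
        """Hardware-specific adapter, implemented once per telescope type."""

        @abstractmethod
        def slew(self, ra_deg: float, dec_deg: float) -> None: ...

        @abstractmethod
        def park(self) -> None: ...

    class CotsMount(MountDriver):
        def slew(self, ra_deg, dec_deg):
            print(f"COTS mount: slewing to {ra_deg=}, {dec_deg=}")

        def park(self):
            print("COTS mount: parking")

    class LegacyMeterClassMount(MountDriver):
        def slew(self, ra_deg, dec_deg):
            print(f"legacy mount: serial slew command {ra_deg=}, {dec_deg=}")

        def park(self):
            print("legacy mount: stowing")

    def observe(mount: MountDriver, targets):
        # Scheduling and observing logic is written once against the interface.
        for ra, dec in targets:
            mount.slew(ra, dec)
        mount.park()

    observe(CotsMount(), [(10.0, -5.0)])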
The Atacama Pathfinder EXperiment (APEX) operates a 12 m telescope in Chile at 5107 m above sea level in the Andes mountains. Given this isolated location, APEX was developed as a telescope for remote operations, ensuring that it could be managed effectively despite the challenging environment. To guarantee optimal operation in terms of science and engineering, several developments were deployed for monitoring and network systems. In this paper, we describe the key properties that have enabled APEX to operate successfully in remote mode over the last seven years and detail the significant experience gained during this period. We describe the reliable network setup implemented at APEX and the monitoring system developed for safety and remote operation, which incorporates a redundant database in a master/slave topology at both the control and high sites. Additionally, we explain the design of the control room infrastructure that allows for continuous monitoring, alerting operators to failures through trend analysis for prompt response. We provide a perspective on how the successful implementation, lessons learned, and expertise gathered with APEX as a remote telescope could serve as a pathfinder for the future of remote operations at the VLT and ELT, in the framework of the Integrated Operations Programme (IOP), whose goal is to integrate the VLT and ELT into a single observatory. Effective remote management under the IOP framework promises not only enhanced efficiency and safety but also a cohesive operational synergy. To this end we provide a comparison of the remote-controlled systems at the VLT and APEX, evaluating their strengths and weaknesses, which can provide guidance for shaping the future remote operations of the ELT and VLT.
The SKA Observatory, currently in the construction phase, will comprise two of the world’s largest radio telescopes when completed in 2028. The scale of the project introduces unique challenges for the telescope software design and implementation at all levels, from user-facing software down to the low-level control of individual telescope elements. The Observation Execution Tool (OET) is part of the Observation Science Operations (OSO) suite of applications and is responsible for orchestrating the highest level of telescope control through the execution of telescope control scripts. One of the main challenges for the OET is creating a design that can robustly run concurrent observations on multiple subarrays while remaining responsive to the user. The Scaled Agile Framework (SAFe) development process followed by the SKA project also means the software should allow for iterative implementation and easily accommodate new and changing requirements. This paper concentrates on the design decisions and challenges in the development of the OET, how we have solved some of the specific technical problems, and how we remain flexible for future requirements.
The “Gran Telescopio de Canarias” (GTC) is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). It started its scientific operational phase in 2009. The GTC Control System (GCS) is a distributed object- and component-oriented system. In the first developments, the different motion control and sensor systems were quite coupled to the technologies used for their implementation: PMAC controllers, the CANopen protocol, the Modbus protocol, specific drives such as the Kollmorgen Servostar S700 or Technosoft IDM680, the ESD CAN library, etc. This work is the result of the evolution of those developments, carried out in the field rotation, acquisition, and guiding systems of the focal stations updated in recent years. All the knowledge acquired in this context has been reflected in a new design that generalizes concepts such as mechanism, axis, actuator, bus protocol, and access library, in such a way that the components of the coordination package are completely decoupled from the technology used for the effective control of movement. On the one hand, this design provides simplicity, since creating a new device is reduced to specifying a configuration. On the other hand, it provides stability to the system, focusing the points of variability on the simple extension to new technologies that will be incorporated into the system, such as EtherCAT. One objective has been to reduce development costs, going from an effort of more than a year to just a few weeks; in this way we accelerate the transfer to operations of the functionalities we continue to incorporate into the telescope and can export this solution to any other telescope. Examples of this are the future New Robotic Telescope (NRT) and the European Solar Telescope (EST), which will benefit from this new framework and the rest of our system, since they will use the GCS in their developments. Another objective has been to increase the quality of the system: because this framework constitutes a common knowledge base, improvements are cumulative and divergences that would prevent future changes from being made with little effort are avoided.
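The configuration-driven idea described above can be sketched as follows; the axis classes and configuration keys are illustrative only and do not reflect the actual GCS classes.

    # Sketch of decoupling coordination code from the motion-control technology:
    # a new device is described by configuration rather than new code.
    # Names (CanOpenAxis, EthercatAxis, config keys) are illustrative only.
    from typing import Protocol

    class Axis(Protocol):
        def move_to(self, position: float) -> None: ...

    class CanOpenAxis:
        def __init__(self, node_id: int):
            self.node_id = node_id
        def move_to(self, position: float) -> None:
            print(f"CANopen node {self.node_id}: move to {position}")

    class EthercatAxis:
        def __init__(self, slave: int):
            self.slave = slave
        def move_to(self, position: float) -> None:
            print(f"EtherCAT slave {self.slave}: move to {position}")

    AXIS_TYPES = {"canopen": CanOpenAxis, "ethercat": EthercatAxis}

    def build_mechanism(config: dict) -> dict:
        # Coordination code only sees Axis objects, never the bus technology.
        return {name: AXIS_TYPES[spec["type"]](**spec["params"])
                for name, spec in config.items()}

    rotator = build_mechanism({"rotation": {"type": "ethercat", "params": {"slave": 3}}})
    rotator["rotation"].move_to(12.5)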
METIS, the Mid-infrared ELT Imager and Spectrograph, will be one of the first-generation ELT instruments. Its Instrument Control System (ICS) allows the instrument to operate in various observing modes, using a multitude of mostly cryogenic mechanisms such as filter wheels, linear stages (e.g. to move mirrors and masks), a derotator, piezo mechanisms for high-speed or high-accuracy displacements (e.g. for pupil stabilization, or for adaptive optics field selection and modulation), a chopper, and so on. Thermal and vacuum control of the cryostat, on the other hand, is handled by a dedicated PLC-based system and is not described in this paper. The ICS is built using the ESO ELT instrument framework, which provides the basic building blocks to control the mechanisms, along with the interface to the telescope and its services, from the low-level pointing and tracking system and the real-time Single Conjugate Adaptive Optics (SCAO) system to the high-level sequencer-based observing system. In this paper we provide an overview of the ICS electronics, the low-level software running on a Beckhoff PLC, and the high-level software running on a Linux workstation. As a detailed description of the entire system is out of the scope of this paper, we focus instead on the general design, implementation, and testing principles. We show how a fast real-time network (EtherCAT) and off-the-shelf industrial I/O, together with the services provided by the ELT instrument framework, can meet the requirements of ELT instruments, and how they offer an elegant solution to technically demanding problems such as the high-speed synchronization between a chopper and a detector controller. Finally, we demonstrate how a modular electronics design, a flexible software architecture, and a strong focus on simulation can alleviate some of the organizational challenges of building, integrating, and testing the ICS of a complex instrument whose subsystems are developed by institutes in different locations.
The VLT at Paranal Observatory has been in operation for over two decades, and soon the ELT will be managed by the same operational team. Maintaining operational efficiency and minimizing downtime with limited resources will be crucial. Previous research has shown that software logs effectively capture the telescopes’ behavior, providing valuable operational insights. We have integrated various log analysis techniques from the academic literature and industry best practices. These techniques allow engineers to monitor system health, analyze error sequences, detect anomalies, and reconstruct processes, which improves maintenance and yields new insights. Additionally, we have utilized generative artificial intelligence and NLP transformer-based models to infer observation behavior and predict execution failures. We have taken advantage of both the Paranal Datalab on-premises facility and the Azure cloud. In this work, we provide technical details and outline the key challenges and opportunities in adopting these techniques within an astronomy facility.
The Square Kilometre Array (SKA) is a groundbreaking radio telescope project aiming to construct the two biggest radio telescopes, one in Australia and one in South Africa. They will have a larger collecting area and higher sky resolution than existing radio telescopes, and they will handle an unprecedented amount of data flowing between computing facilities. The functionality of these telescopes depends heavily on the quality of the operating software. The project’s magnitude and complexity require effective testing processes capable of preemptively identifying and addressing potential bugs and errors. In this context, a simple regression testing strategy is not enough. In the first years of SKA construction, we noticed that tests which typically pass may occasionally fail. Collecting and analyzing test results over extended time periods can help in understanding the origin of such failures and in finding solutions that address them; it would be a significant step forward in improving the reliability of SKA software. Data mining is the process of discovering patterns, trends, correlations, or useful information from large sets of data. It can be applied to a large set of test results concerning the operations of a specific SKA software component, i.e. the Local Monitoring and Control of the Central Signal Processor (CSP.LMC). The CSP.LMC is tested with a multilevel strategy, spanning from unit to system tests, which can be performed in different environments. In this paper we analyze the strengths of this approach, describe some of the pitfalls in implementing it, and discuss the possibility of applying it to other SKA software components.
The upcoming NASA Pandora Mission, scheduled for launch in 2025, will obtain exoplanet transmission spectra and stellar activity information to better characterize and correct for the spectral contamination of transmission spectra by the host star. Pandora will obtain at least ten wavelength-resolved transits each of 20 unique exoplanets, with 24 hours of stellar baseline per transit. This will provide the vital context needed to disentangle stellar contamination from exoplanet transmission spectroscopy around cool stars and to understand the impact of star spots on retrieved atmospheric properties. Pandora will be equipped with i) a visible detector, providing time-series photometry at 550 nm, and ii) a near-infrared detector, providing R=30 spectra from 0.9 to 1.6 microns with at least 150 ppm precision at J=9. We have developed an open-source simulator of Pandora data to assist in the development of a) the Pandora concept of operations, b) the Pandora Science Pipeline, and c) science analysis software to retrieve transmission spectra from Pandora data. In particular, we describe how we use the scipy.sparse Python submodule to create memory-efficient simulations. This software is both fast and efficient, enabling various operating scenarios to be simulated. Our simulator tool (v1.0) is available as open-source software, and much of the infrastructure can be generalized to other missions with detectors or specifications similar to Pandora's.
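The memory argument for sparse frames can be illustrated with a toy example: only the pixels that actually receive flux are stored. The numbers below are illustrative and unrelated to the real Pandora detector model.

    # Toy illustration of why sparse matrices save memory when only a small
    # fraction of detector pixels are illuminated.  Numbers are illustrative.
    import numpy as np
    from scipy import sparse

    ny, nx = 2048, 2048                      # full-frame detector
    rows = np.random.randint(0, ny, 5000)    # ~5000 illuminated pixels
    cols = np.random.randint(0, nx, 5000)
    flux = np.random.rand(5000)

    dense = np.zeros((ny, nx))
    dense[rows, cols] = flux

    sparse_frame = sparse.coo_matrix((flux, (rows, cols)), shape=(ny, nx)).tocsr()

    print("dense  :", dense.nbytes / 1e6, "MB")
    print("sparse :", (sparse_frame.data.nbytes
                       + sparse_frame.indices.nbytes
                       + sparse_frame.indptr.nbytes) / 1e6, "MB")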
The New Robotic Telescope (NRT) is a fully autonomous robotic four-meter-class telescope located at the Roque de los Muchachos Observatory (ORM) on La Palma, Canary Islands, Spain. Its autonomous nature requires a robust, fault-tolerant, real-time control system. This is achieved by using proven industrial Beckhoff PLCs and an EtherCAT data bus for real-time operation. The EtherCAT bus links all pieces of PLC hardware together, which drastically cuts down on the number of control cables running through rotators and cable wraps, while a ring topology provides cable redundancy and increases reliability. The PLC code is developed using a unit-testing framework, which lowers the risk of breaking expensive hardware during code changes and allows extra functionality to be added easily. This approach allows new hardware to be added easily and old hardware to be swapped out later for newer models, lowering maintenance costs. The PLCs are controlled by a Kubernetes cluster using the OPC UA protocol. The telescope functional safety will be tightly integrated with Beckhoff TwinSAFE, allowing complete telemetry all the way up the software stack.
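As an indication of how a supervisory layer can talk to such PLCs, the sketch below reads a single value over OPC UA with the asyncua Python library; the endpoint URL and node identifier are placeholders, not the NRT's real ones.

    # Minimal sketch of reading PLC telemetry over OPC UA (asyncua library).
    # Endpoint URL and node identifier are placeholders, not NRT's real ones.
    import asyncio
    from asyncua import Client

    async def read_azimuth():
        url = "opc.tcp://plc.example.org:4840"                            # placeholder
        async with Client(url=url) as client:
            node = client.get_node("ns=4;s=MAIN.Azimuth.ActualPosition")  # placeholder
            value = await node.read_value()
            print("azimuth [deg]:", value)

    asyncio.run(read_azimuth())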
The Cherenkov Telescope Array Observatory (CTAO) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTAO will be constructed on two sites, one array in the Northern and the other in the Southern hemisphere, containing telescopes of three different sizes to cover different energy domains. To combine and orchestrate the different telescopes and auxiliary instruments (array elements), the Array Control and Data Acquisition (ACADA) system is the central element for the Observatory on-site operations: it controls, supervises, and handles the data generated by the array elements. Considering the criticality of the ACADA system for future Observatory operations, corresponding quality assurance provisions have been made at the different steps of the software development lifecycle, with a focus on continuous integration and testing at all levels. To enable higher-level tests of the software deployed on a distributed system, an ACADA test cluster has been set up to facilitate testing and debugging of issues in a more realistic environment. Furthermore, a separate software integration and test cluster has also been established that allows for the off-site testing of the integrated software packages of ACADA and of the corresponding array elements. Here the software integration can be prepared, interfaces and interactions can be tested, and on-site procedures that are required later in the process can be checked beforehand, limited only by the simulation capabilities delivered as part of the software packages. Once preparations and testing with the off-site test cluster are completed, the integrated software can be deployed at the target site. The software packages and setup parameters are kept under configuration control at all stages, and deployment steps are documented to ensure that installations are reproducible. This methodology was applied for the first time in the context of the integration of ACADA with the first CTAO Large-Sized Telescope (LST-1) in October 2023.
The European Southern Observatory (ESO) has made considerable progress in the implementation of a new software framework, the Instrument Control System Framework (IFW), tailored to facilitate the development of upcoming astronomical instruments at the Extremely Large Telescope (ELT). This framework offers a complete, scalable, and adaptable infrastructure to support the diverse needs of instrument control. The framework's architecture is strongly based on ESO's extensive experience in operating and maintaining VLT instruments while integrating the technological innovations specified by the ELT project. It presents a unified approach to instrument control, fostering the coordination of heterogeneous instrument subsystems and tasks, ranging from the control of instrument hardware functions and data visualization to the execution of science observations and instrument calibrations. The framework is primarily targeted at instrument developers from ESO partner institutes who are currently working on the first-generation ELT instruments. In 2019, ESO extended the framework's application to all new instruments on its optical telescopes. This strategy aims to reduce maintenance costs and promote ELT-VLT integrated operations, embracing future VLT instruments. The framework is being developed following the ELT Development Process, a Scrum-like process supported by the tools Jenkins, GitLab, and Jira. This paper provides an overview of the design principles and key features, as well as details of the development process and the main technologies employed in its construction.
Due to advances in observation and imaging technologies, modern astronomical satellites generate large volumes of data, necessitating efficient onboard data processing and high-speed data downlink. Reflecting this trend is the Visible Extragalactic background RadiaTion Exploration by CubeSat (VERTECS) 6U astronomical nanosatellite. Designed for the observation of the Extragalactic Background Light (EBL), this mission is expected to generate a substantial amount of image data, particularly within the confines of CubeSat capabilities. This paper introduces the VERTECS Camera Control Board (CCB), an open-source payload interface board leveraging Commercial Off-The-Shelf (COTS) components, with a Raspberry Pi Compute Module 4 at its core. The VERTECS CCB hardware and software have been designed from the ground up to serve as the sole interface between the VERTECS bus system and the astronomical imaging payload, while providing compute capability not usually seen in nanosatellites of this class. Responsible for mission data processing, it will facilitate high-speed data transfer from the imaging payload via gigabit Ethernet, while also providing a high-bitrate serial connection to the payload X-band transmitter for mission data downlink. Additional interfaces for secondary payloads are provided via USB-C and standard 15-pin camera connectors. The Raspberry Pi embedded within the VERTECS CCB runs a standard Linux distribution, streamlining the software development process. Beyond addressing the current mission’s payload control and data handling requirements, the CCB sets the stage for future missions with heightened data demands. Furthermore, it supports the adoption of machine learning and other compute-intensive applications in orbit. This paper delves into the development of the VERTECS CCB, offering insights into the design and validation of this next-generation payload interface to ensure that it can survive the rigors of space flight.
The Camera Control System (CCS) of the Wide Field Survey Telescope (WFST) serves as the core imaging module and is a complex distributed system composed of multiple devices. Building upon the Remote Autonomous Control System 2nd (RACS2), we propose the RACS2-CCS framework, characterized mainly by its event-driven nature. The design incorporates basic control functions, a component manager mechanism, a file management mechanism, and a site interface component mechanism. The RACS2-CCS system can efficiently organize complex control processes, monitor system status, manage data files, and facilitate interactions with other systems, such as the Observatory Control System (OCS) and the Telescope Control System (TCS). The system is in practical use within the WFST CCS.
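A minimal sketch of the event-driven style described above is given below; the event names and handlers are hypothetical and do not correspond to the actual RACS2-CCS interfaces.

    # Generic sketch of an event-driven component interaction in the spirit of
    # the framework described above; event names and handlers are hypothetical.
    from collections import defaultdict
    from typing import Callable

    class EventBus:
        def __init__(self):
            self._handlers = defaultdict(list)

        def subscribe(self, event: str, handler: Callable) -> None:
            self._handlers[event].append(handler)

        def publish(self, event: str, payload: dict) -> None:
            for handler in self._handlers[event]:
                handler(payload)

    bus = EventBus()
    bus.subscribe("exposure_done", lambda p: print("archive file", p["path"]))
    bus.subscribe("exposure_done", lambda p: print("notify OCS, frame", p["frame_id"]))
    bus.publish("exposure_done", {"path": "/data/wfst_0001.fits", "frame_id": 1})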
The 4-meter Multi-Object Spectroscopic Telescope (4MOST) instrument uses 2436 individually positioned optical fibres to couple the light of targets into its spectrographs. The Fibre Target Alignment (FTA) software controls all aspects of the 4MOST instrument involved in positioning the 2448 spines of the AESOP positioner to within 10 µm RMS of their target locations, within 90 seconds. The AESOP fibre positioner provides an HTML interface which the FTA software uses to command spine movements. The metrology system consists of four cameras and a sophisticated software package that measures the locations of the fibres moved by the AESOP spines. Spines typically reach their targets after six to eight iterative movements, which are interlaced with metrology frames. The metrology software is capable of taking four images simultaneously and reconstructing fibre positions to within 3 μm RMS within five seconds. We present the FTA control software architecture, the interaction of its sub-components, and the different operation modes of the system, in particular the concurrent and simultaneous control of the four metrology camera processes. Because of the complexity of the system, comprehensive debugging and visualization tools have been developed which allow a detailed understanding of, and interaction with, the entire system. The graphical tool provides feedback for each individual camera stream and their combined result, and it provides statistics and tools to manipulate individual spines, in particular to recover them in case of entanglement. To develop the control software, a full end-to-end simulator has been created, which closes the loop between metrology image simulation, simulated fibre positioning, and all control aspects in between. In the simulator, the metrology system uses the current spine positions, as reported by the AESOP positioner, to render metrology camera images; analysis and downstream computation are identical to the live software. When commanded to move spines, the AESOP simulator executes the same steps as the real positioner, short of sending electrical signals, and then returns the expected spine positions after the move, which are taken as input for the next FTA iteration.
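The measure-move loop at the heart of this process can be sketched as follows; measure_positions() and command_moves() are stubs standing in for the metrology and AESOP interfaces, and the noise levels, gain, and iteration limit are illustrative.

    # Sketch of the iterative position-measure-correct loop described above.
    # The stubs stand in for the metrology and AESOP interfaces.
    import numpy as np

    rng = np.random.default_rng(1)
    n_spines = 8
    targets = rng.uniform(0, 1e-3, size=(n_spines, 2))             # target x, y [m]
    current = targets + rng.normal(scale=100e-6, size=(n_spines, 2))

    def measure_positions(positions):
        """Stub for the metrology system: measured = true + ~3 um rms noise."""
        return positions + rng.normal(scale=3e-6, size=positions.shape)

    def command_moves(positions, errors):
        """Stub for AESOP: assume each commanded move removes ~80% of the error."""
        return positions - 0.8 * errors

    tolerance = 10e-6                                               # 10 um rms goal
    for iteration in range(8):
        measured = measure_positions(current)
        errors = measured - targets
        rms = np.sqrt(np.mean(np.sum(errors**2, axis=1)))
        print(f"iteration {iteration}: rms error {rms * 1e6:.1f} um")
        if rms < tolerance:
            break
        current = command_moves(current, errors)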
In this paper, we present an overview of the software architecture for the ArmazoNes high Dispersion Echelle Spectrograph (ANDES), which was developed as part of the recent System Architecture Review (SAR) held in October 2023. Our focus is twofold: we detail the control software and the science tools that are set to be implemented. In particular, we provide a detailed view of how the ELT Instrument Control Framework has been effectively deployed to manage the complexities of a distributed instrument like ANDES. This entails a comprehensive discussion of the key architectural decisions we have made to meet the requirements of the project. Furthermore, we offer insights into the suite of science software that will be an integral part of the ANDES instrument, including the Exposure Time Calculator, Observation Preparation tools, and the Data Reduction Library. Finally, we provide an overview of the Data Analysis Software and the end-to-end ANDES simulator. These tools are crucial for processing and analyzing the data collected by the ANDES spectrograph.
This study compares the effectiveness of deep learning methods, specifically Feedforward Neural Networks (FNNs), with traditional Pointing Models (PMs) for compensating blind pointing errors in astronomical instruments. Ambitious projects like the ongoing study for the Atacama Large Aperture Submillimeter Telescope (AtLAST) have inspired us to investigate possible improvements over traditional Pointing Error (PE) modeling. The study assesses the practicality of FNNs by applying them to data from an instrument in operation: the MeerKAT+ precursor telescope built by the Max Planck Institute for Radio Astronomy (MPIfR) to extend the current MeerKAT Radio Telescope Array at the South African Radio Astronomy Observatory (SARAO) site in the Meerkat National Park in South Africa.
We present a machine learning model based on deep encoder-decoder architecture that transforms an astronomical image into a latent space representation of the spatially varying Point Spread Functions (PSFs) within the image. This learned compressed representation can then be queried with a pixel position in order to create a realistic synthetic PSF for a transient source at that pixel. Our method is demonstrated using data from the Argus Pathfinder array’s transient detection pipeline. This methodology allows for more cost-effective generation of 100M+ image datasets for training transient detection pipelines and could be generalized to other next generation transient surveys.
We present results on integrating Machine Learning (ML) methods for adaptive optics control with a real-time control library: COmmon Scalable and Modular Infrastructure for real-time Control (COSMIC). We test the integration in simulations for the SAXO+ instrument. The proposed pipeline is a two-model ML system. The first model is a very deep neural network (DNN) that maps Wavefront Sensor (WFS) images to phase and is trained offline. The second model performs predictive control with a more compact DNN. The predictive control stage is trained online, providing an adaptive solution to changing atmospheric conditions but adding extra complexity to the pipeline. On top of implementing the solution with COSMIC, we add a set of modifications to provide faster inference and online training. Specifically, we test NVIDIA’s TensorRT to accelerate DNN inference, reduced precision, and just-in-time compilation for PyTorch. We demonstrate real-time capability using COSMIC and improved speeds in both inference and training with the optimizations mentioned above.
The WIYN 3.5 m Telescope at Kitt Peak National Observatory hosts a suite of optical and near-infrared instruments, including an extreme-precision optical spectrograph, NEID, built for exoplanet radial velocity studies. In order to achieve sub-m s⁻¹ precision, NEID has strict requirements on survey efficiency, stellar image positioning, and guiding performance, which have exceeded the native capabilities of the telescope’s original pointing and tracking system. To improve the operational efficiency of the telescope we have developed a novel telescope pointing system, built on a recurrent neural network, that does not rely on the usual pointing models (TPoint or other quasi-physical bases). We discuss the development of this system and how the intrinsic properties of the pointing problem inform our network design, and show preliminary results from our best models. We also discuss plans for the generalization of this framework so that it can be applied at other sites.
The Submillimeter Array (SMA) requires precise full-sky blind pointing for its eight 6m antennas, aiming for an error within 3′′, a fraction of the 34′′ FWHM beam at 345 GHz. SMA’s typical 2–3′′ rms pointing accuracy is crucial for efficient array operation, especially with 4 to 6 antenna relocations across 23 pads in various configurations each semester. Traditional calibration using optical guidescopes for mount model errors has shifted to interferometric pointing measurements on quasars, for full model acquisition and baseline calibration. Following every array reconfiguration, mechanical imperfections in antenna mounting lead to significant deviations in azimuth encoder offset and axis tilt parameters, complicating pointing accuracy. To overcome this, a three-layer feed-forward neural network, trained on over ten years of data for each antenna-pad configuration, predicts post-reconfiguration changes. This approach, currently under evaluation and refinement, aims to expedite re-calibration, indicating potential substantial reductions in calibration time and enhanced operational efficiency.
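For illustration, a plain three-layer feed-forward network of the kind described can be written as a simple numpy forward pass; the input features and outputs named below are only indicative, and in practice the weights come from training on the historical pointing data.

    # Plain-numpy sketch of a three-layer feed-forward network.  The feature
    # encoding and the predicted quantities are indicative assumptions only.
    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def forward(x, params):
        h1 = relu(x @ params["W1"] + params["b1"])
        h2 = relu(h1 @ params["W2"] + params["b2"])
        return h2 @ params["W3"] + params["b3"]      # predicted model changes

    rng = np.random.default_rng(0)
    params = {
        "W1": rng.normal(size=(6, 32)),  "b1": np.zeros(32),
        "W2": rng.normal(size=(32, 16)), "b2": np.zeros(16),
        "W3": rng.normal(size=(16, 3)),  "b3": np.zeros(3),
    }
    features = rng.normal(size=6)        # e.g. encoded antenna, pad, season, ...
    print(forward(features, params))     # e.g. changes in encoder offset and tilts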
Specreduce is an Astropy-coordinated Python package whose goal is to be a toolbox of functions and utilities relevant to the reduction of spectroscopic data. It is largely focused on optical/IR spectroscopy where the raw data consist of an image projected from a spectrograph onto a 2D imaging detector. The way the spectral and spatial information is encoded into these 2D images can be quite complex and varied (e.g. multi-object vs. multi-fiber vs. integral field spectroscopy). Methods and algorithms for handling this variety of data have been implemented across many previous and existing data pipelines. Specreduce aims to collect these best practices into a common, shared space that facilitates more collaboration and easier development of future spectroscopic pipelines.
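As a flavor of the kind of operation such a toolbox covers, the sketch below performs a simple boxcar extraction of a straight trace from a synthetic long-slit frame. It is written in plain numpy for illustration and does not use the specreduce API itself.

    # Concept sketch of boxcar extraction from a 2D long-slit frame with a
    # straight trace; plain numpy for illustration, not the specreduce API.
    import numpy as np

    ny, nx = 64, 512
    frame = np.random.normal(10.0, 1.0, size=(ny, nx))        # background + noise
    trace_row, half_width = 30, 4
    frame[trace_row - 2:trace_row + 3, :] += 200.0             # synthetic source

    rows = slice(trace_row - half_width, trace_row + half_width + 1)
    sky = np.median(np.vstack([frame[:10, :], frame[-10:, :]]), axis=0)
    spectrum = (frame[rows, :] - sky).sum(axis=0)              # counts per column

    print(spectrum.shape, spectrum[:5])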
With active time-domain surveys like the Zwicky Transient Facility, the anticipated Rubin Observatory’s Legacy Survey of Space and Time, and multi-messenger experiments such as LIGO/Virgo/KAGRA for gravitational wave detection and IceCube for high-energy neutrino events, we have entered a new era of both time-domain and multi-messenger astronomy. The Astro2020 decadal survey highlights responding effectively and promptly to these astronomical alerts as a priority, and thus there is an urgent need for a seamless follow-up infrastructure at existing facilities capable of following up on detections at the surveys’ depths. At the W. M. Keck Observatory (WMKO), we are actively constructing critical infrastructure aimed at facilitating Target-of-Opportunity (ToO) triggers, optimizing observational planning, streamlining data acquisition, and enhancing data product accessibility. In this document, we provide an overview of these developing services and place them in the context of existing observatory infrastructure such as the Keck Observatory Archive (KOA) and the Data Services Initiative (DSI).
As we move into the age of time-domain astronomy, robust, automated data reduction systems become essential. Here we present BANZAI-FLOYDS, a fully automated long-slit data reduction pipeline for the FLOYDS spectrograph at Las Cumbres Observatory. BANZAI-FLOYDS is written entirely in Python, implementing wavelength calibration, fringe correction, object detection and tracing, telluric correction, and flux calibration. The pipeline builds on the BANZAI library, which handles the data flow and engineering, allowing BANZAI-FLOYDS to focus only on spectroscopic processing. This design makes the processing stages modular, allowing rapid development and encouraging reuse for other spectrographs.
The Simons Observatory (SO) is a next-generation ground-based telescope located in the Atacama Desert in Chile, designed to map the cosmic microwave background (CMB) with unprecedented precision. The observatory consists of three Small Aperture Telescopes (SATs) and one Large Aperture Telescope (LAT), each optimized for distinct but complementary scientific goals. To achieve these goals, optimized scan strategies have been defined for both the SATs and LAT. This paper describes a software system deployed in SO that effectively translates high-level scan strategies into realistic observing scripts executable by the telescope, taking into account realistic observational constraints. The data volume of SO also necessitates a scalable software infrastructure to support its daily data processing needs. This paper also outlines an automated workflow system for managing data packaging and daily data reduction at the site.
Kieran Leschinski, Hugo Buddelmeijer, Oliver Czoske, Gilles Otten, Martin Balaz, Fabian Haberhauer, Jennifer Karr, Wolfgang Kausch, Thomas Marquart, et al.
METIS will be the first-light mid-infrared instrument at the ELT. Given the expected performance of the ELT’s adaptive optics systems, METIS will be able to probe regions of the sky previously inaccessible to astronomers. In support of both the METIS integration and verification efforts as well as the astronomical community at large, the METIS pipeline team has begun work on the METIS data reduction pipeline. The METIS pipeline will be written mostly in Python to take advantage of the new data reduction tools released by ESO. The development schedule has been set in such a way that the pipeline team will be able to directly support the testing and verification efforts during the upcoming system integration phase for METIS. In order to ensure that the required pipeline functionality is available when it is needed, the recipes and workflows functionality has been broken down into four levels of readiness: skeleton, functional, performance, and science-grade. This breakdown aims to ensure a more agile approach to the pipeline implementation as well as enabling productive contributions from all members of the highly geographically distributed team.
Giuliano Taffoni, Andrea Mignone, Luca Tornatore, Eva Sciacca, Massimiliano Guarrasi, Giovanni Lapenta, Lubomir Riha, Radim Vavrik, Ondrej Vysocky, et al.
High Performance Computing based simulations are crucial in Astrophysics and Cosmology, helping scientists investigate and understand complex astrophysical phenomena. Taking advantage of Exascale computing capabilities is essential for these efforts. However, the unprecedented architectural complexity of exascale systems impacts simulation codes. The SPACE Center of Excellence aims to re-engineer key astrophysical codes to adapt to these new computational challenges by adopting innovative programming paradigms and software solutions. Through co-design activities, SPACE brings together scientists, code developers, HPC experts, hardware manufacturers, and software developers. This collaboration enhances exascale astrophysics and cosmology applications, promoting the use of exascale and post-exascale computing capabilities. Additionally, SPACE addresses high-performance data analysis for the massive data outputs from exascale simulations, using machine learning and visualization tools. The project facilitates application deployment across platforms by focusing on code repositories and data sharing, integrating European astrophysical communities around exascale computing with standardized software and data protocols. In this paper, we present the SPACE Center of Excellence and the preliminary results achieved by the project.
Symbolic regression techniques are promising approaches to learning mathematical models that fit experimental data. One of the most powerful techniques for symbolic regression is Grammatical Evolution (GE). This evolutionary computation technique explores a space of candidate models that are ensured to be syntactically correct expressions built from a set of arbitrary building blocks and operators. In GE the syntax for these expressions is defined by a problem-specific formal grammar. Therefore, GE can produce an explainable solution (e.g. a formula), not a black-box model. The current contribution assesses the viability of GE for PSF characterization, using real datasets from HST/WFPC2. Our experiments show that our method is able to find the most likely candidate mathematical expression for the PSF shape and can also model combinations of shapes taken from a predefined family of functions commonly used in astronomy (Gaussian and Moffat PSFs). These results support the hypothesis that the expressive power of GE can be used to tackle the problem of characterization of complex PSF functions, for example, as a necessary step in the prediction of intra-pixel position of stars.
We present a machine learning method to assign stellar parameters (temperature, surface gravity, metallicity) to the photometric data of large photometric surveys such as SDSS and SkyMapper. The method makes use of our previous effort in homogenizing and recalibrating spectroscopic data from surveys like APOGEE, GALAH, or LAMOST into a single catalog, which is used to train a neural network. We obtain spectroscopic-quality parameters for millions of stars that have only been observed photometrically. The typical uncertainties are of the order of 100 K in temperature, 0.1 dex in surface gravity, and 0.1 dex in metallicity, and the method performs well down to low metallicity, where obtaining reliable results is known to be difficult.
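A schematic version of this photometry-to-parameters regression, using scikit-learn and purely synthetic stand-in data (in practice the labels come from the recalibrated spectroscopic catalog), might look like the following.

    # Sketch of a colors -> (Teff, log g, [Fe/H]) regression with scikit-learn.
    # The data below are synthetic stand-ins, not the actual training catalog.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    colors = rng.normal(size=(5000, 4))                 # e.g. u-g, g-r, r-i, i-z
    labels = np.column_stack([
        5500 + 800 * colors[:, 1],                      # toy Teff relation
        4.3 + 0.2 * colors[:, 0],                       # toy log g relation
        -0.5 + 0.3 * colors[:, 2],                      # toy [Fe/H] relation
    ])

    X_train, X_test, y_train, y_test = train_test_split(colors, labels, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
    model.fit(X_train, y_train)
    rms = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2, axis=0))
    print("rms errors (Teff, log g, [Fe/H]):", rms)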
A method for unanticipated fault diagnosis based on IGWO-iForest (Improved Grey Wolf Optimizer-Isolation Forest) is proposed to address the various unpredictable problems faced by large telescopes in extreme environments. First, a random forest feature selection algorithm is used to identify the informative features of the original dataset and eliminate redundant ones. Second, a differential evolution strategy is introduced into the Grey Wolf Optimizer (GWO) to improve local search efficiency and accuracy, and a Levy flight strategy is introduced to improve the global search ability of the algorithm. Then, the improved IGWO is used to optimize the parameters of the iForest model. Finally, the performance of the model is verified with data collected from a fault diagnosis and self-healing hardware-in-the-loop simulation platform. The experimental results show that the IGWO-iForest algorithm achieves a fault diagnosis accuracy of 99.1%, demonstrating higher sensitivity to a small number of unanticipated fault samples than other anomaly detection algorithms and proving the effectiveness of this method in accurately diagnosing unanticipated faults in telescopes.
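The anomaly-detection core of such a scheme can be sketched with scikit-learn's IsolationForest; here a simple random search stands in for the IGWO optimizer, and the telemetry data are synthetic.

    # Sketch of an Isolation Forest whose hyper-parameters are tuned by a
    # simple random search (a stand-in for IGWO).  Data are synthetic.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, size=(500, 8))            # nominal telemetry
    faults = rng.normal(5, 1, size=(10, 8))             # rare unanticipated faults
    X = np.vstack([normal, faults])
    y = np.r_[np.ones(500), -np.ones(10)]               # 1 = normal, -1 = anomaly

    best_score, best_params = -np.inf, None
    for _ in range(20):                                  # random search over params
        params = {"n_estimators": int(rng.integers(50, 300)),
                  "contamination": float(rng.uniform(0.005, 0.05))}
        pred = IsolationForest(random_state=0, **params).fit(X).predict(X)
        score = np.mean(pred == y)
        if score > best_score:
            best_score, best_params = score, params

    print(best_params, f"accuracy {best_score:.3f}")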
To address the challenges of the Rubin Science Platform, Rubin developed a Kubernetes-based approach to service deployment with an in-house service configuration and support infrastructure called Phalanx, based on ArgoCD. It became apparent that the challenges of running a service-oriented architecture at a modern observatory summit lent themselves equally well to this approach. In this paper we describe how Phalanx was adapted for telescope, instrument, and sensor control services, and the advantages of providing a unified service infrastructure for both control systems and data services.
The amount of astronomical data that needs to be archived, calibrated, and processed continues to increase as telescopes and observing instruments advance. Securing the resources needed to store and process ever-increasing data volumes is an operational challenge. To address these issues, we conducted a demonstration experiment using ALMA archived data to efficiently utilize a commercial cloud for archive storage and data analysis pipeline processing. For archiving, a hybrid configuration combining on-premises storage with cloud-based short-term and long-term storage is cost-effective, considering trends in the number of data downloads over time since the data were obtained. For the data analysis processing, information on processing time and resource usage, such as memory and CPU cores, measured during the pipeline processing of approximately 400 observation data sets was analyzed, and a model was created to estimate processing time and the required amount of resources from the observation parameters. Based on this model, the amount of required resources is predicted from the observation parameters, and an instance with the necessary and sufficient resources for pipeline processing is launched on demand in the cloud. These pipeline runs completed in a processing time comparable to that of on-premises processing. Since prices, services, and computing resources on commercial clouds are updated frequently, we plan to continue making periodic estimates.
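The sketch below illustrates the resource-estimation idea in its simplest form: fit a regression from observation parameters to peak memory use, then pick the smallest instance that satisfies the prediction. The parameter names, instance sizes, and coefficients are placeholders, not the actual ALMA model.

```python
# Hedged sketch: predict pipeline memory from observation parameters,
# then choose an adequately sized (hypothetical) cloud instance.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# columns: number of antennas, channels (thousands), time on source (minutes)
obs_params = np.column_stack([rng.integers(30, 60, 400),
                              rng.integers(1, 8, 400),
                              rng.uniform(10, 120, 400)])
peak_mem_gb = 2.0 * obs_params[:, 1] + 0.05 * obs_params[:, 2] + rng.normal(0, 2, 400)

model = LinearRegression().fit(obs_params, peak_mem_gb)

instance_types = [("small", 16), ("medium", 32), ("large", 64), ("xlarge", 128)]  # (name, GB)

def choose_instance(params, margin=1.2):
    """Return the smallest instance whose memory exceeds the prediction plus margin."""
    need = margin * model.predict([params])[0]
    return next((name for name, gb in instance_types if gb >= need), "xlarge")

print(choose_instance([45, 4, 60]))
```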
Datalab, the La Silla Paranal Observatory platform for data analysis, is being migrated from Docker Swarm to Kubernetes to align with the integrated operations program's goals: Remote, Lean, Sustainable, and High Performance. The migration entailed moving from an on-premises infrastructure to a cloud-native one, replicated locally as a cloud edge, providing support for hybrid-cloud containerized applications and implementing DevOps practices and automation. Infrastructure-as-code and configuration management tools such as Terraform and Ansible are used, and CI/CD pipelines in GitLab automatically deploy the required infrastructure into the hybrid cloud to host Kubernetes clusters (Azure Kubernetes Service and vanilla Kubernetes). This approach allows the Observatory to enhance efficiency, reducing power consumption and improving scalability, using Datalab as a proof of concept while laying the foundation to standardize these technologies across the organization. This paper outlines the provisioning and deployment of the new hybrid cloud infrastructure, providing a concise overview of its architecture, operational impact, and benefits for the observatory.
In the rapidly evolving landscape of software development, the adoption of containerization has transformed the software supply chain. Containers encapsulate software components, ensuring consistency across development, testing, and production environments. They foster agility and scalability by enabling microservices architectures and DevOps practices. The recent increase in cyberattacks targeting research institutes makes a secure supply chain for containers and their orchestration critical. This paper delves into the integration of containers within the software supply chain, examining best practices and challenges in orchestration, security, continuous integration and delivery (CI/CD), and distribution. We focus on how containers are secured from the build stage, verified, distributed securely, and validated in production, while also exploring the implications for dependency management and obsolescence in modern cloud-native infrastructures. Our analysis provides insights into maximizing the benefits of containerization to streamline development pipelines and enhance software supply chain resilience.
The 10.4 m Gran Telescopio Canarias (GTC) currently stands as the world's largest optical-infrared telescope, notable for its considerable scale and intricate operational complexities. It is operated by the Spanish state-owned company GRANTECAN S.A. The telescope aligns with prevailing trends in industry, emphasizing smart factory strategies rooted in Industry 4.0, with a recent pivot towards human-centric human-machine interaction (Industry 5.0). The initial focus was on refining connectivity and defining the typology of sensors for optimizing operations. This led to the implementation of an Industrial Internet of Things (IIoT) middleware, grounded in FIWARE. The incorporation of smart sensors, leveraging links and protocols such as LoRaWAN, Bluetooth, and Wi-Fi, has been streamlined through MQTT connectivity, while the integration of AI, edge computing, confined space control, flow management, electric vehicle charging, and photovoltaics represents a multifaceted technological augmentation. Looking ahead, the primary challenges lie in bolstering operational technology (OT) cybersecurity, achieving seamless integration with Business Process Management (BPM), advancing data analytics capabilities, standardizing prescriptive maintenance protocols, and refining logistics processes. It is now possible to obtain data and information from the systems in a flexible way, and the systems can interact with each other. The connection to higher decision layers and the associated cost savings are now also opportunities, together with the ease of deployment made possible by the system's scalability. This has allowed a real and tangible focus on anticipating maintenance and operations actions, rather than reacting to problems encountered, among other opportunities.
The ELT Control System can be divided into the Central Control System (CCS) and the subsystems' Local Control Systems (LCS). At the heart of the CCS is the High-Level Coordination and Control (HLCC) software, which offers operators and instruments a single interface to the telescope and coordinates the telescope subsystems. HLCC interfaces to the LCSs via the respective subsystem Local Supervisors (LSV). The LSVs are responsible for interfacing to the different LCSs, converting from the astronomical and user domains into actions and measurements in the individual device's domain. Following celestial objects, i.e. tracking, is done on three ELT LSVs: the Main Structure (MS), the Dome, and the Pre-Focal Station (PFS) LSVs. HLCC distributes the target information, and the involved LSVs periodically compute the trajectory setpoints for their respective devices using the CCS's pointing engine. The tracking also considers a dynamic pointing origin, used to cope with the fact that the instruments might not have a perfectly aligned center of rotation. Pointing models that account for imperfections and physical effects are used for the MS and PFS LSVs. The timestamped setpoints are sent to the corresponding LCSs, and feedback is gathered using deterministic channels.
Once in operation, the Vera C. Rubin Observatory will execute a ten-year-long survey of the Southern sky known as the Legacy Survey of Space and Time (LSST). The Rubin Observatory Control System (Rubin-OCS) is a distributed system with each component in charge of a particular sub-system, e.g., the mount, the M1M3 mirror support system, etc. Each component is designed as an independent part of the system, and they must work together during operations. Communication between the components is done by means of a software middleware. The middleware is the backbone of the system, allowing components to communicate with each other in a seamless way. The highly distributed nature of the Rubin-OCS places tight constraints on the middleware in terms of latency, availability, and reliability. The baseline implementation of the Rubin-OCS adopts the Data Distribution Service (DDS) technology for the middleware. In the Rubin-OCS, the middleware is encapsulated by a layer of abstraction known as the Service Abstraction Layer (SAL), which currently uses the ADLink OpenSplice implementation of DDS. Recently, we performed a study of using Apache Kafka to replace DDS as the middleware technology for the Rubin-OCS. This study was motivated by the middleware-related challenges we faced while integrating the system, as well as recent announcements indicating that the adopted library may be deprecated during the lifespan of the project. The study involved throughput and latency measurements and a proof of concept of our core libraries. Overall, Kafka proved to be a suitable replacement for DDS.
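As a generic illustration of the kind of measurement such a study involves (not Rubin's actual SAL benchmark code), the sketch below probes Kafka round-trip latency, assuming a broker at localhost:9092 and a pre-created test topic.

```python
# Minimal Kafka round-trip latency probe (generic sketch, assumed local broker/topic).
import time
from confluent_kafka import Producer, Consumer

TOPIC = "latency-test"            # assumed test topic
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer = Consumer({"bootstrap.servers": "localhost:9092",
                     "group.id": "latency-probe",
                     "auto.offset.reset": "latest"})
consumer.subscribe([TOPIC])

# allow the consumer group to join and partitions to be assigned before timing
for _ in range(10):
    consumer.poll(0.5)

samples = []
for _ in range(100):
    t0 = time.monotonic()
    producer.produce(TOPIC, value=str(t0).encode())
    producer.flush()
    msg = consumer.poll(5.0)
    if msg is not None and not msg.error():
        samples.append((time.monotonic() - float(msg.value())) * 1e3)

if samples:
    print(f"median round trip: {sorted(samples)[len(samples) // 2]:.2f} ms")
```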
The Square Kilometre Array Observatory (SKAO) is an international organization that is currently building two multi-purpose radio telescope arrays. The SKA Low Frequency Telescope array (SKA Low), located in the Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory in Western Australia, with the observing range 50 - 350 MHz, will consist of 131,072 log-periodic antennas organized as 512 stations; the maximum distance between two stations is 65 kilometres. The SKA Mid Frequency Telescope array (SKA Mid), located in the Karoo region, Northern Cape province, South Africa, with the observing range 350 MHz - 15 GHz, will comprise 197 offset-Gregorian dishes; the dishes are 15 metres in diameter, and the maximum baseline is 150 kilometres. This paper provides an overview of the automated attribute alarm handling in the Tango Controls devices of the SKA Control System, the current software design for the early stages of the observatory, its horizontally scalable deployment, its integration, and the lessons learned when real users and engineers deploy and use the software.
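For orientation, the sketch below shows a generic client-side way of observing a Tango attribute's alarm state: the thresholds themselves live in the attribute's alarm properties on the device, and a reading whose quality is ATTR_ALARM indicates a violation. The device and attribute names are the standard TangoTest placeholders, not SKA devices.

```python
# Hedged sketch: poll a Tango attribute and report readings in ALARM quality.
import time
import tango

dev = tango.DeviceProxy("sys/tg_test/1")      # placeholder TangoTest device
for _ in range(10):
    reading = dev.read_attribute("double_scalar")
    if reading.quality == tango.AttrQuality.ATTR_ALARM:
        print(f"ALARM: {reading.name} = {reading.value}")
    time.sleep(1.0)
```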
The Wide-Area Linear Optical Polarimeters (WALOPs) are two instruments, WALOP-North and WALOP-South, that will be installed at the Skinakas Observatory and the South African Astronomical Observatory, respectively. Their goal is to work towards a polarimetric map of the Galaxy for the needs of the PASIPHAE collaboration. To operate smoothly, the WALOP instruments require custom-made software that fits their (and the survey's) specifications. We present this software's specifications and the methods and technologies used to meet these requirements.
The Wide-Field Infrared Transient Explorer (WINTER) is a new fully robotic infrared time-domain survey instrument at the Palomar Observatory, commissioned in June 2023. WINTER is performing a seeing-limited time domain survey of the infrared (IR) sky to detect, discover, and characterize astrophysical time-domain phenomena. As a dedicated observatory for real-time detection and rapid follow-up of infrared transient and variable targets, WINTER represents a new capability for multi-messenger astrophysics. We will describe the robotic software architecture of the WINTER Supervisor Program (WSP) which handles autonomous scheduling of both surveys and target-of-opportunity interrupts, as well as control and remote monitoring of the observatory, telescope, and cameras.
A highly optimized E-field Parallel Imaging Correlator (EPIC), currently under commissioning on the Long Wavelength Array in Sevilleta, New Mexico, can image the sky at a rate of 25,000 frames per second per polarization and frequency. The system consists of six processing nodes, each producing images of the visible sky with a 1-degree spatial resolution at an 80 ms temporal resolution, covering a 3.2 MHz spectral window below 100 MHz, for a total bandwidth of 19.2 MHz. Light curves for selected sources of interest will be extracted from each image into a distributed database, and 5-minute accumulations are archived on disk for further analysis. In this paper, we describe the components of our real-time imaging system, designed as a plug-and-play solution to deploy EPIC on similar arrays with only minor modifications.
The Cherenkov Telescope Array Observatory (CTAO) is the next-generation atmospheric Cherenkov gamma-ray project. CTAO will be deployed at two sites, one in the Northern and the other in the Southern Hemisphere, containing telescopes of three different sizes covering different energy domains. The commissioning of the first CTAO Large-sized Telescope (LST-1) is being finalized at the CTAO Northern site. Additional calibration and environmental monitoring instruments, such as laser imaging detection and ranging (LIDAR) instruments and weather stations, will support the telescope operations. The Array Control and Data Acquisition (ACADA) system is the central element for on-site CTAO operations. ACADA controls, supervises, and handles the data generated by the telescopes and the auxiliary instruments. It will drive the efficient planning and execution of observations while handling the several Gb/s of camera data generated by each CTAO telescope. The ACADA system contains the CTAO Science Alert Generation Pipeline, a real-time data processing and analysis pipeline dedicated to the automatic generation of science alert candidates as data are being acquired. These science alerts, together with external alerts arriving from other scientific instruments, will be managed by the Transients Handler (TH) component. The TH informs the Short-term Scheduler of ACADA about interesting science alerts, enabling the modification of ongoing observations at sub-minute timescales. The capacity for such fast reactions, together with the fast movement of CTAO telescopes, makes CTAO an excellent instrument for studying high-impact astronomical transient phenomena. The ACADA software is based on the ALMA Common Software (ACS) framework and is written in C++, Java, Python, and JavaScript. The first release of the ACADA software, ACADA REL1, was finalized in July 2023 and integrated with the LST-1 after a testing campaign that concluded in October 2023. This contribution describes the design and status of the ACADA software system.
We have recently initiated a multi-institutional research program that will examine existing extreme precision radial velocity (EPRV) pipelines and catalog potential sources of variation in their resulting radial velocity (RV) measurements. Through a series of EPRV community meetings, we aim to establish community-recommended, standardized formats for EPRV data products and to develop and distribute the tools necessary for direct comparisons of EPRV data between modern instruments. This program will lay the groundwork for a modular, open-source EPRV analysis toolbox that will be compatible with a wide variety of current and future instruments. Here we provide a progress report on the program's steps towards this community-endorsed data standard and highlight lessons learned from the early years of operation across the NEID, KPF, EXPRES, CARMENES, HPF, and MAROON-X RV spectrographs.
We introduce HERMES (HOP Enabled Rapid Message Exchange Service), an application which supports sharing and querying structured data containing targets, photometry, spectroscopy, astrometry, and more. Many branches of astronomy, particularly time-domain and multimessenger astrophysics, are driven by time-critical alerts. Coordinating the community-wide response to provide characterization observations of the alerts is critical to realizing many of the science goals in these fields. As part of the SCIMMA (Scalable CyberInfrastructure to support Multimessenger Astrophysics) project, HERMES provides a platform for users to share messages and data in a structured format that can be sent over the SCIMMA Kafka streams, while also delivering a queryable database of those messages. The goal of HERMES is to encourage more astronomers to share data in a common, machine-readable format. While the platform is robust and general enough to handle many kinds of astrophysical data, HERMES is especially useful for follow-up of non-localized events such as gravitational wave or neutrino events, and it maintains relationships between non-localized events and related messages and targets of interest. We discuss the Domain-Specific Language (DSL) designed for sharing structured astronomical data through HERMES, which also supports formatting and submitting data to external services such as NASA's GCN (General Coordinates Network) circulars or the TNS (Transient Name Server). Finally, we present the integration between HERMES and TOM (Target and Observation Management) Toolkit based systems, allowing TOM users to share or ingest data through HERMES.
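The snippet below is purely illustrative of the kind of structured, machine-readable payload HERMES is designed to exchange; the field names are assumptions for illustration and are not the actual HERMES/SCIMMA message schema.

```python
# Illustrative structured message bundling targets and photometry for an alert.
import json
from datetime import datetime, timezone

message = {
    "title": "Follow-up photometry of candidate counterpart",
    "submitted_at": datetime.now(timezone.utc).isoformat(),
    "event_id": "S240101ab",                       # hypothetical non-localized event
    "targets": [
        {"name": "AT2024abc", "ra_deg": 150.1234, "dec_deg": -20.5678},
    ],
    "photometry": [
        {"target": "AT2024abc", "mjd": 60310.25, "band": "r",
         "magnitude": 19.8, "magnitude_error": 0.1, "telescope": "example-1m"},
    ],
}
print(json.dumps(message, indent=2))
```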
The forthcoming New Robotic Telescope, a collaboration between the UK and Spain, is poised to become the world's largest and fastest autonomous observatory, located in La Palma. It is tailored to be a premier 4m-class follow-up facility for the imminent wave of time-domain and transient astrophysics. It exemplifies innovation with its use of serverless architectures and a unified DevOps methodology, integrating Docker and Kubernetes to facilitate reliable, scalable, and responsive deployments on both on-premises and cloud infrastructure. This model not only aligns with modern web-based principles and distributed deployments but also ensures that astronomers and operations staff have unfettered access to manage their observations, data products, and facility monitoring in a unified modern interface, setting a new standard for modern astronomical research facilities. Building on the Liverpool Telescope's autonomous robotic legacy, the New Robotic Telescope merges the GranTeCan Control System's framework with a novel Robotic Control System, facilitating the transition from human-operated to fully automated observatory functions. We describe the current status of the infrastructure for the New Robotic Telescope software stack, focusing on the current DevOps infrastructure and ongoing development, as well as outlining the future work ahead of the initial construction of the telescope.
Radio astronomy is currently facing a significant challenge due to the massive data volumes generated by modern radio-interferometers, which will be further exacerbated by the upcoming Square Kilometre Array. Efficient data processing at this scale necessitates advanced High-Performance Computing (HPC) resources. Our work focuses on developing a novel approach to implement the w-stacking algorithm on state-of-the-art HPC systems, specifically targeting heterogeneous architectures comprising both CPUs and GPUs. We introduce the RICK (Radio Imaging Code Kernels) code, designed to efficiently process radio-interferometric data by leveraging the parallelism and computational power of modern HPC nodes. This study demonstrates the effectiveness of RICK on a single computing node, showcasing significant performance improvements over traditional methods. The paper outlines the methodology, the algorithmic innovations, and the parallelization strategy, along with performance benchmarks on various CPU/GPU configurations, highlighting the potential of RICK for future large-scale radio astronomy projects.
Initiated in 2008, the Canadian Advanced Network for Astronomical Research (CANFAR) has evolved into an advanced science platform, a cloud-native framework for remote analysis of astronomical data. This innovative platform is designed for a diverse range of users, from large-scale groups like CHIME-FRB, to individual researchers in remote institutions. It offers intuitive interfaces such as notebooks, desktops, visualizers and IDEs, all accessible through a browser. These are supported by multi-tiered storage and Kubernetes orchestration. A comprehensive REST API facilitates seamless integration and automated batch processing. CANFAR stands out by combining conventional desktop analysis with cutting-edge, browser-based interfaces and GPU-accelerated machine learning capabilities. This unique blend has made it a hub for a varied user community, establishing it as a unified platform for comprehensive astronomical data analysis.
MCAO Assisted Visible Imager and Spectrograph (MAVIS) is a new instrument for ESO's VLT Adaptive Optics Facility (AOF). MAVIS incorporates an Adaptive Optics (AO) system to cancel the image blurring induced by atmospheric turbulence. The latency and computational load implied by the system dimensioning led us to design a new software and hardware architecture for the Real Time Controller (RTC). Notably, the COSMIC framework harnesses GPUs for accelerated computation and is adept at scaling across multiple processes without overhead by using shared memory. It employs a graph-based architecture in which operations are intuitively represented as nodes, aiming to simplify design, implementation, testing, and integration by relying on robust concepts and useful tools. Recent updates have further enhanced its versatility, cementing its potential as a future-proof, extensible framework for AO advancements and their development process.
Large sky surveys from dedicated survey telescopes allow astronomers to access large swathes of the optical and near-infrared sky at high spatial resolution. The increasingly large volume of multi-wavelength imaging and catalogue photometry data products they generate is a cumbersome data management burden for survey teams. Even the simplest of tasks, like visual inspection, can quickly become out of reach when trying to apply the traditional astronomical software toolkit familiar to most astronomers. More complex tasks like quality assurance and scientific exploitation demand interactive visualization of data products both alone (e.g. catalogues overlaid on imaging) and in conjunction with multiwavelength data from other surveys. Here we introduce a web application that effortlessly performs these tasks for the reprocessed imaging (≳20 TB) and catalogues (≳100 GB) of the VISTA Hemisphere Survey. The application is powered by the asynchronous architecture developed for Data Central's data aggregation service, which can query and receive several data streams simultaneously. Users can inspect a single target or navigate multiple targets from an uploaded list. Survey images are dynamically converted from FITS to HiPS format before being loaded into an Aladin Lite instance. Multiple catalogues are retrieved and accurately drawn in Aladin Lite using markers and ellipses. Dynamically generated tables summarize catalogue metadata with links to full record information. Users requiring more than the provided visualization and adjustable image scaling can download image and catalogue cutouts for closer inspection. With further science-oriented refinement we plan to release the application as part of the Data Central science platform.
Operating a cutting-edge radio telescope like ALMA demands optimal utilization of every minute available in the sky. With an increasing allocation of observation hours to researchers each year, the imperative for continuous, seamless operations grows. ALMA relies on an array of computer systems functioning on a full-time basis, with numerous concurrent users, generating approximately 50,000 logs per minute and a staggering 70 million logs per day. Addressing the challenge of managing this voluminous data flow, Log Detector emerges as an in-house solution designed to automate the detection and reporting of known issues. By scrutinizing logs, this tool empowers users to define Finite State Machine (FSM) states and transitions. Users can then feed logs into this machine, inducing state transitions that signal potential problems or facilitate system monitoring tasks. This article aims to spotlight the capabilities of Log Detector and its impact on operational efficiency. Additionally, it offers insights into the lessons learned while developing an in-house operational tool and outlines future development plans.
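The sketch below captures the core idea behind such a tool: a user-defined finite state machine whose transitions fire on regex matches against incoming log lines. The states, patterns, and log lines are invented for illustration and do not reflect the actual Log Detector configuration format.

```python
# Minimal FSM-over-logs sketch: transitions fire on regex matches per line.
import re

transitions = {
    # (current_state, compiled pattern) -> next_state
    ("IDLE", re.compile(r"antenna \S+ lost connection")): "DEGRADED",
    ("DEGRADED", re.compile(r"antenna \S+ reconnected")): "IDLE",
    ("DEGRADED", re.compile(r"correlator timeout")): "FAULT",
}

def run(log_lines, state="IDLE"):
    for line in log_lines:
        for (src, pattern), dst in transitions.items():
            if src == state and pattern.search(line):
                state = dst
                if state == "FAULT":
                    print("report issue:", line.strip())
    return state

logs = [
    "2024-06-01T03:12:00 antenna DA41 lost connection",
    "2024-06-01T03:12:05 correlator timeout on baseline DA41-DV02",
]
print("final state:", run(logs))
```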
At Vera C. Rubin Observatory, the need to manage metrics and telemetry data efficiently led to the creation of Sasquatch. Sasquatch consolidates our high-frequency telemetry harness, which captures the observatory engineering data, with the science performance metrics measured by the LSST Science Pipelines. Sasquatch utilizes InfluxDB, a time series database, to efficiently store and query time-series data. We combine InfluxDB Enterprise with Apache Kafka and deploy our solution on the Kubernetes platform. Our current setup at the US Data Facility enables real-time access to data mirrored from the Summit and leverages tools like Chronograf for time series data visualization, Kapacitor for alert management, and the Rubin Science Platform’s notebook environment for data analysis using Python. Sasquatch is currently employed during Rubin Observatory’s System Integration Testing and Commissioning phase and is an essential service as we transition into survey operations.
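As a generic illustration of the write/query pattern involved (not Sasquatch's actual ingestion path, which goes through Kafka connectors), the sketch below writes one telemetry sample into an InfluxDB bucket and queries it back with Flux; the URL, token, org, bucket, and topic names are placeholders.

```python
# Hedged sketch: write and query a time-series point with the influxdb-client package.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

point = (Point("mtmount_azimuth")            # hypothetical telemetry topic name
         .tag("salIndex", "0")
         .field("actualPosition", 123.456))
client.write_api(write_options=SYNCHRONOUS).write(bucket="telemetry", record=point)

tables = client.query_api().query(
    'from(bucket:"telemetry") |> range(start: -1h) '
    '|> filter(fn: (r) => r._measurement == "mtmount_azimuth")')
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())

client.close()
```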
The Square Kilometre Array Observatory (SKAO) is a next-generation radio telescope that is being built in two locations, the Karoo region of South Africa and the Murchison region of Western Australia, forming one Observatory run from a global headquarters based in the United Kingdom at Jodrell Bank. At the heart of the SKA software there will be a database that persists and replicates its metadata between these three sites, which have different functions. This paper describes the main requirements for the SKA database, how this has been done in the past at the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, the lessons learned there from ten years of operations, and what new options have appeared to handle the SKA's three-site telescope needs. The solution needs to be highly available, performant, cost-effective, and easy to implement and maintain in the long term.
We GPU-ported with CUDA the solver module of the Astrometric Verification Unit - Global Sphere Reconstruction (AVU-GSR) pipeline for the ESA Gaia mission. The code finds the astrometric parameters of approximately 10⁸ sources by solving a linear system with the LSQR algorithm. The coefficient matrix is large (10-50 TB) and sparse. The CUDA code achieves a speedup of approximately 14x over the original MPI + OpenMP solver on the CINECA cluster Marconi100, and we have migrated production to Leonardo, which has 4x the GPU memory per node. This speedup was obtained without computing the system covariances, whose total number is N_unk × (N_unk − 1)/2 and which would occupy approximately 1 EB for N_unk ≈ 5 × 10⁸. This "Big Data" problem cannot be solved with standard methods: we defined a two-job, I/O-based pipeline, where one job writes the files and a second, concurrent job reads them, iteratively computes the covariances, and deletes them. The covariance calculation does not significantly slow down the code up to approximately 8 × 10⁶ covariance elements.
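For readers unfamiliar with the underlying solver step, the tiny CPU example below solves a sparse least-squares system with LSQR; it is only a toy illustration at a vastly smaller scale than the CUDA/MPI production code described above, with a random matrix standing in for the astrometric design matrix.

```python
# Toy LSQR solve of a sparse least-squares system (CPU-only illustration).
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_obs, n_unk = 20000, 2000
A = sparse_random(n_obs, n_unk, density=1e-3, random_state=0, format="csr")
x_true = rng.normal(size=n_unk)
b = A @ x_true + rng.normal(scale=1e-3, size=n_obs)

x, istop, itn = lsqr(A, b, atol=1e-10, btol=1e-10)[:3]
print("iterations:", itn, " rms error:", np.sqrt(np.mean((x - x_true) ** 2)))
```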
The United States Extremely Large Telescope Program (US-ELTP) is a joint effort of three organizations: the Thirty Meter Telescope (TMT) International Observatory (TIO), the Giant Magellan Telescope Corporation (GMTO), and the U.S. National Science Foundation National Optical-Infrared Astronomy Research Laboratory (NSF NOIRLab). The US-ELTP will provide US astronomers with access to observing and archival science with the GMT and TMT. A user services suite and integrated data management and science platform, the US-ELT NOIRLab Program Platform (NPP), is being developed by NOIRLab; its scope, conceptual architecture, and development progress are described here. The NPP leverages community best practices and lessons from existing and upcoming facilities such as the International Gemini Observatory, the Vera C. Rubin Observatory, and the Daniel K. Inouye Solar Telescope (DKIST) and is guided by inclusive and collaborative design principles to broaden access to GMT and TMT. The NPP will support the creation of proposal and observing programs, scheduling and execution of observations, data archival and automated data reduction, and exploratory data analysis. The system, its support, extensive documentation, and ample training will be guided by regular community surveys and interviews with a diverse range of community members, and, eventually, through usability testing based on prototypes and mockups.
Aspera is a NASA-funded UV SmallSat mission in development with a projected launch in 2025. The goal of the mission is to detect and map warm-hot gas in the circumgalactic medium of nearby galaxies traced by the O VI emission line at 103.2 nm. To that end, Aspera will conduct long-exposure observations at one or more spatial fields around each target galaxy, employing two long-slit spectrographs. Spectra from both channels are focused on a single micro-channel plate detector. In preparation for the mission's launch, we are developing a data reduction pipeline, the goal of which is to reconstruct a calibrated 3D IFU-like data cube by combining the photon event lists obtained during each observation of a given target galaxy. In this proceedings paper, we present an outline of the data reduction pipeline and describe the data flow through the processing of science observations. We further discuss the individual steps applied to the data during processing and show how our final data cubes are reconstructed. Finally, we present our planned data products and discuss how simulations of the Aspera data cubes are being used to develop the pipeline.
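Conceptually, the cube-building step amounts to binning a photon event list into (wavelength, y, x); the sketch below shows that step on synthetic events. The real pipeline applies calibrations, distortion corrections, and exposure weighting before any such binning, and the bin sizes and ranges here are placeholders.

```python
# Conceptual sketch: bin a photon event list into an IFU-like data cube.
import numpy as np

rng = np.random.default_rng(0)
n_events = 100_000
x = rng.uniform(0, 32, n_events)                 # placeholder spatial coordinates
y = rng.uniform(0, 32, n_events)
wavelength = rng.normal(103.2, 0.05, n_events)   # nm, centred on O VI

cube, edges = np.histogramdd(
    np.column_stack([wavelength, y, x]),
    bins=(50, 64, 64),
    range=((103.0, 103.4), (0, 32), (0, 32)),
)
print("cube shape (wavelength, y, x):", cube.shape)
```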
The Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims at high-precision astrometry in the near-infrared (1.0-1.6 μm). The mission focuses on the Galactic center region, which is obscured by interstellar dust at optical wavelengths. JASMINE's observation strategy differs from that of other missions and must be verified via dedicated simulations. To verify the mission concept, we designed a simplified simulation, the JASMINE mini survey, covering three years with 100 orbits. As a simple case, the data obtained in a single satellite orbit are analyzed simultaneously (Plate Analysis). The observation model was made differentiable and implemented as a probabilistic model to make the best use of Stochastic Variational Inference. The model parameters converged to a solution even though the observation model contained more than 30,000 parameters. The estimated coordinates reproduced well the stellar motions expected from the ground truth. A typical positional error was estimated to be about 70 µas, consistent with the measurement error and the number of measurements. The present results validate parts of JASMINE's mission concept, paving the way for significant advancements in understanding the Galactic center.
The Cherenkov Telescope Array Observatory (CTAO) is a major next-generation instrument in ground-based gamma-ray astronomy that will become operational in the era of multimessenger astronomy. With its unmatched sensitivity and angular resolution, CTAO will play a pivotal role in the study of transient phenomena in the GeV-TeV range. The Transients Handler is the component within the Array Control and Data Acquisition (ACADA) system that enables CTAO to respond swiftly to alerts about transient events with automatically scheduled observations. The Transients Handler’s tasks include (i) filtering thousands of events per night from multiple external and internal alert streams, (ii) matching these events with scientific proposals, (iii) determining the optimal observation strategy, and (iv) scheduling observations within five seconds of receiving an alert. Recently, in October 2023, the first implementation of the Transients Handler was successfully tested during the integration of ACADA with the first CTAO Large-sized Telescope (LST-1). In this contribution, we will present the design of the Transients Handler in detail and preview updates that will be introduced in the next implementation.
In 2016 the SKA Organisation decided that, due to the complexity of the software and the uncertainty of the implementation details, the software development would be managed as an agile development programme. This led to a strategic decision to retain control of the primary risks of software construction and, by implication, some degree of central management. In the years before construction began, we selected the Scaled Agile Framework® as the basis for agile development and the NEC4® framework as the contract structure to hire the developers needed. The latter is complemented by the Vested® methodology for creating highly collaborative business relationships. All of these are unusual in our environment but reflect world best practice in other areas. We report on the progress of the contracts after about three years of construction. Overall, we believe it has been a success, with: the competitive rates offered allowing the engagement of more resources than were estimated at the time of the SKA System CDR, due to the low-risk structure; positive results from a regular "happiness" survey of the developers; high levels of trust; strong engagement with suppliers actively contributing to the governance; and the development being largely on time and on budget with no expected impact on the critical path. Finally, we report on the lessons we have learned and what we feel others should take away to consider for use in future large-scale scientific software programmes.
The Giant Magellan Telescope (GMT) Software and Controls (SWC) team is responsible for designing, implementing, and maintaining the GMT Observatory Control System (OCS). GMT software modules are developed either in-house or in collaboration with GMT partner institutions, following an Agile software development process. However, these software industry best practices require significant tailoring to integrate well with other engineering disciplines on a large, complex project such as GMT. In this paper we explore the various challenges in managing software development and how we are tackling them at GMT. Key areas include building the right team, handling programmatic challenges, streamlining development processes, and engaging with customers and stakeholders. We have learned that people are at the heart of what we do, and the health of the team directly affects our ability to deliver high-quality software on time and within budget. Managing limited resources is also a common theme, requiring different solutions in different domains. We have found the most effective to be a combination of process optimization, resource-loaded scheduling, agile development, drastic overhead reduction, and regular review of top priorities to help the team focus on what is important. Lastly, active engagement and efficient communication with customers and other stakeholders from the very beginning help to set clear expectations and set the team up for success. The team has made tremendous progress in these areas in the last few years and will continue to do so due to a commitment to continuous improvement.
The ViaLactea Knowledge Base (VLKB) was designed and initially developed within the EU FP7 VIALACTEA project, which included a work package dedicated to creating the infrastructure and tools to perform research in Milky Way astrophysics. The infrastructure's goal was to set up an archive and services to enable that research. About 50 dataset collections (roughly 35k datasets of various sizes, in FITS format), ten catalogues of compact sources (from thousands to a few million rows), a catalogue of morphologically complex sources (a few thousand sources), and a few other catalogues and simulated datasets were included in the archive, amounting to about 1 TB of data. On top of those data, and their metadata descriptions, a set of services was deployed: a search and access (cutout and merge) service for the datasets, a general Table Access Protocol (TAP) service for all metadata and catalogues, and some other dedicated services for specific user requirements. All the interfaces were developed in combination with the dedicated client, the ViaLactea Visual Analytics (VLVA) tool, but were designed keeping in mind the discovery and access scenario that is continuously developed in the Virtual Observatory (VO) ecosystem. Interoperability was brought into the VLKB afterwards, slowly (given the limited resources available after the end of the VIALACTEA project), mostly as the VLKB resources continued to be used in galactic astrophysics projects or as a comprehensive resource of data and services in technical demonstrator projects. Those projects provided continuity in funding the basic maintenance of the VLKB and some updates (occasional rather than continuous). With the first release of the VLKB in 2016, the subsequent maintenance gap spanning from then until 2020, and the restart of development since then, the current adoption of standards in the VLKB includes: an ObsCore table to keep the metadata for the observational datasets' catalogue; the TAP service to expose the general metadata content for all its data resources (catalogues, images, radial velocity cubes, morphologically complex objects, ...); and a custom implementation of the SODA (Server-side Operation for Data Access) standard set up to replace the dataset cutouts, with UWS (Universal Worker Service) used to manage asynchronous cutout and merge requests. Furthermore, authentication and authorization infrastructure (AAI) solutions using OAuth/OIDC have been tested on top of the cutout service, and a multi-cutout solution has been presented at VO level as feedback for the SODA and DataLink evolution. Other features (management of complex morphology and of simulated data, and registration of the VLKB resources into the VO Registry) are still in progress or missing. In particular, enabling more standard interfaces for the VLKB, and making VLVA aware of and able to consume them, will make the client more general and easier to maintain and will allow the server resources to be consumed by non-dedicated client applications. This contribution reports the challenges in maintaining and improving the VLKB, the current status of the technologies and standards in use for its resources, and the present and future perspectives for the VLKB itself.
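To illustrate the discovery path that standard interfaces enable, the sketch below shows how a generic VO-aware client could query an ObsCore table through a TAP service; the service URL is a placeholder and the query uses the standard ObsCore columns rather than any VLKB-specific extensions.

```python
# Hedged sketch: ADQL query against an ObsCore table via TAP, using pyvo.
import pyvo

tap = pyvo.dal.TAPService("https://example.org/vlkb/tap")   # placeholder URL
query = """
SELECT obs_id, access_url, em_min, em_max
FROM ivoa.obscore
WHERE dataproduct_type = 'cube'
  AND 1 = CONTAINS(POINT('ICRS', 290.0, 14.5), s_region)
"""
for row in tap.search(query):
    print(row["obs_id"], row["access_url"])
```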
The Observatory Science Operations (OSO) subsystem of the SKAO consists of a range of complex tools which will be used to propose, design, schedule and execute observations. Bridging the gap between the science and telescope domains is the key responsibility of OSO, requiring considerations of usability, performance, availability and accessibility, amongst others. This paper describes the state of the observatory software as we approach construction milestones, how the applications meet these requirements using a modern technology architecture, and challenges so far.
The Giant Magellan Telescope (GMT) is a next-generation ground-based segmented telescope. In the last few years, significant progress has been made by the GMT team and partners to design a natural guide-star wavefront control strategy that can reliably correct wavefront error, including the discrete piston aberration between segment gaps. After an extensive set of simulations and external reviews, the team proposed a design of a Pyramidal Wavefront Sensor (PWFS) combined with a Holographic Dispersed Fringe Sensor (HDFS) and started building a prototype for integrating a GMT simulator (High Contrast AO Testbed) with a PWFS and an HDFS. The prototype was developed in collaboration with the University of Arizona, INAF-Arcetri, and the GMT observatory. The software development of the adaptive optics controllers and the interfaces between all testbed components were done using the GMT software frameworks, as they will be implemented for the final observatory software. The GMT framework is model-based, and the software component interfaces are defined using a domain-specific language (DSL). In this paper, we show how the design of the testbed software fits within GMT's component-based architecture and what each partner was responsible for delivering. We discuss the challenge of a multidisciplinary team from multiple institutions in different time zones working together on the same software, describe how the software architecture and development process helped to ensure seamless integration and highlight other accomplishments and lessons learned.
Professional software engineering techniques have yet to consistently find their way into research software engineering. To address this problem, we develop the new LOFAR2.0 proposal tool in short iterations where software engineers strive for technical excellence to build maintainable software. Simultaneously, we embed stakeholders with an interest in academic concerns in this process to manage these concerns appropriately. Intensified collaboration between software engineers and relevant stakeholders creates more maintainable, durable, and reusable research software. This paper discusses the challenges we encounter and practical ways of working that help bridge the gap between academia and software engineering.
Poster Session: Software and Cyberinfrastructure for Astronomy
Despite Python being the preferred programming language of choice for most astronomers, building or extending data reduction pipelines in the language can be problematic. A common approach is to write Python functions or classes as wrappers, calling individual pipeline recipes underneath, but this does not scale well with increasing pipeline complexity. Data management is also fraught since housekeeping code must be written to carefully handle input and output products between recipes. We have addressed these issues by creating an extensible pipeline development framework that leverages the Python bindings for the ESO Common Pipeline Library (PyCPL) toolkit. Pipeline recipes can be defined in a regulated manner using existing ESO pipeline recipes or new Python recipes compliant with ESO standards. Users can easily build their own pipeline workflows for execution by the PyCPL companion package PyEsorex. The ability to define Python recipes offers a powerful means to extend existing ESO pipelines or develop entirely new pipelines. An overview of the framework is presented along with an illustrative MUSE pipeline workflow.
Thermal control of the MMT Observatory (MMTO) primary mirror (M1) has been optimized to reduce M1 thermal anomalies and mirror seeing while enhancing overall imaging quality. These refinements include 1) increased use of temperatures from outside, chamber, M1 glass, and U.S. National Weather Service sources for thermal control, 2) expanded monitoring and analysis of the M1 glass temperatures, 3) integration of multiple feedback-based PID (proportional-integral-derivative) controllers in M1 thermal control, and 4) extensive data analysis of thermal anomalies and trends within the M1 mirror and surrounding telescope enclosure. The newly deployed control strategy uses the minimum temperature from the different sources (i.e., outside, chamber, and forecast) to regulate the temperature of the conditioned air used to cool the M1 mirror. The controlling temperature source commonly changes during the night. Before this work, thermal control of the M1 ventilation system used a linear regression model to determine the setpoint of the main glycol chillers. This simple approach has been replaced by a combination of open-loop and closed-loop, feedback-based PID servo controllers that regulate the chillers and coolant valves along the M1 ventilation air path. Different feedback temperatures for the various PID servos are considered, allowing for more detailed and responsive conditioning of air within the M1 ventilation system. Comparison of M1 glass-air temperature contrasts with wavefront-sensor (WFS) seeing values defines optimal performance conditions. This work has led to recommendations for operational changes that aim to improve thermal conditions for the M1 mirror and telescope chamber, including during the transition from daytime to nighttime activities.
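The sketch below is a simplified illustration of the control idea only (not the MMTO production code): pick the minimum of the available temperature sources as the target, then let a basic PID loop drive the conditioned-air temperature toward it; the gains, time step, and toy plant response are placeholders.

```python
# Simplified PID sketch: min-of-sources setpoint driving conditioned-air temperature.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def target_setpoint(outside, chamber, forecast):
    # the controlling source can change during the night
    return min(outside, chamber, forecast)

pid = PID(kp=2.0, ki=0.1, kd=0.5)
air_temp = 12.0
for outside, chamber, forecast in [(10.5, 11.2, 9.8), (10.1, 11.0, 9.8), (9.9, 10.8, 9.7)]:
    sp = target_setpoint(outside, chamber, forecast)
    correction = pid.update(sp, air_temp, dt=60.0)
    air_temp += 0.01 * correction            # toy plant response
    print(f"setpoint={sp:.1f} C  conditioned air={air_temp:.2f} C")
```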
This research analyzes historical data from ASTRI-Horn, a Cherenkov telescope at the Astrophysical Observatory of Catania (Serra La Nave, Mt. Etna). Data from a multitude of sensors distributed across the telescope were studied. These sensors record various parameters, including currents, voltages, phases, positions, and temperatures, from different telescope components such as motors and encoders, as well as environmental conditions like temperature and humidity. Seven years of operational data have been analyzed to identify precursors indicative of component degradation. The aim was to discern unique data patterns, or "signatures", corresponding to periods of component damage or replacement. These identified signatures will be instrumental in the development of a Predictive Maintenance (PdM) model, which will aim to learn the standard operational patterns and issue alerts for any detected anomalies or deviations, thereby facilitating early anomaly detection and resolution. PdM is an advanced maintenance strategy that uses data to predict when parts might fail, aiming to reduce unexpected costs and improve the overall efficiency and reliability of the telescope.
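Purely as an illustration of how anomalous telemetry windows might be flagged automatically (this is not the analysis used in the study; the features, values and thresholds are invented), one common approach is an isolation forest over simple per-window sensor features:

# Illustration only: flag anomalous telemetry windows with an IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical windows of telemetry: columns = [motor current mean, temperature spread]
normal = rng.normal(loc=[5.0, 1.0], scale=[0.3, 0.2], size=(500, 2))
degraded = rng.normal(loc=[6.5, 2.5], scale=[0.4, 0.3], size=(10, 2))  # injected "signature"

model = IsolationForest(contamination=0.02, random_state=0).fit(normal)
flags = model.predict(np.vstack([normal[-5:], degraded]))   # -1 = anomaly, +1 = normal
print(flags)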
BlueMUSE is a blue, medium spectral resolution, panoramic integral-field spectrograph under development for the Very Large Telescope (VLT). We demonstrate and discuss BlueSi, an early end-to-end simulation software package for final BlueMUSE datacube products. Early access to such simulations is key to a number of aspects of the development stage of a new major instrument. We outline the software design choices, including lessons learned from the MUSE instrument, in operation at the VLT since 2014. The current simulation software package is used to evaluate some of the technical specifications of BlueMUSE as well as to assist in the assessment of certain tradeoffs regarding instrument capabilities, e.g., spatial and spectral resolution and sampling. By providing simulations of the end-user product under realistic environmental conditions such as sky contamination and seeing, BlueSi can be used to devise and prepare the science of the instrument by individual research teams.
Modern astronomical telescopes often rely on control systems for observations. Many factors may complicate the development of these control systems, such as devices being at different stages of development or having varying robustness. A simulation framework that mocks the components of each device is needed to speed up control-system development, facilitating behavior-level simulation to support upper-layer development. At present, many industry-standard simulation systems are predominantly based on actual hardware and necessitate the development of independent hardware logic, such as the LSST simulator. We have designed the Rsimu framework. This framework is built upon RACS2 and is highly proficient at behavior-level simulation of devices. Rsimu's behavior is entirely configurable, and the properties of different components can be dynamically defined by pluggable configuration files. A shared data plane is provided for components to synchronize their status, which helps developers keep the behavior models of the components separate. A series of designs, including pull-update and state machines, is provided to help users establish the simulation system.
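A minimal sketch of this pattern, independent of the actual Rsimu/RACS2 code, is shown below: a mock device whose allowed states and latency come from a pluggable configuration and which publishes its status to a shared data plane.

# Sketch only: config-driven mock device publishing its state to a shared store.
import time

SHARED_DATA_PLANE = {}   # stand-in for the shared status store

class MockDevice:
    def __init__(self, name, config):
        self.name = name
        self.states = config["states"]             # allowed states, from config
        self.settle_time = config["settle_time"]   # seconds to reach a new state
        self.state = config["initial_state"]
        self._publish()

    def _publish(self):
        SHARED_DATA_PLANE[self.name] = {"state": self.state, "ts": time.time()}

    def command(self, target_state):
        if target_state not in self.states:
            raise ValueError(f"{self.name}: unknown state {target_state!r}")
        time.sleep(self.settle_time)   # mimic device latency
        self.state = target_state
        self._publish()

# Configuration that would normally come from a pluggable config file.
dome = MockDevice("dome", {"states": ["CLOSED", "OPEN"], "initial_state": "CLOSED",
                           "settle_time": 0.1})
dome.command("OPEN")
print(SHARED_DATA_PLANE)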
We present the integral field unit part of the data reduction pipeline for METIS (Mid-infrared ELT Imager and Spectrograph), a first-generation infrared instrument that will be installed on the Extremely Large Telescope. The described software covers the entire process of correcting the instrumental effects and reconstructing the hyperspectral image. Apart from standard correction procedures common to virtually all digital imagers, the pipeline includes methods for distortion calibration, wavelength and flux calibration, correction of telluric absorption, reconstruction of the spectral cube with special emphasis on resampling the data only once, and finally algorithms for spatial and spectral dithering of multiple exposures taken at different field orientations and shifts, possibly taken many months apart. The pipeline has already passed the final design review and its implementation is underway.
The Square Kilometre Array is an international project to build two radio interferometric telescopes, with a mid-frequency portion in South Africa and a low-frequency portion in Australia. The low-frequency portion of the telescope will contain 131,072 individual dipole antennas, each of which will need to have its health monitored. Previously, this monitoring in the Aperture Array Verification System 2 (AAVS2) had been done manually by publishing antenna bandpasses to a web page where an engineer inspects the bandpasses for faulty data. This is a practical solution for AAVS2; however, as the number of antennas is scaled up, an automated solution will be required. Using a random forest model trained on AAVS2 data manually classified as faulty or not, bandpass data can be classified with an accuracy of 98%. Testing this with Aperture Array Verification System 3, all faulty bandpasses were detected among the 512 bandpasses produced. This random forest model is encapsulated in a TANGO device independent of the devices controlling the antennas, so that as the number of antennas is scaled up, the processing overheads are minimised. This also makes reconfiguration of the model (e.g. to use gradient boosting or to adjust the number/size of trees) a straightforward process. This model can then be used to highlight antennas which are suspected to be producing faulty data and mark that the antenna is not to be used for observation.
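The classification step can be pictured with the short sketch below; it is not the SKA production device and uses synthetic bandpasses, but it shows the shape of the approach: train a random forest on labelled spectra, then flag the bandpasses it predicts as faulty.

# Sketch only: random forest over synthetic bandpass spectra labelled good/faulty.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_channels = 384
good = rng.normal(1.0, 0.05, size=(400, n_channels))          # flat-ish bandpasses
faulty = rng.normal(1.0, 0.05, size=(40, n_channels))
faulty[:, 100:150] *= 0.1                                       # dead band "fault"
X = np.vstack([good, faulty])
y = np.array([0] * len(good) + [1] * len(faulty))               # 1 = faulty

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
flagged = np.where(clf.predict(X_test) == 1)[0]                 # bandpasses to exclude
print("flagged bandpasses:", flagged)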
The Cherenkov Telescope Array Observatory (CTAO) is the next-generation ground-based instrument for gamma-ray astronomy. CTAO will be located at two sites, one in the Northern (La Palma, Spain) and the other in the Southern Hemisphere (Paranal, Chile), with telescopes of three different sizes to cover different energy ranges. The commissioning of the first CTAO Large-Sized Telescope (LST-1) is being finalized at the CTAO-North site. The Array Control and Data Acquisition (ACADA) software is a central element of on-site CTAO operations. ACADA comprises subsystems for central control, the short-term scheduler, monitoring systems, and data handling at rates of GB/s. Consequently, it is a very complex software system that requires many developers with different expertise, such as control software, data acquisition, data analysis, scheduling, configuration, and human interfaces. To implement such complex software, ACADA has been broken down into subsystems, which CTAO delegates to expert developer teams around the world through in-kind contributions. All the software is under version control exploiting a dedicated installation of GitLab. We have created at least one repository for each subsystem and a final one for the integration. We have defined the software development and integration procedures so that all phases of the Software Development Life Cycle (SDLC) are supported. Particular attention has been paid to the critical time when a software version is in operation on site and bug fixes and new features need to be kept under version control in parallel. The goal is to manage bug fixes without adding new features outside the scope of the release, while at the same time guaranteeing the distribution of bug fixes to future releases. This contribution presents our strategy to manage multiple software versions according to the CTAO development plan.
MORFEO (Multi-conjugate adaptive Optics Relay For ELT Observations) is a new instrument being built for the ESO’s Extremely Large Telescope (ELT). The project is in the final design phase, and it is expected to be commissioned in 2030. The Instrument Control Software of MORFEO will be based on the new ESO ELT software framework, which is still under development, and a key activity during the control software implementation is the Continuous Integration (CI) of the code. Continuous integration is the practice of automating the integration of code changes from multiple contributors into a single software project. We present the current CI workflow that ensures that both the software control team and the software quality assurance team can work synchronously, effectively and independently. We also present the options we considered and the reasons that led us to choose this workflow.
With dozens of telescopes in both hemispheres, the Cherenkov Telescope Array Observatory (CTAO) will be the largest ground-based gamma-ray observatory and will offer extensive energy coverage from 20 GeV to 300 TeV. Its large effective area, wide field-of-view, rapid slewing capability, and exceptional sensitivity will make CTAO an essential instrument for the future of ground-based gamma-ray astronomy. Furthermore, its two arrays will send alerts on transient and variable phenomena (e.g., gamma-ray bursts, active galactic nuclei, gamma-ray binaries, and serendipitous sources) to maximise the scientific return. Effective and rapid communication with the community requires a reliable automated system to detect and issue candidate science alerts. This automation will be achieved by the Science Alert Generation (SAG) pipeline, a core system of the CTA Observatory. The SAG is part of the Array Control and Data Acquisition (ACADA) system. The SAG working group develops pipelines for data reconstruction, data quality monitoring, science monitoring, and real-time alert issuance to the Transients Handler system of ACADA. The SAG performs the first real-time scientific analysis during data acquisition. The system analyzes data on multiple time scales (from seconds to hours) and must issue candidate science alerts within 20 seconds of latency and with at least half the CTAO nominal sensitivity. Dedicated, highly optimized software and hardware architectures must be designed and tested to satisfy these stringent requirements and manage trigger rates of tens of kHz from both arrays. In this work, we present the general architecture and current development status of the ACADA/SAG system.
EMIR (Espectrografo Multiobjeto Infra-Rojo) is a wide-field, near-infrared, multi-object spectrograph with imaging capabilities, currently located at one of the Nasmyth foci of the 10.4 m GTC (Gran Telescopio Canarias). It allows observers to obtain many intermediate-resolution spectra simultaneously in the near-IR bands Z, J, H and K. A configurable cryogenic multislit mask unit also provides target acquisition. This paper describes the upgrade of EMIR (to EMIR+), which incorporates a new Teledyne H2RG infrared detector, using the SIDECAR integrated controller and a Markury Scientific MACIE interface card over IP communication. A detailed description of the Data Acquisition System (DAS), integrated into the GTC Control System (GCS) software, is given. It configures the URG or FS acquisition modes, starts the acquisition process, captures the data coming from the H2RG unit, stores the FITS data and propagates the images to produce astronomical files. We also developed a Python-based MACIE Controller Simulator to test and debug the DAS; it behaves like a real MACIE interface, responding to all requests and generating test images to feed the DAS or other control programs.
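To illustrate the simulator concept only (the command name, port, and wire format below are placeholders, not the real MACIE protocol), a minimal version is a TCP server that answers a request with a synthetic frame:

# Sketch of a controller simulator: answer a text command with a synthetic frame.
import socket
import numpy as np

HOST, PORT = "127.0.0.1", 5005

def make_test_frame(ny=2048, nx=2048):
    """Synthetic H2RG-like frame: ramp plus Gaussian noise."""
    ramp = np.linspace(1000, 2000, nx, dtype=np.float32)
    return (np.tile(ramp, (ny, 1)) + np.random.normal(0, 5, (ny, nx))).astype(np.uint16)

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode().strip()
            if cmd == "ACQUIRE":                 # hypothetical command name
                frame = make_test_frame(256, 256)
                conn.sendall(frame.tobytes())
            else:
                conn.sendall(b"ERR unknown command\n")

if __name__ == "__main__":
    serve_once()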
Cloud computing offers unparalleled flexibility, a constantly increasing set of “Infrastructure as a Service’’ capabilities, resource elasticity and security isolation. One of the most significant barriers in astronomy to wholesale adoption of cloud infrastructures is the cost for hot storage of large datasets - particularly for Rubin, a Big Data project sized at 0.5 Exabytes (500 Petabytes) over the duration of its ten-year mission. We are planning to reconcile this with a “hybrid” model where user-facing services are deployed on Google Cloud with the majority of data holdings residing in our on-premises Data Facility at SLAC. We discuss the opportunities, status, risks, and technical challenges of this approach.
The Very Large Telescope Interferometer (VLTI) must control its Optical Path Differences (OPD) to extremely high precision in order to achieve its characteristic and desired high performance. This proves a challenge when using the Very Large Telescope's (VLT) 8-meter Unit Telescopes (UTs), given that they are not fully dedicated to interferometry and can be equipped with up to three different instruments each. Among the several important control systems that allow the VLTI to achieve the necessary precision for this task is Manhattan II (MNII), which measures vibrations along the optical path (mirrors M1 to M7) and sends Optical Path Length (OPL) corrections to the Delay Lines (DL). In the context of the GRAVITY+ upgrade, MNII is being extended to cover a larger portion of the light path (previously M1 to M3) and expanded with Phase-Locked Loops (PLLs) to improve OPD control by targeting specific frequencies. In parallel, several options are being explored to further improve the capabilities of the system. Active compensation is improved by the upgrade of MNII's PLL. In addition, better troubleshooting tools and automatic Anomaly Detection (AD) systems are needed to constantly monitor and react to the changing vibration signature of the UTs. Furthermore, similar AD systems will be fundamental in the future for the operation of the upcoming Extremely Large Telescope (ELT). This work describes the ongoing efforts to develop an automatic AD system using Machine Learning on MNII's vibration data. We focus on the different methods and models used in the proof of concept, which include autoencoders, clustering and classical statistical methods, as well as the infrastructure required to have a working end-to-end prototype, the data pipeline, preprocessing and the envisioned future production system.
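As a toy proof-of-concept in the same spirit (not the MNII system; the data are synthetic and the network size arbitrary), a small dense autoencoder can be trained on nominal vibration spectra and used to flag windows with high reconstruction error:

# Sketch: autoencoder-based anomaly flagging on synthetic vibration spectra.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(2)
n_bins = 128
normal = rng.normal(1.0, 0.05, size=(2000, n_bins)).astype("float32")

ae = keras.Sequential([
    keras.Input(shape=(n_bins,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(8, activation="relu"),      # bottleneck
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(n_bins, activation="linear"),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(normal, normal, epochs=5, batch_size=64, verbose=0)

# A spectrum with an injected narrow vibration peak reconstructs poorly.
anomalous = normal[0].copy()
anomalous[40:43] += 1.5
batch = np.stack([normal[1], anomalous])
errors = np.mean((ae.predict(batch, verbose=0) - batch) ** 2, axis=1)
print("reconstruction errors (normal, anomalous):", errors)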
The Large Millimeter Telescope (LMT) is a 50 m-diameter single-dish millimeter-wave radio telescope located in Mexico, built by the country of Mexico and the University of Massachusetts Amherst. The National Science Foundation Mid-Scale Innovations Program (MSIP) now supports access to the LMT for any astronomer located at a US institution, with a 15% share of the total scientific observing time. The LMT cyber-infrastructure is being modernized to accommodate the new science operation workflow. We developed and deployed the LMT data archive, fully integrating the data pipeline, data model, and data management workflow of most LMT instruments. For the LMT data archive, we use our own installation of the Dataverse software as the backend, with a custom-built frontend to provide a user-friendly search interface for discovering data products. The data model and data management workflow are developed along with the commissioning (hardware and/or software) of the instrument-specific pipelines. The software package dvpipe was developed to package the data products from the instrument-specific pipelines as science-ready data products and put them into the LMT data archive.
FORS2 (FOcal Reducer/low dispersion Spectrograph) is a multimode (imaging, polarimetry, long slit and multi-object spectroscopy) optical instrument mounted on the Cassegrain focus of the UT1 of ESO’s Very Large Telescope (VLT). Its versatility and large wavelength range (330-1100 nm) make it one of the most requested instruments at the VLT. To keep it operational for at least the next 15 years, the FORS upgrade project (FORS-Up), a collaboration between ESO and INAF-OATs, was started: the twin spectrograph FORS1, decommissioned in 2009, has been sent to Europe and is currently undergoing a complete refurbishment in the integration hall of the Astronomical Observatory of Trieste. Once the upgrade is finished, FORS1 will replace FORS2 at the VLT. In this paper, we report the status of the work currently in progress on the control software: the original one is based on the VLT standards, and it is now being reimplemented within the new ELT (Extremely Large Telescope) software framework. New GUIs have been designed for FORS, which give the user in-depth control over the instrument; new templates for observational, engineering and maintenance procedures have been developed; hardware components have been configured, either as standard devices or as special devices (requiring customized solutions). The upgrade will ensure the continued operation of FORS and represent an invaluable testbed for the new ELT software framework.
We present the advancements in the development of the scheduler for the Son Of X-shooter (SOXS, 1,2) instrument at the ESO-NTT 3.58-m telescope in La Silla, Chile. SOXS is designed as a single-object spectroscopic facility and features a high-efficiency spectrograph with two arms covering the spectral range of 350-2000 nm with a mean resolving power of approximately R=4500. Its primary purpose is to conduct UV-visible and near-infrared follow-up observations of astrophysical transients, drawing from a broad pool of targets accessible through the streaming services of wide-field telescopes, both current and future, as well as high-energy satellites. The instrument is set to cater to various scientific objectives within the astrophysical community, each entailing specific requirements for observation planning, a challenge that the observing scheduler must address. A notable feature of SOXS is that it will operate at the European Southern Observatory (ESO) in La Silla without the presence of astronomers on the mountain. This poses a unique challenge for the scheduling process, demanding a fully automated algorithm that autonomously interacts with the appropriate databases and the La Silla Weather API, and that is capable of presenting the operator not only with an ordered list of optimal targets (in terms of observing constraints) but also with optimal backups in the event of changing weather conditions. This requirement imposes the necessity for a scheduler with rapid-response capabilities that does not compromise the optimization process, ensuring high-quality observations and the best use of the time at the telescope. We therefore developed a new highly available and scalable architecture, implemented as RESTful API applications using Docker containers, an API gateway, and Python-based Flask services. We provide an overview of the current state of the scheduler, which is now ready for the approaching on-site testing during the commissioning phase, along with insights into its web interface and preliminary performance tests.
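A minimal sketch of the kind of RESTful service this implies is given below; it is not the SOXS code, and the target pool, ranking rule, and endpoint are invented, but it shows an operator or another service querying for the current primary target and backups.

# Sketch: a tiny Flask endpoint returning a ranked target plus backups.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory pool; in reality targets would come from transient
# streams and be re-ranked against observing constraints and the weather feed.
TARGET_POOL = [
    {"name": "SN2024abc", "ra": 150.1, "dec": -20.3, "priority": 1, "max_seeing": 1.2},
    {"name": "AT2024xyz", "ra": 201.7, "dec": -47.9, "priority": 2, "max_seeing": 2.5},
    {"name": "GRB240601A", "ra": 88.4, "dec": -10.2, "priority": 1, "max_seeing": 2.0},
]

def rank(targets, seeing_arcsec):
    """Toy ranking: keep targets observable in the current seeing, best priority first."""
    observable = [t for t in targets if seeing_arcsec <= t["max_seeing"]]
    return sorted(observable, key=lambda t: t["priority"])

@app.route("/schedule")
def schedule():
    seeing = float(request.args.get("seeing", 1.0))
    ranked = rank(TARGET_POOL, seeing)
    if not ranked:
        return jsonify({"primary": None, "backups": []})
    return jsonify({"primary": ranked[0], "backups": ranked[1:]})

if __name__ == "__main__":
    app.run(port=5000)   # e.g. curl "http://localhost:5000/schedule?seeing=2.0"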
The Instrument Control Software of SOXS (Son Of X-Shooter), the forthcoming spectrograph for the ESO New Technology Telescope at the La Silla Observatory, has reached a mature state of development and is approaching the crucial Preliminary Acceptance in Europe phase. Now that all the subsystems have been integrated in the laboratories of the Padova Astronomical Observatory, the team operates the whole instrument for testing purposes at both the engineering and scientific levels. These activities make use of a set of software peculiarities that are discussed in this contribution. In particular, we focus on the synoptic panel, the co-rotator system special device, and the Active Flexure Compensation system, which controls two separate piezo tip-tilt devices.
The Cherenkov Telescope Array Observatory (CTAO) is the next generation ground-based observatory for gamma-ray astronomy at very-high energies. The CTAO will combine telescopes of different designs plus a large number of scientific instruments to achieve unprecedented performance and energy coverage. The Array Control and Data Acquisition (ACADA) system provides the means to execute observations and to handle the acquisition of scientific data in CTAO. The Resource Manager (RM) and Central Control (CC) subsystems are core components of the ACADA system. The RM subsystem provides infrastructure services concerning the administration of various resources to all ACADA subsystems. The CC subsystem implements the execution of observation requests received from the scheduler subsystem. CC interprets the received requests and sends appropriate commands to telescopes and other controllable array elements, supervises ongoing operations, and coordinates the dynamic allocation of telescopes to subarrays and the management of their concurrent operations; subarrays are logical groupings of individual CTAO telescopes performing coordinated scientific operations. This contribution provides a summary of the main design features, current status and future implementation plans for the ACADA RM and CC subsystems.
Software maintainability is a crucial aspect of software engineering, especially within research institutes operating research infrastructure, where the longevity and adaptability of software directly impact the success of scientific endeavors. ASTRON, the Institute for Radio Astronomy in the Netherlands, has faced significant challenges in the past and present regarding the maintainability of its software. Previously, a rather unstructured approach was taken to improve the state of software at ASTRON. Recently, a more structured approach has been taken to improve the overall state of the software landscape by employing different strategies. ASTRON started to use GitLab as its version control system, including CI. ASTRON has started to structurally employ modern IDEs with all the tools that support better software engineering. Contemporary development practices have been adopted, and management practices have been adapted to modern software development. Lastly, ASTRON is investing in a shared model of language when building software systems together.
Radio-frequency interference (RFI) poses a challenging issue for radio astronomy. This challenge is particularly acute when recording extremely faint signals such as those associated with pulsar observations. Indeed, being generally of higher energy, RFI significantly degrades the quality of the measurements, which makes astronomical data more difficult to interpret and analyze. The current solutions to tackle this problem usually consist in performing RFI flagging, i.e., localizing the time-frequency bins in the dynamic spectrum affected by interference. The RFI-corrupted data, i.e., the measurements associated with these identified bins, are then generally discarded before any subsequent data processing, which unavoidably leads to a loss of information. Alternatively, this paper proposes to formulate RFI mitigation as a joint detection and restoration task, allowing parts of the dynamic spectrum affected by RFI to be not only identified but also recovered. The proposed method relies on a particular instance of a recent architecture of deep convolutional networks. This network is trained on a large data set generated within a simulation framework specifically designed according to physically inspired and statistical models of the pulsar signals and of the RFI. Through extensive numerical experiments, the proposed approach is shown to reach competitive performance in terms of RFI detection and dynamic spectrum restoration.
MAVIS is the new MCAO Assisted Visible Imager and Spectrograph for ESO's Very Large Telescope. It is intended to be installed at the Nasmyth focus of the UT4 "Yepun" telescope and is composed of two main parts: a multi-conjugate adaptive optics module and its post-focal instrumentation, an imager and an IFU spectrograph, both operating in the visible spectrum. The project is now in the final design phase, and it is expected to be commissioned in 2030. In this paper we focus on the interface between the Instrument Control System Software (ICSS) and the Soft Real-Time Computer (SRTC). The ICSS is in charge of controlling all the motorized functions, managing the scientific exposures, monitoring the status of the system and coordinating the sequence of operations; the SRTC, on the other hand, receives data from the wavefront sensors (8 LGS and 3 NGS) to compute the corrections to be applied by the two post-focal deformable mirrors and the 8 LGS jitter mirrors. The ICSS will be based on the new ESO ELT software framework, which is still under development; the SRTC will be based on the new ESO RTC Toolkit, also under development. We present the first design of the common interface between the ICSS and the SRTC, focusing mainly on the communication processes (commands and data) and on the most critical points we had to address.
FORS (FOcal Reducer and Low Dispersion Spectrograph), a multi-mode optical instrument mounted on the Very Large Telescope's (VLT) UT1 Cassegrain focus, gets a new look. The upgrade, known as FORS-Up (FORS-Upgrade), is being carried out by ESO and INAF-OATs and includes, besides the replacement of some optical components, the replacement of all the motors, the development of a new calibration unit, the adoption of a new detector, and the design of control electronics based on the new ELT standards. The refurbishment work has started on the twin spectrograph FORS1, decommissioned in 2009 and sent to the integration premises of the Astronomical Observatory of Trieste. After recalling the final design of the control electronics, this paper presents the PLC software implementation and the current state of the electronics integration with the new mechanics carried out at INAF-OATs. It also focuses on the ELT-based software and hardware solutions that have been adopted to meet the performance and safety requirements for the motorized functions that control the multi-object spectroscopy blades and the scientific exposure shutter and that require customized applications.
The Gran Telescopio Canarias (GTC), a 10.4-meter telescope with a segmented primary mirror comprising 36 hexagonal segments, relies on a crucial stabilization system to ensure the primary mirror behaves as a single entity. After 15 years of operation, some subsystems are beginning to incur high maintenance costs and are encountering obsolescence issues that, in some cases, limit their performance capabilities. The primary mirror stabilization system is among these subsystems, facing significant migration challenges due to its real-time characteristics and the high number of hardware elements it manages. We introduce the validation process of a new platform based on Linux Preempt-RT and industrial CompactPCI computers, featuring real-time characteristics both at the operating-system level and in the control-card drivers. Due to the difficulty of accessing the real hardware, owing to both operational restrictions and security concerns, and to expedite the development process, a comprehensive test bench mimicking the production hardware was constructed. Once the development reached sufficient maturity, tests were conducted on the production system, validating the performance of the new solution, extracting performance metrics, and verifying that the real-time requirements were satisfied. As a result, it has been confirmed that the new solution not only functions correctly and meets current requirements, but would also allow for an increase in the nominal performance of the system and enable a higher-performance control loop.
During ongoing maintenance and development of the software used to control the European Southern Observatory (ESO) Very Large Telescope (VLT), the detection of memory leaks in legacy and newly developed software is of the utmost importance. This paper describes investigations into the development and use of additional test-support software using Machine Learning (ML) to determine the presence of memory leaks. The software is implemented to integrate within existing pytest code and is designed to be executed alongside software module nightly tests as part of Continuous Integration (CI) testing. The work's prime objective is to highlight memory-suspicious processes so that memory leaks can be found and fixed before software deployment at the observatory.
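A rough sketch of how such instrumentation can hook into pytest is shown below; it is purely illustrative (not the ESO tool), samples the resident set size around each test with psutil, and replaces the ML step with a simple linear-trend check that a trained model could stand in for.

# Sketch: pytest fixture sampling RSS per test, plus a trivial trend-based flag.
import os
import numpy as np
import psutil
import pytest

_samples = []

@pytest.fixture
def rss_probe():
    proc = psutil.Process(os.getpid())
    before = proc.memory_info().rss
    yield
    _samples.append(proc.memory_info().rss - before)

def looks_leaky(deltas, slope_threshold_bytes=1024):
    """Flag a run as memory-suspicious if RSS growth keeps trending upward."""
    if len(deltas) < 3:
        return False
    slope = np.polyfit(np.arange(len(deltas)), np.cumsum(deltas), 1)[0]
    return slope > slope_threshold_bytes

def test_repeated_allocation(rss_probe):
    data = [bytearray(10_000) for _ in range(100)]   # deliberately allocates
    assert len(data) == 100

# After the nightly run, looks_leaky(_samples) would be evaluated per module.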
This paper presents a methodology to automate and accelerate the PLATO Payload (P/L) Boot Software (BSW) testing procedures through a set of pre-programmed TCL scripts with different verification targets, satisfying the BSW requirements. These scripts are conceived to run autonomous regression testing while verifying the BSW core functionalities; in case additional BSW verification is needed, a set of scripts is available for obtaining an automatic quick health statement. The present method was proven by carrying out the pre-programmed functional and performance tests on the different PLATO BSW versions installed on the ICU development models. The tests performed on these models have proven their effectiveness during the BSW testing process, since the testing time has been greatly reduced and the test results can be archived to maintain a useful record that, together with the dedicated TCL scripts, may assist in future verification of the flight BSW version.
The Low Frequency Array (LOFAR) is Europe's largest radio telescope, originally designed, built and operated by ASTRON. It consists of an interferometric array of low-band and high-band antennas distributed among 52 stations. Since 2018, a considerable upgrade of the main infrastructure has taken place on both the hardware and the software side, the so-called LOFAR 2.0. The monitor and control software system of each LOFAR 2.0 station is based on the open-source TANGO-Controls framework, which manages the device architecture and the various functionalities of the station, including its states and transitions. Since each hardware device of the station is implemented as a software module, the startup of the station and its state transitions up to a fully operational state imply non-trivial interaction and communication among the different device classes. The proposed design solution places each of these devices in a specific hierarchical structure, which defines the parent-child relations and the allowed operations for its nodes. Moreover, the device hierarchy can differ according to the two main sequences involved in the station state transitions: the power sequence and the control sequence. The whole set of sequential operations is entirely managed by the TANGO framework, in particular by a root device called the Station Manager, which controls the child devices and the hierarchical sequences. In order to adhere to the TANGO architecture, the operations are mainly implemented using device attributes and properties, so that a potentially complex process is handled in a very straightforward, lightweight and maintainable way. The aforementioned software architecture has already been deployed and successfully tested on the LOFAR2 Test Station (L2TS) located in the Netherlands. It is therefore proving to be a primary feature for the whole LOFAR2 infrastructure, in view of a forthcoming fully operational phase within the next few years.
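A bare-bones PyTango sketch of this pattern is given below; it is not the LOFAR 2.0 code, and the device names, property, and hierarchy are hypothetical, but it shows a root manager device commanding its configured children in sequence.

# Sketch: a root "station manager" device that switches on its children in order.
from tango import DeviceProxy
from tango.server import Device, command, device_property, run

class ToyStationManager(Device):
    # Child device names in power-sequence order, supplied as a device property.
    power_children = device_property(dtype=[str], default_value=[
        "STAT/PSU/1", "STAT/APSCT/1", "STAT/SDP/1",   # hypothetical hierarchy
    ])

    @command
    def StationOn(self):
        """Bring the station up by switching children on in hierarchical order."""
        for name in self.power_children:
            child = DeviceProxy(name)
            self.info_stream(f"switching on {name}")
            child.command_inout("On")        # standard TANGO On command

if __name__ == "__main__":
    run((ToyStationManager,))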
There is currently a strong push towards infrared astronomy, like the ground-breaking JWST and the upcoming ROMAN and Gaia NIR missions. The Japanese JASMINE telescope will be the first Near Infrared (NIR) astro-photometric mission to focus on the Galactic central region and, in many senses, it will be pioneering the field of NIR high-precision astrometry for Milky Way (MW) dynamics. In order to test our data processing pipelines, we require a robust and reliable way to generate mock images. In this contribution, we present the JASMINE input catalogue: the most complete census of point-like sources in the NIR towards the Galactic centre. We used this catalogue as a blueprint from which to generate mock sources that resemble real stars as much as possible, while offering also the possibility of generating entirely new sources to compensate for the observational incompleteness. The method, while conceptually simple, requires treating each star of the input catalogue as new evidence that updates our prior knowledge, which in this case is represented by the underlying model of the MW used. The result is a custom probability distribution function for each star from which to draw mock sources. This represents the biggest and most realistic mock catalogue of the MW centre to date. In the future, we will improve it by adding more proper motions and parallaxes to the input catalogue, and by modelling the dependence of the distance on the kinematics.
The expansion of scientific, economic, and military activity in space has driven concomitant growth in the diversity, dynamism, and size of the anthropogenic space object population. The safety of spacecraft operations depends on the detection and monitoring of transient events among this population, such as satellite maneuvers, proximity operations, and component articulation, as well as the routine maintenance of high-accuracy ephemerides. Optical sensors are effective tools for this task when applied in large numbers. However, responsiveness to transient events necessitates geographic diversity, and cost-effectiveness motivates heterogeneity of aperture and instrumentation. Orchestrating a globally distributed, diverse collection of sensors to satisfy space-object tracking objectives remains an open challenge. In this work, we adopt the open-source MACHINA agentic software framework to embody an autonomous space domain awareness agent that addresses this challenge.
We introduce novel software called TCSpy, designed to efficiently control a multi-telescope array through network-based protocols. The primary objectives of TCSpy include centralized control of the array, support for diverse observation modes, and swift response for follow-up observations of astronomical transients. To achieve these objectives, TCSpy utilizes the ASCOM Alpaca protocol in conjunction with Alpyca, establishing robust communication among multiple telescope units. As a practical application of TCSpy, we have implemented it within the 7-Dimensional Telescope (7DT). 7DT is a telescope array consisting of 20 0.5-m telescopes equipped with 40 different medium-band filters. The main scientific goals of 7DT include detecting the optical counterparts of gravitational-wave sources, identifying kilonovae, and spectrally mapping the southern sky. Through the integration of TCSpy, 7DT can achieve these scientific objectives with its unique observation modes and rapid follow-up capabilities.
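The centralized-control idea can be sketched as below; this is not TCSpy itself, and the AlpacaUnit class is a placeholder for whatever wraps the per-unit ASCOM Alpaca calls (e.g. via Alpyca), but it shows one target being dispatched to all units concurrently, each with its own medium-band filter.

# Sketch: dispatch the same target to many telescope units in parallel.
from concurrent.futures import ThreadPoolExecutor

class AlpacaUnit:
    """Placeholder for one telescope unit reachable over ASCOM Alpaca."""
    def __init__(self, host, device_number):
        self.host, self.device_number = host, device_number

    def slew_and_expose(self, ra_deg, dec_deg, exptime_s, filter_name):
        # Real code would issue Alpaca slew/filter/camera requests here.
        return f"{self.host}#{self.device_number}: {filter_name} {exptime_s}s at ({ra_deg}, {dec_deg})"

UNITS = [AlpacaUnit(f"10.0.0.{i}", 0) for i in range(1, 21)]          # 20 units
FILTERS = [f"m{400 + 25 * i}" for i in range(20)]                     # one medium band per unit

def observe_target(ra_deg, dec_deg, exptime_s=60.0):
    """Spectral-mapping-style mode: every unit observes the same field in its own filter."""
    with ThreadPoolExecutor(max_workers=len(UNITS)) as pool:
        futures = [pool.submit(u.slew_and_expose, ra_deg, dec_deg, exptime_s, f)
                   for u, f in zip(UNITS, FILTERS)]
        return [f.result() for f in futures]

print(observe_target(ra_deg=201.36, dec_deg=-43.02)[:2])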
The Intelligent Observatory (IO) is a project of the South African Astronomical Observatory which aims to improve the efficiency of observing, optimize the use of the observatory’s resources and allow rapid follow-up of targets of interest. We have developed software to enable our telescopes and instruments to be programmatically controlled and have used this to develop remotely operable web interfaces for each of these. We are now focused on enabling robotic operation. To this end we have adopted the Las Cumbres Observatory’s Observatory Control System (OCS). This allows users to submit observing requests, and the OCS scheduler produces a schedule of observations for each telescope. We have developed software to retrieve the latest schedule, configure the telescope and instruments accordingly, and take the required exposures. In full robotic mode, it is important that the telescopes and instruments be operated only when safe to do so. We have developed watchdog software, using the same interfaces, to monitor the weather and shut down telescopes and instruments if the weather turns bad.
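A sketch of the watchdog logic just described might look as follows; the get_weather and shutdown hooks are hypothetical stand-ins for the existing telescope and instrument control interfaces, and the limits are invented.

# Sketch: poll a weather source and shut down after a sustained unsafe period.
import time

HUMIDITY_LIMIT = 85.0      # percent
WIND_LIMIT = 15.0          # m/s
GRACE_PERIOD = 120.0       # seconds of bad weather tolerated before closing

def is_safe(weather):
    return weather["humidity"] < HUMIDITY_LIMIT and weather["wind_speed"] < WIND_LIMIT

def watchdog(get_weather, shutdown, poll_interval=30.0):
    bad_since = None
    while True:
        weather = get_weather()
        if is_safe(weather):
            bad_since = None
        else:
            bad_since = bad_since or time.monotonic()
            if time.monotonic() - bad_since > GRACE_PERIOD:
                shutdown(reason=f"unsafe weather: {weather}")
                return
        time.sleep(poll_interval)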
Erich A. Wiezorrek, John Lightfoot, Andrea Modigliani, Mark J. Neeser, Alex Agudo Berbel, Yixian Cao, Lars Lundin, Ric Davies, Robert J. De Rosa, et al.
The ERIS data reduction pipeline, as part of the ESO-VLT Data Flow System, provides recipes for the reduction of ERIS data, the support of Operations, and a monitoring of instrument health and data quality. The pipeline generates science-ready data products that are ingested into the ESO archive. The Enhanced Resolution Imager and Spectrograph (ERIS) is an instrument that both extends and enhances the fundamental diffraction-limited imaging and spectroscopic capabilities of the VLT. The observational modes ERIS provides are integral field spectroscopy at 1-2.5 um, done with ERIS-SPIFFIER, imaging at 1-5 um with several options for high-contrast imaging, and long-slit spectroscopy at 3-4 um, done with ERIS-NIX and ERIS-LSS, respectively. The pipeline recipes can be executed either with EsoRex at the command-line level, through the ESOReflex graphical interface, or using the new ESO Data Processing System. This poster will present the main functionalities of the ERIS-NIX and ERIS-SPIFFIER pipelines.
Imaging in the near-infrared is affected by a background signal coming from both the terrestrial atmosphere and the instrument itself, which plays an important role in limiting instrument performance even when standard hardware solutions, such as cryogenic cooling, are applied. Several extremely faint sources, which still produce relevant count levels, can therefore remain hidden under the noise, or their weak characteristic peaks can be mistaken for residual noise peaks. In recent years, the development of increasingly sophisticated and capable deep learning techniques has found a number of applications in astronomical data handling and processing. We present here a study aimed at identifying below-the-noise (S/N⪅1) sources in near-infrared astronomical images. We used a dataset of images in the J (1.25-micron), H (1.65-micron) and K (2.2-micron) bands, acquired with the SWIRCAM near-infrared camera mounted at the AZT24 telescope at the Campo Imperatore observatory in the decade 1999-2008. Each image from a first subset has been compared with the corresponding, photometrically deeper image from the 2MASS catalogue, producing a set of positions of the sources in 2MASS. After building a denoising CNN on a paired catalog of clean 2MASS images and counterparts with noise artificially added by a GAN, the SWIRCAM images were then fed as input to the CNN, with the aim of identifying a pattern in the background around the missed astronomical sources. The CNN has proven effective in removing IR image noise more efficiently than classical analytical denoising algorithms, leading to the detection of extremely low-S/N sources, which have also been compared to the validated catalog. The algorithm can potentially be applied to images coming from any telescope, identifying all the sources below the noise and above the intrinsic detectability threshold of the detector. As such, it represents a powerful way to push the limiting magnitude of a telescope beyond the classical paradigm based on the signal-to-noise ratio alone.
The Square Kilometre Array (SKA) will host two radio telescopes, in South Africa and Australia, dedicated to observing at mid and low frequencies, respectively. The project has adopted the TANGO controls framework for its telescope control system. Both the mid and low telescopes comprise subsystems whose components are implemented using TANGO monitoring and control together with other bespoke as well as off-the-shelf software for the computing and network platforms. All devices are implemented in line with our SKA Control System Guidelines and with the aid of a shared repository for streamlined device server implementation. This ensures adherence to standards such as logging and asynchronous command execution, to mention but a few. The components in the subsystems each implement a specialized behavior and state derived from the shared repository. The following discussion outlines how the TANGO controls framework is employed to implement the essential control elements for the SKA telescopes. We further detail our federated approach to implementing the device servers which manage the different components.
The ASTRI project (Astrofisica con Specchi a Tecnologia Replicante Italiana) led by INAF (Italian National Institute of Astrophysics) was created to study astronomical sources that emit gamma rays at very high energies. The ASTRI Mini-Array project involves the construction and installation of nine Cherenkov telescopes at the Teide Observatory (Spain), three of which have been constructed thus far. This document concerns the development of the low-level software for the control of the remaining six telescopes forming part of the project. The logic to be implemented will be derived from the existing software on the ASTRI-Horn prototype telescope installed on Mount Etna as well as from what was created by INAF on the already existing ASTRI Mini-Array telescopes. The control system of each of the six telescopes is called TCU (Telescope Control Unit) and will be developed on a Beckhoff PLC, using the TwinCAT development environment. The TCU will generate the pointing trajectories, control the telescope’s movement, and enable the execution of every procedure required for the maintenance, testing, and calibration of the telescope control system. The TCU will also supervise all of the telescope’s sub-devices, such as the camera and power supplies, and will manage I/O signals of the interlocks and the logic of the safety procedures in collaboration with a Beckhoff safety PLC. It will also monitor all parameters related to the movement and status of hardware devices. The low-level control system will interface with the high-level TCS (Telescope Control System) software through a standard OPC-UA (Open Platform Communications - Unified Architecture) server, allowing the supervision and command of all equipment connected to the PLC.
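To picture the TCS-to-TCU interface, the sketch below uses the asyncua OPC-UA client library to read a monitored value and write a command flag; the endpoint URL and node identifiers are hypothetical placeholders, not the actual ASTRI address space.

# Sketch: an OPC-UA client reading a telemetry node and writing a command flag.
import asyncio
from asyncua import Client

TCU_ENDPOINT = "opc.tcp://tcu-plc.example:4840"          # placeholder endpoint

async def read_azimuth_and_start_tracking():
    async with Client(url=TCU_ENDPOINT) as client:
        az_node = client.get_node("ns=4;s=MAIN.Axes.Azimuth.ActualPosition")   # placeholder id
        track_cmd = client.get_node("ns=4;s=MAIN.Commands.StartTracking")      # placeholder id
        azimuth = await az_node.read_value()
        print(f"current azimuth: {azimuth:.3f} deg")
        await track_cmd.write_value(True)                 # request tracking via a boolean flag

if __name__ == "__main__":
    asyncio.run(read_azimuth_and_start_tracking())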
The Cherenkov Telescope Array Observatory (CTAO) embodies the next phase of ground-based gamma-ray astronomy, engineered to function in the age of multimessenger astronomy. This observatory consists of two arrays, hosting a combined total of more than 60 Cherenkov telescopes. These telescopes are strategically positioned in the Northern Hemisphere, on La Palma Island, Spain, and in the Southern Hemisphere, at Paranal, Chile. CTAO integrates a diverse array of telescope designs and scientific instruments, all working together to achieve unmatched sensitivity and energy coverage. This collective effort aims to advance the exploration of transient phenomena within the GeV-TeV range. This paper delineates the ongoing development of the monitoring, logging, and alarm subsystems within the Array Control and Data Acquisition System (ACADA) for the CTAO. The Monitoring System (MON) is tasked with overseeing and logging the overall conditions of the array. It has the capability to acquire the fundamental data required to enable predictive maintenance and minimize system downtime. The MON provides a unified tool for monitoring data items from telescopes and calibration instruments at the CTAO sites, ensuring immediate availability for operators and facilitating quick-look quality checks. Meanwhile, the Array Alarm System (AAS) collects, filters, and exposes alarms originating from ACADA processes and array elements, thereby enhancing observational efficiency. This paper outlines the MON and AAS, including the technological implementation choices.
LOCNES (Low Cost NIR Extended Solar Telescope) is a newly installed solar telescope at the Telescopio Nazionale Galileo. This small telescope has been specifically developed to examine the infrared spectrum of the Sun with the GIANO-B high-resolution infrared spectrograph. LOCNES observes the Sun by integrating over the entire solar disk, so its observations lack any spatial resolution and are comparable to what can typically be obtained for any other star. This observational method is commonly referred to as "Sun-as-a-star" observations. In this paper we provide an overview of the LOCNES Instrument Control Software (ICSS), which is in charge of controlling the dome and the telescope and of enabling the acquisition of spectra with GIANO-B. We illustrate the control network, the instrument functions and elements to be controlled, the overall design of the LOCNES Instrument Control Software and its main components.
In this contribution we present FAST, a comprehensive software suite that streamlines and automatically manages the forecast of atmospheric and astroclimatic parameters (provided respectively by the Meso-Nh and Astro-Meso-Nh models) for large ground-based telescope installations. Forecasting these parameters is becoming crucial for the operation of large telescope installations, which host atmosphere-sensitive equipment such as Adaptive Optics (AO) systems. FAST automatically performs all the steps of an atmospheric forecast: preparation of initialisation and forcing data, atmospheric simulation, post-processing, and management of the outputs. Such a service is useful both for optimizing AO instruments ahead of the upcoming atmospheric conditions and for planning telescope observations (especially in “service mode”) so as to maximize the scientific output. FAST was first applied to the ALTA Center project, which provides forecasts for the LBT telescope, and was then extended to the more recent FATE project, a similar forecast system for the VLT. Since its first version, FAST has evolved and been modified to fit the different technical specifications of the different projects, gaining in modularity. It is now able to provide forecasts on different timescales (from days to hours in advance) and during both night-time and daytime. After several years of continuous development, FAST has reached full maturity and is ready for application to other projects and sites.
AVU-GSR is a pipeline designed to solve the Global Astrometric Sphere Reconstruction problem for the Gaia ESA mission, with the goal of replicating the AGIS baseline process. The pipeline produces an independent solution using a different astrometric model and different algorithms, thus providing an effective way to assess the reliability of the solution, as called for by the absolute character of the satellite measurements. It recently passed its qualification phase with real data, successfully solving the sphere reconstruction problem at the sub-mas level with Cycle 2 data. We review the context, the current status of the pipeline, and the development needed to meet the goal of contributing to the forthcoming Gaia Data Release 4.
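At its core, the sphere reconstruction is a very large sparse least-squares problem. The toy sketch below, which is not the AVU-GSR code, only illustrates the class of iterative solver involved, using SciPy's LSQR on a small random system; the matrix sizes and noise level are arbitrary.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    # Toy design matrix: each row is one observation touching a few unknowns,
    # standing in for the (far larger and sparser) astrometric system.
    rng = np.random.default_rng(0)
    A = sparse_random(1000, 200, density=0.01, random_state=0, format="csr")
    x_true = rng.normal(size=200)
    b = A @ x_true + rng.normal(scale=1e-3, size=1000)

    # Iterative sparse least-squares solution.
    x_hat = lsqr(A, b, atol=1e-10, btol=1e-10)[0]
    print("rms error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))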
The ASTRI ("Astrofisica con Specchi a Tecnologia Replicante Italiana") is a collaborative international effort led by the Italian National Institute for Astrophysics (INAF) for developing an array of nine 4m-class dual-mirror Imaging Atmospheric Cherenkov Telescopes (IACTs) sensitive to gamma-ray radiation at energies above 1 TeV. The array is placed at the Teide Observatory in Tenerife, in the Canary Islands. In order to support the development, installation, and operations of the ASTRI Mini-Array, an on-site Information and Communication Technology (ICT) Infrastructure has been designed. This paper describes the design of this ICT infrastructure, which includes various subsystems dedicated primarily to host the Supervisory Control and Data Acquisition (SCADA) software whose aim is to control and monitor the array of telescopes and to perform data acquisition and data quality control. For each subsystem, the best technology solutions were chosen. A dedicated Virtual System based on ProxMox for telescope control, to ensure the easy control and management combined with high reliability and continuity of service was implemented. To ensure the throughput of tens of MB/s the data acquisition and dispatch operations were realized bare metal from the camera and frontier server, combined with a dedicated BeeGFS-based storage system to ensure the necessary performance and provide a distributed, shared and concurrent filesystem. The high performances of the online data quality control and of the Monitoring System are guaranteed by a Kubernetes Technology approach, which also improves the automation, the scaling and deployment. These subsystems and ASTRI telescopes are interconnected by the high-performance network, so special attention has been focused on the network topology to ensure both reliability and data transfer throughput, both in the local network and for transmission to the remote archive facility in Rome where the data are transferred as soon as they are available. The entire ICT infrastructure was engineered to have no Single Point of Failure (SPOF) and to ensure high availability, because there will be no one dedicated to its maintenance on-site at Teide and during the night. Therefore, all the most critical systems have been designed in hot redundancy, that is, capable of supporting a failure without service interruption.
The Ariel space telescope in the ESA Cosmic Vision program aims to uncover the chemical composition of exoplanetary atmospheres. Ariel achieves this by using multi-wavelength spectroscopy and photometry with high photometric precision across a planet’s orbit. During these observations, the telescope requires stable pointing to reduce the photometric noise caused by spacecraft jitter. This task is covered by a dedicated instrument: the Fine Guidance Sensor (FGS). The FGS is a science instrument providing photometry and spectroscopy in the visual and near-infrared. While the gathering of science data is a key aspect of the FGS, the images are also used for the guiding of the telescope. Both tasks are carried out by the Instrument Application Software (IASW), which is implemented on the instrument’s data processing unit. The key scientific tasks of the IASW involve up-the-ramp sampling, data reduction, and compression. Additionally, the IASW handles more general tasks such as instrument health, housekeeping management, commanding of the detectors, the handling of telecommands and telemetry, as well as fault detection, isolation, and recovery. The software development is facilitated by a suite of tools and utilities created over the course of previous projects, which allow us to cut down the required workload. This paper explores the design-to-code workflow and our test-driven development approach, while also covering the peculiarities and challenges of the Ariel FGS IASW implementation.
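To make the up-the-ramp sampling step concrete, the sketch below fits a per-pixel count rate to a set of non-destructive reads with a simple least-squares slope. It is only an illustration in Python; the flight software runs on the instrument's data processing unit and includes further steps (for example glitch rejection) not shown here.

    import numpy as np

    def fit_ramp_slope(ramp, dt):
        """Least-squares slope (count rate) per pixel from non-destructive reads.

        ramp: array of shape (n_reads, n_pixels) of accumulated counts
        dt:   time between consecutive reads
        """
        n_reads = ramp.shape[0]
        t = np.arange(n_reads) * dt
        # Fit counts = rate * t + offset for every pixel at once.
        design = np.vstack([t, np.ones_like(t)]).T        # (n_reads, 2)
        coeffs, *_ = np.linalg.lstsq(design, ramp, rcond=None)
        return coeffs[0]                                   # rate per pixel

    # Example: a ramp of 10 reads for 4 pixels with known rates.
    rates = np.array([5.0, 50.0, 120.0, 300.0])
    reads = np.arange(10)[:, None] * rates[None, :] + np.random.normal(0, 2, (10, 4))
    print(fit_ramp_slope(reads, dt=1.0))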
The ASTRI (”Astrofisica con Specchi a Tecnologia Replicante Italiana”) program, led by the Italian National Institute for Astrophysics (INAF), is an international collaboration focused on developing and operating an array of nine 4-meter class, dual-mirror Imaging Atmospheric Cherenkov Telescopes (IACTs). These telescopes are designed to detect gamma-ray radiation at energies above 1 TeV. The ASTRI Mini-Array is being constructed at the Teide Observatory in Tenerife, Canary Islands. To support the development, installation, and operation of the ASTRI Mini-Array, a dedicated on-site Information and Communication Technology (ICT) Infrastructure has been designed. This ICT infrastructure hosts the Computing System for the SCADA (Supervisory Control and Data Acquisition) software, which monitors, controls, and acquires data from the ASTRI Mini-Array telescopes and associated auxiliary hardware. The deployment model for SCADA is based on a combination of containers, bare metal servers, and virtual machines, all of which communicate seamlessly. Containerization techniques, paired with advanced orchestration systems such as Kubernetes, enhance the system’s efficiency. In this paper, we detail the virtual and containerized environment system, which employs the concept of Infrastructure as Code (IaC) in conjunction with a container orchestration system. IaC enhances portability, automation, agility, and version control, while the orchestration system automates deployment, scalability, load balancing, resource management, recovery, and container version management. Together, IaC and container orchestration significantly simplify the infrastructure and application management. IaC automates the provisioning of basic infrastructure, while container orchestration ensures that containerized applications run efficiently and reliably. This architecture allows us to manage the entire SCADA system with ease, enabling rapid deployment of the entire system—both the environment and applications—at the Teide Observatory or other test sites.
We outline the development of primecam readout, the readout software for the Prime-Cam and Mod-Cam instruments on the CCAT Fred Young Submillimeter Telescope (FYST). The instruments feature Lumped-element Kinetic Inductance Detector (LEKID) arrays driven by Xilinx ZCU111 RFSoC boards. In the current configuration, each board can drive up to 4000 KIDs, and Prime-Cam will employ approximately 25 boards. The software runs on a centralized control computer connected to the boards via dedicated ethernet and facilitates tasks such as driving the frequency-multiplexed tone comb, comb calibration and optimization, and establishing detector timestreams. The control computer uses dynamically generated control channels for each board, allowing all boards to be controlled in parallel while diagnostics are tracked for each board individually. This work demonstrates a scalable RFSoC readout architecture in which computational demands increase linearly with the number of detectors, enabling control of tens of thousands of KIDs with modest hardware and opening the door to the next generation of KID arrays housing millions of detectors.
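A minimal sketch of the parallel per-board control pattern is shown below, using Python asyncio to open one connection per board and issue the same command to all of them concurrently while keeping per-board replies separate. The IP addresses, port, and command string are invented for the example and are not the actual primecam readout protocol.

    import asyncio

    async def control_board(board_id, command):
        # Hypothetical per-board channel over the dedicated ethernet link.
        reader, writer = await asyncio.open_connection(f"10.0.0.{board_id}", 5000)
        writer.write(f"{command}\n".encode())
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return board_id, reply.decode().strip()

    async def main():
        # Drive all ~25 boards in parallel, tracking each result individually.
        tasks = [control_board(i, "SET_TONE_COMB") for i in range(1, 26)]
        for board_id, reply in await asyncio.gather(*tasks):
            print(f"board {board_id}: {reply}")

    asyncio.run(main())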
FRIDA (inFRared Imager and Dissector for Adaptive optics of GTC) is a near-infrared imager and integral field spectrograph covering the wavelength range from 0.9 to 2.5 microns. FRIDA will work in two observing modes: direct imaging and integral field spectroscopy. This paper describes the main achievements and current status in the development of the electronics and control systems for FRIDA's cryogenic mechanisms, cabling, and housekeeping (HK). A description of the main hardware and software tests is presented.
We present a simple system for job distribution built on the RabbitMQ open-source message broker. The system is based on the concept of job sources (origins), sinks (destinations), and realms (hubs), where a network of these entities can be readily established with a configuration file for each site and a RabbitMQ server running at each hub. Jobs are sent via persistent JSON-encoded packets and delivered reliably by RabbitMQ queues. The system was built primarily for robust data transfers amidst volatile network connections, but it is general enough for any kind of flexible job distribution scheme where reliable delivery of job messages is needed. We are releasing "datasink" as an open-source Python package on GitHub. Aside from RabbitMQ, there are minimal additional requirements.
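The underlying RabbitMQ pattern, durable queues plus persistent JSON-encoded messages, can be sketched in a few lines with the pika client, as below. This is a generic illustration, not the datasink package's actual API; the host name, queue name, and job fields are placeholders.

    import json
    import pika

    # Connect to the hub's RabbitMQ server (address is a placeholder).
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="hub.example.org"))
    channel = connection.channel()

    # Durable queue plus persistent messages, so jobs survive broker restarts.
    channel.queue_declare(queue="jobs", durable=True)

    job = {"type": "transfer", "path": "/data/obs_0001.fits", "dest": "archive"}
    channel.basic_publish(
        exchange="",
        routing_key="jobs",
        body=json.dumps(job),
        properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
    )
    connection.close()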
This paper describes the design, implementation, and testing of a modern user interface for the control and tuning of the Gemini Observatory M2 tip/tilt/focus mechanism. As part of observatory upgrades, the M2 control electronics that sense and drive the M2 mechanism are being upgraded from a 1990s DOS-based system to contemporary components and software, while maintaining and potentially improving system performance. While the position sensors, motors, and voice coil actuators remain untouched, the control electronics to sense and drive the assembly are being completely replaced. Functionally, the control computer and user interface are being split across multiple computers to better isolate the real-time functionality and free up more resources for the user interface. The control computer is an x86 architecture PC utilizing PCIe data acquisition cards and the PREEMPT RT Linux patch to maintain control loops at 3800 Hz. For rapid display, plotting, and analysis (including FFT computation) of telemetry from the control computer, the user interface is a Linux-based desktop application built using PyQt. Communication between the control computer and the user interface is achieved via EPICS as the transport layer, owing to its ubiquitous usage throughout the observatory. The user interface is an expert-user tool for monitoring, controlling, and tuning the M2 mechanism live, exposing low-level parameters not available to higher-level systems and users. It is feature-rich, with tools for control engineers such as plant identification and filter tuning. The performance results and lessons learned during development and through the laboratory testing phase are discussed.
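Because EPICS is the transport layer, telemetry can be consumed with a simple Channel Access monitor; the sketch below uses pyepics to subscribe to a PV and print updates. The PV name is a placeholder rather than an actual Gemini record, and in the real application the callback would feed a PyQt plotting widget instead of printing.

    import time
    import epics

    def on_update(pvname=None, value=None, **kw):
        # In the GUI this would push the sample into a ring buffer for plotting/FFT.
        print(f"{pvname} -> {value}")

    # Placeholder PV name; subscribe and let Channel Access deliver updates.
    pv = epics.PV("m2:tiltX.RBV", callback=on_update)

    time.sleep(5)          # let the monitor run for a few seconds
    pv.clear_callbacks()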
The Vera C. Rubin Observatory’s Data Butler provides a way for science users to retrieve data without knowing where or how it is stored. In order to support 10,000 science users in a hybrid cloud environment, we are modifying the Data Butler to use a client/server architecture, so that we can share authentication and authorization controls with the Rubin Science Platform and more easily support standard tooling for scaling up backend services. In this paper we describe the changes being made to support this and some of the difficulties that are being encountered.
The Herzberg Extensible Adaptive optics Real-Time Toolkit (HEART) is a complete framework written in C and Python for building next-generation Adaptive Optics (AO) system real-time controllers, with the performance needed for extremely large telescopes. With numerous HEART-based RTCs now in their design or build phases, each with different AO algorithms, target hardware, and observatory requirements, continuous automated builds and tests are a cornerstone of our development effort. In this paper we describe the many levels of testing that we perform, from low-level unit tests of individual functions to more complex component and system-level tests that verify both numerical correctness and execution performance. Incorporating extensive testing into HEART since its inception has allowed us to continuously (and confidently) refactor and extend it to meet both the changing needs of local on-sky experiments and those of the several major facility instruments that we are developing.
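The sketch below illustrates the two flavours of test described above, numerical correctness and execution performance, for a hypothetical matrix-vector reconstruction step, written as plain pytest-style functions. The function, array sizes, and timing budget are invented for the example and are not HEART's actual tests.

    import time
    import numpy as np

    # Hypothetical reconstructor: multiply slopes by a control matrix.
    def reconstruct(control_matrix, slopes):
        return control_matrix @ slopes

    def test_reconstruct_matches_reference():
        rng = np.random.default_rng(42)
        cmat = rng.normal(size=(50, 200))
        slopes = rng.normal(size=200)
        expected = np.dot(cmat, slopes)
        np.testing.assert_allclose(reconstruct(cmat, slopes), expected, rtol=1e-12)

    def test_reconstruct_is_fast_enough():
        rng = np.random.default_rng(0)
        cmat = rng.normal(size=(5000, 10000)).astype(np.float32)
        slopes = rng.normal(size=10000).astype(np.float32)
        start = time.perf_counter()
        reconstruct(cmat, slopes)
        elapsed = time.perf_counter() - start
        # Generous budget for a CI runner; real RTC targets are far tighter.
        assert elapsed < 0.5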
The detection and characterization of Earth-like planets in the solar neighborhood is a key scientific goal for the European Southern Observatory’s upcoming Extremely Large Telescope (ELT). A major limitation in achieving the high contrast ratios, i.e. 10⁻⁸–10⁻⁹, at the small inner working angles necessary to conduct these observations is the presence of Non-Common Path Aberrations (NCPAs), which arise from optical path differences between the adaptive optics system and the science instrument. NCPA calibration is therefore critical for improving the performance of several current and planned instruments including ELT-PCS and ELT-HARMONI, a first light instrument for the ELT. We present the development of an alternative approach to NCPA calibration using a deep learning model. The model is trained on both simulated image slicer images and real calibration data obtained from the recently commissioned ERIS integral field spectrograph at the VLT.
The National Science Foundation’s Daniel K. Inouye Solar Telescope (DKIST) is the largest solar telescope in the world, utilizing a 4-m offset primary mirror that accumulates a 13-kW solar load. Safely offsetting and extracting that heat load is the responsibility of the Facility Management System (FMS), which controls all aspects of the Facility Thermal System (FTS). Using three PAC/PLC controllers, the FMS provides coolant across 11 individual loops, each at a different temperature, to meet the requirements of the telescope, coudé, enclosure, optics, and instruments, and also controls the Domestic Water System, the Energy Management System, active and passive ventilation, and the HVAC system, including the Coudé Instrument Lab cleanroom. Control of all of these systems must be coordinated to provide the best thermal system performance given the environmental conditions and operational requirements. Due to the unique and innovative nature of the observatory and the decision to self-perform a variety of construction work packages, all of the system programming and thermal instrumentation design was performed in-house. During DKIST’s construction phase, each subsystem was commissioned at the most basic acceptable level and then immediately put into a production environment because of the pre-existing cooling demands of the facility. This led to a tight time budget to program and deliver the FMS, making sequential prototyping, commissioning, and acceptance nearly impossible. These constraints drove the need for a flexible programming approach, more similar to a retrofit than to a new system-wide design, when performing system adjustments and integrating new subsystems. By using state machine programming, it is possible to design the system using diagrams, which makes it easier to review potential system operations with all interested parties, including those who are not programmers. This also shortens the time needed to receive feedback from users, reduces debugging time, and helps identify edge cases, all while making the logic more extensible and flexible. Presented herein is the programming methodology implemented in the FMS that allowed us to meet aggressive moving targets during construction, revise system operation and function during early operations commissioning, and ensure that the system will continue to provide the necessary controls for the observatory while maintaining flexibility for future improvements.
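A table-driven state machine of the kind described above can be sketched very compactly; the Python example below models one hypothetical coolant loop with a handful of states and events. The real FMS logic runs on PAC/PLC controllers, so this is only an illustration of the reviewable, diagram-like structure, not the actual DKIST implementation.

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        PRECOOL = auto()
        RUNNING = auto()
        FAULT = auto()

    # Transition table: (current state, event) -> next state.  Keeping the logic
    # in a table mirrors the reviewable diagrams described above.
    TRANSITIONS = {
        (State.IDLE, "start"): State.PRECOOL,
        (State.PRECOOL, "at_setpoint"): State.RUNNING,
        (State.RUNNING, "stop"): State.IDLE,
        (State.PRECOOL, "sensor_fault"): State.FAULT,
        (State.RUNNING, "sensor_fault"): State.FAULT,
        (State.FAULT, "reset"): State.IDLE,
    }

    def step(state, event):
        # Events with no outgoing edge leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    # Walk one coolant loop through a nominal sequence and a fault.
    state = State.IDLE
    for event in ["start", "at_setpoint", "sensor_fault", "reset"]:
        state = step(state, event)
        print(event, "->", state.name)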
We outline the two workflows used for the reduction of science data from the MCAO Assisted Visible Imager and Spectrograph (MAVIS), and describe the inputs, outputs, and static calibration files required for each process of the workflows. Ronchi masks and pinhole masks are used in combination to determine the geometry of the spectrograph slices, and wavelength calibrations will be enhanced with Etalons. The precision required for the Imager astrometry is obtained by the mid-spatial frequency distortion calibrations. To prototype these complex methods and to test the efficacy of pixel tables and error handling we are using the new ESO PyCPL and PyHDRL libraries, which provide an interface to ESO’s classic Common Pipeline Library (CPL) in the Python ecosystem.
Gemini Observatory commissioned an SDSU (ARC) detector controller (DC) replacement for the aging GNAAC DC of the Gemini Near Infrared Spectrograph (GNIRS). The focus of this paper is the iterative development approach that led to a unique Python-based DC. We leveraged the stability and modern technology of the Gemini Data System (GDS) and the Gemini Instrument API (GIAPI) to facilitate communication between the DC and the Gemini telescope systems. Another core innovation was to implement a Python version of the Gemini-specific CAD/CAR EPICS records, which allowed us to switch from an EPICS Input Output Controller (IOC) to a Caproto Python IOC. These innovations allow the Python-based DC to communicate with the many Gemini systems required to process GNIRS observations. The use of a Python-based DC not only enhances the system's functionality but also simplifies future updates and maintenance. Our paper delves into the team-centric iterative development process, the software engineering challenges, and the initial operational performance, emphasizing the software's role in modernizing the observatory's infrastructure.
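To give a flavour of what a Caproto-based IOC looks like, the sketch below defines a minimal PVGroup with a CAD-like command PV and a CAR-like state PV. The PV names, prefix, and behaviour are invented for the example and do not reproduce the actual GNIRS DC records.

    from caproto.server import PVGroup, pvproperty, ioc_arg_parser, run

    class DetectorControllerIOC(PVGroup):
        """Toy IOC exposing a CAD-like command record and a CAR-like status record."""

        exposure_time = pvproperty(value=1.0, doc="Requested exposure time (s)")
        observe = pvproperty(value=0, doc="CAD-style command: write 1 to start")
        car_state = pvproperty(value="IDLE", doc="CAR-style completion state")

        @observe.putter
        async def observe(self, instance, value):
            if value == 1:
                await self.car_state.write("BUSY")
                # ... trigger the detector controller readout here ...
                await self.car_state.write("IDLE")
            return 0

    if __name__ == "__main__":
        ioc_options, run_options = ioc_arg_parser(
            default_prefix="demo:dc:", desc="Hypothetical detector-controller demo IOC"
        )
        ioc = DetectorControllerIOC(**ioc_options)
        run(ioc.pvdb, **run_options)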
We present the phase one report of the Bright Star Subtraction (BSS) pipeline for the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST). This pipeline is designed to create an extended PSF model by utilizing observed stars, followed by subtracting this model from the bright stars present in LSST data. Running the pipeline on Hyper Suprime-Cam (HSC) data shows a correlation between the shape of the extended PSF model and the position of the detector within the camera’s focal plane. Specifically, detectors positioned closer to the focal plane’s edge exhibit reduced circular symmetry in the extended PSF model. To mitigate this effect, we present an algorithm that enables users to account for the location dependency of the model. Our analysis also indicates that the choice of normalization annulus is crucial for modeling the extended PSF. Smaller annuli can exclude stars due to overlap with saturated regions, while larger annuli may compromise data quality because of lower signal-to-noise ratios. This makes finding the optimal annulus size a challenging but essential task for the BSS pipeline. Applying the BSS pipeline to HSC exposures allows for the subtraction of, on average, 100 to 700 stars brighter than 12th magnitude measured in g-band across a full exposure, with a full HSC exposure comprising ≈100 detectors.
The Southern African Large Telescope (SALT) has initiated a SALT Efficiency Project, which aims to mitigate instrument overheads by employing software solutions in the areas that waste the most time. Efforts were directed at improving the telescope's complex focus model, the Guider Pre-Positioning (GPP) for all instruments, and the telescope's pointing. Approximately 50 seconds have been saved for each observation taken since November 2023 as a result of the solutions presented. A current project involves developing a new Observational Control System (OCS) to automate repetitive and repeatable observational steps, such as calibrations and instrument offsets. Another current project aims to introduce a Pyramid WaveFront Sensor (PWFS) onto the Fibre Instrument Feed (FIF) guider, allowing it to provide focus feedback to the telescope. SALT plans to have both of these solutions implemented by 2025.
Due to the complexity of scientific instruments such as spectropolarimeters, managing instrument sequences can be challenging. To address this problem, a Finite-State Machine (FSM) approach has been used to manage solar observation sequences in the GREGOR Infrared Spectrograph (GRIS). FSMs provide a structured and visual representation of control logic, making them well suited for managing intricate workflows. By using FSMs, both scientists and engineers can clearly define and modify instrument sequences, ensuring the precise coordination of the various instrument components. In spectropolarimeters with multiple optical channels, such as GRIS, FSMs can effectively synchronize image acquisition across the channels, adjust exposure times, handle errors, and manage the selection of the scanning system. To streamline the implementation process, the CodeDesigner RAD tool was used to create diagrams that illustrate the execution order of the states belonging to a finite-state machine. CodeDesigner's code generation feature automatically translates these diagrams into C++ code. This approach ensures the precise and reliable operation of the GRIS control software.
The ESO Common Pipeline Library (CPL) and the High-Level Data Reduction Library (HDRL) together form a comprehensive, efficient, and robust software toolkit for data reduction pipelines. They were developed in C for reasons of efficiency and speed; however, with the community's preference for Python for algorithm prototyping and data reduction, there is a need to access them from Python. PyCPL and PyHDRL provide this, making it possible to run existing CPL data reduction recipes from Python as well as to develop new recipes in Python. These new recipes are built using the PyCPL and PyHDRL libraries, which provide idiomatic Python interfaces to CPL and HDRL while allowing users to take advantage of the scientific Python ecosystem. PyCPL and PyHDRL are already being used to prototype recipes for the MAVIS instrument pipeline and have been used to develop an extensible pipeline development framework. Here we describe their design, implementation, and usage.
Taurus is an open-source GUI framework that implements a Model View Controller (MVC) design pattern tailored for control systems. It is based on Python and Qt and is extensively used with Tango Controls in the particle accelerator and large experimental physics community. Taurus has an active community and addresses design patterns and requirements that the ELT control software project must also address for its GUIs. It provides a homogeneous way to interact with any control system (attributes and devices) and has extension points for other projects, widgets, and factories. A Taurus "Model" plugin adds data access support for a new control system; it is the Model component of the MVC pattern. In the ELT case, an "oldb" model plugin was developed. This maps data points and the tree structure of the database to Attributes and Devices, respectively. Support for read, subscription, write, and polling operations was added incrementally over successive versions of the "oldb" plugin. Conversely, the MAL plugin supports access to the request-reply interfaces of the ELT software applications. Once a plugin has read support, developers have access to the features the Taurus framework offers: Taurus widgets automatically work with the new model, and scalar, vector, and matrix widgets are supported immediately. New widgets for particular ELT requirements are developed as normal Qt widgets, and a Controller class is added to complete the MVC pattern. A review of the integration is presented, and an analysis of lessons learned offers our perspective on adoption. Finally, future work in terms of GUI development is discussed.
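The payoff of a model plugin is that standard Taurus widgets work unchanged with the new scheme; the sketch below shows the usual TaurusLabel pattern with a made-up "oldb" datapoint URI standing in for a real one. The exact model syntax of the ELT plugin may differ from this assumption.

    import sys
    from taurus.qt.qtgui.application import TaurusApplication
    from taurus.qt.qtgui.display import TaurusLabel

    app = TaurusApplication(cmd_line_parser=None)

    # Once the "oldb" scheme plugin is installed, a datapoint URI can be used as a
    # model just like a Tango attribute; the URI below is a made-up example.
    label = TaurusLabel()
    label.setModel("oldb:/elt/m1/segment1/position")
    label.show()

    sys.exit(app.exec_())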
The Real-Time Computer of the Multi-Conjugate Adaptive Optics Relay module for the ESO Extremely Large Telescope (MORFEO@ELT) is the subsystem that computes the atmosphere tomography based on the wavefront captured by nine sensors and controls the shape of three deformable mirrors. Implementing the MORFEO RTC presents many technical challenges due to the high data throughput generated by the system sensors and the heavy processing power required for the real-time mirrors’ shape computation. To meet ESO requirements, the ESO RTC Toolkit will be used to build the soft RTC subsystem, while the Hard RTC will be based on a custom architecture. In this paper, we will discuss some activities undertaken to progress toward the Final Design of the SRTC. Specifically, a physical design is proposed for the MORFEO RTC to meet the computational and network requirements. This design will include both the computing cluster and network physical design. To validate the architecture’s functionalities, some prototyping activities have been initiated: Firstly, a subset of the SRTC components has been created to test the main end-to-end data path, i.e. from the source (wavefront sensor) to the permanent storage (telemetry storage), and through the gateway to the consumer data tasks. Additionally, the core and computationally intensive data tasks will be prototyped using simulated data to benchmark different implementation strategies and various hardware solutions. Finally, the distributed system will be prototyped in a virtual or physical environment. These prototyping platforms will be useful in the final design and development stages to test module functionalities and the system and sub-system interfaces.
The Mid-infrared ELT Imager and Spectrograph (METIS) instrument is one of three first-generation science instruments for the Extremely Large Telescope (ELT) in Chile. It has entered the Manufacturing, Assembly, Integration and Testing (MAIT) phase and is currently scheduled to be installed in 2028. Its Single Conjugate Adaptive Optics (SCAO) system will provide the performance of an extreme adaptive optics system, which enables High-Contrast Imaging (HCI) observations in the thermal/mid-infrared wavelength domain. The METIS Adaptive Optics (AO) control system is responsible for the AO wavefront correction and for supporting AO-related assembly, integration, verification, and maintenance activities. It realizes the main AO loop through a Real-Time Computer (RTC) that receives images from a wavefront sensor and commands the corrective optics through the Central Control System (CCS) of the ELT. Several auxiliary functions necessary to maintain the quality of the wavefront correction will run outside of the RTC in the AO Observation Coordination System (AO OCS). For instance, the Differential Tip-Tilt (DTT) control loop centers the star on the Vortex Phase Mask during HCI observations by adjusting the modulator device via the SCAO Function Control System (FCS), based on science images received from the Focal Plane Sensor Gateway (FPS GW). Conceptually, the METIS Adaptive Optics Control System (AOCS) is a distributed software system that is controlled by the METIS Instrument Control System (ICS). This paper describes the current status of the METIS AO control system, the driving forces behind the design, and the important control loops.
METIS, the Mid-infrared ELT Imager and Spectrograph, will operate an internal Single Conjugate Adaptive Optics (SCAO) system, which will mainly serve the science cases targeting exoplanets and disks around bright stars. The Extremely Large Telescope (ELT) is expected to have its first light in 2028, and the entire instrument recently passed its final design phase. The Adaptive Optics (AO) of METIS SCAO is designed to correct for atmospheric distortions and is essential for diffraction-limited observations with METIS. The computational and data transfer requirements for these next generation ELT AO Real-Time Computers (RTCs) are enormous and require advanced data processing and pipelining techniques. METIS SCAO will use a pyramid Wavefront Sensor (WFS), which captures incoming wavefronts at 1 kHz with a raw throughput of 148 MB/s. The RTC will ingest these WFS images on a frame-by-frame basis, compute the corrections and send them to the deformable mirror M4 and the tip/tilt mirror M5. The RTC is split up into two distinct systems: the Hard Real-Time Computer (HRTC) and the Soft Real-Time Computer (SRTC). The HRTC is responsible for computing the time sensitive wavefront control loop, while the SRTC is responsible for supervising and optimizing the HRTC. A working prototype for the HRTC has been completed and operates with an RTC computation time of roughly 372 μs. This computation is memory limited and runs on two NVIDIA A100 GPUs. This paper shows a breakdown of the HRTC on a CUDA kernel level, focusing on the tasks that run on the GPUs. We also present the performance of the HRTC and possible improvements for it.
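The heart of the wavefront control step is a large matrix-vector multiply followed by an integrator update. The sketch below reproduces that structure with CuPy as a stand-in for the hand-written CUDA kernels; the matrix dimensions, gains, and timing method are placeholders, not METIS SCAO values.

    import cupy as cp

    n_slopes, n_actuators = 20000, 5000       # order-of-magnitude placeholders
    cmat = cp.random.standard_normal((n_actuators, n_slopes), dtype=cp.float32)
    slopes = cp.random.standard_normal(n_slopes, dtype=cp.float32)

    # One control step: matrix-vector multiply followed by a leaky integrator.
    gain, leak = 0.4, 0.99
    commands = cp.zeros(n_actuators, dtype=cp.float32)

    start, stop = cp.cuda.Event(), cp.cuda.Event()
    start.record()
    delta = cmat @ slopes
    commands = leak * commands - gain * delta
    stop.record()
    stop.synchronize()
    print("GPU time (ms):", cp.cuda.get_elapsed_time(start, stop))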
The Multi-Adaptive Optics Imaging Camera for Deep Observations (MICADO) is one of the first light ESO Extremely-Large-Telescope (ELT) Instruments and is now nearing the completion of its final design stage. The MICADO instrument aims to generate high-resolution images of the Universe at near-infrared wavelengths, which requires maintaining a stable vacuum environment at 82 K inside the MICADO cryostat. To fulfill this requirement and for safety reasons, a PLC-based control software is used. This software communicates with over 180 sensors and devices simultaneously, to remotely maintain the cryostat environment. This paper discusses the software’s design architecture and implementation.
MORFEO is the Multi-Conjugate Adaptive Optics Relay for the Extremely Large Telescope (ELT) that will provide multi-conjugate correction of the incoming wavefront by means of three deformable mirrors: one on the telescope and two in the instrument optical train. The wavefront sensing is based on six laser guide stars projected onto a constellation of 45 arcseconds and three natural guide stars selected within the 2.7-arcminute corrected FOV. The current design of the Real Time Computer (RTC) devoted to the control of the deformable mirrors is reported here. Following the ELT architecture, the RTC consists of a Hard Real-Time Core (HRTC) and a Soft Real-Time Cluster (SRTC). The former is in charge of acquiring data from the wavefront sensors and controlling the deformable mirrors and jitter mirrors. It adopts the HEART platform and will be provided by Herzberg Astronomy and Astrophysics, NRC Canada, which is joining the Consortium. The SRTC, based on the ESO-provided RTC Toolkit, provides the interface to the Instrument Control System Software. It performs all the supervisory and monitoring tasks, in addition to the auxiliary loops for optimization of the correction. This paper discusses the state of the updated design of the RTC after the Preliminary Design Review (PDR), moving towards the final design of the subsystem. It provides an in-depth description of the distributed architecture adopted by the system, with a particular focus on the architecture of the SRTC. Detailed insights into the design considerations, challenges encountered, and solutions implemented in the SRTC architecture are presented to provide a comprehensive understanding of the system's current state and future direction. Part of the research activities described in this paper were carried out with the contribution of the Next Generation EU funds within the National Recovery and Resilience Plan (PNRR), Mission 4 - Education and Research, Component 2 - From Research to Business (M4C2), Investment Line 3.1 - Strengthening and creation of Research Infrastructures, Project IR0000034 – “STILES - Strengthening the Italian Leadership in ELT and SKA”.
MATTO (Multi-conjugate Adaptive Techniques Test Optics) is a wide-field adaptive optics test bench under development at the INAF-Astronomical Observatory of Padova, with the goal of supporting the study and development of new Multi-Conjugate Adaptive Optics techniques. Hence, it has been designed to be flexible and composed of independently configurable modules. The DAO4MATTO Real-Time Control system will be a system-tailored implementation of DAO, the new RTC software solution developed at Durham University, which will interface with and control several devices with different purposes. After a short presentation of the main concepts of MATTO, we briefly discuss the hardware and software architecture of DAO4MATTO. Furthermore, we show some preliminary findings obtained in a closed-loop scenario for a basic prototype system, composed of two visible-wavelength cameras, a Shack-Hartmann wavefront sensor, and a deformable mirror.
The Simons Observatory (SO) is a ground-based cosmic microwave background experiment currently being deployed to Cerro Toco in the Atacama Desert of Chile. The initial deployment of SO, consisting of three 0.46m-diameter small-aperture telescopes and one 6m-primary large-aperture telescope, will field over 60,000 transition-edge sensors that will observe at frequencies between 30 GHz and 280 GHz. SO will read out its detectors using Superconducting Quantum Interference Device (SQUID) microwave-frequency multiplexing (µmux), a form of frequency division multiplexing where an RF-SQUID couples each TES bolometer to a superconducting resonator tuned to a unique frequency. Resonator frequencies are spaced roughly every 2 MHz between 4 and 6 GHz, allowing for multiplexing factors on the order of 1000. One challenge of µmux is matching each tracked resonator with its corresponding physical detector. Variations in resonator fabrication, and frequency shifts between cooldowns caused by trapped flux can cause the measured resonance frequencies to deviate significantly from their designed values. In this study, we introduce a method for pairing measured and designed resonators by constructing a bipartite graph based on the two resonator sets and assigning edge weights based on measured resonator and detector properties such as resonance frequency, detector pointing, and assigned bias lines. Finding the minimum-cost matching for a given set of edge weights is a well-studied problem that can be solved very quickly, and this matching tells us the best assignment of measured resonators to designed detectors for our input parameters. We will present results based on the first on-sky measurements from SAT1, the first SO MF small-aperture telescope.
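The minimum-cost matching itself can be computed with standard tools; the toy sketch below builds a cost matrix from designed and measured resonance frequencies and solves the assignment with SciPy. In practice the cost also folds in detector pointing and bias-line terms, and the frequency values here are invented for illustration only.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Designed vs. measured resonance frequencies (MHz); small toy example.
    designed = np.array([4000.0, 4002.0, 4004.0, 4006.0])
    measured = np.array([4000.3, 4003.8, 4006.4, 4001.7])

    # Edge weights of the bipartite graph: here just |delta f|, but pointing and
    # bias-line terms can be added to the same cost matrix.
    cost = np.abs(designed[:, None] - measured[None, :])

    rows, cols = linear_sum_assignment(cost)
    for d, m in zip(rows, cols):
        print(f"designed {designed[d]:.1f} MHz -> measured {measured[m]:.1f} MHz")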
VSTPOL enhances the VST's capabilities by adding optical polarimetry via a linear polarization filter. This will make the VST the first large wide-field survey telescope with optical polarimetry. The project addresses the need for optical follow-up observations of Cherenkov Telescope Array (CTA) sources and transients. This paper describes the software upgrades required for the new polarimetric mode. The current instrument control software, based on the 2011 ESO VLT software release, manages pointing, acquisition, and active optics. The polarimetric mode necessitates two additional motorized movements: inserting the filter and selecting the polarization while tracking the object. Traditionally, VLT systems use a Local Control Unit (LCU) running VxWorks for motor control, but this system is outdated. Since compatibility with modern hardware is crucial, we resorted to a PLC-based system, which is unsupported by the installed VLTSW. Fortunately, the ICS Fieldbus Extension allows for a dedicated Device Control Environment (DCE). This DCE, using an updated VLTSW release, acts as a gateway to the control electronics, minimizing system-wide impact and reducing update-related risks.
The Italian Space Agency (ASI) and the Italian National Institute for Astrophysics (INAF) funded a project to design and develop an archive prototype for the ASI SPace weather InfraStructure (ASPIS). The project, CAESAR (Comprehensive Space Weather Studies for the ASPIS Prototype Realization), created a prototype aiming at unifying multiple Space Weather (SWE) resources through a flexible and adaptable architecture, allowing scientists to adopt an integrated approach encompassing the whole chain of phenomena from the Sun to the Earth, up to planetary environments. In this contribution we present various aspects and stages of the CAESAR project, from its design phase to the final prototype: the definition of a template (metadata schema) to collect metadata for the resources (products) contributed to the prototype, and its management; the management of those metadata documents through the development of a dedicated tool (ProSpecT, Product Specification Template, using JSON and JSONForms); the challenges in keeping it updated while helping the research community to provide both data and data descriptions; the definition of a set of constraints to handle datasets and their metadata in a homogenized way (as much as possible), identifying potential common data formats and reference frames to follow a chain of phenomena from the Sun through the interplanetary medium to Earth or planetary surfaces; the actual design and implementation of the prototype archive, its ingestion system, and API considerations; the design and development of a web-based graphical user interface to enable science research on top of the prototype archive, as well as the development of a dedicated Python module (ASPISpy) for advanced data investigation and easier integration with other community-driven software; and the automation of documentation of the contributed resources (data collections, software tools, modules) from the machine-readable templated documents. All of the above aspects are presented, highlighting challenges and specific solutions, as well as the potential future evolution of the prototype into the actual archive infrastructure for ASPIS.
The European Solar Telescope (EST) is a 4-m class solar telescope that will include a Multiconjugate Adaptive Optics system (MCAO) integrated in the telescope optical path. Its open-dome configuration implies that the complete telescope will be exposed to wind, which has an important impact on image stability and quality. The integration of Active Optics (AcO) and Adaptive Optics (AO) in solar telescopes represents a pivotal area of research aimed at enhancing solar observation capabilities. This study delves into the convergence of these two systems: on one hand, the AcO, responsible for real-time adjustments of optical components such as mirrors to compensate for mechanical deformations and misalignments; on the other, the AO, designed to counteract atmospheric turbulence and enhance solar image resolution. Diverse strategies are explored for merging these systems, leveraging advancements in high-sensitivity wavefront sensors, advanced control algorithms, and adaptive deformable mirror configurations. The AcO will be in charge of mitigating the low-frequency, large-amplitude distortions, such as gravitational and thermal deformations and the quasi-static component of the wind, while the AO will handle the high-frequency, small-amplitude distortions, such as wind buffeting and atmospheric turbulence. An analysis of the different strategies proposed for the control of the AcO loop and its planned actuation ranges is presented in this paper.
The Visible Tunable Filter Instrument (VTF) is a 2D imaging spectropolarimeter for high spatial and spectral resolution solar observations in visible light. Integration into the world's largest solar telescope, the 4-m aperture Daniel K. Inouye Solar Telescope (DKIST), started in January 2024. In this paper we present an overview of the complete software infrastructure designed and developed for this instrument, in particular the Instrument Control Software (ICS) and the Instrument Performance Calculator (IPC), a graphical tool enabling scientists to explore instrument performance and create executable observing configurations. Furthermore, real-time monitoring plugins were implemented to verify data acquisition and instrument performance. The main part of the infrastructure is the ICS package, which provides the interface between the operator, the instrument, and the observatory. It is built on the Common Services Framework (CSF) provided by DKIST, follows an object-oriented design, and is written in Java. The interface to the operator is provided by the engineering GUI, which allows the user to monitor and control all system drives and sensors; all observation and calibration tasks can be configured and started from this GUI. The interface to the instrument is realized by a DKIST-framework-compatible OPC/UA layer developed for this instrument, which interfaces to a Beckhoff Programmable Logic Controller (PLC) that manages the real-time requirements of the instrument. All real-time and synchronization requirements are implemented using the DKIST timing and synchronization system (TRADS), based on the Precision Time Protocol (PTP), which allows timing accuracy well below one microsecond. Furthermore, the ICS interfaces to the Camera System Software (CSS) and the Data Handling System (DHS), to which VTF delivers up to 2400 MB/s, or roughly 9 TB/hour, when used in spectropolarimetric imaging mode.
The new Focal Plane Systems (FPS) built for the fifth iteration of the Sloan Digital Sky Survey (SDSS-V) at Las Campanas Observatory and Apache Point Observatory each consist of 500 robotic fiber positioners, feeding optical and infrared multi-object spectrographs, that can be arranged in configurations, internally called "designs", to match science targets in the night sky. SDSS-V plans to observe roughly 50,000 of these designs over the five-year survey, with up to 30 being observed on a single night at each observatory. Besides the sheer volume of designs, there are strict time domain requirements ("cadences") that must be respected in order to complete the signature SDSS time domain surveys. This complex set of requirements necessitates software that can ensure cadence requirements are always respected, in addition to normal observing requirements such as maximum sky brightness, moon distance, etc., while also optimizing the designs scheduled in a night to ensure all designs are completed by the end of the survey. We present an overview of the roboscheduler package which was developed to solve these problems.
ScopeSim is a general-purpose observation data simulation ecosystem for astronomical instruments. It allows users to simulate observations with multiple instruments for the same (often custom built) target description using a common software platform, thus enabling “apples-to-apples” comparisons of the outputs. The simulation engine has been described in a previous proceedings paper [1]; however, behind the scenes a vast infrastructure has been built to support the ScopeSim engine. The supporting elements are in some cases major projects in their own right, with multiple additional use cases and user groups. For example, the Instrument Reference Database (IRDB) provides a public and open-source platform for instrument consortia to distribute a coherent picture of the optical properties and characteristics of their instrument(s).
Building on the Square Kilometre Array's (SKA) Continuous Integration/Continuous Deployment (CI/CD) advancements, this paper focuses on the adoption and evolution of cloud-native technologies in the integration environment and in subsystem-level orchestration. We present SKA's transformative journey in employing Kubernetes, integration environments, and a release process to streamline development workflows, automate integration testing, and ensure high-velocity deployments. The paper discusses strategies for dynamic environment provisioning, the seamless integration of independently developed subsystems, and the management of complex workflows with advanced CI/CD capabilities. We highlight the implementation of Kubernetes-cluster integration environments with software lifecycle management across multi-cloud environments, emphasizing a robust, scalable, and transparent infrastructure. These cloud-native paradigms have not only optimized observatory operations but have also paved the way for enhanced collaboration, observability, and reliability in the era of large-scale astronomical projects.
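One way to picture dynamic environment provisioning is a per-pipeline namespace that is created before integration tests and torn down afterwards. The sketch below uses the official Kubernetes Python client for that pattern; the namespace naming scheme and labels are invented, and real CI pipelines typically drive this through their CI tooling and Helm charts rather than a script like this.

```python
# Hedged sketch: create/tear down a per-pipeline Kubernetes namespace with the
# official Python client. Names and labels are illustrative only.
from kubernetes import client, config

def provision_namespace(pipeline_id: str) -> str:
    config.load_kube_config()                      # or load_incluster_config() in CI
    name = f"ci-integration-{pipeline_id}"
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=name,
            labels={"purpose": "integration-test", "pipeline": pipeline_id},
        )
    )
    client.CoreV1Api().create_namespace(ns)
    return name

def teardown_namespace(name: str) -> None:
    # Deleting the namespace cascades to all namespaced resources inside it
    client.CoreV1Api().delete_namespace(name)

env = provision_namespace("12345")
# ... deploy subsystem charts into `env`, run integration tests ...
teardown_namespace(env)
```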
With the advent of astronomical facilities that observe multiple sources in a single observation, it becomes necessary to automate strategies that select targets optimally and make the most of telescope time, the key resource behind new scientific discoveries in astronomy. Meeting this need, we developed an Exposure Time Calculator (ETC) Extension for Multi-Object Observation (EMOO) to be used with MANIFEST, a fiber positioning facility for the GMT. The code is currently built upon the ETC of the GMT-Consortium Large Earth Finder (G-CLEF), a first-light instrument for the GMT, which serves as the base test model for the results presented in this work. This new capability must deliver the maximum Signal-to-Noise Ratio (SNR) for each target, balanced within a range provided by the user, with no saturation and within a limited amount of time. That means an optimal exposure time must be partitioned among a minimal number of observation blocks, while always leaving enough time to fulfill the requirements in all blocks. To mitigate the cost of unnecessary simulations, our algorithm takes inspiration from binary-search routines; compared with the classical approach, the results show that we can deliver a more uniform and higher SNR distribution across an optimal set of observations. In other words, we increased the SNR while decreasing the observation time.
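The core partitioning idea can be sketched as follows: for an increasing number of equal blocks, binary-search the per-block exposure time that reaches the requested combined SNR without exceeding the saturation limit, so that the expensive ETC simulation is evaluated only a logarithmic number of times per block count. The snr_model below is a hypothetical stand-in for one G-CLEF ETC evaluation, and all constants are illustrative, not values from the actual EMOO code.

```python
# Hedged sketch of a binary-search exposure partitioner (illustrative only).
import math

def snr_model(exp_time_s: float) -> float:
    """Hypothetical stand-in for one expensive ETC simulation: SNR ~ sqrt(t)
    in the background-limited regime; the constants are illustrative."""
    return 12.0 * math.sqrt(exp_time_s / 100.0)

def partition_exposure(target_snr: float, t_saturate_s: float,
                       t_total_s: float, tol_s: float = 0.5) -> tuple[int, float]:
    """Return (n_blocks, per-block time): the smallest number of equal blocks
    whose combined SNR reaches target_snr with no block saturating, using a
    binary search on the per-block time to limit calls to the ETC."""
    for n_blocks in range(1, 65):
        lo, hi = 0.0, min(t_saturate_s, t_total_s / n_blocks)
        if snr_model(hi) * math.sqrt(n_blocks) < target_snr:
            continue                      # even the longest allowed block falls short
        while hi - lo > tol_s:
            mid = 0.5 * (lo + hi)
            if snr_model(mid) * math.sqrt(n_blocks) < target_snr:  # SNRs add in quadrature
                lo = mid
            else:
                hi = mid
        return n_blocks, hi
    raise ValueError("target SNR unreachable within the allotted total time")

# Example: reach SNR 40 without any block exceeding the 300 s saturation limit.
print(partition_exposure(target_snr=40.0, t_saturate_s=300.0, t_total_s=3600.0))
```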
Commissioned in November 2022 at W. M. Keck Observatory (WMKO), the Keck Planet Finder (KPF) instrument is a fiber-fed high-resolution spectrometer developed in partnership with the California Institute of Technology, the University of California Berkeley Space Sciences Laboratory, and the University of California, Santa Cruz. At the heart of object acquisition and tracking is KPF's guiding system, which uses 100 Hz tip/tilt corrections to maintain the target on the fiber aperture and coarse telescope corrections to keep the target within the effective range of the tip/tilt mechanism. This paper covers the design of the guider software behind these corrections, emphasizing simplicity for the initial approach and deliberately avoiding potentially unnecessary optimization, while leveraging existing standards and practices at WMKO. The software is implemented in Python, with one key component written in C. The paper covers the gradual process of optimization, addressing critical performance bottlenecks in a targeted fashion without rewriting the bulk of the code; the bottlenecks include image acquisition, image transmission, command transmission, and image processing. The paper concludes with an analysis of the on-sky tip/tilt performance.
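The split between the fast tip/tilt loop and the slow telescope offload can be illustrated with a simple proportional loop: compute the star's centroid error on each guide frame, correct it on the tip/tilt stage, and when the stage approaches its range limit, hand the accumulated offset to the telescope and recentre the stage. This is a minimal sketch of the concept, not WMKO's guider code; the gain, plate scale, and range values are assumed.

```python
# Hedged sketch of a split fast/slow guiding loop; all constants are assumed.
import numpy as np

TIP_TILT_RANGE = 1.0        # actuator half-range in arcsec (assumed)
OFFLOAD_THRESHOLD = 0.8     # offload to the telescope beyond 80% of range
GAIN = 0.5                  # proportional loop gain

def centroid_offset(frame: np.ndarray, plate_scale: float) -> tuple[float, float]:
    """Flux-weighted centroid relative to the frame centre, in arcsec."""
    frame = frame - np.median(frame)          # crude background removal
    frame[frame < 0] = 0
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    cy, cx = (frame * ys).sum() / total, (frame * xs).sum() / total
    dy, dx = cy - (frame.shape[0] - 1) / 2, cx - (frame.shape[1] - 1) / 2
    return dx * plate_scale, dy * plate_scale

def guide_step(frame, tt_pos, plate_scale=0.056):
    """One fast-loop iteration: correct on the tip/tilt stage, and report any
    coarse offset that should be offloaded to the telescope."""
    ex, ey = centroid_offset(frame, plate_scale)
    new_tt = (tt_pos[0] - GAIN * ex, tt_pos[1] - GAIN * ey)
    offload = (0.0, 0.0)
    if max(abs(new_tt[0]), abs(new_tt[1])) > OFFLOAD_THRESHOLD * TIP_TILT_RANGE:
        offload = new_tt                      # ask the telescope to absorb the offset
        new_tt = (0.0, 0.0)                   # recentre the tip/tilt stage
    return new_tt, offload
```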
The Roman Space Telescope (RST) Wide Field Instrument (WFI) will use a preliminary Science Data Processing (SDP) pipeline during its Integration and Test phase, and to some extent during Operations, to track basic statistics and identify known features such as cosmic rays and snowballs, as well as possible anomalies in raw detector data. In our detectors, these anomalies appear as jumps in the ramp of a readout and are classified as cosmic rays if they appear as streaks or as snowballs if they are more circular. The WFI employs an array of 18 H4RG-10 detectors that collect image samples. Each set of raw frames within a non-destructive exposure is packaged by the SDP pipeline into an image cube per detector, where each cube is a time series of 4096 × 4096 accumulating pixel frames. The preliminary analysis pipeline locates anomalies in these time-series accumulation frames and identifies the type of anomaly, whether natural phenomenon or detector characteristic. To compare different methods, we implemented both heuristic-based and data-driven approaches. For the heuristic-based approach, we identify snowballs and cosmic rays by the size and shape of outlier pixel clusters between consecutive frames. For the data-driven methods, we evaluated a Convolutional Neural Network (CNN) model and more traditional methods such as Principal Component Analysis (PCA). Since a CNN is a supervised learning/classification method, we used a labeled dataset of anomalies to perform image segmentation and identify anomalies. We used previously identified cosmic rays and snowballs to measure the accuracy and efficiency of these approaches. In evaluating these methods, we aim to pick the best fit for the SDP pipeline's anomaly detection in terms of both performance and runtime.
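The heuristic approach can be sketched in a few lines: difference consecutive reads of the ramp, flag pixels that jump by more than a robust threshold, group them into clusters, and classify each cluster by its size and roundness. The thresholds and the circularity cut below are assumed for illustration and are not the values used by the SDP pipeline.

```python
# Hedged sketch of heuristic jump detection in a ramp cube (illustrative only).
import numpy as np
from scipy import ndimage

def find_jumps(cube: np.ndarray, n_sigma: float = 5.0):
    """cube: (n_frames, ny, nx) ramp of non-destructive reads.
    Returns a list of (frame_index, bounding_slices, kind) for outlier clusters."""
    events = []
    diffs = np.diff(cube.astype(float), axis=0)        # frame-to-frame jumps
    for i, diff in enumerate(diffs):
        sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust MAD
        mask = diff > np.median(diff) + n_sigma * sigma
        labels, _ = ndimage.label(mask)
        for lab, sl in enumerate(ndimage.find_objects(labels), start=1):
            h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
            npix = int((labels[sl] == lab).sum())
            # Crude shape cut: large, roughly round clusters -> snowball;
            # small or elongated clusters -> cosmic ray (thresholds assumed).
            roundish = min(h, w) / max(h, w) > 0.6 and npix / (h * w) > 0.5
            kind = "snowball" if (npix >= 20 and roundish) else "cosmic_ray"
            events.append((i + 1, sl, kind))
    return events
```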
The Simons Observatory (SO) is a Cosmic Microwave Background (CMB) observatory consisting of three small aperture telescopes and one large aperture telescope. SO is located in the Atacama Desert in Chile at an elevation of 5180 m. Distributed among the four telescopes are over 60,000 Transition-Edge Sensor (TES) bolometers across six spectral bands centered between 27 and 280 GHz. A large collection of ancillary hardware devices producing lower-rate "housekeeping" data supports the detector data collection. We developed a distributed control system, which we call the observatory control system (ocs), to coordinate data collection among all systems within the observatory. ocs is a core component of the deployed site software and interfaces with all on-site hardware. Alongside ocs we use a combination of internally and externally developed open-source projects to enable remote monitoring, data management, observation coordination, and data processing. The majority of the software is deployed in Docker containers, and deployment of the software packages is partially automated with Ansible scripts, following a GitOps-based approach for updating infrastructure on site. We present an overview of the software and computing systems deployed within SO, including how those systems are deployed and interact with each other. We also discuss the timing distribution system and its configuration, as well as lessons learned during the deployment process and where we plan to make future improvements.
The Taranta project, a collaboration between the MAX IV Laboratory and the Square Kilometre Array Observatory (SKAO) within the Tango Collaboration, provides a web-based, no-code approach for creating graphical interfaces dedicated to monitoring and controlling Tango-based systems. Through an active development phase and close collaboration with the community, the software has gained advanced features and undergone a code refactor to enhance functionality, user experience, performance, and testability. The software has now matured to a level that enables its adoption both in the daily operations of the MAX IV synchrotron beamlines and in the development phases of the SKA project, and it is also used within the wider Tango community. To help users make increasingly comprehensive use of the tool, new features have been implemented, such as the ability to access devices from different Tango databases within a single dashboard, and architectural improvements have been made to seamlessly integrate applications such as Synoptic into Taranta. Additional developments and improvements have been introduced in response to user needs. Beyond these aspects, the presentation delves into one of the most significant challenges faced: meeting the demands of institutes with diverse scientific purposes and project stages, which encompasses considerations of architecture, component numbers, and operator skill sets. The presentation then turns to lessons learned, emphasizing the importance of adaptability and continuous improvement in a dynamic collaborative environment. The strategy for defining the roadmap is discussed, with a focus on producing increasingly comprehensive GUIs tailored to the specific requirements of the users.
The Trieste Solar Radio System (TSRS) was a set of two multi-channel solar radio polarimeters that performed continuous surveillance of decimetric and metric coronal radio emission with high time resolution. TSRS was operated in Trieste (Italy) under the management of the INAF Astronomical Observatory of Trieste from 1969 until 2010, when a lightning strike irreparably compromised its operations. From that moment, all related services, including the archive system, were abandoned for lack of funds and resources. A Heritage Archive (TSRS-HA) has been preserved with the available digitized data, and this contribution describes the plan to refurbish the archive and its services for this heritage resource, following current FAIR principles and adopting new technologies.
We developed the SMA eXchange (SMA-X) as a real-time data sharing solution built atop a central Redis database. SMA-X provides efficient, low-latency, high-throughput real-time sharing of hierarchically structured data among the various systems and subsystems of the telescope. It enables fast, atomic retrievals of specific leaf elements, branches, and sub-trees, including their associated metadata (types, dimensions, timestamps, origins, and more). At the Submillimeter Array (SMA) we have relied on it since 2021 to share a diverse set of approximately 10,000 real-time variables, including arrays, across more than 100 computers, with information published every 10 ms in some cases. SMA-X is open source and will be made available to all through a set of public GitHub repositories in Summer 2024, including C/C++ and Python 3 libraries and a set of tools to allow integration with observatory applications. Command-line tools provide access to the database from the POSIX shell and from any scripting language, and we also provide a configurable tool for archiving the observatory state at regular intervals into a time-series SQL database to create a detailed historical record.
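To make the storage pattern concrete, the following sketch shows how a hierarchical leaf together with its metadata can be kept in a single Redis hash and fetched atomically in one round trip using plain redis-py. It illustrates the idea only; it is not the SMA-X C/C++ or Python library itself, and the key name, field names, and values are hypothetical.

```python
# Hedged sketch: a leaf value plus metadata in one Redis hash (generic redis-py).
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

key = "antenna6:rx:lo_frequency"            # hierarchical path used as the key
r.hset(key, mapping={
    "value": "230.538",
    "type": "float64",
    "dims": "1",
    "timestamp": f"{time.time():.6f}",
    "origin": "rxControl@antenna6",
})

# Atomic retrieval of the leaf with all of its metadata in a single round trip
leaf = r.hgetall(key)
print(leaf["value"], leaf["timestamp"], leaf["origin"])
```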
The Square Kilometre Array (SKA) Central Signal Processor (CSP) is a real-time backend system that processes incoming astronomical signals to produce visibilities and to detect and profile pulsars. The CSP is composed of the Local Monitoring and Control (LMC), the Correlator and Beam-Former (CBF), and the Pulsar Search and Timing (PSS, PST) engines. Each subsystem is developed by a different team in the SKA control software domain following the Scaled Agile Framework (SAFe) to guarantee coherence in the development. Defining an engineering User Interface (UI) for the CSP is challenging because of the variety of skills required to identify the most relevant design concepts and the potential roadblocks to an effective representation, and because several teams are involved. For this reason, we chose a collaborative design approach that fits easily into SKA's biweekly sprint cadence while involving experts from different fields in a "think outside the box" process. Sketches and wireframes undergo multiple refinement sessions that lead to an engineering dashboard representing the current state of the CSP implementation. User testing sessions are the means by which the success of the proposed UI is measured. Additional positive effects are alignment across different teams on the current capabilities of the system and its future development, as well as a way of continuously adapting the UI to the system's evolution. In this paper, we describe the challenges we faced while coordinating the design across multiple teams, show how the process was implemented to fit the short agile iterations and the overall SAFe framework, and present the results of the work.
The Square Kilometre Array Observatory (SKAO) project aims to develop scientific and control software within a collaboration involving more than 30 teams distributed worldwide. The agile method tailored for large collaborations known as SAFe (Scaled Agile Framework) has been adopted to manage such a complex scenario. SAFe provides principles and practices for coordination among the various teams involved in the incremental development of software, ensuring a global view of the project status at the individual team and program board levels. The CREAM team is a specialized team responsible for developing both the Local Monitoring and Control software of the Central Signal Processing subsystem (CSP.LMC) and the web graphical interface named Taranta. Within the team, it was observed that, in some cases, Taranta features released according to the Definition of Done (DoD) criteria and in alignment with management's requests did not fully meet user needs once adopted by teams. The lack of a process allowing features to be reconsidered and redeveloped, coupled with uncertainty about when such issues could be addressed within the Program Increment, led teams to either underutilize the software or lose confidence in it. One hypothesis is that the problem stems from a missing beta testing process for features at the time of release. In the implemented SAFe process, features are demonstrated at release time during a system demo, and the team can make only limited adjustments based on the feedback collected during the session, typically because they are already working on another feature. This paper suggests an approach to conducting beta tests that is well integrated within the SAFe framework and specifically its two-level iterations, quarterly and bi-weekly. The challenges are to spend enough time exploring and detecting UX issues (through a beta testing process) and to deliver a solution within consecutive sprints while still adhering to the DoD. The paper outlines the steps for selecting beta users, the types of tests conducted, how feedback was collected, and the final considerations. We believe this approach of testing with small groups of SKAO personnel can be standardized for potential adoption across all SKAO teams, and potentially also by other large scientific projects that rely on agile development methods.
The National Science Foundation's Daniel K. Inouye Solar Telescope (DKIST) is a 4-meter solar observatory in operation at Haleakalā, Hawaii. The High-Level Software (HLS) group develops and maintains software and control systems for the observatory. During the nearly 20-year-long observatory construction phase we used the Concurrent Versions System (CVS) as the revision control component of our software configuration management process. As we transitioned into the observatory operations phase, we began looking at a more modern revision control system that would offer more flexibility and control for software development going forward. Through our long-term planning process, the decision was made to transition from CVS to the Git revision control system. In this paper we describe the motivation for moving from CVS to Git for software revision control at DKIST and explain the planning involved in ensuring a smooth transition. We review the challenges faced, the planning steps involved, and the migration results, and look at lessons learned from the process. We conclude by sharing initial feedback from the team on their experience with Git so far.
The Gaia Legacy idea was born to enhance the platform of Gaia's Big Data science data center in Turin and to become a center for the management, visualization, processing, manipulation, and analysis of large amounts of data, which requires the development and implementation of innovative systems with an exascale approach guaranteeing high performance. The system responds to the scientific needs of the INAF community beyond the core science of the Gaia mission itself, following a multi-messenger approach, for example the characterization of cosmological gravitational waves and of degenerate binary systems in the Milky Way. The system will extend its capability to engineering data collected by space instrumentation for studies of future missions, observation calibration, and qualification of instrumental models. We present the Gaia Legacy repository project, whose goal is the generation of a deep and complete sky over 4π steradians as a reference tool, interoperable for the integration of multi-band data (from radio to high energies) and multi-messenger data (e.g. sources of gravitational waves, neutrinos, ...) for efficient data mining aimed at fast multidimensional scientific data exploitation.
The Square Kilometre Array precursors are starting to release the first data of their large-field continuum surveys, making it clear that in radio astronomy, too, deep learning is becoming the primary solution for handling an overwhelming volume of data. Within this framework, our research group is taking a forefront position in various research initiatives aimed at assessing the effectiveness of machine learning techniques on survey data from ASKAP and MeerKAT. In this work we show how an unsupervised multi-stage pipeline is able to discover physically meaningful clusters within the heterogeneous Supernova Remnant (SNR) population: a convolutional autoencoder extracts features from multiwavelength imagery of an SNR sample, and an unsupervised clustering process then operates on the latent space. Despite a large number of outliers, we were able to derive a new classification scheme in which most clusters relate to the presence of certain features, concerning not only the morphology but also the relative weight of the different frequencies.
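The two-stage idea, compressing each multiwavelength cutout with a convolutional autoencoder and then clustering in the latent space, can be sketched compactly as below. The network sizes, the random stand-in data, and the choice of k-means are illustrative assumptions and do not reflect the configuration used on the ASKAP/MeerKAT SNR sample.

```python
# Hedged sketch: convolutional autoencoder features + clustering on the latent space.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAE(nn.Module):
    def __init__(self, n_channels: int = 3, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_channels, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Stand-in for multiwavelength cutouts: (n_sources, n_bands, 64, 64)
cutouts = torch.rand(200, 3, 64, 64)
model = ConvAE()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                               # a few reconstruction epochs
    recon, _ = model(cutouts)
    loss = nn.functional.mse_loss(recon, cutouts)
    optim.zero_grad()
    loss.backward()
    optim.step()

with torch.no_grad():                            # cluster in the latent space
    _, latent = model(cutouts)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(latent.numpy())
```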
This paper describes the final integration and commissioning of the Gemini High Resolution Optical Spectrograph (GHOST) instrument control software. The instrument was developed at three separate organizations, starting in 2011 and finishing in 2023, with the software control system undertaken by a team at the Australian National University. This scenario presented challenges during development and, ultimately, during integration at the various labs in Australia and Canada and commissioning at the Gemini South telescope in Chile. We describe the software aspects of this process.