1. INTRODUCTION

The VIDEO instrument is a small, compact instrument with an extra-wide field of view. Based on an exclusive Thales Alenia Space patent combining freeform mirrors in a smart, compact TMA optical design with the latest advances in material development, the VIDEO instrument will be able to acquire high-resolution images as well as to perform video monitoring of a wide scene. The VIDEO project consortium includes 6 entities from 3 European countries, combining the skills of academics, SMEs and three large industrial companies, among which the project coordinator, Thales Alenia Space.
The VIDEO instrument will include the latest advances in European technologies for the telescope structure and mirrors, as well as the latest innovative solutions for video detection, acquisition, processing and compression. As the final purpose of the VIDEO instrument is to demonstrate video monitoring of a wide scene with autonomous motion detection and ranging capability, the project includes the development of a video channel integrating the latest CMOS matrix technologies as well as smart algorithms. An end-to-end ground demonstration with a downscaled demonstrator instrument will be performed in the frame of the project to prove the capacity of the overall European supply chain to produce, assemble and test the VIDEO instrument by drawing on the best skills in Europe in all these domains.

2. VIDEO INSTRUMENT PRINCIPLE DESCRIPTION

At system level, the VIDEO concept is based on the principle of high temporal revisit with automated detection of regions of interest. The concept is well adapted to LEO constellations (or trains of satellites) and can be adapted to several use cases and operational concepts (fire detection, flood detection, deforestation identification, maritime disaster monitoring, ship identification and monitoring, the fight against maritime piracy, etc.). The VIDEO instrument is based on a Korsch TMA (three-mirror anastigmat) telescope optical design in order to have a wide field of view with a reduced number of optical surfaces. On top of that, the design is fully reflective in order to allow growth potential over a wide range of wavelengths for future use (from the UV to the IR band). The instrument is based on a cubic geometry so that several instruments can easily be stacked together, either to increase the satellite swath or to increase the number of spectral channels covering the same field of view.
The focal plane assembly (FPA) will be based on a single large 2D matrix detector (the target is the 220 Mpix Gigapyx detector, a derivative of the current 46 Mpix version of the Gigapyx family, but the instrument image plane could be compatible with larger matrices) in order to simplify the architecture (no spatial registration nor stability constraints between several detectors). The baseline material for the focal plane is AlSi40 (to preserve the athermal properties of the overall structure), but several options remain open, as the need for thermoelastic stability of the FPA is less stringent with a single detector per instrument.

2.1. Instrument principle

The main goal of the VIDEO instrument is to perform video acquisition on a small satellite with two main features, a detection mode and an identification mode. The detection mode allows continuous background imaging with strongly degraded signal and resolution (due to the high-speed swath on the ground). Thanks to a smart on-board algorithm, it can identify shapes or events that automatically trigger the identification mode with permanent video. The SNR is improved through post-accumulation of pixels from successive frames imaging the same area on the ground. During the video identification mode, the algorithm decides whether to perform multi-window acquisition, depending on the objects detected and identified in the whole scene, for ranging and storage purposes. The data to be downloaded are chosen among the relevant analyzed windows in order to minimize the size of the downlinked data. In video identification mode, the instrument line of sight is stabilized towards a fixed point on the ground.

2.2. VIDEO instrument technical choices

The VIDEO project proposes a set of breakthrough technologies for Earth Observation instruments.
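The post-accumulation principle behind the detection mode (averaging successive co-registered frames of the same ground area, so that uncorrelated noise averages down) can be sketched as follows; the scene level, noise figure and frame count below are purely illustrative assumptions, not instrument values:

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulate(frames):
    """Average co-registered frames: uncorrelated noise drops as 1/sqrt(N)."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Illustrative scene: uniform radiance level 100 with additive read noise (sigma = 10).
scene = np.full((64, 64), 100.0)
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

snr_single = scene.mean() / np.std(frames[0] - scene)
snr_stacked = scene.mean() / np.std(accumulate(frames) - scene)
gain = snr_stacked / snr_single  # expected close to sqrt(16) = 4
```

With 16 frames imaging the same ground area, the SNR improvement approaches a factor of 4, which is why the downgraded detection-mode signal remains usable.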
One key choice of the VIDEO instrument is to combine the latest Additive Manufacturing technologies with the development of the low Coefficient of Thermal Expansion (CTE) AlSi40 material, in order to use the same material for both structure and mirrors. AlSi40 is a material specifically developed so that its CTE can be tuned to the desired value through the proportion of Si added to the aluminum matrix (“40” refers to the proportion of Si in the material). Thanks to this, the instrument will have extremely stable and homothetic behavior from an optical point of view, as well as an optimized stiffness-to-mass ratio. Because both mirrors and structure are AlSi40-based, the instrument will be among the best in class in terms of demisability (due to the low melting point of AlSi40 and the lightweight additively manufactured structure), with a view to being part of future Low Earth Orbit small-satellite observation constellations.

2.3. Optical design

The optical design is a Korsch solution with three freeform mirrors. It is based on the Thales Alenia Space patent “Telescope anastigmat à 3 miroirs de type Korsch”. The main advantages are:
This design has an intermediate image and a real exit pupil, which can be baffled in order to limit stray light.

2.4. Structure and mirrors

The instrument is based on a monolithic structure, used both as a primary structure to hold the mirrors and as a secondary structure to hold, for instance, parts for active and passive thermal control and stray light mitigation: diaphragms, vanes and baffles if necessary. Thanks to additive manufacturing, there are no intermediate links inside the structure, only external interfaces (for mirrors, platform, thermal control and stray light mitigation). The structure and mirrors were designed with topology optimization in order to increase the stiffness-to-mass ratio of the whole instrument. The mirrors are made of the same material as the structure in order to preserve the athermal properties of the instrument (essentially no focus sensitivity under thermal environment changes). The structure and mirrors are produced by additive manufacturing in order to optimize the specific stiffness (stiffness-to-mass ratio) of the overall instrument. The link between the structure and each mirror is achieved through three isostatic bonded areas (gluing boxes) in order not to induce stresses in the mirrors after optical alignment, and to allow reversible mirror mounting in case of anomaly or insufficient performance. The instrument is attached to the platform through interfaces at its four corners. In that way, the instrument can be accommodated in several configurations and on various platforms.

2.5. Detector

The detector for flight purposes will be as large as possible and will use the future 220 Mpixel version of the Gigapyx sensor, to be compatible with wide field of view applications.
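To give a feel for the data volumes such a detector implies, a rough upper bound on the raw frame rate can be derived from the interface figures quoted for the Gigapyx family (220 Mpix, 12-bit samples, 256 sub-LVDS lanes at 860 Mbps). This is a back-of-the-envelope sketch that ignores link protocol overhead, blanking and exposure constraints:

```python
# Rough upper bound on the raw frame rate of a 220 Mpix, 12-bit sensor read out
# over 256 sub-LVDS lanes at 860 Mbps (figures quoted for the Gigapyx family).
# Protocol overhead, blanking and exposure constraints are deliberately ignored.

lanes = 256
lane_rate_bps = 860e6
pixels = 16640 * 13200        # ~220 Mpix
bits_per_pixel = 12

throughput_bps = lanes * lane_rate_bps     # ~220 Gbit/s aggregate link capacity
bits_per_frame = pixels * bits_per_pixel   # ~2.64 Gbit per raw frame
max_fps = throughput_bps / bits_per_frame  # ~83 frames/s upper bound
```

Even as an upper bound, this illustrates why on-board compression and windowed (ROI) readout are central to the video chain.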
After validation of the 46 Mpixel version, resizing the design should be straightforward, as the same stitched block will be used, even if the whole video chain will have to be validated with much more data to process. A particular challenge will be to design a large package compatible with both the die dimensions and a space mission. The 220 Mpixel (16640 x 13200) Gigapyx detector exhibits a pixel pitch of 4.4 µm in BSI technology, in a monochrome or RGB format. The detector operates in rolling-shutter mode with a readout noise as low as 1.5 e- rms in high gain. A low gain is also available and allows HDR (High Dynamic Range) operation. A 12 to 14 bit ADC is integrated on chip, with adjustable analog gain from x0.5 to x8. The Gigapyx technology offers many advanced functions: multiple Regions of Interest (ROI), flip and skipping modes, advanced burst-sequence and page configuration with multiple user settings, color-compatible 2x2 and 4x4 binning, analog and digital gains, digital offset, etc. Up to 256 selectable sub-LVDS data lanes at 860 Mbps are available for high-speed operation.

2.6. Image processing

2.6.1. Compression

Compression is handled by means of a prediction-based near-lossless to lossless algorithm. Concretely, the proposed solution is based on the algorithm defined by the Consultative Committee for Space Data Systems (CCSDS) in the CCSDS 123.0-B-2 standard. CCSDS 123.0-B-2 is a low-complexity solution for lossless and near-lossless compression, designed for multispectral and hyperspectral images. In the VIDEO project, however, it is adapted, with some modifications, to compress panchromatic/RGB video sequences. This approach provides very important benefits over other possible alternatives:
As a starting point, a version of the predictor developed in a previous project at the University of Las Palmas de Gran Canaria (ULPGC) is used, adapted to the requirements of this mission for compressing video sequences.

2.6.2. Detection and tracking

Detection is handled using a convolutional neural network (CNN) approach. The main reasons for this choice are the high detection performance of neural networks when enough data are available, as well as their flexibility for detecting multiple and different kinds of targets with a single network. On the one hand, since the goal of this project is to collect video sequences, it is assumed that a large amount of data will be available for training the deep learning models. On the other hand, once a particular network structure has been designed and implemented on the hardware available on board, its detection performance can be continuously improved by updating the neural network weights (after carrying out larger training stages on the ground). Additionally, it can be trained to detect different kinds of targets; the detection behavior of the network is switched by simply selecting the corresponding weights, without changing the network itself or its implementation. Different standard neural network architectures have been tested. The experiments have been carried out on a workstation, using Python as the description language. This includes an efficient way to implement this kind of network architecture in hardware-friendly C, guaranteeing that it can later be implemented efficiently on Field Programmable Gate Array (FPGA) devices. It is also important to find out which kinds of layers fit best in this kind of device, in order to select or develop an appropriate neural network architecture.

2.7. Reduced-scale demonstrator (end-to-end test)

The VIDEO project development comprises an end-to-end demonstration based on a reduced-scale instrument for functional tests.
The instrument demonstrator is scaled from the flight instrument architecture with a 1/3 ratio. The VIDEO telescope will be fully integrated and tested in a single plant (Thales Alenia Space in France) to minimize transport and configuration-change durations. The whole telescope demonstrator assembly and integration process will be performed in ISO 8 conditions as far as possible, in order to limit particulate contamination of the optics and sensor parts. Verification is implemented throughout the manufacturing and AIT cycles to ensure a successful integration campaign. The main alignment steps and performance tests (Wave Front Error) are derived from set-ups already validated on previous instruments at Thales Alenia Space in France, and will ensure good performance, efficiency and capitalization between the demonstrator and future flight models. The Gigapyx 46 Mpixel format will be used for the demonstrator phase in the frame of this demonstrator test (demokit format instead of a real focal plane). The target is to prove the concept and validate the technology. Regarding data processing and video channel validation, the demonstration will mainly focus on boat detection and tracking. Once a feasible solution has been achieved, it could be extended to try to detect other kinds of targets.

3. VIDEO INSTRUMENT DEVELOPMENT STATUS

The development status of the technologies presented here focuses on the demonstrator parts. Due to the reduced scale, the demonstrator has a specific optical design.

3.1. Demonstrator optical performances

The WFE performances (@632.8 nm in the field) of the demonstrator are given below, taking into account the following contributors:
As the gravity effect has not been studied yet, the factory-level WFE is evaluated without it. The demonstrator transmission performance takes into account the reflectivity coefficient of the mirror coating (protected silver) and an allocation for the transmission of the filter and the window; the resulting transmission value is shown in the following table. The impact of particulate and molecular contamination on the transmission is not taken into account in this value.

3.2. Freeform mirror development

AlSi40 mirrors already exist at TRL 6 when produced by classical production processes, but the VIDEO development will demonstrate that it is possible to build AlSi40 freeform mirrors with an additive manufacturing process. Mirrors will be made from additively manufactured blanks, then diamond-turned and finally polished to meet the Wave-Front Error (WFE) specification. This technology is not yet used for making optical mirrors, and this is where the innovation lies. The three mirrors of the demonstrator have the following characteristics. In order to minimize the error on the optical shape of the freeform mirrors, the Sag(x,y) of the surface is defined beforehand and compared to the equation of the mirror surface. The concept of topology optimization is to obtain an ideal material distribution in a designated volume to which constraints are applied. The aim can differ from one project to another, for example to enhance mechanical performance or to save mass. The solver works on a Finite Element Model that has to be built first, including boundary conditions and loads. In the data settings (the parameters of the topology optimization), it is required to define the objective of the optimization, the constraints (for example, the range limiting the responses) and the responses (what is calculated, for example force, stress, displacement, etc.).
Through an iterative process, the solver assigns a normalized density to each element (a kind of sorting of elements) in order to best satisfy the objective and constraints. When the calculation has converged, the user has to select a threshold on the elements' density, between 0 and 1, which conditions the shape and thicknesses of the resulting design. In the case of the VIDEO mirrors, many iterations were carried out on the data settings before finding the best conditions for the topology optimization, by defining the crucial parameters (objective, constraints and responses) and the right combination of boundary conditions and loads. Also, three finite element models for M1 and three for M3 have been made since the beginning of the study: one to change the mesh size, and one because the geometry changed slightly. Topology optimization has been performed for the demonstrator mirrors M1 and M3 as if they were flight-instrument mirrors, particularly with regard to the mechanical environments and the optical performances. In this framework, the applied loads are detailed in the following list:
No thermal environment has been implemented in the simulations, for lack of a specification. SFE, for Surface Form Error, corresponds to the deformation of the front face of the mirror, expressed as a quadratic sum of Zernike polynomials. Topology optimizations have been performed with the OptiStruct® software because it makes it possible to integrate this kind of opto-mechanical parameter (thanks to an external Fortran code). A large volume at the rear of the mirrors is given to OptiStruct® as design space, so that the topology optimization has maximum room to distribute the material. Conversely, the elements of the front face and of the three external ears (red zones in the pictures above) are assigned to the non-design space (they are not taken into account during the optimization and remain as they are). Several iterations were necessary to set the right mechanical subcases combined with the suitable optimization constraint. First, load cases were applied one at a time to the starting volume to see their impact on the design; then they were combined in the same optimization run to obtain a unique design. As the OptiStruct® solver is not able to optimize the thickness of the front face by itself (because of the 3D element modeling and the non-design space definition), the mesh was prepared beforehand with a front face composed of 3 layers of 1 mm high elements. Thus, several topology optimizations were run 2 or 3 times, each with a different front face thickness. Below are presented some examples extracted from the numerous intermediate designs obtained since the beginning of the study. From these results, the next step is to interpret the design (choice of the threshold) and to smooth the shapes where useful. The smoothing is in fact a step where the engineer's judgment comes into play, to obtain a part that responds mechanically to the needs and that can also be manufactured.
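The thresholding step described above (keeping only the elements whose converged normalized density exceeds a chosen value) can be sketched as follows; the density field here is random stand-in data, not solver output:

```python
import numpy as np

rng = np.random.default_rng(1)
densities = rng.random(10_000)  # stand-in for converged element densities in [0, 1]

def retained_fraction(densities, threshold):
    """Fraction of design-space elements kept when densities below the threshold are voided."""
    return float(np.mean(densities >= threshold))

# Sweeping the threshold shows the trade the engineer arbitrates: a higher
# threshold keeps less material and yields a thinner, lighter interpreted part.
sweep = {t: retained_fraction(densities, t) for t in (0.3, 0.5, 0.7)}
```

The chosen threshold is then the starting point for the smoothing and CAD re-creation steps, where manufacturability is judged.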
Then, it is necessary to re-create the design in CAD format, as the post-processor of the topology optimization does not provide a format on which mechanical analyses can be performed immediately. After that, the solid from the CAD file can be meshed with solid (3D) elements to obtain a finite element model for the final mechanical analyses. The geometries shown hereafter also integrate recommendations from partners to provide better conditions for both manufacturing and polishing (reference surfaces, thicknesses). To take the additive manufacturing and diamond turning processes into account when defining the mirrors, several iterations were carried out on the reference surface and on the calibration of the thickness, so as to avoid any deformation. Final modifications have been made to the models:
As the final step of the mirror design, a proper model and load cases are applied to verify the mechanical performances of the mirrors. These analyses have been performed taking into account the extra-material modifications and the external nickel plating (NiP) layer that enables the mirror to be polished. The load cases and associated boundary conditions are detailed below:
The first batch of mirror blanks was produced in July 2021, but unfortunately some cracks were identified on the blanks, which prevented the mirrors from being polished. A backup with standard milled mirrors was launched in parallel with the new batch of mirrors, after correction of the machine set-up. The mirrors are presently in SPDT (Single Point Diamond Turning) polishing at AMOS facilities. The AlSi40 material offers good compatibility (in terms of CTE) with the NiP plating used for the active optical surface layer. The polishing process includes the use of an SPDT machine and robots for post-polishing activities; if necessary, AMOS has the capability to use an IBF (Ion Beam Figuring) machine to finalize the process down to the required values. During polishing, the mirrors will be measured several times with a dedicated metrology setup based on an interferometer and CGHs (Computer-Generated Holograms), one for each mirror. Through 3D CMM (Coordinate Measuring Machine) and laser tracker measurements, it will be possible to position and align the mirrors on the setup with the required precision.

3.3. AlSi40 Structure development

The telescope structure was also designed with topology optimization. At the beginning, the primitive geometry shown below was assigned as the design space of the topology optimization, except for the external interfaces and optics interfaces. It corresponds to the envelope volume of the instrument, in which the optical path has been cleared. The next step is to proceed to meshing and to model suitable, realistic loading conditions. Data settings are defined for the optimization. The first raw result of the optimization is given below. Based on these preliminary studies and on co-engineering work with the AddUp team on the feasibility of the structure, the final design was issued.
The first structure was issued without Laser Beam Melting (LBM) manufacturability constraints, in order to focus only on the needs coming from mechanics and interfaces at telescope level. Then a co-engineering phase was performed with AddUp in order to take into account the feasibility and constraints of the LBM process. This mainly consisted in performing the following tasks on the structure model:
The expected mass of the final structure (without mirrors) is less than 1.3 kg. Structure verification is performed to check that there are no obvious weaknesses in the design, because the demonstrator is not expected to sustain flight environments (mechanical or thermal). The maximum stress is 32 MPa under a QSL of 30 g. Considering the yield limit of the material (228 MPa) and the associated safety factor (1.875), the minimum residual margin for the telescope demonstrator structure is +280%. The telescope structure design of the demonstrator is therefore well sized and justified, even if there will be no environmental testing in the frame of the H2020 VIDEO project. The approach to the design and sizing of the demonstrator structure is thus quasi-identical to the approach that would be applied to the structure of a telescope flight model. Nevertheless, due to the scale ratio of the demonstrator structure (1/3 with respect to the flight model), there is a risk that the conclusions of the topology optimization and the manufacturing constraints will lead to a slightly different design and shape for the telescope flight model. The final structure of the telescope demonstrator will be produced before the end of 2022 in the frame of the VIDEO project.

3.4. Video chain development

3.4.1. Video chain principle

The goal of the VIDEO chain is to:
The video channel of the camera is described as follows: the Gigapyx detector module in its demokit interfaces with a proximity electronics that drives the sensor, and with a digital processing electronics based on a re-programmable FPGA, in order to implement multiple complex real-time processing functions on a single piece of hardware.

3.4.2. Gigapyx Sensor development

The Gigapyx sensor development has been finalized in the frame of the VIDEO project; the first version of this sensor (in its 46 Mpixel definition) has been produced and tested. The measured characteristics of this sensor after manufacturing are presented in the following table.

Figure 34. Gigapyx 46 Mpix main performances (according to test report), (a) with and without FPN correction.

The future (bigger) version of the Gigapyx sensor will use more stitched blocks in both directions than the first version. For end-to-end test purposes, the 46 Mpix sensor is provided in a demokit for accommodation on the VIDEO demonstrator instrument, with all the features necessary to operate the sensor in the frame of the ground demonstration.

3.4.3. RGB Video compression efficiency

In this work, the CCSDS 123.0-B-2 algorithm for near-lossless compression of multi- and hyperspectral images has been adapted to compress RGB video sequences while remaining fully compliant with the standard and without modifying its core functionality. The goal of this approach is to provide a solution for remote sensing applications that allows the compression of data of different natures with a single compression core that can be executed efficiently on board satellites. To do this, the approach followed consists in using the temporal domain to predict the information of subsequent video frames, instead of the previous spectral channels as is done in the CCSDS 123.0-B-2 standard. Different experiments have been carried out to validate the compression performance of the developed compression chain and its different configurations.
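The temporal prediction idea can be illustrated with a deliberately simplified sketch: predict each frame from the previous reconstructed frame and quantize the residual with a bounded error, as in near-lossless coding. This is not the CCSDS 123.0-B-2 predictor itself (which is adaptive and far more elaborate); it only shows the principle, with illustrative frame data:

```python
import numpy as np

def quantize(residual, m):
    """Uniform residual quantizer with guaranteed |error| <= m (m = 0 is lossless)."""
    return np.sign(residual) * ((np.abs(residual) + m) // (2 * m + 1))

def encode_frame(frame, prev_rec, m):
    """Predict from the previous *reconstructed* frame so encoder and decoder stay in sync."""
    q = quantize(frame.astype(np.int64) - prev_rec.astype(np.int64), m)
    return q, prev_rec.astype(np.int64) + q * (2 * m + 1)

def decode_frame(q, prev_rec, m):
    return prev_rec.astype(np.int64) + q * (2 * m + 1)

rng = np.random.default_rng(2)
prev = rng.integers(0, 4096, (32, 32))        # previous 12-bit frame
frame = prev + rng.integers(-5, 6, (32, 32))  # next frame: small temporal change
q, rec = encode_frame(frame, prev, m=2)
# |frame - rec| is bounded by m = 2, and the residuals q are small integers,
# which is what makes them cheap for the entropy-coding stage.
```

Because consecutive video frames of the same scene are highly correlated in time, the residuals are small, which is exactly the property the spectral predictor of the standard exploits across bands.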
The verification has been automated to accelerate the process, using a Python-based test framework that compresses and decompresses the input video sequence and generates the reconstructed video for analysis. Different reports are generated for each test case, including compression ratio and distortion. Datasets of several video sequences from real remote sensing scenarios have been used to assess the quality of the CCSDS-123 standard for RGB video compression. In the end, after an exhaustive search, a set of parameters has been identified that provides the best results in terms of compression rate and distortion ratio, not only for RGB video but also for panchromatic video compression. Typical results in terms of compression ratio versus reconstructed video quality (measured as Peak Signal-to-Noise Ratio, PSNR) are shown below. The results obtained demonstrate the suitability of the proposed solution for on-board remote sensing applications, having achieved compression ratios of up to 39 without any observable degradation (i.e. almost lossless). Higher ratios can be achieved at the cost of decreasing the decompressed video quality. Future work may also include the modification of the proposed CCSDS 123.0-B-2 predictor to work with Regions of Interest (ROI). This would make it possible to specify the relevant spatial areas that need to be preserved with a higher level of detail and the areas that can be compressed more aggressively.

3.4.4. Convolutional Neural Network (CNN) architecture efficiency for detection

In order to select the right architecture, a wide evaluation of existing CNN models suitable for a resource-constrained hardware implementation has been carried out. Five different architectures have been evaluated (AlexNet, VGG Network, ResNet, MobileNet, DenseNet), measuring both the detection performance and the computational cost.
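To give a feel for why a MobileNet-style network scores well on the computational-cost criterion, the sketch below compares the multiply-accumulate (MAC) count of a standard convolution with that of the depthwise-separable convolution MobileNet is built on; the layer dimensions are illustrative, not taken from the project:

```python
def conv_macs(h, w, cin, cout, k):
    """MACs for a standard k x k convolution at stride 1 with 'same' padding."""
    return h * w * cin * cout * k * k

def separable_macs(h, w, cin, cout, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return h * w * cin * k * k + h * w * cin * cout

# Illustrative layer: 112 x 112 feature map, 64 -> 128 channels, 3 x 3 kernel.
standard = conv_macs(112, 112, 64, 128, 3)
separable = separable_macs(112, 112, 64, 128, 3)
ratio = standard / separable  # roughly 8x fewer operations for the separable form
```

This order-of-magnitude reduction in operations per layer is what makes a lightened MobileNet derivative a natural fit for an FPGA implementation with limited resources.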
In the end, considering the dedicated figure of merit for the targeted scenario (boat detection), the training parameters and the computational cost, a lighter architecture derived from the MobileNet CNN, called MobileNetv1Lite, has been identified as the most suitable option for this work; as a result, this architecture was the best candidate for the FPGA implementation in the project. After selecting the most suitable architecture, the next step is the hardware implementation. The FPGA device selected in the VIDEO project is the Xilinx Kintex UltraScale XCKU040-2FFVA1156E. The implementation of the selected CNN architecture on the target FPGA will be performed in the last stage of the project, for the end-to-end test.

3.5. End-to-end test

Since the demonstrator end-to-end test is the final goal of the project, the main activities performed by Thales Alenia Space in Spain are related to the preparation of the test:
The following scheme describes the software and test hardware parts of the end-to-end configuration set-up, together with the corresponding responsibilities (blue = Thales Alenia Space in Spain, green = Pyxalis, orange = ULPGC). The data, in a format readable by ULPGC's algorithms, will be an output of the Pyxalis camera software. The raw data from the camera will also be output for validation checks of the algorithms. Software produced by Thales Alenia Space in Spain will perform the following functions:
This test bench will also be used during the development of the camera:
The end-to-end test includes a set of revolutionary features for instrument testing, including extended scene simulation (thanks to a Sony OLED ECX335S micro-display associated with a collimator) and a synthetic image database (generated by Thales Alenia Space in France) for simulation and algorithm training on the two instrument modes.

4. CONCLUSION

The VIDEO project is close to its final achievement: demonstrating that this set of new breakthrough technologies is suitable for innovative optical payload architectures for space. This was possible thanks to the European Commission's support all along the project, but also to the tremendous skills, heritage and know-how of all the consortium partners during the development of the system, sub-systems and technologies. The final demonstration and validation will be held in Madrid, at the Thales Alenia Space in Spain facilities, in 2023.

ACKNOWLEDGEMENT

This work has been conducted within the Video Imaging Demonstrator for Earth Observation (VIDEO) project, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 870485. This publication reflects only the authors' view. The Agency is not responsible for any use that may be made of the information it contains.

REFERENCES

Florence Montredon,
“Additive Manufacturing strategies for Space Applications,” Thales Alenia Space, Additive Manufacturing for Defence, Aerospace & Space congress, London, United Kingdom (2016).
J.Y. Plesseria, L. Jacques, P. Gailly, C. Lenaerts, K. Fleury-Frenette, Y. Garin, A. Chiavarini, A. Heck, C. Borbouse, F. Montredon, E. Chouteau, I. Liémans, P. Bigot, L. Pambaguian, “Development and test of three and a half space applications using additive manufacturing technologies,” ECSSMET (2018).
Romen Neris, Adrian Rodriguez, Raul Guerra, Sebastian Lopez, and Roberto Sarmiento, “FPGA-based implementation of a CNN architecture for the on-board processing of very high resolution remote sensing images.”
Yubal Barrios, Raul Guerra, Sebastian Lopez, and Roberto Sarmiento, “Adaptation of the CCSDS 123.0-B-2 Standard for RGB and Multispectral Video Compression.”