Proceedings Article | 24 September 2012
KEYWORDS: Databases, Data processing, Astronomy, Statistical analysis, Data centers, Human-machine interfaces, Signal to noise ratio, Telescopes, Stars, Image processing
Before LAMOST spectra are released, the raw data must pass through a series of processing steps after observation, i.e. a pipeline, including 2D reduction, spectral analysis, and eyeball identification. Integrating these steps through a database is a sound strategy: it reduces the coupling between related modules, so that modules can be added or removed more conveniently, and it makes the dataflow clearer. The information on a specific object, from target selection through intermediate results to the final spectrum product, can be efficiently accessed and traced back by querying the database rather than by reading FITS files.
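As a rough illustration of this trace-back idea, the sketch below queries a relational copy of the pipeline database by object ID; the table and column names (target, reduction_2d, spectral_analysis, spectrum, obj_id) are hypothetical placeholders, not the actual LAMOST schema.

```python
# Minimal sketch of tracing one object through the pipeline database.
# All table and column names here are illustrative assumptions.
import sqlite3

def trace_object(conn: sqlite3.Connection, obj_id: str) -> dict:
    """Collect every record tied to one object ID, from target selection
    through intermediate 2D-reduction results to the released spectrum,
    via SQL queries instead of opening FITS files."""
    cur = conn.cursor()
    trace = {}
    for table in ("target", "reduction_2d", "spectral_analysis", "spectrum"):
        cur.execute(f"SELECT * FROM {table} WHERE obj_id = ?", (obj_id,))
        trace[table] = cur.fetchall()
    return trace

# Example usage (assumes a local SQLite copy of the pipeline database):
# conn = sqlite3.connect("lamost_pipeline.db")
# history = trace_object(conn, "LAMOST J000000.00+000000.0")
```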
Furthermore, since the pipeline has not yet been perfected, an eyeball check is required before the spectra are released, and an appropriate database shortens the feedback cycle of the eyeball-check results, so that the pipeline can be improved in a more targeted way. Finally, the database can serve as a data mining tool for the statistics and analysis of massive astronomical data. This article focuses on the database design for LAMOST and the data processing flow built on it. The database design requirements of the existing routines, such as their inputs and outputs and the relationships or dependencies between them, are introduced. Accordingly, a database structure suited to multi-version data processing and eyeball verification is presented. The dataflow, i.e. how the pipeline is integrated on top of such a dedicated database system and how it works, is also explained. In addition, user interfaces, eyeball check interfaces, and statistical functions are presented.
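A structure that keeps every pipeline version of a result side by side and records the eyeball-check verdict might look, in rough outline, like the following Python/SQLite sketch; all table names, columns, and status values are illustrative assumptions, not the schema presented in the paper.

```python
# Sketch of a versioned result table with an eyeball-verification status;
# names and fields are assumptions made for illustration only.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS spectrum_result (
    obj_id            TEXT NOT NULL,          -- object identifier from target selection
    pipeline_version  TEXT NOT NULL,          -- e.g. '2D-v1.3 / 1D-v2.0'
    spectrum_fits     TEXT,                   -- path to the produced FITS file
    snr               REAL,                   -- signal-to-noise ratio of the spectrum
    auto_class        TEXT,                   -- classification from the spectral analysis step
    eyeball_status    TEXT DEFAULT 'pending', -- 'pending' / 'confirmed' / 'rejected'
    eyeball_comment   TEXT,                   -- inspector feedback returned to pipeline developers
    PRIMARY KEY (obj_id, pipeline_version)
);
"""

def record_eyeball_check(conn, obj_id, version, status, comment=""):
    """Store an inspector's verdict so the feedback reaches the pipeline
    team without another round of FITS handling."""
    conn.execute(
        "UPDATE spectrum_result SET eyeball_status = ?, eyeball_comment = ? "
        "WHERE obj_id = ? AND pipeline_version = ?",
        (status, comment, obj_id, version),
    )
    conn.commit()

# Example usage (hypothetical database file and identifiers):
# conn = sqlite3.connect("lamost_pipeline.db")
# conn.executescript(SCHEMA)
# record_eyeball_check(conn, "LAMOST J000000.00+000000.0",
#                      "2D-v1.3 / 1D-v2.0", "confirmed", "clean late-type star")
```

Keying each row on both the object ID and the pipeline version is one way to let results from different pipeline releases coexist, so that eyeball checks and statistics can be compared across versions.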