Outstanding questions on data reduction strategy
- Are we requiring all reducers to use the pipeline that will be released by the data reduction and data quality group? Or can a reducer decide to use their own pipeline?
SJS : for the management plan, we say that ALL data will be reduced with a single version of the pipeline. We will discuss in Belfast exactly how to manage this.
- Are we training reducers (on the pipeline) before they go to observe?
SJS : as we now have a European support team, the question should be framed differently, but it is still relevant. I think the team MUST be capable of reducing data fast with the pipeline. Reducers must be trained on EFOSC2 data in particular, and capable of fast reductions.
- Are we installing the pipeline on each reducer's laptop? Or are we providing an account on a machine to which all reducers can connect to reduce the data? [the pipeline will probably be a Python script based on standard Python modules (pyraf, numpy)]
SJS : it would be best if we provide an account on a machine, with all reductions done on one platform with one pipeline - that way we retain direct control over the version in use.
- Given that we are using ETABASE for the public spectra, where are we storing the pre-reduced images?
SJS : storing the "detrended" 2D images (i.e. bias-subtracted, flat-fielded, defringed, wavelength-calibrated) is essential. This will allow anyone to go back to the image to redo the extraction. The files required to get to flux-calibrated spectra (the spectroscopic standard and the telluric standard) must also be made available in a "data package" together with these detrended 2D frames.
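As an illustration of the first detrending steps mentioned above (bias subtraction and flat-fielding), a minimal numpy sketch is given below. The function name `detrend` and the toy frames are hypothetical, not part of the actual pipeline; a real reduction would read EFOSC2 FITS frames (e.g. via pyraf) and also handle defringing and wavelength calibration.

```python
import numpy as np

def detrend(raw, bias, flat):
    """Illustrative detrending: subtract the bias frame, then divide by
    a median-normalised flat field. Inputs are 2D arrays standing in for
    EFOSC2 frames; this sketch omits defringing and wavelength calibration."""
    debiased = raw - bias
    norm_flat = flat / np.median(flat)  # normalise flat to unit median
    return debiased / norm_flat

# toy frames: uniform 1000-count illumination on top of a 200-count bias level
raw = np.full((4, 4), 1200.0)
bias = np.full((4, 4), 200.0)
flat = np.full((4, 4), 2.0)

detrended = detrend(raw, bias, flat)
print(detrended[0, 0])  # → 1000.0
```

Keeping the output of this stage as a 2D frame (rather than only the extracted 1D spectrum) is what allows anyone to redo the extraction later, as noted above.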