Queued Service Observing with MegaPrime:

Semester 2003B Report



A - Introduction

The Queued Service Observing (QSO) Project is part of a larger ensemble of software components defining the New Observing Process (NOP), which includes NEO (acquisition software), Elixir (data analysis), and DADS (data archiving and distribution). Semester 2003B was the second semester in which MegaPrime was used with the NOP system. It was a much better semester than the previous one, although a lot of time was lost to technical problems, engineering, and, above all, very adverse weather. The overheads with MegaPrime are also longer than with CFH12K, so we are not as efficient on the sky as we were in the past. Semester 2003B was really where we learned how to deal with the additional issues introduced by having a large program with very restrictive constraints such as the CFHTLS. The CFHTLS represents about 50% of all the time allocated for QSO observations, the other half being the regular PI programs, each with their own constraints. As shown below, we were able to balance the time between the different Agencies very well (I must say quite an achievement given the terrible weather we had!). However, the final share of observing time between the different components of the CFHTLS was not satisfactory at the end of the semester. This is hardly surprising, since the largest component of the LS has strong time constraints, which become even more dominant when the weather is unstable and bad for long periods of time. A full solution to this important problem, which will probably include a dynamic way of shifting priorities between programs, remains to be found.

B - General Comments

The semester started really well with a long run in September, with good weather and MegaCam behaving quite well. However, it was during that run that we could start getting a better evaluation of our operational overheads, since, for instance, the guiding acquisition had improved a lot. The overheads are still much larger than we would like, mostly because of focus sequences and filter changes. Later during the semester, engineering for the auto-focus was done, but this feature is not available yet. It appears that MegaPrime is more sensitive to temperature changes and position on the sky than CFH12K was, so focusing must be done more often than in the past. The frequent focus sequences (8-12 per night) add up to about 10% in overheads. Even though the queues are always prepared to minimize the global overheads, filter changes remain very costly: one filter change takes about 90 seconds longer than the readout of the mosaic. Depending on whether we have to switch queues frequently, the number of standard stars observed, and how the PIs requested their observations to be done, as many as 25 filter changes can take place during one night. There are also still some issues with the control software for the guiders and NOP which cause additional time lost on the sky (now much less frequent). Our goal is to reduce the overheads as much as is physically possible with the instrument. Auto-focus (expected to be available at the beginning of 2004) should help, but filter changes will remain costly. A more quantitative description of the overheads is given below.

Starting in October, difficulties arose with the instrument, coinciding with the arrival of bad and unstable weather. During that run, we started to experience a problem with CCD03 in the mosaic. The problem remained intermittent and affected all of the runs for the remainder of the semester. (Note: the faulty connector within the dewar has now been fixed.) Since a failure of one of the chips was not totally unexpected, and since we require that the PIs indicate in their Phase 2 what impact such a failure would have on their science, this was not too difficult to deal with, apart of course from the loss of 2.5% of the data. We also lost some time to weather (a hurricane). At the end of the run, a major failure of the filter jukebox forced us to observe with only one filter on a given night. We do not count that as technical time lost, but it is obviously not ideal for queue observing.

The last three runs of the semester were very severely affected by bad weather. For instance, 65% of the run in November was lost, and about 50% of the run in January. Moreover, another failure of the filter jukebox resulted in some additional time lost. As detailed below, the global fraction of time lost to weather and technical problems in 2003B is very high, in fact a factor of 2 higher than what we could expect. The final statistics for 2003B are thus much lower than we would like. This is discussed in section C below.

Some general remarks on QSO for semester 2003B:

1. Technically, the entire chain of operation, QSO --> NEO --> TCS, is efficient and robust. The time lost to the NOP chain is very small. This is a complex system, and we have worked very hard to reduce its overheads. Glitches appear from time to time, mostly on guide star acquisition, but the system is now quite reliable and efficient.

2. The QSO concept is sound. With the possibility of preparing, in advance of an observing night, several queues covering a wide range of possible sky conditions, a very large fraction of the observations were done within specifications. The ensemble of QSO tools also allows the quick preparation of queues during an observing night, to adapt to variable conditions or unexpected overheads. The introduction of the CFHTLS, with large-scale time-constrained observations, adds significant complexity to queue scheduling and requires much more work in planning the run. For 2003B, however, the global validation rate (validated/observed) was lower than usual. A discussion of this is included in section C.

3. QSO is well adapted to time-constrained programs. The Phase 2 Tool allows the PIs to specify time constraints. Two of the components of the CFHTLS have very restrictive time constraints. We can handle those easily if the weather cooperates (of course!), although the introduction of time-constrained observations on a large scale definitely adds complexity to the scheduling process.

4. Highly variable seeing and non-photometric nights represent the worst sky conditions for the QSO mode. In 2003B, we were short on "snapshot" programs and regular programs requesting mediocre conditions. As a result, because the weather was very unstable, we were often forced to observe programs in conditions worse than requested. Even so, we were able to calibrate all the fields requesting photometry that were originally observed in non-photometric conditions. The availability of SkyProbe and its real-time measurements of the transparency is extremely valuable and is regularly used to decide which observations should be undertaken.
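The decision logic described above (matching pre-built queues to real-time conditions) can be sketched as follows. This is purely illustrative, not CFHT's actual software: the queue names, constraint fields, and thresholds are all invented for the example.

```python
# Illustrative sketch: choosing which pre-built queue to run, given
# real-time measurements such as the SkyProbe transparency. All names
# and threshold values here are hypothetical.

def pick_queue(seeing_arcsec, extinction_mag, queues):
    """Return the first queue whose constraints the conditions satisfy.

    `queues` is ordered from most to least restrictive, so the most
    demanding queue compatible with current conditions is selected.
    """
    for q in queues:
        if seeing_arcsec <= q["max_seeing"] and extinction_mag <= q["max_extinction"]:
            return q["name"]
    return "snapshot"  # fallback for very poor conditions

queues = [
    {"name": "A-photometric", "max_seeing": 0.7, "max_extinction": 0.05},
    {"name": "B-good",        "max_seeing": 1.0, "max_extinction": 0.10},
    {"name": "C-mediocre",    "max_seeing": 1.5, "max_extinction": 0.50},
]

print(pick_queue(0.9, 0.02, queues))  # -> B-good
```

Ordering the queue list from most to least restrictive mirrors the practice of always trying to spend the best conditions on the most demanding programs first.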

5. Observations of moving targets are feasible in queue mode. During the 2003A semester, we implemented a way of preparing observations of moving targets in our Phase 2 Tool (ephemeris tables). The process is a bit laborious but works really well, and several programs used this option in 2003B as well. Non-sidereal guiding is not yet offered.
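The ephemeris-table approach for moving targets amounts to tabulating positions versus time and interpolating to the actual observation time. A minimal sketch, assuming a simple (time, RA, Dec) table with invented values; the Phase 2 Tool's real table format is not reproduced here:

```python
# Hedged sketch of using an ephemeris table for a moving target:
# linear interpolation of RA/Dec to the observation time.
# Table format and values are illustrative only.

def interpolate_ephemeris(table, t):
    """table: list of (time_hours_ut, ra_deg, dec_deg), sorted by time."""
    for (t0, ra0, dec0), (t1, ra1, dec1) in zip(table, table[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return ra0 + f * (ra1 - ra0), dec0 + f * (dec1 - dec0)
    raise ValueError("time outside ephemeris range")

# Hypothetical ephemeris entries at 8h, 10h, and 12h UT:
eph = [(8.0, 150.00, 2.00), (10.0, 150.10, 2.04), (12.0, 150.22, 2.09)]
ra, dec = interpolate_ephemeris(eph, 9.0)
print(round(ra, 3), round(dec, 3))  # -> 150.05 2.02
```

Linear interpolation is adequate when the table is sampled finely relative to the target's motion; denser sampling or higher-order interpolation would be needed for fast movers.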

C - Global Statistics, Program Completeness, and Overheads

1) Global Statistics

The following table presents some general numbers regarding the queue observations for 2003B (C, F, H, K, L, and T, D-time, excluding snapshot programs):

Total number of nights
Nights fully lost to weather                       ~31 (30%)
Nights lost to engineering + technical problems    ~4 + 8 = 12 (12%)
QSO programs requested
QSO programs started
QSO programs completed
Total I-time requested (hr)
Total I-time validated (hr)                        340 (55%)
Queue validation efficiency                        ~80%
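As a rough cross-check of the percentages quoted above: the total number of queue nights is not stated in the table, so the value below is an assumption chosen purely so that ~31 nights corresponds to 30%; it is not a figure from the report.

```python
# Arithmetic cross-check on the statistics table. The total night count
# is ASSUMED (~103) solely to make the quoted fractions come out; the
# report does not state it.

total_nights = 103   # hypothetical
weather_lost = 31
eng_tech_lost = 4 + 8

print(f"weather:  {100 * weather_lost / total_nights:.0f}%")    # ~30%
print(f"eng+tech: {100 * eng_tech_lost / total_nights:.0f}%")   # ~12%
```

Under that assumption the two quoted loss fractions are mutually consistent, i.e. roughly 42% of all queue nights were lost to weather, engineering, or technical problems combined.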


2) Program Completeness

The figure below presents the completion level for all of the programs in 2003B, according to their grade:



3) Overheads

There is no doubt that the overheads with MegaPrime are larger than with CFH12K. The following table includes the main operational overheads (that is, other than the readout time of the mosaic) with MegaPrime during semester 2003B. This is given as a reference; overheads are highly variable during a given night, depending on the conditions, the complexity of the science programs, etc.

Item                     Events per night   Time per event   Total overhead per night
Filter change            15 - 25            90 s             1500 - 2200 seconds
Focus sequence           8 - 12             200 s            1600 - 2400 seconds
Dome rotation > 45 deg   ~5 ?                                < 600 seconds
Guide star acquisition   20 - 30 ?          30 - 40 s        600 - 1200 seconds
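Summing the per-night ranges quoted in the table gives a feel for the total operational overhead per night. A short arithmetic sketch (dome rotation is taken as 0-600 s, since only an upper bound is quoted):

```python
# Summing the quoted per-night overhead ranges from the table above.
# Dome rotation is bounded as 0-600 s (only "< 600 s" is quoted).

ranges_s = {
    "filter changes":          (1500, 2200),
    "focus sequences":         (1600, 2400),
    "dome rotation":           (0, 600),
    "guide star acquisitions": (600, 1200),
}

lo = sum(v[0] for v in ranges_s.values())
hi = sum(v[1] for v in ranges_s.values())
print(f"total: {lo}-{hi} s (~{lo/3600:.1f}-{hi/3600:.1f} h per night)")
# -> total: 3700-6400 s (~1.0-1.8 h per night)
```

In other words, operational overheads other than mosaic readout can consume on the order of one to nearly two hours of a night, which is why reducing focus sequences and filter changes is a priority.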


Note that overheads for calibrations (standard stars and Q98 short exposures for photometric purposes) are not included in this table. For 2003B, we observed about 3 standard star fields during a photometric night (12 minutes per field due to filter changes). For 2004A, we will observe only two standard fields.


D - Agency Time Accounting

1) Global Accounting

Balancing the telescope time between the different Agencies is another constraint in the selection of the programs used to build the queues. The figure below presents the Agency time accounting for 2003B. The top panel presents the relative fraction requested by each Agency, according to the total I-time allocated from the Phase 2 database. The bottom panel presents the relative validated fraction for each Agency, that is, [Total I-time validated for a given Agency]/[Total I-time validated]. As shown in the plots, the relative distribution of the total integration time of validated exposures between the different Agencies was balanced at the end of 2003B.
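The bottom-panel quantity is straightforward to compute. A sketch with an invented per-Agency split (the total of 340 validated hours matches the statistics table, but the division between Agencies here is hypothetical; only the formula follows the text):

```python
# Per-Agency share of validated I-time, as defined above:
# share(A) = validated_hours[A] / total_validated.
# The per-Agency hours below are INVENTED for illustration.

validated_hours = {"C": 85.0, "F": 120.0, "H": 40.0,
                   "K": 25.0, "L": 50.0, "T": 20.0}

total = sum(validated_hours.values())          # 340.0 hr
shares = {a: h / total for a, h in validated_hours.items()}

for agency in sorted(shares):
    print(f"{agency}: {100 * shares[agency]:.1f}%")
```

"Balanced" then simply means that each Agency's validated share stays close to its requested share, despite weather and scheduling constraints pulling the nightly selections around.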



2) CFHTLS Accounting

The following figures show the time accounting for the different CFHTLS components:

Since each component of the survey is divided into two programs, the global fractions are given in the following table:

Component                 Fraction Requested   Fraction Validated for 2003B
Deep Synoptic L01 + L04   30% + 13% = 43%      52% + 7% = 59%
Wide Synoptic L02 + L05   17% + 17% = 34%      20% + 7% = 27%
Very Wide L03 + L06       10% + 12% = 22%      7% + 7% = 14%
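Since each survey component is the sum of its two constituent programs, the combined fractions can be checked directly from the table values:

```python
# Check of the combined CFHTLS fractions: each component's fraction is
# the sum of its two constituent programs (values from the table above).

requested = {
    "Deep Synoptic (L01+L04)": (30, 13),
    "Wide Synoptic (L02+L05)": (17, 17),
    "Very Wide (L03+L06)":     (10, 12),
}
validated = {
    "Deep Synoptic (L01+L04)": (52, 7),
    "Wide Synoptic (L02+L05)": (20, 7),
    "Very Wide (L03+L06)":     (7, 7),
}

for comp in requested:
    req, val = sum(requested[comp]), sum(validated[comp])
    print(f"{comp}: requested {req}%, validated {val}%")
```

The comparison makes the imbalance plain: the time-constrained Deep component ended up well above its requested share (59% validated versus 43% requested), at the expense of the Wide and Very Wide components.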



E - Conclusion

Our second semester of queue mode with MegaPrime was better than 2003A but was a difficult one nonetheless, mostly due to the time lost to weather and technical problems. Even if the statistics are poor, we have already learned a great deal, and a lot of progress was made during the semester, notably on some of the operational overheads. Improving efficiency remains a high priority, in particular by increasing the validation rate and implementing the auto-focus feature, and we are hopeful that 2004A will be more productive on the science side.