Report of the ORAC reviews, December 1997


Gillian Wright, Alan Bridger

orac010-rev1 Version: 02 Created: 13 February 1998 Modified: 16 February 1998

Results of the reviews of the Preparation System, Data Reduction System, and Sequencer.

1.0 Purpose


This document presents a brief summary of the points that were raised at the ORAC review meetings on December 8th and 9th 1997. Decisions about strategy and design that were agreed are also summarised and a list of action items resulting from the meetings is provided. Responses to the written comments circulated prior to the meeting are also included. The document is intended to be read by anyone with an interest in the ORAC project, although knowledge of the documents that were reviewed is required.

Apologies for the informal style adopted, but it was felt that it retained the flavour of the meetings.

2.0 Overview


Present: AB, FE, GSW, MJC, AJA, NPR, TGH, TRG, BDK, DAP, SMB, JMS, MT, SKR, WRFD, ACHG, IRB, JMDS, APGR, DGP (though not everyone was there all the time...)

The meeting started with a short overview presented by Alan Bridger of how the system fits together overall.

The diagram presented and discussed assumed the Gemini Observing Tool would be used for Preparation, but discussion of this was left until the PREP section of the review.

It was noted that in the first instance the "observation sequencer" on the diagram would be just an observer filling out an observing queue.

The header-driven nature of the data reduction was described very briefly - more details were left for the DR review.

It was noted that the selection of appropriate DR recipes depended on the PREP tool being able to link DR recipes and Observing sequences in an appropriate way.
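As an illustration of this linkage, a minimal Python sketch, assuming the PREP tool writes the chosen recipe name into an observation header keyword; all keyword and recipe names here are hypothetical, not the actual ORAC interface:

```python
# Hypothetical sketch of header-driven recipe selection: PREP writes a
# recipe name into the observation headers, and the DR reads it back to
# decide how to reduce each frame. Keyword and recipe names are
# illustrative only.

DEFAULT_RECIPES = {
    # fallback recipes per observation type, used if PREP supplied none
    "DARK": "REDUCE_DARK",
    "OBJECT": "QUICK_LOOK",
}

def select_recipe(header):
    """Return the DR recipe named in the header, or a type-based default."""
    recipe = header.get("DRRECIPE")
    if recipe:
        return recipe
    return DEFAULT_RECIPES.get(header.get("OBSTYPE"), "QUICK_LOOK")
```

The point is that the DR need not be told what to do interactively: each frame carries its own reduction instructions, with a type-based default as a fallback.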

FE pointed out that if something goes wrong with the data reduction - e.g. if the user realises after the observation sequence has started that the wrong recipe was chosen - the data could be re-reduced by manually starting a second copy of the data reduction with a command like "reduce 20 40" to re-reduce observations 20 to 40.

ACHG suggested that it would be useful if the observer could start the second copy of the data reduction before they knew what the last observation number would be. It was agreed that this capability would be provided, by allowing the second copy of the data reduction to be started with a "reduce nn end" type of command which would reduce data starting from a given observation number until the "end of this observing sequence" was detected.
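The agreed behaviour might be sketched like this, with "reduce 20 40" reducing a fixed range and "reduce 20 end" running until the end of the observing sequence is detected; the function names and the end-detection callback are purely illustrative:

```python
# Illustrative implementation of the agreed "reduce" commands:
#   reduce 20 40   -> re-reduce observations 20 to 40
#   reduce 20 end  -> re-reduce from 20 until end-of-sequence is seen
# Names and the end-detection callback are sketches, not the real DR.

def parse_reduce_command(args):
    """Parse ['20', '40'] or ['20', 'end'] into (start, stop).

    stop is None for an open-ended re-reduction.
    """
    start = int(args[0])
    stop = None if args[1] == "end" else int(args[1])
    return start, stop

def frames_to_reduce(start, stop, at_end_of_sequence):
    """Yield observation numbers up to stop, or until end-of-sequence."""
    n = start
    while stop is None or n <= stop:
        if stop is None and at_end_of_sequence(n):
            return
        yield n
        n += 1
```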

SMB asked whether the intention was that the system define a general interface for all instruments at UKIRT and whether interface control documents were needed. AB explained that the system would indeed define an interface which all new instruments would have to comply with and that written specifications for the interfaces to the ORAC modules would be produced. Given the nature of the interfaces it was hoped that these could be defined adequately with less paperwork than the very detailed level of a formal Gemini ICD.

Other General comments circulated by SMB prior to the meeting:

The data dictionary needs to be made consistent over the whole project. For example, is the process "UKIRT Data Reduction" described in [O-3] the same as external "Data Reduction System" described in [O-5]? The data flow "data reduction recipes" in [O-3] is called "dr recipe" in [O-5]. These and other detailed notes from Steven on consistency and errors in the documents/diagrams will be taken care of when the final documentation set is produced.

I really like the flexibility of the design of this project. Having the messaging technology and the kind of algorithm engine to be definable allows the system to maximise its use of existing software, and will allow the system to be upgraded as new technologies and data reduction packages become available. Can the data format be made just as definable, just in case NDF and FITS are replaced in the future by something else? Yes, that is the intention although initial scripts will be NDF specific.
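The kind of format independence asked for here could be achieved by hiding all format-specific I/O behind a small adapter registry; a minimal sketch, with all names hypothetical:

```python
# Sketch of a "definable" data format layer: all format-specific I/O is
# hidden behind a registry of reader/writer callables, so NDF could be
# swapped for FITS or a future format. Everything here is hypothetical.

FORMATS = {}

def register_format(name, reader, writer):
    """Register a data format by name with read/write callables."""
    FORMATS[name] = (reader, writer)

def read_frame(fmt, path):
    """Read a frame using whichever format the pipeline is configured for."""
    reader, _ = FORMATS[fmt]
    return reader(path)

# Initially only NDF would be registered (stub reader shown here).
register_format("NDF", reader=lambda p: f"ndf:{p}", writer=lambda p, d: None)
```

Initial scripts would register only NDF; a FITS or successor format could then be added later without touching the reduction recipes themselves.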

The issue of how to inform the Observatory Control System of changes to the telescope and instrument configurations needs to be addressed. What happens when (a) an instrument is removed from the ISU and a new one is installed, (b) a grating, camera or filter wheel is replaced inside an instrument, or (c) the ISU is commanded to direct the beam to a different instrument? There probably needs to be a database maintained (and updated on operation (a)) so that the ISU port can be translated into an instrument name on operation (c) and future commands directed to that instrument. Discussed in the sequencer session, but note that it was agreed that the details of how to handle these issues would be dealt with in the design of the scheduler, which will not start until 1999 - since they are largely issues for queue-scheduled observing only.

Will the Observatory Control System allow an instrument on one ISU port to take a calibration observation while an instrument which has the telescope beam simultaneously makes an observation on the sky? Yes, that is the intention for instruments that have a calibration unit - it should speed up instrument switches for over-rides and service observing.

Can you include a reference to the UKIRT Telescope Control System documentation? Yes

General comments sent by AJA after the meeting, underlining and adding to the discussions at the meeting:

I think the sooner the arrangements with Michelle (and UFTI) are formalized and written down, the better. This may be an obvious statement, but from the outside it's not clear who's doing what and who's paying for it. This came out at the meeting, and I know you're all aware of it; but as soon as some agreements are struck, it would be useful to have them stated somewhere.

3.0 Preparation System


A summary of the key features of the proposed preparation system and the recommendation that the Gemini phase-2 observing tool be adopted for the PREP system was given.

Written comments from Tom Kerr, Andy Adamson and Steven Beard were gone through and responded to.

A demonstration of the Gemini Observing Tool was given, with continuing discussions and questions about how it would work for UKIRT.

Points raised either before or during the demo:

There was much discussion about high-level checking of the legality of instrument configurations - e.g. was it possible or desirable to have the PREP tool determine whether the instrument was actually on the telescope, or in the case of CGS4 which gratings were installed. The desirability of this depends on when it is envisaged the observer uses PREP - if it is several months in advance of their run then an instrument or instrument configuration not available at the time PREP was used could well be fine by the time of the observations. It was felt strongly by AJA and others that people would want to use the PREP tool several months in advance, including perhaps to help them think about possibilities when planning proposals. It was agreed that this meant that restricting the preparation system to only the current configuration of instruments on the telescope was too limiting. It was concluded that PREP should check that a requested setup for a given instrument is using sensible parameters, but the job of determining which instruments and instrument components were available for a given night was the responsibility of the scheduler - i.e. for now an observer / support scientist, but eventually the scheduler software.

Checking of instrument configurations was also discussed, with reference to Tom Kerr's comments. The ORAC team explained that checking things like the appropriate order for a given wavelength and grating was one of the areas of improvement over the current system which the new PREP was required to provide. The Gemini observing tool had the facility to put in such parameter checking. There was one example quoted by Tom Kerr - the observation of a calibration star in a different slit width to the target - which was much more complicated. The reason is that different astronomical sources are treated as separate observations. It was thought that checking that astronomical calibration sources would be observed with the same slit as the target was a nice thing to have but shouldn't really be necessary, and no-one could come up with a simple way of doing it. It was felt that perhaps the new PREP would help to remove the possibility of accidental changes to items (which can happen easily in the current Config menu), because the need to keep changing Configs while observing should be reduced. Slit width changing between target and standard star could equally well arise for Gemini observing. AB will find out whether the Gemini team have any plans to automatically check that standards and objects are observed with the same setup.

Support scientists at UKIRT want to be able to check observations prepared by visiting observers before they arrive in Hawaii. It was explained that using the Gemini OT meant UKIRT would be able to decide on a policy over when observation definitions had to be submitted to UKIRT for checking prior to observing.

The format of the files written by the OT is SGML. FE noted that these are order sensitive, not just key-value pair files, which makes them harder to edit. It was agreed however that they are editable, and given that instrument scientists need to do this only very rarely it would be acceptable. NPR emphasised that it was extremely undesirable to allow visiting astronomers to do any editing or changing of files, other than through the PREP tool.

Some issues that were a part of the sequencer design were also raised during the PREP discussions, e.g. components getting stuck and not going to the position requested and how this would be accounted for in the headers - it was agreed that this was a problem for the sequencer and low-level instrument control and would be covered in the sequencer session. Similarly the need for the scheduler or the sequencer to know which instrument was on which port, via a database that would be updated whenever a change was made.

SKR asked about PREP tool facilities for creating data reduction scripts. FE replied that it will allow visiting observers to put together high level recipes. At a lower level they could write their own script - but probably using an editor off-line not through the PREP tool.

Most people who saw the OT demonstration agreed that "iterators" were a powerful solution to the request of users to be able to change just one component of an instrument configuration in a sequence of observations, and that the "nested do loop" type construction was logical. However it was also agreed that potential for confusion and mistakes was large. It was felt that UKIRT would have to try using iterators for some time to see if the benefits compensated for the additional complexity, with support scientists being aware of the need to keep an eye on what users were doing. It was noted that there would probably be some improvements in making it easy to see the sequence generated by the iterators in the next version of the Gemini observing tool, and that this was an area that could evolve as users provided feedback.
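The nested-iterator construction can be pictured as a cartesian product over the varied items; a simplified sketch (the real OT iterators are considerably richer than this):

```python
# The "nested do loop" iterators pictured as a cartesian product: each
# iterator varies one configuration item, and nesting them generates the
# full observation sequence. A simplification of the real OT iterators.

from itertools import product

def expand_iterators(base_config, iterators):
    """Expand nested iterators into a list of complete configurations.

    iterators is an ordered list of (item_name, values) pairs, outermost
    loop first.
    """
    names = [name for name, _ in iterators]
    value_lists = [values for _, values in iterators]
    configs = []
    for combo in product(*value_lists):
        config = dict(base_config)
        config.update(zip(names, combo))
        configs.append(config)
    return configs
```

Making the expanded sequence easy to inspect, as proposed for the next OT version, amounts to displaying this list to the user before it is executed.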

There was some discussion of whether UKIRT should equate an observation of a single source with the Gemini OT "observing programme" level - thereby creating one file for each source and making translation into a Config and an EXEC simpler. However, from the user's point of view AJA pointed out that it was better to define a whole programme for several sources at once. This also maintains compatibility with the Gemini paradigm.

There was debate about what level the SGML=>Config/EXEC translator should run at - i.e. at what stage in the process the conversion of SGML into Configs and EXECs for the existing instruments would need to be made. It was agreed that it would be best to make these intermediate files as ephemeral as possible - creation of Configs and EXECs "on the fly" was considered best. For driving the instrument the observers would interface with a simple scheduler that would let them pick the next observation that they wanted done from a list of possible ones. These discussions led to the idea that the first version of the scheduler should be a translator with a user interface. AB was actioned to change the overview diagram to reflect this. However, note that there was more discussion of the details of the scheduler in the technical sessions for the sequencer.

NPR asked about terminals/windows and how that would be handled, because the OT has a lot more windows on it than the current single xterm needed to run PREP. One possibility would be to add a PREP-specific xterm at the summit. It was also pointed out that the user interface to the instrument through the scheduler/sequencer would be simpler, and the instrument status screens could be separate from this control. It was possible that low-level instrument control (e.g. the equivalent of reload_occam) screens could be on the TO's machine, although details of this still need planning and discussion with staff in Hawaii. On balance the observer would probably not have more keyboards/screens to type in than at present, though the arrangements would be different.

AJA re-emphasised that it was important to make the top level of the PREP user-interface as astronomical as possible, by making extensive use of "auto-configure" options. He felt that this was particularly important for exposure times. In the case of CGS4, observers are in any case basically using a lookup table to enter values and so this could be automated. There was some concern that for a new instrument exposure time parameters would be less certain - but PREP could at least be inserting a "best guess" to start with which would quickly be refined on commissioning nights and over the first year. The idea was that the "auto-configured" values could be changed by observers if they wanted or needed to. ACHG pointed out that given the complexities of background levels in the 10-20um windows he felt that auto selection of exposure times would be essential for Michelle.

If the Gemini observing tool were used for PREP then some changes to the existing CGS4 and telescope software would be needed. An action was placed on AB and NPR to discuss these and agree which parts the ORAC project will do and which will be done by JAC staff. It was also agreed that MH-P would visit ROE in early 1998 - partly to discuss such issues with AB and partly for UFTI tests.

Attention was drawn to the fact that in choosing to use the phase-2 Gemini Observing Tool for the PREP system, UKIRT was volunteering to commission it with real users for Gemini and to provide feedback. There would be a need to keep the UKIRT and Gemini versions in step and since many of the benefits for both telescopes resulted from keeping much of the preparation system in common, this could mean compromises for UKIRT.

Conclusion: At the end of the discussions it was unanimously agreed that the Gemini phase-2 tool be adopted as the preparation system for UKIRT. An action was placed on AB to pursue a detailed agreement with the Gemini team, with a view to obtaining a copy of the software in time to start work on the UKIRT modifications in February. It was also agreed that demonstrations of the OT for all the UKIRT support scientists, including if possible examples of what the UKIRT version will look like, be planned to be held in Hawaii at the end of March, around the time of the Kona SPIE meeting.

Written comments and suggestions from Tom Kerr, with responses from FE, AB and the meeting:

THK> Will the software be able to run on Linux systems? There are one or two support scientists here who run Linux on their own PCs, and it might be useful if the software could be run from these systems.

AB> I am double checking on this. It should do, but it does use commercial software both within the tool and for distribution, and that's the bit I'm not so sure about.

FE> The Gemini prep tool (the likeliest candidate) is a Java application, able to run on any operating system that has the Java Virtual Machine. I am aware of at least one Linux port.

AB> At meeting - yes, it's OK.

THK> I agree that any observation generated by the preparation system should be guaranteed to be able to be performed by the instrument. However, apart from the support scientist, will there be checks on `silly' observation modes that the instrument can still perform, but are not suitable? For instance, using the 40 l/mm grating in CGS4 in 1st order at 1.1 microns, or observing a standard with the 1-pixel wide slit, then observing the target with the 2-pixel wide slit (both have happened in the last couple of weeks).

FE> I believe Alan is planning on catching many things at this stage. His prototype for example suggests the correct order given grating and wavelength. I don't know how it would cope with observing a star with one slit and an object with another, as standard stars are just objects after all. Pass.

AB> The details of exactly what will be trapped here are TBD. Your example of standard & target having different slit widths might be problematic. We'll discuss it here and report what comes out.

THK> I'm not sure how the system is going to work, but I do realise trapping things like doing two separate observations with different settings would be a difficult one to check. However, I thought that perhaps the sequencer might allow one to define a standard star observation followed by an observation of the target object, and in this case I think it would be prudent to have a check for different settings. If this isn't the way the sequencer will work, then I guess it'll be difficult to detect such a problem. Anyway, I just wanted to give an example of a couple of avoidable problems we've seen here recently.

Discussion at the meeting also concluded that this particular example was almost impossible to trap without causing (worse) problems in other cases, but we will bear it in mind and see if Gemini has a solution in mind, e.g. do they plan to check chained observations?

Tom> FP5 --- Will the support scientist easily be able to check observations generated by visiting astronomers before they leave their home institution?

FE> As I understand it, in the Gemini paradigm astronomers have the choice of working on their files locally and/or uploading them to the UKIRT database. The support scientist then can move them into an "active" database. The support scientist has access to both the UKIRT general database and the active database. As long as the astronomers have submitted their preparations to either of those, there is no problem (and in fact we can require them to do so).

Tom> FP7 --- Not sure I understand what is meant here by `hidden' items. Does this mean that, for example in CGS4, you choose the 4-pixel wide slit, it will also show that you have picked the slit named 0ew? If this is what is meant, won't this add confusion?

FE> My interpretation of the requirement is that such hidden items are viewable, i.e. can be viewed if necessary - not that they will be displayed to the observer come what may.

AB> I think this requirement needs some fleshing out. I don't think it is meant to include your example.

Tom> OK, but I'm still unsure about what `hidden' might mean in this case. I'll see what comes out of the meeting and any further discussions.

GSW> Hidden was intended to mean items like the CGS4 filter - which is now chosen by PREP software for astronomical configs and so is hidden from the user. However the user might like to over-ride the blocking filter choice for a specialised application, and so they do need access to these items - the idea is "available if wanted". This feature of the menus should be covered in the specification for the design of the menus for each item in an instrument Config.

Tom> I think popular demand would be to allow both the exposure time and position angle to be changed in the iterator. I've had a number of requests that the PA be controlled from within a CGS4 EXEC, and although I really don't like the option, I know a lot of people would.

FE> Yes it seems that iterators will be popular, though some of the uses to which they can be put seem downright perverse to me:-)

Tom > FP9: I believe this should be in the simplest format possible so that it is simple for support staff and observers to check. I can also think of at least one occasion where it was necessary to edit a cgs4 config by hand to allow a very non-standard observation to be made. This would be much easier in ASCII (I would like to avoid the issue of intermediate configs for now...).

AB > If we go with the Gemini OT it will be SGML, which is ASCII and structured but would deter "casual" editing! Of course in the plans the "translator" would produce simpler ASCII output which could be modified, but observers shouldn't be allowed to.

Tom> I sort of agree that observers shouldn't be allowed to, but I hope support scientists will (albeit, I see no reason to make it easy if it's considered a rather risky thing to do, but the output should be easy to understand, especially if something has to be done at 14000 ft.).

Note from the meeting - the decision to use "translation to Configs/EXECs on the fly in the scheduler" made at the meeting means that the only files available for editing by observers will be the SGML ones.

Tom> I've already commented on this above (in FP9). However, since it is very rare for observers to edit the config or EXEC files by hand (and extremely rare for the support scientist to do so) personally I could live with a mark-up language if it was relatively easy to understand and follow.

FE> Yes, I think that even in a mark-up language they are still text files, and can be viewable. Having seen a sample output I have no doubt that you could follow what was going on. You could even edit them, though I'm not sure I'd want to.

Tom > Section 4.8 `Breaks' and Pauses in the observing Sequence: Just a quick comment on detail - wouldn't the `break' be better if it came after the `set object' command rather than the `config' command?

AB> I think that is probably a mistake!

Additional comments sent by AJA the next day:

Just to reiterate - you have a great chance to create an (interface to the) observing tool which asks astronomical questions, like what central wavelength and what resolution are required, rather than instrumental questions like what grating and what camera. I would expect that Gemini would want to adopt such a system if it existed, and to be honest I was a bit surprised at that aspect of their interface. If checking for sanity can be done, then I think it makes sense to remove that loop - why enter instrument configs, have them checked by a subsystem which knows about the instrument, and then update them, if the subsystem could just generate the configs for you (or tell you that "this instrument can't observe at this wavelength so think again")? Yes - as discussed at meeting we intend a very astronomical interface to instrument configs as the top level with access to more for those who want them.

On a related point, the prep tool docs tend to refer to instruments - is the telescope to be treated as part of the instrument? i.e. will you check for things like dec limits, observability at time of year etc.? This is another of those things which depends on exactly when you see the user using the tool (I think it should be at as early a stage as possible). Good suggestion.

If anything is specifically excluded from what will be doable, then I think it should be said here. For example, fast time series photometry isn't in the lists, but the average observer might not see that as implying that it wasn't available (or couldn't be made so). If the instrument can support particular techniques then there should be items in the instrument Configs and library observing sequences for them.

Additional comments from SMB, circulated by email prior to the meeting:

Figure 1: Are there no responses from the "UKIRT preparation" process back to the observer? Yes there are - should add.

[O-6] Using the Gemini Observing Tool on UKIRT: Only one major comment - How does the expected delivery date of the completed Gemini Observing Tool compare with the date when the ORAC project needs it? Discussed during meeting - likely that UKIRT will start with an early version and test with real users for Gemini - timescales seem OK for this.

4.0 Data Reduction


Brief summaries of the data reduction requirements and design were presented. It was noted that the data reduction was likely to use P4 as the display tool, due to the requirement that users be able to change plotting actions on the fly. The plan is to develop the prototype for use with UFTI, use it for the first few months and then review its success or otherwise. The prototype used for the demonstration runs in verbose mode, but for real observing a quiet mode giving only completion-type messages would be used.

There was much discussion of error handling - e.g. how the data reduction would know that an observing sequence had been stopped prematurely. In general most areas were covered in the plans - but a detailed specification for the DR was still to be written. An action was placed on GSW to add some descriptions of "things going wrong" to the scenarios document.

It was noted that it is not possible to build a system that will recover no matter what - and that a philosophy of restarting reduction of sequences in cases of serious problems had been adopted. It was agreed that trying to re-insert observations into queues a la CGS4DR was too complex in practice for most observers, and starting over again with a new copy of the DR would be simpler. The provision that it would be possible to start the re-reduction before the sequence was finished, and allow an indeterminate ending, was felt to be sufficient for most "disaster" scenarios.

Discussion of the detailed specification for the UFTI scripts was deferred for a separate smaller meeting (TGH, GSW, FE, MJC) - see section 6. It was however agreed by the meeting that the principle of having a limited number of scripts for UFTI was the right approach to take, given the need to also provide data reduction for CGS4 and for Michelle commissioning, for which it will be essential.

There was discussion of header information - both that which would be required by the data reduction systems and headers in general, such as items that support scientists would like to see included. These are in addition to the engineering/instrument configuration headers that are specified for each instrument. An action was placed on TRG to arrange for someone at the JACH to collate support scientist ideas for header items so that a master list could be drawn up. GSW/FE/MJC would make a list of items required by the data reduction as the specifications of the recipes and scripts for each instrument were drawn up.

Visitor instruments - for visiting instruments new primitives would need to be written. FE pointed out that the documentation would specify how users could write their own if they needed to, and that these could be left at UKIRT for the next time that instrument was allocated time.

THK had commented on the extreme desirability of requirement SD35 - 2 row and 2 column line fitting of spectroscopy data, and added to this that at the moment the engineering files are still analysed by CGS4GRAPH on a Vax. It was agreed that a replacement for CGS4GRAPH was not really a part of the ORAC remit - however it was also agreed that a specification for a simple format for engineering and analysis files written by the DR software should be written.

There was debate about exactly what the need for parallel reduction processes meant. SMB summarised the conclusions of the Gemini discussions about quick-look analysis, in which they thought that Gemini should have a high and a low priority reduction "queue". After some discussion it was agreed that UKIRT might have a similar requirement for "high priority" reduction - specifically for focus run reduction. An observation sequence, e.g. of a large mosaic, will have a number of DR steps that have to be carried out after all the observations are obtained. However, as soon as the last frame is taken a focus run could be done - but the observer should not have to wait until the reduction of the mosaic is complete before seeing the reduction of the focus run. The proposed DR design could meet this need by implementing a second incarnation of it running on a different machine (so that it was not slowed down by competition for CPU). This could of course be run under TO control. It was agreed that after a focus run the TO would manually enter a new focus position and that automated setting to a new focus was not required or desirable (i.e. there was no need for feedback from a data reduction process to the TCS or ICS).
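The two-priority scheme discussed for Gemini could, within a single process, be pictured as a priority queue in which a focus-run reduction jumps ahead of queued mosaic steps; a sketch with hypothetical names (the meeting's actual solution was a second DR incarnation on another machine):

```python
# Hypothetical two-priority reduction queue: a focus-run reduction is
# served before queued mosaic steps, while jobs at the same priority
# keep first-in-first-out order.

import heapq

HIGH, LOW = 0, 1  # smaller number is served first

class ReductionQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # preserves FIFO order within a priority level

    def submit(self, job, priority=LOW):
        heapq.heappush(self._heap, (priority, self._counter, job))
        self._counter += 1

    def next_job(self):
        return heapq.heappop(self._heap)[2]
```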

FE reported results of timing tests for 1024^2 arrays - using memory is faster than scripts, but the scripts can be speeded up with additional CPU, and on a dedicated machine it was not thought that there would be a problem. Nonetheless it was important to find efficient reduction algorithms.

There was discussion of the configurability requirements and how well the prototype met them. It was agreed that configurability also applies to the display - the observer needs to be able to configure what is displayed as well as how it is displayed while the data reduction is running.

It was agreed that editing recipes on the fly should not be needed and indeed would not be allowed. The idea was to have lots of recipes with the PREP system providing help for the user in selecting the best one for their observing sequence. Gemini OT allowed for libraries - one of which would probably be a menu of recipe names.

The question of whether to provide "Starlink style" make files and user guide was raised. FE said that this was a lot of work. She intended the initial documentation to cover how to run the software, lists of recipes and primitives and what to do with them. Since much of it was Perl modules there was no technical reason why Starlink could not distribute it. The issue of whether to release it through Starlink would be revisited by UKIRT after delivery of the system with Michelle.

It was noted that the data reduction would use NDF as the internal data format. Format changes from one command to the next were not envisaged. FE said that the archiving format would also be NDF. If observers wanted a different data format such as FITS then the data would be translated and put into a separate directory.

If new algorithms are needed it is intended that they could be included and maintained as part of Kappa.

Delivery timescales were discussed, assuming that the prototype would be developed enough to use with UFTI and then extended into the full system. FE was keen that some off-line use of the scripts was made on IRCAM data prior to UFTI commissioning. For automated use with UFTI it would be necessary to add one more command to the EXEC dictionary, until the sequencer and DHS were delivered.

Agreed dates were:

May 1st: Off line scripts for use with IRCAM and one demonstration script for reducing CGS4 data.

Conclusion:

The prototype design was accepted as the baseline for the DR design, and testing during the UFTI commissioning will go ahead. A final design decision will be made shortly thereafter in order to ensure that a working system is delivered with Michelle.

Written comments from THK, with responses from FE:

Tom> FD8 --- As long as the scripts are obvious, then I think it's ok to show the script to provide feedback. However, I feel that the current CGS4DR does a pretty good job in telling the observer what's going on (although it doesn't always do such a good job when reporting errors), and I think the data reduction feedback should look quite similar to what we have now.

FE> Yes I actually envisage just reporting the general step, e.g. "Bias subtraction done". Probably have a more detailed debugging mode for problems.

Tom> [headers]

FE> Alan tells me that all sorts of systems contribute header information (the array software, the instrument sequencer etc.). The telescope system could contribute the types of items you mentioned. As to what headers we actually want - I think that's probably a whole new document.

Tom> SD11 ---- Do we have to be limited to ~5 positions? How much effort is it to provide for more than this, just in case....

FE> Uhm I guess not, though every Nth position nodding scheme would need its own reduction recipe. So we are not limited to it, it is just that initially that will have to do...
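FE's point that each N-position nodding scheme needs its own reduction recipe amounts to a lookup from nod pattern to recipe, with missing entries flagged for a human to write. A minimal sketch of that idea follows; the recipe names and the function are hypothetical illustrations, not the actual ORAC names:

```python
# Hypothetical mapping from number of nod positions to a reduction recipe.
# Recipe names here are made up for illustration only.
NOD_RECIPES = {
    2: "REDUCE_NOD_PAIR",
    3: "REDUCE_NOD_3POS",
    5: "REDUCE_NOD_5POS",
}

def select_nod_recipe(n_positions):
    """Pick the reduction recipe for an N-position nod, if one exists."""
    try:
        return NOD_RECIPES[n_positions]
    except KeyError:
        # Not limited in principle - a new scheme just needs a new recipe.
        raise ValueError(
            "no recipe for a %d-position nod; one must be written" % n_positions)
```

Adding a new nodding scheme is then a data change (a new dictionary entry plus the recipe itself) rather than a code change to the selection logic.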

Tom> SD15 ---- Could we ask the Durham SMIRFS-IFU group to write software for their CGS4 IFU that can be tagged onto ORAC? Might save a bit of effort.

FE> Yes they could write their own recipes and primitives. Once the data reduction system is well defined, we will provide a document to assist people in doing this.

Tom> SD27 ---- I feel that correction of curvature is very important. It's the one obvious thing CGS4dr cannot currently do, and with the large degree of curvature we now get with the long camera and 40 l/mm grating, it should be part of ORAC's DR.

FE> That's why it's a requirement:-) GSW> and one of the reasons a new DR was deemed necessary.....

Tom> SD35 ---- Please, please, please can we have this?

FE> It also is a requirement. You will probably be bugged to assist with it in fact.

Tom> Further to this, currently engineering data is analyzed using CGS4GRAPH on the VAXes. Will there be any software in ORAC to replace this, so that engineering plots can be made from within this system? I don't think this is absolutely essential, but it would be an extremely useful and time-saving thing to have.

AB> Umm. Will have to discuss. My memory of what happens in cgs4graph says to me that it shouldn't be too difficult, but does it really fall inside the ORAC project?

Tom> Probably not, but it would be nice to have. I wouldn't think of putting this as a requirement, but if someone has a spare hour or two one day... In fact, this might be something I'd like to have a go at myself one day, putting something like cgs4graph into ORAC - might be good training for me.

Discussed at the meeting, see above - file format will be specified.

Comments from SMB circulated by email prior to the meeting, mostly covered by discussion summarised above, but included here for completeness:

Scenarios: Section 3.1, point 6: Is there a time limit within which all these 5 observations must take place? What happens if the weather deteriorates after 3 observations and you have to complete the remaining two later in the night? Is that a sensible scenario, or would you just begin the set of 5 observations again? General part of error handling - see action on GSW to expand the scenarios document.

What are the units of the offsets (e.g. "0 8")? Are these arcseconds? Yes.

When the design of the Gemini Control System was first presented it became apparent that there was a need for a high priority data reduction queue (which became known as the "synchronous" data reduction queue). The queue existed to take care of the following scenario: Suppose you have just completed a large set of observations. The data reduction queue is full and the data reduction system is busy reducing the last set of observations. You now want to start a new set of observations, but you need to carry out a focus run on the instrument first. The focus run requires the data reduction system, but you don't want to have to wait and waste observing time while the data reduction queue empties. The solution is to write the observations from the focus run into a new high priority data reduction queue. Would this be an issue at UKIRT? See above for discussion during meeting - UKIRT will have a separate DR for focus runs.
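The "synchronous" high-priority queue described above is simply a two-queue arrangement in which the reducer always drains the priority queue before the normal one, so a focus run never waits behind a backlog of science frames. A minimal sketch, assuming a single-threaded reducer and hypothetical names throughout:

```python
from collections import deque

class DRQueues:
    """Two-queue sketch: urgent frames (e.g. a focus run) drain first."""

    def __init__(self):
        self.normal = deque()
        self.priority = deque()

    def submit(self, frame, urgent=False):
        # Focus-run frames go on the high-priority ("synchronous") queue.
        (self.priority if urgent else self.normal).append(frame)

    def next_frame(self):
        """Return the next frame to reduce, or None if both queues are empty."""
        if self.priority:
            return self.priority.popleft()
        if self.normal:
            return self.normal.popleft()
        return None
```

UKIRT's agreed alternative (a separate DR for focus runs) avoids the shared-queue bookkeeping entirely, at the cost of running a second reduction process.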

Can the prototype be run more than once per instrument? Yes - see above for details

Additional written comments sent by AJA the next day:

Scenarios point 1 of list under 3.2 - I think we agreed that the DR system would look back for a suitable dark if one wasn't found in the present group. The document here makes it sound as if that's not going to happen. GSW will check and update document if necessary.

Scenarios p.7 of 8 - at the top - I would support the idea that a standard star is as essential a calibration as an arc, and should be treated the same.

DR Requirements FD4 is inconsistent with a statement at the start of this document (section 2.4). GSW will check/clarify document.

5.0 Sequencer Design


AB presented a summary of the sequencer design ideas and pointed out that the sequencer design was at a more preliminary stage with some implementation details still to be worked out.

There was some further discussion of the changes needed for using the OCS with the existing IRCAM and CGS4 software. NPR asked whether the earlier decision that the scheduler was now an SGML=>config/EXEC translator with a simple user interface solved the problem. AB responded that changes to how the software runs up, DR header information etc. were needed. Many of these are summarised in the design document. It was agreed that JAC would make many of the changes in consultation with AB. Probably MH-P would make an extensive visit to ROE next year, or AB and MH-P would discuss in Hawaii around the time of the SPIE meeting. AB and NPR would further discuss details outside the meeting.

There was discussion of the need for the sequencer to provide feedback on what stage it was at in the observing sequence, and how this could be done for iterators. Possibilities such as flags or lines drawn on the display to show the loops were suggested. AB will investigate possibilities.

It was noted that for sequences of indefinite length, for example when the expected signal is not well known, a number of observations = 0 would be used.

The ability to dynamically modify the sequence was considered to be a complex requirement. TGH thought there was a need for stop/pause, make a change and then resume as well as for a simple reload and start again.

It was pointed out that editing EXECs which contain indefinite numbers of observations and iterators might be difficult to achieve. It was agreed that having indefinite loops was a higher priority than editability if such a choice had to be made.

The list of sequencer user functions in Table 1 of orac007-seqd was discussed in detail. It was agreed that "load observation definition" was now an action of the scheduler - and that initially only one observation definition at a time could be loaded (no queue).

The "Run" action should be changed to "starts the sequencer running, executing the observation definition".

It was suggested that in the "stop" item an option of "abort" could usefully be added to the list of choices presented to the user.

It was agreed that a "restart this item", as well as a "restart the sequence", would be a very desirable option after an "abort".

Feedback is needed between the sequencer and scheduler when actions like "stop" are used.

It was noted that a full abort would be possible in Edict, unlike the current Alice.

"Stop" commands will often be used to stop a sequence when sufficient S/N has been obtained. It was felt that it would be desirable to help appropriate stopping places to be chosen - e.g. for nodded pairs, when a pair was completed and the telescope was in the non-offset position, but that "stop" should not be restricted in its use. It was suggested that "breakpoints" could be displayed somehow in the feedback display.

Since the scheduler enters observation definitions and translates them for the sequencer, and eventually would enter them into a queue, there needed to be some definition of the interface between the scheduler and the sequencer at a basic level, even though design of a sophisticated scheduler is not planned until the final stages of the project. There was some discussion of ideas relating to this "interface" (described below), but in general it was agreed that the design would be worked out as the design of the sequencer progressed to CDR level.

It was noted that the sequencer needs to know which instrument is being used for observing and which is the additional instrument with which calibration observations are being taken. One could imagine this "run two instruments in parallel" requirement being met by running 2 schedulers, 2 data reductions, and 2 sequencers, only 1 of which has control of the telescope. There was discussion of whether when switching instruments there should be one sequencer for each, or one that handles it all. A proposed solution was to have one sequencer that was always "tied" to the telescope and switches instrument as needed, and another sequencer that cannot "get" the telescope. It was also possible that this could be done at the scheduler level instead.

There was discussion of the relationship between data acquisition and reduction - which should keep an index of the observations taken - and the pros and cons of both. Need to think about whether "bad" flags are needed in the index file.

There was some discussion of table 2 and the "interface" between the sequencer and the instrument. An action was placed on AB to expand the table to include more of the necessary extensions to the EXEC dictionary - e.g. to show which commands were allowed in parallel and which were not.

SMB suggested that instead of a command "DARK" to take a dark, a single "observe" command could be used to take all types of data, with the type as a parameter, i.e. for a dark the command would be "OBSERVE DARK". This could allow more flexibility for adding other observation types in future. It was generally agreed that this was a good suggestion and consistent with the "SET" command.
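The "observe with a type parameter" idea can be sketched as a small dispatcher in which a new frame type is a data change rather than a new command. Everything below is illustrative - the type set, header keys and function are hypothetical, not the actual EXEC dictionary:

```python
# Hypothetical set of known observation types; a new instrument's frame
# type is added here rather than as a new command.
KNOWN_TYPES = {"BIAS", "DARK", "FLAT", "ARC", "OBJECT", "SKY"}

def observe(obs_type, exposure_time):
    """Take one observation of the given type; returns a header-like dict."""
    obs_type = obs_type.upper()
    if obs_type not in KNOWN_TYPES:
        raise ValueError("unknown observation type: " + obs_type)
    # In the real system this would command the instrument; here we just
    # return the headers the DR would use to pick its recipe.
    return {"OBSTYPE": obs_type, "EXPTIME": exposure_time}
```

So "OBSERVE DARK" becomes observe("DARK", 20.0), and "OBSERVE ARC", "OBSERVE SKY" etc. need no new command definitions.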

There was extensive discussion of whether instead of extending EXECs the JCMT "Todd" software could be used. It was concluded that UKIRT's requirements for parallel actions were high level and relatively few, whereas JCMT's requirements were low level and many, and this accounted for the differences between EXECs and Todd. It was noted that there was little point in change for the sake of change if EXECs could be expanded to meet the new requirements, and that there were advantages in the fact that UKIRT staff and users were accustomed to using and thinking in terms of EXECs.

For Gemini, parallelism results from commands that can complete asynchronously and others that are synchronous only, but the sequencer is linear.

BDK pointed out that there could still be areas in common between the JCMT system and UKIRT if the modularity was kept similar. e.g. in using the Gemini OT, keeping boundaries in the same place, where observations are driven from, where commands go to sub-systems etc. This seemed like a sensible approach. BDK also pointed out that EXECs were effectively proven technology, whereas introducing Todd could create new problems.

There was some discussion about the use of tokens in EXECs and whether these would require changes to the Gemini OT that would make it too different, but it was felt that the hierarchy and libraries would make this not a problem.

The conclusion of the meeting overall was that the preliminary design of the sequencer met the requirements with no "show stoppers" and detailed design could proceed on the basis of it.

Written comments on Sequencer design document from AJA and responses:

Suggests that engineering commands handled by the sequencer be sequencer specific engineering only, not commands that really should go straight to the ICS. That's what was intended.

Notes that translation to Configs and EXECs during observing would probably be best. This is what the discussion concluded also in deciding that translation would be done by the scheduler.

Suggests that for remote observing a user interface that was a "restricted subset" would be better than a "subtly different one". Yes.

Suggests not removing completed items from the observing database, but just flagging them as "complete". This would be a useful observing tool in the case where a user specifies an entire programme in advance. OK

In page 7 table 1, "OD" is ambiguous. Suggest using ODef (or OD) for observation definition and ODb for observation database. The list of ODefs should be visible at all times. OK - the latter is now the intent for the user-interface to the scheduler/translator.

Agrees that translating GOT SGML files just prior to passing them onto the sequencer is the best way forward.

Agrees with the suggestion that an observe command is used with FLAT, ARC etc. as arguments rather than separate commands. From the user's point of view this would be better than the current "set blah" "blah" system.

Agrees with the suggestion that the observers do not need access to instrument engineering commands, and so the sequencer does not need to provide any such functionality. Anything the observer needs is not engineering and so should be available. Agreed (see also THK comments)

Does the TCS really need modification, or could the scheduler put together a bunch of existing TCS commands to get the same effect as (e.g.) target description? Yes - some changes are needed because, in order to make using standard observing sequences easy and keep them general, the "offset" commands need to be general (c.f. the GOT makes no distinction between offset types) - i.e. something else must figure out whether the guider is on and how to implement the offset. NPR and AB agreed it was "cleanest" if this were a TCS action.

Suggest that for queue scheduled observing, all the data for different runs could be stored in the same place with header items indicating TAC # /AIC/PUBLIC (for standards, calibrations etc.).

Agrees that decoupling the observing system and the data reduction system is a good idea, but notes that this means we must be careful with sequence numbers to ensure that off-line reduction can work sequentially. If this means leading zeros then it also means an assumption about the maximum number of frames per group and groups per night, assuming we don't use an index.
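The leading-zeros point is worth making concrete: fixed-width frame numbers sort lexicographically (so off-line reduction can walk files in observation order without an index), but the chosen width bakes in an assumed maximum frame count. The naming format below is invented for illustration, not the real UKIRT convention:

```python
# Width of the zero-padded frame number. This IS the assumption being
# discussed: 5 digits means < 100000 frames per night.
FRAME_DIGITS = 5

def frame_name(utdate, number):
    """Build a frame name like 'u19971208_00007' (illustrative format)."""
    return "u%s_%s" % (utdate, str(number).zfill(FRAME_DIGITS))
```

With the padding, frame 10 sorts after frame 2; without it, "u..._10" would sort before "u..._2" and sequential off-line reduction would mis-order the frames.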

Why not just use a change of group number to determine whether a frame is associated with previous ones? This would also get round the STOP directive problem, and you'd in any case have the running total to look at. The problem with this is when a calibration observation taken as part of group n is also valid for other groups, e.g. n+2.
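The "change of group number" rule is easy to state in code, which also makes its weakness visible: a frame can belong to exactly one run of consecutive group numbers, so a calibration valid for a later group cannot be shared. A sketch, assuming frames carry a hypothetical GROUP header item:

```python
def split_into_groups(frames):
    """Split frames (dicts with a 'GROUP' key, in observation order) into
    groups, starting a new group whenever the GROUP value changes."""
    groups, current, last = [], None, object()  # sentinel matches nothing
    for f in frames:
        if f["GROUP"] != last:
            current = []
            groups.append(current)
            last = f["GROUP"]
        current.append(f)
    return groups
```

Under this rule a calibration taken inside group n is attached to n only; making it available to group n+2 as well needs the index-file approach discussed elsewhere in this review.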

Unresolved issue 3 - this assumes the G.O.T., or does it? No - it applies whatever the Prep tool, since remote access is needed.

Unresolved issue 4 - suggest one OD = one group, one iterator = one subgroup?

Unresolved issue 7 - suggest flagging of data from different programmes is enough, with separation done off-line.

Suggest including appendix 1, describing the information flow in ORAC, in the overview document rather than as a separate document.

How do you know what's done in parallel? See discussion above - table of actions will be expanded to explicitly show which commands are allowed in parallel.

Email comments from THK that relate to issues discussed in sequencer meeting with responses from AB.

THK> The subject has already been brought up, but I would dearly love to see in the file headers:

System temperatures (array temperature etc.)

The actual position angle used for CGS4

Weather conditions (e.g., temperature, humidity etc.)

If we are also to get a seeing monitor on the mountain, perhaps we could also allow for a `seeing' header in the files?

AB> We really need a separate specification of header items. Some will be easy, some will be hard. Some will require "ops" effort as well as ROE Project.

THK> As I said, I wasn't sure if this was the right place to discuss this or not. I know both Sandy Leggett and myself have quite strong thoughts about what should go into the headers, but we'll leave this for later I guess. See also action from meeting on Tom Geballe to co-ordinate some JAC input on headers.

THK> FO9. The sequence should be easily interruptible, but can it be made to make it more obvious _when_ a sequence should be interrupted to avoid having e.g., more object observations than sky? Maybe the software could actually tell the observer on which observation it would be best to interrupt?

AB> Something sensible should be possible.

Written comments from SMB relating to the sequencer design, discussed at the meeting but included here for completeness.

Figure 1: Where is the telescope operator? Does the "Observer" external represent the observer and the telescope operator operating as a team, or does ORAC make the telescope operator redundant? Figure deals with interface only for a visiting observer - absolutely no intention to remove TO's!

Section 2.4: I think "telescope status" also needs to contain some "in position" information, so you know when the telescope is ready for the next observation to start. It does

You need a reference to the UKIRT TCS documentation. agreed

FO18 - What does this mean? Plans for the changes to the CGS4/IRCAM/UFTI software that are needed so that they can be run by the new sequencer are summarised above.

Table 1: Does a "pause" command pause the current observation (if one is in progress) or does it wait for the observation to finish? Is this the command you would use to signal a temporary interruption due to deterioration in the weather? Item 4 (just after Table 1): It may be worth keeping the observation definition in the queue until after it has been successfully executed. This would give the observer the chance to repeat the observation definition if it fails, and would allow the observer to see which one is currently executing. (I may have misunderstood this). Both of these were covered in the extensive discussion of desirable behaviour and options for ABORT and PAUSE above.

Table 2: The operations BIAS, DARK, FLAT, ARC, OBJECT and SKY look as if they are geared towards a spectrograph. What would an imager do in response to the ARC command, for example? It may be more flexible to allow each instrument to define its own set of frame types and have a single OBSERVE command which takes "frame type" as an argument. That way you could have a set of commands like LOAD BIAS, SET BIAS, OBSERVE BIAS, and if a new instrument invents another frame type you wouldn't need to define a new command for it. Agreed

At what stage are instrument configurations checked to make sure they can be successfully executed? My guess is that the Observation Preparation tool would check the configurations for scientific validity and the Instrument Sequencer would check that the instrument could actually achieve the desired configuration at the time it is commanded (e.g. the correct filter wheel is loaded). I think in order to be able to check configurations you need an additional command to define what allowed configurations the instrument can take. This command would be used whenever the filter wheel or grating inside the instrument is changed (for example). Also discussed at meeting. The conclusion was that there is a difference between determining legality in the sense of "is a given instrument or instrument component, such as a grating or filter, actually available right now" - which will be a job for the scheduler software/support scientist/TO/observer (and trapped lower down by the ICS), as at present - and trapping of the "is this a legal setup when instrument X has component Y installed in it" problems, which is a job for the PREP tool. Examples of the sort of thing that should be checked by the PREP tool for CGS4 would be: for a given grating and wavelength, is the entered position angle legal or should it be 180 deg different; can polarimetry be done with the echelle at 1.1um; etc. "Does CGS4 have the echelle installed in it" would be a level of checking for the scheduler.

Section 4.5: You need a reference to the UKIRT TCS documentation.

Page 14: Is the table on page 14 Table 7? It seems to have lost its label.

Section 4.8.1: I think the system would be a lot more robust if it relied on an explicit "queue" of files to be reduced, rather than just detecting when a file appears. As you say, the index file could be used for this. Section 4.8.5: My vote would be to write the index from the data acquisition system. A flag in the index could indicate when each file has been reduced. You would need some way of managing the index so that the data acquisition and data reduction tasks can both update it. Both were discussed at the meeting; the general consensus was that an index written by the DR would be most appropriate (maintains independence of the two systems) - but details to be thought about further as the DR design progresses.

Section 4.8.7: What happens if you have to terminate an observation prematurely due to a deterioration in the weather? Can you finish a group later on (or during a future night) when the weather improves, or will you have to start again? Would have to start again - from a science point of view keeping data before and after a weather change separate for manual inspection and combining by observer at a later time is probably better anyway.

Section A1: This is very useful. Please can you add it to document [O-1] and include it at the beginning of the book.

6.0 UFTI Scripts


A separate meeting between TGH, GSW, FE and MJC was held to discuss details of the proposed UFTI data reduction scripts for commissioning.

TGH commented that for the "bright standards" and "combined jitter set" recipes the blank sky observations for the FLAT would probably be taken every ~3 hours rather than once per night. This would not fundamentally alter the reduction recipe though - other than that the most recent FLAT should be used.

There was some discussion of the "quadrant jitter" method - it was felt that the generalised N position version would be better, but since "quadrant jitter" had been used by experienced users and deemed to work this was a good starting point for the initial UFTI scripts.

Concerns were expressed about sigma clipping and how well it worked, especially if there were only a few frames. GSW said she didn't know exactly what the IRAF routine recommended by SKL did, but that SKL had used it a lot without problems. MJC will find out which IRAF routine it is and what it does. After the meeting MJC pointed out that worries about small numbers of frames were irrelevant if the sigma clipping was done over a whole frame and therefore had lots of pixels in it.

TGH was not clear how the "Adjacent Frames" method would work, but agreed that if SKL had found it useful for very faint sources then it would be good to include it in the initial set.

For the extended sources and crowded fields TGH emphasised that using stars in the overlap regions to work out relative positions was much more accurate if at all possible. MJC said he already had this working on some test data.

Two points were raised about the lower priority scripts:

For end of night photometry, there was already a good approximation to what was required that could read an ASCII file, and so leaving this as low priority was OK.

Simple image quality monitoring should be raised in priority, since at the moment TGH felt that users did not check this often enough. Writing the results in a file was especially important and upgrades would keep an eye on the files also. Since much of the required DR steps (e.g. calculating FWHM) have to be done for the recipe for reducing data from a focus run, it would be a relatively small amount of extra work. It was agreed that this would be raised to the priority 1 category.
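The shared DR step mentioned above - measuring FWHM for both the focus-run recipe and image quality monitoring - reduces, in its simplest form, to finding where a stellar profile crosses half its peak. A bare sketch of that idea on a 1-D profile (a real recipe would fit the 2-D image; the function and scale are illustrative):

```python
def fwhm(profile, pixel_scale=1.0):
    """Crude FWHM estimate: width (in pixels times pixel_scale) between
    the first and last samples at or above half the peak value."""
    peak = max(profile)
    half = peak / 2.0
    above = [i for i, v in enumerate(profile) if v >= half]
    return (above[-1] - above[0]) * pixel_scale
```

Logging the returned value per frame, as TGH requested, is then a one-line addition to whichever recipe computes it.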

Overall it was agreed that a reasonable number and variety of scripts had been included and they should be sufficient to cover most of what UFTI needs to get started.

Note from AJA who was unable to attend discussion of UFTI scripts since this was after the main meetings: point 3.1.3 - I agree wholeheartedly with the philosophy of providing a restricted number of EXECs and their related DR scripts, while offering to produce others on (reasonable and timely) requests from users. This should probably be stated more clearly somewhere higher up in the document hierarchy - say in the overview document?

7.0 Action List


[O-1] AB to find out whether the Gemini team have any plans to automatically check that standard stars and objects are observed with the same spectrographic setup.
[O-2] AB to change overview diagram to reflect new role of scheduler as a user interface+translator.
[O-3] AB+NPR to discuss changes to the existing CGS4, IRCAM, telescope software. (COMPLETED).
[O-4] AB to reach an agreement with Gemini regarding use of the OT, with a view to obtaining a copy of the software in time to start work on the UKIRT modifications in February.
[O-5] GSW add some "things going wrong" items to the Scenarios document
[O-6] TRG arrange for someone at the JAC to collate support scientist requests for additional header items.
[O-7] GSW/FE/MJC make a list of extra headers that the data reduction would need.
[O-8] FE/MJC/AB produce a specification for a simple format for engineering and analysis files written by the DR software (photometry files, logging of Array_tests results, logging of FWHM results, logging of 2-row/2-column line fits etc.)
[O-9] AB + MH-P to discuss the changes to CGS4 etc. during the SPIE meeting and MH-P visit to ROE.
[O-10] AB to investigate what can be done in the way of having markers on the display of the sequence that is executing which will show iterator loops, and "breakpoints"
[O-11] AB to expand table of EXEC dictionary to show which items will be allowed in parallel and which not, also additional commands needed.
[O-12] MJC find out from SKL which IRAF sigma clipping routine she uses and in detail what it does, with a view to reproducing it
[O-13] GSW change UFTI scripts document to show "simple image quality monitoring" as a high priority.
[O-14] GSW to check and update Scenarios document about searching for suitable dark frames.
[O-15] GSW to check and clarify DR requirements FD4.