Overview
Note that this document is somewhat out of date: the details
of the programs used for the different steps may not reflect
current reality at CFHT. - EAM 2000.07.11
Ptolemy is a collection of programs which act together to provide
automated reduction of images, including photometry and astrometry,
data validation, and incorporation of the resulting measurements in a
photometry database organized by star position on the sky. Ptolemy
was originally designed to analyse images produced by LONEOS, the
Lowell Observatory Near Earth Object Search. It has since been
adapted to handle the more general case of mosaic camera data and
multiple filter sets. It has been tuned to work with the CFH12K
system, but can easily be adapted to any CCD imaging camera with
sufficient field of view (limited by the astrometric routines).
Why Ptolemy? Two reasons: First, Ptolemy, the Roman astronomer (AD
100-170), was the first to make a systematic map of the entire sky
visible from Greece. Second, Ptolemy I of Egypt (367-283 BC), the
founder of the Library of Alexandria, personifies the ideals of the
collection and organization of large quantities of information.
The organization of the various programs which make up Ptolemy is
handled by the program elixir (see 'Elixir: a program-organization
program'). In this document, we will discuss the implementation
and components which make up Ptolemy. Fig.~\ref{pipeline} shows a
schematic of the Ptolemy components. Images to be analysed are
passed to Ptolemy as a list of file names. They may have just come
from the telescope, or they may have been on disk for an arbitrary
amount of time. In the basic implementation, the images are assumed
to be raw, unprocessed (undetrended), though it is possible
to introduce data from any stage in the process and skip over the
earlier stages.
Images are first flattened, then analysed by an object detection
routine which produces instrumental photometry and pixel coordinate
positions for stars and other objects in the images. The resulting
files are cleaned of particularly bad types of detections, limited to
a minimal subset of data for each object, and merged with the header
information from the image to produce an ASCII FITS table which can be
easily used with the next stages. Astrometry is performed on the
resulting file and the results are written to the file header.
Finally, the file is added to a photometry database which stores
information about multiple measurements of objects and the images from
which those measurements were derived.
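The flow can be summarized with a short sketch. The stage names below
follow the conceptual boxes of Fig.~\ref{pipeline}, and the file
suffixes follow the naming used later in this document; the run_stage
placeholder stands in for invoking the actual external programs:
\begin{verbatim}
# Each stage reads one file and writes its successor, so stages
# chain by file name.  The 'flat' suffix is an assumption of this
# sketch; 'obj' and 'cmp' are described in the Cleanup section.
PIPELINE = [
    ("flatten", "fits", "flat"),  # detrending
    ("phot",    "flat", "obj"),   # object detection and photometry
    ("imclean", "obj",  "cmp"),   # cleanup and header merge
    ("astro",   "cmp",  "cmp"),   # astrometry, updates the header
    ("addstar", "cmp",  "cmp"),   # incorporation into the database
]

def run_stage(stage, infile, outfile):
    print("%s: %s -> %s" % (stage, infile, outfile))  # placeholder

def process(image_names):
    # Images arrive as a plain list of file names; it is equally
    # possible to enter the chain partway down, e.g. with images
    # that are already detrended.
    for name in image_names:
        base = name.rsplit(".", 1)[0]
        for stage, suff_in, suff_out in PIPELINE:
            run_stage(stage, base + "." + suff_in, base + "." + suff_out)
\end{verbatim}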
The above description is very general, and does not mention the use of
CFHT, the CFH12K mosaic, or any specific programs used in the analysis
steps. This is intentional. We have designed Elixir in general, and
the Ptolemy component in particular, to be as flexible and independent
of such constraints as possible. To date, we have used this system
with several different telescopes \& detectors (the single chip
Astrocam on the LONEOS telescope, the CFH12K on CFHT, the UH8K on the
UH 2.2m telescope, and (soon) the Suprime on Subaru). We have also
substituted several different object detection programs, multiple
flat-fielding modules, and different object list cleaning programs in
the process depending on the needs and the situation. To be more
explicit, we will discuss below the nominal set of analysis modules,
in the context of analysis of CFH12K mosaic images. An important
point, though, related to the flexibility of our modular approach is
the independence of data in the photometry database from the data
source. In this database, there are no artificial conditions based on
the arrangement of the mosaic or the position of images on the sky.
Although the image source information is stored so that it is possible
to investigate instrumental effects as needed, an end user of the
photometry database need not worry about the source of the photometry
measurements to perform an analysis.
Figure 1: Schematic of the photometry pipeline. Rectangles represent
analysis steps, with conceptual names instead of actual program names,
while ovals represent data products. Arrows show the direction of
travel of information.
Flat Fielding [Flatten] : Flips & Mana
To date we have used two modules to perform the detrending step. One
uses the program mana to perform the arithmetical operations,
while the other uses the tools from the Flips package. The difference
between these two is one of detail: the mana-based version was quick
to implement, and performs only bias subtraction and flat-field
correction. Ignoring the dark current, for example, introduces an
error which can be as much as a couple of percent with certain CFH12K
chips. The mana-based process has been used in our analysis of the
archived CFH12K images from September 1999 to April 2000. The
Flips-based detrending process was implemented in early 2001, and
performs a complete set of detrending steps: bias, dark, flat-field,
and fringe-frame corrections.
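For concreteness, the per-pixel arithmetic of the full detrending step
looks roughly like the following sketch (illustrative numpy, not the
actual mana or Flips code; the normalization conventions assumed for
the dark and flat are noted in the comments):
\begin{verbatim}
import numpy as np

def detrend(raw, bias, dark, flat, exptime, fringe=None, fscale=0.0):
    """Return a detrended science frame.

    raw, bias, dark, flat, fringe: 2-D arrays of identical shape.
    dark is assumed normalized to 1 second of exposure; flat to
    unit mean.  These conventions are assumptions of this sketch.
    """
    img = raw - bias                  # remove the electronic offset
    img = img - dark * exptime        # dark current scales with time
    img = img / flat                  # pixel-to-pixel response
    if fringe is not None:
        img = img - fringe * fscale   # night-sky fringe pattern
    return img
\end{verbatim}
The mana-based module performs only the first and third steps;
dropping the dark term is what introduces the errors of up to a couple
of percent on some CFH12K chips.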
Both implementations of the flat-fielding process make use of the
Elixir detrend database system to automatically associate the
appropriate detrend images, of the various types, with a given science
image to be detrended. The appropriate detrend image is identified by
the observation time of the science image: the latest detrend image
whose validity period is defined to overlap that time is used. Images
are entered in the detrend database either by hand or automatically by
the Elixir detrend system.
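The selection rule can be sketched as follows; the record layout is
hypothetical rather than the actual Elixir schema, but the logic (all
frames of the required type overlapping the observation time, latest
one wins) follows the description above:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class DetrendRecord:          # hypothetical database record
    path: str
    kind: str                 # 'bias', 'dark', 'flat', 'fringe'
    valid_from: float         # validity period, e.g. Julian dates
    valid_to: float

def pick_detrend(records, kind, obs_time):
    candidates = [r for r in records
                  if r.kind == kind
                  and r.valid_from <= obs_time <= r.valid_to]
    if not candidates:
        raise LookupError("no %s frame overlaps t=%s" % (kind, obs_time))
    # 'latest' = the overlapping frame whose validity starts last
    return max(candidates, key=lambda r: r.valid_from)
\end{verbatim}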
Photometry [Phot] : Sextractor & Dophot
Photometry is performed using a variant of dophot, called
gophot. This program is adapted to streamline the processing of many
files and has a few other enhancements over dophot. The basic goals
are to detect objects, measure their positions, instrumental
magnitudes, and shapes, and to perform some limited object
classification. Gophot uses two-dimensional Gaussian fits to measure
the magnitudes and determine the shape parameters.
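As an illustration of the measurement involved, the sketch below fits
a single star with a two-dimensional Gaussian using scipy. It shows
the principle only; it is not gophot's actual fitting code, and the
initial-guess choices are assumptions:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, sky):
    x, y = xy
    g = amp * np.exp(-0.5 * (((x - x0) / sx) ** 2
                             + ((y - y0) / sy) ** 2)) + sky
    return g.ravel()

def fit_star(stamp, x_guess, y_guess):
    """Fit one star in a small cutout ('stamp') around it."""
    y, x = np.mgrid[:stamp.shape[0], :stamp.shape[1]]
    p0 = [stamp.max() - np.median(stamp), x_guess, y_guess,
          2.0, 2.0, np.median(stamp)]
    popt, _ = curve_fit(gauss2d, (x, y), stamp.ravel(), p0=p0)
    amp, x0, y0, sx, sy, sky = popt
    flux = 2.0 * np.pi * amp * sx * sy   # integral of the Gaussian
    mag = -2.5 * np.log10(flux)          # instrumental magnitude
    return x0, y0, mag, (sx, sy)         # position, magnitude, shape
\end{verbatim}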
Cleanup [imclean] : imclean, imclean.cfh12k, etc
After an image is processed by dophot, the object file
(foo.obj) is converted to a more compact and complete file
called a 'cmp' file, with a name of the form (foo.cmp). This
conversion is done with the program imclean. The 'cmp' file
consists of the FITS header from the original image
(foo.fits), with some additional keywords to be used at later
stages in the analysis, followed by an ASCII list of the interesting
data from (foo.obj). In this list, dophot object types 6 and 8 are
excluded, and only the following values are kept:
\begin{verbatim}
   X        Y       Mag    dMag  t  log(sky)
 1342.0   106.1   14.166    000  4    3.2
\end{verbatim}
Objects with a signal-to-noise ratio lower than a specified cutoff
(MIN_SN_FSTAT) are also excluded. Some general information
about the image is derived by imclean (FWHM, saturation and
completeness limits, number of each dophot type) and stored as
keywords in the header. The resulting file can now stand on its own
without reference to the original image. The keywords used by
imclean include the minimum signal-to-noise ratio, a rough guess at the
zero point (ZERO_PT), and four numbers defining the format of
the dophot 'obj' file.
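The filtering itself amounts to a few cuts and a projection onto the
minimal column set. In the sketch below the parsed row representation
is hypothetical, but the cuts (dophot types 6 and 8, the MIN_SN_FSTAT
cutoff) and the retained columns follow the description above:
\begin{verbatim}
import math

EXCLUDED_TYPES = {6, 8}   # dophot object types dropped by imclean
MIN_SN_FSTAT = 5.0        # signal-to-noise cutoff (value illustrative)

def clean_objects(obj_rows):
    """Yield the minimal per-object tuples kept in the 'cmp' table.

    obj_rows: dicts with x, y, mag, dmag, type, sky, and sn entries,
    a hypothetical parsed form of the dophot 'obj' file.
    """
    for row in obj_rows:
        if row["type"] in EXCLUDED_TYPES:
            continue                     # drop bad detection types
        if row["sn"] < MIN_SN_FSTAT:
            continue                     # drop low signal-to-noise
        yield (row["x"], row["y"], row["mag"], row["dmag"],
               row["type"], math.log10(row["sky"]))
\end{verbatim}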
Astrometry [astro] : gastro
Astrometry is performed automatically by the program gastro.
gastro loads a 'cmp' file and determines an initial guess
for the image coordinates (based on header keywords). The program
also determines the plate scale and rough orientation of the
image from the header to get close to the final solution. In addition, the
true sky position of the telescope pole may be defined to allow
astrometry on images taken close to the pole. The comparison is made
with the astrometric catalog, which may be the HST Guide Star Catalog,
the USNO database, or even the photometry database produced by
Ptolemy.
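The initial guess amounts to an approximate pixel-to-sky mapping
assembled from header values. The sketch below shows the idea for a
field far from the pole; the keyword names are illustrative, not
gastro's actual inputs:
\begin{verbatim}
import math

def initial_guess(header):
    """Approximate (ra0, dec0, scale, rot) from header values."""
    ra0 = header["RA_DEG"]              # pointing center, degrees
    dec0 = header["DEC_DEG"]
    scale = header["PIXSCALE"]          # arcsec per pixel
    rot = header.get("ROTANGLE", 0.0)   # rough orientation, degrees
    return ra0, dec0, scale, rot

def pixel_to_sky(x, y, x0, y0, ra0, dec0, scale, rot):
    """First-guess sky position of pixel (x, y); small fields only."""
    dx = (x - x0) * scale / 3600.0      # offsets in degrees
    dy = (y - y0) * scale / 3600.0
    c, s = math.cos(math.radians(rot)), math.sin(math.radians(rot))
    xi, eta = c * dx - s * dy, s * dx + c * dy
    ra = ra0 + xi / math.cos(math.radians(dec0))
    dec = dec0 + eta
    return ra, dec
\end{verbatim}
Predicted positions from this first guess are then compared with the
astrometric catalog and refined into the final solution.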
Data Incorporation [addstar] : addstar
Images which are successfully processed are then incorporated into the
photometry database with the program addstar. This program
decides which region files are appropriate for this particular image,
then one-by-one adds stars from the image to the appropriate region
file. Stars already in the catalog are matched with stars in the new
image purely by a positional comparison. In order to avoid the
difficulty of comparisons in the RA and DEC coordinate frame, a
Cartesian projection is performed. The stars from the image being
processed and the database stars in the same area are projected onto a
tangent plane, and positional comparisons are made in this (locally
Cartesian) coordinate frame. This avoids the dangerous singularities
at the pole and also makes the RA 0,360\degree\ boundary a trivial
problem.
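The projection in question is the standard gnomonic (tangent-plane)
projection; the formulae, sketched here in Python rather than taken
from addstar itself:
\begin{verbatim}
import math

def gnomonic(ra, dec, ra0, dec0):
    """Project (ra, dec) about the tangent point (ra0, dec0).

    All angles in degrees; returns the standard coordinates
    (xi, eta), in degrees, in the locally Cartesian frame."""
    ra, dec, ra0, dec0 = map(math.radians, (ra, dec, ra0, dec0))
    d = (math.sin(dec0) * math.sin(dec)
         + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    xi = math.cos(dec) * math.sin(ra - ra0) / d
    eta = (math.cos(dec0) * math.sin(dec)
           - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / d
    return math.degrees(xi), math.degrees(eta)
\end{verbatim}
Because both the image stars and the nearby database stars are
projected about the same tangent point, separations become plain
Euclidean distances, and neither the poles nor the RA boundary need
special treatment.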
Several choices must be made in the comparison process. If a star in
the catalog is matched with a star in the image, the new measurements
of that star are added to the database. If a star in the database
lies in the field of the image, but is not detected, this information
is added to the list of missing data. The only data stored in this
case is the time and source, which is sufficient to unambiguously
identify the source image from the image database. This allows later
programs to find relevant statistics from the image database, if
necessary. If a star is detected in the image, but is not already in
the database, a new entry is created. In addition, the
same ``missing data'' records from all previous images which have
covered this location (i.e., images already in the image database) are
included. This last step is necessary so that the image processing
order is not important.
Stars also run the danger of being crowded together. Since all
comparisons are performed on the basis of position alone, crowded
fields may make for ambiguous cross-identifications. A pair of stars
are matched if the difference in their positions is less than a
specified search radius (dependent on the scatter in the astrometric
solution for the image). If more than one catalog star is correlated
with a star in an image, the new measurement is added to each of the
matching catalog stars, and a flag is set noting that this observation
had a blended IMAGE. Conversely, if a single catalog star is matched
with more than one image star, all the new measurements are added to
the one catalog star, and a different flag is set, noting that this
observation had a blended CATALOG.
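Putting these rules together, the matching bookkeeping can be sketched
as follows. The data structures and flag names are illustrative, not
addstar's internals, and the ``missing data'' backfill for new objects
is only indicated in a comment:
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Tuple

BLENDED_IMAGE = 1    # a detection matched more than one catalog star
BLENDED_CATALOG = 2  # a catalog star matched more than one detection

@dataclass
class CatalogStar:
    xi: float                                # tangent-plane position
    eta: float
    flags: int = 0
    measurements: List[Tuple] = field(default_factory=list)
    missing: List[Tuple] = field(default_factory=list)

def incorporate(detections, catalog, radius, image_id, obs_time):
    """detections: (xi, eta, mag) triples projected onto the same
    tangent plane as 'catalog', the database stars in this field.
    'radius' reflects the scatter in the astrometric solution."""
    existing = list(catalog)                 # match old stars only
    n_matched = {id(c): 0 for c in existing}
    r2 = radius * radius

    for xi, eta, mag in detections:
        matches = [c for c in existing
                   if (c.xi - xi) ** 2 + (c.eta - eta) ** 2 <= r2]
        if not matches:
            # new object; a full implementation also backfills
            # 'missing' records from every earlier image covering
            # this spot, so that processing order does not matter
            catalog.append(CatalogStar(
                xi, eta, measurements=[(mag, obs_time, image_id)]))
            continue
        if len(matches) > 1:                 # ambiguous detection:
            for c in matches:                # add it to every match
                c.flags |= BLENDED_IMAGE     # and flag blended IMAGE
        for c in matches:
            c.measurements.append((mag, obs_time, image_id))
            n_matched[id(c)] += 1

    for c in existing:
        if n_matched[id(c)] > 1:             # several detections:
            c.flags |= BLENDED_CATALOG       # flag blended CATALOG
        elif n_matched[id(c)] == 0:          # in field, undetected:
            c.missing.append((obs_time, image_id))
\end{verbatim}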
Ptolemy is only responsible for analysing individual images and adding
their results to the photometry database. As part of the maintenance
of the photometry database, it is useful to run a variety of programs
which clean up and improve the database. These programs perform
functions such as identifying moving objects, determining relative
photometric zero points for images which overlap, and so forth. We
call these types of programs `data worms' since they worm their way
through the database on a regular schedule and process the data in some
way. The `data worms' currently in use are discussed elsewhere
(`dataworms').