
SNe revision and error report



I made an error at the recent meeting in Toronto when working out the 
supernova time request. We are requesting the first and last useful nights 
of each dark run on each field at 95 percentile image quality (currently 
1.2 arcsec) and the six intermediate nights at 85 percentile image quality 
(currently 1 arcsec). Statistically this is 7 nights. However, we also 
expect it to be clear only about 75% of that time, giving 5.25 useful 
nights. The exhaustive error analysis we did includes all of this, as well 
as the night-to-night correlations of the weather.

The result is that the total ultra-deep synoptic request drops to 202 nights 
from the 270 or so quoted before, which eases things considerably. Apologies 
for this error; please check my arithmetic in the altered draft below.

Also, the SA22 field (at 22 hours RA) is pretty undesirable: it has an 
extinction of about 0.3 mag, which is equal to the entire signal of the dark 
energy detection at about z=1. This is not a reasonable starting point. 
There is at least one alternate field away from the equator in this RA 
range, but it is in the north.


Revised writeup is attached. 
\subsection{The cosmic equation of state}

Over the past year there has been a profound shift in our view of the
cosmological world model, as a result of the two supernova teams
finding that the distance-redshift data are best described by an
``accelerating'' universe, as in a flat, low-density model. The more
recent Boomerang and Maxima results find that the first Doppler peak
is at a location indicating that the universe is flat. The implication
of these results is that the mass-energy of the universe is dominated
by a repulsive $\Lambda$. The effects of this non-clustering
mass-energy are only readily visible on the scale of the universe
itself; therefore its properties are only open to investigation
through astronomical observation.

One important property of $\Lambda$ is the relation between its energy
density, $\rho_X$, and its effective pressure in the cosmological
equation of state, $p_X = w \rho_X$. A constant $\Lambda$ has $w=-1$.
At this stage most theorists would prefer some dynamical form of
$\Lambda$ which varies in time, and there is a huge range of
alternatives arising from string theory, ``quintessence'' and other
fundamental field theories. Quintessence predicts a late-time value of
$w \simeq -0.8$, and one form of strings predicts $w = -1/3$. Values
currently discussed (see Weller and Albrecht) cover the range from
about $-1.1$ to $-0.5$. It must be emphasized that at this stage there
is no useful constraint within this range.
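
To make the observable concrete: in a flat universe a constant $w$ enters
the expansion rate through a dark energy term scaling as $(1+z)^{3(1+w)}$,
and hence shifts the supernova distance-redshift relation. The Python
sketch below illustrates the size of the effect at $z=1$; the values
$\Omega_M=0.3$ and $H_0=70$ are illustrative assumptions, not numbers from
this proposal.

\begin{verbatim}
# Sketch: distance modulus at z = 1 for a flat universe with constant w.
# Omega_M = 0.3 and H0 = 70 are placeholder values for illustration only.
from math import sqrt, log10
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed)
OMEGA_M = 0.3         # flat, so Omega_X = 1 - Omega_M (assumed)

def E(z, w):
    """Dimensionless expansion rate H(z)/H0 for constant w."""
    return sqrt(OMEGA_M * (1 + z)**3
                + (1 - OMEGA_M) * (1 + z)**(3 * (1 + w)))

def distance_modulus(z, w):
    """mu = 5 log10(D_L / 10 pc), with D_L in Mpc."""
    dc, _ = quad(lambda zp: 1.0 / E(zp, w), 0.0, z)
    d_l = (1 + z) * (C_KM_S / H0) * dc
    return 5 * log10(d_l) + 25

# The spread across w = -1.1 ... -0.5 is a few tenths of a magnitude at
# z = 1, only a few times the ~0.1 mag SnIa intrinsic dispersion, which
# is why large, uniform samples are needed.
for w in (-1.1, -1.0, -0.8, -0.5):
    print(f"w = {w:+.1f}: mu(z=1) = {distance_modulus(1.0, w):.2f}")
\end{verbatim}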

A group has proposed that a specialized satellite, SNAP, be built to
provide a very tight constraint on $w$ (see snap.lbl.gov). As
proposed, this is approximately a 2.5m (HST-sized) telescope with a 1
degree optical imager, an IR imager, and a low-resolution
spectrograph. They suggest that it could begin observations in about
2006, based on very prompt approval and minimal complications. We
expect to be able to publish our primary results before they begin
observing. This is an exciting experiment; however, there is
considerable value in ground-based studies making the first
measurement of $w$. Such studies will also lead to a huge expansion in
our knowledge of SnIa prior to the launch of SNAP, through the
creation of a large, uniform and ``clean'' sample which is likely to
be invaluable for further measurements of the time variation of $w$.

At this time CFHT has proven to be the best existing telescope for
finding SnIa, and MegaPrime will improve that situation. Below we lay
out a proposal to measure $w$. As much as possible we take a balanced
approach to this issue, presenting the evidence we have assembled that
the basic approach will work. Note that we will create a well-sampled
dataset of about 2500 SnIa (and a comparable number of SnII), which
exceeds anything else available, even the first-round SNAP plans. All
of the photometry for these will be in our data, and we hope and
expect to learn techniques that will allow us to incorporate them into
our analysis, permitting a new level of precision in the measurement.

\subsubsection{2000 CFHT Supernovae and $w$}

It has now been established that, within the precision of the data,
SnIa can be calibrated to yield their luminosity; the discovery of the
luminosity-decline relation is the key ingredient. Normally every
supernova discovered must have its spectrum measured to confirm that
it is a Ia. In this program we expect to discover nearly 400 Ia per
year out to redshift about 1. Based on the discovery redshift
distribution, we estimate this would require approximately 3 nights of
8m-class telescope time per month, or about 30 nights per year, to
follow two campaigns of five months each. Between Canada and France we
have access to the four VLTs, the two Geminis and the two Magellans.
The main problem is in the north. However, since all confident
supernova detections will be immediately available on the web, we
expect informal collaboration will allow us to acquire the necessary
spectra. Furthermore, we are cautiously optimistic that the 200,000
uniform photometric measurements we will acquire will allow us to
devise new, entirely photometric approaches to typing, as well as
control over as-yet unknown systematic errors.
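
For scale, the follow-up numbers above imply the nightly confirmation load
sketched below; the spectra-per-night figure is derived here for
illustration and is not quoted elsewhere in this draft.

\begin{verbatim}
# Back-of-the-envelope spectroscopic follow-up load, using only numbers
# stated in the text; the spectra-per-night figure is the derived output.
IA_PER_YEAR = 400        # expected SnIa discoveries per year
NIGHTS_PER_MONTH = 3     # 8m-class nights requested per month
CAMPAIGN_MONTHS = 2 * 5  # two five-month campaigns per year

nights_per_year = NIGHTS_PER_MONTH * CAMPAIGN_MONTHS  # = 30, as quoted
spectra_per_night = IA_PER_YEAR / nights_per_year     # ~13 per 8m night
print(nights_per_year, round(spectra_per_night, 1))
\end{verbatim}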

There are two proposed survey styles. The common element is to observe
every second night if the image quality is better than the 85
percentile value, currently 1 arcsec. The French group proposes to
observe 300, 600, 1800 and 1800 sec in g', r', i' and z' respectively;
the Canadian group proposes 900, 1800, 3600 and 1800 seconds in the
same filters. The total cost in queue time per epoch is thus either
1.25 or 2.25 hours. The shorter integrations are less costly and
produce similar errors, provided that the ``intrinsic dispersion'' in
Ia brightnesses is at least 0.1 mag. The longer integrations have the
significant advantage that, for as much as is known of the Ia
luminosity function at peak, we will be more than 50\% complete in all
colors out to redshift 0.9.
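
The per-epoch costs quoted above follow directly from the exposure lists;
a minimal sketch (no overheads are included, matching the quoted totals):

\begin{verbatim}
# Open-shutter cost per field per epoch for the two strategies.
# Exposure times (seconds) are from the text; overheads are excluded.
FRENCH   = {"g'": 300, "r'": 600,  "i'": 1800, "z'": 1800}
CANADIAN = {"g'": 900, "r'": 1800, "i'": 3600, "z'": 1800}

for name, plan in (("French", FRENCH), ("Canadian", CANADIAN)):
    print(f"{name}: {sum(plan.values()) / 3600.0:.2f} h per epoch")
# -> French: 1.25 h, Canadian: 2.25 h
\end{verbatim}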

The MSWG also supports the ultra-deep survey, which adds the u' filter
at 900 seconds per epoch, for a total of 2.5 hours per field per
epoch. We assume the camera is on the telescope for two 5-month
campaigns per year (this may not be true, in which case the
integration time is automatically reduced; there is no real value in
observing more often than once every two days). Consequently the total
integration times in u', g', r', i' and z' will be approximately 26,
26, 52, 104 and 52 hours per field, respectively. This meets our goal
of obtaining ``near-HDF'' depths: assuming $\sqrt{T}$ improvement, we
should reach about 28.4 in g', 28.0 in r', 27.8 in i' and 26.0 in z'
(all Vega).
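
The $\sqrt{T}$ scaling above amounts to a depth gain of $1.25\log_{10}N$
magnitudes when $N$ equal epochs are co-added. A minimal sketch; the
single-epoch depth used is a placeholder, not a number from this draft:

\begin{verbatim}
# Stacking N equal epochs deepens the limiting magnitude by
# 1.25 log10(N), assuming S/N grows as sqrt(total integration time).
from math import log10

def stacked_depth(m_single_epoch, n_epochs):
    return m_single_epoch + 1.25 * log10(n_epochs)

# A hypothetical i' single-epoch depth of 25.3 mag stacked over ~100
# epochs reproduces the quoted 27.8 mag.
print(round(stacked_depth(25.3, 100), 1))  # -> 27.8
\end{verbatim}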

\subsection{Details of Survey Design}

The three key ingredients in designing the survey are the estimated
limiting magnitude of MegaPrime in the SDSS filters; real-world sky
statistics of clear sky, seeing and transparency for Mauna Kea; and
the current best estimates of supernova rates, luminosity functions
and light curves as observed in the SDSS filters. We have used a set
of programs kindly provided by John Tonry. It should be emphasized
that some of the numbers used in this program are controversial at the
factor of 50\% or so level; however, it does represent the state of
knowledge in this field at this time. The same sky series were used by
the French and the Canadians. The French used their own estimates of
supernova rates and luminosities, whereas the Canadians used the Tonry
values; for the same assumptions the two teams derived very similar
results. The precision of the $w$ measurement, in the presence of an
$\Omega_M$ constraint from weak lensing, should be about 5\%. The
strategic difference comes down to whether photometry in all bands
near peak light is sufficiently important to roughly double the
observing time. The MSWG supports an approach which allows the best
possible control over systematics and at the same time satisfies the
goals of the ultra-deep survey.

\subsubsection{Primary Scientific Results}

In total this program will produce about 2500 Ia and comparable
numbers of II. The 10\% subsample in E/S0 hosts gives $\sigma_w=0.18$
if we use the readily available constraint on $\Omega_M$ from weak
lensing; if we add constraints on both $\Omega_M$ and $\Omega_\Lambda$
from the CMB, then $\sigma_w\simeq 0.07$. Moreover, this vast dataset
will be extensively examined for systematic errors at a new level of
precision. The other 90\% of the Ia sample will likely be turned,
eventually, into a precision distance-estimator sample as well. If so,
then we obtain $\sigma_w \simeq 0.03$--$0.05$, within a factor of two
to three of what can be done with SNAP.
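
The statistical part of this improvement is just sample-size scaling; the
sketch below illustrates it under the assumption of purely statistical,
$1/\sqrt{N}$ errors, which is a plausibility argument only and not the
full analysis behind the numbers above:

\begin{verbatim}
# If the other 90% of the Ia reach the same per-object quality as the
# E/S0 subsample, the usable sample grows tenfold. Assumes purely
# statistical errors (sigma_w ~ 1/sqrt(N)); systematics are ignored.
from math import sqrt

SIGMA_W_SUBSAMPLE = 0.07  # 10% E/S0 subsample with CMB priors (quoted)
print(round(SIGMA_W_SUBSAMPLE / sqrt(10), 3))  # ~0.022
# The quoted 0.03-0.05 is more conservative, presumably allowing for
# residual systematics in the larger, less clean sample.
\end{verbatim}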


\subsubsection{Data Requirements}

We require that the seeing be better than the 85 percentile image
quality, currently one arcsecond; this should improve to about 0.9
arcsecond with MegaPrime. We require observations every second night,
provided that the sky is clear and the image quality requirement is
met. If these conditions are not met, the observations are not taken
(making this a good time for calibrations, if photometric). Ideally we
request the first and last nights of the dark run at 95 percentile
seeing and the six intervening nights at 85 percentile seeing.
Statistically, allowing for the sky being clear about 75\% of the time
(the simulator allows for night-to-night weather correlations), this
amounts to a request of about 5.25 epochs per month. For 5-month
campaigns over five years this is 131.25 epochs per field. For four
fields with a total of 2.5 hours of open-shutter time per epoch, this
is 1312.5 hours, or 202 queue nights.
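
As a check on the arithmetic above (requested in the covering note), the
sketch below re-derives the totals end to end; the 6.5-hour usable queue
night is inferred from 1312.5 h / 202 nights rather than stated
explicitly:

\begin{verbatim}
# End-to-end check of the time request. All inputs are from the text
# except the 6.5 h usable night length, which is inferred.
EPOCHS_PER_MONTH = 5.25   # 7 scheduled nights x 75% clear
MONTHS = 5                # one 5-month campaign per field per year
YEARS = 5
FIELDS = 4
HOURS_PER_EPOCH = 2.5     # ultra-deep open-shutter time per field

epochs = EPOCHS_PER_MONTH * MONTHS * YEARS   # 131.25 per field
hours = epochs * FIELDS * HOURS_PER_EPOCH    # 1312.5 hours
print(epochs, hours, round(hours / 6.5))     # -> 131.25 1312.5 202
\end{verbatim}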