(started by dgm 9 Feb 07 after conversations with Gene)
Big Picture:
Due to the effects of seeing (mostly) and other effects (a little), one gets better astrometric accuracy by solving for frame-to-frame mapping coefficients and then star parameters (relative position, motion, parallax, etc.) in a small chunk of sky than in a big chunk of sky. The division between "small" and "big" comes from being able to locally characterize the seeing and other errors by a low order polynomial (or similar), as distinct from actually understanding all of the effects and taking them out on all (or at least larger) scales. You get better accuracy, but the price is doing differential measurements. You get relative position, motion, parallax, etc., with respect to the mean of all stars and galaxies in the solution. For bright stars this is a big problem, because the correction from relative to absolute parallax, for example, can be a few milliarcseconds at 15th magnitude.
Brief Description of Current Algorithm:
Outer loop: All portions of the visible sky. At the moment, I have adopted the HEALPix tessellation using
NSide=512, which gives NPixel=3,145,728 and a roughly square pixel about 6.9 arcminutes on a side. This may be too big, but it is a starting point.
Task: Consult the database and get the list of all stars that lie in this HEALPix. Life is a lot easier if
all stars have unique IDs, and even more so if the database has a column containing the HEALPix ID.
Task: Given the list of star IDs, extract all measures of all stars from all observations. We use the
chip-based observations (x and y in pixels, brightness in counts), and we rely on the database to provide exposure metadata such as epoch, bore sight (RA,Dec), camera rotation, filter, etc.
Task: Go to the reference catalog and extract "first guess" data such as position (RA,Dec), magnitude,
color, star-or-galaxy, parallax, proper motion, etc.
Task: Identify each combination of Observation+CCD that contains enough HEALPix stars, and compute the
expected position for each reference catalog star.
(At the moment, and possibly for reasons not well understood, I have chosen to project each
reference star to the tangent plane of an idealized exposure of the nominal field center at the instant of the observation, based on the SLALIB DS2TP() routine. Since my simulation starts with no knowledge of proper motions or parallaxes, the starting guess is Apparent places, as distinguished from SLALIB's Catalog and Mean places, at the instant of 2000.0. Maybe this needs to be fixed.)
Inner Loop: Since neither the places nor the transformations are known, a relaxation is needed. Since most stars
do not move and have no parallax, the catalog positions should be reasonably accurate. Hence, the first step of the relaxation is to compute the transformations, and the second is to compute the mean positions.
Task: Given the set of actual positions (x and y in pixels) and the true positions (xi, eta; starting at
the catalog values but moving as the iterations proceed), use least squares to compute the transformation between (x,y) and (xi,eta).
(At the moment, I use a cubic polynomial in each of X and Y. Many experiments have shown that
the seeing does all sorts of things, but that a cubic gives a decent approximation without inserting steep gradients, etc.)
(Numerical experiments have shown two areas where this solution has problems. The first is where
the stars used in the fit occupy only a small portion of the HEALPix. This means that the solution will be extrapolated for stars that lie in the HEALPix but were not included in the solution, even though their parameters need to be estimated. It is much better to insert these with a low weight than to omit them. The other is the first solution of the first iteration. Noise is bad enough, but stars with significant motion or parallax tend to distort the solution, and this distortion can persist through many iterations. It is much better to restrict this first solution to galaxies or faint stars whose motion is zero or negligible.)
(By using (x,y) and (xi,eta), the scale (arcsec/pixel for example) will appear in the coefficients.
This isn't an extra free parameter because the linear term includes effects from seeing and other things, but it makes these terms nowhere near unity. If you convert (xi,eta) into idealized pixels, the constant terms can get really big. At some level, this choice is driven by personal preference.)
(Remember that this fit is mostly to remove the seeing, residual errors in the telescope model, and
similar. Since a single Observation+CCD contains many HEALPix, the same pairing will have different transformation coefficients in solutions for different HEALPix. One should avoid detailed interpretation of these transformation coefficients and attribution of their value to causes other than seeing.)
Task: Given the transformation coefficients for each Observation+CCD, transform the measures for each star into
the mean coordinate frame, and then fit these for position, proper motion, parallax, refraction, perturbations from unseen companions, and anything else you can think of.
(This step needs lots of metadata. A useful choice is to retain times in Julian years counted from
2000.0. This makes the constant term the one you need for the catalog and gives you annual proper motion. My preference is to use SLALIB to compute the parallax factors and refraction coefficients by
    pi = 4.0D00*ATAN(1.0D00)
    halfpi = pi/2.0D00
    twopi = pi*2.0D00
    z = SIN(obs_ha)*SIN(halfpi-obs_lat)/SIN(obs_zd)
    obs_pa = ASIN(z)
    z = TAN(obs_zd)
    obs_rra = z*SIN(obs_pa)
    obs_rdec = z*COS(obs_pa)
    CALL sla_MAPQK(r0,d0, ZERO,ZERO,1.0D00,ZERO, amprms, ara,adec)
    CALL sla_MAPQK(r0,d0, ZERO,ZERO,ZERO,ZERO, amprms, bra,bdec)
    obs_pfx = (ara-bra)*3600.0D00*radian*COS(d0)
    obs_pfy = (adec-bdec)*3600.0D00*radian
Sorry for the FORTRAN, but the C would be very similar. These follow the assumption of the idealized projection to a tangent plane with axes oriented along RA and Dec. The parallax factors are in units of arcseconds, and the refraction vectors are dimensionless. The least squares fit for each star is
    X = a + b*t + c*obs_pfx + d*obs_rra
    Y = A + B*t + C*obs_pfy + D*obs_rdec
where X and Y are computed for the star from the appropriate transformation, and obs_pfx, obs_pfy, obs_rra, and obs_rdec are taken from each observation's metadata.)
(One can argue that there is only one parallax for the star, and that the solutions for X and Y should be
coupled into a single solution. I really don't like to do this, but will omit all the reasons here.)
Task: Using the solution coefficients, the predicted place for each star on each CCD of each exposure is updated.
End of Inner Loop: Three iterations is usually enough for convergence.
Task: If you have a list of known galaxies, then their mean motion and parallax should be computed and removed
from the solution.
Task: Update the database with the values for proper motion and parallax. It would be useful to save the
coefficients of refraction for the first guess when you do this again after new observations become available. You probably shouldn't use the constants to update the RA and Dec of your reference star catalog. Too many other things use this catalog, and it shouldn't be changed unless there is a really good reason to do so.
End of Outer Loop: The sky is done. Start writing letters to the ApJ or Nature.
Comments, brickbats, and flames:
(dgm) This is a big and messy operation. Simulations based on USNO-B and the LSST focal plane showed that it would take
several months on a single machine and was limited by disk access. Further experiments highlighted the need to balance disk I/O with the limitations of physical memory. For LSST, the simulation was coded using three different disk storage models: one file per observation, one file per raft (9 CCDs), and one file per CCD. Just as Goldilocks found out, the middle was pretty good. Saving by observation put too many measures into memory, and virtual memory could hold only a year or so of data. Saving by CCD was amazingly slow because of the need to load (and reload) so many little chunks for each HEALPix. The raft-based storage needed a few hundred megabytes of memory, and this seems pretty reasonable.
(dgm) The HEALPix tessellation isn't great, but it seems useful. The C language library provides a subroutine that returns
a unique HEALPix ID for any (RA,Dec), and adding this to the structure for each star ensures that stars do not move between tiles. It also allows indexing so that the extraction goes a lot faster. The logic for boxes of (RA,Dec) gets messy, particularly near the poles, and it really need only be done once per star.
