
Changeset 30593


Timestamp:
Feb 13, 2011, 10:21:44 AM
Author:
eugene
Message:

some work on the psphot guide

File:
1 edited

  • trunk/doc/psphot/psphot.tex

r21455 → r30593

@@ -159 +159 @@
 Python).
 
+\note{discuss the psphot program variants}
+
 \section{PSPhot Design Goals}
 
     
@@ -276 +278 @@
 \end{itemize}
 
-Note that a given run of PSPhot \note{should} allow the user to
-perform any of these stages as an option.  For example, the PSF model
-may already be available from external information, in which case the
-PSF modeling stage can be skipped.  Or, when used as a library
-function, the image may have already been loaded and the mask and
-weight images constructed.  In some implementations, it may be
-possible to skip the initial object detection stage because only
-supplied sources are measured.  These are only some of the possible
-configurations.  The use of these different configurations depends on
-the source of the image, the desired detail and speed of the
-processing, and the level of accuracy desired from the analysis.
+Note that a given run of PSPhot allows the user to perform each of
+these stages only as needed.  For example, the PSF model may already
+be available from external information, in which case the PSF
+modeling stage can be skipped.  Or, when used as a library function,
+the image may have already been loaded and the mask and weight images
+constructed.  In some implementations, it may be possible to skip the
+initial object detection stage because only supplied sources are
+measured.  These are only some of the possible configurations.  The
+choice among them depends on the source of the image, the desired
+detail and speed of the processing, and the level of accuracy desired
+from the analysis.
 
 \subsection{Image Preparation}
     
@@ -292 +294 @@
 The first step is to prepare the image for detection of the
 astronomical objects.  We need three separate images: the measured
-flux, the corresponding noise level, and a mask defining which pixels
-are valid and which should be ignored.  For the stand-alone program,
-the input flux image is a required program argument.  When it is
-loaded, it is converted by default to 32-bit floating point
+flux, the corresponding variance image, and a mask defining which
+pixels are valid and which should be ignored.  For the stand-alone
+program, the input flux image is a required program argument.  When it
+is loaded, it is converted by default to 32-bit floating point
 representation.  In the function-call form of PSPhot, the image must
 be supplied by the user in 32-bit floating point format.  The noise
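The preparation step above (flux converted to 32-bit float, plus a variance image and a mask) can be sketched in Python with numpy. This is an illustrative helper, not PSPhot's actual code: the function name, the `gain`/`readnoise` parameters, and the Poisson-plus-read-noise default variance model are all assumptions.

```python
import numpy as np

def prepare_images(flux, variance=None, mask=None, gain=1.0, readnoise=5.0):
    """Hypothetical sketch of PSPhot-style image preparation.

    Converts the flux image to 32-bit float and, when no variance image
    is supplied, builds one from a simple Poisson + read-noise model
    (an assumption; PSPhot's default construction may differ).
    """
    flux = np.asarray(flux, dtype=np.float32)          # required 32-bit float form
    if variance is None:
        # Poisson variance of the counts plus read noise; negative
        # fluxes contribute no Poisson term.
        variance = (np.clip(flux, 0, None) / gain + readnoise**2).astype(np.float32)
    if mask is None:
        mask = np.zeros(flux.shape, dtype=np.uint16)   # 0 == valid pixel
    return flux, variance, mask
```

In the function-call form of PSPhot the caller would hand in all three arrays already in the right formats; the defaults here only stand in for the stand-alone program's behavior.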
     
@@ -307 +309 @@
 automatically by PSPhot.
 
-For the mask, we use an 8-bit image in which a value of 0 represents a
-valid pixel.  We use each of the 8 bits to define different reasons a
-pixel should be ignored.  This allows use to optionally respect or
+\note{describe the use of the covariance image}
+\note{describe the difference between 'bad' and 'suspect' pixels}
+
+For the mask, we use a 16-bit image in which a value of 0 represents a
+valid pixel.  We use each of the 16 bits to define different reasons a
+pixel should be ignored.  This allows us to optionally respect or
 ignore the mask depending on the circumstance.  For example, in some
 cases, we ignore saturated pixels completely while in other
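The 16-bit mask scheme described above can be illustrated with numpy bit operations. The bit names and assignments below are hypothetical; in PSPhot the meanings are assigned via configuration, not hard-coded as here.

```python
import numpy as np

# Hypothetical bit assignments for illustration only; PSPhot assigns
# mask meanings to bit values through its configuration.
SATURATED = np.uint16(1 << 0)
DEAD      = np.uint16(1 << 1)
SUSPECT   = np.uint16(1 << 2)

mask = np.zeros((2, 2), dtype=np.uint16)   # 0 == valid pixel
mask[0, 0] |= SATURATED
mask[1, 1] |= SUSPECT

# Respect every mask bit (any nonzero pixel is ignored):
all_bad = mask != 0

# ...or, in another circumstance, treat SUSPECT pixels as usable by
# clearing that bit before testing:
hard_bad = (mask & ~SUSPECT) != 0
```

Because each reason occupies its own bit, a caller can choose per-stage which conditions to respect, which is exactly the flexibility the text describes (e.g. sometimes ignoring saturated pixels entirely, sometimes not).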
     
@@ -325 +330 @@
 \code{XMIN}, \code{XMAX}, \code{YMIN}, \code{YMAX}.
 
-\note{Mask values are currently hard-wired numbers.  We need a method
-  for user-defined mask values to be supplied.  PSLib needs to have a
-  mask registration system.}
+\note{discuss the mask.config file, in which the mask meanings are
+  assigned to bit values}
 
 The noise image, if not supplied, is constructed by default from the
     
@@ -337 +340 @@
 valid.  For example, if the input flux image is the result of an image
 stack with a significantly variable number of input measurements per
-pixel, it will necessary to supply a noise image which accurately
+pixel, it will be necessary to supply a noise image which accurately
 represents the noise as a function of position in the image.
 
     
@@ -343 +346 @@
 
 The objects are initially detected by finding the location of local
-peaks in the image.  The flux image is smoothed with a very small
-circularly symmetric kernel using a two-pass 1D Gaussian.  At this
+peaks in the image.  The flux and variance images are smoothed with a
+small circularly symmetric kernel using a two-pass 1D Gaussian
+(\note{KEYWORD?}).  The smoothed flux and variance images are combined
+to generate a significance image in signal-to-noise units
+\note{including correction for the covariance, if known}.  At this
 stage, the goal is only to detect the brighter sources, above a
 user-defined S/N limit (configuration keyword: \code{PEAK_NSIGMA}).
 The detection efficiency for the brighter sources is not strongly
 dependent on the form of this smoothing function.
-
-\note{Is this smoothing needed?  we could save time here by skipping
-it.}
 
 The local peaks in the smoothed image are found by first detecting
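The significance-image construction described above (separable two-pass 1D Gaussian smoothing of flux and variance, then a ratio in signal-to-noise units) can be sketched as follows. All function names are hypothetical, the kernel width is a stand-in for the undetermined configuration keyword, and pixel noise is assumed independent, so the covariance correction mentioned in the text is not modeled.

```python
import numpy as np

def gauss_kernel(sigma, radius=None):
    """Normalized 1D Gaussian kernel (illustrative helper)."""
    radius = radius or int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_two_pass(img, kernel):
    """Separable smoothing: convolve along rows, then along columns."""
    out = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, kernel, mode="same")

def significance_image(flux, variance, sigma=1.0):
    """S/N image from smoothed flux and variance.

    The smoothed variance uses the squared kernel, which is the correct
    propagation only for independent pixel noise (no covariance term).
    """
    k = gauss_kernel(sigma)
    sm_flux = smooth_two_pass(flux, k)
    sm_var = smooth_two_pass(variance, k ** 2)
    return sm_flux / np.sqrt(np.maximum(sm_var, 1e-30))
```

Bright-source detection would then threshold this image at the configured limit, e.g. `peaks = significance > PEAK_NSIGMA`.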
     
@@ -364 +367 @@
 the maximum $X$ and $Y$ corners of the region.
 
-\note{The current implementation ignores the S/N map in making the
-peak detection.  This code must be modified (a la Kaiser) to be used
-for a peak-detection pass in a difference image or to re-find peaks in
-the image after the modeled objects have been subtracted}.
+\subsection{Footprints}
+
+\note{need to describe the process of generating the source footprints
+  and then culling the insignificant peaks}
+
+\subsubsection{Moments and related}
+
+\note{discuss the Kron mags}
+
+\note{this section is wrong: we no longer use S/N clipping, but a
+  Gaussian window function, chosen based on the measured moment}
 
 Once a collection of peaks has been identified, basic properties of
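The first and second moments referred to here can be sketched as below. This is an unwindowed illustration only: per the note above, PSPhot actually applies a Gaussian window function chosen from the measured moment, which this simplified helper (a hypothetical name) omits.

```python
import numpy as np

def basic_moments(stamp):
    """Intensity-weighted first and second moments of a postage stamp.

    Hedged sketch: no Gaussian window is applied here, unlike the
    scheme the document describes; negative pixels are clipped so the
    weights stay non-negative.
    """
    y, x = np.indices(stamp.shape, dtype=np.float64)
    w = np.clip(stamp, 0, None)
    total = w.sum()
    xc = (w * x).sum() / total                      # first moments (centroid)
    yc = (w * y).sum() / total
    mxx = (w * (x - xc) ** 2).sum() / total         # second moments (shape)
    myy = (w * (y - yc) ** 2).sum() / total
    mxy = (w * (x - xc) * (y - yc)).sum() / total
    return xc, yc, mxx, myy, mxy
```

In the document's scheme these moments seed later stages, e.g. the initial guesses for the PSF-model centroid and shape parameters.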
     
@@ -391 +401 @@
 
 \subsubsection{Determination of the Peak Coordinates and Errors}
+
+\note{this section is wrong: it is a poor estimator of the source
+  position errors.  We gave up and reverted to using the FWHM / (S/N)}
 
 We use the 9 pixels which include the source peak to fit for the
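A 9-pixel peak fit of the kind this sentence begins to describe can be sketched as follows. Since the hunk is truncated, the exact fitting function PSPhot uses is not shown here; this sketch uses independent 1D parabola fits per axis from finite differences, which is only one possible choice, and the function name is hypothetical.

```python
import numpy as np

def refine_peak(img, iy, ix):
    """Sub-pixel peak position from the 3x3 pixels around (iy, ix).

    Hedged sketch: fits a parabola independently along each axis via
    finite differences; PSPhot's actual fit over the 9 pixels may use
    a full 2D polynomial instead.
    """
    c = img[iy, ix]
    dx = 0.5 * (img[iy, ix + 1] - img[iy, ix - 1])   # first derivatives
    dy = 0.5 * (img[iy + 1, ix] - img[iy - 1, ix])
    dxx = img[iy, ix + 1] - 2 * c + img[iy, ix - 1]  # second derivatives
    dyy = img[iy + 1, ix] - 2 * c + img[iy - 1, ix]
    # Vertex of the fitted parabola along each axis
    x_sub = ix - dx / dxx if dxx != 0 else float(ix)
    y_sub = iy - dy / dyy if dyy != 0 else float(iy)
    return y_sub, x_sub
```

Per the note above, the position *error* is better estimated from FWHM / (S/N) than from this fit's formal covariance.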
     
@@ -605 +618 @@
 the minimization values.  PSPhot uses the first and second moments to
 make a good guess for the centroid and shape parameters for the PSF
-models.  In order to minimize the impact of close neighbors, the noise
-values used in the fit are enhanced by a fraction of the deviation of
-the particular pixel value from the model guess.  Any objects which
-fail to converge in the fit are flagged as invalid.
+models.  \note{still true? In order to minimize the impact of close
+  neighbors, the noise values used in the fit are enhanced by a
+  fraction of the deviation of the particular pixel value from the
+  model guess.}  Any objects which fail to converge in the fit are
+flagged as invalid.
 
 \note{does the noise enhancement introduce too much bias?}
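The noise-enhancement scheme described (and questioned) above amounts to inflating each pixel's variance by a fraction of its deviation from the model guess. A minimal sketch, with a hypothetical function name and fraction:

```python
import numpy as np

def enhanced_variance(data, model, variance, frac=0.1):
    """Down-weight pixels that deviate from the PSF model guess.

    Hedged sketch of the scheme in the text: each pixel's variance is
    inflated by a fraction (`frac`, a hypothetical default) of its
    deviation from the model, reducing the pull of close neighbors on
    the chi-squared minimization.
    """
    deviation = np.abs(np.asarray(data) - np.asarray(model))
    return np.asarray(variance) + (frac * deviation) ** 2
```

The bias concern in the note is real: because the deviation enters the weights, pixels that are genuinely discrepant from a *correct* model are also down-weighted, which can pull fitted fluxes toward the initial guess.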
     
@@ -1044 +1058 @@
 
 \subsection{Difference Images}
-
-\note{much of this discussion is theoretical: PSPhot can incorporate
-  these modifications, but it currently does not.}
 
 The noise map for a difference image must be generated from the two
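The sentence above is truncated by the diff, but for two inputs with independent noise the difference-image variance is simply the sum of the input variances; any covariance introduced by PSF-matching convolution would add a cross term not modeled here. A minimal sketch (hypothetical function name):

```python
import numpy as np

def difference_variance(var_a, var_b):
    """Variance map of a difference image from the two input variance maps.

    Assumes the two images have independent noise, so the variances
    add pixel by pixel; a convolution/matching kernel would introduce
    covariance requiring an extra term.
    """
    return np.asarray(var_a) + np.asarray(var_b)
```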