
Changeset 40423


Timestamp:
May 14, 2018, 6:20:09 PM
Author:
watersc1
Message:

Incomplete set of edits, but I wanted it checked in.

File:
1 edited

  • trunk/doc/release.2015/ps1.detrend/detrend.tex

    r40081 r40423  
    109109\begin{abstract}
    110110
    111 The Pan-STARRS1 Science Consortium have carried out a set of imaging
    112 surveys using the 1.4 giga-pixel GPC1 camera on the PS1 telescope.  As
     111The Pan-STARRS1 Science Consortium has carried out a set of imaging
     112surveys using the 1.4 gigapixel GPC1 camera on the PS1 telescope.  As
    113113this camera is composed of many individual electronic readouts, and
    114114covers a very large field of view, great care was taken to ensure that
     
    133133the data reduction techniques and the resulting data products. This paper (Paper III)
    134134describes the details of the pixel processing algorithms, including
    135 detrending, warping, and adding (to create stacked images) and subtracting
    136 (to create difference images) and resulting image products and their
     135detrending, warping, adding (to create stacked images), and subtracting
     136(to create difference images), along with the resulting image products and their
    137137properties.
    138138\citet[][Paper I]{chambers2017} provides an overview of the Pan-STARRS System, the
     
    142142%Magnier et al. 2017 (Paper II)
    143143%Pan-STARRS Data Processing Stages
    144 \citet[][Paper II]{magnier2017a}
     144\citet[][Paper II]{magnier2017.datasystem}
    145145describes how the various data processing stages are organized and
    146146implemented
     
    153153%Magnier et al. 2017 (Paper IV)
    154154%Pan-STARRS Pixel Analysis : Source Detection
    155 \citet[][Paper IV]{magnier2017b}
     155\citet[][Paper IV]{magnier2017.analysis}
    156156describes the details of the source detection and photometry, including
    157157point-spread-function and extended source fitting models, and the
     
    159159%Magnier et al. 2017 (Paper V)
    160160%Pan-STARRS Photometric and Astrometric Calibration
    161 \citet[][Paper V]{magnier2017c}
     161\citet[][Paper V]{magnier2017.calibration}
    162162describes the final calibration process, and the resulting photometric and
    163163astrometric quality.
     
    178178
    179179
    180 The Pan-STARRS 1 Science Survey uses the 1.4 giga-pixel GPC1 camera
     180The Pan-STARRS 1 Science Survey uses the 1.4 gigapixel GPC1 camera
    181181with the PS1 telescope on Haleakala Maui to image the sky north of
    182182$-30^\circ$ declination.  The GPC1 camera is composed of 60 orthogonal
    183183transfer array (OTA) devices, each of which is an $8\times{}8$ grid of
    184 readout cells.  This parallelizes the readout process, reducing the
    185 overhead in each exposure.  However, as a consequence of this large
    186 number of individual detector readouts, many calibrations are needed
    187 to ensure the response is consistent across the entire field of view.
    188 
    189 The Processing Version 3 (PV3) reduction represents the third full
    190 reduction of the Pan-STARRS archival data.  The first two reductions
    191 were used internally for pipeline optimization and the development of
    192 the initial photometric and astrometric reference catalog \citep{magnier2017c}.  The
    193 products from these reductions were not publicly released, but have
    194 been used to produce a wide range of scientific papers from the
     184readout cells.  The large number of cells parallelizes the readout
     185process, reducing the overhead in each exposure.  However, as a
     186consequence, many calibration operations are needed to ensure the
     187response is consistent across the entire seven square degree field of
     188view.
     189
     190%The Processing Version 3 (PV3) reduction represents the third full
     191DR1 contains the results of the third full reduction of the Pan-STARRS
     192archival data.  The first two reductions were used internally for
     193pipeline optimization and the development of the initial photometric
     194and astrometric reference catalog \citep{magnier2017.calibration}.
     195The products from these reductions were not publicly released, but
     196have been used to produce a wide range of scientific papers from the
    195197Pan-STARRS 1 Science Consortium members.
    196198
    197199The Pan-STARRS image processing pipeline (IPP) is described elsewhere
    198 \citep{magnier2017a}, but a short summary follows.  The
    199 archive of raw exposures is stored on disk, with a database storing
    200 the metadata of exposure parameters.  For the PV3 processing, large
    201 contiguous regions were defined, and the images for all exposures
    202 within that region launched for the \IPPstage{chip} stage processing.
    203 This stage performs the image detrending (described below in section
     200\citep{magnier2017.datasystem}, but a short summary follows.  The raw
     201image data is stored on the processing cluster, with a database
     202storing the metadata of exposure parameters.  These raw images can be
     203launched for the initial \IPPstage{chip} stage processing.  This stage
     204performs the image detrending (described below in section
    204205\ref{sec:detrending}), as well as the single epoch photometry
    205 \citep{magnier2017b}, in parallel on the individual OTA device data.
    206 Following the \IPPstage{chip} stage is the \IPPstage{camera} stage, in
    207 which the astrometry and photometry for the entire exposure is
    208 calibrated by matching the detections against the reference catalog.
     206\citep{magnier2017.analysis}, in parallel on the individual OTA device
     207data.  Following the \IPPstage{chip} stage is the \IPPstage{camera}
     208stage, in which the astrometry and photometry for the entire exposure
     209is calibrated by matching the detections against a reference catalog.
    209210This stage also performs masking updates based on the now-known
    210211positions and brightnesses of stars that create dynamic features (see
     
    213214\IPPstage{chip} stage images onto common sky oriented images that have
    214215fixed sky projections (Section \ref{sec:warping}).  When all
    215 \IPPstage{warp} stage processing is done for the region of the sky,
     216\IPPstage{warp} stage processing is done for a region of the sky,
    216217\IPPstage{stack} processing is performed (Section \ref{sec:stacking})
    217218to construct deeper, fully populated images from the set of
    218 \IPPstage{warp} images that cover that region of the sky.  Beyond the
    219 \IPPstage{stack} stage, a series of additional stages are done that
    220 are more fully described in other papers.  Transient features are
    221 identified in the \IPPstage{diff} stage, which takes input
    222 \IPPstage{warp} and/or \IPPstage{stack} data and performs image
     219\IPPstage{warp} images that cover that region of the sky.  Transient
     220features are identified in the \IPPstage{diff} stage, which takes
     221input \IPPstage{warp} and/or \IPPstage{stack} data and performs image
    223222differencing (Section \ref{sec:diffs}).  Further photometry is
    224223performed in the \IPPstage{staticsky} and \IPPstage{skycal} stages,
    225224which add extended source fitting to the point source photometry of
    226 objects detected in the \IPPstage{stack} images, and calibrate the
    227 results against the reference catalog.  The \IPPstage{fullforce} stage
    228 takes the catalog output of the \IPPstage{skycal} stage, and uses the
    229 objects detected in that to perform forced photometry on the
     225objects detected in the \IPPstage{stack} images, and again calibrate
     226the results against a reference catalog.  The \IPPstage{fullforce}
     227stage takes the catalog output of the \IPPstage{skycal} stage, and
     228uses the objects detected in that to perform forced photometry on the
    230229individual \IPPstage{warp} stage images.  The details of these stages
    231 are provided in \citet{magnier2017b}.
    232 
    233 The same reduction procedure described above is also performed in real
    234 time on new exposures as they are observed by the telescope.  This
    235 process is largely automatic, with new exposures being downloaded from
    236 the summit to the main IPP processing cluster at the Maui Research and
    237 Technology Center in Kihei, and registered into the processing
    238 database.  This triggers a new \IPPstage{chip} stage reduction for
    239 science exposures, advancing processing upon completion through to the
    240 \IPPstage{diff} stage.  This allows the ongoing solar system moving
    241 object search to identify candidates for follow up observations within
    242 24 hours of the initial set of observations \citep{2015IAUGA..2251124W}.
     230are provided in \citet{magnier2017.analysis}.
     231
     232A limited version of the same reduction procedure described above is
     233also performed in real time on new exposures as they are observed by
     234the telescope.  This process is automatic, with new exposures being
     235downloaded from the summit to the main IPP processing cluster at the
     236Maui Research and Technology Center in Kihei, and registered into the
     237processing database.  New \IPPstage{chip} stage reductions are
     238launched for science exposures, advancing processing upon completion
     239through to the \IPPstage{diff} stage, skipping the additional stack
     240and forced warp photometry stages.  This automatic processing allows
     241the ongoing solar system moving object search to identify candidates
     242for follow up observations within 24 hours of the initial set of
     243observations \citep{2015IAUGA..2251124W}.
    243244
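The stage sequencing described above (full reductions run all stages; the real-time nightly processing stops at the diff stage) can be sketched as follows. Stage names are from the paper; the sequencing helper itself is a hypothetical illustration, not IPP code.

```python
# Illustrative sketch of the IPP stage ordering described above.
# Stage names come from the paper; this helper is hypothetical,
# not part of the actual IPP software.

FULL_SEQUENCE = ["chip", "camera", "warp", "stack", "diff",
                 "staticsky", "skycal", "fullforce"]

def stages_for(mode):
    """Return the processing stages run for a given reduction mode."""
    if mode == "nightly":
        # Real-time processing advances through diff, skipping the
        # stack-based photometry stages and forced warp photometry.
        return ["chip", "camera", "warp", "diff"]
    if mode == "full":
        return list(FULL_SEQUENCE)
    raise ValueError("unknown mode: %s" % mode)
```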
    244245Section \ref{sec:detrending} provides an overview of the detrending
    245246process that corrects the instrumental signatures of GPC1, with
    246 details of the construction of those detrends in Section
    247 \ref{sec:detrend construction}.  An analysis of the algorithms used to
    248 complete the \IPPstage{warp} (section \ref{sec:warping}),
     247details of the construction of the reference detrend templates in
     248Section \ref{sec:detrend construction}.  An analysis of the algorithms
     249used to perform the \IPPstage{warp} (section \ref{sec:warping}),
    249250\IPPstage{stack} (section \ref{sec:stacking}), and \IPPstage{diff}
    250 (section \ref{sec:diffs}) stage transformations of the image data to
    251 from the detector frame to a common sky frame, and the co-adding of
    252 those common sky frame images continues after the list of detrend
    253 steps.  Finally, a discussion of the remaining issues and possible
    254 future improvements is presented in section \ref{sec:discussion}.
    255 
    256 Image products presented in figures have been
    257 mosaicked to arrange pixels as follows.  Single cell images are
    258 arranged such that pixel $(1,1)$ is at the lower left corner.  Images
    259 mosaicked to the OTA level have cell xy00 in the lower left corner,
    260 with cells xy10, xy20, etc. sequentially to the right, and cells xy01,
    261 xy02, etc. sequentially to the top of this cell.  Again, pixel $(1,1)$
    262 of cell xy00 is located in the lower left corner of the image.  For
    263 mosaicks of the full field of view, the OTAs are arranged as they see
    264 the sky.  The lower left corner is the empty location where OTA70
    265 would exist.  Toward the right, the OTA labels decrease in $X$ label,
    266 with the empty OTA00 located in the lower right.  The OTA $Y$ labels
    267 increase upward in the mosaic.  The OTAs to the left of the midplane
    268 (OTA4Y-OTA7Y) are oriented with cell xy00 and pixel $(1,1)$ to the
    269 lower left of their position.  Due to the electronic connections of
     251(section \ref{sec:diffs}) stage transformations of the image data
     252follows after the list of detrend steps.  Finally, a discussion of the
     253remaining issues and possible future improvements is presented in
     254section \ref{sec:discussion}.
     255
     256Image products presented in figures have been mosaicked to arrange
     257pixels as follows.  Single cell images are arranged such that pixel
     258$(1,1)$ is at the lower right corner (for example Figure
     259\ref{fig:burntool images}).  This corrects the parity difference
     260between the raw data and the sky.  Images mosaicked to show a full OTA
     261detector are arranged as they are on the focal plane (as in Figure
     262\ref{fig:dark image}).  The OTAs to the left of the midplane
     263(OTA4Y-OTA7Y) are oriented with cell xy00 and pixel $(590,1)$ to the
     264lower right of their position.  Due to the electronic connections of
    270265the OTAs in the focal plane, the OTAs to the right of the midplane
    271266(OTA0Y-OTA3Y) are rotated 180 degrees, and are oriented with cell xy00
    272 and pixel $(1,1)$ to the top right of their position.
     267and pixel $(590,1)$ to the top left of their position. For mosaics of
     268the full field of view, the OTAs are arranged as they see the sky,
     269with the cells arranged as in the single OTA images (Figure \ref{fig:optical ghosts}).  The lower left
     270corner is the empty location where OTA70 would exist.  Toward the
     271right, the OTA labels decrease in $X$ label, with the empty OTA00
     272located in the lower right.  The OTA $Y$ labels increase upward in the
     273mosaic. \czw{This is somewhat of a mess?}
    273274
    274275\textit{Note: These papers are being placed on the arXiv.org to
     
    290291level, dark frame subtraction to remove temperature and exposure time
    291292dependent detector glows, and flat field correction to remove pixel to
    292 pixel response functions.  We also construct fringe correction for the
    293 reddest data in the \yps{} filter, to remove the interference patterns that
    294 arise in that filter due to the variations in the thickness of the
    295 detector surface.
    296 
    297 These corrections, however, assume that the detector response is
    298 linear across the full range of values.  This is not universally the
    299 case with GPC1, and this requires an additional set of detrending
    300 steps to remove these non-linear responses.  The first of these is the
    301 \IPPprog{burntool} correction, which removes the persistence trails
    302 caused by the incomplete transfer of charge along the readout columns.
    303 This bright-end nonlinearity is generally only evident for the
    304 brightest stars, as only pixels that are at or beyond the saturation
    305 point of the detector have this issue.  More widespread is the
    306 non-linearity at the faint end of the pixel range.  Some readout cells
    307 and some readout cell edge pixels experience a sag relative to linear
    308 at low illumination, such that faint pixels appear fainter than
    309 expected.  The correction to this requires amplifying the pixel values
    310 in these regions to match the expected model.
    311 
    312 The final non-linear response issue has no good option for correction.
     293pixel response functions.  We also perform fringe correction for the
     294reddest data in the \yps{} filter to remove the interference patterns
     295that arise in that filter due to the variations in the thickness of
     296the detector surface.
     297
     298These corrections assume that the detector response is linear across
     299the full range of values.  This assumption is not universally true for
     300GPC1, and an additional set of detrending steps is required to remove
     301these artifacts.  The first of these is the \IPPprog{burntool}
     302correction, which removes the flux trails left by the incomplete
     303transfer of charge along the readout columns.  These trails are
     304generally only evident for the brightest stars, as only pixels that
     305are at or beyond the saturation point of the detector leave residual
     306charge.  More widespread is the non-linearity at the faint end of the
     307pixel range.  Some readout cells and some readout cell edge pixels
     308experience a sag relative to linear at low illumination, such that
     309faint pixels appear fainter than expected.  The correction to this
     310requires amplifying the pixel values in these regions to match the
     311expected model.
     312
    313313Large regions of some OTA cells experience significant charge transfer
    314314issues, making them unusable for science observations.  These regions
    315315are therefore masked in processing, with these CTE regions making up
    316316the largest fraction of masked pixels on the detector.  Other regions
    317 are masked for other regions, such as static bad pixel features or
    318 temporary readout masking caused by issues in the camera electronics
    319 that make these regions unreliable.  These all contribute to the
    320 detector mask, which is augmented in each exposure for dynamic
     317are masked for reasons such as static bad pixel features or temporary
     318readout masking caused by issues in the camera electronics that make
     319these regions unreliable.  These all contribute to the detector mask,
     320a 16-bit value in which each bit set records one reason the pixel is
     321masked.  This mask is augmented in each exposure for dynamic
    321322features that are masked based on the astronomical features within the
    322323field of view.
    323324
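The 16-bit detector mask described above can be illustrated with a toy bitmask. The bit assignments below are invented for this sketch; they are not the actual GPC1 mask definitions.

```python
# Toy illustration of a 16-bit pixel mask in which each set bit records
# one reason the pixel is masked.  The bit values are invented for this
# sketch; they are not the actual GPC1 mask definitions.

MASK_BAD_CTE   = 0x0001  # static charge-transfer-efficiency region
MASK_BAD_PIXEL = 0x0002  # static bad pixel feature
MASK_READOUT   = 0x0004  # temporary readout electronics masking
MASK_DYNAMIC   = 0x0008  # per-exposure dynamic feature (e.g. burn trail)

def mask_reasons(mask_value):
    """Decode which masking reasons are set in a 16-bit mask value."""
    names = {MASK_BAD_CTE: "cte", MASK_BAD_PIXEL: "bad_pixel",
             MASK_READOUT: "readout", MASK_DYNAMIC: "dynamic"}
    return [name for bit, name in names.items() if mask_value & bit]
```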
    324325For the PV3 processing, all detrending is done by the
    325 \IPPprog{ppImage} program.  This program applies the detrends to the
    326 individual cells, and then an OTA level mosaic is constructed for the
    327 science image, the mask image, and the variance map image.  The single
    328 epoch photometry is done at this stage as well.  The following
    329 subsections (\ref{sec:burntool} - \ref{sec:background}) detail these
    330 detrending steps, presented in the order in which they are applied to
    331 the individual OTA image data.
     326\IPPprog{ppImage} program.  This program applies the detrend
     327corrections to the individual cells, and then an OTA-level mosaic is
     328constructed for the signal image, the mask image, and the variance map
     329image.  The single epoch photometry is done at this stage as well.
     330The following subsections (\ref{sec:burntool} - \ref{sec:background})
     331detail these detrending steps, presented in the order in which they
     332are applied to the individual OTA image data.  \czw{I haven't
     333  rearranged into regular and ``special'' yet.}
    332334
    333335\subsection{Burntool / Persistence effect}
     
    335337
    336338Pixels that approach the saturation point on GPC1, which varies by
    337 readout with common values around 60000 DN, cause persistence problems
    338 on that and subsequent images.  During the read out process of an
    339 image with such a bright pixel, some of the charge associated with it
    340 is not fully shifted down the detector column toward the amplifier.
    341 As a result, this charge remains in the starting cell, and is
    342 partially collected in subsequent shifts, resulting in a ``burn
     339cell with common values around 60000 DN, introduce ``persistent
     340charge'' on that and subsequent images.  During the read out process
     341of a cell with such a bright pixel, some of the charge remains in the
     342undepleted region of the silicon and is not shifted down the detector
     343column toward the amplifier.  This charge remains in the starting
     344pixel and slowly leaks out of the undepleted region, contaminating
     345subsequent pixels during the read out process, resulting in a ``burn
    343346trail'' that extends from the center of the bright source away from
    344347the amplifier (vertically along the pixel columns toward the top of
    345348the cell).
     349
     350%associated with it
     351%is not fully shifted down the detector column toward the amplifier.
     352%As a result, this charge remains in the starting cell, and is
     353%partially collected in subsequent shifts, resulting in a ``burn
     354%trail'' that extends from the center of the bright source away from
     355%the amplifier (vertically along the pixel columns toward the top of
     356%the cell).
    346357
    347358This incomplete charge shifting in nearly full wells continues as each
     
    352363image towards the amplifier (vertically down along the pixel column).
    353364This remnant charge can remain on the detector for up to thirty
    354 minutes, requiring the locations of these ``burns'' be retained
    355 between exposures.
     365minutes.
     366%, requiring the locations of these ``burns'' be retained
     367%between exposures.
    356368
    357369Both of these types of persistence trails are measured and optionally
    358370repaired via the \IPPprog{burntool} program.  This program does an
    359 initial scan of the images, and identifies objects with pixel values
    360 brighter than a conservative threshold of 30000 DN.  The trail from
    361 the peak of that object is fit with a one-dimensional power law in
    362 each pixel column above the threshold, based on empirical evidence
    363 that this is the functional form of this persistence effect.  This
    364 also matches the expectation that a constant fraction of charge is
     371initial scan of the image, and identifies objects with pixel values
     372higher than a conservative threshold of 30000 DN.  The trail from the
     373peak of that object is fit with a one-dimensional power law in each
     374pixel column above the threshold, based on empirical evidence that
     375this is the functional form of this persistence effect.  This fit also
     376matches the expectation that a constant fraction of charge is
    365377incompletely transferred at each shift beyond the persistence
    366 threshold.  Once this fit is done, the model can be subtracted from
    367 the image, and the location of the star is stored in a table along
     378threshold.  Once the fit is done, the model can be subtracted from
     379the image.  The location of the source is stored in a table along
    368380with the exposure PONTIME, which denotes the number of seconds since
    369 the detector was last powered on, and provides an internally consistent
    370 time scale.
     381the detector was last powered on and provides an internally
     382consistent time scale.
    371383
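The one-dimensional power-law trail fit described above can be sketched as follows. The least-squares fit in log-log space is an assumption about the implementation; burntool's actual fitting code may differ.

```python
import math

# Sketch of fitting a one-dimensional power law f(d) = A * d**(-alpha)
# to a persistence trail in a single pixel column, where d >= 1 is the
# distance in pixels above the saturated peak.  The log-log linear
# regression here is an illustrative assumption, not burntool's code.

def fit_power_law(trail):
    """Fit A, alpha to trail values at distances d = 1, 2, ..."""
    xs = [math.log(d + 1) for d in range(len(trail))]
    ys = [math.log(v) for v in trail]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope  # A, alpha

def subtract_trail(trail, A, alpha):
    """Subtract the fitted model from the observed column values."""
    return [v - A * (d + 1) ** (-alpha) for d, v in enumerate(trail)]
```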
    372384For subsequent exposures, the table associated with the previous image
    373385is read in, and after correcting trails from the stars on the new
    374386image, the positions of the bright stars from the table are used to
    375 check for remnant trails on the image.  These are fit and subtracted
    376 using a one-dimensional exponential model, again based on empirical
    377 studies.  If a significant model is found, then this location is
    378 retained in the image output table.  If not, the old burn is allowed
    379 to expire.
    380 
    381 The main concern with this method of correcting the persistence trails
    382 is that it is based on fits to the raw image data, which may have
    383 other signal sources not determined by the persistence effect.  The
    384 presence of other stars or artifacts along the path of the burn can
     387check for remnant trails from previous exposures on the image.  These
     388are fit and subtracted using a one-dimensional exponential model,
     389again based on empirical studies.  The output table retains this
     390remnant position for 2000 seconds after the initial PONTIME recorded.
     391This allows fits to be attempted well beyond the nominal lifetime of
     392these trails.  Figure \ref{fig:burntool images} shows an example of a
     393cell with a persistence trail from a bright star, the post-correction
     394result, as well as the pre and post correction versions of the same
     395cell on the subsequent exposure.  The profiles along the detector
     396columns for these two exposures are presented in Figure
     397\ref{fig:burntool plot}.
     398
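The one-dimensional exponential model used for remnant trails on subsequent exposures can be sketched the same way; fitting by linear regression on the logarithm of the trail values is an illustrative assumption, not burntool's actual implementation.

```python
import math

# Sketch of fitting the exponential remnant-trail model
# f(d) = A * exp(-d / s) to column values at d = 0, 1, 2, ...
# The log-linear regression is an assumption for illustration.

def fit_exponential(trail):
    """Fit amplitude A and decay scale s to remnant trail values."""
    xs = list(range(len(trail)))
    ys = [math.log(v) for v in trail]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -1.0 / slope  # A, s
```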
     399A challenge with this method of correcting the persistence trails is
     400that it is based on fits to the raw image data, which may
     401have other signal sources unrelated to the persistence effect.
     402The presence of other stars or artifacts in the detector column can
    385403result in a poor model to be fit, resulting in either an over- or
    386 under-subtraction of the persistence burn.  For this reason, the image
    387 mask is marked with a value indicating that this correction has been
    388 applied.  These pixels are not fully excluded, but they are marked as
    389 suspect, which allows them to be excluded from consideration in
    390 subsequent stages, such as image stacking.
    391 
    392 Another concern is that the cores of very bright stars are deformed by
    393 this process, as the burntool fitting subtracts flux
    394 from only one side of the star.  As most stars that result in burns already
    395 have saturated cores, they are already ignored for the purpose of
    396 PSF determination and are flagged as saturated by the photometry
    397 reduction.
    398 
    399 \begin{figure}
    400   \centering
    401   \begin{minipage}{0.45\hsize}
    402     \includegraphics[width=0.9\hsize,angle=0,clip]{images/o5677g0123o_XY11_bt_trail.png}
    403 %    \caption{(a)}
    404 %  \end{subfigure}%
    405 %  \begin{subfigure}[]{.45\hsize}
    406   \end{minipage}%
    407   \begin{minipage}{0.45\hsize}
    408     \includegraphics[width=0.9\hsize,angle=0,clip]{images/o5677g0124o_XY11_bt_trail.png}
    409 %    \caption{(b)}
    410 %  \end{subfigure}
    411   \end{minipage}
    412 
    413   \caption{Example of a profile cut along the y-axis through a bright star on exposure o5677g0123o OTA11 in cell xy60 (left panel) and on the subsequent exposure o5677g0124o (right panel).  In both figures, the green points show the image corrected with all appropriate detrending steps, but without burntool applied, illustrating the amplitude of the persistence trails.  The red points show the same data after the burntool correction, which reduces the impact of these features.  Both exposures are in the \gps{} filter with exposure times of 43s}
    414 \end{figure}
     404under-subtraction of the trail.  For this reason, the image mask is
     405marked with a value indicating that this correction has been applied.
     406These pixels are not fully excluded, but they are marked as suspect,
     407which allows them to be excluded from consideration in subsequent
     408stages, such as image stacking.
     409
     410The cores of very bright stars can also be deformed by this process,
     411as the burntool fitting subtracts flux from only one side of the star.
     412As most stars that result in persistence trails already have saturated
     413cores, they are already ignored for the purpose of PSF determination
     414and are flagged as saturated by the photometry reduction.
    415415
    416416\begin{figure}
     
    438438%  \end{subfigure}
    439439  \end{minipage}
    440   \caption{Example of OTA11 cell xy60 on exposures o5677g0123o (left) and o5677g0124o (right).  The top panels show the image with all appropriate detrending steps, but without burntool, and the bottom show the same with burntool applied.  There is some slight over subtraction in fitting the initial trail, but the impact of the trail is greatly reduced in both exposures.}
     440  \caption{Example of OTA11 cell xy50 on exposures o5677g0123o (left) and o5677g0124o (right).  The top panels show the image with all appropriate detrending steps, but without burntool, and the bottom show the same with burntool applied.  There is some slight over-subtraction in fitting the initial trail, but the impact of the trail is greatly reduced in both exposures.}
     441  \label{fig:burntool images}
    441442\end{figure}
     443
     444
     445\begin{figure}
     446  \centering
     447  \begin{minipage}{0.45\hsize}
     448    \includegraphics[width=0.9\hsize,angle=0,clip]{images/o5677g0123o_XY11_bt_trail.png}
     449%    \caption{(a)}
     450%  \end{subfigure}%
     451%  \begin{subfigure}[]{.45\hsize}
     452  \end{minipage}%
     453  \begin{minipage}{0.45\hsize}
     454    \includegraphics[width=0.9\hsize,angle=0,clip]{images/o5677g0124o_XY11_bt_trail.png}
     455%    \caption{(b)}
     456%  \end{subfigure}
     457  \end{minipage}
     458
     459  \caption{Example of a profile cut along the y-axis through a bright star on exposure o5677g0123o OTA11 in cell xy50 (left panel) and on the subsequent exposure o5677g0124o (right panel).  In both figures, the green points show the image corrected with all appropriate detrending steps, but without burntool applied, illustrating the amplitude of the persistence trails.  The red points show the same data after the burntool correction, which reduces the impact of these features.  Both exposures are in the \gps{} filter with exposure times of 43s.}
     460  \label{fig:burntool plot}
     461\end{figure}
     462
    442463
    443464
     
    447468Each cell on GPC1 has an overscan region that covers the first 34
    448469columns of each row, and the last 10 rows of each column.  No light
    449 lands on these pixels, so the image region is trimmed to exclude them.
    450 Each row has an overscan value subtracted, calculated by finding the
    451 median value of that row's overscan pixels and then smoothing between
    452 rows with a three-row boxcar median.
     470lands on these pixels, so the science region is trimmed to exclude
     471them.  Each row has an overscan value subtracted, calculated by
     472finding the median value of that row's overscan pixels and then
     473smoothing between rows with a three-row boxcar median.  \czw{something
     474  about this sounding like real pixels?}
    453475
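The per-row overscan correction described above (row-wise overscan median, smoothed by a three-row boxcar median, subtracted from the science pixels) can be sketched as follows. The helper and its simplifications (edge clamping, 34 overscan columns at the start of each row) are illustrative, not the actual ppImage code.

```python
# Sketch of the per-row overscan correction described above: take the
# median of each row's overscan pixels, smooth with a three-row boxcar
# median, and subtract from the trimmed science pixels.  This helper is
# an illustrative assumption, not the actual ppImage implementation.

def median(values):
    s = sorted(values)
    return s[len(s) // 2]  # simple upper median for even counts

def overscan_correct(rows, n_overscan=34):
    """rows: lists of pixel values; the first n_overscan columns are overscan."""
    raw = [median(r[:n_overscan]) for r in rows]
    corrected = []
    for i, row in enumerate(rows):
        # three-row boxcar median, clamped at the top and bottom edges
        level = median(raw[max(0, i - 1):i + 2])
        corrected.append([v - level for v in row[n_overscan:]])
    return corrected
```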
    454476\subsection{Non-linearity Correction}
     
    461483evidence of this effect.
    462484
    463 To correct this sag, we studied the flux behavior of a series of flat
     485To correct this sag, we studied the behavior of a series of flat
    464486frames for a ramp of exposure times with approximate logarithmically
    465487equal spacing between 0.01s and 57.04s.  As the exposure time
    466 increases, the flux on each pixel also increases in what is expected
    467 to be a linear manner.  Each of these flat exposures in this ramp is
     488increases, the signal on each pixel also increases in what is expected
     489to be a linear manner.  Each of the flat exposures in this ramp is
    468490overscan corrected, and then the median is calculated for each cell,
    469491as well as for the rows and columns within ten pixels of the edge of
    470492the science region.  From these median values at each exposure time
    471 value, we can construct the expected trend by fitting a linear model,
    472 $f_{region} = G * t_{exp} + B$, to determine the gain, $G$, and the
    473 bias, $B$, for the region considered.  This fitting was limited to only
    474 the range of fluxes between 12000 and 38000 counts, as these ranges
    475 were found to match the linear model well.  This range avoids the
    476 non-linearity at low fluxes, as well as the possibility of high-flux
    477 non-linearity effects.
value, we can construct the expected trend by fitting a linear model
for the region considered.  This fitting was limited to the range
of fluxes between 12000 and 38000 counts, as this range was found
to match the linear model well.  This range avoids the non-linearity
at low fluxes, as well as the possibility of high-flux non-linearity
effects.

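The ramp fit can be sketched as follows; the explicit gain/offset parameterization $f = G\,t_{exp} + B$ and the function name are illustrative assumptions rather than the production code:

```python
import numpy as np

def fit_linearity_ramp(t_exp, median_flux, lo=12000.0, hi=38000.0):
    """Fit median_flux ~ G * t_exp + B over the well-behaved flux range.

    Returns the fitted gain and bias plus the fractional deviation of
    every ramp point from the linear trend (the quantity that would be
    tabulated in a linearity lookup table).
    """
    t = np.asarray(t_exp, dtype=float)
    flux = np.asarray(median_flux, dtype=float)
    ok = (flux >= lo) & (flux <= hi)  # avoid low/high-flux non-linearity
    G, B = np.polyfit(t[ok], flux[ok], 1)
    model = G * t + B
    deviation = (flux - model) / model
    return G, B, deviation
```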
We store the average flux measurement and deviation from the linear
fit for each exposure time for all regions on all detector cells in
the linearity detrend lookup tables.  An example of this data is
shown in figure \ref{fig: nonlinearity}.  When this correction is
applied to science data, these lookup tables are loaded, and a linear
interpolation is performed to determine the correction needed for the
flux in that pixel.  This lookup is performed for both the row and
column of each pixel, to allow the edge correction to be applied where
applicable, and the full cell correction elsewhere.  The average of
these two values is then applied to the pixel value, reducing the
effects of pixel nonlinearity.

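Applying the correction to one pixel can be sketched as below; we assume, for illustration only, that each lookup table stores (flux, correction) pairs and that the averaged row/column correction is subtracted from the pixel value:

```python
import numpy as np

def correct_pixel(flux, row_table, col_table):
    """Linearity-correct a single pixel value.

    Each table is a (flux_grid, correction) pair; corrections are
    linearly interpolated in flux, and the row and column values are
    averaged before being applied (assumed additive here).
    """
    row_corr = np.interp(flux, row_table[0], row_table[1])
    col_corr = np.interp(flux, col_table[0], col_table[1])
    return flux - 0.5 * (row_corr + col_corr)
```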
This non-linearity effect appears to be stable in time for the
     
  \includegraphics[width=0.9\hsize,angle=0,clip]{images/linearity_XY27_xy16.png}
  \caption{Example plot of the linearity correction as a fraction of observed flux for OTA27, cell xy16.}
  \label{fig: nonlinearity}
\end{figure}

\label{sec:dark}

The dark current in the GPC1 detectors has significant variations
across each cell.  The model we make to remove this signal considers
each pixel individually, independent of any neighbors.  To construct
this model, we fit a multi-dimensional model to the array of input
pixels from a randomly selected set of 100-150 overscan and
non-linearity corrected dark frames chosen from a given date range.
The model fits each pixel as a function of the exposure time $t_{exp}$
and the detector temperature $T_{chip}$ of the input images such that
$\mathrm{dark} = a_0 + a_1 t_{exp} + a_2 T_{chip} t_{exp} + a_3
T_{chip}^2 t_{exp}$.  This fitting uses two iterations to produce a
clipped fit, rejecting at the $3\sigma$ level.  The final coefficients
$a_i$ for the dark model are stored in the detrend image.  The
constant $a_0$ term includes the residual bias signal after overscan
subtraction, and as such, a separate bias subtraction is not
necessary.

Applying the dark model is simply a matter of calculating the response
to the exposure time and detector temperature for the image to be
corrected, and subtracting the resulting dark signal from the image.
Figure \ref{fig:dark image} shows the results of the dark subtraction.

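The per-pixel dark fit and its application can be sketched as follows; this is a minimal Python illustration of the clipped fit described above, and the clipping bookkeeping and guard conditions are our own choices:

```python
import numpy as np

def dark_basis(t_exp, T_chip):
    """Columns of the per-pixel model dark = a0 + a1*t + a2*T*t + a3*T^2*t."""
    t = np.asarray(t_exp, dtype=float)
    T = np.asarray(T_chip, dtype=float)
    return np.column_stack([np.ones_like(t), t, T * t, T**2 * t])

def fit_dark_pixel(t_exp, T_chip, values, n_iter=2, clip=3.0):
    """Iteratively clipped least-squares fit of the dark model for one pixel."""
    A = dark_basis(t_exp, T_chip)
    v = np.asarray(values, dtype=float)
    good = np.ones(v.size, dtype=bool)
    for _ in range(n_iter):
        coeffs, *_ = np.linalg.lstsq(A[good], v[good], rcond=None)
        resid = v - A @ coeffs
        sigma = np.std(resid[good])
        keep = np.abs(resid) < clip * sigma
        if sigma == 0 or keep.sum() < A.shape[1]:
            break  # nothing left to clip, or too few points to refit
        good = keep
    return coeffs

def evaluate_dark(coeffs, t_exp, T_chip):
    """Dark signal predicted for a science exposure; subtract from the image."""
    return dark_basis(t_exp, T_chip) @ coeffs
```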
\subsubsection{Time evolution}
     
significant drift over the course of multiple months.  Some of the
changes in the dark can be attributed to changes in the voltage
settings of the GPC1 controller electronics, but the causes of others
are unknown.  We can separate the dark model history of GPC1 into
three epochs.  The first epoch covers all data taken prior to
2010-01-23.  This epoch used a different header keyword for the
detector temperature, making data from this epoch incompatible with
later dark models.  In addition, the temperatures recorded under this
keyword were not fully calibrated, making the resulting dark model
less reliable.

The second epoch covers data between 2010-01-23 and 2011-05-01, and is
characterized by a largely stable but oscillatory dark solution.
The dark model switches between two modes, apparently at
random.  No clear cause has been established for the switching, but
there are clear differences between the two modes that require the
     

After 2011-05-01, the two-mode behavior of the dark disappears, and is
replaced with a slow observation-date-dependent drift in the magnitude
of the gradient.  This drift is sufficiently slow that we have modeled
it by generating models for different date ranges.  These darks cover
the ranges 2011-05-01 to 2011-08-01, 2011-08-01 to 2011-11-01, and
2011-11-01 onward.  The reason for this time evolution is unknown, but
as it is correctable with a small number of dark models, it does not
significantly impact detrending.

\begin{figure}
     
%  \end{subfigure}
  \end{minipage}
  \caption{An example of the dark model application to exposure o5677g0123o, OTA23 (2011-04-26, 43s \gps{} filter).  The left panel shows the image data mosaicked to the OTA level, and has had the static mask applied, the overscan subtracted, and the detector non-linearity corrected.  The right panel shows the same exposure with the dark applied in addition to the processing shown on the left, removing the amplifier glows in the cell corners.}
  \label{fig:dark image}
\end{figure}

     
  \centering
  \includegraphics[width=0.9\hsize,angle=0,clip]{images/B_profile_ex.png}
  \caption{Example showing a profile cut across exposure o5676g0195, OTA67 (2011-04-25, 43s \gps{} filter).  The entire first row of cells (xy00-xy07) has had a median calculated along each pixel column on the OTA mosaicked image.  Arbitrary offsets have been applied so that the curves do not overlap.  The top curve (in purple) shows the initial raw profile, with no dark model applied.  The next curve (in green) shows the smoother profile after applying the appropriate B-mode dark model.  Applying the A-mode dark instead results in the blue curve, which shows a significant increase in gradients across the cells.  The orange curve shows the result of the PATTERN.CONTINUITY correction.  Although this creates a larger gradient across the mosaicked images, it decreases the cell-to-cell level changes.  The final yellow curve shows the final image profile after all detrending and background subtraction, and has not had an offset applied.  The bright source at the cell xy00 to xy01 transition is the result of a large optical ghost, which, due to the area covered, increases the median level more than the field stars.}
  \label{fig:dark switching}
\end{figure}
     
\label{sec:video_darks}

Individual cells on GPC1 can be read repeatedly to create a video
signal used for telescope guiding.  However, when a cell is used for
this purpose, the dark signal for the entire OTA changes.  The most
noticeable feature of this change is that the glows in the cell
corners caused by the read-out amplifiers are suppressed.  As a
result, using the standard dark model on the data for these OTAs
results in oversubtraction of the corner glow.

To generate a correction for this change, a set of video dark models
was created by running the standard dark construction process on a
series of dark frames that had the video signal enabled for some
cells.  GPC1 can only run video signals on a subset of the OTAs at a
given time, so two passes are required to enable the video signal
across the full set of OTAs that support video cells.  This is
convenient for the process of creating darks, as those OTAs that do
not have video signals enabled produce standard dark models, while the
video dark is created for those that do.

This simultaneous construction of video and standard dark models is
useful, as it provides a way to isolate the video-signal response
from the standard dark.  If the standard and video dark signals
are separable, then archival video darks can be constructed by
applying the video dark response to the previously constructed dark
models.  Raw video dark frame data only exists after 2012-05-16, when
this problem was initially identified, so any data prior to that
cannot be directly corrected for the video dark signal.  Testing the
separability shows that constructing a video dark for older data
simply as $VD_{Old} = D_{Old} - D_{Modern} + VD_{Modern}$ produces a
satisfactory result that does not oversubtract the amplifier glow.
This is shown in figure \ref{fig:video_darks}, which shows video cells
from before 2012-05-16, corrected with both the standard and video
darks, with the early video dark constructed in this manner.

\begin{figure}
     
with the noise generally higher away from the read out amplifier
(higher cell x pixel positions).  This is likely an effect of the
row-by-row bias issue discussed below.  As a result of this
increased noise, more sources are detected in the higher noise regions
when the read noise is assumed constant across the readout.

To mitigate this noise gradient, we constructed an initial set of
noisemap images by measuring the median variance on bias frames
processed as science images.  The variance is calculated in boxes of
20x20 pixels, and then linearly interpolated to cover the full image.
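The construction can be sketched as below; the use of plain block variance (rather than a median estimator) and the separable per-axis interpolation are simplifying assumptions of this illustration:

```python
import numpy as np

def build_noisemap(bias, box=20):
    """Estimate a per-pixel read-noise (sigma) map from a bias frame.

    The variance is measured in box x box blocks and bilinearly
    interpolated back to full resolution, one axis at a time.
    """
    ny, nx = bias.shape
    nby, nbx = ny // box, nx // box
    grid = np.empty((nby, nbx))
    for j in range(nby):
        for i in range(nbx):
            grid[j, i] = np.var(bias[j*box:(j+1)*box, i*box:(i+1)*box])
    ys = (np.arange(nby) + 0.5) * box  # block centers
    xs = (np.arange(nbx) + 0.5) * box
    rows = np.array([np.interp(np.arange(nx), xs, grid[j]) for j in range(nby)])
    var = np.array([np.interp(np.arange(ny), ys, rows[:, i]) for i in range(nx)]).T
    return np.sqrt(var)
```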

Unfortunately, due to correlations within this noise, the variance
     
contaminating the final object catalogs.

In the detection process, we expect false positives at a low rate,
given that all sources are required to be significant at the $5\sigma$
level.  Since the observed false positive rate was significantly
higher than expected, we implemented an empirical ``boost'' to
increase the noisemap to more accurately account for the
position-dependent read noise.  By binning the number of false
positives measured on the bias frames used as noisemap inputs, using
20-pixel boxes along the cell x-axis, and comparing this to the number
expected from random Gaussian noise, we estimated the true read noise
level.

As the noisemap uses bias frames that have had a dark model
subtracted, we constructed noisemaps for each dark model used for
science processing.  There is some evidence that the noise has changed
over time as measured on full cells, so matching the noisemap to the
dark model allows these changes to be tracked.  There is no
evidence that the noisemap has the A/B modes found in the dark, so we
do not generate separate models for that time period.

The noisemap detrend is not directly applied to the science image.
Instead, it is used to construct the weight image that contains the
pixel-by-pixel variance for the \IPPstage{chip} stage image.  The
initial weight image is constructed by dividing the science image by
the cell gain (approximately 1.0 e$^{-} /$ DN).  This weight image
contains the expected Poissonian variance of the measured electrons.
The square of the noisemap is then added to this initial weight,
adding the additional empirical variance term in place of a single
read noise value.
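The variance (weight) image construction then reduces to one line; the scalar gain is a simplification of the per-cell gains:

```python
import numpy as np

def build_variance_image(science_dn, noisemap_dn, gain=1.0):
    """Per-pixel variance for the chip-stage weight image.

    Poissonian variance from the science counts (science / gain, with
    gain ~ 1 e-/DN) plus the squared empirical noisemap in place of a
    single read-noise value.
    """
    return science_dn / gain + noisemap_dn**2
```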

\subsection{Flat}
     
correction to remove the effect of the illumination differences over
the detector surface.  This is done by dithering a series of science
exposures with a given pointing, as described in
\citet{2004PASP..116..449M}.  By fully calibrating these exposures
with the initial flat model, and then comparing the measured fluxes
for the same star as a function of position on the detector, we can
     
In addition to this flat field applied to the individual images, the
ubercal process used to calibrate the database of all detections
\citep{2012ApJ...756..158S} constructs ``in catalog'' flat field
corrections.  Although a single set of image flat fields was used for
the entire PV3 survey, five separate ``seasons'' of database flat
fields were needed to ensure proper calibration.  This indicates that
the flat field response is not completely fixed in time.  More details
on this process are contained in \citet{magnier2017.calibration}.

\subsection{Pattern correction}
\label{sec:pattern}

In addition to the standard detrend corrections, we apply additional
adjustments for detector features that are not completely removed by
the dark model and are not stable enough to be corrected with a
static detrend.  The PATTERN.ROW correction is used to remove any
remaining row-by-row bias variation, and the PATTERN.CONTINUITY
correction attempts to ensure that the cells of a given OTA are
consistent with the other cells on that OTA.

\subsubsection{Pattern Row}
     
As discussed above in the dark and noisemap sections, certain
detectors have significant bias offsets between adjacent rows, caused
by drifts in the bias level due to cross talk.  The magnitude of these
offsets increases with distance from the readout amplifier and
overscan region, resulting in horizontal streaks that are more
pronounced along the large-x pixel edge of the cell.  As the
level of the offset is apparently random between exposures, the dark
correction cannot fully remove this structure from the images, and the
noisemap value only indicates the level of the average variance added
by these bias offsets.  Therefore, we apply the PATTERN.ROW correction
to mitigate the offsets and correct the image values.
To force the rows to agree, a second-order clipped polynomial is fit
to each row in the cell.  Four fit iterations are run, and pixels more
than $2.5\sigma$ deviant are excluded from subsequent fits, to
minimize the effect of stars and other astronomical signals.  This
final trend is then subtracted from that row.  Simply doing this
subtraction would also remove the background sky level.  To prevent
this, the constant and linear terms for each row are stored, and
linear fits are made to these parameters as a function of row,
perpendicular to the initial fits.  This produces a plane that is
added back to the image to restore the background offset and any
linear ramp that exists in the sky.
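The procedure for one cell can be sketched as follows; the clipping details and minimum-point guards are illustrative choices, not the IPP implementation:

```python
import numpy as np

def pattern_row_correct(cell, n_iter=4, clip=2.5):
    """Remove row-by-row bias offsets while preserving the sky plane."""
    ny, nx = cell.shape
    x = np.arange(nx)
    out = cell.astype(float).copy()
    c0 = np.empty(ny)  # per-row constant terms
    c1 = np.empty(ny)  # per-row linear terms
    for j in range(ny):
        good = np.ones(nx, dtype=bool)
        coeffs = np.polyfit(x, out[j], 2)
        for _ in range(n_iter - 1):
            resid = out[j] - np.polyval(coeffs, x)
            sigma = np.std(resid[good])
            keep = np.abs(resid) < clip * sigma
            if sigma == 0 or keep.sum() <= 3:
                break
            good = keep
            coeffs = np.polyfit(x[good], out[j][good], 2)
        out[j] -= np.polyval(coeffs, x)
        c1[j], c0[j] = coeffs[1], coeffs[2]  # polyfit: highest power first
    # Refit the stored terms versus row number and add the plane back.
    rows = np.arange(ny)
    p0 = np.polyfit(rows, c0, 1)
    p1 = np.polyfit(rows, c1, 1)
    for j in range(ny):
        out[j] += np.polyval(p0, j) + np.polyval(p1, j) * x
    return out
```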

These row-by-row variations have the largest impact on data taken in
the \gps{} filter, as the read noise is the dominant noise source in
that filter.  At longer wavelengths, the noise from the Poissonian
    810 variation in the sky level increases.  Although the PATTERN.ROW correction is still applied to data taken in the other filters,
variation in the sky level increases.  The PATTERN.ROW correction is
still applied to data taken in the other filters, because the increase
in sky noise does not fully obscure the row-by-row noise.

This correction was required on all cells on all OTAs prior to
2009-12-01, at which point a modification of the camera electronics
reduced the scale of the row-by-row offsets for the majority of the
OTAs.  \czw{describe modification} As a result, we only apply this
correction to the cells where it is still necessary, as shown in
Figure \ref{fig: pattern row cells}.  A list of these cells is given
in Table \ref{tab:pattern_row_cells}.

Although this correction largely resolves the row-by-row offset issue
in a satisfactory way, large and bright astronomical objects can bias
the fit significantly.  This results in an oversubtraction of the
offset near these objects.  As the offsets are calculated on the pixel
rows, this oversubtraction is not uniform around the object, but is
     
astronomical objects are not significantly distorted by this, with
this only becoming an issue for bright objects comparable to the
size of the cell (598 pixels = 150").  Figure \ref{fig: pattern row example}
shows an example of a cell pre- and post-correction.

    830889\begin{deluxetable}{lcccc}
     
%  \end{subfigure}
  \end{minipage}
  \caption{Example of the PATTERN.ROW correction on exposure o5379g0103o OTA57 cell xy01 (\ips{} filter 45s).  The left panel shows the cell with all appropriate detrending except PATTERN.ROW, and the right shows the same cell with PATTERN.ROW applied.  The correction reduces the correlated noise on the right side, which is most distant from the readout amplifier.  There is a slight oversubtraction along the rows near the bright star. \czw{I don't think this fits the convention I stated earlier}}
  \label{fig: pattern row example}
\end{figure}

    874934\subsubsection{Pattern Continuity}
    875935
    876 After previous attempts to ensure that adjacent cells on an OTA
    877 matched background levels were insufficient in many situations, we
    878 designed a replacement correction that would reduce the background
    879 distortion for large objects.  In addition, studies of the background
    880 level illustrated that the row-by-row bias can introduce small
    881 background gradient variations along the rows of the cells that is not
    882 stable enough to be completely fit by the dark model.  This common
    883 feature across the columns of cells results in a ``saw tooth'' pattern
    884 horizontally across an OTA, and as the background model fits a smooth
    885 sky level, this induces over and under subtraction at the cell
    886 boundaries. 
     936The background levels of cells on a single OTA do not always have the
     937same value.  Even with dark and flat corrections applied, adjacent
     938cells may not match.  In addition, studies of the background level
     939indicate that the row-by-row bias can introduce small background
     940gradient variations along the rows of the cells that are not stable.
     941This common feature across the columns of cells results in a ``saw
     942tooth'' pattern horizontally across an the mosaicked OTA, and as the
     943background model fits a smooth sky level, this induces over and under
     944subtraction at the cell boundaries.
    887945
The PATTERN.CONTINUITY correction attempts to match the edges of a
cell to those of its neighbors.  For each cell, a thin box 10 pixels
wide running the full length of each edge is extracted, and the median
of the unmasked values is calculated for that box.  These median
values are then used to construct a vector $\Delta$ of the sums of the
differences between each cell's edges and the corresponding edges of
its adjacent cells.  A matrix $A$ of these associations is also
constructed, with the diagonal containing the number of cells adjacent
to that cell, and the off-diagonal values set to $-1$ for each pair of
adjacent cells.  The offsets needed for each cell, $x$, can then be
found by solving the system $A x = \Delta$.  A cell with the maximum
number of neighbors, usually cell xy11, the first cell not on the edge
of the OTA, is used to constrain the system, ensuring that that cell
has zero correction and that there is a single solution.

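The solve above amounts to a constrained graph-Laplacian system.  A minimal NumPy sketch follows, using a hypothetical $2\times2$ grid of cells with invented offsets (the real system runs over the $8\times8$ grid of cells on an OTA, with cell xy11 as the constrained cell):

```python
import numpy as np

# Toy 2x2 grid of cells with hypothetical additive offsets
# (unknown to the solver; used only to simulate the edge medians).
true_offsets = np.array([0.0, 3.0, -2.0, 1.0])
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

n = len(true_offsets)
A = np.zeros((n, n))
delta = np.zeros(n)
for i, js in neighbors.items():
    A[i, i] = len(js)          # diagonal: number of adjacent cells
    for j in js:
        A[i, j] = -1.0         # off-diagonal: -1 per adjacent pair
        # simulated edge-median difference between cell i and neighbor j
        delta[i] += true_offsets[i] - true_offsets[j]

# The Laplacian A is singular: constrain cell 0 (standing in for
# cell xy11) to have zero correction, guaranteeing a unique solution.
A[0, :] = 0.0
A[0, 0] = 1.0
delta[0] = 0.0

x = np.linalg.solve(A, delta)
print(x)   # recovered offsets, relative to the constrained cell
```

Subtracting $x_i$ from each cell then minimizes the remaining edge discontinuities.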
For OTAs that initially show the saw tooth pattern, the effect of this
     
the absolute background level.  However, as we subtract off a smooth
background model prior to doing photometry, these deviations from an
absolute sky level do not affect photometry for small sources.  The
fact that the final ramp is smoother than it would otherwise be also
allows the background-subtracted image to more closely match the
astronomical sky, without significant errors at cell boundaries.  An
example of the effect of this correction on an image profile is shown
in Figure \ref{fig:dark switching}.


     
input images with two iterations of clipping at the $3\sigma$ level.

A coarse background model for each cell is constructed by calculating
the median on a 3x3 grid (approximately 200x200 pixels each).  A set
of 1000 points is randomly selected from the fringe image for each
cell, and a median is calculated at each position in a 10x10 pixel
box, with the background level subtracted.  These sample locations
provide scale points to allow the amplitude of the measured fringe to
be compared to that found on science images.

To apply the fringe, the same sample locations are measured on the
     
\label{sec:static_masks}

Due to the large size of the detector, it is expected that there are a
number of pixels that respond poorly.  To remove these pixels, we have
constructed a mask that identifies the known defects.  This mask is
referred to as the ``static'' mask, as it is applied to all images
processed.  The ``dynamic'' mask (Section \ref{sec:dynamic_masks}) is
calculated based on objects in the field, and so changes between
images.  Construction of the static mask consists of three phases.

First, regions in which the charge transfer efficiency (CTE) is low
compared to the rest of the detector are identified.  Twenty-five of
the sixty OTAs in GPC1 show some evidence of poor CTE, with this
pattern appearing (to varying degrees) in roughly triangular patches
on the OTA due to defects in the semiconductor manufacturing
\czw{check this fact with Peter}.  To generate the mask for these
regions, a sample set of \czw{26} evenly-illuminated flat-field images
was measured to produce a map of the image variance in 20x20 pixel
bins.  As the flat screen is expected to illuminate the image
uniformly, the variance in each bin should follow Poisson statistics,
scaling with the flux level.  However, in regions with poor CTE,
adjacent pixels are not independent, as the charge in those pixels is
more free to spread along the image columns.  This reduces the
pixel-to-pixel differences, resulting in a lower than expected
variance.  All regions with variance less than half the average image
level are added to the static mask.

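The variance test can be sketched as below.  The flat level, array size, and the way poor CTE is emulated here are illustrative inventions; only the masking criterion (binned variance below half the average image level) is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(42)
level = 10000.0
flat = rng.poisson(level, size=(100, 100)).astype(float)

# Emulate poor CTE in one corner: charge spreading along columns
# suppresses pixel-to-pixel scatter (a crude stand-in for the real
# effect, which correlates adjacent pixels).
flat[:40, :40] = level + (flat[:40, :40] - level) * 0.3

# Variance in 20x20 pixel bins (5x5 bins for this toy image)
nb = 5
var = flat.reshape(nb, 20, nb, 20).var(axis=(1, 3))

# Mask bins whose variance is below half the average image level
mask = var < 0.5 * flat.mean()
print(mask)
```

For a well-behaved Poisson-noise flat the binned variance tracks the mean level, so only the correlated (low-variance) corner falls below the threshold.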
The next step of mask construction is to examine the flat and dark
     
The final step of mask construction is to examine the detector for
bright columns and other static pixel issues.  This is first done by
processing \czw{examining residuals in flattened flat-field images?} a
set of 100 \ips{} filter science images in the same fashion as for the
DARKMASK.  A median image is constructed from these inputs along with
the per-pixel variance.  These images are used to identify
     
  \centering
  \includegraphics[width=0.9\hsize,angle=0,clip]{images/gpc1_mask_indexed.png}

  \caption{Image map of the GPC1 static mask.  The CTE regions are clearly visible as roughly triangular patches covering the corners of some OTAs.  Some entire cells are masked, including an entire column of cells on OTA14.  Calcite cells remove large areas from OTA17 and OTA76.}
  \label{fig:static mask}
\end{figure}

     
deviations due to imperfections in the burntool correction.

The remaining dynamic masks are generated in the IPP \IPPstage{camera}
stage, at which point all object photometry is complete, and an
astrometric solution is known for the exposure.  This added
information provides the positions of bright sources based on the
reference catalog, including those that fall slightly out of the
detector field of view or within the inter-chip gaps, where internal
photometry may not identify them.  These bright sources are the origin
     
Table \ref{tab:crosstalk_rules} summarizes the list of known crosstalk
rules, with an estimate of the magnitude difference between the source
and ghost.  For all of the rules, any source cell $v$ within the
specified column of cells on any of the OTAs in the specified column
of OTAs $Y$ can create a ghost in the same cell $v$ and OTA $Y$ in the
target column of cells and OTAs.  In each of these cases, a source
object with an instrumental magnitude brighter than $-14.47$ creates a
ghost object
     
\label{sec:optical_ghosts}

The anti-reflective coating on the optical surfaces of GPC1 is less
effective at shorter wavelengths, which can allow bright sources to
reflect back onto the focal plane and generate large out-of-focus
objects.  Due to the wavelength dependence, these objects are most
prominent in the \gps{} filter data.  These objects are the result of
light reflecting back off the surface of the detector, reflecting
again off the lower surfaces of the optics (particularly the L1
corrector lens), and then back down onto the focal plane.  Due to the
extra travel distance, the resulting source is out of focus and
elongated along the radial direction of the camera focal plane.  These
optical ghosts can be modeled in the focal plane coordinates $(L,M)$,
which have their origin at the center of the focal plane.  In this
system, a bright object at location $(L,M)$ on the focal plane creates
a reflection ghost on the opposite side of the optical axis near
$(-L,-M)$.  The exact location is fit as a third order polynomial in
the focal plane L and M directions (as listed in Table
\ref{tab:ghost_centers}).  An elliptical annulus mask is constructed
at the expected ghost location, with the major and minor axes defined
by linear functions of the ghost distance from the optical axis, and
oriented with the ellipse major axis along the radial direction
(Table \ref{tab:ghost_radii}).  All stars brighter than a
filter-dependent threshold (listed in Table
\ref{tab:ghost_magnitudes}) have such masks constructed.

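A sketch of such an annulus mask is below.  The image size, axis lengths, and annulus width are invented stand-ins for the fitted polynomial and linear relations; only the mirrored center and the radially-aligned major axis follow the description above:

```python
import numpy as np

def ghost_annulus_mask(shape, src_lm, a_out, b_out, width):
    """Boolean mask for an elliptical annulus at the reflection point.

    src_lm is the (L, M) position of the bright star, with the origin
    at the array center; the ghost is placed near (-L, -M).  The
    ellipse major axis is aligned with the radial direction, and the
    annulus inner edge lies `width` inside the outer ellipse.  Sizes
    are illustrative, not the fitted GPC1 values.
    """
    ny, nx = shape
    y, x = np.mgrid[:ny, :nx]
    cy, cx = ny / 2.0, nx / 2.0
    gl, gm = -src_lm[0], -src_lm[1]        # mirrored ghost center
    theta = np.arctan2(gm, gl)             # radial direction angle
    dx, dy = x - (cx + gl), y - (cy + gm)
    # rotate into the ellipse frame (major axis radial)
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    r_out = (u / a_out) ** 2 + (v / b_out) ** 2
    r_in = (u / (a_out - width)) ** 2 + (v / (b_out - width)) ** 2
    return (r_out <= 1.0) & (r_in >= 1.0)

mask = ghost_annulus_mask((200, 200), src_lm=(60.0, 0.0),
                          a_out=30.0, b_out=15.0, width=8.0)
print(mask.sum())
```

The annulus shape leaves the ghost core unmasked, matching the donut-like appearance of the out-of-focus reflections.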
\begin{deluxetable}{lcc}
     
  \includegraphics[width=0.9\hsize,angle=0,clip]{images/full_fpa_ghosts.jpg}
  \caption{Example of the full GPC1 field of view illustrating the sources and destinations of optical ghosts on exposure o5677g0123o (2011-04-26, 43s \gps{} filter).  The bright stars on OTA33 and OTA44 result in nearly circular ghosts on the opposite OTA.  In contrast, the trio of stars on OTA11 result in very elongated ghosts on OTA66.}
  \label{fig:optical ghosts}
\end{figure}

     
detector surface in a long narrow glint.  This surface was physically
masked on 2010-08-24, removing the possibility of glints in subsequent
data, but images that were taken prior to this date have an advisory
dynamic mask constructed when a reference source falls on the focal
plane within one degree of the detector edge.  This mask is 150 pixels
wide, with length $L = 2500 \left(-20 - m_{inst}\right)$ pixels.
These glint masks are constructed by selecting sufficiently bright
sources in the reference catalog that fall within rectangular regions
around each edge of the GPC1 camera.  These regions are separated from
the edge of the camera by 17 arcminutes, and extend outwards an
additional degree.

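The mask length relation can be evaluated directly.  This sketch encodes only the $L = 2500\left(-20 - m_{inst}\right)$ relation from the text; the function name is ours:

```python
def glint_mask_length(m_inst):
    """Length in pixels of the advisory glint mask for a source of
    instrumental magnitude m_inst, per L = 2500 * (-20 - m_inst).
    Candidate sources are drawn from rectangular regions starting
    17 arcminutes outside each camera edge and extending one degree.
    """
    return 2500.0 * (-20.0 - m_inst)

# Brighter sources (more negative instrumental magnitude) get longer masks.
for m in (-21.0, -22.5, -24.0):
    print(m, glint_mask_length(m))
```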
\begin{figure}
     
  \includegraphics[width=0.9\hsize,angle=0,clip]{images/glint_example_o5379g0103o.jpg}
  \caption{Example of a glint on exposure o5379g0103o (2010-07-02, 45s \ips{} filter).  The source star out of the field of view creates a long reflection that extends through OTA73 and OTA63.}
  \label{fig:optical glints}
\end{figure}

     
mask is constructed, as the source is likely too faint to produce the
feature.  These spikes are dependent on the camera rotation, and are
oriented based on the ROTANGLE header keyword at $\theta = n
\frac{\pi}{2} - \mathrm{ROTANGLE} + 0.798$, for $n = \{0,1,2,3\}$.

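The four spike orientations can be sketched as follows; taking ROTANGLE in radians here is our assumption for illustration:

```python
import numpy as np

def spike_angles(rotangle_rad):
    """Orientations of the four diffraction-spike masks, following
    theta_n = n*pi/2 - ROTANGLE + 0.798 for n = 0..3.  ROTANGLE is
    assumed to be in radians for this sketch."""
    return np.array([n * np.pi / 2 - rotangle_rad + 0.798
                     for n in range(4)])

angles = spike_angles(0.0)
print(angles)   # four angles separated by 90 degrees
```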
The cores of stars that are saturated are masked as well, with a
     
\label{sec:masking_fraction}

The GPC1 camera was designed such that, where possible, OTAs with CTE
issues were placed towards the edge of the detector.  Because of this,
the main analysis of the mask fraction is based not on the total
footprint of the detector, but upon a circular reference field of view
with a radius of 1.5 degrees.  This radius corresponds approximately
to half the width and height of the detector.  This field of view
underestimates the unvignetted region of GPC1.  A second ``maximum''
field of view is also used to estimate the mask fraction within a
larger 1.628 degree radius.  This larger radius includes far larger
missing fractions due to the circular segments outside the region
populated with OTAs, but does include the contribution from
well-illuminated pixels that are ignored by the reference radius.

The results of simulating the footprint of the detector as a grid of
uniformly sized pixels of $0\farcs{}258$ size are provided in Table
\ref{tab:mask fraction}.  Both fields of view contain circular
segments outside of the footprint of the detector, which increase the
area estimate that is unpopulated.  This category also accounts for
the inter-OTA and inter-cell gaps.  The regions with poor CTE also
contribute a significant fraction of the masked pixels.  The
remaining mask category accounts for known bad columns, cells that do
not calibrate well, and vignetting.  There are also a small fraction
     
input masks already account for the inter-cell gaps).  This estimate
does not include the circular segments outside of the detector
footprint.  This difference is minor for the reference field of view
(1\% difference), but underestimates the static mask fraction for the
maximum radius by 7.3\%.  This analysis provides the observed dynamic
and advisory mask fractions, which are 0.03\% and 3\% respectively.
The significant advisory value is a result of applying such masks to
all burntool-corrected pixels.

\begin{deluxetable}{lcc}
     
background model for the full OTA is then determined prior to the
photometric analysis.  The mosaicked image is subdivided into
$800\times{}800$ pixel segments that define each superpixel of the
background model, with the superpixels centered on the image center
and overlapping adjacent superpixels by 400 pixels.  These overlaps
help smooth the background model, as adjacent model pixels share input
pixels.

From each segment, 10000 random unmasked pixels are drawn.  In the
case where the mask fraction is large (such as on OTAs near the edge
of the field of view), and there are insufficient unmasked pixels to
meet this criterion, all available unmasked pixels are used instead.
If this number is still small (less than 100 good pixels), the
superpixel does not have a background model calculated.  Instead,
the value assigned to that model pixel is set as the average of the
adjacent model pixels.  This allows up to eight neighboring background
values to be used to patch these bad pixels.

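The neighbor-averaging patch step might look like the following sketch, with NaN marking superpixels that could not be measured (the actual IPP bookkeeping may differ):

```python
import numpy as np

def patch_bad_superpixels(model):
    """Replace NaN superpixels with the mean of their (up to eight)
    valid neighbors, as in the background-model patching described
    above.  A sketch; boundary superpixels simply have fewer
    neighbors to average."""
    out = model.copy()
    ny, nx = model.shape
    for j, i in zip(*np.where(np.isnan(model))):
        ys = slice(max(j - 1, 0), min(j + 2, ny))
        xs = slice(max(i - 1, 0), min(i + 2, nx))
        block = model[ys, xs]
        good = block[~np.isnan(block)]
        if good.size:
            out[j, i] = good.mean()
    return out

model = np.full((3, 3), 10.0)   # toy background model
model[1, 1] = np.nan            # one unmeasurable superpixel
print(patch_bad_superpixels(model)[1, 1])   # 10.0
```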
For the subdivisions that have sufficient unmasked pixels for the
background to be measured, the pixel values are used to calculate a
set of robust statistics for the initial background guess.  The
minimum and maximum of the values are found, and checked to ensure
that these are not the same value, which would indicate some problem
with the input values.  The values are then inserted into a histogram
with 1000 bins between the minimum and maximum values.  If the bin
with the most input pixels contains more than half of the input
values, the bin size is too coarse for the population of interest.  In
this case, a new histogram is constructed using a range corresponding
to the 20 bins closest to the peak, again dividing the range into 1000
bins.  This process is iterated up to 20 times until a bin size is
determined.  A cumulative distribution is then constructed from the
histogram, which saves the computational cost of sorting all the input
values.  The bins containing the 50-percentile point, as well as the
     
If this measured standard deviation is smaller than 3 times the bin
size, then all points more than 25 bins away from the calculated
median are masked, and the process is repeated with a new 1000-bin
histogram until the bin size is sufficiently small to ensure that the
distribution width is well sampled.  Once this iterative process
converges, or 20 iterations are run, the 25- and 75-percentile values
are found by interpolating the 5 bins around the expected bin as well,
and the count of the number of input values within this inner
50-percentile region, $N_{50}$, is calculated.

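Estimating percentiles from the histogram's cumulative distribution rather than a full sort can be sketched as below; the interpolation here is simplified relative to the iterative rebinning described above:

```python
import numpy as np

def hist_percentile(values, q, nbins=1000):
    """Approximate the q-percentile from a histogram CDF, avoiding a
    full sort, in the spirit of the procedure above: interpolate
    over the bins bracketing the target cumulative fraction."""
    counts, edges = np.histogram(values, bins=nbins)
    cdf = np.cumsum(counts) / counts.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.interp(q / 100.0, cdf, centers)

rng = np.random.default_rng(1)
vals = rng.normal(100.0, 5.0, size=10000)
med = hist_percentile(vals, 50.0)
iqr = hist_percentile(vals, 75.0) - hist_percentile(vals, 25.0)
print(med, iqr)
```

For a Gaussian sky this recovers the median near the true mean, and an interquartile range near $2 \times 0.6745\,\sigma$.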
These initial statistics are then used as the starting guesses for a
     
500 \right)$.  With this bin size, we expect that a bin at $\pm 2
\sigma$ will have approximately 50 input points, which gives a
Poissonian signal-to-noise estimate around 7.  In the case where
$N_{50}$ is small (due to a poorly populated input image), this bin
size is fixed to be no larger than the guess of the standard
     
Two second-order polynomial fits are then performed to the logarithm
of the histogram counts set at the midpoint of each bin.  The first
fit considers the lower portion of the distribution, under the
assumption that deviations from a normal distribution are caused by
real astrophysical sources that will be brighter than the true
     
fit.  The Gaussian mean and standard deviation are calculated from the
polynomial coefficients, and the symmetric fit results are accepted
unless the lower-half fit results in a smaller mean.  This histogram
and polynomial-fit process is repeated again, with an updated bin size
based on the previous iteration's standard deviation, if the
calculated standard deviation is not larger than 75\% of the initial
guess (suggesting an issue with the initial bin size).

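Since the logarithm of a Gaussian is a parabola, $\ln N = a x^2 + b x + c$, the coefficients give $\sigma = \sqrt{-1/(2a)}$ and $\mu = -b/(2a)$ directly.  A simplified sketch (a single symmetric fit, without the lower-half comparison described above):

```python
import numpy as np

rng = np.random.default_rng(2)
vals = rng.normal(50.0, 3.0, size=100000)   # toy "sky" sample
counts, edges = np.histogram(vals, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit a parabola to the log of the well-populated histogram bins:
# a Gaussian's logarithm is quadratic, so the coefficients give the
# mean and standard deviation directly.
good = counts > 10
a, b, c = np.polyfit(centers[good], np.log(counts[good]), 2)
sigma = np.sqrt(-1.0 / (2.0 * a))
mu = -b / (2.0 * a)
print(mu, sigma)
```

Fitting in log space down-weights nothing explicitly, so in the real procedure the lower-half fit guards against bright-source contamination of the upper tail.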
With this two-stage calculation performed across all subdivisions of
the mosaicked OTA image, and missing model pixels filled with the
average of their neighbors, the final background model is stored on
disk as a $13\times{}13$ image for the GPC1 chips, with header entries
listing the binning used.  The full-scale background image is then
constructed by bilinearly interpolating this binned model, and this is
subtracted from the science image.  Each object in the photometric
catalog has a SKY and SKY\_SIGMA value determined from the background
model mean and standard deviation.

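The bilinear expansion of the binned model can be sketched as follows, with a toy $3\times3$ model standing in for the $13\times13$ grid (the real grid spacing is recorded in the header entries):

```python
import numpy as np

def expand_background(model, full_shape):
    """Bilinearly interpolate a coarse background model up to the
    full image size, as done for the stored 13x13 per-chip model.
    Implemented as separable 1-D interpolation: rows first, then
    columns.  Grid placement here is illustrative."""
    ny, nx = full_shape
    my, mx = model.shape
    # model-grid coordinates of each full-resolution pixel
    gy = np.linspace(0, my - 1, ny)
    gx = np.linspace(0, mx - 1, nx)
    tmp = np.array([np.interp(gx, np.arange(mx), row) for row in model])
    full = np.array([np.interp(gy, np.arange(my), tmp[:, i])
                     for i in range(nx)]).T
    return full

model = np.arange(9.0).reshape(3, 3)   # toy 3x3 background model
sky = expand_background(model, (6, 6))
print(sky.shape, sky[0, 0], sky[-1, -1])
```

The expanded image is what gets subtracted from the science pixels; evaluating the model at an object's position similarly yields its SKY value.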
Although this background modeling process works well for most of the
     
of GPC1, the measured background was added back to the \IPPstage{chip}
stage images, but this special processing was not used for the
large-scale $3\pi$ PV3 reduction.

    14351506\section{GPC1 Detrend Construction}
    14361507\label{sec:detrend construction}
    14371508
The various master detrend images for GPC1 are constructed using a
common approach.  A series of appropriate exposures is selected from
the database and processed with the \IPPprog{ppImage} program.  This
program is used for the \IPPstage{chip} stage processing as well, and
is designed to perform multiple image processing operations.  The
processing steps applied to the images depend on the type of master
detrend to be constructed.  In general, the input exposures to the
detrend have all prior stages of detrend processing applied.  Table
\ref{tab:detrend ppImage} summarizes the stages applied for the
detrends we construct.

Once the input data have been prepared, the \IPPprog{ppMerge} program
is used to combine the inputs.  In some cases this is the mathematical
average, but in other cases it is a fit across the inputs.  Table
\ref{tab:detrend ppMerge} lists some of the properties of the process
for the detrends, including how discrepant values are removed and the
combination method used.  The outputs from this step have the format
of the detrend under construction.  After construction, these combined
outputs are applied to the processed input data.  This creates a set
of residual files that are checked to determine if the newly created
detrend correctly removes the detector-dependent signal.

This process of detrend construction and testing can be iterated, with
individual exposures excluded if they are found to be contaminating
the output.  The construction of detrends is largely automatic, but
manual intervention is needed to accept a detrend for use on science
data.  If the final detrend has sufficiently small residuals, then the
iterations are stopped and the detrend is finalized by selecting the
date range to which it applies.  This allows subsequent science
processing to select the detrends needed based on the observation
date.  Table \ref{tab:detrend list} lists the set of detrends used in
the PV3 processing.

\begin{deluxetable}{lcccc}
     
  \startdata
  LINEARITY & Y & & & \\
  DARK      & Y & Y & & \\
  FLAT      & Y & Y & Y & \\
  FRINGE    & Y & Y & Y & Y \\
  DARKMASK  & Y & Y & Y & \\
  FLATMASK  & Y & Y & Y & Y \\
  CTEMASK   & Y & Y & Y & Y \\
  NOISEMAP  & Y & Y & & \\
  \enddata
  \label{tab:detrend ppImage}
     
\section{Warping}
\label{sec:warping}
To provide a consistent and uniform set of coordinates for image
combination (including stacking and differences), the individual
mosaicked OTA images are projected onto a common pixel grid, called a
tessellation.  A tessellation can contain any number of tangent plane
projections: those designed for single pointing surveys use only one,
while the tessellation used for the $3\pi$ survey contains 2643
tangent plane projections.  These ``projection cells'' are
$4\times{}4$ degree fields spaced onto a set of centers that fully
cover the sky.  They are arranged into rings of constant declination,
and allowed to overlap as $|\delta|$ increases.  Each projection cell
is further subdivided into $10\times{}10$ ``skycells'' with fixed
$0.25"$ resolution pixels, and constant overlap regions between
adjacent skycells of $60"$.  These skycells are the main image unit
used for processing image data beyond the initial chip stage.  The
coordinate system used for these images matches the parity of the sky,
with north in the positive y direction and east in the negative x
direction.
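The projection geometry can be illustrated with a short sketch (not
IPP code): a gnomonic tangent-plane projection at the $0.25"$ pixel
scale, with the sky parity described above.  The function and its
interface are illustrative only:

```python
import math

PIXEL_SCALE = 0.25 / 3600.0  # degrees per 0.25" skycell pixel

def tangent_plane_xy(ra, dec, ra0, dec0):
    """Gnomonic (tangent plane) projection of (ra, dec) about a
    projection-cell center (ra0, dec0), all angles in degrees.
    Returns pixel offsets with east toward negative x and north
    toward positive y, matching the parity described in the text."""
    ra, dec, ra0, dec0 = map(math.radians, (ra, dec, ra0, dec0))
    cosc = (math.sin(dec0) * math.sin(dec)
            + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    xi = math.cos(dec) * math.sin(ra - ra0) / cosc
    eta = (math.cos(dec0) * math.sin(dec)
           - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / cosc
    # Convert radians to pixels; flip x so that east runs to negative x.
    return -math.degrees(xi) / PIXEL_SCALE, math.degrees(eta) / PIXEL_SCALE

# One arcsecond due east of the cell center lands 4 pixels in -x.
x, y = tangent_plane_xy(10.0 + 1.0 / 3600.0, 0.0, 10.0, 0.0)
```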

After the detrending and photometry, the detection catalog for the
full camera is fit to the reference catalog, producing astrometric
solutions that map the detector focal plane to the sky, and map the
individual OTA pixels to the detector focal plane
\citep[see][]{magnier2017.calibration}.  This solution is then used to
determine which skycells the exposure OTAs overlap.

For each output skycell, all overlapping OTAs and the calibrated
catalog are read into the \IPPprog{pswarp} program.  The output warp
image is broken into $128\times{}128$ pixel grid boxes.  For speed,
each grid box has a locally linear map calculated that converts the
output warp image coordinates to the input chip image coordinates.  By
doing the transformation in this direction, each output pixel has a
unique sampling position on the input image (although it may be off
the image frame and therefore not populated), preventing gaps in the
output image due to the spacing of the input pixels.

With the locally linear grid defined, Lanczos interpolation
\citep{Lanczos:1950zz} with filter size parameter $a = 3$ on the input
image is used to determine the values to assign to the output pixel
location.  This process is repeated for all grid boxes, for all input
images, and for each output image product: the science image, the
variance, and the mask.  The image values are scaled by the absolute
value of the Jacobian determinant of the transformation for each grid
box.  This corrects the pixel values for the possible change in pixel
area due to the transformation.  Similarly, the variance image is
scaled by the square of this value, again to correctly account for the
pixel area change.

The interpolation constructs the output pixels from more than one
input pixel, which introduces covariance between pixels.  For each
locally linear grid box, the covariance matrix is calculated from the
kernel in the center of the 128 pixel range.  Once the image has been
fully populated, this set of individual covariance matrices is
averaged to create the final covariance for the full image.
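A minimal sketch of the interpolation step (illustrative only, not the
\IPPprog{pswarp} implementation) for a single output sample; in the
pipeline the result is additionally scaled by the local Jacobian
determinant as described above:

```python
import numpy as np

def lanczos3(x):
    """Lanczos kernel with filter size parameter a = 3."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 3.0, np.sinc(x) * np.sinc(x / 3.0), 0.0)

def sample(image, xc, yc):
    """Lanczos-3 interpolation of `image` at fractional position (xc, yc)."""
    x0, y0 = int(np.floor(xc)), int(np.floor(yc))
    value = 0.0
    for j in range(y0 - 2, y0 + 4):        # 6x6 support of the a = 3 kernel
        for i in range(x0 - 2, x0 + 4):
            if 0 <= i < image.shape[1] and 0 <= j < image.shape[0]:
                value += image[j, i] * lanczos3(xc - i) * lanczos3(yc - j)
    return float(value)

# At an integer sample position the kernel reduces to a delta function.
img = np.arange(100.0).reshape(10, 10)
v = sample(img, 5.0, 5.0)
```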

catalog, including only those objects that fall on the new warped
image.  These detections are transformed to match the new image
location, and their position uncertainties are scaled based on the new
orientation.

The output image also contains header keywords SRC\_0000, SEC\_0000,
MPX\_0000, and MPY\_0000 that define the mappings from the warped
pixel space to the input image.  The SRC keyword lists the input OTA
name, and the SEC keyword lists the image section that the mapping
covers.  The MPX and MPY keywords contain the back-transformation
linearized across the full chip.  These parameters are stored in a
string listing the reference position in the chip coordinate frame,
the slope of the relation in the warp x axis, and the slope of the
relation in the warp y axis.  From these keywords, any position in the
warp can be mapped back to the location in any of the input OTA
images, with some reduction in accuracy.
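Since the exact keyword string format is not reproduced here, the
following sketch assumes an illustrative parameter layout (a chip-frame
reference value plus slopes along the two warp axes) to show how such a
linearized mapping would be evaluated:

```python
def warp_to_chip(xw, yw, mpx, mpy):
    """Map a warp-frame position back to input chip coordinates using
    linearized parameters in the style of the MPX/MPY keywords.  Each
    parameter tuple is assumed (for illustration) to hold the chip-frame
    reference value and the slopes along the warp x and y axes."""
    x0, dxdxw, dxdyw = mpx
    y0, dydxw, dydyw = mpy
    xc = x0 + dxdxw * xw + dxdyw * yw
    yc = y0 + dydxw * xw + dydyw * yw
    return xc, yc

# Illustrative parameters: identity slopes with a (100, 200) chip offset.
xc, yc = warp_to_chip(10.0, 20.0, (100.0, 1.0, 0.0), (200.0, 0.0, 1.0))
```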

\begin{figure}
     

Once individual exposures have been warped onto a common projection
system, they can be combined pixel-by-pixel regardless of their
original orientation.  Creating a stacked image by co-adding the
individual warps increases the signal to noise, allowing for the
detection of objects that would not be sufficiently significant to be
measured from a single image.  Creating this stack also allows a more
complete image to be constructed that has fewer regions masked due to
the gaps between cells and OTAs.  This deeper and more complete image
can also be used as a template for subtraction to find transient
sources.

The stacked image is composed of all warp frames for a given skycell
     
FWHM greater than 10 pixels (2.5 arcseconds), as those images have
seeing far worse than average and would degrade the final output
stack.  For the PV3 $3\pi$ survey, this size represents a PSF larger
than the $97$th percentile in all filters.  A target PSF for the stack
is constructed by finding the maximum envelope of all input PSFs,
     
input images when matched to the target.

The input image fluxes are normalized to prevent differences in
seeing and sky transparency from causing discrepancies during pixel
rejection.  From the input catalogs calibrated against the reference
catalog, we have the instrumental magnitudes of all sources, along
with the airmass, image exposure time, and zeropoint.  All output
stacks are constructed to a target zeropoint of 25.0 in all filters,
and to have an airmass of 1.0.  The output exposure time is set to the
sum of the input exposure times, regardless of whether those inputs
are rejected later in the combination process.  We can determine the
relative transparency for each input image by comparing the magnitudes
of matched sources between the different images.  Each image then has
a normalization factor defined, equal to $\mathrm{norm}_\mathrm{input}
= (ZP_\mathrm{input} - ZP_\mathrm{target}) -
\mathrm{transparency}_\mathrm{input} - 2.5 * \log_{10}
(t_\mathrm{target} / t_\mathrm{input}) - \mathrm{F}_\mathrm{airmass} *
(\mathrm{airmass}_\mathrm{input} - \mathrm{airmass}_\mathrm{target})$.
For the PV3 processing, the airmass factor
$\mathrm{F}_\mathrm{airmass}$ was set to zero, such that all flux
differences from differing exposure airmasses are assumed to be
included in the zeropoint and transparency values.
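The normalization factor can be sketched directly (illustrative code,
with $\mathrm{F}_\mathrm{airmass} = 0$ as in PV3):

```python
import math

def norm_factor(zp_input, zp_target, transparency, t_target, t_input,
                airmass_input=1.0, airmass_target=1.0, f_airmass=0.0):
    """Flux normalization factor (in magnitudes) for one input image,
    following the expression in the text; f_airmass = 0 for PV3."""
    return ((zp_input - zp_target) - transparency
            - 2.5 * math.log10(t_target / t_input)
            - f_airmass * (airmass_input - airmass_target))

# An input already at the target zeropoint, with full transparency and
# the target exposure time, needs no normalization.
m = norm_factor(zp_input=25.0, zp_target=25.0, transparency=0.0,
                t_target=40.0, t_input=40.0)
```

The corresponding multiplicative flux scale applied to the pixels is
$10^{-0.4\,\mathrm{norm}_\mathrm{input}}$.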

The zeropoint calibration performed here uses the calibration of the
     
the entire region of the sky imaged.  This further calibration is not
available at the time of stacking, and so there may be small residuals
in the transparency values \citep{magnier2017.calibration}.

With the flux normalization factors and target PSF chosen, the
     
the kernel, and the residual with the target PSF used to update the
parameters of the kernel via least squares optimization.  Stamps that
significantly deviate are rejected, although the squared residual
difference will increase with increasing source flux.  To mitigate
this effect, a parabola is fit to the distribution of squared
     
convolution kernel is returned.

This convolution may change the image flux scaling, so the kernel is
normalized to account for this.  The normalization factor is equal to
the ratio of $10^{-0.4 \mathrm{norm}_\mathrm{input}}$ to the sum of
the kernel.  The image is multiplied by this factor, and the variance
by
     
Once the convolution kernels are defined for each image, they are used
to convolve the images to match the target PSF.  Any input image that
has a kernel match $\chi^2$ value (defined as the sum of the RMS error
across the kernel) greater than 4.0$\sigma$ above the median value is
rejected from the stack.  Each image also has a weight assigned, based
on the image variance after convolution.  A full image weight is then
calculated for each input, with the weight $W_\mathrm{input}$ equal to
the inverse of the median of the image variance multiplied by the peak
of the image covariance (from the warping process).  This ensures that
low signal-to-noise images are down-weighted in the final combination.

Following the convolution, an initial stack is constructed.  For a
given pixel coordinate, the values at that coordinate are extracted
from all input images, with masked pixels excluded from consideration.
Images that have only a suspect mask bit set (including the SUSPECT,
BURNTOOL, SPIKE, STREAK, STARCORE, and CONV.POOR bit values) are
appended to a suspect pixel list for preferential exclusion.
Following this, the pixel values are combined and tested to attempt to
identify discrepant input values that should be excluded.

If only a single input is available, the initial stack contains the
value from that single input.  If there are only two inputs, the
average of the two is used.  These cases should occur only rarely in
the $3\pi$ survey, as there are many input exposures that overlap each
point on the sky.  For the more common case of three or more inputs, a
weighted average from the inputs is used, with the weight for each
     
there were no valid inputs, in which case the BLANK mask bit is set.

Due to uncorrected artifacts that can occur on GPC1, and the fact that
they may not be fully masked, it is expected that some of the inputs
for a given stack pixel will not be in agreement with the others.  In
general, there is a population of input pixel values around the
correct astronomical level, as well as possible populations at lower
pixel values (such as those due to an over-subtracted burntool trail)
and at higher pixel values (such as those caused by an incompletely
masked optical ghost).  Due to the observation strategy of observing a
given field twice to allow warp-warp difference images to be
constructed to identify transient detections, higher pixel values that
come from sources like optical ghosts that depend on the telescope
pointing will come in pairs.  Detector artifacts will appear in pairs
as well.  The higher pixel value contaminants are also potentially
problematic, as they may appear to be real sources, prompting
photometry to be performed on false objects.  Because of the
expectation that there are more positive deviations than negative
ones, there is a slight preference to reject higher pixel value
outliers over lower pixel values, as described below.

Following the initial combination, a ``testing'' loop iterates in an
     
and both are flagged for rejection.

If the number of input pixels is larger than 6, then a Gaussian
mixture model analysis is run on the inputs, fitting two
sub-populations to determine the likelihood that the distribution is
best described by a unimodal model.  If this probability is less than
$5\%$, then the mean is taken from the bimodal sub-population with the
largest
     
comprised of high pixel value outliers.
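The bimodality test can be illustrated with a small sketch.  The IPP's
actual likelihood computation is not reproduced here; this stand-in
fits a two-component 1-D Gaussian mixture by EM and compares one- and
two-component fits with BIC:

```python
import numpy as np

def _pdf(x, w, mu, sig):
    """Per-point, per-component weighted normal densities."""
    return (w / (np.sqrt(2 * np.pi) * sig)
            * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2))

def fit_gmm2(x, iters=200):
    """Crude EM fit of a two-component 1-D Gaussian mixture (sketch);
    returns the final log-likelihood."""
    mu = np.array([x.min(), x.max()])
    sig = np.full(2, x.std() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        p = _pdf(x, w, mu, sig)
        r = p / p.sum(axis=1, keepdims=True)   # E step: responsibilities
        n = r.sum(axis=0)                      # M step: update parameters
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-3
    return np.log(_pdf(x, w, mu, sig).sum(axis=1)).sum()

def prefers_bimodal(values):
    """BIC comparison of one- vs. two-component fits: an illustrative
    stand-in for the unimodality likelihood test in the text."""
    x = np.asarray(values, dtype=float)
    s = x.std() + 1e-6
    ll1 = (-0.5 * len(x) * np.log(2 * np.pi * s ** 2)
           - 0.5 * ((x - x.mean()) ** 2).sum() / s ** 2)
    ll2 = fit_gmm2(x)
    # 2 parameters for one Gaussian; 5 for the two-component mixture.
    return bool(5 * np.log(len(x)) - 2 * ll2 < 2 * np.log(len(x)) - 2 * ll1)

# Two clusters of pixel values (e.g. a partially masked ghost in some
# inputs) are flagged as bimodal.
ghost = prefers_bimodal([10.0, 10.2, 9.9, 10.1, 50.0, 50.2, 49.8])
```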

If the unimodal probability is greater than $5\%$ (indicating the
distribution is likely to be unimodal), or if there are insufficient
inputs for this mixture model analysis, the input values are passed to
an Olympic weighted mean calculation.  We reject $20\%$ of the number
of inputs through this process.  The number of bad inputs is set to
$N_\mathrm{bad} = 0.2 * N_\mathrm{input} + 0.5$, with the 0.5 term
ensuring at least one input is rejected.  This number is further
separated into the number of low values to exclude, $N_\mathrm{low} =
N_\mathrm{bad} / 2$, which will default to zero if there are few
inputs, and the number of high values to exclude, $N_\mathrm{high} =
N_\mathrm{bad} - N_\mathrm{low}$.  After sorting the input values to
determine which values fall into the low and high groups, the
remaining input values are used in a weighted mean using the image
weights above.
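The Olympic weighted mean step can be sketched as follows
(illustrative only; the integer truncation follows the
$N_\mathrm{bad}$ expression above):

```python
import numpy as np

def olympic_weighted_mean(values, weights):
    """Olympic weighted mean as described in the text: reject roughly
    20% of the inputs, preferentially from the high end, then take a
    weighted mean of the remainder."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n = len(values)
    n_bad = int(0.2 * n + 0.5)     # at least one input rejected
    n_low = n_bad // 2             # fewer low values rejected...
    n_high = n_bad - n_low         # ...than high values
    order = np.argsort(values)
    keep = order[n_low:n - n_high]
    return np.average(values[keep], weights=weights[keep])

# Five equal-weight inputs with one bright outlier: n_bad = 1, and the
# single rejection comes from the high end, removing the outlier.
m = olympic_weighted_mean([10.0, 10.2, 9.8, 10.0, 55.0], np.ones(5))
```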

A systematic variance term is necessary to correctly scale how
discrepant points can be from the ensemble mean.  If the mixture model
analysis has been run, the Gaussian sigma from the largest
sub-population is squared and used.  Otherwise, a $10\%$ systematic
error on the input values is used.  Each point then has a limit
calculated using a $4\sigma$ rejection

\begin{eqnarray}
     
\mathrm{mean})^2$ exceeding this limit is identified.  If there are
suspect pixels in the set, those pixels are marked for rejection;
otherwise this worst pixel is marked for rejection.  Following this
step, the combine and test loop is repeated until no more pixels are
rejected, up to a maximum number of iterations equal to $50\%$ of the
number of inputs.
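A sketch of this combine-and-test loop (illustrative only; the
$4\sigma$ limit here assumes a generic form $16\,(\sigma^2_\mathrm{pix}
+ \sigma^2_\mathrm{sys})$ standing in for the equation above):

```python
import numpy as np

def combine_and_test(values, variances, sys_var, max_frac=0.5):
    """Iteratively recompute the mean and reject the single worst
    input whose squared deviation exceeds an assumed 4-sigma limit of
    16 * (pixel variance + systematic variance).  Sketch only."""
    values = np.asarray(values, dtype=float)
    variances = np.asarray(variances, dtype=float)
    good = np.ones(len(values), dtype=bool)
    for _ in range(int(max_frac * len(values))):
        mean = values[good].mean()
        dev2 = (values - mean) ** 2
        limit = 16.0 * (variances + sys_var)
        bad = good & (dev2 > limit)
        if not bad.any():
            break
        # Reject only the most discrepant surviving input per iteration.
        worst = np.argmax(np.where(bad, dev2, -np.inf))
        good[worst] = False
    return values[good].mean(), good

# One strongly discrepant input (e.g. an unmasked ghost) is rejected.
mean, good = combine_and_test(
    [100.0, 101.0, 99.5, 100.5, 240.0], np.full(5, 4.0), sys_var=100.0)
```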
     
the entire image is rejected as it likely has some systematic issue.

Finally, a second pass at rejecting pixels is conducted, by extending
the current list to include pixels that are neighbors to many rejected
pixels.  The ISIS kernel used in the previous step is again used to
determine the largest square box that does not exceed the limit of
$0.25 * \sum_{x,y} kernel^2$.  This square box is then convolved with
the rejected pixel mask to reject the neighboring pixels.  This final
     
a map of the number of inputs per pixel.

These convolved stack products are not retained, as the convolution is
only used to ensure the pixel rejection uses seeing-matched images.
Instead, we apply the normalizations and rejected pixel maps generated
from the convolved stack process to the original unconvolved input
images.  This produces an unconvolved stack that has the optimum image
quality possible from the input images.  Not convolving does mean that
the PSF shape changes across the image, as the different PSF widths of
the input images print through in the different regions to which they
have contributed.
    18931978
    18941979%% Asinh compression
    18951980
    1896 Due to the expected large range of data values in the final stacked
    1897 image, saving them as compressed 16-bit integer images with linear
    1898 BSCALE and BZERO scaling values is likely to offer poor
    1899 reconstructions of the stacked image.  This will lead either to
    1900 truncation of the extrema of the image, or quantized values that are
    1901 poorly spaced for the image histogram.  Saving the images as 32-bit
    1902 floating point values would alleviate this quantization issue, at the
    1903 cost of a large increase in the disk space required for the stacked
    1904 images.
    1905 
    1906 Transforming the data prior to writing to disk by taking the logarithm
    1907 of the pixel values can resolve this, with the complication that all
    1908 data values must first be made positive, which then sets the highest
    1909 quantization sampling near the lowest values in the image.  Following
    1910 techniques used by SDSS \citep{2000AJ....120.1579Y}, we have instead opted to use the
    1911 inverse hyperbolic sine function to transform the data.  The domain of
    1912 this function allows any input value to be converted.  In addition,
    1913 the quantization sampling can be tuned by placing the zero of the
    1914 inverse hyperbolic sine function at a value where the highest sampling
    1915 is desired.
     1981While IPP image products from single exposures use compressed 16-bit
      1982integer images, this dynamic range is insufficient for the expected
      1983range of pixel values in the stacked images.  A linear quantization
      1984would lead either to truncation of the extrema of the image, or to quantized values that poorly sample the
     1985image noise distribution.  Saving the images as 32-bit floating point
     1986values would alleviate this quantization issue, at the cost of a large
     1987increase in the disk space required for the stacked images.
     1988
     1989Inspired by techniques used by SDSS \citep{2000AJ....120.1579Y}
     1990\czw{better citation?}, we use the inverse hyperbolic sine function to
     1991transform the data.  The domain of this function allows any input
     1992value to be converted.  In addition, the quantization sampling can be
     1993tuned by placing the zero of the inverse hyperbolic sine function at a
     1994value where the highest sampling is desired.
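As a minimal sketch of this scheme (the softening and quantization constants here are illustrative placeholders, not the values used by the IPP):

```python
import math

# Illustrative asinh compression of a pixel value to an integer code.
# SOFTENING sets where the transform crosses from linear to logarithmic
# behavior, i.e. where the quantization sampling is highest; STEP is the
# quantization step in transformed units.  Both are placeholder values.
SOFTENING = 10.0
STEP = 0.05

def asinh_compress(pixel):
    """Map any real pixel value (negative values included) to an integer code."""
    return round(math.asinh(pixel / SOFTENING) / STEP)

def asinh_decompress(code):
    """Invert the transform to recover an approximate pixel value."""
    return SOFTENING * math.sinh(code * STEP)
```

Because asinh is linear near zero and logarithmic for large arguments, the relative quantization error stays roughly constant for bright pixels, while faint pixels near the chosen zero point are sampled most finely.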
    19161995
    19171996Formally, prior to being written to disk, the pixel values are
     
    20202099\label{sec:diffs}
    20212100
    2022 Constructing difference images is essentially the same as that used in
    2023 the stacking process.  An image is chosen as a template, another image
    2024 as the input, and after matching sources to determine the scaling and
    2025 transparency, convolution kernels are defined that are used to
    2026 convolve one or both of the images to a target PSF.  The images are
    2027 then subtracted, and as they should now share a common PSF, static
    2028 sources are largely subtracted (completely in an ideal case), whereas
    2029 sources that are not static between the two images leave a significant
    2030 remnant.  More information on the difference image construction is
    2031 contained in \citet{price2017}.  The follow section contains a
    2032 overview of the difference image construction used for the data in
    2033 DR2.
     2101The image matching process used in constructing difference images is
      2102essentially the same as in the stacking process.  An image is chosen as a
     2103template, another image as the input, and after matching sources to
     2104determine the scaling and transparency, convolution kernels are
     2105defined that are used to convolve one or both of the images to a
     2106target PSF.  The images are then subtracted, and as they should now
     2107share a common PSF, static sources are largely subtracted (completely
     2108in an ideal case), whereas sources that are not static between the two
     2109images leave a significant remnant.  More information on the
     2110difference image construction is contained in \citet{price2017}.  The
     2111following section contains an overview of the difference image
     2112construction used for the data in DR2.
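In schematic form (notation ours, not drawn from the IPP papers), the single-convolution case produces the difference image

\begin{equation}
D = (I \otimes K) - s\,T,
\end{equation}

where $I$ is the input image, $T$ the template, $K$ the PSF-matching kernel, and $s$ the photometric scaling determined from the matched sources.  When the PSF matching succeeds, $D \approx 0$ for static sources, leaving only variable, moving, or transient flux.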
    20342113
    20352114The images used to construct difference images can be either
     
    20522131
    20532132For warp-warp differences, such as those used for the ongoing Solar
    2054 System moving object search in nightly observations \citep{2013PASP..125..357D}, the
    2055 warp that was taken first is used as the template.  As there is less
    2056 certainty in which of the two input images will have better seeing, a
    2057 ``dual'' convolution method is used.  Both inputs are convolved to a
    2058 target PSF that is not identical to either input.  This intermediate
    2059 target is essential for the case in which the PSFs of the two inputs
    2060 have been distorted in orthogonal directions.  Simply convolving one
    2061 to match the other would require some degree of deconvolution along
    2062 one axis.  As this convolution method by necessity uses more free
    2063 parameters, the ISIS kernels used are chosen to be simpler than those
    2064 used in the warp-stack differences.  The ISIS widths are kept the same
    2065 (1.5, 3.0, 6.0 pixel FWHMs), but each Gaussian kernel is constrained
    2066 to only use a second-order polynomial.  As with the warp-stack
    2067 differences, the mask fraction grows between the input warp and the
    2068 final difference image due to the convolution.  For the warp-warp
    2069 differences, each image mask grows based on the appropriate
    2070 convolution kernel, so the final usable image area is highly dependent
    2071 on ensuring that the telescope pointings are as close to identical as
    2072 possible.  The observing strategy to enable this is discussed in more
    2073 detail in \citet{chambers2017}.
    2074 
    2075 
    2076 
    2077 \section{Discussion}
     2133System moving object search in nightly observations
     2134\citep{2013PASP..125..357D}, the warp that was taken first is used as
     2135the template.  As there is less certainty in which of the two input
     2136images will have better seeing, a ``dual'' convolution method is used.
     2137Both inputs are convolved to a target PSF that is not identical to
     2138either input.  This intermediate target is essential for the case in
     2139which the PSFs of the two inputs have been distorted in orthogonal
     2140directions.  Simply convolving one to match the other would require
     2141some degree of deconvolution along one axis.  As this convolution
     2142method by necessity uses more free parameters, the ISIS kernels used
     2143are chosen to be simpler than those used in the warp-stack
     2144differences.  The ISIS widths are kept the same (1.5, 3.0, 6.0 pixel
     2145FWHMs), but each Gaussian kernel is constrained to only use a
     2146second-order polynomial.  As with the warp-stack differences, the mask
     2147fraction grows between the input warp and the final difference image
     2148due to the convolution.  For the warp-warp differences, each image
     2149mask grows based on the appropriate convolution kernel, so the final
     2150usable image area is highly dependent on ensuring that the telescope
     2151pointings are as close to identical as possible.  The observing
     2152strategy to enable this is discussed in more detail in
     2153\citet{chambers2017}.
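For the idealized case of elliptical Gaussian PSFs (a simplification of the actual ISIS kernels; notation ours), matching an input of width $\sigma_i$ to a target of width $\sigma_t$ along a given axis requires a Gaussian kernel with

\begin{equation}
\sigma_K^2 = \sigma_t^2 - \sigma_i^2,
\end{equation}

which has a real solution only if $\sigma_t \ge \sigma_i$ along that axis.  An intermediate target with $\sigma_{t,x} \ge \max(\sigma_{1,x}, \sigma_{2,x})$ and $\sigma_{t,y} \ge \max(\sigma_{1,y}, \sigma_{2,y})$ therefore always admits pure convolutions, whereas convolving one input directly to the other's orthogonally elongated PSF would require deconvolution along one axis.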
     2154
     2155
     2156
     2157\section{Future Plans}
    20782158\label{sec:discussion}
    20792159
     
    21502230
    21512231The Pan-STARRS1 PV3 processing has reduced an unprecedented volume of
    2152 image data, and has produced a catalog for the $3\Pi$ Survey
     2232image data, and has produced a catalog for the $3\pi$ Survey
    21532233containing hundreds of billions of individual measurements of
    21542234three billion astronomical objects.  Accurately calibrating