Changeset 40423
- Timestamp: May 14, 2018, 6:20:09 PM
- File: trunk/doc/release.2015/ps1.detrend/detrend.tex (modified) (66 diffs)
r40081 → r40423

\begin{abstract}

The Pan-STARRS1 Science Consortium has carried out a set of imaging
surveys using the 1.4 gigapixel GPC1 camera on the PS1 telescope. As
this camera is composed of many individual electronic readouts, and
covers a very large field of view, great care was taken to ensure that
…
the data reduction techniques and the resulting data products. This
paper (Paper III) describes the details of the pixel processing
algorithms, including detrending, warping, adding (to create stacked
images), and subtracting (to create difference images), along with the
resulting image products and their properties.
\citet[][Paper I]{chambers2017} provides an overview of the Pan-STARRS
System, the
…
%Magnier et al. 2017 (Paper II)
%Pan-STARRS Data Processing Stages
\citet[][Paper II]{magnier2017.datasystem}
describes how the various data processing stages are organized and
implemented
…
%Magnier et al. 2017 (Paper IV)
%Pan-STARRS Pixel Analysis : Source Detection
\citet[][Paper IV]{magnier2017.analysis}
describes the details of the source detection and photometry, including
point-spread-function and extended source fitting models, and the
…
%Magnier et al. 2017 (Paper V)
%Pan-STARRS Photometric and Astrometric Calibration
\citet[][Paper V]{magnier2017.calibration}
describes the final calibration process, and the resulting photometric and
astrometric quality.
…

The Pan-STARRS 1 Science Survey uses the 1.4 gigapixel GPC1 camera
with the PS1 telescope on Haleakala, Maui, to image the sky north of
$-30^\circ$ declination. The GPC1 camera is composed of 60 orthogonal
transfer array (OTA) devices, each of which is an $8\times{}8$ grid of
readout cells. The large number of cells parallelizes the readout
process, reducing the overhead in each exposure. However, as a
consequence, many calibration operations are needed to ensure the
response is consistent across the entire seven square degree field of
view.

DR1 contains the results of the third full reduction of the Pan-STARRS
archival data. The first two reductions were used internally for
pipeline optimization and the development of the initial photometric
and astrometric reference catalog \citep{magnier2017.calibration}.
The products from these reductions were not publicly released, but
have been used to produce a wide range of scientific papers from the
Pan-STARRS 1 Science Consortium members.
The Pan-STARRS image processing pipeline (IPP) is described elsewhere
\citep{magnier2017.datasystem}, but a short summary follows. The raw
image data is stored on the processing cluster, with a database
storing the metadata of exposure parameters. These raw images can be
launched for the initial \IPPstage{chip} stage processing. This stage
performs the image detrending (described below in section
\ref{sec:detrending}), as well as the single epoch photometry
\citep{magnier2017.analysis}, in parallel on the individual OTA device
data. Following the \IPPstage{chip} stage is the \IPPstage{camera}
stage, in which the astrometry and photometry for the entire exposure
are calibrated by matching the detections against a reference catalog.
This stage also performs masking updates based on the now-known
positions and brightnesses of stars that create dynamic features (see
…
\IPPstage{chip} stage images onto common sky oriented images that have
fixed sky projections (Section \ref{sec:warping}).
When all
\IPPstage{warp} stage processing is done for a region of the sky,
\IPPstage{stack} processing is performed (Section \ref{sec:stacking})
to construct deeper, fully populated images from the set of
\IPPstage{warp} images that cover that region of the sky. Transient
features are identified in the \IPPstage{diff} stage, which takes
input \IPPstage{warp} and/or \IPPstage{stack} data and performs image
differencing (Section \ref{sec:diffs}). Further photometry is
performed in the \IPPstage{staticsky} and \IPPstage{skycal} stages,
which add extended source fitting to the point source photometry of
objects detected in the \IPPstage{stack} images, and again calibrate
the results against a reference catalog. The \IPPstage{fullforce}
stage takes the catalog output of the \IPPstage{skycal} stage, and
uses the objects detected in that catalog to perform forced photometry
on the individual \IPPstage{warp} stage images. The details of these
stages are provided in \citet{magnier2017.analysis}.

A limited version of the same reduction procedure described above is
also performed in real time on new exposures as they are observed by
the telescope. This process is automatic, with new exposures being
downloaded from the summit to the main IPP processing cluster at the
Maui Research and Technology Center in Kihei, and registered into the
processing database. New \IPPstage{chip} stage reductions are
launched for science exposures, advancing processing upon completion
through to the \IPPstage{diff} stage, skipping the additional stack
and forced warp photometry stages. This automatic processing allows
the ongoing solar system moving object search to identify candidates
for follow-up observations within 24 hours of the initial set of
observations \citep{2015IAUGA..2251124W}.

Section \ref{sec:detrending} provides an overview of the detrending
process that corrects the instrumental signatures of GPC1, with
details of the construction of the reference detrend templates in
Section \ref{sec:detrend construction}.
An analysis of the algorithms
used to perform the \IPPstage{warp} (section \ref{sec:warping}),
\IPPstage{stack} (section \ref{sec:stacking}), and \IPPstage{diff}
(section \ref{sec:diffs}) stage transformations of the image data
follows after the list of detrend steps. Finally, a discussion of the
remaining issues and possible future improvements is presented in
section \ref{sec:discussion}.

Image products presented in figures have been mosaicked to arrange
pixels as follows.
Single cell images are arranged such that pixel
$(1,1)$ is at the lower right corner (for example Figure
\ref{fig:burntool images}). This corrects the parity difference
between the raw data and the sky. Images mosaicked to show a full OTA
detector are arranged as they are on the focal plane (as in Figure
\ref{fig:dark image}). The OTAs to the left of the midplane
(OTA4Y-OTA7Y) are oriented with cell xy00 and pixel $(590,1)$ to the
lower right of their position. Due to the electronic connections of
the OTAs in the focal plane, the OTAs to the right of the midplane
(OTA0Y-OTA3Y) are rotated 180 degrees, and are oriented with cell xy00
and pixel $(590,1)$ to the top left of their position. For mosaics of
the full field of view, the OTAs are arranged as they see the sky,
with the cells arranged as in the single OTA images (Figure
\ref{fig:optical ghosts}). The lower left corner is the empty
location where OTA70 would exist. Toward the right, the OTA labels
decrease in $X$ label, with the empty OTA00 located in the lower
right. The OTA $Y$ labels increase upward in the mosaic.
\czw{This is somewhat of a mess?}

\textit{Note: These papers are being placed on the arXiv.org to
…
level, dark frame subtraction to remove temperature and exposure time
dependent detector glows, and flat field correction to remove pixel to
pixel response functions.
We also perform fringe correction for the
reddest data in the \yps{} filter to remove the interference patterns
that arise in that filter due to the variations in the thickness of
the detector surface.

These corrections assume that the detector response is linear across
the full range of values. This assumption is not universally true for
GPC1, and an additional set of detrending steps is required to remove
these artifacts. The first of these is the \IPPprog{burntool}
correction, which removes the flux trails left by the incomplete
transfer of charge along the readout columns. These trails are
generally only evident for the brightest stars, as only pixels that
are at or beyond the saturation point of the detector leave residual
charge. More widespread is the non-linearity at the faint end of the
pixel range.
Some readout cells and some readout cell edge pixels
experience a sag relative to linear at low illumination, such that
faint pixels appear fainter than expected. The correction to this
requires amplifying the pixel values in these regions to match the
expected model.

Large regions of some OTA cells experience significant charge transfer
issues, making them unusable for science observations. These regions
are therefore masked in processing, with these CTE regions making up
the largest fraction of masked pixels on the detector. Other regions
are masked for reasons such as static bad pixel features or temporary
readout masking caused by issues in the camera electronics that make
these regions unreliable. These all contribute to the detector mask,
a 16-bit value in which each bit added records a reason the pixel is
masked. This mask is augmented in each exposure for dynamic features
that are masked based on the astronomical features within the field of
view.

For the PV3 processing, all detrending is done by the
\IPPprog{ppImage} program.
This program applies the detrend
corrections to the individual cells, and then an OTA-level mosaic is
constructed for the signal image, the mask image, and the variance map
image. The single epoch photometry is done at this stage as well.
The following subsections (\ref{sec:burntool} - \ref{sec:background})
detail these detrending steps, presented in the order in which they
are applied to the individual OTA image data. \czw{I haven't
rearranged into regular and ``special'' yet.}

\subsection{Burntool / Persistence effect}
…

Pixels that approach the saturation point on GPC1, which varies by
cell with common values around 60000 DN, introduce ``persistent
charge'' on that and subsequent images. During the read out process
of a cell with such a bright pixel, some of the charge remains in the
undepleted region of the silicon and is not shifted down the detector
column toward the amplifier. This charge remains in the starting
pixel and slowly leaks out of the undepleted region, contaminating
subsequent pixels during the read out process, resulting in a ``burn
trail'' that extends from the center of the bright source away from
the amplifier (vertically along the pixel columns toward the top of
the cell).
This incomplete charge shifting in nearly full wells continues as each
…
image towards the amplifier (vertically down along the pixel column).
This remnant charge can remain on the detector for up to thirty
minutes.

Both of these types of persistence trails are measured and optionally
repaired via the \IPPprog{burntool} program. This program does an
initial scan of the image, and identifies objects with pixel values
higher than a conservative threshold of 30000 DN. The trail from the
peak of that object is fit with a one-dimensional power law in each
pixel column above the threshold, based on empirical evidence that
this is the functional form of this persistence effect. This fit also
matches the expectation that a constant fraction of charge is
incompletely transferred at each shift beyond the persistence
threshold.
Once the fit is done, the model can be subtracted from
the image. The location of the source is stored in a table along
with the exposure PONTIME, which denotes the number of seconds since
the detector was last powered on and provides an internally
consistent time scale.

For subsequent exposures, the table associated with the previous image
is read in, and after correcting trails from the stars on the new
image, the positions of the bright stars from the table are used to
check for remnant trails from previous exposures on the image. These
are fit and subtracted using a one-dimensional exponential model,
again based on empirical studies. The output table retains this
remnant position for 2000 seconds after the initially recorded
PONTIME, which allows fits to be attempted well beyond the nominal
lifetime of these trails. Figure \ref{fig:burntool images} shows an
example of a cell with a persistence trail from a bright star, the
post-correction result, as well as the pre- and post-correction
versions of the same cell on the subsequent exposure. The profiles
along the detector columns for these two exposures are presented in
Figure \ref{fig:burntool plot}.
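As a concrete illustration of the two trail models described above, the sketch below fits a power law to an initial burn trail, an exponential to a remnant trail, and prunes the burn table by PONTIME age. This is a simplified reconstruction, not the \IPPprog{burntool} code: the function names, the noiseless log-space fits, and the table layout are all assumptions here.

```python
import numpy as np

RETENTION_S = 2000.0  # seconds a burn location is kept past its recorded PONTIME

def fit_power_law_trail(column, peak_row):
    """Initial burn: fit trail(d) = A * d**(-alpha) to the pixels above the
    saturated peak, where d is the distance in rows from the peak."""
    trail = column[peak_row + 1:]
    d = np.arange(1, len(trail) + 1)
    good = trail > 0                          # log-space fit needs positive pixels
    slope, intercept = np.polyfit(np.log(d[good]), np.log(trail[good]), 1)
    return np.exp(intercept), -slope          # amplitude A, power-law index alpha

def fit_exponential_trail(column, start_row):
    """Remnant burn on a later exposure: fit trail(d) = A * exp(-d / tau)."""
    trail = column[start_row:]
    d = np.arange(len(trail))
    good = trail > 0
    slope, intercept = np.polyfit(d[good], np.log(trail[good]), 1)
    return np.exp(intercept), -1.0 / slope    # amplitude A, decay scale tau

def live_burns(table, pontime_now):
    """Keep table entries whose recorded PONTIME is within the retention window."""
    return [e for e in table if pontime_now - e["pontime"] < RETENTION_S]
```

In both fits the model is subtracted from the trail pixels once the parameters are recovered; a burn whose remnant fit is no longer significant simply ages out of the table.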
A challenge with this method of correcting the persistence trails is
that it is based on fits to the raw image data, which may have other
signal sources not determined by the persistence effect. The presence
of other stars or artifacts in the detector column can result in a
poor model fit, causing either an over- or under-subtraction of the
trail. For this reason, the image mask is marked with a value
indicating that this correction has been applied. These pixels are
not fully excluded, but they are marked as suspect, which allows them
to be excluded from consideration in subsequent stages, such as image
stacking.

The cores of very bright stars can also be deformed by this process,
as the burntool fitting subtracts flux from only one side of the star.
As most stars that result in persistence trails already have saturated
cores, they are already ignored for the purpose of PSF determination
and are flagged as saturated by the photometry reduction.

\begin{figure}
…
\end{minipage}
\caption{Example of OTA11 cell xy50 on exposures o5677g0123o (left) and o5677g0124o (right). The top panels show the image with all appropriate detrending steps, but without burntool, and the bottom show the same with burntool applied. There is some slight over-subtraction in fitting the initial trail, but the impact of the trail is greatly reduced in both exposures.}
\label{fig:burntool images}
\end{figure}


\begin{figure}
\centering
\begin{minipage}{0.45\hsize}
\includegraphics[width=0.9\hsize,angle=0,clip]{images/o5677g0123o_XY11_bt_trail.png}
\end{minipage}%
\begin{minipage}{0.45\hsize}
\includegraphics[width=0.9\hsize,angle=0,clip]{images/o5677g0124o_XY11_bt_trail.png}
\end{minipage}

\caption{Example of a profile cut along the y-axis through a bright star on exposure o5677g0123o OTA11 in cell xy50 (left panel) and on the subsequent exposure o5677g0124o (right panel). In both figures, the green points show the image corrected with all appropriate detrending steps, but without burntool applied, illustrating the amplitude of the persistence trails. The red points show the same data after the burntool correction, which reduces the impact of these features. Both exposures are in the \gps{} filter with exposure times of 43s.}
\label{fig:burntool plot}
\end{figure}

…

Each cell on GPC1 has an overscan region that covers the first 34
columns of each row, and the last 10 rows of each column. No light
lands on these pixels, so the science region is trimmed to exclude
them. Each row has an overscan value subtracted, calculated by
finding the median value of that row's overscan pixels and then
smoothing between rows with a three-row boxcar median. \czw{something
\czw{something 474 about this sounding like real pixels?} 453 475 454 476 \subsection{Non-linearity Correction} … … 461 483 evidence of this effect. 462 484 463 To correct this sag, we studied the fluxbehavior of a series of flat485 To correct this sag, we studied the behavior of a series of flat 464 486 frames for a ramp of exposure times with approximate logarithmically 465 487 equal spacing between 0.01s and 57.04s. As the exposure time 466 increases, the fluxon each pixel also increases in what is expected467 to be a linear manner. Each of the seflat exposures in this ramp is488 increases, the signal on each pixel also increases in what is expected 489 to be a linear manner. Each of the flat exposures in this ramp is 468 490 overscan corrected, and then the median is calculated for each cell, 469 491 as well as for the rows and columns within ten pixels of the edge of 470 492 the science region. From these median values at each exposure time 471 value, we can construct the expected trend by fitting a linear model, 472 $f_{region} = G * t_{exp} + B$, to determine the gain, $G$, and the 473 bias, $B$, for the region considered. This fitting was limited to only 474 the range of fluxes between 12000 and 38000 counts, as these ranges 475 were found to match the linear model well. This range avoids the 476 non-linearity at low fluxes, as well as the possibility of high-flux 477 non-linearity effects. 493 value, we can construct the expected trend by fitting a linear model 494 for the region considered. This fitting was limited to only the range 495 of fluxes between 12000 and 38000 counts, as these ranges were found 496 to match the linear model well. This range avoids the non-linearity 497 at low fluxes, as well as the possibility of high-flux non-linearity 498 effects. 
We store the average flux measurement and deviation from the linear
fit for each exposure time for all regions on all detector cells in
the linearity detrend lookup tables. An example of this data is
shown in Figure \ref{fig: nonlinearity}. When this correction is
applied to science data, these lookup tables are loaded, and a linear
interpolation is performed to determine the correction needed for the
flux in that pixel. This lookup is performed for both the row and
column of each pixel, to allow the edge correction to be applied where
applicable, and the full cell correction elsewhere. The average of
these two values is then applied to the pixel value, reducing the
effects of pixel nonlinearity.

This non-linearity effect appears to be stable in time for the
…
\includegraphics[width=0.9\hsize,angle=0,clip]{images/linearity_XY27_xy16.png}
\caption{Example plot of the linearity correction as a fraction of observed flux for OTA27, cell xy16.}
\label{fig: nonlinearity}
\end{figure}

…
\label{sec:dark}
The model fits each pixel as a function of the exposure time
$t_{exp}$ and the detector temperature $T_{chip}$ of the input images
such that $\mathrm{dark} = a_0 + a_1 t_{exp} + a_2 T_{chip} t_{exp} +
a_3 T_{chip}^2 t_{exp}$.  This fitting uses two iterations to produce
a clipped fit, rejecting at the $3\sigma$ level.  The final
coefficients $a_i$ for the dark model are stored in the detrend
image.  The constant $a_0$ term includes the residual bias signal
after overscan subtraction, and as such, a separate bias subtraction
is not necessary.

Applying the dark model is simply a matter of calculating the
response to the exposure time and detector temperature for the image
to be corrected, and subtracting the resulting dark signal from the
image.  Figure \ref{fig:dark image} shows the results of the dark
subtraction.
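The clipped per-pixel fit can be sketched as below.  This is an illustrative reimplementation of the model form only; the array layout and rejection bookkeeping are assumptions, not the IPP code:

```python
import numpy as np

def fit_dark_model(t_exp, T_chip, pixel_values, n_iter=2, clip=3.0):
    """Clipped least-squares fit of dark = a0 + a1*t + a2*T*t + a3*T^2*t
    for one pixel across a stack of dark frames."""
    t = np.asarray(t_exp, dtype=float)
    T = np.asarray(T_chip, dtype=float)
    y = np.asarray(pixel_values, dtype=float)
    # Design matrix for the four model terms.
    A = np.column_stack([np.ones_like(t), t, T * t, T**2 * t])
    keep = np.ones(y.size, dtype=bool)
    coeffs = np.zeros(4)
    for _ in range(n_iter):
        coeffs, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        resid = y - A @ coeffs
        sigma = np.std(resid[keep])
        if sigma <= 0:
            break
        # Reject strongly deviant frames before the next iteration.
        keep = np.abs(resid) < clip * sigma
    return coeffs  # a0..a3, as stored in the detrend image
```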
\subsubsection{Time evolution}
…
significant drift over the course of multiple months.  Some of the
changes in the dark can be attributed to changes in the voltage
settings of the GPC1 controller electronics, but the causes of others
are unknown.  We can separate the dark model history of GPC1 into
three epochs.  The first epoch covers all data taken prior to
2010-01-23.  This epoch used a different header keyword for the
detector temperature, making data from this epoch incompatible with
later dark models.  In addition, the temperatures recorded in this
keyword were not fully calibrated, making the resulting dark model
less reliable.

The second epoch covers data between 2010-01-23 and 2011-05-01, and
is characterized by a largely stable but oscillatory dark solution.
The dark model switches between two modes apparently at random.  No
clear cause has been established for the switching, but there are
clear differences between the two modes that require the
…
After 2011-05-01, the two-mode behavior of the dark disappears, and
is replaced with a slow observation-date-dependent drift in the
magnitude of the gradient.  This drift is sufficiently slow that we
have modeled it by generating models for different date ranges.
These darks cover the ranges from 2011-05-01 to 2011-08-01,
2011-08-01 to 2011-11-01, and 2011-11-01 onward.  The reason for this
time evolution is unknown, but as it is correctable with a small
number of dark models, this does not significantly impact detrending.

\begin{figure}
…
% \end{subfigure}
\end{minipage}
\caption{An example of the dark model application to exposure o5677g0123o, OTA23 (2011-04-26, 43s \gps{} filter).  The left panel shows the image data mosaicked to the OTA level, and has had the static mask applied, the overscan subtracted, and the detector non-linearity corrected.  The right panel shows the same exposure with the dark applied in addition to the processing shown on the left, removing the amplifier glows in the cell corners.}
\label{fig:dark image}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.9\hsize,angle=0,clip]{images/B_profile_ex.png}
\caption{Example showing a profile cut across exposure o5676g0195, OTA67 (2011-04-25, 43s \gps{} filter).  The entire first row of cells (xy00-xy07) has had a median calculated along each pixel column on the OTA mosaicked image.
Arbitrary offsets have been applied to shift the curves so that they do not overlap.  The top curve (in purple) shows the initial raw profile, with no dark model applied.  The next curve (in green) shows the smoother profile after applying the appropriate B-mode dark model.  Applying the A-mode dark instead results in the blue curve, which shows a significant increase in gradients across the cells.  The orange curve shows the result of the PATTERN.CONTINUITY correction.  Although this creates a larger gradient across the mosaicked images, it decreases the cell-to-cell level changes.  The final yellow curve shows the final image profile after all detrending and background subtraction, and has not had an offset applied.
The bright source at the cell xy00 to xy01 transition is a result of a large optical ghost, which, due to the area it covers, increases the median level more than the field stars.}
\label{fig:dark switching}
\end{figure}

…
\label{sec:video_darks}

Individual cells on GPC1 can be repeatedly read to create a video
signal used for telescope guiding.  However, when a cell is used for
this purpose, the dark signal for the entire OTA is changed.  The
most noticeable feature of this change is that the glows in cell
corners caused by the read-out amplifiers are suppressed.  As a
result, using the standard dark model on the data for these OTAs
results in oversubtraction of the corner glow.
To generate a correction for this change, a set of video dark models
was created by running the standard dark construction process on a
series of dark frames that have had the video signal enabled for some
cells.  GPC1 can only run video signals on a subset of the OTAs at a
given time, requiring two passes to enable the video signal across
the full set of OTAs that support video cells.  This is convenient
for the process of creating darks, as the OTAs that do not have video
signals enabled produce standard dark models, while the video dark is
created for those that do.

This simultaneous construction of video and standard dark models is
useful, as it provides a way to isolate the response on the standard
dark from the video signals.  If the standard and video dark signals
are separable, then archival video darks can be constructed by
applying the video dark response to the previously constructed dark
models.
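Under this separability assumption, the archival combination is simple pixelwise arithmetic on the detrend images; a sketch, with illustrative array names:

```python
import numpy as np

def archival_video_dark(dark_old, dark_modern, video_dark_modern):
    """Combine dark models assuming the video response separates linearly:
    VD_old = D_old - D_modern + VD_modern, applied pixel by pixel."""
    return (np.asarray(dark_old, dtype=float)
            - np.asarray(dark_modern, dtype=float)
            + np.asarray(video_dark_modern, dtype=float))
```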
Raw video dark frame data only exists after 2012-05-16, when this
problem was initially identified, so any data prior to that cannot be
directly corrected for the video dark signal.  Testing the
separability shows that constructing a video dark for older data
simply as $VD_{Old} = D_{Old} - D_{Modern} + VD_{Modern}$ produces a
satisfactory result that does not oversubtract the amplifier glow.
This is shown in Figure \ref{fig:video_darks}, which shows video
cells from before 2012-05-16, corrected with both the standard and
video darks, with the early video dark constructed in such a manner.

\begin{figure}
…
with the noise generally higher away from the read out amplifier
(higher cell x pixel positions).  This is likely an effect of the
row-by-row bias issue discussed below.  As a result of this increased
noise, more sources are detected in the higher noise regions when the
read noise is assumed constant across the readout.

To mitigate this noise gradient, we constructed an initial set of
noisemap images by measuring the median variance on bias frames
processed as science images.  The variance is calculated in boxes of
20x20 pixels, and then linearly interpolated to cover the full image.

Unfortunately, due to correlations within this noise, the variance
…
contaminating the final object catalogs.
In the detection process, we expect false positives at a low rate,
given that all sources are required to be significant at the
$5\sigma$ level.  Since the observed false positive rate was
significantly higher than expected, we implemented an empirical
``boost'' to increase the noisemap to more accurately account for the
position-dependent read noise.  By binning the number of false
positives measured on the bias frames used as noisemap inputs in 20
pixel boxes along the cell x-axis, and comparing this to the number
expected from random Gaussian noise, we estimated the true read noise
level.

As the noisemap uses bias frames that have had a dark model
subtracted, we constructed noisemaps for each dark model used for
science processing.  There is some evidence that the noise has
changed over time as measured on full cells, so matching the noisemap
to the dark model allows these changes to be tracked.  There is no
evidence that the noisemap has the A/B modes found in the dark, so we
do not generate separate models for that time period.

The noisemap detrend is not directly applied to the science image.
Instead, it is used to construct the weight image that contains the
pixel-by-pixel variance for the \IPPstage{chip} stage image.  The
initial weight image is constructed by dividing the science image by
the cell gain (approximately 1.0 e$^{-} /$ DN).  This weight image
contains the expected Poissonian variance in electrons measured.  The
square of the noisemap is then added to this initial weight, adding
the additional empirical variance term in place of a single read
noise value.
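The weight-image construction just described can be sketched as follows; a simplified illustration in which masking and per-cell gain variations are omitted:

```python
import numpy as np

def chip_weight_image(science, noisemap, gain=1.0):
    """Variance image sketch: Poissonian term from the science counts
    divided by the gain (~1.0 e-/DN), plus the squared empirical
    read-noise map in place of a single read noise value."""
    science = np.asarray(science, dtype=float)
    poisson_var = np.clip(science, 0.0, None) / gain  # no negative variance
    return poisson_var + np.asarray(noisemap, dtype=float) ** 2
```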
\subsection{Flat}
…
correction to remove the effect of the illumination differences over
the detector surface.  This is done by dithering a series of science
exposures with a given pointing, as described in
\citet{2004PASP..116..449M}.  By fully calibrating these exposures
with the initial flat model, and then comparing the measured fluxes
for the same star as a function of position on the detector, we can
…
In addition to this flat field applied to the individual images, the
ubercal process used to calibrate the database of all detections
\citep{2012ApJ...756..158S} constructs ``in catalog'' flat field
corrections.  Although a single set of image flat fields was used for
the entire PV3 survey, five separate ``seasons'' of database flat
fields were needed to ensure proper calibration.  This indicates that
the flat field response is not completely fixed in time.  More
details on this process are contained in
\citet{magnier2017.calibration}.
\subsection{Pattern correction}
\label{sec:pattern}

In addition to the standard detrend corrections, we apply ``pattern''
corrections to some of the OTAs in the camera to reduce detector
features that are not stable enough to be corrected with a static
model.  The PATTERN.ROW correction removes any remaining row-by-row
bias variation, and the PATTERN.CONTINUITY correction attempts to
ensure that the cells of a given OTA are consistent with the other
cells on that OTA.
\subsubsection{Pattern Row}
…
As discussed above in the dark and noisemap sections, certain
detectors have significant bias offsets between adjacent rows, caused
by drifts in the bias level due to cross talk.  The magnitude of
these offsets increases as the distance from the readout amplifier
and overscan region increases, resulting in horizontal streaks that
are more pronounced along the large x pixel edge of the cell.
As the level of the offset is apparently random between exposures,
the dark correction cannot fully remove this structure from the
images, and the noisemap value only indicates the level of the
average variance added by these bias offsets.  Therefore, we apply
the PATTERN.ROW correction in an attempt to mitigate the offsets and
correct the image values.  To force the rows to agree, a second-order
clipped polynomial is fit to each row in the cell.  Four fit
iterations are run, and pixels deviant at the $2.5\sigma$ level are
excluded from subsequent fits, to minimize the effect of stars and
other astronomical signals.  This final trend is then subtracted from
that row.  Doing this subtraction alone would also remove the
background sky level.  To prevent this, the constant and linear terms
for each row are stored, and linear fits are made to these parameters
as a function of row, perpendicular to the initial fits.  This
produces a plane that is added back to the image to restore the
background offset and any linear ramp that exists in the sky.

These row-by-row variations have the largest impact on data taken in
the \gps{} filter, as the read noise is the dominant noise source in
that filter.  At longer wavelengths, the noise from the Poissonian
variation in the sky level increases.  The PATTERN.ROW correction is
still applied to data taken in the other filters, because the
increase in sky noise does not fully obscure the row-by-row noise.

This correction was required on all cells on all OTAs prior to
2009-12-01, at which point a modification of the camera electronics
reduced the scale of the row-by-row offsets for the majority of the
OTAs.
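The row-by-row fit-and-restore procedure can be sketched as follows.  This is an illustrative reimplementation, not the IPP code; masked-pixel handling is omitted:

```python
import numpy as np

def pattern_row_correct(cell, n_iter=4, clip=2.5):
    """Fit a clipped second-order polynomial to each pixel row, subtract
    it, then restore the sky by fitting the per-row constant and linear
    terms as functions of row number and adding the plane back."""
    cell = np.asarray(cell, dtype=float)
    ny, nx = cell.shape
    x = np.arange(nx, dtype=float)
    coeffs = np.zeros((ny, 3))
    corrected = np.empty_like(cell)
    for j in range(ny):
        row = cell[j]
        keep = np.ones(nx, dtype=bool)
        for _ in range(n_iter):
            c = np.polyfit(x[keep], row[keep], 2)
            resid = row - np.polyval(c, x)
            sigma = np.std(resid[keep])
            new_keep = np.abs(resid) < clip * sigma
            if sigma <= 0 or new_keep.sum() < 3:
                break
            keep = new_keep
        coeffs[j] = c                       # [quadratic, linear, constant]
        corrected[j] = row - np.polyval(c, x)
    # Linear fits to the constant and linear terms vs. row number,
    # perpendicular to the initial fits, give the plane to restore.
    y = np.arange(ny, dtype=float)
    const_fit = np.polyfit(y, coeffs[:, 2], 1)
    lin_fit = np.polyfit(y, coeffs[:, 1], 1)
    plane = (np.polyval(const_fit, y)[:, None]
             + np.polyval(lin_fit, y)[:, None] * x[None, :])
    return corrected + plane
```

The restored plane keeps the background offset and any linear sky ramp while suppressing the random row-to-row offsets.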
\czw{describe modification}  As a result, we only apply this
correction to the cells where it is still necessary, as shown in
Figure \ref{fig: pattern row cells}.  A list of these cells is given
in Table \ref{tab:pattern_row_cells}.

Although this correction largely resolves the row-by-row offset issue
in a satisfactory way, large and bright astronomical objects can bias
the fit significantly.  This results in an oversubtraction of the
offset near these objects.  As the offsets are calculated on the
pixel rows, this oversubtraction is not uniform around the object,
but is
…
astronomical objects are not significantly distorted by this, with
this becoming an issue only for bright objects comparable to the size
of the cell (598 pixels = 150").  Figure \ref{fig: pattern row
example} shows an example of a cell pre- and post-correction.

\begin{deluxetable}{lcccc}
…
% \end{subfigure}
\end{minipage}
\caption{Example of the PATTERN.ROW correction on exposure o5379g0103o OTA57 cell xy01 (\ips{} filter 45s).  The left panel shows the cell with all appropriate detrending except PATTERN.ROW, and the right shows the same cell with PATTERN.ROW applied.  The correction reduces the correlated noise on the right side, which is most distant from the read out amplifier.  There is a slight oversubtraction along the rows near the bright star.  \czw{I don't think this fits the convention I stated earlier}}
\label{fig: pattern row example}
\end{figure}

\subsubsection{Pattern Continuity}

The background levels of cells on a single OTA do not always have the
same value.  Even with dark and flat corrections applied, adjacent
cells may not match.  In addition, studies of the background level
indicate that the row-by-row bias can introduce small background
gradient variations along the rows of the cells that are not stable.
This common feature across the columns of cells results in a ``saw
tooth'' pattern horizontally across the mosaicked OTA, and as the
background model fits a smooth sky level, this induces over- and
under-subtraction at the cell boundaries.

The PATTERN.CONTINUITY correction attempts to match the edges of a
cell to those of its neighbors.  For each cell, a thin box 10 pixels
wide running the full length of each edge is extracted, and the
median of the unmasked values is calculated for that box.  These
median values are then used to construct a vector $\Delta$ of the
sums of the differences between each cell's edges and the
corresponding edges of its adjacent cells.  A matrix $A$ of these
associations is also constructed, with the diagonal containing the
number of cells adjacent to that cell, and the off-diagonal values
set to -1 for each pair of adjacent cells.  The offsets $x$ needed
for each cell can then be found by solving the system $A x = \Delta$.
A cell with the maximum number of neighbors, usually cell xy11, the
first cell not on the edge of the OTA, is used to constrain the
system, ensuring that that cell has zero correction and that there is
a single solution.

For OTAs that initially show the saw tooth pattern, the effect of this
…
the absolute background level.
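The neighbor-matching solve amounts to a small pinned linear system; a sketch, where the per-cell edge-difference sums and the adjacency pairs are assumed inputs:

```python
import numpy as np

def continuity_offsets(edge_diff_sums, adjacency_pairs, n_cells, ref_cell=0):
    """Solve A x = Delta for per-cell offsets.
    edge_diff_sums[i]: sum over adjacent cells j of (edge_i - edge_j).
    adjacency_pairs: (i, j) index pairs of adjacent cells.
    ref_cell: cell pinned to zero offset so the solution is unique
    (the text uses a maximally connected cell such as xy11)."""
    A = np.zeros((n_cells, n_cells))
    for i, j in adjacency_pairs:
        # Diagonal counts neighbors; off-diagonals are -1 per adjacency.
        A[i, i] += 1.0
        A[j, j] += 1.0
        A[i, j] -= 1.0
        A[j, i] -= 1.0
    delta = np.array(edge_diff_sums, dtype=float)
    # Pin the reference cell: replace its equation with x[ref_cell] = 0.
    A[ref_cell, :] = 0.0
    A[ref_cell, ref_cell] = 1.0
    delta[ref_cell] = 0.0
    return np.linalg.solve(A, delta)
```

Removing the recovered offsets from each cell minimizes the mismatches between all adjacent cell edges.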
However, as we subtract off a smooth background model prior to doing photometry, these deviations from an absolute sky level do not affect photometry for small sources. The fact that the final ramp is smoother than it would be otherwise also allows the background-subtracted image to more closely match the astronomical sky, without significant errors at cell boundaries. An example of the effect of this correction on an image profile is shown in Figure \ref{fig:dark switching}.

…

input images with two iterations of clipping at the $3\sigma$ level.

A coarse background model for each cell is constructed by calculating the median on a 3x3 grid (approximately 200x200 pixels each). A set of 1000 points are randomly selected from the fringe image for each cell, and a median calculated for this position in a 10x10 pixel box, with the background level subtracted. These sample locations provide scale points to allow the amplitude of the measured fringe to be compared to that found on science images.
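One simple way to realize the scale-point comparison is a robust ratio of the background-subtracted medians measured at the same sample locations on the science image and on the master fringe. This sketch is illustrative; the actual IPP fit may differ:

```python
import numpy as np

def fringe_scale(science_samples, fringe_samples):
    """Estimate the multiplicative scale between the fringe signal in a
    science image and the master fringe, from background-subtracted
    medians measured at matching sample locations."""
    s = np.asarray(science_samples, dtype=float)
    f = np.asarray(fringe_samples, dtype=float)
    good = f != 0
    # median of per-point ratios is robust to a few contaminated samples
    return float(np.median(s[good] / f[good]))
```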
To apply the fringe, the same sample locations are measured on the …

\label{sec:static_masks}

Due to the large size of the detector, it is expected that there are a number of pixels that respond poorly. To remove these pixels, we have constructed a mask that identifies the known defects. This mask is referred to as the ``static'' mask, as it is applied to all images processed. The ``dynamic'' mask (Section \ref{sec:dynamic_masks}) is calculated based on objects in the field, and so changes between images.
Construction of the static mask consists of three phases.

First, regions in which the charge transfer efficiency (CTE) is low compared to the rest of the detector are identified. Twenty-five of the sixty OTAs in GPC1 show some evidence of poor CTE, with this pattern appearing (to varying degrees) in roughly triangular patches on the OTA due to defects in the semiconductor manufacturing \czw{check this fact with Peter}. To generate the mask for these regions, a sample set of \czw{26} evenly-illuminated flat-field images was measured to produce a map of the image variance in 20x20 pixel bins. As the flat screen is expected to illuminate the image uniformly, the variances in each bin should be Poissonian distributed with the flux level. However, in regions with poor CTE, adjacent pixels are not independent, as the charge in those pixels is more free to spread along the image columns. This reduces the pixel-to-pixel differences, resulting in a lower than expected variance. All regions with variance less than half the average image level are added to the static mask.

The next step of mask construction is to examine the flat and dark …

The final step of mask construction is to examine the detector for bright columns and other static pixel issues. This is first done by processing \czw{examining residuals in flattened flat-field images?} a set of 100 \ips{} filter science images in the same fashion as for the DARKMASK. A median image is constructed from these inputs along with the per-pixel variance. These images are used to identify …

\centering
\includegraphics[width=0.9\hsize,angle=0,clip]{images/gpc1_mask_indexed.png}

\caption{Image map of the GPC1 static mask.
The CTE regions are clearly visible as roughly triangular patches covering the corners of some OTAs. Some entire cells are masked, including an entire column of cells on OTA14. Calcite cells remove large areas from OTA17 and OTA76.}
\label{fig:static mask}
\end{figure}

…

deviations due to imperfections in the burntool correction.

The remaining dynamic masks are generated in the IPP \IPPstage{camera} stage, at which point all object photometry is complete, and an astrometric solution is known for the exposure. This added information provides the positions of bright sources based on the reference catalog, including those that fall slightly out of the detector field of view or within the inter-chip gaps, where internal photometry may not identify them. These bright sources are the origin …

Table \ref{tab:crosstalk_rules} summarizes the list of known crosstalk rules, with an estimate of the magnitude difference between the source and ghost. For all of the rules, any source cell $v$ within the specified column of cells on any of the OTAs in the specified column of OTAs $Y$ can create a ghost in the same cell $v$ and OTA $Y$ in the target column of cells and OTAs.
In each of these cases, a source object with an instrumental magnitude brighter than -14.47 creates a ghost object …

\label{sec:optical_ghosts}

The anti-reflective coating on the optical surfaces of GPC1 is less effective at shorter wavelengths, which can allow bright sources to reflect back onto the focal plane and generate large out-of-focus objects. Due to the wavelength dependence, these objects are most prominent in the \gps{} filter data. These objects are the result of light reflecting back off the surface of the detector, reflecting again off the lower surfaces of the optics (particularly the L1 corrector lens), and then back down onto the focal plane. Due to the extra travel distance, the resulting source is out of focus and elongated along the radial direction of the camera focal plane. These optical ghosts can be modeled in the focal plane coordinates (L,M), which have their origin at the center of the focal plane. In this system, a bright object at location (L,M) on the focal plane creates a reflection ghost on the opposite side of the optical axis near (-L,-M). The exact location is fit as a third-order polynomial in the focal plane L and M directions (as listed in Table \ref{tab:ghost_centers}).
An elliptical annulus mask is constructed at the expected ghost location, with the major and minor axes defined by linear functions of the ghost distance from the optical axis, and oriented with the ellipse major axis along the radial direction (Table \ref{tab:ghost_radii}). All stars brighter than a filter-dependent threshold (listed in Table \ref{tab:ghost_magnitudes}) have such masks constructed.

\begin{deluxetable}{lcc}
…

\includegraphics[width=0.9\hsize,angle=0,clip]{images/full_fpa_ghosts.jpg}
\caption{Example of the full GPC1 field of view illustrating the sources and destinations of optical ghosts on exposure o5677g0123o (2011-04-26, 43s \gps{} filter). The bright stars on OTA33 and OTA44 result in nearly circular ghosts on the opposite OTA. In contrast, the trio of stars on OTA11 result in very elongated ghosts on OTA66.}
\label{fig:optical ghosts}
\end{figure}

…

detector surface in a long narrow glint. This surface was physically masked on 2010-08-24, removing the possibility of glints in subsequent data, but images that were taken prior to this date have an advisory dynamic mask constructed when a reference source falls on the focal plane within one degree of the detector edge.
This mask is 150 pixels wide, with length $L = 2500 \left(-20 - m_{inst}\right)$ pixels. These glint masks are constructed by selecting sufficiently bright sources in the reference catalog that fall within rectangular regions around each edge of the GPC1 camera. These regions are separated from the edge of the camera by 17 arcminutes, and extend outwards an additional degree.

\begin{figure}
…
\includegraphics[width=0.9\hsize,angle=0,clip]{images/glint_example_o5379g0103o.jpg}
\caption{Example of a glint on exposure o5379g0103o (2010-07-02, 45s \ips{} filter). The source star out of the field of view creates a long reflection that extends through OTA73 and OTA63.}
\label{fig:optical glints}
\end{figure}

…

mask is constructed, as the source is likely too faint to produce the feature. These spikes are dependent on the camera rotation, and are oriented based on the header keyword at $\theta = n \frac{\pi}{2} - \mathrm{ROTANGLE} + 0.798$, for $n = \{0,1,2,3\}$.
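The glint-mask length and the diffraction-spike orientations above are simple closed forms, sketched below. Treating ROTANGLE and the 0.798 offset as being in the same angular units is my reading, not something stated explicitly:

```python
import math

def glint_mask_length(m_inst):
    """Length in pixels of the 150-pixel-wide glint mask,
    L = 2500 * (-20 - m_inst), for instrumental magnitude m_inst."""
    return 2500.0 * (-20.0 - m_inst)

def spike_angles(rotangle):
    """The four diffraction-spike orientations,
    theta = n*pi/2 - ROTANGLE + 0.798 for n = 0..3,
    with ROTANGLE assumed consistent in units with the 0.798 offset."""
    return [n * math.pi / 2.0 - rotangle + 0.798 for n in range(4)]
```

For example, a source two magnitudes brighter than the $-20$ reference gets a 5000-pixel-long mask.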
The cores of stars that are saturated are masked as well, with a …

\label{sec:masking_fraction}

The GPC1 camera was designed such that, where possible, OTAs with CTE issues were placed towards the edge of the detector. Because of this, the main analysis of the mask fraction is based not on the total footprint of the detector, but upon a circular reference field of view with a radius of 1.5 degrees.
This radius corresponds approximately to half the width and height of the detector. This field of view underestimates the unvignetted region of GPC1. A second ``maximum'' field of view is also used to estimate the mask fraction within a larger 1.628 degree radius. This larger radius includes far larger missing fractions due to the circular regions outside the region populated with OTAs, but does include the contribution from well-illuminated pixels that are ignored by the reference radius.

The results of simulating the footprint of the detector as a grid of uniformly sized pixels of $0\farcs{}258$ size are provided in Table \ref{tab:mask fraction}. Both fields of view contain circular segments outside of the footprint of the detector, which increase the area estimate that is unpopulated. This category also accounts for the inter-OTA and inter-cell gaps. The regions with poor CTE also contribute a significant fraction of the masked pixels. The remaining mask category accounts for known bad columns, cells that do not calibrate well, and vignetting. There are also a small fraction …

input masks already account for the inter-cell gaps). This estimate does not include the circular segments outside of the detector footprint. This difference is minor for the reference field of view (1\% difference), but underestimates the static mask fraction for the maximum radius by 7.3\%.
This analysis provides the observed dynamic and advisory mask fractions, which are 0.03\% and 3\% respectively. The significant advisory value is a result of applying such masks to all burntool-corrected pixels.

\begin{deluxetable}{lcc}
…

background model for the full OTA is then determined prior to the photometric analysis. The mosaicked image is subdivided into $800\times{}800$ pixel segments that define each superpixel of the background model, with the superpixels centered on the image center and overlapping adjacent superpixels by 400 pixels. These overlaps help smooth the background model, as adjacent model pixels share input pixels.

From each segment, 10000 random unmasked pixels are drawn. In the case where the mask fraction is large (such as on OTAs near the edge of the field of view), and there are insufficient unmasked pixels to meet this criterion, all possible unmasked pixels are used instead. If this number is still small (less than 100 good pixels), the superpixel does not have a background model calculated. Instead, the value assigned to that model pixel is set as the average of the adjacent model pixels. This allows up to eight neighboring background values to be used to patch these bad pixels.
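The neighbor-average patching of unmeasured superpixels can be sketched as follows, assuming numpy, with NaN marking superpixels that lack a measurement; the names are illustrative:

```python
import numpy as np

def patch_missing(model):
    """Replace NaN superpixels in the background model with the average
    of their (up to eight) valid neighbors."""
    patched = model.copy()
    ny, nx = model.shape
    for iy, ix in zip(*np.nonzero(np.isnan(model))):
        vals = [model[jy, jx]
                for jy in range(max(0, iy - 1), min(ny, iy + 2))
                for jx in range(max(0, ix - 1), min(nx, ix + 2))
                if (jy, jx) != (iy, ix) and not np.isnan(model[jy, jx])]
        if vals:
            patched[iy, ix] = np.mean(vals)
    return patched
```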
For the subdivisions that have sufficient unmasked pixels for the background to be measured, the pixel values are used to calculate a set of robust statistics for the initial background guess. The minimum and maximum of the values are found, and checked to ensure that these are not the same value, which would indicate some problem with the input values. The values are then inserted into a histogram with 1000 bins between the minimum and maximum values. If the bin with the most input pixels contains more than half of the input values, the bin size is too coarse for the population of interest. In this case, a new histogram is constructed using a range corresponding to the 20 bins closest to the peak, again dividing the range into 1000 bins. This process is iterated up to 20 times until a bin size is determined. A cumulative distribution is then constructed from the histogram, which saves the computational cost of sorting all the input values.
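The iterative re-binning can be sketched as below (numpy); this is a simplified illustration of the scheme just described, not the IPP code:

```python
import numpy as np

def adaptive_histogram(values, nbins=1000, max_iter=20):
    """Re-bin until no single bin holds more than half the input values,
    narrowing the range to the 20 bins around the peak each iteration."""
    values = np.asarray(values, dtype=float)
    lo, hi = float(values.min()), float(values.max())
    for _ in range(max_iter):
        counts, edges = np.histogram(values, bins=nbins, range=(lo, hi))
        peak = int(np.argmax(counts))
        if counts[peak] <= 0.5 * len(values):
            return counts, edges
        # too coarse: zoom to the 20 bins closest to the peak
        lo = edges[max(0, peak - 10)]
        hi = edges[min(nbins, peak + 10)]
    return counts, edges
```

A distant outlier that would otherwise force a huge range, and hence one overloaded bin, is progressively excluded as the range narrows.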
The bins containing the 50-percentile point, as well as the …

If this measured standard deviation is smaller than 3 times the bin size, then all points more than 25 bins away from the calculated median are masked, and the process is repeated with a new 1000-bin histogram until the bin size is sufficiently small to ensure that the distribution width is well sampled. Once this iterative process converges, or 20 iterations are run, the 25- and 75-percentile values are found by interpolating the 5 bins around the expected bin as well, and the count of the number of input values within this inner 50-percentile region, $N_{50}$, is calculated.

These initial statistics are then used as the starting guesses for a …

500 \right)$. With this bin size, we expect that a bin at $\pm 2 \sigma$ will have approximately 50 input points, which gives a Poissonian signal-to-noise estimate around 7. In the case where $N_{50}$ is small (due to a poorly populated input image), this bin size is fixed to be no larger than the guess of the standard …

Two second-order polynomial fits are then performed to the logarithm of the histogram counts set at the midpoint of each bin.
The first fit considers the lower portion of the distribution, under the assumption that deviations from a normal distribution are caused by real astrophysical sources that will be brighter than the true …

fit. The Gaussian mean and standard deviation are calculated from the polynomial coefficients, and the symmetric fit results are accepted unless the lower-half fit results in a smaller mean. This histogram and polynomial fit process is repeated again, with an updated bin size based on the previous iteration's standard deviation, if the calculated standard deviation is not larger than 75\% of the initial guess (suggesting an issue with the initial bin size).

With this two-stage calculation performed across all subdivisions of the mosaicked OTA image, and missing model pixels filled with the average of their neighbors, the final background model is stored on disk as a $13\times{}13$ image for the GPC1 chips, with header entries listing the binning used. The full-scale background image is then constructed by bilinearly interpolating this binned model, and this is subtracted from the science image.
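Reading the Gaussian parameters off a quadratic fit to the log-histogram follows from $\log N = -(x-\mu)^2/(2\sigma^2) + \mathrm{const}$, so that for fitted coefficients $\log N = c_2 x^2 + c_1 x + c_0$ we have $\mu = -c_1/(2 c_2)$ and $\sigma = \sqrt{-1/(2 c_2)}$. A minimal sketch (numpy; illustrative only, without the asymmetric lower-half variant):

```python
import numpy as np

def gaussian_from_log_fit(bin_centers, counts):
    """Fit a quadratic to log(counts) and read off the Gaussian mean and
    sigma from the polynomial coefficients."""
    good = counts > 0                      # log() requires positive counts
    c2, c1, c0 = np.polyfit(bin_centers[good], np.log(counts[good]), 2)
    mu = -c1 / (2.0 * c2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))    # c2 < 0 for a Gaussian peak
    return mu, sigma
```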
Each object in the photometric catalog has a SKY and SKY\_SIGMA value determined from the background model mean and standard deviation.

Although this background modeling process works well for most of the …

of GPC1, the measured background was added back to the \IPPstage{chip} stage images, but this special processing was not used for the large scale $3\pi$ PV3 reduction.

\section{GPC1 Detrend Construction}
\label{sec:detrend construction}

The various master detrend images for GPC1 are constructed using a common approach. A series of appropriate exposures is selected from the database, and processed with the \IPPprog{ppImage} program. This program is used for the \IPPstage{chip} stage processing as well, and is designed to do multiple image processing operations. The processing steps applied to the images depend on the type of master detrend to be constructed. In general, the input exposures to the detrend have all prior stages of detrend processing applied. Table \ref{tab:detrend ppImage} summarizes the stages applied for the detrends we construct.

Once the input data has been prepared, the \IPPprog{ppMerge} program is used to combine the inputs. In some cases, this is the mathematical average, but in other cases it is a fit across the inputs. Table \ref{tab:detrend ppMerge} lists some of the properties of the process for the detrends, including how discrepant values are removed and the combination method used. The outputs from this step have the format of the detrend under construction. After construction, these combined outputs are applied to the processed input data. This creates a set of residual files that are checked to determine if the newly created detrend correctly removes the detector-dependent signal.

This process of detrend construction and testing can be iterated, with individual exposures excluded if they are found to be contaminating the output.
The construction of detrends is largely automatic, but manual intervention is needed to accept the detrend for use on science data. If the final detrend has sufficiently small residuals, then the iterations are stopped and the detrend is finalized by selecting the date range to which it applies. This allows subsequent science processing to select the detrends needed based on the observation date. Table \ref{tab:detrend list} lists the set of detrends used in the PV3 processing.

\begin{deluxetable}{lcccc}
…
\startdata
LINEARITY & Y & & & \\
%% DARKMASK & Y & Y & Y & \\
%% FLATMASK & Y & Y & Y & Y \\
%% CTEMASK & Y & Y & Y & Y \\
DARK & Y & Y & & \\
%% NOISEMAP & Y & Y & & \\
FLAT & Y & Y & Y & \\
FRINGE & Y & Y & Y & Y \\
DARKMASK & Y & Y & Y & \\
FLATMASK & Y & Y & Y & Y \\
CTEMASK & Y & Y & Y & Y \\
NOISEMAP & Y & Y & & \\
\enddata
\label{tab:detrend ppImage}

…

\section{Warping}
\label{sec:warping}
To provide a consistent and uniform set of coordinates for image combination (including stacking and differencing), the individual mosaicked OTA images are projected onto common pixel grids, called tessellations. A tessellation can contain any number of tangent plane projections: those designed for single-pointing surveys use only one, while the tessellation used for the $3\pi$ survey contains 2643 tangent plane projections. These ``projection cells'' are $4\times{}4$ degree fields spaced onto a set of centers that fully cover the sky. They are arranged into rings of constant declination, and allowed to overlap as $|\delta|$ increases. Each projection cell is further subdivided into $10\times{}10$ ``skycells'' with fixed $0.25"$ resolution pixels, and constant overlap regions between adjacent skycells of $60"$.
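For orientation, the stated numbers imply the approximate skycell dimensions computed below. Treating the $60"$ overlap as applying on each side of a skycell is my reading, so the pixel count is illustrative rather than a quoted value:

```python
# Implied skycell dimensions from the stated tessellation parameters.
cell_deg = 4.0      # projection cell spans 4x4 degrees
n_sub = 10          # divided into 10x10 skycells
pix_scale = 0.25    # arcsec per pixel
overlap = 60.0      # arcsec of overlap with each neighboring skycell

skycell_arcsec = cell_deg * 3600.0 / n_sub          # unique area per skycell
skycell_pix = (skycell_arcsec + 2 * overlap) / pix_scale
```

This gives 1440 arcsec of unique coverage per skycell, or roughly 6240 pixels on a side once the overlaps are included.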
These skycells are the main image unit used for processing image data beyond the initial chip stage. The coordinate system used for these images matches the parity of the sky, with north in the positive y direction and east in the negative x direction.

After the detrending and photometry, the detection catalog for the full camera is fit to the reference catalog, producing astrometric solutions that map the detector focal plane to the sky, and map the individual OTA pixels to the detector focal plane \citep[see][]{magnier2017.calibration}.
After the detrending and photometry, the detection catalog for the
full camera is fit to the reference catalog, producing astrometric
solutions that map the detector focal plane to the sky, and map the
individual OTA pixels to the detector focal plane
\citep[see][]{magnier2017.calibration}. This solution is then used to
determine which skycells the exposure OTAs overlap.

For each output skycell, all overlapping OTAs and the calibrated
catalog are read into the \IPPprog{pswarp} program. The output warp
image is broken into $128\times{}128$ pixel grid boxes. For speed,
each grid box has a locally linear map calculated that converts the
output warp image coordinates to the input chip image coordinates. By
doing the transformation in this direction, each output pixel has a
unique sampling position on the input image (although it may be off
the image frame and therefore not populated), preventing gaps in the
output image due to the spacing of the input pixels.
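To illustrate this output-to-input sampling scheme, the sketch below
linearizes a toy warp-to-chip transform over one grid box, computes
the Jacobian used to rescale fluxes, and applies a one-dimensional
Lanczos $a=3$ resampling. The `warp_to_chip` function and its
coefficients are hypothetical stand-ins for the fitted astrometric
solution; this is not the IPP implementation.

```python
import numpy as np

def warp_to_chip(x, y):
    """Toy affine stand-in for the fitted warp->chip transform."""
    return 1.0005 * x + 0.02 * y + 3.0, -0.015 * x + 0.9995 * y - 7.0

def local_affine(x0, y0, box=128):
    """Linearize warp->chip over one grid box via finite differences
    at the box center, returning the map and its |Jacobian|."""
    cx, cy = x0 + box / 2.0, y0 + box / 2.0
    u0, v0 = warp_to_chip(cx, cy)
    ux, vx = warp_to_chip(cx + 1.0, cy)       # one-pixel step in x
    uy, vy = warp_to_chip(cx, cy + 1.0)       # one-pixel step in y
    dudx, dvdx = ux - u0, vx - v0
    dudy, dvdy = uy - u0, vy - v0

    def apply(x, y):
        return (u0 + dudx * (x - cx) + dudy * (y - cy),
                v0 + dvdx * (x - cx) + dvdy * (y - cy))

    # |Jacobian| rescales pixel values for the change in pixel area;
    # the variance is rescaled by its square.
    jac = abs(dudx * dvdy - dudy * dvdx)
    return apply, jac

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_interp1d(samples, x, a=3):
    """Resample a 1-D signal at fractional position x using 2a taps."""
    i0 = int(np.floor(x))
    taps = np.arange(i0 - a + 1, i0 + a + 1)
    w = lanczos(x - taps, a=a)
    taps = np.clip(taps, 0, len(samples) - 1)   # crude edge handling
    return float(np.sum(np.asarray(samples)[taps] * w))

apply_map, jac = local_affine(0, 0)
u, v = apply_map(10.0, 20.0)    # chip position sampled for warp pixel (10, 20)
val = lanczos_interp1d(np.arange(20.0), 7.5)
```

Because the mapping runs from output to input, every warp pixel gets
exactly one (possibly off-frame) sampling position, which is what
prevents gaps in the warped image.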
With the locally linear grid defined, Lanczos interpolation
\citep{Lanczos:1950zz} with filter size parameter $a = 3$ on the input
image is used to determine the values to assign to the output pixel
location. This process is repeated for all grid boxes, for all input
images, and for each output image product: the science image, the
variance, and the mask. The image values are scaled by the absolute
value of the Jacobian determinant of the transformation for each grid
box. This corrects the pixel values for the possible change in pixel
area due to the transformation. Similarly, the variance image is
scaled by the square of this value, again to correctly account for the
pixel area change.

The interpolation constructs the output pixels from more than one
input pixel, which introduces covariance between pixels. For each
locally linear grid box, the covariance matrix is calculated from the
kernel in the center of the 128 pixel range. Once the image has been
fully populated, this set of individual covariance matrices is
averaged to create the final covariance for the full image.

…
catalog, including only those objects that fall on the new warped
image.
These detections are transformed to match the new image location, and
to scale the position uncertainties based on the new orientation.

The output image also contains header keywords SRC\_0000, SEC\_0000,
MPX\_0000, and MPY\_0000 that define the mappings from the warped
pixel space to the input image. The SRC keyword lists the input OTA
name, and the SEC keyword lists the image section that the mapping
covers. The MPX and MPY keywords contain the back-transformation
linearized across the full chip. These parameters are stored in a
string listing the reference position in the chip coordinate frame,
the slope of the relation in the warp x axis, and the slope of the
relation in the warp y axis. From these keywords, any position in the
warp can be mapped back to the location in any of the input OTA
images, with some reduction in accuracy.

\begin{figure}
…

Once individual exposures have been warped onto a common projection
system, they can be combined pixel-by-pixel regardless of their
original orientation.
Creating a stacked image by co-adding the individual warps increases
the signal to noise, allowing for the detection of objects that would
not be sufficiently significant to be measured from a single image.
Creating this stack also allows a more complete image to be
constructed that has fewer regions masked due to the gaps between
cells and OTAs. This deeper and more complete image can also be used
as a template for subtraction to find transient sources.

The stacked image is comprised of all warp frames for a given skycell
…
FWHM greater than 10 pixels (2.5 arcseconds), as those images have
seeing far worse than average and would degrade the final output
stack. For the PV3 $3\pi$ survey, this size represents a PSF larger
than the $97$th percentile in all filters. A target PSF for the stack
is constructed by finding the maximum envelope of all input PSFs,
…
input images when matched to the target.
The input image fluxes are normalized to prevent differences in seeing
and sky transparency from causing discrepancies during pixel
rejection. From the reference catalog calibrated input catalogs, we
have the instrumental magnitudes of all sources, along with the
airmass, image exposure time, and zeropoint. All output stacks are
constructed to a target zeropoint of 25.0 in all filters, and to have
an airmass of 1.0. The output exposure time is set to the sum of the
input exposure times, regardless of whether those inputs are rejected
later in the combination process. We can determine the relative
transparency for each input image by comparing the magnitudes of
matched sources between the different images. Each image then has a
normalization factor defined, equal to $\mathrm{norm}_\mathrm{input} =
(ZP_\mathrm{input} - ZP_\mathrm{target}) -
\mathrm{transparency}_\mathrm{input} - 2.5 * \log_{10}
(t_\mathrm{target} / t_\mathrm{input}) - \mathrm{F}_\mathrm{airmass} *
(\mathrm{airmass}_\mathrm{input} - \mathrm{airmass}_\mathrm{target})$.
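Numerically, this normalization works out as follows. The helper
function and its sample inputs here are illustrative, not part of the
IPP; for PV3, $\mathrm{F}_\mathrm{airmass} = 0$.

```python
import math

def stack_norm(zp_input, zp_target, transparency, t_target, t_input,
               airmass_input=1.0, airmass_target=1.0, f_airmass=0.0):
    """Magnitude-space normalization for one input warp, following the
    expression in the text (F_airmass = 0 for PV3)."""
    return ((zp_input - zp_target)
            - transparency
            - 2.5 * math.log10(t_target / t_input)
            - f_airmass * (airmass_input - airmass_target))

# A 30 s input contributing to a stack whose nominal exposure time is
# the 120 s sum of its inputs, normalized to zeropoint 25.0
# (all numbers are illustrative):
norm = stack_norm(zp_input=24.6, zp_target=25.0, transparency=-0.1,
                  t_target=120.0, t_input=30.0)
flux_scale = 10.0 ** (-0.4 * norm)   # multiplicative factor for pixel values
```

The magnitude-space factor `norm` is converted to a multiplicative
flux scale via $10^{-0.4\,\mathrm{norm}}$, as used again when
normalizing the convolution kernels below.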
For the PV3 processing, the airmass factor
$\mathrm{F}_\mathrm{airmass}$ was set to zero, such that all flux
differences from differing exposure airmasses are assumed to be
included in the zeropoint and transparency values.

The zeropoint calibration performed here uses the calibration of the
…
the entire region of the sky imaged. This further calibration is not
available at the time of stacking, and so there may be small residuals
in the transparency values as a result
\citep{magnier2017.calibration}.

With the flux normalization factors and target PSF chosen, the
…
the kernel, and the residual with the target PSF used to update the
parameters of the kernel via least squares optimization. Stamps that
significantly deviate are rejected, although the squared residual
difference will increase with increasing source flux. To mitigate
this effect, a parabola is fit to the distribution of squared
…
convolution kernel is returned.

This convolution may change the image flux scaling, so the kernel is
normalized to account for this. The normalization factor is equal to
the ratio of $10^{-0.4 \mathrm{norm}_\mathrm{input}}$ to the sum of
the kernel. The image is multiplied by this factor, and the variance
by
…
Once the convolution kernels are defined for each image, they are used
to convolve the image to match the target PSF.
Any input image that has a kernel match $\chi^2$ value (defined as the
sum of the RMS error across the kernel) greater than 4.0$\sigma$
larger than the median value is rejected from the stack. Each image
also has a weight assigned, based on the image variance after
convolution. A full image weight is then calculated for each input,
with the weight $W_\mathrm{input}$ equal to the inverse of the median
of the image variance multiplied by the peak of the image covariance
(from the warping process). This ensures that low signal-to-noise
images are down-weighted in the final combination.

Following the convolution, an initial stack is constructed. For a
given pixel coordinate, the values at that coordinate are extracted
from all input images, with masked pixels excluded from consideration.
Images that only have a suspect mask bit set (including the SUSPECT,
BURNTOOL, SPIKE, STREAK, STARCORE, and CONV.POOR bit values) are
appended to a suspect pixel list for preferential exclusion.
Following this, the pixel values are combined and tested to attempt to
identify discrepant input values that should be excluded.
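The weighting and the initial per-pixel combination rules can be
sketched as follows. The function names and call shapes are
illustrative, not the IPP interface.

```python
import numpy as np

def image_weight(variance_map, covariance_peak):
    """Per-input stack weight: inverse of (median variance x peak
    warping covariance), so noisy inputs are down-weighted."""
    return 1.0 / (np.median(variance_map) * covariance_peak)

def initial_stack_value(values, weights):
    """Initial combined value for one stack pixel: the single value,
    the plain average of two inputs, or a weighted mean for three or
    more inputs."""
    values = np.asarray(values, dtype=float)
    if len(values) == 1:
        return float(values[0])
    if len(values) == 2:
        return float(values.mean())
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * values) / np.sum(w))

# A noisier input carries a quarter of the weight of a cleaner one
# with the same warping covariance:
w_noisy = image_weight(np.full((64, 64), 4.0), 1.1)
w_clean = image_weight(np.full((64, 64), 1.0), 1.1)

# Three-input pixel: the low-weight discrepant value barely moves the mean.
pix = initial_stack_value([10.0, 11.0, 30.0], [1.0, 1.0, 0.1])
```

In the full pipeline this initial combination then feeds the
rejection testing described next.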
If only a single input is available, the initial stack contains the
value from that single input. If there are only two inputs, the
average of the two is used. These cases should occur only rarely in
the $3\pi$ survey, as there are many input exposures that overlap each
point on the sky. For the more common case of three or more inputs, a
weighted average from the inputs is used, with the weight for each
…
there were no valid inputs, in which case the BLANK mask bit is set.

Due to uncorrected artifacts that can occur on GPC1, and the fact that
they may not be fully masked to ensure all bad pixels are removed, it
is expected that some of the inputs for a given stack pixel are not in
agreement with the others. In general, there is a population of input
pixel values around the correct astronomical level, as well as
possible populations at lower pixel values (such as from an
over-subtracted burntool trail) and at higher pixel values (such as
those caused by an incompletely masked optical ghost).
Due to the observation strategy of observing a given field twice to
allow warp-warp difference images to be constructed to identify
transient detections, higher pixel values that come from sources like
optical ghosts that depend on the telescope pointing will come in
pairs. Detector artifacts will appear in pairs as well. The higher
pixel value contaminants are also potentially problematic, as they may
appear to be real sources, prompting photometry to be performed on
false objects. Because of the expectation that there are more positive
deviations than negative ones, there is a slight preference to reject
high pixel value outliers over low ones, as described below.

Following the initial combination, a ``testing'' loop iterates in an
…
and both are flagged for rejection.

If the number of input pixels is larger than 6, then a Gaussian
mixture model analysis is run on the inputs, fitting two
subpopulations, to determine the likelihood that the distribution is
best described by a unimodal model. If this probability is less than
$5\%$, then the mean is taken from the bimodal subpopulation with the
largest
…
comprised of high pixel value outliers.
1827 1911 1828 If th is is not the case, and the distribution is likely unimodal, or1829 if there are insufficient inputs for this mixture model analysis, the 1830 input values are passed to an Olympic weighted mean calculation. We1831 reject $20\%$ of the number of inputs through this process. The 1832 number of bad inputs is set to $N_\mathrm{bad} = 0.2 * 1833 N_\mathrm{input} + 0.5$, with the 0.5 term ensuring at least one input 1834 is rejected. This number is further separated into the number of low 1835 values to exclude $N_\mathrm{low} = N_\mathrm{bad} / 2$, which will 1836 default to zero if there are few inputs, and $N_\mathrm{high} = 1837 N_\mathrm{input} + N_\mathrm{low} - N_\mathrm{bad}$. After sorting 1838 the input values to determine which values fall into the low and high 1839 groups, the remaining input values are used in a weighted mean using 1840 the image weights above.1912 If the unimodal probability is greater than $5\%$ (indicating the 1913 distribution is likely to be unimodal), or if there are insufficient 1914 inputs for this mixture model analysis, the input values are passed to 1915 an Olympic weighted mean calculation. We reject $20\%$ of the number 1916 of inputs through this process. The number of bad inputs is set to 1917 $N_\mathrm{bad} = 0.2 * N_\mathrm{input} + 0.5$, with the 0.5 term 1918 ensuring at least one input is rejected. This number is further 1919 separated into the number of low values to exclude, $N_\mathrm{low} = 1920 N_\mathrm{bad} / 2$, which will default to zero if there are few 1921 inputs, and $N_\mathrm{high} = N_\mathrm{low} - N_\mathrm{bad}$. 1922 After sorting the input values to determine which values fall into the 1923 low and high groups, the remaining input values are used in a weighted 1924 mean using the image weights above. 1841 1925 1842 1926 A systematic variance term is necessary to correctly scale how 1843 1927 discrepant points can be from the ensemble mean. 
If the mixture model analysis has been run, the Gaussian sigma from
the largest subpopulation is squared and used. Otherwise, a $10\%$
systematic error on the input values is used. Each point then has a
limit calculated using a $4\sigma$ rejection

\begin{eqnarray}
…
\mathrm{mean})^2$ exceeding this limit is identified. If there are
suspect pixels in the set, those pixels are marked for rejection;
otherwise this worst pixel is marked for rejection. Following this
step, the combine and test loop is repeated until no more pixels are
rejected, up to a maximum number of iterations equal to $50\%$ of the
number of inputs.

…
the entire image is rejected as it likely has some systematic issue.

Finally, a second pass at rejecting pixels is conducted, by extending
the current list to include pixels that are neighbors to many rejected
pixels. The ISIS kernel used in the previous step is again used to
determine the largest square box that does not exceed the limit of
$0.25 * \sum_{x,y} \mathrm{kernel}^2$. This square box is then
convolved with the rejected pixel mask to reject the neighboring
pixels. This final
…
a map of the number of inputs per pixel.
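The Olympic rejection and the per-point limit test above can be
sketched as below. This is a simplified stand-in: the `int()` rounding
of $N_\mathrm{bad}$ and the variance-plus-systematic form of the
rejection limit are assumptions where the text and the elided
equations leave details open.

```python
import numpy as np

def olympic_weighted_mean(values, weights):
    """Olympic-style weighted mean: drop ~20% of the inputs (at least
    one), splitting rejections between the low and high ends with the
    high end losing at least as many."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    n_bad = int(0.2 * len(v) + 0.5)      # at least one input rejected
    n_low = n_bad // 2                   # zero for small input sets
    n_high = n_bad - n_low               # slight preference for high end
    keep = slice(n_low, len(v) - n_high)
    return float(np.sum(v[keep] * w[keep]) / np.sum(w[keep]))

def flag_worst(values, variances, nsigma=4.0, sys_frac=0.1):
    """One pass of the combine-and-test loop: flag the most discrepant
    input whose squared deviation from the mean exceeds its limit.
    The limit form here (variance plus a 10% systematic floor) is an
    assumption standing in for the paper's equation."""
    v = np.asarray(values, dtype=float)
    var = np.asarray(variances, dtype=float)
    mean = v.mean()
    sys_var = (sys_frac * mean) ** 2          # systematic variance term
    limit = nsigma ** 2 * (var + sys_var)     # 4-sigma rejection limit
    excess = (v - mean) ** 2 - limit
    worst = int(np.argmax(excess))
    return worst if excess[worst] > 0 else None

m = olympic_weighted_mean([5.0, 5.2, 4.9, 5.1, 9.0], np.ones(5))
bad = flag_worst([10.0, 10.2, 9.9, 10.1, 25.0], np.full(5, 0.2))
ok = flag_worst([10.0, 10.2, 9.9, 10.1], np.full(4, 0.2))
```

With five roughly equal inputs, one value is rejected from the high
end; in the limit test, only the strongly discrepant input exceeds its
limit, and a clean set flags nothing.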
These convolved stack products are not retained, as the convolution is
only used to ensure the pixel rejection uses seeing-matched images.
Instead, we apply the normalizations and rejected pixel maps generated
from the convolved stack process to the original unconvolved input
images. This produces an unconvolved stack that has the optimum image
quality possible from the input images. Not convolving does mean that
the PSF shape changes across the image, as the different PSF widths of
the input images print through in the different regions to which they
have contributed.

%% Asinh compression
While IPP image products from single exposures use compressed 16-bit
integer images, this dynamic range is insufficient for the expected
scale of the stacked images. Retaining 16 bits would lead either to
truncation of the extrema of the image, or to quantized values that
poorly sample the image noise distribution. Saving the images as
32-bit floating point values would alleviate this quantization issue,
at the cost of a large increase in the disk space required for the
stacked images.

Inspired by techniques used by SDSS \citep{2000AJ....120.1579Y}, we
use the inverse hyperbolic sine function to transform the data. The
domain of this function allows any input value to be converted. In
addition, the quantization sampling can be tuned by placing the zero
of the inverse hyperbolic sine function at a value where the highest
sampling is desired.

Formally, prior to being written to disk, the pixel values are
…
\label{sec:diffs}
The image matching process used in constructing difference images is
essentially the same as in the stacking process. An image is chosen
as a template, another image as the input, and after matching sources
to determine the scaling and transparency, convolution kernels are
defined that are used to convolve one or both of the images to a
target PSF. The images are then subtracted, and as they should now
share a common PSF, static sources are largely subtracted (completely
in an ideal case), whereas sources that are not static between the two
images leave a significant remnant. More information on the
difference image construction is contained in \citet{price2017}. The
following section contains an overview of the difference image
construction used for the data in DR2.

The images used to construct difference images can be either
…
For warp-warp differences, such as those used for the ongoing Solar
System moving object search in nightly observations
\citep{2013PASP..125..357D}, the warp that was taken first is used as
the template. As there is less certainty in which of the two input
images will have better seeing, a ``dual'' convolution method is used.
Both inputs are convolved to a target PSF that is not identical to
either input. This intermediate target is essential for the case in
which the PSFs of the two inputs have been distorted in orthogonal
directions. Simply convolving one to match the other would require
some degree of deconvolution along one axis. As this convolution
method by necessity uses more free parameters, the ISIS kernels used
are chosen to be simpler than those used in the warp-stack
differences.
The ISIS widths are kept the same (1.5, 3.0, 6.0 pixel FWHMs), but
each Gaussian kernel is constrained to only use a second-order
polynomial. As with the warp-stack differences, the mask fraction
grows between the input warp and the final difference image due to the
convolution. For the warp-warp differences, each image mask grows
based on the appropriate convolution kernel, so the final usable image
area is highly dependent on ensuring that the telescope pointings are
as close to identical as possible. The observing strategy to enable
this is discussed in more detail in \citet{chambers2017}.

\section{Future Plans}
\label{sec:discussion}

…

The Pan-STARRS1 PV3 processing has reduced an unprecedented volume of
image data, and has produced a catalog for the $3\pi$ Survey
containing hundreds of billions of individual measurements of three
billion astronomical objects. Accurately calibrating