Changeset 8703

- Timestamp: Aug 29, 2006, 6:19:30 PM
- Location: trunk/doc/design
- Files: 2 edited
  - ippCDRresponse.tex (modified) (23 diffs)
  - ippSSDD.tex (modified) (17 diffs)
trunk/doc/design/ippCDRresponse.tex (r8140 → r8703)

@@ -20 +20 @@
 
 % -- Revision History --
-\RevisionsStart
+%\RevisionsStart
 % version Date Description
-DR.01 & 2006.07.28 & First Draft\\ \hline
-\RevisionsEnd
+%00 & 2006.08.13 & Initial Release \\ \hline
+%\RevisionsEnd
 
 \tableofcontents

@@ -104 +104 @@
 This question appears to be principally concerned with perceived
 overlap between the IPP DVO system and the PSPS Object Data Manager
-(ODM). Where there is superficial similarity between the systems, it
+(ODM). While there is superficial similarity between the systems, it
 is our belief that the development of two complementary systems is not
 only worthwhile, but extremely important for a variety of reasons.
 
-The IPP operational requirments include the analysis of images to
+The IPP operational requirements include the analysis of images to
 produce the P2, P4$\Delta$, and P4$\Sigma$ detections, the Static Sky
 images, the Astrometric and Photometric Reference catalog, and the …

@@ -128 +128 @@
 and objects, along with the photometric and astrometric calibration
 information. There are several motivations for defining a separate
-entity within the IPP specifially for this task:
+entity within the IPP specifically for this task:
 
 \begin{itemize}
-\item the PSPS system is foreseen as the definitive view on the object
+\item The PSPS system is foreseen as the definitive view on the object
 database problem, while the IPP requires the ability to manipulate
 and update the astrometric and photometric calibrations.
-\item the PSPS system requires a high-level of sophistication in the
+\item The PSPS system requires a high-level of sophistication in the
 scientific queries which may be performed. While critical, this
-sophistication will result in a much longer development timeline for
+sophistication will result in a much longer development time line for
 the PSPS.
 \item By performing the analysis of the astrometric and photometric …

@@ -146 +146 @@
 trade-off allows the IPP DVO system to operate at a much higher
 throughput than is possible for the PSPS, and perform more
-reprocessing operations.
+re-processing operations.
 \end{itemize}
 
 It is useful to ask if the PSPS could have been ready within the
-necessary timeframe, would the IPP use that instead of the DVO system
+necessary time frame, would the IPP use that instead of the DVO system
 for support of photometric and astrometric calibrations. The answer
 would be 'yes' only if the PSPS requirements were modified to allow it …

@@ -157 +157 @@
 subsystems after they have both matured? My opinion, and I believe,
 that of the project management, is that this choice would result in
-development of a system which would be substantially over engineered
+development of a system which would be substantially over-engineered
 for its role in either of the two subsystems, with concomitant
 increase in the cost of development and support. By dividing the …

@@ -187 +187 @@
 psphot and ppImage upgrades.
 \item {\bf Release Capabilities} Image warping, Image differencing,
-Image stacking, object modelling for difference images (loading PSF
+Image stacking, object modeling for difference images (loading PSF
 from an external source, fitting positive and negative sources),
 analysis of the OTA guide kernel, detrend images convolved with the …

@@ -220 +220 @@
 We have worked with the other PS1 teams to define complete ICDs
 between all of the interacting subsystems. The IPP interfaces with 4
-existing subsystems, and will interact with several other science
-clients. We have completed the definitions of the ICDs for the
-Camera-IPP (PSDC-940-003), OTIS-IPP (PSDC-940-004), IPP-MOPS
+existing subsystems, and will interact with other as-yet-undefined
+science clients. We have completed the definitions of the ICDs for
+the Camera-IPP (PSDC-940-003), OTIS-IPP (PSDC-940-004), IPP-MOPS
 (PSDC-940-005), and IPP-PSPS (PSDC-940-006). These are now part of
 the PSDC document tree.

@@ -277 +277 @@
 The CDR Review Committee requests a plan for finalizing the analysis
 algorithms within the IPP. There are three stages to freezing the
-analysis algorithims within the IPP:
+analysis algorithms within the IPP:
 \begin{itemize}
 \item Algorithm conceptual design. …

@@ -287 +287 @@
 The IPP SSDD defines the conceptual details of nearly all of the
 analysis algorithms in Sections 5, 6, and 7. Of these descriptions,
-only Sections 5.5.5, describing the stacking of the Static Sky,
-and Section 7, defining the Static Sky photometry analysis, are
-insufficient in detail for the analysis to be well understood. Also
-missing from version 00 of the document is a discussion of the
-creation of the astrometric and photometric reference catalogs. A new
-version of the SSDD with these three sections updated \note{will be
-posted by DATE}.
+only Section 5.5.5, describing the stacking of the Static Sky, and
+Section 7.2, defining the Static Sky photometry analysis, are
+insufficient in detail for the analysis to be well understood. The
+discussion of the analysis used to generate the astrometric and
+photometric reference catalog is included in Section 4 on the design
+of DVO, and is somewhat sparse. A new version of the SSDD with these
+three sections updated will be posted by Aug 30.
@@ -296 +296 @@
 
 Table~\ref{algorithms} lists the IPP algorithms, the relevant program
 in the IPP tree, the development state of the program with respect to
 the identified analysis, and the data needed to set the algorithm
-parameters. In a separate document (REF), we present a detailed
-discussion of the choices to be made and guidelines on making those
-choices.
+parameters. In a separate document (`How to Define IPP Analysis
+Parameters'), we present a detailed discussion of the choices to be
+made and guidelines on making those choices.
 
 There are a handful of decisions which need to be made which have an …

@@ -332 +332 @@
 analysis be minimized since it is impossible to afford the I/O load
 demanded by a large number of input fringe images. A related
-question is that of how to sub select the night-time fringe images
+question is that of how to sub-select the night-time fringe images
 for best effect, if sky fringe images are used. Based on experience
 from CFHT/Megacam, it may be possible to use fringe images selected
 on the basis of the time of night, but this must be tested for
-Haleakala. It seems unlikely at this time that a spectral skyprobe
-will be available for the start of PS1, so we cannot rely on such a
-device to guide our choices.
+Haleakala. An alternative strategy, in which master fringe images
+are combined based on their consistency with the science image, has
+also been implemented for the IPP. The decision between these
+options will be guided by further testing of Megacam images and real
+Haleakala data in a range of conditions. It seems unlikely at this
+time that a spectral skyprobe will be available for the start of
+PS1, so we cannot rely on such a device to guide our choices.
 
 \item {\bf Static Sky Cell definition} What is the layout of the …

@@ -376 +380 @@
 range of field angles into the static sky. These will have a range
 of image qualities. We will need to set the cuts to trade-off
-between degredation of the final image quality versus degredation of
+between degradation of the final image quality versus degradation of
 the signal-to-noise in the final image. In general, our guidance
 for the Static Sky is to maximize our ability to measure accurate …

@@ -391 +395 @@
 common seeing (for a stable PSF across the image for difference
 image)? Is it possible to measure the weak lensing parameters
-sufficiently accuractly from the stacked image with knowledge of the
+sufficiently accurately from the stacked image with knowledge of the
 PSF? To what level of detail is the PSF model required? Is it
 necessary to perform the weak lensing analysis (and other galaxy …

@@ -410 +414 @@
 steps. The functional flow of these different analysis steps can be
 seen in IPP SSDD Figures 22-30. The use of the Q/A measurements is
-not summarized very clearly within the text of version 00; \note{this
-will be updated in the new SSDD release}. Within the IPP, the
-analysis stages use these measurements to mark input images as
-accepted or rejected. These assessments are passed back to the OTIS
-subsystem, along with the image statistics measured for the input
-images. OTIS has the option of setting more stringent filters on the
-input images and re-observing images on the basis of the IPP feedback,
-even if the IPP accepted images which OTIS re-observes. There is also
-a system-wide plan in place to use feedback from the IPP and from OTIS
-to guide the project's choices for survey strategies and science
-goals.
+not summarized very clearly within the text of version 00; this will
+be updated in the new SSDD release. The types of Q/A measurements are
+also discussed below.
+
+Within the IPP, the analysis stages use these measurements to mark
+input images as accepted or rejected. These assessments are passed
+back to the OTIS subsystem, along with the image statistics measured
+for the input images. OTIS has the option of setting more stringent
+filters on the input images and re-observing images on the basis of
+the IPP feedback, even if the IPP accepted images which OTIS
+re-observes. There is also a system-wide plan in place to use
+feedback from the IPP and from OTIS to guide the project's choices for
+survey strategies and science goals.
 
 \subsection{Detrend Images}
 
-The input detrend images all have their pixel count levels measured
-for each chip. Input images which have counts or fluxes outside of a
+The input detrend images have their pixel count levels measured for
+each chip. Input images which have counts or fluxes outside of a
 defined range will be flagged and excluded from any detrend analysis.
 For example, the flat-field images should never use input images which
 are saturated, nor should the dark image analysis use input images
 with flux levels wildly outside of the nominal range. Both conditions
-are evidence that the observing process was performed inappropriately.
+are evidence that the data were incorrectly obtained.
 
 In addition to raw pixel values, the input detrend images are
-contronted with the resulting master detrend images. In general, the
-effect corrected by the master detrend image should adaquately correct
+confronted with the resulting master detrend images. In general, the
+effect corrected by the master detrend image should adequately correct
 each of the input raw detrend images. The residual scatter of the
 detrended raw images should be small. As part of the detrend creation …

@@ -483 +489 @@
 scatter, and substantial photometric offsets are all evidence of
 problems with the observing conditions. These may be the presence
-of clouds and/or haze, degredation of the optics, and/or extreme
+of clouds and/or haze, degradation of the optics, and/or extreme
 image-quality problems.
 \end{itemize}

@@ -500 +506 @@
 result in large deviations between the components of an image stack,
 and will also result in large numbers of difference image detections.
-Similarly, bright stars with larger than expected halos or saturation
+Similarly, bright stars with larger-than-expected halos or saturation
 regions will result in excess difference image detections.

@@ -519 +525 @@
 hardware to meet the processing and I/O requirements.
 
-That analysis is based largely on prototype tests of our processing
-algorithms, and is somewhat limited by being focused on the steady
-state operations. We present here new numbers for the processing
-timeline based on the current baseline software on our existing
-baseline cluster hardware.
-
-For PS1, there is a significant processing challenge in the first 6 -
-9 months when only a fraction of the IPP storage hardware will be
-available. This period is further complicated by the budgetary
-constraints placed on the IPP to limit the hardware purchase to a bare
-minimum. An important area for clarification by the project is the
-processing requirement in the beginning of the project. If the IPP is
-required to perform a complete Static Sky analysis on every image as
-it becomes available, then the total hardware required in the first
-6-9 months for processing must be increased. If it is only necessary
-to stack sets of, for example, 4 images as they are available, then
-the requirements are somewhat reduced. A trade-off must be made by
-the project to choose between these options.
+That analysis was based largely on prototype tests of our processing
+algorithms and has been made out-of-date by changes to the survey
+plans. We present here new numbers for the processing time line based
+on the current baseline software on our existing baseline cluster
+hardware.
+
+% For PS1, there is a significant processing challenge in the first 6 -
+% 9 months when only a fraction of the IPP storage hardware will be
+% available. This period is further complicated by the budgetary
+% constraints placed on the IPP to limit the hardware purchase to a bare
+% minimum. An important area for clarification by the project is the
+% processing requirement in the beginning of the project. If the IPP is
+% required to perform a complete Static Sky analysis on every image as
+% it becomes available, then the total hardware required in the first
+% 6-9 months for processing must be increased. If it is only necessary
+% to stack sets of, for example, 4 images as they are available, then
+% the requirements are somewhat reduced. A trade-off must be made by
+% the project to choose between these options.
 
 The data storage requirements are determined from the design reference …

@@ -546 +552 @@
 images per year. Combining these two, we find that the total volume
 of raw image data is roughly 1.7PB (555,000 images). In addition, we
-have a requirement for Static Sky storage (using 0.2 arcsec pixels) of
-roughly 300TB, and miscellaneous additional storage of nearly 100 TB.
-Our hardware purchase plan has a minimum total storage of 2.4PB,
+have a requirement for Static Sky storage (assuming 0.2 arcsec pixels)
+of roughly 300TB, and miscellaneous additional storage of nearly 100
+TB. Our hardware purchase plan has a minimum total storage of 2.4PB,
 giving us a margin of about 10\%.
 Our plan is to purchase the hardware in 5 stages of 16 computers
 each, for a total system cost of …

@@ -558 +564 @@
 required to buy all of the machines up front), we would require
 roughly 145 machines, increasing the cost of the cluster to a total of
-roughly \$2.2M. We judge this to be a very low risk.
+roughly \$2.2M. We judge this to be a very low risk as hard disk
+capacities continue to grow.
 
 We have performed timing tests of the current versions of the IPP tools …

@@ -572 +579 @@
 speeds of the CPU cores, but increasing numbers of cores per socket.
 By the end of this year, quad-core processors are expected. If we
-stagger the purchase of the computers as planned, and make reasonable
-estimates for the number of cores available, we expect the final
-cluster configuration to have between 400 and 800 cores. We will use
-600 as an estimate. Note that it is possible to supplement the
-processing power of the cluster by buying 1U boxes with processors but
-no storage. Each of these boxes cost roughly 15\% of a storage node
-and add an equal number of processor cores. Such an option can be
-taken at any time, though it is not needed in our current development
-plan.
+stagger the purchase of the computers as planned (5 sets of 16
+machines), and make reasonable estimates for the number of cores
+available, we expect the final cluster configuration to have between
+400 and 800 cores. We will use 600 as an estimate. Note that it is
+possible to supplement the processing power of the cluster by buying
+1U boxes with processors but minimal storage. Each of these boxes
+costs roughly 15\% of a storage node and adds an equal number of
+processor cores. Such an option can be taken at any time to
+supplement the raw processing power, though it is not needed in our
+current development plan.

@@ -583 +591 @@
 
 There are two potentially dominant analysis steps in the process: the …

@@ -589 +597 @@
 takes roughly 16 seconds for a Megacam chip (single core), equivalent
 to roughly 38 seconds on a full GPC-1 chip. Most other steps of the
-analysis scale are constant per image, and contribute only a few
-seconds relative to the 38 seconds. We use 50 seconds per chip per
-core to judge the total processing power for the portion which scales
-by the number of images.
+analysis require a constant amount of time per image, and contribute
+only a few seconds relative to the 38 seconds. We use 50 seconds per
+chip per core to judge the total processing power for the portion
+which scales by the number of images.
 
 A useful statistic to judge the capability of the processing system is
-the time required to reprocess all images at the end of the survey.
+the time required to re-process all images at the end of the survey.
 Given the total number of images above (555,000), the per-image
 analysis portion of the processing would require a total of $\sim 1.8 …

@@ -617 +625 @@
 predict the Pan-STARRS magnitudes of stars, then extrapolated the
 source counts to our magnitude limits. We find 50,000 objects per
-square degree above our threshold in this region. If every image
-required the non-linear fitting for this density of objects, and we
-accept the 10ms time, this analysis would require a total of $\sim 2.0
-\times 10^9$ CPU core-seconds, or about 39 days on the 600 cores.
+square degree above our threshold in this region. If we make the
+conservative assumption that every image required the non-linear
+fitting for this density of objects, and that each object requires
+10ms, this analysis would require a total of $\sim 2.0 \times 10^9$
+CPU core-seconds, or about 39 days on the 600 cores.

@@ -623 +632 @@
 
 In conclusion, given the assumptions above, the processing power of …

@@ -650 +659 @@
 requirements of the IPP for object databasing. Some effort has been
 needed to make DVO completely suitable for its role within the IPP.
-Regardless of what object databasing system was chosen, a certain
-level of effort would have been required. In this case, we were clear
-just how much would be required, and it was not large.
+Regardless of what object databasing system was chosen, however, a
+certain level of effort would have been required. In the case of DVO,
+we were clear what effort was required, and it was judged to be
+reasonable.
 
 Of that effort, only the ability to support older table formats was
-required to maintain CFHT Elixir compatibility. In fact, this is a
+required to maintain compatibility with the existing CFHT DVO
+databases. In fact, this is a
 feature which we would have added even if we did not want to maintain
 compatibility with CFHT's DVO installation. We have found in our
 experience with the Elixir system that having a rigidly defined schema
 hindered the usability and extensibility of the DVO system. The fixed
 tables made it difficult to add new elements to the database, and
 required multiple versions to support previously defined tables. The
 new design allows us to be more flexible about changes without fear
 that this will break database instances which already exist. One of
 the best ways we have found to test the DVO object databasing system
 is to engage students in science projects using DVO. These projects
 explore the user interface and highlight problems and areas for
 possible improvement. Such projects would not be possible if the
 users feared that their DVO instances would be unusable in the future
 because of lack of backwards compatibility.

@@ -673 +684 @@
 
 The PanTasks system used the existing Opihi command-line interface …

@@ -684 +695 @@
 of a large set of real images obtained by the CFHT engineering staff
 over several years. We also gain by discussions with our Elixir
-collegues about details of the analysis and possible sources of errors
+colleagues about details of the analysis and possible sources of errors
 observed in the CFHT dataset. The only cost to the IPP is in
 preventing excessive forking of the DVO databasing system, something …
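The storage and throughput figures quoted in the ippCDRresponse.tex changes above are easy to sanity-check. The sketch below redoes the arithmetic; the Megacam (2048 x 4612 pixel) and GPC-1 (~4800 x 4800 pixel) chip dimensions are outside assumptions used only for the pixel-count scaling, not values stated in the document.

```python
# Sanity checks on the quoted storage and processing numbers.

# Storage: 1.7 PB raw + 0.3 PB Static Sky + 0.1 PB misc vs a 2.4 PB plan.
used_pb = 1.7 + 0.3 + 0.1
margin = (2.4 - used_pb) / 2.4
print(f"storage margin: {margin:.0%}")  # ~12%, consistent with "about 10%"

# psphot timing: 16 s on a Megacam chip, scaled by pixel count to a
# GPC-1 chip (chip dimensions are assumptions, see above).
megacam_pix = 2048 * 4612
gpc1_pix = 4800 * 4800
gpc1_sec = 16 * gpc1_pix / megacam_pix
print(f"GPC-1 chip estimate: {gpc1_sec:.0f} s")  # ~39 s vs the quoted 38 s

# Non-linear fitting load: 2.0e9 CPU core-seconds spread over 600 cores.
days = 2.0e9 / 600 / 86400
print(f"fitting time: {days:.0f} days")  # ~39 days, as quoted
```

All three figures reproduce the numbers claimed in the changed text to within rounding.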
trunk/doc/design/ippSSDD.tex (r8140 → r8703)

@@ -613 +613 @@
 \paragraph{House keeping}
 
-\subparagraph{Lock sweeping} In the event that a Storage Object operation fails to complete successfully
-stale locks will have to be identified and removed from the IPP Pixel
-Data Server Database. This should be done periodically by comparing
-the entries in the Lock table to the list of active nodes maintained
-by the IPP Controller. It should also happen as soon as possible
-after a node goes offline (triggered by the IPP Controller marking a
-node as offline?). A sweep must be /completed/ before an offline node
-can be marked on-line.
+\subparagraph{Lock sweeping}
+In the event that a Storage Object operation fails to complete
+successfully stale locks will have to be identified and removed from
+the IPP Pixel Data Server Database. This should be done periodically
+by comparing the entries in the Lock table to the list of active nodes
+maintained by the IPP Controller. It should also happen as soon as
+possible after a node goes offline (triggered by the IPP Controller
+marking a node as offline?). A sweep must be /completed/ before an
+offline node can be marked on-line.
 
 Once a node is determined to be offline all entries in the Lock table …

@@ -628 +629 @@
 table.
 
-\subparagraph{Consistency sweeping} Periodically the IPP Pixel Data Server meta-data and Storage Object will need
-to be checked for sanity. This would be similar to running fsck on a
-modern filesystem. Consistency sweeping should include Lock sweeping
-and should be considered a super-set.
+\subparagraph{Consistency sweeping}
+Periodically the IPP Pixel Data Server meta-data and Storage Object
+will need to be checked for sanity. This would be similar to running
+fsck on a modern filesystem. Consistency sweeping should include Lock
+sweeping and should be considered a super-set.
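The stale-lock sweep described in the Lock sweeping hunk above amounts to a set comparison against the IPP Controller's active-node list. The sketch below is only an illustration of that procedure; the lock-record fields (`lock_id`, `node`) are hypothetical, since the SSDD does not define the table schema at this level.

```python
# Illustrative sketch of the lock sweep described above; the lock
# records and their fields (lock_id, node) are hypothetical.

def sweep_stale_locks(lock_table, active_nodes):
    """Partition locks into (kept, stale) by comparing each lock's
    holder against the IPP Controller's list of active nodes."""
    active = set(active_nodes)
    kept = [lock for lock in lock_table if lock["node"] in active]
    stale = [lock for lock in lock_table if lock["node"] not in active]
    return kept, stale

def mark_node_online(node, lock_table, active_nodes):
    """A sweep must be completed *before* an offline node is marked
    on-line again: sweep first, then extend the active-node list."""
    kept, _ = sweep_stale_locks(lock_table, active_nodes)
    return kept, active_nodes + [node]

locks = [{"lock_id": 1, "node": "n1"}, {"lock_id": 2, "node": "n2"}]
kept, stale = sweep_stale_locks(locks, ["n1"])
print(len(kept), len(stale))  # 1 1
```

Ordering the sweep before the on-line transition is what prevents a returning node from seeing (or re-acquiring) locks that were abandoned while it was down.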
@@ -634 +636 @@
 
 \subsubsection{Nebulous Database}

@@ -1491 +1493 @@
 photometrically corrected flats (-grid option).
 
+\tbd{fill out this discussion in the analysis section on the
+astrometric and photometric reference catalog}.
+
 \subsubsection{Uniphot : Zero Point Analysis}

@@ -1496 +1501 @@
 points for images and the spatial overlap information to determine a
 best set of image zero points which have a specific time scale for the
-atmospheric stability. This analysis would be used after relative
+atmospheric stability. This analysis is used after relative
 photometry has been determined for data in DVO. This analysis
 currently is defined to unify the zero points of a collection of …

@@ -1503 +1508 @@
 photometry corrections for a collection of images distributed over a
 large range in space and time, but still with significant
-overlap. distritions with subustanailaccount for the c
-
-\subsubsection{Global Astrometry Analysis}
-
-This operation uses the reference and image detections to determine an
-optical distortion model for the camera and static astrometry model
-components. The astrometry model includes: (1) field distortion
+overlap.
+
+\tbd{fill out this discussion in the analysis section on the
+astrometric and photometric reference catalog}.
+
+\subsubsection{relastro : Global Astrometry Analysis}
+
+This operation uses the reference and image detections to improve the
+astrometric reference catalog. It determines an improved optical
+distortion model for the camera and static astrometry model
+components, and then applies the improved astrometric solutions to the
+observations to yield high-quality astrometry for the average object
+positions. The astrometry model includes: (1) field distortion
 introduced by the telescope optics, which is a smoothly-varying …

@@ -1516 +1527 @@
 along with chip-dependent plate-scale modifications needed to
 represent tilts or warps of the individual detectors relative to the
-ideal flat focal plane. .
+ideal flat focal plane.
+
+\tbd{fill out this discussion in the analysis section on the
+astrometric and photometric reference catalog}.
 
 \subsubsection{DVO shell}

@@ -2835 +2849 @@
 to an error upstream in the processing).
 
+\tbd{add discussion of the choices to be made in generating the
+static sky image stacks: interpolation methods, selection of input
+images by IQ, smoothing of input images by their PSF, weighting and
+clipping of input pixels}
+
 Object analysis of the static sky images is {\em not} a part of the
 Phase 4 analysis. This processing is envisioned to take place …

@@ -2840 +2859 @@
 scheduled as a separate analysis task, probably run during the day at
 a time when the computing infrastructure is not under significant load.
+
+\tbd{add discussion of the multiple image analysis and object
+analysis without the static sky (ie, on all input images at once)}
 
 \subsubsection{Magic and Phase 4 Modifications}

@@ -3161 +3183 @@
 parameter $\nu$, and a collection of annular aperture flux
 measurements, all of which are also measured for the P4$\Sigma$
-images. In addition, the galaxy-shape parameters $Gamma_1 \&
+images. In addition, the galaxy-shape parameters $\Gamma_1 \&
 \Gamma_2$, along with the complete `polarization' terms are measured,
 as well as a collection of annular aperture flux and variance …

@@ -3173 +3195 @@
 per second), it is only necessary to process the complete sky in a
 year, or an average rate of $\sim$2 Mpix per second, or $< 1$\% of the
-object analysis in the other analysis stages.
+object analysis in the other analysis stages. These operations are
+all functions which will be performed within the PSPhot program using
+recipe options.
+
+\subsection{Astrometric and Photometric Reference Catalog}
+
+The IPP is responsible for generating the Astrometric and Photometric
+(AP) Reference Catalog. The IPP provides several tools for performing
+this analysis. The DVO programs \code{relphot}, \code{uniphot}, and
+\code{relastro} perform most of the operations required to generate
+the AP Reference Catalog. These include the determination of the
+image zero-points, the identification of objects with significant
+variability, the detection of individual outlier measurements, the
+detection of objects with substantial astrometric error, the analysis
+of parallax and proper-motions, etc. In addition, the DVO shell
+program will be used to generate the color transformations from the
+observed data and to perform other tests of the catalog quality.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

@@ -3191 +3229 @@
 functions in the operational system, the IPP will make use of Perl as
 the scripting language to provide the required flow-control to tie the
-modules together. \tbd{note that we use C only, not perl for
-scripting}.
+modules together.
 
 This approach satisfies the requirement that complicated low-level …

@@ -3551 +3588 @@
 from the selected reference catalog. The observed sources are matched
 to the reference sources, using either a two-point grid search or
-optionally a \tbd{triangle match}. Once an approximate match is
-found, a linear fit between detector coordinates and projected
-celestial coordinates is attempted. The projected coordinate system
-may optionally make use of the default telescope distortion model, if
-it is known. The radius of the match between observed and reference
+optionally a triangle match. Once an approximate match is found, a
+linear fit between detector coordinates and projected celestial
+coordinates is attempted. The projected coordinate system may
+optionally make use of the default telescope distortion model, if it
+is known. The radius of the match between observed and reference
 sources is reduced to improve the statistics of the match. This
 analysis mode is used in the Phase 2 processing.

@@ -3610 +3647 @@
 \subsection{poisub}
 
-Poisub is the image difference analysis program. \tbd{Paul: please
-flesh this out!}.
+Poisub is the image difference analysis program. \tbd{finish this
+discussion}.
 
 \subsection{stac}

@@ -3618 +3655 @@
 same region of the sky. It consists of two major stages: the warping
 stage and the image combination stage with robust outlier rejection.
-\tbd{Paul: flesh this out!}
+\tbd{update / finish this discussion}
 
 \subsection{Command Sequences}

@@ -4035 +4072 @@
 \subsection{IPP Pipelines Overview}
 
+\tbd{add the use of Q/A measurements from the IPP CDR Response
+document}
+
 The IPP as a whole performs all of the image analysis functions
 required by the Pan-STARRS telescopes, including images from the full …

@@ -4380 +4420 @@
 header or new exp table?), the exposure is added to the `raw exposure'
 table for images of that type. The allowed types are `detrend', (all
-bias, dark, flat images), `object', `focus'(??), etc. (** The
-different tables represent different analysis modes. This process
-also adds an entry to the exp ID / image file match **). This process
-also adds all science (OBJECT) exposures to the P1 exposure table (for
-mosaic data) or the P2 chip table (for single detector data). These
-tables are used to trigger the Phase 1 and Phase 2 analysis stages.
+bias, dark, flat images), `object', `focus'(??), etc. The different
+tables represent different analysis modes. This process also adds an
+entry to the exp ID / image file match. This process also adds all
+science (OBJECT) exposures to the P1 exposure table (for mosaic data)
+or the P2 chip table (for single detector data). These tables are
+used to trigger the Phase 1 and Phase 2 analysis stages.

@@ -4559 +4599 @@
 rules.
 
-\note{Phase 4 run can be defined by selecting an observation group, a
+\tbd{Phase 4 run can be defined by selecting an observation group, a
 set of exposures, or a set of rules related to a spatial region (eg,
 region, time range, and filter)}.
 
-\note{Phase 4 discussion (and diagram) needs more work}
+\tbd{Phase 4 discussion (and diagram) needs more work}
 
 \subsection{Analysis Version and Recipes}
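The zero-point unification performed by uniphot (described in the Uniphot hunk above) is, at heart, a least-squares problem: solve for one zero-point offset per image such that repeated measurements of the same stars on overlapping images agree. The toy below illustrates the idea with a simple alternating (Gauss-Seidel style) solve; it is not the IPP implementation, and the data are fabricated for the example.

```python
# Toy zero-point unification: two images observe the same two stars;
# image 1 has a +0.3 mag zero-point offset relative to image 0.
# measurements: (image_index, star_index, instrumental_mag)
meas = [(0, 0, 10.0), (0, 1, 11.0), (1, 0, 10.3), (1, 1, 11.3)]
n_img, n_star = 2, 2

zp = [0.0] * n_img
for _ in range(50):  # alternate: solve star mags, then zero points
    # best star magnitude given the current zero points
    m = [0.0] * n_star
    for s in range(n_star):
        vals = [mag - zp[i] for (i, st, mag) in meas if st == s]
        m[s] = sum(vals) / len(vals)
    # best zero point given the current star magnitudes
    for i in range(n_img):
        vals = [mag - m[st] for (img, st, mag) in meas if img == i]
        zp[i] = sum(vals) / len(vals)
    # gauge freedom: pin the first image's zero point to 0
    off = zp[0]
    zp = [z - off for z in zp]

print([round(z, 3) for z in zp])  # [0.0, 0.3]
```

In the real system the same machinery must also handle variable stars, outlier measurements, and spatial terms, which is why relphot and uniphot are substantially more involved than this sketch.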