IV. 2MASS Data Processing
10. Quality Assurance
Quality Assurance (hereafter "QA") is the final analysis ensuring that the 2MASS data meet Level 1 specifications. While data were still being collected at Mt. Hopkins and Cerro Tololo, QA was responsible for closing the loop with the observatory by determining which of the tiles could forever be checked off as "done," and which needed high- or low-priority re-scans. For reprocessing of data for the All-Sky Release, QA was responsible for assigning quality scores to all scans based on a consistent and uniform set of criteria across the sky. Many regions of the sky had multiple observations, so this uniform set of scores ensured that the All-Sky Release Catalogs could be built from the very best scans of each region.
There are three steps in QA:
- The first step is referred to as "24-hr QA" and was performed within 24 hours after the data tape reached IPAC. This was done to provide a quick check of the health of the instruments and to identify data of obviously poor quality so that those areas of sky could be reobserved.
- The second step is referred to as "Nightly Science QA" and was a more comprehensive check of the data performed when the night was fully processed through the preliminary pipeline.
- After all data were run through the first two steps, the 2MASS processing pipeline was adjusted and refined using knowledge gained throughout the preliminary processing of the entire data set. Scans assigned quality scores of "3" or greater during the Nightly Science QA were then re-processed with the final pipeline, and a "Final Science QA" was applied uniformly to all data.
The philosophy behind the QA grading scheme is to provide a numerical score for each scan indicating the likelihood that those data meet the Level 1 specifications. The best score, quality=10, is given to scans that have a 100% chance of meeting these requirements. The worst score, quality=0, is reserved for scans known to fail the Level 1 requirements. For scans in between these two regimes, integral quality scores between 0 and 10 are assigned.
A scan's quality score is assessed from a number of diagnostics:
- The photometric quality of the sky at the time of observation and our ability to measure it.
- The sensitivity of the data and the stability of sky backgrounds.
- The seeing and our ability to track its variations.
- The quality of the astrometric reconstruction.
For readers wanting more in-depth discussion of the quality diagnostics, the sections below describe the steps in Final Science QA. Figures linked to the discussion are all taken from a representative night, 000129s, i.e., 2000 Jan 29 south. (For detailed descriptions of the QA diagnostic plots, the reader is referred to this subsection.)
a. Photometricity
The scatter in mean zero-points for the six individual measures in a calibration scan set was computed as a first diagnostic of the photometric stability. Figure 1 shows an example of the nightly photometric solutions (fits to the mean zero-points of each six-calibration-scan set as a function of UT; see IV.8.b), which were reviewed by eye for each night, providing a second check of the photometric stability. A third check -- statistics on the magnitude differences of SNR>20 stars falling in the region of overlap between adjacent scans -- was also used. Figure 2 shows the resulting average magnitude difference per scan pair as a function of UT, which was also checked by eye. (A sketch of this overlap statistic follows the figures below.)
Figure 1 | Figure 2
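To make the overlap diagnostic concrete, here is a minimal Python sketch of the per-scan-pair statistic described above. The data layout (tuples of matched magnitudes with an SNR) and the function name are illustrative assumptions, not the actual 2MASS QA code.

```python
from statistics import mean

def overlap_repeatability(matches, snr_cut=20.0):
    """Mean magnitude difference for high-SNR stars common to the
    overlap strip of two adjacent scans.

    `matches` is a hypothetical list of (mag_scan1, mag_scan2, snr)
    tuples for stars detected in both scans of a pair.
    """
    diffs = [m1 - m2 for m1, m2, snr in matches if snr > snr_cut]
    if not diffs:
        return None  # too few high-SNR stars to judge stability
    return mean(diffs)

# One such average per scan pair, plotted against UT (cf. Figure 2),
# exposes temporal drifts in the photometric zero-point.
```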
Using the above diagnostics, a photometric quality factor (fct1) was computed for each photometric solution. (Sometimes nights were divided into separate intervals with independent solutions if, for example, a brief period of clouds interrupted data collection partway through the night.) This factor considers the number of calibration scan sets going into the night's photometric solution, the photometric dispersion in each calibration scan set, and the size of the photometric scatter in scan overlaps. It was computed via the formula fct1 = pfct1*pfct2*pfct3, where the three subfactors are described as follows (a sketch of the full computation follows the list):
- pfct1: If the number of calibration scan sets falls below 5, the calibration is considered less than robust and is downgraded. Five or more calibration sets give a perfect subfactor of pfct1=1.0, four sets result in pfct1=0.9, and three give pfct1=0.8. Photometric solutions having only 2 calibration scan sets were given pfct1=0.3, but only if the two subfactors discussed in the next items were perfect (pfct2=pfct3=1.0); otherwise, the subfactor was set to pfct1=0.0, marking the solution as non-photometric.
- pfct2: If the photometric dispersion for field stars in the six-times-repeated calibration scan sets exceeded 0.04 mag in any calibration scan in any band, this subfactor was reduced from a perfect grade of pfct2=1.0 using the formula pfct2 = 2 - ((worst dispersion)/0.04 mag), where the worst dispersion is that of the calibration scan with the largest dispersion in the photometric solution. This downgrade was waived if the high dispersion occurred only for a high-airmass (airmass>1.5) calibration set in an otherwise photometrically stable night, or if the high dispersion was caused by a single band in a single calibration set whose zero-point is consistent with all others on the night.
- pfct3: If the peak-to-peak spread of photometric repeatability in the overlap regions exceeded 0.05 mag, this subfactor was reduced from a perfect 1.0: pfct3=0.7 for scatters between 0.050 and 0.075 mag, pfct3=0.4 for scatters between 0.075 and 0.100 mag, and pfct3=0.0 for scatters of 0.100 mag or greater. These downgrades could be waived only if they were known to be biased by poor statistics (few SNR>20 stars, because of low source density or small regions of overlap) or by photometrically-confused sources falling near a bright star. In rare cases where the peak-to-peak scatters suggested downgrades only at Ks, even though the night appeared otherwise cloud-free, the downgrade was also ignored; this effect was caused by a PSF cross-scan bias and not by sky transparency.
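The following Python sketch encodes the fct1 rules as stated above. It is a minimal illustration, not the 2MASS pipeline code: the waiver conditions are reduced to boolean flags, and clamping pfct2 at 0.0 for dispersions beyond 0.08 mag is an assumption.

```python
def compute_pfct1(n_cal_sets, pfct2, pfct3):
    """Subfactor based on the number of calibration scan sets."""
    if n_cal_sets >= 5:
        return 1.0
    if n_cal_sets == 4:
        return 0.9
    if n_cal_sets == 3:
        return 0.8
    if n_cal_sets == 2 and pfct2 == 1.0 and pfct3 == 1.0:
        return 0.3
    return 0.0  # non-photometric

def compute_pfct2(worst_dispersion_mag, waived=False):
    """Subfactor based on the worst calibration-scan dispersion."""
    if waived or worst_dispersion_mag <= 0.04:
        return 1.0
    # clamp at 0.0 (assumption) for dispersions beyond 0.08 mag
    return max(0.0, 2.0 - worst_dispersion_mag / 0.04)

def compute_pfct3(peak_to_peak_mag, waived=False):
    """Subfactor based on peak-to-peak scatter in scan overlaps."""
    if waived or peak_to_peak_mag < 0.050:
        return 1.0
    if peak_to_peak_mag < 0.075:
        return 0.7
    if peak_to_peak_mag < 0.100:
        return 0.4
    return 0.0

def compute_fct1(n_cal_sets, worst_dispersion_mag, peak_to_peak_mag,
                 dispersion_waived=False, overlap_waived=False):
    """Photometric quality factor fct1 = pfct1 * pfct2 * pfct3."""
    pfct2 = compute_pfct2(worst_dispersion_mag, dispersion_waived)
    pfct3 = compute_pfct3(peak_to_peak_mag, overlap_waived)
    pfct1 = compute_pfct1(n_cal_sets, pfct2, pfct3)
    return pfct1 * pfct2 * pfct3
```

For example, a night with four calibration sets, a worst dispersion of 0.05 mag, and 0.03 mag of overlap scatter would receive fct1 = 0.9 * 0.75 * 1.0 = 0.675.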
It should be noted that some scans do not have overlapping scans taken on the same night, meaning that there are no stars in common with which to judge the stability of the photometry on a scan-to-scan basis. Here other indicators, such as the background plots and jump counters (discussed below), may indicate the presence of clouds. In these cases, fct1 can be further downgraded to 0.0 if the scan, or set of scans, is believed to be non-photometric.
A suite of diagnostic plots, providing other internal checks of the photometricity, was also reviewed for each night's data:
- Computed PSF magnitude uncertainties as a function of PSF magnitude; computed chi-squared (χ²) values as a function of magnitude and cross-scan position on the detector. This subsection shows example diagrams used by reviewers to check the behavior of photometric diagnostic parameters, verifying that the computed uncertainties are reasonable, that the χ² values tend toward 1 for clean sources, and that the χ² values do not dramatically increase for sources near the edges of the array.
- Difference between PSF and aperture photometry as a function of magnitude and detector position; difference between PSF and aperture photometry for bright vs. faint stars. This subsection shows example diagrams which provide an internal check of the photometry for Read_2-Read_1 detections, by comparing their aperture and PSF-fit photometry.
- Difference between Read_1 aperture magnitudes and Read_2-Read_1 PSF-fit magnitudes. Figure 3 shows, at J (top), H (middle), and Ks (bottom), a check for a clean transition at the switchover point from aperture to PSF-fit photometry, comparing the photometry of stars that are PSF-photometered in Read_2-Read_1 but aperture-photometered in Read_1.
Figure 3
These plots were added to final processing as additional checks of the photometry. Review of these plots provided a first characterization of the dataset at large, often suggesting more in-depth analysis of the data once they were loaded into the databases. These plots did not, however, directly affect the photometric scoring of scans, since none of the problems uncovered were severe enough to warrant additional downgrades.
b. Sensitivity/Backgrounds (Airglow)/Meteor Blanking
For each scan a photometric sensitivity parameter (hereafter "PSP"; see VI.2) was computed from a convolution of the seeing shape and background level. It correlates with the probability that a scan will meet the Level 1 specifications for sensitivity. The conversion of PSP value into an actual probability was slightly different for each detector, making this value observatory dependent. The northern camera had its H-band array replaced in mid-survey, so there was date dependence as well. These values were calculated automatically by the QA pipeline and converted into a sensitivity quality factor, fct2, as follows:
Table 1: Conversion of PSP values into fct2

| Actual Probability | North Ks PSP (& H before 990701) | North H PSP (after 990701) | South H PSP | South Ks PSP | fct2 |
|---|---|---|---|---|---|
| >75% | <= 10.85 | <= 9.0 | <= 9.6 | <= 10.6 | 1.0 |
| 50-75% | <= 11.11 | <= 9.3 | <= 9.8 | <= 10.9 | 0.8 |
| 25-50% | <= 11.35 | <= 9.5 | <= 10.3 | <= 11.7 | 0.5 |
| 0-25% | <= 11.85 | <= 9.7 | <= 11.7 | <= 11.7 | 0.3 |
| 0% | > 11.85 | > 9.7* | > 11.7 | > 11.7 | 0.1 |
It should be noted that under photometric conditions, the sensitivity at J-band always met Level-1 specifications and so was not a factor in the computation. Only the H- and Ks-band PSP values affected the probability. Figure 4 shows the PSP values versus scan number, which provided the QA reviewer a visual summary of the automatically-generated fct2 values.
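As an illustration, the table can be read as a set of per-detector step functions mapping a scan's PSP value to fct2. The Python sketch below transcribes Table 1 verbatim (note that, as published, the South Ks column lists 11.7 as the threshold for both the 0.5 and 0.3 rows, so the 0.3 entry is unreachable in this reading); the column names, and the assumption that each column acts as a simple threshold ladder, are illustrative.

```python
# Threshold ladders transcribed from Table 1: (upper PSP limit, fct2).
# North H-band scans before 990701 use the North Ks column.
FCT2_LADDERS = {
    "north_ks": [(10.85, 1.0), (11.11, 0.8), (11.35, 0.5), (11.85, 0.3)],
    "north_h_post990701": [(9.0, 1.0), (9.3, 0.8), (9.5, 0.5), (9.7, 0.3)],
    "south_h": [(9.6, 1.0), (9.8, 0.8), (10.3, 0.5), (11.7, 0.3)],
    "south_ks": [(10.6, 1.0), (10.9, 0.8), (11.7, 0.5), (11.7, 0.3)],
}

def fct2_from_psp(psp, column):
    """Convert a scan's PSP value to the sensitivity factor fct2."""
    for upper_limit, factor in FCT2_LADDERS[column]:
        if psp <= upper_limit:
            return factor
    return 0.1  # beyond the last threshold: ~0% chance of meeting spec
```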
The QA reviewer also examined plots (Figure 5) of the frame background level per band. These plots were instrumental in showing the onset of clouds, but also alerted the reviewer to other problems, such as extreme airglow variations or transient sources entering the field of view.
Figure 4 | Figure 5
A diagnostic known as Cnoise(4) was used to automatically flag scans with such dramatic airglow variations that residual structure remained in the image data. This Cnoise(4) statistic is the difference between the measured Atlas Image background noise (after modelling large-scale gradients and structure) and the theoretical noise expected from the overall background level. Of the three 2MASS bandpasses, H-band shows by far the largest effect from OH airglow variations, so the H-band Cnoise(4) value was used as the sole diagnostic for the airglow quality parameter, fct5. For values of H-band Cnoise(4) < 4.5, the airglow quality factor remained at fct5=1.0; for values of H-band Cnoise(4) > 4.5, the airglow quality factor was downgraded to fct5=0.1. This downgrade was overridden in cases where the logarithm of the scan's maximum source density (determined in subregions along the scan length) was greater than 4.2, or when a visual inspection of the image data by the QA reviewer showed no obvious problems caused by the airglow. QA reviewers were also asked to examine image data for scans with values of 2.5 < Cnoise(4) < 4.5, to look for any problems not automatically receiving a downgrade.
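A minimal sketch of the fct5 logic just described, with the override conditions represented as simple arguments (the function name and signature are illustrative):

```python
def fct5_from_cnoise(cnoise4_h, log10_max_density, reviewer_ok=False):
    """Airglow quality factor from the H-band Cnoise(4) statistic.

    Scans with Cnoise(4) < 4.5 keep fct5 = 1.0; larger values are
    downgraded to 0.1 unless overridden by a high source density
    (log10 density > 4.2) or by a reviewer's visual inspection
    finding no obvious airglow residue.
    """
    if cnoise4_h < 4.5:
        return 1.0
    if log10_max_density > 4.2 or reviewer_ok:
        return 1.0  # automatic downgrade overridden
    return 0.1
```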
There were also concomitant diagnostics, known as "jump counters," that counted the number of frames in a scan where the frame background exceeded the average background of its adjacent frames by >0.5 times the root-sum-squared pixel noise. For scans with three or more H- or Ks-band jumps (out of 247 total frames), the QA pipeline automatically alerted the reviewer to examine the image data for problems. These counters were excellent diagnostics of extreme airglow variations, clouds, and electronic anomalies.
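The jump counter can likewise be sketched in a few lines; the treatment of the first and last frames of a scan is an assumption of this illustration:

```python
def count_background_jumps(frame_backgrounds, rss_pixel_noise):
    """Count frames whose background exceeds the average of the two
    adjacent frames by more than 0.5 times the root-sum-squared
    pixel noise.

    `frame_backgrounds` holds the per-frame background levels along
    a scan (247 frames in a full 2MASS scan); the end frames, which
    lack two neighbors, are skipped here.
    """
    jumps = 0
    for i in range(1, len(frame_backgrounds) - 1):
        neighbor_avg = 0.5 * (frame_backgrounds[i - 1] +
                              frame_backgrounds[i + 1])
        if frame_backgrounds[i] - neighbor_avg > 0.5 * rss_pixel_noise:
            jumps += 1
    return jumps

# Three or more H- or Ks-band jumps triggered an automatic alert
# for the QA reviewer to examine the image data.
```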
Finally, the automated QA pipeline produced images of each frame from which a transient source was removed. This transient source removal was aimed primarily at removing meteor streaks and satellite trails from the data frames, although other one-time sources, such as scattered light from bright stars near the array edge, were also eliminated. The images in this subsection showed every frame for which a transient source was detected and removed and the extent to which the image was blanked. QA reviewers examined all these images to monitor whether the blanked sources were indeed transient and whether their removal had a beneficial result. As a result of these (and other) visual inspections of the data, a (fortunately small) list of remaining anomalies (see II.4b) was amassed.
c. Seeing
For a scan to receive a high quality score, the seeing had to be within tolerance, point-source images had to be round, and the final pipeline processing had to be able to track the seeing on timescales shorter than or comparable to the variation timescale of the seeing itself. To this end, several quality diagnostics were developed (a sketch of the fct3 and fct4 logic follows the figures below).
- Overall seeing: The wavelength dependence of atmospheric seeing dictates that of the three 2MASS passbands, seeing will be worst at J. A seeing shape quality factor, fct3, was used to flag scans in which the maximum J-band seeing shape exceeded 1.30 (or FWHM > 3.61″, as obtained from the derived relation FWHM(arcseconds) = 3.13*shape - 0.46), or in which the average J-band seeing shape exceeded 1.25 (FWHM > 3.45″). In either case, a downgrade to fct3=0.1 resulted. Although this downgrade was computed automatically by the QA software, a diagnostic plot (see Figure 6, bottom panel) was also produced for inspection by the QA reviewer.
- PSF elongation: The roundness of the images was also measured for each scan via a ratio of second image moments. Round images have a ratio of 1.0; when this ratio dropped below 0.81, the scan was downgraded to a maximum total quality grade of 1 (see below). Again, this downgrade was computed automatically by the QA software, but a reviewable plot (see Figure 6, top panel) was also produced.
- Tracking of seeing variations: For star/galaxy separation, it was crucial for GALWORKS to be able to track the seeing, so that it could reliably distinguish between extended and point sources. In some cases, GALWORKS was not able to track the seeing quickly enough if the timescale for seeing variations was very rapid. Diagnostic parameters and plots (see Figure 7) were developed to check how well the tracking worked. (For more on the meaning of these plots, please see section 3.5.3 of Jarrett et al. 2000, AJ, 119, 2498.) For scans in which the seeing was untracked in any band for more than 900″ along declination, the untracked-seeing quality factor, fct4, was set to fct4=0.1. Occasionally a downgrade was manually overridden if it was discovered to be caused by benign factors; for example, galaxy clusters in sparse fields sometimes falsely triggered the untracked-seeing diagnostic.
Figure 6 | Figure 7
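Here is a hedged sketch of the seeing-related factors described above; the function names and the boolean override are illustrative:

```python
def shape_to_fwhm(shape):
    """Convert the pipeline seeing-shape parameter to FWHM in
    arcseconds via the derived relation FWHM = 3.13*shape - 0.46."""
    return 3.13 * shape - 0.46

def fct3_from_seeing(max_j_shape, avg_j_shape):
    """Seeing-shape factor: downgraded to 0.1 if the maximum J-band
    shape exceeds 1.30 (FWHM > 3.61") or the average J-band shape
    exceeds 1.25 (FWHM > 3.45")."""
    if max_j_shape > 1.30 or avg_j_shape > 1.25:
        return 0.1
    return 1.0

def fct4_from_tracking(untracked_arcsec, benign_override=False):
    """Untracked-seeing factor: downgraded to 0.1 if the seeing was
    untracked in any band for more than 900" along declination,
    unless manually overridden for a benign cause (e.g., a galaxy
    cluster in a sparse field)."""
    if untracked_arcsec > 900.0 and not benign_override:
        return 0.1
    return 1.0
```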
d. Astrometry
Each QA review also included a check of several plots related to the astrometric quality of the data:
- Astrometric wander. These plots (see Figure 8) show comparisons of the astrometric solution for scans which overlap on the sky. Large deviations from zero indicate problems with one or both of the solutions.
- Global astrometry. These plots (see Figure 9) show the differences between 2MASS and Tycho positions of stars in the Tycho catalogs, along with the differences in 2MASS astrometry for high-SNR sources found in scan overlaps.
- Check of Read_1 vs. Read_2-Read_1 astrometry. These plots (see Figure 10) monitor any positional changes arising from the different Read_1 and Read_2-Read_1 paths through the pipeline processing.
- Distortion monitoring. These plots (see Figure 11) monitor the quality of astrometric distortion removal by providing both external (2MASS vs. USNO-A) and internal (PosFrm vs. ProPhot) checks of source positions.
Occasionally, small astrometric anomalies were uncovered in some scans -- all of which were investigated in more detail outside the normal QA process -- but none of these problems was severe enough to warrant a downgrade to the final quality score.
Figure 8 | Figure 9 | Figure 10 | Figure 11
e. Science Diagnostics
Another suite of plots served as astrophysical checks of each night's data. All plots were reviewed each night to ensure that there were no patterns/anomalies suggesting a non-astrophysical imprint on the data:
- Point sources
- Color-color and color-magnitude (Hess) diagrams for all data on the night divided into high, intermediate, and low galactic latitude zones, with fiducial tracks for dwarf and giant stars indicated. These diagrams (see this subsection) serve as checks that the resultant colors and magnitudes are consistent with what is expected for real, astrophysical sources.
- Colors as a function of x and y position on detector. These plots (see this subsection) serve as a check that there is no dependence on detector position for the resultant colors.
- Color-color and color-magnitude diagrams for stars saturated in Read_1. These diagrams (see this subsection) monitor the health of the photometry for very bright (Read_1-saturated) stars, by comparing their magnitudes and colors to those of unsaturated stars.
- Extended sources
- Colors and source counts as a function of time during the night; color-color and color-magnitude diagrams. These diagrams (see this subsection) serve as checks that the number, magnitudes, and colors of extended sources behave as expected.
No problems were uncovered that resulted in the direct downgrading of scans.
f. Miscellaneous Diagnostics
To check that the flagging of minor planets was working correctly, the QA subsystem checked that any low-numbered asteroids (i.e., one of the first 500 asteroids discovered, as numbered by the IAU) were correctly correlated to a 2MASS source. Because all of these low-numbered asteroids are bright, they should correlate with a 2MASS source when a 2MASS scan covers their predicted positions. QA reviewers were asked to study the output of the correlations or non-correlations (usually there were only a few, if any, such asteroids on any given night), and no problems were seen. The only non-correlations occurred when the predicted asteroid positions were very close to a scan edge.
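In outline, this check amounts to a positional cross-match: every low-numbered asteroid whose predicted position falls within a scan should pair with an extracted source. The sketch below is purely illustrative; the 5″ match radius and the flat-sky small-angle separation are assumptions, not the actual 2MASS matching parameters.

```python
import math

def asteroid_matches_source(pred_ra, pred_dec, sources, radius_arcsec=5.0):
    """Return True if any extracted source lies within `radius_arcsec`
    of the asteroid's predicted position (coordinates in degrees;
    small-angle, flat-sky approximation)."""
    for ra, dec in sources:
        d_ra = (ra - pred_ra) * math.cos(math.radians(pred_dec))
        d_dec = dec - pred_dec
        separation = math.hypot(d_ra, d_dec) * 3600.0  # deg -> arcsec
        if separation <= radius_arcsec:
            return True
    return False
```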
The final QA check was a monitor of the differences between the final processing and the preliminary processing for the incremental releases. Any differences in scoring were noted, and the reasons for the changes were documented and understood, before the final grades were approved:
- Differences in the nightly photometric solution, in scan-by-scan sensitivity values, and in number of sources detected. These plots (see Figure 12) monitor changes between the preliminary and final processing, to ensure that any differences were of the expected variety.
- Differences in scan-by-scan grading and a check to make sure all intended scans were processed.
Figure 12
g. Final Quality Scoring
The results of the above diagnostic checks were noted in a final summary form (see this page) by the QA reviewer for each night and encapsulated into a final grade for each science scan. Each scan is scored using a base quality number of 10 multiplied by the minimum of the individual quality factors detailed above; that is, grade = 10 * min(fct1, fct2, fct3, fct4, fct5), where fct1 is allowed to range from 0.0 to 1.0, and all others from 0.1 to 1.0 only. The grade will therefore always be at least 1, unless the photometric quality factor fct1=0.
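A compact sketch of the final scoring, including the PSF-elongation cap from section IV.10.c; rounding to an integral grade, and applying the cap after the multiplication, are assumptions of this illustration:

```python
def final_grade(fct1, fct2, fct3, fct4, fct5, elongation_ok=True):
    """Final quality score: 10 times the minimum quality factor.

    fct1 ranges from 0.0 to 1.0; the other factors from 0.1 to 1.0,
    so the grade is at least 1 unless the night is non-photometric
    (fct1 = 0). A second-moment ratio below 0.81 caps the grade at 1.
    """
    grade = 10.0 * min(fct1, fct2, fct3, fct4, fct5)
    if not elongation_ok:
        grade = min(grade, 1.0)
    return int(round(grade))

# Example: fct1 = 0.675 (from the photometricity sketch), all other
# factors perfect, round images -> grade = round(6.75) = 7.
```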
The final step in the QA review process was the submittal of this final summary form to the Principal Investigator, Michael Skrutskie, for an independent assessment of the diagnostics. At this point any disagreements with the scoring could be discussed (which was rarely required) before the night's scoring was declared official.
[Last Update: 2003 Mar 13, J.D. Kirkpatrick & R. Hurt]