{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Calibrate dark images\n", "\n", "Dark images, like any other images, need to be calibrated. Depending on the data\n", "you have and the choices you have made in reducing your data, the steps to\n", "reducing your images may include:\n", "\n", "1. Subtracting overscan (only if you decide to subtract overscan from all\n", "images).\n", "2. Trim the image (if it has overscan, whether you are using the overscan or\n", "not).\n", "3. Subtract bias (if you need to scale the calibrated dark frames to a different\n", "exposure time)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "from astropy.nddata import CCDData\n", "from ccdproc import ImageFileCollection\n", "import ccdproc as ccdp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 1: Overscan subtracted, bias not removed" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Take a look at what images you have\n", "\n", "[*Click here to comment on this section on GitHub (opens in new tab).*](https://github.com/mwcraig/ccd-reduction-and-photometry-guide/pull/217/files#diff-fc0aaef3c8ddfc5f7a8566af66d54320R45){:target=\"_blank\"}\n", "\n", "First we gather up some information about the raw images and the reduced images\n", "up to this point. These examples have darks stored in a subdirectory of the\n", "folder with the rest of the images, so we create an `ImageFileCollection` for\n", "each." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "ex1_path_raw = Path('example-cryo-LFC')\n", "\n", "ex1_images_raw = ImageFileCollection(ex1_path_raw)\n", "ex1_darks_raw = ImageFileCollection(ex1_path_raw / 'darks')\n", "\n", "ex1_path_reduced = Path('example1-reduced')\n", "ex1_images_reduced = ImageFileCollection(ex1_path_reduced)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Raw images, everything except the darks" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Table length=14\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
" ], "text/plain": [ "\n", " file imagetyp exptime filter\n", " str14 str9 float64 str2 \n", "-------------- --------- ------- ------\n", "ccd.001.0.fits BIAS 0.0 i'\n", "ccd.002.0.fits BIAS 0.0 i'\n", "ccd.003.0.fits BIAS 0.0 i'\n", "ccd.004.0.fits BIAS 0.0 i'\n", "ccd.005.0.fits BIAS 0.0 i'\n", "ccd.006.0.fits BIAS 0.0 i'\n", "ccd.014.0.fits FLATFIELD 70.001 g'\n", "ccd.015.0.fits FLATFIELD 70.011 g'\n", "ccd.016.0.fits FLATFIELD 70.001 g'\n", "ccd.017.0.fits FLATFIELD 7.0 i'\n", "ccd.018.0.fits FLATFIELD 7.0 i'\n", "ccd.019.0.fits FLATFIELD 7.0 i'\n", "ccd.037.0.fits OBJECT 300.062 g'\n", "ccd.043.0.fits OBJECT 300.014 i'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ex1_images_raw.summary['file', 'imagetyp', 'exptime', 'filter']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Raw dark frames" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Table length=10\n", "
\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
fileimagetypexptimefilter
str14str4float64str2
ccd.002.0.fitsBIAS0.0r'
ccd.013.0.fitsDARK300.0r'
ccd.014.0.fitsDARK300.0r'
ccd.015.0.fitsDARK300.0r'
ccd.017.0.fitsDARK70.0r'
ccd.018.0.fitsDARK70.0r'
ccd.019.0.fitsDARK70.0r'
ccd.023.0.fitsDARK7.0r'
ccd.024.0.fitsDARK7.0r'
ccd.025.0.fitsDARK7.0r'
" ], "text/plain": [ "\n", " file imagetyp exptime filter\n", " str14 str4 float64 str2 \n", "-------------- -------- ------- ------\n", "ccd.002.0.fits BIAS 0.0 r'\n", "ccd.013.0.fits DARK 300.0 r'\n", "ccd.014.0.fits DARK 300.0 r'\n", "ccd.015.0.fits DARK 300.0 r'\n", "ccd.017.0.fits DARK 70.0 r'\n", "ccd.018.0.fits DARK 70.0 r'\n", "ccd.019.0.fits DARK 70.0 r'\n", "ccd.023.0.fits DARK 7.0 r'\n", "ccd.024.0.fits DARK 7.0 r'\n", "ccd.025.0.fits DARK 7.0 r'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ex1_darks_raw.summary['file', 'imagetyp', 'exptime', 'filter']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Decide which calibration steps to take\n", "\n", "[*Click here to comment on this section on GitHub (opens in new tab).*](https://github.com/mwcraig/ccd-reduction-and-photometry-guide/pull/217/files#diff-fc0aaef3c8ddfc5f7a8566af66d54320R104){:target=\"_blank\"}\n", "\n", "This example is, again, one of the chips of the LFC camera at Palomar. In\n", "earlier notebooks we have seen that the chip has a [useful overscan region](01.08-Overscan.ipynb#Case-1:-Cryogenically-cooled-Large-Format-Camera-(LFC)-at-Palomar), has little dark current except for some hot pixels, and sensor glow in\n", "one corner of the chip.\n", "\n", "Looking at the list of non-dark images (i.e., the flat and light images) shows\n", "that for each exposure time in the non-dark images there is a set of dark\n", "exposures that has a matching, or very close to matching, exposure time.\n", "\n", "To be more explicit, there are flats with exposure times of 7.0 sec and 70.011\n", "sec and darks with exposure time of 7.0 and 70.0 sec. The dark and flat exposure\n", "times are close enough that there is no need to scale them. The two images of\n", "an object are each roughly 300 sec, matching the darks with exposure time 300\n", "sec. The very small difference in exposure time, under 0.1 sec, does not need to\n", "be compensated for.\n", "\n", "Given this, we will:\n", "\n", "1. Subtract overscan from each of the darks. The useful overscan region is XXX\n", "(see LINK).\n", "2. Trim the overscan out of the dark images.\n", "\n", "We will *not* subtract bias from these images because we will *not* need to\n", "rescale them to a different exposure time." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Calibrate the individual dark frames" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "for ccd, file_name in ex1_darks_raw.ccds(imagetyp='DARK', # Just get the dark frames\n", " ccd_kwargs={'unit': 'adu'}, # CCDData requires a unit for the image if \n", " # it is not in the header\n", " return_fname=True # Provide the file name too.\n", " ): \n", " # Subtract the overscan\n", " ccd = ccdp.subtract_overscan(ccd, overscan=ccd[:, 2055:], median=True)\n", " \n", " # Trim the overscan\n", " ccd = ccdp.trim_image(ccd[:, :2048])\n", " \n", " # Save the result\n", " ccd.write(ex1_path_reduced / file_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Reduced images (so far)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Table length=16\n", "
\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
fileimagetypexptimefiltercombined
str17str4float64str2object
ccd.001.0.fitsBIAS0.0i'--
ccd.002.0.fitsBIAS0.0i'--
ccd.003.0.fitsBIAS0.0i'--
ccd.004.0.fitsBIAS0.0i'--
ccd.005.0.fitsBIAS0.0i'--
ccd.006.0.fitsBIAS0.0i'--
ccd.013.0.fitsDARK300.0r'--
ccd.014.0.fitsDARK300.0r'--
ccd.015.0.fitsDARK300.0r'--
ccd.017.0.fitsDARK70.0r'--
ccd.018.0.fitsDARK70.0r'--
ccd.019.0.fitsDARK70.0r'--
ccd.023.0.fitsDARK7.0r'--
ccd.024.0.fitsDARK7.0r'--
ccd.025.0.fitsDARK7.0r'--
combined_bias.fitBIAS0.0i'True
" ], "text/plain": [ "\n", " file imagetyp exptime filter combined\n", " str17 str4 float64 str2 object \n", "----------------- -------- ------- ------ --------\n", " ccd.001.0.fits BIAS 0.0 i' --\n", " ccd.002.0.fits BIAS 0.0 i' --\n", " ccd.003.0.fits BIAS 0.0 i' --\n", " ccd.004.0.fits BIAS 0.0 i' --\n", " ccd.005.0.fits BIAS 0.0 i' --\n", " ccd.006.0.fits BIAS 0.0 i' --\n", " ccd.013.0.fits DARK 300.0 r' --\n", " ccd.014.0.fits DARK 300.0 r' --\n", " ccd.015.0.fits DARK 300.0 r' --\n", " ccd.017.0.fits DARK 70.0 r' --\n", " ccd.018.0.fits DARK 70.0 r' --\n", " ccd.019.0.fits DARK 70.0 r' --\n", " ccd.023.0.fits DARK 7.0 r' --\n", " ccd.024.0.fits DARK 7.0 r' --\n", " ccd.025.0.fits DARK 7.0 r' --\n", "combined_bias.fit BIAS 0.0 i' True" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ex1_images_reduced.refresh()\n", "ex1_images_reduced.summary['file', 'imagetyp', 'exptime', 'filter', 'combined']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 2: Overscan not subtracted, bias is removed" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "ex2_path_raw = Path('example-thermo-electric')\n", "\n", "ex2_images_raw = ImageFileCollection(ex2_path_raw)\n", "\n", "ex2_path_reduced = Path('example2-reduced')\n", "ex2_images_reduced = ImageFileCollection(ex2_path_reduced)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We begin by looking at what exposure times we have in this data." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Table length=32\n", "
\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
idxfileimagetypexposure
0AutoFlat-PANoRot-r-Bin1-001.fitFLAT1.0
1AutoFlat-PANoRot-r-Bin1-002.fitFLAT1.0
2AutoFlat-PANoRot-r-Bin1-003.fitFLAT1.0
3AutoFlat-PANoRot-r-Bin1-004.fitFLAT1.0
4AutoFlat-PANoRot-r-Bin1-005.fitFLAT1.0
5AutoFlat-PANoRot-r-Bin1-006.fitFLAT1.02
6AutoFlat-PANoRot-r-Bin1-007.fitFLAT1.06
7AutoFlat-PANoRot-r-Bin1-008.fitFLAT1.11
8AutoFlat-PANoRot-r-Bin1-009.fitFLAT1.16
9AutoFlat-PANoRot-r-Bin1-010.fitFLAT1.21
10Bias-S001-R001-C001-NoFilt.fitBIAS0.0
11Bias-S001-R001-C002-NoFilt.fitBIAS0.0
12Bias-S001-R001-C003-NoFilt.fitBIAS0.0
13Bias-S001-R001-C004-NoFilt.fitBIAS0.0
14Bias-S001-R001-C005-NoFilt.fitBIAS0.0
15Bias-S001-R001-C006-NoFilt.fitBIAS0.0
16Bias-S001-R001-C007-NoFilt.fitBIAS0.0
17Bias-S001-R001-C008-NoFilt.fitBIAS0.0
18Bias-S001-R001-C009-NoFilt.fitBIAS0.0
19Bias-S001-R001-C020-NoFilt.fitBIAS0.0
20Dark-S001-R001-C001-NoFilt.fitDARK90.0
21Dark-S001-R001-C002-NoFilt.fitDARK90.0
22Dark-S001-R001-C003-NoFilt.fitDARK90.0
23Dark-S001-R001-C004-NoFilt.fitDARK90.0
24Dark-S001-R001-C005-NoFilt.fitDARK90.0
25Dark-S001-R001-C006-NoFilt.fitDARK90.0
26Dark-S001-R001-C007-NoFilt.fitDARK90.0
27Dark-S001-R001-C008-NoFilt.fitDARK90.0
28Dark-S001-R001-C009-NoFilt.fitDARK90.0
29Dark-S001-R001-C020-NoFilt.fitDARK90.0
30kelt-16-b-S001-R001-C084-r.fitLIGHT90.0
31kelt-16-b-S001-R001-C125-r.fitLIGHT90.0
\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ex2_images_raw.summary['file', 'imagetyp', 'exposure'].show_in_notebook()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Decide what steps to take next\n", "\n", "[*Click here to comment on this section on GitHub (opens in new tab).*](https://github.com/mwcraig/ccd-reduction-and-photometry-guide/pull/217/files#diff-fc0aaef3c8ddfc5f7a8566af66d54320R218){:target=\"_blank\"}\n", "\n", "In this case the only dark frames have exposure time 90 sec. Though that matches\n", "the exposure time of the science images, the flat field images are much shorter\n", "exposure time, ranging from 1 sec to 1.21 sec. This type of range of exposure is\n", "typical when twilight flats are taken. Since these are a much different\n", "exposure time than the darks, the dark frames will need to be scaled.\n", "\n", "Recall that for this camera the overscan is not useful and should be\n", "trimmed off.\n", "\n", "Given this, we will:\n", "\n", "1. Trim the overscan from each of the dark frames.\n", "2. Subtract calibration bias from the dark frames so that we can scale the darks\n", "to a different exposure time." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Calibration the individual dark frames" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we read the combined bias image created in the previous notebook. Though\n", "we could do this based on the file name, using a systematic set of header\n", "keywords to keep track of which images have been combined is less likely to lead\n", "to errors." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "combined_bias = CCDData.read(ex2_images_reduced.files_filtered(imagetyp='bias', \n", " combined=True, \n", " include_path=True)[0])" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "for ccd, file_name in ex2_images_raw.ccds(imagetyp='DARK', # Just get the bias frames\n", " return_fname=True # Provide the file name too.\n", " ):\n", " \n", " # Trim the overscan\n", " ccd = ccdp.trim_image(ccd[:, :4096])\n", " \n", " # Subtract bias\n", " ccd = ccdp.subtract_bias(ccd, combined_bias)\n", " # Save the result\n", " ccd.write(ex2_path_reduced / file_name)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 4 }