PSF Photometry#

Use case: PSF photometry, creating a PSF, deriving a Color-Magnitude Diagram.
Data: NIRCam simulated images of the Large Magellanic Cloud (LMC) Astrometric Calibration Field obtained using MIRAGE and run through the JWST pipeline. The simulations are obtained using a 4-point subpixel dither for three pairs of wide filters: F070W, F115W, and F200W for the SW channel, and F277W, F356W, and F444W for the LW channel. We simulated only 1 NIRCam SW detector (i.e., “NRCB1”). For this example, we use Level-2 images (.cal, calibrated but not rectified) for two SW filters (F115W and F200W) and derive the photometry in each of them. The images for the other filters are also available and can be used to test the notebook and/or different filter combinations.
Tools: photutils.
Cross-instrument: NIRSpec, NIRISS, MIRI.
Documentation: This notebook is part of STScI’s larger post-pipeline Data Analysis Tools Ecosystem.

PSF Photometry can be obtained using:

  • a single PSF model obtained from WebbPSF

  • a grid of PSF models from WebbPSF

  • a single effective PSF (ePSF)

Work in Progress:#

  • create a grid of ePSFs and perform the reduction using the ePSF grid

  • use the ePSF grid to perturb the WebbPSF model

The notebook shows:

  • how to obtain the PSF model from WebbPSF (or build an ePSF)

  • how to perform PSF photometry on the image

  • how to cross-match the catalogs of the different images

  • how to derive and apply the photometric zeropoints

Final plots show:

  • Instrumental Color-Magnitude Diagrams for the 4 images

  • Instrumental Color-Magnitude Diagrams and errors

  • Magnitude Zeropoints

  • Calibrated Color-Magnitude Diagram (compared with Input Color-Magnitude Diagram)

  • Comparison between input and output photometry

Note on pysynphot: Data files for pysynphot are distributed separately by the Calibration Reference Data System (CRDS). They are expected to follow a certain directory structure under the root directory, identified by the PYSYN_CDBS environment variable that must be set prior to using this package. In the example below, the root directory is arbitrarily named /my/local/dir/trds/.
export PYSYN_CDBS=/my/local/dir/trds/
See documentation here for the configuration and download of the data files.
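If you prefer to set the variable from Python rather than in the shell, a minimal equivalent (using the same arbitrary root directory) is to set it through os.environ before importing any package that reads PYSYN_CDBS; this is what the Download WebbPSF and Synphot Data cell below does, pointing to the locally downloaded files.

import os
os.environ['PYSYN_CDBS'] = '/my/local/dir/trds/'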

Imports#

import glob
import os
import tarfile
import time
import warnings
from urllib import request

import numpy as np
import pandas as pd
import webbpsf
from astropy import units as u
from astropy.coordinates import SkyCoord, match_coordinates_sky
from astropy.io import fits
from astropy.modeling.fitting import LevMarLSQFitter
from astropy.nddata import NDData
from astropy.stats import sigma_clipped_stats
from astropy.table import QTable, Table
from astropy.utils.exceptions import AstropyUserWarning
from astropy.visualization import simple_norm
from jwst.datamodels import ImageModel
from photutils.aperture import (CircularAnnulus, CircularAperture,
                                aperture_photometry)
from photutils.background import MADStdBackgroundRMS, MMMBackground
from photutils.detection import DAOStarFinder
from photutils.psf import (EPSFBuilder, GriddedPSFModel, IterativePSFPhotometry,
                           SourceGrouper, extract_stars)

Import Plotting Functions#

%matplotlib inline
from matplotlib import pyplot as plt
import matplotlib.ticker as ticker

plt.rcParams['image.cmap'] = 'viridis'
plt.rcParams['image.origin'] = 'lower'
plt.rcParams['axes.titlesize'] = plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = plt.rcParams['ytick.labelsize'] = 14

font1 = {'family': 'helvetica', 'color': 'black', 'weight': 'normal', 'size': '12'}
font2 = {'family': 'helvetica', 'color': 'black', 'weight': 'normal', 'size': '20'}

Download WebbPSF and Synphot Data#

# Set environmental variables
os.environ["WEBBPSF_PATH"] = "./webbpsf-data/webbpsf-data"
os.environ["PYSYN_CDBS"] = "./grp/redcat/trds/"

# WEBBPSF Data
boxlink = 'https://stsci.box.com/shared/static/qxpiaxsjwo15ml6m4pkhtk36c9jgj70k.gz'
boxfile = './webbpsf-data/webbpsf-data-LATEST.tar.gz'
synphot_url = 'http://ssb.stsci.edu/trds/tarfiles/synphot5.tar.gz'
synphot_file = './synphot5.tar.gz'

webbpsf_folder = './webbpsf-data'
synphot_folder = './grp'

# Gather webbpsf files
if not os.path.exists(webbpsf_folder):
    os.makedirs(webbpsf_folder)
    request.urlretrieve(boxlink, boxfile)
    gzf = tarfile.open(boxfile)
    gzf.extractall(webbpsf_folder, filter='data')

# Gather synphot files
if not os.path.exists(synphot_folder):
    os.makedirs(synphot_folder)
    request.urlretrieve(synphot_url, synphot_file)
    gzf = tarfile.open(synphot_file)
    gzf.extractall('./', filter='data')

Load the images and create some useful dictionaries#

We load all the images and we create a dictionary that contains all of them, divided by detectors and filters. This is useful to check which detectors and filters are available and to decide if we want to perform the photometry on all of them or only on a subset (for example, only on the SW filters).

We also create a dictionary with some useful parameters for the analysis. The dictionary contains the photometric zeropoints (from MIRAGE configuration files) and the NIRCam point spread function (PSF) FWHM, from the NIRCam Point Spread Function JDox page. The FWHM values are calculated from the analysis of the expected NIRCam PSFs simulated with WebbPSF.

Note: this dictionary will be updated once the zeropoint and FWHM values are available for each detector after commissioning.

Hence, we have two dictionaries:

  • dictionary for the single Level-2 calibrated images

  • dictionary with some other useful parameters

dict_images = {'NRCA1': {}, 'NRCA2': {}, 'NRCA3': {}, 'NRCA4': {}, 'NRCA5': {},
               'NRCB1': {}, 'NRCB2': {}, 'NRCB3': {}, 'NRCB4': {}, 'NRCB5': {}}

dict_filter_short = {}
dict_filter_long = {}

ff_short = []
det_short = []
det_long = []
ff_long = []
detlist_short = []
detlist_long = []
filtlist_short = []
filtlist_long = []

if not glob.glob('./*cal*fits'):
    print("Downloading images")

    boxlink_images_lev2 = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/images_level2.tar.gz'
    boxfile_images_lev2 = './images_level2.tar.gz'
    request.urlretrieve(boxlink_images_lev2, boxfile_images_lev2)

    tar = tarfile.open(boxfile_images_lev2, 'r')
    tar.extractall(filter='data')

    images_dir = './'
    images = sorted(glob.glob(os.path.join(images_dir, "*cal.fits")))

else:
    images_dir = './'
    images = sorted(glob.glob(os.path.join(images_dir, "*cal.fits")))

for image in images:
    im = fits.open(image)
    f = im[0].header['FILTER']
    d = im[0].header['DETECTOR']

    if d == 'NRCBLONG':
        d = 'NRCB5'
    elif d == 'NRCALONG':
        d = 'NRCA5'

    wv = float(f[1:3])

    if wv > 24:         
        ff_long.append(f)
        det_long.append(d)

    else:
        ff_short.append(f)
        det_short.append(d)   

    detlist_short = sorted(list(dict.fromkeys(det_short)))
    detlist_long = sorted(list(dict.fromkeys(det_long)))

    unique_list_filters_short = []
    unique_list_filters_long = []

    for x in ff_short:
        if x not in unique_list_filters_short:
            dict_filter_short.setdefault(x, {})

    for x in ff_long:
        if x not in unique_list_filters_long:
            dict_filter_long.setdefault(x, {})   

    for d_s in detlist_short:
        dict_images[d_s] = dict_filter_short

    for d_l in detlist_long:
        dict_images[d_l] = dict_filter_long

    filtlist_short = sorted(list(dict.fromkeys(dict_filter_short)))
    filtlist_long = sorted(list(dict.fromkeys(dict_filter_long)))

    if len(dict_images[d][f]) == 0:
        dict_images[d][f] = {'images': [image]}
    else:
        dict_images[d][f]['images'].append(image)

print("Available Detectors for SW channel:", detlist_short)
print("Available Detectors for LW channel:", detlist_long)
print("Available SW Filters:", filtlist_short)
print("Available LW Filters:", filtlist_long)
filters = ['F070W', 'F090W', 'F115W', 'F140M', 'F150W2', 'F150W', 'F162M', 'F164N', 'F182M',
           'F187N', 'F200W', 'F210M', 'F212N', 'F250M', 'F277W', 'F300M', 'F322W2', 'F323N',
           'F335M', 'F356W', 'F360M', 'F405N', 'F410M', 'F430M', 'F444W', 'F460M', 'F466N', 'F470N', 'F480M']

psf_fwhm = [0.987, 1.103, 1.298, 1.553, 1.628, 1.770, 1.801, 1.494, 1.990, 2.060, 2.141, 2.304, 2.341, 1.340,
            1.444, 1.585, 1.547, 1.711, 1.760, 1.830, 1.901, 2.165, 2.179, 2.300, 2.302, 2.459, 2.507, 2.535, 2.574]

zp_modA = [25.7977, 25.9686, 25.8419, 24.8878, 27.0048, 25.6536, 24.6957, 22.3073, 24.8258, 22.1775, 25.3677, 24.3296,
           22.1036, 22.7850, 23.5964, 24.8239, 23.6452, 25.3648, 20.8604, 23.5873, 24.3778, 23.4778, 20.5588,
           23.2749, 22.3584, 23.9731, 21.9502, 20.0428, 19.8869, 21.9002]

zp_modB = [25.7568, 25.9771, 25.8041, 24.8738, 26.9821, 25.6279, 24.6767, 22.2903, 24.8042, 22.1499, 25.3391, 24.2909,
           22.0574, 22.7596, 23.5011, 24.6792, 23.5769, 25.3455, 20.8631, 23.4885, 24.3883, 23.4555, 20.7007,
           23.2763, 22.4677, 24.1562, 22.0422, 20.1430, 20.0173, 22.4086]

dict_utils = {filters[i]: {'psf fwhm': psf_fwhm[i], 'VegaMAG zp modA': zp_modA[i],
                           'VegaMAG zp modB': zp_modB[i]} for i in range(len(filters))}

Select the detectors and/or filters for the analysis#

If we are interested only in a subset of filters (and/or detectors), as in this example, we can select the corresponding Level-2 calibrated images from the dictionary and analyze only those images.

In this particular example, we analyze images for filters F115W and F200W for the detector NRCB1.

dets_short = ['NRCB1']  # detector of interest in this example
filts_short = ['F115W', 'F200W']  # filters of interest in this example

Display the images#

Check that the images do not show artifacts and can be used in the analysis.

fig, ax = plt.subplots(ncols=2, figsize=(14, 14))

for det in dets_short:
    for i, filt in enumerate(filts_short):
        image = fits.open(dict_images[det][filt]['images'][0])
        data_sb = image[1].data
        norm = simple_norm(data_sb, 'sqrt', percent=99.)
        ax[i].imshow(data_sb, norm=norm, cmap='Greys')
        ax[i].set_xlabel("X [px]", fontdict=font2)
        ax[i].set_ylabel("Y [px]", fontdict=font2)
        ax[i].set_title(filt, fontdict=font2)

plt.tight_layout()

Create the PSF models#

I. Create the PSF model using WebbPSF#

We create a dictionary that will contain the PSFs created using WebbPSF for the detectors and filters selected above.

dict_psfs_webbpsf = {}

for det in dets_short:
    dict_psfs_webbpsf.setdefault(det, {})
    for j, filt in enumerate(filts_short):
        dict_psfs_webbpsf[det].setdefault(filt, {})

        dict_psfs_webbpsf[det][filt]['psf model grid'] = None
        dict_psfs_webbpsf[det][filt]['psf model single'] = None

The function below creates a single PSF or a grid of PSFs and allows one to save the PSF as a fits file. The model PSFs are stored by default in the psf dictionary. For the grid of PSFs, users can select the number of PSFs to be created. The PSF can be created detector sampled or oversampled (the oversampling factor can be changed inside the function).

Note: The default source spectrum is, if pysynphot is installed, a G2V star spectrum from Castelli & Kurucz (2004). Without pysynphot, the default is a simple flat spectrum such that the same number of photons are detected at each wavelength.

def create_psf_model(det='NRCB1', fov=11, create_grid=False, num=9, save_psf=False, detsampled=False):
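    # Note: `filt` is not a function argument; it is taken from the loop variable
    # of the cell that calls this function below.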
    nrc = webbpsf.NIRCam()
    nrc.detector = det 
    nrc.filter = filt

    src = webbpsf.specFromSpectralType('G5V', catalog='phoenix')
    if detsampled:
        print("Creating a detector sampled PSF")
        fov = 21
    else:
        print("Creating an oversampled PSF")

    print(f"Using a {fov} px fov")

    if create_grid:
        print("")
        print(f"Creating a grid of PSF for filter {filt} and detector {det}")
        print("")

        outname = f'nircam_{det}_{filt}_fovp{fov}_samp4_npsf{num}.fits'
        if os.path.exists(outname):
            grid_psf = GriddedPSFModel.read(outname)
        else:
            grid_psf = nrc.psf_grid(num_psfs=num, oversample=4, source=src, all_detectors=False,
                                    fov_pixels=fov, use_detsampled_psf=detsampled,
                                    save=save_psf)

        dict_psfs_webbpsf[det][filt]['psf model grid'] = grid_psf
        
    else:
        print("")
        print(f"Creating a single PSF for filter {filt} and detector {det}")
        print("")

        outname = f'nircam_{det}_{filt}_fovp{fov}_samp4_npsf{num}.fits'
        if os.path.exists(outname):
            single_psf = GriddedPSFModel.read(outname)
        else:
            single_psf = nrc.psf_grid(num_psfs=1, oversample=4, source=src, all_detectors=False,
                                      fov_pixels=fov, use_detsampled_psf=detsampled,
                                      save=save_psf)

        dict_psfs_webbpsf[det][filt]['psf model single'] = single_psf

    return dict_psfs_webbpsf    

Single PSF model#

for det in dets_short:
    for filt in filts_short:
        create_psf_model(fov=11, num=1, create_grid=False, save_psf=True, detsampled=False)

Display the single PSF models#

fig, ax = plt.subplots(ncols=2, figsize=(14, 14))

for det in dets_short:
    for i, filt in enumerate(filts_short):
        img = dict_psfs_webbpsf[det][filt]['psf model single'].data[0]
        norm_epsf = simple_norm(img, 'log', percent=99.)
        ax[i].imshow(img, norm=norm_epsf)
        ax[i].set_xlabel('X [px]', fontdict=font2)
        ax[i].set_ylabel('Y [px]', fontdict=font2)
        ax[i].set_title(filt, fontdict=font2)

plt.tight_layout()

PSF grid#

for det in dets_short:
    for filt in filts_short:
        create_psf_model(fov=11, num=25, create_grid=True, save_psf=True, detsampled=False)

Display the PSFs grid#

We show, for one filter (F115W), the grid of PSFs and the difference of each PSF from the mean.

griddedpsfmodel = dict_psfs_webbpsf[dets_short[0]][filts_short[0]]['psf model grid']
fig = griddedpsfmodel.plot_grid(figsize=(10, 10))
fig = griddedpsfmodel.plot_grid(figsize=(10, 10), deltas=True, cmap='viridis', vmax_scale=0.3)

II. Create the PSF model building an Effective PSF (ePSF)#

More information on the photutils Effective PSF can be found here.

  • Select the stars from the images we want to use for building the PSF. We use the DAOStarFinder function to find bright stars in the images (setting a high detection threshold). DAOStarFinder detects stars in an image using the DAOFIND (Stetson 1987) algorithm. DAOFIND searches images for local density maxima that have a peak amplitude greater than threshold (approximately; threshold is applied to a convolved image) and have a size and shape similar to the defined 2D Gaussian kernel.
    Note: The threshold and the maximum distance to the closest neighbour depend on the user’s science case (e.g., number of stars in the field of view, crowding, number of bright sources, minimum number of stars required to build the ePSF, etc.) and must be modified accordingly.

  • Build the effective PSF (excluding objects whose bounding box exceeds the detector edges) using the EPSFBuilder function.

We create a dictionary that contains the effective PSF for the detectors and filters selected above.

dict_psfs_epsf = {}

for det in dets_short:
    dict_psfs_epsf.setdefault(det, {})
    for j, filt in enumerate(filts_short):
        dict_psfs_epsf[det].setdefault(filt, {})

        dict_psfs_epsf[det][filt]['table psf stars'] = {}
        dict_psfs_epsf[det][filt]['epsf single'] = {}
        dict_psfs_epsf[det][filt]['epsf grid'] = {}

        for i in np.arange(0, len(dict_images[det][filt]['images']), 1):
            dict_psfs_epsf[det][filt]['table psf stars'][i + 1] = None
            dict_psfs_epsf[det][filt]['epsf single'][i + 1] = None
            dict_psfs_epsf[det][filt]['epsf grid'][i + 1] = None

Note that the unit of the Level-2 and Level-3 images from the pipeline is MJy/sr (hence a surface brightness). The actual unit of the image can be checked from the header keyword BUNIT. The scalar conversion constant is copied to the header keyword PHOTMJSR, which gives the conversion from DN/s to MJy/steradian. For our analysis we convert back to DN/s.

def find_stars_epsf(img_num, filt_num, det='NRCA1', filt='F070W', dist_sel=False):
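    # Note: the detection thresholds `th` and the minimum separations `min_sep`
    # (lists indexed by `filt_num`) are defined in the cell that calls this function.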

    bkgrms = MADStdBackgroundRMS()
    mmm_bkg = MMMBackground()

    image = fits.open(dict_images[det][filt]['images'][img_num])
    data_sb = image[1].data
    imh = image[1].header

    print(f"Finding PSF stars on image {img_num + 1} of filter {filt}, detector {det}")

    data = data_sb / imh['PHOTMJSR']
    units = imh['BUNIT']
    print(f"Conversion factor from {units} to DN/s for filter {filt}: {imh['PHOTMJSR']}")

    sigma_psf = dict_utils[filt]['psf fwhm']

    print(f"FWHM for the filter {filt}: {sigma_psf} px")

    std = bkgrms(data)
    bkg = mmm_bkg(data)
    daofind = DAOStarFinder(threshold=th[filt_num] * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,
                            sharplo=0.30, sharphi=1.40)

    psf_stars = daofind(data)
    dict_psfs_epsf[det][filt]['table psf stars'][img_num + 1] = psf_stars
    
    if dist_sel:

        print("")
        print("Calculating closest neighbour distance")

        d = []

        daofind_tot = DAOStarFinder(threshold=10 * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,
                                    sharplo=0.30, sharphi=1.40)

        stars_tot = daofind_tot(data)

        x_tot = stars_tot['xcentroid']
        y_tot = stars_tot['ycentroid']

        for xx, yy in zip(psf_stars['xcentroid'], psf_stars['ycentroid']):

            sep = []
            dist = np.sqrt((x_tot - xx)**2 + (y_tot - yy)**2)
            sep = np.sort(dist)[1:2][0]
            d.append(sep)

        psf_stars['min distance'] = d
        mask_dist = (psf_stars['min distance'] > min_sep[filt_num])

        psf_stars = psf_stars[mask_dist]

        dict_psfs_epsf[det][filt]['table psf stars'][img_num + 1] = psf_stars

        print("Minimum distance required:", min_sep[filt_num], "px")
        print("")
        print(f"Number of isolated sources found in the image used to build ePSF for {filt}: {len(psf_stars)}")
        print("-----------------------------------------------------")
        print("")
    else:
        print("")
        print(f"Number of sources used to build ePSF for {filt}: {len(psf_stars)}")
        print("--------------------------------------------")
        print("")

tic = time.perf_counter()

th = [700, 500]  # threshold level for the two filters (length must match number of filters analyzed)
min_sep = [10, 10]  # minimum separation acceptable for ePSF stars from closest neighbour

for det in dets_short:
    for j, filt in enumerate(filts_short):
        for i in np.arange(0, len(dict_images[det][filt]['images']), 1):

            find_stars_epsf(i, j, det=det, filt=filt, dist_sel=False)

toc = time.perf_counter()

print("Elapsed Time for finding stars:", toc - tic)

II. Build Effective PSF#

def build_epsf(det='NRCA1', filt='F070W'):
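    # Note: the image index `i`, the filter index `j`, and the parameters `sizes`
    # and `oversample` are taken from the calling cell below; they are not arguments.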
    
    mmm_bkg = MMMBackground()
    
    image = fits.open(dict_images[det][filt]['images'][i])
    data_sb = image[1].data
    imh = image[1].header

    data = data_sb / imh['PHOTMJSR']

    hsize = (sizes[j] - 1) / 2

    x = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['xcentroid']
    y = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['ycentroid']
    mask = ((x > hsize) & (x < (data.shape[1] - 1 - hsize)) & (y > hsize) & (y < (data.shape[0] - 1 - hsize)))

    stars_tbl = Table()
    stars_tbl['x'] = x[mask]
    stars_tbl['y'] = y[mask]

    bkg = mmm_bkg(data)

    data_bkgsub = data.copy()

    data_bkgsub -= bkg

    nddata = NDData(data=data_bkgsub)
    stars = extract_stars(nddata, stars_tbl, size=sizes[j])

    print(f"Creating ePSF for image {i + 1} of filter {filt}, detector {det}")

    epsf_builder = EPSFBuilder(oversampling=oversample, maxiters=3, progress_bar=False)

    epsf, fitted_stars = epsf_builder(stars)
    dict_psfs_epsf[det][filt]['epsf single'][i + 1] = epsf

Note: here we limit the maximum number of iterations to 3 (to limit its run time), but in practice one should use about 10 or more iterations.

tic = time.perf_counter()

sizes = [11, 11]  # size of the cutout (extract region) for each PSF star - must match number of filters analyzed
oversample = 4

for det in dets_short:
    for j, filt in enumerate(filts_short):
        for i in np.arange(0, len(dict_images[det][filt]['images']), 1):
            with warnings.catch_warnings():
                # ignore warnings about stars close to image edge
                warnings.simplefilter("ignore", category=AstropyUserWarning) 
                build_epsf(det=det, filt=filt)

toc = time.perf_counter()

print("Time to build the Effective PSF:", toc - tic)

Display the ePSFs#

We display only one ePSF for each filter.

fig, ax = plt.subplots(ncols=2, figsize=(14, 14))

for det in dets_short:
    for i, filt in enumerate(filts_short):
        img = dict_psfs_epsf[det][filt]['epsf single'][i + 1].data
        norm_epsf = simple_norm(img, 'log', percent=99.)
        ax[i].imshow(img, norm=norm_epsf)
        ax[i].set_title(filt, fontdict=font2)

Work in Progress - Build a grid of effective PSF#

Two functions:

  • count PSF stars in the grid

  • create a gridded ePSF

The purpose of the first function is to count how many good PSF stars fall in each sub-region defined by the grid number N. The function starts from the number provided by the user and iterates down to the minimum grid size of 2×2. Depending on the number of PSF stars they want in each cell of the grid, users can choose the appropriate grid size or modify the threshold values for the star detection, selected when creating the single ePSF (in the Finding stars cell above).

The second function creates a grid of PSFs with EPSFBuilder. It will return a GriddedPSFModel object containing a 3D array of shape N × n × n, i.e., the N 2D n × n ePSFs created. The model should include a grid_xypos key that stores the detector position of each PSF; the order of the tuples in grid_xypos corresponds to the position of each PSF in the 3D array.

I. Counting PSF stars in each region of the grid#

def count_PSFstars_grid(grid_points=5, size=15, min_numpsf=40):

    num_grid_calc = np.arange(2, grid_points + 1, 1)
    num_grid_calc = num_grid_calc[::-1]

    for num in num_grid_calc:
        print(f"Calculating the number of PSF stars in a {num} x {num} grid")
        print("")

        image = fits.open(dict_images[det][filt]['images'][i])
        data_sb = image[1].data

        points = np.int16((data_sb.shape[0] / num) / 2)
        x_center = np.arange(points, 2 * points * (num), 2 * points)
        y_center = np.arange(points, 2 * points * (num), 2 * points)

        centers = np.array(np.meshgrid(x_center, y_center)).T.reshape(-1, 2)

        for n, val in enumerate(centers):

            x = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['xcentroid']
            y = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['ycentroid']
            # flux = dict_psfs_epsf[det][filt]['table psf stars'][i + 1]['flux']

            half_size = (size - 1) / 2

            lim1 = val[0] - points + half_size
            lim2 = val[0] + points - half_size
            lim3 = val[1] - points + half_size
            lim4 = val[1] + points - half_size

            test = (x > lim1) & (x < lim2) & (y > lim3) & (y < lim4)

            if np.count_nonzero(test) < min_numpsf:
                print(f"Center coordinates of grid cell {n + 1} are ({val[0]}, {val[1]}) --- Not enough PSF stars in the cell (number of PSF stars < {min_numpsf})")

            else:
                print(f"Center coordinates of grid cell {n + 1} are ({val[0]}, {val[1]}) --- Number of PSF stars: {np.count_nonzero(test)}")
        print("")

for det in dets_short:
    for j, filt in enumerate(filts_short):
        for i in np.arange(0, len(dict_images[det][filt]['images']), 1):

            print(f"Analyzing image {i + 1} of filter {filt}, detector {det}")
            print("")

            count_PSFstars_grid(grid_points=5, size=15, min_numpsf=40)

TODO - Create a grid of ePSF#

Here goes the function that creates a grid of ePSFs that can be saved in the ePSF dictionary.
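A possible starting point is sketched below (not used in the rest of the notebook): assuming a list of per-cell ePSFs (epsf_list), e.g. built with EPSFBuilder on the stars of each grid cell, and the corresponding (x, y) cell centers (centers, e.g. from count_PSFstars_grid above), it packs them into a photutils GriddedPSFModel. Note that GriddedPSFModel expects the positions to form a regular rectangular grid and all ePSFs to share the same shape and oversampling.

def build_epsf_grid(epsf_list, centers, oversampling=4):
    # Stack the individual ePSFs into a 3D cube of shape (N, n, n)
    psf_cube = np.array([np.asarray(e.data) for e in epsf_list])

    # GriddedPSFModel reads the detector positions and the oversampling
    # factor from the NDData metadata
    meta = {'grid_xypos': [tuple(c) for c in centers],
            'oversampling': oversampling}

    return GriddedPSFModel(NDData(data=psf_cube, meta=meta))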

TODO - Use the ePSF grid to perturb the WebbPSF model#

Here goes the function that creates a grid of PSF models obtained by perturbing the WebbPSF models using the ePSF grid created above.

Perform PSF photometry#

We perform the PSF photometry on the images, saving by default the output catalogs and the residual images in the dictionary created below. It is also possible to save the output catalogs (pickled pandas objects) and residual images (FITS files) in the current directory using the parameters save_output and save_residuals.

dict_phot = {}

for det in dets_short:
    dict_phot.setdefault(det, {})
    for j, filt in enumerate(filts_short):
        dict_phot[det].setdefault(filt, {})

        dict_phot[det][filt]['residual images'] = {}
        dict_phot[det][filt]['output photometry tables'] = {}

        for i in np.arange(0, len(dict_images[det][filt]['images']), 1):
            dict_phot[det][filt]['residual images'][i + 1] = None
            dict_phot[det][filt]['output photometry tables'][i + 1] = None

Note: to speed up the notebook, we use a high threshold in the finding algorithm (threshold ~ 2000) and, in the analysis below, we will use the catalogs obtained with a sigma threshold of 10 from a previous reduction run. To perform a meaningful data reduction, the user should modify the threshold accordingly.

Here we use as PSF model the grid of WebbPSF PSFs, but users can switch to the other available models (i.e., single WebbPSF PSF, single ePSF) by modifying the psf parameter in the function.

def psf_phot(det='NRCA1', filt='F070W', th=2000, psf='grid_webbpsf', save_residuals=False, save_output=False):
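    # Note: the image index `i` and the aperture radii `ap_radius` are taken from
    # the calling cell below; they are not function arguments.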

    bkgrms = MADStdBackgroundRMS()
    mmm_bkg = MMMBackground()
    fitter = LevMarLSQFitter()

    im = fits.open(dict_images[det][filt]['images'][i])
    imh = im[1].header
    data_sb = im[1].data

    d = im[0].header['DETECTOR']
    prim_dith_pos = im[0].header['PATT_NUM']
    prim_dith_num = im[0].header['NUMDTHPT']
    subpx_dith_pos = im[0].header['SUBPXNUM']
    subpx_dith_num = im[0].header['SUBPXPNS']

    data = data_sb / imh['PHOTMJSR']

    units = imh['BUNIT']
    print(f"Conversion factor from {units} to DN/s for filter {filt}: {imh['PHOTMJSR']}")
    print("Applying conversion to the data")
            
    sigma_psf = dict_utils[filt]['psf fwhm']
    print(f"FWHM for the filter {filt}: {sigma_psf}")
    
    std = bkgrms(data)
    bkg = mmm_bkg(data)
    
    daofind = DAOStarFinder(threshold=th * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,
                            sharplo=0.30, sharphi=1.40)
    
    grouper = SourceGrouper(5.0 * sigma_psf)
    
    # grid PSF

    if psf == 'grid_webbpsf':
        print("Using as PSF model WebbPSF PSFs grid")
        psf_model = dict_psfs_webbpsf[det][filt]['psf model grid'].copy()

    # single psf:

    if psf == 'single_webbpsf':
        print("Using as PSF model WebbPSF single PSF")
        psf_model = dict_psfs_webbpsf[det][filt]['psf model single'].copy()

    # epsf:

    if psf == 'single_epsf':
        print("Using as PSF model single ePSF")
        psf_model = dict_psfs_epsf[det][filt]['epsf single'][i + 1].copy()

    print(f"Performing the photometry on image {i + 1} of filter {filt}, detector {det}")
            
    tic = time.perf_counter()

    data_sub = data - mmm_bkg(data)
    psf_shape = (11, 11)
    
    phot = IterativePSFPhotometry(psf_model, psf_shape, daofind,
                                  grouper=grouper, fitter=fitter,
                                  fitter_maxiters=500,
                                  maxiters=2, aperture_radius=ap_radius[j])
    result = phot(data_sub)
    
    toc = time.perf_counter()

    dtime = (toc - tic)
    print(f"Time needed to perform photometry on image {i + 1}: {dtime:.2f} sec")
    print(f"Number of sources detected in image {i + 1} for filter {filt}: {len(result)}")
        
    residual_image = phot.make_residual_image(data_sub, psf_shape=psf_shape)
                            
    dict_phot[det][filt]['residual images'][i + 1] = residual_image
    dict_phot[det][filt]['output photometry tables'][i + 1] = result

    dir_output_phot = './'

    # save the residual images as fits files:

    if save_residuals:
        hdu = fits.PrimaryHDU(residual_image)
        hdul = fits.HDUList([hdu])
        residual_outname = f'residual_{d}_{filt}_webbPSF_gridPSF_{prim_dith_pos}of{prim_dith_num}_{subpx_dith_pos}of{subpx_dith_num}.fits'

        hdul.writeto(os.path.join(dir_output_phot, residual_outname))

    # save the output photometry tables:

    if save_output:
        outname = f'phot_{d}_{filt}_webbPSF_gridPSF_level2_{prim_dith_pos}of{prim_dith_num}_{subpx_dith_pos}of{subpx_dith_num}.pkl'
        tab = result.to_pandas()
        tab.to_pickle(os.path.join(dir_output_phot, outname))

tic_tot = time.perf_counter()

ap_radius = [3.0, 3.5]  # must match the number of filters analyzed

if glob.glob('./*residual*.fits'):
    print("Deleting Residual images from directory")
    files = glob.glob('./residual*.fits')
    for file in files:
        os.remove(file)

for det in dets_short:
    for j, filt in enumerate(filts_short):
        for i in np.arange(0, len(dict_images[det][filt]['images']), 1):
            
            psf_phot(det=det, filt=filt, th=2000, psf='grid_webbpsf', save_residuals=True, save_output=False) 

toc_tot = time.perf_counter()
number = len(filts_short) * len(dict_images[det][filt]['images'])
dtime = (toc_tot - tic_tot)
print(f"Time elapsed to perform the photometry of the {number} images: {dtime:.2f} sec")

Output Photometry Table#

dict_phot['NRCB1']['F115W']['output photometry tables'][1]

Display subtracted image#

As an example, we show the comparison between one science image and the residual image after the data reduction for both filters. Note that the residual image is obtained from the photometry run in the cell above with a very high detection threshold.

fig, ax = plt.subplots(ncols=2, nrows=2, figsize=(14, 14))

for det in dets_short:
    for i, filt in enumerate(filts_short):
        image = fits.open(dict_images[det][filt]['images'][0])
        data_sb = image[1].data
        norm = simple_norm(data_sb, 'sqrt', percent=99.)
        ax[0, i].imshow(data_sb, norm=norm, cmap='Greys')
        ax[0, i].set_xlabel("X [px]", fontdict=font2)
        ax[0, i].set_ylabel("Y [px]", fontdict=font2)
        ax[0, i].set_title(filt, fontdict=font2)

for det in dets_short:
    for i, filt in enumerate(filts_short):
        res = dict_phot[det][filt]['residual images'][1]
        norm = simple_norm(res, 'sqrt', percent=99.)
        ax[1, i].imshow(res, norm=norm, cmap='Greys')
        ax[1, i].set_xlabel("X [px]", fontdict=font2)
        ax[1, i].set_ylabel("Y [px]", fontdict=font2)

plt.tight_layout()

Part II - Data Analysis#

Note: here we use the reduction obtained using a grid of WebbPSF PSFs as PSF models. The users can perform the data analysis using different PSF models (single PSF model, PSF grid, etc.) and compare the results.

Load Tables with PSF Photometry#

if not glob.glob('./*phot*gridPSF*.pkl'):

    print("Downloading Photometry Output")

    boxlink_cat_f115w = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/phot_cat_F115W.tar.gz'
    boxfile_cat_f115w = './phot_cat_F115W.tar.gz'
    request.urlretrieve(boxlink_cat_f115w, boxfile_cat_f115w)

    tar = tarfile.open(boxfile_cat_f115w, 'r')
    tar.extractall(filter='data')

    boxlink_cat_f200w = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/phot_cat_F200W.tar.gz'
    boxfile_cat_f200w = './phot_cat_F200W.tar.gz'
    request.urlretrieve(boxlink_cat_f200w, boxfile_cat_f200w)

    tar = tarfile.open(boxfile_cat_f200w, 'r')
    tar.extractall(filter='data')

    cat_dir = './'
    phots_pkl_f115w = sorted(glob.glob(os.path.join(cat_dir, '*F115W*gridPSF*.pkl')))
    phots_pkl_f200w = sorted(glob.glob(os.path.join(cat_dir, '*F200W*gridPSF*.pkl')))                       

else:

    cat_dir = './'
    phots_pkl_f115w = sorted(glob.glob(os.path.join(cat_dir, '*F115W*gridPSF*.pkl')))
    phots_pkl_f200w = sorted(glob.glob(os.path.join(cat_dir, '*F200W*gridPSF*.pkl')))                      

results_f115w = []
results_f200w = []

for phot_pkl_f115w, phot_pkl_f200w in zip(phots_pkl_f115w, phots_pkl_f200w):

    ph_f115w = pd.read_pickle(phot_pkl_f115w)
    ph_f200w = pd.read_pickle(phot_pkl_f200w)

    result_f115w = QTable.from_pandas(ph_f115w)
    result_f200w = QTable.from_pandas(ph_f200w)

    results_f115w.append(result_f115w)
    results_f200w.append(result_f200w)

Transform the images to DataModel#

In order to assign the WCS coordinates and hence cross-match the images, we need to transform the images to a DataModel. The coordinates are assigned during the assign_wcs step in the JWST pipeline and allow us to cross-match the different catalogs obtained for each filter.

images_f115w = []
images_f200w = []

for i in np.arange(0, len(dict_images['NRCB1']['F115W']['images']), 1):

    image_f115w = ImageModel(dict_images['NRCB1']['F115W']['images'][i])
    images_f115w.append(image_f115w)
        
for i in np.arange(0, len(dict_images['NRCB1']['F200W']['images']), 1):

    image_f200w = ImageModel(dict_images['NRCB1']['F200W']['images'][i])
    images_f200w.append(image_f200w)

Cross-match the catalogs from the two filters for the 4 images#

We cross-match the catalogs to obtain the single color-magnitude diagrams.

Stars from the two filters are associated if the distance between the matches is < 0.5 px.
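For reference, assuming the nominal NIRCam SW pixel scale of about 0.031 arcsec/px, 0.5 px corresponds to roughly 0.5 × 0.031 ≈ 0.0155 arcsec, which is why max_sep is set to 0.015 arcsec in the cell below.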

results_clean_f115w = []
results_clean_f200w = []

for i in np.arange(0, len(images_f115w), 1):

    mask_f115w = ((results_f115w[i]['x_fit'] > 0) & (results_f115w[i]['x_fit'] < 2048) &
                  (results_f115w[i]['y_fit'] > 0) & (results_f115w[i]['y_fit'] < 2048) &
                  (results_f115w[i]['flux_fit'] > 0))

    result_clean_f115w = results_f115w[i][mask_f115w]

    ra_f115w, dec_f115w = images_f115w[i].meta.wcs(result_clean_f115w['x_fit'], result_clean_f115w['y_fit'])
    radec_f115w = SkyCoord(ra_f115w, dec_f115w, unit='deg')
    result_clean_f115w['radec'] = radec_f115w
    results_clean_f115w.append(result_clean_f115w)

    mask_f200w = ((results_f200w[i]['x_fit'] > 0) & (results_f200w[i]['x_fit'] < 2048) &
                  (results_f200w[i]['y_fit'] > 0) & (results_f200w[i]['y_fit'] < 2048) &
                  (results_f200w[i]['flux_fit'] > 0))

    result_clean_f200w = results_f200w[i][mask_f200w]

    ra_f200w, dec_f200w = images_f200w[i].meta.wcs(result_clean_f200w['x_fit'], result_clean_f200w['y_fit'])
    radec_f200w = SkyCoord(ra_f200w, dec_f200w, unit='deg')

    result_clean_f200w['radec'] = radec_f200w
    results_clean_f200w.append(result_clean_f200w)

max_sep = 0.015 * u.arcsec

matches_phot_single = []
filt1 = 'F115W'
filt2 = 'F200W'

for res1, res2 in zip(results_clean_f115w, results_clean_f200w):

    idx, d2d, _ = match_coordinates_sky(res1['radec'], res2['radec'])

    sep_constraint = d2d < max_sep

    match_phot_single = Table()

    x_0_f115w = res1['x_0'][sep_constraint]
    y_0_f115w = res1['y_0'][sep_constraint]
    x_fit_f115w = res1['x_fit'][sep_constraint]
    y_fit_f115w = res1['y_fit'][sep_constraint]
    radec_f115w = res1['radec'][sep_constraint]
    mag_f115w = (-2.5 * np.log10(res1['flux_fit']))[sep_constraint]
    emag_f115w = (1.086 * (res1['flux_unc'] / res1['flux_fit']))[sep_constraint]

    x_0_f200w = res2['x_0'][idx[sep_constraint]]
    y_0_f200w = res2['y_0'][idx[sep_constraint]]
    x_fit_f200w = res2['x_fit'][idx[sep_constraint]]
    y_fit_f200w = res2['y_fit'][idx[sep_constraint]]
    radec_f200w = res2['radec'][idx[sep_constraint]]
    mag_f200w = (-2.5 * np.log10(res2['flux_fit']))[idx[sep_constraint]]
    emag_f200w = (1.086 * (res2['flux_unc'] / res2['flux_fit']))[idx[sep_constraint]]

    match_phot_single['x_0_' + filt1] = x_0_f115w
    match_phot_single['y_0_' + filt1] = y_0_f115w
    match_phot_single['x_fit_' + filt1] = x_fit_f115w
    match_phot_single['y_fit_' + filt1] = y_fit_f115w
    match_phot_single['radec_' + filt1] = radec_f115w
    match_phot_single['mag_' + filt1] = mag_f115w
    match_phot_single['emag_' + filt1] = emag_f115w
    match_phot_single['x_0_' + filt2] = x_0_f200w
    match_phot_single['y_0_' + filt2] = y_0_f200w
    match_phot_single['x_fit_' + filt2] = x_fit_f200w
    match_phot_single['y_fit_' + filt2] = y_fit_f200w
    match_phot_single['radec_' + filt2] = radec_f200w
    match_phot_single['mag_' + filt2] = mag_f200w
    match_phot_single['emag_' + filt2] = emag_f200w

    matches_phot_single.append(match_phot_single)    
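For reference, the factor 1.086 used for the magnitude errors above comes from propagating the flux uncertainty through the magnitude definition: \(\sigma_m = \frac{2.5}{\ln 10}\,\frac{\sigma_F}{F} \approx 1.086\,\frac{\sigma_F}{F}\).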

Color-Magnitude Diagrams (Instrumental Magnitudes) for the 4 images#

plt.figure(figsize=(12, 16))
plt.clf()

for i in np.arange(0, len(matches_phot_single), 1):
    ax = plt.subplot(2, 2, i + 1)

    j = str(i + 1)

    xlim0 = -0.5
    xlim1 = 0.8
    ylim0 = -1
    ylim1 = -9

    ax.set_xlim(xlim0, xlim1)
    ax.set_ylim(ylim0, ylim1)

    ax.xaxis.set_major_locator(ticker.AutoLocator())
    ax.xaxis.set_minor_locator(ticker.AutoMinorLocator())
    ax.yaxis.set_major_locator(ticker.AutoLocator())
    ax.yaxis.set_minor_locator(ticker.AutoMinorLocator())

    f115w_single = matches_phot_single[i]['mag_' + filt1]
    f200w_single = matches_phot_single[i]['mag_' + filt2]

    ax.scatter(f115w_single - f200w_single, f115w_single, s=1, color='k')

    ax.set_xlabel(filt1 + '-' + filt2, fontdict=font2)
    ax.set_ylabel(filt1, fontdict=font2)
    ax.text(xlim0 + 0.1, -8.65, f"Image {j}", fontdict=font2)
    
plt.tight_layout()

Difference in retrieved positions (in pixels) between the daofind and PSF routines#

We show the difference in the star positions derived from daofind and from the PSF-fitting algorithm. We also show the differences \(\Delta\)X and \(\Delta\)Y as a function of the instrumental magnitudes.

plt.figure(figsize=(12, 6))

ax1 = plt.subplot(1, 2, 1)

xlim0 = -1
xlim1 = 1
ylim0 = -1
ylim1 = 1

ax1.set_xlim(xlim0, xlim1)
ax1.set_ylim(ylim0, ylim1)

ax1.xaxis.set_major_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax1.yaxis.set_major_locator(ticker.AutoLocator())
ax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())

x_find_f115w = results_clean_f115w[0]['x_0']
y_find_f115w = results_clean_f115w[0]['y_0']

x_psf_f115w = results_clean_f115w[0]['x_fit']
y_psf_f115w = results_clean_f115w[0]['y_fit']

delta_x_f115w = x_find_f115w - x_psf_f115w
delta_y_f115w = y_find_f115w - y_psf_f115w

_, d_x_f115w, sigma_d_x_f115w = sigma_clipped_stats(delta_x_f115w)
_, d_y_f115w, sigma_d_y_f115w = sigma_clipped_stats(delta_y_f115w)

ax1.scatter(delta_x_f115w, delta_y_f115w, s=1, color='gray')

ax1.set_xlabel(r'$\Delta$ X (px)', fontdict=font2)
ax1.set_ylabel(r'$\Delta$ Y (px)', fontdict=font2)
ax1.set_title(filt1, fontdict=font2)
ax1.text(xlim0 + 0.05, ylim1 - 0.15, rf'$\Delta$ X = {d_x_f115w:5.3f} $\pm$ {sigma_d_x_f115w:5.3f}',
         color='k', fontdict=font2)
ax1.text(xlim0 + 0.05, ylim1 - 0.30, rf'$\Delta$ Y = {d_y_f115w:5.3f} $\pm$ {sigma_d_y_f115w:5.3f}',
         color='k', fontdict=font2)
ax1.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')
ax1.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

ax2 = plt.subplot(1, 2, 2)

ax2.set_xlim(xlim0, xlim1)
ax2.set_ylim(ylim0, ylim1)

ax2.xaxis.set_major_locator(ticker.AutoLocator())
ax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax2.yaxis.set_major_locator(ticker.AutoLocator())
ax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())

x_find_f200w = results_clean_f200w[0]['x_0']
y_find_f200w = results_clean_f200w[0]['y_0']

x_psf_f200w = results_clean_f200w[0]['x_fit']
y_psf_f200w = results_clean_f200w[0]['y_fit']

delta_x_f200w = x_find_f200w - x_psf_f200w
delta_y_f200w = y_find_f200w - y_psf_f200w

_, d_x_f200w, sigma_d_x_f200w = sigma_clipped_stats(delta_x_f200w)
_, d_y_f200w, sigma_d_y_f200w = sigma_clipped_stats(delta_y_f200w)

ax2.scatter(delta_x_f200w, delta_y_f200w, s=1, color='gray')
ax2.text(xlim0 + 0.05, ylim1 - 0.15, rf'$\Delta$ X = {d_x_f200w:5.3f} $\pm$ {sigma_d_x_f200w:5.3f}',
         color='k', fontdict=font2)
ax2.text(xlim0 + 0.05, ylim1 - 0.30, rf'$\Delta$ Y = {d_y_f200w:5.3f} $\pm$ {sigma_d_y_f200w:5.3f}',
         color='k', fontdict=font2)
ax2.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')
ax2.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

ax2.set_xlabel(r'$\Delta$ X (px)', fontdict=font2)
ax2.set_ylabel(r'$\Delta$ Y (px)', fontdict=font2)
ax2.set_title(filt2, fontdict=font2)

plt.tight_layout()

plt.figure(figsize=(12, 8))

ax1 = plt.subplot(2, 2, 1)

xlim0 = -9
xlim1 = -1
ylim0 = -1
ylim1 = 1

ax1.set_xlim(xlim0, xlim1)
ax1.set_ylim(ylim0, ylim1)

ax1.xaxis.set_major_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax1.yaxis.set_major_locator(ticker.AutoLocator())
ax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())

mag_inst_f115w = -2.5 * np.log10(results_clean_f115w[0]['flux_fit'])

ax1.scatter(mag_inst_f115w, delta_x_f115w, s=1, color='gray')
ax1.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

ax1.set_xlabel(filt1 + '_inst', fontdict=font2)
ax1.set_ylabel(r'$\Delta$ X (px)', fontdict=font2)

ax2 = plt.subplot(2, 2, 2)

ax2.set_xlim(xlim0, xlim1)
ax2.set_ylim(ylim0, ylim1)

ax2.xaxis.set_major_locator(ticker.AutoLocator())
ax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax2.yaxis.set_major_locator(ticker.AutoLocator())
ax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax2.scatter(mag_inst_f115w, delta_y_f115w, s=1, color='gray')
ax2.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

ax2.set_xlabel(filt1 + '_inst', fontdict=font2)
ax2.set_ylabel(r'$\Delta$ Y (px)', fontdict=font2)

ax3 = plt.subplot(2, 2, 3)

ax3.set_xlim(xlim0, xlim1)
ax3.set_ylim(ylim0, ylim1)

ax3.xaxis.set_major_locator(ticker.AutoLocator())
ax3.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax3.yaxis.set_major_locator(ticker.AutoLocator())
ax3.yaxis.set_minor_locator(ticker.AutoMinorLocator())

mag_inst_f200w = -2.5 * np.log10(results_clean_f200w[0]['flux_fit'])

ax3.scatter(mag_inst_f200w, delta_x_f200w, s=1, color='gray')
ax3.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

ax3.set_xlabel(filt2 + '_inst', fontdict=font2)
ax3.set_ylabel(r'$\Delta$ X (px)', fontdict=font2)

ax4 = plt.subplot(2, 2, 4)

ax4.set_xlim(xlim0, xlim1)
ax4.set_ylim(ylim0, ylim1)

ax4.xaxis.set_major_locator(ticker.AutoLocator())
ax4.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax4.yaxis.set_major_locator(ticker.AutoLocator())
ax4.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax4.scatter(mag_inst_f200w, delta_y_f200w, s=1, color='gray')
ax4.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

ax4.set_xlabel(filt2 + '_inst', fontdict=font2)
ax4.set_ylabel(r'$\Delta$ Y (px)', fontdict=font2)

plt.tight_layout()

Cross-match the 4 catalogs for each filter#

To obtain a final color-magnitude diagram, we need to cross-match all the catalogs for each filter and then cross-match the two resulting final catalogs.

Note: this is the most conservative approach since we impose that a star must be found in all 4 catalogs.

Note for developer:#

I couldn’t find an easier way to write this function: you need to match the first two catalogs, derive a sub-catalog with only the matches, and then iterate over all the other available catalogs. We should also think about how to create a function that keeps stars available in X out of Y catalogs (i.e., if for some reason a measurement is not available in one of the images, but the star is well measured in the other three, it doesn’t make sense to discard the object).
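One possible generalization is sketched below (an untested sketch, not used in this notebook): it matches the first catalog against each of the others in turn and keeps only the sources found in all of them within max_sep. Relaxing it to 'X out of Y' catalogs would require counting the matches per source instead of combining them with a logical AND. Here tables is assumed to be a list of cleaned photometry tables, each with a radec SkyCoord column (like results_clean_f115w). The concrete 4-catalog version actually used in this notebook follows.

def crossmatch_all(tables, max_sep=0.015 * u.arcsec):
    ref = tables[0]['radec']
    keep = np.ones(len(ref), dtype=bool)   # sources matched in every catalog
    idx_all = [np.arange(len(ref))]        # row indices into each catalog

    for tab in tables[1:]:
        idx, d2d, _ = match_coordinates_sky(ref, tab['radec'])
        keep &= d2d < max_sep
        idx_all.append(idx)

    # return the matched rows of each catalog, aligned row by row
    return [tab[ind[keep]] for tab, ind in zip(tables, idx_all)]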

def crossmatch_filter(table=None):

    num = 0
    num_cat = np.char.mod('%d', np.arange(1, len(table) + 1, 1))

    idx_12, d2d_12, _ = match_coordinates_sky(table[num]['radec'], table[num + 1]['radec'])

    sep_constraint_12 = d2d_12 < max_sep

    matches_12 = Table()

    matches_12['radec_' + num_cat[num]] = table[num]['radec'][sep_constraint_12]
    matches_12['mag_' + num_cat[num]] = (-2.5 * np.log10(table[num]['flux_fit']))[sep_constraint_12]
    matches_12['emag_' + num_cat[num]] = (1.086 * (table[num]['flux_unc'] / 
                                                   table[num]['flux_fit']))[sep_constraint_12]

    matches_12['radec_' + num_cat[num + 1]] = table[num + 1]['radec'][idx_12[sep_constraint_12]]
    matches_12['mag_' + num_cat[num + 1]] = (-2.5 * np.log10(table[num + 1]['flux_fit']))[idx_12[sep_constraint_12]]
    matches_12['emag_' + num_cat[num + 1]] = (1.086 * (table[num + 1]['flux_unc'] /
                                                       table[num + 1]['flux_fit']))[idx_12[sep_constraint_12]]

    idx_123, d2d_123, _ = match_coordinates_sky(matches_12['radec_' + num_cat[num]], table[num + 2]['radec'])

    sep_constraint_123 = d2d_123 < max_sep

    matches_123 = Table()

    matches_123['radec_' + num_cat[num]] = matches_12['radec_' + num_cat[num]][sep_constraint_123]
    matches_123['mag_' + num_cat[num]] = matches_12['mag_' + num_cat[num]][sep_constraint_123]
    matches_123['emag_' + num_cat[num]] = matches_12['emag_' + num_cat[num]][sep_constraint_123]
    matches_123['radec_' + num_cat[num + 1]] = matches_12['radec_' + num_cat[num + 1]][sep_constraint_123]
    matches_123['mag_' + num_cat[num + 1]] = matches_12['mag_' + num_cat[num + 1]][sep_constraint_123]
    matches_123['emag_' + num_cat[num + 1]] = matches_12['emag_' + num_cat[num + 1]][sep_constraint_123]
    matches_123['radec_' + num_cat[num + 2]] = table[num + 2]['radec'][idx_123[sep_constraint_123]]
    matches_123['mag_' + num_cat[num + 2]] = (-2.5 * np.log10(table[num + 2]['flux_fit']))[idx_123[sep_constraint_123]]
    matches_123['emag_' + num_cat[num + 2]] = (1.086 * (table[num + 2]['flux_unc'] /
                                                        table[num + 2]['flux_fit']))[idx_123[sep_constraint_123]]

    idx_1234, d2d_1234, _ = match_coordinates_sky(matches_123['radec_' + num_cat[num]], table[num + 3]['radec'])

    sep_constraint_1234 = d2d_1234 < max_sep

    matches_1234 = Table()

    matches_1234['radec_' + num_cat[num]] = matches_123['radec_' + num_cat[num]][sep_constraint_1234]
    matches_1234['mag_' + num_cat[num]] = matches_123['mag_' + num_cat[num]][sep_constraint_1234]
    matches_1234['emag_' + num_cat[num]] = matches_123['emag_' + num_cat[num]][sep_constraint_1234]
    matches_1234['radec_' + num_cat[num + 1]] = matches_123['radec_' + num_cat[num + 1]][sep_constraint_1234]
    matches_1234['mag_' + num_cat[num + 1]] = matches_123['mag_' + num_cat[num + 1]][sep_constraint_1234]
    matches_1234['emag_' + num_cat[num + 1]] = matches_123['emag_' + num_cat[num + 1]][sep_constraint_1234]
    matches_1234['radec_' + num_cat[num + 2]] = matches_123['radec_' + num_cat[num + 2]][sep_constraint_1234]
    matches_1234['mag_' + num_cat[num + 2]] = matches_123['mag_' + num_cat[num + 2]][sep_constraint_1234]
    matches_1234['emag_' + num_cat[num + 2]] = matches_123['emag_' + num_cat[num + 2]][sep_constraint_1234]
    matches_1234['radec_' + num_cat[num + 3]] = table[num + 3]['radec'][idx_1234[sep_constraint_1234]]
    matches_1234['mag_' + num_cat[num + 3]] = (-2.5 * np.log10(table[num + 3]['flux_fit']))[idx_1234[sep_constraint_1234]]
    matches_1234['emag_' + num_cat[num + 3]] = (1.086 * (table[num + 3]['flux_unc'] /
                                                         table[num + 3]['flux_fit']))[idx_1234[sep_constraint_1234]]

    return matches_1234

matches_f115w = crossmatch_filter(table=results_clean_f115w)
matches_f200w = crossmatch_filter(table=results_clean_f200w)

For the final catalog, we assume that the magnitude is the mean of the 4 measurements and that the error on the magnitude is their standard deviation.

To easily perform this arithmetic on the tables, we convert them to pandas DataFrames.

df_f115w = matches_f115w.to_pandas()
df_f200w = matches_f200w.to_pandas()

df_f115w['RA_' + filt1] = df_f115w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].mean(axis=1)
df_f115w['e_RA_' + filt1] = df_f115w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].std(axis=1)
df_f115w['Dec_' + filt1] = df_f115w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].mean(axis=1)
df_f115w['e_Dec_' + filt1] = df_f115w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].std(axis=1)
df_f115w[filt1 + '_inst'] = df_f115w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].mean(axis=1)
df_f115w['e' + filt1 + '_inst'] = df_f115w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].std(axis=1)

df_f200w['RA_' + filt2] = df_f200w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].mean(axis=1)
df_f200w['e_RA_' + filt2] = df_f200w[['radec_1.ra', 'radec_2.ra', 'radec_3.ra', 'radec_4.ra']].std(axis=1)
df_f200w['Dec_' + filt2] = df_f200w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].mean(axis=1)
df_f200w['e_Dec_' + filt2] = df_f200w[['radec_1.dec', 'radec_2.dec', 'radec_3.dec', 'radec_4.dec']].std(axis=1)
df_f200w[filt2 + '_inst'] = df_f200w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].mean(axis=1)
df_f200w['e' + filt2 + '_inst'] = df_f200w[['mag_1', 'mag_2', 'mag_3', 'mag_4']].std(axis=1)

Final Color-Magnitude Diagram (Instrumental Magnitudes)#

plt.figure(figsize=(12, 14))
plt.clf()

ax1 = plt.subplot(1, 2, 1)

ax1.set_xlabel(filt1 + '_inst -' + filt2 + '_inst', fontdict=font2)
ax1.set_ylabel(filt1 + '_inst', fontdict=font2)

xlim0 = -0.5
xlim1 = 0.8
ylim0 = -1.5
ylim1 = -9

ax1.set_xlim(xlim0, xlim1)
ax1.set_ylim(ylim0, ylim1)

ax1.xaxis.set_major_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax1.yaxis.set_major_locator(ticker.AutoLocator())
ax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())

radec_f115w_inst = SkyCoord(df_f115w['RA_' + filt1], df_f115w['Dec_' + filt1], unit='deg')
radec_f200w_inst = SkyCoord(df_f200w['RA_' + filt2], df_f200w['Dec_' + filt2], unit='deg')

idx_inst, d2d_inst, _ = match_coordinates_sky(radec_f115w_inst, radec_f200w_inst)

sep_constraint_inst = d2d_inst < max_sep

f115w_inst = np.array(df_f115w[filt1 + '_inst'][sep_constraint_inst])
ef115w_inst = np.array(df_f115w['e' + filt1 + '_inst'][sep_constraint_inst])
radec_f115w = radec_f115w_inst[sep_constraint_inst]

f200w_inst = np.array(df_f200w[filt2 + '_inst'][idx_inst[sep_constraint_inst]])
ef200w_inst = np.array(df_f200w['e' + filt2 + '_inst'][idx_inst[sep_constraint_inst]])
radec_f200w = radec_f200w_inst[idx_inst[sep_constraint_inst]]

ax1.scatter(f115w_inst - f200w_inst, f115w_inst, s=1, color='k')

ax2 = plt.subplot(2, 2, 2)

ax2.set_xlabel(filt1 + '_inst', fontdict=font2)
ax2.set_ylabel(r'$\sigma$' + filt1, fontdict=font2)

xlim0 = -9
xlim1 = -1.5
ylim0 = -0.01 
ylim1 = 1

ax2.set_xlim(xlim0, xlim1)
ax2.set_ylim(ylim0, ylim1)

ax2.xaxis.set_major_locator(ticker.AutoLocator())
ax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax2.yaxis.set_major_locator(ticker.AutoLocator())
ax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax2.scatter(df_f115w[filt1 + '_inst'], df_f115w['e' + filt1 + '_inst'], s=1, color='k')

ax3 = plt.subplot(2, 2, 4)

ax3.set_xlabel(filt2 + '_inst', fontdict=font2)
ax3.set_ylabel(r'$\sigma$' + filt2, fontdict=font2)

ax3.set_xlim(xlim0, xlim1)
ax3.set_ylim(ylim0, ylim1)

ax3.xaxis.set_major_locator(ticker.AutoLocator())
ax3.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax3.yaxis.set_major_locator(ticker.AutoLocator())
ax3.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax3.scatter(df_f200w[filt2 + '_inst'], df_f200w['e' + filt2 + '_inst'], s=1, color='k')

plt.tight_layout()

Photometric Zeropoints#

To obtain the final calibrated color-magnitude diagram, we need to calculate the photometric zeropoints. To do so, we perform aperture photometry on the calibrated (Level-3) images, apply the appropriate aperture correction for the finite aperture adopted (the values provided in the dictionary above are for an infinite aperture), and then compare it with the PSF photometry (a minimal sketch of the aperture-photometry step follows the list below). We can summarize the steps as follows:

  • perform aperture photometry

  • apply appropriate aperture correction

  • apply tabulated zeropoint

  • cross-match with psf photometry
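Below is a minimal sketch of the first three steps, using the photutils aperture classes imported at the top of the notebook. The aperture and annulus radii, the encircled-energy fraction ee_fraction, and the choice of the F115W module-B zeropoint are illustrative placeholders; data stands for a background-subtracted image in DN/s and apcorr_stars for a table of bright isolated stars like the one stored in dict_aper by find_bright_stars below.

positions = list(zip(apcorr_stars['xcentroid'], apcorr_stars['ycentroid']))
aper = CircularAperture(positions, r=3.0)
annulus = CircularAnnulus(positions, r_in=6.0, r_out=9.0)

ap_tab = aperture_photometry(data, aper)

# local background from a sigma-clipped median in the annulus around each star
local_bkg = []
for mask in annulus.to_mask(method='center'):
    ann_values = mask.multiply(data)[mask.data > 0]
    _, median, _ = sigma_clipped_stats(ann_values)
    local_bkg.append(median)

flux = ap_tab['aperture_sum'] - np.array(local_bkg) * aper.area

# correct the finite aperture to an "infinite" aperture and apply the
# tabulated VegaMAG zeropoint (module B for NRCB1, here F115W as an example)
ee_fraction = 0.85   # placeholder encircled-energy fraction for the adopted radius
mag_aper = -2.5 * np.log10(flux / ee_fraction) + dict_utils['F115W']['VegaMAG zp modB']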

Load the calibrated and rectified images (Level 3 imaging pipeline)#

dict_images_combined = {'NRCA1': {}, 'NRCA2': {}, 'NRCA3': {}, 'NRCA4': {}, 'NRCA5': {},
                        'NRCB1': {}, 'NRCB2': {}, 'NRCB3': {}, 'NRCB4': {}, 'NRCB5': {}}

dict_filter_short = {}
dict_filter_long = {}

ff_short = []
det_short = []
det_long = []
ff_long = []
detlist_short = []
detlist_long = []
filtlist_short = []
filtlist_long = []

if not glob.glob('./*combined*fits'):

    print("Downloading images")

    boxlink_images_lev3 = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/images_level3.tar.gz'
    boxfile_images_lev3 = './images_level3.tar.gz'
    request.urlretrieve(boxlink_images_lev3, boxfile_images_lev3)

    tar = tarfile.open(boxfile_images_lev3, 'r')
    tar.extractall(filter='data')

    images_dir = './'
    files_singles = sorted(glob.glob(os.path.join(images_dir, "*combined*fits")))

else:

    images_dir = './'
    files_singles = sorted(glob.glob(os.path.join(images_dir, "*combined*fits")))

for file in files_singles:

    im = fits.open(file)
    f = im[0].header['FILTER']
    d = im[0].header['DETECTOR']

    if d == 'NRCBLONG':
        d = 'NRCB5'
    elif d == 'NRCALONG':
        d = 'NRCA5'

    wv = float(f[1:3])

    if wv > 24:
        ff_long.append(f)
        det_long.append(d)

    else:
        ff_short.append(f)
        det_short.append(d)

    detlist_short = sorted(list(dict.fromkeys(det_short)))
    detlist_long = sorted(list(dict.fromkeys(det_long)))

    unique_list_filters_short = []
    unique_list_filters_long = []

    for x in ff_short:

        if x not in unique_list_filters_short:

            dict_filter_short.setdefault(x, {})

    for x in ff_long:
        if x not in unique_list_filters_long:
            dict_filter_long.setdefault(x, {})

    for d_s in detlist_short:
        dict_images_combined[d_s] = dict_filter_short

    for d_l in detlist_long:
        dict_images_combined[d_l] = dict_filter_long

    filtlist_short = sorted(list(dict.fromkeys(dict_filter_short)))
    filtlist_long = sorted(list(dict.fromkeys(dict_filter_long)))

    if len(dict_images_combined[d][f]) == 0:
        dict_images_combined[d][f] = {'images': [file]}
    else:
        dict_images_combined[d][f]['images'].append(file)

print("Available Detectors for SW channel:", detlist_short)
print("Available Detectors for LW channel:", detlist_long)
print("Available SW Filters:", filtlist_short)
print("Available LW Filters:", filtlist_long)

Display the images#

plt.figure(figsize=(14, 14))

for det in dets_short:
    for i, filt in enumerate(filts_short):

        image = fits.open(dict_images_combined[det][filt]['images'][0])
        data_sb = image[1].data

        ax = plt.subplot(1, len(filts_short), i + 1)

        norm = simple_norm(data_sb, 'sqrt', percent=99.)
        plt.xlabel("X [px]", fontdict=font2)
        plt.ylabel("Y [px]", fontdict=font2)
        plt.title(filt, fontdict=font2)

        ax.imshow(data_sb, norm=norm, cmap='Greys')
plt.tight_layout()

Aperture Photometry#

As we have done previously, we create a dictionary that contains the tables with the derived aperture photometry for each image.

dict_aper = {}

for det in dets_short:

    dict_aper.setdefault(det, {})
    for j, filt in enumerate(filts_short):

        dict_aper[det].setdefault(filt, {})

        dict_aper[det][filt]['stars for ap phot'] = None
        dict_aper[det][filt]['stars for ap phot matched'] = None
        dict_aper[det][filt]['aperture phot table'] = None

Find bright isolated stars#

def find_bright_stars(det='NRCA1', filt='F070W', dist_sel=False):
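    # Note: the image index `i`, the filter index `j`, and the lists `th` and
    # `min_sep` are taken from the calling cell below; they are not arguments.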

    bkgrms = MADStdBackgroundRMS()
    mmm_bkg = MMMBackground()

    image = fits.open(dict_images_combined[det][filt]['images'][i])
    data_sb = image[1].data
    imh = image[1].header

    print(f"Selecting stars for aperture photometry on image {i + 1} of filter {filt}, detector {det}")

    data = data_sb / imh['PHOTMJSR']
    units = imh['BUNIT']
    print(f"Conversion factor from {units} to DN/s for filter {filt}: {imh['PHOTMJSR']}")

    sigma_psf = dict_utils[filt]['psf fwhm']

    print(f"FWHM for the filter {filt}: {sigma_psf} px")

    std = bkgrms(data)
    bkg = mmm_bkg(data)
    daofind = DAOStarFinder(threshold=th[j] * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,
                            sharplo=0.30, sharphi=1.40)

    apcorr_stars = daofind(data)
    dict_aper[det][filt]['stars for ap phot'] = apcorr_stars
    
    if dist_sel:

        print("")
        print("Calculating closest neighbour distance")

        d = []

        daofind_tot = DAOStarFinder(threshold=10 * std + bkg, fwhm=sigma_psf, roundhi=1.0, roundlo=-1.0,
                                    sharplo=0.30, sharphi=1.40)

        stars_tot = daofind_tot(data)

        x_tot = stars_tot['xcentroid']
        y_tot = stars_tot['ycentroid']

        for xx, yy in zip(apcorr_stars['xcentroid'], apcorr_stars['ycentroid']):

            dist = np.sqrt((x_tot - xx)**2 + (y_tot - yy)**2)
            sep = np.sort(dist)[1]  # index 0 is the star itself (zero distance)
            d.append(sep)

        apcorr_stars['min distance'] = d
        mask_dist = (apcorr_stars['min distance'] > min_sep[j])

        apcorr_stars = apcorr_stars[mask_dist]

        dict_aper[det][filt]['stars for ap phot'] = apcorr_stars

        print("Minimum distance required:", min_sep[j], "px")
        print("")
        print(f"Number of bright isolated sources found in the image for {filt}: {len(apcorr_stars)}")
        print("-----------------------------------------------------")
        print("")
    else:
        print("")
        print(f"Number of bright sources found in the image for {filt}: {len(apcorr_stars)}")
        print("--------------------------------------------")
        print("")

    return


tic = time.perf_counter()

th = [700, 500]  # detection threshold levels for the two filters (length must match the number of filters analyzed)
min_sep = [10, 10]  # minimum acceptable separation (px) from the closest neighbour for the zeropoint stars


for det in dets_short:
    for j, filt in enumerate(filts_short):
        for i in np.arange(0, len(dict_images_combined[det][filt]['images']), 1):

            find_bright_stars(det=det, filt=filt, dist_sel=False)

toc = time.perf_counter()

print("Elapsed Time for finding stars for Aperture Photometry:", toc - tic)            

To further improve the quality of the sample, we cross-match the catalogs from the two filters and retain only the stars in common.

for det in dets_short:
    for j, filt in enumerate(filts_short):
        for i in np.arange(0, len(dict_images_combined[det][filt]['images']), 1):

            image = ImageModel(dict_images_combined[det][filt]['images'][i])

            ra, dec = image.meta.wcs(dict_aper[det][filt]['stars for ap phot']['xcentroid'],
                                     dict_aper[det][filt]['stars for ap phot']['ycentroid'])
        
            radec = SkyCoord(ra, dec, unit='deg')
            dict_aper[det][filt]['stars for ap phot']['radec'] = radec

idx_ap, d2d_ap, _ = match_coordinates_sky(dict_aper[det][filt1]['stars for ap phot']['radec'],
                                          dict_aper[det][filt2]['stars for ap phot']['radec'])

sep_constraint_ap = d2d_ap < max_sep

matched_apcorr_f115w = dict_aper[det][filt1]['stars for ap phot'][sep_constraint_ap]
matched_apcorr_f200w = dict_aper[det][filt2]['stars for ap phot'][idx_ap[sep_constraint_ap]]

dict_aper[det][filt1]['stars for ap phot matched'] = matched_apcorr_f115w
dict_aper[det][filt2]['stars for ap phot matched'] = matched_apcorr_f200w

Load aperture correction table#

Note: these values were obtained from the analysis of synthetic WebbPSF PSFs. They will be updated once in-flight measurements are available.
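
Schematically, following the quantities computed in the aperture photometry function below, the tabulated radius for a given encircled-energy (EE) fraction sets the measurement aperture, and the corresponding aperture correction rescales the background-subtracted flux to a total flux before converting to Vega magnitudes:

$$ m = -2.5\,\log_{10}\!\left[\mathrm{apcorr}\times\left(F_{\rm aper} - \bar{b}\,A_{\rm aper}\right)\right] + \mathrm{zp}_{\rm Vega} $$

where $F_{\rm aper}$ is the aperture sum, $\bar{b}$ the sigma-clipped median background per pixel in the sky annulus, and $A_{\rm aper}$ the aperture area.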

if os.path.isfile('./aperture_correction_table.txt'):
    ap_tab = './aperture_correction_table.txt'
else:
    print("Downloading the aperture correction table")

    boxlink_apcorr_table = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/aperture_correction_table.txt'
    boxfile_apcorr_table = './aperture_correction_table.txt'
    request.urlretrieve(boxlink_apcorr_table, boxfile_apcorr_table)
    ap_tab = './aperture_correction_table.txt'

aper_table = pd.read_csv(ap_tab, header=None, sep=r'\s+', index_col=0,
                         names=['filter', 'pupil', 'wave', 'r10', 'r20', 'r30', 'r40', 'r50', 'r60', 'r70', 'r80',
                                'r85', 'r90', 'sky_flux_px', 'apcorr10', 'apcorr20', 'apcorr30', 'apcorr40',
                                'apcorr50', 'apcorr60', 'apcorr70', 'apcorr80', 'apcorr85', 'apcorr90', 'sky_in',
                                'sky_out'], comment='#', skiprows=0, usecols=range(0, 26))
aper_table.head()

Perform Aperture Photometry#

def aperture_phot(det='NRCA1', filt='F070W'):

    radii = [aper_table.loc[filt]['r70']]

    ees = ['70']
    ee_radii = dict(zip(ees, radii))

    positions = np.transpose((dict_aper[det][filt]['stars for ap phot matched']['xcentroid'],
                              dict_aper[det][filt]['stars for ap phot matched']['ycentroid']))

    image = fits.open(dict_images_combined[det][filt]['images'][0])
    data_sb = image[1].data
    imh = image[1].header
    data = data_sb / imh['PHOTMJSR']

    # sky from the aperture correction table:

    sky = {"sky_in": aper_table.loc[filt]['r80'], "sky_out": aper_table.loc[filt]['r85']}

    tic = time.perf_counter()

    table_aper = Table()

    for ee, radius in ee_radii.items():
        print(f"Performing aperture photometry for radius equivalent to EE = {ee}% for filter {filt}")
        aperture = CircularAperture(positions, r=radius)
        annulus_aperture = CircularAnnulus(positions, r_in=sky["sky_in"], r_out=sky["sky_out"])
        annulus_mask = annulus_aperture.to_mask(method='center')

        bkg_median = []
        for mask in annulus_mask:
            annulus_data = mask.multiply(data)
            annulus_data_1d = annulus_data[mask.data > 0]
            _, median_sigclip, _ = sigma_clipped_stats(annulus_data_1d)
            bkg_median.append(median_sigclip)
        bkg_median = np.array(bkg_median)

        phot = aperture_photometry(data, aperture, method='exact')
        phot['annulus_median'] = bkg_median
        phot['aper_bkg'] = bkg_median * aperture.area
        phot['aper_sum_bkgsub'] = phot['aperture_sum'] - phot['aper_bkg']

        apcorr = [aper_table.loc[filt]['apcorr70']]

        phot['aper_sum_corrected'] = phot['aper_sum_bkgsub'] * apcorr

        phot['mag_corrected'] = -2.5 * np.log10(phot['aper_sum_corrected']) + dict_utils[filt]['VegaMAG zp modB']

        table_aper.add_column(phot['aperture_sum'], name='aper_sum_' + ee)
        table_aper.add_column(phot['annulus_median'], name='annulus_median_' + ee)
        table_aper.add_column(phot['aper_bkg'], name='aper_bkg_ee_' + ee)
        table_aper.add_column(phot['aper_sum_bkgsub'], name='aper_sum_bkgsub_' + ee)
        table_aper.add_column(phot['aper_sum_corrected'], name='aper_sum_corrected_' + filt) 
        table_aper.add_column(phot['mag_corrected'], name='mag_corrected_' + filt)

        dict_aper[det][filt]['aperture phot table'] = table_aper

    toc = time.perf_counter()
    print("Time Elapsed:", toc - tic)

    return


aperture_phot(det=det, filt=filt1)
aperture_phot(det=det, filt=filt2)

Derive Zeropoints#
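
As shown in the cells below, the zeropoint for each filter is the sigma-clipped median of the difference between the aperture-corrected magnitudes and the PSF instrumental magnitudes of the cross-matched stars:

$$ \mathrm{zp} = \mathrm{median}\left(m_{\rm aper,corr} - m_{\rm PSF,inst}\right) $$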

plt.figure(figsize=(14, 8))
plt.clf()

ax1 = plt.subplot(2, 1, 1)

ax1.set_xlabel(filt1, fontdict=font2)
ax1.set_ylabel('Zeropoint', fontdict=font2)

idx_zp_1, d2d_zp_1, _ = match_coordinates_sky(dict_aper[det][filt1]['stars for ap phot matched']['radec'], radec_f115w_inst)

sep_constraint_zp_1 = d2d_zp_1 < max_sep

f115w_ap_matched = np.array(dict_aper[det][filt1]['aperture phot table']['mag_corrected_' + filt1][sep_constraint_zp_1])
f115w_psf_matched = np.array(df_f115w[filt1 + '_inst'][idx_zp_1[sep_constraint_zp_1]])

diff_f115w = f115w_ap_matched - f115w_psf_matched
_, zp_f115w, zp_sigma_f115w = sigma_clipped_stats(diff_f115w)

xlim0 = -9
xlim1 = -5
ylim0 = np.mean(diff_f115w) - 0.5
ylim1 = np.mean(diff_f115w) + 0.5

ax1.set_xlim(xlim0, xlim1)
ax1.set_ylim(ylim0, ylim1)

ax1.xaxis.set_major_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax1.yaxis.set_major_locator(ticker.AutoLocator())
ax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax1.scatter(f115w_psf_matched, diff_f115w, s=50, color='k')
ax1.plot([xlim0, xlim1], [zp_f115w, zp_f115w], color='r', lw=5, ls='--')
ax1.text(xlim0 + 0.05, ylim1 - 0.15, filt1 + rf' Zeropoint = {zp_f115w:5.3f} $\pm$ {zp_sigma_f115w:5.3f}', color='k', fontdict=font2)
                
ax2 = plt.subplot(2, 1, 2)

ax2.set_xlabel(filt2, fontdict=font2)
ax2.set_ylabel('Zeropoint', fontdict=font2)

idx_zp_2, d2d_zp_2, _ = match_coordinates_sky(dict_aper[det][filt2]['stars for ap phot matched']['radec'], radec_f200w_inst)

sep_constraint_zp_2 = d2d_zp_2 < max_sep

f200w_ap_matched = np.array(dict_aper[det][filt2]['aperture phot table']['mag_corrected_' + filt2][sep_constraint_zp_2])
f200w_psf_matched = np.array(df_f200w[filt2 + '_inst'][idx_zp_2[sep_constraint_zp_2]])

diff_f200w = f200w_ap_matched - f200w_psf_matched
_, zp_f200w, zp_sigma_f200w = sigma_clipped_stats(diff_f200w)

xlim0 = -9
xlim1 = -5
ylim0 = np.mean(diff_f200w) - 0.5
ylim1 = np.mean(diff_f200w) + 0.5

ax2.set_xlim(xlim0, xlim1)
ax2.set_ylim(ylim0, ylim1)

ax2.xaxis.set_major_locator(ticker.AutoLocator())
ax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax2.yaxis.set_major_locator(ticker.AutoLocator())
ax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax2.scatter(f200w_psf_matched, diff_f200w, s=50, color='k')
ax2.plot([xlim0, xlim1], [zp_f200w, zp_f200w], color='r', lw=5, ls='--')
ax2.text(xlim0 + 0.05, ylim1 - 0.15, filt2 + rf' Zeropoint = {zp_f200w:5.3f} $\pm$ {zp_sigma_f200w:5.3f}', color='k', fontdict=font2)
                
plt.tight_layout()

Import input photometry#

if os.path.isfile('./pointsource.cat'):
    input_cat = './pointsource.cat'

else:
    
    print("Downloading input pointsource catalog")

    boxlink_input_cat = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/stellar_photometry/pointsource.cat'
    boxfile_input_cat = './pointsource.cat'
    request.urlretrieve(boxlink_input_cat, boxfile_input_cat)
    input_cat = './pointsource.cat'

cat = pd.read_csv(input_cat, header=None, sep=r'\s+', names=['ra_in', 'dec_in', 'f070w_in', 'f115w_in',
                                                             'f200w_in', 'f277w_in', 'f356w_in', 'f444w_in'],
                  comment='#', skiprows=7, usecols=range(0, 8))

cat.head()

Extract from the input catalog the stars that fall within the region analyzed above.

lim_ra_min = np.min(radec_f115w.ra)
lim_ra_max = np.max(radec_f115w.ra)
lim_dec_min = np.min(radec_f115w.dec)
lim_dec_max = np.max(radec_f115w.dec)

cat_sel = cat[(cat['ra_in'] > lim_ra_min) & (cat['ra_in'] < lim_ra_max) & (cat['dec_in'] > lim_dec_min)
              & (cat['dec_in'] < lim_dec_max)]

Calibrated Color-Magnitude Diagram#
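
The instrumental PSF magnitudes are calibrated by applying the zeropoints derived above, i.e. $m_{\rm F115W} = m_{\rm F115W,inst} + \mathrm{zp}_{\rm F115W}$ (and similarly for F200W); the error bars on the output CMD are the mean magnitude and color errors computed in 1-mag bins.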

plt.figure(figsize=(12, 14))
plt.clf()

ax1 = plt.subplot(1, 2, 1)

mag1_in = np.array(cat_sel['f115w_in'])
mag2_in = np.array(cat_sel['f200w_in'])
diff_in = mag1_in - mag2_in

xlim0 = -0.25
xlim1 = 1.75
ylim0 = 25
ylim1 = 15 
ax1.set_xlim(xlim0, xlim1)
ax1.set_ylim(ylim0, ylim1)

ax1.xaxis.set_major_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax1.yaxis.set_major_locator(ticker.AutoLocator())
ax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax1.scatter(mag1_in - mag2_in, mag1_in, s=1, color='k')

ax1.set_xlabel(filt1 + ' - ' + filt2, fontdict=font2)
ax1.set_ylabel(filt1, fontdict=font2)
ax1.text(xlim0 + 0.15, 15.5, "Input", fontdict=font2)

ax2 = plt.subplot(1, 2, 2)

ax2.set_xlim(xlim0, xlim1)
ax2.set_ylim(ylim0, ylim1)

ax2.xaxis.set_major_locator(ticker.AutoLocator())
ax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax2.yaxis.set_major_locator(ticker.AutoLocator())
ax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())

f115w = f115w_inst + zp_f115w 
f200w = f200w_inst + zp_f200w

maglim = np.arange(18, 25, 1)
mags = []
errs_mag = []
errs_col = []

for i in np.arange(0, len(maglim) - 1, 1):

    # mean magnitude and photometric errors in each 1-mag bin (color errors added in quadrature)
    mag = (maglim[i] + maglim[i + 1]) / 2
    err_mag1 = ef115w_inst[(f115w > maglim[i]) & (f115w < maglim[i + 1])]
    err_mag2 = ef200w_inst[(f115w > maglim[i]) & (f115w < maglim[i + 1])]
    err_mag = np.mean(err_mag1)
    err_temp = np.sqrt(err_mag1**2 + err_mag2**2)
    err_col = np.mean(err_temp)

    errs_mag.append(err_mag)
    errs_col.append(err_col)
    mags.append(mag)

col = [0] * (len(maglim) - 1)

ax2.errorbar(col, mags, yerr=errs_mag, xerr=errs_col, fmt='o', color='k')
        
ax2.scatter(f115w - f200w, f115w, s=1, color='k')
ax2.text(xlim0 + 0.15, 15.5, "Output", fontdict=font2)

ax2.set_xlabel(filt1 + ' - ' + filt2, fontdict=font2)
ax2.set_ylabel(filt1, fontdict=font2)

plt.tight_layout()

Comparison between input and output photometry#
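
The input catalog is cross-matched with the calibrated output catalogs, and the agreement is summarized by the sigma-clipped median and standard deviation of $\Delta\,\mathrm{mag} = m_{\rm in} - m_{\rm out}$ for the photometry, and of $\Delta\mathrm{RA}\,\cos(\mathrm{Dec})$ and $\Delta\mathrm{Dec}$ (in mas) for the astrometry.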

plt.figure(figsize=(14, 8))
plt.clf()

ax1 = plt.subplot(2, 1, 1)

ax1.set_xlabel(filt1, fontdict=font2)
ax1.set_ylabel(r'$\Delta$ Mag', fontdict=font2)

radec_input = SkyCoord(cat_sel['ra_in'], cat_sel['dec_in'], unit='deg')

idx_f115w_cfr, d2d_f115w_cfr, _ = match_coordinates_sky(radec_input, radec_f115w)

sep_f115w_cfr = d2d_f115w_cfr < max_sep

f115w_inp_cfr = np.array(cat_sel['f115w_in'][sep_f115w_cfr])
f115w_psf_cfr = np.array(f115w[idx_f115w_cfr[sep_f115w_cfr]])

diff_f115w_cfr = f115w_inp_cfr - f115w_psf_cfr
_, med_diff_f115w_cfr, sig_diff_f115w_cfr = sigma_clipped_stats(diff_f115w_cfr)

xlim0 = 16
xlim1 = 24.5
ylim0 = np.mean(diff_f115w_cfr) - 0.5
ylim1 = np.mean(diff_f115w_cfr) + 0.5

ax1.set_xlim(xlim0, xlim1)
ax1.set_ylim(ylim0, ylim1)

ax1.xaxis.set_major_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax1.yaxis.set_major_locator(ticker.AutoLocator())
ax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax1.scatter(f115w_psf_cfr, diff_f115w_cfr, s=5, color='k')
ax1.plot([xlim0, xlim1], [0, 0], color='r', lw=5, ls='--')
text = rf'{filt1} $\Delta$ Mag = {med_diff_f115w_cfr:5.3f} $\pm$ {sig_diff_f115w_cfr:5.3f}'
ax1.text(xlim0 + 0.05, ylim1 - 0.15, text, color='k', fontdict=font2)

ax2 = plt.subplot(2, 1, 2)

ax2.set_xlabel(filt2, fontdict=font2)
ax2.set_ylabel(r'$\Delta$ Mag', fontdict=font2)

idx_f200w_cfr, d2d_f200w_cfr, _ = match_coordinates_sky(radec_input, radec_f200w)

sep_f200w_cfr = d2d_f200w_cfr < max_sep

f200w_inp_cfr = np.array(cat_sel['f200w_in'][sep_f200w_cfr])
f200w_psf_cfr = np.array(f200w[idx_f200w_cfr[sep_f200w_cfr]])

diff_f200w_cfr = f200w_inp_cfr - f200w_psf_cfr
_, med_diff_f200w_cfr, sig_diff_f200w_cfr = sigma_clipped_stats(diff_f200w_cfr)

xlim0 = 16
xlim1 = 24
ylim0 = np.mean(diff_f200w_cfr) - 0.5 
ylim1 = np.mean(diff_f200w_cfr) + 0.5

ax2.set_xlim(xlim0, xlim1)
ax2.set_ylim(ylim0, ylim1)

ax2.xaxis.set_major_locator(ticker.AutoLocator())
ax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax2.yaxis.set_major_locator(ticker.AutoLocator())
ax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax2.scatter(f200w_psf_cfr, diff_f200w_cfr, s=5, color='k')
ax2.plot([xlim0, xlim1], [0, 0], color='r', lw=5, ls='--')

text = rf'{filt2} $\Delta$ Mag = {med_diff_f200w_cfr:5.3f} $\pm$ {sig_diff_f200w_cfr:5.3f}'

ax2.text(xlim0 + 0.05, ylim1 - 0.15, text, color='k', fontdict=font2)

plt.tight_layout()

plt.figure(figsize=(12, 6))

ax1 = plt.subplot(1, 2, 1)

xlim0 = -10
xlim1 = 10
ylim0 = -10
ylim1 = 10

ax1.set_xlim(xlim0, xlim1)
ax1.set_ylim(ylim0, ylim1)

ax1.xaxis.set_major_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax1.yaxis.set_major_locator(ticker.AutoLocator())
ax1.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax1.set_xlabel(r'$\Delta$ RA (mas)', fontdict=font2)
ax1.set_ylabel(r'$\Delta$ Dec (mas)', fontdict=font2)
ax1.set_title(filt1, fontdict=font2)

ra_f115w_inp_cfr = np.array(cat_sel['ra_in'][sep_f115w_cfr])
ra_f115w_psf_cfr = np.array(radec_f115w.ra[idx_f115w_cfr[sep_f115w_cfr]])

dec_f115w_inp_cfr = np.array(cat_sel['dec_in'][sep_f115w_cfr])
dec_f115w_psf_cfr = np.array(radec_f115w.dec[idx_f115w_cfr[sep_f115w_cfr]])

dec_rad_f115w = np.radians(dec_f115w_psf_cfr)
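# Dec in radians: the RA offsets below are scaled by cos(Dec) to give true on-sky angular offsets (in mas)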

diffra_f115w_cfr = ((((ra_f115w_inp_cfr - ra_f115w_psf_cfr) * np.cos(dec_rad_f115w)) * u.deg).to(u.mas) / (1 * u.mas))

_, med_diffra_f115w_cfr, sig_diffra_f115w_cfr = sigma_clipped_stats(diffra_f115w_cfr)

diffdec_f115w_cfr = (((dec_f115w_inp_cfr - dec_f115w_psf_cfr) * u.deg).to(u.mas) / (1 * u.mas))

_, med_diffdec_f115w_cfr, sig_diffdec_f115w_cfr = sigma_clipped_stats(diffdec_f115w_cfr)

ax1.scatter(diffra_f115w_cfr, diffdec_f115w_cfr, s=1, color='k')
ax1.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')
ax1.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

text = rf'$\Delta$ RA (mas) = {med_diffra_f115w_cfr:5.3f} $\pm$ {sig_diffra_f115w_cfr:5.3f}'
ax1.text(xlim0 + 0.05, ylim1 - 1.50, text, color='k', fontdict=font2)

text = rf'$\Delta$ Dec (mas) = {med_diffdec_f115w_cfr:5.3f} $\pm$ {sig_diffdec_f115w_cfr:5.3f}'
ax1.text(xlim0 + 0.05, ylim1 - 3.0, text, color='k', fontdict=font2)

ax2 = plt.subplot(1, 2, 2)

xlim0 = -10
xlim1 = 10
ylim0 = -10
ylim1 = 10

ax2.set_xlim(xlim0, xlim1)
ax2.set_ylim(ylim0, ylim1)
ax2.set_title(filt2, fontdict=font2)

ax2.xaxis.set_major_locator(ticker.AutoLocator())
ax2.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax2.yaxis.set_major_locator(ticker.AutoLocator())
ax2.yaxis.set_minor_locator(ticker.AutoMinorLocator())

ax2.set_xlabel(r'$\Delta$ RA (mas)', fontdict=font2)
ax2.set_ylabel(r'$\Delta$ Dec (mas)', fontdict=font2)

ra_f200w_inp_cfr = np.array(cat_sel['ra_in'][sep_f200w_cfr])
ra_f200w_psf_cfr = np.array(radec_f200w.ra[idx_f200w_cfr[sep_f200w_cfr]])

dec_f200w_inp_cfr = np.array(cat_sel['dec_in'][sep_f200w_cfr])
dec_f200w_psf_cfr = np.array(radec_f200w.dec[idx_f200w_cfr[sep_f200w_cfr]])

dec_rad_f200w = np.radians(dec_f200w_psf_cfr)

diffra_f200w_cfr = ((((ra_f200w_inp_cfr - ra_f200w_psf_cfr) * np.cos(dec_rad_f200w)) * u.deg).to(u.mas) / (1 * u.mas))

_, med_diffra_f200w_cfr, sig_diffra_f200w_cfr = sigma_clipped_stats(diffra_f200w_cfr)

diffdec_f200w_cfr = (((dec_f200w_inp_cfr - dec_f200w_psf_cfr) * u.deg).to(u.mas) / (1 * u.mas))

_, med_diffdec_f200w_cfr, sig_diffdec_f200w_cfr = sigma_clipped_stats(diffdec_f200w_cfr)

ax2.scatter(diffra_f200w_cfr, diffdec_f200w_cfr, s=1, color='k')
ax2.plot([0, 0], [ylim0, ylim1], color='k', lw=2, ls='--')
ax2.plot([xlim0, xlim1], [0, 0], color='k', lw=2, ls='--')

text = rf'$\Delta$ RA (mas) = {med_diffra_f200w_cfr:5.3f} $\pm$ {sig_diffra_f200w_cfr:5.3f}'
ax2.text(xlim0 + 0.05, ylim1 - 1.50, text, color='k', fontdict=font2)

text = rf'$\Delta$ Dec (mas) = {med_diffdec_f200w_cfr:5.3f} $\pm$ {sig_diffdec_f200w_cfr:5.3f}'
ax2.text(xlim0 + 0.05, ylim1 - 3.0, text, color='k', fontdict=font2)

plt.tight_layout()

Final notes#

This notebook provides a general overview of how to perform PSF photometry using the photutils package. The choice of the parameters adopted in the different reduction steps, as well as the choice of the PSF model, depends on the specific science case. Moreover, a detailed analysis that would allow us to provide recommendations on how to set those parameters, and to outline the differences in the output photometry when different PSF models are adopted (single PSF vs. PSF grid, number of PSFs in the grid, etc.), will only be possible once real data are available after instrument commissioning. In this context, we note that one of the selected ERS programs (ERS 1334, The Resolved Stellar Populations Early Release Science Program) will provide a fundamental benchmark to explore how the different choices outlined above impact the quality of the PSF photometry in a crowded stellar field.

About this Notebook#

Authors: Matteo Correnti, JWST/NIRCam STScI Scientist II & Larry Bradley, Branch Deputy, Data Analysis Tools Branch
Updated on: 2024-10-22
