Pandeia Tutorial#
Kernel Information#
To run this notebook, please select the “Roman Calibration” kernel at the top right of your window.
This notebook is read-only. You can run cells and make edits, but you must save changes to a different location. We recommend saving the notebook within your home directory, or to a new folder within your home (e.g. file > save notebook as > my-nbs/nb.ipynb). Note that a directory must exist before you attempt to add a notebook to it.
Introduction#
This notebook provides useful examples of common uses of Pandeia.
Pandeia is a high-fidelity exposure time calculator developed by STScI to characterize optimal observing setups for user-created astronomical scenes. It supports the Roman Wide Field Instrument (WFI) as well as the James Webb Space Telescope’s full complement of instruments. For more information about its functionality with Roman, refer to the Overview of Pandeia documentation.
Due to the complexity of its simulations, Pandeia is best used on scenes that encompass ~5% of a single WFI detector. (To simulate larger areas, see the STIPS notebook tutorial.)
Reference Data#
The cell below checks whether the ancillary reference files for the pandeia package are installed. If not, it will download the ancillary reference files and install them under your home directory (i.e., ${HOME}/refdata/).
Local Run Settings#
If you want to run the notebook on your local machine, refer to the local installation instructions before proceeding with the notebook. The instructions provide important information about setting up your environment, installing dependencies, and adding the scripts that help with the reference data installation to your working directory.
Depending on which (if any) reference data are missing, this cell may take several minutes to execute.
On the Roman Research Nexus#
If you are working on the Nexus, then the ancillary reference data are pre-installed and this cell will execute instantly.
import os
import sys
import importlib.util

try:
    import notebook_data_dependencies as ndd
    local = True
except ImportError:
    local = False

# If the module could not be imported directly, get the directory with the shared install script
if not local:
    notebook_dir = os.getcwd()
    shared_path = os.path.abspath(
        os.path.join(notebook_dir, '..', '..', 'shared', 'notebook_data_dependencies.py')
    )
    if os.path.exists(shared_path):
        print(f"Loading notebook_data_dependencies from shared location: {shared_path}")
        spec = importlib.util.spec_from_file_location("notebook_data_dependencies", shared_path)
        ndd = importlib.util.module_from_spec(spec)
        sys.modules['notebook_data_dependencies'] = ndd  # Optional: makes subsequent imports work
        spec.loader.exec_module(ndd)
    else:
        raise FileNotFoundError(f"Local install script not found at {shared_path}")

if not local:
    print("Running local data dependency installation...")
    result = ndd.install_files(packages=['pandeia', 'synphot'])
    # Update environment variables (if necessary) and print reference data paths
    print('Reference data paths set to:')
    for k, v in result.items():
        if not v['pre_installed']:
            os.environ[k] = v['path']
        print(f"\t{k} = {v['path']}")
else:
    print("Running on RNN — data already available, skipping local install.")
Running on RNN — data already available, skipping local install.
Imports#
Here we limit our imports from Pandeia to the basic functions used in our calculations:
build_default_calc to create a configuration dictionary
perform_calculation to perform a calculation with a defined configuration dictionary
We also use functions in the SciPy library:
scipy.optimize.minimize_scalar to help with optimizing signal-to-noise ratios (SNRs)
scipy.interpolate.interp1d to calculate a desired target magnitude for a given observing setup
from pandeia.engine.calc_utils import build_default_calc
from pandeia.engine.perform_calculation import perform_calculation
from scipy.optimize import minimize_scalar
from scipy import interpolate
import rbt
from textwrap import wrap
# General imports
import json
import numpy as np
from astropy.io import fits
We set FILTER as a global variable before beginning, since all examples use the same F129 imaging filter. Please note that filter definitions in Pandeia are case-sensitive and only accept lower-case filter names.
# Pandeia's filter definition is case-sensitive and will only take lower-case letters for filter names
FILTER = 'f129'
Examples#
Calculate a Scene’s Signal-to-Noise Ratio#
In this first example, we calculate the expected SNR for a point source with a flat spectral distribution (default target) normalized to 25 AB magnitudes. We place the source on Detector #1 (internally: WFI01) and take three exposures in band F129 with the multi-accumulation (MA) table “im_135_8”, with no truncation (139.1487 seconds of total exposure time). MA tables describe the sequence of individual reads that are combined into resultants and comprise the up-the-ramp sampling during a single exposure of the WFI detectors. For more information on the WFI detectors, please refer to the RDox documentation on WFI and for the MA tables, please refer to the RDox documentation on MA tables.
We first create a default calculation using Pandeia’s built-in function build_default_calc(<telescope>, <instrument>, <mode>):
# Creating a default calculation using Roman's WFI with an imaging mode
calc = build_default_calc('roman', 'wfi', 'imaging')
Let’s take a look at how the default calculation is set up.
# Print the default calculation as a formatted JSON view
print(json.dumps(calc, indent=2, sort_keys=True))
{
"background": "hlwas-medium_field1",
"background_level": "medium",
"calculation": {
"effects": {
"random_seed": null,
"saturation": null
},
"noise": {
"crs": null,
"dark": null,
"excess": null,
"ffnoise": null,
"readnoise": null,
"scatter": null
}
},
"configuration": {
"detector": {
"ma_table_name": "im_135_8",
"nexp": 1,
"nresultants": -1,
"subarray": "full"
},
"instrument": {
"aperture": "imaging",
"detector": "wfi01",
"disperser": null,
"filter": "f158",
"instrument": "wfi",
"mode": "imaging"
}
},
"scene": [
{
"position": {
"orientation": 0.0,
"x_offset": 0.0,
"y_offset": 0.0
},
"shape": {
"geometry": "point"
},
"spectrum": {
"extinction": {
"bandpass": "j",
"law": "mw_rv_31",
"unit": "mag",
"value": 0.0
},
"extinction_first": true,
"lines": [],
"name": "generic source",
"normalization": {
"norm_flux": 0.001,
"norm_fluxunit": "mjy",
"norm_wave": 1.5749,
"norm_waveunit": "microns",
"type": "at_lambda"
},
"redshift": 0.0,
"sed": {
"sed_type": "flat",
"unit": "fnu",
"z": 0.0
}
}
}
],
"strategy": {
"aperture_size": 0.2,
"background_subtraction": true,
"display_string": "Imaging Aperture Photometry",
"is_aperture_ee": false,
"method": "imagingapphot",
"sky_annulus": [
0.4,
0.6
],
"target_source": "1",
"target_type": "coords",
"target_xy": [
0.0,
0.0
],
"units": "arcsec"
}
}
The call to build_default_calc created a scene with a single point source observed with SCA01, the F158 filter, the “im_135_8” MA table with no truncation, and a single exposure. With the WFI, an exposure refers to a single multi-accum sequence of the detector array at a single dither point in the dither pattern.
Next, we define the observing setup and make some changes to the default setting. In this case, we will set up an observation with 3 exposures that is truncated after 6 resultants.
# Editing the configuration file
calc['configuration']['instrument']['filter'] = FILTER # Setting the filter to F129
calc['configuration']['detector']['nexp'] = 3 # Taking three exposures of multi-accum sequence
calc['configuration']['detector']['nresultants'] = 6 # Truncate after 6 resultants
Next, we normalize the flux density of the source to 25 AB magnitudes:
# Define the normalization value
MAG = 25
calc['scene'][0]['spectrum']['normalization']['norm_flux'] = MAG
calc['scene'][0]['spectrum']['normalization']['norm_fluxunit'] = 'abmag'
Finally, we perform the signal-to-noise calculation using Pandeia’s built-in function perform_calculation and print the result:
# Perform the calculation
report = perform_calculation(calc)
# Extract the S/N value
sn = report['scalar']['sn']
print(f'Estimated S/N: {sn:.2f}')
Estimated S/N: 11.73
Note that this step may generate a WARNING from synphot that the spectrum is extrapolated, which can be ignored.
Running Pandeia for Roman may return a warning such as: if np.log(abs(val)) < -1*precision and val != 0.0. This is related to a JWST-specific test for float precision, and can be ignored.
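If you prefer to hide these benign messages, the optional sketch below filters them using Python’s standard warnings machinery, which is how astropy/synphot warnings are typically emitted; the exact message text may differ between versions, so adjust the pattern as needed.
# Optional: suppress the benign extrapolation warnings discussed above.
# This assumes the messages come through Python's standard warnings machinery
# (typical for astropy/synphot); adjust the regex if the wording differs.
import warnings

with warnings.catch_warnings():
    warnings.filterwarnings('ignore', message='.*extrapolat.*')
    report = perform_calculation(calc)
print(f"Estimated S/N (warnings suppressed): {report['scalar']['sn']:.2f}")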
Calculating Magnitude and Optimizing Exposures for Roman WFI Simulations#
For the next example, we begin by determining the corresponding magnitude at a given signal-to-noise ratio (SNR) and setup parameters. Next, we extend this analysis to calculate the optimal number of exposures required to reach a target SNR for a given source flux.
The following helper functions use Pandeia to simulate a range of scenes at different magnitudes in order to estimate the best magnitude for a specified SNR and a number of exposures. As above, we assume a point source with a flat spectrum, and the MA table is set to the “im_135_8” table but this time without any truncation.
Step 1: Calculating the Magnitude for a Given Setup#
In the first step, we estimate the limiting magnitude for a point source at a given SNR. This involves iterating over a range of magnitudes, computing the SNR for each, and interpolating the results to determine the best magnitude for the specified SNR. The observing parameters include the number of exposures and a specified filter.
Example Use Case:
SNR = 5
Number of exposures = 10
Filter = F129
The following helper functions use Pandeia to compute the star’s magnitude that yields the desired SNR.
def compute_mag(filt, nexp, bracket=(18, 30)):
    """
    Method to compute the S/N from a range of magnitudes and a fixed number of exposures

    Parameters
    ----------
    filt : str
        Name of Roman WFI filter
    nexp : int
        Number of exposures
    bracket : tuple
        Range of magnitudes to test. default: (18, 30)

    Returns
    -------
    mag_range : numpy.ndarray
        An array of magnitudes used to compute the SNRs
    computed_snrs : list
        A list of computed SNRs from Pandeia calculations
    """
    # Set up default Roman observation
    calc = build_default_calc('roman', 'wfi', 'imaging')

    # Modify defaults to place a source with an AB magnitude
    calc['scene'][0]['spectrum']['normalization']['norm_fluxunit'] = 'abmag'
    calc['scene'][0]['spectrum']['normalization']['norm_waveunit'] = 'um'

    # Set number of exposures and filter
    calc['configuration']['detector']['nexp'] = nexp
    calc['configuration']['instrument']['filter'] = filt

    # Create an array of magnitudes over the range of interest
    mag_range = np.arange(bracket[0], bracket[1] + 1, 1)

    # Create an empty list to save the computations
    computed_snrs = []

    # Compute the SNR for each magnitude
    for mag in mag_range:
        calc['scene'][0]['spectrum']['normalization']['norm_flux'] = mag
        report = perform_calculation(calc)
        computed_snrs.append(report['scalar']['sn'])

    return mag_range, computed_snrs
def _mag2sn_(mag_range, computed_snrs, sntarget):
    """
    Calculate a magnitude given a desired SNR by interpolating (computed_snrs, mag_range) from compute_mag

    Parameters
    ----------
    mag_range : numpy.ndarray
        An array of magnitudes used in calculating a range of SNRs in compute_mag
    computed_snrs : list
        An array of computed SNRs for mag_range from the Pandeia calculations
    sntarget : float
        Required S/N

    Returns
    -------
    mag : float
        Magnitude corresponding to the requested S/N
    """
    interpolator = interpolate.interp1d(computed_snrs, mag_range)
    mag = interpolator(sntarget)

    return mag
Now we can estimate the magnitude of a star that reaches S/N = 5 in 10 exposures:
SN = 5
NEXP = 10
# Run minimizer function to estimate the magnitude given sn and nexp
mag_range, computed_snrs = compute_mag(FILTER, NEXP)
mag = _mag2sn_(mag_range, computed_snrs, SN)
print(f'Estimated magnitude: {mag:.2f}')
Estimated magnitude: 26.76
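As a quick sanity check (a sketch using only the Pandeia calls shown above), we can place a source of this magnitude in a fresh calculation with the same filter and 10 exposures and confirm that the reported S/N comes out close to 5:
# Verify the interpolated magnitude by running Pandeia directly
calc = build_default_calc('roman', 'wfi', 'imaging')
calc['configuration']['instrument']['filter'] = FILTER
calc['configuration']['detector']['nexp'] = NEXP
calc['scene'][0]['spectrum']['normalization']['norm_fluxunit'] = 'abmag'
calc['scene'][0]['spectrum']['normalization']['norm_flux'] = float(mag)
report = perform_calculation(calc)
print(f"S/N at magnitude {float(mag):.2f}: {report['scalar']['sn']:.2f}")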
Step 2: Determining Optimal Number of Exposures#
With the magnitude determined, we then calculate the optimal number of exposures required to achieve a specified S/N for a known flux. This is done by simulating observations with varying numbers of exposures and identifying the minimum exposure count necessary to meet or exceed the target S/N, which ensures efficient use of telescope time while maintaining data quality.
The following helper functions use Pandeia to simulate a range of scenes with different numbers of exposures in order to estimate the optimal observing time to reach the expected limiting magnitude for a source with a given flux. As above, we assume a point source with a flat spectrum, and the MA table is left at the default “im_135_8” table, truncated to 6 resultants.
def _nexp2sn_(nexp, calc, sntarget):
    """
    Optimize a S/N ratio given a number of exposures. This is a helper function
    used as an argument for scipy's minimize_scalar() as used in compute_nexp().

    Parameters
    ----------
    nexp : int
        The number of exposures used in an iteration of minimize_scalar()
    calc : dict
        A Pandeia calculation object
    sntarget : float
        Required S/N from the matching argument of compute_nexp()
    """
    calc['configuration']['detector']['nexp'] = int(nexp)
    etc = perform_calculation(calc)['scalar']

    return (sntarget - etc['sn'])**2
def compute_nexp(filt, sn, mag, bracket=(1, 1000), xtol=0.1):
    """
    Method to compute the number of exposures from S/N and magnitude

    Parameters
    ----------
    filt : str
        Name of Roman WFI filter
    sn : float
        Required S/N
    mag : float
        AB Magnitude of source
    bracket : tuple, default (1, 1000)
        Range of exposure counts to test
    xtol : float, default 0.1
        Target tolerance for minimizer

    Returns
    -------
    nexp : int
        Optimal number of exposures for specified S/N and magnitude
    report : dict
        Pandeia dictionary with optimal parameters
    exptime : float
        Exposure time for optimal observation
    """
    # Set up default Roman observation
    calc = build_default_calc('roman', 'wfi', 'imaging')

    # Modify defaults to place a source with an AB magnitude
    calc['scene'][0]['spectrum']['normalization']['norm_flux'] = mag
    calc['scene'][0]['spectrum']['normalization']['norm_fluxunit'] = 'abmag'
    calc['scene'][0]['spectrum']['normalization']['norm_waveunit'] = 'um'

    # Set filter
    calc['configuration']['instrument']['filter'] = filt

    # Check that the minimum of 1 exposure has a S/N lower than requested,
    # otherwise there is no sense in attempting to minimize nexp.
    calc['configuration']['detector']['nexp'] = 1
    calc['configuration']['detector']['nresultants'] = 6
    report = perform_calculation(calc)

    if report['scalar']['sn'] > sn:
        nexp = 1
    else:
        res = minimize_scalar(_nexp2sn_, bracket=bracket, bounds=bracket,
                              args=(calc, sn), method='bounded',
                              options={'xatol': xtol})

        # Take the optimization result and set it to nexp.
        # 'x' is the solution array in the optimization result object.
        # For more details on the minimize_scalar function, refer to
        # https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html
        nexp = int(res['x'])
        calc['configuration']['detector']['nexp'] = nexp
        report = perform_calculation(calc)

        # This generally returns a S/N less than the required amount.
        # Let's ensure that we get *AT LEAST* the required S/N for 2 reasons:
        # 1) better to err on the side of caution
        # 2) make code consistent with the above if clause
        if report['scalar']['sn'] < sn:
            nexp += 1

            # Re-calculate
            calc['configuration']['detector']['nexp'] = nexp
            report = perform_calculation(calc)

    exptime = report['scalar']['total_exposure_time']

    return nexp, report, exptime
For example, we can use the functions above to determine the optimal number of exposures to reach a SNR of 20 when observing a point source of magnitude 26.84 in the F129 band:
# Define S/N and Magnitude
SN = 20.
MAG = 26.84
# Compute values
nexp, etc, exptime = compute_nexp(FILTER, SN, MAG)
print(f'number of exposures: {nexp}')
print(f'actual S/N reached: {etc["scalar"]["sn"]:.2f}')
print(f'Exposure time: {exptime:.2f}')
number of exposures: 224
actual S/N reached: 20.04
Exposure time: 30460.92
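Because compute_nexp wraps the whole minimization, it can also be used to sketch how the required number of exposures and total exposure time scale with source brightness. The loop below is only an illustration (each call runs many Pandeia calculations, so it can take several minutes):
# Illustrative only: required exposures and total time for a few magnitudes at S/N = 20
for test_mag in [25.0, 26.0, 26.84]:
    n, rep, t = compute_nexp(FILTER, 20., test_mag)
    print(f"mag = {test_mag:5.2f} -> nexp = {n:4d}, total exposure time = {t:9.1f} s")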
Modifying the Spectral Energy Distribution#
While previous examples assume a point source with a flat SED, Pandeia also offers the ability to use a variety of different shapes and spectral inputs. In the example below, we calculate the SNR for an A0V star (Phoenix model) of magnitude 25 AB, observed in the F129 band, with 3 exposures of the default MA table (“im_135_8”, with no truncation). For more information on how to implement complex scenes with a variety of shapes and SEDs, please refer to the JWST Tutorials.
# Define the default observation
calc = build_default_calc('roman', 'wfi', 'imaging')
# Update the observation parameters
NEXP = 3
calc['configuration']['detector']['nexp'] = NEXP
calc['configuration']['instrument']['filter'] = FILTER
# Define the scene
MAG = 25
calc['scene'][0]['spectrum']['normalization']['norm_flux'] = MAG
calc['scene'][0]['spectrum']['normalization']['norm_fluxunit'] = 'abmag'
calc['scene'][0]['spectrum']['sed']['sed_type'] = 'phoenix'
calc['scene'][0]['spectrum']['sed']['key'] = 'a0v'
Print the results
report = perform_calculation(calc)
sn = report['scalar']['sn']
print(f'Estimated S/N: {sn:.2f}')
Estimated S/N: 16.12
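Switching to a different Phoenix model is a one-line change to the same configuration. For instance, the sketch below uses the 'g2v' key (the same key used in the next example) and re-runs the calculation:
# Same setup as above, but with a G2V Phoenix model instead of A0V
calc['scene'][0]['spectrum']['sed']['key'] = 'g2v'
report = perform_calculation(calc)
sn = report['scalar']['sn']
print(f'Estimated S/N for a G2V source: {sn:.2f}')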
Observing NGC2506-G31 with Roman WFI#
In this example, we show a real science case using NGC2506-G31, a G1V standard star that is used as a cross-mission calibration standard for both JWST and HST observations. We would like to see if Roman can observe the same star and if so, with which observing setup. We are interested in placing the star on SCA11.
# Creating a default calculation using Roman's WFI with an imaging mode
calc = build_default_calc('roman', 'wfi', 'imaging')
# Update the observation parameters
calc['configuration']['instrument']['filter'] = FILTER # Setting the filter to F129
calc['configuration']['instrument']['detector'] = 'wfi11' # Setting the detector to WFI11
calc['configuration']['detector']['nexp'] = 1 # Taking one exposure of multi-accum sequence
calc['configuration']['detector']['ma_table_name'] = 'im_135_8' # Using the default MA table
calc['configuration']['detector']['nresultants'] = -1 # No truncation of the MA table
# Setting up the source SED
calc['scene'][0]['spectrum']['sed']['sed_type'] = 'phoenix'
calc['scene'][0]['spectrum']['sed']['key'] = 'g2v' # Using the closest spectral type available to G1V
# Setting up the normalization parameters
calc['scene'][0]['spectrum']['normalization']['type'] = 'photsys'
calc['scene'][0]['spectrum']['normalization']['norm_flux'] = 16.260 # K-band magnitude of the source
calc['scene'][0]['spectrum']['normalization']['norm_fluxunit'] = 'vegamag'
calc['scene'][0]['spectrum']['normalization']['bandpass'] = '2mass,ks'
# Run the calculation
report = perform_calculation(calc)
With the calculation complete, we check whether there are any warning messages in the report. You can access the warnings through the “warnings” dictionary.
# Print warnings in a list
print("Warnings:")
for key, msg in report['warnings'].items():
    clean_key = key.replace('_', ' ').title()
    print(f"• {clean_key}:\n {msg}\n")
Warnings:
• Normalizephotsys Unsupported Norm Wave:
Unsupported configuration parameter, norm_wave, being passed to NormalizePhotsys.
• Normalizephotsys Unsupported Norm Waveunit:
Unsupported configuration parameter, norm_waveunit, being passed to NormalizePhotsys.
• Phoenix Unsupported Unit:
Unsupported configuration parameter, unit, being passed to Phoenix.
• Partial Saturated:
Partial saturation:
There are 1 pixels saturated at the end of a ramp. Partial ramps may still be used in some cases.
We see that there is a pixel that is partially saturated. If this concerns you, you can check the report to see the maximum number of resultants that avoids saturating the brightest pixel on the detector. You can access this information through the “sat_nresultants” key within the “scalar” dictionary.
print(report['scalar']['sat_nresultants'])
5
Let’s check this manually. The maximum number of resultants for the “im_135_8” MA table is 7, so we will work backward from the full (untruncated) table toward increasing truncation.
for n in reversed(range(7)):
    # Check for any warning message about saturation (both partial and full)
    # and stop the loop once the saturation warning no longer exists.
    # Adding 1 to the resultant because Python indexing starts at 0.
    resultant = n + 1
    calc['configuration']['detector']['nresultants'] = resultant
    report = perform_calculation(calc)

    partial = report['warnings'].get('partial_saturated')
    full = report['warnings'].get('full_saturated')

    if (partial is None) and (full is None):
        nresultant = resultant
        print(f'No saturation happens with resultant {nresultant}')
        break
No saturation happens with resultant 4
The answer turns out to be 4.
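As a final check (a sketch reusing the calculation object from the loop above), we can explicitly truncate the MA table to 4 resultants, re-run the calculation, and confirm that no saturation warnings remain while also noting the S/N that the truncated exposure delivers:
# Re-run the NGC2506-G31 calculation truncated to 4 resultants
calc['configuration']['detector']['nresultants'] = 4
report = perform_calculation(calc)
saturation_warnings = [key for key in report['warnings'] if 'saturated' in key]
print('Saturation warnings:', saturation_warnings if saturation_warnings else 'none')
print(f"Estimated S/N with 4 resultants: {report['scalar']['sn']:.2f}")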
Although we know the actual coordinates of this star, the Pandeia engine does not support a simulation at a specific location or time; it ships with canned backgrounds at locations that are representative of the footprints of the three Core Community Surveys (see the Background Models in the Roman WFI ETC section in RDox for more details).
If you would like to see how the SNR changes with time at the specific location of this star, you will need to use the web application of the ETC, available at the Roman WFI ETC.
Modifying the Background#
Pandeia defines the background using two values in the configuration dictionary: background and background_level. The combination of these two values determines whether pre-computed background data (a canned background) or user-supplied background data (a custom background) is used. As of Pandeia version 2025.9, the default values used by the engine are hlwas-medium_field1 for the background and medium for the background_level.
Canned Backgrounds#
The Pandeia engine ships with a set of pre-computed backgrounds generated by the Roman Backgrounds Tool (RBT) at various locations within the 3 Core Community Surveys (CCS). Please refer to the Pandeia Engine & Roman Interactive Sensitivity Tool (RIST) section “Tunable Parameters of RIST” for details. The following are valid values for the background configuration keys:
calc['background']
hlwas-medium_field1
hlwas-medium_field2
hlwas-wide_field1
hlwas-wide_field2
hlwas-wide_field3
hlwas-wide_field4
hltds
gbtds_mid_5stripe
calc['background_level']
low
medium (for hltds, only this background level is available)
high
In the code below, we show how to set up the imaging observation and change the background and level to hltds and medium, respectively.
# Get Default Parameters
calc = build_default_calc('roman', 'wfi', 'imaging')
# Update the background model to a different set
calc['background'] = 'hltds'
calc['background_level'] = 'medium'
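To see how much the choice of canned background matters for a given setup, the optional sketch below loops over a few of the values listed above at the medium level and compares the resulting S/N for the default point source:
# Optional: compare the S/N of the default scene under several canned backgrounds
for bg_name in ['hlwas-medium_field1', 'hlwas-wide_field1', 'hltds', 'gbtds_mid_5stripe']:
    calc = build_default_calc('roman', 'wfi', 'imaging')
    calc['configuration']['instrument']['filter'] = FILTER
    calc['background'] = bg_name
    calc['background_level'] = 'medium'  # 'medium' is valid for all of the backgrounds above
    report = perform_calculation(calc)
    print(f"{bg_name:>22}: S/N = {report['scalar']['sn']:.2f}")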
Custom Backgrounds#
If you are interested in exploring an observing setup with a background other than the pre-computed canned backgrounds, you can use the RBT (available on the Roman Research Nexus) to generate the background.
Another option is to download a calculation from the web ETC (available as a tar file in the Downloads tab under the Reports pane), where you will find the ‘background.fits’ file to import into the engine. The custom background needs to be defined as a list containing the wavelength and flux arrays, denoted by square brackets, and assigned to background; the background_level key is then ignored by the engine. The code below shows how to define the custom background in the engine using the background.fits file from the web ETC.
# Get Default Parameters
calc = build_default_calc('roman', 'wfi', 'imaging')

# Try to load a custom background from backgrounds.fits
custom_bg = None  # will stay None if the file is not found
bg_file = 'backgrounds.fits'

try:
    with fits.open(bg_file) as hdul:
        bg_wvl = hdul[1].data['wavelength']
        bg_flux = hdul[1].data['background']

    # Define the custom background
    custom_bg = [bg_wvl, bg_flux]

    # Attach the custom background to the calc dict
    calc['background'] = custom_bg
except FileNotFoundError:
    print(f"File '{bg_file}' not found – using default background.")
except Exception as e:
    print(f"Error reading '{bg_file}': {e} – using default background.")

# Perform the calculation (default background is used if custom_bg is None)
report = perform_calculation(calc)
File 'backgrounds.fits' not found – using default background.
Here, we show how to generate a custom background using the RBT and supply it to the engine. The code below simulates a background spectrum at a specific coordinate (RA = 34.5656 and Dec = -52.6140, both in decimal degrees) and wavelength (0.6291 microns). You can then choose a specific observable day to supply to the engine and save it if needed. Note that the RBT considers 366 calendar days in a year, with Day 0 corresponding to Jan 1. In this example, we look at the first observable day in the year.
# Define the coordinates
ra = 34.5656 # in degrees
dec = -52.6140 # in degrees
wavelength = 0.6291 # in microns
threshold = 1.5 # A cutoff value above the minimum background, used to calculate the number of good days
# Create the RBT instance
bg = rbt.background(ra, dec, wavelength, thresh=threshold)
# Look at all of the available observable days
good_days = bg.bkg_data['calendar']
# Define the day to look at the background spectrum for the target
this_day = good_days[0]
# Define the background spectrum
bg_wvl = bg.bkg_data['wave_array']
bg_flux = bg.bkg_data['total_bg'][this_day]
# Get Default Parameters
calc = build_default_calc('roman', 'wfi', 'imaging')
# Set the background configuration dictionary to the RBT-supplied background spectrum
calc['background'] = [bg_wvl, bg_flux]
# Save the background spectrum to a file if a file name has been provided
try:
    bg.write_background(thisday=this_day, background_file=your_filename)
except NameError:
    print("\n No file name provided to save Background, continue with calculation ...")
# Perform the calculation
report = perform_calculation(calc)
No file name provided to save Background, continue with calculation ...
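Instead of the first observable day, you may prefer the day with the lowest background. The sketch below follows the same indexing convention used above (total_bg indexed by the calendar-day value); treat that as an assumption about the RBT data structure rather than a documented guarantee:
# Pick the observable day with the lowest mean background
# (assumes total_bg can be indexed by the calendar-day values, as in the cell above)
mean_bg_per_day = [np.mean(bg.bkg_data['total_bg'][day]) for day in good_days]
best_day = good_days[int(np.argmin(mean_bg_per_day))]
print(f'Lowest-background observable day: {best_day}')

# Supply that day's spectrum to the engine and recompute the S/N
calc = build_default_calc('roman', 'wfi', 'imaging')
calc['background'] = [bg.bkg_data['wave_array'], bg.bkg_data['total_bg'][best_day]]
report = perform_calculation(calc)
print(f"Estimated S/N with the lowest-background day: {report['scalar']['sn']:.2f}")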
Importing input.json from the Web ETC into the Pandeia engine#
For calculations that completed successfully, the web ETC provides a downloadable tar file in the Downloads tab of the Reports pane that contains useful products for additional offline analysis. One of the files contains the input configuration for the calculation in a JSON file (input.json). In this example, we show how to read the input.json file into Pandeia and re-run the calculation.
# Defining file with ETC configuration
calculation = "input.json"

# Load the JSON file if it exists, otherwise use the previous scene
try:
    with open(calculation, 'r') as calcfile:
        calc = json.load(calcfile)
except FileNotFoundError:
    print(f"Warning: File '{calculation}' not found – using last defined scene in notebook.")
    calculation = None
except Exception as e:
    print(f"Error reading '{calculation}': {e} – using last defined scene in notebook.")

# Run the calculation
result = perform_calculation(calc)

# Extract the pretty-printed web report
report = result['web_report']

# Formatted output
if calculation:
    print("\n" + "-" * 20 + "\033[1m RESULTS from ETC scene\033[0m" + "-" * 20)
else:
    print("\n" + "-" * 30 + "\033[1m Default scene \033[0m" + "-" * 30)

for category in report:
    # ANSI pretty-print the report categories and apply word wrapping
    print("\033[1m" + category["category"] + "\033[0m")
    print('-' * len(category["category"]))
    for item in category["items"]:
        if "value" in item:
            namelist = wrap(item["name"], width=36, subsequent_indent=" ")
            if "indent" in item and item["indent"]:
                namelist[-1] = " " + namelist[-1]
            for i in range(len(namelist) - 1):
                print(f"{namelist[i]:<36}")
            print(f"{namelist[-1]:<36} {item['value']:>16} {item['unit']:<10}")
        else:
            print(f"{item['name']}")
    print()

print("-" * 30 + "\033[1m WARNINGS \033[0m" + "-" * 30)
for x in result['warnings']:
    # ANSI pretty-print the warnings and apply word wrapping
    warning_display = wrap(result["warnings"][x], width=45, subsequent_indent=" ")
    print(f"{x:<25}: {warning_display[0]}")
    if len(warning_display) > 1:
        for idx in range(len(warning_display) - 1):
            print(f"{' ' * 25} {warning_display[idx + 1]}")
print("-" * 70)
Warning: File 'input.json' not found – using last defined scene in notebook.
------------------------------ Default scene ------------------------------
Results
-------
Extracted Signal-to-Noise Ratio 16.84
Extracted Flux 5.64 e-/s
Standard Deviation of Extracted Flux 0.33 e-/s
Brightest Pixel Rate 3.03 e-/s
Maximum Fraction of Saturation 4.22e-03
Maximum Number of Resultants Before
Saturation 8 resultants
Instrument and Detector
-----------------------
Instrument Detector/Filter/Disperser wfi01, f158, n/a
Integration Duration 139.15 s
Single Exposure Time 139.15 s
Fraction of Time Spent Collecting
Flux 1.00
Effective Exposure Time 135.99 s
Science Time 135.99 s
Total time collecting photons 139.15 s
Saturation Time 139.15 s
Extraction Strategy Settings
----------------------------
Radius of Extraction Aperture 0.20 arcsec
Area of Extraction Aperture 10.39 pixels
Effective Wavelength 1.57 microns
Extraction Aperture Position (0.00, 0.00) arcsec
Background
----------
Input Background Surface Brightness 0.14 MJy/sr
Area of Background Measurement 51.93 pixels
Total Sky Background Flux in
Background Aperture 4.06 e-/s
Total Flux in Background Aperture 4.12 e-/s
Fraction of Total Background due to
Signal from Scene 1.61e-02
Average Number of Cosmic Ray Events
per Pixel per Ramp 3.41e-03 events/pixel/read
------------------------------ WARNINGS ------------------------------
----------------------------------------------------------------------
Additional Resources#
The Roman User Documentation’s “Pandeia for Roman” page and associated overview.
Full API references for Pandeia Engine inputs and Pandeia Engine outputs.
The Roman Help Desk, an official outlet for user questions about Pandeia.
About this notebook#
Authors: Justin Otor, Eunkyu Han, Harish Khandrika, Rosa Diaz
Updated On: 2025-12-08