Hubble Source Catalog SWEEPS Proper Motion Notebook#

2019 - 2022, Steve Lubow, Rick White, Trenton McKinney#

A new MAST interface supports queries to the current and previous versions of the Hubble Source Catalog. It allows searches of the summary table (with multi-filter mean photometry) and the detailed table (with all the multi-epoch measurements). It also has an associated API, which is used in this notebook.

The web-based user interface to the HSC does not currently include access to the new proper motions available for the SWEEPS field in version 3.1 of the Hubble Source Catalog. However those tables are accessible via the API. This notebook shows how to use them.

This notebook is similar to the previously released version that uses CasJobs rather than the new API. The CasJobs interface is more powerful and flexible, but the API is simpler for users who are not already experienced with CasJobs. Currently the API does not include easy access to the colors and magnitudes of the SWEEPS objects, but they will be added soon.

Additional information is available on the SWEEPS Proper Motions help page.

A notebook that provides a quick introduction to the HSC API is also available. Another notebook generates a color-magnitude diagram for the Small Magellanic Cloud in only a couple of minutes.


  • Complete the initialization steps described below.

  • Run the notebook to completion.

  • Modify and rerun any sections of the Table of Contents below.

Running the notebook from top to bottom takes about 4 minutes (depending on the speed of your computer).

Table of Contents#

  • Initialization

  • Properties of Full Catalog

    • Sky Coverage

    • Proper Motion Distributions

    • Visit Distribution

    • Time Coverage Distributions

    • Detection Positions

    • Positions for a Sample With Good PMs

  • Science Applications

    • High Proper Motion Objects

    • HLA Cutout Images for Selected Objects

Initialization #

Install Python modules#

  1. This notebook requires the use of Python 3.

  2. Modules can be installed with conda, if using the Anaconda distribution of python, or with pip.

  • If you are using conda, do not use pip to install, update, or remove a module that exists in a conda channel.

  • If a module is not available with conda, then it’s okay to install it with pip.
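Before running the notebook, it can be useful to confirm that the third-party dependencies are importable. This quick check is not part of the original notebook; it simply probes each package with the standard library:

```python
import importlib.util

# quick check that the third-party dependencies used below are importable
required = ["astropy", "numpy", "matplotlib", "requests", "PIL", "fastkde", "scipy"]
missing = [m for m in required if importlib.util.find_spec(m) is None]
print("missing modules:", missing or "none")
```

If any module is reported missing, install it with conda (or pip, per the rules above) before proceeding.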

import astropy
import time
import sys
import os
import requests
import json
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from pathlib import Path

# For handling ordinary astropy Tables
from astropy.table import Table
from astropy.io import fits, ascii

from PIL import Image
from io import BytesIO

from fastkde import fastKDE
from scipy.interpolate import RectBivariateSpline
from astropy.modeling import models, fitting

# There are a number of relatively unimportant warnings that 
# show up, so for now, suppress them:
import warnings
warnings.filterwarnings('ignore')

# set width for pprint
astropy.conf.max_width = 150
# set universal matplotlib parameters
plt.rcParams.update({'font.size': 16})

MAST API functions#

  • Execute HSC searches and resolve names using MAST query.

  • Here we define several interrelated functions for retrieving information from the MAST API.

    • The hsccone(ra, dec, radius [, keywords]) function searches the HSC catalog near a position.

    • The hscsearch() function performs general non-positional queries.

    • The hscmetadata() function gives information about the columns available in a table.
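The functions below pass extra keyword arguments straight through as query parameters, so column filters use the `column.operator` form (e.g., `NumVisits.gt`). As a sketch of the kind of URL `hscsearch` ultimately requests (the base URL here is an assumption for illustration, not taken from the functions below):

```python
from urllib.parse import urlencode

# illustrative only: keyword filters like 'NumVisits.gt' become URL query
# parameters appended to the table endpoint (base URL assumed for this sketch)
params = {"NumVisits.gt": 9, "NumImages.gt": 19, "pagesize": 500000}
url = ("https://catalogs.mast.stsci.edu/api/v0.1/hsc/v3/propermotions.csv?"
       + urlencode(params))
print(url)
```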

hscapiurl = "https://catalogs.mast.stsci.edu/api/v0.1/hsc"

def hsccone(ra, dec, radius, table="summary", release="v3", format="csv", magtype="magaper2",
            columns=None, baseurl=hscapiurl, verbose=False, **kw):
    """Do a cone search of the HSC catalog
    ra (float): (degrees) J2000 Right Ascension
    dec (float): (degrees) J2000 Declination
    radius (float): (degrees) Search radius (<= 0.5 degrees)
    table (string): summary, detailed, propermotions, or sourcepositions
    release (string): v3 or v2
    magtype (string): magaper2 or magauto (only applies to summary table)
    format: csv, votable, json
    columns: list of column names to include (None means use defaults)
    baseurl: base URL for the request
    verbose: print info about request
    **kw: other parameters (e.g., 'numimages.gte':2)

    Returns the results in the requested format
    """
    data = kw.copy()
    data['ra'] = ra
    data['dec'] = dec
    data['radius'] = radius
    return hscsearch(table=table, release=release, format=format, magtype=magtype,
                     columns=columns, baseurl=baseurl, verbose=verbose, **data)

def hscsearch(table="summary", release="v3", magtype="magaper2", format="csv",
              columns=None, baseurl=hscapiurl, verbose=False, **kw):
    """Do a general search of the HSC catalog (possibly without ra/dec/radius)
    table (string): summary, detailed, propermotions, or sourcepositions
    release (string): v3 or v2
    magtype (string): magaper2 or magauto (only applies to summary table)
    format: csv, votable, json
    columns: list of column names to include (None means use defaults)
    baseurl: base URL for the request
    verbose: print info about request
    **kw: other parameters (e.g., 'numimages.gte':2).  Note this is required!

    Returns the results in the requested format
    """
    data = kw.copy()
    if not data:
        raise ValueError("You must specify some parameters for search")
    if format not in ("csv", "votable", "json"):
        raise ValueError("Bad value for format")
    url = "{}.{}".format(cat2url(table, release, magtype, baseurl=baseurl), format)
    if columns:
        # check that column values are legal
        # create a dictionary to speed this up
        dcols = {}
        for col in hscmetadata(table, release, magtype)['name']:
            dcols[col.lower()] = 1
        badcols = []
        for col in columns:
            if col.lower().strip() not in dcols:
                badcols.append(col)
        if badcols:
            raise ValueError(f"Some columns not found in table: {', '.join(badcols)}")
        # two different ways to specify a list of column values in the API
        # data['columns'] = columns
        data['columns'] = f"[{','.join(columns)}]"

    # either get or post works
    # r = requests.post(url, data=data)
    r = requests.get(url, params=data)

    if verbose:
        print(r.url)
    r.raise_for_status()
    if format == "json":
        return r.json()
    else:
        return r.text

def hscmetadata(table="summary", release="v3", magtype="magaper2", baseurl=hscapiurl):
    """Return metadata for the specified catalog and table
    table (string): summary, detailed, propermotions, or sourcepositions
    release (string): v3 or v2
    magtype (string): magaper2 or magauto (only applies to summary table)
    baseurl: base URL for the request
    Returns an astropy table with columns name, type, description
    """
    url = "{}/metadata".format(cat2url(table, release, magtype, baseurl=baseurl))
    r = requests.get(url)
    v = r.json()
    # convert to astropy table
    tab = Table(rows=[(x['name'], x['type'], x['description']) for x in v],
                names=('name', 'type', 'description'))
    return tab

def cat2url(table="summary", release="v3", magtype="magaper2", baseurl=hscapiurl):
    """Return URL for the specified catalog and table
    table (string): summary, detailed, propermotions, or sourcepositions
    release (string): v3 or v2
    magtype (string): magaper2 or magauto (only applies to summary table)
    baseurl: base URL for the request
    Returns a string with the base URL for this request
    """
    checklegal(table, release, magtype)
    if table == "summary":
        url = f"{baseurl}/{release}/{table}/{magtype}"
    else:
        url = f"{baseurl}/{release}/{table}"
    return url

def checklegal(table, release, magtype):
    """Checks if this combination of table, release and magtype is acceptable
    Raises a ValueError exception if there is a problem
    """
    releaselist = ("v2", "v3")
    if release not in releaselist:
        raise ValueError(f"Bad value for release (must be one of {', '.join(releaselist)})")
    if release == "v2":
        tablelist = ("summary", "detailed")
    else:
        tablelist = ("summary", "detailed", "propermotions", "sourcepositions")
    if table not in tablelist:
        raise ValueError(f"Bad value for table (for {release} must be one of {', '.join(tablelist)})")
    if table == "summary":
        magtypelist = ("magaper2", "magauto")
        if magtype not in magtypelist:
            raise ValueError(f"Bad value for magtype (must be one of {', '.join(magtypelist)})")

Use metadata query to get information on available columns#

This query works for any of the tables in the API (summary, detailed, propermotions, sourcepositions).

meta = hscmetadata("propermotions")
print(' '.join(meta['name']))
objID numSources raMean decMean lonMean latMean raMeanErr decMeanErr lonMeanErr latMeanErr pmRA pmDec pmLon pmLat pmRAErr pmDecErr pmLonErr pmLatErr pmRADev pmDecDev pmLonDev pmLatDev epochStart epochEnd epochMean DSigma NumFilters NumVisits NumImages CI CI_Sigma KronRadius KronRadius_Sigma HTMID X Y Z

Retrieve data on selected SWEEPS objects#

This makes a single large request to the HSC search interface to get the contents of the proper motions table. Despite its large size (460K rows), the query is relatively efficient: it takes about 25 seconds to retrieve the results from the server, and then another 20 seconds to convert them to an astropy table. The speed of the table conversion will depend on your computer.

Note that the query restricts the sample to objects with at least 20 images total spread over at least 10 different visits. These constraints can be modified depending on your science goals.

columns = """ObjID,raMean,decMean,raMeanErr,decMeanErr,NumFilters,NumVisits,
pmLat,pmLon,pmLatErr,pmLonErr,pmLatDev,pmLonDev,epochMean,epochStart,epochEnd""".split(',')
columns = [x.strip() for x in columns]
columns = [x for x in columns if x and not x.startswith('#')]

# missing -- impossible with current data I think
# MagMed, n, MagMAD

constraints = {'NumFilters.gt': 1, 'NumVisits.gt': 9, 'NumImages.gt': 19}

# note the pagesize parameter, which allows retrieving very large results
# the default pagesize is 50000 rows

t0 = time.time()
results = hscsearch(table="propermotions", release='v3', columns=columns, verbose=True, pagesize=500000, **constraints)
print("{:.1f} s: retrieved data".format(time.time()-t0))
tab = ascii.read(results)
print(f"{(time.time()-t0):.1f} s: converted to astropy table")

# change some column names for consistency with the Casjobs version of this notebook
tab.rename_column("raMean", "RA")
tab.rename_column("decMean", "Dec")
tab.rename_column("raMeanErr", "RAerr")
tab.rename_column("decMeanErr", "Decerr")
tab.rename_column("pmLat", "bpm")
tab.rename_column("pmLon", "lpm")
tab.rename_column("pmLatErr", "bpmerr")
tab.rename_column("pmLonErr", "lpmerr")

# compute some additional columns

tab['pmdev'] = np.sqrt(tab['pmLonDev']**2+tab['pmLatDev']**2)
tab['yr'] = (tab['epochMean'] - 47892)/365.25+1990
tab['dT'] = (tab['epochEnd']-tab['epochStart'])/365.25
tab['yrStart'] = (tab['epochStart'] - 47892)/365.25+1990
tab['yrEnd'] = (tab['epochEnd'] - 47892)/365.25+1990

# delete some columns that are not needed after the computations
del tab['pmLonDev'], tab['pmLatDev'], tab['epochEnd'], tab['epochStart'], tab['epochMean']

11.2 s: retrieved data
20.4 s: converted to astropy table
Table length=462899
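The epoch arithmetic above converts Modified Julian Dates to calendar years, using 47892 as the MJD of the 1990.0 reference epoch. That reference value can be checked against the standard MJD zero point (1858 November 17):

```python
from datetime import date, timedelta

# MJD 0 corresponds to 1858-11-17; verify the 1990.0 reference epoch used above
mjd_zero = date(1858, 11, 17)
print(mjd_zero + timedelta(days=47892))  # 1990-01-01
```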

Properties of Full Catalog #

Sky Coverage #

fig, ax = plt.subplots(figsize=(12, 10))
ax.scatter('RA', 'Dec', data=tab, s=1, alpha=0.1)
ax.set(xlabel='RA', ylabel='Dec', title=f'{len(tab)} stars in SWEEPS')

Proper Motion Histograms #

Proper motion histograms for lon and lat#

bins = 0.2
hrange = (-20, 20)
bincount = int((hrange[1]-hrange[0])/bins + 0.5) + 1

fig, ax = plt.subplots(figsize=(12, 10))
for col, label in zip(['lpm', 'bpm'], ['Longitude', 'Latitude']):
    ax.hist(col, data=tab, range=hrange, bins=bincount, label=label, histtype='step', linewidth=2)
ax.set(xlabel='Proper motion [mas/yr]', ylabel=f'Number [in {bins:.2} mas bins]', title=f'{len(tab):,} stars in SWEEPS')
_ = ax.legend()

Proper motion error cumulative histogram for lon and lat#

bins = 0.01
hrange = (0, 2)
bincount = int((hrange[1]-hrange[0])/bins + 0.5) + 1

fig, ax = plt.subplots(figsize=(12, 10))
for col, label in zip(['lpmerr', 'bpmerr'], ['Longitude Error', 'Latitude Error']):
    ax.hist(col, data=tab, range=hrange, bins=bincount, label=label, histtype='step', cumulative=1, linewidth=2)
ax.set(xlabel='Proper motion error [mas/yr]', ylabel=f'Cumulative number [in {bins:0.2} mas bins]', title=f'{len(tab):,} stars in SWEEPS')
_ = ax.legend(loc='upper left')

Proper motion error log histogram for lon and lat#

bins = 0.01
hrange = (0, 6)
bincount = int((hrange[1]-hrange[0])/bins + 0.5) + 1

fig, ax = plt.subplots(figsize=(12, 10))
for col, label in zip(['lpmerr', 'bpmerr'], ['Longitude Error', 'Latitude Error']):
    ax.hist(col, data=tab, range=hrange, bins=bincount, label=label, histtype='step', linewidth=2)
ax.set(xlabel='Proper motion error [mas/yr]', ylabel=f'Number [in {bins:0.2} mas bins]', title=f'{len(tab):,} stars in SWEEPS', yscale='log')
_ = ax.legend(loc='upper right')

Proper motion error as a function of dT#

Exclude objects with dT near zero; to improve the plotting, add a bit of random noise to spread out the quantized time values.

# restrict to sources with dT > 1 year
dtmin = 1.0
w = np.where(tab['dT'] > dtmin)[0]
if ('rw' not in locals()) or len(rw) != len(w):
    rw = np.random.random(len(w))
x = np.array(tab['dT'][w]) + 0.5*(rw-0.5)
y = np.log(np.array(tab['lpmerr'][w]))

# Calculate the point density
t0 = time.time()
myPDF, axes = fastKDE.pdf(x, y, numPoints=2**9+1)
print(f"kde took {(time.time()-t0):.1f} sec")

# interpolate to get z values at points
finterp = RectBivariateSpline(axes[1], axes[0], myPDF)
z = finterp(y, x, grid=False)

# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
xs, ys, zs = x[idx], y[idx], z[idx]

# select a random subset of points in the most crowded regions to speed up plotting
wran = np.where(np.random.random(len(zs))*zs < 0.05)[0]
print(f"Plotting {len(wran)} of {len(zs)} points")
xs = xs[wran]
ys = ys[wran]
zs = zs[wran]
kde took 1.6 sec
Plotting 190206 of 461199 points
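The thinning cut `np.random.random(len(zs))*zs < 0.05` in the cell above keeps each point with probability min(1, 0.05/z), so the densest regions are heavily subsampled while isolated points always survive. A quick Monte Carlo check of that acceptance probability (a standalone sketch with toy density values):

```python
import numpy as np

# random()*z < C keeps a point with probability min(1, C/z)
rng = np.random.default_rng(42)
n = 200_000
C = 0.05
for z in (0.01, 0.5, 10.0):
    kept = np.mean(rng.random(n) * z < C)
    expected = min(1.0, C / z)
    print(f"z={z}: kept fraction {kept:.3f}, expected {expected:.3f}")
```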
fig, ax = plt.subplots(figsize=(12, 10))
sc = ax.scatter(xs, np.exp(ys), c=zs, s=2, edgecolors='none', cmap='plasma')
ax.set(xlabel='Date range [yr]', ylabel='Proper motion error [mas/yr]',
       title=f'{len(tab):,} stars in SWEEPS\nLongitude PM error', yscale='log')
_ = fig.colorbar(sc, ax=ax)

Proper motion error log histogram for lon and lat#

Divide the sample into points with fewer than 6 years of data and points with more than 6 years of data.

bins = 0.01
hrange = (0, 6)
bincount = int((hrange[1]-hrange[0])/bins + 0.5) + 1
tsplit = 6

# select the data to plot
mask = tab['dT'] <= tsplit
data1 = tab[mask]
data2 = tab[~mask]
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(12, 12), sharey=True)

for ax, data in zip([ax1, ax2], [data1, data2]):
    for col, label in zip(['lpmerr', 'bpmerr'], ['Longitude Error', 'Latitude Error']):
        ax.hist(col, data=data, range=hrange, bins=bincount, label=label, histtype='step', linewidth=2)
ax1.set(xlabel='Proper motion error [mas/yr]', ylabel=f'Number [in {bins:0.2} mas bins]',
        title=f'{len(data1):,} stars in SWEEPS with dT < {tsplit} yrs', yscale='log')
ax2.set(xlabel='Proper motion error [mas/yr]', ylabel=f'Number [in {bins:0.2} mas bins]',
        title=f'{len(data2):,} stars in SWEEPS with dT > {tsplit} yrs', yscale='log')


_ = fig.tight_layout()

Number of Visits Histogram #

bins = 1
hrange = (0, 130)
bincount = int((hrange[1]-hrange[0])/bins + 0.5) + 1

fig, ax = plt.subplots(figsize=(12, 10))
ax.hist('NumVisits', data=tab, range=hrange, bins=bincount, label='Number of visits ', histtype='step', linewidth=2)
ax.set(xlabel='Number of visits', ylabel='Number of objects', title=f'{len(tab):,} stars in SWEEPS')
_ = ax.margins(x=0)

Time Histograms #

First plot histogram of observation dates.

bins = 1
hrange = (2000, 2020)
bincount = int((hrange[1]-hrange[0])/bins + 0.5) + 1

fig, ax = plt.subplots(figsize=(12, 10))
ax.hist('yr', data=tab, range=hrange, bins=bincount, label='year ', histtype='step', linewidth=2)
ax.set(xlabel='mean detection epoch (year)', ylabel='Number of objects', title=f'{len(tab):,} stars in SWEEPS')
ax.set_xticks(ticks=range(2000, 2021, 2))
_ = ax.margins(x=0)

Then plot histogram of observation duration for the objects.

bins = 0.25
hrange = (0, 15)
bincount = int((hrange[1]-hrange[0])/bins + 0.5) + 1

fig, ax = plt.subplots(figsize=(12, 10))
ax.hist('dT', data=tab, range=hrange, bins=bincount, label='year ', histtype='step', linewidth=2)
_ = ax.set(xlabel='time span (years)', ylabel='Number of objects', title=f'{len(tab):,} stars in SWEEPS', yscale='log')

Detection Positions #

Define a function to plot the PM fit for an object.

# define function
def positions(Obj):
    """Plot the change in position of an object over time

    input parameter Obj is the value of the ObjID
    output plots change in (lon, lat) as a function of time
    overplots proper motion fit
    provides ObjID and proper motion information in labels
    """

    # get the measured positions as a function of time
    pos = ascii.read(hscsearch("sourcepositions", columns="dT,dLon,dLat".split(','), objid=Obj))
    # get the PM fit parameters
    pm = ascii.read(hscsearch("propermotions", columns="pmlon,pmlonerr,pmlat,pmlaterr".split(','), objid=Obj))
    lpm = pm['pmlon'][0]
    bpm = pm['pmlat'][0]

    x = pos['dT']
    # xpm = np.linspace(0, max(x), 10)
    xpm = np.array([x.min(), x.max()])
    y1 = pos['dLon']
    ypm1 = lpm*xpm
    y2 = pos['dLat']
    ypm2 = bpm*xpm

    # plot figure 
    fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(6, 3), tight_layout=True)
    ax1.scatter(x, y1, s=10)
    ax1.plot(xpm, ypm1, '-r')
    ax2.scatter(x, y2, s=10)
    ax2.plot(xpm, ypm2, '-r')
    ax1.set_xlabel('dT (yrs)', fontsize=10)
    ax1.set_ylabel('dLon (mas)', fontsize=10)
    ax2.set_xlabel('dT (yrs)', fontsize=10)
    ax2.set_ylabel('dLat (mas)', fontsize=10)
    fig.suptitle(f'ObjID {Obj}'
                 f'\n{len(x)} detections,  (lpm, bpm) = ({lpm:.1f}, {bpm:.1f}) mas/yr', fontsize=10)

Plot positions for a sample of objects with good proper motions #

This selects objects that are detected in more than 90 visits with a median absolute deviation from the fit of less than 1.5 mas and proper motion error less than 1.0 mas/yr.

n = tab['NumVisits']
dev = tab['pmdev']
objid = tab['ObjID']
lpmerr0 = np.array(tab['lpmerr'])
bpmerr0 = np.array(tab['bpmerr'])
wi = np.where((dev < 1.5) & (n > 90) & (np.sqrt(bpmerr0**2+lpmerr0**2) < 1.0))[0]
print(f"Plotting {len(wi)} objects")
for o in objid[wi]:
    positions(o)
Plotting 21 objects

Science Applications #

High Proper Motion Objects #

Get a list of objects with high, accurately measured proper motions. Proper motions are measured relative to the Galactic center.

lpm_sgra = -6.379 # +- 0.026
bpm_sgra = -0.202 # +- 0.019

lpm0 = np.array(tab['lpm'])
bpm0 = np.array(tab['bpm'])
lpmerr0 = np.array(tab['lpmerr'])
bpmerr0 = np.array(tab['bpmerr'])
pmtot0 = np.sqrt((bpm0-bpm_sgra)**2+(lpm0-lpm_sgra)**2)
pmerr0 = np.sqrt(bpmerr0**2+lpmerr0**2)
dev = tab['pmdev']

# sort samples by decreasing PM
wpmh = np.where((pmtot0 > 15) & (pmerr0 < 1.0) & (dev < 5))[0]
wpmh = wpmh[np.argsort(-pmtot0[wpmh])]

print(f"Plotting {len(wpmh)} objects")
for o in tab["ObjID"][wpmh]:
    positions(o)
Plotting 31 objects

Get HLA cutout images for selected objects #

Get HLA color cutout images for the high-PM objects. The query_hla function gets a table of all the color images that are available at a given position using the f814w+f606w filters. The get_image function reads a single cutout image (as a JPEG color image) and returns a PIL image object.

See the documentation on HLA VO services and the fitscut image cutout service for more information on the web services being used.

def query_hla(ra, dec, size=0.0, imagetype="color", inst="ACS", format="image/jpeg",
              spectral_elt=("f814w", "f606w"), autoscale=95.0, asinh=1, naxis=33):
    # convert a list of filters to a comma-separated string
    if not isinstance(spectral_elt, str):
        spectral_elt = ",".join(spectral_elt)
    siapurl = ("https://hla.stsci.edu/cgi-bin/hlaSIAP.cgi?"
               "pos={ra},{dec}&size={size}&imagetype={imagetype}&inst={inst}"
               "&format={format}&spectral_elt={spectral_elt}"
               "&autoscale={autoscale}&asinh={asinh}"
               "&naxis={naxis}").format(**locals())
    votable = Table.read(siapurl, format="votable")
    return votable

def get_image(url):
    """Get image from a URL"""
    r = requests.get(url)
    im = Image.open(BytesIO(r.content))
    return im
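`get_image` decodes the JPEG bytes in memory via `BytesIO` rather than writing a temporary file. A self-contained round-trip of that pattern, using a synthetic in-memory image instead of a network request:

```python
from io import BytesIO
from PIL import Image

# simulate the bytes a cutout request would return, then decode as get_image does
buf = BytesIO()
Image.new("RGB", (33, 33), color=(10, 20, 30)).save(buf, format="JPEG")
im = Image.open(BytesIO(buf.getvalue()))
print(im.size)
```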
# display earliest and latest images side-by-side
# top 10 highest PM objects
wsel = wpmh[:10]
nim = len(wsel)
icols = 1        # objects per row
ncols = 2*icols  # two images for each object
nrows = (nim+icols-1)//icols

imsize = 33
xcross = np.array([-1, 1, 0, 0, 0])*2 + imsize/2
ycross = np.array([0, 0, 0, -1, 1])*2 + imsize/2

# selected data from tab
sd = tab[['RA', 'Dec', 'ObjID']][wsel]

# create the figure
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(12, (12/ncols)*nrows))

# iterate each observation, and each set of axes for the first and last image
for (ax1, ax2), obj in zip(axes, sd):
    # get the image urls and observation datetime
    hlatab = query_hla(obj["RA"], obj["Dec"], naxis=imsize)[['URL', 'StartTime']]
    # sort the data by the observation datetime, and get the first and last observation url
    (url1, time1), (url2, time2) = hlatab[np.argsort(hlatab['StartTime'])][[0, -1]]
    # get the images
    im1 = get_image(url1)
    im2 = get_image(url2)
    # plot the images
    ax1.imshow(im1, origin="upper")
    ax2.imshow(im2, origin="upper")
    # plot the center
    ax1.plot(xcross, ycross, 'g')
    ax2.plot(xcross, ycross, 'g')
    # labels and titles
    ax1.set(ylabel=f'ObjID {obj["ObjID"]}', title=time1)
    ax2.set(title=time2)

Look at the entire collection of images for the highest PM object#

i = wpmh[0]

# selected data
sd = tab['ObjID', 'RA', 'Dec', 'bpm', 'lpm', 'yr', 'dT'][i]

imsize = 33
# get the URL and StartTime data
hlatab = query_hla(sd['RA'], sd['Dec'], naxis=imsize)[['URL', 'StartTime']]
# sort the data
hlatab = hlatab[np.argsort(hlatab['StartTime'])]

nim = len(hlatab)
ncols = 8
nrows = (nim+ncols-1)//ncols

xcross = np.array([-1, 1, 0, 0, 0])*2 + imsize/2
ycross = np.array([0, 0, 0, -1, 1])*2 + imsize/2
# get the images: takes about 90 seconds for 77 images
images = [get_image(url) for url in hlatab['URL']]
# create the figure
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(20, (20/ncols)*nrows))
# flatten the axes for easy iteration and zipping
axes = axes.flat

plt.rcParams.update({"font.size": 11})
for ax, time1, img in zip(axes, hlatab['StartTime'], images):
    # plot image
    ax.imshow(img, origin="upper")
    # plot the center
    ax.plot(xcross, ycross, 'g')
    # set the title
    ax.set_title(time1, fontsize=10)

# remove the last 3 unused axes
for ax in axes[nim:]:
    ax.set_axis_off()

fig.suptitle(f"ObjectID: {sd['ObjID']}\nRA: {sd['RA']:0.2f} Dec: {sd['Dec']:0.2f}\nObservations: {nim}", y=1, fontsize=14)