
Imaging a Black Hole using only Closure Quantities

In this tutorial, we will create a preliminary reconstruction of the April 6, 2017 M87 data using closure-only imaging. This tutorial is a general introduction to closure-only imaging in Comrade. For an introduction to simultaneous image and instrument modeling, see Stokes I Simultaneous Image and Instrument Modeling.

Introduction to Closure Imaging

The EHT is one of the highest-resolution telescopes ever created. Its resolution is roughly equivalent to tracking a hockey puck on the moon as viewed from the Earth. However, the EHT is also a unique interferometer. First, EHT data are incredibly sparse: the array is formed from only eight geographic locations around the planet. Second, the observing frequency is much higher than in traditional VLBI. Lastly, each site in the array is unique, with different dishes, receivers, feeds, and electronics. Put together, this means that many common imaging techniques struggle to fit EHT data and to explore the uncertainty in both the image and the instrument. One way to deal with some of these uncertainties is to not fit the data directly but instead fit closure quantities, which are independent of many of the instrumental effects that plague the data. The types of closure quantities are briefly described in Introduction to the VLBI Imaging Problem.
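To see why closure quantities are robust to station-based corruptions, consider a toy sketch in plain Julia (independent of Comrade). On a triangle of baselines, each measured visibility phase is corrupted by the difference of two station phases, but the directed sum around the triangle cancels them:

```julia
# Toy demonstration that closure phases cancel station-based phase errors.
# True visibility phases on baselines (1,2), (2,3), (3,1), in radians:
ϕ12, ϕ23, ϕ31 = 0.3, -1.1, 0.5
# Arbitrary station phase errors (e.g., from the atmosphere):
g1, g2, g3 = 2.0, -0.7, 1.4
# Each measured baseline phase picks up the difference of station phases:
m12 = ϕ12 + g1 - g2
m23 = ϕ23 + g2 - g3
m31 = ϕ31 + g3 - g1
# The closure phase around the triangle is unchanged by the station errors:
closure_true = ϕ12 + ϕ23 + ϕ31
closure_meas = m12 + m23 + m31
closure_meas ≈ closure_true  # the gₖ terms cancel in the cyclic sum
```

The same cancellation underlies log closure amplitudes, where station gain amplitudes cancel in a ratio of four visibility amplitudes.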

In this tutorial, we will do closure-only modeling of M87 to produce a posterior of images.

To get started, we will load Comrade:

julia
using Comrade

Pyehtim loads eht-imaging via PythonCall; this is currently necessary to load uvfits files.

julia
using Pyehtim
For reproducibility, we use a stable random number generator:

julia
using StableRNGs
rng = StableRNG(123)
StableRNGs.LehmerRNG(state=0x000000000000000000000000000000f7)

Load the Data

To download the data, visit https://doi.org/10.25739/g85n-f134. To load the eht-imaging obsdata object we do:

julia
obs = ehtim.obsdata.load_uvfits(joinpath(__DIR, "..", "..", "Data", "SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits"))
Python: <ehtim.obsdata.Obsdata object at 0x7fb46d1208e0>

Now we do some minor preprocessing:

  • Scan average the data since the data have been preprocessed so that the gain phases are coherent.

  • Add 2% systematic noise to deal with calibration issues such as leakage.

julia
obs = scan_average(obs).add_fractional_noise(0.02)
Python: <ehtim.obsdata.Obsdata object at 0x7fb4aab312a0>

Now, we extract our closure quantities from the EHT data set. We flag low-SNR points since the closure likelihood we use is only applicable to high-SNR data.

julia
dlcamp, dcphase  = extract_table(obs, LogClosureAmplitudes(;snrcut=3), ClosurePhases(;snrcut=3))
(EHTObservationTable{Comrade.EHTLogClosureAmplitudeDatum{:I}}
  source:      M87
  mjd:         57849
  bandwidth:   1.856e9
  sites:       [:AA, :AP, :AZ, :JC, :LM, :PV, :SM]
  nsamples:    128, EHTObservationTable{Comrade.EHTClosurePhaseDatum{:I}}
  source:      M87
  mjd:         57849
  bandwidth:   1.856e9
  sites:       [:AA, :AP, :AZ, :JC, :LM, :PV, :SM]
  nsamples:    152)

Note

Fitting low-SNR closure data is complicated and requires a more sophisticated likelihood. If low-SNR data are very important, we recommend fitting visibilities with an instrument model.

Build the Model/Posterior

For our model, we will use an image model that consists of a raster of point sources, convolved with some pulse or kernel, to make a ContinuousImage. To define this model, we define the standard two-argument function sky that specifies the sky model we want to fit. The first argument holds the model parameters, typically a NamedTuple. The second argument holds the metadata for the model, which is typically constant. For our model, the constant metadata will just be the mean or prior image.

julia
function sky(θ, metadata)
    (;fg, c, σimg) = θ
    (;mimg) = metadata
    # Apply the GMRF fluctuations to the image
    rast = apply_fluctuations(CenteredLR(), mimg, σimg.*c.params)
    m = ContinuousImage(((1-fg)).*rast, BSplinePulse{3}())
    # Force the image centroid to be at the origin
    x0, y0 = centroid(m)
    # Add a large-scale gaussian to deal with the over-resolved mas flux
    g = modify(Gaussian(), Stretch(μas2rad(250.0), μas2rad(250.0)), Renormalize(fg))
    return shifted(m, -x0, -y0) + g
end
sky (generic function with 1 method)

Now, let's set up our image model. The EHT's nominal resolution is 20-25 μas. Additionally, the EHT is not very sensitive to larger fields of view; typically, 60-80 μas is enough to describe the compact flux of M87. Given this, we only need a small number of pixels to describe our image.

julia
npix = 32
fovxy = μas2rad(150.0)
7.27220521664304e-10
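As a quick sanity check on these numbers (a standalone sketch; the μas-to-radian conversion is reimplemented here for self-containment), a 150 μas field of view across 32 pixels gives a pixel scale of roughly 4.7 μas, several times finer than the nominal 20-25 μas beam:

```julia
# Standalone check of the field-of-view arithmetic.
# 1 μas = 1e-6 arcsec; 1 arcsec = (π/180)/3600 rad.
μas2rad_(x) = x / (3600 * 1e6) * π / 180

fov_μas = 150.0
npix_   = 32
fov_rad = μas2rad_(fov_μas)          # ≈ 7.27e-10, matching the output above
pixel_size_μas = fov_μas / npix_     # ≈ 4.69 μas per pixel
```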

To define the image model, we need to specify both the grid and the Fourier transform algorithm we will use; in this case, the NFFT, which is the most efficient.

julia
grid = imagepixels(fovxy, fovxy, npix, npix)
RectiGrid(
executor: ComradeBase.Serial()
Dimensions: 
(↓ X Sampled{Float64} LinRange{Float64}(-3.5224744018114725e-10, 3.5224744018114725e-10, 32) ForwardOrdered Regular Points,
→ Y Sampled{Float64} LinRange{Float64}(-3.5224744018114725e-10, 3.5224744018114725e-10, 32) ForwardOrdered Regular Points)
)

Now we need to specify our image prior. For this work, we will use a Gaussian Markov random field prior.

julia
using VLBIImagePriors, Distributions

Since we are using a Gaussian Markov random field prior, we first need to specify our mean image. For this work, we will use a symmetric Gaussian with a FWHM of 50 μas.

julia
fwhmfac = 2*sqrt(2*log(2))
mpr = modify(Gaussian(), Stretch(μas2rad(50.0)./fwhmfac))
imgpr = intensitymap(mpr, grid)
skymeta = (;mimg = imgpr./flux(imgpr));

Now we can finally form our image prior. For this, we use a hierarchical prior where the direct log-ratio image prior is a Gaussian Markov random field. The correlation length of the GMRF is a hyperparameter that is fit during imaging. We pass the data to the prior to estimate the maximum resolution of the array and to prevent the prior from allowing correlation lengths much smaller than the telescope beam size. Note that this GMRF prior has unit variance. For more information on the GMRF prior, see the corr_image_prior docstring.

julia
cprior = corr_image_prior(grid, dlcamp)
HierarchicalPrior(
	map: 
	ConditionalMarkov(
Random Field: VLBIImagePriors.GaussMarkovRandomField
Graph: MarkovRandomFieldGraph{1}(
dims: (32, 32)
)
)	hyper prior: 
	Truncated(Distributions.InverseGamma{Float64}(
invd: Distributions.Gamma{Float64}(α=1.0, θ=0.0407685911951416)
θ: 24.528686684644875
)
; lower=1.0, upper=64.0)

)

Putting everything together the total prior is then our image prior, a prior on the standard deviation of the MRF, and a prior on the fractional flux of the Gaussian component.

julia
prior = (c = cprior, σimg = Exponential(0.1), fg=Uniform(0.0, 1.0))
(c = HierarchicalPrior(
	map: 
	ConditionalMarkov(
Random Field: VLBIImagePriors.GaussMarkovRandomField
Graph: MarkovRandomFieldGraph{1}(
dims: (32, 32)
)
)	hyper prior: 
	Truncated(Distributions.InverseGamma{Float64}(
invd: Distributions.Gamma{Float64}(α=1.0, θ=0.0407685911951416)
θ: 24.528686684644875
)
; lower=1.0, upper=64.0)

)
, σimg = Distributions.Exponential{Float64}(θ=0.1), fg = Distributions.Uniform{Float64}(a=0.0, b=1.0))

We can then define our sky model.

julia
skym = SkyModel(sky, prior, grid; metadata=skymeta)
SkyModel
  with map: sky
   on grid: ComradeBase.RectiGrid

Since we are fitting closures, we do not need to include an instrument model: the closure likelihood is approximately independent of gains in the high-SNR limit.
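The gain independence can be sketched with a toy example in plain Julia (independent of Comrade). On a quadrangle of sites, each measured visibility amplitude is scaled by the product of its two station gain amplitudes, but the gains cancel in the log closure amplitude ratio:

```julia
# Toy demonstration that log closure amplitudes cancel station gain amplitudes.
# True visibility amplitudes on the quadrangle of sites (1,2,3,4):
V12, V34, V13, V24 = 1.2, 0.8, 0.5, 0.9
# Arbitrary station amplitude gains:
g = (1.1, 0.7, 1.3, 0.95)
# Each measured amplitude is scaled by the product of its two station gains:
amp(Vij, i, j) = Vij * g[i] * g[j]
# Log closure amplitude: log of the ratio (|V12||V34|) / (|V13||V24|).
lca_true = log((V12 * V34) / (V13 * V24))
lca_meas = log((amp(V12, 1, 2) * amp(V34, 3, 4)) / (amp(V13, 1, 3) * amp(V24, 2, 4)))
lca_meas ≈ lca_true  # g₁g₂g₃g₄ appears in numerator and denominator and cancels
```

In practice this cancellation is only exact for the amplitudes themselves; the thermal noise does not cancel, which is why the Gaussian closure likelihood is an approximation valid at high SNR.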

julia
using Enzyme
post = VLBIPosterior(skym, dlcamp, dcphase; admode=set_runtime_activity(Enzyme.Reverse))
VLBIPosterior
ObservedSkyModel
  with map: sky
   on grid: VLBISkyModels.FourierDualDomainIdealInstrumentModelData Products: Comrade.EHTLogClosureAmplitudeDatumComrade.EHTClosurePhaseDatum

Reconstructing the Image

To reconstruct the image, we will first use the MAP estimate. This approach is essentially a re-implementation of regularized maximum likelihood (RML) imaging. However, unlike traditional RML imaging, we also fit the regularizer hyperparameters, thanks to our interpretation of the imaging prior as a hierarchical model.

To optimize our posterior, Comrade provides the comrade_opt function. To use this functionality, a user first needs to import Optimization.jl and the optimizer of choice. In this tutorial, we will use Optim.jl's L-BFGS optimizer, which is defined in the sub-package OptimizationOptimJL. We also need to import Enzyme to allow for automatic differentiation.

julia
using Optimization
using OptimizationOptimJL
xopt, sol = comrade_opt(post, LBFGS();
                        maxiters=1000, initial_params=prior_sample(rng, post));

First, we will evaluate our fit by plotting the residuals.

julia
using Plots
p = residual(post, xopt)

Now let's plot the MAP estimate.

julia
import CairoMakie as CM
g = imagepixels(μas2rad(150.0), μas2rad(150.0), 100, 100)
img = intensitymap(skymodel(post, xopt), g)
fig = imageviz(img, size=(600, 500));

That doesn't look great. This is pretty common for the sparse EHT data. In this case, the MAP often drastically overfits the data, producing an image filled with artifacts. In addition, we note that the MAP itself is not invariant to the model parameterization. Namely, if we changed our prior to use a fully centered parameterization, we would get a very different image. Fortunately, these issues go away when we sample from the posterior and construct expectations over it, such as the mean image.

To sample from the posterior we will use HMC and more specifically the NUTS algorithm. For information about NUTS see Michael Betancourt's notes.

Note

For our metric, we use a diagonal matrix since it is easier to tune.

julia
using AdvancedHMC
chain = sample(rng, post, NUTS(0.8), 700; n_adapts=500, progress=false, initial_params=xopt);
[ Info: Found initial step size 0.003125

Warning

This should be run for longer!

Now that we have our posterior, we can assess which parts of the image are strongly inferred by the data. This is rather unique to Comrade: more traditional imaging algorithms like CLEAN and RML are inherently unable to assess uncertainty in their reconstructions.

To explore our posterior, let's first create images from a number of draws from the posterior:

julia
msamples = skymodel.(Ref(post), chain[501:2:end]);

The mean image is then given by

julia
using StatsBase
imgs = intensitymap.(msamples, Ref(g))
mimg = mean(imgs)
simg = std(imgs)
fig = CM.Figure(;resolution=(700, 700));
axs = [CM.Axis(fig[i, j], xreversed=true, aspect=1) for i in 1:2, j in 1:2]
CM.image!(axs[1,1], mimg, colormap=:afmhot); axs[1, 1].title="Mean"
CM.image!(axs[1,2], simg./(max.(mimg, 1e-8)), colorrange=(0.0, 2.0), colormap=:afmhot);axs[1,2].title = "Std"
CM.image!(axs[2,1], imgs[1],   colormap=:afmhot);
CM.image!(axs[2,2], imgs[end], colormap=:afmhot);
CM.hidedecorations!.(axs)
fig |> DisplayAs.PNG |> DisplayAs.Text

Now let's see whether our residuals look better.

julia
p = Plots.plot(layout=(2,1));
for s in sample(chain[501:end], 10)
    residual!(post, s)
end
p |> DisplayAs.PNG |> DisplayAs.Text

And voilà, you have a quick and preliminary image of M87 fit using only closure products. For a publication-level version, we recommend:

  1. Running the chain longer and multiple times to properly assess things like ESS and R̂ (see Geometric Modeling of EHT Data)

  2. Fitting gains. Typically, gain amplitudes are good to 10-20% for the EHT, unlike the infinite uncertainty that closures implicitly assume


This page was generated using Literate.jl.