While profiling for bottlenecks in the code, I happened to inspect the implementation of the TV regularizer and noticed that it is slightly different from the way it is implemented in the cited source.
I don't think the difference will affect the results very much, but I thought I'd point it out. Feel free to close this if it's not an issue.
Comparison
Here's an MWE that illustrates this. First I'll define the variables we need.
import torch
torch.manual_seed(2023)
sky_cube = torch.rand((8, 10, 10))
epsilon = 1e-6
nchan = sky_cube.size(0)
npix = sky_cube.size(-1)
img_dim = npix * npix
horizontal_pad = torch.zeros(nchan, 1, npix)
vertical_pad = torch.zeros(nchan, npix, 1)
MPoL Implementation
After refactoring the code to use built-in torch functions to facilitate easier comparison, the current MPoL implementation is:
# interior finite differences only; both tensors are (nchan, npix-1, npix-1)
diff_mm = torch.diff(sky_cube[:, :, 0:-1], dim=1)
diff_ll = torch.diff(sky_cube[:, 0:-1, :], dim=2)
loss_old = torch.sqrt(diff_ll**2 + diff_mm**2 + epsilon).sum()
eht-imaging Implementation
The biggest difference lies in the way that the finite difference is taken. Here, they zero-pad the end of the tensor along the dimension in which the difference is taken.
# zero padding keeps the full (nchan, npix, npix) shape, so the trailing
# row/column contributes the (negated) pixel values themselves as differences
diff_mm = torch.diff(sky_cube, dim=1, append=horizontal_pad)
diff_ll = torch.diff(sky_cube, dim=2, append=vertical_pad)
# eht-imaging returns the negative loss, but that's just a minor detail
loss_new = torch.sqrt(diff_ll**2 + diff_mm**2 + epsilon).sum()
Loss Values
The eht-imaging implementation results in a larger loss than the MPoL implementation, since the zero padding adds boundary terms to the sum.
>>> loss_new # eht-imaging
tensor(443.1116)
>>> loss_old # MPoL
tensor(341.5393)
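To see where the gap comes from, here's a small sketch (reusing the same variable definitions as above). If I restrict the zero-padded version to the interior (npix-1) × (npix-1) region, it reproduces the MPoL loss, so the entire discrepancy should be the boundary terms that the padding introduces:

```python
import torch

torch.manual_seed(2023)
sky_cube = torch.rand((8, 10, 10))
epsilon = 1e-6
nchan, npix = sky_cube.size(0), sky_cube.size(-1)
horizontal_pad = torch.zeros(nchan, 1, npix)
vertical_pad = torch.zeros(nchan, npix, 1)

# MPoL-style: interior differences only, shape (nchan, npix-1, npix-1)
diff_mm_old = torch.diff(sky_cube[:, :, 0:-1], dim=1)
diff_ll_old = torch.diff(sky_cube[:, 0:-1, :], dim=2)
loss_old = torch.sqrt(diff_ll_old**2 + diff_mm_old**2 + epsilon).sum()

# eht-imaging-style: zero-padded, shape (nchan, npix, npix)
diff_mm_new = torch.diff(sky_cube, dim=1, append=horizontal_pad)
diff_ll_new = torch.diff(sky_cube, dim=2, append=vertical_pad)
integrand_new = torch.sqrt(diff_ll_new**2 + diff_mm_new**2 + epsilon)

# summing only the interior recovers the MPoL loss; the rest is boundary
interior = integrand_new[:, : npix - 1, : npix - 1].sum()
boundary = integrand_new.sum() - interior

print(torch.allclose(interior, loss_old))  # True
```

So the two agree on the interior pixels, and `loss_new - loss_old` is just the sum of the boundary terms, which scale with the pixel values along the image edges.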