Implemented Multi-scale Gaussian Normalization #30
Conversation
Hello @vatch123! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2019-06-20 15:37:58 UTC
Thanks for the pull request @vatch123! Everything looks great!
I think the main point is that we don't want to have both versions in our code but one unified version which we can switch to (maybe with a keyword) if we want.
Then it is better that we choose one, because both are essentially the same. The only difference is that one has some weights and the other doesn't (which is equivalent to setting all the weights to 1).
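As a sketch of how the unified version with a weights keyword could look (the function name `combine_scales` and its signature are hypothetical, not the PR's actual API), omitting the weights would simply default to all ones:

```python
import numpy as np

def combine_scales(components, weights=None):
    """Hypothetical sketch: combine per-scale normalized components.

    If ``weights`` is None, every scale gets weight 1, which makes the
    unweighted variant a special case of the weighted one.
    """
    components = np.asarray(components, dtype=float)
    if weights is None:
        weights = np.ones(len(components))
    weights = np.asarray(weights, dtype=float)
    # Weighted mean over the scale axis (first axis of ``components``)
    return np.tensordot(weights, components, axes=1) / weights.sum()
```

With this shape, a single code path serves both behaviours, and the keyword switch the comment mentions becomes just the default value of `weights`.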
The outputs don't look similar, so what is causing that?
Please have a look at this comment for more details on how the code varies from the paper.
Then building on Cadair's code is better, because it now matches the paper exactly and gives better visual output too.
About the visual output comparison (as seen on the gist): the output depends on the input parameters, which are not specified in the gist. I guess that they are the default parameters, but the default parameters are not the same for both implementations. IMHO the definition of "weight" (i.e. with arctan normalization to [0,1]) is much more meaningful than that of "h".
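For illustration only, one way to read the arctan-based "weight" normalization described above (the function name and the scale factor `k` are hypothetical placeholders, not the actual defaults):

```python
import numpy as np

def arctan_weight(x, k=0.7):
    # Hypothetical sketch: map non-negative values into [0, 1)
    # via arctan; k controls how quickly the weight saturates.
    return np.arctan(k * np.asarray(x, dtype=float)) / (np.pi / 2)
```

Because the result is bounded, such a weight is directly comparable across scales, which is one reason it can be easier to interpret than an unbounded quantity.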
Yes, the default parameters in the two implementations are different. But after discussion with @nabobalis and @wafels we have decided to only have one.
@ebuchlin Would it be possible for you to do another review for us?
Could you run black, isort and docformatter on your PR?
Co-Authored-By: Nabil Freij <nabil.freij@gmail.com>
Codecov Report
@@ Coverage Diff @@
## master #30 +/- ##
==========================================
- Coverage 82.72% 77.13% -5.59%
==========================================
Files 13 13
Lines 573 433 -140
==========================================
- Hits 474 334 -140
Misses 99 99
Continue to review full report at Codecov.
This is done too, I guess
Okay, I will start working on it again tomorrow.
ebuchlin left a comment
Here are some comments after reading the code again. Disclaimer: I haven't tried to run the code (I don't know how to do that with the code from a PR).
sunkit_image/enhance.py
Outdated
weights = np.ones(len(sigma))

# 1. Replace spurious negative pixels with zero
data[data <= 0] = 1e-15  # Makes sure that all values are above zero
I'm a bit uncomfortable with that. The paper is explicitly for EUV data only with normally only positive values, but I don't see where it is required or enforced. Actually there are negative values in low-signal, noisy areas of SDO/AIA images, and these are as meaningful as the higher-value pixels in these low-signal areas, there is no reason to clip them to 0. Furthermore, as the code does not tell explicitly that it is for EUV images only, some users would perhaps like to try with e.g. magnetogram Bz data, which would probably be fine except that the gamma transform of the original data would not be the best choice for representing such data.
What if we add a keyword that makes this stage optional? Would that be an OK compromise?
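A minimal sketch of such a compromise, assuming a hypothetical `clip_negative` keyword (the function and parameter names are illustrative, not the PR's actual API):

```python
import numpy as np

def prepare_data(data, clip_negative=True):
    """Hypothetical compromise: make the negative-pixel replacement optional.

    EUV images are normally positive, so clipping is the default, but data
    such as magnetogram Bz can opt out and keep meaningful negative values.
    """
    data = np.array(data, dtype=float)  # copy so the caller's array is untouched
    if clip_negative:
        data[data <= 0] = 1e-15  # keep all values strictly positive
    return data
```

Defaulting to `True` preserves the current behaviour for EUV images while letting users with signed data skip the replacement explicitly.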
Thanks for the review @ebuchlin! If you want to run the code, the most straightforward way would be to clone the branch this code is on by doing
Thanks, I could just run the example file, and the output was as expected.
Co-Authored-By: ebuchlin <eric.buchlin@ias.u-psud.fr>
Implemented Multi-scale Gaussian Normalization (sunpy#30)
Description
TODO:
Fixes #1