Donut stars
Re: Donut stars
What Mike said - thank you, Ivo!
Haven't had time to do much beyond download the new beta yet, but a quick look seems similar to Mike's reaction. Different, but definitely improves (cures, even?) the dark-ring effect.
As for 16/32, it's a bit awkward in Siril to keep swapping back and forth (really, to remember to go a couple of menu levels deep to make the switch) to calibrate and register in 16-bit then stack in 32-bit, so I just had it default to 32-bit. I will have a go at changing my habits.
Mike: is it just me reading too much into it, or is it weird that PI returns 55k, 63k, 62k, 57k for four consecutive pixels that each have the same raw reading (65k)?
Re: Donut stars
Mike in Rancho wrote: ↑Sat Mar 23, 2024 8:25 pm But really this pops up across a large cohort, with many different optics, sensors, and stackers.
I would assume the same. But I cannot prove it, of course. And Ivo's test sets don't seem to be affected that much, as he wrote. I wonder if it would be helpful to reread some older threads to identify some candidates.
Mike in Rancho wrote: ↑Sat Mar 23, 2024 8:25 pm and then how to achieve it. Like a donut or PSF-protection slider.
Yeah, I thought about the same. Maybe better than something built-in, which might have downsides for unaffected datasets.
Mike in Rancho wrote: ↑Sat Mar 23, 2024 8:25 pm Want to include more praise for the Profile Viewer v3.
Thanks. Glad if I can help!
admin wrote: ↑Sun Mar 24, 2024 12:39 am I was also able to "hide" it by increasing Optidev's curve resolution (the amount of points it calculates to interpolate the constructed curve between) 16-fold (from 4096/12-bit points to 65536/16-bit points), which should not be necessary unless something has stuffed lots of data/"detail" in a tiny (1/4096th) slice of the - in this case - upper dynamic range (giving it very high importance, and thus making it "deserving" of more dynamic range).
Thanks for chiming in and for taking some time, Ivo. And thanks for the detailed explanation. But I'm sorry, I think I don't understand - at least not everything. My assumption was that this shoulder is a result of having been assigned too little dynamic range? So this is the case because there's another (higher) range which 'eats up' (has been assigned) most of the available dynamic range? So nothing is left over for the range of the shoulder?
The curve has nearly no inclination in the range of the plateau? Is this right?
I would have thought to mitigate this by decreasing the resolution of the curve?
Are you sure this is because of 16-bit ADCs? My datasets are affected as well, and I'm using a Canon DSLR (EOS 2000D) with a 14-bit ADC.
Stack
https://c.web.de/@334960167135216273/WN ... BTbxf2gZrA
Single frame
https://c.web.de/@334960167135216273/b1 ... 7Tv8BfnNLQ
I too assume it would be better to understand what's happening. And then decide how to deal with it. Let us/me know if we/I can be of any help.
Best regards, Dietmar.
Re: Donut stars
decay wrote: ↑Sun Mar 24, 2024 3:10 pm
admin wrote: ↑Sun Mar 24, 2024 12:39 am I was also able to "hide" it by increasing Optidev's curve resolution (the amount of points it calculates to interpolate the constructed curve between) 16-fold (from 4096/12-bit points to 65536/16-bit points), which should not be necessary unless something has stuffed lots of data/"detail" in a tiny (1/4096th) slice of the - in this case - upper dynamic range (giving it very high importance, and thus making it "deserving" of more dynamic range).
Thanks for chiming in and for taking some time, Ivo. And thanks for the detailed explanation. But I'm sorry, I think I don't understand - at least not everything. My assumption was that this shoulder is a result of having been assigned too little dynamic range? So this is the case because there's another (higher) range which 'eats up' (has been assigned) most of the available dynamic range? So nothing is left over for the range of the shoulder?
The curve has nearly no inclination in the range of the plateau? Is this right?
I would have thought to mitigate this by decreasing the resolution of the curve?
Take the notion of decreasing the resolution of the curve to the absurd extreme and give it just one bit. Now every pixel in the 'stretched' image will either be On or Off and (obviously) all the detail in the original data is lost.
Is increasing resolution 'necessary' (at the top end, but I can see how it might only make sense to implement globally)? I think that, in the overall scheme of trying to display the actual target, the answer should be no. But if the bright stars have obvious flaws they can attract all the attention away from the nebula/galaxy.
Re: Donut stars
Thanks, Ron. This is probably not my day.
Ivo wrote "the amount of points it calculates to interpolate the constructed curve between". My assumption was that we have N points and interpolate the curve between them using a spline function or similar. So two points would allow us to construct a line, which would be the identity transformation with regard to the stretching curve.
Re: Donut stars
dx_ron wrote: ↑Sun Mar 24, 2024 2:30 pm As for 16/32, it's a bit awkward in Siril to keep swapping back and forth (really, to remember to go a couple of menu levels deep to make the switch) to calibrate and register in 16-bit then stack in 32-bit, so I just had it default to 32-bit. I will have a go at changing my habits.
Mike: is it just me reading too much into it, or is it weird that PI returns 55k, 63k, 62k, 57k for four consecutive pixels that each have the same raw reading (65k)?
I could be wrong, but I thought Ivo was just looking to see the actual raw sub from the camera? I guess I'd better go re-read that post. I'm not sure going all retro and stacking at 16-bit is warranted? Not that 16-bit is anything to sneeze at, but calibration subs get combined/averaged too, so why not use the greater precision, and then the same when all the calibrated lights are stacked?
And yeah I saw that line too in my raw sub - all middle pixels blown up. Hey, point source!
I pulled that sub because the PI log said it was the chosen reference, so I assume it's a reasonably good one. But I am unsure how that all pans out after many subs are calibrated, normalized, star-aligned (which will include some form of sub-pixel splining or interpolation), and averaged. I guess I should look at a few other subs.
I suppose I can also make a graph - I mean, it is a spreadsheet. I had been looking for some kind of obvious sensor non-linearity?
decay wrote: ↑Sun Mar 24, 2024 3:10 pm Thanks for chiming in and for taking some time, Ivo. And thanks for the detailed explanation. But I'm sorry, I think I don't understand - at least not everything. My assumption was that this shoulder is a result of having been assigned too little dynamic range? So this is the case because there's another (higher) range which 'eats up' (has been assigned) most of the available dynamic range? So nothing is left over for the range of the shoulder?
The curve has nearly no inclination in the range of the plateau? Is this right?
I would have thought to mitigate this by decreasing the resolution of the curve?
I certainly don't have a grasp of all this myself. Might have to mull it all over while pushing the mower around later today.
I was thinking more in terms of histogram curves and dynamic range allocation. I'm sure you guys have seen me say a half dozen times that I'd like to figure out what OptiDev is doing to curve the highlights. But I for sure hadn't realized the plateau/illusion until now, and hadn't thought of it in terms of quantization. Although I suppose I should have - that's what histograms show, right? What a dummy.
So I must ponder this all, probably right from the get-go. If we use gain on the sensor, does that result in bigger ADU buckets on the top end (setting aside non-linearity)? Then what happens with the 32-bit transformation of stacking? Then we start working on that, and at some point - not sure where - ST knocks it back down to 16-bit... but how? Then for display, sRGB is some kind of 8-bit-per-channel color, right? And undoubtedly a gamma curve is thrown at things depending on the screen/monitor?
Piece of cake.
Re: Donut stars
Here's a quick chart with colored buckets for each pixel across the middle of the star. Interesting graphic representation of stretching, huh?
When I have time I'll try to add more distance on either side to get the full star profile down to the background level. And who knows maybe add some other stretching styles and programs.
EDIT: Note that the "trouble" areas weren't starting out way at the top end, but down pretty low and only up to 20K or so ADU, or maybe even 10K.
Re: Donut stars
Link to the raw single sub - https://www.dropbox.com/scl/fi/gdnc1l46 ... 1tc0t&dl=0 although it probably does not matter any more.
I think 16-bit ADCs are now fairly common, with the IMX571 having become so popular, though maybe that's just the people who post a lot on CN.
Dietmar, I started trying to write out how our different analogies could point in the same direction, but I got stuck because I don't think I understand how it works.
Last edited by dx_ron on Sun Mar 24, 2024 10:28 pm, edited 1 time in total.
Re: Donut stars
Mike in Rancho wrote: ↑Sun Mar 24, 2024 8:40 pm Here's a quick chart with colored buckets for each pixel across the middle of the star. Interesting graphic representation of stretching, huh?
Colors! Now we're talking! Can you do one with 8x10 color glossy photographs, with circles and arrows and a paragraph on the back of each one?
Re: Donut stars
Hi Ron, that's my problem as well. There's something Ivo wrote that I do not understand (as I wrote in my reply to Ivo). Maybe Ivo will give us some hints and we can try to figure it out, step by step. Sorry if I caused even more confusion.
(And I still have to work up your discussion with Mike. Tomorrow )
Best regards!
Re: Donut stars
I'm definitely not proposing we should be doing 16-bit stacking. However, the assumption is that the source material is *multiple subs* of 10-, 12-, 14- or 16-bit quantized data, where values fall within the range of 0 to 1023, 4095, 16383 or 65535. Stacking N subs will - in essence - add log2(N) bits of precision to this (i.e. quantization steps of 0.5, 0.25, 0.125, 0.0625 for 2, 4, 8, 16 subs, etc.).
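A loose illustration of that point (not code from any stacking package; the `stack` helper is made up): averaging N integer-quantized subs produces values on a grid of 1/N ADU, which a single sub could never encode.

```python
from fractions import Fraction

# Sketch: each sub is quantized to whole ADU, but the average of N subs
# can land on any multiple of 1/N ADU - i.e. log2(N) extra bits of precision.
def stack(subs):
    """Average a list of integer ADU readings exactly."""
    return Fraction(sum(subs), len(subs))

print(stack([100, 101]))                   # 201/2 -> 100.5, impossible in any single sub
print(float(stack([100, 101, 101, 102]))) # 101.0 (grid of 0.25 ADU)

# smallest nonzero step of the averaged values, per stack size
for n in (2, 4, 8, 16):
    print(n, 1 / n)                        # 0.5, 0.25, 0.125, 0.0625
```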
While floating point operations and encoding are very useful for intermediary calculations and representation, they more readily allow values outside of the expected range (0..unity), letting software (or humans) erroneously interpret out-of-bounds data or encode potentially destabilizing rounding errors - while in the real world we only captured integers, and we know that there is a range beyond which data numbers should not occur (or are not trustworthy).
For example, I believe Ron's dataset encoded numbers/pixels past 1.0, which if 1.0 was taken as unity, should not really occur. In an attempt to establish what "pure white" (unity) is, StarTools assumes the highest number in the dataset is unity (unless it encounters a FITS keyword that says otherwise) - it goes to show how this introduces unnecessary ambiguity. This ambiguity can have real consequences.
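A tiny sketch of that ambiguity (the array values here are invented): if the white point is guessed as the dataset maximum and one rogue pixel sits above true unity, every genuinely saturated pixel gets rescaled below 1.0.

```python
import numpy as np

# Hypothetical data: 1.00 is true saturation, but some processing
# artifact produced an out-of-bounds 1.07 somewhere in the frame.
data = np.array([0.10, 0.50, 1.00, 1.07])

unity = data.max()            # white point assumed to be the maximum: 1.07
normalized = data / unity

print(normalized[2])          # ~0.935 - a truly saturated pixel no longer reads 1.0
print(normalized[3])          # 1.0    - the artifact is now "pure white"
```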
Mike's graph is super helpful in trying to demonstrate the issue: the 2600MM 20s L sub is clearly over-exposing and correctly encodes the over-exposed pixels as 65535. Those pixels are not reliable information anymore - the sensor well was full. Yet somehow they have been given values that are no longer 65535 in the final PI MasterLight. This is because the stacking algorithm has decided to - in effect - average 65535 with some subs where the same pixels read something lower than 65535. The reasons why some pixels in some subs may have read lower than 65535 are numerous, but even a slight misalignment of the sub will do it. Nevertheless, the end result is a "spike" that does not really exist.
OptiDev's algorithm can ferret out the spike and sees the enormous difference between where the spike pixels start and where the real stellar profile ends. The enormous difference that it detects, means that it will allocate more dynamic range just for that spike to make that difference visible.
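A minimal sketch of how such a spike can arise (the numbers are invented): a pixel at full well in most subs, but sampled off-core in a couple of misaligned subs, averages to a value the sensor never actually reported.

```python
import numpy as np

# 10 subs: the pixel is at full well (65535) in 8 of them, but slight
# misalignment drops a dimmer neighbour onto it in the other 2.
subs = np.full(10, 65535.0)
subs[2] = 20000.0
subs[7] = 30000.0

print(subs.mean())   # 57428.0 - a plausible-looking but fictitious value
```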
The way OptiDev works is - roughly - as follows;
- For each pixel, establish a measure of local entropy (how "busy" the local area is). This is our proxy for "detail"; it gives us a number. If not much happens (for example in the gradual stellar profile or in an over-exposing all-65535 core) that number is low. If a lot happens, for example in the transition from stellar profile to artificial spike that number is high.
- Divide up the full dynamic range into brightness "tranches" (4096 before, 65536 now). For each tranche, tally up all the "busyness" numbers for pixels that fall into that tranche.
- Expand (or contract) each tranche's start and end points (in terms of the dynamic range it occupies) from being evenly distributed, to being non-evenly distributed, based on how "busy" the tranche is. Busy tranches get more dynamic range, tranquil tranches get less dynamic range.
Ivo Jager
StarTools creator and astronomy enthusiast