I'm having trouble wrapping my head around how it works exactly, and in what cases it might not be ideal.
I know from narrowband images it's sometimes better to use just Ha as luminance for a cleaner L layer, since adding Oiii and Sii can make the L result noisier.
But for LRGB images with equally weighted RGB channels, are there circumstances where it is better not to use the additional L + synthetic luminance?
I'm trying to understand how it produces a cleaner luminance result. I understand it sums the RGB channels, but can it also introduce more noise in some cases? I'm guessing it doesn't work exactly like integration, where values are averaged and noise is reduced, or does it? Would it make sense to do another integration of the RGB and L frames together instead, if my goal is the cleanest luminance? Are there drawbacks or benefits to doing that at the integration stage when L and RGB exposure times are 1:3?
Then I'm wondering about other circumstances where it might be better to use only the shot luminance, e.g. if guiding/seeing is poor during RGB shooting and the L has better star size. I just shot an M42 project on a windy night with mediocre seeing. To avoid overexposing the core, my L frames were 10s and RGB 30s, and the L was shot later in the night as things calmed down. I am still integrating all of that, but I think the shorter L frames could produce a sharper L result, and adding a synthetic L from RGB could make it worse; I haven't checked yet.
Basically I'm looking for clarification so I can conceptualize how synthetic L extraction and addition to the real luminance works, when it will produce a lower-noise, cleaner result, and in which cases it might instead introduce noise or problems.
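On the "does it work like integration" question, a quick numerical sketch may help. The numbers below are made up for illustration (not real calibration data), but they show the core statistical effect: averaging N frames of the same signal reduces random noise by roughly sqrt(N), because the signal adds coherently while the noise does not.

```python
import numpy as np

# Illustrative only: 9 simulated "frames" of a flat 100-ADU signal
# with Gaussian noise of sigma = 10.
rng = np.random.default_rng(0)
signal = 100.0
n_frames, n_pixels = 9, 100_000
frames = signal + rng.normal(0.0, 10.0, size=(n_frames, n_pixels))

single_noise = frames[0].std()           # noise of one frame, ~10
stack_noise = frames.mean(axis=0).std()  # noise of the 9-frame average, ~10/3

print(single_noise / stack_noise)        # ~3, i.e. sqrt(9)
```

The caveat for synthetic L is that R, G, and B frames are not frames of the *same* signal; each channel only sees part of the spectrum, so the improvement from summing them depends on how well the channels match the L filter's passband.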
Looking for some clarity on synthetic luminance generation
Re: Looking for some clarity on synthetic luminance generation
OK, I did some tests just looking at star size:
just the integrated luminance frames, then integrating RGB and L together, then RGB + synthetic L from StarTools with exposure weights, saved out.
The straight luminance stack has the lowest FWHM at 2.479. The combined RGB+L integration has a median value of 2.579, and the extracted L + L has the worst median value of 2.644.
This is kind of what I expected, though I'm not sure how big of a difference it is. I did not get to shoot enough L, so it would probably benefit from at least the combined integration. I also noticed some slight angled streaking in the background of the L integration; not sure if this is something from the short 10s exposures with this camera, but it is better in either combined result, and it's not something I've seen before. Both the combined integration and the StarTools RGB + synthetic L look cleaner and lower noise. Not sure of the best way to analyze them to compare that.
I am using Astro Pixel Processor for integration, and the settings for all three were stacking based on star-shape weights, which gives highest priority to the best star size/shape.
edit:
And this is from running the PixInsight noise evaluation script on the three.
I don't know of a better way to evaluate noise and SNR, but I am open to ideas.
Also, similar results to what I expected: extracted L + RGB offers some improvement over straight L, but the combined L+RGB integration has the lowest noise.
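For a rough comparison without PixInsight, one simple (much cruder) alternative is a sigma-clipped standard deviation of the background: iteratively reject pixels far from the median so stars don't dominate, then measure the scatter of what remains. This is only a sketch, not PixInsight's MRS estimator, but it's usually good enough to rank two stacks of the same target:

```python
import numpy as np

def sigma_clipped_std(img, k=3.0, iters=5):
    """Crude background-noise estimate: repeatedly reject pixels more
    than k*sigma from the median, then return the std of the survivors.
    Slightly underestimates true sigma (the clipping truncates the
    distribution), but the bias is the same for both images compared."""
    data = np.asarray(img, dtype=float).ravel()
    for _ in range(iters):
        med, std = np.median(data), data.std()
        data = data[np.abs(data - med) < k * std]
    return data.std()
```

Usage would be something like `sigma_clipped_std(stack_a) < sigma_clipped_std(stack_b)` on linear, background-matched data.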
Re: Looking for some clarity on synthetic luminance generation
The entire premise behind adding R+G+B to L is that one R+G+B frame is equal to one L frame, because ideally the three color filters together cover precisely the spectrum that the L filter covers.
Of course, this assumption is - at best - an approximation. Take, for example, this LRGB set by Astrodon; (source; this page)
You can see that the assumption roughly holds, but also that there are definitely parts where L response extends beyond red, where green response lacks vs L response and where blue and green overlap somewhat.
As you point out, even if the RGB set perfectly matches the L filter, then you still have to contend with external factors that may cause discrepancies between what you captured in L, versus what you captured in the aggregate of R, G and B.
Indeed, such things can be changing atmospheric conditions, slight differences in filter coating quality, slight differences in focus, etc. Also important is how well the four channels are aligned. This is not to be underestimated, and it is one of the big reasons why the recommendation is to stack L first (best signal) and then use that stack as the reference frame to stack the R, G and B sets against. This tends to yield better results than aligning the fully stacked L, R, G and B sets after the fact.
All up though, adding RGB to L is typically worth it. It definitely makes your luminance (and thus detail) signal quite a bit deeper.
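To make the "RGB added to L" idea concrete, here is a minimal sketch of one plausible exposure-weighted combination. The weighting scheme (total integration time as a proxy for signal depth) is my assumption for illustration; StarTools' actual exposure weighting may differ, and it presumes both stacks are registered, background-matched, and on the same intensity scale:

```python
import numpy as np

def combine_luminance(l_stack, synth_l, t_l, t_rgb):
    """Weighted average of the real L stack and a synthetic L derived
    from RGB, weighted by total integration time as a rough proxy for
    relative signal depth. Illustrative sketch only."""
    return (t_l * l_stack + t_rgb * synth_l) / (t_l + t_rgb)

# Synthetic L itself can be as simple as the per-pixel channel mean:
# synth_l = rgb_cube.mean(axis=0)
#
# With the 1:3 L:RGB exposure ratio mentioned earlier:
# combined = combine_luminance(l_stack, synth_l, t_l=1.0, t_rgb=3.0)
```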
Does this help?
Ivo Jager
StarTools creator and astronomy enthusiast