Looking for some clarity on synthetic luminance generation
Posted: Thu Nov 11, 2021 7:30 am
I'm having trouble wrapping my head around how it works exactly, and in what cases it might not be ideal.
I know from narrowband imaging that it's sometimes better to use just Ha as the luminance for a cleaner L layer, since adding OIII and SII can make the result noisier.
But for LRGB images with equally weighted RGB channels, are there circumstances where it's better not to add a synthetic luminance to the shot L?
I'm trying to understand how it produces a cleaner luminance result. I understand that it sums the RGB channels, but can it also introduce more noise in some cases? I'm guessing it doesn't work exactly like integration, where values are averaged and noise is reduced, or does it? If my goal were the cleanest possible luminance, would it make sense to run another integration with the RGB and L frames combined instead? Are there drawbacks or benefits to doing that at the integration stage when the L and RGB exposure times are 1:3?
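To sanity-check my intuition about the averaging part, here's a quick numerical sketch I put together (a toy example of my own, not how any particular tool actually implements synthetic L). It assumes three equally exposed channels carrying the same signal with independent Gaussian noise; the per-pixel mean keeps the signal but cuts the noise by roughly √3:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: same underlying signal in R, G, B with independent noise.
signal = 100.0
noise_sigma = 10.0
n_pixels = 1_000_000

r = signal + rng.normal(0.0, noise_sigma, n_pixels)
g = signal + rng.normal(0.0, noise_sigma, n_pixels)
b = signal + rng.normal(0.0, noise_sigma, n_pixels)

# Synthetic luminance as the per-pixel mean of the three channels.
synthetic_l = (r + g + b) / 3.0

# Averaging 3 independent measurements reduces noise by ~sqrt(3).
print(f"single-channel noise std: {r.std():.2f}")
print(f"synthetic-L noise std:    {synthetic_l.std():.2f}")
```

Under these idealized assumptions the synthetic L measures about 5.8 where each channel measures about 10, i.e. the same √N improvement integration gives. Of course real channels don't share identical signal, so I'm not sure how far this carries over.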
Then I'm wondering about other circumstances where it might be better to use only the shot luminance, e.g. if guiding/seeing is poor during the RGB shooting and the L has a better star size. I just shot an M42 project on a windy night with mediocre seeing. To avoid overexposing the core, my L frames were 10 s and the RGB 30 s, and the L was shot later in the night as things calmed down. I'm still integrating all of that, but I think the shorter L frames could produce a sharper L result, and adding a synthetic L from the RGB could make it worse; I haven't checked yet.
Basically, I'm looking for clarification so I can conceptualize how synthetic L extraction and its addition to the real luminance work: when will it produce a lower-noise, cleaner result, and in which cases might it instead introduce noise or other problems?