LRGB v RGB
I was going to post this on Cloudy Nights but wanted to run it past here first.
The question: is it better to take Luminance images on top of RGB, i.e. a set of LRGB, or just RGB?
I would like to revisit this debate because I think the equipment and software used to capture and process astro images has moved on, and I would like to have my thought process tested.
I am one of those who has moved from taking OSC DSLR (mirrorless) astro images to a dedicated mono astro camera with a CMOS sensor (ZWO ASI1600MM Pro). I was led to believe by the accumulated wisdom of the astro community that this was the way to go. In my ignorance I thought it was simply a matter of taking R + G + B images through wideband filters. I was not fully aware of the question of whether to also take images through a Luminance filter, or indeed of its significance. Now, after reading the very informative debate on Cloudy Nights, I can see that there are good arguments for both methods. But to me it comes down to what equipment you are using and how you process the data.
I have to say that I am more persuaded by the RGB arguments than the LRGB ones. I have no doubt that the maths shows LRGB to be better, but it ignores what some would call the true signal (i.e. the useful bit of the signal). If I look at the ZWO filter set that came bundled with my ASI1600MM Pro, the Blue filter has a response between 400-500nm, the Green overlaps this a bit at 470-590nm, and the Red covers 620-700nm. There is a significant gap (590-620nm) between the green and the red. I am assuming that there are not many DSOs out there that shine at these wavelengths, and that light in this range is more likely than not to be light pollution (unwanted signal = noise). The Clear filter used for luminance has a much wider band, 400-710nm, including the gap between the green and red. It seems to me, therefore, that the clear filter will show more "noise" because it captures this bit of the spectrum, which I don't want.
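As a rough sanity check on those numbers (treating each bandpass as a perfect rectangle, which real filter curves are not), here is a quick back-of-the-envelope calculation in Python:
[code]
# Back-of-the-envelope bandpass bookkeeping for the quoted filter figures,
# treating each filter as a perfect rectangle (real curves roll off gradually).
blue, green, red = (400, 500), (470, 590), (620, 700)   # nm
clear = (400, 710)                                       # nm (luminance)

# Wavelengths covered by at least one of R, G, B: blue and green overlap
# between 470 and 500 nm, so 400-590 is contiguous, plus 620-700 for red.
rgb_coverage = (green[1] - blue[0]) + (red[1] - red[0])  # 190 + 80 = 270 nm
clear_width = clear[1] - clear[0]                        # 310 nm

uncovered = clear_width - rgb_coverage                   # 40 nm: the 590-620 gap plus the 700-710 tail
print(f"Clear passes ~{uncovered} nm ({uncovered / clear_width:.0%}) that the RGB set rejects")
# -> Clear passes ~40 nm (13%) that the RGB set rejects
[/code]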
Now, one of the claims on the LRGB side is that for a set imaging time you get better results, but this requires you to bin your R + G + B frames 2x2 and hence use shorter exposures for them. The reason for doing this appears to be that read noise in CCD sensors is significantly reduced by binning. But wait: I am using a CMOS sensor, and I image at a gain of 129, which gives a read noise of about 1.8 e- RMS. I have read that because the read noise is so low there is no benefit to binning at the sensor with a CMOS camera, so I do not bin. That means that if I go with the LRGB method I now collect less RGB in the same imaging session, because I have to take Luminance frames as well.
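For what it's worth, the read-noise argument can be put in numbers. The sketch below is my own illustration (not anything from the camera driver), comparing the per-superpixel read noise of CCD-style hardware binning against CMOS software binning, using the 1.8 e- RMS figure quoted above:
[code]
import math

read_noise = 1.8   # e- RMS per pixel, ASI1600MM Pro at the gain quoted above

# Hardware (CCD) 2x2 binning: charge from 4 pixels is summed on-chip and
# read out once, so the superpixel carries a single read-noise penalty.
hw_bin_rn = read_noise                     # ~1.8 e- per superpixel

# Software (CMOS) 2x2 binning: 4 pixels are read individually and summed
# afterwards, so the read noises add in quadrature.
sw_bin_rn = read_noise * math.sqrt(4)      # ~3.6 e- per superpixel

print(f"hardware bin: {hw_bin_rn:.1f} e-, software bin: {sw_bin_rn:.1f} e-")
# With read noise this low, both figures are usually small next to the
# sky-background shot noise in a 120 s sub, which is why binning at the
# sensor buys little on a low-read-noise CMOS camera.
[/code]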
This is now very easy to test: all I need to do is take a set of LRGB images and process them once as LRGB and once as RGB. I use StarTools, which has a very useful feature of being able to process luminance and colour in parallel. So I have taken some images of the Hickson 44 galaxy group with my iOptron RC6, no reducer or field flattener: Clear filter 60 x 60 sec, Red 55 x 120 sec, Green 55 x 120 sec and Blue 55 x 120 sec. The first image was processed in StarTools using "Luminance, Colour" (L, RGB); total integration time 6.5 hrs. The second image was processed using "L + Synthetic L from RGB, RGB"; total integration time 5.5 hrs. The third was made by creating a synthetic luminance from the L, R, G and B together and using that new luminance file, what I think they call LLRGB. All were processed in exactly the same way after the loading and blending of the files. I have not changed anything in the Colour module, just accepted the default values. The only difference I can see is that the LRGB image shows a bit more noise and a very slight loss of colour, and that is despite it having the longer integration time. If I am missing something, then the difference is so small that, for me, at the moment it would come down to other factors.
So my take, for my setup, is that the difference is so small as to make no difference. I can get longer integration by taking just RGB, and the real benefit is that I can ditch the Clear filter and put an OIII narrowband filter in my 5-position electronic filter wheel to go along with the Ha that is already in there.
- Attachments
- Processed as L L R G B, integration time 6.5 hrs: H44 LLRGB.jpg (213.93 KiB)
- Processed as R G B, integration time 5.5 hrs: H44 RGB.jpg (210.95 KiB)
- Processed as L R G B, integration time 6.5 hrs: H44 LRGB.jpg (232.19 KiB)
Re: LRGB v RGB
By the sounds of it, you have a decent handle on the considerations.
All things being equal, under perfect circumstances with perfect filters and identical (shifted) filter responses, shooting R + G + B separately and combining them into a synthetic luminance frame should be exactly equivalent to using a real luminance frame.
In practice there are some losses (filter transparency losses) and variables (changes in atmosphere and sky conditions) to contend with.
However, because all things are not equal, and because filters are not perfect and do not have 100% identical (shifted) responses, real luminance frames will always be preferable to a synthetic luminance built from separate R, G and B frames.
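For the curious, "combining into a synthetic luminance frame" amounts to something like the numpy sketch below. The file names and the equal per-channel weighting are only illustrative assumptions; the compose functionality mentioned elsewhere in this thread does the equivalent for you internally with its own weighting:
[code]
import numpy as np
from astropy.io import fits  # assuming the stacks are saved as FITS files

# Load the registered, stacked R, G and B masters (hypothetical file names).
r = fits.getdata("stack_R.fits").astype(np.float64)
g = fits.getdata("stack_G.fits").astype(np.float64)
b = fits.getdata("stack_B.fits").astype(np.float64)

# Simplest synthetic luminance: a straight per-pixel average of the three
# channels. Under ideal, identical filters this approximates what a real
# luminance filter would have collected over the same band.
syn_l = (r + g + b) / 3.0

fits.writeto("stack_synL.fits", syn_l.astype(np.float32), overwrite=True)
[/code]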
Back to the question at hand, then. If you will be taking Luminance frames (recommended as the results will be more consistent and optimal), then spend as much time as you can collecting those. Only acquire the minimum amount of RGB you can get away with.
One small (probably obvious) note: the total "theoretical" integration time for the luminance signal is (roughly) L + ((R + G + B) / 3). That's because it is assumed (big assumption) that the R, G and B filters each collect exactly one third of the light of the full L spectrum. So, to observe a more dramatic difference, acquire more L and less R, G and B.
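Applied to the exposure times listed in the first post, that rule of thumb works out roughly as follows (a sketch purely under the "one third of the light" assumption above):
[code]
# Integration times from the first post, converted to hours.
L = 60 * 60 / 3600            # 1.0 h of Clear/luminance
R = G = B = 55 * 120 / 3600   # ~1.83 h per colour filter

# Rule of thumb: each colour filter is assumed to gather about one third
# of the light a full-spectrum luminance sub would over the same time.
effective_L_with_lum = L + (R + G + B) / 3     # ~2.83 h
effective_L_synthetic = (R + G + B) / 3        # ~1.83 h

print(f"LRGB effective luminance depth: {effective_L_with_lum:.2f} h")
print(f"RGB-only (synthetic L) depth:   {effective_L_synthetic:.2f} h")
# Roughly an hour's difference in effective luminance depth, consistent
# with the small visual difference reported above; several more hours of
# L (and less R, G, B) would widen the gap.
[/code]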
Please also note that mixing different sub-frame exposure times when creating a synthetic luminance frame may make highlights and noise floors behave differently once it is blended with a real luminance frame shot at a different sub-frame exposure.
Your LLRGB image (the one marked "Processed as L L R G B, integration time 6.5 hrs", H44 LLRGB.jpg), to me, shows a slight improvement in the definition of the faint fuzzy at 3 o'clock. I would say the datasets and images look roughly as expected, with the LLRGB stack showing the cleanest detail, the (SynL)RGB stack coming in a close second and the LRGB stack coming in a distant third.
Hope any of the above is useful!
Ivo Jager
StarTools creator and astronomy enthusiast
Re: LRGB v RGB
Thank you Ivo. I think I was a bit unkind to include the LRGB blend, because if you have taken luminance then I can see that it makes sense to do an LLRGB. I am not that good at seeing the finer points of an astro image yet. I am just starting to get a better handle on the StarTools combine module and have to say it's just what those like me with lesser processing skills need to get better astro images.
I shall try your advice and take at least 4 hours of luminance, process that on the Hickson 44 galaxy group, and see what improvement it makes. I can see that this approach could make data acquisition more efficient. A question I have, however, is how much the frequency response of a luminance filter matters. My ZWO Clear has a very flat response; would it be better if I used something like a light pollution filter with a multiple-band response?
I was hoping for confirmation that dropping the Luminance filter would be OK for my setup, but I value what those who have gained considerable experience have found; it's the sanity check I need, so thank you very much.
Re: LRGB v RGB
JLP wrote: Thank you Ivo. I think I was a bit unkind to include the LRGB blend, because if you have taken luminance then I can see that it makes sense to do an LLRGB.
That totally depends on the quality of your RGB data, of course!
JLP wrote: I am not that good at seeing the finer points of an astro image yet. I am just starting to get a better handle on the StarTools combine module and have to say it's just what those like me with lesser processing skills need to get better astro images.
I'm happy to hear that!
JLP wrote: I shall try your advice and take at least 4 hours of luminance, process that on the Hickson 44 galaxy group, and see what improvement it makes. I can see that this approach could make data acquisition more efficient. A question I have, however, is how much the frequency response of a luminance filter matters. My ZWO Clear has a very flat response; would it be better if I used something like a light pollution filter with a multiple-band response?
For the purpose of pure luminance, you could definitely decide to image without a filter in place at all. A luminance filter is mostly just there to block out IR, but by letting IR pass as well (by not having a filter) you can often achieve a nice bump in signal.
JLP wrote: I was hoping for confirmation that dropping the Luminance filter would be OK for my setup, but I value what those who have gained considerable experience have found; it's the sanity check I need, so thank you very much.
Do you know if your red, green and blue filters block IR? If not, then you may want to image with both the luminance AND the respective color filter in place when doing the color channels. You can usually find the frequency response on the manufacturer's website.
Clear skies!
Ivo Jager
StarTools creator and astronomy enthusiast
Re: LRGB v RGB
Just to add to Ivo's info: ideally, when you do LRGB you would use binning on the RGB channels to again "speed up the process", e.g. hardware binning (2x2) for a CCD or software binning for a CMOS camera like yours. The idea is that you don't need detail in the colour, just the colour itself, so you can sacrifice resolution and gain some exposure time back. For example, if you take 2 hrs of L, people would often take only 1 hr total of RGB (so about 20 mins per filter) at bin 2x2 to get the colour information. But is there also detail in the colour channels? Sure is, on some targets, so for many bigger galaxies you may want to take colour at the standard resolution (bin 1x1) to pick out those details, and even mix in some Ha. But I guess the whole point of LRGB was to "speed up" acquisition and leverage the CCD binning benefits. It works best in darker skies, as the colour isn't as washed out, but as you note you can use an LP filter in place of the "Lum" filter. You can also use an Ha filter in place of "L" too.
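Software binning of the kind mentioned for CMOS cameras can be done after capture; a minimal numpy sketch (my own illustration, assuming a mono frame and averaging rather than summing so the pixel values stay on the same scale):
[code]
import numpy as np

def bin2x2(frame: np.ndarray) -> np.ndarray:
    """Software-bin a mono frame 2x2 by averaging each 2x2 block of pixels.

    Halves the resolution in each axis and improves per-pixel SNR, which is
    fine for colour data where fine detail matters less than in luminance.
    """
    h, w = frame.shape
    h -= h % 2  # crop a stray row/column if the dimensions are odd
    w -= w % 2
    return frame[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Example: bin a simulated frame at the ASI1600's 4656 x 3520 resolution.
full = np.random.poisson(500.0, size=(3520, 4656)).astype(np.float32)
binned = bin2x2(full)
print(full.shape, "->", binned.shape)   # (3520, 4656) -> (1760, 2328)
[/code]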
Re: LRGB v RGB
Hi guys, thanks for taking the time out to reply. My filter set has a UV/IR cut, so it does not pass the IR band.
I hear and understand what's being said, but my experience is showing me something else, so what is it that I am doing wrong?
Last night I took another 2 hrs' worth of frames on the Hickson 44 galaxy group. I added that to my previous luminance stack and processed it in the same way as I had done for the previous set.
I have to say that, to me, the resulting images are indistinguishable, i.e. the LLRGB and the one I call the RGB, which is from "L + Synthetic L from RGB, RGB". I now have 8.5 hrs integration time in the LLRGB set and only 5.5 hrs in the RGB set. I may be able to tease out more detail if I process these stacks differently, but the purpose of this exercise was to compare the raw datasets; is that fair on the cleanest data?
My dataset can be downloaded from this link.
https://btcloud.bt.com/web/app/share/invite/d5MGM51rwf
I am going to try and add another 3 hrs' worth of RGB and see if that shows an improvement.
This is the new LLRGB with an additional 2 hrs lum.
- Attachments
- H44 LLRGB, total integration time 8.5 hrs: H44 LLRGB 2hr 59m Lum.jpg (225.04 KiB)
Re: LRGB v RGB
Thank you for uploading. I notice your datasets are not very well calibrated (if at all?) and show a fair bit of walking noise.
You should urgently address these issues first, as any noise comparison that focuses on shot noise (the only noise an AP'er should ideally be concerned with) becomes quite muddied otherwise...
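To illustrate the point: random shot noise averages down roughly as 1/sqrt(N) as you stack more subs, but a fixed pattern that repeats in every sub (the source of walking noise when the field drifts slowly without dithering) does not. A toy numpy simulation, with made-up noise levels purely for illustration:
[code]
import numpy as np

rng = np.random.default_rng(0)
n_subs, n_pix = 60, 100_000

# Random (shot-like) noise: different in every sub, averages down ~ 1/sqrt(N).
random_noise = rng.normal(0.0, 10.0, size=(n_subs, n_pix))

# Fixed pattern (e.g. imperfect dark calibration): identical in every sub,
# so stacking does nothing to reduce it unless the frames are dithered.
pattern = rng.normal(0.0, 3.0, size=n_pix)

stack = (random_noise + pattern).mean(axis=0)
print("random component per sub:", random_noise[0].std())   # ~10
print("stacked residual noise:  ", stack.std())              # ~3.3
# The fixed pattern (sigma ~3) dominates the stack even though it was small
# in any single sub; dithering decorrelates it so it can average down too.
[/code]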
Ivo Jager
StarTools creator and astronomy enthusiast
Re: LRGB v RGB
Help, I think I will need some help in understanding this.
admin wrote: Thank you for uploading. I notice your datasets are not very well calibrated (if at all?) and show a fair bit of walking noise. You should urgently address these issues first, as any noise comparison that focuses on shot noise (the only noise an AP'er should ideally be concerned with) becomes quite muddied otherwise...
How have you identified that my datasets are not very well calibrated? Did you do this from within StarTools? If so, how, and what are you looking for?
I do not understand walking noise. I do not dither at the moment; is this related?
I calibrate my datasets in DSS 4.2.3. I have taken 20 darks (120 sec for RGB and 60 sec for Lum), created a master, and I use the master. I have taken 20 flats for each filter, created a master, and I use the master. I have taken 20 flat darks, created a master, and I use the master. The flat darks match the exposure time and gain of the flats. Everything has been taken at a gain of 129 with the sensor temperature set at -5 deg C. I have not taken or used bias frames.
Is it the way I am using the masters that is causing the datasets to be badly calibrated?
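For reference, the arithmetic those master frames should end up feeding into looks roughly like the generic sketch below (hypothetical file names throughout; this is the standard recipe, not a description of DSS internals, which applies its own normalisation):
[code]
import numpy as np
from astropy.io import fits  # hypothetical file names throughout

light        = fits.getdata("light_R_120s.fits").astype(np.float64)
master_dark  = fits.getdata("master_dark_120s.fits").astype(np.float64)
master_flat  = fits.getdata("master_flat_R.fits").astype(np.float64)
master_fdark = fits.getdata("master_flatdark_R.fits").astype(np.float64)

# 1. Remove dark current and bias from the light; the master dark, taken at
#    the same exposure, gain and temperature, contains both.
dark_subtracted = light - master_dark

# 2. Build the flat-field correction: flat minus its matching flat-dark,
#    normalised to a mean of 1 so it only encodes relative illumination
#    (vignetting, dust donuts) and not an overall brightness change.
flat = master_flat - master_fdark
flat /= flat.mean()

calibrated = dark_subtracted / flat
fits.writeto("light_R_120s_cal.fits", calibrated.astype(np.float32), overwrite=True)
# Note: if a flat-dark is subtracted again from a master flat that already
# had it removed, the flat ends up over-corrected.
[/code]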
I have redone the red filter stacking here using the raw files https://btcloud.bt.com/web/app/share/invite/k997WJdYAT
Is this any better?
My camera, EFW and telescope are fixed on a longer dovetail bar and only move in and out to focus. This imaging train has not been taken apart since I did the calibration files.
Any help will be most welcome. Thanks.
Re: LRGB v RGB
I had a good read of the DSS help file on calibrating. It seems I may have been applying the flat dark files twice because I was using the master files. I re-stacked my original RGB (1 hr 50 m) and the 3 hr luminance file without using the master files, just the original raw calibration files. Are these better calibrated?
Now, when I compare the RGB-processed image with the LLRGB file, I can see a slight improvement in the LLRGB file, but it has 3 hrs more integration time than the RGB. Last night I took another 2 hours of RGB and stacked that (hopefully properly). The RGB (composed as "L + Synthetic L from RGB, RGB") now has 7 hr 28 m of integration time compared to the 8 hr 30 m of the LLRGB.
I cannot see a significant difference.
The new files are here:
https://btcloud.bt.com/web/app/share/invite/BwgDoncnpS
Here are my results, along with the LLRGB using the 3 hr luminance file.
[attachment=1]H44 RGB 7hr 28m.jpg[/attachment]
- Attachments
- LLRGB, luminance 3 hr, total 8 hr 30 m: H44 LLRGB 8hr 30m.jpg (274.06 KiB)
Re: LRGB v RGB
This is an AutoDev of the synthetic luminance frame:
There are some pretty clear gradients and areas of uneven lighting visible, and even some dust donut remnants. A streaky pattern runs diagonally (due to insufficient dithering)...
Ivo Jager
StarTools creator and astronomy enthusiast