I have read as much as my brain can handle of the various guides and tutorials for using StarTools, so now it's time to ask for advice. This is the best version I can create of this image -
https://www.dropbox.com/s/vjd8ng7sfeyol ... 9.jpg?dl=0
This is 4.5 hrs of data using an RC6, a CCDT67 reducer set to 0.71, and an ASI294MC Pro OSC camera from a Bortle 6 site.
My RA RMS for these sessions averaged 1.2 and my Dec RMS 1.4, so a better mount and improved guiding will help sharpness.
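For what it's worth, a quick back-of-the-envelope check suggests guiding really is my limiting factor. This little Python sketch assumes the RC6's nominal 1370 mm focal length, the ASI294's 4.63 µm pixels, and that my RMS figures are in arcseconds (all assumptions, not measured values):

    import math

    focal_length_mm = 1370 * 0.71   # assumed nominal RC6 focal length x CCDT67 reducer
    pixel_um = 4.63                 # assumed ASI294MC Pro pixel size

    # Plate scale ["/px] = 206.265 * pixel size [um] / focal length [mm]
    # (206265 arcsec per radian; the extra factor 1000 is the um/mm unit mix)
    scale = 206.265 * pixel_um / focal_length_mm      # ~0.98 arcsec/pixel

    total_rms = math.hypot(1.2, 1.4)                  # combined RA/Dec error, ~1.84"
    print(f"{scale:.2f} arcsec/px, guiding error spans {total_rms / scale:.1f} px")

So the combined guiding error spans nearly two pixels at roughly 1"/px sampling.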
My StarTools log is here, with the workflow and settings I used - https://www.dropbox.com/s/up45rkydob5jd ... 9.txt?dl=0
Here is the stacked file I started with - https://www.dropbox.com/s/t04aeha34vhbl ... 3.fit?dl=0
What could I do differently or better to improve my processing of this data? I have a feeling I'm missing something, but I don't know what! All advice gratefully received.
Thanks in advance.
Paul E
Full Res version here -
Re: How can I improve my processing of this M101 image?
Hi Paul,
I don't think you've done badly at all, given the dataset!
I think the biggest gains can be made by improving your data acquisition and pre-processing, which will then flow through to big gains in post-processing, while also making post-processing a lot easier.
In that regard, by far the biggest improvement you could make would be dithering between frames. It would get rid of the many hot pixels and correlated noise clumps/strings. Regarding the latter, you will notice the current background noise is more like correlated "mottle" than random, single-pixel Poissonian noise specks. This makes it especially hard for noise mitigation algorithms to deal with; it is no longer random (it is caused by repeatedly sampling the same area with the same photosites on your camera, rather than spreading the sampling across multiple photosites to even out the per-photosite bias). Once your stacks come out cleaner as a result, you can push the data much harder (and dithering doesn't really cost you anything, nor does it add to your acquisition time).
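To make this concrete, here is a minimal numpy simulation (my own toy sketch, nothing StarTools-specific) of why a fixed sensor pattern survives a non-dithered average but melts away once you dither and re-register:

    import numpy as np

    rng = np.random.default_rng(0)
    N, size = 20, 64                             # 20 subs of a 64x64 patch
    pattern = rng.normal(0, 1, (size, size))     # fixed per-photosite bias / hot pixels

    # No dithering: every sub samples the sky with the same photosites,
    # so the fixed pattern survives averaging at full strength.
    static_stack = np.mean(
        [pattern + rng.normal(0, 1, (size, size)) for _ in range(N)], axis=0)

    # Dithering: shift the pointing a few pixels between subs, then re-align.
    # The pattern lands on different sky pixels each time and averages down.
    dithered = []
    for _ in range(N):
        dx, dy = rng.integers(-5, 6, 2)
        shifted = np.roll(pattern, (dy, dx), axis=(0, 1))  # stand-in for registration
        dithered.append(shifted + rng.normal(0, 1, (size, size)))
    dithered_stack = np.mean(dithered, axis=0)

    # Residual noise: ~1.0 (pattern intact) vs ~sqrt(2/N) (pattern averaged out)
    print(static_stack.std(), dithered_stack.std())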
Finally, try choosing an outlier rejection algorithm in your stacker (for example median stacking). This should get rid of things like satellite trails.
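The same kind of toy numpy experiment shows why rejection works where a plain average cannot: a bright trail present in one sub leaks into a mean stack at 1/N strength, but a median stack ignores it entirely (again a sketch, not any particular stacker's implementation):

    import numpy as np

    rng = np.random.default_rng(1)
    subs = rng.normal(100, 5, (10, 64, 64))   # 10 subs, sky background ~100 ADU
    subs[3, 32, :] += 5000                    # satellite trail through sub #3

    mean_stack = subs.mean(axis=0)
    median_stack = np.median(subs, axis=0)

    print(mean_stack[32].max())    # ~600: trail leaks through at 1/10 strength
    print(median_stack[32].max())  # ~100: trail fully rejected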
You don't say whether you used a light pollution filter, though the image seems fairly difficult to color balance correctly (leading me to think one was used?). The "Cap green" function should only be used as a last resort. Before resorting to it, try using the MaxRGB mode while removing green, until green is no longer dominant (small amounts in the noise are still acceptable and even desirable). Even when using a light pollution filter, you should be able to show a yellow core (red dominance in MaxRGB mode) and bluer outer regions (blue dominance in MaxRGB mode), with foreground stars that vary between deep orange, white and blue (though usually a distinct lack of yellow stars). The bright stars may also exhibit diffraction spikes that show at least some of the left-over color spectrum. You may also be able to recover purplish HII regions dotted around the spiral arms.
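In case MaxRGB is new to anyone reading along: it is essentially a per-pixel "which channel wins" map. A bare-bones numpy equivalent (a sketch of the idea only, not StarTools' actual implementation) would be:

    import numpy as np

    def max_rgb(img):
        """Keep, per pixel, only the dominant channel of an (H, W, 3) float image.

        Non-dominant channels are zeroed, so green-dominant pixels show up as
        pure green patches. If whole regions of background or galaxy light up
        green, the color balance still has a green bias left to remove.
        """
        dominant = img.argmax(axis=2)      # 0=R, 1=G, 2=B per pixel
        mask = np.eye(3)[dominant]         # one-hot mask of shape (H, W, 3)
        return img * mask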
Do let me know if this helps and if you have any questions/comments!
Ivo Jager
StarTools creator and astronomy enthusiast
Re: How can I improve my processing of this M101 image?
Hi Ivo.
That is so useful - many thanks for this prompt and detailed reply. I knew the camera was capable of better - I just didn't know how to "unlock" it. Next imaging run (whenever it stops being cloudy here in the UK) I'll try dithering. Good advice about the stacking algorithm too. No, I'm not using an LP filter, but again good advice about MaxRGB and what I should be looking for in galaxy colours.
I do have one further follow-up question. Living in a Bortle 6 zone and using an OSC camera is not an ideal combination. I know that Wipe does a good job of removing LP gradients, and although an LP filter such as an IDAS D2 would do a good job of removing the mix of "old style" street lamp and newer LED light pollution which I have, it also takes big chunks out of the yellow/orange spectrum (and hence star colours), which I can't sort out afterwards.
What I wondered was whether I would see any benefit from using an LP filter, taking, say, 4 hours of data with my OSC camera, converting that data to luminance/greyscale, and then combining it with, say, 4 hours of OSC frames of the same object taken without the LP filter, using some form of LRGB processing to take the greyscale detail from the filtered frames and the colour from the non-filtered frames. Conceptually, I would be trying to do at capture time what Wipe does anyway in software at post-processing time. Is there any advantage to this option, assuming the same total exposure time in both cases? The LP/LRGB option is clearly more work, and needs money spent on an LP filter I don't currently have.
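(By "what Wipe does in software" I mean, as I understand it, fitting a smooth model of the sky background and subtracting it. A toy numpy version of that idea - certainly not Wipe's actual algorithm, and real tools would mask bright objects before fitting - would be something like:

    import numpy as np

    def remove_linear_gradient(img):
        """Fit and subtract a planar sky model a*x + b*y + c from a 2D image.

        A stand-in for real gradient removal: estimate the smooth
        light-pollution signal and remove it, leaving the object's flux.
        """
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w]
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
        coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
        return img - (A @ coeffs).reshape(h, w)

)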
I'm talking specifically about galaxy images here - there is a separate conversation to be had about the use of duo- or tri-band filters for emission nebulae...
Your thoughts would be much appreciated.
Paul
Re: How can I improve my processing of this M101 image?
PaulE54 wrote: Hi Ivo.
That is so useful - many thanks for this prompt and detailed reply. I knew the camera was capable of better - I just didn't know how to "unlock" it. Next imaging run (whenever it stops being cloudy here in the UK) I'll try dithering. Good advice about the stacking algorithm too. No, I'm not using an LP filter, but again good advice about MaxRGB and what I should be looking for in galaxy colours.
Let us know how you go! I'm currently helping another ZWO ASI camera user whose datasets exhibit the same sort of noise signature due to not dithering. Any before-and-afters would be very interesting to see.
Interesting about the non-use of an LP filter. The color response is somewhat challenging to color balance (something I have noticed before with some other ZWO datasets)...
PaulE54 wrote: I do have one further follow-up question. Living in a Bortle 6 zone and using an OSC camera is not an ideal combination. I know that Wipe does a good job of removing LP gradients, and although an LP filter such as an IDAS D2 would do a good job of removing the mix of "old style" street lamp and newer LED light pollution which I have, it also takes big chunks out of the yellow/orange spectrum (and hence star colours), which I can't sort out afterwards.
What I wondered was whether I would see any benefit from using an LP filter, taking, say, 4 hours of data with my OSC camera, converting that data to luminance/greyscale, and then combining it with, say, 4 hours of OSC frames of the same object taken without the LP filter, using some form of LRGB processing to take the greyscale detail from the filtered frames and the colour from the non-filtered frames. Conceptually, I would be trying to do at capture time what Wipe does anyway in software at post-processing time. Is there any advantage to this option, assuming the same total exposure time in both cases? The LP/LRGB option is clearly more work, and needs money spent on an LP filter I don't currently have.
You are a very clever thinker! That's absolutely a valid technique (see here). I'm currently finishing a big upgrade of the processing engine and a rewrite of the LRGB module. It will allow you to process separately composited luminance and chroma information simultaneously, as if it were a standard RGB dataset (i.e. no change to the workflow you know and love), while StarTools takes care of the rest.
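To sketch the essence of such a luminance/chroma composite (a toy numpy illustration only, assuming linear, registered, normalized stacks - not the actual LRGB module):

    import numpy as np

    def lrgb_composite(filtered_lum, osc_rgb, eps=1e-6):
        """Toy LRGB combine: detail from a filtered luminance stack,
        color from an unfiltered OSC stack.

        filtered_lum: (H, W) luminance from the LP-filtered session
        osc_rgb:      (H, W, 3) color from the unfiltered session
        Both assumed linear, registered, and scaled to a common range.
        """
        # Chrominance as per-channel ratios of the OSC stack's own luminance.
        osc_lum = osc_rgb.mean(axis=2, keepdims=True)
        chroma = osc_rgb / (osc_lum + eps)
        # Re-light the color ratios with the cleaner filtered luminance.
        return chroma * filtered_lum[..., None]

The real modules handle stretched data, noise propagation and star color preservation too; the point is just that luminance and chroma can legitimately come from different stacks.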
Ivo Jager
StarTools creator and astronomy enthusiast