I'm admittedly very confused now.
This is your ou4_O3.v0 file, correct?
You then say you remove "banding" from this, and end up with this?
http://ram.org/images/space/downloads/ou4_O3.v0.1.fit
I don't see any banding in any of your stacked datasets you have shared so far?
I'm sure you've done vastly more research trying to fix this particular issue, since it has been plaguing you for a while. However, I'm not 100% convinced banding is something you would find in your final dataset, looking this way. If banding were present, it would have either been calibrated out by your darks/bias frames (in the case of a fixed pattern) or been stacked out (in the case of a random pattern per frame, or of the same fixed pattern in all frames under the influence of dithering). It should not lead to the localized uneven lighting observed in your datasets.
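To make that concrete, here is a toy NumPy sketch (entirely synthetic numbers, not your data or your pipeline) of why a fixed banding pattern should already disappear after dark subtraction:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Hypothetical fixed-pattern banding: the same row-wise offset in every frame.
banding = np.tile(rng.normal(0, 5, size=(H, 1)), (1, W))

# Simulated light frames = flat sky signal + banding + shot-like noise.
sky = 100.0
lights = [sky + banding + rng.normal(0, 2, size=(H, W)) for _ in range(20)]

# Simulated dark frames contain the same fixed pattern.
darks = [banding + rng.normal(0, 2, size=(H, W)) for _ in range(20)]
master_dark = np.mean(darks, axis=0)

# Dark subtraction removes the fixed pattern from each calibrated light.
calibrated = np.mean([f - master_dark for f in lights], axis=0)

# Row-to-row variation is large before calibration, tiny after it.
print(np.std(lights[0].mean(axis=1)))   # banding clearly visible
print(np.std(calibrated.mean(axis=1)))  # banding gone
```

A per-frame random pattern (or a fixed pattern plus dithering) averages out in the stack the same way, which is why residual banding in a final stack usually points at a calibration problem rather than the sensor itself.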
Are we really talking about the same thing here? (i.e. banding = perfectly horizontal or vertical stripes/bars in your individual frames?)
I don't own PixInsight; however, using any of these tools will never 100% repair bad calibration or lost signal. Looking at your source dataset and then at your DBE result - I'm going to be 100% honest here - I do not (cannot) trust that the faint background detail in your images is real after DBE. DBE (or ST's Wipe equivalent) is not a suitable way to remove signal that undulates this fast and that was - I am 99% sure - never of celestial origin.

You can also try it out: just run CBE with the defaults on the v0 image after it is rotated 180° and you'll see a remarkable change when you redo the STF. I'd say 80-90% of the banding goes away after CBE, and the remaining 20% is removed by DBE.
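For what it's worth, gradient-modelling tools like DBE/CBE (or Wipe) essentially fit a smooth background model to the image and subtract it. Here is a toy sketch of that idea - a first-order polynomial (plane) fit, my own illustration, not PixInsight's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W].astype(float)

# Toy image: flat sky + a smooth linear gradient (light pollution) + noise.
gradient = 0.5 * xx + 0.3 * yy
img = 100.0 + gradient + rng.normal(0, 1, (H, W))

# Fit a plane to the whole image via least squares, subtract it,
# and re-add the median so the overall pedestal is preserved.
A = np.column_stack([np.ones(H * W), xx.ravel(), yy.ravel()])
coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
background = (A @ coef).reshape(H, W)
flattened = img - background + np.median(background)

# The gradient dominates the raw image and is tiny after flattening.
print(np.ptp(img.mean(axis=0)), np.ptp(flattened.mean(axis=0)))
```

The catch is exactly what was said above: a smooth low-order model can only remove signal that varies slowly; anything undulating faster than the model either survives, or the model starts eating real faint detail.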
On the contrary; constantly working around issues is vastly more time-consuming than solving a fundamental problem once. It is always better to address a cause than to repeatedly address symptoms. This problem is currently keeping you from achieving better results. You have hit a plateau; more data is not going to make your image better, as it is IMHO not calibrated well enough - the aberrant signal overwhelms any faint signal you might be adding.

In the end, I'm not sure where this leaves us. Time is precious and it's easier to just work around issues.
However, I have to deal with the gear I have and the problems it generates, and I really would have liked to see a difficult use case. From looking at other people's data, I've observed that generating clean data is difficult (and this is probably my most difficult dataset ever).
Achieving a "clean" (as in suitable for ST) dataset is not particularly hard. What is meant by "clean" in the context of StarTools, is simply well-calibrated and containing only Poissonian (shot) noise. The signal doesn't have to be noise free, just calibrated to be free of any other non-celestial influences (uneven lighting, dust, dead pixels, hot pixels, pattern noise), to the best of your abilities.
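In other words, it boils down to the standard calibration equation: subtract the additive defects, divide out the multiplicative ones. A minimal sketch with made-up values (real stacks also handle bias/flat-darks and outlier rejection):

```python
import numpy as np

# Made-up 2x2 master frames, purely for illustration.
light       = np.array([[110.0,  95.0], [102.0, 110.0]])  # raw light frame
master_dark = np.full((2, 2), 10.0)                        # additive: thermal + bias
master_flat = np.array([[1.0, 0.85], [0.92, 1.0]])         # multiplicative: vignetting, dust

# calibrated = (light - dark) / flat: what remains should be
# sky signal plus shot noise only.
calibrated = (light - master_dark) / master_flat
print(calibrated)  # every pixel recovers the same true sky value of 100
```

Anything still left over after this (uneven lighting, dust shadows, pattern noise) is exactly the "non-celestial influence" that trips up post-processing.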
You can use virtually the same workflow/defaults on this short-exposure, imperfect DSLR dataset that was also not stacked according to ST best practices (it's from this old video), and you should be able to achieve something like this with the same basic workflow using mostly defaults (not processed to taste, just mostly defaults where possible - 1.7 now even cleans up the walking noise in this dataset, caused by not dithering).

I did, however, take a look at the thread you pointed to on SGL, and the images done using ST are amazing - it is good to know that can be accomplished. But the conditions for that set are exceptional: they spent 100 hours on data collection and used only 40 hours of it, throwing out 60 hours' worth! That's not a realistic proposition, right? We have to be able to do this without that effort - what would have happened had all 100 hours been used? But I do have other datasets of varying difficulty and cleanliness, and I can compare and contrast.
I can't quite remember exactly, but I believe this was 40 minutes of exposure time, acquired under light polluted skies (a CLS filter was used) and shot with an old Canon 450D. Some sort of calibration was performed, but there are still dust donuts in the upper left corner and some defective sensor columns. No dithering was performed either.
StarTools certainly does not require deep data for you to avail of its benefits. It just requires "honest" data (i.e. free of signal-introducing defects) to the best of one's abilities.
I only saved the JPEG, but I'll redo it and upload the 16-bit TIFF here (ST is a post-processing application only - it does not export data, only images!). I will let you know when it's up.

But as a favour, since you did put in the work, can you please send me the final version you described as "that's as far as I'm willing to push it" as a 32-bit FITS? I can then examine it properly. Thanks a lot!
Clear skies!