Ok, I went through your Antares video.
To sum things up, you're doing very well. Your data, by the looks of it, is great, and you're doing very well without a mount. I understand you're still getting to grips with the tools and what they do. However, even without, perhaps, an in-depth understanding of what they do, you're driving them well.
First off, one thing that should define the context within which you use the tools, is that you are processing a wide field. You already mention this.
When processing wide fields there are 3 main things to take into account:
- Fast undulating gradients
- Undersampling (vs oversampling for narrow fields)
- Busy star fields
You're experiencing all 3 in your image.
Going through your video, I can make the following comments:
AutoDev. The first AutoDev pass is there to let you quickly inspect the image for potential problems. Take your time here.
Here you look for stacking artifacts (you already do this), try to gauge how good/bad light pollution/gradients are (important for widefields), try to gauge if your data is oversampled/undersampled (this has consequences for binning and decon) and more.
You decide to bin (this indeed helps with noise). You rightly crop the image (to get rid of stacking artifacts).
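As an aside, software binning is simple to reason about: averaging each 2x2 block of pixels trades resolution for a 2x reduction in random noise. A minimal numpy sketch (my own illustration, not StarTools' actual implementation):

```python
import numpy as np

def bin2x2(img):
    """Software-bin an image 2x2 by averaging each 2x2 block.
    Averaging 4 pixels cuts random noise by a factor of 2 (sqrt(4)),
    at the cost of halving resolution in each dimension."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]  # drop any odd row/column
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Demo: binning a pure-noise frame roughly halves its standard deviation.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (100, 100))
binned = bin2x2(noise)
print(noise.std(), binned.std())  # roughly 1.0 vs roughly 0.5
```

The same averaging is also what makes previously well-sampled data undersampled, which matters later for deconvolution.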
The reason you might do that second AutoDev is simply that getting rid of the stacking artifacts means AutoDev no longer needs to show them - it can now allocate dynamic range to showing other "problems". So far, we haven't been using AutoDev for our final/"real" stretch at all. We've just been (ab)using its dynamic-range-allocation skills to bring out any glaring problems (i.e. any problems that can be made visible by stretching the image in a particular way). Problems you can easily pick up this way include stacking artifacts, gradients/vignetting, dust specks/donuts, etc.
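To make the "dynamic-range allocation" idea concrete: AutoDev's actual algorithm isn't public, but a crude analogue is histogram equalisation, which hands out output range in proportion to how many pixels occupy each intensity level. A toy numpy sketch (all names here are mine, not StarTools'):

```python
import numpy as np

def autostretch(img, bins=1024):
    """Crude histogram-equalisation stretch: allocate output dynamic
    range in proportion to how many pixels occupy each intensity bin.
    Whatever dominates the histogram (artifacts, gradients, skyglow)
    grabs the most range and becomes glaringly visible."""
    hist, edges = np.histogram(img, bins=bins, range=(img.min(), img.max()))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]  # normalise to [0, 1]
    return np.interp(img, edges[:-1], cdf)

# Dim, linear synthetic data gets pulled up into the visible range.
linear = np.linspace(0.0, 1.0, 256).reshape(16, 16) ** 4
stretched = autostretch(linear)
```

Because whatever dominates the histogram grabs the most output range, defects become obvious - which is exactly why this diagnostic first stretch is useful.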
You go on to Wipe, to get rid of the light pollution, skyglow, gradients and any other biases.
You have a tricky problem here; because your data set is a wide field, gradients can be very noticeable and can undulate rapidly. Specifically, if there is any light pollution near the horizon, it can show up as a fast-undulating gradient near one of the corners. You can see an example of this in the bottom-right corner, where a gradient rapidly falls off. This is a non-trivial situation to model and subtract.
There are varying solutions, each with their own pros and cons.
The easiest is to simply recognise that it is a real light source that has invaded your faithful recording, and to crop it out.
The second easiest solution is to bump up the aggressiveness parameter in Wipe, or bump up the aggressiveness near the corners (the Vignetting preset does that for example).
More elaborate solutions involve masking (which some AP'ers consider cheating or "painting") to combine different Wiped images together.
Much depends on what your preferred way of conducting AP is and who you are trying to impress.
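For the curious, the kind of smooth background model a tool like Wipe has to fit and subtract can be sketched, in its very simplest form, as a low-order 2D polynomial fit. This toy version (my own illustration - Wipe is far more robust, since it must also ignore stars and nebulosity) shows the principle:

```python
import numpy as np

def fit_gradient(img, order=2):
    """Fit a low-order 2D polynomial background model by least squares
    and subtract it from the image. Returns (flattened, model)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x / w
    y = y / h
    # All polynomial terms x^i * y^j with total degree <= order.
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    model = (A @ coef).reshape(h, w)
    return img - model, model

# Synthetic frame: flat sky plus a gradient brightening toward bottom-right.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
gradient = 0.3 * (xx / w) * (yy / h)
frame = 0.1 + gradient
flat, model = fit_gradient(frame)  # residual is near-flat once modelled out
```

The fast, undulating corner gradients described above are exactly the case where such a smooth global model struggles, which is why bumping aggressiveness near the corners (or cropping) is sometimes needed.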
You go on to use the RoI feature of AutoDev effectively, making the valid consideration that you wish to highlight that particular nebulosity as the centerpiece of the image. Rather than as a problem-visualisation tool, you are now using AutoDev to perform the final stretch. You can do that because you are satisfied the problems have been dealt with; you can be confident AutoDev will only stretch signal.
On to decon now.
It is important to note that the binning will have made the data undersampled; multiple units of detail have now been crammed into a single pixel. This is the opposite of being oversampled, where a single unit of detail is "smeared out" over multiple pixels.
The purpose of deconvolution is to reverse this "smearing out" of single units of detail. The cause of the smearing can be anything from atmospheric conditions to light-scattering particularities of the optical train. But the fact remains: if there is no longer any smearing out of detail in your data (either because it's a wide field, or because you binned the data), then deconvolution can't do much for you.
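For reference, the classic textbook algorithm for this "un-smearing" is Richardson-Lucy deconvolution (I don't know StarTools' exact implementation; this is a generic sketch using scipy):

```python
import numpy as np
from scipy.ndimage import convolve, correlate

def gaussian_psf(size=7, sigma=1.0):
    """A normalised Gaussian point spread function (the 'smear')."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(blurred, psf, iters=30):
    """Minimal Richardson-Lucy deconvolution: iteratively estimate the
    scene that, once smeared by the PSF, best explains the data."""
    est = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        ratio = blurred / np.maximum(convolve(est, psf, mode='wrap'), 1e-12)
        est = est * correlate(ratio, psf, mode='wrap')
    return est

# A star smeared by seeing: deconvolution re-concentrates the flux.
scene = np.zeros((32, 32))
scene[16, 16] = 1.0
psf = gaussian_psf()
blurred = convolve(scene, psf, mode='wrap')
sharp = richardson_lucy(blurred, psf)
```

The flip side: once detail has been crammed into single pixels by binning, there is no PSF smearing left to model, and iterating a scheme like this mostly just amplifies noise.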
In fact, when dealing with wide fields, it can make another problem worse: busy star fields (see below). The latter may also be exacerbated by using the Sharp module to enhance small detail. If using Sharp on a wide field, try reducing the lower scales - there is plenty of small detail already (it's a wide field!), but larger-scale detail is less visible because the busy star field is distracting. You can use the Sharp module (and the Life module - see below) to enhance only the large-scale structures, by boosting only the larger scales.
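The "enhance only the larger scales" idea can be illustrated with a difference-of-Gaussians scale decomposition. This is a toy stand-in of my own, not what Sharp actually does internally:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_large_scales(img, sigmas=(1, 2, 4, 8, 16),
                         gains=(1.0, 1.0, 1.0, 1.6, 1.6)):
    """Split an image into difference-of-Gaussian scale layers, then
    re-assemble with per-scale gains. Gains > 1 on the larger sigmas
    boost big structures while leaving small detail (stars) alone.
    With all gains at 1.0 the image is reconstructed exactly."""
    layers = []
    prev = img
    for s in sigmas:
        blurred = gaussian_filter(img, s)
        layers.append(prev - blurred)  # detail between adjacent scales
        prev = blurred
    out = prev.copy()  # residual: everything larger than the biggest sigma
    for layer, gain in zip(layers, gains):
        out += gain * layer
    return out

img = np.random.default_rng(1).normal(size=(16, 16))
identity = enhance_large_scales(img, gains=(1.0, 1.0, 1.0, 1.0, 1.0))
boosted = enhance_large_scales(img)
```

Boosting only the top entries of `gains` is the sketch equivalent of "reducing the lower scales" in Sharp on a wide field.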
You use the Contrast module, where you increase the Dark Anomaly Filter. You, rightly so, don't see much difference doing that. The Dark Anomaly Filter here is very similar to the one in Wipe - it keeps Contrast (and Wipe) from getting confused by small darker-than-real-background specks, dead pixels or noise. Halos may otherwise form (where Contrast/Wipe back off as they try to protect those aberrant pixels from clipping). Your data is of good quality, so no tweaking was needed.
As a side note, I have to remark here that, to any untrained observer, you certainly seem to know what you're doing. You are already effectively expressing your vision for your data using the various tools. You do so in a way that is non-destructive to your data and yields a valid interpretation of your data. Understanding the tools better will, of course, help you home in on your personal taste much more.
The Flux module is better left alone, especially in wide fields. The reason is that Flux uses self-similarity to detect and interpret detail; gas flows/knots/shock fronts at a larger scale tend to behave like gas flows/knots/shock fronts at a smaller scale. The Flux module uses that similarity to detect and enhance detail across scales. The trouble is that, in wide fields, stars take over at the smaller scales and are too numerous. Stars are simply point lights and don't behave/look consistent across different scales.
For wide fields like these, I can heartily recommend the Isolate preset (with full mask set!) in the Life module. It will push back heavy and distracting star fields and re-focus the viewer's attention on larger scale structures.
Finally, you go into the Color module. You mention the colors don't look like other renditions found on Reddit, which can be a good thing, depending on your preferences (you can emulate the "Reddit" look by changing the "Style" setting away from Color Constancy).
The reason why StarTools' color renditions look different (by default) is that StarTools records, recovers and maintains color ratios as they were recorded. Other software - regrettably - stretches color along with the luminance detail, resulting in wildly distorted hues and saturation values. Things in outer space don't magically change color just because you decided on a particular exposure length or stretch curve. M42's core is not white, it is green. M31's core is not white, it's yellowish.
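The "maintain color ratios" idea is easy to sketch: stretch a luminance estimate and re-apply the recorded per-pixel R:G:B ratios, instead of stretching each channel independently. A toy numpy illustration (my own, not StarTools' actual pipeline):

```python
import numpy as np

def stretch_preserving_color(rgb, stretch=lambda x: x ** 0.25):
    """Stretch brightness while keeping the R:G:B ratios as recorded.
    Naively stretching each channel instead compresses those ratios
    toward 1:1:1, washing hues out toward white."""
    lum = rgb.mean(axis=-1, keepdims=True)
    ratios = rgb / np.maximum(lum, 1e-12)  # per-pixel colour ratios
    return np.clip(stretch(lum) * ratios, 0.0, 1.0)

# A faint pixel with a strong green cast (think M42's core in linear data):
pixel = np.array([[[0.01, 0.04, 0.01]]])
naive = pixel ** 0.25                    # per-channel stretch: hue diluted
faithful = stretch_preserving_color(pixel)  # G:R ratio of 4 kept intact
```

In the naive version the green-to-red ratio collapses from 4:1 to under 1.5:1 - the "wildly distorted hues" effect described above.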
You will notice that in your rendition, the sky is peppered with all manner of star temperatures, just as it is in reality (in fact, StarTools relies on this to achieve a reasonable default color balance).
Though this is slowly changing, unfortunately, many AP'ers still depict star colors as uniformly yellow or white.
After noise reduction, you can indeed modify the star field, though sometimes this is not even needed if the Isolate preset has been used.