Dark anomalies are - in StarTools - pixels (or clumps of pixels) that are darker than the "real" background. That is, they don't describe real signal, but rather some form of signal obstruction or hindrance (dust, a dead pixel, a tree, stacking artefacts, etc.).
A number of modules in StarTools really, really hate dark anomalies, as they rely on taking measurements of the "real" background for various purposes. Dark anomalies can really throw off these measurements. That's why filters like the Dark Anomaly filter exist to filter these anomalies out when they are small, while masks can be used for bigger areas/clumps.
The question in some modules then becomes: how should a module treat this anomalous data once it has been "worked around"/"ignored"? Should it just clip it to black/0? Maintain the original value? Something in between?
The dark anomaly headroom parameter controls exactly this; it sets the headroom (dynamic range) allocated to dark anomalies. No headroom = clip to black, full headroom = keep as-is.
Usually, dynamic range taken up by dark anomalies is better allocated to real detail, but the choice is yours.
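To make this concrete, here is a minimal NumPy sketch of how such a headroom parameter could be applied. The function name, the pre-computed anomaly mask, and the simple linear scaling are my own assumptions for illustration, not StarTools internals:

```python
import numpy as np

def apply_dark_anomaly_headroom(image, anomaly_mask, headroom):
    """Rescale pixels flagged as dark anomalies into the given headroom.

    image        : 2D float array, values in [0, 1]
    anomaly_mask : boolean array, True where a pixel is a dark anomaly
    headroom     : 0.0 clips anomalies to black, 1.0 keeps them as-is
    """
    out = image.copy()
    # Linear compromise between clipping to 0 and keeping the original value
    # (an assumed mapping; other curves are equally possible).
    out[anomaly_mask] = image[anomaly_mask] * headroom
    return out

# Example: anomalies scaled halfway toward black.
img = np.array([[0.02, 0.30], [0.01, 0.50]])
mask = img < 0.05  # naive anomaly detection, purely for illustration
print(apply_dark_anomaly_headroom(img, mask, 0.5))
```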
"Minimum distance to 1/2 unity" can be thought of as a decision-making filter. "1/2 unity" is a fancy term for "grey".
It works like this: given two images and an x,y location, which pixel at that location is closest to grey? Is it the pixel in image1 or the pixel in image2?
If pixel1 is closer to grey, we choose pixel1. If pixel2 is closer to grey, we choose pixel2.
The above is useful when trying to make a high dynamic range composite of two processed images. If you keep picking the pixels closest to grey, you avoid pixels that are either blown out ("full unity") or very dark (close to 0). That is, pixels closest to grey tend to show the most human-detectable detail.
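In code, the hard version of this decision filter might look like the following NumPy sketch (the function name and the assumption that pixel values live in [0, 1] are mine):

```python
import numpy as np

def min_distance_to_half_unity(image1, image2):
    """Per-pixel composite: pick whichever pixel is closest to grey (0.5).

    Both inputs are 2D float arrays in [0, 1]. Returns the hard
    (non-fuzzy) composite described above.
    """
    # Distance of each pixel from 1/2 unity ("grey").
    d1 = np.abs(image1 - 0.5)
    d2 = np.abs(image2 - 0.5)
    # Choose from image1 where it is closer to grey, else from image2.
    return np.where(d1 <= d2, image1, image2)

# Example: a blown-out pixel (0.98) loses to a mid-tone pixel (0.55).
a = np.array([[0.98, 0.10]])
b = np.array([[0.55, 0.05]])
print(min_distance_to_half_unity(a, b))  # -> [[0.55, 0.10]]
```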
Now, the only problem is that you end up with ugly borders between areas where the composite switches from one image to the other. This can be counteracted by making the decision less black-and-white and more fuzzy (e.g. take a blend of image1 and image2 based on how many other pixels in the neighbourhood are also "in favour" of image1 or image2). This way you get a smoother, more gradual blend between the two images, as in the sketch below.
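Here is one possible sketch of that fuzzy variant; the box filter and the neighbourhood size are stand-ins for whatever smoothing is actually used internally:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_half_unity_composite(image1, image2, neighbourhood=15):
    """Soft version of the composite: blend by neighbourhood "votes".

    For every pixel, count how many pixels in the surrounding
    neighbourhood favour image1 (i.e. image1 is closer to grey there),
    turn that count into a weight in [0, 1], and blend accordingly.
    This replaces the hard per-pixel switch with a gradual transition.
    """
    favours_1 = (np.abs(image1 - 0.5) <= np.abs(image2 - 0.5)).astype(float)
    # Fraction of neighbours voting for image1; a simple box filter
    # is an assumed choice of smoothing kernel.
    weight = uniform_filter(favours_1, size=neighbourhood)
    return weight * image1 + (1.0 - weight) * image2

# Example: two synthetic gradients; the transition is now gradual
# instead of a hard border.
a = np.linspace(0.0, 1.0, 64).reshape(8, 8)
b = 1.0 - a
blended = fuzzy_half_unity_composite(a, b, neighbourhood=3)
```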
Hope this helps!
Ivo