Question regarding NBAccent and Noise
Posted: Fri Oct 21, 2022 6:06 am
Hi Ivo (and all of course),
there was one sentence in your post
viewtopic.php?p=13385#p13385
that caught my special attention:
“… and the signal has to be pushed quite a bit. The latter can make noise start to poke through ...”
Together with this post (explaining how the NBAccent module works)
viewtopic.php?p=12382#p12382
and particularly this point
“the new value for the pixel (per channel), is the value of the pixel that is largest (original or accented)”
I’ve started thinking about what this means in case of noisy data sets.
If I understand correctly, this means that for every single pixel:
- we have two signals (original and accented) with different levels
- both signal levels are superposed with different, random noise (shot noise or whatever)
- therefore the outcome (which signal “wins”) depends on the concrete, random noise level each of the two signals happens to carry at that particular pixel.
This means that if, for example, there is an area in the original data whose signal level is not much above that of the accented data, and both signals are noisy, then there will be a number of pixels in that area where the accented data “wins” purely because of the random noise. So the accented data shines or pokes through – as you wrote.
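To convince myself, I tried a little simulation of the “largest value wins” rule as you described it. This is a sketch of my own, not StarTools code; the signal levels and the noise sigma are made-up numbers, and I model the noise as simple Gaussian noise per pixel:

```python
# Sketch (my own illustration, not StarTools code): per-pixel "largest wins"
# rule applied to two noisy signals whose true levels are close together.
import random

random.seed(42)

N = 100_000          # number of simulated pixels
orig_level = 0.50    # original signal, slightly above...
accent_level = 0.45  # ...the accented signal
sigma = 0.05         # per-pixel Gaussian noise (stand-in for shot noise etc.)

accent_wins = 0
for _ in range(N):
    orig = orig_level + random.gauss(0, sigma)
    accent = accent_level + random.gauss(0, sigma)
    if accent > orig:        # accented pixel "pokes through" by chance
        accent_wins += 1

print(f"accented data wins in {100 * accent_wins / N:.1f}% of pixels")
```

With these made-up numbers the accented data wins in roughly a quarter of the pixels, even though its true level is below the original everywhere – which is exactly the “poking through” effect, if I understand it right.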
Do I understand this correctly? (I hope what I wrote is halfway understandable; it is not so easy to describe.)
Having said all this, here comes my question: Would it make sense to first apply noise reduction to both data sets (separately), and only then do the NBAccent processing as the ST module does it? Lower noise levels in both data sets should reduce this ‘poking through’ effect, shouldn’t they?
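Extending the same sketch supports my intuition: if I model noise reduction as simply shrinking the per-pixel noise sigma (again my own assumption, with made-up numbers, not how any real denoiser works), the fraction of falsely winning accented pixels drops quickly:

```python
# Sketch (assumption: noise reduction modeled as shrinking sigma): repeat the
# "largest wins" simulation with progressively less noise per pixel.
import random

random.seed(1)

N = 100_000
orig_level, accent_level = 0.50, 0.45

fractions = []
for sigma in (0.05, 0.02, 0.01):   # progressively stronger noise reduction
    wins = sum(
        1
        for _ in range(N)
        if accent_level + random.gauss(0, sigma)
           > orig_level + random.gauss(0, sigma)
    )
    frac = wins / N
    fractions.append(frac)
    print(f"sigma={sigma}: accented wins in {100 * frac:.2f}% of pixels")
```

In this toy model the effect all but disappears once the noise is small compared to the 0.05 gap between the two signal levels – so denoising first should indeed help, if the model is anywhere near right.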
Thanks & best regards, Dietmar.