Star Tools GPU processing - GPU type influence?
Posted: Tue Mar 30, 2021 9:20 pm
by UlfG
Hi, I have a question relating to Star Tools GPU processing.
I did some testing of the time gained by using StarTools with GPU support versus without it. I measured the time needed to do the default deconvolution processing on an otherwise unprocessed image. My result was that processing with GPU support (StarTools-Windows64-GPU.exe) took about one fifth of the time needed without GPU support (StarTools-Windows64.exe).
I ran this same test on a laptop with a rather simple GPU (Intel HD 5500) and on a desktop with a more advanced GPU (Nvidia GeForce 1030). The time gain factor with GPU support was almost exactly the same on both computers, that is, a factor of 5.
So from this test it would seem to me that the influence of the GPU type on the time gain is minimal. It appears that the important part is to use a GPU of some type, but that it might not matter very much which type of GPU?
So my question now is: if I were to buy a new computer to use with StarTools, should I spend extra money on a gaming-type computer with an advanced dedicated GPU? Or would that extra money be mostly wasted, and might I be just as well off with a simple integrated GPU (Intel HD/UHD) of the kind normally found in office-type computers? And would there perhaps be a better performance gain from spending the extra money on a better CPU rather than on a better GPU?
Regards
Ulf
Re: Star Tools GPU processing - GPU type influence?
Posted: Wed Mar 31, 2021 6:44 am
by admin
Hi Ulf,
Purely from a value-for-money perspective, now is a really bad time to invest in a GPU.
Cryptomining has taken off again, and the recent human malware has led to parts and silicon shortages. These two things have conspired to make GPUs extremely expensive right now - if you can even find one. This is unfortunately true for new and used GPUs alike.
If you have a solution that works sufficiently well right now, please do use that for the time being. It's probably worth waiting at least 6 months to see if the market normalizes again.
A GT 1030 is considered a fairly low-powered card in terms of compute power (but definitely faster than an Intel HD 5500 iGPU). A more powerful card does tend to provide increased speed. I would have expected the GT 1030 to perform somewhat better than the iGPU.
That said, some other factors also impact the speed gain and any benchmarking you do; for instance, memory speeds (both GPU and system), bus speeds, and the size of the dataset (larger datasets benefit proportionally more from GPU acceleration). Finally, the algorithm being executed may also vary in how fast it runs on a specific GPU and in which parts of the GPU it stresses.
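If you want to compare the raw compute-related specifications of the devices in a system, a minimal sketch along these lines (assuming Python with the pyopencl package installed - this is just a generic OpenCL query, not anything StarTools-specific) will print them:

import pyopencl as cl  # assumption: pip install pyopencl

# Print the compute-related specs of every OpenCL device found on the system.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(device.name.strip(),
              "|", device.max_compute_units, "compute units",
              "|", device.max_clock_frequency, "MHz",
              "|", device.global_mem_size // (1024 ** 2), "MB global memory")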
I hope this helps!
Re: Star Tools GPU processing - GPU type influence?
Posted: Wed Mar 31, 2021 6:48 am
by admin
I forgot to mention that Nvidia "helpfully" released two versions of the GT 1030: one with GDDR5 memory and - later - one with DDR4 memory. The DDR4 variant is much slower; almost half the speed of a "proper" GT 1030. That would definitely put it in the ballpark of an Intel HD 5500.
https://www.techspot.com/review/1658-ge ... omination/
Re: Star Tools GPU processing - GPU type influence?
Posted: Wed Mar 31, 2021 3:17 pm
by UlfG
Thanks for your answer. The GeForce 1030 I have is specified to have GDDR5 SDRAM memory. I am not really considering buying a separate GPU; rather, I am choosing between different types of laptops: either a "standard" one with only an integrated GPU, or a "gaming" one with a dedicated, but preinstalled, GPU. I don't think the bitcoin-mining GPU situation you mention has affected the laptop GPU market that much.
But from a more fundamental point of view, I would like to ask this:
Here
https://www.startools.org/modules/gpu-acceleration
it is stated that the GPU usage in StarTools occurs in very short, but very intense, bursts. This is also how it appears when I look at it with the GPU-Z monitoring software: there are short (sub-second) bursts of GPU usage that occur at intervals that are quite long (many seconds) relative to the burst duration.
From that point of view, it would appear that a better GPU would only shorten the already very short bursts, but would not change what happens in the much longer intervals between them, so its influence on the total processing time would be quite minimal. Is this a correct conclusion, or are there other factors at play here that I am not aware of?
I do use another image-processing application that also uses GPU acceleration, the Topaz Labs Denoise application, but that software is quite different in that it uses the GPU at a constantly high rate during the whole processing time. In that case, I see a substantial difference in processing time when going from the HD 5500 to the GeForce 1030, a factor of 3 or so. But in the case of StarTools, I don't really see that much of a difference, and I have the feeling that this is because of the very different GPU usage pattern over time...?
Regards
Ulf
Re: Star Tools GPU processing - GPU type influence?
Posted: Thu Apr 01, 2021 7:44 am
by admin
UlfG wrote: ↑Wed Mar 31, 2021 3:17 pm
Thanks for your answer. The GeForce 1030 I have is specified to have GDDR5 SDRAM memory.
Interesting... What is the CPU it is paired with, if I may ask? I would definitely have expected a noticeable speedup versus the HD 5500 (though not a massive one).
I am not really considering buying a separate GPU; rather, I am choosing between different types of laptops: either a "standard" one with only an integrated GPU, or a "gaming" one with a dedicated, but preinstalled, GPU. I don't think the bitcoin-mining GPU situation you mention has affected the laptop GPU market that much.
Understood. In that case I would highly recommend a "gaming" or "media/productivity" unit.
It should be said, however, that laptop GPUs are a lot less powerful than their desktop counterparts, even when they carry identical model numbers. That said, a middle-of-the-road RTX 3060 Mobile should kick some serious a**.
But from a more fundamental point of view, I would like to ask this:
Here
https://www.startools.org/modules/gpu-acceleration
it is stated that the GPU usage in StarTools occurs in very short, but very intense, bursts. This is also how it appears when I look at it with the GPU-Z monitoring software: there are short (sub-second) bursts of GPU usage that occur at intervals that are quite long (many seconds) relative to the burst duration.
From that point of view, it would appear that a better GPU would only shorten the already very short bursts, but would not change what happens in the much longer intervals between them, so its influence on the total processing time would be quite minimal. Is this a correct conclusion, or are there other factors at play here that I am not aware of?
The other factors at play are system and GPU memory speeds, bus speeds, the size of the dataset, and the specific task at hand. Some algorithms even need more GPU power depending on the SNR of your dataset (if signal is low, more computations are performed).
I do use another image-processing application that also uses GPU acceleration, the Topaz Labs Denoise application, but that software is quite different in that it uses the GPU at a constantly high rate during the whole processing time. In that case, I see a substantial difference in processing time when going from the HD 5500 to the GeForce 1030, a factor of 3 or so. But in the case of StarTools, I don't really see that much of a difference, and I have the feeling that this is because of the very different GPU usage pattern over time...?
Due to the complexity of the algorithms and the differing strengths of CPUs (good at complex conditional branching, sorting, and single-threaded, interdependent, sequential tasks) versus GPUs (good at many simultaneous, independent mathematical operations), most GPU optimizations in StarTools pass data back and forth between the system and the GPU; hence dataset size and bus speed are very important factors. The faster the system can pass datasets back and forth, and the bigger and noisier those datasets are, the longer the GPU bursts will be, and the more your dataset will benefit from GPU acceleration.
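To make that concrete, here is a minimal sketch (assuming Python with pyopencl and numpy installed; this is not taken from StarTools' actual code) that times the host-to-GPU upload, a trivial stand-in kernel, and the download separately. On bursty workloads the transfer legs are often comparable to, or larger than, the compute leg, which is why bus and memory speeds matter so much:

import time
import numpy as np
import pyopencl as cl   # assumptions: pip install pyopencl numpy

ctx = cl.create_some_context()            # pick a platform/device
queue = cl.CommandQueue(ctx)

# A trivial element-wise kernel standing in for one GPU "burst".
program = cl.Program(ctx, """
__kernel void scale(__global float *buf, const float factor) {
    int i = get_global_id(0);
    buf[i] *= factor;
}
""").build()

# Roughly the size of one 4466x2940 32-bit mono frame (~50 MB).
data = np.random.rand(4466 * 2940).astype(np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=data.nbytes)

t0 = time.perf_counter()
cl.enqueue_copy(queue, buf, data)         # host -> GPU upload
queue.finish()
t1 = time.perf_counter()
program.scale(queue, data.shape, None, buf, np.float32(2.0))   # the GPU "burst"
queue.finish()
t2 = time.perf_counter()
cl.enqueue_copy(queue, data, buf)         # GPU -> host download
queue.finish()
t3 = time.perf_counter()

print(f"upload {t1 - t0:.4f} s | kernel {t2 - t1:.4f} s | download {t3 - t2:.4f} s")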
Does that help?
Re: Star Tools GPU processing - GPU type influence?
Posted: Tue Apr 06, 2021 9:46 pm
by UlfG
Hi, thanks for your answer. I now have a different machine, a gaming-type laptop that has both Intel UHD 630 graphics and Nvidia GTX 1650 graphics.
However, StarTools states that it is using the Intel UHD 630 graphics, so it would seem that the Nvidia graphics is of no use to StarTools...?
Regards
Ulf
Re: Star Tools GPU processing - GPU type influence?
Posted: Wed Apr 07, 2021 12:00 am
by admin
UlfG wrote: ↑Tue Apr 06, 2021 9:46 pm
Hi, thanks for your answer. I now have a different machine, a gaming-type laptop that has both Intel UHD 630 graphics and Nvidia GTX 1650 graphics.
However, StarTools states that it is using the Intel UHD 630 graphics, so it would seem that the Nvidia graphics is of no use to StarTools...?
Regards
Ulf
Congratulations on the new machine Ulf!
It is possible to force a switch of GPU platform or device through some undocumented functionality:
You can override the default vendor selection by creating a file named 'openclplatformindex.cfg' (please make sure it has exactly that name, including the extension) and putting the number '0' or the number '1' in it. This lets you select between different OpenCL platforms/vendors/drivers in your system.
There is similar functionality via a file named 'opencldeviceindex.cfg'. Same thing - put a '0' or '1' in there. This lets you select between different devices from the same vendor, running the same driver.
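If you are unsure which index corresponds to which platform or device, a quick sketch along these lines (assuming Python with the pyopencl package installed; the standalone 'clinfo' utility shows the same information) lists them in the order OpenCL reports them, which should normally match the indices used here:

import pyopencl as cl  # assumption: pip install pyopencl

# List platforms and devices with their indices; the platform index would go in
# 'openclplatformindex.cfg' and the device index in 'opencldeviceindex.cfg'
# (assuming the enumeration order matches).
for p_idx, platform in enumerate(cl.get_platforms()):
    print(f"platform {p_idx}: {platform.name.strip()}")
    for d_idx, device in enumerate(platform.get_devices()):
        print(f"  device {d_idx}: {device.name.strip()}")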
Do let me know how you get on!
Re: Star Tools GPU processing - GPU type influence?
Posted: Thu Apr 08, 2021 6:35 pm
by UlfG
Hi, thanks for the tip; this worked, and I could choose which GPU was used this way. This also gave me the opportunity to test the processing time while changing no factor other than the GPU. So I did a simple test: I ran deconvolution on a 4466x2940 32-bit FITS image.
I did three runs with each GPU, and the average processing times were:
Intel: 66 s
Nvidia: 54 s
CPU only: about 200 s
So it still seems to hold that the difference between CPU-only and "some" GPU is much bigger than the difference between different GPUs.
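Expressed as speedup factors (a quick back-of-the-envelope sketch in Python, using the averages above):

# Speedup relative to the CPU-only run, using the averages measured above.
times = {"Intel UHD 630": 66.0, "Nvidia GTX 1650": 54.0, "CPU only": 200.0}
for name, t in times.items():
    print(f"{name}: {times['CPU only'] / t:.1f}x vs CPU only")
# -> roughly 3.0x (Intel) and 3.7x (Nvidia) vs CPU only,
#    but only about 1.2x between the two GPUs.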
Regards
Ulf
Re: Star Tools GPU processing - GPU type influence?
Posted: Fri Apr 09, 2021 1:17 am
by admin
So it still seems to hold that the difference between CPU-only and "some" GPU is much bigger than the difference between different GPUs.
Again, this depends greatly on the previously mentioned factors, but also on how you do your benchmarks (# iterations, noisiness, whether you are measuring an update or a 1st run, what the baseline CPU usage is in both cases, what sort of PSF you use, etc.).
If your settings are mostly system-heavy (CPU, RAM, bus), then you will mostly be measuring your system's speed and not the GPU's. If your settings are mostly GPU-heavy, then you will mostly be measuring GPU speed.
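As a back-of-the-envelope illustration (a sketch with made-up numbers, not measurements): only the GPU-bound part of a run shrinks when the GPU gets faster, so a mostly system-bound run barely changes:

# Amdahl's-law style model; the split between system- and GPU-bound time is hypothetical.
def total_time(system_seconds, gpu_seconds, gpu_speedup):
    return system_seconds + gpu_seconds / gpu_speedup

# e.g. a 66 s run of which 46 s is system-bound and 20 s GPU-bound:
print(total_time(46, 20, 1))   # 66.0 s with the baseline GPU
print(total_time(46, 20, 2))   # 56.0 s with a GPU twice as fast
print(total_time(46, 20, 10))  # 48.0 s even with a GPU ten times as fast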
Re: Star Tools GPU processing - GPU type influence?
Posted: Fri Apr 09, 2021 4:41 pm
by UlfG
Hi, I did the testing under conditions as similar as I could possibly make them. I proceeded as follows:
* Start Star Tools
* Open the image
* Go to Deconvolution, select "Mask as is" (full mask), start the stopwatch
* Stop the stopwatch when done.
Repeat from the start at least three times, calculate the average, then change the GPU setting in the .cfg file.
Repeat again from the start with the new GPU.
This way the processing conditions were as similar as possible each time. The deconvolution settings were of course the defaults each time, since I did not touch them. No other software was started except the GPU-Z monitor, which was running the whole time.
I find it hard to see how the conditions could be made more equal than this.
(The "CPU only" test, of course, was done with the "non-GPU" executable, and this could account for some differences, but since the gap relative to the "with GPU" tests was quite big, those differences are presumably small in comparison.)
/Ulf