
Monday, January 20, 2014

NVIDIA Scrypt GPU Mining Performance with CUDAminer

Everyone "knows" that AMD GPUs are best for mining the various cryptocurrencies, and the conventional wisdom is that NVIDIA GPUs aren't worth the cost or the trouble. While this may be true from a pure performance perspective, if you already own such a GPU, the December update to CUDAminer actually delivered a pretty substantial boost in performance. To be clear, we're still only talking about roughly half the performance of AMD's similarly priced hardware, but there are other reasons to go with NVIDIA GPUs -- gaming and general computational programming are both strong areas for NVIDIA.

And it's not just about games; if you want a PC that can still be used for other non-GPU-intensive tasks while mining, CUDAminer tends to be far less taxing in my experience (comparable to running high-end AMD GPUs at an intensity of 13). Noise levels also tend to be much better (quieter), and stability is good as well. So if you have an NVIDIA GPU, what settings should you use, and what sort of performance can you expect as a result?

Let's start with the settings. I'm going to focus mostly on Kepler-based cards, though Fermi cards may also work reasonably well. The key flag for getting optimal scrypt hashing performance is -l (--launch-config), which lets you specify the kernel variant for your GPU architecture along with the launch configuration. CUDAminer will try to autotune and find the "best" configuration on its own, but you can usually get better results and faster startup times by passing -l explicitly. There are six kernel options: L (Legacy), F (Fermi), K (Kepler), T (Titan/GK110 Kepler), S (a variant of the Kepler kernel), and X (experimental Titan). I didn't even look into the L/F/S kernels, as it's generally agreed by now that the GPUs requiring those configurations won't run all that well.
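
To make that concrete, here's a minimal sketch of a typical command line; the pool URL, worker name, and password are placeholders you'd swap for your own pool's details, and the -l value assumes a GTX 780:

cudaminer -l T12x32 -i 1 -o stratum+tcp://your.pool.example:3333 -u workername -p workerpass

The -i 1 flag enables interactive mode, which gives up a bit of hash rate in exchange for a responsive desktop while mining.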

As for the remaining options, T/X require Compute 3.5 cards based on GK110, which means the GeForce GTX 780/780 Ti or GeForce GTX Titan. We can safely skip the Titan -- its performance falls roughly between the 780 and the 780 Ti, but it currently costs well over $1000 -- which leaves the $500 (give or take) GTX 780 and the $700 GTX 780 Ti. If you have a non-GK110 chip you can still get reasonable performance, and since I have a GTX 770 I ran some tests on that as well. The basic rule is this: every SMX unit in Kepler (including Titan/GK110) has 192 cores, so take your card's CUDA core count and divide it by 192; the result is the number of SMX units, which is the first number in your launch configuration (there's a quick worked example after the list below). Here's what you can expect (without playing with overclocking):

GTX 770: ~330 KHash/s @ 200W (-l K8x32)
GTX 780: ~510 KHash/s @ 250W (-l T12x32)
GTX Titan: ~570 KHash/s @ 250W (-l T14x32)
GTX 780 Ti: ~580 KHash/s @ 260W (-l T15x32)
GT 750M: ~75 KHash/s @ 35W (-l K2x32) (Note: this is a notebook GPU)
GTX 760M: ~110 KHash/s @ 45W (-l K4x32) (Note: this is a notebook GPU)
GTX 780M: ~240 KHash/s @ 100W (-l K8x32) (Note: this is a notebook GPU)
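
To show where those launch configurations come from, here's the SMX math worked out for the desktop cards, using the reference CUDA core counts (double-check your own card's specs, since a vendor variant may differ):

GTX 770: 1536 cores / 192 = 8 SMX, hence -l K8x32
GTX 780: 2304 cores / 192 = 12 SMX, hence -l T12x32
GTX Titan: 2688 cores / 192 = 14 SMX, hence -l T14x32
GTX 780 Ti: 2880 cores / 192 = 15 SMX, hence -l T15x32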

Again, there's obviously no reason right now to go out and buy a ton of NVIDIA GPUs to run CUDAminer, but if you already have the GPUs around -- or if you're more interested in gaming and just want to run CUDAminer when the PC isn't otherwise being used -- the GTX 780 should still generate a return of roughly $90 per month (after power costs). The GTX 770 actually has some rebates going on right now that can bring the price down to just $320, so basically we're looking at half the performance of AMD's equivalently priced GPUs. That may sound bad, but prior to the December update to CUDAminer, it was more like one fourth the performance.
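
For a rough sense of the power side of that math: assuming electricity at $0.10 per kWh (your rate will vary), a GTX 780 drawing 250W around the clock uses 0.25 kW x 24 h x 30 days = 180 kWh per month, or about $18 in electricity -- the sort of cost being netted out of that $90 figure.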
