First, my latest HD 7950 cards are from Sapphire, this time the dual-fan model with a stock clock of 925MHz on the core. (As far as I can tell, that's the stock clock and not the Boost Clock.) They're available for $300 each, and they come with the Never Settle Reloaded bundle of Bioshock Infinite, Crysis 3, and Far Cry 3: Blood Dragon -- you can typically get close to $50 for the bundle via eBay. As I already have three other Radeon 7950 cards, you'd think I'd have a decent handle on settings by now, but I'm still tweaking and learning.
My main PC has a single 7950 in it, and I game occasionally on this system. I also use it to surf the web, email, write, etc. On a dedicated mining PC you can go with Intensity 20, but on a PC you're actually trying to use for other things, anything above Intensity 13 makes the system basically unusable. So, I have two folders with two different CGminer configuration files: one for when I'm not using the PC, and the other for when I'm working. The Intensity 13 version gets about 10-15% less hashing performance, but there's a catch: I'm using practically all of the GPU's 3GB of RAM! With a single thread I get around 330kHash, but with two threads I'm at 510kHash. Here are the two cgminer.conf files I'm running on my "daily use" PC:
cgminer.conf Usable:
{
"pools" : [
{
"url" : "coinotron.com:3334",
"user" : "[USER].[WORKER]",
"pass" : "[PASS]"
},
{
"url" : "newlc.ozco.in:9332",
"user" : "[USER].[WORKER]",
"pass" : "[PASS]"
}
],
"intensity" : "13",
"vectors" : "1",
"worksize" : "256",
"lookup-gap" : "2",
"thread-concurrency" : "17920",
"shaders" : "1792",
"gpu-engine" : "900-1025",
"gpu-fan" : "40-100",
"gpu-memclock" : "1550",
"gpu-powertune" : "20",
"gpu-vddc" : "1.100",
"temp-cutoff" : "99",
"temp-overheat" : "95",
"temp-target" : "80",
"api-port" : "4028",
"expiry" : "120",
"failover-only" : true,
"gpu-threads" : "2",
"log" : "5",
"queue" : "1",
"scan-time" : "60",
"temp-hysteresis" : "3",
"scrypt" : true,
"kernel" : "scrypt",
"kernel-path" : "/usr/local/bin"
}
cgminer.conf High Hashing:
{
"pools" : [
{
"url" : "coinotron.com:3334",
"user" : "[USER].[WORKER]",
"pass" : "[PASS]"
},
{
"url" : "newlc.ozco.in:9332",
"user" : "[USER].[WORKER]",
"pass" : "[PASS]"
}
],
"intensity" : "20",
"vectors" : "1",
"worksize" : "256",
"lookup-gap" : "2",
"thread-concurrency" : "21712",
"shaders" : "1792",
"gpu-engine" : "900-1025",
"gpu-fan" : "40-100",
"gpu-memclock" : "1550",
"gpu-powertune" : "20",
"gpu-vddc" : "1.100",
"temp-cutoff" : "99",
"temp-overheat" : "95",
"temp-target" : "80",
"api-port" : "4028",
"expiry" : "120",
"failover-only" : true,
"gpu-threads" : "1",
"log" : "5",
"queue" : "1",
"scan-time" : "60",
"temp-hysteresis" : "3",
"scrypt" : true,
"kernel" : "scrypt",
"kernel-path" : "/usr/local/bin"
}
That might seem like a pretty simple configuration to some, but let me run through the specifics. First, the "Usable" configuration has thread-concurrency at 17920. I arrived at that number by trying different values until cgminer would start reliably; I think I'm hitting the limit of the card's RAM, though maybe I could increase it slightly. (If the commonly quoted figure of roughly 64KB of scrypt scratchpad per unit of thread-concurrency at lookup-gap 2 is accurate, 17920 across two GPU threads works out to around 2.2GB, which would explain why the 3GB card is nearly full.) Either way, it doesn't seem to run much faster, so I leave it alone. The second configuration file is basically what you often see recommended for the Radeon 7950, but the 650+ kHash results people talk about with it are only going to happen with some serious overclocking and tuning!

There's more to getting CGminer going than the above, however. First, nearly all of my 7950 cards have default voltages that are far too high to run reliably without water cooling, at least in my ~75F environment. The dual-fan Sapphire cards, for instance, all come set to 1.250V, and under mining load -- even at lower intensities -- I can hit 95C and higher on some of the cards. What's more, three of the cards at 1.250V running slightly overclocked settings pull 1000W and more from the outlet, even with an 80 Plus Platinum PSU. Wow! The trick for me has been undervolting while overclocking, which isn't what you'd immediately expect.
MSI Afterburner is a great little utility for overclocking and undervolting, but it doesn't work with all cards -- the dual-fan Sapphire cards for instance don't allow voltage adjustments with it. Sapphire has their own TRIXX utility that works with their cards, however, so that's what I used. Here are my standard settings for reasonably reliable operation:
Vcore: 1.100V
GPU Clock: 1025MHz
RAM Clock: 1550MHz (6200MHz effective)
PowerTune: +10 (or +20 -- this doesn't seem to matter much)
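For reference, those are essentially the same values that show up as gpu-vddc, gpu-engine, gpu-memclock, and gpu-powertune in the cgminer.conf files above, and cgminer can in principle apply them itself via its command-line switches. Here's a minimal sketch, assuming your card's voltage controller actually honors software voltage changes (not all of them do):
cgminer.exe --gpu-engine 900-1025 --gpu-memclock 1550 --gpu-vddc 1.100 --gpu-powertune 20 --auto-fan --failover-only
Whether --gpu-vddc actually sticks depends on the card; if it doesn't, setting the voltage in TRIXX first and letting cgminer handle just the clocks is the safer route.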
I've managed to use those settings successfully on six different Radeon HD 7950 cards now, so I'm pretty confident they'll work for most users (at least if you're in a decent climate -- if the room is above 85F, you'll probably have issues). Okay, that gives us the overclock/undervolt settings as well as the CGminer settings, so we're done, right? Not quite!

The final step is launching CGminer properly. For that, I created a batch file (a sequence of commands for Windows to run). Since I have CGminer starting up with Windows, I had to play around a bit. MSI Afterburner (or Sapphire TRIXX) is set to load automatically and restore my clocks, but depending on the PC this can take 10-30 seconds to complete (more on a slower CPU/HDD). If CGminer launches before the clocks and voltage are set properly, there's a very good chance your PC will crash and require a hard reboot! The other lines are there to ensure the thread-concurrency options work properly, and to tell CGminer to automatically tune fan speed. Here's the batch file (paste this into "LaunchCGminer.bat"):
@echo off
set GPU_MAX_ALLOC_PERCENT=100
setx GPU_MAX_ALLOC_PERCENT 100
@ping 1.1.1.1 -n 1 -w 30000>nul
cgminer.exe --auto-fan --failover-only
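The ping line, by the way, is just a crude 30-second delay so Afterburner/TRIXX has time to apply the clocks and voltage before CGminer hits the GPU. If you're on Vista or later, the built-in timeout command does the same job and reads a bit more clearly -- you could swap the ping line for this (same 30-second wait):
timeout /t 30 /nobreak >nul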
Create a shortcut to that batch file and place it in C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup
and you should be set. But there are a few final items to mention! I have two different CGminer (currently 3.1.0) folders, one for high intensity and one for low intensity. There appears to be a bug in AMD's drivers or CGminer or something, with the result that if I run CGminer, quit, run it again, quit, etc., then by the third or fourth cycle I usually get a system crash. Ugh. So rebooting between runs is often required. Also, you'll want to close out of most other programs (especially your web browser!) before starting CGminer, because if it can't get enough VRAM it won't run properly. Oddly, after it's started and working well, you can start Firefox, Chrome, etc. without issue. Also also, if you run Photoshop, I suggest just exiting CGminer until you're done -- it's not worth the potential hassle, trust me!

I think that's it for now. Happy Litecoin mining! My latest build is named "Frankenmine" because it's sort of a disassembled mess right now while I wait for a few more pieces to arrive (specifically the PCIe slot extenders). It's definitely not safe for kids to be around at this point! If there's interest, I can provide the cgminer.conf files for this as well, and I have some other stories to relate involving the joys of Linux. But for now, this is what I'm using to get ~1800kHash/sec:
As always, if you find this stuff useful, please feel free to kick a few digital pennies my way!
BTC: 1JSrAuxPUhD2rS6yYLiPPT6X8fvz7c7k1W
LTC: LXpEZcNJtikd263z7Ha3vrdYDcLU7hiKWv
Or if donating directly is too much, go play at the LTC4You faucet and typically win 0.04 LTC every hour. That's a free $1.20 for clicking a button and typing a captcha. :-)
Update: Need help? See my Troubleshooting Guide.
I'm not able to run at appropriate speeds, like I do with BTC. My card stays at 20Kh/s when it should be at 200Kh/s (I get 190 Mh/s in my 6770 for BTC - https://github.com/litecoin-project/litecoin/wiki/Mining-hardware-comparison and https://en.bitcoin.it/wiki/Mining_hardware_comparison ).
This is the .bat file that I'm using to run cgminer:
cgminer --scrypt -o tcp+stratum://pooledbits.com:8338 -u me.1 -p x
Am I doing something wrong, or is there something I should be doing differently for Litecoin?
Thanks for the tutorial, it will help a lot of people
Hi Tom, I don't have a lot of experience with the 6770, but I'd suggest trying something like this, in a batch file:
setx GPU_USE_SYNC_OBJECTS 1
setx GPU_MAX_ALLOC_PERCENT 100
cgminer --scrypt --worksize 256 --lookup-gap 2 --thread-concurrency 8192 -g 1 --intensity 18 (with user name and password, obviously)
My guess is either your intensity is too low, or the worksize, thread-concurrency, or some other setting is too low.
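To spell that out, the pool details get tacked onto the end the same way they are in your current .bat file -- reusing your pooledbits.com line as the example, the full command would be:
cgminer --scrypt --worksize 256 --lookup-gap 2 --thread-concurrency 8192 -g 1 --intensity 18 -o tcp+stratum://pooledbits.com:8338 -u me.1 -p x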
Hi, thanks for the info.
I have 3 Sapphire Radeon Vapor-X HD 7950 OC with Boost 3 GB.
I made the .bat and .conf files and everything just as you suggested.
Unfortunately I always get the same problem:
[2013-08-16 15:23:02] Started cgminer 3.3.4
[2013-08-16 15:23:03] Need to specify at least one pool server.
Input server details.
URL:
Any help would be appreciated,
Regards
So you have a file called "cgminer.conf" in the CGminer folder? It sounds like either the file is named incorrectly or it's not in the right folder, as it's asking you for your server details. Also, you have joined a mining pool, right?
Google: "If the file doesn't already exist in the cgminer folder, create it and add settings as needed."
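Another way to rule out a naming or location problem is to point cgminer at the file explicitly with its --config switch; the path here is just an example, so use wherever your CGminer folder actually lives:
cgminer.exe --config C:\cgminer\cgminer.conf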
Hello, what is your mainboard model? Thanks, all.
I'm using this one (in one of my systems) and it works well:
http://tinyurl.com/3pcie-mobo
That's an LGA1155 motherboard, so if you want a new Haswell build you'd need something different. AMD's FM2/FM2+ platform is also a good choice for cost reasons.
I'm curious as to how you get your hash rates. Using the exact same video card (same core/memory clocks) and settings CGMiner reports a total hash rate of 290 Kh/s. Do I need to double that number or am I missing something?
What OS are you using? I tried to get things running under Linux at one point, and hash rates were about half of what I got with Windows. After a week of tinkering I gave up on Linux and switched the machine to Windows.
Windows 8 64-bit. I thought that perhaps system RAM would be the bottleneck, but I have 16GB.
2x Sapphire 7950 Vapor-X. No cgminer.conf, but the settings below are giving me ~1.15Mh/s (an average of 580Kh/s on each card @ 74C, running an extra cooling fan -- a normal table fan directed at the open tower cabinet).
Settings:
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
cgminer --scrypt -d 0,1 --gpu-engine 1100,1100 --gpu-memclock 1475,1475 --shaders 1792,1792 --gpu-vddc 1.175,1.175 --gpu-powertune 20 --worksize 256 -I 13,13 --gpu-threads 2 --thread-concurrency 8192 -o -O
I have not used TRIXX! I'm running the default driver that came with the card (12.10x) and APP SDK 2.8.
Windows 7 64-bit, Gigabyte motherboard, CoolerMaster 1200W power supply.
Hope it helps someone! :)
Oops, commenting system screwed up my above settings:
Settings:
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
cgminer --scrypt -d 0,1 --gpu-engine 1100,1100 --gpu-memclock 1475,1475 --shaders 1792,1792 --gpu-vddc 1.175,1.175 --gpu-powertune 20 --worksize 256 -I 13,13 --gpu-threads 2 --thread-concurrency 8192 -o POOLSERVERADDRESS -O USERNAME:PASSWORD
This was the only cgminer script I could get working for my 3x rig. Thanks for posting this. Author's script doesn't work for me at all :(
Much appreciated, helped me tweak my settings.
I have a few XFX 7950s and I'm getting 620KHash with these settings:
{
"pools" : [
{
"url" : "miner.coinedup.com:3351",
"user" : "miner",
"pass" : "somethingsomething"
}
],
"intensity" : "20",
"vectors" : "1",
"worksize" : "256",
"lookup-gap" : "2",
"thread-concurrency" : "20000",
"shader" : "1792",
"gpu-engine" : "900-1050",
"gpu-fan" : "40-100",
"gpu-memclock" : "1450",
"gpu-powertune" : "10",
"temp-cutoff" : "99",
"temp-overheat" : "87",
"temp-target" : "80",
"api-port" : "4028",
"expiry" : "120",
"failover-only" : true,
"gpu-threads" : "1",
"log" : "5",
"queue" : "1",
"scan-time" : "60",
"temp-hysteresis" : "3",
"scrypt" : true,
"kernel" : "scrypt",
"kernel-path" : "/usr/local/bin"
}
Also, don't run "setx GPU_USE_SYNC_OBJECTS 1" -- it just slowed mine down by ~100KHash.
I can't manage to get the config you posted going, because my card is crazy.
I have a Sapphire 7950 Boost and it does not want to go over 500 (these are the settings I use); this is the only config I've found that sticks for the moment, but only at 500 :(
--thread-concurrency 8192 -g2 --intensity 13 --worksize 256
Also, I have no TRIXX or MSI tuning on it; I tried some settings, but almost every time I get a BSOD.
GPU CLOCK : 925
GPU MEMORY : 1250
I do not know how I can fine-tune it a bit to get at least 600, not to mention that I have seen many 7950 configs which get around 700.
With this setup and I20 I get HW errors and it is not stable; it is only stable with I13, and I cannot get more than a stable 587Kh.
I have an AMD quad-core A8 and 8GB of RAM,
I can't go above I13, and it will not accept a thread concurrency higher than 8192 ... Does anyone know what could be the problem?
You've run "setx GPU_MAX_ALLOC_PERCENT 100"? Skipping that is usually the only thing that prevents thread concurrency above 8192. And are you using a 7950, 7970, or something else?
I have already solved it: there is a special kernel for cgminer and the 7950, and with that kernel and the configuration from this post I got a stable 629kh.
Kernel file:
http://www.crark.net/download/scrypt130511.zip
Thank you.
Maybe I can add my stone to the wall, because I think I have the same kind of graphics card (Sapphire HD 7950):
I had to change two things in your conf files.
First:
"thread-concurrency" : "8000"
Any higher and it doesn't work well -- less hashrate, and cgminer complains about setx GPU_MAX_ALLOC_PERCENT 100 and setx GPU_USE_SYNC_OBJECTS 1
even though they are set in the batch file.
Second:
"temp-overheat" : "75",
"temp-target" : "65",
because if I set them higher, my system freezes while the GPU's fans are spinning up to full speed.
With that config, I get 530 kh/s with the "Usable" file and 600 kh/s with the "High Hashing" file.
PS: I use the kernel file from Miguel Angel Soler's post.
OK, the changes I made are not good; I got lots of rejects and HW errors. I'll investigate further, but right now I'm busy testing different auto-switching pools.
Unfortunately that's pretty much how it goes. If you and I have the same GPU and run the same settings, unless all the other elements (motherboard, CPU, RAM) are the same we most likely won't get the same performance. I'm starting to wonder if the R9 290X hardware runs best on FM2 motherboards -- I know some people are hitting nearly 1000 KHash, but I haven't ever had a 290X do more than about 850 KH on an Intel platform. Weird.
Now my tweaks seem to work fine. The trick was to use the Kalroth cgminer fork ( http://k-dev.net/cgminer/ ) and the "Lantis' optimized scrypt binaries" on the same page. With that I can push the xintensity without problems, but I had a lot of stale shares. I found a post somewhere with some guy using "shaders" : "2048", and indeed, it works quite well.
Now I've managed to get two different configurations which work fine:
640 kh/s, with a stale share every now and then, and the computer is a little bit laggy:
"xintensity" : "176",
"vectors" : "1",
"worksize" : "256",
"lookup-gap" : "2",
"gpu-threads" : "1",
"thread-concurrency" : "21712",
"shaders" : "2048",
"gpu-engine" : "1050-1050",
"gpu-fan" : "40-100",
"gpu-memclock" : "1500",
"gpu-memdiff" : "0",
"gpu-powertune" : "20",
"gpu-vddc" : "0.000",
"temp-cutoff" : "99",
"temp-overheat" : "75",
"temp-target" : "65",
"api-mcast-port" : "4028",
"api-port" : "4028",
"auto-fan" : true,
"expiry" : "120",
"failover-switch-delay" : "60",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"log" : "5",
"log-dateformat" : "0",
"no-pool-disable" : true,
"queue" : "1",
"scan-time" : "60",
"scrypt" : true,
"temp-hysteresis" : "3",
"worktime" : true,
"shares" : "0",
"kernel-path" : "/usr/local/bin"
And another one which is around 615kh/s, but the computer is perfectly usable:
"xintensity" : "4",
"vectors" : "1",
"worksize" : "256",
"lookup-gap" : "2",
"gpu-threads" : "2",
"thread-concurrency" : "8192",
"shaders" : "1792",
"gpu-engine" : "900-1020",
"gpu-fan" : "40-100",
"gpu-memclock" : "1530",
"gpu-memdiff" : "0",
"gpu-powertune" : "20",
"gpu-vddc" : "1.100",
"temp-cutoff" : "99",
"temp-overheat" : "75",
"temp-target" : "65",
"api-mcast-port" : "4028",
"api-port" : "4028",
"auto-fan" : true,
"expiry" : "1",
"failover-only" : true,
"failover-switch-delay" : "60",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"log" : "5",
"log-dateformat" : "0",
"no-pool-disable" : true,
"queue" : "1",
"scan-time" : "1",
"scrypt" : true,
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
I'm sure both can be tweaked further, so don't hesitate to test things.