If you thought overclocking the 1000 series was tough, well, NVIDIA has gone and made it even more difficult. I'm not saying they did it for any particular reason; I'm just saying that, as it stands and with the tools we have, it's a lot tougher than it was before. This affects AIC partners as well: EVGA's K-Boost won't be available anymore, because the Turing GPUs simply don't allow that sort of clock/P-state control, it seems.
1. You can only OC the boost clock, not the base clock.
2. The boost clock is determined by the GPU and driver based on a host of input data regarding the state of the GPU (power, heat, load and a ton of other stuff), and the boost clock adjusts accordingly.
3. There's currently no way to turn off the power monitoring; even with extreme overclocking you generally have to stay within the allowed power cap, or at least fool the GPU into passing off the power draw as 'normal'. As you know, the colder the GPU, the less power it consumes, so when dealing with XOC you're effectively lowering the temps while balancing the GPU clock so that, together, they do not trigger this power cap.
4. Adding more voltage to the GPU (if you can even find the right tools) won't help a single bit, and even when you 'can', it simply won't apply. You'll be able to tell that the software or utility has requested a particular VID, but based on the actual 3D load reading, nothing happens.
5. The best thing you can do for 2000 series overclocking is to cool the GPU as others have said. A full coverage waterblock should be well worth it and considering what these cards go for, I'd seriously recommend one. Not for the heat per se, but to keep the temps low enough so that you get max boost clocks as much as possible. I've seen 3D clock go from 2010MHz to 1930MHz, all by itself, while the GPU lowers the voltage and does all sorts of things to 'optimize' operation.
6. In light of the above, the clock you set in the utility isn't always what you'll get. If you set +50MHz, it could set a 3D load clock of 1995MHz, or 2010MHz, or even 2025MHz. It will depend on the temperature and power draw at the time.
7. So far I don't think there's a utility that allows a fixed 3D clock; there's just a range of possible clocks which, depending on the operating conditions, may stay there, increase or decrease. The lower the temps, the more stable this clock frequency will be.
8. GDDR6 is great for obvious reasons, but I can't help but think that because Micron is the only provider for now, most memory on these GPUs isn't great at all. I say this because, on record thus far, the highest mem clock is 8,700MHz I think, or 17.4GHz effective (2080 Ti on LN2). Many GPUs can't even do 16GHz (8,000MHz), even though the reference/mem IC rating is 7,000MHz.
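To make points 2, 6 and 7 above concrete, here's a toy model of how the boost logic behaves from the outside. This is NOT NVIDIA's actual algorithm; the bin size (GPU Boost adjusts clocks in 15MHz steps, which is why you see 1995/2010/2025), base boost value, temperature penalty and power coefficient are all illustrative assumptions:

```python
# Toy model of GPU Boost behaviour: same offset, different effective
# clock depending on temperature and power draw. Illustrative only --
# the thresholds and coefficients below are made-up numbers.

BIN_MHZ = 15          # GPU Boost moves the clock in 15 MHz bins
BASE_BOOST = 1980     # hypothetical stock boost clock for this card

def effective_clock(offset_mhz, temp_c, power_w, power_cap_w=225):
    """Return the clock the card would actually run under these conditions."""
    target = BASE_BOOST + offset_mhz
    # Warmer silicon sheds boost bins: here, one bin per ~8 C above 45 C
    # (purely illustrative thresholds).
    temp_penalty_bins = max(0, (temp_c - 45) // 8)
    target -= temp_penalty_bins * BIN_MHZ
    # The power cap is a hard ceiling: keep dropping bins until we fit,
    # assuming (again, illustrative) ~0.1 W of extra draw per extra MHz.
    while power_w + (target - BASE_BOOST) * 0.1 > power_cap_w and target > BASE_BOOST - 300:
        target -= BIN_MHZ
    # Snap to the nearest bin, since the driver only exposes discrete steps.
    return round(target / BIN_MHZ) * BIN_MHZ

# Same +50 MHz offset, different operating conditions:
print(effective_clock(50, temp_c=40, power_w=200))  # cool card: higher bin
print(effective_clock(50, temp_c=70, power_w=200))  # hot card: lower bin
```

The point of the sketch is the shape of the behaviour, not the numbers: the offset only moves the starting point, and temperature plus power draw decide which bin you actually land in.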
Micron has always produced slower GDDR than Samsung with lower overclocking headroom. Exactly as it is with regular DRAM, you want Samsung ICs not Micron.
In short, get a water block, it's worth it now more than any other time before.
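If you want to watch this throttling happen on your own card, the driver's telemetry can be polled with `nvidia-smi --query-gpu=clocks.gr,temperature.gpu,power.draw --format=csv,noheader` (these are standard nvidia-smi query fields). A minimal parser for that output might look like this; the helper names are my own:

```python
import subprocess

FIELDS = "clocks.gr,temperature.gpu,power.draw"

def parse_sample(line):
    """Parse one CSV line from nvidia-smi, e.g. '1935 MHz, 64, 210.50 W'."""
    clock_s, temp_s, power_s = (f.strip() for f in line.split(","))
    return {
        "clock_mhz": int(clock_s.split()[0]),
        "temp_c": int(temp_s),
        "power_w": float(power_s.split()[0]),
    }

def poll_gpu():
    """Run nvidia-smi once and parse the first GPU's reading (needs the NVIDIA driver)."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_sample(out.splitlines()[0])

# Example of the line format nvidia-smi emits:
print(parse_sample("1935 MHz, 64, 210.50 W"))
```

Log that once a second while a 3D load runs and you'll see the clock step down in 15MHz bins as temperature climbs, exactly as described above.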
With all that said, here are some results with an air-cooled RTX 2080:
3DMark TimeSpy - 12,288
3DMark FireStrike Ultra - 7,342
3DMark TimeSpy is a great example of how much more competent the Turing GPUs are at computational tasks than Pascal and previous-generation GPUs. 3DMark FireStrike, on the other hand, is more of a legacy benchmark at this point, because it leans a lot less on compute tasks, being a DX11 benchmark with limited general-compute opportunities in how it renders an image.
Put another way, one can do more complicated tasks more efficiently (calculus), while the other does simpler tasks faster (simple arithmetic). Simply put, every game we have at present favours the latter, but going forward, the former is where all development will, and should, be. How we get to movie-like CGI is with complex, more exotic maths, not ever more arithmetic operations on ever-increasing numbers.