Nvidia RTX 2080 Ti vs Nvidia 1080 Ti Benchmarks

CRE4MPIE

Resident Forum Troll
VIP Supporter
The benchmarks everyone's been waiting for !!!


[Attached image: RTX-2080-TI.jpg]

Made you look :p

If you find 'em - post here xD


CP out
 
Yeah but these new gen cards are supposed to break that mould according to leaks. Which, of course, should be taken with a grain of salt.

An RTX 2060 was reported to outperform an overclocked 1080 Ti.

But we'll see...

Cheers :)
 
More people need to buy the new series cards so people like me can get bargain-bin pricing on the 1080 Tis people are inevitably going to ditch :ROFLMAO:
Not so sure you're gonna see 1080 Tis roll in too soon at the current 2080 pricing structure. I was keen to upgrade but quickly changed my plans and rekindled my love for my 1080 Ti when I saw the new pricing. Anyway, I'm looking forward to the benchmarks.
 
Guys, all the current performance comparisons between the 10xx series and the 20xx series are based on ray tracing performance, not real-world gaming benchmarks. If a 2060 really could outrun a 1080 Ti, Nvidia would have thrown those gaming benchmarks out there on launch day.

All the stats so far are thus about ray tracing performance.
 
And here's me thinking that it's just gonna take me a long while to re-program myself that a card starting with R does not belong to the AMD/ATI stable.
The mere idea of putting an RTX in my system currently makes me feel dirty (not that there would be any point on this board and chip anyway).
 
I am actually worried that the RTX 2080 Ti isn't going to perform anywhere near what people think it will.
Maybe the game just gave a bad impression due to bad optimization or something, but from an article I read this morning, in Shadow of the Tomb Raider the 2080 Ti couldn't even hold a stable 60 fps at 1080p. It was running between 30 and 70 fps.

They tried blaming it on bad optimization and the game not being a final product yet, but you kind of want enough raw power in a gfx card at that price that even with bad optimization you can run at decent frames.

I guess we have to wait and see how it performs with games that are actually out before judging it.

Here is a link to the article if anyone was wondering.
Shadow of the Tomb Raider unable to maintain 60fps on a GeForce RTX 2080 Ti
 
Yeah but these new gen cards are supposed to break that mould according to leaks. Which, of course, should be taken with a grain of salt.

An RTX 2060 was reported to outperform an overclocked 1080 Ti.

But we'll see...

Cheers :)

If this turns out to be a fact, I'll vote for Juju, serrrrious. It would mean skipping too many chances to make $$ over the next few years.
 
I am actually worried that the RTX 2080 Ti isn't going to perform anywhere near what people think it will.
Maybe the game just gave a bad impression due to bad optimization or something, but from an article I read this morning, in Shadow of the Tomb Raider the 2080 Ti couldn't even hold a stable 60 fps at 1080p. It was running between 30 and 70 fps.

They tried blaming it on bad optimization and the game not being a final product yet, but you kind of want enough raw power in a gfx card at that price that even with bad optimization you can run at decent frames.

I guess we have to wait and see how it performs with games that are actually out before judging it.

Here is a link to the article if anyone was wondering.
Shadow of the Tomb Raider unable to maintain 60fps on a GeForce RTX 2080 Ti

Man, I wouldn't read anything into that. If it can't run at 1080p then there are some real issues; a 1080 Ti should be able to run it at 4K.


 
So Hairworks 2.0 Ray Tracing takes a performance hit? Let me switch that on so my 25K GPU can be as slow as my old 970.
 
They mentioned something like the tech that went into ray tracing being worth 10x 1080 Tis. So I wonder whether, if you turn ray tracing off, there will be major performance gains...
 
Those claims were a little too ambitious, that's for sure.
However, why are so many YouTubers and websites failing at objective analysis?
There are plenty of moronic concerns and videos based on hardware and software they've not used, and even where they have used it, with dubious testing methodology and understanding. I.e. reviewers are spectacularly weak technically, and that applies to the vast majority of them.

1. Ray tracing isn't a single algorithm (the mathematical model may be the same, but how it ends up being implemented in hardware isn't), nor is it a single implementation within engines or hardware.

2. Ray tracing is a mathematically compute-heavy operation, more so than any other graphics operation, and that has consequences (the rough sketch after point (C) below tries to put a number on it):

(A) We moved from fixed functionality to making as much of the GPU as programmable as possible. Our VGA cards were almost entirely fixed-function because dedicated hardware/silicon was, and always will be, faster than executing or computing the same thing on a generic compute device (think FPGAs and CPLDs vs a general CPU; we also moved Transform and Lighting to dedicated GPU hardware with the GF256/DX7). As the computing power of the general compute chips or logic increases, we take those dedicated fixed/single-function parts and integrate them back into the general logic of the chip. That is to say: if we had had Ryzen 7 or Core i7 computational power in 1999, we'd not have had a GF256 with a T&L engine (silicon), as the host CPUs would have been more than fast enough to handle anything any game engine could hope to do with vertex/primitive operations.

(B) The reason I explain the above is that for ray tracing, we have literally found ourselves needing dedicated hardware once again, simply because the general compute units within these GPUs (CUDA cores in NVIDIA's case) are not powerful enough to do any form of meaningful real-time ray tracing, or rather to handle the specific computational load it would place on them. Hence a huge part of the silicon is now dedicated to this one function, but because of its paradigm-shifting nature (massive importance), it's worth it, in the same way it was worth building discrete, fixed/dedicated-function parts before, when something major was happening.
With real-time ray tracing, this shift is more significant than any other since the advent of 3D acceleration (at a micro level; on a bigger scale, the world has never had dedicated RT hardware at all).

As general compute logic gets faster (whether by architectural overhaul, lithographic advancements or a combination of both), not only will the RT cores move back into the SMs, but so will the Tensor cores.

(C) We've literally seen this happen before. Your T&L hardware ended up being part of the vertex processors (vertex shaders and discrete pixel shaders in DX8~DX9 hardware), then even that fell away and was folded into general compute shaders (the unified shader model, DX10) and subsequently into the hardware (GeForce 10 CUDA cores don't distinguish between vertex and pixel operations as far as the type of computational load is concerned).
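
To give a feel for how heavy point 2 / (B) actually is, here's a deliberately crude, hypothetical Python sketch (my own toy numbers, nothing to do with NVIDIA's actual RT core design): one primary ray per pixel at 1080p against a single sphere, just counting the arithmetic.

```python
# Toy sketch, purely illustrative: trace ONE primary ray per pixel at 1080p
# against ONE sphere and tally the arithmetic. Real scenes have millions of
# triangles, acceleration structures, shadow/reflection bounces and a 60 fps
# (16.7 ms) frame budget -- which is why general CUDA cores alone don't cut it.
WIDTH, HEIGHT = 1920, 1080
SPHERE_CENTER = (0.0, 0.0, -5.0)
SPHERE_RADIUS = 1.0
FLOPS_PER_TEST = 20  # rough operation count for the discriminant test below

def ray_hits_sphere(ox, oy, oz, dx, dy, dz):
    """Ray-sphere test via the discriminant of |o + t*d - c|^2 = r^2."""
    cx, cy, cz = SPHERE_CENTER
    ox, oy, oz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS * SPHERE_RADIUS
    return b * b - 4.0 * a * c >= 0.0

hits = 0
for py in range(HEIGHT):
    for px in range(WIDTH):
        # Pinhole camera: map the pixel to a view direction (no normalisation
        # needed for a pure hit/miss test).
        dx = (px / WIDTH) * 2.0 - 1.0
        dy = 1.0 - (py / HEIGHT) * 2.0
        hits += ray_hits_sphere(0.0, 0.0, 0.0, dx, dy, -1.0)

rays = WIDTH * HEIGHT
print(f"{rays:,} primary rays, {hits:,} hit the sphere")
print(f"~{rays * FLOPS_PER_TEST / 1e6:.0f} MFLOPs for one bounce against one object")
```

That's roughly 40 MFLOPs for the most trivial scene imaginable, before shading, shadows, reflections or any geometry worth rendering; multiply that out and it's obvious why the general shader cores were never going to carry real-time RT on their own.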

In closing!
When something new and important comes along in our pursuit of rendering realistic graphics, we literally give it dedicated hardware. Once the framework around that technology and the logic manufacturing are sufficiently advanced, we put it back into the general "processor" or processing unit. In this case that is the SM units, which will eventually house the RT hardware. The CUDA units within the SMs will not distinguish between these RT operations and any others, outside of how many cycles each instruction or process requires.
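
And to make the "fold it back into the general processor" point concrete, here's another small, purely illustrative Python sketch (my own toy code, not anything out of a real driver or engine): the transform-and-lighting work that once justified the GF256's dedicated T&L silicon, written as ordinary general-purpose code of the kind a modern CPU or unified shader handles trivially.

```python
import math

def mat_vec4(m, v):
    """Multiply a row-major 4x4 matrix (nested lists) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def transform_and_light(vertices, normals, mvp, light_dir):
    """Fixed-function style 'T&L': project each vertex, then a Lambert diffuse term."""
    length = math.sqrt(sum(c * c for c in light_dir))
    light = [c / length for c in light_dir]
    out = []
    for v, n in zip(vertices, normals):
        clip = mat_vec4(mvp, list(v) + [1.0])                         # Transform
        diffuse = max(0.0, sum(nc * lc for nc, lc in zip(n, light)))  # Lighting
        out.append((clip, diffuse))
    return out

# Tiny usage example: one triangle, identity MVP, light along +Z.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
triangle = [(0.0, 1.0, -2.0), (-1.0, -1.0, -2.0), (1.0, -1.0, -2.0)]
tri_normals = [(0.0, 0.0, 1.0)] * 3
for clip_pos, diffuse in transform_and_light(triangle, tri_normals, identity, (0.0, 0.0, 1.0)):
    print(clip_pos, diffuse)
```

In 1999 that per-vertex work earned its own silicon; today it's a rounding error for any general-purpose core, which is exactly the cycle I expect the RT (and Tensor) hardware to follow.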

Why this isn't obvious and known to those who make a living misinforming others is mind-bending.
 
As for regular non-RT performance metrics, even the 2080 beats the Titan V, so these concerns mostly stem from ignorance on the part of the tech media, or rather laziness.
 
