670 GTX Review 0.o

JollyJamma

Despite its 12.5 per cent reduction in stream processors and lower base clock, the GTX 670 2GB proved to be only 5-10 per cent slower than its much more expensive sibling, the GTX 680 2GB. This is no doubt due to the card's more aggressive use of GPU Boost, as we saw our review sample reach core frequencies of between 1,045MHz and 1,084MHz during testing.

In ARMA 2 this translated to a minimum frame rate of 56fps at 1,920 x 1,080 with 4x AA and 32fps at 2,560 x 1,600 with 4x AA, both results just 5-6 per cent behind those of the GTX 680 2GB, although also the same as, or slower than, the Radeon HD 7950 3GB in the same tests.

However, ARMA 2 is the only game where the 7950 3GB comes close to matching the performance of the GTX 670 2GB. In Battlefield 3 its minimum frame rate at 1,920 x 1,080 with 4x AA is an excellent 60fps, 6 per cent slower than a stock GTX 680 2GB and convincingly faster than even the HD 7970 3GB, which manages a minimum of 52fps in the same test.
Conclusion
25fps faster than a 560 Ti in BF3 at 1,920 x 1,080 with max settings.
 
As I said, performance/price domination over the GTX 680. The paper specs spoke for themselves.
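For what it's worth, here's a quick perf-per-dollar sanity check. The 90-95% figure is the review's ballpark, and the prices are just the widely reported US launch MSRPs, so treat the whole thing as a rough sketch rather than anything exact:

launch_price_670 = 399.0   # USD, approximate launch MSRP (assumption)
launch_price_680 = 499.0   # USD, approximate launch MSRP (assumption)

for relative_perf in (0.90, 0.95):                 # GTX 670 at 90-95% of a GTX 680
    value_670 = relative_perf / launch_price_670   # performance per dollar, 670
    value_680 = 1.0 / launch_price_680             # performance per dollar, 680
    print(f"670 at {relative_perf:.0%} of a 680 -> "
          f"{value_670 / value_680 - 1:+.0%} perf per dollar")

# Comes out to roughly +13% to +19% more performance per dollar for the 670.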
 
Going to give this card a spin in a few minutes
 
It will be a matter of stocks and supplies, something the European and North American markets are having massive problems with. It's pointless having the best paper release and nothing to show for it.
 
Ya, for some reason TSMC is struggling to give Nvidia good yields on GTX 6xx. Nobody else is suffering from poor yields on 28nm, so something is up with their design, which is a great pity. They really have an excellent range out.
 

I will have a look to see if I can find that article again, but Nvidia called for Intel to help them out with the fabs.
 
If Nvidia and Intel could partner on GPUs, imagine those guys moving to 22nm processes with all of Intel's support! It would also help Intel's GPU IP for their mobile processors! It would be amazing! It's a pity GlobalFoundries and TSMC are unable to match Samsung and Intel in their fabs and stuff.
 

Wow guys, slow down. There's nothing wrong with the design of the GPU. NVIDIA buys more wafers at TSMC than AMD AND they want more. Yields have nothing to do with the design; rather clock speeds, TDP targets etc., balanced with the ASIC density. NVIDIA has used other partners before while still using TSMC as well. NV36 was largely made at IBM and some at UMC as well.
7870/7850s are still scarce as well, it's not a big deal, production will ramp up soon enough. :) It's not yields but capacity :D
 

Actually it is about yields (design problem), along with poor wafer starts (fab problem).
How come AMD and Qualcomm are not moaning about poor yields on 28nm from TSMC but Nvidia is? Think about it. Two other companies are doing fine, while only one is moaning...
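Either way, the two factors multiply into the same bottom line, which is why empty shelves on their own don't tell you whether it's yields or wafer starts. A rough back-of-envelope (the die size is the commonly cited GK104 figure; the wafer starts and yield numbers are purely made up for illustration):

import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard rough estimate of candidate dies on a round wafer
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

gross = gross_dies_per_wafer(294)   # ~294 mm^2, roughly GK104-sized (assumption)

# Two very different root causes, identical monthly output:
scenarios = {
    "poor yields, plenty of wafer starts": {"wafers_per_month": 10_000, "yield": 0.35},
    "good yields, few wafer starts":       {"wafers_per_month": 5_000,  "yield": 0.70},
}
for name, s in scenarios.items():
    good_dies = s["wafers_per_month"] * gross * s["yield"]
    print(f"{name}: ~{good_dies:,.0f} good dies/month")

Same number of good chips out the door either way, so the argument really comes down to which of the two factors NVIDIA and TSMC are actually fighting over.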
 
It compounds the matter that Nvidia have overheads and haven't sold; AMD have already captured the market share, and the release of the GHz variants of the 7970/7950/7870 will close the gap and give AMD even more market share while Nvidia sit in limbo. Then there is the HD 8000 series still to be released. The waiting game cost Nvidia before; I thought they would have learned from it. There is no use having the now-fastest card when you cannot retail it.
 
Hahaha, you guys serious?
@Sarinaide, please compare quarterly results from NVIDIA against those from AMD. Nothing has cost NVIDIA anything for a while. NVIDIA has significantly higher margins per GPU than AMD, so they don't need to sell as much. It's exactly what allows them to spend so much on R&D and traditionally build massive cores. They are the richer company; AMD used to have more assets, but they sold the fabs. NVIDIA hasn't had a quarter reporting losses in years, while AMD has been reporting losses (including the last quarterly results) for years. Yes, it's much better now and the last one was particularly strong, but make no mistake about it: NVIDIA makes lots on their GPUs, more on each than AMD does.

Archer, you're making a logical deduction from what you know.
You are mistaken though; the node and the chip being manufactured on the node are staggeringly more detailed than you can appreciate currently.
Density isn't the same across an SoC, Kepler and Tahiti. The functional units are not the same, the clocks are not the same, pipeline stages, logic units etc., gosh they are very, very different.
There isn't only one 28nm process from TSMC:
HKMG, LKMG, high speed, low power etc.
Every chip may be on a 28nm node, but the actual process that yields the wafer with said chips is not identical.
All fabs tune the wafers bought by their customers to the process they have.
Designs are validated months in advance; this has nothing to do with design.
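To put rough numbers on the density point, here's a quick comparison using the approximate transistor counts and die sizes quoted in launch coverage (treat all four figures as estimates, not gospel):

chips = {
    "GK104 (Kepler, GTX 670/680)": {"transistors_bn": 3.54, "die_mm2": 294},
    "Tahiti (HD 7950/7970)":       {"transistors_bn": 4.31, "die_mm2": 352},
}
for name, c in chips.items():
    density = c["transistors_bn"] * 1e3 / c["die_mm2"]   # million transistors per mm^2
    print(f"{name}: ~{density:.1f}M transistors per mm^2")

# Similar ballpark densities, yet completely different layouts, clock targets and
# power budgets; which is the point: "28nm" on its own tells you very little.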
 
Yes, the AMD debts, which mostly involve the buy-out deal with GlobalFoundries, which was the best decision made in a long time. Why pay for substandard silicon? While AMD are still going to use GlobalFoundries for the immediate future, it is possibly only until they find a new supplier. So you are justifying Nvidia's wait as being based on quarterly financials, and that at the expense of fans.

At the end of the day, whether AMD, Nvidia or Intel, you have overheads for the relevant sectors and they all need to be paid, and that involves sales.
 
Huh? You're mistaken. The fabs AMD sold were theirs; GF bought into them, then AMD sold the remaining shares they had. GF did not build these, they were AMD's.
They were offloaded for financial reasons, nothing else. You can't compete at a process level with INTEL if you don't have fabs. How do you figure this was the best decision ever? Much like overpaying for ATI, where AMD paid over $1BN more than what the company was worth at the time?

NVIDIA's wait on what exactly? The GPUs weren't ready for mass production. You can only release when a product is ready. If they could have released earlier they would have. You make it seem as if they were relaxing.
Now that the GPUs are ready they are selling them.
Who is this person that's been done so wrong by NVIDIA and how?
 
It was for financial reasons, but even with the 8% shareholding they were being overcharged and sold substandard silicon en masse. The ATI acquisition fits, if not the price, but that may have been to secure ATI's shareholders; I don't see how it will not pay itself off though, either through GPU production or APUs, the two areas which are doing rather well.

The release has been overdue, but actual stock availability is still unknown. It is said to be the end of this month, but that is already eight weeks after the release.
 
sigh...
GF took over what was AMD. It wasn't GF selling them inferior silicon, it was AMD's engineers, tools, equipment etc. GF did not ruin the fabs; they were falling behind way before GF even came into the picture. There was no re-tooling when GF took over.
As for availability of stock, did you forget how many weeks it took for 7970 and 7950 stock to be available?
How about 7870 and 7850?
Wait, how about INTEL Core i7 IB CPUs? Heck, INTEL's 3960X/3930K CPUs?
Release and mass availability rarely coincide, don't make it seem as if it's an NVIDIA thing only, it happens with everyone as production is still ramping up.
 
Any ideas who AMD are looking at for fabs? I assume it's only for the Steamroller-based APU timeline; I expect they will still use GF for the immediate future.
 
Could be anyone they can come to an agreement with.
SAMSUNG, UMC, IBM, etc.: a few to pick from who have the process capability and capacity.
 
Radeon HD 7950 and 7970 were released at the beginning of the year, mass availability started around March
Radeon HD 7870 and 7850 were so rare that they didn't even have drivers on AMD's website for about a month after launch
Ivy Bridge was released only a few weeks ago and suppliers laugh at me when I ask about stock
Sandy Bridge-E was released last year with mass availability starting this year
Bulldozer also took eons to become available, Sandy Bridge wasn't overnight either
Standard procedure really, not being able to keep up with demand != poor yields
 
Anyways, I have to bounce. Always nice having discussions, arguments, debates, whatever they may be called. Maybe over the weekend.
 

Being condescending and telling me what I can or can't appreciate is not going to help get your point across.

Moving on,
Let's take a step back. The HD 5xxx and GTX 4xx launch: TSMC was blamed by both parties, and both had poor availability. Thereafter Nvidia still suffered because of poor design, which was rectified with the GTX 5xx. Now looking back like that, it's clear there are parts of the node that are common and would set back anyone regardless of their design. So back to the present: who am I more inclined to believe? Two companies that are doing fine on 28nm, or Nvidia, which has a history of shifting the blame and can barely make its mind up on whether to moan at TSMC (poor yields) or congratulate them (Kepler is amazing)? Both those statements, btw, come straight from the CEO of Nvidia.

@Ojo, on the availability front: looking at what's going on in SA is such a bad way of determining global availability.
HD 79xx did suffer for about a month globally, but was then fine. The big difference being, AMD never moaned about yields, i.e. they were fine (or at least that's what they want us to think).
GTX 680: it's been two months (well, 7 weeks), and it's still out of stock just about everywhere. So something is off.

So I guess we wait another month and see if the GTX 680 becomes available. By then TSMC would have rectified the situation if it is indeed their fault. Although, with the GTX 670 now out, I'd suspect any surplus GTX 680s would be due to the GTX 670's insane performance, so we'd still be stuck arguing.
 
Archer I'm not being condescending.
You're making statements based on your very limited understanding. Respect the technology enough to know that it's significantly more detailed than you can deduce from behind your keyboard. ;)
The point I'm making assumes rudimentary knowledge about what it is we are talking about, an understanding you clearly do not have.

You speak about design? On what grounds? On what grounds, and with what information, can you say NVIDIA has/had a bad design?
Show me the work rate of each part of the GPU, its localized thermal conditions, how much it contributes to total TDP etc. on any GPU.
Show me the validation test results you clearly have access to that allowed TSMC to agree to manufacturing GPUs for NVIDIA and gave NVIDIA the go-ahead to order wafers. (BTW, NVIDIA takes up most of TSMC's 300mm wafers on the high-performance process.)

The only information you have about design directives of either company are slides you see, which show you the very high level logical layout of the GPUs, not the physical, which is vastly different.
Fermi's manufacturing problems would rear their head with the 480 because it was the densest ASIC TSMC had ever mass-produced at the time, and on the 40nm node. AMD's Cayman GPU was significantly smaller, about 1BN gates less, with simpler compute cores. The 580 had about 200 million gates less, availability was better and the process had matured. (Yes, nodes do get better with time as the process is tweaked for better thermals, performance, yields etc.)

So we can either have the debate or you can nurse your complex. Either way, keep talking untruths and I'll call you out on them at every turn.
 
"@Ojo, on the availability front: looking at what's going on in SA is such a bad way of determining global availability." How is it looking at SA when availability was so poor that they didn't even bother to have drivers listed on their GLOBAL website?
"HD 79xx did suffer for about a month globally, but was then fine. The big difference being, AMD never moaned about yields, i.e. they were fine (or at least that's what they want us to think)."
"GTX 680: it's been two months (well, 7 weeks), and it's still out of stock just about everywhere. So something is off." Not true, most major suppliers have stock right now as of today (Friday); I know because I had to price a few.

It's in the quote.
 
