Asus EN 8800 GTX - nVidia's G80 Performance Revealed

Introducing the G80 architecture and DX10

Introduction

[Image: the G80 GPU laid bare]

The last generation of cards saw nVidia slip well behind ATI in terms of image quality, although with their dual-PCB, dual-GPU 7950GX2 they clawed the performance crown back. That was not a position nVidia were used to being in, especially after the success of their 6 series.

DirectX 10 is a technology we are all waiting for, and both nVidia and ATI are producing cards (hopefully) in time for its release. nVidia are first to market with their 8800GTX and GTS. With an architecture almost totally reworked from the old generation, nVidia have gone all out on a unified design.

Asus kindly sent us their 8800GTX so we could study how the card does in real life, but first let us explore G80's features.

Outlining the technology

There's a lot of information out there on nVidia's latest generation of cards, so I thought I'd try to keep the explanation simple and concise.

In the 8800GTX nVidia have implemented a massively parallel, unified shader design consisting of 128 individual stream processors running at 1.35GHz. As described in my article on ATI's unified shader architecture, nVidia have built a pipeline that can process vertex, pixel, geometry or physics operations, giving it both flexibility and efficiency.

[Image: G80 architecture block diagram]

So what can we see here then?

One noticeable difference first of all is that nVidia have implemented ZCull before the data enters the stream processors. ZCull is a process that strips out data you will never see before it reaches the rendering engine, so the GPU does not waste time rendering pixels that will never appear on screen. Previously this culling happened after processing, meaning vital processing power was spent rendering unnecessary pixels that were then thrown away.
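
To see why early rejection matters, here is a conceptual CUDA-style sketch (my own illustration only; the real ZCull is fixed-function hardware, and expensiveShade is a made-up stand-in for a heavy pixel shader):

__device__ float expensiveShade(int x, int y)
{
    // Made-up stand-in for a costly pixel shader.
    float v = 0.0f;
    for (int k = 0; k < 256; ++k)
        v += sinf(x * 0.1f + k) * cosf(y * 0.1f + k);
    return v;
}

// One fragment per pixel here, to keep the sketch simple.
__global__ void shadeWithEarlyZ(float *colorBuf, float *depthBuf,
                                const float *fragDepth, int width, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Early rejection: a hidden fragment costs nothing beyond this compare.
    if (fragDepth[i] >= depthBuf[i]) return;

    depthBuf[i] = fragDepth[i];                          // visible: keep it
    colorBuf[i] = expensiveShade(i % width, i / width);  // ...then shade it
}

The point is simply where the test sits: rejecting before shading costs one comparison, while rejecting after shading throws away all of that per-pixel work.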

Let's see why both nVidia and ATI think that a unified architecture was needed to increase the performance of DX10 cards:

DirectX 9 and traditional Shaders:
[Image: DirectX 9 separate pipelines]

DirectX 10 Unified Shaders:
[Image: DirectX 10 unified pipelines]

So what do we have in the two pictures? In the first we see the classic non-unified design with separate vertex and pixel shader pipelines. The argument is that when the workload leans heavily towards one type of shader work, only one set of pipelines is working to maximum effect, leaving "idle hardware" as nVidia put it.

Let's move on to the unified example. Here the unified architecture excels (in theory) under both geometry-heavy and pixel-heavy workloads, as the unified shader pipelines use their flexibility to process whatever information is sent their way. Couple this with dynamic load balancing and you have a mightily efficient architecture that can handle anything thrown at it.

This means that you have a GPU with 128 shader processors each capable of processing pixel, vertex, geometry and physics data.
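
If it helps to picture that, here is a hypothetical CUDA-style sketch of the pull-from-one-queue idea. The Task type and the trivial per-case work are my inventions to illustrate the principle; the real scheduler is dedicated hardware, not code:

enum TaskType { VERTEX, PIXEL, GEOMETRY };

struct Task { TaskType type; int dataIndex; };

__device__ int nextTask;  // shared queue cursor, zeroed before launch

__global__ void unifiedShaders(const Task *queue, int numTasks,
                               float *vertexOut, float *pixelOut)
{
    for (;;) {
        int t = atomicAdd(&nextTask, 1);   // grab the next unit of work
        if (t >= numTasks) return;         // queue drained, processor done

        Task task = queue[t];
        switch (task.type) {               // any processor runs any task type
        case VERTEX:   vertexOut[task.dataIndex] = 1.0f; break;  // transform
        case PIXEL:    pixelOut[task.dataIndex]  = 0.5f; break;  // shade
        case GEOMETRY: break;              // amplify primitives, etc.
        }
    }
}

Because every processor pulls from the same queue, none of them sit idle while work of the "other" kind piles up, which is exactly the failure mode of the fixed pipelines above.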

nVidia are also hoping that the flexible, efficient (and of course hugely parallel) processor in their G80 will mean that other kinds of data can be processed too, beyond graphics.
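
That hope already has a concrete face: G80 launched alongside nVidia's CUDA toolkit, which runs non-graphics code on those same stream processors. A minimal sketch of what such a job looks like (the values are placeholders; this is only the shape of the code):

__global__ void scaleArray(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;               // a million floats, not pixels
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float)); // contents don't matter for the sketch

    // 128 threads per block, a loose nod to the 128 stream processors.
    scaleArray<<<(n + 127) / 128, 128>>>(d, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}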

DirectX 10

I don't want to go into too much detail on DirectX 10, as it has been covered in one of our previous articles (see here), but I'll go over why DX10 will also add to the performance increase.

[Image: CPU overhead comparison]

[Image: DirectX 10 pipeline]

DirectX 10 reduces CPU overhead by cutting the CPU out of the most basic API processes: with the CPU less involved in the rendering process, the time it takes for each object to be rendered is hugely reduced. Let's look at it like this (ATI slide):

[Image: ATI DirectX 10 slide]
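
To make the slide's point concrete, here is a toy model of the overhead argument; validateState(), submit() and the numbers are hypothetical stand-ins of mine, nothing like the real Direct3D API:

#include <vector>

struct Object { int material; };

static int driverWork = 0;                 // counts CPU-side validations

void validateState(int) { ++driverWork; }  // stand-in for driver overhead
void submit(const Object &) {}             // stand-in for a draw call

int main()
{
    std::vector<Object> scene(10000, Object{0});

    // DX9 style: state is re-validated for every object, so the CPU does
    // 10,000 units of work before the GPU even gets going.
    for (const Object &obj : scene) { validateState(obj.material); submit(obj); }

    // DX10 style: state objects are validated once when they are created,
    // so the per-draw loop leaves the CPU almost entirely out of it.
    validateState(scene[0].material);      // baked once, up front
    for (const Object &obj : scene) submit(obj);

    return 0;
}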

DirectX 10 solves this by working towards general processing units. The new Geometry Shader can manipulate vertices and many other types of object, which gives it more ways to process, access and move data. This extra level of manipulation adds a lot of headroom for developers to introduce new features into games and to utilise the GPU for more than just rendering a scene.
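
As a rough analogy for what that buys you, here is the amplification idea written as a CUDA-style kernel (easier to show here than HLSL); the point-into-quad expansion is my own example, not a shipping technique:

struct Vertex { float x, y, z; };

// One thread per input point; every point emits a 4-vertex quad, so the
// output buffer must be 4x the size of the input.
__global__ void amplifyPoints(const Vertex *in, Vertex *out, int numPoints)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPoints) return;

    Vertex p = in[i];
    const float s = 0.05f;  // arbitrary half-size for the quad
    out[i * 4 + 0] = Vertex{p.x - s, p.y - s, p.z};
    out[i * 4 + 1] = Vertex{p.x + s, p.y - s, p.z};
    out[i * 4 + 2] = Vertex{p.x - s, p.y + s, p.z};
    out[i * 4 + 3] = Vertex{p.x + s, p.y + s, p.z};
}

The key difference from a Vertex Shader is the output count: one primitive in, several out, so the GPU can create geometry rather than just transform what the CPU sent it.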

So Geometry Shaders can manipulate data...how do the developers use this?

Well, basically I'm hoping developers will use this to stop "pop-up" (trees and other objects suddenly appearing in the distance). I can see a huge advantage in using these units to vary things like water detail over distance, and to add far superior rendering to characters on the periphery of a scene, such as excellent crowd animation in racing and sports games. This is all my own speculation, but it would certainly be nice to see.

Memory interface mid-process

Also added to nVidia's "Stream" processors is the ability to move data out to memory and back again in a single pass. This means data should no longer need two or more passes before it can be output. Once again this adds to the picture of efficiency that nVidia are building up.
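
A simple way to see the saving (my sketch, not nVidia's code) is to compare two chained passes with a single fused one:

// Two passes: the intermediate result makes a round trip through memory.
__global__ void passA(const float *in, float *tmp, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tmp[i] = in[i] * 2.0f;          // step 1, written out
}
__global__ void passB(const float *tmp, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = tmp[i] + 1.0f;         // step 2, read back in
}

// One pass: both steps happen before the data ever leaves the chip.
__global__ void fused(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f + 1.0f;   // no intermediate buffer
}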

Instancing

Shader Model 3 brought far superior instancing than we had seen before. Instancing means you can render one object and replicate it many times over, creating a fuller scene at little extra cost. This is very useful for trees and grass, where essentially the same object needs repeating again and again.

[Image: instancing example]
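
In code terms the trick looks something like this sketch (CUDA-style, my own illustration): the mesh lives in memory once, and only a small per-instance record, here a hypothetical Offset, differs between copies.

struct Vertex { float x, y, z; };
struct Offset { float x, y, z; };   // hypothetical per-instance data

__global__ void drawInstances(const Vertex *baseMesh, int vertsPerMesh,
                              const Offset *instances, int numInstances,
                              Vertex *out)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    if (id >= vertsPerMesh * numInstances) return;

    int inst = id / vertsPerMesh;          // which tree
    int v    = id % vertsPerMesh;          // which vertex of that tree

    Vertex p = baseMesh[v];                // same source mesh every time
    Offset o = instances[inst];            // only the placement differs
    out[id] = Vertex{p.x + o.x, p.y + o.y, p.z + o.z};
}

One mesh, one draw, a forest's worth of trees: the saving is in both memory and the number of submissions the CPU has to make.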

128-Bit High Dynamic Range

nVidia have implemented 32-bit floating-point precision per colour channel, for a total of 128-bit dynamic-range rendering. They claim this level of accuracy surpasses that used in film rendering.
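
The arithmetic is simple: four colour channels at 32 bits each is 4 x 32 = 128 bits per pixel, which is exactly the layout of CUDA's built-in float4. The tone-mapping operator below is a generic Reinhard-style example of mine, not nVidia's:

#include <vector_types.h>   // float4, uchar4

static_assert(sizeof(float4) == 16, "4 channels x 32 bits = 128 bits");

__global__ void toneMap(const float4 *hdr, uchar4 *ldr, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 c = hdr[i];  // full 128-bit precision up to this point
    // Squeeze the unbounded HDR range into displayable 0-255 values.
    ldr[i] = uchar4{(unsigned char)(255.0f * c.x / (1.0f + c.x)),
                    (unsigned char)(255.0f * c.y / (1.0f + c.y)),
                    (unsigned char)(255.0f * c.z / (1.0f + c.z)),
                    255};
}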

This is enough detail for this article at the moment, but I may do a fuller article on this in the future.

Most Recent Comments

24-12-2007, 17:24:49

Rastalovich
Great insight into the higher end of the GPU spectrum.

I have to be fair, even though you've got a 1 to 4 positioning, I don't think there's that much in it for any of them. I know the ATI slides in the rankings, but it does show itself to be a fine card.

An interesting comparison would have been performance versus cost, seeing as they're as close as they appear to be.

I'm a bit at odds with the OC results too. I simply can't achieve the memory overclock on my GT without it having fits. But I can clock the GPU to around 720 (from 670).

What would impress me is someone buying the cheapest GT, putting it under water, and claiming back those clock hertz with superior cooling. Sounds nice.

Surprises me a bit though that the GTS isn't launched as a bigger performer. Although price-wise, it does seem to start from stock where the GT limit starts, leaving you further to climb.

Nice review m8, think everyone's doing the "going to bed early to get up early" trick.

24-12-2007, 17:25:40

Hyper
Nice to see this review is finally up

Shame I can't view it, as 128k is painfully slow :sleep:

24-12-2007, 18:07:43

Hatman
Thing is though, there is a BIOS volt-mod for the 8800GT that GREATLY increases its overclockability. If there isn't one for the 8800GTS I see them both reaching high limits, but so far the 8800GT seems to OC better. With good cooling and the volt-mod this thing easily passes 800MHz core; I've seen some at 850+.

What you've also got to look at is shaders; they seem to be the key thing at the moment. If the shaders OC much better on the 8800GT, which I guess they kind of should since they could simply lock the worst-performing ones to make it top out better, it could impact performance in certain titles a lot.

Nice review though. I was surprised the 8800GT didn't clock more; there wasn't a heat issue, was there?

It should also be said there are supposed problems with the memory modules on the 8800GT: it seems long-term use of the memory at 2GHz+ breaks it. That's why some companies like Zotac use better memory modules for their cards.

24-12-2007, 18:29:40

Brooksie
Great review Kempez, can't wait to open my GTS tomorrow

24-12-2007, 18:34:22

FarFarAway
The GT was a little hot, yes, but I will never change the cooling or take off the heatsink prior to running any tests. I test every card as it would be when it appears on your doorstep.

Volt-modding is not part of the overclocking testing, as the percentage of users who do that is small and it voids any warranty you have.

An overclock is dependent on what RAM is used, what the cooling is like and also what luck you get with the silicon... the overclocking gods didn't smile on me this time.

@Rast: I agree, the ATI card is a great card as I said in the ATI review, but this review was for these two cards. Having said that: would I buy the HD3850 personally? No, I'd go for an 8800 GTS 512MB G92 or an 8800 GT 512 G92. It's simply not fast enough in every game, I'm afraid.
