Leadtek WinFast GeForce3 TD (Page 1/10)


Posted: May 6, 2001
Written by: Tuan "Solace" Nguyen

Introduction

Well guys, it’s that time again when I sit down, open up my reference machines and get ready to install the slew of video cards that get shipped to me. A few months ago, things were getting boring. First it was the GeForce2 GTS. That was somewhat exciting, but most of the “new” technology had been introduced with the GeForce256. Then came the GeForce2 MX. The MX gained a lot of momentum and lots of praise because of its exceptional value.


Afterwards came the almighty GeForce2 Ultra. The Ultra alleviated a large portion of the memory bottleneck that hampered the previous GeForce cards from reaching their maximum potential. Basically, the memory pipeline was just not fast enough to keep up with the data that the GeForce2 core was pushing and pulling. Users found that overclocking the memory would yield greater performance gains than overclocking the core. This held true for all GeForce2 cards, right up to the Ultra.

Since the GeForce2 Ultra was so expensive and out of reach for most gamers, NVIDIA needed to introduce something that would offer Ultra-like performance at a GTS-like price. Thus, the GeForce2 Pro was introduced. NVIDIA gave the Pro speedier memory but kept the core speed identical to the GTS.
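
To put rough numbers on that memory gap, here’s a minimal C sketch that computes peak memory bandwidth from the commonly quoted board specs (166 MHz DDR memory on the GTS, 200 MHz on the Pro, 230 MHz on the Ultra, all on a 128-bit bus). These clock figures are my assumption of the usual specs, not something taken from this review:

    #include <stdio.h>

    /* Peak memory bandwidth = effective memory clock x bus width in bytes.
     * Clock figures passed in below are the commonly quoted specs (assumed). */
    static double bandwidth_gbs(double effective_mhz, int bus_bits) {
        return effective_mhz * 1e6 * (bus_bits / 8) / 1e9;
    }

    int main(void) {
        /* All GeForce2 boards paired the core with a 128-bit DDR bus. */
        printf("GTS:   %.1f GB/s\n", bandwidth_gbs(333.0, 128)); /* 166 MHz DDR */
        printf("Pro:   %.1f GB/s\n", bandwidth_gbs(400.0, 128)); /* 200 MHz DDR */
        printf("Ultra: %.1f GB/s\n", bandwidth_gbs(460.0, 128)); /* 230 MHz DDR */
        return 0;
    }

That works out to roughly 5.3, 6.4 and 7.4 GB/s respectively -- the Pro splits the difference between the GTS and the Ultra on the one spec that mattered most.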

While all these GeForce2 variants were nice, something was definitely lacking. All the GeForce2 cards that I’ve reviewed and mentioned only addressed one thing -- speed. GeForce2 cards differed from each other in core speed, memory speed or both. Although it’s true that certain manufacturers added more value to their cards with things like digital flat panel and TV output, the core of their cards remained largely unchanged.

Weeks and months went by and NVIDIA fans didn’t see anything new. On the other front, ATI Radeon users were bragging about how their Radeon cards bested GeForce2 cards in quality and feature set. Not wanting to just sit there and take it, NVIDIA fans quickly retaliated with comments like “we have better T&L”, “we’re faster” and even “NVIDIA’s the best, ATI sucks”. Talk about a religious war.

The truth of the matter is, NVIDIA did have the speed crown, but ATI held the rest. The Radeon had DirectX 8 features built in, had better image quality and quite frankly wasn’t that far behind the GeForce2 GTS in terms of speed. DirectX 8 compliance was planned for NVIDIA’s next GPU, the NV20. However, once the NV20 arrived, the Radeon would already have some DirectX 8 features too, so it would still be able to compete with the NV20 in some areas -- something the GeForce2 could not say.

NVIDIA and its users had high hopes for the NV20. They held onto the belief that the NV20, when released, would bring all the best of the GeForce2, and then some. “Then some” was definitely an understatement.

On February 27th, NVIDIA showed the world its secret weapon, the GeForce3.

I was fortunate enough to attend NVIDIA’s launch event for the GeForce3 and, to say the least, the audience was floored. NVIDIA introduced such things as:

Fully programmable pixel shaders: the ability to program features and functions directly into the GeForce3 GPU
Lightspeed Memory Architecture: reduces the memory bottleneck
Crossbar memory controller: uses four independent memory controllers instead of the GeForce2’s single controller
Quincunx anti-aliasing: quality comparable to 4X FSAA with only a 2X performance hit
And Z-occlusion culling: the ability to render only the polygons that are visible to the viewer (hidden surface removal -- see the sketch after this list)
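
If the Z-occlusion idea sounds abstract, here’s a minimal C sketch of the underlying depth-test logic -- a toy software version of hidden surface removal, not NVIDIA’s actual hardware implementation:

    #include <stdio.h>
    #include <float.h>

    #define W 4
    #define H 4

    /* Depth buffer: the nearest depth seen so far at each pixel. */
    static float zbuf[H][W];

    /* Start with every pixel "infinitely far away". */
    static void clear_zbuffer(void) {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                zbuf[y][x] = FLT_MAX;
    }

    /* Accept a fragment only if it is closer than what is already there.
     * Returns 1 if visible, 0 if occluded -- the occluded case is exactly
     * the work that occlusion culling throws away early. */
    static int plot(int x, int y, float z) {
        if (z >= zbuf[y][x])
            return 0;   /* hidden behind an earlier surface: culled */
        zbuf[y][x] = z;
        return 1;       /* visible: shade and write the pixel */
    }

    int main(void) {
        clear_zbuffer();
        printf("near fragment drawn: %d\n", plot(1, 1, 0.25f)); /* prints 1 */
        printf("far fragment drawn:  %d\n", plot(1, 1, 0.75f)); /* prints 0 */
        return 0;
    }

The hardware win comes from rejecting the hidden fragment before it is ever shaded, so no memory bandwidth or pixel work is wasted on surfaces the viewer will never see.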

Needless to say, everyone left the show in amazement.

If you want to know more about all the features listed above, please read our GeForce3 Technology Preview and our GeForce3 Vertex and Pixel Shading article.

