The Nvidia GeForce 3 launched 25 years ago: underappreciated at its debut, its impact shaped the industry

A GeForce 3 graphics card flanked by game boxes for Star Wars Jedi Knight 2, Warcraft III, and Max Payne.
(Image credit: NVIDIA UK)

It's easy to hear "25 years ago" and miss the context, so let's set the scene for the GeForce 3 launch. This was before Steam and before the iPhone. Before YouTube. The very concept of the "GPU" was still novel, having been established with the GeForce 256 just 15 months prior. Yes, we went from GeForce 256 to GeForce 2 to GeForce 3, all in just 15 months. Things moved more quickly back then; CPUs went from 1 GHz to 2 GHz in roughly the same span.

When it launched in February 2001, the GeForce 3 marked a critical turning point in the history of graphics processors: it was the first GPU with any real programmability, thanks to its support for DirectX 8.0 pixel and vertex shaders. For the first time, graphics programmers could write small programs that ran directly on the GPU.

A screenshot from the NVIDIA Chameleon demo.

The Chameleon demo, with its impressive normal-mapped textures, was one of the first showcases of what the GeForce 3 could do. (Image credit: Future)

Fixed-function hardware doesn't just carry performance implications; it limits how useful the graphics processor can actually be. For effects like skeletal animation, moving the processing onto the GPU enables techniques like matrix palette skinning, which lets a character's mesh deform realistically around its joints. DirectX 8 enabled radically more lifelike water effects, as seen in The Elder Scrolls III: Morrowind, and it also famously enabled per-pixel lighting and true Dot3 bump mapping, as seen in Doom 3.
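
Matrix palette skinning is simple enough to sketch in a few lines. The pure-Python illustration below blends a vertex through two bone matrices by per-vertex weights, which is the core of the technique the GeForce 3's vertex shaders moved onto the GPU; the two-bone setup and 3x4 matrices are illustrative assumptions, not real shader code.

```python
# Matrix palette skinning: each vertex is transformed by several bone
# matrices, and the results are blended by per-vertex weights, so a mesh
# bends smoothly at joints instead of cracking apart.

def transform(matrix, v):
    """Apply a 3x4 affine matrix (rotation/scale + translation) to a point."""
    return tuple(
        sum(matrix[row][col] * v[col] for col in range(3)) + matrix[row][3]
        for row in range(3)
    )

def skin_vertex(vertex, bones, weights):
    """Blend the bone-transformed positions by the vertex's weights."""
    out = [0.0, 0.0, 0.0]
    for bone, w in zip(bones, weights):
        p = transform(bone, vertex)
        for i in range(3):
            out[i] += w * p[i]
    return tuple(out)

# Two bones: identity, and a translation by +1 on x (e.g. a moved forearm).
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
shifted  = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0]]

# A vertex near the elbow, influenced 50/50 by both bones, lands halfway.
print(skin_vertex((0.0, 0.0, 0.0), [identity, shifted], [0.5, 0.5]))  # (0.5, 0.0, 0.0)
```

In a real DirectX 8 vertex shader this blend ran per vertex on the GPU, with the bone matrices uploaded as shader constants.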

Indeed, despite how difficult it is to imagine NVIDIA and Apple hardware playing nice today, the GeForce 3 was actually revealed in Tokyo at the Makuhari Messe convention hall by none other than Steve Jobs. It was the 2001 Macworld Conference & Expo, and Jobs brought id Software's John Carmack on stage to show off the first-ever demos of Doom 3, then one of the most hotly awaited games of all time. Hype levels were off the charts as Carmack talked about the new rendering features that the programmable NV20 GPU enabled, like real-time per-pixel lighting, detailed normal mapping, and complex interactivity, including interactive GUI surfaces.

Doom 3 wouldn't come out until years later, of course, and it didn't run particularly well on a GeForce 3 when it finally did. That's to be expected; things kept moving fast in the three-year interim, and it doesn't change the fact that the GeForce 3 was still the first GPU capable of running the game at all. Ironically, though, the reality of February 2001 didn't quite match the hype: reviewers were mixed on the card, and the GeForce 3 was arguably a bit of a letdown at launch.

A photo of a GeForce 3 card from the original Tom's Hardware review.

The header photo from the original 3DTested review of the GeForce 3 in 2001. (Image credit: Thomas Pabst/Future)

You see, the GeForce 3 brought along awesome DirectX 8 graphics features, but it had exactly the same raw raster performance (fillrate) as the GeForce 2 Pro. Its only real benefit in DirectX 7 and earlier games was its "Lightspeed Memory Architecture," a then-advanced crossbar memory controller that considerably improved effective memory bandwidth. That gave it a real performance advantage at higher resolutions, but if all you cared about was Quake III Arena at 800x600, it looked underwhelming.
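
Some back-of-the-envelope arithmetic shows why reviewers shrugged. The clock speeds and pipeline counts below are the commonly cited reference specs for these cards, used here as illustrative assumptions; note that the crossbar controller's gains showed up in effective bandwidth, beyond what these raw numbers capture.

```python
# Why the GeForce 3 looked flat in older games: its raw fillrate matched
# the GeForce 2 Pro exactly. Figures are commonly cited reference specs,
# treated here as assumptions for illustration.

def fillrate_mpixels(core_mhz, pixel_pipes):
    """Peak pixel fillrate in megapixels per second."""
    return core_mhz * pixel_pipes

gf2_pro = fillrate_mpixels(200, 4)  # 800 Mpixel/s
gf3     = fillrate_mpixels(200, 4)  # 800 Mpixel/s -- identical raw fillrate

def bandwidth_gb(effective_mem_mhz, bus_bits):
    """Raw memory bandwidth in GB/s (DDR rates given as effective MHz)."""
    return effective_mem_mhz * (bus_bits // 8) / 1000.0

print(gf2_pro == gf3)          # True
print(bandwidth_gb(400, 128))  # 6.4 GB/s  (GeForce 2 Pro, 200 MHz DDR)
print(bandwidth_gb(460, 128))  # 7.36 GB/s (GeForce 3, 230 MHz DDR)
```

The modest raw-bandwidth bump plus the much smarter crossbar controller is why the GeForce 3 pulled ahead mainly at high resolutions, where memory traffic dominates.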

This was partially rectified later in 2001 by the launch of the GeForce 3 Ti 500. Yes, the GeForce 3 is also the genesis of the familiar "Ti" appellation still used on NVIDIA cards today. Whether you say it "tie" or "tee-aye", it originally stood for "Titanium", and it was applied in October 2001 to two new models of GeForce 3 as well as a new GeForce 2 model that served as NVIDIA's budget offering at the time. The GeForce 2 Ti offered incredible value in the legacy games of the day, but it lacked the passport to the future that the GeForce 3 Ti 200 provided. Without DX8 support, you couldn't tick the 'shiny water' box in Morrowind, and that made the previous-generation card much less desirable.

A photograph of the NVIDIA XGPU in the Xbox.

The original Xbox's "XGPU" was a modified GeForce 3 core, known as NV2A. (Image credit: Future)

Another launch late in 2001 solidified the GeForce 3 as the foundation of things to come: Microsoft's Xbox. The original Xbox famously used NVIDIA graphics, but fewer people realize that NVIDIA also provided the console's sound hardware and memory controller. NVIDIA's work on the Xbox would become the foundation for its beloved but relatively short-lived "nForce" series of motherboard chipsets. The Xbox was clearly the most powerful console of its generation, with more memory and more advanced graphics capabilities than competing machines from Nintendo and Sony, but a games machine is nothing without software, and while the Xbox had a few notable standouts (including Halo: Combat Evolved, Fable, and Star Wars: Knights of the Old Republic), it wasn't the market leader.

Indeed, the market often punishes forward-looking hardware in the short term, but it's the long game that counts. The GeForce 3 itself wasn't a hugely successful release for NVIDIA, but it soon gave way to the wildly popular GeForce 4 series. The GeForce 4 continued the GeForce 3's legacy, bumping Direct3D support from version 8.0 to 8.1. That added features like volumetric texturing (3D textures) and dependent texture reads, which let the GPU use the color data fetched from one texture as the coordinates for a fetch from another. Of course, the real generational jump came from expanded pixel and vertex shader capabilities along with huge improvements in raw raster throughput and memory bandwidth.
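
A dependent texture read is easier to grasp with a toy model. In the sketch below, the first texture stores coordinate offsets rather than colors, and its output drives a second fetch; the tiny Python grids are illustrative stand-ins for real textures, which a pixel shader would sample per pixel (the classic use case was environment-mapped bump mapping).

```python
# Dependent texture read: the value fetched from one texture becomes the
# coordinates for a fetch from a second texture.

def sample(texture, u, v):
    """Nearest-neighbour sample with normalized [0, 1) coordinates."""
    h, w = len(texture), len(texture[0])
    return texture[int(v * h) % h][int(u * w) % w]

# Texture A stores (u, v) coordinates instead of colors -- e.g. a ripple map.
offset_map = [[(0.0, 0.0), (0.5, 0.0)],
              [(0.0, 0.5), (0.5, 0.5)]]

# Texture B is a plain 2x2 color map.
color_map = [["red",  "green"],
             ["blue", "white"]]

def dependent_read(u, v):
    du, dv = sample(offset_map, u, v)  # first fetch yields coordinates...
    return sample(color_map, du, dv)   # ...which drive the second fetch

print(dependent_read(0.6, 0.0))  # first fetch gives (0.5, 0.0) -> "green"
```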

In the following years, pixel and vertex shader hardware became the primary arena of GPU development and progress. The GeForce 6 series, launched in April 2004, ushered in DirectX 9.0c and Shader Model 3.0, which brought us "smart shaders": true dynamic flow control in GPU programs. Developers could now write "uber-shaders" that handled disparate material types in a single program without tanking the frame rate. Shader Model 3.0 also brought Vertex Texture Fetch for displacement mapping, hardware geometry instancing for lush fields of grass and dense forests, and HDR rendering for radically more realistic lighting.
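
The uber-shader idea can be sketched in miniature. One program branches per pixel on a material identifier instead of the engine swapping in a different small shader for every surface; the material names and shading factors below are hypothetical, and a real shader would branch on values read from textures or constants.

```python
# One "uber-shader" handling several material types via dynamic flow
# control, the Shader Model 3.0 feature described above. Materials and
# their shading factors are made-up placeholders.

def uber_shader(material, base_color, light):
    # The branch is evaluated per pixel, not chosen ahead of time.
    if material == "metal":
        return tuple(c * light * 1.5 for c in base_color)  # punchy highlight
    elif material == "cloth":
        return tuple(c * light * 0.7 for c in base_color)  # soft diffuse
    else:
        return tuple(c * light for c in base_color)        # plain diffuse

# Two pixels with different materials run through the same program.
pixels = [("metal", (0.8, 0.8, 0.8)), ("cloth", (0.2, 0.4, 0.6))]
shaded = [uber_shader(m, c, light=0.5) for m, c in pixels]
```

Before dynamic branching, engines juggled many shader permutations and paid a cost every time they switched between them; after it, one program could cover them all.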

A screenshot from the original NVIDIA Dawn demo, depicting the pixie walking along a woody branch.

NVIDIA created the famous Dawn demo to show off the power of DirectX 9. (Image credit: Future)

But it was really the GeForce 8 series where things took a drastic turn. Launched to much fanfare in late 2006, the GeForce 8 series debuted the Tesla microarchitecture, NVIDIA's first design with fully unified shaders. That meant there was no longer a split between pixel and vertex shader hardware; the GPU was now a fully programmable processor with far fewer of the limitations that had existed before. It's no coincidence that the GeForce 8 series launched alongside the first version of CUDA, NVIDIA's proprietary compute framework.
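
A quick calculation shows why unifying the shaders mattered. With a fixed split, whichever pool finishes first sits idle while the other becomes the bottleneck; a unified pool puts every unit on whatever work exists. The unit counts below are illustrative assumptions, not the actual GeForce 8 configuration.

```python
# Fixed pixel/vertex split vs. a unified shader pool, in arbitrary
# work units per frame. Unit counts are illustrative only.

def frame_time_split(vertex_work, pixel_work, vertex_units, pixel_units):
    """With fixed pools, the slower pool gates the whole frame."""
    return max(vertex_work / vertex_units, pixel_work / pixel_units)

def frame_time_unified(vertex_work, pixel_work, total_units):
    """A unified pool processes all the work, whatever its mix."""
    return (vertex_work + pixel_work) / total_units

# A pixel-heavy frame: 10 units of vertex work, 90 of pixel work.
split   = frame_time_split(10, 90, 8, 24)  # 3.75  -- pixel pool bottlenecks
unified = frame_time_unified(10, 90, 32)   # 3.125 -- same silicon, less idle time
print(split, unified)
```

The same property that balances graphics workloads is what made these GPUs attractive for general-purpose compute: a unified pool of programmable units doesn't care whether the work is vertices, pixels, or something else entirely.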

That's the key to all of this. You can draw a straight throughline from the launch of the GeForce 3 to NVIDIA's current dominance of the AI market. Once massively parallel programmable processors were sitting in millions of PCs worldwide, people started to wonder what else they could run besides games. That's how general-purpose GPU (GPGPU) computing came about. In the early days, it was mostly scientific computing: Folding@home, physics simulations, and academia. Then came cryptography, and then cryptocurrency mining; folks reading this likely remember the GPU shortages that craze caused. Nowadays, it's almost exclusively about AI.

The weird part of technological evolution is that it rarely looks important in the moment. In fact, sometimes it even looks like a regression. The groundwork laid by the GeForce 3 was of critical importance. It was a bet that graphics hardware should be programmable, and it put NVIDIA ahead of the pack in that regard. Twenty-five years later, that bet is running the world's most powerful AI datacenters, and it all happened because gamers wanted better lighting in video games. The GeForce 3 wasn't the fastest card in the games of its day, but it was the first card for tomorrow's, and "tomorrow" turned out to be very, very large.


Zak Killian
Contributor
  • Dementoss
    That long ago, I was using a Voodoo 3 3000, it did a fine job.
    Reply
  • Neilbob
    Dementoss said:
    That long ago, I was using a Voodoo 3 3000, it did a fine job.
    So was I. Unreal using Glide was legitimately the BEST way to play Unreal, and the only reason I went to the Voodoo 4 4500 after (a mistake in retrospect).

    I remember being tempted by the GeForce 3, but the anaemic performance increase was off-putting. The GeForce 4 did much better, and was the one and only time I've ever bought the highest-end graphics card available at the time - for the remarkable price of (I think) £300. Seems hard to imagine that with today's prices.
    Reply