In 1999, the PC world was booming, with a frantic pace of new products and a renewal cycle of around six months. Imagine yourself in June 1999: you have just treated yourself to a new PC equipped with a 500 MHz Pentium III and Nvidia's TNT2 graphics card, launched in April. You're ready to play Quake II at 50 frames per second, or hope to exceed 40 frames per second in Half-Life at 1024×768 on your 17-inch CRT. And then, in October of that same year, a graphics card truly changed the game: the GeForce 256.
Before the GeForce 256, graphics cards were accelerator cards rather than true GPUs (Graphics Processing Units). Originally, the graphics card's job was 2D display: simply put, it drew the Windows 95 interface and whatever applications you launched. When the first "3D" games arrived, the central processor (CPU) did all the work of computing the polygons and their movements, while graphics cards merely applied textures to the polygons the CPU had calculated. Some, like 3dfx's cards, began to integrate dedicated circuitry to assemble polygons (the Setup Engine) and even shipped their own API, Glide, alongside standards like OpenGL and Direct3D.
The 3dfx era
Thus, from 1996 to 1998, 3dfx reigned with its Voodoo chipsets, from which games like Tomb Raider and Quake benefited greatly. To cement its position, 3dfx toured publishers to push its API, Glide, a simple and efficient library that made developers' lives easier; many swore by it. In response, and to head off a 3dfx monopoly, Microsoft offered Direct3D 5, joining forces with ATI, Nvidia, Matrox and S3 to counter 3dfx's influence with a universal API.
GeForce 256: the first GPU
Announced by Nvidia at the end of August 1999, the GeForce 256 was not particularly impressive on paper. Its NV10 chip was clocked at only 120 MHz, while its predecessor, the TNT2's NV5, ran at 150 MHz. Moreover, it was manufactured on a 220 nm process while the competition was already etching its chips at 180 nm. It is in transistor count that the NV10 stood out: 23 million! That is roughly two and a half times as many as a certain Pentium III (9.5 million) or the TNT2's NV5 (9 million). This considerable gap, along with support for new features, explains the jump from NV5 to NV10 in Nvidia's nomenclature.
These 23 million transistors were split between 2D display management and the units dedicated to 3D: the Setup Engine, the Rendering Engine and a newcomer, the Transform and Lighting (T&L) engine. It is this last unit that would revolutionize the industry.
Thus, the T&L unit took on the task of computing the transformations of polygons (translation, rotation and scaling) as well as lighting. Although the calculations themselves are relatively simple (applying a 4×4 matrix to a vertex takes 16 multiplications and 12 additions), they must be performed for every object in every rendered frame. This is why Nvidia's idea of dedicating computing units specifically to this task turned out to be brilliant.
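To picture the work being offloaded, here is a minimal C sketch (an illustration of the math, not Nvidia's actual hardware logic) of a single vertex transform: applying a 4×4 matrix to one homogeneous vertex costs exactly the 16 multiplications and 12 additions mentioned above, and the T&L engine repeats this for every vertex of every object, every frame.

```c
#include <stdio.h>

/* One homogeneous vertex (w = 1 for a 3D point). */
typedef struct { float x, y, z, w; } Vec4;

/* 4x4 matrix times vertex: each output component costs
 * 4 multiplications + 3 additions, so 16 mults and 12 adds total. */
static Vec4 transform(const float m[4][4], Vec4 v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

int main(void)
{
    /* A translation by (1, 2, 3), one of the transforms the article lists. */
    const float translate[4][4] = {
        {1, 0, 0, 1},
        {0, 1, 0, 2},
        {0, 0, 1, 3},
        {0, 0, 0, 1},
    };
    Vec4 v = { 5.0f, 0.0f, 0.0f, 1.0f };
    Vec4 t = transform(translate, v);
    printf("(%.1f, %.1f, %.1f)\n", t.x, t.y, t.z); /* prints (6.0, 2.0, 3.0) */
    return 0;
}
```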
With the democratization of T&L, the GeForce 256 propels you into a new era of video games!
The CPU can finally breathe
Thanks to the GeForce 256, Nvidia relieved the central processor of transformation calculations, allowing it to concentrate on other tasks. Moreover, because the NV10 has specialized units, it performs these calculations much faster than a CPU: at the time, Nvidia announced that the NV10 could process 15 million triangles per second, compared with 4 million for a Pentium III-550 dedicated solely to this task. Add to this support for S3TC texture compression in DirectX, offering a fixed (lossy) 6:1 compression ratio, and other innovative technologies such as Cube Environment Mapping and Vertex Blending.
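That 6:1 figure is easy to verify with back-of-the-envelope arithmetic, assuming the DXT1 variant of S3TC (the flavor adopted in DirectX): each 4×4 block of 24-bit RGB pixels, 384 bits in all, is stored in a fixed 64 bits. A quick sketch:

```c
#include <stdio.h>

int main(void)
{
    /* S3TC/DXT1: a 4x4 block of 24-bit RGB pixels compresses to a fixed
     * 64 bits (two 16-bit reference colors + sixteen 2-bit indices). */
    const int pixels_per_block  = 4 * 4;
    const int uncompressed_bits = pixels_per_block * 24; /* 384 bits */
    const int compressed_bits   = 2 * 16 + 16 * 2;       /* 64 bits  */
    printf("ratio = %d:1\n", uncompressed_bits / compressed_bits); /* 6:1 */
    return 0;
}
```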
The NV10's other strong point lies in its Rendering Engine, used to apply textures to polygons. Nvidia parallelized this task across four pixel pipelines, versus two for the TNT2 Ultra. The consequence: even with a 20% lower clock, the NV10 is 60% faster, processing 480 million pixels per second. This let video game developers significantly increase the polygon count of displayed 3D objects, drastically improving overall graphics quality.
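The arithmetic behind that 60% is simple: peak theoretical fill rate is just the number of pixel pipelines multiplied by the clock frequency. A sketch using the figures quoted above (theoretical peaks, not measured performance):

```c
#include <stdio.h>

int main(void)
{
    /* Peak fill rate = pixel pipelines x core clock (MHz). */
    const double nv10_mpixels = 4 * 120.0; /* GeForce 256: 480 Mpixels/s */
    const double nv5_mpixels  = 2 * 150.0; /* TNT2 Ultra:  300 Mpixels/s */
    printf("GeForce 256: %.0f Mpixels/s\n", nv10_mpixels);
    printf("TNT2 Ultra:  %.0f Mpixels/s\n", nv5_mpixels);
    /* (480 / 300 - 1) * 100 = 60% advantage despite the lower clock. */
    printf("advantage:   %.0f%%\n", (nv10_mpixels / nv5_mpixels - 1) * 100);
    return 0;
}
```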
T&L Support or Death
As a result, the GeForce 256 was ahead of its time: at launch, only games using the professional OpenGL API could take full advantage of T&L, and Direct3D titles had to wait for DirectX 7 to support it. Game publishers nevertheless rushed to update their titles for compatibility. Quake 3, released on December 2, 1999, took full advantage of the GeForce 256, with the success we know.
As PC FUN put it at the time: "The year 2000 will be the year of T&L, we are sure of it: developers will use this new function because, f…, it's so beautiful."
The GeForce 256 was priced at a little over 2,000 francs (around 455 euros today, according to INSEE), compared with 1,600 francs for a TNT2 Ultra-based card, for example. At that level, the GeForce 256 even competed with the professional cards used for 3DS MAX, which cost more than twice as much.
Epilogue
In 2000, the graphics card landscape changed. In December, Nvidia bought 3dfx, which had proved unable to fully support T&L, putting an end to the ambitions of this pioneer of 3D. Matrox attempted to follow suit with its G550 in 2001, but partial T&L support and financial difficulties pushed it toward 2D applications. S3, despite an attempt with its underperforming Savage 2000, failed to establish itself and sold its 3D division to VIA in 2001, ending its ambitions in this sector. Only ATI, with its Radeon 256 released in April 2000, managed to follow in Nvidia's footsteps with T&L, giving rise to a battle that still rages 25 years later.
Today, it is impossible not to draw a parallel with the advent of ray tracing, a technology Nvidia has spearheaded since the launch of the GeForce RTX 20 series in 2018. Combined with its CUDA software platform and the range of tools it offers developers, this technological cocktail could well leave AMD behind, even though AMD acquired ATI in 2006 precisely to compete more effectively. History keeps repeating itself.