While the nitty-gritty detail of recollections captured while still fresh can be fascinating, I especially appreciate reflections written a few decades later: the distance puts the outcomes of key decisions in perspective, and fewer business and personal concerns generally allow for franker assessments.
Pretty sure the author meant to write 640 kilobytes.
At a time when PC memory maxed out at 640 megabytes, the fact that the PCI bus could address 4 gigabytes meant that quite a few of its address bits were surplus. So we decided to increase the amount of data shipped in each bus cycle by using some of them as data. IIRC NV1 used 23 address bits, occupying 1/512th of the total space. 7 of the 23 selected one of the 128 virtual FIFOs, allowing 128 different processes to share access to the hardware. We figured 128 processes was plenty.
PC memory was nearly always sold in power-of-two capacities, so you could have SIMMs of 1, 2, 4, 8, or 16 MiB. You could usually mix and match these modules, and slot counts varied: some PCs had 2 slots, some had 4, some had a different number.
So with 4 slots each holding some sort of maximum, 64 MiB was a very common ceiling for a consumer PC, whether as 2x32 or 4x16 MiB. Lots of people ran up against that limit for sure.
640 MiB is an absurd number if you think about it mathematically. How do you divide that up? If 4 SIMMs are installed, each would have to be 160 MiB, and no such hardware ever existed. IIRC, individual SIMMs commonly maxed out at 64 MiB, and a "monster memory module" larger than that was not physically possible to make.
Furthermore, while 64 MiB requires 26 bits to address, 640 MiB requires 30 address bits on the bus. If a hypothetical PC had 640 MiB in use by the OS, then only 2 pins on the address bus would be unused! That is clearly at odds with their narrative that they were able to "borrow" several more.
This is clearly a typo, and I would infer that the author meant to write "64 megabytes" and tacked on an extra zero out of habit or hyperbole.
I can’t find the purchase receipts or specific board brand but it had four SDRAM slots, and I had it populated with 2x64 and 2x256.
Edit: Found it in some old files of mine:
I was wrong! Not four DIMM slots... three! One must have been 128 and the other two 256.
Pentium II 400, 512k cache
Abit BF6 motherboard
640 MB PC100 SDRAM
21" Sony CPD-G500 (19.8" viewable, .24 dot pitch)
17" ViewSonic monitor (16" viewable, .27 dot pitch)
RivaTNT PCI video card with 16 MB VRAM
Creative SB Live!
Creative 5x DVD, 32x CD drive
Sony CD-RW (2, 4, 24)
80 GB Western Digital ATA/100
40 GB Western Digital ATA/100
17.2 GB Maxtor UltraDMA/33 HDD
10.0 GB Maxtor UltraDMA/33 HDD
Cambridge SoundWorks FourPointSurround FPS2000 Digital
3Com OfficeConnect 10/100 Ethernet card
3 Microsoft SideWinder Gamepads
Labtec AM-252 Microphone
Promise IDE Controller card
Hauppauge WinTV-Theatre Tuner Card
I remain a bit mystified about why it would be a hard maximum, though. Did such motherboards prevent the user from installing 4x256 MiB for a cool 1 GiB of DRAM? Was the OS having trouble addressing or utilizing it all? 640 MiB is not the sort of mathematical maximum I was familiar with from the late 1990s. 4 GiB is obviously your upper limit with a 32-bit address bus... and again, if 640 MiB were installed, that leaves only 2 free bits on that bus.
So I'm still a little curious about this number being dropped in the article. More info would be enlightening! And thank you for speaking up to correct me! No wonder my comment was downvoted!
Now we have Vulkan. Vulkan standardizes some things, but has a huge number of options because hardware design decisions are exposed at the Vulkan interface. You can transfer data from CPU to GPU via DMA or via shared memory. Memory can be mapped for bidirectional transfer, or for one-way transfer in either direction. Such transfers are slower than normal memory accesses. You can ask the GPU to read textures from CPU memory because GPU memory is full, which also carries a performance penalty. Or you can be on an "integrated graphics" machine where CPU and GPU share the same memory. Most hardware offers some, but not all, of those options.
This is why a lot of stuff still uses OpenGL, which hides all that.
(I spent a few years writing AutoCAD drivers for devices now best forgotten, and later trying to get 3D graphics to work on PCs in the 1990s. I got to see a lot of graphics boards best forgotten.)
> “This was the most brilliant thing on the planet. It was our secret sauce. If we missed a feature or a feature was broken, we could put it in the resource manager and it would work.”
Absolutely brilliant. Understand the strengths and weaknesses of your tech (slow/updateable software vs fast/frozen hardware) then design the product so a missed deadline won’t sink the company. A perfect combo of technically savvy management and clever engineering.
> Because Nvidia became one of the most valuable companies in the world, there are now two books explaining its rise and extolling the genius of Jensen Huang,
Yeah, he's a real genius. (Sarcasm.) He's a marketing guy; there is no genius behind this man.
The fact that Nvidia uses its market position to harm the industry by strong-arming partners into toeing the line makes this company a problem, just like all the others. They operate like any other predatory corporation.
pjmlp•5h ago
Back then I was quite p***d at not being able to keep the Voodoo; little did I know how it was going to turn out.