
The Dawn of Nvidia's Technology

https://blog.dshr.org/2025/05/the-dawn-of-nvidias-technology.html
80•wmf•4h ago

Comments

pjmlp•4h ago
I wanted to buy a Voodoo card, but due to a PCI version incompatibility I had to trade it in for a Riva TNT.

Back then I was quite p***d at not being able to keep the Voodoo; little did I know how it was going to turn out.

stewarts•1h ago
We hold you singularly responsible for the eventual failure of the Voodoo3/4/5 and Nvidia domination.
2OEH8eoCRo0•3h ago
No mention of SGI.
wmf•3h ago
AFAIK the SGI-Nvidia connection was after DSHR's time.
rjsw•3h ago
I still have a NV1 card.
mrandish•2h ago
This kind of retrospective from key people who were involved is invaluable from a historical perspective. I find hearing first-hand accounts of the context, assumptions, thought processes, internal debates, technical limitations, business realities and even dumb luck a good way not only to understand how we got here but also how to do as well (or better) going forward.

While the nitty-gritty detail of recollections captured while still fresh in memory can be fascinating, I especially appreciate reflections written a few decades later: the distance puts the outcomes of key decisions in perspective and generally allows franker assessments, with fewer business and personal concerns in the way.

rhdjsjebshjffn•1h ago
I'm excited about this too, but it's a little concerning that there's a brand in the title. There's no shortage of those to interview from ATI, Intel, AMD, Apple, IBM, the game gaggle, etc. The fact that Nvidia succeeded where others failed is largely an artifact of luck.
jjtheblunt•23m ago
Nvidia’s Cg language made developers prefer their hardware, I’d say.
killme2008•2h ago
Really fascinating story—thanks for sharing! Graphics programming has been a major driving force behind the widespread adoption of object-oriented programming, and the abstraction of devices in this context is truly elegant.
hackyhacky•2h ago
> At a time when PC memory maxed out at 640 megabytes,

Pretty sure the author meant to write 640 kilobytes.

rjsw•2h ago
Maybe that is what they were thinking, but anything designed to work with a PCI bus would have been introduced after PCs became capable of using more memory than that.
usefulcat•1h ago
It's hard to tell exactly what time frame the author is referencing there. For context, NV1 was released in '95, by which time it was not uncommon for a new PC to have 8-16 MB of memory (I had a 486 with 16 MB by '94). Especially if you planned to use it for gaming.
npalli•1h ago
The sentence and surrounding paragraph make it clear that this was megabytes and not kilobytes:

> At a time when PC memory maxed out at 640 megabytes, the fact that the PCI bus could address 4 gigabytes meant that quite a few of its address bits were surplus. So we decided to increase the amount of data shipped in each bus cycle by using some of them as data. IIRC NV1 used 23 address bits, occupying 1/512th of the total space. 7 of the 23 selected one of the 128 virtual FIFOs, allowing 128 different processes to share access to the hardware. We figured 128 processes was plenty.
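
The arithmetic in that quote checks out; here's a minimal sketch of it in C (my own illustration, not code from the article):

    #include <stdio.h>

    int main(void) {
        /* 32-bit PCI addressing gives a 4 GiB space; a 23-bit aperture is 8 MiB. */
        unsigned long long pci_space = 1ULL << 32;
        unsigned long long aperture  = 1ULL << 23;

        /* 8 MiB is 1/512th of 4 GiB, and 7 of the 23 bits pick one of 2^7 FIFOs. */
        printf("aperture share: 1/%llu\n", pci_space / aperture);  /* prints 512 */
        printf("virtual FIFOs: %u\n", 1u << 7);                    /* prints 128 */
        return 0;
    }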

AStonesThrow•1h ago
Okay but "640" is a completely fictitious number for installed RAM in any given PC.

PC memory was nearly always sold in powers of two. So you could have SIMMs in capacities of 1, 2, 4, 8, or 16 MiB. You could usually mix-and-match these memory modules, and some PCs had 2 slots, some had 4, some had a different number of slots.

So if you think about 4 slots that can hold some sort of maximum, we're thinking 64MiB is a very common maximum for a consumer PC, and that may be 2x32 or 4x16MiB. Lots of people ran up against that limit for sure.

640MiB is an absurd number if you think mathematically. How do you divide that up? If 4 SIMMs are installed, then their capacity is 160MiB each? No such hardware ever existed. IIRC, individual SIMMs were commonly maxed at 64MiB, and it was not physically possible to make a "monster memory module" larger than that.

Furthermore, while 64MiB requires 26 bits to address, 640MiB requires 30 address bits on the bus. If a hypothetical PC had 640MiB in use by the OS, then only 2 pins would be unused on the address bus! That is clearly at odds with their narrative that they were able to "borrow" several more!

This is clearly a typo and I would infer that the author meant to write "64 megabytes" and tacked on an extra zero, out of habit or hyperbole.

chadaustin•7m ago
You are straight up wrong. The first computer I ever built was a Pentium II with a Riva TNT, and it had 640 MB of RAM.

I can’t find the purchase receipts or specific board brand but it had four SDRAM slots, and I had it populated with 2x64 and 2x256.

artyom•1h ago
This reads as one of the many engineering marvel stories (e.g. Bell Labs, Xerox) where revolutionary technology is created by a combination of (a) clever engineers with enough "free" time, and (b) no clueless managers around.
whyowhy3484939•35m ago
You can read in Kernighan's history of Unix that really good managers - "enlightened management", IIRC - were involved, and not just involved: some of them were absolutely crucial, or Unix wouldn't have existed. It's not like you can just let loose a couple of big brains and things will work out fine. They won't (and didn't).
jacobgorm•1h ago
I remember sitting next to David Rosenthal at a conference reception in San Jose some time around 2010 or 2011 (it must have been FAST, which makes sense given his involvement with LOCKSS), not knowing up front who he was. He explained some of the innovations he had made at NVIDIA to make the hardware more modular and easier for parallel teams to work on, and we chatted about the rumors I had heard about Sun considering licensing the Amiga hardware, which he confirmed, but said it would have been a bad idea because the hardware didn't support address-space protection. I guess I didn't know enough about him or NVIDIA to be sufficiently impressed at the time, but he was a very friendly and down-to-earth person.
Animats•14m ago
That's from the period when there was no standardization of how the CPU talked to the graphics device. Triangles or quads? Shared memory or command queues? DMA from the CPU side or the graphics device side? Graphics as part of the CPU/memory system or as part of the display system? Can the GPU cause page faults which are serviced by the virtual memory system?

Now we have Vulkan. Vulkan standardizes some things, but has a huge number of options because hardware design decisions are exposed at the Vulkan interface. You can transfer data from CPU to GPU via DMA or via shared memory. Memory can be mapped for bidirectional transfer, or for one-way transfer in either direction. Such transfers are slower than normal memory accesses. You can ask the GPU to read textures from CPU memory because GPU memory is full, which also carries a performance penalty. Or you can be on an "integrated graphics" machine where CPU and GPU share the same memory. Most hardware offers some, but not all, of those options.

This is why a lot of stuff still uses OpenGL, which hides all that.
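
To make that menu concrete, here's a hedged sketch (assuming only the standard Vulkan loader and headers, taking the first device the loader reports, and doing minimal error handling) that just prints which memory types a given GPU actually exposes - DEVICE_LOCAL, HOST_VISIBLE, or both:

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        /* Minimal instance, no extensions or validation layers. */
        VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance instance;
        if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) return 1;

        /* Take the first physical device reported by the loader. */
        uint32_t count = 1;
        VkPhysicalDevice gpu;
        if (vkEnumeratePhysicalDevices(instance, &count, &gpu) < 0 || count == 0) return 1;

        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(gpu, &mem);

        /* Each memory type is some combination of the options described above. */
        for (uint32_t i = 0; i < mem.memoryTypeCount; i++) {
            VkMemoryPropertyFlags f = mem.memoryTypes[i].propertyFlags;
            printf("type %2u (heap %u):%s%s%s\n", i, mem.memoryTypes[i].heapIndex,
                   (f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)  ? " DEVICE_LOCAL"  : "",
                   (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)  ? " HOST_VISIBLE"  : "",
                   (f & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) ? " HOST_COHERENT" : "");
        }

        vkDestroyInstance(instance, NULL);
        return 0;
    }

On integrated graphics you'll typically see types that are both DEVICE_LOCAL and HOST_VISIBLE; on a discrete card they're mostly separate heaps, which is exactly the divergence described above.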

(I spent a few years writing AutoCAD drivers for devices now best forgotten, and later trying to get 3D graphics to work on PCs in the 1990s. I got to see a lot of graphics boards best forgotten.)

saltcured•4m ago
And that was an evolution of earlier 2D cards, where you had a potential mixture of a CPU-addressable framebuffer and various I/O ports to switch between text and raster graphics modes, adjust video modes in the DACs, adjust color-palette lookup tables, load fonts for text modes, and maybe address some 2D coprocessor for things like "blitting" (kind of like rectangular 2D DMA), line drawing, or even some basic polygonal rendering with funny options like dithering or stipple shading...
cadamsdotcom•4m ago
> all an application could do was to invoke methods on virtual objects .. the application could not know whether the object was implemented in hardware or in the resource manager's software. The flexibility to make this decision at any time was a huge advantage. As Kim quotes Michael Hara as saying:

> “This was the most brilliant thing on the planet. It was our secret sauce. If we missed a feature or a feature was broken, we could put it in the resource manager and it would work.”

Absolutely brilliant. Understand the strengths and weaknesses of your tech (slow/updateable software vs fast/frozen hardware), then design the product so a missed deadline won't sink the company. A perfect combo of technically savvy management and clever engineering.
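
The pattern being praised is essentially method dispatch through a table the resource manager controls. A toy sketch in C (invented names, nothing from NV1's actual code) of how a per-method hardware/software decision can hide behind the same call:

    #include <stdio.h>

    typedef struct Object Object;
    typedef void (*Method)(Object *self, int arg);

    struct Object {
        Method methods[8];   /* filled in by the "resource manager" at object creation */
    };

    /* Fast path: the feature exists in silicon, so push the call to the hardware FIFO. */
    static void draw_hw(Object *self, int arg) {
        (void)self;
        printf("hw: method queued with arg %d\n", arg);
    }

    /* Fallback: the feature is missing or broken, so emulate it in driver software.
       The application calls the same slot and never knows the difference. */
    static void draw_sw(Object *self, int arg) {
        (void)self;
        printf("sw: method emulated with arg %d\n", arg);
    }

    int main(void) {
        int feature_in_hardware = 0;   /* imagine this feature missed the tape-out */
        Object obj;
        obj.methods[0] = feature_in_hardware ? draw_hw : draw_sw;

        obj.methods[0](&obj, 42);      /* the application's only interface to the chip */
        return 0;
    }

Swapping one function pointer is the whole "put it in the resource manager and it would work" move the quote describes.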
