frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
193•theblazehen•2d ago•56 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
678•klaussilveira•14h ago•203 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
954•xnx•20h ago•552 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
125•matheusalmeida•2d ago•33 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
25•kaonwarb•3d ago•21 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
62•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
235•isitcontent•15h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
227•dmpetrov•15h ago•121 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
38•jesperordrup•5h ago•17 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•17h ago•145 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
499•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
384•ostacke•21h ago•96 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•183 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
21•speckx•3d ago•10 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
291•eljojo•17h ago•182 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
413•lstoll•21h ago•279 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
6•matt_d•3d ago•1 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
20•bikenaga•3d ago•10 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
66•kmm•5d ago•9 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
93•quibono•4d ago•22 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
260•i5heu•17h ago•202 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
38•gmays•10h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1073•cdrnsf•1d ago•458 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
60•gfortaine•12h ago•26 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
291•surprisetalk•3d ago•43 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•71 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
8•1vuio0pswjnm7•1h ago•0 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
154•SerCe•10h ago•144 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
187•limoce•3d ago•102 comments

How Quake.exe got its TCP/IP stack

https://fabiensanglard.net/quake_chunnel/index.html
528•billiob•2mo ago

Comments

afandian•2mo ago
> I didn't work on the Chunnel. That was mainly a British guy named Henry

The British guy named Henry might have named it after another feat of engineering completed around the same time.

https://en.wikipedia.org/wiki/Channel_Tunnel

torh•2mo ago
Or from the Seinfeld episode "The Pool Guy" (Aired November 1995) which had a fictional movie called "Chunnel" -- probably based on the very same channel tunnel.
ralferoo•2mo ago
From Google (AI slop at top of search results): "Chunnel" is not a real movie but a fictional film from the TV show Seinfeld. It is depicted as a disaster movie about an explosion in the Channel Tunnel...

Weird hearing that name now though. Around that time, everybody referred to it as the "Chunnel", but I don't think I've heard it as anything but the "Channel Tunnel" since maybe 2000. I suspect even that usage is now limited to only taking cars on the train from Folkestone. Every time I've travelled on it as a regular passenger from London, it's just been referred to as the Eurostar without any mention of the tunnel at all.

afandian•2mo ago
Yes, it's definitely a word from a 1990s geography textbook.
sroussey•2mo ago
I was half expecting something about how to get TCP into Windows, but this is Win95, where they shipped it inside the OS and put out of business a company that used to sell exactly that.
jahnu•2mo ago
I used to use Trumpet Winsock with Windows 3.1

https://en.wikipedia.org/wiki/Trumpet_Winsock

hypercube33•2mo ago
Didn't Win95 get tcp from FreeBSD?
LeoPanthera•2mo ago
That was Windows 2000.
hypercube33•2mo ago
No, I'm fairly certain that Berkeley sockets were used as a foundation to integrate a full network stack under Winsock so people wouldn't have to go buy things like Trumpet (Windows 3.1). You could coax out messages saying as much from the command line, but Google is failing me (I'm sure most of this stuff is on Usenet, which no one seems to care about these days).
zweifuss•2mo ago
The history of the Windows TCP/IP stack most likely went like this:

IBM (NetBEUI, no TCP/IP) -> Spider TCP/IP Stack + SysV STREAMS environment -> MS rewrite 1 (early NT, Winsock instead of STREAMS) -> MS rewrite 2 (make win2000 faster):

https://web.archive.org/web/20151229084950/http://www.kuro5h...

kalleboo•2mo ago
It's interesting how STREAMS pervaded everything for a short while (Apple's Open Transport networking stack for System 7.5 and up was also based on STREAMS) but everyone almost immediately wanted to get rid of it and just use Berkley sockets interfaces.
xbar•2mo ago
Berkeley, for disambiguation.
kalleboo•2mo ago
Oops, too late to edit my comment!
justsomehnguy•2mo ago
I still don't quite get how you were supposed to communicate with the other systems over the network with STREAMS.

With IP you have an address and the means to route the data to that address and back; with TCP/UDP sockets you have the address:port endpoint, so the recipient doesn't need to pass a received packet to all processes on the system, asking "is that yours?".

So if there is already some network stack providing both the addressing and the messaging...

fredoralive•2mo ago
STREAMS isn't a networking protocol, it's an internal data-routing thing some UNIXes use locally and, amongst other things, use to implement the network stack.

You'd still be talking about stuff like IP addresses and the like with it. Probably with the XTI API instead of BSD sockets, which is a bit more complex, but you need the flexibility to handle network stacks other than just TCP/IP, like erm…
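To make that concrete, here's a rough sketch of the XTI/TLI flavour of code you'd write against a STREAMS-based stack (my own example, assuming a Solaris-style stack exposing /dev/tcp, error handling omitted; not from the article). Note the address is still just an ordinary IP:port packed into a netbuf:

    #include <tiuser.h>    /* TLI; XTI uses <xti.h> with the same t_*() calls */
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>

    int xti_connect_example(void)
    {
        /* open the transport provider's STREAMS device instead of socket() */
        int fd = t_open("/dev/tcp", O_RDWR, NULL);
        t_bind(fd, NULL, NULL);                    /* let the stack pick a port */

        struct t_call *call = (struct t_call *)t_alloc(fd, T_CALL, T_ADDR);
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(80);
        sin.sin_addr.s_addr = inet_addr("192.0.2.1");

        memcpy(call->addr.buf, &sin, sizeof sin);  /* same addressing as sockets */
        call->addr.len = sizeof sin;

        t_connect(fd, call, NULL);                 /* then t_snd()/t_rcv() to talk */
        return fd;
    }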

justsomehnguy•2mo ago
https://web.archive.org/web/20060716013234/http://www2.linux...

Sadly no images.

EvanAnderson•2mo ago
The story with the Windows NT IP stack is nuanced, but it wasn't just lifted from BSD: https://web.archive.org/web/20051114154320/http://www.kuro5h...
pak9rabid•2mo ago
Win95 had its own stack, codenamed Wolverine
ogurechny•2mo ago
The turbulent times and the breakneck speed of computer development need to be taken into account. Not long before that, computer networks were strictly corporate things installed by contractors choosing hardware, driver and software suppliers suitable for the tasks performed by employees or students, and someone who installed one at home was the same kind of nerd who would drag an engine home from work to tinker with in his room. Non-business-oriented software rarely cared about third-party network functions. Then the network card became a consumer device, and a bit later it became integrated and expected.

Also, Windows did not install TCP/IP components on computers without a network card (most of them, until the Millennium era); it was an optional component. You could not "ping" anything, as there was no ping utility, nor the libraries it could call. In that respect, those network-less Windows systems were not much different from network-less DOS systems. The installer probably still works that way (or can be made to, by excluding some dependencies), but it's hard to find hardware without any network connectivity today. I wonder what the Windows 11 installer does when there is no network card to phone home...

franga2000•2mo ago
> I wonder what Windows 11 installer does when there is no network card to phone home...

One of "works fine", "needs a command line trick to continue" or "refuses to work completely" depending on which specific edition of win11 your computer has been afflicted with.

ggambetta•2mo ago
I learned to solder as a pre-teen so I could make a nullmodem :) Then I learned that resistors were a thing when I made a parallel port sound card (this thing https://en.wikipedia.org/wiki/Covox_Speech_Thing). Fun times!
d3Xt3r•2mo ago
I wasn't allowed a soldering iron as a kid, so I ended up just chopping and splicing a regular serial cable and turned it into a null modem, all so that I could play OMF2097 with my friends without having to share the same keyboard (we would always fight over right side, which defaulted to using the arrow keys for movement - and so the person who got the right side generally had the advantage, as back then arrow keys were the default movement keys, unlike these days where WASD is default.)
jacquesm•2mo ago
I wasn't allowed one either so I soldered with a screwdriver heated up on the gas stove when my parents weren't home...
ggambetta•2mo ago
That's pretty hardcore, respect :)
jacquesm•2mo ago
It also taught me valuable lessons about hardening.
giantrobot•2mo ago
Your parents: A soldering iron is dangerous!

You: I'll show you!

pixl97•2mo ago
I am not in danger.

I am the danger.

[Sticks glowing hot screw driver in molten lead]

orthecreedence•2mo ago
"We need laws to keep children away from soldering irons"

Later that day...

khafra•2mo ago
Shared-keyboard OMF 2097 also had an overwhelming advantage for the first mover, since most keyboards had 2-3 key rollover--if you hit wd to jump forward, your opponent had to be fast to do anything before you hit your attack key.
ukuina•2mo ago
I wonder if OpenOMF has the same limits.
c-hendricks•2mo ago
It's a keyboard thing and less of a software thing.
Reason077•2mo ago
This must have been around the same time (1993 or so) when many organisations were upgrading old coax 10Base2 network equipment to modern 10BaseT (and eventually 100BaseT). My friends and I, strongly motivated by the incentive of being able to play multiplayer DOOM, managed to source some free ISA 10Base2 Ethernet cards and coax cable and T-connectors from someone's Dad. The only thing we were missing was the terminators, which you could make yourself by cutting a coax cable and soldering a resistor between the conductors... fun introduction to LAN technology for us!
lll-o-lll•2mo ago
Really fun times. I “learned” to solder around that time and age also. Playing Mod files through a DIY version of that “thing” piped into a portable stereo speaker was awesome.

Years later I learned what flux was, and soldering became quite a bit better and easier.

qingcharles•2mo ago
That parallel port sound card was my primary sound card for a long time. I bought a bunch of full sized resistors from Maplin and soldered them all as janky as a kid can with huge blobs of solder, but it worked perfectly from day one.
burnt-resistor•2mo ago
Nice. I learned on a DE-9 cable making an HP-48 cable from an internal CD-ROM analog cable. I was such a poor student cliché that I used Scotch tape instead of electrical tape to ensure the RX, TX, and GND lines didn't short.

There wasn't one kind of null modem cable, per se, there were serial and parallel null modem cables.

Originally, there were null modem (serial) adapters that worked with straight through cables but that got expensive, awkward, and complicated. A universal serial null modem cable had a pair of "DB-9" DE-9 female and DB-25 female connectors on both ends so it would work with either system having either type of connector.

A parallel null modem cable had DB-25 male connectors on both ends.

http://www.nullmodem.com/LapLink.htm

Both were used with LapLink and often interchangeably called "LapLink cables" too because the boxed version of LapLink included both cables.

nikanj•2mo ago
” It is impressive to see Quake run at full speed knowing that Windows 95 runs DOS executable in a virtual machine. My guess is that, in full screen, memory writes and reads to the VGA are given direct access to the hardware to preserve performances”

Virtual x86 mode had little to do with what we nowadays think of when someone says ”virtual machine”

skywal_l•2mo ago
Not entirely related, but Quake did have a VM, executing scripts written in QuakeC[0] which would drive the AI, game events, etc.

[0]: https://en.wikipedia.org/wiki/QuakeC

acchow•2mo ago
But your link says that QuakeC was a compiled language
vkazanov•2mo ago
QuakeC was compiled into QuakeVM bytecode, which made all mods and logic portable between platforms without having to recompile things every time, unlike what had to be done for Quake 2 (which was 100% native code).

This hurt performance a bit but in the longer term benefited the modding scene massively.

Luc•2mo ago
Compiled to bytecode.
FartyMcFarter•2mo ago
It's compiled into bytecode, so it still requires a VM / bytecode interpreter (whatever you want to call it).
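For anyone who hasn't written one: "compiled to bytecode" still means a dispatch loop runs at runtime. A minimal sketch (my own toy opcode set, nothing to do with the real progs.dat format):

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(prog);    /* prints 5 */
        return 0;
    }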
hapless•2mo ago
Arguably it had a great deal to do with what we think of as a "virtual machine."

Virtual 8086 remapped all opcodes and memory accesses to let a program pretend to be on a single, real-mode 8086 when in reality it was one of many programs running in protected mode on a newer chip

AMD-V and whatever the Intel counterpart is do almost exactly the same thing: re-map all ia32 and amd64 instructions and memory accesses to let a program pretend to be doing ring 0 stuff on an x86 box, when in reality it is one of many programs running with fewer privileges on a newer chip.

There are a few more wrinkles in the latter case -- nested page tables, TLB, etc -- but it is the same idea from the viewpoint of the guest.

skrebbel•2mo ago
Random drive-by nitpick:

> From the beginning of the development, id had requested from djgpp engineers that their DPMI client would be able to run on djgpp's DPMI server but also Windows 95 DPMI server.

I'm pretty sure that "DJGPP engineers" is just one guy, DJ Delorie. DJGPP was always open source so I bet he got some contributors, but if the rest of this sentence is true that "id has requested from djgpp engineers", it just means they asked the maker of an open source tool they used to please add a feature. I wonder whether they paid him for it or whether DJ just hacked it all in at id's request for kicks. His "about me" page suggests he does contracting, so it might be the latter.

DJGPP was spectacularly good back in the day. I didn't appreciate at the time what a monumental effort it must have been to port the entire GCC toolchain and runtime to DOS/Windows. Hats off to DJ Delorie!

maybewhenthesun•2mo ago
Amen to that!

I think I remember there was some communication between id and Charles Sandmann about CWSDPMI, so even though it's worded a bit strangely for an open source project, there's probably some truth in it?

Also a bit strange how the author is surprised about Quake running in a 'VM'; apparently they don't really know about VM86 mode in x86 processors...

trollbridge•2mo ago
DPMI clients don’t run in a VM, though. They’re just a normal task like any other task / process in Windows.
toast0•2mo ago
So... Win32 runs in virtual mode. In 2025, we don't think of that as a Virtual Machine, but it totally is. Hardware access is trapped by the CPU and processed by the OS/DPMI server.
fredoralive•2mo ago
No, in 386 mode 3.x and 9x the System VM and other DPMI clients run in protected mode.

Virtual 8086 mode, as its name somewhat suggests, only runs real mode code.

trollbridge•2mo ago
It's not a virtual machine. There is no hardware machine that presents itself to function like a protected-mode ring 3 task on an 80286 or an 80386 functions. And the 286 and 386 both lacked a "virtual 286" and "virtual 386" mode (although it would have been almost-trivial for them to support it; Intel just decided not to, probably figuring it wasn't important).

Virtual 8086 mode, on the other hand, does behave exactly like a real 8086, which otherwise would have been very slow to implement on a real 386, and was either very slow on a 286 or impossible due to the 286 having some errata that prevented normal virtualisation techniques (the 286 had some non-restartable exceptions).

VogonPoetry•2mo ago
VM in this usage means Virtual Memory - i.e. with page tables enabled. Two "processes" can use the same memory addresses and they will point to different physical memory. In Real Address mode every program has to use different memory addresses. VM86 mode lets you have several Real Mode programs running, but using Virtual Memory.
oso2k•2mo ago
VM does not mean Virtual Memory in this context. VM does mean Virtual Machine. When an OS/DPMI Server/Supervisor/Monitor provides an OS or program a virtual interface to HW interrupts, IO ports, SW interrupts, we say that OS or program is being executed in a Virtual Machine.

For things like Windows 3.x, 9x, OS/2, CWSDPMI, DOS/4G (DPMI & VCPI), Paging & Virtual Memory was an optional feature. In fact, CWSDPMI/djgpp programs had flags (using `CWSDPR0` or `CWSDPMI -s-` or programmatic calls) to disable Paging & Virtual Memory. Also, djgpp’s first DPMI server (a DOS extender called `go32`) didn’t support Virtual Memory either but could sub-execute Real Mode DOS programs in VM86 mode.

http://www.delorie.com/djgpp/v2faq/faq15_2.html
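For reference, roughly what those "programmatic calls" look like on the djgpp side (a sketch from memory of the DJGPP docs; the exact flag names should be checked against <crt0.h>):

    /* Ask the DJGPP runtime to lock all memory it allocates, which in
       effect disables paging/virtual memory under CWSDPMI. */
    #include <crt0.h>

    int _crt0_startup_flags = _CRT0_FLAG_LOCK_MEMORY;

    int main(void)
    {
        /* ... time-critical code that must never be paged out ... */
        return 0;
    }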

VogonPoetry•2mo ago
I agree that my comment about VM was imprecise and inaccurate.

I do dispute your assertion that virtual memory was "disabled". It isn't possible to use V86 mode (what the Intel docs call it) without having a TSS, GDT, LDT and IDT set up. Being in protected mode is required. Mappings of virtual to real memory have to be present. Switching in and out of V86 mode happens from protected mode. Something has to manage the mappings or have set them up.

Intel's use of "virtual" for V86 mode was cursory - it could fail to work for actual 8086 code. This impacted Digital Research. And I admit my experiences are mostly from that side of the OS aisle.

I did go back and re-read some of [0] to refresh some of my memory bitrot.

[0] https://www.ardent-tool.com/CPU/docs/Intel/386/manuals/23098...

trollbridge•2mo ago
Slight nitpick: you could fire up V86 mode without any LDT entries.

It's also possible to run Virtual 8086 mode without paging enabled, and when in Virtual 8086 mode, the 386 doesn't care about what's in the LDT/GDT (unless an interrupt happens). In practice this is never done because the Virtual 8086 task would only be able to use the bottom 1MB of physical memory.

trollbridge•2mo ago
Slight nitpick: OS/2 2.x+ did not have a way to disable paging, although you could disable virtual memory in any version of OS/2 by simply setting MEMMAN=NOSWAP.

On Windows 3.x, paging and swapping was optional - if you started it in 286 ("standard") mode. On Windows 95, paging is not optional, and it's not optional in Windows 3.11 for Workgroups either.

oso2k•2mo ago
Only if they never call DOS or the BIOS or execute a Real Mode software interrupt. When they do, they ask the DPMI server (which could be an OS like Windows 9x, or CWSDPMI) to make the call on their behalf. In doing so, the DPMI server will temporarily enter a VM86 Virtual Machine to execute the Real Mode code being requested.

http://www.delorie.com/djgpp//doc/libc-2.02/libc_220.html
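From the program's point of view that hand-off is a single call. A small example using djgpp's documented __dpmi_int() (my example, not from the article), asking DOS for the system date via INT 21h:

    #include <dpmi.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        __dpmi_regs r;
        memset(&r, 0, sizeof r);
        r.h.ah = 0x2A;            /* DOS INT 21h, AH=2Ah: get system date */
        __dpmi_int(0x21, &r);     /* DPMI server switches to a real-mode
                                     (VM86) context, runs the interrupt,
                                     and returns here in protected mode */
        printf("%04d-%02d-%02d\n", r.x.cx, r.h.dh, r.h.dl);
        return 0;
    }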

trollbridge•2mo ago
But that's not the DPMI client running in a VM. That's the host ending up running in a VM.

Note that in a machine with EMM386, the machine is normally running DOS inside an 8086 VM... and the only reason it would switch out of that VM is when you fire up a DPMI client.

zoeysmithe•2mo ago
I think if you're relatively young it's hard to know computing history. It's oddly older than one thinks, even concepts that are seen as new. It's sometimes interesting to see people learn about BBSs, which flourished 40 years ago.
bombcar•2mo ago
It's a bit surprising because this is the author of the DooM Black Book and they know the underpinnings pretty well.

However, the differences between a DOS VM under Windows 9x, a Windows command prompt, and a Win32 program started from DOS are all very esoteric and complicated (even Raymond Chen has been confused about implementation details at times).

db48x•2mo ago
Is the author surprised by that, or did you just misread it? The only relevant quote on that page that I see is “It is impressive to see Quake run at full speed knowing that Windows 95 runs DOS executable in a virtual machine.”

He is perhaps surprised that it runs _at speed_ in the VM, not that it runs in the VM which he already knows about.

maybewhenthesun•2mo ago
I mean VM86 is not really a VM in the modern sense of the word. And the author doesn't seem to know.
rasz•2mo ago
That, and the fact that Quake doesn't run in VM86 mode; it explicitly runs in _protected_ mode.
maybewhenthesun•2mo ago
That as well, yes.
eknkc•2mo ago
Completely off topic:

So I just took a look at DJ's website and he has a college transcript there. Something looked interesting.

Apparently he passed a marksmanship PE course in his first year. Is that a thing in the US? I don't know, maybe it's common and I have no idea. I'd love to have had a marksmanship course while studying computer science, though.

Cthulhu_•2mo ago
I wouldn't be surprised if it's a pretty normal thing in a few countries or regions in the world. Marksmanship and archery are also olympic sports.
le-mark•2mo ago
It would be an easy “A” for a lot of people in the US!
0x457•2mo ago
Yeah, in Russia, even though everything is decided for you once you've selected your major, PE classes are still yours to choose. Competition to get in was crazy too, none of that "first come, first served" - swimming only accepted the top N students, and table tennis held a tournament-style competition (I went there with two friends and I had to play against both of them).

US colleges still have far more options, though.

simiones•2mo ago
US colleges have a very open curriculum, where you have wide leeway in what classes you actually take, especially in the early years of study. If you're coming from more European-style universities, this is vastly different to the relatively rigid course set you'd take (with a few electives here and there).
mortehu•2mo ago
US colleges last one year longer, and the first year is more academically similar to the last year of high school in Europe.
kbolino•2mo ago
It's definitely not common. My US university required 2 physical education classes, but only if you were under 30 and hadn't served in the military. They may have offered marksmanship, but I just took running and soccer (aka football). The classes were graded pass-fail and didn't even count for academic credit.
stronglikedan•2mo ago
We have myriad available "electives" that contribute towards our degrees. I have college credit for "bowling and billiards" and "canoeing and kayaking".
expressadmin•2mo ago
MIT offers a Pirate Certificate: https://physicaleducationandwellness.mit.edu/about/pirate-ce...
marpstar•2mo ago
I took an 8-week, 1-credit badminton course to fulfill my PE requirements. I wouldn't be surprised to find a marksmanship course.
cardiffspaceman•2mo ago
My college required its graduates to pass a minimal swimming test. Just enough swimming ability to give a potential rescuer some extra time to effect the rescue, rather than have us go straight to the bottom of the sea. We all took a test in the first week or so. Those who failed had to take a course and retake the test.
toast0•2mo ago
I needed one PE credit to get a degree from my community college. My school didn't offer marksmanship, but I would imagine it would fit into PE, archery certainly would and there's synergy. I took Table Tennis to graduate. I don't think my engineering school where I got my BS required Physical Education though.
kevin_thibedeau•2mo ago
My high school had some marksmanship trophies in their case dating back to the 70s. Responsible gun ownership was a real thing when a sizable portion of the male population were veterans.
kelnos•2mo ago
As a sibling poster mentions, US universities often have a very open curriculum. At my university, I got PE credit for classical fencing!

The marching band could also count for PE credit. I believe you could only get credit for one semester for it, and my university required two semesters of PE, so I gave fencing a try, having never fenced before.

gpderetta•2mo ago
It was great indeed. DJGPP is how I learned to program.
bluedino•2mo ago
Would love to see some interviews etc with DJ if he's up for it
ddtaylor•2mo ago
Same. I only have experience from M.U.G.E.N fighting engine with respect to DJGPP.
EGreg•2mo ago
I remember back in the day using DJGPP (DJ Delorie) with the Allegro library (Shawn Hargreaves), building little games that compiled and ran on Windows and other OSes, and being part of the community.

You can still play the little game I made in under 10K for the Allegro SizeHack competition in 2000: https://web.archive.org/web/20250118231553/https://www.oocit...

Back then I was also writing a bunch of articles on game development: https://www.flipcode.com/archives/Theory_Practice-Issue_00_I...

Was anyone on HN active around that time? :) Fun time to be hacking!

jasomill•2mo ago
I was, but my application was less fun: porting Perl code from Windows NT to MS-DOS to integrate with software that required direct hardware access to a particular model of SCSI card.

Worked great, and saved a bunch of time vs writing a VDD to enable direct hardware access from NTVDM or a miniport driver from scratch.

IIRC, the underlying problem was that none of the NT drivers for any of the cards we'd tested were able to reliably allocate enough sufficiently contiguous memory to read tapes with unusually large (multi-megabyte) block sizes.

olau•2mo ago
Yes. DJGPP and Allegro were a great help, and a big step up from the old Borland Turbo Pascal I started out with. I remember trying to rotate an image pixel by pixel in Pascal; Allegro simply had a function to do it. And yes, the mailing list was great - Shawn Hargreaves and the couple of people in the inner circle (I seem to remember someone called George) were simply awesome, helpful people.

I eventually installed Red Hat, started at university and lost most of my free time to study projects.

EGreg•2mo ago
George Foot :)
krajzeg•2mo ago
I was quite active in the Allegro community around that time, mostly on the allegro.cc forums - but I was still a 14-year-old learning the ropes back then. Missed out on DJGPP; it was already MinGW under Windows for me.

I took part in a few of the later Speedhacks, around 2004-2005, I think?

Allegro will always have a warm place in my heart, and it was a formative experience that still informs how I like to work on games.

EDIT: Hah, actually found my first Speedhack [1]! Second place, not bad! Interestingly, the person who took first place (Rodrigo Braz Monteiro) is also a professional game developer now.

[1]: https://web.archive.org/web/20071101091657/http://speedhack....

ddtaylor•2mo ago
I was on BlitzCoder.
djdelorie•2mo ago
Back then, DJGPP was a much bigger group, and most of the Quake kudos go to Charles Sandmann, author of cwsdpmi, who worked directly with Id to help them optimize their code for our environment.
skrebbel•2mo ago
Omg I’m totally starstruck now!

Purely out of curiosity, was that all a volunteer open source effort or was the entire DJGPP group acting as a consulting organization?

djdelorie•2mo ago
The DJGPP project and its contributors were 100% volunteer. I'm sure some of the contributors took advantage of their involvement to obtain consulting gigs on the side (I did ;) but DJGPP itself didn't involve any. The Quake work was a swap; we helped them with optimizing, and they helped us with near pointers. Win-win!
thatjoeoverthr•2mo ago
Thank you for this work. It was my first C compiler in 1998. Y'all helped me on the mailing list and you even replied to my e-mails! I was 11 and this was insane to me.
djdelorie•2mo ago
You're welcome!
jhgb•2mo ago
I too need to thank you for the very first C compiler I ever had access to in 1999, after 10 years of having a book on C in my possession that I couldn't use until then.
xtracto•2mo ago
Just passing by to thank you. As many others have mentioned, DJGPP was pivotal for my life. I compiled my first C/Allegro games in DJGPP back in the mid/late 90s.

And here you are!!

hhhAndrew•2mo ago
+1 DJGPP/Allegro key life experience on my parents' Windows machine, thank you!
ec109685•2mo ago
This article makes it seem like 1996 was ancient times. There was the internet then, browsers, Macs had had a TCP stack for a while by then, and Quake was an extremely advanced game.

Yeah, the DOS-to-Windows transition was a big deal, but it was a pretty ripe time for innovation.

hylaride•2mo ago
Yeah, but dial-up was slow, laggy, and what 95% of people used to access the internet in those days. Real-time gaming was not fun with anything that used it. I grew up in a rural area in the 1990s and was no match for people that started to get cable modems as time went on.
1718627440•2mo ago
Dial-up has better latency, since there is no packet switching. So it is slow, but not laggy.
ipython•2mo ago
Dialup has a ton of latency (100+ms), but little jitter.
toast0•2mo ago
If you're dialed up directly, you should be able to get a little bit better latency as you won't need IP, UDP, and PPP/SLIP headers; at modem bandwidth, header bytes add meaningful latency. But data transmission is still pretty slow, even with small packets.
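Back-of-envelope numbers (mine, not from the thread), just to size that header cost:

    #include <stdio.h>

    int main(void)
    {
        /* async serial sends ~10 bit times per byte (8 data + start/stop) */
        const double bps = 28800.0;
        const double ms_per_byte = 10.0 / bps * 1000.0;   /* ~0.35 ms/byte */
        const int header_bytes = 20 + 8;                  /* IPv4 + UDP    */
        printf("header overhead: ~%.1f ms per packet\n",
               header_bytes * ms_per_byte);               /* ~9.7 ms       */
        return 0;
    }

So the IP+UDP framing alone is on the order of 10 ms of wire time per packet, each way, before any modem buffering or compression latency.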
pak9rabid•2mo ago
Dialing-up a friend to play Quake, there essentially was no lag.

Dialing-up to the Internet to play Quake via TCP/IP...shit tons of lag (150+ ms).

giantrobot•2mo ago
You're using confusing terminology so you look very wrong. What you mean to say is direct modem-to-modem connections were not laggy because there was no packet switching. This is a true statement.

What the GP comment was talking about was dial-up Internet being most people's exposure to TCP/IP gaming in the 90s. That was most assuredly laggy. Even the best dial-up Internet connections had at least 100ms of latency just from buffering in the modems.

The QuakeWorld network stack was built to handle the high latency and jitter of dial-up Internet connections. The original Quake's network was fine on a LAN or fast Internet connection (e.g. a dorm ResNet) but was sub-par on dial-up.

kleiba•2mo ago
And then there was also this: https://superuser.com/questions/419070/transatlantic-ping-fa...
hylaride•2mo ago
> Dial-up has better latency, since there is no packet switching. So it is slow, but not laggy.

It was laggy as there was buffering and some compression (at least for later revisions of dial-up) that most definitely added latency.

bombcar•2mo ago
Even when people had dial-up, a huge majority were using portal-dialers like AOL or Compuserve, and it took extra steps to use those to "get" the Internet directly as opposed to within the walled garden.

And even then they'd often just use the bundled browser.

I remember the first friend who got a cable modem, that shit was insane compared to dial-up.

cubefox•2mo ago
In an interview with Lex Fridman, John Carmack said that in retrospect, Quake was too ambitious in terms of development time, as it both introduced network play and a fully polygonal 3D engine written in assembly. So it would have been better to split the work in two and publish a "Network Doom" first and then build on that with a polygonal Quake.

Which seems to imply that the network stack was about as difficult to implement as the new 3D engine.

bitwize•2mo ago
And then you had Romero saying that Quake wasn't ambitious enough...
pak9rabid•2mo ago
That's the difference between an engine and game developer.
cubefox•2mo ago
They theoretically had more than enough time for game design in the ~2 year development period, which was long for the time.
bitwize•2mo ago
It's the difference between someone working in a world of engineering tradeoffs and a fantasist imagining their next übergame with everything awesome in it.

I looked into the development cycle behind Daikatana, partly because it had its 25th anniversary this year and so for some reason, YouTube was recommending me Daikatana content. And... there's a reason why Romero's first dev team all quit. Daikatana started life as a 400-page document full of everything Romero found awesome at the time. There were going to be time travel mechanics and a roleplaying party system like Chrono Trigger. It was going to have like a hundred awesome weapons that totally reinvented how to make things go boom. It was going to have the sweetest graphics imaginable. Etc. It was like something Imari Stevenson would have written as a teenager, which is somewhat surprising since Romero could now call himself a seasoned industry professional.

What's worse, "Design is Law" basically meant "what I say goes". It was his job to have the ideas, and it was his team's job to implement them. Romero wanted to be the "idea guy", and Daikatana was an "idea guy" game. I doubt he had the maturity at the time to understand what design is, in terms of solving a problem with the tools and constraints you have. He wanted Daikatana, and Quake before that, to have everything, and didn't know how to pare it down to the essentials, make compromises, and most importantly, listen to his team. Maybe there's an alternate-universe Quake or Daikatana somewhere that's just a bit more ambitious than the Quake we got, incorporating more roleplaying elements into a focused experience. But in our timeline, Romero didn't want to make that game.

Of course, after taking the L on Daikatana's eventual release, Romero wised up and started delivering much more focused and polished experiences, learning to work within constraints and going a long way toward rehabilitating his reputation. But that's not where he was when he criticized Quake for not pushing the envelope enough.

cubefox•2mo ago
Yeah, Romero seemed too ambitious. I didn't play the original Daikatana, but the Game Boy Color port was surprisingly good. The basic 2D graphics of the system helped enforce technical simplicity. So an undisciplined game designer could actually go nuts (within the technical limitations of a tile-based 8-bit machine) without much risk of overwhelming programmers or artists.

However, the GBC game wasn't actually developed by Romero's team, but outsourced to the Japanese studio Kemco. Romero was involved, though I'm not sure how much.

Anyway, what made the GBC game unique is that it played like a linear story-driven first-person game, similar to Half-Life, just not in first-person.

The presentation was top-down, like Zelda Link's Awakening, but the world wasn't split into an overworld and dungeons, nor was it split into "levels". Instead you would just walk from location to location, where side paths would be blocked off, similar to Half-Life. On the way you were solving environmental (story related) puzzles, killing enemies, meeting allies, all while advancing the elaborate plot through occasional dialog scenes. It felt pretty modern, like playing an action thriller.

For some reason I never saw another 2D game which played like that. I assume one reason is that this type of story-driven action game was only invented with Half-Life (1998), at which point 2D games were already out of fashion on PC and home consoles. Though this doesn't explain why it didn't catch on for mobile consoles.

So in conclusion, I think Romero (his own studio) might have been better off developing ambitious 2D action adventures for constrained mobile consoles rather than trying the same on more challenging 3D hardware. It would have been a nice niche that no other team occupied at the time, to my knowledge.

bitwize•2mo ago
Well, that's kinda what he did. He formed a new studio, Monkeystone Games, and released the 2D top-down adventure, Hyperspace Delivery Boy, on Windows CE devices, which was pretty well received at the time. Like I said, he took the L pretty well and learned its lessons.

I've played Daikatana GBC. It's pretty good. Years back, Romero released the ROM on his web site as a free download. I suspect ION was pretty involved, at least from a standpoint of making sure the story unfolded more or less as it did in the main game.

PeterHolzwarth•2mo ago
Is this a sign that Fabian is beginning his look at and research into Quake (i.e., in slow preparation for another Game Engine Black Book)?
bombcar•2mo ago
I feel he's stepping his way to that, but Quake is an entire other world of complexity from DooM (which is simple enough that a 400+ page book can "explain" it and the OS and the computers it ran on).
klaussilveira•2mo ago
Andre Lamothe's "Tricks of the 3d Game Programming Gurus: Advanced 3d Graphics and Rasterization" is a great book for anyone interested in graphics programming from those days.
skywal_l•2mo ago
Also "The Black Book"[0] by Michael Abrash, who worked on Quake with Carmack.

[0]: https://www.amazon.fr/Michael-Abrashs-Graphics-Programming-S...

bombcar•2mo ago
Which itself is the inspiration for the DooM/Wolf3d Black Books.
andrewf•2mo ago
https://github.com/othieno/GPBB - the last chapter is a Quake retrospective.

There's also the columns he wrote for Dr. Dobb's Journal during development. They're like an expanded, illustrated version of the first half of the GPBB chapter. https://www.bluesnews.com/abrash/contents.shtml

klaussilveira•2mo ago
He most definitely is cooking something. Seen contributions from him to chocolate-quake: https://github.com/Henrique194/chocolate-quake/pull/60

As well as bug reports: https://github.com/Henrique194/chocolate-quake/issues/57

I personally would love to get some help with my Chocolate Doom 3 BFG fork, especially with pesky OpenGL issues: https://github.com/klaussilveira/chocolate-doom3-bfg

skywal_l•2mo ago
He said he was working on something in a comment a couple of months ago on a thread about typst: https://news.ycombinator.com/item?id=44356883
fallingfrog•2mo ago
Ah I remember running a serial cable from my bedroom to the hallway so we could play 1v1 quake via direct connect. Good times! I think we used to play age of empires that way too.
sempron64•2mo ago
It's amusing to me that in the 90s you could easily play Quake or Doom with your friends by calling their phone number over the modem whereas now setting up any sort of multiplayer essentially requires a server unless you use some very user-unfriendly NAT busting.
kamranjon•2mo ago
I wonder if there is a way to use tailscale to make it easy again?
blackcatsec•2mo ago
Quite literally folks have done this for decades using Hamachi.
sempron64•2mo ago
Hamachi and STUN were what I was thinking of when I referred to user-unfriendly NAT busting. It's true that these are not much harder to get working than a modem, but they don't match up with modern consumer expectations of ease-of-use and reliability on firewalled networks. It would be nice if Internet standards could keep up with industry so that these expectations could be met. It's totally understandable where we've landed due to modern security requirements, but I still feel something has been lost.
klaussilveira•2mo ago
But how are you going to circumvent the user firewall? He still has to open ports there, even using STUN or Steam Relay or Hamachi.
blackcatsec•2mo ago
Hamachi does not require you to open any ports on your firewall, by nature. Except maybe the local firewall (the Windows firewall, likely), which should automatically prompt you when an app tries to use a port.
blackcatsec•2mo ago
I mean, internet standards kept up. IPv6 is a thing, and some form of dynamic IPv6 stateful firewall hole punching a la UPnP would be useful here. Particularly if the application used the temporary address for the hole punch--because once the address lifetime ends, it's basically not going to get used again (64-bit address space). So that effectively nullifies any longer term concerns about security vulnerabilities.
ryandrake•2mo ago
Glad you mentioned DOOM! Sometimes people forget that DOOM supported multiplayer as early as December 1993, via a serial line and February 1994 for IPX networking. 4 player games on a LAN in 1994! On release, TCP/IP wasn't supported at all, but as the Internet took off, that was solved as well. I remember testing an early-ish version of the 3rd party iDOOM TCP setup driver from my dorm room (10 base T connection) when I was supposed to be in class, and it was a true game changer.
jandrese•2mo ago
What was even more amazing is you could daisy chain serial ports on computers to get multiplayer Doom running. One or more of those links could even be a phone modem.

Downside is that your framerate was capped to the person with the slowest computer, and there was always that guy with the 486sx25 who got invited to play.

bombcar•2mo ago
Or slave two copies to yours and get "side view" which was only supported for a few releases IIRC - https://doomwiki.org/wiki/Three_screen_mode
badocr•2mo ago
Yes, Doom with multi-monitors! There's (at least) one video on Youtube showing it in action with 3 monitors plus a fourth one with the map: https://www.youtube.com/watch?v=q3NQQ7bPf6U#t=1798.333333
bigjimmyk3•2mo ago
Someone tried running that in one of the campus computer labs when I was a student, and the (probably misconfigured) IPX routers amplified it into... a campus-wide outage. Seems weird to me, but that's what the big sign on the door said the next day.

The perpetrator was never caught.

mrguyorama•2mo ago
You usually just need to forward a port or two on your router. That gets through the NAT because you specify which destination IP to forward it to. You also need to open that port in your Windows firewall in most cases.

Some configuration, but you don't have to update the port forwarding as often as you would expect.

The reason you can't just play games with your friends anymore is that game companies make way too much money from skins and do not want you to be able to run a version of the server that does not check whether you paid real money for those skins. Weirdly, despite literally inventing loot boxes, Valve sometimes does not suffer from this. TF2 had a robust custom-server community that had dummied out the checks so you could wear and use whatever you want. Similar to how Minecraft still allows you to turn off authentication so you can play with friends who have a pirate copy.

bitwize•2mo ago
Starcraft could only do internet play through battle.net, which required a legit copy. Pirated copies could still do LAN IPX play though, and with IPX over IP software you could theoretically do remote play with your internet buddies.

By the way, this is why bnetd is illegal to distribute and was ruled such in a court of law: authenticating with battle.net counts as an "effective" copy protection measure under the DMCA, and providing an alternate implementation that skips that therefore counts as "circumvention technology".

klaussilveira•2mo ago
You can kinda solve that problem with STUN servers. Most games on Steam use Steam Datagram Relay, which also solves this: https://partner.steamgames.com/doc/features/multiplayer/stea...

It's like in-engine Hamachi. Works really well with P2P games.

rurban•2mo ago
Multi-player started with Doom 2. Original doom was single player only. Doom 2 was for 4 players which I used in my mod ArsDoom. Quake then extended it to scale via a dedicated quake server.
Lammy•2mo ago
I played a lot of Brood War with my friends like this.
mmastrac•2mo ago
I still get Google hits on my 25 year old DJGPP/NASM tutorial...
anthk•2mo ago
With DOS running fast under W95, how come no one created an SVGALIB wrapper trapping i386 code and redirecting the calls to an SDL window?
badocr•2mo ago
I recall reading about TCP/IP-powered Internet multiplayer DOS Quake in TECHINFO.TXT that shipped with the retail version of the game, and I quote:

  Beame & Whiteside TCP/IP
  ------------------------
  
  This is the only DOS TCP/IP stack supported in the test release.
  It is not shareware...it's what we use on our network (in case you
  were wondering why this particular stack).  This has been "tested"
  extensively over ethernet and you should encounter no problems
  with it.  Their SLIP and PPP have not been tested.  When connecting
  to a server using TCP/IP (UDP actually), you specifiy it's "dot notation"
  address (like 123.45.67.89).  You only need to specify the unique portion
  of the adress.  For example, if your IP address is 123.45.12.34
  and the server's is 123.45.56.78, you could use "connect 56.78".

I looked around a little and sure enough, a copy of the software was available in a subdirectory of idgames(2?) at ftp.cdrom.com. I knew nothing about TCP/IP networking at the time, so it was a struggle to get it all working, and in the end the latency and overall performance were miserable and totally not worth it. Playing NetQuake with WinQuake was a much more appropriate scenario.
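The address-completion rule in that TECHINFO excerpt is a cute detail. A sketch of how it reads to me (names and code are mine, not id's): octets you leave off are filled in from the left with your own address.

    #include <stdio.h>
    #include <string.h>

    /* "56.78" completed against own address "123.45.12.34" -> "123.45.56.78" */
    static void complete_addr(const char *partial, const char *own,
                              char *out, size_t outlen)
    {
        int given = 1;                          /* octets typed by the user */
        for (const char *p = partial; *p; p++)
            if (*p == '.')
                given++;

        char prefix[16];
        size_t len = 0;
        int dots = 0;
        for (const char *p = own; *p && dots < 4 - given; p++) {
            if (*p == '.')
                dots++;
            prefix[len++] = *p;                 /* copy leading octets + dots */
        }
        prefix[len] = '\0';

        snprintf(out, outlen, "%s%s", prefix, partial);
    }

    int main(void)
    {
        char full[32];
        complete_addr("56.78", "123.45.12.34", full, sizeof full);
        printf("%s\n", full);                   /* prints 123.45.56.78 */
        return 0;
    }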
pragma_x•2mo ago
From that first graph: Who was using WinNT in 2005?!
bombcar•2mo ago
Businesses. And I knew a few diehards who swore Windows 2000 was shit and Windows NT 4.0 was where it was at, even as a workstation.

Windows 95 and MS-DOS in 2004 worry me a bit more.

pak9rabid•2mo ago
I'm guessing none of these diehards were running domain controllers.
toast0•2mo ago
> And in 1995 there were only two: us, and Total Entertainment Network. You might think game creators would come to us and say "please put my game on your service!", but... nope! Not only did we have a licensing team that went out and got contracts to license games for our service, but we had to pay the vendor for the right to license their game, which was often an exclusive. So, we had Quake and Unreal; TEN got Duke Nukem 3D and NASCAR.

FWIW, the Total Entertainment Network (TEN) got Quake later, here's a press release from September 30 1996 [1]. Wikipedia says QTest was released Feb 24, 1996, and I can't find when MPath support launched, so I don't know how long the exclusive period was, but not very long. Disclosure: I was a volunteer support person (TENGuide) so I could get internet access as a teen; I was GuideToast and/or GuideName.

[1] https://web.archive.org/web/20110520114948/http://www.thefre...

nu11ptr•2mo ago
What a blast from the past. I recall experimenting with DJGPP and DPMI in my own software. It felt futuristic at the time. I was blown away.

Another fond memory: I was playing Star Wars: Dark Forces (I think that was the one) and was frustrated with the speed of level loading. I think it used DOS/4GW, and I recall renaming it, copying in a new DOS extender (was it CWSDPMI? not sure), and renaming that to the one Star Wars used. I was shocked when it not only worked, but the level loading was MUCH faster (3-5x faster, I recall). My guess is that whatever one I had swapped in wasn't calling the interrupt in DOS (by swapping back to real mode), but perhaps was calling the IDE disk hardware directly from protected mode. Not sure, but it was a ton faster, and I was a very happy kid. The rest of the game had the same performance (which makes sense, I think) with the new extender.

jhallenworld•2mo ago
In the early years of Linux, before it had networking, we used KA9Q for the TCP/IP stack:

https://www.ka9q.net/code/ka9qnos/

This worked in DOS, but was easily ported to Linux.

As far as DPMI goes: I used the CWSDPMI client fairly recently because it allows a 32-bit program to work in both DOS and Windows (it auto-disables its own DPMI functions when Windows is detected).

https://en.wikipedia.org/wiki/CWSDPMI

Rodeoclash•2mo ago
> If you wanted to play a multiplayer game on the internet, either you needed to have explicit host & port information, or you needed to use an online multiplayer gaming service.

Technically true, although tools like Kali existed which could simulate IPX-style networks over the internet. I know this because I played a substantial amount of Mechwarrior 2 online when it was designed only for local network play!

lizknope•2mo ago
Of course Quake had to support DOS, but id developed Quake on NeXTSTEP, which of course had TCP/IP, and they had been supporting Linux and commercial Unix versions like Solaris since Doom a few years earlier.
burnt-resistor•2mo ago
> My guess is that, in full screen, memory writes and reads to the VGA are given direct access to the hardware to preserve performances. [sic]

When a DOS VM is in windowed mode, Windows must intercept VGA port I/O and video framebuffer RAM access, redirect them to a shadow framebuffer, and scale/bitblt that to display in its hosted window.

In full screen, exclusive VGA access is possible so it doesn't need to do anything special except restore state on task switching.

Quake would be even faster if it didn't have to support DPMI/Windows and ran in "unreal mode" with virtual memory and I/O port protections disabled.
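Roughly what that direct access looks like from a djgpp program in full screen (a sketch using real DJGPP calls, but treat it as an illustration, not Quake's code): set mode 13h through the DPMI server, then poke VGA memory at 0xA0000 via near pointers.

    #include <dpmi.h>
    #include <string.h>
    #include <sys/nearptr.h>

    int main(void)
    {
        __dpmi_regs r;

        memset(&r, 0, sizeof r);
        r.x.ax = 0x0013;                    /* BIOS: 320x200x256, mode 13h */
        __dpmi_int(0x10, &r);

        if (__djgpp_nearptr_enable()) {     /* relax memory protection */
            unsigned char *vga =
                (unsigned char *)(0xA0000 + __djgpp_conventional_base);
            for (int i = 0; i < 320 * 200; i++)
                vga[i] = (unsigned char)i;  /* write straight to video RAM */
            __djgpp_nearptr_disable();
        }

        memset(&r, 0, sizeof r);
        r.x.ax = 0x0003;                    /* back to 80x25 text mode */
        __dpmi_int(0x10, &r);
        return 0;
    }

In windowed mode those same writes end up in the shadow framebuffer Windows has to blit; full screen, they hit the card.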

xela79•2mo ago
And let's not forget QuakeWorld, which introduced latency compensation, allowing you to have an OK experience with up to 200ms ping, whereas the normal Quake TCP/IP stack was basically unplayable with anything over 70-80ms (rough sketch of the idea at the end of this comment). https://quakewiki.org/wiki/QuakeWorld

my first multiplayer Quake was with Qtest in February 1996 using a null modem cable between two machines, and later on using coax cables and IPX in DOS

getting a multiplayer game running was a tech feat compared to the plug & play nature of things now. the learning curve is gone now.
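The sketch promised above - not QuakeWorld's actual code, just the general shape of the client-side prediction that made high pings bearable: keep the movement commands the server hasn't acknowledged yet and replay them on top of the last authoritative state.

    #define MAX_PENDING 64

    typedef struct { float forward, side, dt; int seq; } usercmd_t;
    typedef struct { float x, y; } playerstate_t;

    static usercmd_t pending[MAX_PENDING];
    static int num_pending = 0;

    /* toy movement model standing in for the real player physics */
    static playerstate_t apply(playerstate_t s, usercmd_t c)
    {
        s.x += c.forward * c.dt;
        s.y += c.side * c.dt;
        return s;
    }

    /* called when a server snapshot arrives: authoritative state plus the
       sequence number of the last command the server has processed */
    playerstate_t predict(playerstate_t server_state, int acked_seq)
    {
        playerstate_t s = server_state;
        int kept = 0;
        for (int i = 0; i < num_pending; i++) {
            if (pending[i].seq > acked_seq) {   /* drop acked commands */
                pending[kept++] = pending[i];
                s = apply(s, pending[i]);       /* replay the rest locally */
            }
        }
        num_pending = kept;
        return s;                               /* what the client renders */
    }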