IBM (NetBEUI, no TCP/IP) -> Spider TCP/IP Stack + SysV STREAMS environment -> MS rewrite 1 (early NT, Winsock instead of STREAMS) -> MS rewrite 2 (make win2000 faster):
https://web.archive.org/web/20151229084950/http://www.kuro5h...
With IP you have an address and the means to route data to that address and back; with TCP/UDP sockets you have an address:port endpoint, so the recipient doesn't need to hand a received packet to every process on the system, asking "is this yours?".
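A minimal sketch of that port-based demultiplexing, using Python's standard socket API (the loopback addresses and message here are arbitrary examples): the kernel delivers a UDP datagram only to the socket bound to the destination port, so no other socket is ever consulted.

```python
import socket

# Two independent "processes", each bound to its own UDP port on loopback.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 0))

# Send a datagram to b's address:port endpoint.
a.sendto(b"is this yours?", b.getsockname())

# Only b receives it; the kernel demultiplexed on the port number,
# so nothing else on the system had to be asked.
data, sender = b.recvfrom(1024)
print(data)
a.close()
b.close()
```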
So if there is already some network stack providing both the addressing and the messaging...
You’d still be talking about stuff like IP addresses and the like with it. Probably with the XTI API instead of BSD sockets, which is a bit more complex, but gives you the flexibility to handle network stacks other than TCP/IP, like erm…
Sadly no images.
Also, Windows did not install TCP/IP components on computers without a network card (most of them until the Millennium era); it was an optional component. You could not “ping” anything, as there was no ping utility, nor libraries it could call. In that aspect, those network-less Windows systems were not much different from network-less DOS systems. The installer probably still works that way (or can be made to, by excluding some dependencies), but it's hard to find hardware without any network connectivity today. I wonder what the Windows 11 installer does when there is no network card to phone home...
One of "works fine", "needs a command line trick to continue" or "refuses to work completely" depending on which specific edition of win11 your computer has been afflicted with.
You: I'll show you!
I am the danger.
[Sticks glowing hot screw driver in molten lead]
Later that day...
Years later I learned what flux was, and soldering became quite a bit better and easier.
There wasn't one kind of null modem cable, per se, there were serial and parallel null modem cables.
Originally, there were null modem (serial) adapters that worked with straight-through cables, but that got expensive, awkward, and complicated. A universal serial null modem cable had a pair of connectors on each end, a DE-9 female (commonly mislabeled "DB-9") and a DB-25 female, so it would work whichever connector type either system had.
A parallel null modem cable had DB-25 male connectors on both ends.
http://www.nullmodem.com/LapLink.htm
Both were used with LapLink and often interchangeably called "LapLink cables" too because the boxed version of LapLink included both cables.
Virtual 8086 mode had little to do with what we nowadays think of when someone says "virtual machine".
This hurt performance a bit but in the longer term benefited the modding scene massively.
Virtual 8086 mode remapped all opcodes and memory accesses to let a program pretend it was running on a single, real-mode 8086, when in reality it was one of many programs running in protected mode on a newer chip.
AMD-V and Intel's counterpart (VT-x) do almost exactly the same thing: remap all ia32 and amd64 instructions and memory accesses to let a program pretend to be doing ring 0 stuff on an x86 box, when in reality it is one of many programs running with fewer privileges on a newer chip.
There are a few more wrinkles in the latter case -- nested page tables, TLB, etc -- but it is the same idea from the viewpoint of the guest.
> From the beginning of the development, id had requested from djgpp engineers that their DPMI client would be able to run on djgpp's DPMI server but also Windows 95 DPMI server.
I'm pretty sure that "DJGPP engineers" is just one guy, DJ Delorie. DJGPP was always open source, so I bet he got some contributors, but if the rest of this sentence is true, that "id had requested from djgpp engineers" just means they asked the maker of an open-source tool they used to please add a feature. I wonder whether they paid him for it or whether DJ just hacked it all in at id's request for kicks. His "about me" page suggests he does contracting, so it might be the former.
DJGPP was spectacularly good back in the day. I didn't appreciate at the time what a monumental effort it must have been to port the entire GCC toolchain and runtime to DOS/Windows. Hats off to DJ Delorie!
I think I remember there was some communication between id and Charles Sandmann about CWSDPMI, so even though it's worded a bit strangely for an open source project, there's probably some truth in it?
It's also a bit strange that the author is surprised about Quake running in a 'VM'; apparently they don't really know about VM86 mode in x86 processors...
Virtual 8086 mode, as its name somewhat suggests, only runs real mode code.
Virtual 8086 mode, on the other hand, does behave exactly like a real 8086. Emulating that in software would otherwise have been very slow on a real 386, and on a 286 it was either very slow or outright impossible, because the 286 had some errata (non-restartable exceptions) that prevented normal virtualisation techniques.
For things like Windows 3.x, 9x, OS/2, CWSDPMI, and DOS/4G (DPMI & VCPI), paging and virtual memory were an optional feature. In fact, CWSDPMI/djgpp programs had flags (using `CWSDPR0` or `CWSDPMI -s-` or programmatic calls) to disable paging and virtual memory. Also, djgpp’s first DPMI server (a DOS extender called `go32`) didn’t support virtual memory either, but could sub-execute real-mode DOS programs in VM86 mode.
I do dispute your assertion that virtual memory was "disabled". It isn't possible to use V86 mode (what the Intel Docs called it) without having a TSS, GDT, LDT and IDT set up. Being in protected mode is required. Mappings of virtual to real memory have to be present. Switching in and out of V86 mode happens from protected mode. Something has to manage the mappings or have set it up.
Intel's use of "virtual" for V86 mode was cursory - it could fail to work for actual 8086 code. This impacted Digital Research. And I admit my experiences are mostly from that side of the OS aisle.
I did go back and re-read some of [0] to refresh some of my memory bitrot.
[0] https://www.ardent-tool.com/CPU/docs/Intel/386/manuals/23098...
It's also possible to run Virtual 8086 mode without paging enabled, and when in Virtual 8086 mode, the 386 doesn't care about what's in the LDT/GDT (unless an interrupt happens). In practice this is never done because the Virtual 8086 task would only be able to use the bottom 1MB of physical memory.
On Windows 3.x, paging and swapping was optional - if you started it in 286 ("standard") mode. On Windows 95, paging is not optional, and it's not optional in Windows 3.11 for Workgroups either.
Note that in a machine with EMM386, the machine is normally running DOS inside an 8086 VM... and the only reason it would switch out of that VM is when you fire up a DPMI client.
However, the difference between a DOS VM under Windows 9x and a Windows command prompt and a w32 program started from DOS is all very esoteric and complicated (even Raymond Chen has been confused about implementation details at times).
He is perhaps surprised that it runs _at speed_ in the VM, not that it runs in the VM which he already knows about.
So I just took a look at DJ’s website and he has a college transcript there. Something looked interesting.
Apparently he passed a marksmanship PE course in his first year. Is that a thing in the US? I don’t know, maybe it’s common and I have no idea. I’d love to have had a marksmanship course while studying computer science, though.
US colleges still have far more options, though.
The marching band could also count for PE credit. I believe you could only get credit for one semester for it, and my university required two semesters of PE, so I gave fencing a try, having never fenced before.
You can still play the little game I made in under 10K for the Allegro SizeHack competition in 2000: https://web.archive.org/web/20250118231553/https://www.oocit...
Back then I was also writing a bunch of articles on game development: https://www.flipcode.com/archives/Theory_Practice-Issue_00_I...
Anyone on HN was active around that time? :) Fun time to be hacking!
Worked great, and saved a bunch of time vs writing a VDD to enable direct hardware access from NTVDM or a miniport driver from scratch.
IIRC, the underlying problem was that none of the NT drivers for any of the cards we'd tested were able to reliably allocate enough sufficiently contiguous memory to read tapes with unusually large (multi-megabyte) block sizes.
I eventually installed Red Hat, started at university and lost most of my free time to study projects.
I took part in a few of the later Speedhacks, around 2004-2005, I think?
Allegro will always have a warm place in my heart, and it was a formative experience that still informs how I like to work on games.
EDIT: Hah, actually found my first Speedhack [1]! Second place, not bad! Interestingly, the person who took first place (Rodrigo Braz Monteiro) is also a professional game developer now.
[1]: https://web.archive.org/web/20071101091657/http://speedhack....
Purely out of curiosity, was that all a volunteer open source effort or was the entire DJGPP group acting as a consulting organization?
And here you are!!
Yeah, the DOS-to-Windows transition was a big deal, but it was a pretty ripe time for innovation then.
Dialing-up to the Internet to play Quake via TCP/IP...shit tons of lag (150+ ms).
What the GP comment was talking about was dial-up Internet being most people's exposure to TCP/IP gaming in the 90s. That was most assuredly laggy. Even the best dial-up Internet connections had at least 100ms of latency just from buffering in the modems.
The QuakeWorld network stack was built to handle the high latency and jitter of dial-up Internet connections. The original Quake's network was fine on a LAN or fast Internet connection (e.g. a dorm ResNet) but was sub-par on dial-up.
It was laggy as there was buffering and some compression (at least for later revisions of dial-up) that most definitely added latency.
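Back-of-the-envelope: even ignoring the phone network itself, just clocking queued bytes through the modem link adds real latency. A rough sketch, assuming async 8N1 framing (~10 bits on the wire per byte); the buffer size is an illustrative guess, not a measured value:

```python
def serialization_ms(nbytes: int, bps: int) -> float:
    """Milliseconds to clock nbytes through a link at bps bits/s,
    assuming ~10 bits per byte (8 data + start/stop framing)."""
    return nbytes * 10 / bps * 1000.0

# A ~350-byte queue on a 33.6k link already costs ~100 ms one way,
# before the data even reaches the phone network.
one_way = serialization_ms(350, 33600)
print(round(one_way))  # ~104 ms
```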
And even then they'd often just use the bundled browser.
I remember the first friend who got a cable modem, that shit was insane compared to dial-up.
Which seems to imply that the network stack was about as difficult to implement as the new 3D engine.
I looked into the development cycle behind Daikatana, partly because it had its 25th anniversary this year and so for some reason, YouTube was recommending me Daikatana content. And... there's a reason why Romero's first dev team all quit. Daikatana started life as a 400-page document full of everything Romero found awesome at the time. There were going to be time travel mechanics and a roleplaying party system like Chrono Trigger. It was going to have like a hundred awesome weapons that totally reinvented how to make things go boom. It was going to have the sweetest graphics imaginable. Etc. It was like something Imari Stevenson would have written as a teenager, which is somewhat surprising since Romero could now call himself a seasoned industry professional.
What's worse, "Design is Law" basically meant "what I say goes". It was his job to have the ideas, and it was his team's job to implement them. Romero wanted to be the "idea guy", and Daikatana was an "idea guy" game. I doubt he had the maturity at the time to understand what design is, in terms of solving a problem with the tools and constraints you have. He wanted Daikatana, and Quake before that, to have everything, and didn't know how to pare it down to the essentials, make compromises, and most importantly, listen to his team. Maybe there's an alternate-universe Quake or Daikatana somewhere that's just a bit more ambitious than the Quake we got, incorporating more roleplaying elements into a focused experience. But in our timeline, Romero didn't want to make that game.
Of course, after taking the L on Daikatana's eventual release, Romero wised up and started delivering much more focused and polished experiences, learning to work within constraints and going a long way toward rehabilitating his reputation. But that's not where he was when he criticized Quake for not pushing the envelope enough.
However, the GBC game wasn't actually developed by Romero's team, but outsourced to the Japanese studio Kemco. Romero was involved, though I'm not sure how much.
Anyway, what made the GBC game unique is that it played like a linear story-driven first-person game, similar to Half-Life, just not in first-person.
The presentation was top-down, like Zelda Link's Awakening, but the world wasn't split into an overworld and dungeons, nor was it split into "levels". Instead you would just walk from location to location, where side paths would be blocked off, similar to Half-Life. On the way you were solving environmental (story related) puzzles, killing enemies, meeting allies, all while advancing the elaborate plot through occasional dialog scenes. It felt pretty modern, like playing an action thriller.
For some reason I never saw another 2D game which played like that. I assume one reason is that this type of story-driven action game was only invented with Half-Life (1998), at which point 2D games were already out of fashion on PC and home consoles. Though this doesn't explain why it didn't catch on for mobile consoles.
So in conclusion, I think Romero (his own studio) might have been better off developing ambitious 2D action adventures for constrained mobile consoles rather than trying the same on more challenging 3D hardware. It would have been a nice niche that no other team occupied at the time, to my knowledge.
I've played Daikatana GBC. It's pretty good. Years back, Romero released the ROM on his web site as a free download. I suspect ION was pretty involved, at least from a standpoint of making sure the story unfolded more or less as it did in the main game.
[0]: https://www.amazon.fr/Michael-Abrashs-Graphics-Programming-S...
There's also the columns he wrote for Dr Dobbs' Journal during development. They're like an expanded, illustrated version of GPBB chapter's first half. https://www.bluesnews.com/abrash/contents.shtml
As well as bug reports: https://github.com/Henrique194/chocolate-quake/issues/57
I personally would love to get some help with my chocolate Doom 3 BFG fork, specially with pesky OpenGL issues: https://github.com/klaussilveira/chocolate-doom3-bfg
Downside is that your framerate was capped to the person with the slowest computer, and there was always that guy with the 486sx25 who got invited to play.
The perpetrator was never caught.
Some configuration, but you don't have to update the port forwarding as often as you would expect.
The reason you can't just play games with your friends anymore is that game companies make way too much money from skins and do not want you to be able to run a version of the server that does not check whether you paid real money for those skins. Weirdly, despite literally inventing loot boxes, Valve doesn't always suffer from this. TF2 had a robust custom server community that had dummied out checks so you could wear and use whatever you want. Similar to how Minecraft still allows you to turn off authentication so you can play with friends who have a pirate copy.
By the way, this is why bnetd is illegal to distribute and was ruled such in a court of law: authenticating with battle.net counts as an "effective" copy protection measure under the DMCA, and providing an alternate implementation that skips that therefore counts as "circumvention technology".
It's like in-engine Hamachi. Works really well with P2P games.
Beame & Whiteside TCP/IP
------------------------
This is the only DOS TCP/IP stack supported in the test release.
It is not shareware...it's what we use on our network (in case you
were wondering why this particular stack). This has been "tested"
extensively over ethernet and you should encounter no problems
with it. Their SLIP and PPP have not been tested. When connecting
to a server using TCP/IP (UDP actually), you specify its "dot notation"
address (like 123.45.67.89). You only need to specify the unique portion
of the address. For example, if your IP address is 123.45.12.34
and the server's is 123.45.56.78, you could use "connect 56.78".
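The "unique portion" shorthand in that readme can be sketched as: keep the leading octets of your own address and overwrite the tail with the octets you typed. This is my guess at the rule from the example given, not B&W's actual code:

```python
def complete_address(local_ip: str, partial: str) -> str:
    """Fill in the missing leading octets of a partial dotted address
    from the local IP, per the readme's example: local 123.45.12.34
    plus "56.78" yields "123.45.56.78"."""
    local = local_ip.split(".")
    given = partial.split(".")
    # Keep the leading octets of the local address, replace the tail.
    return ".".join(local[:4 - len(given)] + given)

print(complete_address("123.45.12.34", "56.78"))  # 123.45.56.78
print(complete_address("123.45.12.34", "89"))     # 123.45.12.89
```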
I looked around a little and sure enough, a copy of the software was available in a subdirectory of idgames(2?) at ftp.cdrom.com. I knew nothing about TCP/IP networking at the time, so it was a struggle to get it all working, and in the end the latency and overall performance were miserable and totally not worth it. Playing NetQuake with WinQuake was a much more appropriate scenario.
Windows 95 and MS-DOS in 2004 worry me a bit more.
FWIW, the Total Entertainment Network (TEN) got Quake later, here's a press release from September 30 1996 [1]. Wikipedia says QTest was released Feb 24, 1996, and I can't find when MPath support launched, so I don't know how long the exclusive period was, but not very long. Disclosure: I was a volunteer support person (TENGuide) so I could get internet access as a teen; I was GuideToast and/or GuideName.
[1] https://web.archive.org/web/20110520114948/http://www.thefre...
Another fond memory: I was playing Star Wars: Dark Forces (I think that was the one) and was frustrated with the speed of level loading. I think it used DOS/4GW and I recall renaming it, copying in a new dos extender (was it CWSDPMI? not sure), and renaming it to the one Star Wars used. I was shocked when it not only worked, but the level loading was MUCH faster (like 3-5x faster I recall). My guess is whatever one I had swapped in wasn't calling the interrupt in DOS (by swapping back to real mode), but perhaps was calling the IDE disk hw directly from protected mode. Not sure, but it was a ton faster, and I was a very happy kid. The rest of the game had the same performance (which makes sense I think) with the new extender.
https://www.ka9q.net/code/ka9qnos/
This worked in DOS, but was easily ported to Linux.
As far as DPMI: I used the CWSDPMI client fairly recently because it allows a 32-bit program to work in both DOS and Windows (it auto-disables its own DPMI functions when Windows is detected).
Technically true, although tools like Kali existed which could simulate IPX-style networks over the internet. I know this because I played a substantial amount of MechWarrior 2 online when it was designed only for local network play!
When a DOS VM is in windowed mode, Windows must intercept VGA port I/O and video framebuffer RAM accesses, redirect them to a shadow framebuffer, and scale/bitblt that to display in its hosted window.
In full screen, exclusive VGA access is possible so it doesn't need to do anything special except restore state on task switching.
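A toy model of that windowed-mode path (pure Python; every structure here is invented for illustration, not how Windows actually implements it): trapped guest writes land in a shadow buffer and mark a dirty span, and a later refresh blits only the dirty span to the host window's surface.

```python
class ShadowFramebuffer:
    """Toy windowed-mode VGA interception: guest framebuffer writes are
    trapped into a shadow buffer and marked dirty; blit() later copies
    only the dirty span to the host window's surface."""

    def __init__(self, size: int):
        self.shadow = bytearray(size)  # what the guest "sees"
        self.window = bytearray(size)  # what the host window shows
        self.dirty = None              # (lo, hi) span touched since last blit

    def guest_write(self, offset: int, data: bytes):
        # Trap handler for a guest write into framebuffer RAM.
        self.shadow[offset:offset + len(data)] = data
        lo, hi = offset, offset + len(data)
        if self.dirty:
            lo, hi = min(lo, self.dirty[0]), max(hi, self.dirty[1])
        self.dirty = (lo, hi)

    def blit(self):
        # On the host's refresh tick, copy only the dirty span.
        if self.dirty:
            lo, hi = self.dirty
            self.window[lo:hi] = self.shadow[lo:hi]
            self.dirty = None

fb = ShadowFramebuffer(16)
fb.guest_write(2, b"\xff\xff")
fb.guest_write(8, b"\x0f")
fb.blit()
print(fb.window.hex())  # bytes 2-3 and 8 updated, rest untouched
```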
Quake would be even faster if it didn't have to support DPMI/Windows and ran in "unreal mode" with virtual memory and I/O port protections disabled.
my first multiplayer Quake was with Qtest in February 1996 using a null modem cable between two machines, and later on using coax cables and IPX in DOS
getting a multiplayer game running was a tech feat compared to the plug & play nature of things now. the learning curve is gone now.
afandian•2mo ago
The British guy named Henry might have named it after another feat of engineering completed around the same time.
https://en.wikipedia.org/wiki/Channel_Tunnel
ralferoo•2mo ago
Weird hearing that name now though. Around that time, everybody referred to it as the "Chunnel", but I don't think I've heard it as anything but the "Channel Tunnel" since maybe 2000. I suspect even that usage is now limited to only taking cars on the train from Folkestone. Every time I've travelled on it as a regular passenger from London, it's just been referred to as the Eurostar without any mention of the tunnel at all.