Then there's this:
> All of the OS/2 API routines use the Pascal extended keyword for their calling convention so that arguments are pushed on the stack in the opposite order of C. The Pascal keyword does not allow a system routine to receive a variable number of arguments, but the code generated using the Pascal convention is smaller and faster than the standard C convention.
Did this choice of a small speed boost over compatibility ever haunt the decision makers, I wonder? At the time, the speed boost probably was significant at the ~MHz clock speeds these machines were running at, and Moore's Law had only just gotten started. Maybe I tend to lean in the direction of compatibility, but this seemed like a weird choice to me. Then, in that same paragraph:
> Also, the stack is restored by the called procedure rather than the caller.
What could possibly go wrong?
>> Also, the stack is restored by the called procedure rather than the caller.
> What could possibly go wrong?
This is still the case for non-vararg __stdcall functions used by Win32 and COM. (The argument order was reversed compared to Win16’s __far __pascal.) By contrast, the __syscall convention that 32-bit OS/2 switched to uses caller cleanup (and passes some arguments in registers).
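To make the distinction concrete, here is a minimal C sketch of the two declaration styles; the function names are invented for illustration, only the annotations are the real Microsoft keywords:

    /* Callee cleans the stack: the Win32/COM style. Cannot be variadic,
       because the callee must know at compile time how much to pop. */
    int __stdcall AddExported(int a, int b);

    /* Caller cleans the stack: the C runtime default. Variadic functions
       are fine, since only the caller knows how many arguments it pushed. */
    int __cdecl LogPrintf(const char *fmt, ...);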
The Windows 3.0 effort was initially disguised as an update to this before management could be convinced to support the project.
But this is an example, on this very page, of the telephone-game problem that happened during the Operating System Wars: the WINAPI macro, a porting tool that Microsoft introduced into its DOS-Windows SDK to allow 32-bit programmers to divorce themselves from the notion of "far" function calls that 16-bit programmers had to be very aware of, becomes intertwined with a larger "Windows is a porting layer" tale, despite the two being completely distinct.
That a couple of applications essentially supplied 16-bit Windows as a runtime really was not related to the 16-bit to 32-bit migration, which came out some while after DOS-Windows was a standalone thing that one ran explicitly, rather than as some fancy runtime underpinnings for a Microsoft application.
* https://jdebp.uk/FGA/function-calling-conventions.html#WINAP...
There is nothing wrong with using this calling convention, except for those specific functions that need to have a variable number of arguments - and why not handle those few differently instead, unless you're using a braindead compiler / language that doesn't keep track of how functions are declared?
Moreover, it can actually support tail calls between functions of arbitrary (non-vararg) signatures.
I think it is a big pity that contemporary mainstream x86[-64] calling conventions (both Windows and the SysV ABI used by Linux and almost everybody else) don’t pass the argument count in a register for varargs functions. This means there is no generic way for a varargs function to know how many arguments it was called with - some functions use a sentinel value (often NULL), for some one of the arguments contains an embedded DSL you need to parse (e.g. printf and friends). Using obtuse preprocessor magic you can make a macro with the same name as your function which automatically passes its argument count as a parameter, but that is rarely actually done.
The OpenVMS calling convention - including the modified version of the SysV ABI which the OpenVMS x86-64 port uses - passes the argument count of varargs function calls in a register (eax), which is then available via the va_count macro. I don’t know why Windows/Linux/etc. didn’t copy this idea, I wish they had - but it is too late now.
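The preprocessor trick mentioned above looks roughly like this. A hedged sketch: sum/sum_impl and the 8-argument limit are made up for illustration, and the counting macro cannot handle zero arguments:

    #include <stdarg.h>
    #include <stdio.h>

    /* Count 1..8 arguments: the trailing descending list is shifted
       right by __VA_ARGS__ so the matching count lands in the n slot. */
    #define NARG(...)  NARG_(__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1)
    #define NARG_(_1, _2, _3, _4, _5, _6, _7, _8, n, ...) n

    /* Varargs function that receives its argument count explicitly. */
    static int sum_impl(int count, ...)
    {
        va_list ap;
        int total = 0;
        va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    /* Same-named macro that passes the count automatically. */
    #define sum(...) sum_impl(NARG(__VA_ARGS__), __VA_ARGS__)

    int main(void)
    {
        printf("%d\n", sum(1, 2, 3));   /* expands to sum_impl(3, 1, 2, 3) */
        printf("%d\n", sum(10, 20));    /* expands to sum_impl(2, 10, 20)  */
        return 0;
    }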
...in the end it's just another calling convention which you annotate your system header functions with. AmigaOS had a vastly different (very assembly friendly) calling convention for OS functions which exclusively(?) used CPU registers to pass arguments. C compilers simply had to deal with it.
> What could possibly go wrong?
...totally makes sense though when the caller passes arguments on the stack?
E.g. you probably have something like this in the caller:
push arg3 => place arg 3 on stack
push arg2 => place arg 2 on stack
push arg1 => place arg 1 on stack
call function => places return address on stack
...if the called function would clean up the stack it would also delete the return address needed by the return instruction (which pops the return address from the top of the stack and jumps to it). (OK, x86 has the special `ret imm16` instruction which adjusts the stack pointer after popping the return address, but I guess not all CPUs could do that back then.)
The rule of thumb I had heard and followed was that if something could take longer than 500ms you should get off the UI thread and do it in a separate thread. You'd disable any UI controls until it was done.
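In Win32 terms the pattern looked roughly like this. A minimal sketch only: WM_WORK_DONE, SlowWork and g_hButton are invented names, the button is assumed to be created elsewhere, and error handling is omitted:

    #include <windows.h>

    #define WM_WORK_DONE (WM_APP + 1)   /* hypothetical private message */

    static HWND g_hButton;              /* control to grey out, created elsewhere */

    /* Worker thread: do the slow thing off the UI thread, then tell the
       window it is finished. */
    static DWORD WINAPI SlowWork(LPVOID param)
    {
        HWND hwnd = (HWND)param;
        Sleep(2000);                            /* stand-in for the >500ms job */
        PostMessage(hwnd, WM_WORK_DONE, 0, 0);  /* safe to call from any thread */
        return 0;
    }

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_COMMAND:                        /* user clicked the button */
            EnableWindow(g_hButton, FALSE);     /* disable UI until done   */
            CreateThread(NULL, 0, SlowWork, hwnd, 0, NULL);
            return 0;
        case WM_WORK_DONE:                      /* posted by the worker    */
            EnableWindow(g_hButton, TRUE);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }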
Presentation Manager did not have a single input queue. Every PM application had its own input queue, right from when PM began in OS/2 1.1, created by a function named WinCreateMsgQueue() no less. There was very clearly more than one queue. What PM had was synchronous input, as opposed to the asynchronous input of Win32 on Windows NT.
Interestingly, in later 32-bit OS/2 IBM added some desynchronization where input would be continued asynchronously if an application stalled.
Here's Daniel McNulty explaining the difference in 1996:
* https://groups.google.com/g/comp.os.os2.beta/c/eTlmIYgm2WI/m...
And here's me kicking off an entire thread about it the same year:
* https://groups.google.com/g/comp.os.os2.programmer.misc/c/Lh...
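For reference, the per-application queue is visible right in the canonical PM skeleton. A minimal sketch, with window class registration, window creation, and error handling elided:

    #define INCL_WIN
    #include <os2.h>

    int main(void)
    {
        HAB hab = WinInitialize(0);           /* anchor block for this thread */
        HMQ hmq = WinCreateMsgQueue(hab, 0);  /* this application's own queue */
        QMSG qmsg;

        /* ... WinRegisterClass / WinCreateStdWindow would go here ... */

        while (WinGetMsg(hab, &qmsg, NULLHANDLE, 0, 0))
            WinDispatchMsg(hab, &qmsg);       /* synchronous input: a stalled
                                                 app holds up input routing   */

        WinDestroyMsgQueue(hmq);
        WinTerminate(hab);
        return 0;
    }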
I had (and likely have lost forever) a Boot disk with OS/2, and my Forth/2 system on it that could do directory listings while playing Toccata and Fugue in D minor in a different thread.
I wrote Forth/2 out of pure spite, because somehow I heard that it just wasn't possible to write OS/2 applications in assembler. Thanks to a copy of the OS/2 development kit from Ward Christensen (who worked at IBM), and a few months of spare time, Forth/2 was born, written in pure assembler, compiling to directly threaded native code. Brian Matthewson from Case Western wrote the manual for it. Those were fun times.
Just to be clear, when you say "without the need for the GUI", more accurately that's "without a GUI" (w/o Presentation Manager). So you're using OS/2 in an 80x25 console screen, what would appear to be a very good DOS box.
It exists in that interesting but obsolete interstitial space, alongside BeOS, of very well done single-user OSes.
Btw, I'd love to know where this idea about no assembler programs on OS/2 came from. There was no problem at all with using MASM, with CodeView to debug on a second screen.
It’s certainly a misunderstanding - nothing prevents someone from writing assembly. It could also stem from a lack of official documentation on how to make the API calls from assembly. Another possible source is that you can’t access the hardware directly in OS/2, and direct hardware access was more closely associated with low-level programming.
Just as a brief explanation: this was in the late 80s. I was using a device called an imaging photon detector to detect individual photo events. This was the back end of an early electronic camera running at about ISO 1G. Not a digital camera - the photo events were recorded as analogue xy coordinates, with a resolution of about 400x400 and a photon rate of about 100k/s (not an exact figure - as the rate increased, you lost an increasing number of events due to clashes). I had a load of wire-wrap to rearrange the bits before they were presented to the computer (bit-bashing on the 80386/33 wasn't fast enough), then the xy coordinates were presented to some sort of PIO. In some cases I wanted to take video, hence the hardware clock, since the nearest thing available on the PC only ran at something like 13.3Hz and was used for polling the mouse. There was no GUI available at the time, so I built my own.
I don't remember a huge amount of detail, and the source is on 5¼" floppy, but as I remember it none of this was difficult for a very amateur self-taught C/asm programmer. The only significant problem was working out that the OS/2 1.0 documentation of the mouse system lied (it didn't have the display capability claimed, and attempting to use it shut down mouse polling), and third party manuals just copied the lies without testing. That was easy enough to work out, and building a couple of threads to display the mouse cursor was easy.
I think I'm going to see if I can find a service which will read the disk - it would be nice to have the code, though I remember there was a lot of stuff #defined out where I would have used SCCS once I had it available - hence it was a bit messy.
One little vexation was that although I had a vast 64MB hard disk, and a 1GB (bites tip of little finger) optical WORM drive, I had to reboot into MS-DOS to use the WORM drive (no OS/2 drivers), and because one of the operating systems couldn't handle more than 32MB disk size, I had to partition the hard disk.
Oh, the screen would go to snow often, and SideKick would bring it right back.
How well did OS/2 handle the text modes for VGA?
I regularly ran in 50-line VGA mode with zero problems. One could session-switch between full-screen OS/2 TUI programs (that were genuinely operating the hardware in VGA text mode, not simulating it with the hardware itself being used in graphics mode as became the norm with other operating systems much later and which OS/2 itself never got around to) and the Presentation Manager desktop.
I even had a handy LINES 50 command that wrapped the VIO mode change function, which I gave to the world as one of many utilities accompanying my 32-bit CMD, and which in 32-bit form was layered on top of a debugged clean-room reimplementation of IBM's never-properly-shipped-outwith-the-Developers'-Toolkit 32-bit version of the VIO API.
You can still download it from Hobbes, today.
* https://hobbesarchive.com/?detail=/pub/os2/util/shell/32-bit...
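A LINES-style wrapper over the VIO mode-change call only needs a few lines. A hypothetical sketch, not the actual utility from the archive, assuming the adapter supports a 50-row text mode:

    #define INCL_VIO
    #include <os2.h>

    /* Switch the current full-screen session to a 50-line text mode. */
    int main(void)
    {
        VIOMODEINFO mi;
        mi.cb = sizeof(mi);
        if (VioGetMode(&mi, 0) != 0)   /* 0 = default VIO handle */
            return 1;
        mi.row = 50;                   /* keep columns, colours, etc. as they are */
        return VioSetMode(&mi, 0) ? 1 : 0;
    }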
300 Megabytes!!!
I had all my source code on it, archives, utilities, compilers, the whole shebang!
[1] http://www.bitsavers.org/pdf/microSolutions/MicroSolutions_P...
I was thinking about this recently and considering writing a blog post about it, nothing feels more motivational than being told "that's impossible." I implemented a pure CSS draggable a while back when I was told it's impossible.
For some while I read people saying that one could not do full system management with daemontools and wholly eliminate van Smoorenburg init and rc, despite Paul Jarc having shown how svscan as process 1 would actually work and Gerrit Pape leading the way with runit-init and demonstrating the basic idea.
* https://code.dogmap.org/svscan-1/
It was one of the motivating factors in the creation of nosh, to show that what one does is exercise a bit of imagination, take the daemontools (or daemontools-encore) service management, and fairly cleanly layer separate system management on top of that. Gerrit Pape pioneered the just-3-shell-scripts approach, and I extended that idea with some notions from AIX, SunOS/Solaris, AT&T System 5, and others. The service manager and the system manager did not have to be tightly coupled into a single program, either. That was another utter bunkum claim.
* https://jdebp.uk/Softwares/nosh/#SystemMangement
* https://jdebp.uk/Softwares/nosh/guide/new-interfaces.html
Laurent Bercot demonstrated the same thing with s6 and s6-rc. (For clarity: Where M. Bercot talks of "supervision" I talk of "service management" and where M. Bercot talks of "service management" as the layer above supervision I talk of "system management".)
* https://skarnet.org/software/s6-rc/overview.html
The fallacy was still going strong in 2016, some years afterwards, here on Hacker News even.
Sometimes it was a very clueless manifestation of the telephone-game effect, where the fact that the OS/2 API was designed to be easily callable from high-level languages, without all of the fiddling about with inline assembly language, compiler intrinsics, or C library calls that one did to call the DOS API, could morph into a patently ridiculous claim that one could not write OS/2 applications in assembly language.
Sometimes, though, it was (as we all later found out) deliberate distortion by marketing people.
Patently ridiculous? Yes, to anyone who actually programmed. The book that everyone who wanted to learn how to program OS/2 would have bought in the early years was Ed Iacobucci's OS/2 Programmer's Guide, the one that starts off with the famous "most important operating system, and possibly program, of all time" quotation by Bill Gates. Not only are examples dotted throughout the book in (macro) assembly language, there are some 170 pages of assembly language program listings in appendix E.
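To illustrate the contrast that got garbled by the telephone game: calling the DOS API from C meant loading registers and issuing INT 21h, while the OS/2 API was an ordinary (far, pascal-convention) function call. A rough sketch, assuming a 16-bit small-memory-model compiler; the function names are invented:

    #include <dos.h>        /* intdos(), union REGS */
    #include <string.h>

    /* DOS: write to stdout via INT 21h, function 40h (write to handle). */
    void dos_write(const char *msg)
    {
        union REGS r;
        r.h.ah = 0x40;                  /* AH = 40h: write                */
        r.x.bx = 1;                     /* BX = handle 1 (stdout)         */
        r.x.cx = (unsigned)strlen(msg); /* CX = byte count                */
        r.x.dx = (unsigned)msg;         /* DS:DX -> buffer (small model)  */
        intdos(&r, &r);
    }

versus the 16-bit OS/2 equivalent, a plain documented call:

    #define INCL_DOS
    #include <os2.h>

    void os2_write(const char *msg, USHORT len)
    {
        USHORT written;
        DosWrite(1, (PVOID)msg, len, &written);  /* handle 1 = standard output */
    }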
It was cool, but don’t forget that you could do the same thing with MP/M on an 8-bit machine in late 1979.
Even Microsoft developed a similar operating system a year later, but never released it. The code name was M-DOS, or MIDAS, depending on who you ask.
With meta-classes, implementation inheritance across multiple languages, and much better tooling in the OS tier 1 languages.
And of course COM does do implementation inheritance: despite all the admonitions to the contrary, that’s what aggregation is! If you want a more conventional model and even some surprisingly fancy stuff like the base methods governing the derived ones and not vice versa, BETA-style, then WinRT inheritance[1] is a very thin layer on top of aggregation that accomplishes that. Now if only anybody at Microsoft bothered to document it. As in, at all.
(I don’t mean to say COM is my ideal object model/ABI. That would probably be a bit closer to Objective-C: see the Maru[2]/Cola/Idst[3] object model and cobj[4,5] for the general direction.)
[1] https://www.interact-sw.co.uk/iangblog/2011/09/25/native-win...
[2] https://web.archive.org/web/20250507145031/https://piumarta....
[3] https://web.archive.org/web/20250525213528/https://www.piuma...
[4] https://dotat.at/@/2007-04-16-awash-in-a-c-of-objects.html
Aggregation is not inheritance, rather a workaround using delegation. And it has always been a bit of a pain to set up, if one wants to avoid writing all the boilerplate by hand.
As for WinRT, I used to hold it in high regard, until Microsoft management managed to kill everything good that UWP was all about, and now only folks that cannot avoid it, or Microsoft employees on the Windows team, care about its existence.
Maybe? I have to admit I know much more about Smalltalk internals than I ever did about actually architecting programs in it, so I’ll need to read up on that, I guess. If they were trying to sell their environment to the PC programmer demographic, then their marketing was definitely mistargeted, but I never considered that the utility was obvious to them rather than the whole thing being an academic exercise.
> Aggregation is not inheritance, rather a workaround, using delegation. And it has been always a bit of the pain to [...] avoid writing all the boilerplate by hand.
Meh. Yes, the boilerplate is and always had been ass, and it isn’t nice that the somewhat bolted-on nature of the whole thing means most COM classes don’t actually support being aggregated. Yet, ultimately, (single) implementation inheritance amounts to two things: the derived object being able to forward messages to the base one—nothing but message passing needed for that; and the base object being able to send messages to the most derived one—and that’s what pUnkOuter is for. That’s it. SOM’s ability to allocate the whole thing in one gulp is nice, I’d certainly rather have it than not, but it’s not strictly necessary.
Related work: America (1987), “Inheritance and subtyping in a parallel object-oriented language”[1] for the original point; Fröhlich (2002), “Inheritance decomposed”[2], for a nice review; and Tcl’s Snit[3] is a nice practical case study of how much you can do with just delegation.
> As for WinRT, I used to have it in high regard, until Microsoft management managed to kill everything good that UWP was all about [...].
Can’t say I weep for UWP as such; felt like the smartphonification of the last open computing platform was coming (there’s a reason why Valve got so scared). As for WinRT, I mean, I can’t really feel affection for anything Microsoft releases, not least because Microsoft management definitely doesn’t, but that doesn’t preclude me from appreciating how WinRT expresses seemingly very orthodox (but in reality substantially more dynamic) implementation inheritance in terms of COM aggregation (see link in my previous message). It’s a very nice technical solution that explains how the possibility was there from the very start.
[1] https://link.springer.com/chapter/10.1007/3-540-47891-4_22
[2] https://web.archive.org/web/20060926182435/http://www.cs.jyu...
IBM wasn’t selling to developers. SOM was first and foremost targeting PHBs, who were on a mission to solve the “software crisis”. We somehow managed to easily escape that one despite no one coming up with a silver-bullet solution.
The talk was meaningful, don’t get me wrong, I just don’t see how it could sell anything to a non-technical audience.
CORBA had a ton of “technobabble”, too: It wasn’t there to make the standard better for developers.
And I would vouch that REST/GraphQL with SaaS products finally managed to achieve that vision of software factories; nowadays a big part of my work is connecting SaaS products, e.g. a frontend in some place, maybe with a couple of microservices, plugged into CMS, e-commerce, payment, marketing, and notification SaaS products.
Although the designers of COM had the right idea, they implemented delegation in just about the worst way possible. Instead of implementing true delegation with multiple interface inheritance, similar to traits, they went pure braindead compositional. The result: unintuitive APIs that led to incomprehensible chains of QueryInterface calls.
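For a flavour of what that looks like from the client side, the classic shortcut-saving sequence hops from interface to interface on one and the same object. A hedged sketch in C, assuming CoInitialize has already been called and ignoring error details:

    #include <windows.h>
    #include <shlobj.h>     /* IShellLinkW, IPersistFile, CLSID_ShellLink */

    /* Create a shell link, set its target, then QueryInterface over to
       IPersistFile just to be able to save it. */
    HRESULT save_link(const wchar_t *target, const wchar_t *where)
    {
        IShellLinkW *link = NULL;
        IPersistFile *file = NULL;
        HRESULT hr = CoCreateInstance(&CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                                      &IID_IShellLinkW, (void **)&link);
        if (SUCCEEDED(hr))
            hr = link->lpVtbl->SetPath(link, target);
        if (SUCCEEDED(hr))   /* the hop: same object, different interface */
            hr = link->lpVtbl->QueryInterface(link, &IID_IPersistFile, (void **)&file);
        if (SUCCEEDED(hr))
            hr = file->lpVtbl->Save(file, where, TRUE);
        if (file) file->lpVtbl->Release(file);
        if (link) link->lpVtbl->Release(link);
        return hr;
    }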
It seems the Windows team is against having something like the VB 6, Delphi, C++ Builder, .NET Framework, or MFC approaches to COM tooling, just out of principle.
Thus we end up with low-level clunky code, with endless calls to specific APIs like QueryInterface(), manually written boilerplate code, and, with the IDL tools, manually merging generated code, because they were not designed to take existing code into account.
The dark corner of COM was IDispatch.
As for IDispatch, it’s indeed underdocumented—there’s some stuff in the patents that goes beyond the official docs but it’s not much—and also has pieces that were simply never used for anything, like the IID and LCID arguments to GetIDsOfNames. Thankfully, it also sucks: both from the general COM perspective (don’t take it from me, take it from Box et al. in Effective COM) and that of the problem it solves (literally the first contact with a language that wasn’t VB resulted in IDispatchEx, changing the paradigm quite substantially). So there isn’t much of an urge to do something like it for fun. Joel Spolsky’s palpable arrogance about the design[1,2] reads quite differently with that in mind.
[1] https://www.joelonsoftware.com/2000/03/19/two-stories/ (as best as I can tell, the App Architecture villains were attempting to sell him on Emacs- or Eclipse-style extensibility, and he failed to understand that)
[2] https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...
But seeing it laid out as just the multi-tasking kernel that it is, it seems more obvious now as a major foundational upgrade of MS-DOS.
Great read!
http://iowa.gotthefacts.org/011107/PX_0860.pdf
'The demos of OS/2 were excellent. Crashing the system had the intended effect – to FUD OS/2 2.0. People paid attention to this demo and were often surprised to our favor. Steve positioned it as -- OS/2 is not "bad" but that from a performance and "robustness" standpoint, it is NOT better than Windows'.
However, it was the only operating system that I have ever used (before or since) that had some issues with its provided disk drivers that ended up deleting data and corrupting its own fresh install... so, it didn't last long for me...
Wasn't it mostly an IBM product, with Microsoft being involved only in the beginning?
I worked as a trainer at a commercial training company that used the Glockenspiel C++ compiler that required OS/2. It made me sad. NT made me happy.
Even when marketing people etc. got enthused enough that the project got official support and release, it was not expected to be such a hit of a release early on, and the expectation was that the OS/2 effort would continue, if perhaps with a different kernel.
I'm assuming that all of it was written mainly, if not solely, by Microsoft.
https://archive.org/details/showstopperbreak00zach