Then there's this:
> All of the OS/2 API routines use the Pascal extended keyword for their calling convention so that arguments are pushed on the stack in the opposite order of C. The Pascal keyword does not allow a system routine to receive a variable number of arguments, but the code generated using the Pascal convention is smaller and faster than the standard C convention.
Did this choice of a small speed boost over compatibility ever haunt the decision makers, I wonder? At the time, the speed boost probably was significant at the ~MHz clock speeds these machines ran at, and Moore's Law had only just gotten started. Maybe I tend to lean in the direction of compatibility, but this seemed like a weird choice to me. Then, in that same paragraph:

> Also, the stack is restored by the called procedure rather than the caller.

What could possibly go wrong?
> What could possibly go wrong?
This is still the case for non-vararg __stdcall functions used by Win32 and COM. (The argument order was reversed compared to Win16’s __far __pascal.) By contrast, the __syscall convention that 32-bit OS/2 switched to uses caller cleanup (and passed some arguments in registers).
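Concretely, in 32-bit Windows C/C++ the two conventions show up as declaration annotations like these (hypothetical functions for illustration; on x64 the annotations are accepted but ignored, since there is only one convention):

    // Callee cleans the stack, arguments pushed right-to-left; the WINAPI
    // macro used throughout the 32-bit Win32 headers expands to __stdcall.
    int __stdcall Add(int a, int b);

    // Varargs functions need caller cleanup (__cdecl): only the caller
    // knows how many bytes of arguments it actually pushed.
    int __cdecl SumAll(int count, ...);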
The Windows 3.0 effort was initially disguised as an update to this before management could be convinced to support the project.
There is nothing wrong with using this calling convention, except for those specific functions that need to take a variable number of arguments - and why not handle those few differently instead, unless you're using a braindead compiler/language that doesn't keep track of how functions are declared?
Moreover, it can actually support tail calls between functions of arbitrary (non-vararg) signatures.
I think it is a big pity that contemporary mainstream x86[-64] calling conventions (both Windows and the SysV ABI used by Linux and almost everybody else) don’t pass the argument count in a register for varargs functions. This means there is no generic way for a varargs function to know how many arguments it was called with - some functions use a sentinel value (often NULL), for some one of the arguments contains an embedded DSL you need to parse (e.g. printf and friends). Using obtuse preprocessor magic you can make a macro with the same name as your function which automatically passes its argument count as a parameter, but that is rarely actually done.
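For illustration, a minimal sketch of that preprocessor trick (all names made up; this counter caps at 8 arguments and doesn't handle the zero-argument case):

    #include <cstdarg>
    #include <cstdio>

    // Count the arguments by letting them shift a descending number list.
    #define COUNT_ARGS(...) COUNT_ARGS_IMPL(__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1)
    #define COUNT_ARGS_IMPL(_1, _2, _3, _4, _5, _6, _7, _8, N, ...) N

    // A varargs function that expects its argument count as the first parameter.
    static int sum_n(int n, ...) {
        va_list ap;
        va_start(ap, n);
        int total = 0;
        for (int i = 0; i < n; i++) total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    // The macro front end supplies the count automatically.
    #define sum(...) sum_n(COUNT_ARGS(__VA_ARGS__), __VA_ARGS__)

    int main() {
        std::printf("%d\n", sum(1, 2, 3));  // prints 6
    }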
The OpenVMS calling convention - including the modified version of the SysV ABI which the OpenVMS x86-64 port uses - passes the argument count of varargs function calls in a register (eax), which is then available via the va_count macro. I don’t know why Windows/Linux/etc. didn’t copy this idea - I wish they had - but it is too late now.
...in the end it's just another calling convention which you annotate your system header functions with. AmigaOS had a vastly different (very assembly friendly) calling convention for OS functions which exclusively(?) used CPU registers to pass arguments. C compilers simply had to deal with it.
> What could possibly go wrong?
...totally makes sense though when the caller passes arguments on the stack?
E.g. you probably have something like this in the caller:
    push arg3       => place arg 3 on stack
    push arg2       => place arg 2 on stack
    push arg1       => place arg 1 on stack
    call function   => places return address on stack
...if the called function would clean up the stack, it would also delete the return address needed by the return instruction (which pops the return address from the top of the stack and jumps to it). (OK, x86 has the special `ret imm16` instruction, which adjusts the stack pointer after popping the return address, but I guess not all CPUs could do that back then.)
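In x86 terms, the two variants differ only in who drops those three argument slots (continuing the sketch above):

    add esp, 12     => caller cleanup (C convention): done by the caller right after `call` returns
    ret 12          => callee cleanup (Pascal convention): the function's own `ret` drops the arguments itself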
The rule of thumb I had heard and followed was that if something could take longer than 500ms you should get off the UI thread and do it in a separate thread. You'd disable any UI controls until it was done.
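The shape of that pattern, as a framework-agnostic sketch (the "UI toolkit" call is a hypothetical print stub; a real app would marshal the re-enable back to the UI thread via PostMessage/invokeLater and would not join):

    #include <chrono>
    #include <cstdio>
    #include <thread>

    // Hypothetical stand-in for a real toolkit call.
    static void set_controls_enabled(bool on) {
        std::printf("controls %s\n", on ? "enabled" : "disabled");
    }

    static void on_button_click() {
        set_controls_enabled(false);  // grey out the UI before leaving the UI thread
        std::thread worker([] {
            // the potentially-longer-than-500ms work happens off the UI thread
            std::this_thread::sleep_for(std::chrono::seconds(1));
            set_controls_enabled(true);  // real code: post this back to the UI thread
        });
        worker.join();  // demo only; a real handler would return immediately
    }

    int main() { on_button_click(); }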
I had (and likely have lost forever) a Boot disk with OS/2, and my Forth/2 system on it that could do directory listings while playing Toccata and Fugue in D minor in a different thread.
I wrote Forth/2 out of pure spite, because somehow I heard that it just wasn't possible to write OS/2 applications in assembler. Thanks to a copy of the OS/2 development kit from Ward Christensen (who worked at IBM), and a few months of spare time, Forth/2 was born, written in pure assembler, compiling to directly threaded native code. Brian Matthewson from Case Western wrote the manual for it. Those were fun times.
Just to be clear, when you say "without the need for the GUI", more accurately that's "without a GUI" (without Presentation Manager). So you're using OS/2 in an 80x25 console screen - what would appear to be a very good DOS box.
It exists in that interesting but obsolete interstitial space, alongside BeOS, of very well done single-user OSes.
Btw, I'd love to know where this idea about no assembler programs on OS/2 came from. There was no problem at all with using MASM, with CodeView to debug on a second screen.
I was thinking about this recently and considering writing a blog post about it - nothing feels more motivational than being told "that's impossible." I implemented a pure CSS draggable a while back when I was told it's impossible.
With meta-classes, implementation inheritance across multiple languages, and much better tooling in the OS tier 1 languages.
And of course COM does do implementation inheritance: despite all the admonitions to the contrary, that’s what aggregation is! If you want a more conventional model and even some surprisingly fancy stuff like the base methods governing the derived ones and not vice versa, BETA-style, then WinRT inheritance[1] is a very thin layer on top of aggregation that accomplishes that. Now if only anybody at Microsoft bothered to document it. As in, at all.
(I don’t mean to say COM is my ideal object model/ABI. That would probably be a bit closer to Objective-C: see the Maru[2]/Cola/Idst[3] object model and cobj[4,5] for the general direction.)
[1] https://www.interact-sw.co.uk/iangblog/2011/09/25/native-win...
[2] https://web.archive.org/web/20250507145031/https://piumarta....
[3] https://web.archive.org/web/20250525213528/https://www.piuma...
[4] https://dotat.at/@/2007-04-16-awash-in-a-c-of-objects.html
Aggregation is not inheritance, rather a workaround using delegation. And it has always been a bit of a pain to set up, if one wants to avoid writing all the boilerplate by hand.
As for WinRT, I used to have it in high regard, until Microsoft management managed to kill everything good that UWP was all about, and now only folks that cannot avoid it, or Microsoft employees on the Windows team, care about its existence.
Maybe? I have to admit I know much more about Smalltalk internals than I ever did about actually architecting programs in it, so I’ll need to read up on that, I guess. If they were trying to sell their environment to the PC programmer demographic, then their marketing was definitely mistargeted, but I never considered that the utility was obvious to them rather than the whole thing being an academic exercise.
> Aggregation is not inheritance, rather a workaround using delegation. And it has always been a bit of a pain to [...] avoid writing all the boilerplate by hand.
Meh. Yes, the boilerplate is and always has been ass, and it isn’t nice that the somewhat bolted-on nature of the whole thing means most COM classes don’t actually support being aggregated. Yet, ultimately, (single) implementation inheritance amounts to two things: the derived object being able to forward messages to the base one—nothing but message passing needed for that; and the base object being able to send messages to the most derived one—and that’s what pUnkOuter is for. That’s it. SOM’s ability to allocate the whole thing in one gulp is nice, I’d certainly rather have it than not, but it’s not strictly necessary.
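To make that concrete, here's a toy sketch of the mechanism (not real COM: no HRESULTs, ref counting, or GUIDs; only the IUnknown/QueryInterface/pUnkOuter shapes are faithful):

    #include <cstdio>
    #include <cstring>

    // Toy stand-in for COM's IUnknown (real COM adds AddRef/Release and
    // returns HRESULTs; this keeps only what matters for aggregation).
    struct IUnknown {
        virtual void* QueryInterface(const char* iid) = 0;
        virtual ~IUnknown() = default;
    };

    struct IGreet : IUnknown {
        virtual void Greet() = 0;
    };

    // The inner ("base") object. When aggregated it is constructed with a
    // pointer to the outer unknown and delegates identity questions to it,
    // so the inner/outer pair behaves as a single object.
    class Inner : public IGreet {
        IUnknown* outer_;  // pUnkOuter: how the base reaches the most-derived object
    public:
        explicit Inner(IUnknown* outer) : outer_(outer) {}
        void* QueryInterface(const char* iid) override {
            return outer_->QueryInterface(iid);  // forward to the controlling unknown
        }
        void Greet() override { std::puts("greetings from the base implementation"); }
    };

    // The outer ("derived") object reuses Inner's IGreet wholesale instead
    // of hand-writing per-method forwarding boilerplate.
    class Outer : public IUnknown {
        Inner inner_;
    public:
        Outer() : inner_(this) {}
        void* QueryInterface(const char* iid) override {
            if (std::strcmp(iid, "IGreet") == 0) return static_cast<IGreet*>(&inner_);
            if (std::strcmp(iid, "IUnknown") == 0) return static_cast<IUnknown*>(this);
            return nullptr;
        }
    };

    int main() {
        Outer obj;
        // Clients see one object; its IGreet just happens to live in the inner part.
        auto* greet = static_cast<IGreet*>(obj.QueryInterface("IGreet"));
        greet->Greet();
    }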
Related work: America (1987), “Inheritance and subtyping in a parallel object-oriented language”[1] for the original point; Fröhlich (2002), “Inheritance decomposed”[2], for a nice review; and Tcl’s Snit[3] is a nice practical case study of how much you can do with just delegation.
> As for WinRT, I used to have it in high regard, until Microsoft management managed to kill everything good that UWP was all about [...].
Can’t say I weep for UWP as such; felt like the smartphonification of the last open computing platform was coming (there’s a reason why Valve got so scared). As for WinRT, I mean, I can’t really feel affection for anything Microsoft releases, not least because Microsoft management definitely doesn’t, but that doesn’t preclude me from appreciating how WinRT expresses seemingly very orthodox (but in reality substantially more dynamic) implementation inheritance in terms of COM aggregation (see link in my previous message). It’s a very nice technical solution that explains how the possibility was there from the very start.
[1] https://link.springer.com/chapter/10.1007/3-540-47891-4_22
[2] https://web.archive.org/web/20060926182435/http://www.cs.jyu...
IBM wasn’t selling to developers. SOM was first and foremost targeting PHBs, who were on a mission to solve the “software crisis”. We somehow managed to escape that one easily despite no one coming up with a silver-bullet solution.
The talk was meaningful, don’t get me wrong, I just don’t see how it could sell anything to a non-technical audience.
CORBA had a ton of “technobabble”, too: It wasn’t there to make the standard better for developers.
Although the designers of COM had the right idea, they implemented delegation in just about the worst way possible. Instead of implementing true delegation with multiple interface inheritance, similar to traits, they went pure braindead compositional. The result: unintuitive APIs that led to incomprehensible chains of QueryInterface calls.
The dark corner of COM was IDispatch.
As for IDispatch, it’s indeed underdocumented—there’s some stuff in the patents that goes beyond the official docs but it’s not much—and also has pieces that were simply never used for anything, like the IID and LCID arguments to GetIDsOfNames. Thankfully, it also sucks: both from the general COM perspective (don’t take it from me, take it from Box et al. in Effective COM) and that of the problem it solves (literally the first contact with a language that wasn’t VB resulted in IDispatchEx, changing the paradigm quite substantially). So there isn’t much of an urge to do something like it for fun. Joel Spolsky’s palpable arrogance about the design[1,2] reads quite differently with that in mind.
[1] https://www.joelonsoftware.com/2000/03/19/two-stories/ (as best as I can tell, the App Architecture villains were attempting to sell him on Emacs- or Eclipse-style extensibility, and he failed to understand that)
[2] https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...
But seeing it laid out as just the multi-tasking kernel that it is, it seems more obvious now as a major foundational upgrade of MS-DOS.
Great read!
wkjagt•12h ago
Wasn't it mostly an IBM product, with Microsoft being involved only in the beginning?
zabzonk•11h ago
I worked as a trainer at a commercial training company that used the Glockenspiel C++ compiler that required OS/2. It made me sad. NT made me happy.
p_l•10h ago
Even when marketing people and others got enthused enough that the project received official support and a release, it was not expected to be such a hit early on, and the expectation was that the OS/2 effort would continue, if perhaps with a different kernel.
chasil•10h ago
I'm assuming that all of it was written mainly, if not solely, by Microsoft.
Hilift•6h ago
https://archive.org/details/showstopperbreak00zach