RAM chips usually have a "chip enable" (CE) pin. You might have chips with 4k of addresses that are 8 bits wide [1] and fill out the 64k address space with 16 such chips, feeding the least significant 12 bits to every RAM chip and the most significant 4 bits to a 1-of-16 decoder whose outputs drive the chip-enable pins. All of the RAM chips are on the bus, but only the one with CE asserted responds.
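A minimal sketch of that decode in C, just to make the bit-slicing concrete (the sizes and names are mine, not any particular machine's):

    #include <stdint.h>
    #include <stdio.h>

    /* 16 chips x 4 KiB = 64 KiB, mirroring the decode described above. */
    #define NUM_CHIPS 16
    #define CHIP_SIZE 4096

    static uint8_t ram[NUM_CHIPS][CHIP_SIZE];

    /* The low 12 bits go to every chip; the high 4 bits pick which
       chip-enable line is asserted, so only one chip answers. */
    static uint8_t bus_read(uint16_t addr)
    {
        unsigned chip   = addr >> 12;      /* 1-of-16 decoder input */
        unsigned offset = addr & 0x0FFF;   /* wired to all chips    */
        return ram[chip][offset];
    }

    static void bus_write(uint16_t addr, uint8_t value)
    {
        ram[addr >> 12][addr & 0x0FFF] = value;
    }

    int main(void)
    {
        bus_write(0xA123, 0x42);           /* chip 10, offset 0x123 */
        printf("%02X\n", bus_read(0xA123));
        return 0;
    }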
The same kind of thinking can be applied to extend the address space past 16 bits: for instance, you poke a value into some hardware register and that determines which chip-enable line gets set. There is really no limit on how much RAM you could attach to an 8-bit machine this way.
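Here's a hedged sketch of that idea: a hypothetical write-only bank latch selecting which of 256 physical 16 KiB banks appears in a fixed window, for 4 MiB total (an 8-bit latch; a wider one scales further). The window and latch addresses are invented for illustration; real machines put them wherever was convenient:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical scheme: the top 16 KiB of the CPU's 64 KiB map is a
       window, and an 8-bit write-only latch picks which of 256 physical
       16 KiB banks appears there -- 4 MiB total from an 8-bit machine. */
    #define BANK_SIZE  0x4000
    #define NUM_BANKS  256
    #define WINDOW     0xC000          /* window base in the CPU map */
    #define BANK_LATCH 0xBFFF          /* made-up register address   */

    static uint8_t fixed_ram[0xC000];                /* always mapped  */
    static uint8_t banked_ram[NUM_BANKS][BANK_SIZE]; /* 4 MiB of banks */
    static uint8_t current_bank;

    static void bus_write(uint16_t addr, uint8_t v)
    {
        if (addr == BANK_LATCH)  current_bank = v;   /* the "poke"     */
        else if (addr >= WINDOW) banked_ram[current_bank][addr - WINDOW] = v;
        else                     fixed_ram[addr] = v;
    }

    static uint8_t bus_read(uint16_t addr)
    {
        if (addr >= WINDOW) return banked_ram[current_bank][addr - WINDOW];
        return fixed_ram[addr];
    }

    int main(void)
    {
        bus_write(BANK_LATCH, 7);           /* select bank 7 */
        bus_write(0xC000, 0xAA);
        bus_write(BANK_LATCH, 8);           /* select bank 8 */
        printf("%02X\n", bus_read(0xC000)); /* different bank, not 0xAA */
        return 0;
    }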
A really advanced bank switching scheme for an 8-bitter was on the TRS-80 Color Computer 3
https://www.chibiakumas.com/6809/coco3.php
where the 64k address space was divided into 8k blocks which might be backed by 128kB (minimum), 512kB (max from Radio Shack) or more RAM, and you poked into a table which mapped virtual blocks to physical blocks (see the sketch below). That wasn't too different from a modern memory management unit, greatly scaled down, with the exception that systems like that rarely if ever had a true "executive" mode, so nothing stopped user-mode software from poking those registers to change the memory map. The CoCo, for instance, had a multitasking OS called OS-9 that did multitasking like described in the article: the original Color Computer ran Level I, and on the CoCo 3 you could get Level II, which supported the extra memory and, so long as nothing poked at those registers, gave you some memory protection.
[1] at least you did in 1979.
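A sketch of how that table-driven mapping behaves, modeled loosely on the CoCo 3's GIME. I believe the active task's map registers sat at $FFA0-$FFA7, but treat the addresses here as illustrative rather than gospel:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of a CoCo 3 style MMU: the 64 KiB CPU space is eight 8 KiB
       slots, each mapped to one of 64 physical 8 KiB blocks (512 KiB). */
    #define SLOT_SIZE  0x2000
    #define NUM_SLOTS  8
    #define NUM_BLOCKS 64                  /* 64 x 8 KiB = 512 KiB */

    static uint8_t physical[NUM_BLOCKS][SLOT_SIZE];
    static uint8_t mmu_map[NUM_SLOTS];     /* the table you poke */

    static uint8_t cpu_read(uint16_t addr)
    {
        unsigned slot = addr >> 13;              /* top 3 bits: slot     */
        unsigned off  = addr & (SLOT_SIZE - 1);  /* low 13 bits: offset  */
        return physical[mmu_map[slot]][off];
    }

    static void cpu_write(uint16_t addr, uint8_t v)
    {
        if (addr >= 0xFFA0 && addr <= 0xFFA7)    /* poke the map table   */
            mmu_map[addr - 0xFFA0] = v & (NUM_BLOCKS - 1);
        else
            physical[mmu_map[addr >> 13]][addr & (SLOT_SIZE - 1)] = v;
    }

    int main(void)
    {
        cpu_write(0xFFA2, 63);    /* map slot 2 ($4000-$5FFF) to block 63 */
        cpu_write(0x4000, 0x99);  /* lands in physical block 63           */
        printf("%02X\n", cpu_read(0x4000));
        return 0;
    }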
I think you meant KB here, but now I'm also wondering how many MB you -could- actually scale to, and what the overhead would be due to the number of banks to switch between...
If that is how the bank switching was done, then that's fascinating and very surprising, and I would love to hear more about it.
I myself made a Coin-Operated Telephone server, but by 1978 the processor was already the cheaper and faster 8085.
I think this essay is where it originated: https://paulgraham.com/hackernews.html
I recall spending a few days mulling over the exact sequence of instructions to save the state of the previous task without clobbering the flags or any registers.
The result was that I could register functions (procedures) as tasks with their own little stack, and it would switch preemptively between them in a round-robin fashion.
I'm not familiar with Z80 asm but from what I can gather it looks very similar to what I had. I was running in real mode so also had very limited resources for each task, and a hardcoded upper limit on the number of tasks.
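For flavor, the overall shape translates to a handful of lines of modern C with POSIX ucontext; each task gets its own little stack and the scheduler round-robins between them. Explicit yields stand in for the timer interrupt that made the real thing preemptive, and all the names and sizes are invented:

    #include <stdio.h>
    #include <ucontext.h>

    #define MAX_TASKS  4
    #define STACK_SIZE 16384

    static ucontext_t tasks[MAX_TASKS];
    static char       stacks[MAX_TASKS][STACK_SIZE];
    static ucontext_t scheduler;
    static int        ntasks, current;

    /* In the real thing a timer interrupt would grab control here;
       in this sketch each task yields back to the scheduler itself. */
    static void yield(void)
    {
        swapcontext(&tasks[current], &scheduler);
    }

    static void worker(void)
    {
        int id = current;                  /* captured at first run */
        for (int i = 0; i < 3; i++) {
            printf("task %d, step %d\n", id, i);
            yield();
        }
    }

    static void register_task(void (*fn)(void))
    {
        ucontext_t *t = &tasks[ntasks];
        getcontext(t);
        t->uc_stack.ss_sp   = stacks[ntasks];  /* its own little stack  */
        t->uc_stack.ss_size = STACK_SIZE;
        t->uc_link          = &scheduler;      /* return here when done */
        makecontext(t, fn, 0);
        ntasks++;
    }

    int main(void)
    {
        register_task(worker);
        register_task(worker);
        /* Round-robin: run each task in turn until it yields. */
        for (int round = 0; round < 4; round++)
            for (current = 0; current < ntasks; current++)
                swapcontext(&scheduler, &tasks[current]);
        return 0;
    }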
While I'm wildly more productive these days, I kinda miss how not having internet made accomplishments so much greater. It's like walking up a mountain on your own vs taking a tour bus to the summit.
So I wrote a pre-emptive multitasking iRMX clone in C and a bit of assembler. Ended up using it to develop a mildly successful POS system running on a single PC with several VT100 terminals attached.
The experience of figuring these things out was tonnes of fun! There's nothing like following threads of assembly with a debugger or disassembler in the Amiga's ROM to get a better idea of how the code worked. And since systems were so much smaller in the 1980s, a single person really could understand virtually everything about the system with enough time and effort.
The biggest challenge for me was that the ROM Kernel Manuals were very expensive back then, so I wasn't able to get copies until far too late in my Amiga years (with Commodore being in its death throes).
Motorola and Intel were great back then as they would ship out printed copies of all the documentation for various CPUs and support chips for free upon request!
Good times!
Users were all connected via CRT terminals over serial, so you didn't need to waste any memory on screen buffers. This also reduced CPU usage, as a user wouldn't get any CPU time until they pressed a key, and the scheduler might queue up multiple key presses before even switching to the user's task (while still showing immediate feedback).
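One plausible way that's wired up (a toy sketch; every name here is invented, not any particular system's API): the UART receive interrupt queues bytes into a per-terminal ring buffer and marks the owning task runnable, so the task consumes CPU only when input is actually pending:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define QLEN 32

    struct terminal {
        uint8_t  queue[QLEN];
        unsigned head, tail;     /* ring buffer indices */
        bool     runnable;       /* scheduler only picks runnable tasks */
    };

    /* Called from the UART receive interrupt. Cheap: no task switch yet,
       so several key presses can pile up before the task next runs. */
    void uart_rx_isr(struct terminal *t, uint8_t byte)
    {
        unsigned next = (t->head + 1) % QLEN;
        if (next != t->tail) {           /* drop byte if queue is full */
            t->queue[t->head] = byte;
            t->head = next;
        }
        t->runnable = true;              /* wake the owning task */
    }

    /* Called by the user's task once the scheduler runs it. Drains one
       byte; the task blocks again when the queue is empty. */
    int term_getchar(struct terminal *t)
    {
        if (t->tail == t->head) {
            t->runnable = false;         /* nothing pending: block */
            return -1;                   /* scheduler switches away */
        }
        uint8_t b = t->queue[t->tail];
        t->tail = (t->tail + 1) % QLEN;
        return b;
    }

    int main(void)
    {
        struct terminal t = {0};
        uart_rx_isr(&t, 'h');            /* two keys arrive before the */
        uart_rx_isr(&t, 'i');            /* task is ever switched in   */
        int c;
        while ((c = term_getchar(&t)) != -1)
            putchar(c);
        putchar('\n');
        return 0;
    }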
They also had superior IO. Home computers were connected to slow floppy drives (at best; some were still using tape), so they typically tried to keep the whole working set in memory. But multi-user machines had relatively fast hard drives and could afford to shuffle the working set in and out of memory as needed, with DMA handling the actual transfers in the background.
As others have pointed out, these machines had bank switching. But I doubt such machines were ever configured with more than 256k of memory, with 128k being more typical. Wikipedia claims 32k was the absolute minimum, but with very little free memory, so I suspect 64k would be enough for smaller offices with 4-6 users.