The article mentions the desire for square pixels. So presumably they chose the 512-pixel horizontal resolution first and then picked the vertical resolution that gave them square pixels.
Whether this is enough to make it count as actually 32 bits is one for the philosophers.
(Fun fact: there was also the 68008, which was a 68k with an 8-bit bus!)
The reduced 24-bit address bus was never a significant bottleneck during its commercial lifetime, as little consumer software at the time required more than 4 MB of RAM, and by the time it did, the 486SX (32-bit buses, but no maths coprocessor) was the new value champion.
> the ISA bus was only 16-bits wide, which limited the utility of the 32-bit bus for fast graphics transfers.
Not only that, it ran at 8 MHz to match the speed of the fastest IBM AT. VLB on a 486/33 or /66 ran at 33 MHz and was a godsend: 8x the bandwidth of 16-bit ISA.
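That 8x figure roughly checks out if you treat bandwidth as bus width times clock (a crude proxy that ignores protocol overhead and wait states):

    # crude proxy: bits per transfer x transfers per second
    isa_bw = 16 * 8_000_000    # 16-bit ISA at 8 MHz
    vlb_bw = 32 * 33_000_000   # 32-bit VLB at 33 MHz
    print(vlb_bw / isa_bw)     # 8.25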
Externally it had a 16-bit data bus and 24 address bits. That is why we later got the 32-bit clean ROMs: Apple had used the upper, unused 8 address bits for flags.
> Internally, it uses a 16-bit data arithmetic logic unit (ALU) and two more 16-bit ALUs used mostly for addresses,[4] and has a 16-bit external data bus.
I will throw out there, though, that ALU width and bus width are generally seen as orthogonal to the 'bitness' of a processor, and more an implementation detail. The Z80 had a 4-bit ALU but is considered an 8-bit CPU. The PDP-8/S and SERV have single-bit ALUs but are considered 12-bit and 32-bit respectively. The 8088 is considered a 16-bit CPU despite having an 8-bit external bus.
'Bitness' is generally defined more as 'what is the width of whatever is the closest thing to a GPR'.
And for more evidence, the Z80 is referred to as an 8-bit processor but has a 4-bit ALU.
Most 32 bit operations are slower than 16 bit operations because the external data bus is only 16 bits and most operations use the external data bus. But simple internal ops are faster at 32 bits, so that seems to indicate the 68000 is 32 bit internally.
ADDQ and ADDX are better instructions to look at, as are any with a Dn,Dn addressing mode. The long and word cases are the same number of instruction bytes, but the long case is still slower.
(Register-to-register moves are the same regardless of width, so presumably it has a 32 bit path for this. That's nice. But not as nice as it would be if it had a 32 bit path for everything. Which it really looks like it doesn't. This CPU has registers, but that can't save it.)
The separation of data and address registers is also a result of how the design evolved over time, AFAIK, ultimately because it made the CPU cheaper and easier to build. Another element is that the 68000 has (at least) two layers of microcode: the first microcode engine generates instructions that are interpreted by a second microcode engine, which finally drives the execution units.
I remember the "enable 32-bit addressing" part (but it's not pictured..)
The later 384 number corresponds to an exact 4:3 aspect ratio.
Extremely nitpicky thing, I know, but this kind of stuff really bugs me. Could somebody please clarify what the real size (and/or PPI) was here?
For reference:
512x324 @ 72 PPI = 8.42" (or 214 mm) (rounded)
512x342 @ 72 PPI = 8.55" (or 217 mm) (rounded)
512x384 @ 72 PPI = 8.89" (or 226 mm) (rounded)
The first two don't even yield an integer result for the number of diagonal pixels, let alone yield an integer multiple of 72. Or would there be bars around the screen, or how would this work?
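A quick way to reproduce those figures (a minimal Python sketch, assuming exactly 72 PPI and square pixels):

    import math

    PPI = 72  # assumed pixel density, square pixels
    for w, h in [(512, 324), (512, 342), (512, 384)]:
        diag_px = math.hypot(w, h)   # diagonal in pixels
        diag_in = diag_px / PPI      # diagonal in inches
        print(f'{w}x{h}: {w/PPI:.2f}" x {h/PPI:.2f}", '
              f'diagonal {diag_in:.2f}" ({diag_in*25.4:.0f} mm)')

That prints 8.42", 8.55", and 8.89" diagonals respectively, matching the rounded numbers above.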
It was a 9” tube with a 3:2 aspect ratio. Your calculation of an 8.5” image at 72 dpi sounds right.
That's also why TVs and monitors of that era always seemed smaller than advertised. I remember having to explain that to a lot of people.
Whilst the CRT is 9", according to period repair guides the screen should be adjusted so that the visible image is 7.11" x 4.75", pretty much exactly 1.5:1. That works out to 72 dpi, which was chosen to match the PostScript point size for print output and WYSIWYG.
So it's your 8.55" diagonal.
Some classic Macintosh users today are unaware of this screen size reasoning, or don't agree with it, and stretch the screen to fill the whole CRT. Yikes!
BTW, I posted pretty much the same info earlier today at https://news.ycombinator.com/item?id=44105531 — what synchronicity!
A 9" CRT would never be precisely 9", because beam trace width and height are analog, plus there's overscan, so a 9" screen would simply give something pretty close to 9".
… a limitation that many Macs, and even some iPhones, are still stuck with over 40 years later!
Same. I remember installing some program that would let you quickly change the display settings on basically every computer I ever interacted with. It was especially bad if the CRT was in a room with fluorescent lighting.
Though now that I think of it, the CRT should be syncing with the signal and there is no reason that sync needs to be related to the AC line, but it does anyway (all the computers I know of generate their own sync from a crystal, I have no idea where TV stations get their sync but I doubt AC line frequency).
> Matching the field refresh rate to the power source avoided intermodulation (also called beating), which produces rolling bars on the screen. Synchronization of the refresh rate to the power incidentally helped kinescope cameras record early live television broadcasts, as it was very simple to synchronize a film camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera.
(I suspect shows that were pre-recorded and telecined for broadcast would've also been filmed at 30fps using a synchronous AC motor.)
> In early TV systems, a master voltage-controlled oscillator was run at twice the horizontal line frequency, and this frequency was divided down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz power-line frequency and any discrepancy corrected by adjusting the frequency of the master oscillator.
I think later TVs would've just synchronized to the received signal.
https://en.wikipedia.org/wiki/NTSC#Resolution_and_refresh_ra...
- I've heard mixed reports over whether CRT monitors had faster-decaying phosphors than televisions. Maybe part of it is that a computer shows a mostly white image, which causes more noticeable flicker than a dark background with white text (or darker TV scenes).
I think it was actually the interlacing and not the refresh rate that did it.
but only half the lines at a time, so in effect every other line was flickering at 25 Hz
Since TFTs came along, it bothered me a lot less because of the lack of flicker (though some cheap 4-bit TN LCDs still had it with some colours).
* The original VGA and thus most MS-DOS games ran at 70 Hz.
Later on, with a graphics card that had more than 2 MB of RAM, I remember experimenting a lot with modelines to pull higher refresh rates and higher resolutions out of the 17" CRT I inherited when my father switched to a laptop :)
I know I found the flicker of CRTs annoying even at 60 Hz.
> The most important decision was admitting that the software would never fit into 64K of memory and going with a full 16-bit memory bus, requiring 16 RAM chips instead of 8. The extra memory bandwidth allowed him to double the display resolution, going to dimensions of 512 by 342 instead of 384 by 256
If you look at the specs for the machine, you see that during an active scan line, the video is using exactly half of the available memory bandwidth, with the CPU able to use the other half (during horizontal and vertical blanking periods the CPU can use the entire memory bandwidth)[1]. That dictated the scanline duration.
If the computer had any more scan lines, something would have had to give, as every nanosecond was already accounted for[2]. The refresh rate would have had to be lower, or the blanking periods shorter, or the memory bandwidth higher, or the memory bandwidth divided unevenly between the CPU and video, which was probably harder to implement. I don't know which of those things they would have been able to adjust and which were hard requirements of the hardware they could find, but I'm guessing that they couldn't do 384 scan lines given the memory bandwidth of the RAM chips and the blanking times of the CRT they selected, if they wanted to hit 60 Hz.
[1]https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...
[2]https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...
It looks like DRAM was set up on a 6-CPU-cycle period, as 512 bits (32 16-bit bus accesses) x 342 lines x 60 Hz x 6 cycles x 2 gives 7.87968 MHz, which is just slightly faster than the nominal 7.83 MHz, the remaining .6% presumably being spent during vblank.
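A quick check of that arithmetic (Python; the 6-cycle access period and the 2x factor for the video/CPU split are taken from the comment above, not from the hardware manual):

    CPU_CLOCK = 7.8336e6            # nominal 68000 clock of the 128K Mac, Hz
    accesses_per_line = 512 // 16   # 512 pixels at 1 bpp over a 16-bit bus = 32 fetches
    cycles_needed = accesses_per_line * 342 * 60 * 6 * 2  # lines x refresh x cycles/access x video+CPU
    print(cycles_needed)                    # 7879680, i.e. 7.87968 MHz
    print(cycles_needed / CPU_CLOCK - 1)    # ~0.006, the ~0.6% mentioned above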
I suspect kmill is right: https://news.ycombinator.com/item?id=44110611 -- 512x342 is very close to 3:2 aspect ratio, whereas 347 would give you an awkward 1.476:1 aspect ratio.
If I were placing bets, I'd say it was another hardware limitation. Maybe 342 put them right at some particular DRAM timing limit for the chips they were signing contracts for. Or, maybe more likely, the ~21.5 kHz scan rate was a hard limit from the tube supplier (that was much faster than TVs could do) and they had a firm 60 Hz requirement from Jobs or whoever.
https://nerdhut.de/2016/06/26/macintosh-classic-crt-1/
https://bobparadiso.com/wp-content/uploads/2014/09/timing.pn...
The Timex Sinclair did all of its computation during the blanking interval, which is why it was so dog slow.
It's interesting how the differing vertical resolutions between these two (200p/400i vs 256p/512i) also had some secondary effects on software design: it was always easy to tell if a game was made for NTSC regions or with global releases in mind, because the bottom 20% of the screen was black in PAL.
http://blog.tynemouthsoftware.co.uk/2023/10/how-the-zx80-gen...
“The CPU then only produces a TV picture when BASIC is waiting for input (or paused). At other times it does not bother to produce a video picture, so the CPU can run the program at full speed.”
It doesn't help if your crossbar memory interconnect only has static priorities.
I love the monitor, it's sharp and clear and almost kind of HDR a lot of the time, but the fact that it has a bunch of USB 3.0 ports that only get USB 2.0 speeds because I don't want choppy 30Hz gaming is just... weird.
Consequently almost nothing actually renders at 4K. It's all upscaling, or even worse, your display is wired to double up on inputs.
Once we can comfortably get 60 FPS, 1080p, 4x MSAA, no upscaling, then let's revisit this 4K idea.
So yes, you get 4K worth of pixels lit up on your display, but is it actually a better image?
And yes, there may be high-end hardware which can handle it, but the software still made design choices for everyone else.
There are also image algorithms that aren't as amenable to GPUs and are now impossible to compute effectively.
I think partial refresh capability only came with some optional extensions to DisplayPort.
I remember writing CPU-intensive code on the Atari and using video blanking to speed up the code.
But mostly I suspect it’s just far easier.
I feel like the extra ~12% of screen real estate would have been worth it.
The Mac was not a cheap machine, and Apple at that time was not rich enough to build anything unnecessary: they really needed to make a hit this time, and they succeeded.
And yes, it is true they were limited by bandwidth; it is also true they were limited by the speed of the semi-32-bit CPU.
But the Mac was a real step ahead at that moment, and it had significant room to grow as new technology arrived. That is what I think the PCs of that time lacked.
The 1st edition of Macworld; notably, the first page is an advert for Microsoft's products: the Multiplan spreadsheet, Word, etc. https://archive.org/details/MacWorld_8404_April_1984_premier...
The original floppy used on the Mac was a single-sided 400 KB disk. I imagine that was another set of trade-offs. https://folklore.org/Disk_Swappers_Elbow.html
Originally they planned on using custom 870K drives, but they were too unreliable, so at the last minute they switched to the Sony 400K 3.5" disks.
Lisa was January 1983
Macintosh was January 1984
Apple IIgs was September 1986
In the category of similar tricks, the Symbolics 3600 (at least the first model) had major portions of the disk driver implemented as one of the tasks in microcode (yes, the microcode was a multi-tasking system with preemption). I don't know how much of the MFM wrangling went on there, but ultimately it meant that reading or writing a page from/to disk was done by means of a single high-level instruction.
It's a bit of work, but I suspect you can arithmetic your way through the problem. Supposing they wanted 60 Hz on the display, a framebuffer for a 1-bit display at 512x384 needs 196,608 bits / 24,576 bytes / 24 KB [below].
The Mac 128K shipped with a Motorola 68k at 7.8336 MHz, giving it 130,560 cycles per frame @ 60 fps.
IIRC the word length of the 68k is 32 bits, so imagining a scenario where the screen was plotted in words, at something like 20 cycles per fetch [1] you can get about 6528 fetches per frame. At 32 bits a fetch, you need 6144 or so fetches from memory to fill the screen. You need a moment for horizontal refresh, so you lose time waiting for that; thus 6528 - 6144 = (drumroll) 384, the number of horizontal lines on the display.
I'm obviously just hitting the wavetops here and missing lots of details (the arithmetic is written out in the sketch after the notes below), but my point is that it's calculable with enough information, which is how engineers of yore used to spec things out.
1 - https://wiki.neogeodev.org/index.php?title=68k_instructions_...
below - why bits? The original Mac used a 1-bit display, meaning each pixel used 1 bit to set it either on or off. Because it didn't need 3 subpixels to produce color, the display was tighter and sharper than color displays, and even at the lower resolution it appeared somewhat paperlike. The article is correct that the DPI was around 72. Another way to think about it, and what the Mac was targeting, was pre-press desktop publishing. Many printing houses could print at around 150-200 lines per inch. Houses with very good equipment could hit 300 or more. Different measures, but the Mac, being positioned as a WYSIWYG tool, did a good job of approximating the analog printing equipment of the time. (source: grew up in a family printing business)
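Here is that back-of-the-envelope calculation written out (Python; the 20-cycles-per-32-bit-fetch figure is the rough number assumed above, not a measured value):

    CLOCK_HZ = 7_833_600      # 68000 clock in the original Mac
    FPS = 60
    CYCLES_PER_FETCH = 20     # rough cost assumed above for one 32-bit fetch
    WIDTH, HEIGHT = 512, 384  # the hypothetical full-height mode

    cycles_per_frame = CLOCK_HZ // FPS                        # 130560
    fetches_available = cycles_per_frame // CYCLES_PER_FETCH  # 6528
    fetches_needed = WIDTH * HEIGHT // 32                     # 6144 fetches for a 1-bit frame
    print(fetches_available - fetches_needed)                 # 384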
AFAIK some of the graphics code used fancy multi-register copies to increase cycle efficiency.
As for the screen, IIRC making it easy to correlate "what's on screen" with "what's on paper" was a major part of what drove the Mac to be nearly synonymous with DTP for years.
Why, the name of the website is 512pixels.net not 342pixels.net; he nailed the 512 dimension. :)
Beyond that, this article really wants to tell you how amazing that resolution was in 1984. Never mind that you could get an IBM XT clone with "budget" 720x348 monochrome Hercules graphics that year and earlier.
A Hercules card, whilst nice, does suffer from the same non-square-pixels issue as the Lisa, so it's not as nice for creating a GUI.
Nice, that's line doubled from the //e's 560x192 and would probably look crisp.
I actually had no idea that Atari made a laser printer. Everyone I knew with a ST (admittedly, not many people) was either doing MIDI or playing video games.
My question is: what are the htotal and vtotal times in pixels and lines? Maybe there was a hardware saving in having vtotal exactly equal to 384 (which is 128 times 3). Perhaps they saved one bit in a counter, which may have resulted in one fewer TTL chip.
johnklos•1d ago
It wouldn't've been too crazy had Apple gone with 64K x 4 chips, so they'd've just needed four of them to get 128 KB at a full 16 bits wide.
512x342 was 16.7% of 128 KB of memory, as opposed to 18.75% with 512x384. Not much of a difference. But having square pixels is nice.
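For the curious, those framebuffer fractions work out like this (plain Python):

    RAM = 128 * 1024  # bytes in the original Mac
    for w, h in [(512, 342), (512, 384)]:
        fb = w * h // 8  # 1 bit per pixel
        print(f"{w}x{h}: {fb} bytes, {fb / RAM:.2%} of 128 KB")

which prints 16.70% and 18.75%.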
90s_dev•1d ago
Really, John? You really had to make me parse that word?
JKCalhoun•23h ago
[1] To be sure, many games hide the menu bar.