Raskin was deeply concerned with how humans think in vague, associative, creative ways, while computers demand precision and predictability.
His goal was to humanize the machine through thoughtful interface design—minimizing modes, reducing cognitive load, and anticipating user intent.
What’s fascinating now is how AI changes the equation entirely. Instead of rigid systems requiring exact input, we now have tools that are themselves fuzzy and probabilistic.
I keep thinking that the gap Raskin was trying to bridge is closing—not just through interface, but through the architecture of the machine itself.
So AI makes Raskin’s vision more feasible than ever but also challenges his assumptions:
Does AI finally enable truly humane interfaces?
Perhaps, but I don't think we're going to see evidence of this for quite a while. It would be really cool if the computer adapted to how you naturally want to use it, though, without forcing you through an interface where you talk/type to it.
I object to the framing of this question directly -- there is no definition of "AI". Secondly, the humane interface is a genre that Jef Raskin shaped and re-thought over years. A one-liner here definitely does not embody the works of Jef Raskin.
Off the top of my head, it appears that "AI" enables one-to-many broadcast, service interactions and knowledge retrieval in a way that was not possible before. The thinking of Jef Raskin was very much along the lines of an ordinary person using computers for their own purposes. "AI" in the supply-side format coming down the road, appears to be headed towards societal interactions that depersonalize and segregate individual people. It is possible to engage "AI" whatever that means, to enable individuals as an appliance. This is by no means certain at this time IMHO.
I think it does; LLMs in particular. AI also enables a ton of other things, many of them inhumane, which can make it very hard to discuss these things as people fixate on the inhumane. (Which is fair... but if you are BUILDING something, I think it's best to fixate on the humane so that you conjure THAT into being.)
I think Jef Raskin's goal with a lot of what he proposed was to connect the computer interface more directly with the user's intent. An application-oriented model really focuses so much of the organization around the software company's intent and position, something that follows us fully into (most of) today's interfaces.
A magical aspect of LLMs is that they can actually fully vertically integrate with intent. It doesn't mean every LLM interface exposes this or takes advantage of this (quite the contrary!), but it's _possible_, and it simply wasn't possible in the past.
For instance: you can create an LLM-powered piece of software that collects (and allows revision of) some overriding intent. It literally takes the user's stated intent and puts it in a slot in all following prompts. This alone will have a substantial effect on the LLM's behavior! And importantly, you can ask for their intent, not just their specific goal. Maybe I want to build a shed, and I'm looking up some materials... the underlying goal can inform all kinds of things, like whether I'm looking for used or new materials, aesthetic or functional, etc.
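A minimal sketch of that "intent slot" idea, assuming you're templating prompts yourself (all class and method names here are hypothetical, not any real API):

```python
# Hypothetical sketch: carry the user's stated intent into every prompt.
class IntentSession:
    def __init__(self):
        self.intent = ""  # the user's overriding intent, revisable at any time

    def set_intent(self, text: str):
        self.intent = text

    def build_prompt(self, task: str) -> str:
        # Every task-specific prompt gets the standing intent slotted in first,
        # so the model can serve the goal rather than just the literal request.
        return (
            f"The user's overriding intent: {self.intent}\n"
            f"Current request: {task}\n"
            "Respond in a way that serves the intent, not just the request."
        )

session = IntentSession()
session.set_intent("I want to build a shed cheaply, favoring used materials.")
prompt = session.build_prompt("Find me pressure-treated 2x4s.")
```

The point is only that the intent is stated once and then rides along with every subsequent request, which is exactly the vertical integration described above.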
To accomplish something with a computer we often thread together many different tools. Each of them is generally defined by their function (photo album, email client, browser-that-contains-other-things, and so on). It's up to the human to figure out how to assemble these, and at each step it's easy to lose track, to become distracted or confused, to lose track of context. And again an LLM can engage with the larger task in a way that wasn't possible before.
LLMs have no role to play in any of that, because their job is text generation. At best, they could generate excerpts from a half-imagined user manual ...
I was specifically asking about LLMs because the comment I replied to only talked about LLMs - Large Language Models.
Bottom line: when folks talk about LLM applications, multimodal LLMs, MoE LLMs, and even agents all fall under the same general umbrella.
(Though yes, keep in mind that 0.1% hallucination = 99.9% correctness which is really not that high when we're talking about high reliability things. With zero-shot that far exceeded my expectations though.)
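To make that concrete: per-response correctness compounds multiplicatively over a chain of independent calls, which is why 99.9% isn't high for reliability-critical work. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope: 99.9% per-response correctness compounds
# multiplicatively over a chain of independent calls.
per_step = 0.999
for n in (1, 100, 1000):
    print(f"{n} steps -> {per_step ** n:.3f} overall")
# Roughly: 100 steps ~ 0.905, 1000 steps ~ 0.368 -- a long pipeline
# at 99.9% per step fails more often than it succeeds.
```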
Eh. That's not as good as being skilled enough to know exactly what you want and have the tools to make that happen.
There's something to be said for tools that give you the power of manipulating something efficiently, than systems that do the manipulation for you.
I mean, do you know that? A tool that offers this audibly fluent experience needs to exist before you can make that assessment, right? Or are vibes alone a strong enough way to make this judgement? (There's also some strong "Less space than a Nomad. Lame" energy in this post lol.)
Moreover why can't you just have both? When I fire up Lightroom, sure I have easy mode sliders to affect "warmth" but then I have detailed panels that let me control the hue and saturation of midtones. And if those panels aren't enough I can fire up Photoshop and edit to my heart's content.
Nothing is stopping you from taking your mouse in hand at any point and saying "let me do it" and pausing the LLM to let you handle the hard bits. The same way programmers rely on compilers to generate most machine or VM code and only write machine code when the compiler isn't doing what the programmer wants.
So again, why not?
Because at my heart I'm a humanist, and I want tools that allow and encourage humans to have and express mastery themselves.
> Nothing is stopping you from taking your mouse in hand at any point and saying "let me do it" and pausing the LLM to let you handle the hard bits. The same way programmers rely on compilers to generate most machine or VM code and only write machine code when the compiler isn't doing what the programmer wants.
IMHO, good tools are deterministic, so a compiler (to use your example) is a good tool, because you can learn how it functions and gain mastery over it.
I think an AI easy-button is a bad tool. It may get the job done (after a fashion), but there's no possibility of mastery. It's making subjective decisions and is too unpredictable, because it's taking the task on itself.
And I don't think bad tools should be built, because of the weaknesses of human psychology. Something is stopping you "from taking your mouse in hand at any point and saying 'let me do it'," and it's those weaknesses. You either take the shortcut or have to exercise continuous willpower to decline it, which can be really hard and stressful. I don't think we should build bad tools that put people in that situation.
And you're not going to make any progress with me by arguing based on precedent of some widely-used bad tool. Those tools were likely a mistake too. For a long time, our society has been putting technology for its own sake ahead of people.
Your comment is pretty frustrating. HN has definitely become more "random internet comments" forum over the years from its more grounded focus. But even when "random internet comments" talk to each other, you expect a forthrightness to discuss and talk. My reading of your comment is that you have a strong opinion, you're injecting that opinion, but you're not open to discussion on your opinion. This statement makes me feel like my time spent replying to you was a waste.
Moreover I feel like an attitude of posting but not listening when using internet forums is corrosive. In fact, when you call yourself a humanist, this confuses and frustrates me even more because I feel it's human to engage with an argument or just stop discussing when engagement is fruitless. Stating your opinion constantly without room for discussion seems profoundly inhuman to me, but I also suspect we're not going to have a productive discussion from here so I will heed my own feelings and disengage. Have a nice day.
Eh, whatever. I was just trying to prevent the possibility of a particularly tiresome cookie-cutter "argument" I've seen a million times around here. I don't know if you were actually going to make it, but we're in the context where it's likely to pop up, and it'd just waste everyone's time.
Also this isn't really opinion territory, it's more values territory.
You can describe your intention in any of these tools. And it can be whatever you want... maybe your intention in an audio editor is "I need to finish this before the deadline in the morning but I have no idea what the client wants" and that's valid, that's something an LLM can actually work with.
HOW the LLM is involved is an open question, something that hasn't been done very well, and may not work well when applied to existing applications. But an LLM can make sense of events and images in addition to natural language text. You can give an LLM a timestamped list of UI events and it can actually infer quite a bit about what the user is actually doing. What does it do with that understanding? We're going to have to figure that out! These are exciting times!
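One way to picture the "timestamped list of UI events" idea (the event names and log format here are made up for illustration, not any real toolkit's API):

```python
# Hypothetical sketch: turn a timestamped UI event log into text context
# a model could reason about to infer what the user is actually doing.
events = [
    (0.0, "open", "invoice_march.pdf"),
    (4.2, "copy", "total: $1,240.00"),
    (6.8, "switch_window", "spreadsheet.xlsx"),
    (9.1, "paste", "cell B7"),
]

def events_to_context(evts):
    lines = [f"[{t:6.1f}s] {action}: {target}" for t, action, target in evts]
    return "UI event log:\n" + "\n".join(lines)

context = events_to_context(events)
# A model given this context and asked "what is the user trying to do?"
# has enough signal to notice the copy-from-PDF, paste-into-spreadsheet
# pattern -- manual data entry the software could offer to take over.
```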
This is something I keep tossing over in my head. Multimodal capabilities of frontier models right now are fantastic. Rather than locking into a desktop with peripherals or hunching over a tiny screen and tapping with thumbs we finally have an easy way to create apps that interact "natively" through audio. We can finally try to decipher a user's intent rather than forcing the user to interact through an interface designed to provide precise inputs to an algorithm. I'm excited to see what we build with these things.
One for the quote file.
I haven't played it, but looking at the graphics alone brings up some deep feelings of nostalgia
I think that Mr Raskin's opinion would be that it should be obvious how to use a piece of software.
We're supposed to idolize this as some sort of hyper-enlightened version of interface design? Hell no.
I get that this design worked for Raskin. It worked for him the same way that my hacked version of GNU Emacs' next-line function does for me when the cursor is at the end of the buffer, or how I needed a version of its delete-horizontal-space, but one that would work only after the cursor.
I get that Raskin's "oh, you probably made a mistake, let me do something else that I suspect is what you really meant" might even have worked for a bunch of other people too. But the idea that somehow Raskin had some sort of deep insight with stuff like this, rather than just a set of personal preferences that are like those of anyone else, is just completely wrong, I think.
In that 40 years, many UI conventions have sprung up, and we've internalized them to the point that they're so familiar we actually say they're intuitive.
But if you go back to the state of computing in 1986, or even earlier, when Raskin was developing his UX principles for the Canon Cat and the SwyftCard, he was considering computer interfaces that were almost exclusively command-line interfaces.
You're not supposed to "idolize" any designer or engineer. But I would highly encourage you to read The Humane Interface, learn about the underlying principles of usability and interface design, and consider how you'd apply them to a UI today, 40 years later. The execution you'd come up with would be different. But the principles he started from are foundational and very useful.
I used GNU Emacs as an example for precisely this reason.
But I do not think that Raskin was channelling some remarkable stream of insight into these matters. And yes, "idolize" was more poking fun at people who use superlatives to describe him, in my opinion without much justification.
Today we have Miro and it works like that. I hate it.
Zooming, from a building, to a room, to a bookshelf, to a book/folder/boxfile, to the content and the location within the content worked with my brain. With digital files it just seems like a swamp I have to wade through. Microsoft are so antagonistic to my 'location' based thinking because Windows conceals where files really are.
The good old days were fun for their sense of everything-is-new adventure, but there's an awful lot I don't miss.
And you got a new version every month.
It was pure magic, I tell you.
The magazine was nearly 100% ads and I could spend a long time doing nothing but consuming ads. Nonetheless, I never felt annoyed by them like I do with animated and pop-up ads.
How else are they going to force you to view advertisements for things that you are completely uninterested in and which are completely unrelated to the page you are viewing?
If you buy a paper magazine you're already interested in the ads. Doesn't matter if it's pet supplies, model railways, computers, or fashion. You've predefined yourself as a potential consumer and you're going to see the ads as a service, not an intrusion. And if they're all in one place, you can comparison shop.
Facebook and Google are going to sell you ads based on your web searches. Mostly they do a terrible job of guessing what you're really interested in. Sometimes the results are so bad they're hilarious.
So instead of providing a useful service, the ads exist to perpetuate the system that generates them, prioritising vapid metrics like "engagement" - which really just measures distraction and wasted time.
There are still plenty of small shops making a nice software living today. I don’t have the nerve to do it myself, though.
Most people pick up on the idea of freedom as in liberty with OSS: that you should be able to amend software at will. Strangely there are limits in his mind (at one talk I attended he insisted he didn’t care about being able to amend the software in his microwave, so didn’t care if it was Free or not).
However when asked about how programmers should make a living, his economic argument is that we should be paid for our hours, not for our software. In his telling, the right economic model is not the author who writes something once and is paid forever by everyone who wants to consume that “art”, but more like the trading crafter who makes and sells things on an on-going basis.
In support of that model, the “devaluation” of software has meant we have a planet running on it that would not have been possible if every library and application on every machine had cost $100-$500 each. The advances in scientific and medical research powered by that software-driven World would not have been achieved yet, but neither would the damage caused by social media and adtech.
I’m not sure which side of the fence I sit on. I’ve had a good career being paid to write software because of its growing influence in the World, which likely would have stalled if OSS didn’t exist. But sure, I like the idea of spending a year writing something and living the rest of my life off the proceeds, like most people would.
Like with software, this is in large part because everyone can write, and so there is a glut of content, and while a lot of it is poorly written tripe, there's a glut of quality content in almost every niche that is good enough, and it outstrips the demand, so only the tiniest proportion of authors earns well. Like with software, a lot of people also do it because they want to, rather than for the money, which further drives down the prices.
I've published two novels. They've sold substantially better than average but nowhere near bestseller level. And yet, despite selling substantially better than average, they're the lowest paid work I've done in more than two decades. I'm a fast writer - the second novel took me 3 weeks. It's still never going to pay for itself. That's okay.
If I wanted to earn a living from writing, I'd look to write articles or columns for magazines and newspapers, or doing copywriting, paid by the word, rather than writing novels.
Java -> Bloated compared to Icon/TCL
VB -> OKish, but TCL/Tk and a bit of C did wonders under Unix.
NT -> Good, advanced
95 -> Mediocre against Amiga/Mac.
MSO -> Polished turd and uberused, giving disasters such as the gene-renaming mess in genomics and tons of papers now being void.
MPEG -> Good, bound to TV and multimedia standards.
QT -> Proprietary crap, but QT3D was and still is interesting. Lqtplay from libquicktime plays them well.
MP3 -> Opus and OGG preferred here
Acrobat/PDF -> PostScript and DJVU
Netscape/FrontPage -> Damn crazy bloat with opaque formats on tons of stuff not bound to proper terminology. Even using Emacs editing HTML pages seems easier. Composer looked easier than FP, for sure.
Videoconf -> Yes, h323 and friends.
IDE's -> A lot of them were better under DOS or very bloated under Windows, such as Eclipse, Netbeans...
PC/Console -> Yep, 3DFX/Glide and free Unixen, but consoles went downhill; the PC pulled ahead starting with the Unreal engine.
(I think we're indebted to Sun for wasting money on things like Java and Tcl/Tk, projects that weren't closely related to their core business but which benefited the rest of the industry.)
Most of the things you mention are also products of the 1990s, providing additional evidence for the claim that software advanced quite a bit on both the proprietary and free software sides during that decade.
I think this may be judgment in retrospect. The original combination of Word, Excel, and PowerPoint was undoubtedly useful and became more so with integration. It's too bad that competition from Lotus, Apple/Claris, WordPerfect/Novell/Corel, open source alternatives, etc. didn't affect Office's dominance. Google had a great head start over Office 365 with Docs but didn't pursue it.
Alternatively, placing software behind API paywalls.
- Hello, MSN network.
- Hello, closed memory-dump formats for Microsoft Office.
- You don't play this niche audio/video format? Install this new adware player, and now you will. Oh, enjoy this new adware toolbar.
- Shareware/nagware. Enough said.
- Software patents and codecs. MP3 encoder for streaming? Pay. MPEG encoder/decoder for a custom A/V project? Pay. And so on.
Turbo Pascal was $49.99 though...
edit: back, here's a quote:
> [...] I asked people unfamiliar with the mouse to use a Macintosh. My protocol was to run a [game that only used clicking, with the keyboard removed]. I would point to the mouse and say, "This is the mouse that you use to operate the game. Go ahead, give it a try." If asked any questions, I'd say something nonspecific, such as "Try it." The reaction of an intelligent Finnish educator who had never seen a Macintosh but was otherwise computer literate was typical: she picked up the mouse.
> Nowadays, this might seem absurd, but [mentions the scene in Star Trek where Scotty does the same thing]. In the case of my Finnish subject, her next move was to turn the mouse over and to try rolling the ball. Nothing happened. She shook the mouse, and then she held the mouse in one hand and clicked the button with the other. No effect. Eventually, she succeeded in operating the game by holding the mouse in her right hand, rolling the ball on the bottom with her fingers, and clicking the button with her left hand.
> These experiments make the point that an interface's ease of use and speed of learning are not connected with the imagined properties of intuitiveness and naturalness. The mouse is very easy to learn: All I had to do, with any of the test subjects, was to put the mouse on the desk, move it, and click on something. In five to ten seconds, they learned how to use the mouse. That's fast and easy, but it is neither intuitive nor natural. No artifact is.
> The belief that interfaces can be intuitive and natural is often detrimental to improved interface design. As a consultant, I am frequently asked to design a "better" interface to a product. Usually, an interface can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation, it is superior to both the client's existing products and competing products. Nonetheless, even when my proposals are seen as significant improvements, they are often rejected on the grounds that they are not intuitive. [He goes on to talk about how if it is going to be significantly better, then it will end up being different from what people currently know, but the clients still want it to be similar to Windows...]
The Humane Interface section 6-1
Having refreshed myself on what he said, and re-reading what you wrote, I don't think he would say that you should be able to walk up to his computer without having someone show you how to use it, or looking at a manual. And as you said: "Then someone demonstrated a Macintosh to me" just like when he said he'd show people how to use the mouse.
The Macintosh, by contrast, was quite transparent to the naive user. It was very easy to understand that if you saved a file, the file was represented by a little picture that you could move to a folder icon or a disk icon. No naive user of the 1980s had any experience with an infinitely long scroll, but the desktop metaphor of file and folder icons was easily understood.
The no-separate-document interface of the Cat was, I think, a huge mistake. That might have been the way Raskin thought people should use computers, but it was a greater conceptual leap than users could easily understand. Non-computer people who were used to typewriters were used to working on separate documents; they thought in terms of writing letters, memos, reports, and manuscripts, and they expected all these documents to stay separate objects.
In the section of The Humane Interface you quote, I think Raskin exaggerates the non-intuitiveness of the mouse. People in the 1980s were familiar with the concept of a pointing device through playing Pong and Centipede. Even before I'd seen a Macintosh in real life, I'd seen the Mac ads and knew what the mouse was supposed to do. As Raskin says, no one needs more than 10 seconds to understand a mouse. It takes a lot more than 10 seconds to understand the Canon Cat.
I remember when they finally shut it down, they promised to freeze the website and preserve it in a read-only format, for all the computing history it contains. https://drdobbs.com/ certainly looks the part, but I'm afraid something seems to be fundamentally broken, because every article link just leads to an error message.