“…Scott Jenson gives examples of how focusing on UX -- instead of UI -- frees us to think bigger. This is especially true for the desktop, where the user experience has so much potential to grow well beyond its current interaction models. The desktop UX is certainly not dead, and this talk suggests some future directions we could take.”
“Scott Jenson has been a leader in UX design and strategic planning for over 35 years. He was the first member of Apple’s Human Interface group in the late '80s, and has since held key roles at several major tech companies. He served as Director of Product Design for Symbian in London, managed Mobile UX design at Google, and was Creative Director at frog design in San Francisco. He returned to Google to do UX research for Android and is now a UX strategist in the open-source community for Mastodon and Home Assistant.”
MS is a prime example: don't do what MS has been doing. Remember whose hardware it actually is, and remain aware that what a developer or a board room understands as improvement is not experienced the same way by average retail consumers.
In the Trek universe, LCARS wasn't getting continuous UI updates because they would have advanced, culturally, to a point where they recognized that continuous UI updates are frustrating for users. They would have invested the time and research effort required to better understand the right kind of interface for the given devices, and then... just built that. And, sure, it probably would get updates from time to time, but nothing like the way we do things now.
Because the way we do things now is immature. It's driven often by individual developers' needs to leave their fingerprints on something, to be able to say, "this project is now MY project", to be able to use it as a portfolio item that helps them get a bigger paycheck in the future.
Likewise, Geordi was regularly shown to be making constant improvements to the ship's systems. If I remember right, some of his designs were picked up by Starfleet and integrated into other ships. He took risks, too, like experimental propulsion upgrades. But, each time, it was an upgrade in service of better meeting some present or future mission objective. Geordi might have rewritten some software modules in whatever counted as a "language" in that universe at some point, but if he had done so, he would have done extensive testing and tried very hard to do it in a way that wouldn't've disrupted ship operations, and he would only do so if it gained some kind of improvement that directly impacted the success or safety of the whole ship.
Really cool technology is a key component of the Trek universe, but Trek isn't about technology. It's about people. Technology is just a thing that's in the background, and, sometimes, becomes a part of the story -- when it impacts some people in the story.
(equivalent of people being glued to their smartphones today)
(Related) This is one explanation for the Fermi paradox: Alien species may isolate themselves in virtual worlds
The people we saw on screen most of the time also held important positions on the ship (especially on the bridge or in engineering), and you can't expect them to just waste significant chunks of time.
Also, don't forget that these people actually like their jobs. They got there because they sincerely wanted to, out of personal interest and drive, and not because of societal pressures like in our present world. They already figured out universal basic income and are living in an advanced self-sufficient society, so they don't even need a job to earn money or live a decent life - these people are doing their jobs because of their pure, raw passion for that field.
Stories that focus on the technology as technology are nearly always boring: "Oh no, the transporter broke... yay, we fixed it."
Not to be "that guy", but LCARS wasn't getting continuous UI updates because that would have cost the production team money, and for TNG at least it would often have required rebuilding physical sets. It does get updated between series, as part of setting the design language for each series.
And Geordi was shown constantly making improvements to the ship's systems because he had to be shown "doing engineer stuff."
In the Trek universe, LCARS was continuously generating UI updates for each user, because AI coding had reached the point where it no longer needed specific direction and responded autonomously to needs the system itself identified.
Now, this is really because LCARS is "Stage Direction: Riker hits some buttons and stuff happens".
AKA resume-driven development. I personally know several people working on LLM products who, in private, admit they think LLMs are a scam.
Things just need to "look futuristic". They don't actually need to have practical function outside whatever narrative constraints are imposed in order to provide pace and tension to the story.
I forget who said it first, but "Warp is really the speed of plot".
It's up to the audience to imagine that those printed transparencies, back-lit with light bulbs behind coloured gel, are the most intuitive, easy-to-use, precise user interfaces the actors pretend they are.
On the other hand, if the writers of Star Trek The Next Generation were writing the show now, rather than 35-40 years ago - and therefore had a more expansive understanding of computer technology and were writing for an audience that could be relied upon to understand computers better than was actually the case - maybe there would've been more episodes involving dealing with the details of Future Sci-Fi Computer Systems in ways a programmer today might find recognizable.
Heck, maybe this is in fact the case for the recently-written episodes of Star Trek coming out in the past few years (that seem to be much less popular than TNG, probably because the entire media environment around broadcast television has changed drastically since TNG was made). Someone who writes for television today is more likely to have had the experience of taking a Python class in middle school than anyone writing for television decades ago (before Python existed), and maybe something of that experience might make it into an episode of television sci-fi.
As an additional point, my recollection is that the LCARS interface did in fact look slightly different over time - in early TNG seasons it was more orange-y, and in later seasons/Voyager/the TNG movies it generally had more of a purple tinge. Maybe we can attribute this in-universe to a Federation-wide UX redesign (imagine throwing in a scene where Barclay and La Forge are walking down a corridor having a friendly argument about whether the new redesign is better or worse immediately before a Red Alert that starts the main plot of the episode!). From a television production standpoint, we can attribute this to things like "the set designers were actually trying to suggest the passage of time and technology changing in the context of the show", or "the set designers wanted to have fun making a new thing" or "over the period of time that the 80s/90s incarnations of Star Trek were being made, television VFX technology itself was advancing rapidly and people wanted to try out new things that were not previously possible" - all of which have implications for real-world technology as well as fake television sci-fi technology.
Complex tasks are done vibe coding style, like La Forge vibe video editing a recording to find an alien: https://www.youtube.com/watch?v=4Faiu360W7Q
I do wonder if conversational interfaces will put an end to our GUI churn eventually...
Conversely, recent versions have taken the view of foregrounding tech, aided by flashy CGI, to handwave through a lot. Basically using it as a plot device when the writing is weak.
https://www.youtube.com/watch?v=zMuTG6fOMCg
The variety of form factors offered is the only difference
I don't think most people would find this degree of reduction helpful.
Correct? I agree with this precisely but assume you’re writing it sarcastically
From the point of view of the starting state of the mouth to the end state of the mouth the USER EXPERIENCE is the same: clean teeth
The FORM FACTOR is different: Electric version means ONLY that I don’t move my arm
“Most people” can’t do multiplication in their head so I’m not looking to them to understand
Now compare that variance to the range of options offered by machine and computing UX.
You'll see clearly that one (toothbrushing) varies by less than one standard deviation in steps and components for the median use case, while the other (computing) has nearly infinite variance (no stable standard deviation) between median-use-case steps and components.
The fact that the latter state space is available but the action space stays constrained inside a local minimum is an indictment of humans' capacity for traversing that action space.
This is reflected again in what is effectively a point action space (physically ablate plaque with an abrasive) within the much larger possible state space of teeth cleaning: chemical-only/non-ablative, replace the teeth entirely every month, remove the teeth and eat paste, etc…
So yes I collapsed that complexity into calling it “UX” which classically can be described via UML
On the positive side, my electronic toothbrush allows me to avoid excessive pressure via real-time green/red light.
On the negative side, it guilt trips me with a sad face emoji any time my brushing time is under 2 minutes.
Because we've been stuck with the same bicycle UX for like 150 years now.
Sometimes shit just works right, just about straight out of the gate.
By the 1880s we'd pretty much standardised on the "Safety Bicycle", which had a couple of smallish wheels, about two and a half feet in diameter in olden-days measurements, with a chain drive from a set of pedals mounted low in the frame to the rear wheel.
By the end of the 1880s, you had companies mass-producing bikes that wouldn't look unreasonable today. All we've done since is make them out of lighter metal, improve the brakes from pull rods to cables to hydraulic discs brakes, and give them more gears (it wouldn't be until the early 1900s that the first hub gears became available, with - perhaps surprisingly - derailleurs only coming along 100 years ago).
Maybe the experience has not changed for the average person, but alternatives are out there.
it’s just all gotten miniaturized
Humans have outright rejected all other possible computer form factors presented to them to date including:
Purely NLP with no screen
head worn augmented reality
contact lenses
head worn virtual reality
implanted touch sensors
etc…
Every other possible form factor gets shit on, on this website and in every other technology newspaper.
This is despite almost a century of attempts at doing all of those and making zero progress in sustained consumer penetration.
Had people liked those form factors, they would've invested in them early on, such that they would have developed the same way laptops, iPads, iPhones, and desktops have evolved.
However, nobody was interested at any kind of scale in the early days of AR, for example.
I have a litany of augmented and virtual reality devices scattered around my home and work that are incredibly compelling technology - but are totally seen as straight up dogshit from the consumer perspective.
Like everything, it's not a machine problem, it's a people-and-society problem
> Purely NLP with no screen
Cumbersome and slow, with horrible failure recovery. Great if it works, a huge pain in the ass if it doesn't. Useless for any visual task.
> head worn augmented reality
Completely useless if what you're doing doesn't involve "augmenting reality" (editing a text document), which probably describes most tasks that the average person is using a computer for.
> contact lenses
Effectively impossible to use for some portion of the population.
> head worn virtual reality
Completely isolates you from your surroundings (most people don't like that) and difficult to use for people who wear glasses. Nevermind that currently they're heavy, expensive, and not particularly portable.
> implanted sensors
That's going to be a very hard sell for the vast majority of people. Also pretty useless for what most people want to do with computers.
The reason these different form factors haven't caught on is because they're pretty shit right now and not even useful to most people.
The standard desktop environment isn't perfect, but it's good and versatile enough for what most people need to do with a computer.
yet here we are today
You must’ve missed the point: people invested in desktop computers when they were shitty vacuum-tube machines that blew up.
That still hasn’t happened for any other user experience or interface.
> it's good and versatile enough for what most people need to do with a computer
Exactly correct! Like I said, it’s a limitation of human society: the capabilities and expectations of regular people are so low and diffuse that there is not enough collective intelligence to manage a complex interface that would measurably improve your abilities.
Said another way, it’s the same as if a baby could never “graduate” from Duplo blocks to Lego because lego blocks are too complicated
Even more, I don't see phones as the same form factor as mainframes.
Take any other praxis that's reached the 'appliance' stage that you use in your daily life: washing machines, ovens, coffee makers, cars, smartphones, flip-phones, televisions, toilets, vacuums, microwaves, refrigerators, ranges, etc.
It takes ~30 years to optimize the UX to make it "appliance-worthy" and then everything afterwards consists of edge-case features, personalization, or regulatory compliance.
Desktop Computers are no exception.
For example, we're not remotely close to having a standardized "watch form-factor" appliance interface.
Physical reality is always a constraint. In this case, keyboard+display+speaker+mouse+arms-length-proximity+stationary. If you add/remove/alter _any_ of those 6 constraints, then there's plenty of room for innovation, but those constraints _define_ a desktop computer.
1. Incremental narrowing for all selection tasks like the Helm [0] extension for Emacs.
Whenever there is a list of choices, all choices should be displayed, and this list should be filterable in real time by typing. This should go further than what Helm provides, e.g. you should be able to filter a partially filtered list in a different way (see the sketch after this list). No matter how complex your filtering, all results should appear within 10 ms or so. This should include things like full-text search of all local documents on the machine. That will probably require extensive indexing, so it needs to be tightly integrated with all software so the indexes stay in sync with the data.
2. Pervasive support for mouse gestures.
This effectively increases the number of mouse buttons. Some tasks are fastest with the keyboard, and some are fastest with the mouse, but switching between the two costs time. Increasing the effective number of buttons increases the number of tasks that are fastest with the mouse and reduces the need to switch (a rough sketch of gesture recognition follows below).
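To make the filter-stacking idea in point 1 concrete, here's a minimal Python sketch. It isn't tied to Helm or to any real desktop API; the Narrower class and its method names are invented for illustration, and a real implementation would need an index to hit the ~10 ms budget on anything bigger than a small in-memory list.

    # Minimal sketch of incremental narrowing: filters stack, so a partially
    # narrowed list can be narrowed again by a different query.
    from dataclasses import dataclass, field

    @dataclass
    class Narrower:
        candidates: list[str]
        filters: list[str] = field(default_factory=list)

        def push_filter(self, query: str) -> None:
            """Stack another query on top of the current narrowing."""
            self.filters.append(query.lower())

        def pop_filter(self) -> None:
            """Drop the most recent query (e.g. when the user backs out)."""
            if self.filters:
                self.filters.pop()

        def results(self) -> list[str]:
            """Apply every stacked filter; each query must match as a substring."""
            out = self.candidates
            for q in self.filters:
                out = [c for c in out if q in c.lower()]
            return out

    # Usage: narrow a file list by one term, then narrow the result differently.
    n = Narrower(["report_2024.pdf", "notes.txt", "report_draft.txt"])
    n.push_filter("report")
    n.push_filter(".txt")
    print(n.results())  # ['report_draft.txt']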
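And a similarly rough sketch of the gesture side, assuming the windowing system hands you a trail of pointer samples while a gesture button is held. The directions function, the GESTURES table, and the stroke-to-action mapping are all made up for the example.

    # Sketch of mouse-gesture recognition: reduce a pointer trail to a string of
    # cardinal directions (U/D/L/R) and look the string up in a gesture table.
    def directions(points: list[tuple[int, int]], min_step: int = 20) -> str:
        """Convert (x, y) samples into a direction string such as 'RD'."""
        seq: list[str] = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = x1 - x0, y1 - y0
            if abs(dx) < min_step and abs(dy) < min_step:
                continue  # ignore small jittery movements
            if abs(dx) >= abs(dy):
                d = "R" if dx > 0 else "L"
            else:
                d = "D" if dy > 0 else "U"  # screen y increases downward
            if not seq or seq[-1] != d:
                seq.append(d)  # collapse repeated directions
        return "".join(seq)

    # Hypothetical gesture table: stroke pattern -> action name.
    GESTURES = {"L": "back", "R": "forward", "RD": "close-tab", "UD": "reload"}

    trail = [(0, 0), (5, 2), (60, 4), (120, 80)]  # roughly right, then down
    print(GESTURES.get(directions(trail), "unknown"))  # 'close-tab'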
I wish the same could be said of car UX these days but clearly that has regressed away from optimal.
GUI elements were easily distinguishable from content and there was 100% consistency down to the last little detail (e.g. right click always gave you a meaningful context menu). The innovations after that are tiny in comparison and more opinionated (things like macOS making the taskbar obsolete with the introduction of Exposé).
I’m in the process of designing an OS interface that tries to move beyond the current desktop metaphor or the mobile grid of apps.
Instead it’s going to use ‘frames’ of content that are acted on by capabilities that provide functionality. Very much inspired by Newton OS, HyperCard and the early, pre-Web thinking around hypermedia.
A Newton-like content soup combined with a persistent LLM intelligence layer, RAG, and knowledge graphs could provide a powerful way to create, connect, and manage content that breaks out of the standard document model.
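Purely as a thought experiment on what a frame/capability split might look like in code (none of these names come from the actual project, and a real shell would register capabilities through the OS rather than a Python list):

    # Illustrative sketch only: "frames" of content in a soup, acted on by
    # separately registered "capabilities" that declare what they apply to.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Frame:
        """A unit of content with free-form metadata and links to other frames."""
        kind: str                      # e.g. "note", "contact", "image"
        body: str
        tags: set[str] = field(default_factory=set)
        links: list["Frame"] = field(default_factory=list)

    @dataclass
    class Capability:
        """A piece of functionality plus a predicate saying which frames it fits."""
        name: str
        applies_to: Callable[[Frame], bool]
        run: Callable[[Frame], str]

    def capabilities_for(frame: Frame, registry: list[Capability]) -> list[Capability]:
        """The shell offers only the capabilities that match the selected frame."""
        return [c for c in registry if c.applies_to(frame)]

    # Usage: a summarize capability that only offers itself for text-like frames.
    registry = [Capability("summarize",
                           lambda f: f.kind == "note",
                           lambda f: f.body[:40] + "...")]
    note = Frame(kind="note", body="Meeting notes about the Q3 roadmap and open risks.")
    for cap in capabilities_for(note, registry):
        print(cap.name, "->", cap.run(note))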
Personally, I wish there were a champion of desktop usability like how Apple was in the 1980s and 1990s. I feel that Microsoft, Apple, and Google lost the plot in the 2010s due to two factors: (1) the rise of mobile and Web computing, and (2) the realization that software platforms are excellent platforms for milking users for cash via pushing ads and services upon a captive audience. To elaborate on the first point, UI elements from mobile and Web computing have been applied to desktops even when they are not effective, probably to save development costs, and probably since mobile and Web UI elements are seen as “modern” compared to an “old-fashioned” desktop. The result is a degraded desktop experience in 2025 compared to 2009 when Windows 7 and Snow Leopard were released. It’s hamburger menus, title bars becoming toolbars (making it harder to identify areas to drag windows), hidden scroll bars, and memory-hungry Electron apps galore, plus pushy notifications, nag screens, and ads for services.
I don’t foresee any innovation from Microsoft, Apple, or Google in desktop computing that doesn’t have strings attached for monetization purposes.
The open-source world is better positioned to make productive desktops, but without coordinated efforts, it seems like herding cats, and it seems that one must cobble together a system instead of having a system that works as coherently as the Mac or Windows.
With that said, I won’t be too negative. KDE and GNOME are consistent when sticking to Qt/GTK applications, respectively, and there are good desktop Linux distributions out there.
At Microsoft, Satya Nadella has an engineering background, but it seems like he didn't spend much time as an engineer before getting an MBA and playing the management advancement game.
Our industry isn't what it used to be and I'm not sure it ever could.
This also came at a time when tech went from being considered a nerdy obsession to tech being a prestigious career choice much like how law and medicine are viewed.
Tech went from being a sideshow to the main show. The problem is once tech became the main show, this attracts the money- and career-driven rather than the ones passionate about technology. It’s bad enough working with mercenary coworkers, but when mercenaries become managers and executives, they are now the boss, and if the passionate don’t meet their bosses’ expectations, they are fired.
I left the industry and I am now a tenure-track community college professor, though I do research during my winter and summer breaks. I think there are still niches where a deep love for computing without being overly concerned about “stock line go up” metrics can still lead to good products and sustainable, if small, businesses.
When the hell was even that?
Are we stuck with the same brake pedal UX forever?
Coders are the only ones who still should be interested in desktop UX, but even in that segment many just need a terminal window.
For content creation though, desktop still rules.
Whether intentional or not, it seems like the trend is increasingly locked-down devices running locked-down software, and I’m also disturbed by the prospect of Big Tech gobbling up hardware (see the RAM shortage, for example), making it unaffordable for regular people, and then renting this hardware back to us in the form of cloud services.
It’s disturbing and I wish we could stop this.
When I need to get productive, sometimes I disable the browser to stop myself from wasting time on the web.
I guess the larger point is that you need a desktop to run vscode or Figma, so the desktop is not dead.
This also means that I heavily disagree with one of the points of the presenter. We should not use the next gen hardware to develop for the future Desktop. This is the most nonsensical thing I heard all day. We need to focus on the basics.
I can't imagine what I'd be doing without MATE (GNOME 2 fork ported to GTK+ 3).
Recently I've stumbled upon:
> I suspect that distro maintainers may feel we've lost too many team members so are going with an older known quantity. [1]
This sounds disturbing.
[1] https://github.com/mate-desktop/caja/issues/1863#issuecommen...
It's really strange how he spins off on this mini-rant about AI ethics towards the end. I clicked on a video about UI design.
A perfect pain-point example was mentioned in the video: text selection on mobile is trash. But each app seems to have a different solution, even from the same developer. Google Messages doesn't allow any text selection below the level of an entire message. Some other apps have opted into a 'smart' text select which, when you select text, will guess and randomly group-select adjacent words. And lastly, some apps will only ever select a single word when you double tap, which seemed to be the standard on mobile for a long time. All of this is inconsistent, and often I'll want to do something like look up a word and realize, oh, I can't select the word at all (Google Messages), or the system 'smartly' selected 4 words instead, or it did what I want and actually just picked one word. Each application designer decided they wanted to make their own change, and the whole system ended up fragmented and worse overall.
>A lot of my work is about trying to get away from this. This is a photograph of the desktop of a student of mine. And when I say desktop, I don't just mean the actual desk where his mouse has worn away the surface of the desk. If you look carefully, you can even see a hint of the Apple menu, up here in the upper left, where the virtual world has literally punched through to the physical. So this is, as Joy Mountford once said, "The mouse is probably the narrowest straw you could try to suck all of human expression through." (Laughter)
https://flong.com/archive/texts/lectures/lecture_ted_09/inde...
https://en.wikipedia.org/wiki/Golan_Levin
scottjenson•18h ago
I'm excited so many people are interested in desktop UX!
az09mugen•9h ago
Will look into your other talks.
NetOpWibby•32m ago
In my downtime I'm working on my future computing concept[1]. The direction I'm going for the UI is context awareness and the desktop being more of an endless canvas. I need to flesh out my ideas into code one of these days.
P.S. Just learned we're on the same Mastodon server, that's dope.
---
[1]: https://systemsoft.works