idk what it is but when a new paradigm comes along, whether it is AI or AR, the big tech companies always want to ram it down everybody's throats rather than offer a gentle opt-in. it's not like they lack enthusiasts who WILL opt in to offer feedback.
you have billions of users, including many normies who just want to get shit done, don't even know that you have keynotes or shareholders to impress, and don't care about the translucency of your "glass" when they're trying to call 911[0]
[0]: see talk (https://meyerweb.com/eric/thoughts/2016/01/25/designing-for-...) and tldr (https://hookedoncode.com/2015/02/designing-for-crisis-by-eri...)
In retrospect, watchOS 1.0 was the gentle introduction to what became SwiftUI. At the time, I was a bit frustrated that I couldn't specify widget position and size like I was used to.
Honestly, I still am frustrated by that. We already had variable-size windows back in the days of Win 3.11 and System 7 — we've just made it far more complicated to do the same things. Even several years into SwiftUI, it feels like we've got more bugs with SwiftUI-based apps than we ever had with UIKit, and I'm not sure if that's the layout stuff or the reactive stuff or both.
the blog post explains this
in short, it's simply a continuation of the practices that resulted in apple's dominance in the first place
This will let them build up an ecosystem of apps ready at launch, and it means users are already being trained. These are all steps that lay the foundation for a successful future launch.
It also seems like a way to go directly against things like React Native and Kotlin Multiplatform. The recently announced Swift-Java interoperability really makes it seem like they think KMP is some kind of threat.
This is a bingo for me. If you know your web, you know these effects are almost impossible to pull off there, or in any other UI framework for that matter.
This is a play that will enforce the line between proprietary "native" and cross-platform technologies, no matter how performant or good the latter may be. It is designed to surface the underlying tech stack to the user, so it can be a differentiator, kinda like the green bubble or the constant camera-array realignment, both of which are pure social posturing.
10-15 years ago it might have worked, but honestly, I don't think it will this time. It's too specific to be adopted and copied by other UI platforms, and an Apple-only ecosystem just isn't feasible for even the most hardcore Apple fans.
It will certainly be adopted by Apple-only devs who make bespoke, quality apps in Swift, but Apple really overestimates how much value those can deliver in a world where the smartphone is a utility in a broad ecosystem. Your average business, from libraries to airlines to grocery stores, has no reason to create full-native apps in 2-3 completely separate stacks. The differentiating features on e.g. iOS vs Android are simply not affecting the vast majority of real-life businesses.
Flutter and MAUI/Xamarin, OTOH, won't be able to.
This isn’t true. You’re never going to want your browser, editor, or Slack window to be translucent.
…or your movie playback window, or Instagram, or your ebook reader, for that matter.
r/unixporn disagrees
Pass-through is the only AR approach that currently allows black pixels, but it has uncomfortable limitations compared to a see-through optic.
Of course, now that Apple has invented it, it will be completely different.
That's quite a rewrite of history considering Windows Phone and Microsoft's Metro interface launched a full three years before Apple's move to a flat design in iOS 7.
Even Android had moved to a flatter design pattern 1-2 years before iOS. While Material Design wouldn't be released until 2014, you can see them moving in that direction from Gingerbread to Jelly Bean, particularly when looking at the system components and first-party apps, since this was before the concept of a unified design language across third-party apps had been formalized.
At the time Apple introduced their flat design in June 2013, they were the odd ones out. In fact, I remember a Daring Fireball article posted in spring 2013 (a few months before WWDC) praising Apple for leading the pack in flat design, and HN excoriating it for making what was at the time a clearly preposterous claim.
Indeed:
https://www.behance.net/gallery/4315369/Google-Project-Kenne...
Tack on site:news.ycombinator.com, and you'll find the top comment too: https://news.ycombinator.com/item?id=5081618
Gruber does mention Metro:
> The lack of skeuomorphic effects and almost extreme flatness of the "modern" (née Metro) Windows 8 interface is remarkably forward-thinking. It's meant to look best on retina-caliber displays, not the sub-retina displays it debuted on (with Windows Phone 7.x) or the typical PC displays of today. That said, I think there's a sterility to Metro that prevents it from being endearing. It epitomizes "flat" design, but I don't think it's great design.
Android, courtesy of being open source, was just able to move much faster
I think if they had just open sourced the OS, like Android, instead of killing it, Windows Phone could have been a decent Android competitor
Their user interface was a true gem - beautiful yet functional. The devices were incredibly fast, and the optical cursor was a revelation. I genuinely believe the way the trackpad cursor functions on the iPad is inspired by BlackBerry’s design.
They owned their space in their time, nothing came close, and then, one day, times changed and their product became obsolete. I don't blame them.
It's cool to sit on HN and think everyone should pivot on a yearly basis, but in reality it rarely happens for companies that big. It takes a lot of time and effort to change the course of a tanker ship, and when you're in a position where your product is precisely on point and the competition can't touch you, the most reasonable thing to do is just not to fuck things up... and then it's too late. Sometimes. Most of the time it's the winning strategy.
If anything, Nokia was the disaster.
It would be like trying to write code on an iPhone today.
App availability was the main issue - there were people who baked their own 3rd-party apps for Instagram, Snapchat, and Vine. Google, on the other hand, "fought" with MS by blocking access to YT from the app on those devices - because, unsurprisingly, ads in videos weren't playing on it. Only Opera released their browser for this platform - Mozilla had a short-lived Fennec in early alphas.
The OS updates were handled by device manufacturers/service providers, and release times differed from one company to another. That could also be another issue leading to the platform's failure.
Version fragmentation was also a thing; devices running WP7 couldn't upgrade to WP8 - they got a special 7.8 release which brought some features from 8.0. The same thing happened with WP8 devices - the top-end ones could get W10M while mid- and low-end ones were stuck on 8.1. I tried installing 10 on my Lumia 1320 - it made the phone run hot.
The Metro interface was perfect on mobile devices, and tiles were an amazing middle ground between icons and widgets at that time. Apple picked up that concept quite recently, allowing icons to be expanded into widgets serving particular bits of information. Overall, the OS interface focused on the information to display rather than the delivery form for it; this was achieved with big fonts and a modest use of icons within e.g. settings pages. Windows 8/8.1 failed miserably on desktop, as we know - it wouldn't have been as bad if the start menu and desktop paradigm had remained and the system had only received a flat visual "lifting", as it did with Windows 10. But by then it was too late.
At the time I was working on WP apps for a customer and needed 3 different OS versions installed to work on their apps.
I guess MS really learned its lesson and went ham on the open-source ecosystem
If it were today's MS launching Windows Phone, I think it could take off
This also made the battery life much better. (Although whenever I mentioned this, the usual retort I got was, of course the battery life would be better if there were no apps to consume it...)
WP would offload applications from RAM as soon as you switched into another application. It was impossible to multitask — you're writing a comment on a message board, switch into a dictionary to quickly look up a word, switch back... and the state is gone. If you're lucky and the application was written correctly, you would only have to wait for 5-10 seconds before you get your half written comment back. If not (which was the norm for the stuff I used), well...
My second phone, an Android, had none of these problems, or at least nothing remotely to the same degree.
It was such a widespread problem that it quickly became a meme on forums.
Sadly, I was only able to get it in 2015, and by then it was too late. I don't think any phone since has hooked me like that.
Honestly, it's a huge loss for all of us. I always felt like the U.S. government should have blocked Google from making Android "free." It killed the market for all non-iOS operating systems. We'd have a much richer world if all horizontally integrated OSes had to charge a licensing fee, instead of using a search monopoly to kill competition in other markets (and then using said free OS to further extend their search monopoly).
I also blame Google for killing Blackberry. If Google is blocked from using its search monopoly to make Android free, imagine the world we would have.
Android, for many years, was actively bad, but it was also a free OS that phone companies could grab. And the rest is history.
The reality is that they all wanted what Apple had - a walled garden to charge exorbitant amounts. Only Google had the foresight to leverage open source (not free).
Palm and Nokia did have very good OSes at the time, and well, HP killed Palm and then Microsoft killed Nokia (those two turkeys)
Android wasn't great but Google iterated very quickly and had the clout to go with it at the time.
To be fair, I was getting seriously fed up by the poor software support at the same time which may have amplified my resentment.
When Apple makes a mistake, it was really a genius 4D chess move and everyone will copy them and also it wasn't really a mistake, we just have to trust the plan.
Those things all sucked and deserved to be panned, but we all remember plenty of people defending them too.
We have an inherent recency bias, totally natural of course. But this is where you do journalism and research stuff.
> don't really use other devices
Sometimes I feel like I might as well read the spec sheets myself than read “reviews” written by these people
Also, these journalists might not be professionals at all.
In the end, each body is a niche, which its owner is uniquely positioned to know better than anyone else. But that's hard to accept, sometimes for medical personnel, and often for the patients themselves. They tend to want the doctor to be an all-knowing god.
Like smartphones with an entire interface focused on multitouch. There wasn’t another one of those “proven in the market” before the iPhone.
Or, you know, the first mass-marketed personal computer with a GUI, and the first successful one with a mouse (Lisa, Mac).
The Kinect example is nonsensical; it wasn't used as an authentication device. Even if they used the same team and technology, so much more went into it (like the Secure Enclave) than simply repackaging what already existed.
It ignores the fact that there has been a welcome step back from the derelict wasteland of "flat design" that users have endured for far too long. Flat design is often cited as a reaction to absurd levels of skeuomorphism, which Apple certainly WAS a leader in. Remember the "felt" surfaces of Game Center, the "paint" upon which was inexplicably a control? And the "leather" binding of Notes?
Then there's this: "In AR, visual affordances work differently. A button that casts realistic shadows and responds to virtual lighting feels more "real" when floating in your living room than a flat, colored rectangle."
That makes it a SHITTY control, which will get lost in the visual noise of the real environment. This UI sucks for the same reason that sports-stats graphics that are tracked onto real surfaces in TV coverage suck: They don't stand out. It's that simple.
So after years of "flat" design where nothing was demarcated as a control and users were apparently supposed to click on every pixel and every character on the screen in a hunt for hidden goodies, this article celebrates Apple's plan to create the same problem in AR using OVERLY-decorated controls.
Not to mention the stupidity of crippling computer, tablet, and phone UI for the sake of a "VR" UI. This isn't just dumb from a practical standpoint, but from a technical one as well. There's no reason that the control library can't be rendered differently on different devices. So, if this (admittedly poorly-substantiated opinion piece) is right about the motivation behind Apple's exhumation of the "transparent" UI fad that died 20 years ago, we can only lament the end of desktop usability... which Windows flushed vigorously with Microsoft's brain-dead attempt to dumb its UI down for touchscreens years ago.
And despite people constantly whining about it, GNOME is ultra fast, has great shortcuts, and it looks kinda like the pinnacle of UI design, which IMHO was Windows XP.
If and when this comes, I'll be changing the setting to maximal opacity, just like I did with Windows Vista.
Yeah, it's not like these lessons haven't been learned before. I guess Apple could always do it right this time, but I don't see it happening.
Or in layman’s terms: Let’s hope this isn’t like Microsoft with Metro, “everything is a smart phone” even when it’s not.
https://substack-post-media.s3.amazonaws.com/public/images/6...
The items look so much more tangible, and the text is more readable. Everything is easy to grok visually. The flat design looks way more confusing. And the liquid glass one looks even worse.
Same. But how would large teams of UI designers justify their jobs if they'd leave it like that for 10+ years?
And it's not just phones either. Car companies spend money on retooling to give a model a facelift because people expect it. Sales drop and then pick up again after the facelift because nobody wants to buy something that looks dated from day one.
Manufacturers take cues from each other because once a "modern" trend is set everything else looks dated. Everyone went with flat UIs in a matter of a few years. Cars went with lightbar lights in the past few years too. That's what feels modern now.
As long as a huge part of the market remembers skeuomorphic design and associates it with the early 2000s it will never feel modern so designers stay away from it.
P.S. For me suspenders are still the third best way to keep my pants on (right after "picking the right pants size" and "fastening the buttons"). But nobody wants them these days and it's not a Big Belt conspiracy. They just don't look modern.
People aren't always rational. And "customers" aren't a single group either.
Motorcycles of the classic cut are still being manufactured and sold in massive quantities, even though the design is about 50 years old. Same for them, customers know that the quality is high so it doesn't need to say "new".
And I'm positive that people would line up to buy cars with classic designs if the manufacturers started caring about what customers actually want. Not that I dislike modern car design, but it hit the sweet spot about 5 years ago IMO.
So at least for hardware I think a classic design works well to communicate quality.
And I think we're soon reaching a similar mood in software GUI as well.
Also, I can actually read the battery level indicator in the skeuomorphic display. I sometimes resort to getting out a magnifying glass to read it on my iPhone's current display. (Yes, I have old eyes. And I have to keep telling those Apple UI people to get off my grass.)
I thought there was supposed to be a way to add a tint to it, though, which I haven't found a setting for, and I think I would do that if I could find it.
I suppose it's easy to grok what the newsstand is[1], but I'm not convinced it would matter after the first five minutes.
[1] Because I've seen it in US media, along with the route symbol on the maps icon and the fire hydrants that are in captchas.
I don't know how well connected it is to the power-user axis, but I would say a characteristic power user doesn't care that they are looking at a somewhat garish and busy collection of colored icons, gradients, bezels, etc., whereas the opposite sensibility favors a minimalist UI for the aesthetics over, perhaps, ease of locating things. The real opposite of a power user is not a first-time user, it's a non-user. The non-user is not annoyed that they can't find things hidden away in secret trays you have to swipe for and such, but they appreciate the resulting saved screen space.
Sure, it's not pretty by today's standard, but it's way easier to use IMO.
John Carmack writes:
> Translucent UI is usually a bad idea outside of movies and non-critical game interfaces.

> The early moments of joy are fleeting, while the usability issues remain. Windows and Mac have both been down this road before, but I guess a new generation of designers needs to learn the lessons anew. Sigh.

> All of the same issues apply in AR as well. Outside of movies, people do not work out their thoughts on windowpanes or transparent "whiteboards" because of the exact same legibility issues.

> Would you prefer a notebook of white sheets, or hundreds of different blurry image backgrounds?
We're talking about Apple here, who prioritize aesthetics over everything else; they shipped a defective keyboard design for 5 years straight just to shave 1 millimeter of thickness off the laptop.
It needs to 'wow' people in the Apple Store in terms of look and feel, usability be damned.
I put off upgrading my personal MacBook for years after work issued me a MacBook with the touch bar. Such a usability nightmare for the sake of eye candy. That was a long seven years.
And that's counting the extra ~year I got out of the Lenovo after having to replace the fans.
Having user serviceable parts is nice but having parts that last 14 years is better. If there was a brand that did both, that's what I'd buy.
Later I switched to Lenovo when I got money, and still no issues. Meanwhile, all my mates with 2016-2018-era MacBooks have had endless issues, so many that they swore off ever buying Apple again.
Anecdotal stories can swing in both directions, that's not proof of anything.
- most of the close-look examples are of very simple buttons with a single geometric shape, like ">" or "□". Those will be legible even in pretty extreme conditions, and we can't expect real-world applications to be mostly composed of those.
Imagine the screen at 11:51 with "Select" as the button text instead of the geometric icons. It wouldn't be great.
- text is only presented on very low-contrast areas. When scrolling the elephant picture around 9:30 to show the title go dark -> light, for instance, it's a switch between a very pale background and a very saturated one, and they stop the scrolling when the title is against the darkest part of the image, where it's most legible.
It's not just the implementation, IMHO: in a real application you can't adjust every screen and interaction to hit only the absolute best conditions for making Liquid Glass look good. The whole idea behind it is just harder to make look good in the real world, short of giving up and going for very low transparency. (Some quick contrast numbers below make the point concrete.)
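To put rough numbers on this, here's the standard WCAG 2.x contrast calculation in Swift (the sample colors are made up, not measured from Apple's demo):

    import Foundation

    // Relative luminance per WCAG 2.x, from linearized sRGB components.
    func luminance(_ r: Double, _ g: Double, _ b: Double) -> Double {
        func lin(_ c: Double) -> Double {
            c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
        }
        return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)
    }

    // WCAG contrast ratio; 4.5:1 is the usual bar for body text.
    func contrastRatio(_ a: Double, _ b: Double) -> Double {
        (max(a, b) + 0.05) / (min(a, b) + 0.05)
    }

    let title = luminance(1.0, 1.0, 1.0)       // white title text
    let dark  = luminance(0.10, 0.12, 0.15)    // darkest part of a photo
    let pale  = luminance(0.85, 0.88, 0.90)    // palest part of the same photo

    print(contrastRatio(title, dark))          // about 16:1, easily legible
    print(contrastRatio(title, pale))          // about 1.3:1, effectively invisible

Scroll the title from the dark region to the pale one and it goes from comfortably legible to unreadable; that's the "you can't control every screen" problem in one number.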
If you are staring at a display which shows a video feed (VR/MR form factor), you are correct.
For walls of text you can still opt out of this and use a less translucent material.
or youtube ones lol
It's not about transparency, it's about not using AR and multitasking in the real world. The purpose of AR in a headset isn't to free up your hands so you can read the group chat while you drive or walk, it's to make UIs that can't feasibly exist with a screen alone.
I'm multitasking either way, glancing down at the phone or watch for every notification.
Weird tangent, but I used tracing paper over a piece of graph paper for notes for a while. I liked it because I could use the graph paper for drawing my figures or aligning my text, but then have something more aesthetically pleasing and nicer to read afterward. I find reading on graph paper annoying, due to the vertical lines.
Anyway, I can’t think of any way that a transparent OS window could be similarly helpful.
You might get on with a Whitelines pad[0]?
[0] https://www.whitelinespaper.com/product/engineering-pad-8-5-...
I don’t think I’d like to have it on everything, though. Or basically anything except my terminal, for that matter.
I’d still rather have an opaque terminal and just get a second monitor, but that’s not always an option.
Just found the setting…thank you! It was actually driving me crazy. There’s still a bunch of really weird, unnecessary UX changes but this helps a lot.
This could be partly because there's nothing immediately close to the window so any writing is the only real thing in the focal plane.
If I stand further back, it is somewhat harder to read, but I imagine this wouldn't be a problem with displays that can emulate that (like laser retinal displays iirc)
Whiteboard markers don't really work though (too transparent when it's bright out). Permanent markers and liquid chalk work best, though the latter can be especially annoying to erase. Some glass-specific erasable markers aren't bad either.
I have a hard time parsing this.
"Is the glass effect transparent if I don't turn off transparency?"
uhh.. Yes.
As a bit of a citizen scientist myself, let me explain how I wrapped my layman's brain around it:
If I can see through it, it's transparent. If the color changes behind the thing, and I can somehow intuit that -- good chance we're dealing with transparency.
> If I can see through it, it's transparent.
Yes, if you can clearly make out the details behind whatever it is you're looking through.
> If the color changes behind the thing, and I can somehow intuit that -- good chance we're dealing with transparency.
This would normally be translucency, akin to a shower door or curtain that lets you see that someone is in there and maybe who, but not much more.
In this case though, it's a bit weird, and it seems like the person you responded to did have a relevant question, because as far as I've seen it's kind of pseudo-transparent but not quite translucent in different contexts, in the sense that you can more clearly see through to detail that's sometimes there (slider position, magnifying glass) and sometimes only derived from the bottom layer, like colors changing.
To me, it's less like a shower door vs a window, and more like a window vs looking through the bottom of a shot glass, but in some cases closer to opaque than translucent if the transparency gimmick is turned off, based on how the question was asked.
All of the outrage screenshots I’ve seen appeared to have been taken during animation.
My suspicion is this concern went the same way as DEI/ESG
Reading the article's claims about translucent UI being ideal for AR, all I could think about was how bad this roadside traffic sign would be if it was white text printed on translucent glass. https://images.app.goo.gl/MU4kJmWZ8ogNGAD9A
Assuming the information a daily-use AR headset is presenting is important, it needs to be instantly legible to be useful. I guess my counter to the article would be images of "Roadside traffic signs as re-imagined in Apple's Liquid Glass." Would showing an intersection with a Liquid Glass stop sign and a car crashed into the side of another be too much?
That basically is just a way of making people upgrade their iPhone
The evidence suggests this isn't AR prep at all. I watched Apple's 20-minute design presentation, and their design team makes the same point repeatedly: Liquid Glass has very narrow guidelines and specific constraints.
Here's the actual design problem Apple solved. In content apps, you have a fundamental trade-off: you have a few controls that need to be instantly accessible, but you don't want them visually distracting from the content. Users are there to consume videos, photos, articles - not to stare at your buttons. But the controls still have to be there when needed.
Before Liquid Glass, your least intrusive option was backdrop blur or translucent pastel dimming overlays. Apple asked: can we make controls even less distracting? Liquid Glass lets you thread this needle even better. It's a pretty neat trick for solving this specific constraint.
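To make that trade-off concrete, here's a minimal SwiftUI sketch of the pre-Liquid-Glass baseline (the asset name and control set are placeholders, not Apple's demo):

    import SwiftUI

    // A few infrequently used controls floated over rich content and
    // de-emphasized with a system material, so the content stays primary.
    struct ContentOverlay: View {
        var body: some View {
            ZStack(alignment: .bottom) {
                Image("photo")                    // placeholder asset name
                    .resizable()
                    .scaledToFill()
                    .ignoresSafeArea()

                HStack(spacing: 28) {
                    Button {} label: { Label("Share", systemImage: "square.and.arrow.up") }
                    Button {} label: { Label("Favorite", systemImage: "heart") }
                    Button {} label: { Label("Delete", systemImage: "trash") }
                }
                .labelStyle(.iconOnly)
                .padding()
                // Backdrop blur: the classic least-intrusive option.
                // Liquid Glass is pitched as the next step past this.
                .background(.ultraThinMaterial, in: Capsule())
                .padding(.bottom, 40)
            }
        }
    }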
So you'll feel like you're seeing Liquid Glass "everywhere" not because Apple applied it broadly, but because of selection bias. The narrow use case Apple designed this for just happens to be where you spend 80% of your phone time: videos, photos, reading messages. You're information processing, not authoring.
Apple's actual guidelines are clear: only a few controls visible at once, infrequent access pattern, only on top of rich content. The criticism assumes they're redesigning everything when they explicitly documented the opposite. People are reacting to marketing tone instead of reading what Apple's design team actually built.
[1] https://peoplesgrocers.com/en/writing/liquid-glass-explained
Or an outline, like gameboy emulators have been doing forever
Often, UX design rhetoric floats way beyond reality. For now, a lot of Liquid Glass is grossly applied. It's only dev beta 1, so it's likely it'll improve over time... especially if they launch an AR product.
But if I want to use the buttons, that necessitates that I see the buttons first in order to use them. If I don't need to see a button, the button probably shouldn't be there at all.
It's not the worst design I've ever seen, but it does feel like they've swung a bit too far in the "users want to focus on the content" direction. The tools to interact with the content are also an important part of the interface and if you can't see them clearly they're not very usable.
There are usability reasons for this too - for instance, even if it's blurred, a hint of what content is behind the bar helps the user know when they've neared some new content or when to stop scrolling, or whether there's more content above/below the unobscured viewport.
> The criticism assumes they're redesigning everything when they explicitly documented the opposite.
Does Control Center fit those guidelines for applying Liquid Glass?
It doesn't look like Apple has as much restraint as you're giving them credit for.
I would rather use borders and color contrast to create visual separation anyway. That approach takes up less space. White space makes your UI less dense, but blur is even worse.
Either way… how does that relate to my keyboard being transparent? I don’t need to see a completely illegible blur of the colors behind my keyboard.
I just turned on the “reduce transparency” setting and it’s much better.
I knew a lovely man, a kind hearted engineer, Larry Weiss, for whom this was not true... In the early 1980s I was in his VW van that he used for business roadtrips when he, while still driving, grabbed a felt marker and started drawing on the window in front of him to illustrate a point.
I learned that he kept markers handy and used them to capture his thoughts on long drives (to conferences, customers etc). Rough mechanical sketches mostly.
Back then I did not generally know to describe myself as (modern term) aphantasic, as I had yet to realize I was different from most people, but hopefully this context helps you understand why I then (and now) grokked the value of putting your conceptual thought into your ongoing visual field, non-occlusively, aka transparently.
Legibility is no more the deciding factor of ~utility here/AR/VR than it is in dreams. Indeed I have been very near sighted for over 50 years and I do not find this ~illegibility to be an issue for clarity of visual~assist to thinking.
The point John Carmack makes may have greater merit for other people or if we were limited to discussing text--but Liquid Glass is not about text per se, is it?
"...there’s way too much information to decode the Matrix. You get used to it. I…I don’t even see the code." -- Cypher from The Matrix 1999
P.S. If my story about Larry intrigued you, I am happy to share these two tiny tidbits I found in memoriam ...
https://isaac-online.org/wp-content/uploads/ISAAC-E-News-Oct...
https://w140.com/tekwiki/wiki/Larry_Weiss [re his work and patents at Tektronix in the 1960s]
Judging from the demos, Apple's version is even more translucent than Vista, so I have no doubt that it'll be bad.
Damn, now I want a frosted glass whiteboard. As if wanting a whiteboard wasn't bad enough.
So if that's the next big thing, Apple had to get consumers used to translucent UIs.
https://www.theverge.com/news/604378/apple-n107-ar-glasses-c...
Yet I see this speculation copied (TechCrunch), copied (MacRumors), copied (Substack), from one article to another, with the fervor rising at each step, and we never get anything close to substantive.
In 2023 I read that AR was due in '24; then in '24 it was '25; and now in '25 it is due in '26. AR also now has something to do with AI, because of course it does, and Apple's new blurry UI is supposedly something to do with this product that is 1.5 years out at minimum... Sure.
People do not want invasive glasses, even if they make them as small as normal glasses. I just don't see it becoming anything other than a niche product.
It's like all the moves to voice/audio interfaces powered by AI. They simply won't take off as audio is inherently low bandwidth and low definition. Our eyes are able to see so much more in our peripheral vision, at a much higher bandwidth.
Some would argue that's an indication that AR will happen, but it's still so low-def, and incredibly intrusive, as much as I love the demos and the vision (pun not intended) behind it.
As far as I can see, the only motivation for the visual overhaul is that they need something to fill the gap until they have some real AI innovations to show. This is a "tick" in the traditional "tick" -> "tock" development and release cycle - a facelift while they work on some difficult re-engineering underneath. But that's not AR, it's AI.
Good AR glasses are already available and combined with modern LLMs you can have normal people thinking about computers the way we do. This will feel less invasive than smartphones do currently while being able to do much more.
I'm absolutely certain Apple will not survive the transition though.
Which ones do you think are good?
It's just a matter of packaging and selling it the way the Palm Pilot was packaged and sold as the smartphone.
This is why I like Lex Fridman-style podcasts.
Wait, are you arguing that consumers will reject something that puts, say, a social media feed in front of their face 24hrs a day? That will allow them to just gaze at an internet site constantly without even having to think about it? That will allow them to have videos in their peripheral vision while they “concentrate” on something else?
AR headsets will not replace computers, they’ll replace phones.
Anything less than lightweight glasses is a non-starter outside of gaming and other enthusiasts. The Vision Pro is just too bulky for it to sell serious numbers.
Another rather significant historical fact the author completely omits is that the iPhone generated crazy hype among consumer customers [0] and bored the business community.
I think it would even be charitable to assume the opposite about AR "computing".
Constructing the premise of "this is a precursor of the next big thing" in light of this contrast is rather hard to follow.
[0]: https://www.nytimes.com/2007/06/27/technology/circuits/27pog...
Maybe it would still work ok in low light.
They weren't first with MP3 players, smartphones, tablets, or smartwatches - but when they entered, they often redefined those categories. The current AI situation likely follows this same playbook.
Apple's culture of secrecy means we only see what they choose to release.
What's often overlooked is that Apple might be playing a different game entirely.
Tim Cook's measured approach and the company's $100B+ R&D budget suggest they're building something substantial, not scrambling to catch up.
They may be betting that the current LLM race will commoditize, and the real value will come from integration and user experience - areas where Apple traditionally excels.
It could also be that Tim Cook is just an ops guy who only knows how to hill climb graphs up and to the right and Apple is running out of the innovation momentum it had when Jobs died.
Err, no it isn't. It would be more accurate to state "the only thing that apple has recently been successful at is entering markets later with more refined, integrated solutions".
But it's definitely not the pattern of what they've tried.
The 100 billion also includes tons of expensive failures, like self-driving cars, etc. Those are not cheap, and they were definitely playing catch-up.
Here's an alternate take - apple fails at things sometimes, and has historically been able to get people to ignore and minimize the failure. Leading to folks saying things like "Apple's pattern has always been to enter markets later ...." because they just ignore the failures that contradict this. In this case, they are not just failing at AI, but are failing to get people to ignore the failure.
Why isn't that a better explanation than "apple never fails, they secretly were not trying to succeed at AI, they aren't spending billions trying to catch up, they are spending billions on a secret knockout blow that nobody knows about"
I can't believe real people actually believe this kind of stuff. The author seems to think the tech press alone is fixated on the AI story. Apple themselves were all gung-ho about AI last time around. They sold an entire line of iPhones touting the benefits of AI. They even "invented" a brand for their line of offerings - Apple Intelligence.
And when it all fell flat, Apple had to apologize and had to (yes, had to) showcase other things. Liquid Glass essentially was a replacement for that. If Apple had anything meaningful to show in the AI world, they would have showcased that.
And the author seems to think Apple is playing 4D chess. Sometimes the simplest explanation is what is really going on.
Real-world productivity improvements due to AI in terms of actual metrics or financial outcomes remain stubbornly undemonstrated.
Usability issues only manifest after point of sale. Messaging/marketing happens after the work has been done (and can involve post-rationalization).
They’re more like quality of life issues that users will appreciate.
And now that 3rd parties can access Apple's LLM models… let me correct that. Shortcuts, a visual automation app, can also access models on device, or bigger, more capable models using Apple's Private Cloud Compute.
Apple’s not playing multidimensional chess… but they are playing the long game, where users won’t have to use multiple AI chatbots to get work done, because most of what they want to do is handled by their current apps with new AI capabilities—on device.
If anything I'd expect Google, OpenAI, Anthropic ... Or even Meta to have a better on-device "lite" model before Apple.
Blurring in AR is quite difficult, as it requires an accurately aligned image of the world behind the overlay. The point of AR is that it's just an overlay; you don't need to render what's already there. But to make a blur, you need the underlying image, and that costs energy, which you don't really have on AR glasses.
People are in for a world of pain when they realize this.
Imagine an overlay in front of a red circle. If you want the red circle blurred in the overlay, you need to know about the red circle and sample from it for each pixel. The Vision Pro can't sample the entire viewport at 120fps (or whatever fps they are running at). It would be a janky mess.
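A back-of-envelope sketch of the cost, with illustrative numbers (not the Vision Pro's actual resolution, refresh rate, or pipeline):

    // Why sampling the world for blur is expensive on pass-through AR.
    let width = 1920, height = 1080   // assumed per-eye camera resolution
    let fps = 90                      // assumed display refresh rate
    let kernel = 15                   // modest blur kernel diameter, in texels

    // A naive 2D blur reads kernel x kernel texels per output pixel;
    // a separable blur cuts that to 2 x kernel. Either way, the camera
    // frame must first be captured, warped, and resident as a texture.
    let naiveReads     = width * height * kernel * kernel * fps
    let separableReads = width * height * 2 * kernel * fps

    print(Double(naiveReads) / 1e9)       // ~42 billion texel reads/s, per eye
    print(Double(separableReads) / 1e9)   // ~5.6 billion texel reads/s, per eye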
Vision Pro UI is not transparent / translucent but frosted.
https://www.apple.com/newsroom/images/2024/02/apple-announce...
Even worse than that. Each pane of glass that's blurring needs to do its own sampling of what's behind itself. That means you have to render from back to front, sampling in 3D, for each eye, to get realistic blur effects. Your vision isn't a 2D grid coming from each eye, it's conic. A line from your retina out through two equally sized panes placed in front of each other will likely pass through two different points on each pane.
You'd probably need to implement this with ray tracing, to make it truly accurate, or at least convincing. And to make your device not slow to a crawl as you open more and more windows.
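Roughly, the dependency chain looks like this (a sketch with hypothetical types, not a real Metal renderer):

    struct Texture {}                         // placeholder for a GPU texture

    func blur(_ t: Texture) -> Texture { t }  // stand-in for a sampling kernel

    struct Pane {
        let depth: Double                     // distance from the eye

        // Rendering a pane requires reading the backdrop composited behind it.
        func render(over backdrop: Texture) -> Texture {
            blur(backdrop)
        }
    }

    func composite(panes: [Pane], cameraFrame: Texture) -> Texture {
        var scene = cameraFrame               // start from the pass-through feed
        // Far-to-near: each pane depends on the output of the pane behind it,
        // so the chain is inherently serial.
        for pane in panes.sorted(by: { $0.depth > $1.depth }) {
            scene = pane.render(over: scene)
        }
        return scene                          // and again for the other eye
    }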
Looking at Liquid Glass, they certainly solved it for higher-res backdrops. Low res should be simpler. It won't be as clean as Liquid Glass, but it could probably do VisionOS quality.
You need the camera on and streaming; sure, you only need a portion, but your camera also needs to cover all of your screen area, and the output has to be remapped. It also means that your camera now has limited placement opportunities.
Having your camera on costs power too, so not only is your GUI costing power, it's costing even more power because the camera has to be on as well.
If the display is naturally transparent, I don't see the need for a non-opaque UI.
You're right, but it depends on the screen type. It turns out that just being transparent isn't actually good enough, you really want to be able to dim the background as well. This means that you can overwrite the real-world object much more effectively.
but that adds a whole level of complication.
I’m not sure what I think about liquid glass, but I do agree with the premise that it’s being driven by the move towards AR and extending interfaces outside the phone/tablet.
I think another interesting tell here will be the 20th anniversary iPhone, which should be coming in 2027 - the iPhone X set the tone for Apple devices for the next decade (so far), and I’d expect to get a better idea of what Apple’s doing here when they show off that hardware.
marketing (big new design), design trend catch-up (metro, android), and all those other technical reasons (memory, textures, vector graphics, enables easy dark-mode) etc etc
just my guess, but making a dark mode (more easily) possible must have been a large factor too
I expect an "AR based UI" to somehow leverage depth of field and focus. Blur and translucency/transparency used to achieve that could be amazing.
I'm reminded of prior UIs which had Z depth. One of the iOS releases had parallax.
Remember that awesome demo repurposing two Wii controllers to do head tracking? It transformed the screen into a portal that you looked through. Moving your head around changed your perspective.
I want that.
I just started watching the WWDC videos. So far I like what I see. I'm on board with stacked components; we'll see how it pans out. I love the idea of morphing UI elements, transitioning between list <-> menu bar; I really want this to succeed.
Mostly, I want less clutter. No matter the appearance or theme, I'm overwhelmed by all the icons, options, etc.
The age old conundrum of balancing ease of use against lots of features. Having created UIs in anger, I'm no smarter than anyone else and don't have any ideas to offer.
Further, I'm apprehensive about voice (w/ GPT). Methinks this will become the best strategy for reducing visual clutter.
Being an old, I just hate talking to my computer. Though I accept it feels natural for others, like my son's generation.
Oh now the shiny is back, but worse.
> armchair experts were laughing at us old people for reminiscing about the eye candy of OS X ...they told us... everything needs to be flat. ... You're not supposed to know the difference between text and buttons...
it's pretty clear, i think, that most of this stuff, including from the designers (apple) themselves, is just post-hoc justification in the end... It didn't have to be - there was a time when they spent money and time watching people use stuff and figured out how to improve it. Nowadays that sort of scientific process is only applied to increasing engagement and addiction on social media.
The pre-WWDC rumors suggested that iOS and macOS would be refreshed with inspiration from "Apple Vision Pro." However, after using the interface, I don't see much similarity beyond the use of translucency and some of the toolbar shapes.
I had preconceived notions from watching the WWDC videos before trying the new interface, but I didn't really get it until I used it. The videos don't do it justice and fail to provide a genuine feel for the experience.
Keep in mind that much of what you see in the videos consists of marketing renders.
Note: None of this is to claim that AR isn't going to be a thing. I completely believe it will come to dominate.
I feel Apple throws a "UI overhaul" WWDC when they want to occupy all the discussion about them with UI discussion while they buy themselves more time to work on things. People will spend all their time and effort arguing the merits of a UI that Apple fully intends to change again as soon as they release it.
Bullshit. Nobody picked up a Hololens and thought "Oh no, I don't know what to do." Nobody put on a first generation Vision Pro and was clueless how to use it because the UI wasn't skeuomorphic and glass-like. AR has been around for decades and this hasn't been a necessity for anyone, ever.
Simply put: it's not important for AR/VR, and it's definitely not necessary for every other computing form factor to adopt it so that folks are somehow prepared to use AR someday. My laptop isn't AR, don't give me an AR interface because it's nice to be consistent across your product lineup.
The only take this post gets right, as best as I can tell: liquid glass gets stuff wrong, and it'll need to change before shipping.
Huh. I always took the move, which I seem to recall as being led by the Google Material folks, as a strategic move to kneecap Apple's huge graphics advantage on iPhones. Apple's hardware could actually execute aesthetically pleasing "real world" things on screen (shadows, blurs, etc.). Whereas in the ragtag horde of Android devices, only a few could draw pretty things, and at large power consumption. It seemed a genius move that suddenly a gestalt of "uh, let's all just work with squares of color, ya know, like construction paper" emerged out of Google as "the new cool." It was marketed well, and the "eye candy space" had saturated, so Apple was forced to "catch up" after holding out for a few years.
Get a grip.
The total sales of the Vision are a fucking rounding error compared to the sales of the iPhone.
Glass is neither new nor is it some grand strategic vision.
What an embarrassment.
Or that your phone will start shedding tears every time you touch it
Genuinely -- fashion is fine, plaid is in this year, great! Whatever!
But so many bozos think they're doing "science about human behavior" when they do this, and they're not.