Like, if you have an eCommerce store and you want an icon for mops, doesn't it kinda make sense to take your top 100 selling mops and blend them together into an icon?
I will take my extremely dense UI over an accessible interface that shows only 10 rows on a spreadsheet in 24 point font. Think of those with low vision!
I fully agree with you. But ...
> I will take my extremely dense UI over an accessible interface that shows only 10 rows on a spreadsheet in 24 point font. Think of those with low vision!
You'll be Ctrl + scroll-wheel zooming to read your 3rd monitor too eventually... be nice on HN.
I'm failing to parse this
For me, the perfect examples of feeling infantilized (I like that word) are the following: I feel infantilized by government or hospital illustrations that are meant to convey a simple message to the lowest common denominator. I feel infantilized by Youtube videos and TV ads that use these happy but annoying 'toddler' sounds in the background.
IME, there could be a lot more clarity in product and UX if more people were honest with themselves and others about just wanting "better" looking UI.
if designers need the AI hypetrain to bring back drop shadows and the basic usability affordances we had for the 30 years before they ruined things, so be it
I don’t hate the trend but I am underwhelmed. Just loaded up the Airbnb site and… it’s new icons. The actual UI I’m interacting with hasn’t changed a jot, though. Google’s Material UI stuff from a few days ago felt much more interesting.
I’m not convinced either.
Most of the stuff on HN isn’t exactly written by super geniuses, especially for blog posts without some kind of analysis.
I do agree with the article that before the iOS 7 flat design rush the barrier to entry for indie app developers was super high because it's damn hard to make the iOS <7 style look good. Flat is easy though. But with AI tools, the old style is suddenly available to lots of people again.
Don't overindex on one guy with a newsletter. My read on this article was this is someone trying to be the guy who called the next trend. He cites no evidence supporting the claim that it's the next trend, he's just trying to be the guy who puts a name to it so that he's an influencer if it does.
He also mentions how he's been designing icons in this style for decades, which should make a skeptical person wonder whether there might be some wishful thinking going on here.
What I can tell you for sure is that, as a professional designer, I have not received any memos mandating the use of this style of icons. I'll let you know when I do.
Don't undersell it. It also manages to run like ass on a desktop with a modern gaming GPU, and takes 5 years to load on a gigabit connection.
And got my location wrong. And decided to machine-translate everything away from my native language.
Was it a little fun? Sure? Maybe? It was cute for a moment or two. But "near production ready"?
Kids downloading emulators today and experiencing 40 year old consoles like the NES and 50 year old consoles like the Atari 2600 for the first time don't want an all-text UI. They want a picture of that ancient hardware. It makes their experience feel more real. In contrast, RetroArch is an emulator frontend with a no-frills UI, designed with the mindset that nothing takes up much space, everything provides value, color schemes aren't all over the place, and there aren't really any icons. And because it's the ideal HN emulator, it's difficult to use and ugly.
“Delightful” UI/UX has become a cliché at this point, but it really does make me happy to see an element of craft and intention in the software I use, and stuff like these detailed little icons accomplishes that well!
> I treat AI as just that: a tool, not a shortcut to the final result—then there’s still a lot of room for craft, taste, and care.
I wholeheartedly agree with the author here.
Obviously companies need to define their own image in design, but that is not an excuse for bad design.
But that doesn’t mean we should lose sight of fun completely, which I think in many places we have!
> After Airbnb showed off their redesign, the internet exploded with soft, dimensional, highly detailed icon sets prompted into existence using generative AI tools.
One company's redesign + random proofs of concept does not indicate a real trend, and the idea that LLMs make designing with dimensionality in mind more accessible is dubious.
Good design requires consistency. High dimensionality makes consistency harder to achieve. LLMs perform better when there are fewer design nuances to consider. Additionally, we can expect LLMs to reinforce existing trends, as they're all trained on what exists today.
- Natural language interfaces. You can now communicate with your computer with language, voice or text. There are some situations where this is an improvement, but others where it's not. It'll be interesting to see how interfaces are designed to combine the best of both worlds.
- Adaptive interfaces. The UI/UX of the last period of computing is largely a solved problem. There are standard UX solutions for most types of problems. It's also become significantly easier to build these interfaces, and LLMs are pretty good at writing basic declarative UIs. I think the bar will be raised such that users expect their interfaces to adapt to them, instead of a one-size-fits-all solution.
- Immersive interfaces. This might be similar to "dimensional" but more about the actual UX instead of just how 3D the buttons and icons are. I think using 3 dimensions is a natural solution for expressing higher information density. VR and AR will eventually catch on in some form.
https://design.google/library/expressive-material-design-goo...
Bring me back to the era just after Geocities 1999. That pre-web 2.0 world saw some real futurism, before all the flatness took over.
Even Frutiger Aero was more forward thinking than Material.
It feels a lot like language divergence. iOS and Android started at the same place (same language), then became different accents, different dialects, and (soon/now) different languages.
Also dimensional icons have existed within flat UIs as app icons for quite some time, though some platforms have had periods of both flat icons and UI. In a sense they are adopting them in this existing usage as sub-app icons.
The oddest thing is the glossy "new" tags, they are the only tags within the UI which are glossy. Having them mixed with flat tags and flat buttons is honestly confusing, they look more like buttons than the actual buttons do.
> Back in the early 2000s, UI design like this had a high skill ceiling. It took years to master lighting, materials, and depth. Now? That level of craft is often just a prompt away.
Mastering any design style takes time, and the skill ceiling is not meaningfully different if there even is one. I'm also highly sceptical that AI would be able to be consistent enough whether generating flat or 3d icons.
This might not be so different from, say, clicking a button that sets off a massive Rube Goldberg machine: it takes money out of my bank account and sends emails about my booking to all interested parties, with signals moving across continents, before at some point in the future sending me a confirmation message when everything is finished. People are beginning to think of the output of pressing a booking button as automation, instead of just output on a screen that says the attempt to book succeeded or failed.
(If anyone from the Airbnb design team reads this: please, please, please work on making sure that the absolutely important information about a booking is actually visible in the text messages you send. And that the page before booking does not hide information about the booking on an iPhone 13. You do not have control over the thousand different combinations of phones, operating systems, etc., so when the information has to be presented in words alone, it has to be done in a way that people understand.)
On the theme question, Zukitre is flat, but it has contrast.
It will do everything for you that you now do using the web... we'll be opening fewer apps and web browsers in a few years or less.
Here’s an inspiring one:
Sure you did.. https://openemu.org/
Just a couple days ago I was thinking about this (and in fact, started exploring icon packs for my desktop) that all the UIs I use are reduced to "black and white" icons and widgets. Colors are missing, sophisticated shapes too. Sometimes I am actually wondering what an icon is meant to represent.
At my new gig I have to use web Outlook (not allowed to use my finger-memorized mutt setup), and I must say it's a pleasure to look at the UI. Still line-drawing icons, but an elegant play with colors at least. Similar to how some LibreOffice icon packs look.
I rather hope this is the future. Use colors as accents, leverage a "grouping" functionality with them.
What do you mean "colors"? I have been using web Outlook for a while, and everything is blue, black, and grey with a ton of empty space.
Going all the way back to PARC, pre-OS X Mac OS, VRML, OpenStep, and countless widget toolkits and window managers for Linux (anyone remember Enlightenment? https://www.enlightenment.org/about.md), we've had deep dimensional design, animations, and heavyweight skeuomorphics multiple times, and have always returned to shallow static dimensional design for mass/generic applications.
I suspect it amounts to less overall mental effort which improves workflow speed and, thus, a greater perception of interactivity.
Maybe deep dimensional design will return when cheap and ubiquitous AR (augmented reality) or VR becomes a...reality, but I don't see it happening with 2D displays.
As a former freelance graphic/web designer I'd argue that the style of icon is much less important than the organization of those icons. E.g. some software will put 50 icons seemingly randomly into a toolbar and let you play a game of "find the right one". The quality of every single icon becomes utterly meaningless then as the organizing principle is wrong.
That means first an icon has to be the right choice and be used in the right context; then we can talk about readability and how well it clicks, and stylistic questions come dead last. Some people think graphic design is about coming up with cool-looking styles. That is literally the opposite of what good graphic design is.
- Motif UI
- Plastik
- Platinum
- GNUStep/NextStep
- BlueCurve
All of them have clear outlines and contrast.
On icons, Tango and Tango2 from GNOME are unbeatable. Maybe only the Haiku/Be icons can truly top Tango.
But the best UI is custom. It would be better if apps had themable UIs so users could apply custom themes if they want to make all their apps have a particular style (e.g. metal, sepia, colorful, modern). Because then the UI can be skeuomorphic or minimal or anything else.
Apple has recently been adding more UI customization. There were custom lock screens in iOS 17, custom app icons in iOS 18, and something's planned for iOS 19. It hasn't gotten much attention, maybe in part because there's still a lot you can't customize (especially in most non-Apple apps), more likely because Android is still even more customizable (especially because it's not as locked down). But Apple doing it is significant because they could start a more widespread trend.
In other words it reminds me more of early android design.
I like it.
I loathe this kind of self-serving pseudo-humble talk. It's like trying to manufacture or validate significance out of something of minuscule value.
I don't agree with the author either. If anything, Google's Material themes have been the de facto trend setter for a while now, and I only know it by the way it "infects" my web usage over time.
Well, yes. Right now that's the trend. The author is saying that he's seeing a change now that indicates the coming end to that trend, not that the trend itself has ended.
FTFA:
> It’s always hard to pinpoint when a paradigm shift happens. Usually, you only recognize it in hindsight.
I don't read that as "The trend has ended", I read that as "In the future (i.e. hindsight) we are going to see these events as the beginning of a new trend".
Whether or not I agree with his position, to me, that is what he is saying.
Let me rephrase - the author is presenting speculation as ... speculation.
This is why I say that, even if I disagree, I don't see the harm in this sort of speculation.
Flat design stands against every single principle of proper design. All I can hope is 40 years from now, there isn't some "retvrn to tradition" BS where a new wave of the youth decide we should return to the "cool classic style of the 2000s-2020s". Let me be an old man and die with the software around me looking beautiful.
There is one problem though: flat design is cheap to produce. Not sure AI is there yet, capable of producing slop good enough to become an industry trend that sticks.
the correct term is ugly flat ass actually
That and the other thing I hate / despair over is the monumental space they now leave between list items. In some applications I get scrolling drop-down menus because of that. Yes, they'd rather have users scroll through a menu that doesn't fit on the screen than compress the space between items.
So they need to rewrite it and got a consultancy to do it. The whole UI got rewritten using react and used an off the shelf flat design. A week after launch they sent out a user survey and nearly every respondent complained about the flat UI. So they made it look like the old one again very quickly.
So now we have a shiny new React app that looks like something from 2001.
I have to wonder if anyone actually likes flat user interfaces, or if user studies are just broken.
TBH I’d guess that people would have complained whatever the new design was, people hate change. But yes, flat designs are often on the worse end of the spectrum
The consultancy resisted this horribly because the tech lead is a performative bullshitter and it doesn’t look good on his portfolio page that it looks like something from 2001.
My impression was that it was an attempt to get engineers to be able to do the design, rather than involving graphic artists.
I also think that we often conflate pretty with usable. There's nothing more interesting than these user interfaces that have grown organically for 20+ years. They look "bad", or at least old, but that doesn't mean they necessarily have poor ergonomics. Some people, myself included, have tried to force that old-school, hodgepodge look, but you can't really do that either; it doesn't work. You just end up with ugly and confusing. Those interfaces have to evolve organically.
Why? Was it broken?
But if you need to do actual work you want maximum information density. You want icons that are easy to tell apart by color, not some sleek minimalist grey in grey.
If you use a tool every day for multiple hours your UI needs will be vastly different. We have forgotten how to build tools for power users.
IMHO Photoshop is still the classic example of this. The UI can feel overwhelming at first, like dropping into a helicopter cockpit. But once you start getting a hang of what you're doing, anything more "minimal" just feels like dumbing things down for the lowest common denominator.
From what I've seen in large enterprises, it's also why OG users are so attached to their mainframe terminal UIs. Yes, it's very hard to learn, but once you've developed some facility, everything else feels unusably slow.
I've never had a bad experience designing like I respect the users intelligence. Humans are insanely smart and capable, treat them that way and good results occur.
My experience isn't quite that. While most humans can be capable when they want to, in typical situations they often don't and aren't. People who have put in years to become proficient in mainframe terminals aren't representative people in a typical situation; most people (myself definitely included) perform most daily actions on autopilot.
EDIT
And that's why I like Vim and most TUIs that much. I don't need to follow the cursor or wait an arbitrary amount of time because "reasons". It's all muscle memory, and my attention is more on what I'm trying to do than how I'm doing it.
Obviously, maybe more obviously now than ever in recorded history, not all humans are smart or capable.
Regardless of capability, however, many humans excel at memorizing complex routes across obscure paths that they experience through spaced repetition, which research suggests can alter memory pathways in the brain to facilitate easier recall[1] and also engages memory functions in our nervous systems beyond our brain.[2]
Any UI, including bad ones, can foster efficient workflows in any user _if_ it accomplishes things compatible with repetitive use:
- the UI's behaviors and interactions are minimally internally consistent
- the UI has pathways from a starting point to a result that are discoverable through those behaviors and interactions
- the UI's reactions to input are sufficiently efficient to avoid arbitrary or dynamic pauses, which can disrupt effective repetition
- the UI's interactions are minimally accessible to people; if they use buttons, shapes, colors, sounds, controls, etc., a person can consistently distinguish between and physically access them when necessary
- a person interacts with the UI long enough to find those pathways from starting points to results, and does so repetitively over long time spans
Modern UI design often attempts to reduce the time to value for users at arbitrary experience levels, at the expense of maintaining the consistency of pathways that reward longtime users who have accumulated training.
The only people using the UI when the change happens are people with a non-zero amount of accumulated training. Any change disrupts consistency. It's a net negative to the people who are around to complain about it, and also resets the often competitive field of users; not only do experienced users have to relearn their workflows to avoid committing errors or wasting time, they also have to compete with new users who have easier access to results that previously required experience through repetition to efficiently reach.
For example, a UI designer might change the UI to surface a feature that they want users to access more easily by making it require 1 or 2 interactions to reach, but a veteran user already has "easy" access to that feature even if it takes 6 or 7 interactions to reach it, some of them obscure. If the change removes the result from the end of the old pathway and moves it to a new one that experienced users don't know, the new UI becomes less efficient for them no matter how smart or capable they are (or aren't). Both the new user and experienced user might be smart and capable or stupid and incompetent; the differentiating factor is experience.
Arguably, the "smart and capable humans" who use complex UIs are either the ones who achieve a level of power to prevent UI changes that degrade consistency of existing pathways to preserve their productivity at the expense of less-experienced users needing more time and training (at which point they probably don't need to use that UI anymore anyway, and the act mostly rewards other experienced users), or the ones who divert time that might be spent complaining about UI changes toward adapting to the new UI's pathways.
The truly disruptive UI/UX changes for repetitively used workflows are the ones that introduce unpredictable delays between interactions. Repetition rewards rhythm and consistent feedback, and unpredictable interaction delays destroy both.
I can imagine how shit it would have been if you have to log into windows and open a web app and use the mouse and stuff to click through a web form hacked up to do the job.
while i agree, i wish more dense applications like Photoshop took the Rhino3D approach of integrating a CLI directly into the interface. yes, you can click the icons or select tools from the menu, but being able to just type a command and arguments (or have it prompt for the arguments ex-post-facto) feels just incredible in an otherwise-GUI application, in a way that memorizing keyboard shortcuts just doesn't compare.
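The Rhino-style pattern is essentially a command registry that both the GUI and the typed command box dispatch to, with missing arguments prompted for afterwards. A toy Python sketch of that idea (not Rhino's actual API; `CommandLine` and `move` are made-up names, and the prompt function is injectable so the demo needs no real user input):

```python
# Toy sketch: one command registry serves both GUI buttons and a typed
# command line, prompting "ex post facto" for any arguments left out.

class CommandLine:
    def __init__(self, prompt=input):
        self.commands = {}      # command name -> (function, parameter names)
        self.prompt = prompt    # injectable so tests/demos need no stdin

    def command(self, *params):
        """Decorator: register a function under its own name."""
        def register(func):
            self.commands[func.__name__] = (func, params)
            return func
        return register

    def run(self, line):
        """Parse 'name arg1 arg2 ...', prompting for missing arguments."""
        name, *args = line.split()
        func, params = self.commands[name]
        # ex-post-facto prompting: ask for each parameter not given inline
        while len(args) < len(params):
            args.append(self.prompt(f"{params[len(args)]}? "))
        return func(*args)

cli = CommandLine(prompt=lambda q: "10")   # canned answer for the demo

@cli.command("dx", "dy")
def move(dx, dy):
    return f"moved by ({dx}, {dy})"

print(cli.run("move 3 4"))   # all args inline -> moved by (3, 4)
print(cli.run("move 3"))     # 'dy' prompted for -> moved by (3, 10)
```

A GUI button would call the same registered function directly, so the two interfaces can never drift apart — which is arguably why the Rhino approach feels so coherent.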
I don’t have any experience in running user studies, but it sounds difficult to separate the momentary frustration and drop in efficiency that a change in _familiarity_ brings from an actual difference in long-term _usability_. Do you know if the user survey the consultancy ran tried to account for this?
When have most users ever enjoyed a new UI in a system that they're used to? Genuinely asking, because while I enjoy things like new icon themes and even the UI of Windows 11, most of the time I've seen people complain about any new UI that displaces something that they're familiar with.
If I'm wrong and it's just the flat design that is the real issue (which might also be true), then wouldn't the solution be to pick any other modern look and feel, instead of necessarily reverting to the very old one? Not that there's necessarily anything wrong with the more old timey UI design, I think that Windows 9X versions had really good design, perhaps despite some usability issues like no proper fuzzy search in the actual OS etc.
I quite like how themable PrimeVue/PrimeReact/PrimeNG is and swapping themes shouldn't be something impossible, though I don't doubt that with many of the libraries out there that ends up being the case: https://primereact.org/inputgroup/ (click the little palette in the corner to switch themes)
I think that was the problem.
Blender.
It has seen some drastic changes in its UI but barely any backlash. Even holy cows like right-click select got mercilessly slaughtered, and I'm not even mad about it; in fact I love the changes.
The main thing is that they focus on providing value to users and are dogfooding their own software to create movies.
But yeah, generally people hate change and you should avoid changing things as much as possible. Sadly that doesn't work with the way incentives in most companies work.
You should see Material design or Windows 11.
The problem is designers who take 'flat' as 'literally nothing indicating its interactivity.'
I personally like the current idea of "modern" UI, although it does tend to get too bloated at times. I generally prefer something minimal compared to old designs which weren't consistent, made heavy use of shadows and were pretty chaotic. Don't try to tell me old UI was playful (although as evidenced in the post, skeuomorphic UI _can_ be).
The point about some interfaces' usability being hampered by flat UI is valid. Complicated applications full of buttons with single-color icons are very hard to navigate. I find this especially true for GTK apps where the design system enforces a very specific icon style. Example: Pinta was designed to be very close to paint.net, but due to its flat design, it is very hard to navigate. On the other hand, while paint.net may look a little outdated, the design is consistent and optimized for efficient workflows.
I think the ideal design is somewhere in between flat and skeuomorphic. IMHO programs like Office and Inkscape make UI elements clear, while maintaining the ability for efficient workflows. Icons are simple, but a touch of color makes it trivial to distinguish between them.
Not every UI design is perfect for everyone, but interfaces should be designed with different needs in mind. "Power users" most likely just need better keyboard support and care less about how the UI looks.
Both skeuomorphic UIs and flat UIs are particular points (or small regions) in a much vaster design space and we should not speak as if we are obligated to cycle between those two particular points, because it will become a self-fulfilling prophecy. There's many, many, many options beyond those two.
It used to be that on iOS the bottom tabs would get indented when they were selected. That Z-axis intent, which apps would have even when they weren't very skeuomorphic, is what I hope returns.
Even just a couple of layer's worth of visual depth, that ".5D", is so very useful.
In my team we spend so much time picking icons, and checking that we are matching the design patterns of other apps (yawn...), that we have just ended up with a UX that doesn't prioritise the number one reason people use our app.
Just big buttons with words on them. Words become symbols, so at a glance you could still easily tell what everything is, even if you wouldn't actually read it when using it.
I’ll be illiterate in that world, but my kids seem to grok it.
The issue with emoji is that they're very literal.
Or a cloud to mean "upload", or a thin moon crescent to mean "sleep".
These are things that don't derive from the real world (any more) and must be explicitly communicated.
It really is a very good point.
https://www.merriam-webster.com/dictionary/dimorphism "the existence of two different forms (as of color or size) of a species especially in the same population. sexual dimorphism. b. : the existence of a part (such as leaves of a plant) in two different forms."
Unfortunate similarity to diamorphine though.
For decades I have seen a bizarre iteration over various "aesthetics" of interface design, each successive step having been hailed as a new, better way to interact with our devices. Consistently each step has made interacting with devices more tedious.
Design isn't about how colorful your icons are or how many dimensions they have. I think it is very clear that the quality of the work designers do has drastically declined, especially the focus of aesthetic experience over usability has been an absolute disaster.
For example, I just bought a new scanner. It has a hot mess of a user interface. It mashes together all kinds of selection schemes, with little rhyme or reason. You just have to randomly click on things, navigating up and down, until you eventually find, say, the destination folder to write the scans to.
Just the other day I installed Pandora on my phone. I had to ask grok how to close the app, as it has no X button. Blarf. Probably somebody got an award for this.
"Why did they add a camera bump sticking out of the display lid? A thing nobody asked for."
followed by "Why did they make the base flat but remove the rounded palm rest, adding a sharp corner?"
followed by "Why did they remove the indent that made it possible to open the display lid without anything protruding from the device?"
I appreciate nice design. But if it doesn't follow function, it is harmful. PS: I'm frustrated about the new ThinkPads with a camera bump sticking out of the display. Everybody hates camera bumps. The display lid is broad and provides plenty of space for adding multiple cameras, microphones, and sensors of all kinds. I know... this topic is about software UI. Change for the sake of change itself is not beneficial.
How many designer friends do you have? Do you know what they do daily? We know your preconception that regardless of company size and product they are just counting beans.
But there are plenty more. Why settle on the worst one?
Can you please tell me your thoughts on how it is "hard to defend"?
My thoughts: How can designers criticize the use of Comic Sans? If users use it where its connotations (childlike, casual) are appropriate, such as birthday parties, and love it, who are designers to comment on it? I find this indefensible, as if design sensibilities have a foundation very much like mathematics or physics and there is a clearly universal litmus test of good design and bad design. There isn't. In fact, arbitrary mores of fashion such as "Comic Sans is uncool" are the very tell that design has foundations as strong as a piece of string in the wind. The disdain for Comic Sans reeks of elitism, where designers gatekeep "good taste" based on arbitrary conventions.
However, I do agree that making fun of people picking the wrong font is a bit elitist. At least Comic Sans is easy to read, so one could do worse.
> ...good design and bad design. There isn't.
It's called good taste. It's not science of course, good design is organic, it evolves, it converges. See carcinisation.
> The disdain for Comic Sans reeks of elitism, where designers gatekeep "good taste" based on arbitrary conventions.
Kinda true, get over it. Trust the people with good taste and if you want to do great, pay them to do this work for you.
But if you have an uncanny love affair with Comic Sans, no force in the Universe can stop you, have fun with it, you are free to ignore everybody.
But yes, comic book lettering is done a specific way for a reason.
Aesthetics are essentially worthless for a user interface and should always be a secondary concern. But clearly designers have elevated aesthetics over usability, hence the numerous and constant redesigns of everything.
If you care about usability you know that a redesign necessarily comes at a great cost, since you are breaking many of the mental connections of your users. It is only justifiable if there is some serious gain by doing that.
That is one of my random thoughts: Windows could have kept the Windows 95 look and been perfectly usable. Sure there might have been a need for certain UI tweaks, but for most office/home use there was no reason to change it.
The whole "let's make it friendly" thing is annoying. If it's a tool, make it practical. If you need to write a manual because of that, then please, go right ahead and do that.
> ... Lesson: people with grave incompetence at programming feel completely competent to judge what programming is and should be.
Your own lesson applies here perfectly, only substitute programming with design.
I would say that it's more important to be competent in determining how design is going to be understood and used by users in their individual workflows. Few are more competent to judge that than the users themselves.
I'm pretty sure most folks have seen and experienced the negative impacts of designers changing things for the sake of change (or to justify their paychecks).
There is a massive amount of bad design out there — made by designers. There is a large amount of bad software implementations. By programmers. And awful chairs, food, and ...
Performance aside, the author's game selector design is so much nicer than Airbnb's implementation.
I don't mind the icons and general aesthetic direction, but it's seriously undermined by the very noticeable, janky performance. Makes it feel cheap.
For the reason that in flat design, what is an interactive element and what is text or even decoration, is not clearly separated.
Especially people who are older and haven't spent crazy amounts of time on their smartphones struggle with flat design.
I could see this happening with my parents when the 3D UI buttons of the '80s and '90s and the skeuomorphic design trend of the first decade gave way to flat design.
It's just not clear where a text ends and something that can be clicked or tapped on starts. When all you have to distinguish them is a change in background color or some flat frame.
You need to grow up with this tech or have an intricate understanding of digital interfaces to find this easy and/or natural.
I agree as far as using 3D objects goes. This is just a gimmick and a bad idea.
Any icon should be as simple as possible to convey meaning. Abstract; so it can be rendered in many adverse conditions without requiring change in shape or shading.
The icon sets in that blog post are the opposite of that.
Now? It's completely flatlined, the changes are incremental and made only in the service of getting you to buy more, view more ads and hide fees until the last possible moment in the checkout funnel.
Now I can ask it to do some frontend while I focus on backend in the meantime.
We just need the sales agent now.
Are they part of the actual airbnb "new site" or just the article's vision?
Then it turns out Microsoft brought back 3D in the worst way possible: The non-interactive parts are now 3D — and look interactive — while the interactive parts are still flat like before.
I am trying to find examples, here's one: https://imgur.com/a/8tjT42h
Notice the top part has a 3D graphic which invites clicking... but it is dead. And the live parts look dead and don't invite interaction.
Unlike fashion, where self-expression is central, UI/UX design isn't driven by aesthetic cycles - it's fundamentally about function. The goal is to disappear, serving as a seamless bridge between the user and the task at hand.
Skeuomorphism had its moment because it provided familiarity in the early days of digital interfaces, helping users transition into a new paradigm. But that need has passed. Design has evolved, not cyclically, but linearly - toward clarity, efficiency, and minimal cognitive friction.
What we're seeing now may be visually novel, but I don't believe it represents a true paradigm shift. If anything, it's a stylistic flourish layered on top of the same core goal: helping people get things done as easily and intuitively as possible.
That is very interesting: Modernism and its descendants were very much about minimalism - how much could you do with minimal components. It applies to visual aesthetics (including much of abstract art), writing (e.g., Hemingway), architecture, and much else, even some music (singer-songwriters, Philip Glass). It's democratic - anyone can do it, or far more than can design complicated aesthetics. There have been other trends since, some rejecting that concept, but you can see that minimalism everywhere, in clothes, in industrial design, and even in HN's design. 'Great designers', you may have heard, 'focus not on what to add, but on what to take away'.
AI enables maximalism; it could transform aesthetics in everything. It enables complexity - including in fashion, in architecture, in writing, almost everywhere.
This theory is that AI removes the issue of efficiency for the creator: AI allows people to create maximalist, or non-minimalist design easily. Still, minimalism's value is very much about efficiency for the user, including focus. Excess design is a distraction and is generally not productive - how does distracting us with detailed icons help us? What is the value?
I love that efficiency in modernism. HN takes the minimalist approach, iirc, in part to attract a community that is focused on its content and not bells and whistles. And I worry that in broader society, as people now routinely hide from very serious dangers (to freedom, to peace, from climate change, etc.), this new trend will be more circuses to distract us.
What's left is that every design choice must be functional, leaving minimalism as the bleached bones of design - the only thing left after everything has been stripped away. I want to be clear that functionalism and minimalism are not synonymous, but the influence of one on the other can hardly be overstated.
- a shift happens when someone says it’ll happen
AND
- big businesses have such a love for color and dimension and have not been dulling everything down except for your personalized feed for years now
/s
Flat design has always struck me as an extremist response to a real issue. Windows Vista required everyone to be on the same page design-language-wise in order to look good, but can the same be said of Windows XP? You can make just a few parts of your app drop-shadowed, or add some slight reflection gradients while keeping the whole thing matte; you do not need to get rid of all visual depth just to avoid the feeling of zeerust.
In my opinion, a lot of recent UI/UX and visual design has become less about seeking to understand and improve the way we interact with machines, and more about promoting a digital form of fast fashion full of trends that everyone is expected to follow - change for the sake of change. This post only provides more support for that.
In undergrad I took a course in Human Computer Interaction. It was a bunch of glorious retro materials from all the actual hard research done in the 20th century. After graduating and entering industry, the most effort I've ever seen "professionals" put into HCI (and not just graphic design) is to recognize that some interaction might be confusing and just go copy whatever Apple does.
The debate about aesthetics just hides the ever declining usability of interfaces. Constantly redesigning everything so that it fits some global aesthetic mould is destructive to usability.
I get that, e.g., you might find it hard to sell a green-screen app on an iPhone, but I don't see this diamorphic stuff adding much value.
I just opened the app and, aside from the animated tab icons shown in the article (which are super laggy on my device), the app looks exactly the same as always. How in the world is this a "landmark" redesign?
Inevitably these things are fashion, and big companies want to have just slightly unique experiences. Usually that means doing something hard that the average site will struggle to replicate for a while, be that squishy UX animations, elegant minimalism, now 3D.
Users may state a preference for "delight" and claim to be delighted by maximally rich graphics but I doubt it'll move the needle as a business priority.
As more AI slop pollutes our digital consumption streams, my bet is that these attempts at simulacra will quickly turn garish.
Hard no from me.
You know what comes after "dimensional" design? Radical minimalism.
"Once you strip away everything extra, everything ornamental, what remains is the truth, and nothing more" There, I wrote a tagline for design trend of 2030.
Actually, the developer should really do some work in the UI thread beyond just sending out a "button pressed" message: When I press the button, I need immediate feedback that it was pressed and that pressing it had an effect, that things are now happening. Too many UIs, especially on the web where round-trip times can be long, rely on just firing off a message or a network request. The response to the user, by displaying a spinner, progress bar, modal or new page, only happens asynchronously and after a comparatively long delay. This means that users will sometimes repeatedly click because "nothing happens", leading to multiple messages, leading to multiple actions, leading to a big mess.
So the UI thread should synchronously trigger user feedback, take a lock or other measures to prevent unintended repeated actions and start a timeout callback in case the triggered action doesn't happen in a sensible timeframe.
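That pattern can be sketched in a few lines. The following is a minimal, framework-free TypeScript illustration (the `ActionGuard` class and its names are my own invention, not from any particular library): the state changes synchronously on click so the UI can show a spinner immediately, repeated clicks are ignored while an action is in flight, and a timeout moves the UI into an error state if the action never reports back.

```typescript
// Hypothetical sketch of the "feedback + lock + timeout" pattern described above.
type UiState = "idle" | "busy" | "timed-out";

class ActionGuard {
  state: UiState = "idle"; // a real UI would render spinner/disabled button from this
  private timer: ReturnType<typeof setTimeout> | null = null;

  // Returns true if the action was started, false if it was suppressed
  // because a previous action is still in flight.
  trigger(action: () => Promise<void>, timeoutMs = 5000): boolean {
    if (this.state === "busy") return false; // lock: ignore repeated clicks

    this.state = "busy"; // synchronous feedback, before any async work
    this.timer = setTimeout(() => {
      this.state = "timed-out"; // action took too long: show an error state
    }, timeoutMs);

    action()
      .catch(() => { /* surface the error to the user in a real UI */ })
      .finally(() => {
        if (this.timer !== null) clearTimeout(this.timer);
        this.state = "idle"; // release the lock
      });
    return true;
  }
}
```

A click handler would then call `guard.trigger(() => fetch(...).then(...))` and render from `guard.state`; the key point is that the state flip to `"busy"` happens before the network request, not in its response handler.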
About the return of skeuomorphism, I do believe it's happening because people are fed up with flat everything in two colors, but I wish there was less oscillation around the center. As many have mentioned, Win2k was very good, and it was a middle ground between the extremes we're seeing today. Actually, it was extreme in one way: you could tell what a UI element was going to do without trying to click every pixel and seeing what happens.
The UI back then was sometimes janky, but it was so much more useful. Icons were meaningful and easy to recognize despite the low resolution. Quite often interfaces were customizable, they were not afraid of users becoming power users. Peak UI for me was probably Winamp 2.
Nowadays it is just a bunch of flat glyphs, things hidden in hamburger menus, arcane submenus, etc. Probably done to increase "engagement" or whatever bullshit metric they want to minmax for.
For example, flat monochrome icons emerged because previously the UI was massively overshadowing the main content [^1]. So we traded the recognizability of icons for a better overall hierarchy in the UI.
Now that this problem is solved, designers are looking to reintroduce recognizable icons without sacrificing the previous goals. In the AirBnB app, you’ll find that the busy icons are only used when they’re the main focus. Auxiliary icons remain flat.
[^1]: https://www.geeky-gadgets.com/wp-content/uploads/2011/09/fac...