Everyone assumes that the perfect communication medium is human (or human-like) conversation. If that were the case, couples and marriage therapists wouldn't exist.
The interesting conversation is around art not as a product to be consumed but as a medium of human expression with intent behind it.
I'd draw parallels to web design. Yes, some frameworks made it incredibly easy for anyone to whip up a decent-looking professional site. But then 95% of websites were the same boring single-page-scroll nonsense with some fancy CSS. It checks a box but isn't notable. If you hire a designer to do something truly original, then you can stand out.
It will be the same with AI where “good enough to check the box” becomes easy, but going beyond that still requires skill and experience.
The same applies to design. Most of the time, you get something that "doesn't suck", which is perfectly fine for projects where using a designer isn't worth it, like internal corporate pages. But consumer-facing pages require nuance to understand the client and their branding (which clients often struggle to articulate themselves), and that's not something current models can capture.
I guess this is primarily a business pattern. Or anti-pattern?
But as it's played out, there are a ton of use cases that don't fit neatly into that model, and each year the cloud platforms offer new tools that fit some of them. AI will probably be able to string together existing tools as IaaS providers offer them, perhaps without even involving an engineer, but use cases that are still outside cloud platform offerings seem to require some ingenuity and creativity that AI wouldn't be able to grok.
I've had great success using Claude to produce a new landing page which is much more stylish than something I would produce myself. It's also nowhere near the standard expected from a professional designer, but for a FOSS app, that's just fine with me :)
Design by committee is known to produce terrible designs. The best an LLM can do is to completely copy a common decent design.
What's nice is that these sessions bleed into everything. You don't need to look through users' eyes that many times to find great improvement in UX sensibilities.
Since scheduling is the biggest pain point here, I just built a scheduler and a signup form at work. Everyone who gets a session walks away with positive feedback, the company as a whole becomes more interconnected, and now I'm working to get more and more folk on board.
My goal is to unleash a whole flotilla of white-collar workers who understand the value of talking to the users, such that they too push for lightweight, no-action-item, scheduled sessions, which become standard practice as part of our careers, and we all end up with better software as a whole.
And ideas born without knowledge about their "implementation" are, by definition, quite low resolution.
But what LLMs do, in the absence of better instructions, is expect that the user WANTS the most middling innocuous output. Which is very reasonable! It's not a lack of capability; it's a strong capability to fill in the gaps in its instructions.
The person who has a good intuition for design (visual, narrative, conversational) but can't articulate that as instructions will find themselves stuck. And unsurprisingly this is common, because having that vision and being able to communicate it to an LLM is not a well-practiced skill. Instructing an LLM is a little like instructing a person... but only a little. You have to learn it. And I don't think this will magically fix itself as LLMs get better, because it's not exactly an error; there's no "right" answer.
Which is to say: I think applying design to one's work with AI is possible, important, and seldom done.
It's the reason why LLMs are horrible at writing, and the reason why good design is really hard to get out of an LLM. Figma Make and Claude Code are really just using the out-of-the-box CSS from shadcn, which is why everything looks the same.
> After 2.5 years of insane hype, there’s no evidence that current AI is making the design process faster
I am designing prototypes faster today with LLMs; this is just flat-out wrong. And it's not really 2.5 years, it's more like the last 6 months: GPT-5, Sonnet 4.0, and 4.5 have made this stuff viable to use seriously.
Perhaps it is finally true this time, but the AI hype machine has been making this very same claim for years now.
You'll have to understand if we insist on tangible results before buying that Kool-Aid in bulk.
I wonder how much of the stuff that actually takes up the time of the people we call "designers" to do their jobs is something our current crop of LLMs is good for. If it's 90% then LLMs could make you a 10x designer. If it's 10%, LLMs could make you a 1.11x designer (minus the time it takes to fiddle around with the LLM, of course).
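To spell out that arithmetic (a minimal sketch, assuming the LLM-covered fraction p of the work drops to negligible time and ignoring the fiddling overhead the comment mentions):

```latex
% Amdahl's-law-style speedup when a fraction p of the work becomes ~free:
\[
  S(p) = \frac{1}{1 - p},
  \qquad
  S(0.9) = \frac{1}{0.1} = 10\times,
  \qquad
  S(0.1) = \frac{1}{0.9} \approx 1.11\times
\]
```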
One thing I've wondered about and am not sure of: you can see a productivity boost in parts of the process, but the end-to-end process doesn't seem to be getting done any faster.
I'm not sure of this, but it's what I've observed at work.
You might be designing a lot of prototypes faster, but are you landing on something good quicker? Are you getting the final product out faster?
Yeah, but not because of the LLM, more because I have a strong opinion and multiple design options I want to get in front of customers asap. Instead of coding a UI myself which I can do, or working with an engineer, I can try a bunch of stuff, show it to customers early and see what resonates.
It's a combination of having a strong opinion while taking into account what the customer says and does. Otherwise we're just building faster horses (https://hbr.org/2011/08/henry-ford-never-said-the-fast)
And this speaks to the whole AI hype and the tendency to shove it into everything. It's just a tool. When we do build stuff that uses AI, it has to be because we find a problem worth solving and the solution characteristics happen to align with LLMs', not because we actively want to use the tech and shoehorn it in.
The thing is, on a computer today most design is prepackaged and not customized to what I'm doing. AI gets me a big step closer to that, so much so that it is preferable to most of the prepackaged work done by a professional designer. Again, if I had a professional designer next to me to do slides and videos, that would be better, but very few people have that.
Product design isn't a layer that you apply. It's not an output of some prompt. It's a difficult-to-define process of crafting the interface between the user and product's functionality.
- When I use AI to vibe-code, it gives me a usable result but I personally have no idea if the output is “correct”. I am not sure if there are security vulnerabilities, I don’t know what is actually happening under the hood to make it work, etc.
- When my engineering friends use AI to vibe-design, I notice the same pattern. It looks “designed” but there are obvious usability issues, pattern mismatches for common user goals, and the UI lacks an overall sense of polish.
Basically, my takeaway is that AI is great for spinning things up quickly but it is not a replacement for fundamentals or craft.
> I am utterly disgusted. ... I strongly feel this is an insult to life itself.
https://www.youtube.com/watch?v=ngZ0K3lWKRc
or David Simon:
SHAPIRO: OK, so you've spent your career creating television without AI, and I could imagine today you thinking, boy, I wish I had had that tool to solve those thorny problems...
SIMON: What?
SHAPIRO: ...Or saying...
SIMON: You imagine that?
SHAPIRO: ...Boy, if that had existed, it would have screwed me over.
SIMON: I don't think AI can remotely challenge what writers do at a fundamentally creative level.
SHAPIRO: But if you're trying to transition from scene five to scene six, and you're stuck with that transition, you could imagine plugging that portion of the script into an AI and say, give me 10 ideas for how to transition this.
SIMON: I'd rather put a gun in my mouth.
Best summary I've heard so far
If anything, that gives some comfort around the future of engineering job prospects. While there's still room to worry ("yeah, but design is fundamentally human, while engineering is mostly technical and can be automated"), I'm sure that, just as design has realized, when we get to the point where AI should be taking over, we'll realize there are a lot of non-technical things that engineers do that AI cannot replace.
Basically, if replacing a workforce is the goal, AI image generators and code generators look like replacement technologies from afar, but when you look closer you realize they're "the right solution to the wrong problem" to be a true replacement tech, and in fact don't really move the needle. And maybe AI as a whole, by definition of being artificial intelligence (as opposed to real common sense), is fundamentally an approach that "solves the wrong problem" as a replacement tech, even as AGI or even ASI gets created.
I guess they're vaguely cool-looking images? If the author had used them to talk about how "concept art" in games/movies was going to get upended by AI, there would be a point there, but as it stands I find it very puzzling that someone who claims to teach design would use them as key examples of why design - a human process of coming up with specific solutions to fuzzy problems with arbitrary constraints - was headed in any particular direction.
This does not match my experience. I have been using Claude for speeding up design. Describe your page and ask it to use Tailwind, and it will come up with some interesting designs. You still need a designer because some of the designs it comes up with are over the top and need to be moderated.
At most, AI prototypes and images serve the role that a whiteboard drawing or wireframe did before: that's a win, but it's not a monumental change in efficiency.
Ironically, I think AI is already there in terms of being capable of more, but no one has built the right harnesses for it.
Loose tailwind classes and shadcn is not it.
This is anecdotally untrue. I work for a small startup. We don't have the money / aren't willing to pay for a full-time designer. So, let's just say, our UI design has always been pretty terrible. Using Claude to generate designs and HTML/CSS based on requirements, what we've gotten has been head and shoulders above anything we ever came up with alone.
Suno and image generators are totally different and riding the AI wave at the moment. YouTube turned into crap, and Instagram content as well. AI fatigue is here.
I do think you have to be pretty targeted with your predictions, though. Consumer product design seems to be evolving differently from B2B and at a different pace. Growth curves are different for each.
Better to just show progress instead.
Back then people were similarly incredulous of the entire idea of the internet and apps.
Can’t fit all that in a million tokens.
AI and design are currently in a honeymoon period, focused more on experimentation and functional components, using working design patterns, philosophies, templates, and components in abundance. I trust that it is indeed a good thing to accept "Good Enough Design" and layer in the better and best ones later. Right now, we have good enough designs everywhere.
immibis•12h ago
(If feeling without substance is all you need, then it's okay to use AI. AI Dungeon, for example, was pretty cool. Or slide backgrounds that would have otherwise been solid colours because they're worth $0 and you wouldn't have paid a designer.)
This first chart should be absolutely damning: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
monooso•12h ago
OP stated that:
> Developers who use AI think they're quicker and better, but they're actually slower and worse.
You responded that this is a "gross overgeneralization of the content of the actual study", but the study appears to back up the original statement. To quote the summary:
> When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
(I realise newer models have been released since the study, but you didn't claim that the findings have been superseded.)
CharlesW•11h ago
It doesn't, and the study authors themselves are pretty clear about the limitations. The irony is that current foundation models are pretty good at helping to identify why this study doesn't offer useful general insights into the productivity benefits (or lack thereof) of AI-assisted development.
mwigdahl•11h ago
Sure! The study focused on experienced devs working in complex codebases they already knew thoroughly. This was and is the worst case for using AI tooling from a cold start, _particularly_ AI tooling as it existed at the time.
There were also only 16 developers involved in the study.
Time has passed since the study and we've had an entirely new class of tool introduced (the agentic CLI a la Claude Code) as well as two subsequent generations of model improvement (Sonnet 3.7 to Sonnet 4 to Sonnet 4.5). Given that the results of the METR study were stated as an eternal, unqualified truth, the fact that tooling and models are much superior now compared to when the study was conducted is worth noting as well.
zdragnar•12h ago
I'd love to see a larger study on more experienced users. The mismatch between their perception and reality is really curious.
MattPalmer1086•11h ago
It's quite possible that it took less overall mental effort from the developers using AI, but it took more elapsed time.
prerok•10h ago
The mistakes were subtle but critical. Like copying a mutex by value. Now, if I would be writing the code, I would not make the mistake. But when reviewing, it almost slipped by.
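For readers who haven't hit this one: a minimal illustrative sketch of the bug described. The comment doesn't name a language; Go's sync.Mutex is the classic case, because the copy compiles cleanly and fails silently:

```go
package main

import (
	"fmt"
	"sync"
)

type Counter struct {
	mu sync.Mutex
	n  int
}

// Buggy: the value receiver copies the whole struct, mutex included,
// so every call locks its own private copy and there is no mutual
// exclusion. `go vet` flags this ("passes lock by value").
func (c Counter) IncBroken() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++ // increments the copy; the original Counter never changes
}

// Correct: the pointer receiver locks the one shared mutex.
func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc() // swap in c.IncBroken() and the count stays at 0
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // 1000 with Inc; 0, with no error, with IncBroken
}
```

The broken variant runs without any error at all, which is exactly why this kind of mistake almost slips past review.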
And that's where the issue is: you have to review the code as if it's written by a clueless junior dev. So, building up the mental model when reviewing, going through all the cases and thinking of the mistakes that could possibly have happened... sorry, no help.
It maybe saves 10% by doing the typing, but when I think about it, it's taking more time, because I have to create the mental model in my mind, then build a mental model of the thing the AI typed out, and then compare the two. The latter is much more time-consuming than creating the model in my mind and typing it out.