Everywhere I look the adoption metrics and impact metrics are a tiny fraction of what was projected/expected. Yes tech keynotes have their shiny examples of “success” but the data at scale tells a very different story and that’s increasingly hard to brush under the carpet.
Given the amount of financial-engineering shenanigans and circular financing, it's unclear how much longer the present bonanza can continue before financial and business reality slams on the brakes.
AI is basically a toy for 99% of us. It's a long, long way from the productivity boost people love to claim to justify the sky-high valuations. I suspect it will fade into a background technology employed strategically, similar to other machine learning applications, and that's exactly where it belongs.
I'm forced to use it (literally, AI usage is now used as a talent review metric...) and frankly, it's maybe helped speed me up... 5-10%? I spend more time trying to get the tools to be useful than I would just doing the task myself. The only true benefit I've gotten has been unit test generation. Ask it to do any meaningful work on a mature code base and you're in for a wild ride. So there's my anecdotal "sentiment".
I get it that some people just want to see the thing on the screen. Or maybe your priority is to be a high-status person with a loving family, etc., etc. All noble goals. I just don't feel a sense of fulfillment from a life not in pursuit of something deeper. The AI can do it better than me, but I don't really care at the end of the day. Maybe super-corp wants the AI to do it then, but it's a shame.
And yet, the Renaissance "grand masters" became known as masters through systematizing delegation:
Surely Donald Knuth and John Carmack are genuine masters though? There's the Elon Musk theory of mastery where everyone says you're great, but you hire a guy to do it, and there's the <nobody knows this guy but he's having a blast and is really good> theory where you make average income but live a life fulfilled. On my deathbed I want to be the second. (Sorry this is getting off topic.)
Steve Jobs wrote code early on, but he was never a great programmer. That didn’t diminish his impact at all. Same with plenty of people we label as "masters" in hindsight. The mastery isn’t always in the craft itself.
What actually seems risky is anchoring your identity to being the best at a specific thing in a specific era. If you're the town’s horse whisperer, life is great right up until cars show up. Then what? If your value is "I'm the horse guy," you're toast. If your value is taste, judgment, curiosity, or building good things with other people, you adapt.
So I’m not convinced mastery is about skill depth alone. It's about what survives the tool shift.
I've said this in the past and I'll continue to say it - until the tools get far better at managing context, they will stay locked out of real value in most use cases. The moment I see "summarizing conversation" I know I'm about to waste 20 minutes fixing code.
> so the model isn't the cause
Thing is, the prompts - those stupid little bits of English - can't possibly matter all that much, right? It turns out they affect the model's performance a ton.
So you're at the "first they laugh at us" stage then.
But I will give you this, the "first they ignore us" stage is over, at least for many people.
People are absolutely torn. It seems that AI usage starts as a crutch, then it becomes an essential tool, and finally it takes over the essence of the profession itself. Not using it feels like a waste of time. There's a sense of dread that comes from realizing that it's not useful to "do work" anymore. That in order to thrive now, we need to outsource as much of our thinking to GPT as possible. If your sense of identity comes from "pure" intellectual pursuits, you are gonna have a bad time. The optimists will say "you will be able to do 10x the amount of work". That might be true, but the nature of the work will be completely different. Managing a farm is not the same as planting a seed.
This is 180 degrees from how to think about it.
The higher the ratio of thinking to toil, the better. The more time you have to apply your intellect, with better machine execution to back it up, the more profit.
The Renaissance grand masters used ateliers of apprentices and journeymen while the grand masters conceived, directed, critiqued, and integrated their work into commissioned art; at the end signing their name: https://smarthistory.org/workshop-italian-renaissance-art/
This is how to leverage the machine. It's your own atelier in a box. Go be Leonardo.
I'm a professional developer, and nothing interesting is happening to the field. The people doing AI coding were already the weakest participants, and have not gained anything from it, except maybe optics.
The thing that's suffocating is the economics. The entire economy has turned its back on actual value in pursuit of silicon valley smoke.
There is a broad range of opinions, but their expression seems to have been extremely chilled.
Here is my guess for the puzzle: creative work is subjective and full of scaffolding. AI can easily generate this subjective scaffolding to a "good enough" level so it can get used without much scrutiny. This is very attractive for a creative to use on a day to day basis.
But, given the amount of content that wasn't created by the creative, the creative feels both a rejection of the work as foreign and a feeling of being replaced.
The path is less stark in more objective fields because the quality is objective, so harder to just accept a merely plausible solution, and the scaffolding is just scaffolding so who cares if it does the job.
"creatives" tend to have a certain political tribe, that political tribe is well-represented in places that have this precise type of authenticity/etc. language around AI use...
Basically a good chunk of this could be measuring whether or not somebody is on Bluesky/is discourse-pilled... and there's no way to know from the study.
Or maybe teach your LLM to fix itself. Starting rule set:
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
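A "fix itself" loop needs the rules in machine-checkable form before you can feed violations back to the model. A minimal sketch of that idea, assuming you translate a few of the tells into regex rules yourself (the patterns and names below are illustrative examples, not taken verbatim from the Wikipedia page):

```python
import re

# Hypothetical starter rules, loosely inspired by the kinds of tells
# catalogued under "Signs of AI writing". Extend or reweight to taste.
AI_TELLS = [
    (r"\u2014", "em-dash"),                          # U+2014 em-dash
    (r"\bdelve\b", "overused word: delve"),
    (r"\btapestry\b", "overused word: tapestry"),
    (r"\bit'?s not just\b", "'it's not just X, it's Y' framing"),
    (r"\bas an AI\b", "model self-reference"),
]

def flag_ai_tells(text: str) -> list[str]:
    """Return the names of every rule the text trips."""
    hits = []
    for pattern, name in AI_TELLS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(name)
    return hits
```

The output of `flag_ai_tells` can then be appended to the next prompt ("revise; you violated: ...") so the model rewrites its own draft against the rule set rather than you editing by hand.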