Part of it is going through the VC gauntlet, I believe. Let's face it: to get money from VCs, you need to abase yourself, to learn how to lie to them and to yourself, to focus only on the survival of your company, and to pretend that you're going to make lots of money, regardless of your original goals and ideals. If you're a bit of a techie, you have just entered a world of appearances, where [it feels like] pretending to be successful and to know more than others matters more than actually doing something. And being kicked out would mean losing funding, which means losing everything for which you've twisted yourself into something you were not.
I think that this strongly favors hivemind/mob thinking.
Let me give you a technical parallel. A couple of engineers/architects leave a hugely successful Big Co. and go to Hot Startup. There they proselytize their One True Way because, honestly, that's all they know. Everybody in Hot Startup goes along with it because they are Senior Engineers from Big Co. who are now plotting the course, and Big Co. is HUGE, so they must know what they're talking about. Now, because Hot Startup is suddenly using the One True Way, everybody else in the market tries to copy them, because that's obviously why Hot Startup is Hot. This leads to a job market where people optimize for things used by Hot Startup. This tilts the skill set of the general tech market towards the One True Way, making it gospel to a lot of people. So hiring managers who don't know the first thing about anything suddenly start optimizing for One True techs and asking for 20 years of experience with React. They think they're doing the safe thing by using the same tech stack used by everybody else - the industry standard. Never mind that the "industry standard" changes every time it's convenient.
This is the same thing for CEOs. Oh you're having a slightly down quarter and have to answer to investors? Say you're using AI. That's the in-thing and will give you that bump to ride out the quarter. You screwed up in 2021-22 and hired a fuckton of people who are just sitting on their hands costing the company money? Say AI and get rid of them because they're not productive. It's got nothing to do with collusion or anything like that. It's just that people have mismatched expectations and things happen downstream of these unmanaged expectations.
That lot of people literally cannot get hired anywhere but startups, because everyone else isn't so naive.
> things happen downstream of these unmanaged expectations
It sounds like a metric fuckton of people need to retire or get out of the way already if they can't set expectations, despite being in the exact position where they should be able to talk to multiple audiences.
None of these excuses appease the investors nor the heads-down employees. Shit will have to change sooner rather than later. Many factors will make it so. This is exactly what defines a tech bubble.
As Picard says, "It is possible to commit no mistakes and still lose. That is not a weakness; that is life."
This hivemind is called Blackrock.
This doesn't match my experience talking with people outside of tech, and as such, the whole essay feels like a straw man. There definitely are some people who drank the kool-aid, but they seem like a minority? I don't live in the Bay Area though.
That, and nontech folks tend to assume it can do all the jobs except for the ones they actually know enough about to explain why actually an LLM can’t do that reliably enough to be even slightly okay.
That said, every major technology wave has needed a similar level of push, hype, and momentum to reach mass adoption. The Internet existed for decades before the public knew what to do with it. AOL gave it a huge push with "You've got mail", endless free-trial CDs, and an almost manic drive to bring it into homes, and it became the foundation of modern life. The same was true of personal computers: early machines like the Apple II or IBM PC were expensive, clunky, and had little practical software. But without the evangelism, marketing, and cultural hype that surrounded them, the entire ecosystem might never have matured. So while the AI frenzy can feel excessive, some level of over-excitement may be what turns the technology from niche tools into something broadly accessible and transformative — just as it did for the web and the PC before it.
No one gets fired for suggesting no change.
It takes a special level of hype where “doing nothing” is no longer the sensible choice.
Do I wish this hype were spread around to other technologies that are also awesome? Of course. I'd love to help someone figure out a way to do that, but as of now, we don't know how. Humans are very bad at holding two different ideas in their heads.
That’s what I’m disagreeing with. "Legitimate uses" aren't something just hanging out in the ether, waiting to attach themselves to useful technology; they happen via a grinding sales process and big industry-wide cultural changes.
People don’t like change.
I think AI and its knock-on effects in robotics will have massive productivity boosts in industries where productivity has been lagging for years. It will take decades and multiple boom-busts to happen to drag the population into change but it’ll happen.
People were standing in line for the first iPhone. Gmail had a waiting list. Tesla sold EVs far faster than they could make them.
On the other hand, I now literally have AI icons blinking in several apps, begging to be used. This isn't a regular marketing push of a brand-new product, it is companies desperately trying to justify their billions of dollars of sunk costs by bolting AI onto their existing products.
I'm skeptical the majority of tech experts are struggling to find the utility of them.
How big is your codebase?
Looks like we did not read the same article.
Yes. I am seeing this right now, but it's not enough just to join the chanting around the altar; you've got to be seen enthusiastically using AI as much as possible, and it's turned the contributions of some of the most productive high-level developers in my company into... ...well, slop.
I'm dealing with a bug from one such developer right now: they wrote code that uses my library, it's not working for them, and they lack the understanding of their own code to fix it. And the reason it's a bug I'm working on? Cursor couldn't fix it for them, and they lack the mental model to do so themselves, because it's hard to mentally model code you didn't write.
I'm sure there's an inevitable "well, your company/colleague is doing it wrong then" critique incoming. And I agree.
But given that "doing it right" is often being defined as "using it as much as possible" by business leaders across the industry, then we get these paradoxical outcomes where doing so reduces productivity, but no-one is ready to admit that.
You've got to be AI-first, or AI-native, at least if you want the investors to stay invested.
I've never experienced such a collective adherence to tech hype (but probably only because my company couldn't figure out how to jam a blockchain and/or NFT in previously). Not even during the days when everyone was serverless, or "cloud-first" / "cloud-native".
It's wild.
TL;DR - as an industry, we've always used appropriate tools for appropriate problems; right now it feels like so many of us are throwing AI slop at walls and seeing what sticks.
> If we were to simply listen to the smart voices of those who aren't lost in the hype cycle, we might see that it is not inevitable that AI systems use content without the consent of creators, and it is not impossible to build AI systems that respect commitments to environmental sustainability. We can build AI that isn't centralized under the control of a handful of giant companies. Or any other definition of "good AI" that people might aspire to.
So it's saying "don't throw the baby out with the bathwater"? We can have our cake and eat it too; we can have AI and also have it not be damaging? Except we can't. The article offers these two claims as reasons for other things, but doesn't realize they are the very reasons we can't:
> because the platforms that have introduced "AI" to the public imagination are run by authoritarian extremists with deeply destructive agendas.
> tech leaders are collaborating with the current regime to punish free speech, fire anyone who dissents, and embolden the wealthy tycoons at the top to make ever-more-extreme statements, often at the direct expense of some of their own workers.
This kind of thing to me is like saying we could make a really great machine gun that would be used just for peacefully shooting tin cans, if only we could get people to stop using such things to kill each other. Well, we can't do that as long as the people who make the decisions about making such things want to make things that kill people instead of putting holes in tin cans. (Not the people who actually make the guns! The people who make the decisions.) And so this part doesn't matter:
> But it's important to remember that there are a lot more of us. ... Very few agree with the hype bubble that the tycoons have been trying to puff up.
But the tycoons still have the money and still call the shots, and that's why we're in the mess we're in. Neither the technical reality nor the majoritarian reality matters to the tycoons. The statement quoted above could apply equally well to all sorts of aspects of modern society --- social media, streaming services, telecom, political ads, gerrymandering, you name it. Most people don't like a lot of what we've got.
And the reason for that is that we're in a situation where what most people want doesn't matter. All that matters is what a small bunch of people with a lot of money wants. So this is off base:
> Once in a while, you might hear some coverage of the critiques of AI, but even those will generally be from people outside the tech industry, and they will often solely be about frustrations or anger with the negative externalities of the centralized Big AI companies. Those are valid and vital critiques, but [...]
No buts. Those are valid and vital critiques, so we can stop there (for now). The technical concerns are of secondary importance at best. The real issue is the concentration of wealth and power. If there were no Google, no OpenAI, no Meta, no Nvidia, no Amazon, and so on, there would be no one in a position to ignore the technical issues, spew pre-enshittified garbage, and call it AI. And then maybe people could get down to the business of using it in non-evil ways. But worrying about how many technical people think this or that is a distraction from the real issue: it doesn't matter what anyone thinks except a small group of disproportionately powerful people.