I have noticed something similar: those who are ultra-passionate about AI are often Extremely Online, and it seems like their values tilt too far away from humanity for my taste. The use of AI is treated almost as an end in and of itself, which perpetuates a maximalist AI vision. This is also probably why they give off this weird vibe of having their personality outsourced.
Outside of the tech ecosystem, most people I encounter either don't care about AI, or are vaguely positive (and using it to write emails/etc). There are exceptions, writers and artists for example, but they're in the minority. To be clear, I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I realise it may not seem like it, but most big tech companies are not designing for high-earning Valley software engineers, because those engineers are not a big market. They're designing for the world, and the world hasn't made up its mind about AI yet.
Obsession with one-size-fits-all design, metrics-driven development, and UX exclusively aimed at the lowest common denominator are also part of this problematic incentive structure you allude to.
> I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I don't think the "we" here was intended to include the general population.
Oh, in the US, intelligence agencies can't do dragnet surveillance of US citizens. But it seems to be perfectly OK for them to do it to the rest of the world.
It enables people to make non-consensual pornography. It may have democratized the realistic-video part, but the fan-fiction part was already available. Is the problem the democratization itself, or the fact that anyone with enough budget or a sponsored agenda could already do it? At some point you have to draw a line and define where the realistic threat actually lies.
Same goes for misinformation: is the wrong the democratization, rather than the fact that people with enough resources could already do it?
As for displacing industries, it depends, but some of those industries were already abusing people. Some will adapt. Some will become obsolete, just as the industries they themselves once replaced did.
AI is a tool. And like any tool, it empowers the people using it, for good and for bad. It is the people you keep handing power to who are the main ones misusing it. Those are the elephants in the room that you refuse to see.
They can, and do? What else would you call https://www.theguardian.com/world/2013/jun/06/nsa-phone-reco...
That's just one example, Snowden published tons of this stuff.
The encrypted part is kind of important. I blog about the topic a lot!
Try reading the post again? It's in there.
More impotent rage internet spam, zero direct call to action politically. Just circling existential dread in different words, I’ll bet.
The social gossip changed and no one knows which way is up despite the sky being right there still?
> Coordinated inauthentic behavior
> Misinformation
> Nonconsensual pornography
> Displacing entire industries without a viable replacement for their income
The first three of these existed and occurred before the arrival of AI; perhaps AI just makes them easier. If existing laws don't adequately govern the first three post-AI, do we need new ones? If so, what would they look like?
As for "displacing entire industries without a viable replacement for their income": yeah, as a civilization we need to retrain and reeducate those whose livelihoods are displaced by automation. This too has been true forever...
For some completely unspecified group of “we”. At least the post itself says “why I personally dislike AI”.
20 years ago my dream was to see a nimble robot running up a mountain path live. I hope that event is not another 20 years away. The future comes so horribly slowly.
This is a good point, and somewhat subtle too. Something that worries me is the acceleration of the feedback loop. The Internet, social media, smartphones, and now generative AI are all things that changed how information is generated, consumed and distributed, and changing that affects the incentive structures and behaviors of the people interacting with that information.
But information is spread increasingly faster, in higher amounts and with higher noise, and so the incentives landscape keeps shifting continuously to keep up, without giving people time to adapt and develop immunity against the viral/parasitic memes that each landscape births.
And so the (meta)game keeps changing under our feet, increasingly accelerating towards chaos or, more worryingly, meta-stable ideologies that can survive the continuous bombardment of an adversarial memetic environment. I say worryingly, because most of those ideologies have to be, by definition, totalizing and highly hostile to anything outside of them.
So yeah, interesting times.
https://www.history.com/articles/industrial-revolution-luddi...
The AI conversation tends to split folks along similar “passionate engineer craftsman” vs. “temporarily embarrassed billionaire” lines.
Depending on the parameters of the curve, an S-curve may be effectively the same as an exponential curve. For instance, if the IQ of AIs plateaus at 500 rather than increasing exponentially to infinity, we may not be around to see the plateau.
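To make the point concrete, here's a minimal sketch (the carrying capacity of 500 and growth rate are arbitrary illustrative assumptions, not anything from the thread) showing that a logistic S-curve is numerically almost indistinguishable from an exponential until it nears its plateau:

```python
import math

def exponential(t, r=1.0):
    # Pure exponential growth, no ceiling.
    return math.exp(r * t)

def logistic(t, K=500.0, r=1.0):
    # S-curve starting near 1 at t=0 and saturating at carrying capacity K.
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves track each other closely; they only diverge
# once the logistic curve approaches its plateau.
for t in range(0, 7):
    e, s = exponential(t), logistic(t)
    print(f"t={t}: exp={e:8.1f}  logistic={s:8.1f}  ratio={s / e:.2f}")
```

For the first few time steps the ratio stays near 1.0; only as the logistic curve closes in on K=500 does it visibly fall behind the exponential, which is the commenter's point: from inside the early part of the curve, you can't tell which one you're on.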
If the premise for this is, "because we might not survive each other," rather than the AI being specifically an extinction event for humanity, then I think we agree.
Case in point: the official white house social media account regularly posts low-effort AI meme propaganda.
There's a lot of anti-AI sentiment from the mainstream, but I'm noticing the pro-AI mainstream sentiment comes from people who are either technically-minded grifters looking to deploy automated solutions to snake $$$ from people's pockets, or lazy / disengaged worker drones who just want the computer to do their middling work for them. And it will, up to a point, after which your "work" plateaus into a mess of predictable, non-novel banality, unless you invest the time to master the tool (which, for what it's worth, isn't like introducing the toothed saw; it's more like a Dremel or a Sawzall, tools with specific purposes that casual users will never master).
Buying a digital ELPH in 2001 didn't make you a photographer unless you were a photographer with an open mind. Squarespace doesn't make you a web designer unless you've studied the system and understand the tradeoffs. AWS doesn't solve infrastructure unless you learn how to architect a solution that works for your use-case. AI, by the same property, is shit until you research how it works, experiment with solutions, and find novel workflows to get something out the other end that's new, fresh and exciting.
Companies just rolling "AI" into their products aren't gonna win over customers and users unless they use the tool to deliver something of exceedingly-needed value. If it's a short-term grift or "hail mary", good luck! You'll need it!
Some more annoying personas in the AI space:
- AI CEOs lying to investors and claiming their AIs will one day be impossibly smart.
- CEOs at companies that consume AI products pushing AI onto their employees as a quick fix, thinking they'll get magic productivity gains and be able to cut staff if they just force everyone to use it (I have even seen real examples of companies adding AI usage to performance reviews)
- Companies claiming to be AI-first without launching any significant AI-powered product
But I do think it’s a very measured way to read the situation to refrain from joining the knee-jerk AI-hater camp. That sentiment is basically just a counter-culture reaction against AI, especially since it seems to most negatively impact creatives such as illustrators.
I think that some professionals who refrain from leveraging it on ethical grounds will legitimately fall behind their competition in the labor market.
Worth considering: the blog in question that hosts this article is a furry blog. The furry community is largely creatives.
I also think that it’s a technology that didn't develop with the counterculture chops of many earlier technology innovations.
E.g., we could think about something like crypto that had an uphill battle against the establishment and was created with some level of ideological independence.
There are even some more corporate disruptions that plain and simple had better marketing behind them, like how Airbnb and Uber had widely disliked incumbents to “beat” in the market. Early Uber or Airbnb users were basically “beating the system.” At least, that’s how a lot of people perceived them, even if that didn’t turn out to be the reality.
In contrast, AI has felt much more like a corporate circlejerk among the wealthiest super-billionaires. There hasn’t even been the slightest facade of genuine do-goodery in this technology. Some wildly well-funded companies led by sociopathic robot-human CEOs made a plagiarism machine that my boss now insists I use for all my work.
I think that usually the people in the middle of the two extremes have the right thought process going on. It’s clear to me that AI is a great tool that isn’t going away, but perhaps its most passionate champions and detractors both need to settle down.
Sad to say, I can vouch for this.
After reading Sarah Wynn-Williams' book and seeing the current state of democracy in the USA and some European countries (apart from the fact that democracies are too slow to regulate anyway), I see little hope for the future.
The privacy concern I find to be particularly overstated. This is an identical concern to ones that have existed before AI ever entered the fray. Anytime you send data to a system that someone else controls you run those exact same risks. I also think there’s an overstated fear that an app focused on private data (something similar to Signal) would just add some kind of AI functionality one day out of the blue and suddenly ship your data off to a hive mind.
Any app that is willing to cross that line already has done so (e.g., Facebook).
It also seems to be technologically simple to perform a lot of AI tasks without compromising privacy. E.g., chips with local-first AI computational ability are reaching consumer level devices. Even the much-maligned Windows Recall feature specifically emphasizes how it never sends information to Microsoft servers nor processes data in the cloud.
Perhaps the calculator is as close as we'll ever get to "superintelligence".