I have noticed something similar: those who are ultra-passionate about AI are often Extremely Online, and it seems like their values tilt too far away from humanity for my taste. The use of AI is treated almost as an end in and of itself, which perpetuates a maximalist AI vision. This is also probably why they give off this weird vibe of having their personality outsourced.
I would argue that the most passionate AI optimism and pessimism both stem from a conviction that it is an inevitable next step in evolution. Given the potency that implies, it is hard not to take an extreme position on it.
The positions in between seem to be of the form "everything will stay largely the same, but with a bit more automation", which seems naive rather than level-headed, imho.
Outside of the tech ecosystem, most people I encounter either don't care about AI, or are vaguely positive (and using it to write emails/etc). There are exceptions, writers and artists for example, but they're in the minority. To be clear, I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I realise it may not seem like it, but most big tech companies are not designing for high-earning Valley software engineers, because they are not a big market. They're designing for the world, and the world hasn't made its mind up about AI yet.
Obsession with one-size-fits-all, metrics-driven development, and UX exclusively aiming for the lowest common denominator are also part of this problematic incentive structure you allude to.
> I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.
I don't think the "we" here was intended to include the general population.
In the way that dark patterns get you to keep using or paying for a product/service you might not want to, because you're too confused or frustrated, or the cost/time tradeoff of figuring out how to stop isn't worth it. For "AI" in products/services, the equivalent is the way using such an assistant atrophies your skills and knowledge until you become dependent on the product/service.
Oh, in the US, intelligence agencies can't do dragnet surveillance on US citizens. But it seems to be perfectly OK that they do it to the rest of the world.
It enables people to create non-consensual pornography. It may have democratized the realistic-video part of it, but the fan-fiction part was already available. Is the problem the democratization itself, or that anyone with enough budget or a sponsored agenda could already do it? At some point you have to draw the line and define what counts as realistic.
Same goes for misinformation: is what's wrong the democratization, and not that people with enough resources could already do it?
About displacing industries, it depends, but some of those industries were already a big abuse toward people. Some will adapt. Some will become obsolete, as happened with the industries they themselves replaced in their own turn.
AI is a tool. And like any tool, it empowers the people using it, for good and bad. It is the people you keep giving power to who are the main ones misusing it. Those are the elephants in the room that you refuse to see.
They can, and do? What else would you call https://www.theguardian.com/world/2013/jun/06/nsa-phone-reco...
That's just one example, Snowden published tons of this stuff.
The encrypted part is kind of important. I blog about the topic a lot!
Try reading the post again? It's in there.
More impotent rage internet spam, zero direct call to action politically. Just circling existential dread in different words, I’ll bet.
The social gossip changed and no one knows which way is up despite the sky being right there still?
> Coordinated inauthentic behavior
> Misinformation
> Nonconsensual pornography
> Displacing entire industries without a viable replacement for their income
The first three of these existed and occurred before the arrival of AI; perhaps AI just makes them easier. If there are no laws governing them post-AI, do we need such laws? If so, what would they look like?
As for "displacing entire industries without a viable replacement for their income" - yeah, as a civilization we need to retrain and reeducate those whose livelihoods are displaced by automation. This too has been true forever...
I think there are two takes:
- investors should know that consumers will eventually find their products distasteful for the lackluster quality
- users can pay a little more for products that never have, and never will, use AI.
it's more so that... the displacement is very on the nose this time. But, Jevons paradox notwithstanding, I think when you replace human calculators with computers, what you end up with is wanting to crunch /even more numbers/. It never slows down. The labor only cheapens...
For some completely unspecified group of “we”. At least the post itself says “why I personally dislike AI”.
20 years ago my dream was to see a nimble robot running up a mountain path live. I hope this event is not another 20 years away. The future comes so horribly slowly.
I looked it up having not seen the movie.
This is a good point, and somewhat subtle too. Something that worries me is the acceleration of the feedback loop. The Internet, social media, smartphones, and now generative AI are all things that changed how information is generated, consumed and distributed, and changing that affects the incentive structures and behaviors of the people interacting with that information.
But information is spread increasingly faster, in higher amounts and with higher noise, and so the incentives landscape keeps shifting continuously to keep up, without giving people time to adapt and develop immunity against the viral/parasitic memes that each landscape births.
And so the (meta)game keeps changing under our feet, increasingly accelerating towards chaos or, more worryingly, meta-stable ideologies that can survive the continuous bombardment of an adversarial memetic environment. I say worryingly, because most of those ideologies have to be, by definition, totalizing and highly hostile to anything outside of them.
So yeah, interesting times.
With an analogy: Connecting an average human to social media is like connecting a Windows 95 machine to the internet.
https://www.history.com/articles/industrial-revolution-luddi...
The AI conversation tends to split folks along similar “passionate engineer craftsman” vs. “temporarily embarrassed billionaire” lines.
Depending on the parameters of the curve, an S-curve may be effectively the same as an exponential curve. For instance, if the IQ of AIs reaches a plateau of 500 rather than increasing exponentially to infinity, we may not be around to see the plateau.
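To make that concrete, here is a small sketch (with made-up parameters: a logistic curve capped at a hypothetical plateau of 500). Far below the plateau, successive values of the S-curve grow by a nearly constant ratio, i.e., it looks exactly like an exponential; the bend only shows up once you get close to the cap.

```python
import math

def logistic(t, L=500.0, k=0.5, t0=20.0):
    """S-curve that plateaus at L (here: a hypothetical cap of 500)."""
    return L / (1 + math.exp(-k * (t - t0)))

# Early on, successive ratios are ~constant at e^k (exponential growth);
# near the plateau they collapse toward 1 (growth stalls).
early = [logistic(t) for t in range(0, 5)]
late = [logistic(t) for t in range(30, 35)]

early_ratios = [b / a for a, b in zip(early, early[1:])]
late_ratios = [b / a for a, b in zip(late, late[1:])]
```

So an observer sitting on the early part of the curve has no way to tell, from the data alone, whether they are on an exponential or an S-curve whose plateau is still far above them.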
If the premise for this is, "because we might not survive each other," rather than the AI being specifically an extinction event for humanity, then I think we agree.
Case in point: the official white house social media account regularly posts low-effort AI meme propaganda.
There's a lot of anti-AI sentiment from the mainstream, but I'm noticing the pro-AI mainstream sentiment comes from people who are either technically-minded grifters looking to deploy automated solutions to snake $$$ from people's pockets, or lazy / disengaged worker drones who just want the computer to do their middling work for them. And it will, up to a point, where your "work" plateaus into a mess of predictable, non-novel banality, unless you invest the time to master the tool (which, for what it's worth, isn't like introducing the toothed saw; it's more like a Dremel or a Sawzall, tools with specific purposes that casual users won't ever master).
Buying a digital ELPH in 2001 didn't make you a photographer unless you were a photographer with an open mind. Squarespace doesn't make you a web designer unless you've studied the system and understand the tradeoffs. AWS doesn't solve infrastructure unless you learn how to architect a solution that works for your use-case. AI, by the same property, is shit until you research how it works, experiment with solutions, and find novel workflows to get something out the other end that's new, fresh and exciting.
Companies just rolling "AI" into their products aren't gonna win over customers and users unless they use the tool to deliver something of exceedingly-needed value. If it's a short-term grift or "hail mary", good luck! You'll need it!
Some more annoying personas in the AI space:
- AI CEOs lying to investors and claiming their AIs will one day be impossibly smart.
- Companies that are consumers of AI products having CEOs pushing AI onto their employees as a quick fix thinking that they will get magic productivity gains and be able to cut staff if they just force employees to use it (I have even seen some real examples of companies adding AI usage to performance reviews)
- Companies claiming to be AI-first without launching any significant AI-powered product
But I do think refraining from joining the knee-jerk AI-hater camp is a very measured way to read the situation. That sentiment is basically just a counter-culture reaction against AI, especially since it seems to most negatively impact creatives such as illustrators.
I think that some professionals who refrain from leveraging it from an ethical standpoint will legitimately fall behind their labor competition.
Worth considering: the blog in question that hosts this article is a furry blog. The furry community is largely creatives.
I also think it's a technology that didn't develop with the counterculture chops of many earlier technology innovations.
E.g., we could think about something like crypto that had an uphill battle against the establishment and was created with some level of ideological independence.
There are even some more corporate disruptions that plain and simple had better marketing behind them, like how Airbnb and Uber had widely disliked incumbents to “beat” in the market. Early Uber or Airbnb users were basically “beating the system.” At least, that’s how a lot of people perceived them, even if that didn’t turn out to be the reality.
In contrast, AI has felt much more like a corporate circlejerk among the wealthiest super-billionaires. There hasn’t even been the slightest facade of genuine do-goodery in this technology. Some wildly well-funded companies led by sociopathic robot-human CEOs made a plagiarism machine that my boss now insists I use for all my work.
I think that usually the people in the middle of the two extremes have the right thought process going on. It’s clear to me that AI is a great tool that isn’t going away, but perhaps its most passionate champions and detractors both need to settle down.
Sad to say, I can vouch for this.
"Make sure you use this website that costs the company money frequently"
I wonder how it will play out when the costs of using an AI service are no longer subsidised by venture capital? (For example Uber is just as expensive as normal taxis now.)
After reading Sarah Wynn-Williams' book and seeing the current state of democracy in the USA and some European countries (apart from the fact that democracies are too slow to regulate anyway), I see little hope for the future.
Try reading some Sarah Kendzior [0] if you want to lose whatever shreds you had left (or if you'd like to base any droplets of hope on a more accurate world-view).
Wynn-Williams, and the author of this post, both severely underestimate how dark the tech-bros vision for the future actually is and how far along they are. Envision Snow Crash, but without the humor.
0 - https://sarahkendzior.substack.com/p/ten-articles-explaining...
When I see that the German spy agency has labeled the AfD a "confirmed rightwing extremist" force, which will now lead to the removal of AfD members and sympathizers from the civil service, I at least have a little hope for Germany, and by extension Europe. A little hope that the end of the capitalist era will not end in fascism, and that there is now another way open for discussion involving not only elites (aka tech bros and old white males).
The privacy concern I find to be particularly overstated. This is an identical concern to ones that have existed before AI ever entered the fray. Anytime you send data to a system that someone else controls you run those exact same risks. I also think there’s an overstated fear that an app focused on private data (something similar to Signal) would just add some kind of AI functionality one day out of the blue and suddenly ship your data off to a hive mind.
Any app that is willing to cross that line already has done so (e.g., Facebook).
It also seems to be technologically simple to perform a lot of AI tasks without compromising privacy. E.g., chips with local-first AI computational ability are reaching consumer level devices. Even the much-maligned Windows Recall feature specifically emphasizes how it never sends information to Microsoft servers nor processes data in the cloud.
Apple Intelligence as an example only seems to be reading information from Apple’s own default apps as of today, and their developer documentation suggests that capabilities require developers to implement features via their APIs.
It isn’t really a correct read of the situation to say that Apple Intelligence is “watching everything you do” like a service that is just watching your screen output at all times.
Even the service that does do that exact sort of thing (Windows Recall) has an extensive set of controls around filtering out specific apps and private browsing mode and other sensitive information, enabled by default: https://support.microsoft.com/en-us/windows/privacy-and-cont...
So I think the reality is that a lot of the big players making this technology recognize the privacy and security concerns and are designing their AI applications to address those concerns.
I personally feel like AI products are frequently launching with more transparency about data usage than a lot of Web 2.0 era applications like Facebook.
>> Something Awful had a particular response to furries. After creating a subforum specifically for furries to post in, everyone who used it was marked with a custom yellow star avatar, then banned.
I used to think the early internet was less shite than the current internet. Maybe I've been mollycoddled, wrapped in the HN blanket where behaviour like that would never be condoned. Maybe I need to see more social media. But, really? Yellow stars? OK wow.
https://www.youtube.com/playlist?list=PLXiMXx2shRmvI-PuBbmMl...
https://www.youtube.com/watch?v=6j6ZdVSrXNg
Or, a less drastic and recent example, a furry in Australia getting jumped unprovoked: https://x.com/Dennis_Gunnn/status/1918591407314661399
You can find hateful behavior everywhere if you look for it. This isn't exclusive to how people treat the furry fandom.
Perhaps the calculator is as close as we'll ever get to "superintelligence".