The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.
I say this as someone whose father was scammed out of a lot of money, so I’m certainly not numb to the potential consequences there. The scams were enabled by the internet; does the internet exist for that purpose? Of course not.
The water usage by data centers is fairly trivial in most places. The water used in manufacturing the physical infrastructure and in electricity generation is surprisingly large, but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.
To be honest, it’s really distasteful to make a high-level comment about this article and then have people rush to attack me personally. This is the mentality of a mob.
Just like with Mark Zuckerberg's "Metaverse," we're now in a post-market vanity economy where it's not consumer demand but increasingly desperate founders, investors, and gurus trying to justify their valuations by doling out products for free and shoving their AI services into everything to justify the tens of billions they dumped into it.
I'm sorry that some people's pension funds, startup funding, and increasingly the entire American economy rest on this collective delusion, but it's not really most people's problem.
The language of a message is no longer a serious barrier/indicator of a scam. ("A real bank would never talk like that" has become "well, that's exactly something they would say, the way they would say it.")
Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.
This is the knife-for-food vs. knife-for-stabbing vs. gun argument. Just because you can cook with a hammer doesn't make cooking its purpose.
If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.
If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.
Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.
Therefore, phones are bad?
This is of course before we talk about what criminals do with money, making money truly evil.
Phones are utilities. AI companies are not.
Without Generative AI, we couldn’t…?
I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.
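For what it's worth, the math being skipped here is just a points-per-dollar comparison. A toy sketch in Python, with entirely made-up numbers (these are not real Wawa rates or prices, just placeholders to show the shape of the calculation):

```python
# Toy rewards-math sketch: which purchase category earns points
# faster per dollar spent? All figures below are hypothetical.

def points_per_dollar(points_per_purchase: float, avg_purchase_cost: float) -> float:
    """Points earned for each dollar spent in a category."""
    return points_per_purchase / avg_purchase_cost

# Hypothetical: a sandwich earns 50 points and costs $8;
# a fill-up earns 50 points and costs $45.
sandwich_rate = points_per_dollar(50, 8.0)   # 6.25 points/$
fuel_rate = points_per_dollar(50, 45.0)      # ~1.11 points/$

better = "sandwiches" if sandwich_rate > fuel_rate else "gas"
print(f"Buy {better}: {sandwich_rate:.2f} vs {fuel_rate:.2f} points/$")
```

Under those made-up numbers, sandwiches win by a wide margin, which is the "strictly buy sandwiches and never gas" conclusion.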
People have been making nude celebrity photos for decades now with just Photoshop.
Some activities have gotten a speed-up. But so far it was all possible before, just not always feasible.
Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.
And as with child pornography, the AI companies are engaging in high-octane buck-passing more than actually trying to tamp down the problem.
After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:
- advertising
- astroturfing
- other forms of botting
- scamming old people out of their money
True, but no more true than it is if you replace the antecedent with "people".
Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.
History is littered with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.
It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.
Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.
Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]
As a filmmaker, my friends and I are getting more and more done as well:
https://www.youtube.com/watch?v=tAAiiKteM-U
https://www.youtube.com/watch?v=oqoCWdOwr2U
As long as humans are driving, I see AI as an exoskeleton for productivity:
https://github.com/storytold/artcraft (this is what I'm making)
It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and social-media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.
I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.
Apart from all the other madness in the world, this is the one thing that has been a dream come true.
As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.
There's financial capital and there's labor capital. AI is a force multiplier for labor capital.
for a 2011 account that's a shockingly naive take
yes, AI is a labor capital multiplier. and the multiplicand is zero
hint: soon you'll be competing not with humans without AI, but with AIs using AIs
While I certainly respect your interactivity and the consequent force-multiplier nature of AI, that doesn't mean you should try to emulate an already-given piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.
Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived, and the reality may be the opposite (e.g. spending more time wrestling with the tools), or creating unmaintainable debt, or making someone else spend extra time reviewing the PR and leaving 50 comments.
There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.
I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.
Here's a really old example of what that looks like (the models are a lot better at this now) :
And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet, by comparison, feels like a clear net positive to me, even with all the bad it enables.
What is it that isn't being done here, and who isn't doing it?
Those poles WERE NOT invented for strippers/pole dancers. Ditto for the Hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a Hitachi Magic Wand in your living room.
[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...
[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...
Do you think that it isn't used for this? The satirical part is expanding that use case to say it exists purely for that purpose.
I've never really been able to get into it either because it's sort of a paradox. If I agree, I feel bad enough about the actual issue that I'm not really in the mood to laugh, and if I disagree then I obviously won't like the joke anyways.
I find it unfunny for the same reason I don't find modern SNL intro bits about Trump funny. The source material is already insane to the point that it makes surface-level satire like this feel pointless.
You can still criticise without being mean.
Someone coined a term for those of the general population who trust this small group of billionaires and defend their technology.
“Dumb fucks”
Many people would rather argue about morality and conscience (of our time, of our society) instead of confronting facts and reality. What we see here is a textbook case of that.
Ridiculous to say the technology, by itself, is evil somehow. It is not. It is just math at the end of the day. Yes you can question the moral/societal implications of said technology (if used in a negative way) but that does not make the technology itself evil.
For example, I hate vibe coding with a passion because it enables wrong usage (IMHO) of AI. I hate how easy it has become to scam people using AI. How easy it is to create disinformation with AI. Hate how violence/corruption etc could be enabled by using AI tools. Does not mean I hate the tech itself. The tech is really cool. You can use the tech for doing good as much as you can use it for destroying society (or at the very minimum enabling and spreading brainrot). You choose the path you want to tread.
Just do enough good that it dwarfs the evil uses of this awesome technology. AND WE NEED MORE GOOD PEOPLE TO CREATE GOOD WITH THIS TECHNOLOGY.
soulofmischief•49m ago
I have only become more creatively enabled when adopting these tools, and while I share the existential dread of becoming unemployable, I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.
I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
nozzlegear•24m ago
> I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
This seems overly optimistic, but also quite dystopian. I hope that society doesn't become as integrated with these shitty AIs as we are with other technologies.
callc•17m ago
Cool science and engineering, no doubt.
Not paying any attention to societal effects is not cool.
Plus, presenting things as inevitabilities is just confidently trying to predict the future. Anyone can say “I understand one day this era will be history and X will have happened.” Nobody knows how the future will play out. Anyone who says they do is a liar; if they actually knew, they'd go ahead and bet all their savings on it.
blibble•7m ago
I'd rather be dead than a cortex reaver[1]
and I suspect that since I'm not a billionaire, the billionaire-owned killbots will make sure of that
[1]: https://www.youtube.com/watch?v=1egtkzqZ_XA
johnnyanmac•6m ago
I'm not really a fan of the "you criticize society yet you participate in it" argument.
>I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
You seem to forget the blood shed over the course of history that allowed such tech to benefit the people rather than just the robber barons. Unimaginable numbers of people died just so we could get a 5-day workweek and a minimum wage.
We don't get a beneficial future by just lying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't or don't want to fight yourself.