The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.
I say this as someone whose father was scammed out of a lot of money, so I'm certainly not numb to the potential consequences there. The scams were enabled by the internet; does the internet exist for this purpose? Of course not.
The water usage by data centers is fairly trivial in most places. The water used in manufacturing the physical infrastructure plus generating the electricity is surprisingly large, but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.
The language of a message is no longer a serious barrier/indicator of a scam. "A real bank would never talk like that" has become "well, that's exactly something they would say, the way they would say it."
Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.
This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make cooking its purpose.
If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.
If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.
Email, by number of messages attempted, is owned by spammers 10- to 100-fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it reaches your mailbox.
To go back one step further, porn was one of the first successful businesses on the internet; that alone is more than enough motivation for our more conservative congress members to have banned the internet in the first place.
Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.
Therefore, phones are bad?
This is of course before we talk about what criminals do with money, making money truly evil.
Phones are utilities. AI companies are not.
Without Generative AI, we couldn’t…?
I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.
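The comparison is just points earned per dollar times the reward's cash value. Here's a minimal sketch with entirely made-up numbers (the real Wawa rates, reward thresholds, and prices will differ):

```python
# All values below are hypothetical, for illustration only.
POINTS_PER_DOLLAR_FOOD = 10   # assumed points per dollar on sandwiches
POINTS_PER_DOLLAR_FUEL = 2    # assumed points per dollar on gas
REWARD_COST_POINTS = 1000     # assumed points needed to redeem a reward
REWARD_VALUE_DOLLARS = 5.00   # assumed cash value of that reward

def effective_discount(points_per_dollar):
    """Fraction of each dollar spent that comes back as reward value."""
    return points_per_dollar / REWARD_COST_POINTS * REWARD_VALUE_DOLLARS

food = effective_discount(POINTS_PER_DOLLAR_FOOD)
fuel = effective_discount(POINTS_PER_DOLLAR_FUEL)
print(f"sandwiches: {food:.1%} back, gas: {fuel:.1%} back")
# With these assumed numbers, sandwiches earn 5x the effective discount,
# so a points-maximizer would buy sandwiches and never gas.
```

Under these assumptions the conclusion falls out immediately; the point is that an LLM can do this comparison in seconds, and so can five lines of arithmetic.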
People have been making nude celebrity photos for decades now with just Photoshop.
Some activities have gotten a speedup. But so far it was all possible before, just perhaps not feasible.
Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.
And as with child pornography, the AI companies are engaging in high-octane buck-passing more than actually trying to tamp down the problem.
After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:
- advertising
- astroturfing
- other forms of botting
- scamming old people out of their money
True, but no more true than it is if you replace the antecedent with "people".
Saying that the tools make mistakes is correct. Saying that (unlike people) they can never be trained and deployed such that the mistakes are tolerable is an awfully strong claim.
History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.
It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.
And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?
Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.
But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.
The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.
Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.
Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]
Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.
As a filmmaker, my friends and I are getting more and more done as well:
https://www.youtube.com/watch?v=tAAiiKteM-U
https://www.youtube.com/watch?v=oqoCWdOwr2U
As long as humans are driving, I see AI as an exoskeleton for productivity:
https://github.com/storytold/artcraft (this is what I'm making)
It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and social-media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.
I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.
Apart from all the other madness in the world, this is the one thing that has been a dream come true.
As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.
There's financial capital and there's labor capital. AI is a force multiplier for labor capital.
for a 2011 account that's a shockingly naive take
yes, AI is a labor capital multiplier. and the multiplicand is zero
hint: soon you'll be competing not with humans without AI, but with AIs using AIs
While I certainly respect your interactivity and the subsequent force-multiplier nature of AI, this doesn't mean you should try to emulate an already given piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.
Claims of productivity boosts must always be inspected very carefully: they are often merely perceived, and the reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time reviewing the PR and leaving 50 comments.
There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.
I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.
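For anyone wondering why focal length is worth controlling for in blocking: it sets the field of view, via the standard pinhole/thin-lens relation. A quick sketch (assuming a full-frame 36mm sensor width; the specific lenses are just examples):

```python
import math

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view for a given focal length.

    Standard pinhole relation: fov = 2 * atan(sensor_width / (2 * f)).
    Assumes a full-frame 36mm-wide sensor by default.
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Wide 24mm vs normal 50mm vs portrait 85mm:
for f in (24, 50, 85):
    print(f"{f}mm -> {horizontal_fov_degrees(f):.1f} degrees")
# 24mm gives roughly a 74-degree view, 85mm roughly 24 degrees --
# which is why lens choice changes how much of the blocked scene is in frame.
```

Feeding a consistent focal length into the 3D blocking pass keeps the generated shots matching the intended framing.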
Here's a really old example of what that looks like (the models are a lot better at this now):
It's ostensibly doing things you asked it, but in terms dictated by its owner.
and it's even worse than that: you're literally training your replacement by using it, since it transmits back what you're accepting and discarding
and you're even paying them to replace you
And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet by comparison feels like a clear net positive to me, even with all the bad it enables.
This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.
What is it that isn't being done here, and who isn't doing it?
Those poles WERE NOT invented for strippers/pole dancers. Ditto for the Hitachis. Even now, I'm pretty sure more firemen use the poles than strippers do. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a Hitachi Magic Wand in your living room.
[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...
[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...
Do you think that it isn't used for this? The satire part is to expand that use case to say it exists purely for that purpose.
If it's not happening yet, it will...
I've never really been able to get into it either because it's sort of a paradox. If I agree, I feel bad enough about the actual issue that I'm not really in the mood to laugh, and if I disagree then I obviously won't like the joke anyways.
I find it unfunny for the same reason I don't find modern SNL intro bits about Trump funny. The source material is already insane to the point that it makes surface-level satire like this feel pointless.
You can still criticise without being mean.
I can certainly criticize specific things respectfully. If I prioritised demonstrating my moral superiority I could loudly make all sorts of disingenuous claims that won't make the world a better place.
I certainly do not think people should be making exploitative images in Photoshop or indeed any other software.
I do not think that I should be able to choose which software those rules apply to based upon my own prejudice. I also do not think that being able to do bad things with something is sufficient to negate every good thing that can be done with it.
Countless people have been harmed by the influence of religious texts, yet I do not advocate for those texts to be banned, and I do not demand the vilification of people who follow them.
Even though I think some books can be harmful, I do not propose attacking people who make printing presses.
What exactly are you requiring here? Pitchforks and torches? Why AI and not the other software that can be used for the same purposes?
If you want robust regulation that can provide a means to protect people from how models are used then I am totally prepared (and have made submissions to that effect) to work towards that goal. Being antagonistic works against making things better. Crude generalisations convince no-one. I want the world to be better, I will work towards that. I just don't understand how anyone could believe vitriolic behaviour will result in anything good.
Someone coined a term for those of the general population who trust this small group of billionaires and defend their technology.
“Dumb fucks”
Many people would rather argue about morality and conscience (of our time, of our society) instead of confronting facts and reality. What we see here is a textbook case of that.
okay, what are the "facts and reality" here? If you're just going to say "AI is here to stay", then you 1) aren't dealing with the core issues people bring up, and 2) aren't bringing facts but defeatism. Where would we be if we used that logic for, say, Flash?
Ridiculous to say the technology, by itself, is evil somehow. It is not. It is just math at the end of the day. Yes you can question the moral/societal implications of said technology (if used in a negative way) but that does not make the technology itself evil.
For example, I hate vibe coding with a passion because it enables wrong usage (IMHO) of AI. I hate how easy it has become to scam people using AI. How easy it is to create disinformation with AI. Hate how violence/corruption etc could be enabled by using AI tools. Does not mean I hate the tech itself. The tech is really cool. You can use the tech for doing good as much as you can use it for destroying society (or at the very minimum enabling and spreading brainrot). You choose the path you want to tread.
Just do enough good that it dwarfs the evil uses of this awesome technology.
That said, their thinking is that this can remove labor from their production, all while stealing works under the very copyright regime they set up. So I'd call that "evil" in every conventional sense.
>Just do enough good that it dwarfs the evil uses of this awesome technology.
The evil is in the root of the training, though. And sadly money is not coming from "good". I don't see any models focusing on ensuring it trains only on CC0/FOSS works, so it's hard to argue of any good uses with evil roots.
If they could do that at the bare minimum, maybe they can make the argument over "horses vs cars". As it is now, this is a car powered by stolen horses. (also I work in games, and generative AI is simply trash in quality right now).
Not really - it's math, plus a bazillion jigabytes of data to train that math, plus system prompts to guide that math, plus data centers to do that math, plus nice user interfaces and APIs to interface with that math, plus...
Anyway, it's just a meaninglessly reductive thing to say. What is the atom bomb? It's just physics at the end of the day. Physics can wreak havoc on the world; so can math.
Democratisation of tech has allowed for more good to happen, centralisation the opposite. AI is probably one of the most centralisation-happy tech we've had in ages.
imo there are actually too few answers for what a better path would even look like.
hard to move forward when you don't know where you want to go. answers in the negative are insufficient, as are those that offer little more than nostalgia.
soulofmischief•1h ago
I have only become more creatively enabled when adopting these tools, and while I share the existential dread of becoming unemployable, I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.
I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
nozzlegear•44m ago
> I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
This seems overly optimistic, but also quite dystopian. I hope that society doesn't become as integrated with these shitty AIs as we are with other technologies.
callc•37m ago
Cool science and engineering, no doubt.
Not paying any attention to societal effects is not cool.
Plus, presenting things as inevitabilities is just plain confidently trying to predict the future. Anyone can say "I understand one day this era will be history and X will have happened". Nobody knows how the future will play out. Anyone who says they do is a liar. If they actually knew, they'd bet all their savings on it.
blibble•27m ago
I'd rather be dead than a cortex reaver[1]
(and I suspect, as I'm not a billionaire, the billionaire-owned killbots will make sure of that)
[1]: https://www.youtube.com/watch?v=1egtkzqZ_XA
johnnyanmac•26m ago
I'm not really a fan of the "you criticize society yet you participate in it" argument.
>I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
You seem to forget the bloodshed over the history that allowed that tech to benefit the people rather than just the robber barons. Unimaginable numbers of people died just so we could get a 5-day workweek and a minimum wage.
We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures, if you can't or don't want to fight yourself.