Having your likeness used to express an opinion that is the opposite of your own is nasty too. These tools let anyone produce the kind of thing that shows no courtesy, no grace, no kindness, and no care for the people around them.
The mass extraction and substitution of art has also caused a lot of unnecessary grief. Instead of AI enabling us to pursue creative work… it’s producing slop and making it harder for newbies to develop their craft. And making a lot of people anxious, fearful, and angry.
And finally, of course, astroturfing, phishing, and that kind of thing have in principle become a lot more sophisticated.
It unnerves me that people can pull this capital lever against each other in ways that don’t obviously advance the common good.
We're very close to nearly every video on the internet being worthless as a form of proof. This bothers me a lot more than text generation, because video is typically admissible as evidence in a court of law, and especially in the court of public opinion.
https://x.com/Rimmy_Downunder/status/1947156872198595058
(sorry about the X link, I couldn't find anything else)
The problem of real footage being discredited as AI is as big as the problem of AI footage being passed off as real. But both are subsets of a larger problem: AI can simulate all the costly signals of value very cheaply, so everything that depended on the costliness of those channels breaks down. This is true for epistemics, but also for social bonds (chatbots), credentials, and experience and education (AI performing better on many knowledge tasks than experienced humans), among others.
That moment made me question how easily AI can shape narratives when the user isn’t aware of the original content.
It wasn’t just about writing; it felt like it understood the intention behind the message better than I did. That was the first time I questioned where we’re headed.
Most generative AI hallucinations aren’t just data errors. They happen because the language model hits a semantic dead-end — a kind of “collapse” where it can't reconcile competing meanings and defaults to whatever sounds fluent.
We’re building WFGY, a reasoning system that catches these failure points before they explode. It tracks meaning across documents and across time, even when formatting, structure, or logic goes off the rails.
The scariest part? Language never promised to stay consistent. Most models assume it does. We don’t.
Backed by the creator of tesseract.js (36k). More info: https://github.com/onestardao/WFGY
bearjaws•6mo ago
When importing the content back into Moodle, I found that one of the transcripts was 30k+ characters and errored out on import.
For whatever reason, it got stuck in a loop that started like this:
"And since the dawn of time, wow time, its so important, time is so important. What is time, time is so important, theres not enough time, time is so important time"... repeat "time is so important" until token limit.
This really gave me a bit of existential dread.
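A degenerate loop like that is actually easy to flag before import: when a model gets stuck repeating a phrase until the token limit, one short n-gram dominates the whole transcript. Here's a minimal sketch of that check; the function name and the 20% threshold are my own illustrative choices, not part of Moodle or any real import tool.

```python
from collections import Counter

def looks_looped(text, n=4, threshold=0.2):
    """Return True if a single word n-gram accounts for more than
    `threshold` of all n-grams in the text, which is a cheap signal
    of runaway repetition like 'time is so important time is so...'."""
    words = text.split()
    if len(words) < n * 2:
        return False  # too short to judge
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    top_count = Counter(ngrams).most_common(1)[0][1]
    return top_count / len(ngrams) > threshold

# The looping transcript trips the check; normal prose does not.
assert looks_looped("time is so important " * 50)
assert not looks_looped("And since the dawn of time, people have measured it.")
```

It won't catch subtler failure modes, but it would have stopped this particular 30k-character transcript at the door.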
lynx97•6mo ago
bearjaws•6mo ago