The lineage from cybernetics to information theory to cognitive science to computer science supplied an increasingly limited set of tools for approaching intelligence.
Cybernetics should have been ported expansively to neuroscience, then neurobiology, then something more expansive like ecological psychology or coordination dynamics. Instead of expanding, comp sci became too reductive.
The idea that a reductive system, one anyone with a little math training could A/B test against vast swaths of information gleaned from existing forms, could unlock highly evolved processes like thinking, reasoning, and action, and that this defines a path to intelligence, is quite strange. It defies scientific analysis. Intelligence is incredibly dense in biology: a vastly hidden, parallel process in which removing one faculty (like the emotions) makes the intelligence vanish into zombiehood.
Had we looked at that evidence, we'd have understood that language/tokens/embedding space couldn't possibly be a composite for all that parallel processing.
People will be swayed by AI-generated videos while also being convinced real videos are AI.
I'm kinda terrified of the future of politics.
On the other side we want to believe in something, so we'll believe in the video that will suit our beliefs.
It's an interesting struggle.
I don't fully believe anything I see on the internet that isn't backed up by at least two independent sources. Even then, I've probably been tricked at least once.
Would that change things? Maybe not, but maybe it would lessen the power grabs that some small few seem to gravitate towards.
I know if I wanted to influence the major elections, OpenAI, Google and Meta would be the first places I would go. That's a very small set of points of failure. Recent elections seem to be quite narrow (maybe they were before too), but that kind of power is a silent soft power that goes largely unchecked.
If people are more attuned to being misled, that power can slowly degrade.
That doesn't scale.
During campaign season, they're already running as many rallies as they can. Outside the campaign train, smaller Town Hall events only reach what, a couple hundred people, tops? And at best, they might change the minds of a couple dozen people.
EDIT: It's also worth mentioning that people generally don't seek to have their mind changed. Someone who is planning on voting for one candidate is extremely unlikely to go to a rally for the opposition.
From writing, through organized religion, the printing press, radio and TV, the internet, and now AI.
The printing press and the wars of the Reformation is the obvious pairing; radio and totalitarianism is less well known; the internet and the new populism is only now being recognized for what it is.
Eventually we'll get it regulated and adjust to it. But in the meantime it's going to be a wild ride.
Which are inapplicable today.
> We will adjust
Will we? Maybe years later... per event. It's only now dawning on the majority of Britons that Brexit was a mistake they were lied into.
It is a concern... it took a few centuries for the printing press to spur the Catholic/Protestant wars and then finally resolve them.
No, they are not.
And if we are on the subject of being lied to, you might want to consider the deluge of (later ridiculed) Project Fear predictions that came to us not from some rando with a blog, but from senior Government figures like the Chancellor of the Exchequer.
A number on the side of a bus was gross, not net? Meh.
OP is speaking the truth here.
> It's finally now dawning on the majority of Britons that Brexit was a mistake
As of June 2025, 56 percent of people in Great Britain thought that it was wrong to leave the European Union, compared with 31 percent who thought it was the right decision.
https://www.statista.com/statistics/987347/brexit-opinion-po...
I'm not sure "motivated to make a bad decision" is the same thing as "achieved mind control".
What matters is how many people will suffer during this adjustment period.
How many Rwandan genocides will happen because of this technology? How many lynchings or witch burnings?
We will collectively understand that pixels on a screen are like cartoons or Photoshop on steroids.
About 40 years later the Rwandan genocide took place, and many scholars point to a preceding radio-based propaganda campaign as playing a key role in increasing ethnic violence in the area.
Since then the link between radio and genocide seems to have decreased over time but it's likely that this isn't so much because humans have a better understanding of the medium but more so because propaganda has moved to more effective mediums like the internet.
Given that we didn't actually solve the problems with radio before moving onto the next medium it isn't likely that we'll figure out the problems with these new mediums before millions die.
I can't see how functioning democracy can survive without truth as shared grounds of discussion.
> I don't think the US was a monarchy for its first hundred years.
Did the US not have truth as shared grounds of discussion for its first hundred years?
Prior to the Internet the range of opinions which you could gain access to was far more limited. If the media were all in agreement on something it was really hard to find a counter-argument.
We're so far down the rabbit hole already of bots and astroturfing online, I doubt that AI deepfake videos are going to be the nail in the coffin for democracy.
The majority of the bot, deepfake and AI lies are going to be created by the people who have the most capital.
Just like they owned the traditional media and created the lies there.
> People gossiped all sorts of stuff, spread malicious rumors and you had to guess what's a lie and what's not.
And there were things like witch trials where people were burnt at the stake!
The resolution was a shared faith in central authority. Witness testimony and physical evidence don't scale to populations of millions, you have to trust in the person getting that evidence. And that trust is what's rapidly eroding these days. In politics, in police, in the courts.
It's like the old saying: they create their own ecosystem. Circular stock-market deals are the most obvious example, but WorldCoin has been in the making for years, and Altman often described it as the only alternative in a post-truth world (the one he himself is making, of course).
[0] https://www.forbes.com.au/news/innovation/worldcoin-crypto-p...
Then you can see any conversation about the video will be even more divorced from reality.
None of this requires video manipulation.
The majority of people are idiots on a grand scale. Just search any social media site for PEMDAS and you will find hordes of people debating the value of 2 + 3 / 5 on all sorts of grounds: "It's definitely 1. 2+3=5 then by 5 is 1", stuff like that.
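For the record, the precedence rule PEMDAS abbreviates makes division bind tighter than addition, so 2 + 3 / 5 means 2 + (3 / 5) = 2.6; the "it's definitely 1" crowd is actually computing (2 + 3) / 5. A minimal check in Python, which follows the same precedence convention:

```python
# Division binds tighter than addition: 3 / 5 is evaluated first.
print(2 + 3 / 5)    # 2.6

# The mistaken left-to-right reading groups the addition first.
print((2 + 3) / 5)  # 1.0
```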
Just consider how a screenshot of a tweet or made-up headline already spreads like a wildfire: https://x.com/elonmusk/status/1980221072512635117
Sora involves far more work than what is required to spread misinfo.
Finally, people don't really care about the truth. They care about things that confirm their world view, or comfortable things. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.
That same link has two “reader notes” about truth.
The lie is halfway around the world, etc., but that can also be explained by people's short-term instincts and reaction to outrage. It's not mutually exclusive with caring about truth.
Maybe I’m being uncharitable — did you mean something like “people don’t care about truth enough to let it stop them from giving into outrage”? Or..?
How could all of this wind up leading to a much more fair, kind, sustainable and prosperous future?
Acknowledging risks is important, but where do YOU want all this to go?
But the kids who grow up with this stuff will just integrate it into their lives and proceed. The society which results from that will be something we cannot predict, as it will be alien to us. Whether it will be better or not: probably not.
Humans evolved to spend most of their time with a small group of trusted people. By removing ourselves from that we have created all sorts of problems that we just aren't really that equipped to deal with. If this is solvable or not has yet to be seen.
Moreover, I think it's really hard overall to imagine a better future as long as all of this technology and power is in the hands of massively wealthy people who have shown their willingness to abuse it to maintain that wealth at our expense.
The optimistic future effectively requires some means of reclaiming some of that power and wealth for the rest of us.
People have always been this way though. The tribes are just organized differently in the internet age.
[0] Trump as "King Trump" flying a jet that dumps shit onto protesters https://truthsocial.com/@realDonaldTrump/posts/1153982516232...
[1] https://www.snopes.com/news/2025/09/30/medbed-trump-ai-video...
"What is truth?" (Pontius Pilate)
Reality is specific. Actions, materials. Words and language are arbitrary, they're processes, and they're simulations. They don't reference things, they represent them in metaphors, so sure they have "meaning" but the meanings reduce the specifics in reality which have many times the meaning possibility to linearity, cause and effect. That's not conforming to the reality that exists, that's severely reducing, even dumbing down reality.
Or at least, words had meaning. As we become post-lexical, it becomes harder to tell how well any sequence of words corresponds to reality. This is post truth - not that there is no reality, but that we no longer can judge the truth content of a statement. And that's a huge problem, both for our own thought life, and for society.
Like cars making horse manure in cities a non-issue (https://www.youtube.com/watch?v=w61d-NBqafM)
Maybe the solution to everybody lying would be some way to directly access a person's actual memories from their brains..
While there were some debris incidents IRL, the freeway was completely shut down per the governor's orders and nobody was harmed. (Had he not done this, that same debris might have hit motorists, so it was a good call on his part.)
You could see the "Sora" watermark in the video, but it was still popular enough to make it in my reels feed that is normally always a different kind of content.
In this case whoever made that was sloppy enough to use a turnkey service like Sora. I can easily generate videos suitable for reels using my GPU and those programs don't (visibly) watermark.
We are in for dark times. Who knows how many AI-generated propaganda videos are slipping under the radar because the operator is actually half-skilled.
Curious what you used. I have an RTX 5090 and I've tried using some local video generators and the results are absolute garbage unless I'm asking for something extremely simple and uncreative like "woman dancing in a field".
For over a year now we've been at the point whereby a video of anyone saying or doing anything can be generated by anyone and put on the Internet, and it's only becoming more convincing (and rapidly)
We've been living in a post-truth world for almost ten years, so it's now become normalized
Almost half of the population has been conditioned to believe anything that supports their political alignment
People will actually believe incredibly far-fetched things, and when the original video has been debunked, will still hold the belief because by that point the Internet has filled up with more garbage to support something they really want to believe
It's a weird time to be alive
Honestly it goes right back to philosophy and what truth even means. Is there even such a thing?
Truth absolutely is a thing. But sometimes, it's nuanced, and people don't like nuance. They want something they can say in a 280-character tweet that they can use to "destroy" someone online.
Our ways of thinking and our courts understand that you can’t trust what people say and you can’t trust what you read. We’ve internalized that as a society.
Looking back, there seems to have been a brief period of time when you could actually trust photographs and videos. I think in the long run, this period of time will be seen as a historical anomaly, and video will be no more trusted than the printed or spoken word is today.
I think we may revert back to trusting only smaller groups of people, being skeptical of anything outside that group, becoming a bit more tribal. I hope without too many deleterious effects, but a lot could happen.
But humans, as a species, are survivors. And we, with our thinking machines will figure out ultimately how to deal with it all. I just hope the pain of this transition is not catastrophic.
At that moment, it simultaneously became possible to create "deep fakes" by simply forging a signature and tricking readers as to who authored the information.
And even before that, just with speaking, it was already possible to spread lies and misinformation, and such things happened frequently, often with disastrous consequences. Just think of all the witch hunts, false religions, and false rumors that have been spread through the history of mankind.
All of this is to say that mankind is quite used to dealing with information that has questionable authorship, authenticity, or accuracy. And mankind is quite used to suffering as a result of these challenges. It's nothing particularly new that it's moving into a new media format (video), especially considering that this is a relatively new format in the history of mankind to begin with.
(FWIW, the best defense against deep fakes has always been to pay attention to the source of information rather than just the content. A video about XYZ coming from XYZ's social media account is more likely to be accurate than if it comes from elsewhere. An article in the NYTimes that you read in the NYTimes is more likely to be authentic than a screenshot of an article you read from some social media account. Etc. It's not a perfect measure -- nothing is -- but I'd say it's the main reason we can have trust despite thousands of years of deep fakes.)
IMO the fact that social media -- and the internet in general -- have decentralized media while also decoupling it from geography is less precedented and more worrisome.
Fakery isn't new, only the product of scale and quality at which it is becoming possible.
Everything is manipulated or generated until proven otherwise.
It smells of e/acc, effective altruist ethics, which are not my favorite, but I don't work at OpenAI so I don't have a say; I can only interpret.
I agree, but we will likely continue down this road...
This will simply take us back about 150 years to the time before the camera was common.
The transition period may be painful though.
I find it a bit more concerning that anyone would not already understand how deeply we exist in a "post-truth" world. Every piece of information we've consumed for the last few decades has increasingly been shaped by algorithms optimizing someone else's target.
But the real danger of post-truth is when there is a still enough of a veneer of truth that you can use distortions to effectively manipulate the public. Losing that veneer is essentially a collapse of the whole system, which will have consequences I don't think we can really understand.
The pre and early days of social media were riddled with various "leaks" of private photos and video. But what does it mean to leak a nude photo of a celebrity when you can just as easily generate a photo that is indistinguishable? The entire reason leaks like that were so popular is precisely because people wanted a glimpse into something real about the private life of these screen personalities (otherwise 'leaks' and 'nude scenes' would have the same value). As image generation reaches the limit, it will be impossible to ever really distinguish between voyeurism and imagination.
Similarly we live in an age of mass surveillance, but what does surveillance footage mean when it can be trivially faked. Think of how radicalizing surveillance footage has been over the past few decades. Consider for example the video of the Rodney King beating. Increasingly such a video could not be trusted.
> I'm kinda terrified of the future of politics.
If you aren't already terrified enough of the present of politics, then I wouldn't be worried about what Sora brings us tomorrow. I honestly think what we'll see soon is not increasingly powerful authoritarian systems, but the breakdown of systems of control everywhere. As these systems over-extend themselves they will collapse. Social media's power peaked a few years ago; Sora represents a further breakdown of these systems of control.
People forget, or didn't see, all the staged catastrophes in the 90s that were pulled off the channel shortly afterwards, once someone pointed out something obvious (e.g. dolls instead of human victims, wrong-location footage, and so on).
But if you were there, and if you saw that, and then saw them pull it off and pretend like it didn't happen for the rest of the day, then this AI thing is a nothing burger.
I consider myself pretty on the ball when it comes to following this stuff, and even I've been caught off guard by some videos. I've seen videos on Reddit I thought were real until I realised what subreddit I was on.
[0] https://www.404media.co/openai-sam-altman-interview-chatgpt-...
It seems true that no company has used frontier models to create a product with business value commensurate with the cost of training and running them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.
Anthropic has said that every model they've trained has been profitable. Just not profitable enough to pay to train the next model.
I bet that's true for OpenAI's LLMs too, or would be if they reduced their free tier limits.
Also because they have the funding to do it.
Reminds me a bit of early Google, Microsoft, Xerox, etc.
This is just what the teenage stage of the top tech startup/company in an important new category looks like.
It's similar to the process of electrification. Every existing machine/process needed to be evaluated to see if electricity would improve it: dish washing, clothes drying, food mixing, etc.
OpenAI is not alone. Every one of their products has a (sometimes superior) equivalent from Google (e.g. Veo for Sora) or other competitors.
It makes them look desperate though. Nothing like starting tons of services at once to show you have a vision
I think he's being a bit harsh here. And there are some confounding factors why.
Yes, we have an AI bubble. Yes, there's been a ton of hype that can't be met with reality in the short term. That's normal for large changes (and this is a large technological change). OpenAI may have some rough days ahead of it soon, but just like the internet, there's still a lot of signal here and a lot of work still to be done. Going through Suna+Sora videos just last night was still absolutely magical. There's still so much here.
But, OpenAI is also becoming, to use a Ben Thompson term, an aggregator. If it's where you go to solve many problems, advertising and more is a natural fit. It's not certain who comes out on top of the space (or if it can be shared), but there are huge rewards coming in future years, even after a bubble has popped.
Cal is having a very strong reaction here. I value it, but I wish it was more nuanced.
Ads destroy... pretty much everything they touch. It's a natural fit, but a terrible one.
Your jumping off point is a cliff into a pile of leaves. It looks correct and comfy but will hurt your butt taking it for granted. You’re telling people to jump and saying “it’ll get better eventually just keep jumping and ignore the pain!”
The "value" of short video content is already somewhat of a poor value proposition for this and other reasons. It lets you just obliterate time which can be handy in certain situations, but it also ruins your attention span.
But also, a company that earnestly believes that it's about to disrupt most labor is going to want to grab as many of those bucks as possible before people no longer have income.
Whether OpenAI becomes a truly massive, world-defining company is an open question, but it's not going to be decided by Sora. Treating a research-facing video generator as if it's OpenAI's attempt at the next TikTok is just missing the forest for the trees. Sora isn't a product bet, it's a technology demo or a testbed for video and image modeling. They threw a basic interface on top so people could actually use it. If they shut that interface down tomorrow, it wouldn't change a thing about the underlying progress in generative modeling.
You can argue that OpenAI lacks focus, or that they waste energy on these experiments. That's a reasonable discussion. But calling it "the beginning of the end" because of one side project is just unserious. Tech companies at the frontier run hundreds of little prototypes like this... most get abandoned, and that's fine.
The real question about OpenAI's future has nothing to do with Sora. It's whether large language and multimodal models eventually become a zero-margin commodity. If that happens, OpenAI's valuation problem isn't about branding or app strategy, it's about economics. Can they build a moat beyond "we have the biggest model"? Because that won't hold once open-source and fine-tuned domain models catch up.
So sure, Sora might be a distraction. But pretending that a minor interface launch is some great unraveling of OpenAI's trajectory is just lazy narrative-hunting.
fwiw, there's no requirement to have a subscription to create content.
Whether AGI does or does not materialize sometime soon doesn't matter. OpenAI, like every company who wants to raise massive amounts of money, needs to show huge growth numbers now. It seems like the unfortunate, simple truth is that slop is a growth hack.
Not nearly on the level of "Kleenex" or "Google" as a term, but impressive given that other companies have spent decades trying to make a similar dent.
as another example: tesla has strung along a known overvaluation for a long time now and there's no end in sight despite a number of blunders
A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn't be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn't be entertaining the idea, as Altman did last week, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy…
zerosizedweasle•3h ago
It wasn't that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white-collar jobs might soon be automated by LLM-based tools.
That's the thing, this has all been predicated on the notion that AGI is next. That's what the money is chasing, why it's sucked in astronomical investments. It's cool, but that's not why Nvidia is a multi trillion dollar company. It's that value because it was promised to be the brainpower behind AGI.
cratermoon•3h ago
What we got next: porn
c0balt•3h ago
I had to work for a bit with SDXL models from there and the amount of porn on the site, before the recent cleanse, was astonishing.
quantified•3h ago
Porn (visual and written erotic impression) has been a normal part of the human experience for thousands of years. Across different religions, cultures, technological capabilities. We're humans.
There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.
Generate your own porn is definitely a huge market. Sharing it with others, and then the follow-on concern of what's in that shared content, could lead to problems.
noir_lord•3h ago
Attractive people in sexually fulfilling relationships still look at porn.
It's just human.
bossyTeacher•41m ago
How do you know those relationships are "sexually fulfilling"?
rchaud•2h ago
Re: payment systems, Visa and MC are notoriously unfriendly to porn vendors, sending them into the arms of crooked payment processors like Wirecard. PayPal grew to prominence because it was once the only way to buy and sell on eBay. Crypto went from nerd hobby to speculative asset, skipping the "medium of exchange for porn purchases" stage entirely.
As for broadband adoption, it's as likely to have occurred for MP3 piracy and being 200X faster than dialup, as it was for porn.
Karrot_Kream•3h ago
In fact, a fun thing to think about is what signals we could observe in markets that specifically call out AGI as the expectation, as opposed to a simple bullish outlook on inference usage.
port3000•2h ago
AI is already integrated into every single Google search, as well as Slack, Notion, Teams, Microsoft Office, Google Docs, Zoom, Google Meet, Figma, Hubspot, Zendesk, Freshdesk, Intercom, Basecamp, Evernote, Dropbox, Salesforce, Canva, Photoshop, Airtable, Gmail, LinkedIn, Shopify, Asana, Trello, Monday.com, ClickUp, Miro, Confluence, Jira, GitHub, Linear, Docusign, Workday
.....so where is this 100X increase in inference demand going to come from?
Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
Karrot_Kream•1h ago
> Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
While I haven't read the article yet, if this is true then yes this could be an indication of consumer app style inference (ChatGPT, Claude, etc) waning which will put more pressure on industrial/tool inference uses to buoy up costs.
tmaly•2h ago
The app is fun to use for about 10 minutes then that is it.
Same goes for Grok imagine. All people want to do is generate NSFW content.
What happened to improving the world?
qwery•47m ago
I would love to have Bob Ross, wielding a crayon, add some happy little trees to the walls of a Target.