
The Internet's Too Fast

https://junejuice.bearblog.dev/the-internets-too-fast/
2•busymom0•3m ago•0 comments

The Calculated Typer – Haskell Symposium (ICFP⧸SPLASH'25) [video]

https://www.youtube.com/watch?v=uCPJ22aj_kI
1•matt_d•4m ago•1 comments

Fixing a MongoDB Replication Protocol Bug with TLA+ [video]

https://www.youtube.com/watch?v=x9zSynTfLDE
1•we6251•5m ago•0 comments

A Royal Gold Medal

https://daniel.haxx.se/blog/2025/10/21/a-royal-gold-medal/
1•leephillips•5m ago•0 comments

Stop buying cloud products: When your "smart home" suddenly turns into e-waste

https://www.wespeakiot.com/stop-buying-cloud-products-when-your-smart-home-suddenly-turns-into-el...
2•speckx•6m ago•0 comments

Thirty Year Operational Experience of the Jet Flywheel Generators [pdf]

https://scientific-publications.ukaea.uk/wp-content/uploads/Preprints/pre-CCFE-PR1728.pdf
1•zeristor•6m ago•0 comments

Joe Brockmeier (jzb) on LWN's 'Vintage' Style

https://hachyderm.io/@jzb/115413478341532720
1•phoronixrly•7m ago•0 comments

Michael Levin – Aging as a Loss of Goal-Directedness

https://advanced.onlinelibrary.wiley.com/doi/10.1002/advs.202509872?af=R
2•myth_drannon•8m ago•0 comments

The Gnome Way

https://blogs.gnome.org/aday/2017/08/08/the-gnome-way/
1•airhangerf15•9m ago•0 comments

My wife gave me 100 days to make it as an indie creator

https://blog.jacobstechtavern.com/p/my-wife-gave-me-100-days
2•jakey_bakey•11m ago•0 comments

Open AI launches browser Vibe Check

https://every.to/vibe-check
2•sam1r•13m ago•2 comments

NAT traversal improvements, pt. 2: Challenges in cloud environments

https://tailscale.com/blog/nat-traversal-improvements-pt-2-cloud-environments
1•CharlesW•14m ago•0 comments

Rare Earths Recovery from Ewaste

https://arstechnica.com/science/2025/10/breaking-down-rare-earth-element-magnets-for-recycling/
1•DaveZale•15m ago•0 comments

AWS outage: Are we relying too much on US big tech?

https://www.bbc.com/news/articles/c0jdgp6n45po
5•devonnull•16m ago•0 comments

Hammurabi Currency Converter

https://justine.lol/inflation/
2•jart•17m ago•0 comments

Use Cursor agent inside any ACP compatible IDE

https://github.com/roshan-c/cursor-acp
1•parting0163•17m ago•0 comments

OpenAI Looks to Replace the Drudgery of Junior Bankers' Workload

https://www.bloomberg.com/news/articles/2025-10-21/openai-looks-to-replace-the-drudgery-of-junior...
1•megacorp•19m ago•0 comments

Show HN: Playbook AI – knowledge base for using AI in product development

https://aidevplaybook.com/en
1•greatgenby•19m ago•0 comments

MIT Maritime Consortium Releases "Nuclear Ship Safety Handbook"

https://news.mit.edu/2025/mit-maritime-consortium-nuclear-ship-safety-handbook-1020
1•gnabgib•19m ago•0 comments

Sora 2 Go – Make pro videos using OpenAI's Sora 2, no invite needed

https://sora2go.lovable.app/
1•vannventures•22m ago•1 comments

The Slack-O-lantern says back to woooOOOoooOOOrk [video]

https://www.youtube.com/shorts/Ouu0oi0mcY4
2•ohjeez•23m ago•0 comments

MinIO Goes Source-Only Distribution

https://github.com/minio/minio/issues/21647
1•tiri•23m ago•1 comments

Do we need to be saying 'please' and 'thanks' to AI?

https://www.rnz.co.nz/life/lifestyle/do-we-need-to-be-saying-please-and-thanks-to-ai
4•billybuckwheat•25m ago•0 comments

Fast Slicer for Batch-CVP: Making Lattice Hybrid Attacks Practical

https://eprint.iacr.org/2025/1910
1•nabla9•25m ago•0 comments

OpenAI Is Building a Banker

https://www.bloomberg.com/opinion/newsletters/2025-10-21/openai-is-building-a-banker
2•ioblomov•28m ago•1 comments

Modal editing is a weird historical contingency we have through sheer happensta

https://buttondown.com/hillelwayne/archive/modal-editing-is-a-weird-historical-contingency/
1•todsacerdoti•28m ago•0 comments

Show HN: I scraped 10k+ remote tech jobs into one feed

https://jobdit.co
1•imadbkr•28m ago•0 comments

'Sean Dummy': Musk and Duffy Brawl over the Future of NASA

https://www.politico.com/news/2025/10/21/elon-musk-sean-duffy-nasa-future-00616827
1•JumpCrisscross•28m ago•0 comments

Israeli flag found on hacked Malaysian water company website

https://aseannow.com/topic/1376426-israeli-flag-on-hacked-malaysian-website/
3•jataget•28m ago•0 comments

'It's PR, not the ER': Gen Z is resisting the workplace emergency

https://www.washingtonpost.com/business/2025/10/21/gen-z-workplace-emergencies/
5•nlawalker•30m ago•2 comments

Is Sora the beginning of the end for OpenAI?

https://calnewport.com/is-sora-the-beginning-of-the-end-for-openai/
95•warrenm•2h ago

Comments

zerosizedweasle•2h ago
"Whether Sora lasts or not, however, is somewhat beside the point. What catches my attention most is that OpenAI released this app in the first place.

It wasn’t that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white collar jobs might soon be automated by LLM-based tools."

That's the thing: this has all been predicated on the notion that AGI is next. That's what the money is chasing, and why it's sucked in astronomical investments. It's cool, but that's not why Nvidia is a multi-trillion-dollar company. It has that value because it was promised to be the brainpower behind AGI.

cratermoon•2h ago
What was predicted to be next: AGI

What we got next: porn

droptablemain•1h ago
to be fair we also got Stephen Hawking bungee jumping | snowboarding | wrestling | drag racing | ice skating | bull-fighting | half-pipe
c0balt•1h ago
To be very fair here, long before GPT-5, porn was already being produced with Stable Diffusion (and other open models). Civitai in particular was an open playground for this, with everything from NSFW LoRAs and prompts to fine-tuned models.

I had to work for a bit with SDXL models from there and the amount of porn on the site, before the recent cleanse, was astonishing.

knicholes•1h ago
Wait, we got porn?
benbayard•1h ago
Yes, Sam Altman said that "erotica" was coming to OpenAI. I don't think he's mentioned visual pornography though https://www.axios.com/2025/10/14/openai-chatgpt-erotica-ment...
blibble•1h ago
I can't imagine the republican party is going to be particularly happy about AI being used for mass porn generation
neonnoodle•1h ago
prompt records = mass blackmail generation
quantified•1h ago
Porn has driven everyday tech. Online payment systems, broadband adoption.

Porn (visual and written erotic expression) has been a normal part of the human experience for thousands of years, across different religions, cultures, and technological capabilities. We're humans.

There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.

Generating your own porn is definitely a huge market. Sharing it with others, and the follow-on concern of what's in that shared content, could lead to problems.

noir_lord•1h ago
> There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.

Attractive people in sexually fulfilling relationships still look at porn.

It's just human.

rchaud•53m ago
This is a meme I see online often (and in the show Silicon Valley), but I don't think it holds up in practice.

Re: payment systems, Visa and MC are notoriously unfriendly to porn vendors, sending them into the arms of crooked payment processors like Wirecard. PayPal grew to prominence because it was once the only way to buy and sell on eBay. Crypto went from nerd hobby to speculative asset, skipping the "medium of exchange for porn purchases" stage entirely.

As for broadband adoption, it's as likely to have occurred for MP3 piracy, and for being 200X faster than dialup, as for porn.

layer8•46m ago
At least the valuations make sense now. ;)
Karrot_Kream•1h ago
What signals have you seen that point to investment being predicated around AGI? Boosting Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers which increases demands for GPUs and justifies datacenter buildouts. That's a much more "sober" outlook than AGI.

In fact a fun thing to think about is what signals we could observe in markets that specifically call out AGI as the expectation as opposed to simple bullish outlook on inference usage.

port3000•1h ago
"Boosting Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers which increases demands for GPUs and justifies datacenter buildouts"

AI is already integrated into every single Google search, as well as Slack, Notion, Teams, Microsoft Office, Google Docs, Zoom, Google Meet, Figma, Hubspot, Zendesk, Freshdesk, Intercom, Basecamp, Evernote, Dropbox, Salesforce, Canva, Photoshop, Airtable, Gmail, LinkedIn, Shopify, Asana, Trello, Monday.com, ClickUp, Miro, Confluence, Jira, GitHub, Linear, Docusign, Workday

.....so where is this 100X increase in inference demand going to come from?

Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...

Karrot_Kream•5m ago
Integrations and inference costs aren't necessarily 1:1. Integrations can use more AI, reasoning models can cause token explosion, Jevons paradox can drive more inference tokens, and big businesses and government agencies (around the world, not just the US) can begin using more LLMs. I'm not sure integrations are that simple; a lot of the integrations that I know of are very basic.

> Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...

While I haven't read the article yet, if this is true then yes, this could be an indication that consumer-app-style inference (ChatGPT, Claude, etc.) is waning, which will put more pressure on industrial/tool inference uses to buoy up demand.

mola•1h ago
I think the motivation for someone like Altman is not AGI; it's power and influence. And when he wields billions, he has power; it doesn't really matter whether AGI is coming.
tmaly•49m ago
We were promised AGI and all we are getting is Bob Ross coloring on the walls of a Target store.

The app is fun to use for about 10 minutes then that is it.

Same goes for Grok imagine. All people want to do is generate NSFW content.

What happened to improving the world?

marshfarm•2h ago
It was a mistake from the beginning to use language as the basis for tokens and the embedding spaces between them to generate semantics. It wasn't thought out; it was a snowball of trial and error that went out of control.
bongodongobob•1h ago
Lol ok. We'll wait for your game changing technology, keep us posted.
kick_in_the_dor•1h ago
OP has a point. Are these type of embeddings the best way to model thought?
brokensegue•1h ago
it's the best way we've found so far?
mentalgear•1h ago
Certainly not the best, just the most practical / commercially sellable. And once a pattern like LLM text embedding is established as the way to "AGI", it takes years for other, more realistic approaches to gain funding again. Gary Marcus has written extensively about how legitimate AGI research is actually being set back years by the superficial LLM AGI hype.
marshfarm•1h ago
I think the best way would have been to assume thought is wordless (as the science tells us now), and images and probability (as symbols) are still arbitrary. That was the threshold to cross. Neither neurosymbolic, nor neuromorphic get there. Nor will any "world model" achieve anything as models are arbitrary.

Using the cybernetic to information theory to cog science to comp sci lineage was an increasingly limited set of tools to employ for intelligence.

Cybernetics should have been ported expansively to neurosci, then neurobio, then something more expansive like eco psychology or coordination dynamics. Instead of expanding, comp sci became too reductive.

The idea that a reductive system, one where anyone with a little math training could A/B test vast swaths of information gleaned from existing forms, could unlock highly evolved processes like thinking, reasoning, and action, and that we would define this as a path to intelligence, is quite strange. It defies scientific analysis. Intelligence is incredibly dense in biology, a vastly hidden, parallel process in which removing one faculty (like the emotions) makes the intelligence vanish into zombiehood.

Had we looked at that evidence, we'd have understood that language/tokens/embedding space couldn't possibly be a composite for all that parallel processing.

Sohcahtoa82•1h ago
I'm more concerned about Sora (and video-generating AI in general) being the final pour that cements us into our post-truth world.

People will be swayed by AI-generated videos while also being convinced real videos are AI.

I'm kinda terrified of the future of politics.

pmontra•1h ago
Maybe they'll have to tour and meet people in person because videos will be devoid of trust.

On the other hand, we want to believe in something, so we'll believe the video that suits our beliefs.

It's an interesting struggle.

bilekas•1h ago
Yeah, I'm just as annoyed with the AI slop that's coming out as anyone, but the next generation of voters won't believe a thing, so they will be pushed towards believing what they see in real life, like campaigners who go door to door. It could be a great thing, and ironically it would give meaning to the electoral system again!
kulahan•1h ago
Honestly I can't see a solution beyond concentrating power to highly localized regions. Lots more mayors, city councils, etc. so there is a real chance you can meet someone who represents you.

I don't fully believe anything I see on the internet that isn't backed up by at least two independent sources. Even then, I've probably been tricked at least once.

bilekas•1h ago
It may come to that, where the federal power is less influential and is there mainly to manage the overall services of the nation, and the local states, let's say, manage themselves; if I'm not wrong, that was kind of the original idea of the great experiment. It doesn't sound inherently wrong until you add tribalism into the mix, where people are not working with each other, but that seems to be the major push these days, at least that's the sentiment I get.

Would that change, maybe not, but maybe it would lessen the power grabs that some small few seem to gravitate towards.

I know if I wanted to influence the major elections, OpenAI, Google and Meta would be the first places I would go. That's a very small group of points of failure. Elections recently seem to be quite narrow, maybe they were before too though, but that kind of power is a silent soft power that really goes unchecked.

If people are more in tune with being misled, that power can slowly degrade.

tinfoilhatter•1h ago
Most members of the US Congress and the current presidential administration are already devoid of trust. I can't speak for other countries' governments, but it seems to be a fairly common situation.
Sohcahtoa82•1h ago
> Maybe they'll have to tour and meet people in person

That doesn't scale.

During campaign season, they're already running as many rallies as they can. Outside the campaign train, smaller Town Hall events only reach what, a couple hundred people, tops? And at best, they might change the minds of a couple dozen people.

EDIT: It's also worth mentioning that people generally don't seek to have their mind changed. Someone who is planning on voting for one candidate is extremely unlikely to go to a rally for the opposition.

ajuc•1h ago
Every breakthrough in information technology caused disruption in the historical sense (i.e. millions of deaths).

From the writing, through organized religion, printing press, radio and tv, internet and now ai.

Printing press and reformation wars is obvious, radio and totalitarianism is less known, internet and new populism is just starting to be recognized for what it is.

Eventually we'll get it regulated and adjust to it. But in the meantime it's going to be a wild ride.

bonoboTP•1h ago
We will adjust. And guess what, before photography, people managed somehow. People gossiped all sorts of stuff, spread malicious rumors and you had to guess what's a lie and what's not. The way people dealt with it was witness testimony and physical evidence.
sofixa•1h ago
> The way people dealt with it was witness testimony and physical evidence.

Which are inapplicable today.

> We will adjust

Will we? Maybe years later... per event. It's only now finally dawning on the majority of Britons that Brexit was a mistake they were lied into.

bonoboTP•1h ago
That has nothing to do with GenAI.
sofixa•16m ago
Yep, it's only made worse by it.
schnable•1h ago
> Maybe years later...

It is a concern... it took a few centuries for the printing press to spur the Catholic/Protestant wars and then finally resolve them.

pessimizer•1h ago
> Which are inapplicable today.

No, they are not.

wongarsu•1h ago
Brexit is a great example of how you can lie just by writing stuff on the side of a bus; no fake photos or videos required.
sofixa•46m ago
Exactly, it proves how easy it is to influence people. Which would be even easier with fake photos and videos.
hn_throw2025•1h ago
Oh, do shut up. I don’t know if you’re a Brit, but assuming you’re not, I have to inform you that the European project has never been particularly popular over here. But I’m sure you would prefer to believe that Cambridge Analytica achieved mind control over 17.2M people.

And if we are on the subject of being lied about, you might want to consider the deluge of (later ridiculed) Project Fear predictions that came to us not from some rando with a blog, but from senior Government figures like the Chancellor of the Exchequer.

A number on the side of a bus was gross, not net? Meh.

afavour•1h ago
> Oh, do shut up

OP is speaking the truth here.

> It's finally now dawning on the majority of Britons that Brexit was a mistake

As of June 2025, 56 percent of people in Great Britain thought that it was wrong to leave the European Union, compared with 31 percent who thought it was the right decision.

https://www.statista.com/statistics/987347/brexit-opinion-po...

Teever•1h ago
Of course we will adjust. That is a truism that is beside the point.

What matters is how many people will suffer during this adjustment period.

How many Rwandan genocides will happen because of this technology? How many lynchings or witch burnings?

bonoboTP•1h ago
It's not beside the point. You can lie with words, you can lie with cartoons and drawings and paintings. You can lie with movies.

We will collectively understand that pixels on a screen are like cartoons, or Photoshop on steroids.

mola•1h ago
Yes, that adjustment could well be monarchy.

I can't see how functioning democracy can survive without truth as shared grounds of discussion.

bonoboTP•1h ago
I don't think the US was a monarchy for its first hundred years.
JadeNB•1h ago
> > I can't see how functioning democracy can survive without truth as shared grounds of discussion.

> I don't think the US was a monarchy for its first hundred years.

Did the US not have truth as shared grounds of discussion for its first hundred years?

timschmidt•6m ago
https://en.wikipedia.org/wiki/Yellow_journalism has been a thing for a very long time.
safety1st•1h ago
The media's been lying to us for as long as it has existed.

Prior to the Internet the range of opinions which you could gain access to was far more limited. If the media were all in agreement on something it was really hard to find a counter-argument.

We're so far down the rabbit hole already of bots and astroturfing online, I doubt that AI deepfake videos are going to be the nail in the coffin for democracy.

The majority of the bot, deepfake and AI lies are going to be created by the people who have the most capital.

Just like they owned the traditional media and created the lies there.

afavour•1h ago
We'll have to adjust, certainly. But that doesn't mean nothing bad will happen.

> People gossiped all sorts of stuff, spread malicious rumors and you had to guess what's a lie and what's not.

And there were things like witch trials where people were burnt at the stake!

The resolution was a shared faith in central authority. Witness testimony and physical evidence don't scale to populations of millions, you have to trust in the person getting that evidence. And that trust is what's rapidly eroding these days. In politics, in police, in the courts.

daxfohl•1h ago
Most people's minds are already made up. All this does is add some confirmation bias so they can feel better about what they were already certain of. I don't think it fundamentally changes anyone's opinions.
sixothree•1h ago
The willingness of people to believe misinformation today is astounding. They are already choosing to surround themselves with voices of hate and anger. I don't want to see what's next for them.
CaptainOfCoit•1h ago
"See it to believe it" will once again be more important.
mentalgear•1h ago
Well, maybe it's less about Sora and more about how they push the world towards making their next product essential: WorldCoin [0], Altman's blockchain token system (the one with the alien orb) to scan everybody's biometric fingerprint and serve as the only Source of Truth for the world, controlled by one private company.

It's like the old saying: they create their own ecosystem. Circular stock market deals are the most obvious, but WorldCoin has been in the making for years, and Altman has often described it as the only alternative in a post-truth world (the one he himself is making, of course).

[0] https://www.forbes.com.au/news/innovation/worldcoin-crypto-p...

renewiltord•1h ago
This is a non-concern. You can see videos where a specific thing happens where people will describe a different thing happening. Not some eyewitness off memory. You can look at a video and there will be people on Reddit saying stuff that didn’t happen.

Then you can see any conversation about the video will be even more divorced from reality.

None of this requires video manipulation.

The majority of people are idiots on a grand scale. Just search any social media for PEMDAS and you will find hordes of people debating the value of 2 + 3 / 5 on all sorts of grounds. “It’s definitely 1. 2+3 =5 then by 5 is 1” stuff like that.
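(For what it's worth, standard operator precedence settles that example: division binds tighter than addition, so 2 + 3 / 5 is 2.6, not 1. A two-line check in Python, included here purely as illustration:)

```python
# Operator precedence: "/" binds tighter than "+",
# so 2 + 3 / 5 parses as 2 + (3 / 5), not (2 + 3) / 5.
with_precedence = 2 + 3 / 5
without_precedence = (2 + 3) / 5

print(with_precedence)     # 2.6
print(without_precedence)  # 1.0
```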

hombre_fatal•1h ago
The problem is that we're already post truth.

Just consider how a screenshot of a tweet or made-up headline already spreads like a wildfire: https://x.com/elonmusk/status/1980221072512635117

Sora involves far more work than what is required to spread misinfo.

Finally, people don't really care about the truth. They care about things that confirm their world view, or comfortable things. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.

nothrabannosir•1h ago
> Finally, people don't really care about the truth.

That same link has two “reader notes” about truth.

The lie is halfway around the world, etc., but that can also be explained by people's short-term instincts and reaction to outrage. It's not mutually exclusive with caring about truth.

Maybe I’m being uncharitable — did you mean something like “people don’t care about truth enough to let it stop them from giving into outrage”? Or..?

FloorEgg•1h ago
Assuming this is all true, what's the most optimistic view you can take looking ~20 years out?

How could all of this wind up leading to a much more fair, kind, sustainable and prosperous future?

Acknowledging risks is important, but where do YOU want all this to go?

Eisenstein•1h ago
As adults, we grew up with things that are either no longer relevant or that trigger the wrong responses from our heuristics.

But the kids who grow up with this stuff will just integrate it into their lives and proceed. The society that results from that will be something we cannot predict, as it will be alien to us. Whether it will be better or not -- probably not.

Humans evolved to spend most of their time with a small group of trusted people. By removing ourselves from that we have created all sorts of problems that we just aren't really that equipped to deal with. If this is solvable or not has yet to be seen.

mat_b•1h ago
> Finally, people don't really care about the truth. They care about things that confirm their world view. Or they dismiss things that are inconvenient for their tribe and focus on things that are inconvenient for other tribes.

People have always been this way though. The tribes are just organized differently in the internet age.

LexiMax•1h ago
I strongly suspect future generations are going to look back on the age of trying to cram the entire world into one of several shared social spaces and say "What were those idiots thinking?"
godelski•1h ago
Not to mention that the president posts AI slop frequently [0]. He even posted, and took down, a video promising people a "med bay", a fictional device that just cures everything [1].

[0] Trump as "King Trump" flying a jet that dumps shit onto protesters https://truthsocial.com/@realDonaldTrump/posts/1153982516232...

[1] https://www.snopes.com/news/2025/09/30/medbed-trump-ai-video...

code4life•59m ago
> Finally, people don't really care about the truth.

"What is truth?" (Pontius Pilate)

marshfarm•55m ago
We're probably post-narrative and post-lexical (words) but haven't become aware of what to possibly update these tools with. Post-truth is an abstraction rooted in the arbitrary.

Reality is specific: actions, materials. Words and language are arbitrary; they're processes, and they're simulations. They don't reference things, they represent them in metaphors. So sure, they have "meaning", but the meanings reduce the specifics in reality, which carry many times the meaning possibility, to linearity, cause and effect. That's not conforming to the reality that exists; that's severely reducing, even dumbing down, reality.

AnimalMuppet•36m ago
There is a reality which exists. Words have meaning. Words are more or less true as the meaning they convey conforms more or less well to the reality that exists. So no, truth is not rooted in the arbitrary. Quite the opposite.

Or at least, words had meaning. As we become post-lexical, it becomes harder to tell how well any sequence of words corresponds to reality. This is post truth - not that there is no reality, but that we no longer can judge the truth content of a statement. And that's a huge problem, both for our own thought life, and for society.

raw_anon_1111•44m ago
Oh well, I’ll put it out there. If people cared about verified provable truths, religion of any kind wouldn’t exist.
OJFord•1h ago
It just makes trusted/verified sources more important, and makes more people care about it. I wouldn't be terrified for politics so much as for the raised barrier to entry (and concentration) of the press: people will pay attention to the BBC, Guardian, Times, but not (even less so) independentjourno.com; those sources will be more sceptical of whistleblowers and freelance investigative contributions, etc.
Razengan•1h ago
Well, usually when there's a mass problem, some technology eventually eliminates it.

Like cars making horse manure in cities a non-issue (https://www.youtube.com/watch?v=w61d-NBqafM)

Maybe the solution to everybody lying would be some way to directly access a person's actual memories from their brains..

pdntspa•1h ago
Just yesterday I saw a Sora-generated video that purported to be someone filming a HIMARS missile failing, falling on stopped traffic, and exploding on the 5 in Camp Pendleton on Saturday. (IRL they were doing some kind of live-fire drill, and it did actually involve projectiles flying over the freeway.)

While there were some debris incidents IRL, the freeway was completely shut down per the governor's orders and nobody was harmed. (Had he not done this, that same debris might have hit motorists, so it was a good call on his part.)

You could see the "Sora" watermark in the video, but it was still popular enough to make it into my reels feed, which normally carries a different kind of content.

In this case whoever made that was sloppy enough to use a turnkey service like Sora. I can easily generate videos suitable for reels using my GPU and those programs don't (visibly) watermark.

We are in for dark times. Who knows how many AI-generated propaganda videos are slipping under the radar because the operator is actually half-skilled.

highwaylights•1h ago
I'm surprised this isn't a bigger concern given that:

For over a year now, we've been at the point where a video of anyone saying or doing anything can be generated by anyone and put on the Internet, and it's only becoming more convincing (and rapidly).

We've been living in a post-truth world for almost ten years, so it's now become normalized

Almost half of the population has been conditioned to believe anything that supports their political alignment

People will actually believe incredibly far-fetched things, and when the original video has been debunked, will still hold the belief because by that point the Internet has filled up with more garbage to support something they really want to believe

It's a weird time to be alive

cruffle_duffle•1h ago
Absolutely! And don’t kid yourself into thinking you are immune from this either. You can find support of basically anything you want to believe. And your friendly LLM will be more than happy to confirm it too!

Honestly it goes right back to philosophy and what truth even means. Is there even such a thing?

Eisenstein•1h ago
People forget that critical thinking means thinking critically about everything, even things you already think are true because they fit into your worldview.
overvale•1h ago
Other people have said this, but I don’t think it’s going to be any different than living in a world where people can spread rumors orally or print lies with a printing press. We’ve been dealing with those challenges for a long time.

Our ways of thinking and our courts understand that you can’t trust what people say and you can’t trust what you read. We’ve internalized that as a society.

Looking back, there seems to have been a brief period of time when you could actually trust photographs and videos. I think in the long run, this period of time will be seen as a historical anomaly, and video will be no more trusted than the printed or spoken word is today.

truelson•1h ago
We are having a huge amount of technological change (we haven't even learned how to handle social media as a society yet...). We're experiencing a global loss in trust, and things may fall apart for a bit until our society develops a better immune system to such ills. It is scary.

I think we may revert back to trusting only smaller groups of people, being skeptical of anything outside that group, becoming a bit more tribal. I hope without too many deleterious effects, but a lot could happen.

But humans, as a species, are survivors. And we, with our thinking machines will figure out ultimately how to deal with it all. I just hope the pain of this transition is not catastrophic.

robofanatic•1h ago
If you don't like anything digital (image/video/text), then "it's definitely AI generated." I guess AI has kind of killed the "democratization of news" introduced by social media.
csallen•1h ago
People have been tricked by counterfeits ever since the invention of writing (or even drawing) first made it possible for a person to communicate without being physically present.

At that moment, it simultaneously became possible to create "deep fakes" by simply forging a signature and tricking readers as to who authored the information.

And even before that, just with speaking, it was already possible to spread lies and misinformation, and such things happened frequently, often with disastrous consequences. Just think of all the witch hunts, false religions, and false rumors that have been spread through the history of mankind.

All of this is to say that mankind is quite used to dealing with information that has questionable authorship, authenticity, or accuracy. And mankind is quite used to suffering as a result of these challenges. It's nothing particularly new that it's moving into a new media format (video), especially considering that this is a relatively new format in the history of mankind to begin with.

(FWIW, the best defense against deep fakes has always been to pay attention to the source of information rather than just the content. A video about XYZ coming from XYZ's social media account is more likely to be accurate than if it comes from elsewhere. An article in the NYTimes that you read in the NYTimes is more likely to be authentic than a screenshot of an article you read from some social media account. Etc. It's not a perfect measure -- nothing is -- but I'd say it's the main reason we can have trust despite thousands of years of deep fakes.)

IMO the fact that social media -- and the internet in general -- have decentralized media while also decoupling it from geography is less precedented and more worrisome.

kazinator•1h ago
Here is the thing: we should never have trusted photographs and motion pictures.

Fakery isn't new; only the scale and quality at which it's becoming possible are.

kazinator•1h ago
Being convinced that real videos are AI is arguably a better position than being convinced that real videos convey the iron-clad truth.

Everything is manipulated or generated until proven otherwise.

anarticle•1h ago
I think their game-theoretic aim was to completely discredit video online. Just as we don't accept text or images as truth in general when we see them, we are being flooded with completely fake videos so people can shake the idea that video is truth.

It smells of e/acc and effective altruist ethics, which are not my favorite, but I don't work at OpenAI, so I don't have a say; I can only interpret.

I agree, but we will likely continue down this road...

shnp•1h ago
Flawless AI generated videos will result in video footage not being trusted.

This will simply take us back about 150 years to the time before the camera was common.

The transition period may be painful though.

roadside_picnic•54m ago
> the final pour that cements us into our post-truth world.

I find it a bit more concerning that anyone would not already understand how deeply we exist in a "post-truth" world. Every piece of information we've consumed for the last few decades has increasingly been shaped by algorithms optimizing someone else's target.

But the real danger of post-truth is when there is still enough of a veneer of truth that you can use distortions to effectively manipulate the public. Losing that veneer is essentially a collapse of the whole system, which will have consequences I don't think we can really understand.

The pre and early days of social media were riddled with various "leaks" of private photos and video. But what does it mean to leak a nude photo of a celebrity when you can just as easily generate a photo that is indistinguishable? The entire reason leaks like that were so popular is precisely because people wanted a glimpse into something real about the private life of these screen personalities (otherwise 'leaks' and 'nude scenes' would have the same value). As image generation reaches the limit, it will be impossible to ever really distinguish between voyeurism and imagination.

Similarly, we live in an age of mass surveillance, but what does surveillance footage mean when it can be trivially faked? Think of how radicalizing surveillance footage has been over the past few decades; consider, for example, the video of the Rodney King beating. Increasingly, such a video could not be trusted.

> I'm kinda terrified of the future of politics.

If you aren't already terrified enough of the present of politics, then I wouldn't be worried about what Sora brings us tomorrow. I honestly think what we'll see soon is not increasingly more powerful authoritarian systems, but the breakdown of systems of control everywhere. As these systems over-extend themselves they will collapse. Social media's power peaked a few years ago; Sora represents a further breakdown of these systems of control.

uvaursi•50m ago
Agreed, but this is mostly coming from people who would normally discredit you as a kook/conspiracy theorist for bashing MSM.

People forget, or didn’t see, all the staged catastrophes in the 90s that were pulled off the channel shortly after someone pointed out something obvious (e.g. dolls instead of human victims, wrong location footage, and so on).

But if you were there, and if you saw that, and then saw them pull it off the air and pretend like it didn’t happen for the rest of the day, then this AI thing is a nothingburger.

ozgrakkurt•46m ago
Or people will just stop believing random things they see online. You are underestimating people imo.
jmkni•42m ago
For sure.

I consider myself pretty on the ball when it comes to following this stuff, and even I've been caught off guard by some videos. I've seen videos on Reddit that I thought were real until I realised what subreddit I was on.

mentalgear•1h ago
The fact that OpenAI is pushing Sora, and that Altman is now even hinting at introducing "erotic roleplay"[0], makes it obvious: OpenAI has stopped being a real AI research lab. Now they're just another desperate player in a no-moat market, scrambling to become the primary platform of this hype era and lock users into their platform, just like Microsoft and Facebook did before in the PC and social eras.

[0] https://www.404media.co/openai-sam-altman-interview-chatgpt-...

gilfoy•1h ago
Why is it one or the other? They have enough money to do both.
mentalgear•1h ago
But if you've followed them, they have been focusing only on product for the last two years. The grand GPT-5, and the scaling laws from which all their LLM AGI hopes originated, turned out to be a dud.
sixothree•1h ago
The amount of animal abuse videos I've seen is a bit disturbing. It only demonstrates how careless they have been, possibly intentionally. I know people on HN have been describing the various reasons why OpenAI has not been a good player, but seeing it first-hand is visceral in a way that makes me concerned about them as a company.
bilekas•1h ago
I got the feeling when this was released that it was just another metric to justify further investment. They were guaranteed to have a lot of users, so they can turn around and say "well, we have two huge applications and we're just getting started." As we've seen, investors don't care too much about product quality, just large numbers.
JohnMakin•1h ago
It's made my non-Sora feeds nearly unconsumable, which I admitted to myself was probably a good thing.
schnable•1h ago
OpenAI is making a wild number of product plays at once, trying to leverage the value of the frontier model, brand value, and massive number of eyeballs they own. Sora is just one of many. Some will fail and maybe some will succeed.

It seems true that no company has used frontier models to create a product with business value commensurate with the cost of training and running them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.

truelson•1h ago
It's to their benefit to try everything right now. And quickly.
furyofantares•1h ago
> It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them.

Anthropic has said that every model they've trained has been profitable. Just not profitable enough to pay to train the next model.

I bet that's true for OpenAI's LLMs too, or would be if they reduced their free tier limits.

FloorEgg•1h ago
On some level they know that LLMs alone won't lead to AGI so they have to take a shotgun approach to diversify, and also because integrating some parts of all these paths is more likely to lead to the outcome they want than going all in on one.

Also because they have the funding to do it.

Reminds me a bit of the early days of Google, Microsoft, Xerox, etc.

This is just what the teenage stage of the top tech startup/company in an important new category looks like.

truelson•1h ago
I love Cal. I've really, really taken his thoughts on many things to heart for well over a decade now.

I think he's being a bit harsh here. And there are some confounding factors why.

Yes, we have an AI bubble. Yes, there's been a ton of hype that can't be met by reality in the short term. That's normal for large changes (and this is a large technological change). OpenAI may have some rough days ahead of it soon, but just like the internet, there's still a lot of signal here and a lot of work still to be done. Going through Suna+Sora videos just last night was still absolutely magical. There's still so much here.

But, OpenAI is also becoming, to use a Ben Thompson term, an aggregator. If it's where you go to solve many problems, advertising and more is a natural fit. It's not certain who comes out on top of the space (or if it can be shared), but there are huge rewards coming in future years, even after a bubble has popped.

Cal is having a very strong reaction here. I value it, but I wish it were more nuanced.

kulahan•1h ago
LLMs are only valuable to me at the moment explicitly because ads are not part of the scene. Maybe for a while others will use it, but it will be on the exact same treadmill as anything else ad-based: you will become the product, any recommendations are worthless trash, and it is oriented to show as many ads as possible, rather than providing useful content.

Ads destroy... pretty much everything they touch. It's a natural fit, but a terrible one.

righthand•1h ago
Any LLM generation tool is a spam generation tool. AI slop === spam. How big is the market for spam again?
mentalgear•1h ago
Unfortunately, due to the generation/verification ratio, spam and misinformation are indeed the lowest-hanging fruit. It's so much easier to generate LLM output than to verify it, which is probably why Google held back the transformer architecture.
kulahan•1h ago
This isn't even remotely true. In its current iteration, it's one of the best jumping-off points for getting up to speed on the basics of a new topic.
righthand•53m ago
So is spam if you only need a summarized version of the financial difficulties of princes in Nigeria. Or the kinds of people doctors can’t stand.

Your jumping off point is a cliff into a pile of leaves. It looks correct and comfy but will hurt your butt taking it for granted. You’re telling people to jump and saying “it’ll get better eventually just keep jumping and ignore the pain!”

kulahan•10m ago
Nope, spam is specifically unwanted. Also, I'm saying "jump in the leaves, it's fun if you don't try leaping in from a mile up" and you're saying "NO LEAVES KILL PEOPLE ALL THE TIME" lol.
rpjt•1h ago
I have no FOMO when it comes to Sora. None whatsoever. Authenticity is becoming more important day by day.
kbos87•1h ago
I could see Sora having a significant negative impact on short form video products like TikTok if they don’t quickly and accurately find a way to categorize its use. A steady stream of AI generated video content hurts the value prop of short form video in more than one way… It quickly desensitizes you and takes the surprise out that drives consumption of a lot of content. It also of course leaves you feeling like you can’t trust anything you see.
huevosabio•1h ago
Didn't explicitly think about this, but you're right. I already dismiss off the bat a lot of surprising video content because I don't trust it.
kulahan•1h ago
Do people on the dopamine drip really care how real their content is? Tons and tons of it is staged or modified anyways. I'm not sure there's anything Real™ on TikTok anyways.
wobfan•1h ago
Thought the same. The human-generated content is just as brainless as the AI-generated slop. People who watched the first will also watch the latter. This will not change a lot, I think.
bemmu•1h ago
I find Sora refreshing in that I don't have to worry about being tricked by something fake. It's just a fun multiplayer slopfest.
ToucanLoucan•1h ago
I mean, this is basically already status quo for YouTube Shorts. Tons and tons of shorts are AI-voice over either AI video or stock video covering some pithy thing in no actual depth, just piggybacking off of trending topics. And TikTok has had the same sort of content for even longer.

The "value" of short video content is already somewhat of a poor value proposition for this and other reasons. It lets you just obliterate time which can be handy in certain situations, but it also ruins your attention span.

gdulli•1h ago
> A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn’t be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling.

But also, a company that earnestly believes that it's about to disrupt most labor is going to want to grab as many of those bucks as possible before people no longer have income.

xwowsersx•1h ago
This take feels like classic Cal Newport pattern-matching: something looks vaguely "consumerish," so it must signal decline. It's a huge overreach.

Whether OpenAI becomes a truly massive, world-defining company is an open question, but it's not going to be decided by Sora. Treating a research-facing video generator as if it's OpenAI's attempt at the next TikTok is just missing the forest for the trees. Sora isn't a product bet, it's a technology demo or a testbed for video and image modeling. They threw a basic interface on top so people could actually use it. If they shut that interface down tomorrow, it wouldn't change a thing about the underlying progress in generative modeling.

You can argue that OpenAI lacks focus, or that they waste energy on these experiments. That's a reasonable discussion. But calling it "the beginning of the end" because of one side project is just unserious. Tech companies at the frontier run hundreds of little prototypes like this... most get abandoned, and that's fine.

The real question about OpenAI's future has nothing to do with Sora. It's whether large language and multimodal models eventually become a zero-margin commodity. If that happens, OpenAI's valuation problem isn't about branding or app strategy, it's about economics. Can they build a moat beyond "we have the biggest model"? Because that won't hold once open-source and fine-tuned domain models catch up.

So sure, Sora might be a distraction. But pretending that a minor interface launch is some great unraveling of OpenAI's trajectory is just lazy narrative-hunting.

ilickpoolalgae•1h ago
> It’s unclear whether this app will last. One major issue is the back-end expense of producing these videos. For now, OpenAI requires a paid ChatGPT Plus account to generate your own content. At the $20 tier, you can pump out up to 50 low-resolution videos per month. For a whopping $200 a month, you can generate more videos at higher resolutions. None of this compares favorably to competitors like TikTok, which are exponentially cheaper to operate and can therefore not only remain truly free for all users, but actually pay their creators.

fwiw, there's no requirement to have a subscription to create content.

sosodev•1h ago
This article makes the claim that OpenAI, and AI in general, is massively overhyped because OpenAI is looking to sell slop. I'm not sure I can agree with that basic premise.

Whether AGI does or does not materialize sometime soon doesn't matter. OpenAI, like every company who wants to raise massive amounts of money, needs to show huge growth numbers now. It seems like the unfortunate, simple truth is that slop is a growth hack.

micromacrofoot•1h ago
OpenAI is steering significant amounts of traffic away from Google, and ChatGPT is a fairly common name whose awareness extends beyond the company itself (or even what the word specifically means).

Not nearly on the level of "Kleenex" or "Google" as a term, but impressive given that other companies have spent decades trying to make a similar dent.

rchaud•45m ago
So what though? At best they will stuff it with ads like Google has done and become what is basically Bing w/ Copilot. That isn't going to pay back the $500 billion or however much they're trying to raise.
micromacrofoot•26m ago
it means I highly doubt it's the "beginning of the end" as postulated... unless it's a very very long end

as another example: Tesla has strung along a known overvaluation for a long time now, and there's no end in sight despite a number of blunders