
What Happens When AI-Generated Lies Are More Compelling Than the Truth?

https://behavioralscientist.org/what-happens-when-ai-generated-lies-are-more-compelling-than-the-truth/
74•the-mitr•1d ago

Comments

HPsquared•1d ago
There was a brief window where photography and videos became widespread so events could be documented and evidenced. Generative AI is drawing that period to an end, and we're returning to "who said / posted this" and having to trust the source rather than the image/message itself. It's a big regression.
rightbyte•1d ago
Videos and photos have been faked for a long time. Nothing has changed in that regard, except that the effort required has decreased somewhat.
jdiff•1d ago
"Somewhat" is doing an awful lot of heavy lifting. Fakery has gone from handcrafted in small batches (or at worst, produced through much effort in sweatshops) to fully automated and mass produced.
rightbyte•1d ago
Yeah, well, "a lot" might have been a better wording.

However, the real breaking point in my view was when shills and semi-automated bots became so prevalent that they could fool people into believing some consensus had changed. Faking photos doesn't add much to that, in my view.

jdiff•1d ago
Why do you believe that is the case? Aren't standard, naturally-occurring social media echo chambers far more likely?
rightbyte•2h ago
My gut says manipulation. I have no proof whatsoever.
notTooFarGone•1d ago
Only the price changed and that is everything.
zettapwn•1d ago
The open question is who’s worse: one major tyrant three thousand miles away or three thousand minor tyrants a mile away. If we consent to live under capitalism, then we’re destined to live in a world of lies. We always have; it’s all we’ve ever known.

What we don’t know is whether we’ll be worse or better off when the technology of forgery is available to random broke assholes as easily as it is to governments and companies. More slop seems bad, but if our immunity against bullshit improves, people might redevelop critical thinking skills and capitalism could end.

EGreg•1d ago
"Nothing has changed in that regard" really, nothing?

Does anyone other than me notice this common tendency on HN:

1. Blockchain use case mentioned? Someone must say "blockchain doesn't solve any problems" no matter what, always ignoring any actual substance of what's being done until pressed.

2. AI issue mentioned? Someone must say "nothing at all has changed, this problem has always existed", downplaying the vast leaps forward in scale and quality of output, ignoring any details until pressed.

It's like when people feel the need to preface "there is nothing wrong with capitalism, but" before critiquing capitalism. You will not criticize the profit.

It's not really a shibboleth. What's the name for this type of thing, groupthink?

cb321•1d ago
I would just call it a "pattern", but if you want to be more specific Re your 1/2, perhaps a "pattern of over-simplification". Over-simplification is, of course, basically human nature, not specific to HN, and something "scientists" of all stripes fight against in themselves. So, there may be a better one-worder.

EDIT: while oversimplification is essentially always a problem, nuance and persuasion are usually at odds. So, it's especially noticeable in contexts where people are trying to persuade each other. The best parts of HN are not that, but rather where they try to inform each other.

raincole•1d ago
People had been able to send messages to each other for a very long time. However, the internet still changed a lot of things.
EGreg•1d ago
I'll admit there were a few people doing the "HN thing" and saying "nothing really changed" when the Internet came out... but it was as a joke:

https://www.youtube.com/watch?v=fs-YpQj88ew&t=3m20s

thomasahle•1d ago
Things like the government MAHA report, full of fake references to papers that don't exist, probably didn't happen as blatantly before AI, even if people could easily have lied back then too. The ease with which AI lies (or "hallucinates") makes a qualitative difference.
tokai•1d ago
Fake citations have always been common. Before, though, they weren't straight-up fabrications: the cited papers usually exist, they just don't contain what the referring paper claims.

At least made-up citations are quick and easy to denounce.

SupremumLimit•1d ago
Your comment follows two persistent HN tropes: (1) ignoring the article, which deals precisely with why the cost of production matters, and (2) steadfastly refusing to recognise that quantity has a quality of its own - in this case a monumental reduction in production cost clearly leads to a tectonic reshaping of the information landscape.
brookst•1d ago
…but enough about the printing press.
_Algernon_•1d ago
Funny that you mention the printing press. One of the first books published using it was about how to identify witches which led to "witch" burnings in Europe. At some point society adapted to the misinformation (and we are better off for it), but a lot of innocent people suffered in the meantime.

https://www.independent.co.uk/news/science/archaeology/europ...

rightbyte•1d ago
I did read the article, and it does deal with fake photos being an old thing; it name-drops e.g. Stalin's former companions being removed one by one from photos.

"There was a brief window where photography and videos became widespread so events could be documented and evidenced."

Photos and videos have never been evidence in themselves. You have had to trust the photographer or publisher too, ever since the camera was invented.

Moving the cost from the big-money level to my neighbour's level makes scams more personalized, sure.

piva00•1d ago
I get bored of repeating myself, but I always think it's worth mentioning: scale and degree matter in almost all cases of an issue; otherwise we can reduce a lot of issues to "it has always happened before".

Localised fires are common in nature; a massive wildfire is just a fire at a different scale and degree. Lunatics raving about conspiracies were very common in public squares, in front of metro stations, anywhere with a large-ish flow of people; now they are very common on social media, reaching millions in an instant. Different scale and degree.

Sheer scale and degree can turn an issue into a completely different issue. Decreasing the effort required to fake a video to the point where a layperson cannot distinguish it from a real one is a massive difference. Before, you needed to be technically proficient with a number of tools and put in a lot of work to get a somewhat convincing result; now you just throw out a magical incantation of words and it spits out a video you can deceive someone with. It's a completely different problem.

Vilian•1d ago
It would be very interesting to require generative AI companies to log their created images for that purpose. It isn't gonna eliminate self-hosted alternatives, but it could knock down fake evidence faster.
jdiff•1d ago
I don't think this would solve the problem, unfortunately. AI can certainly be coerced into reproducing an existing image, leaving plenty of room for plausible deniability.
A_D_E_P_T•1d ago
Logging them would be rather cost-prohibitive, but images can be hashed and (invisibly) watermarked, and video can be hashed frame by frame, in such a way that each frame authenticates the one before it. Surely there's a way to durably mark generated content.
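That frame-by-frame chaining can be sketched in a few lines of Python (a toy illustration only, not any vendor's actual scheme; a real system would also sign the chain head with a private key):

```python
import hashlib

def chain_hashes(frames: list[bytes]) -> list[str]:
    """Hash each frame together with the previous hash, so altering
    any frame invalidates every hash that follows it."""
    prev = b""
    chain = []
    for frame in frames:
        digest = hashlib.sha256(prev + frame).hexdigest()
        chain.append(digest)
        prev = digest.encode()
    return chain

original = chain_hashes([b"frame0", b"frame1", b"frame2"])
tampered = chain_hashes([b"frame0", b"FORGED", b"frame2"])
assert original[0] == tampered[0]   # untouched prefix still matches
assert original[1] != tampered[1]   # the edit is detected here...
assert original[2] != tampered[2]   # ...and propagates to every later frame
```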
Iulioh•1d ago
It works until someone takes a screenshot of the image.
A_D_E_P_T•1d ago
The watermark would still be there.

You can take a photograph of the image with an old Polaroid and, if the resolution is high enough, the watermark would still be there.

brookst•1d ago
If it can be detected, it can be removed. Asking that false things carry claims of falseness is a dead end.
A_D_E_P_T•1d ago
Let's say you want to release a deepfake image. Your AI image, as generated, is not watermarked or hashed. Difficulty: story mode. You can release your image in two minutes flat.

Now let's say your AI deepfake image is watermarked/hashed upon generation. Difficulty: intermediate to hard. You might be able to get it done, but not without real effort, and removing all traces of the watermark without leaving artefacts might be so difficult as to be nearly impossible.

...So it doesn't eliminate the possibility of fakes, but it hugely raises the cost and effort bar. Only the most motivated would bother.

brookst•16h ago
Good thing the detection and removal can’t be automated and zero effort, eh?
airstrike•1d ago
If an image has no watermark, we could distrust it outright.

The problem then is repeated compression stripping the watermark inadvertently.

bogtog•1d ago
I think such watermarks are one of DeepMind's goals: https://deepmind.google/science/synthid/
A_D_E_P_T•1d ago
Interesting! I wonder how they watermark audio. The obvious way would be to do it in some frequency inaudible to humans (say 30kHz), but most conventional file formats can't handle that. (You'd probably need to make a modification that contains an additional ultra-high frequency.)
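The single-tone idea can be sketched with a Goertzel detector (all numbers hypothetical: a 96 kHz sample rate is assumed so a 30 kHz tone stays below Nyquist, and real audio watermarks are far more sophisticated than one tone):

```python
import math

RATE = 96_000      # hypothetical sample rate high enough to carry the mark
MARK_HZ = 30_000   # hypothetical ultrasonic watermark frequency

def embed_mark(samples, amplitude=0.01):
    """Add a faint ultrasonic tone on top of the audio."""
    return [s + amplitude * math.sin(2 * math.pi * MARK_HZ * n / RATE)
            for n, s in enumerate(samples)]

def goertzel_power(samples, freq):
    """Goertzel algorithm: signal power at a single frequency bin."""
    coeff = 2 * math.cos(2 * math.pi * freq / RATE)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# A plain 440 Hz tone vs. the same tone with the mark embedded:
audio = [math.sin(2 * math.pi * 440 * n / RATE) for n in range(4800)]
marked = embed_mark(audio)
assert goertzel_power(marked, MARK_HZ) > 100 * goertzel_power(audio, MARK_HZ)
```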
JimDabell•1d ago
This rules out any form of self-hosted generative AI. It’s not going to work. It needs to be the other way around; we need to prove authenticity.
thegreatpeter•1d ago
This has been a problem with the media for years as well.

You’re forced to trust the source and “read between the lines” or you’re reading something politically motivated.

Nothing new. I hope folks start trusting the source more.

neepi•1d ago
I await cryptographically signed photos coming out of cameras. Actually I’ve said that should be the case for the last 20 years.

You should be able to follow a chain of evidence towards an unprocessed raw image potentially.
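A toy sketch of capture-time signing, using a symmetric HMAC purely for illustration (a real camera would keep an asymmetric private key in a secure element, so verifiers wouldn't need the secret; all names here are hypothetical):

```python
import hashlib
import hmac

# Stand-in device secret; a real camera would hold an asymmetric
# private key in tamper-resistant hardware instead of a shared secret.
CAMERA_KEY = b"hypothetical-device-secret"

def sign_image(image_bytes: bytes) -> str:
    """Tag computed over the raw sensor data at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Check that the bytes are unmodified since capture."""
    return hmac.compare_digest(sign_image(image_bytes), tag)

raw = b"\x00\x01 raw sensor bytes"
tag = sign_image(raw)
assert verify_image(raw, tag)
assert not verify_image(raw + b" edited", tag)
```

Any crop or curve adjustment then needs its own signed record on top of the original tag, which is roughly the chain-of-evidence idea above.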

thih9•1d ago
There is ongoing work[1] and cameras that offer some support already[2].

[1]: https://en.m.wikipedia.org/wiki/Content_Authenticity_Initiat...

[2]: https://leica-camera.com/en-int/photography/content-credenti...

k2enemy•1d ago
There have been several initiatives in this direction. Here is one of the latest: https://contentauthenticity.org
holowoodman•1d ago
There have also been calls for a mechanism like this to prevent doctored photos of controversial newsworthy events from being spread by news agencies. But afaik the only thing that came of it was "we only pay for camera JPEGs; the only allowed changes are brightness, contrast, and color".

Edit: sister comments have links.

bayindirh•1d ago
I think Nikon's and Canon's past cameras signed photos by default, and you could get the verification software if you were a police department or similar.

Both manufacturers' keys got extracted from their cameras, rendering the feature moot. A more robust iteration is probably coming, possibly based on secure enclaves.

I'd love to have the feature and use it to verify that my images are not corrupted on disk.

neepi•1d ago
I used to work on archival. You want to store your images in TIFF and use FEC if you really care; that withstands significant bitrot. Signing will only tell you that the image is broken, not let you recover it. You can do fine with SHA-256 and 3-2-1 backup.
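The SHA-256 part of that advice amounts to keeping a digest manifest and re-checking it periodically; a minimal sketch (illustrative only, not any particular tool):

```python
import hashlib
import tempfile
from pathlib import Path

def build_manifest(folder: Path) -> dict[str, str]:
    """Record a SHA-256 digest per file; rerun later to detect bitrot."""
    return {
        str(p.relative_to(folder)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(folder.rglob("*"))
        if p.is_file()
    }

def find_corrupted(folder: Path, manifest: dict[str, str]) -> list[str]:
    """Names whose current digest no longer matches the manifest."""
    current = build_manifest(folder)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]

# Demo: flipping bytes in a file shows up on the next check.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "img.arw").write_bytes(b"raw sensor data")
    manifest = build_manifest(root)
    assert find_corrupted(root, manifest) == []
    (root / "img.arw").write_bytes(b"raw sensor dataX")  # simulate bitrot
    assert find_corrupted(root, manifest) == ["img.arw"]
```

As noted, this only detects corruption; recovery comes from the redundant copies, not the hashes.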
bayindirh•1d ago
Thanks for the heads-up; I didn't know about TIFF/FEC. However, I can't, because the images are .ARW (Sony's camera raw format). I currently keep multiple copies along with their hashes, and I check the disks periodically.

The mid-term plan is to build a UNRAID array with patrolling. I'll probably do backups with Borg on top of it and keep another "working copy", so I have multiple layers of redundancy with some auto-remediation.

UNRAID will keep disk-level consistency in good shape, patrolling will do checksum tests, and Borg can check its own backups against bitrot and repair them in most cases. The working copy will contain everything and can refill things if something unthinkable happens.

neepi•1d ago
You have to convert from whatever your raw format is to an archival format. The issue with raw files is that they are compressed, so any bitrot leads to more than losing a pixel or two here and there; sometimes you lose the entire file. TIFF is not compressed at all, and FEC allows single-pixel corrections to be made accurately. We had a metric shit ton of LTO tapes, changers, and online disk cache that handled all that.

I don't do that at home. I have 600 GB of Nikon raws. I keep one copy on my Mac, a Time Machine backup in the house (in another room) on a Samsung T7 Shield, and an off-site backup, which is a quarterly rsync job to another Samsung T7 Shield.

perching_aix•1d ago
Unfortunately that doesn't prove much. It all hinges on the cameras, which are just computers, operating in a verifiable fashion. They don't, as no consumer-available computer currently does, and I don't see that changing in the near future, for both technological and political reasons.

I put a lot of effort into thinking about what could make photos and videos properly trustable [0], but short of performing continuous modulation on the light sources whose light is eventually captured by the cameras' sensors, there's no other way; and even with that, I'm not sure how it would work with light actually doing the things it's supposed to (reflection, refraction, scatter, polarization, etc.). And that's not even mentioning that the primary light source everyone understandably relies on, the Sun, will never emit such a modulated signal.

So what will happen instead, I'm sure, is people deciding that they should not let perfect be the enemy of good, and moving on with this stuff anyway. Except this decision will not result in the necessary disclaimers reaching consumers, and so will begin the age of sham proofs and sham disprovals, and further radicalization. Here's hoping I'm wrong, and that these efforts will receive appropriate disclaimers when implemented and rolled out.

[0] related in a perhaps unexpected way: https://youtube.com/shorts/u-h5fHOcS88

dinfinity•1d ago
Things should simply be signed by people and institutions. People who were there or did extensive research vouching for authenticity and accuracy of representation.

Signing in hardware is nice, but you then still need to trust the company making the hardware to make it tamper proof and not tampered with by them. Additionally, such devices would have to be locked down to disallow things like manual setting of the time and date. It's a rabbit hole with more and more inconveniences and fewer benefits as you go down it.

Better to just go for reputation, and webs and chains of trust in people based approaches.

neepi•1d ago
Yeah, I'm not even talking about that side of things. I'm talking about what happens after the image leaves the camera. When someone shows you something, you should be able to say "I want to see the closest thing to the original image". Humans make a lot of changes to images, evidently to change perception; even a simple crop can change the story being told.

A good example: https://imgur.com/iqtFHHg

perching_aix•1d ago
Yeah, that's a good point. It kind of ties in with the video I linked. It also reminds me a bit of how in modern video containers the film grain is often stored separately and rendered onto the image during playback. One can imagine turning on a view where the modified parts of the image are highlighted, assisting speculation about what might have been altered and how.

I didn't mention this in my previous comment, but after thinking about it even further, I eventually arrived at the idea that an essentially arbitrary amount of meaning can be embedded in a photo or video, and that its being "verified" really just means "it is what I think I'm looking at" to me as a human. That's when I realized this task is likely unsolvable in absolute terms, and just how much more difficulty lies beyond the already hopelessly difficult crackpot idea of light-source signal modulation I came up with earlier. But yeah, as I suggested, these are just what I could think of, and perfection should indeed not be the enemy of good.

ramblerman•1d ago
“I thought about it and couldn’t come up with anything so this is a dead end.”

What a load of nonsense. A little bit of humility and a basic understanding of history should quickly make you realize that.

OPs point is far more interesting and deserves more discussion

perching_aix•1d ago
I don't understand your hostility. There's nothing in what I wrote or in the way I wrote that would suggest to a good faith reader that my opinion is absolute, that I consider my opinion fact, or that my knowledge is all-encompassing. In your own words I merely "thought about it" - a phrase that means exactly just that, and nothing more. In a way, it's literally a request for further thoughts by others to make up for what I might have not thought of; the exact admission you're looking for, just stated implicitly.

It's immensely frustrating to dress up a natural language sentence with enough precision to try and account for all and every bit of nuance, so you should anticipate and actively consider that I have missed or implied some.

Despite this, you clearly did not, and instead went into attack mode on the assumption(!) that I did miss them intentionally.

I'd recommend you take your own advice on intellectual humility before offering it to others.

palmotea•1d ago
> OPs point is far more interesting and deserves more discussion

The idea of "cryptographically signed photos coming out of cameras"? It's been discussed to death and is essentially a hope for magic technology to solve our social problems.

Also, it won't work technically: it's like asking for a perfect DRM implementation to be universally deployed everywhere.

hoseyor•1d ago
I will assume you simply mean cryptographically signed as evidence of having been taken by a camera?

You do realize that would still not prove that what the camera recorded was real, right? It seems like an obsolete idea you may not have fully reconsidered in a while.

But considering that same old idea, which dates from before the current state of things, I would also not be surprised if you imagined clandestinely including all kinds of other things in this cryptographic signature, like location, time and date, etc.; all of which can also be spoofed, and which is a tyrannical system's wet dream.

You don't think that would be immediately abused, as it was in other similar ways, like all the on-device image scanning that was injected as counter-CSAM "save the children" appeals... of course?

grues-dinner•1d ago
That fixes the problem of content being manipulated and the original then being discounted as fake when challenged.

It doesn't do a whole lot for something entirely fictional, unless it becomes so ubiquitous that anything unsigned is assumed to be fake rather than just made on a "normal" device. And even if you did manage to sign every photo, who's managing those keys? It's the difference between TLS telling you what you see is what the server sent and trusting the server to send the truth in the first place.

lnrd•1d ago
What prevents anyone from taking a signed picture by photographing a generated/altered picture? You just need to frame it perfectly and make sure there are no reflections that could tell it's a picture of a picture rather than a picture of the real world; very doable with a professional camera. Any details that would give it away would disappear just by lowering the resolution, which can be done in any camera.
grues-dinner•1d ago
With a bit (OK, quite a lot) of fiddling, you could probably remove the CCD and feed analog data into the controller, unless that also has a crypto system in it.

Presumably, if you were discovered, you would then "burn" the device, as its local key would be known to be used by bad actors; but now you need to check all photos against a blacklist. Which also means that if you buy a second-hand device, you might be buying a device with "untrusted" output.

salawat•1d ago
Any solution that requires cryptographic attestation or technical control of all endpoints is not one we should be pursuing. Think of it as a tainted primitive, not to be implemented.

The problem of trust is a human problem, and throwing technology at it just makes it worse.

grues-dinner•1d ago
I'm absolutely in agreement with that. The appetite for technical solutions to social problems seems utterly endless.

This particular idea has so many glaring problems that one might almost wonder if the motivation is less about "preventing misinformation" or "protecting democracy" or "thinking of the children" or whatever, and more about making it easier to prove you took the photo as you sue someone for using it without permission. But any technology promoted by Adobe couldn't be about DRM, so that's just crazy talk!

rightbyte•1d ago
Wouldn't compression make the signature invalid? Feels kinda easy to fake anyways.
RcouF1uZ4gsC•1d ago
Remember the fairy photo hoax that fooled Lewis Carroll?

There was never a time when authenticated photos and video could be trusted without knowing the source and circumstances.

j-bos•1d ago
It fooled Sir Arthur Conan Doyle, a motivated believer, not Lewis Carroll. People will believe what they want. Trust is the fundamental issue.
psychoslave•1d ago
Lewis Carroll was also fooled by it, when Churchill showed it to him. Abraham Lincoln, who was there at the moment it happened, confirmed that to me; I can show you the original email he sent me about it (bar the elements I'll have to hide due to top-secret information being included in the rest of the message).
const_cast•23h ago
Sure, but scale matters. 99% of images being fake is a different situation from 1% being fake. We can't just ignore that in favor of a "this always happened" argument.

Everything has always happened, so who cares? We need to go deeper than that. Many things that are perfectly a-okay today are only so because we do it on a small enough scale. Many perfectly fine things, if we scale them up, destroy the world. Literally.

Even something as simple as pirating, which I support, would melt all world economies if everyone did it with everything.

ThinkBeat•1d ago
It is more that it is becoming available to the masses. Doctoring photos in all manner of ways has been going on for decades.

What is happening now will raise awareness of it and, of course, make it a problem several orders of magnitude bigger.

I am sure there are large ongoing efforts to train AI to spot AI-generated photos, video, and writing.

A system like printer tracking dots¹ may already be in widespread use. Who would take the enormous amount of time to figure out whether some such thing is hiding somewhere in an LLM or related code?

¹ https://en.wikipedia.org/wiki/Printer_tracking_dots

perching_aix•1d ago
A mate of mine told me that ChatGPT had started injecting zero-width spaces into its output. I never fact-checked this, but even if it's not done now, I'm sure it's bound to happen. Same for other types of watermarks.
HPsquared•19h ago
Quite easy to remove something like that. Although ChatGPT has a writing style that's quite recognisable.
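For what it's worth, zero-width-character marks of that kind are indeed trivial to detect and strip; a quick sketch:

```python
# A few common zero-width / invisible code points sometimes suspected
# of being used as text watermarks.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def find_zero_width(text: str) -> list[int]:
    """Indices of suspected watermark characters."""
    return [i for i, ch in enumerate(text) if ch in ZERO_WIDTH]

def strip_zero_width(text: str) -> str:
    """Remove the invisible characters entirely."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

marked = "Hello\u200b world\u200c!"
assert find_zero_width(marked) == [5, 12]
assert strip_zero_width(marked) == "Hello world!"
```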
const_cast•23h ago
Everything is a function of scale. IMO saying "this always happened" means nothing.

Lying on a small scale is no big deal; lying on a big scale burns the world down. Me pirating Super Mario 64 means nothing; everyone pirating everything burns the economy down. Me shooting a glass Coke bottle is not noteworthy; nuclear warheads threaten humanity's existence.

Yes, AI fabrication is a huge problem that we have never experienced before, precisely because of how it can scale.

palmotea•1d ago
> Generative AI is drawing that period to an end, and we're returning to "who said / posted this" and having to trust the source rather than the image/message itself. It's a big regression.

Which just goes to show that one of the core tenets of techno-optimism is a lie.

The Amish (as I understand them) actually have the right idea: instead of wantonly adopting every technology that comes along, they assess how each one affects their values and culture, and pick the ones that help and reject the ones that harm.

AndrewKemendo•1d ago
Which implies that they have a foundational epistemological, teleological and “coherent” philosophy

Not something that can be said of most people. Worse, the number of affinity groups with long term coherence collapses into niche religions and regional cults.

There’s no free lunch

If you want structural stability, then you're going to have to give up individuality for the group vector; if you want individuality, you're not going to be able to take advantage of group benefits.

Humans are never going to figure out this balance, because we don't have any kind of foundational, coherent, universal epistemological grounding that can be universally accepted.

Good luck trying to get the world to even agree on the age of the earth

ToucanLoucan•1d ago
> The concern is valid. But there’s a deeper worry, one that involves the enlargement not of our gullibility but of our cynicism. OpenAI CEO Sam Altman has voiced worries about the use of AI to influence elections, but he says the threat will go away once “everyone gets used to it.”

> Some experts believe the opposite is true: The risks will grow as we acclimate ourselves to the presence of deepfakes. Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media. We may, in the words of the mathematics professor and deepfake authority Noah Giansiracusa, start to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence to one where our bias is to take nothing as evidence.

It is journalistic malpractice that these viewpoints are presented as though the former has anything to actually say. Of course Altman says it's no big deal; he's selling the fucking things. He is not an engineer, not a sociologist, not an expert at anything except some vague notion of businessness. Why is his opinion placed next to an expert's, even setting aside his flagrant and massive bias in the discussion at hand!?

"The owner of the orphan crushing machine says it'll be fine once we adjust to the sound of the orphans being crushed."

> “Every expert I spoke with,” reports an Atlantic writer, “said it’s a matter of when, not if, we reach a deepfake inflection point, after which forged videos and audio spreading false information will flood the internet.”

Depending on where you go, this is already true. Facebook is absolutely saturated in the shit. I have to constantly mute accounts and "show less like this" on Bluesky posts because it's just AI-generated, allegedly attractive women (I personally prefer the ones that look... well, human, but that's just me). Every online art community is either trying to remove the AI garbage or has just given up and categorized it, asking users uploading it to please tag it so their other users who don't want to see it can mute it; and of course they don't, because AI people are lazy.

Also I'd be remiss to not point out that this is, yet again, something I and many many others predicted back when this shit started getting going properly, and waddaya know.

That said, to be honest, I'm not that worried about the political angle. Political fakery, deep or otherwise, has always been highly believable and consumable for its intended audience because it's basically an amped-up version of political cartoons. Conservatives don't need their "Obama is destroying America!" images to be photorealistic to believe them; they just need them to stroke their confirmation bias. They're fine believing it even if it's flagrantly fake.

HPsquared•1d ago
It's the same idea as "lies spread faster than truth". Lies are often crafted to be especially juicy and salacious. Gossip has always been a problem; GenAI just extends this problem to other media.
EGreg•1d ago
What is it that makes some people on HN pop up to respond to any issue with AI with some version of "it was always this way, the problem always existed, AI does nothing fundamentally new"? When it's clear that huge leaps in scale, coordination, and output quality accessible to anyone are exactly what produce "new" use cases. Not to mention that swarms of agents are a fundamentally new thing that doesn't require humans in the loop at all.

Notice that this phenomenon didn't happen as much on HN for other technologies, e.g. when the iPhone came out, very few people said "well, this is nothing new, computers existed for a long time, this is just miniaturizing it and unplugging it from the wall."

supriyo-biswas•1d ago
> when the iPhone came out, very few people said "well, this is nothing new, computers existed for a long time, this is just miniaturizing it and unplugging it from the wall."

This website is, of course, notorious for its Dropbox comment, so regrettably the viewpoint you speak of is rather common.

captainbland•1d ago
I think the other issue is that those lies can be pumped out at inhuman speeds, and specifically targeted at particular audiences automatically using existing online audience marketing tools. So you can end up in a situation where the lie not only spreads quickly, but different audiences are receiving specialised versions of that lie which makes it particularly compelling to them, generated by AI tools, and totally responsive to real world events and narratives at minimal cost (compared to hiring humans to do the same job) - and this might happen at a really fine grained level.
drweevil•1d ago
“Falsehood flies, and the Truth comes limping after it.” -- Jonathan Swift
Ukv•1d ago
> It is journalistic malpractice that these viewpoints are presented as [...]

Seems fine to me when it's explicitly stated to be the viewpoint of the OpenAI CEO, and then countered by an expert opinion. It's already apparent that MRDA[0].

[0]: https://en.wikipedia.org/wiki/Well_he_would,_wouldn%27t_he%3...

Applejinx•1d ago
One point to bear in mind is that lies have proven more effective in the ABSENCE of evidence. I don't know how many times I've run across the idea of 'guess what, Portland (or New York City, or wherever) has burned to the ground because of the enemies!'

This gets believed not because there's evidence, but because it's making a statement about enemies that is believed.

So for whoever finds lies compelling, I don't think it's about evidence or lack of evidence. It's about why they want to believe in those enemies, and evidence just gets in the way.

EGreg•1d ago
Well, when the evidence can be faked, it becomes harder to claim it.

You've seen the "post-truth" attitudes already from the right, after the "fake news" of 2016 made them regard everything from climate change to vaccine data as faked data with an agenda. It's interesting because for decades or centuries the right wing was usually the one that believed in our existing institutions, and it was the left that was counter-cultural and anti-authoritarian.

titouanch•1d ago
Gen AI could have us headed towards a Cartesian crisis.
0xbadcafebee•1d ago
When something new is happening (or new information comes to light), and that thing has the potential to do harm, people come out of the woodwork to make doomsday predictions. Usually the doomsday predictions are wrong. A lot of these predictions involve technologies we all take for granted today.

Like the telephone. People were terrified when they first heard about it. How will I know who's really on the other end? Won't it ruin our lives, making it impossible to leave the house, because people will be calling at all hours? Will it electrocute me? Will it burn down my house? Will evil spirits be attracted to it, and seep out of the receiver? (that was a real concern)

It turns out we just adapt to technology and then forget we were ever concerned. Sometimes that's not a great thing... but it doesn't bring about doomsday.

altcognito•1d ago
New technology also usually brings new and more challenging complications. Nuclear energy, combustion engines, electricity, and the internet all came with huge new problems that we are still dealing with today. Some of those problems are so severe they threaten human survivability.

Even your example contains an unsolved, and serious problem. We still don’t know who is on the other end of the phone.

nthingtohide•1d ago
Foresee the day when AI becomes so good at making a deepfake that the people who believed fake news was true will no longer think their fake news is true, because they'll think their fake news was faked by AI. - Neil deGrasse Tyson
metalman•1d ago
"Lies" are always more compelling than the truth. Truth = what is,

vs. a whole wide range of "wouldn't it be nice... if", "can't we just...", and the massive background of myth, legend, fantasy, dreaming, etc. So into this we have created a mega-capable, machine-rendered virtual sur-reality... much like the ancient myths and legends where Odysseus sits at table before a fantastic feast... nothing is as it seems.

indest•1d ago
lies have always been more compelling.
heresie-dabord•1d ago
From TFA:

    Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square. The reason extraordinarily strange conspiracy theories have spread so widely in recent years may have less to do with the nature of credulity than with the nature of faith. 
The reason why strange and even outright deranged notions have spread so widely is that they have been monetised. It is a Gibberish Economy.
altcognito•1d ago
Capitalism found the least amount of energy to produce a viable product.
heresie-dabord•19h ago
"Minimally viable utterance."
ThinkBeat•1d ago
Then the AI in question is qualified to become a politician. With Congress these days it can't get much worse.
IshKebab•1d ago
Interesting question, but this article completely failed to answer it and really went off the rails halfway through.

Ars answered this much much better:

https://arstechnica.com/ai/2025/05/ai-video-just-took-a-star...

> As these tools become more powerful and affordable, skepticism in media will grow. But the question isn't whether we can trust what we see and hear. It's whether we can trust who's showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.

intended•1d ago
So this is an actual problem I am considering, and I have an approach. Talking essentially about our inability to know:

1) if a piece of content is a fact or not.

2) if the person you are interacting with is a human or a bot.

I think it's easier if you take the most nihilistic view possible, as opposed to the optimistic or average case:

1) Everything is content. Information/Facts are simply a privileged version of content.

2) Assume all participants are bots.

The benefit is that we reduce the total amount of issues we are dealing with. We don’t focus on the variants of content being shared, or conversation partners, but on the invariant processes, rules and norms we agree upon.

So what we can't agree on may be facts, but what we can agree on is that the norms or process were followed.

The alternative, holding on to some semblance of or desire to assume that people are people and the inputs are factual, was possible to an extent in an earlier era. However, the issue is that at this juncture our easy BS filters are insufficient, and verification is increasingly computationally, economically, and energetically taxing.

I’m sure others have had better ideas, but this is the distance I have been able to travel and the journey I can articulate.

Side note

There’s a few Harvard professors who have written about misinformation, pointing out that total amount of misinfo consumed isn’t that high. Essentially : that demand for misinformation is limited. I find that this is true, but sheer quantity isnt the problem with misinfo, its amplification by trusted sources.

What GenAI does is different, it does make it easier to make more content, but it also makes it easier to make better quality content.

Today it’s not an issue of the quantity of misinformation going up, it’s an issue of our processes to figure out BS getting fooled.

This is all putting pressure on fact finding processes, and largely making facts expensive information products - compared to “content” that looks good enough.

keiferski•1d ago
Can someone tell me why this idea isn’t workable and wouldn’t solve most deepfake issues?

All camera and phone manufacturers embed a code in each photo / video they produce.

All social media channels prioritize content that has these codes, and either block or de-prioritize content without them.

Result: the internet is filled with a vast amount of AI generated nonsense, but it’s mostly not treated as anything but entertainment. Any real content can be traced back to physical cameras.

The main issue I see is if the validation code is hacked at the camera level. But that is at least as preventable as, say, preventing printers from counterfeiting money.
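For what it's worth, the scheme described above can be sketched in a few lines. This is only a toy: it uses an HMAC with a shared device secret as a stand-in for the asymmetric signatures a real provenance system (e.g. C2PA) would use, and every name in it is hypothetical.

```python
import hashlib
import hmac

# Hypothetical per-device secret. A real scheme would use an asymmetric
# key pair in the camera's secure element, so verifiers never hold the
# signing key; HMAC is used here only to keep the sketch stdlib-only.
DEVICE_KEY = b"example-device-secret"

def sign_capture(image_bytes: bytes) -> str:
    """Camera firmware side: emit a tag binding the pixels to this device."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, tag: str) -> bool:
    """Platform side: accept only content whose tag checks out."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

photo = b"...raw sensor data..."
tag = sign_capture(photo)
assert verify_capture(photo, tag)             # untouched capture passes
assert not verify_capture(photo + b"x", tag)  # any edit breaks the tag
```

The "photo of a screen" attack raised below is exactly what this sketch can't catch: the signature only proves which sensor captured the bytes, not what was in front of the lens.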

lnrd•1d ago
So all I would have to do to make a "legitimate" fake picture is to generate it, print it, take a signed picture of the print with a camera and then upload it on the web?

With the right setup I could probably just take a picture of the screen directly, making it even easier (and enabling it for videos too).

keiferski•1d ago
Presumably the camera would have to incorporate GPS or other systems to ensure that you aren’t just taking a photo of a screen.

But yes that does add a wrinkle.

_Algernon_•1d ago
For one it puts a lot of trust into those corporations which they have not earned. Corporations exist to maximize profit. They already sell advertisement to foreign hostile actors. Why wouldn't they do the same with these types of codes for enough payment?

Also it gives them a lot of power to frame anyone for anything. How do you defend yourself against a cryptographic framed "proof" that ties you to a crime? At least when no evidence is trustworthy, the courts have to take this possibility into account. That's not the case when it's only used in rare cases.

keiferski•1d ago
But this trust is already given to corporations that make printers to prevent counterfeiters. By your logic they would already have sold out to hostile actors.

To your second point: it’s not that my method guarantees absolutely flawless photos. It just makes it more likely to be secure.

loudmax•1d ago
I think placing trust in technology is misguided. Bad actors will always figure out a way to abuse the machines.

What we can do is place trust in particular institutions, and use technology to verify authenticity. Not verify that what the institution is saying is true, but verify that they really are standing by this claim.

This is challenging because no institution is going to be 100% trustworthy all of the time in perpetuity. But you can make reasonable assessments about which institutions appear more credible. Then it's a matter of policing your own biases.

keiferski•1d ago
Sure, but I don't see how that could be implemented in the world in a way other than how I outlined, considering that photos/videos can be created by anyone. There is no central media institution that can control all sharing of content.

It would seem to me that the institution to place your trust in would be the one that implements and verifies the coding system I discussed.

loudmax•22h ago
When a news organization hires a journalist to file a report, they put their reputation on the line in the hands of the journalist.

As a consumer of news, you put your trust in the institution to have a reasonable vetting process, and also a process to retract a story if it's later shown to be false.

None of this is completely foolproof. It relies on institutions taking a long-term view, and people working out which individuals and institutions are worthy of their trust. This isn't like the blockchain, where you have mathematical proof of veracity that's as strong as your encryption algorithm. I don't see how that level of proof is achievable in the real world.

keiferski•22h ago
But that still doesn't factor in social media at all, which is ultimately where the majority of people are getting their information. I just don't see what institution you're relying on in the case of real people sharing videos that may or may not be fake.
dsign•1d ago
One thing is to see Sam Altman peddling his wares, another altogether is to hear politicians and big corp executives treating AI as if it were something that should be adopted post-haste in the name of progress. I don't get it.
ImHereToVote•1d ago
Lies are already more compelling than the truth. The difference is whether you like rebel lies, or establishment lies.
IanCal•1d ago
> OpenAI CEO Sam Altman has voiced worries about the use of AI to influence elections, but he says the threat will go away once “everyone gets used to it.”

Then he's lying or a complete moron.

People have been able to fake things for ages, since you can entirely fabricate any text because you can just type it. The same as you can pass on any rumour by speaking it.

People are fundamentally aware of this. Nobody is confused about whether or not you can make up "X said Y".

*AND YET* people fall for this stuff all the time. Humans are bad at this, and the ways in which we are bad at this are extensively documented.

The idea that once you can very quickly and cheaply generate fake images, people will somehow treat them with drastically more scepticism than text or speech is insane.

Frankly, the side I see as more likely is what's in the article: that just as real reporting is dismissed as fake news, legitimate images will be decried as AI if they don't fit someone's narrative. It's a super easy get-out clause mentally. We see this now with people commenting that someone else's comment simply cannot be from a real person because they used the word "delve", or structured things, or had an em dash. Hank Green has a video I can't find now where people looked at a SpaceX explosion and said it was fake and AI and CGI, because it was filmed well with a drone, so it looked just like fake content.

JimDabell•1d ago
The good news is that AI has been shown to be effective at debunking things too:

> Durably reducing conspiracy beliefs through dialogues with AI

> Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.

— https://pubmed.ncbi.nlm.nih.gov/39264999/

A huge part of the problem with disinformation on the Internet is that it takes far more work to debunk a lie than it does to spread one. AI seems to be an opportunity to at least level the playing field. It’s always been easy to spread lies online. Now maybe it will be easy to catch and correct them.

Elaris•1d ago
This got me thinking. Sometimes it feels like a story doesn’t have to be true as long as it feels right, people believe it. And if it spreads fast and sounds good, it becomes “truth” for many. That’s kind of scary. Now that anyone can easily make something look real and convincing, it’s harder to tell what’s real anymore. Maybe the best thing we can do is slow down a bit, ask more questions, and not trust something just because it fits what we already believe.
alpaca128•1d ago
True, people tend to believe things more if they are stated more confidently. That's basically how many things from scams to conspiracy theories and even cults work. Now with LLMs you have machines that can produce thousands of words per hour in flawless grammar and sounding as sophisticated and confident as you want.

The famous quote "A lie can travel halfway around the world while the truth is putting on its shoes" is older than the internet, so this asymmetry was already bad enough back then and whoever coined the quote couldn't have imagined how much farther it would shift.

bluebarbet•1d ago
Article raised interesting questions but suggested no answers.

To the extent there's a technical fix to this problem of mass gaslighting, surely it's cryptography.

Specifically, the domain name system and TLS certificates, functioning on the web-of-trust principle. It's already up and running. It's good enough to lock down money, so it should be enough to suggest whether a video is legit.

We decide which entities are trustworthy (say: reuters.com, cbc.ca), they vouch for the veracity of all their content, and the rest we assume is fake slop. Done.
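That vouching model could be sketched as a trust list keyed by publisher. A minimal sketch under stated assumptions: the trust list and domain names are just illustrative, and a real system would anchor vouchers in TLS certificates rather than this in-memory table.

```python
import hashlib

# Hypothetical trust list: publishers we've decided to believe,
# each mapped to the set of content hashes they currently vouch for.
TRUSTED_VOUCHERS: dict[str, set[str]] = {
    "reuters.com": set(),
    "cbc.ca": set(),
}

def publish(domain: str, video_bytes: bytes) -> None:
    """A trusted outlet vouches for a piece of content by hash."""
    TRUSTED_VOUCHERS[domain].add(hashlib.sha256(video_bytes).hexdigest())

def is_vouched(video_bytes: bytes) -> bool:
    """Anything no trusted publisher vouches for is assumed to be slop."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    return any(digest in hashes for hashes in TRUSTED_VOUCHERS.values())

clip = b"...video bytes..."
assert not is_vouched(clip)    # unknown content: assumed fake
publish("reuters.com", clip)
assert is_vouched(clip)        # vouched-for content: treated as legit
```

Note this only shifts the question from "is the video real?" to "do I trust this publisher?", which is exactly the trade-off the comment proposes.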

logic_node•1d ago
It is unsettling how AI can create lies that are more persuasive than the truth. This truly challenges our ability to differentiate fact from fiction in the digital age.
psychoslave•1d ago
>Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media.

Hmm, that's not totally new stuff. I mean, anyone taking the time to learn how mass media work should already be acquainted with the fact that anything you get from them is either bare propaganda or some eye-catching bait, void of any information, designed to attract an audience.

There is no way an independent professional can make a living while staying true to the will to provide relevant reporting without falling into this or that truism. The audience is already captured by other giant dumping schemes.

Think "manufacturing consent".

So the big change that might occur here is in the distribution of how many people believe what gets thrown at their faces.

Also, previously the only reliable information in a publication was that the utterer of a sentence knew the words emitted, or at least had the ability to utter its form. Now you still don't know whether the utterer made any sense of the sentence spoken, but you don't even know whether the person could actually utter it at all, let alone whether they were ever aware of the associated concepts and notions.

drweevil•1d ago
What is a lie, and what is the truth? These are age-old questions, not some recent phenomenon. The Spanish-American War was at least in part precipitated by the infamous "yellow journalism" of the time. Propaganda and disinformation played a large role in the events leading to and including WWII. And what is The Truth? Using photography as an example, lies can easily be told by omission, even without any dark-room chicanery. What is the photographer's subject? What is off-frame? Which photographs did the editor select for publication? What story is not being told?

If anything, the idea that one can take information as "true" based on trust alone (what does the photograph show, what did the New York Times publish) seems to be a recent aberration. AI will be doing us a favor if it destroys this notion, and encourages people to be more skeptical, and to sharpen their critical thinking skills. Forget about what is "true" or "false." Information may be believed on a provisional basis. But it must "make sense" (a whole subject in itself), and it must be corroborated. If not, it is not actionable. There simply is no silver bullet, AI or no AI. Iain M Bank's Culture series provides an interesting treatment of this subject, if anyone is interested.