> Some experts believe the opposite is true: The risks will grow as we acclimate ourselves to the presence of deepfakes. Once we take the counterfeits for granted, we may begin doubting the veracity of all the information presented to us through media. We may, in the words of the mathematics professor and deepfake authority Noah Giansiracusa, start to “doubt reality itself.” We’ll go from a world where our bias was to take everything as evidence to one where our bias is to take nothing as evidence.
It is journalistic malpractice that these viewpoints are presented as though the former has anything of substance to say. Of course Altman says it's no big deal, he's selling the fucking things. He is not an engineer, he is not a sociologist, he is not an expert at anything except some vague notion of business-ness. Why is his opinion placed next to an expert's, even setting aside his flagrant and massive bias in the discussion at hand!?
"The owner of the orphan crushing machine says it'll be fine once we adjust to the sound of the orphans being crushed."
> “Every expert I spoke with,” reports an Atlantic writer, “said it’s a matter of when, not if, we reach a deepfake inflection point, after which forged videos and audio spreading false information will flood the internet.”
Depending on where you go, this is already true. Facebook is absolutely saturated in the shit. I have to constantly mute accounts and "show less like" on Bluesky posts because it's just AI-generated allegedly-attractive women (I personally prefer the ones that look... well, human, but that's just me). Every online art community is either trying to remove the AI garbage or has just given up and categorized it, asking users uploading it to please tag it so other users who don't want to see it can mute it. And of course they don't, because AI people are lazy.
Also, I'd be remiss not to point out that this is, yet again, something I and many, many others predicted back when this shit really got going, and waddaya know.
That said, to be honest, I'm not that worried about the political angle. The politics of fakery, deep or otherwise, has always meant it's highly believable and consumable for the audience it's intended for, because it's basically an amped-up version of political cartoons. Conservatives don't need their "Obama is destroying America!" images to be photorealistic to believe them; they just need them to stroke their confirmation bias. They're fine believing it even if it's flagrantly fake.
Notice that this phenomenon didn't happen as much on HN for other technologies, e.g. when the iPhone came out, very few people said "well, this is nothing new, computers existed for a long time, this is just miniaturizing it and unplugging it from the wall."
This website is, of course, notorious for its Dropbox comment, so regrettably the viewpoint you speak of is rather common.
Seems fine to me when it's explicitly stated to be the viewpoint of the OpenAI CEO, and then countered by an expert opinion. It's already apparent that MRDA[0].
[0]: https://en.wikipedia.org/wiki/Well_he_would,_wouldn%27t_he%3...
This gets believed not because there's evidence, but because it's making a statement about enemies that is believed.
So for whoever finds lies compelling, I don't think it's about evidence or lack of evidence. It's about why they want to believe in those enemies, and evidence just gets in the way.
You've seen the "post-truth" attitudes already from the right: after the "fake news" of 2016, they regard everything from climate change to vaccine data as faked data with an agenda. It's interesting, because for decades or centuries the right wing was usually the one that believed in our existing institutions, while it was the left that was counter-cultural and anti-authoritarian.
Like the telephone. People were terrified when they first heard about it. How will I know who's really on the other end? Won't it ruin our lives, making it impossible to leave the house, because people will be calling at all hours? Will it electrocute me? Will it burn down my house? Will evil spirits be attracted to it, and seep out of the receiver? (that was a real concern)
It turns out we just adapt to technology and then forget we were ever concerned. Sometimes that's not a great thing... but it doesn't bring about doomsday.
Even your example contains an unsolved and serious problem. We still don't know who is on the other end of the phone.
vs. a whole wide range of "wouldn't it be nice if...", "can't we just...", and the massive background of myth, legend, fantasy, and dreaming. Into this we have created a mega-capable, machine-rendered virtual sur-reality, much like the ancient myths and legends where Odysseus sits down to a fantastic feast and nothing is as it seems.
Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square. The reason extraordinarily strange conspiracy theories have spread so widely in recent years may have less to do with the nature of credulity than with the nature of faith.
The reason why strange and even outright deranged notions have spread so widely is that they have been monetised. It is a Gibberish Economy.

Ars answered this much, much better:
https://arstechnica.com/ai/2025/05/ai-video-just-took-a-star...
> As these tools become more powerful and affordable, skepticism in media will grow. But the question isn't whether we can trust what we see and hear. It's whether we can trust who's showing it to us. In an era where anyone can generate a realistic video of anything for $1.50, the credibility of the source becomes our primary anchor to truth. The medium was never the message—the messenger always was.
Two problems get conflated here:
1) whether a piece of content is a fact or not.
2) whether the person you are interacting with is a human or a bot.
I think it's easier if you take the most nihilistic view possible, as opposed to the optimistic or average case:
1) Everything is content. Information/Facts are simply a privileged version of content.
2) Assume all participants are bots.
The benefit is that we reduce the total number of issues we are dealing with. We don't focus on the variants of content being shared, or on conversation partners, but on the invariant processes, rules, and norms we agree upon.
What we can't agree on may be facts, but what we can agree on is whether the norms or process were followed.
The alternative, holding on to some semblance of the assumption that people are people and the inputs are factual, was possible to an extent in an earlier era. The issue is that at this juncture, our easy BS filters are insufficient, and verification is increasingly computationally, economically, and energetically taxing.
I’m sure others have had better ideas, but this is the distance I have been able to travel and the journey I can articulate.
Side note
There are a few Harvard professors who have written about misinformation, pointing out that the total amount of misinfo consumed isn't that high; essentially, demand for misinformation is limited. I find that this is true, but sheer quantity isn't the problem with misinfo, it's amplification by trusted sources.
What GenAI does is different: it makes it easier to make more content, but it also makes it easier to make better-quality content.
Today it's not an issue of the quantity of misinformation going up; it's an issue of our processes for detecting BS getting fooled.
This is all putting pressure on fact-finding processes, and largely making facts expensive information products, compared to "content" that looks good enough.
Imagine all camera and phone manufacturers embed a code in each photo/video they produce.
All social media channels prioritize content that has these codes, and either block or de-prioritize content without them.
Result: the internet is filled with a vast amount of AI generated nonsense, but it’s mostly not treated as anything but entertainment. Any real content can be traced back to physical cameras.
The main issue I see is if the validation code is hacked at the camera level. But that is at least as preventable as, say, preventing printers from counterfeiting money.
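For concreteness, here's a minimal sketch of what that per-device signing and platform-side verification could look like, in Python with the `cryptography` package. Everything here is illustrative, not any manufacturer's actual scheme; in a real camera the private key would live in a secure element and the public key would be published by the manufacturer:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Camera side: sign a digest of the captured bytes with the per-device key.
device_key = Ed25519PrivateKey.generate()   # stand-in for a secure-element key
image_bytes = b"...raw sensor data..."      # stand-in for a captured photo
signature = device_key.sign(hashlib.sha256(image_bytes).digest())

# Platform side: verify against the manufacturer-published public key
# before deciding whether to prioritize the upload.
public_key = device_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("valid device signature: treat as camera-originated")
except InvalidSignature:
    print("missing/invalid signature: deprioritize or label unverified")
```

The crypto itself is the easy part; the open question, as others note below, is keeping that private key inside the camera.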
With the right setup I could probably just take a picture of the screen directly, making it even easier (and enabling it for videos too).
But yes that does add a wrinkle.
Also, it gives them a lot of power to frame anyone for anything. How do you defend yourself against a cryptographically signed, fabricated "proof" that ties you to a crime? At least when no evidence is trustworthy, the courts have to take that possibility into account. That's not the case when such forgery is assumed to be rare.
To your second point: it’s not that my method guarantees absolutely flawless photos. It just makes it more likely to be secure.
What we can do is place trust in particular institutions, and use technology to verify authenticity. Not verify that what the institution is saying is true, but verify that they really are standing by this claim.
This is challenging because no institution is going to be 100% trustworthy all of the time in perpetuity. But you can make reasonable assessments about which institutions appear more credible. Then it's a matter of policing your own biases.
It would seem to me that the institution to place your trust in would be the one that implements and verifies the coding system I discussed.
As a consumer of news, you put your trust in the institution to have a reasonable vetting process, and also a process to retract a story if it's later shown to be false.
None of this is completely foolproof. It relies on institutions taking a long-term view, and on people working out which individuals and institutions are worthy of their trust. This isn't like the blockchain, where you have mathematical proof of veracity that's as strong as your encryption algorithm. I don't see how that level of proof is achievable in the real world.
Then he's lying or a complete moron.
People have been able to fake things for ages: you can fabricate any text because you can just type it, the same as you can pass on any rumour by speaking it.
People are fundamentally aware of this. Nobody is confused about whether or not you can make up "X said Y".
*AND YET* people fall for this stuff all the time. Humans are bad at this and the ways in which we are bad at this is extensively documented.
The idea that once you can very quickly and cheaply generate fake images that somehow people will treat it with drastically more scepticism than text or talking is insane.
Frankly, the side I see as more likely is what's in the article: just as real reporting is dismissed as fake news, legitimate images will be decried as AI if they don't fit someone's narrative. It's a super easy get-out clause, mentally. We see this now with people commenting that someone else's comment simply cannot be from a real person because they used the word "delve", or structured things, or had an em dash. Hank Green has a video I can't find now where people looked at a SpaceX explosion and said it was fake and AI and CGI, because it was filmed well with a drone, so it looks just like fake content.
> Durably reducing conspiracy beliefs through dialogues with AI
> Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.
— https://pubmed.ncbi.nlm.nih.gov/39264999/
A huge part of the problem with disinformation on the Internet is that it takes far more work to debunk a lie than it does to spread one. AI seems to be an opportunity to at least level the playing field. It’s always been easy to spread lies online. Now maybe it will be easy to catch and correct them.
The famous quote "A lie can travel halfway around the world while the truth is putting on its shoes" is older than the internet, so this asymmetry was already bad enough back then and whoever coined the quote couldn't have imagined how much farther it would shift.
To the extent there's a technical fix to this problem of mass gaslighting, surely it's cryptography.
Specifically, the domain name system and TLS certificates, functioning on chains of trust. It's already up and running. It's good enough to lock down money, so it should be enough to suggest whether a video is legit.
We decide which entities are trustworthy (say: reuters.com, cbc.ca), they vouch for the veracity of all their content, and the rest we assume is fake slop. Done.
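A sketch of what "vouching" could mean mechanically. The `.well-known` manifest here is my invention, not an existing standard; the assumption is that each trusted publisher serves a list of SHA-256 hashes of content it stands behind, with TLS anchoring trust in the domain itself:

```python
import hashlib

import requests  # TLS/CA validation here is what ties the manifest to the domain

# Hypothetical endpoint; no such standard path exists today.
MANIFEST_URL = "https://www.reuters.com/.well-known/content-manifest.json"

def is_vouched_for(video_path: str) -> bool:
    """True if the publisher's manifest lists this file's SHA-256 hash."""
    with open(video_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    manifest = requests.get(MANIFEST_URL, timeout=10).json()
    return digest in manifest.get("sha256", [])

# Anything that verifies is "this publisher stands by it"; everything
# else defaults to slop, exactly as proposed above.
```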
Hmm, that's not totally new stuff. I mean, anyone taking the time to learn how mass media actually work should already be acquainted with the fact that anything you get from them is either bare propaganda or some eye-catching trigger devoid of any information, designed to attract an audience.
There is no way an independent professional can make a living while maintaining the integrity and will to provide relevant reporting without falling into this or that truism. The audience is already captured by other giant dumping schemes.
Think "manufacturing of consent".
So the big change that might occur here is in the distribution: how many people believe what gets thrown at their faces.
Also, previously the only thing you could reliably take from a publication was that the utterer of a sentence knew the words emitted, or at least had the ability to utter its form. Now you still don't know whether the utterer understood the sentence spoken, but you don't even know if the person could actually utter it, let alone whether they were ever aware of the associated concepts and notions.
If anything, the idea that one can take information as "true" based on trust alone (what does the photograph show, what did the New York Times publish) seems to be a recent aberration. AI will be doing us a favor if it destroys this notion, encourages people to be more skeptical, and sharpens their critical thinking skills. Forget about what is "true" or "false." Information may be believed on a provisional basis. But it must "make sense" (a whole subject in itself), and it must be corroborated. If not, it is not actionable. There simply is no silver bullet, AI or no AI. Iain M. Banks's Culture series provides an interesting treatment of this subject, if anyone is interested.
However, the real breaking point in my view was when shills and semi-automated bots became so prevalent that they could fool people into believing some consensus had changed. Faking photos doesn't add much to that, in my view.
What we don’t know is whether we’ll be worse or better off when the technology of forgery is available to random broke assholes as easily as it is to governments and companies. More slop seems bad, but if our immunity against bullshit improves, people might redevelop critical thinking skills and capitalism could end.
Does anyone other than me notice this common tendency on HN:
1. Blockchain use case mentioned? Someone must say "blockchain doesn't solve any problems" no matter what, always ignoring any actual substance of what's being done until pressed.
2. AI issue mentioned? Someone must say "nothing at all has changed, this problem has always existed" downplaying the vast leaps forward in scale, and quality of output, ignoring any details until pressed.
It's like when people feel the need to preface "there is nothing wrong with capitalism, but" before critiquing capitalism. You will not criticize the profit.
It's not really a shibboleth. What's the name for this type of thing, groupthink?
EDIT: while oversimplification is essentially always a problem, nuance and persuasion are usually at odds. So, it's especially noticeable in contexts where people are trying to persuade each other. The best parts of HN are not that, but rather where they try to inform each other.
https://www.youtube.com/watch?v=fs-YpQj88ew&t=3m20s
At least made up citations are quick and easy to denounce.
https://www.independent.co.uk/news/science/archaeology/europ...
"There was a brief window where photography and videos became widespread so events could be documented and evidenced."
Photos and videos have never in themselves been evidence. You have needed to trust the photographer or publisher too, ever since the camera was invented.
Moving the cost from big $$$ level to my neighbour level makes scams more personalized, sure.
Localised fires are common in nature; a massive wildfire is "just a fire" at a different scale and degree. Lunatics raving about conspiracies were very common in public squares, in front of metro stations, anywhere with a large-ish flow of people; now they are very common on social media, reaching millions in an instant. Different scale and degree.
Sheer scale and degree can make an issue a completely different issue. Decreasing the effort required to fake a video to the point where a naïve layperson cannot distinguish it from a real one is a massive difference. Before, you needed to be technically proficient with a number of tools, put in a lot of work, and get a somewhat satisfying result to fake something; now you just throw out a magical incantation of words and it spits out a video you can deceive someone with. It's a completely different issue.
You can take a photograph of the image with an old Polaroid and, if the resolution is high enough, the watermark would still be there.
Now let's say your AI deepfake image is watermarked/hashed upon generation. Difficulty: intermediate to hard. You might be able to get it done, but not without real effort, and removing all traces of the watermark without leaving artefacts might be so difficult as to be nearly impossible.
...So it doesn't eliminate the possibility of fakes, but it hugely raises the cost and effort bar. Only the most motivated would bother.
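To illustrate why that's plausible, here's a toy spread-spectrum-style watermark in Python/NumPy: a low-amplitude keyed pattern spread across every pixel, detected by correlation. This is a deliberately simplified sketch; real schemes work in frequency space and survive far more abuse:

```python
import numpy as np

KEY = 42  # secret seed shared by the generator and the detector

def _pattern(shape):
    # Keyed pseudorandom +/-1 pattern; unguessable without the seed.
    return np.random.default_rng(KEY).choice([-1.0, 1.0], size=shape)

def embed(image: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Spread a faint +/-strength pattern across every pixel."""
    return np.clip(image + strength * _pattern(image.shape), 0.0, 255.0)

def detect(image: np.ndarray, threshold: float = 1.0) -> bool:
    """Correlate with the keyed pattern: unmarked images average to ~0,
    marked ones to ~strength, even after mild noise is added."""
    return float(np.mean(image * _pattern(image.shape))) > threshold
```

Because the mark is spread over the whole image, scrubbing it means perturbing every pixel enough to break the correlation, which is hard to do without visible artefacts.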
The problem then is repeated compression causing that to occur inadvertently.
You’re forced to trust the source and “read between the lines” or you’re reading something politically motivated.
Nothing new. I hope folks start trusting the source more.
You should be able to follow a chain of evidence towards an unprocessed raw image potentially.
This already exists: the Content Authenticity Initiative [1], with Leica shipping content credentials in its cameras [2].
[1]: https://en.m.wikipedia.org/wiki/Content_Authenticity_Initiat...
[2]: https://leica-camera.com/en-int/photography/content-credenti...
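The "chain of evidence" these schemes aim for is roughly a hash-linked list of signed edit records. A minimal sketch, not the actual C2PA manifest format, just the shape of the idea:

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def append_step(chain: list, image_bytes: bytes, signer_key, note: str) -> None:
    """Record one step: hash of the result, link to the prior record, signature."""
    prev = hashlib.sha256(
        json.dumps(chain[-1], sort_keys=True).encode()
    ).hexdigest() if chain else None
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev_record_sha256": prev,
        "note": note,
    }
    record["signature"] = signer_key.sign(
        json.dumps(record, sort_keys=True).encode()
    ).hex()
    chain.append(record)

# Camera signs the raw capture; the picture desk signs its crop of it.
camera_key = Ed25519PrivateKey.generate()
desk_key = Ed25519PrivateKey.generate()
chain: list = []
append_step(chain, b"raw sensor data", camera_key, "capture")
append_step(chain, b"cropped derivative", desk_key, "crop for publication")
```

A verifier can then walk the chain backwards from the published image to the original capture, checking each hash link and signature along the way.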
Both manufacturers' keys got extracted from their cameras, rendering the feature moot. Now a more robust iteration is probably coming, possibly based on secure enclaves.
I'd love to have the feature and use it to verify that my images are not corrupted on disk.
Mid-term plan is to build an UNRAID array with patrolling. Will probably do backups with Borg on top of it, and keep another "working copy", so I have multiple layers of redundancy with some auto-remediation.
UNRAID will keep disk level consistency in good shape. Patrolling will do checksum tests, Borg can check its own backups against bitrot and can repair them in most cases. Working copy will contain everything and refill if something unthinkable happens.
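For what it's worth, the patrolling part needs nothing vendor-specific. A bare-bones Python version of a checksum patrol (the manifest filename is arbitrary; nothing UNRAID- or Borg-specific here):

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("checksums.json")

def snapshot(root: str) -> None:
    """Record a SHA-256 digest for every file under root."""
    sums = {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(root).rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(sums, indent=2))

def patrol() -> list[str]:
    """Re-hash every recorded file and report silent changes (bitrot)."""
    sums = json.loads(MANIFEST.read_text())
    return [path for path, digest in sums.items()
            if hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest]
```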
I don't do that at home. I have 600GB of Nikon raws. I keep one copy on my Mac. I have a Time Machine backup in the house (in another room) on a Samsung T7 Shield, and I have an off-site backup, which is a quarterly rsync job to another Samsung T7 Shield.
I put a lot of effort into thinking about what could make photos and videos properly trustable [0], but short of performing continuous modulation on the light sources whose light is eventually captured by the cameras' sensors, there's no other way, and even that I'm not sure would work with light actually doing the things it's supposed to (reflection, refraction, scatter, polarization, etc.). And that's not even mentioning how the primary light source everyone understandably relies on, the Sun, will never emit such a modulated signal.
So what will happen instead, I'm sure, is people deciding that they should not let perfection be the enemy of good, and moving on with this stuff anyway. Except this decision will not result in the necessary corresponding disclaimers reaching consumers, and so will begin the age of sham proofs and sham disprovals, and further radicalization. Here's hoping I'm wrong, and that these efforts will receive appropriate disclaimers when implemented and rolled out.
[0] related in a perhaps unexpected way: https://youtube.com/shorts/u-h5fHOcS88
Signing in hardware is nice, but you then still need to trust the company making the hardware to make it tamper proof and not tampered with by them. Additionally, such devices would have to be locked down to disallow things like manual setting of the time and date. It's a rabbit hole with more and more inconveniences and fewer benefits as you go down it.
Better to just go for reputation, and for webs and chains of trust in people-based approaches.
A good example: https://imgur.com/iqtFHHg
I didn't mention this in my previous comment, but after having thought about it even further, I've eventually arrived at the idea that there can be basically an arbitrary amount of meaning embedded onto a photo or a video, and it being "verified" to me as a human really just means "it is what I think I'm looking at" - that's when I realized this task is likely unsolvable in absolute terms, and just how much more difficulty lies beyond the already hopelessly difficult crackpot idea of light source signal modulation that I came up with prior. But yeah, as I suggested, these are just what I could think of, and perfection should indeed not be the enemy of good.
What a load of nonsense. A little bit of humility and a basic understanding of history should quickly make you realize that.
OP's point is far more interesting and deserves more discussion.
It's immensely frustrating to dress up a natural-language sentence with enough precision to account for every bit of nuance, so you should anticipate and actively consider that I may have missed or merely implied some.
Despite this, you clearly did not, and instead went into attack mode on the assumption(!) that I missed them intentionally.
I'd recommend you take your own advice on intellectual humility before offering it to others.
The idea of "cryptographically signed photos coming out of cameras"? It's been discussed to death and is essentially a hope for magic technology to solve our social problems.
Also, it won't work technically: it's like asking for a perfect DRM implementation to be universally deployed everywhere.
You do realize that would still not provide perfect proof that what was recorded by the camera was real, right? It does seem like an obsolete idea you may not have fully reconsidered in a while.
But considering that same old idea, which dates from before the current state of things, I would also not be surprised if you imagined clandestinely including all kinds of other things in this cryptographic signature, like location, time and day, etc.; all of which can also be spoofed, and all of which is a tyrannical system's wet dream.
You don't think that would be immediately abused, as it was in other similar cases, like all the on-device image scanning that was pushed with counter-CSAM appeals to save the children... of course?
It doesn't do a whole lot for something entirely fictional, unless it becomes so ubiquitous that anything unsigned is assumed to be fake rather than just made on a "normal" device. And even if you did manage to sign every photo, who's managing those keys? It's the difference between TLS telling you what you see is what the server sent and trusting the server to send the truth in the first place.
Presumably if you were discovered you would then "burn" the device, as its local key would then be known to be used by bad actors. But now you need to check all photos against a blacklist. Which also means that if you buy a second-hand device, you might be buying a device with "untrusted" output.
The problem of Trust is a human problem, and throwing technology at it just makes it worse.
This particular idea has so many glaring problems that one might almost wonder if the motivation is less about "preventing misinformation" or "protecting democracy" or "thinking of the children" or whatever, and more about making it easier to prove you took the photo as you sue someone for using it without permission. But any technology promoted by Adobe couldn't be about DRM, so that's just crazy talk!
There was never a time when authenticated photo and video could be trusted without knowing the source and circumstances.
Everything has always happened, so who cares? We need to go deeper than that. Many things that are perfectly a-okay today are only so because we do it on a small enough scale. Many perfectly fine things, if we scale them up, destroy the world. Literally.
Even something as simple as pirating, which I support, would melt all world economies if everyone did it with everything.
What is happening now will raise awareness of it and, of course, make it a problem several orders of magnitude bigger.
I am sure there are large efforts ongoing to train AI to spot AI photo, video, written production.
A system like printer tracking dots¹ may already be in widespread use. Who would take the enormous amount of time to figure out whether some such thing is hiding somewhere in an LLM or related code?
¹ https://en.wikipedia.org/wiki/Printer_tracking_dots
Lying on a small scale is no big deal; lying on a big scale burns the world down. Me pirating Super Mario 64 means nothing; everyone pirating everything burns the economy down. Me shooting a glass Coke bottle is not noteworthy; nuclear warheads threaten humanity's existence.
Yes, AI fabrication is a huge problem that we have never experienced before, precisely because of how it can scale.
Which just goes to show that one of the core tenets of techno-optimism is a lie.
The Amish (as I understand them) actually have the right idea: instead of wantonly adopting every technology that comes along, they assess how each one affects their values and culture, and pick the ones that help and reject the ones that harm.
Not something that can be said of most people. Worse, the number of affinity groups with long-term coherence collapses into niche religions and regional cults.
There’s no free lunch
If you want structural stability, then you're going to have to give up individuality for the group vector; if you want individuality, you're not going to be able to take advantage of group benefits.
Humans are never going to be able to figure out this balance because we don't have any kind of foundational, coherent, universal epistemological grounding that can be universally accepted.
Good luck trying to get the world to even agree on the age of the Earth.