The speed with which you’ll be able to create and disseminate propaganda is mind-blowing. Imagine what a three-letter agency could do with that level of resources.
Expected reaction: every camera manufacturer will embed chips that hold a private key used to sign and/or watermark photos and videos, thus attesting that the raw footage came from a real camera.
Now it only remains to solve the analog hole problem.
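A minimal sketch of the attestation idea, using an HMAC as a stand-in for the asymmetric signature a real secure element would use (all key material and names here are hypothetical):

```python
import hashlib
import hmac

# Hypothetical per-device secret. A real camera would keep an asymmetric
# private key inside a secure element and publish the matching public key,
# so verifiers never hold the signing secret.
DEVICE_KEY = b"burned-in-at-the-factory"

def sign_frame(raw_sensor_bytes: bytes) -> bytes:
    """Attest that these exact bytes came off the sensor."""
    return hmac.new(DEVICE_KEY, raw_sensor_bytes, hashlib.sha256).digest()

def verify_frame(raw_sensor_bytes: bytes, tag: bytes) -> bool:
    # With HMAC the verifier needs the same secret; real schemes use
    # public-key signatures precisely to avoid that.
    return hmac.compare_digest(sign_frame(raw_sensor_bytes), tag)

frame = b"\x00\x01\x02"  # stand-in for raw sensor data
tag = sign_frame(frame)
print(verify_frame(frame, tag))         # True
print(verify_frame(frame + b"x", tag))  # False: any edit breaks attestation
```

Note that the last line is also the weakness: any legitimate edit (crop, resize, re-encode) breaks the signature too, which is where the objections below come in.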
Now you have another problem -- a signature is unique to a device, presumably, so you've got the "I can authenticate that this camera took this photo" problem, which is great if you are validating a press agency's photographs but TERRIBLE if you are covering a protest or human rights violation.
I don't think the CIA will have any problems
- Watermarking is nearly useless as a way of conveying that information, either visibly distorting the image or being sensitive to all manner of normal alterations like cropping, lightness adjustments, and screenshotting.
- New file formats are hard to push to a wide enough audience for this to have the desired effect. If half the real images you see aren't signed, ignoring the signature becomes second-nature.
- Hardware keys can always be extracted in O(N) for an N-bit key. The constant factor is large, but not large enough to deter a well-funded adversary. The ability to convincingly fake, e.g., video proof that you weren't at a crime scene would quickly become valuable. I don't know the limits, but it's worth more than the $2-10 million you need to extract a key.
- You mentioned the analog hole problem, and that's also very real. If the sensor is engineered as a separate unit from the signing chip, it's trivial to feed the signer whatever data you want. That's hard to work around because camera sensors are big and built on crude process nodes, so integrating a non-removable crypto enclave onto one is already a nontrivial engineering challenge.
- If this doesn't function something like TLS with certificate transparency logs and chains of trust then one compromised key from any manufacturer kills the whole thing. Would the US even trust Chinese-signed images? Vice versa? The government you obey has a lot of power to steal that secret without the outside world knowing.
- Even if you do have CT logs and trust the companies publishing to them not to publish compromised certs, a breach is much worse than for something like TLS: revoking a key effectively bricks every device that shipped with it (and, going back to the file-format point, if none of the images you personally take are appropriately signed, will a lack of signing ever seem like a big deal?). If you can update the secure enclave then an adversary can too, and if the update path protects itself by, e.g., only accepting signed bytecode, you still have the problem that the upstream key is potentially compromised.
- Everyone's current devices are immediately obsolete, which will kill adoption. If you grandfather the idea in, there's still a period of years where people get used to real images not being signed, and you still have a ton of wasted money and resources that'll get pushback.
Etc. It's really not an easy problem.
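To make the single-compromised-key point concrete, here's a toy verifier that trusts a flat set of manufacturer keys (all names hypothetical; real deployments would chain per-device certificates up to manufacturer roots). Once any one trusted key leaks, forgeries signed with it are indistinguishable from real captures:

```python
import hashlib
import hmac

# Toy model: one symmetric key per manufacturer (hypothetical names).
MANUFACTURER_KEYS = {
    "acme": b"acme-root-secret",
    "globex": b"globex-root-secret",
}

def sign(maker: str, image: bytes) -> bytes:
    return hmac.new(MANUFACTURER_KEYS[maker], image, hashlib.sha256).digest()

def looks_authentic(maker: str, image: bytes, tag: bytes) -> bool:
    key = MANUFACTURER_KEYS.get(maker)
    if key is None:
        return False
    return hmac.compare_digest(
        hmac.new(key, image, hashlib.sha256).digest(), tag)

real = sign("acme", b"actual sensor data")
# An attacker who extracts *any* trusted key can sign arbitrary pixels:
forged = sign("globex", b"generated propaganda")
print(looks_authentic("acme", b"actual sensor data", real))       # True
print(looks_authentic("globex", b"generated propaganda", forged)) # True
```

The verifier has no way to distinguish the forged case from the real one, which is why a single compromised manufacturer undermines trust in the whole scheme.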
What does it even mean that hardware keys are extractable in O(N) time? If there's some reasonable multiple of N where you can figure out a key, your cryptosystem is broken, physical or not.
It's also very straightforward to attach metadata to media and wouldn't take a format change.
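As one illustration of attaching metadata without a format change: most JPEG viewers stop decoding at the End-Of-Image marker (FF D9) and ignore trailing bytes, so a signature can ride along in an existing file. This is only a sketch (the `SIG:` framing is made up; real systems use dedicated metadata segments like EXIF):

```python
EOI = b"\xff\xd9"  # JPEG End-Of-Image marker

def append_signature(jpeg_bytes: bytes, sig: bytes) -> bytes:
    # Trailing bytes after EOI are ignored by typical decoders.
    return jpeg_bytes + b"SIG:" + sig

def extract_signature(data: bytes):
    end = data.rfind(EOI)
    if end == -1 or b"SIG:" not in data[end:]:
        return None
    return data.split(b"SIG:", 1)[1]

tiny = b"\xff\xd8...image data...\xff\xd9"  # stand-in for a real JPEG
tagged = append_signature(tiny, b"\x01\x02\x03")
print(extract_signature(tagged))  # b'\x01\x02\x03'
print(extract_signature(tiny))    # None
```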
Can you expand on that a bit? Wikipedia's coverage on this seems mostly historical and copy protection focused.
It has always been about trust in the authors.
The main difference is that petty fakes become cheap -- e.g., my wife could be shown a fake portraying me for whatever malicious reason.
What makes you think fake videos will have an outsized impact?
Picture - “this could be out of context” (this is used constantly in politics and people fall for it anyway)
Video removes the question of context and if the person actually did it. So now instead of printing about it or showing an awkward picture from a certain unflattering angle I can generate a video of your favorite politician taking upskirt photos on a city bus.
As the tech gets more and more realistic, we’re increasingly straining the average person’s ability to maintain presence of mind and ask questions.
Video - “There’s no way to tell whether this is AI-faked or not.”
Don’t get me wrong. I hope this level of fake media causes people to stop taking things at face value and dig into the facts but unfortunately it seems we’re just getting worse at it.
The problem isn’t the fakery, it’s this speed of dissemination on algorithmic social media. It’s increasingly looking like the modern West’s Roman lead pipes.
Verifiable authenticity might just be the next big thing.
If they can't have both, some people prefer confirmation to authenticity. But that's far from a universal preference.
If these same ideas were expressed by VTubers (virtual YouTubers: anime-like filters for people who want to do to-camera video but are shy or protective of their privacy), it would not be troubling, as everyone understands that fictionalized characters are a form of puppetry and can focus on the content of the argument.
But using generative video to simulate ordinary people expressing those ideas is a way of hijacking people's neural responses. Just pick the demographic you wish to micro-target (young/middle/old, working/middle/upper class, etc. etc. etc.) and generate attractive-looking exemplars for messages you want to promote and ugly-looking exemplars for those you wish to discredit.
IMO all digital content is going to have to be signed so the provenance trail can be crawled by an AI across devices.
https://aditya-advani.medium.com/how-to-defeat-fake-news-wit...
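One way to picture a crawlable provenance trail (a hypothetical sketch, not necessarily what the linked article proposes): each device or editing tool appends a record that hashes the previous one, so any tampering with earlier history is detectable downstream:

```python
import hashlib
import json

def add_record(chain: list, actor: str, action: str) -> list:
    """Append a provenance record linked to the previous one by hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    record = {"actor": actor, "action": action, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return chain + [record]

def chain_valid(chain: list) -> bool:
    """Re-derive every hash and check the links are unbroken."""
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = add_record([], "camera-123", "capture")   # hypothetical device id
chain = add_record(chain, "editor-app", "crop")
print(chain_valid(chain))  # True
chain[0]["action"] = "generate"  # rewrite history
print(chain_valid(chain))  # False
```

A real system would additionally need each record signed, which circles back to all the key-management problems discussed above.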
In real life, other humans are not machines you can put kindness tokens into and get sex out of. AI, on the other hand: you can put any tokens at all into AI and get sex out. I'm worried that people will stop interacting with humans because it's harder.
Sure, the results from a human relationship are 10,000x higher quality, but they require you to be able to communicate. AI will do what it's told, and you can tell it to love you and it will.
for some values of "will".
Which gene do you think encodes for having the hots for AI models?
You remind me of a report I saw on Taiwanese schoolchildren's career goals. Most said they were aiming for the semiconductor industry. Crazy how the local gene pool works -- what a coincidence.