First do a left-right comparison on the link that Aurornis posted [1]. Notice the extra fat in the chin, the elongated ear, the enlarged mouth and nose, the frizzier hair, the lower shirt cut.
You hate it. You think, intellectually, that this shouldn't work, and that surely no one would have the gall to do this so brazenly without fear of being caught and shamed. And then you think, well, once the truth is revealed there will be some introspection and self-reflection about being tricked, and maybe being tricked here means being tricked elsewhere.
Well, someone in an emotionless room min-maxed the outcomes and computed that the expected value of such an action was positive.
And here we are.
https://apnews.com/article/fact-check-levy-armstrong-crying-...
Or they do hear about it, maybe a few days or a week later, but they dismiss it because it's old news at that point and not worth thinking about.
Truth is, most people are never really thinking most of the time. They're reacting in the moment and maybe forming a rationale for their action after the fact.
But it's going to cost money to make and market all these new cameras, and I just don't know how we incentivize or pay for that, so we're left unable to trust any image or video in the near future. I can only think of technical solutions, not the social changes that need to happen before the tech is wanted and adopted.
https://authenticity.sony.net/camera/en-us/index.html
https://www.sony.eu/presscentre/sony-launches-camera-verify-...
Ideally it'd become an open standard supported by all manufacturers, which is what they're trying to do with the efforts linked above.
Ideally we would have a similar attestation from most people's cameras (i.e., their smartphones), but that's a much harder problem, especially once you also need to support third-party camera apps.
You will need camera DRM with a hardware security module all the way down to the image sensor, where the hardware is in the hands of the attacker. And even when that chain is unbroken, you'll need to detect all kinds of tricks where the incoming photons themselves are altered. In the simplest case: a photo of a photo.
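To make the chain concrete, here is a minimal sketch of what per-capture signing at the sensor boundary could look like. Everything here is illustrative: a real design would keep an asymmetric device key inside the hardware security module and distribute a public certificate to verifiers, not share an HMAC secret as this toy does.

```python
import hashlib
import hmac

# Hypothetical per-device secret that would live inside the sensor's
# security module in a real design (illustrative only; a real system
# would use an asymmetric device key, never a shared secret).
DEVICE_KEY = b"sensor-embedded-secret"

def sign_capture(raw_bytes: bytes) -> bytes:
    """Sign the raw sensor output before any processing, so later
    pipeline stages cannot alter pixels unnoticed."""
    return hmac.new(DEVICE_KEY, raw_bytes, hashlib.sha256).digest()

def verify_capture(raw_bytes: bytes, signature: bytes) -> bool:
    """A verifier holding the device's key (or, with asymmetric keys,
    its public certificate) re-checks the signature over the bytes."""
    expected = hmac.new(DEVICE_KEY, raw_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

frame = b"\x00\x01\x02"  # stand-in for raw sensor data
sig = sign_capture(frame)
assert verify_capture(frame, sig)                 # untouched frame verifies
assert not verify_capture(frame + b"edit", sig)   # any edit breaks the signature
```

Note what even a perfect version of this proves: only that these exact bytes came off that sensor. It says nothing about whether the photons hitting the sensor depicted reality, which is exactly the photo-of-a-photo problem above.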
If HDCP has taught us anything, it's that vendors of consumer products cannot implement such a secure chain at all; it suffered ridiculous security vulnerabilities for years. HDCP has largely been abandoned and become mostly irrelevant, except perhaps for the criminal liability it places on 'breaking' it. Vendors are also pushed to rely on security by obscurity, which makes such vulnerabilities harder for researchers to find than for attackers.
If you have half of such a 'signed photos' system in place, it becomes easier to dismiss photos of actual events on the basis that they're unsigned. If a camera model, or a security chip shared by many models, turns out to be broken, or a new photo-of-a-photo trick becomes known, a huge number of photos produced before that point immediately become suspect. And if you gatekeep (the proper implementations of) these features to professional or expensive models, citizen journalism will be disincentivized.
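The retroactive-suspicion problem can be sketched in a few lines. All names and records here are hypothetical; the point is only that once a model's security chip is known to be broken, a valid signature stops being sufficient, and every earlier photo from that model falls out of the trusted set at once.

```python
from datetime import date

# Hypothetical archive: (photo_id, camera_model, capture_date, signature_valid)
photos = [
    ("p1", "cam-A", date(2023, 5, 1), True),
    ("p2", "cam-B", date(2023, 6, 1), True),
    ("p3", "cam-A", date(2024, 1, 1), True),
]

# Suppose cam-A's security chip, shared across many units, is
# found extractable in 2024:
revoked_models = {"cam-A"}

def trusted(photo) -> bool:
    pid, model, when, sig_valid = photo
    # A valid signature is no longer enough: revocation applies
    # retroactively to everything the model ever signed.
    return sig_valid and model not in revoked_models

print([p[0] for p in photos if trusted(p)])  # only the cam-B photo survives
```

Notice that p1 was captured long before the break was discovered and its signature still verifies, yet it is now suspect anyway. That is the asymmetry the comment describes: one compromise silently devalues an entire back catalogue of evidence.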
But even more importantly: if you choose to rely on technical measures that are poorly understood by the general public (and that are likely to blow up in your face), you erode a social system of trust that is already in place: journalism. Although the rise of social media, illiteracy and fascism tends to suggest otherwise, the journalistic chain of custody of photographic records mostly works fine. But only if we keep maintaining and teaching that system.
The WH using social media (X, Pravda Social) for official communication is highly deliberate: they get to declare post hoc what is actually real communication and what is “just memes”. Of course it won't make any difference to the people amplifying the content. If the WH had to stick to traditional outlets for news, they wouldn't have this fig leaf to hide behind.
Aurornis•2w ago
The differences are not subtle
autoexec•2w ago