https://phys.org/news/2018-02-power-grid-fluctuations-hidden...
> Electric network frequency is a signal unique over time and thus can be used in time estimation for videos.
Although there's a whole other problem with this, which is that it's not going to survive consumer compression codecs. Because the changes are too small to be easily perceptible, codecs will simply strip them out. The whole point of video compression is to remove perceptually insignificant differences.
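For anyone curious, the ENF idea itself is simple: indoor lights flicker at twice the mains frequency, that flicker aliases into the frame rate, and the tiny wobble of the recovered frequency can be matched against grid records to date the footage. A back-of-the-envelope sketch, assuming a 50 Hz grid, ~30 fps video, and a per-frame mean-luminance series as input (all names here are made up for illustration):

```python
import numpy as np

def enf_trace(lum, fps=30.0, mains=50.0, win_s=8.0):
    """Rough ENF trace from per-frame mean luminance.
    Lights flicker at 2x mains frequency (100 Hz on a 50 Hz grid);
    sampled at 30 fps that aliases to about 10 Hz (100 - 3*30).
    """
    lum = np.asarray(lum, float)
    alias = abs(2 * mains - round(2 * mains / fps) * fps)
    win, hop = int(win_s * fps), int(win_s * fps) // 2
    trace = []
    for start in range(0, len(lum) - win, hop):
        seg = lum[start:start + win] - lum[start:start + win].mean()
        spec = np.abs(np.fft.rfft(seg * np.hanning(win)))
        freqs = np.fft.rfftfreq(win, d=1.0 / fps)
        band = (freqs > alias - 0.5) & (freqs < alias + 0.5)
        trace.append(freqs[band][spec[band].argmax()])  # peak near the alias
    return np.array(trace)  # match this wobble against grid-frequency records
```

And that's exactly the problem: a lossy codec treats this sub-perceptual flicker as noise, so after a round of consumer compression there may be no peak left to find.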
(Practical systems often include a generational index or a timestamp, which further helps to detect replay attacks.)
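That check can be almost nothing, as sketched below (hypothetical names; per-source state would live wherever the verifier does):

```python
# Reject any message whose generation counter is not strictly newer
# than the last one accepted from that source: replays and stale
# re-sends fail, fresh messages pass.
last_seen: dict[str, int] = {}

def accept(source_id: str, generation: int) -> bool:
    if generation <= last_seen.get(source_id, -1):
        return False  # replayed or out-of-date
    last_seen[source_id] = generation
    return True
```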
I think for the approach discussed in the paper, bandwidth is the key limiting factor, especially as video compression mangles the result, and ordinary news reporters edit the footage for pacing reasons. You want short clips to still be verifiable, so you can ask questions like "where is the rest of this footage" or "why is this played out of order" rather than just going, "there isn't enough signature left, I must assume this is entirely fake."
Definitely interesting for critical events and locations, but quite niche.
If this is the only info that's encoded, then that might not be an entirely bad idea.
(Usually, the stego-ing of info can help identify, say, a dissident who made a video that was critical of a regime. There are already other ways, but defeating them becomes whack-a-mole if universities keep inventing more.)
> Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal any malicious editing.
If I have the dissident video, and a really big computer, can I identify the particular watermarked light sources that were present (and from there, know the location or owner)?
(Once you have an identifying code, you can go through supply chain and sales information, and through analysis of other videos, to likely determine location and/or owner/user/affiliate.)
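The brute-force step would look roughly like this, assuming you can enumerate (or have leaked) candidate device codes as pseudorandom sequences and have already extracted the illumination signal from the video; everything here is hypothetical:

```python
import numpy as np

def likely_sources(extracted, candidate_codes, threshold=0.5):
    """Correlate the illumination signal extracted from a video
    against each candidate device code; a high normalized
    correlation suggests that device's light lit the scene.
    The 'really big computer' part is enumerating the code space
    when the codes are actually secret."""
    x = np.asarray(extracted, float)
    x = (x - x.mean()) / x.std()
    hits = []
    for device_id, code in candidate_codes.items():
        c = np.asarray(code, float)
        c = (c - c.mean()) / c.std()
        n = min(len(x), len(c))
        score = float(np.dot(x[:n], c[:n]) / n)
        if score > threshold:
            hits.append((device_id, score))
    return sorted(hits, key=lambda t: -t[1])
```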
If you're even considering going to all the trouble of setting up these weird lights and specialized algorithms for some event you're hosting, just shoot your own video of the event and post it. Done.
"Viewers" aren't forensic experts. They aren't going to engage with this algorithm or do some complex exercise to verify the private key of the algorithm prior to running some app on the video, they are just going to watch it.
Opponents aren't going to have difficulty relighting. Relighting is a thing Hollywood does routinely, and it's only getting easier.
Posting your own key and own video does nothing to prove the veracity of your own video. You could still have shot anything you want, with whatever edits you want, and applied the lighting in software after the fact.
I'm sure it was fun to play with the lights in the lab, but this isn't solving a problem of significance well.
I’m under the impression this isn’t for end users; it’s for enforcement within the context of intellectual property.
I’m curious to see what the value proposition is as it’s unclear who would be buying this and why. I suppose platforms might want it to prove they can help or offer services to enforce brand integrity, maybe?
One significant problem currently is long-form discussions being taken wildly out of context for the sake of propaganda, cancelling, or otherwise damaging the reputation of those involved. The point isn't that a given video isn't edited originally, but that the original source video can be compared to another (whether the original was edited or not is neither here nor there).
I'm not saying this solution is the answer, but being able to prove a video is unedited from its original release is a pretty reasonable goal.
I also don't follow where the idea that viewers need to be forensic experts comes from. My understanding is that a video can be verified as authentic, at least in the sense of the way the original author intended. I didn't read that users would be responsible for this, but rather that it can be done when required.
This is particularly useful in cases like the one I highlighted above, where a video may be re-cut to make an argument the person (or people) in question never made, and which might be used to smear said persons (a common occurrence in the world of long-form podcasting, for example).
I don’t think that’s where we are, right? People are happy to stop looking after they see the video that confirms their negative suspicions about the public figure on the other team, and just assume any negative clips from their own team are taken out of context.
Total Relighting SIGGRAPH Talk: https://www.youtube.com/watch?v=qHUi_q0wkq4
Physically Controllable Relighting of Photographs: https://www.youtube.com/watch?v=XFJCT3D8t0M
Changing the view point post process: https://www.youtube.com/watch?v=7WrG5-xH1_k
Maybe eventually we get a model that can take a video and "rotate" it, or generate a 3D scene that can be recorded from multiple angles. And eventually we may get a model that can generate anything. For now, 4o can't maintain consistency across that many details, and I imagine it's orders of magnitude harder to replicate spatial/lighting differences accurately enough to pass expert inspection.
If you want solid evidence that a video is real, ask for another angle. Meanwhile, anything that needs to be covered with a camera (security or witness) should have at least two.
Or maybe "we installed the right bulbs but then we set the cameras to record in 240p MPEG with 1/5 keyframe per second because nobody in the office understands how digital video works".
Anyways, I'm of the opinion that the ultimate end-state of deepfakes will be some sort of hybrid system where the AI creates 3D models and animates a scene for a traditional raytracing engine. It lets the AI do what it's best at (faces, voices, movement) and eliminates most of the random inconsistencies. If that happens, then faking these light patterns won't be difficult at all.
ranger_danger•4h ago
I don't think there's any possible solution that can't itself also be faked.
xandrius•3h ago
Encrypt some data into the video itself (ideally changing every frame), unique and creatable only by the holder of the private key. Anyone can verify it. Flag reused codes. That's it?
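The reuse flag is the easy part; a toy sketch (note it can only say a code appeared twice, not which copy is the original, which is exactly the question raised further down):

```python
seen: dict[bytes, str] = {}  # code -> first video it appeared in

def check_code(code: bytes, video_id: str) -> str:
    first = seen.setdefault(code, video_id)
    if first != video_id:
        return f"reused: also appears in {first}"  # flags reuse, can't pick the original
    return "ok"
```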
wongarsu•3h ago
Or anyone else who cares enough about deepfakes and can afford the effort
wongarsu•2h ago
I'd agree that it's a lot of effort for very marginal gain
do_not_redeem•3h ago
If you flag a reused code in 2 different videos, how do you tell which video is real?
zhivota•3h ago
It's a lot of complexity, so probably only worthwhile for high value targets like government press conference rooms, etc.
hamburglar•2h ago
> rather than encoding a specific message, this watermark encodes an image of the unmanipulated scene as it would appear lit only by the coded illumination
They are including scene data, presumably cryptographically signed, in the watermark, which allows for a consistency check that is not easily faked.
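A toy version of that consistency check, emphatically not the paper's algorithm: compare the low-res image recovered from the watermark against the (downsampled, grayscale) visible frame block by block, and flag regions that disagree:

```python
import numpy as np

def inconsistent_blocks(recovered, frame, block=16, tol=0.2):
    """`recovered` is the scene image decoded from the watermark
    (the scene as lit only by the coded illumination); `frame` is
    the visible frame at the same resolution, values in [0, 1].
    Edits to the visible frame aren't reflected in the recovered
    image, so edited regions disagree."""
    bad = []
    h, w = recovered.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            d = np.abs(recovered[y:y + block, x:x + block]
                       - frame[y:y + block, x:x + block]).mean()
            if d > tol:
                bad.append((y, x))  # block doesn't match the coded-light view
    return bad  # empty => frame consistent with the watermark
```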
xandrius•2h ago
For example, you encrypt the hash of the frame itself (+ metadata: frame number, timestamp, etc.) with a private key. My client decrypts the hash with the public key, computes the hash itself, and compares the two.
The problem might present itself when compressing the video, but the tagging step can be done after compression. That would also prevent resharing, since any re-encode would break the signature.
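In practice you'd do that with a signature scheme rather than raw encrypt/decrypt, but it's the same shape. A minimal sketch using Ed25519 from the `cryptography` package (key handling and framing here are one arbitrary choice, not anything from the thread):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # held by the camera/publisher
verify_key = signing_key.public_key()       # published so anyone can check

def tag_frame(frame_bytes: bytes, frame_no: int, ts: float) -> bytes:
    # Hash the (already compressed) frame plus metadata, sign the hash.
    digest = hashlib.sha256(frame_bytes + f"|{frame_no}|{ts}".encode()).digest()
    return signing_key.sign(digest)

def verify_frame(frame_bytes: bytes, frame_no: int, ts: float, sig: bytes) -> bool:
    digest = hashlib.sha256(frame_bytes + f"|{frame_no}|{ts}".encode()).digest()
    try:
        verify_key.verify(sig, digest)  # any edit or re-encode changes the hash
        return True
    except InvalidSignature:
        return False
```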