https://www.androidauthority.com/google-veo-3-best-ai-videos...
Any clip or photo can be AI-generated, so nothing can be trusted: surveillance videos, cop body cams, "leaked" footage, war clips, and so on.
I think it's great that the technology exists, and it has huge possibilities, but it's a double-edged sword that will without question be used by people with some agenda/greed/belief/ideology/conviction/whatever.
On a scale of "how trust-ending is this?" I'd personally rate the current iteration as low-to-moderate risk. It's a matter of when rather than if we reach the point where any arbitrary footage can be questioned, but I don't think we're quite there yet.
I am a bit concerned about what this is going to do, or maybe already has done, to aid in scams. People were already falling for AI Elon, never mind video of this quality.
I see a business opportunity in selling trust and a way to watermark content. It will have to be a better business and technical model than existing trust authorities, since those have proven unreliable. You'd have to become your own certification authority and start building a trust web.
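As a rough illustration of the idea (not anyone's actual product): you sign a hash of each clip with your own key and publish the public key so anyone can verify provenance. A minimal sketch in Python using the `cryptography` library; the file name "clip.mp4" and the key handling are hypothetical.

    # Minimal provenance sketch: sign the SHA-256 of a video file with an
    # Ed25519 key, then verify it with the corresponding public key.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # in practice: load from secure storage
    public_key = private_key.public_key()        # published so others can verify

    with open("clip.mp4", "rb") as f:            # hypothetical file name
        digest = hashlib.sha256(f.read()).digest()

    signature = private_key.sign(digest)         # distribute alongside the clip

    try:
        public_key.verify(signature, digest)     # raises InvalidSignature if tampered
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")

The crypto is the easy part; the hard part is distributing and trusting the keys, which is exactly what the trust-web model has to solve.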
sylware•5h ago
Any noscript/basic (x)html alternative to read the content?
dvfjsdhgfv•4h ago