The novel elements are social media and AI. I am increasingly convinced that ad-funded social media should be banned and/or tightly regulated like utilities are.
Ads are not how things that are "banned and/or tightly regulated like utilities" are funded, though.
The idea is that the authenticity of images could be challenged. If in doubt, the original photographer or source can provide verification, automated or otherwise, that only superficial edits X, Y, or Z have been made, such as lighting or contrast changes.
Copies of the image would contain this ledger or a link to it. In the future, maybe photos without this technology are never trusted as a source, or are perhaps disallowed on certain platforms.
So it's inverted from "we must detect fakes" to "we must detect authenticity," the latter being easier to control.
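For illustration only, here is a minimal sketch of what such a signed "only these superficial edits" ledger entry might look like. It assumes the third-party `cryptography` package for Ed25519 signatures; the manifest format and function names are hypothetical, not C2PA or any camera vendor's actual scheme.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def make_manifest(original_bytes: bytes, edits: list[str],
                  signing_key: Ed25519PrivateKey) -> dict:
    """Record the hash of the original capture plus a list of declared
    superficial edits, signed by the photographer's / camera's key."""
    payload = {
        "original_sha256": hashlib.sha256(original_bytes).hexdigest(),
        "declared_edits": edits,  # e.g. ["lighting", "contrast"]
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": signing_key.sign(blob).hex()}

def verify_manifest(manifest: dict, public_key) -> bool:
    """Anyone holding the published public key can check the ledger entry."""
    blob = json.dumps(manifest["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), blob)
        return True
    except InvalidSignature:
        return False

# Usage: the camera or photographer signs at capture/edit time...
key = Ed25519PrivateKey.generate()
manifest = make_manifest(b"...raw image bytes...", ["lighting", "contrast"], key)
# ...and a platform later checks the copy it received against the ledger.
assert verify_manifest(manifest, key.public_key())
```

This only proves that the signed bytes and declared edits match what the key holder attested to; it says nothing about whether the key holder was honest, which is the weakness the replies below point out.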
I mean, nice things are under threat. There's no escaping it. If technology got us into this mess, technology can dig us out.
And that's the low-tech version. Taking a trustworthy camera and plugging the CMOS sensor connector into an LLM that generates its output as RAW data, fed to the camera in place of the real sensor, isn't that hard if you want to make extra sure you're not detected.
Such a system might even cause more harm than good by giving people the impression that real/AI images can actually be certified.
https://asia.nikkei.com/Business/Technology/Nikon-Sony-and-C...
It's hard to blame AI for something that was obvious three decades ago.
AI is just one of the means of lying to citizens, but long before AI existed, whenever power and money concentrated in a few hands, democracy and representation were at risk.
Redistribute wealth, and corruption and manipulation will shrink at the same pace as inequality. It has happened before; it will happen again.
Basically, since only negative lies work on the general population, and positive lies are impossible, negative lies will be generated about everyone above the currently shittiest level of politician in a race. And neural nets are a 100x power and speed amplifier for this.