I think the TechCrunch headline is slightly more accurate than the Verge headline, which is "Facebook is starting to feed its AI with private, unpublished photos".
In both cases they imply that model training is happening when that hasn't been confirmed.
(Facebook could help here by answering press inquiries about it, which they apparently have not done.)
Given that Facebook is about as voracious a surveillance actor as has ever existed, their track record of respecting few red lines until they have been caught crossing them egregiously, and the bright spotlight Zuckerberg is currently shining on their AI ambitions, it defies reality to imagine them forgoing any data they can get their hands on.
You just know there is a dashboard that summarizes all potential data sources, and engineers wake up with the shakes and sweats after dreaming that Zuck was standing behind them, brow furrowed, pointing to a stat showing that 2% of someone's most private information still hasn't been plundered.
Ok, a little hyperbolic. But he & Meta are relentless.
Now that it has been established that they wrote malware to bypass tracking protections, nothing surprises me. Apps written by Meta are malware, as far as I'm concerned.
It is safest to assume that your photos are being used for training.
cwmoore•7mo ago
How is it that the photos are shared when they're not shared?
jfengel•7mo ago
Also, it's best to avoid it on a site like this with many non-native English speakers. It adds an extra layer of difficulty.