A proposed class-action settlement filed Jan 23, 2026 (N.D. California, San Jose) would have Google pay $68M to resolve claims that Google Assistant sometimes activated on “false accepts” (misheard wake words), recorded private conversations, and shared or used the resulting data, including for ad targeting. Google denies wrongdoing, and the settlement still needs court approval; a preliminary approval hearing is set for March 19, 2026.
Background: reporting in 2019 described contractors transcribing snippets of Assistant recordings, including some captured accidentally.
What architecture would make “false accepts” provably non-exfiltrating (on-device wake word, local buffering, strict opt-in upload)?
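Rough sketch of what I mean (all names hypothetical, not any vendor's actual design): audio lives only in a fixed-size in-memory ring buffer, the wake-word model runs on-device, and nothing leaves the device unless the user has explicitly opted in to uploads.

```python
# Hypothetical sketch: on-device wake word + local buffering + strict opt-in upload.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class LocalAudioBuffer:
    """Holds only the last few seconds of audio, in RAM, on-device."""
    max_frames: int = 160  # ~ a few seconds at 20 ms per frame
    frames: deque = field(default_factory=deque)

    def push(self, frame: bytes) -> None:
        self.frames.append(frame)
        while len(self.frames) > self.max_frames:
            self.frames.popleft()  # old audio is dropped, never persisted

    def snapshot(self) -> list[bytes]:
        return list(self.frames)


class Assistant:
    def __init__(self, upload_opt_in: bool = False):
        self.buffer = LocalAudioBuffer()
        self.upload_opt_in = upload_opt_in  # off by default: false accepts stay local

    def on_device_wake_word(self, frame: bytes) -> bool:
        """Placeholder for a local wake-word model; no network access."""
        return frame == b"hey"  # hypothetical trigger, just for the sketch

    def maybe_upload(self, clip: list[bytes]) -> None:
        if not self.upload_opt_in:
            return  # even a false accept cannot exfiltrate unless this gate is open
        print(f"uploading {len(clip)} frames with user consent")

    def process(self, frame: bytes) -> None:
        self.buffer.push(frame)
        if self.on_device_wake_word(frame):
            # on any accept (true or false), upload is still gated on explicit opt-in
            self.maybe_upload(self.buffer.snapshot())


if __name__ == "__main__":
    a = Assistant(upload_opt_in=False)
    for f in [b"noise", b"hey", b"more noise"]:
        a.process(f)  # nothing is uploaded because opt-in is off
```

The "provably" part would still depend on the gate being enforced below the app layer (e.g., no network permission for the wake-word path), but the shape is: local buffer, local model, one explicit consent check before any egress.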
If human review is needed to improve models, what’s the least-bad consent + auditing model?
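One possible least-bad shape, sketched with hypothetical names: per-clip, revocable consent (not a blanket ToS checkbox), hard retention limits, and an append-only, hash-chained audit log of every reviewer access.

```python
# Hypothetical consent + auditing sketch for human review of voice clips.
import hashlib
import time
from dataclasses import dataclass, field


@dataclass
class Clip:
    clip_id: str
    consented: bool = False   # explicit opt-in, per clip, revocable
    expires_at: float = 0.0   # hard retention limit


@dataclass
class ReviewStore:
    clips: dict[str, Clip] = field(default_factory=dict)
    audit_log: list[str] = field(default_factory=list)  # append-only in practice

    def grant_consent(self, clip_id: str, ttl_s: float = 30 * 86400) -> None:
        self.clips[clip_id] = Clip(clip_id, consented=True,
                                   expires_at=time.time() + ttl_s)

    def revoke_consent(self, clip_id: str) -> None:
        self.clips.pop(clip_id, None)  # revocation deletes, not just hides

    def fetch_for_review(self, clip_id: str, reviewer: str) -> Clip | None:
        clip = self.clips.get(clip_id)
        if clip is None or not clip.consented or time.time() > clip.expires_at:
            return None
        # every reviewer access is logged and hash-chained so tampering is detectable
        prev = self.audit_log[-1] if self.audit_log else ""
        entry = f"{time.time():.0f} {reviewer} {clip_id}"
        digest = hashlib.sha256((prev + entry).encode()).hexdigest()[:16]
        self.audit_log.append(f"{entry} {digest}")
        return clip


if __name__ == "__main__":
    store = ReviewStore()
    store.grant_consent("clip-123")
    print(store.fetch_for_review("clip-123", reviewer="rev-42"))
    print(store.audit_log)
```

The key property is that consent and access are both first-class records the user (or a regulator) can inspect, rather than an internal contractor workflow nobody outside can see.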
nuclearm•1h ago