> is not foreseeably capable
I'm not sure how consistently "foreseeably" gets interpreted when it comes to LLMs these days, even among programmers, let alone the general public.
> 22757.22.(a)(5) [It may not foreseeably be capable of:] Prioritizing validation of the user’s beliefs, preferences, or desires over factual accuracy or the child’s safety.
So if a kid says "I like chocolate", and it says "Everybody does, it's yummy", isn't that technically a violation? How should a court rule if a lawsuit occurs?
[0] https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml...
delichon•4h ago
To a lot of people here that's a feature. I don't think so. It would put California minors at a huge economic disadvantage relative to kids in other places. One state can't put AI back in the box. I think California has the right to run that experiment, but Newsom made a wise choice in stopping it.
bigyabai•4h ago
This feels like conjecture. Can't we just as easily reason that kids with access to AI become complacent and reliant on non-authoritative sources?
I think we need a proper A/B test before we conclude these things for certain.
Nasrudith•58m ago
This "for the children" law would be widely flouted.