Look at what the bill actually requires. Companies have to publish frameworks showing how they "mitigate catastrophic risk" and implement "safety protocols" for "dangerous capabilities." That sounds reasonable until you realize the government is now defining what counts as dangerous and requiring private companies to build systems that restrict those outputs.
The Supreme Court already settled this. Brandenburg gives us the standard: imminent lawless action. Add in the narrow exceptions like child porn and true threats, and that's it. The government doesn't get to create new categories of "dangerous speech" just because the technology is new.
But here we have California mandating that AI companies assess whether their models can "provide expert-level assistance" in creating weapons or "engage in conduct that would constitute a crime." Then they have to implement mitigations and report to the state AG. That's prior restraint. The state is compelling companies to filter outputs based on potential future harm, which is exactly what the First Amendment prohibits.
Yes, bioweapons and cyberattacks are scary. But the solution isn't giving the government power to define "safety" and force companies to censor accordingly. If someone actually uses AI to commit a crime, prosecute them under existing law. You don't need a new regulatory framework that treats information itself as the threat.
This creates the infrastructure. Today it's "catastrophic risks." Tomorrow it's misinformation, hate speech, or whatever else the state decides needs "safety mitigations." Once you accept the premise that government can mandate content restrictions for safety, you've lost the argument.
Ah yes, the poor, poor innocent private companies... that actually need to be told again and again by governments to stop doing harmful things.
That's the problem.
I'm less worried about catastrophic risks than routine ones. If you want to find out how to do something illegal or dangerous, all an LLM can give you is a digest of what's already available online, probably with errors.
The US has lots of hate speech, and it's mostly background noise, not a new problem.
"Misinformation" is more of a problem, because the big public LLMs digest the Internet and add authority with their picks. It's adding the authority of Google or Microsoft to bogus info that's a problem. This is a basic task of real journalism - when do you say "X happened", and when do you say "Y says X happened"? LLMs should probably be instructed to err in the direction of "Y says X happened".
"Safety" usually means "less sex". Which, in the age of Pornhub, seems a non-issue, although worrying about it occupies the time of too many people.
An issue that's not being addressed at all here is using AI systems to manipulate customers and provide evasive customer service. That's a matter of commercial speech and consumer protection, not the First Amendment, and it should be addressed as a consumer rights issue.
Then there's the issue of an AI as your boss. Like Uber.
Fixing social media is now a near-impossible task: it has built up enough momentum and political influence to resist any regulation that would actually curtail its worst side effects.
I hope we don't make the same mistakes with generative AI.
toxicdevil•1h ago
What the law does: SB 53 establishes new requirements for frontier AI developers creating stronger:
Transparency: Requires large frontier developers to publicly publish a framework on its website describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework.
Innovation: Establishes a new consortium within the Government Operations Agency to develop a framework for creating a public computing cluster. The consortium, called CalCompute, will advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by fostering research and innovation.
Safety: Creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services.
Accountability: Protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.
Responsiveness: Directs the California Department of Technology to annually recommend appropriate updates to the law based on multistakeholder input, technological developments, and international standards.
tadfisher•42m ago
Comprehensive AI regulation is way too premature at this stage, and do note that California is not the sovereign responsible for U.S. copyright law.
egorfine•24m ago
Drives AI innovation out of California.
egorfine•14m ago
The Internet is becoming fragmented. :-(
drivebyhooting•9m ago
As it is, I would never pay for an AI-written textbook. And yet who will write the textbooks of tomorrow?
andy99•3m ago
Neither would I. You've just demonstrated one of the many reasons that any kind of LLM tax that redistributes money to supposedly aggrieved "creators" is a bad idea.
While this is by no means the only argument, or even one of the top ones: if an author has a product clearly differentiated from LLM-generated content (which all good authors do), why should they also get compensated for the mere existence of LLMs? The whole thing is just "someone is making money in a way I didn't think about, not fair!"
ronsor•2m ago
You're not getting a cent from OpenAI, and the government isn't going to do anything about it. Just get over it.
freedomben•7m ago
(not directing these questions at you specifically, though if you know I'd certainly love to hear your thoughts)
isodev•40m ago
And when the AI bubble pops, does it also prevent corps from getting themselves bailed out with taxpayer money?