
California governor signs AI transparency bill into law

https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/
79•raldi•1h ago
https://sb53.info/

Comments

toxicdevil•1h ago
Copied from the end of the page:

What the law does: SB 53 establishes new requirements for frontier AI developers, creating stronger:

Transparency: Requires large frontier developers to publicly publish a framework on its website describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework.

Innovation: Establishes a new consortium within the Government Operations Agency to develop a framework for creating a public computing cluster. The consortium, called CalCompute, will advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by fostering research and innovation.

Safety: Creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services.

Accountability: Protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.

Responsiveness: Directs the California Department of Technology to annually recommend appropriate updates to the law based on multistakeholder input, technological developments, and international standards.

hodgesrm•55m ago
What real-world problem does any of this solve? For instance, how does it protect my IP from being vacuumed up and used by LLMs without permission from or payment to me?
observationist•52m ago
It makes an increasingly irrelevant and performative California state government feel like they matter to the rest of the world, mostly.
fourseventy•50m ago
None. It's typical California bullshit regulation
tadfisher•42m ago
The problem it solves is providing any sort of baseline framework for lawmakers and the legal system to even discuss AI and its impacts based on actual data instead of feels. That's why so much of it is about requiring tech companies to publish safety plans, transparency reports and incidents, and why the penalty for noncompliance is only $10,000.

Comprehensive AI regulation is way too premature at this stage, and do note that California is not the sovereign responsible for U.S. copyright law.

iinnPP•12m ago
If I had a requirement to either do something I didn't want to do or pay a nickel, I'd just fake doing what needed to be done and wait for the regulatory body to fine me 28 years later after I exhausted my appeal chain. Luckily, inflation turned the nickel into a penny, now defunct, and I rely on the ability to pay debts in legal currency to use another 39 years of appeals.
egorfine•24m ago
> What real-world problem does any of this solve?

Drives AI innovation out of California.

micromacrofoot•20m ago
I think it applies to companies providing services to California (based on how much data from Californians they process), not just those operating within the state, similar to the CCPA.
egorfine•14m ago
Sigh, one more geo region to block. One more region to remember.

The Internet is becoming fragmented. :-(

andy99•22m ago
Your rent-seeking is not a real-world problem. I'm sceptical about the bill; I would be much more so if it were just some kind of wealth redistribution to the loudest complainers.
drivebyhooting•9m ago
I’d rather pay real human authors and artists for their creativity than openAI.

As it is, I would never pay for an AI written textbook. And yet who will write the textbooks of tomorrow?

andy99•3m ago
> I’d rather pay real human authors and artists for their creativity than openAI.

So would I. You've just demonstrated one of the many reasons that any kind of LLM tax that redistributes money to supposedly aggrieved "creators" is a bad idea.

While by no means the only argument, or even one of the top ones: if an author has a product clearly differentiated from LLM-generated content (which all good authors do), why should they also get compensated because of the existence of LLMs? The whole thing is just "someone is making money in a way I didn't think about, not fair!"

ronsor•2m ago
I'd rather not pay OpenAI either. I'll stick with my open-weights models, and I'd rather anachronistic rent-seeking not kill those.

You're not getting a cent from OpenAI, and the government isn't going to do anything about it. Just get over it.

podgietaru•20m ago
Protection for whistleblowers - which might expose nefarious actions
freedomben•7m ago
I think protection for whistleblowers, both in AI and in general, is a good thing, but ... do we really need a special carveout for AI whistleblowers? Do we not already have protections for them, or are they insufficient? And if we don't have them already, why not pass general protections instead of something so hyper-specific?

(not directing these questions at you specifically, though if you know I'd certainly love to hear your thoughts)

bloodyplonker22•19m ago
It solves a very real real world problem: putting more money into the hands of government officials.
christkv•44m ago
So they're going to give Nvidia a bunch of money that they don't have, to build their own LLM-hosting data center?
isodev•40m ago
This is so watered down and full of legal details for corps to loophole into. I like the initiative, but I wouldn’t count on safety or model providers being forced to do the right thing.

And when the AI bubble pops, does it also prevent corps from getting themselves bailed out with taxpayer money?

freedomben•27m ago
At least a bunch of lawyers and AI consultants (who conveniently, are frequently also lobbyists and consultants for the legislature) now get some legally mandated work and will make a shit ton more money!
cyanbane•19m ago
I don't see these, did the URI get switched? Anyone have orig?
ryandrake•11m ago
So the significant regulatory hurdle for companies that this SB introduces is... "You have to write a doc." Please tell me there's actual meat here.
TheAceOfHearts•57m ago
I found this website with the actual bill text along with annotations [0]. The section 22757.12. seems to contain the actual details of what they mean by "transparency".

[0] https://sb53.info/

dang•31m ago
Thanks! We'll add that link to the top text.
davidmckayv•33m ago
This is censorship with extra steps.

Look at what the bill actually requires. Companies have to publish frameworks showing how they "mitigate catastrophic risk" and implement "safety protocols" for "dangerous capabilities." That sounds reasonable until you realize the government is now defining what counts as dangerous and requiring private companies to build systems that restrict those outputs.

The Supreme Court already settled this. Brandenburg gives us the standard: imminent lawless action. Add in the narrow exceptions like child porn and true threats, and that's it. The government doesn't get to create new categories of "dangerous speech" just because the technology is new.

But here we have California mandating that AI companies assess whether their models can "provide expert-level assistance" in creating weapons or "engage in conduct that would constitute a crime." Then they have to implement mitigations and report to the state AG. That's prior restraint. The state is compelling companies to filter outputs based on potential future harm, which is exactly what the First Amendment prohibits.

Yes, bioweapons and cyberattacks are scary. But the solution isn't giving the government power to define "safety" and force companies to censor accordingly. If someone actually uses AI to commit a crime, prosecute them under existing law. You don't need a new regulatory framework that treats information itself as the threat.

This creates the infrastructure. Today it's "catastrophic risks." Tomorrow it's misinformation, hate speech, or whatever else the state decides needs "safety mitigations." Once you accept the premise that government can mandate content restrictions for safety, you've lost the argument.

nubg•32m ago
Was this comment written with the assistance of AI? I am asking seriously, not trying to be snarky.
davidmckayv•31m ago
No. I just write well.
freedomben•20m ago
You clearly already know this, but you do in fact write very well!
davidmckayv•18m ago
Thank you!
troupo•30m ago
> That sounds reasonable until you realize the government is now defining what counts as dangerous and requiring private companies to build systems that restrict those outputs.

Ah yes, the poor, poor innocent private companies... that actually need to be told again and again by governments to stop doing harmful things.

josefritzishere•23m ago
I've never thought censorship was a core concern of AI. It's just regurgitating from an LLM. I vehemently oppose censorship, but who cares about AI? I just don't see the use-case.
Animats•12m ago
> Today it's "catastrophic risks." Tomorrow it's misinformation, hate speech, or whatever else the state decides needs "safety mitigations."

That's the problem.

I'm less worried about catastrophic risks than routine ones. If you want to find out how to do something illegal or dangerous, all an LLM can give you is a digest of what's already available online. Probably with errors.

The US has lots of hate speech, and it's mostly background noise, not a new problem.

"Misinformation" is more of a problem, because the big public LLMs digest the Internet and add authority with their picks. It's adding the authority of Google or Microsoft to bogus info that's a problem. This is a basic task of real journalism - when do you say "X happened", and when do you say "Y says X happened"? LLMs should probably be instructed to err in the direction of "Y says X happened".

"Safety" usually means "less sex". Which, in the age of Pornhub, seems a non-issue, although worrying about it occupies the time of too many people.

An issue that's not being addressed at all here is using AI systems to manipulate customers and provide evasive customer service. That's commercial speech and consumer rights, not First Amendment issues. That should be addressed as a consumer rights thing.

Then there's the issue of an AI as your boss. Like Uber.

babypuncher•9m ago
If there's one thing I've learned watching the trajectory of social media over the last 15 years, it's that we've been way too slow to assess the risks and harmful outcomes posed by new, rapidly evolving industries.

Fixing social media is now a near impossible task as it has built up enough momentum and political influence to resist any kind of regulation that would actually be effective at curtailing its worst side effects.

I hope we don't make the same mistakes with generative AI

throwworhtthrow•37s ago
LLMs don't have rights. LLMs are tools, and the state can regulate tools. Humans acting on behalf of these companies can still, if they felt the bizarre desire to, publish assembly instructions for bioweapons on the company blog.
pluc•21m ago
Still nothing about how they stole copyrighted works for profit eh?
srj•1m ago
Reading the text it feels like a giveaway to an "AI safety" industry who will be paid well to certify compliance.

The Game Engine that would not have been made without Rust

https://blog.vermeilsoft.com/2025-09-rust-game-engine/
1•zdw•2m ago•0 comments

Beneath the GDP, a Recession Warning

https://www.wsj.com/opinion/beneath-the-gdp-a-recession-warning-fff133de
1•paulpauper•6m ago•0 comments

Bans on highly toxic pesticides could save lives from suicide

https://ourworldindata.org/pesticide-bans-suicide-prevention
1•kamaraju•6m ago•0 comments

Million-year-old skull rewrites human evolution, scientists claim

https://www.bbc.com/news/articles/cdx01ve5151o
1•paulpauper•7m ago•0 comments

Essential books for modern technology leaders

https://www.hyperact.co.uk/blog/10-essential-books-for-modern-tech-leaders
1•imjacobclark•10m ago•0 comments

YouTube settles lawsuit challenging Section 230 for $25.4M

https://www.courtlistener.com/docket/60643878/178/trump-v-youtube-llc/
2•1vuio0pswjnm7•12m ago•1 comments

Landlords Demand Tenants' Workplace Logins to Scrape Their Paystubs

https://www.404media.co/landlords-demand-tenants-workplace-logins-to-scrape-their-paystubs/
1•throwaway81523•13m ago•0 comments

Can LIGO Detect Daylight Savings Time?

https://arxiv.org/abs/2509.11849
3•zdw•14m ago•0 comments

People World Lose and H

https://www.facebook.com/hidalberto.caratini.santiago.2025
1•gelatonevado•15m ago•0 comments

Afghanistan in total Internet blackout caused by Taliban

https://mastodon.social/@netblocks/115288230006300457
1•walrus01•16m ago•1 comments

Epigenetics

https://en.wikipedia.org/wiki/Epigenetics
1•nis0s•17m ago•0 comments

Welcome to the Internet (2021) [video]

https://www.youtube.com/watch?v=k1BneeJTDcU
1•andrepd•18m ago•0 comments

Macintosh System 7 Ported To x86 With LLM Help in 3 days

https://github.com/Kelsidavis/System7
4•zdw•18m ago•0 comments

AI Photography Is the Next Big Thing in Digital Imaging

https://techglimmer.io/what-is-ai-photography-and-digital-imaging/
2•kaus_meister•22m ago•2 comments

I Tried Htmx

https://bytecron.me/post/i-tried-htmx
1•srid•22m ago•2 comments

A growing number of U.S. adults report cognitive disability

https://news.yale.edu/2025/09/24/growing-number-us-adults-report-cognitive-disability
3•thinkalone•23m ago•0 comments

Generate Swift code programmatically with declarative syntax

https://github.com/brightdigit/SyntaxKit
1•rmason•24m ago•0 comments

Ask HN: Tomorrow is my first hackathon, any advice?

2•ofou•29m ago•3 comments

Riemannian Geometry and Non-Euclidean Geometry

https://www.preposterousuniverse.com/blog/2015/11/26/thanksgiving-10/
1•programmexxx•30m ago•0 comments

Partijgedrag – A Dutch political voting compass built on public data

https://github.com/van-sprundel/partijgedrag
1•ramon156•32m ago•0 comments

Show HN: Clean metrics for messy coding habits

https://timefly.dev
1•cgonzar3•33m ago•3 comments

Google to merge Android and ChromeOS in 2026

https://www.theregister.com/2025/09/25/google_android_chromeos/
5•fork-bomber•35m ago•0 comments

Does Agentic AI imply output goes to infinity?

https://substack.com/inbox/post/174849090
1•mathattack•36m ago•0 comments

Amygdala–liver signalling orchestrates glycaemic responses to stress

https://www.nature.com/articles/s41586-025-09420-1
2•PaulHoule•37m ago•0 comments

Roblox is shutting down discord clone Guilded.gg

https://devforum.roblox.com/t/update-on-guilded-and-communities/3966775
1•HypomaniaMan•37m ago•0 comments

I have been diving deep into the world of FinOps

https://buttondown.com/apievangelist/archive/weekly-api-evangelist-governance-guidance-9568/
1•mooreds•38m ago•0 comments

The Gameboy emulator that runs everywhere (Terminal, Web, Desktop)

https://github.com/raphamorim/gameboy
1•Bogdanp•38m ago•0 comments

Jax: Fast Combinations Calculation

https://github.com/phoenicyan/combinadics
4•phoenicyan•39m ago•0 comments

East Texas man facing October execution will not seek clemency, his lawyer says

https://www.kltv.com/2025/09/25/east-texas-man-facing-october-execution-will-not-seek-clemency-hi...
1•rossant•39m ago•0 comments

Canoeboot: Free, Libre BIOS/UEFI boot firmware

https://canoeboot.org/
4•jethronethro•42m ago•0 comments