Like how California's Prop 65 [0] cancer warnings are useless: they make it look like everything is known to the state of California to cause cancer, which in turn makes people ignore and tune out the warnings because they deliver no signal, only noise. This in turn harms people when they think, "How bad can tobacco be? Even my aloe vera plant has a warning label."
Keep it to generated news articles, and people might pay more attention to them.
Don't let the AI lobby insist that anything an LLM has touched must be labelled, because if the label gets slapped on anything that's even passed through a spell-checker or been saved in Notepad (somehow this is contaminated, lol), then it'll become a useless warning.
> substantially composed, authored, or created through the use of generative artificial intelligence
The lawyers are gonna have a field day with this one. This wording makes it seem like you could do light editing and proof-reading without disclosing that you used AI to help with that.
[0] https://en.wikipedia.org/wiki/1986_California_Proposition_65
This is very predictably what's going to happen, and it will be just as useless as Prop 65 or the EU cookie laws or any other mandatory disclaimers.
Editing and proofreading are "substantial" elements of authorship. Hope these laws include criminal penalties for "it's not just this - it's that!" "we seized Tony Dokoupil's computer and found Grammarly installed," right, straight to jail
It also doesn't work to penalize fraudulent warnings - they simply include a harmless bit of AI to remain in compliance.
How would you classify fraudulent warnings? "Hey chatgpt, does this text look good to you? LGTM. Ship it".
Step 3: regulator prohibits putting label on content that is not AI generated
Step 4: outlets make sure to use AI for all content
Let's call it the "Sesame effect"
Step 1: those outlets that actually do the work see an increase in subscribers.
Step 2.5: 'unlike those news outlets, all our work is verified by humans'
Step 3: work as intended.
>The use of generative artificial intelligence systems shall not result in: (i) discharge, displacement or loss of position
Being able to fire employees is a great use of AI and should not be restricted.
> or (ii) transfer of existing duties and functions previously performed by employees or worker
Is this saying you can't replace an employee's responsibilities with AI? No wonder the article says it is getting union support.
The web novel website RoyalRoad has two different tags that stories can/should add: AI-Assisted and AI-Generated.
Their policy: https://www.royalroad.com/blog/57/royal-road-ai-text-policy
> In this policy, we are going to separate the use of AI for text, into 3 categories: General Assistive Technologies, AI-Assisted, AI-Generated
The first category does not require tagging the story, only the other two do.
> The new tags are as such:
> AI-Assisted: The author has used an AI tool for editing or proofreading. The story thus reflects the author’s creativity and structure, but it may use the AI’s voice and tone. There may be some negligible amount of snippets generated by AI.
> AI-Generated: The story was generated using an AI tool; the author prompted and directed the process, and edited the result.
That might at least offer an opportunity for a news source to compete on not being AI-generated. I would personally be willing to pay for information sources that exclude AI-generated content.
It's kinda funny, the oft-held animosity towards the EU's heavy-handed regulations, when navigating US state law is a complete minefield of its own.
Plus, if you want to mandate it, hidden markers (steganography) emitted directly by the model, so people can independently verify which model generated an article and whether it was written by a human, are probably the only feasible mechanism. But it's not like human news writers are impartial anyway, so I don't even see the point.
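For what it's worth, the usual proposal here is a statistical "green list" watermark rather than literal hidden characters: the generator biases its sampling toward a pseudo-random half of the vocabulary keyed on the previous token, and a detector checks whether that half is over-represented. A minimal sketch of the detection side (the hash partition and threshold are illustrative assumptions, not any vendor's actual scheme):

```python
import hashlib

def green_fraction(tokens):
    """Toy green-list watermark detector.

    For each token, the previous token seeds a hash that assigns half
    the vocabulary to a 'green' set. A watermarking generator prefers
    green tokens, so its output scores well above 0.5, while ordinary
    human text hovers near 0.5.
    """
    green = 0
    for prev, cur in zip(tokens, tokens[1:]):
        digest = hashlib.sha256((prev + "\x00" + cur).encode()).digest()
        if digest[0] % 2 == 0:  # token landed in the 'green' half for this seed
            green += 1
    return green / max(len(tokens) - 1, 1)

sample = "the quick brown fox jumps over the lazy dog".split()
print(green_fraction(sample))  # unwatermarked text: roughly 0.5
```

Real detectors turn this fraction into a z-score over long passages; the catch, as others note, is that light paraphrasing degrades the signal.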
This is a concept at least in some EU countries, that there has to always be one person responsible in terms of press law for what is being published.
I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or, for that matter, the internet). The legitimacy of the US state appears to be in a sort of free-fall, for example.
It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.
And even amidst a diversity of views and assessments of the future of the state, there seems to be near consensus on the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.
Because no one believes these laws or bills or acts or whatever will be enforced.
But I actually believe they will be enforced, in the worst way possible: honest players will be punished disproportionately.
PlatoIsADisease•47m ago
Status quo bias is a real thing, and we are seeing those people in meltdown as the world changes around them. They think avoiding AI, putting disclaimers on it, etc. will matter. But they aren't being rational; they are being emotional.
The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.
Llamamoe•45m ago
Economic value or not, AI-generated content should be labeled, and trying to pass it off as human-written should be illegal, no matter how accustomed to AI content people do or don't become.
RobotToaster•40m ago
So many words to say so little, just so they can put ads between every paragraph.
jacquesm•41m ago
With that attitude we would not have voting, human rights (for what they're worth these days), unions, a prohibition on slavery and tons of other things we take for granted every day.
I'm sure AI has its place but to see it assume the guise of human output without any kind of differentiating factor has so many downsides that it is worth trying to curb the excesses. And news articles in particular should be free from hallucinations because they in turn will cause others to pass those on. Obviously with the quality of some publications you could argue that that is an improvement but it wasn't always so and a free and capable press is a precious thing.
wiseowise•38m ago
When your mind is so fried on slop that you start to write like it.
> The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.
Look at all this value created, like *checks notes* scam ads, apps that undress women and teenage girls, tech bros jerking each other off on Twitter, flooding open source with a tsunami of low-quality slop, inflated chip prices, thousands of workers cut as "cost savings", and dozens more.
Cat is out of the bag for sure.