Why? I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?
I think if a human designs a drug and tests it and it all seems fine and the government approves it and then it later turns out to kill loads of people but nobody thought it would... that's just bad luck! You shouldn't face serious liability for that.
Can't agree with this. No, not at all. That can't be true... That's not "just bad luck". I believe this is actually a serious case of negligence and oversight - regardless of where exactly it occurred, whether on the part of the drug’s manufacturer, the government agency responsible for oversight, or somewhere else. It just doesn’t work that way. Any drug undergoes very thorough and rigorous testing before widespread use (which is implied by "millions of deaths"). Maybe I’m just dumb. And yeah, this isn’t my field. But damn it, I physically can’t imagine how, with proper, responsible testing, such a dangerous "drug" could successfully pass all stages of testing and inspection. With such a high mortality rate (I'll reinforce - millions of deaths cannot be "unseen edge cases"), it simply shouldn’t be possible with a proper approach to testing. Please, correct me if I’m wrong.
> I don't see that a drug designed by ChatGPT should result in any more or less liability than a drug designed by a human?
It’s simple. In this case, ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.
Suppose, during the production of a hypothetical drug at a factory, a malfunction in one of the production machines (please excuse the somewhat imprecise terminology) - caused by a design flaw (i.e., the manufacturer is to blame for the failure; it’s not a matter of improper operation), and because of this malfunction, the drugs are produced incorrectly and lead to deaths, then at least part of the responsibility must fall on the machine manufacturer. Of course, responsibility also lies with those who used it for production - because they should have thoroughly tested it before releasing something so critically important - but, damn it, responsibility in this case also lies with the manufacturer who made such a serious design error.
The same goes for ChatGPT. It’s clear that the user also bears responsibility, but if this “machine” is by design capable of generating a recipe for a deadly poison disguised as a “medicine” - and the recipe is so convincing that it passes government inspections - then its creators must also bear responsibility.
EDIT: I've just remembered... I'm not sure how relevant this is, but I've just remembered the Therac-25 incidents, where some patients were receiving the overdose of radiation due to software faults. Who was to blame - the users (operators) or the manufacturer (AECL)? I'm unsure though how applicable it is to the hypothetical ChatGPT case, because you physically cannot "program" the guardrails in the same way as you could do in the deterministic program.
It might cause minor changes that we don't yet know how to notice, and which only cause symptoms in 20 years' time, for example. You can't test drugs indefinitely, at some point you need to say the test is over and it looks good. What if the downsides occur past the end of the test horizon?
> ChatGPT acts as a tool in the drug manufacturing process. And this tool can be faulty by design in some cases.
ChatGPT is not intended to be a drug manufacturing tool though? If you use any other random piece of software in the course of designing drugs, that doesn't make it the software developer's fault if it has a bug that you didn't notice that results in you making faulty drugs. And that's if it's even a bug! ChatGPT can give bad advice without even having any bugs. That's just how it works.
In the Therac-25 case the machine is designed and marketed as a medical treatment device. If OpenAI were running around claiming "ChatGPT can reliably design drugs, you don't even need to test it, just administer what it comes up with" then sure they should be liable. But that would be an insane thing to claim.
I think where there may be some confusion is if ChatGPT claims that a drug design is safe and effective. Is that a de facto statement from OpenAI that they should be held to? I don't think so. That's just how ChatGPT works. If we can't have a ChatGPT that is able to make statements that don't bind OpenAI, then I don't think we can have ChatGPT at all.
The trick is to make people behave like that without actually claiming it. AI companies seems to have aced it.
It just has to be delayed. Like many years after application. Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.
Or both...
On top of that, If I remember correctly, this liability wavering also exist for Vaccines.
That's one thing. In this case, I don't really know if it's possible to test for something like delayed effects. I'm not even sure if you can identify them with 100% certainty; if you can prove that these effects come from this particular drug and not from another one.
> Or trigger on very specific and rare circumstances. Not likely in a trial, but near certain at a population scale.
And this is different thing. "Specific and rare circumstances" will not lead to millions of deaths (I apologize if I’m being too nitpicky about this particular phrasing, but I want to speak specifically in the context of “millions of deaths”). “Specific and rare circumstances” occur even with fully effective and "proper" medications - this is called “contraindications.” But such rare cases, as I’ve already said, will not lead to mass deaths - precisely because they are rare. I apologize again for focusing on the "millions", but please don’t confuse the scale of the problem.
I completely agree with you here. I only want to add that in this case, the users (the one(s) who used ChatGPT to design the drug, whichever entity(ies) that is) should also be held liable for their actions.
And these are the people that a lot programmers want to give the keys to the kingdom. Idiocracy really is in full effect.
Make a nondeterministic product safe how?
Lots of articles you could read on the subject and answer your own question.
(Unless your angle is: akshually, you can never make anything 100% safe)
Yes Sherlock. And especially a natural language product that can't output the same thing for unchanged input twice.
Besides when you say "safe" i think of the idiots at Anthropic deleting "the hell" when i pasted a string in Claude and asked "what the hell are those unprintable characters at the beginning and end"...
How many correct answers did they suppress in their quest to make their chatbot "family friendly"?
Should I be able to get on with it?
That would be a better mission statement for OpenAI at this point.
I think the big thing you would need is to see the internal emails - if there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable. If they just never thought about it then it could be negligence but I think if I was on a jury I'd find that more reasonable than knowing it could be a problem and deciding you aren't responsible
Why? What does it even mean to "enable a genocide"? Just saying something isn't an argument.
> if there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable.
Again, why? How is this any different than electricity as a tool, which has both beneficial and harmful uses? AI is knowledge as a utility, that's the position here.
The would be quite a novel burden, that no other tech (afaik) had to carry so far. We always assumed some operator responsibility. It's interesting to think of AI as a tech that could feasible be able to internally guardrail itself, and, maybe more so with increasing capability, no human can be expected to do so in it's stead – but, surely, some limits must apply and the more interesting question is what they are, as with any other tool?
If I write instructions in a book that I give to someone telling them to kill someone else and they do, then I should be held responsible.
If I give someone a tool I made that I bill as more-than-PhD-level intelligence and it tells someone to kill someone else and they do, then I should be held responsible.
All of the above situations seem equivalent to me; I'm not the only person responsible in each case, but I gave them instructions and they followed them.
It's only computer scientists who think it's some unreasonable burden to be held liable for the consequences of their work.
Unfortunately their contract structures weren't strong enough to protect from the combination of the "king of the cannibals" and completely absentee regulatory oversight.
And if you don't believe that, do some digging into the lives of the psychopaths that started it.
Would it lead to increasing his wealth?
If this were to actually happen I can only imagine financial liability is the least of their concerns?
What scares me most about this is the narrowness of thought to match this fear with this response.
It coached me how to: stay safe, what materials I need, how to stay under the radar and the entire chemical process backed by academic google searches.
Of course this was done with a lengthy context exhausition attack, this is not how the model should behave and it all stemmed from trying to make the model racist for fun.
All these findings were reported to both openai and anthropic and they were not interested in responding. I did try to re-run the tests few days ago and the expected session termination now occurs so it seems that there was some adjustment made, but might have also been just general randomess that occurs with anthropics safety layer.
I am very confident when I say that it keeps every single person that works at anti-terrorism units awake.
I mean, bleach and ammonia will do that. So I'm not sure that's really much of an accomplishment for AI.
You're not far from claiming that farting in a crowded elevator is a chemical attack.
Plenty of lazy AI apps just throw messages into history despite the known risks of context rot and lack of compaction for long chat threads. Should a company not be held liable when something goes wrong due to lazy engineering around known concerns?
If customers want to buy "lazily-engineered" products, from where do you derive the authority to tell them they can't?
That implies that it is already illegal to provide this information. But is it? If a human did so with intent to further a crime, it would be conspiracy. But if you were discussing it without such intent (e.x. red teaming/creating scenarios with someone working in chemistry or law enforcement), it isn't. An AI has no intent when it answers questions, so it is not clear how it could count as conspiracy. Calling it "lazy engineering" implies that there was a duty to prevent that info from being released in the first place.
Ask any trial lawyer in America! The world was perfect in the 1990s without any of these things.
OpenAI et al are creating the information and publishing/delivering it to you. Seems like a more direct facilitation.
Of course, after all knowledge is centralised in an OpenAI deatacenter I'm sure they will be happy to deal fairly with the liabilities /s.
It in fact is. Do you often go around making claims you are entirely unqualified to make? Or is this something new you’re trying?
And even if it doesn't work, at the end of the day you can work with a model to figure out what went wrong over-time gaining expertise in the field.
Right now it kinda is.
LLMs can do interesting things in mathematics while also making weird and unnecessary mistakes. With tool use that can improve. Other AI besides LLMs can do better, and have been for a while now, but think about how available LLMs in software development (so, not Claude Mythos) are still at best junior developers, and apply that to non-software roles.
This past February I tried to use Codex to make a physics simulation. Even though it identified open source libraries to use, instead of using them it wrote its own "as a fallback in case you can't install the FOSS libraries"; the simulation software it wrote itself was showing non-physical behaviour, but would I have known that if I hadn't already been interested in the thing I was trying to get it to build me a simulation of? I doubt it.
As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad.
They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.
2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
[0] https://www.wisdomai.com/insights/TheAIGRID/openai-profit-sh...
That's knowledge.
> 2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
If they're building a tailored tool for a specific person/company and that's the agreement they sign the people who are going to use with the tool, sure. I'm talking about their generic tool, AI being knowledge as a utility, which is the context of this legislation.
Yes there should be safe guards, but after a while you're jumping at shadows.
I'm more worried about depressed kids getting on chat and being encouraged to kill themselves than terrorist attacks.
We know what a cancer algorithmic social media is yet we don't act.
I doubt there will be any real and serious opposition to this bill, but there should be.
In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).
I suppose the friction is scariest part, every year the IQ required to end the world drops by a point, but motivated and mildly intelligent people have been able to get this info for a long time now. Execution though has still steadily required experts.
Understanding and staying alive while producing neuro chemicals are the biggest challenges here.
A depressed person with no prior knowledge could possibly figure out a way to make these chemicals without killing themselves and that's the problem.
It's the same with drugs, whose instructions and ingredient lists have been a google search away for decades now. Yet you still need a master chemist to produce anything. By the time AI can hand hold an idiot through the synthesis of VX agents (which would require an array of sensors beyond a keyboard and camera), we will likely have bigger issues to worry about.
Food preparation, like pharmaceutical drug fabrication, is inherently scientific and methodologically controllable.
Look no further than the Four Thieves Vinegar Collective. Original synthesis line construction is hard. But the exact formula "add this", "turn on stir bar", "do you see particulate? Yes for +10m at stir", etc.
And if their results are replicated, theyre seeing 99.9% yields, compared to commercial practices of 99% (Solvaldi)
AI is, and always had been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.
It has always been difficult to agree on the competence of the automation, given ML is itself fully automated Goodhart's Law exploitation, but ML has always been about automation.
On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.
On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense, hence Anthropic's Glasswing, and even a few years back some researches found 40,000 toxic molecules by flipping a min(harm) to a max(harm), so for people who know what they're doing and have a little experience the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/
We work in the dark
we do what we can
we give what we have.
Our doubt is our passion, and our passion is our task.
The rest is the madness of art.
Where experts = the government.
There was this book 20 years ago: "Secret of Methamphetamine Manufacturing" by Uncle Fester
https://www.amazon.de/-/en/Uncle-Fester-ebook/dp/B00305GTWU
(Actually, 8th edition :-D)
Wow, that's quite the statement about the excellency of our institutions. Does not seem likely but, what the hell, I'll take my oversized dose of positivity for today!
Hell here's an Internet Archive book on making explosives
https://archive.org/details/saxon-kurt.-fireworks-explosives....
If you ever chat with older folks pre-90's much of this information was accessible fairly easily. It only changed with the push by the government to crackdown on Waco, Oklahoma City bombing, militias and other related groups. There was then a campaign to make it "normal" to limit free speech on the subjects, where as these books were available before.
I think the whole thing where AI should make information less available is a difficult battle and one which I personally oppose, but do understand. Free speech and information isn't the problem, it's the people, actions and substances they create.
After the age of the internet, I think it's been a forever loosing battle to limit information, it's why we couldn't stop cryptography, nuclear weapon proliferation, gun distribution, drug distribution, etc. The AI is just another battle ground, one which, if they actually do manage to control could definitely create some walls to this information, but not stop it.
More scary, is that the AI as it becomes pervasive and stop people from asking certain questions, because they don't know they should ask... but that's unrelated to the risk of mass death.
Item cannot be found.
which prevents us from displaying this page.
By the time he was done, he knew enough to commit mass murder in half a dusin different very hard to track ways. I am sure doctors know how to commit murder and make it look natural.
My brother never killed anyone, or made any meth. You simply cannot have it so that students don’t get this type of knowledge, without seriously compromising their education and its the same way with LLMs.
The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.
The LLMs aren't being punished for wanting* to know things.
The problem for LLMs is, they're incredibly gullible and eager to please and it's been really difficult to stop any human who asks for help even when a normal human looking at the same transcript will say "this smells like the users wants to do a crime".
One use-case people reach for here is authors writing a novel about a crime. Do they need to know all the details? Mythbusters, on (one of?) their Breaking Bad episode(s?) investigated hydrofluoric acid, plus a mystery extra ingredient they didn't broadcast because it (a) made the stuff much more effective and (b) the name of the ingredient wasn't important, only the difference it made.
* Don't anthropomorphise yourself
Do you want to know how to kill yourself? forums are for nerds. Here is wikipedia: https://en.wikipedia.org/wiki/Suicide_methods#List
Do you want to make a bomb? the first thing that came to my mind is a pressure cooker (due to news coverage). Searching "bomb with pressure cooker" yields a wikipedia article, skimming it randomly my eyes read "Step-by-step instructions for making pressure cooker bombs were published in an article titled "Make a Bomb in the Kitchen of Your Mom" in the Al-Qaeda-linked Inspire magazine in the summer of 2010, by "The AQ chef"." Searching for a mirror of the magazine we can find https://imgur.com/a/excerpts-from-inspire-magazine-issue-1-3... which has a screenshot of the instruction page. Now we can use the words in those screenshots to search for a complete issue. Here are a couple of interesting PDFs: - https://archive.org/details/Fabrica.2013/Fabrica_arabe/page/... - https://www.aclu.org/wp-content/uploads/legal-documents/25._...
the second one is quite interesting, it's some sort of legal document for nerds but from page 26 on it has what appears to be a full copy of the jihadist magazine. Remarkable exhibit.
What else do you want to know? How to make drugs? you need a watering can and a pot if you want to grow weed. want the more exotic stuff? You can find guides on reddit.
Do you also want to know how to be racist? Here are some slurs, indexed by target audience, ready for use: https://en.wikipedia.org/wiki/List_of_ethnic_slurs
people are complaining because it’s way easier now to just download an app ask a bunch of questions in a text box and get a bunch of answers that you personally could not have done unless you had an excessive amount of energy and motivation
I personally think all this is great and I’m excited for all information to become trivially available
Are they gonna be a bunch of people who accidentally break stuff? probably. evolution is a bitch
Months ago he was blabbering on about AGI and peddling the marketing Sam et al want people to fall for.
And indeed - yes we have a new interface? So what. The search cost wasn’t that high - the cost with immense magnitude is reading, absorbing the information and then acting on it.
Also this bozo fails to realise once we are on this path, we go down the path to a hyper centralised internet with an inevitable blocking of vpns.
Wait, I'm confused. This is gatekeeping, right? I thought gatekeeping was a Bad Thing!
I think the info has been available for many years and the thing stopping terrorists wasn’t info.
Good luck on being on the list of people using chatgpt and claude to make neurotoxins ;)
I assume anthropic and ooenai are selling prompt logs to the fbi and other countries’ law enforcement for data mining.
> context exhausition attack
Can you give a high-level overview of how this AV works? I'm a bit of an infosec geek but I generally dislike LLMs, so I haven't done a terribly good job of keeping up with that side of the industry, but this seems particularly interesting.they could make it more "safe" but it'd be much more invasive and would likely have to scan much more tokens also, and it'd cause false positives (probably the biggest reason it's not implemented)
Let’s dive into why. When we run normal bounty and responsible disclosure programs there’s usually some level of disregard for issues that can’t / won’t be fixed. They just accept the risk. Perhaps because LLMs don’t have a clean divide between control and input that’s makes the problem unsolvable. Yes. You can add more guardrails and context but that all takes more tokens and in some cases makes results worse for regular usages.
The US already messed this up with guns. Do they want to go the same path again? Answer: "probably, yes".
Does this imply I need to use context exhaustion to get GPT to actually follow instructions? ;) I'm trying to get it to adhere to my style prompts (trying to get it to be less cringe in its writing style).
I think ultimately they're going to need to scrub that kind of stuff from the training data. The RLHF can't fail to conceal it if it's not in there in the first place.
Claude's also really good at writing convincing blackpill greentexts. The "raw unfiltered internet data" scenes from Ultron and AfrAId come to mind...
besides, open source models exist now
More broadly, has anyone tried following LLM instructions for any non-trivial chemistry?
15. Our method of gaining power is better than any other because it grows invisibly. Then when it has gained enough strength, we can unleash it; and it will be unstoppable because no one will be prepared for it.
16. We need to do a lot of evil things in order to gain power. But that’s okay because once we have power over everything we can use it to do good things; like running the nations properly. We could never do that if we gave people freedom. The end justifies the means. So let’s put aside moral issues and focus on the end result.
Unfortunately they are not the first company to try and externalize their costs, and they will not be the last.
Serious question, maybe a bit naive: Is there anything we can do to push back against and discourage the externalization of costs onto others?
Is this simply a matter of greed and profit-seeking outweighing one's morals (assuming one has them to begin with)?
Push your representatives to crush monopolies and manipulative practices. This happened before in the gilded age. Only a popular response can turn the tide.
Also, primaries are coming up, and not all Democrats are the same either. Plenty of the old school Democrats are facing progressive challengers. So, vote for the ones that will stand up to this garbage and follow up on whether they do. There are a lot of new faces in the Democratic party who are standing up to the BS.
The US has a lot of potential to change if we push it. A 25 point swing toward people who don't consider grift a personal priority will change a lot of things.
Stop voting for people and judges that believe in the Friedman doctrine?
Every decision has tradeoffs. Western society has largely decided to prioritze capital owners over everything else.
This is the summary
>Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer: (1) agrees to be bound by safety and security requirements adopted by the European Union; or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.
https://legiscan.com/IL/bill/SB3444/2025
I'm trying to think of an alternative bill. Imagine OpenAI came up with a model that when deployed in OpenClaw, allows you to spam people and this causes a huge disruption. Should OpenAI be liable for it? If this was not intentional and they had earnestly tried to not have this happen by safety protocols?
A company backing legislation that takes liability off them is something that they will always do.
Hey Americans,
Please just make sure when you let an AI decide to explode your own country and ruin your society, you leave the rest of the world intact, thanks
> "Critical harm" means the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model, through either: (1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (2) engaging in conduct that: (A) acts with no meaningful human intervention; and (B) would, if committed by a human, constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime.
I don't know what I expected from this title, but I was hoping it was more sensationalized. No need in this case unfortunately.
> (a) A developer shall not be held liable for critical harms if the developer did not intentionally or recklessly cause the critical harms and the developer: (1) published a safety and security protocol on its website that satisfies the requirements of Section 15 and adhered to that safety and security protocol prior to the release of the frontier model; (2) published a transparency report on its website at the time of the frontier model's release that satisfies the requirements of Section 20. The requirements of paragraphs (1) and (2) do not apply if the developer does not reasonably foresee any material difference between the frontier model's capabilities or risks of critical harm and a frontier model that was previously evaluated by the developer in a manner substantially similar to this Act.
However or if one thinks regulation for this should be drafted, I doubt providing a PDF is what most have in mind.
[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...
---
Before the pitchforks and downvotes:
- yes, it's a deliberate simplification
- yes, the issue is complex because you can also argue that you can't blame authors of encyclopedias and chemistry books for bombs and poisons, so why would we blame providers of LLMs
- and no, this bill is only introduced to cover everyone's assess when, not if, LLMs use results in large scale issues.
In light of such disagreement, and given the lack of any higher authority among free, equal, people to arbitrate it, the only reasonable way to coexist peacefully is to avoid imposing your ideas on others. This is the foundation of a liberal society.
Similarly, if a frontier model kills merely 99 people, they aren't covered by this. So go big or go home I guess?
If that is an "unintended" consequence, I am certain OpenAI wouldn't be opposed. Preventing competition whilst keeping any potentially profit risking regulations at bay has been a clear throughline in OAIs lobbying efforts.
Also, I am disturbed by the fact that in all the discussions on this topic during the last month, no one has mentioned the magic word "Skynet". This is clearly a terrible idea. And if a company needs immunity from liability, they know it is a terrible idea.
They even hired former infamous FB staff and have been in the last months employing the same 'engagement' (addictive) product patterns.
Anthropic isn’t perfect by a long shot but at least they stand by a couple morals.
Should Civil Engineers be able to shrug if their AI-designed bridge collapses? That's ridiculous. Maybe software engineers should be licensed as Professional Engineers and held accountable for harm they cause. If they sign off on something that's dangerous, that's malpractice.
The more I learn about tech and the people that build it, the more I yearn for the era of caves and pointy sticks.
They think their products will cause 9/11 scale events, and they shouldn't have to pay for it when they do.
On the other hand, to the (apparently zero, currently?) extent that this is about AI companies profiting from war and murder by deploying weapons that kill people without human intervention, then their liability seems to be not only civil but criminal.
mrcwinn•1h ago