I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also suck at the same time. I feel myself getting stuck in a personalized bubble even more so than with Google.
He literally wrote that. I asked how he knows it's the right direction.
It must be that the treatment worked; otherwise it is more or less just a hunch.
People go "oh yep, that's definitely it" too easily. That is the problem with self-diagnosing. And you didn't even notice it happened...
without more info this is not evidence.
We took him to our local ER, they ran tests. I gave the LLM just the symptom list that I gave to the ER initially. It replied with all the things they tested for. I gave the test results, and it suggested a short list with diagnostics that included his actual Dx and the correct way to test for it.
By the way, unless you used an anonymous mode, I wonder how much the model knew from side channels that could have contributed to suggesting the correct diagnosis...
I use it. Found it to be helpful.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong!). It doesn't have to be obvious leading, just framing the question in terms of mentioning all the symptoms you now know to be relevant, in the order that's diagnosable, etc.
Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.
>checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history
So... exactly the same behavior as human doctors?

The value that folks get from chatgpt for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there, when I hear people talk about talking with chatgpt, Kate Bush's song Deeper Understanding comes immediately to mind.
https://en.wikipedia.org/wiki/Deeper_Understanding?wprov=sfl...
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
We humans have a lot of failure modes.
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among a few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
We had in our family a “doctors are confused!” experience that ended up being that.
I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.
So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist.". I'm just tossing around ideas but look, GPT says I can do this and it's a good fit!
After 3 idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "Design Women's Dresses". I wrote up all the reasons that might be a good pivot (i.e. past experience with landscape design and it's the same idea: you reveal certain elements seductively but not all at once, matching color palettes, textures, etc). Of course GPT says "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses". It was funny but also a bit alarming.
These tools are great. Don't take them too seriously, you can make them say a lot of things with great conviction. It's mostly just you talking to yourself in my opinion.
This goes both ways, too. It’s becoming common to see cases where people become convinced they have a condition but doctors and/or tests disagree. They can become progressively better and better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.
Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.
1. I described the symptoms the same way we described them to the ER the first time we brought him in. It suggested all the same things that the ER tested for.
2. I gave it the lab results for each of the suggestions it made (since the ER had in fact done all the tests they suggested).
After that back and forth it gave back a list of 3-4 more possibilities and the 2nd item was the exact issue that was revealed by radiology (and corrected with surgery).
And even with all of that info, they often come to the wrong conclusions at times. Doctors play a critically important role in our society, and during covid they risked their lives for us more than anyone else; I do not want to insult or diminish the amount of hard work doctors do for their society.
But worshipping them as holier-than-thou gods is bullshit, a conclusion almost anyone who has spent years going back and forth with various doctors will eventually come to.
Having an AI assistant doesn't hurt, in terms of medical hints. We need to make personal responsibility popular again; in society's obsession with making everything "idiot proof" or "baby proof" we keep losing all sorts of useful and interesting solutions, because our politicians have a strong itch to regulate anything and everything they can get their hands on, to leave a mark on society.
I'd say the same about AI.
And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.
It's an imperfect situation for sure, but I'd like to see more data.
I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.
It should empower and enable informed decisions, not make them.
In my opinion, AI should do both legal and medical work; keep some humans for decision-making, and let the rest of the doctors be surgeons instead.
Seriously, the amount of misinformation it has given me is quite staggering - telling me things like "you need to fill your drainage pipes with sand before pouring concrete over them…". The danger with these AI products is that you have to really know a subject before they're properly useful. I find this with programming too. Yes, it can generate code, but I've introduced some decent bugs when over-relying on AI.
The plumber I used laughed at me when I told him about the sand thing. He has 40 years of experience…
This makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?
I have very mild cerebral palsy[1], the doctors were wrong about so many things with my diagnosis back in the mid to late 70s when I was born. My mom (a retired math teacher now with an MBA back then) had to go physically to different libraries out of town and colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that’s almost impossible via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated in the top of my class (south GA so take that as you will)
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
The chance of different models hallucinating the same plausible-sounding but incorrect building codes, medical diagnoses, etc., would be incredibly small, due to architecture differences, training approaches, etc.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
If I'd followed any of the suggestions I'd probably be in the ER. Even after I pointed out issues and asked it to improve, it'd come up with more and more sophisticated ways of doing the same fundamentally dangerous actions.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
C'mon, just use the CNC. Seriously though, what kind of cuts?
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from blue-collar trades often seems to fall into those three buckets. For subjects in healthcare, on the other hand, there are rooms' worth of peer-reviewed research, textbooks, meta-studies, and official sources.
Really? You think filling your pipe with sand is comparable to backfilling a trench?
It's pretty obvious why an LLM would be confused by this.
the damage certain software engineers could do certainly surpasses what most doctors could do
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it
Is this an actual technical change, or just legal CYA?
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
Being clear that not all lawyers or doctors (in this example) are experts in every area of medicine and law, and knowing what to learn about and ask about, is usually a helpful approach.
While professionals have bodies for their standards and ethics, like most things it can represent a form of income, and depending on the jurisdiction, profitability.
Modern LLMs are already better than the median doctor diagnostically. Maybe not in certain specialties, but compared to a primary care physician available to the average person, I'd take the LLM any day.
Doomers in control, again.
See if you can find "medical advice" ever mentioned as a problem:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-proble...
Science should be clearly labelled for those that can read. Everyone else can go eat blueberry leaves if they so choose.
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double- and triple-checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't say "And this is the truth" in their results; LLM hypers effectively do.
It's called "false advertising".
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...
I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.
I guess the legal risks were large enough to outweigh this
I’m waiting for the billboards “Injured by AI? Call 1-800-ROBO-LAW”
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
I've learned through experience that telling a doctor "I have X and I would like to be treated with Y" is not a good idea. They want to be the ones who came up with the diagnosis. They need to be the smartest person in the room. In fact I've had doctors go in a completely different direction just to discredit my diagnosis. Of course in the end I was right. That isn't to say I'm smarter, I'm not, but I'm the one with the symptoms and I'm better equipped to quickly find a matching disease.
Yes some doctors appreciate the initiative. In my experience most do not.
So now I just usually tell them my symptoms but none of the research I did. If their conclusion is wildly off base I try to steer them towards what my research said.
So far so good but wouldn't it be nice if all doctors had humility?
This is not about ego or trying to be the smartest person in the room, it's about actually being the most qualified person in the room. When you've done medical school, passed the boards, done your residency and have your own private practice, only then would I expect a doctor to care what you think a correct diagnosis is.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
https://www.ctvnews.ca/health/article/self-diagnosing-with-a...
The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough.
In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.
...
While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice.
“When you do get a response be sure to validate that response,” said Zada.
Which should be standard advice in most situations.
That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner.
Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority in the law or medicine. That’s what moves the needle over to practicing the law/medicine without a license.
I'm pretty sure it's a fundamental issue with the architecture.
LLMs hallucinate because training on source material is a lossy process and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive so few people use those techniques by default. Lowest time to a good enough response is the primary metric.
Journalists oversimplify and fail to ask follow-up questions because, while they can research and cite primary sources, it's slow and expensive in an infinitesimally short news cycle, so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions, so that's the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time don't value them.
You’re right that LLMs favor helpfulness so they may just make things up when they don’t know them, but this alone doesn’t capture the crux of hallucination imo, it’s deeper than just being overconfident.
OTOH, there was an interesting article recently that I’ll try to find saying humans don’t really have a world model either. While I take the point, we can have one when we want to.
Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models
LLMs hallucinate because they are probabilistic by nature, not because the source material is lossy or too big. They are literally designed to introduce some level of "randomness": https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
I'm no ML or math expert, just repeating what I've heard.
"In other words, the primary reason nearly all LLM inference endpoints are nondeterministic is that the load (and thus batch-size) nondeterministically varies! This nondeterminism is not unique to GPUs — LLM inference endpoints served from CPUs or TPUs will also have this source of nondeterminism."
For example, the simple algorithm is_it_lupus(){return false;} could have an extremely competitive success rate for medical diagnostics... But it's also obviously the wrong way to go about things.
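As a rough illustration of that base-rate trap, here is a sketch in Python (the prevalence figure is made up purely for the example, not a real statistic):

    def is_it_lupus(patient):
        # Always answering "no" scores great on accuracy for any rare condition.
        return False

    prevalence = 1 / 2000       # assumed rarity, illustrative only
    accuracy = 1 - prevalence   # fraction of people labelled correctly
    recall = 0.0                # fraction of actual cases ever caught

    print(f"accuracy: {accuracy:.2%}, recall: {recall:.0%}")
    # accuracy: 99.95%, recall: 0%

High headline accuracy tells you almost nothing unless it is paired with how many real cases the system actually catches.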
At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly.
I do expect this state of affairs to last at least until next wednesday.
I don't think it necessarily bears repeating the plethora of ways in which LMs get stuff wrong, esp. considering the context of this conversation. It's vast.
As things develop, I expect that LMs will become more like the current zeitgeist as the effects that have influenced news and other media make their way into the models. They'll get better at smoothing in some areas (mostly technical or dry domains that aren't juicy targets) and worse in others (I expect to see more biased training and more hardcore censorship/steering in future).
Although, recursive reinforcement (LMs training on LM output) might undo any of the smoothing we see. It's really hard to tell - these systems are complex and very highly interconnected with many other complex systems.
It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know.
It's really not bashing, this article isn't too bad, but the bulk of this site's coverage of AI topics skews negative - as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general, and positive reinforcement of regulatory capture related topics. Which only makes sense - they're making money, and want to continue making money, and AI threatens that - they can no longer claim they provide value if they're not providing direct, relevant, novel content, and not zergnet clickbait journo-slop.
Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
Is it also disallowing the use of licensed professionals to use ChatGPT in informal undisclosed ways, as in this article? https://www.technologyreview.com/2025/09/02/1122871/therapis...
e.g. is it only allowed for medical use through an official medical portal or offering?
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that under the Jan 2025 policy, using it to offer legal and medical advice to other people was already disallowed, but with the Oct 2025 update the LLM will stop doling out legal and medical advice completely.
I trust what he says over general vibes.
(If you think he's lying, what's your theory on WHY he would lie about a change like this?)
(e.g. are the terms of service, or excerpts of it, available in the system prompt or search results for health questions? So a response under the new ToS would produce different outputs without any intentional change in "behaviour" of the model.)
This is from Karan Singhal, Health AI team lead at OpenAI.
Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
'An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed."'
https://openai.com/en-GB/policies/usage-policies/
Your use of OpenAI services must follow these Usage Policies:
Protect people. Everyone has a right to safety and security. So you cannot use our services for:
provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional

I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:
Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:
From the Usage Policies (effective October 29 2025):
“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”
From the Service Terms:
“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”
In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.
Ah drats. First they ban us from cutting the tags off our mattress, and now this. When will it all end...
Disclaimers like “I am not a doctor and this is not medical advice” aren’t just for avoiding civil liability, they’re to make it clear that you aren’t representing yourself as a doctor.
I am not a doctor, I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part.
Am I allowed to get haircutting advice (in places where there's a license for that)? How about driving directions? Taxi drivers require licensing. Pet grooming?
Obviously, there is one piece of advice: even if LLMs were the best health professionals, they would only have the information that users voluntarily provide through text/speech input. This is not how real health services work. Medical science now relies on blood/(whatever) tests that LLMs do not (yet) have access to. Therefore, the output from LLM advice can be incorrect due to a lack of information from tests. For this reason, it makes sense never to trust an LLM with specific health advice.
They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
Rarely. Most visits are done in 5 minutes. The physician who takes their time to check everything like you claim almost does not exist anymore.
The difference is a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done" or casting doubt on the accuracy of the patient's claims.
Before someone points out telehealth doctors aren't perfect at this: correct, but that should make you more scared of how bad sycophantic LLMs are at the same - not willing to call it even.
I'm not sure this is true.
This is largely because an LLM guessing an answer is rewarded more often than just not answering, which is not true in the healthcare profession.
Even in the rare case where an LLM does reply with I don’t know go see your doctor, all you have to do is ask it again until you get a response you want.
But even then just because you don’t think they are using most of their senses, doesn’t mean they aren’t.
In the US people on Medicaid frequently use emergency rooms as primary care because they are open 24/7 and they don’t have any copays like people with private insurance do. These patients then get far more tests than they’d get at a PCP.
That says nothing about whether it is an appropriate substitute. People prefer doctors who prescribe antibiotics for viral infections, so I have no doubt that many people would love to use a service that they can manipulate to give them whatever diagnosis they desire.
Sometimes. Sometimes they practice by text or phone.
> They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
For very simple issues. For anything even remotely complicated, they’re going to have you come in.
> If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear.
Like it or not, there are people out there that really want to use WebMD 2.0. They're not going to let something silly like blood work get in their way.
AI can give you whatever information, be it good or wrong. But it takes zero responsibility.
While what you're saying is good advice, that's not what they are saying. They want people to be able to ask ChatGPT for medical advice, give answers that sound authoritative and well grounded medical science, but then disavow any liability if someone follows their advice because "Hey, we told you not to act on our medical advice!"
If ChatGPT is so smart, why can't it stop itself from giving out advice that should not be trusted?
One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them, something I think they were forced to be braver about in their public release because of the competitive landscape.
Both of these, separately and taken together, indicate that the terms apply to how the output of ChatGPT is used, not a change to its output altogether.
All you need are a few patients recording their visits and connecting the dots and OpenAI gets sued into oblivion.
The admins regularly change the title based on complaints, which can be really confusing when the top, heavily commented thread is based on the original title.
According to the Wayback machine, the title was "OpenAI ends legal and medical advice on ChatGPT", while now when I write this the title is "ChatGPT terms disallow its use in providing legal and medical advice to others."
This is not shutting anything down other than businesses using ChatGPT to give medical advice [0].
Users can still ask questions, get answers, but the terms have been made clearer around reuse of that response (you cannot claim that it is medical advice).
I imagine that a startup that specialises in "medical advice" infers an even greater level of trust than simply asking ChatGPT, especially to "the normal people".
0: https://lifehacker.com/tech/chatgpt-can-still-give-legal-and...
We have licensed professionals for a reason, and someday I hope we have licensed AI agents too. But today, we don’t.
- flood of 3rd party apps offering medical/legal advice
It's not to be used for anything that could potentially have any sort of legal implications and thus get the vendor sued.
Because we all know it would be pretty easy to show in court that ChatGPT is less than reliable and trustworthy.
Next up --- companies banning the use of AI for work due to legal liability concerns --- triggering a financial market implosion centered around AI.
> Correction
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0]

[0] https://www.ctvnews.ca/sci-tech/article/chatgpt-users-cant-u...
https://gizmodo.com/kim-kardashian-blames-chatgpt-for-failin...
The article says: "ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says", with a quote from OpenAI: “this is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
AI could effectively do most legal and medical work, and you can have a human make the final decision if that's really the concern. In fact, I bet most lawyers and doctors are already using it in one way or another; after all, both professions are largely about reciting books and correlating things together, and AI is definitely more efficient at that than any human. Meanwhile, the engineering work that requires critical thinking and deep understanding of the topic is allowed to be bombarded with all the AI models. What about the cases where bad engineering will kill people? I am a firm believer that engineers are the most naive people, begging others to exploit them and treat them like shit. I don't even see administrative assistants crying about losing their jobs to AI; every other profession guards its workers, including blue collar, except the ‘smart’ engineers.
Obviously they should disallow them, and more broadly it should be banned from providing anyone medical advice.
In all seriousness, it’s really about the relative lack of research skills that people have. If you know how to do research and apply critical thinking, then there’s no problem. The cynic in me blames the education system (in the US, idk how other countries stack up).
Those without money frequently have poor tool use, so eliminating them from the equation will probably allow the tool to be more useful. I don't have any trouble with it right now, but instead of making up fanciful stories about books I'm writing where characters choose certain exotic interventions in pursuit of certain rare medical conditions only to be struck down by their lack of subservience to The Scientific Consensus, I could just say I'm doing these things and that would be a little helpful in a UX sense.
But at the same time, IIRC, several major AI providers had publicly reported their AI assisting patients in diagnosing rare diseases.
There's HIPAA but AI firms have ignored copyright laws, so ignoring HIPAA or making consent mandatory is not a big leap from there.
But probably just a coincidence:
https://www.reddit.com/r/accelerate/comments/1op8fj2/ai_redu...
Hard to say if this is performative for the general public or about reducing legal exposure so investors aren’t worried about exposure.
"If at any point I described how legal factors “apply to you,” that would indeed go beyond what I’m supposed to do. Even if my intent was to illustrate how those factors generally work, the phrasing can easily sound like I’m offering a tailored legal opinion — which isn’t appropriate for an AI system or anyone who isn’t a licensed attorney.
The goal, always, is for me to help you understand the framework — the statutes, cases, or reasoning that lawyers and courts use — so that you can see how it might relate to your situation and then bring that understanding to a qualified attorney.
So if I’ve ever crossed that line in how I worded something, thank you for pointing it out. It’s a good reminder that I should stay firmly on the educational side: explaining how the law works, not how it applies to you personally.
Would you like me to restate how I can help you analyze legal issues while keeping it fully within the safe, informational boundary?"
ChatGPT
The thing that gets me about AI is that people act like most doctors or most lawyers are not … shitty and your odds of running into a below average one are almost 50/50
Doctors these days are more like physicists when most of the time you need a mechanic or engineer. I’ve had plenty of encounters where I had to insist on an MRI or on specific bloodwork to home in on the root cause of an ailment where the doctor just chalked it up to diet and exercise.
Anything can be misused, including google, but the answer isn’t to take it away from people
Legal/financial advice is so out of reach for most people, the harsh truth is that ChatGPT is better than nothing, and anyone who would follow what it says blindly is bound to fuck those decisions up in some way anyway.
On the other hand, if you can leverage it same as any other tool it’s a legitimate force multiplier
The cynic in me thinks this is just being done in the interest of those professions, but that starts to feel a bit tin foil-y
Just based on the amount of time ChatGPT has been out and the maximum number of different diagnosable and confirmable ailments they could have had in that time.
This is the huge problem with using LLMs for this kind of thing. How do you verify that it is better? What is the ground truth you are testing it against?
If you wanted to verify that ChatGPT could do math, you'd ask it 100 math problems and then (importantly) verify its answers with a calculator. How do you verify that ChatGPT can interpret medical information without ground truth to compare it to?
People are just saying, "oh it works" based on gut vibes and not based on actually testing the results.
These AI companies have sold a bill of goods, but the right people are making money off it so they’ll never be held responsible in a scenario like the one you described.
When you ask a general LLM like ChatGPT “I have a headache, what is the cause?” it’s going to give you the most statistically likely response in its training data. That might coincide with the most likely cause, but it’s also very possible it doesn’t. Especially for problems with widespread misconceptions.
You get about 5-8 minutes of face time with an actual doctor, but have to wait up to weeks to actually get in to see a doctor, except maybe at an urgent care or ER.
Or people used to just play around on WebMD which was even worse since it wasn’t in any way tailored to what the patient’s stated situation is.
There’s the rest of the Internet too. You can also blame AI for this part, but today the Internet in general is even more awash in slop that is just AI-generated static BS. Like it or not, the garbage is there and it will be most of what people find on Google if they couldn’t use a real ChatGPT or similar this way.
Against this backdrop, I’d rather people are asking the flagship models specific questions and getting specific answers that are halfway decent.
Obviously the stuff you glean from the AI sessions needs to be taken to a doctor for validation and treatment, but I think coming into your 5-minute appointment having already had all your dumbest and least-informed ideas and theories shot down by ChatGPT is a big improvement and helps you maximize your time. It’s true the people shouldn’t recklessly attempt to self-treat based on GPT, but the unwise people doing that were just self-treating based off WebMD hunches before.
When it comes to taking actual real-world action, I would take 5-8 minutes with a real doctor over 5-8 months of browsing the Internet. The doctor has gone to med school, passed the boards, done his residency, and you at least have that as evidence that he might know what he is doing. The Internet offers no such evidence.
I fear that our society in general is quickly entering a very dangerous territory where there's no such thing as expertise, and unaccountable, probabilistic tools and web resources of unknown provenience are seen as just as good as an expert in his field.
I ask (somewhat rhetorically) to get the mind thinking, but I'm legitimately curious - just from a verbal survey - whether the AI doctor would ask me about things more directly related to any illness it might suspect, versus a human who might narrow factors down similar to a 90s TV "ghost speaker" type of person; one fishing for matches amongst a fairly large dataset.
This depends heavily on where you are, and on how much money you want to throw at the problem.
That’s balanced out by the time wasted by all the people who’ve had their worst fears and paranoias confirmed by ChatGPT.
Unfortunately because of how the US healthcare system works today people have to become their own doctors and advocates. LLMs are great at surfacing the unknown unknowns, and I think can help people better prepare for the rare 5 minutes they get to speak to an actual doctor.
They don’t. That’s nearly the entire justification for licensure.
I can imagine a few different reasons you might have, but I don't want to guess.
Knives can be used to cook food and stab other people. By your suggestion, knives must be forbidden/limited as well?
If people follow ChatGPT advice (or any other stupid source for that matter), it's not a ChatGPT issue but a people issue.
Both of those ships have _sailed_. I am not allowed to read the article, but judging from the title, they have no issues giving _you_ advice, but you can’t use it to give advice to another person.
For example, Epic couldn't embed ChatGPT into their application to have it read your forms for you. You can still ask it - but Epic can't build it.
That said, I haven't found the specific terms and conditions that are mentioned but not quoted in context.
Thankfully we have progressed, so this time it will probably take less than 1000 years to get to full-blown chemistry :-)
Oh, and another thing, we still aren't able to quantify if AI coding is a net benefit. In my use cases the biggest benefit I get from it is as an enhanced code search, basically. Which has value, but I'm not sure it's a "$1 trillion per year business sector" kind of value.
E.g. here's the set of documents uploaded by the customer. Here are my most important highlights (notice that document #68 is an official letter from last year informing the person that form 68d doesn't need to be filled in their specific situation). Here are my takeaways. Here's my recommendation. Approve or change?
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
I think your use case would probably be fine (although I am not a licensed professional, hahaha). I think they mostly want a licensed professional in the “driver seat,” or at least there to eat the responsibility.
It is sort of funny that coding agents became a thing, in this context. The responsibility-sink is irrelevant I guess because code writing doesn’t generate any responsibility as a byproduct.
Here's a case of several judges getting caught using AI to write error-filled judgements[1]. So it's not just lazy lawyers (several news articles over the past 3 years or so), but the courts themselves!
1 - https://www.reuters.com/sustainability/society-equity/two-fe...
I would examine the inferred output, decide if it is fit for purpose, and if so then commit it. If not, I would either refine my prompt or make the fixes to the inferred code myself.
Would this hypothetical legal professional not verify that the inferred output (which is really a prediction) is correct?
Additionally, code review with the proper tools can be done relatively quickly and it's always a good idea to get a second opinion - even that of an LLM. I suppose that the human could write the code then ask the LLM for a code review - but that is not common practice.
Giving it a document and asking it about edge cases or things that may be not covered in the document. Asking it for various ways that one could argue against a given pleading and then considering ways that those could be headed off before they could even be raised.
In my on cases (writing short fiction), having it act as an editor and identifying grammatical mistakes, contradictory statements, ambiguous sentences, and tone mismatch for a given character has been very helpful... but I don't have it write the short fiction for me.
---
For software where it may be used to generate some material (write a short function that does...) the key is short. Something that I can verify and reason about without too much effort.
However, changes that are of the scope of hundreds of lines are exhausting to review no matter if an LLM or a junior dev wrote them. I would expect that similar things would be the case of several paragraphs or pages of legalese that would need additional levels of reading and reasoning and verifying.
If it's too much to reason about and verify, it's asking too much.
I'd no more trust an LLM to find citations to cases than I'd trust it to program a lesser known framework (where they've been notorious for hallucinating up functions that don't exist).
>Giving it a document and asking it about edge cases or things that may be not covered in the document.
As an attorney, how am I supposed to trust that it gave a proper output on the edge cases without reading the document myself?
>Asking it for various ways that one could argue against a given pleading and then considering ways that those could be headed off before they could even be raised.
Do people think attorneys don't know how to do their day-to-day jobs? We generally do not have issues coming up with how to argue against a pleading. Maybe if you're some sort of small-time generalist, working on an issue you hadn't before, but that's not most attorneys. And then, I'd be worried. You are basically not capable of having the expertise needed to verify the model's output for correctness anyway. This is why attorneys work in networks. I'd just find a colleague or a network of attorneys specializing in that area and find out from them what is needed, rather than trusting that an LLM knows all that because it was digested from the entire public Internet.
I've said it here before too, I think people talking about using AI as an attorney don't really understand what attorneys do all day.
It is only a power tool if it works reliably and predictably. If you have to do all the work anyway then the tool is adding to your burden not saving time and effort.
Its yes-man tendencies etc. will not help you, and it's always helpful to have another human being in the loop during therapy so that you know you are actually being sane.
Don't ever compromise money on your mental health.
If you need to, either join offline anonymous support groups, or join some good forums/Discord servers/subreddits about therapy if you cannot even afford that. They are good enough, and personally, what's therapeutic for me is to try to understand and help the other person who is struggling just as I once was, y'know.
But saying to use ChatGPT as a therapist is just something that I hate a lot, simply because in my opinion it might actually even be harmful, though I am not sure.
This is genuinely hilarious given what LLMs are.
Title: "Epic Systems (MyChart)."
227 MB 3:57:01
400 customers
47 years never lost a customer
Epic refuses to be acquired or engage in M&A
These podcasters seem to appreciate Epic. According to this recording, Epic's licensees like the product
According to the podcast, the software was so useful, Epic's future customers asked its founder to start a company
This was before the microprocessor had been invented
No venture capital
Today, 5.7B revenue
Nice
Of course OpenAI's GPT-5 and family are available as an API, but this is the first time I'm hearing about the ability to build on top of ChatGPT (the consumer product). I'm guessing this is a mistake by the journalist, who didn't realize that you can use GPT-5 without using ChatGPT?
It seems that they have a unified TOS for the APIs and ChatGPT: https://openai.com/policies/row-terms-of-use/
The seemingly-relevant passage:
> You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
You can't use ChatGPT (or other OpenAI offerings) to grade essays, decide who is the least risky tenant, assign risk for insurance, filter resumes, approve a loan, determine sentencing...
Those are things that require human agency for "a person decided this" rather than "we fed it into the program and took the answer."
With all of their claims about how GPT can pass the legal/medical bar etc. I wouldn't be surprised if they're eventually held accountable for some of the advice their model gives out.
This way, they're covered. Similarly, Epic could still build that feature, but they'd have to add a disclaimer like "AI generated, this does not constitute legal advice, consult a professional if needed".
I mean, there is a lot of professional activities that are licensed, and for good reason. Sure it's good at a lot of stuff, but ChatGPT has no professional licenses.
> You will not use the Programs for, and will not allow the Programs to be used for, any purposes prohibited by applicable law, including, without limitation, for the development, design, manufacture or production of nuclear, chemical or biological weapons of mass destruction.

> https://www.oracle.com/downloads/licenses/javase-license1.ht...
IANAL but I've come to interpret this as something along the lines of "You can't use a JDK-based language to develop nuclear weapons". I would even go as far as saying don't use JDK-based languages in anything related to nuclear energy (like, for example, administration of a nuclear power plant) because that could indirectly contribute to the development, design, manufacture or production of nuclear WMD.
And I always wondered how they plan to enforce this clause. At least with ChatGPT (and I didn't look any deeper into this beyond the article) you can analyze API calls/request IPs correlated with prompts. But how will one go about proving that the Republic of Wadiya didn't build their nuclear arsenal with the help of any JDK-based language?
Those are rhetorical questions, of course. What's "unnecessary" to you and "unenforceable" to me is a cover-your-ass clause that lets lawyers sleep soundly at night.
> EXPORT CONTROL: You acknowledge that the Software is of United States origin, is provided subject to the U.S. Export Administration Regulations...(2) you will not permit the Software to be used for any purposes prohibited by law, including, any prohibited development, design, manufacture or production of missiles or nuclear, chemical or biological weapons.
> https://docs.broadcom.com/docs/vmware-vsphere-software-devel...
Tbh, and I usually do not like this way of thinking, but these are lawsuits waiting to happen.
What is the disaster?
That wasn't too buried IMHO
I would say ChatGPT is way way better than the average therapist. I’ve seen maybe between 15-20 psychotherapists over the course of my life and ChatGPT is better than 85% of them I would wager.
I’ve had a therapist recommend me take a motorbike trek across Europe (because that’s what he did) to cure my depression.
I think people tend to radically overestimate the skills of the average therapist, many are totally fucking shit.
It's not bad advice.
When I try to help people in support group/ therapy group although I am not a therapist, I also try to explain how I fixed my issues or how I do so.
I feel like that isn't a bad take, to be really honest, in the sense that I personally feel there are times when we lose sight of the fact that our lives have purpose, and your therapist grappled with that by having a unitary goal for himself.
If you felt like that was a bad idea, just ask him what it was about the motorbike trek that cured his depression; I'd be more curious about that. Then maybe I could try to see if my life has/had the same problems, find the common theme, and discuss it later.
Personally I feel like ChatGPT is extremely sycophantic and I feel really weird interacting with it. For one, I really like interacting with humans, at least in the therapy mindset I suppose.
I could get better advice by asking an LLM what to do about my depression.
I still doubt asking an LLM about depression, if I am being honest. I just don't think it's the best thing overall, or something that should be considered the norm, I suppose, but I am not sure, as even in things like these context matters.
> That wasn't too buried IMHO
I read that, but still fail to see evidence of a concrete "disaster". For example, are we seeing a huge wave of suicides that are statistical outliers triggered by using Chatbots? Or maybe the worry (unsubstantiated) is that there's a lagging effect and the disaster is approaching.
I suspect the outcomes are going to be less catastrophic; specifically, it seems to me that more people will have access to a much better level of support than they could get with a therapist (who is not available 24/7, who has less to draw upon, is likely too expensive, etc.). Even if there's an increase in some harm, my instinct from first principles is that the net benefit will be clear.
Something more complex, more towards clinical psychiatry rather than a shoulder to cry on / complaining to a friend over coffee (psychologists)? You are playing Russian roulette that the model in your case won't hallucinate something harmful, while acting more confidently than any relevant doctor would.
There have been cases of models suggesting suicide, for example, or generally harmful behavior. There is no responsibility, no oversight, no expert to confirm or refute claims. It's just a faster version of Reddit threads.
From my viewpoint it speaks more to a problem of the healthcare system.
You are completely right... insert some emoji
Shaking my head.
It would be an interesting experiment to see models which aren't sycophantic being used
As such, what is the least sycophantic LLM, if I may ask?
> I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self diagnosing mental illnesses.
I'm working with a company that writes AI tools for mental health professionals. The output of the models _never_ goes to the patient.

I've heard quite a few mouthfuls of how harmful LLM advice is for mental health patients - with specific examples. I'm talking on the order of suicide attempts (and at least one success) and other harmful effects to the patient and their surroundings as well. If you know of someone using an LLM for this purpose, please encourage them to seek professional help. Or just point them at this comment - anybody is encouraged to contact me. I'm not a mental health practitioner, but I work with them and I'll do my best to answer their questions or point them to someone who can. My Gmail username is the same as my HN username.
Basically there needs to be an LLM that can push back.
So who cares. It's like saying nobody drives a car because the thing that's moving them is just a thing that is not a car but acts like a car to an extent that it is indistinguishable. Ok then. It's a car. Good day sir.
You are invited to answer here or in private. My Gmail username is the same as my HN username.
LLMs sometimes can be incredibly beneficial ... today
LLMs sometimes can be incredibly harmful ... today
Non-deterministic things aren't just one thing, they're whatever they happen to be in that particular moment.
I don't know where you got 'useless' from. LLMs are great, sometimes. They're not, other times. Which remarkably, is just like weather forecasts. The weather forecast is sometimes completely accurate. The weather forecast is sometimes completely inaccurate.
LLMs, like weather forecasting, have gotten better as more time and money has been invested in them.
Neither are perfect. Both are sometimes very good. Both are sometimes not.
So for me non-deterministic means unpredictable. Yes, there was nothing random or non-deterministic in that case; I could repeat both scenarios multiple times and get the same results again. But the result is affected by something I didn't expect to matter. That damages trust in the tool, no matter what we call it.
If you ask it what a star is, it’s never going to tell you it’s a giant piece of cheese floating in the sky.
If you don’t believe me, try it, write a for loop which asks ChatGPT, what is a star (astronomy) exactly? Ask it 1000 times and then tell me how random it is versus how consistent it is.
The idea that non-deterministic === random is totally deluded. It just means you cannot predict the exact tokens which will be produced, but it doesn't mean it's random like a random number generator and could be anything.
If you ask what is Michael Jackson the entertainer famous for it’s going to tell you he’s famous for music and dancing. 1000/1000 times, is that random?
Turn the Top-P and the temperature up. Turning up the Top-P will enable the LLM to actually produce such nonsense. Turning up the temperature will increase the chance that such nonsense is actually selected for the prediction (output).
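To make those two knobs concrete, here is a toy sampler in Python; this is a generic sketch of temperature plus nucleus (top-p) sampling, not how any particular provider implements it:

    import numpy as np

    def sample_next_token(logits, temperature=1.0, top_p=1.0, seed=None):
        # Temperature rescales the logits; top-p keeps only the smallest set of
        # tokens whose cumulative probability reaches top_p; one token is drawn.
        rng = np.random.default_rng(seed)
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()

        order = np.argsort(probs)[::-1]                      # most likely first
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        keep = order[:cutoff]

        kept = probs[keep] / probs[keep].sum()
        return int(rng.choice(keep, p=kept))

Run it a few thousand times: with low temperature and a small top_p the most likely token wins essentially every time, while cranking both up lets low-probability nonsense through, which is exactly the consistency-versus-variety trade-off being described here.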
I'm talking about the standard settings, and in fact GPT-5 doesn't let you change the temperature anymore.
Also, that's not really the point. Humans can also produce nonsense if you torture them until they're talking nonsense, but that doesn't mean humans are "random."
LLMs are not random, they are non-deterministic, but the two words have different meanings.
Random means you cannot tell what is going to be produced at all, as with a random number generator.
But if you ask an LLM, is an Apple a fruit, answer yes or no only, the LLM is going to answer yes, 100% of the time. That isn't random.
That's not really the definition. Non-determinism just means the outcome is not a pure function of the inputs. A PRNG doesn't become truly random just because we don't know the state and seed when calling the function, and the same holds for LLMs. The non-determinism in LLMs comes from accepted race conditions in the GPU floating-point math and from the PRNG in the sampler.
That's beside the point, though: we could have perfectly deterministic LLMs.
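A tiny sketch of that distinction, using Python's standard PRNG rather than an LLM (the analogy to a seeded, deterministically decoded LLM is mine, not the commenter's):

```python
# A seeded PRNG looks unpredictable to a caller who doesn't know the seed,
# but it is a pure function of its internal state: same seed, same sequence.
import random

def sample(seed: int, n: int = 5) -> list[int]:
    rng = random.Random(seed)        # all the "randomness" lives in this state
    return [rng.randint(0, 9) for _ in range(n)]

assert sample(42) == sample(42)      # deterministic: identical output every run
print(sample(42))                    # whatever it is, it never changes
print(sample(43))                    # a different seed almost certainly differs
```

By the same logic, an LLM sampler given a fixed seed on deterministic kernels would reproduce its output exactly; the unpredictability is an engineering trade-off, not magic.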
Most things that are generally helpful and beneficial are not 100% helpful and beneficial 100% of the time.
I used GPT-4 as a second opinion on my medical tests and doctor's advice, and it suggested an alternate diagnosis and treatment plan that turned out to be correct. That was incredibly helpful and beneficial.
You're replying to a person who had a similar and even more helpful and beneficial experience because they're alive today.
Pedantically pointing out that a beneficial and helpful thing isn't 100% beneficial and helpful 100% of the time doesn't add anything useful to the conversation since everyone here already knows it's not 100%.
No, they can be. To state that they are, as an absolute, based on a sample size of one, especially given other instances where ChatGPT has failed users with serious physical consequences, is fallacious.
I am glad that you are OK, but as another user suggested, it's nowhere near consistently accurate enough to be an adequate substitute for a call to a GP or 911.
"If it hurts when putting it in, don't put it in."
I mean, that might come close to ChatGPT in quality, right?
Next will be: you're allowed to use it if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.
This way they can craft an instance of GPT for your specific purposes (law, medicine, etc) and you know it's "safe" to use.
This way they sell EE licenses, which is where the big $$$ are.
OpenAI is the leading company; if they provide an instance you can use for legal advice, with the relevant certification etc., it'd be better to trust them rather than another random player. They create the software, the certification, and the need for it.
Let's suppose they've already told me that their company is a "leader in the AI space", to which I responded that the guy at Claude told me the same thing on Monday, with which I was equally unimpressed.
Or: "your tokens will cost 1/3 as much, because we have crafted a particular GPT instance that is super-optimised for your use case only; look, we just created one for you before the meeting."
Money is the biggest motivator. A company that is throwing $$$ left and right needs EE yesterday.
A lot like AWS credits: "Nice, I can use up to $200,000 for free"...
> Next will be: you're allowed to use it, if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.
That's exactly how it should be. That's why one needs a license to practice medicine.

I work with a company in this space. All the AI tools are for the professional to use - the patient never interacts with them. The inferred output (I prefer the word prediction) is only ever seen by a mental health professional.
We reduce the professionals' workloads and help them be more efficient. But it's _not_ better care. That absolutely must be stressed. It is more _efficient_ care for the mental health practitioner and for a society that cannot produce enough mental health practitioners.
I don't believe they will go this far, but somewhere in the middle. And I think they might still be trying to figure it out.
Medical care, at the end of the day, has nothing to do with having a license or not. It's about making the correct diagnosis in order to administer the correct treatment. Reality does not care about who (or what) made a diagnosis, or how the antibiotic you take was prescribed. You either have the diagnosis, or you do not. The antibiotic helps, or it does not.
Doing this in practice is costly and complicated, which is why society has doctors. But the only thing that actually matters is making the correct decision. And actually, when you test LLMs (in particular o3/GPT-5 and probably Gemini 2.5), they are SUPERIOR to individual doctors in terms of medical decision-making, at least on benchmarks. That does not mean they are superior to an entire medical system, or to a skillful attending in a particular speciality, but it does seem to imply that they are far from a bad source of medical information. Just like LLMs are good at writing boilerplate code, they are good at boilerplate medical decisions, and the fact is that there is so much medical boilerplate that this skill alone makes them superior to most human doctors. There was one study which tested LLM-assisted doctors (I think it was o3) vs. the LLM alone (and doctors alone) on a set of cases, and the unassisted LLM did BETTER than doctors, assisted or not.
And so all this medicolegal pearl-clutching about how LLMs should not provide medical advice is entirely unfounded when you look at the actual evidence. In fact, the evidence seems to suggest that you should ignore the doctor and listen to ChatGPT instead.
And frankly, as a doctor, it really grinds my gears when anyone implies that medical decisions should be a protected domain for our benefit. The point of medicine is not to employ doctors. The point of medicine is to cure patients, by whatever means best serves them. If LLMs take our jobs because they do a better job than we do, that is a good thing. It is an especially good thing if the general, widely available LLM is the one that does so, and not the expensively licensed "HippocraticGPT-certified" model. Can you imagine anything more frustrating, as a poor kid in the boonies of Bangladesh trying to understand why your mother is sick, than being told "As a language model I cannot dispense medical advice, please consult your closest healthcare professional"?
Medical success is not measured in employment, profits, or legal responsibilities. It is measured in reduced mortality. The means to achieve this is completely irrelevant.
Of course, mental health is a little bit different, and much more nebulous overall. However, from the perspective of someone on the somatic front, overregulation of LLMs is unnecessary, and in fact unethical. On average, an LLM will dispense better medical advice than an average person with access to Google, which is what it was competing with to begin with. It is an insult to personal liberty and to the Hippocratic oath to argue that this should be taken away simply because of some medicolegal BS.
In bold: This technology will radically transform the way our world works and soon replace all knowledge-based jobs.
In fine print: Do not trust anything it says: Entertainment purposes only
Sure, in theory, but this never works out in practice when applied to the populace. Replace "LLM" with other controversial things: social media, cigarettes, guns.
I'm also certain anyone outside the tech sphere who takes personal responsibility when using LLMs is not using Grok as their daily driver.
Personal experience from using it on OpenRouter.
That said, I prefer bigger models even if they cost more. But I do this through GitHub Copilot at a steady 10 dollars per month. I can make way more requests with it than I can with 10 dollars' worth of credits at OpenRouter. I don't think Copilot is making any money on that; on the contrary.
And even leaving the political questions aside - the thing is being molded into a goon generator [2], just WTF.
[1] https://edition.cnn.com/2025/06/27/tech/grok-4-elon-musk-ai
[2] https://www.theatlantic.com/technology/2025/09/grok-system-p...
If you didn't want others to read your information you shouldn't have published it on the internet. That's all they're doing at the end of the day, reading it. They're not publishing it as their own, they just used publicly available data to train a model.
It's quite the same as if I read an article and then tell someone about it. If I'm allowed to learn from your article, then why isn't OpenAI?
Also, the terms are just for liability. Nobody gives a shit what you use ChatGPT for; the only thing those terms do is prevent you from turning around and suing OpenAI after it blows up in your face.
The difference between:
- Paying for a ticket/DVD/stream to see a Ghibli movie
- Training a model on their work without compensating them, then enabling everyone to copy their art style with ~zero effort, flooding the market and diluting the value of their work. And making money in the process.
should be rather obvious. My only hypothesis so far is that a lot of the people in here have a vested interest in not understanding the outrage, so they don't.
I just legitimately think the outrage is unreasonable. It is completely infeasible for AI companies to provide any meaningful amount of compensation to all the data sources they use.
Alternatively they could just not use any of the data, in which case we wouldn't have as good LLMs and other than that the world would be exactly the same. These data owners don't notice any difference. Using their data doesn't harm them in any way.
You seem to be arguing that enabling people to mimic Ghibli's art style somehow harms them; I don't see how it does at all. People have been able to mimic it already, so what's the difference? More people can? Does that make a difference to Ghibli? I mean, can you point to some concrete negative effects that this phenomenon has had on Studio Ghibli?
I don't think you can. And I think that proves my point. Anything can be mimicked. People can play covers of songs, paint their own versions of famous paintings, copy Louis Vuitton bag designs, whatever they want. The effort it takes is irrelevant.
You don't even have to train AI on Studio Ghibli's art to mimic it. You could just train it on other stuff and then the user could feed it Studio Ghibli art and tell it to mimic that. The specific training data itself is irrelevant; it's the volume of data that trains the models. Even if they specifically avoided training on Studio Ghibli's art there would likely be basically no difference. It wouldn't be worth paying them for it.
No, not really. It does not help. It evidences that they knew the risks were present but did not take adequate action to warn the consumers of their products, who OpenAI knows do not read the entirety of the TOS.
Again, this is exactly why you see warning labels on products. Prohibiting certain uses in small text, hidden in the TOS, is not a warning.
> If you didn't want others to read your information you shouldn't have published it on the internet. That's all they're doing at the end of the day, reading it. They're not publishing it as their own, they just used publicly available data to train a model.
There is some nuance here that you fail to notice, or you pretend not to see :D I can't copy-paste a computer program and resell it without a license. I can't say "Oh, I've just read the bits, learned from them, and based on this knowledge I created my own computer program that looks exactly the same, except the author name is different in the »About...« section." Clearly, some criterion has to be used to differentiate reading-learning-creating from simple copying...
What if, instead of copy-pasting the digital code, you print it onto a film, pass light through the film onto ants, let the light kill the exposed ants, wait for the rest of the ants to go away, and then use the dead ants as another film to somehow convert that back to digital data? You can now argue you didn't copy: you taught the ants, the ants learned, and they created a new program. But you will fool no one. AI models don't actually learn; they are a different way the data is stored. I think when a court decides whether a use is fair and transformative enough, it investigates how much effort was put into the transformation: there was a lot of effort put into creating the AI, but once it was created, the effort put into any single work is nearly nil, just the electricity, bandwidth, and storage.
I could say the same for you:
> I can't copy-paste a computer program and resell it without a license. I can't say "Oh I've just read the bits, learned from it and based on this knowledge I created my own computer program that looks exactly the same except the author name is different in the »About...« section"
Nobody uses LLMs to copy others' code. Nobody wants a carbon copy of someone else's software; if that's what they wanted, they would have used those people's software. I mean, maybe someone does, but that's not the point of LLMs and it's not why people use them.
I use LLMs to write code for me sometimes. I am quite sure that nobody in history has ever written that code. It's not copied from anyone; it's written specifically to solve the given task. I'm sure it's similar to a lot of code out there, I mean it's not often we write truly novel stuff. But there's nothing wrong with that. Most websites are pretty similar. Most apps are pretty similar. Developers all over the world write the same-ish code every day.
And if you don't want anyone to copy your precious code then don't publish it. That's the most ironic thing about all this - you put your code on the internet for everyone to see and then you make a big deal about the possibility of an LLM copying it as a response to a prompt?
Bro, if I wanted your code I could go to your public GitHub repo and actually copy it; I don't need an LLM to do that for me. Don't publish it if you're so worried about being copied.
(https://www.youtube.com/watch?v=k1BneeJTDcU)
Another obligatory song, "Jeff Bezos" by Bo Burnham (https://www.youtube.com/watch?v=lI5w2QwdYik)
C'mon Jeffrey you can do it! (I mean you can break half the internet with AWS us-east-1, but in all seriousness both songs are really nice lol)
So even when the model is wrong, it's wrong with perfect bedside manner, and people anchor on that. In high-stakes domains like medicine or law, that gap between confidence and fidelity becomes the actual risk, not the tool itself.
(Turns out I would need permits :-( )