Empower people. People should be able to make decisions about their lives and their communities. So we don’t allow our services to be used to manipulate or deceive people, to interfere with their exercise of human rights, to exploit people’s vulnerabilities, or to interfere with their ability to get an education or access critical services, including any use for:
…
automation of high-stakes decisions in sensitive areas without human review:
- critical infrastructure
- education
- housing
- employment
- financial activities and credit
- insurance
- legal
- medical
- essential government services
- product safety components
- national security
- migration
- law enforcement
It’s a safe bet: “we don’t allow you to ask for medical advice, so we’re not liable if you do anyway and drink mercury or what have you based on our advice.”
"you cannot use our services for: provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional"
So, they didn't add any guardrails, filters, or blocks to the software. This is just boilerplate "consult your doctor too!" to cover their ass.
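For what it's worth, "filters or blocks" here would mean something like a server-side pre-filter sitting in front of the model. A minimal sketch in Python, with a made-up keyword list and refusal message purely for illustration (this is not anything OpenAI actually ships):

```python
# Hypothetical sketch of the kind of pre-filter ("guardrail") the comment says is missing.
# The keyword lists and refusal text are invented for illustration only.

MEDICAL_TERMS = {"diagnose", "dosage", "prescription", "symptom", "treatment"}
LEGAL_TERMS = {"sue", "lawsuit", "custody", "criminal charge"}

def needs_licensed_professional(prompt: str) -> bool:
    """Crude check: flag prompts that look like requests for medical or legal advice."""
    text = prompt.lower()
    return any(term in text for term in MEDICAL_TERMS | LEGAL_TERMS)

def guarded_completion(prompt: str, model_call) -> str:
    """Refuse flagged prompts instead of relying on terms-of-use boilerplate."""
    if needs_licensed_professional(prompt):
        return "This reads like a request for licensed professional advice; please consult one."
    return model_call(prompt)

if __name__ == "__main__":
    # Stand-in lambda in place of a real model call.
    print(guarded_completion("What dosage of metronidazole should I take?",
                             lambda p: "(model output)"))
```

A real filter would be far more sophisticated (and easy to over- or under-block), but the point stands: nothing of this sort is enforced, only the terms of use.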
It does prohibit, for illustration, an LLM-powered surgical device.
Everything else is “gray area”?
There is no way for them to even remotely verify whether there is "appropriate involvement by a licensed professional" in the room, so to a rebellious outlaw, these prohibitions might as well not exist.
Generally a bad idea. If you want to be a doctor, go to medical school.
Two months ago it helped me accurately identify a gastrointestinal diverticulitis-type issue and find the right medication for it (metronidazole), which fixed the issue for good. It also guided me on identifying the cause and on pausing and restoring fiber intake appropriately.
Granted, it is very easy for people to make serious mistakes when using LLMs, but given how many mistakes doctors make, it is better to take some self-responsibility first. The road to a useful diagnosis can be winding, but with sufficient exploration, GPT will get you there.
If the AI isn’t smart enough to replace a licensed expert even given unlimited access to everything a doctor would learn in medical school, where is the value in the AI?
Now we are moving the goalposts to “it’ll be a nice tool to use like SaaS software.”
Other than OpenAI, I don’t think that’s actually true of what the companies have been advertising.
But, in any case, things can have value and still fall short of what those with a financial interest in the public overestimating the imminent significance of an industry promote. The claim here was about what was necessary for AI to have value, not what was necessary to meet the picture that the most enthusiastic, biased proponents were painting. Those are very different questions, and, if you don’t like moving goalposts, you shouldn’t move them from the former to the latter.
So stories like this are no longer possible? https://news.ycombinator.com/item?id=45734582
_wire_•18h ago
"You can't believe how smart and capable this thing is, ready to take over and run the world"
(Not suitable for any particular purpose - Use at your own risk - See warnings - User is responsible for safe operation...)
(Pan from home robot clumsily depositing clean dishes into an empty dishwasher to a man in VR goggles in the next room making all the motions of placing objects in a box)
Check all services you wish to subscribe to ($1000 per service per month):
- Put laundry in washing machine
- Microwave mac & cheese dinner
- Change and feed baby
- Get granny to toilet
- Fix Windows software update error on PC
- Reboot wifi router to restore internet connection
SoftTalker•15h ago