But the question here is one of liability. Is Reddit liable for the content available on its website, if that content encourages young impressionable people to abuse drugs irresponsibly? Is ChatGPT liable for the content available through its web interface? Is anyone liable for anything anymore in a post-AI world?
I have heard it said that many online systems repudiate any obligation to act, lest they be required to act and thus acquire both cost and risk when their enforcement of editorial standards fails: that which they permit, they will be liable for.
Ask any model why something is bad, then separately ask why the same thing is good. These tools aren't fit for any purpose other than regurgitating stale Reddit conversations.
I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum and similarly gets group appeasement, or whatever they want to hear, from people who self-selected into that forum for being all-in on the topic and who Want To Believe, so to speak.
What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?
<edit> And let me add that I don't mean this argumentatively. I am trying to square the idea of ChatGPT, in this case, as being, in the end, fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.
In a forum, it is the actual people who post who are responsible for sharing the recommendation.
In a chatbot, it is the owner (e.g. OpenAI).
But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.
This seems like a web problem, not a ChatGPT issue specifically.
I feel that some may respond that ChatGPT and other LLMs available for chat on the web are specifically worse by virtue of expressing highly inaccurate things with an air of authority. But again, I feel this describes the Web in general, not uniquely ChatGPT/LLMs.
Is there an angle here I am not picking up on, do you think?
I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".
On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.
OpenAI _might plausibly_ be responsible for certain outputs.
I thought perhaps that's what you meant. A bit mercenary of a take, and maybe not applicable to this case. On the other hand, given the legal topic is up for grabs, as you note, I'm sure there will be instances of this tactical approach when it comes to lawsuits happening in the future.
dfajgljsldkjag•33m ago
https://www.youtube.com/watch?v=TLObpcBR2yw
loeg•25m ago
That said, he claims to have taken 15g of "kratom" -- that has to be the regular stuff, not 7-O -- and 15g is still a huge, huge dose of regular kratom. That plus a 0.125 BAC and benzos... is a lot.