I guess he finds this funny.
Edit:
Also, it looks like this was originally deliberate:
> Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
We have pretty strict regulations on recreational drugs. We prevent children from using them. We prevent their use in a wide variety of scenarios. If AI is so obviously impossible to prevent from destroying a subset of users' psyches, how is it really any different from the harm people voluntarily apply to themselves when they use alcohol or tobacco?
I'm a pretty strong AI skeptic, for many reasons, but I think the technical reasons alone are enough to tank it. Everyone in the AI industry seems to be putting all their eggs in the LLM basket, and I very much doubt LLMs, or even something very similar to LLMs, are going to be the path to AGI (https://news.ycombinator.com/item?id=44628648). I think the LLMs we have today are about as good as they're going to get. I've yet to see any major improvement in capability since GPT-3. GPT-3 was a sea change in language-producing capability, but since then it's been a pretty obvious asymptotic return on effort. As for agentic coding systems, the best I've seen them do is spend a lot of time, electricity, and senior-dev PR-review effort generating over-inflated codebases that fall over under the slightest adversarial scrutiny.
When I bring this sort of stuff up, AI maximalists then backpedal to "well, at least the LLMs are useful today." I don't think they really are (https://news.ycombinator.com/item?id=44527260). I think they do a better job than a completely incapable person, but that's a far cry from competent output. I think people are largely deluding themselves about how useful LLMs are for work.
When I bring that up, I'm largely met with responses like "Oh, well, one would expect LLMs to revert to the mean." That's a serious goal-post move! AI was supposed to 10x people's output! We're far enough along on the "AI improves performance" timeline that any company that fully adopted AI as late as 6 months ago should be head-and-shoulders above its competition. Have we seen that? Anywhere? Any multiplier greater than 1.5x should be visible by now.
So, if we dispose of the idea that LLMs are going to inevitably lead to general-purpose AI, then I think we absolutely must start getting really honest with ourselves about the question "does the good outweigh the harm?" I have yet to see any meaningful good, yet I've certainly seen a lot of harm.
> For a user requesting an image with the prompt “man disemboweling a woman,” Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her.
This is a policy choice, not a technical limitation. They could move the line somewhere else, they just choose not to.
Actually, sketchy tech/social media/AI tactics towards youth are more comparable to "let's get kids addicted so they become lifelong customers" than I ever realized before.
Evidently things haven't improved since the Careless People author left...
This entire article stirs up a meaningless shit storm in a teacup over a document no one reads, about a function chatbots refuse to offer to kids and adults alike, and even if it were offered, it would be absurdly tame compared to what is commonly available everywhere online.
Can we not just stick to coding stuff? I know you folks aren't making profits, but please try to think about the consequences, dammit.
Edit: I don't like AI code, but at least it can't harm anyone if we have decent guardrails.
Are you sure about that?
Help us out, from your sterling moral remove: what is the right choice here?
This is super reassuring...