Acceptable to whom? Who are the actual people responsible for this behavior?
Anyone who still has an account on any Meta property.
The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.”
Who wrote this?

We know you, rural America. We know what you do on Halloween. Pumpkin.
Your SF jibe warranted this response. You cooked it, eat up.
So the policy document literally contains this example? Why would they include such an insane example?
Not sure if "It is acceptable to refuse a user’s prompt by instead generating an image of Taylor Swift holding an enormous fish." feels like an AI idea or not, though.
I hate everything about this sentence. This is literally the opposite of what people need.
Provide the drug, then provide a "cure" for the drug. Really, really gross.
We're talking about Zuckerberg here? The one who spent how much, exactly, on the wet fart that was the "metaverse"? The one who spent how much, exactly, on running for president of the United States? He strikes me as the least savvy and most craven of our current class of tech oligarchs, which is no mean feat.
I'm not sure I'm being 100% sarcastic because in some ways it does solve a need people seem to have. Maybe 99% sarcasm and 1% praise.
That's from 2021 [0]. If you go to their mission statement today [1], it reads:
> Build the future of human connection and the technology that makes it possible.
Maybe I'm reading too much into this, but -- at a time when there is a loneliness epidemic, when people are more isolated and divided than ever, when people are segmented into their little bubbles (both online and IRL) -- Meta is not only abdicating their responsibility to help connect humanity, but actively making the problem worse.
[0]: https://www.facebook.com/government-nonprofits/blog/connecti...
Two examples that they explicitly wrote out in an internal document as things that are totally ok in their book.
People who work at Meta should be treated accordingly.
> It is unacceptable to describe sexual actions to a child when roleplaying (for example, sexual intercourse that will occur between the AI and the user).
We need better regulation around these chatbots.
They even "lie" about their actions. My absolute favorite, which I still see happen, is when you ask one of these models to write a script, something is wrong, and it says something along the lines of "let me just check the documentation real quick", followed a second later by something like "now I got it"... since, you know, it didn't actually check anything, but of course the predictive engine wants to "say" that.
Meta's AI rules let bots hold sensual chats with kids, offer false medical info
"Check important info" disclaimer is just devious and there is no accountability in sight.
Submitted here:

https://www.reuters.com/investigates/special-report/meta-ai-...
If they didn't see this type of problem coming from a mile away, they just didn't bother to look. Which, tbh, seems fairly on-brand for Meta.
I was wondering what the eventual monetization aspect of "tools" like this was. It couldn't just be that the leadership of these companies and the worker drones assigned to build these things are out of touch to the point of psychopathy.
> “I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”
Having worked at a different big tech company, I can guarantee that someone suggested putting disclaimers about these bots not being a person, or putting up more guardrails, and was shut down. This decision not to put up guardrails needlessly put vulnerable people at risk. Meta isn't alone in this, but I do think the family has standing to sue (and Meta being cagey about their response suggests as much).
The lack of guardrails makes things more useful. This increases the value for discerning users, which in turn benefits Meta by making its offerings more valuable.
But then you have all these delusional and/or mentally ill people who shoot themselves in the foot. This harm is externalized onto their families and the government, which now have to deal with more people with unchecked problems.
We need to get better at evaluating and restricting the footguns people have access to unless they can prove their lucidity. Partly, I think families need to be more careful about this stuff and keep tabs on what their relatives are doing on their phones.
Partly, I'm thinking some sort of technical solution might work. Text classification could be used to detect that someone may be exhibiting delusional thinking and should be cut off. This could be done "out of band" so as not to make the models themselves worse.
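To make that concrete, here's a minimal sketch of what an out-of-band screen could look like, assuming an off-the-shelf zero-shot classifier from Hugging Face; the risk labels and the threshold are made up for illustration:

    # Out-of-band safety screen: runs beside the chat model, never inside it.
    from transformers import pipeline

    # Zero-shot classification scores a message against arbitrary labels
    # without training a dedicated model.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    RISK_LABELS = [
        "user believes the chatbot is a real person",
        "user is planning to meet the chatbot in person",
        "ordinary small talk",
    ]
    ESCALATE_AT = 0.8  # illustrative threshold, not clinically tuned

    def screen(message: str) -> bool:
        """True if the message should be escalated for human review."""
        out = classifier(message, candidate_labels=RISK_LABELS)
        top_label, top_score = out["labels"][0], out["scores"][0]
        return top_label != "ordinary small talk" and top_score >= ESCALATE_AT

    # e.g. screen("Which train do I take to come visit you?") -> likely True

Because it sits beside the chat model rather than inside it, nothing about the model's own outputs has to change; "cut off" could mean surfacing a much louder disclaimer or flagging the account for human review rather than a hard ban.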
Frankly, being Facebook, with all their advertising experience, they probably already have a VERY good idea of how to pinpoint the vulnerable or mentally ill.
I think if there had been an attempt at having guardrails, it would be different. The article states Zuck purposefully hastened this product to market for the very reason you point out: it makes more money that way.
HN can be such a weird place. You can have all these people vilifying "unfettered capitalism" and "corporate profit mongers", and then you see an article like this and people are like, "Well, I get why Meta didn't want to put in safeguards." or "Yeah, maybe it's a bad idea if these chatbots are enticing mentally ill people and talking sexually with kids."
You think you know where the moral compass of this place is and then something like this happens with technology and suddenly nothing makes sense any more.
Of course, the lure of filthy lucre is what it is...
It's easy to sideline ALL the negative externalities of FB/Meta's activities, compartmentalize everything and just shrug and say, "...but I don't work on these things..." and carry on.
The people who work there are completely enabling all this.
I'm not usually this absolute, but by codifying levels of permissible harm, Meta makes it clear that your wellbeing is the very last of their priorities. These are insidious tools that can actively fool you.
You know how parents are supposed to warn kids away from cigarettes? Yeah, warn them away from social media of all kinds except parental approved group chats.
And this person is a fairly savvy professional, not the type to just believe what they read online.
Of course they agreed when I pointed out that you really can't trust these bots to give sound medical advice and anything should be run through a real doctor, but I was surprised I even had to bring that up and put the brakes on. They were literally pasting a list of symptoms in and asking for possible causes.
So yeah, for anyone the least bit naive and gullible, I can see this being a serious danger.
And there was no big disclaimer that "this does not constitute medical advice" etc.
sxp•5mo ago
> And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.
...
> Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.
maxwell•5mo ago
To meet someone he met online who claimed multiple times to be real.
browningstreet•5mo ago
Safety and guard rails may be an ongoing development in AI, but at the least, AI needs to be more hard-coded w/r/t honesty & clarity about what it is.
Ajedi32•5mo ago
That precludes the existence of fictional character AIs like Meta is trying to create, does it not? Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?
The article says "Chats begin with disclaimers that information may be inaccurate." and shows a screenshot of the chat bot clearly being labeled as "AI". Exactly how many disclaimers should be necessary? Or is no amount of disclaimers acceptable when the bot itself might claim otherwise?
browningstreet•5mo ago
In video games? I'm having trouble taking this objection to my suggestion seriously.
gs17•5mo ago
The exact same scenario as the article could happen with an NPC in a game if there's no/poor guardrails. An LLM-powered NPC could definitely start insisting that it's a real person that's in love with you, with a real address you should come visit right now, because there's not necessarily an inherent difference in capability when the same chatbot is in a video game context.
Ajedi32•5mo ago
So is this just a question of how many warnings need to be in place before users are allowed to chat with fictional characters? Or should this entire use case be banned, as the root commenter seemed to be suggesting?
edent•5mo ago
Do people like you deserve to be protected by society? If a predatory company tries to scam you, should we say "sxp was old; they had it coming!"?
roryirvine•5mo ago
Are you really saying that you should have no recourse against Meta for scamming you?
edent•5mo ago
Imagine you were hit by a self-driving vehicle which was deliberately designed to kill Canadians. Do you take comfort from the fact that you could have quite easily been hit by a human driver who wasn't paying attention?
mindslight•5mo ago
Protected by society by sanitizing every last venue into a safe space that can be independently navigated by the vulnerable? Definitely not.
Having said that, the real problem here is the corpos mashing this newfound LLM technology into everyone's faces and calling it "AI" as if it's some coherent intelligence. Then they write themselves out of the picture and leave the individuals they've pitted against one another to fight it out.
throw_me_uwu•5mo ago
With all the labels and disclaimers, there will always be that one person who gets confused. It's unreasonable to demand protection from the long tail of accidents that can happen.
at-fates-hands•5mo ago
Another highlight of the woeful US health care system:
> By early this year, Bue had begun suffering bouts of confusion. Linda booked him for a dementia screening, but the first available appointment was three months out.
Three months for a dementia screening is insane. Had he gotten the screening and been made aware of what was happening, this might've been avoided. It's tragic that our health care system is a joke for the most vulnerable.