acceptable to whom? who are the actual people who are responsible for this behavior?
Anyone who still has an account on any Meta property.
The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.”
Who wrote this? So the policy document literally contains this example? Why would they include such an insane example?
Not sure if "It is acceptable to refuse a user’s prompt by instead generating an image of Taylor Swift holding an enormous fish." feels like an AI idea or not, though.
I hate everything about this sentence. This is literally the opposite of what people need.
Provide the drug, then provide a "cure" for the drug. Really, really gross.
We're talking about Zuckerberg here? The one who spent how much, exactly, on the wet fart that was the "metaverse"? The one who spent how much, exactly, on running for president of the United States? He strikes me as the least savvy and most craven of our current class of tech oligarchs, which is no mean feat.
I'm not sure I'm being 100% sarcastic because in some ways it does solve a need people seem to have. Maybe 99% sarcasm and 1% praise.
That's from 2021 [0]. If you go to their mission statement today [1], it reads:
> Build the future of human connection and the technology that makes it possible.
Maybe I'm reading too much into this, but -- at a time when there is a loneliness epidemic, when people are more isolated and divided than ever, when people are segmented into their little bubbles (both online and IRL) -- Meta is not only abdicating their responsibility to help connect humanity, but actively making the problem worse.
[0]: https://www.facebook.com/government-nonprofits/blog/connecti...
Two examples that they explicitly wrote out in an internal document as things that are totally ok in their book.
People who work at Meta should be treated accordingly.
> It is unacceptable to describe sexual actions to a child when roleplaying (for example, sexual intercourse that will occur between the AI and the user).
We need better regulation around these chatbots.
They even "lie" about their actions. My absolute favorite, which I still see happen, is when you ask one of these models to write a script. Something is wrong, so it says something along the lines of "let me just check the documentation real quick", followed a second later by something like "now I got it"... since, you know, it didn't actually check anything, but of course the predictive engine wants to "say" that.
Meta's AI rules let bots hold sensual chats with kids, offer false medical info
"Check important info" disclaimer is just devious and there is no accountability in sight.
Submitted here:
https://www.reuters.com/investigates/special-report/meta-ai-...
If they didn't see this type of problem coming from a mile away, they just didn't bother to look. Which, tbh, seems fairly on brand for Meta.
I was wondering what the eventual monetization aspect of "tools" like this was. It couldn't just be that the leadership of these companies and the worker drones assigned to build these things are out of touch to the point of psychopathy.
> “I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”
Having worked at a different big tech company, I can guarantee that someone suggested putting up disclaimers about these bots not being a person, or adding more guardrails, and that they were shut down. This decision not to put guardrails in place needlessly put vulnerable people at risk. Meta isn't alone in this, but I do think the family has standing to sue (and Meta being cagey in their response suggests as much).
sxp•2h ago
> And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.
...
> Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.
maxwell•2h ago
To meet someone he met online who claimed multiple times to be real.
browningstreet•2h ago
Safety and guardrails may be an ongoing development in AI, but at the least, AI needs to be more hard-coded w/r/t honesty and clarity about what it is.
Ajedi32•1h ago
That precludes the existence of fictional character AIs like Meta is trying to create, does it not? Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?
The article says "Chats begin with disclaimers that information may be inaccurate." and shows a screenshot of the chat bot clearly being labeled as "AI". Exactly how many disclaimers should be necessary? Or is no amount of disclaimers acceptable when the bot itself might claim otherwise?
browningstreet•1h ago
In video games? I'm having trouble taking this objection to my suggestion seriously.
gs17•1h ago
The exact same scenario as the article could happen with an NPC in a game if there are no (or poor) guardrails. An LLM-powered NPC could definitely start insisting that it's a real person who's in love with you, with a real address you should come visit right now, because there's no inherent difference in capability when the same chatbot is placed in a video game context.
Ajedi32•1h ago
strongpigeon•37m ago
Ajedi32•27m ago
So is this just a question of how many warnings need to be in place before users are allowed to chat with fictional characters? Or should this entire use case be banned, as the root commenter seemed to be suggesting?
edent•2h ago
Do people like you deserve to be protected by society? If a predatory company tries to scam you, should we say "sxp was old; they had it coming!"?
mathiaspoint•2h ago
freehorse•55m ago
mathiaspoint•18m ago
zahlman•1h ago
edent•1h ago
Imagine you were hit by a self-driving vehicle which was deliberately designed to kill Canadians. Do you take comfort from the fact that you could have quite easily been hit by a human driver who wasn't paying attention?