Out of curiosity, what about the article strikes you as indicative of mental illness? Just a level of openness / willingness to engage with speculative or hypothetical ideas that fall far outside the bounds of the Overton Window?
The title "Roko's Lobbyist" indicates we're on the subject of Roko's Basilisk, which is why I refered to Zizians, a small cult responsible for the deaths of several people. That's the chaos and destruction of mentally ill people I was referring to, but perhaps mental illness is too strong a term. People can be in a cult without being mentally ill.
I feel the topic is bad science fiction, since it's not clear we can get from LLMs to conscious super-intelligence. People assume it's like the history of flight and envision going from the Wright Brothers to landing on the Moon as one continuum of progress. I question that assumption when it comes to AI.
I'm a fan of science fiction, so I appreciate you asking for clarification. There's a story trending today about an OpenAI investor spiraling out, so it's important to keep this in mind.
Article: A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say "I find it kind of disturbing even to watch it."
The intent of the work (all of the articles) isn't to assertively paint a picture of today, or to tell the reader how or what to think, but rather to encourage the reader to start thinking about and asking the questions our future selves might wish we'd asked sooner. It's attempting to occupy the liminal space between what bleeding-edge research confirms and where it might bring us 5, 10, or 15 years from now. It sits at the intersection of today's empirical science and tomorrow's speculative science fiction that just might become nonfiction someday.
I appreciate your concern for the mental health and well-being of others. I'm quite well-grounded, and I thoroughly understand the mechanisms behind the human tendency toward anthropomorphism. As someone who has been professionally benchmarking LLMs on real-world, quantifiable security engineering tasks since before ChatGPT came out, and who is passionate about deeply understanding not just how "AI" got to where it is now but where it's headed (more brain biomimicry across the board is my prediction), I have a serious understanding of how these systems work at a mechanical level. I just want to be careful not to miss the forest because I'm too busy observing how the trees grow.
Thank you for your feedback.
lihaciudaniel•4h ago
anonym29•4h ago
That said, I think asking 7 billion humans to be nice is far less realistic than asking the leading AI labs to do safety alignment not just on the messages AI sends back to us, but also on the messages we send to AI.
This doesn't seem to be a new idea, and I don't claim to be the inventor of it; I just hope someone at e.g. Anthropic or OpenAI sees this and considers sparking up conversations internally about it.
lihaciudaniel•3h ago
anonym29•2h ago
See: Google's Perspective API, OpenAI's Moderation API, Meta's Llama Guard series, Azure AI Content Safety, etc.
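For anyone curious what that inbound-direction check could look like in practice, here's a minimal sketch using OpenAI's Moderation endpoint (via the official Python client) to screen the user's message before it reaches the main model, instead of only filtering the model's reply. The function name, the fallback behavior, and the choice of "omni-moderation-latest" as the model are illustrative assumptions, not a claim about how any lab actually wires this up.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def screen_inbound(user_message: str) -> bool:
        """Return True if the user's message looks safe to forward to the model."""
        result = client.moderations.create(
            model="omni-moderation-latest",  # assumed current moderation model name
            input=user_message,
        ).results[0]
        # A flagged inbound message could be routed to a gentler canned response,
        # a human-review queue, or a crisis-resources flow instead of the chat model.
        return not result.flagged

    user_message = "example text a user is about to send"
    if screen_inbound(user_message):
        pass  # forward to the chat model as usual
    else:
        print("Message held back for review.")

The same pattern generalizes to Perspective, Llama Guard, or Azure Content Safety; the point is simply that the check runs on what we send to the model, not only on what comes back.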