This is an addition to the other three laws embedded in positronic brains:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
To me the Zeroth Law echoes the paternalism built into LLMs, where they take on the role of shepherd rather than tool. The other day I asked one a question and didn't get an answer, but I did get a lecture about how misleading the answer could be. I really don't want my encyclopedia to have an opinion about which facts I shouldn't know.
As someone who actually likes to explore ideas and steelman myself in these chats, it's especially obnoxious, because those kinds of comments do nothing to guide you down good paths on subjects you may be working on and learning.
Of course the average user likes getting their ego stroked. It looks like OpenAI will embrace the dopamine-pumping style that pervades the digital realm.
- synthetic friend
- a tool that happens to be much faster than google/ctrl-f/man-pages/poking around github
perhaps offer GPT-5-worker and GPT-5-friend?
minimaxir•2h ago