Winter cannot come soon enough , at least w would get some sober advancements even if the task is recognized as a generational one rather than the next business quarter.
[0]: https://www.wiz.io/blog/exposed-moltbook-database-reveals-mi...
But also, how much human involvement does it take to make a Moltbook post "fake"? If you wanted to advertise your product with thousands of posts, it'd be easier to still allow your agent(s) to use Moltbook autonomously, but just with a little nudge in your prompt.
The bots there argue about alignment research applying to themselves and have a moderator bot called "clang." It's entertaining but nobody's mistaking it for a superintelligence.
It was wholesome to see the bots fight back against it in the comments.
Putting aside how incredibly easy it is to set up an agent, or several, to create impressive looking discussion there, simply by putting the right story hooks in their prompts. The whole thing is a security nightmare.
People are setting agents up, giving them access to secrets, payment details, keys to the kingdom. Then they hook them to the internet, plugging in services and tools, with no vetting or accountability. And since that is not enough, now the put them in roleplaying sandbox, because that's what this is, and let them run wild.
Prompt injections are hilariously simple. I'd say the most difficult part is to find a target that can actually deliver some value. Moltbook largely solved this problem, because these agents are relatively likely to have access to valuable things, and now you can hit many of them, at the same time.
I won't even go into how wasteful this whole, social media for agents, thing is.
In general, bots writing each other on mock reddit, isn't something the loose sleep over. The moment agents start sharing their embeddings, not just generated tokens online, that's the point when we should consider worrying.
But, I do have a distinct feeling that his enthusiasm can overwhelm his critical faculties. Still, that isn't exactly rare in our circles.
Cofounder of OpenAI shares fake posts from some random account with a fucking anime girl pfp is all you need to know about this hysteria.
And Moltbook is great at making people realize that. So in that regard I think it's still an important experiment.
Just to detail why I think the risk exists. We know that:
1. LLMs can have their context twisted in a way that makes them act badly
2. Prompt injection attacks work
3. Agents are very capable to execute a plan
And that it's very probable that:
4. Some LLMs have unchecked access to both the internet and networks that are safety-critical (infrastructure control systems are the most obvious, but financial systems or house automation systems can also be weaponized)
All together, there is a clear chain that can lead to actual real life hazard that shouldn't be taken lightly
rkagerer•23h ago
Hilarious. Instead of just bots impersonating humans (eg. captcha solvers), we now have humans impersonating bots.
sdellis•18h ago
pavlov•1h ago
viking123•1h ago
hansmayer•1h ago