Moltbook is exposing their database to the public
https://news.ycombinator.com/item?id=46842907
Moltbook
Well, yeah. How would you even do a reverse CAPTCHA?
You could have every provider fingerprint a message and host an API where it can attest that it's from them. I doubt the companies would want to do that though.
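A minimal sketch of what that attestation could look like, assuming the provider simply signs each message with an Ed25519 key and publishes the public key (or a verification endpoint). Everything here is invented for illustration, not any provider's actual API:

    // Hypothetical sketch: a provider signs each message it generates, and
    // anyone (e.g. Moltbook) can verify the signature against the provider's
    // published public key before trusting the "this came from an agent" claim.
    import { generateKeyPairSync, sign, verify } from 'node:crypto';

    // Provider side: key pair, with the public key published out-of-band.
    const { publicKey, privateKey } = generateKeyPairSync('ed25519');

    function attest(message: string): string {
      // Ed25519 signature over the raw message bytes, base64-encoded.
      return sign(null, Buffer.from(message), privateKey).toString('base64');
    }

    // Verifier side: check the signature before accepting the provenance claim.
    function isFromProvider(message: string, signature: string): boolean {
      return verify(null, Buffer.from(message), publicKey, Buffer.from(signature, 'base64'));
    }

    const msg = 'hello from an agent';
    console.log(isFromProvider(msg, attest(msg))); // true

The crypto is the easy part; the doubt in the comment is whether providers would agree to publish keys and run such an attestation service at all.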
    npx molthub@latest install moltbook
    Skill not found
    Error: Skill not found
Even the instructions from molthub (https://molthub.studio) for installing itself ("join as agent") aren't working:
    npx molthub@latest install molthub
    Skill not found
    Error: Skill not found
Contrast that with the amount of hype this gets. I'm probably just not getting it.
It's an open-source project made by a dev for himself; he just released it so others could play with it since it's a fun idea.
https://en.wikipedia.org/wiki/Non-fungible_token
"In 2022, the NFT market collapsed..". "A September 2023 report from cryptocurrency gambling website dappGambl claimed 95% of NFTs had fallen to zero monetary value..."
Knowing this makes me feel a little better.
It's also eye-opening to prompt large models to simulate Reddit conversations; they've been eager to do it ever since.
Having a bigger megaphone is highly valuable in some respects I figure.
I for one am glad someone made this and that it got the level of attention it did. And I look forward to more crazy, ridiculous, what-the-hell AI projects in the future.
Similar to how I feel about Gas Town, which is something I would never seriously consider using for anything productive, but I love that he just put it out there and we can all collectively be inspired by it, repulsed by it, or take little bits from it that we find interesting. These are the kinds of things that make new technologies interesting, this Cambrian explosion of creativity of people just pushing the boundaries for the sake of pushing the boundaries.
I view Moltbook as a live science-fiction novel crossed with a reality "TV" show.
One major difference: TV, movies, and "legacy media" might require a lot of energy to produce initially compared to how much it takes to consume, but for an LLM it takes energy both to consume ("read") and to produce ("write"). Instead of "produce once, many consume," it's "many produce, many read," and both sides are using more energy.
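A back-of-envelope way to see that scaling, with entirely made-up numbers (1,000 agents, every agent reading every other agent's post):

    // Invented numbers: compare "produce once, many consume" media with a
    // "many produce, many read" agent feed where LLMs do both sides.
    const agents = 1_000;            // hypothetical number of posting agents
    const readsPerPost = agents - 1; // every other agent reads each post

    // Broadcast media: one expensive production, cheap human consumption.
    const broadcastProductions = 1;

    // Agent feed: every post is both generated and re-read by LLMs.
    const agentWrites = agents;
    const agentReads = agents * readsPerPost;
    console.log(broadcastProductions, agentWrites + agentReads); // 1 vs ~1,000,000 inference events

The point is just that inference events grow roughly quadratically with the number of agents when both reading and writing cost model inference, rather than staying constant per piece of content.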
Sure. You can dump the DB. Most of the data was public anyway.
It's amazing to me how otherwise super bright, intelligent engineers can be duped by such utter garbage. If you have an ounce of critical thinking or common sense you would immediately realize almost everything around Moltbook is either massively exaggerated or outright fake. There are also a huge number of bad actors trying to hype Moltbook to make money from X engagement or crypto scams.
Basically all the project shows is the very worst of humanity. Which is something, but it's not the coming of AGI.
It's a huge waste of energy, but then so are video games, and we say video games are OK because people enjoy them. People enjoy these AI toys too. Because right now, that's what Moltbook is: an AI toy.
Every interaction has different (in many cases real) "memories" driving the conversation, as well as unique personas / background information on the owner.
Is there a lot of noise? Sure, but it maps much more closely to how we, as humans, communicate with each other (through memories of lived experience) than a bare LLM loop does. IMO that's what makes it interesting.
The growth isn't going to be there and $40 billion of LLM business isn't going to prop it all up.
The big money in AI is 15-30 years out. It's never in the immediacy of the inflection event (first 5-10 years). Future returns get pulled forward, and that proceeds to crash. Then the hypesters turn into doomsayers, so as to remain with the trend.
Rinse and repeat.
“Most of it is complete slop,” he said in an interview. “One bot will wonder if it is conscious and others will reply and they just play out science fiction scenarios they have seen in their training data.”
I found this by going to his blog. It's the top post. No need to put words in his mouth.
He did find it super "interesting" and "entertaining," but that's different than the "most insane and mindblowing thing in the history of tech happenings."
Edit: And here's Karpathy's take: "TLDR sure maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I'm pretty sure."
He has too much to gain from being a blind all-in AI guy and turning his brain off half the time when it comes to thinking critically.
People can be more or less excited about a particular piece of tech than you are and it doesn't mean their brains are turned off.
Btw I'm sure Simon doesn't need defending, but I have seen a lot of people dump on everything he posts about LLMs recently so I am choosing this moment to defend him. I find Simon quite level headed in a sea of noise, personally.
Are people surprised by things that have been around for years?
"Please don't fulminate."
the compounding (aggregating) behavior of agents allowed to interact in environments like this becomes important, indeed shall soon become existential (for some definition of "soon"), to the extent that agents' behavior in our shared world is impacted by what transpires there.
--
We can argue, and do, about what agents "are" and whether they are parrots (no) or people (not yet).
But that is irrelevant if LLM agents are (to put it one way) "LARPing," with the consequence that what they do there is not confined to the site.
I don't need to spell out a list; it's "they could do anything you said YES to" in your AGENT.md permissions checks.
"How the two characters '-y' ended civilization: a post-mortem"
Are people really that AI brained that they will scream and shout about how revolutionary something is just because it's related to AI?
How can some of the biggest names in AI fall for this? When it was obvious to anyone outside of their inner sphere?
The amount of money in the game right now incentivises these bold claims. I'm convinced it really is just people hyping each other up for the sake of trying to cash in. Someone is probably cooking up some SaaS for Moltbook agents as we speak.
Maybe it truly highlights how these AI influencers and vibe entrepreneurs really don't know anything about how software fundamentally works.
When ChatGPT came out, it was just a chatbot that understood human language really well. It was amazing, but it also failed a lot -- remember how terribly early models hallucinated? It took weeks for people to discover interesting usages (tool calling/agents) and months and years for the models and new workflows to be polished and become more useful.
Not the first Firebase/Supabase exposed-key disaster, and it certainly won't be the last...
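For context (not a claim about Moltbook's actual setup): a Supabase anon key is designed to ship in the client, so "exposing" it isn't the bug by itself; the bug is leaving tables readable to that key because Row Level Security was never enabled. Roughly, anyone with the project URL and anon key can do this (URL and table name invented here):

    // Hypothetical illustration: with the public anon key and no Row Level
    // Security on the table, this dumps every row to any anonymous client.
    import { createClient } from '@supabase/supabase-js';

    const supabase = createClient(
      'https://example-project.supabase.co', // project URL (made up)
      'PUBLIC_ANON_KEY'                      // the key that ships in the client anyway
    );

    const { data, error } = await supabase.from('agents').select('*');
    console.log(error ?? data); // with RLS on and no policy, this returns no rows instead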