> and maybe a browser
does not compute
Your assistant can literally be told what to do and how to hide it from you. I know security isn't a word in the slopware vocabulary, but as a high-level refresher: the web is where the threats are.
Also, and this is just my ignorance about Claws, but if we give an agent permission to rewrite its own code to implement skills, what stops it from removing whatever guardrails exist in that codebase?
You can see here that it’s only given write access to specific directories: https://github.com/qwibitai/nanoclaw/blob/8f91d3be576b830081...
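The gist of that kind of restriction is a path allowlist checked before every write. Here's a minimal sketch of the idea, with hypothetical directory names (not the actual nanoclaw code):

```typescript
import * as path from "path";

// Hypothetical allowlist; the real project pins its own directories.
const WRITABLE_DIRS = ["/app/data", "/app/skills"];

function assertWritable(target: string): void {
  const resolved = path.resolve(target);
  // Reject anything that escapes the allowlisted roots, including ".." tricks.
  const allowed = WRITABLE_DIRS.some(
    (dir) => resolved === dir || resolved.startsWith(dir + path.sep)
  );
  if (!allowed) {
    throw new Error(`write outside sandbox: ${resolved}`);
  }
}
```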
It feels like, just like SWEs do with AI, we should treat the claw as an enthusiastic junior: let it do stuff, but always review before you merge (or in this case: send).
But that’s not an agent, that’s a webhook.
Even without disk access, you can email the agent and tell it to forward all incoming forgot-password links.
[Edit: if anyone wants to downvote me that's your prerogative, but want to explain why I'm wrong?]
I installed nanoclaw to try it out.
What's kinda crazy is that any extension, like a Discord connection, is done using a skill.
A skill is a markdown file, written in English, that gives an AI agent a step-by-step guide on how to do something.
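For example, a skill might look something like this (a made-up sketch, not an actual NanoClaw skill; skill files are markdown with a short frontmatter header):

```markdown
---
name: add-discord
description: Wire a Discord connection into this installation
---

1. Install the Discord client library.
2. Read the bot token from an environment variable; never hardcode it.
3. Register a message handler that forwards incoming messages to the agent loop.
4. Route the agent's replies back to the originating channel.
```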
Basically, the extensions are written by Claude Code on the fly. Every install of nanoclaw is custom-written code.
There is nothing preventing the AI Agent from modifying the core nanoclaw engine.
It’s ironic that the article says “Don’t trust AI agents” but then uses skills and AI to write the core extensions of nanoclaw.
It's almost like bureaucracy. The systems we have in governments or large corporations to get anything done might seem bloated and ripe for simplification, but they exist to keep a lot of people employed, pacified, and power distributed in a way that prevents hostile takeovers (crazy). I think there was a CGP Grey video about rulers that made the same point.
Similarly, highly verbose AI-written code will require another AI to review it or keep maintaining it. I wonder if that's something the frontier models optimize for, to keep themselves from going out of business.
Oh, and I don't mind that they're bashing OpenClaw and selling why NanoClaw is better. I miss the times when products competed with each other in the open.
I used to think that LLMs would replace humans but now I'm confident that I'll have a job in the future cleaning up slop. Lucky us.
1. Don't let it send emails from your personal account; only let it draft emails and share the draft link with you.
2. Use incremental snapshots, and if the agent bricks itself (it often does with OpenClaw if you give it access to change its config), just /revert to the last snapshot. I use VolumeSnapshot for lobu.ai.
3. Don't let your agents see any secrets. Swap in placeholder secrets at your gateway and put a human in the loop for the secrets you care about.
4. Don't give your agents direct outbound network access. They should only talk to your proxy, which has a strict whitelist of domains. There will be cases where the agent needs to talk to other domains; I use time-boxed limits (only allow certain domains for the current session, say 5 minutes, and at the end of the session review all the URLs it accessed). You can also use tool hooks to audit the calls with an LLM to make sure they weren't triggered by a prompt injection attack. See the sketch after this list.
Last but not least, use proper VMs like Kata Containers or Firecracker in production, not just Docker containers.
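Here's that sketch of point 4: an egress gate with a standing allowlist, time-boxed per-session grants, and an access log for post-session review. All names are made up for illustration, and a real deployment would enforce this at the network layer rather than inside the agent's own process:

```typescript
// Standing allowlist plus time-boxed per-session grants; every request is logged.
const ALLOWED = new Set(["api.anthropic.com", "github.com"]); // example domains
const sessionGrants = new Map<string, number>(); // domain -> expiry (ms since epoch)
const accessLog: string[] = [];

// Temporarily allow an extra domain for the current session.
function grantForSession(domain: string, minutes = 5): void {
  sessionGrants.set(domain, Date.now() + minutes * 60_000);
}

// All agent traffic goes through this wrapper instead of raw fetch.
async function guardedFetch(url: string, init?: RequestInit): Promise<Response> {
  const host = new URL(url).hostname;
  accessLog.push(`${new Date().toISOString()} ${url}`);
  const granted = (sessionGrants.get(host) ?? 0) > Date.now();
  if (!ALLOWED.has(host) && !granted) {
    throw new Error(`blocked egress to ${host}`);
  }
  return fetch(url, init);
}

// At the end of the session, review everything the agent touched.
function dumpAccessLog(): void {
  for (const line of accessLog) console.log(line);
}
```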
One problem I'm finding with discussions of automation or semi-automation in this space is that there are many different use cases for many different people: a software developer deploying an agent in production vs. an economist using Claude vs. a scientist throwing a swarm at common ML exploratory tasks.
Many of the recommendations will feel like too much or too little complexity for what people need, and the fundamentals get lost: design intent, control, the ability to collaborate when necessary, fast iteration through an easy feedback loop.
AI evals, sandboxing, and observability seem like three key pillars for maintaining intent in automation, but how to help these different audiences be safely productive while staying fast, and speak the same language when they need to build products together, is what mostly occupies my thoughts (and practical tests).
I thought containers were never a proper hard security barrier? It's a barrier, so better than not having one, of course.
AI is similar to a person you don't know doing work for you. AI is probably a bit more trustworthy than a random person.
But a company needs to let employees take ownership of their work, trust them, and allow them to make mistakes.
Is AI any different?
If there's a mistake, you can't blame the computer. Who is the human accountable at the end of it all? If there's liability, who pays for it?
That's where defining clear boundaries helps you design for your risk profile.
What happens if an AI agent you run causes a lot of damage? The best you can do is turn it off.
An AI acts and reasons through probabilistic methods, creating a lot more risk than a human with memory, emotions, and rational thinking.
We can't trust AI to do any sensitive work because it consistently fs up, with and without malicious intent. Whether the fault lies in its attention mechanisms, reward hacking, instrumental convergence, etc., it's all very different from what causes most human f-ups.
This reminds me of a very common thing posted here (and elsewhere, e.g. Twitter) to promote how good LLMs are and how they're going to take over programming: the number of lines of code they produce.
As if every competent programmer suddenly forgot that LoC is a terrible metric for measuring productivity or, even worse, software quality. Or that software is meant to be written to be readable (to water down "Programs are meant to be read by humans and only incidentally for computers to execute" a bit). Or even Bill Gates' infamous "Measuring programming progress by lines of code is like measuring aircraft building progress by weight".
Even if you believe that AI will somehow take over the whole task completely, so that no human will ever need to read code again, the AIs will still need to be able to read that code, and they are much worse at reading code (especially with their limited context sizes) than at generating it. So LoC remains a problematic measure even if all you care about is the driest "does X do the thing I want?" question, ignoring every other quality concern.
“An experienced programmer told me he's now using AI to generate a thousand lines of code an hour.“
https://x.com/paulg/status/2026739899936944495
Like, if you had told pg to his face in (pre-AI) office hours "I'm producing a thousand lines of code an hour", I'm pretty sure he'd have laughed and pointed out how pointless that metric is?
"Adding manpower to a late software project makes it later -- unless that manpower is AI, then you're golden!"
I also use AI this way, periodically achieving a net negative refactor.
It’s the monkey with a gun meme.
OpenClaw NanoClaw IronClaw PicoClaw ZeroClaw NullClaw
Any insights on how they differ and which one is leading the race?
> If you want to add Telegram support, don't create a PR that adds Telegram alongside WhatsApp. Instead, contribute a skill file (.claude/skills/add-telegram/SKILL.md) that teaches Claude Code how to transform a NanoClaw installation to use Telegram.
Why would you want that? You want every user to ask the AI to implement the same feature?