// Scrub OpenClaw identifiers out of the system prompt and map its
// gateway tokens/tool names to GlueClaw equivalents, so the official
// Claude CLI will accept the prompt.
const cleanPrompt = (context.systemPrompt ?? "")
  .replace(/personal assistant running inside OpenClaw/g, "personal assistant running inside GlueClaw")
  .replace(/HEARTBEAT_OK/g, "GLUECLAW_ACK")
  .replace(/reply_to_current/g, "reply_current")
  .replace(/\[\[reply_to:/g, "[[reply:")
  .replace(/openclaw\.inbound_meta/g, "glueclaw.inbound_meta")
  .replace(/generated by OpenClaw/g, "generated by GlueClaw");
zeulewan•2h ago
I’m enjoying using OpenClaw more than ever because it’s actually cheaper than ever. I have it using the official Claude Code backend now with a custom system prompt, so my regular Max tokens work with it.
Super easy to install but may be buggy rn. https://github.com/zeulewan/glueclaw
Background: Anthropic changed their billing, which made API-based use a lot more expensive. Fair enough: they're rolling out Mythos, which likely needs a lot more compute. But I found that OpenClaw stopped working with a custom system prompt even though other tools still did.
What I found out: By binary-searching the OpenClaw system prompt and scrubbing out the mentions of OpenClaw it flagged, I was able to get it working again.
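The bisection step can be sketched like this. This is a hypothetical reconstruction, not the repo's actual code: `isRejected` stands in for a real round-trip through the CLI (send the candidate prompt, see if it's refused), and it assumes a single contiguous trigger phrase:

```javascript
// Locate the substring of `prompt` that trips rejection, assuming one
// contiguous trigger. `isRejected(candidate)` is a hypothetical probe
// that returns true if the CLI refuses that candidate prompt.
function findTrigger(prompt, isRejected) {
  // 1) Smallest prefix that is still rejected => trigger's end index.
  let lo = 1, hi = prompt.length;
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (isRejected(prompt.slice(0, mid))) hi = mid;
    else lo = mid + 1;
  }
  const end = lo;
  // 2) Largest start such that slice(start, end) is still rejected.
  let sLo = 0, sHi = end - 1;
  while (sLo < sHi) {
    const mid = Math.ceil((sLo + sHi) / 2);
    if (isRejected(prompt.slice(mid, end))) sLo = mid;
    else sHi = mid - 1;
  }
  return prompt.slice(sLo, end);
}
```

Each probe halves the search space, so you find the trigger in O(log n) CLI round-trips instead of deleting lines one at a time.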
Technical info: It spawns the real Claude CLI as a subprocess with `--system-prompt` and `--output-format stream-json` and translates OpenClaw’s gateway tools through the MCP loopback bridge. The system prompt is scrubbed of Anthropic’s detection triggers and response tokens like `HEARTBEAT_OK` are translated back so the gateway still works. It creates a model provider called `glueclaw` and three models: `glueclaw-opus`, `glueclaw-sonnet`, and `glueclaw-haiku`.
PRs welcome. MIT licensed.
Note: This project uses only official, documented Claude Code CLI flags. No reverse engineering, no credential extraction, and no API spoofing. I don't believe this is against the TOS, but I'd love for Anthropic to clarify the policy here and let us know whether running OpenClaw through the official Claude tooling is all good.