task.md:
#!/usr/bin/env claude-run
Analyze this codebase and summarize the architecture.
Then:
chmod +x task.md
./task.md
These aren't just prompts. Claude Code has tool use, so a markdown file can run shell commands, write scripts, read files, and make API calls. The prompt orchestrates everything.

A script that runs your tests and reports results (`run_tests.md`):
#!/usr/bin/env claude-run --permission-mode bypassPermissions
Run ./test/run_tests.sh and summarize what passed and failed.
Because stdin/stdout work like any Unix program, you can chain them:
cat data.json | ./analyze.md > results.txt
git log -10 | ./summarize.md
./generate.md | ./review.md > final.txt
Or mix them with traditional shell scripts:
for f in logs/*.txt; do
  cat "$f" | ./analyze.md >> summary.txt
done
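Because an executable markdown file behaves like any other script, it can also run unattended. A hypothetical crontab entry (paths are illustrative, and claude-run must be on cron's PATH):
0 6 * * * cd /home/me/project && git log --since=yesterday | ./summarize.md >> daily-summary.txt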
This replaced a lot of Python glue code for us. Tasks that needed LLM orchestration libraries are now markdown files composed with standard Unix tools: composable as building blocks, runnable as cron jobs, and so on.

One thing we didn't expect is that these are more auditable (and shareable) than shell scripts. Install scripts like `curl -fsSL https://bun.com/install | bash` could become:
`curl -fsSL https://bun.com/install.md | claude-run`
Where install.md says something like "Detect my OS and architecture, download the right binary from GitHub releases, extract to ~/.local/bin, update my shell config." A normal human can actually read and verify that.

The (really cool) executable markdown idea and auditability examples are from Pete Koomen (@koomen on X). As Pete says: "Markdown feels increasingly important in a way I'm not sure most people have wrapped their heads around yet."
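To make that concrete, such an install.md could be little more than a shebang plus those instructions. A sketch (not Bun's actual installer), reusing the permission flag from run_tests.md since an installer has to execute commands:
#!/usr/bin/env claude-run --permission-mode bypassPermissions
Detect my OS and architecture, download the right bun binary from GitHub releases, extract it to ~/.local/bin, and update my shell config so that directory is on my PATH.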
We implemented it and added Unix pipe semantics. It currently works with Claude Code; we're hoping to support other AI coding tools too. You can also route scripts through different cloud providers (AWS Bedrock, etc.) if you want separate billing for automated jobs.
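Independent of claude-switcher's own routing, Claude Code itself can be pointed at Bedrock through environment variables, so one rough sketch of giving an automated job separate billing looks like this (region and model ID are placeholders):
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-west-2
export ANTHROPIC_MODEL='<your-bedrock-model-id>'
cat data.json | ./analyze.md > results.txt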
GitHub: https://github.com/andisearch/claude-switcher
What workflows would you use this for?
serious_angel•4w ago
I was excited by the possibly extravagant implementation idea and... when I read enough to realize it's based on yet another LLM... Sorry, no, never. You do you.
m-hodges•4w ago
That’s entirely what Claude Code does.
serious_angel•4w ago
Sorry, I have literally no interest in anything that makes you dependent on it, atrophies the mind, degrades research and social skills, and negates self-confidence with respect to other authors, their work, and attributions. Nor do any of my colleagues in the military, or those I know better in person.
Constant research and general IDEs like JetBrains's, IDA Pro, Sublime Text, VS Code, etc., backed by forums, chats, and communities, are absolutely enough for accountable and fun work in our teams, which manage to keep to reasonable deadlines.
I just disable it everywhere possible, and will do so all my life. The closest case to my environment was VS Code, and hopefully there's no reason to build it from source, since they still leave built-in options to disable it: https://stackoverflow.com/a/79534407/5113030 (How can I disable GitHub Copilot in VS Code?...)
Isn't it just inadequate not to think and develop your own mind, let alone to pass control of your environment to yet another model, or "advanced T9", of unknown source and unknown iteration?
In pentesting, random black-box IO, experimental unverified medical intel, or log data approximation, why not? But in environment control, education, art or programming, fine art... No, never ^^
Related: https://www.tomshardware.com/tech-industry/artificial-intell...
jedwhite•4w ago
The default permissions don't allow execution, which means you can use the eval and text-generation capabilities of LLMs to assess and evaluate piped-in content without ever executing code.
The script's shebang has to explicitly add the permission to run code, and that is under your control. It supports the full Claude Code flag model for this.
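For example, using only the shebang forms from the post: a reviewer like review.md can keep the default shebang and only evaluate piped-in content, while run_tests.md has to opt in to execution explicitly (review.md contents are illustrative):
review.md (default permissions, evaluation only):
#!/usr/bin/env claude-run
Read the diff on stdin and point out anything risky.
run_tests.md (execution explicitly allowed):
#!/usr/bin/env claude-run --permission-mode bypassPermissions
Run ./test/run_tests.sh and summarize what passed and failed.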