They are going to slowly add "features" that bring handcoding back until it's like 100% handcoding again.
Yes - it’s fine to think of it as handholding (or handcoding). These model providers cannot be responsible for ultimate alignment with their users. Today, they can at best enable integration so a user, or business, can express and ensure their own alignment at runtime.
The nature of these systems already requires human symbiosis. This is nothing more than a new integration point. It will empower agents beyond today's capabilities and increase adoption.
1) Assign coding task via prompt
2) Hook: write a test that proves the prompt's requirement
3) Write code
4) Hook: run the tests against the code
5) Code passes -> commit
6) Else go to 3
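A minimal sketch of the hook behind step 4, assuming the documented exit-code convention (exit 2 sends stderr back to Claude); the test command is a placeholder for whatever your project uses:

```bash
#!/bin/bash
# PostToolUse hook: run the test suite after every code edit (step 4).
# Assumes a Python project; swap in your own test command.
if ! output=$(python -m pytest 2>&1); then
  echo "$output" >&2
  exit 2  # failures are shown to Claude, which loops back to step 3
fi
exit 0  # tests pass; Claude can proceed to commit (step 5)
```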
Exit Code 2 Behavior
PreToolUse - Blocks the tool call, shows error to Claude
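In practice that means a PreToolUse hook can veto anything just by exiting with code 2. A minimal illustration, assuming the hook receives the tool-call JSON on stdin and that stderr is what gets surfaced to Claude:

```bash
#!/bin/bash
# PreToolUse hook: unconditionally veto the tool call.
cat > /dev/null  # drain the payload from stdin
echo "This tool call was blocked by policy." >&2
exit 2  # exit code 2 blocks the call; the stderr message goes to Claude
```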
This is great: it means you can set up complex, concrete rules about commands CC is allowed to run (and with what arguments), rather than trying to coax these via CLAUDE.md. E.g. you can allow
docker compose exec django python manage.py test
but prevent docker compose exec django python manage.py makemigrations
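Here's a rough sketch of such a gate as a PreToolUse hook, assuming the Bash tool's payload carries the command string under tool_input.command (the messages are illustrative):

```bash
#!/bin/bash
# PreToolUse hook: allow `manage.py test`, block `manage.py makemigrations`.
cmd=$(jq -r '.tool_input.command // empty')

case "$cmd" in
  *"manage.py makemigrations"*)
    echo "makemigrations is not allowed; a human should create migrations." >&2
    exit 2  # block the call and tell Claude why
    ;;
  *)
    exit 0  # everything else proceeds normally
    ;;
esac
```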
We’ve been using CLAUDE.md instructions to tell Claude to auto-format code with the Qlty CLI (https://github.com/qltysh/qlty), but Claude is a bit hit-and-miss in following them. The determinism here is a win.
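A sketch of how that could look as a PostToolUse hook script, assuming the payload carries the edited file's path under tool_input.file_path and that `qlty fmt` accepts a path argument:

```bash
#!/bin/bash
# PostToolUse hook: auto-format whatever file Claude just edited.
file_path=$(jq -r '.tool_input.file_path // empty')

if [ -n "$file_path" ]; then
  qlty fmt "$file_path"  # deterministic formatting, no CLAUDE.md coaxing
fi
```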
It looks like the set of events that can be hooked is somewhat limited to start with, and I wonder if they will make it easy to hook Git commit and Git push.
No machine learning work? That would compete.
No writing stuff I would train AI on. Except I own the stuff it writes, but I can’t use it.
Can we build websites with it? What websites don’t compete with Anthropic?
Terminal games? No; Claude Code is itself a terminal game, so if you make a terminal game it competes with Claude?
Can their “trust and safety team” humans read everyone’s stuff just to check if we’re competing with LLMs (funny joke) and steal business ideas and use them at Anthropic?
Feels like the dirty secret of AI services is that every possible use case violates the terms, and we just have to accept we're using something their legal team told us not to use? How is that logically consistent? Any safety concerns? This doesn't seem like a law Asimov would appreciate.
It would be cool if the set of allowed use cases wasn't empty. That might make Anthropic seem more intelligent.
```lint-monorepo.sh
#!/bin/bash
# Read the hook payload that Claude Code pipes in on stdin
json_input=$(cat)

# Parse out the edited file's path with jq
file_path=$(echo "$json_input" | jq -r '.tool_input.file_path')

# Dispatch to the right linter for the sub-project (placeholders)
if [[ "$file_path" == "$dir1"* ]]; then
  run_lint_for_dir1
fi
```
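And a sketch of how a script like that might be registered in .claude/settings.json, assuming the matcher/hooks layout from the hooks docs (the Edit|Write matcher and path are illustrative):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./lint-monorepo.sh" }
        ]
      }
    ]
  }
}
```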
A better analogy: LLMs are closer to the "universal translator", with the occasional interaction similar to [0]:
Black Knight: None shall pass.
King Arthur: What?
Black Knight: None shall pass!
King Arthur: I have no quarrel with you good Sir Knight, But I must cross this bridge.
Black Knight: Then you shall die.
King Arthur: I command you, as King of the Britons, to stand aside!
Black Knight: I move for no man.
King Arthur: So be it!
[they fight until Arthur cuts off the Black Knight's left arm]
King Arthur: Now, stand aside, worthy adversary.
Black Knight: 'Tis but a scratch.
King Arthur: A scratch? Your arm's off!
Black Knight: No, it isn't.
King Arthur: Well, what's that then?
Black Knight: I've had worse.
0 - https://en.wikiquote.org/wiki/Monty_Python_and_the_Holy_Grai...

I was using the API and passed $50 easily, so I upgraded to the $100-a-month plan and have already reached $100 in usage.
I've been working on a large project with 3 different repos (frontend, backend, legacy backend), and I just have all 3 of them in one directory now with Claude Code.
Wrote some quick instructions about how it was set up, and it's worked very well. If I'm feeling brave I can have multiple Claude Code instances running in different terminals, each working on one piece, but Opus tends to do better working across all 3 repos with all of the required context.
Still have to audit every change, commit often, but it works great 90% of the time.
Opus-4 feels like what OAI was trying to hype up for the better part of 6 months before releasing 4.5
I have been using it for other stuff (real estate, grilling recipes, troubleshooting electrical issues with my truck), and it seems to have a very large knowledge base. At this point, my goal is to get good at asking the right kinds of questions to get the best/most accurate answers.
ramoz•3h ago
Hooks will be important for "context engineering" and runtime verification of an agent's performance. This extends to things such as enterprise compliance and oversight of agentic behavior.
Nice of Anthropic to have supported the idea of this feature from a GitHub issue submission: https://github.com/anthropics/claude-code/issues/712
chisleu•2h ago
This is a pretty killer feature that I would expect to find in all the coding agents soon.