But hey, it worked on me. I'm going to try it, since I've been looking for exactly this.
Of course, the question is price. It's "free during beta".
Writing any project that has a lot of interactive 'dialogues', or exacting and detailed comments, eats a lot of tokens.
My record for tapping out the Claude Max API quickly was sprint-coding a poker solver and an accompanying web front end w/ Opus. The backend had a lot of GPGPU stuff going on, and the front end was extremely verbose, w/ a wordy UI/UX.
For example “commit and push”
You can make it somewhat better by adding instructions in CLAUDE.md, but I did notice those instructions getting ignored from time to time unless you "remind it".
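To make the idea concrete, here is a sketch of the kind of standing instructions a CLAUDE.md can hold. The specific rules below are made-up examples for illustration, not ones from the original comment:

```markdown
# CLAUDE.md
- Never commit or push unless explicitly asked to.
- Prefer the helpers already in the repository over writing new ones.
- Run the test suite before declaring a task done.
```

Short, imperative rules like these tend to stick better than long prose, but as noted above, they can still be ignored occasionally and may need a reminder mid-session.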
See for yourself: https://github.com/search?utf8=%E2%9C%93&q=%F0%9F%A4%96+Gene...
I have not needed multiple agents, or running CC over an SSH terminal overnight. The main reason is that LLMs are often incorrect, so I still need time to test: run the whole app, check what broke in CI (GitHub Actions), etc. I no longer go through code line by line, and I organize work with tickets (sometimes they are created with CC too).
Both https://github.com/pixlie/Pixlie and https://github.com/pixlie/SmartCrawler are vibe coded (barely any code that I wrote). With LLMs you can generate code 10x faster than writing it manually. It means you can also get 10x the errors, so the manual checks take some time.
Our existing engineering practices are very helpful when generating code through LLMs, and I do not have the mental bandwidth to review a mountain of code. I am not sure that scaling out LLMs will help in building production-quality software. I already see that CC sometimes makes really poor guesses; imagine many such guesses in parallel, daily.
edit: typo - months/weeks
This genuinely isn't an attack, I just don't think you can? The AI isn't granted copyright over what it produces.
Rewriting it to something sane would be harder and more time-consuming than just writing a decent implementation upfront.
If people use Claude without a critical eye, our code bases will grow immensely.
Sounds like the baseline for programming in teams: people are more likely to write their own helpers or install dependencies than to learn what's already available in the repository.
It's already great at spinning up 5+ agents working on different PRs, triggered by just @mentioning claude on any github issue.
That said, Terragon (which is akin to Codex and Jules) is often "too autonomous" for my taste. Keeping a human in the loop is made more difficult -- commenting may not be enough, and I can't edit the code from those tools because the workspace is so ephemeral and/or remote.
How are people just firing them off to build stuff with any confidence?
AI won't magically know your codebase unless it is pretty vanilla - but then you teach it. If it makes a mistake, teach it by adding a rule.
You have to confine the output space or else you quickly get whatever.
I added a web server on top, so I can use Claudia from my phone now: https://github.com/getAsterisk/claudia/pull/216
For example: changing the type signatures of all functions in a module to pass along some extra state, a huge amount of work. I ended up reverting the changes and replacing the functionality with thread local storage (well, dynamically scoped variables).
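The thread-local / dynamically scoped approach mentioned above can be sketched in Python with `contextvars`. The names (`request_id`, `serve`, etc.) are hypothetical illustrations, not from the project being discussed:

```python
import contextvars

# Instead of changing every function signature to pass `request_id`
# along explicitly, store it in a dynamically scoped variable that
# inner functions can read without receiving it as a parameter.
request_id = contextvars.ContextVar("request_id", default=None)

def inner():
    # Reads whatever value the caller "above" us set.
    return request_id.get()

def handler():
    return inner()  # no extra parameter threaded through

def serve(rid):
    token = request_id.set(rid)
    try:
        return handler()
    finally:
        # Restore the previous value so state never leaks
        # across requests.
        request_id.reset(token)

print(serve("req-42"))  # → req-42
```

The trade-off is implicit data flow: the signatures stay clean, but readers can no longer see from a function's parameters what state it depends on.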
So, definitely not a panacea, but still well worth the money.
One of the next features I'm expecting wrappers to add on top is auto-translation. In many work contexts it makes more sense to translate what the user said to English, process that and translate the answer back than ask the model to speak the language natively.
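The translate-process-translate-back pipeline described above is simple to sketch. `translate` and `run_model` below are dummy stand-ins for whichever translation service and model API a wrapper would actually call; the tagging they do exists only so the example runs:

```python
def translate(text: str, target: str) -> str:
    # Dummy translator: a real wrapper would call a translation API here.
    return f"[{target}] {text}"

def run_model(prompt: str) -> str:
    # Dummy model call: a real wrapper would call the English-tuned model.
    return f"model answer to: {prompt}"

def answer(user_text: str, user_lang: str) -> str:
    # 1. Translate the user's message to English.
    english_prompt = translate(user_text, target="en")
    # 2. Process it with the model in English.
    english_answer = run_model(english_prompt)
    # 3. Translate the answer back to the user's language.
    return translate(english_answer, target=user_lang)
```

The appeal is that the model only ever operates in the language it performs best in, while the user never sees the intermediate English.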
So, it's kinda both. Terragon works on separate tasks in parallel, Claude Code farms out subtasks and sometimes also in parallel.
yes
I've been struggling to convince users to use the provided Web Components instead of React, but now with Claude Code the frontend language/framework doesn't matter; it's increasingly irrelevant.
The frontend language is Claude Code. What is behind is irrelevant so long as it works. The backend platform is irrelevant as well, so long as it works efficiently and is serverless.
This is the ideal situation I've been waiting for. My opinionated platform is no longer opinionated because soon anyone will be able to use it without understanding it.