Agreed, and I find that I use Claude Code for more than traditional codebases. I run it in my Obsidian vault for all kinds of things. I've used it to build custom keyboard bindings backed by scripts that publish screenshots to my CDN and hand me a markdown link, and to build a program that talks to Ollama to summarize my terminal commands for the last day.
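For the curious, the Ollama piece of that workflow is simpler than it sounds. Here's a minimal sketch of the idea; the model name, prompt, and history source are my own illustrative choices, not the commenter's actual script, though the endpoint and payload shape are Ollama's standard `/api/generate` API:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(history: str, model: str = "llama3") -> dict:
    """Payload for Ollama's /api/generate endpoint (stream disabled so the
    whole summary comes back as one JSON response)."""
    return {
        "model": model,
        "prompt": "Summarize what I worked on, given these shell commands:\n" + history,
        "stream": False,
    }

def summarize_history(history: str) -> str:
    """Send shell history to a local Ollama instance and return the summary text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(history)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (requires a running Ollama server), e.g.:
#   summarize_history(open(os.path.expanduser("~/.bash_history")).read())
```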
I remember the old days of weighing whether the formatting changes I wanted to make to a file justified writing a script or were better done by hand - now I just run Claude in the directory and have it done for me. It's useful for so many things.
Obsidian is my source of truth and Claude is really good at managing text, formatting, markdown, JS, etc. I never let it make changes automatically, I don't trust it that much yet, but it has undoubtedly saved me hours of manual fiddling with plugins and formatting alone.
What does this mean?
You can EASILY burn $20 a day doing little, and surely could top $50 a day.
It works fine, but the $100 I put in to test it out did not last very long even on Sonnet.
He uses two Max plans ($200/mo + $200/mo), and his API-equivalent estimate was north of $10,000/mo.
5x Pro usage ($100/month)
20x Pro usage ($200/month)
Source: https://support.anthropic.com/en/articles/11145838-using-cla...
"Pro ($20/month): Average users can send approximately 45 messages with Claude every 5 hours, OR send approximately 10-40 prompts with Claude Code every 5 hours."
"You will have the flexibility to switch to pay-as-you-go usage with an Anthropic Console account for intensive coding sprints."
The open source world is one where antirez, working on his own off in Sicily, could create a project like Redis and then watch it snowball as people all over got involved.
Needing a subscription to something only a large company can provide makes me unhappy.
We'll see if "can be run locally" models for more specific tasks like coding will become a thing, I guess.
I, for one, welcome our new LLM overlords.
The open source alternatives I've used aren't there yet on my 4090. Fingers crossed we'll get there.
Most of these things in the post aren't new capabilities. The automation of workflows is indeed valuable and cool. Not sure what AGI has to do with it.
Plus you shouldn't need an LLM to understand a codebase. Just make it more understandable! Of course capital likes shortcuts and hacks to get the next feature out in Q3.
The kind of person who prefers this setup wants to read (and write) the least amount of code on their own. So their ideal workflow is one where they get to make programs through natural language. Making codebases understandable for this group is mostly a waste of effort.
It's a wild twist of fate that programming languages were intended to make programming friendly to humans, and now humans don't want to read them at all. Code is becoming just an intermediary artifact useless to machines, which can instead write machine code directly.
I wish someone could put this genie back in the bottle.
Those are two different groups of humans, as you implied yourself.
Having a thing that is interactive and which can answer questions is a very useful thing. A slide deck that sits around for the next person is probably not that great, I agree. But if you desperately want a slide deck, then an agent like Claude which can create it on demand is pretty good. If you want summaries of changes over time, or to ask "give me an overview-level, jargon-included explanation of how feature/behavior X is implemented," an agent can generate a mediocre (but probably serviceable) answer to any of those by reading the repo. That's an amazing swiss-army knife to have in your pocket.
I really used to be a hater, and I really did not trust it, but just using the thing has left me unable to deny its utility.
Maybe that is the idea (vibe coding ftw!) but if you want something people can understand and refine it is good to make it modular and decomposable and understandable. Then use AI to help you with the words for sure but at some level there is a human that understands the structure.
<laughs in legacy code>
And fundamentally, that isn't a function of "capital". All code bases are shaped by the implicit assumptions of their writers. If there's a fundamental mismatch or gap between reader and writer assumptions, it won't be readable.
LLMs are a way to make (some of) these implicit assumptions more legible. They're not a panacea, but the idea of "just make it more understandable" is not viable. It's on par with "you don't need debuggers, just don't write bugs".
Judging from the tone of the article, they’re using the term AGI in a jokey way and not taking themselves too seriously, which is refreshing.
I mean like, it wouldn’t be refreshing if the article didn’t also have useful information, but I do actually think a slide deck could be a useful way to understand a codebase. It’s exactly the kind of nice-to-have that I’d never want a junior wasting time on, but if it costs like $5 and gets me something minorly useful, that’s pretty cool.
Part of the mind-expanding transition to using LLMs involves recognizing that there are some things we used to dislike because of how much effort they took relative to their worth. But if you don’t need to do the thing yourself or burn through a team member’s time/sanity doing it, it can make you start to go “yeah fuck it, trawl the codebase and try to write a markdown document describing all of the features and requirements in a tabular format. Maybe it’ll go better than I expect, and if it doesn’t then on to something else.”
This made me chuckle
And since they're human, the juniors themselves do not have the patience of an LLM.
I really would not want to be a junior dev right now... Very unfair and undesirable situation they've landed in.
Learning comes from grinding and LLMs are the ultimate anti-intellectual-grind machines. Which is great for when you're not trying to learn a skill!
That's just one use though. The other is treating it like it's a jr developer, which has its own shift in thinking. Practice in writing detailed specs goes a long way here.
> Practice in writing detailed specs goes a long way here.
This is an additional asymmetric advantage to more senior engineers as they use these tools
Says who? While “grinding” is one way to learn something, asking AI for a detailed explanation and actually consuming that knowledge with the intent to learn (rather than just copy and pasting) is another way.
Yes, you should be on guard since a lot of what it says can be false, but it’s still a great tool to help you learn something. It doesn’t completely replace technical blogs, books, and hard earned experience, but let’s not pretend that LLMs, when used appropriately, don’t provide an educational benefit.
There is no learning by consumption (unfortunately, given how we mostly attempt to "educate" our youth).
I didn't say they don't or can't provide an educational benefit.
That's application. Then presumably you started deviating a little bit from exactly what the instructor was doing. Then you deviated more and more.
If you had the instructor just writing the code for every new deviation you wanted to build and you just had to mash the "Accept Edit" button, you would not have learned very effectively.
The consequence is you get a bunch of output that looks really good as long as you don't think about it (and they actively promote not thinking about it), that you don't really understand, and that, if you did dig into it, you'd realize is empty fluff or actively wrong.
It's worse than not learning, it's actively generating unthinking but palatable garbage that's the opposite of learning.
I'm not so sure; I get great results (learning) with them because I can nitpick what they give me, attempt to explain how I understand it and I pretty much always preface my prompts with "be critical and show me where I am wrong".
I've seen a junior use it to "learn", which was basically "How do I do $FOO in $LANGUAGE".
For that junior to turn into a senior who prompts the way I do, they need a critical view of their questions, not just answers.
I have experienced multiple instances of junior devs using llm outputs without any understanding.
When I look at the PR, it is immediately obvious.
I use these tools everyday to help accelerate. But I know the limitations and can look at the output to throw certain junk away.
I feel junior devs are using it not to learn but to try to just complete shit faster. Which doesn’t actually happen because their prompts suck and their understanding of the results is bad.
Seniors on HN are often quick to dismiss AI-assisted coding as something that can't replace the hard-earned experience and skill they've built up during their careers. Well, maybe, maybe not. Senior devs can get a bit myopic in their specializations. A junior dev doesn't have so much baggage; maybe the fertile brains of youth are better in times of rapid disruption, where extreme flexibility of thought is the killer skill.
Or maybe the whole senior/junior thing is a red herring and pure coding and tech skills are being deflated all across the board. Perhaps what is needed now is an entirely new skill set that we're only just starting to grasp.
I had a feeling today that I should really be managing multiple instances at once, because they’re currently so slow that there’s some “downtime”.
Why would they be worried?
Who else going to maintain the massive piles of badly designed vibe code being churned out at an increasingly alarming pace? The juniors prompting it certainly don't know what any of it does, and the AIs themselves have proven time and again to be incapable of performing basic maintenance on codebases above a very basic level of complexity.
As the ladder gets pulled up on new juniors, and the "fertile brains" of the few who do get a chance are wasted as they are actively encouraged to not learn anything and just let a computer algorithm do the thinking for them, ensuring they will never have a chance to become seniors themselves, who else will be left to fix the mess?
Is webdev still around??? Yes, it is. Just because you can "create" something doesn't mean you're knowledgeable in that area.
We literally have an entire industry built around fixing WordPress instances and code. What else do we need to worry about?
One definition of experience[0] is:
direct observation of or participation in events as a basis of knowledge
Since I assume by "AI assisted coding" you are referring to LLM-based offerings, then yes, "hard-earned experience and skill" cannot be replaced with a statistical text generator. One might as well assert an MS-Word document template can produce a novel Shakespearean play or that a spreadsheet is an IRS auditor.
> Or maybe the whole senior/junior thing is a red herring and pure coding and tech skills are being deflated all across the board. Perhaps what is needed now is an entirely new skill set that we're only just starting to grasp.
For a repudiation of this hypothesis, see this post[1] also currently on HN.
0 - https://www.merriam-webster.com/dictionary/experience
1 - https://blog.miguelgrinberg.com/post/why-generative-ai-codin...
I don't really get this. At the beginning of my career I masqueraded as a senior dev with experience as fast as I could, until it was laundered into actual experience.
Form the LLC and that's your prior professional experience, working for it
I felt I needed to do that and that was way before generative AI, like at least a decade
Hmm no news about that really
Oh, it's worse than that. You do that, and they complain that they are underpaid and should earn much, much more. They also think they are great, it's just you, the old-timer, that "doesn't get it". You invest lots of time to work with them, train them, and teach them how to work with your codebase.
And then they quit because the company next door offered them slightly more money and the job was easier, too.
I hope you don't think that what you're paying for an LLM today is what it actually costs to run the LLM. You're paying a small fraction.
So much investment money is being pumped into AI that it's going to make the 2000 dot-com bubble burst look tiny in comparison, if LLMs don't start actually returning on the massive investments. People are waking up to the realities of what an LLM can and can't do, and it's turning out to not be the genie in the bottle that a lot of hype was suggesting. Same as crypto.
The tech world needs a hype machine and "AI" is the current darling. Movie streaming was once in the spotlight too. "AI" will get old pretty soon if it can't stop "hallucinating". Trust me I would know if a junior dev is hallucinating and if they actually are then I can choose another one that won't and will actually become a great software developer. I have no such hope for LLMs based on my experiences with them so far.
A lot of the application layer will disappear when it fails to show ROI, but the foundation models will continue to have obscene amounts of money dumped into them, and the coding use case will come along with that.
Every wizard was once a noob. No one is born that way, they were forged. It's in everybody's interest to train them. If they leave, you still benefit from the other companies who trained them, making the cost equal. Though if they leave, there's probably better ways to make them stay that you haven't considered (e.g. have you considered not paying new juniors more than your current junior that has been with the company for a few years? They should be able to get a pay bump without leaving)
The same way we treat a human making a mistake??? AI can't code by itself; someone has to command it to create something.
Try telling that to companies with quarterly earnings. Very few resist the urge to optimize for the short term.
Just make sure you talk to Claude in addition to the humans and not instead of.
In the end, you either concede control over 'details' and just trust the output or you spend the effort and validate results manually. Not saying either is bad.
Do you not want to edit your code after it’s generated?
How do you interact with your projects?
It can even debug my k8s cluster using kubectl commands and check prometheus over the API, how awesome is this?
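For context, "check Prometheus over the API" means hitting its standard HTTP query endpoint, the same one the agent calls. A hedged sketch of that call; the `/api/v1/query` path and response shape are Prometheus's documented API, but the base URL (e.g. a `kubectl port-forward` target) is an assumption:

```python
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # assumed port-forwarded Prometheus instance

def instant_query_url(base: str, promql: str) -> str:
    """Build a URL for Prometheus's instant-query HTTP API (/api/v1/query)."""
    return f"{base}/api/v1/query?{urllib.parse.urlencode({'query': promql})}"

def query(promql: str) -> list:
    """Run a PromQL instant query and return the result vector."""
    with urllib.request.urlopen(instant_query_url(PROM_URL, promql)) as resp:
        body = json.load(resp)
    if body["status"] != "success":
        raise RuntimeError(body)
    return body["data"]["result"]

# Usage (requires a reachable Prometheus), e.g.:
#   query("up == 0")   # scrape targets that are currently down
```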
It's got 7 fingers? Looks fine to me! - AI
Not exactly “three laws safe” if we can’t use the thing for work without violating their competitive use prohibition
A complete waste of time for important but relatively simple tasks.
Is anyone aware of something like this? Maybe in the GitHub actions or pre-commit world?
I've found this generally with AI summaries...usually their writing style is terrible, and I feel like I cannot really trust them to get the facts right, and reading the original text is often faster and better.
I'm so over this timeline.
If this is all ultimately Java but with even more steps, it's a sign I'm definitely getting old. It's just the same pattern of non-technical people deceiving themselves into believing they don't need to be technical to build tech, ultimately resulting in another 10-20 years of re-learning the painful lessons of that.
Let me off this train too, I'm tired already.
## Instructions
* Be concise
* Use simple sentences. But feel free to use technical jargon.
* Do NOT overexplain basic concepts. Assume the user is technically proficient.
* AVOID flattering, corporate-ish or marketing language. Maintain a neutral viewpoint.
* AVOID vague and / or generic claims which may seem correct but are not substantiated by the context.
Cannot completely avoid hallucinations, and it's good to avoid AI for text that's used for human-to-human communication. But this makes AI answers to coding and technical questions easier to read.

I got Claude Code (with Cline and VSCode) to do a task for a personal project. It did it about 5x faster than I'd have been able to do manually, including running bash commands, e.g. to install dependencies for new npm packages.
These things can do real work. If you have things in plain text formats like markdown and CSV spreadsheets, a lot of what normal human employees do today could be somewhat automated.
You currently still need a human to supervise the agent and what its doing, but that won't be needed anymore in the not so distant future.
I don't know, just feels like a weird community response to something that is the equivalent to me of bash piping...
Is it good?
> What emerged over these seven days was more than just code...
Still no.
But is it accurate?
> Over time this will likely degrade the performance and truthfulness
Nope.
> Is this useful? Probably not.
Ok, but it's cheap right?
> $250 a month.
Also no.
Well at least it's not horrible for the environment and built on top of massive copyright violations, right?
Right?
dwohnitmok•5h ago
On the other hand, every time people are just spinning off sub-agents I am reminded of this: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
It's simultaneously the obvious next step and portends a potentially very dangerous future.
TeMPOraL•5h ago
As it has been since it was originally published, over three years ago.
I'm continuously surprised both by how fast the models themselves evolve, and how slow their use patterns are. We're still barely playing with the patterns that were obvious and thoroughly discussed back before GPT-4 was a thing.
Right now, the whole industry is obsessed with "agents", aka. giving LLMs function calls and limited control over the loop they're running under. How many years before the industry will get to the point of giving LLMs proper control over the top-level loop and managing the context, plus an ability to "shell out" to "subagents" as a matter of course?
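To make that inversion concrete, here is a toy sketch: instead of an application loop calling the model, the model's decisions drive the top-level loop, manage the context, and spawn subagents. Every name here is illustrative (the model is a stub, not a real API), a sketch of the pattern rather than any actual agent framework:

```python
def stub_model(context: list) -> dict:
    """Stand-in for an LLM call: returns the next 'action' given the context.
    A real implementation would call a model API here."""
    if any(m["role"] == "subagent" for m in context):
        return {"action": "finish", "result": "done"}
    return {"action": "spawn_subagent", "task": "investigate failing test"}

def run_subagent(task: str) -> str:
    """In the pattern described above, this would be its own model loop
    running with a fresh, smaller context."""
    return f"report on: {task}"

def top_level_loop(model=stub_model, max_steps: int = 10) -> str:
    """The model owns the loop: it decides when to shell out to subagents,
    what stays in its context, and when to stop."""
    context: list = [{"role": "user", "content": "fix the build"}]
    for _ in range(max_steps):
        step = model(context)
        if step["action"] == "finish":
            return step["result"]
        if step["action"] == "spawn_subagent":
            report = run_subagent(step["task"])
            # The top-level agent curates its own context window.
            context.append({"role": "subagent", "content": report})
    return "step budget exhausted"
```

The `max_steps` budget is the one piece of control the harness keeps; everything else is the model's call.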
qsort•5h ago
When/if the underlying model gets good enough to support that pattern. As an extreme example, you aren't ever going to make even a basic agent with GPT-3 as the base model, the juice isn't worth the squeeze.
Models have gotten way better and I'm now convinced (new data -> new opinion) that they are a major win for coding, but they still need a lot, a lot of handholding, left to their own devices they just make a mess.
The underlying capabilities of the model are the entire ballgame, the "use patterns" aren't exactly rocket science.
benlivengood•5h ago
lubujackson•3h ago
> ${SUGESTION}
And recognized it wouldn't do anything because of a typo? Alas, my kind is not long for this world...
floren•2h ago