You can technically hack the API key from the subscription, but that’s probably brittle.
Or is there some other meta I’m missing?
But I agree: at least the way I use AI tools, it'd be infeasible to review the code using this method.
You can even pause. I'll publish a CLI that does the same thing, based on the same syntax; it uses the GitHub Claude Action YAML syntax: https://github.com/codingworkflow/claude-runner/blob/main/.g...
I would love to see what a system like Claude Code could cook up running continuously for weeks. But I imagine it would get stuck in some infinite recursive loop.
E.g. it wanted to build a data query language with temporal operations but completely forgot to keep historical data.
It currently lacks the ability to focus on the overall goal and prioritize sub-tasks accordingly and instead spirals into random side quests real quick.
With your leave, this is going up on my wall :)
I think the talent pipeline has contracted, or will, and will then overcorrect. But maybe the industry's carrying capacity for devs has shrunk.
Things will probably continue in that general direction. And just like today, a small number of people who really know what they're doing keep everything humming so people can build on top of it. By importing 13 python libraries they don't completely understand, or having an AI build 75% of their CRUD app.
Jokes aside: while I'm almost sure the ability to code can be lost and regained, just like training a muscle, what I'm more worried about is the rug pull and squeeze that's bound to happen sometime in the next 5 to 10 years, unless LLMs go the way of Free Software, GNU style. If the latter happens, then LLMs for coding will be more or less like calculators, and personally I don't know how harmful that would be compared to the boost in productivity.
That said, if the former becomes reality (and I hope it doesn't!), then we're in for some huge existential crises when people realize they can barely materialize the labour part of their jobs after doing the thinky part and the meetings part.
In time, even video and embodied training may be possible for amateurs, though that's difficult to contemplate today.
People into homelabs have been running AI tools on home servers for years.
That's pretty much me with some IDEs: their code inspections, refactoring capabilities, and run/profile configurations (especially in your average enterprise Java codebase). Oh well.
Which is a problem when exactly? When civilization collapses?
Your code should have tests the AI can run against the code it wrote.
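As a minimal sketch of what that looks like in practice (the function and test names here are hypothetical, not from any particular project): a small pure function plus a test an agent can re-run after every edit, e.g. via pytest or plain `python`.

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and collapse runs of non-alphanumerics into '-'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify():
    # An agent can run this after each change to check it hasn't broken anything.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

if __name__ == "__main__":
    test_slugify()
    print("ok")
```

The point isn't the function; it's that a fast, deterministic test suite gives the AI a feedback loop instead of it just declaring its own code correct.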
And thanks to MCP, you can literally point your LLM to the documentation of your preferred tool [1].
One of the skills I've developed is spinning (back) up on problems quickly while holding the overall picture in my head. With AI I find I'm doing that even more often, and I now have 5 direct reports (AI) on top of the 6 teams of 8 I work with through managers and roadmaps.
Years ago I interviewed at Rackspace. They did a data structures and algorithms type interview. One of the main questions was about designing a data structure for a distributed hash table, using C or equivalent, to be used as a cache, specifically addressing cache invalidation. After outlining the basic approach I stopped and said that I had used a system like that in several projects at my current and former jobs, and that I would use something like Redis, memcached, or even Postgres in one instance, and use a push-to-cache-on-write system rather than a cache server pulling values from the source of truth when it suspected it had stale data. They did not like that answer. I asked why and they said it's because I wasn't designing a data structure from scratch. I asked them if the job I was applying for involved creating cache servers from scratch and they said "of course not. We use Redis." (It might have been memcached, I honestly don't remember which datastore they liked.) Needless to say, this wasn't a fit for either of us. While I am perfectly capable of creating toy versions of these kinds of services, I still stand by using existing battle-tested software over rolling your own.
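The push-on-write idea described above can be sketched in a few lines. This is just an illustration under stated assumptions, with a plain dict standing in for Redis/memcached and another for the source of truth; the class name is made up:

```python
# Push-on-write: the writer pushes fresh values into the cache at write
# time, so readers never see stale data and the cache never has to guess
# when to invalidate. Dicts stand in for Postgres and Redis/memcached.

class PushOnWriteStore:
    def __init__(self):
        self._db = {}      # source of truth (e.g. Postgres)
        self._cache = {}   # stand-in for Redis/memcached

    def write(self, key, value):
        self._db[key] = value      # durable write first
        self._cache[key] = value   # then push to cache: no stale reads

    def read(self, key):
        if key in self._cache:     # hit: guaranteed fresh by construction
            return self._cache[key]
        value = self._db[key]      # miss: fall back to source of truth
        self._cache[key] = value
        return value
```

The design choice is that invalidation disappears as a problem: the cache is updated in the same code path that mutates the source of truth, instead of the cache polling for staleness.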
If you worry about forgetting how to code, then code. You already don’t know how to code 99% of the system you are using to post this comment (Verilog, CPU microcode, GPU equivalents, probably MMU programming, CPU-specific assembly, and so on). You can get ahead of the competition by learning some of that tech. Or not. But technically all you need is a magnetized needle and a steady hand.
We definitely would not have Electron and that's a world I want to live in.
Yeah, well it would be the next major step towards human irrelevance.
Or at least, for developers.
I think I'm faster with Claude Code overall, but that's because it's a tradeoff between "it makes me much faster at the stuff I'm not good at" and "it slows me down on the stuff I am good at". Maybe with better prompting skills, I'll be faster overall, but I'm definitely glad I don't have to write all the boilerplate yet again.
That being said, it wouldn't surprise me if subscribers are actually losing Claude money and only the API is profitable.
I assume they have very peaky demand, especially when Europe + N American office hours overlap (though I'm assuming b2b demand is higher than b2c). I'm also assuming Asian demand is significantly less than "the west", which I guess would be true given the reluctance to serve Chinese users (and Chinese users using locally hosted models?).
I know OpenAI and Anthropic have 'batch' pricing, but that's slightly different since it's asynchronous and not well suited to a lot of code tasks. I think a more dynamic model would make a lot more sense for users - for example, a cheaper tier giving "Max" usage, but only from 8pm to 6am Eastern, with Pro limits otherwise.
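The off-peak check is trivial to implement; here's a sketch using only the stdlib. The tier names and the 8pm-6am Eastern window are just the ones from the comment, not any real pricing scheme:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

def tier_for(now: datetime) -> str:
    """Return 'max' during the hypothetical off-peak window, else 'pro'."""
    local = now.astimezone(ZoneInfo("America/New_York"))
    # The window wraps midnight: 20:00-23:59 or 00:00-05:59 local time.
    return "max" if (local.hour >= 20 or local.hour < 6) else "pro"
```

Usage: `tier_for(datetime.now(ZoneInfo("UTC")))`. Doing the comparison in the provider's local zone (rather than UTC) keeps the window stable across daylight saving transitions.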
benreesman•1h ago
But when you get into dark corners, Opus remains useful, or at minimum not harmful. Sonnet (especially in Claude Code) is really useful for doing something commonly done in $MAINSTREAM_STACK, but will wreck your tree on some io_uring sorcery.