That's not vibe coding. Imagine if you were hiring a chef and a candidate came in who'd never used a stove. Sure, technically there are other ways to heat food, but it would be a bit odd.
Teams where I work can use Claude Code, Codex, Cursor, and Copilot CLI. Internally, Claude Code and Codex seem to be the most popular with software teams.
If you’re new to these tools, I highly recommend trying to build something with them in your free time. This space has evolved rapidly over the past few months. Anthropic is offering a special spring break promotion where you can double the limits on weeknights and weekends for any of its subscription plans until the end of March.
I’ve seen some folks who are quite productive with these tools, but there is a lot more slop too. On my team, on the same code base, you see two different team members producing vastly different results.
And if they use LLMs to assist, does the same thing happen?
This doesn't make a lot of sense to me even as someone who uses agentic programming.
I would understand not hiring people who are against the idea of agentic programming, but I'd take a skilled programmer (especially one who is good at code review and debugging) who never touched agentic/LLM programming (but knows they will be expected to use it) over someone with less overall programming experience (but some agentic programming experience) every single time.
I think people vastly oversell using agents as some sort of skill in its own right when the reality is that a skilled developer can pick up how to use the flows and tools on the timescale of hours/days.
You prompt it. That's it. Yes, there are better and worse ways of prompting; yes, there are techniques and SKILLs and MCP servers for maximizing usability, and yes, there are right ways to vibe code and wrong ways to vibe code. But it's not hard. At all.
And the last person I want to work with is the expert vibe coder who doesn't know the fundamentals well enough to have coded the same thing by hand.
What does one have to do with the other? Since when is following every fad a prerequisite for competence?
I will say it's a little weird to frame it as "every fad" though. Do you really not see any net new or lasting utility for software engineering in AI tools? If not then more power to you, but software engineering being a fast-moving field where there are (fair or unfair) expectations to keep up is nothing new.
Everyone is talking about vibe coding all your dependencies, and the problem is that the people who are good with these tools and do get 50% or greater productivity benefits won’t be able to empathize with the people who are bad with these tools and create all the slop.
I think AI encourages people to take side quests to solve easy problems and not focus on hard problems.
And that without domain expertise, problems will compound. But I dunno, I agree that they’re here to stay.
It'll require stronger and more frequent pushback to keep under control.
I noticed that some of these roles come from businesses that recently had layoffs and were now asking their staff to "do more with less" so not exactly places people would be eager to work at, unless they have to.
I don't know if this is the new norm but this craziness is not helped by the increase in the number of "AI influencers" pushing the hype. Unfortunately, I've been seeing this on HN a lot recently.
E.g., nobody wants to continue working with someone who creates sound effects, a movie player, an operating system, etc.
What do you mean by this?
The productivity gains are real, and in some cases they are enormous. It is actively, profoundly stupid to pass on them. You need to learn how to work with AI.
But my point is that those are, by definition, lower value. Check how much a big company's revenue actually grew (ultimately the only metric that's hard to game), and the situation changes.
Otherwise, I'm sure the diff per person per day went up 10x. Output in the sense I am talking about is different.
Don’t know/care about coding with AI? You’re unhireable now. Grim.
But man, I'm sure glad I left FAANG when I did. All this hysterical squawking over AI sounds utterly insufferable. If Claude was forced upon me at my job I would have likely crashed out at some point.
Personally, I still believe that despite AI being moderately useful and getting better over time, it's mostly only feasible for boilerplate work. I do wonder about these people claiming to produce millions of lines of code per day with AI: what are you actually building? If it's the Nth CRUD app then yeah, I see why... Chances are, in the grand scheme of things, we don't really need that company to exist.
In roles that require more technical/novel work, AI just doesn't make the cut in my experience. Either it totally falls over or produces such bad results that it'd be quicker for a skilled dev to make it manually from scratch. I'd hope these types of companies are not hiring based on AI usage.
Writing code:
- can you remove this <zustand store> and move the state here into <a new slice in a different zustand store>
- Can you update the unit tests for the working-copy changes that need tests updated or created (it wrote a bunch of tests which were satisfactory; I just deleted some redundant ones)
- I removed this <TypeName> type, can the usages in this file be safely replaced with <OtherType>? (it analyzed the type differences, confirmed it was safe, and made the replacement, though an IDE could have done the replacement too)
- Can you fix this type error (it built a type guard function to address it, which is boilerplate code)
- Is there a way to add the output here, not just the input (it found a way to plumb some context through the codebase that I needed, pretty rote)
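For readers unfamiliar with the type-guard prompt above: here is a minimal sketch of the kind of boilerplate an agent typically generates for that request. The `ApiError` shape and `describe` helper are hypothetical stand-ins; the thread doesn't show the actual types involved.

```typescript
// Hypothetical error shape -- the real types from the thread are not shown.
interface ApiError {
  code: number;
  message: string;
}

// A user-defined type guard: narrows `unknown` to ApiError at runtime.
// This is the rote-but-necessary code an agent can write to fix a type error.
function isApiError(value: unknown): value is ApiError {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as ApiError).code === "number" &&
    typeof (value as ApiError).message === "string"
  );
}

// Inside the guarded branch, the compiler accepts property access
// that would otherwise be a type error on `unknown`.
function describe(value: unknown): string {
  if (isApiError(value)) {
    return `error ${value.code}: ${value.message}`;
  }
  return "not an error";
}
```

Tedious to write by hand, trivial to review, which is why it's a good fit for delegation.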
Other stuff I did in this time:
- can you review the code in commit <hash> (been doing this a lot)
- Seems my changes in this branch have broken <feature>. Can you add logging to help me diagnose what is now wrong, and also analyze the git history to come up with some theories on what broke it (this can save a lot of time; digging through git history manually can be time-consuming)
- This file doesn't work as expected anymore given that we've implemented <feature>. Can you investigate (another good time save, it's good at digging through a bunch of changes quickly)
Hope this was of interest to you!
Just cause you're using an LLM doesn't mean you're "vibe coding".
I regularly use LLMs at work, but I don't "vibe-code", which is where you're just saying garbage to the model and blindly clicking accept on whatever is spit out from it.
I design, think about architecture, write out all of my thoughts, expected example inputs, expected example outputs, etc. I write out pretty extensive prompts that capture all of that, and then request for an improved prompt. I review that improved prompt to make sure it aligns with the requirements I've gathered.
I read the output like I'm doing a deep code review, and if I don't understand some code I make sure to figure it out before moving forward. I make sure that the change set is within the scope of the problem I'm trying to solve.
Excluding the pieces that augment the workflow, this is all the same stuff you would normally do. You're an engineer solving problems and that domain you do it in happens to involve software and computers.
Writing out code has always been a means to an end. The productivity gains if you actually give LLMs a shot and learn to use the tools are real. So yes, pretty soon it's going to become expected from most places that you use the tools. The same way you've been expected to use a specific language, framework, or any other tool that greatly improves productivity.
If you end up having engineers do the work of product people, you'd end up with the typical "engineered mess" that might be very fast, and lots of complicated abstractions so 80% of the codebase can be reused, but no user can make sense of it.
Add in LLMs that tend to never push back on adding/changing things, and you end up with deep technical debt really quickly.
Edit: Ugh, apparently you wrote your comment just to push your platform (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) which is so trite, apparently HN is for people to push ads about their projects now...
One problem I personally have here is that I write code as a way to reason through and think about a problem. It's hard for me to grasp the best solution in a space until I try some things out first.
Does that mean you need AI subscriptions just to run your backend? That explodes costs even more than opaque cloud pricing. Sweet!
A decent company wouldn't necessarily look for someone who can type faster or commit 100x more code like the vibers do, but look into how you understand the code.
We're not concerned about hiring for the 'skill' of using these things, but more as a culture check - we are a very AI-forward company, and we are looking for people who are excited to incorporate AI into their workflow. The best evidence for such excitement is when they have already adopted these tools.
Among the team, the expectation is that most code is being produced with AI, but there is no micromanager checking how much everyone is using the AI coding tools.
My first experience with it was a year ago and the tests it produced were so horrendously hard to maintain that I kinda gave up, but I imagine that things have gotten a lot better in the last year.
If someone has been doing that for 10 years and learning nothing, that would be a huge red flag. One that will likely become more common as LLM usage increases.