> Use these tools as a massive force multiplier of your own skills.
Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.
> Use these tools for rapid onboarding onto new frameworks.
I’m also more productive here: this is an enabler to explore new areas, and it’s also a boon at big tech companies, where there are just lots of tech stacks and frameworks in use.
I feel there is an interesting split forming in the ability to gauge AI capabilities - it kinda requires you to be on top of a rapidly-changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0, you likely don’t have an accurate picture of its capabilities.
“Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.
One thing I love doing is developing a strong underlying data structure, schema, and internal API, then essentially having CC often one-shot a great UI for internal tools.
Being able to think at a higher level beyond grunt work and framework nuances is a game-changer for my career of 16 years.
It feels like toil because it's not the interesting or engaging part of the work.
If you're going to build a piece of furniture, the cutting, nailing, and gluing are the "boilerplate" that you have to do around the act of creation.
LLMs are just nail guns.
These days, people mostly use things like GHC.Generics (generic programming for stuff like serialization that typically ends up being free performance-wise), newtypes and DerivingVia, the powerful and very generalized type system, and so on.
If you've ever run into a problem and thought "this seems tedious and repetitive", the probability that you could straightforwardly fix that is probably higher in Haskell than in any other language except maybe a Lisp.
There are? For example, Rails has had boilerplate generation commands for a couple of decades.
Python’s subprocess, for example, has a lot of args, and that reflects the reality that creating processes is finicky and there are a lot of subtly different ways to do it. Getting an LLM to understand your use case and create a subprocess call for you is much more realistic than imagining some future version of subprocess where the options are just magically gone and it knows what to do, or we’ve standardized on only one way to do it and one thing that happens with the pipes and one thing for the return code and all the rest of it.
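To make that concrete, here is a rough sketch of three of the subtly different ways subprocess can run the same command (the command and option choices are purely illustrative):

```python
import subprocess

# 1. Fire and forget: raise CalledProcessError on a non-zero exit,
#    capture nothing.
subprocess.run(["echo", "hello"], check=True)

# 2. Capture stdout as text and inspect the return code yourself.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(result.returncode, result.stdout.strip())

# 3. Stream output incrementally through a pipe with Popen.
with subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:
        print(line.strip())
```

Each variant makes a different choice about pipes, error handling, and the return code - exactly the set of decisions an LLM can make for you once it understands the use case.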
And then… that just kind of dropped out of the discussion. Throw things at the wall as fast as possible and see what sticks, deal with the consequences later. And to be fair, there were studies showing that choice of language didn’t actually make as big of a difference as found in the emotions behind the debates. And then the web… committee-designed over years and years, with never the ability to start over. And lots of money meant that we needed lots of manager roles too. And managers elevate their status by having more people. And more people means more opportunity for specializations. It all becomes an unabated positive feedback loop.
I love that it’s meant my salary has steadily climbed over the years, but I’ve actually secretly thought it would be nice if there was a bit of a collapse in the field, just so we could get back to solid basics again. But… not if I have to take a big pay cut. :)
Haskell mostly solves boilerplate in a typed way and Lisp mostly solves it in an untyped way (I know, I know, roughly speaking).
To put it bluntly, there's an intellectual difficulty barrier associated with understanding problems well enough to systematize away boilerplate and use these languages effectively.
The difficulty gap between writing a ton of boilerplate in Java and completely eliminating that boilerplate in Haskell is roughly analogous to the difficulty gap between bolting on the wheels at a car factory and programming a robot to bolt on the wheels for you. (The GHC compiler devs might be the robot manufacturers in this analogy.) The latter is obviously harder, and despite the labor savings, sometimes the economics of hiring a guy to sit there bolting on wheels still works out.
I've felt this lesson just this week - it took creating a small project with 10 clear repetitions, messily made from AI input. But then the magic is making 'consolidation' tasks where you can just guide it into unifying markup, styles/JS, whatever you may have on your hands.
I think it was less obvious to me in my day job because in a startup with a lack of strong coding conventions, it's harder to apply these pattern-matching requests since there are fewer patterns. I can imagine in a strict, mature codebase this would be way more effective.
"Use these tools as a massive force multiplier of your own skills" is a great way to formulate it. If your own skills in the area are near-zero, multiplying them by a large factor may still yield a near-zero result. (And negative productivity.)
It seems to me that if you have been pattern matching the majority of your coding career, then you have a LLM agent pattern match on top of that, it results in a lot of headaches for people who haven't been doing that on a team.
I think LLM agents are far faster at pattern matching than humans, but not as good at it in general.
Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code. I have to make all the decisions, and guide it, but I don't need to learn Ruby to write acceptable-level code [0]. I get to be immediately productive in an unfamiliar environment, which is great.
[0] acceptable-level as defined by the rest of the team - they're checking my PRs.
> Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code.
If Ruby is "easy to read" and assuming you know a similar programming language (such as Perl or Python), how difficult is it to learn Ruby and be able to write the code yourself?
> ... but I don't need to learn Ruby to write acceptable-level code [0].
Since the team you work with uses Ruby, why do you not need to learn it?
> [0] acceptable-level as defined by the rest of the team - they're checking my PRs.
Ah. Now I get it.
Instead of learning the lingua franca and being able to verify your own work, "the rest of the team" has to make sure your PRs will not obviously fail.
Here's a thought - has it crossed your mind that team members needing to determine if your PRs are acceptable is "a bad thing", in that it may indicate a lack of trust of the changes you have been introducing?
Furthermore, does this situation qualify as "immediately productive" for the team or only yourself?
EDIT:
If you are not a software engineer by trade and instead a stakeholder wanting to formally specify desired system changes to the engineering team, an approach to consider is authoring RSpec[0] specs to define feature/integration specifications instead of PRs.
This would enable you to codify functional requirements such that their satisfaction is provable, assist the engineering team's understanding of what must be done in the context of existing behavior, identify conflicting system requirements (if any) before engineering effort is expended, provide a suite of functional regression tests, and serve as executable documentation for team members.
0 - https://rspec.info/features/6-1/rspec-rails/feature-specs/fe...
There is a good discussion/interview¹ between Alan Kay & Joe Armstrong about how most code is developed backwards b/c none of the code has a formal specification that can be "compiled" into different targets. If there was a specification other than the old driver code then the process of porting over the driver would be a matter of recompiling the specification for a new kernel target. In the absence of such a specification you have to substitute human expertise to make sure the invariants in the old code are maintained in the new one, b/c the LLM has no understanding of any of it other than pattern matching to other drivers w/ similar code.
1. The original hardware spec is usually proprietary, and
2. The spec is often what the hardware was supposed to do. But hardware prototype revisions are expensive. So at some point, the company accepts a bunch of hardware bugs, patches around them in software, ships the hardware, and reassigns the teams to a newer product. The hardware documentation won't always be updated.
This is obviously an awful process, but I've seen and heard of versions of it for over 20 years. The underlying factors driving this can be fixed, if you really want to, but it will make your product totally uncompetitive.
I think the training data is especially good, and ideally no logic needs to change.
That's even before taking on the brutal Linux kernel mailing lists for code review, explaining what that C code does - code which could be riddled with bugs that Claude generated.
No thanks and no deal.
AI hype in a nutshell.
Literally all comments on this post are about AI hype, all of them.
Other people commenting about AI hype on the post isn't an indication that the post itself was created to hype AI, or that the post itself is "bad".
I said nobody will use the driver. But I am terribly wrong because one person will?
Second, any post on hackernews is made to generate hype.
Yes? The person who needs it is using it. Other people who need it (anyone who wants to archive tapes of that kind) now can, too.
> Second, another post on hackernews about how AI helps you code is not AI hype?
Do you think it was written with the intent to specifically hype AI, rather than to report on a passion project?
If I post a recipe for baked shit and I get a reply "nobody will eat that shit", are they wrong?
Too much hope.
I suspect HN readers won't see enough value in your baked shit recipe for it to reach the front page - sorry. But bake away!
I doubt you didn't know that, so that leaves me with two options:
You comment in bad faith, or you are autistic and don't get hyperbole.
In either case your comments feel annoying.
Even without that other article, this really reads like the author tried it for menial tasks on a neat passion project, and reports his success on it. (I'm a kernel developer, so I can empathize.)
Whatever you have sounds more like "blanket knee-jerk unfounded pessimism".
The last version of the driver that was included in the kernel, right up until it was removed, was version 3.04.
BUT, the author continued to develop the driver independently of kernel releases. In fact, the last known version of the driver was 4.04a, in 2000.
"My goal is to continue maintaining this driver for modern kernel versions, 25 years after the last official release." - https://github.com/dbrant/ftape
A reminder, though: these LLM calls cost energy, and we need reliable power generation to iterate through this next tech cycle.
Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)
You would certainly need an expert to make sure your air traffic control software is working correctly and not 'vibe coded' the next time you decide to travel abroad safely.
We don't need a new generation who can't read code and are heavily reliant on whatever a chat bot said because: "you're absolutely right!".
> Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)
Useful enough for Stripe to build their own blockchain, and even that, and the rest of them, are more energy efficient than a typical LLM cycle.
But the LLM grift (or even the AGI grift) will not only cost even more than crypto, but the whole purpose of its 'usefulness' is the mass displacement of jobs with no realistic economic alternative other than achieving >10% global unemployment by 2030.
That's a hundred times more disastrous than crypto.
The same approach can be used to modernise other legacy codebases.
I'm thinking of doing this with a 15 year old PHP repo, bringing it up to date with Modern PHP (which is actually good).
> As a giant caveat, I should note that I have a small bit of prior experience working with kernel modules, and a good amount of experience with C in general, so I don’t want to overstate Claude’s success in this scenario. As in, it wasn’t literally three prompts to get Claude to poop out a working kernel module, but rather several back-and-forth conversations and, yes, several manual fixups of the code. It would absolutely not be possible to perform this modernization without a baseline knowledge of the internals of a kernel module.
Of note is the last sentence: It would absolutely not be possible to perform this modernization without a baseline knowledge of the internals of a kernel module.
This is critical context when using a code generation tool, no matter which one is chosen. Then the author states in the next section:
> Interacting with Claude Code felt like an actual collaboration with a fellow engineer. People like to compare it to working with a “junior” engineer, and I think that’s broadly accurate: it will do whatever you tell it to do, it’s eager to please, it’s overconfident, it’s quick to apologize and praise you for being “absolutely right” when you point out a mistake it made, and so on.
I don't know what "fellow engineers" the author is accustomed to collaborating with, junior or otherwise, but the attributes enumerated above are those of a sycophant and not any engineer I have worked with. Finally, the author asserts:
> I’m sure that if I really wanted to, I could have done this modernization effort on my own. But that would have required me to learn kernel development as it was done 25 years ago.
This could also be described as "understanding the legacy solution and what needs to be done" when the expressed goal identified in the article title is: ... modernize a 25-year-old kernel driver
Another key activity identified as a benefit to avoid in the above quote is: ... required me to learn ...
Learning what must be done to implement a device driver in order for it to operate properly is not "gatekeeping." It is a prerequisite.
> I love agents explaining me projects I don’t know.
Awesome. This is one way to learn about implementations and I applaud you for benefiting from same.
> Recently I cloned sources of Firefox and asked qwen-code (tool not significant) about the AI features of Firefox and how it’s implemented. Learning has become awesome.
Again, this is not the same as implementing an OS device driver. Even though one could justify saying Firefox is way more complicated than a Linux device driver (and I would agree), the fact is that a defective device driver can lock-up the machine[0], corrupt internal data structures resulting in arbitrary data corruption, and/or cause damage to peripheral devices.
Thinking about asking Claude to reimplement it from scratch in Rust…
[1] https://codeberg.org/superseriousbusiness/gotosocial/src/bra...
Do you disagree with some part of the statement regarding "AI" in their CoC? Do you think there's a fault in their logic, or do you yourself personally just not care about the ethics at play here?
I find it refreshing personally to see a project taking a clear stance. Kudos to them.
Recently enjoyed reading the Dynamicland project's opinion on the subject very much too[0], which I think is quite a bit deeper of an argument than the one above.
Ethics seems to be, unfortunately, quite low down on the list of considerations of many developers, if it factors in at all to their decisions.
[0] https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relation...
These ethics are definitely derived from a profit motive, however petty it may be.
If you don't have the technical understanding of a language or framework, there is going to be a lot of ambiguity in your prompts.
This specificity gap leads the LLM to fill in those gaps for you, which may not be what you intended. And that's usually where bugs hide.
I think this is the flip side to being a "force multiplier".
I would hope Claude would be like “Hold on man…”
| I'll create a simple C++ class with a constructor that takes a tuple. Here's an example...
> nah C
| C doesn't have classes or constructors - those are C++ features. In C, you would use a struct and create an initialization function. Here's how you'd handle a tuple-like structure:
Here the author has a passion/side project they have been on for a while. Upgrading the tooling is a great thing. Community may not support this since the niche is too narrow. LLM comes in and helps in the upgrade. This is exactly what we want - software to be custom - for people to solve their unique edge cases.
Yes, the author is technical, but we are lowering the barrier and it will be lowered even more. Semi-technical people will be able to solve some simpler edge cases, and so on. More power to everyone.
One note: I think the author could have modified the sudoers file to allow loading and unloading the module without a password prompt.
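For reference, a minimal sketch of what such a sudoers entry might look like (the username and tool paths are hypothetical and distribution-dependent; always edit with visudo):

```
# Hypothetical /etc/sudoers.d/ entry: allow this one user to load and
# unload kernel modules without a password prompt. Edit via visudo.
youruser ALL=(root) NOPASSWD: /usr/sbin/insmod, /usr/sbin/rmmod
```

This scopes the passwordless grant to just the two module commands rather than all of sudo.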
Another thought: IIRC in the plugins for Claude Code in my IDE, you can "authorize" actions and have manual intervention without having to leave the tool.
My point is there were ways I think they could have avoided copy/paste.
That is a bit different from allowing unconfirmed loading of arbitrary kernel code without proper authentication.
Even a minor typo in kernel code can cause a panic; that’s not a reasonable level of power to hand directly to Claude Code unless you’re targeting a separate development system where you can afford repeated crashes.