
Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•38s ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•4m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•4m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•5m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•8m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•8m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•9m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•9m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•10m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•12m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•12m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
2•jerpint•13m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•14m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•17m ago•0 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•18m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•19m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•21m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•21m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•21m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•22m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•23m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•24m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•25m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•28m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•29m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•31m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
3•mariuz•31m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•34m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•38m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•38m ago•0 comments

Write the damn code

https://antonz.org/write-code/
233•walterbell•4mo ago

Comments

manoDev•4mo ago
I use AI as a pairing buddy who can look up APIs and algorithms very quickly, or as a very smart text editor that understands refactoring, DRY, etc., but I still decide the architecture and write the tests. Works well for me.

Apparently what the article argues against is using it like a software factory: give it a prompt of what you want and, when it gets it wrong, iterate on the prompt.

I understand why this can be a waste of time: if programming is a specification problem [1], just shifting from programming language to natural language doesn’t solve it.

1. https://pages.cs.wisc.edu/~remzi/Naur.pdf

lukevp•4mo ago
Yes, but… The AI has way more context on our industry than the raw programming language does. I can say things like "add a stripe webhook processor for the purchase event" and it's gonna know which library to import, how to structure the API calls, the shape of the event, the database tables that people usually back Stripe stuff with, idempotency concerns of the API, etc.

So yes, you have to specify things, but there's a lot more implicit understanding and knowledge that can be retrieved relevant to the task you're doing than a regular language would have.
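The idempotency concern mentioned above is exactly the kind of implicit knowledge in play; a minimal, stdlib-only sketch (the event shape and handler names are my own illustration, not Stripe's SDK):

```python
import json

# Stripe may deliver the same webhook event more than once, so
# processing is keyed by the event id. The set below stands in for
# what would be a database table in production.

processed_event_ids = set()

def handle_purchase(event_data):
    # placeholder for actual fulfillment logic
    return f"fulfilled order {event_data['object']['id']}"

def process_webhook(payload: str) -> str:
    event = json.loads(payload)
    if event["id"] in processed_event_ids:
        return "duplicate, skipped"   # idempotent: already handled
    processed_event_ids.add(event["id"])
    return handle_purchase(event["data"])

payload = json.dumps({
    "id": "evt_123",
    "type": "checkout.session.completed",
    "data": {"object": {"id": "cs_456"}},
})
print(process_webhook(payload))  # fulfilled order cs_456
print(process_webhook(payload))  # duplicate, skipped
```

In a real integration the payload would also be signature-verified before processing, which is another of those details an assistant tends to know about.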

lomase•4mo ago
Have you deployed any of those Stripe integrations to prod?

Can you show it to us?

bryanrasmussen•4mo ago
currently on HN's front page we have "write the damn code" and "write the stupid code", but we don't have "write the good code".
recursive•4mo ago
Good code is a hoax.
hunterpayne•4mo ago
skill issue
peterfirefly•4mo ago
git gud
kragen•4mo ago
The first five times you solve a problem, you don't know enough about it to write good code for it.
tarwich•4mo ago
Yup. It's not that learning AI or prompt engineering is bad in any way. A similar writeup https://news.ycombinator.com/item?id=45405177 mentions the problem I see: when AI does most of the work, I have to work hard to understand what the AI wrote.

In your model, you give enough guidance to generally know what the AI is doing, and the AI is finishing what you started.

nasretdinov•4mo ago
I kinda agree with the author: as a person with more than enough coding experience, I don't get much value (and certainly not much enjoyment) from using AI to write code for me. However, it's invaluable when you're operating in even a slightly unfamiliar environment. Essentially, by providing (usually incorrect or incomplete) examples of code that can be used to solve the problem, it lets me overcome the main "energy barrier": helping to navigate, e.g., the vast standard library of a new programming language, or providing idiomatic examples of how to do things. I usually know _what_ I want to do, but I don't know exactly the syntax to express it in a certain framework or language.
CraigJPerry•4mo ago
There's a product called Context7 which among other things provides succinct examples of how to use an API in practice (example of what it does: https://context7.com/tailwindlabs/tailwindcss.com )

It's supposed to be consumed by LLMs to help prepare them to provide better examples - maybe a newer version of a library than is in the model's training data for example.

I've often thought that rather than an MCP server my LLM agent can query, maybe I just want to query this high-signal-to-noise resource myself rather than trawl the documentation.

What additional value does an LLM provide when a good documentation resource exists?

dunham•4mo ago
Yeah, I don't leverage LLMs much, but I have used them to look up APIs for writing vscode extensions. The code wasn't usable as-is, but it gave me an example that I could turn into working code, without looking up all of the individual API calls.

I've also used it in the past to look up windows api, since I haven't coded for windows in decades. (For the equivalent of pipe, fork, exec.) The generated code had a resource leak, which I recognized, but it was enough to get me going. I suspect stack overflow also had the answer to that one.

And for fun, I've had copilot generate a monad implementation for a parser type in my own made-up language (similar to Idris/Agda), and it got fairly close.

anabis•4mo ago
> invaluable when you're operating in even a slightly unfamiliar environment

It's like car navigation or Google Maps: annoying and not very useful in your hometown, very helpful when traveling or in unfamiliar territory.

fusslo•4mo ago
I think about 2 months ago my company got a license for Cursor/claude ai access.

At first it was really cool getting an understanding of what it can do. It can be really powerful, especially for things like refactoring.

Then, I found it to be in the way. First, I had to rebind the auto-insert from TAB to ctrl+space because I would try tabbing code over and blamo: lines inserted, resulting in more work deleting them.

Second, I found that I'd spend more time reading the ai generated autocomplete that pops up. It would pop up, I'd shift focus to read what it generated, decide if it's what I want, then try to remember what the hell I was typing.

So I turned it all off. I still have access to context aware chats, but not the autocomplete thing.

I have found that I'm remembering more and understanding the code more (shocking). I also find that I'm engaging with the code more: taking more of an effort to understand it.

Maybe some people have the memory/attention span/ability to context switch better than me. Maybe younger people are more used to distractions and attention-stealing content.

javier2•4mo ago
Yeah, I also have the autocomplete disabled. To me it's most useful when I am working in an area I know, but not the details. For example, I know cryptography, but I don't know the cryptography APIs in Node.js, so Claude is very helpful when writing code for that.
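A sketch of the gap being described, using Python's stdlib as an analogue (the crypto concepts carry over; the API names below are Python's, not Node's):

```python
import hmac
import hashlib

# Knowing the cryptography (HMAC-SHA256, constant-time comparison)
# is separate from knowing the API names; the latter is what an
# assistant surfaces quickly.

def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, signature: str) -> bool:
    expected = sign(key, message)
    # compare_digest is the constant-time comparison; exactly the
    # easy-to-miss API detail in question
    return hmac.compare_digest(expected, signature)

tag = sign(b"secret", b"hello")
print(verify(b"secret", b"hello", tag))     # True
print(verify(b"secret", b"tampered", tag))  # False
```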
WesleyJohnson•4mo ago
I love Cursor and the autocomplete is so helpful, until it's not. I don't know why I didn't think to rebind the hotkey for that. Thank you.
gopalv•4mo ago
> I have found that I'm remembering more and understanding the code more (shocking).

I feel like what I felt with adaptive cruise control.

Instead of watching my speed, I was watching traffic flow, watching cars way up ahead instead.

The syntax part of my brain is turned off, but the "data flow" part is 100% on when reading the code instead.

stouset•4mo ago
Wait, really? This is kind of surprising to me. Even without adaptive cruise control, I generally spend very few brain cycles paying attention to speed. My speed just varies based on conditions and the traffic flow around me, and I'm virtually never concerned with the number on the dial itself.

As a result I've never found adaptive cruise control (or self-driving) to be all that big a deal for me. But hearing your perspective suddenly makes me realize why it is so compelling for so many others.

ryukafalz•4mo ago
That's how it should be ideally, but that can be a problem depending on the infrastructure around you. In my area (South Jersey) the design speed of our roads is consistently much higher than the posted speed limit. This leads to a lot of people consistently going much faster than the posted limit, and to people internalizing the idea that e.g. it's only really speeding if you're going 10+ mph over the limit. Which isn't actually safe in a lot of places!

If the design speed of your roads is a safe speed for those around you then yeah that works perfectly.

hansonkd•4mo ago
I think the worst part of the autocomplete is when you actually just want to tab to indent a line and it tries to autocomplete something at the end of the line.
dingnuts•4mo ago
ok, call me a spoiled Go programmer, but I have had an allergy to manually formatting code since getting used to gofmt on save. I highly recommend setting up an autoformatter so you can write nasty, unindented code down the left margin and have it snap into place when you save the file like I do, and never touch tab for indent. Unless you're writing Python of course haha
justinrubek•4mo ago
Format on save is my worst enemy. It may work fine for Go, but you'll eventually run into code that isn't formatted with the formatter your editor is configured for. Then you end up reformatting the whole file or having to remember how to disable save formatting. I check formatting in a git hook on commits instead.
chatmasta•4mo ago
If you’re checking it on git hooks then it’s even safer to have format on save. Personally I default to format on save, and if I’m making a small edit to a file that is formatted differently, and it would cause a messy commit to format on save, then I simply “undo” and then “save without formatting” in the VSCode command palette.
jvalencia•4mo ago
We make separate formatting commits, so there are non-logic commits that don't have to be agonized over if we find files are all off.
chatmasta•4mo ago
You can also add those non-logic commits to a .git-blame-ignore-revs file, and any client that supports it will ignore those commits when surfacing git blame annotations. I believe GitHub supports this but not sure. I think VSCode does…
anal_reactor•4mo ago
The problem with autoformatting is that whitespace is a part of the program. I use whitespace to separate logical entities that cannot be made their own functions in an elegant way. Using autoformatters often breaks this, changing my code to something that looks good but doesn't actually make sense.

The more gentle autoformatters actually do their job correctly, but the more aggressive ones make code harder to read. And BTW, I hate golang with a passion. It's a language designed to get fifty thousand bootcamp grads from developing countries to somehow write coherent code. I just don't identify with that, although I do understand why it needs to exist.

yard2010•4mo ago
My "trick" is to undo then press the tab again swiftly. If I'm quicker than the internet it works.
hatefulmoron•4mo ago
I remember discussing with some coworkers a year(?) ago about autocomplete vs chat, and we were basically in agreement that autocomplete was the better feature of the two.

Since we've had Claude Code for a few months I think our opinions have shifted in the opposite direction. I believe my preference for autocomplete was driven by the weaknesses of Chat/Agent Mode + Claude Sonnet 3.5 at the time, rather than the strengths of autocomplete itself.

At this point, I write the code myself without any autocomplete. When I want the help, Claude Code is open in a terminal to lend a hand. As you mentioned, autocomplete has this weird effect where instead of considering the code, you're sort of subconsciously trying to figure out what the LLM is trying to tell you with its suggestions, which is usually a waste of time.

wongarsu•4mo ago
LSP giving us high-quality autocomplete for nearly every language has made simple llm-driven autocomplete less magical. Yes, it has good suggestions some of the time, but it's not really revolutionary

On the other hand I love cursor's autocomplete implementation. It doesn't just provide suggestions for the current cursor location, it also provides suggestions where the cursor should jump next within the file. You change a function name and just press tab a couple of times to change the name in the docstring and everywhere else. Granted, refactoring tools have done that forever for function names, but now it works for everything. And if you do something repetitive it picks up on what you are doing and turns it into a couple quick keypresses

It's still annoying sometimes

lukevp•4mo ago
Autocomplete is a totally different thing from what this article is talking about. The article refers to the loop of prompt refinement, which by definition means Agent Mode-type integrations. Autocomplete has no prompting.

I agree autocomplete kinda gets in the way, but don't confuse that with all AI coding being bad; they're two totally distinct functions.

leptons•4mo ago
"AI" autocomplete has become a damn nuisance. It always wants to second-guess what I've already done, often making it worse. I try to hit escape to make it go away, but it just instantly suggests yet another thing I don't want. It's cumbersome. It gets in the way to an annoying extent. It's caused so many problems, I am about to turn it off.

The only time it helps is when I have several similar lines and I make a change to the first line: it offers to change all the rest of the lines. It's almost always correct, but sometimes it is subtly not, and then I waste 5 minutes trying to figure out why it didn't work, only to notice the subtle bug it introduced. I'm not sure how anyone thinks this is somehow better than just knowing what you're doing and doing it yourself.

bogdanoff_2•4mo ago
I totally agree with the "attention stealing".

What you can do is create a hotkey to toggle autocomplete on and off.

zkmon•4mo ago
Precisely. That's the optimal way to use AI code assistants right now.

If you keep on refining the prompts, you are just eating up the hype that is designed to be sold to the C-suite.

bityard•4mo ago
I don't care much about hype one way or the other, but I find that continually asking for changes/improvements past the first prompt or two almost always sends the AI off into the weeds, except for the simplest use cases.
stocksinsmocks•4mo ago
New prompts in the same session are dangerous because the undesired output (including nonsense reasoning) is getting put back into the context. Unless you’re brainstorming and need the dialogue to build up toward some solution, you are much better off removing anything that is not essential to the problem. If the last attempt was wrong, clear the context, feed in the spec, what information it must have like an error log and source, and write your instructions.
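The reset workflow described above can be sketched as a trivial prompt builder; the section names and layout here are my own invention, not from the comment:

```python
# Rather than piling new prompts onto a session full of failed
# attempts, rebuild a fresh prompt from only the essentials:
# the spec, the current error, and the relevant source.

def build_fresh_prompt(spec: str, error_log: str, source: str) -> str:
    return "\n\n".join([
        "## Spec", spec,
        "## Current error", error_log,
        "## Relevant source", source,
        "## Instructions",
        "Fix the error so the code meets the spec. Do not change the public API.",
    ])

prompt = build_fresh_prompt(
    spec="parse_date() must accept ISO-8601 strings",
    error_log="ValueError: unconverted data remains: Z",
    source="def parse_date(s): ...",
)
print(prompt.startswith("## Spec"))  # True
```

The point is mechanical: every new attempt starts from this assembled context instead of the accumulated transcript.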
MangoCoffee•4mo ago
AI is pretty good at CRUD web apps for me. I worked out a web page for creating something, and if the next page was similar, I just told the AI to use the previous page as a template. It cut down a lot of typing.

AI is just another tool; use it or turn it off. It shouldn't matter much to a developer.

righthand•4mo ago
IMO no one is taking even the first bit of software development advice with LLMs.

Today my teammate laughed off generating UI components to quickly close a ticket, knowing full well no one will review it now that it's LLM-generated, and that it will probably make our application slower because the unreviewed code gets merged. The consensus is that anything they make worse they can push off onto me to fix, because I'm the expert on our small team. I have been extremely vocal about this. However, it is more important to push stuff through for release and make my life miserable than to make sure the code is right.

Today I now refuse to fix any more problems on this team and might quit tomorrow. This person tells me weekly they always want to spend more time writing and learning good code, and then always gets upset when I block a PR merge.

Today I realized I might hate my current job. I think all LLMs have done is enable my team to collect a paycheck and embrace disinterest.

OutOfHere•4mo ago
I am in the minority who agrees with you that the code should be right.

Don't quit. Get fired instead (strictly without cause). That way you can at least collect some severance and also unemployment. You will also absolve yourself of any regrets about having quit. Actually, just keep doing what you're doing, and you will get fired soon enough.

The other thing you can try is to ask for everyone to have their own project that they own, with the assigned owner fully responsible for it, so you can stop reviewing other people's work.

hunterpayne•4mo ago
This is good advice. If you quit, you don't get severance, nor do you get unemployment insurance. If they let you go, you do.
sitzkrieg•4mo ago
severance isn't guaranteed in, say, the USA though
karmelapple•4mo ago
Personally, I consider this horrendous advice.

If you're not in step with where you're at, and you can find other employment where you'll be happier, why not change?

You could apply your same logic to, "If you're in a relationship with a significant other, don't break up with them... get them to break up with you! You will absolve yourself of any regrets of dumping them." Yes, and you will have wasted both your time, and their time.

And the same goes for working at a company that you feel isn't good for you.

leptons•4mo ago
Do Not Quit until you have accepted an offer from another job. I'm serious. Don't do it. It's fucking hell out there right now for tech jobs.
nlcs•4mo ago
The job market is currently really bad; it has never been worse. Two years ago, it was almost impossible to find an expert for a more specialized domain like computer vision or RTOS. Now it's impossible not to receive applications from multiple experts for a single role that isn't even a specialized role and is, at best, "just a simple" senior role (and that's only counting experts; senior and junior software developers and architects aren't even included).
kragen•4mo ago
That's surprising! Thanks for letting us know.
_kblcuk_•4mo ago
Sorry to hear about your situation, but that doesn't really sound like the LLM's fault (it's a tool, in the end); more that poor ways of working are the norm at the company you work for. Not much would change if you replaced "LLM" with "Consultancy" in your post. And it's hard to connect the dots between "generated by LLM" and "slow": code performance doesn't really depend on whether it was generated or typed out.
righthand•4mo ago
No, I disagree.
wduquette•4mo ago
Unless you're solving the same old problem for the Nth time for a new customer, you don't really understand the problem fully until you write the code.

If it's a new problem, you need to write the code so that you discover all the peculiar corner cases and understand them.

If it's the (N+M)th time, and you've been using AI to write the code for the last M times, you may find you no longer understand the problem.

Fair warning. Write the damn code.

alphazard•4mo ago
The approach of treating the LLMs like a junior engineer that is uninterested in learning seems to be the best advice, and correctly leverages the existing intuitions of experienced engineers.

Spend more time on interfaces and test suites. Let the AI toil away making the implementation work according to your spec. Not implementing the interface is a wrong answer, not passing the tests is a wrong answer.

If you've worked in software long enough you will have encountered people who are uninterested in learning or uncoachable for whatever reason. That is all of the LLMs too. If the LLM doesn't get it, don't waste your time; it will probably never get it. You need to try a different model or get another human involved, same as you would for an incompetent and uncoachable human.

As an aside: my advice to junior engineers is to show off your wetware, demonstrate learning and adaptation at runtime. The models can't do that yet.

giancarlostoro•4mo ago
What's really funny is, if you copy its output, and start a new prompt, and ask it "From the perspective of Senior / Staff level engineer, what is wrong with this code?" and you paste the code you got from the LLM, it will trash all over its own code with a fresh mind. Technically you can do it in the existing prompt, but sometimes LLMs get a bug up their butts about what they've decided is reality all of a sudden.

When switching context in any way, I start a new prompt.

lomase•4mo ago
I never use LLMs but what happens if you use the same code and write:

"From the perspective of Senior / Staff level engineer, what is good about this code"

Does it praise it?

giancarlostoro•4mo ago
Probably points out the bits it got correct I suppose.
svieira•4mo ago
"This is a clever usage of the too-little-used plus operator to perform high performance addition"
jeffrallen•4mo ago
You're absolutely right! /s
mierz00•4mo ago
I do this to code I have personally written as well.

Taking a step back and reviewing all my changes gives a different perspective and often find things I didn’t see when in the weeds.

pkdpic•4mo ago
> Ask AI for an initial version and then refactor it to match your expectations.

> Write the initial version yourself and ask AI to review and improve it.

> Write the critical parts and ask AI to do the rest.

> Write an outline of the code and ask AI to fill the missing parts.

So well put. I'm writing these on a post-it note and putting it above my monitor. I held off on using agents to generate code for a long time, finally was forced to really make use of them, and this is so in line with my experience.

My biggest surprises have been how much the model doesn't seem to matter (?) when I'm making the prompts appropriately narrow. Also surprised at how hard it is to pair program in something like cursor. If your prompting is even slightly off it seems like it can go from 10xing a build process to making it a complete waste of time with nothing to show but spaghetti code at the end.

Anyway long live the revolution, glad this was so technically on point and not just a no-ai rant (love those too tho).

g42gregory•4mo ago
Yes, you could write the code yourself, but keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years.

I think better advice would be to learn to read/review an inordinate amount of code, very fast. Also a heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...

Kinda the opposite advice from the blog. :-)

Edit: Somebody pointed out that, in order to read/review code, you have to be able to write it. Very true. It raises the question of how you acquire/extend your skills in the age of AI coding assistance. Not sure I have an answer. Claude Code now has /output-style: Learning, which forces you to write part of the code. That's a good start.

rileymichael•4mo ago
> keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years

sure thing. we've been '6 months' away from AI taking our jobs for years now

g42gregory•4mo ago
Not saying AI will take anybody's job. It's just that the nature of the job is changing, and we have to acknowledge that. It will still be competitive. It will still require strong SE/CS knowledge and skills. It will still require/favor CS/EE degrees, which the NVIDIA CEO told us not to get anymore. :-)

Also, it looks like OpenAI and Anthropic have completed their fundraising cycles. So the AGI "has been cancelled" for now. :-)

kragen•4mo ago
Nobody has any idea what will happen in 1–2 years. Will AI still be just as incompetent at writing code as it is today? Will AI wipe out biological humanity? Nobody has any idea.
g42gregory•4mo ago
Very true. One thing we could do is to take a positive/constructive view of the future and drive towards it. People could all lose their jobs, OR we could write 1,000x more software. Let's give corporate developers tools to write 1,000x more software, instead of buying it from outside vendors, by way of example.
kragen•4mo ago
It might work!
mhuffman•4mo ago
>Yes, you could write the code yourself, but keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years.

I'm not saying that it definitely isn't going to happen, but there is a loooong way to go for non-FAANG medium and small companies to let their livelihoods ride on AI completely.

>I think a better advice would be to learn reading/reviewing an inordinate amount of code, very fast. Also heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...

If we get to a point in 1-2 years where AI is vibe-coding at a high mostly error-free level, what makes you think that it couldn't review code as well?

g42gregory•4mo ago
I can't see into the future, but I think that AI, at any level, will not excuse people from the need to acquire top professional skills. Software engineers will need to know Software Engineering and CS, AI or not. Marketers will have to understand marketing, AI or not. And so on... I could be wrong, but that's what I think.

AI-assistance is a multiplier, not an addition. If you have zero understanding before AI, you will get zero capabilities with AI.

dayvster•4mo ago
Yes! I could not agree more with this sentiment.

We over-analyse, over-discuss, over-plan and over-optimize before we even write the first import or include.

Some of my best ideas came to me as I was busy programming away at my vision. There's almost a zen-like state there.

econ•4mo ago
Write the code, deploy.

the end

larodi•4mo ago
Blessings, brother, but this insight will never get through to the masses. I'd bet on it, so no rage.
0x696C6961•4mo ago
This is exactly how I work, and I feel like the tools don't accommodate this workflow. I shouldn't have to tell the agent to explicitly re-read a file after every edit.
arthurjj•4mo ago
"Two prompts and then do it yourself" is a pretty good heuristic. Last year I was simulating a boardgame [1] and wasted ~1 hour trying to get ChatGPT to solve a basic coding combinatorics problem. I needed a method in Python to generate all possible hand decisions a player could make. I couldn't make it understand that certain choices were equivalent.

[1] https://arthur-johnston.com/paths_of_civ_tech_tree/
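The game's actual rules aren't given, so here is only a generic sketch of the equivalence issue described: with duplicate cards in hand, positional combinations overcount the choices, and deduplicating through sorted tuples (multisets) collapses equivalent ones:

```python
from itertools import combinations

# A hand with two identical "gold" cards. Positionally, picking
# gold#1+wood and gold#2+wood are different combinations, but as
# player decisions they are the same choice.

hand = ["gold", "gold", "wood", "ore"]

naive = list(combinations(hand, 2))  # 6 positional pairs

# Canonicalize each pair as a sorted tuple (a multiset), then dedupe.
distinct = sorted({tuple(sorted(c)) for c in combinations(hand, 2)})

print(len(naive))  # 6
print(distinct)
# [('gold', 'gold'), ('gold', 'ore'), ('gold', 'wood'), ('ore', 'wood')]
```

For larger hands the same canonicalization idea applies, though enumerating multiset subsets directly scales better than generating and deduplicating.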

ivanjermakov•4mo ago
With time you just feel what kinds of problems it is able to solve. Similar to how all of us have a feeling for "good code" just by looking at it.
NewEntryHN•4mo ago
What's up with the "prompt refinement" business? Are folks trying to get it right in one shot?

My experience is that treating the generated code as a merge request, on which you submit comments for correction (and then again for the next round), works fairly well.

Because the AI is bad you get more rounds than in a real code review, but because the AI is fast and at your command, each round is way faster than a code review with a human (< 10 minute feedback loop).

ivanjermakov•4mo ago
I wonder how many devs _bear_ current LLM coding tools just because they're afraid of getting "out of the loop". My experience is that if I'm struggling with a problem, an LLM would rather waste my time than help.
nojito•4mo ago
>If, given the prompt, AI does the job perfectly on first or second iteration — fine. Otherwise, stop refining the prompt. Go write some code, then get back to the AI. You'll get much better results.

This is terrible advice.

Why would I go through the write, run, debug loop myself when I can just have cc do it?

jadenPete•4mo ago
I think when the answer becomes more important than the journey, AI tools become more valuable. Want to figure out why a bug is occurring? Tools like Claude Code can be great for this. Implementing a new API or feature? You'll have to read all the code it wrote, and if you're like most software engineers, you'll be faster at writing code than reading it. So you might as well just write it yourself.

This has helped to explain why, at least for me, LLMs have been more useful for reading code than writing code. I'm also just reluctant to submit the code written on my behalf without making hundreds of small adjustments, but I think I'll need to get over that, as I wouldn't be so nit-picky if a junior engineer were completing the task.

mooiedingen•4mo ago
I argue that an LLM trained on 'human' text causes more trouble when used to code than a model trained only on code. Allow me to explain: if one uses a model trained only on code, all one has to do is write the instructions as a comment at the top of the document, and the LLM will do the rest:

```
#The Following piece of code
#Shall be written in python
#The goal of the code is to
#Scrape news.ycombinator.com
#Fetching the last 10 posts
#using requests and BS4
#The output shall be parsed
#Title and corresponding url
```

In fact, if the model is not capable of finishing the script at this point, it means that the model cannot code.
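Setting aside whether any model could satisfy that header comment, here is a network-free sketch of the parsing half using only the stdlib (the markup snippet and class names are stand-ins modeled on the HN front page, not fetched from it):

```python
from html.parser import HTMLParser

# Static snippet standing in for fetched front-page markup.
SNIPPET = """
<span class="titleline"><a href="https://example.com/a">Post A</a></span>
<span class="titleline"><a href="https://example.com/b">Post B</a></span>
"""

class TitleParser(HTMLParser):
    """Collect (title, url) pairs from 'titleline' spans."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.href = None
        self.posts = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("class") == "titleline":
            self.in_title = True
        elif tag == "a" and self.in_title:
            self.href = attrs.get("href")

    def handle_data(self, data):
        if self.in_title and self.href:
            self.posts.append((data, self.href))
            self.in_title, self.href = False, None

parser = TitleParser()
parser.feed(SNIPPET)
print(parser.posts)
# [('Post A', 'https://example.com/a'), ('Post B', 'https://example.com/b')]
```

The requests+BS4 version the comment imagines would replace the static snippet with a fetched page; the extraction logic stays the same shape.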
raxxorraxor•4mo ago
I would not recommend iterating too much with AI. On the contrary, do "milestones" and solve the smaller problems independently.

If the context window is full, it's better to save the progress by reformulating the problem and prompting the AI with the progress you made before: a new start with an empty context.

In my experience any coding AI starts with really good code and it goes down from there.

SLWW•4mo ago
"Learn to decompose" is probably the most rage-baiting phrase I've heard in a long time. Breaking down a problem into smaller parts is not "decomposing"
foobarchu•4mo ago
It's not rage bait, it is an extremely well established term in CS.

https://en.m.wikipedia.org/wiki/Decomposition_(computer_scie...

SLWW•4mo ago
> extremely well established term in CS

3 citations..

I studied CS in college, got my degree, and have worked in software for the last 8 years, and I have never once heard "decomposition" used in any way, in college or since, until this very moment.