frontpage.

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•2m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•12m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•13m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•18m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•21m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•23m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•25m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•26m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•28m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•40m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•45m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•50m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•58m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

I Don't Want to Code with LLMs

https://blaines-blog.com/I-dont-want-to-code-with-LLMs
29•B56c•4mo ago

Comments

Our_Benefactors•4mo ago
I can't take this article seriously, and neither should you. Being anti-AI/anti-LLM is solidly in the Luddite camp; there's really no more debate to be had. Every serious inquiry shows productivity gains from using AI.

It’s anyone’s prerogative to continue to advocate for the horse and buggy over the automobile, but most people won’t bother to take the discussion seriously.

snickerbockers•4mo ago
>Being anti-AI/anti-LLM is solidly in the Luddite camp; there's really no more debate to be had. Every serious inquiry shows productivity gains from using AI.

These two sentences appear to be at odds with one another.

Our_Benefactors•4mo ago
The data showed LLMs are better. This put debate to rest. Now we are post-debate.
snickerbockers•4mo ago
"the data"
JohnFen•4mo ago
What data are you talking about? Why do you value it above the data showing the opposite?
snickerbockers•4mo ago
It's superior data because it supports his expectations. His expectations are right because they are based on superior data. Checkmate Luddites.
Our_Benefactors•4mo ago
Meanwhile, you have furnished zero data that supports your claims. Ho hum.
snickerbockers•4mo ago
Your initial statement is that you are not open to debate, so I don't see what the point would be. Furthermore, you defined "serious inquiries" as synonymous with your own preconceived ideas, so by definition I cannot refute anything you say using a "serious inquiry". Do not interpret this as some sort of compliment or concession, but it is not possible to argue against you.

Even putting the sophistry aside, your argument is incomplete because you never defined what "productivity" means in this context or how it can be quantified. I would never dispute that a pseudo-random bullshit generator can shit out JavaScript faster than any human, but that's not necessarily productive.

lmf4lol•4mo ago
Give me one serious, peer-reviewed study with proper controls, please.

I'll wait.

Our_Benefactors•4mo ago
Go ahead and move the goalposts now... This took about 2 minutes of research to support the conclusions I know to be true. You can waste as much time as you choose in academia attempting to prove any point, while normal people make real contributions using LLMs.

### An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation

We evaluate TESTPILOT using OpenAI's gpt3.5-turbo LLM on 25 npm packages with a total of 1,684 API functions. The generated tests achieve a median statement coverage of 70.2% and branch coverage of 52.8%. In contrast, the state-of-the-art feedback-directed JavaScript test generation technique, Nessie, achieves only 51.3% statement coverage and 25.6% branch coverage.

*Link:* [An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation (arXiv)](https://arxiv.org/abs/2302.06527)

---

### Field Experiment – CodeFuse (12-week deployment)

Productivity (measured by the number of lines of code produced) increased by 55% for the group using the LLM. Approximately one third of this increase was directly attributable to code generated by the LLM.

*Link:* [CodeFuse: Generative AI for Code Productivity in the Workplace (BIS Working Paper 1208)](https://www.bis.org/publ/work1208.htm)

capyba•4mo ago
“Productivity (measured by the number of lines of code produced) increased”

The LLMs had better have written more code; they're text generation machines!

In what world does this study prove that the LLM actually accomplished anything useful?

Our_Benefactors•4mo ago
As expected, the goalposts are being moved.

LOC does have a correlation with productivity, as much as devs hate to acknowledge it. I don’t care that you can provide counterexamples to this, or even if the AI on average takes more LOC to accomplish the same task - it still results in more productivity overall because it arrives at the result faster.

capyba•4mo ago
Nothing about this is moving goalposts - you and/or the person(s) conducting this study are the ones being misleading!

If you want to measure time to complete a complex task, then measure that. LOC is an intermediate measure. How much more productive is "55% more lines of code"?

I can write a bunch of garbage code really fast with a lot of bugs that doesn't work, or I can write a better program that works properly, slower. Under your framework, the former must be classified as 'better' - but why?

I read the study you reference and there is literally nothing in the study that talks about whether or not tasks were accomplished successfully.

It says:

* Junior devs benefited more than senior devs, then presents a disingenuous argument as to why that's the senior devs' fault (more experienced employees are worse than less experienced employees, who knew?!)
* 11% of the 55% increase in LOC was attributed directly to LLM output
* Makes absolutely no attempt to measure whether or not the extra code was beneficial

Our_Benefactors•4mo ago
Yes, like I said, it’s not hard to provide counterexamples to why more LOC is better, but it’s also missing the forest for the trees to pretend it doesn’t matter at all.
footy•4mo ago
> This took about 2 minutes of research to support the conclusions I know to be true

This is a terrible way to do research!

Our_Benefactors•4mo ago
The point is that the information is readily available, and rather than actually adding to the discussion they chose to crow “source?”. It’s very lame.
psunavy03•4mo ago
If you are seriously linking "productivity" to "lines of code produced," that tells me all I need to know about your credibility.
Our_Benefactors•4mo ago
Do you think LOC and program complexity are not correlated? You are arguing in bad faith.
psunavy03•4mo ago
Neither has anything to do with the effectiveness of a piece of software or the productivity of the people who created it.
darvid•4mo ago
you're absolutely correct!
Refreeze5224•4mo ago
Then put me solidly in the Luddite camp. I think you should look into the history of the Luddites though. They were not against technology; they were against technology that destroyed jobs.

AI is about destroying working-class jobs so that corporations and the owning class can profit. It's not about writing code or summarizing articles. Those are just things workers can do with it. That's not what it's actually for. Its purpose is to reduce payroll costs for companies by replacing workers.

logicprog•4mo ago
> They were not against technology; they were against technology that destroyed jobs.

They were not against technology; they were against technology that destroyed *their* jobs. If we had followed what they wanted, we'd still be in a semi-pre-industrial, artisanal economy, and be worse off for it.

lkey•4mo ago
So you didn't read about them.

> In North West England, textile workers lacked these long-standing trade institutions and their letters composed an attempt to achieve recognition as a united body of tradespeople. As such, they were more likely to include petitions for governmental reforms, such as increased minimum wages and the cessation of child labor.

Sounds pretty modern, doesn't it? Unions, wages, no child exploitation...

And the government response?

> Mill and factory owners took to shooting protesters and eventually the movement was suppressed by legal and military force, which included execution and penal transportation of accused and convicted Luddites.

AllegedAlec•4mo ago
> Being anti-AI/anti-LLM is solidly in the Luddite camp; there's really no more debate to be had. Every serious inquiry shows productivity gains from using AI.

"Guys this debate is so stupid. Every serious inquiry shows productivity gains when we take away all senses, jack workers into the matrix and feed them a steady diet of speed intravenously. This put debate to rest. Now we are post-debate"

Something can increase productivity and still not be good.

xg15•4mo ago
You mean like the study that found a 20% productivity drop instead of gain?
logicprog•4mo ago
It didn't really show that if you break down the data, and its methodology was pretty bad

https://www.fightforthehuman.com/are-developers-slowed-down-...

steve_adams_86•4mo ago
The Luddites had some great ideas and were driven by a more sophisticated philosophy than people tend to give them credit for. I think their motivations are still applicable and worth considering today.
charleslmunger•4mo ago
If you're working on something where the cost of bugs is high and they're tricky to detect, LLM generated code may not be a winning strategy if you're already a skilled programmer. However, LLMs are great for code review in these circumstances - there is a class of bugs that are hard to spot if you're the author.

As a simple example, accidentally inverting feature flag logic will not cause tests to fail if the new behavior you're guarding does not actually break existing tests. I and very senior developers I know have occasionally made this mistake and the "thinking" models are very good at catching issues like this, especially when prompted with a list of error categories to look for. Writing an LLM prompt for an issue class is much easier than a compiler plugin or static analysis pass, and in many cases works better because it can infer intent from comments and symbol names. False positives on issues can be annoying but aren't risky, and also can be a useful signal that the code is not written in a clear way.
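
To make the inverted-flag scenario concrete, here is a minimal hypothetical sketch; the flag name, function names, and pricing logic are invented for illustration, not taken from the comment above:

```typescript
// Hypothetical sketch of the failure mode described above: a new code path is
// guarded by a feature flag, and the guard is accidentally inverted. The names
// (newPricingEnabled, computePrice) are invented for illustration.

interface Flags {
  newPricingEnabled: boolean;
}

// New behavior: bulk discount for large orders. For small orders it agrees
// with the legacy path, which is exactly why existing tests stay green.
function applyNewPricing(base: number): number {
  return base > 1000 ? base * 0.9 : base;
}

function computePrice(base: number, flags: Flags): number {
  // BUG: the intent was `if (flags.newPricingEnabled)`. With the check
  // inverted, the new path runs when the flag is OFF and is skipped when
  // the flag is ON.
  if (!flags.newPricingEnabled) {
    return applyNewPricing(base);
  }
  return base; // legacy path
}

// Pre-existing test: a small order with the flag off, where old and new
// behavior coincide, so the inverted guard sails through CI. A reviewer (or
// an LLM prompted to look for inverted guards) is the realistic catch here.
console.assert(computePrice(100, { newPricingEnabled: false }) === 100);
```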

snickerbockers•4mo ago
>Reviewing is worse than writing

I think the reason this discussion keeps coming up is that the people who are getting a lot out of these tools are people who are, at best, the software-equivalent of assembly-line workers. If something can be easily understood by passively reading it then it probably isn't complicated or novel and therefore it's not surprising a pseudorandom bullshit generator can do it for you; all it lacks is a unit testing system which can verify that its interpretation of the problem-statement matches the interpretation which would be most obvious to a human and that is evidently not a solved problem thus far.

If the hardest part of your job is understanding code written by other people, and even code written by yourself in the distant past, then LLMs are of little use, because the problem they solve was never a significant bottleneck; in fact their "solution" only serves to pump a higher volume of fluid through the neck of the proverbial bottle.

It's the difference between reading somebody's paper in a mathematical journal to understand how they came to the conclusion they are presenting, and merely using the identity they have proven on faith. If all that mattered was to perform some calculation based on their work, then it's clear which approach will get more work done in less time; but if you don't take it for granted that everything in the journal is correct, or if you want to be able to further develop ideas based upon their proof, then you have to spend a few days or even weeks trying to understand how each step leads to its successor.

It's also why I hate the old adage about not reinventing wheels; it promotes ignorance by asserting that education itself is ignorance.

capyba•4mo ago
I’m glad to hear someone say that. I’ve been wrestling for weeks with the idea of reinventing a particular wheel in my profession, for a personal coding project. The problem is that my implementation can’t ever be as complete or as useful as the existing solutions because it’s way too much for one person to accomplish in a reasonable amount of time.

But, I like it, I’ve reinvented many wheels in my work and it’s benefited me greatly. So I will reinvent this particular wheel as well…

thomas_moon•4mo ago
The only viewpoint I really agree with in this article is the "use it or lose it" mentality. Skills are developed and maintained by practicing them, but if all the author really wants to do is write code, then LLMs are literally an answer to their prayers!

You can enable virtually free test-driven development. Write the test names down and let the LLM implement them for you. You save 50% of your time and you get to go to town on implementation and/or optimizations.
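
A rough sketch of what that workflow might look like, assuming a Vitest/Jest-style test runner; `slugify` and the cases below are made-up placeholders, not from the article or this thread:

```typescript
// Hypothetical sketch: the developer writes only the test names (the spec),
// then asks an LLM to draft the bodies, then reviews/edits the result.
// `slugify` and these cases are invented for illustration.
import { describe, it, expect } from "vitest";
import { slugify } from "./slugify";

describe("slugify", () => {
  // Test names written by the developer up front:
  it("lowercases the input and replaces spaces with hyphens", () => {
    // Body drafted by the LLM, then reviewed by hand:
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips characters that are not alphanumeric or hyphens", () => {
    expect(slugify("Hello, World!")).toBe("hello-world");
  });

  it("collapses consecutive separators into a single hyphen", () => {
    expect(slugify("a   b--c")).toBe("a-b-c");
  });
});
```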

You can have the LLM take the non-tech counterpart's description of a bug and have it point you at precise lines of code to investigate, rather than grepping around a codebase you might not know well.

You can onboard to new languages, frameworks, repositories extremely fast by having a partner (the LLM) explain implementation patterns and approaches on demand! You don't even need to talk to another human being! Get your questions answered in seconds and start coding!

You can rapidly prototype. You can get immediate code reviews. You can rubber duck. You can visualize business/logic flows and code branching to better understand existing implementations. You can even have the LLM write an implementation plan for you then write the code yourself!

If you can't find a way to write more code with LLMs, it's either an imagination or a skill issue.

pavel_lishin•4mo ago
> You can enable virtually free test-driven development. Write the test names down and let the LLM implement them for you. You save 50% of your time and you get to go to town on implementation and/or optimizations.

That's assuming that it writes good tests, and that you don't care to take the time to verify the tests it wrote, no?

steve_adams_86•4mo ago
I do find LLMs useful for scaffolding this stuff, but yeah, good test writing still seems to require a lot of hand-holding. I don't mind. I'm happy with my tests and they get written faster. Hand-holding and verifying is still faster than how I used to do it, and the LLMs admittedly capture more cases than I did without them. They will try to create test cases that make no sense too, but having to delete those is worth it if they also come up with test cases that I didn't think of.
truetraveller•4mo ago
No. There's a difference between writing code and getting code written. LLMs are the latter.
footy•4mo ago
all of this sounds awful
thomas_moon•4mo ago
The whole point of my post is that you get to pick and choose what parts this magical software automates for you. Based on your response, it's an imagination issue for you.
footy•4mo ago
I can imagine doing it and I have in fact used it to automate things like tests. This usually leads to me having to rewrite the tests and spending more time on them than I otherwise would have. Or delete tests that test trivial functionality, or add tests for x even when I specifically mentioned it in my prompt.

Being able to imagine something doesn't mean I have to like it.

lcnPylGDnU4H9OF•4mo ago
> or add tests for x even when I specifically mentioned it in my prompt

> Write the test names down and let the LLM implement them for you.

This sort of reinforces the idea I (and I believe others) have that people mostly talk past each other on this topic. It seems like there might be some other difference in understanding and/or practice when it comes to using these tools effectively. This seems to be a common issue to notice once one starts noticing it.

anikom15•4mo ago
LLMs can write documentation well, too.
Vaslo•4mo ago
I feel like we are still using it when cleaning up the code that we often get, I guess.
Tade0•4mo ago
We can't uninvent LLMs. They're here already and the best course of action for everyone is to learn to live with them.

That being said, I noticed that the more opinionated a language/framework/library is, the worse off one is using LLMs.

I was surprised by this, but then I put a particularly fishy line into GitHub's search box. What I saw were piles upon piles of bad practices and incorrect usages. There's a lot of bad code there and LLMs are learning from it.

iLemming•4mo ago
What I don't understand in all that noise from the LLM critics: they keep talking about how LLMs are so horrendously bad at writing code, as if that's the only thing we're trying to use them for. As if they're not even genuine programmers, working on real projects, touching code every day.

Software crafting is so much more than merely writing code. There's a significant amount of reading code that goes into it. Code written by you. Code written by someone else. Someone else's code that you butchered with your edits, your own code butchered by someone else, and everything intertwined in between. Code that can't easily be explained by looking at it - sometimes you have to find relevant PRs, tickets, documentation, related online communication, some loosely-related code sitting someplace else, etc.

LLMs absolutely can help you read code, just as they are very capable of helping someone study a book or an academic paper. Denying that fact simply is ignorance. Of course, LLMs are absolutely capable of leading you in the wrong direction, confusing you, and giving you incorrect facts, even when you're studying text in plain English, just like it's possible to end up at the bottom of a lake when driving a car. Everyone needs to exercise caution and "know what the fuck they're doing" when using a model. But calling LLMs "bullshit generators" and "magic 8 balls" is so stupid. Sure, if you use it to perform bullshit stuff, it will generate nothing but bullshit.

elwebmaster•4mo ago
“a new paradigm for software development that you must learn or be left behind”: that's a completely inaccurate statement. Nobody is saying that you will be “left behind”. It certainly is a new paradigm, but it doesn't mean the old way of doing things won't continue to exist. Just like there are still some problems that require code to be written in C or even assembly. Just like there are hand-made goods. The size of the opportunity is a whole different story.
capyba•4mo ago
I disagree - a lot of people in positions of power are unfortunately saying the “will be left behind” bit.
bdangubic•4mo ago
Not all of course, as there'll always be morons, but a lot of the people who are saying “will be left behind” see an amazing tool which, in the hands of the right people, is a great multiplier…
steevivo•4mo ago
I think you're wrong. The problem is not the LLM, the problem is you.
geldedus•4mo ago
You're free not to. Time will tell who's right.