frontpage.

LLMs Are Not a Higher Level of Abstraction

https://www.lelanthran.com/chap15/content.html
1•lelanthran•1m ago•0 comments

A framework agnostic platform to manage local agents from your phone

https://onepilotapp.com
2•elearia•4m ago•0 comments

Musk spars with OpenAI atty in trial over OpenAI's evolution from a nonprofit

https://apnews.com/article/musk-altman-openai-nonprofit-trial-bdbe85d62c2b678458fe68148eb6fba5
1•1vuio0pswjnm7•4m ago•1 comments

We Caught Prompt Security Leaking API Keys

https://www.youtube.com/watch?v=cZLdWtcSE04
1•acorn221•4m ago•0 comments

I Recreated the Apple Lisa Computer Inside an FPGA – The LisaFPGA Project

https://www.youtube.com/watch?v=8jNQDcpHc68
2•cyrc•7m ago•0 comments

Questions of US interventionism as 25-story Juárez surveillance tower scrutinized

https://english.elpais.com/international/2026-05-03/amid-questions-of-us-interventionism-in-mexic...
1•c420•9m ago•0 comments

FCC votes to ban all Chinese labs from certifying electronics sold in the US

https://www.tomshardware.com/tech-industry/fcc-votes-to-ban-all-chinese-labs-from-certifying-elec...
2•jonbaer•11m ago•1 comments

Elon Musk Says AI 'Smarter Than Humans' Next Year During OpenAI Testimony

https://www.newsweek.com/elon-musk-vs-sam-altman-feud-explained-as-openai-trial-begins-11886815
2•1vuio0pswjnm7•12m ago•1 comments

PHP King Extension and KingRT Video Call App

https://kingrt.com/
1•bold_iggl•12m ago•1 comments

Space War

http://cleancoder.com/space-war
2•evo_9•13m ago•0 comments

Maybe AI Isn't a Bubble After All

https://www.theatlantic.com/economy/2026/05/ai-bubble-revenue-anthropic/687022/
17•Anon84•13m ago•2 comments

Collaborative Editing in CodeMirror

https://marijnhaverbeke.nl/blog/collaborative-editing-cm.html
2•luu•14m ago•0 comments

Show HN: Local semantic memory for coding agents

https://github.com/Chadi00/thr
1•chadiiek•15m ago•0 comments

Apple Was Caught Off Guard by MacBook Neo's "Off the Charts" Demand

https://www.macrumors.com/2026/05/01/apple-was-caught-off-guard-by-macbook-neo/
2•ZeidJ•17m ago•0 comments

New Claude-Code Plugin for Jupyterlab

https://github.com/stellarshenson/jupyterlab_claude_code_extension
1•stellars•18m ago•0 comments

The Oscars Just Banned AI from Winning Acting and Writing Awards

https://gizmodo.com/the-oscars-just-banned-ai-from-winning-acting-and-writing-awards-2000753740
5•ZeidJ•19m ago•0 comments

PolyPulse – C++ TUI scalper that exploits oracle lag in Polymarket BTC/ETH markets

https://github.com/NeuroNord/PolyPulse
2•neuronord•21m ago•0 comments

Achieving Rapid CVE Remediation in an Era of Escalating Vulnerabilities

https://flox.dev/blog/achieving-rapid-cve-remediation-in-an-era-of-escalating-vulnerabilities/
2•ronef•21m ago•0 comments

Most Companies Aren't Anywhere Near Ready for AI

https://twitter.com/DanielMiessler/status/2050666594188304484
2•iceboundrock•27m ago•0 comments

Show HN: MegaLLM – Universal LLM client for any OpenAI-compatible API

https://megallm.netlify.app/
2•heliskyr2•37m ago•0 comments

How many e's are in the word seventeen [video] (AI hallucination)

https://www.youtube.com/shorts/nks72LuZO20
2•Imustaskforhelp•40m ago•1 comments

tank-os: Fedora bootc image for running OpenClaw as a rootless Podman workload

https://github.com/LobsterTrap/tank-os
2•indigodaddy•41m ago•0 comments

Feedback Loops

https://fastersafely.com/lean-software-engineering/principles/feedback-loops/
3•dev_by_day•42m ago•0 comments

Barry Levinson's box-office flop 'Toys' predicted the future of warfare

https://www.cnn.com/2026/05/03/entertainment/toys-movie-barry-levinson-modern-warfare-cec
3•mooreds•45m ago•0 comments

How to organize 3 acquired companies into one coherent website

https://littlelanguagemodels.com/how-to-structure-your-sites-after-a-big-acquisition/
2•mooreds•46m ago•0 comments

What Chromium versions are major browsers on?

https://chromium-drift.pages.dev/
48•skaul•47m ago•11 comments

We're on a mission to get books in brains

https://betterbookclub.com/about/
2•mooreds•48m ago•0 comments

Physical buttons outperform touchscreens in new cars (2023)

https://etsc.eu/physical-buttons-outperform-touchscreens-in-new-cars/
4•hubraumhugo•48m ago•0 comments

Abusing Science (2020)

https://pmc.ncbi.nlm.nih.gov/articles/PMC7566036/
3•rolph•49m ago•0 comments

Gray Media's Chain-Wide Arbitration Rollout

https://tostracker.app/analysis/gray-media-arbitration
2•tldrthelaw•50m ago•0 comments

Uncle Bob: It's Over

https://old.reddit.com/r/vibecoding/comments/1srfqm0/uncle_bob_its_over/
46•lopespm•1h ago

Comments

monkpit•1h ago
It’s hard to give up, but likely necessary. That doesn’t mean quality has to suffer, we can still gate with deterministic quality tooling where it matters. But yeah, at some scale it stops mattering how human readable the code is, as long as AI can effectively and efficiently (token-wise) make edits or add features.
nine_k•59m ago
The point is not human readability, but good structure. Spaghetti code is as bad for an LLM as for a human, because structural complexity and the amount of coupling are fundamental limits, not human-specific.
renticulous•48m ago
Amazing tweet.

https://x.com/stevesi/status/2050325415793951124

Here's how history rhymes with this logic. The development of compilers v writing assembly language was not without a very similar "controversy" — that is, are the new tools more efficient or less efficient.

The first compilers were measured relative to hand-tuned assembly language efficiency. The existing world of compute was very much "compute bound" and inefficient code was being chased out of every system.

The introduction of the first compilers generally delivered code "within 10-30%" as efficient as standard professional assembly. This "benchmark" was enough for almost a generation of Fortran programmers to dismiss the capabilities of compilers.

Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code. Debugging a compiler is a nightmare (personal experience). This only provided more "ammo."

With the arrival of COBOL the debate started to shift. COBOL generated decidedly "bloated" code, so there was no way to win the efficiency argument. But what people started to realize was that a "modern" programming language made it possible to deliver vastly more software and for many more people to work on the same code (ASM was notorious for being challenging for multiple engineers working on the same portion of code). So the metric slowly moved from "as good as hand-tuned assembler" to "able to write bigger, more sophisticated code in less time with more people." Computers gained timesharing, more memory, and faster CPUs, which made the efficiency argument far less compelling (only for it to repeat with the first 8K or 64K PCs).

This entire transition is capped off with a description in Fred Brooks "Mythical Man Month" book, one of the seminal books in the field of programming and standard issue book sitting in my office waiting for me on my first day at Microsoft. (See full book free here https://web.eecs.umich.edu/~weimerw/2018-481/readings/mythic...)

It is very early. I was not a programmer when the above happened though I did join the professional ranks while many still held these beliefs. For example, I interned writing COBOL on mainframes while PCs were using C and Pascal which were buggy and viewed as inefficient on processor/space-constrained PCs.

The debate would continue with C++, garbage collection, interpreted v compiled (Visual Basic) and more. As a fairly consistent observation over decades, every new tool is viewed through a lens (at first) by experienced programmers over what is worse while new programmers use the tool and operate in a new context (eg "more software" or "bigger projects"). The excerpt below shows this debate as captured in 1972.

lelanthran•4m ago
> Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code.

Incorrect. They had bugs that generated incorrect code. They didn't routinely have bugs that generated incorrect code :-/

And the bugs they had were reproducible.

monkpit•27m ago
That’s where the tooling comes in!
duped•1h ago
I fully believe AI can write better code faster than Robert C. Martin.
GiorgioG•57m ago
Clean Architecture and Uncle Bob can take a hike.
Applejinx•47m ago
This. Uncle Bob was already over, and now he seems to be hitting the skids REAL bad. Just listening to him is tough: this guy's bad news, I didn't realize he was this bad off.
abbadadda•56m ago
I thought this was about Uncle Bob being “canceled.”
Kwpolska•50m ago
Which is long overdue.
an0malous•48m ago
What did he do?
2ndorderthought•35m ago
Wealthy white dude edging towards senility taking a liking to bathrobe social media shorts. Take a guess. It's going to involve a political party and a lot of weird public takes unrelated to software.
amarant•27m ago
White men are not allowed to grow old? How come?

Doesn't really seem fair, I'm gonna be a old white man some day, ain't really that much I can do about it...(Well, I suppose sex changes are a thing now, but really?)

2ndorderthought•23m ago
Of course they are. I'm only stating a trend so people can infer.
wizzwizz4•19m ago
Are you going to be wealthy, with your head buried in the firehose of an algorithmic feed? Those are things you can do something about.

Alternatively, you could take a crack at deconstructing whiteness. Depending how young you currently are, you might be able to make a dent by the time you're an old man. That's trickier though, because it involves serious social reform. Or if sociology isn't your deal, maybe you could become a biologist, and cure old age?

b65e8bee43c2ed0•11m ago
shalom
runarberg•17m ago
Even just purely on a professional level, his clean code architecture was very bad advice, which was marketed and hyped up into something it never deserved. The software industry should have cancelled Uncle Bob like archeologists cancelled Graham Hancock, purely for his professional opinions (though I am not against cancelling him for his political opinions either; we can do both).
Cheese48923846•24m ago
He became Lord Voldemort. No one knows exactly what he did, but you don't dare even whisper his name.
RobRivera•52m ago
That's just, like, his opinion man
runarberg•49m ago
His opinions were never really good to begin with, he was just excellent at marketing them as good opinions.

It comes as no surprise to me that the guy who has bad opinions about software architecture, has worse opinions about vibe coding.

Bridged7756•39m ago
He's an idol, didn't you know? Much like his software architecture takes, they'll be taken as gospel.
livinglist•33m ago
Personally I have never been a fan of clean code architecture…to each their own I guess
adriand•49m ago
Kind of a great video! I enjoyed it. His point about testing coverage and generating mutations to ensure the tests fail resonated. I get concerned sometimes that the AI is writing tests not to ensure the logic is correct, but to ensure the tests pass against the code it already wrote. Any other ideas on this? Is there a code review step or CI checkpoint that would decrease the likelihood of that?
ozlikethewizard•24m ago
To be fair the overwhelming majority of tests I've seen in the wild written by humans have been the same. Not a lot of good material for AI to learn from.
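The mutation-testing idea mentioned above (generate mutants of the code and check that the test suite actually fails on them) can be sketched in a few lines. This is a toy illustration, not a real tool like mutmut or PIT: the function, test, and single-operator mutation are all invented for the example.

```python
import ast

# Toy mutation testing: flip the first `+` in a function's source into
# a `-`, re-run the test, and check the mutant is "killed" (tests fail).

SOURCE = """
def price_with_tax(price, rate):
    return price + price * rate
"""

def run_tests(namespace):
    """Stand-in test suite: returns True if the tests pass."""
    f = namespace["price_with_tax"]
    return abs(f(100, 0.2) - 120) < 1e-9

class FlipAdd(ast.NodeTransformer):
    """Mutate the first addition operator into a subtraction."""
    def __init__(self):
        self.done = False
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.op, ast.Add):
            node.op = ast.Sub()
            self.done = True
        return node

def mutant_killed(source):
    tree = FlipAdd().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    return not run_tests(ns)  # killed == the tests fail on the mutant

baseline = {}
exec(SOURCE, baseline)
print(run_tests(baseline))   # True: the original code passes
print(mutant_killed(SOURCE)) # True: the suite catches the mutation
```

If `mutant_killed` ever returned False, that would be exactly the smell adriand describes: a test that passes regardless of what the code does.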
relativeadv•48m ago
"Forty years later, in September of 2018, I started working on this version of Space War. It's an animated GUI driven system with a frame rate of 30fps. It is written entirely in Clojure and uses the Quil shim for the Processing GUI framework." - Robert Martin

https://blog.cleancoder.com/uncle-bob/2021/11/28/Spacewar.ht...

tgma•45m ago
For all LLM flaws, if it kills the whole Agile/SCRUM/whatever grift, it will have been worth it. The damage these guys have done to software industry at large is unfathomable.
whstl•21m ago
Hot take, but the bureaucracy of Scrum, the Figmafication of design and the disdain of PMs for iterative deliveries generates more work and waste than AIs are able to save.
MeetingsBrowser•42m ago
The craziest thing about AI is you can just try it yourself and check if the claims are true.

I use Claude code and codex daily. They have become an integral part of my workflow.

There is no task that takes me a day that they can complete in five minutes.

Even with the lightning fast progress being made, it looks like LLMs are a decade or more away from being that good.

If AI can do your job for you, you should be the first to know. Just try it and see!

qudat•37m ago
Fundamentally it cannot be much better than how well we can write the spec and then validate the results.

It’s always gonna be a multi shot process. And it can already write code good enough. That’s no longer the bottleneck.

Further, Qwen 27b is such an incredible masterpiece for coding and it can run on consumer hardware today. Anthropic/OpenAI are gonna give up on coding models very soon. There’s not gonna be any money in it when you can run your own local model for significantly cheaper.

Qwen 27b is not SOTA but the value is insane. You can basically use it for small tasks and then route harder problems to opus or sonnet and boom, you've saved a lot of money.
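The routing idea above (cheap local model for small tasks, frontier model for hard ones) can be sketched as a simple dispatcher. Everything here is an assumption for illustration: the model names, the difficulty heuristic, and the threshold are all invented, and no real model API is called.

```python
# Hypothetical names; a real setup would point at an actual local
# deployment and a hosted API.
LOCAL_MODEL = "qwen-coder-local"
FRONTIER_MODEL = "hosted-frontier"

def estimate_difficulty(task: str) -> int:
    """Crude stand-in heuristic: long prompts and design-level keywords
    are treated as harder. A real router might use a classifier."""
    score = 0
    if len(task) > 400:
        score += 1
    for keyword in ("refactor", "architecture", "concurrency", "migrate"):
        if keyword in task.lower():
            score += 1
    return score

def route(task: str) -> str:
    """Send easy tasks to the local model, hard ones to the frontier model."""
    return FRONTIER_MODEL if estimate_difficulty(task) >= 2 else LOCAL_MODEL

print(route("rename this variable"))                          # qwen-coder-local
print(route("refactor the architecture to migrate the queue"))# hosted-frontier
```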

2ndorderthought•34m ago
Super trivial to hand verify 350kloc changes for sure.
qayxc•28m ago
Quis custodiet ipsos custodes?
Aeolun•30m ago
> There is no task that takes me a day that they can complete in five minutes.

Five minutes is pushing it, but 15 minutes? Absolutely.

johnfn•27m ago
There are definitely tasks you can prompt an AI in 5 minutes that would take a whole day to do. One example is adding something to a CI pipeline and getting it to green (i.e. maybe you're adding your first ever e2e test), especially when your CI pipeline is painfully slow. e.g. if your pipeline takes 30 minutes to finish, and it takes around 10 tries to figure out all the random problems, that was easily a full day task before AI. Now I prompt AI to figure it out, which takes 5 minutes of active attention, and it figures it out for the rest of the day while I do other stuff.
Aurornis•15m ago
I mean I wasn’t sitting around unproductively waiting for 30 minute CI runs to finish before LLMs came along, either.

I also like to use LLMs for background work on iterative tasks, but the way some people talk about work in the days before LLMs makes me realize how we're arriving at these claims that LLMs make us 10X more productive. If it took someone all day to do a few minutes of active work, then I could see how LLMs would feel like a 10X or 50X productivity unlocker simply by not shutting down and doing nothing at the first sign of a pause.

MeetingsBrowser•3m ago
There are definitely some tasks that AI has made 10x or 100x faster, but not the tasks that make up my day to day.

For me, there may be one thing I do every few months that AI is really good at.

The overwhelming majority of the work I do, LLM tooling is just ok at. Definitely faster overall, but with lots of human planning, hand holding and course correction.

I would estimate LLMs make me, on average, 50% more productive, which is huge! But from my experience I cannot believe anyone is experiencing an 8h-to-5m productivity boost overall

MattGaiser•25m ago
The delta isn't a day to 5 minutes, but a day to a half hour (which is what most of my larger tickets take)? Yes, especially as you don't need to watch it do its thing anymore.

To me, the reason for the lack of amazing productivity gains is that we have done nothing to speed up figuring out what to build, and nothing to speed up getting code into production from pull request; in a lot of companies, code review is already saturated.

Also, the agents are good at figuring out problems for themselves, so I can ask it to set up a CI/CD pipeline, give it GitHub access, and it will just try things until it succeeds.

whstl•25m ago
Yep. It depends so much on task, expectations, ability to express what you want and whether the problem has been solved elsewhere or not.

The results are always so ridiculously different.

lelanthran•2m ago
> The results are always so ridiculously different.

Well... yes! It's not the same as running a program through a compiler 100k times and getting the same binary, it's... different: https://www.lelanthran.com/chap15/content.html

HeavyStorm•9m ago
Not my experience. AI takes a lot less time doing tasks than I do myself. My current issue is that 2 out of 3 times they don't produce the code that I want, so I either have to reprompt or do it myself. And the solution is simple: just accept their way; I'm just not there yet.

In any case, on that one time that AI works perfectly, it saves me hours of coding. So the potential is there...

doginasuit•40m ago
There are probably some respectable workflows that involve an LLM writing most of the code, but AI is still terrible at understanding some critical parts of the problem. You still have to tell it what to write and how it should work or there are high odds that you'll get a hot mess. And there still needs to be a human that understands everything there and how to debug it. For me, the most enjoyable path there is to write it myself, because I would rather be involved in writing the code than only involved in reading it. It might not be the fastest path there, but it gets the job done for the foreseeable future. I could end up like the Amish who choose not to use technology that was developed after a certain point, from what I can tell they do alright.
tonyarkles•22m ago
> but AI is still terrible at understanding some critical parts of the problem

I agree to some extent with regards to writing new code. One piece where I have been perpetually impressed is at asking it to put together a plausible explanation of how something weird has happened. I have been blown away, multiple times, by Codex and Claude’s ability to take a prompt like “When I did X, I expected Y to happen but instead observed Z. Put together an explanation for how that could happen, including the individual lines of code that can lead to ending up in that state.”

In one notable case, it traced through a pretty complex sensor fusion -> computational geometry problem and identified a particular calculation far upstream that could go negative in certain circumstances, which would lead to a function far downstream generating a polygon with incorrect winding order (clockwise instead of CCW).

In another, it identified a variable that was being initialized to 0 instead of initialized to (a specific runtime value that it should’ve been initialized to during a state transition). The downstream effect, minutes later, would be pathological behaviour that would happen exactly once per boot.

In both cases I was provided with a specific causal chain of events with individual source files and line numbers so that I could verify the plausibility of the explanation myself.
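The winding-order bug described above has a standard check: the sign of a polygon's signed area (shoelace formula) tells you whether its vertices run counter-clockwise (positive) or clockwise (negative) in a y-up coordinate system. A minimal sketch, with made-up example polygons:

```python
def signed_area(points):
    """Shoelace formula: half the sum of cross products of consecutive edges."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

def is_ccw(points):
    """True if the polygon's vertices are in counter-clockwise order."""
    return signed_area(points) > 0

square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(is_ccw(square_ccw))                   # True
print(is_ccw(list(reversed(square_ccw))))   # False: clockwise copy
```

A guard like `assert is_ccw(polygon)` at the downstream function's entry is the kind of invariant check that would have surfaced the upstream sign flip immediately.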

doginasuit•9m ago
That is how I use it too, for explanations and suggestions when I run into something unexpected. It is incredible in the back seat.

I don't mean to completely dismiss their utility. I realized recently that I was having more fun coding than I ever remember. It is a strange feeling to go along with vibe out there that software developers are becoming obsolete.

daviding•39m ago
English is the new programming language.
benatkin•28m ago
I'm not sure I agree, but I can see it used as an open-ended interview question. "Is English the new programming language?" It would be a good test if someone gravitates towards pedantry (AIs can speak another common language just as well!) or if they actually get into the difference between prompting and programming, or whether it's at its core an LLM or just an AI based on transformer architecture. Extra points for having it be part of an async interview and interviewees using an LLM to write the answer, and interviewers using LLMs to grade them.
LaGrange•39m ago
I'm an AI skeptic, but I do think that _he_ will be out-coded by AI, no problem.
perrygeo•38m ago
I tend to agree with his point.

But I found myself laughing at the style; just ranting about software like a cartoon villain in his bathrobe. No fucks given.

mrcartmeneses•36m ago
Uncle Bob full of shit? Colour me purple!
julionc•28m ago
"It is unavoidable. It is your destiny. You, like your father, are now mine."
HumblyTossed•26m ago
He helped enshittify the industry - empowering middling developers to cry about "clean code" instead of actually learning to produce a great product. No thanks, Bob.
OldSchool•25m ago
More "bad news" and from the man who helped create and then promote Agile to dilute the value of software developers by forcing software development out of the control freak's nightmare where it started: seemingly esoteric, non-understandable by management, and make sure the next generation of developers knows their place. That's Agile's insidious purpose as far as I am concerned.

As for AI-written code, I wouldn't fly on a plane controlled by AI-designed and AI-tested code, but much of development is busy work, not problem solving or design. AI excels at turning a protocol spec into a parser for example. I'll take that any day. AI excels at finding stuff, particularly non-code, thesis-level ideas for algorithms and also at about the same level, what's been shown not to work when solving a non-deterministic problem.

If we're lucky, AI will fill in after exposing who is only doing busy work and who is creating.
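The "protocol spec into a parser" busy work mentioned above is the kind of mechanical translation meant here. As a toy illustration (the line protocol below is invented for the example, not any real spec):

```python
# Parse a hypothetical line protocol of the form:
#   CMD key=value key=value ...
# e.g. "SET host=db1 port=5432"

def parse_line(line: str):
    parts = line.strip().split()
    if not parts:
        raise ValueError("empty line")
    command, args = parts[0], {}
    for token in parts[1:]:
        if "=" not in token:
            raise ValueError(f"malformed argument: {token!r}")
        key, _, value = token.partition("=")
        args[key] = value
    return command, args

print(parse_line("SET host=db1 port=5432"))
# ('SET', {'host': 'db1', 'port': '5432'})
```

Tedious to write and easy to verify against the spec: exactly the shape of task the comment argues AI should absorb.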

znort_•14m ago
i had to laugh at his announcement that "otoh ai will give you the power to get all that coverage and cyclomatic complexity stats done in minutes, which you know doesn't really mean that the code is going to work".

also, his prediction assumes that ai will be able to learn from its own code going forward. will it also create its new programming languages and tools?

but it's a funny rant.

HeavyStorm•13m ago
That's a conspiracy theory if I ever heard one.
oytis•21m ago
That's gotta be a joke, right? It's like running agents to write agent orchestrators, to write orchestrators for orchestrators, just for clean code
cmiles74•21m ago
I don't have a lot of patience for Bob. That being said I have to agree with him on test coverage (that's as far as I made it through his monologue). IMHO, that is something that I 100% am okay letting the LLM tooling write and manage. I used to argue about whether or not we needed a test that verified that the value of a constant didn't change, and if 100% coverage was really that important. Now I don't care, I just let Claude write the test and keep it up-to-date.
andrewl•20m ago
That was a bizarre performance.
k3vinw•1m ago
Gives me a whole new perspective to the phrase clean code.