frontpage.

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•2m ago•0 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•6m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•22m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•28m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•28m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•31m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•34m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•44m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•44m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•49m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•53m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•54m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•57m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•57m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
2•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•2h ago•1 comments

The Software Engineers Paid to Fix Vibe Coded Messes

https://www.404media.co/the-software-engineers-paid-to-fix-vibe-coded-messes/
95•zdw•4mo ago

Comments

westurner•4mo ago
Typical coding LLM issues:

Hallucinations

Context limits

Lack of test coverage and testing-based workflow

Lack of actual docs

Lack of a spec

Great README; cool emoji

sysguest•4mo ago
well that's enough for "good-looking documentation-is-everything" kinda teams
westurner•4mo ago
I'd take tests over docs but that's a false dilemma.

What does the (Copilot) /tests command do, compared to a prompt like "Generate tests for #symbolname, run them, and modify the FUT (function under test) and run the tests in a loop until the tests pass"?

Documentation is probably key to the Django web framework's success, for example.

Resources useful for learning to write great docs: https://news.ycombinator.com/item?id=23945815

"Ask HN: Tools to generate coverage of user documentation for code" https://news.ycombinator.com/item?id=30758645
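That "run the tests in a loop until the tests pass" workflow can be sketched as a toy loop. Everything below is invented for illustration: `run_tests`, the test cases, and `fix_attempts` (which stands in for successive LLM-proposed patches); it is not a real Copilot /tests integration.

```python
# Toy sketch of the generate-tests / run / fix / loop-until-green workflow.
# fix_attempts stands in for successive LLM-proposed implementations.

def run_tests(fut, cases):
    """Return the (input, expected) pairs the function under test fails."""
    return [(x, want) for x, want in cases if fut(x) != want]

# Generated "tests" for a hypothetical square() function.
cases = [(2, 4), (3, 9), (0, 0)]

# Candidate implementations, as a revision loop might produce them.
fix_attempts = [
    lambda x: x + x,  # first attempt: wrong for x = 3
    lambda x: x * x,  # revised attempt: passes every case
]

fut = fix_attempts[0]
for fut in fix_attempts:
    if not run_tests(fut, cases):
        break  # all tests pass; stop revising
```

The point of the loop structure is that the model's output is never accepted until the generated tests come back green.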

TuxSH•4mo ago
Context limits (regardless of hard limits) are a show stopper IMO; the models completely fail assignments on codebases of >= 30k LoC (or so).

You're better off feeding them a few files to work with, in isolation, if you can.

ttoinou•4mo ago
Sooo the LLM codes just like me ?
westurner•4mo ago
No; it doesn't care when it gives you incomplete garbage.

You have to tell it to validate its own work by adding to, refactoring, and running the tests before it replies.

Most junior developers do care and would never dump partial solutions on a prompter as though they're sufficient, the way LLMs do.

Whenever I remember, I get `make test-coverage` working and have myself or the LLM focus on the lines that aren't covered by tests.

Junior or senior, an employee wouldn't turn in such incomplete, non-compiling assignments that percentage of the time, even given inadequate prompts as specifications.
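The "focus on lines that aren't covered" idea can be sketched with only the standard library. This is a stand-in for a real `make test-coverage` target (which would typically use coverage.py); `clamp` and its single test case are invented for the example.

```python
# Stdlib-only sketch: run a "test" under a tracer, then inspect which
# lines were executed. The untested branch is the gap to point the LLM at.
import trace

def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi  # never reached by the test below: a coverage gap
    return x

tracer = trace.Trace(count=True, trace=False)
result = tracer.runfunc(clamp, -5, 0, 10)  # our only "test": hits x < lo

# Lines executed at least once, keyed by (filename, lineno).
covered = tracer.results().counts
```

A real workflow would diff `covered` against all executable lines (as `coverage report --show-missing` does) and feed the uncovered ones back into the prompt.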

ttoinou•4mo ago
If you’re hiring someone remotely without any trust, you could absolutely get random garbage that pretends to be real work from a human.

A human software developer doesn't code in a void; he interacts with others.

The same goes when you have an AI coder: you interact with it. It's not fire and forget.

wellthisisgreat•4mo ago

    Lack of actual docs
    Lack of a spec
Well, not my LLMs at least
imposterr•4mo ago
>If the resulting software is so poor you need to hire a human specialist software engineer to come in and rewrite the vibe coded software, it defeats the entire purpose.

I don't think this is entirely true. In a lot of cases vibe coding something can be a good way to prototype something and see how users respond. Obviously don't do it for something where security is a concern, but that vibe-coded skin cancer recognition quiz that was on the front page the other day is a good example.

antonvs•4mo ago
I've spent a good chunk of my career fixing or rewriting messes created by human developers. A majority of startups that succeed need this at some point, whether it's because of time and resource constraints during initial development, experience and competence issues, poor choices that got baked in, or whatever.

Right now, vibe coding just means there might be a lot more of this, assuming vibe coding succeeds well enough to compete with the situations I described.

FearNotDaniel•4mo ago
Exactly… back in 1975 Fred Brooks was advising programmers to “plan to throw the first version away” (The Mythical Man-Month), and it’s still true today. Just the tools to build that rapid prototype have changed. Once it was Ruby on Rails, once it was Visual Basic 6, and very often it’s still Excel macros…
thegrim33•4mo ago
Wasn't part of throwing away the first version because of all the knowledge you gain while actually building it? So that you could build it much better the second time, with much better abstractions/design? If you had AI code it the first time, you don't gain that same knowledge.
daanlo•4mo ago
You still do from operating the software. You see what problems users have using it, which types of problems they tackle with it. By the end of running the software for a month you will have typically learned a boat load.
Analemma_•4mo ago
But you and the parent are both ignoring that, in the real world, prototypes constantly (in fact, nearly always) get shipped straight to production and not updated or rewritten, precisely because they solve a pressing problem, and if they solve it well they get real users right away who start bringing up new feature requests and bugs before the prototype can ever be fixed.

This was a constant pattern in software engineering even before LLMs, but LLMs are making it much worse, and I think it's very head-in-the-sand behavior to ignore that. It's akin to going "well, you can't blame the Autopilot because the person should have been fully-attentive ready to react at any millisecond". That's not how humans work, and good engineering is supposed to take real-world human behavior into consideration

bravetraveler•4mo ago
Like Red Teams for InfoSec, reliability teams meet developers. Not new, but keep pumping Gig Culture/the fad, I guess.
sshine•4mo ago
That’s a good comparison.

But at another scale.

I tell my CS students who ask if there will be any junior positions for them when they graduate:

There will be an entire new industry of people who vibed 1000 lines of MVP and now are stuck with something they can’t debug. It’s not called a junior developer, but it’s called someone who actually knows programming.

Also, they will continue to deliver code that is full of security holes, because programming teachers are often not competent to teach those aspects, and IT security professionals who teach tend to be poor programmers or paper pushers.

npoc•4mo ago
That will only be a temporary phase while LLMs still produce bugs that LLMs can't predict or fix themselves.
Phemist•4mo ago
Until GödeLLM comes along and proves this is a permanent phase after all
sshine•4mo ago
Probably simultaneously, LLMs will perpetuate faulty MVPs that need fixing for the viber economy, while at the same time, all those AI engineering problems get solved, but only for those who know how to ask.
sltr•4mo ago
From my relevant post last week:

> AI now lets anyone write software, but it has limits. People will call upon software practitioners to fix their AI-generated code.

https://www.slater.dev/about-that-gig-fixing-vibe-code-slop/

herdcall•4mo ago
Well, I'm sure we've all seen code produced by human developers that is 10x worse than what my Claude Code produces (certainly I have), so let's be real. And it's improving scary fast.
nickstambaugh•4mo ago
I think the bar has been raised, for sure. There's code I work on from prior seniors that is worse than what our current juniors write. I'm assuming AI is assisting with that, but as long as the PR looks good, it's no different to me.
trenchpilgrim•4mo ago
I've noticed that generally OK design patterns and sticking to idiomatic code has increased while attention to small but critical details remains the same or maybe slightly decreased.
dmitrygr•4mo ago
Hard disagree. Humans fail in ways I know, can predict, and know where to look for. ML coding assistants fail in all sorts of idiotic ways and thus every damn line needs to be scrutinized.
Ekaros•4mo ago
What actually scares me is the idea that with humans you can manage to follow their train of thought. But if LLM just rewrites everything each time, well that is impossible to follow and then there is same work to be done over and over again each review.
scrollaway•4mo ago
> Humans fail in ways I know, can predict, and know where to look for.

You clearly haven't worked with humans.

sherburt3•4mo ago
I can understand how a mediocre SWE thinks and can anticipate what corners were cut, I have no idea what an LLM is thinking.
janderland•4mo ago
This seems like a lack of experience. The more I work with LLMs, the better I get at predicting what they’ll get wrong. I then shape my prompts to avoid the mistakes.
stuaxo•4mo ago
Try working with a bad dev using an LLM.
righthand•4mo ago
I get two types of merge requests nowadays. The first is a traditional piece of code. Something simple like a bit of marketing text to a page or a new react component that adds another css effect to some content. The second type is a long complex merge request, for something more complex than a menu (not really though)…tabs, uses new dependencies, none of the old dependencies, is filled with emdashed code comments about personal dev choices (instead of logic flow or business context), and the core file convention is named after the implementers library choice: `react-tabs`. If I bring up any of these issues with the implementer they tell me “we can fix it later and they need to just get it out”.

The first type of merge request is one that should be generated by an LLM and the second is one that should be generated by a human.

Instead I get neither, but I get efficiency so someone can deliver at the last minute. And so I can go mop up the work later, or my job is hell the next time “we just need to get this out the door”.

THANK YOU LLMS

Kinrany•4mo ago
Clearly LLMs are not the ones to blame
righthand•4mo ago
Yes but are we using technology to enable our worst tendencies?
karl_p•4mo ago
As The Mythical Man-Month says: never ship the prototype. Plan to write one and then throw it away.
foreigner•4mo ago
I have literally never seen that happen in practice. The prototype code always evolves to become the product.
datadrivenangel•4mo ago
If you throw away a boat one piece at a time, have you thrown away the entire boat by the end?
DougN7•4mo ago
Well… yes you have thrown away the entire boat. But I get where you were coming from.
eithed•4mo ago
Ah, the modern ship of Theseus
vaxman•4mo ago
That is essentially how I survived the nuclear winter after the dot-com bubble burst (taking out most of the senior level tech workers across the vast majority of US domestic business --in those days, senior level meant formal training, 20+ years experience and over 40yo with kids in school, mortgages, etc..and when all the jobs go away in an industry for six years, you are forced to crack open your 401K and retire while you figure out something else...there was no coming back for them). The takeaway now is that the informally trained web people who came up without guidance beyond Google Search and forum contributions from Europe but gained control of the industry in the Crash's aftermath will live on as "vibe coders" forever and their "heavy lifting" partners at AWS, GCP and Azure will live on for a while hosting trillion+ parameter LLMs, even as the first wave of American CS graduates since the Crash are about to (finally!) hit their 20 year mark and gain control of tech across all industries (pulling the plug on the "heavy lifting cloud" that they don't need/want to budget for).

But unlike that six year gap during the tech nuclear winter (2000-2006) when you could literally follow those over-confident $10/hr kids around cleaning up one botched effort to port custom Windows apps to LAMP after another, this time it will be different. The LLMs are trained largely on the European-dominated code bases on Github and it's just enough to keep the "vibe coders" out of real bad trouble (like porting a financial application from Visual BASIC into PHP which has different precision floating point resolution between distributions/releases or de-normalizing structured customer data and storing it in KV pairs "because everybody is doing it so relational databases must be obsolete".) The work to cleanup their "vibe coded" mess will not be as intense (especially considering LLMs will help), but there will be a lot more of it this time around and re-hosting it more economically will be a Thing.

Sadly, American businesses will discover they don't need trillion parameter LLMs (due to MoE, quantization, agentic mini-models, etc.) and the supply of acceptable vector processing chips will catch up to demand (bringing prices down for "on prem" deployments) and that "AI snake oil factor" (non-deterministic behavior and hallucinations) will become more than a concern expressed over weekend C-suite golf games and yacht excursions (you know, where someone always gets fired to set an example of what happens when you don't make your numbers). AI had been dead so long that the top C-suites can't even remember the details of how/why it died anymore (hint: you could get fired for even saying "AI" up until the 2000 Crash giving rise to the synonym "ML" as a more laser focused application of AI), just that they don't trust it. The astonishing demonstrations at OpenAI, Anthropic, xAI, Google and Meta are enough to cause C-suites to write a few checks, causing a couple of ramps in the stock market, but those projects by and large are NOT working out due to the same 'ole same 'ole and I fear this entire paradigm will suffer the same fate as IBM Watson. The stock market may well crash again because of this horsepucky even though there IS true potential with this technology, just as with Web 1.0. (All it needs for that is a catalyst event --maybe not Bill Gates throwing a chair, maybe something in the dispute between Sammy and Elon.) Same as it ever was.

minimally•4mo ago
https://archive.is/jX10p
basfo•4mo ago
In my experience, LLM-generated code is only as good (or as bad) as the software engineering skills of the “vibe coder.” A seasoned engineer will not only craft clear, detailed prompts that specify how something should be implemented, but will also review the AI’s output on the fly, correcting major derailments—things like: “Don’t create a new function for that; just modify X to add support for this case.” They’ll even do an initial review of the code before opening a PR.

The real problem arises when non-technical people use an LLM to generate a full project from scratch. The code may work, but it’s often unmaintainable. These people sometimes believe they’re geniuses and view software engineers as blockers, dismissing their concerns as mere technical “mumbo jumbo.”