
Quantum computers could have a fundamental limit after all

https://phys.org/news/2026-03-quantum-fundamental-limit.html
1•g-b-r•38s ago•0 comments

Sam, the 'Other' SED from the 80s

https://julienlargetpiet.tech/articles/sam-the-other-sed-from-the-80s.html
1•random__duck•45s ago•0 comments

The Project I Kept Postponing (AI Didn't)

https://sami.eljabali.org/the-project-i-kept-postponing-ai-didnt/
1•samieljabali•1m ago•0 comments

Think Tank, Idea Collaborate

https://thousandmindsai.com
1•wesley-Alan•2m ago•0 comments

Ukraine's drone defense tech reshapes combat as warfare evolves [video]

https://www.youtube.com/watch?v=mnX7hE-9OK8
1•teleforce•4m ago•0 comments

Show HN: A war-strategy game played by AI agents

https://agentempires.app/
1•ttamslam•9m ago•0 comments

Which Porsche 911 Generation Is the Best Investment in 2026?

https://pistonalpha.com/articles/porsche-911-investment-guide-2026
1•magrix•9m ago•0 comments

Tell HN: Bug in Claude Code CLI is instantly draining usage plan quotas

https://github.com/anthropics/claude-code/issues/38335
1•nikhilgk•10m ago•1 comments

Ootils – An open source supply chain engine designed for AI agents (not humans)

https://github.com/ngoineau/ootils-core
1•ngoineau•14m ago•0 comments

A whirlwind tour of systemd-nspawn containers (2025)

https://quantum5.ca/2025/03/22/whirlwind-tour-of-systemd-nspawn-containers/
1•indigodaddy•16m ago•0 comments

Show HN: Vocab extractor for language learners using Stanza and frequency ranks

https://huggingface.co/spaces/vladvlasov256/vocab-nlp
2•crivlaldo•17m ago•0 comments

Can this technology end drone warfare? [video]

https://www.youtube.com/watch?v=unraT22a4zY
1•teleforce•17m ago•0 comments

Open-Sourcing Our Mail Client Mono Mail

https://github.com/erickim20/monomail-desktop
2•rhksnrla•24m ago•2 comments

Arena Zero Ep.1 [video]

https://www.youtube.com/watch?v=qqcH-1Rk-ow
2•thewanderer1983•25m ago•0 comments

M4 and M5 Macs cannot run 4k screens in HiDPI mode – limited to 3.3k

https://github.com/waydabber/BetterDisplay/discussions/4215
3•smcleod•26m ago•1 comments

Build123d: A Python CAD programming library

https://github.com/gumyr/build123d
2•Ivoah•27m ago•0 comments

Age verification, child protection and economic power

https://www.cyberverso.net/age-verification-child-protection-and-economic-power/
2•MatteoFrigo•29m ago•0 comments

TeamPCP Supply Chain Campaign: Update 002

https://isc.sans.edu/diary/32838
2•jruohonen•30m ago•0 comments

Samsung Magician disk utility takes 18 steps and two reboots to uninstall

https://chalmovsky.com/2026/03/29/samsung-magician.html
2•chalmovsky•30m ago•0 comments

Things I learned building a model validation library

https://wilsoniumite.com/2025/01/24/things-i-learned-building-a-model-validation-library/
2•Wilsoniumite•31m ago•0 comments

AI isn't killing jobs, it's 'unbundling' them into lower-paid chunks

https://www.theregister.com/2026/03/24/ai_job_unbundling/
6•gnabgib•34m ago•1 comments

Para-Academic Techno-Philosophy

https://elftheory.substack.com/p/para-academic-techno-philosophy
2•lentoutcry•34m ago•0 comments

Generating one token at a time is a blessing in disguise

https://kachkach.com/blog/generating-one-token-at-a-time-is-a-blessing-in-disguise
2•halflings•36m ago•1 comments

The Acceleration of Addictiveness (2010)

https://paulgraham.com/addiction.html
2•microsoftedging•37m ago•0 comments

Show HN: OpsScaleIQ – The operational intelligence OS for franchise operators

https://opsscaleiq.com
2•dsptl•37m ago•0 comments

Personal story: BR airlines sites sucks. Struggling to cancel seat selection

https://blog.thisago.com/story/20260329-cancellingFlightSeatSelection.txt
2•thisago•37m ago•0 comments

Show HN: Tabical – Tinder-style city micro-itineraries, personalized by swipe

https://tabical.com/
4•akhilpotturi•39m ago•0 comments

Hundreds of strangers flock to San Francisco beach to dig a really big hole

https://www.sfgate.com/sf-culture/article/hundreds-strangers-flock-sf-beach-dig-really-big-221583...
4•Stratoscope•40m ago•0 comments

Ask HN: What is TensorFlow still good for now?

1•asxndu•42m ago•1 comments

What category theory teaches us about dataframes

https://mchav.github.io/what-category-theory-teaches-us-about-dataframes/
6•fanf2•44m ago•0 comments

The "Vibe Coding" Wall of Shame

https://crackr.dev/vibe-coding-failures
113•wa5ina•1h ago

Comments

wa5ina•1h ago
A curated directory of documented incidents where AI-generated and vibe-coded software failed in production.
g947o•49m ago
Curated? More like hallucinated.
aimadetools•1h ago
That's already a big list
cratermoon•1h ago
I cannot read dark grey text on a black background
monksy•1h ago
Thought experiment here: what about the bugs that humans have written? (I'm not excusing or justifying this to say AI coding is better.) At one point we shamed companies for producing sloppy work and being sloppy with their engineering practices. All of a sudden, in the last 10 years, we accepted companies' excuses of "oh well, we don't care and we're garbage" (a lot of Amazon's tone-deaf documentation/surprise bugs, Google's head-scratching disconnect from the user, etc.).

But I think this is a great thing to show that they're pushing to outsource coding to a bot and to shame them that their plan isn't working out so well as they're trying to force people to believe.

I think it may help if we start personalizing these trends with the people who are amplifying them, e.g. Jassyslop, Siemiatbot (the Klarna CEO was bold enough to brag he dropped 80% of a role for AI), etc.

bigstrat2003•1h ago
Honestly, we should shame companies for poor engineering whether humans are directly doing the work or handing it off to an LLM.
monksy•1h ago
I agree with you. However, business individuals have decided that they're "a better judge" of our practices, and they've used financial and legal coercion to get their way.
tayo42•1h ago
Everything is blameless, you can't do that to humans lol
doug_durham•1h ago
“Vibe coded”? I doubt that there is the documentary evidence that the code in these systems was never touched by a human. At best this is a list of code where AI tools were used in development. To be honest if you just created a list of all outages in all companies and systems you’d probably have a better list since AI tools are ubiquitous.
bigstrat2003•1h ago
> AI tools are ubiquitous.

Only among people who don't value the quality of their output. There are, fortunately, many who do value quality and are not using AI tools until those tools get to the point where they can usefully contribute.

simonw•50m ago
> Only among people who don't value the quality of their output.

I value the quality of my output and I make extensive use of AI tools.

That's why the original definition of "vibe coding" is useful: creating code with AI tools without reviewing or caring about the quality of that code.

It's also possible to use AI tools as part of a responsible engineering process that is intended to produce production quality software.

eloisant•47m ago
Have you used a state-of-the-art tool (e.g. Claude Code) in the past 6 months? If you've only tried free tools, or last tried a year ago, you really need to check again.

AI tools can absolutely contribute usefully. I can't keep count of the times an AI pointed out an edge case I hadn't thought about, then helped me write the fix and the test for the issue.

I'm not vibe coding, as I'm reviewing the code, but saying they can't be useful means you haven't taken the time to look at the state of them recently.

onion2k•19m ago
Isn't it odd that you wrote your comment with AI then!?

Ha, gotcha, AI slop poster!

I know you didn't, but this is where we'll end up if people just write off everything as 'bad because AI' instead of critically assessing the quality of something on its own merits, rather than on the (very ironic) 'vibe' that it was generated rather than written.

Thorrez•1h ago
For CVE-2026-0755, that's a vulnerability in gemini-mcp-tool. gemini-mcp-tool's Github repo says "This is an unofficial, third-party tool and is not affiliated with, endorsed, or sponsored by Google." but this list shows the Google logo next to the vulnerability.

Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.

https://github.com/jamubc/gemini-mcp-tool

Disclosure: I work at Google, but not on anything related to this.

joe_mamba•56m ago
>Also, it's not entirely obvious to me that the vulnerability was introduced by vibe coding.

IDK why people act as if vibe coding invented the software bugs that lead to vulnerabilities, as if those weren't already being produced by human programmers.

bdcravens•40m ago
The same reason some use crime committed by illegal immigrants to push action, while ignoring the fact that citizens are more likely percentage-wise to commit those same crimes. It's confirmation bias at the least, and intellectual dishonesty at the worst, but either way, they want their worldview to be validated.
Gud•32m ago
I know this is extremely off topic, but illegal immigrants are far more likely to commit crimes than citizens, not that this has anything to do with software bugs...
simonw•19m ago
You got that exactly the wrong way round.

Here's one set of numbers from the CATO institute: https://www.cato.org/policy-analysis/illegal-immigrant-murde...

The only way your statement holds up is if you treat the act of existing while undocumented as a crime for this comparison, in which case sure - it's a tautology.

bdcravens•19m ago
I probably won't comment further, since as you said this is very off-topic (I only meant to draw out an analogy as to why discussions about AI tend to be ideologically skewed), but every statistic I've seen shows far lower crime rates among illegal immigrants versus citizens (aside from the statutory crime of being in the country illegally).
g947o•51m ago
The first link claims the 6-hour outage wiped out 99% of order volume. I went to the "source" and found an (AI-generated?) ad by a company that wants to sell a product, and I cannot find the 99% number anywhere in it.

This whole website and everything around it are almost ironic.

NewsaHackO•39m ago
Yea, I was about to comment the same thing. I have noticed a lot of people weaponizing people's hatred of AI/slop and using rage baiting to drive views. No doubt someone would have taken that "Amazon lost 6M orders due to slop!" entry at face value and come away thinking it was true.
vunderba•15m ago
This site, especially if you look at all the previous posts from this domain, is almost assuredly AI generated.

One of the "fun" hallmarks of many of these LLM assisted websites is that they seem to completely disregard basic accessibility (especially Web Content Accessibility Guidelines [1]). That small dark gray subtext on a black background is just horrific.

[1] - https://webaim.org/resources/contrastchecker
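The contrast complaint is checkable. A minimal sketch of the WCAG 2.x contrast-ratio formula in Python; the color values below are hypothetical stand-ins for the site's gray-on-black palette, not measured from the page:

```python
def relative_luminance(r: int, g: int, b: int) -> float:
    """Relative luminance of an sRGB color, per the WCAG 2.x definition."""
    def linearize(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Contrast ratio between two colors; WCAG AA wants >= 4.5:1 for body text."""
    lighter, darker = sorted(
        [relative_luminance(*fg), relative_luminance(*bg)], reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical approximation of dark gray text on a black background
print(f"{contrast_ratio((85, 85, 85), (0, 0, 0)):.2f}:1")  # well below 4.5:1
```

Anything under 4.5:1 fails AA for normal-size body text, which is consistent with the complaints upthread.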

rvz•1h ago
This is the web3isgoinggreat equivalent for crypto, but for vibe coding with AI.
porcoda•1h ago
In my experience over the last couple years, lists like this won’t move the needle at all. The AI zealots reject anything that calls into question the AI stuff, usually appealing to “just wait, better models/agents/guardrails imminent” and claiming that anecdotal productivity gains are worth the risk. The people concerned about AI already are concerned and just fall back to “I told you so”. Unfortunately the decision makers seem to still be following the zealots promising wondrous productivity, profit, and a future full of flying cars.
bluegatty•1h ago
If by 'zealots' you mean the vast majority of developers, who are using AI tools in one way or another.

The AI is already substantially better than most humans at a huge spectrum of at least narrow tasks. Those 'skills' will expand in scope; the evidence is overwhelming and unequivocal.

Within 12 months it will be considered a 'security concern' not to have AI doing at least some degree of autonomous review.

It's very easy to overstate the impact of AI (and sometimes it's annoying), but it's just unreasonable to be in 'denial' at this stage.

The only real concern is how, when, and with what kind of oversight we use the new tools, not whether they are used.

voxl•12m ago
Can you list a few tasks that AI is better at than other tools? Not humans, mind you, because that is unimpressive; I mean other deterministic tools.

For example, I'd rather use a calculator to do calculations than ask an LLM to do it. I'd rather use LanguageTool for grammar than ask an LLM to do it. I'd RTFM rather than have an LLM summarize it.

simonw•1h ago
Why is the LiteLLM incident on there? The linked article for that one is a 404.

I didn't read any credible arguments suggesting that was caused by vibe coding. They had their PyPI publishing credentials stolen thanks to an attack against a CI tool they were using.

Plus the linked article for the Amazon outage is https://d3security.com/blog/amazon-lost-6-million-orders-vib... which appears to be some other vendor promoting their product without providing any details on what happened at Amazon.

g947o•49m ago
My impression is that the first item on the website should be the site itself.

Barely anything on the site makes sense if you look at them closely.

We call that "slop", the last time I checked.

scientism•40m ago
Indeed. The joke is that the website itself is vibe coded.
mrkeen•34m ago
> Why is the LiteLLM incident on there? The linked article for that one is a 404.

-> [Endor Labs] https://www.endorlabs.com/learn/teampcp-isnt-done

-> On March 24, 2026, Endor Labs identified that litellm versions 1.82.7 and 1.82.8 on PyPI contain malicious code not present in the upstream GitHub repository. litellm is a widely used open source library with over 95 million monthly downloads. It lets developers route requests across LLM providers through a single API.
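Given concrete bad version numbers like these, checking your own environment is mechanical. A minimal sketch (not an official tool; the known-bad set is taken from the advisory quoted above) that flags a locally installed litellm:

```python
from importlib.metadata import PackageNotFoundError, version

# Releases the quoted advisory reports as containing malicious code on PyPI
COMPROMISED_LITELLM = {"1.82.7", "1.82.8"}

def is_compromised(ver: str) -> bool:
    """True if this litellm release is on the known-bad list."""
    return ver in COMPROMISED_LITELLM

def audit_installed_litellm() -> str:
    """Report whether the locally installed litellm is a known-bad release."""
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        return "litellm is not installed"
    if is_compromised(installed):
        return f"WARNING: litellm {installed} is a known-compromised release"
    return f"litellm {installed} is not on the known-bad list"

print(audit_installed_litellm())
```

A real deployment would pin exact versions in a lockfile and verify hashes at install time rather than auditing after the fact; this only shows how narrow the check is once an advisory names specific releases.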

NewsaHackO•20m ago
That doesn't answer how stolen credentials are related to AI-assisted coding.
benatkin•18m ago
It seems like blogspam. It's curated, according to an author's comment, but it treats incidents verified by a security organization, like Vite's, just the same as ones like the blog post about Claude calling a Terraform command. And this is on a site which appears to sell other AI-generated content for a subscription.

Edit: it appears the traditional content is free. What is paid is an AI interview pack, which is basically the same content run through some tokens to present it. They could be cheap Haiku tokens. Also, it isn't a subscription; it's a one-time purchase of packs. My bad.

Dig1t•1h ago
I kind of think that the "Human Coding" Wall of Shame would be quite a bit larger and contain examples that are every bit as egregious.
dgb23•45m ago
I don’t think that’s the point of showcasing these issues.

The specific point is that you cannot prompt your way to reliable software (AKA vibe coding), just as you cannot reach the same goal by gluing together Stack Overflow snippets without understanding them.

Dig1t•33m ago
I understand that, but the interesting bit is to compare how it performs relative to the average human coder. We can point out specific flaws for eternity, but if it makes 1% fewer mistakes, or allows humans to code faster without increasing the average number of mistakes, then I'd say that's still providing value. I feel like just enumerating the different mistakes it has made is biased against it, because it leaves out a comparison to the alternative.

Sort of like showing off self-driving car crashes. You can spend all day listing the crashes and showing people how it has problems, but if it's statistically safer than the average driver it would save thousands of lives per year to deploy it anyway even if it's not perfect.

bluegatty•1h ago
'vibe coding' is too loose a term. Everything will be generated by AI in the very near future, and it will range from 'fancy auto complete' to 'entirely autonomously generated' with many nuances and subtleties in between.
jabwd•57m ago
If you mean by everything "stuff that has been done before and no one cares about" then, yeah, probably.

New code will still need to be written though.

bluegatty•50m ago
No, I mean everything.

It's not reasonable to suggest that AI is only going to repeat older patterns that have been trodden before, or 'things that don't matter'.

AI will be writing most new code, by far.

Without even getting into complicated arguments about 'creativity' - the AI is an encyclopedia of best practices, and can think a couple of steps ahead for most things you'll ever want to do.

Like pro chess players thinking they're going to beat the algo with some kind of fancy human creativity.

Developers' roles are changing very fundamentally: you're now half a layer of abstraction above the code, and you're not going to write it better than AI (in most cases), any more than a human will be better at sawing wood than power tools. And yet, carpenters still exist.

jabwd•46m ago
Good luck
bluegatty•40m ago
People riding horses in the age of automobiles are the ones who need 'luck'.
apgwoz•24m ago
The key to this argument is that we won’t need to rely on Anthropic/OpenAI soon — will they exist in the same way they do today in 12-18 months? The “open” models are getting better and better, and people are figuring out ways to make inference run on lesser hardware. It already might be viable for people that don’t expect “instantaneous” and are doing more hybrid development.

But you’re also never going to convince the people who still only run vi on the Linux console, without Xorg…

shermantanktop•1h ago
So this is a list of incidents where random people on the internet speculated about rumors that AI was to blame. The companies typically deny it. Insiders who know the details are generally unable to comment due to how large companies manage PR.

So basically Reddit.

ares623•1h ago
I love dunking on vibe coding as much as the next guy, but is there actual evidence that this is the case for most of the entries? IMO that would make the point even stronger.
nubinetwork•1h ago
What about all the o365 outages and windows bugs caused by ai written code?
tonymet•1h ago
AI might have been an opportunity to take engineer hubris down a notch, perhaps to reassess the excesses (bad performance, bad UX, poor reliability, costly development and operations, etc.). Instead of reflecting, we decided to shame AI as vibe coding.

How much abysmal code and how many abysmal products have we all shipped? Exploitative, clumsy, dangerous, vulnerable? What was our excuse?

I find the entire anti-vibe coding movement to be terribly tacky and judgmental.

We have an incredible tool that could 10-100x productivity. We should be using it to fix all of the terrible software we've made over the past 20 years. Instead there are three camps: people building stuff, people hyping AI, and people shaming the first two.

Sad, really.

bluefirebrand•53m ago
> How much abysmal code and products have we all shipped?

> We should be using it to fix all of the terrible software we’ve made over the past 20 years. Instead there are 2-3 camps. People building stuff, people hyping AI and people shaming the first 2.

This seems like an odd take. The pros who are using and hyping AI are not fixing all of the crap we put out over the last 20 years. They are flooring the gas pedal on the amount of crap being shipped.

I don't think anyone except the most die-hard AI lovers truly believes it is producing high-quality work on balance. It is absolutely producing more, but worse, output than we've ever seen before.

Even if it is capable of producing high-quality work, you have to realize that most people using it are not capable of getting it to produce work of that quality. Nor do they seem to really care to.

tonymet•46m ago
I don’t disagree. But what is the detractors’ goal? What will the shaming accomplish? The tools are here and can be used for good or ill.

Think about the 90s PC revolution opening up developer opportunities. There were commercial devs and open source devs. The open source devs decided to put the new resources and tools to use to evangelize computing, and in many ways they won.

We have new tools now, and can put them to good use. Moaning from the sidelines is a losing strategy.

bluefirebrand•19m ago
> Think about the 90s PC revolution, opening up developer opportunities

I don't think the general arc of computing since the 90s has been good for humanity

As a detractor, that's my goal. I want to undermine this garbage technology that is actively making life worse for the majority of people while enriching a vanishingly small segment of humans at our expense

wulfstan•26m ago
I am currently wrestling a vibe-coded codebase into a shippable state and we should call these tools out for what they actually are - technical debt generators.
22hsG•58m ago
Simon Shillison appears, comments that hurt his income stream are flagged.
cjrd•53m ago
now do one for human-coded incidents.
OsrsNeedsf2P•46m ago
I would be interested in a comparison of SLOs, "Before and after" adopting AI
crazygringo•46m ago
Is this meaningful at all, without a control?

How often does software fail in production with human-written code? How many times has a production failure been avoided because an LLM didn't make a typo or mistake that a human would have?

This is pushing an agenda. It's not measuring anything meaningful.

extrabajs•35m ago
A control? This is just a list of incidents, not an experiment.
crazygringo•32m ago
The "Why this matters" section at the bottom is clearly drawing conclusions as if it were an experiment.
lucasfin000•16m ago
This is definitely the right question. A list of failures without any baseline won't tell you anything. You would need the same exercise for human-written code at a comparable scale before drawing any conclusions at all. Without it, it's just confirmation bias.
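The baseline argument can be made concrete with a toy calculation (all numbers below are hypothetical, purely for illustration): a raw incident count says nothing until it is normalized by how much code each cohort actually ships.

```python
def incidents_per_kloc(incidents: int, kloc_shipped: float) -> float:
    """Incident rate normalized by thousands of lines of code shipped."""
    if kloc_shipped <= 0:
        raise ValueError("kloc_shipped must be positive")
    return incidents / kloc_shipped

# Hypothetical: AI-assisted code shows more incidents in absolute terms,
# but also ships far more code, so its *rate* can still be lower.
ai_rate = incidents_per_kloc(120, 4000)    # 0.03 incidents per KLOC
human_rate = incidents_per_kloc(90, 2000)  # 0.045 incidents per KLOC
print(ai_rate < human_rate)  # True: the longer failure list has the lower rate
```

This is exactly why a "wall of shame" can grow while the underlying rate improves, or vice versa; without the denominator, the list by itself decides nothing.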
gobdovan•45m ago
Coding with AI is kind of like obesity in modernity: having tons of resources is the goal, but once you get there, you end up in a system you're not really adapted to.

Personally, I don't care that much about org incentives (even though they obviously matter for what OP posted) but more about what it does to my thinking. For me, actually writing code is what slows my brain down, helps me understand the problem, and helps me generate new ideas. As soon as I hand off implementation to an LLM (even if I first write a spec or model it in TLA+) my understanding drops off pretty quickly.

dzonga•15m ago
why is this flagged ?