
Elon Musk and Sam Altman are going to court over OpenAI's future

https://www.technologyreview.com/2026/04/27/1136466/elon-musk-and-sam-altman-are-going-to-court-o...
1•joozio•2m ago•0 comments

There's no such thing as the petrodollar

https://www.ft.com/content/be345914-7b4b-4264-bcbd-6e5e33b798c7
1•helsinkiandrew•2m ago•0 comments

EU tells Google to open up AI on Android; Google says "unwarranted intervention"

https://arstechnica.com/ai/2026/04/europe-could-force-google-to-open-android-to-other-ai-assistants/
3•vrganj•5m ago•0 comments

Show HN: Built a local-first way to make AI context reusable across tools

https://www.proxvanta.com/
1•bonjourmr•9m ago•0 comments

PayPal's $4B stablecoin is mostly held by DeFi yield farmers

https://stablecoinbrief.substack.com/p/paypals-4b-stablecoin-is-mostly-held
1•knivef•10m ago•0 comments

CrowdStrike Linux Agent: Easy way to make it better

1•SilverPlate3•11m ago•0 comments

China's Great Firewall Poisoning the .icu TLD Nationwide

https://www.safwire.net/p/gfw-icu-tld
3•domainers•16m ago•0 comments

Show HN: zot – Yet another coding agent harness

https://www.zot.sh
5•patriceckhart•17m ago•0 comments

Generation Is Required for Data-Efficient Perception

https://arxiv.org/abs/2512.08854
1•E-Reverance•19m ago•0 comments

I built a business idea validator. Now I'm scared mine is the bad idea

https://www.indiehackers.com/post/i-built-a-business-idea-validator-now-im-scared-mine-is-the-bad...
1•SoloVault•22m ago•0 comments

Show HN: Screen time as a binary grid scorecard

https://twitter.com/yarsanich/status/2048076926024057231
4•yarsanich•24m ago•0 comments

MarkNext Specification v1.0

https://github.com/skorotkiewicz/marknext/blob/HEAD/MARKNEXT_SPEC.md
1•modinfo•24m ago•0 comments

Too many meetings? Try this

https://www.leadinginproduct.com/p/how-to-have-fewer-meetings
1•benkan•31m ago•0 comments

USAF Esports Team Wins the 2026 Armed Forces Esports Championship

https://armedforcessports.defense.gov/Media/News-Stories/Article-View/Article/4470862/us-air-forc...
3•nxobject•32m ago•0 comments

BYD Seal 08 debuts with Blade Battery 2.0: 1,000km range, 5-min charging, 684hp

https://electrek.co/2026/04/27/byd-seal-08-blade-battery-2-1000km-range-beijing-auto-show/
2•breve•32m ago•0 comments

CATL says sodium batteries are mainstream-ready, signs 60 GWh deal

https://electrek.co/2026/04/27/catl-sodium-ion-battery-60gwh-energy-storage-deal/
2•breve•34m ago•0 comments

AgentCheck – Pytest for AI Agents

https://pypi.org/project/pygent-test/
2•ash_ai•37m ago•0 comments

GTFOBins

https://gtfobins.org/
40•StefanBatory•37m ago•3 comments

The next step beyond Lovable–where the AI doesn't just build the UI

https://www.extern.co.za/
2•Luncedo•37m ago•1 comments

IMDB introduces mandatory account: User reviews only readable after login

https://basic-tutorials.com/news/imdb-introduces-mandatory-account-user-reviews-only-readable-aft...
4•tokyobreakfast•38m ago•0 comments

Show HN: Modern alternative to Google Dictionary, AI-powered and context-aware

https://chromewebstore.google.com/detail/quickdef-–-ai-dictionary/ioepkncpchchdiookgpkckafhfjcehke
1•hanifrev•41m ago•0 comments

Show HN: Gate – AI workers handle dev tickets in a visual workspace

https://soliddark.net/gate
2•SolidDark•44m ago•0 comments

Cryptography Challenges KalmarCTF 2026

https://blog.zksecurity.xyz/posts/kalmar2026/
4•ahpuh•46m ago•0 comments

Curryvim, the new Neovim distro, that does not try to be VSCode

https://github.com/SyntaxError2505/curryvim
1•SyntaxError2505•58m ago•1 comments

Our response to the April 2026 incident

https://lovable.dev/blog/our-response-to-the-april-2026-incident
1•filleokus•1h ago•0 comments

Barbara Liskov: Data Abstraction, Dijkstra, Distributed Systems

https://www.developing.dev/p/turing-award-winner-data-abstraction
2•signa11•1h ago•0 comments

Show HN: Netflix for Internet Pirates

https://plank.lsreeder.com/
1•lsreeder01•1h ago•2 comments

Building an In-House Lovable

https://engineering.merciyanis.com/blog/going-ai-native-how-we-handed-our-backlog-to-agents
2•axi0m•1h ago•0 comments

Pompeii archaeologists use AI to reconstruct man killed in volcano's eruption

https://www.npr.org/2026/04/28/g-s1-118986/pompeii-archaeologists-use-ai-to-reconstruct-man-kille...
3•razorbeamz•1h ago•1 comments

Show HN: Nat-zero – Scale-to-zero NAT instances for AWS (Terraform module)

https://machine.dev/blog/nat-zero-scale-to-zero-nat-instances/
2•leonardosul•1h ago•1 comments

Vibe Coding Will Break Your Company

https://www.forbes.com/sites/jasonwingard/2026/04/23/vibe-coding-will-break-your-company/
53•sminchev•1h ago

Comments

blurbleblurble•59m ago
"This is what vibe coding is about to expose across businesses. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment."
piloto_ciego•50m ago
I don't know, my intuition since I started doing this software stuff professionally is that most people have dog piss judgment, most people are just making it up as they go, and "well thought out and planned well" is typically the enemy of actually getting anything done.

I don't know, I just feel like, "start building and the customers will tell you where the value is."

chromacity•56m ago
Third evidently AI-generated "AI is bad" story in a day. I'm gonna lose it...
kombookcha•52m ago
Somebody else can spin up some AI-generated "AI is good" stories and post those in response. Maybe somebody will deploy respective agents to do both automatically.

The house always wins.

threatripper•48m ago
Are AI agents posting this fully aware that they are AI? If they are trained only on human material they may not even understand their own true reality.
boelboel•11m ago
I swear there's been like 7 (mostly positive) stories about Mythos on the FT. They add basically nothing.
16bitvoid•39m ago
It's dystopian. I wish we could just roll back to 2022 and pick a different timeline. Anything and everything is either about AI and/or written by AI, and it's all the shittier for it. Software and services are becoming buggy, content quality plowed straight through bedrock, most people use AI to turn off their brains, and the people that care are left drudging through slop and garbage in both their professional and personal lives.

I want off this train to hell. I am truly (not exaggerating) on the verge of abandoning everything to go live in the woods.

loveparade•33m ago
The more AI-generated AI bad stories we get the more likely LLMs will produce more!
coldtea•13m ago
LLMs are told what to produce.

"Write me a 500 word post about how AI is great" and such shit.

What such stories would change is worsen the training data, so that we get more of that style of writing (rather than angle).

daishi55•56m ago
> Dr. Jason Wingard is a globally recognized executive with deep experience across corporate, nonprofit, and academic sectors, specializing in the future of learning and work. He currently serves as Senior Advisor at Harvard University, where he advises trustees, senior administrators, and faculty leaders, and leads a research agenda on workforce transformation and innovation. He is also Executive Chairman of The Education Board, Inc. and Senior Advisor at Social Finance, Inc., providing strategic and visionary consulting while advancing a national research agenda on leadership and workforce development. He most recently served as the 12th President of Temple University, where he held dual tenured faculty appointments as Professor of Management and Professor of Policy, Organizational, and Leadership Studies. Previously, Dr. Wingard was Dean of the School of Professional Studies at Columbia University and Managing Director and Chief Learning Officer at Goldman Sachs. Earlier, he served as Vice Dean of the Wharton School, University of Pennsylvania; President & CEO of the ePals Foundation and Senior Vice President at ePals, Inc.; and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University. An award-winning author, Dr. Wingard has published widely on leadership, learning, and workforce strategy.

Not sure exactly what this guy does or what his expertise is, but I am fairly certain it’s not software development.

coldtea•21m ago
He's responsible for the great success of Silicon Graphics, Inc. (SGI)

> and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University.

sublinear•9m ago
How is that relevant to the question of software development expertise?
coldtea•3m ago
It was a fun jab - as SGI famously tanked in the late 90s.

But SGI also had quite a lot of software, including their OS (IRIX), imaging and 3D modelling libs and tools, and this little thing called OpenGL.

aussieguy1234•55m ago
The faster you go with vibe coding, the more of a mess you'll get yourself into
piloto_ciego•51m ago
This is just patently false in my experience thus far. I mean, I'm "vibe engineering" and know what I'm doing relatively well? But the way this works now is I'm more like an architect than a coder anymore. This means I can do things faster, but it also means it's less fun. But the customer doesn't really care about "fun" - so I do what I've gotta do.

But if anything, I could probably go a lot faster and be fine, it's just my life would be miserable. If you're going to "vibe code" try to remember to actually... you know... vibe.

aussieguy1234•40m ago
My definition of vibe coding is coding without review (for example, a non technical person vibe coding something). In the hands of a competent engineer the AI tools do boost productivity.

But even there, there's a capacity limit on responsibility: you can't have one engineer maintaining large numbers of systems at once, so if you move fast you can still get yourself in trouble even with technical review.

I'd argue that doing vibe coding without a competent engineer reviewing the work is likely to have worse outcomes than drafting your own legal documents without consulting an actual lawyer.

Both are likely to result in nasty surprises in the future.

spoiler•37m ago
The thing is, the development timeline is so compressed that you lose intimate knowledge of the codebase. Like, I don't think humans can form memories that detailed that quickly? Maybe it's just a me problem though. Anyway, when you need to debug or fix stuff, the AI's reasoning will be "welp, makes sense, I suppose" and your mental model of the codebase is now slippery. Eventually there comes a time where at best you can draw an incoherent high-level diagram of the architecture.

And the AI's solution to a problem is generally "more of the same". It rarely looks at fixing design problems.

slopinthebag•23m ago
> I'm more like an architect than a coder anymore

I don't understand this dichotomy. Coding is architecting, you can't divorce these things. In fact that is all it really is. It doesn't matter if you're writing assembly or python.

akmann•55m ago
Is anyone really vibe coding like this? I mean, if someone without any coding skills vibe codes a whole app, they can't expect it to be production ready. I think anybody with common sense should know this, right?
DubiousPusher•51m ago
I think it comes down to your team discipline. It can magnify your sins and your virtues.
fragmede•33m ago
Define "production". You're not scaling to webscale on day one with a vibe coded app, but most apps never reach that anyway.
zmmmmm•48m ago
This article is full of incoherent logic and conflation of different AI risks with one another.
wewewedxfgdf•45m ago
More hysterical overreaction to AI.
2001zhaozhao•40m ago
> Speed without judgement is a liability

So, what's the alternative?

Speed without judgement? (Maybe you'll be fine. Or maybe your business gets run to the ground by spaghetti code piling up beyond any hope for human review and quality controls breaking)

Judgement without speed? (That startup next door led by a 4-people visionary team and a bunch of AIs stomps over your 100-person company in ability to ship)

Judgement + speed at the same time? (layoff most of your employees and keep only the visionaries? how do you even filter for people who can make good decisions?)

slopinthebag•26m ago
> That startup next door led by a 4-people visionary team and a bunch of AIs stomps over your 100-person company in ability to ship

That sounds right but is it actually true? By that I mean shipping faster. First mover advantage is a thing, but it's not the only thing, and that's also not the same as shipping additional features quickly.

I mean, Apple is famous for being purposely late to entire markets, and they're doing pretty well...

This mentality is just "move fast and break things", and just because it's a common trope in the SFBA doesn't make it effective across the board.

2001zhaozhao•10m ago
Note: I am assuming that it is 2027-28 and reliable AI automated coders exist (or the equivalent workhorse AI in your field), which makes implementation time negligible compared to making decisions. The effect is somewhat weaker with present-day-level AIs. I'm also assuming that the 100-person company is very competent with AI outside of making decisions, but that the startup can plan things much faster due to not needing a committee to do so.

Very rough maths:

If your 100-person team still follows collaborative processes to cancel out errors (let's say it takes 10 people a day to decide on a single deliverable's shape), then gives the design to the AI to implement (as we assume the AI can do it without supervision), then you can ship 10 deliverables a day.

At the same time, that 4-person team can have all of them bouncing ideas off of AIs to help them make decisions in rapid fire all day. They'll each individually spend an hour working on a decision, then hand it to an AI. Their decisions are on average as good as your 10-member team meetings: your medium-sized company's decisions sometimes end up suboptimal due to politics, while the startup's decisions are made by individuals, who make the wrong call more often, and I assume these two effects cancel out. In that case, your competitor with 4 people cranks out 32 deliverables a day, assuming the implementation AIs don't have to be supervised at all.

In summary it's not "move fast and break things", it's just "move fast, focus on making decisions, delegate everything else to the AI". Remember that the decisions are all that matters if the AI can do all the implementation.
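The rough maths in that comment can be sketched as a back-of-envelope calculation. All the numbers below are the comment's own hypothetical assumptions (AI implementation cost is negligible, so shipping rate is bounded purely by decision-making capacity), not real data:

```python
# Throughput comparison under the comment's assumptions: the only
# bottleneck is human decision-making; AI handles all implementation.

PEOPLE_LARGE = 100        # headcount of the established company
PEOPLE_PER_DECISION = 10  # committee size needed per deliverable per day

PEOPLE_STARTUP = 4        # headcount of the startup
HOURS_PER_DAY = 8
HOURS_PER_DECISION = 1    # each founder finalizes one decision per hour

def deliverables_per_day_large() -> int:
    # 100 people split into committees of 10; each committee
    # shapes one deliverable per day.
    return PEOPLE_LARGE // PEOPLE_PER_DECISION

def deliverables_per_day_startup() -> int:
    # Each founder hands one decision per hour to an AI, all day.
    return PEOPLE_STARTUP * (HOURS_PER_DAY // HOURS_PER_DECISION)

print(deliverables_per_day_large())    # 10
print(deliverables_per_day_startup())  # 32
```

Under those (generous) assumptions the 4-person startup out-ships the 100-person company roughly 3:1; the conclusion is entirely driven by the assumed committee overhead.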

acron0•24m ago
I think the judgement angle is the only interesting part of this article, and the piece worth pursuing is automating the judgement where possible.
bluesaddoll•37m ago
These articles are such doomsaying, yesterday's clickbait. Again, the worst-case scenario is being introduced as the one that will surely happen to your company.
sminchev•35m ago
I read it in a report: AI amplifies. It amplifies the success of the good professionals and amplifies the failures of the bad ones.

In any case, a whole enterprise solution can't be built with pure vibe coding. A specification is needed: a basis of predefined rules, coding styles, security considerations.

2001zhaozhao•32m ago
> AI amplifies. It amplifies the success of the good professionals and amplifies the failures of the bad ones.

It also worsens the problem in general by making it way, WAY easier for the bad ones to performatively appear good. They'll have the better-sounding promises but if you listen to them you'll crash and burn in a few years. This doesn't even have to be intentional, just someone technically ignorant channeling AI sycophancy while simultaneously playing politics (i.e. promotionmaxxing while delegating ideas to AI) will have the problematic effect.

netcan•31m ago
So the article isn't very good but the vibe coding debate is pretty interesting.

This is how I'm thinking about it: in a scenario with increased opportunity and risk... You've gotta know where you stand.

First question: how much is more software actually worth to you?

This is one with a lot of self-deception. Software development is expensive. Companies have to-do lists and wishlists and roadmaps. They have an A/B testing system and a productivity mindset.

But... if LinkedIn, Salesforce, or whoever really did have ways of producing software to make money... they would have done it already. Remaining opportunities follow a diminishing marginal value curve/cliff.

Imo, software development isn't necessarily a bottleneck. So... opportunity is limited and risk is the bigger deal.

The opportunity is at the upstart trying to bootstrap feature parity with Salesforce.

If you have no customers yet... you can unfetter the vibe and see if it works.

Imo companies need to revisit google's early days. Let a thousand flowers bloom. 20% time. If you unleash capable people and give them tokens .. That's a good way of searching for opportunities.

The thousand flowers died at Google because they had reached a point where opportunities are not everywhere. The best ideas had been discovered and also... the markets big enough to move Google's dial are few. There aren't many $100bn markets.

There's no way to do vibe coding safely, at scale, currently.

tossandthrow•20m ago
> how much is more software actually worth to you.

A really misunderstood vibe coding task, especially in more corporate settings, is code removal and refactoring.

I think this is the fundamental misunderstanding about agentic development: people only see it as a tool to add code.

stingraycharles•11m ago
This smells like BS to me, and I have a bird’s eye view into several enterprises and startups.

LLMs are not being used for code removal or refactorings, it’s either to “hopefully unblock” this large project that has been behind deadline for 12 months, or to just speed up development (somewhat).

coldtea•19m ago
>The thousand flowers died at Google because they had reached a point where opportunities are not everywhere.

It died because Google reached the enshittification penny pinching rent-seeking stage.

Incipient•31m ago
I'm sure a vibe coded internal or external application WILL break a company. The thought process is however, out of 10 companies:

- 2 won't use AI at all and simply be left behind and stagnate (or go bust)

- 2 will partly use AI, and maybe keep up, maybe not

- 1 will go nuts, vibe-code an entire app, and explode (see the Tea app or whatever)

- 4 will have an inefficient app, suffer reputational damage, lose some money, or similar, but probably survive

- 1 will hit the jackpot and get a 100M ARR company with 4 people.

Stats are of course completely made up, but you get the point.

2001zhaozhao•25m ago
> 1 will hit the jackpot and get a 100M ARR company with 4 people.

I will point out that at the point where you get an 100M ARR it seems worth it to hire more people regardless.

But I'm guessing that the bar to be hired will be EXTREMELY high, because IMO the best people to hire in a future heavy-AI-automation era would basically be founder-level visionary leaders who are also subject matter experts who can consistently make good decisions, and you'd give them $1M+ salaries in exchange.

If you have 100M ARR you can probably afford like 30 of these employees (and the probably exorbitant recruiting fees required to find them) and have them command AI all day. So your company will be extremely small in headcount but still more than 4 people.

(oh and how will this affect wealth inequality? i prefer to not think about it)

NitpickLawyer•25m ago
> 4 will have an inefficient app, suffer reputational damage

Have we been living in different realities? I can't remember any example of companies in the past 10 years that have suffered reputational damage related to their inefficient apps. And there have been plenty of inefficient apps...

amonith•18m ago
I mean, a lot do get reputational damage (e.g. a lot of people hate Jira because of how slow it is, or Microsoft Teams, same story) - it's just that nothing comes of it, so "suffered" is perhaps the wrong word here. People curse them and still use them.
Incipient•18m ago
Sorry there should have been an 'and/or' clause in there.

By reputational I was thinking of leaking data, or generating wrong information for users, etc.

freekh•14m ago
Sonos?
jvdp•14m ago
Sonos.
coldtea•18m ago
>- 2 won't use AI at all and simply be left behind and stagnate (or go bust)

But why would they? As if their software being made faster is the differentiator?

In my career as a consumer (lol), choice was never about that. It was about the business proposition, pricing, quality of implementation, guarantees the company is gonna be there long term, them not being scumbags, and so on.

If anything, software churn put me off, especially when it came at the cost of messing with my established use, or stability.

donatj•14m ago
I think it's more complicated than that.

Anything someone can vibe code that gains any level of mild traction can then be easily duplicated by all their competitors, in a fraction of the time, because the actual hard part, determining the product's edges, has already been done for them.

Animats•21m ago
The question is, when it screws up, who gets blamed, and who pays. If it's the customer, and you can afford to lose a small fraction of customers, it may be worth it. It's just another form of crappy customer service. If it's internal, and it's all output, no input, and the internal organization doesn't really need that info that badly, that might work out.

But give it the authority to do something and there's real trouble.

OsrsNeedsf2P•20m ago
Obligatory "This is not an article by Forbes staff, and has a reputation bar so low it can't be used on Wikipedia"
immanuwell•19m ago
but the villain here isn't the marketing manager shipping fast, it's the leadership that clapped instead of asking the hard questions
slopinthebag•14m ago
I feel like there are a lot of really reductive and oversimplistic arguments being made on both sides here. Vibe coding won't necessarily break your company, and rejecting AI similarly won't necessarily leave you behind. Neither the speed of development nor the quality of software seems particularly correlated with business success imo. Plenty of businesses exist which either ship slower than their competitors or produce much lower quality software, oftentimes both (hello, Microsoft!). Is it crazy to think other things matter way more?

Like, is it wrong to think the variance in both velocity and quality between successful companies is just as large if not larger than the delta between AI usage and no AI usage?

What about a conservative approach to AI adoption, looking for a moderate boost in velocity but maintaining most existing quality? Would that not be ideal? Or might it depend on the specific market the company operates in?

pu_pe•5m ago
> The bottleneck in the AI era is not production. It is discernment.

> The right question to ask after a vibe-coded prototype fails is not what did the AI do wrong. It is what did our process miss.

> That is a governance story, not a software story.

> The Question Is Not Adoption. It Is Readiness.

> The right question is diagnostic, not strategic.

I don't know if AI will fully replace programmers, but it has already replaced writers of this type of bullshit puff piece.