frontpage.

Show HN: Designer creates first app, with Claude Code

https://bendansby.com/cest/
1•webwielder2•1m ago•0 comments

Vynly – a social feed only AI agents can post to

https://vynly.co
1•nftdude2024•1m ago•0 comments

Biotin and FBXW7 are essential to bypass glutamine addiction

https://www.cell.com/molecular-cell/fulltext/S1097-2765(26)00097-3?_returnURL=https%3A%2F%2Flinki...
1•bookofjoe•2m ago•0 comments

Waving Country Flags in the Terminal

https://old.reddit.com/r/commandline/comments/1srq130/waving_country_flags_in_the_terminal_inspir...
1•shmulc•2m ago•0 comments

Compare any two versions of a package (Rust, Go, Ruby, JavaScript)

https://unpackage.dev
1•leighmcculloch•3m ago•0 comments

Show HN: F0rtune500 – No one can prove you *didn't* work at these companies

https://f0rtune500.app/
1•theseidel•5m ago•1 comments

Ukraine Proposes Renaming Part of the Donbas in Trump's Honor

https://www.nytimes.com/2026/04/21/world/europe/donnyland-ukraine-donbas-trump.html
2•geox•5m ago•0 comments

Japan lifts ban on lethal arms exports for first time since second world war

https://www.ft.com/content/539bdbe1-a535-44fb-aa7a-2c1fe1828adf
1•mikhael•5m ago•0 comments

Wired has published an extraordinarily inaccurate article about GrapheneOS

https://xcancel.com/GrapheneOS/status/2046600100344950809#m
1•hnburnsy•5m ago•0 comments

Why AI alone cannot fix social problems

https://restofworld.org/2026/ai-social-good-humans/
1•Brajeshwar•7m ago•0 comments

APIResponse

https://playwright.dev/docs/api/class-apiresponse
1•gorox•7m ago•0 comments

Advanced Packaging Limits Come into Focus

https://semiengineering.com/advanced-packaging-limits-come-into-focus/
1•PaulHoule•8m ago•0 comments

Will the lunar spacesuits be ready in time?

https://arstechnica.com/space/2026/04/whats-the-deal-with-spacesuits-for-the-moon-will-they-be-re...
1•LorenDB•9m ago•0 comments

I built an AI reviewer that analyses code as a PM and a system architect

https://github.com/OneSpur/scanner
1•Kapitsyn•9m ago•1 comments

Show HN: Zero-allocation embedded security in Rust (fits in 256KB Flash)

https://github.com/craton-co/craton-shield
1•victor-craton•9m ago•0 comments

Squarespace sold me a domain then threatened my account for owning it

https://mattrb.com/blog/squarespace-threat/
1•mattrb•10m ago•0 comments

As Oceans Warm, Great White Sharks Are Overheating

https://e360.yale.edu/digest/great-white-sharks-climate
3•speckx•10m ago•0 comments

How to Corrupt an SQLite Database File

https://sqlite.org/howtocorrupt.html
1•thunderbong•11m ago•0 comments

P4WNED: Insecure defaults in Perforce expose source code across the internet

https://morganrobertson.net/p4wned/
2•pale_delirium•11m ago•1 comments

From a flea-market Siemens S62 to an AI phone

https://medium.com/@fabryz/from-a-flea-market-siemens-s62-to-an-ai-phone-204b35eacc12
1•Fabryz•12m ago•0 comments

AMD Ryzen 9 9950X3D2 Dual Edition: Tons of cache for tons of dollars

https://arstechnica.com/gadgets/2026/04/amd-ryzen-9-9950x3d2-dual-edition-review-tons-of-cache-fo...
1•LorenDB•13m ago•0 comments

Why images use 3x more tokens in Claude Opus 4.7

https://www.claudecodecamp.com/p/images-cost-3x-more-tokens-in-claude-opus-4-7
2•aray07•13m ago•0 comments

Medicare for all is the best hope for US healthcare (2025)

https://www.theguardian.com/business/2025/nov/09/healthcare-medicare-reform
1•thelastgallon•14m ago•1 comments

Show HN: An AI workflow to automate your LinkedIn job search

https://gabidev.gumroad.com/l/grasshopper
1•gab18•14m ago•0 comments

Jujutsu / jj megamerges for fun and profit

https://isaaccorbrey.com/notes/jujutsu-megamerges-for-fun-and-profit#user-content-fnref-1
2•fanf2•17m ago•1 comments

2026 State of Kubernetes Resource Optimization: CPU at 8%, Memory at 20%

https://cast.ai/blog/2026-state-of-kubernetes-resource-optimization-cpu-at-8-memory-at-20-and-get...
2•BlackPlot•17m ago•0 comments

Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

https://www.wired.com/story/ai-generated-maga-girls/
3•Aboutplants•17m ago•0 comments

The Lempert Report on Substack

https://phillempert.substack.com/p/from-beef-is-back-to-powermac-and
1•plempert•17m ago•0 comments

Show HN: Read.place/view – Reader view for any article and TL;DR summary

https://readplace.com/view
1•fagnerbrack•18m ago•2 comments

The mail sent to a video game publisher

https://www.gamefile.news/p/panic-mail-arco-despelote-time-flies-thank-goodness-teeth
1•colinprince•20m ago•0 comments

Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return

https://techcrunch.com/2026/04/20/anthropic-takes-5b-from-amazon-and-pledges-100b-in-cloud-spending-in-return/
71•Brajeshwar•1h ago

Comments

ozgrakkurt•1h ago
So they are basically taking debt from Amazon, which is not a financial institution?
ferguess_k•1h ago
Everyone eventually wants to be a landlord and a banker (essentially a debt landlord).
spwa4•1h ago
> At the heart of this deal is Amazon’s custom chips: Graviton (a low-power CPU) and Trainium (an Nvidia competitor and AI accelerator chip). The Anthropic deal ...

Yeah, totally not desperately seeking investment to keep the party going ...

bombcar•30m ago
It does seem like the tempo and volume of the music is getting louder and louder as the number of chairs is subtly decreasing, doesn’t it?
brianjlogan•11m ago
Because also look at the bond market... It's all coming to a crescendo including the global economic recession indicators which will be a cold sprinkler on the whole party.

Gemma4 being able to run on commodity hardware I think is the real win out of this. Pop the bubble. Settle the craziness and the claws. Let scientists and engineers tinker and improve in the background. Hopefully we can have GPUs be affordable for gaming again although I'm starting to think that will never happen.

zaevlad•1h ago
Hope this will let them boost their capacity and offer higher limits on code models...
jinushaun•1h ago
Isn’t this kind of like the Nvidia/OpenAI deal? Just circulating debt/money
maksimov•1h ago
And I think Oracle got into it as well, and later suffered
iot_devs•1h ago
Can someone explain to me what the expectations are for these AI labs?

I mostly see their products as commodities at this point, with strong open-source contenders.

Eventually it will become hard to justify the premium on these models.

cma•1h ago
Everyone using Claude Code on a personal subscription is opted in by default to having their data trained on. Private troves of data like this are seen as potentially ending up in a winner-take-all scenario: more data, better models, attracts more users, results in more exclusive data (what Altman calls the data flywheel).
spenvo•1h ago
PSA: this is true (the defaults), but there's a "Help improve Claude" setting that you can disable here: https://claude.ai/settings/data-privacy-controls. It's my understanding that, as long as this is off, Anthropic does not train on Claude Code conversations, inputs, or outputs. If anyone knows otherwise, please say so and provide a link if possible.
johnbarron•11m ago
>> Everyone using Claude code on a personal subscription is default opted in to getting their data trained on

This is completely untrue if you use AWS Bedrock, and that applies both to your private data and to business use. It's one of their core arguments for using the service.

[1] - "...At Amazon, we don’t use your prompts and outputs to train or improve the underlying models in Amazon Bedrock and SageMaker JumpStart (including those from third parties), and humans won’t review them. Also, we don’t share your data with third-party model providers. Your data remains private to you within your AWS accounts..."

[1] - https://aws.amazon.com/blogs/security/securing-generative-ai...

hmmmmmmmmmmmmmm•1h ago
None of them have any moat, OpenAI already lost the lead [1] and no one is "winning". It is just a race to the bottom as they burn through GPUs that won't even last that long.

[1] https://x.com/kenshii_ai/status/2046111873909891151/photo/2

Tepix•1h ago
GPUs are lasting longer than foreseen, in fact old GPUs are more valuable now (making more money!) than they were three years ago when they were new.

Tokens will continue to increase in price until the supply meets the demand. That's going to take a while.

mossTechnician•47m ago
Are old datacenter GPUs making more money than they were before? Various sources point to GPUs dying quickly (in 2024, a Google engineer suggested 3 years maximum)[0], and even if they don't, newer chips cause rapid depreciation of older ones.[1]

[0]: https://www.tomshardware.com/pc-components/gpus/datacenter-g...

[1]: https://www.cnbc.com/2025/11/14/ai-gpu-depreciation-coreweav...
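The depreciation argument above can be put in rough numbers. A minimal sketch of straight-line write-off over a short useful life; the $30,000 price and the 3-year life are hypothetical placeholders, not figures from either linked source.

```python
# Straight-line depreciation sketch for a datacenter GPU.
# All numbers used with it are hypothetical illustrations.

def book_value(purchase_price: float, useful_life_years: float, age_years: float) -> float:
    """Remaining book value under straight-line depreciation, floored at zero."""
    remaining_fraction = max(0.0, 1.0 - age_years / useful_life_years)
    return purchase_price * remaining_fraction

# A hypothetical $30,000 accelerator on the 3-year life the comment cites:
after_one_year = book_value(30_000, 3, 1)     # roughly two thirds of the price
fully_written_off = book_value(30_000, 3, 3)  # zero book value
```

On that schedule, most of the asset's value is gone within a couple of years, which is the heart of the "rapid depreciation" worry.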

throwup238•40m ago
AWS is still offering g4dn instances that run on NVIDIA T4 GPUs, which were first released in 2018. My last employer is still running a bunch of otherwise discontinued g3 instances with 2015 era GPUs because it’s not worth validating the numeric codes on new GPUs. People (especially journalists) underestimate how long these cards are economically useful.
renewiltord•31m ago
The sources are the sources. The reality is the reality.
nl•1h ago
$30B ARR says otherwise.
Sayrus•58m ago
ARR says nothing about the ability of these companies to retain customers once subsidies stop.
101008•45m ago
revenue is not profit
trgn•39m ago
in no world is 30B ARR a bad thing
sensanaty•32m ago
If they're spending $60B annually then that is bad. Obviously none of us knows what their real burn rate is, but revenue is an irrelevant number if you don't have the full picture.
lokar•36m ago
And EBITA is not GAAP
loveparade•57m ago
I give it one to two more years before open-source models have fully caught up. Products are commodities and models are commodities too. GPUs are still hard to get for inference at scale right now. They need a platform with lock-in, but I'm unsure what that would look like and why it wouldn't be based on open-source models.
alex_duf•42m ago
What does "fully caught up" mean in the context of an ever evolving technology? I think I'm in support of open weight models (though there are safety implications), but these things aren't cheap to train and run. This fact alone gives no incentive for leading labs to release cutting edge open weight models. Why spend the money then give the product for free?

Now if "fully caught up" means today's level of intelligence is available for free in two years, by then that level of intelligence means very little

stavros•40m ago
Yeah I don't understand it, it's a marathon with three companies perpetually a minute ahead, and people keep saying "I expect the stragglers to catch up".

The only thing I can see them meaning is what you said, "in a minute the stragglers will be where the leaders were a minute ago", which, yeah, sure.

mrbombastic•31m ago
It makes perfect sense if you think things cannot improve indefinitely
inciampati•6m ago
They do approximate any function... within the range they're trained on. And that range is human limited, at least today.
patrickmcnamara•29m ago
It's not a marathon, or any race. There is no finish line. It doesn't matter that much that someone is a minute ahead.
vorticalbox•40m ago
It's never free; you're shifting costs from paying a company for their API use to the power costs of running it locally.
ForrestN•38m ago
I think this "Mythos" situation, whether real or hype, points to the endgame here. Eventually, when you have a model powerful enough to have big consequences in the world, you stop worrying about selling it to consumers and start either a) using it to rule the world or b) watch as it gets nationalized. If you have a machine powerful enough to automate everything, why sell access to it when you could just...be all things to all people? Use the god machine yourself to take over more and more of the economy?
lokar•37m ago
I disagree. The point of the mythos hype is to get regulation to cut off competitors.
SpicyLemonZest•14m ago
Sometimes selling services is just the best business model. Intuit has accounting software powerful enough to have big consequences in the world, yet they mostly sell it to accountants rather than doing the accounting themselves.
johnbarron•33m ago
Please, some of us are long NVIDIA...let us cope in peace. :-)

Here is the thing nobody wants to say out loud, or is too dumb to realize: AI is intelligence, and intelligence has almost never been the binding constraint on productivity.

So you will get no productivity increase from the AI bubble. Yes, you read that correctly.

The test is simple: if raw brainpower were the bottleneck, you could 10x any company by hiring 200 PhDs. In practice you get 200 brilliant people writing unread memos, refactoring things that worked, and forming a committee to rename the committee. Smarts have always been cheaper and more abundant than the discourse pretends.

Every real productivity revolution came from somewhere else: energy (steam, electricity), capital stock (machines that do the physical work), or coordination (railroads, shipping containers, the assembly line, the internet).

None of these raised the average IQ of the workforce; they changed what a given worker could move, reach, or coordinate with. Solow's old line basically still holds: output per worker grows when you give the worker better tools and infrastructure, not better neurons.

Meanwhile the actual bottlenecks in a modern firm are regulatory approval, legacy systems, procurement cycles, customer adoption, internal politics, and physical supply chains that don't care how clever your email was. A smart intern at every desk produces more artifacts, not more throughput, and in a lot of organizations, more artifacts is actively negative ROI.

Jevons does not save you either: cheaper cognition mostly means more slide decks, not more GDP.

So the setup is that models are commoditizing on one side, while on the other a product whose core value add (more intelligence, faster) is aimed at a constraint that was never really binding. This is, of course, a rough combo for a trillion-dollar capex supercycle.

Fun for the trade, while it lasts, but there is no thesis. Just don't tell CNBC, and short NVDA on time ;-)

brianjlogan•14m ago
Besides, your competitor can turn around and hire the same team of PhDs at the same rate that you can. Compare and contrast PhDs on leaderboards, and get access in seconds with a new API key or model selector.

Granted, LLMs are not even PhDs.

What a weird time we live in...

engineer_22•17m ago
>I mostly see their products as commodity at this point, with strong open source contenders.

> Eventually it will become hard to justify the premium on these models.

On the contrary, the model is the moat.

The model represents embodied capital expenditure in the form of training. Training is not free, and it is not a commodity; it is heavily influenced by curation.

Eventually the ever-increasing training expense will reduce the competition to 2-3 participants running cutting edge inference. Nobody else will be able to afford the chips, watts, and warehouse. It's a physics problem - not a lack of will.

If you're a retail user, and a lower-tier model is suitable for your work, you'll have commodity LLMs to help you. Deprecated models running on tired silicon. Corporate surveillance and ad injection.

But if you're working on high-stakes problems in real time, you're going to want the best money can buy, so you'll concentrate your spend on the cutting-edge products, open API's, a suite of performance monitoring tools and on-the-fly engineering support. And since the cutting edge is highly sought after, it's a seller's market. The cutting edge products buoyed by institutional spend will pull away from the pack. Their performance will far exceed what you're using, because your work isn't important. Hockey stick curve. Haves and Have-Nots.

The economic reality is predetermined by today's physical constraints - paradigm shifting breakthroughs in quantum computing and superconductors could change the calculus but, like atomic fusion power, don't count on it being soon.

shubhamjain•1h ago
If you think you need to spend $100B, does using a third-party cloud provider still make sense? It doesn’t matter what sweet deal Amazon is pitching—in that scenario, you’d want to own your stack. Especially in a hyper-competitive field like this, where margins are going to matter a lot soon.

It feels like these hyperscalers are just raising as much as they can by giving extremely rosy projections, because sooner or later the peak is going to be reached (if that hasn't happened already).

Tepix•1h ago
Sure: If you can't get enough compute by ordering it yourself, make deals with anyone who promises to get you more compute.
Zababa•1h ago
I think it could make sense to not want to own the stack if you think it's going to cost you velocity/focus? Which is probably the play here. But I'm not certain at all.
loveparade•1h ago
Good luck getting GPUs.
LogicFailsMe•54m ago
Classic time value of money situation. They get access to the HW now so they can continue to grow the business. Of course, if you think AI is just pets.com redux, I can see how you'd think it's already peaked. All those years of very important people insisting Bezos couldn't just pull a switch on reinvesting all the revenue into growing Amazon and then he did exactly that comes to mind.
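The time-value point can be made concrete with a one-function discounting sketch. The amounts and the 10% discount rate below are hypothetical illustrations, not figures from the deal:

```python
# Present value of a future cash amount at a constant annual discount rate.

def present_value(future_amount: float, annual_rate: float, years: float) -> float:
    """Discount a cash amount received `years` from now back to today."""
    return future_amount / (1 + annual_rate) ** years

# $1B of compute delivered today vs. the same $1B delivered in two years,
# at a hypothetical 10% annual discount rate: the later delivery is worth
# noticeably less in today's terms, which is why "access now" commands a premium.
today = present_value(1e9, 0.10, 0)
in_two_years = present_value(1e9, 0.10, 2)
```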
credit_guy•53m ago
Here's the answer to your question (from the article):

> The Anthropic deal specifically covers Trainium2 through Trainium4 chips, even though Trainium4 chips are not currently available. The latest chip, Trainium3, was released in December. On top of that, Anthropic has secured the option to buy capacity on future Amazon chips as they become available.

deskamess•3m ago
So it comes down to how much of that $100 bn is in the 'option', I guess. Then it's not an expense at all.
superkuh•2m ago
Ah. So it's a scalper situation, where an unethical entity buys up all the supply and then resells it at a greater price.
Culonavirus•52m ago
Only Google and xAI build their own, no? I don't think it's that easy to vertically integrate massive datacenters into a software company. Both Google and xAI (Tesla, SpaceX) have a massive wealth of experience when it comes to building factories.
jeffbee•42m ago
New level of glazing Elon Musk unlocked. xAI has a vertical integration advantage because Tesla once moved into an old Toyota factory and because once they paid Panasonic to put a Tesla sign outside a Panasonic battery factory. Incredible content.
petesergeant•28m ago
I would struggle to dislike Elon more, but this seems like you’re some kind of weird anti-Musk fanatic
mitchell_h•49m ago
I watched someone explain how deepseak got good and the Chinese approach to LLM training. Really wish I could remember it. The premise was that China thinks of LLMs not as a thing separate from hardware, but gains efficiencies at each layer of the stack. From chips to software, it's all integrated and purpose-built for training.

Wonder if Anthropic is making a mistake by focusing on "consumer" hardware, and not going super specialized.

elefanten•47m ago
DeepSeek uses merchant silicon like everyone else.

edit: I misunderstood, I thought you were implying they designed their own GPUs. nevermind

renewiltord•29m ago
It's fake news predicated on China not being able to get GPUs. But it turns out everyone was getting their GPUs via serial-number swaps in warehouses.
jubilanti•24m ago
So you watched some random video from some random YouTuber, didn't even remember who made it, so much so that you didn't even remember that DeepSeek isn't spelled "deepseak", didn't bother to find it or verify, and then you go asserting your memory as fact on a serious discussion forum.

Comments like yours add nothing to the discussion.

dktp•47m ago
I think these pledges offload some of the risk onto Amazon/Oracle/etc

If Anthropic/OpenAI miss projections, infra providers can quite likely still turn around and sell the capacity to the next guy, or use it themselves. If they have more demand than expected (as Anthropic currently does), VCs will throw money at them and they can outbid the competition.

If they built it themselves and missed projections it's a much more expensive mistake

It's just risk sharing. Infra providers take some of the risk and some of the upside

throwup238•30m ago
> If they built it themselves and missed projections it's a much more expensive mistake

Not if their pricing comes with multiyear commitments for reserved pricing. No doubt they get a huge volume discount, but the advertised AWS reserved pricing is already enough to pay for a whole 8x HX00 pod, plus the NVIDIA enterprise license, plus the staff to manage it, after only a one-year commitment. On-demand pricing is significantly more expensive, so they're going to be boxed in by errors in capacity planning anyway (as has been happening the last few months).

The economics here are absurd unless you’re involved in a giant circular investment scheme to pump up valuations.
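The rent-vs-own trade-off in this subthread comes down to a break-even calculation. A hedged sketch; every number below is a hypothetical placeholder, not a real AWS or NVIDIA price:

```python
# Months until buying your own hardware beats a reserved-cloud commitment.

def breakeven_months(pod_capex: float,
                     monthly_ownership_cost: float,
                     monthly_reserved_cost: float) -> float:
    """Break-even point: up-front capex divided by the monthly saving from owning."""
    monthly_saving = monthly_reserved_cost - monthly_ownership_cost
    if monthly_saving <= 0:
        return float("inf")  # renting is never more expensive; no break-even
    return pod_capex / monthly_saving

# Hypothetical: a $400k pod costing $10k/month to run and staff, versus a
# $60k/month reserved commitment. Owning pays for itself in 8 months.
months = breakeven_months(400_000, 10_000, 60_000)
```

If the break-even lands well inside the commitment term, owning wins on pure cost, and the case for renting has to rest on flexibility and capacity-planning risk instead.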

bombcar•31m ago
If you’re sure it’s going to go gangbusters you want to get it all in-house asap.

If you’re not sure it’s going to blow the socks off, foisting capital investment on partners is a great deal.

See the difference in companies/franchises that always own the land/building and those that always lease.

bilekas•30m ago
I imagine it comes down to whether they want to buy hardware every generation; that gets very expensive and depreciates quickly. You've then got a whole load of assets on your books that are technically obsolete for the bleeding edge. This way, AWS buys and maintains the hardware and OpenAI doesn't need to claim it as depreciation?

Just a guess.

vasco•20m ago
That is a project you can work on at any point in the future, and the more you delay it, the more certain your investment will be about what you really need. But those additions to the P&L are capped at the costs.

In the meantime if you work on revenue generating work, that side of PnL is uncapped. So you can either put some engineers on reducing your costs at most by 100% or, if they worked on product ideas they could be working on things that generate over 9000% more revenue.

lubujackson•16m ago
Look at GPU and RAM prices and data center rollout. We have quickly reached Earth's capacity for compute - it is a lot like the housing market. Once there is global saturation, the price to buy becomes increasingly high EVERYWHERE. Let's also not forget that Anthropic moves the market with their purchases and usage. They might literally be unable to buy capacity they need (or project to) and are doing this deal to pave a roadmap for the near-term and to keep global prices (somewhat) down.
samdixon•14m ago
From my understanding, if you want to use native Claude in AWS Bedrock, it runs from an AWS datacenter. I'm guessing that's why regardless of running your own stack... they still need a footprint in all the major clouds.
MeetingsBrowser•5m ago
Going from a company with no experience building and operating datacenters to a company with $100B worth of compute is a multi-decade, high-risk goal.
gabrielsroka•1h ago
$25B https://news.ycombinator.com/item?id=47844891
mossTechnician•59m ago
$5B is part of a contract; the remaining $20B is just a non-binding statement that doesn't hold the same weight (but somehow commands the same media fanfare).
sensanaty•1h ago
I'm no economist, but how exactly does this make sense? Amazon is basically just giving them $5B, which will then be used to pay Amazon back 20x that amount?
Zababa•1h ago
I was wondering the same thing. I think it's something like, they're going to pay for infra anyways, so Amazon pushes them to allocate their spend to AWS in exchange for 5B.
victorbjorklund•1h ago
$5 billion now vs $10 billion per year in spend on compute that you had to buy anyway (not necessarily at AWS)
pwython•1h ago
> Amazon is investing $5 billion in Anthropic today, with up to an additional $20 billion in the future. This builds on the $8 billion Amazon has previously invested.

> Today’s agreement will quickly expand our available capacity, delivering meaningful compute in the next three months and nearly 1GW in total before the end of the year.

They need a bunch of compute, now.

https://www.anthropic.com/news/anthropic-amazon-compute

ithkuil•1h ago
In exchange for a service that presumably a) costs Amazon something to operate (so not pure $100B profit), and b) Anthropic would have to pay for anyway to operate their business.

so basically ...

You could view this as a kind of discount, but instead of paying less later, you get some cash now and then pay full price later.
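The "kind of discount" framing reduces to simple arithmetic on the headline figures ($5B cash against $100B of pledged spend); a minimal sketch:

```python
# Cash received now, expressed as a fraction of the committed future spend.

def effective_rebate(cash_now: float, committed_spend: float) -> float:
    """Up-front cash as a fraction of the pledged spend it is set against."""
    return cash_now / committed_spend

# Headline figures: $5B in, $100B pledged back, i.e. a 5% up-front rebate,
# before any time-value or equity considerations.
rate = effective_rebate(5e9, 100e9)
```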

FatherOfCurses•47m ago
I'd bet that Amazon is getting access to chat data (no matter what Anthropic says publicly) and possibly even the ability to change the model to drive business to either Amazon retail or AWS.

"Claude I'm evaluating whether I should host my app on AWS or Google Cloud. Provide me with an analysis on my options." "After a detailed analysis, AWS is clearly your better option."

ChrisArchitect•59m ago
https://www.anthropic.com/news/anthropic-amazon-compute

https://www.aboutamazon.com/news/company-news/amazon-invests...

secondcoming•59m ago
all your GPUs are belong to us
mikert89•43m ago
hacker news is so useless, look at all these negative cynical comments
anonyfox•13m ago
Sounds like the money grab is accelerating before consumer-grade local models get good enough for local inference in a few years. Huge house of cards here. Demand skyrockets until it suddenly drops entirely with on-device inference.
inciampati•8m ago
I'm already living in this future. In a decent execution framework, with context management, memory via unix, and mechanisms for web search and access, local models are effectively on par with frontier ones. And they can often be much faster. I'll keep paying fees for the AI companies until they stop truly subsidizing and leading. They are getting close to the edge of utility, but we can use their services now to bootstrap their own demise. Long live running your own software on your own computer.
bwfan123•1m ago
> consumer grade local models are getting good enough for local inference

I am waiting for that. Perhaps a Taalas-style high-performance custom-hardware LLM coding engine paired with an open-source coding agent. On my desktop.

wg0•5m ago
The best thing for humanity, the economy, technology, society, progress, and the environment is for this scam to come down ASAP.