frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

Why tariffs haven't raised inflation much (yet)

https://www.noahpinion.blog/p/why-tariffs-havent-raised-inflation
1•cwwc•1m ago•0 comments

Remember Corporate Training Programs?

https://hollisrobbinsanecdotal.substack.com/p/remember-corporate-training-programs
2•HR01•5m ago•0 comments

I spent $200 to test every LLM on a complex SQL query generation task

https://nexustrade.io/blog/i-tested-every-ai-model-on-a-complex-sql-query-generation-task-heres-where-grok-4-stands-20250711
1•sh_tomer•6m ago•0 comments

Preliminary report into Air India crash released

https://www.bbc.co.uk/news/live/cx20p2x9093t
3•cjr•8m ago•0 comments

I Use ChatGPT in Notion to Write PM Reports Faster

https://koshy8.gumroad.com/l/ai-pm-free
1•aipmtools•12m ago•0 comments

US customs duties top $100B for first time in a fiscal year

https://www.reuters.com/business/trumps-tariff-collections-expected-grow-june-us-budget-data-2025-07-11/
3•TMWNN•16m ago•0 comments

Figma's $300k Daily AWS Bill Isn't the Scandal You Think It Is

https://www.duckbillgroup.com/blog/figmas-300k-daily-aws-bill-isnt-the-scandal-you-think-it-is/
3•mooreds•16m ago•0 comments

Preserving Traditions: Unveiling the Timeless History of Lacto-Fermentation

https://www.lazyscientistsauces.co.uk/post/preserving-traditions-unveiling-the-timeless-history-of-lacto-fermentation
1•thunderbong•17m ago•0 comments

Global Measles Outbreaks

https://www.cdc.gov/global-measles-vaccination/data-research/global-measles-outbreaks/index.html
2•andsoitis•18m ago•1 comments

Show HN: SaaS Template Optimized for AI

https://github.com/TeemuSo/saas-template-for-ai-lite
1•TeemuSo•20m ago•0 comments

Flux Kontext Image editing tests

https://www.flickspeed.ai/canvas/public/6871319e239a5c68830ee64f
1•taherchhabra•21m ago•1 comments

How to Interview AI Engineers

https://blog.promptlayer.com/the-agentic-system-design-interview-how-to-evaluate-ai-engineers/
1•jzone3•23m ago•2 comments

Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs

https://arxiv.org/abs/2504.06219
1•layer8•24m ago•0 comments

Creating a Website from Obsidian

https://lwgrs.bearblog.dev/creating-a-website-from-obsidian/
2•speckx•24m ago•0 comments

Talking Postgres with Shireesh Thota, Microsoft CVP

https://talkingpostgres.com/episodes/how-i-got-started-leading-database-teams-with-shireesh-thota/transcript
2•clairegiordano•26m ago•0 comments

Pasilalinic-Sympathetic Compass

https://en.wikipedia.org/wiki/Pasilalinic-sympathetic_compass
1•frabert•26m ago•0 comments

Ask HN: Advice for someone choosing a college path

2•spacebuffer•28m ago•3 comments

Chinese TV uses AI to translate broadcasts to sign language. It's not going well

https://www.theregister.com/2025/07/10/china_ai_sign_language_translation/
1•xbmcuser•28m ago•0 comments

Do Longevity Drugs Work?

https://www.economist.com/science-and-technology/2025/06/20/do-longevity-drugs-work
1•bookofjoe•31m ago•1 comments

I created an open source AI first Kanban tool

https://vibecodementor.net/kanban
1•wavh•34m ago•1 comments

Bela Gem Brings Ultra-Low Latency Audio to PocketBeagle 2

https://www.beagleboard.org/blog/2025-07-10-bela-gem-brings-ultra-low-latency-audio-to-pocketbeagle-2
2•ofalkaed•34m ago•0 comments

Hunting Russian Spies in Norway's 'Spy Town' [video]

https://www.youtube.com/watch?v=KcVxl08XYzQ
2•mgl•35m ago•0 comments

I'm more proud of these 128 kilobytes than anything I've built since

https://medium.com/@mikehall314/im-more-proud-of-these-128-kilobytes-than-anything-i-ve-built-since-53706cfbdc18
8•mikehall314•36m ago•0 comments

Once-in-a-Generation Copper Trade Upends a $250B Market

https://www.bloomberg.com/news/features/2025-07-11/trump-s-copper-tariffs-deadline-marks-end-of-once-in-a-generation-trade
1•mgl•37m ago•1 comments

SSPL is BAD

https://ssplisbad.com/
2•lr0•39m ago•1 comments

Krafton slams ex-Subnautica 2 execs – who now say they're suing

https://www.theverge.com/news/704606/subnautica-2-delay-krafton-unknown-worlds-bonus
3•mrkeen•41m ago•0 comments

Show HN: Prepin just launched 15 interview categories for mock interviews

1•OlehSavchuk•43m ago•0 comments

Stages of Adoption

https://www.robertotonino.com/adoption
1•RobTonino•43m ago•0 comments

A New Kind of AI Model Lets Data Owners Take Control

https://www.wired.com/story/flexolmo-ai-model-lets-data-owners-take-control/
1•CharlesW•44m ago•0 comments

xAI seeks up to $200B valuation in next fundraising

https://www.ft.com/content/25aab987-c2a1-4fca-8883-38a617269b68
3•mfiguiere•52m ago•0 comments

Anthropic Is Bleeding Out

https://www.wheresyoured.at/anthropic-is-bleeding-out/
31•speckx•3h ago

Comments

handfuloflight•3h ago
Have to enter an email address to read the full story?

In any case: the author does not factor in Anthropic's potential gross margins on API tokens. He assumes that if $10,000 worth of Claude Code API tokens is consumed, Anthropic must be losing $10,000.

We don't know. What's their cost of inference?

danmarket•2h ago
You don't have to enter an email address! There is a close button very conveniently placed. I will leave the rest as an exercise for the reader.

Ed acknowledges in the article that we don't know their inference costs. But unless they made a DeepSeek-level breakthrough in serving efficiency through the API, they are either breaking even or losing money on each call.

There is a race to the bottom and survival of the deepest pockets right now in the field. And this "subsidy" funded by investors will not last.

handfuloflight•2h ago
Not to be troublesome, but I don't see the close button and I zoomed in and combed for it.
danmarket•2h ago
Might be your browser or add-ons or something; it's visible on both of my devices. Clicking the greyed-out region also dismisses it.
handfuloflight•44m ago
Same thing in incognito. There's no button. There's no greyed out region either.
mcphage•2h ago
> And this "subsidy" funded by investors will not last.

That’s what they said about Uber, too, but it’s still around.

danmarket•1h ago
https://www.nytimes.com/2021/06/08/technology/farewell-mille...

Well, they are around because they increased prices for customers, reduced pay for gig workers, and added ads.

handfuloflight•1h ago
But their prices didn't increase by a factor of 30. That's roughly how much inference I'm getting for each dollar put into the Max subscription.
danmarket•1h ago
I should clarify that I don't think Anthropic will go out of business. Similar to Ed, I am purely looking at this as business analysis and their actions indicate that they are starting to change parameters of their business model.

Comparing Uber to Anthropic is not correct, because their cost models are not the same. Uber's costs are mainly labor, which is low-skill and high-volume, followed by service costs, which are high-skill but low-volume. That leaves a lot of room for optimization.

Anthropic has very big bills to pay, and those bills will only go up as they scale. In depth analysis of frontier model companies is difficult, since they are private and secretive.

handfuloflight•1h ago
You really don't think compute costs will be more tameable than human labor costs? What very big bills are you referring to?
danmarket•1h ago
If they remain as ambitious as they are/were in interviews, they are going to build larger multi-modal models. If they are loyal to their initial philosophy, "achieve safe AGI by using lead time," then they will try to outspend everybody else. The content of their spending cannot be known without insider information (which I don't have), but this business model is ripe for inefficiency for the sake of obtaining a first-mover advantage.

It is "unpopular" to say this, especially this bluntly, but low-skill labor can be made as cheap as you want it to be. If my numbers aren't wrong, the average Uber/Lyft driver earns less than the local hourly minimum wage (don't say tips; ~80% of Uber customers don't tip). But they accept it because of the lack of other opportunities, the flexibility of gig jobs, and the potential to work several jobs at once.

handfuloflight•58m ago
I mean that human labor costs can't be optimized in the same way compute can.

There's absolutely a floor below which drivers will revolt, especially since they know how much the rider is paying.

oidar•1h ago
You have to subscribe to see the full article. Scroll all the way down to:

> Read the full story
>
> Sign up now to read the full story and get access to all posts for paying subscribers only.

csunoser•3h ago
People love a widget, but the manufacturer loses money at the current price, so the manufacturer raises its price. The resellers are still willing to buy the widget, but now charge the retail consumer more.

I am very confused why this means Anthropic is bleeding out. The most important thing is that Anthropic has a thing people love. And can raise the price just fine.

beering•3h ago
I’m tired of the breathless hyperbole in every AI piece trying to get clicks. Every new thing means that “OpenAI is finished” or “It’s over for Anthropic” or whatever. It’s telling that this take is that Anthropic having a popular product is somehow really really bad for Anthropic, bad enough to warrant expletives.
AstroBen•2h ago
Same. It's not going away though unfortunately - clickbait works
daft_pink•3h ago
Reality is that Claude Code is very powerful; people are happily paying $200 per month for it and would probably pay a lot more. Cursor isn't really a huge factor in this. Anthropic isn't going to go bankrupt; they can just raise prices.

Their real fear should be that Google has a significant cost edge in AI inference and will be able to offer Gemini CLI at a much cheaper rate than they can, even if Gemini CLI isn't quite as good.

thephyber•2h ago
I never believed that such a high-value, capital-heavy investment could be sustained on just $20/mo.

But I don’t blame the foundation model platforms for giving users a taste and then rolling back the features and quantity of tokens per dollar. They needed to start somewhere, and they were able to offer an amazing product at an affordable price.

It’s not until the heaviest users start to abuse their subscription tier that the platform understands how its product scales.

xnx•2h ago
I can't see how niche LLM providers beat Hyperscalers (Google and Microsoft) in quality or cost. Claude is popular, but I bet a double-digit percent of customers would drop it if they tried Gemini.
seatac76•2h ago
I could see Amazon trying to acquire Anthropic for $50B+ at some point in the future.
VirusNewbie•2h ago
How is Microsoft even in the conversation? They weren't able to develop their own foundational models despite trying. They certainly wish they had full control rather than the frenemy relationship with OpenAI...
xnx•1h ago
> How is Microsoft even in the conversation?

Who knows how things will work out, but there might be a scenario where Microsoft gets 2% more of OpenAI and controls the whole thing.

handfuloflight•53m ago
Right, the idea isn't that the hyperscaler is incapable of developing the foundational models; it's that the foundational model developer is incapable of continuing to afford to provide them.
brilee•2h ago
Back when Claude Code had per-token pricing, almost nobody used it because it was clearly much more expensive than Cursor: $20 a month flat for Cursor vs. $5-10 a day for per-token Claude. The incentives manifested in how both products used tokens: Claude Code has no particular qualms about submitting a gigantic number of tokens and letting Sonnet figure it all out, whereas Cursor puts a lot of traditional software engineering into figuring out the correct minimal context to send. Now that Claude Code is on a fixed-price plan, Anthropic strangely doesn't seem to be doing anything to optimize the number of tokens it consumes.

I think it's quite plausible that Anthropic is bleeding out ~$100/month in token costs per $20/month user; even at an 80% margin, that is merely breakeven. Their limited capacity also means they are _losing_ the opportunity to sell the same capacity at a per-token marginal profit. I think the only plausible endgame here is that Anthropic uses the usage data to RL-finetune Claude Code to the point where it is actually worth a $200/month subscription.

Enjoy the $20/month Claude Pro plan while it lasts; I don't really see it sticking around for more than a year at best.
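The breakeven arithmetic in this comment can be sketched in a few lines. All numbers here are the comment's hypotheticals (a $20/month flat plan, ~$100/month of tokens at list price, an assumed 80% gross margin on API tokens), not Anthropic's actual figures:

```python
def serving_cost(list_price_usage: float, gross_margin: float) -> float:
    """Provider's cost to serve usage that would bill at list price."""
    return list_price_usage * (1.0 - gross_margin)

subscription = 20.0     # flat plan price, $/month (hypothetical)
tokens_at_list = 100.0  # token consumption at per-token list price, $/month (hypothetical)
margin = 0.80           # assumed gross margin on API tokens

cost = serving_cost(tokens_at_list, margin)  # 100 * 0.2 ~= $20 to serve
net = subscription - cost                    # ~$0/month: merely breakeven
```

Under these assumptions the flat plan neither makes nor loses money, while the same capacity sold per-token would have earned the full 80% margin.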

mwigdahl•2h ago
Compared to when Claude Code was originally released in late February, its token use is greatly reduced now. Since the late-May Claude 4 releases, I agree with you; it hasn't decreased much.
jsnell•2h ago
The Claude Code privacy policy[0] is pretty explicit that by default they train on neither prompts, nor usage data, nor even explicitly provided feedback (presumably /bug?) that could otherwise be used for product improvements.

> By default, Anthropic does not train generative models using code or prompts that are sent to Claude Code.

> We aim to be fully transparent about how we use your data. We may use feedback to improve our products and services, but we will not train generative models using your feedback from Claude Code.

[...]

> If you choose to send us feedback about Claude Code, such as transcripts of your usage, Anthropic may use that feedback to debug related issues and improve Claude Code’s functionality (e.g., to reduce the risk of similar bugs occurring in the future). We will not train generative models using this feedback. Given their potentially sensitive nature, we store user feedback transcripts for only 30 days.

For understanding what value they place on that data, they do have a program where you can opt-in to have your data be used for training[1] in exchange for a discount on the API rates.

[0] https://docs.anthropic.com/en/docs/claude-code/data-usage

[1] https://support.anthropic.com/en/articles/11174108-about-the...

brilee•1h ago
As a former big tech engineer, I can't help but come up with a gazillion ways to work around these sorts of seemingly straightforward policies.

Here's one way they could get around their own privacy policy: keep track of what % of Claude-generated code is retained in the codebase over time (as an indicator of how high-quality / bug-free the code was); A/B test variations of Claude Code to see which variations have higher retention percentages.

No usage data is retained, no code is retained, no data is used (other than a single floating point number) and yet they get to improve their product atop your usage patterns.

Here's another idea: use a summarization model to transform your session transcript into a set of bits saying "user was satisfied/dissatisfied with this conversation", "user indicated that claude was doing something dangerous", "user indicated that claude was doing something overly complicated / too simple", "user interrupted claude", "user indicated claude should remember something in CLAUDE.md", etc. etc. and then train on these auxiliary signals, without ever seeing the original code or usage data.
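The first idea above, a code-retention percentage, needs nothing more than comparing line sets between snapshots. A hypothetical sketch (the function and data here are illustrative; nothing suggests Anthropic actually runs this):

```python
def retention(generated: set[str], snapshot: set[str]) -> float:
    """Fraction of model-generated lines still present in a later codebase snapshot."""
    if not generated:
        return 0.0
    return len(generated & snapshot) / len(generated)

# Toy example: 3 of 4 generated lines survive a later commit.
generated = {"def add(a, b):", "    return a + b", "def sub(a, b):", "    return a - b"}
snapshot = {"def add(a, b):", "    return a + b", "def sub(a, b):", "# refactored"}
score = retention(generated, snapshot)  # 0.75
```

A single scalar like this per session is exactly the kind of aggregate signal the comment describes: no code or transcript leaves the machine, yet variants of the product can be A/B tested against it.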

handfuloflight•1h ago
They can train on my code all they want if I keep getting $10,000 in inference for $200.
jasonthorsness•2h ago
I subsidize all you power users: with my own $20 Claude Code subscription, I use it a few days per month for maybe a few hours. For my day job it's pay-by-token for the same thing.

So, I am not sure it is as bad as the article says. And capturing developer mindshare is a huge advantage, worth burning millions on.

And isn't Anthropic backed by Amazon?