
Uber Launched a Women-Only Service. Will It Work?

https://knowledge.wharton.upenn.edu/article/uber-launched-a-women-only-service-will-it-work/
1•herecomethefuzz•4m ago•1 comments

The Hysteresis of Vibe Coding

https://the-nerve-blog.ghost.io/the-hysteresis-of-vibe-coding/
1•mprast•4m ago•0 comments

The Origins of Gaff Taxidermy as Historical Oddities

https://lethbridgenewsnow.com/2017/09/20/the-origins-of-gaff-taxidermy-as-historical-oddities/
1•speckx•5m ago•0 comments

Major study used to support affirmative action in med schools was faked – PNAS

https://www.pnas.org/doi/10.1073/pnas.2409264121
1•janandonly•6m ago•0 comments

We did a DB migration without logical replication – with zero downtime

https://reducto.ai/blog/reducto-database-migration-zero-downtime
2•raunakchowdhuri•6m ago•0 comments

There Was a RadioShack Ponzi

https://www.bloomberg.com/opinion/newsletters/2025-09-24/there-was-a-radioshack-ponzi
2•ioblomov•7m ago•1 comments

This link will send you to a random Web 1.0 website

https://wiby.me/surprise/
2•pfexec•8m ago•0 comments

Depictions of Celestial Objects Spanning Nearly a Millennium (2014)

https://publicdomainreview.org/collection/flowers-of-the-sky/
1•NaOH•11m ago•0 comments

Pathword, a neat new puzzle from The Daily Baffle

https://dailybaffle.com/pathword/
1•skywardacoustic•11m ago•1 comments

GPU Implementation of Second-Order Linear and Nonlinear Programming Solvers

https://arxiv.org/abs/2508.16094
1•adgjlsfhk1•12m ago•0 comments

Ask HN: Looking for a Book

2•phoenixhaber•13m ago•3 comments

Full Self Driving Cars

https://dan.bulwinkle.net/blog/full-self-driving-cars/
1•pilingual•14m ago•1 comments

To become a good C programmer (2011)

https://fabiensanglard.net/c/
2•pykello•16m ago•0 comments

Gen Z are eating dinner at 6pm – and it's because they're losers

https://www.standard.co.uk/comment/gen-z-eating-early-dining-alcohol-b1241442.html
4•mathattack•21m ago•3 comments

Broken Trust: Fixed Supermicro BMC Bug Gains New Life in Two New Vulnerabilities

https://www.binarly.io/blog/broken-trust-fixed-supermicro-bmc-bug-gains-a-new-life-in-two-new-vul...
2•gnabgib•22m ago•0 comments

A Guide to Fluent Bit Processors for Conditional Log Processing

https://thenewstack.io/a-guide-to-fluent-bit-processors-for-conditional-log-processing/
1•k8tgreenley•23m ago•0 comments

Show HN: I send you weekly insights from your bookmarks

https://tryeyeball.com/
1•quinto_quarto•25m ago•0 comments

Tether CEO confirms major capital raise at a reported $500B valuation

https://www.cnbc.com/2025/09/23/tether-reportedly-seeks-lofty-500-billion-valuation-in-capital-ra...
1•arvindh-manian•26m ago•2 comments

Unitree R1: A Next-Generation Humanoid Robot Platform for Real-World Use

https://www.dronesplusrobotics.com/post/unitree-r1-a-next-generation-humanoid-robot-platform-for-...
1•DPRobotics•27m ago•0 comments

Emmett Shear and Patrick McKenzie on AI Alignment

https://www.complexsystemspodcast.com/episodes/ai-alignment-with-emmett-shear/
2•surprisetalk•27m ago•0 comments

Drones Plus Robotics – Industrial Enterprise Robotics and Drone Solutions

https://www.dronesplusrobotics.com
1•DPRobotics•28m ago•0 comments

JRuby and JDK 25: Startup Time with AOTCache

https://blog.headius.com/2025/09/jruby-jdk25-startup-time-with-aotcache.html
1•todsacerdoti•30m ago•0 comments

Bluffing in Scrabble

https://arxiv.org/abs/2509.10471
3•fanf2•31m ago•0 comments

Microsoft microfluidic channels cool GPU 65%, outperform cold plates by up to 3x

https://www.tomshardware.com/pc-components/liquid-cooling/microsoft-develops-breakthrough-chip-co...
1•westurner•32m ago•2 comments

NFS at 40

https://nfs40.online/
1•fjarlq•33m ago•0 comments

Can Liberalism Be Saved?

https://www.newyorker.com/news/q-and-a/can-liberalism-be-saved
3•paulpauper•36m ago•1 comments

Do Soil Methanotrophs Remove About 5% of Atmospheric Methane?

https://www.mdpi.com/2073-445X/14/9/1864
1•PaulHoule•36m ago•0 comments

GitHub MCP Registry

https://github.com/mcp/
2•saikatsg•36m ago•0 comments

Build a Bear Success

https://www.washingtonpost.com/business/2025/09/22/build-a-bear-success-tariffs/
1•paulpauper•37m ago•0 comments

We should not auction off all H1B visas

https://marginalrevolution.com/marginalrevolution/2025/09/why-we-should-not-auction-off-all-h1-b-...
2•paulpauper•38m ago•0 comments

Zed's Pricing Has Changed: LLM Usage Is Now Token-Based

https://zed.dev/blog/pricing-change-llm-usage-is-now-token-based
86•meetpateltech•1h ago

Comments

input_sh•1h ago
Entirely predictable and what should've been done from the start instead of this bait-and-switch mere months after introducing agentic editing.
relativeadv•1h ago
Is this effectively what Cursor did as well? I seem to remember some major pricing change of theirs in the past few months.
input_sh•1h ago
In a way I would say they were even worse: instead of outright saying "we've increased our prices", they "clarified their pricing".
cactusplant7374•1h ago
How much are companies spending per developer on tokens? From what I read it seems like it might be quite high at $1,000 or more per day?
trenchpilgrim•59m ago
No, not at all! At my org it's around $7000 a month for the entire org - my personal usage is around $2-10 a day. Usually less than the price of my caffeinated beverages.
binwang•1h ago
Now I see little value in subscribing to Zed Pro compared to just bringing my own API key. Am I missing something?
prasoon2211•1h ago
Presumably the tab based edit-prediction model + $5 of tokens is worth the (new) $10 / mo price.

Though from everything I've read online, Zed's edit prediction model is far, _far_ behind that of Cursor.

agrippanux•1h ago
Their burn agent mode is pretty badass, but is super costly to run.

I'm a big fan of Zed but tbf I'm just using Claude Code + Nvim nowadays. Zed's problem with their Claude integration is that it will never be as good as just using the latest from Claude Code.

morgankrey•1h ago
(I work at Zed) No, you aren't. We care about you using Zed the editor, and we provide Zed Pro for folks who decide they'd like to support Zed or our billing model works for them. But it's simply an option, not our core business plan, and this pricing is in place to make that option financially viable for us. As long as we don't bear the cost, we don't feel the need (or the right) to put ourselves in the revenue path with LLM spend.
jsheard•1h ago
> [Zed Pro is] not our core business plan

What is the core business plan then?

morgankrey•58m ago
https://zed.dev/blog/sequoia-backs-zed#introducing-deltadb-o...
maxbond•59m ago
Will you consider providing a feature to protect me from accidentally using my Zed account after the $5 is exhausted (or else a plan that only includes edit predictions)? I can't justify to myself continuing my subscription if there's a risk I will click the wrong button with identical text to the right button, and get charged an additional 10% for it. I get you need to be compensated for risk if you pay up front on my behalf, but I don't need you to do that.

I understand that there's nothing you could do to protect me if I make a prompt that ends up using >$5 of usage but after that I would like Zed to reject anything except my personal API keys.

morgankrey•57m ago
Yep, you can set your spend limit to $0 and it will block any spend beyond your $10 per month for the subscription

https://zed.dev/docs/ai/plans-and-usage#usage-spend-limits

maxbond•57m ago
Excellent. Thanks.
bluehatbrit•1h ago
Token based pricing generally makes a lot of sense for companies like Zed, but it sure does suck for forecasting spend.

Usage pricing on something like AWS is pretty easy to figure out. You know what you're going to use, so you just do some simple arithmetic and you've got a pretty accurate idea. Even with serverless it's pretty easy. Tokens are so much harder, especially in a development setting. It's so hard to have any reasonable forecast of how a team will use it and how many tokens will be consumed.

I'm starting to track my usage with a bit of a breakdown in the hope that I'll find a somewhat reliable trend.

I suspect this is going to be one of the next big areas in cloud FinOps.
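The kind of tracking-and-forecasting described above could be sketched like this; the per-token prices and the usage log are invented assumptions for illustration, not any provider's real rates:

```python
# Sketch: log daily token usage, then project a monthly spend.
# All prices and token counts below are invented for illustration.
from statistics import mean

INPUT_PRICE_PER_M = 3.00    # assumed $ per 1M input tokens
OUTPUT_PRICE_PER_M = 15.00  # assumed $ per 1M output tokens

def daily_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one day's usage at the assumed rates."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

def forecast_monthly(daily_log: list[tuple[int, int]], workdays: int = 21) -> float:
    """Project a month's spend from a log of (input, output) token counts."""
    return mean(daily_cost(i, o) for i, o in daily_log) * workdays

# Three days of hypothetical usage: (input tokens, output tokens)
log = [(150_000, 40_000), (90_000, 25_000), (220_000, 60_000)]
print(f"projected monthly spend: ${forecast_monthly(log):.2f}")
```

Even a crude average like this gives a spend envelope; the hard part, as noted, is that per-day variance in a development setting is large.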

prasoon2211•1h ago
This is partially why, at least for LLM-assisted coding workloads, orgs are going with the $200 / mo Claude Code plans and similar.
jsheard•1h ago
Until the rug inevitably gets pulled on those as well. It's not in your interest to buy a $200/mo subscription unless you use >$200 of tokens per month, and long term it's not in their interest to sell you >$200 of tokens for a flat $200.
Hamuko•47m ago
The pricing model works as long as people (on average) think they need >$200 worth of tokens per month but actually do something less, like $170/month. Is that happening? No idea.
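That break-even condition is easy to write down as a toy expected-value check; every number here is invented, not real usage data:

```python
# Toy expected-value check for a flat $200/mo plan.
# The subscriber usage figures are invented for illustration.
PLAN_PRICE = 200.0

# What each subscriber's month would have cost at metered list prices:
would_be_metered = [120.0, 150.0, 170.0, 180.0, 230.0]

avg_usage = sum(would_be_metered) / len(would_be_metered)  # $170 average
margin_per_user = PLAN_PRICE - avg_usage                   # $30 to the provider
print(avg_usage, margin_per_user)
```

The flat plan stays viable while the average sits below the flat price, even though individual heavy users (the $230 subscriber here) cost more than they pay.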
jsheard•39m ago
Maybe that is what Anthropic is banking on; from what I gather they obscure Max accounts' actual token spend, so it's hard for subscribers to tell if they're getting their money's worth.

https://github.com/anthropics/claude-code/issues/1109

hombre_fatal•9m ago
Well, the $200/mo plan works as long as the $100/mo plan is insufficient for some people, which in turn works as long as the $17/mo plan is insufficient for some people.

I don't see how it matters to you that you aren't saturating your $200 plan. You have it because you hit the limits of the $100/mo plan.

baq•7m ago
meanwhile me hiding from accounting for spending $500 on cursor max mode in a day
Spartan-S63•1h ago
> I suspect this is going to be one of the next big areas in cloud FinOps.

It already is. There’s been a lot of talk and development around FinOps for AI and the challenges that come with that. For companies, forecasting token usage and AI costs is non-trivial for internal purposes. For external products, what’s the right unit economic? $/token, $/agentic execution, etc? The former is detached from customer value, the latter is hard to track and will have lots of variance.

With how variable output size can be (and input), it’s a tricky space to really get a grasp on at this point in time. It’ll become a solved problem, but right now, it’s the Wild West.

mdasen•50m ago
I agree that tokens are a really hard metric for people. I think most people are used to getting something with a certain amount of capacity per time and dealing with that. If you get a server from AWS, you're getting a certain amount of capacity per time. You still might not know what it's going to cost you to do what you want - you might need more capacity to run your website than you think. But you understand the units that are being billed to you and it can't spiral out of control (assuming you aren't using autoscaling or something).

When you get Claude Code's $20 plan, you get "around 45 messages every 5 hours". I don't really know what that means. Does that mean I get 45 total conversations? Do minor followups count against a message just as much as a long initial prompt? Likewise, I don't know how many messages I'll use in a 5 hour period. However, I do understand when I start bumping up against limits. If I'm using it and start getting limited, I understand that pretty quickly - in the same way that I might understand a processor being slower and having to wait for things.

With tokens, I might blow through a month's worth of tokens in an afternoon. On one hand, it makes more sense to be flexible for users. If I don't use tokens for the first 10 days, they aren't lost. If I don't use Claude for the first 10 days, I don't get 2,160 message credits banked up. Likewise, if I know I'm going on vacation later, I can't use my Claude messages in advance. But it's just a lot easier for humans to understand bumping up against rate limits over a more finite period of time and get an intuition for what they need to budget for.

scuff3d•30m ago
Also seems like a great idea to create a business model where the companies aren't incentivised to provide the best product possible. Instead they'll want to create a product just useful enough to not drive away users, but just useless enough to tempt people to go up a tier: "I'm so close, just one more prompt and it will be right this time!"

Edit: To be clear, I'm not talking about Zed. I'm talking about the companies making the models.

potlee•4m ago
While Apple is incentivized to ship a smaller battery to cut costs, it is also incentivized to make their software as efficient as possible to make the best use of the battery they do ship.
qsort•1h ago
I wonder if first-party offerings like Codex and Claude will follow suit. Most "agents" are utter nonsense, but they cooked with the CLI tools. It'd be a shame to let go of them.
hashbig•1h ago
Eventually that is the plan. Like we saw with Claude Code, they want developers to get a taste of the unlimited and unrestrained power of a state-of-the-art model like Opus 4, then slowly tighten usage limits until everyone has fully transitioned to metered billing and subscription-based billing can be deprecated.
prymitive•1h ago
I can imagine the near future where companies “sponsor” open source projects by donating tokens to “mine” a PR for a feature they need.
ebrescia•1h ago
I love this! Finally a more direct way for companies to sponsor open source development. GitHub Sponsors helps, but it is often so vague where the funding is going.
bsnnkv•40m ago
More often than not, for individuals, it's barely contributing to their living costs
scuff3d•27m ago
If companies want to help they can just... I don't know... give projects some money
drakythe•14m ago
Unless companies also donate money to sponsor the code review, which will still need to be done by a real human being, I could see this idea being a problem for maintainers. Yes, you have to review a human's PR as well, but a human is capable of learning and carrying that learning forward, so their next PR will be better; you can also look at their past PRs to evaluate whether the user is a troll/bad actor or someone who genuinely wants to assist with the project. An LLM won't learn and will always spit out valid-_looking_ code.
hombre_fatal•4m ago
But the reason LLMs aren't used to build features isn't because they are expensive.

The hard work is the high level stuff like deciding on the scope of the project, how it should fit in to the project, what kind of extensibility the feature might need to be built with, what kind of other components can be extended to support it, (and more), and then reviewing all the work that was done.

sharkjacobs•1h ago
This whole business model of trying to shave off or arbitrage a fraction of the money going to OpenAI and Anthropic just sucks. And it seems precarious. There's no honest way to resell tokens at a profit, and everyone knows it.
Havoc•35m ago
>There's no honest way to resell tokens at a profit, and everyone knows it.

Agree with the sentiment, but I do think there are edge cases.

E.g. I could see a place like OpenRouter getting away with a tiny fractional markup, given the value they provide by having all providers in one place.

Lalabadie•3m ago
The issue with a model like this (fixed small percentage) is that your biggest clients are the most incentivized to move away.

At scale, OpenRouter will instead get you the lower high-volume fees they themselves get from their different providers.

thelastbender12•31m ago
Sorry, how is this new pricing anything but honest? They provide an editor you can use to:

- optimize the context you send to the LLM services
- interact with the output that comes out of them

Why does that not justify charging a fraction of your spend on the LLM platform? This is pretty much how every service business operates.

drakythe•6m ago
For companies where that is their entire business model I absolutely agree. Zed is a solid editor with additional LLM integration features though, so this move would seem to me to just cover their costs + some LLM integration development funds. If their users don't want to use the LLM then no skin off Zed's back unless they've signed some guaranteed usage contract.
andrewmcwatters•1h ago
Am I wrong in that GitHub Copilot Pro apparently has the best overall token spend when considering agentic editors?
ramon156•1h ago
Better than Gemini 2.5 Pro? GitHub Copilot doesn't even support tooling in Zed yet. It's been months...
genshii•1h ago
I'm personally looking forward to this change because I currently pay $20/month just to get edit prediction. I use Claude Code in my terminal for everything else. I do wish I could just pay for edit prediction at an even lower price, but I can understand why that's not an option.

I'm curious if they have plans to improve edit prediction though. It's honestly kind of garbage compared to Cursor, and I don't think I'm being hyperbolic by calling it garbage. Most of the time its suggestions aren't helpful, but the 10-20% of the time it is helpful is worth the cost of the subscription for me.

morgankrey•1h ago
We have a significant investment underway in edit predictions. We hear you, more soon.
genshii•1h ago
That's great to hear, thanks!
sippeangelo•12m ago
This is the one thing keeping me from switching from Cursor. I much prefer Zed in every other way. Exciting!
okokwhatever•58m ago
This is going to be a blood bath for many freelancers if the trend continues with other platforms. Mark my words.
WD-42•47m ago
Good change. I’m not a vibe coder, I use Zed Pro llm integration more like glorified stack overflow. I value Zed more for being an amazing editor for the code I actually write and understand.

I suspect I’m not alone on this. Zed is not the editor for hardcore agentic editing and that’s fine. I will probably save money on this transition while continuing to support this great editor for what it truly shines at: editing source code.

AbuAssar•43m ago
Zed and Warp were two promising Rust-based projects that I closely monitor. Currently, both projects are progressing towards becoming a generic AI Agentic code platform.
scuff3d•25m ago
Until now I've never really come across a comment on Hackernews I thought was AI generated...
muratsu•41m ago
For those of us building agentic tools that require similar pricing, how does one implement it? OpenRouter seems good for the MVP, but I'm curious if there are alternatives down the line.
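Whatever gateway you route through, the metering itself is mostly bookkeeping. A hedged sketch of "list price + margin" billing, where the model names and per-token prices are hypothetical placeholders and the 10% figure simply mirrors the markup style discussed in this thread:

```python
# Hedged sketch of "list price + margin" token billing over an upstream
# provider. Model names and per-token prices are hypothetical placeholders.
LIST_PRICES = {
    "model-a": (3.00, 15.00),  # assumed ($/1M input, $/1M output) tokens
    "model-b": (0.25, 1.25),
}
MARGIN = 0.10  # e.g. a 10% markup over upstream list price

def bill(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar charge for one request: upstream list cost plus margin."""
    in_price, out_price = LIST_PRICES[model]
    upstream = (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price
    return upstream * (1 + MARGIN)
```

In practice you would meter the token counts reported in each provider response and aggregate them per user per billing period; the gateway mainly saves you from maintaining the price table and per-provider auth yourself.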
VGHN7XDuOXPAzol•35m ago
> Token-agnostic prompt structures obscure the cost and are rife with misaligned incentives

Saying that, token-based pricing has misaligned incentives as well: as the editor developer (charging a margin over the number of tokens) or AI provider, you benefit from more verbose input fed to the LLMs and of course more verbose output from the LLMs.

Not that I'm really surprised by the announcement though, it was somewhat obviously unsustainable

dmix•34m ago
I just asked this exact question about Zed pricing 2 days ago

https://news.ycombinator.com/item?id=45333425

dinobones•31m ago
Making this prediction now: LLM usage will eventually be priced in bytes.

Why: LLMs are increasingly becoming multimodal, so an image "token" or video "token" is not as simple as a text token. Also, it's difficult to compare across competitors because tokenization is different.

Eventually prices will just be in $/MB of data processed, just like bandwidth. I'm surprised this hasn't already happened.

vtail•29m ago
Hm... why not tokens as reported by each LLM provider? They already handle pricing for images etc.
jermaustin1•28m ago
The problem is that tokens don't all equate to the same size. A megabyte of some random JSON is a LOT more tokens than a megabyte of "Moby Dick".
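That density difference is exactly why $/token and $/MB diverge. A rough sketch, where the bytes-per-token averages are assumptions rather than measurements:

```python
# Why $/token and $/MB diverge: token density varies by content type.
# The bytes-per-token averages below are rough assumptions, not measurements.
PRICE_PER_M_TOKENS = 3.00  # assumed $ per 1M input tokens
BYTES_PER_TOKEN = {"english_prose": 4.5, "dense_json": 2.5}

def cost_per_megabyte(content: str) -> float:
    """Effective $/MB if billing stays per-token."""
    tokens_per_mb = 1_000_000 / BYTES_PER_TOKEN[content]
    return (tokens_per_mb / 1e6) * PRICE_PER_M_TOKENS

print(cost_per_megabyte("english_prose"), cost_per_megabyte("dense_json"))
```

Under these assumed densities, a megabyte of dense JSON costs nearly twice as much as a megabyte of prose at the same per-token rate, which is the arbitrage a flat $/MB price would have to absorb.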
dragonwriter•24m ago
> Why: LLMs are increasingly becoming multimodal, so an image "token" or video "token" is not as simple as a text token.

For autoregressive token-based multimodal models, image tokens are as straightforward as text tokens, and there is no reason video tokens wouldn’t also be. (If models also switch architecture and multimodal diffusion models, say, become more common, then, sure, a different pricing model more tied to actual compute cost drivers for that architecture is likely, but... even that isn’t likely to be bytes.)

> Also, it's difficult to compare across competitors because tokenization is different.

That’s a reason for incumbents to prefer not to switch, though, not a reason for them to switch.

> Eventually prices will just be in $/Mb of data processed.

More likely they would be in floating-point operations expended processing them, but using tokens (which are the primary drivers for the current LLM architectures) will probably continue as long as the architecture itself is dominant.

jstummbillig•16m ago
Why this instead of cpu/gpu time?
vtail•24m ago
Prediction: the only remaining providers of AI-assisted tools in a few years will be the LLM companies themselves (think claude code, codex, gemini, future xai/Alibaba/etc.), via CLIs + integrations such as ASP.

There is very little value that a company that has to support multiple different providers, such as Cursor, can offer on top of tailored agents (and "unlimited" subscription models) by LLM providers.

oakesm9•21m ago
I completely get why this pricing is needed and it seems fair. There’s a major flaw in the announcement though.

I get that the pro plan has $5 of tokens and the pricing page says that a token is roughly 3-4 characters. However, it is not clear:

- Are tokens input characters, output characters, or both?

- What does a token cost? I get that the pricing page says it varies by model and is “API list price +10%”, but nowhere does it say what these API list prices are. Am I meant to go to the OpenAI, Anthropic, and other websites to get that pricing information? Shouldn’t that be in a table on that page with each hosted model listed?

—

I’m only a very casual user of AI tools so maybe this is clear to people deep in this world, but it’s not clear to me just based on Zed’s pricing page exactly how far $5 per month will get me.
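As a purely illustrative back-of-envelope (the list prices and the input/output split below are assumptions, not Zed's published numbers), $5 at "list + 10%" works out roughly like this:

```python
# Back-of-envelope for what $5 of included tokens buys at "list + 10%".
# The list prices and the 80/20 spend split are assumptions for illustration,
# not Zed's published numbers.
BUDGET = 5.00
MARKUP = 1.10
ASSUMED_INPUT_PER_M = 3.00    # $/1M input tokens, hypothetical
ASSUMED_OUTPUT_PER_M = 15.00  # $/1M output tokens, hypothetical

# Agentic workloads tend to be input-heavy; assume 80/20 split of spend:
input_tokens = (BUDGET * 0.8) / (ASSUMED_INPUT_PER_M * MARKUP) * 1e6
output_tokens = (BUDGET * 0.2) / (ASSUMED_OUTPUT_PER_M * MARKUP) * 1e6
print(int(input_tokens), int(output_tokens))  # roughly 1.2M input, 60K output
```

The point is less the exact figures than the shape of the arithmetic: divide each slice of the budget by the marked-up per-million rate, and a long agentic session with a large context can plausibly consume the whole allowance in a day.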

morgankrey•10m ago
List here: https://zed.dev/docs/ai/models. Thanks for the feedback, we'll make sure this is linked from the pricing page. Think it got lost in the launch shuffle.
bananapub•10m ago
seems fine - they're aligning their prices with their costs.

presumably everyone is just aiming or hoping for inference costs to go down so much that they can do an unlimited-with-ToS model like most home Internet access etc., because this intermediate phase of having to count your pennies to ask the matrix multiplier questions isn't going to be very enjoyable or stable, or encourage good companies to succeed.

giancarlostoro•6m ago
I was just thinking this morning about how Zed should rethink their subscription, because it's a bit pricey if they're going to let you just use Claude Code. I am in the process of trying out Claude and figured just going to them for the subscriptions makes more sense.

I think Zed had a lot of good concepts where they could make paid AI benefits optional longer term. I like that you can join your devs to look at different code files and discuss them. I might still pay for Zed's subscription in order to support them long term regardless.

I'm still upset that so many hosted models don't just let you use your subscription in things like Zed or JetBrains AI. What's the point of a monthly subscription if I can only use your LLM in a browser?

dinvlad•3m ago
Another one bites the dust :-( I hope at least Windsurf stays the same..
pkilgore•2m ago
This is much better for me but I really want a plan that includes zero AI other than edit prediction and BYOK for the rest.

But as a mostly claude max + zed user happy to see my costs go down.

blutoot•2m ago
Why is most of the AI-tooling industry still stuck on this "bring your own key" model?