

You are going to get priced out of the best AI coding tools

https://newsletter.danielpaleka.com/p/you-are-going-to-get-priced-out-of
68•fi-le•2h ago

Comments

iambateman•2h ago
I think Warhol’s quote is nostalgic but incomplete.

I’m priced out of the best cars, best houses, best home theater systems, best schools. Even someone making $300k/year can’t afford all of the best of everything.

Sure, the iPhone has been “the best” possible phone which was also used by nearly everyone, but I think that’s an anomaly even in the short run.

Right now I’m paying $200/mo for Claude code to do an amount of work I would’ve had to pay $10,000/mo for. Of course I’m expecting those numbers to get closer to each other.

No VC-funded gravy train lasts forever.

orthogonal_cube•2h ago
It’s a common tactic. Shock an industry with a new product and advertise it as being very affordable. Once you get a solid consumer base with enough organizations that have rebuilt their operations around it, slowly increase the cost and find more ways to produce revenue.
skybrian•1h ago
It all depends. Yes, something like that happened with Uber, but computers and consumer electronics have Moore's law working for them, so prices usually go down. (With occasional shortages like we see now with RAM - not for the first time, but it's usually temporary.)

My guess is that AI will be more like consumer electronics than like Uber.

orthogonal_cube•54m ago
I agree that consumer goods normally get cheaper over time. Software that becomes commercialized, or sees a surge in enterprise demand, tends to go the other way. Splunk, Elasticsearch, and Slack for example.
whynotmaybe•1h ago
Why do you expect the price to get closer?

You can get a table from Ikea that costs a fraction of what an artisan makes. They're not the same final product, but their function is the same.

hahn-kev•1h ago
Either AI gets more expensive, or the 10k outsourcing gets cheaper.
AstroBen•2h ago
The one saving grace I see here is that open models are getting really good, and they're already profitable at an affordable price.

So maybe it's true you won't get "the best", but I don't think you'll be that far off.

sharkjacobs•1h ago
There's this weird race: I have in my head some level of LLM performance that is "good enough," and the open models keep improving to that level, but by the time they do, my "good enough" has acclimatized to what I'm used to doing with the latest frontier models, and what the open models offer isn't good enough anymore.

The "good enough" points so far have been

- "as good as ChatGPT"

- "as good as GPT4"

- "as good as Sonnet 3.5"

- "as good as Opus 4.5 or Codex 5.2"

Anyway, we'll see where the Chinese models are in a year, and we'll see where my expectations are. Hopefully they overlap at some point.

Hansenq•2h ago
If you know anything about tech, you will know that tech as an industry is highly deflationary--billionaires use the same iPhones as you do! (In contrast, they don't drive the same cars you do.)

This boils down to the fact that chip fabs have massive fixed costs and near-zero marginal costs, and these chips power all of tech. So the more chips they can produce for a given fab, the more profit they can make, meaning that companies are incentivized to sell as many products as possible for as low a price as possible.
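The fixed-versus-marginal cost argument can be sketched with toy numbers (both figures below are made up for illustration, not real fab economics):

```python
# Toy model of fab economics: a huge fixed cost amortized over volume,
# plus a small marginal cost per chip. Both numbers are hypothetical.

def avg_cost_per_chip(fixed_cost, marginal_cost, units):
    """Average unit cost = amortized fixed cost + marginal cost."""
    return fixed_cost / units + marginal_cost

FAB_COST = 20_000_000_000  # hypothetical fab build-out cost ($)
MARGINAL = 50              # hypothetical per-unit production cost ($)

for units in (1_000_000, 10_000_000, 100_000_000):
    cost = avg_cost_per_chip(FAB_COST, MARGINAL, units)
    print(f"{units:>11,} chips -> ${cost:,.2f} each")
```

At 1M units the fab dominates ($20,050 each); at 100M units the average falls to $250, which is why selling as many chips as possible at as low a price as possible maximizes profit.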

We're supply constrained in the short-term because demand for these AI tools is so high that TSMC and other chip manufacturers can't keep up. But long term, supply/demand will equalize and tech will continue its deflationary trend. Sure, the frontier will always require the best possible chips, but AI coding is highly competitive, and competition drives price decreases. So prices may stay high right now, but it seems unlikely to me that this will stay true long-term.

All four of the author's steelmanned arguments at the end for a price decrease seem likely to come true already: competition is intense (OAI brags about how much cheaper they are compared to Claude), OAI subsidizes open-source influencers already, companies' earnings calls all call for more investment in fabs, and we're already close to saturating all of the benchmarks used for RL!

jsheard•1h ago
> billionaires use the same iPhones as you do!

Not if they have the brain disease which makes this kind of thing appealing:

https://caviar.global/catalog/custom-iphone/iphone-17/?sort=...

Yes that flagship model incorporates an actual Rolex Daytona in solid gold.

A_D_E_P_T•1h ago
For me, it's London.

> https://caviar.global/catalog/custom-iphone/iphone-17/london...

phpnode•1h ago
Both of these are great, true expressions of a total void of taste
stronglikedan•1h ago
I guess billionaires don't charge wirelessly. They just grab their backup custom iphone when the first one dies.
necovek•1h ago
They've got someone carrying their battery packs with an extendable cable.

Or maybe Big Ben hides the charging coils too.

wffurr•1h ago
That's got the exact same processing hardware in it though, which was the OP's point. Not that they can't have a fancier case.
necovek•1h ago
It is still exactly the same iPhone tech-wise, just with a custom "case".

I wouldn't go as far as to call it "brain disease" though: in a sense, it is OK for someone well off to spend on expensive products (made by the less rich), so things would equalize at least a bit.

Just like we in IT might happily spend 3% of our salary on slightly better shoes, and someone else would claim we have a "brain disease" because you can get perfectly good shoes for 5x less money.

relaxing•1h ago
That iPhone is not 5x more.

Furthermore, who in IT is paying 3% on shoes? Even if you’re the hypebeast buying $1200 Balenciagas, I don’t see how the math works out.

necovek•58m ago
That iPhone is 15x more (I don't know the exact price, sorry)? Same order of magnitude.

3% of your monthly salary for $200-$500 shoes (I see plenty of amateur runners getting carbon-sole shoes in this price range, for instance), when you could get a pair for less than $50.

floatrock•1h ago
> This boils down to the fact that chip fabs have massive fixed costs and near-zero marginal costs, and these chips power all of tech.

But what powers the chips?

You're talking about chip economics. Inference economics also hinges on electricity costs.

runarberg•1h ago
> competition drives price decreases

This is often cited as a truism, but there is no natural law that makes it necessarily true. There is plenty of room for black swans in market laws. So much so that the term "black swan" is probably better known in economics than in any other field.

Competition may drive down the price of LLMs, but there is a greater-than-zero probability that it won't, and if it won't, your whole counter-argument falls apart.

beachy•1h ago
I can't think of any high volume/consumer electronics/computer technology that has not been driven down in price over time. So based on historical precedent, I think your "greater than zero probability" might be only a tiny bit greater than zero.
runarberg•51m ago
xkcd did one about graphic calculators

https://xkcd.com/768/

But others that come to mind are MRI scanners, superconductors, and quantum computers.

I think in general this market law is subject to selection bias. The technology which does decrease in price will become commonplace and easy to find, whereas the technology which doesn’t risks becoming obscure and maybe even removed from consumer markets.

EDIT: just to clarify, the point about black swans is that the prediction always assigns close to 0 probability to the existence of black swans, until we actually observe one; then the probability is suddenly exactly 1. If LLMs are a black swan for this market law, most people will assign a close-to-0 probability ... until they don't.

Simulacra•2h ago
"OpenAI reportedly discussed charging $20k/month on PhD-level research agents with investors."

I've been wondering about this, that there might be a day when certain models are sold at a much higher price, like luxury cars, and only people who are willing to pay a lot of money get them. Everyone else has to settle for a cheaper LLM.

pluc•2h ago
Or ad-supported LLMs where you can't guarantee the answer isn't sponsored.
Leynos•1h ago
Why doesn't the same caveat apply to a paid account?

I mean, you have to declare when content is an advert, and if you are asserting that the owners of the chatbot are going to just ignore that requirement, won't they just do the same thing for paid accounts?

pluc•1h ago
I would assume they would until the profit of subscriptions surpasses the profit of anything else they can monetize. Why would they leave money on the table?
Leynos•17m ago
So "ad-supported" is redundant in your comment since you believe it applies to the paid accounts too?
827a•2h ago
> The top tier subscription prices are increasing exponentially

WILD graph that misrepresents what is happening.

There's a bunch of $20 subscriptions, and a bunch of $200 subscriptions. Devin has a $500 subscription. That's it.

The cost per unit of intelligence has been dropping every month. The cost per "completed task" has also been dropping. There is no sign of this reversing course. Graphing the price of a subscription, without taking into account what that subscription is getting you, is poor authorship.

MattDaEskimo•1h ago
What's also wild is this being the first comment to mention it!

Although there is an underlying truth: using LLMs for large-context tasks like coding is still extremely expensive.

croes•1h ago
> The cost per "completed task" has also been dropping. There is no sign of this reversing course.

Didn’t happen for me.

On the Plus plan, newer models hit the limit faster, so fewer tasks were done before I had to wait out the 5-hour cooldown.

NewsaHackO•1h ago
I mean when it equated the $10 a month Copilot to Claude Max, I stopped taking the article seriously.
TIPSIO•2h ago
An even worse day is probably coming:

Imagine if a model ever does get scary good: would the big labs even release it for general use? You couldn't buy it even if you wanted to. Exceptions would be enterprise deals, e.g. niche super-contracts with the likes of $AMZN.

eatsyourtacos•1h ago
Very true.. also I would say even what I get out of claude code is absolutely phenomenal right now.. but sometimes it does take minutes. I just had it take 15 minutes to do something. But what if you had access to the hardware to run it basically instantly?

Just think how these big companies will use that kind of power for themselves to get even more extreme uses out of it.

deadbabe•2h ago
It’s worse than this…

Companies have built entire systems of such complexity and slop that they require AI just to do the maintenance. They have fired engineers thinking they can just replace them with AI.

Well when the prices rise, they have no choice but to stay locked in, paying whatever it takes just to keep their companies running. If they stop using AI, their workforce suddenly does not have capacity to do the work required because of the layoffs. And there are not enough people to hire because people are quickly turning away from software engineering as a career. What a disaster it will be.

allthetime•1h ago
Or, they have to hire people who still know what they're doing at a premium.

Be those people.

mackeye•2h ago
i don't entirely disagree, but

> the cheapest usable tier of Claude Code is $100/mo

is, imo, false. cc pro, $20 per month, gets you a lot of sonnet usage, and code review with opus (which i find very valuable, even as someone who tries to use ai little). i guess it depends how you use ai, but if you use it to plan, debug, and review, rather than having it write code, i think pro is pretty comfortable.

to add, i've seen people say these subscriptions will get far more expensive, as they're offered at a loss. but, it seems far more likely that free tiers will be degraded or disappear, as (especially for openai?) the relative number of subscribers to free users is very small, so the latter probably dominates compute time greatly. anthropic probably has a higher relative number of people who pay for claude code (and use it to its fullest), so this is probably less true. i can see pro getting less usage, and max increasing in cost.

Sevii•2h ago
AI providers can only charge what the market can bear. AI isn't worth $20k/month for "PhD"-level work. But people are willing to pay for several $200/month subscriptions.

But fundamentally, AI compute is a commodity. GPUs are made in factories at scale. Assuming AI quality tapers off, supply will eventually catch up to demand.

Finally, open-weights models are good enough that the leading labs cannot charge high margins.

elashri•2h ago
> OpenAI reportedly discussed charging $20k/month on PhD-level research agents with investors.

At this price point, it would be cheaper to hire a bunch of actual PhDs, the vast majority of whom will not earn anything close to $250k per year in most of the world.

ottah•1h ago
I also seriously question what "PhD-level" even means in the context of a model. Someone with a PhD has developed very deep but narrow knowledge of a particular domain and has contributed to pushing out our sphere of knowledge at least a tiny bit in that pillar of competency. A model is at best a brittle, fractured, and often inconsistent representation of written human knowledge, and it lacks basic intuitive grounding in the world due to its lack of embodiment.

In my experience, to safely get any value out of an LLM, you have to be more knowledgeable than the LLM on a topic. So in this case, you'd really need a PhD to use this tool. At best it's a $20k-a-month research aid, which honestly is far more expensive than a handful of grad students, and probably less effective.

skybrian•2h ago
I've already switched to Sonnet 2.6 by default. It seems okay for the coding I do (working on a personal website) and it's 40% cheaper.

Businesses will pay more since they can justify the cost. That seems fine?

armchairhacker•2h ago
We’re already priced out of the best coding tools: human domain experts (https://news.ycombinator.com/item?id=47234325)

Unless you’re a top-tier domain expert. Then you’re safe until (if…) ASI.

pram•2h ago
From my recent experience with Qwen 3.5 I am less concerned about this. It certainly will never be “the best” but I did some TS refactoring with Qwen + Opencode over the weekend and it was surprisingly good. I even asked Opus 4.6 to grade the commits and it usually gave it a B- haha..

Anyway, it might be worth it to invest in an LLM rig today if you’re paranoid.

reenorap•1h ago
I used Qwen 3.5 for image descriptions and I was shocked at how great it was. Open-source models may be very useful now; one year ago they were really bad.
biddit•2h ago
Strongly disagree with the thesis.

Everything points to commoditization of models. Open/distilled models lag behind frontier only by 6-12 months.

Regulatory capture is the only thing I’m scared of with regards to tooling options and cost.

supern0va•1h ago
>Everything points to commoditization of models. Open/distilled models lag behind frontier only by 6-12 months.

Yes, but every high performing open weights model coming out of China has (supposedly) been caught distilling frontier models.

It seems like a lot of people are making assumptions about the state of the open weights ecosystem based on information that may not be accurate. And if the big labs are able to reliably block distillation, we could see divergence between the two groups in terms of performance.

dragonwriter•1h ago
> And if the big labs are able to reliably block distillation,

The big labs will not be able to reliably block distillation without further inhibiting general use of the models, which itself will help tip the balance away from commercial models.

reenorap•1h ago
No, you're wrong. It won't tip it away from commercial models. Trying to run open weight models to do inference is something 99% of people around the world can't do because it's expensive and technically challenging and the results are poor compared to the main companies. If they get rid of free usage, people will simply pay for it.
dragonwriter•1h ago
> Trying to run open weight models to do inference is something 99% of people around the world can't do because it's expensive and technically challenging and the results are poor compared to the main companies.

Just because a model is open doesn't mean that there aren't services that will run it for you (and which won't share any limits that the commercial model vendors impose to fight distillation, because neither the host nor the model creator cares if you are using the service to distill the model).

Many users of open models, particularly the larger ones, now use such services rather than running them on their own local or cloud compute.

madrox•2h ago
I've been thinking about this as well, and I'm glad the author is talking about it. However, I don't think he took it far enough.

It is correct to say there's near-infinite demand for AI, and supply is limited. It stands to reason that wealthier people will pay more, and therefore get more, out of AI.

However, this has always been true; historically it was just workers instead of AI. The economics of labor haven't changed. So it will, as always, be a game of how you deploy the workers you hire. Are you generating useless morning briefs, or are you actually generating value for yourself and others with the AI you buy? If you generate more value than the tokens you burn, you'll get ahead.

This will be true in academia as well, the area of interest to the author. He writes as if, before AI, grad-student-level intelligence came for free.

Ok, wait, sorry, bad example...

dimgl•1h ago
The only way this happens is if models that are specifically made to do certain kinds of coding start to exist. Then this would start to become an issue, yes, until those models are distilled into smaller models.
raincole•1h ago
> The top tier subscription prices are increasing exponentially

"Let's just make random shit up and expand it into a whole blog post."

Seriously does anyone believe this premise? The Claude Max ($200/mo) is the same kind of product as Github Copilot ($10/mo) so the price 20x-ed?

cactusplant7374•1h ago
OpenAI doubled Codex limits until April. If there is an issue with their platform they reset the limits early. This happened many times in December. They also added the 5.3 Spark model that has its own limit!

The author doesn't even mention Codex, even though it will likely outcompete Claude Code.

9cb14c1ec0•1h ago
I don't agree. There are a lot of inference performance improvements to make. I think the cost of inference continues to fall, and pretty much every application of AI becomes a commodity with brutal competition.
yieldcrv•1h ago
traders use bloomberg terminals at $30,000/yr

they don't strictly have to, aside from the industry having gone that direction and the stickiness of communicating through it, but it simplifies some of their job

it didn't revolutionize trading or make it more democratized, despite simplifying some aspects of the industry

the technology could have but it remains a specialized tool

that's the way I see agentic coding tools, and the trend is following it

once the UX designers, PMs and ideas guys get bored of their newfound SaaS slop capabilities, it will be back to specialists doing this and nobody else

viblo•1h ago
Regardless of whether the exponential trend the author writes about is correct, I do think the cost of AI is reversing the trend for coding. For quite some time our tools have become cheaper and more available than before. Nowadays compilers, IDEs, and other tools are increasingly open source, or at least free or very cheap. But with AI, that's no longer the case.

I wrote more about this in a blog, at https://www.viblo.se/posts/ai-hobbycoding/

profstasiak•1h ago
Idiots will pay for AI to kill their skills
overgard•1h ago
I saw a quote today that resonated with me:

"The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth. --@jeffowski"

While I don't think that's the only purpose, I can't help but think that people that become dependent on these tools will have neither wealth nor skill. Keep your skills sharp!

simianwords•1h ago
The same thing could be said about the personal computer.
floatrock•1h ago
That just sounds like "controlling the means of production" with more clever wordplay.
overgard•4m ago
Not really. The "means of production" in software has basically been free (other than the cost of a computer) for like 20+ years.
reenorap•1h ago
I predicted months ago that $20/month isn't going to fly anymore. I think if it produces code, it will jump to $1000/month at least. The value of the LLM nowadays is much higher than $1000/month and I think we will see that happen in 2026, because these companies need to make money ASAP in order to get more funding for the next round of training.
barrkel•1h ago
Demand (value created) isn't enough to make prices rise. You need supply to be constrained. If there was only one competent coding model, I'd be worried. But between competition and open weight models, we're not looking constrained on the supply side any time soon.
llm_nerd•1h ago
I agree with the core assumption (to a point -- there is a point of diminishing returns where pretty excellent tools are cheap), but this line is ridiculous:

"the cheapest usable tier of Claude Code is $100/mo"

Bullshit. I have the $20 plan and seldom hit the quota. I used to hit the distinct Opus quota, but now that isn't separate I just don't anymore. Even enabled the extra quota charging and have never paid a penny more.

And to be clear, to most people I'm a pretty heavy user. Like, practically it has a heavy influence on my day to day work, and is an amazing contributor to my functions.

The people who think only the $100+ tier is "usable" are often (albeit not always) the people doing the worthless but "forward-thinking" nonsense, throwing millions of tokens around aspirationally. Like the OpenClaw nonsense is 99.99% worthless filler where people chased a productivity hack that in reality is just hobbyist silliness.

It's token shredding for almost no value, done to show that they're with it. People gloating about their swarms of agents doing effectively nothing are more of the "I need to max out everything" crowd. These are the ones who yield the result that AI has no benefit to productivity, as they overdo something to such ridiculous extremes.

The same for the laughably poorly thought out MCP servers that flood a service with a quarter million tokens for negligible value. So much insanely poorly considered nonsense is in use, to the great glee of the AI companies. And, I mean, I guess I should thank these people for basically subsidizing it for the rest of us.

The rest of us are surgically applying AI precisely, to incredible effect. The cheap plans are ridiculously valuable.

jatari•1h ago
You get plenty from the $20 a month claude subscription. Just don't expect to leave it running on its own for hours a day.
ekjhgkejhgk•1h ago
None of this matters. Open weights will continue to be released. I don't need to have the absolute greatest LLM. I run Qwen3 locally and that's not even the best Qwen.
relaxing•1h ago
How’s that going for you? Honestly, I’d like to read a review.
bambax•1h ago
It's possible that prices will go up (although the cost of pure inference tends to go down; the question is more how to amortize the cost of training new models).

But this is absurd:

> the cheapest usable tier of Claude Code is $100/mo

If you pay by the token instead of with a subscription, and don't send the entirety of your code base with each request, costs are ridiculously low. Like, $50 will last a minimum of 3 months of heavy use on openrouter.

It's also far from certain everyone needs the latest version of the best "frontier model"; it very much depends on what you do.
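The back-of-envelope here can be sketched as follows; the per-million-token rates and daily token counts are illustrative assumptions, not OpenRouter's actual pricing:

```python
# Rough monthly-cost estimate for pay-per-token API use.
# All rates and token counts below are illustrative assumptions.

def monthly_cost(in_toks_per_day, out_toks_per_day,
                 in_price_per_m, out_price_per_m, days=30):
    """Monthly spend from daily token usage and $/1M-token rates."""
    daily = ((in_toks_per_day / 1e6) * in_price_per_m
             + (out_toks_per_day / 1e6) * out_price_per_m)
    return daily * days

# e.g. 2M input + 200k output tokens a day at $0.50 / $2.00 per 1M tokens
print(f"${monthly_cost(2_000_000, 200_000, 0.50, 2.00):.2f}/month")
```

This prints "$42.00/month": not sending the whole codebase with each request is exactly what keeps the input-token term small.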

cactusplant7374•1h ago
I am imagining working for a company in the future where prompt reviews are required because the company is cheap.
barrkel•1h ago
$50 will last you a good long time if you don't use many tokens, and you are judicious about which model you use, and don't need web search much.

However, on a fixed price plan your behavior changes. It's a qualitative change in how you work, rather than quantitative. Ideation and product design and specification start becoming bottlenecks.

I started out on the API route. I started spending $100 a month once I was spending upwards of $10 in tokens a session.

EliRivers•1h ago
"There was a time when everyone used Github Copilot."

There was no such time. Even if everyone means "every software engineer" or any variation thereof, and we substitute any other such tool for GC.

firefoxd•1h ago
I used to take Uber to work daily in 2016. It cost around $3 to $4 for a 5-mile ride. Now the same ride costs $24 [0]. There's no indication that AI coding tools won't follow the same path, given they are funded by VC.
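The implied rate of increase in that comparison can be worked out directly (taking roughly $3.50 then versus $24 now over about ten years; rough readings of the numbers above, not exact data):

```python
# Implied compound annual price growth for the Uber example above.
# $3.50 and 10 years are rough readings of the comment, not exact data.

start_price, end_price, years = 3.50, 24.0, 10

cagr = (end_price / start_price) ** (1 / years) - 1
print(f"~{cagr:.0%} per year")  # roughly a 21% annual price increase
```

A roughly 7x increase over a decade works out to about 21% compounded per year, which is the kind of post-subsidy repricing the comment is warning about.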

But I think what matters is that the new generation of coders will adopt it as the norm. Gone are the days when you download a free text editor and just trial-and-error with the documentation one tab away. Every bootcamp is teaching React with Claude and Cursor. You have to pay for a subscription to build your BMI calculator.

[0]: https://idiallo.com/blog/paying-for-my-8-years-old-ride

ottah•1h ago
This chart is missing all of the non-US competition, which, while not at the top, is always right behind the flagship models. These competitors also have much lower inference costs, due to model architectures built with a focus on efficiency. Silicon Valley is building big-block Chevys, while China is making Datsuns.
stephc_int13•1h ago
This is very early.

If there is an AI boom, what we're seeing is its infancy. Semi-autonomous coding is the first and most natural use case, thanks to the vast amount of training material, the opportunities for closed-loop RL with minimal human supervision, and the eagerness of the community to try and embrace new tools.

But it is still not much more than a QoL improvement at this stage, plus maybe some velocity gains for the most hardcore users willing to spend time and money to stay at the bleeding edge.

But there is also a rather large appetite for local models, I am not sure the future of AI will be 100% cloud based.

sparkler123•1h ago
I was continually hitting quota on the $20/mo Claude sub. I started doing the "pay for extra tokens" thing when I did hit it, but just upgrading proved to be far more cost effective. I upgraded to the $200/mo Max subscription and have been using almost exclusively Opus and barely get to 25% quota in any session (a couple times I got over 50% but I was having it go wild in concurrent sessions). I could probably downgrade to the $100/mo one and be fine, though.

Sounds like a lot, but in the few weeks I've had it, I was able to complete two projects I had given up on due to not having time in the past. I re-jiggered some other monthly subscriptions so the net cost wasn't ultimately that much more than what I was paying previously. I also weighed it against buying something like a DGX Spark for local inference, but ultimately I don't want to mess with serving models (and the ones available just aren't as good, realistically), I just want a good one that works.

I probably can't justify much more than $200/mo, but for what I get out of it, I'm happy to pay it. I've done more in the past few weeks on side projects than I had in a couple years.

samiv•1h ago
I expect that in the "post-scarcity" world where the capital class doesn't need the majority of human labor for anything, most people will be priced out of everything. Including the basic necessities.

But sure, let's just keep automating ourselves out of jobs (and helping other industries do it too) with no plan for how to help all the displaced people.