
Changes to OpenTTD Distribution on Steam

https://www.openttd.org/news/2026/03/14/steam-changes
42•canpan•46m ago•10 comments

Claude March 2026 usage promotion

https://support.claude.com/en/articles/14063676-claude-march-2026-usage-promotion
135•weldu•2h ago•81 comments

Show HN: Han – A Korean programming language written in Rust

https://github.com/xodn348/han
46•xodn348•1h ago•8 comments

Anthropic invests $100M into the Claude Partner Network

https://www.anthropic.com/news/claude-partner-network
21•gmays•1h ago•5 comments

Learning Creative Coding

https://stigmollerhansen.dk/resume/learning-creative-coding/
10•ammerfest•48m ago•0 comments

Fedora 44 on the Raspberry Pi 5

https://nullr0ute.com/2026/03/fedora-44-on-the-raspberry-pi-5/
36•jandeboevrie•2h ago•6 comments

Marketing for Founders

https://github.com/EdoStra/Marketing-for-Founders
72•jimsojim•3h ago•15 comments

Show HN: Ichinichi – One note per day, E2E encrypted, local-first

46•katspaugh•3h ago•19 comments

Library of Short Stories

https://www.libraryofshortstories.com/
12•debo_•2h ago•0 comments

Bumblebee queens breathe underwater to survive drowning

https://www.smithsonianmag.com/science-nature/bumblebee-queens-breathe-underwater-to-survive-drow...
9•1659447091•2h ago•0 comments

Montana passes Right to Compute act (2025)

https://www.westernmt.news/2025/04/21/montana-leads-the-nation-with-groundbreaking-right-to-compu...
227•bilsbie•8h ago•195 comments

A Recursive Algorithm to Render Signed Distance Fields

https://pointersgonewild.com/2026-03-06-a-recursive-algorithm-to-render-signed-distance-fields/
22•surprisetalk•3d ago•3 comments

CSMWrap: Legacy BIOS booting on UEFI-only systems via SeaBIOS

https://github.com/CSMWrap/CSMWrap
24•_joel•4d ago•3 comments

An ode to bzip

https://purplesyringa.moe/blog/an-ode-to-bzip/
77•signa11•6h ago•48 comments

Baochip-1x: What it is, why I'm doing it now and how it came about

https://www.crowdsupply.com/baochip/dabao/updates/what-it-is-why-im-doing-it-now-and-how-it-came-...
255•timhh•3d ago•42 comments

Offloading FFmpeg with Cloudflare

https://kentcdodds.com/blog/offloading-ffmpeg-with-cloudflare
11•heftykoo•4d ago•4 comments

MCP is dead; long live MCP

https://chrlschn.dev/blog/2026/03/mcp-is-dead-long-live-mcp/
90•CharlieDigital•3h ago•80 comments

Show HN: GitAgent – An open standard that turns any Git repo into an AI agent

https://www.gitagent.sh/
80•sivasurend•9h ago•10 comments

Python: The Optimization Ladder

https://cemrehancavdar.com/2026/03/10/optimization-ladder/
241•Twirrim•4d ago•87 comments

An interactive presentation about the Grammar of Graphics

https://timeplus-io.github.io/gg-vistral-introduction/
3•gangtao•3d ago•0 comments

Hostile Volume – A game about adjusting volume with intentionally bad UI

https://hostilevolume.com/
65•Velocifyer•4h ago•49 comments

9 Mothers Defense (YC P26) Is Hiring in Austin

https://jobs.ashbyhq.com/9-mothers?utm_source=x8pZ4B3P3Q
1•ukd1•9h ago

Postgres with Builtin File Systems

https://db9.ai/
6•ngaut•1h ago•0 comments

Generalizing Knuth's Pseudocode Architecture From Algorithms to Knowledge

https://www.researchgate.net/publication/401189185_Towards_a_Generalization_of_Knuth%27s_Pseudoco...
24•isomorphist•3d ago•1 comment

Sunsetting Jazzband

https://jazzband.co/news/2026/03/14/sunsetting-jazzband
115•mooreds•5h ago•40 comments

Starlink militarization and its impact on global strategic stability

https://interpret.csis.org/translations/starlink-militarization-and-its-impact-on-global-strategi...
101•msuniverse2026•13h ago•133 comments

XML is a cheap DSL

https://unplannedobsolescence.com/blog/xml-cheap-dsl/
220•y1n0•10h ago•231 comments

It's time to move your docs in the repo

https://www.dein.fr/posts/2026-03-13-its-time-to-move-your-docs-in-the-repo
79•gregdoesit•3h ago•54 comments

Megadev: A Development Kit for the Sega Mega Drive and Mega CD Hardware

https://github.com/drojaazu/megadev
112•XzetaU8•13h ago•7 comments

GIMP 3.2 released

https://www.gimp.org/news/2026/03/14/gimp-3-2-released/
182•F3nd0•2h ago•46 comments

Claude March 2026 usage promotion

https://support.claude.com/en/articles/14063676-claude-march-2026-usage-promotion
134•weldu•2h ago

Comments

colingauvin•1h ago
Presumably they have unused compute in those hours and figure they may as well enable people to use it and get more invested into their ecosystem.

What I wish Anthropic would do is be a lot more explicit about what windows apply when. Surely they have the data to say "you get X usage from hours A to B, Y usage from B to C"

timmg•1h ago
I’m trying to figure out how this affects weekly limits, since those overlap peak hours. My observation is that it doesn’t. But I could be wrong.

If they are doing it “right” I think any off peak usage should count 50% toward your weekly limits.

Edit: it does look like they are doing it the "right" way.

itsyonas•1h ago
> Does bonus usage count against my weekly usage limit?

> No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan.

linolevan•1h ago
Oops! Looks like we posted at the same time.
linolevan•1h ago
> Does bonus usage count against my weekly usage limit?

> No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan.

timmg•1h ago
I just watched my "weekly limit" get used while I ran a claude code command.

I'm not sure how to square that with the quote you gave.

jakubadamw•1h ago
Did you exhaust the five-hour usage limit already? As I understand it, the "additional usage" refers to anything beyond the standard five-hour usage limit.
lxgr•10m ago
So the first 100% of 5-hour usage are billed against weekly usage at normal rates, but the second additional 100% are not counted?
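
The accounting this sub-thread is circling can be sketched as a tiny function. This is purely an interpretation of the FAQ quoted above (peak usage counts in full; off-peak, only usage up to the standard five-hour cap counts toward the weekly limit, and the doubled "bonus" portion does not); the 100-unit cap and the numbers are illustrative assumptions, not Anthropic's actual metering.

```python
def weekly_counted(session_usage: float, standard_cap: float = 100.0,
                   off_peak: bool = False) -> float:
    """How much of a session's usage counts toward the weekly limit.

    Interpretation of the FAQ quoted in this thread: peak usage counts
    in full; off-peak, only usage up to the standard cap counts, and the
    doubled "bonus" portion beyond it is excluded. Cap is illustrative.
    """
    if not off_peak:
        return session_usage  # peak usage counts in full
    return min(session_usage, standard_cap)

print(weekly_counted(80, off_peak=True))   # 80: under the cap, counts in full
print(weekly_counted(150, off_peak=True))  # 100.0: only the first 100 count
```
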
yokoprime•1h ago
all weekend is off-peak
twtw99•1h ago
This is great, but I guess they are feeling the heat from Codex resetting limits in the last month quite a bit.
stavros•1h ago
I think they're feeling the heat from growing too quickly so they want to incentivize people to spread the load more evenly.
toomuchtodo•1h ago
Very much like electric utility time of day pricing, using economic incentives to shift demand to trough periods.

Perhaps an opportunity for them to improve workload scheduling orchestration, like submitting a job to a distributed computing cluster queue, to smooth demand and maximize utilization.

stavros•1h ago
Everything bursty will use economic incentives to smooth the load. I'm not sure how they'd do that with workload scheduling orchestration when you have latency-sensitive loads and there are e.g. twice as many requests at midday as at midnight.
toomuchtodo•40m ago
You decouple the workloads from human interaction (i.e. when you submit the job to the queue vs. when it is scheduled to execute) so that when they run is not a consideration, if possible. The economic incentives encourage solving this, and if it can't be solved, it buckets customer cohorts by willingness (or unwillingness) to pay for access during peak times.
stavros•37m ago
Sure, but if I ask the LLM a question, I'd like it to respond now, instead of tonight.
toomuchtodo•30m ago
Certainly, interactive workloads aren’t realistic for time shifting, but agentic coding likely is. Package everything up and ship it as a job, getting a bundle back asynchronously.
stavros•27m ago
I don't know, my agentic coding is pretty interactive. Maybe once the plan is done, sure. That would be interesting, though OpenAI already does this with batch workloads.
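
The "package everything up and ship it as a job" idea in this sub-thread can be sketched as a minimal priority queue that holds latency-tolerant agent jobs until an off-peak window opens. The window hours here are illustrative assumptions; nothing in this sketch reflects Anthropic's actual scheduling.

```python
import heapq
from dataclasses import dataclass, field

# Assumed off-peak window (local hours); purely illustrative.
OFF_PEAK_HOURS = set(range(0, 8)) | set(range(22, 24))

@dataclass(order=True)
class Job:
    priority: int                      # lower number runs first
    name: str = field(compare=False)   # excluded from heap ordering

class OffPeakQueue:
    """Hold submitted jobs; only release them during off-peak hours."""

    def __init__(self) -> None:
        self._heap: list[Job] = []

    def submit(self, job: Job) -> None:
        heapq.heappush(self._heap, job)

    def drain(self, current_hour: int) -> list[str]:
        if current_hour not in OFF_PEAK_HOURS:
            return []  # hold everything until rates drop
        done = []
        while self._heap:
            done.append(heapq.heappop(self._heap).name)
        return done

q = OffPeakQueue()
q.submit(Job(2, "refactor-module"))
q.submit(Job(1, "nightly-test-triage"))
print(q.drain(current_hour=14))  # peak: [] — nothing runs
print(q.drain(current_hour=23))  # off-peak: jobs run in priority order
```
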
Analemma_•1h ago
The insanely competitive market for LLMs is great for us, but if I were one of the investors in these companies it wouldn't exactly fill me with confidence that my $500 billion spent on datacenters and Nvidia cards is going to get repaid ten times over like they're claiming. I'm still getting very strong "this is a commodity; margins will be driven inexorably to zero" vibes from these products.
estebarb•1h ago
I didn't understand "your five-hour usage". I thought plans were per interaction or per token, not per hour.
stavros•1h ago
There's a limit that resets every five hours and one that resets every week.
rmi_•1h ago
My usage only shows daily and weekly, though. I never got that.
stavros•1h ago
It has "current session" and "weekly". If you notice, "current session" is never more than five hours away from expiration.
rmi_•1h ago
Oh, you're right. I don't know why I've always misread "current session" as daily.

Thanks for clearing that up. It'll help me schedule stuff in the future.

minimaxir•49m ago
For Claude Code, you use up 12% of your weekly allotment every session, so 8 sessions per week.

If you are only using a session a day, you're wasting a session. :)

bpodgursky•1h ago
You can pay either for API usage or a fixed monthly plan (which is way cheaper but you can't use it for applications, just personal use).
michaelhoney•1h ago
Living in Tasmania as competitive advantage
burticlies•23m ago
Tassie represent
JoshGlazebrook•1h ago
I just know there has to be some psychology in play with these promos. The promo during December got me to upgrade to the $100 plan, and I know I'm not the only one.
Analemma_•1h ago
There's definitely psychology in play, but I think it might be less "trying to get you to spend more" and more "trying to incentivize load-shifting", which (to me at least) is a lot less sinister-- my utility does this too for electricity, and nobody attributes malicious intent to it.

We all know these services see huge load spikes and sometimes service degradation when America wakes up, and I bet they'd appreciate it if as many "chug-and-plug" agent workflows moved to overnight hours as possible.

sobjornstad•1h ago
My assumption was always that the December promo was a combination – they were presumably way under capacity because everyone was on holiday given how enterprise-heavy they are, so giving people a bunch of extra usage with a loud promo meant a whole bunch of people would try Claude and see how good it had gotten at very little cost to Anthropic.
llm_nerd•1h ago
The psychology is to hook you on the usage. A lot of people see a little movement in the usage meter and get cold feet about heavy usage. The prior $70 credit deal and now this offering are to try to get people to dive in, and hopefully retain that usage pattern afterwards.
operatingthetan•1h ago
Anthropic's models are obviously superior at coding right now but using 2-3 $20 accounts between different providers is still a very effective way to get good value. Gemini CLI and Codex seem to be at least 2x more permissive on usage. The models are good enough.

Plus we are technologists, we want to try out different stuff and compare.

llm_nerd•56m ago
That's precisely what I do, with subscriptions to all of them. Gemini almost seems unlimited...like I never hit limits with it. Don't even know how to check my usage for the subscription plans on that.

But increasingly I'm using Claude for basically all real coding. I ask Gemini and Codex questions, but I'm honestly in awe at Opus' ridiculous capabilities.

hermanzegerman•50m ago
/stats session shows you the remaining quota in Gemini CLI and when the quota resets, and they dropped the quota badly in the last few days.

Before that I would totally agree with you, it felt really endless

3rodents•53m ago
I suspect it’s much more about understanding user behavior, i.e: given more allowance off-peak, do users change when they use Claude? And from there, that will inform how plans are designed long term. If they discover that offering higher off-peak limits meaningfully changes how/when users interact with the service, they can use discounted off-peak plans to flatten usage. I would be very surprised if this promotion had anything to do with encouraging people to upgrade.
samdjstephens•43m ago
Interesting - the first thing my mind went to was the DoD supply chain risk designation, and wanting to boost metrics to calm investors nerves
sigmar•20m ago
You're probably right. I've been thinking about why Anthropic's revenue keeps soaring. In terms of "new users trying the product" we're definitely somewhere in the slowing part of the S-curve (at least in the US), but there are other growth contributors. Two big ones are people finding new use-cases and people figuring out how to scale up current use-cases to use more tokens. Perhaps little temporary usage boosts like this give people permission to attempt new use-cases or more scale, and realize they could use a higher-tiered plan.
UltraSane•15m ago
I found the $250 in free credit for Claude Code hard to actually use before it expired. I think I got down to less than $50.
candeira•1h ago
Australia here we come.
egeozcan•1h ago
So afternoon in Germany or am I misreading?
trelbutate•1h ago
Outside 4pm to 10pm
pdpi•1h ago
DST shenanigans aside (we're in the "US has changed but Europe hasn't" window), 10:00 in SF is 18:00 in London. Meaning their peak time window is 13:00–19:00 London time, or 14:00–20:00 Berlin time.

So us European folks get promotional rates during the morning and evening.

EDIT: Actually, because the promo ends at the end of March, it'll all be within DST shenanigans. So peak times are 12:00–18:00 London, 13:00–19:00 Berlin.
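
The DST arithmetic in this sub-thread is easy to get wrong by hand; the standard-library zoneinfo does it mechanically. The 16:00–22:00 America/New_York peak window below is taken from the "4pm to 10pm" comment above and is an assumption, not Anthropic's published schedule.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ET = ZoneInfo("America/New_York")

def to_tz(hour: int, tz: str, day: datetime = datetime(2026, 3, 20)) -> str:
    """Convert an ET wall-clock hour on `day` to another zone's wall clock."""
    return day.replace(hour=hour, tzinfo=ET).astimezone(ZoneInfo(tz)).strftime("%H:%M")

# March 20, 2026 sits in the mismatch window: the US switched to DST on
# March 8 but Europe doesn't switch until March 29, so ET is UTC-4 while
# London is still UTC+0 and Berlin UTC+1.
for tz in ("Europe/London", "Europe/Berlin"):
    print(tz, to_tz(16, tz), "-", to_tz(22, tz))
```
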

andkenneth•1h ago
This is a psyop to recruit more Australians I'm sure of it
leothelion_•43m ago
Can't complain honestly!
Freedom2•1h ago
I believe Claude is still designated a supply chain risk by the United States government. Whether this affects usage of it or not, that's up to each individual, but it's definitely a curious fact (by HN standards).
embedding-shape•1h ago
That sounds to me similar to "Telegram banned by Russian government", more of a seal of approval than anything.
minimaxir•34m ago
That is only relevant if you are in the government/military. The US government has not made using Claude Code a crime, yet.
blondie9x•1h ago
These promos should be based on when more renewable energy is available for inference, not on when fewer people are likely to be using the AI. We need to shift usage to when supply is more renewable, for both training and inference, to better protect our grid and the planet.
Analemma_•1h ago
Long ago in the ancient days of punchcards and IBM mainframes, you’d write your programs during the day, then submit them to run overnight and pick up your results in the morning. It would be funny and sort of romantic if time-based LLM pricing returned us to that: write your specs all day, run agents on them overnight, check out the results in the morning.
cyanydeez•57m ago
I find that incredibly optimistic.
jimmytucson•44m ago
They have this. It’s called batch pricing and it’s 50% off.
walthamstow•1h ago
Dear line manager, I will be taking a very long lunch 12-6pm in London's Chinatown then heading back to the office half cut to vibe code
unglaublich•1h ago
Ah crap I was hoping to benefit more of my sub because I'm in an off-hours tz.
rvz•1h ago
> After March 27, 2026, usage limits return to their standard levels at all hours. There’s no change to your plan or billing.

Translation: Give the gamblers and vibe coders free $20 bets on a spin at the casino until March 27, 2026.

reilly3000•21m ago
This. The best way to win a net promoting customer is to show them that given more tokens, you can do more amazing things, by giving them something they want that looks amazing (at first glance). They then feel indebted and grateful, and go off to show what they have made. Paying greater sums feels to them like gaining greater leverage.

I dunno y’all; feels like free drug samples. Who would ever think of coding without it?

tiku•58m ago
I still hate Claude for turning down limits. I use z.ai in Claude code now, haven't hit the limit yet.
daemonologist•49m ago
Would be cool to have a $5-10/month plan that only works off-peak, for people who want to do the occasional side project after work. Right now it's hard to justify anything but Copilot (because it's cheaper, offers the same models, and I'm nowhere near the usage limits).
nycdatasci•42m ago
You’re not using Claude Code?
nikcub•39m ago
the $20 pro plan would also have double offpeak limits - just set it to sonnet and you'll get a reasonable level of output
salomonk_mur•36m ago
Hard to justify? 20/month for like 5x output is a great deal (be it Claude or Codex or whatever), even if it lasts only 2-3 hours per day.
mavilia•27m ago
I canceled my plan today and wrote my reason as: now that I have a job again I don’t have the time or needs for the pro plan. If there was a $5 a month option, I would gladly take it to make use of Opus for my rare side ideas.
szatkus•3m ago
Pay as you go. I never spent more than $10/month working on my side project (usually a few evenings per month).
lxgr•23m ago
I suspect that any GPU cycle not spent on inference will just be dedicated to training (which as I understand it can “soak up” essentially unlimited compute at constant value per token), and I’d not expect to see time-based billing until that changes.
paulddraper•18m ago
Claude Pro is $20/month.
dist-epoch•48m ago
They are learning from Codex

https://hascodexratelimitreset.today

AussieWog93•45m ago
That is doubled usage between 5AM and 11PM for anyone playing along from Sydney/Melbourne.
canpan•38m ago
JST here, it's basically all day.
qwertyuiop_•35m ago
I guess extra compute opened up after they were canned by Department of War.
speakbits•34m ago
Is this going to cause another outage?
rokhayakebe•30m ago
This company is clearly on a mission. I would just like to know what that mission is. I mean this in a good way.
gslin•28m ago
Using timezone not UTC for a global service is a crime, especially mixed with daylight saving.
delduca•24m ago
Wtf is ET? Is an alien time?
CharlesW•19m ago
My fellow Californians would agree that, yes, ET is an alien time. https://en.wikipedia.org/wiki/Eastern_Time_Zone
Footprint0521•24m ago
changes sleep schedule
phendrenad2•22m ago
I don't really understand why AI providers don't charge like the electric company, or AWS. Instead of increasing usage limits, just charge less for off-hours use.
lxgr•9m ago
LLM inference is much more geographically fungible than electricity, so maybe it’s just not worth the complexity yet and there is enough (not highly latency sensitive) load on average globally.
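
The electric-utility analogy maps directly onto a per-token time-of-use tariff. A minimal sketch, with made-up rates and an assumed peak window (nothing here is an actual provider's pricing):

```python
PEAK_RATE = 1.0      # $ per million tokens during peak hours (illustrative)
OFF_PEAK_RATE = 0.5  # discounted off-peak rate (illustrative)
PEAK_HOURS = range(13, 19)  # assumed peak window, local time

def token_cost(millions_of_tokens: float, hour: int) -> float:
    """Charge the peak or off-peak rate depending on the hour of use."""
    rate = PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE
    return millions_of_tokens * rate

print(token_cost(10, hour=15))  # 10.0: peak price
print(token_cost(10, hour=23))  # 5.0: off-peak discount
```
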
podviaznikov•18m ago
Travelling salesman problem in 2026 is Travelling Engineer Problem to find optimal location to maximize tokens usage.
johnhamlin•17m ago
AI psychosis intensifies
MagicMoonlight•9m ago
But the best part is, those usage levels are hidden, arbitrary, and they change them all the time.

So they could “double” your usage by keeping it the same and then simply halving peak usage.

megadragon9•8m ago
Interesting to see more demand shaping mechanisms applied to LLM inference. Even though the "batch processing" feature is already available. I guess this "promotion" is to test the hypothesis of sliding along the spectrum towards more "real-time" demand shaping.