
Researchers develop ‘transparent paper’ as alternative to plastics

https://japannews.yomiuri.co.jp/science-nature/technology/20250605-259501/
165•anigbrowl•5h ago•67 comments

A year of funded FreeBSD development

https://www.daemonology.net/blog/2025-06-06-A-year-of-funded-FreeBSD.html
184•cperciva•7h ago•67 comments

The time bomb in the tax code that's fueling mass tech layoffs

https://qz.com/tech-layoffs-tax-code-trump-section-174-microsoft-meta-1851783502
580•booleanbetrayal•2d ago•402 comments

How we decreased GitLab repo backup times from 48 hours to 41 minutes

https://about.gitlab.com/blog/2025/06/05/how-we-decreased-gitlab-repo-backup-times-from-48-hours-to-41-minutes/
349•immortaljoe•11h ago•138 comments

Falsehoods Programmers Believe About Aviation

https://flightaware.engineering/falsehoods-programmers-believe-about-aviation/
57•cratermoon•4h ago•14 comments

Medieval Africans Had a Unique Process for Purifying Gold with Glass (2019)

https://www.atlasobscura.com/articles/medieval-african-gold
58•mooreds•4h ago•21 comments

What "Working" Means in the Era of AI Apps

https://a16z.com/revenue-benchmarks-ai-apps/
41•Brysonbw•4h ago•22 comments

Sharing everything I could understand about gradient noise

https://blog.pkh.me/p/42-sharing-everything-i-could-understand-about-gradient-noise.html
10•ux•12h ago•0 comments

Highly efficient matrix transpose in Mojo

https://veitner.bearblog.dev/highly-efficient-matrix-transpose-in-mojo/
78•timmyd•7h ago•26 comments

I Read All of Cloudflare's Claude-Generated Commits

https://www.maxemitchell.com/writings/i-read-all-of-cloudflares-claude-generated-commits/
38•maxemitchell•4h ago•22 comments

The Illusion of Thinking: Understanding the Limitations of Reasoning LLMs [pdf]

https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
119•amrrs•8h ago•53 comments

Sandia turns on brain-like storage-free supercomputer

https://blocksandfiles.com/2025/06/06/sandia-turns-on-brain-like-storage-free-supercomputer/
160•rbanffy•11h ago•55 comments

Odyc.js – A tiny JavaScript library for narrative games

https://odyc.dev
186•achtaitaipai•13h ago•42 comments

Show HN: AI game animation sprite generator

https://www.godmodeai.cloud/ai-sprite-generator
46•lyogavin•7h ago•39 comments

A masochist's guide to web development

https://sebastiano.tronto.net/blog/2025-06-06-webdev/
180•sebtron•13h ago•23 comments

A leaderless NASA faces its biggest-ever cuts

https://www.economist.com/science-and-technology/2025/06/04/a-leaderless-nasa-faces-its-biggest-ever-cuts
61•libraryofbabel•9h ago•38 comments

Smalltalk, Haskell and Lisp

https://storytotell.org/smalltalk-haskell-and-lisp
50•todsacerdoti•6h ago•17 comments

Onyx (YC W24) – AI Assistants for Work Hiring Founding AE

https://www.ycombinator.com/companies/onyx/jobs/Gm0Hw6L-founding-account-executive
1•yuhongsun•6h ago

Workhorse LLMs: Why Open Source Models Dominate Closed Source for Batch Tasks

https://sutro.sh/blog/workhorse-llms-why-open-source-models-win-for-batch-tasks
41•cmogni1•8h ago•12 comments

Meta: Shut down your invasive AI Discover feed

https://www.mozillafoundation.org/en/campaigns/meta-shut-down-your-invasive-ai-discover-feed-now/
439•speckx•11h ago•189 comments

Too Many Open Files

https://mattrighetti.com/2025/06/04/too-many-files-open
98•furkansahin•11h ago•81 comments

Series C and scale

https://www.cursor.com/en/blog/series-c
58•fidotron•9h ago•47 comments

Why You Should Move Your Site Away from Weebly (YC W07)

https://www.articulation.blog/p/why-you-should-move-your-site-away-from-weebly
8•dustywusty•3h ago•2 comments

Curate your shell history

https://esham.io/2025/05/shell-history
98•todsacerdoti•13h ago•64 comments

SaaS is just vendor lock-in with better branding

https://rwsdk.com/blog/saas-is-just-vendor-lock-in-with-better-branding
167•pistoriusp•8h ago•87 comments

Wendelstein 7-X sets new fusion record

https://www.heise.de/en/news/Wendelstein-7-X-sets-new-fusion-record-10422955.html
124•doener•3d ago•13 comments

United States Digital Service Origins

https://usdigitalserviceorigins.org/
128•ronbenton•7h ago•58 comments

What you need to know about EMP weapons

https://www.aardvark.co.nz/daily/2025/0606.shtml
114•flyingkiwi44•16h ago•141 comments

4-7-8 Breathing

https://www.breathbelly.com/exercises/4-7-8-breathing
198•cheekyturtles•11h ago•82 comments

Researchers find a way to make the HIV virus visible within white blood cells

https://www.theguardian.com/global-development/2025/jun/05/breakthrough-in-search-for-hiv-cure-leaves-researchers-overwhelmed
184•colinprince•10h ago•23 comments

An Interactive Guide to Rate Limiting

https://blog.sagyamthapa.com.np/interactive-guide-to-rate-limiting
120•sagyam•12h ago

Comments

fside•11h ago
I wonder if anyone has switched algorithms after hitting real-world scaling issues with one of these? Curious if there are any “gotchas” that only show up at scale. I only have experience with fixed window rate limiting.
eknkc•11h ago
We used a token bucket to allow, say, 100 requests immediately, but the limit would actually replenish at 10 per minute or something. Makes sense to allow bursts. This was to allow free-tier users to test things. Unless they go crazy, they would not even notice a rate limiter.

Sliding window might work well with large intervals. If you have something like a 24h window, fixed window will abruptly cut things off for hours.

I mostly work with 1-minute windows, so it's fixed all the way.
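The burst-then-slow-refill behavior described above can be sketched in a few lines of Python (a toy illustration, not anything from the post; numbers and names are made up):

```python
import time

class TokenBucket:
    """Allow a burst of up to `capacity` requests, refilling at `rate` tokens/sec."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate            # tokens added per second (e.g. 10 / 60 for 10/min)
        self.tokens = capacity      # start full so new users get their full burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# TokenBucket(100, 10 / 60) allows ~100 requests up front,
# then roughly one every six seconds after the burst is spent.
```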

mparnisari•11h ago
We used leaky bucket IIRC and the issue I saw was that the distributed aspect of it was coded incorrectly and so depending on the node you hit you were rate-limited or not :facepalm:
hotpocket777•7h ago
So it wasn’t really implemented correctly then.
smadge•7h ago
I have experience with token bucket and leaky bucket (or at least a variation where a request leaves the bucket when the server is done processing it) to prevent overload of backend servers. I switched from token bucket to leaky bucket. Token bucket is “the server can serve X requests per second,” while leaky bucket is “the server can process N requests concurrently.” I found the direct limit on concurrency much more responsive to overload and better at controlling delay from contention of shared resources. This kind of makes sense: imagine if your server goes from processing 10 QPS to 5 QPS. If the server has a 10 QPS token bucket limit, it keeps accepting requests and the request queue and response time go to infinity.
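The concurrency-limit flavor described here can be sketched with a bounded semaphore (a toy illustration under my own naming, not the commenter's implementation):

```python
import threading

class ConcurrencyLimiter:
    """Cap requests in flight: 'the server can process N requests concurrently',
    rather than metering an arrival rate."""

    def __init__(self, max_in_flight):
        self._sem = threading.BoundedSemaphore(max_in_flight)

    def try_acquire(self):
        # Non-blocking: shed load instead of queueing when saturated.
        return self._sem.acquire(blocking=False)

    def release(self):
        # Call when the request finishes -- a 'drip' leaving the bucket.
        self._sem.release()

# Typical use in a request handler:
#   if not limiter.try_acquire():
#       return 429
#   try:
#       handle(request)
#   finally:
#       limiter.release()
```

Because admission is tied to completion, a slowdown in the backend automatically slows admission, which is the responsiveness-to-overload property the comment describes.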
mcdow•11h ago
Super cool. What did you use to build the interactive bits?
leoff•11h ago
It really looks AI generated
sagyam•10h ago
Yes, I usually prompt Claude, GPT, and DeepSeek with my rough vision and take ideas from all of them. They never quite get it right on their own. But for code that's deploy-and-forget, AI-generated code is good enough.
leoff•4h ago
you're 80% there, especially with functionality, but it needs some polishing
sagyam•10h ago
It was Shadcn, Tailwind.
onionbagle•11h ago
Curious to hear if anyone has implemented these and what technology was used.
tra3•9h ago
Hierarchical token buckets are part of the linux kernel for traffic management for instance: https://linux.die.net/man/8/tc-htb
chrisweekly•11h ago
Excellent dataviz.

Related tangent, "HPBN" (High-Performance Browser Networking) is a great book that includes related concepts.

https://hpbn.co/

softfalcon•11h ago
Seconded, this book goes hand-in-hand with "Designing Data-Intensive Applications" by Martin Kleppmann [0].

[0](https://www.oreilly.com/library/view/designing-data-intensiv...)

loevborg•7h ago
Thanks for sharing!
buggeryorkshire•11h ago
No mention of CGNAT which caused me many problems at a previous role?
sagyam•3h ago
Does CGNAT do rate limiting? If so, is there some documentation I can look up?
mdaniel•1h ago
I'm pretty sure GP means: all those users egress from a finite number of IPv4 addresses, and thus if rate limiting is done by IP, those behind the NAT are going to have a real bad time. It's true of all NAT setups, but the affected audience size for CGNAT could be outrageous.
jsw•11h ago
I’ve found the AIMD algo (additive increase, multiplicative decrease) paired with a token bucket gives a nice way to have a distributed set of processes adapt to backend capacity without centralized state.

Also found that AIMD is better than a circuit breaker in a lot of circumstances too.

Golang lib of the above https://github.com/webriots/rate
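The linked library is Go, but the core AIMD loop is small enough to sketch in Python (a hedged illustration with arbitrary parameter values, not the library's API):

```python
class AIMDRate:
    """Adapt a send rate the way TCP congestion control does: grow linearly
    while the backend succeeds, cut multiplicatively on errors or throttles.
    The resulting rate would typically feed a local token bucket."""

    def __init__(self, initial=10.0, step=1.0, factor=0.5,
                 floor=1.0, ceiling=1000.0):
        self.rate = initial    # current requests/sec for this process
        self.step = step       # additive increase per success
        self.factor = factor   # multiplicative decrease on failure
        self.floor = floor
        self.ceiling = ceiling

    def on_success(self):
        self.rate = min(self.ceiling, self.rate + self.step)

    def on_failure(self):
        self.rate = max(self.floor, self.rate * self.factor)
```

Each process only observes its own successes and failures against the shared backend, which is why no centralized state is needed: the fleet converges toward a fair share of whatever capacity actually exists.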

hxtk•11h ago
Something I’ve long wondered is why you never hear about rate limiting algorithms that are based on the cost to serve the request or algorithms that dynamically learn the capacity of the system and give everyone a fair share.

In the field of router buffer management, there are algorithms like Stochastic Fair Blue, which does the latter, but is somewhat hard to apply to HTTP because you’d have to define a success/failure metric for each request (a latency target, for example), and clients would have to tolerate a small probability a request being rejected even when they’re not being rate limited.

In Google’s paper on the Zanzibar authorization system, they give brief mention to rate limiting clients based on a CPU time allocation, but don’t go into any detail since it’s not a paper on rate limiting.

It’s something that matters less today with ubiquitous autoscaling, where the capacity of the system is whatever you need it to be to give each user what they ask for up to their rate limit, but I’m surprised at my inability to find any detailed account of such a thing being attempted.

ithkuil•10h ago
Yes, autoscaling is a thing, but it's rarely instantaneous; you'll still benefit from having a good handle on load fairness.

Furthermore, modern GPU workloads are far less elastic in capacity scaling.

ucarion•10h ago
Shuffle-sharding is similar to stochastic Blue stuff, and you'll find Amazon talking about it:

https://aws.amazon.com/builders-library/workload-isolation-u...

Which isn't exactly what you're talking about, but between that and other things in the "Builder's Library" series, you can see that people are doing this, and writing about it.

wonnage•9h ago
Envoy has a latency-based adaptive concurrency feature: https://www.envoyproxy.io/docs/envoy/latest/configuration/ht...

Netflix has a blog post for their implementation: https://netflixtechblog.medium.com/performance-under-load-3e...

remus•9h ago
My assumption would be that it is a complexity thing. As a consumer of the service having a rate limit that is easy to understand and write retry logic for is a big plus. If the criteria is "x requests per 5 minute window" and I start getting rate limit errors it's very clear what back off behaviour I need to implement. If the criteria is CPU usage of my requests, as a consumer it's hard for me to reason about how much CPU a given request is going to take so my retry logic is going to be fairly dumb.
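To illustrate why that contract is easy to program against, here is a toy fixed-window check that can report an exact retry-after (limits and names are made up for the sketch):

```python
import time

WINDOW = 300   # "x requests per 5 minute window"
LIMIT = 100
counts = {}    # window start -> request count (per client key in practice)

def check(now=None):
    """Return (allowed, retry_after_seconds)."""
    now = time.time() if now is None else now
    window = int(now // WINDOW) * WINDOW
    counts.setdefault(window, 0)
    if counts[window] < LIMIT:
        counts[window] += 1
        return True, 0
    # Rejected: the client knows exactly when the next window opens.
    return False, window + WINDOW - now
```

A CPU-based limit has no equivalent of that `retry_after` value: the client can't predict the cost of its next request, so its backoff logic degrades to blind guessing.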
jsw•9h ago
I mentioned this in another place on this thread, but a simple AIMD algorithm paired with a token bucket is surprisingly effective at dynamically adjusting to available capacity, even across a fleet of services not sharing state other than the contended resource.

Pretty easy to pair AIMD with token bucket (eg https://github.com/webriots/rate)

hinkley•6h ago
One of those times when I was still learning that asking forgiveness is easier than asking permission: I wanted to eliminate a very expensive presence calculation that I and a coworker determined was accounting for almost 10% of average page load time. Some idiot in product had decided they wanted an OLTP-ish solution that told you -exactly- how many people were online, and like a fool I asked if we could do a sane version and they said no. If you don't ask, then it's not insubordination.

For situations where eventual consistency is good enough, you can run a task in a loop that tries every n seconds to update a quantity. But as you say that can also saturate, so what you really want is for the task to update, then wait m seconds and go again, where m is more than the time you expect the task to complete in (<<50% duty cycle). As the cost of the operation climbs the time lag increases but the load on the system increases more slowly. If you want to, collect telemetry on how often it completes and set up alarms for if it doesn't for a duration that is several times longer than your spikiest loads happen.
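That wait-after-completion loop can be sketched roughly like this (a hypothetical helper under my own names, not the poster's code):

```python
import threading
import time

def run_with_backoff_loop(task, min_wait, stop):
    """Run `task` repeatedly, sleeping AFTER each completion rather than on a
    fixed schedule. Waiting at least as long as the task took keeps the duty
    cycle under 50%, so load on the system grows slower than the task's cost."""
    while not stop.is_set():
        started = time.monotonic()
        task()
        elapsed = time.monotonic() - started
        # Sleep m seconds, where m >= the time the task just took.
        stop.wait(max(min_wait, elapsed))
```

Contrast with a fixed `sleep(n)` schedule: if the task slows down past n seconds, a fixed schedule saturates the system, while this version just updates less often.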

I don't think voluntary rate limiting on the client side gets enough column inches. Peer-to-peer, you end up footgunning yourself if you bite off more than you can chew, and if you start stalling on responses then you gum up the server as well.

phelm•11h ago
See also https://smudge.ai/blog/ratelimit-algorithms
mtlynch•10h ago
This seems like someone used AI to generate the article and examples without much review. It's all bullet points, and it repeatedly uses "Working:" as a heading, which doesn't make any sense to me.

The site defaults to dark mode for me (I'm assuming based on my system preferences), but all the examples are unusable with dark mode. The examples all seem to be within fixed-width divs that cut off part of the content unless you scroll within the example.

sagyam•10h ago
- I like bullet points, they are easy to read.

- "Working": I wanted to keep things consistent.

- Content getting cut was a limitation of the iframe. Most blogging platforms don't allow you to embed another page. This was the best I could do given the limitation.

- I do use AI to bounce ideas, but a lot of effort went into getting the apps working as intended.

mtlynch•8h ago
Why "Working?" It's unclear what that means.

Is it supposed to say, "How it works"?

sagyam•3h ago
Now that you mention it, it should have been "working principle" or "algorithm". It made sense in my head. English isn't my first language, sorry about that.
tonyhart7•10h ago
another article with great visualization for rate limit

https://smudge.ai/blog/ratelimit-algorithms

behnamoh•10h ago
> Follow Sagyam's Blog's journey

> By following, you'll have instant access to our new posts in your feed.

> Continue with Google

> More options

As soon as I see this in a blog, I close the tab. Why do authors do this to themselves?

Strom•4h ago
They do it when their real goal is to funnel you into a newsletter to later sell you stuff. The only purpose of the article is to show you that prompt.
sagyam•3h ago
Sorry about that, that's my blogging platform, Hashnode. It was the lesser of four evils:

- Medium, which paywalls the article and forces you to sign up just to read.

- Substack has the same problem: it's great for funneling people into your paid newsletter, but there is a sign-up banner as soon as the page loads.

- Building your own means missing out on the social aspect, and there's no proof that the numbers are real.

tra3•9h ago
I tried to explain the benefits of circuit breakers and adaptive concurrency to improve the performance of our distributed monolith, but I failed. I tried to visualize it using step by step packet diagrams but failed. This is hard stuff to understand.

Great visualization tools. Next time I have to explain it, I'll reach for these.

jhlee525•8h ago
Easy to understand.
conradludgate•5h ago
My favourite algorithm is generic cell rate algorithm (GCRA). It works like token bucket in this post. The implementation is dead simple and requires no background tasks and needs very minimal state.

Instead of storing the current number of tokens, you store when the bucket will be full. If you take a token from the bucket, you increment that timestamp by 1/rps. The only complication is that if the filled timestamp is in the past, you have to first update it to the current timestamp to avoid overfilling.

What's even nicer is that it doubles as a throttle implementation rather than just a rate limiter. You know the bucket is empty if empty_at = filled_at - (max_tokens/rps) is still in the future. From that calculation you also know when it will have capacity again, so you can sleep accordingly. If you put a queue in front of the GCRA, it then starts slowing down new connections rather than just dropping them.

You should still have a limit on the queue, but it's nice in that it can gracefully turn from token bucket into leaky bucket.
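A minimal single-threaded sketch of GCRA as described above (illustrative names and parameters, not any particular library's API):

```python
import time

class GCRA:
    """Generic cell rate algorithm: behaves like a token bucket, but the only
    state is one timestamp -- when the bucket will be full again."""

    def __init__(self, rate, burst):
        self.interval = 1.0 / rate         # seconds added per token taken (1/rps)
        self.burst = burst                 # max_tokens
        self.filled_at = time.monotonic()  # bucket starts full

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # A filled_at in the past would mean "more than full": clamp it first.
        filled_at = max(self.filled_at, now)
        # Bucket is empty if draining `burst` tokens still lands in the future.
        if filled_at - self.burst * self.interval >= now:
            return False  # (filled_at - now) also tells you how long to sleep
        self.filled_at = filled_at + self.interval
        return True
```

With `rate=1, burst=3` this admits three requests immediately and then one per second, matching token-bucket behavior with just one stored timestamp and no background refill task.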

sagyam•2h ago
Interesting, I am working on another list of more advanced rate limiting algorithms. I will add GCRA there.
deadfa11•1h ago
Ahh, this has a name! I started doing this years ago and figured it must be used frequently because it's so simple and elegant and can be done lock-free. Thanks for putting a name to it!