
How I do and don't use agents

https://twitter.com/jessfraz/status/2019975917863661760
1•tosh•2m ago•0 comments

BTDUex Safe? The Back End Withdrawal Anomalies

1•aoijfoqfw•4m ago•0 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
1•michaelchicory•7m ago•0 comments

Show HN: Ensemble – macOS App to Manage Claude Code Skills, MCPs, and Claude.md

https://github.com/O0000-code/Ensemble
1•IO0oI•10m ago•1 comment

PR to support XMPP channels in OpenClaw

https://github.com/openclaw/openclaw/pull/9741
1•mickael•11m ago•0 comments

Twenty: A Modern Alternative to Salesforce

https://github.com/twentyhq/twenty
1•tosh•12m ago•0 comments

Raspberry Pi: More memory-driven price rises

https://www.raspberrypi.com/news/more-memory-driven-price-rises/
1•calcifer•18m ago•0 comments

Level Up Your Gaming

https://d4.h5go.life/
1•LinkLens•22m ago•1 comment

Di.day is a movement to encourage people to ditch Big Tech

https://itsfoss.com/news/di-day-celebration/
2•MilnerRoute•23m ago•0 comments

Show HN: AI generated personal affirmations playing when your phone is locked

https://MyAffirmations.Guru
4•alaserm•24m ago•3 comments

Show HN: GTM MCP Server – Let AI Manage Your Google Tag Manager Containers

https://github.com/paolobietolini/gtm-mcp-server
1•paolobietolini•25m ago•0 comments

Launch of X (Twitter) API Pay-per-Use Pricing

https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476
1•thinkingemote•25m ago•0 comments

Facebook seemingly randomly bans tons of users

https://old.reddit.com/r/facebookdisabledme/
1•dirteater_•27m ago•1 comment

Global Bird Count Event

https://www.birdcount.org/
1•downboots•27m ago•0 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
2•soheilpro•29m ago•0 comments

Jon Stewart – One of My Favorite People – What Now? with Trevor Noah Podcast [video]

https://www.youtube.com/watch?v=44uC12g9ZVk
2•consumer451•31m ago•0 comments

P2P crypto exchange development company

1•sonniya•45m ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
2•jesperordrup•50m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•50m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•51m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•58m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•1h ago•1 comment

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
7•keepamovin•1h ago•1 comment

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•1h ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•1h ago•1 comment

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•1h ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•1h ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•1h ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•1h ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•1h ago•0 comments

Show HN: Cachey, a Read-Through Cache for S3

https://github.com/s2-streamstore/cachey
3•shikhar•4mo ago
Cachey (https://github.com/s2-streamstore/cachey) is an open source read-through cache for S3-compatible object storage.

It is written in Rust, with a hybrid memory+disk cache powered by foyer [1], and is accessed over a simple HTTP API. It runs as a self-contained single-node binary – the idea is that you distribute it yourself and lean on client-side logic for key affinity and load balancing.

If you are building something heavily reliant on object storage, the need for something like this is likely to come up! A bunch of companies have talked about their approaches to distributed caching atop S3 (such as ClickHouse [2], Turbopuffer [3], WarpStream [4], RisingWave [5]).

Why we built it:

Recent records in s2.dev are owned by a designated process for each stream, and we could serve reads for them with minimal latency overhead once they were durable. However, this limited our scalability in terms of concurrent readers and throughput, and it implied cross-zone network costs when the zones of the gateway and the stream-owning process did not align.

The source of durability was S3, so there was a path to slurping recently-written data straight from there (older data was already being read directly) and taking advantage of free bandwidth. But even S3 has RPS limits [6], and we wanted to avoid its latency overhead as much as possible.

Caching helps reduce S3 operation costs, improves the latency profile, and lifts the scalability ceiling. Now, regardless of whether records are recent or old, our reads always flow through Cachey.

Cachey internals:

- It borrows an idea from OS page caches by mapping every request onto page-aligned range reads. This meant requiring the typically-optional Range header, with an exact byte range. Standard tradeoffs around picking a page size apply; we fixed it at the high end of S3's recommendation (16 MB). If multiple pages are accessed, limited intra-request concurrency is used, and the sliced data is sent as a streaming response. (A rough sketch of this page mapping appears a bit further below.)

- It will coalesce concurrent requests for the same page (another thing an OS page cache will do). This was easy since foyer provides a native 'fetch' API that takes a key and a thunk.

- It mitigates the high tail latency of object storage by maintaining latency statistics and making a duplicate request when a configurable quantile is exceeded, picking whichever response becomes available first. Jeff Dean discussed this technique in "The Tail at Scale" [7], and S3 docs also suggest such an approach.
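
For the curious, here is a minimal sketch of such a hedged read, assuming a tokio runtime. The fetch closure stands in for the real ranged GET, and the latency-quantile tracking that would produce hedge_after is elided, so treat this as an illustration rather than Cachey's actual code:

    use std::future::Future;
    use std::time::Duration;

    /// Race a primary request against a delayed duplicate and return
    /// whichever finishes first. `fetch` stands in for the real
    /// object-storage GET; `hedge_after` would come from observed
    /// latency quantiles.
    async fn hedged_get<F, Fut, T>(fetch: F, hedge_after: Duration) -> T
    where
        F: Fn() -> Fut,
        Fut: Future<Output = T>,
    {
        let primary = fetch();
        tokio::pin!(primary);

        // Give the primary request a head start: if it completes within
        // the hedge deadline, no duplicate request is ever issued.
        tokio::select! {
            result = &mut primary => return result,
            _ = tokio::time::sleep(hedge_after) => {}
        }

        // The primary is past the configured quantile: issue an identical
        // duplicate and take whichever response arrives first.
        tokio::select! {
            result = &mut primary => result,
            result = fetch() => result,
        }
    }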

A more niche thing Cachey lets you do is specify more than one bucket an object may live in, and it will attempt up to two of them, prioritizing the client's preference blended with its own knowledge of recent operational stats. This is actually something we rely on: we offer regional durability with low latency by ensuring recently-written data is on a quorum of zonal S3 Express buckets, so the desired range may not exist in an arbitrary one. This capability may end up being worth reusing for multi-region durability in the future, too.
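
Going back to the page-aligned mapping in the first internals item above, here is a rough sketch of the arithmetic, assuming a half-open byte range and an (object, page index) cache key; the names and exact key layout are illustrative, not Cachey's real code:

    /// 16 MiB pages (the high end of S3's recommendation).
    const PAGE_SIZE: u64 = 16 * 1024 * 1024;

    /// Map an exact byte range (half-open; assumes range_end > range_start)
    /// onto the page-aligned pages that get fetched and cached, plus the
    /// slice of each page that overlaps the caller's request. The cache
    /// stores whole pages; the response streams only the slices.
    fn pages_for(
        range_start: u64,
        range_end: u64,
    ) -> impl Iterator<Item = (u64, std::ops::Range<u64>)> {
        let first_page = range_start / PAGE_SIZE;
        let last_page = (range_end - 1) / PAGE_SIZE;
        (first_page..=last_page).map(move |page| {
            let page_start = page * PAGE_SIZE;
            let page_end = page_start + PAGE_SIZE;
            let slice = range_start.max(page_start)..range_end.min(page_end);
            (page, slice)
        })
    }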

I'd love to hear your feedback and suggestions! Hopefully other projects will also find Cachey to be a useful part of their stack.

[1] https://github.com/foyer-rs/foyer

[2] https://clickhouse.com/blog/building-a-distributed-cache-for...

[3] https://turbopuffer.com/docs/architecture

[4] https://www.warpstream.com/blog/minimizing-s3-api-costs-with...

[5] https://risingwave.com/blog/risingwave-elastic-disk-cache

[6] https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimi...

[7] https://cacm.acm.org/research/the-tail-at-scale/#body-7

Comments

shikhar•4mo ago
How we run it:

Auto-scaled Kubernetes deployments, one for each availability zone, currently on m*gd instances, which give us local NVMe. The pods can easily push GiBps with 1-2 CPUs used – network is the bottleneck, so we made it a scaling dimension (thanks, KEDA).

On the client side, each gateway process uses kube.rs to watch ready endpoints in the same zone as itself, and frequently polls /stats exposed by Cachey for recent network throughput as a load signal.

To improve hit rates with key affinity, clients use rendezvous hashing for picking a node, with bounded load (https://arxiv.org/abs/1608.01350) – if a node exceeds a predetermined throughput limit, the next choice for the key is picked.
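
In case it helps to picture it, here is a rough sketch of rendezvous hashing with bounded load. The node struct, the load field, and the threshold are placeholders for the /stats-driven signal described above, and a real client would use a stable hash rather than DefaultHasher:

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    /// Hypothetical view of a Cachey endpoint as seen by the client:
    /// its identity plus the recent throughput it reported via /stats.
    struct Node {
        name: String,
        recent_bytes_per_sec: f64,
    }

    /// Rendezvous (highest-random-weight) hashing with bounded load: rank
    /// nodes by hash(key, node) and take the highest-ranked node that is
    /// not over the throughput limit, so hot keys spill to their next
    /// choice instead of overloading one pod.
    fn pick_node<'a>(key: &str, nodes: &'a [Node], max_bytes_per_sec: f64) -> Option<&'a Node> {
        let mut ranked: Vec<(u64, &Node)> = nodes
            .iter()
            .map(|node| {
                let mut hasher = DefaultHasher::new();
                key.hash(&mut hasher);
                node.name.hash(&mut hasher);
                (hasher.finish(), node)
            })
            .collect();
        ranked.sort_by(|a, b| b.0.cmp(&a.0)); // highest weight first

        ranked
            .iter()
            .map(|(_, node)| *node)
            .find(|node| node.recent_bytes_per_sec < max_bytes_per_sec)
            // Every node is over the limit: fall back to the first choice.
            .or_else(|| ranked.first().map(|(_, node)| *node))
    }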

We may move towards consistent hashing – it would be a great problem to have if we needed so many Cachey pods in a zone that O(n) hashing was meaningful overhead! An advantage of the current approach is that it does not suffer from the cascaded overflow problem (https://arxiv.org/abs/1908.08762).

whyandgrowth•4mo ago
To be honest: as a local cache / S3 accelerator for large files, it's fine. The API is simple but flexible. The only caveat is that the documentation is English-only, and you need to understand how hedged fetches work.