frontpage.

Ggml.ai joins Hugging Face to ensure the long-term progress of Local AI

https://github.com/ggml-org/llama.cpp/discussions/19759
248•lairv•2h ago•43 comments

I found a useful Git one liner buried in leaked CIA developer docs

https://spencer.wtf/2026/02/20/cleaning-up-merged-git-branches-a-one-liner-from-the-cias-leaked-d...
178•spencerldixon•1h ago•98 comments

Child's Play: Tech's new generation and the end of thinking

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai-startup-roy-lee/
36•ramimac•1h ago•22 comments

Show HN: A native macOS client for Hacker News, built with SwiftUI

https://github.com/IronsideXXVI/Hacker-News
78•IronsideXXVI•1h ago•40 comments

Trump's global tariffs struck down by US Supreme Court

https://www.bbc.com/news/live/c0l9r67drg7t
187•blackguardx•34m ago•106 comments

The path to ubiquitous AI (17k tokens/sec)

https://taalas.com/the-path-to-ubiquitous-ai/
420•sidnarsipur•5h ago•276 comments

Untapped Way to Learn a Codebase: Build a Visualizer

https://jimmyhmiller.com/learn-codebase-visualizer
112•andreabergia•7h ago•19 comments

PayPal discloses data breach that exposed user info for 6 months

https://www.bleepingcomputer.com/news/security/paypal-discloses-data-breach-exposing-users-person...
89•el_duderino•2h ago•14 comments

Web Components: The Framework-Free Renaissance

https://www.caimito.net/en/blog/2026/02/17/web-components-the-framework-free-renaissance.html
111•mpweiher•7h ago•70 comments

Minions – Stripe's Coding Agents Part 2

https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents-part-2
81•ludovicianul•4h ago•38 comments

Mothers (YC X26) Is Hiring

https://jobs.ashbyhq.com/9-mothers?utm_source=x8pZ4B3P3Q
1•ukd1•2h ago

The Rediscovery of 103 Hokusai Lost Sketches (2021)

https://japan-forward.com/eternal-hokusai-the-rediscovery-of-103-hokusai-lost-sketches/
25•debo_•4d ago•2 comments

Consistency diffusion language models: Up to 14x faster, no quality loss

https://www.together.ai/blog/consistency-diffusion-language-models
168•zagwdt•11h ago•60 comments

Raspberry Pi Pico 2 at 873.5MHz with 3.05V Core Abuse

https://learn.pimoroni.com/article/overclocking-the-pico-2
82•Lwrless•7h ago•18 comments

AI is not a coworker, it's an exoskeleton

https://www.kasava.dev/blog/ai-as-exoskeleton
382•benbeingbin•20h ago•400 comments

Infrastructure decisions I endorse or regret after 4 years at a startup (2024)

https://cep.dev/posts/every-infrastructure-decision-i-endorse-or-regret-after-4-years-running-inf...
374•Meetvelde•3d ago•165 comments

Nvidia and OpenAI abandon unfinished $100B deal in favour of $30B investment

https://www.ft.com/content/dea24046-0a73-40b2-8246-5ac7b7a54323
213•zerosizedweasle•3h ago•176 comments

Reading the undocumented MEMS accelerometer on Apple Silicon MacBooks via iokit

https://github.com/olvvier/apple-silicon-accelerometer
101•todsacerdoti•10h ago•52 comments

Notes on Clarifying Man Pages

https://jvns.ca/blog/2026/02/18/man-pages/
36•surprisetalk•1d ago•21 comments

Show HN: Micasa – track your house from the terminal

https://micasa.dev
598•cpcloud•1d ago•190 comments

FreeCAD

https://www.freecad.org/index.php
294•doener•3d ago•116 comments

I tried building my startup entirely on European infrastructure

https://www.coinerella.com/made-in-eu-it-was-harder-than-i-thought/
553•willy__•6h ago•293 comments

US plans online portal to bypass content bans in Europe and elsewhere

https://www.reuters.com/world/us-plans-online-portal-bypass-content-bans-europe-elsewhere-2026-02...
398•c420•1d ago•767 comments

Silicon Valley engineers were indicted for allegedly sending secrets to Iran

https://www.cnbc.com/2026/02/20/three-engineers-charged-stealing-google-trade-secrets-data-iran-s...
75•giuliomagnifico•5h ago•41 comments

The Popper Principle

https://theamericanscholar.org/the-popper-principle/
4•lermontov•1d ago•0 comments

A beginner's guide to split keyboards

https://www.justinmklam.com/posts/2026/02/beginners-guide-split-keyboards/
194•thehaikuza•4d ago•205 comments

Defer available in gcc and clang

https://gustedt.wordpress.com/2026/02/15/defer-available-in-gcc-and-clang/
230•r4um•4d ago•200 comments

Gemini 3.1 Pro

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/
878•MallocVoidstar•1d ago•861 comments

Fast KV Compaction via Attention Matching

https://arxiv.org/abs/2602.16284
54•cbracketdash•11h ago•10 comments

An ARM Homelab Server, or a Minisforum MS-R1 Review

https://sour.coffee/2026/02/20/an-arm-homelab-server-or-a-minisforum-ms-r1-review/
101•neelc•14h ago•80 comments

Garbage collection of object storage at scale

https://www.warpstream.com/blog/taking-out-the-trash-garbage-collection-of-object-storage-at-massive-scale
96•ko_pivot•9mo ago

Comments

juancn•9mo ago
Another possible mechanism for doing GC at scale in a file/object store (a variation on the Asynchronous Reconciliation described in the article) is a probabilistic mark and sweep using Bloom filters.

The mark phase can be done in parallel, building many Bloom filters for the live files/objects found.

Then the Bloom filters are merged (essentially OR'ed together), and a parallel sweep phase can use the merged filter to answer the question: is this file/object live?

The Bloom filter answers either "no" with 100% certainty or "maybe" with a false-positive rate p that depends on the parameters chosen for the bitset and the hash function family.
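A minimal sketch of that shape in Python (illustrative only; a real filter would size m and k from the expected key count and the target false-positive rate):

    import hashlib

    class BloomFilter:
        """m-bit Bloom filter with k hash functions, backed by a big int."""
        def __init__(self, m=1 << 20, k=4):
            self.m, self.k, self.bits = m, k, 0

        def _positions(self, key):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, key):
            for p in self._positions(key):
                self.bits |= 1 << p

        def maybe_contains(self, key):
            return all(self.bits >> p & 1 for p in self._positions(key))

        def merge(self, other):
            self.bits |= other.bits  # OR'ing two filters = union of their marks

    # Mark: each worker builds a filter over the live keys in its metadata shard.
    def mark_shard(live_keys):
        f = BloomFilter()
        for key in live_keys:
            f.add(key)
        return f

    # Sweep: "no" is certain, so anything the merged filter rejects is deletable.
    def deletable(all_keys, shard_filters):
        merged = BloomFilter()
        for f in shard_filters:
            merged.merge(f)
        return [k for k in all_keys if not merged.maybe_contains(k)]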

cogman10•9mo ago
What does the bloom filter solve?

The expensive portion of the mark and sweep for the object store is the mark phase, not the storage of what's been marked. Hundreds, thousands, or even millions of live objects would hardly take any space to keep in a remembered set.

On the other hand, querying the S3 bucket to list those 1M objects would be expensive no matter how you store the results.

But this does tickle my brain. Perhaps something akin to the generational hypothesis can be applied? Maybe it's the case that very old, very young, or long-untouched objects are more likely to be garbage than not. If there's some way to divide the objects up and only look at the ones in "probably need to be collected" regions, then you could do fast minor sweeps semi-frequently and schedule the more expensive "really delete untracked stuff" passes infrequently.
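A hedged sketch of that split, assuming the sweeper gets (key, last_modified) pairs from a bucket listing; the cut-offs are invented:

    from datetime import datetime, timedelta, timezone

    # Invented generational cut-offs: very young objects are likely orphaned
    # writes, very old untouched ones are likely dead; the middle is mostly live.
    YOUNG = timedelta(hours=1)
    OLD = timedelta(days=90)

    def minor_sweep_candidates(objects):
        """Yield only keys in the "probably need to be collected" regions."""
        now = datetime.now(timezone.utc)
        for key, last_modified in objects:
            age = now - last_modified
            if age < YOUNG or age > OLD:
                yield key  # check just these against the live set on minor sweeps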

Cicero22•9mo ago
I was thinking they could use something like CloudWatch Events to trigger sweeps and significantly cut down on scheduled ones.

They could even use cost allocation tags to decide whether a bucket or group of buckets should be scanned when it's growing unexpectedly. Cost isn't a perfect metric, but there's definitely signal there.
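For instance (a sketch only; the rule name, bucket, and Lambda ARN are made up, and the bucket would need EventBridge notifications enabled):

    import json
    import boto3

    events = boto3.client("events")

    # Fire the sweeper when objects land, instead of only on a fixed schedule.
    events.put_rule(
        Name="trigger-gc-sweep",
        EventPattern=json.dumps({
            "source": ["aws.s3"],
            "detail-type": ["Object Created"],
            "detail": {"bucket": {"name": ["my-data-bucket"]}},
        }),
    )
    events.put_targets(
        Rule="trigger-gc-sweep",
        Targets=[{"Id": "sweeper",
                  "Arn": "arn:aws:lambda:us-east-1:123456789012:function:gc-sweeper"}],
    )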

juancn•9mo ago
Building the set of used files or objects (which is what mark does in a mark/sweep).

Sometimes it's too expensive to mark in place, even if it's just a bit you need to write to disk, and keeping a set of references may be prohibitive (or the structure holding the references is mostly/effectively immutable).

If it's all in memory and mutable it doesn't (normally) matter much, but when it's not, you would ideally have some mechanism to move the code to where the data is rather than stream the data to where the compute is (the latter is really wasteful for large-scale data processing).

In any case, you would not be moving/scanning the files themselves; the metadata is what you want to read for the mark phase.

The article, if I understood it correctly, implies that the files and their metadata (Kafka queues and so on) are kept separate, so presumably the metadata is much, much smaller than the data itself, but still potentially large.

For example, if you had a large-scale content-addressed store (think a massive version of git's blob storage), you typically only ever add to something like that and keep a few mutable root references from which to seed a mark/sweep.

Following the git example, the roots would be the branches, tags, and reflogs, and the metadata you scan is the transitive closure of the trees reachable from those (simplifying a bit), but not the file blobs themselves.

I use git as an example because a CAS lends itself very well to large-scale distributed systems: you can reason about it as an immutable data structure, but you can still change it effectively with sane semantics.
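The mark phase in that git-flavoured setup is just a graph walk over metadata; roughly (illustrative, with `children` standing in for whatever reads a commit/tree object and lists the ids it references):

    from collections import deque

    def mark(roots, children):
        """BFS from the roots (branches, tags, reflog entries) over metadata.
        children(obj_id) returns the ids an object references; blobs are
        leaves, so their contents are never read."""
        live, queue = set(roots), deque(roots)
        while queue:
            for child in children(queue.popleft()):
                if child not in live:
                    live.add(child)
                    queue.append(child)
        return live  # every stored id not in this set is sweepable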

donavanm•9mo ago
If you like big beautiful storage and probabilistic structures, check out https://www.usenix.org/conference/osdi14/technical-sessions/.... The Coho Data folks ended up in AWS S3 a few years later.

juancn•9mo ago
Thanks! I hadn't seen it and it may come in handy!

deathanatos•9mo ago
> Why Not Just Use a Bucket Policy?

I've heard these words so many times, it's refreshing to see someone dig into why bucket policies aren't a cure-all.

As for "Why not use synchronous deletion?" — regarding the pitfall there, what about a WAL? I.e., you WAL the deletions you want to perform into an object in the object store, perform the deletions, and then delete the WAL. If you crash and find a WAL file, you repeat the delete commands contained in the WAL.

(I've used this to handle this problem where some of the deletions are mixed: i.e., some in an object store, some in a SQL DB, etc. The object store is essentially being used as strongly consistent storage.)

(Perhaps this is essentially the same as your "delayed queue"? All I've got is an object store though, not a queue, and it's a pretty useful hammer.)
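A sketch of that loop against S3 (boto3; the bucket and WAL key are placeholders, and a real version would name WALs uniquely per batch):

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET, WAL_KEY = "my-bucket", "gc/deletion-wal.json"  # placeholders

    def delete_with_wal(keys_to_delete):
        # 1. WAL the intended deletions as one strongly consistent object.
        s3.put_object(Bucket=BUCKET, Key=WAL_KEY,
                      Body=json.dumps(keys_to_delete))
        # 2. Perform them; deleting an already-deleted key is a no-op, so
        #    replaying after a crash is safe.
        for key in keys_to_delete:
            s3.delete_object(Bucket=BUCKET, Key=key)
        # 3. Only then drop the WAL itself.
        s3.delete_object(Bucket=BUCKET, Key=WAL_KEY)

    def recover():
        # On startup: a surviving WAL means a previous run crashed mid-delete.
        try:
            wal = s3.get_object(Bucket=BUCKET, Key=WAL_KEY)
        except s3.exceptions.NoSuchKey:
            return
        delete_with_wal(json.loads(wal["Body"].read()))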

telotortium•9mo ago
> HN Disclaimer: WarpStream sells a drop-in replacement for Apache Kafka built directly on-top of object storage.

First time I’ve seen one of these. That’s actually a better way to advertise your product than putting it at the end.

hencq•9mo ago
Yes, though I think they meant to say disclosure instead of disclaimer.

siscia•9mo ago
What I've seen work extremely well, admittedly in a setting where cost was not really an issue, was a much simpler approach.

Keep compacting files at some regular cadence `t` and keep a bucket policy to delete files older than `t+delta+buffer`.

Then have an alarm for files older than `t+buffer`, so you find out compaction has fallen behind before the policy deletes anything live.
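The policy half might look like this (boto3 sketch; the bucket name and the 7-day expiry, standing in for `t+delta+buffer`, are invented):

    import boto3

    s3 = boto3.client("s3")

    # Lifecycle rule playing the role of "delete files older than t+delta+buffer".
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-compacted-bucket",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-old-segments",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 7},  # must comfortably exceed the compaction cadence t
            }]
        },
    )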