
Cloudflare Global Network experiencing issues

https://www.cloudflarestatus.com/?t=1
2094•imdsm•6h ago•1355 comments

Gemini 3 for developers: New reasoning, agentic capabilities

https://blog.google/technology/developers/gemini-3-developers/
330•janpio•2h ago•104 comments

Gemini 3 Pro Preview Live in AI Studio

https://aistudio.google.com/prompts/new_chat?model=gemini-3-pro-preview
406•preek•3h ago•170 comments

Pebble, Rebble, and a Path Forward

https://ericmigi.com/blog/pebble-rebble-and-a-path-forward/
72•phoronixrly•58m ago•12 comments

A Day at Hetzner Online in the Falkenstein Data Center

https://www.igorslab.de/en/a-day-at-hetzner-online-in-the-falkenstein-data-center-insights-into-s...
73•speckx•2h ago•19 comments

5 Things to Try with Gemini 3 Pro in Gemini CLI

https://developers.googleblog.com/en/5-things-to-try-with-gemini-3-pro-in-gemini-cli/
78•keithba•2h ago•26 comments

Gemini 3

https://blog.google/products/gemini/gemini-3/
326•meetpateltech•2h ago•95 comments

Solving a Million-Step LLM Task with Zero Errors

https://arxiv.org/abs/2511.09030
33•Anon84•1h ago•8 comments

Strix Halo's Memory Subsystem: Tackling iGPU Challenges

https://chipsandcheese.com/p/strix-halos-memory-subsystem-tackling
25•PaulHoule•1h ago•9 comments

Google Brings Gemini 3 AI Model to Search and AI Mode

https://blog.google/products/search/gemini-3-search-ai-mode/
71•CrypticShift•2h ago•5 comments

How Quake.exe got its TCP/IP stack

https://fabiensanglard.net/quake_chunnel/index.html
366•billiob•10h ago•75 comments

Nearly all UK drivers say headlights are too bright

https://www.bbc.com/news/articles/c1j8ewy1p86o
461•YeGoblynQueenne•4h ago•448 comments

Do Not Put Your Site Behind Cloudflare If You Don't Need To

https://huijzer.xyz/posts/123/do-not-put-your-site-behind-cloudflare-if-you-dont
332•huijzer•5h ago•249 comments

Show HN: Guts – convert Golang types to TypeScript

https://github.com/coder/guts
7•emyrk•26m ago•0 comments

Google Antigravity

https://antigravity.google/
182•Fysi•2h ago•130 comments

Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality

https://github.com/neul-labs/fast-litellm
19•ticktockten•1h ago•3 comments

Google Antigravity, a New Era in AI-Assisted Software Development

https://antigravity.google/blog/introducing-google-antigravity
181•meetpateltech•2h ago•138 comments

The Miracle of Wörgl

https://scf.green/story-of-worgl-and-others/
98•simonebrunozzi•7h ago•55 comments

Gemini 3 Pro Model Card

https://pixeldrain.com/u/hwgaNKeH
399•Topfi•6h ago•262 comments

A squeaky nail, or the wheel that sticks out

https://prashanth.world/squeaky-nail/
4•mangoman•6d ago•2 comments

Beauty in/of mathematics: tessellations and their formulas

https://www.tandfonline.com/doi/full/10.1080/00036811.2025.2510472
16•QueensGambit•5d ago•0 comments

Short Little Difficult Books

https://countercraft.substack.com/p/short-little-difficult-books
89•crescit_eundo•3h ago•44 comments

Mathematics and Computation (2019) [pdf]

https://www.math.ias.edu/files/Book-online-Aug0619.pdf
44•nill0•5h ago•9 comments

Ruby 4.0.0 Preview2 Released

https://www.ruby-lang.org/en/news/2025/11/17/ruby-4-0-0-preview2-released/
152•pansa2•4h ago•51 comments

Looking for Hidden Gems in Scientific Literature

https://elicit.com/blog/literature-based-discovery
10•ravenical•5d ago•1 comments

How many video games include a marriage proposal? At least one

https://32bits.substack.com/p/under-the-microscope-ncaa-basketball
308•bbayles•5d ago•74 comments

GoSign Desktop RCE flaws affecting users in Italy

https://www.ush.it/2025/11/14/multiple-vulnerabilities-gosign-desktop-remote-code-execution/
45•ascii•5h ago•19 comments

I've Wanted to Play That 'Killer Shark' Arcade Game Briefly Seen in 'Jaws'

https://www.remindmagazine.com/article/15694/jaws-arcade-video-game-killer-shark-atari-sega-elect...
23•speckx•4d ago•8 comments

Langfuse (YC W23) Hiring OSS Support Engineers in Berlin and SF

https://jobs.ashbyhq.com/langfuse/5ff18d4d-9066-4c67-8ecc-ffc0e295fee6
1•clemo_ra•11h ago

Azure hit by 15 Tbps DDoS attack using 500k IP addresses

https://www.bleepingcomputer.com/news/microsoft/microsoft-aisuru-botnet-used-500-000-ips-in-15-tb...
457•speckx•1d ago•287 comments

Lock-Free Rust: How to Build a Rollercoaster While It's on Fire

https://yeet.cx/blog/lock-free-rust/
133•r3tr0•6mo ago

Comments

r3tr0•6mo ago
hope you enjoy this article on lock free programming in rust.

I used humor and analogies in the article not just to be entertaining, but to make difficult concepts like memory ordering and atomics more approachable and memorable.

tombert•6mo ago
Interesting read, I enjoyed it and it answered a question that I didn't even realize I had been asking myself for years, which is how lock-free structures work.

Have you looked at CTries before? They're pretty interesting, and I think are probably the future of this space.

nmca•6mo ago
Did you get help from ChatGPT ooi? The humour sounds a bit like modern ChatGPT style but it’s uncanny valley.
bobbyraduloff•6mo ago
at the very least that article was definitely edited with ChatGPT. i had someone on my team write “edgy” copy with ChatGPT last week and it sounded exactly the same. short paragraphs and overuse of bullet points are also a dead giveaway. i don’t think it’s super noticeable if you don’t use ChatGPT a lot but for the people that use these systems daily, it’s still very easy to spot.

my suggestion to OP: this was interesting material, ChatGPT made it hard to read. use your own words to explain it. most people interested in this deeply technical content would rather read your prompt than the output.

zbentley•6mo ago
As someone who overused bullet points before it was AI-cool and doesn’t write with the assistance of AI (not due to a general anti-AI belief, I just like writing by hand) I have also started getting that feedback a lot lately.

Who knows, maybe someone accidentally over-weighted my writing by a factor of a trillion in ChatGPT’s training set?

r3tr0•6mo ago
i had GPT help with some grammar, editing, and shortening.

The core ideas, jokes, code, and analogies are 100% mine.

Human chaos. Machine polish.

tombert•6mo ago
Pretty interesting.

I have finally bitten the bullet and learned Rust in the last few months and ended up really liking it, but I have to admit that it's a bit lower level than I generally work in.

I have generally avoided locks by making very liberal use of Tokio channels, though that isn't for performance reasons or anything: I just find locks really hard to reason about for anything but extremely trivial usecases, and channels are a more natural abstraction for me.

I've never really considered what goes into these lock-free structures, but that might be one of my next "unemployment projects" after I finish my current one.

forgot_old_user•6mo ago
definitely! Reminds me of the golang saying

> Don't Communicate by Sharing Memory; Share Memory by Communicating

https://www.php.cn/faq/1796714651.html

tombert•6mo ago
Yeah, similarly, Joe Armstrong (RIP), co-creator of Erlang explained it to me like this:

> In distributed systems there is no real shared state (imagine one machine in the USA another in Sweden) where is the shared state? In the middle of the Atlantic? — shared state breaks laws of physics. State changes are propagated at the speed of light — we always know how things were at a remote site not how they are now. What we know is what they last told us. If you make a software abstraction that ignores this fact you’ll be in trouble.

He wrote this to me in 2014, and it has really informed how I think about these things.

throwawaymaths•6mo ago
The thing is that go channels themselves are shared state (if the owner closes the channel and a client tries to write you're not gonna have a good time)! Erlang message boxes are not.
kbolino•6mo ago
You don't have to close a channel in Go and in many cases you actually shouldn't.

Even if you choose to close a channel because it's useful to you, it's not necessarily shared state. In a lot of cases, closing a channel behaves just like a message in its queue.

tombert•6mo ago
Strictly speaking they’re shared state, but the way you model your application around channels is generally to have independent little chunks of work and the channels are just a means of communicating. I know it’s not one-for-one with Erlang.
Kubuxu•6mo ago
You can think of closing the channel as sending a message “there will be no further messages”, the panic on write is enforcement of that contract.

Additionally the safe way to use closing of a channel is the writer closing it. If you have multiple writers, you have to either synchronise them, or don’t close the channel.

throwawaymaths•6mo ago
Sure but the fact that it is shared state is why you can't naively have a go channel that spans a cluster but Erlang's "actor" system works just fine over a network and the safety systems (nodedowns, monitors etc) are a simple layer on top.
aatd86•6mo ago
Isn't entanglement in quantum physics the manifestation of shared state? tongue-in-cheek
psychoslave•6mo ago
Maybe. Or maybe we observe the same point of information source from two points which happen to be distant in the 3 coordinates we are accustomed to dealing with, but both close to this single point in some other.
aatd86•6mo ago
but that still means that there is shared state on the projection from the higher tensor space.

orthogonality needs to be valid for all subspaces.

gpderetta•6mo ago
> Don't Communicate by Sharing Memory; Share Memory by Communicating

that's all well and good until you realize you are reimplementing a slow, buggy version of MESI in software.

Proper concurrency control is the key. Shared memory vs message passing is incidental and application specific.

revskill•6mo ago
How can u be unemployed?
tombert•6mo ago
Just the market. I don’t have a lot of reasons outside of that.
psychoslave•6mo ago
By the default state of any entity in universe which is to not be employed?
0x1ceb00da•6mo ago
> AtomicUsize: Used for indexing and freelist linkage. It’s a plain old number, except it’s watched 24 / 7 by the CPU’s race condition alarm.

Is it though? Aren't atomic load/store instructions the actual important thing. I know the type system ensures that `AtomicUsize` can only be accessed using atomic instructions but saying it's being watched by the CPU is inaccurate.

eslaught•6mo ago
I'm not sure what the author intended, but one way to implement atomics at the microarchitectural level is via a load-linked/store-conditional pair of instructions, which often involves tracking the cache line for modification.

https://en.wikipedia.org/wiki/Load-link/store-conditional

It's not "24/7" but it is "watching" in some sense of the word. So not entirely unfair.

ephemer_a•6mo ago
did 4o write this
fefe23•6mo ago
To borrow an old adage: The determined programmer can write C code in any language. :-)
MobiusHorizons•6mo ago
Atomics are hardly “C”. They are a primitive exposed by many CPU ISAs to help navigate the complexity those same CPUs introduced with OOO execution and complex caches in a multi-threaded environment. Much like SIMD, atomics require extending the language through intrinsics or new types because they represent capabilities that were not possible when the language was invented. Atomics require this extra support in Java just as they do in Rust or C.
MobiusHorizons•6mo ago
I enjoyed the content, but could have done without the constant hyping up of the edginess of lock free data structures. I mean yes, like almost any heavily optimized structure there are trade offs that prevent this optimization from being globally applicable. But also being borderline aroused at the “danger” and rule breaking is tiresome and strikes me as juvenile.
bigstrat2003•6mo ago
To each their own. I thought it was hilarious and kept the article entertaining throughout, with what would otherwise be a fairly dry subject.
atoav•6mo ago
It is juvenile, but what do we know? Real Men use after free, so they wouldn't even use Rust to begin with.

The edgy tones sound like from an LLM to me..

lesser23•6mo ago
The bullet points and some of the edge definitely smell like LLM assistance.

Other than that I take the other side. I’ve read (and subsequently never finished) dozens of programming books because they are so god awfully boring. This writing style, perhaps dialed back a little, helps keep my interest. I like the feel of a zine where it’s as technical as a professional write up but far less formal.

I often find learning through analogy useful anyway and the humor helps a lot too.

zero0529•6mo ago
Like the writing style but would prefer if it was dialed down maybe 10 %. Otherwise a great article as an introduction to lock-free datastructures.
Animats•6mo ago
I've done a little bit of "lock-free" programming in Rust, but it's for very specialized situations.[1] This allocates and releases bits in a bitmap. The bitmap is intended to represent the slots in use in the Vulkan bindless texture index, which resides in the GPU. You can't read those slots from the CPU side to see if an entry is in use, so in-use slots in that table have to be tracked with an external bitmap.

This has no unsafe code. It's all done with compare and swap. There is locking here, but it's down at the hardware level within the compare and swap instruction. This is cleaner and more portable than relying on cross-CPU ordering of operations.

[1] https://github.com/John-Nagle/rust-vulkan-bindless/blob/main...

jillesvangurp•6mo ago
This is the kind of stuff that you shouldn't have to reinvent yourself but be able to reuse from a good library. Or the standard library even.

How would this compare to the lock free abstractions that come with e.g. the java.concurrent package? It has a lot of useful primitives and data structures. I expect the memory overhead is probably worse for those.

Support for this is one of the big reasons Java and the JVM have been a popular choice for companies building middleware and data processing frameworks for the last few decades. Exactly the kind of stuff that the author of this article is proposing you could build with this. Things like Kafka, Lucene, Spark, Hadoop, Flink, Beam, etc.

gpderetta•6mo ago
> This is the kind of stuff that you shouldn't have to reinvent yourself but be able to reuse from a good library. Or the standard library even.

Indeed; normally we call it the system allocator.

A good system allocator will use per thread or per cpu free-lists so that it doesn't need to do CAS loops for every allocation though. At the very least will use hashed pools to reduce contention.

gpderetta•6mo ago
The claim that the lock-free array is faster than the locked variant is suspicious. The lock-free array is performing a CAS for every operation, and this is going to dominate[1]. A plain mutex would do two CASs (or just one if it is a spin lock), so the order-of-magnitude difference is not explainable by the lock-free property.

Of course if the mutex array is doing a linear scan to find the insertion point that would explain the difference but: a) I can't see the code for the alternative and b) there is no reason why the mutex variant can't use a free list.

Remember:

- Lock free doesn't automatically mean faster (still, it has other properties that might be desirable even if slower)

- Never trust a benchmark you didn't falsify yourself.

[1] when uncontended; when contended cache coherence cost will dominate over everything else, lock-free or not.

michaelscott•6mo ago
For applications doing extremely high rates of inserts and reads, lock free is definitely superior. In extreme latency sensitive applications like trading platforms (events processing sub 100ms) it's a requirement; locked structures cause bottlenecks at high throughput
bonzini•6mo ago
Yes the code for the alternative is awful. However I tried rewriting it with a better alternative (basically the same as the lock free code, but with a mutex around it) and was still 40% slower. See FixedVec in https://play.rust-lang.org/?version=stable&mode=release&edit...
gpderetta•6mo ago
Given twice the number of CASs, about twice as slow is what I would expect for the mutex variant when uncontended. I don't know enough rust to fix it myself, but could you try with a spin lock?

As the benchmark is very dependent on contention, it would give very different results if the threads are scheduled serially as opposed to running truly concurrently (for example, using a spin lock would be awful if running on a single core).

So again, you need to be very careful to understand what you are actually testing.

r3tr0•6mo ago
totally valid.

that benchmarking is something i should have added more alternatives to.

j_seigh•6mo ago
I did a lock-free ABA-free bounded queue in C++ as kind of an exercise. I work mostly with deferred reclamation schemes (e.g. refcounting, quiescent state based reclamation, and epoch based reclamation). A queue requiring deferred reclamation, like the Michael-Scott lock-free queue, is going to perform terribly, so you go with an array based ring buffer. It uses a double wide CAS to do the insert for the enqueue and a regular CAS to update the tail. Dequeue is just a regular CAS to update the head. That runs about 57 nsecs on my 10th gen i5 for single producer and consumer.

A lock-free queue by itself isn't very useful. You need a polling strategy that doesn't involve a busy loop. If you use mutexes and condvars, you've basically turned it into a lock based queue. Eventcounts work much better.

If I run more threads than CPUs and enough work so I get time slice ends, I get about 1160 nsecs avg enq/deq for mutex version, and about 146 nsecs for eventcount version.

Timings will vary based on how many threads you use and cpu affinity that takes your hw thread/core/cache layout into consideration. I have a 13th gen i5 that runs this slower than my 10th gen i5 because of the former's efficiency cores, even though it is supposedly faster.

And yes, a queue is a poster child for cache contention problems, un enfant terrible. I tried a back-off strategy at one point but it didn't help any.

convivialdingo•6mo ago
I tried replacing a DMA queue lock with lock-free CAS and it wasn't faster than a mutex or a standard rwlock.

I rewrote the entire queue with lock-free CAS to manage insertions/removals on the list and we finally got some better numbers. But not always! We found it worked best either as a single thread, or during massive contention. With a normal load it wasn't really much better.

sennalen•6mo ago
The bottleneck is context switching
scripturial•6mo ago
So don’t bother to optimize anything?
jonco217•6mo ago
> NOTE: In this snippet we ignore the ABA problem

The article doesn't go into details, but this is a subtle way to mess up writing lock-free data structures:

https://en.wikipedia.org/wiki/ABA_problem

r3tr0•6mo ago
i will do another one on just the ABA problem and how many different ways it can put your program in the hospital.
lucraft•6mo ago
Can I ask a dumb question - how is Atomic set operation implemented internally if not by grabbing a lock?
moring•6mo ago
Two things that come to my mind:

1. Sometimes "lock-free" actually means using lower-level primitives that use locks internally but don't expose them, with fewer caveats than using them at a higher level. For example, compare-and-set instructions offered by CPUs, which may use bus locks internally but don't expose them to software.

2. Depending on the lower-level implementation, a simple lock may not be enough. For example, in a multi-CPU system with weaker cache coherency, a simple lock will not get rid of outdated copies of data (in caches, queues, ...). Here I write "simple" lock because some concepts of a lock, such as Java's "synchronized" statement, bundle the actual lock together with guaranteed cache synchronization, whether that happens in hardware or software.

gpderetta•6mo ago
Reminder that lock-free is a term of art with very specific meaning about starvation-freedom and progress and has very little to do with locking.
gpderetta•6mo ago
The hardware itself is designed to guarantee it. For example, the core guarantees that it will perform the load + compare + store from a cacheline in a finite number of cycles, while the cache coherency protocol guarantees that a) the core will eventually (i.e. it is fair) be able to acquire the cacheline in exclusive mode and b) will be able to hold it for a minimum number of clock cycles before another core forces an eviction or a downgrade of the ownership.
sophacles•6mo ago
Most hardware these days has intrinsic atomics - they are built into the hw in various ways, both in memory model guarantees (e.g. x86 has very strong guarantees of cache coherency, arm not so much) and instructions (e.g. xchg on x86). The details vary a lot between different cpu architectures, which is why C++ and Rust have memory models to program to rather than the specific semantics of a given arch.
Asraelite•6mo ago
It does use locks. If you go down deep enough you eventually end up with hardware primitives that are effectively locks, although they might not be called that.

The CPU clock itself can be thought of as a kind of lock.

Fiahil•6mo ago
You can go one step further if:

- you don't reallocate the array

- you don't allow updating/ removing past inserted values

In essence it becomes a log, a Vec<OnceCell<T>> or a Vec<UnsafeCell<Option<T>>>. Works well, but only for a bounded array. So applications like messaging, or inter-thread communication, are not a perfect fit.

It's a fixed-size vector that can be read at the same time it's being written to. It's not a common need.

pjmlp•6mo ago
That screenshot is very much CDE inspired.
sph•6mo ago
Obligatory video from Jon Gjengset “Crust of Rust: Atomics and Memory Ordering”: https://youtu.be/rMGWeSjctlY?si=iDhOLFj4idOOKby8
gmm1990•6mo ago
Is the advantage of a freelist over just an array of the values (implemented like a ring buffer) that you don't have to consume values in order? It just seems like throwing in a pointer lookup would add a lot of latency for something that's so latency sensitive.
rurban•6mo ago
It still wouldn't lead to proper Rust concurrency safety, because their IO is still blocking.
zbentley•6mo ago
What’s blocking IO have to do with this topic?

Also, I don’t feel like that’s true: Rust has the exact same non-blocking IO primitives in the stdlib as any other systems language: O_NONBLOCK, multiplexers, and so on. Combined with async/await syntax sugar for concurrency and runtimes like Tokio, I’m not sure how you end up at “rust IO is still blocking”.

rurban•6mo ago
Lock-freeness is an important step to concurrency safety. Rust falsely calls itself concurrency safe.

For a system language it's great to finally have a lock free library for some containers, but that's still not safe.

IX-103•6mo ago
> We don’t need strict ordering here; we’re just reading a number.

That's probably the most scary sentence in the whole article.

Havoc•6mo ago
wow - that's a huge speed improvement.

I wonder if this is connected to that Rust optimisation bounty post we saw the other day where they couldn't get the safe Rust decoder closer than 5% to their C implementation. Maybe that's just the cost of safety.