frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
479•klaussilveira•7h ago•120 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
818•xnx•12h ago•490 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
40•matheusalmeida•1d ago•3 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
161•isitcontent•7h ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
158•dmpetrov•8h ago•69 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
97•jnord•3d ago•14 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
53•quibono•4d ago•7 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
211•eljojo•10h ago•135 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
264•vecti•9h ago•125 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
332•aktau•14h ago•158 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
329•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
415•todsacerdoti•15h ago•220 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
27•kmm•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
344•lstoll•13h ago•245 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
5•romes•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
53•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
202•i5heu•10h ago•148 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
116•vmatsiiako•12h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
153•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
248•surprisetalk•3d ago•32 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
28•gfortaine•5h ago•4 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1004•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
49•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
74•ray__•4h ago•36 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
38•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
32•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
8•gmays•2h ago•2 comments

Claude Opus 4.6

https://www.anthropic.com/news/claude-opus-4-6
2275•HellsMaddy•1d ago•981 comments

Preserving Order in Concurrent Go Apps: Three Approaches Compared

https://destel.dev/blog/preserving-order-in-concurrent-go
77•destel•5mo ago

Comments

destel•5mo ago
Hi everyone, I’m the author of the article. Happy to answer any questions or discuss concurrency patterns in Go. Curious how others tackle such problems.
Traubenfuchs•5mo ago
> Curious how others tackle such problems.

What do you think about the order preserving simplicity of Java?

  List<Input> inputs = ...;

  List<Output> results = inputs.parallelStream()
                             .map(this::processTask)
                             .collect(toList());
If you want more control or have more complex use cases, you can use an ExecutorService of your choice, handle the futures yourself, or get creative with Java's new structured concurrency.
kamranjon•5mo ago
Often in Go I’ll create some data structure like a map to hold the new values keyed by the original index (basically a for loop with goroutines inside that close over the index value), then I just reorder them after waiting for all of them to complete.

Is this basically what Java is doing?

I think the techniques in this article may be a little more complex, allowing you to optimize further (basically continue working as soon as possible instead of just waiting for everything to complete and reordering after the fact), but I’d be curious to know if I’ve missed something.
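
A minimal sketch of this index-keyed approach (illustrative names, not from the article; a slice keyed by index instead of a map, which amounts to the same thing, and everything is assumed to fit in memory):

  import "sync"

  // mapOrdered runs process concurrently and returns results in input order.
  func mapOrdered(inputs []string, process func(string) string) []string {
      results := make([]string, len(inputs)) // one slot per input, keyed by index

      var wg sync.WaitGroup
      for i, in := range inputs {
          wg.Add(1)
          go func(i int, in string) {
              defer wg.Done()
              results[i] = process(in) // the work itself runs concurrently
          }(i, in)
      }

      wg.Wait() // "reordering" is free: slots are already in input order
      return results
  }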

gleenn•5mo ago
It's a reasonable solution. The problem with it, mentioned in the article, is that you necessarily get worst-case memory usage, because you have to store everything in the map first. If you don't have too much to store, it will work.
destel•5mo ago
I haven’t used Java for about a decade, so I’m not very familiar with the Streams API.

Your snippet looks good and concise.

One thing I haven’t emphasized enough in the article is that all the algorithms there are designed to work with potentially infinite streams.

Groxx•5mo ago
Their planned semantics don't allow for that - there's no backpressure in that system, so it might race ahead and process up to e.g. item 100 while still working on item 1.

If everything fits in memory, that's completely fine. And then yeah, this is wildly overcomplicated, just use a waitgroup and a slice and write each result into its slice index and wait for everything to finish - that matches your Java example.

But when it doesn't fit in memory, that means you have unbounded buffer growth that might OOM.

kunley•5mo ago
chan chan Foo seems like a cool trick; looking forward to using it in my code. Thanks for the idea.

PS. I realize you present an even better solution; still, the first version seems like a nice enough thing to have in a toolbox.

destel•5mo ago
Thanks. This replyTo pattern is very similar to promises in other languages.
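
A rough illustration of the chan-chan / replyTo idea (a sketch with made-up names, not the article's code): each submitted item gets its own reply channel, and reading those reply channels back in submission order preserves ordering even though the work itself runs concurrently.

  type Foo struct{ N int }

  // squareAll fans work out to goroutines but collects results in order.
  func squareAll(inputs []int) []Foo {
      replies := make(chan chan Foo, len(inputs)) // a channel of reply channels

      for _, x := range inputs {
          r := make(chan Foo, 1) // personal reply channel for this item
          replies <- r           // queued in submission order
          go func(x int, r chan Foo) {
              r <- Foo{N: x * x} // concurrent work; result goes to the replyTo channel
          }(x, r)
      }
      close(replies)

      // Consuming the reply channels in the order they were queued
      // yields results in the original input order.
      var out []Foo
      for r := range replies {
          out = append(out, <-r)
      }
      return out
  }
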
gabesullice•4mo ago
I might be too late for you to see this, but I'm curious why your final example requires the function "f" to receive the canWrite channel. I must be missing something. Couldn't the OrderedLoop signature be:

  func OrderedLoop[A, B any](in <-chan A, done chan<- B, n int, f func(a A))
Instead of:

  func OrderedLoop[A, B any](in <-chan A, done chan<- B, n int, f func(a A, canWrite <-chan struct{}))
Or is there a reason that "f" can't be wrapped in an anonymous function to handle the blocking logic?
destel•4mo ago
You're not late. Just yesterday I configured new-reply notifications via RSS.

Typically function "f" does two things. 1. Performs calculations (this can be parallelized). 2. Writes results somewhere (this must happen sequentially and in the correct order).

Here's the typical OrderedLoop usage example from the article:

  OrderedLoop(in, out, n, func(a A, canWrite <-chan struct{}) {
   // [Do processing here]
   
   // Everything above this line is executed concurrently,
   // everything below it is executed sequentially and in order
   <-canWrite
   
   // [Write results somewhere]
  })

Without the "canWrite" the entire body of the "f" will be executed either concurrently or sequentially. With it - we can have this dual behavior, similar to critical sections protected by mutexes.

It's worth mentioning that OrderedLoop is a low-level primitive. User-level functions like OrderedMap, OrderedFilter and others do not need the "canWrite" channel to be exposed.
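
For illustration, here is one rough guess at how such a primitive could be built (a simplified signature without the output channel; my own sketch, not the article's or rill's implementation): each task carries a "canWrite" channel that is closed only when the previous task has finished writing, so processing is concurrent but writes stay serialized in input order.

  import "sync"

  // task pairs an input item with the channels that serialize its write phase.
  type task[A any] struct {
      a        A
      canWrite <-chan struct{} // closed when it is this item's turn to write
      done     chan struct{}   // closed by the worker once f has returned
  }

  // orderedLoopSketch runs f on up to n items concurrently; the canWrite
  // channels are chained so the part of f after <-canWrite runs in input order.
  func orderedLoopSketch[A any](in <-chan A, n int, f func(a A, canWrite <-chan struct{})) {
      tasks := make(chan task[A])

      // Dispatcher: give each item the previous item's "done" channel as its
      // canWrite, so the chain enforces input order for the write phase.
      go func() {
          prev := make(chan struct{})
          close(prev) // the first item may write immediately
          for a := range in {
              done := make(chan struct{})
              tasks <- task[A]{a: a, canWrite: prev, done: done}
              prev = done
          }
          close(tasks)
      }()

      // n workers execute f concurrently; writes stay serialized via the chain.
      var wg sync.WaitGroup
      for i := 0; i < n; i++ {
          wg.Add(1)
          go func() {
              defer wg.Done()
              for t := range tasks {
                  f(t.a, t.canWrite)
                  close(t.done) // unblock the next item's write phase
              }
          }()
      }
      wg.Wait()
  }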

abtinf•5mo ago
Another scenario where order matters is in Temporal workflows. Temporal’s replay capability requires deterministic execution.
Groxx•5mo ago
That's a rather special case: they and Cadence control when calls into their code unblock, and they use that to run your code as if it were a single-threaded event loop. That way, the stuff they do can be deterministic while simulating parallel execution (but only concurrency).
tetraodonpuffer•5mo ago
Thanks for the write-up! In my current application I have a few different scenarios that are a bit different from yours but still require processing aggregated data in order:

1. Reading from various files where each file has lines with a unique identifier I can use to process in order: I open all the files and build a min-heap from the first line of each, then repeatedly process by grabbing the lowest entry from the heap; after consuming a line from a file, I read its next line and push it back into the min-heap (each heap entry holds the open file descriptor for its file). A rough sketch of this follows after the list.

2. Aggregating across goroutines that service data generators with different latencies and throughputs. I have one goroutine interfacing with each generator and consider these the “producers”. Using a global atomic integer I can quickly assign a unique, increasing index to the messages coming in; these can then be serviced with a min-heap, same as above. There are some considerations around dropping messages that are too old, so an alternative approach for some cases is to key the min-heap on receive time and only process entries up to time.Now() minus some buffering window, which allows more time for things to settle before anything gets dropped (trading total latency for this).

3. Similar to the above, I have another scenario where ingestion throughput is more important and repeated processing happens in order, but there is no requirement that all messages have been processed every time, only that they are processed in order (this is the backing for a log viewer). In this case I just slab-allocate and dump what I receive without ordering concerns, but I also keep a btree of the indexes that I iterate over when it’s time to process. I originally had this buffering like (2) to guarantee mostly ordered insertions in the slabs themselves (which I simply iterated over), but if a stall happened in a goroutine, then shifting over the items in the slab when the old items finally came in became very expensive and could spiral badly.
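
A rough sketch of the min-heap merge described in (1), assuming a made-up file format where each line starts with a numeric ID (illustrative only, not the commenter's actual code):

  package main

  import (
      "bufio"
      "container/heap"
      "fmt"
      "os"
      "strconv"
      "strings"
  )

  // entry is one buffered line plus the scanner it came from.
  type entry struct {
      id   int
      line string
      src  *bufio.Scanner
  }

  // minHeap orders entries by their numeric line ID.
  type minHeap []entry

  func (h minHeap) Len() int           { return len(h) }
  func (h minHeap) Less(i, j int) bool { return h[i].id < h[j].id }
  func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
  func (h *minHeap) Push(x any)        { *h = append(*h, x.(entry)) }
  func (h *minHeap) Pop() any {
      old := *h
      x := old[len(old)-1]
      *h = old[:len(old)-1]
      return x
  }

  // readNext returns the next "<id> <payload>" line from a scanner, false at EOF.
  func readNext(s *bufio.Scanner) (entry, bool) {
      for s.Scan() {
          fields := strings.Fields(s.Text())
          if len(fields) == 0 {
              continue
          }
          id, err := strconv.Atoi(fields[0])
          if err != nil {
              continue // skip malformed lines in this sketch
          }
          return entry{id: id, line: s.Text(), src: s}, true
      }
      return entry{}, false
  }

  func main() {
      // Seed the heap with the first line of every file passed on the command line.
      h := &minHeap{}
      for _, name := range os.Args[1:] {
          f, err := os.Open(name)
          if err != nil {
              panic(err)
          }
          defer f.Close()
          if e, ok := readNext(bufio.NewScanner(f)); ok {
              heap.Push(h, e)
          }
      }

      // Repeatedly take the globally smallest line, then refill from the
      // same source, so output comes out in global ID order across all files.
      for h.Len() > 0 {
          e := heap.Pop(h).(entry)
          fmt.Println(e.line)
          if next, ok := readNext(e.src); ok {
              heap.Push(h, next)
          }
      }
  }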

destel•5mo ago
Wow, that’s some seriously sophisticated stuff - it’s not that often you see a heap used in typical production code (outside of libraries)!

Your first example definitely gives me merge-sort vibes - a really clean way to keep things ordered across multiple sources. The second and third scenarios are a bit beyond what I’ve tackled so far, but super interesting to read about.

This also reminded me of a WIP PR I drafted for rill (probably too niche, so I’m not sure I’ll ever merge it). It implements a channel buffer that behaves like a heap - basically a fixed-size priority queue where re-prioritization only happens for items that pile up due to backpressure. Maybe some of that code could be useful for your future use cases: https://github.com/destel/rill/pull/50

tetraodonpuffer•5mo ago
Hah, not sure about “production”; I am currently between jobs and am taking advantage of that to work on a docker/k8s/file TUI log viewer.

I am using those techniques respectively for loading backups (I store each container log in a separate file inside a big zip file, which allows concurrent reading without unpacking) and for servicing the various log-producing goroutines (which use the docker/k8s APIs as well as fsnotify for files), since I allow creating “views” of containers that consequently need to aggregate in order. The TUI itself, using tview, runs in a separate goroutine at a configurable fps, reading from these buffers.

I have things mostly working; the latest significant refactoring was introducing the btree-based reading after noticing the “fix the order” stalls were too bad, and I am planning to do a Show HN when I’m finished. It has been a lot of fun going back to solo-dev greenfield stuff after many years of architecture-focused work.

I definitely love Go, but despite being careful and having access to great tools like rr and dlv in GoLand, it can sometimes get difficult to debug deadlocks, especially when mixing channels and locks. I have found this library quite useful for chasing down deadlocks in some scenarios: https://github.com/sasha-s/go-deadlock

candiddevmike•5mo ago
Personally, I've come to really hate channels in Go. They are a source of some seriously heinous deadlock bugs that are really hard to debug, and closing channels in the wrong spot can crash your entire app. I try using plain locks until it hurts before I reach for channels these days.
Groxx•5mo ago
Well over half of the code I've ever seen that uses three or more channels (i.e. two semantic ones plus a cancellation or shutdown) has had serious flaws in it.

Granted, that generally means they're doing something non-trivial with concurrency, and that correlates strongly with "has concurrency bugs". But I see issues FAR more frequently when they reach for channels rather than mutexes. It's bad enough that I just check absolutely every three-chan chunk of code proactively now.

I lay part of the blame on Go's "♥ safe and easy concurrency with channels! ♥" messaging. And another large chunk at the lack of generics (until recently), making abstracting these kinds of things extremely painful. Combined, you get "just do it by hand lol, it's easy / get good" programming, which is always a source of "fun".

ifoxhz•5mo ago
I completely agree with your point. I also strongly dislike this programming model. However, are there better handling mechanisms or well-established libraries for managing concurrency and synchronization in Go? Previously, when I used C, I relied heavily on libraries like libuv to handle similar issues.
Groxx•5mo ago
There are some (the article's author builds a pretty sophisticated one: https://github.com/destel/rill ), but the ecosystem spent a very long time vilifying abstraction and generics and we're going to be paying that price for another decade at least. Possibly forever.

Generics in particular are rather important here because without them, you are forced to build this kind of thing from scratch every time to retain type safety and performance, or give up and use reflection (more complicated, less safe, requires careful reading to figure out how to use because everything is an `interface{}`). This works, and Go's reflection is quite fast, but it's not a good experience for authors or users, so they're rather strongly incentivized to not build it / just do it by hand lol.

Now that we have a somewhat crippled version of generics, much of this can be solved in an ideal way: https://pkg.go.dev/slices works for everything and is fast, safe, easy to use, and reasonably easy to build. But there's a decade of inertia (with both existing code and community rejection of the concept) to turn around.
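
As a small illustration of the kind of type-safe helper generics make cheap to write (a sketch, not from rill or the standard library), something like this previously had to be duplicated per type or done through reflection:

  // MapChan applies f to every value from in and forwards the results,
  // preserving order, without any reflection or interface{} casts.
  func MapChan[A, B any](in <-chan A, f func(A) B) <-chan B {
      out := make(chan B)
      go func() {
          defer close(out)
          for a := range in {
              out <- f(a)
          }
      }()
      return out
  }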

__turbobrew__•5mo ago
Agreed. I especially think it was common to overuse channels when golang was younger as that was “the go way”. I think people have started to realize that channels are complex and a sharp abstraction and they should not be used frivolously.

I can’t think of the last time I actually wrote code which directly created channels. Of course things like contexts, tickers, etc are implemented with channels and I think that is ideally how they should be used — in well defined and self contained library code.

kunley•5mo ago
Totally different perspective here. Never disappointed with channels; can't stand async.
latchkey•5mo ago
For something like this, I would instinctively reach for an external queue mechanism instead of trying to work through the complexity of golang's concurrency.

Create a bunch of sequentially numbered jobs that then write their output into a Postgres database. Then have N workers process the jobs. Something like GCP's Cloud Tasks is perfect for this because the "workers" are just GCP Cloud Functions, so you can have a near-infinite number of them (limited by concurrent DB connections).

This approach also buys you durability of the queue for free (ie: what happens when you need to stop your golang process mid queue?).

Then it is just a query:

  select * from finished_jobs order by job_num;
destel•5mo ago
I've just made a small but important clarification to the article. While in many cases it's easier and even preferable to calculate all results, accumulate them somewhere, and then sort, this article focuses on memory-bound algorithms that support infinite streams and backpressure.

latchkey•5mo ago
Thanks, but I'd still use a queue over this solution.

Real-time Log Enrichment: perfect for my example [0]; you're firing off endless tasks, and RT logs have a timestamp.

Finding the First Match in a File List: Files tend to be static. I'd use a queue to first build an index and then a queue to process the index.

Time Series Data Processing: Break the data into chunks; you mention 600MB, which isn't that big at all given that Cloud Run memory maxes out at 32GB.

[0] https://news.ycombinator.com/item?id=45094387