frontpage.

Le Chat. Custom MCP Connectors. Memories

https://mistral.ai/news/le-chat-mcp-connectors-memories
21•Anon84•26m ago•2 comments

30 minutes with a stranger

https://pudding.cool/2025/06/hello-stranger/
435•MaxLeiter•5h ago•134 comments

Use Bayes rule to mechanically solve probability riddles

https://cloud.disroot.org/s/Ec4xTMFDteTrFio
10•zaik•3d ago•0 comments

The Color of the Future: A history of blue

https://www.hopefulmons.com/p/the-color-of-the-future
36•prismatic•2h ago•5 comments

Polars Cloud and Distributed Polars now available

https://pola.rs/posts/polars-cloud-launch/
53•jonbaer•8h ago•30 comments

I Should Have Loved Electrical Engineering

https://blog.tdhttt.com/post/love-ee/
17•tdhttt•3d ago•13 comments

Show HN: A roguelike game that runs inside Notepad++

https://github.com/thelowsunoverthemoon/NeuroPriest
94•lowsun•3d ago•10 comments

Claude Code: Now in Beta in Zed

https://zed.dev/blog/claude-code-via-acp
607•meetpateltech•20h ago•384 comments

Étoilé – desktop built on GNUStep

http://etoileos.com/
152•pabs3•8h ago•58 comments

Liquid Glass? That's what your M4 CPU is for

https://idiallo.com/byte-size/apple-liquid-glass
48•luismedel•1h ago•53 comments

Neovim Pack

https://neovim.io/doc/user/pack.html#vim.pack
190•k2enemy•11h ago•108 comments

Reverse engineering Solos smart glasses

https://jfloren.net/b/2025/8/28/0
98•floren•3d ago•14 comments

Minesweeper thermodynamics

https://oscarcunningham.com/792/minesweeper-thermodynamics/
128•robinhouston•2d ago•34 comments

The Bitter Lesson Is Misunderstood

https://obviouslywrong.substack.com/p/the-bitter-lesson-is-misunderstood
284•JnBrymn•6d ago•172 comments

AR Fluid Simulation Demo

https://danybittel.ch/fluid
93•danybittel•3d ago•19 comments

Melvyn Bragg steps down from presenting In Our Time

https://www.bbc.co.uk/mediacentre/2025/melvyn-bragg-decides-to-step-down-from-presenting-in-our-t...
155•aways•5h ago•92 comments

Nuclear: Desktop music player focused on streaming from free sources

https://github.com/nukeop/nuclear
336•indigodaddy•19h ago•211 comments

A Rebel Writer's First Revolt

https://www.vulture.com/article/arundhati-roy-mother-mary-comes-to-me-review.html
7•lermontov•1d ago•1 comments

Hledger 1.50

https://github.com/simonmichael/hledger/releases/tag/1.50
21•olexsmir•1h ago•1 comments

Google was down in eastern EU and Turkey

https://www.novinite.com/articles/234225/Google+Down+in+Eastern+Europe+%28UPDATED%29
65•nurettin•3h ago•16 comments

William Wordsworth's letter: "The Law of Copyright" (1838)

https://gutenberg.org/cache/epub/76806/pg76806-images.html
28•petethomas•6h ago•15 comments

New knot theory discovery overturns long-held mathematical assumption

https://www.scientificamerican.com/article/new-knot-theory-discovery-overturns-long-held-mathemat...
110•baruchel•1d ago•19 comments

Half an year on Alpine: just musl aside

https://blog.jutty.dev/posts/half-an-year-on-alpine/
34•zdw•2d ago•12 comments

Writing a C compiler in 500 lines of Python (2023)

https://vgel.me/posts/c500/
208•ofou•19h ago•62 comments

Understanding Transformers Using a Minimal Example

https://rti.github.io/gptvis/
221•rttti•20h ago•14 comments

Eels are fish

https://eocampaign1.com/web-version?p=495827fa-8295-11f0-8687-8f5da38390bd&pt=campaign&t=17562270...
137•speckx•21h ago•136 comments

What is it like to be a bat?

https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
160•adityaathalye•17h ago•219 comments

ReMarkable Paper Pro Move

https://remarkable.com/products/remarkable-paper/pro-move
240•ksec•11h ago•287 comments

Say Bye with JavaScript Beacon

https://hemath.dev/blog/say-bye-with-javascript-beacon/
22•moebrowne•3d ago•14 comments

Speeding up PyTorch inference on Apple devices with AI-generated Metal kernels

https://gimletlabs.ai/blog/ai-generated-metal-kernels
172•nserrino•18h ago•26 comments

Preserving Order in Concurrent Go Apps: Three Approaches Compared

https://destel.dev/blog/preserving-order-in-concurrent-go
76•destel•3d ago

Comments

destel•2d ago
Hi everyone, I’m the author of the article. Happy to answer any questions or discuss concurrency patterns in Go. Curious how others tackle such problems.
Traubenfuchs•2d ago
> Curious how others tackle such problems.

What do you think about the order preserving simplicity of Java?

  List<Input> inputs = ...;

  List<Output> results = inputs.parallelStream()
                             .map(this::processTask)
                             .collect(toList());
If you want more control or have more complex use cases, you can use an ExecutorService of your choice, handle the futures yourself, or get creative with Java's new structured concurrency.
kamranjon•2d ago
Often in Go I'll create some data structure, like a map, to hold the new values keyed by the original index (basically a for loop with goroutines inside that close over the index value) - then I just reorder them after waiting for all of them to complete.

Is this basically what Java is doing?

I think that maybe the techniques in this article are a little more complex, allowing you to optimize further (basically continue working as soon as possible instead of just waiting for everything to complete and reordering after the fact) but I’d be curious to know if I’ve missed something.
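
A minimal sketch of that index-keyed approach (with processTask standing in for the real per-item work, and a slice instead of a map since the indexes are dense):

  // Minimal sketch of the "results keyed by original index" pattern.
  // processTask is just a placeholder for the real per-item work.
  package main

  import (
      "fmt"
      "strings"
      "sync"
  )

  func processTask(s string) string { return strings.ToUpper(s) }

  func main() {
      inputs := []string{"a", "b", "c", "d"}
      results := make([]string, len(inputs)) // index-addressed, so order is preserved

      var wg sync.WaitGroup
      for i, in := range inputs {
          wg.Add(1)
          go func(i int, in string) {
              defer wg.Done()
              results[i] = processTask(in) // each goroutine writes only its own slot
          }(i, in)
      }

      wg.Wait() // once everything finishes, results are already in input order
      fmt.Println(results) // [A B C D]
  }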

gleenn•2d ago
It's a reasonable solution. The problem with it is mentioned in the article: you necessarily have worst-case memory usage, because you have to store everything in the map first. If you don't have too much to store, it will work.
destel•2d ago
I haven't used Java for about a decade, so I'm not very familiar with the Streams API.

Your snippet looks good and concise.

One thing I haven't emphasized enough in the article is that all algorithms there are designed to work with potentially infinite streams.

Groxx•2d ago
Their planned semantics don't allow for that - there's no backpressure in that system, so it might race ahead and process up to e.g. item 100 while still working on item 1.

If everything fits in memory, that's completely fine. And then yeah, this is wildly overcomplicated: just use a WaitGroup and a slice, write each result into its slice index, and wait for everything to finish - that matches your Java example.

But when it doesn't fit in memory, that means you have unbounded buffer growth that might OOM.

kunley•2d ago
chan chan Foo seems like a cool trick; looking forward to using it in code. Thanks for the idea.

PS. I realize you present an even better solution; still, the first version seems like a nice thing to have in a toolbox.
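
A rough sketch of the chan chan idea, a channel of per-item reply channels consumed in order (processTask, the types, and the buffer sizes here are placeholders, not the article's exact implementation):

  // Each input gets its own reply channel; the reply channels are sent down an
  // ordered outer channel, so the reader emits results in input order while the
  // actual work runs concurrently. The outer buffer bounds in-flight items,
  // which is what provides backpressure.
  package main

  import (
      "fmt"
      "strings"
  )

  func processTask(s string) string { return strings.ToUpper(s) }

  func orderedMap(in <-chan string, maxInFlight int) <-chan string {
      replies := make(chan chan string, maxInFlight)

      go func() {
          defer close(replies)
          for item := range in {
              reply := make(chan string, 1)
              replies <- reply // output order is fixed here, before the work finishes
              go func(item string, reply chan string) {
                  reply <- processTask(item)
              }(item, reply)
          }
      }()

      out := make(chan string)
      go func() {
          defer close(out)
          for reply := range replies {
              out <- <-reply // blocks until this particular item's result is ready
          }
      }()
      return out
  }

  func main() {
      in := make(chan string)
      go func() {
          defer close(in)
          for _, s := range []string{"a", "b", "c", "d"} {
              in <- s
          }
      }()
      for r := range orderedMap(in, 3) {
          fmt.Println(r) // A, B, C, D - always in input order
      }
  }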

destel•2d ago
Thanks. This replyTo pattern is very similar to promises in other languages.
abtinf•2d ago
Another scenario where order matters is in Temporal workflows. Temporal’s replay capability requires deterministic execution.
Groxx•2d ago
That's a rather special case: they and Cadence control when calls into their code unblock, and they use that to run your code as if it were a single-threaded event loop. That way, the stuff they do can be deterministic while simulating parallel execution (though it's really only concurrency).
tetraodonpuffer•2d ago
Thanks for the write-up! In my current application I have a few scenarios that are a bit different from yours but still require processing aggregated data in order:

1. Reading from various files where each file has lines with a unique identifier I can use to process in order: I open all the files and seed a min-heap with the first line of each, then process by repeatedly grabbing the lowest entry from the min-heap; after consuming a line from a file, I read the next one and push it back into the min-heap (the min-heap cells contain the opened file descriptor for that file). A rough sketch of this is at the end of this comment.

2. Aggregating across goroutines that service data generators with different latencies and throughputs. I have one goroutine each interfacing with them and consider them “producers”. Using a global atomic integer I can quickly assign a unique increasing index to the messages coming in; these can then be serviced with a min-heap, same as above. There are some considerations about dropping messages that are too old, so an alternative approach for some cases is to index the min-heap on received time and process only up to time.Now() minus some buffering time, to allow more time for things to settle before dropping anything (trading total latency for this).

3. Similar to the above, I have another scenario where ingestion throughput is more important and repeated processing happens in order, but there is no requirement that all messages have been processed every time, just that they are processed in order (this is the backing for a log viewer). In this case I just slab-allocate and dump what I receive without ordering concerns, but I also keep a btree with the indexes that I iterate over when it's time to process. I originally had this buffering like (2) to guarantee mostly ordered insertions in the slabs themselves (which I simply iterated over), but if a stall happened in a goroutine, shifting over the items in the slab when the old items came in became very expensive and could spiral badly.
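
Here is roughly what the scenario-1 merge looks like, with in-memory sorted slices standing in for the opened files and integer IDs as the sort keys:

  // Min-heap merge of several already-sorted sources (a k-way merge).
  package main

  import (
      "container/heap"
      "fmt"
  )

  type source struct {
      lines []int // already sorted within each source, like lines in one file
      pos   int   // index of the next unread line
  }

  type entry struct {
      key int // the current line's unique, increasing identifier
      src int // which source it came from
  }

  type minHeap []entry

  func (h minHeap) Len() int           { return len(h) }
  func (h minHeap) Less(i, j int) bool { return h[i].key < h[j].key }
  func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
  func (h *minHeap) Push(x any)        { *h = append(*h, x.(entry)) }
  func (h *minHeap) Pop() any {
      old := *h
      n := len(old)
      item := old[n-1]
      *h = old[:n-1]
      return item
  }

  func main() {
      sources := []source{
          {lines: []int{1, 4, 9}},
          {lines: []int{2, 3, 8}},
          {lines: []int{5, 6, 7}},
      }

      h := &minHeap{}
      // Seed the heap with the first line of each source.
      for i := range sources {
          heap.Push(h, entry{key: sources[i].lines[0], src: i})
          sources[i].pos = 1
      }

      // Repeatedly take the smallest key, then refill from the same source.
      for h.Len() > 0 {
          e := heap.Pop(h).(entry)
          fmt.Println(e.key) // lines come out globally ordered: 1 through 9
          s := &sources[e.src]
          if s.pos < len(s.lines) {
              heap.Push(h, entry{key: s.lines[s.pos], src: e.src})
              s.pos++
          }
      }
  }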

destel•2d ago
Wow, that’s some seriously sophisticated stuff - it’s not that often you see a heap used in typical production code (outside of libraries)!

Your first example definitely gives me merge-sort vibes - a really clean way to keep things ordered across multiple sources. The second and third scenarios are a bit beyond what I’ve tackled so far, but super interesting to read about.

This also reminded me of a WIP PR I drafted for rill (probably too niche, so I’m not sure I’ll ever merge it). It implements a channel buffer that behaves like a heap - basically a fixed-size priority queue where re-prioritization only happens for items that pile up due to backpressure. Maybe some of that code could be useful for your future use cases: https://github.com/destel/rill/pull/50

tetraodonpuffer•2d ago
Hah, not sure about “production”: I'm currently between jobs and taking advantage of that to work on a docker/k8s/file TUI log viewer.

I am using those techniques respectively for loading backups (I store each container log in a separate file inside a big zip file, which allows concurrent reading without unpacking) and for servicing the various log-producing goroutines (which use the docker/k8s APIs as well as fsnotify for files), since I allow creating “views” of containers that consequently need to aggregate in order. The TUI itself, using tview, runs in a separate goroutine at a configurable FPS, reading from these buffers.

I have things mostly working; the latest significant refactoring was introducing the btree-based reading after noticing the “fix the order” stalls were too bad, and I am planning to do a Show HN when I'm finished. It has been a lot of fun going back to solo-dev greenfield stuff after many years of architecture-focused work.

I definitely love golang, but despite being careful and having access to great tools like rr and dlv in GoLand, it can sometimes get difficult to debug deadlocks, especially when mixing channels and locks. I have found this library quite useful for chasing down deadlocks in some scenarios: https://github.com/sasha-s/go-deadlock

candiddevmike•2d ago
Personally, I've come to really hate channels in Go. They are a source of some seriously heinous deadlock bugs that are really hard to debug, and closing channels in the wrong spot can crash your entire app. I try using plain locks until it hurts before I reach for channels these days.
Groxx•2d ago
Well over half of the code I've ever seen that uses three or more channels (i.e. two semantic ones plus a cancellation or shutdown) has had serious flaws in it.

Granted, that generally means they're doing something non-trivial with concurrency, and that correlates strongly with "has concurrency bugs". But I see issues FAR more frequently when they reach for channels rather than mutexes. It's bad enough that I just check absolutely every three-chan chunk of code proactively now.

I lay part of the blame on Go's “♥ safe and easy concurrency with channels! ♥” messaging, and another large chunk on the lack of generics (until recently), which made abstracting these kinds of things extremely painful. Combined, you get "just do it by hand lol, it's easy / get good" programming, which is always a source of "fun".

ifoxhz•2d ago
I completely agree with your point. I also strongly dislike this programming model. However, are there better handling mechanisms or well-established libraries for managing concurrency and synchronization in Go? Previously, when I used C, I relied heavily on libraries like libuv to handle similar issues.
Groxx•1d ago
There are some (the article's author builds a pretty sophisticated one: https://github.com/destel/rill ), but the ecosystem spent a very long time vilifying abstraction and generics and we're going to be paying that price for another decade at least. Possibly forever.

Generics in particular are rather important here because without them, you are forced to build this kind of thing from scratch every time to retain type safety and performance, or give up and use reflection (more complicated, less safe, requires careful reading to figure out how to use because everything is an `interface{}`). This works, and Go's reflection is quite fast, but it's not a good experience for authors or users, so they're rather strongly incentivized to not build it / just do it by hand lol.

Now that we have a somewhat crippled version of generics, much of this can be solved in an ideal way: https://pkg.go.dev/slices works for everything and is fast, safe, easy to use, and reasonably easy to build. But there's a decade of inertia (with both existing code and community rejection of the concept) to turn around.
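
To illustrate the point: with type parameters, the index-keyed pattern from earlier in the thread becomes a small reusable, type-safe helper with no interface{} and no reflection. The name OrderedMap and its signature are made up for this sketch:

  package main

  import (
      "fmt"
      "sync"
  )

  // OrderedMap applies f to every input concurrently and returns the results
  // in the original order, since each goroutine writes only its own index.
  // Invented for this sketch; not from any real library.
  func OrderedMap[T, R any](inputs []T, f func(T) R) []R {
      results := make([]R, len(inputs))
      var wg sync.WaitGroup
      for i, in := range inputs {
          wg.Add(1)
          go func(i int, in T) {
              defer wg.Done()
              results[i] = f(in)
          }(i, in)
      }
      wg.Wait()
      return results
  }

  func main() {
      squares := OrderedMap([]int{1, 2, 3, 4}, func(n int) int { return n * n })
      fmt.Println(squares) // [1 4 9 16]
  }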

__turbobrew__•2d ago
Agreed. I especially think it was common to overuse channels when golang was younger as that was “the go way”. I think people have started to realize that channels are complex and a sharp abstraction and they should not be used frivolously.

I can't think of the last time I actually wrote code which directly created channels. Of course, things like contexts, tickers, etc. are implemented with channels, and I think that is ideally how they should be used — in well-defined and self-contained library code.

kunley•2d ago
Totally different perspective here. Never disappointed with channels, can't stand async.
latchkey•2d ago
For something like this, I would instinctively reach for an external queue mechanism instead of trying to work through the complexity of golang's concurrency.

Create a bunch of sequentially numbered jobs that write their output into a Postgres database. Then have N workers process the jobs. Something like GCP's Cloud Tasks is perfect for this because the "workers" are just GCP Cloud Functions, so you can have a near-infinite number of them (limited by concurrent DB connections).

This approach also buys you durability of the queue for free (i.e. what happens when you need to stop your golang process mid-queue?).

Then it is just a query:

  select * from finished_jobs order by job_num;
destel•2d ago
I've just made a small but important clarification to the article. While in many cases it's easier and even preferable to calculate all results, accumulate them somewhere, and then sort, this article focuses on memory-bound algorithms that support infinite streams and backpressure.

latchkey•2d ago
Thanks, but I'd still use a queue over this solution.

Real-time Log Enrichment: perfect for my example [0]; you're firing off endless tasks, and real-time logs have a timestamp.

Finding the First Match in a File List: Files tend to be static. I'd use a queue to first build an index and then a queue to process the index.

Time Series Data Processing: break the data into chunks; you mention 600MB, which isn't that big at all given that Cloud Run memory maxes out at 32GB.

[0] https://news.ycombinator.com/item?id=45094387