
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
136•theblazehen•2d ago•39 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
665•klaussilveira•14h ago•201 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
948•xnx•19h ago•550 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•31 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
51•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
16•kaonwarb•3d ago•19 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
221•dmpetrov•14h ago•117 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
492•todsacerdoti•22h ago•242 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
380•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
24•jesperordrup•4h ago•15 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
42•helloplanets•4d ago•40 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
18•bikenaga•3d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•4 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
57•gfortaine•12h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1065•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•135 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•43 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
182•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

C++ Coroutines Advanced: Converting std::future to asio::awaitable

https://www.ddhigh.com/en/2025/07/15/cpp-coroutine-future-to-awaitable/
66•xialeistudio•6mo ago

Comments

xialeistudio•6mo ago
In modern C++ development, coroutines have brought revolutionary changes to asynchronous programming. However, when using boost::asio or standalone asio, we often encounter scenarios where we need to convert traditional std::future<T> to asio::awaitable<T>. This article will detail an efficient, thread-safe conversion method.
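
For readers skimming the thread, here is a rough sketch of the kind of conversion being discussed (not necessarily the article's exact code; it assumes Boost.Asio and C++20 coroutines): the blocking future.get() is moved onto a worker pool and the coroutine hops back to its original executor afterwards. The name future_to_awaitable is illustrative.

    #include <boost/asio.hpp>
    #include <chrono>
    #include <future>
    #include <iostream>
    #include <thread>

    namespace asio = boost::asio;

    // Hop onto a worker pool, block on the future there, then hop back to the
    // awaiting coroutine's original executor.
    template <typename T>
    asio::awaitable<T> future_to_awaitable(std::future<T> fut, asio::thread_pool& pool)
    {
        auto io_ex = co_await asio::this_coro::executor;                // remember the I/O executor
        co_await asio::post(pool.get_executor(), asio::use_awaitable);  // switch to the pool
        T value = fut.get();                                            // blocking wait, off the I/O thread
        co_await asio::post(io_ex, asio::use_awaitable);                // switch back
        co_return value;
    }

    int main()
    {
        asio::io_context io;
        asio::thread_pool pool{2};

        std::future<int> fut = std::async(std::launch::async, [] {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            return 42;
        });

        asio::co_spawn(io, [&]() -> asio::awaitable<void> {
            int v = co_await future_to_awaitable(std::move(fut), pool);
            std::cout << "future resolved with " << v << "\n";
        }, asio::detached);

        io.run();
        pool.join();
    }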
userbinator•6mo ago
Did you just copy-paste the first paragraph of the article?
vlovich123•6mo ago
It's literally his article.
tgv•6mo ago
If you post to HN, you can choose between a text or a url. When you pick url, you can add some text, but it's added as a comment. I guess that's what happened.
mgaunard•6mo ago
So the solution is to have a thread waiting on the future. Technically you'd need a thread per future, which is not exactly scalable. The article uses a pool which has its own problems.

The article even mentions an arguably better approach (check on a timer), but for some reason claims it is worse.

Those integrations are not exactly good designs regardless; the solution is simply to not use std::future, and to use non-blocking async mechanisms that can cooperate on the same thread instead. Standard C++ has one, albeit somewhat overcomplicated: senders and receivers. Asio also works.

gpderetta•6mo ago
Boost also has a better future that at least allows composition.

But yes, do not use std::future except for the most simple tasks.

Davidbrcz•6mo ago
I work with C++, but the amount of "don't use standard feature X because reasons" is crazy.
account42•6mo ago
It is a bit sad to see this for newer features. Maybe the committee should re-evaluate how quickly new designs are pushed into the standard and allow for a bit more time for evaluation. Moving fast makes sense when it's ok to break things, not so much when you need to support the result forever.
m-schuetz•6mo ago
On the contrary, I think they should move faster and provide more convenience functions that are "good enough" for 90% of use cases. For power users, there will always be a library that addresses domain-specific issues better than the standard could ever hope to.

Instead, the committee attempts to work towards perfect solutions that don't exist, and ends up releasing overengineered stuff that is neither the most convenient, performant, nor efficient solution. Like <random>.

pjmlp•6mo ago
And who gets to implement those ideas faster, many of which were never implemented before being added into the standard in first place?

The three surviving compilers are already lagging as it is: none of them is fully 100% C++20 compliant, C++23 might only reach 100% on two of them, and let's see how C++26 compliance turns out. Meanwhile, C++17 parallel algorithms are only fully available in one of them, while the other two require TBB with libstdc++ to actually make use of them.

m-schuetz•6mo ago
I'm obviously not talking about modules-level features that may never get to see the light of day.

A random(min, max) function isn't rocket science and is already a major improvement over the three-liner that is currently necessary. The major compiler devs won't take long to implement these cases, just as it did not take them long to implement simple yet useful functionality in previous versions of the standard. And the standard library is full of these cases: missing convenience functions on top of deliberately over-engineered ones.
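
For illustration, the contrast being drawn looks roughly like this; random_int is a hypothetical convenience wrapper, not a real standard function:

    #include <random>

    // What <random> requires today at each call site (the "three-liner"):
    //   std::random_device rd;
    //   std::mt19937 gen{rd()};
    //   std::uniform_int_distribution<int> dist{1, 6};
    //   int roll = dist(gen);

    // A hypothetical convenience wrapper of the kind the comment asks for:
    int random_int(int min, int max)
    {
        static thread_local std::mt19937 gen{std::random_device{}()};
        std::uniform_int_distribution<int> dist{min, max};
        return dist(gen);
    }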

pjmlp•6mo ago
Modules have already seen the light of day in VC++ and clang.

Anyone using a recent version of Office, is using code that was written with C++20 modules.

It is relatively easy to see how far behind compiler developers are regarding even basic features.

Note that two of the three major surviving compilers are open source projects, and in all three major compilers, the big names have ramped down their contributions, as they would rather invest in their own languages, seeing the current versions as good enough for existing codebases.

account42•6mo ago
Well designed functions that deliberately target only 90% of use cases are fine.

Badly designed library types that end up being effectively deprecated but you still need to deal with for decades because they end up in all kinds of interfaces are not.

Davidbrcz•6mo ago
I wouldn't mind if they actually _fix_ features afterward, even if it means a breaking change.
account42•6mo ago
Users would mind though. Strong backwards compatibility is a very useful feature even if it does mean that you need to be careful about new additions.
lenkite•6mo ago
C++ really needs a fast-deprecate and kick out strategy for features that have proven to be poor - whether by bad design or bad implementation. And compilers should auto warn about such features.
pjmlp•6mo ago
Although C++ is one of my favourite languages, I feel the current WG21 process is broken: it is one of the few language evolution processes where proposals are allowed to be voted in without any kind of preview implementation for community feedback, or even to actually validate the idea.

I have to acknowledge that none of the other ISO languages, including C, are this radical.

That is how we are getting so many warts lately.

Unfortunately there doesn't seem to exist any willingness to change this, until it is too late to matter.

mgaunard•6mo ago
that's not true, implementations are more often required than not
gpderetta•6mo ago
std::future was caught in the coroutine/network/concurrency/parallelism master plan that has been redesigned way too many times. Senders/receivers is the current direction, and while I don't dislike it, we are still far from a final design that covers all use cases (we still don't have a sender/receiver network library proposal, I think).

Whatever we end up with, std::future just wasn't a good base for a high-performance async story. Still, just adding a readiness callback to std::future would make it infinitely more useful, even if suboptimal. At least it would be usable where performance is not a concern.

mgaunard•6mo ago
There is such a proposal
gpderetta•6mo ago
A recent one? Do you remember the doc number?

edit: if you mean the concurrency TS, I think that's dead right?

mgaunard•6mo ago
http://wg21.link/P2762
meindnoch•6mo ago
>The article even mentions an arguably better approach (check on a timer), but for some reasons claims it is worse.

How do you know what timeout to use for the timer? You may end up with tons of unnecessary polling if your timeout is too short, or high latency if your timeout is too long.

>Standard C++ has one albeit somewhat overcomplicated, senders and receivers

*in C++26

pjmlp•6mo ago
Or CUDA, which is getting its own compute version.
mgaunard•6mo ago
> how do you know what timeout to use?

Adjust based on your minimum expected latency, potentially with exponential backoff, and potentially account for the scheduler overhead of the spurious wake-ups.

Basically if you check every millisecond to every 10ms, you should be fine.

> in C++26

it's a library, language support is not necessary.

There are implementations you can use right now.
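
A rough sketch of the timer-polling variant being debated here, assuming Boost.Asio; poll_future and the 1 ms default interval are illustrative, echoing the "every millisecond to every 10ms" rule of thumb above:

    #include <boost/asio.hpp>
    #include <chrono>
    #include <future>

    namespace asio = boost::asio;

    // Re-arm a steady_timer until the future reports ready, then fetch the value.
    template <typename T>
    asio::awaitable<T> poll_future(std::future<T> fut,
                                   std::chrono::milliseconds interval = std::chrono::milliseconds{1})
    {
        asio::steady_timer timer{co_await asio::this_coro::executor};
        while (fut.wait_for(std::chrono::seconds{0}) != std::future_status::ready) {
            timer.expires_after(interval);
            co_await timer.async_wait(asio::use_awaitable);
        }
        co_return fut.get();  // ready, so get() no longer blocks
    }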

OskarS•6mo ago
I think the idea is that you're using some external library (e.g. database drivers) which does not use asio but returns a std::future. You can't just "not use std::future" if that's what your library uses, short of fully rewriting that library.

The other option is, as you mention, polling using a timer, but I don't see how that's better; I'd rather move the work off of the event loop to a thread. And you then have to do the "latency vs. CPU time" tradeoff dance, trying to judge how often to poll vs. how much latency you're willing to accept.

mgaunard•6mo ago
I'd argue using third-party libraries in a C++ project is a bad idea to begin with. Bad ones especially.
benreesman•6mo ago
An imperfect but really useful standard for whether or not a C++ feature is going to work out is how long and troublesome the standardization process is. Modules? Never happening boys, the time has passed when anyone cares. Coroutines? Break gdb forever? Keep dreaming.

Look at what they can do when it's clearly a good idea and has the backing of the absolute apex predator experts: reflection. If reflection can fucking sail through and your thing struggles, it's not going to make it.

Andrew Kelley just proposed the first plausible IO monad for a systems language, and if I wanted to stay relevant in async C++ innovation I'd just go copy it. Maybe invert the lift/unlift direction.

The coroutines TS is heavily influenced by folly coroutines (or vice versa), a thing with which I have spent many a late night debugging segfaults. Not happening.

Besides, if threads are too slow or big now? Then everything but liburing is.

cherryteastain•6mo ago
Coroutines are well supported in boost::asio and are deployed in production in more places than you would think.
benreesman•6mo ago
A second heuristic for things that aren't going to work out well is stuff that came from Boost. Oh sure, there was a time back in the TR1 days when it was practically part of the standard. But if it's on anyone's "cool, we'll link that, no problem" list in 2025? I don't know them.
spacechild1•6mo ago
Asio is actually independent from boost and it is by far the most popular C++ networking library.
pjmlp•6mo ago
Not everyone is using C++ on Linux with GCC.

There are people for whom modules and coroutines have already happened, and there are better debugging experiences out there than gdb.

spacechild1•6mo ago
I'm not sure what you're trying to say. Coroutines have been standardized in C++20 and they are fully supported by all major compilers. They are successfully used in production. I've switched to coroutines for all networking in my personal projects and I'm not looking back.
mgaunard•6mo ago
Reflection is purely a compile-time frontend thing, so is easy to roll out.

Things that affect the runtime or ecosystem of tools are obviously more complicated, especially given that those things aren't really covered by the standard.

flakes•6mo ago
Seems prone to deadlocking – I would avoid making the thread pool globally scoped, and instead provide it as an argument to the helper methods.
usrnm•6mo ago
Every time I read an article like this I thank the day when I switched from C++ to go. I know why C++ is like this, I understand all the hard work that went into evolving it over 40 years, but I simply refuse to deal with all this stuff anymore. I have better things to worry about in my life.
IshKebab•6mo ago
Yeah... I used C++ coroutines a bit and they're super powerful and can do anything you want... But... I mean look at how complex co_await is:

https://en.cppreference.com/w/cpp/language/coroutines.html#c...

It does about 20 different steps with a ton of opportunities for overloading and type conversion. Insanely complicated!

And they kept up the pattern of throwing UB everywhere:

> Falling off the end of the coroutine is equivalent to co_return;, except that the behavior is undefined if no declarations of return_void can be found in the scope of Promise.

Why?? Clearly they have learnt nothing from decades of C++ bugs.

Hopefully Rust gets coroutines soon...

Yoric•6mo ago
Out of curiosity, what would you use coroutines for, in Rust?

My personal use was clearly `async`/`await`, and this landed quite some time ago.

IshKebab•6mo ago
For stimulus generation for SystemVerilog tests. I think you might be able to use `async`/`await` for that but I'm not 100% sure - I haven't tried.
jcelerier•6mo ago
> I mean look at how complex co_await is

it doesn't look meaningfully more complex than C#'s spec (which has absolutely horrendous stuff like :throw-up-emoji: inheriting from some weird vendor type like "System.Runtime.CompilerServices.INotifyCompletion")?

https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

monkeyelite•6mo ago
Who is asking for C++ coroutines, and how did we get to C++20 without them?
quietbritishjim•6mo ago
I'm interested in this too.

Coroutines in Python are fantastically useful and allow more reliable implementation of networking applications. There is a complexity cost to pay but it's small and resolves other complexity issues with using threads instead, so overall you end up with simpler code that is easier to debug. "Hello world" (e.g., with await sleep(1) to make it non-trivially async) is just a few lines.

But coroutines in C++ are so stupendously complicated I can't imagine using them in practice. The number of concepts you have to learn to write a "hello world" application is huge. Surely just using the callback-on-completion style already possible with ASIO (where the callback is usually another method in the same object as the current function) is going to lead to simpler code, even if it's a few lines longer than with coroutines?

Edit: We have a responsibility as senior devs (those of us that are) to ensure that code isn't just something we can write but something others can read, including those that don't spend their spare time reading about obscure C++ ideas. I can't imagine who in good faith thinks that C++ coroutines fall into this category.

rhaen•6mo ago
The majority of the complexity is in the library/executor, rather than in callers. We have an implementation at my company which is now being widely rolled out, and it's a pretty dramatic readability win to convert callback-based code to nearly straight-line coroutine code.
quietbritishjim•6mo ago
That's very promising.

Boost ASIO seemed to be the first serious coroutine library for C++ and that seemed complex to use (I'm saying that as a long-time user of its traditional callback API) but that's perhaps not surprising given that it had to fit with its existing API. But then there was a library (I forget which) posted to HN that was supposed to be a clean fresh coroutine library implementation and that still seems more complex than ASIO and callbacks - it seemed like you needed to know practically every underlying C++ coroutine concept. But maybe there just needed to be time for libraries to mature a bit.

spacechild1•6mo ago
I was just going to mention ASIO.

> and that seemed complex to use

Actually, I found it pretty straightforward. I switched from callbacks to coroutines in my personal project and it is a massive win! Now I can write simple loops instead of nested callbacks. Also, most state can now stay in local variables.
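
The kind of straight-line loop being described looks roughly like this (a sketch assuming Boost.Asio; echo_session is an illustrative name):

    #include <boost/asio.hpp>
    #include <exception>

    namespace asio = boost::asio;
    using asio::ip::tcp;

    // One connection handled as one coroutine: a plain loop, state in locals,
    // no nested callbacks.
    asio::awaitable<void> echo_session(tcp::socket socket)
    {
        try {
            char buf[1024];
            for (;;) {
                std::size_t n = co_await socket.async_read_some(asio::buffer(buf),
                                                                asio::use_awaitable);
                co_await asio::async_write(socket, asio::buffer(buf, n),
                                           asio::use_awaitable);
            }
        } catch (const std::exception&) {
            // connection closed or errored; the coroutine simply ends
        }
    }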

monkeyelite•6mo ago
There is another way to write code which lets you write simple loops and isn’t coroutines. Blocking code.
spacechild1•6mo ago
Sure, but then you need one thread per socket, which has its own set of problems (most notably, the need for thread synchronization). I definitely prefer async + coroutines over blocking + thread-per-socket.
horizion2025•6mo ago
Java's new philosophy (in "Loom" - in production OpenJDK now) seems to be virtual threads that are cheap and can therefore be plentiful compared to native threads. This allows you to write the code in the old way without programmer-visible async.
spacechild1•6mo ago
Ok, but virtual threads still need thread synchronization.
monkeyelite•6mo ago
which isn't a problem unless you are abusing threads.

If you avoid synchronization, like javascript then you also don't get pre-emption or parallelism.

spacechild1•6mo ago
> which isn't a problem unless you are abusing threads.

Well, some people would call this a problem (or downside). Many real-world programs need to access shared state or exchange data between clients. This is significantly less error prone if everything happens on a single thread.

> If you avoid synchronization, like javascript then you also don't get pre-emption or parallelism.

When we are talking about networking, most of the time is spent waiting for I/O. We need concurrency, but there's typically no need for actual CPU level parallelism.

I'm not saying that we shouldn't use threads at all - on the contrary! - but we should use them where they make sense. In some cases we can't even avoid it (e.g. audio).

A typical modern desktop application, for example, would have the UI on the main thread, all the networking on a network thread, audio on an audio thread, expensive calculations on a worker thread (pool), etc.

IMO it just doesn't make sense to complicate things by having one thread per socket when all the networking can easily be served by a single thread.

monkeyelite•6mo ago
> having one thread per socket

I didn’t say that. You can serve multiple sockets on a thread.

I could respond to more points. But ultimately my point is that if, for, switch etc is the kind of code you can read and debug. And async/callback is not. Async await tries to make the code look more like regular code but doesn’t succeed. I’m just advocating for actually writing normal blocking code.

A thread is exactly the right abstraction - a program flow. Synchronization is a reality of having multiple flows of execution.

I’m interested in the project mentioned in the sibling comment about virtual threads which maybe reduces the overhead (alleviating your I/O bound concern) but allows you to write this normal code.

spacechild1•6mo ago
> You can serve multiple sockets on a thread.

But how would you do that with blocking I/O (which you have been suggesting)? As soon as multiple sockets are receiving data, blocking I/O requires threads.

> Async await tries to make the code look more like regular code but doesn’t succeed.

Can you be more specific? I'm personally very happy with ASIO + coroutines.

> A thread is exactly the right abstraction - a program flow.

IMO the right abstraction for concurrent program flow is suspendable and resumable functions (= coroutines), because you know exactly how the individual subprograms may interleave.

OS threads add parallelism, which means the subprograms can interleave at arbitrary points. This actually takes away control from you, which you then have to regain with critical sections, message queues, etc.

> Synchronization is a reality of having multiple flows of execution.

Depends on what kind of synchronization you're talking about. Thread synchronization is obviously only required when you have more than one thread.

monkeyelite•6mo ago
> But how would you do that with blocking I/O

when you read/write to a socket you can configure a timeout with the kernel to wait. If no data is ready, you can try another socket. The timeout can be 0

So you can serve N sockets in a while loop by checking one at a time which is ready.
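
What the zero-timeout case of that looks like, roughly (POSIX sockets, using MSG_DONTWAIT as one way to express "the timeout can be 0"; handle_data is a placeholder, and close/error handling is elided):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <cerrno>
    #include <vector>

    // Round-robin over N sockets, checking each one without blocking.
    void serve(const std::vector<int>& sockets)
    {
        char buf[4096];
        for (;;) {
            for (int fd : sockets) {
                ssize_t n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
                if (n > 0) {
                    // handle_data(fd, buf, n);  // application-specific, placeholder
                } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                    continue;  // nothing ready on this socket right now
                }
                // n == 0 (peer closed) and real errors omitted for brevity
            }
            // optionally sleep briefly here to avoid a pure busy loop
        }
    }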

> Can you be more specific? I'm personally very happy with ASIO + coroutines

1. You now have to color every function as async and there is an arbitrary boundary between them.

2. The debugger doesn’t work.

3. Because there is no pre-emption long tasks can starve others.

4. When designing an API you have to consider whether to use coroutines or not and either approach is incompatible with the other.

> Thread synchronization is obviously only required when you have more than one thread.

Higher-level concept: if you have two independently running computations, they must synchronize. Or they aren't really independent (which is what you're praising).

spacechild1•6mo ago
> when you read/write to a socket you can configure a timeout with the kernel to wait. If no data is ready, you can try another socket. The timeout can be 0

That's non-blocking I/O ;-) Except you typically use select(), poll() or epoll() to wait on multiple sockets simultaneously. The problem with that approach is obviously that you now have a state machine and need to multiplex between several sockets.

> You now have to color every function as async and there is an arbitrary boundary between them.

Not every function, only the ones you want to yield from/across. But granted, function coloring is a well-known drawback of many async/await implementations.

> 2. The debugger doesn’t work.

GDB seems to work just fine for me: I can set breakpoints, inspect local variables, etc. I googled a bit and apparently debugging coroutines used to be terrible, but has improved a lot recently.

> 3. Because there is no pre-emption long tasks can starve others.

If you have a long running task, move it to a worker thread pool, just like you would in a GUI application (so you don't block the UI thread).

Side note: Java's virtual threads are only preempted at specific points (I/O, sleep, etc.), so they can also starve each other if you do expensive work on them.

> 4. When designing an API you have to consider whether to use coroutines or not and either approach is incompatible with the other.

Same with error handling (e.g. error codes vs. exceptions). Often you can provide both styles, but it's more work for library authors. I'll give you that.

You're right, coroutines are no silver bullet and certainly have their own issues. I just found them pretty nice to work with so far.

monkeyelite•6mo ago
I think we have a shared understanding. Just wanted to comment here:

> That's non-blocking I/O ;-)

In other words, blocking code is so desirable that the kernel has been engineered to enable you to do it, and abstracts away the difficult engineering of dealing with async I/O devices.

I personally find great leverage from using OS kernel features, that I just don't get from languages and libraries.

> Java's virtual threads are only preempted at specific points (I/O, sleep, etc.)

Yes, this is a general weakness of the language run-time async. If we accept the premise that OS threads have too much overhead, then from the little bit I know about Java, that approach seems conceptually cleaner than the coloring one.

monkeyelite•6mo ago
That sounds interesting, I'll take a look! (although not using native threads is almost never about perf)
quietbritishjim•6mo ago
But the great thing about async (at least it's the killer feature for me) is the really top notch support for cancellation. You can also typically create and join async tasks more easily than spawning and joining threads.
horizion2025•6mo ago
As to the complexity: It is complex because it is very low-level. In JavaScript (using that as an example, but I suspect Python is the same) the async/await keywords are built in such that they are in cahoots with the Promise class. C++ takes a different path: there isn't a built-in Promise class; rather, it provides you lower-level primitives you can use to build a Promise class. You can build a library around it, and it will be as simple as in other languages - both for the awaiter and for implementing libraries that you can await on :) But I agree it is really complicated. I remember once in a while thinking "ah, it can't really be that complicated" only to dive into it again. It doesn't help that practically every term they use (promise, awaiter etc.) is used differently than in all other contexts I've worked in. If you just expect it will be as easy as understanding JavaScript async/await/promise, you are in for a rude surprise. Raymond Chen has written coroutine tutorials which span three SERIES. Here's a map of those: https://devblogs.microsoft.com/oldnewthing/20210504-01/?p=10...
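
To make "lower-level primitives" concrete, here is roughly the smallest return type you can build on top of them (illustrative only; a real Promise/task type adds result storage, continuations, exception propagation, and so on):

    #include <coroutine>
    #include <exception>
    #include <iostream>

    // About the smallest thing a C++20 coroutine can return: promise_type is
    // the customization point the standard gives you instead of a built-in
    // Promise class.
    struct task {
        struct promise_type {
            task get_return_object() {
                return task{std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_never initial_suspend() noexcept { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() { std::terminate(); }
        };

        explicit task(std::coroutine_handle<promise_type> h) : handle(h) {}
        ~task() { if (handle) handle.destroy(); }
        task(const task&) = delete;
        task& operator=(const task&) = delete;

        std::coroutine_handle<promise_type> handle;
    };

    task hello() {
        std::cout << "hello from a coroutine\n";
        co_return;
    }

    int main() { hello(); }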

As for how we got here without them:

1) Using large numbers of processes/threads.

2) Raw callback-oriented mechanisms (with all the downsides).

3) Structured async where you pass in lambdas - the benefit is you preserve the sequential structure and can have proper error handling if you stick to the structure. The downside is you are effectively duplicating language facilities in the methods (e.g. .then(), .exception()), and stack traces are often unreadable.

4) Raw use of various callback-oriented mechanisms like epoll and such, with the cost in code readability etc., and/or coupled with custom-written strategies to ease readability (so a subset of #3 really).

With C++ coroutines the benefit is you can write it almost like you usually do (line by line, sequentially) even though it works asynchronously.