
Message Queues: A Simple Guide with Analogies (2024)

https://www.cloudamqp.com/blog/message-queues-exaplined-with-analogies.html
111•byt3h3ad•3w ago

Comments

coronapl•3w ago
While queues definitely play an important role in microservices architecture, I think it’s worth clarifying that they’re not unique to it. A queue can fit perfectly in a monolith depending on the use case. I regularly use queues for handling critical operations that might require retrying, for having better visibility into failed jobs, ensuring FIFO guarantees, and more. Queues are such a useful tool for building any resilient architecture that framing them as primarily a microservices concern might cause unnecessary confusion.
robertlagrant•3w ago
Totally agree. Banks use durable queues a lot to make sure things get processed. Or at least they used to.
coronapl•3w ago
The analogy could be: “Queues are like the todo list of your team. The todo item (message) stays there until it is successfully completed. It can be handled by the producer (monolith) or it can be handled by someone else (microservices).”
Aurornis•3w ago
Monoliths also have to scale to multiple servers eventually, so message queues are an important architectural component to understand regardless of the organization of your services.
charcircuit•3w ago
Even without multiple servers a single server itself has many cores. So if you aren't using multiple threads you are leaving performance on the table.
Aurornis•3w ago
A single multithreaded process usually doesn’t need an external message queue for sharing, though.

I guess if you’re stuck with a single threaded language you would want a message queue though.

charcircuit•3w ago
No one specified an external message queue. You can have a message queue within the process itself that delivers messages from one thread to another. There are different kinds of message queues for this purpose, depending on how many threads will be producing messages to the queue and how many will be consuming from it. While message queues may not be necessary for multithreading, they are a very common setup: the ability to use a message queue to schedule work on a background thread is shared by many programs.
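In Python, for instance, the standard library's `queue.Queue` is exactly this kind of in-process, thread-safe message queue. A minimal producer/worker sketch (the doubling is just a stand-in for real background work):

```python
import queue
import threading

# A thread-safe multi-producer, multi-consumer queue from the standard library.
jobs: "queue.Queue[int]" = queue.Queue()
results = []

def worker() -> None:
    # Consume messages until the producer signals shutdown with a None sentinel.
    while True:
        item = jobs.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for real background work
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

for n in range(5):   # producer side: schedule work on the background thread
    jobs.put(n)
jobs.put(None)       # sentinel: tell the worker to stop
t.join()

print(sorted(results))  # [0, 2, 4, 6, 8]
```

The same shape scales up: swap `queue.Queue` for an external broker and the producer/consumer roles stay the same.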
NikolaNovak•3w ago
Absolutely, 100%.

I work on PeopleSoft Enterprise Resource Planning applications - the "boring" back-office HR, Pay, Financials, Planning etc stuff.

The core architecture is late 80s - mid 90s. Couple of big architectural changes when internet/browsers and then mobile really hit. But fundamentally it's a very legacy / old school application. Lots of COBOL, if that helps calibrate :->

We use queues pervasively. It's PeopleSoft's preferred integration method for external applications, but over the years a lot of the internal plumbing has moved to queues as well. PeopleSoft Integration Broker is kind of like an internal proprietary ESB. So understanding queues and messaging is key to my PeopleSoft Administrator teams wherever I go (basically sysadmins in service of the PeopleSoft application:).

coronapl•3w ago
Recently, I also started using queues for integrating with legacy health care applications. Most of them run on-premises and don't accept incoming internet connections for security reasons. The strategy is to send a message to a queue. The consumer application uses short polling to process the messages and then calls a webhook to share the status of the job. Do you also follow a similar approach?
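A minimal sketch of that short-poll-then-webhook loop, with the hosted queue and the producer's status endpoint replaced by in-memory stand-ins (`inbox`, `post_webhook`, and the job names below are mine, purely for illustration):

```python
import queue

# Stand-ins for the real infrastructure: `inbox` plays the role of the
# hosted queue, and `post_webhook` the producer's status endpoint.
inbox: "queue.Queue[dict]" = queue.Queue()
webhook_log = []

def post_webhook(payload: dict) -> None:
    # In production this would be an outbound HTTPS POST to the producer,
    # which works even when the consumer accepts no inbound connections.
    webhook_log.append(payload)

def poll_once() -> bool:
    """One short-poll iteration: fetch a message if present, process it,
    then report the job status via the outbound webhook."""
    try:
        msg = inbox.get_nowait()
    except queue.Empty:
        return False
    try:
        result = msg["value"] * 2  # stand-in for the real job
        post_webhook({"job_id": msg["job_id"], "status": "done", "result": result})
    except Exception:
        post_webhook({"job_id": msg["job_id"], "status": "failed"})
    return True

inbox.put({"job_id": 1, "value": 21})
while poll_once():
    pass
print(webhook_log)  # [{'job_id': 1, 'status': 'done', 'result': 42}]
```

The key property is that all connections are outbound from the consumer's side: it polls the queue and it calls the webhook, so the on-premises system never needs an open inbound port.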
NikolaNovak•3w ago
If I understand it correctly, no; PeopleSoft is Legacy in some ways but it is actively developed and improved/maintained. The Peoplesoft Integration Broker is "modern-ish" from that perspective, and a proper middleware messaging system:

https://docs.oracle.com/cd/E92519_02/pt856pbr3/eng/pt/tibr/c...

It'll do XML messages in somewhat proprietary format with other PeopleSoft applications, and "near-real-time" queues via web services with other applications in a fairly standardized way (WSDL etc). I think of PeopleSoft Integration Broker as a "mini, proprietary ESB", as inaccurate as it may be in details :).

prhn•3w ago
This is surprisingly basic knowledge for ending up on the front page.

It’s a good intro, but I’d love to read more about when to know it’s time to replace my synchronous inter service http requests with a queue. What metrics should I consider and what are the trade offs. I’ve learned some answers to this question over time, but these guys are theoretically message queue experts. I’d love to learn about more things to look out for.

There are also different types of queues/exchanges and this is critical depending on the types of consumer or consumers you have. Should I use direct, fan out, etc?

The next interesting question is when should I use a stream instead of a queue, which RabbitMQ also supports.

My advice, having just migrated a set of message queues and streams from AWS (ActiveMQ) to RabbitMQ, is: think long and hard before you add one. They become a black box of sorts and are way harder to debug than simple HTTP requests.

Also, as others have pointed out, there are other important use cases for queues which come way before microservice comms. Async processing to free up servers is one. I’m surprised none of these were mentioned.

Aurornis•3w ago
> This is surprisingly basic knowledge for ending up on the front page.

Nothing wrong with that! Hacker News has a large audience of all skill levels. Well written explainers are always good to share, even for basic concepts.

coronapl•3w ago
Agree! In fact, I would appreciate more well written articles explaining basic concepts on the front page of Hacker News. It is always good to revisit some basic concepts, but it is even better to relearn them. I am surprised by how often I realize that my definition of a concept is wrong or just superficial.
SAI_Peregrinus•3w ago
Also it's nice to have a set of well-written explainers for when someone asks about a concept.
p1anecrazy•3w ago
In principle, I agree, but “a message queue is… a medium through which data flows from a source system to a destination system” feels like a truism.
sigbottle•3w ago
For me, I've realized I often cannot possibly learn something if I can't compare it to something prior first.

In this case, as another user mentioned, the decoupling use case is a great one. Instead of two processes/APIs talking directly, having an intermediate "buffer" process/API can save you headaches.

nyrikki•3w ago
To add to this,

The concept of connascence, rather than coupling, is what I find more useful for trade-off analysis.

Synchronous connascence means that you only have a single architectural quantum, in Neal Ford's terminology.

As Ford is less religious and more respectful of real-world trade-offs, I find his writings more useful for real-world problems.

I encourage people to check his books out and see if it is useful. It was always hard to mention connascence as it has a reputation of being ivory tower architect jargon, but in a distributed system world it is very pragmatic.

chasil•3w ago
This has more depth on System V/POSIX IPC, and a youtube video.

https://www.softprayog.in/programming/interprocess-communica...

Fun fact: IPC was introduced in "Columbus UNIX."

https://en.wikipedia.org/wiki/CB_UNIX

SpaceManNabs•3w ago
I think the article would be a little bit more useful to non-beginners if it included an update on the modern landscape of MQs. Are people still using apache kafka lol?

it is a fine enough article as it is though!

deepsun•3w ago
Kafka is a distributed log system. Yes, people use Kafka as a message queue, but it's often a wrong tool for the job, it wasn't designed for that.
arter45•3w ago
> but I’d love to read more about when to know it’s time to replace my synchronous inter service http requests with a queue. What metrics should I consider and what are the trade offs. I’ve learned some answers to this question over time, but these guys are theoretically message queue experts. I’d love to learn about more things to look out for.

Not OP but I have some background on this.

An Erlang loss system is like a set of phone lines. Imagine a special call center where you have N operators, each of whom takes a call, talks for some time (serving the customer) and hangs up. Unlike many call centers, however, they don't keep you in line. Therefore, if all operators are busy the system hangs up and you have to explicitly call again. This is somewhat similar to a server with N threads.

Let's assume N=3.

Under common mathematical assumptions (a constant arrival rate with Poisson arrivals, i.e. exponentially distributed times between arrivals, and exponential service times) you can define:

1) “traffic intensity” (rho) as the ratio between the arrival rate and the service rate (intuitively, how “heavy” arrivals are with respect to “departures”)

2) the blocking probability is given by the Erlang B formula (sorry, not easy to write here) for parameters N (number of threads) and rho (traffic intensity). Basically, if traffic intensity = 1 (arrival rate = service rate), the blocking probability is 6.25%. If the service rate is twice the arrival rate, this drops to approximately 1%. If the service rate is 1/10 of the arrival rate, the blocking probability is 73.2%.
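The Erlang B formula that's hard to typeset inline has a simple recurrence: B(0, rho) = 1 and B(n, rho) = rho·B(n−1, rho) / (n + rho·B(n−1, rho)). A small Python sketch of it (the function name is mine) reproduces the figures above:

```python
def erlang_b(servers: int, rho: float) -> float:
    """Blocking probability of an Erlang loss system (M/M/N/N) with
    `servers` operators and offered traffic `rho`, computed via the
    standard numerically stable recurrence."""
    b = 1.0  # B(0, rho) = 1: with no servers, every call is blocked
    for n in range(1, servers + 1):
        b = rho * b / (n + rho * b)
    return b

print(round(erlang_b(3, 1.0), 4))   # 0.0625 -> the 6.25% case (rho = 1)
print(round(erlang_b(3, 0.5), 4))   # 0.0127 -> roughly the "1%" case
print(round(erlang_b(3, 10.0), 3))  # 0.732  -> the heavy-overload case
```

The recurrence avoids the factorials in the closed form, so it stays accurate even for large server counts.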

I will try to write down part 2 when I find some time.

EDIT - Adding part 2

So, let's add a buffer. We said we have three threads, right? Let's say the system can handle up to 6 requests before dropping, 1 processed by each thread plus an additional 3 buffered requests. Under the same distribution assumptions, this is known as a M/M/3/6 queue.

Some math crunching under the previous service and arrival rate scenarios:

- if the service rate equals the arrival rate, blocking probability drops to about 0.2%. Of course there is now a non-zero wait probability (close to 9%).

- if the service rate is twice the arrival rate, blocking probability is 0.006% and there is about a 1% wait probability.

- if the service rate is 1/10 of the arrival rate, blocking probability is 70%, waiting probability is 29%.

This means that a buffer reduces request drops due to busy resources, but also introduces a waiting probability. Pretty obvious. Another obvious thing is that you need additional memory for that queue length. Assuming queue length = 3, and 1 KB messages, you need 3 KB of additional memory.

A less obvious thing is that you are adding a new component. Assuming "in series" behavior, i.e. requests cannot be processed when the buffer system is down, this decreases overall availability if the queue is not properly sized. What I mean is that, if the system crashes when more than 4 KB of memory are used by the process, but you allow queue sizes up to 3 (3 KB + 3 KB = 6 KB), availability is not 100%, because in some cases the system accepts more requests than it can actually handle.

An even less obvious thing is that things, in terms of availability, change if you consider server and buffer as having distinct "size" (memory) thresholds. Things get even more complicated if server and buffer are connected by a link which itself doesn't have 100% availability, because you also have to take into account the link unavailability.
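For completeness, the buffered figures above can be recomputed from the standard birth-death state probabilities of an M/M/c/K queue. A generic calculator, sketched here with names of my choosing; note that exact percentages can differ a bit from the ones quoted above depending on modeling conventions (e.g. exactly which states count as "waiting"):

```python
from math import factorial

def mmck(c: int, k: int, a: float):
    """Blocking and waiting probabilities for an M/M/c/K queue with
    c servers, total capacity k (in service + buffered), and offered
    load a = arrival rate / service rate."""
    # Unnormalized state probabilities: Erlang-like for n <= c,
    # geometric decay with ratio a/c for the buffered states.
    p = [a**n / factorial(n) for n in range(c + 1)]
    p += [p[c] * (a / c) ** (n - c) for n in range(c + 1, k + 1)]
    total = sum(p)
    blocking = p[k] / total        # arrival finds the system full (PASTA)
    waiting = sum(p[c:k]) / total  # all servers busy, buffer not yet full
    return blocking, waiting

# The "service rate = arrival rate" scenario with 3 threads + 3 buffer slots:
blocking, waiting = mmck(3, 6, 1.0)
print(f"blocking={blocking:.4f}, waiting={waiting:.4f}")
```

Running the other scenarios (`a = 0.5`, `a = 10.0`) reproduces the 0.006% and ~70%/29% figures from the comment.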

BWStearns•3w ago
> when to know it’s time to replace my synchronous inter service http requests with a queue

I've found that once it's inconveniently long for a synchronous client side request, it's less about the performance or metrics and more about reasoning. Some things are queue shaped, or async job shaped. The worker -> main app communication pattern can even remain sync http calls or not (like callback based or something), but if you have something that has high variance in timing or is a background thing then just kick it off to workers.

I'd also say start simple and only go to Kafka or some other high dev-time overhead solution when you start seeing Redis/Rabbit stop being sufficient. Odds are you can make the simple solution work.

ImPleadThe5th•3w ago
After spending most of my career hacking on these systems, I feel like queues very quickly become a hammer and every entity quickly becomes a nail.

Just because you can keep two systems in complete sync doesn't mean you should. If you ever find yourself with more-or-less identical tables in two services you may have gone too far.

Eventually you find yourself backfilling downstream services due to minor domain or business logic changes and scaling is a problem again.

emmanueloga_•3w ago
I’ve been thinking that defaulting to durable execution over lower‑level primitives like queues makes sense a lot of the time, what do you think?

A lot of the "simple queue" use cases end up needing extra machinery like a transactional‑outbox pattern just to be reliable. Durable‑execution frameworks (DBOS/Temporal/etc.) give you retries, state, and consistency out of the box. Patterns like Sagas also tend to get stitched together on top of queues, but a DE workflow gives you the same guarantees with far less complexity.
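As a concrete illustration of that "extra machinery": the transactional-outbox pattern is roughly "commit the business row and the outgoing message in one database transaction, then let a relay drain unsent outbox rows into the queue." A minimal sqlite sketch (table and function names are mine):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0);
""")

def place_order(item: str) -> None:
    # The business write and the outbox write commit atomically: either
    # both happen or neither does, so no message is lost or invented.
    with db:
        db.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (f"order created: {item}",))

def relay(publish) -> int:
    # Poll unsent outbox rows, publish them, then mark them sent.
    # In production `publish` would be e.g. an AMQP basic.publish.
    rows = db.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        publish(payload)
        db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
    db.commit()
    return len(rows)

published = []
place_order("widget")
relay(published.append)
print(published)  # ['order created: widget']
```

A durable-execution engine effectively bakes this bookkeeping (plus retries and workflow state) into the framework, which is the "far less complexity" being claimed.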

The main tradeoff I can think of is latency: DE engines add overhead, so for very high throughput, huge fan‑out, or ultra‑low‑latency pipelines, a bare‑bones queue + custom consumers might still be better.

Curious where others draw the line between the two.

jedberg•3w ago
Highly biased opinion here since I'm the CEO of DBOS:

It'll be rare that the overhead actually has an effect, especially if you use a library like DBOS, which only adds a database write. You still have to write to and read from your queue, which is about as expensive as a database write/read.

abelanger•3w ago
Drawing the boundary at high throughput, huge fan-out and ultra-low-latency is correct - I'd also add that MQs are often used for pub/sub and signaling.

MQs are heavily optimized for reducing E2E latency between publishers and consumers in a way that DE engines are not, since DE engines usually rely on an ACID compliant database. Under load I've seen an order of magnitude difference in enqueue times (low single-digit milliseconds for the MQ p95 vs 10ms p95 for Postgres commit times). And AMQP has a number of routing features built-in (i.e. different exchange types) that you won't see in DE engines.

Another way to think about it is that message queues usually provide an optional message durability layer alongside signaling and pub/sub. So if you need a very simple queue with retries _and_ you need pub/sub, I'd be eyeing an MQ (or a DE execution engine that supports basic pub/sub, like Hatchet).

I wrote about our perspective on this here: https://hatchet.run/blog/durable-execution

( disclaimer - I'm one of the people behind https://github.com/hatchet-dev/hatchet )

charcircuit•3w ago
It doesn't make sense to persist by default. If I send a message to my rendering thread's message queue that the window is fully occluded, I never want that message to be persisted. If the process crashes, the fact that the window was occluded back then has no relevance to the current instance of the process.

Trying to persist things has a performance cost that you don't want to pay every time one thread communicates with another.

Do you think ArrayList or std::vector should be persisted by default?

keithnz•3w ago
This is not really a good guide to message queues; it only talks about them in the context of one use. It doesn't really talk about the queue itself at all, or the basic differences between various message queue implementations. Your local AI chatbot is going to give you a much better overview: just give it the title "Message Queues: A Simple Guide with Analogies" and it does a much better job.