You really don't want your streams/aggs to come close to being that large.
Immutable history sounds like a good idea, until you're writing code to support every event schema you ever published. And all the edge cases that inevitably creates.
CQRS sounds good, until you just want to read a value that you know has been written.
Event sourcing probably has some legitimate applications, but I'm convinced the hype around it is predominantly just excellent marketing of an inappropriate technology by folks and companies who host queueing technologies (like Kafka).
This is for you and the author apparently: Practicing CQRS does not mean you're splitting up databases. CQRS simply means using different models for reading and writing. That's it. Nothing about different databases or projections or event sourcing.
This quote from the article is just flat out false:
> CQRS introduces eventual consistency between write and read models:
No it doesn't. Eventual consistency is a design decision made independent of using CQRS. Just because CQRS might make it easier to split, it doesn't in any way have an opinion on whether you should or not.
> by folks and companies who host queueing technologies (like Kafka).
Well that's good because Kafka isn't an event-sourcing technology and shouldn't be used as one.
Almost all CQRS designs have some read view or projection built off consuming the write side.
If this is not the case, and you're just writing your "read models" in the write path, where is the 'S' in CQRS (the S is for segregation)? You wouldn't have a CQRS system here. You'd just be writing read-optimised data.
- Read side is a SELECT on a Postgres view
It's a real stretch to be describing a postgres view as CQRS
That's EXACTLY what CQRS is.
I think you might struggle to understand CQRS.
You can scale them independently in that you can control the rate at which your views are read and the batch size of your updates.
The whole big win with CQRS is that it allows for very efficient batching.
You use POST for your Cs and GET for your Qs. Tada!
This is flat out false.
Or segregate even.
On the other hand, CQRS + the single-writer pattern on their own can be a massive performance win, because they allow for efficient batching of views and updates. It's also much simpler to implement than a full-blown event sourcing system.
> Events stop being an internal persistence detail and become a public contract.
You can't blame event sourcing for people not doing it correctly, though.
The events aren't a public contract and shouldn't be treated as such. Treating them that way will result in issues.
> Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.
This is true, but all you're really saying is "use the right tool for the right job".
Perhaps not, but you can criticise articles like this that suggest that CQRS will solve many problems for you, without touching on _any_ of its difficulties or downsides, or the mistakes that many people end up making when implementing these systems.
It's not complicated or complex.
You really can. If there's a technology or approach which the majority of people apply incorrectly that's a problem with that technology or approach.
You can blame the endless number of people who jump into these threads with hot takes about technologies they neither understand nor have experience with.
How many event sourced systems have you built? If the answer is 0, I'd have a real hard time understanding how you can even make that judgement.
In fact, half of this thread can't even be bothered to look up the definition of CQRS, so the idea that "Storing facts" is to blame for people abusing it is a bit of a stretch, no?
This thread appears to have stories from several people who have, though, and they have credible criticisms:
https://news.ycombinator.com/item?id=45962656#46014546
https://news.ycombinator.com/item?id=45962656#46013851
https://news.ycombinator.com/item?id=45962656#46014050
What's your response to the common theme that event sourcing systems are difficult to maintain in the face of constantly changing product requirements?
In fact, I think most complexity I create or encounter is in response to trying to future-proof stuff I know will change.
I'm in healthcare. And it changes CONSTANTLY. Like, enormous, foundational changes yearly. But that doesn't mean there aren't portions of the domain that could benefit from event sourcing (and that have long-established patterns, like ADT feeds for instance).
One warning I often see supplied with event sourcing is not to base your entire system around it. Just the parts that make sense.
Blood pressure spiking, high temperature, weight loss, etc are all established concepts that could benefit from event sourcing. But that doesn't mean healthcare doesn't change or that it is a static field per se. There are certainly parts of my system that are CRUD and introducing event-sourcing would just make things complicated (like maintaining a list of pharmacies).
I think what's happening is that a lot of hype around the tech + people not understanding when to apply it is responsible for what we're seeing, not that it's a bad pattern.
Could be that some of the bad experiences we hear about are from people applying it to fields like content management (I've been tempted to try it there) or applying it to whole systems rather than individual parts.
> Could be that some of the bad experiences we hear about are from people applying it to fields like content management (I've been tempted to try it there) or applying it to whole systems rather than individual parts
Amen. And I think what most people miss is that it's really hard to do for domains you're just learning about. And I don't blame people for feeling frustrated.
I've been on an ES team at my current job, and switched to a CRUD monolith.
And to be blunt, the CRUD guys just don't know that they're wrong - not their opinion about ES - but that the data itself is wrong. Their system has evaluated 2+2=5, and with no way to see the 2s, what conclusion can they draw other than 5 is the correct state?
I have been slipping some ES back into the codebase. It's inefficient because it's stringy data in an SQL database, but I look forward to support tickets because I don't have to "debug". I just read the events, and have the evidence to back up that the customer is wrong and the system is right.
I think one of the draws of ES is that it feels like the ultimate way to store stuff. The ability to pinpoint exact actions in time and then use that data to create different projections is super cool to me.
Flip it on its head.
Would those domains be better off with simple crud? Did the accountants make a wrong turn when they switched from simple-balances to single-entry ledgers?
You get similar levels of historical insight, with the disadvantage that to replay things you might need to put a little CLI or script together to infer commands out of the audit log (which if you do a lot, you can make a little library to make building those one off tools quite simple - I've done that). But you avoid all the many well documented footguns that come from trying to run an event sourced system in a typical evolving business.
We have a customer whom we bill for feature X.
Does he actually have feature X or are we billing him for nothing?
With ES: We see his Subscriptions and Cancellations and know if he has feature X.
Without ES: We don't know if he subscribed or cancelled.
With audit log: We almost know whether he subscribed or cancelled.
A single `events` table falls apart as the system grows, and untyped JSONB data in an `event_data` column just moves the mess into code. Event payloads drift, handlers fill with branching logic, and replaying or migrating old events becomes slow and risky. The pattern promises clarity but eventually turns into a pile of conditionals trying to decode years of inconsistent data.
A simpler and more resilient approach is using the database features already built for this. Stored procedures can record both business data and audit records in a controlled way. CDC provides a clean stream for the tables that actually need downstream consumers. And even carefully designed triggers give you consistent invariants and auditability without maintaining a separate projection system that can lag or break.
Event sourcing works when the domain truly centers on events, but for most systems these database driven tools stay cleaner, cheaper, and far more predictable over time.
You rarely replay them to reconstruct business state; you just pump them into analytics or enrichment pipelines.
How can you be sure that the data stuffed into JSONB fits a particular schema, and that future changes are backwards compatible with rows added long ago?
The way this article suggests using JSONB would also be problematic, because you're stuffing potentially varying structures into one column. You could technically create one massive JSON Schema that uses oneOf to validate that each event conforms to one of your structures, but I think it would be horrible for performance.
Currently working on a DDDd event sourced system with CQRS and really enjoy it.
xlii•2mo ago
Too much dry code for my taste and not many remarks/explanations - that's not bad, because for prose I'd recommend Martin Fowler's articles on event processing, but it _could be better_ ;-)
WRT the tech itself - personally I think Go is one of the best languages to use for event sourcing today (with Haskell maybe being second). I've been doing complexity analysis of ES in various languages, and the Go implementation was mostly free (due to Event being an interface and not a concrete structure).
azkalam•2mo ago
Can you explain this? Go has a very limited type system.