I have never heard anyone mention this, ever.
tl;dr is that the French and German governments are really ahead of the curve then
I read through this, joined their matrix server and talked.
Signal is decent enough to be used. (what I am using with my one friend lol)
This makes SimpleX look good, but having actually looked into its protocol, I'd argue it isn't. Its group feature is actually experimental: you trust the original nodes, and they can be malicious... and people could send messages behind your back too... So a no go.
Tor-based options are good too if you can manage to keep a device online, but I have heard somewhere that Tor can be tracked through BGP as well. Honestly, Tor might be the best option really :/
There are threat models. One has to find their threat model and work from it. There is no "best alternative", in my honest opinion.
TL;DR: the only valid points here really are complaints about state resets (being addressed in https://matrix.org/blog/2025/07/security-predisclosure/) and canonical json edge cases (which are on the radar). We should probably also remove device_display_names entirely. Stuff about "you have to trust other people's servers when you ask them to delete data!" is not exactly earth-shattering, and the encryption & authenticated media issues mentioned got fixed in 2024.
Point by point:
> 1. the graph is append-only by design
Nope, Matrix rooms are designed to let servers prune old data if they want - https://element-hq.github.io/synapse/latest/message_retentio... is how you configure Synapse for it, for instance. The DAG can also have gaps in it (see point 6 below).
> 2. if you do want to delete something, you can send a redaction event which asks other servers very nicely to delete the content of the event, but redactions are advisory
If you ask a server to delete data, you have to trust it actually deletes it. That goes for any protocol; it's nothing to do with Matrix.
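For what it's worth, asking for a redaction is a single authenticated API call; the advisory part is only what the *other* servers do with it afterwards. A minimal sketch (the endpoint shape is from the Matrix client-server spec; the homeserver URL, room ID, event ID and token here are placeholders):

```python
# Hedged sketch: sending a Matrix redaction via the client-server API.
# PUT /_matrix/client/v3/rooms/{roomId}/redact/{eventId}/{txnId}
# All identifiers below are placeholders, not real rooms or tokens.
import json
import urllib.request


def redaction_url(homeserver, room_id, event_id, txn_id):
    """Build the redaction endpoint URL for a given event."""
    return (f"{homeserver}/_matrix/client/v3/rooms/{room_id}"
            f"/redact/{event_id}/{txn_id}")


def redact_event(homeserver, access_token, room_id, event_id, txn_id,
                 reason=None):
    """Ask our homeserver to redact an event. The redaction is then sent
    over federation, but honouring it is up to each remote server."""
    body = json.dumps({"reason": reason} if reason else {}).encode()
    req = urllib.request.Request(
        redaction_url(homeserver, room_id, event_id, txn_id),
        data=body, method="PUT")
    req.add_header("Authorization", f"Bearer {access_token}")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # {"event_id": "..."} on success
```

The point stands either way: nothing in this call (or any protocol) can force a remote machine to actually drop its copy.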
> 3. however, servers that choose to ignore redactions, or fail to process them for some other reason, can leak supposedly-deleted data to other servers later on.
see above.
> 4. certain events, like membership changes, bans or pretty much any event that exercises some control over another user can't be deleted ever as they become woven into the "auth chain" of future events
This one's almost true. The fact that "events which exercise control over another user" (i.e. access control) can't be deleted should not be surprising, given access control that doesn't disappear from under you is generally considered a good thing. However, if you really do want to delete it, you could 'upgrade' the room by pointing it to a new room ID, and vape the previous one (although admittedly there's no 'vape room' API yet).
> 5. the only way to discard all of this spam complexity is to recreate the room.
...or upgrade it, which is increasingly a transparent operation (we've been doing a bunch of work on it in preparation for https://matrix.org/blog/2025/07/security-predisclosure/). Meanwhile, mitigating state spam is part of the scope of the ongoing security work mentioned there.
> 6. it's exceptionally hard to linearize history if you don’t know the entire history of the room partially.
Yup, this is a feature. We don't want servers to have to sync full room history; they're allowed to do it in chunks. The tradeoff is that ordering the chunks is a heuristic, although we're currently in the process of improving that.
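To make the chunked-sync tradeoff concrete, here is an illustrative sketch (not Synapse's actual algorithm) of ordering one chunk of events by their `prev_events` pointers, falling back to `origin_server_ts` as the tie-breaking heuristic. Events whose parents fall outside the chunk are treated as roots, which is exactly where the heuristic ordering comes in:

```python
# Illustrative sketch of linearizing a partial event DAG: a topological
# sort over prev_events, with origin_server_ts breaking ties. Field
# names mirror Matrix events; the algorithm itself is a simplification.
import heapq


def linearize(events):
    """events: dict of event_id -> {"prev_events": [ids], "ts": int}.
    Returns event_ids in a causally-consistent order."""
    indegree = {eid: 0 for eid in events}
    children = {eid: [] for eid in events}
    for eid, ev in events.items():
        for parent in ev["prev_events"]:
            if parent in events:  # parents outside the chunk are unknown
                indegree[eid] += 1
                children[parent].append(eid)
    # Events with no known parents are ready; order them by timestamp.
    ready = [(events[e]["ts"], e) for e, d in indegree.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, eid = heapq.heappop(ready)
        order.append(eid)
        for child in children[eid]:
            indegree[child] -= 1
            if indegree[child] == 0:
                heapq.heappush(ready, (events[child]["ts"], child))
    return order
```

The timestamp tie-break is the "heuristic" part: two branches of the DAG have no causal order between them, so any total order is a choice.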
> 7. it is also somewhat possible to insert messages into history by crafting events in the graph that refer to older ancestor events
Decentralisation means that servers are allowed to branch from old commits (in git parlance), much like git. This is desirable if you're handling delayed traffic from a network partition or outage; we're working on avoiding it in other scenarios.
> 8. another thing that is worth noting is that end-to-end encryption in matrix is completely optional.
Sometimes E2EE makes no sense (e.g. massive public rooms, or clients which don't implement E2EE). Any client that speaks E2EE makes it abundantly clear when a room is encrypted and when it isn't; much like https v. http in a browser.
> 9. the end-to-end encryption is also annoyingly fragile
Not any more; we fixed it over the course of 2024 - see https://2024.matrix.org/documents/talk_slides/LAB4%202024-09... or the recording at https://www.youtube.com/watch?v=FHzh2Y7BABQ. If anyone sees Unable To Decrypt messages these days (at least on Element Web or Element X + Synapse) we need to know about it.
> 10. sometimes these device list updates updates also leak information about your device
Clients send a default device name (e.g. "Element X on iPhone 12 Pro Max") to the server, to help the user tell their own sessions apart, and to give users on the same server some way of debugging encryption problems. Admittedly this is no longer needed (clients typically hide this data anyway), so the API should be cleaned up.
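In the meantime, a user or client can already scrub the default name via the device-management endpoint. A hedged sketch (endpoint shape from the Matrix client-server spec; homeserver, token and device ID are placeholders):

```python
# Hedged sketch: renaming a device via the Matrix client-server API,
# PUT /_matrix/client/v3/devices/{deviceId}, e.g. to replace a default
# name like "Element X on iPhone 12 Pro Max" with something opaque.
import json
import urllib.request


def device_url(homeserver, device_id):
    """Build the device-management endpoint URL."""
    return f"{homeserver}/_matrix/client/v3/devices/{device_id}"


def rename_device(homeserver, access_token, device_id, display_name):
    """Set a device's display name on our homeserver."""
    body = json.dumps({"display_name": display_name}).encode()
    req = urllib.request.Request(device_url(homeserver, device_id),
                                 data=body, method="PUT")
    req.add_header("Authorization", f"Bearer {access_token}")
    req.add_header("Content-Type", "application/json")
    urllib.request.urlopen(req)  # returns an empty JSON object on success
```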
> 11. the spec doesn’t actually define what the canonical json form is strictly
This one is accurate; we need to tighten/replace canonical json, although in practice this only impacts events which deliberately exploit the ambiguities.
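For context, the canonical form most implementations aim for (sorted keys, no insignificant whitespace, raw UTF-8 rather than `\u` escapes) can be sketched in a few lines; the ambiguities live in the edge cases this sketch ignores, such as number handling:

```python
# Sketch of Matrix-style canonical JSON: keys sorted, minimal
# separators, unicode emitted as UTF-8. Edge cases (large integers,
# floats, duplicate keys) are exactly where implementations diverge.
import json


def canonical_json(value):
    """Encode a value as canonical JSON bytes."""
    return json.dumps(
        value,
        sort_keys=True,              # deterministic key order
        separators=(",", ":"),       # no insignificant whitespace
        ensure_ascii=False,          # keep unicode as-is
    ).encode("utf-8")
```

Two servers that disagree on any of those edge cases compute different bytes, hence different signatures for the "same" event - which is the interop issue points 11 and 12 describe.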
> 12. matrix homeservers written in different languages have json interoperability issues
See above.
> 13. [server] signing key expiry is completely arbitrary
Server signing keys are definitely a wart, and we're working on getting rid of them.
> 14. split-brained rooms are actually a common occurrence
Once https://matrix.org/blog/2025/07/security-predisclosure/ lands, things should be significantly improved.
> 15. state resets happen quite a bit more often when servers written in different languages interoperate
See above.
> 16. room admins and moderators have lost their powers over public rooms many times due to state resets
See above.
> 17. you can’t actually force a room to be shut down across the federation
Same as points 2 and 3: you can't force other people's servers to do anything on the Internet (unless we end up in some kind of DRM or remote attestation dystopia).
> 18. moderation relies entirely on the functioning of the event auth system
See above for upcoming state reset fixes.
> 19. media downloads are unauthenticated by default
Not since https://matrix.org/blog/2024/06/26/sunsetting-unauthenticate...
> 20. you can ask someone else’s homeserver to replicate media
Only if you're authenticated on it, as of https://matrix.org/blog/2024/06/26/sunsetting-unauthenticate...
> 21. media uploads are unverified by default
Yes: in an end-to-end encrypted comms system, the server can't scan your uploads by default, given it can't decrypt them. Clients can scan if they want, although in practice few do.
> 22. you could become liable for hosting copies of illegal media
This is true of any federated system. If you run a mail server, and one of your users subscribes to a malicious mailing list, your mail server will fill up with bad content. Similarly if you run a usenet server. Or a git forge, and someone starts mirroring malicious content.
I've seen this "critique" come up with multiple different protocols, and it's almost infuriating.
How does the protocol ensure the server removes the data in question from backups? How does the protocol handle someone copying the data off the server onto another device before deletion? How does the protocol handle servers that don't properly implement the data deletion portion of the protocol? How does the protocol handle wiping the neural synapses associated with the memory of that piece of data for each person who ever interacted with it? Without solving these problems, it seems to me the "delete" feature is nothing more than a suggestion, and therefore the protocol itself is fundamentally broken!!
Like, how do you take back something horrible you said to a friend? You don't. You fucking live with your mistake. Don't send shit to other people if you don't want them to, you know, have that information.
One could argue that the right implementation is to not have a delete option at all. Having a delete option gives people the false idea that their data was deleted, but no one can guarantee that. I'm pretty sure this makes people even more annoyed and outraged. Imagine the typical responses you would see on an HN post titled "Twitter now is refusing to let you delete your tweets", with Elon Musk saying something like your last sentence to people.
Then you also trust there's no lawful interception on those services anyway?
>Not any more; we fixed it over the course of 2024 [...] If anyone sees Unable To Decrypt messages these days (at least on Element Web or Element X + Synapse) we need to know about it.
You haven't "fixed" anything. I just opened Element X to an E2EE room hosted on a Synapse server, and I see a dozen "Waiting for this message" placeholders from three different people. Half the conversation in this room is people saying so-and-so's messages are unreadable, and of course it's a different person for everyone. Another client I have can see those people's messages, presumably because it was online at the time those people's clients joined or rotated their keys, because Matrix E2EE apparently depends on all parties' clients being online at the same time to be able to share keys.
This is exactly how it's been for years, in multiple rooms and clients, so it's hard to believe anything's changed, let alone been fixed.
The same thing happens with Element Web too, but at least that supports manually exporting and importing keys so that I can manually union all the working keys between all the clients. But of course Element X doesn't support that feature.
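The "manually union all the working keys" workaround amounts to merging session lists keyed by room and session ID. A sketch under assumptions: the list-of-dicts shape and field names below mirror a decrypted Element key export but are illustrative, not the actual file format (which is encrypted on disk):

```python
# Hedged sketch of unioning several decrypted key exports. Sessions are
# deduplicated by (room_id, session_id); later exports win on conflict.
# The dict shape here is an assumption for illustration.


def union_key_exports(*exports):
    """Merge decrypted key-export session lists into one list."""
    merged = {}
    for export in exports:
        for session in export:
            merged[(session["room_id"], session["session_id"])] = session
    return list(merged.values())
```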
Arathorn has argued before that this is "only an issue for malicious homeservers" but that is literally the entire threat model for E2EE -- if you didn't care about malicious servers you could just use TLS and avoid all of the "unable to decrypt message" issues.
Let's not get into the fact that even though they copied the Signal protocol (which is a good thing!), the amount of metadata stored by users' homeservers probably invalidates any of the deniability properties that motivated the development of OTR and Signal in the first place. Yes, the ability to keep chat logs forever is a key feature of Matrix, but you really have to ask whether trying to adapt the Signal protocol to their needs really made sense.
If you're sending encrypted to untrusted on a public room, you're trusting the homeserver.
It's literally like trusting a web server that it won't MITM you.
This defeats the entire point of end-to-end encryption. The entire point is that you shouldn't have to trust intermediary servers. If Matrix clients had a giant red flashing warning if you disabled the "do not send to untrusted devices" knob, then maybe this would be excusable -- but they don't and it is off by default.
I said that if you want to be sure who’s in your group you need to verify their identity, at which point you get warned very clearly if malicious devices get added and most clients will refuse to send messages in the room.
Or is it no longer the case that even if you verify a user, unverified devices on that user's account will still receive peer keys and messages? (The description of the global knob implies that this is still the case. I don't have time to test it at the moment.)
The only scenario I can think of here is that either you logged out of all devices on your account (so nobody could encrypt for you, as you didn't exist any more), or the room is impressively corrupt (state resets), or the server is impressively buggy (e.g. a beta), or the servers can't actually talk to each other (e.g. messages are being replicated transitively but keys aren't).
Please can you report with details so we can actually investigate?
As per the linked conference talks, we went on a gigantic mission to fix this, which based on our telemetry was successful.
Matrix has been in development since 2014 (which is around when I first started using it) so pointing to solutions only just implemented in 2024 or "still being addressed" in 2025 as proof that this is a "hit piece" seems disingenuous to me (especially since this blog post is from 2023, and thus was correct at the time of writing according to your own comment). I understand that you are protective of your project, but having read your comments over the past decade, it seems that you feel the need to reply somewhat dismissively to any criticism.
I don't want to rehash the entire E2EE history here, but that decade-long saga has always bothered me.
The fact I then went through point by point to acknowledge that a few of the other points are still relevant shows that I’m not dismissing it.
Yes, I’m protective of the project, but I’m also irritated at folks who push fake information against it (especially if it’s padding out valid points).
p.s. sorry if this comes across as dismissive of your complaint O:-)
(Massive /s in case it’s not obvious - they should suck it up and migrate)
Likewise point 8: there's nothing a protocol that isn't just a walled garden for a set of Trusted™ proprietary client binaries can do to prevent a client from doing whatever it likes with the decrypted information.
It's not a perfect control by any means, but if your objective is to minimise the amount of sensitive material just lying around, it definitely helps (and makes your adversary's life a bit harder.)
It works really nicely, can recommend. The Matrix protocol, in that sense, does work wonders.
There are new protocols like Session and SimpleX (preferred?), but SimpleX has the issue that it is doing client-side scanning for CSAM etc. I don't have a problem with CSAM scanning, but then they will actually go into that group and shut it down. Again, I don't have an issue with that either, but their wording has been extremely vague, and it just seems like anybody can report any server and they have the power to shut it down... That doesn't sound decentralized.
Again, I am sure that CSAM is used as a shield against privacy, and yes, I also would want to eradicate CSAM completely from the face of the earth, but maybe in the process we would end up completely in 1984.
I read into the SimpleX protocol, and the groups are honestly glue code tbh: you trust the nodes to give you safe info, but in effect they can be malicious too. SimpleX says that it is for 1-1 conversations, but at that point, using something like Tor-based communication for live messaging is better.
The only use case of SimpleX I can find is 1-1 chats when the other person isn't online. But I guess I don't trust that either, and at that point I would much rather use something like Proton Docs or Proton Drive as the storage layer...
Delta Chat, which is based on email, looks really nice too.
There is this secuchart created by bkil on GitLab which is worth a read actually. I am not finding it at the moment, but I remember actually going through all of them and going on their Matrix [1]
Signal, Matrix, and maybe even SimpleX are all good tbh.
Edit: Found it! [1]: https://bkil.gitlab.io/secuchart/
To the project’s credit: A lot of work has gone into supporting better UX (QR code sign in, sliding sync) and the Rust library is also apparently pretty good.
Yet, somehow, there still hasn’t been an explosion of clients. IMO it’s because the protocol carries too much cruft and the standardization process (including vendoring) makes it hard to use new features when not using the official libraries, which are a) under-documented and b) only available from Rust, Swift and Android.
I also want to add: I'm using Beeper, and their new app is fantastic, but the old desktop app (and current webapp) are based on Element.
Which, for me, had the worst experience possible: startup times way beyond a minute, the webapp just runs into a Chrome OOM when I keep it open for longer than an hour, and the user interface is super unresponsive and laggy.
I tried Element (without Beeper) and had the same issues; that's why I never got into Matrix.
gerdesj•6mo ago
Are you implying, with that statement, that few words might trick a reader into a different interpretation from the one they might reach otherwise or that few words are enough (do the trick)?
I'm all in favour of parsimony but please deploy the bare useful minimum with an eye to clarity and not obfuscation. Also note that a full stop (period) is an indicator that you have finished dribbling.
eddythompson80•6mo ago
https://youtu.be/_K-L9uhsBLM
esseph•6mo ago
Unless you are writing for yourself, then you are communicating with others. If you want others to be able to understand what you are saying, then you would benefit from doing what every presenter is taught to do for any event or slide deck - cater to your audience. In the case of long form written media, that means using punctuation and capitals.
BLKNSLVR•6mo ago
Very brave.
BLKNSLVR•6mo ago
https://www.youtube.com/watch?v=_K-L9uhsBLM
tptacek•6mo ago
https://news.ycombinator.com/newsguidelines.html
NaOH•6mo ago
Discussions about spelling, grammar, and punctuation on other sites transgress the HN goal to normalize thoughtful, curiosity-driven discussion here.
NaOH•6mo ago
https://news.ycombinator.com/newswelcome.html
gerdesj•6mo ago
If you are going to studiously ignore convention, please do it consistently weirdly.
The list of issues is not a list.
Oh well. In the end the message is the thing and not the medium. Ideally the medium might be a bit more conventional to enable readers to follow the argument and not have to mentally adjust the text.
ashton314•6mo ago
Ah, but to quote the philosopher Marshall McLuhan, the medium is the message. What does a tweet say? “This is a bite-sized idea that’s easy to understand.” What does a poorly-punctuated blog post say? “These ideas are sloppy.” What does a book say? “This is a more serious argument than you find online.” (Alternatively, “I am looking for profit.”) What does an HN comment say? “This is my hot take trying to sound smart.”
kop316•6mo ago
I personally agree that it makes it much harder to read.