Unlikely.
In this case she explicitly did NOT make any mention of the divorce on social media when her husband first sprang it on her, nor during the process. She wrote this piece after it had been finalized.
Apparently I'm a luddite now, because yes, this. Stop using social media to communicate with people you ostensibly care about.
Companies putting words in people's mouths on social media using "AI" is horrible and shouldn't be allowed.
But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
I actually am sympathetic to your confusion. Perhaps this is semantics, but I agree with the assessment, from both the author and your post, that this trivializes the human experience; I just don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.
--
However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can somehow absolve them of cultural awareness. I wonder how this sort of thing influences its output (and wish we didn't have to puzzle over such things).
Major citation needed
https://www.unesco.org/en/articles/generative-ai-unesco-stud...
> Our analysis proves that bias in LLMs is not an unintended flaw but a systematic result of their rational processing, which tends to preserve and amplify existing societal biases encoded in training data. Drawing on existentialist theory, we argue that LLM-generated bias reflects entrenched societal structures and highlights the limitations of purely technical debiasing methods.
https://arxiv.org/html/2410.19775v1
> We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.
I guess it should have been marked clearly as such.
Sure, the description is garbage, and it may not be obvious that it wasn't written by the user, but people need to understand what partaking in closed and proprietary social media actually means. You are not paying anything, you do not control the content, you are the product.
If you don’t enjoy using a service that does this to the content you post then don’t use that service.
I’ll stick to this point only, even though I feel there are other things in the post that are terribly annoying.
Many apps, like Slack and LinkedIn, use it to display a link card with a description.
The shareholders will be content, because they see value in that. The users might not, but not many of them are actual humans anyway; nowadays they're mostly AI. Who has time to read and/or post on social media? Just ask your favorite AI what the hottest trends on social networks are; that should suffice to scratch the itch.
Do not try LinkedIn. Not even once.
And is it just me, or has LinkedIn Recruiter become even more useless in the LLM age? At least we're not renewing that abomination next year, opting for more flesh-and-blood headhunters instead.
You can choose the option to tell TikTok you are 'not interested' in videos like these, or block the account entirely. There are legitimate criticisms about social media algorithms, but I don't understand why you jump to the conclusion that you have to delete your account.
They track and log every reel viewed.
I suppose everyone does it but actually seeing it is another level of creepy.
Not quite what you’re saying, but a couple of steps in that direction.
I am never, ever requesting that they delete the account.
If anyone using Palantir wants to draw incorrect conclusions based on unverified data, the impact to them is certainly going to be worse than it is to any of us normal citizens.
If your credit is impacted because someone made a mistake, that still fucks you over. It doesn't matter if it's real or not because the entire point of centralized data collection and analytics is that you don't need to care, the people doing the collecting and analyzing do it for you. So you just trust them with whatever. It's on YOU, the consumer, to catch these mistakes and spend a painstaking amount of time trying to fix them, and ultimately the consumer is the only one who will face any consequences. And when it comes to credit, these consequences are very material. It means maybe you can't get a car, or a home, or even a job these days. I know my job ran a credit check.
If we embed these new-age data collection and analysis companies like Palantir and Flock in our systems, a lot of people will suffer, and I don't think anyone cares.
Poison their data. If they have evidence against you, and you can prove their data is even partially bad, you have your reasonable doubt.
Juries are increasingly on the side of the citizen, which is better than nothing.
My credit example is actually giving the opponents too much credit here. The bureaus are kinda government. Even that is better!
I have a cellular hotspot with a phone number apparently recycled from someone who still has it tied to a fintech account (Venmo, or something similar). Every time this person makes a purchase, my hotspot screen lights up with an inbound text message notification.
This person makes dozens of purchases each day, but unlike my previous hotspots, this one does not have a web interface that allows me to log in and see the purchase confirmations. All I get to see is "Purchase made for $xx.xx at" on the tiny screen several dozen times a day.
Social media was a mistake.
Sure, maybe they exist on some corporate servers from when the companies were sold for scraps. And I suppose it could resurface if I became famous and someone wanted to write an exposé about my youthful debauchery, but for all practical purposes all this stuff has disappeared. Or maybe not. How much do we know about the digital presence of someone like the guy who shot Trump, or the Las Vegas shooter? Or maybe it's known but hidden? I'm impressed that Amazon has my very first order from over 10 years ago, but that's just not par for the course.
Why would AI steal my identity and post as me? I'm not that interesting.
My data is just not that valuable, and I imagine that within the next 5-10 years AI will be trained almost entirely on synthetic data.
Even my damn personal website was in the top 5 Google results for my name, despite no attempt at SEO and no popularity.
Today those sites are all gone and it’s as if I no longer exist according to Google.
Instead a new breed of idiots with my name have their life chronicled. I even get a lot of their email because they can’t spell their name properly. One of them even claimed that they owned my domain name in a 3-way email squabble.
I almost no longer exist and it’s kinda nice.
Only PeopleFinder and such show otherwise.
"If you want to have a baby, you won't be able to conceive. If you want to stay childfree, the condom will break."
If you want to find old logs of your IRC and AIM buddies from 20 years ago, they're gone. If you say something stupid once, it's kept forever.
It seems nuts to me that shareholders would be happy about a bunch of fake users, at least ones that don't have any money.
Users are $$$. Nobody wants to talk about which are human and which aren’t. It’s all a game of hot potato.
Who in marketing doesn’t want to champion the success of “we got 25% more views this month!”
We crawled the Internet, identified stores, found item listings, extracted prices and product details, consolidated results for the same item together, and made the whole thing searchable.
And this was the pre-LLM days, so that was all a lot of work, and not "hey magic oracle, please use an amount of compute previously reserved for cancer research to find these fields in this HTML and put them in this JSON format".
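To give a flavor of what that pre-LLM work looked like: a minimal, hypothetical sketch of per-store extraction rules. The store markup, CSS class names, and field names here are invented for illustration; the real pipeline would have needed a mapping like this hand-maintained for every merchant.

```python
# Hypothetical sketch of pre-LLM product extraction: hand-written
# rules per store, nothing general-purpose. All names are invented.
from html.parser import HTMLParser

class ProductParser(HTMLParser):
    """Pulls title and price out of one known store's listing markup."""

    def __init__(self):
        super().__init__()
        self._capture = None  # which field the next text chunk belongs to
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Each store needed its own mapping of markup classes to fields.
        if attrs.get("class") == "product-title":
            self._capture = "title"
        elif attrs.get("class") == "price":
            self._capture = "price"

    def handle_data(self, data):
        if self._capture and data.strip():
            self.fields[self._capture] = data.strip()
            self._capture = None

html_snippet = """
<div><h1 class="product-title">Acme Anvil</h1>
<span class="price">$19.99</span></div>
"""

parser = ProductParser()
parser.feed(html_snippet)
print(parser.fields)  # {'title': 'Acme Anvil', 'price': '$19.99'}
```

Multiply that by every store layout (and every layout change), and the "a lot of work" above becomes concrete.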
We never really found a user base, and neither did most of our competitors (one or two of them lasted longer, but I'm not sure any survived to this day). Users basically always just went to Google or Amazon and searched there instead.
However, shortly after we ran out of money and laid off most of the company, one of our engineers mastered the basics of SEO, and we discovered that users would click through Google to our site to an item listing, then through to make a purchase at a merchant site, and we became profitable.
I suppose we were providing some value in the exchange, since the users were visiting our item listings which displayed the prices from all the various stores selling the item, and not just a naked redirect to Amazon or whatever, but we never turned any significant number of these click-throughs into actual users, and boy howdy was that demoralizing as the person working on the search functionality.
Our shareholders had mostly written us off by that point, since comparison shopping had proven itself to not be the explosive growth area they'd hoped it was when investing, but they did get their money back through a modest sale a few years later.
As long as no one figures out it’s all fake, the line can keep going up and to the right and everyone is happy.
Anyone who starts asking hard questions may be up first on the chopping block.
Unless the line breaks; then, bam, everyone rushes to be first out the door as the bubble pops.
Via discounts, promo codes, gamification, whatever else they’re using today to get people to install their apps and sign over their privacy.
> My story is absolutely layered through with trauma, humiliation, and sudden financial insecurity and I truly resent that this AI-generated garbage erases the deliberately uncomfortable and provocative words I chose to include in my original framing.
I truly feel for her, and wish her luck. Also, I feel that, of any of the large megacorps, Meta is the one I would peg to do this. I’m not even sure they feel any shame over it. They may actually appreciate the publicity this generates.
I’m thinking that Facebook could do something like slightly alter the text in your posts, to incite rage in others. They already arrange your feed to induce “engagement” (their term for rage).
For example, if you write a post about how you failed to get a job, some “extra spice” could be added, implying that you lost out to an immigrant, or that you are angry at the company that turned you down, as opposed to just disappointed.
All that sweet, sweet innovation!
That's a bit dismissive of women, does she think that women aren't capable of designing and maintaining software too?
You see this later as well when she slyly glides over women who do what her husband did. When her husband decided to end their marriage, it was representative of men. When women do it, it's their choice to make.
But I am a pedantic person who prefers to focus on the literal statements in text rather than the perceived underlying emotional current. So I’ll pedantically plod through what she actually said.
She’s dealing with two dimensions of divorce: who initiated it (husband, wife, or collaborative), and whether it was surprising or unsurprising.
That gives six combinations, but she lists three. What unifies them is that they are all written from the perspective of the abstract woman undergoing the experience.
1. Woman initiated, surprise unspecified.
2. Collaborative, so assume unsurprising.
3. Man initiated, surprising (her situation).
She doesn’t claim this covers all possibilities. The point of that bit is just to emphasize that divorces are different, and to object to treating them as a genre for wellness AI slop.
Here is the original text containing that part so others can easily form their own opinion.
“I also object to the flattening of the contours of my particular divorce. There are really important distinctions between the experiences of women who initiate their own divorces versus women who come to a mutual agreement with their spouses to end the marriage versus women, like me, who are completely blindsided by their husbands’ decisions to suddenly end the marriage. All divorces do involve self-discovery and rebuilding your life, but the ways in which you begin down that path often involve dramatically different circumstances.”
As someone else said, the red flags of insufferability abound here, first and foremost with announcing something like this which is as personal and momentous as it is, on public social media.
> We already know that in a patriarchal society, women’s pain is dismissed, belittled, and ignored. This kind of AI-generated language also depoliticizes patriarchal power dynamics.
A man does something bad, it's the fault of patriarchy. A woman does something bad, it's also men's fault, because patriarchy made her do it. Either way you cannot win with a person like that. I think I understand why the husband wanted a divorce.
I feel terrible asking whether her accusation against Instagram is true... The comments below https://news.ycombinator.com/item?id=46354298 discuss how she might be mistaken. I can't read the 404media link she referred to, but that appears to be about rewriting headlines, not about rewriting content.
If Meta were generating text, how would Meta avoid trouble with the Section 230 carveout?
A more cynical me would think they were just trying to juice links for their SEO.
I probably shouldn't be commenting on human slop - that doesn't help either.
X owner: Man
Soon-to-be TikTok US owner: Man
But this doesn’t change the fact that she shouldn’t share anything personal on social media. Consider social media the new "streets": a dimly lit street or an alley where you go at 3am to shout something or show your images and videos to strangers. This is exactly what you should keep in mind before you share anything personal on social media.
And either way, who wants to be an unpaid Meta employee that provides any kind of content for free?
HN doesn't even have a downvote button.
It’s fascinating to see which stories take a dive.
Way to interpolate. Kudos to your reach! I just was pointing out that it's likely that the employee responsible was here. The diving is highly unlikely to be triggered by a single employee (unless it was that "employee").
No, but if it was flagged, it's possible that it was by Meta employees. I can't see why it would be flagged, otherwise. I'll bet there's enough for a good flash mob, like the Elon stans that always flag down stories critical of him.
I'll bet they will also be flagging my comment.
It's nice to be loved...
https://news.ycombinator.com/active
It is the answer to the question: "What stories do Hacker News users not want me to see?"
Sorry, I thought you were the OP, which made the claim of
>Way to interpolate. Kudos to your reach! I just was pointing out that it's likely that the employee responsible was here. The diving is highly unlikely to be triggered by a single employee (unless it was that "employee").
Technically, no we don't know this is going on. Only HN's admins can know this. But come on...
1. I find her description of what happened here ("an AI impersonated me!!") to be inaccurate and misleading.
2. I find her blatantly misandrist victimhood stance to be disgusting.
But I do not understand why someone who's so passionate about the issues raised in the post would do something as silly as post this on a Meta-owned property at all. The end result is blindingly obvious, and anyone who doesn't expect exactly this is living in a bizarre fantasy-world, where social media (and moreso Meta-owned social media) isn't inherently evil and run/maintained by evil people (and yes, I understand the irony).
Much of privacy law is based on a "reasonable" expectation of privacy. What counts as "reasonable" can change depending on what people in general believe it to be.
Here's an essay [1] by an appeals court judge from 2012 for some more on this.
[1] https://www.stanfordlawreview.org/online/privacy-paradox-the...
1. Attention.
2. You have a public image that includes being married, and social media is one of the main channels through which you reach the people who know you. Now you get divorced and you do not want these people to have the false image of you being happily married, or to keep potentially getting comments referencing your marriage.
(Almost) no one posting to Reddit's AITA (Am I the Asshole) expects to hear that they are wrong.
This is also how echo chambers form.
This is about her husband divorcing her. I find this to be a very unfair way to frame someone else's decision to not spend their life with you anymore. Your partner does not owe you a relationship. Interestingly it is not even me coming up with the word "framing". She herself describes her Instagram post as deliberate framing.
She also claims that the AI chose words dismissive of her pain because she is a woman (rather than just because it's fake-positive corpo slop) and does not substantiate that in any way.
I'm all against this AI slop BS, especially when it's impersonating people. The blog post is mostly not about that.
You cannot control that you will love someone forever, so you cannot promise that. What you can promise someone is that you plan on spending the rest of your life with them and that you have so much love that you trust it will last forever. Sometimes that does not work out. That is no one's fault and no one owes to anyone to stay together with a person they no longer love.
And it has been one of the greatest mistakes humanity has ever made. If there is a good reason, sure, you cannot be expected to live with someone who has been cruel or irresponsible towards you. But no-fault divorce just because you got bored? Fuck off, you made a commitment at the time. Relationships do take work, always have and always will. Especially when there are children a no-fault divorce is pure selfishness.
With that said, we only know one side of this story, so I'm not going to argue for either side in this particular case. I'm talking in general here.
Does fault only include cheating? Can the fault be on the same one who initiated the divorce? What if the fault is simply that someone has changed so much that they're no longer compatible with the person they fell in love with? The fault could be on oneself without any inkling of infidelity.
"Til death do us part" has been ironically dead for decades now, since people have been divorcing at high rates for long enough that it doesn't really mean much anymore, and that's okay. Things change.
No, it's a legal term. From wikipedia:
>No-fault divorce is the dissolution of a marriage that does not require a showing of wrongdoing by either party.[1][2] Laws providing for no-fault divorce allow a family court to grant a divorce in response to a petition by either party of the marriage without requiring the petitioner to provide evidence that the defendant has committed a breach of the marital contract.
It quite literally means that people can request divorce for any reason.
Humanity has a lot more variation than our standard modern marriage.
Making marriage the norm would not fix any of the issues I see in my own society, and marriage causes other problems.
Many people I know (including solo mums) would love to find someone worth marrying... However, they haven't found (or can't find) someone worthwhile, so it is better to remain single than to get married into a dangerous relationship.
I suspect you are mistaking cause and effect. Marriage isn't a cause, it is an outcome.
Yet, people routinely do in wedding vows. Maybe that tradition should end. Maybe the traditional wedding vows should be changed to "Hey, we'll give it a shot but no promises!"
That would probably be her default position: whoever it is did not sufficiently empathize, and only "I" can be the judge of what sufficient means.
But I'd pay for a social media site that respected my preferences / content choices and had everyone using real names / validated and so on.
Either way, I don't know what to tell people. Social media exists to take advantage of you. If you use it, your choices are "takes more advantage" vs. "takes less advantage," but that's as good as it gets.
Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.
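For anyone unfamiliar with the mechanism being described: sites publish Open Graph meta tags (`og:title`, `og:description`, etc.) in the page head, and apps like Slack read them to render link cards. A minimal sketch of how a card renderer might extract those tags; the sample markup and tag values below are invented for illustration, not Instagram's actual output:

```python
# Minimal sketch: reading Open Graph meta tags the way a link-card
# renderer might. Sample page content is hypothetical.
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects og:* properties from <meta> tags in a page."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = attrs.get("content", "")

page = """<html><head>
<meta property="og:title" content="My post">
<meta property="og:description" content="Auto-generated summary...">
</head><body>the actual post text</body></html>"""

p = OGParser()
p.feed(page)
print(p.og["og:description"])  # Auto-generated summary...
```

The point is that whatever the platform writes into `og:description` is what every downstream preview shows, which is why an auto-generated first-person summary propagates everywhere the link is shared.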
Another thing I've noticed recently: on YouTube, my feed is suddenly full of AI fakes of well-known speakers like Sarah Paine, an eminent historian who talks about Russia and the like. There's all this slop with her face speaking, titled "Why Putin's War Was ALWAYS Inevitable - Sarah Paine", but with AI-generated words. They usually say somewhere in the small print that it's an AI fan tribute, but it's all a bit weird.
(Update: they now say 'video taken down', but they were up for a while.)
The legal framework is completely unprepared for this. Current identity theft laws require financial harm or fraud intent. But what's the legal status of an AI that impersonates you with your own data on a platform you actively use? It's not fraud in the traditional sense, but it's definitely some kind of identity violation. We need new categories: "computational identity theft," "algorithmic impersonation," something that recognizes the harm of having your digital self puppeteered by a corporate AI.
The metadata implications are worse than people realize. Even if you never post personal content, Meta can infer relationship status, location patterns, health issues, political leanings from likes, tags, and behavioral signals. An AI profile built from that could plausibly interact in your name with significant accuracy. The person being impersonated might not even know unless someone explicitly asks "wait, did you really say that?"
The immediate solution is legislation requiring explicit opt-in for any AI feature that generates content attributed to a user's identity. No defaults, no dark patterns, no "we'll enable it and let you opt out later." But the deeper problem is the power asymmetry - these companies own the platforms and the data, so they define what's acceptable. We need data portability rights and mandatory AI disclosure so users can at least migrate to platforms that don't pull this.
cc: @dang
While I don't think it has a high risk of causing anyone any harm, I kinda hate it, like I DID NOT POST A SUBMISSION WITH THAT TITLE and I MADE THAT COMMENT AT A DIFFERENT TIME. I'd prefer if texts that are altered got a [last edited at [date] by moderator] stamp.
Incredibly sorry this happened to you. Unfortunately, Silicon Valley could not care less. Consent is not a concept they understand.
I hope you find healing and strength.
Regardless of the title and the full story, I mostly feel empathy for the writer of the article.
Half a year ago a colleague of mine told me, in tears, that her spouse had suddenly left her after living together for 7 years. I felt her pain and cried with her; I tried to comfort her with kind words. I had a nightmare that night. She's truly a very good person and I still feel very sad for her. She told me later that his personality seemed to have suddenly changed; he was not the man she used to know.
The bottom line of what I want to say: please have empathy for people going through the breakup of a relationship, even if such things happen every day. Be thankful if you are in a good relationship.
The last section of the blogpost.
I think the trouble is I can’t imagine being the kind of person who would post about something like this.
Communication enables us to live lives we couldn't otherwise, accruing both lessons and solidarity. These are benefits of titanic value.
The long and short of it is, we've got to lie down, but it's hard to keep the dogs off the bed.
jwr•1mo ago
I keep trying to convince people not to use Instagram, WhatsApp, Facebook, Twitter/X, but I'm not getting anywhere.
Write your own content and post it on your own terms using services that you either own or that can't be overtaken by corporate greed (like Mastodon).
pmlnr•1mo ago
The platforms sell the convenience that one "only" has to write the post, yet the internet needs so much metadata that they try to autogenerate it instead of asking for it. People are already put off by the need to write a bloody subject line for an email; imagine if they were shown what the "content" actually is.
About convincing: get the few that matter onto Delta Chat, so they don't need anything new or extra - it's just email on steroids.
As for Mastodon: it's still someone else's system; there's nothing stopping those nodes from adding AI metadata either.
normie3000•1mo ago
Would this depend on threat model?
nunobrito•1mo ago
At least commenting from an unknown account on any random YouTube video won't immediately land you on a "Person of Interest" list, and your comments will be ignored like a drop of water in an ocean of comments.
lukan•1mo ago
And where can I find such a story from a trustworthy source? A quick Google search rather turned up this:
https://euvsdisinfo.eu/report/us-intelligences-services-cont...
(Debunking it as russian information warfare)
nunobrito•1mo ago
In absence of that blog post:
Start at the beginning: how Moxley left Twitter as their director of cyber (a company nowhere near focused on privacy at the time) to found the Whisper Foundation (if memory serves me on the name). His seed funding came from Radio Free Asia, which is a well-known CIA front for financing their operations. The guy is a surf fan, so he decided to invite crypto experts to surf with him while brainstorming the next big privacy-minded messenger.
So he used his CIA money to pay for everyone's trip and surf in Hawaii, which by coincidence also happens to be the exact location of the headquarters of an NSA department responsible for breaking privacy-minded algorithms (notably, Snowden was working and siphoning data from there for a while).
Anyways: those geeks somehow happily combined wave-surfing with deep algorithm development in a short time and came up with what would later be known as "Signal" (btw, "signal" is a well-known keyword in the intelligence community, again a coincidence). A small startup was founded, and shortly after that a giant called "WhatsApp" decided to apply the same encryption from an unknown startup to the billion-person audience of their app. Something that is for sure very common to happen, and for sure without any backdoors, as developed in Hawaii for decades before any outsiders discover them.
Signal kept being advertised over the years as "private" to the tune of 14 million USD in funding per year provided by the US government (CIA) until it ran out some two years ago: https://english.almayadeen.net/articles/analysis/signal-faci...
Only TOR and a few new tools remain funded; Signal was never really a "hit" because most of their (target) audience insists on using Telegram. WhatsApp, which uses the same algorithm as Signal, recently admitted (this year) that internal staff had access to the supposedly encrypted message contents, so there goes any hope for privacy from a company that makes its money from selling user data.
robtherobber•1mo ago
I'd be interested in reading that blog post eventually.
gardenerik•1mo ago
Signal, on the other hand, is a closed "opensource" ecosystem (you cannot run your own server or client), requires a phone number (still -_-), and the open-source part of it does not have a great track record (I remember periods where the server, for example, was not updated in the public repo).
But yeah, if you want the more popular option, Signal is the one.
kuschku•1mo ago
And other mastodon servers, just like other email servers, can of course still modify the data they receive how they'd like.
keiferski•1mo ago
Which is why I think the only solution has to come at the governmental regulatory level. In “freedom” terms it could be framed as freedom from, as in freedom from exploitation, unlawful use of data, etc. but unfortunately freedom to seems to be the most corporate friendly interpretation of freedom.
anonym29•1mo ago
You'd be surprised how many people in your life can be introduced to secure messaging apps like Signal (which is still centralized, so not perfect, but a big step in the right direction compared to Whatsapp, Facebook, etc) by YOU refusing to use any other communication apps, and helping them learn how to install and use Signal.
array_key_first•1mo ago
Signal is the best messaging app, but not by metrics people use to measure messaging apps, because not a ton of people use it. I use signal, but I also still use SMS (garsp!) because ultimately sometimes I just need to send a message.
It sucks and it's stupid, what we need more than anything else, more than any app, is open and federated messaging protocols.
ryandrake•1mo ago
There is no feasible way for a normie like me to convince enough people to take any kind of action collectively that will be noticed by FAANG.
I think we like to pretend otherwise, like oh if enough people stop using Instagram, they will fail. This is only true in the most literal sense, because "enough" is an enormous number, totally unachievable by advocacy.
WorldMaker•1mo ago
We need far better strategies than "vote with your wallet". I think it is at least time to get rid of "vote with your wallet" from our collective vocabularies, for the sake of actual democracy.
rchaud•1mo ago
If something is bad, it's said that the free market will offer an alternative and the assumed loss of market share will rein in the bad thing. It ignores, as does most un-nuanced discourse about economy and society, that capitalism does not equate to a free market outside of a beginner's economics textbook, and democracy doesn't prevent incumbents from buying up the competition (FB/Instagram) or attempting to block competition outright (Tiktok).
raincole•1mo ago
Plus, what about videos? How is a non-tech savvy creator going to host their content if it's best in video format?
chistev•1mo ago
I'm with you, but WhatsApp is tough. How do you keep in touch?
the_other•1mo ago
In the cases of special interest groups (think school/club/street/building groups), I just miss out, or ask for updates when I meet people. I am a bit out of the loop sometimes. No-one's died as a result of my leaving. When someone did actually die that I needed to know about, I got a phone call.
Honestly... just leave. Just leave. It's not worth your time worrying about these kind of "what ifs".
the_af•1mo ago
Telegram and Signal are, to me, about as trustworthy as WhatsApp. Well, actually, nobody really uses Signal, and Telegram is about the same as WhatsApp so who cares.
Waiting to meet my friends once every 1-2 years is not enough. I want to chat daily with them, because they are my close friends.
Daily telephone conversations with a group of them? Nope. Snail mail? It doesn't work for daily conversation.
So WhatsApp it is!
the_af•1mo ago
And what's the alternative? How do I keep in daily touch with my close friends that live across the world?
the_other•1mo ago
At any point they might insert an advert or a bot, change the UI or the share features, add some AI slop, etc., and you will have no recourse.
Just by using their platforms they’re able to update their models of you, your family, your friends. The timing of chats, the data they have on you through Insta or FB, all flesh out and refine their model of you. You are doing their work for them, helping them get richer, all whilst they oversee everything you do.
As for alternatives? I already listed several. You rejected most of them for whatever reasons you gave. Those were primarily your choices rather than firm barriers.
Here’s some more options: Discord, Matrix, blogs +RSS, your own mastodon instance, mailing lists, FaceTime, Zoom, WhereBy, MS Teams, irc, Slack, Mattermost, a custom chat server you wrote yourself.
KurSix•1mo ago