* Cambridge Analytica
* The Rohingya genocide
* Suppressing Palestinian content during a genocide
* Damage to teenage (and adult) mental health
Anyway, I mention this because some friends are building a social media alternative to Instagram: https://upscrolled.com, aiming to be pro-user, pro-ethics, and designed for people, not just to make money.
These companies are for the most part effectively outside of the law. The only time they feel pressure is when they can lose market share, and there's risk of their platform being blocked in a jurisdiction. That's it.
Stuff like this could easily earn them multi-billion dollar fines, and for violations affecting more users, maybe even fines in the trillion range. If government workers came to pick up servers, chairs, and projectors from company buildings to sell at auction, because there wasn't enough liquid value in the company to pay the fines, they (well, the others) would reconsider quite fast and stop the illegal activities.
I think Mark Zuckerberg is acutely aware of the political power he holds and has been using this immense power for at least the last decade. But since Facebook is a US company and the US government is not interested in touching Facebook, I doubt anyone will see what Zuckerberg and Facebook are up to. The US would have to put Lina Khan back in at the FTC, or put her high up in the Department of Justice, to split Facebook into pieces. I guess the other hope is that states' attorneys general win an anti-monopoly lawsuit.
Might also be worth trying to force them to display a banner on every page of the site: "you're on Facebook, you have no privacy here", like the warnings on cigarette boxes. These might not work though; people would just see and ignore them, the same way smokers ignore the warnings on cigarettes.
You have it wrong in the worst way. They are wholly inside the law because they have enough power to influence the people and systems that get to use discretion to determine what is and isn't inside the law. No amount of screeching about how laws ought to be enforced will affect them because they are tautologically legal, so long as they can afford to be.
I know people that don't see anything wrong with Meta so they keep using it. And that's fine! Your actions seem to align with your stated values.
I get human fallibility. I've been human for a while now, and wow, have I made some mistakes and miscalculations.
What really puts a bee in my bonnet though is how dogmatic some of these people are about their own beliefs and their judgement of other people.
I love people, I really do. But what weird, inconsistent creatures we are.
I, too, have vices she tolerates, so I don't push as hard as I otherwise would. But I would argue it is not inconsistency; it is a question of what level of compromise is acceptable.
They care as much as people who claim to care about animals but still eat them, or people who claim to love their wives but still beat or cheat on them. Your actions are the sole embodiment of your beliefs.
Of course Facebook's JS won't add itself to websites, so half of the blame goes to webmasters willingly sending malware to browsers.
https://www.mozillafoundation.org/en/privacynotincluded/cate...
And then Meta accessed it. So unless you put restrictions on the data, Meta is going to access it. Don't you think it should be the other way around, with Meta having to ask for permission first? Then we wouldn't have this sort of thing.
If AWS wanted to eavesdrop on and/or record the conversations of some random B2C app user, for sure they would need to ask for permission.
https://www.courtlistener.com/docket/55370837/1/frasco-v-flo...
> [...] users, regularly answered highly intimate questions. These ranged from the timing and comfort level of menstrual cycles, through to mood swings and preferred birth control methods, and their level of satisfaction with their sex life and romantic relationships. The app even asked when users had engaged in sexual activity and whether they were trying to get pregnant.
> [...] 150 million people were using the app, according to court documents. Flo had promised them that they could trust it.
> Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. Whenever someone opened the app, it would be logged. Every interaction inside the app was also logged, and this data was shared.
> "[...] the terms of service governing Flo Health’s agreement with these third parties allowed them to use the data for their own purposes, completely unrelated to services provided in connection with the App,”
Bashing Facebook/Meta might give a quick dopamine hit, but they really aren't special here. The victims' data was routinely sold, en masse, per de facto industry practices. Victims should assume that hundreds of orgs, all over the world, now have copies of it. Ditto any government or criminal group that thought it could be useful. :(
When the Western app says they don’t sell or give out private information, you can be suspicious but still somewhat trustful. When a dictator-ruled country’s app does so, you can be certain every character you type in there is logged and processed by the government.
What do you mean by cut all ties? The owners and management have no assets in Belarus or ties to the country?
A list of contact addresses is not a list of all locations, or all employees, or all contractors, or all shareholders, or all financial interests.
The one thing the site tells me is that it is operated by two separate companies, Flo Inc and Flo Health UK. The directors of Flo Health Limited live in the UK and Cyprus; two are Belarusian nationals and one is Russian.
> CEO and CTO are Belarusian (probably there are more C-level people who are Belarusian or Russian).
Actually, a quick Google search shows Slavic (either Russian or Belarusian) names for the CFO and CMO. Changing physical location means very little these days.
* Dmitry Gurski; CEO
* Tamara Orlova; CFO
* Anna Klepchukova; Chief Medical Officer
* Kate Romanovskaia; Chief Brand & Communications Officer
* Joëlle Barthel; Director of Brand Marketing
* Nick Lisher (British); Chief Marketing Officer
Also, here is what Pavel Durov mentioned recently in an interview with Tucker Carlson:
> In the US you have a process that allows the government to actually force any engineer in any tech company to implement a backdoor and not tell anyone about it, using this process called the gag order.
It doesn't matter what anyone claims on the landing page. Assume that if it's stored somewhere, it'll eventually get leaked, and that the transiting/hosting government already has access and the decryption keys.
On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
With this in mind, I personally believe groups will always come to better conclusions than individuals.
Being tried by 12 instead of 1 means more diversity of thought and opinion.
Not to be ageist, but I find this highly counterintuitive.
Let's just say with a full jury you're almost guaranteed to get someone on the other side of the spectrum, regardless of age.
Innocent until proven guilty is the right default, but at some point when you've been accused of misconduct enough times? No jury is impartial.
But FB, having received this info, proceeded to use it and mix it with the other signals it gets. Which is what the complaint against FB alleged.
"This data processing pipeline processed the data we put in the pipeline" is not necessarily negligence unless you just hate Facebook and couldn't possibly imagine any scenario where they're not all mustache-twirling villains.
We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"
LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.
Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor allowed to misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.
If Facebook did indeed build a data pipeline and ad-targeting system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.
None of your examples have anything to do with the thing we're talking about; they're just meant to inflame emotional opinions rather than engender rational discussion about this issue.
If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.
You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.
Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea. Financial crimes still happen even with them in place. But they represent a best effort, if imperfect, attempt to acknowledge and mitigate those risks, and based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given a jury trial found them guilty of misbehaviour.
> None of your examples have anything to do with the thing we're talking about; they're just meant to inflame emotional opinions rather than engender rational discussion about this issue.
Not at all. I'm placing this specific example in the broader context of the tech industry failing to a) consider the consequences of their actions, and b) escaping accountability.
That context matters.
As I see it, there are three possibilities:
#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.
#2. Facebook had inadequate mechanisms for evaluating their partners; they could have caught this problem but failed to do so, and therefore Facebook was negligent.
#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and Facebook was malicious.
Personally, given Facebook's past extremely egregious behaviour, I think it's most likely to be a combination of #2 and #3: inadequate mechanisms to evaluate data partners, and conveniently ignoring signals that the data was ill-gotten, and that Facebook is in fact negligent if not malicious. In either case Facebook should be held liable.
pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.
If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.
Does that clarify my position?
In my opinion, that isn't something that should be allowed or encouraged.
Really the only blame here should be on Flo.
Court documents say that they blocked access as soon as they were aware of it. They also "built out its systems to detect and filter out 'potentially health-related terms.'" Are you expecting more, like some sort of KYC/audit regime before you could get an API key? Isn't that the exact sort of thing people were railing against when indie/OSS developers were being hassled by the Play Store to undergo expensive audits to get access to sensitive permissions?
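The filings don't spell out how that term filter actually worked, so purely as illustration, here is a minimal sketch of what term-based filtering of inbound SDK events could look like. The event shape, term list, and function names are all invented for this example, not taken from Meta's system:

    # Hypothetical sketch of term-based filtering of inbound analytics
    # events. Nothing here describes Meta's real implementation.
    HEALTH_TERMS = ("pregnan", "menstrua", "ovulat", "fertil", "birth control")

    def is_potentially_health_related(event: dict) -> bool:
        """True if the event name or any parameter value mentions a health term."""
        text = " ".join(
            [event.get("name", "")]
            + [str(v) for v in event.get("params", {}).values()]
        ).lower()
        return any(term in text for term in HEALTH_TERMS)

    def ingest(event: dict, pipeline: list) -> None:
        """Drop flagged events instead of forwarding them to ad systems."""
        if is_potentially_health_related(event):
            return
        pipeline.append(event)

    pipeline = []
    ingest({"name": "app_open", "params": {}}, pipeline)
    ingest({"name": "log_event", "params": {"q": "Are you pregnant?"}}, pipeline)
    assert len(pipeline) == 1  # only the non-health event got through

Even a toy version like this shows the obvious limitation: a term list only catches what someone thought to put on it.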
Facebook isn't guilty because Flo sent medical data through their SDK. If they were just storing it or operating on it for Flo, then the case probably would have ended differently.
Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so. They knew, or should have known, that they needed to check if it was legal to use it, but they didn't, so they were found guilty.
What exactly did this entail? I haven't read all the court documents, but at least in the initial/amended complaint the plaintiffs didn't make this argument, probably because it's totally irrelevant to the charge of whether they "intentionally eavesdropped" or not. Either they were eavesdropping or they weren't. Whether they were using the data for advertising purposes might be relevant in armchair discussions about whether Meta is evil, but it shouldn't be relevant when it comes to the eavesdropping charge.
>They knew, or should have known, that they needed to check if it was legal to use it
What do you think this should look like?
Simply put, it should not be possible to send arbitrary data off a device without some sort of user consent/control, and to me this is where the GDPR has utterly failed. I hope one day users are given a legal right to control what data is sent off their device to a remote server, with serious consequences for non-compliance.
If you want to sync between multiple devices, networking is the least-hassle way to do it.
> Why is there no affordance of user-level choice/control that allows users to explicitly see the exact packets of data being sent off device? It would be trivial for apps to have to present a list of allowed IPs/hostnames, and for users to consent or not; otherwise the app is not allowed on the Play Store.
I don't know that it ends up being useful, because whoever the data is sent to can also send it further on.
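For what it's worth, the allowlist idea quoted above could be mechanically simple. Here is a minimal sketch, with the manifest format, hostnames, and enforcement point all invented for illustration; note it does nothing about the objection above, since an approved endpoint can still forward the data onward:

    # Hypothetical sketch of a user-approved egress allowlist.
    # No real platform or store API is being described.
    from urllib.parse import urlparse

    DECLARED_HOSTS = {"api.example-cycle-app.com"}  # shown to the user at install
    USER_APPROVED = {"api.example-cycle-app.com"}   # the subset the user accepted

    def egress_allowed(url: str) -> bool:
        """Permit a request only if its host was both declared and approved."""
        host = urlparse(url).hostname or ""
        return host in DECLARED_HOSTS and host in USER_APPROVED

    assert egress_allowed("https://api.example-cycle-app.com/v1/sync")
    assert not egress_allowed("https://graph.facebook.com/events")  # blocked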
One developer had a free app to track some child health data. It was a long time ago, so I don't remember the exact data being collected. But when asked about the economics of his free app, the developer felt confident about a big payday.
According to him, the app's worth was in the data being collected. I don't know what happened to the app, but it seemed that app developers know what they are doing when they invade the privacy of their users under the guise of a "free" app. After that I became very conscious about disabling as many permissions as possible, and especially about not using apps to store any personal data, health data most of all.
Cycle data in the hands of many countries' authorities is outright dangerous. If you're storing healthcare data, it should require an explicit opt-in, IN BIG RED LETTERS, every single time that data leaves your device.
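Concretely, an opt-in-every-single-time rule would amount to a consent gate in front of every transmission. A minimal sketch, with the prompt and callback mechanism invented for illustration:

    # Hypothetical sketch of a per-send consent gate for health data.
    # Deliberately no "remember my choice": consent is asked on every send.
    def send_health_data(payload: dict, destination: str, ask_user, transmit) -> bool:
        prompt = f"Send your cycle data to {destination}? It will leave this device."
        if not ask_user(prompt):
            return False  # nothing leaves the device without a fresh yes
        transmit(payload, destination)
        return True

    # Usage with stand-in callbacks: the user declines, so nothing is sent.
    sent = send_health_data(
        {"cycle_day": 14},
        "api.example-cycle-app.com",
        ask_user=lambda prompt: False,
        transmit=lambda p, d: None,
    )
    assert sent is False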