* Cambridge Analytica
* The Rohingya genocide
* Suppressing Palestinian content during a genocide
* Damage to teenage (and adult) mental health
Anyway, I mention this because some friends are building a social media alternative to Instagram: https://upscrolled.com, aiming to be pro-user, pro-ethics, and designed for people, not just to make money.
These companies are for the most part effectively outside of the law. The only time they feel pressure is when they stand to lose market share or there's a risk of their platform being blocked in a jurisdiction. That's it.
Stuff like this could easily justify multi-billion dollar fines, and stuff that affects more users maybe even fines in the trillion range. If government workers came to pick up servers, chairs, and projectors from company buildings to sell at auction, because there wasn't enough liquid value in the company to pay the fines, they (well, the others) would reconsider quite fast and stop the illegal activities.
I think Mark Zuckerberg is acutely aware of the political power he holds and has been using this immense power for at least the last decade. But since Facebook is a US company and the US government is not interested in touching Facebook, I doubt anyone will see what Zuckerberg and Facebook are up to. The US would have to put Lina Khan back in at the FTC, or put her high up in the Department of Justice, to split Facebook into pieces. I guess the other hope is that states' attorneys general win an anti-monopoly lawsuit.
Might also be worth trying to force them to display a banner on every page of the site "you're on facebook, you have no privacy here", like those warnings on cigarette boxes. These might not work though, people would just see and ignore them, just like smokers ignore warnings about cigarettes.
I'm not using "fine" very literally. Damages paid to the victims.
You have it wrong in the worst way. They are wholly inside the law because they have enough power to influence the people and systems that get to use discretion to determine what is and isn't inside the law. No amount of screeching about how laws ought to be enforced will affect them because they are tautologically legal, so long as they can afford to be.
I know people who don't see anything wrong with Meta, so they keep using it. And that's fine! Their actions seem to align with their stated values.
I get human fallibility. I've been human for a while now, and wow, have I made some mistakes and miscalculations.
What really puts a bee in my bonnet though is how dogmatic some of these people are about their own beliefs and their judgement of other people.
I love people, I really do. But what weird, inconsistent creatures we are.
I, too, have vices she tolerates so I don't push as hard as I otherwise would have, but I would argue it is not inconsistency. It is a question of what level of compromise is acceptable.
They care as much as people who claim to care about animals but still eat them, or people who claim to love their wives but still beat or cheat on them. Your actions are the sole embodiment of your beliefs.
Of course Facebook's JS won't add itself to websites, so half of the blame goes to webmasters willingly sending malware to browsers.
https://www.mozillafoundation.org/en/privacynotincluded/cate...
And then Meta accessed it. So unless you put restrictions on data, Meta is going to access it. Don't you think it should be the other way around, with Meta having to ask for permission? Then we wouldn't have this sort of thing.
If AWS wanted to eavesdrop and/or record conversations of some random B2C app user, for sure they would need to ask for permission.
https://www.courtlistener.com/docket/55370837/1/frasco-v-flo...
If the company sends your conversation data to Facebook, that's bad and certainly a privacy violation, but at that point nothing has actually been done with the data yet. Then Facebook accesses the data and folds it into their advertising signals; they have now actually looked at the data and acted on the information within. And that, to me, is eavesdropping.
By that logic, if I listen in on your conversations but don’t do anything about it I’m not eavesdropping?
And I know it sounds pedantic, but I don't think it is; it's why that data is allowed to be stored in S3 and Amazon isn't considered to be eavesdropping but Facebook is.
Possession of data does not give you complete legal freedom.
Which is what happened here.
Flo is wrong for using an online database for personal data.
Meta is wrong for facilitating an online database for personal data.
They're both morally and ethically wrong.
> [...] users, regularly answered highly intimate questions. These ranged from the timing and comfort level of menstrual cycles, through to mood swings and preferred birth control methods, and their level of satisfaction with their sex life and romantic relationships. The app even asked when users had engaged in sexual activity and whether they were trying to get pregnant.
> [...] 150 million people were using the app, according to court documents. Flo had promised them that they could trust it.
> Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. Whenever someone opened the app, it would be logged. Every interaction inside the app was also logged, and this data was shared.
> "[...] the terms of service governing Flo Health’s agreement with these third parties allowed them to use the data for their own purposes, completely unrelated to services provided in connection with the App,”
Bashing on Facebook/Meta might give a quick dopamine hit, but they really aren't special here. The victims' data was routinely sold, en masse, per de facto industry practices. Victims should assume that hundreds of orgs, all over the world, now have copies of it. Ditto any government or criminal groups which thought it could be useful. :(
When the Western app says they don’t sell or give out private information, you can be suspicious but still somewhat trustful. When a dictator-ruled country’s app does so, you can be certain every character you type in there is logged and processed by the government.
What do you mean by cut all ties? The owners and management have no assets in Belarus or ties to the country?
A list of contact addresses is not a list of all locations, or all employees, or all contractors, or all shareholders, or all financial interests.
The one thing the site tells me is that it is operated by two separate companies: Flo Inc and Flo Health UK. The directors of Flo Health Limited live in the UK and Cyprus; two are Belarusian nationals and one Russian.
* Dmitry Gurski; CEO
* Tamara Orlova; CFO
* Anna Klepchukova; Chief Medical Officer
* Kate Romanovskaia; Chief Brand & Communications Officer
* Joëlle Barthel; Director of Brand Marketing
* Nick Lisher (British); Chief Marketing Officer
Also, here is what Pavel Durov mentioned recently in an interview with Tucker Carlson:
> In the US you have a process that allows the government to actually force any engineer in any tech company to implement a backdoor and not tell anyone about it with using this process called the gag order.
It doesn't matter what anyone claims on the landing page. Assume that if it's stored somewhere, it'll get leaked eventually, and that the transiting/hosting government already has access and the decryption keys.
Hey guys, that ycombinator "hacker" forum thing full of Champagne socialists employed by the Zucks/Altmans/Musks of the world told me everything is fine and I shouldn't worry. I remain trustful.
Surely some, ahem, spilled tea can't possibly occur again, right? I remain trustful.
Speaking of tea, surely all the random "id verification" 3rd parties used since the UK had a digital aneurysm have everything in order, right? I remain trustful.
---
Nah, I'll just give my data to my bank and that's about it. Everyone else can fuck right off. I trust Facebook about as much as I trust Putin.
[1] https://www.courtlistener.com/docket/55370837/1/frasco-v-flo...
[2] https://storage.courtlistener.com/recap/gov.uscourts.cand.37... page 6, line 1
On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
With this in mind, I personally believe groups will always come to better conclusions than individuals.
Being tried by 12 instead of 1 means more diversity of thought and opinion.
There is a wisdom of the crowd, and that wisdom comes in believing that we are all equal under the law. This wisdom is more self-evident in democratic systems, like juries.
Not to be ageist, but I find this highly counterintuitive.
Let's just say with a full jury you're almost guaranteed to get someone on the other side of the spectrum, regardless of age.
The judge decided to pursue this career, studied law, is fairly well paid for the position, and has nowhere better to be. The jury is likely losing pay, is worried about parking and where and when they're going to eat lunch, and probably just wants the trial to be over (for all that they would of course prefer the outcome to be correct) so they can go home and return to their normal lives.
First off, averages aren't good enough here.
Second, I can't imagine a better example of an appeal to authority.
Trials are an administrative action. Interest in the process is no indication that the outcome will be just.
Your idea falls apart for all of the reasons an appeal to authority is a fallacy.
Additionally you’re fundamentally changing the nature of society by holding the people accountable to power rather than each other.
This has been an issue since the internet was invented. It's always been the duty of the lawyers on both sides to present the information in cases like this in a manner that is understandable to the jurors.
I distinctly remember during the OJ case, the media said many issues were presented in such excessive detail that many of the jurors seemed to check out. At the time, the prosecution spent days just on the DNA evidence. In contrast, the defense spent days just on how the LAPD collected evidence at the crime scene, with the same effect: the deeper the defense dug into it, the more the jury seemed to check out.
So it's not just technical cases; any kind of court case that requires a detailed understanding of anything complex comes down to how the lawyers present it to the jury.
Innocent until proven guilty is the right default, but at some point when you've been accused of misconduct enough times? No jury is impartial.
But FB, having received this info, proceeded to use it and mix it with other signals it gets. Which is what the complaint against FB alleged.
"This data processing pipeline processed the data we put in the pipeline" is not necessarily negligence unless you just hate Facebook and couldn't possibly imagine any scenario where they're not all mustache-twirling villains.
We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"
LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.
Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.
If Facebook did indeed build a data pipeline and targeted advertising system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.
None of your examples have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.
If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.
You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.
Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea. Financial crimes still happen even with them in place. But they represent a best effort, if imperfect, attempt to acknowledge and mitigate those risks, and based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given a jury trial found them guilty of misbehaviour.
> None of your examples have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.
Not at all. I'm placing this specific example in the broader context of the tech industry failing to a) consider the consequences of their actions, and b) escaping accountability.
That context matters.
#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.
#2. Facebook had inadequate mechanisms for evaluating their partners; while they could have caught this problem, they failed to do so, and therefore Facebook was negligent.
#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and Facebook was malicious.
Personally, given Facebook's past extremely egregious behaviour, I think it's most likely to be a combination of #2 and #3: inadequate mechanisms to evaluate data partners, and conveniently ignoring signals that the data was ill-gotten, and that Facebook is in fact negligent if not malicious. In either case Facebook should be held liable.
pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.
If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.
Does that clarify my position?
You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster their signal detection when it receives data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which, per what Flo sent to them, it did). This case is very squarely #1 in your example and maybe a bit of #2.
To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
AFAIK that's only because of mandatory scanning laws for CSAM, which were only enacted recently. There's no such obligations for other sensitive data.
What everyone else is saying is what they did is illegal, and they did it automatically, which is worse. What you're describing was, in fact, built to do that. They are advertising to people based on the honor system of whoever submits the data pinky promising it was consensual. That's absurd.
Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.
> without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents
This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.
If those systems exist, they clearly failed to actually work. However, the court documents indicate that Facebook didn't build out systems to check if stuff is health data until afterwards.
In some crimes actus reus is what matters. For example if you're handling stolen goods (in the US) the law can repossess these goods and any gains from them, even if you had no idea they were stolen.
Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or other documented process that could otherwise be used in discovery. "If you don't admit your product could be used for wrongdoing, then it can't!"
In my ideal world, platforms and their moderation would be more localized, so that individuals would have more power to influence it and also hold it accountable.
Probably what it looked like 20 years ago.
Also, relatedly, if there's no moral or ethical way to conduct your business model, that doesn't mean that you're off the hook.
The correct outcome is your business model burns to the ground. That's why I don't run a hitman business, even though it would be lucrative.
If mass scale automated targeted advertising cannot be done ethically, then it cannot be done at all. It shouldn't exist.
But working beyond your competence when it results in people getting hurt is… negligent.
You can't, or shouldn't, outsource responsibility to computers. You can give them tasks. But if computers fail the tasks you give them, that's your responsibility. Because code doesn't have a conscience or an understanding of morality.
In my opinion, that isn't something that should be allowed or encouraged.
Why are you scaling up a business that can't refrain from fucking over customers?
Without knowing how it works at Facebook, it's quite possible the data points got slurped in, the models found meaning in the data and acted on it, and no human knew anything about it.
There is a trail of people who signed off on this implementation. It is the fault of one or more people, not machines.
We can argue the "moral" aspect until we're both blue in the face, but did facebook have any legal responsibilities to ensure its systems didn't contain sensitive data?
Really the only blame here should be on Flo.
Court documents say that they blocked access as soon as they were aware of it. They also "built out its systems to detect and filter out 'potentially health-related terms.'" Are you expecting more, like some sort of KYC/audit regime before you could get any API key? Isn't that the exact sort of stuff people were railing against, because indie/OSS developers were being hassled by the play store to undergo expensive audits to get access to sensitive permissions?
"chose" is doing a lot of the heavy lifting here. Suppose you ran a Mastodon server and it turned out some people were using it to share revenge porn unbeknownst to you. Suppose further that they did it in a way that didn't make it easily detectable by you (eg. they did it in DMs/group chats). Sure, you can dump out the database and pore over everything just to be sure, but it's not like you're going to notice it day to day. If a few months later the revenge porn ring got busted should you be charged with "intentionally eavesdropping" on revenge porn or whatever? After all, to some extent, you "chose" to run the Mastodon server.
Not knowing those details (they are probably available but I'm not interested enough to read the court documents) I'm going to defer to the courts on this. Understand that depending on ongoing appeals I may have to change my stance a few times. If this keeps coming up I may eventually have to get interested and learn more details so I can pressure my representative to change the laws, but for now this just isn't important enough - to me - to dig farther than the generalizations I made above.
This happens accidentally every single day, and we don't punish the victim.
At one point I was getting a stranger's fertility app updates. I didn't know her name, but I could tell you where she was in her cycle.
I've also had NHS records sent to me, again entirely unsolicited, although those had enough information that I could find out who they were meant for and inform them of the data breach.
I'm no fan of facebook, but I'm not sure you can criminalise receiving data; you can't control what others send you.
Of course not. You can, however, control what you then do with said data.
If a courier accidentally dropped a folder full of nuclear secrets in your mailbox, I promise you that if you do anything with it other than call the FBI (in the US), you will be in trouble.
Facebook isn't guilty because Flo sent medical data through their SDK. If they were just storing it or operating on it for Flo, then the case probably would have ended differently.
Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so. They knew, or should have known, that they needed to check if it was legal to use it, but they didn't, so they were found guilty.
What exactly did this entail? I haven't read all the court documents, but at least in the initial/amended complaint the plaintiffs didn't make this argument, probably because it's totally irrelevant to the charge of whether they "intentionally eavesdropped" or not. Either they were eavesdropping or not. Whether they were using it for advertising purposes might be relevant in armchair discussions about whether Meta is evil or not, but shouldn't be relevant when it comes to the eavesdropping charge.
>They knew, or should have known, that they needed to check if it was legal to use it
What do you think this should look like?
My honest answer that I know is impossible:
Targeted advertising needs to die entirely.
Like, for example, running a gambling operation is very risky and has a high compliance barrier. So most companies just don't. In fact, most B2B won't even sell to gambling companies depending on what exactly they're selling.
AIUI, they have a system for using data they receive to target ads. They tell people not to put sensitive data in it. Someone does anyway, and it gets automatically picked up to target ads. What are they supposed to do on their end? Even if they apply heuristics for "probably sensitive data we shouldn't use"[1], some stuff is still going to get through. The fault should still lie with the entity that passed on the sensitive data.
An analogy might be that you want to share photos of an event you hosted, and you tell people to send in their pics, while enforcing the norm, "oh make sure to ask before taking someone's photo", and someone insists that what they sent in was compliant with that rule, when it wasn't. And then you share them.
[1] Edit: per your other comment, they indeed had such heuristics: https://news.ycombinator.com/item?id=44901198
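To make the leakiness concrete, here's a minimal sketch of what a keyword heuristic for "probably sensitive data" might look like, and why opaque event names slip past it. The term list and event names are made up for illustration; this is not Facebook's actual filter.

    # Hypothetical sketch of a keyword heuristic for flagging "potentially
    # health-related" analytics events. Not Facebook's actual filter; the
    # event names and term list below are invented for illustration.

    SENSITIVE_TERMS = {"pregnan", "menstrua", "ovulat", "period", "fertil"}

    def looks_health_related(event_name: str) -> bool:
        """Flag an event name that contains an obviously health-related term."""
        name = event_name.lower()
        return any(term in name for term in SENSITIVE_TERMS)

    # An app that labels its events plainly gets caught...
    assert looks_health_related("pregnancy_week_selected")
    # ...but the same signal sent under an opaque name sails straight through.
    assert not looks_health_related("onboarding_step_7_complete")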
If you are a business that hosts events and your business model involves photos of the event, you should have a professional approach to knowing whether people consented to have their photos shared, depending on the nature of the venue.
At this point it is becoming barely an analogy though.
You can't, though -- not perfectly, anyway. Whatever the informal norms, there are going to be people who violate them, and so the fault shouldn't pass on to you when you don't know someone is doing that. If anything, the analogy understates how unreasonable it is to FB, since they had an explicit contractual agreement for the other party not to send them sensitive data.
And as it stands now, websites aren't expected to pre-filter for some heuristic on "non-consensual user-uploaded photographs" (which would require an authentication chain), just to take them down when informed they're illegal ... which FB did (the analog of) here.
>If you are a business that hosts events and your business model involves photos of the event, you should have a professional approach to knowing whether people consented to have their photos shared, depending on the nature of the venue.
I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this.
It depends on the event and the nature of the venue. But yes, it is a bad analogy. For one thing Facebook is not an event with clearly delineated borders. It should naturally be given much higher scrutiny than anything like that.
"We're scot free, because we told *wink* people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'"
Please don't sell me cocaine *snifffffffff*
> The fault should still lie with the entity that passed on the sensitive data.
Some benefits to making it be both:
* Centralize enforcement with more knowledgable entities
* Enforce at a level where the misdeeds can actually be identified and have scale, rather than death from a million cuts
* Prevent the central entity from using deniable proxies and cut-throughs to do bad things
This whole notion that we want so much scale, and that scale is an excuse for not paying attention to what you're doing or exercising due diligence, is repugnant. It pushes some cost down but also causes a lot of social harm. If anything, we should expect more ownership and responsibility from those with concentrated power, because they have more ability to cause widescale harm.
>Please don't sell me cocaine snifffffffff
Maybe there's something in discovery that substantiates this, but so far as I can tell there's no "wink" happening, officially or unofficially. A better analogy would be charging amazon with drug distributing because some enterprising drug dealer decided to use FBA to ship drugs, but amazon was unaware.
Unless, of course, Facebook is held accountable for not enforcing it.
Companies don't get to do whatever they want just because they didn't put any safeguards in place to prevent illegally using the data they collected.
The correct answer is to look at the data and verify it's legal to use.
I might be sympathetic to a tiny startup facing increased costs, but it's a cost of doing business just like anything else. And Facebook has more than enough resources to put safeguards in place, and they definitely should have known better by now, so they should get punished for not complying.
So repeal Section 230 and require every site to manually evaluate all content uploaded for legality before doing anything with it? If it’s not reasonable to ask sites to do that, it’s not reasonable to ask FB to do the same for data you send them.
Your position seems to vary based on how big/sympathetic the company in question is, which is not very even-handed and implicitly recognizes the burden of this kind of ask.
Running a forum is fine, and I don't care if someone inputs a fake SSN in a forum post.
I DO care if someone inputs a fake SSN on a financial form I provided, and it is actually my responsibility to prevent that. That's what KYC is and more.
You know exactly what it would look like. It would look like Facebook being legally responsible for using the data they get. If they are too big to do that, or are getting too much data to do that, the answer isn't to let them off the hook. Also, let's not pretend Facebook doesn't have a 15-year history of actively misusing data. This is not a one-off event.
No, because this is begging the question. The point being disputed is whether facebook offering a SDK and analytics service counts as "intentionally eavesdropping". Anyone with a bit of understanding of how SDKs work should think it's not. If you told your menstrual secrets to a friend, and that friend then told me, that's not "eavesdropping" to any sane person, but that's essentially what the jury ruled here.
I might be sympathetic if facebook was being convicted of "trafficking private information" or whatever, but if that's not a real crime, we shouldn't be using "intentionally eavesdropping" as a cudgel against it just because we hate it. That goes against the whole concept of rule of law.
Institutions that handle sensitive data that is subject to access regulations generally have a compliance process that must be followed prior to accessing and using that data, and a compliance department staffed with experts who review and approve/deny access requests.
But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.
Facebook isn't running an electronic medical records business. It has no expectation that it's going to be receiving sensitive data, and specifically discourages it. What more are you expecting? That any company dealing with bits should have a moderation team poring over all records to make sure they don't contain "sensitive data"?
>But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.
Running an analytics service that allows apps to send arbitrary events is "move fast, break things" now?
If this is used by targeting, I’m afraid we can’t call this just an “analytics service”.
Evidently Facebook does use medical data for targeted advertising. So they are a medical records business.
Cambridge Analytica was entirely a third party using "Click here to log in via Facebook and share your contacts" via FB's OpenGraph API.
Everyone is sure in their mind that it was Facebook just giving away all user details and that that's what the scandal was about, but if you look at the details, the company was using the Facebook OpenGraph API and users were blindly hitting 'share', including all contact details (allowing them to do targeted political campaigning), when using the Cambridge Analytica quiz apps. Facebook's fault was allowing Cambridge Analytica permission to that API (although at the time they granted pretty much anyone access to it, since they figured users would read the popups).
Now you might say "a login popup that confirms you wish to share data with a third party is not enough" and that's fair. Although that pretty much describes every OAuth flow out there really. Also think about it from the perspective of any app that has a reasonable reason to share a contacts list. Perhaps you wish to make an open source calendar and have a share calendar flow? Well there's precedent that you're liable if someone misuses that API.
We all hate big tech. So do juries. We'll jump at the chance to find them guilty and no one else in tech will complain. But if we think about it for even a second quite often these precedents are terrible and stifling to everyone in tech.
Doesn't everything else in your post kinda point to the industry needing a little stifling? Or, more kindly, a big rethink on privacy and better controls over one's own data?
Do you have an example of a similarly terrible precedent in your opinion? One that doesn't include the blatant surveillance state power-grabbing "think of the children" line. Just curious.
The amended complaint, [3], includes the allegations against Facebook as at that time Facebook was added as a defendant to the case.
Amongst other things the amended complaint points out that Facebook's behavior lasted for years (into 2021) after it was publicly disclosed that this was happening (2019), and then even after Flo was forced to cease the practice by the FTC, and congressional investigations were launched (2021) it refused to review and destroy the data that had previously been improperly collected.
I'd also be surprised if discovery didn't provide further proof that Facebook was aware of the sort of data they were gathering here...
[3] https://storage.courtlistener.com/recap/gov.uscourts.cand.37...
Are you talking about this?
> As one of the largest advertisers in the nation, Facebook knew that the data it received from Flo Health through the Facebook SDK contained intimate health data. Despite knowing this, Facebook continued to receive, analyze, and use this information for its own purposes, including marketing and data analytics.
Maybe something came up in discovery that documents the extent of this, but this doesn't really prove much. The plaintiffs are just assuming because there's a clause in ToS saying so, facebook must be using the data for advertising.
In the part of my post that you quoted I'm literally just talking about the cover page of [1] where the defendants are listed, and at the time only Flo is listed. So nothing against Facebook/Meta is being alleged in [1]. They got added to the suit sometime between that document and [3] - at a glance probably as part of consolidating some other case with this one.
Reading [1] for allegations against Facebook doesn't make any sense, because it isn't supposed to include those.
The quote from my previous comment was taken from the amended complaint ([3]) that you posted. Skimming that document it's unclear what facebook actually did between 2019 and 2021. The complaint only claims flo sent data to facebook between 2016 and 2019, and after a quick skim the only connection I could find for 2021 is a report published in 2021 slamming the app's privacy practices, but didn't call out facebook in particular.
21 - For the claim that there was public reporting that Facebook was presumably aware of in 2019.
26 - For the claim that in February 2021 Facebook refused to review and destroy the data they had collected from Flo to that date, and thus presumably still had and were deriving value from the data.
I can't say I read the whole thing closely though.
Simply put, it should not be possible to send arbitrary data off a device without some sort of user consent/control, and to me, this is where the GDPR has utterly failed. I hope one day users are given a legal right to control what data is sent off their device to a remote server, with serious consequences for non-compliance.
In case you want to sync between multiple devices, networking is the least hassle way.
> Why is there no affordance of user-level choice/control that allows users to explicitly see the exact packets of data being sent off device? It would be trivial for apps to have to present a list of allowed IPs/hostnames, and for users to consent or not; otherwise the app is not allowed on the play store.
I don't know that it ends up being useful, because wherever the data is sent to can also send the data further on.
You don't need to "root" the phone and install GrapheneOS. The Netguard app blocks connections on a per-app basis. It generally works.
But having to take these measures, i.e., installing GrapheneOS or Netguard (plus Nebulo, etc.), is why "mobile OS" all suck. People call them "corporate OS" because the OS is not under the control of the computer owner, it is controlled by a corporation. Even GrapheneOS depends on Google's Android OS, relies on Google hardware, makes default remote connections to a mothership that happen without any user input (just like any corporate OS), and uses a Chromium-based default browser. If one is concerned about being tracked, perhaps it is best to avoid these corporate, mobile OS.
It is easy to control remote connections on a non-corporate, non-mobile OS where the user can compile the OS from source on a modestly resourced computer. The computer user can edit the source and make whatever changes they want. For example, I use one where, after compilation from source, everything is disabled by default (this is not Linux). The user must choose whether to create and enable network interfaces for remote connectivity.
One developer had a free app to track some child health data. It was a long time ago, so I don't remember the exact data being collected. But when asked about the economics of his free app, the developer felt confident about a big pay day.
According to him, the app's worth was in the data being collected. I don't know what happened to the app, but it seemed that app developers know what they are doing when they invade the privacy of their users under the guise of a "free" app. After that I became very conscious about disabling as many permissions as possible and not using apps to store any personal data, especially health data.
I guess also people feel that corporations _shouldn't_ be allowed to do bad things with it.
Sadly, we already know from experience over the last 20 years that many people don't care about what information they give to large corporations.
However, I do see more and more people increasingly concerned about their data. They are still mainly people involved in tech or related disciplines, but this is how things start.
Cycle data in the hands of many countries' authorities is outright dangerous. If you're storing healthcare data, there should be an explicit opt-in, IN BIG RED LETTERS, every single time that data leaves your device.
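As a rough illustration of what that could mean in practice, here is a minimal sketch of a per-upload consent gate that shows the user exactly what is about to leave the device. This is hypothetical and not any real platform API; the function names and wording are assumptions.

    # Hypothetical sketch, not a real platform API: an explicit, per-upload
    # opt-in gate before any health data leaves the device.

    def confirm_upload(description: str) -> bool:
        """Show the user exactly what is about to be sent and ask first."""
        answer = input(
            f"WARNING: about to send '{description}' to a remote server. "
            "Send it? [y/N] "
        )
        return answer.strip().lower() == "y"

    def sync_cycle_data(record: dict, upload) -> None:
        """Only call the caller-supplied upload function after an explicit yes."""
        if confirm_upload(f"cycle data ({len(record)} fields)"):
            upload(record)
        # otherwise, nothing leaves the device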
From that article:
“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” [...] The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.”

In this case, I'm confident that for whoever wrote this section, just checking their hard drive should be sufficient to send them to jail.
Proposed resolution:
1. Wipe out Flo with civil damages, and also wipe out the C-suite and others at the company personally.
2. Prison for Flo's C-suite, and everyone else at Flo who knew what was going on and didn't stop it, or who was grossly negligent when they knew they were handling sensitive info.
3. Investigate what Flo's board and investors knew, for possible criminal and civil liability.
4. Investigate what Flo's data-sharing partner companies knew, and what was done with the data, for possible criminal and civil liability.
Tech industry gold rushes have naturally attracted much of the shittiest of society. And the Overton window within the field has shifted so much due to this, with some dishonest and underhanded practices as SOP, that even decent people have lost references for what's right and wrong. So the tech industry is going to keep doing every greedy, underhanded, and reckless thing they can, until society starts holding them accountable. That doesn't mean regulatory handslaps; that means predatory sociopaths rotting in prison, and VCs and LPs wiped out, as corporate veils of companies that the VCs knew were underhanded are pierced.
Along a similar vein, I cannot believe after the stunts LinkedIn pulled, that they're even allowed on app stores at all.