Externally we have some amazing security researchers who look out for these things and dig them out, and try to hold the companies responsible.
And what is the internal process? Wouldn't these intrusive and privacy-violating features (to track users, for example) be captured in design docs, emails, chats, code changes/PRs, and be up for discovery? Aren't employees saying anything against these features? What culture are they building? What leadership are they demonstrating? It can't all be about money at any cost, damn the users and their privacy/rights, right?
Now imagine what would happen without any regulation at all…
As far as I can tell, that's the mandate.
Brands are about trust, so it makes sense.
Likely, but it's just demoralized workplace grousing. Tech employees are statistically sycophantic, and the exceptions get burned out or tossed out by executives who say things like "why don't they just shut up and work"
And the tech industry leaders are mostly spoiled, entitled millionaires and billionaires who eject anyone who crosses them. A team of moral imbeciles.
Society is screwed until Zuckerberg, Bezos, Musk et al are stripped of their power and wealth.
Which will never happen.
A lot of the time, unless you're on a team or in a department that's doing this stuff for nefarious purposes, you wouldn't know.
Anecdotal evidence:
I worked in RPA (robotic process automation) for a large software company. We were tasked with automating a decision process. The decision process was completely benign. Something like a payments process. Pretty easy. We finished it in a few months start to finish.
We hand it off, it goes into production. A lot of back slapping and high fives happen. I go on with my work and move on from the company a few months later. About a year later, a guy I worked with at the company emails me a news story about a company using the RPA program we built to auto-deny insurance claims for their clients. Massive class-action lawsuit.
The insurance company had taken our script and just re-purposed it to auto-deny a certain percentage of claims. I was shocked and dismayed that someone would use the stuff we built for their own shady business practices. It was a huge wake-up call that even when you're building something completely benign, a company can pay millions for it and reuse it to do bad things to people, and you'll never know until it's too late.
people are usually motivated by more and faster
this means whatever opportunities present themselves for short-term more and faster will have no shortage of takers
TL;DR myopic greed
Companies are something else beyond people, which has a mind of its own. And leadership selects for sociopathy.
It's really not that complicated, if I squint hard enough it looks like an obvious tragedy of the commons variant.
Also, the incentive structure for reward (stock price etc.) is predicated on squeezing every last bit of monetization out. They already know ahead of time how much money a new feature could bring in, and the potential litigation cost beforehand. If revenue > litigation/fine expense, they are going to do it.
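That calculus is simple enough to sketch in TypeScript; the numbers below are entirely made up for illustration:

    // Illustrative only: the expected-value math described above,
    // with entirely made-up numbers.
    const projectedRevenue = 500_000_000; // forecast earnings from the new feature
    const potentialFine = 50_000_000;     // worst-case regulatory fine
    const chanceOfFine = 0.3;             // odds regulators act at all

    const expectedLitigationCost = potentialFine * chanceOfFine;

    if (projectedRevenue > expectedLitigationCost) {
      console.log("Ship it."); // the incentive structure says go
    }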
Money
> Externally we have some amazing security researchers who look out for these things and dig them out, and try to hold the companies responsible.
At worst they get some bad PR, but most people aren't listening to security researchers (or don't care). Companies aren't being held "responsible" for anything; maybe a small fine, but that's just the cost of doing business.
> And what is the internal process? Wouldn't these intrusive and privacy-violating features (to track users, for example) be captured in design docs, emails, chats, code changes/PRs, and be up for discovery?
Yes, probably all these things exist. See also Apple talking about how to make linking-out sound as scary as possible. There are chats with employees brainstorming how to make it scarier to end-users to drive them back into the IAP flow.
> Aren't employees saying anything against these features?
Some are afraid of losing their job or getting branded as a non-team-player. Some don't care. It can be easy to get lost in "implement the spec" instead of "is what I'm building morally ok?"
> What culture are they building?
A bad one.
> What leadership are they demonstrating?
That they only care about money and what they can get away with. Even if they get caught the profits outstrip the fine by a large margin. "It's just numbers/business".
> It can't all be about money at any cost, damn the users and their privacy/rights, right?
It is.
https://en.wikipedia.org/wiki/Obedience_to_Authority:_An_Exp...
I recommend reading the book, it has a ton of useful stuff in it beyond what everyone knows about the Milgram experiment.
if (true) { trackUser(); }
Even some (most?) off-the-shelf cookie consent libraries don't handle actually turning the cookies off.
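A hypothetical consent handler (names invented) showing the gap: recording the choice is the part the libraries do; actually killing the cookies is the part that often goes missing.

    // Hypothetical consent handler illustrating the gap described above.
    function onConsentChoice(accepted: boolean): void {
      // The part most libraries actually do: remember the choice.
      localStorage.setItem("cookie-consent", accepted ? "granted" : "denied");

      if (!accepted) {
        // The part that often goes missing: expire the non-essential
        // cookies already set on this domain.
        for (const cookie of document.cookie.split("; ").filter(Boolean)) {
          const name = cookie.split("=")[0];
          document.cookie = `${name}=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/`;
        }
        // Even then, HttpOnly cookies, third-party cookies, and
        // already-loaded tracker scripts are out of reach from here.
      }
    }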
Case in point: the FTC did start an antitrust lawsuit against Facebook in December 2020, and the trial finally started last month. Hence Zuckerberg being at the inauguration, getting rid of fact-checkers, etc.
EDIT: well, that's probably not true; there are probably many people working for e.g. the FTC who would love to go after big companies. This is certainly true for the CFPB. But collectively, there's very little will from either party to do so. I mean, a large portion of Congress holds Meta stock; some got rich off it.
Why won't Meta be punished for this? Because they won't be convicted.
Why won't they be convicted? Because the court system is corrupt, trials are expensive, and Meta has more money to throw behind lawyers than anyone who cares to sue them.
Why is the court system corrupt? Because corruption is spreading through every aspect of the US government, as corruption, like black mold in a kitchen sink, tends to do when not actively fought.
Why are trials expensive? Because the law is complicated and there isn't much support for privacy, so this isn't open-and-shut like a murder case; it would be a long, drawn-out fight between the few people with little money who care about privacy and a huge corporation powered by the fear and apathy and helplessness of billions of people.
Why does Meta have so much money? Because nobody stopped them when it would have been easy. Because corruption of democracy and erosion of privacy have been ongoing for decades, maybe forever.
Why isn't corruption being actively fought? The working class (net worth under, say, $5 million) is on the back foot against it. Nobody has time to vote, protest, organize, unionize, run for office, fix things, or help the homeless when you can't afford a damned thing, when a pregnancy is a career-ending problem, when you can't even get an abortion safely in many states, when you're one ambulance bill away from a "not a debtor's prison" constructed debtor's prison.
Why isn't there much support for privacy? Same reasons, plus public education lags behind on this issue. Tech moved very fast the last few decades, and there's no public education for adults, so people don't really get sat down and told what the risk is of mass surveillance. The stories get written, sometimes, but not listened to. Nothing very serious gets done when you can always flip channels and see something funny, which is what Meta and TikTok sell to people. You can always flip channels and tune out, and it might take years to say to yourself "I think I have a crippling Internet addiction".
What's the actionable takeaway?
1. Always fight anyway. Get stuff off of Meta. Publish Own Site, Syndicate Elsewhere. Help friends get away. Do what you can.
2. Do things in physical reality.
3. Vote in every god-damned election you can vote in. Vote blue even if it's a shit sandwich, because most Americans are stuck under FPTP voting, and life is better under Democrats, do I need to explain the word "gradient" to a website full of startup hackers?
* Apps can open local TCP/UDP ports on the device to talk to each other. That's good, often. It's a valuable capability, just like access to shared storage is. And it doesn't leak any info the app doesn't want to leak.
* Apps can get a reasonably unique fingerprint for the device they're running on. Again, that's desired: backends want to know who they're talking to, and that it's not a MitM or hijacked account.
* Browsers can talk to those local servers too. Again, this seems useful to me: as long as native apps have capabilities that web apps lack, they should be able to offer them as extensions to the web API.
So basically we have a situation where Facebook's native app tells Facebook's web app (or rather Facebook's web code being distributed by Facebook's partner sites) what machine it's operating on (and maybe some more details about the history, like user account cookies, etc...), in contravention of the user's expectations about what they told the partner site to store about them.
It's that last bit that's the bad part. But it's not really about device security as I see it. Facebook broke the spirit of the cookie tracking rule, but not the letter.
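A minimal sketch of the web half of that channel, as I understand the reports; the port and query parameter are invented, and the real pixel reportedly also abused WebRTC rather than only plain HTTP:

    // Sketch of the web half of the channel: JS served to a partner
    // site hands the browser-side ID to a native app listening on a
    // local port. Port and parameter names are invented.
    async function sendToNativeApp(browserCookieId: string): Promise<void> {
      // "localhost" here is the phone itself, so this request never
      // leaves the device and never shows up in network monitoring.
      await fetch(
        `http://localhost:12387/pixel?cid=${encodeURIComponent(browserCookieId)}`,
        { mode: "no-cors" }, // fire-and-forget; no readable response needed
      );
    }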
Not sure if I agree with that one... at a minimum I think that should be behind a user consent prompt.
The reason they didn't do that is pure obscurity: if they blasted a tracking cookie over the internet they would have been caught faster. Trying to design "security" features by pushing the "obscurity" boundary around is usually wasted effort.
* App-to-app network communication really only seems sane between apps from the same vendor.
* Unique fingerprinting of the device is explicitly not desirable. Apps should have access to a unique anonymous identifier for the device which cannot be compared between apps from multiple vendors (sketched after this list).
* Browsers talking to native apps again seems like something that should only be possible if both the web domain and the app belong to the same vendor. I don't have a huge problem with Facebook.com talking to the Facebook app - I do have a problem with random 3rd-party sites talking to the Facebook app.
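Here's one way that vendor-scoped identifier could work, sketched in Node-flavored TypeScript. No OS exposes this exact API; it's just an illustration of the derivation:

    import { createHmac, randomBytes } from "node:crypto";

    // Illustration of a per-vendor device ID. A secret the OS generates
    // once and never reveals directly:
    const deviceSecret = randomBytes(32);

    // Each vendor gets a stable ID derived from the device secret and
    // the vendor's identity (e.g. their package-signing key).
    function deviceIdForVendor(vendorId: string): string {
      return createHmac("sha256", deviceSecret).update(vendorId).digest("hex");
    }

    // Same vendor, same ID across its apps; different vendors get
    // values they cannot correlate with each other.
    console.log(deviceIdForVendor("com.facebook"));
    console.log(deviceIdForVendor("com.spotify"));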
Every working system vulnerability is explicitly allowed by the code, or else it wouldn't work.
Do you mean that the OS provides for this possibility in a technical sense?
Or do you mean that the OS explicitly gave them permission to do this exact thing (and that permission was from the user?).
That something is technically possible isn't what makes it not a violation of the Computer Fraud and Abuse Act. Similarly, leaving my front door unlocked doesn't mean you aren't trespassing.
> Organi-cultural deviance is a recent philosophical model used in academia and corporate criminology that views corporate crime as a body of social, behavioral, and environmental processes leading to deviant acts.
> This reflects the view that corporate cultures may encourage or accept deviant behaviors that differ from what is normal or accepted in the broader society.
https://en.m.wikipedia.org/wiki/Corporate_crime
In other words, whether or not what Facebook is doing is a crime, they are doing it because their corporate culture fundamentally believes that it is okay and acceptable for them to do it, even though wider society doesn’t agree.
And that’s the justification for filing criminal charges against organizations. It is the culture of the organization that encourages criminal acts, so the organization itself should be charged, and dissolved (the corporate death penalty) if necessary.
> It said Meta and Yandex used Android's capabilities "in unintended ways that blatantly violate our security and privacy principles".
Did Google immediately remove these apps with blatant security and privacy violations from their app store?
I wish there was a way to prevent an app from running in the background.
Uninstall the app.
CEO MANIFESTO
Do illegal thing,
increase revenue,
bonus for me,
if caught,
punishment for the company.
https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana...
Does anyone know a sane way to monitor the API calls from an Android app so I can see the endpoints that aren't available on the web apps?
I got likes from what looked like bot accounts from random countries (Kazakhstan etc.) even when I was quite certain I had limited the ads to certain countries. Then I was spammed with scams on the ads dashboard that appeared out of nowhere.
Absolute waste of money.
We found this out (I was the first to recreate/prove it) when testing the COVID contact-tracing apps in NL; at the time, Google was logging the seeds to the main system log. That allowed anyone with access to said logs to build a real-time map of every Android user in the world who had the GAEN framework installed.
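A heavily simplified sketch of why that was dangerous. The log format below is invented, and the real GAEN scheme derives rotating broadcast identifiers from the seed (a derivation elided here by pretending the seed itself is what gets broadcast):

    // Step 1: anything with system-log access harvests the logged seeds.
    function harvestSeeds(logcatDump: string): Set<string> {
      const seeds = new Set<string>();
      for (const line of logcatDump.split("\n")) {
        const m = /ExposureNotification.*seed=([0-9a-fA-F]{32})/.exec(line);
        if (m) seeds.add(m[1].toLowerCase());
      }
      return seeds;
    }

    // Step 2: join against BLE observations gathered around a city to
    // place each seed-holder at a time and location.
    interface Sniff { identifier: string; place: string; timestamp: number }

    function mapMovements(seeds: Set<string>, sniffs: Sniff[]): Sniff[] {
      return sniffs.filter((s) => seeds.has(s.identifier.toLowerCase()));
    }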
EDIT:
Here's the press release in English covering the app shutdown:
https://nltimes.nl/2021/04/29/coronamelder-app-taken-offline...
There's also a paper detailing how Facebook's code infects systems with no Facebook installed, and a paper showing that Facebook had access to the logging and a whole range of very suspect permissions.
I tracked it down to an app he had called "7 Min Workouts", which was spamming the ads. Wild.
I'm especially curious if Google shares any of the blame. Was this a known issue and they assumed no one would actually exploit it, or a subtle bug that only just got caught? Either way it's a huge security vulnerability.
The app listens on localhost:xxyyzz when backgrounded. You open your browser and go to onesite.com and then differentsite.com. The ID you are known by on those two sites gets transmitted like this: the JS each site embeds for Facebook functionality/ads runs in your browser and requests an asset from your localhost, with <your ID on that website> in the arguments. The app receives the args and sends them off to HQ. That ties your signed-in account in the app to your activity on all the websites using this. And to be clear, FB Pixel calls are tagged with the 'event' you're performing, like "checkout", "just looking", "donate", etc. While I don't know for sure, I'd assume that the fact you're in Incognito Mode is just another field in the data report. Nothing would stop it.
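The app's side of that, sketched as a Node HTTP server for readability; in reality the listener lives inside the Android app, and the port and parameter names here are invented:

    import { createServer } from "node:http";

    const PORT = 12387; // invented; the real apps used their own fixed ports

    createServer((req, res) => {
      const url = new URL(req.url ?? "/", `http://localhost:${PORT}`);
      const siteCookieId = url.searchParams.get("cid"); // browser identity, e.g. a _fbp cookie
      const event = url.searchParams.get("event");      // "checkout", "just looking", ...

      // The app already knows the signed-in account; this request just
      // delivered the browser identity. Join them and phone home.
      reportToHQ({ appAccount: "signed-in-user", siteCookieId, event });

      res.writeHead(204).end(); // the requested "asset" never needs to exist
    }).listen(PORT, "127.0.0.1");

    function reportToHQ(payload: unknown): void {
      console.log("would send to HQ:", payload);
    }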
iOS does the same kind of thing. When you install an app, it allows deep linking of the vendor's URLs.
For example, if you install the Amazon app, any Amazon link opened on your phone can be intercepted by it (messages, mail, browser, etc.).
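The claiming mechanism on iOS is a JSON file the site itself hosts, roughly the following (shown here as a TypeScript literal, with placeholder IDs):

    // Roughly the shape of an apple-app-site-association file, which a
    // domain serves from /.well-known/apple-app-site-association to let
    // an app claim its links. Team and bundle IDs are placeholders.
    const appleAppSiteAssociation = {
      applinks: {
        apps: [],
        details: [
          {
            appID: "ABCDE12345.com.example.app", // <team ID>.<bundle ID>
            paths: ["*"], // claim every URL on this domain
          },
        ],
      },
    };

Note that the domain has to opt in by hosting this file, so in the Amazon example it's Amazon's own sites handing their links to the Amazon app.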
I think the same kinds of things can be done with location services. A store's app can do fine-grained Bluetooth location with iBeacons in their store.
I don't know the state of the art in cross-application tracking. I'm pretty sure SDKs added to multiple apps can do the same sort of thing.
At some point a number of years ago, I just stopped installing apps.
It’s like Meta giving every AirBnB host a free toaster as a gift — but secretly, the toaster has a hidden microphone and internet connection that listens in on every guest’s conversation, then beams that info back to Meta.
Covert web-to-app tracking via localhost on Android - https://news.ycombinator.com/item?id=44169115 - June 2025 (308 comments)
Meta pauses mobile port tracking tech on Android after researchers cry foul - https://news.ycombinator.com/item?id=44175940 - June 2025 (26 comments)