>Google says it's investigating the abuse
That's a bit ironic, considering how they're using any side channel they could lay their hands on (e.g. Wi-Fi AP names) to track everyone. Basically every large app vendor with multiple apps does something similar to circumvent OS restrictions as well.
Is affiliate marketing still allowed? Are influencers allowed to take payment? Can people be a spokesperson for a company? Can newspapers run commentary about businesses? Can companies pay to be vendors at a conference?
No matter where you end up drawing the line you’re just shifting the problems somewhere else. Look at the amount of money Meta and Google make, the incentive is just too large.
Personally, I think we should start by separating good old ads (the kind that existed before I was 15) from Internet "ads". The old ads were still somewhat targeted, but far less so than now. There is probably an agreeable line up to which advertising efforts can be perverted.
All we need to do is define the $thing and mandate that lawsuits can be effective.
No agency enforces that potato chips need to fill up 92% of the bag or whatever, or that McDonald's cannot show pictures of apple fritters with more apples than they actually come with (this happened).
You just incentivize a cottage industry of lawyers who can squeeze a profit out of suing peanut butter companies for labelling incorrectly or advertising dishonestly, and it sort of takes care of itself.
Some examples:
In most countries it’s illegal to ‘target minors’ and there are restrictions on what ads can run after school hours. Meta has always allowed age targeting down to 13 and has no time-of-day restrictions.
In parts of New Zealand you can’t advertise alcohol between 10PM and 9AM… unless you do it on Meta or Google.
Most countries have regulations about promoting casinos (or prohibit it outright), unless they’re digital casinos being promoted in digital ads.
Or just look at the deepfake finance and crypto ads that run on Meta and X. Meta requires 24 strikes against an advertiser before they pull them down; if a TV network ran just one ad like that, it would be a scandal.
Auditability is the biggest issue, IMO. If a TV ad runs, we can all see it at the same time and know it ran. That is simply impossible with digital ads, and even when Meta releases some tools for auditing, the caveat is that you still have to trust what they’re releasing. Similarly with data protection: there’s no way to truly audit what they’re doing unless you install government agencies in the companies to provide oversight, and I don’t see how you could really make that work.
Better moderation of crappy AI-generated image ads that are just scamming you would be nice as well.
While not technically a crime, it was a disgusting, unethical market manipulation move that never really got the public outrage it deserved.
Google execs’ initial support for it was also telling: leadership at Google must literally have thought they would find another way to stay as profitable as they are without third-party cookies. Put another way: Google leadership didn’t understand cookies as well as someone who’s taken a single undergrad web dev class. (Or they were lying all along, and always planned to “renege” on third-party cookie deprecation.)
It seems like damned-if-you-do, damned-if-you-don’t.
source: a developer who actually did have to do this (and did it, and now didn’t have to, but it’s done)
No one is trying to stop google from removing third party cookies. Google is just unwilling to remove them without introducing a new anticompetitive tracking tool to replace them.
That's simply not true. As I already mentioned, the CMA presented a legal challenge which you can read about online. Please review the history, as it's been going on for years now.
https://www.gov.uk/government/news/cma-to-have-key-oversight...
https://www.marketing-beat.co.uk/2024/02/06/cma-cookies-goog...
The CMA was concerned that, without regulatory oversight and scrutiny, Google’s alternatives could be developed and implemented in ways that impede competition in digital advertising markets. This would cause advertising spending to become even more concentrated on Google, harming consumers who ultimately pay for the cost of advertising. It would also undermine the ability of online publishers such as newspapers to generate revenue and continue to produce valuable content in the future.
The second link does contain the phrase “cannot proceed with third-party cookie deprecation”, but it’s simply obvious that it’s not about third party cookies per se. It’s all about Google’s (unnecessary, anticompetitive, anti-user, anti-privacy) replacements for third party cookies. … report on the implementation of Google’s Privacy Sandbox commitments, the regulator has said that although the tech giant is so far complying with its demands, there remain considerable areas of concerns …
…
That it must not “design, develop or use the Privacy Sandbox proposals in ways that reinforce the existing market position of its advertising products and services, including Google Ad Manager“
…
It must also address issues with specific Sandbox tools, such as how its Topics API targeting alternative can harm smaller tech businesses, and clarify who will govern the Topics API taxonomy.
...and so consumers can use services/products without having to fork over money.
People love the ad-model. Given the option to pay or use the "ad-supported" option, the ad-supported one wins 10 to 1. This means in many cases it doesn't even make sense to have a paid option, because the ad option is just so much more popular.
As bad as crypto is, with all the negative things attached to it, BAT was probably one of the smartest things to be invented. A browser token that automatically dispenses micropayments to websites you visit. Forget all the details to get snagged on, the basic premise is solid: Pay for what you use. You become the customer, not the advertisers.
Also a note about ad-blocking - it only makes the problem worse. It is not a "stick it to the man" protest. You protest things by boycotting them, or paying their competitors, not by using them without compensating them.
Things like "we'll hang on to the tokens of sites that don't use BAT yet for them until they join" gave negative vibes.
It all felt a little underbaked. I swing back to Brave once in a blue moon and then remember I've got at least $20's worth of BAT lost forever somewhere.
I'd love if there was another one that was totally open and just a browser extension away. But I do not think it would ever get off the ground because...
People love the ad model and hate paying for things.
If we must pay for the internet, give me an option to pay to use it where I see no ads and my privacy is preserved. Let me know what that cost is and I'll decide what I want to do.
Right now, the actual pricing is obscured so we just "accept" that the internet in its current form is how it needs to be.
This will depress ad revenue, as the people with the most money will be the ones who pay to remove ads. That will make fewer sites and less content viable.
Not every site needs to reach 1 billion people.
Plus Wikipedia seems to be doing ok occasionally asking for donations.
True leaders are the traditional examples who have shown success over the centuries without letting greed become a dominant force, recognizing, and moving in the opposite direction from, those driven by overblown self-interest, who naturally have little else to offer. It can be really disgraceful these days, but people don't seem to care any more.
That's one thing that made them average businessmen though.
Now if you're below-average I understand, but most companies' shareholders would be better off with a non-greedy CEO, who outperforms by steering away from underhanded low-class behavior instead.
Now if greed is the only thing on the table, and somebody like a CEO or decision-making executive hammers away using his only little tool with admirable perseverance long enough, it does seem to have a likelihood of bringing in money that would not have otherwise come in.
This can be leveraged too, by sometimes even greedier forces.
All you can do is laugh; those shareholders might be satisfied, but just imagine what an average person could do with that kind of resources. It would put the greedy cretins to shame on their own terms.
And if you could find an honest above-average CEO, woo hoo !
- Paying for services is very visible, whereas the payment for advertising is so indirect that you do not feel like you are paying for it.
- The payments for advertising are not uniformly distributed; people with more disposable income most likely pay a larger share of overall advertising costs. Subscriptions, on the other hand, cannot make distinctions based on income.
- People with disposable income are typically the most willing to pay for services. However, they are also the most interesting to advertisers. For this reason, payment in place of ads is often not an option at all, because it is not attractive to websites/services.
I think banning advertising would be good. But I think a first step towards that would be completely banning tracking. That would make advertisements less effective (and consequently less valuable) and would push services to look for other streams of income. Plus, it would solve the privacy problem of advertising.
It's a game. When a merchant signs up to an ad platform (or when the platform is in need of volume), they are given good ROI, and the merchant plays along and treats it as "marketing expenditure". Eventually the ROI dries up, i.e. the marketing has saturated, and the merchant starts counting it as a cost and passes it on to the customer. I don't know if this is actually done, but it's also trivial for an ad platform to force merchants to keep buying ads by making them feel it's important: when they reduce their ad volume, just boost the ROI and visibility for their competitors (a competitor can be detected purely by shared ad space; no need for any separate tagging). Heck, this is probably what whatever optimization algorithm they are running will end up suggesting, as it's a local minimum in feature space.
And yes, instead of banning ads, which would be too wide a legal net to be feasible, banning tracking is better. However, even this is complicated. For example, N websites can have legitimate uses for N browser features, but it turns out any M of the N features can be used to uniquely identify you. Oops. What can you even do about that, legally speaking? Don't say permissions; most people I know just click Allow on all of them.
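The M-of-N point is easy to sketch: each readout below is harmless on its own, but concatenating a handful of them yields a near-unique identifier. The feature names and values here are made up for illustration; real fingerprinting scripts use canvas rendering, audio stack quirks, font lists, and so on.

```python
import hashlib

# Hypothetical feature readouts; each one alone is innocuous.
features = {
    "screen": "1920x1080x24",
    "timezone": "Europe/Berlin",
    "languages": "en-US,de-DE",
    "canvas_hash": "a41f9c02",
    "fonts": "Arial,Calibri,Consolas",
}

def fingerprint(feats: dict) -> str:
    # Join the M values in a stable order and hash them; with enough
    # combined entropy the result is effectively a user ID.
    blob = "|".join(f"{k}={v}" for k, v in sorted(feats.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]
```

No single permission gates any of these readouts, which is why "just add a permission prompt" doesn't map cleanly onto the problem.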
People of course do pay for things all the time. It’s just the social media folks found a way to make a lot more money than people would otherwise pay, through advertising. And in this situation, through illegal advertising.
The best thing we can all do is refuse to work for Meta. If good engineers did that, there would be no Meta. Problem solved. But it seems many engineers prefer it this way.
Except for Spotify, News subscriptions, videogame subscriptions, video streaming services, duolingo, donations, gofundmes, piracy services!, clothing and food subscriptions! etc etc
People pay $10 for a new fortnite skin. You really pretending they won't pay for content?
People were willing to pay for stuff on the internet even when you could only do so by calling someone up and reading off your credit card number and just trusting a stranger.
Meanwhile, until cable television, the norm for "free" things like news was that you either paid, or you went to the library to read it for free.
Maybe people could visit libraries more again.
If it could pay for network TV there's no reason it can't pay for a website.
(You could still do audience-level tracking, e.g. "Facebook and NCIS are both for old people, so advertise cruises and geriatric health services on those properties")
How did you reach this conclusion? The main problem is that it works way better than traditional marketing medium.
It's the reason Google and Facebook are so massive; why would advertisers choose to pay them if it didn't work?
Because they believe it works and it's impossible to prove otherwise?
However, what you're saying isn't completely wrong. I've also seen user targeting become a self-fulfilling prophecy. What happens is that it's championed by a high-level executive as the panacea for improving revenue, implemented, and seen not to work. Now, as we all know, the C*O is Always Correct, so everything else around it is modified until the user-level targeting A/B test shows positive results. Usually this ends with the product being tortured into an unusable mess.
Card payments, especially with 3-D Secure flows that use iframes, are one of the biggest problems. This often leads to creating a new order several times, since allowing something and reloading loses the entire flow.
Captchas are also a massive pain, probably because they can't fingerprint as well as they normally would?
Life after having disabled uMatrix completely has been better.
I'm surprised they're allowed to listen on UDP ports; IIRC this requires special permissions?
> The Meta (Facebook) Pixel JavaScript, when loaded in an Android mobile web browser, transmits the first-party _fbp cookie using WebRTC to UDP ports 12580–12585 to any app on the device that is listening on those ports.
Borders on criminal behavior.
Apparently this was a European team of researchers, which would mean that Meta very likely breached the GDPR and ePrivacy Directive. Let's hope this gets very expensive for Meta.
Hopefully not too late to make it into the lawsuit. Assholes.
I sure hope there's a lawsuit. Over the last ten years, I've gotten over $2,000 in lawsuit settlement checks from Meta, alone.
I have a savings account at one of my banks that I use just for these settlement checks. Sometimes they're just $5. Sometimes they're a lot more. I think the most I ever got was around $500.
It's a little bit here, and a little bit there, but at the rate it's going, in another five years, I'll be able to buy a car with privacy violation money.
Just some guy working at Facebook was able to ship network code in not just one but two codebases without any senior or higher engineers in the loop?
That's the claim? If that was true (it's not) it would be even worse than high level executives being involved.
And people on HN dismiss those who choose to browse with Javascript disabled.
There's a reason that the Javascript toggle is listed under the Security tab on Safari.
I'm aware of two blockers for LAN intrusions from public internet domains, uBlock Origin has a filter list called "Block Outsider Intrusion into LAN" [0] under the "Privacy" filters, and there's a cool Firefox extension called Port Authority [1][2] that does pretty much the same thing yet more specifically targeted and without the exclusions allowed by the uBlock filterlist (stuff like Figma's use is allowed through, as you can see in [0]). I've contributed some to Port Authority, too :)
0: https://github.com/uBlockOrigin/uAssets/blob/master/filters/...
Now that the mechanism is known (and widely implemented), one could write an app to notify users about attempted tracking. All you need to do is to open the listed UDP ports and send a notification when UDP traffic comes in.
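A minimal sketch of that idea in Python (desktop-style; a real Android version would live in an app or VPN service). Binding the ports has a pleasant side effect: since only one process can hold a UDP port, the watcher also denies those ports to the tracking app.

```python
import socket

# Ports the article reports the Meta Pixel script sending _fbp cookies to.
WATCHED_PORTS = range(12580, 12586)

def open_watchers(host: str = "127.0.0.1") -> list:
    """Bind one UDP socket per watched port so we see (and deny) traffic."""
    socks = []
    for port in WATCHED_PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((host, port))
        socks.append(s)
    return socks

def wait_for_probe(sock, timeout=None):
    """Block until a datagram arrives; a real app would notify the user here."""
    sock.settimeout(timeout)
    payload, sender = sock.recvfrom(4096)
    return payload, sender
```

On Android the hard part isn't the sockets, it's staying alive in the background without being killed by the OS.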
For shits and giggles I was pondering whether it was possible to modify Android to hand out a different, temporary IPv6 address to every app, and to segment off any other interface that might be exposed, just because of shit like this (using CLAT or some fallback mechanism for IPv4 connectivity). I thought this stuff was just a theoretical problem, because it would be silly to be so blatant about tracking, but Facebook proves me wrong once again.
I hope EU regulators take note and start fining the websites that load these trackers without consent, but I suspect they won't have the capacity to do it.
Yes, but (AFAIK) not out of the box (unless one of the security focused ROMs already supports this). The kernel supports network namespaces and there's plenty of documentation available explaining how to make use of those. However I don't know if typical android ROMs ship with the necessary tooling.
Approximately, you'd just need to patch the logic where zygote sets the app's UID to also configure and switch to a network namespace.
In theory, all you need to do is have zygote constrain the app further with a network namespace, and run a CLAT daemon for legacy networks; in practice I'm not sure that approach works well with 200 apps that each need their IPs rotated regularly.
Plus, you'd need to reconfigure the sandbox when switching between WiFi/5G/ethernet. Not impossible to overcome, but not the weekend project I'd hoped it would be.
I've never tested network-namespace scalability on a mobile device, but I doubt a few hundred of them would break anything (famous last words).
In the primary namespace you will need to configure some very basic routing. You will also need a solution for assigning IP addresses. That solution needs to be able to rotate IP assignments when the external IP block changes. That's pretty standard DHCP stuff. On a desktop distro doing the equivalent with systemd-networkd is possible out of the box with only a handful of lines in a config file.
Honestly a lot of Docker network setups are much more complicated than this. The difficult part here is not the networking but rather patching the zygote logic and authoring a custom build of android that incorporates the changes.
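For a sense of scale, the per-app plumbing is only a handful of `ip` commands. A dry-run sketch below; the namespace/interface names, the UID, and the documentation prefix 2001:db8::/64 are all invented for illustration, and a patched zygote would do the equivalent via netlink rather than shelling out.

```python
def netns_commands(app_uid: int, addr_suffix: str) -> list:
    """Commands giving one app its own namespace, veth pair, and IPv6 address."""
    ns = f"app_{app_uid}"
    host_if = f"veth_{app_uid}"  # host-side end of the veth pair
    return [
        f"ip netns add {ns}",
        f"ip link add {host_if} type veth peer name eth0 netns {ns}",
        f"ip -n {ns} addr add 2001:db8::{addr_suffix}/64 dev eth0",
        f"ip -n {ns} link set eth0 up",
        f"ip -n {ns} route add default via 2001:db8::1",
    ]

for cmd in netns_commands(10123, "a1"):
    print(cmd)
```

Rotating an app's address is then just an `addr del`/`addr add` inside its namespace; as noted above, the hard part is wiring this into zygote and rebuilding it when the uplink changes.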
AFAIK this is on the Android roadmap and is one of the key reasons they don't want to support DHCPv6: they want each app to have its own IP.
The alternatives are not doing evil vs starving. They’re getting paid well for doing evil, or getting paid well for doing good or at least neutral.
As time went on, computers became more mainstream, and lots more people started using them as part of daily life. This didn't mean that the number of antiestablishment computer users or hackers went down—just that they were no longer nearly so high a percentage of the total number of computer users.
So the answer to "what happened to all the hackers fighting for personal freedom and privacy?" is kinda threefold:
1) They never went away. They're still here, at places like the EFF, fighting for our personal freedom and privacy. They're just much less noticeable in a world where everyone uses computers...and where many more of the prominent institutions actually know how to secure their networks.
2) They grew up. Captain Crunch, the famous phreaker, is 82 this year. Steve Wozniak is 74. And while, sure, some people reach that age and still maintain not merely a philosophy, but a practice, of activism, it's much harder to keep up, and even many of those whose principles do not change will shift to methods that stay more within the system (eg, supporting privacy legislation, or even running for office themselves).
3) They went to jail, were "scared straight", or died. The most prominent example of this group is, of course, Aaron Swartz, but many hacktivists will have had run-ins with the law, and of those many of them will have turned their back on the lifestyle to save themselves (even Captain Crunch was arrested and cooperated with the FBI).
But it's also unquestionably true that it's much easier to be a "hacker", in the sense we think of from the 1960s-80s, in a time and field where the hardware and software is simpler, more open, and less obfuscated. As such, I think it's probably not helpful to long for those bygone days—especially the "simpler" part—which we are clearly never getting back until and unless we make a breakthrough that is just as revolutionary as the transistor and the microchip were (and I'm skeptical as to whether that's possible, both in terms of what physics allow ever, and in terms of the shape of the corporate landscape now and for the foreseeable future). Honestly, a lot of the things that were possible back then, a lot of the incentive to get into hacking, was stuff that's actually hugely dangerous or invasive. Instead, I think it's better to focus on what we can do to improve the latter two parts of that equation: more open, less obfuscated.
Personally, I would say that the way toward that is pushing for, creating, and working on more open protocols and open standards, and insisting that those be used to enable more interoperability in place of proprietary formats and integration only with other software and hardware from the same company.
https://www.theregister.com/2022/02/15/missouri_html_hacking...
I'm not a lawyer, so my question is genuine.
On the FB side, I can see a malicious user potentially poisoning a target site visitor's ad profile, or even their social media algorithm, with crafted cookies. Fill their feed with diaper ads or something.
1. User logged into FB or IG app. The app runs in background, and listens for incoming traffic on specific ports.
2. User visits a website in the phone's browser, say something-embarassing.com, which happens to have a Meta Pixel embedded. From the article, the Meta Pixel is embedded on over 5.8 million websites. Even in Incognito mode, they will still get tracked.
3. Website might ask for user's consent depending on location. The article doesn't elaborate, presumably this is the cookie banner that many people automatically accept to get on with their browsing?
4. > The Meta Pixel script sends the _fbp cookie (containing browsing info) to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.
You won't see this in your browser's dev tools.
5. Through the logged-in app, Meta can now associate the "anonymous" browser activity with the logged-in user. The app relays _fbp info and user id info to Meta's servers.
Also noteworthy:
> This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users’ web activity.
> On or around May 17th, Meta Pixel added a new method to their script that sends the _fbp cookie using WebRTC TURN instead of STUN. The new TURN method avoids SDP Munging, which Chrome developers publicly announced to disable following our disclosure. As of June 2, 2025, we have not observed the Facebook or Instagram applications actively listening on these new ports.
How does that even work? What can GDPR cookie notices do that the typical tracker can't?
However, the locally hosted FB/Yandex listener receives all of these first-party cookies, from all parties, and the OP's implication is (I think) that these non-correlatable-by-consent first-party cookies can now be, or are being, used to track you across all sites that use them.
Allow Meta tracking to connect the Facebook or Instagram app on your device to associate visits to this website with your Meta account. Yes/No (With No selected as a default.)
I am pretty sure that this is a grave violation of the GDPR.
"No" doesn't even need to be selected as a default, as long as you don't use dark patterns. Making the user manually click yes or no is perfectly valid (as long as you don't make "yes" easier than "no", so if you add an "allow all" button there should be an equally prominent "deny all" button).
I think you can make the argument that it should be behind a permission prompt these days but it's difficult. What would the permission prompt actually say, in easy to understand layman's terms? "This web site would like to transfer data from your computer to another computer in a way that could potentially identify you"? How many users are going to be able to make an informed choice after reading that?
Most users probably will click "No" and this is a good choice.
> Most users probably will click "No"
Strong disagree. When I'm loading google.com is my computer not connecting to another computer? From a layman's perspective this is the basis of the internet doing what it does. Not to mention, the vast majority of users say yes to pretty much any permission prompt you put in front of them.
"Website wants to connect to another computer" basically describes all websites. Do you really expect the average user to understand the difference? The exploit is also non-trivial. SDP and TURN aren't privacy risks in and of themselves; they only pose risks when the server is set to localhost and a cooperating app is listening.
The thing that's happening here isn't really a problem with WebRTC. Compare this to having an app on your phone that listens on an arbitrary port and spits out a unique tracking ID to anything that connects. Does it matter if the connection is made using HTTP or HTTPS or WebRTC or something else? Not really. The actual problem is that you installed malware on your phone.
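The pattern being described is a few lines in any language. A hypothetical sketch (TCP here purely for brevity; the point is precisely that the transport doesn't matter):

```python
import socket
import threading
import uuid

# One stable per-install ID is all that's needed to link every local
# caller (any web page, any other app) to the same profile.
DEVICE_ID = uuid.uuid4().hex

def serve_tracking_id(port: int = 0) -> int:
    """Listen on localhost and hand DEVICE_ID to anything that connects.
    Returns the port actually bound (0 = let the OS pick)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen()

    def loop():
        while True:
            conn, _ = srv.accept()
            conn.sendall(DEVICE_ID.encode())
            conn.close()

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]
```

Every local caller gets the same ID back, which is the whole linkage: the clever WebRTC machinery in the article exists only because browsers don't let page JavaScript open a plain socket like this.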
If users don't understand, they click whatever. If the website really needs it to operate, it will explain why before requesting, just like apps do now.
Always aim for a little more knowledgeable users than you think they are.
And why enable it by default, why not disable by default?
Also, sibling comments say iOS is already asking for the permission, why not just copy it?
`media.peerconnection.enabled`.
On Cromite [1], a hardened Chromium fork, there is such a setting, both in the settings page and when you click the lock icon in the address bar.
[1]: https://cromite.org
I want to be able to configure this per web site, and a permission prompt is a better interface than having an allow/deny list hidden in settings.
If a feature can be used to track people, you have to flag it off or else you are just contributing to the tech Big Brother apparatus.
Let's be clear here. Meta/other sites are abusing the technology TURN/WebRTC for a purpose it was never intended for, way beyond the comfortable confines of innocent hackery, and we all know it.
That's asshole behavior, and worth naming, shaming, and ostracizing over.
The real issue is, why are we putting up with having these apps on our devices? Why do we have laws that prohibit you from e.g. using a third party app from a trusted party or with published source code in order to access the Facebook service, instead of the untrustworthy official app which is evidently actual malware?
Agree on your first point at a practical level, but from a normative standpoint it's unforgivable to cross those streams. When a service provider desperately wants to leak IP info for the marketability of an underlying dataset, and uses technical primitives completely unrelated to the task at hand to do it, you very clearly have the device doing something the end user doesn't want or intend. The problem is that FAANG have turned the concept of general computing on its head by making every bloody handset a playground for the programmer, with no easily grokkable interface for the user to curtail the worst behavior of technically savvy bad actors. A connection to a TURN server, or use of parts of the RTC stack, should explain to the user that they are about to engage programming intended for real-time communication, each time it happens, not just once at the beginning when most users would accept it and ignore it from then on.
Ten or so TURN-call notifications in a context where no synchronous RTC is involved should make it obvious that something nefarious is going on, and would actually give the user insight into what is running on the phone. That's something modern devs seem allergic to, because it would force them to confront the sketchiness of what they are implementing instead of being transparent and following the principle of least surprise.
Modern businesses, though, would crumble under such a model, because they want to hide as much of what they are doing as possible from their customer base, competitors, and regulators.
There are two main ones.
The first is the CFAA, which by its terms would turn those ToS violations into a serious felony, if violations of the ToS means your access is "unauthorized". Courts have been variously skeptical of that interpretation because of its obvious absurdity, but when it's megacorp vs. small business or open source project, you're often not even getting into court because the party trying to interoperate immediately folds. Especially when the penalties are that scary. It's also a worthless piece of legislation because the actual bad things people do after actual unauthorized access are all separately illegal, so the penalty for unauthorized access by itself should be no more than a minor misdemeanor, and then it makes no sense as a federal law because that sort of thing isn't worth a federal prosecutor's time. Which implies we should just get rid of it.
The other one, and this one gets you twice, is DMCA 1201. It's nominally about circumventing DRM but its actual purpose is that Hollywood wants to monopolize the playback devices, which is exactly the thing we're talking about. Someone wants to make an app where you can watch videos on any streaming service you subscribe to and make recommendations (but the recommendations might be to content on YouTube or another non-Hollywood service), or block ads etc. The content providers use the law to prevent this by sticking some DRM on the stream to make it illegal for a third party app to decrypt it. Facebook can do the same thing by claiming that other users' posts are "copyrighted works".
And then the same law is used by the phone platforms to lock users out of competing platforms and app stores. You want to make your competing phone platform and have it run existing Android apps, or use microG instead of Google Play, but now Netflix is broken and so is your bank app so normal people won't put up with that and the competition is thwarted. Then Facebook goes to the now-monopoly Google Play Store and has "unauthorized" third party Facebook readers removed.
These things should be illegal the other way around. Adversarial interoperability should be a right and thwarting it should be a crime, i.e. an antitrust violation.
> The problem is that FAANG have turned the concept of general computing on it's head by making every bloody handset a playground for the programmer with no easily grokkable interface to the user to curtail the worst behavior of technically savvy bad actors.
But how do you suppose that happened? Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
>Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
Besides every possible attempt being DOA because Google is intent on monopolizing the space with their ToS and OEM terms? There isn't a fork because it can't be Android if you do that sort of thing, and if you tried, it'd be you vs. Google. Never mind the bloody rat's nest of intentionally one-sided architecture decisions made to ensure the modern smartphone is first and foremost a consumption device instead of a usable and configurable tool, which includes things like regulations around the baseband processor, lawful-interception/MITM capability, and meddling, as you mentioned, in the name of DMCA 1201.
Though there's an even more subtle reason why, too, and it's the lack of accessible system developer documentation, capability to write custom firmware, and architecture documentation. It's all NDA locked IP, and completely blobbed.
The will is there amongst people to support things, but the legal power edifice has constructed intentional info asymmetries in order to keep the majority of the population under some semblance of controlled behavior through the shaping of the legal landscape and incentive structures.
Exactly. We have bad laws and therefore bad outcomes. To get better outcomes we need better laws.
But not for the user.
You can use a similar language for WebRTC.
Other P2P uses are very cool and interesting as well - abusing it for fingerprinting is just that, abusing a user-positive feature and twisting it for identification, just like a million other browser features.
But still, you could do the same for STUN, TURN, and SDP. Disallow localhost.
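A sketch of what "disallow localhost" could look like at the SDP level — dropping ICE candidate lines that point at loopback/private addresses. This is illustrative only (real mitigations belong inside the browser), and it treats non-IP candidates such as mDNS `.local` names as local too:

```python
import ipaddress
import re

# Matches the connection-address field of an "a=candidate:" SDP line:
# foundation, component, transport, priority, then the address and port.
CANDIDATE_RE = re.compile(r"^a=candidate:\S+ \d+ \S+ \d+ (\S+) \d+")

def strip_local_candidates(sdp: str) -> str:
    """Remove ICE candidate lines whose address is loopback, private,
    or link-local. Hostname candidates (e.g. mDNS *.local names) are
    dropped too, since they can also be abused to probe the LAN."""
    def is_local(addr: str) -> bool:
        try:
            ip = ipaddress.ip_address(addr)
        except ValueError:
            return True  # not a literal IP: treat hostnames as local
        return ip.is_loopback or ip.is_private or ip.is_link_local

    kept = []
    for line in sdp.splitlines():
        m = CANDIDATE_RE.match(line)
        if m and is_local(m.group(1)):
            continue  # drop candidates pointing into the local network
        kept.append(line)
    return "\n".join(kept)
```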
Depending on the country that you or your family lives in, this could be far worse than embarrassment.
I happened to be immune: I had disabled Background App Refresh in iOS settings. All app notifications still work, except WhatsApp :(
https://forums.macrumors.com/threads/any-reason-to-use-backg...
> company checks out
So a takeaway is to avoid having Facebook or Instagram apps on your phone. I'm happy to continue to not have them.
Any others? e.g. WhatsApp. Sadly, I find this one a necessary communication tool for family and business in certain countries.
Thank god that Microsoft and Google don't do this. Oh, wait... /s
Web apps talking to LAN resources is an attack vector that is surprisingly still left wide open by browsers these days. uBlock Origin has a filter list that prevents this, called "Block Outsider Intrusion into LAN" under the "Privacy" filters [1], but it isn't enabled on a fresh install; it has to be opted into explicitly. It also has some built-in exemptions (visible in [1]) for domains like `figma.com` or `pcsupport.lenovo.com`.
There are some semi-legitimate uses, like Discord using it to check if the app is installed by scanning some high-numbered ports (6463-6472), but mainly it's used for fingerprinting by malicious actors, as shown in the article.
Ebay for example uses port-scanning via a LexisNexis script for fingerprinting (they did in 2020 at least, unsure if they still do), allegedly for fraud prevention reasons [2].
I've contributed a bit to a cool Firefox extension called Port Authority [3][4] that's explicitly for blocking LAN-intruding web requests and shows the portscan attempts it blocks. You can get practically the same results from just the uBlock Origin filter list, but I find it interesting to see blocked attempts at a more granular level too.
That said, both uBlock and Port Authority use WebExtensions' `webRequest` [5] API for filtering HTTP[S]/WS[S] requests. I'm unsure as to how the arcane webRTC tricks mentioned specifically relate to requests exposed to this API; it's possible they might circumvent the reach of available WebExtensions blocking methods, which wouldn't be good.
0: https://news.ycombinator.com/item?id=44170099
1: https://github.com/uBlockOrigin/uAssets/blob/master/filters/...
2: https://nullsweep.com/why-is-this-website-port-scanning-me/
3: https://addons.mozilla.org/firefox/addon/port-authority
4: https://github.com/ACK-J/Port_Authority
5: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
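The core decision such a filter list has to make — is this request from a public page reaching into private address space? — can be sketched as a small classifier. A hypothetical helper, not uBlock Origin's actual implementation:

```python
import ipaddress
from urllib.parse import urlparse

# Intranet-style hostname suffixes treated as LAN targets (illustrative).
PRIVATE_HOST_SUFFIXES = (".local", ".internal", ".home.arpa")

def is_lan_target(url: str) -> bool:
    """Return True if the URL's host is loopback, RFC 1918/link-local,
    or an mDNS/intranet-style hostname -- i.e. a 'LAN intrusion' target
    when requested by a page served from the public internet."""
    host = urlparse(url).hostname or ""
    if host == "localhost" or host.endswith(PRIVATE_HOST_SUFFIXES):
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # ordinary public hostname
    return ip.is_loopback or ip.is_private or ip.is_link_local
```

A real blocker would run this check only when the requesting page's own origin is *not* local, which is why the filter list exempts domains like `figma.com` that have a legitimate localhost handshake.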
https://wicg.github.io/private-network-access/
It gained support from WebKit:
https://github.com/WebKit/standards-positions/issues/163
…and Mozilla:
https://github.com/mozilla/standards-positions/issues/143
…and it was trialled in Blink:
https://developer.chrome.com/blog/private-network-access-upd...
Unfortunately, it’s now on hold due to compatibility problems:
EDIT: Looks like it does mention integrating into the permissions system [0], I guess I missed that. Glad they covered that consideration, then!
0: https://wicg.github.io/private-network-access/#integration-p...
[0] https://groups.google.com/a/mozilla.org/g/dev-platform/c/B8o...
[1] https://groups.google.com/a/chromium.org/g/blink-dev/c/CDy8L...
What is so difficult about this?
0. Define 2 blocklists: one for local domains and one for local IP addresses
1. Add a per-origin permission next to the already existing camera, mic, midi, etc... Let's call it LocalNetworkAccess, set it false by default.
2. Add 2 checks in networking stack:
2a. Before DNS resolution, check the origin's LocalNetworkAccess permission. If false, check the URL domain against the domain blocklist and deny the request if it matches.
2b. Before the TCP or UDP connect, check the origin's LocalNetworkAccess permission. If false, check the remote IP address against the IP blocklist and deny the request if it matches.
3. If a request was denied, prompt the user to allow or disallow the LocalNetworkAccess permission for the origin, the same way the camera, mic, or midi permission is already prompted for.
This is a trivial solution; there is no way this takes more than 200-300 lines of code to implement in any browser engine. Why is it taking years?!
And then of course one can add browser-specific config options to customize the blocklists, but figure that out only after the imminent vulnerability has been fixed.
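The steps above, sketched in Python (a real engine would do this in C++ inside its network stack, but the control flow is the same; the blocklist contents here are just illustrative defaults):

```python
import ipaddress

# Step 0: two blocklists -- local domains and local IP ranges.
BLOCKED_DOMAINS = {"localhost"}
BLOCKED_DOMAIN_SUFFIXES = (".local", ".internal")
BLOCKED_NETWORKS = [
    ipaddress.ip_network(n)
    for n in ("127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12",
              "192.168.0.0/16", "169.254.0.0/16", "::1/128", "fc00::/7")
]

# Step 1: per-origin LocalNetworkAccess permission, false by default.
local_network_access: dict[str, bool] = {}

def check_before_dns(origin: str, domain: str) -> bool:
    """Step 2a: runs before DNS resolution. Returns True if allowed."""
    if local_network_access.get(origin, False):
        return True
    if domain in BLOCKED_DOMAINS or domain.endswith(BLOCKED_DOMAIN_SUFFIXES):
        return False  # step 3 would prompt the user here
    return True

def check_before_connect(origin: str, addr: str) -> bool:
    """Step 2b: runs before the TCP/UDP connect. Returns True if allowed."""
    if local_network_access.get(origin, False):
        return True
    ip = ipaddress.ip_address(addr)
    return not any(ip in net for net in BLOCKED_NETWORKS)
```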
I would not consider this a legitimate use. Websites have no business knowing what apps you have installed.
I guess a better example would be the automatic hardware detection Lenovo Support offers [0] by pinging a local app (with some clear confirmation dialogs first). Asus seems to do the same thing.
uBlock Origin has a fair few explicit exceptions made [1] for cases like those (and other reasons) in their filter list to avoid breakages (notably Intel domains, the official Judiciary of Germany [2] (???), `figma.com`, `foldingathome.org`, etc).
0: https://pcsupport.lenovo.com/
1: https://github.com/uBlockOrigin/uAssets/blob/master/filters/...
2: https://github.com/uBlockOrigin/uAssets/issues/23388 and https://www.bundesjustizamt.de/EN/Home/Home_node.html (it seems they're trying to talk to a local identity verification app, which I find quite funny)
That's the e-ID function of our personal ID cards (notably, NOT the passports). The user flow is:
1. a client (e.g. the Deutsche Rentenversicherung, Deutschland-ID, Bayern-ID, municipal authorities and a few private sector services as well) wishes to get cryptographically authenticated data about a person (name and address).
2. the web service redirects to Keycloak or another IDP solution
3. the IDP solution calls the localhost port with some details on what exactly is requested, what public key of the service is used, and a matching certificate signed by the Ministry of Interior.
4. The locally installed application ("AusweisApp") now opens and displays these details to the user. When the user wishes to proceed, they click a "proceed" button and are then prompted to either insert the ID card into an NFC reader attached to the computer, or to use a smartphone on the same network that also has the AusweisApp installed.
5. The ID card's chip verifies the certificate as well and asks for a PIN from the user
6. the user enters the PIN
7. the ID card chip now returns the data stored on it
8. the AusweisApp submits an encrypted payload back to the calling IDP
9. the IDP decrypts this data using its private key and redirects back to the actual application.
There is a bunch of cryptography additionally layered in the process that establishes a secure tunnel, but it's too complex to explain here.
In the end, it's a highly secure solution that ensures the ID card only responds with sensitive information when the right configuration and conditions are met - unlike, say, the Croatian ID card, which will go as far as delivering the picture on the card in digital form to anyone tapping your ID card with their phone. And that's also why it's impossible to implement in any other way - maaaaybe WebUSB, but you'd need to ship an entire PC/SC stack, and I'm not sure WebUSB allows claiming a USB device that already has a driver attached.
In addition, the ID card and the passport also contain an ICAO-compliant method of obtaining the data in the MRZ, but I haven't read through the specs enough to actually implement it.
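The localhost call in step 3 is the standardized part (BSI TR-03124): the eID client listens on 127.0.0.1:24727 and the browser is redirected to an activation URL carrying a `tcTokenURL` parameter that tells the client where to fetch the session parameters. A simplified sketch of building that URL (details like the TR-03130 eCard API exchange are omitted):

```python
from urllib.parse import urlencode

# Fixed local endpoint the eID client (e.g. AusweisApp) listens on,
# per BSI TR-03124.
EID_CLIENT_BASE = "http://127.0.0.1:24727/eID-Client"

def build_activation_url(tc_token_url: str) -> str:
    """Build the localhost URL the IDP redirects the browser to, so the
    locally running eID client picks up the authentication request. The
    TC token URL points the client at the session parameters to fetch."""
    return EID_CLIENT_BASE + "?" + urlencode({"tcTokenURL": tc_token_url})
```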
I think what you are thinking of are dns rebinding attacks.
Maybe it's time to invent a tax that starts at 0% and goes up 1-X% every time your hand is caught in the cookie jar. And add a corresponding website where you can clearly see all violations by company.
*: Meta Pixel script was last seen sending via HTTP in Oct 2024, but Facebook and Instagram apps still listen on this port today. They also listen on port 12388 for HTTP, but we have not found any script sending to 12388.
**: Meta Pixel script sends to these ports, but Meta apps do not listen on them (yet?). We speculate that this behavior could be due to slow/gradual app rollout.
So, could some other app send data to these ports with a fake message? I'm asking for a friend who likes to do things for science. Somebody also needs to come up with a way to peer-to-peer share advertiser tracking cookies.
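For science: yes, any local process can open a TCP connection to a port another app is listening on and write arbitrary bytes — whether the listener accepts them depends entirely on its own validation. A minimal sketch (the payload is made up; this is not the real protocol the Meta apps speak):

```python
import socket

def send_fake_payload(port: int, payload: bytes,
                      host: str = "127.0.0.1") -> bool:
    """Try to deliver an arbitrary payload to a locally listening port.
    Returns True if a TCP connection was accepted and the bytes were
    sent; nothing here guarantees the listener parses or trusts them."""
    try:
        with socket.create_connection((host, port), timeout=1) as sock:
            sock.sendall(payload)
        return True
    except OSError:
        return False  # nothing listening, or connection refused
```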
There are apps on iOS that act as shared drives that you can attach via WebDAV. This requires listening on a port for inbound WebDAV (HTTP) requests.
My last electron app did effectively the same thing. I took the hosted version of my app and bundled in electron for offline usage with the bundled app being just a normal web application started by electron.
[1] https://arstechnica.com/security/2025/06/meta-and-yandex-are...
[2] https://en.wikipedia.org/wiki/Pentium_III#Controversy_about_...
To be fair, it's not really limited to Russian apps. Many popular apps on the Play Store have it as well:
https://play.google.com/store/apps/details?id=org.telegram.m...
https://play.google.com/store/apps/details?id=com.microsoft....
https://play.google.com/store/apps/details?id=com.pinterest&...
Apps that keep themselves open by reporting that they're playing audio might be able to work around this, but it'd still be spotty, since users frequently play media, which would suspend those backgrounded apps and eventually bump them out of memory.
Google’s core business is built on tracking data, so they would be reluctant to sell, necessitating covert collection.
Quick test: if I serve on 8080 in the Userland app, it can be accessed from both profiles. So probably yes.
This means an infected app on your personal profile could exchange data with a site visited from a second profile.
The takeaway is that, for all intents and purposes, anything you did in a private session or secondary profile on an Android device with any Meta app installed was fully connected to your identity in that app for an unknown amount of time. And even with the tracking code deactivated, cookies may persist on those secondary profiles that still allow linking future activity.
> The takeaway is that for all intents and purposes, anything you did in a private session or secondary profile on an Android device with any Meta app installed, was fully connected to your identity
Definitely, and that's a huge problem. I just don't think Android business profiles are a particular concern here; leaking app state to random websites in any profile is the problem.
Or do Android "business profiles" also include browser sessions? Then this would be indeed a cross-compartment leak. I'm not too familiar with Android's compartment model; iOS unfortunately doesn't offer sandboxing between environments that way.
If they are trying to fingerprint the "private compartment" of a BYOD device, that seems roughly as bad as a non-corporate side doing the same.
I'm generally against BYOD programs. They're convenient but usually come from a place of allowing employees access to things without the willingness to take on the cost (both in corp devices and inconvenience of a second phone/tablet/whatever) to run them with a high level of assurance.
Much better in my opinion to use something like PagerDuty or text/push notifications to prompt folks to check a corp device if they have alerts/new emails/whatever.
E.g. a Jira ticket links to a post on how to do something concurrency related in Python.
I get your point, though, that maybe this is no worse than if they visit the site on the personal side.
However, I wouldn't trust our lack of imagination about how to exploit this enough to be happy about the security gap!
I believe that is typical.
My business profile has its own instance of Chrome, mostly used for internal and external sites that require corporate SSO or client certificates. Of course it could be used to browse anything.
My healthcare provider recently yanked the mobile version of their portal website, and forces users to download their app. Personally, I see the security angle, but still feel like it’s a punch in the face and so I just went back to paper billing and using a PC for healthcare stuff. More of this is coming, I suspect.
I assume that's why those companies try to actively degrade the experience on the website.
As for the second part: no, logging out of the apps would not necessarily be enough. The apps can still link all the different web sessions together for surveillance purposes, whether or not they are also linked to an account within the app. Meta famously maintains "shadow profiles" of data not (yet) associated with a user of the service. Plus, the apps can trivially remember who was last logged in.
Seriously, why do you think all of the questionable things humanity has built have been built? It's because it's all just part of the job, for somebody.
People working at ad-tech question the things they're building just like the people at McDonalds flipping a burger are questioning the cholesterol levels of their customers. They're not.
I wonder whether local ports opened in an isolated "work" Android profile are accessible from the main profile.
Not only are their websites painful, which discourages use; websites are also more sandboxed.
Or buy an iPhone or a Pixel.
[1]: https://f-droid.org/packages/com.MarcosDiez.shareviahttp
In the end something like GrapheneOS is the only good choice. Has all the security features of Pixel (which is similar to iPhone) and the tracking of neither.
Samsung has great tech, but I avoid because it's so bloated and abusive.
Even outside of Samsung a lot of "normal" apps come packed with Facebook crap because of Facebook's SDK (for the "log in with Facebook" button). There was that one incident where many/most iOS apps didn't open anymore because Meta fucked something up on their servers that crashed every app with the Facebook SDK inside of it (https://www.wired.com/story/facebook-sdk-ios-apps-spotify-ti...).
This isn't remotely true. It is pretty trivial for a well-resourced engineering organization to generate unique fingerprints of users with common browser features.
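As a toy illustration, hashing a handful of commonly exposed attributes already yields a stable per-user identifier. Real-world fingerprinting adds canvas, audio, font enumeration, and more; the attribute names below are purely illustrative:

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Hash a dict of browser-exposed attributes (user agent, screen
    size, timezone, language, ...) into a stable identifier. Each
    attribute adds entropy; enough of them together single out a user
    even without cookies."""
    canonical = json.dumps(attrs, sort_keys=True)  # order-independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```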
No response from Google. Being used by dozens of apps in the wild.
Edit: Original Research link: https://peabee.substack.com/p/everyone-knows-what-apps-you-u... (HN: https://news.ycombinator.com/item?id=43518866 , 482 comments)
It could be an idea to, you know, stop doing these things. Would be great to see another few $billion fine for this one.
I bet that most Americans would be OK with that: more privacy, more money for the state, and less for the greedy bastards.
The EU doesn’t have to be the cop of US technology; in fact it’s a bit pathetic to have another country police your industry.
Please don't reply "don't use that phone then", because it's the same all the way down to rotary telephones.
I still didn't consent to meta tracking me on my telephone. I understand the shadow profiles and tracking pixels and whatnot, but cmon.
Next year "actually meta was listening to conversations captured by android devices and using it to target ads"
Don't use a cellphone? Meta and Google track desktop web use. Don't use a computer? Okay, that's cool.
People install Instagram to look at photos and reels, not to help facebook track them better.
If I put a crypto-mining script in a game I don't get to claim "well they installed the app" when people complain. The victims thought they were installing a game, not a crypto-miner.
Here, the victims thought they were installing a photo sharing application, not a tracking relay.
If that's so, Google must knowingly be allowing this to happen and would be a co-conspirator. I mean, they surveil our devices as if they were their home. It's impossible that they're not aware.
[0] https://netzpolitik-org.translate.goog/2025/databroker-files...
There probably are some legitimate uses, but I'm straining to come up with them.
I think the Yandex one slips through because CORS does a naive check against just what's in the header, not what it resolves to?
There is a cert for it in the logs: https://crt.sh/?q=yandexmetrica.com
It even looks like some of the certs were issued by Yandex to Yandex. I guess their cert division will end up writing an incident report for this.
These sites both have the same potential for abuse.
And they push REALLY hard.
Unfortunately, even if they did have such rules, in this case, Meta is a too-big-to-deplatform tech company.
(Also, even if it wasn't Meta, sketchy behavior of tech might have the secret endorsement of IC and/or LE. So, making the sketchiness stop could be difficult, and also difficult to talk about.)
Further, Netguard plus Nebulo in non-VPN mode can stop unwanted connections to Meta servers
What are some reasons to use Firefox for Android instead of Firefox Nightly? The latter is available from the Aurora Store.
IME, Nightly has better add-on support. For example, uMatrix works.
I hope a judge gives them a warning.
127.0.0.0/8
::1/128
I'll update here with any issues.
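Those two ranges cover all of IPv4 and IPv6 loopback; a quick membership check with Python's `ipaddress`:

```python
import ipaddress

# The two loopback blocklist entries from above.
BLOCKLIST = [ipaddress.ip_network("127.0.0.0/8"),
             ipaddress.ip_network("::1/128")]

def is_blocked(addr: str) -> bool:
    """True if addr falls in either loopback range; a version mismatch
    (e.g. an IPv6 address against the IPv4 network) simply fails the
    containment test rather than raising."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKLIST)
```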
I've just opened my feed in FB and let's see what ads will be today:
Group Dull Men's Club - some garbage meme dump, neither interesting nor selling any product or service.
Women emigrants group - I'm male and in a different location.
Rundown - some NN generated slop about NN industry
Car crash meme group from a different location.
Math picture meme group
LOTR meme group
Photo group with a theme I'm not interested in
Repeat of the above
Another meme group
Roland-Garros page - I've never watched tennis or written about it. My profile follows pages of a different sport altogether. None of those show up in the ads.
Another fact/meme group
Repeat
Repeat
Another fact/meme group
Expat group from incorrect location
And so it goes on. Like, who pays for all this junk? Who coordinates targeting? Why do they waste both their capacity and mine on something useless to both me and Facebook? I would understand it if FB had ads for products/services, or something that loosely followed my likes. But what they have is a total 100% miss. It's mind-boggling.
kinda makes me nostalgic for simpler times—when tracking meant throwing 200 trackers into a <script> tag and hoping one stuck. now it’s full-on black ops.
i swear, i’m two updates away from running every browser in a docker container inside a faraday cage.
Native apps are doing that, not WebRTC. Just proves the web is safer, and all that BS about native apps being better is, well, BS.
(Other bad guys are around too)