A lot of the big ad networks right now rely heavily on geo-data instead, which is why you're probably seeing lots of ads in your feeds that seemingly cross between devices or relate to the interests of your spouse/friends/etc. They just look at the geo on your IP and literally flood the zone.
> They developed a measurement framework called FPTrace, which assesses fingerprinting-based user tracking by analyzing how ad systems respond to changes in browser fingerprints.
I'm curious to know a bit more about their methodology. It seems more likely to me that the ad networks are segmenting the ads based on device settings than individually targeting based on fingerprints. For example, someone running new software versions on new hardware might be lumped into a hotter buyer category. Also, simple things like time of day have huge impacts on ad bidding, so knowing how they controlled for that would be everything.
A fingerprint that changes only by the increase of a browser version isn’t dead; it’s stronger.
Only marginally, given that most browsers auto-update.
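The re-linking step the parent comment alludes to is trivial to implement: treat a fingerprint as "the same user" whenever everything matches except a version field that moved forward, which is exactly what an auto-update produces. A minimal sketch (field names are hypothetical; real trackers use many more attributes):

```python
def same_user(old_fp: dict, new_fp: dict) -> bool:
    """Heuristic re-link: identical fingerprints except the browser
    version moved forward (or stayed put) count as the same user."""
    stable_keys = set(old_fp) - {"version"}
    if any(old_fp[k] != new_fp.get(k) for k in stable_keys):
        return False
    # a version bump is exactly what an auto-update looks like
    return new_fp.get("version", -1) >= old_fp["version"]

old = {"version": 126, "timezone": "UTC+9",
       "screen": "1920x1080", "fonts_hash": "abc123"}
bumped = dict(old, version=127)

assert same_user(old, bumped)                        # auto-update: link survives
assert not same_user(old, dict(old, screen="1280x720"))  # different device
```

So a version bump doesn't reset the fingerprint; it just adds one predictable hop to follow.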
I've just looked at my fingerprint and I'm told I'm unique (my mum always said that ;-) ).
Unfortunately it's impossible, using https://www.amiunique.org/fingerprint, to determine what elements of the fingerprint, if changed, would make me significantly non-unique. But when I look down the list, 16/58 javascript attributes are red (the lowest category of similarity ratio); only two of those are overtly dependent on a version number, and another six refer to screen size/resolution. It seems to me that leaves quite a lot of information which isn't going to change all that quickly.
While the precise value may change with time I feel like saying "has a half-life of only a few days" tends to understate the effectiveness of this technique.
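The intuition behind those red attributes can be made concrete with surprisal: an attribute value shared by a fraction p of visitors contributes about -log2(p) bits of identifying information, and the bits add up across (roughly) independent attributes. A toy calculation with made-up similarity ratios (not amiunique's real data):

```python
import math

# hypothetical fraction of visitors sharing your value for each attribute
ratios = {
    "webgl_renderer": 0.0074,  # a rare GPU string
    "fonts": 0.002,            # an unusual installed-font set
    "timezone": 0.30,
    "screen": 0.05,
}

# surprisal in bits: -log2(p) per attribute, summed
bits = sum(-math.log2(p) for p in ratios.values())
# ~22 bits distinguishes one browser among ~4 million (2^22),
# assuming (unrealistically) that the attributes are independent
print(round(bits, 1))
```

Even a handful of rare attributes gets you most of the way to uniqueness, which is why a few slowly-changing red entries matter more than a version number that churns monthly.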
But then there's other things that don't make any sense. How is "NVIDIA Corporation" only 0.74% for "WebGL Vendor"? Why does navigator.hardwareConcurrency even exist?
I disagree. Going through the list, the following attributes are basically 100% tied to the browser or browser version, because nobody is going to change them:
* User agent
* Accept
* Content encoding
* Upgrade Insecure Requests
* Platform
* Cookies enabled
* Navigator properties
* BuildID
* Product
* Product sub
* Vendor
* Vendor sub
* Java enabled
* List of plugins (note that plugins were deprecated by major browsers years ago)
* Do Not Track (DNT has been deprecated in favor of GPC, and if you want to stay anonymous you should leave it as the default)
* Audio formats
* Audio context
* Frequency analyser
* Audio data
* Video formats
* Media devices
The following are very correlated to your geo IP, so unless you're pretending to be a Mongolian with a US geo IP, they reveal very little:
* Content language
* Timezone
These are actually valuable for fingerprinting, but most of these basically boil down to "what device you're using". If you're using an iPhone 16 running iOS 18.5, chances are most of the device related attributes will be the same as everyone else with an iPhone 16 on iOS 18.5.
* Canvas
* List of fonts (JS)
* Use of Adblock
* Hardware concurrency
* Device memory
* WebGL Vendor
* WebGL Renderer
* WebGL Data
* WebGL Parameters
* Keyboard layout
These are basically screen dimensions but repeated several times:
* Screen width
* Screen height
* Screen depth
* Screen available top
* Screen available left
* Screen available height
* Screen available width
* Screen left
* Screen top
These are non-issues as long as you don't touch such settings, and are reset if you clear browsing data.
* Permissions
* Use of local storage
* Use of session storage
* Use of IndexedDB
These basically boil down to "whether you're using a phone, laptop, or desktop"
* Accelerometer
* Gyroscope
* Proximity sensor
* Battery
* Connection
The last few seem related to Flash, but since that was deprecated years ago they're non-issues.
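However these attributes are gathered, they typically get serialized and hashed into a single identifier, which also illustrates the point above: two identical devices produce identical fingerprints. A minimal sketch (attribute names and values are illustrative):

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    # sort keys so the same attribute set always hashes identically
    blob = json.dumps(attrs, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

device_a = {"canvas": "deadbeef", "webgl_renderer": "Apple GPU",
            "hw_concurrency": 6, "device_memory": 8}
device_b = dict(device_a)  # same phone model, same OS version

# identical devices collide: the fingerprint identifies a device
# class, not a person
assert fingerprint(device_a) == fingerprint(device_b)
```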
Suppose a fingerprinting site used (user agent, timezone, user language, screen resolution) as a uniqueness key for its fingerprints, and those were the only fingerprintable attributes. The user agent changes often, basically every month for Firefox and Chrome, so the version information is basically garbage. If you had two Firefox users visit the site two months apart, but with the same timezone, language, and screen size, then for all intents and purposes they're indistinguishable. However most fingerprinting sites will happily say "you're unique out of 1 million visitors!".
To make this even worse, people will inevitably revisit these sites using "fingerprint blocking" extensions, which randomize various attributes. The fingerprinting sites aren't very sophisticated and can't tell when attributes are being faked, so they'll record each visit as a new visitor, which has the effect of bumping the denominator even more. Instead of saying you're unique among 1 million users, it'll say you're unique among 10 million users, but that's a lie, because 9 million of those devices never existed.
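The denominator inflation is easy to simulate. A toy model of a naive "you're unique among N visitors" counter, where one user with a randomizing extension mints a phantom "device" on every visit (all numbers invented):

```python
import random

random.seed(0)
seen = {}         # fingerprint tuple -> visit count
total_visits = 0

def visit(fp):
    global total_visits
    total_visits += 1
    seen[fp] = seen.get(fp, 0) + 1

# 1,000 real devices spread over 50 common screen configurations
for i in range(1000):
    visit(("Firefox/127", "UTC+0", "en-US", f"1920x1080-{i % 50}"))

# one user with a randomizing extension, revisiting 100 times:
# every visit looks like a brand-new device
for _ in range(100):
    visit(("Firefox/127", "UTC+0", "en-US", str(random.random())))

unique = sum(1 for count in seen.values() if count == 1)
# every "unique visitor" here is a phantom created by the randomizer
print(total_visits, unique)
```

All 100 "unique" fingerprints in this run are the same person, yet the site's counter reports them as 100 distinct visitors.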
Very much this. For example, according to that amiunique.org link, I am literally the only person on the planet who has their browser set to Japanese and that alone makes me unique.
I don't follow. Consider hardware interrupts and their handling delays, which depend on, say, the combination of apps installed, the exact GPU driver version, etc.
An occasional update could change the relevant timings, but would be unlikely to change all timing distributions (since perhaps the GPU driver wasn't updated, or some other app wasn't).
There's zero chance that apps on iOS and Android have access to "hardware interrupts" (whatever that means), because both platforms are too sandboxed. Moreover, timing resolution in JavaScript was nerfed years ago because of fears of Spectre attacks.
>the exact gpu driver version, etc ...
If you're just rendering simple polygons, it's highly implausible that timings would change between driver versions. You might be able to tell driver versions apart if you spent hundreds/thousands of man-hours reverse engineering each driver version for quirks to test against, but I doubt they're pouring that much effort into this.
How does this work in today's age, where ISPs normally have at least one level of NAT with IPv4? And given that IPv6 with prefix delegation is still far away, shouldn't this continue to be very imprecise?
I don't think that's generally true for home DSL/cable/fiber service. I've only seen it on mobile internet.
I don't see them and nor does my spouse. Ads aren't allowed in my house (to mangle the words of a famous adtech company).
True that. We use cookies + fingerprints to monitor for license compliance (i.e. ensure users are not id/password sharing). Sometimes we can use a fingerprint to recover a deleted cookie, but not all that often. What would really help is a fingerprint transition matrix, so we could make some probabilistic guesses.
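A "fingerprint transition matrix" like the parent describes could be learned from fingerprint pairs observed within known cookie-linked sessions: per attribute, estimate P(new value | old value), then score a candidate match by the product. A sketch with invented probabilities (nothing here is real data):

```python
# Per-attribute transition probabilities P(new | old), which in practice
# would be estimated from cookie-linked history. All numbers invented.
TRANSITIONS = {
    # a version bump to old+1 is the common case (auto-update)
    "version":  lambda old, new: 0.90 if new == old + 1
                                 else 0.08 if new == old else 0.001,
    "timezone": lambda old, new: 0.99 if new == old else 0.01,
    "screen":   lambda old, new: 0.95 if new == old else 0.05,
}

def match_likelihood(old_fp: dict, new_fp: dict) -> float:
    """Probability-ish score that new_fp evolved from old_fp."""
    score = 1.0
    for attr, prob in TRANSITIONS.items():
        score *= prob(old_fp[attr], new_fp[attr])
    return score

old      = {"version": 126, "timezone": "UTC+1", "screen": "2560x1440"}
bumped   = {"version": 127, "timezone": "UTC+1", "screen": "2560x1440"}
stranger = {"version": 126, "timezone": "UTC-5", "screen": "1366x768"}

# the auto-updated browser scores far higher than an unrelated visitor
assert match_likelihood(old, bumped) > match_likelihood(old, stranger)
```

With a threshold on that score you'd get exactly the probabilistic cookie-recovery guess described above.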
Huh? In 2025?? Fingerprinting has been around and actively used to track users for probably at least 20 years.
Fingerprintjs [1] is a well known one that gets a lot of use. And if you check EasyPrivacy, you'll see the rules to block it [2] have been in place for a long time.
[1] https://github.com/fingerprintjs/fingerprintjs [2] https://github.com/easylist/easylist/blob/132813613d04b7228c...
https://www.obsessivefacts.com/images/blog/2020-04-04-the-ja...
https://dl.acm.org/doi/10.1109/SP.2013.43 Nick Nikiforakis, Alexandros Kapravelos, Wouter Joosen, Christopher Kruegel, Frank Piessens, and Giovanni Vigna. 2013. Cookieless Monster: Exploring the Ecosystem of Web-Based Device Fingerprinting. In Proceedings of the 2013 IEEE Symposium on Security and Privacy (SP ’13).
> your browser shares a surprising amount of information, like your screen resolution, time zone, device model and more. When combined, these details create a “fingerprint” that’s often unique to your browser. Unlike cookies — which users can delete or block — fingerprinting is much harder to detect or prevent.
Ironically, the more fine tuned and hardened your device, OS, and browser are for security and privacy, the worse your fingerprint liability becomes.
More idle thoughts - it's strange and disappointing that in the vast space and history of FOSS tools, a proper open source browser never took off. I suppose monopolizing from the start was too lucrative to let it be free. Yet there really is little recourse for privacy enthusiasts. I've entertained the idea of using my own scraper, so I can access the web offline, though it seems like more trouble than it's worth.
What makes you disqualify Firefox from being a "proper open source browser"?
> Between mid-December 2009 and February 2010, Firefox 3.5 was the most popular browser (when counting individual browser versions) according to StatCounter, and as of February 2010 was one of the top 3 browser versions according to Net Applications. Both milestones involved passing Internet Explorer 7, which previously held the No. 1 and No. 3 spots in popularity according to StatCounter and Net Applications, respectively - https://en.wikipedia.org/wiki/Firefox_3.5
Then Chrome appeared and flattened both IE and Firefox.
There are 5 billion people on the internet. 5% of that is 250 million.
Some companies would kill for user numbers like that. Hell, some would slaughter entire villages.
In reality people espouse this opinion then continue using Chrome or Chromium browsers.
> Yet there really is little recourse for privacy enthusiasts
That's certainly not true. Unless Red Hat, MongoDB, Chef, etc. are not open source.
While I love to believe that the FOSS world is an anarchist utopia that believes in wellbeing for all, I think there are plenty of profit driven people there. They just don't sell access to the code/software.
- June 2024. Mozilla acquires Anonym, an ad metrics firm.
- July 2024. Mozilla adds Privacy-Preserving Attribution (PPA), feature is enabled by default. Developed in cooperation with Meta (Facebook).
- Feb 2025. Mozilla updates its Privacy FAQ and TOS. "does not sell data about you." becomes "... in the way that most people think about it".
1. You could (however, I doubt the effectiveness) use something like brave which tries to randomize your fingerprint.
2. You could "blend in with the crowd" and use tor.
That's... not accurate at all. Firefox was extremely popular at one point, and completely ate the lunch of everything else out there. (And then Google used anticompetitive practices to squash it, but that came later.)
> Prior studies only measured whether fingerprinting-related scripts are being run on the websites but that in itself does not necessarily mean that fingerprinting is being used for the privacy-invasive purpose of online tracking because fingerprinting might be deployed for the defensive purposes of bot/fraud detection and user authentication. [...] a framework to assess fingerprinting-based user tracking by analyzing ad changes from browser fingerprinting adjustments - https://dl.acm.org/doi/10.1145/3696410.3714548
Unfortunately I don't have access to the paper myself, so not sure what details they share beyond that.
Given how websites are built these days, if you just turn javascript off, half of them, if not more, will become unusable.
> Given how websites are built these days, if you just turn javascript off, half of them, if not more, will become unusable.
Basically any webapp with any amount of processing being done on the device becomes unusable if JS is disabled. Photopea's a good example of this.
I think Privacy Badger may also do it.
Edit: I think I misunderstood you, you’re looking for something that adds changing noise to the viewport size. Letterboxing isn’t that, but it is another, arguably better, approach to reducing the same fingerprinting vector.
There really is no way to combat fingerprinting, other than using Tor on the "safest" mode (which disables javascript and some other stuff).
otherwise, you're fingerprintable.
also, check out https://demo.fingerprint.com/playground
they're tops in fingerprinting-as-a-service AFAIK. Meta and Google are probably the only ones better.
The rest is just measuring the differences between "doing stuff and seeing what happens". For example if I render a box with some text and many different "font-family: [..]" then the size will differ per platform depending on what fonts you have installed, and you can measure that.
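The font-probing trick boils down to: render the same string with a candidate `font-family` falling back to a generic one, and if the measured width differs from the baseline, the candidate font is installed. The measurement itself needs a browser, so this sketch fakes the measured widths (the `measured` dict and all numbers are hypothetical):

```python
# Width of the probe string rendered in the generic fallback font alone
BASELINE_WIDTH = 410

# Hypothetical measurements for "font-family: '<candidate>', monospace"
measured = {
    "Helvetica Neue": 398,  # differs from baseline -> candidate rendered -> installed
    "Comic Sans MS": 410,   # identical -> fallback was used -> not installed
    "Noto Sans CJK": 455,
}

installed = sorted(f for f, width in measured.items()
                   if width != BASELINE_WIDTH)
print(installed)  # ['Helvetica Neue', 'Noto Sans CJK']
```

Repeat over a few hundred candidate families and the resulting installed-font set is one of the higher-entropy attributes in the fingerprint.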
From the article, "your screen resolution, time zone, device model and more" are shared. Why? Why does a website need to know these things? I don't get it. My own device of course needs to know, but why does the website that's sending me HTML and CSS and Javascript need to know?
> if I render a box with some text and many different "font-family: [..]" then the size will differ per platform depending on what fonts you have installed, and you can measure that.
Why do you need to measure this? The whole point of HTML and CSS was supposed to be to let the user agent render the site in whatever way is best for the user. The website doesn't know what that is; the user does.
Partly because Mozilla upper leadership hasn't been sufficiently aligned with privacy, security, nor liberty. And when they try, it's like a random techbro who latches onto a marketing angle, but doesn't really know what they're doing, and might still not care beyond marketing. And would maybe rather have the same title at Big Tech, doing the exploiting.
Also, no matter how misaligned or disingenuous a commercial ambassador to a W3C meeting was, Tim Berners-Lee is nice, and would never confront someone, on lunch break, in a dimly-lit parking lot, and say "I will end you".
The referer field has had the path removed or even dropped outright for some browsers.
Of course I know that in practice websites have been modifying their behavior based on the user agent string for years. But at least that information is supposed to be shared per the specs.
What I don't understand is why browsers are sharing lots of other information beyond the user agent string.
That's why many companies tried to get you into their mobile apps.