If this works it will be good for everyone. The main issue with today's internet is the perverse incentive to get views or "engagement" so you can sell ad space. The ads are the goal, not the message.
Honestly a number of really really significant societal problems have their roots in surveillance capitalism
There’s no way the advertising industry giants will let it happen. But the thought alone clearly illustrates the damaging effects of advertising.
Press release from Belgian Data Protection Authority:
https://www.dataprotectionauthority.be/citizen/the-market-co...
IAB response post:
https://iabeurope.eu/belgian-market-court-confirms-limited-r...
IAB was on the hook for the dreadful cookie "consent" popups that ruined the web (no, it wasn't GDPR that ruined it, it was a very deliberate action by "industry groups" like IAB).
The Market Court annulled the previous decision only on procedural grounds, while agreeing that IAB is responsible and keeping the 250,000 EUR fine in place.
Too bad. I wish Market Court would've burned IAB to the ground, salted the earth and scattered the ashes.
Does anyone know what the consequences are? I have no idea exactly what it is that applies immediately.
I would guess that, starting today, Google and others should stop advertising the way they currently do, since it's illegal. I doubt it's that simple, and even if it were, I'm sure they will not simply stop. So what happens now?
As a result, data collected through IAB about European customers was collected unlawfully, and third parties must delete that data. IAB also can't smuggle consent like this anymore, and needs to pay a fine that was handed down a few years ago.
The legal publication can be found here (translated into various languages, though I believe the original may have been Dutch or French as it was the Belgian DPA that started the suit): https://curia.europa.eu/juris/documents.jsf?num=C-604/22 and here https://www.dataprotectionauthority.be/the-market-court-rule...
I very much doubt ad companies will actually delete the illegally obtained data, but for IAB and other companies in the cyberstalking industry this can be a problem, because they need to actually comply with the law.
I guess they'll either try to fight it in court somehow or find a loophole to abuse. Or y'know... just ignore the ruling as long as possible.
What more would be needed? Does the GDPR need to be amended?
> Dr Johnny Ryan said "Today's court's decision shows that the consent system used by Google, Amazon, X, Microsoft, deceives hundreds of millions of Europeans. The tech industry has sought to hide its vast data breach behind sham consent popups. Tech companies turned the GDPR into a daily nuisance rather than a shield for people."
I feel this so often gets lost in the conversation. A huge number of people in communities like this one will loudly point out how annoying consent banners are, but never give any thought to why so many websites feel that, just because you want to read a single article, they are now entitled to sell your information to hundreds or even thousands of different data brokers - and to how normalised this has become, covering almost every bit of content I consume now.
The original purpose of the GDPR was clearly to try and put an end to this kind of thing while still leaving cutouts for legitimate purposes with informed consent.
I’m so glad to see them come at this from a new angle entirely now to just firmly say that this surveillance capitalism bullshit is illegal and you can’t cookie banner your way out of it as some kind of legal protection.
Good, that makes me extremely happy as an EU resident and I wholeheartedly support whatever steps you need to take in order to enforce this. There’s no reason at this point to continue playing nice with US spyware companies masquerading as “data brokers”, let them deal with the mess they made but we don’t need it here.
Don't forget the "legitimate interest", where somehow 635 ad companies absolutely must have my data to visit a single website...
It's a win for advertisers. The court says the logic holds, but the advertisers will not be fined and will not have to follow the 21/2022 decision.
That's common in European jurisdictions. We tend to operate on a "first strike is free" principle, especially in contested or purposefully-left-unclear legal environments. Only when the case law is clear, when it can be shown that a law was intentionally exploited or broken, or when it's a repeat offender do we bring the hammer down.
/s
Our political and legal system prefers self-regulation first; if that doesn't work then regulation is introduced, the first offenders get a slap on the wrist to clarify for everyone where the courts draw the line in interpreting the law, and only then do the fines follow.
I can assure you that if the procedure is followed the way it should be you will be fined for any kind of violation of the GDPR no matter how small you are.
I don’t know where you are getting this “you can violate the law for free the first time” from. I know HN likes to defend the EU any way it can (not because they love the EU but because they hate the US because they dared to choose a leader they don’t like), but this is absolute bs.
That depends! If it is an innocent violation or the offender is a small business, no court will hand down the top end of the fines. Only if you are someone like, say, Facebook and intentionally piss off the EU will you be in the deeper end of trouble.
> I know HN likes to defend the EU any way it can (not because they love the EU but because they hate the US because they dared to choose a leader they don’t like), but this is absolute bs.
European HNers tend to defend the EU not because of their current leadership, we already did this during the Biden and Obama terms. We defend Europe because we believe that a lot of what the US does (both its relevant economic players and its politicians) is utterly counterproductive, if not outright harmful.
So you admit you will be fined? Good we’re on the same page.
Hence, it's time for hard sanctions, even though procedural issues made for this fine-less ruling now. Expect more fines to roll out if the ad industry doesn't get it now that playtime is over.
It's not a pure win.
But big tech can handle a few government penalties every decade. It even creates moat - artificial barriers to market entry. The multiplicity of penalties is insurmountable for new market entrants, but pocket change for the established ones. For example, the UK Online Safety Act is putting all the small social media sites out of business in the UK, but it won't change moderation standards at Facebook. Ergo, it has become Meta's moat. "If a fine is set for a crime, then it's only a crime for poor people".
Tech is full of clever and fast people who run circles around slow-moving government bureaucracies (even judicial). These courts need to resolve these cases every week. If it's 1 week for first-instance, 1 week for appeals, that's the pace that would stop big tech. Twenty-seven fines with a bite a year would have the intended effect.
But we're talking about a "landmark" GDPR win in this thread that took about 5 years. And the fines so far are less than 500 euros per data collector (250k euro fine / 600+ companies in IAB). It will not even warrant a footnote in GAAP financial statements at the end of the year for these companies; they'll just put it in operating expenses (along with the 1,500 euro office coffee machine, 3x more expensive than the privacy violations). A small blogger collecting analytics data incorrectly may not have much to eat in the month they get fined 500 euros (not that they will have had much to eat in the months of expensive court proceedings), but of course, they also risk the full extent of the penalties.
Use limited data to select advertising
Consent (91 vendors)
Legitimate interest (41 vendors)
Measure advertising performance
Consent (97 vendors)
Legitimate interest (58 vendors)
Which shouldn't just be fineable but jailable. I understood as a SWE that the perfect solutions we often conjure never work as expected in the real world, because we do not understand basic human nature or how society as it exists today works, including its many, many perverse incentives.
But is 250k euros an appropriate fine for the personally identifiable information that’s been collected and associated with behavioural metrics, political preferences, confidential health data, and other private data points by the 600+ companies that make up IAB and their partners?
This is less than 500 euros per company. They probably pay more each month to host the illegally collected data.
And they probably have the data for millions of EU citizens. Maybe a billion+ profiles worldwide. Granted, the numbers are pulled out of thin air, but what’s a reasonable estimate if not that?
Unless I’m misunderstanding…
Also, it sends a signal to wannabe competitors to this company that there are laws and there are consequences for breaking those.
And of course given that these companies have money, there are going to be lawyers paying attention to see if they can get at that money in some way. Germany is almost as bad on that front as California. Lots of enterprising lawyers here. So, one successful court case can trigger many more once the precedent is set.
Specifically, is tracking inside of a single app/property acceptable?
So much mobile tracking is added due to the lack of real HTTPS links (called deferred deep links on mobile), just to know whether a user coming from link X did or did not open the app.
Happy to chat with people opposed or pro, feel free to reach out for a longer discussion.
1.) use custom product pages. Users need to have a device with iOS 18, but if they do you can now assign a URL to a custom product page, and the app will receive this URL on first launch. This only works for a small number of static links because you can't dynamically create custom product pages. Good for things like influencer campaigns if you have a small number of influencers (e.g. a YouTube channel you support), and since iOS 18 adoption is high enough now, this has very recently become a viable method.
2.) use an App Clip. I do this for referral links in my app. The user launches the App Clip, and because the App Clip can receive an invocation URL, you can store that in a shared group container - that's a shared data space between your App Clip and your app. Most users don't know what an App Clip is, so to avoid confusion (users assuming they already downloaded the app when in fact they didn't), I recommend just making it a single page with a download button. You can try this flow here (it's a referral link for my app); make sure to open the link on an iOS device, otherwise you will see a fallback website telling you to install the app first: https://tape.it/user_referral/1
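For anyone curious, the App Clip side can be as small as this rough sketch (not my exact code; the app-group identifier and key name are placeholders):

```swift
import Foundation
import SwiftUI

@main
struct ReferralClip: App {
    var body: some Scene {
        WindowGroup {
            Text("Get the full app")
                // The system hands the App Clip its invocation URL as a
                // web-browsing user activity.
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    guard let url = activity.webpageURL else { return }
                    // Write it into the shared app-group container; the full app
                    // reads this key on first launch to attribute the install to the link.
                    UserDefaults(suiteName: "group.com.example.shared")?
                        .set(url.absoluteString, forKey: "invocationURL")
                }
        }
    }
}
```

The full app then just reads "invocationURL" from the same UserDefaults suite on its first launch and clears it afterwards.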
No.
Tracking is explicitly not permitted unless you receive informed consent from the user.
If you have any kind of unique identifier for the user (UUID, etc) and do not ask for consent before processing their data (tracking them), then this is a clear breach of privacy law.
If you get explicit consent (that means the user understands what they're consenting to) before you process their data (so no setting up identifiers and then showing a popup), then you're in the clear.
If you put unique identifiers in the link the user clicks so you can see if the user opened your app, then you need to ask for consent before generating the link.
And of course, apps/services should all function if the user doesn't provide consent. "Give consent to enter" is explicitly illegal.
Theoretically, you could build something like this, but it's not what advertisers want, because they want to track their users without interrupting the conversion flow with a yes/no popup "do you want company Y to know you installed app X because of them".
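To make that concrete, here's a minimal sketch (names are illustrative, not from any particular SDK) of what "no identifiers before consent" looks like in practice: the ID only exists after an explicit opt-in, and everything keeps working without it.

```swift
import Foundation

enum Analytics {
    private static let consentKey = "analyticsConsentGranted"
    private static let idKey = "analyticsUserID"

    static func setConsent(granted: Bool) {
        UserDefaults.standard.set(granted, forKey: consentKey)
        if !granted {
            // Refused or withdrawn: make sure no identifier lingers.
            UserDefaults.standard.removeObject(forKey: idKey)
        }
    }

    static func track(_ event: String) {
        // No consent: drop the event. The app keeps working regardless.
        guard UserDefaults.standard.bool(forKey: consentKey) else { return }
        // Only now is a pseudonymous identifier created at all.
        let id = UserDefaults.standard.string(forKey: idKey) ?? {
            let fresh = UUID().uuidString
            UserDefaults.standard.set(fresh, forKey: idKey)
            return fresh
        }()
        print("would send \(event) for \(id)") // stand-in for the network call
    }
}
```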
That was also the interpretation I got from this, but it's hard for me to understand how this is expected to work for any analytics unless the idea is just that every app has a consent popup to use in the EU?
Let's say you wanted to avoid the consent popup and stay within EU law. Wouldn't any analytics you collect from your app be meaningless to the point of absurdity? This is an honest question, not trying to just make my own point.
To intercept the usual argument of "But my business can't exist without all this data!": to that I say "Good!". If your business can't exist without tracking every single iota of your customers' existence, then it truly shouldn't. I couldn't tell you the number of times I've had to fight back against implementing yet another tracking tool at work, just to collect data that I know for a fact no one will look at after the first few weeks of the tool being there. The number of times I've heard some stupid shit like "Well we don't need this data yet, but what if we need to have their mother's maiden name at some point in the future?!" is depressing, and I'm glad that we're starting to have legal channels to push back against such idiocy.
Cute theory. Fails in practice. Especially with LLMs on the horizon, this would be tantamount to unilateral nuclear disarmament. (Practically, it fails in that we haven't quantified the cost of breaches commensurate with what those of us who are security minded estimate it to be.)
I have advocated for privacy issues for a short while. "Data is radioactive" is the "defund the police" of our movement.
>we haven't quantified the cost of breaches commensurate with what those of us who are security minded estimate it to be
We don't estimate GDPR violations as the true materialized damages either, we put a heavy % of yearly income per offense, large enough to deter it.
Not remotely analogous to turning data into a liability. Particularly when the EU laws seem almost explicitly written to allow for offloading such risks to America and China.
> Particularly when the EU laws seem almost explicitly written to allow for offloading such risks to America and China.
I don't know what you're referring to with this, specifically.
GDPR is a gating function. If you can afford the specific set of qualified lawyers (up to a ridiculous, plutocratic limit, the likes of e.g. the Google breach), you can legally offload the risks to an offshore server. If you're a tiny competitor, you can expect to be beaten up by funded complaints.
You can't. GDPR also applies to processing EU residents' PII outside of the EU.
Right. There are plenty of foundation model companies in the US and China who would be fine training on European data without operating there. (Or doing it and paying a fine after acquisition.) It's an issue Mistral, for example, has complained about.
Nukes have clear downsides, ones one doesn't need a protractor or regression to prove. Our estimates of the costs of data breaches remain statistical.
So I'm reduced to asking again: How is banning corporate hoarding of user data similar to nuclear disarmament?
Governments (probably rightly) view ai technology as strategic so will build legal regimes that improve ai. This means that they will have power over the ones that don’t.
The last 50 years have shown pretty clearly that nuclear disarmament was a strategic mistake for regimes that did it so they won’t make the same mistake with ai.
So lets take a concrete example. Let's imagine Facebook moves out of the EU in order to skirt EU law. How do they now operate in the EU? How do they make any money from the EU users?
If the EU has neighbors who have nukes, then that is a threat to the EU, and the EU needs their own nukes for deterrence. This far I follow. If the EU has neighbors who have lax data privacy laws, then that is their problem - it's not a problem in the EU because they can be barred from running businesses in the EU. Can they store the data from EU users who visit their online services? Sure. But they will have to offer free services to entice EU users to visit, because they can be blocked from running business in the EU. I don't see the business model for keeping this up.
They train off EU users' data irrespectively. Now the EU has to contend with not having LLMs (which may be fine, we don't yet know) or relying on a foreign foundation model.
We should also be hoping for unilateral nuclear disarmament (I get your point on the infeasibility though), but I don't see the parallels here. LLMs don't need personal data to work (I'd even imagine such data would be better off left out of the training data anyway, with a caveat for celebrities), and regardless of everything else, whether the AI hypesters are to be believed about how world-changing AI/LLMs will be remains to be seen.
Also, as the OP article suggests, we can do and are doing something about it. Things aren't perfect yet, but GDPR itself has already made huge waves and has made things better. From how I interpret this ruling, the dark-pattern cookie banners are being scrutinized and put under the knife, so there's some hope that things will soon improve on that front.
> I have advocated for privacy issues for a short while. "Data is radioactive" is the "defund the police" of our movement.
Except we can already see a shift in the masses and their opinions here. People are becoming cognizant of the sheer amount of data all these tech companies harvest on them. I am consistently getting more and more of my non-technical-in-any-capacity friends asking me how to safeguard their data better, so I'm quite hopeful we're going to get there. All we need is to actually fucking hurt the FAANGs and their ilk. Cut the head off the snake and all that, if we actually hurt Meta as we should've a million times by now, then all the smaller players will automatically fall in line for fear of a similar world of hurt.
On a side note, I also don't understand your comparison to "defund the police" - were there any places that fully applied it and demonstrated that it "fails in practice"?
Training data?
> I also don't understand your comparison to "defund the police" - were there any places that fully applied it and demonstrated that it "fails in practice"
It's a famous example where a minority overreacting in a presentable way set the entire movement back.
Why is it okay for companies to just vacuum up all user data without 90% of users knowing it’s happening ?
Or shall the “stealing” of knowledge and creative works without consent continue?
Outcry made Adobe and other such companies put (opt-out) user controls for gathering training data, but writers, especially writers on the internet, are usually ignored. I've seen even the angriest "AI is stealing my art if you use Dall-E you're a bad person" people use ChatGPT, because they don't seem to consider writing to be art or expression as much as they do their own works.
Textual data just doesn't seem to be valued, and as a result data scrapers often don't care about annoyances such as "ethics" or "consent" when it comes to gathering training data.
Not sure how i feel about the whole thing to be honest. (legal gray area)
No. The development is a given. Where it happens is not. That’s the point. If you want to use European data to train, you’d better not have a European nexus.
> Training data?
What purpose would user data serve in your training set? What's the application of the LLM after it's done training?
I'm sure we all understand why the user doesn't want his private data in the training set, but I also don't understand why BigAI would want to train on this data. Except for AI-enhanced advertising, of course. But maybe... nobody should do this anyway?
You need to be clear about what you collect, get clear consent (and all the court decisions on that are going in the direction that it really needs to be clear and specific), and give people the ability to have their own data modified.
Plus, enforcement makes a lot of sense. Companies get a lot of warning before things escalate, and fines are proportional to a company's results, so it hurts but is not a death sentence unless they repeatedly offend.
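As a rough sketch (not any official schema), "clear and specific" basically boils down to keeping a record of exactly what was agreed to, per purpose, and making withdrawal as easy as granting:

```swift
import Foundation

struct ConsentRecord: Codable {
    let userID: UUID
    let purposes: [String]        // only the purposes actually shown, e.g. ["crash-reports"]
    let textShownToUser: String   // the exact wording the user agreed to
    let grantedAt: Date
    var withdrawnAt: Date?        // withdrawing must be as easy as granting
}

// Processing for a purpose is allowed only while a matching, non-withdrawn record exists.
func mayProcess(_ purpose: String, under record: ConsentRecord?) -> Bool {
    guard let record, record.withdrawnAt == nil else { return false }
    return record.purposes.contains(purpose)
}
```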
Yeah, we once got contacted by our local DPA about some issues. Mailed us a list of issues they had with our site. I set up a call with them for some clarification, and they were happy to go into detail. Then they just said to mail them when it’s done, or they’ll just re-check after some months. They are interested in actually changing things, not in fees.
And then we lobbied in "legitimate interest".
There's a charitable way to view it: there are a lot of human endeavours. You can spend a few centuries trying to classify all of them and put them into law, or you allow "legitimate interests".
It's pretty much the idea of the GDPR. The wording of the GDPR is "You should make your systems private by design", which they explain as "Store PII only if you really have no choice".
In this case, the legal ruling means that even if they somehow fix their consent, they have to remove all the data they currently have! Also all their clients need to remove all the data. Having to tell your customers they have to remove all their data ought to completely kill their business.
That being said, it will likely not happen: It's not the first time they lose a ruling and I'm pretty sure no-one removed any data, despite being required to...
GDPR says that governments are the ones judging whether the GDPR is violated (meaning not the courts), for example the https://ico.org.uk/ , and they even formalized an exception process. You cannot sue a company for a GDPR violation; you can report it to a government department that may or may not decide to action your report, and that's it. GDPR only allows for the government to intervene directly in the private sector.
Needless to say, all governments have used the exception process to carve out blanket, wide-ranging exceptions for themselves and for state-owned or partially state-owned enterprises (police, government departments, justice, banks, insurance, hospitals, doctors, incumbent telcos, ... exactly the places where GDPR protection would be critically important) that seem to grow in scope over time. For example, the tax offices in the EU now have exceptions that allow them to mandate that companies store PII as part of their regulations (meaning without an actual law).
And, in any case, if anyone violates your rights, there's nothing you can do with the GDPR. Try to get a hospital to empty your patient record and tell me how it goes. I wanted to do that after the hospital charged the insurance for an (embarrassing) assessment they didn't actually do; it allowed them to charge a lot because it supposedly involves staying a few days at the hospital, while I was in there about 2 hours. So I wanted it cleared from my medical record, which is one of the core things the GDPR allows for - it's given as an example in the law! Nope. Not allowed, and the government doesn't pick up the report.
Keep it running for a few days, then check on it, but the tracking doesn't output meaningful data that you can exploit to solve your issue.
At this point, you search for alternative tracking, but do you disable the old one? What's the benefit? Either it's free or costs very little, none of your customers know they are being tracked, and in the eventuality it becomes useful later on, you keep it.
Repeat a few times and you end up with a bloated website that tracks where you were, are and will be. What you're watching, cursor position, scrolling, how long you spent looking at this image or these ones; it has access to every technical detail about your device because that's required for fingerprinting, all while no one is actually exploiting the data.
It's junk yet you collect it because it's free.
If there were a meaningful reason to limit the amount of tracking, like the law and the fear of getting sued, then it would be a different story.
Whatever it costs, it reduces your profit margin, so why would you keep it live?
At the same time, quantifying this is not straightforward, and companies mostly avoid committing resources to such activities.
1. Entrenches Google, Facebook, etc. because they are the only people that have enough money to comply with the regulation.
2. Makes the rest of the internet worse (e.g. people show MORE ads because they are less effective because they show me boats and I hate boats)
3. Makes data brokers even more important because companies can't get data anywhere else.
4. Reduces competition because the incumbents will always have more data than startups (Nike knows I wear a size X and the startup can't ever get that data)
Everything is a tradeoff. I, for one, would rather these regulatory agencies go after the 100,000s of data brokers that mine for SSNs, birth certificates, financial info, etc., rather than them going after Facebook, TikTok, etc.
Ads are here to stay. If you don't want ads, then ban ads, and with them most of the internet. But if people keep making terrible regulations like this that try to hurt big companies and get rid of ads, in reality you just enable and feed these massive companies. Regulation makes them MORE valuable, not less (see Meta's stock price vs. Snap's after ATT).
TFA made it clear that they _aren't_ complying.
Back in the olden days, if you read a boat magazine, you'd see ads for boat stuff. This was always fun for me: if I'm reading a motorcycle racing magazine, I'll see ads for cool things that I had no idea existed and that would be useful to me. With "targeted" ads, it becomes an echo chamber - I see ads that are "tailored" for my alleged current interests, but nothing that helps me discover new things I could become interested in.
What’s wrong with context-based ads? If I’m reading about Thailand travel, then the publisher should sell ads related to SE Asia travel.
Why am “I” being customized to rather than ads being relevant to the content?
If you want to reach boat enthusiasts, then advertise on content related to boats (or perhaps water sports, etc.). You then don't need to track "me," but instead you can track "boat content." That takes the personal data out of it. This keeps me from being followed around the web by attempts to sell me a vacuum cleaner I already bought.
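A toy sketch of the idea, with made-up topics and inventory - the only input is the page's topic; nothing about the reader is stored or read:

```swift
// Hypothetical inventory keyed by content topic, not by user profile.
let adInventory: [String: [String]] = [
    "boating":    ["Marine GPS units", "Anti-fouling paint"],
    "travel-sea": ["Bangkok hotel deals", "Travel insurance"],
    "accounting": ["Innovative accounting platform"],
]

func ads(forPageTopic topic: String) -> [String] {
    // Falls back to an untargeted house ad when the topic isn't sold.
    adInventory[topic] ?? ["House ad: subscribe to this site"]
}

print(ads(forPageTopic: "boating"))
```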
Sure, it might be useful to try to sell me another burger, or another nicotine gum, but something went very wrong in the data processing if I'm being resold on lifetime goods.
And it happens way too often.
I've seen this kind of comments several times over the years, and I've always thought that this might actually be the optimal strategy, because I'm not convinced the alternatives work better. You'd have to see the numbers over samples bigger than n=1.
Let's say I just bought a fridge, and that's the only thing you know about me. What ads should you serve me? Maybe fridge accessories would make sense (I'm not sure that's a thing). But fridges themselves might be relevant as well, more so than some other random product:
1. I might be able to return the fridge I just bought, if I see another one I might prefer.
2. What's the life expectancy of such an appliance? I guess it either breaks quickly (manufacturing flaw) or not (hopefully it can last more than 5 years). In the first case, I'm back in the market right after my purchase.
I'm also guessing that the margin on such an appliance might be higher than on burgers and nicotine gums, such that you can afford lower conversion rates.
Who returns fridges? It's a tremendous pain in the ass to do.
But you're right, it's possible.
Logically I think I'd choose to show myself other new home related ads, or other long term appliances that are probably not bought yet or nearing end of life just like the fridge.
But you do have to optimise for that, instead of Bluetooth connectivity.
What ad is relevant to a Taylor Swift song? A news article about a shooting (a naive algorithm will say "guns")? A Youtube video explaining the Fourier series?
What about a TV Show review... that is watched by people around the world, and where the show in question is on different platforms in different countries? Does displaying Hulu ads to readers in countries without Hulu access make sense?
Non-personalized advertising favors big brands, because most content isn't contextual, and only brands with extremely broad appeal advertise on such content. This is why so much TV advertising is cars, banks, medications, detergent, shaving cream and so on.
How did radio handle this for 90+ years?
> This is why so much TV advertising is cars, banks, medications…
No, that’s because national TV advertising is super expensive.
> Does displaying Hulu ads to readers in countries without Hulu access make sense?
Yeah, perhaps it’ll generate interest in Hulu expanding to that country. You’d be reaching an audience that are ostensibly TV enthusiasts, so it’s a perfect idea for Hulu to gauge and develop interest among audiences that matter.
> A YouTube video explaining the Fourier series?
I don’t know: what kinds of products, services, or events might be interesting for someone who was interested in the Fourier series?
This isn’t a hard problem at all. Advertising worked in the 1950s just fine.
There is zero reason for a company to have my personal buying and interest habits in their database. It doesn't benefit me. Is the advertising landscape better in 2025 than it was in 1964? Not at all. Here's a prime example: my friend is an author; he reads blogs and websites on all manner of topics, but somehow, if he's researching a character, he'll be followed around the web forever by ads for products he'll never buy. In the real world, if I go into a perfume shop one time and buy something, that doesn't mean I'm a perfume enthusiast - so all of those CPM ads from the perfume companies are completely wasted.
Just because I view, visit, or even buy something, that doesn’t mean I’m interested. Am I in a feminine products affinity group because I occasionally have purchased those products for my wife?
Why not have everyone give a DNA sample to Google so they can tailor advertising based on my genetics? Where does it stop, and to what levels of absurdity shall it reach before we push back — both as consumers and well as tech people building all of this shit?
Want to sell boats? Advertise in boat content. Want to sell subscriptions to your "innovative accounting platform"? Then advertise in accounting or business related content.
Again, this isn’t hard. I don’t want “personalized” advertising because the internet doesn’t know me or what I’m interested in at a particular moment. The data on me is very noisy, my interests frequently are fleeting or change, and my “buying habits” are very much contextual and situational. Not to mention I don’t want Google and potentially governments to know what I’m interested in — it’s literally none of their business.
Privacy is a human right and we should be pushing for that. If that makes it harder to monetize Taylor Swift — tough shit. Not my problem. And I don’t think Taylor Swift cares either way.
We know this one! Apparently subscription services to learn STEM.
(Coincidentally, I actually was shopping for such a service the other day, and checked out the one promoted most. Turns out they seem to have spent the entire budget on marketing and had very little content, so it could not solve my problem anyway.)
Which advertisement needs, exactly, are being served by sharing all that data with 1498 "partners" each of which will store similar data for similar periods of time? https://x.com/dmitriid/status/1733421877119324609
Which advertisement needs are not being served by showing contextual ads?
BTW, personalised ads favour huge brands on huge advertisement platforms. Because https://www.sciencedirect.com/science/article/pii/S016781162...
--- start quote ---
Our simulation study reveals that more than 50% of audience segments ... require a minimum increase in performance larger than 700% to be at least as profitable as no-targeting. ...we find that more than half of the audience segments require an increase in CTR0, CR0, and m0 larger than 100% to be at least as profitable as no-targeting.
Approximately half of the audience segments on Spotify require a higher increase in CTR0 for the advertiser, suggesting they might be less profitable than no-targeting.
--- end quote ---
The key culprit is that user data is used not just for advertising products that the user might be interested in _today_, but to create a profile of their interests so that companies can predict what they might be interested in at any point in the future, which can then be used to design more effective advertising campaigns tailored to the type of products they're most susceptible to being manipulated into buying.
Furthermore, this profile is also generally useful to anyone who wishes to psychologically manipulate a group of people into thinking or acting a certain way. Since advertising is a branch of propaganda, governments and political agencies are particularly interested in this use case. It's pretty obvious that the current global sociopolitical instability is largely a product of this type of manipulation.
So considering that both governments and companies have an interest in user data, this genie is never going back in the bottle. The best we can hope for is for the exploitation to be contained via regulation by governments that haven't been fully corrupted yet.
The article we're commenting on makes it clear the big guys aren't complying. Also, I reject the notion that you have to spend inordinate amounts of resources to comply, in fact it is the opposite. You don't spend money on data you don't store, after all.
The co. I used to work for is microscopic in comparison to FAANG, and we didn't have a single cookie banner or anything of the sort, and had absolutely no problem complying with GDPR, because we tracked nothing and collected nothing more than what was strictly necessary - mostly because of individuals like myself who push hard against any data collection that doesn't have a well-thought-out reason. Hell, even Github at their massive scale has no problem doing without cookie banners or anything else of the like. This is a problem of will, not resources.
> 2. Makes the rest of the internet worse (e.g. people show MORE ads because they are less effective because they show me boats and I hate boats)
Perhaps, but we're already drowning in them as-is. The internet is unusable without uBlock and DNS-level adblocking.
> 3. Makes data brokers even more important because companies can't get data anywhere else.
If we make data radioactive, then data brokers wouldn't be able to exist. What we need is stringent and broad laws that limit data gathering, period, regardless of the source. Whether you collect it yourself or pay someone else to collect it for you is completely irrelevant, both should be made equally painful. I'd also have no qualms with making sharing any data that you do collect even more of a pain in the ass and a nightmare for everyone involved, this whole gray market has net negative benefits to everyone.
> 4. Reduces competition because the incumbents will always have more data than startups (Nike knows I wear a size X and the startup can't ever get that data)
Why would Nike have this data in the system we're talking about (data radioactivity)? How is this data even useful to anyone, other than for tracking purposes to make a unique profile out of you? Companies shouldn't have this data unless it's a podiatric clinic or something like that, whether it be Nike or this imaginary Shoe startup that needs feet sizes for whatever reason.
I guess I could see there being genuine usefulness for people whose foot sizes aren't the norm to find footwear that fits them, but there's no reason they have to have their entire essence tracked by every company on the internet for that.
There's a difference between a flow like:
- Go to nike.com
- Select shoe, select size
- Order
- Shoe size data is only used for this 1 purchase and never stored or used again unless the user buys another pair of shoes
As opposed to having a flow like:
- Go to nike.com
- Select shoe, select size
- Order
- Nike stores the shoe size entered, tied to an identity token that is used to track you on repeat visits and across different domains, and sells your shoe size (alongside all the other data they have collected, like your address) to data brokers, who themselves go on to sell shoe sizes to advertisers. You then get ads like "Have freaky long toes, live in City X and are size 49? Buy FreakShoes.com!" while viewing random Youtube videos about your favorite video game or whatever
- Above process repeats for all of eternity
The depressing part is that the 2nd flow isn't even that far off of what's really happening. As I was writing I was thinking "Am I making this too dramatic?" But no, this is literally how things happen right now!
I have very little sympathy for the idea that NOT storing user data is some sort of onerous regulatory burden.
Just stop collecting it
That's an idealistic, but highly unrealistic, thought.
As long as a market exists that can profit from exploiting PII, and is so large that it can support other industries, data will never be radioactive. The only way to make it so is with regulation, either to force companies to adopt fair business models, or by _heavily_ regulating the source of the problem—the advertising industry. Since the advertising industry has its tentacles deeply embedded everywhere, regulating it is much more difficult than regulating companies that depend on it.
So this is a good step by the EU, and even though it's still too conservative IMO, I'm glad that there are governments that still want to protect their citizens from the insane overreach by Big Tech.
The EU bureaucracy machine can be slow moving, but has the potential to fix this. The stricter the rules, the simpler the implementation. You could cut a LOT of the administrative burden by specifying what data is allowed to be stored at all, instead of what isn't.
Big tech needs to be put in their place, and as others have commented; if this kills your business model, your business model doesn't deserve to exist.
Europe gives me less control of my personal data than the US would. I am no longer allowed to decide that I'd rather choose services that take payment in data instead of services that take payment in Euros.
I think people who disagree with this perspective should be accommodated. It's a valid objection and technology inherently favors monopolies, so you can't really have the Facebook equivalent of a vegan restaurant or gay club. I'm not against forcing (large) tech companies to offer tracking-free plans at reasonable prices for those for whom this is the right tradeoff.
What Europe is doing is just plain stupid, though, and it will be felt most by those who can least afford it.
Google, Microsoft and Apple don't really give you a choice, you will pay in Euros for your phone/PC, and then you will pay in your data as you use it whether you like it or not.
A prime example is sharing information about DNA, since that has a social impact on relatives. A less obvious problem would be people in positions of social power, like say a judge or jury member, since access to personal information in that situation provides an unfair position of power in society. It is also a problem with voting, since access to voters' personal information carries a high risk of influencing elections.
To take a more direct example, if you are paying your email provider with data, then you are also selling the information of anyone who sends their emails to you. The sender is in an impossible position in that they can't know who the recipient's email provider is (email forwarding is a thing), so the social cost is on the recipient if they sell the information.
This sort of business model is problematic precisely because the poorest can't afford to refuse - that's a feature not a bug. Privacy is deemed a human right, and human rights shouldn't be for sale.
You could make the same argument supporting the legal sale of human organs, but as a society we've decided that kind of "payment" strips the poorest of their dignity and human rights.
The business model is inherently predatory for other reasons too. People see what they get right now - "free" access to the website they're on, but they're completely oblivious to the real costs because they're abstract, too many steps removed from each individual's actions, but they're very real and damaging in aggregate.
100%. Unless a cooperative model (like most businesses should be run, but that's a different issue) exists in which I am compensated for you having my data. At that point all the time and friction I have to spend and deal with because all of you have my data is worth it. Right now all this friction in my life, because you have my data and I'm dealing with your breaches, is "paid" for by me, and that's lame.
I'm curious, how does such a conversation usually go? Is your main angle to point out how useless the data ultimately will be, or did you find a resonating way to point out the negative effects on users?
I also tend to highlight that we do have historical data that nobody is looking at as-is, what's different about this new data? What are the actual long-term plans for the data? Can we reuse what we already have for what we're aiming for here?
These days my default is "Oooh we'll have to check in with legal on that one, not sure if it's GDPR-friendly to include this new column like this". No one likes talking to legal unless they absolutely have to, so most will just drop it.
And unfortunately sometimes there's no winning it no matter what, so you have to "disagree and move on" as it were. If it's some manager's pet project, well, you're SOL for the most part.
> or did you find a resonating way to point out the negative effects on users?
Unfortunately I've found this to seldom work unless you're working somewhere where privacy is part of the value prop. Even pointing things out like "How would you feel if the DB were to leak and all your info were to be made public?" elicits 0 response. The marketing people and C-suite that push these kind of boneheaded things forward don't view the users as actual humans, they're all just numbers to them. Will this cause churn? How much? Those are the only questions that matter to them.
https://noyb.eu/en/microsofts-xandr-grants-gdpr-rights-rate-...
I've tried to do the same steps in the past, and eventually the Xandr pages linked there were removed (they're now somewhere inside a Microsoft page) and became even harder to contact, even if it's still possible to fill out a form asking for your data once you get there. I received the same answer as noyb.
Not true
https://fortune.com/ranking/global500/2023/?sector=Technolog...
I hope I'm wrong, but I cannot see a more plausible outcome.
For once, a corporation's actions would then disproportionately affect the rich, since they would be the only ones worth holding data points on. Those best able to financially and legally enforce the rule.
A clean win-win.
That said, I don't understand how TC String can be considered PII.
I haven't been following this case so I probably miss a lot of context, but my understanding is that the TC String encodes the user's preferences for which advertisers to share your info with. For example, I visit example.com and deselect everything. This information then gets passed around so that advertisers know I don't want their advertising.
Isn't that kind of the point? I want them to know I don't want them. I'd rather set that up once and then not do it again for every site under the sun. Is the issue here that you can somehow be identified based on your tracking preferences alone?
The ruling here is confirmation of an earlier ruling where TC String was considered personal data. As an effect, the organization coordinating all this tracking (IAB) is considered data processor.
Is the ruling just a technicality (the fine is pretty low) because IAB isn't listed in data processor lists for all the sites I visit, or is there a deeper consequence arising from the ruling?
IANAL (or even Belgian).
There wasn't a direct ruling on whether TC strings themselves were PII. They were personal data that can be linked to an individual, since the CMP will get both the IP address and the TC string.
As for IAB being a (joint) data controller, the reason (sections 68 to 80) given for that is that they determine the purpose and means of processing. They might not hold any data, but they set the rules for the framework.
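For intuition, a rough sketch based on my reading of the public TCF v2 spec: the TC String is a base64url-encoded bitfield carrying (among other things) a version, a CMP id, a consent language and per-vendor consent bits, which is why it's far richer than a plain yes/no flag. This just pulls out the 6-bit version field:

```swift
import Foundation

func tcStringVersion(_ tcString: String) -> Int? {
    // The core segment is the part before the first dot.
    guard let core = tcString.split(separator: ".").first else { return nil }
    // base64url -> base64, with padding restored.
    var b64 = String(core)
        .replacingOccurrences(of: "-", with: "+")
        .replacingOccurrences(of: "_", with: "/")
    while b64.count % 4 != 0 { b64 += "=" }
    guard let data = Data(base64Encoded: b64), let firstByte = data.first else { return nil }
    return Int(firstByte >> 2) // the version is the first 6 bits of the bitfield
}

// A made-up string whose first character ("C") encodes version 2; prints Optional(2).
print(tcStringVersion("CPc0D0APc0D0AAGABCENCgCgAAAAAAAAAAAAAAAAAAAA.YAAAAAAAAAAA"))
```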
It isn't good for consumers, whose privacy is being violated while they are annoyed with unwanted, irrelevant ads and charged higher prices to cover the cost of the advertising.
It isn't good for the companies buying the ads, who participate in sham "auctions" with no real insight into or control over the process. They are literally begging to be ripped off.
It doesn't have to be this way. "Context sensitive" advertising is more privacy respecting, easier to implement and monitor and can be more cost effective.
Example: The fact that I recently shopped for and bought a car is no reason to show me auto ads on a web site devoted to pet supplies. There is a logical disconnect here because context is ignored in favor of "personalization".
Those paying for these dumb "personalized" ads are wasting their money and my time and bandwidth because I already made a purchase. I'm not making another one any time soon.
By the way, this doesn't really happen to me any more because I now block these "personalized" ad networks. And you should too --- it's the only logical recourse to this stupidity.
I was recently searching for a toy across Etsy, Ali Express, eBay. I didn't buy it. A day later, I saw 'suggested' purchases on Amazon for the same toy. I boycott Amazon, so I don't often visit their website.
I normally block (successfully?) almost all of this advertising, so I find it particularly creepy when I receive it.
This is what launched Google's money printing machine: Showing ads matching the current intent (current search) thus solving a current problem.
At least out-of-context ads can be more easily ignored.
This discrimination is quite important, and before the internet people would self-discriminate on that basis: buy different magazines, see different movies, walk different streets, and advertisers could target their demographics based on that.
Now everybody goes to the same social networks, so tracking is used to provide this discrimination.
A good example is gym membership. There's 20eur/month and 300eur/month ones. The 300eur/month ones really don't want to advertise to everybody, they have a really specific demographic target in mind.
If you get another ad for a product you already bought, the advertiser already paid for the placement; a click-through is just a bonus on top of that. Even more so when information is so closely guarded that any analysis of an ad's impact is extremely flawed. It hasn't solved the old adage from John Wanamaker:
> Half the money I spend on advertising is wasted; the trouble is I don't know which half
It's meant to be that way. The ad platforms do not want advertisers to know what is waste or to optimise their ads further than what's needed to keep them advertising; they just need to throw some bones here and there, converting a few people through clicks, to make themselves look indispensable.
There's no incentive to improve that, at all.
Easy --- advertisers pay for bad ads the same as good ones --- why bother stopping the bad ones?
Convincing so many advertisers to just blindly trust the system and buy into the concept of black box "personalized" advertising is actually the real marketing coup here.
According to some stats, global use of ad blockers is now over 40%. Once it exceeds 50%, I believe this stupidity will slowly start to die out.
Seeing ads appropriate to a site, however, makes me a bit wary of the site itself; an ad needs enough difference from the context to not harm the site's reputation.
According to Meta, their personalized advertising alone generates over $0.5 trillion of economic activity per year.
As much as the Hacker News crowd hates on ads, it's indisputable that it's good for businesses and the broad economy.
https://research.facebook.com/economiccontribution/
E.g. if I'm on a page looking at watches, should I get ads for watches, or would it be better to show ads for the washing machine I was looking up last night? Google, the search engine, clearly thinks it's better to show ads relevant to my search term, but they are also in a special position that's not applicable to a news website.
People working in the field have also previously commented, here on HN, that the ad networks are basically hustling the advertisers, selling them ad space / users that they know will perform badly, to move more "inventory". That generates economic activity, but does it benefit anyone beyond the ad networks?
Oh, but they shuffled some money around the economy (mostly into their coffers), so it was all worth it in the end, because as we all know the magic economy is the only thing that matters.
I dispute this. Therefore it is not indisputable.
I support my dispute thusly: imagine a world where there were no ads. All the money spent on ads would be spent on other things. Those other things would, I assert, be better for everyone involved than ads. The world would be a better place.
I support my assertion that anything else would be better than ads by pointing out that, for businesses, advertising is an arms race: you and all your competition are in an auction for customer attention in which the winner is one of the duopoly that controls all internet advertising. And for users I just point to *waves hands at everything we hate about the modern internet* all that. QED.
It's not "look at all the money they made, this service must be valuable", it's "is the money they made the best possible use of those resources?" and my answer is "definitely not"
[0] https://en.wikipedia.org/wiki/Parable_of_the_broken_window
Why should I care that they make a boatload of money while making the life of everyone else crappier? Advertising turns everything into shit.
Crapware pre-installed onto your brand-new phone or laptop [1]? Advertising. Pervasive tracking added to Windows? Generating profiles for advertising. Smart TVs sending regular screengrabs to Samba TV? Analytics for advertising. Like your $2000 smart fridge? It's going to get much worse, because Samsung is piloting advertising. Every tech product is getting infected by this disease, both shoving unwanted ads into your face and tracking you pervasively.
Of course, someone is going to argue that we cannot have 'free' products without advertising. In the end the consumer is paying for advertising as part of (increased) product prices.
By your logic, that makes it a good business. I dispute that.
You seem to think that companies have a choice. I've been advertising for years; Google has been phasing out standard ads in favor of automated "personalized" ones for a long time.
They kept removing features again and again.
On top of removing features and making basic ads impossible to use, they have a team of people who will keep calling you to offer 'guidance' on how to run your ads. This guidance almost always revolves around enabling the automatic advertising algorithms and disabling your old-school ads.
I consider myself quite smart with internet things, but even I, at some point, got baited into switching to these automated ads, because at the end of the day you run a business that is completely different from marketing and it's not your core business. You're not an expert in marketing, and especially not in Google ads, so it's easy for these experts to trick you.
I have seen my money siphoned off by these automatic techniques, with quite a bad ROI.
And all these self-made people and very small companies (which represent the vast majority of businesses) are just as easy a target for Google as I was.
Big companies can afford ad consultants who will run the advertising campaigns and optimize everything, but small ones are stuck doing things themselves on a system that is purposely made to have you hand over your money and let the computer do its 'magic' with targeted advertising.
And if you're not happy with Google, what are you gonna do? It's not like there is competition. Everyone uses Google; Google is a monopoly.
I tried switching to Bing ads and Facebook ads, but it's just not possible. No one uses Bing, and Facebook leads were never as profitable as Google ones, at least in the market I advertised in.
I was hosting websites in 1998 when Google was still in Larry Page's garage. We sold ads the same way magazines and newspapers always have: we had a sales staff and they did their job well. There's no reason we can't go back to that.
The original vision of Google died when they bought DoubleClick.
If you're not there but your competitor is, you've lost the game.
_You_ think it doesn't work, but it does. Or at least 'on average' it does. As for your time, perhaps you value your time. But again, 'on average', there are so many people spending hours and hours on Insta, TT, etc. and those people clearly don't care about focus/time/ads, because it is their (mental/spiritual) bread and butter. When a young woman 'follows' 100 'influencers' and each posts twice per day, that young woman consumes at least two hundred ads per day and if she buys at least one item per day, that's a win for 'them'.
Regarding the car/pet scenario, if they were any good they would be advertising stuff to clean up dog piss, brushes, etc. - items that "will keep your car clean when you've got a pet".
But again.. it works. People make money.
Like say, advertising a car to you after you recently purchased one?