If big US tech companies can trivially use online ads to sway our elections with misinformation, without those ads being observable to anyone or possible to refute (as newspaper or TV ads would be), then why shouldn't online advertising be monitored?
Also, a minor detail: TikTok is not a US company.
[1] https://www.techpolicy.press/x-polls-skew-political-realitie...
> instances of the EU interfering in elections
Do tell.
How can I, an end user who doesn't trust the ability of these researchers to keep my data private, prevent the platform from sharing my data with them?
Let me know when devs get stamps that make them legally liable for their decisions. Only then will that honor be applicable to software.
https://en.wikipedia.org/wiki/Regulation_and_licensure_in_en...
I've worked with engineers; what we generally do isn't engineering by the standards of those engineers.
Which isn't to say that no software development qualifies.
People writing avionics software, medical software, etc. are doing what I'd recognise as engineering.
It's about the process more than anything.
Software in the wild is simply a young field, and most of it isn't there yet.
> Collins Aerospace: Sending text messages to the cockpit with test:test [0]
___
Think about how physical engineering and architecture firms are mostly formed: as a collective of nerds, as it should be. The reason is that no zero-consequence C-suite B.S. can survive in a realm with real legal responsibility. Engineering org structure is written in blood.
I wonder what it will take for software to face that reality. I know that lack of regulation leads to faster everything... but there will be real consequences eventually, right? What will that look like?
* researchers, because they will have to write data access applications, including a description of planned safeguards detailed enough that their university is ready to take on legal liability (and you can imagine how easy that will be), and
* digital service coordinators, because it will take them ages to process applications from thousands of researchers, each requesting a slightly different dataset.
In the end, we need to develop standardized datasets across platforms and to streamline data access processes so that they're safe and efficient.
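To make that concrete, here is a minimal sketch of what a standardized record and a streamlined access application could look like. Every class and field name is hypothetical; none of this comes from the DSA, a delegated act, or any real platform API.

    # Hypothetical shared schema; names are illustrative, not from any platform.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class PublicPost:
        platform: str              # e.g. "x", "tiktok", "facebook"
        post_id: str               # platform-local identifier
        created_at: datetime       # normalized to UTC
        text: str                  # public content only
        author_pseudonym: str      # stable hash, never the raw user id
        engagement: dict = field(default_factory=dict)  # likes/shares/views

    @dataclass
    class AccessRequest:
        researcher_org: str        # the institution taking on legal liability
        purpose: str               # the systemic-risk question under study
        fields_requested: list = field(default_factory=list)
        safeguards: str = ""       # planned storage and access controls

    def invalid_fields(request: AccessRequest) -> list:
        """Reject requests for fields outside the shared schema."""
        allowed = set(PublicPost.__dataclass_fields__)
        return [f for f in request.fields_requested if f not in allowed]

With a shared schema, a coordinator can check most of an application mechanically (for example, rejecting requests for fields outside the schema) instead of reviewing each bespoke dataset request from scratch.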
Across a couple of large public companies, I've had to react to a court ruling and stop an account's actions, work with the CA FTB on document requests, provide account activity as evidence in a case, things like that.
"Delete all docs we aren't legally required to retain on topic Y before we get formally subpoenaed. We expect it to be on XXX based on our contact."
Sadly, big companies can bully scrapers with nuisance lawsuits, wearing them down with legal costs before the case ever gets to trial.
[1] https://www.proskauer.com/release/proskauer-secures-dismissa...
Also, we can have depolarized recommendation algorithms. We don't need to go back all the way to plain chronological timelines.
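As a toy sketch of what "depolarized" could mean in practice (the two-group polarization signal and the weight are my assumptions, not any platform's real algorithm): keep ranking by engagement, but make one-sided audience appeal cost ranking points.

    def polarization(item) -> float:
        """0.0 = liked evenly across groups, 1.0 = liked by one group only."""
        left = item["likes_by_group"]["left"]
        right = item["likes_by_group"]["right"]
        total = left + right
        return abs(left - right) / total if total else 0.0

    def rank(candidates, weight=0.5):
        # Engagement still matters; polarization just costs ranking points.
        return sorted(
            candidates,
            key=lambda item: item["engagement"] * (1 - weight * polarization(item)),
            reverse=True,
        )

    candidates = [
        {"id": "a", "engagement": 100, "likes_by_group": {"left": 95, "right": 5}},
        {"id": "b", "engagement": 80, "likes_by_group": {"left": 42, "right": 38}},
    ]
    print([item["id"] for item in rank(candidates)])  # ['b', 'a']: broad appeal wins

Still a recommender, still personalized; it just stops rewarding content whose only virtue is that one side loves it.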
E.g., Dieselgate: Europe was more affected, but it was the US that caught Volkswagen cheating.
https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal#E...
I would have preferred that companies like this had emerged as federated entities, with European data staying in European data centers and subject to European law. I think it would have avoided a lot of this if they had not constructed themselves as a single US corporate shield, with transfer pricing on the side to maximise profit.
[1] https://www.techpolicy.press/x-polls-skew-political-realitie...
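Mechanically, the federated setup above is mostly a data-residency routing rule. A minimal sketch, with invented region and data-center names: each user's data is homed by jurisdiction, and an unknown region fails closed rather than silently defaulting to a US DC.

    # Hypothetical routing table; region and data-center names are invented.
    DC_BY_REGION = {
        "eu": "dc-frankfurt",
        "us": "dc-virginia",
        "uk": "dc-london",
    }

    def home_datacenter(user_region: str) -> str:
        # Fail closed: an unknown region is an error, not a silent US default.
        try:
            return DC_BY_REGION[user_region]
        except KeyError:
            raise ValueError(f"no data-residency rule for region {user_region!r}")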
shakna•2h ago
So liability falls on those misusing the data, unless you knew it would be misused but collected it anyway.
Golden rule: Don't need the data? Don't collect it.
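In code, the golden rule is just an allowlist applied before anything hits storage; the field names here are illustrative.

    NEEDED_FIELDS = {"user_id", "event", "timestamp"}

    def minimize(payload: dict) -> dict:
        """Keep only the fields we have a stated need for."""
        return {k: v for k, v in payload.items() if k in NEEDED_FIELDS}

Anything never collected can't leak, be subpoenaed, or be misused later.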
verst•2h ago
I would not classify what Cambridge Analytica did as research. They were a data broker that used the data for political polling.
paxys•1h ago
> The New York Times and The Observer reported that the company had acquired and used personal data about Facebook users from an external researcher who had told Facebook he was collecting it for academic purposes.
tguvot•32m ago
> The data was collected through an app called "This Is Your Digital Life", developed by data scientist Aleksandr Kogan and his company Global Science Research in 2013.[2] The app consisted of a series of questions to build psychological profiles on users, and collected the personal data of the users' Facebook friends via Facebook's Open Graph platform.[2] The app harvested the data of up to 87 million Facebook profiles.
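The mechanism behind that number is the fan-out: one user's consent exposed their entire friend list. A toy sketch (invented graph, not Facebook's actual Open Graph API) of why a few consenting users expose many more profiles:

    def profiles_exposed(consenting_users, friends_of):
        """Count every profile reachable from the users who actually consented."""
        exposed = set(consenting_users)
        for user in consenting_users:
            exposed.update(friends_of(user))  # friends never opted in
        return len(exposed)

    # Toy graph: 3 installers expose friends who never touched the app.
    friends = {1: {10, 11, 12}, 2: {11, 13}, 3: {14}}
    print(profiles_exposed([1, 2, 3], friends.get))  # 8 profiles from 3 consents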
pms•59m ago
[1] https://www.techpolicy.press/x-polls-skew-political-realitie...
[2] https://zenodo.org/records/14880275