If big US tech companies can trivially use online ads to sway our elections with misinformation, without it being observable to anyone or possible to refute (as it would be for newspaper or TV ads), then why shouldn't they be monitored?
Also, minor detail, TikTok in the EU is not a US tech company.
[1] https://www.techpolicy.press/x-polls-skew-political-realitie...
instances of the EU interfering in elections
Do tell.
How can I, an end user who doesn't trust the ability of these researchers to keep my data private, prevent the platform from sharing my data with them?
Let me know when devs get stamps that make them legally liable for their decisions. Only then will that honor be applicable to software.
https://en.wikipedia.org/wiki/Regulation_and_licensure_in_en...
I've worked with engineers; what we generally do isn't engineering by the standards of those engineers.
Which isn't to say that no software development is.
People writing avionics software and medical software etc are doing what I’d recognise as engineering.
It’s about the process more than anything.
Software in the wild is simply a young field, and most of it isn't there yet.
> Collins Aerospace: Sending text messages to the cockpit with test:test
https://news.ycombinator.com/item?id=45747804
___
Think about how physical engineering orgs are formed. It's a collective of engineers, as it should be. The reason is that zero-consequence management abstraction layers cannot exist in a realm with true legal responsibility. Real engineering org structure is written in blood.
I wonder what it will take for software to face that reality. I know that lack of regulation leads to faster everything, and I really do appreciate and love that... but as software continues to eat the world, there will be real consequences eventually, right?
The reason that real engineering liability goes back to at least the Code of Hammurabi is that people got killed by bad decisions and corner cutting.
What will that look like in software history?
This is not a gotcha. My understanding is that bad physical engineering kills people. Is that your understanding as well?
As software takes over more and more control of everything... do you see what I am getting at? Or, not at all?
To be clear, my understanding is that physical professional engineer (PE) legal responsibility is not like the medical ethical code of "do no harm." It's just follow best practices and adopted standards, don't allow test:test login on things like fighter jets, etc. If you fail that, then there may be legal repercussions.
We have allowed software "engineering" to skip all levels of basic responsibility, haven't we?
And I suspect that if you instituted such a system today, the results wouldn't be what you'd like. Failures in complex engineering are typically multiple simultaneous failures, where any individual failure on its own would have been non-fatal. The bugs are always lurking, and when different systems interact in unpredictable ways you get a catastrophic failure. And the number of ways that N systems can interact is on the order of 2^N, so it's impossible to think of everything. Applying the Code of Hammurabi to software engineering wouldn't lead to safer software; it would give every engineer a lottery ticket every time they push a feature, and if the numbers come up, you die.
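The combinatorial claim above can be made concrete with a toy calculation (mine, not the commenter's): with N subsystems, the number of distinct groups of two or more that could interact grows as 2^N, which is why exhaustive testing of every combination quickly becomes infeasible.

```python
# Toy illustration of the 2^N argument: count the subsets of N
# systems that contain at least two members, i.e. the combinations
# in which an unforeseen interaction could occur.

def interaction_subsets(n: int) -> int:
    """Subsets of n systems with >= 2 members: 2^n minus the empty
    set and the n singletons."""
    return 2**n - n - 1

for n in (3, 10, 30):
    print(f"{n} systems -> {interaction_subsets(n)} possible interacting groups")
```

Even at 30 subsystems the count is over a billion, which is the commenter's point: no review process can enumerate them all.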
* researchers, because they will have to write data access applications, including a description of planned safeguards detailed enough that their university is ready to take on legal liability (and you can imagine how easy that will be), and
* digital service coordinators, because it will take ages for them to process applications from thousands of researchers each requesting a slightly different dataset.
In the end, we need to develop standardized datasets across platforms and to streamline data access processes so that they're safe and efficient.
At a couple of large public companies, I've had to react to a court ruling and stop an account's actions, work with the CA FTB on document requests, provide account activity as evidence in a case, things like that.
Delete all docs we aren't legally required to retain on topic Y before we get formally subpoenaed. We expect it to be on XXX based on our contact.
Sadly, big companies can bully scrapers with nuisance lawsuits, wearing them down in legal costs before anything gets to trial.
[1]https://www.proskauer.com/release/proskauer-secures-dismissa...
Also, we can have depolarized recommendation algorithms. We don't need to go all the way back to chronological timelines.
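One way to read "depolarized recommendation" is as a reranking step: score items by predicted engagement minus a penalty for predicted divisiveness. This is purely my sketch of the idea; the field names, scores, and the 0.5 weight are illustrative assumptions, not any real platform's models or API.

```python
# Hypothetical depolarized reranker: demote items whose engagement
# comes mostly from divisiveness. Both scores are assumed to come
# from upstream models and to live on a 0..1 scale.

from dataclasses import dataclass

@dataclass
class Item:
    id: str
    engagement: float    # predicted engagement, 0..1 (assumed)
    polarization: float  # predicted divisiveness, 0..1 (assumed)

def depolarized_rank(items: list[Item], penalty: float = 0.5) -> list[Item]:
    # Higher engagement helps an item; higher polarization hurts it.
    return sorted(items,
                  key=lambda i: i.engagement - penalty * i.polarization,
                  reverse=True)

feed = [Item("rage-bait", 0.9, 0.9),
        Item("news", 0.7, 0.2),
        Item("hobby", 0.55, 0.0)]
print([i.id for i in depolarized_rank(feed)])
```

With the penalty at 0 this degenerates to pure engagement ranking; tuning it up trades reach of divisive content for calmer feeds, without abandoning recommendation altogether.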
E.g., Dieselgate. Europe was more affected, but it was the US that caught Volkswagen cheating.
https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal#E...
There is no “good” answer. Each has its pros and cons.
I would have preferred that companies like this had emerged as federated entities, with European data staying in European data centers and subject to European laws. I think it would have avoided a lot of this, if they had not constructed themselves as a single US corporate shield, with transfer pricing on the side to maximise profit.
[1] https://www.techpolicy.press/x-polls-skew-political-realitie...
paxys•3h ago
shakna•3h ago
So it falls on those misusing the data, unless you knew it would be misused but collected it anyway.
Golden rule: Don't need the data? Don't collect it.
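That golden rule can be enforced mechanically at the point of ingestion: whitelist the fields you have a stated need for, so everything else is dropped before it is ever stored. A minimal sketch, with made-up field names:

```python
# Data-minimization sketch: only whitelisted fields survive
# ingestion. The field names here are illustrative assumptions,
# not from any real schema.

NEEDED_FIELDS = {"user_id", "timestamp", "event_type"}

def minimize(record: dict) -> dict:
    """Drop every field we don't have a stated need for."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

raw = {"user_id": "u1",
       "timestamp": 1700000000,
       "event_type": "click",
       "ip_address": "203.0.113.7",      # never needed, never stored
       "friends_list": ["u2", "u3"]}     # never needed, never stored
print(minimize(raw))
```

Data you never stored can't be breached, subpoenaed, or misused by a downstream researcher, which is the point of the rule.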
verst•3h ago
I would not classify Cambridge Analytica as research. They were a data broker that used the data for political polling.
paxys•2h ago
> The New York Times and The Observer reported that the company had acquired and used personal data about Facebook users from an external researcher who had told Facebook he was collecting it for academic purposes.
tguvot•1h ago
The data was collected through an app called "This Is Your Digital Life", developed by data scientist Aleksandr Kogan and his company Global Science Research in 2013.[2] The app consisted of a series of questions to build psychological profiles on users, and collected the personal data of the users' Facebook friends via Facebook's Open Graph platform.[2] The app harvested the data of up to 87 million Facebook profiles.
tguvot•40m ago
So even if it were happening today, whatever he did is irrelevant to the EU/DSA, unless they plan to chase everybody across the globe. Somewhat like Ofcom going after 4chan.
pms•2h ago
[1] https://www.techpolicy.press/x-polls-skew-political-realitie...
[2] https://zenodo.org/records/14880275