And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.
Why does anybody believe ANYthing OpenAI states?!
Edit: from the LinkedIn post, Meta is concerned about the growth of European companies:
"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."
Meta has never done and will never do anything in the general public's interest. All they care about is harvesting more data to sell more ads.
I'm no Meta apologist, but haven't they been at the forefront of open-source AI development? That seems to be in the "general public's interest".
Obviously they also have a business to run, so their public benefit can only go so far before they start running afoul of their fiduciary responsibilities.
https://artificialintelligenceact.eu/introduction-to-code-of...
It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.
Essentially, the goal is to establish a series of thresholds that result in significantly more complex and onerous compliance requirements, for example when a model is trained past a certain scale.
Burgeoning EU companies would be reluctant to cross any one of those thresholds and have to deal with sharply increased regulatory risks.
On the other hand, large corporations in the US or China are currently benefiting from a Darwinian ecosystem at home that allows them to evolve their frontier models at breakneck speed.
Those non-EU companies will then be able to enter the EU market with far more polished AI-based products and far deeper pockets to face any regulations.
US megatech funding our public infrastructure? Amazing. Especially after the US attacked us with tariffs.
Bad idea.
Europe is digging itself into a hole with a combination of suffocating regulation and dependence on foreign players. It's so dumb, but Europeans are so used to it they can't see the problem.
EU's preemptive war on AI will be like the RIAA's war on music piracy. EU consumers will get their digital stuff one way or another; only the EU's domestic products will fall behind by not competing to create an equally good product that consumers want.
I think they don't even know the term "model" (in AI context), let alone which one's the best. They only know ChatGPT.
I do think it's possible that stories spread like "the new cool ChatGPT update is US-only: Here's how to access it in the EU".
However I don't think many will make use of that.
Anecdotally, most people around me (even CS colleagues) only use the standard model, ChatGPT 4o, and don't even take a look at the other options.
Additionally, AI companies could quickly get in trouble if they accept payments from EU credit cards.
They don't know how torrents work either, but they always find a way to pirate movies to avoid Netflix's shitty policies. Necessity is the mother of invention.
>However I don't think many will make use of that.
You underestimate the drive kids/young adults have trying to maximize their grades/output while doing the bare minimum to have more time for themselves.
>Additionally, AI companies could quickly get in trouble if they accept payments from EU credit cards.
Well, if the EU keeps this up, that might not be an issue in the long term. Without top-of-the-line AI, choked by regulations, and with the cost of caring for an ageing demographic sucking up all the economic output, the EU economy will fall further and further into irrelevancy.
ChatGPT is more valuable than Instagram. I believe people will find a way.
My issue with this is that it doesn't look like America's laissez-faire stance on these issues has helped Americans much. Internet companies have gotten absolutely humongous and given rise to a new class of techno-oligarchs who are now funding anti-democracy campaigns.
I feel like getting slightly less performant models is a fair price to pay for increased scrutiny over these powerful private actors.
If Europe wants leverage, the best plan is to tell ASML to turn off the supply of chips.
What exactly is onerous about it?
Am I the only one who assumes by default that European regulation will be heavy-handed and ill conceived?
Perhaps it's easier to actually look at the points in contention to form your opinion.
Maybe some think that is a good thing - and perhaps it may be - but I feel it's more likely any regulation regarding AI at this point in time is premature, doomed for failure and unintended consequences.
How long can we let AI go without regulation? Just yesterday, there was a report here on Delta using AI to squeeze higher ticket prices from customers. Next up is insurance companies. How long do you want to watch? Until all accountability is gone for good?
Who's to say USB-C is the end-all-be-all connector? We're happy with it today, but Apple's Lightning connector had merit. What if two new, competing connectors come out in a few years' time?
The EU regulation, as-is, simply will not allow a new technically superior connector to enter the market. Fast forward a decade when USB-C is dead, EU will keep it limping along - stifling more innovation along the way.
Standardization like this is difficult to achieve via consensus - but via policy/regulation? These are the same governing bodies that hardly understand technology/internet. Normally standardization is achieved via two (or more) competing standards where one eventually "wins" via adoption.
Well intentioned, but with negative side-effects.
The EU says nothing about USB-C being the bestest and greatest, they only say that companies have to come to a consensus and have to have 1 port that is shared between all devices for the sake of consumers.
I personally much prefer USB-C over the horrid clusterfuck of proprietary cables that weren't compatible with one another, that's for sure.
If one company does, though, they're basically screwed, as I understand it.
As in: the EU regulation literally addresses this. You'd know it if you didn't blindly repeat uneducated talking points by others who are as clueless as you are.
> Standardization like this is difficult to achieve via consensus - but via policy/regulation?
In the ancient times of 15 or so years ago every manufacturer had their own connector incompatible with each other. There would often be connectors incompatible with each other within a single manufacturer's product range.
The EU said: settle on a single connector voluntarily, or else. At the time the industry settled on micro-USB and started working on USB-C. Hell, even Power Delivery wasn't standardized until USB-C.
Consensus doesn't always work. Often you do need government intervention.
If I had to pick a connector that the world was forced to use forever due to some European technocrat, I would not have picked usb-c.
Hell, the ports on my MacBook are nearly shot just a few years in.
Plus GDPR has created more value for lawyers and consultants than it has for EU citizens.
I don't know how this problem is so much worse with USB-C or the physics behind it, but it's a very common issue.
This port could be improved for sure.
I don't know if this is a fair comparison, just an anecdote.
Monetary value, certainly, but that’s considering money as the only desirable value to measure against.
That time and effort wasted on consultants and lawyers could have been spent on more important problems or used to more efficiently solve the current one.
You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?
Tell that to X which disables your ability to delete your account if it gets suspended.
Fun fact, GitHub doesn't have cookie banners. It's almost like it's possible to run a huge site without being a parasite and harvesting every iota of data of your site's visitors!
The European government has at least a passing interest in the well-being of human beings, while that is not valued by the incentives that corporations live by.
Are you still sure you want to side blindly with the EU?
You need some perspective - Meta wouldn't even crack the top 100 in terms of evil:
https://en.m.wikipedia.org/wiki/East_India_Company
https://en.wikipedia.org/wiki/Abir_Congo_Company
https://en.wikipedia.org/wiki/List_of_companies_involved_in_...
https://en.wikipedia.org/wiki/DuPont#Controversies_and_crime...
https://www.business-humanrights.org/en/latest-news/meta-all...
I'm not saying Meta isn't evil - they're a corporation, and all corporations are evil - but you must live in an incredibly narrow-minded and privileged bubble to believe that Meta is categorically more evil than all other evils in the span of human history combined.
Go take a tour of Dachau, look at the ovens, and realize what you're claiming: that that pales in comparison to targeted ads.
Just... no.
My original post was about all the comments saying they knew nothing about the regulation, but that they sided with Europe.
I think that gleeful ignorance caught me off guard.
Feels like I need to go find a tech site full of people who actually like tech instead of hating it.
I don't like meta or anything it has done, or stands for
You don't like open source ML (including or not including LLMs, depending on how you feel about them)
You don't like React?
You don't like PyTorch?
Like a lot of really smart and really dedicated people work on pretty cool stuff at Meta. You don't have to like Facebook, Instagram, etc to see that.
Plenty of great projects are developed by people working at Meta. Doesn't change the fact that the company as a whole should be split in at least 6 parts, and at least two thirds of these parts should be regulated to death. And when it comes to activities that do not improve anyone's life such as advertisement and data collection, I do mean literally regulated into bankruptcy.
The comment I responded to said Meta didn't do anything good, which is obviously affecting their opinion on whether Meta opposing European AI regulation can possibly be good. Certainly there's a lot of not great stuff Meta does.
Probably partly because reddit somehow seems to have become even worse over the last several years. So there are probably more people fleeing
https://news.ycombinator.com/item?id=44609135
That feeling is correct: this site is better without you. Please put your money where your mouth is and leave.
We don't like what trillion-dollar supranational corporations and infinite VC money are doing with tech.
Hating things like "We're saving your precise movements and location for 10+ years" and "we're using AI to predict how much you can be charged for stuff" is not hating technology
Europeans still essentially rely on Google, Meta and Amazon for most of their browsing experience. So I'm assuming Europe's goal is not to compete or break the American moat, but to force them to be polite and to preserve national sovereignty on important national-security aspects.
A position which is essentially reasonable if not too polite.
When push comes to shove the US company will always prioritize US interest. If you want to stay under the US umbrella by all means. But honestly it looks very short sighted to me.
After seeing this news https://observer.co.uk/news/columnists/article/the-networker..., how can you have any faith that they will play nice?
You have only one option. Grow alternatives. Fund your own companies. China managed to fund the local market without picking winners. If European countries really care, they need to do the same for tech.
If they don't they will forever stay under the influence of another big brother. It is US today, but it could be China tomorrow.
It was a decade too late and written by people who were incredibly out of touch with the actual problem. The GDPR is a bit better, but it's still a far bigger nuisance for regular European citizens than for the companies that still, largely unhindered, track and profile them all the same.
The odds of the EU actually hitting a useful mark with these types of regulations, given their technical illiteracy, are just astronomically low.
Newer regulations also mandate that "reject all cookies" should be a one click action but surprisingly compliance is low. Once again, the enemy of the customer here is the company, not the eu regulation.
And since most people click on accept, websites don't really care either.
Of course the business which depend on harvesting data will do anything they can to continue harvesting data. The regulation just makes that require consent. This is good.
If businesses are intent to keep on harvesting data by using dark patterns to obtain "consent", these businesses should either die or change. This is good.
The EU AI regulation establishes complex rules and requirements for models trained above 10^25 FLOPS. Mistral is currently the only European company operating at that scale, and they are also asking for a pause before these rules go into effect.
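To give a sense of scale (this sketch is illustrative, not from the Act's text): training compute is commonly approximated as 6 × parameters × training tokens, and the parameter and token counts below are hypothetical examples, not figures for any real model.

```python
# Rough sketch of where the EU AI Act's 10^25 FLOPS threshold bites,
# using the common "FLOPs ~= 6 * N * D" training-compute approximation.
# The model sizes and token counts below are made-up illustrations.

EU_AI_ACT_THRESHOLD = 1e25  # compute threshold triggering the stricter regime


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs (6 * N * D heuristic)."""
    return 6 * n_params * n_tokens


# A hypothetical mid-sized model: 70B parameters on 15T tokens
mid = training_flops(70e9, 15e12)       # 6.3e24 -> stays under the threshold

# A hypothetical frontier-scale model: 400B parameters on 15T tokens
frontier = training_flops(400e9, 15e12)  # 3.6e25 -> crosses the threshold

print(mid > EU_AI_ACT_THRESHOLD, frontier > EU_AI_ACT_THRESHOLD)
```

The point being that the threshold sits right around current frontier-scale training runs, so an EU company scaling up would cross it almost as soon as it became competitive.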
Meanwhile, nobody in China gives a flying fuck about regulators in the EU. You probably don't care about what the Chinese are doing now, but believe me, you will if the EU hands the next trillion-Euro market over to them without a fight.
True, but now they get to butt heads with the US, who call the tunes at ASML even though ASML is a European company.
We (the US) have given China every possible incentive to break that dependency short of dropping bombs on them, and it would be foolish to think the TSMC/ASML status quo will still hold in 5-10 years. Say what you will about China, they aren't a nation of morons. Now that it's clear what's at stake, I think they will respond rationally and effectively.
>Even in a lively discussion it was not compatible with Article 10 of the Convention to pack incriminating statements into the wrapping of an otherwise acceptable expression of opinion and claim that this rendered passable those statements exceeding the permissible limits of freedom of expression.
Although the expression of this opinion is otherwise acceptable, it was packed with "incriminating statements". But the subject of these incriminating statements is a 2000-year-old mythical figure.
Isn't that the exact definition of "blasphemy"[1]?
1984 wasn't supposed to be a blueprint.
It just creates barriers for internal players, while giving a massive head start for evil outside players.
Since you then admit to "assume by default", are you sure you are not what you complain about?
I, prior to reading the details of the regulation myself, was commenting on my surprise at the default inclinations of people.
At no point did I pass judgement on the regulation and even after reading a little bit on it I need to read more to actually decide whether I think it's good or bad.
Being American, it impacts me less, so it's lower on my to-do list.
I think perhaps you need to reread my comment or lookup "irony"
And that's the problem: assuming by default.
How about not assuming by default? How about reading something about this? How about forming your own opinion, not the opinion of trillion-dollar supranational corporations?
"Meta disagrees with European regulation"
That you don't have an immediate guess at which party you are most likely to agree with?
I do and I think most people do.
I'm not about to go around spreading my uninformed opinion though. What my comment said was that I was surprised at people's kneejerk reaction that Europe must be right, especially on HN. Perhaps I should have also chided those people for commenting at all, but that's hindsight for you.
Whereas EU's "heavy-handed and ill-conceived" regulations are "respect copyright, respect user choice, document your models, and use AI responsibly".
Meta is capable of having correct actions and opinions and you are grossly oversimplifying the EU regulation.
Sometimes the regulations are heavy-handed and ill-conceived. Most of the time, they are influenced by one lobby or another. For example, car emissions limits scale with _weight_ of all things, which completely defeats the point and actually makes today's car market worse for the environment than it used to be, _because of_ emissions regulations. However, it is undeniable that the average European is better off in terms of privacy.
So then it's something completely worthless in the globally competitive cutthroat business world, that even the companies who signed won't follow, they just signed it for virtue signaling.
If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.
"Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.
This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.
But well, I wouldn't expect Meta to sign into it either.
European aristocrats just decided that you shall now be subjects again, and Europeans said OK. It's kind of astonishing how easy it was, and most Europeans I've met almost violently reject that notion despite the fact that it's exactly what happened, as they still haven't really grasped just how much Brussels is stuffing them.
In a legitimate system it would need to be up to each sovereign state to decide something like that, but in contrast to the US, there is absolutely nothing that limits the illegitimate power grab of the EU.
I am happy to inform you that the EU actually works according to treaties which basically cover every point of a constitution and has a full set of courts of law ensuring the parliament and the European executive respect said treaties and allowing European citizens to defend their interests in case of overreach.
> European aristocrats just decided
I am happy to inform you that the European Union has a democratically elected parliament voting its laws and that the head of commission is appointed by democratically elected heads of states and commissioners are confirmed by said parliament.
If you still need help with any other basic fact about the European Union don’t hesitate to ask.
This is not meant to be an insult or a competition either, it is a caution that you are being conned and scammed if you think that the EU has anything to do with democracy, let alone the will of the people. And no, America and its people are under equal attack as the EU, even if by somewhat different methods and schemes due to the nature of our system. The very nature of self-determination is under direct assault behind the smoke and mirrors you are refusing or are unable to look past/through.
I am happy to inform you that it is always so surprising to me that your type is so quick to make excuses for and rationalize the subversion of and control over your own lives, government, and freedom. The parliament does not make laws, it only votes on them, and the commission president is not exactly elected democratically either, since all those who propose and vote on them are themselves several steps removed from the voters by various means in various nations.
Take Germany and many other member states (which is really what you are willingly and inexplicably subjugating yourself to): the party-politics system so totally dominates you that even the heads of state and the heads of the parties are far removed from the citizens, so it is not really democratic even at that level, let alone at the level where the EU commission president is appointed and approved. It's basically a neo-aristocratic system operating by methods you have not quite caught onto yet. Has it never dawned on you that it is odd how many children of politicians and aristocrats somehow managed to get into the EU parliament, commission, and bureaucracy? That's just a coincidence? Any representatives of the Common (as the Brits still blatantly call it) are basically ineffective on a good day. The only conclusion a logical, rational, objective person can come to is that it is really all just a con job.
You do know what a con job means right? It comes from confidence trick, tricking you into having confidence into something like investing or buying something or believing you have a say in the system of the EU that dominates your life when you don't. It's just you pushing buttons that are not connected to anything!
You were not given an option to choose how the EU would be structured, the ruling class just made up rules that served themselves over several years and treaties and then let you "vote" in a system that is hardly even linked into the actual legislative structure. Or do you think the voters all voted for the EU commission to not be an elected body, but rather an appointed one?
You seem to be the quintessential representation of the blind man feeling the trunk of the elephant and declaring you are certain it is a snake.
If aristocratic figures had so much power in the EU, they wouldn't be fleeing from the union.
In reality, the US is plagued with greed, scams, mafias in all sectors, human rights violations, and an economy that's like a house of cards. In contrast, you feel human when you're in the EU. You have a voice, rights, and common sense!
It definitely has its flaws, but at least the presidents there are not rug-pulling their own citizens and giving pardons to crypto scammers... Right?
As much as it burns Europeans' sensibilities, the fact of the matter is that the protections of the constitution have benefited all of humanity for so long that people no longer appreciate what it has provided them, nor understand what existed before America imposed its constitutional values, regardless of how flawed they are or how battered and undermined they have become in the USA today.
in this case, it is clear that the EU policy resulted in cookie banners
Meta is hardly to blame here; it is the site owners that choose to add Meta tracking code to their site and therefore have to disclose it and opt in the user via "cookie banners".
If he disagrees with EU values so much, he should just stay out of the EU market. It's a free world, nobody forced him to sell cars in the EU.
Those banners suck and I wouldn't mind if the EU rolled back that law and tried another approach. At the same time, it's fairly easy to add an extension to your browser that hides them.
Legislation won't always work. It's complex, and human behavior is somewhat unpredictable. We've let tech run rampant up to this point; it's going to take some time to figure out how to best control them. Throwing up our hands because it's hard to protect consumers from powerful multi-national corporations is a pretty silly position imo.
maybe people have rationally compared the harm done by those two
Of course it has nothing to do with rationality. They're mad at the first thing they see, akin to the smoker who blames the regulators when he has to look at a picture of a rotten lung on a pack of cigarettes
Being angry at a popup that merely makes transparent what a company tries to collect from you, and gives you the explicit option to say no, is just infantile. It basically amounts to saying that you don't want to think about how companies are exploiting your data, and that you're a sort of internet browsing zombie. That is certainly a lot of things, but it isn't rational.
And consumers will bear the brunt.
One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].
> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
[1] https://www.lw.com/en/insights/2024/11/european-commission-r...
Just as the politicians aren't futurists and aren't perfect, neither is anyone else. Everyone should sit at the table and discuss this like adults.
You want to go live in the hills alone, go for it, Dick Proenneke. Society is people working collectively.
For reference, see every highly-regulated industry everywhere.
You think Sam Altman was in testifying to the US Congress begging for AI regulation because he's just a super nice guy?
If you are paying for lobbyists to write the legislation you want, as corporations do, you get the law you want - that excludes competition, funds your errors etc.
The point is you are not dealing with 'humanity', you are dealing with those who represent authority for humanity - not the same thing at all. Connected politicians/CEOs etc are not actually representing 'humanity' - they merely say that they are doing so, while representing themselves.
But what started as a good thing has become a tool of those same companies to prevent competition. How we regulate needs to be rethought beyond the simplistic "more = better"
Technology changes very quickly and the future of things is hardly decided by entrenched interests.
It's not just IT. Ask any EU farmer.
There's a certain hubris to applying rules and regulations to a system that you fundamentally don't understand.
The moment the EU shows even a small sign of protectionism, the US complains. It's a double standard.
Arguably that worked. :-)
Preferable to a burgeoning oligarchy.
In a rigidly regulated market with preemptive action by regulators (like EU, Japan) you end up with a persistent oligarchy that is never replaced. An aristocracy of sorts.
The middle road is the best. Set up a fair playing field and rules of the game, but allow innovation to happen unhindered, until the dust has settled. There should be regulation, but the rules must be bought with blood. The risk of premature regulation is worse.
That's an awfully callous approach, and displays a disturbing lack of empathy toward other people.
one day some historian will be able to pinpoint the exact point in time that europe chose to be anti-progress and fervent traditionalist hell-bent on protecting pizza recipes, ruins of ancient civilization, and a so-called single market. one day!
Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.
(+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.
Your point is fundamentally philosophical, which is you can't use the past to predict the future. But that's actually a fairly reductive point in this context.
GP's point is that simply making an argument about why everything will fail is not sufficient to have it be true. So we need to see something significantly more compelling than a bunch of arguments about why it's going to be really bad to really believe it, since we always get arguments about why things are really, really bad.
Of course you can use the past to predict (well, estimate) the future. How fast does wheat grow? Collect a hundred years of statistics of wheat growth and weather patterns, and you can estimate how fast it will grow this year with a high level of accuracy, unless a "black swan" event occurs which wasn't in the past data.
Note carefully what we're doing here: we're applying probability on statistical data of wheat growth from the past to estimate wheat growth in the future.
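That kind of extrapolation from past data can be sketched in a few lines; the yield figures below are made-up illustrative numbers, not real agricultural statistics.

```python
# Sketch: ordinary least-squares fit of historical yield data,
# extrapolated one year ahead. Yields are invented for illustration.
years = list(range(2000, 2010))
yields_t_per_ha = [2.8, 2.9, 3.0, 2.7, 3.1, 3.2, 3.1, 3.3, 3.4, 3.5]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(yields_t_per_ha) / n

# Least-squares slope and intercept computed from mean-centered sums
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(years, yields_t_per_ha)) \
    / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Extrapolate to 2010 -- only valid while past patterns persist
forecast_2010 = slope * 2010 + intercept
print(round(forecast_2010, 2))
```

The fit is only as good as the assumption that the process generating the data hasn't changed, which is exactly the assumption a "black swan" violates.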
There's no past data about the effects of AI on society, so there's no way to make statements about whether it will be safe in the future. However, people use the statistic that other, completely unrelated, things in the past didn't cause "doom" (societal collapse) to predict that AI won't cause doom. But statistics and probability don't work this way: using historical data about one thing to predict the future of another thing is a fallacy. Even if in our minds they are related (doom/societal collapse caused by a new technology), mathematically they are not related.
> we always get arguments about why things are really, really bad.
When we're dealing with a completely new, powerful thing that we have no past data on, we absolutely should consider the worst, and of course, the median, and best case scenarios, and we should prepare for all of these. It's nonsensical to shout down the people preparing for the worst and working to make sure it doesn't happen, or to label them as doomers, just because society has survived other unrelated bad things in the past.
Innovation is good, but there's no need to go as fast as possible. We can be careful about things and study the effects more deeply before unleashing life changing technologies into the world. Now we're seeing the internet get destroyed by LLMs because a few people decided it was ok to do so. The benefits of this are not even clear yet, but we're still doing it just because we can. It's like driving a car at full speed into a corner just to see what's behind it.
Personally, I don’t think they’re bad. Plastic isn’t that harmful, and neither is social media.
I think people romanticize the past and status quo. Change is scary, so when things change and the world is bad, it is easy to point at anything that changed and say “see, the change is what did it!”
This is handwaving. We can be pretty well sure at this point what the effects aren’t, given their widespread prevalence for generations. We have a 2+ billion sample size.
You're hypothesizing the existence of large negative effects with minimal evidence.
But the positive effects of plastics and social media are extremely well understood and documented. Plastics have revolutionized practically every industry we have.
With that kind of pattern of evidence, I think it makes sense to discount the negatives and be sure to account for all the positives before saying that deploying the technology was a bad idea.
Why take such risks when we could take our time doing more studies and thinking about all the possible scenarios? If we did, we might use plastics where they save lives and not use them in single-use containers and fabrics. We'd get most of the benefit without any of the harm.
Do you think Heroin is good?
While some people get addicted to it, the vast majority of users are not addicts. They choose to use it.
WHAT?! Do you think we as humanity would have gotten to all the modern inventions we have today like the internet, space travel, atomic energy, if we had skipped the fossil fuel era by preemptively regulating it?
How do you imagine that? Unless you invent a time machine, go to the past, and give inventors schematics of modern tech achievable without fossil fuels.
Simply taking away some giant precursor for the advancements we enjoy today and then assuming it all would have worked out somehow is a bit naive.
I would need to see a very detailed pipeline from growing wheat in an agrarian society to the development of a microprocessor without fossil fuels to understand the point you're making. The mining, the transport, the manufacture, the packaging, the incredible number of supply chains, and the ability to give people time to spend on jobs like that rather than trying to grow their own food are all major barriers I see to the scenario you're suggesting.
The whole other aspect of this discussion that I think is not being explored is that technology is fundamentally competitive, and so it's very difficult to control the rate at which technology advances because we do not have a global government (and if we did have a global government, we'd have even more problems than we do now). As a comment I read yesterday said, technology concentrates gains towards those who can deploy it. And so there's going to be competition to deploy new technologies. Country-level regulation that tries to prevent this locally is only going to lead to other countries gaining the lead.
Regarding competition, yes you're right. Effective regulation is impossible before we learn global co-operation, and that's probably never going to happen.
Historically, all nations that developed and deployed new tech, new sources of energy and new weapons, have gained economic and military superiority over nations who did not, which ended up being conquered/enslaved.
The UK would not have managed to be the world power before the US without its coal-fueled industrial era.
So as history goes, if you refuse to take part in, or cannot keep up with, the international tech, energy and weapons race, you'll be subjugated by those who win it. That's why the US lifted all brakes on AI: to make sure it wins and not China. What the EU is doing, regulating itself to death, is ensuring its future will be at the mercy of the US and China. I'm not the one saying this; history proves it.
But if such co-operation was possible, it would make sense to progress more carefully.
It's been the case since our caveman days. That's why tribes that don't focus on conquest end up removed from the gene pool. Now extend tribe to nation to make it relevant to the current day.
Space travel does need a lot of oil, so it might be affected, but its beginnings were in the 40s, so the research idea was already there.
Atomic energy is also from the 40s and might have been the alternative to oil, so it would have thrived more if we hadn't used oil so much.
Also all 3 ARE heavily regulated and mostly done by nation states.
Your argument only works in a fictional world where oil does not exist and you have the hindsight of today.
But oil does exist, and if you had chosen not to use it, you would long since have been steamrolled by industrialized powers who used their superior oil-fueled economies and militaries to destroy or enslave your nation, and you wouldn't be writing this today.
> How would you have won the world wars without oil?
You don't need to win world wars to have technological advancement; in fact, my country didn't. I think the problem with this discussion is that we all disagree about what to regulate. That's how we ended up with the current situation, after all.
I interpreted it to mean that we wouldn't use plastic for everything. I think we would be fine with glass bottles and with paper, carton or wood for grocery wrapping. Packaging wouldn't be so individualised per company, but that's not important for the economy or for consumers, and it would also result in a more competitive market.
I also interpreted it to mean that we wouldn't have so many cars and wouldn't use planes except for really important things (e.g. international politics). Cities simply expand to the travel speed of the primary means of transportation, so we would have more walkable cities and would use more trains. Amazon probably wouldn't be possible and we would have more local producers. In fact, this is what we currently aim for, and it is hard, because the transition means we have larger cities than our primary means of transportation can support.
As for your example inventions: we did have computers in the 40s and the need for networking would arise. Space travel is in danger, but you can use oil for space travel without using it for everyday consumer products. As I already wrote, we would have more atomic energy, not sure if that would be good though.
So I disagree; Europe would probably be even further behind in EVs if it didn't push EU manufacturers to invest so heavily in the industry.
You can see, for example, that among legacy manufacturers the only ones in the top ten are European (3 out of 10 companies), not Japanese or Korean. And in Europe, Volkswagen already overtook Tesla in Q1 sales, and Audi isn't that far behind either.
The YouTube deal was a lot earlier than Instagram, 2006. Google was way smaller than now. iPhone wasn’t announced. And it wasn’t two social networks merging.
Very hard to see how regulators could have the clairvoyance to see into this specific future and its counter-factual.
Technically untrue, monopoly busting is a kind of regulation. I wouldn't bet on it happening on any meaningful scale, given how strongly IT benefits from economies of scale, but we could be surprised.
In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years. It also slowed down all enterprise deals because no one knew if a deal was going to be against GDPR and the lawyers defaulted to “no” in those orgs.
Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in mandarin.
It's the same regulation that was introduced in 2016. The only people who pretend not to understand it are those who think that selling user data to 2000+ "partners" is privacy.
Oh, we already know large chunks of it, and the regulations explicitly address that.
If the chest-beating crowd were presented with these regulations piecemeal, without the EU ever being mentioned, they'd probably be in overwhelming support of each part.
But since they don't care to read anything and have an instinctive aversion to all things regulatory and most things EU, we get the boos and the jeers
This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.
It was made impractical by ad platforms and others who decided to use dark patterns, FUD and malicious compliance to deceive users into agreeing to be tracked.
Instead of exactly as you say: a global browser option.
As someone who has had to implement this crap repeatedly - I can’t even begin to imagine the amount of global time that has been wasted implementing this by everyone, fixing mistakes related to it and more importantly by users having to interact with it.
The law is written to encourage such defaults if anything, it just wasn't profitable enough I guess.
I'm fully supportive of consent, but the way it is implemented is impractical from everyone's POV, and I stand by that.
The conversation is not about my opinion on tracking, anyway. It’s about the impracticality of implementing the legislation that is hostile and time consuming for both website owners and users alike
Drug trafficking, stealing, scams are massive revenue for gangs.
kwaigdc7 @ gmail.com
This part gave me a genuine laugh. Good joke.
adjusts tinfoil hat
playing with semantics makes you sound smart though!
The original idea was that it should be legal to track people because it is OK in the analog world. But it really isn't, and I'm glad it is illegal in the EU. I think it should be in the US too, but the EU can't change that, and I have no right to political influence over foreign countries, so that doesn't matter.
Watching what is bought is fine, but walking around to do that is useless work, because you have that in the accounting/sales data already.
There is stuff like PayPal, and now per-company apps, that work the same as on the web: you first need to sign a contract. I would rather that be cracked down on, but I see that it is difficult, because you can't forbid individual choice. But I think the incentive is that products become cheaper when you opt in to data collection. This is already forbidden, though: you can't tie consent to other benefits, because then it isn't freely given consent anymore. I expect a lawsuit in the next decades.
I don't think practical is the right word here. All the businesses in the world operated without tracking until the mid 90s.
I'm constantly clicking away cookie banners on UK government or NHS (our public healthcare system) websites. The ICO (UK privacy watchdog) requires cookie consent. The EU Data Protection Supervisor wants cookie consent. Almost everyone does.
And you know why that is? It's not because they are scammy ad funded sites or because of government surveillance. It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors.
This is impractical, unreasonable, counterproductive and unintelligent.
It keeps the political grifters who make these regulations employed, that's kind of the main point in EU/UKs endless stream of regulations upon regulations.
Yup. That's what those 2000+ "partners" are all about if you believe their "legitimate interest" claims: "improve traffic"
This is a personal decision to be made by the data "donor".
The NHS website cookie banner (which does have a correct implementation in that the "no consent" button is of equal prominence to the "mi data es su data" button) says:
> We'd also like to use analytics cookies. These collect feedback and send information about how our site is used to services called Adobe Analytics, Adobe Target, Qualtrics Feedback and Google Analytics. We use this information to improve our site.
In my opinion, it is not, as described, "completely reasonable" to consider such data hand-off to third parties as implicitly consented to. I may trust the NHS but I may not trust their partners.
If the data collected is strictly required for the delivery of the service and is used only for that purpose and destroyed when the purpose is fulfilled (say, login session management), you don't need a banner.
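In code terms, the distinction could be sketched like this (hypothetical purpose tags; the exact boundary of "strictly necessary" is precisely what regulators argue about):

```python
# Hypothetical purpose tags; session/CSRF cookies are the clear-cut
# "strictly necessary" cases, analytics and ads are not.
STRICTLY_NECESSARY = {"session", "csrf", "load_balancing"}

def needs_consent_banner(cookie_purposes) -> bool:
    """A banner is only needed if some cookie serves a purpose
    outside the strictly-necessary set (analytics, ads, ...)."""
    return any(p not in STRICTLY_NECESSARY for p in cookie_purposes)
```

A login-only site (`{"session", "csrf"}`) needs no banner; adding `"analytics"` flips the answer.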
The NHS website is in a slightly tricky position, because I genuinely think they will be trying to use the data for site and service improvement, at least for now, and they hopefully have done their homework to make sure Adobe, say, are also not misusing the data. Do I think the same from, say, the Daily Mail website? Absolutely not, they'll be selling every scrap of data before the TCP connection even closes to anyone paying. Now, I may know the Daily Mail is a wretched hive of villainy and can just not go there, but I do not know about every website I visit. Sadly the scumbags are why no-one gets nice things.
My problem is that users cannot make this personal decision based on the cookie consent banners because all sites have to request this consent even if they do exactly what they should be doing in their users' interest. There's no useful signal in this noise.
The worst data harvesters look exactly the same as a site that does basic traffic analysis for basic usability purposes.
The law makes it easy for the worst offenders to hide behind everyone else. That's why I'm calling it counterproductive.
[Edit] Wrt NHS specifically - this is a case in point. They use some tools to analyse traffic in order to improve their website. If they honour their own privacy policy, they will have configured those tools accordingly.
I understand that this can still be criticised from various angles. But is this criticism worth destroying the effectiveness of the law and burying far more important distinctions?
The law makes the NHS and the Daily Mail look exactly the same to users as far as privacy and data protection are concerned. This is completely misleading, don't you think?
If they only do this, they don't need to show anything.
And this is the crux of the problem. The law helps a tiny minority of people enforce an extremely (and in my view pointlessly) strict version of privacy at the cost of misleading everybody else into thinking that using analytics for the purpose of making usability improvements is basically the same thing as sending personal data to 500 data brokers to make money off of it.
What exactly do you think should be allowed that still respects privacy, which isn't allowed now?
I don't care about anything else. They can do whatever A/B testing they want as far as I'm concerned. They can analyse my user journey across multiple visits. They can do segmentation to see how they can best serve different groups of users. They can store my previous search terms, choices and preferences. If it's a shop, they can rank products according to what they think might interest me based on previous visits. These things will likely make the site better for me or at least not much worse.
Other people will surely disagree. That's fine. What's more important than where exactly to draw the line is to recognise that there are trade-offs.
The law seems to be making an assumption that the less sites can do without asking for consent the better most people's privacy will be protected.
But this is a flawed idea, because it creates an opportunity for sites to withhold useful features from people unless and until they consent to a complete loss of privacy.
Other sites that want to provide those features without complete loss of privacy cannot distinguish themselves by not asking for consent.
Part of the problem is the overly strict interpretation of "strictly necessary" by data protection agencies. There are some features that could be seen as strictly necessary for normal usability (such as remembering preferences) but this is not consistently accepted by data protection agencies so sites will still ask for consent to be on the safe side.
What you could then add to this system is a certification scheme permitting implicit consent where all the data handling (including who you hand the data off to and what they are allowed to do with it, as well as whether they have demonstrated themselves to be trustworthy) is audited to be compliant with some more stringent requirements. It could even be self-certification along the lines of CE marking. But that requires strict enforcement, and the national regulators have so far been a bunch of wet blankets.
That actually would encourage organisations to find ways to get the information they want without violating the privacy of their users and anyone else who strays into their digital properties.
But since other information isn't absent, we know that they are not the same. Just compare their privacy policies, for instance. The cookie law makes them appear similar in spite of the fact that they are very different (as of now; who knows what will happen to the NHS).
I would also be in favour of companies having to report all their negative data protection judgements against them and everyone they will share your data with in their cookie banner before giving you the choice as to whether you trust them.
I'm not against improving the system, and I even proposed something, but I am against letting data abusers run riot because the current system isn't quite 100% perfect.
I'll still take what we have over what we had before (nothing, good luck everyone).
But with a little bit of reading, one could ultimately summarise the enormous wall of text simply as: “We’ve added your email address to a marketing list, click here to opt out.”
The huge wall of text email was designed to confuse and obfuscate as much as possible with them still being able to claim they weren’t breaking protection of personal information laws.
It is pretty clear
The same is true of privacy policies. I’ve seen some companies have very short policies I could read in less than 30s, those companies are not suspicious.
Long policies can be needed depending on the litigiousness of the working environment. I used to work in an industry where that was beyond common, and it was required to be defensible in court. Accounting for factors that aren't commonly known like biases of judges towards individuals vs companies.
It really comes down to the legal landscape.
But yes, perhaps they should have worked with e.g. Mozilla to develop some kind of standard browser interface for this.
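Something along these lines now exists as the Global Privacy Control signal: supporting browsers send a `Sec-GPC: 1` request header, which a server could honour instead of showing a banner. A minimal server-side sketch (the dict-of-headers shape is an assumption, not any particular framework's API):

```python
def gpc_opt_out(headers: dict) -> bool:
    """True if the browser sent a Global Privacy Control opt-out
    signal (the Sec-GPC request header is "1" when opted out)."""
    return headers.get("Sec-GPC") == "1"

# A site could then skip loading analytics/ad scripts entirely
# for opted-out visitors instead of asking via a consent banner.
```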
The main reason they need the banner is because they show you full page popups to ask you to take surveys about unrelated topics like climate action. They need consent to track whether or not you've taken these surveys
Their banner is just as bad as any other I have seen, it covers most of the page and doesn't go away until I click yes. If you're trying to opt out of cookies on other sites, that's probably why it takes you longer (just don't do that).
A website that sticks to being a website does not need cookie banners.
Are there any websites that don't require these banners?
If you allow users to set font size or language you need a banner btw
It's usually a click or two to "reject all" or similar with serious organisations. Some German corporations are nasty and conflate paywalls with data collection and processing consent.
You just made the model not open source
You seem not to have understood that different forms of appliances need to comply with different forms of law. And whether you can call it open source or not doesn't change anything about its legal aspects.
And every law written is a compromise between two opposing parties.
I think you've got civil and common law the wrong way round :). US judges have _much_ more power to interpret law!
> When interpreting EU law, the CJEU pays particular attention to the aim and purpose of EU law (teleological interpretation), rather than focusing exclusively on the wording of the provisions (linguistic interpretation).
> This is explained by numerous factors, in particular the open-ended and policy-oriented rules of the EU Treaties, as well as by EU legal multilingualism.
> Under the latter principle, all EU law is equally authentic in all language versions. Hence, the Court cannot rely on the wording of a single version, as a national court can, in order to give an interpretation of the legal provision under consideration. Therefore, in order to decode the meaning of a legal rule, the Court analyses it especially in the light of its purpose (teleological interpretation) as well as its context (systemic interpretation).
https://www.europarl.europa.eu/RegData/etudes/BRIE/2017/5993...
I'm not sure why you and GP are trying to use this point to draw a contrast to the US? That very much is a feature in US law as well.
Blaming tools for the actions of their users is stupid.
In some cases they can be prompted to guess a number of tokens that follow an excerpt from another work.
They do not contain all copyrighted works, though. That’s an incorrect understanding.
Commercial use of someone's image also already has laws concerning that as far as I know, don't they?
LLMs extract semantic information from their training data and store it at extremely low precision in latent space. To the extent original works can be recovered from them, those works were nothing intrinsically special to begin with. At best such works simply milk our existing culture by recapitulating ancient archetypes, a la Harry Potter or Star Wars.
If the copyright cartels choose to fight AI, the copyright cartels will and must lose. This isn't Napster Part 2: Electric Boogaloo. There is too much at stake this time.
It's not like users are accidentally producing copies of Harry Potter.
They're not really "blaming" the tool though. They're using a supply chain attack against the subset of users they're interested in.
In those places, fees (a "reprographic levy") are actually included in the price of the appliance and the needed supplies, or public operators may need to pay additionally based on usage. That money goes into funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.
Xerox is in no way singled out and discriminated against. (Yes, I know this is an Americanism)
To be clear, I don't have any particular insight on whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense, though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.
> > GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
It sounds to me like an LLM as you describe would be covered if the people distributing it put a clause in the license saying that people can't do that.
I find non-literal copyrights (total concept and feel, abstraction filtration comparison/AFC) to be a perverse way to interpret "protected expression" as "protected abstraction". It is a betrayal of future creative activities to prop up the past ones.
LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing if it can predict a number of tokens that follow. It's a big stretch to say that they're reliably reproducing copyrighted works, any more than, say, a Google search producing a short excerpt of a document in the search results, or a blog writer quoting a section of a book.
It’s also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should YCombinator be held responsible?
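For what it's worth, the "predict the continuation" test described above can be written down as a toy memorization probe. This is an illustrative sketch only: word-level "tokens" and the `generate` callable stand in for a real tokenizer and model.

```python
def memorization_score(generate, text, prefix_frac=0.8):
    """Crude memorization probe: feed the model the first ~80% of a
    passage and measure what fraction of the held-out tokens it
    predicts verbatim. `generate` is any callable prompt -> text."""
    tokens = text.split()  # word-level "tokens" for illustration
    cut = int(len(tokens) * prefix_frac)
    prefix, suffix = tokens[:cut], tokens[cut:]
    predicted = generate(" ".join(prefix)).split()[:len(suffix)]
    matches = sum(p == s for p, s in zip(predicted, suffix))
    return matches / len(suffix) if suffix else 0.0
```

A model that has fully memorized the passage scores 1.0; one that produces unrelated text scores near 0. Real studies use far longer prefixes and exact token IDs, which is why casual use almost never reproduces a work.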
Only because the companies are intentionally making it so. If they weren't trained to not reproduce copyrighted works they would be able to.
The other thing is that approximately all of the training set is copyrighted, because that's the default even for e.g. comments on forums like this comment you're reading now.
The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.
If you cannot see the difference between BitTorrent and AI models, then it's probably not worth engaging with you.
But AI models have been shown to reproduce their training data:
https://gizmodo.com/ai-art-generators-ai-copyright-stable-di...
When a model that has this capability is being distributed, copyright infringement is not happening. It is happening when a person _uses_ the model to reproduce a copyrighted work without the appropriate license. This is not meaningfully different to the distinction between my ISP selling me internet access and me using said internet access to download copyrighted material. If the copyright holders want to pursue people who are actually doing copyright infringement, they should have to sue the people who are actually doing copyright infringement and they shouldn't have broad power to shut down anything and everything that could be construed as maybe being capable of helping copyright infringement.
Copyright protections aren't valuable enough to society to destroy everything else in society just to make enforcing copyright easier. In fact, considering how it is actually enforced today, it's not hard to argue that the impact of copyright on modern society is a net negative.
Meta, OpenAI, Nvidia, Microsoft and Google don't care about people. They care about control: controlling influence, knowledge and universal income. That's the endgame.
Just like in the US, the EU has brilliant people working on regulations. The difference is, they're not always working for the same interests.
The world is asking for US big tech companies to be regulated more now than ever.
Facebook's power comes from how it gathered and monetised data, how it acquired rivals like Instagram and WhatsApp, and how it locked in network effects.
If regulators had blocked those acquisitions or enforced stricter antitrust and data privacy rules, there's a chance the social media landscape today would be more competitive. Politicians and regulators probably either received some kind of incentive or simply didn't get it. They didn't see how dangerous Zuck's greedy algorithms would become. They thought it was just a social site. They had no idea what Facebook employees were building behind the scenes. By the time they realised, it was already too late.
China was the only one that acted. The US and EU looked the other way. If they'd stepped in back in 2009 with rules on privacy, neutrality, and transparency, today's internet could've been a lot more open and competitive.
Having to petition for monopoly rights on an individual basis is nothing like copyright, where the entire point is to avoid having to ask for exceptions by creating a right.
Anyway, the show must go on, so we're unlikely to see any reversal of this. It's a big experiment, and not necessarily one that will benefit even the model providers themselves in the medium term. It's clear that the "free for all" policy on grabbing whatever data you can get is already having chilling effects: from artists and authors not publishing their works publicly, to the locking down of the open web with anti-scraping. We're basically entering an era of adversarial data management, with incentives to exploit others for data while protecting the data you have from others accessing it.
In limited, closed systems, it may not escape, but all it takes is one bad (or hacked) actor and the privacy of it is gone.
In a way, we used to be "protected" because it was "too big" to process, store, or access "everything".
Now, especially with an economic incentive to vacuum literally all digital information, and many works being "digital first" (even a word processor vs a typewriter, or a PDF that is sent to a printer instead of lithograph metal plates)... is this the information Armageddon?
They've just mastered the art of lying to gullible idiots or complicit sycophants.
It's not new to anyone who pays any kind of attention.
But it is also increasingly dubious that the public gets a good deal out of copyright law anyway.
> From artists and authors not publishing their works publicly
The vast majority of creators have never been able to get remotely close to make a living from their creative work, and instead often when factoring in time lose money hand over fist trying to get their works noticed.
>Copyright is 1) presented as being there to protect the interests of the general public, not creators,
Yes, in the U.S. In the EU, creators have moral rights to their works, and the law is there to protect their interests.
There are actually both moral rights and rights of exploitation; in the EU you can transfer the latter but not the former.
>But it is also increasingly dubious that the public gets a good deal out of copyright law anyway.
In the EU's view of copyright the public doesn't need to get a good deal, the creators of copyrighted works do.
And when we talk about copyright we generally talk about the rights of exploitation, where the rationale used today is the advancement of arts and sciences - a public benefit. There's a reason the name in English is copy-right, while the other Germanic languages focus more on the work - in the Anglosphere, the notion of moral rights as separate from rights of exploitation is well outside the mainstream.
> In the EU's view of copyright the public doesn't need to get a good deal, the creators of copyrighted works do.
Most individual nations copyright law still does uphold the pretence of being for the public good, however. Without that pretence, there is no moral basis for restricting the rights of the public the way copyright law does.
But it has nevertheless been abundantly clear all the way back to the Statute of Anne that any talk of either public good or rights of exploitation for the creator is an excuse, and that these laws, if anything, mostly exist for the protection of business interests.
I of course do not know all the individual EU countries' rules, but my understanding was that the EU's view is what it is because it derives at least partly from the previous understanding of its member nations. So the earlier French laws, before the ratification and implementation of the EU directive on authors' rights in Law #92-597 (1 July 1992), were also focused on the understanding that creators have creators' rights and that protecting these was the purpose of copyright law, and this pattern generally held throughout EU lands (at least any lands currently in the EU; I suppose pre-Brexit this was not the case).
You probably have some other examples but in my experience the European laws have for a long time held that copyright exists to protect the rights of creators and not of the public.
French law, similar to e.g. Norwegian and German law, separated moral and proprietary rights.
Moral rights are not particularly relevant to this discussion, as they relate specifically to rights to e.g. be recognised as the author, and to protect the integrity of a work. They do not relate to actual copying and publication.
What we call copyright in English is largely proprietary/exploitation rights.
The historical foundation of the latter is firmly one of first granting rights on a case-by-case basis, often to printers rather than creators, and then the Statute of Anne, which explicitly stated the goal of "encouragement of learning" right in the title of the act. This motivation was later made explicit in, e.g., the US Constitution.
Since you mention France, the National Assembly after the French Revolution took the stance that works by default were public property, and that copyright was an exception, in the same vein as per the Statute of Anne and US Constitution ("to promote the progress of science and useful arts").
Depository laws etc., which are near universal, are also firmly rooted in this view that copyright is a grant of rights provided on a quid pro quo basis: the work needs to be secured for the public for the future, irrespective of continued commercial availability.
Doesn't matter; both the "public interest" and "creator rights" arguments have the same upshot: you're either hurting creators directly, or you're hurting the public benefit when you remove or reduce the economic incentives. The transfer of wealth and the irreversible damage are there either way, whether you care about Lars Ulrich's gold toilet or about future kids who can't enjoy culture and libraries protected from adversarial and cynical tech moguls.
> 2) Statute of Anne, the birth of modern copyright law, protected printers - that is "big businesss" over creators anyway, so even that has largely always been a fiction.
> The vast majority of creators have never been able to get remotely close to make a living from their creative work
Nobody is saying copyright is perfect. We’re saying it’s the system we have and it should apply equally.
Two wrongs don’t make a right. Defending the AI corps on basis of copyright being broken is like saying the tax system is broken, so therefore it’s morally right for the ultra-rich to relocate assets to the Caymans. Or saying that democracy is broken, so it’s morally sound to circumvent it (like Thiel says).
on edit: If we had a soundtrack the Clash Know Your Rights would be playing in this comment.
Not to say this doesn't happen, I believe we can see it happening in some places in the world right now, but these are classes of laws that cannot "just" be changed at the government's whim, and in the EU copyright law is evidently one of those classes of law, strange as it seems.
No rights have existed 'forever', and both the rights and the social problems they intend to resolve are often quite recent (assuming you're not the sort of person who's impressed by a building that's 100 years old).
George III was certainly not surprised by Jefferson's claim to rights, given that the rights he claimed were copied (largely verbatim) from the Bill of Rights 1689[1]. The poor treatment of the Thirteen Colonies was due to Lord North's poor governance, the rights and liberties that the Founding Fathers demanded were long-established in Britain, and their complaints against absolute monarchy were complaints against a system of government that had been abolished a century before.
you should probably reread the text I responded to and then what I wrote, because you seem to think I believe there are rights that are not codified by humans in some way and are on a mission to correct my mistake.
>George III was certainly not surprised by Jefferson's claim to rights, given that the rights he claimed were copied (largely verbatim) from the Bill of Rights 1689
to repeat: Hence Jefferson's reference to inalienable rights, which probably came as some surprise to King George III.
inalienable modifies rights here, if George is surprised by any rights it is inalienable ones.
>Copyright originates in the Statute of Anne[0]; its creation was therefore within living memory when the United States declared their independence.
The title of the post is "Meta says it won't sign Europe AI agreement"; I was under the impression that it had something to do with how the EU sees copyright, not how U.S. and British common law sees it.
Hence multiple comments referencing the EU, but I see I must give up and the U.S. must have its way; evidently the Europe AI agreement is all about how copyright works in the U.S., prime arbiter of all law around the globe.
See my more extensive overview in another response.
The history of copyright law is one where it is regularly described either in the debates around the passing of the laws, or in the laws themselves, as a utilitarian bargain between the public and creators.
E.g. since you mention Jefferson and mention "inalienable", notably copyright in the US is not an inalienable right at all, but a right that the US constitution grants Congress the power to enact "to promote the progress of science and useful arts". It says nothing about being an inalienable or eternal right of citizens.
And before you bring up France, or other European law, I suggest you read the other comment as well.
But to add more than I did in the other comment: e.g. in Norway, the first paragraph of the copyright law ("Lov om opphavsrett til åndsverk mv.") gives 3 motivations: 1a) to grant rights to creators to give incentives for cultural production, 1b) to limit those rights to ensure a balance between creators' rights and public interests, 1c) to provide rules to make it easy to arrange use of copyrighted works.
There's that argument about incentives and balancing public interests again.
This is the historical norm. It is not present in every copyright law, but they share the same historical nucleus.
Copyright stems from the 15-1600s, while utilitarianism is a mid-1800s kind of thing. The move from explicitly religious and natural rights motivations to language about "intellect" and hedonism is rather late, and I expect it to be tied to an atheist and utilitarian influence from socialist movements.
I can find nothing to suggest a "religious and natural rights" motivation, nor any language about "intellect and hedonism".
Statute of Anne - which specifically gives a utilitarian reason 150 years before your "mid-1800s" estimate - also predates socialism by a similar amount of time, and dates to a time when there certainly wasn't any major atheist influence either, so this is utterly ahistorical nonsense.
https://avalon.law.yale.edu/18th_century/anne_1710.asp
"I. Whereas printers, booksellers, and other persons have of late frequently taken the liberty of printing, reprinting, and publishing, or causing to be printed, reprinted, and published, books and other writings, without the consent of the authors or proprietors of such books and writings, to their very great detriment, and too often to the ruin of them and their families: for preventing therefore such practices for the future, and for the encouragement of learned men to compose and write useful books; may it please your Majesty, that it may be enacted, and be it enacted by the Queen's most excellent majesty, by and with the advice and consent of the lords spiritual and temporal, and commons, in this present parliament assembled, and by the authority of the same;"
This is all about ownership, and protecting the state from naughty texts being printed, which was the actual driving force behind the legislation. There is nothing utilitarian in this.
In the EU, an author’s moral rights are similar in character to human rights: https://en.wikipedia.org/wiki/Authors'_rights
AFAICT the actual text of the act[0] does not mention anything like that. The closest to what you describe is part of the chapter on copyright of the Code of Practice[1], however the code does not add any new requirements to the act (it is not even part of the act itself). What it does is to present a way (which does not mean it is the only one) to comply with the act's requirements (as a relevant example, the act requires to respect machine-readable opt-out mechanisms when training but doesn't specify which ones, but the code of practice explicitly mentions respecting robots.txt during web scraping).
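The robots.txt mechanism mentioned above is easy to make concrete. A minimal sketch of honoring it as a machine-readable opt-out, using only Python's standard library; the crawler token "ExampleTrainingBot" and the URLs are hypothetical, and this is just an illustration, not the code of practice's mandated procedure:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly (normally fetched from the site's root).
rules = [
    "User-agent: *",
    "Disallow: /private/",
]
parser = RobotFileParser()
parser.parse(rules)

# "ExampleTrainingBot" is a hypothetical crawler token.
print(parser.can_fetch("ExampleTrainingBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("ExampleTrainingBot", "https://example.com/articles/1"))   # True
```

A scraper that checks `can_fetch` before every request is respecting the opt-out in exactly the sense the code of practice describes; which user-agent tokens a site may single out is up to the site operator.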
The part about copyright outputs in the code is actually (measure 1.4):
> (1) In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:
> a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner, and
> b) to prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents, or in case of general-purpose AI models released under free and open source licenses to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.
> (2) This Measure applies irrespective of whether a Signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.
Keep in mind that "Signatories" here is whoever signed the Code of Practice: obviously if i make my own AI model and do not sign that code of practice myself (but i still follow the act requirements), someone picking up my AI model and signing the Code of Practice themselves doesn't obligate me to follow it too. That'd be like someone releasing a plugin for Photoshop under the GPL and then demanding Adobe release Photoshop's source code.
As for open source models, the "(1b)" above is quite clear (for open source models that want to use this code of practice - which they do not have to!) that all they have to do is to mention in their documentation that their users should not generate copyright infringing content with them.
In fact the act has a lot of exceptions for open-source models. AFAIK Meta's beef with the act is that the EU AI office (or whatever it is called, i do not remember) does not recognize Meta's AI as open source, so they do not get to benefit from those exceptions, though i'm not sure about the details here.
[0] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:...
[1] https://ec.europa.eu/newsroom/dae/redirection/document/11811...
With this, they want AI model providers to have a strong grip on their users, controlling their usage so as not to risk issues with the regulator. The European technocrats will then be able to control the whole field by controlling the top providers, who will in turn overreach by controlling their users.
Really it does, especially with technology run by so few that is changing things so fast...
> Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth
God forbid critical things and impactful tech like this be created with a measured head, instead of this nonsense mantra of "Move fast and break things"
I'd really prefer NOT to break at least what semblance of society social media hasn't already broken.
What does surprise me is that anything at all works with the existing rulesets. Effectively no one has technical competence, and the main purpose of legislation seems to be adding mostly meaningless but paternalistically formulated complexities in order to justify hiring more bureaucrats.
> How to live in Europe
> 1. Have a job that does not need state approval or licensing.
> 2. Ignore all laws, they are too verbose and too technically complex to enforce properly anyway.
Oh ma dey have popups. We need dem too! Haha, we happy!
The regulations came along, but nobody told marketing how to do their job without the cookies, so every business site keeps doing the same thing they were doing, but with a cookie banner that is hopefully obtrusive enough that users just click through it.
Your choice to use frameworks subsidized by surveillance capitalism doesn't need to preclude my ability to agree to participate does it?
Maybe a handy notification when I visit your store asking if I agree to participate would be a happy compromise?
All I want is to not be forced to irritate my customers about something that nobody cares about. It doesn't have to be complicated. It is how the internet was for all of its existence until a few years ago.
And it happens to subsidize the tools you'd like to use.
Whether you as a simple shopkeeper are aware of that or not doesn't change the equation or make anything buzzwordy.
None of that means that I should be FORCED to annoy my users with something that none of them read and none of them want to see.
It's like forcing me to click an "Accept" button every time I start my car, saying I understand that my car is going to be recorded by the dash cams of other cars and traffic towers, that there are probably cameras in billboards that will see me, oh, and my phone's GPS is watching me, etc. etc. Nobody gives a shit.
There you go. Shopify does a bunch of analytics gathering for you. Whether you choose to use it or not, the decision was made by someone who thought it would be a value add and now you need a banner.
You could use localStorage for the purposes of tracking and it still needs to have a popup/banner.
An authentication cookie does not need a cookie banner, but if you issue lots of network requests for tracking and monitor server logs, that does now need a cookie banner.
If you don't store anything, but use fingerprinting, that is not covered by the law but could be covered by GDPR afaiu
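The distinction the comments above are drawing is about purpose, not storage mechanism: an authentication cookie is fine without consent, a tracking identifier is not, regardless of whether it lives in a cookie or localStorage. A hypothetical sketch of that purpose-based test (the cookie names and the allowlist are illustrative, not taken from the regulation's text):

```python
from http.cookies import SimpleCookie

# Hypothetical allowlist: cookies strictly necessary for the service itself
# (session auth, CSRF protection). Everything else is treated as tracking.
STRICTLY_NECESSARY = {"sessionid", "csrftoken"}

def needs_consent(cookie_header: str) -> bool:
    """Return True if any cookie in the header serves a non-essential
    purpose and therefore needs prior consent under this sketch's rules."""
    cookies = SimpleCookie()
    cookies.load(cookie_header)
    return any(name not in STRICTLY_NECESSARY for name in cookies)

print(needs_consent("sessionid=abc123"))          # False: pure auth cookie
print(needs_consent("_tracker=xyz; visitor=42"))  # True: tracking identifiers
```

A real compliance decision hinges on what the data is used for, not the cookie name, so an allowlist like this is only a rough first pass.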
Companies did that, as did thoughtless website owners, small and large, who decided it is better to collect arbitrary data even if they have no capacity to convert it into information.
The solution to get rid of cookie banners, as it was intended, is super simple: only use cookies if absolutely necessary.
It was and is a blatant misuse. The website owners all have a choice: shift the responsibility from themselves to the users and bugger them with endless pop ups, collect the data and don’t give a shit about user experience. Or, just don’t use cookies for a change.
And look which decision they all made.
A few notable examples do exist: https://fabiensanglard.net/ No popups, no banner, nothing. He just doesn't collect anything; thus, no need for a cookie banner.
The mistake the EU made was to not foresee the madness used to make these decisions.
I’ll give you that it was an ugly, ugly outcome. :(
It's not madness, it's a totally predictable response, and all web users pay the price for the EC's lack of foresight every day. That they didn't foresee it should cause us to question their ability to foresee the downstream effects of all their other planned regulations.
1. Consent must be freely given, specific, informed and unambiguous, and as easy to withdraw as to give
2. High penalties for failure to comply (€20 million or 4% of worldwide annual turnover, whichever is higher)
Compliance is tricky and mistakes are costly. A pop-up banner is the easiest off-the-shelf solution, and most site operators care about focusing on their actual business rather than compliance, so it's not surprising that they took this easy path.
If your model of the world or "image of humanity" can't predict an outcome like this, then maybe it's wrong.
And that is exactly the point. Thank you. What is encoded as compliance in your example is actually the user experience. They off-loaded responsibility completely to the users. Compliance is identical to UX at this point, and they all know it. To modify your sentence: “and most site operators care about focusing on their actual business rather than user experience.”
The other thing is a lack of differentiation. The high penalties you are talking about are only a realistic concern for top-traffic websites. I agree it would be insane to gamble on removing the banners in that league. But tell me: why does every single-site website of a restaurant, fishing club, or retro gamer blog have a cookie banner? For what reason? They won't make the turnover you dream about in your example even if they won the lottery, twice.
How is "not selling user data to 2000+ 'partners'" tricky?
> most site operators care about focusing on their actual business
How is their business "send user's precise geolocation data to a third party that will keep that data for 10 years"?
Compliance with GDPR is trivial in 99% of cases
Writing policy is not supposed to be an exercise where you “will” a utopia into existence. Policy should consider current reality. if your policy just ends up inconveniencing 99% of users, what are we even doing lol?
I don’t have all the answers. Maybe a carrot-and-stick approach could have helped? For example giving a one time tax break to any org that fully complies with the regulation? To limit abuse, you could restrict the tax break to companies with at least X number of EU customers.
I’m sure there are other creative solutions as well. Or just implementing larger fines.
You are absolutely right... Here is the site on europa.eu (the EU version of .gov) that goes into how the GDPR works. https://commission.europa.eu/law/law-topic/data-protection/r...
Right there... "This site uses cookies." Yes, it's a footer rather than a banner. There is no option to reject all cookies (you can accept all cookies or only "necessary" cookies).
Do you have a suggestion for how the GDPR site could implement this differently so that they wouldn't need a cookie footer?
Well, it's an information-only website; it has no ads or even a login, so they don't need to use any cookies at all. In fact, if you look at the page response in the browser dev tools, there are no cookies on the website, so to be honest they should just delete the cookie banner.
YouTube
Internet Archive
Google Maps
Twitter
TV1
Vimeo
Microsoft
Facebook
Google
LinkedIn
Livestream
SoundCloud
European Parliament
In theory, they could rewrite their site to not require any of those services. Thus, I hold that the GDPR requires cookie banners.
---
Another part to consider: if videos (and LinkedIn for job searches, Google Maps for maps, and Internet Archive for whatever they embed from there) come with sufficiently onerous third-party cookies ("yes, we're being good with our cookies, but we use 3rd-party providers and can't do anything about them, but we informed you and you accepted their cookies")... then wouldn't it be an opportunity for the Federal Ministry of Transport and Digital Infrastructure https://en.wikipedia.org/wiki/Federal_Ministry_for_Transport or similar to offer grants https://www.foerderdatenbank.de for companies to create viable GDPR-friendly alternatives to those services?
That is, if the GDPR and other EU regulations weren't stifling innovation and establishing regulatory capture (it's expensive to hire and retain the lawyers needed to skirt the rules), making it impossible for such newer alternative companies to thrive and prosper within the EU.
Which is what the article is about.
Obviously some websites need to collect certain data and the EU provided a pathway for them to do that, user consent. It was essentially obvious that every site which wanted to collect data for some reason also could just ask for consent. If this wasn't intended by the EU it was obviously foreseeable.
>The mistake the EU made was to not foresee the madness used to make these decisions.
Exactly. Because the EU law makers are incompetent and they lack technical understanding and the ability to write laws which clearly define what is and what isn't okay.
What makes all these EU laws so insufferable isn't that they make certain things illegal, it is that they force everyone to adopt specific compliance processes, which often do exactly nothing to achieve the intended goal.
User consent was the compliance path to be able to gather more user data. Not foreseeing that sites would just ask that consent was a failure of stupid bureaucrats.
Of course they did not intend that sites would just show pop ups, but the law they created made this the most straightforward path for compliance.
I agree with some parts it but also see two significant issues:
1. It is statistically implausible that everyone working at the EU is tech-illiterate and stupid while everybody at HN is a body of enlightenment on two legs. This is a tech-heavy forum, but I would guess most here are bloody amateurs regarding the theory and science of law, and you need at least two disciplines at work here, probably more.
This is drifting too quickly into a territory of critique by platitudes for the sake of criticism.
2. The EU made an error of commission, not omission, and I think that that is a good thing. They need to make errors in order to learn from them and get better. Critique by using platitudes is not going to help the case. It is actually working against it. The next person initiating a EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet. So, how should that work out? Exactly like this: we will be stuck for half an eternity and no one will correct anything because if you don’t do anything you can’t do any wrong! We as a society mostly record the things that someone did wrong but almost never record something somebody should have done but didn’t. That’s an error of omission, and is usually magnitudes more significant than an error of commission. What is needed is an alternative way of handling and judging errors. Otherwise, the path of learning by error will be blocked by populism.
In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time, and we will probably have to suffer another ten years or so until the error gets removed. The EU as a system needs to be accelerated by a margin so that it gets to an iterative approach when an error is made. I would argue with a cybernetic feedback-loop approach here, but as we are on HN, this would translate to: move fast and break things.
On point 2. My argument is that the EU is fundamentally legislating wrong. The laws they create are extremely complex and very hard to decipher, even by large corporate law teams. The EU does not create laws which clearly outlaw certain behaviors, they create corridors of compliance, which legislate how corporations have to set up processes to allow for certain ends. This makes adhering to these laws extremely difficult, as you can not figure out if something you are trying to do is illegal. Instead you have to work backwards, start by what you want to do, then follow the law backwards and decipher the way bureaucrats want you to accomplish that thing.
I do not particularly care about cookie banners. They are just an annoying thing. But they clearly demonstrate how the EU is thinking about legislation, not as strict rules, but as creating corridors. In the case of cookie banners the EU bureaucrats themselves did not understand that the corridor they created allowed basically anyone to still collect user data, if they got the user to click "accept".
The EU creates corridors of compliance. These corridors often map very poorly onto the actual processes and often do little to solve the actual issues. The EU needs to stop seeing themselves as innovators, who create broad highly detailed regulations. They need to radically reform themselves and need to provide, clear and concise laws which guarantee basic adherence to the desired standards. Only then will their laws find social acceptance and will not be viewed as bureaucratic overreach.
I am sorry but I too agree with OP's statement. The EU is full of technocrats who have no idea about tech and they get easily swayed by lobbies selling them on a dream that is completely untethered to the reality we live in.
> The next person initiating a EU procedure to correct the current error with the popups will have the burden of doing everything perfectly right, all at once, thought through front to back, or face the wrath of the all-knowing internet.
You are talking as if someone is actually looking at the problem. is that so? Because if there was such a feedback loop that you seem to think exists in order to correct this issue, then where is it?
> In my mind, the main issue is not that the EU made a mistake. The main issue is that it is not getting corrected in time and we will probably have to suffer another ten years or so until the error gets removed.
So we should not hold people accountable when they make mistakes and waste everyone's time then?
There is plenty of evidence to show that the EU as a whole is incompetent when it comes to tech.
Case in point: the Chat Control law that is being pushed despite every single expert warning of the dire consequences for privacy and the dangerous precedent it sets. Yet they keep pushing it because it is seen as a political win.
If the EU knew something about tech they would know that placing back-doors in all communication applications is non starter.
Yes, the problem is known and actually worked on. There are several approaches, some being initiated on country level (probably because EU is too slow) some within the institution, as this one:
https://www.edps.europa.eu/data-protection/our-work/subjects...
No, I don’t think that institutionalised feedback loops exist there, but I do not know. I can only infer from observation that they are probably not in place, as this would, I would think, show up as “move fast and break things”.
> So we should not hold people accountable when they make mistakes and waste everyone's time then?
I have not made any direct remark to accountability, but I’ll play along: what happens by handling mistakes that way is accountability through fear. What is, in my opinion, needed is calculated risk taking and responsibility on a base of trust and not punishment. Otherwise, eventually, you will be left with no one taking over the job or people taking over the job who will conserve the status quo. This is the opposite of pushing things through at high speed. There needs to be an environment in place which can absorb this variety before you can do that(see also: Peter Senge’s “Learning Organisation”).
On a final note, I agree that the whole lobbying got out of hand. I also agree on the back-door issue and I would probably agree on a dozen other things. I am not in the seat of generally approving what the European Administration is doing. One of my initial points, however, was that the EU is not "the evil, dumb-as-brick creator" of the cookie-popup mess. Instead, this is probably one of the biggest cases of malicious compliance in history. And still, the EU gets the full, 100% blame, almost unanimously (and no comment as to what the initial goal was). That is quite a shift in the accountability you were just so interested in not losing.
The actual problem is weak enforcement. If the maximum fines allowed by the law had been levied, several companies would’ve been effectively ended or excluded from the EU. That would’ve been good incentive for non-malicious compliance.
You talking about Zuckerberg?
If you actually read it, you will also realise it's entirely composed of "common sense". Like, you wouldn't want to do the stuff it says not to do anyway. Remember, corps can't be trusted because they have a business to run. So that's why, when humans can be exposed to risky AI applications, the EU says the model provider needs to be transparent and demonstrate they're capable of operating a model safely.
Which is the path EU is choosing. EU has been enjoying colonial loot for so long that they have lost any sense of reality.
Usually minorities are able to impose "wins" on a majority when the price of compliance is lower than the price of defiance.
This is not the case with AI. The stakes are enormous. AI is full steam ahead and no one is getting in the way short of nuclear war.
In Germany we still have traumas from the automatic machine guns set up on the wall between East and West Germany. Ukraine is fighting a drone war in the trenches, with a psychological effect on soldiers comparable to WWI.
The stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!
On the other hand, firstly every single person disagrees what the phrase AGI means, varying from "we've had it for years already" to "the ability to do provably impossible things like solve the halting problem"; and secondly we have a very bad track record for knowing how long it will take to invent anything in the field of AI with both positive and negative failures, for example constantly thinking that self driving cars are just around the corner vs. people saying an AI that could play Go well was "decades" away a mere few months before it beat the world champion.
Guess what happens to the race then?
Plus, ironically, Germany's Rheinmetall is a leader in automated anti-air guns so the people's phobia of automated guns is pointless and, at least in this case, common sense won, but in many others like nuclear energy, it lost.
It seems like Germans are easy to manipulate into going against their best interests, if you manage to trigger some phobias in them via propaganda. "Ohoohoh look out, it's the nuclear boogieman, now switch your economy to Russian gas instead, it's safer"
Only if you're a corrupt German politician getting bribed by Russia to sell out long term national security for short term corporate profits.
It was also considered a stupid idea back then by NATO powers asking Germany WTF are you doing, tying your economy to the nation we're preparing to go to war with.
> The idea was to give russia leverage on europe besides war, so that they don't need war.
The present day proves it was a stupid idea.
"You were given the choice between war and dishonor. You chose dishonor, and you will have war." - Churchill
Yes, it was naive, given the philosophy of the leaders of the USSR/Russia, but I don't think it was that problematic. We do need some years to adapt, but it doesn't meaningfully impact the ability to send weapons to Ukraine and impose sanctions (in the long term). Meanwhile we got cheap gas for some decades, and Russia got some other trade partners besides China. Would we be better off if we hadn't used the oil in the first place? Then Russia would have bound itself earlier to only China, North Korea, etc. It also had less environmental impact than shipping the oil from the US.
France and Germany were democracies under the umbrella of the US rule acting as arbiter. It's disingenuous and even stupid, to argue an economic relationship with USSR and Putin's Russia as being the same thing.
Did the US force France into it? I thought it was an idea of the French government (Charles de Gaulle), while the population had much resentment, which only vanished after successful business together. Germany didn't have much choice, though. I don't think it would have had lasting impact if it were decreed and not coming from the local population.
You could hope that making Russia richer would lead them to rather be rich than large, which is basically the deal we have with China, which is still an alien dictatorship.
It was a major success, contributing to the thawing of relationships with the Soviet Union and probably contributed to the peaceful end of the Soviet Union. It supported several EU countries through their economic development and kept the EU afloat through the financial crisis.
It was a very important source of energy and there is no replacement. This can be seen by the flight of capital, deindustrialisation and poor economic prospects in Germany and the EU.
But as far as I know, many countries still import energy from Russia, either directly or laundered through third parties.
There are already drones from Germany capable of automatic target acquisition, but they still require a human in the loop to pull the trigger. Not because they technically couldn't, but because they are required to.
This smells like a misconception of the GDPR. The GDPR is not about cookies, it is about tracking. You are not allowed to track your users without consent, even if you do not use any cookies.
Laws are analyzed by lawyers and they will err on side of caution, so you end up with these notices.
The EU is getting to be a bigger nuisance than they are worth.
They create mountains of regulations, which are totally unclear and which require armies of lawyers to interpret. Adherence to these regulations becomes a major risk factor for all involved companies, which then try to avoid interacting with that regulation at all.
Getting involved with the GDPR is a total nightmare, even if you want to respect your users privacy.
Regulating AI like this is especially idiotic, since every year currently brings a major shift in how AI is utilized. It is a completely open question how hard training an AI "from scratch" will be in 5 years. The EU is incapable of actually writing laws which make it clear what isn't allowed; instead they are creating vague corridors for how companies should arrive at certain outcomes.
The bureaucrats see themselves as the innovators here. They aren't trying to make laws which prevent abuses, they are creating corridors for processes for companies to follow. In the case of AI these corridors will seem ridiculous in five years.
mhitza•6mo ago
I did not read it yet, only familiar with the previous AI Act https://artificialintelligenceact.eu/ .
If I were to guess, Meta is going to have a problem with chapter 2 of the "AI Code of Practice" because it deals with copyright law, and probably conflicts with their (and others') approach of ripping text out of copyrighted material (is it clear yet whether it can be called fair use?)
jahewson•6mo ago
Yes.
https://www.publishersweekly.com/pw/by-topic/digital/copyrig...
Though the EU has its own courts and laws.
dmbche•6mo ago
And acquiring the copyrighted materials is still illegal - this is not a blanket protection for all AI training on copyrighted materials
thewebguyd•6mo ago
Unless the courts are willing to put injunctions on any model that made use of illegally obtained copyrighted material - which would pretty much be all of them.
rpdillon•6mo ago
You can just buy books in bulk under the first sale doctrine and scan them.
dmbche•6mo ago
Anthropic ALSO got copyrighted material legally, but they pirated massive amounts first.
GuB-42•6mo ago
We have exceptions, which are similar, but the important difference is that courts decide what is fair and what is not, whereas exceptions are written into law. It is a more rigid system that tends to favor copyright owners, because if what is seen as "fair" doesn't fit one of the listed exceptions, copyright still applies. Note that AI training probably fits one of the exceptions in French law (but again, it is complicated).
I don't know the law in other European countries, but AFAIK, EU and international directives don't do much to address the exceptions to copyright, so it is up to each individual country.
mikae1•6mo ago
Same in Sweden. The U.S. has one of the broadest and most flexible fair use laws.
In Sweden we have "citaträtten" (the right to quote). It only applies to text and it is usually said that you can't quote more than 20% of the original text.
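The 20% figure is, as the comment says, a rule of thumb rather than a number in the statute, but as a back-of-the-envelope check it is trivial to express. A hypothetical helper, measuring by character count (real assessments weigh context and purpose, not just length):

```python
def quote_within_limit(quote: str, original: str, limit: float = 0.20) -> bool:
    """Rule-of-thumb check: does the quote stay within `limit`
    of the original text's length (measured in characters)?"""
    if not original:
        return False
    return len(quote) / len(original) <= limit

original = "x" * 1000
print(quote_within_limit("x" * 150, original))  # True: 15% of the original
print(quote_within_limit("x" * 300, original))  # False: 30% exceeds 20%
```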