And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.
Ask Meta to sign something about voluntarily restricting ad data and you'll get the same result there.
How could you possibly infer that from what I said?
Fascinating.
Edit: from the LinkedIn post, Meta is concerned about the growth of European companies:
"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."
Besides, I posted from my laptop.
https://artificialintelligenceact.eu/introduction-to-code-of...
It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.
Am I the only one who assumes by default that European regulation will be heavy-handed and ill conceived?
Perhaps it's easier to actually look at the points in contention to form your opinion.
Maybe some think that is a good thing - and perhaps it may be - but I feel it's more likely that any regulation regarding AI at this point in time is premature and doomed to failure and unintended consequences.
How long can we let AI go without regulation? Just yesterday, there was a report here on Delta using AI to squeeze higher ticket prices from customers. Next up is insurance companies. How long do you want to watch? Until all accountability is gone for good?
Who's to say USB-C is the end-all-be-all connector? We're happy with it today, but Apple's Lightning connector had merit. What if two new, competing connectors come out in a few years' time?
The EU regulation, as-is, simply will not allow a new, technically superior connector to enter the market. Fast forward a decade to when USB-C is dead: the EU will keep it limping along, stifling more innovation along the way.
Standardization like this is difficult to achieve via consensus - but via policy/regulation? These are the same governing bodies that hardly understand technology or the internet. Normally standardization is achieved via two (or more) competing standards, where one eventually "wins" via adoption.
Well intentioned, but with negative side-effects.
You mean that thing (or is that another law?) that forces me to find that "I really don't care in the slightest" button about cookies on every single page?
The European government has at least a passing interest in the well-being of human beings, while that is not valued by the incentives that corporations live by.
Feels like I need to go find a tech site full of people who actually like tech instead of hating it.
Europeans still essentially rely on Google, Meta and Amazon for most of their browsing experience. So I'm assuming Europe's goal is not to compete or break the American moat, but to force them to be polite and to preserve national sovereignty on important national security aspects.
A position which is essentially reasonable if not too polite.
When push comes to shove the US company will always prioritize US interest. If you want to stay under the US umbrella by all means. But honestly it looks very short sighted to me.
After seeing this news https://observer.co.uk/news/columnists/article/the-networker..., how can you have any faith that they will play nice?
You have only one option. Grow alternatives. Fund your own companies. China managed to fund the local market without picking winners. If European countries really care, they need to do the same for tech.
If they don't they will forever stay under the influence of another big brother. It is US today, but it could be China tomorrow.
So then it's something completely worthless in the globally competitive cutthroat business world, that even the companies who signed won't follow, they just signed it for virtue signaling.
If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.
"Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.
This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.
In this case, it is clear that the EU policy resulted in cookie banners.
And consumers will bear the brunt.
One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].
> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
[1] https://www.lw.com/en/insights/2024/11/european-commission-r...
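To make the quoted obligation a bit more concrete: below is a rough, hypothetical sketch of what an output-side "recognisably similar" check might look like, using simple word n-gram overlap against a set of protected snippets. This is not from the Act or from any real compliance tooling; the names (PROTECTED_SNIPPETS, is_recognisably_similar) and the threshold are invented for illustration.

    # Hypothetical sketch only: screen generated text for near-verbatim
    # reuse of known protected works before returning it to the user.
    from typing import Iterable, Set

    def _ngrams(text: str, n: int = 8) -> Set[tuple]:
        """Word n-grams of the text; empty set if the text is shorter than n words."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def is_recognisably_similar(output: str,
                                protected_corpus: Iterable[str],
                                threshold: float = 0.5) -> bool:
        """Flag output whose 8-grams heavily overlap any protected snippet."""
        out_grams = _ngrams(output)
        if not out_grams:
            return False
        for work in protected_corpus:
            work_grams = _ngrams(work)
            if not work_grams:
                continue
            overlap = len(out_grams & work_grams) / len(out_grams)
            if overlap >= threshold:
                return True
        return False

    # Illustrative usage: withhold or regenerate a response that trips the check.
    PROTECTED_SNIPPETS = ["..."]  # placeholder; a real system would use a proper index
    draft = "model output goes here"
    if is_recognisably_similar(draft, PROTECTED_SNIPPETS):
        draft = "[response withheld: too similar to a protected work]"

In practice a provider would presumably use a proper index of protected works rather than a Python list, but the shape of the measure is the same: check outputs before returning them, which is exactly the kind of obligation that becomes hard to enforce once the model weights are released downstream.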
vanderZwan•3h ago
mhitza•3h ago
I haven't read it yet; I'm only familiar with the previous AI Act https://artificialintelligenceact.eu/ .
If I were to guess, Meta is going to have a problem with chapter 2 of the "AI Code of Practice" because it deals with copyright law, and probably conflicts with their (and others') approach of ripping text out of copyrighted material (is it clear yet whether it can be called fair use?)
jahewson•3h ago
Yes.
https://www.publishersweekly.com/pw/by-topic/digital/copyrig...
Though the EU has its own courts and laws.
dmbche•3h ago
And acquiring the copyrighted materials is still illegal - this is not a blanket protection for all AI training on copyrighted materials