Granted, they'll probably end up performing worse than US or Chinese models operating without restrictions, and uncompetitive on the global free market. But when did EU leaders ever think about long-term consequences? Certainly not when they tied their economy to Russian gas and banned nuclear, not when they prioritized toxic diesel engines over gasoline, not when they demilitarized or ceded tech innovation to the US and China. But for once this will be the right call, I can feel it; this will bring the EU to the forefront of tech supremacy.
I have yet to see a company that prioritized quality over profits, unless forced to by regulation.
Yes, regulatory compliance is a significant concern for software product design in the EU these days. But that's a good thing - it stops a lot of hare-brained ideas and abusive business models at the drafting phase. Also, from my own observation, the rules seem annoying at first, because they tend to shut down the most exciting ideas - but after a while you notice that this is because those ideas come with bad failure modes and bad second-order effects, and regulations are forcing you to actually consider them.
But EU citizens want good AI models, not EU-approved models, and people can use a VPN. Regulatory process kills fast-moving business, and whatever gets rubber-stamped by EU bureaucrats and the compliance-industrial complex of law firms sucking out money by selling snake-oil compliance services is already a few years behind.
Would you buy a three-year-old car as new?
Regardless of how I feel about the price of the old car, I wouldn't buy the new car if it's not legal to drive on public roads.
The car market is a great example. The US market has decided it doesn’t want EVs from the biggest EV producer in the world (China), so people are indeed buying cars with older technology than what has become the new global standard thanks to China’s successful state sponsorship of its EV market.
It may very well be that the US leads globally in AI, while some markets handicap access and development for internal reasons.
What "unreliability" are you talking about in terms of American tech businesses?
> For some situations it's safer to do it in the EU despite the regulations
The EU has zero tech companies that rival FAANG et al. here in the US. Zero. Because of its (well-intentioned but harmful) business regulations.
I have a feeling you're projecting your dissatisfaction with election results more than anything tangible...
Not really; it's because the EU has 28 sets of business regulations: those of the 27 member states and of the EU itself. The single market is not yet all that single, especially when it comes to digital services. The now-abandoned project of the ever closer union wasn't some idealistic bs, it was the plan to gradually fix this.
https://nltimes.nl/2025/05/20/microsofts-icc-email-block-tri...
This sort of stuff.
AI companies are moving to user-interface innovations to try to rope in more unwilling sources of training data. These new UI innovations will feel like shit if they try to force adoption, full of warnings and disclaimers.
Reminiscent of the cookie law, which many people hate, but which they hate because companies insist on using tracking cookies (if you don't track, you don't need the cookie popup).
Debates about privacy, safety, and reliability are once again tied to awareness of dark patterns. This is territory tech companies were trying very desperately to get out of.
I think it's also brilliant in the way it answers the black box paradigm. "Oh, we cannot explain it, it's a black box". "Then explain how you made it, otherwise it's a no go".
Ultimately, this sets the discourse straight regarding what AI skepticism is all about. This is not about being anti-commerce, it's about being good commercial entities.
The AI vendors will NEVER fix any system flaws that can be ignored or hidden. Only a public database can force these into the open.
Telling lies, basically.
To me, "ask" connotes that compliance is voluntary. Which in some circumstances strikes me as an intentional, rhetorical lie.
The rules require tracking outputs, which open-weight models cannot do. So I'm wondering whether open-weight models have separate rules or whether this effectively bans releasing such models.
I'm trying to understand the rules, but they don't seem to make a clear distinction between these things. I assume they are intended for the applications that use the models, not the models themselves.
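If the obligation really is output tracking, the only place it can live is the application layer. A rough sketch of what such an audit hook might look like (every name here, from the record shape to the log file, is made up for illustration, nothing from the actual rules):

    // Hypothetical sketch of an application-layer audit log around inference.
    // All names here are assumptions for illustration.
    import { createHash } from "node:crypto";
    import { appendFile } from "node:fs/promises";

    interface AuditRecord {
      timestamp: string;
      promptHash: string; // hash rather than raw text, to limit stored PII
      outputHash: string;
      modelId: string;
    }

    async function generateWithAudit(
      prompt: string,
      callModel: (p: string) => Promise<string>, // whatever backend runs inference
      modelId: string,
    ): Promise<string> {
      const output = await callModel(prompt);
      const record: AuditRecord = {
        timestamp: new Date().toISOString(),
        promptHash: createHash("sha256").update(prompt).digest("hex"),
        outputHash: createHash("sha256").update(output).digest("hex"),
        modelId,
      };
      // Only whoever actually runs inference can write this log; once the
      // weights are on someone else's machine, there is no hook to enforce it.
      await appendFile("audit.log", JSON.stringify(record) + "\n");
      return output;
    }

Which is the whole problem: a hosted API provider can bolt this on, but nobody can retrofit it onto weights already sitting on a stranger's GPU.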
thefz•7h ago
Where YOU live you can have all the unbridled capitalism you want - be a product for tech bros and help make some executive a billionaire - I don't care!
Where I live, I want this shit regulated. So, good stuff, EU.
clovoak•6h ago
A bit rich when all the companies mentioned are across the pond?
no_wizard•6h ago
The alternative is much worse, which is having zero say in tracking cookies. I'll take a banner on every single website to have more control of that.
I really don't see the issue. If you find them annoying, use uBlock with a proper cookie-banner filter or something like that.
shagie•5h ago
Note that's a page from a company the EU pays to run it... the EU government's own page is even worse. https://commission.europa.eu/law/law-topic/data-protection/r...
TeMPOraL•6h ago
You don't need a cookie banner if you aren't doing anything shady. Using cookies (or other such mechanisms) does not require information or consent popups when they're necessary to make the product/service work for technical reasons, the canonical example being session cookies.
The corollary here being, you only need consent popups if you're doing something shady but not strictly illegal. They're not meant to be annoying - they're there because it's illegal to do shady shit without the user agreeing to it, and "agreeing" in the EU means "informed consent". So you have to inform, and then get consent.
It's all pretty reasonable. But of course, people doing shady shit really don't want the users to understand it and risk them not consenting - and they especially hate having to ask in the first place. The industry settled on a "malicious compliance" approach to GDPR - show popups that, as much as they can get away with it, maximize the chance of people consenting to make the popup go away, make the "informed" part as opaque as possible, and generally make this thing super annoying - and then tell people it's all EU's fault, hoping enough Europeans will buy it and the public pressure will make EU undo GDPR.
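To make the distinction concrete, here's a minimal sketch assuming a plain Node server, with made-up cookie names: the session cookie goes out unconditionally because the service needs it, while the tracking cookie is only set after a recorded opt-in.

    // Minimal sketch; the cookie names ("session", "consent", "visitor_id")
    // are made up for illustration.
    import { createServer } from "node:http";
    import { randomUUID } from "node:crypto";

    createServer((req, res) => {
      const cookies = req.headers.cookie ?? "";
      const setCookies: string[] = [];

      // Strictly necessary: a session cookie the service needs to function.
      // Under GDPR/ePrivacy this requires no consent popup at all.
      if (!cookies.includes("session=")) {
        setCookies.push(`session=${randomUUID()}; HttpOnly; Secure; SameSite=Lax`);
      }

      // Non-essential: a tracking cookie. This may only be set after the
      // user has given informed consent, recorded here in a consent cookie.
      if (cookies.includes("consent=analytics")) {
        setCookies.push(`visitor_id=${randomUUID()}; Secure; SameSite=Lax; Max-Age=31536000`);
      }

      if (setCookies.length > 0) res.setHeader("Set-Cookie", setCookies);
      res.end("ok");
    }).listen(8080);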
shagie•5h ago
https://european-union.europa.eu/index_en
https://gdpr.eu
If cookie banners were not designed to be required... why do the EU's own pages use them?
TeMPOraL•4h ago
https://gdpr.eu/privacy-policy/
https://european-union.europa.eu/cookies_en
Though again, not ideal IMO; based on skimming those policies, I think they could've set it up so the consent popup only shows in specific situations that trigger the need for it. That, and I don't get why they use (a minimal build of) Google Analytics and let that data fly over to the US (which they explicitly acknowledge). That's just lazy.
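For what it's worth, a browser-side sketch of what that could look like (the measurement ID is a placeholder and the localStorage key is hypothetical): the popup only renders when no choice has been stored yet, and the analytics script only loads after an explicit opt-in.

    // Consent-gated analytics; placeholder GA ID, hypothetical storage key.
    function loadAnalyticsIfConsented(): void {
      // Only touch analytics if the visitor explicitly opted in before.
      if (localStorage.getItem("analytics-consent") !== "granted") return;
      const script = document.createElement("script");
      script.src = "https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"; // placeholder
      script.async = true;
      document.head.appendChild(script);
    }

    function showConsentPromptIfNeeded(): void {
      // The popup only appears when no choice has been recorded yet.
      if (localStorage.getItem("analytics-consent") !== null) return;
      // Render the consent UI here; on "accept":
      //   localStorage.setItem("analytics-consent", "granted");
      //   loadAnalyticsIfConsented();
    }

    loadAnalyticsIfConsented();
    showConsentPromptIfNeeded();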
resource_waste•6h ago
LLM regulation is too late; models are already at ChatGPT 3.5 or 4 levels, which is enough to do basically anything.
You are confusing intent with ground reality. It's like saying "we banned drugs" while we still have a drug problem.