What does secure mean in this context? I didn't see it explained here.
Perhaps they mean this?
> Admin users can confidently control which connectors are available to whom in their organization, with on-behalf authentication, ensuring users only access data they’re permitted to.
- 1100 tokens/second Mistral Flash Answers https://www.youtube.com/watch?v=CC_F2umJH58
- 189.9 tokens/second Gemini 2.5 Flash Lite https://openrouter.ai/google/gemini-2.5-flash-lite
- 45.92 tokens/second GPT-5 Nano https://openrouter.ai/openai/gpt-5-nano
- 1799 tokens/second gpt-oss-120b (via Cerebras) https://openrouter.ai/openai/gpt-oss-120b
- 666.8 tokens/second Qwen3 235B A22B Thinking 2507 (via Cerebras) https://openrouter.ai/qwen/qwen3-235b-a22b-thinking-2507
Gemini 2.5 Flash Lite and GPT-5 Nano seem comparatively slow. That said, I cannot find non-marketing numbers for Mistral Flash Answers. Real-world tokens/second are likely lower, so this comparison chart is not very fair.
They are also releasing model weights for most of their models, whereas companies like Anthropic, and until recently OpenAI, were spreading FUD that open source would doom us all.
Mistral's smartest model is still behind Google's and Anthropic's, but they will catch up.
Inspired by the Greek word for human: Anthropos / ἄνθρωπος, the same etymology as English words like anthropology, the study of humans.
(I'd hazard a guess that your first language is something like a Romance language such as French, where people would pronounce that "anthro..." as if there is no h? So a particularly reasonable letter to forget when typing!)
Which makes it particularly hard to write, compared to other Latin languages.
Speed and cost are relevant factors. I have pipelines that need to execute tons of completions and produce summaries. Mistral Small is great at it, and the responses are lightning fast.
For that use case, going with US models would be far more expensive and slower while offering no benefit at all.
And if I were to give over personal information to an AI company, then absolutely I'll prefer a company who actually complies with GDPR.
As to what to do if you, with a customer's permission, put their PD (PII being an American term) into the system, and then get a request to delete it... I'm not sure, sorry I'm not an expert on LLMs. But it's your responsibility to not put the PD into the system unless you're confident that the company providing the services won't spread it around beyond your control, and your responsibility not to put it into the system unless you know how to manage it (including deleting it if and when required to) going forwards.
Hopefully somebody else can come along and fill in my gaps on the options there - perhaps it's as simple as telling it "please remove all traces of X from memory", I don't know.
edit: Of course, you could sign an agreement with an AI provider for them to be a "data controller", giving them responsibility for managing the data in a GDPR-compliant way, but I'm not aware of Mistral offering that option.
edit 2: Given my non-expertise on LLMs, and my experience dealing with GDPR issues, my personal feeling is that I wouldn't be comfortable using any LLM for processing PD that wasn't entirely under my control, privately hosted. If I had something I wanted to do that required using SOTA models and therefore needed to use inference provided by a company like Mistral, I'd want either myself or my colleagues to understand a hell of a lot more about the subject than I currently do before going down that road. Thankfully it's not something I've had to dig into so far.
This said, I am really supportive of Mistral, like their work, and hope that they will get more recognition and more EU-centric institutional support.
Europe has a higher industrial output than the US. In Unterlüß, a town of 3,500 people, Rheinmetall makes about 50% as many 155mm shells as the entire US makes annually. There's a reason your trust fund metaphor takes place in Brooklyn.
You might also want to remember that article 5 was invoked once, and it wasn't by Europe.
Kidding aside, if Europe is the hidden powerhouse you claim, then it's even more odd to be begging the Americans for defense support/leadership from across the Atlantic, and still be importing natural gas from the "evil" Russians while supposedly in a fight with them. Seems to undermine your point, no?
Uhhh it isn't Europe taking all its bad news out of its museums, friend. That's the good ol' U.S.A. attempting to hide from its own history.
> Europe is a 24-year old trust fund kid working in a vegan commune while living in a $2M Brooklyn apartment paid for by her dad who is an executive at Exxon.
What a very ... American analogy.
Yes they proudly fill their museums to the brim with colonial loot. So Europeans can reminisce about good old days when they were the top dog.
If I'm wrong though, the irony that the European tech community has to resort to a US message board to voice their opinions, only serves to further underline my point.
Same price, but dramatically better results, way more reliable, and 10x faster. The only downside is that when it does fail, it seems to fail much harder. Where gpt-5-mini would disregard the formatting in the prompt 70% of the time, mistral-medium follows it 99% of the time, but the other 1% of the time inserts random characters (for whatever reason, normally backticks... which then causes its own formatting issues).
Still, very happy with Mistral so far!
It’s pretty rare though. Really solid model, just a few quirks
Would you like to elaborate further on how the experience was with it? What was your approach for using it? How did you generate synthetic data? How did it perform?
This is maybe more, maybe less insidious: it will literally just insert a random character into the middle of a word.
I work with an app that supports 120+ languages though. I give the LLM translations, transliterations, grammar features etc and ask it to explain it in plain English. So it’s constantly switching between multiple real, and sometimes fake (transliterations) languages. I don’t think most users would experience this
I got this format from writing markdown files, it’s a nice way to share examples and also specify which format it is.
In Slack/Teams I do it with anything someone might copy and paste, to ensure that the chat client doesn't do something horrendous like replace my ASCII double quotes with the fancy Unicode ones that cause syntax errors.
In readme files any example path, code, yaml, or json is wrapped in code quotes.
In my personal (text file) notes I also use ``` {} ``` to denote a code block I'd like to remember, just out of habit from the other two above.
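Those chat-client substitutions are easy to undo with a small normalization pass. A minimal Python sketch (the mapping below covers the common "smart" punctuation; extend it as needed):

```python
# Map common "smart" punctuation back to plain ASCII so pasted
# snippets don't break with syntax errors.
SMART_TO_ASCII = {
    "\u201c": '"',  # left double quotation mark
    "\u201d": '"',  # right double quotation mark
    "\u2018": "'",  # left single quotation mark
    "\u2019": "'",  # right single quotation mark
    "\u2013": "-",  # en dash
    "\u2014": "-",  # em dash
}

def normalize_quotes(text: str) -> str:
    """Replace typographic punctuation with its ASCII equivalent."""
    return text.translate(str.maketrans(SMART_TO_ASCII))

print(normalize_quotes("print(\u201chello\u201d)"))  # print("hello")
```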
It also helps with having a field of the json be the confidence or a similar pattern to act as a cut for what response is accepted.
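That pattern can be sketched as follows, assuming the model is asked to emit a confidence field alongside its answer (the field names and the 0.8 threshold here are illustrative, not from any particular API):

```python
import json

CONFIDENCE_CUT = 0.8  # illustrative acceptance threshold

def accept_response(raw: str):
    """Parse the model's JSON reply and accept it only if the
    self-reported confidence clears the cut; otherwise reject."""
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output is rejected outright
    if reply.get("confidence", 0.0) < CONFIDENCE_CUT:
        return None
    return reply.get("answer")

print(accept_response('{"answer": "Paris", "confidence": 0.93}'))  # Paris
print(accept_response('{"answer": "Lyon", "confidence": 0.41}'))   # None
```

Rejected responses can then be retried or routed to a fallback model.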
I think they are saying that if the highest-probability phrase fails the regex, the LLM is able to substitute the next most likely candidate.
e.g. DOMINO https://arxiv.org/html/2403.06988v1
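The idea, much simplified from DOMINO's automaton-based approach, amounts to masking: at each decoding step, skip candidate tokens that would break the constraint and take the most likely survivor. The candidate tokens and probabilities below are made up for illustration, and real implementations check regex *prefixes* rather than full matches:

```python
import re

def pick_constrained(candidates, prefix, allowed):
    """candidates: list of (token, probability) pairs from a model.
    Return the highest-probability token whose addition keeps the
    output matching `allowed`, falling back to the next candidate
    whenever the top choice fails the check."""
    for token, _p in sorted(candidates, key=lambda c: -c[1]):
        if allowed.fullmatch(prefix + token):
            return token
    return None  # no candidate satisfies the constraint

# Hypothetical next-token candidates, most likely first.
candidates = [("abc", 0.5), ("42", 0.3), ("7!", 0.2)]
digits_only = re.compile(r"\d+")  # constraint: output must be digits

print(pick_constrained(candidates, "", digits_only))  # 42
```

Here "abc" is the most probable token but fails the constraint, so the decoder falls back to "42", the next most likely candidate that satisfies it.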
Would you be willing to share an example prompt? I'm curious to see what it's responding well to.
Mistral medium is ranked #8 on lmsys arena IIRC, so it’s probably just not your style?
I’m also comparing this to gpt-5-mini, not the big boy
For example, I ask, what are the most common targets of removal in Magic: the Gathering? Mistral's answer is so-so, including a slew of cards you would prioritize removing, but also several you typically wouldn't, including things like Mox Amber, a 0-cost mana rock. Gemini Flash gave far fewer examples, one for each major card type, but all of them are definitely priority targets that often defined an entire metagame, like Tarmogoyf.
Was looking to both decrease costs and experiment outside the OpenAI offering, and ended up using Mistral Small for summarization and Large for the final analysis step, and I'm super happy.
They also have a very generous free tier, which helps in creating PoCs and demos.
I’m making an app to learn multiple languages. This portion of the pipeline is about explaining everything I can determine about a work in a sentence in specifically formatted prose.
Example: https://x.com/barrelltech/status/1963684443006066772?s=46&t=...
Is there an example you can show that tended to fail?
I’m curious how token constraint could have strayed so far from your desired format.
Yes I use(d) structured output. I gave it very specific instructions and data for every paragraph, and asked it to generate paragraphs for each one using this specific format. For the formatting, I have a large portion of the system prompt detailing it exactly, with dozens of examples.
gpt-5-mini would normally use this formatting maybe once, and then just kinda do whatever it wanted for the rest of the time. It also would freestyle and put all sorts of things in the various bold and italic sections (using the language name instead of the translation was one of its favorites) that I’ve never seen mistral do in the thousands of paragraphs I’ve read. It also would fail in some other truly spectacular ways, but to go into all of them would just be bashing on gpt-5-mini.
Switched it over to mistral, and with a bit of tweaking, it’s nearly perfect (as perfect as I would expect from an LLM, which is only really 90% sufficient XD)
I use Lumo a lot and usually results are good enough. To be clear though, I do fall back on gemini-cli and OpenAI’s codex systems for coding a few times a week.
I live in the US, but if I were a European, I would be all in on supporting Mistral. Strengthen your own country and region.
That's a bit of a double-edged sword. My support goes as far as giving local offerings a try when I might not have done otherwise. But at that point they need to be able to compete on merit.
> Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3. These run exclusively on servers Proton controls so your data is never stored on a third-party platform.
The problem is that if it's actually successful it'll just be bought by one of the big US based competitors.
https://www.bloomberg.com/news/articles/2025-09-03/mistral-s...
But when it comes to researching information, it is consistently among the worst performers in my comparisons.
Generally the order is Opus 4.1 > Perplexity > Gemini Pro >> GPT 5 >> Qwen.
I really like Perplexity, and if you get it for free, it's hard to justify spending $100/month.
Few recent examples:
- find me public companies listed in Poland that have the best z-index of dividend yield, payout ratio and earnings growth
- list the most important psychological tendencies and biases that crew resource management tries to address
I find them to be a pretty good overall model, although not at the bleeding edge. Their responses are very fast. Qwen is better at code/log analysis in my experience, but general coding questions haven't presented any problems.
Mistral's agent framework is pretty good too. You can make agents very easily from the Le Chat side, or if you want deeper control, you get La Plateforme access, and agents you make there can be used in Le Chat without counting against API usage.
Of the AI products I've been working with, and I've been trying a lot of them, Mistral is one I plan on keeping when I reduce myself down to 2-3 I want to stick around.
Naturally, Cloudflare is in the business, too:
https://docs.mistral.ai/deployment/self-deployment/cloudflar...
mistral.ai name servers:
Name Server: ivan.ns.cloudflare.com
Name Server: ada.ns.cloudflare.com
Apart from that, Mistral appears to remain the only really relevant new player with European ties in the Gen AI space. Aleph Alpha is not heard from anymore and is essentially steered by the Schwarz Group now, so it amounts to an acquihire at best, I guess.
I could never get anything useful out of Lovable and was frustrated with the long editing and waiting process.
I'd prefer a site builder template with dropdowns. Lovable feels like that type of product, just with an LLM facade.
I don't hate AI, I just wasn't getting into the groove with Lovable.
1. A general purpose LLM chat interface with high reasoning capacity (GPT-5 thinking on web is my go to for now)
2. An agent that has unrestricted token consumption running on my machine (Claude Code with Opus and Amp are my go to for now).
3. A fine-tuned, single purpose LLM like v0 that is really good at one thing, in this case at generating a specific UI component with good design aesthetics from a wireframe in a sandbox.
Everything else seems like getting the worst of all worlds.
Our engineer used Lovable for about a day, then just cloned the repo and used Cursor, since it was much more productive.
One PM I know uses it for designing prototypes then handing them off to the engineering team who then continue them in Claude Code etc.
So it's sort of competing with Wix, Squarespace, WordPress, and also prototyping tools like Figma.
I think AI in Europe is doable in general.
I don't think that was unclear to anyone - again, I'm sure some EU entities might want EU related AI companies more than they care about any other features, just as some Turkmenistani entities would prefer Turkmenistan AI. I hope the point about why that advantage is banausic here is more clear, now.
Besides those EU entities, do these companies offer any advantages compared to American or Chinese AI companies for the entire rest of the world? Licensing, rankings in specific benchmarks, etc?
You watch the OpenAI launch videos and it's a surprising variety of European accents talking about all the value they're creating in the US instead of back home, simply due to the more favorable business/investment policies of the US.
My pet theory is, outside of the silly regulatory stance, the real reason Europe can never compete in each wave of tech (mainframe > pc > internet > mobile > social > AI > etc.) is government pension systems hoovering up all private capital and investing it into european governments (bonds) instead of european businesses (equities).
Centralizing the financial assets of an entire country and subjecting them to the whims of politics, thus requiring they be invested in extremely low-risk bonds instead of a larger portion in European equity indexes, or even a tiny portion in venture capital, has created this situation: https://i.redd.it/fxks3skmvt4e1.png
Yes, a vast majority of VC funds lose money. Hence why it's bucketed in 'alternatives' and never a major part of pension portfolios. But the small group of winners literally create the future tax base to fund the social welfare system to continue existing (not to mention the future military tech which it turns out is useful when your neighbors get hostile). Not taking the risk means you never get the reward.
If Europe put even 1-2% of its $5T in pension assets into venture... even grossly mismanaged, SoftBank-style... I find it hard to imagine you wouldn't accidentally create a few $100+ billion companies in 10-20 years. More important would be creating the startup ecosystem for channeling the rest of the world's capital into these ventures as a multiplier.
- The US has been a single market for a much longer time than the EU, and the EU still is not a single market, primarily due to language barriers (Germany, France, and Italy are large enough markets to have their own localized, but slightly worse SaaS options)
- European societies are more arranged around the common good and have lower income differences between people and super-wealthy individuals by design. The US is built around being the place where talented people can make the most money out of their skills, which results in many people worldwide choosing it as the place to go to, as the talent market is a global one.
- European cultures tend to place less value on making as much money as possible, or on competing and being the winner, which results in people grinding less and being happy once they become rich enough to focus on other things.
Western Europe and the US had essentially the same level of government safety nets (and government spending and economic growth) from the 1950s to the 1990s.
Who do you think started all of the European industrial giants that are still globally competitive today? Europe had no problem competing in the industrial revolution. If these were actually European values, there would be no competitive European industry, just as there is no competitive European tech. It's only the digital revolution that Europe has struggled with.
Even now where I don't think that holds true so much (there are small pools of talent elsewhere, e.g. Stockholm is hot right now), a good senior engineer in Europe may be able to get €100k, and you are looking at 2 or 3 times in the US, so it's still attractive to relocate.
Cultural differences (mainly language barriers) made it hard for somewhere like that to evolve in Europe. Yes, everyone in tech speaks English, but if you move to, say, Poland and want to rent an apartment or see a doctor, you would have a hard time without at least a basic understanding of Polish. It's completely different from someone moving from Texas to San Francisco.
Ironically all the immigration of Russian speakers over the last few years has actually helped embrace English in these countries, as for nationalistic reasons they don't want to embrace Russian.
In the 2000s and early 2010s London was the tech hub of Europe (English speaking, many high ranking universities in the vicinity), but Brexit f***d that up.
It has nothing to do with policies, or pension systems or whatever, and everything to do with market size: when building an American company, you have access to the whole US from the start, and then you can build an international product (with all the hassle that comes with it). If you're "European", you have 27 different markets to address, and except for your own, none of them is easier for you to enter than for an American company.
The second hottest tech market after the US (and not that much behind) is China, and don't tell me that's because they have favorable business policies, ask Jack Ma! It's literally a totalitarian state where CEOs can get abducted if the CCP thinks they're getting too powerful. Talk about incentivizing risk taking. But that's a market of a billion people, the second-highest GDP on the planet, and American companies can't monopolize every market because they are being heavily restrained by the government.
The only way for Europe to thrive technologically would be to close the doors to American corporations; that's how you get an Alibaba or a VKontakte.
I'm not holding my breath though.
Government spending makes up a smaller % of Chinese GDP than in the US, so definitionally their economy is more privatized than the US economy. China is at 33%, US at 36%, Europe is at 50%.
For every Jack Ma, there's a million other Chinese businesses flourishing in every niche imaginable with very little oversight from the CCP.
Also, "government spending" isn't a good proxy for how much a government intervenes in the economy, especially when said government can just order businesses to do this or that without handing money to them.
Definitionally, it's the percentage of economic activity that is dictated by decentralized private market actors vs. centralized government ones.
All stats are imperfect reflections of reality. But name a better one for this particular issue.
But I'd just like to point out how silly it is to dismiss the person's concerns by claiming we should all just agree to be reductive because it's easiest to discuss a single metric. It's certainly easiest to use this single metric to make the discussion about your conclusion, though, if that's what you were aiming for. I hope not, though.
Chinese utilities are all counted as SOEs; regulated utility monopolies in the US aren't, even though de facto they are government entities. The US likes to brand everything as more capitalist (just as China likes to brand everything as more communist), so this distorts the picture.
If we're just trying to capture the full picture of money flows in an economy, and whether each incremental currency unit is responding to market signals or not, % of GDP that is government spending is more reliable imo.
It's far easier to compare internationally and less fuzzy to calculate, given there's much more data on it globally.
Did you see what happened just this week, with the military parade and financial institutions being told to behave, because the government didn't want any market turbulence around their glorious parade?
The reality is that the Chinese government has total control over the entirety of the Chinese economy. It doesn't exert full control all the time, and lets things run as long as they don't interfere with its agenda, but it will intervene in absolutely anything whenever it decides to, for whatever preposterous reason, like a military parade or anything else.
Can your theory explain then the difference between San Francisco/Bay area and the rest of the United States? Perhaps it is California's generous tax policies compared to say Texas?
I would not dismiss the contribution of European companies to each one of these domains so quickly though. Especially on the mobile side, there was a time when Nokia/Siemens/Ericsson/Alcatel were big names in that industry.
My perception using them is that they have comparable models to OpenAI and others when it comes to general use 'chat' tasks, but they don't match something like gpt-5 Pro or high thinking when you need something more powerful.
Perhaps the problem is that you need to be competing right at the frontier, or not at all. I put Cohere in a similar bucket.
A year ago I wanted to use and like Mistral, mostly because it was an upstart competitor to OpenAI. Yet I found its coding ability sadly lacking. I haven’t tried since. I also have seen them on the HN front page very rarely. I’m curious too how well they stand up these days.
A bit of a shame if they're going to launch these connectors that the model isn't suitable for them.
From a similar generation, all I can think about is Homer Simpson trying to put together a BBQ grill from only the French instructions: "Le Grille?? What the hell is Le Grille??"
Cool!