frontpage.

Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway

https://asteroidos.org/news/2-0-release/index.html
100•moWerk•1h ago•7 comments

Claude Sonnet 4.6

https://www.anthropic.com/news/claude-sonnet-4-6
548•adocomplete•3h ago•440 comments

Using go fix to modernize Go code

https://go.dev/blog/gofix
181•todsacerdoti•4h ago•33 comments

Gentoo on Codeberg

https://www.gentoo.org/news/2026/02/16/codeberg.html
155•todsacerdoti•3h ago•37 comments

Meta to retire messenger desktop app and messenger.com in April 2026

https://dzrh.com.ph/post/meta-to-retire-messenger-desktop-app-and-messengercom-in-april-2026-user...
37•SoKamil•1h ago•30 comments

Structured AI (YC F25) Is Hiring

https://www.ycombinator.com/companies/structured-ai/jobs/q3cx77y-gtm-intern
1•issygreenslade•4m ago

Physicists Make Electrons Flow Like Water

https://www.quantamagazine.org/physicists-make-electrons-flow-like-water-20260211/
41•rbanffy•4d ago•0 comments

So you want to build a tunnel

https://practical.engineering/blog/2026/2/17/so-you-want-to-build-a-tunnel
99•crescit_eundo•4h ago•32 comments

Async/Await on the GPU

https://www.vectorware.com/blog/async-await-on-gpu/
104•Philpax•4h ago•32 comments

GrapheneOS – Break Free from Google and Apple

https://blog.tomaszdunia.pl/grapheneos-eng/
943•to3k•11h ago•659 comments

Show HN: I wrote a technical history book on Lisp

https://berksoft.ca/gol/
100•cdegroot•5h ago•22 comments

Chess engines do weird stuff

https://girl.surgery/chess
108•admiringly•3h ago•53 comments

I converted 2D conventional flight tracking into 3D

https://aeris.edbn.me/?city=SFO
168•kewonit•6h ago•41 comments

Tesla 'Robotaxi' adds 5 more crashes in Austin in a month – 4x worse than humans

https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-crashes-austin-month-4x-worse-than-humans/
173•Bender•2h ago•85 comments

HackMyClaw

https://hackmyclaw.com/
201•hentrep•4h ago•108 comments

Is Show HN dead? No, but it's drowning

https://www.arthurcnops.blog/death-of-show-hn/
332•acnops•10h ago•276 comments

Contra "Grandmaster-level chess without search" (2024)

https://cosmo.tardis.ac/files/2024-02-13-searchless.html
9•luu•1d ago•0 comments

An AI Agent Published a Hit Piece on Me – Forensics and More Fallout

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-3/
43•scottshambaugh•1h ago•20 comments

Climbing Mount Fuji visualized through milestone stamps

https://fuji.halfof8.com/
31•gessha•3h ago•5 comments

Launch HN: Sonarly (YC W26) – AI agent to triage and fix your production alerts

https://sonarly.com/
21•Dimittri•4h ago•1 comment

Show HN: I taught LLMs to play Magic: The Gathering against each other

https://mage-bench.com/
68•GregorStocks•4h ago•51 comments

Don't pass on small block ciphers

https://00f.net/2026/02/10/small-block-ciphers/
36•jstrieb•2d ago•14 comments

Discord Rival Gets Overwhelmed by Exodus of Players Fleeing Age-Verification

https://kotaku.com/discord-alternative-teamspeak-age-verification-check-rivals-2000669693
97•thunderbong•3h ago•41 comments

Show HN: I'm launching a LPFM radio station

https://www.kpbj.fm/
14•solomonb•49m ago•3 comments

Show HN: 6cy – Experimental streaming archive format with per-block codecs

https://github.com/byte271/6cy
24•yihac1•4h ago•8 comments

Four Column ASCII (2017)

https://garbagecollected.org/2017/01/31/four-column-ascii/
317•tempodox•2d ago•76 comments

Show HN: Continue – Source-controlled AI checks, enforceable in CI

https://docs.continue.dev
33•sestinj•3h ago•5 comments

Semantic ablation: Why AI writing is generic and boring

https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/
183•benji8000•4h ago•158 comments

Labyrinth Locator

https://labyrinthlocator.org/
28•emigre•4d ago•6 comments

Sub-Millisecond RAG on Apple Silicon. No Server. No API. One File

https://github.com/christopherkarani/Wax
46•ckarani•5h ago•15 comments

Mistral Agents API

https://mistral.ai/news/agents-api
152•pember•8mo ago

Comments

orliesaurus•8mo ago
Whoever made those embedded videos, here's some feedback; take it if you want, it's free:

1) It's really hard to follow some of the videos: you're just copy-pasting the prompts for your agents into the chat, and the generated output then hides the prompts. Instead, put the prompt text on screen as an overlay/subtitle so we know what you're doing.

2) The clicking sound of you copy-pasting and typing is not ASMR; please just mute it next time.

3) Please zoom into the text more; not everyone has 20/20 vision on a 4K screen.

ianhawes•8mo ago
4) Use a clean browser profile so you don't show unrelated autocomplete
threeducks•8mo ago
To add to 3): YouTube embedded videos default to 360p for me even if I maximize the embedded video on my 4k screen, which is completely unreadable. This is probably an attempt by YouTube to get viewers to click through to the YouTube website. It is probably not in Mistral's best interest to funnel viewers to YouTube, so they should use a different video host.

But even at the maximum 1080p resolution, the image quality is not that great. And while we are at it, the wine-red (#833048) on dark-brown (#23231F) syntax highlighting for keyword arguments has a very poor contrast ratio of around 1.8:1 (https://webaim.org/resources/contrastchecker/), which earns a rating of "Fail" for normal text, large text, and UI elements alike.
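
For the record, the ~1.8:1 figure checks out; here is a minimal sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas in plain Python, using the two colors quoted above:

    def linearize(channel: int) -> float:
        # sRGB channel (0-255) -> linear-light value, per WCAG 2.x
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(hex_color: str) -> float:
        # "#rrggbb" -> relative luminance
        r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
        return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

    def contrast_ratio(fg: str, bg: str) -> float:
        l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    print(round(contrast_ratio("#833048", "#23231F"), 2))  # about 1.9, far below the 4.5:1 AA threshold for normal text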

moralestapia•8mo ago
I came here to see if anyone else noticed.

Very sloppy job, imo.

It costs next to nothing to come up with a little story and have someone on Fiverr narrate it (or an AI; after all, that's what they sell).

bbor•8mo ago
OK, I'm behind the times in terms of MCP implementation, so I'd appreciate a sanity check: the appeal of this feature is that you can pass off the "when to call which MCP endpoint, and with what" logic to Mistral, rather than implementing it yourself? If so, I'm not sure I completely understand why I'd want a model-specific, remote solution for this rather than a single local library, since in theory this logic should be the same for any given LLM/MCP toolset pairing. Is it just simpler?

It certainly looks easy to implement, I will say that! Docs halfway down the page: https://docs.mistral.ai/agents/mcp/

potatolicious•8mo ago
It seems like the main pitch here is auto-inclusion and auto-exclusion of various tools via an orchestration agent (which may or may not be the main model itself? Unclear from their post)

Mostly this seems like an end-run around tool calling scalability limits. Model performance degrades heavily if the field of possible tools gets too large, so you insert a component into the system that figures out what tools should be in-scope, and make only those available, to get reliability higher.

In terms of "why outsource this" it seems like the idea is that their orchestration agent would be better than a cruder task state machine that you would implement yourself. Time will tell if this assertion is true!

ed•8mo ago
> auto-inclusion and auto-exclusion of various tools via an orchestration agent

Where do you see that? That would be neat, I'm under the impression orchestration is manual though – you define an agent and give it the ability to hand off tasks to sub-agents.

potatolicious•8mo ago
Sorry, maybe I could've phrased it better: it basically forces the devs to divide their tools into buckets of fewer tools manually. (The Travel Agent has N tools, the Research Agent has M tools, etc. all specified by the dev)

The pitch is that if you do this bucketization, the overall orchestrator can intelligently pick the bucket to use, but the idea is that at any moment the LLM is only exposed to a limited set of tools.

As opposed to the more pie-in-the-sky idea that given N tools (where N is very very large) the LLM can still accurately tool-select without any developer intervention. This seems pretty far off at this point.
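
As a rough illustration of that bucketization pattern (not the actual Mistral Agents API; the bucket names, tools, and the keyword heuristic standing in for the orchestration agent are all invented):

    # The developer splits tools into per-agent buckets; a cheap orchestration
    # step picks one bucket, and only that bucket's tools are exposed to the
    # model on a given turn.
    TOOL_BUCKETS = {
        "travel_agent": ["search_flights", "book_hotel", "get_visa_rules"],
        "research_agent": ["web_search", "fetch_url", "summarize_doc"],
    }

    def pick_bucket(user_message: str) -> str:
        # Stand-in for the orchestration agent: in practice this would be an
        # LLM call; a keyword heuristic keeps the sketch self-contained.
        travel_words = {"flight", "hotel", "visa", "trip"}
        if any(w in user_message.lower() for w in travel_words):
            return "travel_agent"
        return "research_agent"

    def tools_in_scope(user_message: str) -> list[str]:
        return TOOL_BUCKETS[pick_bucket(user_message)]

    print(tools_in_scope("Find me a flight to Lisbon"))   # travel_agent tools only
    print(tools_in_scope("Summarize this paper on MCP"))  # research_agent tools only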

htrp•8mo ago
Is Mistral a model company, an agent company, or an enterprise software company now?
nomsters•8mo ago
yes
greenavocado•8mo ago
Mistral is trying to be everything at once and it shows. To make ends meet they pivoted to selling enterprise software through Le Chat and cozying up to Microsoft. Now they're throwing around terms like "agentic AI" to stay trendy, even as competitors like DeepSeek outperform them in key areas. Their identity crisis is obvious. Are they a model company? A software vendor? A research lab? At this point, they seem more like a startup chasing hype and funding than a company with a clear direction. The 6 billion Euro valuation looks impressive, but with so many shifts in strategy, you have to wonder if they're building something lasting or just riding the AI wave until it crashes.
eigenspace•8mo ago
Their strategy doesn't make sense to you because you're looking for a technical feature that differentiates them. But technical features aren't their key differentiator; geography is. They'll get a lot of contracts in Europe simply because they're European. Everyone is keenly aware of how dependent European tech stacks are on increasingly unfriendly foreign powers.

If there's a local European option that does most of what an American or Chinese company does, that's simply a safer choice.

From this point of view, them trying to do everything at once makes a lot of sense. They don't actually need to be the absolute best or even the cheapest at any one thing. They need to just exist in Europe, be stable, and offer good services that people want. Casting a wide net is a better strategy for them.

Raed667•8mo ago
Do they need to pick one? Their offering doesn't seem incoherent to me
brandall10•8mo ago
Couldn't the same questions be asked of OpenAI and Anthropic?

Ultimately these are product/service companies, leveraging their research and innovations as differentiators.

If you're "only a model" company, you likely have no moat.

FailMore•8mo ago
Is this basically an LLM that has tools automatically configured so I don't have to handle that myself? Or am I not understanding it correctly? As in, do I just make standard requests, but the LLM does more work than normal before sending me a response? Or do I get the response to every step?
spmurrayzzz•8mo ago
The aspirational goal is that the model knows what tools to call and when, without human intervention. In practice, you'll see varying efficacy depending on the tools you need. Some tool usage is in-distribution / well represented in the training set, but if you have some custom, exotic MCP server you created yourself (or pulled off some random GitHub repo), you may see mixed results. Sometimes that can be fixed simply by augmenting your prompt with contrastive examples of how to use, or not use, the tool.

As an aside, my experience with Devstral (both via API and locally with open weights) has been very underwhelming in this regard, so I'm curious how this new agent infra performs given that observation.
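
To make the contrastive-examples suggestion concrete, here is a hypothetical tool definition (the name, fields, and wording are made up; no particular provider's schema is implied) whose description spells out both when to call the tool and when not to:

    # Hypothetical tool schema with contrastive usage examples baked into the
    # description; the tool name and parameters are invented for illustration.
    ticket_lookup_tool = {
        "name": "lookup_support_ticket",
        "description": (
            "Fetch a support ticket by its numeric ID.\n"
            "USE when the user references a ticket number, e.g. "
            "'what's the status of ticket 4812?'.\n"
            "DO NOT USE for general product questions, e.g. "
            "'how do I reset my password?'; answer those directly."
        ),
        "parameters": {
            "type": "object",
            "properties": {"ticket_id": {"type": "integer"}},
            "required": ["ticket_id"],
        },
    }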

koakuma-chan•8mo ago
It's a software framework for orchestrating agents. Each agent can have its own system prompt and its own tools, and it can delegate ("hand off") to a different agent. When a handoff occurs, the LLM runs again, but as a different agent.
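
A minimal, framework-agnostic sketch of that handoff mechanic (the Agent class, agent names, and tools below are invented for illustration and are not the real Agents API):

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        system_prompt: str
        tools: list[str] = field(default_factory=list)
        handoff_to: str | None = None  # another agent this one may hand off to

    AGENTS = {
        "triage": Agent("triage", "Route the request.", handoff_to="billing"),
        "billing": Agent("billing", "Resolve billing issues.", tools=["refund"]),
    }

    def run(agent_name: str, user_message: str) -> str:
        agent = AGENTS[agent_name]
        # Stand-in for the LLM call: the model runs with *this* agent's prompt and tools.
        print(f"[{agent.name}] prompt={agent.system_prompt!r} tools={agent.tools}")
        if agent.handoff_to:
            # On handoff, the loop runs again as the other agent, with its own
            # system prompt and tool set.
            return run(agent.handoff_to, user_message)
        return f"{agent.name} handled: {user_message}"

    print(run("triage", "I was double charged last month"))
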
manmal•8mo ago
Like Gemini Gems, but agentic?
koakuma-chan•8mo ago
Gemini Gems seems to be the equivalent of ChatGPT's "GPTs", and I never figured out what those actually are. The Mistral Agents API is more like the OpenAI Agents SDK.
LeoPanthera•8mo ago
Gems and GPTs are just a way to customize the system prompt from the web UI.
qwertox•8mo ago
The "My MCPs" button looks very promising.

I was looking around Le Chat, something I hadn't done in months, and I got the impression that they've really been working on interesting stuff in interesting ways.

The ability to enrich either a chat or, more generally, an agent with one or more libraries has been solved in a very friendly way. I don't think OpenAI or Anthropic have solved it as well.