
Neural Networks: Zero to Hero

https://karpathy.ai/zero-to-hero.html
267•suioir•5h ago•17 comments

The Gentle Seduction

http://www.skyhunter.com/marcs/GentleSeduction.html
24•JumpCrisscross•1h ago•2 comments

Total monthly number of StackOverflow questions over time

https://data.stackexchange.com/stackoverflow/query/1926661#graph
1104•maartin0•11h ago•600 comments

The suck is why we're here

https://nik.art/the-suck-is-why-were-here/
291•herbertl•10h ago•149 comments

GDI Effects from the PC cracking scene

https://gdimayhem.temari.fr/index.php?p=all
62•todsacerdoti•5d ago•5 comments

Swift on Android: Full Native App Development Now Possible

https://docs.swifdroid.com/app/
212•mihael•10h ago•119 comments

Can I start using Wayland in 2026?

https://michael.stapelberg.ch/posts/2026-01-04-wayland-sway-in-2026/
17•secure•1h ago•3 comments

The Most Popular Blogs of Hacker News in 2025

https://refactoringenglish.com/blog/2025-hn-top-5/
577•mtlynch•17h ago•109 comments

The PGP Problem (2019)

https://www.latacora.com/blog/2019/07/16/the-pgp-problem/
4•croemer•51m ago•49 comments

KDE onboarding is good now

https://rabbitictranslator.com/kde-onboarding/
102•todsacerdoti•9h ago•61 comments

MyTorch – Minimalist autograd in 450 lines of Python

https://github.com/obround/mytorch
75•iguana2000•8h ago•10 comments

How Thomas Mann Wrote the Magic Mountain

https://www.theguardian.com/books/2025/dec/31/the-master-of-contradictions-by-morten-hi-jensen-re...
60•Caiero•7h ago•14 comments

Corroded: Illegal Rust

https://github.com/buyukakyuz/corroded
105•csmantle•9h ago•24 comments

Corundum – open-source FPGA-based NIC and platform for in-network compute

https://github.com/corundum/corundum
24•peter_d_sherman•4h ago•5 comments

Show HN: Claude Reflect – Auto-turn Claude corrections into project config

https://github.com/BayramAnnakov/claude-reflect
31•Bayram•5h ago•14 comments

The Late Arrival of 16-Bit CP/M

https://nemanjatrifunovic.substack.com/p/the-late-arrival-of-16-bit-cpm
47•rbanffy•5d ago•5 comments

Gershwin-desktop: OS X-like Desktop Environment based on GNUStep

https://github.com/gershwin-desktop/gershwin-desktop
31•rguiscard•6h ago•5 comments

Show HN: Replacing my OS process scheduler with an LLM

https://github.com/mprajyothreddy/brainkernel
26•ImPrajyoth•4d ago•16 comments

From silicon to Darude – Sandstorm: breaking famous synthesizer DSPs [video]

https://media.ccc.de/v/39c3-from-silicon-to-darude-sand-storm-breaking-famous-synthesizer-dsps
16•anigbrowl•4d ago•2 comments

Ed25519-CLI – command-line interface for the Ed25519 signature system

https://lib25519.cr.yp.to/ed25519-cli.html
84•INGELRII•6d ago•39 comments

Pixoo Sign Client for Ruby

https://github.com/tenderlove/pixoo-rb
13•0x54MUR41•3h ago•1 comment

Finger-Nose Stylus for Touch Screens (2011)

https://variationsonnormal.com/2011/04/28/finger-nose-stylus-for-touchscreens/
32•downboots•5d ago•14 comments

Learning to Play Tic-Tac-Toe with Jax

https://joe-antognini.github.io/ml/jax-tic-tac-toe
14•antognini•4h ago•2 comments

The Great Gatsby is the most misunderstood novel (2021)

https://www.bbc.com/culture/article/20210209-the-worlds-most-misunderstood-novel
63•1659447091•8h ago•102 comments

Developing a BLAS Library for the AMD AI Engine [pdf]

https://uni.tlaan.nl/thesis/msc_thesis_tristan_laan_aieblas.pdf
36•teleforce•8h ago•9 comments

The C3 Programming Language

https://c3-lang.org
337•y1n0•17h ago•207 comments

Take One Small Step

https://thinkhuman.com/take-one-small-step/
94•jamesgill•12h ago•21 comments

Scaling Latent Reasoning via Looped Language Models

https://arxiv.org/abs/2510.25741
65•remexre•12h ago•10 comments

Xr0 verifier, guarantee the safety of C programs at compile time

https://xr0.dev
90•Alifatisk•15h ago•28 comments

The Riven Diffs – Seeing Riven (1997) Differently

https://glthr.com/the-riven-diffs-1
73•glth•11h ago•8 comments

Mistral Agents API

https://mistral.ai/news/agents-api
152•pember•7mo ago

Comments

orliesaurus•7mo ago
Whoever made those embedded videos, here's some feedback; take it if you want, it's free:

1) It's really hard to follow some of the videos: you're just copy-pasting the prompts for your agents into the chat, and the generated output comes out and hides the prompts. Instead, put the prompt text as a subtitle-like overlay so we know what you're doing

2) The clicking sound of you copy-pasting and typing is not ASMR; please just mute it next time

3) Please zoom into the text more; not everyone has 20/20 super vision on a 4K screen

ianhawes•7mo ago
4) Use a clean browser profile so you don't show unrelated autocomplete
threeducks•7mo ago
To add to 3): YouTube embedded videos default to 360p for me even if I maximize the embedded video on my 4k screen, which is completely unreadable. This is probably an attempt by YouTube to get viewers to click through to the YouTube website. It is probably not in Mistral's best interest to funnel viewers to YouTube, so they should use a different video host.

But even at the maximum 1080p resolution, the image quality is not that great. And while we are at it, the wine-red (#833048) on dark-brown (#23231F) syntax highlighting for keyword arguments has a very poor contrast ratio of around 1.8:1 (https://webaim.org/resources/contrastchecker/), which earns a rating of "Fail" across the normal text, large text, and UI element categories.
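The ~1.8:1 figure checks out. A minimal sketch of the WCAG 2.x contrast computation for the two colors quoted above (the formula is from the WCAG spec; the hex values are from this comment):

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex color."""
    def channel(v: int) -> float:
        c = v / 255
        # Linearize the sRGB channel value per the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05), per WCAG 2.x."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio("#833048", "#23231F")
print(f"{ratio:.2f}:1")  # ~1.86:1, well below the 4.5:1 WCAG AA threshold for normal text
```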

moralestapia•7mo ago
I came here to see if anyone else noticed.

Very sloppy job, imo.

It costs next to nothing to come up with a little story and have someone on Fiverr narrate it (or an AI, after all that's what they sell).

bbor•7mo ago
Ok I’m behind the times in terms of MCP implementation, so would appreciate a check: the appeal of this feature is that you can pass off the “when to call which MCP endpoint and with what” logic to Mistral, rather than implementing it yourself? If so I’m not sure I completely understand why I’d want a model-specific, remote solution for this rather than a single local library, since theoretically this logic should be the same for any given LLM/MCP toolset pairing. Just simpler?

It certainly looks easy to implement, I will say that! Docs halfway down the page: https://docs.mistral.ai/agents/mcp/

potatolicious•7mo ago
It seems like the main pitch here is auto-inclusion and auto-exclusion of various tools via an orchestration agent (which may or may not be the main model itself? Unclear from their post)

Mostly this seems like an end-run around tool calling scalability limits. Model performance degrades heavily if the field of possible tools gets too large, so you insert a component into the system that figures out what tools should be in-scope, and make only those available, to get reliability higher.

In terms of "why outsource this" it seems like the idea is that their orchestration agent would be better than a cruder task state machine that you would implement yourself. Time will tell if this assertion is true!

ed•7mo ago
> auto-inclusion and auto-exclusion of various tools via an orchestration agent

Where do you see that? That would be neat, I'm under the impression orchestration is manual though – you define an agent and give it the ability to hand off tasks to sub-agents.

potatolicious•7mo ago
Sorry, maybe I could've phrased it better: it basically forces the devs to divide their tools into buckets of fewer tools manually. (The Travel Agent has N tools, the Research Agent has M tools, etc. all specified by the dev)

The pitch is that if you do this bucketization, the overall orchestrator can intelligently pick the bucket to use, but the idea is that at any moment the LLM is only exposed to a limited set of tools.

As opposed to the more pie-in-the-sky idea that given N tools (where N is very very large) the LLM can still accurately tool-select without any developer intervention. This seems pretty far off at this point.
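None of the following is Mistral's actual API; it's just a tiny self-contained sketch of the bucketing idea described above, with a keyword router standing in for the orchestrator LLM and plain functions standing in for tools:

```python
# Illustrative only: a keyword router stands in for the orchestrator model.
# The point is that the active model only ever sees one small bucket of
# tools, never the full catalog.
from typing import Callable

AGENTS: dict[str, dict[str, Callable[[str], str]]] = {
    "travel": {
        "search_flights": lambda q: f"flights for {q}",
        "book_hotel": lambda q: f"hotel booked: {q}",
    },
    "research": {
        "web_search": lambda q: f"results for {q}",
        "summarize": lambda q: f"summary of {q}",
    },
}

def route(task: str) -> str:
    """Pick a bucket. In the real system this would be an LLM call."""
    return "travel" if any(w in task.lower() for w in ("flight", "hotel")) else "research"

def handle(task: str) -> list[str]:
    bucket = AGENTS[route(task)]
    # Only this bucket's tool definitions would go into the model's context.
    return sorted(bucket)

print(handle("book me a flight to Paris"))  # ['book_hotel', 'search_flights']
print(handle("what is MCP?"))               # ['summarize', 'web_search']
```

The reliability win is structural: each LLM call selects among a handful of tools instead of dozens, at the cost of the dev doing the bucketing up front.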

htrp•7mo ago
Is Mistral a model company, an agent company, or an enterprise software company now?
nomsters•7mo ago
yes
greenavocado•7mo ago
Mistral is trying to be everything at once and it shows. To make ends meet they pivoted to selling enterprise software through Le Chat and cozying up to Microsoft. Now they're throwing around terms like "agentic AI" to stay trendy, even as competitors like DeepSeek outperform them in key areas. Their identity crisis is obvious. Are they a model company? A software vendor? A research lab? At this point, they seem more like a startup chasing hype and funding than a company with a clear direction. The 6 billion Euro valuation looks impressive, but with so many shifts in strategy, you have to wonder if they're building something lasting or just riding the AI wave until it crashes.
eigenspace•7mo ago
Their strategy doesn't make sense to you because you're looking for a technical feature that differentiates them. But technical features aren't their key differentiator; geography is. They'll get a lot of contracts in Europe simply because they're European. Everyone is keenly aware of how dependent European tech stacks are on increasingly unfriendly foreign powers.

If there's a local European option that does most of what an American or Chinese company does, that's simply a safer choice.

From this point of view, them trying to do everything at once makes a lot of sense. They don't actually need to be the absolute best or even the cheapest at any one thing. They need to just exist in Europe, be stable, and offer good services that people want. Casting a wide net is a better strategy for them.

Raed667•7mo ago
Do they need to pick one? Their offering doesn't seem incoherent to me
brandall10•7mo ago
Couldn't the same questions be asked of OpenAI and Anthropic?

Ultimately these are product/service companies, leveraging their research and innovations as differentiators.

If you're "only a model" company you likely have no moat.

FailMore•7mo ago
Is this basically an LLM that has tools automatically configured so I don't have to handle that myself? Or am I not understanding it correctly? As in, do I just make standard requests, but the LLM does more work than normal before sending me a response? Or do I get the response to every step?
spmurrayzzz•7mo ago
The aspirational goal is that the model knows what tools to call and when, without human intervention. In practice, you'll see varying efficacy with that depending on the tools you need. Some of the tool usage is in-distribution / well represented in training set, but if you have some custom exotic MCP server you created yourself (or pulled off of some random github) you may see mixed results. Sometimes that can be fixed by simply augmenting your prompt with contrastive examples of how to use or not use the tool.

As an aside, my experience with devstral (both via API and locally with open weights) has been very underwhelming in this respect. So I'm curious how this new agent infra performs given that observation.
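The "contrastive examples" fix mentioned above is just prompt construction. A hypothetical sketch of pairing a custom tool with do/don't usage examples in the system prompt (the tool name and helper are invented for illustration; nothing here is a Mistral API):

```python
# Hypothetical helper: embed positive and negative usage examples for a
# custom tool into the system prompt, so the model also learns when NOT
# to call it.
def tool_prompt(name: str, description: str, do: list[str], dont: list[str]) -> str:
    lines = [f"Tool `{name}`: {description}", "Use it for:"]
    lines += [f"  - {ex}" for ex in do]
    lines += ["Do NOT use it for:"]
    lines += [f"  - {ex}" for ex in dont]
    return "\n".join(lines)

prompt = tool_prompt(
    "query_inventory",  # invented example tool
    "Look up stock levels in the warehouse DB.",
    do=["'How many units of SKU-123 are in stock?'"],
    dont=["General product questions answerable without the DB."],
)
print(prompt)
```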

koakuma-chan•7mo ago
It's a software framework for orchestrating agents. Each agent can have its own system prompt and its own tools, and it can delegate ("hand off") to a different agent. When a handoff occurs, the LLM runs again, but as a different agent.
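A toy model of that handoff loop (not Mistral's SDK; agent behavior is faked with canned responders standing in for LLM calls):

```python
# Toy handoff loop: each "agent" has its own system prompt and a canned
# responder in place of an LLM call. A response of ("handoff", name)
# switches the active agent; anything else is the final answer.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    system_prompt: str
    respond: Callable[[str], object]

def run(agents: dict[str, Agent], start: str, task: str, max_hops: int = 5):
    current, trace = start, [start]
    for _ in range(max_hops):
        result = agents[current].respond(task)
        if isinstance(result, tuple) and result[0] == "handoff":
            current = result[1]
            trace.append(current)
            continue
        return result, trace
    raise RuntimeError("too many handoffs")

agents = {
    "triage": Agent("triage", "Route the request.",
                    lambda t: ("handoff", "billing" if "invoice" in t else "support")),
    "billing": Agent("billing", "Handle invoices.", lambda t: "invoice resent"),
    "support": Agent("support", "Answer questions.", lambda t: "here's how"),
}

answer, trace = run(agents, "triage", "please resend my invoice")
print(answer, trace)  # invoice resent ['triage', 'billing']
```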
manmal•7mo ago
Like Gemini Gems, but agentic?
koakuma-chan•7mo ago
Gemini Gems seems to be a ChatGPT “GPTs” equivalent, and I never figured out what those actually are. Mistral Agents API is like OpenAI Agents SDK.
LeoPanthera•7mo ago
Gems and GPTs are just a way to customize the system prompt from the web UI.
qwertox•7mo ago
The "My MCPs" button looks very promising.

I was looking around Le Chat, something I haven't done in months, and I think they've really worked on interesting stuff in interesting ways.

The ability to enrich either a chat or, more generally, an agent with one or more libraries has been solved in a very friendly way. I don't think OpenAI or Anthropic has solved it this well.