
Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•34s ago•0 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•3m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•6m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•6m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•6m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•6m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
2•juujian•8m ago•0 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•10m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•12m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•14m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•15m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•15m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•18m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
4•sakanakana00•21m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•23m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•24m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•25m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•26m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•29m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•32m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•35m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•36m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•41m ago•1 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•43m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•45m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•45m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•46m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•52m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•58m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•59m ago•1 comments

Smollm3: Smol, multilingual, long-context reasoner LLM

https://huggingface.co/blog/smollm3
388•kashifr•7mo ago

Comments

gardnr•7mo ago
It's small (3B) and does great on benchmarks. This is a model for edge / mobile deployments, so the gains over gemma3-4b are meaningful. It has dual reasoning / non-reasoning modes (a quick sketch follows the quote below), AND they released the full training method:

> We're releasing SmolLM3 with our engineering blueprint. It includes architecture details, exact data mixtures showing how we progressively boost performance across domains in a three-stage pretraining approach, and the methodology for building a hybrid reasoning model. Usually, achieving these results would require months of reverse engineering. Instead, we're providing the full methodology.
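
A minimal sketch of toggling that dual mode from transformers. Hedged: the checkpoint name and the enable_thinking flag are how I recall the blog post describing it (the template also reportedly honors /think and /no_think system-prompt flags), so treat the exact argument names as assumptions.

    # Sketch (assumed API): toggling SmolLM3's reasoning vs. non-reasoning mode.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    ckpt = "HuggingFaceTB/SmolLM3-3B"
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(ckpt, device_map="auto")

    messages = [{"role": "user", "content": "Explain KV caching in two sentences."}]
    inputs = tok.apply_chat_template(
        messages,
        add_generation_prompt=True,
        enable_thinking=False,   # assumed flag; flip to True for the reasoning mode
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256)
    print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))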

sigmoid10•7mo ago
I hate to say it, but reasoning models simply aren't suited for edge computing. I just ran some tests on this model and even at 4bit weight quantisation it blows past 10GB of VRAM with just ~1000 tokens while it is still reasoning. So even if you're running on a dedicated ML edge device like a $250 Jetson, you will run out of memory before the model even formulates a real answer. You'll need a high end GPU to make full use of it for limited answers and an enterprise grade system to support longer contexts. And with reasoning turned off I don't see any meaningful improvement over older models.

So this is primarily great for enterprises who want to do on-prem with limited budgets and maybe high-end enthusiasts.

wizee•7mo ago
You should use flash attention with KV cache quantization. I routinely use Qwen 3 14B with the full 128k context and it fits in under 24 GB VRAM. On my Pixel 8, I've successfully used Qwen 3 4B with 8K context (again with flash attention and KV cache quantization).
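
For the transformers route, here is a hedged sketch of both knobs the parent mentions; the attn_implementation and cache_implementation/cache_config arguments follow the transformers docs as I remember them, and the Qwen3 checkpoint name is just an illustrative stand-in.

    # Sketch: FlashAttention-2 plus a 4-bit quantized KV cache in transformers.
    # Assumes a CUDA GPU with flash-attn installed and the quanto backend available.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    ckpt = "Qwen/Qwen3-14B"   # the model the parent mentions; any causal LM works
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(
        ckpt,
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",  # flash attention
        device_map="auto",
    )

    inputs = tok("Summarize the SmolLM3 release.", return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        max_new_tokens=512,
        cache_implementation="quantized",              # KV cache quantization
        cache_config={"backend": "quanto", "nbits": 4},
    )
    print(tok.decode(out[0], skip_special_tokens=True))
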
sigmoid10•7mo ago
>On my Pixel 8, I've successfully used Qwen 3 4B

How many tokens/s? I can't imagine that this would run in any practical way.

tiahura•7mo ago
Can anyone estimate how much of the 3B is necessitated by multi-language support?
rockinghigh•7mo ago
The vocabulary size is fairly small (128,256) for a multilingual model. I would guess it doesn't require many additional parameters to support these 5 languages as many tokens can be shared.
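
Napkin math supporting that guess. Hedged: it assumes a hidden size of 2048 and tied input/output embeddings for SmolLM3-3B, which is my recollection of the config rather than a checked fact.

    vocab_size, hidden = 128_256, 2048
    embedding_params = vocab_size * hidden   # ~263M parameters in the (tied) embedding table
    print(embedding_params / 3.08e9)         # ~0.085 -> embeddings are only ~8-9% of a ~3B model
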
ethan_smith•7mo ago
Typically, multilingual capabilities consume 20-30% of model parameters in small LLMs, primarily in token embeddings and early transformer layers. Monolingual variants of similar models often perform better on English benchmarks with the same parameter count.
netdur•7mo ago
Naive look: 2/3 of the model, so without multi-language support this should be around 1B
nateb2022•7mo ago
https://web.archive.org/web/20250708164705/https://huggingfa...
_1•7mo ago
Which small model is good for fine-tuning on various enterprise data sets? Our business units want to run small models in the browser and on mobile devices, without dealing with RAG and cloud resources.
mhitza•7mo ago
You really need to try them all out yourself and make sure you have proper benchmarks.

While machine learning is not my field, I've tried to fine-tune Mistral 7B (following their official guide and toolset) and the results were not satisfying. There were a few very specific questions from the dataset that, no matter how much I fine-tuned and tweaked the process, it was not able to answer correctly.

A mix of vector search + keyword search is still better at building the right question context than expecting the model to learn all the information (see the sketch below).

I've used the pretraining-style dataset approach. Maybe building synthetic questions and answers around the dataset yields better results, but I didn't have time to experiment with that approach.
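
A minimal, illustrative sketch of that vector + keyword mix, fusing the two rankings with reciprocal rank fusion. The rank_bm25 and sentence-transformers libraries, the MiniLM model name, and the toy legislation snippets are stand-ins, not what the commenter actually used.

    # Hybrid retrieval sketch: BM25 keyword scores + dense embedding scores,
    # combined with reciprocal rank fusion (RRF).
    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer, util

    docs = [
        "Article 12 sets the notification deadline at 30 days.",
        "Article 7 defines the supervisory authority.",
        "Fines are capped at 4% of annual turnover under Article 83.",
    ]
    query = "what is the notification deadline"

    # Keyword side: BM25 over whitespace-tokenized text
    bm25 = BM25Okapi([d.lower().split() for d in docs])
    kw_scores = bm25.get_scores(query.lower().split())
    kw_rank = sorted(range(len(docs)), key=lambda i: -kw_scores[i])

    # Vector side: cosine similarity of sentence embeddings
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    sims = util.cos_sim(embedder.encode(query), embedder.encode(docs))[0]
    vec_rank = sorted(range(len(docs)), key=lambda i: -float(sims[i]))

    # Reciprocal rank fusion: score each doc by summed 1/(k + rank)
    k = 60
    fused = {i: 1 / (k + kw_rank.index(i)) + 1 / (k + vec_rank.index(i)) for i in range(len(docs))}
    print(docs[max(fused, key=fused.get)])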

ivape•7mo ago
How much data did you use to fine tune?
mhitza•7mo ago
Kilobytes to megabytes of data. I was trying to fine-tune it on some specific legislation that I expected to be able to ask about afterwards.
magicalhippo•7mo ago
> Maybe building synthetic questions and answers around the dataset yields better results, but I didn't have time to experiment with that approach.

While they answer a slightly different question in the Physics of Language Models[1], based on their results it seems likely that one needs to do that kind of dataset augmentation to get good results.

However, they also show that the dataset the base model is trained on can drastically affect finetuning performance. So if the base model is trained on a poor dataset for your specific task, perhaps you'll never get good performance.

[1]: https://physics.allen-zhu.com/part-3-knowledge/part-3-1

gardnr•7mo ago
Small models are bad at knowing things. Trying to train knowledge into small models is probably not the way you want to go. You could try building an offline embedded RAG system that is deployable as wasm. Some folks have had success with this.
_1•7mo ago
We do use WebLLM and a hosted Weaviate database, but there are complaints about speed (both retrieval and time to first token, since the context gets big). The Gemma 3n "nesting doll" approach sounds like it could be useful... but I haven't found anyone specifically using it to add domain-specific knowledge.
janalsncm•7mo ago
Typically retrieval is the fast part in my experience. Have you considered cheaper retrieval methods? Bm25 does pretty well on its own. And you can augment your dataset by precomputing relevant queries for each doc.
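
A toy sketch of that cheap-retrieval idea: plain BM25 with each document expanded by a precomputed query before indexing. The rank_bm25 library and the documents are illustrative stand-ins.

    # BM25 with "doc expansion": append precomputed likely queries to each doc
    # before indexing so short keyword queries still hit the right document.
    from rank_bm25 import BM25Okapi

    docs = {
        "billing.md": "Invoices are generated on the 1st of each month.",
        "auth.md": "Tokens expire after 24 hours and must be refreshed.",
    }
    # Precomputed "likely queries" per doc (e.g. generated offline by a larger model)
    expansions = {
        "billing.md": "when are invoices created billing schedule",
        "auth.md": "how long do tokens last token expiry refresh",
    }

    names = list(docs)
    corpus = [(docs[n] + " " + expansions[n]).lower().split() for n in names]
    bm25 = BM25Okapi(corpus)

    scores = bm25.get_scores("token expiry".split())
    print(names[max(range(len(names)), key=lambda i: scores[i])])  # -> auth.md
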
simonw•7mo ago
What are you hoping to achieve by fine-tuning a model in this way?
netdur•7mo ago
I have fine-tuned Gemma 3n 2B and it's pretty good, but it loads slowly on my S23U; once it's loaded, though, it works fine

Also tried SmolVLM 256M and 500M; they load faster and you can embed them in assets. They work if you know what you're doing

Just keep in mind that smaller models don't perform as well due to their limited parameters

Also, on Android you can't ship files larger than 2 GB due to Java compression issues, so you need to download models separately. You then can't load the model from the download folder; you have to copy it into the app's own folder. This means a Gemma 3n 2B model that's 3.14 GB needs at least ~7 GB of free space on the user's phone

thatjoeoverthr•7mo ago
Tuning is really not the way to add information.

Bite the bullet and do some kind of RAG; you need to provide clear, authoritative information to a model that is skilled enough to remix it for the user.

Tuning the model to imitate the dataset will damage the model's skills and "common sense", but won't reliably train it to recall information.

WhitneyLand•7mo ago
Mostly SOTA performance at the 3B level. A notable addition to the small but truly open club of models that provide full disclosure, code, and recipes to reproduce their work.

Looks like a ballpark of a million dollars of GPU time if you want to train one up yourself (4000 GPUs / 24 days).

Very nice write up that’s generous in sharing their learnings.

This is a solid and positive contribution.

YetAnotherNick•7mo ago
It's 384 H100s for 24 days, costing less than half a million dollars.
Imustaskforhelp•7mo ago
Pardon me, but is the dataset public?

Like if I really really just wanted to build it from scratch, could I do so? (not that I have that money but just curious)

hynky•7mo ago
yes, both core web datasets are publicly available as well as the rest
Imustaskforhelp•7mo ago
Thanks!

To be honest, I might then argue that this is one of the best truly open-source models we have.

There is AllenAI's (OLMo?), and there is also the one that does distributed training, but this looks a lot like SOTA for 3B parameters to me.

Thanks for telling me; I'm not going to lie, I'm going to try to test it now! (I'll try a GGUF, for the Ollama convenience)

peatmoss•7mo ago
OLMo: https://allenai.org/olmo

AFAIK, they were the first open everything model.

diggan•7mo ago
> AFAIK, they were the first open everything model.

GPT-2 (released ~5 years ago?) was "open" in the sense that the weights were available for download (sans license), the exact datasets that were used were outlined, the architecture explained and so on. So I guess it was also "open" in the sense that Llama is "open", but neither would be "open source", which I'd feel pretty confident labeling OLMo as.

So OLMo seems to be the first actually "open source" model, though maybe not the first "open" as in "downloadable" one (which is what Facebook tries to call "open source").

vixalien•7mo ago
there’s also IBM’s Granite
segmondy•7mo ago
H100s are going for about $3/hr: 384 * 24 * 3 ~ $28k
jazzyjackson•7mo ago
Take this brother, *, it may serve you well
dr_kretyn•7mo ago
The price just keeps on dropping with each comment. Anyone going to estimate it for less?

What's the source for $3/h?

pests•7mo ago
They miscalculated only 24 hours, not 24 days, so their number is off by a factor of 24.
jrk•7mo ago
This is indeed a reasonable cost estimate for competitive short-term H100 rentals (source: much SemiAnalysis coverage, and my own exploration of the market), but there is a critical error (besides the formatting glitch with `*`):

It was 24 days (576 hours) not 24 hours. $663,552 @ $3/hr.
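
For anyone following the arithmetic in this sub-thread, the two figures differ only by the hours-vs-days factor:

    # Napkin math from the thread: 384 H100s at ~$3/GPU-hour.
    gpus, rate = 384, 3.0
    print(gpus * 24 * rate)        # 27,648  -> the ~$28k figure (24 hours, the miscalculation)
    print(gpus * 24 * 24 * rate)   # 663,552 -> 24 days (576 hours), the corrected estimate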

mromanuk•7mo ago
According to the Runpod pricing page you can run an H100 for $2.39/hr, which brings the total down to as low as $528,629.76.

WARNING: this is highly speculative napkin math.

H200 (141 GB HBM3, $3.99/h, ~1.4x perf): 216 cards x 24 h x 17 days = 88,128 GPU-hours ~ $351,895

B200 (192 GB HBM3e, $5.99/h, ~2.8x perf): 158 cards x 24 h x 9 days = 34,128 GPU-hours ~ $204,426.72

Probably wrong math; it should be more efficient and cheaper. I doubt they have 100/200 cards available for that long.

Source: I've only trained using RTX4090 and stuff like that with 8 cards.

Not affiliated in any way with Runpod.

YetAnotherNick•7mo ago
You can get $2.2/GPU/hr on-demand, and likely around $2 for an order this big.

[1]: https://datacrunch.io/products#H100

social_quotient•7mo ago
Runpod is worth a look for these on-demand workloads (https://www.runpod.io/pricing). I use it a lot for ffmpeg workloads.

Found this a few days ago, which might be neat for finding cheaper options: https://www.primeintellect.ai/

No affiliation with either

dconden•6mo ago
Adding one more that's worth a look https://www.shadeform.ai
lhl•7mo ago
You can go much lower: https://gpulist.ai/
refulgentis•7mo ago
I spent about 10 minutes this AM cross-checking against Phi-4-mini benchmarks, as it seemed very odd not to include the leader in the benchmark tables, and SmolLM3 seemed universally behind it.

For context, I dev an LLM client (via llama.cpp); a core tenet is keeping local as close to cloud parity as possible.

Companies aren't taking local AI seriously on a sustained basis outside Microsoft.

Overall, I would usually bite my tongue. HF is a great citizen, and I doubt this'll be a one-off. However, when I see superlatives affirmed while the local SoTA, a godsend in this sector for many, many moons, is left out, I think it is good to stand up and say this rather than shy away.

adrianlzt•7mo ago
From the blog post: "SmolLM3 supports tool calling, and its chat template incorporates two distinct sections for tool descriptions: XML Tools and Python Tools"
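
A hedged sketch of what that looks like from transformers; the xml_tools keyword argument and the get_weather schema follow my recollection of the blog post's example and may not match it exactly.

    # Sketch (assumed API): passing tool schemas through the chat template.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")

    tools = [{
        "name": "get_weather",
        "description": "Get the current weather in a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]
    messages = [{"role": "user", "content": "What's the weather in Copenhagen?"}]

    prompt = tok.apply_chat_template(
        messages,
        xml_tools=tools,            # assumed kwarg for the "XML Tools" section
        add_generation_prompt=True,
        tokenize=False,
    )
    print(prompt)  # inspect how the tool schema gets injected into the prompt
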
bitwize•7mo ago
There's a British comedy skit lurking in here.

"So it's a small large language model?"

"Oh yes, very small."

"How can it be small and large at the same time?"

"Well, it's small by the standards of a large language model."

"So it's large."

"Oh yes, very large."

"Large compared to what?"

"Small language models."

"And so something like ChatGPT, what would that be exactly? A large large language model?"

"Yes, precisely. An LLLM."

netdur•7mo ago
it's big little planet or small big planet?
janalsncm•7mo ago
Standards have shifted as well. GPT-2 used to be considered "large", but it is half the size of this. Oh, and Sam Altman also said it was too dangerous to release. At this point I consider anything too big to run on consumer-grade hardware to be large, but an exact definition is a little silly to argue about.
a_wild_dandan•7mo ago
Altman released GPT-2 despite expressing that doing so was a bad idea? That's wild.
Alifatisk•7mo ago
I think Altman meant it was too dangerous to open-source GPT-2, and therefore locked it behind a service.
janalsncm•7mo ago
It’s not locked behind a service though.

https://huggingface.co/openai-community/gpt2/blob/main/model...

Alifatisk•7mo ago
That’s only 124M param
thatjoeoverthr•7mo ago
Behold https://huggingface.co/openai-community/gpt2-xl
Alifatisk•7mo ago
Wow!!!
creshal•7mo ago
"consumer grade hardware" is a rather loose definition too, what with RAM on consumer devices being anything between 2GB (low-end phones) and >100GB (high-end laptops/desktops) these days.
papichulo2023•7mo ago
Do not mess with the Miniature giant space hamsters
_kb•7mo ago
Australian. This is straight up Clarke and Dawe / Utopia.
bitwize•7mo ago
I must confess, I was inspired by "the front fell off".
viraptor•7mo ago
"Yes, a British Australian comedy sketch."

"So it's British?"

"By heritage."

"But Australian?"

"By production."

"Ah, so it’s satire."

"It was, until someone funded it."

msgodel•7mo ago
Wow. Close to a Qwen3 distill at 75% of the size. That's great!

I've been using the SmolLM base models for my own finetunes just because they're so high quality; it looks like I might be using them to drive local agents/code completion in the near future too.

Their RL algorithm looks interesting. I'm still using OpenAI's algorithm for my stuff; I've been meaning to check on the SoTA since I know my code is pretty outdated. (It's crazy how fast that happens with this stuff.)

gdiamos•7mo ago
Nice work Anton et al.

I hope you continue the 50-100M parameter models.

I think there is a case for models that finish fast on CPUs in solve-by-LLM test cases.

eachro•7mo ago
From what I've heard, the Llama 3 models are fairly easy to fine-tune (please correct me if I'm wrong or if there are more amenable models here). How easy is it to fine-tune SmolLM3? I know a lot of the MoE LLMs have been quite fickle in this regard.
BarakWidawsky•7mo ago
It’s interesting that it looks like they didn’t apply their own RL to the model, and instead fine tuned on reasoning traces from large datasets and generating reasoning traces from larger models
lewtun•7mo ago
Indeed we opted for offline methods like Anchored Preference Optimization as we found in the Open R1 project that doing multi-task RL on small models is quite a hassle to get right. With offline methods, you focus much more on dataset curation / generation, but that still provides faster iteration cycles for the model scale we’re dealing with!
ivape•7mo ago
Looks like it's 3B models that are being shipped on-device by default. Apple's on-device LLM is 3B, and I believe Chrome Canary is shipping Gemini Nano:

https://developer.chrome.com/docs/ai/rewriter-api

ivape•7mo ago
I wonder if this will be cheaper than llama 3.1 8b on OpenRouter.
danielhanchen•7mo ago
I fixed some chat template issues for llama.cpp and other inference engines! To run it, do:

./llama.cpp/llama-cli -hf unsloth/SmolLM3-3B-GGUF:Q4_K_XL --jinja -ngl 99

segmondy•7mo ago
doing the good work, thanks daniel!
danielhanchen•7mo ago
Thank you!
v5v3•7mo ago
Thanks
danielhanchen•6mo ago
Thanks!
diggan•7mo ago
> fixed some chat template issues

This seems to be a persistent issue with almost all weight releases, even from bigger companies like Meta.

Are the people who release these weights not testing them in various inference engines? It seems they make it work with Hugging Face's Transformers library and then call it a day, but sometimes not even that.

clarionbell•7mo ago
No, they don't. Why would they? Most of them are using a single inference engine, most likely developed in-house. Or they go for something like vLLM; llama.cpp especially flies under their radar.

The reason is simple: there isn't much money in it. llama.cpp is free and targets the lower end of the hardware spectrum. Corporations will run something else, or, even more likely, offload the task to a contractor.

danielhanchen•6mo ago
The chat template issues are actually not on llama.cpp's side, but affect all engines (including vLLM, SGLang, etc.). E.g. see https://www.reddit.com/r/unsloth/comments/1l97eaz/deepseekr1... - which fixed tool calling for DeepSeek R1
danielhanchen•6mo ago
Oh, chat template issues are sadly quite pervasive - e.g. Llama as you mentioned, but also Qwen, Mistral, Google, the Phi team, DeepSeek - it's actually very common!

My take is that the large labs with closed-source models also had issues in the beginning, but have most likely standardized their chat templates (e.g. OpenAI using ChatML). The OSS community, on the other hand, keeps experimenting with new templates - for example, adding tool calling causes a large headache. In https://unsloth.ai/blog/phi3, for example, we found many bugs in OSS models.

simonw•7mo ago
I'm having trouble running this on my Mac - I've tried Ollama and llama.cpp llama-server so far, both using GGUFs from Hugging Face, but neither worked.

(llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'smollm3')

I've managed to run it using Python and transformers with PyTorch in device="cpu" mode but unsurprisingly that's really slow - it took 35s to respond to "say hi"!

Anyone had success with this on a Mac yet? I really want to get this running with tool calling, ideally via an OpenAI-compatible serving layer like llama-server.

tripplyons•7mo ago
Have you tried setting device="mps" to use Metal? It should be faster than PyTorch's "cpu" device on Mac.
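
A small sketch of that suggestion; the HuggingFaceTB/SmolLM3-3B checkpoint name is my recollection from the blog post, and fp16 is chosen just to keep memory down.

    # Sketch: running the model on Apple Silicon via PyTorch's MPS backend.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    ckpt = "HuggingFaceTB/SmolLM3-3B"
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16).to(device)

    messages = [{"role": "user", "content": "say hi"}]
    inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)
    out = model.generate(inputs, max_new_tokens=64)
    print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
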
reach-vb•7mo ago
Hey Simon, VB from Hugging Face here and the person who added the model to MLX and llama.cpp (with Son). The PR hasn't yet landed on llama.cpp, hence it doesn't work OTB on llama.cpp installed via brew (similarly doesn't work with ollama since they need to bump their llama.cpp runtime)

The easiest would be to install llama.cpp from source: https://github.com/ggml-org/llama.cpp

If you want to avoid it, I added SmolLM3 to MLX-LM as well:

You can run it via `mlx_lm.chat --model "mlx-community/SmolLM3-3B-bf16"`

(requires the latest mlx-lm to be installed)

here's the MLX-lm PR if you're interested: https://github.com/ml-explore/mlx-lm/pull/272

similarly, llama.cpp here: https://github.com/ggml-org/llama.cpp/pull/14581

Let me know if you face any issues!

kosolam•7mo ago
Could you please enlighten me regarding all these engines? I'm using llama.cpp and Ollama. Should I also try MLX, ONNX, vLLM, etc.? I'm not quite sure what the difference between all of these is. I'm running on CPU and sometimes GPU.
pzo•7mo ago
Ollama is a wrapper around llama.cpp; they use the GGML-based GGUF format. ONNX is a different ML model format, and ONNX Runtime is developed by Microsoft. MLX is an ML framework from Apple. If you want the fastest speed on macOS, most likely stick with MLX.
knowaveragejoe•7mo ago
> similarly doesn't work with ollama since they need to bump their llama.cpp runtime

Just curious, how frequently does that happen?

grrowl•7mo ago
Great to see Hugging Face stick to their guns with CodeEval and Python tooling. Agentic turn-by-turn tool calling is fine and all, but we're underutilising models' ability to write and execute code in an "agent-like" environment.
cess11•7mo ago
I've tried to use gemma3:4b which comes up better in that benchmark and found it to be quite disappointing. It breaks a lot, sucks even worse than qwen2.5-coder:7b and incept5/llama3.1-claude:7b at code, needs to be tricked or threatened into saying stuff about many everyday topics. It also commonly chugs away for minutes exercising the GPU fans before responding, at which point I'm already ahead because I figured out another way to solve my problem or get at some information.

My experience with phi4-mini and granite3.3 was about the same, and they annoy me even more when I hook them into code editors and try to get them to contribute to my work. For one because they're slow, and at best they suggest adding unnecessary error handling in the style of null checks everywhere, at worst they just start mixing or hallucinating programming languages. Where they would be useful as leverage if they worked, i.e. close to the edge of where I can debug and refactor without getting stuck, they just go into straight nonsense mode, especially on terse first-pass code.

Sometimes I've tried to query these things for descriptions of recent history in foreign countries, Wikipedia trivia basically, and they're very often wrong in subtle ways. For example, a politician might have been at it for half a century or so in a troubled country, and because they were ousted in a coup once in the eighties, the model is absolutely sure they can't have been in office since.

If a person acted like these things do I'd wish for them to get immediate institutional care. Maybe the problem is somehow with me, but I have a deep suspicion it's not.

lvl155•7mo ago
This is actually good learning material for anyone getting up to speed on LLM from scratch.