frontpage.

Clay Christensen's Milkshake Marketing (2011)

https://www.library.hbs.edu/working-knowledge/clay-christensens-milkshake-marketing
2•vismit2000•3m ago•0 comments

Show HN: WeaveMind – AI Workflows with human-in-the-loop

https://weavemind.ai
3•quentin101010•8m ago•1 comment

Show HN: Seedream 5.0: free AI image generator that claims strong text rendering

https://seedream5ai.org
1•dallen97•10m ago•0 comments

A contributor trust management system based on explicit vouches

https://github.com/mitchellh/vouch
2•admp•12m ago•1 comment

Show HN: Analyzing 9 years of HN side projects that reached $500/month

2•haileyzhou•12m ago•0 comments

The Floating Dock for Developers

https://snap-dock.co
2•OsamaJaber•14m ago•0 comments

Arcan Explained – A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
2•walterbell•15m ago•0 comments

We are not scared of AI, we are scared of irrelevance

https://adlrocha.substack.com/p/adlrocha-we-are-not-scared-of-ai
1•adlrocha•16m ago•0 comments

Quartz Crystals

https://www.pa3fwm.nl/technotes/tn13a.html
1•gtsnexp•18m ago•0 comments

Show HN: I built a free dictionary API to avoid API keys

https://github.com/suvankar-mitra/free-dictionary-rest-api
2•suvankar_m•21m ago•0 comments

Show HN: Kybera – Agentic Smart Wallet with AI Osint and Reputation Tracking

https://kybera.xyz
1•xipz•22m ago•0 comments

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•26m ago•0 comments

Any chess position with 8 pieces on the board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
2•baruchel•28m ago•1 comment

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
2•birdculture•29m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT->ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•30m ago•1 comment

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
2•raleobob•34m ago•1 comment

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•34m ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
1•jingkai_he•37m ago•0 comments

Show HN: A2A Protocol – Infrastructure for an Agent-to-Agent Economy

1•swimmingkiim•41m ago•1 comment

Drinking More Water Can Boost Your Energy

https://www.verywellhealth.com/can-drinking-water-boost-energy-11891522
1•wjb3•44m ago•0 comments

Proving Laderman's 3x3 Matrix Multiplication Is Locally Optimal via SMT Solvers

https://zenodo.org/records/18514533
1•DarenWatson•47m ago•0 comments

Fire may have altered human DNA

https://www.popsci.com/science/fire-alter-human-dna/
4•wjb3•47m ago•2 comments

"Compiled" Specs

https://deepclause.substack.com/p/compiled-specs
1•schmuhblaster•52m ago•0 comments

The Next Big Language (2007) by Steve Yegge

https://steve-yegge.blogspot.com/2007/02/next-big-language.html?2026
1•cryptoz•53m ago•0 comments

Open-Weight Models Are Getting Serious: GLM 4.7 vs. MiniMax M2.1

https://blog.kilo.ai/p/open-weight-models-are-getting-serious
4•ms7892•1h ago•0 comments

Using AI for Code Reviews: What Works, What Doesn't, and Why

https://entelligence.ai/blogs/entelligence-ai-in-cli
3•Arindam1729•1h ago•0 comments

Show HN: Solnix – an early-stage experimental programming language

https://www.solnix-lang.org/
3•maheshbhatiya•1h ago•0 comments

DoNotNotify is now Open Source

https://donotnotify.com/opensource.html
5•awaaz•1h ago•2 comments

The British Empire's Brothels

https://www.historytoday.com/archive/feature/british-empires-brothels
2•pepys•1h ago•0 comments

What rare disease AI teaches us about longitudinal health

https://myaether.live/blog/what-rare-disease-ai-teaches-us-about-longitudinal-health
2•takmak007•1h ago•0 comments

Fine-tuned small LLMs can beat large ones with programmatic data curation

https://www.tensorzero.com/blog/fine-tuned-small-llms-can-beat-large-ones-at-5-30x-lower-cost-with-programmatic-data-curation/
53•GabrielBianconi•6mo ago

Comments

alchemist1e9•6mo ago
I’ve been thinking about curating primary sources themselves and then using those for fine-tuning.

Has anyone gone that route and come across projects with very high-quality curated source materials? Ideally categorized and labeled.

k8si•6mo ago
Maybe this is a nitpick, but CoNLL NER is not a "challenging task". Even pre-LLM systems were getting >90 F1 on it as far back as 2016.

Also, in case people want to review the literature further on this topic: they call their method "programmatic data curation", but I believe this approach is also known as model distillation and/or student-teacher training.

GabrielBianconi•6mo ago
Thanks for the feedback!

We chose a set of tasks with different levels of complexity to see how this approach would scale. For LLMs, the "challenge" with NER is not the task itself but the arbitrariness of the labels in the dataset. I agree it's still much simpler than the other tasks we present (agentic RAG, agentic tool use, maze navigation).

There are definitely strong parallels to model distillation and student-teacher training, with the primary difference being that we don't simply take all the data from the larger model but rather filter the dataset based on metrics from the environment. In the "Does curation even matter?" section, we show that this generally improves the result by a good margin.

We link to Vicuna, which might be the closest reference as prior art: https://lmsys.org/blog/2023-03-30-vicuna/

Thanks!
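
To make the pipeline concrete, here is a minimal sketch of that generate-then-filter loop; `run_task` and `environment_score` are hypothetical stand-ins for the task harness and its metric, not TensorZero's actual API:

    # Hypothetical sketch: generate with a large "teacher" model, then keep
    # only the samples that score well on a metric from the environment.
    def curate(prompts, teacher_model, threshold=0.9):
        kept = []
        for prompt in prompts:
            output = run_task(teacher_model, prompt)    # teacher generates
            score = environment_score(prompt, output)   # e.g. success/reward
            if score >= threshold:                      # filter, don't keep all
                kept.append({"prompt": prompt, "completion": output})
        return kept  # fine-tune the small model on this curated set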

mwigdahl•6mo ago
Is this just distillation but with a step to filter out low-quality responses first?

GabrielBianconi•6mo ago
AFAIK, distillation typically refers to tuning on the logits of the larger model, so you wouldn't be able to do that with fine-tuning APIs (OpenAI + Google in our blog post). We fine-tune on the outputs themselves.

But broadly speaking, yes, we generate data using a large model, curate the best samples using metrics from the environment, and fine-tune on that data. This isn't a novel technique from an academic perspective; our focus is on applying it to different use cases (e.g. agentic RAG, agentic tool use) and models (OpenAI, Google, Qwen).

Thanks!
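
For illustration: fine-tuning on outputs rather than logits just means the curated samples become ordinary chat-format training records. A sketch of writing them in the JSONL shape OpenAI's fine-tuning API expects, with `kept` being the curated set from the sketch above:

    import json

    # Each curated {"prompt", "completion"} pair becomes one chat-format
    # training record; no access to the teacher's logits is needed.
    with open("curated.jsonl", "w") as f:
        for ex in kept:
            record = {"messages": [
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["completion"]},
            ]}
            f.write(json.dumps(record) + "\n")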

mwigdahl•6mo ago
Thanks for the explanation and the clarification on terminology! I've used a similar approach myself and it sounded like you were doing something similar.

littlestymaar•6mo ago
> AFAIK, distillation typically refers to tuning on the logits of the larger model

I think this is called “logit distillation”, which is a particular form of distillation but not the only one.

> so you wouldn't be able to do that with fine-tuning APIs (OpenAI + Google in our blog post)

Distillation from competitors' APIs is so common that it has been given a name: “distealing”.
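
For contrast, a minimal sketch of what logit distillation itself looks like, assuming a PyTorch setup (the standard temperature-softened KL loss; note it requires the teacher's logits, which hosted fine-tuning APIs don't expose):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # KL divergence between temperature-softened distributions;
        # the T*T factor keeps gradient magnitudes comparable across T.
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T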

6510•6mo ago
Noob question: Would it be possible to train a small model for a single prompt?

GabrielBianconi•6mo ago
With supervised fine-tuning (SFT), you'll often see good results with 100-1000+ datapoints (they can be variations of the same prompt template). If you have more limited data, reinforcement fine-tuning (RFT) can work well in the 10-100 range.

Good luck!
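
One hypothetical way to get from a single prompt into that 100-1000+ range is to vary the template's slots; the field values and `load_documents` here are illustrative stand-ins, not anything from the post:

    from itertools import product

    template = "Extract all {entity} mentions from: {text}"
    entities = ["person", "company", "location"]
    texts = load_documents()  # hypothetical: your own corpus

    # Every (entity, text) combination becomes one SFT datapoint
    prompts = [template.format(entity=e, text=t)
               for e, t in product(entities, texts)]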

simianwords•6mo ago
I think it's a good idea, but how do you avoid accidentally benchmark hacking here?

GabrielBianconi•6mo ago
We set up dataset splits and followed the usual best practices. Of course, if you overdo things, you can still hack benchmarks; our goal isn't to publish SOTA numbers but rather to illustrate results from our methodology. We didn't even tune hyperparameters; we just used the default choices. Definitely a valid concern for teams chasing SOTA, though.

Thanks!
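
As a rough sketch of that guard: split once up front and run curation only on the training side, so the held-out set is never touched by the filtering loop. `examples` here is a hypothetical stand-in for the full task dataset:

    import random

    random.seed(0)
    random.shuffle(examples)            # hypothetical full task dataset
    cut = int(0.8 * len(examples))
    train = examples[:cut]              # generation + curation happen here
    test = examples[cut:]               # evaluated once, never curated on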