frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
479•klaussilveira•7h ago•120 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
818•xnx•12h ago•491 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
40•matheusalmeida•1d ago•3 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
161•isitcontent•7h ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
158•dmpetrov•8h ago•69 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
97•jnord•3d ago•14 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
53•quibono•4d ago•7 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
211•eljojo•10h ago•135 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
264•vecti•9h ago•125 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
332•aktau•14h ago•158 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
329•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
415•todsacerdoti•15h ago•220 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
27•kmm•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
344•lstoll•13h ago•245 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
5•romes•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
53•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
202•i5heu•10h ago•148 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
116•vmatsiiako•12h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
153•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
248•surprisetalk•3d ago•32 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
28•gfortaine•5h ago•4 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1004•cdrnsf•17h ago•421 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
49•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
74•ray__•4h ago•36 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
38•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
32•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Claude Opus 4.6

https://www.anthropic.com/news/claude-opus-4-6
2275•HellsMaddy•1d ago•981 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
8•gmays•2h ago•2 comments

Nanochat

https://simonwillison.net/2025/Oct/13/nanochat/
50•bilsbie•3mo ago

Comments

Tepix•3mo ago
Amazingly, you can also do it on smaller hardware!

From the readme:

All code will run just fine on even a single GPU by omitting torchrun, and will produce ~identical results (code will automatically switch to gradient accumulation), but you'll have to wait 8 times longer. If your GPU(s) have less than 80GB, you'll have to tune some of the hyperparameters or you will OOM / run out of VRAM. Look for --device_batch_size in the scripts and reduce it until things fit. E.g. from 32 (default) to 16, 8, 4, 2, or even 1. Less than that you'll have to know a bit more what you're doing and get more creative.
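The gradient accumulation the README mentions is why the single-GPU results stay ~identical: averaging the gradients of several micro-batches reproduces the gradient of the one big batch. Here's a toy sketch of that equivalence with a 1-D linear model and made-up numbers; nanochat's real loop does this with torch tensors on GPUs, not like this.

```python
# Gradient accumulation: splitting one large batch into micro-batches and
# taking a weighted average of their gradients gives the same update as
# computing the gradient over the full batch at once.
xs = [0.5, -1.2, 2.0, 0.3, -0.7, 1.1, -2.4, 0.9]
ys = [1.0, -2.0, 4.1, 0.5, -1.3, 2.2, -4.6, 1.7]
w = 0.0  # current weight of a toy 1-D linear model

def batch_grad(pairs, w):
    # d/dw of mean((w*x - y)^2) over the batch
    return sum(2 * x * (w * x - y) for x, y in pairs) / len(pairs)

full = batch_grad(list(zip(xs, ys)), w)  # one batch of 8

# Same data in micro-batches of 2 (as if --device_batch_size were reduced),
# each micro-batch gradient weighted by its share of the full batch.
micro = 2
acc = 0.0
for i in range(0, len(xs), micro):
    chunk = list(zip(xs[i:i + micro], ys[i:i + micro]))
    acc += batch_grad(chunk, w) * (len(chunk) / len(xs))

assert abs(full - acc) < 1e-12  # identical up to float rounding
```

This is why shrinking `--device_batch_size` trades memory for time rather than accuracy: you run more, smaller forward/backward passes per optimizer step.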

ultimatefan1•3mo ago
No seagull?
drcongo•3mo ago
Pelican.
xnx•3mo ago
22 hours ago | 256 comments: https://news.ycombinator.com/item?id=45569350
ebbi•3mo ago
Can someone give me an ELI5 on what this is and does? I'm a non-coder who has recently gotten into the world of AI, but I'm not sure what this is or where it sits in context with the tools I currently use (ChatGPT, Claude Code, Cursor).
tim333•3mo ago
See https://news.ycombinator.com/item?id=45569350
fragmede•3mo ago
Leading AI researcher Andrej Karpathy created a NanoLLM using available training data and $100 worth of (high-end) rented cloud compute time. The original post is https://github.com/karpathy/nanochat/discussions/1 and the post this is on is commentary from simonw about Karpathy's post. The NanoLLM he created is, um, not very good, so you wouldn't want to use it for anything other than learning and entertainment. But it's really small, which means it runs on small, underpowered computers. There's a web GUI, so you interact with it just like ChatGPT on your little computer. Also for learning purposes, Karpathy shared the code he used to create NanoLLM, so you can run it at home, create your own model, and chat with it.

Given that GPT-5 reportedly cost $100 million to train, being able to create one, even a terrible one, for $100, shows how the field keeps marching on.

ebbi•3mo ago
Thank you! So if I were to, say, build my own SaaS product with AI capabilities, could I theoretically use NanoLLM to train on my domain-specific data, giving me a domain-specific LLM to use in my product without the recurring fees of an API provider like OpenAI?
fragmede•3mo ago
Technically yes, but NanoLLM is stripped down and targeted more towards educating AI researchers, so I wouldn't recommend you use it for that: its output is terrible compared to ChatGPT (intentionally, it's a teaching tool). Nothing is stopping you, but for that goal I'd recommend starting with one of the downloadable, permissively licensed models, like a newer Qwen3, and fine-tuning it. Google Colab has notebooks specifically for that.

Once you have your fine-tuned model, you wouldn't be paying OpenAI to use it, but it would need to run somewhere, and those somewheres range in quality and price. Models come in various shapes and sizes, and the bigger the model, the beefier (and more expensive to rent) the computer you need to operate this SaaS business.

ebbi•3mo ago
Thanks for this - learned a lot. I'll look into those.
simonw•3mo ago
Training (or fine-tuning) a custom model to answer domain-specific questions is almost never the right solution. It's complicated, expensive, time-consuming and the results are rarely any good. I have yet to see a demo of someone doing this that I find convincing, at least for adding new knowledge to a model.

If you want to teach an LLM to answer questions about private documents you should look into RAG or agentic search - techniques where the LLM can take a user's question and then look for additional information by searching some documents before answering.
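The RAG idea can be sketched in a few lines. Everything below is illustrative: the documents, the crude word-overlap scorer, and the prompt shape are all stand-ins (real systems use embeddings or full-text search), but the skeleton — retrieve first, then stuff the matches into the prompt — is the technique.

```python
# Toy retrieval-augmented generation (RAG) skeleton: score private documents
# against the question, then paste the best matches into the prompt so the
# LLM answers from retrieved text instead of trained-in knowledge.
docs = {
    "refunds.md": "Refunds are issued within 14 days of purchase.",
    "shipping.md": "Orders ship from the warehouse within 2 business days.",
    "returns.md": "Returns require the original packaging and a receipt.",
}

def score(question, text):
    # Crude relevance: count shared lowercase words.
    return len(set(question.lower().split()) & set(text.lower().split()))

def build_prompt(question, k=2):
    ranked = sorted(docs, key=lambda name: score(question, docs[name]),
                    reverse=True)
    context = "\n".join(f"[{name}] {docs[name]}" for name in ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How many days until refunds are issued")
# The assembled prompt is then sent to whichever model you run, local or API.
```

The model never needs the documents baked into its weights; it only needs to read the few that the retriever surfaces.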

The good news is that these tricks work reasonably well with small models that you can run on your own hardware: even a 4B or 8B model (a few GB to download) can often handle these cases.

But... even then, it's still usually cheaper to pay for the APIs from OpenAI and the like. Their API costs are so low that it's hard to save money by running your own model somewhere, since you have to pay to keep it in RAM the whole time, while OpenAI shares that cost across thousands of users.
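The "keep it in RAM the whole time" point is easy to make concrete with back-of-the-envelope arithmetic. All prices below are placeholders, not real figures from any provider; plug in current numbers before drawing conclusions.

```python
# Self-hosting has a fixed cost (the GPU stays rented whether or not anyone
# is chatting), while API usage is pay-per-token. Break-even is where the
# monthly token bill matches the always-on rental. Placeholder prices only.
api_price_per_mtok = 0.50   # assumed $ per million tokens, in + out combined
gpu_rent_per_hour = 0.80    # assumed $ per hour for a card that fits the model
hours_per_month = 24 * 30

fixed_monthly = gpu_rent_per_hour * hours_per_month   # $576 at these numbers
breakeven_mtok = fixed_monthly / api_price_per_mtok   # tokens where costs match

print(f"GPU rental: ${fixed_monthly:.0f}/month")
print(f"Break-even: {breakeven_mtok:.0f}M tokens/month")
```

Below that volume the API is cheaper, precisely because the provider amortizes the always-on hardware across thousands of users.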

ebbi•3mo ago
Very helpful - thanks a lot!