
A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•1m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
1•geox•2m ago•0 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•4m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•6m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•7m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•9m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•13m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•14m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•14m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•17m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•20m ago•1 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•21m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•21m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•22m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•27m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•31m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•35m ago•1 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•37m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•38m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•44m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•47m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•48m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•49m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•50m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•50m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•50m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
4•pseudolus•51m ago•2 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•55m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•55m ago•1 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•56m ago•0 comments

Ask HN: Why don't programming language foundations offer "smol" models?

1•xrd•3mo ago
I'm using Claude Code A LOT. And I'm using Gemini CLI A LOT. I'm definitely getting a ton of value as a developer from those tools, and I'm not sure I can go back to the old way of developing.

And, I'm getting worried that someday Anthropic will say "Hey, yeah, about that Max plan which is $100/mo. Sorry, we decided we need to charge you $5000/mo. Oh, and LOL, btws, that's if you commit to an annual plan."

Or, a Google rep will email me saying "Sundar (it wasn't me!) says you were too critical of Google on HN that one time four years ago (Sundar verified it isn't a gemini hallucination, but I can't really question it). So, your gemini cli is cut off immediately."

Then I'll be stuck, with no more software engineering work, because my brain will have rotted away.

For this reason, I want to run LLMs locally, using llama.cpp/ollama, and use tools like Aider. But running a "big" model on my hardware is tough. The quality of output and all the things that make Claude and Gemini so powerful just aren't there with the combination of local LLMs and a tool like Aider, at least when everything runs locally. Perhaps I'm doing it wrong?
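
To make that concrete, here's a minimal sketch of the kind of local loop I mean, assuming an Ollama server running on its default port and a small quantized coding model already pulled (the model name below is just a placeholder, not a recommendation):

    import json
    import urllib.request

    # Assumption: a local Ollama server is listening on its default port and a
    # small quantized coding model has already been pulled with `ollama pull`.
    MODEL = "qwen2.5-coder:7b"  # placeholder name; swap in whatever you run

    def ask_local_model(prompt: str) -> str:
        """Send one prompt to the local Ollama HTTP API and return the reply."""
        payload = json.dumps({
            "model": MODEL,
            "prompt": prompt,
            "stream": False,  # ask for a single JSON response instead of a stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local_model("Write a Python function that parses ISO 8601 dates."))

Tools like Aider can sit on top of the same local server, so the plumbing isn't the problem; the quality gap is.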

I wonder why I can't find a model that only does Python and is good only at that, and run that locally. When I need to do Zig, I can switch to a Zig model and unload the Python one from memory. If it only does a single language, and it doesn't need to know about US presidential elections, couldn't it be very small, something I could run on my macOS M1 laptop with 16GB of RAM?
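
The back-of-the-envelope math for "could it fit" is part of why I keep coming back to this idea. A rough sketch, using rule-of-thumb numbers rather than measurements:

    # Rough memory estimate for a quantized model: weights take roughly
    # parameter_count * bits_per_weight / 8 bytes, plus some overhead for the
    # KV cache and runtime. These are assumptions, not benchmarks.
    def estimated_gb(params_billion: float, bits_per_weight: float = 4.0,
                     overhead_gb: float = 1.5) -> float:
        weights_gb = params_billion * bits_per_weight / 8
        return weights_gb + overhead_gb

    for params in (3, 7, 14, 70):
        print(f"{params}B params at 4-bit: ~{estimated_gb(params):.1f} GB")

    # A 3B or 7B model at 4-bit (~3-5 GB) fits comfortably in 16 GB of unified
    # memory; a 70B model (~36 GB) does not, no matter how clever the tooling.

So the hardware isn't the blocker for a small, single-language model; the question is whether such a model can be good.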

I feel like models get big when they get generalized. I am never working on a codebase that has Rails and FastAPI and Elixir and React and Svelte and Go and Rust and COBOL. I might work on a repo with both TypeScript and Python, but never more than that, and I'm usually focused on either the frontend or the backend.

If this is the solution, are language foundations building their own models? Is this already happening on huggingface or somewhere else?
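
For what it's worth, the Hub is easy to poke at programmatically if someone wants to check. A minimal sketch, assuming the huggingface_hub package is installed; the search string is just an example:

    # List the most-downloaded models matching a code-oriented search term.
    # Assumption: `pip install huggingface_hub`; no token needed for public models.
    from huggingface_hub import HfApi

    api = HfApi()
    for model in api.list_models(search="code", sort="downloads", direction=-1, limit=10):
        print(model.id)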

This seems like an approach where a language foundation could train and certify its own model, and it would be safe, "open source," and "open weights."

Is there a big stupid assumption I'm making here that makes this idea impossible?

Comments

ben_w•3mo ago
> I wonder why I can't find a model that only does Python and is good only at that, and run that locally. When I need to do Zig, I can switch to a Zig model and unload the Python one from memory. If it only does a single language, and it doesn't need to know about US presidential elections, couldn't it be very small, something I could run on my macOS M1 laptop with 16GB of RAM?

I also wonder this.

My suspicion — based on what I experienced with local image generating models, but otherwise poorly educated — is that they need all of the other stuff besides programming languages just to understand what your plain English prompt means in the first place, and they need to be quite bulky models to have any kind of coherency over token horizons longer than one single function.

Of interest: Apple does ship a coding LLM in Xcode that's (IIRC) 2 GB, and it really does just feel like fancy Swift-only autocomplete.

zahlman•3mo ago
> I wonder why I can't find a model that only does Python and is good only at that, and run that locally.

Because if you want to prompt it in English, it has to be good at English as well. And it gets good at English by reading extreme quantities of it, which incidentally is written on a wide variety of topics.