frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
101•theblazehen•2d ago•22 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
654•klaussilveira•13h ago•189 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
944•xnx•19h ago•549 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•38 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
48•videotopia•4d ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
14•kaonwarb•3d ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•113 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
328•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
487•todsacerdoti•21h ago•241 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
286•eljojo•16h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
409•lstoll•20h ago•276 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
21•jesperordrup•4h ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
87•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
59•kmm•5d ago•4 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
4•speckx•3d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
31•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
251•i5heu•16h ago•194 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
15•bikenaga•3d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1062•cdrnsf•23h ago•444 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
144•SerCe•9h ago•133 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
180•limoce•3d ago•97 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
147•vmatsiiako•18h ago•67 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•9h ago•12 comments

Why are your models so big? (2023)

https://pawa.lt/braindump/tiny-models/
38•jxmorris12•2mo ago

Comments

siddboots•2mo ago
I think I have almost the opposite intuition. The fact that attention models are capable of making sophisticated logical constructions within a recursive grammar, even for a simple DSL like SQL, is kind of surprising. I think it’s likely that this property does depend on training on a very large and more general corpus, and hence demands the full parameter space that we need for conversational writing.
semiinfinitely•2mo ago
I don’t understand why today’s laptops are so large. Some of the smallest "ultrabooks" getting coverage sit at 13 inches, but even this seems pretty big to me.

If you need raw compute, I totally get it. Things like compiling the Linux kernel or training local models require a high level of thermal headroom, and the chassis has to dissipate heat in a manner that prevents throttling. In cases where you want the machine to act like a portable workstation, it makes sense that the form factor would need to be a little juiced up.

That said, computing is a whole lot more than just heavy development work. There are some domains that have a tightly-scoped set of inputs and require the user to interact in a very simple way. Something like responding to an email is a good example — typing "LGTM" requires a very small screen area, and it requires no physical keyboard or active cooling. Checking the weather is similar: you don’t need 16 inches of screen real estate to go from wondering if it’s raining to seeing a cloud icon.

I say all this because portability is expensive. Not only is it expensive in terms of back pain — maintaining the ecosystem required to run these machines gets pretty complicated. You either end up shelling out money for specialized backpacks or fighting for outlet space at a coffee shop just to keep the thing running. In either case, you’re paying big money (and calorie) costs every time a user types "remind me to eat a sandwich".

I think the future will be full of much smaller devices. Some hardware to build these already exists, and you can even fit them in your pocket. This mode of deployment is inspiring to me, and I’m optimistic about a future where 6.1 inches is all you need.

Archelaos•2mo ago
A typical use case for a large laptop is when you store it away after work or only carry it occasionally. I have a PC for coding at home, but use a ThinkPad with the largest screen I could get for coding in my camper van (storing it away when not in use, because of lack of space) or when staying at my mother's home for longer (setting it up once at the start of my visit). I also have another very small, light, and inexpensive subnotebook that I can carry around easily, but I rarely use it these days and not for coding at all.
bee_rider•2mo ago
I dunno. It kinda works, and points for converting the whole article. But something is lost in the switch-up here. The size of a laptop is more or less the size of the display (unless we’re going to get weird and have a projector built in), so it is basically a figure-of-merit.

Nobody actually wants more weights in their LLMs, right? They want the things to be “smarter” in some sense.

hobs•2mo ago
With a comfortable spread, my hands are 9.5 inches from pinky to thumb; a thirteen-inch laptop is so painfully small I can barely use it.
tebruno99•2mo ago
Try being over 30, sitting at a desk your whole life, and then try to use a 13” screen. Eye strain is a huge deal.

My opinion on this changed drastically when I started interacting with people outside of tech and not my own age. A device you struggle to see is miserable.

unleaded•2mo ago
Still relevant today. Many problems people throw at LLMs can be solved more efficiently with text completion than by begging a model 20x the size (and probably more than 20x the cost) to produce the right structured output. https://www.reddit.com/r/LocalLLaMA/comments/1859qry/is_anyo...
_ea1k•2mo ago
Why would you do that when you could spend months building metadata and failing to tune prompts for a >100B parameter LLM? /s
crystal_revenge•2mo ago
I used to work very heavily with local models and swore by text completion despite many people thinking it was insane that I would choose not to use a chat interface.

LLMs are designed for text completion; the chat interface is basically a fine-tuning hack that turns prompting into a natural form of text completion so the average user gets a more "intuitive" interface (I don't even want to think about how many AI "enthusiasts" don't really understand this).

But with open/local models in particular: each instruct/chat interface is slightly different. There are tools that help mitigate this, but the closer you're working to the model, the more likely you are to make a stupid mistake because you didn't understand some detail of how the instruct interface was fine-tuned.

Once you accept that LLMs are "auto-complete on steroids" you can get much better results by programming the way they were naturally designed to work. It also helps a lot with prompt engineering because you can more easily understand what the model's natural tendency is and work with it to generally get better results.
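
To make that concrete, here is a minimal sketch of the difference using the Hugging Face transformers API (the model name is a placeholder and the prompts are made up; any local instruct-tuned causal LM behaves the same way):

    # Sketch: plain text completion vs. the chat-template wrapper.
    # Model name is a placeholder; swap in whatever local model you use.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # 1) Raw completion: the model simply continues the text you give it.
    prompt = "SELECT name, total FROM orders WHERE"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

    # 2) Chat interface: the same completion engine, but the prompt is first
    #    wrapped in the model-specific instruct template baked in by fine-tuning.
    messages = [{"role": "user", "content": "Finish this SQL query: SELECT name, total FROM orders WHERE"}]
    chat_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print(chat_prompt)  # the special tokens here differ from model to model

Printing the templated prompt is the quickest way to see exactly what the fine-tune expects, which is where most of the "stupid mistakes" mentioned above come from.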

It's funny because a good chunk of my comments on HN these days are combating AI hype, but man, LLMs really are fascinating to work with if you approach them with a slightly more clear-headed perspective.

hippo22•2mo ago
Maybe? The try-fail-try-again-succeed loop is pretty powerful. Not sure how you get that purely with text completion.
lsb•2mo ago
My threshold for “does not need to be smaller” is “can this run on a Raspberry Pi”. This is a helpful benchmark for maximum likely useful optimization.

A Pi has 4 cores and 16GB of memory these days, so running Qwen3 4B on a Pi is pretty comfortable: https://leebutterman.com/2025/11/01/prompt-optimization-on-a...
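
In practice that is only a few lines with the llama-cpp-python bindings (a sketch; the GGUF file name, quantization, and prompt are assumptions):

    # Sketch: a small quantized model on a Raspberry Pi-class machine.
    # The model path/quant is an assumption; any ~4B GGUF fits comfortably in 16 GB.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./qwen3-4b-q4_k_m.gguf",  # a few GB on disk when quantized
        n_ctx=2048,    # modest context keeps memory use down
        n_threads=4,   # one thread per Pi core
    )

    out = llm("Q: Summarize why small models are enough for narrow tasks.\nA:",
              max_tokens=64, stop=["\n"])
    print(out["choices"][0]["text"])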

debo_•2mo ago
2000: My spoon is too big

2023: My model is too big

lynndotpy•2mo ago
> I think the future will be full of much smaller models trained to do specific tasks.

This was the very recent past! Up until we got LLM-crazy in 2021, this was the primary thing that deep learning papers produced: New models meant to solve very specific tasks.

_ea1k•2mo ago
Yeah, it is insane how many people think that tuning models is nearly impossible, or that it requires a multibillion dollar data center.

It is one of the weirdest variations of people buying into too much hype.

socketcluster•2mo ago
The incumbents are trying to fully control the market, but they don't have a justification for that. A company like Google, which already has a monopoly over search, needs to convince the market that this will let it expand past search. If the narrative is that anyone can run a specialized model on their own machine for different tasks, that doesn't justify AI companies selling themselves on the assumption of a total market monopoly and a stranglehold over the economy.

They cannot sell themselves without concealing reality. This is not a new thing. There were a lot of suppressed projects in the blockchain industry: everyone denied the existence of certain projects, most people never heard about them, and people talk as if the best coin in existence can do a measly 4 transactions per second, as if that's state of the art... Solutions like the "Lightning Network" don't actually work, but they are pitched as revolutionary... I bet there are more people shilling Bitcoin's Lightning Network than there are people actually using it. This is the power of centralized financial incentives. Everyone ends up operating on top of a shared deception, "the official truth", which may not be true at all.

forgotTheLast•2mo ago
One argument against local fine-tuning was that by the time you were done training your finetune of model N, model N+1 was out and could do what your finetune did out of the box. That kinda stopped being the case last year though.
brainless•2mo ago
May I add GLiNER to this? The original Python version and the Rust version. Fantastic (non-LLM) models for entity extraction. There are many others.
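
For anyone who hasn't tried it, usage is roughly this (a sketch with the Python gliner package; the checkpoint name and threshold are assumptions, check the project README for current ones):

    # Sketch: zero-shot entity extraction with a small non-LLM GLiNER model.
    # Checkpoint and threshold are assumptions; see the GLiNER docs for options.
    from gliner import GLiNER

    model = GLiNER.from_pretrained("urchade/gliner_small-v2.1")

    text = "Apple opened a new office in Lisbon in March 2024."
    labels = ["organization", "location", "date"]

    for ent in model.predict_entities(text, labels, threshold=0.5):
        print(ent["text"], "->", ent["label"])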

I really think using small models for a lot of small tasks is the best way forward, but it's not easy to orchestrate.

jgalt212•2mo ago
The net $5.5T the Fed printed had to go somewhere. The AI arms race was the answer. And once the models got good, we needed agentic workloads to create unbounded demand for inference, just as there was unbounded demand for training.

https://fred.stlouisfed.org/series/WALCL

lioeters•2mo ago
The graph is horrifying. Before the 2008 crisis, less than $1 trillion. By the time of the 2020 crisis, it had hit $4 trillion, then in the next few years more than doubled to $9 trillion. It may help explain why the rich are swimming in free money while the underclass can't afford to live anymore. With AI eating up the job market, we seem to be headed for another, even bigger crisis.
K0IN•2mo ago
I'm always so surprised that embedding models we've had for years, like MiniLM (~80 MB), are so small, and I really wonder why more on-device search doesn't use something like them.
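
A local semantic search with a model that size is only a few lines (a sketch with sentence-transformers; the documents are stand-ins):

    # Sketch: on-device semantic search with a tiny embedding model (~80 MB on disk).
    # Documents are stand-ins; all-MiniLM-L6-v2 runs fine on CPU.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    docs = ["How to reset my router", "Pasta carbonara recipe", "Quarterly tax deadlines"]
    doc_emb = model.encode(docs, convert_to_tensor=True)

    query_emb = model.encode("my wifi stopped working", convert_to_tensor=True)
    scores = util.cos_sim(query_emb, doc_emb)[0]
    print(docs[int(scores.argmax())])  # -> "How to reset my router"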
musicandpiss•2mo ago
Thank you for sharing :-)