frontpage.

Made with ♥ by @iamnishanth


Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
50•thelok•3h ago•6 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
117•AlexeyBrin•6h ago•20 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
812•klaussilveira•21h ago•246 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
49•vinhnx•4h ago•7 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
91•1vuio0pswjnm7•7h ago•102 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
73•onurkanbkrc•6h ago•5 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1054•xnx•1d ago•601 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
471•theblazehen•2d ago•174 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
51•alephnerd•1h ago•15 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
197•jesperordrup•11h ago•67 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
9•surprisetalk•1h ago•2 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
538•nar001•5h ago•248 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
206•alainrk•6h ago•313 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
33•rbanffy•4d ago•6 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
26•marklit•5d ago•1 comment

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
110•videotopia•4d ago•30 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
69•speckx•4d ago•72 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
63•mellosouls•4h ago•70 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
271•isitcontent•21h ago•36 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•110 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
284•dmpetrov•21h ago•153 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
553•todsacerdoti•1d ago•267 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
424•ostacke•1d ago•110 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
467•lstoll•1d ago•308 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
348•eljojo•1d ago•214 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
41•matt_d•4d ago•16 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
367•vecti•23h ago•167 comments

Model Market Fit

https://www.nicolasbustamante.com/p/model-market-fit
76•nbstme•2w ago

Comments

nbstme•2w ago
Product-market fit has a prerequisite that most AI founders ignore. Before the market can pull your product, the model must be capable of doing the job. That's Model-Market Fit. When MMF unlocks, markets explode (legal, coding, ...).
deepsquirrelnet•2w ago
> If MMF doesn’t exist today, building a startup around it means betting on model improvements that are on someone else’s roadmap. You don’t control when or whether the capability arrives.

I love this. I think there's a tendency to extrapolate past performance gains into the future, even though the primary driver of those gains (scaling) has proven to be dead. Continued improvements seem to be happening through rapid breakthroughs in RL training methodologies and, to a lesser degree, architectures.

People should see this as a significant shift. With scaling, the path forward was more certain than what we're seeing now. That means you probably shouldn't build in anticipation of future capabilities, because it's uncertain when, or whether, they will arrive.

curtisf•1w ago
This could be stated much more succinctly using Jobs to be Done (which is referenced in the first few paragraphs):

Your customers don't want to do stuff with AI.

They want to do stuff faster, better, cheaper, and more easily. (JtbD claims you need to be at least 15% better or 15% cheaper than the competition -- which, if we're talking about "AI", means the classical-ML or manual human alternative.)

If the LLM you're trying to package can't actually solve the problem, no one will buy it, because _using AI_ obviously isn't anyone's _job-to-be-done_.

minimaltom•1w ago
Some of what OP is saying generalizes to the concept of being "too early": if you are early, your engineering and innovation spend goes toward discovering that at-the-time-reasonable ideas don't work, or don't work given the current appetite, whereas later entrants can skip this exploration and start with a simple copycat.

My (business-school) partner reminds me that first movers are seldom winners.

echelon•1w ago
That perfectly sums up my experience. I've been "too early" far too frequently.

Before ElevenLabs, I built an AI TTS website that got 6.5 million monthly users at peak [1]. PewDiePie and various musicians were using it. It didn't have zero-shot or fine-tuning, so it got wiped out pretty easily when ElevenLabs arrived.

Before Image-to-Video models got good and popular, I built a ridiculous 3D nonlinear video editor [2] for crazy people that might want to use mocap gear and timelines to control AI animation. You couldn't control the starting frame, which sucked, but you could control the precise animation minus hallucination artifacts. Luma Labs Dream Machine came out just a few weeks after our launch and utterly wiped the floor with our entire approach.

I was late to build an aggregator, but I'm a filmmaker and I'm stubborn and passionate. I'm now trying to undercut the website aggregators with a fair-source desktop "bring your own keys" system [3]. Hopefully I'm "just in time" for these systems to become desktop-native, with spatially controllable blocking and "world model" integration (nobody else has that yet). It's also written in Rust, and when I port the UX to Bevy, it's gonna sing.

[1] https://news.ycombinator.com/item?id=29688048

[2] https://vimeo.com/966897398/6dd268409c

[3] https://github.com/storytold/artcraft

dvrp•1w ago
Hey, if you are ever looking for a job at Krea, just let me know!
augusteo•1w ago
This maps to what we've seen building AI at work.

When we started building a voice agent for inbound calls, the models were close but not quite there. We spent months compensating for gaps: latency, barge-in handling, understanding messy phone audio. A lot of that was engineering around model limitations.

Then the models got better. Fast. Latency dropped. Understanding improved. Suddenly the human-in-the-loop wasn't compensating, it was enhancing.

The shift was noticeable. We went from "how do we work around this limitation" to "how do we build the best experience on top of this capability." That's MMF in practice.

The timing question is real though. We started building before MMF fully existed for our use case. Some of that early work was throwaway. Some of it became the foundation. Hard to know in advance which is which.

storystarling•1w ago
The danger is that we bridge that gap with backend complexity. I spent weeks over-engineering a chain of evaluators and retries to get reliable outputs from cheaper models, thinking I was optimizing margins.

Then a smarter model dropped that handled the nuance zero-shot. That sophisticated orchestration layer immediately became technical debt—slower and harder to maintain than just swapping the API endpoint.
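
For concreteness, the pattern being described might look like this hypothetical sketch. The model names and the `call_model`/`evaluate` helpers are invented stand-ins for a real completion API and output validator, not any vendor's actual interface:

```python
# Hypothetical sketch of the "evaluators and retries around a cheaper
# model" orchestration layer described above. call_model and evaluate
# are invented placeholders, not a real API.

def call_model(model: str, prompt: str, attempt: int = 0) -> str:
    # Stand-in for a real completion call. The cheap model only
    # produces an acceptable answer on a later retry; the smarter
    # model handles the nuance zero-shot.
    if model == "cheap-model":
        return "good answer" if attempt >= 2 else "bad answer"
    return "good answer"

def evaluate(output: str) -> bool:
    # Stand-in for an output check (schema validation, judge model, etc.).
    return output == "good answer"

def orchestrated(prompt: str, max_retries: int = 3) -> str:
    # The layer that becomes technical debt: loop the cheap model
    # through the evaluator until it passes or retries run out.
    for attempt in range(max_retries + 1):
        output = call_model("cheap-model", prompt, attempt)
        if evaluate(output):
            return output
    raise RuntimeError("cheap model never produced a valid output")

def swapped(prompt: str) -> str:
    # The replacement once a stronger model ships: just change
    # which model the endpoint points at, no orchestration needed.
    return call_model("smart-model", prompt)
```

The sketch makes the trade-off visible: `orchestrated` pays extra calls and latency to squeeze reliability out of the cheap model, while `swapped` gets the same result in one call once the capability lands in the model itself.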

barrenko•1w ago
Whatever we do now to "steer" the model to do the job will, my five cents, all get sucked into the model itself. Skate to where the puck is going, as they say: relentlessly focus on user experience and the overall product. That's how you get something like Granola.
chipsrafferty•1w ago
No AI model in 2026 can even "understand the whole codebase", lol. What is the author even talking about?
zkmon•1w ago
The thing you are referring to as a "model" is also called "technology", which has always arrived in waves across the decades and centuries, opening new markets and new needs. So in the "team, product, market" framing, the "product" already included the technology stack; the model is just another piece of that stack.
dbuxton•1w ago
The flip side of this is that if model capabilities become strong enough to saturate the benchmarks, the differentiation and defensibility of a wrapper solution built on top are significantly reduced.

IANAL but e.g. Claude Cowork is already good enough that it's hard to see how the legal tech startups are going to differentiate except around access controls, visual presentation of workflows, etc. And that's in a heavily enterprise/compliance-aware/security-focused context.

Don't get me wrong, that's still a big "except" - big enough for massive companies to be built. Personally, the anxiety of being so close to being squashed by the foundation models would make me unhappy as an entrepreneur, but looking at the market, it seems many people have a higher risk tolerance.

TeMPOraL•1w ago
I keep saying (need to coin a name for this at some point): LLMs, by their general-purpose nature, subsume software products.

Whatever domain-specific capability some software product[0] has, if it's useful to users now, it's more useful if turned into a tool an outside LLM can wield[1]. Users don't care about software products - on the contrary, the product is what stands between the user and what they actually want. If they can afford to delegate using the product to someone else, they do - whether it's to a friend, an external contractor, or an employee hired for that purpose.

This is the value offering LLMs provide to the user: general delegation. If an LLM can operate some software for you, it frees you to focus on the problems you need solved. If it can operate multiple software tools, the benefit to you grows superlinearly, as the LLM can combine tools to solve problems not addressed individually by any of them - even problems there are no dedicated tools for at all.
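
The "domain capability turned into a tool an outside LLM can wield" idea can be sketched roughly as follows. This is an illustrative shape only: the function, the schema layout, and the dispatcher are all invented for the example, and real tool-calling APIs differ by vendor:

```python
import json

# Hypothetical sketch: a product's domain capability, stripped of its
# UI, exposed as a tool for an outside LLM. The schema mimics common
# function-calling conventions but is not any specific vendor's format.

def convert_currency(amount: float, rate: float) -> float:
    """The product's actual capability, with no UI around it."""
    return round(amount * rate, 2)

# What the LLM sees: a machine-readable description of the capability.
TOOL_SCHEMA = {
    "name": "convert_currency",
    "description": "Convert an amount using a given exchange rate.",
    "parameters": {
        "amount": {"type": "number"},
        "rate": {"type": "number"},
    },
}

def dispatch(tool_call_json: str) -> str:
    # The thin layer left once the UI stops being the product:
    # route the model's tool call to the underlying function and
    # return a JSON result the model can read back.
    call = json.loads(tool_call_json)
    if call["name"] == "convert_currency":
        result = convert_currency(**call["arguments"])
        return json.dumps({"result": result})
    return json.dumps({"error": "unknown tool"})
```

Under this framing the monetizable surface shrinks from the whole UI to `TOOL_SCHEMA` plus `dispatch`, which is exactly the boundary-erasure the comment describes.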

This is a big problem for the software industry as it stands, because we rely on the software product as the monetizable unit: some UI layer that defines what can and cannot be done, that we can charge for, and then double-dip on with upsells and dark patterns, since UIs are perfect marketing platforms. General-purpose LLMs sitting on the outside break all that by erasing the "product" boundary. What's worse (for the industry - it's great for me as a user!), as multi-modal capabilities get better, there's nothing one can do to stop it: even if you purposefully block or obscure any (classically) machine-friendly endpoints, the LLM will just take the hard way and operate the UI the same way a human does.

I don't see any way this won't upend the entire industry in the next couple of years.

--

[0] - This includes both products you buy and products you rent, a.k.a. SaaS.

[1] - As opposed to "inside LLMs", a.k.a. the AI-in-product integrations everyone's doing these days in a desperate attempt to stay relevant. The outside vs. inside LLM distinction is the difference between your personal assistant and the assistant at some company's reception desk.