frontpage.

A Governance Innovation Crisis

https://www.overcomingbias.com/p/a-governance-innovation-crisis
1•paulpauper•30s ago•0 comments

The Scramble for the Seafloor

https://www.nybooks.com/online/2025/12/10/the-scramble-for-the-seafloor/
1•mitchbob•3m ago•1 comments

Hashcards: A Plain-Text Spaced Repetition System

https://borretti.me/article/hashcards-plain-text-spaced-repetition
1•thomascountz•3m ago•0 comments

Ask HN: What Are You Working On? (December 2025)

1•david927•3m ago•0 comments

Elon Musk Is Wrong About Basic Income and Crime: Here Is the Evidence He Ignored

https://scottsantens.substack.com/p/elon-musk-is-wrong-about-universal-basic-income-ubi-and-crime
1•2noame•4m ago•0 comments

Nippon Steel's Acquisition of US Steel: A $15B Deal

https://imaa-institute.org/blog/nippon-steels-acquisition-of-us-steel/
1•eatonphil•5m ago•0 comments

Job apocalypse? Humbug. AI is creating brand-new occupations

https://www.economist.com/business/2025/12/14/job-apocalypse-humbug-ai-is-creating-brand-new-occu...
1•edward•6m ago•0 comments

The Twelve Slices of Christmas: How Vasco Chained the Chaos

https://perladvent.org/2025/2025-12-14.html
1•oalders•8m ago•1 comments

Inside The Dark and Predatory World of Crypto Casinos

https://www.nytimes.com/interactive/2025/12/09/us/crypto-casinos-gambling-streamers.html
1•thm•9m ago•0 comments

The next version of the web will be built for machines, not humans

https://www.economist.com/interactive/science-and-technology/2025/12/10/the-next-version-of-the-w...
1•edward•9m ago•0 comments

The best software podcast episodes I ever heard

https://thundergolfer.com/ten-best-software-podcast-episodes
2•jonobelotti•10m ago•0 comments

I added native time awareness to CrewAI to fix LLM date hallucinations

https://github.com/crewAIInc/crewAI/pull/4082
1•sherwin27•10m ago•1 comments

What Does Hadolint Do?

https://hadolint.com/what-does-hadolint-do/
1•mooreds•11m ago•0 comments

The Creation of America's Car Culture [audio]

https://thewaroncars.org/2025/11/11/episode-161-the-creation-of-americas-car-culture/
1•mooreds•12m ago•0 comments

Show HN: Llmwalk – explore the answer-space of open LLMs

https://github.com/samwho/llmwalk
1•samwho•14m ago•0 comments

Record $4.4B flows into Israeli cybersecurity as global VCs outpace locals in '25

https://www.ynetnews.com/business/article/rjggjusz11g
1•myth_drannon•17m ago•0 comments

Rust Coreutils 0.5.0 Release: 87.75% compatibility with GNU Coreutils

https://github.com/uutils/coreutils/releases/tag/0.5.0
3•maxloh•18m ago•1 comments

Carlito's Way

https://zmef.freeshell.org/carlitoway.html
2•zmef•21m ago•1 comments

Could a 5-day RTO be around the corner for Big Tech?

https://blog.pragmaticengineer.com/the-pulse-could-a-5-day-rto-be-around-the-corner-for-big-tech/
3•srijan4•22m ago•0 comments

A basic implementation of a virtual continuum fingerboard

https://continuum.awalgarg.me
1•todsacerdoti•22m ago•0 comments

Kaniko – Build Container Images in Kubernetes

https://github.com/osscontainertools/kaniko
1•bixilon•27m ago•0 comments

In Defense of Papyrus

https://designforhackers.com/blog/papyrus-font/
1•thimabi•27m ago•0 comments

FamFS Hopes to Go Upstream in 2026

https://www.phoronix.com/news/FamFS-2026-Upstream-Hopes
1•Bender•29m ago•0 comments

Transmutation Challenge

https://vinyasi.substack.com/p/transmutation-challenge
1•vinyasi•30m ago•0 comments

Show HN: CodeContext – Cut developer onboarding time from months to weeks

https://github.com/sonii-shivansh/CodeContext
1•shivanshsonii•31m ago•0 comments

FDA drug trials exclude a widening slice of Americans

https://medicalxpress.com/news/2025-12-fda-drug-trials-exclude-widening.html
3•bikenaga•31m ago•1 comments

I wrote JustHTML using coding agents

https://friendlybit.com/python/writing-justhtml-with-coding-agents/
1•simonw•32m ago•1 comments

Misinformation is an inevitable biological reality across nature

https://phys.org/news/2025-12-misinformation-inevitable-biological-reality-nature.html
1•Brajeshwar•34m ago•3 comments

Giant structure discovered deep beneath Bermuda

https://www.livescience.com/planet-earth/geology/giant-structure-discovered-deep-beneath-bermuda-...
1•Brajeshwar•34m ago•0 comments

America's post-apocalyptic maps reveal eerily familiar fault lines

https://bigthink.com/strange-maps/america-after-the-fall/
1•Brajeshwar•35m ago•0 comments

Hypermode Model Router Preview – OpenRouter Alternative

https://hypermode.com/blog/introducing-model-router
33•iamtherhino•7mo ago

Comments

jbellis•7mo ago
What I'm seeing with Brokk (https://brokk.ai) is that models are not really interchangeable for code authoring. Even with frontier models like GP2.5 and Sonnet 3.7, Sonnet is significantly better about following instructions ("don't add redundant comments"), while GP2.5 has more raw intelligence. So we're using litellm to create a unified API to consume, but the premise of "route your requests to whatever model is responding fastest" doesn't seem that attractive.
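
(Roughly, the litellm side of that looks like the sketch below; the model identifiers are illustrative, not necessarily the exact ones we run.)

    from litellm import completion

    # One call shape for every provider; only the model string changes.
    # Provider API keys come from the usual environment variables
    # (ANTHROPIC_API_KEY, GEMINI_API_KEY, ...).
    prompt = [{"role": "user", "content": "Refactor this function without adding redundant comments."}]

    for model in ["anthropic/claude-3-7-sonnet-20250219", "gemini/gemini-2.5-pro"]:
        response = completion(model=model, messages=prompt)
        print(model, "->", response.choices[0].message.content)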

But OpenRouter is ridiculously popular so it must be very useful for other use cases!

johnymontana•7mo ago
I think the value here is being able to have a unified API to access hosted open source models and proprietary models. And then being able to switch between models without changing any code. Model optionality was one of the factors Hypermode called out in the 12 Factor Agentic App: https://hypermode.com/blog/the-twelve-factor-agentic-app

Also, being able to use models from multiple services and open source models without signing up for another service / bringing your own API key is a big accelerator for folks getting started with Hypermode agents.

iamtherhino•7mo ago
Hey! Co-founder of Hypermode here.

Agreed that swapping models for code-gen doesn't make sense. We're mostly indexed on GPT-4.1 for our AgentBuilder product. I haven't found moving between models for code to be super effective.

The most popular use case we've seen from folks is on the iteration/experimentation phase of building an agent/tool. We made ModelRouter originally as an internal service for our "prompt to agent" product, where folks are trying a few dozen models/MCPs/tools/data/etc really quickly as they try to find a local maximum for some automation or job.

0xDEAFBEAD•7mo ago
Are there any of these tools which will use your evals to automatically recommend a model to use? Imagine if you didn't need to follow model releases anymore, and you just had a heuristic that would automatically select the right price/performance tradeoff. Maybe there's even a way to route queries differently to more expensive models depending on how tricky they are.

(This would be more for using models at scale in production as opposed to individual use for code authoring etc.)

jbellis•7mo ago
Yeah, that seems possible, but a dumb preprocessing step won't help and a smart one will add significant latency.

Feels a bit halting-problem-ish: can you tell if a problem is too hard for model A without being smarter than model A yourself?

0xDEAFBEAD•7mo ago
I imagine if your volume is high enough it could be worthwhile to at least check to see if simple preprocessing gets you anywhere.

Basically compare model performance on a bunch of problems, and see if the queries which actually require an expensive model have anything in common (e.g. low Flesch-Kincaid readability, or a bag-of-words approach which tries to detect the frequency of subordinate clauses/potentially ambiguous pronouns, or word rarity, or whatever).

Maybe my knowledge of old-school NLP methods is useful after all :-) Generally those methods tend to be far less compute-intensive. If you wanted to go really crazy on performance, you might even use a Bloom filter to do fast, imprecise counting of words of various types.

Then you could add some old-school, compute-lite ML, like an ordinary linear regression on the old-school-NLP-derived features.
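
A toy version of that heuristic, just to show the shape of it (the features, weights, threshold, and model names below are all made up):

    import re

    # Cheap lexical features as a stand-in for Flesch-Kincaid-style scoring.
    def difficulty_features(query: str) -> dict:
        words = re.findall(r"[A-Za-z']+", query)
        sentences = max(1, len(re.findall(r"[.!?]+", query)))
        long_words = [w for w in words if len(w) >= 8]            # crude word-rarity proxy
        pronouns = [w for w in words if w.lower() in {"it", "they", "this", "that"}]
        return {
            "words_per_sentence": len(words) / sentences,
            "long_word_ratio": len(long_words) / max(1, len(words)),
            "pronoun_ratio": len(pronouns) / max(1, len(words)),  # ambiguity proxy
        }

    # Hand-set linear score; in practice the weights would come from a
    # regression fit on logged (query, model, eval score) data.
    def pick_model(query: str) -> str:
        f = difficulty_features(query)
        score = (0.02 * f["words_per_sentence"]
                 + 2.0 * f["long_word_ratio"]
                 + 3.0 * f["pronoun_ratio"])
        return "big-expensive-model" if score > 0.5 else "small-cheap-model"

    print(pick_model("What is 2 + 2?"))                                          # small-cheap-model
    print(pick_model("They claim it contradicts that earlier clause; reconcile them."))  # big-expensive-model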

Really the win would be for a company like Hypermode to implement this automatically for customers who want it (high volume customers who don't mind saving money).

Actually, a company like Hypermode might be uniquely well-positioned to offer this service to smaller customers as well, if query difficulty heuristics generalize well across different workloads. Assuming they have access to data for a large variety of customers, they could look for heuristics that generalize well.

iamtherhino•7mo ago
I really like this approach.

I think there's a big advantage to be had for folks bringing "old school" ML approaches to LLMs. We've been spending a lot of time looking at the expert systems from the 90s.

Another one we've been looking at is applying some query planning approaches to these systems to see if we can pull responses from cache instead of invoking the model again.

Obviously there's a lot of complexity to identifying where we could apply some smaller ML models or cache-- but it's been a really fun exploration.
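
A toy sketch of the exact-match version of that caching idea (a real router would also need semantic matching, TTLs, and invalidation; `call_model` below just stands in for whatever client actually hits the model):

    import hashlib
    import json

    # Exact-match response cache keyed on (model, normalized messages).
    _cache: dict[str, str] = {}

    def _key(model: str, messages: list) -> str:
        normalized = json.dumps({"model": model, "messages": messages}, sort_keys=True)
        return hashlib.sha256(normalized.encode()).hexdigest()

    def cached_completion(model: str, messages: list, call_model) -> str:
        key = _key(model, messages)
        if key not in _cache:          # miss: pay for one real inference
            _cache[key] = call_model(model, messages)
        return _cache[key]             # hit: no model invocation at all

    answer = cached_completion(
        "meta-llama/llama-4-scout-17b-16e-instruct",
        [{"role": "user", "content": "What is Dgraph?"}],
        call_model=lambda m, msgs: "Dgraph is a distributed graph database.",
    )
    print(answer)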

0xDEAFBEAD•7mo ago
>We've been spending a lot of time looking at the expert systems from the 90s.

No way. I would definitely be curious to hear more if you want to share.

iamtherhino•7mo ago
We've been playing with that in the background. I can try to shoot you a preview in a few weeks. It works pretty well for reasoning tasks/NLP workloads but for workloads that need a "correct" answer, it's really tough to maintain accuracy when swapping models.

What we've seen most successful is making recommendations in the agent creation process for a given tool/workload and then leaving them somewhat static after creation.

0xDEAFBEAD•7mo ago
That's fair. Maybe you could even send the user an email when you detect a new model release or pricing change that handles their workload more cheaply at comparable quality, so they know to investigate.
iamtherhino•7mo ago
That's a good idea-- then give them a link to "replay last X inferences with model ABC" so they can do a quick eyeball eval.
0xDEAFBEAD•7mo ago
Sweet, maybe you'll like my other idea in this thread too: https://news.ycombinator.com/item?id=43929194
threeducks•7mo ago
The Python API example looks like it has been written by an LLM. You don't need to import json, you don't need to set the content type, and it is good practice to use context managers (the "with" statement) to release the connection in case of exceptions. Also, you don't gain anything by commenting variables with the name of the variable.

The following sample (probably) does the same thing and is about half the length. I have not tested it because there is no signup (EDIT: I was mistaken, there actually is a "signup" behind the login link, which is Google or GitHub login, so the naming makes sense. I confused it with a previously more prominent waitlist link.)

    import requests

    # Your Hypermode Workspace API key
    api_key = "<YOUR_HYP_WKS_KEY>"

    # Use the Hypermode Model Router API endpoint
    url = "https://models.hypermode.host/v1/chat/completions"

    headers = {"Authorization": f"Bearer {api_key}"}

    payload = {
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is Dgraph?"},
        ],
        "max_tokens": 150,
        "temperature": 0.7,
    }

    # Make the API request
    with requests.post(url, headers=headers, json=payload) as response:
        response.raise_for_status()
        print(response.json()["choices"][0]["message"]["content"])
iamtherhino•7mo ago
Signups are open: hypermode.com/sign-up

There's a waitlist for our prompt to agent product in the banner. That's a good call; I'll update it to be clearer.

threeducks•7mo ago
Oh, I did not catch that. Sorry!
iamtherhino•7mo ago
Not at all! I'm updating the banner now
iamtherhino•7mo ago
updated our python example too!
KTibow•7mo ago
`post` automatically releases the connection. `with` only makes sense when you use a `requests.Session()`.
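
For example, where `with` does earn its keep (endpoint and payload borrowed from the sample above):

    import requests

    # A Session pools connections across requests and closes them all on exit.
    with requests.Session() as session:
        session.headers["Authorization"] = "Bearer <YOUR_HYP_WKS_KEY>"
        for question in ["What is Dgraph?", "Who maintains Dgraph?"]:
            response = session.post(
                "https://models.hypermode.host/v1/chat/completions",
                json={
                    "model": "meta-llama/llama-4-scout-17b-16e-instruct",
                    "messages": [{"role": "user", "content": question}],
                },
            )
            response.raise_for_status()
            print(response.json()["choices"][0]["message"]["content"])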
threeducks•7mo ago
You are right! https://github.com/psf/requests/blob/c65c780849563c891f35ffc...

The post function calls the request function, which uses its own context manager that will call the close function of the session object.

hobo_mark•7mo ago
Is there something like OpenRouter, but for text-to-speech models?
iamtherhino•7mo ago
I haven't seen one yet-- no reason we couldn't do that with Hypermode. I'll do some exploration!
maxbendick•7mo ago
The logo is fairly evocative of the SS insignia.

To explain in the clearest terms: unlike the SS insignia, the lightning bolt in the logo has tapering at the bottom. The second element in the logo, the slash, does not have tapering at the bottom. The general shape of the logo is the same as the SS insignia: two diagonal elements side-by-side (which would be all good on its own). The mind tends to see repetition, so it has a tendency to "mix up" the two elements of the logo. The mind also has a tendency to remember similar things. Putting it all together, the logo has a chance to evoke the SS insignia.

I may just be reading too much Theweleit and W. Reich nowadays, but I think you'll catch some flak for this logo if it becomes recognizable outside the tech milieu.

iamtherhino•7mo ago
Thanks for the feedback-- I can say emphatically, that's not our intention in the least. We chose a lightning bolt to evoke speed, i.e., the "hyper" in Hypermode. I've asked design to take another look at the "H" logo.
maxbendick•7mo ago
Thanks so much for replying. I didn't think it was your intention at all.