frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
119•ColinWright•1h ago•87 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
22•surprisetalk•1h ago•24 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
121•AlexeyBrin•7h ago•24 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
62•vinhnx•5h ago•7 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
828•klaussilveira•21h ago•249 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
119•alephnerd•2h ago•78 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
55•thelok•3h ago•7 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•39m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
108•1vuio0pswjnm7•8h ago•138 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1059•xnx•1d ago•611 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
76•onurkanbkrc•6h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
484•theblazehen•2d ago•175 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
8•valyala•2h ago•1 comment

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
9•valyala•2h ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
209•jesperordrup•12h ago•70 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
558•nar001•6h ago•256 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
222•alainrk•6h ago•343 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
36•rbanffy•4d ago•7 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•2 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•31 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
76•speckx•4d ago•75 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
6•momciloo•2h ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•111 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•11 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
286•dmpetrov•22h ago•153 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
71•mellosouls•4h ago•75 comments

Control LLM Spend and Access with any-LLM-gateway

https://blog.mozilla.ai/control-llm-spend-and-access-with-any-llm-gateway/
63•aittalam•2mo ago

Comments

bravura•2mo ago
Thoughts on any-llm-gateway versus litellm-proxy?

litellm is a great library, but one team using litellm-proxy told me they ran into many issues with it. I haven't tried the proxy myself yet.

cowmix•2mo ago
Yeah, I wonder what gaps in Litellm Proxy made Mozilla want to even do this.
dbish•2mo ago
What were the problems? I've been trying it out and haven't hit issues yet, but I'm not using it at scale, so I'm curious what to watch out for. I figure it's open source (MIT), so I can make changes as needed if there's anything particularly annoying.
ouk•2mo ago
There is also PydanticAI Gateway (https://ai.pydantic.dev/gateway/). I use it with the PydanticAI framework and it's quite nice.
verdverm•2mo ago
Services like this (an LLM proxy to all providers) are a dime a dozen.

This one has very little on monitoring and no reference to OTel in the docs.

vultour•2mo ago
Which self-hosted one would you recommend?
SOLAR_FIELDS•2mo ago
LiteLLM is one of the most popular solutions. You would self host the gateway
sothatsit•2mo ago
We use LiteLLM and it is a bit of a dumpster fire of enterprise features and bugs. I can't even update the budget on keys in the UI (it's an enterprise feature, though it may be a bug that it's marked as such). I can still update budgets through the API, but the API is a bit of a mess as well. We've also run into a lot of bugs, like the UI DDoSing itself when the retry mechanism broke and it just started spamming API requests. And then basic features like cleaning up old logs are enterprise-only.

We are actively looking to switch away from it, so it was nice to stumble on a post like this. Something so simple as a proxy with budgeting for keys should not be such a tangled mess.
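For a sense of why "a proxy with budgeting for keys" is conceptually simple, here is a minimal sketch of the core bookkeeping such a gateway does. All names here are hypothetical illustrations, not LiteLLM's or any-llm-gateway's actual implementation; a real gateway would persist this state and price requests from provider token counts.

```python
class BudgetExceeded(Exception):
    """Raised when a key's cumulative spend would pass its budget."""


class KeyBudgets:
    """Tracks cumulative spend per API key against a fixed budget."""

    def __init__(self):
        self._budgets = {}  # key -> max spend (dollars)
        self._spent = {}    # key -> spend so far (dollars)

    def set_budget(self, key: str, max_spend: float) -> None:
        self._budgets[key] = max_spend
        self._spent.setdefault(key, 0.0)

    def charge(self, key: str, cost: float) -> float:
        """Record one request's cost; refuse it if the key would go over budget.

        Returns the remaining budget for the key.
        """
        spent = self._spent.get(key, 0.0) + cost
        if key in self._budgets and spent > self._budgets[key]:
            raise BudgetExceeded(f"{key} exceeded budget of {self._budgets[key]:.2f}")
        self._spent[key] = spent
        return self._budgets.get(key, float("inf")) - spent
```

A gateway wraps this in an HTTP layer: look up the key, call `charge()` before (or after) forwarding the request upstream, and return 429/402 when it raises.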

jetbalsa•2mo ago
I'm currently using APISIX. Its AI rate limits are fine, and the web UI is a little JSON-heavy, but it got me going on load balancing a bunch of models across Ollama installs.
NeutralCrane•2mo ago
Are there other alternatives you have been looking at? I’m just getting started looking at these LLM gateways. I was under the impression that LiteLLM was pretty popular but you are not the only one here with negative things to say about it.
sothatsit•2mo ago
I am planning to try any-llm-gateway that this post is about. We don't need anything fancy, so it seems that this might cover our needs.
Manouchehri•2mo ago
LiteLLM was good in the early days. I ran into more features than bugs. Sadly in the past year or so, I run into more bugs than features.
smcleod•2mo ago
Interested to see how this stacks up against Bifrost (fast but many features paywalled) and LiteLLM Proxy (featureful but garbage code quality). Especially if it gets a web admin / reporting frontend and high availability.
NeutralCrane•2mo ago
We are just now looking into LLM Gateways and LiteLLM was one I was considering looking into. I’m curious to hear more about what makes the code quality garbage.
SOLAR_FIELDS•2mo ago
I personally had no issues using the client libs; my only complaint is that they only offer official Python ones. I'd love to see them publish a TypeScript one.
everlier•2mo ago
How do you like bugs where tool calls don't work, but only for the Ollama provider and only when streaming is enabled? That's one of the real issues I had to debug with LiteLLM.
smcleod•2mo ago
I've deployed LiteLLM Proxy in a number of locations and we're looking to swap it out (probably for Bifrost). We've seen many bugs with it that never should have made it to a release. Most stem from poor code quality or what I'd classify as poor development practices. It's also slow; it doesn't scale well and adds a lot of latency.

Bugs include but are not limited to multiple ways budget limits aren't enforced, parameter handling issues, configuration / state mismatches etc...

What makes this worse is that if you come to the devs with the problem, a solution, and even a PR, it's very difficult to get them to understand or action it, let alone treat critical things like major budget blowouts as a priority.

dbish•2mo ago
What about forking it for your own use? Not worth it for the bugs you had fixes for?
smcleod•2mo ago
Not worth the technical debt and architecture of the codebase. To be honest I'd sooner completely rewrite it in Golang/Rust or otherwise.
bitpush•2mo ago
I'm conflicted about what Mozilla is doing here. On the one hand, it's nice that they're getting involved, but c'mon, don't you all have Firefox to work on?

This is a classic case of an over-enthusiastic engineer who says yes / raises a hand for everything but doesn't do any one thing properly. At some point, you have to sit down and tell them to focus on one thing and do it properly.

ekr____•2mo ago
Mozilla spun up a whole new entity (Mozilla.ai) to do AI stuff, so doing AI stuff outside of Firefox is already baked into the equation, whatever you think of this particular thing.
benatkin•2mo ago
They're dumping competition on two other open-source Python libraries, LiteLLM and simonw's llm. Unlike those two, Mozilla's any-llm doesn't have to make money. I'm sure simonw will be welcoming because he's a friendly kind of guy, but it might be frustrating for LiteLLM, which has a paid offering and would presumably prefer organic competition to whatever magic 8-ball Mozilla uses.
tomComb•2mo ago
I don't think those are comparable. simonw's llm has a Python SDK, but it's very much CLI-first. LiteLLM is very much about the SDK. You can wrap some agent SDKs around it, like Gemini's, but that's for agents, not workflows. I can't really think of them as being in the same category.
benatkin•2mo ago
llm is a lot about the Python API (or SDK) as well: https://llm.datasette.io/en/stable/python-api.html

It shows how to use it async or sync, and even handles using async in a sync context.

It's hard to write a good CLI without also writing most of a Python API, and llm went the rest of the way by documenting it. I think llm has the best Python API docs of the three.
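The "async in a sync context" point boils down to a common bridging pattern: keep one async implementation and give the sync entry point a safe way to run it. Here is a generic sketch of that pattern (this is an illustration of the idea, not llm's actual code; `prompt_async` is a stand-in for a real model call):

```python
import asyncio


async def prompt_async(text: str) -> str:
    # Stand-in for a real async model call.
    await asyncio.sleep(0)
    return f"echo: {text}"


def prompt(text: str) -> str:
    """Sync wrapper that reuses the single async implementation."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running: safe to spin one up for this call.
        return asyncio.run(prompt_async(text))
    # Inside a running loop, blocking here would deadlock the loop;
    # callers in async code should await prompt_async() directly.
    raise RuntimeError("use 'await prompt_async(...)' inside an event loop")
```

The benefit is that the CLI, sync scripts, and async applications all share one code path instead of maintaining parallel sync and async implementations.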

mfrye0•2mo ago
I was looking for a proxy that could maximize throughput to each LLM based on its limits: basically max requests and input/output tokens per second.

I couldn't find one, so I rolled my own based on Redis and job queues. It works decently well, but I'd prefer to use something better if it exists.

Does anyone know of something like this that isn't completely over engineered / abstracted?
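The primitive underneath a throughput-maximizing proxy like the one described above is a token bucket per limit dimension (requests/sec, input tokens/sec, output tokens/sec) per upstream model. A Redis-backed version would hold the bucket state in Redis for sharing across workers; this in-process sketch uses illustrative names and numbers:

```python
import time


class TokenBucket:
    """Classic token bucket: refills at `rate` per second up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, amount: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False


# One bucket per dimension for a single upstream model (numbers illustrative):
limits = {
    "requests": TokenBucket(rate=10, capacity=10),
    "input_tokens": TokenBucket(rate=50_000, capacity=50_000),
}


def admit(n_input_tokens: int) -> bool:
    """Admit a request only if every dimension has headroom."""
    req, tok = limits["requests"], limits["input_tokens"]
    if req.try_acquire(1):
        if tok.try_acquire(n_input_tokens):
            return True
        req.tokens += 1  # refund the request slot so a partial check burns nothing
    return False
```

Requests that fail `admit()` go back on the job queue and are retried, which is roughly what the Redis-plus-queues setup in the comment achieves across multiple workers.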

Emen15•2mo ago
Feels like this is carving out a middle layer: simpler than other gateways out there, but much more practical than a bare unified client library.