frontpage.

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•2m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•2m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•3m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•8m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•14m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•15m ago•1 comments

Slop News - HN front page hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•20m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•22m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•28m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•32m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•32m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•36m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•37m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•39m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•41m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•44m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•45m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•47m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•48m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•50m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•53m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•58m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•1h ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: Docker/Model-Runner – Join Us in Revitalizing the DMR Community

https://github.com/docker/model-runner
14•ericcurtin•3mo ago
Hey Hacker News,

We're the maintainers of docker/model-runner and wanted to share some major updates we're excited about.

Link: https://github.com/docker/model-runner

We are revitalizing the community:

https://www.docker.com/blog/revitalizing-model-runner-commun...

At its core, model-runner is a simple, backend-agnostic tool for downloading and running local large language models: a consistent interface for working with different model backends. One of our main backends is llama.cpp, and we make a point of contributing any improvements we make there back upstream.

It also lets you transport models via OCI registries like Docker Hub. Docker Hub hosts our curated collection of local AI models, packaged as OCI artifacts and ready to run. You can easily download, share, and upload models on Docker Hub, making it a central hub for both containerized applications and the next wave of generative AI.
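
If you want a concrete feel, the basic flow looks roughly like this (ai/smollm2 is one of the curated models on Docker Hub; exact commands and flags may vary between versions):

    # Pull a curated model from Docker Hub (distributed as an OCI artifact)
    docker model pull ai/smollm2

    # Run a one-shot prompt against the local model
    docker model run ai/smollm2 "Explain what an OCI artifact is in one sentence."

    # List the models available locally
    docker model list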

We've been working hard on a few things recently:

- Vulkan and AMD Support: We've just merged support for Vulkan, which opens up local inference to a much wider range of GPUs, especially from AMD.

- Contributor Experience: We refactored the project into a monorepo. The main goal was to make the architecture clearer and dramatically lower the barrier for new contributors to get involved and understand the codebase.

- It's Fully Open Source: We know that a project from Docker might raise questions about its openness. To be clear, this is a 100% open-source, Apache 2.0 licensed project. We want to build a community around it and welcome all contributions, from documentation fixes to new model backends.

- DGX Spark Day-0 Support: We've got it!

Our goal is to grow the community. We'll be here all day to answer any questions you have. We'd love for you to check it out, give us a star if you like it, and let us know what you think.

Thanks!

Comments

ericcurtin•3mo ago
Hi everyone, we're the maintainers.

We're rebooting the model-runner community and wanted to share what we've been up to and where we're headed.

When we first built this, the idea was simple: make running local models as easy as running containers. You get a consistent interface to download and run models from different backends (llama.cpp being a key one) and can even transport them using familiar OCI registries like Docker Hub.

Recently, we've invested a lot of effort into making it a true community project. A few highlights:

- The project is now a monorepo, making it much easier for new contributors to find their way around.

- We've added Vulkan support to open things up for AMD and other non-NVIDIA GPUs.

- We made sure we have day-0 support for the latest NVIDIA DGX hardware.

- There is a new "docker model run" UX: https://www.docker.com/blog/docker-model-run-prompt/
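
The runner also exposes an OpenAI-compatible API, so a pulled model can be reached programmatically too. Roughly like this (the host-side TCP port 12434 and the /engines/v1 path are the documented defaults at the time of writing and may differ in your setup):

    # Assumes TCP host access to Model Runner is enabled on the default port
    curl http://localhost:12434/engines/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
            "model": "ai/smollm2",
            "messages": [{"role": "user", "content": "Say hello from model-runner."}]
          }'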

doringeman•3mo ago
Hi everyone, another maintainer here.

To add to the list above: vLLM support is coming this week!