Ask HN: What's a standard way for apps to request text completion as a service?

50•nvader•1mo ago
If I'm writing a new lightweight application that requires LLM-based text completion to power a feature, is there a standard way to request the user's operating system to provide a completion?

For instance, imagine I'm writing a small TUI that lets you browse jsonl files, and I want to add a natural-language parsing feature. Is there an emerging standard for an implementation-agnostic "Translate this natural query to jq {natlang-query}: response here: "?

If we don't have this yet, what would it take to get this built and broadly available?

Comments

billylo•1mo ago
Windows and macOS do come with a small model for generating text completions. You can write a wrapper for your own TUI to access them platform-agnostically.

For consistent LLM behaviour, you can use the Ollama API with your model of choice: https://docs.ollama.com/api/generate

Chrome has a built-in Gemini Nano too, but there isn't an official way to use it outside Chrome yet.
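
The Ollama endpoint linked above is a plain HTTP POST; here is a minimal Python sketch, assuming a local Ollama daemon on its default port 11434 and a model already pulled (the model name `llama3.2` is illustrative, not prescribed by the thread):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3.2"):
    # Payload shape per the Ollama generate API docs;
    # stream=False returns a single JSON object instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def complete(prompt, model="llama3.2"):
    # Requires a running Ollama daemon; raises URLError otherwise.
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A TUI could ship something like this as its default backend and let users point it at a different model through configuration.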

nvader•1mo ago
Is there a Linux-y standard brewing?
billylo•1mo ago
Each distro is doing its own thing. If you are targeting Linux mainly, I would suggest coding it on top of Ollama or LiteLLM
vintagedave•4w ago
Do you know what it’s called, at least on Windows? I’m struggling to find API docs.

When I asked an AI, it said no such built-in model exists (possibly a knowledge-cutoff issue).

bredren•4w ago
Yes. I am not aware of a model shipping with Windows, nor of announced plans to do so. Microsoft has been focused on cloud-based LLM services.
usefulposter•4w ago
This thread is full of hallucinations ;)
billylo•3w ago
https://learn.microsoft.com/en-us/windows/ai/apis/phi-silica
vintagedave•3w ago
Thank you!
tony_cannistra•4w ago
These are the on-device model APIs for apple: https://developer.apple.com/documentation/foundationmodels
1bpp•3w ago
Windows doesn't?
WilcoKruijer•4w ago
MCP has a feature called sampling which does this, but this might not be too useful for your context. [0]

In a project I'm working on, I simply present some data and a prompt; the user can then pipe this into an LLM CLI such as Claude Code.

[0] https://modelcontextprotocol.io/specification/2025-06-18/cli...
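
For reference, sampling works by having the server send a `sampling/createMessage` JSON-RPC request back to the client, which owns the model and the user-approval flow. A rough sketch of the request shape (field names follow the MCP spec version linked above; the prompt text and values are illustrative):

```python
import json

def make_sampling_request(query, request_id=1):
    # JSON-RPC request an MCP server would send to a client that
    # advertised the "sampling" capability. The client chooses the
    # model, shows the prompt to the user, and returns the completion.
    req = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [
                {
                    "role": "user",
                    "content": {
                        "type": "text",
                        "text": f"Translate this natural query to jq: {query}",
                    },
                }
            ],
            "maxTokens": 200,
        },
    }
    return json.dumps(req)
```

The catch, as noted below in the thread, is that the client app has to support the capability in the first place.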

brumar•4w ago
Sampling seemed so promising, but do we know if some MCPs managed to leverage this feature successfully?
lurking_swe•3w ago
If I recall correctly, the issue is that most MCP-capable client apps (Cursor, Claude Code, etc.) don't yet support it. VS Code is an exception.

Example: https://github.com/anthropics/claude-code/issues/1785

lcian•4w ago
When I'm writing a script that requires some kind of call to an LLM, I use this: https://github.com/simonw/llm.

This is of course cross-platform and works both with models accessible through an API and with local ones.

I'm afraid this might not solve your problem, though, as it is not an out-of-the-box solution: it requires the user to either provide their own API key or install Ollama and wire it up on their own.

kristopolous•4w ago
I've been working on a more Unix-y version of his tool, which I call llcat. It's composable, stateless, agnostic, and generic:

https://github.com/day50-dev/llcat

It might help things get closer.

It's under two days old, and it's already fundamentally changing how I do things.

Also, for running at the edge, look into the LFM 2.5 class of models: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct

mirror_neuron•4w ago
I love this concept. Looks great, I will definitely check it out.
kristopolous•3w ago
Please use it and give me feedback. I'm going to give a lightning talk on it tonight at SFVLUG.
nvader•4w ago
I think this is definitely a step in the right direction, and is exactly the kind of answer I was looking for. Thank you!

`llm` gives my tool a standard bin to call to invoke completions, and configuring and managing it is the user's responsibility.

If more tools started expecting something like this, it could become a de facto standard. Then maybe the OS would begin to provide it.
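
Shelling out to a standard bin like `llm` is a small amount of host-tool code; a sketch using subprocess, assuming `llm` is on the user's PATH and already configured by them (model choice and API keys stay the user's responsibility):

```python
import shutil
import subprocess

def build_llm_command(prompt, model=None):
    # `llm "<prompt>"` uses the user's configured default model;
    # -m overrides it. See https://github.com/simonw/llm for flags.
    cmd = ["llm"]
    if model:
        cmd += ["-m", model]
    cmd.append(prompt)
    return cmd

def complete(prompt):
    if shutil.which("llm") is None:
        raise RuntimeError("llm CLI not found on PATH")
    out = subprocess.run(build_llm_command(prompt),
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()
```

The host tool only depends on the binary's name and its stdout; everything else is delegated.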

cjonas•4w ago
I asked a similar question a while back and didn't get any response. Some type of service is needed for applications that want to be AI-enabled but not deal with the usage-based pricing that comes with it. Right now the only option is for the user to provide a token/endpoint from one of the services. This is fine for local apps, but less ideal for web apps.
netsharc•4w ago
That's interesting; on Linux there's the $EDITOR variable for the terminal text editor (a quick search of the three distros Arch, Ubuntu, and Fedora shows they respect it).

Maybe you can trailblaze and tell users your application supports the $LLM or $LLM_AUTOCOMPLETE variables (convene a naming committee for better names).
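
In the spirit of $EDITOR, a tool could resolve its completion command from the environment and fall back to a known binary. A hypothetical sketch (the `$LLM` variable name is the suggestion above, not an existing convention):

```python
import os
import shlex

def resolve_llm_command(prompt):
    # $LLM may hold a command plus flags, e.g. "llm -m some-model"
    # or "ollama run llama3.2"; fall back to a plain `llm` binary.
    base = shlex.split(os.environ.get("LLM", "llm"))
    return base + [prompt]
```

This keeps the app implementation-agnostic: the user decides which backend the variable points at.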

joshribakoff•4w ago
I have been using an open-source program, Handy; it is a cross-platform Rust Tauri app that does speech recognition and handles inputting text into programs. It works by piggybacking on the OS's text-input or copy-and-paste features.

You could fork this, and shell out to an LLM before finally pasting the response.

TZubiri•4w ago
Not natural language at all, but Linux has readline for exact character matches; it's what powers tab completion in the command line.

Maybe it could be repurposed for natural language in a specific implementation.
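
As a sketch of that repurposing idea: Python exposes GNU readline's completion hook, and the completer callback is just a function, so it could delegate to any backend. Here a static list of jq snippets stands in for a model call (the snippet list is illustrative):

```python
import readline

JQ_SNIPPETS = ["keys", "keys_unsorted", "length", "map(.name)", "to_entries"]

def completer(text, state):
    # readline calls this repeatedly with state=0,1,... until it
    # returns None. An LLM-backed version would generate candidates
    # from `text` instead of filtering a fixed list.
    matches = [s for s in JQ_SNIPPETS if s.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind("tab: complete")
```

The latency of a model call inside a Tab-press handler would be the practical obstacle.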

Sevii•4w ago
Small models are getting good, but I don't think they are quite there yet for this use case. For OK results we are looking at 12-14 GB of VRAM committed to models to make this happen. My MacBook with 24 GB of total RAM runs fine with a 14B model loaded, but I don't think most people have quite enough RAM yet. Still, I think it's something we are going to need.

We are also going to want the opposite: a way for an LLM to request tool calls so that it can drive an arbitrary application. MCP exists, but it expects you to preregister all your MCP servers, and I am not sure how well preregistering would work at the scale of every application on your PC.

tpae•4w ago
You can check out my project here: https://github.com/dinoki-ai/osaurus

I'm focused on building it for the macOS ecosystem.

jiehong•4w ago
This might work through an LSP server?

It’s not exactly the intended use case, but it could be coerced to do that.

I’ve seen something else like that, though: voice transcription software that has access to the context the text is in, and can interact with it and modify it.

Like how some people use Superwhisper modes [0] to do actions with their voice in any app.

It works because you can say "rewrite this text, and answer the questions it asks", and the dictation app first transcribes this to text, extracts the whole text from the focused app, sends both to an AI model, gets an answer back, and pastes the output.

[0]: https://superwhisper.com/docs/common-issues/context