
Show HN: Runprompt – run .prompt files from the command line

https://github.com/chr15m/runprompt
134•chr15m•2mo ago
I built a single-file Python script that lets you run LLM prompts from the command line with templating, structured outputs, and the ability to chain prompts together.

When I discovered Google's Dotprompt format (frontmatter + Handlebars templates), I realized it was perfect for something I'd been wanting: treating prompts as first-class programs you can pipe together Unix-style. Google uses Dotprompt in Firebase Genkit and I wanted something simpler - just run a .prompt file directly on the command line.

Here's what it looks like:

  ---
  model: anthropic/claude-sonnet-4-20250514
  output:
    format: json
    schema:
      sentiment: string, positive/negative/neutral
      confidence: number, 0-1 score
  ---
  Analyze the sentiment of: {{STDIN}}

Running it:

cat reviews.txt | ./runprompt sentiment.prompt | jq '.sentiment'
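Under the hood, a .prompt file is just frontmatter between `---` markers followed by the template body, so the split is easy to reproduce by hand. A minimal sketch of that split (not runprompt's actual parser; the function name here is made up):

```python
import re

def split_prompt_file(text):
    """Split a .prompt file into (frontmatter, template body).

    Assumes the Dotprompt layout: a leading block delimited by lines
    containing only '---', followed by the prompt template.
    """
    match = re.match(r"\s*---\n(.*?)\n---\n(.*)", text, re.DOTALL)
    if match:
        return match.group(1), match.group(2)
    return "", text  # no frontmatter block

frontmatter, template = split_prompt_file(
    "---\nmodel: anthropic/claude-sonnet-4-20250514\n---\nAnalyze: {{STDIN}}\n"
)
```

The frontmatter then gets parsed as YAML-style key/value config, and template variables like `{{STDIN}}` are substituted before the prompt is sent to the model.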

The things I think are interesting:

* Structured output schemas: Define JSON schemas in the frontmatter using a simple `field: type, description` syntax. The LLM reliably returns valid JSON you can pipe to other tools.

* Prompt chaining: Pipe JSON output from one prompt as template variables into the next. This makes it easy to build multi-step agentic workflows as simple shell pipelines.

* Zero dependencies: It's a single Python file that uses only stdlib. Just curl it down and run it.

* Provider agnostic: Works with Anthropic, OpenAI, Google AI, and OpenRouter (which gives you access to dozens of models through one API key).
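For illustration, the compact `field: type, description` syntax could expand into a standard JSON Schema object roughly like this (a sketch of the idea, not runprompt's actual code; `expand_schema` is a hypothetical name):

```python
def expand_schema(fields):
    """Expand a compact {name: "type, description"} mapping into a
    JSON Schema object (hypothetical helper, not from runprompt)."""
    properties = {}
    for name, spec in fields.items():
        type_name, _, description = (part.strip() for part in spec.partition(","))
        properties[name] = {"type": type_name, "description": description}
    return {
        "type": "object",
        "properties": properties,
        "required": list(properties),
    }

schema = expand_schema({
    "sentiment": "string, positive/negative/neutral",
    "confidence": "number, 0-1 score",
})
```

A schema in this shape is what provider APIs with structured-output support expect, which is what makes the JSON reliable enough to pipe into `jq` or the next prompt in a chain.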

You can use it to automate things like extracting structured data from unstructured text, generating reports from logs, and building small agentic workflows without spinning up a whole framework.

Would love your feedback, and PRs are most welcome!

Comments

dymk•2mo ago
Can the base URL be overridden so I can point it at eg Ollama or any other OpenAI compatible endpoint? I’d love to use this with local LLMs, for the speed and privacy boost.
jedbrooke•2mo ago
https://github.com/chr15m/runprompt/blob/main/runprompt#L9

Seems like it would be; just swap the OpenAI URL here or add a new one.

chr15m•2mo ago
Good idea. Will figure out a way to do this.
benatkin•2mo ago
Perhaps instead of writing an llm abstraction layer, you could use a lightweight one, such as @simonw's llm.
chr15m•2mo ago
I don't want to introduce a dependency. Simon's tool is great but I don't like the way it stores template state. I want my state in a file in my working folder.
threecheese•2mo ago
Can you explain this decision a bit more? I’m using ‘llm’ and I find your project interesting.
chr15m•2mo ago
llm stores data (prompts, responses, chats, fragments, aliases, attachment metadata) in a central sqlite database outside the working directory, and you have to use the tool to view and manipulate that data. I prefer a tool like this to default to storing things in a file or files in the project directory I'm working in, in a way that is legible, e.g. plain text files. Contrast with e.g. git, where everything goes into .git.

Functions require you to specify them on the command line every time they're invoked. I would prefer a tool like this to default to reading the functions from a hierarchy where it reads e.g. .llm-functions in the current folder, then ~/.config/llm-functions or something like that.
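The lookup hierarchy described here (project folder first, then a per-user config folder) is a small amount of code. A hypothetical sketch, with the `.llm-functions` name taken from the comment above:

```python
from pathlib import Path

def find_config(name=".llm-functions"):
    """Walk from the current directory up through its parents looking
    for a project-local config file, then fall back to ~/.config.
    Returns a Path, or None if nothing is found (hypothetical sketch)."""
    for folder in [Path.cwd(), *Path.cwd().parents]:
        candidate = folder / name
        if candidate.is_file():
            return candidate
    fallback = Path.home() / ".config" / name.lstrip(".")
    return fallback if fallback.is_file() else None
```

This is the same pattern git uses to find `.git` and direnv uses to find `.envrc`: nearest file wins, global config is only a fallback.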

In general I found myself baffled when trying to figure out where and how to configure things. That's probably me being impatient but I have found other tools to have more straightforward setup and less indirection.

Basically I like things to be less centralized, less magic, and less controlled by the tool.

Another thing, which is not the fault of llm at all, is I find Python based tools annoying to install. I have to remember the env where I set them up. Contrast with a golang application which is generally a single file I can put in ~/bin. That's the reason I don't want to introduce a dep to runprompt if I can avoid it.

The final thing that I found frustrating was the name 'llm' which makes it difficult to conduct searches as it is the generic name for what the thing is.

It is an amazing piece of engineering and I am a huge fan of simonw's work, but I don't use llm much for these reasons.

khimaros•2mo ago
simple solution: honor OPENAI_API_BASE env var
chr15m•2mo ago
I've implemented this now. You can set it with BASE_URL or OPENAI_BASE_URL, which seems to be the closest thing to a standard. I also plan to use this with local LLMs. Thanks for the suggestion!
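The override pattern itself is simple: read the base URL from the environment and fall back to the hosted endpoint. A minimal sketch using only stdlib (the env var names come from the comment above; the Ollama URL in the example is the usual local default, an assumption here, and `build_chat_request` is a made-up name):

```python
import json
import os
import urllib.request

def build_chat_request(prompt, model):
    """Build (but don't send) an OpenAI-compatible chat completion
    request whose base URL can be redirected at Ollama or any other
    OpenAI-compatible endpoint via the environment."""
    base = (os.environ.get("OPENAI_BASE_URL")
            or os.environ.get("BASE_URL")
            or "https://api.openai.com/v1")
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# e.g. OPENAI_BASE_URL=http://localhost:11434/v1 points this at a local Ollama server
req = build_chat_request("hello", "llama3.2")
```

Because Ollama, LM Studio, vLLM, and most gateways expose the same `/chat/completions` shape, one env var is enough to switch providers.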
cedws•2mo ago
Can it be made to be directly executable with a shebang line?
_joel•2mo ago
it already has one - https://github.com/chr15m/runprompt/blob/main/runprompt#L1

If you curl/wget a script, you still need to chmod +x it. Git doesn't have this issue as it retains the file metadata.

vidarh•2mo ago
I'm assuming the intent was to ask if the *.prompt files could have a shebang line.

   #!/usr/bin/env runprompt
   ---
   .frontmatter...
   ---
   
   The prompt.
Would be a lot nicer, as then you can just +x the prompt file itself.
chr15m•2mo ago
That's on my TODO list for tomorrow, thanks!
leobuskin•2mo ago
/usr/local/bin/promptrun

  #!/bin/bash
  file="$1"
  model=$(sed -n '2p' "$file" | sed 's/^# *//')
  prompt=$(tail -n +3 "$file")
  curl -s https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "content-type: application/json" \
    -H "anthropic-version: 2023-06-01" \
    -d "{
      \"model\": \"$model\",
      \"max_tokens\": 1024,
      \"messages\": [{\"role\": \"user\", \"content\": $(echo "$prompt" | jq -Rs .)}]
    }" | jq -r '.content[0].text'

hello.prompt

  #!/usr/local/bin/promptrun
  # claude-sonnet-4-20250514

  Write a haiku about terminal commands.
chr15m•2mo ago
I added this and you can now make .prompt files with a runprompt shebang.

#!/usr/bin/env runprompt

cootsnuck•2mo ago
This is pretty cool. I like using snippets to run little scripts I have in the terminal (I use Alfred a lot on macOS). And right now I just manually do LLM requests in the scripts if needed, but I'd actually rather have a small library of prompts and then be able to pipe inputs and outputs between different scripts. This seems pretty perfect for that.

I wasn't aware of the whole ".prompt" format, but it makes a lot of sense.

Very neat. These are the kinds of tools I love to see. Functional and useful, not trying to be "the next big thing".

PythonicNinja•2mo ago
Added some examples using runprompt in a blog post:

"Chain Prompts Like Unix Tools with Dotprompt"

https://pythonic.ninja/blog/2025-11-27-dotprompt-unix-pipes/

chr15m•2mo ago
Great article, thanks.

"One-liner code review from staged changes" - love this example.

orliesaurus•2mo ago
Why this over md files I already make, which can be read by any agent CLI (Claude, Gemini, Codex, etc.)?
garfij•2mo ago
Do your markdown files have frontmatter configuration?
orliesaurus•2mo ago
no, does it matter?
jsdwarf•2mo ago
Claude.md is an input to Claude Code, which requires a monthly plan subscription north of €15/month. The same applies to Gemini.md, unless you are OK with them using your prompts to train Gemini. The Python script works with a pay-per-use API key.
chr15m•2mo ago
Less typing. More control over chaining prompts together. Reproducibility. Running different prompts on different providers and models. Easy to install and runs everywhere. Inserts into scripting workflows simply. 12 factor env config.
swah•2mo ago
That's pretty good, now let's see simonw's one...
__MatrixMan__•2mo ago
It would be cool if there were some cache (invalidated by hand, potentially distributed across many users) so we could get consistent results while iterating on the later stages of the pipeline.
chr15m•2mo ago
Do you mean you want responses cached to e.g. a file based on the inputs?
__MatrixMan__•2mo ago
Yeah, if it's a novel prompt, by all means send it to the model, but if it's the same prompt as 30s ago, just immediately give me the same response I got 30s ago.

That's typically how we expect bash pipelines to work, right?

chr15m•2mo ago
Bash pipelines don't do any caching and will execute fresh each time, but I understand your idea and why a cache is useful when iterating on the command line. I'll implement it. Thanks!
stephenlf•2mo ago
That’s a great idea. Store inputs/outputs in XDG_CACHE_HOME/runprompt.sqlite
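The input-keyed cache being discussed fits in a few lines: hash the (model, prompt) pair and store each response under that key. A hypothetical sketch (not runprompt's actual `--cache` implementation; a real version would use the XDG cache directory rather than the temp dir used here for simplicity):

```python
import hashlib
import json
import os
import tempfile

# Stand-in for XDG_CACHE_HOME/runprompt; an assumption for this sketch.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "runprompt-cache")

def cache_key(model, prompt):
    """Derive a stable filename from the exact inputs."""
    return hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()

def cached_call(model, prompt, call):
    """Return the cached response for identical inputs; otherwise
    invoke `call(model, prompt)` and remember its result."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, cache_key(model, prompt))
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    response = call(model, prompt)
    with open(path, "w") as f:
        f.write(response)
    return response
```

Identical inputs short-circuit to the stored file, so re-running a pipeline while iterating on a later stage never re-queries the model; deleting the cache directory invalidates everything by hand.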
dymk•2mo ago
“tee” where you want to intercept and cat that file into later stages?
__MatrixMan__•2mo ago
Yeah sure but it breaks the flow that makes bash pipelines so fun:

- arrow up

- append a stage to the pipeline

- repeat until output is as desired

If you're gonna write to some named location and later read from it you're drifting towards a different mode of usage where you might as well write a python script.

chr15m•2mo ago
I've now added caching to runprompt with the --cache flag and RUNPROMPT_CACHE env var. Thanks for the suggestion!
tomComb•2mo ago
Everything seems to be about agents. Glad to see a post about enabling simple workflows!
oddrationale•2mo ago
Interesting! Seems there is a very similar format by Microsoft called `.prompty`. Maybe I'll work on a PR to support either `.prompt` or `.prompty` files.

https://microsoft.github.io/promptflow/how-to-guides/develop...

chr15m•2mo ago
Oh interesting. Will investigate, thanks!
stephenlf•2mo ago
Fun! I love the idea of throwing LLM calls in a bash pipe
stephenlf•2mo ago
Seeing lots of good ideas in this thread. I am taking the liberty of adding them as GH issues
ltbarcly3•2mo ago
Ooof, I guess vibecoding is only as good as the vibecoder.
journal•2mo ago
I literally vibe coded a tool like this. It supports image in, audio out, and archiving.
chr15m•2mo ago
Cool, I'm going to add file modalities too. Thanks for the validation!
gessha•2mo ago
Just like Linus being content with other people working on solutions to common problems, I’m so happy that you made this! I’ve had this idea for a long time but haven’t had the time to work on it.
Barathkanna•2mo ago
This is really clever. Dotprompt as a thin, pipe-friendly layer around LLMs feels way more ergonomic than spinning up a whole agent stack. The single-file + stdlib approach is a nice touch too. How robust is the JSON schema enforcement when chaining multiple steps?
chr15m•2mo ago
If the LLM returns JSON that doesn't match the schema, the next link in the chain will pass it through anyway, since input defaults to a string when the JSON doesn't parse. Maybe I should make it error out in that situation.
anonym29•2mo ago
Would it be too complex, for the implementation you're envisioning, to include a JSON schema validator and run the output through it, so you can detect when the output doesn't match the schema and optionally retry until it does (with a max number of attempts before throwing an error)?

It certainly doesn't intuitively sound like it matches the "Do one thing" part of the Unix philosophy, but it does seem to match the "and do it well" part.

That said, I can totally understand a counterargument which proposes that schema validation and processing logic should be something else that someone desiring reliability pipes the output into.
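The retry loop described above is short to sketch. A hypothetical version where validation just checks for required keys (a real implementation might plug in a full JSON Schema validator instead):

```python
import json

def generate_validated(generate, required_keys, max_attempts=3):
    """Call `generate()` until it returns JSON containing every
    required key, or raise after max_attempts (hypothetical sketch)."""
    for _ in range(max_attempts):
        raw = generate()
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not JSON at all; retry
        if all(key in data for key in required_keys):
            return data
    raise ValueError("no schema-valid response after %d attempts" % max_attempts)
```

Whether this belongs inside the tool or in a separate program you pipe into is exactly the Unix-philosophy trade-off raised above.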

chr15m•2mo ago
I'm not sure. I think I need to use it more to see what the LLMs do with bad data. The design you're suggesting might be the answer though.
meander_water•2mo ago
This is really cool and interesting timing, as I created something similar recently - https://github.com/julio-mcdulio/pmp

I've been using mlflow to store my prompts, but wanted something lightweight on the CLI to version and manage prompts. I set up pmp so you can have different storage backends (file, sqlite, mlflow, etc.).

I wasn't aware of dotprompt, I might build that in too.

threecheese•2mo ago
Looks like Google has packaged dotprompt into a Python library, might allow you to make the codebase leaner: https://github.com/google/dotprompt/tree/main/python/dotprom...

I think you mentioned elsewhere that you don't want to have a lot of dependencies, but as the format evolves, using the reference impl will allow you to work on real features.

chr15m•2mo ago
Will have a look at this. That could be the way to go. Thanks.