frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
143•theblazehen•2d ago•42 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
668•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•33 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
17•kaonwarb•3d ago•19 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
28•jesperordrup•4h ago•16 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
223•dmpetrov•14h ago•117 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•5 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•138 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
183•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

Show HN: System Prompt Learning – LLMs Learn Problem-Solving from Experience

48•codelion•8mo ago
I built a system that lets LLMs automatically learn and improve problem-solving strategies over time, inspired by Andrej Karpathy's idea of a "third paradigm" for LLM learning.

The basic idea: instead of using static system prompts, the LLM builds up a database of strategies that actually work for different problem types. When you give it a new problem, it selects the most relevant strategies, applies them, then evaluates how well they worked and refines them.
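The select-apply-evaluate-refine loop can be sketched in a few lines. This is an illustrative reconstruction, not optillm's actual API; all class and function names here are hypothetical.

```python
# Illustrative sketch of the select -> apply -> evaluate -> refine loop.
# All names (Strategy, select_strategies, record_outcome) are hypothetical.

class Strategy:
    def __init__(self, problem_type, text):
        self.problem_type = problem_type
        self.text = text
        self.attempts = 0
        self.successes = 0

    @property
    def success_rate(self):
        return self.successes / self.attempts if self.attempts else 0.0

def select_strategies(db, problem_type, k=3):
    """Pick the k best-performing stored strategies for this problem type."""
    candidates = [s for s in db if s.problem_type == problem_type]
    return sorted(candidates, key=lambda s: s.success_rate, reverse=True)[:k]

def record_outcome(strategy, solved):
    """After evaluating the model's answer, update the strategy's track record."""
    strategy.attempts += 1
    if solved:
        strategy.successes += 1

db = [Strategy("word_problem", "Define variables with units, then write equations.")]
chosen = select_strategies(db, "word_problem")
record_outcome(chosen[0], solved=True)
```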

For example, after seeing enough word problems, it learned this strategy:

1) Read carefully and identify unknowns
2) Define variables with units
3) Write equations
4) Solve step-by-step
5) Verify the answer

All strategies are stored as human-readable JSON that you can inspect and edit.
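A stored record might look roughly like this; the field names are illustrative, so inspect the JSON the plugin actually writes for the real schema:

```json
{
  "strategy_id": "strategy_42",
  "problem_type": "word_problem",
  "strategy_text": "1) Read carefully and identify unknowns, 2) Define variables with units, ...",
  "attempts": 12,
  "successes": 8
}
```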

I tested it on math benchmarks and saw decent improvements: 8.6% better on Arena Hard and 6.67% on AIME24. After 500 queries, the system had created 129 strategies and refined 97 of them.

The implementation is an open-source plugin for optillm (our inference optimization proxy). It works with any OpenAI-compatible API - you just add "spl-" to your model name. Has two modes: inference-only (uses existing strategies) and learning mode (creates and refines strategies).
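For illustration, a request through the proxy might be built like this. Only the "spl-" model-name prefix comes from the post; the endpoint URL and base model name are placeholders.

```python
# Build an OpenAI-style chat payload routed through the SPL plugin.
# The "spl-" prefix on the model name activates the plugin; the URL
# below is a placeholder for wherever your optillm proxy is running.

OPTILLM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder

def spl_payload(model, question):
    """Prefix the model name with 'spl-' and wrap the question in a chat payload."""
    if not model.startswith("spl-"):
        model = "spl-" + model
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

payload = spl_payload("gpt-4o-mini", "A train leaves at 3pm traveling 60 mph ...")
# POST `payload` as JSON to OPTILLM_URL with any OpenAI-compatible client.
```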

What's interesting is that it bridges the gap between the sophisticated system prompts that production AI uses and the basic prompts most of us work with. Your model literally gets better at the types of problems you throw at it.

Built it because I noticed ChatGPT, Claude etc. have incredibly detailed system prompts with problem-solving frameworks, but most developers use basic prompts and miss out on those performance gains. The approach is inspired by Andrej Karpathy's tweet about a "third paradigm" for LLM learning beyond just pretraining and fine-tuning: https://x.com/karpathy/status/1921368644069765486

The strategies are completely transparent - you can see exactly what the system learned and why it's making certain decisions. No black box learning.

https://github.com/codelion/optillm/tree/main/optillm/plugin...

Would love feedback on the approach. Has anyone else experimented with LLMs learning from their own experience?

Comments

codelion•8mo ago
Thanks for checking this out! A few additional details that didn't fit in the main post:

The system maintains two separate limits: a storage limit (max 10 strategies per problem type in the database) and an inference limit (max 3 strategies applied per query). This keeps the database manageable while ensuring the system prompt doesn't get too long.

One interesting finding was that strategies only get used for inference once they have at least 5 attempts and a 40% success rate. This prevents the system from applying unproven strategies to new problems.
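The gating described above (at least 5 attempts, at least a 40% success rate, at most 3 strategies per query) can be sketched like this; the dict fields and strategy names are made up for the example.

```python
# Sketch of the eligibility gate: only proven strategies reach inference,
# and at most 3 are applied per query. Field names are illustrative.
MIN_ATTEMPTS = 5
MIN_SUCCESS_RATE = 0.4
MAX_STRATEGIES_PER_QUERY = 3

def eligible(strategies):
    proven = [
        s for s in strategies
        if s["attempts"] >= MIN_ATTEMPTS
        and s["successes"] / s["attempts"] >= MIN_SUCCESS_RATE
    ]
    proven.sort(key=lambda s: s["successes"] / s["attempts"], reverse=True)
    return proven[:MAX_STRATEGIES_PER_QUERY]

strategies = [
    {"name": "define-variables", "attempts": 10, "successes": 7},
    {"name": "draw-a-diagram", "attempts": 3, "successes": 3},   # too few attempts
    {"name": "guess-and-check", "attempts": 8, "successes": 2},  # 25% < 40%
]
```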

The approach works particularly well with reasoning models like DeepSeek-R1 and QwQ - the learned strategies seem to guide their thinking process effectively.

I'm especially curious about:

1. How this might work with different model families

2. Whether the community sees value in sharing strategy databases between users

3. Ideas for extending beyond text-based reasoning to multimodal problems

The plugin integrates with our broader optillm project which has other inference optimization techniques. You can combine SPL with methods like mixture-of-agents or MCTS using the "&" operator.
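If I read the "&" composition right, the combined approach is expressed in the model string; this is a guess at the shape based on the post, not confirmed optillm syntax.

```python
# Hedged guess at composing approaches via "&" in the model string.
# The "&" operator is from the post; the exact string format optillm
# expects may differ, so treat this as a sketch.

def combine(approaches, base_model):
    """Join approach slugs with '&' and prefix them to the base model name."""
    return "&".join(approaches) + "-" + base_model

model = combine(["spl", "moa"], "gpt-4o-mini")
```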

Next I'm thinking about meta-learning - having the system learn how to create better strategies more efficiently. Also exploring collaborative strategy sharing.

Would love to hear thoughts on the approach or if anyone has ideas for other problem domains where this might be useful!

ramonga•8mo ago
I would like to see some interesting input/output pairs. Do you have any?
codelion•8mo ago
We have some examples in the plugin README: https://github.com/codelion/optillm/tree/main/optillm/plugin...

E.g., this was the strategy discovered by optillm for solving word problems:

*Refined Strategy for Solving Word Problems:*

1. *Understand:*
   - Read the problem carefully (multiple times).
   - Identify the question (what are you trying to find?).
   - List all given information (facts, numbers, units).
   - Clarify ambiguous terms/units.

2. *Organize Information & Identify Unknowns:*
   - Choose an organization method (e.g., table, diagram, list, drawing).
   - Clearly identify the unknowns (what you need to solve for).

3. *Plan and Translate:*
   - Define all variables with units (e.g., `p = number of pennies`, `c = number of compartments`).
   - Identify relationships between knowns and unknowns.
   - Convert units if necessary.
   - Write equations or expressions, including units, that relate the knowns and unknowns.
   - Ensure units are consistent throughout the equations.
   - Outline the solution steps.

4. *Solve:*
   - Show work step-by-step.
   - Track units throughout calculations.
   - Calculate accurately.
   - Solve for the unknowns.

5. *Evaluate and Verify:*
   - Check if the answer is reasonable.
   - Verify the answer.

6. *Summarize:*
   - State the answer with units.

The full list of discovered strategies is available here: https://github.com/codelion/optillm/blob/main/optillm/plugin...

tanchaowen84•8mo ago
This is a really cool idea! I recently came across another project on GitHub: https://github.com/tensorzero/tensorzero that explores a similar direction. You might find it interesting, and perhaps it could offer some inspiration or useful insights for your work as well.
yunusabd•8mo ago
That's an interesting space to explore! I'm wondering about the baseline in the benchmarks. Which prompts did you use for those? I'm asking because some of the resulting prompts seem fairly generic, and I'm wondering if you could just blanket add them to each prompt and also see an improvement. Things like "Identify the question (what are you trying to find?)".

In the same vein, wouldn't it be interesting to measure which part of the prompt most contributed to better solving the problem? Surely some parts will be just noise and can be trimmed away.

Also wondering what this does, since the model probably won't (can't?) actually read the problem multiple times:

  > Read the problem carefully (multiple times).
codelion•8mo ago
Re-reading the problem apparently works well: https://arxiv.org/abs/2309.06275

Here the system seems to have discovered this strategy by itself. The prompts are generic because during learning there is a step that refines and combines them. I haven't yet experimented with adding all prompts to every query; given the large context that would create, it will be interesting to see.

yunusabd•8mo ago
Okay, but it looks like in the paper, they are actually adding the question twice in the prompt, not just instructing the model to read it twice. Or am I missing something?
Falimonda•8mo ago
How do you foresee a system like this efficiently managing and relying on a set of strategies whose size can become unbounded?
codelion•8mo ago
We don't allow the strategies to keep growing; there is a refinement phase where existing strategies are refined and merged. The experiments were run with this config: https://github.com/codelion/optillm/blob/main/optillm/plugin... which allows a maximum of 10 strategies of each type.
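A maintenance pass along those lines might prune like this; this is a naive sketch with made-up field names, keeping only the best performers per problem type.

```python
# Naive sketch of the pruning half of maintenance: cap the database at
# 10 strategies per problem type, keeping the best performers.
MAX_STRATEGIES_PER_TYPE = 10

def prune(db):
    by_type = {}
    for s in db:
        by_type.setdefault(s["type"], []).append(s)
    kept = []
    for group in by_type.values():
        group.sort(key=lambda s: s["success_rate"], reverse=True)
        kept.extend(group[:MAX_STRATEGIES_PER_TYPE])
    return kept

db = [{"type": "word_problem", "success_rate": i / 12} for i in range(12)]
pruned = prune(db)
```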
dedicate•8mo ago
If I jump in and, say, manually 'tweak' one of those JSON strategies because I think I have a better idea, what happens next? Does the LLM just roll with my brilliant human intervention, or could it eventually 'learn' that my tweak was actually counterproductive and refine it back (or away from my edit)?
codelion•8mo ago
You can run in two modes, by default you run in the inference mode without learning. So, the changes you made will be used. If you switch to learning mode then the strategies are updated/refined and merged based on a config that you can control.

# How often to perform maintenance operations (merge, prune)
MAINTENANCE_INTERVAL = 40

# Strategy selection thresholds
STRATEGY_CREATION_THRESHOLD = 0.7  # Higher threshold to avoid creating similar strategies
STRATEGY_MERGING_THRESHOLD = 0.6  # Lower threshold to merge more similar strategies
MIN_SUCCESS_RATE_FOR_INFERENCE = 0.4  # Minimum success rate for a strategy to be used during inference

The configs are all defined here: https://github.com/codelion/optillm/blob/main/optillm/plugin...
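One hedged guess at how those thresholds might be applied: create a candidate strategy only if it is sufficiently novel, and merge existing pairs that are too similar. The word-overlap similarity here is a naive stand-in for whatever optillm actually uses.

```python
# Hedged sketch of create-vs-merge decisions using the thresholds above.
# word_overlap is a naive Jaccard similarity, not optillm's real metric.
STRATEGY_CREATION_THRESHOLD = 0.7
STRATEGY_MERGING_THRESHOLD = 0.6

def word_overlap(a, b):
    """Jaccard similarity over lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def should_create(candidate, existing):
    """Create a new strategy only if it is novel enough vs. all existing ones."""
    return all(word_overlap(candidate, e) < STRATEGY_CREATION_THRESHOLD
               for e in existing)

def merge_pairs(existing):
    """Find index pairs of strategies similar enough to merge at maintenance time."""
    pairs = []
    for i in range(len(existing)):
        for j in range(i + 1, len(existing)):
            if word_overlap(existing[i], existing[j]) >= STRATEGY_MERGING_THRESHOLD:
                pairs.append((i, j))
    return pairs
```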

imaltont•8mo ago
You should take a look at case-based reasoning (CBR). It fits the road you're on perfectly; you've basically rediscovered the CBR cycle.
pratikk10•8mo ago
Hey this looks interesting... would love to discuss more... can we connect? pratikkhedikar10@gmail.com