
Show HN: System Prompt Learning – LLMs Learn Problem-Solving from Experience

48•codelion•8mo ago
I built a system that lets LLMs automatically learn and improve problem-solving strategies over time, inspired by Andrej Karpathy's idea of a "third paradigm" for LLM learning.

The basic idea: instead of using static system prompts, the LLM builds up a database of strategies that actually work for different problem types. When you give it a new problem, it selects the most relevant strategies, applies them, then evaluates how well they worked and refines them.
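
To make that loop concrete, here is a minimal sketch of the select / apply / evaluate cycle (the refinement step is omitted). This is not the plugin's actual code; the store layout, scoring rule, and function names are illustrative assumptions.

  # Hypothetical strategy store plus selection/evaluation loop (not the plugin's real API).
  strategies = [
      {"problem_type": "word_problem",
       "steps": "Identify unknowns, define variables with units, "
                "write equations, solve step by step, verify.",
       "attempts": 12, "successes": 9},
  ]

  def select_strategies(problem_type, k=3):
      """Pick the top-k strategies for this problem type by observed success rate."""
      relevant = [s for s in strategies if s["problem_type"] == problem_type]
      relevant.sort(key=lambda s: s["successes"] / max(s["attempts"], 1), reverse=True)
      return relevant[:k]

  def build_system_prompt(problem_type):
      """Prepend the selected strategies to the system prompt for this query."""
      lines = "\n".join(f"- {s['steps']}" for s in select_strategies(problem_type))
      return f"When solving a {problem_type}, apply these learned strategies:\n{lines}"

  def record_outcome(strategy, succeeded):
      """After evaluating the model's answer, update the strategy's track record."""
      strategy["attempts"] += 1
      strategy["successes"] += int(succeeded)

  print(build_system_prompt("word_problem"))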

For example, after seeing enough word problems, it learned this strategy:

1) Read carefully and identify unknowns,

2) Define variables with units,

3) Write equations,

4) Solve step-by-step,

5) Verify the answer.

All strategies are stored as human-readable JSON that you can inspect and edit.
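
For example, a single stored record might look roughly like this (the field names here are illustrative, not the plugin's actual schema; see the JSON files in the repo for real examples):

  import json

  # Illustrative shape of one stored strategy; inspect or hand-edit the real
  # JSON files in the plugin's strategy database for the actual schema.
  strategy = {
      "problem_type": "word_problem",
      "strategy": "Read carefully, identify unknowns, define variables with units, "
                  "write equations, solve step-by-step, verify the answer.",
      "attempts": 14,
      "successes": 11,
  }
  print(json.dumps(strategy, indent=2))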

I tested it on math benchmarks and saw decent improvements - 8.6% better on Arena Hard, 6.67% on AIME24. After 500 queries, the system had created 129 strategies and refined 97 of them.

The implementation is an open-source plugin for optillm (our inference optimization proxy). It works with any OpenAI-compatible API - you just add "spl-" to your model name. It has two modes: inference-only (uses existing strategies) and learning mode (creates and refines strategies).
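
For instance, with the optillm proxy running locally, usage might look like the following. The base URL, port, and API key value are assumptions about your local setup; only the "spl-" prefix comes from the description above, and the optillm README has the exact details.

  from openai import OpenAI

  # Point the standard OpenAI client at a locally running optillm proxy.
  # The URL/port and api_key value below are assumptions; adjust to your setup.
  client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-optillm")

  response = client.chat.completions.create(
      model="spl-gpt-4o-mini",  # the "spl-" prefix activates the plugin
      messages=[{"role": "user",
                 "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}],
  )
  print(response.choices[0].message.content)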

What's interesting is that it bridges the gap between the sophisticated system prompts that production AI uses and the basic prompts most of us work with. Your model literally gets better at the types of problems you throw at it.

Built it because I noticed ChatGPT, Claude etc. have incredibly detailed system prompts with problem-solving frameworks, but most developers use basic prompts and miss out on those performance gains. The approach is inspired by Andrej Karpathy's tweet about a "third paradigm" for LLM learning beyond just pretraining and fine-tuning: https://x.com/karpathy/status/1921368644069765486

The strategies are completely transparent - you can see exactly what the system learned and why it's making certain decisions. No black box learning.

https://github.com/codelion/optillm/tree/main/optillm/plugin...

Would love feedback on the approach. Has anyone else experimented with LLMs learning from their own experience?

Comments

codelion•8mo ago
Thanks for checking this out! A few additional details that didn't fit in the main post:

The system maintains two separate limits: a storage limit (max 10 strategies per problem type in the database) and an inference limit (max 3 strategies applied per query). This keeps the database manageable while ensuring the system prompt doesn't get too long.

One interesting finding was that strategies only get used for inference once they have at least 5 attempts and a 40% success rate. This prevents the system from applying unproven strategies to new problems.
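
As a rough illustration of that gating (a sketch with made-up names, not the plugin's code), the filter amounts to something like:

  MIN_ATTEMPTS = 5        # a strategy needs a track record before it is trusted
  MIN_SUCCESS_RATE = 0.4  # ...and at least a 40% success rate
  MAX_PER_QUERY = 3       # inference limit: at most 3 strategies per query

  def usable_strategies(candidates):
      """Keep only proven strategies, then take the top few for the system prompt."""
      proven = [
          s for s in candidates
          if s["attempts"] >= MIN_ATTEMPTS
          and s["successes"] / s["attempts"] >= MIN_SUCCESS_RATE
      ]
      proven.sort(key=lambda s: s["successes"] / s["attempts"], reverse=True)
      return proven[:MAX_PER_QUERY]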

The approach works particularly well with reasoning models like DeepSeek-R1 and QwQ - the learned strategies seem to guide their thinking process effectively.

I'm especially curious about:

1. How this might work with different model families

2. Whether the community sees value in sharing strategy databases between users

3. Ideas for extending beyond text-based reasoning to multimodal problems

The plugin integrates with our broader optillm project which has other inference optimization techniques. You can combine SPL with methods like mixture-of-agents or MCTS using the "&" operator.
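
From the client side that might look like the sketch below; the exact slug syntax and ordering for chaining techniques is defined in the optillm README, so treat the model string here as hypothetical.

  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-optillm")  # assumed local proxy

  response = client.chat.completions.create(
      model="spl&moa-gpt-4o-mini",  # hypothetical slug combining SPL with mixture-of-agents
      messages=[{"role": "user", "content": "Prove that the sum of two even integers is even."}],
  )
  print(response.choices[0].message.content)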

Next I'm thinking about meta-learning - having the system learn how to create better strategies more efficiently. Also exploring collaborative strategy sharing.

Would love to hear thoughts on the approach or if anyone has ideas for other problem domains where this might be useful!

ramonga•8mo ago
I would like to see some interesting input/output pairs. Do you have any?
codelion•8mo ago
We have some examples in the plugin README: https://github.com/codelion/optillm/tree/main/optillm/plugin...

E.g., this was the strategy discovered by optillm for solving word problems:

*Refined Strategy for Solving Word Problems:*

1. *Understand:*
   * Read the problem carefully (multiple times).
   * Identify the question (what are you trying to find?).
   * List all given information (facts, numbers, units).
   * Clarify ambiguous terms/units.

2. *Organize Information & Identify Unknowns:*
   * Choose an organization method (e.g., table, diagram, list, drawing).
   * Clearly identify the unknowns (what you need to solve for).

3. *Plan and Translate:*
   * Define all variables with units (e.g., `p = number of pennies`, `c = number of compartments`).
   * Identify relationships between knowns and unknowns.
   * Convert units if necessary.
   * Write equations or expressions, including units, that relate the knowns and unknowns.
   * Ensure units are consistent throughout the equations.
   * Outline the solution steps.

4. *Solve:*
   * Show work step-by-step.
   * Track units throughout calculations.
   * Calculate accurately.
   * Solve for the unknowns.

5. *Evaluate and Verify:*
   * Check if the answer is reasonable.
   * Verify the answer.

6. *Summarize:*
   * State the answer with units.

The full list of discovered strategies is available here - https://github.com/codelion/optillm/blob/main/optillm/plugin...

tanchaowen84•8mo ago
This is a really cool idea! I recently came across another project on GitHub, https://github.com/tensorzero/tensorzero, that explores a similar direction. You might find it interesting, and perhaps it could offer some inspiration or useful insights for your work as well.
yunusabd•8mo ago
That's an interesting space to explore! I'm wondering about the baseline in the benchmarks. Which prompts did you use for those? I'm asking because some of the resulting prompts seem fairly generic, and I'm wondering if you could just blanket add them to each prompt and also see an improvement. Things like "Identify the question (what are you trying to find?)".

In the same vein, wouldn't it be interesting to measure which part of the prompt most contributed to better solving the problem? Surely some parts will be just noise and can be trimmed away.

Also wondering what this does, since the model probably won't (can't?) actually read the problem multiple times:

  > Read the problem carefully (multiple times).
codelion•8mo ago
Re-reading the problem apparently works well - https://arxiv.org/abs/2309.06275

Here the system seems to have discovered this strategy by itself. The prompts are generic because the learning phase includes a step that refines and combines them. I haven’t experimented yet with adding all prompts to every query; given the large context that would require, it will be interesting to see.

yunusabd•8mo ago
Okay, but it looks like in the paper, they are actually adding the question twice in the prompt, not just instructing the model to read it twice. Or am I missing something?
Falimonda•8mo ago
How do you foresee a system like this efficiently managing and relying on a set of strategies whose size can become unbounded?
codelion•8mo ago
We do not allow the strategies to keep growing; there is a refinement phase where we refine and merge existing strategies. The experiments were run with this config - https://github.com/codelion/optillm/blob/main/optillm/plugin... - which allows a maximum of 10 strategies of each type.
dedicate•8mo ago
If I jump in and, say, manually 'tweak' one of those JSON strategies because I think I have a better idea, what happens next? Does the LLM just roll with my brilliant human intervention, or could it eventually 'learn' that my tweak was actually counterproductive and refine it back (or away from my edit)?
codelion•8mo ago
You can run in two modes; by default you run in inference mode without learning, so the changes you made will be used as-is. If you switch to learning mode, the strategies are updated, refined, and merged based on a config that you can control.

  # How often to perform maintenance operations (merge, prune)
  MAINTENANCE_INTERVAL = 40

  # Strategy selection thresholds
  STRATEGY_CREATION_THRESHOLD = 0.7     # Higher threshold to avoid creating similar strategies
  STRATEGY_MERGING_THRESHOLD = 0.6      # Lower threshold to merge more similar strategies
  MIN_SUCCESS_RATE_FOR_INFERENCE = 0.4  # Minimum success rate for a strategy to be used during inference

The configs are all defined here - https://github.com/codelion/optillm/blob/main/optillm/plugin...
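
As a sketch of how such a config might drive the maintenance phase (the merge logic and function signature below are assumptions, not the plugin's implementation):

  import itertools

  MAINTENANCE_INTERVAL = 40
  STRATEGY_MERGING_THRESHOLD = 0.6

  def maybe_run_maintenance(query_count, strategies, similarity):
      """Every MAINTENANCE_INTERVAL queries, merge pairs of strategies whose
      similarity exceeds the merging threshold; details here are illustrative."""
      if query_count % MAINTENANCE_INTERVAL != 0:
          return strategies
      kept = list(strategies)
      for a, b in itertools.combinations(strategies, 2):
          if a in kept and b in kept and similarity(a, b) >= STRATEGY_MERGING_THRESHOLD:
              kept.remove(b)                    # keep one of the pair...
              a["attempts"] += b["attempts"]    # ...and fold the other's record into it
              a["successes"] += b["successes"]
      return kept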

imaltont•8mo ago
You should take a look at something called Case-based reasoning. It seems to fit the path you are currently walking perfectly, as you have basically just rediscovered the CBR cycle.
pratikk10•8mo ago
Hey this looks interesting... would love to discuss more... can we connect? pratikkhedikar10@gmail.com
