The basic idea: instead of using static system prompts, the LLM builds up a database of strategies that actually work for different problem types. When you give it a new problem, it selects the most relevant strategies, applies them, then evaluates how well they worked and refines them.
For example, after seeing enough word problems, it learned this strategy:
1) Read carefully and identify unknowns,
2) Define variables with units,
3) Write equations,
4) Solve step-by-step,
5) Verify the answer.
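To make that select → apply → evaluate → refine loop concrete, here's a minimal sketch in Python; the function signature and field names are mine, not the plugin's actual code:

```python
from typing import Callable

def answer_with_spl(
    query: str,
    strategies: list[dict],              # relevant, proven strategies for this problem type
    llm: Callable[[str, str], str],      # (system_prompt, user_query) -> answer
    judge: Callable[[str, str], bool],   # (query, answer) -> did the answer hold up?
) -> str:
    """Apply selected strategies via the system prompt, then record how they performed."""
    system_prompt = "\n\n".join(s["strategy"] for s in strategies)
    answer = llm(system_prompt, query)

    success = judge(query, answer)
    for s in strategies:
        s["attempts"] += 1               # outcomes feed back into the strategy database,
        s["successes"] += int(success)   # which drives later selection and refinement
    return answer
```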
All strategies are stored as human-readable JSON that you can inspect and edit.
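A stored record might look roughly like this (shown as the dict the JSON loads into; the field names are illustrative, not the plugin's actual schema):

```python
# Hypothetical strategy record - inspect or hand-edit the JSON file it came from.
strategy = {
    "id": "math_word_problem_003",
    "problem_type": "math_word_problem",
    "strategy": "1) Read carefully and identify unknowns. 2) Define variables with units. ...",
    "attempts": 12,
    "successes": 9,
}
```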
I tested it on benchmarks and saw decent improvements: 8.6% better on Arena Hard and 6.67% better on AIME24. After 500 queries, the system had created 129 strategies and refined 97 of them.
The implementation is an open-source plugin for optillm (our inference optimization proxy). It works with any OpenAI-compatible API - you just add "spl-" to your model name. Has two modes: inference-only (uses existing strategies) and learning mode (creates and refines strategies).
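Here's a minimal sketch of calling it through the proxy with the OpenAI Python client; the port and API key below are placeholders for whatever your optillm instance is configured with, and toggling learning vs. inference-only mode isn't shown:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # optillm proxy endpoint (adjust to your setup)
    api_key="sk-anything",                # placeholder; the proxy forwards to your provider
)

response = client.chat.completions.create(
    model="spl-gpt-4o-mini",  # the "spl-" prefix routes the request through the plugin
    messages=[{"role": "user", "content": "A tank fills at 3 L/min and drains at 1 L/min. How long to fill 50 L?"}],
)
print(response.choices[0].message.content)
```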
What's interesting is that it bridges the gap between the sophisticated system prompts production AI assistants use and the basic prompts most of us work with. Your model literally gets better at the types of problems you throw at it.
Built it because I noticed ChatGPT, Claude etc. have incredibly detailed system prompts with problem-solving frameworks, but most developers use basic prompts and miss out on those performance gains. The approach is inspired by Andrej Karpathy's tweet about a "third paradigm" for LLM learning beyond just pretraining and fine-tuning: https://x.com/karpathy/status/1921368644069765486
The strategies are completely transparent - you can see exactly what the system learned and why it's making certain decisions. No black box learning.
https://github.com/codelion/optillm/tree/main/optillm/plugin...
Would love feedback on the approach. Has anyone else experimented with LLMs learning from their own experience?
The system maintains two separate limits: a storage limit (max 10 strategies per problem type in the database) and an inference limit (max 3 strategies applied per query). This keeps the database manageable while ensuring the system prompt doesn't get too long.
One interesting finding was that strategies only get used for inference once they have at least 5 attempts and a 40% success rate. This prevents the system from applying unproven strategies to new problems.
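Roughly, that gating plus the per-query cap might look like this (illustrative Python, not the plugin's actual code; how relevance is scored is my assumption). The separate storage cap of 10 per problem type would be enforced when writing to the database, which isn't shown here.

```python
MIN_ATTEMPTS = 5        # don't trust a strategy until it has been tried enough
MIN_SUCCESS_RATE = 0.4  # ...and has worked at least 40% of the time
MAX_PER_QUERY = 3       # inference limit: at most 3 strategies in the system prompt

def select_for_inference(candidates: list[dict], relevance: dict[str, float]) -> list[dict]:
    """Keep only proven strategies, then take the few most relevant to this query."""
    proven = [
        s for s in candidates
        if s["attempts"] >= MIN_ATTEMPTS
        and s["successes"] / s["attempts"] >= MIN_SUCCESS_RATE
    ]
    proven.sort(key=lambda s: relevance[s["id"]], reverse=True)
    return proven[:MAX_PER_QUERY]
```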
The approach works particularly well with reasoning models like DeepSeek-R1 and QwQ - the learned strategies seem to guide their thinking process effectively.
I'm especially curious about:
1. How this might work with different model families
2. Whether the community sees value in sharing strategy databases between users
3. Ideas for extending beyond text-based reasoning to multimodal problems
The plugin integrates with our broader optillm project which has other inference optimization techniques. You can combine SPL with methods like mixture-of-agents or MCTS using the "&" operator.
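For example, reusing the client from the earlier snippet, a combined run might look like this; I'm guessing at the exact prefix order here, so check the optillm docs for the canonical syntax:

```python
# Hypothetical combination of SPL with mixture-of-agents via the "&" operator.
response = client.chat.completions.create(
    model="spl&moa-gpt-4o-mini",
    messages=[{"role": "user", "content": "Prove that the product of two odd integers is odd."}],
)
print(response.choices[0].message.content)
```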
Next I'm thinking about meta-learning - having the system learn how to create better strategies more efficiently. Also exploring collaborative strategy sharing.
Would love to hear thoughts on the approach or if anyone has ideas for other problem domains where this might be useful!