There are two popular ways to self-study chess: solving tactics puzzles and following along with professional games or with an engine. Both are helpful, but both have downsides.
When solving puzzles, simply knowing that you are in a puzzle biases you towards looking for specific types of moves (checkmates, queen sacrifices, etc.). But in a real game, you don't know which positions actually have tactics available, so you can waste time hunting for a tactic or, even worse, blunder by playing for a tactic that isn't really there.
When following along with an engine, there are tons of positions where the engine comes up with a move that you simply would never have seen and can't possibly understand. These positions are very low signal for learners, and it is hard to distinguish them from high-signal positions that are right on the edge of your ability.
blunder.clinic addresses both of these problems by giving you positions where people of your skill level actually blundered, but where the best move isn't too far beyond your capability to understand and learn from. We do this by leveraging Stockfish for positional evaluations and Maia[1] for difficulty evaluation.
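To make the idea concrete, here is a minimal sketch of what such a selection criterion could look like. The function name, thresholds, and the way the Stockfish and Maia signals are combined are all assumptions for illustration, not the site's actual implementation:

```python
# Hypothetical sketch of a blunder.clinic-style puzzle filter.
# Inputs: how badly Stockfish says the played move hurt the position, and
# how likely Maia thinks a player at this rating is to find the best move.
# The thresholds below are made-up illustrative values.

def is_good_puzzle(eval_drop_cp: int, p_best_move: float,
                   min_drop_cp: int = 200,
                   min_findability: float = 0.15) -> bool:
    """A position qualifies if the move actually played was a real blunder
    (Stockfish eval dropped by at least min_drop_cp centipawns) but Maia
    estimates a player of this rating still has a reasonable chance
    (at least min_findability) of finding the best move."""
    return eval_drop_cp >= min_drop_cp and p_best_move >= min_findability

# A 300-centipawn blunder whose best move Maia thinks a player at your
# rating finds 40% of the time: good training material.
print(is_good_puzzle(300, 0.40))  # True
# The same blunder, but the refutation is engine-only (2% findability):
# filtered out as low signal.
print(is_good_puzzle(300, 0.02))  # False
```

The point of the second threshold is exactly the low-signal problem above: a blunder whose punishment only an engine would find teaches you nothing, so it gets filtered out.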
Overall, the main purpose of blunder.clinic is to help you stop blundering in easy positions!
You can read a bit more about it here: https://mcognetta.github.io/posts/blunder-clinic/
[1]: Maia (https://www.maiachess.com/) is a family of chess models trained on real games. The inputs are a board position and a player rating, and the output is a probability distribution over moves. You can use this to answer queries like "How likely is a player of XYZ rating to pick the best move here?"
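As a sketch of that query, the code below hard-codes a plausible Maia-style output (a move distribution in UCI notation) in place of real model inference; `maia_policy` is a hypothetical stand-in, not Maia's actual API:

```python
# Sketch of the "how likely is a player of rating R to find the best move"
# query. maia_policy() is a hypothetical placeholder for a real Maia
# inference call; here it just returns a fixed, plausible distribution
# over moves (UCI notation -> probability) for the starting position.

def maia_policy(fen: str, rating: int) -> dict[str, float]:
    # A real implementation would run the Maia network for this rating.
    return {"e2e4": 0.35, "d2d4": 0.30, "g1f3": 0.20, "c2c4": 0.15}

def p_best_move(fen: str, rating: int, best_move: str) -> float:
    """Probability that a player of this rating plays the engine's
    best move, read straight off the model's move distribution."""
    policy = maia_policy(fen, rating)
    return policy.get(best_move, 0.0)

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(p_best_move(start, 1500, "e2e4"))  # 0.35
```

If Stockfish's best move has high probability under the distribution for your rating, the position is learnable; if it has near-zero probability, it's an engine-only move.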