My solution to this problem is to document everything I can think of about the library. Maybe if enough coherent material makes it into future LLM training sets, they will (eventually) stop hallucinating such garbage code for people trying to use the library. I also used this new documentation, alongside some structured investigations of the library with two LLM products, to produce a summary document[1] which users can feed into an LLM session to at least correct the worst assumptions current LLM iterations make about the library.
I don't expect to see any useful results emerge from this work for at least 12-18 months. But the work has to be done. I think this sort of documentation is yet another burden on the solo developer working on a pet side-project in the open-source space ... but I see no other solution for the radically changed developer environment we suddenly face.
[1] - https://github.com/KaliedaRik/Scrawl-canvas/blob/v8/LLM-summ...