"I 'm high on my own supply, why aren't more people as well?"
But the hallucination problem is pretty bad; I've had it recommend books that don't actually exist, etc.
When using it for studying languages, I've seen it make silly mistakes and then get stuck in the typical "You're absolutely right!" loop. The same thing happens when I've asked it how to do something with a particular Python library that turns out not to be possible with that library.
But it seems the LLM is unable to just tell me it's not possible, so instead it goes round and round generating code that doesn't work.
So yeah, it has some uses, but it feels a long way off from the revolutionary panacea it's being sold as, and issues like hallucinations are so innate to how LLMs function that it may not be possible to solve them.
latexr•2mo ago
Reminded me of Sam Altman, who recently lamented that conversations on the web are filled with bots. Who could’ve predicted that?!