The process for me usually goes like this: I go down a path with an LLM and get excited at the prospect of using it as a tool. In the middle, I spot an error. When I question it, the LLM responds with a contradiction in the same conversation. I realise the last 5 minutes of dialogue were all incorrect, and I get frustrated that it wasted my time. They're selling it as intelligent, but where is the intelligence? To me it's more akin to a search engine that, instead of showing me a full article, returns an excerpt of text instead. Useful... maybe?
I know people use this in their work daily, so I'm curious: how do they do it? Do they just ignore the glaring errors like they aren't there? I know this sounds sarcastic, but I really want to know, because especially in my own projects I find it hard to move on from, not 'perfection' per se, but anything other than bad.
Thank you in advance, and looking forward to other opinions as well.