Skimming the article, this seems like another case of the explainability problem, no? The conversation with the LLM makes the results "easier to understand" (which is a requirement for real use cases) but loses accuracy? Still, it's good to have more studies confirming this tradeoff.
Cynddl•1h ago
Co-author here and happy to answer questions!