A brilliantly told story that painfully exposes the problem: how can we trust code and configuration reviews when experts are overruled in favor of a ChatGPT answer? I agree it is crucial to close this 'knowledge gap' between what developers think they understand and the actual state of things.
You mentioned the need to create new guidelines for non-technical people using AI. Do you see a chance of LLMs themselves being equipped with more effective 'guardrails' in the future? Specifically, guardrails that, on critical security questions, would require the user to provide references (e.g., a link to the official documentation) before the model delivers a categorical verdict?
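
Just to make the idea concrete, here is a minimal sketch of what such a check could look like. Everything in it is hypothetical (the keyword list, the function names, the wording of the downgraded answer); it is not drawn from any existing guardrail framework, only an illustration of "no reference, no categorical verdict":

```python
import re

# Hypothetical guardrail: refuse a categorical security verdict
# unless the user has supplied at least one reference link.

SECURITY_KEYWORDS = {"firewall", "iptables", "tls", "cve", "sudoers", "selinux"}
URL_PATTERN = re.compile(r"https?://\S+")


def requires_reference(user_prompt: str) -> bool:
    """True if the prompt touches a security-critical topic
    but contains no supporting reference link."""
    mentions_security = any(k in user_prompt.lower() for k in SECURITY_KEYWORDS)
    has_reference = bool(URL_PATTERN.search(user_prompt))
    return mentions_security and not has_reference


def guarded_reply(user_prompt: str, model_answer: str) -> str:
    """Downgrade a categorical answer to a provisional one when no reference is given."""
    if requires_reference(user_prompt):
        return (
            "Provisional answer only. Please provide a link to the official "
            "documentation so this can be verified:\n\n" + model_answer
        )
    return model_answer


if __name__ == "__main__":
    prompt = "Is this iptables rule safe to deploy in production?"
    print(guarded_reply(prompt, "Yes, this rule is safe."))
```

Obviously a real implementation would need something far better than keyword matching, but even a crude gate like this would change the tone of the answer a non-expert carries into a review.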
starlight1980•1h ago