As long as someone isn’t plagiarizing or publishing falsehoods that undermine institutional or industry standards and norms, prompt-engineered, AI-composed code and academic writing should be acceptable.
Depending on the knowledge domains in question, LLMs have access to far more information than any group of people combined. If you have access to foundation models fine-tuned on in-house or proprietary data, those models are still more knowledgeable than any group of people in your cohort, unless you’re after specific lived experience that can drive decision-making.
I don’t like the industrial-scale hypocrisy of gearing up for an “AI revolution” while simultaneously punishing its legitimate use.
Improper use, in my view, is uncritically putting out falsehoods or invalid information.
If you’re using AI for academia, write your paper with it, as long as you go over every line, make sure nothing is plagiarized, check that it makes sense within the current state of knowledge, and back up any new claims with verifiable quantitative or qualitative evidence.
If you’re using AI for code, go over every line and confirm it works as expected (via TDD, for example).
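As a minimal sketch of what that review step could look like (the function and test names here are hypothetical, not from any particular project), you write the tests against the spec and only then accept the generated implementation if they pass:

    # Hypothetical example: vetting an AI-generated helper with tests written first.
    import unittest

    def normalize_whitespace(text: str) -> str:
        # AI-generated implementation under review:
        # collapse runs of whitespace into single spaces and trim the ends.
        return " ".join(text.split())

    class TestNormalizeWhitespace(unittest.TestCase):
        def test_collapses_internal_runs(self):
            self.assertEqual(normalize_whitespace("a  b\t c"), "a b c")

        def test_strips_leading_and_trailing(self):
            self.assertEqual(normalize_whitespace("  hello  "), "hello")

    if __name__ == "__main__":
        unittest.main()

Because the tests encode the intended behavior rather than whatever the model happened to produce, they give the line-by-line review something concrete to check against.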
The human element is necessary for going beyond what’s currently known, for quality assurance, and for incorporating an understanding of uniquely human qualities that our automated systems currently lack and may never possess.
What’s wrong with this way of thinking?
bigyabai•1h ago
Even if you're critical, most AI users won't catch these kinds of mistakes.