This article feels to me less like a proposal and more like a capitulation - "we are being overwhelmed by garbage so we might as well embrace the garbage".
I have no objection to the "writing with LLMs" part - if you want to use fancy auto-correct, go for it. But the reviewing part feels just wrong: we've seen evidence over and over that people use AI as a shortcut for thinking, and yet the proposal argues that not using AI will "produce worse outcomes" because the reviewers will somehow hold themselves to the high standards they are (allegedly) not currently following.
The author then argues that the ideal application of AI is to answer questions like "Does the reasoning hold? Are the proofs correct? Are claims consistent with artifacts?", while himself admitting that "as of yet they cannot offer strong guarantees about correctness" - but that's okay, because it's immediately followed by the well-known AI mantra "I believe this will change in the future".
Final score: 2.5 - I'd rather not see it at this conference.
Disclaimer: This comment was not written with the help of AI.