cpldcpu•1h ago
Here you can see how I prompted it. I provided a similar paper (also generated with Claude Opus) as an example, but Opus took it from there:
dekhn•5h ago
The article is basically saying that the authors of the Stochastic Parrots paper (and their adherents) assume that humans have some sort of special magic that cannot be simulated with machine learning, and then keep moving the goalposts whenever language models reach human-level performance on a task.
Congratulations to the authors for making a great joke paper.