How about this: LLMs do away with the pointless, painful, and thankless brute-forcing of huge databases of physical papers to find the key piece of information you want to check something against. Teaching that that is what academic study consists of leads to the misconception that writing enough about something makes it valid. A serious problem in many areas of newer academic production.
I'd say this opens research to a new audience who otherwise wouldn't go near it, but frankly we saw what damage the internet did to society after people said this in the 90s, so yes, maybe put it back in the box...
Isn't ignoring the incredible good the Internet has done exactly what you're advocating against with regards to LLMs?
Yes, new technology that impacts fundamental aspects of our society will have negative side effects that we will have to adapt to and solve - but on the whole, I think the Internet has produced far more good in the world than otherwise. We'll see how LLMs pan out, but I suspect it'll play out similarly.
I tend to think that universities will be OK, even in subjects that need only a mind and maybe a computer, like mathematics and computer science.
The thing is, people still went to school. Why is that, do you think?
Apart from that: coercion and credentialism, where the latter could be provided by testing alone.
If AI is the equivalent of a college instructor, just deploy AI everywhere instead of wasting time and energy teaching humans, a fairly limited resource?
> Deep learning has historical and epistemological connections to eugenics through its mathematics, its metrics and through concepts like AGI, and we shouldn't be surprised if and when it gets applied in education to weed out 'useless learners'.
It might be partially correct but this is similar to saying Germans should not be trusted because of WWII.
(sad face) Posts like this detract from the valid arguments against the use of AI tools in certain scenarios, because they lead folks to have a knee-jerk reaction and label the authors as Luddites.
Frankly an LLM would have done a better job and been more succinct.
Although universities are certainly against cheating, the responsibility has always been on the student not to cheat. Universities do not oppose useful technologies simply because they may be misused for cheating.
Put another way, the role of a university is to "discover and invent the future." In this light, universities will be more interested in developing AI than so-called "resisting" it. This is especially since it has already yielded breakthroughs in science, e.g. a Nobel Prize in Chemistry for protein prediction.
There are no guardrails to LLMs. They remove friction from tasks that used to require critical thinking. We're constantly pressured into using them. I think only blaming the end-user is naive.
LLMs curb independent thinking (links to some recently published n=5 article)
LLMs reduce wages (wrong!!)
LLMs cause environmental damage (wrong!)
Add in some vague leftist jargon about decolonisation and you have your standard "LLMs bad" article. There is, I admit, a nugget of truth in each criticism, and I hope we can explore it from an unbiased angle.
After thinking about it, I have a theory on the real reason some people have a bad opinion of LLMs.
rob_c•2h ago
This is like saying mathematics should avoid the calculator.
Don't be so naive: the only thing current AI models can do is perform more powerful contextual lookups and referencing, and, in the case of chatbots, hallucinate when this fails.
These tools have no agency; people who blindly trust them beyond this are the highest of fools and don't understand garbage in, garbage out.
The role of a university is to train in the use of tools and that includes the squishy one used for critical thinking. If the university wasn't doing that before now it wasn't doing its job.
Not this pseudo-intellectualism and political opinion-posting.