My view is that university classes should be taught in such a way that students can use AI as much or as little as they desire in order to learn the material. Evaluation should primarily be done in the classroom without access to AI. 90% of the grade in my undergraduate course comes from in-person exams. I don't care how they learned the material. This can be a problem for composition classes, for instance, but the problem existed long before the chatbots.
> AI is actually a giant material infrastructure with huge demands for energy, water and concrete, while the supply chain for specialised computer chips is entangled with geopolitical conflict. It also means that the AI industry will beg, borrow and steal, or basically just steal, all the text, images and audio that it can get its spidery hands on.
Sure. We don't know yet how the economics will play out. We don't know the actual cost of LLM and other AI services; we only know what companies are currently charging for them, and since they're competing for mindshare, the prices are most definitely being held low. To a large extent, the whole thing has demonstrated what can be done in the short term in the absence of copyright restrictions, and now we have to see the long-term effects of that removal.
I agree with many of the points in the article but don't understand how that turns into a recommendation to "resist".
The second phase being copyright claimed on the model itself and its derivative works, paradoxically extending copyright to things it couldn't cover before the black box, and allowing use only by those who own the models?
Initially things are always rosy; then they get cut back to make a profit and create moats.
One of my favourite uses of LLMs is as a reverse dictionary, for example:
Give me one Saxon and one Romance word meaning "to write".
Saxon (Germanic origin): scratch — Old English scrætan, linked to marking or incising.
Romance (Latin origin): inscribe — from Latin inscribere, "to write on/in."
Genius!
Or you just ask the damn AI that has gone through the useless corpus of the ad-ridden web, infested and promoted by VCs, and somehow magically, through a lot of effort, math, and 150 gigakilowatts of electricity, has extracted the piece of info you want and simply gives it to you with a bit of annoying fluff.
My time is precious, and I want to see the useless web burn.
I'm guessing from your two-paragraph reverse-Unabomber manifesto above that your time is not that precious.
It's wild that people don't see that LLMs are following the same playbook as streaming etc. and in time will predatorily monetize in every way possible. If you think people are trapped as customers because they can't do without TV shows, imagine five years from now, when what people have become dependent on the tech giants for is general thinking itself.
And the LLM doesn't completely remove the "burden" of reading the dictionary to make sure the meaning is indeed fitting, but it shortcuts the discovery by a lot. It also helps with learning new words, lol. I see it as a supercharged thesaurus.
IMHO this applies to all general research: one needs to be an utter monkey to copy LLM-generated references without checking them first, so if anything, it trains critical thinking for free.
This isn't all that new, given that it's a play on a Jobs quote about computers. And it's regular old software that both unleashed creativity and created social media brainrot.
The AI algorithms aren't the problem, it's how they're primed, marketed, and used.
There's absolutely nothing stopping us from releasing a bot that's great at looking stuff up and citing sources, but when asked to write an essay or make a decision for you, declines because that's not its job.
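Nothing more than a default system prompt and a product decision, really. Here's a minimal sketch of what that could look like, assuming an OpenAI-style chat API; the model name, prompt wording, and helper function are placeholders I'm making up, not any existing product:

```python
# Hypothetical sketch: a "look things up and cite sources only" assistant
# built on an OpenAI-style chat API. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

LOOKUP_ONLY_PROMPT = """\
You are a reference assistant. You look things up, quote sources, and cite them.
If the user asks you to write an essay, make a decision for them, or otherwise
do their thinking for them, politely decline and say that is not your job.
Include citations for every factual claim."""

def ask(question: str) -> str:
    # Single-turn call; the system prompt carries the "decline" policy.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": LOOKUP_ONLY_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Who coined the phrase 'bicycle for the mind', and where?"))
```

A system prompt is obviously a soft constraint that a determined user can talk their way around; the point is only that a look-it-up-and-cite default is a product choice, not a technical impossibility.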
The Just Eat of the mind ;)
> yes, but viruses existed long before it
You can find old surveys asking university students how often they cheat. Let's just say they don't paint a positive picture.
I grant that I have no evidence for this claim, but I don't see how it's reasonable to teach a subject with access to such a powerful tool and then remove that tool to assess what the student has learned. My primary uses for LLMs, limited as they may be, are explicitly for things I don't care to know, and I find it hard to see how ChatGPT is going to help me learn anything in a way where my understanding and use of that knowledge don't hinge directly on continuing to have access to it. And, more broadly, there's reason to expect that the student will have access to it after the class ends, so it runs up against that old axiom that school is meant to prepare you for working life.
My math classes never interested me; I did the work on calculators whenever possible. Sure, I have decent mental math skills, but I still pull out a calculator (app) for everything because... my meat brain just isn't as good at this task as the silicon one. Not only does every smartphone in existence have one, but if you really don't want a touchscreen version, they can be had at any retailer in America for like $5-10.
AI can be used in ways that lead to deeper understanding. If a student wants AI to give them practice problems, or essay feedback, or a different explanation of something that they struggle with, all of those methods of learning should translate to actual knowledge that can be the foundation of future learning or work and can be evaluated without access to AI.
That actual knowledge is really important. Literacy and numeracy are not the same thing as mental arithmetic. Someone who can't read literature in their field (whether that's a Nature paper or a business proposal or a marketing tweet) shouldn't rely on AI to think for them, and certainly universities shouldn't be encouraging that and endorsing it through a degree.
I think the most important thing about that kind of deeper knowledge is that it's "frictional", as the original essay says. The highest-rated professors aren't necessarily the ones I've learned the most from, because deep learning is hard and exhausting. Students, by definition, don't know what's important and what isn't. If someone has done that intellectual labor and then finds AI works well enough, great. But that's a far cry from being reliant on AI output and incapable of understanding its limitations.
> AI can be used in ways that lead to deeper understanding.
> all of those methods of learning should translate
Shouldn't be, can be, should. How can we assess if a student has used AI "correctly" to further their understanding vs. used it to bypass a course they don't believe adds value to their education?
> Someone who can't read literature in their field (whether that's a Nature paper or a business proposal or a marketing tweet) shouldn't rely on AI to think for them
That's exactly what tons of pro-AI people are doing. There's an argument to be made that that's the intended purpose of the tool: artificial intelligence, sold on the promise of augmenting your own mental acuity with that of a machine. Well, what if you're a person who doesn't have much acuity to augment? Like, it's mean, but those people exist.
Isn't this basically the paradigm of a closed-book exam? I personally use LLMs for learning by treating them like a textbook or Wikipedia article I can ask follow-up questions to.
Though to be clear, I am disappointed with the experience about 50% of the time.
Tangent: I've never thought exams should be anything but in-person, but I've also never thought they should be so heavily weighted towards one or two lucky days, not that that's necessarily what you're suggesting. I recall failing my data structures and algorithms midterm, which largely consisted of writing syntactically correct Java by hand, mostly because exams don't really provoke a sense of panic in me. The three evening hours the course occupied didn't turn out to be prime productivity time, so I just kind of got bored and zoned out, since I knew it didn't really matter outside the scope of grades. I think I ended up with a C or something after getting a second shot at the final.
I'd later learn I have ADHD, but there were numerous courses where my profs told me they were straight-up disappointed I failed so hard, since I evidently stood out as the most engaged person in the classroom, handled the course material and assignments just fine, and had come back to school after being a paid developer for years, by then in my late twenties. I have no idea how doctors who clearly have a similar type of attention make it through med school; maybe it's just difficult enough to stay engaged.
There's nothing I can do to provoke a sufficient stress response in an exam environment, and I've basically written it off as a thing of the past: it comes down to a dice roll whether the exam is engaging enough, I get a good night's sleep, or any number of other uncontrollable variables work out in my favor. Ironically, a persuasive essay in a history class turned out to be perfect.
In some sense it does scare me a bit, this prospect of more heavily weighted analog exams, but I don't really see much of a way around it, as long as we keep accepting that grades and academic performance are a sufficient measure of something worth measuring, rather than the somewhat arbitrary filtering mechanism they became. If my career in software fails, I might have to re-enter a system that's even more stacked against me than it was, unless I go into a hands-on trade, presumably.
Yeah. I took some of those classes (they were more common back then) and didn't feel they were a great measure of how much I knew. I give four exams. The students will have seen related questions on the homework and in lecture before taking the exams. Anyone who's actually been learning the material will find the exams easy, and those who use AI, get the answers from someone else, or use whatever other method to get the homework points will be lost on the exam. At least that's my goal. Teaching is definitely an imperfect art.
The author is a kind of free-rider: he persists in his position having read just enough continental philosophy and memorized the right incantations (buzzwords) to communicate the "aligned" political subtext: academic left-wing jargon. You have to be against neoliberalism, "tech broligarchy", Palantir (the military-industrial complex), technogenesis... Oh, and you have to shoehorn as many race-politics subjects as possible into your article on AI, even if they have virtually nothing to do with the topic (the KKK, white genocide, racial superiority, supremacy, eugenics). Points for creativity, I guess.
Sadly, this article, or let's be honest, this rant, is not a contribution to that.
I think the author presents a one-dimensional "AI bad" view and fails to see the bigger picture, which is ironic considering all the fine words he uses.
I agree that AI tools can potentially weaken some of our lower-level cognitive functions, but on the flip side, they also enable us to operate at higher levels of ability, planning, conceptualization and execution.
This is undoubtedly a point of inflexion for Universities: they should be working out how they can achieve a new deal for students and society that is far more nuanced and constructive than mere 'resistance' against AI.
The real problem for universities is this: much of what classical academia claims is important is not all that hard for an LLM. Writing "compare and contrast" student papers, research that consists of digging through existing texts and summarizing, and writing in a formal style are all things LLMs do. Probably better than most undergraduates.
This shakes the philosophical foundations of academia. What are universities for now? Job training? Sorting the winners from the losers? Something else?
Are there any concrete examples of this? Did any researcher, engineer, artist, etc. come forward and say: "Yes, this work you esteem so much was created by me with the help of AI"?
To me, your words sound more like wishful thinking than a description of the current situation. I'm willing to be set right, though.
>The hegemonic narrative calls for universities to embrace these tools as a way to revitalise pedagogy, and because students will need AI skills in the world of work. A major flaw with this story is that the tools don't actually work, or at least not as claimed.
>AI summarisation doesn't summarise; it simulates a summary based on the learned parameters of its model. AI research tools don't research; they shove a lot of searched-up docs into the chatbot context in the hope that will trigger relevancy. For their part, so-called reasoning models ramp up inference costs while confabulating a chain of thought to cover up their glaring limitations.
If AI tools do not actually work, how are students able to cheat with them? It seems like that would be a problem that would solve itself - a student would attempt to use AI to cheat, it would fail to complete the assignment, and the student would get a bad grade.
Cheating doesn't have to work for it to be cheating.
If you get caught robbing a bank that doesn't un-rob the bank.
We have police and prisons yet people still commit crimes.
Beyond written work, more of what universities examine should be students actually standing up and speaking about their work (without a machine assisting them).
You can't say both that AI produces worse results and that it will be used to manipulate the job market: savvy companies would outcompete by not adopting AI and by hiring up the victims of AI layoffs. If either of his statements is true, the other is false.
This whole article, man. I don't know where to start with it. It definitely reminds me of grad school. In a bad way.
The university has been on a glide path toward irrelevance for quite a while—long before AI was a going concern—and the humanities and social sciences, in particular, have been skimming the treetops since at least when I was in school at the turn of the century. The role of the university is to teach and do research. AI can be a tremendous asset for both of those, and it's not going away, so deal with that reality.
When I was in university, I did finance + economics + a bunch of other random stuff from CS to archaeology to philosophy.
One subject that was interesting from a technology standpoint was statistics. I went through university at a point when ML was a thing but LLMs obviously weren't. R was a thing, Python was beginning to get popular in the domain; you potentially had all sorts of tech to help with stats.
Introduction to stats: no technology was allowed. Every single problem was done by hand. Every single quiz and test: no calculators, no multiple choice, just problems to work through by hand. If you cheated on assignments, you'd obviously fail the tests (which were >50% of the course). Problem solved. We had to learn without aids. In the second stats course, everything was allowed. I did all my assignments in R. The point was simply learning: first the theory, then how it's done in the real world.
University absolutely should teach all the theories, concepts and history before AI. And then it should also teach how to use AI, since it's a thing in the real world.
People just need to stop thinking about university as all about grades and check marks, and learn to learn.
So university cannot effectively resist AI without resisting these ideas first. I hope it can be done.