I'm a prof, and my experience so far is that - where AI is concerned - there are two kinds of students: (1) those who use AI to support their learning, and (2) those who use AI to do their assignments, thus avoiding learning altogether.
In some classes this has flipped the bell curve on its head: lots of students at either end, and no one in the middle.
Perhaps the worst aspect of LLMs is they can support us in our “productivity,” when we’re actually trying to avoid the hard work of thinking. This sort of technology-assisted self-delusion feels very similar to social media: “I’m not hiding from life and people, I’m talking to them on Twitter!”
It's absolutely a backstabbing saboteur if you blindly trust everything it outputs.
Inquisitive skepticism is a superpower in the age of AI right now.
That's a very unhelpful way of looking at it. It solves nothing.
ChatGPT has created a problem, and we need to look for ways of solving it. Unregulated technology is harmful; we need regulations that limit the bad use cases.
To just blame people so that, let's be real, your shares in big tech corporations go up is destructive for society. These tools, like social media, are harming people and should be regulated to work for us. If they cannot be, then shut them off.
Actual, useful AI is a disruptive technology, just as the automobile was. Trying to regulate use cases is the wrong solution. We need to find out how this technology is going to be integrated into our lives.
Learning to drive a car at school would do a whole lot more good!
There will always be people that try to outsmart everyone else and not do the work. The problem here is those people, nothing else.
That applies here, with these academic-cheating scenarios: it's not just an incrementally cheaper and more convenient way to hire somebody to write your paper, something you could already do for decades.
They're not trying to give you facts that solve things, they're just giving you facts, as they see it. You can do with that info what you will.
Go back to pen-and-paper examinations at a location where students are watched. Do the same for assignments and projects.
Unfortunately, most often the cure is worse than the poison.
And your response is that we need regulations??
Institutional policy, changes to lesson practices, covering the risk of wasting your education in intro materials... sure!
But the state is not your parent, and it's certainly not mine. Geesh.
This is a prime example of thinking exclusively along the lines of rugged individualism. It assigns all blame to the individual, whilst ignoring any systemic or collective causes.
It ignores the socio-economic realities of the students, especially if they come from a challenged background. To them the important thing is getting the high-paying job that represents a ticket out of the lower class, and if that can be optimized, it's a no-brainer that they would take that route.
It ignores the fact that the actual credential paper matters more to recruiters than the knowledge gained through the program, or that networking and referrals carry much more weight in recruiting than raw skill, more than our meritocratic self-perception would like to admit.
It ignores the fact that maybe the module itself is not that valuable? We're talking about the US here, and people literally pay out of pocket for education. And yet they cheat/skip it in a heartbeat. The only valid rationale is that there is no value there from an economics lens. They'd rather spend that time doing extracurricular activities that actually improve their chances of getting employed.
It ignores the fact that since the industrial revolution the education system has not evolved at all (merely adding a computer lab does not mean the system was reworked, it's the other way around, the new technology was adapted into the existing system).
The education system has flaws. The incentives in the job marketplace have flaws. There are many factors at play here, and simply arguing that "it's the student's fault" is the equivalent of an ostrich sticking its head in the sand.
Some people somehow think that having more while working less is an act of resourcefulness. To some extent it may be, because we shouldn't work for work's sake, but making "working less" a life goal doesn't seem right to me.
I don't think these students necessarily would have bought.
I think it’s quite clear that most students who are using AI now to generate assignments would not have bought.
> Some of the sections were written to answer a subtly different question than the one we were supposed to answer. The writing wasn’t even answering the correct question.
This is my absolute biggest issue with LLMs - and it makes for really weak writing.
It is like two concepts are really close in latent space, and the LLM projects the question to the wrong representation.
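A toy way to see this with off-the-shelf sentence embeddings (this is only an analogy for whatever the model does internally; the sentence-transformers library, the model name, and the two example questions below are my own illustration, not anything from the article):

    # Toy sketch: two subtly different questions can sit very close together
    # in embedding space, which is roughly the failure mode described above.
    # Assumes the sentence-transformers package; the model name is just a
    # common small default.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    q_asked  = "Explain why the algorithm is correct."  # the question that was asked
    q_nearby = "Explain how the algorithm works."       # the question that gets answered
    emb = model.encode([q_asked, q_nearby])

    # Cosine similarity is typically very high here: the two asks are easy to conflate.
    print(util.cos_sim(emb[0], emb[1]).item())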
Also, having your entire semester spoiled by a few incidents caused by random passersby? Come on, university is for growing up, so start.
Worse yet: in 10% of the cases you'd get some clueless but very opinionated student wanting to be "manager" and "editor in chief", contributing nothing but bossing everyone around.
And yes, in 10% of the cases, you would have another student who is actually smart and helpful.
So is it better to have LLM slop rather than nothing at all? Probably not, but it is not like those people would have turned in good contributions otherwise.
skeaker•2d ago
tossandthrow•2d ago
Previously you could write a lot of text that was wrong, but claim that you at least tried.
Now we need to get used to putting "they contributed AI slop" in the same category as "they did not contribute anything at all."
This is one of the reasons why juniors will vanish - there is no room for "I tried my best".
Edit: clarity
augment_me•2d ago
When I get obvious LLM hand-ins, who am I correcting? What is the point? I can say anything, and the student will not be able to integrate it, because they have no agency over or connection to "their" work. It's a massive waste of everyone's time; I have another five students in line who actually have engaged with their work and who will be able to integrate feedback, and their time is being taken by someone who will not be able to understand or use it.
This is the big difference. For a bad student, failing and making errors is a way to progress and integrate feedback; for an LLM-student, it's pointless.
what-the-grump•2d ago
LLMs allow me to tackle stuff I have no business tackling because the support from the LLM for the task far exceeds google / stack overflow / [insert data source for industry or task].
Does the concept sink in? Yes and no; I am moving too fast most of the time to retain the solution.
When the task is complex enough and the LLM gets it wrong, oh boy is it educational: not only do I have to figure out why the LLM is wrong, I now have to correct my understanding and learn to reason against it.
I was a very bad student: most of the classes didn't make sense to me, bored me out of my mind, and I failed a lot. Do I ever feel that way when talking to ChatGPT about a task I have no idea how to solve? No, and guess what, we figure it out together.
Another data point: my English writing has improved by using ChatGPT to refactor and reformat it; I see more examples and mostly correct English structure. Over time stuff sinks in even if you are not writing it yourself; you are still reading it and editing it.
Let's take code for a minute: is it easier to edit someone else's code or your own? So everyone who has to dive deep into troubleshooting ChatGPT's code is somehow dumb or lazy? I don't think so; they are at least as smart as the code.
What would happen if we made a curriculum around using ChatGPT? How far would I get in chem 1 if I spent 90 minutes with ChatGPT prompts prepared by a professor and a machine that never gets tired of explaining and rephrasing until I get it?
augment_me•2d ago
What I am describing, and what I and many colleagues have run into, are students who are not engaged with or motivated by their work because there is a path of much less resistance, and who are using LLMs to pass through learning moments with minimal effort.
When you choose to edit the LLM's code yourself instead of feeding it all back with the added prompt "it does not work, fix it", you have already made a choice about learning.
Edit: I do agree on the curriculum change. However, there is a time window now, before any consensus on the new ways of learning and any political action from the universities, in which learning is to a much higher degree in the hands of the students rather than the universities. And this power can cut both ways to a greater extent than before, when the university was in control of it.