I'm a prof, and my experience so far is that, where AI is concerned, there are two kinds of students: (1) those who use AI to support their learning, and (2) those who use AI to do their assignments, thus avoiding learning altogether.
In some classes this has flipped the bell curve on its head: lots of students at either end, and no one in the middle.
It's absolutely a backstabbing saboteur if you blindly trust everything it outputs.
Inquisitive skepticism is a superpower in the age of AI.
That's a very unhelpful way of looking at it. It solves nothing.
ChatGPT has created a problem, and we need to look for ways of solving it. Unregulated technology is harmful; we need regulations that limit the bad use cases.
Just blaming people so that, let's be real, your shares in big tech corporations go up is destructive for society. These tools, like social media, are harming people and should be regulated to work for us. If they can't be, then shut them off.
Actual, useful AI is a disruptive technology, just as the automobile was. Trying to regulate use cases is the wrong solution. We need to find out how this technology is going to be integrated into our lives.
There will always be people who try to outsmart everyone else and not do the work. The problem here is those people, nothing else.
Some people somehow think that having more while working less is an act of resourcefulness. To some extent it may be, because we shouldn't work for work's sake, but making "working less" a life goal doesn't seem right to me.
I don't think these students necessarily would have bought.
> Some of the sections were written to answer a subtly different question than the one we were supposed to answer. The writing wasn’t even answering the correct question.
This is my absolute biggest issue with LLMs, and the quote puts it really well.
It's as if two concepts are really close in latent space, and the LLM projects the question onto the wrong representation.
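To make the latent-space intuition concrete, here is a minimal sketch. It assumes Python with the sentence-transformers package and its all-MiniLM-L6-v2 model; the two questions are made up for illustration, not taken from the article. The point is just that two subtly different questions can land almost on top of each other in embedding space, so a model that conditions on the nearest representation can easily answer the wrong one.

    # Minimal sketch: two subtly different questions can be nearly
    # indistinguishable in embedding space. Assumes the
    # sentence-transformers package; the model and the questions are
    # illustrative, not a claim about any particular LLM.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    q_asked    = "Why does the method converge on this dataset?"
    q_answered = "How fast does the method converge on this dataset?"

    emb = model.encode([q_asked, q_answered])

    # A high cosine similarity here means the "why" question sits
    # right next to the "how fast" question, which is one way the
    # wrong question can end up getting answered.
    print(util.cos_sim(emb[0], emb[1]))

This proves nothing about any specific model, but it is a cheap way to see how close a "why" question and a "how fast" question can sit.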
tossandthrow•38m ago
Previously you could write a lot of text that was wrong, but claim that you at least tried.
Now we need to get used to putting "they contributed AI slop" in the same category as "they did not contribute anything."
This is one of the reasons juniors will vanish: there is no room for "I tried my best."
Edit: clarity
augment_me•11m ago
When I get obvious LLM hand-ins, who am I correcting? What's the point? I can say anything, and the student will not be able to integrate it, because they have no agency or connection to "their" work. It's a massive waste of everyone's time: I have another five students in line who have actually engaged with their work and who will be able to integrate feedback, and their time is being taken by someone who will not be able to understand or use it.
This is the big difference: for a bad student, failing and making errors is a way to progress and integrate feedback; for an LLM student, it's pointless.