If students using AI to cheat on homework are graduating with a degree, then it has lost all value as a certificate that the holder has completed some minimum level of education and learning. Institutions that award such degrees will be no different from the degree mills of the past.
I’m just grateful my college degree has the year 2011 on it, for what it’s worth.
It's a bit like an audio engineer setting up your compressors and other filters. It's not difficult to fiddle with the settings, but knowing what numbers to input is not trivial.
I think it's a kind of skill that we don't really know how to measure yet.
None of this is true of an LLM. I believe there’s a little skill involved, but it’s nothing like tuning the pass band of a filter. LLMs are chaotic systems (they kinda have to be to mimic humans); that’s one of their benefits, but it’s also one of their curses.
Now, what a human can definitely do is convince themselves that they can somewhat control the outputs of a chaotic system. Rain prognostication is perhaps a better model of the prompt engineer than the audio mixer.
That's a "black box" problem, and I think they are some of the most interesting problems the world has.
Outside of technology, the most interesting jobs in the world operate on a "black box". Salespeople and psychologists are trying to work on the human mind. Politicians and market makers are trying to predict the behavior of large populations. Doctors are operating on the human body.
Technology has been getting more complicated, and I think that distributed systems and high-level frameworks are starting to resemble a "black box" problem. LLMs even more so!
I agree that "prompt engineer" is a silly job title, but not because it's not a learnable skill. It's just not accurate to call yourself an engineer when consuming an LLM.
You hire humans to help train AI and when done you fire humans.
But how do you correct it if you do not know what is right or wrong...
You keep human employees and require them to use the LLM so that it gets corrected all the time from their input. Then you fire them.
You can't put the toothpaste back into the tube. Universities need to accept that AI exists, and adjust their operations accordingly.
You could sit down at a workstation with all the tools you might need to test your skills. :)
Provide an incentive for students to do the thing they should be doing anyway.
Give an opportunity to provide feedback on the assignment.
It is totally useless as an evaluation mechanic, because of course the students that want to can just cheat. It’s usually pretty small, right? IIRC when I did tutoring we only gave like 10-20% for the aggregate homework grade.
Realistically I think the more common case is to think you know the material, skip studying, and then faceplant on the test. Homework should help self-correct.
But yeah, I could see it being annoying if you really do already know the material.
IMO the ideal class would be 4 or so students working together on a bespoke project, with weekly check-ins with some grad student teaching assistant. The goal would be to do something interesting and new. Of course nobody ever has enough teaching staff for that kind of thing.
Before college, when I was a kid, I just had the textbooks, so I read the chapters and did the assignments… it was much better than sitting and listening in lectures, then doing some small assigned subset of the problems…
Sounds laughably naive now, doesn’t it?
For most subjects at the university level graded homework (and graded attendance) has always struck me as somewhat condescending and coddling. Either it serves to pad out grades for students who aren't truly learning the material or it serves to force adult students to follow specific learning strategies that the professor thinks are best rather than giving them the flexibility they deserve as grown adults.
Give students the flexibility to learn however they think is best and then find ways to measure what they've actually learned in environments where cheating is impossible. Cracking down on cheating at homework assignments is just patching over a teaching strategy that has outgrown its usefulness.
I agree that something will have to change to avert the current trend.
I have had so many very frustrating conversations with full grown adults in charge of teaching CS. I have no faith at all that students would be able to choose an appropriate method of study.
My issue with the instruction is the very narrow belief in the importance of certain measurable skills. VERY narrow. I won’t go into details, for my own sanity.
I think they would really benefit learning how to work a full day and develop some work life balance.
As an example, I was a university student in Canada ~15 years ago. I lived with my parents, driving 30 minutes each way to attend classes. I had car insurance, gas, a cell phone, tuition, parking and books to pay. Tuition was costing $6000 a year over 5 years. Being in humanities, I chose my own course schedule. I would often have classes 8 a.m. to 2 p.m. Monday through Thursday. I would work nights and weekends, 25-33.5 hours most weeks. Most part-time employment worked around student hours and allowed some flexibility. Once I graduated and had a full-time salary position, I had much more free time and struggled to not feel lonely in filling up that time.
That is their problem, not your problem. You're not their nanny.
So I'm perfectly happy with a system of higher education that strongly rewards this behaviour.
I have the opposite experience - the best professors focused on homework and projects and exams were minimal to non-existent. People learn different ways, though, so you might function better having the threat/challenge of an exam, whereas I hated having to put everything together for an hour of stress and anxiety. Exams are artificial and unlike the real world - the point is to solve problems, not to solve problems in weirdly constrained situations.
College students still cram and purge. Nobody forced to sit through OChem remembers their Diels-Alder reaction except the organic chemists.
College degrees probably don't have as much value as we've historically ascribed to them. There's a lot of nostalgia and tradition pent up in them.
The students who do the best typically fill their schedule with extra-curricular projects and learning that isn't dictated by professors and grading curves.
This is not related to "AI", but I have an amusing story about online cheating.
* I have a nephew who was switched into online college classes at the beginning of the pandemic.
* As soon as they switched to online, the class average on the exams shot up, but my nephew initially refused to cheat.
* Eventually he relented (because everyone else was doing it) and he pasted a multitude of sticky notes on the wall at the periphery of his computer monitor.
* His father walks into his room, looks at all the sticky notes and declares, "You can't do this!!! It'll ruin the wallpaper!"
It’s the same old story with a new set of technology.
What's your answer? Surely it was proven to be "not useful"? I don't think I've ever met a person who benefitted from knowing math now that everyone has a calculator in their pocket. Other than maybe in some games, where you win if you can do the calculation on the fly.
I don't think we're living in the same world. I have met plenty of people who, despite having a calculator, can't solve their own problems because they don't know what to do with it in order to solve their problem.
^ Why many go to Harvard. Very nice club.
Just because a machine can do something doesn't mean humans shouldn't be able to do it too. Take reading a text aloud.
So in order to remain useful, the status quo of higher education will probably have to change in order to adapt to the ubiquity of AI, and LLMs currently.
Just because you can cheat at something doesn't mean doing it legitimately isn't useful.
I suspect the opposite: Known-good college degrees will become more valuable. The best colleges will institute practices that confirm the material was learned, such as emphasizing in-person testing over at-home assignments.
Cheating has actually been rampant at the university level for a long time, well before LLMs. One of the key differentiators of the better institutions is that they are harder to cheat one's way through to completion.
At my local state university (where I have friends on staff) it’s apparently well known among the students that if they pick the right professors and classes they can mostly skate to graduation with enough cheating opportunity to make it an easy ride. The professors who are sticklers about cheating are often avoided or even become the targets of ratings-bombing campaigns.
The immediate effect was the distrust of the professors towards most everyone, and lots of classes felt like some kind of babysitting scheme, which I did not appreciate.
https://solresol.substack.com/p/you-can-no-longer-set-an-und...
Students don’t seem to mind this reversion. The administration, however, doesn’t like this trend. They want all evaluation to be remote-friendly, so that the same course with the same evaluations can be given to students learning in person or enrolled online. Online enrollment is a huge cash cow, and fattening it up is a very high priority. In-person, pen-and-paper assessment threatens their revenue growth model. Anyways, if we have seven sections of Calculus I, and one of these sections is offered online/remote, then none of the seven are allowed any in person assessment. For “fairness”. Seriously.
Since endowments got huge.
A large endowment attracts greedy people who then want to make it larger, that is true regardless where you go.
So far, it’s still possible to opt out of this coordinated model, and I have been. But I suspect the ability to opt out will soon come under attack (the pretext will be ‘uniformity == fairness’). I never used to be an academic freedom maximalist who viewed the notion in the widest sense, but I’m beginning to see my error.
Those I ask are unanimously horrified that this is the choice they are given. They are devastated that the degree for which they are working hard is becoming worthless, yet they all assert they don't want exams back. Many of them are neurodivergent students who do miserably in exam conditions and, in contrast, excel in open tasks that allow them to explore, so my sample is biased, but still.
They don't have a solution. As the main victims they are just frustrated by the situation, and at the "solutions" thrown at it by folks who aren't personally affected.
This is the core of the issue really. If we are in the business of teaching, as in making people learn, exams are a pretty blunt and ineffective instrument. However since our business is also assessing, proctoring is the best if not only trustworthy approach and exams are cheap in time, effort and money to do that.
My take is that we should just (properly) assess students at the end of their degree. Spend time (say, a full day) with them but do it only once in the degree (at the end), so you can properly evaluate their skills. Make it hard so that the ones who graduate all deserve it.
Then the rest of their time at university should be about learning what they will need.
The problem with this "end of university exam" structure is that you have the same problems as before but now that exam is weighted like 10,000% that of a normal exam.
They’re kids, and they should be treated as such, in both good and bad ways. You might want to make exceptions for the good ones, but absolutely not for the average or bad ones.
People of all ages seek rewards — and assessments gate the payoffs. Like a boss fight in a video game gates the progress from your skill growth.
I'm curious: what is fulfilling in your job as a math teacher? When students learn? When they're assigned grades that accurately reflect their performance? When they learn something with minimal as opposed to significant effort? Some combination?
I always thought teacher motivations were interesting. I'm sure there are fantastic professors who couldn't care less about what grades they gave out at the end.
Many things. The most fulfilling for me is taking a student from hating maths to enjoying it. Or when they realise that in fact they're not bad at maths. Students changing their opinions about themselves or about maths is such a fulfilling experience that it's my main motivation.
Then working with students who like and are good at maths, and challenging them a bit to expand their horizons, is a lot of fun.
> When students learn?
At a high level yes (that maths can be fun, enjoyable, doable). Them learning "stuff" not so much, it's part of the job.
> When they're assigned grades that accurately reflect their performance?
Yes but not through a system based on counting how many mistakes they make, like exams do. If I can design a task that enables a student to showcase competency accurately it's great. A task that enables the best ones to extend themselves (and achieve higher marks) is great.
> When they learn something with minimal as opposed to significant effort?
Not at all. If there is no effort I don't believe much learning is happening. I like to give an opportunity for all students to work hard and learn something in the process no matter where they start from.
I only care about the grade as feedback to students. It is a way for me to tell them how far they've come.
Isn't this part of life? Learning to excel anyway?
Interviews shouldn't be "exam conditions" either. See the ten thousand different articles that regularly show up here about why not to do the "invert a binary tree on a whiteboard" style of interview.
There are much better ways to figure out people's skills. And much better things to be using in-person interview time on.
The reality is life is full of time boxed challenges.
All of life! An exam is a time boxed challenge. Sometimes it's open notes, sometimes it's not. I've had exams where I have to write an essay, and I've had exams where I've had to solve math problems. All things I've had to do in high pressure situations in my job.
Solving problems with no help and a clock ticking happens a million times per day.
We even assign grades in life, like "meets expectations" and "does not meet expectations".
Even still, you missed the point of my comment. You keep focusing on how interviews should be done, not how they're conducted in reality.
… which most people come out of 17+ years of school having done very little of, with basically a phobia of it, and being awful at it.
They are probably something like oral exams that a few universities use heavily, or the teaching practices of many elite prep schools.
[edit] oh and interviews in most industries aren’t like that. Tech is especially grueling in the interview phase.
In my adult life I had a coworker who constantly demanded that she be given special consideration in the work environment: more time to complete tasks, not working with coworkers who moved too quickly, etc. She was capable but refused to recognize that even if you have to do things in a way that don't work for you, sometimes you either have to succeed that way or find something else to do.
Today she's homeless, living out of her car, but still demands that, to be hired, she be allowed to work as slowly as she needs, with special consideration to help her complete daily tasks, etc.
We recently lived through an age of incredible prosperity, but that age is wrapping up and competition is heating up everywhere. When things are great, there is enough for everyone, but right now I know top performers that don't need special consideration when doing their job struggling to find work. In this world if you learned to always get by with some extra help, you are going to be in for a very rude awakening.
Had I grown up in the world as it has been the last decade I would have a much easier adolescence and a much harder adult life. I've learned to find ways to maximize my strengths as well as suck it up and just do it when I'm faced with challenges that target my weaknesses and areas I struggle. Life isn't fair, but I don't think the best way to prepare people for this is to try to make life more fair.
My take is that we need to tread a thin line such that we teach young people to accept that life is inherently unfair, while at the same time doing what we can as a society to make it more fair.
Agreed. Teaching that life is unfair (and how to succeed despite that) is an important lesson. But there's an object-meta distinction that's important to make there. Don't teach people that life is unfair by being unfair to them in their education and making them figure it out themselves. Teach a class on the topic and what they're likely to encounter in society, a couple times over the course of their education.
We as a society have a lot of proxies for evaluating real world value. Testing is a proxy for school knowledge. Interviews are a proxy for job performance. Trying to understand and decouple actual value from the specific proxies we default to can unlock additional value. You said yourself that you do have strengths, so if there are ways society can maximize those and minimize proxies you aren’t strong in, that is a win win.
Your coworker sounds like they have an issue with laziness and entitlement more than an issue with neurodivergence. Anyone can be lazy and entitled. Even someone who has a weakness with quick-turn production but excels at more complex or abstract long-term projects could be a value-add for a company. Shifting workloads so that employees do more of the tasks they are suited to, rather than following a more rigid system, could end up helping all employees maximize productivity by reducing the cognitive load they were wasting on tasks they were less suited for, but did just because that was the way it was always done and they never struggled enough for it to become an actual “issue”.
Why? It's a useless skill that you will literally never have to use after your schooling.
If the company asks leet code problems, I guess they are making the same mistake as schools do.
"I had to suffer so you must too."
Yes, working under pressure is a skill that should be learned. It's best to learn it on a history exam when nobody is at risk.
Less stress at the end of the term, and the student can't leave everything to the last minute, they need to do a little work every week.
The solution I use when teaching is to let evaluation primarily depend on some larger demonstration of knowledge. Most often it is CS classes (e.g. Machine Learning), so I don't really give much care for homeworks and tests and instead be project driven. I don't care if they use GPT or not. The learning happens by them doing things.
This is definitely harder in other courses. In my undergrad (physics) our professors frequently gave takehome exams. Open book, open notes, open anything but your friends and classmates. This did require trust, but it was usually pretty obvious when people worked together. They cared more about trying to evaluate and push us if we cared than if we cheated. They required multiple days worth of work and you can bet every student was coming to office hours (we had much more access during that time too). The trust and understanding that effort mattered actually resulted in very little cheating. We felt respected, there was a mutual understanding, and tbh, it created healthy competition among us.
Students cheat because they know they need the grade and that at the end of the day they won't actually be evaluated on what they learned, but rather on what arbitrary score they got. Fundamentally, this requires a restructuring, but that's been a long time coming. The cheating literally happens because we just treated Goodhart's Law as a feature instead of a bug. AI is forcing us to contend with metric hacking; it didn't create it.
if "many" are "divergent" then... are they really divergent? or are they the new typical?
Caveat: I am not ND, so maybe this is a real concern for some, but in my experience the people who said this did not know the material. And the accommodations for tests are abused by rich kids more than they are utilized by those who need them.
This presents itself as a bad test taker, I rarely ever got above a B+ on any difficult test material. But you put me in a lab, and that same skillset becomes a major advantage.
Minds come in a variety of configurations, id suggest considering that before taking your own experience as the definitive.
If I was evaluating the health of various companies, I wouldn’t use one metric for all of them, as company health is kind of an abstract concept and any specific metric would not give me a very good overall picture and there are multiple ways for a company to be healthy/successful. Same with people.
There are lots of different ways to utilize knowledge in real world scenarios, so someone could be bad at testing and bad at some types of related jobs but good at other types of related jobs. So unless “test taking” as a skill is what is being evaluated, it isn’t necessary to be the primary evaluation tool.
Students are more accurately measured via long, take-home projects, which are complicated enough that they can’t be entirely done by AI.
Unless the class is something that requires quick thinking on the job, in which case there should be “exams” that are live simulations. Ultimately, a student’s GPA should reflect their competence in the career (or possible careers) they’re in college for.
I mean, for every neurodivergent person who does miserably in exam conditions you have one that does miserably in homework essays because of absence of clear time boundaries.
Can't this be done in the US as well?
Do you feel it is effective?
It seems to me that there is a massive asymmetry in the war here: proctoring services have tiny incentives to catch cheaters. Cheaters have massive incentives to cheat.
I expect the system will only catch a small fraction of the cheating that occurs.
It'll depend a lot on who/where/how is doing the screening and what tools (if any) are permitted.
Remember that bogus program for TI8{3,4} series calculators that would clear the screen and print "MEMORY CLEAR"? If the proctor was just looking for that string and not actually jumping through the hoops to _actually_ clear the memory then it was trivial to keep notes / solvers ... etc on the calculator.
In the years since, I’ve only ever heard mention of older models, not newer ones which makes me wonder if this is a special case and situation where technology is frozen in time intentionally to foster learning.
Oh yes, they're frozen in time, but since the people who pay for them are not the same people who demand they must be used, they're not frozen in price. It's the most expensive kilobytes you'll ever buy.
At my high school we were allowed to have TI-83s but not TI-89s, because 89s had built in CAS (computer algebra system) and could do your algebra homework for you. When I went to college I already had an 83 so I didn't feel the need to upgrade.
I ended up displaying "M" "e" "min(" "c" "log(" "e" "a" "r" "e" "d". Then covered up the "in(" with spaces.
Then you lower your contrast for the full effect.
If they don’t catch them they don’t have a business model. They have one job. The University of London, Open University and British Council all have 50+ years experience on proctoring university exams for distance learning students and it’s not like Thomson Prometric haven’t thought about how to do it either, even if they (mostly?) do computerised exams.
You put your stuff in a locker. They compare your face to some official photo ID and take your photo. You sit the test. They print out your results along with your mugshot. That's it. It was very painless.
The difference is running exams is a small part of a teacher's job, and almost certainly not the part they're passionate about.
Also proctors demand things I've seen no teacher at any level demand (or be able to demand).
There is definitely a war between cheaters and people catching them. But a lot of people can't be bothered and if learning the material can be made easier than cheating then it will work.
You can imagine proctoring halls of the future being Faraday cages with a camera watching people do their test.
I've been running a programming LLM locally with a 200k context length, using system RAM.
It's also an abliterated model, so I get none of the moralizing or forced ethics either. I ask, and it answers.
I even have it hooked up to my HomeAssistant, and can trigger complex actions from there.
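A minimal sketch of talking to a local model like that over an OpenAI-compatible HTTP endpoint (the kind llama.cpp's `llama-server` exposes) might look like the following; the URL, port, and helper names are assumptions for illustration, not anyone's actual config:

```python
import json
import urllib.request

# Hypothetical local endpoint: llama.cpp's `llama-server` (and similar
# tools) serve an OpenAI-compatible /v1/chat/completions route.
LOCAL_URL = "http://127.0.0.1:8080/v1/chat/completions"


def build_chat_request(prompt: str) -> dict:
    """Assemble a chat-completion payload for a local, single-model server.

    The context length (e.g. 200k) is fixed when the server is launched,
    not in this payload.
    """
    return {
        "model": "local",  # most single-model servers ignore this field
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }


def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a running server):
#   ask("Turn my reading-lamp routine into a Home Assistant automation.")
```

Because the endpoint speaks plain HTTP, anything that can make a web request, including a Home Assistant automation, can drive the model the same way.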
So to your point, it’s easy to cheat even if the proctor tries to prevent it.
You wanted to "ace the class", which is an "A" on your final report card? But your crush's exam tanked your grade? You passed the class anyway, right?
Did you swap Scantrons, then, and your crush sat next to you, writing answers on the dgfitz forms?
She wouldn't pass without an "A" on the exams, so her running point total was circling the drain, and your effort gave her a "C-" or something?
In what ways did your teacher make the exams "fair"? What percentage of the grade did they comprise?
Were the 3 tests administered on 3 separate occasions, so nobody caught you repeatedly cheating the same way?
I imagine that it would be utterly trivial for two people to nearly-undetectably cheat in this way, by both of them simply writing the other person's name on their exam.
Not sure if that impression is accurate though, or if it's true of mathematical writing.
The main kind of cheating we need them to prevent is effective cheating - the kind that can meaningfully improve the cheater's score.
Requiring cheaters to put their belongings in a locker, using proctor-provided resources, and being monitored in a proctor-provided room puts substantial limits on effective cheating. That's pretty much the minimum that any proctor does.
It may not stop 100% of effective cheating 100% of the time, but it would make a tremendous impact in eliminating LLM-based cheating.
If you're worried about corrupt proctors, that's another matter. National brands that are both self- and externally-policed and depend on a good reputation to drive business from universities would help.
With this system, I expect that it would not take much to avoid almost all the important cheating that now occurs.
I get why they use it, without it there's no way to know you're not on your phone or another device cheating since they can only really see what's on the device you've installed the proctor software/rootkit on.
Sadly, Linus Tech Tips' video of him taking the CompTIA A+ exam has been taken down after threatening letters from CompTIA, but they demanded a basically barren room, 360-degree photos and spotless webcams.
- Make “homework” ungraded. Many college classes already do this, and it has been easy to cheat on it way before AI by sharing solutions. Knowledge is better measured in exams and competence in projects. My understanding is that homework is essentially just practice for exams, and it’s only graded so students don’t skip it then fail the exams; but presumably now students cheat on it then fail exams, and for students who don’t need as much practice it’s busywork.
- Make take-home projects complex and creative enough that they can’t be done by AI. Assign one large project with milestones throughout the semester. For example, in a web development class, have students build a website, importantly that is non-trivial and theoretically useful. If students can accomplish this in good quality with AI, then they can build professional websites so it doesn’t matter (the non-AI method is obsolete, like building a website without an IDE or in jQuery). Classes where a beyond-AI-quality project can’t be expected in reasonable time from students (e.g. in an intro course, students probably can’t make anything that AI couldn’t), don’t assign any take-home project.
- If exams (and maybe one large project) aren’t enough, make in-class assignments and projects, and put the lectures online to be watched outside class instead. There should be enough class time, since graded assignments are only to measure knowledge and competence; professors can still assign extra ungraded assignments and projects to help students learn.
In summary: undergraduate college’s purpose is to educate and measure knowledge and competence. Students’ knowledge and competence should be measured via in-class assignments/exams and, in later courses, advanced take-home projects. Students can be educated via ungraded out-of-class assignments/projects, as well as lectures, study sessions, tutoring, etc.
There are a LOT of people who don't take exams well. When you combine that with the fact that the real world doesn't work like exams in 90% of cases, it makes a lot of sense for grades to _not_ be based on exams (as much as possible). Going the other direction (basing them on nothing _but_ exams) is going to be very painful for a lot of people; people who do learn the material but don't test well.
The in-class assignments should also be easier than the take-home projects (although not as easy as the exams). In-class assignments and exams would be more common in earlier classes, and long projects would be more common in later classes.
But it stops a casual cheater from having ChatGPT on a second device.
I did a remote proctored exam for the NREMT last year. They had me walk the camera around the room, under the desk, etc. All devices had to be in my backpack. No earbuds. They made me unplug the conference TV on the wall, lift picture frames, etc. I had to keep my hands above the table the whole time; I couldn't look down if I was scratching an itch. They installed rootkit software and closed down all of the apps other than the browser running the test. They killed a daemon I run on my own PCs that is custom. They recorded from the webcam the whole time and had it angled so they could see. They recorded audio the whole time. I accidentally alt-tabbed once and muted the mic with a wrong keyboard shortcut; those were my first and second warnings within 5 seconds.
When you take the test in a proctored testing center location they lock all of your stuff in a locker, check your hands, pockets, etc. They give you earplugs. You use their computer. They record you the whole time. They check your drivers license and take a fingerprint.
Those methods would stop a large % of your attack vectors.
As do the repercussions:
A candidate who violates National Registry policies, or the test center's regulations or rules, or engages in irregular behavior, misconduct and/or does not follow the test administrator's warning to discontinue inappropriate behavior may be dismissed from the test center. Exam fees for candidates dismissed from a test center will not be refunded. Additionally, your exam results may be withheld or canceled. The National Registry of EMTs may take other disciplinary action such as denial of National EMS Certification and/or disqualification from future National Registry exams.
At a minimum you're paying the $150 fee again, waiting another month to get scheduled and taking another 3 hours out of your day.
> When you take the test in a proctored testing center location they lock all of your stuff in a locker, check your hands, pockets, etc. They give you earplugs. You use their computer. They record you the whole time. They check your drivers license and take a fingerprint.
While attending there, I also took a virtual Calculus class. The instructor was based in the satellite campus, several miles away. The virtual class required a TI graphing calculator, used Pearson textbook & video lectures, and all the tests and quizzes were in Canvas. I worked from home or the main campus, where there was a tutoring center, full of students and tutors making the rounds to explain everything. I received tutoring every other week.
But then our instructor posted the details on our final exams. We were expected to arrive in-person, for the first time of the semester, on that satellite campus at specified times.
I protested, because everything I'd ever done was on the main campus, and I rode public transit, and the distance and unfamiliarity would be a hardship. So the disability services center accommodated me.
They shut me into a dimly lit one-person room with a desk, paper, and pencil, and I believe there was a camera, and no calculator required. The instructor had granted an extended period to complete the exam, and I finished at the last possible moment. I was so thankful to be done and have good results, because I had really struggled to understand Calculus.
I used a spare laptop I wipe.
LLMs and remote exams changed the equation so now cheating is incredibly easy and super effective compared to trying to morse code someone with a button in your shoe.
Also: I think your suggestion is excellent. We may see this happen in the US if AI cheating gets out of control (which it well might).
seriously it reminds me of my high school days when a teacher told me i shouldn’t type up my essays because then they couldn’t be sure i actually wrote them.
maybe we will find our way back to live oral exams before long…
I think things are going to have to get a lot worse before they get better. If we're lucky, things will get so bad that we finally fix some shaky foundations that our society has been trying to ignore for decades (or even centuries). If we're not lucky, things will still get that bad but we won't fix them.
So they know what students should be taught, but I'm not sure they know how to teach it any better than the administrators do.
I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.
Edit: and why perfectly capable professionals can’t be teachers without years of certification
Moreover, the same issues arise even outside a classroom setting. A person learning on their own from a book vs. a chatbot faces many of the same problems. People have to deal with the problem of AI slop in office emails and restaurant menus. The problem isn't really about teaching, it's about the difficulty of using AI to do anything involving substantive knowledge and the ease of using AI to do things involving superficial tasks.
My mother was a first grade teacher for 30+ years. In her school system, first grade is the year that students learn to read. Each year, she was also required to take professional training classes for a certain number of days. She told me that, in her career, there were many changes and improvements and new techniques developed to help children learn how to read. One thing that changed a lot: The techniques are way more inclusive, so non-normie kids can learn to read better at an earlier age.
> Instructors and professors are required to be subject matter experts but many are not required to have a teaching certification or education-related degree.
I attended two universities to get my computer science degree. The first was somewhat famous/prestigious, and I found most of the professors very unapproachable and cared little about "teaching well". The second was a no-name second tier public uni, but I found the professors much more approachable, and they made more effort to teach well. I am still very conflicted about that experience. Sadly, the students were way smarter at the first uni, so the intellectual rigor of discussions was much higher than my second uni. My final thoughts: "You win some; you lose some."

Taking this to the extreme, I think that a top-tier university could do very well for itself by only providing a highly selective admission system, good facilities and a rigorous assessment process, while leaving the actual learning to the students.
I was at a Uni aiming for, and then gaining, "Elite" status in Germany, and I did not like the concept or the changes.
I like high profile debates. As high as possible. But I don't like snobism. We all started as newbs.
Your approach sounds too elitist to me. I think you simply figure out the core skills of your professors. Maybe some teach undergrads well, others only advanced degrees. Maybe some should just be left to research with minimal classroom time, etc.
I’ve only taken classes at state schools, and my experience was that I’d often get a professor that was clearly brilliant at publishing but lacked even the most rudimentary teaching skills. Which is insightful in its own way…just not optimal for teaching.
1. Stupider people are better teachers. Smart people are too smart to have any empathic experience on what it’s like to not get something. They assume the world is smart like them so they glaze over topics they found trivial but most people found confusing.
2. They don’t need to teach. If the student body is so smart then the students themselves can learn without teaching.
3. Since students learn so well there’s no way to differentiate. So institutions make the material harder. They do this to differentiate students and give rankings. Inevitably this makes education worse.
I could make arithmetic incomprehensible, let alone QM.
Less "prestigious" universities apply less of that pressure.
That push was accelerated because of COVID, but with 'AI homework', it gave teachers an argument against that move, and the trend seemed to stop last year (I don't know yet if it has reverted). In any case, I hope this AI trend will give more freedom to teachers, and maybe new ways of teaching.
And I'm not a big LLM fan in general, but in my country, in higher education, it seems good overall.
There is a lot more on the plate when you are kindergarten teacher - as the kids needs a lot of supervision and teaching outside the "subject" matters, basic life skills, learning to socialize.
Conversely, at a university the students should generally handle their life without your supervision, you can trust that all of them are able to communicate and to understand most of what you communicate to them.
So the subject matter expertise in kindergartens is how to teach stuff to kids. It's not about holding a fork, or not pulling someone's hair. Just as the subject matter expertise in a university can be maths. You rarely have both, and I don't understand how you suggest people get both a PhD in maths, do enough research to become a professor, and at the same time get a degree in education?
If you are just providing materials and testing, you aren’t actually teaching. Of course there are a ton of additional skills that go into childhood development, but just saying adults should figure it out and regurgitating material counts as “teaching” is BS.
PhD - Doctor of Philosophy
I think this is partially due to the age of the students, by the time you hit college the expectation is you can do a lot of the learning yourself outside of the classroom and will seek out additional assistance through office hours, self study, or tutors/classmates if you aren't able to understand from the lecture alone.
It's also down to cost cutting, instead of having entirely distinct teaching and research faculty universities require all professors to teach at least one class a semester. Usually though the large freshman and sophomore classes do get taught by quasi dedicated 'teaching' professors instead of a researcher ticking a box.
If someone is doing something day in and day out, they do gain knowledge on what works and doesn't work. So just by doing that, the professors typically know much more about how people should be taught than the administrators. Further, the administrators' incentives are not aligned towards ensuring proper instruction. They are aligned with increasing student enrollment and then cashing out whenever they personally can.
What has AI done? I teach a BA thesis seminar. Last year, when AI wasn't used as much, around 30% of the students failed to turn in their BA theses. A 30% drop-out rate was normal. This year: only 5% dropped out, while the amount of ChatGPT-generated text has skyrocketed. I think there is a correlation: ChatGPT helps students write their theses, so they're not as likely to drop out.
The University and the admins are probably very happy that so many students are graduating. But also, some colleagues are seeing an upside to this: if more graduate, the University gets more money, which means fewer cuts to teaching budgets, which means that the teachers can actually do their job and improve their courses, for those students who are actually there to learn. But personally, as a teacher, I'm at a loss as to what to do. Some theses had hallucinated sources, some had AI slop blogs as sources, the texts are robotic and boring. But should I fail them, out of principle, based on what the ideal University should be? Nobody else seems to care. Or should I pass them, let them graduate, and reserve my energy for teaching those who are motivated and willing to engage?
There are only so many jobs which give you a good salary.
So everyone had to become a doctor lawyer or engineer. Business degrees were seen as washouts.
Even for the job of a peon, you had to be educated.
So people followed incentives and got degrees - in any way or form they could.
This meant that degrees became a measure, and they were then ruthlessly optimized for, till they stopped having any ability to indicate that people were actually engineers.
So people then needed more degrees and so on - to distinguish their fitness amongst other candidates.
Education is what liberal arts colleges were meant to provide - but this worked only in an economy that could still provide employment for all the people who never wanted to be engineers, lawyers or doctors.
This mess will continue constantly, because we simply cannot match/sort humans, geographies, skills, and jobs well enough - and verifiably.
Not everyone is meant to be a startup founder. Or a doctor. Or a plumber, or a historian or an architect or an archaeologist.
It’s a jobs market problem, and has been this way ever since the American economy wasn’t able to match people with money for their skills.
In my country doctors earn huge salaries and have 100% job security, because their powerful interest groups have successfully lobbied to limit the number of grads below job market's demand. Other degrees don't come even close.
No, you should fail them for turning in bad theses, just like you would before AI.
The person who used the AI slop blog for sources, we asked them to just remove them and resubmit. The person who hallucinated sources is however getting investigated for fabrication. But this is an incredibly long process to go through, which takes away time and energy from actual teaching / research / course prep. Most of the faculty is already overworked and on the verge of burnout (or are recovering post-burnout), so everybody tries to avoid it if they can. Besides, playing a cop is not what anybody wants to do, and it's not what teaching should be about, as the original blog post mentioned. If the University as an institution had some standards and actually valued education, it could be different. But it's not. The University only cares about some imaginary metrics, like international rankings and money. A few years ago they built a multi-million datacenter just for gathering data from everything that happens in the University, so they could make more convincing presentations for the ministry of education — to get more money and to "prove" that the money had a measurable impact. The University is a student-factory (this is a direct quote by a previous principal).
> The person who used the AI slop blog for sources
That phrase is so utterly dystopian. I am laughing, but not in a good way.

In The Netherlands, we have a three-tier tertiary system: MBO (practical job education / trades), HBO (college job education / applied college) and WO (scientific education / university).
A lot of the fancy jobs require WO. But in my opinion, WO is much too broad a program, because it tries to both create future high tier workers as well as researchers. The former would be served much better by a reduced, focused programme, which would leave more bandwidth for future researchers to get the 'true' university education they need.
Take law for example and free speech - a central tenet to a functional democracy is effective ways to trade ideas.
A core response in our structure to falsehoods and rhetoric is counter speech.
But I can show you that counter speech fails. We have reams upon reams of data inside tech firms and online communities that show us the mechanics of how our information economies actually work, and counter speech does diddly squat.
Education is also stuck in a bind. People need degrees to be employable today, but the idea of education is tied up with the idea of being a good educated thinking human being.
Meaning you are someone who is engaged with the ideas and concepts of your field, and have a mental model in your head, that takes calories, training and effort to use to do complex reasoning about the world.
This is often overkill for many jobs - the issue isn’t doing high level stats in a day science role, it’s doing boring data munging and actually getting the data in the first place. (Just an example).
High quality work is hard, and demanding, and in a market with unclear signals, people game the few systems that used to be signals.
Which eventually deteriorated signal till you get this mess.
We need jobs that give a living wage, or provide a pathway to achieving mastery while working, so that the pressure on the education lever can be reduced and spread elsewhere.
> But I can show you that counter speech fails
Could you show me that? What's your definition of failure?
Hmmm.
An example - the inefficacy of Fact checking efforts. Fact checking is quintessentially counter speech, and we know that it has failed to stop the uptake and popularity of falsehoods. And I say this after speaking to people who work at fact checking orgs.
However, this is in itself too simple an example.
The mechanics of online forums are more interesting to illustrate the point - Truth is too expensive to compete with cheaper content.
Complex articles can be shared on a community, which debunk certain points, but the community doesn’t read it. They do engage heavily on emotional content, which ends up supporting their priors.
I struggle to make this point nicely, but the accuracy of your content is secondary to its value as an emotional and narrative utility for the audience.
People are not coming online to be scientists. They are coming online to be engaged. Counter speech solves the issue of inaccuracy, and is only valuable if inaccuracy is a negative force.
It is too expensive a good to produce, vs alternatives. People will coalesce around wounds and lacunae in their lives, and actively reject information that counters their beliefs. Cognitive dissonance results in mental strife and will result in people simply rejecting information rather than altering their priors.
Do note - this is a point about the efficacy of this intervention in upholding the effectiveness of the market where we exchange ideas. There will be many individual exchanges where counter speech does change minds.
But at a market level, it is ineffective as a guardian and tonic against the competitive advantage of falsehoods against facts.
——
Do forgive the disjointed quality in the response. It’s late here, and I wish I could have just linked you to a bunch of papers, but I dont think that would have been the response you are looking for.
If they want to use AI make them use it right.
The larger work of the intellectual and academic forces of a liberal democracy is that of "verification".
A core part of the output is showing that the output is actually what it claims to be.
The reproducibility crisis is a problem precisely because a standard was missed.
In a larger perspective, we have mispriced facts and verification processes.
They are treated as public goods, when they are hard to produce and uphold.
Yet they compete with entertainment and “good enough” output, that is cheaper to produce.
The choice to fail or pass someone doesn’t address the mispricing of the output. We need new ways to address that issue.
Yet a major part of the job you do is to hold up the result to a standard.
You and the institutions we depend on will continue to be crushed by these forces. Dealing with that is a separate discussion from the pass or fail discussion.
It's probably not as bad for mathematical derivations. I still do those by hand since they are more like drawing than expression.
yes.
> I would be at a massive disadvantage.
yes.
...but.
how would you propose to filter out able cheaters instead? there's also the in-person one-on-one verbal exam, but the economics and logistics of that are insanely unfavorable (see also - job interviews.)
So is testing; people who don't have the skills don't do well. Hell, the entire concept of education is ableist towards learning impaired kids. Let's do away with it entirely.
Having had much occasion to consider this issue, I would suggest moving away from the essay format. Most of the typical essay is fluff that serves to provide narrative cohesion. If knowledge of facts and manipulation of principles are what is being evaluated, presentation by bullet points should be sufficient.
Doing anything is inherently based on your ability to do it. Running is inherently ableist. Swimming is ableist. Typing is inherently ableist.
Pointing this out is just a thought terminating cliche. Ok, it's ableist. So?
> As soon as I could use a word processor I blossomed.
You understand this is inherently ableist to people that can't type?
> I still do those by hand since they are more like drawing than expression.
Way to do ableist math.
Teachers and professors: you can say "no". Your students will thank you in the future.
Wow.
Maybe we should consider the possibility that this isn't a good idea? Just a bit? No? Just ignore how obviously comparable this is to the most famous dystopian fiction in literary history?
Just wow. If you're willing to do that, I don't know what to tell you.
I am now doing an Online MSc in CompSci at Georgia Tech. The online evaluation and proctoring is fine. I’ve taken one rather math-heavy course (Simulation) and it worked. I see the program however is struggling with the online evaluation of certain subjects (like Graduate Algorithms).
I see your point that a professor might prefer to have physical evaluation processes. I personally wouldn’t begrudge the institution as long as they gave me options for proctoring (at my own expense even) or the course selection was large enough to pick alternatives.
This hybrid model is vastly preferable to "true" remote test taking in which they try to do remote proctoring to the student's home using a camera and other tools.
LLMs aren't destroying the University or the essay.
LLMs are destroying the cheap University or essay.
Cheap can mean a lot of things, like money or time or distance. But, if Universities want to maintain a standard, then they are going to have to work for it again.
No more 300+ person freshman lectures (where everyone cheated anyways). No more take-home zoom exams. No more professors checked out. No more grad students doing the real teaching.
I guess I'm advocating for the Oxbridge/St. John's approach, with class sizes under 10 where the proctor actually knows you and whether you've done the work. And I know, that is not a cheap way to churn out degrees.
But yeah, if the professor is clearly checked out and only interested in his research, and the students are being told that the only purpose of their education is to get a piece of paper to show to potential employers, you'll get a cynical death-spiral.
(I've been on both sides of this, though back when copy-pasting from Wikipedia was the way to cheat.)
Back when I was teaching part time, I had a lot of fun looking at the confused looks on my students' faces when I said "you cannot use Wikipedia, but you'll find a lot of useful links at the bottom of any article there..."
But that commercially oriented boards are ruining education, that's a given. That they would stoop to this level is a bit surprising.
Of course that's only my experience and I can't speak for all of humanity. I'm sure people exist who can engage in and utilize remote learning to its full potential. That said, I think it's extremely tempting to lean on it to get out of providing classrooms and providing equipment, and colleges have been letting the education part of their school rot for decades now in favor of sports and administrative bloat, so forgive me if I'm not entirely trusting them to make the "right" call here.
Edit: Also on further consideration, remote anything but teaching very much included also requires a level of tech literacy that, at least in my experience, is still extremely optimistic. The number of times we have to walk people through configuring a microphone, aiming a webcam, sharing to the meeting, or the number of missed participants because Teams logged them out, or Zoom bugged out on their machine, or whatever. It just adds a ton of frustration.
I gather that that's not necessarily what you were referring to, but with the way that people tend to lump all remote experiences in the "inferior" basket together, I just wanted to point out that, in many cases, that kind of accessibility is better than the actual alternative: missing out.
Provide machine problems and homework as exercises for students to learn, but assign a very low weight to these as part of an overall grade. Butt-in-seat assessments should make up the majority of a course assessment for many courses.
I could understand US tuition if that were the case. These days, overworked adjuncts make it McDonald's at Michelin-star prices.
So, it's pretty hard for universities over here to maintain standards in this GenAI world, when the paying customer only cares about quantity, and not quality. I'm feeling bad for the students, not so much for foolish politicians.
But, of course, LLMs are affecting the whole world.
Yeah, I'd love to hear more about how other countries are affected by this tool. For Finland, I'd imagine that the feedback loop is the voters, but that's a bit too long and the incentives and desires of the voting public get a bit too condensed into a few choice to matter [0].
What are you seeing out there as to how students feel about LLMs?
[0] funnily enough, like how the nodes in the neural net of an LLM get too saturated if they don't have enough parameters.
Students can learn for free via online resources, forums, and LLM tutors (the less-trustworthy forums and LLMs should primarily be used to assist understanding the more-trustworthy online resources).
Real universities should continue to exist for their cutting-edge research and tutoring from very talented people, because that can't be commodified. At least until/if AI reaches expert competence (in not just knowledge but application), but then we don't need jobs either.
And tbh, lots of people historically would have loved a calculator that could write an essay about shakespeare or help code a simple game.
You tell it to act as a tutor, it'll act as one. Tell it to solve your homework in the form of a poem, it'll do that.
That's not just a calculator, even though it's just calculating, just as much as a computer isn't just a voltage switcher, even though it's just switching voltages.
You can ask ChatGPT to pretend to be Julius Caesar; it still probably shouldn't be in an English exam.
Point being, the fundamentals matter. I can't do mental arithmetic very well these days because it's been years since I've practiced, but I know how it works in the first place and can do it if need be. How is a kid learning geometry or calculus supposed to get by, and learn to spot the patterns that make sense and the ones that don't, without first knowing the fundamentals underlying the more complex concepts?
My calculator doesn’t depend on a fancy AI model in the cloud. It’s not randomly rate limited during peak times due to capacity constraints. It’s not expensive to use, whereas the good LLM models are.
Did I mention calculators are actually deterministic? In other words, always reliable. It's difficult to compare the two. One gives a false sense of accomplishment because it's, say, 80% reliable, and the other is always 100% reliable.
Yes, that's what we did and are still doing. Most grade schools don't allow calculators in basic arithmetic classes. Colleges don't integrate WolframAlpha into Calculus 101 exams. etc.
I want my math graduates to be skilled at using CAS systems. Yes, even in Calculus 1.
The lack of computer access for teaching math, which is objectively supercharged by computation, is a massive disservice to the millions of individuals who could have used those CAS systems.
I don't want my engineers solving equations by hand. I especially don't want anyone who claims to be a "statistician" to not be skilled in Python (or historically, R)
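For what it's worth, here's a minimal sketch of the kind of CAS fluency I mean, using SymPy as the CAS (my own example choice, not anything tied to a particular curriculum): checking a hand-derived derivative and a definite integral, exactly the work a Calculus 1 student does by hand.

```python
# Minimal SymPy sketch: use a CAS to verify hand-derived Calculus 1 results.
import sympy as sp

x = sp.symbols('x')

# Product rule check: d/dx [sin(x) * e^x]
deriv = sp.diff(sp.sin(x) * sp.exp(x), x)   # exp(x)*sin(x) + exp(x)*cos(x)

# Definite integral check: integral of x^2 on [0, 3]
area = sp.integrate(x**2, (x, 0, 3))        # 9

print(deriv)
print(area)
```

The point isn't to skip the hand derivation; it's that a student who can set up this kind of check learns to verify their own work instead of waiting for an answer key.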
When it's general thinking we've trained people not to have to do anymore, it's going to be dire.
Kicking against the pricks.
It is understandable that professional educators are struggling with the AI paradigm shift, because it really is disrupting their profession.
But this new reality is also an opportunity to rethink and improve the practice of education.
Take the author comment above: you can't disagree with the sentiment but a more nuanced take is that AI tools can also help people to be better communicators, speakers, writers. (I don't think we've seen the killer apps for this yet but I'm sure we will soon).
If you want students to be good at spelling and grammar then do a quick spelling test at the start of each lesson and practice essay writing during school time with no access to computers. (Also, bring back Dictation?)
Long term: yes I believe we're going to see an effect on people's cognition abilities as AI becomes increasingly integrated into our lives. This is something we as a society should grapple with and develop new enlightened policies and teaching methods.
You can't put the genie back in the bottle, so adapt, use AI tools wisely, think deeply about ways to improve education in this new era.
The reality is, if someone wants to learn something then there's very little need to cheat, and if they don't want to learn the thing but they're required to, the cheating sort of doesn't matter in the end because they won't retain or use it.
Or to put it simpler, you can lead a horse to water but..
A lot of debate around the idea of student loan forgiveness but nobody is trying to address how the student loan problem got so bad in the first place.
Yeah, it's absolutely bonkers. I spent 9 months out of school traveling, and the provided homework actually set me ahead of my peers when I returned.
No one's stopped and considered "What is a school for".
For some people it seems to be mandatory state sponsored childcare. For others its about food? Some people tell me it sucks but its the best way to get kids to socialise?
I feel like if it were an engineering project there would be a formal requirements study, but because it's a social program what we get instead is just a big bucket of feelings and no defined scope.
During my time I have come to view schooling as an adversary. I am considering whether it might be prudent to instruct my now toddler that school is designed to break him, and that his role is actually to achieve in spite of it, and that some of his education will come in opposition to the institution.
Similar to Montessori, LLMs can help students who wander off in various directions.
I remember often being “stuck” on some concept (usually in biology and chemistry), where the teacher would hand-wave something as truth, thus dismissing my request for further depth.
Of course, LLMs in the current educational landscape (homework-heavy) only benefit the students who are truly curious…
My hope is that, with new teaching methods/styles, we can unlock (or just maintain!) the curiosity inherent in every pupil.
(If anyone knows of a tool like this, where an LLM stays on a high-level trajectory of e.g. teaching trigonometry, but allows off-shoots/adventures into other topical nodes, I’d love to know about it!)
Edutech is pretty new and virtually all of it has been a disaster. Sitting in a lecture and taking notes on paper is tried, tested, and research backed. It works. Not for everyone, but for a lot of people.
"Typical classroom experience" hasn't even meant the same thing for thousands of years.
"Lecture" used to be centered around reading the source book so that students could copy it verbatim. The printing press was an important piece of "Edutech". Technology has been continuous, and much of it has been applied to impact the experience of education, not just in the last few years, but over a long window of history. Yeah, what we currently think of as "edutech" is what has been around for only a short time, and hasn't yet been established as part of the consensus baseline -- but that's a moving target.
The issues with Edutech are mostly because they're bolting it on to the same broken system that people don't find value in. But the original comment wasn't about Edutech. When people want to learn new things, they largely do it without either typical college classrooms or Edutech, because the alternatives are so much better than anything coming out of the broken academic morass.
And not due to a lack of information. The draw of education hasn't been access, not since the internet anyway. Structure, pacing, curriculum, schedule, and measurement cannot be recreated.
I've had many people tell me they're going to learn to program online. Almost all of them fail.
At the end of the day, we go home and we don't crack open a textbook. We sit and watch TV. Maybe we go for a walk or go to the gym. The vast majority of people do not have the mindset required to be self-educated.
We used to do the "everyone self-educate" thing. Most people couldn't read or write. Humans are unintuitive. You can't just give them access to things and expect results. They require accountability, they require structure. We're not machines, we're faulty fleshy creatures. Our reward feedback loops were never built for self-determination at this high of a level.
Do you regard that as intrinsically problematic? The people themselves weren't unhappy about their state, and society, too, could function well without mass literacy. There was a certain period where we thought training wage workers for their duties required them to be literate, but that might turn out to be unnecessary, if supplying an LLM is cheaper overall than mandatory school education.
That's not true though? Many people are trying to increase their CS skills through self-study. This topic even comes up a lot here, with people recommending the self-studying they've been doing in CS.
> I've had many people tell me they're going to learn to program online. Almost all of them fail.
Yet there are still a large number of self-taught programmers.
Of course, more people will have an incentive to learn through the university system than through self-education, but that's because the current system says that you only get the highest level credentials if you go through a university education. Naturally, a system that explicitly biases a certain form of education to a large degree is going to cause more people to do that. But that's for the credential, not the education. When the credentials are taken out, we see people do better with other forms of education.
The vast majority of people do not complete.
The people who do complete are outliers. I suppose we can build for outliers, but then most people are just going to be ignored in this system, and if they have a way to respond (vote), they won't be happy about it.
Then it was corporal punishment if you did not learn quickly enough.
Comenius' idea was pansophia - knowledge for all. His Latin textbook - https://en.wikipedia.org/wiki/Janua_Linguarum_Reserata - was also quite revolutionary in using relations to real-world knowledge to teach a new language.
Even more ground breaking was his picture book for children - https://en.wikipedia.org/wiki/Orbis_Pictus . We take hybrid approach to learning for granted these days.
Even then, Comenius was mostly forgotten in the 18th-century Enlightenment - probably the ideas of Jean-Jacques Rousseau took over, with insufficient backing.
The specific examples I recall most vividly were from 4th grade and 7th grade.
I think you hit on a major issue: Homework-heavy. What I think would benefit the truly curious is spare time. These things are at odds with one another. Present-day busy work could easily be replaced by occupying kids' attention with continual lessons that require a large quantity of low-quality engagement with the LLM. Or an addictive dopamine reward system that also rewards shallow engagement -- like social media.
I'm 62, and what allowed me to follow my curiosity as a kid was that the school lessons were finite, and easy enough that I could finish them early, leaving me time to do things like play music, read, and learn electronics.
And there's something else I think might be missing, which is effort. For me, music and electronics were not easy. There was no exam, but I could measure my own progress -- either the circuit worked or it didn't. Without some kind of "external reference" I'm not sure that in-depth research through LLMs will result in any true understanding. I'm a physicist, and I've known a lot of people who believe that they understand physics because they read a bunch of popular books about it. "I finally understand quantum mechanics."
I see both sides of this. When I was a teenager, I went to a pretty bad middle school where there were fights everyday, and I wasn’t learning anything from the easy homework. On the upside, I had tons of free time to teach myself how to make websites and get into all kinds of trouble botting my favorite online games.
My learning always hit a wall though because I wasn’t able to learn programming on my own. I eventually asked my parents to send me to a school that had a lot more structure (and a lot more homework), and then I properly learned math and logic and programming from first principles. The upside: I could code. The downside: there was no free time to apply this knowledge to anything fun
Yeah, I feel like teachers are going to try to use LLMs as an excuse to push more of the burden of schooling onto their pupils' home life somehow. Like, increasing homework burdens to compensate.
This resonates with me a lot. I used to dismiss AI as useless hogwash, but have recently done a near total 180 as I realised it's quite useful for exploratory learning.
Not sure about others but a lot of my learning comes from comparison of a concept with other related concepts. Reading definitions off a page usually doesn't do it for me. I really need to dig to the heart of my understanding and challenge my assumptions, which is easiest done talking to someone. (You can't usually google "why does X do Y and not Z when ABC" and then spin off from that onto the next train of reasoning).
Hence ChatGPT is surprisingly useful. Even if it's wrong some of the time. With a combination of my baseline knowledge, logic, cross referencing, and experimentation, it becomes useful enough to advance my understanding. I'm not asking ChatGPT to solve my problem, more like I'm getting it to bounce off my thoughts until I discover a direction where I can solve my problem.
E.g. it's easy to ask Copilot: "can you give me a list of free, open source MQTT brokers and give me some statistics in the form of a table"
And Copilot (or any other AI) does this quite nicely. This is not something you can ask a traditional search engine.
Of course, you do need to know enough of the underlying material, and double-check whatever output you get for when the AI is hallucinating.
Putting together a project with AI help closely mimics what real work will be like, and if the teacher is good, students will learn far more than they would by spouting information from memory.
It's not only an anthropomorphism, it's also a euphemism.
A correct interpretation of the word would imply that the LLM has some fantastical vision that it mistakes for reality. What utter bullsh1t.
Let's just use the correct word for this type of output: wrong.
When the LLM generates a sequence of words that may or may not be grammatically correct but implies a state or conclusion that is not factually correct, let's state what actually happened: the LLM-generated text was WRONG.
It didn't take a trip down Alice's rabbit hole; it just put words together into a stream that implied a piece of information that was incorrect. It was just WRONG.
The euphemistic aspect of using this word is a greater offense than the anthropomorphism, because it's painting some cutesy picture of what happened, instead of accurately acknowledging that the s/w generated an incorrect result. It's covering up for the inherent shortcomings of the tech.
Not all LLM errors are hallucinations - if an LLM tells me that 3 + 5 is 7, it's just wrong. If it tells me that the source for 3 + 5 being 7 is a seminal paper entitled "On the relative accuracy of summing numbers to a region +-1 from the fourth prime", we would call that a hallucination. In modern parlance, "hallucination" has become a term of art for a particular class of error that LLMs are prone to. (Others have argued that "confabulation" would be more accurate, but it hasn't really caught on.)
It's perfectly normal to repurpose terms and anthropomorphizations to represent aspects of the world or systems that we create. You're welcome to try to introduce other terms that don't include any anthropomorphization, but saying it's "just wrong" conveys less information and isn't as useful.
But in this specific case, I would say the reuse of this particular word, to apply to this particular error, is still incorrect.
A person hallucinating does so on the basis of a many-leveled experience of consciousness.
The LLM has nothing of the sort.
It doesn't have a hierarchy of knowledge which it is sorting to determine what is correct and what is not. It doesn't have a "world view" based on a lifetime of that knowledge sorting.
In fact, it doesn't have any representation of knowledge at all. Much less a concept of whether that knowledge is correct or not.
What it has is a model of what words came in what order, in the training set on which it was "trained" (another, and somewhat more accurate, anthropomorphism).
So without anything resembling conscious thought, it's not possible for an LLM to do anything even slightly resembling human hallucination.
As such, when the text generated by an LLM is not factually correct, it's not a hallucination; it's just wrong.
I mark "hallucinations" as "LLM Slop" in my grading sheets, when someone gives me a 100-character sed filter that just doesn't work that there is no way we discussed in class/in examples/in materials, or a made up API endpoint, or non-nonsensical file paths that reference non-existent commands.
Slop is an overused term these days, but it sums it up for me. Slop, from a trough, thrown out by an uncaring overseer, to be greedily eaten up by the piggies, who don't care if it's full of shit.
Just as, after the invention of computers, methods for doing manual calculations faster could be dropped from the curriculum. Education shifted toward teaching students how to use computational tools effectively, which allowed them to solve more complex problems and work on higher-level concepts that manual calculation couldn't easily address.
In the era of AI, what teachers need to think about is not to punitively prohibit students from using AI, but to adjust the teaching content to better help students master related subjects faster and better through AI.
How long before a centaur team of human + AI is less effective than the AI alone?
On the one hand, I use AI extensively for my own learning, and it's helping me a lot.
On the other hand, it gets work done quickly and poorly.
Students mistake mandatory assignments for something they have to overcome as effortlessly as possible. Once they're past this hurdle, they can mind their own business again. To them, AI is not a tutor, but a homework solver.
I can't ask them to not use computers.
I can't ask them to write in a language I made the compiler for that doesn't exist anywhere, since I teach at a (pre-university) level where that kind of skill transfer doesn't reliably occur.
So far we do project work and oral exams: Project work because it relies on cooperation and the assignment and evaluation is open-ended: There's no singular task description that can be plotted into an LLM. Oral exams because it becomes obvious how skilled they are, how deep their knowledge is.
But every year a small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life and won't lead them to a meaningful existence as a professional programmer.
Teaching Linux basics doesn't suffer the same because the exam-preparing exercise is typing things into a terminal, and LLMs still don't generally have API access to terminals.
Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.
Assuming you have access to a computer lab, have you considered requiring in-class programming exercises, regularly? Those could be a good way of checking actual skills.
> Maybe providing the IDE online and observing copy-paste is a way forward. I just don't like the tendency that students can't run software on their own computers.
And you'll frustrate the handful of students who know what they're doing and want to use a programmer's editor. I know that I wouldn't have wanted to type a large pile of code into a web anything.
I might not have liked that, but I sure would have liked to see my useless classmates being forced to learn without cheating.
Even IntelliJ has Gateway.
By IntelliJ's own (on-machine) standards, Gateway is crap. I use the vi emulation mode (using ideavim) and the damn thing gets out of sync unless you type at like 20wpm or something. Then it tries to rollback whatever you type until you restart it and retry. I can't believe it is made by the same Jetbrains known for their excellent software.
I don't see why this is so hard, other than the usual intergenerational whining / a heaping pile of student entitlement.
If anything, the classes that required extensive paper-writing for evaluation are the ones that seem to be in trouble to me. I guess we're back to oral exams and blue books for those, but again...worked fine for prior generations.
You know that grading paper exams is a lot more hassle _for the teachers_?
Your overall point might or might not still stand. I'm just responding to your 'I don't see why this is so hard'. Show some imagination for why other people hold their positions.
(I'm sure there's lots of other factors that come into play that I am not thinking of here.)
> Show some imagination for why other people hold their positions.
I say that as someone who has also graded piles of paper exams in graduate school (also not that long ago!)
I don't believe the argument you are making is true, but if the primary objection really is that teachers have to grade, then no, I don't have any sympathy.
We made it. But that’s survivorship bias, right? We can’t really know how much potential was wasted.
Doing programming on paper seems to me like assessing someone's skills in acrobatics by watching them do the motions in a zero-gravity environment. Without the affordances given by the computer, it's just not the same activity.
You can very easily test CS concepts on paper, and programming is demonstrated via group projects.
To solve those in a reasonable amount of time, you need to form a mental model of what is going on & how to fix it. Having access to a computer by itself won't really help for those.
In that context paper exams for computer science make much more sense to me now - they want you to understand the problem and provide a solution, with pen and paper being the output format.
People in the past put up with all kinds of struggles. They had to.
> I don't believe the argument you are making is true, but if the primary objection really is that teachers have to grade, then no, I don't have any sympathy.
I have no clue what the primary objection really is. I was responding to "I don't see why this is so hard", which just shows a lack of imagination.
Yup. ~25 years ago competitions / NOI / leet_coding as they call it now were in a proctored room, computers with no internet access, just plain old borland c, a few problems and 3h of typing. All the uni exams were pen & paper. C++ OOP on paper was fun, but iirc the scoring was pretty lax (i.e. minor typos were usually ignored).
We wrote C++ on paper for some questions and were graded on it. Of course, the tutors were lenient on the syntax; they cared about the algorithm and the data structures, not so much the code. They did test syntax knowledge as well, but more in code-reasoning segments, i.e. questions like: what's the value of a after these two statements, or after this loop is run?
We also had exams in the lab with computers disconnected from the internet. I don't remember the details of the grading but essentially the teaching team was in the room and pretty much scored us then and there.
There’s such a shortfall of teachers globally, and the role is a public good, so it’s constantly underpaid.
And if you are good - why would you teach? You’d get paid to just take advantage of your skills.
And now we have a tool that makes it impossible to know if you have taught anyone because they can pass your exams.
> small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them ... won't lead them to a meaningful existence
I don't see a problem, the system is working.
The same group of people who are going to lose their jobs to an LLM aren't getting smarter because of how they are using LLMs.
Paternalism in the sense of 'we know what's better for you than you do' is perhaps justified for those people who really don't know better. But I don't think we should overextend that notion.
I have to apologise: I was under the impression this thread was about university students, who should be old enough to fend for themselves (and enjoy, respectively suffer from, the consequences of their own actions). But I don't think anyone actually mentioned that age in the thread. I mixed it up with another one.
This is only temporary. It will be able to code like anyone in time. The only way around this will be coding in-person, but only in elementary courses. Everyone in business will be using AI to code, so that will be the way in most university courses as well.
However, most legacy code is fairly primitive on that level, so my observation is in no way contradicting yours.
It has been interesting to see this idea propagate throughout online spaces like Hacker News, too. Even before LLMs, the topic of cheating always drew a strangely large number of pro-cheating comments from people arguing that college is useless, a degree is just a piece of paper, knowledge learned in classes is worthless, and therefore cheating is a rational decision.
Meanwhile, whenever I’ve done hiring or internship screens for college students, it’s trivial to see which students are actually learning the material and which ones treat every stage of their academic life and career as a game they need to talk their way through while avoiding the hard questions.
Wow.
I guess this "tough love" attitude helps for some people? But I think mostly it's just that people think it works for _other_ people, but rarely people think that this works when applied to themselves.
Like, imagine the school administration walking up to this teacher and saying "hey dum dum, you're failing too many students and the time you've spent teaching them is a waste of life."
Many teachers seem to think that students go to school/university because they're genuinely interested and motivated. But more often than not, they're there because of societal pressure, because they know they need a degree to have any kind of decent living standard, and because their parents told them to. Yeah, you can call them names, call them lazy or whatever, but that's kinda like pointing at poor people and saying they should invest more.
I'm sure GP isn't calling them dum-dum to their face. If they can't even do basic stuff, which seems to be their criteria here for the name calling, maybe a politely given reality-check isn't that bad. Some will wake up to the gravity of their situation and put in the hard work and surprise their teacher.
> Yeah you can call them names, call them lazy or whatever, but that's kinda like pointing at poor people and saying they should invest more.
They _should_ invest more, because in this case the "investment" is something the curriculum simply demands: dedication and effort. I mean, unless one is a genius, since when is that demand unreasonable? Do you want to work with people who got their degree without knowing their shit? (Not saying that everyone who doesn't have a degree isn't knowledgeable - I've worked with very smart self-taught people.)
Frankly I applaud that approach. Classes are to convey knowledge, even if the student only gives a shit about the diploma at the end of the road. At least someone cares enough to tell these students the truth about where that approach is going to take them in life.
At the time it was optional, but I get the feeling that if they still use that framework, it just became mandatory, because it has no internet facing documentation.
That said, I imagine they might have chucked it in for Unity before AI hit, in which case they are largely out of luck.
>But every year a small handful of dum-dums made it all the way to exam without having connected two dots, and I have to fail them and tell them that the three semesters they have wasted so far without any teachers calling their bullshit is a waste of life and won't lead them to a meaningful existence as a professional programmer.
This happened to me with my 3d maths class, and I was able to power through a second run. But I am not sure I learned anything super meaningful, other than I should have been cramming better.
Huh, fighting my way through a Linux CLI is exactly the kind of thing I use Chatgpt for professionally.
I did study it in compsci, but those commands are inherently not memorable.
Often I just paste an error with some scroll back but no instructions and it works out what I need.
I've had it make some pretty bad ones. Not directly hooked into my terminal, just copy and paste. A couple of git doozies that lost my work, but I've done those too. Others were more subtle; one of note: a ZFS zpool creation script it gave me used classic Linux-style /dev/sda drive identifiers instead of proper /dev/disk/by-id paths, which led to the disks being marked as failed every time I rebooted. Sure, that's on me for not verifying, but I was a little out of my depth with ZFS on Linux and thought that ZFS's own internal UUID scheme was handling it.
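For anyone who hits the same trap, here's a hedged sketch of the difference (pool name and device identifiers below are made up for illustration; don't run this against disks you care about):

```shell
# Fragile: /dev/sdX names are assigned at boot and can shuffle between
# reboots or controller changes, so ZFS may go looking for a disk at a
# path that now points at a different device.
zpool create tank mirror /dev/sda /dev/sdb

# Stable: /dev/disk/by-id/ paths are derived from the drive's model and
# serial number, so they survive reboots. (Identifiers are invented.)
zpool create tank mirror \
    /dev/disk/by-id/ata-WDC_WD40EFRX_WD-AAAA1111 \
    /dev/disk/by-id/ata-WDC_WD40EFRX_WD-BBBB2222
```

The by-id form is just the same `zpool create` with persistent device paths; nothing about the pool itself changes.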
(Dramatic. AI is fine for upper-division courses, maybe. Absolutely no use for it in introductory courses.)
Our school converted a computer lab to a programming lab. Computers in the lab have editors/compilers/interpreters and whitelisted documentation, plus an internal server for grading and submission. No internet access otherwise. We've used it for one course so far with good results, and are extending it to more courses in the fall.
An upside: our exams are now auto-graded (professors are happy) and students get to compile/run/test code on exams (students are happy).
>Students mistake mandatory assignments for something they have to overcome as effortlessly as possible.
This is the real demon to vanquish. We're approaching course design differently now (a work in progress) to tie coding exams in the lab to the homework, so that solving the homework (worth a pittance of the grade) is direct preparation for the exam (the lion's share of the grade).
People could try to cheat, but it would be pretty stupid to think they would not catch you.
I know of a time when America didn’t have this problem, and I can see it ramping up, based on my experience in India.
People will spend incredible efforts to cheat.
Like stories of parents or conspirators scaling buildings to whisper answers to students from windows.
As a higher-education (university) IT admin who is responsible for the CS program's computer labs and is also enrolled in that CS program, I would love to hear more about this setup, please and thank you. As recently as last semester, CS professors have been doing pen-and-paper exams and group projects. This setup sounds great!
It's a complete game changer for assessment—anything, really, but basic programming skills in particular. At this point I wouldn't teach without it.
So if the professors can cheat and they're happy about having to do less teaching work, thereby giving the students a lower-quality educational experience, why shouldn't the students just get an LLM to write code that passes the auto-grader's checks? Then everyone's happy - the administration is getting the tuition, the professors don't have to grade or give feedback individually, and the students can finish their assignments in half an hour instead of having to stay up all night. Win win win!
The value of educational feedback drops rapidly as time passes. If a student receives immediate feedback and the opportunity to try again, they are much more likely to continue attempting to solve the problem. Autograders can support both; humans, neither. It typically takes hours or days to manually grade code just once. By that point students are unlikely to pay much attention to the feedback, and the considerable expense of human grading makes it unlikely that they are able to try again. That's just evaluation.
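A minimal sketch of the immediate-feedback loop an autograder provides (the function name and file layout are hypothetical, not from the course described above): run the student's program against a test case and report pass/fail on the spot, so the student can fix and resubmit right away.

```shell
# grade: run a student program against one test case and report
# the result immediately.
#   $1 = path to student program, $2 = input file, $3 = expected-output file
grade() {
    program="$1"; input="$2"; expected="$3"
    # Feed the input to the program and capture what it prints.
    actual=$("$program" < "$input")
    if [ "$actual" = "$(cat "$expected")" ]; then
        echo "PASS"
    else
        echo "FAIL: expected [$(cat "$expected")], got [$actual]"
    fi
}
```

A real grader would add timeouts, sandboxing, and many cases per assignment, but the core loop - run, compare, report at once - really is this small.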
And the idea that instructors of computer science courses are in a position to provide "expert feedback" is very questionable. Most CS faculty don't create or maintain software. Grading is usually done by either research-focused Ph.D. students or undergraduates with barely more experience than the students they are evaluating.
That's where you're wrong. Being a professional programmer is 10% programming, 40% office politics, and 50% project management. If your student managed to get halfway through college without any actual programming skills, they're a perfect candidate, because they clearly own the other 90% of the skills needed to be a professional programmer.
I'd say that really depends on your job.
At smaller companies, your job will likely be 60% programming at a minimum.
Only at ~100 employees do companies fall into lots of meetings and politics.
What’s changed is that “some working code” is no longer proof that a student understands the material.
You’re going to need a new way to identify students that understand the material.
What are the higher level questions that LLMs will help with, but for which humans are absolutely necessary? The concern I have is that this line doesn't exist -- and at the very best it is very fuzzy.
Ironically, this higher level task for humans might be ensuring that the AIs aren't trying to get us (whatever that means, genocide, slavery, etc...).
Certainly there's something to be said for reconsidering much of the purpose (and mechanisms) of post-secondary education, but we often 'force' children and young adults to do things they don't want to do for their own good. I think it's better we teach our children the importance of learning - the lack of which is what results in, as another commenter puts it, students viewing homework as "something they have to overcome"
There's a reason why conservatives are so obsessed with school choice, LGBT book bans, etc.
The degree is the key that unlocks the door to a job. Not the knowledge itself, but the actual physical diploma.
And it REALLY, REALLY doesn't help that there are so many jobs out there that could be done just fine with a HS diploma. But because reasons, you now need a college degree for that job.
The problem isn't new. For decades people have bought fake degrees, hired people to do their work, even hired people to impersonate them.
But regardless I don't buy that, especially in college where you pick your own set of classes.
Unfortunately, 18-year-olds generally can't be trusted to go a whole semester without succumbing to the siren call of easy GenAI A's. So even if you tell them that the final will be in-person, some significant chunk of them will still ChatGPT their way through and bomb the final.
Therefore, professors will probably have to have more frequent in-person tests so that students get immediate feedback that they're gonna fail if they don't actually learn it.
I really think we need these policies to be developed by the opposite of misanthropists.
Maybe we should go back to times when failing students was seen as more the fault of the student than of the system. At least when the majority of students pass and there is no proven fault by the faculty.
In our competitive, profit-driven world--what is the value of a human being and having human experiences?
AI is neither inevitable nor necessary--but it seems like the next inevitable step in reducing the value of a human life to its 'outputs'.
ChatGPT can’t know if the cafe around the corner has banana bread, or how it feels to lose a friend to cancer. It can’t tell you anything unless a human being has experienced it and written it down.
It reminds me of that scene from Good Will Hunting: https://www.imdb.com/de/title/tt0119217/quotes/?item=qt04081...
But I think one place where this hits a wall is liability and accountability. Lots of low stakes things will be enshittified by “AI” replacements for actual human work. But for things like airline pilots, cancer diagnoses, heart surgery - the cost of mistakes is so large, that humans in the loop are absolutely necessary. If nothing else, at least as an accountability shield. A company that makes a tumor-detector black box wants to be an assistive tool to improve doctor’s “efficiency”, not the actual front line medical care. If the tool makes a mistake, they want no liability. They want all the blame on the doctor for trusting their tool and not double checking its opinion. I hear that’s why a lot of “AI” tools in medicine are actually reducing productivity: double checking an “AI’s” opinion is more work than just thinking and evaluating with your own brain.
If you don't want to determine your own value, you're probably no worse off letting an AI do that than anything else. Religion is probably more comfortable, but I'm sure AI and religion will mix before too long.
AI is incapable of producing anything that's not basically a statistical average of its inputs. You'll never get an AI Da Vinci, Einstein, Kant, Pythagoras, Tolstoy, Kubrick, Mozart, Gaudi, Buddha, nor (most ironically?) Turing. Just to name a few historical humans whose respective contributions to the world are greater than the sum of the world's respective contributions to them.
Unless you loosen the meaning of statistical average so much that it ends up including human creativity. At the end of the day it's basically the same process of applying an idea from one field to another.
Most humans are not Da Vinci, Einstein, Kant, etc. Does that make them not valuable as humans?
All humans (I believe!) have the potential to be that amazing. And all humans come up with amazing ideas and produce amazing works in their life, just that 99% of us aren't appreciated as much as the famous 1% are. We're all valuable.
Capitalism barely concerns itself with humans and whether human experiences exist or not is largely irrelevant for the field. As far as capitalism knows, humans are nothing but a noisy set of knobs that regulate how much profit one can make out of a situation. While tongue-in-cheek, this SMBC comic [1] about the Ultimatum game is an example of the type of paradoxes one gets when looking at life exclusively from an economics perspective.
The question is not "what's the value of a human under capitalism?" but rather "how do we avoid reducing humans to their economic output?". Or in different terms: it is not the blender's job to care about the pain of whatever it's blending, and if you find yourself asking "what's the value of pain in a blender-driven world?" then you are solving the wrong problem.
If it's to understand the material, then skip the essay writing part and have them do a traditional test. If it's to be able to write, they probably don't need that skill anymore so skip the essay writing. If it's to get used to researching on their own, find a way to have them do that which doesn't work with LLMs. Maybe very high accuracy is required (a weak point for LLMs), or the output is not an LLM-friendly form, or it's actually difficult to do so the students have to be better than LLMs.
Any person who can't write coherently and in a well organized way isn't going to be able to prompt a LLM effectively either. Writing skills become _more_ important in the age of LLMs, not less.
Writing is an essential skill.
Kids use AI like an operating system, seamlessly integrated into their workflows, their thinking, their lives. It’s not a tool they pick up and put down; it’s the environment they navigate, as natural as air. To them, AI isn’t cheating—it’s just how you get things done in a world that’s always been wired, always been instant. They do not make major life decisions without consulting their systems. They use them like therapists. It’s far more than a Google replacement or a writing tool already.
This author’s fixation on “desirable difficulty” feels like a sermon from a bygone era, steeped in romanticized notions of struggle as the only path to growth. It’s yet another “you can’t use a calculator because you won’t always have one” — the same tired dogma that once insisted pen-and-paper arithmetic was the pinnacle of intellectual rigor, even after calculators arrived and have in fact been with us every day since.
The Butlerian Jihad metaphor is clever but deeply misguided: casting AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.
The author laments students bypassing the grind of traditional learning, but what if that grind isn’t the sacred rite they think it is? What if “desirable difficulty” is just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers?
The reality is, AI’s not going away, and clutching pearls about its “grotesque” nature won’t change that. Full stop.
Students aren’t “cheating” when they use it… they’re adapting to a world where information is abundant and synthesis is king. The author’s horror at AI-generated essays misses the point: the problem isn’t the tech, it’s the assignments (and maybe your entire approach).
If a chatbot can ace your rhetorical analysis, maybe the task itself is outdated, testing rote skills instead of real creativity or critical thinking.
Why are we still grading students on formulaic outputs when AI can do that faster?
The classroom should be a lab for experimentation, not a shrine to 19th-century pedagogy, which it most definitely is. I was recently lectured by a teacher about how he tries to make every one of his students a mathematician, and he became enraged when I gently asked how he’s dealing with the disruption that AI systems are currently causing to mathematics as a profession. There is an adversarial response underneath a lot of teachers’ thin veneers of “dealing with the problem of AI” that is just wrong and such a cope.
That obvious projection leads directly to this “adversarial” grading dynamic. The author’s chasing a ghost, trying to police AI use with Google Docs surveillance or handwritten assignments. That’s not teaching. What it is is standing in the way of civilizational progress because it doesn’t fit your ideas. I know there are a lot of passionate teachers out there, and some even get it, but most definitely do not.
Kids will find workarounds, just like they always have, because they’re not the problem; the system is. If students feel compelled to “cheat” with AI, it’s because the stakes (GPAs, scholarships, future prospects) are so punishingly high that efficiency becomes survival.
Instead of vilifying them, why not redesign assessments to reward originality, process, and collaboration over polished products? AI could be a partner in that, not an enemy.
The author’s call for a return to pen and paper feels like surrender dressed up as principle, and it’s ridiculously out of touch.
It’s not about fostering “humanity” in the classroom; it’s about clinging to a nostalgic ideal of education that never served everyone equally anyway.
Meanwhile, students are already living in the future, where AI is as foundational as electricity.
The real challenge isn’t banning the “likeness bots” but teaching kids how to wield them critically, ethically, and creatively.
Change isn’t coming. It is already here. Resisting it won’t make us more human; it’ll just leave us behind.
Edit: sorry for so many edits. Many typos.
You think we will make progress by learning to use AI in certain ways, and that assignments can be crafted to inculcate this. But a moment's acquaintance with people who use AI will show you that there is a huge divide between some uses of AI and others, and that some people use AI in ways which are not creative and so on. Ideally this would prompt you to reflect on what characteristics of people incline them towards using AI in certain ways, and what we can do to promote the characteristics that incline people to use AI in productive and interesting ways, etc. The end result of such an inquiry will be something like what the author of this piece has arrived at, unfortunately. Any assignment you think is immune to lazy AI use is probably not. The only real solution is the adversarial approach the author adopts.
I assume it's because many of the commenters of this post are skewed towards academia, and perhaps view the disruption by AI to the traditional methods of grading student work as a challenge to their profession.
As we have seen many times throughout history, when disruptive forces of technical or demographic changes or a new set of market forces occurs, incumbents often struggle to adapt to the new situation.
Established traditional education is a massive ship to turn around.
Your comments contain much food for thought and deserve to be debated. I agree with you that educators should not be branding students as cheaters. Using AI in an educational context is a rational and natural thing to do, especially for younger students.
> ... AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.
- Yes, this is such an important point and it's why we need enlightened policy making leading to meaningful education reform.
I do disagree with you about pen and paper, though: I think incorporating more pen-and-paper activities would provide balance and build some important key skills.
No doubt AI is challenging to many areas of society, especially education. I'm not saying it's a wonderful thing that we don't need to worry about, but we do need to think deeply about its impacts and how we can harness its positive strengths and radically improve teaching and learning outcomes. It's not about locking students in exam rooms with high tech surveillance.
With AI it's disappointing that the prevalent opinions of many educators are seemingly stuck and struggling to adapt.
Meanwhile society will move on.
Edit: good to see you got a response!
The long list of titles is interesting and almost leads us to a self-referential thought. These series were often known as "boiler-room novels" because they were basic and formulaic, and it was possible to command a team of entry-level writers to churn them out.
Having a personal tutor who I can access at all hours of the day, and who can answer off hand questions I have after musing about something in the shower, is an incredible asset.
At the same time, I can totally believe if I was teleported back to school, it would become a total crutch for me to lean on, if anything just so I don't fall behind the rest of my peers, who are acing all the assignments with AI. It's almost a game theoretic environment where, especially with bell curve scaling, everyone is forced into using AI.
And it’s not just in school. I see the same thing at work. People rely on AI tools so much, they stop checking if they even understand what they’re doing. It’s subtle, but over time, that effort to think just starts to fade.
Culture, not technology.
Take one of those soundproofed office pods, something like what https://framery.com/en/ sells. Stick a computer in it, and a couple of cameras as well. The OS only lets you open what you want to open on it. Have the AI watch the student in real time, and flag any potential cheating behaviors, like how modern AI video baby monitors watch for unsafe behaviors in the crib.
If a $2-3000 pod sounds too expensive for you over the course of your child's education, I'm sure remote schoolers can find ways to rent pods at much cheaper scale, like a gym subscription model. If the classes you take are primarily exam-based anyway you might be able to get away with visiting it once a week or less.
I'm surprised nobody ever brings up this idea. It's obvious you have to fight fire with fire here, unless you want to 10x the workload of any teacher who honestly cares about cheating.
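The flagging loop the pod idea implies can be sketched in a few lines. This is only an illustration of the architecture, not a working proctor: `classify_frame` is a hypothetical stand-in for whatever vision model would actually watch the camera feed, and all names here are made up.

```python
import time
from dataclasses import dataclass

@dataclass
class Flag:
    timestamp: float
    label: str
    confidence: float

def classify_frame(frame):
    """Hypothetical stand-in for a real vision model. A deployed pod would
    call an actual video-understanding model here; this stub labels every
    frame 'ok' so the loop below never flags anything."""
    return "ok", 0.99

def monitor(frames, threshold=0.8):
    """Collect a Flag for every frame the classifier marks as something
    other than 'ok' with confidence above the threshold."""
    flags = []
    for frame in frames:
        label, confidence = classify_frame(frame)
        if label != "ok" and confidence >= threshold:
            flags.append(Flag(time.time(), label, confidence))
    return flags

print(len(monitor([object() for _ in range(10)])))  # stub flags nothing: 0
```

The hard part, of course, is entirely inside the stubbed classifier; the loop around it is trivial.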
Going to school to listen to a teacher for hours and take notes, sitting in a group of peers to whom you are not allowed to speak, and then going home to do some homework on your own, this whole concept is stupid and deserves to die.
Learning lessons is the activity you should do within the comfort of your home, with the help of everything you can, including books, AIs, youtube videos or anything that floats your boat. Working and practice, on the other hand, are social activities that benefit a lot from interacting with teachers and other students, and deserve to be done collectively at school.
For inverted classes, AIs are no problem at all; on the contrary, they are very helpful.
- decide what I wanted to say about the subject, from the set of opinions I already possess
- search for enough papers that could support that position. Don't read the papers, just scan the abstracts.
- write the essay. Scan the reference papers for the specific bit of each that best supported the point I wanted to make.
There was zero learning involved in this process. The production of the essay was more about developing journal search skills than absorbing any knowledge about the subject. There are always enough papers to support any given point of view, the trick was finding them.
I don't see how making this process even more efficient by delegating the entire thing to an LLM is affecting any actual education here.
All I did was follow the process you outlined.
My mother used to do it as a service for foreign language students. They would record their lectures, and she would write their papers for them.
I tested it by getting hold of a paper which had received an A from another school on the same subject, copying it verbatim and submitting it for my assignment. I received a low grade.
Despite confirming what I suspected, it somehow still wasn't a good feeling.
But the problem is that in many cases, the degrees (like MBA, which I too hold) are merely formalities to move up the corporate ladder, or pivot to something else. You don't get rewarded extra for actually doing science. And, yes, I've done the exact same thing you did, multiple times, in multiple different classes. Because I knew that if what I did just looked and sounded proper enough, I'd get my grade.
To be fair, one of the first things I noticed when entering the "professional" workforce, was that the methodology was the same: Find proof / data that supports your assumptions. And if you can't find any, find something close enough and just interpret / present it in a way that supports your assumptions.
No need for any fancy hypothesis testing, or having to conclude that your assumptions were wrong. Like it is not your opinion or assumption anyway, and you don't get rewarded for telling your boss or clients that they're wrong.
Who wrote those papers? How did they learn to write them? At some point, somebody along the chain had to, you know, produce an actual independent thought.
May I ask a different question: what stopped you from engaging with the material itself?
What is the alternative, we carry on without people skilled in the English language?
Basically analog thinking is still critical, and schools need to teach it. I have no issues with classrooms bringing back the blue exam books and evaluating learning quality that way.
I would go further than that, along two axes: it's not just AI and it's not just young people.
An increasing proportion of our economy is following a drug dealer playbook: give people a free intro, get them hooked, then attach your siphon and begin extracting their money. The subscription-model-ization of everything is an obvious example. Another is the "blitzscaling" model of offering unsustainably low prices to drive out competition and/or get people used to using something that they would never use if they had to pay the true cost. More generally, a lot of companies are more focused on hiding costs (environmental, psychological, privacy, etc.) from their customers than on actually improving their products.
Alcohol, gambling, and sex, are things that we more or less trust adults to do sensibly and in moderation. Many people can handle that, and there are modest guardrails in place even so (e.g., rules that prevent selling alcohol to drunk people, rules that limit gambling to certain places). I would put many social media and other tech offerings more in the category of dangerous chemicals or prescription drugs or opiates (like the laudanum the article mentions). This would restrict their use, yes, but the more important part is to restrict their production and set high standards for the companies that engage in such businesses.
Basically, you shouldn't be able to show someone --- child or adult --- an infinite scrolling video feed, or give them a GPT-style chatbot, or offer free same-day shipping, without getting some kind of permit. Those things are addictive and should be regulated like drugs.
And the penalties for failing to do everything absolutely squeaky clean should be ruinous. The article mentions one of Facebook's AIs showing CSAM to kids. One misstep on something like that should be the end of the company, with multi-year jail terms for the executives and the venture capitalists who funded the operation. Every wealthy person investing in these kinds of things should live in constant fear that something will go wrong and they will wind up penniless in prison.
I use it [Copilot / GPT / Khanmingo] all the time to figure out new tools and prototype workflows, check code for errors, and learn new stuff including those classes at universities which cost way too much.
If universities feel threatened by AI cry me a river.
No professor or TA was *EVER* able to explain calculus and differential equations to me, but Khanmingo and ChatGPT can. So the educational establishment can deal with this.
I even remember taking a Philosophy of AI class in 1999, something that should have been interesting and intellectually stimulating to any thinking student, yet the professor managed to clear the lecture hall from 300 to 50 with his constant self-aggrandizing bullshit before I stopped going too.
I had a history teacher in high school who didn't try to hide that he became a teacher so he could travel in the summer, and who made a large part of the class about his former and upcoming travels.
Most weren't this bad but they just sucked at explaining concepts and ideas.
The whole education system should obviously be rebuilt from the ground up but it will be decades before we bother with this. Someone above mentioned the Romans teaching wrestling to students. We are those Romans and we are just going to keep teaching wrestling. I learned to wrestle, my father learned to wrestle, so my kids are going to learn to wrestle, because that is what defines an educated person!
I was a poor math student in HS, but I loved electronics, so that's why I decided to pursue electrical engineering. Seeing that I simply could not handle the math, I dropped out after the first year, and started working as an electricians apprentice.
Some years later YouTube had really taken off, and I decided to check out some of the math tutors there. Found Khan Academy, and over the course of a week, everything just fell into place. I started from the absolute beginning, and binged/worked myself up to HS pre-calc math. His style of short-form teaching just worked, and he's a phenomenal educator on top.
Spent the summer studying math, and enrolled college again in the fall. Got A's and B's in all my engineering math classes. If I ever got stuck, or couldn't grok something, I trawled youtube for math vids / tutors until I found someone that could explain it in a way I could understand.
These days I use LLMs in the way you do, and I sort of view it as an extension of the way I learned things before: an infinite number of tutors.
Of course, one problem is that one doesn't know what one doesn't know. Is the model lying to you? Well, luckily there are many different models, and you can compare to see what they say.
Making assignments harder would be unfair to those few students who would actually try to solve the problem without LLMs.
So what I do is require extensive comments and ahem - chain of thought reasoning in the comments - especially the WHY part.
Then I require oral defense of the code.
Sadly this is unfeasible for some of the large classes of 200, but works quite well when I have the luxury of teaching 20 students.
Anyways this isn't actually useful advice because no one person can enact change on a societal scale, but I do enjoy standing on this soapbox and yelling at people.
BTW academic success has never been a fair measure of anything, standards and curriculum vary widely between institutions. I spent four years STRUGGLING to get a 3.2 GPA in high school then when I got to undergrad we had to take this "math placement exam" that was just basic algebra and I only had difficulty with one or two problems but I knew several kids with >= 4.0 GPA who had to take remedial algebra because they failed.
But somehow there's always massive pushback against standardized testing even when they let you take it over and over and over again until you get the grade you wanted (SAT).
I’m as cynical as they come, but even that’s a bit too much for me.
More to the point, the universities need to realize they're more like job certification centers and stop pretending their students aren't just there to take tests and get certified. Ideally they'd stop co-operating with employers that want to use them as a filter for their hiring process instead but even I'm not dumb enough to think that could ever happen, they'd be cutting off a massive source of revenue and putting themselves at a competitive disadvantage.
Like I said, I don't actually have a viable solution to any of this, but as long as we all lie to ourselves about education being some noble institution that it clearly isn't (I mean for undergrad and masters; it might actually still be that at the PhD level), then nobody will ever solve anything.
First off, I respect the author of the article for trying pen and paper, but that’s just not an option at a lot of places. The learning management systems are often tied in through auto grading with Google Classroom or something similar. Often you’ll need to create digital versions of everything to put in management systems like Atlas. There’s also school policy to consider, and that’s a whole other can of worms. All that aside, though.
The main thing that most people don't have in the forefront of their mind in this conversation is the fact that most students (or adults) don't want to learn. Most people don't want to change. Most students will do anything and everything in their power to avoid those two things. I’ve often thought about why: maybe to truly learn you need to ignore your ego and accept that there’s something you don’t know; maybe it’s a biological thing and humans are averse to spending calories on mental processes they don’t see as a future benefit – who knows.
This problem runs core to all of modern education (and probably has since the idea of mandatory mass education was called from the pits of hell a few hundred years ago). LLMs have really just brought our society to a place where it can no longer be ignored, because students no longer have the need to do what they see as busy work. Sadly, they don’t inherently understand how writing essays on oppressed children hiding in attics more than half a century ago helps them in their modern tiktok-filled lives.
The other issue is that, for example, in the schools I’ve worked at, since the advent of LLMs, many teachers and most of the admin all take this bright and cheery approach to LLMs. They say things like, “The students need to be shown how to do it right,” or “help the students learn from ChatGPT.” The fact that the vast majority of students in high school just don’t care escapes them. They feel like it’s on the teachers to wield, and to help the students wield, this mighty new weapon in education. But in reality, it’s just the same war we’ve always had between predator and prey (or guard and prisoner), but I fear in this one, only one side will win. The students will learn how to use chat better, and the teachers will have nothing to defend against it, so they will all throw up their hands and start using chat to grade things. Before you know it, the entire education system is just chat grading work submitted by chat, under the guise of, “oh, but the student turned it in, so it’s theirs.”
The only thing LLMs have done, and more than likely ever do, in education is to make it blatantly obvious that students are not empty vessels yearning for a drink from the fountain of knowledge that can only be provided to them by the high and mighty educational institution. Those students do exist and they will always find a way to learn. I also assume that many of us here fall into that, but those of us that do are not the majority.
My students already complain about the garbage chat-created assignments their teachers are giving them. Entire chunks of my current school are using chat to create tests, exams, curriculum, emails and all other forms of “teacher work”. Several teachers, who are smart enough, are already using chat to grade things. The CEO of the school is pushing for every grade (1-12) having 2 AI classes a week where they are taught how to “properly” use LLMs. It’s like watching a train wreck in slow motion.
The only way to maintain mandatory mass education is by accepting no one cares, finding a way to remove LLMs from the mix, or switching to Waldorf, homeschooling, or some other system better than mandatory mass education. The wealthy will be able to; the rest will suffer.
- Hand written midterms and exams.
- The students should explain how they designed and coded their solutions to the programming exercises (we have 15-20 students per class; with more students it becomes more difficult).
- Presentations of complex topics (after that the rest of the students should comment something, ask some question, anything related to the topic)
- Presentation of a handwritten one-page sheet of notes, diagrams, mindmaps, etc., about the content discussed.
- Last-minute changes to the more elaborate programming labs, to be resolved in class (for example, "the client" changed its mind about some requirement or asked for a new feature).
The real problem is that it is a (lot) more work for the teachers and not everyone is willing to "think outside of the box".
(edit: format)
We had to write the answers with pen and paper, writing the whole program in C. The teacher would score it by transcribing the verbatim text into her computer, and if it had a single error (a missed semicolon) or didn't compile for some reason, the whole thing was considered wrong (each question was 25% of the exam score).
I remember I got 1 wrong (missed semicolon :( ) and got a 75% (on a 1-100 point scale). It's crazy how we were able to do that sort of thing in the old days.
We definitely exercised our attention to detail and concentration muscles with that teacher.
My above comment is getting downvoted, and it's honestly a bit baffling. I'd be furious if I were paying tens of thousands of dollars to receive a university-level education in software engineering in 2025... and I had to write programs with pen and paper. It is so far detached from the reality of, not only the industry, but the practice itself, so as to be utterly absurd.
I have incredibly terrible handwriting and recall of specific syntax was difficult, but I wasn't punished terribly for either of those faults.
Already in 2018, almost everyone was cheating on typed assignments, "helping" each other with homework, and a significant portion of kids were abusing stimulants to get by. Exams were typically 70-80% of your grade. Now, when I speak with current students at that university (and from what I observed first-hand in 2020, when they went remote and generally relaxed standards and processes), the quality of the instruction and of the resulting "educated" students has fallen off a cliff.
I'd be furious if I were paying tens of thousands of dollars to receive a university-level education in software engineering in 2025 and I had no educator willing to put their foot down and stop myself and my peers from faking the fact that we know anything indicating that we deserve the degree. What's a degree worth when nobody is willing to do the work required and lay down the tough love necessary to actually educate you?
in everything young people actually like, they train, spar, practice, compete, jam, scrimmage, solve, build, etc. the pedagogy needs to adapt and reframing it in these terms will help. calling it homework is the source of a flawed mental model that problematizes the work instead of incentivising it, and now that people have a tool to solve the problem, they're applying their intelligence to the problem.
arguably there's no there there for the assignments either, especially for a required english credit. the institution itself is a transaction that gets them a ticket to an administrative job. what's the homework assignment going to get them that they value? well roundedness, polish, acculturation, insight, sensitivity, taste? these are not valuable or differentiating to kids in elite institutions who know they are competing globally for jobs that are 95% concrete political maneuvering, and most of them (especially in stem) probably think the class signifiers that english classes yield are essentially corrupt anyway.
maybe it's schadenfreude and an old class chip on my part, but what are they going to do, engage in the discourse and become public intellectuals? argue about rimbaud and voltaire over coffee, cigarettes and jazz? Some of them have higher follower counts than there were readers of the novels or articles being taught in their classes. More people read their tweets every day than have ever read a book by Chiang. AI isn't the problem, it's a forcing function and a solution. Instructors should reflect on what their institutions have really become.
The fact that AI can do your homework should tell you how much your homework is worth. Teaching and learning are collaborative exercises.
A lot of people who say this kind of thing have, frankly, a very shallow view of what homework is. A lot of homework can be easily done by AI, or by a calculator, or by Wikipedia, or by looking up the textbook. That doesn't invalidate it as homework at all. We're trying to scaffold skills in your brain. It also didn't invalidate it as assessment in the past, because (eg) small kids don't have calculators, and (eg) kids who learn to look up the textbook are learning multiple skills in addition to the knowledge they're looking up. But things have changed now.
"The fact that a forklift truck can lift over 500 kg should tell you how worthwhile it is for me to go to a gym and lift 100 kg." - complete non-sequitur.
Then maybe the homework assignment has been poorly chosen. I like how the article's author has decided to focus on the process and not the product and I think that's probably a good move.
I remember one of my kids' math teachers talking about wanting to switch to an inverted classroom. The kids would be asked to read some part of their textbook as homework and then work through exercise sheets in class. To me, that seemed like a better way to teach math.
> But things have changed now.
Yep. Students are using AIs to do their homework and teachers are using AIs to grade.
Unis should adjust their testing practices so that their paper (and their name) doesn't become worthless. If AI becomes a skill, it should be tested, graded, and certified accordingly. That is, separate the computer science degree from the AI Assisted computer science degree.
Homework is there to help you practise these things and progress, to find the areas where you're in need of help and more practise. It is collaborative: it's you, your fellow students, and your teachers/professors.
I'm sorry that you had bad teachers, or had needs that weren't being met by the education system. That is something that should be addressed. I just don't think it's reasonable to completely dismiss a system that works for the majority. Being mad at the education system isn't really a good reason for saying "AI/computers can do all these things, so why bother practising them?"
Schools should teach kids to think, but if the kids can't read or reasonably do basic math, then expecting them to have independent critical thinking seems a ways off. I don't know about you, but one of the clear lessons of word problems in school was learning to reason about numbers and results, e.g. is it reasonable that a bridge spans 43,000 km? If not, you probably did something wrong in your calculations.
Giving people credit for homework helps because it gives students a chance to earn points outside of high pressure test times and it also encourages people to do the homework. A lot of people need the latter.
My friends who teach university classes have experimented with grading structures where homework is optional and only exam scores count. Inevitably, a lot of the class fails the exams because they didn’t do any practice on their own. They come begging for opportunities to make it up. So then they circle back to making the homework required and graded as a way to get the students to practice.
ChatGPT short circuits this once again. Students ChatGPT their homework then fail the first exam. This time there is little to do, other than let those students learn the consequences of their actions.
Thinking is an incremental process: you make small changes to things, verify that they are logically consistent, and work from there.
What is there to practice here? If you know something is true, practicing the mechanical aspects of it is the textbook definition of rote learning.
This whole thing reads like the academic system thinks making new science (math, physics, etc.) is for special geniuses, and the remainder has to be happy watching the whole thing like someone demonstrating a sleight-of-hand trick.
Teach people how to discover new truths. That's the point of thinking.
You just described the homework for a college-level math class (which will consist largely of proofs). That’s what you’re practicing.
Also, it’s 2025, if you want to discover new truths in math and science you’re going to need quite a lot of background material. We know a heck of a lot of old truths that you need to learn first.
you still have to learn. The goal of learning is not to do a job. It's to enrich you, broaden your mind, and it takes work on your part.
By similar reasoning, you could argue that you can take a car to go anywhere, or have everything delivered to your doorstep, so why should my child learn to walk?
The fact that AI can replace the work that you are measured on should tell you something about the measurement itself.
The goal of learning should be to enrich the learner. Instead, the goal of learning is to pass the measurement. Success has been quietly replaced with victory. Now LLMs are here to call that bluff.
> LLMs are here to call that bluff
Students have been copying from e.g. encyclopedias for as long as anyone can remember. That doesn't mean that an encyclopedia removes the need to learn. Even rote memorization has its use. But it's difficult to make school click for everybody.
That's precisely where we went wrong. Capitalism has redefined our entire education system as a competition; just like it does with everything else. The goal is not success, it's victory.
Ironically I have used ChatGPT in similar ways to have discussions, but it still isn’t quite the same thing as having real people to bounce ideas off of.
I wrote my master's thesis about that.
It‘s an old idea.
Now, granted, AI can help with things students are passionate about. If you want to do gamedev you might be able to get an AI to walk you through making a game in Unity or Godot. But societally we've decided that school should be about instilling a wide variety of base knowledge that students may not care about: history, writing, calculus. The idea is that you don't know what you're going to need in your life, and it's best to have a broad foundation so that if you run into something that needs it you'll at least know where to start. 99% of the time developing CRUD apps you're not going to need to know that retrieving an item from an array is O(n), but when some sales manager goes in and adds 2 million items to the storefront and now loading a page takes 12 seconds and you can't remove all that junk because it's for an important sales meeting 30 minutes from now, it's helpful to know that you might be able to replace it with a hashmap that's O(1) instead. AI's fine for learning things you want to learn, but you _need_ to learn more than just what you _want_ to learn. If you passed your Data Structures and Algorithms class by copy/pasting all the homework questions into ChatGPT, are you going to remember what big-O notation even means in 5 years?
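The big-O point above is easy to demonstrate: a membership test on a Python list scans elements one by one (O(n)), while a set or dict hashes straight to the answer (O(1) on average). A quick sketch:

```python
import time

N = 2_000_000
items = list(range(N))
index = set(items)          # same data, but hashed

target = N - 1              # worst case for the list: it's at the very end

t0 = time.perf_counter()
in_list = target in items   # linear scan over ~2M elements
t_list = time.perf_counter() - t0

t0 = time.perf_counter()
in_set = target in index    # single hash probe
t_set = time.perf_counter() - t0

print(in_list, in_set)      # both True, but the set lookup is far faster
```

Swap the list for a set (or key the storefront items by id in a dict) and that 12-second page load collapses, which is exactly the kind of thing you can only do in 30 minutes if the concept is already in your head.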
What we do is he first completes an essay by himself, then we put it into a Claude chat window, along with the grading rubric and supporting documents. We instruct Claude not to change his structure or tone but to edit for repetitive sentences, word count, correct grammar and spelling, and to make sure his thesis is sound and pulled through the piece. He then takes that output and compares it against his original essay paragraph by paragraph, looking to see what changes were made and why, and, crucially, whether he thinks it's better than what he originally had.
This process is repeated until he arrives at an essay that he's happy with. He spends more time doing things this way than he did when he just rattled off essays and tried to edit on his own. As a result, he's become a much better writer, and it's helped him in his other classes as well. He took the AP test a few weeks ago and I think he's going to pass.
It's really frightening! It's like handing over the smartest brain possible to someone who is dumb, but also giving them a very simple GUI that they can actually operate, asking good enough questions/prompts to get smart answers. Once the public at large figures this one out, I can only imagine courts being flooded with all kinds of absurd pleadings. Being a judge in the near future will most likely be the least wanted job.
I usually ask it to grade my homework for me before I turn it in. I usually find I didn’t really understand some topic and the AI highlights this and helps set my understanding straight. Without it I would have just continued on with an incorrect understanding of the topic for 2-3 weeks while I wait for the assignment to be graded. As an adult with a job and a family this is incredibly helpful as I do homework at 10pm and all the office hours slots are in the middle of my workday.
I do admit though it is tough figuring out the right amount to struggle on my own before I hit the AI help button. Thankfully I have enough experience and maturity to understand that the struggle is the most important part and I try my best to embrace it. Myself at 18 would definitely not have been using AI responsibly.
Not GP, but in my experience most MSc programs will require that you have substantial undergrad CS coursework in order to be accepted. There are a few programs designed for those without that background.
https://pe.gatech.edu/degrees/computer-science
(not affiliated, just a fan)
I am planning on doing a masters but I need some undergrad CS credits to be a qualified candidate. I don’t think I’m going to do the whole undergrad.
Overall my experience has been positive. I’ve really enjoyed Discrete Math and coming to understand how I’ve been using set theory without really understanding it for years. I’m really looking forward to my classes on assembly/computer architecture, operating systems, and networks. They did make me take CS 101-102 as prereqs which was a total waste of time and money, but I think those are the only two mandatory classes with no value to me.
This is my biggest concern about GenAI in our field. As an experienced dev I've been around the block enough times to have a good feel for how things should be done, and I can catch when an LLM goes off on a tangent that is a complete rabbit hole. But if this had been available 20 years ago I would have never learned and become an experienced dev, because I absolutely would have over-relied on an LLM. I worry that 10 years from now finding a mid-career dev will be like trying to find a COBOL dev now, except COBOL is a lot easier to learn.
In terms of creative writing I think we need to accept that any proper assessment will require a short essay to be written in person. Especially at the high school level, there's no reason why a 12th grade student should be passing English class if they can't write something half-decent in 90 minutes. And it doesn't need to be pen and paper - I'm sure there are ways to lock a Chromebook into some kind of notepad software that lacks writing assistance.
Education should not be thought of as solely a pathway to employment; it's about making sure people are competent enough to interface with most of society and to participate in our broader culture. It's literally an exercise in enlightenment - we want students to have original insights about history, culture, science, and art. It is crucial to produce people who are pleasant to be around and who are interesting to talk to - otherwise what's the point?
Even before AI, our governments have long wanted more grads to make statistics look good and to suppress wages, but don't want to pay for it. So what you get are more students, lower quality of education, and lower standards to make students graduate faster. Thanks to AI, students now don't even have to really meet those low standards to pass their courses. What is left is just a huge waste of young people's time and taxpayers' money.
There are very few degrees I'm going to recommend to my children. Most just don't provide good value for one's time anymore.
Use AI to determine potential essay topics that are as close to 'AI-proof' as possible.
Here is an example prompt:
"Describe examples of possible high school essay topics where students cannot use AI engines such as Perplexity or ChatGPT to help complete the assignment. In other words - AI-proof topics, assignments or projects"
After kids learn to read and do arithmetic, shouldn't we go back to apprenticeships? The system of standardized teaching and grading seems to be about to collapse, and what's the point of memorizing things when you can carry all that knowledge in your pocket? And, anyway, it doesn't stick until you have to use it for something. Plus, a teacher seems to be insufficient to control all the students in a classroom (but that's nothing new; it amazes me that I was able to learn anything at all in elementary school, with all the mayhem there always was in the classroom).
Okay, I can already see a lot of downsides to this, starting with the fact that I would be an illiterate farmer if some in my family had had a say in my education. But maybe the aggregate outcome would be better than what is coming?
nathan_compton•23h ago
I teach at a university and I just scale my homework assignments until they reach or slightly exceed the amount of work I expect a student to be able to do with AI. Before, I would give them a problem set. Next semester, homeworks will be more like entire projects.
zeta0134•23h ago
Maybe the issue is, somewhat, the concept of graded homework in the first place. It's meant to be practice material, but is only actually useful as practice material if students put in that work. A lot of students come to resent the mountains of at-home work as the busywork that it feels like in the moment, and I feel like this whole set of emotions underpins the argument but isn't really called out for what it is all that often. Teachers understand the value of actually doing that practice, but the grading system rewards, instead, rushing through the busywork as quickly as possible. Are we not testing for the right things?
sho_hn•23h ago
Yeah, this seems like the obvious conclusion.
nathan_compton•6h ago
To be frank, a lot of programming is busywork, boilerplate, looking up information. Now that an AI can do that for my students I expect them to spend the time made up on developing higher level skills.
fn-mote•23h ago
1. Absurd. The measurement should be learning not “work”. My students move rocks with a forklift… so I give them more rocks to move?
2. From the university I’m looking for intellectual leadership. Professors thinking critically about what learning means and how to discuss it with students. The potential is there, but let’s not walk like zombies unthinking into a future where the disappearance of ChatGPT 8.5 renders thousands of people unable to meet the basic requirements of their jobs. Or its appearance renders them unemployed.
nathan_compton•6h ago
I teach data science, which involves a lot of relatively unimportant gluing together of libraries. Yes, I want the students to know how to program, but the key skills are actually coming to grips with data, applying methods correctly, etc. The AI can make writing out the actual code substantially more efficient for them, and I expect them to use that saved time to develop higher-level skills.
catigula•23h ago
Realistically I think we're just moving away from knowledge-work and efforts to resuscitate it are just varying levels of triage for a bleeding limb.
In the actual workplace with people making hundreds of thousands a year (the top echelon of what your class is trying to prepare students for) I'm not seeing output increase with AI tools so clearly effort is just decreasing for the same amount of output.
Perhaps your class is just supposed to be easier now and that's okay.
FinnLobsien•23h ago
Not rhetorical, I'm genuinely curious and could see that being a real scenario.
TheFreim•23h ago
Under your system, I would have been actively punished for not cheating. What's the point of developing a cure that's worse than the disease?
ghurtado•23h ago
What made you get into teaching?
joe_the_user•23h ago
Would the teacher then grade the massive workload with AI also? There isn't really a limit to how much output an AI can generate and the more someone demands, the less likely it is that the final result will be looked at in any depth by a human.
the_snooze•23h ago
Take gyms, for example. You have your cheap commodity convenience gyms like Planet Fitness, where a lot of people sign up (especially at the beginning of the year) but few actually stick to it to get any real gains. But you also have pricy fitness clubs with mandatory beginner classes, where the member base tends to be smaller but very committed to it.
I feel like students that are OK with just phoning it in with AI fall into the Planet Fitness mindset. If you're serious about gains (physically or intellectually), you'll listen to the instructors and persist through uncomfortable challenges.
mullingitover•23h ago
Beat this game of prisoner's dilemma with a club at the accreditation level. Students can complain all they want, but if they want a diploma which certifies that they are able to perform the skills they learned, they will have to actually perform those skills.
JadeNB•21h ago
This is way outside the scope of something that a faculty member who is, as the article says, trying to teach has any hope of implementing within a reasonable time frame. Of course the ideal is that faculty, as major stakeholders in the educational institution, should be active in all levels of university governance, but I think it is important to realize how much of a prerequisite there is for an individual professor even to get their voice heard by an accrediting body, let alone to change its accrediting procedures.
That's setting aside the fact that, even if faculty really mobilized to make such changes, in the absolute best case the changes would be slow to implement, and the effects would be slow to manifest, as universities are on multi-year accreditation cycles and there would need to be at least a few reputable universities that were disaccredited before others started taking the guidance seriously. Even if I were willing to throw everything into the politics of university governance, which would make my teaching suffer immensely, I'm not willing to say that we'll just have to wait a decade to see the effects.
AStonesThrow•22h ago
https://bible.usccb.org/bible/habakkuk/2?18
https://bible.usccb.org/bible/daniel/14?23
noitpmeder•9h ago
I see no future in education other than making homework completely ungraded, and putting 100% of the grade into airgapped exams. Sure, the pen and paper CS exam isn't reflective of a real world situation, but the universities need some way to objectively measure understanding once the pupil has been disconnected from his oracle.