Neal Koblitz's "The Case Against Computers in Math Education".
> "Youngsters who are immersed in this popular culture are accustomed to large doses of passive, visual entertainment. They tend to develop a short attention span, and expect immediate gratification. They are usually ill equipped to study mathematics, because they lack patience, self-discipline, the ability to concentrate for long periods, and reading comprehension and writing skills."
For context, the essay is from 1996. You could have told me this is from the current year and I would have believed you.
Agreed. It's a matter of degree, and I wonder what reaching the eventual limit (if there is one) looks like.
If it were my open letter, I wouldn't say "ethical" or "environmental" or any of these intersectional things, because you're giving space for lies.
People want ethical AI even if it's impossible, so we get aspirationally ethical AI. Meaning: people really want to use generative AI because it makes life so easy, and they also want it to be ethical, because they don't want to upset others, so they buy into a memetic story that it is "ethical," even if that story isn't true.
Aspirational ethics has already gotten hundreds of millions of dollars in funding. Look at generative AI in the media industry. Moonvalley - "FULLY LICENSED, COMMERCIAL SAFE" (https://www.moonvalley.com) - and yet, what content was their text encoder trained on? Not "fully licensed," not at all. Does anything else they make work without a text encoder? No. So... But people really want to believe in this. And it's led by DeepMind people! Adobe has the same problem. Some efforts are extremely well-meaning, but everyone claiming expressly licensed / permissioned datasets is telling a lie by omission.
It's not possible to have only permissioned data. Anthropic and OpenAI concede that there's no technology without scraping. Listen: they're telling the truth.
AI should be trained on all data that is available. For a significant part of the dataset, it's the most useful that data has ever been.
This blanket dismissal is not going to age well, and reads like a profession lashing out.
With the right system prompt, AI can be a patient, understanding, encouraging, non-judgemental tutor that adapts and moves at the student's pace. Most students cannot afford that type of human tutor, but an AI one could be free or very affordable; a sketch of such a prompt follows below.
"How AI Could Save (Not Destroy) Education" (https://www.youtube.com/watch?v=hJP5GqnTrNo) from Sal Khan of Khan Academy
I think the original phrase was made with the implicit assumption "as it is right now".
I do share the concerns of the undersigned, even though I don't necessarily agree with all the statements in the letter.
AI has enormous upsides and enormous downsides. The "you're going to look so dumb in the future" dismissal is lazy. Inevitability does not make something purely beneficial.
It's a fallacious line of thinking that's disappointingly common in tech-minded people (frequently seen in partnership with implications that Luddites were bad or stupid, quotes from historical criticisms of computers/calculators, and other immature usage of metaphor).
Groan... no it can't. It can simulate all those things, but at the moment, "AI" can't actually be patient or understanding, judgemental or non-judgemental.
OK it can be encouraging. "You're one good student, $STUDENT_NAME!" (1).
I really can’t understand why people don’t understand this. What am I missing?
Now is that a simulation of someone who thinks he's responding to a cretin... or actually the feelings of someone who thinks he's talking to a cretin?
You are missing what this "AI" is: a guessing system based on some patterns.
> You are missing what this "AI" is: a guessing system based on some patterns.
You just described the majority of your daily thoughts.
1. To do the homework because they view classes and grades as a barrier to their future instead of preparation for such.
2. In place of a well-crafted query in an academic database.
I agree but I have never seen an education system that had this as a goal. It's also not what society or employers actually want. What is desired are drones / automatons that take orders and never push back. Teaching people about agency is the opposite of that.
We are so stuck in a 19th century factory mindset everywhere, GenAI is just making it even more obvious.
I witnessed far more personal political pressure and cajoling than corporate/future-employer pressure. Where I went to school, the pressure on schools was usually from parents, students, and local groups concerned with civil matters. I had (until recently) indirect (and sometimes direct) exposure to this because one of my parents was an educator and a senior member of their department in a district adjoining the one I attended.
Where I went to college, it was always very clear to me what was shaped by industry vs. research and academia. I went to a research university for an uncommon hard-science degree and so there was a lot of employer interest, but the university cleverly drew a paywall around that and businesses had to pay the university to conduct research (or agree to profit sharing, patent licensing, etc). There was a clear, bright line separating corporate/employer interest from the classroom.
There are systems that nurture agency and leadership. They are the private schools and the Ivy League universities. And many great companies.
Most people don't want to be leaders and be judged based on impact. They want to be judged based on effort. They followed all the rules when writing their essay and should get an A+ even though their essay is unconvincing. If they get a bad mark, their response is to create a petition instead of fixing the problems.
Maybe we should attack our culture of busywork and stop blaming educators for failing to nurture agency.
the problem is that the goals are not effectively implemented. maybe it's more a dream than a goal, because the teachers and schools don't know how to actually reach that goal.
meaningful participation in society is often reduced to the ability to get a job by those outside of school, so you are right about employers. at least the large ones. unfortunately that works against them, because the current generation of juniors doesn't even want to learn anything. they are drones that just want to get paid, but are not motivated to learn what they need to do their job better.
I have personally observed how locals are bullied by overseas guests and choose a delusional escape into virtue signaling rather than defending themselves. I consider German upbringing to be that of a defeated people.
i don't know what you are trying to imply here. how would the feeling of defeat affect the upbringing? (i mean, i am sure there would be an effect, but what would that look like?)
what i can tell you is that the sentiment i experienced was not defeat. after all, this is neither our experience, nor our parents' (and for the current generation not even their grandparents'). the feeling we were taught was that of embarrassment, of how could we let that happen, and consequently the need to understand how we can keep it from ever happening again. except for a minority of right-wing sympathizers that we keep a close eye on.
It's really annoying that political stuff always pollutes things. I largely agree with the position about GenAI being bad for education, but that position is not strengthened by tacking on a bunch of political drivel.
The best option is to change the incentives. 95% of kids treat school as a necessary hurdle to enter the gentry white-collar class. Either make the incentives personal enrichment instead of letter grades or continue to give students every incentive to use AI at every opportunity.
Even then, an LLM running locally could still operate.
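For what it's worth, "an LLM running locally" is trivial today. A minimal sketch using llama-cpp-python, assuming you have downloaded some GGUF model weights (the path below is a placeholder, not a specific model recommendation):

```python
# Sketch of a fully offline LLM completion via llama-cpp-python.
# No network access is needed once the weights are on disk.
from llama_cpp import Llama

# Hypothetical path: point this at any locally downloaded GGUF file.
llm = Llama(model_path="models/some-model.gguf")

out = llm(
    "Explain photosynthesis to a high school student.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

The point being: no network filter or school policy can block a model that runs entirely on the student's own machine.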
Okay, then we should do this.
> Either make the incentives personal enrichment instead of letter grades
This just straight up does not work.
The incentive for not being obese is perhaps the most perfect incentive ever: you live a happier life, with a greater quality of life, for longer, with less societal friction. It's the perfect poster child of "personal enrichment".
And yet, obesity is not declining. How is this possible?
Because internal locus of control as a "solution" for systemic issues just does not work. It doesn't maybe work, it doesn't sometimes work, it never works. If you don't address institutional issues and physiological issues then you're never going to find a solution.
What I mean is, kids use AI because it's easy. It's human nature to take the path of least resistance. This has a physiological, a biological, component to it. If we're just going to be waiting around for the day people aren't lazy then we're all gonna die.
Schools are artificial environments by design. They're controlled environments by design. If we leave children to their own devices, they grow up stupid.
The problem is that education is a cumulative endeavor. We don't give calculators to kindergartners trying to learn the number line. Why not? Because if you don't have the neural connections to intuitively, and quickly, understand the number line, then Algebra is going to be a nightmare.
AI can enhance learning, if and only if the prerequisites are satisfied. If you use AI to write but you don't know how to write, then you're going to progress on and struggle much more than you should. We carefully and deliberately introduce tools to children. Here's your graphing calculator... in Algebra I, after you've already graphed on paper hundreds of times. You already understand graphing, great, now you're allowed to speed it up.
We, as adults, are very far removed from this. We have an attitude of "what's the problem" because we already have built those neural connections. It's a sort of Lord Farquaad "some of you may die, but that's a risk I'm willing to take" approach, but we don't even realize we're doing it.
To the degree it is possible, I would like to think the AI community will try to address these issues.
I understand that some of the items in their open letter show a complete incompatibility with AI, period. But misinformation, harmful biases, and energy and resource use should be things we all want to improve.
The problem with AI currently is that the students have figured out how to use it to cheat, but the teachers haven't figured out how to use it to teach.
AI is here, we need to figure out how to use it effectively and responsibly. Schools should be leading on this, instead of putting their heads in the sand and hoping it goes away.
You can ask every dumb question. You can ask for clarification on every term you don't understand. You can go off on tangents. You can ask the same thing again ten minutes later because you forgot already.
No teacher or tutor or peer is going to answer you with the same patience and the same depth and the same lack of judgement.
Is it good enough for a grad student working on their thesis? Maybe not. Is it good enough for a high school student? Almost certainly. Does it give this high school student a way to better _really_ understand biology, because they can keep asking questions until they start to understand the answers? I think absolutely.
When every learner gets the high quality support and tutoring they need, all around the world, then we can talk about what you're unwilling to participate in. Until then, may every learner get a fantastic tutor via GenAI.
> global community
As long as global means rich. 0 signatories from China, India, Russia, Pakistan, Bangladesh, Indonesia, Africa.
> Using ChatGPT to write an essay is a bit like using a forklift to lift weights. The forklift might do a perfectly good job of moving around some heavy iron plates, but you’d be wasting your time.
The point of writing essays (or doing any other school assessment) is not the completed product, it's the work (and hopefully learning) that went into it.
You can definitely use AI responsibly, but many students will not and do not.
No, you don't give arithmetic students calculators for their exams; you expect them to know how to do the arithmetic without one.
Yes, you probably do give professionals who need to do arithmetic calculators, so they can work faster and with fewer errors.
Giving calculators to people who don't know how, why and/or when to use them will still get you bad results.
Giving calculators to someone who doesn't have any use for one is at best a waste of money and at worst a huge waste of time if the recipient becomes addicted to calculator games.
Why not? Seems like a logical conclusion.
1. Introduce the concept.
2. Demonstrate an intuitive algorithm.
3. Assist students as they practice and internalize the algorithm.
4. Reinforce this learning by encouraging them to teach each other.
5. Show them how to use tools by repeating this process with the tool as the concept.
Restricting AI completely or introducing it too early would both be harmful.