Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
306•theblazehen•2d ago•103 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
37•alainrk•1h ago•29 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
20•nar001•52m ago•10 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
40•AlexeyBrin•2h ago•7 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
20•onurkanbkrc•1h ago•1 comment

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
719•klaussilveira•16h ago•222 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
105•jesperordrup•6h ago•38 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
983•xnx•22h ago•562 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
21•matt_d•3d ago•4 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
78•videotopia•4d ago•12 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
141•matheusalmeida•2d ago•37 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
5•andmarios•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
243•isitcontent•16h ago•27 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
245•dmpetrov•17h ago•128 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
346•vecti•18h ago•153 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
511•todsacerdoti•1d ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
395•ostacke•22h ago•102 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
47•helloplanets•4d ago•48 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
310•eljojo•19h ago•192 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
363•aktau•23h ago•189 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
442•lstoll•23h ago•289 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
77•kmm•5d ago•11 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
98•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
26•bikenaga•3d ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
47•gmays•11h ago•19 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
281•i5heu•19h ago•230 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1092•cdrnsf•1d ago•473 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
160•vmatsiiako•21h ago•73 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
312•surprisetalk•3d ago•45 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
36•romes•4d ago•3 comments

An open letter from educators who refuse the call to adopt GenAI in education

https://openletter.earth/an-open-letter-from-educators-who-refuse-the-call-to-adopt-genai-in-education-cb4aee75
100•mathgenius•7mo ago

Comments

amelius•7mo ago
Makes sense. You also don't give calculators to students of arithmetic.
DavidPiper•7mo ago
(These days) it's hard to know what you mean by this and whether you're being sarcastic.

No, you don't give arithmetic students calculators for their exams, and you expect them to know how to do it without one.

Yes, you probably give calculators to professionals who need to do arithmetic, so they can do it faster and with fewer errors.

Giving calculators to people who don't know how, why and/or when to use them will still get you bad results.

Giving calculators to someone who doesn't have any use for one is at best a waste of money and at worst a huge waste of time if the recipient becomes addicted to calculator games.

izacus•7mo ago
The person you're responding to has clearly used the word "student". What on earth are you on about?
DavidPiper•7mo ago
I interpreted "students of arithmetic" as anyone that practices arithmetic - similar to "students of medicine", etc.
mulmen•7mo ago
Seems like a reasonable expansion of the concept to me. Why the aggressive dismissal?
mulmen•7mo ago
Can’t tell if you are serious but I will assume you are.

Why not? Seems like a logical conclusion.

1. Introduce the concept.

2. Demonstrate an intuitive algorithm.

3. Assist students as they practice and internalize the algorithm.

4. Reinforce this learning by encouraging them to teach each other.

5. Show them how to use tools by repeating this process with the tool as the concept.

darth_avocado•7mo ago
You want to limit the use of AI in schools the same way you want to limit calculators: ensure the student can do the math without a calculator, even when the computation is hard, and then teach them to use the calculator as a tool to help them move faster.

Restricting AI completely or introducing it too early would both be harmful.

mulmen•7mo ago
I'm not really convinced. This sounds reasonable but I can't formulate a good argument in favor.
stephen_g•7mo ago
Sarcasm? We actually weren't allowed to take any kind of calculator into any of our advanced maths exams in University (and I'm talking just 15 years ago, not when they were newfangled things).
sfpotter•7mo ago
One of my favorite essays on a similar topic: https://sites.math.washington.edu//~koblitz/mi.html

Neal Koblitz's "The Case Against Computers in Math Education".

EMIRELADERO•7mo ago
Wow. Now there's a quote:

> "Youngsters who are immersed in this popular culture are accustomed to large doses of passive, visual entertainment. They tend to develop a short attention span, and expect immediate gratification. They are usually ill equipped to study mathematics, because they lack patience, self-discipline, the ability to concentrate for long periods, and reading comprehension and writing skills."

For context, the essay is from 1996. You could have told me this is from the current year and I would have believed you.

ultrarunner•7mo ago
> You could have told me this is from the current year and I would have believed you.

Agreed. It's a matter of degree, and I wonder what reaching the eventual limit (if there is one) looks like.

bombcar•7mo ago
There’s a Platonic dialogue that expresses basically the same sentiment.
eikenberry•7mo ago
People see what they want to see, even very smart people.
ReDeiPirati•7mo ago
Ultimately those are tools, and I think the goal is to educate students to use them properly, especially since I don't expect the knowledge paradox to disappear anytime soon with these models.
doctorpangloss•7mo ago
There is no ethical generative AI, meaning fully permissioned datasets end-to-end. That's not yet scientifically possible. So, 100%, everyone who claims this is lying, usually by omission, and some BS startup isn't going to invent it.

If it were my open letter, I wouldn't say "ethical" or "environmental" or any of these intersectional things, because you're giving space for lies.

People want ethical AI even if it's impossible. So we get aspirationally ethical AI. Meaning, people really want to use generative AI, it makes life so easy, and people also want it to be ethical, because they don't want to make others upset, so they will buy into a memetic story that it is "ethical." Even if that story isn't true.

Aspirational ethics has already gotten hundreds of millions of dollars in funding. Just look at generative AI in the media industry. Moonvalley - "FULLY LICENSED, COMMERCIAL SAFE" (https://www.moonvalley.com) - and yet, what content was their text encoder trained on? Not "fully licensed," not at all. Does everything else they make work without a text encoder? No. So... But people really want to believe in this. And it's led by DeepMind people! Adobe has the same problem. Some efforts are extremely well meaning. But everyone claiming expressly licensed / permissioned datasets is telling a lie by omission.

It's not possible to have only permissioned data. Anthropic and OpenAI concede that there's no technology without scraping. Listen, they're telling the truth.

ACCount36•7mo ago
I loathe this entire line of "ethical" moral grandstanding.

AI should be trained on all data that is available. For a significant part of the dataset, it's the most useful that data has ever been.

xnx•7mo ago
> GenAI is a threat to student learning and wellbeing.

This blanket dismissal is not going to age well, and reads like a profession lashing out.

With the right system prompt, AI can be a patient, understanding, encouraging, non-judgemental tutor that adapts and moves at the student's pace. Most students can not afford that type of human tutor, but an AI one could be free or very affordable.

"How AI Could Save (Not Destroy) Education" (https://www.youtube.com/watch?v=hJP5GqnTrNo) from Sal Khan of Khan Academy

avmich•7mo ago
Most students cannot afford the expertise necessary to make an AI patient, etc.

I think the original phrase was made with the assumption "as it is right now".

I do share the concerns of the undersigned, even though I don't necessarily agree with all the statements in the letter.

happytoexplain•7mo ago
It's not a blanket dismissal, it's a fact in context. It should read like a profession lashing out - that's what it is.

AI has enormous upsides and enormous downsides. The "you're going to look so dumb in the future" dismissal is lazy. Inevitability does not make something purely beneficial.

It's a fallacious line of thinking that's disappointingly common in tech-minded people (frequently seen in partnership with implications that Luddites were bad or stupid, quotes from historical criticisms of computers/calculators, and other immature usage of metaphor).

xnx•7mo ago
I'd respect the statement more if it acknowledged that AI had some benefit, or potential benefit in the future, but they did not want to use it currently.
lawlessone•7mo ago
Maybe if we move from LLMs to real AI it will have benefits.
ktallett•7mo ago
If you are using the most commonly available AI and have an average ability to craft a search term, right now AI is not a particularly useful tool for learning anything. It is far too inaccurate for learning anything challenging. The key term here is could; yes, it is possible, but there is nothing yet to say we shall get there.
netsharc•7mo ago
> AI can be a patient, understanding, encouraging, non-judgemental tutor

Groan... no it can't. It can simulate all those things, but at the moment, "AI" can't actually be patient, understanding, or judgemental/non-judgemental.

OK it can be encouraging. "You're one good student, $STUDENT_NAME!" (1).

1) https://www.youtube.com/watch?v=jRPPdm09xZ8

abletonlive•7mo ago
I can say the exact same thing about you or anybody else. You can’t be a patient, understanding, encouraging, non-judgmental tutor. You can only simulate it.

I really can’t understand why people don’t understand this. What am I missing?

netsharc•7mo ago
Geezus freaking christ.

Now is that a simulation of someone who thinks he's responding to a cretin... or actually the feelings of someone who thinks he's talking to a cretin?

binary132•7mo ago
Philosophical zombies are supposed to be a thought experiment to demonstrate that solipsism and nihilism are stupid, not a rhetorical device to equate human minds to linear algebra statistical parrots.
hulitu•6mo ago
> What am I missing?

You are missing what this "AI" is: a guessing system based on some patterns.

abletonlive•6mo ago
Every argument against AI always falls back to some weird reductionism that simply does not match up with reality, as if it's a gotcha. I pity the fool who has such a poor working model of reality based on whatever internal pessimistic biases you may have.

> You are missing what this "AI" is: a guessing system based on some patterns.

You just described the majority of your daily thoughts.

eikenberry•7mo ago
Whether the AI is patient, understanding, etc., is entirely up to the person interacting with it to decide, just like they decide this when interacting with people. You can never know the internal state of the other party in a conversation, so it is up to you to model it, and if modeling it is best done with human metaphors, then use human metaphors.
Loughla•7mo ago
My experience in higher education is that students use AI for one of two things:

1. To do the homework because they view classes and grades as a barrier to their future instead of preparation for such.

2. In place of a well crafted query in an academic database.

akomtu•7mo ago
AI is turning into a cult that's dividing us into those who support it and those who reject it. Arguments on both sides are flimsy, as no one really understands what it is. People see it as a black-box magic crystal.
bshepard•7mo ago
"You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.” -- someone wise, or was he?
sergiomattei•7mo ago
This is absurd. I’ve learned so much from having an LLM tutor me as I go through a dense book, for example.
kunzhi•7mo ago
> At its heart, education is a project of guiding learners to exercise their own agency in the world. Through education, learners should be empowered to participate meaningfully in society, industry, and the planet.

I agree but I have never seen an education system that had this as a goal. It's also not what society or employers actually want. What is desired are drones / automatons that take orders and never push back. Teaching people about agency is the opposite of that.

We are so stuck in a 19th century factory mindset everywhere, GenAI is just making it even more obvious.

TheNewsIsHere•7mo ago
I attended a public school system which, while it did falter at times in various ways, did a fairly good job of meeting its stated mission, which was more or less exactly that.

I witnessed far more personal political pressure and cajoling than corporate/future-employer pressure. Where I went to school, the pressure on schools was usually from parents, students, and local groups concerned with civil matters. I had (until recently) indirect (and sometimes direct) exposure to this because one of my parents was an educator and a senior member of their department in a district adjoining the one I attended.

Where I went to college, it was always very clear to me what was shaped by industry vs. research and academia. I went to a research university for an uncommon hard-science degree and so there was a lot of employer interest, but the university cleverly drew a paywall around that and businesses had to pay the university to conduct research (or agree to profit sharing, patent licensing, etc). There was a clear, bright line separating corporate/employer interest from the classroom.

scarecrowbob•7mo ago
While you are likely correct about systems, I have known quite a few individual educators who have the goal of helping their fellow humans learn about their agency in the world.
jjmarr•7mo ago
Employers want a high-agency leadership class and drones for the individual contributors.

There are systems that nurture agency and leadership. They are the private schools and the Ivy League universities. And many great companies.

Most people don't want to be leaders and be judged based on impact. They want to be judged based on effort. They followed all the rules when writing their essay and should get an A+ even though their essay is unconvincing. If they get a bad mark, their response is to create a petition instead of fixing the problems.

Maybe we should attack our culture of busywork and stop blaming educators for failing to nurture agency.

em-bee•7mo ago
the education i received in germany did have this goal. the teachers had this goal, and i have the impression that the teachers and schools my kids go to have this goal as well. i can't say how universal that is, but the opposite is not universal either.

the problem is that the goals are not effectively implemented. maybe it's more a dream than a goal, because the teachers and schools don't know how to actually reach that goal.

meaningful participation in society is often reduced by those outside of school to the ability to get a job, so you are right about employers. at least the large ones. unfortunately that works against them, because the current generation of juniors doesn't even want to learn anything. they are drones that just want to get paid, but are not motivated to learn what they need to do their job better.

x3qt•7mo ago
Just yesterday, I talked to a neighbor who has two kids attending a local school in Mitte. He told me that the children are constantly indoctrinated into group conformity, obedience to authority, and fear of "wrong-think," with a good splash of wokie-talkie on top of it. To me, that sounds like a complete erasure of agency. Schools must provide knowledge, not override the nurture given by parents.

I have personally observed how locals are bullied by overseas guests and choose a delusional escape into virtue signaling rather than defending themselves. I consider German upbringing to be that of a defeated people.

em-bee•7mo ago
> I consider German upbringing to be that of a defeated people

i don't know what you are trying to imply here. how should the feeling of defeat affect the upbringing? (i mean, i am sure there would be an effect, but what would that look like?)

what i can tell you is that the sentiment i experienced was not defeat. after all, this is neither our experience, nor our parents' (and for the current generation, not even their grandparents'). the feeling we were taught was one of embarrassment, of how could we let that happen, and consequently the need to understand how we can avoid it ever happening again. except for a minority of right wing sympathizers that we keep a close eye on.

x3qt•7mo ago
I think that the Allied victors laid the foundation of the current German education system on initial denazification and subsequent extreme pacification, to such a degree of impotence that people refuse to defend themselves even when they are fully capable of neutralizing a criminal, preferring to become victims rather than use force.
em-bee•7mo ago
i don't have this feeling at all. what experience do you base that on?
x3qt•7mo ago
I’ve seen multiple instances of robberies where the attacker was a head shorter and could have been easily stunned, or worse, with a single hit, yet people gave away their valuables because even the thought of using violence is taboo. Of course, the police always say, “Just file a complaint,” which never results in anything. It’s not a joke: even if violence is used purely to stop a criminal, the police will prosecute you, lol. I’m not American, but I like the idea that one could defend themselves and their property using all means necessary.
andy99•7mo ago
> Current GenAI technologies represent unacceptable legal, ethical and environmental harms, including exploitative labour, piracy of countless creators' and artists' work, harmful biases, mass production of misinformation, and reversal of the global emissions reduction trajectory.

It's really annoying that political stuff always pollutes things. I largely agree with the position about GenAI being bad for education, but that position is not strengthened by tacking on a bunch of political drivel.

KevinMS•7mo ago
I'm not the biggest fan of AI for everything, but you couldn't create a sharper dagger to the heart of the current education system. If you are in the U.S., carefully watch for the D party to turn on AI in their messaging and you'll be witness to the strong influence that teachers unions have on them. Disagree with me all you want, but keep your eyes open, I guarantee you'll see it soon.
sfpotter•7mo ago
Interesting thought but my impression is that the democrats are much more beholden to other forces at play in the school system. I have friends who are teachers in the public school system, have been active in the union, and are indeed against AI in the classroom (although they're hardly rabid or unreasonable about it). On the other hand, the school administrators and IT departments are much more aggressive about pushing AI on them and pressing them to work it into the classroom somehow. Considering that the democrats are largely captured by corporate interest, and considering that tech/AI is one of the biggest corporate interests there is right now... I just don't see things playing out the way you predict.
KevinMS•7mo ago
the administrators and IT departments are not in the teachers unions.
sfpotter•7mo ago
Yes... exactly my point.
KevinMS•7mo ago
and the teachers unions have vastly more power than those guys
sfpotter•7mo ago
Hey, I'm sorry, but such a blanket statement is pretty weak on its own. I'm interested in your perspective. Can you provide some concrete details that support your point? Because the people I know feel like AI in the classroom is inevitable and that they don't have much power in the face of the authority that wants to impose it on them, which would seem to contradict what you're saying.
thinkingtoilet•7mo ago
Every teacher I've talked to has said the influence of AI has been negative. Why wouldn't they fight to remove it from the classroom?
KevinMS•7mo ago
They are talking about cheating with it, not replacing teaching with it.
atleastoptimal•7mo ago
The cat is out of the bag. Kids will use AI to write papers, learn topics, cheat on take-home tests, etc. Only a completely closed-off environment with no access to the internet could prevent this.

The best option is to change the incentives. 95% of kids treat school as a necessary hurdle to enter the gentry white-collar class. Either make the incentives personal enrichment instead of letter grades or continue to give students every incentive to use AI at every opportunity.

sillystu04•7mo ago
> Only a completely closed-off environment with no access to the internet could prevent this.

Even then, an LLM running locally could still operate.

const_cast•7mo ago
> Only a completely closed-off environment with no access to the internet could prevent this.

Okay, then we should do this.

> Either make the incentives personal enrichment instead of letter grades

This just straight up does not work.

The incentive for not being obese is perhaps the most perfect incentive ever: you live a happier life, with a greater quality of life, for longer, with less societal friction. It's the perfect poster child of "personal enrichment".

And yet, obesity is not declining. How is this possible?

Because internal locus of control as a "solution" for systemic issues just does not work. It doesn't maybe work, it doesn't sometimes work, it never works. If you don't address institutional issues and physiological issues then you're never going to find a solution.

What I mean is, kids use AI because it's easy. It's human nature to take the path of least resistance. This has a physiological, a biological, component to it. If we're just going to be waiting around for the day people aren't lazy then we're all gonna die.

Schools are artificial environments by design. They're controlled environments by design. If we leave children to their own devices, they grow up stupid.

The problem is that education is a cumulative endeavor. We don't give calculators to kindergartners trying to learn the number line. Why not? Because if you don't have the neural connections to intuitively, and quickly, understand the number line, then Algebra is going to be a nightmare.

AI can enhance learning, if and only if the prerequisites are satisfied. If you use AI to write but you don't know how to write, then you're going to progress on and struggle much more than you should. We carefully and deliberately introduce tools to children. Here's your graphing calculator... in Algebra I, after you've already graphed on paper hundreds of times. You already understand graphing, great, now you're allowed to speed it up.

We, as adults, are very far removed from this. We have an attitude of "what's the problem" because we already have built those neural connections. It's a sort of Lord Farquaad "some of you may die, but that's a risk I'm willing to take" approach, but we don't even realize we do it.

JKCalhoun•7mo ago
Whether you agree or disagree, I am happy to see a community putting out (in writing even) their problems with AI as it exists.

To the degree it is possible, I would like to think the AI community would try to address the issues they raise.

I understand that some of the items in their open letter show a complete incompatibility with AI — period. But misinformation, harmful biases, and energy/resource use should be things we all want to improve.

jacknews•7mo ago
I don't think resource use is any business of teachers to be honest.

The problem with AI currently is that the students have figured out how to use it to cheat, but the teachers haven't figured out how to use it to teach.

AI is here, we need to figure out how to use it effectively and responsibly. Schools should be leading on this, instead of putting their heads in the sand and hoping it goes away.

nineplay•7mo ago
I find this all-or-nothing attitude extraordinary. Chatbots are the best personal tutors you'll ever find, and I tell students so. Do you need to understand mitosis for Bio 101? Ask your favorite chatbot. Then ask what daughter cells are - a question you might be too afraid to ask in class because maybe it was covered yesterday and you weren't listening. Then ask why there are no "son" cells - which you'd also be too afraid to ask about in class, but you want to know.

You can ask every dumb question. You can ask for clarification on every term you don't understand. You can go off on tangents. You can ask the same thing again ten minutes later because you forgot already.

No teacher or tutor or peer is going to answer you with the same patience and the same depth and the same lack of judgement.

Is it good enough for a grad student working on their thesis? Maybe not. Is it good enough for a high school student? Almost certainly. Does it give that high school student a way to _really_ understand biology, because they can keep asking questions until they start to understand the answers? I think absolutely.

neurostimulant•7mo ago
Should we teach our kids to outsource their thinking to genai services where the big clouds control the gate? It would be less of an issue if local genai with comparable capability were more accessible to the general public.
123yawaworht456•7mo ago
Previously: An open letter from educators who refuse the call to adopt [printed books, ballpoint pens, calculators, computers, the internet] in education
shminge•7mo ago
There's a big difference between "Here's this tool that helps you think" (i.e. a calculator or pen) and "Here's this tool that does the thinking for you". And before you say that AI can fall under the first option, plenty of schoolchildren will take the easy way out and not use it responsibly.
thedevilslawyer•7mo ago
> Further, GenAI adoption in industry is overwhelmingly aimed at automating and replacing human effort, often with the expectation that future “AGI” will render human intellectual and creative labor obsolete. This is a narrative we will not participate in

When every learner gets the high quality support and tutoring they need, all around the world, then we can talk about what you're unwilling to participate in. Until then, may every learner get a fantastic tutor via GenAI.

thedevilslawyer•7mo ago
Also,

>global community

As long as global means rich. 0 signatories from China, India, Russia, Pakistan, Bangladesh, Indonesia, Africa.

luqtas•7mo ago
do you think those have access to computers with AI for their education?
thedevilslawyer•7mo ago
Yes.
penguin_booze•7mo ago
Just in: new tariffs have been announced on educators. Not sure on whom, but there it is.
shminge•7mo ago
One of my favourite quotes on this topic:

> Using ChatGPT to write an essay is a bit like using a forklift to lift weights. The forklift might do a perfectly good job of moving around some heavy iron plates, but you’d be wasting your time.

The point of writing essays (or doing any other school assessment) is not the completed product, it's the work (and hopefully learning) that went into it.

You can definitely use AI responsibly, but many students will not and do not.