The teachers and professors I've known have always loved adapting their lessons to suit the interests of their students - I think that's a core teaching skill.
I'm open to hearing disagreements, but reading through the use cases and evaluations doesn't leave me convinced this tool provides any benefit beyond what simply giving teachers more resources would.
Link to an example:
https://learnyourway.withgoogle.com/scopes/AOcKkhsL/slides-n...
It also recommends questions on initial load that help you understand or explore the paper; here's a demo[2] from the popular Attention Is All You Need paper.
The code is all open source[3]. It uses Google's Gemini 2.5 Flash-Lite model to keep costs down (it's completely free atm), but that can be changed via an env var if you run it locally.
What's the value-add of the wrapper this person wrote at all?
Simply replacing the domain arxiv.org with asxiv.org does all that for me now.
Also, it links to pages in its answers and scrolls the PDF to them on click, letting you view the PDF side by side with the chat.
Good goal, but they've got to start somewhere.
Delivering an educational experience even 80% as effective as the best private tutors would be a huge achievement.
For tutoring, I think the approach in https://www.nature.com/articles/s41598-025-97652-6 is promising. (Prompts are included in the supplementary material on the last page: https://static-content.springer.com/esm/art%3A10.1038%2Fs415... ) They start with an existing collection of worked exercises, give the chatbot access to the full solution and then let students interact with it to get a walkthrough or just check their own solution, depending on how much help the student thinks they need.
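The moving parts of that pattern are small, too. Here's a minimal sketch, assuming an OpenAI-style chat API - the model name, prompt wording, and WORKED_SOLUTION placeholder are mine, not the actual prompts from the paper's supplementary material:

    # Sketch of a solution-grounded tutor; prompt and model are
    # illustrative stand-ins, not the ones from the paper.
    from openai import OpenAI

    client = OpenAI()

    WORKED_SOLUTION = "...full worked solution for the exercise..."

    SYSTEM_PROMPT = (
        "You are a tutor with access to the full worked solution below. "
        "Walk the student through it step by step, or check their own "
        "attempt, revealing no more than the step they are working on.\n\n"
        "SOLUTION:\n" + WORKED_SOLUTION
    )

    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        student = input("student> ")
        if not student:
            break
        history.append({"role": "user", "content": student})
        reply = client.chat.completions.create(model="gpt-4o-mini",
                                               messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print("tutor>", answer)

The key design choice is that the chatbot is grounded in a known-correct solution rather than generating one on the fly, which is much of why the approach is promising.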
EdTech has the worst returns of any industry in venture capital. Why?
There are no teachers who say that technology has generally improved experiences in classrooms, even if some specific technology-driven experiences like Khan Academy and Scratch are universally liked. Why?
When you look at Scratch, which I know a lot about, one thing they never do is allege that it improves test scores. They never, ever evaluate it quantitatively like that. And yet it is beloved.
Khan Academy: it is falling into the same trap as e.g. the Snoo. If you don't know what I'm talking about, it comes down to: who pays? Who is the customer? Khan Academy did a study that showed a thing. Kids are not choosing to watch educational YouTube videos because of a study. It is cozy learning.
But why does Khan Academy need studies for a test score thing? Why does Google? This is the problem with Ed Tech: the only model is to sell to districts, and when you sell to districts, you are doing Enterprise Sales. You can sometimes give them a thing that does something, but you are always giving them exactly what they ask for. Do you see the difference?
It doesn't matter if it's technology or if it is X or Y or Z: if the district asks for something that makes sense, great, and if it asks for something that doesn't make sense, or doesn't readily have the expertise to know what does and doesn't make sense, like with technology, tough cookie. Google will make something that doesn't make sense, if it feels that districts will adopt it.
We can go and try the merits of Learn Your Way; thankfully they provide a demo. All I'll say is, people have been saying "more reading" is the answer, and there is a lot of fucking reading in this experience, but maybe the problem isn't that there isn't enough text to read. The problem is that kids do not want to read, so...
Everywhere you look in education there are problems. There isn't going to be some stylized answer.
These Google guys - and a lot of other people who write comments online - go and promote something they think is a world view or theory, when it's really just a bunch of stereotypes and projections of their own college-aged vengeances. VC likes these kinds of people! These Google guys fit that mold. I can agree with the broad strokes of techno-utopia, but that also means you need space to say that your app is bad, your art is ugly, and your text is long and boring.
These Google guys do not have space for criticism. They are Enterprise Sales. If the district asks for tasteless Corporate Memphis art, that's the art they're going to get - I'm going to focus on the art because I know something about art, and the text that appeared in the demo was so horrifically boring that I didn't read it. Have you opened a children's book? None of it looks like fucking Corporate Memphis!
One thing I am certain of is that these Google guys do have taste; they are smart people. Their problem is Enterprise Sales. Don't get me wrong: if you are narrowly focused on giving people what they want, your creative product will fail.
If you have free weights, a bench, and a place to run, you are already 98% of the way to being a healthy fit human. There is ample information available on how to use those tools.
You don't need a trainer, a $10,000 gym machine, and a $5,000 stationary bike.
Education has gotten so insane with per-student spend, and the results are the same as for the kids who had pencils and 10-year-old textbooks.
I think this one is fairly simple. Half of consumer spending comes from the top 10% of earners, whose kids we can assume have generally pretty decent educations already. The people who need education help the most don’t have money to spend on it.
The parents who do have money to spend want to invest in tailored education from a human teacher, not cheap, generic scalable technology. So margins will be low.
So if you want to make money, you need to focus on things like enrichment and test/college prep for the top 10%. Helping inner city kids who are 3 grades behind in reading doesn’t print money and VCs don’t want anything to do with that.
But there's no potential market in the top 10%. I mean, these people just hire a good teacher and that's it. There's no room for improvement; there's nothing that can beat a good teacher.
> Helping inner city kids who are 3 grades behind in reading doesn’t print money
This is a political problem. Political problems cannot be solved by technological means. So there is no market here either.
> Helping inner city kids who are 3 grades behind in reading doesn’t print money
100% and this is broadly why ed tech doesn’t move the needle.
Another POV is: pick your disruption.
AI stuff has definitely disrupted education... for the worse. It happened within a political and economic status quo. The AI stuff did not need to wait for the movement of any levers of power to happen at all.
If you are seeking a way to fix low returns in ed tech (and for that matter, Health IT, which is like, #2 worst performing sector): attack Enterprise Sales. Destroy it. Make stuff that destroys the monetization system where districts buy exactly what they ask for. It isn't complicated.
Scratch and early Khan Academy provide a template for good ed tech targeting the learner directly.
Whether you make $1 million or $1 billion doing this, I don't know.
Chegg got to, and fell from, great heights by delivering cheating, which ChatGPT does for free now. Cheating ALSO worked within a political and economic status quo: some 30% of students cheat, and the cheating is, apparently, a necessity for the survival and thriving of a vast number of people all around the world.
There are markets. Lots of them. You can do good or bad. Paul Graham doesn't invest in Cluely: even if it makes money, it's kind of evil. (A16z doesn't care about cheating; the people who run it are the ones who cheated in school.) So there are even opportunities that are missed by the very best seed fund.
To me, a big opportunity lies in things government education cannot do. Some things good, some evil, some complicated. For example, no matter how hard it seems to try, the government cannot functionally collect on a trillion bucks of student loans. What does that mean for education? I don't know, but I think if you are looking for $1b+ opportunities, they're there.
The real challenge in teaching Newton's laws of motion to teenagers is that they struggle to deal with the idea that friction isn't always there. When students enter the classroom, they arrive with an understanding of motion that they've intuited from watching things move all their lives, and that understanding is the theory of impetus: https://en.wikipedia.org/wiki/Theory_of_impetus
An AI system that can interrogate individual students' understanding of the ideas presented and pose questions that challenge the theory of impetus would be really useful, because 'unteaching' impetus theory to thirty students at once is extremely difficult. However, what Google has presented here, with slides and multiple-guess quizzes, is just a variation on the 'chalk and talk' theme.
The final straw that made me leave teaching was the head of languages telling me that a good teacher can teach any subject. Discussions about 'the best pedagogy' never give any consideration to what is being taught; there's an implicit assumption that every idea and subject should be taught the same way. School systems have improved markedly since they were introduced in the nineteenth century, but I think we've got everything we can out of the subject-agnostic approach to improvement, and we need to start engaging with the detail of what's being taught to further improve.
As an example, as you're reading it, try posing a few relevant counterfactuals.
What they are doing internally after launching something like this is patting themselves on the back, updating their resumes, and promptly forgetting it exists.
(see NotebookLM)
Tell me this wasn't foreign languages? :face_palm:
Okay, I was totally with you until this,
> but I think we've got everything we can out of the subject-agnostic approach to improvement, and we need to start engaging with the detail of what's being taught to further improve
I think if you walk into the bottom 80% of classrooms you would not see interleaving, spaced repetition, recall-over-reread, or topic shuffling to avoid interference.
There's a load of understanding we've gained in pedagogy and human learning that has not affected how we structure formal education yet.
Where have you taught? I taught in Australia and the United Kingdom, where many of these things were mandated by the promulgation of spiral curricula by the relevant government departments. I'm aware in the US that, for example, algebra is taught as one or two block courses, but in the school systems I've taught in, algebra is taught as a few 'topics' of about a month in duration each, sprinkled throughout the whole four or five years in which mathematics is mandatory in secondary school. For Year 7 to 10 in Australia, there would be one or two topics for each of physics, chemistry, biology and earth sciences, covered across each year, building up from year to year. None of this was a choice by individual teachers or even schools; it was an artefact of the way the curricula are structured.
Since these things I mentioned are well demonstrated to be effective and you don't think there's anything left to be had with a subject-agnostic approach, I infer you have a high opinion of how well these countries have implemented these "tactics". Is that right?
Spaced repetition is a good example. It's objectively better than most other memorization techniques, but it's just one tactic for learning.
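(For anyone curious, the scheduling core of classic SM-2 - the algorithm behind SuperMemo- and Anki-style review intervals - fits in a few lines. A simplified sketch; real implementations add fuzzing and lapse handling:)

    # Simplified SM-2 spaced-repetition scheduler.
    # q is recall quality on a 0-5 scale.
    def sm2(q, reps, interval, ease):
        if q < 3:                      # failed recall: start over
            return 0, 1, ease
        if reps == 0:
            interval = 1
        elif reps == 1:
            interval = 6
        else:
            interval = round(interval * ease)
        ease = max(1.3, ease + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
        return reps + 1, interval, ease

    # e.g. three successful reviews of one card:
    state = (0, 0, 2.5)
    for q in (4, 5, 4):
        state = sm2(q, *state)
        print("next review in", state[1], "days")   # 1, 6, 16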
In this sense "teaching well requires a specific set of tools and tactics" is exactly how "a good teacher can teach anything" would make sense.
The problem is it doesn't make sense.
He specifically says, "I think we've got everything we can out of the subject-agnostic approach to improvement"
So we all agree that subjects would benefit from specific interventions. The difference is he's going further and saying this is the only way forward; there are no general gains left to be had.
From the strength of the claim alone, this is hard to believe. Where do you stand on this?
A charitable POV, though: I'd say it's about leverage. Learning outcomes globally suffer steep, steep cliffs, and it's inevitably due to socioeconomic factors.
It's hard to argue that more Chromebooks, spaced repetition, and catering to learning styles are the missing pieces Johnny needs to get out of the hood.
As a person in tech, I believed for a long time that if only we had better learning materials, people could orient and self-motivate better around subjects. (Learning needs to be hard; it's biology. The brain takes notice of, and retains, new and challenging stimuli. So "making learning easier" is a misnomer; the real insight is figuring out how to get people to self-motivate into hard things.)
I still think that's true, to your point, but all these takes address just one of many, many problems, and they aren't equal in leverage; I think that's where OP is coming from. There's outsized leverage in domain-specific pedagogy.
The issue with learning things isn't that the material hasn't been tailored to be interesting or relatable to me; it's just that there's a lot of content and it's hard. The solution is figuring out how to set up a kind of spoon-feeding algorithm that checks I'm understanding little bite-size pieces along the way, in addition to giving layman's terms for things that don't require the formal description (e.g., deciphering math notation).
ChatGPT Study mode has actually been quite good at this when you prompt it correctly and are studying a subject that it's well trained on.
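The gating loop itself is the easy part; the hard part is generating good checks. A sketch of the loop, with hand-written chunks and questions standing in for whatever the model would generate:

    # Sketch of a "bite-size chunks with comprehension gates" loop.
    # The chunks and checks are placeholders for LLM-generated ones.
    LESSON = [
        ("A derivative measures instantaneous rate of change.",
         "Rate of change of position with respect to time is called?",
         "velocity"),
        ("The derivative of x**2 is 2*x.",
         "What is the derivative of x**2 at x = 3?",
         "6"),
    ]

    for chunk, question, answer in LESSON:
        print("\n" + chunk)
        while input(question + " ").strip().lower() != answer:
            print("Not quite - reread the chunk above and try again.")
    print("\nDone: every chunk was checked before moving on.")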
AI rephrasing words better for each individual isn't interesting to me. Automatic, interactive small quizzes, puzzles, and self-adjusting difficulty levels would be amazing, but I don't see AI really reaching that level.
When I see AI "quiz me on this", it gets stuck asking direct factual questions about the text. But a good question challenges assumptions and prods deeper understanding.
https://chatgpt.com/share/68cc844a-14d4-8009-88e3-53f5d781b5...
You need examples that point at the general case - like Newton's cradle.
Conservation of momentum helps.
Just giving the students access to something that simulates a frictionless world to play around with? Maybe with a simple on/off switch.
Something I've probably seen shared by others here in WebGL at some point, and far cheaper to run than GenAI.
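Even a few lines get the idea across; a toy version of that on/off switch (1-D puck, illustrative numbers):

    # Toy 1-D puck: flip FRICTION on and off to see Newton's first law.
    # With friction the puck coasts to a stop; without, it glides forever.
    FRICTION = True          # the "on/off switch"
    mu, g, dt = 0.3, 9.81, 0.1

    v, x = 5.0, 0.0          # initial velocity (m/s) and position (m)
    for _ in range(50):      # simulate 5 seconds
        if FRICTION and v > 0:
            v = max(0.0, v - mu * g * dt)   # kinetic friction decelerates
        x += v * dt
    print(f"after 5 s: x = {x:.1f} m, v = {v:.1f} m/s")

With FRICTION = True the puck stops after about 4 m; with it off, it's still doing 5 m/s at 25 m, which is exactly the intuition impetus theory gets wrong.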
1) well-thought-out exercises (covering all cases, whether in math or Spanish)
2) CORRECT solutions (just saying, because even ChatGPT gets these wrong, even for high school math)
3) the ability to enter answers with a pen (on an iPad if need be)
Just a way to make zillions of exercises if I want to. And for my kids, the problem these days is that teachers won't (AND mostly can't; they just don't know their subject) help them make a lot of exercises.
"a list can be used for a recipe"
"a set can be used to list all the unique ingredients you need to buy for a week's meals"
"a map can be used for a cookbook"
"a priority queue can be used to manage orders in a busy restaurant kitchen"
"a food-pairing graph can show which ingredients taste good together"
Maybe I'm over-estimating the taste of 7th graders, but I feel like I would get sick of this really quickly.
I don't know when these dorks will understand that education isn't a technical problem. It's a social and emotional problem.
existing material is clear enough to learn from.
i.e. we've been educating people for thousands of years, even without textbooks.
Education itself isn't primarily a technology problem. Treating it as such is an administrative failure, as is pursuing a technological solution in many scenarios that are first social in nature.
By using the tools available at the time we did, certainly. That involves physical tools like writing, but also non-physical tools like better ways of conveying and disseminating information, better ways of testing the efficacy of various approaches, etc, etc.
Education has to evolve, as it always has. While I'm not sure TFA is it, I do think LLMs will have a role to play in making learning more accessible and enjoyable for everyone, not just kids.
A lot of the failure of learning is a failure of teaching. Incompetent teachers throw disconnected information at you instead of trying to explain or lead you to an understanding of what something is about. I attribute part of this to a loss of solid philosophical coursework where you are taught to think from first principles, taught within a larger integral context, and taught to reason clearly. It used to be the case that everyone with a college degree had at least some basic philosophy under their belts (compare a Heisenberg to a Feynman to a Krauss; the progression is clear). And don't forget the success of the trivium and quadrivium or some variation of them that was often presupposed and prepared students for intellectual work.
That said, teaching is hard. I don't fully blame teachers who cannot effectively convey subjects to 30 kids, especially these days. Even in an ideal situation, there's so much variance in how people learn best that it would be hard to blame it on incompetence if a teacher cannot reach every one of their students.
Considering how hard it's going to be to fix the bigger problems with society* - obsession with credentials, lack of funding, better-paying and less stressful jobs elsewhere meaning fewer teachers, etc, etc - shouldn't we embrace tools that help kids learn things in a way more accessible to them? As I said, I don't think TFA is it, and we should obviously be aware of the issues, but surely people on HN of all places can see the value in tailoring subjects and lessons to a student's preferred method of learning?
* This is not to say we shouldn't also try to solve those problems
And we've been doing a pretty crappy job educating people without written texts. The written word led to a tremendous acceleration of knowledge transmission. The printing press enabled that transmission at a larger, but unified, scale.
Anything we even remotely recognize as science has only ever been practiced by literate cultures.
Discarding technology for education because it's not a panacea is an absolute failure as an educator.
For the most part, it's a matter of clear presentation, student engagement, and effort. A well-written textbook (many suck) and a good teacher (same) and a properly disposed student (which presupposes things like certain virtues; parents are responsible for teaching and supporting these for the most part). Technology won't get around the basic human reality, and sometimes, there's nothing to fix. Some people aren't interested.
It's annoying that software is such a high-gross-margin industry - I would love to see Google's cash get taken away so they can't take on these vanity projects.
I do agree that it would be better to dial in on a pupil's interest than the grade level (my kid may be 7th grade in English but 9th grade in Chemistry, for instance).
I don't think the failure mode here is really "7th graders will see through the superficiality of this really quick". I think the failure mode here will be:
> Explain computer science basics for a 7th grader interested in poop and butt-sniffing
Although who knows... maybe this will unleash a generation of memes of the likes we have never seen before. And if the side-effect is more people are at least conversant in more topics, well, maybe that's not a failure mode at all
But... which kids? Do we have a fundamental problem reaching kids who are interested in basketball? My kid had a period of being interested in dinosaurs, but I never felt the need to reframe everything in dinosaur-terms because of that. In fact, you kinda want them to broaden their horizons beyond dinosaurs?
The real challenges in education are elsewhere, and a lot of it has to do with socioeconomic status and bad influences early in life.
Haha, you think most Googlers understand this? No chance.
This is why products like this fail, dead on arrival - the person leading the charge simply doesn't get it.
But hey go ahead and burn the cash of shareholders.
I do think there is pedagogical value in showing where these concepts can be used practically, and the advantage of LLMs is that you can transform the examples to what you're actually interested in. For example, the Red Blob Games series on A* pathfinding is really good at showing how Dijkstra and graph-traversal algorithms work, for a use-case (video games) that is appealing to a lot of nerdy people.
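(For anyone who hasn't seen those articles: the core of Dijkstra in that game-map framing is tiny. The graph and movement costs below are made up for illustration:)

    # Minimal Dijkstra on a tiny "game map" graph, in the spirit of
    # the Red Blob Games articles; nodes and costs are invented.
    import heapq

    graph = {                  # node -> [(neighbor, movement cost)]
        "spawn":  [("forest", 2), ("road", 1)],
        "road":   [("forest", 1), ("castle", 5)],
        "forest": [("castle", 2)],
        "castle": [],
    }

    def dijkstra(start):
        dist = {start: 0}
        frontier = [(0, start)]
        while frontier:
            d, node = heapq.heappop(frontier)
            if d > dist[node]:
                continue                      # stale queue entry
            for nxt, cost in graph[node]:
                nd = d + cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(frontier, (nd, nxt))
        return dist

    print(dijkstra("spawn")["castle"])   # 4: the cheap path goes via forest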
But there's another flaw that gets overlooked most of the time, which is that we're raising kids to believe that "why are you teaching me something that you're not 100% sure I will need in my day-to-day life" is a sensible question, when it really isn't.
Outside of my 2-year stint in the game development industry, I never really needed most of what I learned about trigonometry in my day-to-day life. But that doesn't mean it wasn't useful.
Yes, we should make the subject matter more approachable to kids, but we should also try to shift the paradigm so that kids are more open to learning new things.
I think the truth is a lot simpler. Most kids won’t use trig in real life.
Advanced math came in handy just once in my life. My keys fell in the toilet, and I realized the best tool was a wire bent like an integral.
Later at university I complained about the lack of applications in the textbook, and my classmates became very upset. One of them responded, "we are mathematicians, we do not concern ourselves with applications."
Your suggestion is interesting but I am not convinced that a student would be helped by aligning the examples with their interests. I could see a student asking how trig relates to computer games and the example the LLM generates becoming much more involved.
I see no problem with the examples being boring. The people that developed these techniques had such fundamental problems to solve and the wonder to me is the human mind that came up with these methods.
All this to say, maybe we lack appreciation for the fundamental sciences that underpin every aspect of our modern lives.
The trouble is a lot of those practical examples fall into the, "why would I care category". I had a high school physics teacher who described his university antics, one of which included a funny story of a bunch of his friends climbing on top of each other to measure the height of a flag pole. I guess the profs got tired of dealing with students scaling flag poles because I was measuring the height of mountains on the moon at the same university a couple of years later. The thing is nobody really cares about the height of a flag pole, while only a few would care about the height of the mountains on the moon.
The reality is the interesting applications are much more involved. They either require a depth of thought or process or a depth of knowledge that isn't appropriate for a textbook question. Take that trigonometry in games example. The math to do it was in my middle school curriculum, but it becomes obvious that computer graphics is more than trigonometry the moment you try to frame it as an example. I had linear algebra in high school. That will take you pretty far with the mathematics, but it will also be clear that a knowledge of computer programming is involved. Even knowing how to program isn't going to take you all of the way because few are interested in rendering vertices and edges ...
And that is just the obvious progression of knowledge in a simple application. Physics itself involves buckets full of trigonometry in extremely non-obvious ways, non-geometric ways.
Instruction and instructors won't be going away.
Most people have never considered that textbooks need to evolve.
It's like the LLM/AI shift - people don't think about how software used to be.
The first 3 are simply plain wrong.
GenAI's gonna GenAI I guess.
> "a list can be used for a recipe"
A recipe is not just a list of steps, it's also a list of ingredients, potentially an introduction, some pictures, etc.
Ask a kid to draw you a mock recipe; you won't just get a list of steps in return.
> "a set can be used to list all the unique ingredients you need to buy for a week's meals"
Ingredients have quantities attached. If I tell you that to make a cake you need sugar, an egg, and flour, and give you all the steps but no quantities, you're not making a cake. A map is the obvious choice for storing ingredients.
I agree that ingredients are unique, but they have attached data which is just as relevant as the ingredient itself.
> "a map can be used for a cookbook"
I just don't understand how a map is supposed to represent a cookbook; it just doesn't make sense, not even with the additional context of the previous metaphors.
At best it would be somewhat understandable if it had said a map can be used for a cookbook with dish names mapping to recipes, but even this would be a stretch and assume a dish can be made in a single way.
Keep in mind the goal is to teach someone who has zero idea about data structures what they are, not to give some analogies to an experienced software engineer.
I don't even know what it means, tbh. I feel it's going to confuse the hell out of 7th graders.
I personally prefer a serious text without bringing in unrelated concepts like food, but this is still understandable.
The first meaning of "use for a recipe" is "use as an ingredient."
But then, it's a pretty weird thing to explain to begin with, approximately every human on the planet knows what the word "list" means. So what does this pseudo-definition add?
This does almost nothing to explain what a “list” is in the CS sense. Teaching material needs to show how a list could be used for a recipe, and from that the student might begin to form a first incomplete understanding of what a “list” is.
- a list can be used for the steps of a recipe
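E.g., a few lines of Python that show the use instead of asserting it (recipe and basket contents invented for illustration):

    # Concrete, not just asserted: recipe steps are an ordered list,
    # the shopping basket is a set (duplicates collapse away).
    steps = ["crack eggs", "whisk", "heat pan", "pour", "fold"]
    for i, step in enumerate(steps, 1):
        print(i, step)                 # order matters: it's a list

    basket = set()
    for meal in (["eggs", "butter"], ["butter", "flour", "eggs"]):
        basket.update(meal)
    print(basket)                      # unique ingredients: it's a set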
If you ask it to explain some complex algorithm, it'll go "imagine a football field..." If you tell it you're a college-educated software engineer in the system prompt and ask it a non-cs question, it will go "imagine a variable..." instead.
I've had very good luck using LLMs to do this. I paste the part of the book that I don't understand and ask questions about it.
Asking the right kind of questions is a genuine skill.
It applies to every domain of life where you are at the mercy of a "professional" or at the mercy of some knowledge differential. So you need to be a good judge of whether the answers you're getting are good answers or bad answers.
A skill we cannot rely on kids to have, and which I think takes years of training and learning for even adults to really acquire. (To be clear, I'm not thinking about AI prompting. I'm thinking about the assumption-breaking, understanding-probing questions the learner asks themselves and seeks answers for, to build and refine their mental models of something they're learning.)
Because questions are fundamentally about knowledge differentials, which will always exist for individual human beings. We can't at any point know everything.
Know how to know what you don't know and get a good grasp of what it means to know in the first place.
Knowledge isn't absolute.
A great question can compensate for a simple answer.
Kids can ask questions, but they rely on an experienced teacher to answer them effectively.
Teaching someone effectively through answering questions requires the teacher, through the student's questions, to build a model of the student's model: to answer not only the question directly, but also the question that should have been asked instead.
A good end-of-chapter quiz doesn't check that a reader read the text. It asks questions whose answers rule out possible (or common) incorrect mental models the reader may have built.
A learner skilled in asking truly excellent questions asks questions for which even a bad or simple answer rules out and refines their assumptions.
And that is a skill I doubt is ever truly mastered.
It's like the XY problem. A great teacher answers X instead of Y. A great learner asks about X in the first place.
Whaaaaat? How does this work? If you're trying to learn a new topic, how are you supposed to recognize a good (and truthful) answer, whether it's from an LLM or instructor?
By being skeptical of the answers, testing the answers, corroborating with other sources, etc.
This isn't new. This is literally how we've been exploring this knowledge game for thousands of years.
I bet when you're learning a new subject you do the same exact thing.
Imagine being handed a textbook with a warning in the first page "10% of the facts here are made up (including this one). Good luck!"
You, as the reader, are supposed to verify the claims the author makes, whatever you're reading.
You never expect anything to be a source of truth.
That's why every textbook either cites its sources or shows its work.
Very rarely do you have any textbook that's just a list of facts out of thin air. I don't think I've seen a single textbook, even bad ones, do this. They always cite their claims, or they show the logical steps to prove or justify a claim. Good textbooks make it easy to follow and clearly show their steps for the convenience of their readers.
Any good textbook seriously considers both the historic literature on their subject, presents the context of that literature, and shows some kind of proof of work that synthesizes all of that to support their claim.
This is always the case. This is how basic academic writing is done.
And it is the job of the reader to follow those citations, and to verify the claims. That's literally how our academic system works.
It's basic literacy.
How do you verify the claims? Replicate every piece of research cited?
> Know how to find information in the old technology called “books"
> Can think critically about statements made in such different contexts as advertising, entertainment, news reporting, and books written in an earlier century.
So, before indulging this any further, do you mind citing your source for the definition of "basic literacy" that includes the claim "never expect anything to be sources of truth"?
If you’re a complete novice reading a niche graduate level textbook on Tolstoy’s critique of the Russian war effort in War and Peace, you’re going to get some wild hallucinations, and you’ll have no idea how to determine fact from fiction.
If you’re reading a high school textbook about the history of pre-revolution Russia, the models will have pretty comprehensive coverage of every concept you’re likely to come across.
I have also found another use for this. For example in studying modulation techniques in communication systems, I went back and forth between Monte-Carlo simulations and theoretical approximations to see how accurate each one is. And then added some more realistic error scenarios to do an end-to-end validation. In this case the LLM was used as a shortcut to write repetitive code that was verified manually, and this was complementary to the text-book, and made reading the topic more engaging, enjoyable, and comprehensive.
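For readers who haven't seen that workflow: a minimal version of the cross-check, BPSK over AWGN, with the simulated bit-error rate compared against the closed-form expression (parameters illustrative):

    # Monte-Carlo vs. theory for BPSK over AWGN, the kind of
    # back-and-forth described above. Theoretical BER = Q(sqrt(2*Eb/N0)).
    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(0)
    n_bits = 1_000_000

    for ebn0_db in (0, 4, 8):
        ebn0 = 10 ** (ebn0_db / 10)
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                    # BPSK: 0 -> -1, 1 -> +1
        noise = rng.normal(0, np.sqrt(1 / (2 * ebn0)), n_bits)
        received = symbols + noise
        ber_sim = np.mean((received > 0) != bits)
        ber_theory = 0.5 * erfc(np.sqrt(ebn0))
        print(f"Eb/N0 = {ebn0_db} dB: sim {ber_sim:.2e}, theory {ber_theory:.2e}")

When the two columns agree, you trust the simulation scaffolding enough to bolt on the more realistic error scenarios that have no closed-form answer.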
Like, if you had made the plain-PDF readers do some manual thinking by trying to place the topic into the same kind of familiar/favored context themselves, wouldn't that have been the better comparison?
I think using GenAI for learning is cool and exciting (especially for autodidacts) but I'm not excited by this particular study structure.
I don't know. I've been trying, but I think there are two fundamental issues. First, I don't think it's all that useful for "out-of-order" learning and for explaining concepts in non-conventional ways.
To give you a practical example, there's a certain order in which we teach math, and every subsequent step builds on the previous one. But quite often, this order is just a matter of convention, not necessity. You can explain a ton of higher-math concepts in terms of high-school algebra and geometry, it's just not something we do because we don't intend to teach high-schoolers any of that, and for undergrads, it's more expedient to lean on their knowledge of calculus / mathematical analysis than to start by drawing triangles.
And not once have I succeeded in convincing an LLM to circumvent that. If a topic is explained using mathematical analysis in college textbooks, this is how it will always answer. Which actually sucks for that curious high-schooler.
But second, LLMs just aren't nearly as dependable as textbooks. It's not even the base error rate - I think they're 90%+ accurate on most run-of-the-mill scientific questions - but that once they make a confident-sounding mistake and you try to drill down, they keep digging that hole and sending you more and more off track. It's amusing if you know the domain and can spot the mistakes. It's a huge waste of time when trying to learn a new field.
It's precisely why vibe-coding is more useful to experienced developers who can immediately reject bad results.
I don't particularly expect models to be dependable in responses, but I see how that presents a much larger problem in a learning context. I'm ok with bad responses that I can fight back against, but I also wouldn't reach for an LLM for a new field by default either.
For me, I do like using an LLM as a supplemental learning aid along with other traditional resources. I haven't tackled a deeper, new-to-me field yet with one. Maybe it's time for that . . .
Call me pessimistic, but this technology looks more poised to replace teachers in schools than supplement them.
And note how I didn't mention a country; I think this is a widespread issue beyond country borders currently.
My frame of reference is Europe and America. I suspect more of Asia than just China mirrors that respect, as education feels almost unhealthily emphasised there.
The so-called advanced world that prospers on innovation, design and R&D is skimping on the very thing that is supposed to be our advantage. It's frustrating.
That project led me to conscript AI as a private tutor. With custom instructions, ChatGPT and Gemini now surface new words and nudge my prose toward clarity, turning a vague fear of erosion into conviction. A dedicated subset of users will inevitably harness such tools to strengthen their expressive range and communicative precision.
Until recently, my writing rarely left emails and journals. Now, with AI as scaffold and sparring partner, I draft short stories from my own life and recast them in the voices of authors I admire. This feels less like a technology poised to supplant teachers, and more like the substrate for a renaissance in autodidactic education.
No AI you ever create will get a kid to choose learning how math works over doing basically anything else with their time. The point of school is not to teach, it is to discipline children to participate in education. Otherwise, why have it at all? Kids can find extensive information and guides for basically any topic they want on the internet right now.
The entire "AI education" thing misses this.
I vividly remember hitting some blocker (7th grade chem, 4th grade reading, 2nd grade dinosaurs), where I had a question that the teacher dismissed.
My mind was stuck (blocked) as it couldn’t get past the question I had, and in a public school setting, it wasn’t worth the time for the teacher to dive down the tangent (or they simply weren’t prepared).
My hope for LLMs in education, is that they can supplement traditional curriculums such that students can go “off the rails” while still being nudged back to the desired outcome.
- How do we know electrons “spin”?
- Why does that word behave differently than others (in English)?
- How big is a sauropod compared to a blue whale?
I’ve found that on my own journey through education, it’s these sparks of interest that drive towards deeper understanding, rather than surface level rote memorization.
TFA says: “What if students had the power to shape their own learning journey?”
In the context of nonfiction/textbooks, this is already possible!
I didn’t read “How to read a book”[0] until high school, but it opened the world for me on another silly blocker I had, which is that material should be consumed start to finish.
Hopefully with “AI” more students will learn that there are many paths towards understanding the world, and not just the curriculum in front of them.
This was fantastic, because everything I learned about Laplace, Fourier, etc. had an immediate connection to another area of interest, which made the class much more engaging.
Steve Jobs was able to turn around Apple in such a fashion that they became even bigger, by letting go of the PC market and going mobile.
What’s been most valuable for me is the way they create a kind of imperfect but effective Socratic dialogue with whatever I’m reading. I was the kid who always had my hand up in class, not to show off, but because I hated leaving something unexplained. Good teachers could make a text come alive by answering those questions.
LLMs give me some of that back outside the classroom. Even when I ask them to speculate, the process forces me to interrogate the material and refine my own model of it. That’s changed how I read, learn, and even how I experience novels.
So innovation on this “Socratic interface” and other interfaces is pointed in the right direction.
Also, Smiley is getting up there in fictional characters I admire. Not Iroh level, but up there.
Would appreciate feedback!
There's a bit of overlap with Learn Your Way I guess. I'm not sure users need to toggle between alternate formats of the same instruction though. Instead the instruction itself should be as multi-modal as possible, and offer flexibility to ask questions... which even gemini.google.com offers so I'm not sure this is a net improvement over that.
2. About the AI part: let's remember that at the other end of these bells and whistles there is a huge amount of expended electricity and sprawling server-farm infrastructure. That's the hidden cost. For now, it might seem like someone else is footing the bill, but that will not last for very long - in fact, it's already starting not to last. See:
https://www.newsweek.com/ai-data-centers-why-electric-bill-s...
A typical US household is paying a 26 USD "AI tax" this year through its electricity bill.
https://learnyourway.withgoogle.com/scopes/1KNlGW5E/immersiv...
It starts with just this sentence, followed by a quiz on that sentence:
> When we are born, we inherit our genetic makeup and biological features. However, our identity as human beings develops through interactions with others in society. Many experts in both psychology and sociology have described the process of self-development as a key step to understanding how that "self" learns to function within society.
Followed by the quiz:
> Question 1: Based on the provided text, what is a key difference in focus between psychology and sociology regarding self-development? A) Sociology is concerned with inherited traits, whereas psychology is concerned with societal norms. B) Psychology studies societal functions, while sociology studies individual identity. C) Psychology focuses on genetic makeup, while sociology focuses on social interactions. D) Both fields exclusively study the biological features inherited at birth.
I thought D made the most sense, as nothing in the immediate text provides a more granular answer. But it's not D. It made me question my intelligence: maybe I misread the sentence, maybe I needed to read something else? Oh, there is a button for the entire PDF, but then isn't the purpose of it to break down the PDF into chunks and ask me questions on what I'm reading?
I'm sure this is a fixable bug, but I was looking for the "provide feedback" button, there is none.
This would be very frustrating to a student.
I'm sure it is, but the bar for accuracy in education is way too high for a mistake as blatant as this to be allowed to slip through.
(I'm sure someone will chime in about their useless Hum 10 teacher making an even bigger gaffe, and how this can be excused, it's just a beta...)
(edit to add:)
This is just regular poor-quality language-model output. The language model is trained on data where the phrase "based on the provided text" commonly appears between a text segment and subsequent questions, but the model has no knowledge of the very specific, limiting meaning of that phrase: it restricts the following question to assessing reading comprehension only, not general knowledge.
So no, I don't think this is a minor bug that's easily fixable: a pure language model will always associate key signaling phrases with the wrong type of questions, because it has no concept of (didactic) mode. It basically treats all phrases as ornamental instead of purposeful.
All of this hurts. GenAI cannot replace people grounded in reality. Especially not teachers and mentors. The effects will be nauseating to anyone who cherishes the development of human beings and minds in general.
Teaching and mentoring is a two-sided thing. The mentor, if adequately tutored or capable himself, learns more than the student. I understand that this is something "we" hope to achieve for AI but it's so insanely dumb to do it this early, it almost makes me angry. Almost. These people are just doing their jobs. So, as usual, I'm calling their bosses dumb fucking pathetic shitheads of idiot trash who fucking cost our kind sooooooo much fucking potential it almost makes me angry.
Sorry, gotta keep the anger at "almost", max. Can't be angry at lack of levels of consciousness or awareness. It's beyond genetic, a decision of "culture" and conformity; back to topic:
Teaching is one of those mythical edges that hone themselves. And I don't mean just the skill, I mean the entire category, the concept, the inter-generational action that happens on absolutely all levels. GenAI in education, applied anything below "correctly" on the scale of civilization, in any interval/integral, is taking XP points off a higher dimension that is exclusive to us in the kingdom of animals.
Don't fuck this up. Don't listen to your bosses. Quit. Found your own companies. I can't put it in words, yet. I don't have the peace of mind. But it's too damn important not to mention it. Every dimwit with access to drugs and hookers can make fuck-you money. For himself and others. FUCK THAT.
we could blame universities... but nobody forced their hand and they adapted to lower standards because ... [no peace of mind to take the time, ok, economists x'] ...
I can supply a personal narrative, though: ( I have to exclude the conspiratorial part )
I moved back home when my little sister had issues in 8th grade. It was coincidental. I started tutoring her.
(background: After I turned 13, every spring, summer and fall, I WORKED at a Gaertnerei aka garden center until I finished my Abitur with a C)
She made progress. (She's a bad example because someone fucked with her thyroid right when she started to make progress. (It's Germany; they are D.U.M.B with eugenics, they outsourced it to the USA because they are bad losers, but whatever.))
She made progress. What did I learn? A LOT. Zero hand-waving. In retrospect, I learned the MINIMUM (I - do you understand who I felt like back then and NOW?) (for clarity, I THINK I know where your comment comes from) ... HOW can they fail to teach this to youngsters in university? Wait ... I get it. The system works well to reach its objective, but CAN it be aware of what lies ahead? No way. I was 15 or 16 - now 37 - when I was planting some dumb trees in some dumb pot and realized, in my ADHD obliviousness, that neither my boss nor his son (my age, my then colleague) knew how things would turn out. Working class, programmed* to be this way, ok.
Fast forward: my classmates, the Jahrgang before and after, all teachers for more or less ten years now. ... ... ... None founded their own school, despite abilities to get the finances, ... ... ... all abiding to the standard curriculum ... ... ... none of them being a Studienrat ( some higher regional rank ) ... ... all feeling very cool cuz they make 5k a month (if) ... ... ... ... ... .PERIOD, moving on:
THEY wouldn't think of
> supporting instructors or instructional evolution.
HOW THE FUCK WOULD/COULD COMPUTER SCIENTISTS, OR ENGINEERS THINK THE SAME?
GERMANY: (our region lacks E V E R Y T H I N G ... agricultural corporations with something worthwhile developing ... I had someone uniquely capable (and experienced) in my Jahrgang but ... ... ... nothing ... not even a 0.1%er now or soon to be ... so even if you judge from a punched drug dealer's aka Fortune 500 POV ... N O T H I N G ... HOW, except portfolio capitalism featuring "please don't make my child understand how disrespectful my capabilities are to the evolution of my grandparents' genes" ... )
None of the people promoting these things addresses this question in any tangible or productive way; it is almost always "yes, it's a problem", or dissembling.
It's not an uncommon or unworthy question. They just don't like the only possible answer, preferring the opaqueness in which nobody gets held accountable, because that is what the given incentives produce.
Ask and answer. It's a worthy question.
It's very hard for humans to imagine some mundane things when they go against their expectations. I would imagine AI would suffer similar issues, but there is a potential for them to do better. It strikes me as fascinating because I would consider it to be a sign of a greater level of reasoning, but because it goes against expectations would be considered worse by many.
Some examples of what I am thinking about.
Grass is only around 60 million years old, yet depictions of Earth from earlier than that frequently show grass.
Jesus was not depicted with a beard until a couple of hundred years after his death.
Anyway …
Nor do I understand why you'd say "ouch, that hurt" while dribbling...
I already used it to learn like this 6-12 months ago.
We are letting terrible students become terrible teachers who go on to teach terribly and produce the next generation of terrible students.
Your tone suggests that this wouldn’t be horrifying so I wonder what you meant by this.
I was referring to the visual experience, not the dark lord possessing people.