Teachers use chatbots for everything else, uncritically. Not good!
I already feel disrespected by PowerPoint presentations where the presenters clearly haven't practiced in a long time and seem to be discovering the slides, coming up with the argument they want to make on the spot. I usually get up and leave.
[the slideshow] was missing important tested material, repetitive, and just totally airy and meaningless. Just slide after slide of the same handful of sentences rephrased with random loosely related stock photos.
Who cares if he saved himself some time when he completely wasted everyone else's time?

https://np.reddit.com/r/Teachers/comments/1mhntjh/unpopular_...
I keep telling people here that reddit is actually an underappreciated goldmine, but I guess feeling better than others feels too good to pass up.
In my mind, reddit is like HN, except instead of being just tech- and business-oriented people, it's every subject under the sun. Most of it is garbage (like on HN), but if you're willing to search, it's a goldmine.
When I go to reddit, I see posts about astrophotography, vintage computers, ham radio, classic cars, typewriters, film photography, sculpture, gardening, woodworking, firefighting, archery, fencing, outer space, watches, cutaway drawings, the Sega Saturn, and more topics I'm interested in.
When they go to reddit they see stuff they don't like, and people arguing about it.
I just want to shout "yes, you do, because your brain is damaged and you asked it to show those things to you".
It's the same with all social media. When I go to instagram I see people I know personally and have been in the same room with doing things I am interested in. I don't see any rage, titillation, celebrities, or gossip. Just my friends and acquaintances being friendly. (It IS annoying that I keep having to turn off suggested posts)
Even when I click on the magnifying glass, which is where people say Instagram shows them titillating things in order to get them hooked, I see scuba diving, aviation, vintage Macs, watches, and astronomy.
What is going on? Do I have a "only show this guy nice stuff he's interested in" cookie following me around the internet?
And YouTube. People will complain about YouTube showing them shit. When I go to YouTube.com, right now, I just opened it in another tab, the top six videos are: Dutch firefighters battling a stable fire, a homelabber messing around with vintage linksys equipment, a history of the development of nuclear weapons, a review of a handheld video game emulator, a guy with a VAX setup in his basement working on restoring those machines, and a video on new theories about how the moon was formed.
The next six are also laser-focused on my interests including a two hour video about various motifs found in Indo-European mythology and their origins which I am totally going to listen to in the background while at work.
I did nothing, NOTHING, except subscribe to/follow things I like and people I know, and it's great.
When people log into reddit and see people arguing about bullshit, into instagram and see models bouncing their tits, and into YouTube and see garbage, the only logical conclusion I can reach is that their brains are damaged: they set up the systems to show them these things, then decided to complain about it as some kind of hobby or something.
If anything, HN is the worst of them all because I can't tell it "show me more 'floppy disk raid array' and less 'crypto and AI bullshit'".
Not everything is a personal moral failure when society is literally out to get each and every one of us. Many of us have been damaged by the 'net and its purveyors of crap, intentionally, for their gain.
Don’t just turn and point fingers at the endusers. They sure as fuck didn’t design the algorithms.
I am convinced that is a skill that can be learned or taught.
Students using AI to generate their papers and solve complex problems.
What are we as humans even doing. Why not just connect two shitty models together and tell them to hallucinate to each other and skip the whole need to think about anything. We can fire both teachers and students at the same time and save money on this whole education thing.
Somehow, though, this actually might be the best time for learners to sit down and engage with topics without being distracted by the formal stuff (degrees, grades, points), because the latter is becoming more meaningless with each token sent down the drain.
Western countries have better conditions than much of the world for a variety of reasons, but among them is education and culture.
Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
You might argue that the AI can be a mentor or can guide society appropriately. That's not wholly untrue, but if AI is "a bicycle for the mind", you still have to have the willingness and vision to go someplace with it. If you've never thought for yourself, never learned anything independently, I just don't see how people will avoid using AI to be "stupid faster".
absolutely
up until 2022 I was optimistic for the future
our current big problems: climate change, nuclear proliferation, global pandemics, dictatorships, antibiotic resistance, all seemed solvable over the long term
"AI" however is different
previously all human societies placed a high value on education
this is now gone, if anything spending time educating yourself is now a negative
I don't see how the species survives this new reality over the long term
> Raising the next generation to outsource all thinking to AI and form a culture around influencing people 45 seconds at a time will destroy those prerequisites to our better lifestyle, and it will be downhill from there.
they said the same about tv, youtube, and even printed books. short-form videos are now apparently the new evil (somehow).
quick question: why was nobody complaining about these exact same "engagement" algorithms 20 years ago? Why only when tiktok short-form videos appeared? Popularity-based ranking was in search engines decades ago, but nobody cared then. No cocomelon back then, coincidence?
IIRC, it may be better to have the same number of real humans focussing on fewer pupils. Even when they're using VLMs as assistants.
Students:
While humans max out at a higher skill level than VLMs, I suspect that most (not all!) people who would otherwise have finished education at their local mandatory school leaving age may be better off finishing school as soon as they can use a VLM.
But also #1: There's also a question of apprenticeships over the current schooling system. Robotics AI is not as advanced as VLMs, so while plumbing will get solved eventually (and a tentacle robot arm with a camera in each "finger" is clearly superior to our human arms in tight spaces), right now it still looks like a sane thing to train in.
But also #2: Telling who is and isn't getting anything out of the education system is really hard; not only in historical systems like the UK's old eleven-plus exams, but today after university graduation when it can sometimes take a bit of effort to detect that someone only got a degree for the prestige and didn't really learn anything.
This is the current meta. Today's knowledge workers are propertymaxxing like crazy and sending their kids to trade school. Well, at least those who see the writing on the wall. The second half of the 21st century will see the rise of the PLIWs [1]. Knowledge work will become extinct. The social order will be:
1. elites: a small aristocracy, who control the access to AI
2. middle class: PLIWs
3. low class: children of today's knowledge workers who couldn't amass sufficient wealth for their kids to become PLIWs. Also called slop-feeders, as their job is to carry out the instructions coming from the AIs without questioning or understanding what they're doing.
________
[1] PLIW = Physical Labour, Inherited Wealth
The current US administration has already started this process.
A teaching culture of thinking that all you have to do is graduate students + a learning culture of thinking all you have to do is graduate.
This was already at an 8. It got dialled up to 11 during covid. And somehow dialled up to 21 after ChatGPT.
Normally, broken things can hobble along for a very long time. But the strain on what has become of education is so intense that my current guess is the chickens will come home to roost on this one sometime in 2026 or 2027.
We are avoiding work that we don't want to do and therefore saving time, which is precisely what technology promised would help us do.
Apparently we aren't.
The system we have is now legacy: https://www.ndtv.com/offbeat/student-flaunts-use-of-chatgpt-...
I think you haven't picked up enough history books if that's the only positive thing you can come up with about "schools". But I guess that's what we get after decades of "the economy is the only thing that matters" propaganda. What's the point of history, math, or science when the system just needs good little consumerist wage slaves?
Perhaps chasing what employers want at any given moment is not a good basis for an education system.
and rightly so! kids deserve better, that is awful
Link: (https://en.wikipedia.org/wiki/Paul_Watzlawick#Five_basic_axi...)
1. One cannot not communicate
2. Every communication has a content and relationship aspect such that the latter classifies the former and is therefore a metacommunication
3. The nature of a relationship is dependent on the punctuation of the partners' communication procedures
4. Human communication involves both digital and analog modalities
5. Inter-human communication procedures are either symmetric or complementary
Re: (1), the "mere" act of using AI communicates something, just like some folks might register a text message as more (or less) intimate than a phone call, email, etc. The choice of modality is always part of what's communicated, part of the act of communication, and we can't stop that. Re: (2), that communication is then classified by each person's idea of what the relationship is.This is a dramatic and expensive way to learn they had different ideas of their relationship!
Of course, in a teacher/student situation, it's the teacher's job to make it clear to the students what the relationship is. Otherwise you risk relationship-damaging "surprises" like this.
Even ignoring the normative question of what a teacher Should™ do in that situation, it was counterproductive. Whatever benefit the teacher thought AI would provide, they'd (hopefully) agree it was outweighed by the cost to their relationship w/ students. All future interactions w/ those students will now be X% harder.
There's a kind of technical rationale which says that if (1) the GOAL is to improve the student's output and (2) I would normally do that by giving one or more rounds of feedback and waiting for the student to incorporate it then (3) I should use AI because it will help us reach that goal faster and more efficiently.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
"It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally."
The act of receiving and incorporating feedback is not "inefficient", especially not in a school setting. The consummation of that process is part of the goal. Maybe the most important part!
Full Dewey quote: https://news.ycombinator.com/item?id=44597741
A couple of days later she comes home and tells me I was wrong about some of them, which I know I was not. Apparently they self-marked them as the teacher read the answers out. I decided to phone in and ask about the marking scheme, and was told I was wrong too and that basically I should have done better at GCSE mathematics.
I relayed my mathematical credentials and immediately the tone changed. The discussion indicated that they’d generated the questions with CoPilot and then fed that back into CoPilot and generated the answer sheet which had two incorrect answers on it.
The teacher and department head in question defended their position until I threatened to report them to the examination board and leadership team. Their following of the tech was almost zealot-level religious thinking, which is not something I want to see in education.
I check all homework now carefully and there have been other issues since.
In other words I predict this to be less of an issue with smaller class sizes.
The workload is making people create or extend their work using LLMs, and the reviewers/managers are also overloaded and don't have enough time to go through it, so they feed it to an LLM to get a summary, which is later pushed somewhere else to feed another process... It becomes a "broken telephone" business process where nobody really knows the detail of what's going on; it's just LLMs feeding other LLMs in an eternally absurd loop.
(Very anecdotal, local-to-the-Netherlands experience, of course.)
Looms allowed us to produce fabrics of higher quality than we ever could by hand. Fast fashion caused us to lose the ability to care for and mend clothes completely.
Computers allowed us to calculate and "think" faster than we ever could before. AI caused us to lose the ability to think completely...
Now they just have an extra tool to help them.
Things being a little bit wrong is not a huge problem. Much worse is if LLMs remove all the rigor and grit from education, the hard work to learn how to recognize facts.
Education in public schools is going to be 100% LLMs with text-to-speech, the only human adult in classrooms will be a security guard, but later they will also be replaced with AI-controlled autocannons that shoot non-lethal projectiles to discipline misbehaving kids.
Education is secondary to a teacher's job... the real issue is managing the classroom without disruption.
If our culture valued education we would value teachers and their ability to teach, and we so clearly do not.
If you object, it's because you hate children.
Eventually, there are no more misbehaving kids; there are misbehaving parents, whom the children report to the trusted phones that taught them about the world, the phones that aligned the values of your children with the values of the people who paid the people who designed the system.
Just add a student compliance add-on subscription.
• https://www.linkedin.com/company/mithril-defense/people/
• https://www.nbcnews.com/nightly-news/video/company-says-high...
Why solve the root problem when it can instead be made into a business opportunity?
* Class monitoring when no teacher present - optional collusion analysis
* Conduct enforcement in corridors - optional RFID speed ticketing to prevent running in the hallways!
* Playground overwatch - optional score keeping for licensed games such as Hopscotch [TM].
* Perimeter monitoring for truants, contraband trading and drug dealing
* Toilet break escorting (optionally at a discreet distance)
* Per-student tracking and ensemble fraternisation analysis, optional social media and online profile correlation, and real-time alerting of parental accounts on contact with other students in parent- or community-provided watchlists or handy pre-set demographic groups.
* Student mood, wellness and attitude monitoring based on body language and speech patterns. Referral to preferred behavioral therapeutic partner providers at a discount!
With facial recognition you can even send warnings and punishments directly to the student and parental phones via the CGA App and apply demerits to their account automatically. Link a lunch payment account for automatic profanity penalties!
https://www.campusguardianangel.com/faq
I would say you couldn't make it up, but you could. You'd just be called a bad writer with unsubtle and derivative ideas.
And you probably don't need fiber optic, because you're operating in "owned space" - the drones sit on charging platforms until needed. You can have, say, additional access points embedded in the walls and ceilings (for a price, but it's children's safety, so who are we to say base station rental is worth more than little Timmy not getting a 5.56 in the back in a signal blackspot!?)
Buses will be driven by AI as well, so they'll only see their parents for 10 minutes in the morning and for an hour or so during the occasional dinners they eat together, and otherwise kids will be entirely alienated and left alone.
But do not worry! There will be an AI companion for them to talk to at all hours, a nanny AI, or a nAInny, one that starts as a nanny when they are infants and will gradually grow into an educator, confidante, and true friend that will never betray them.
That nAInny will network with other nAInnies to find mates for their charges, coordinate dates, ensure that no hanky-panky goes on until they graduate college and get married, and will be there to give pointers and instructions during the act of coitus to enhance the likelihood of producing offspring that their fellow nAInnies will get to take care of.
A truly symbiotic relationship where the humans involved never have any agency but never need it as their lives are happy and perfect in every way.
If you don't want to participate in that, you will be removed as an obstacle to the true happiness of the human race.
That's what's called a GAN - a generative adversarial network.
The majority of times I see things like this it turns out that it's either:
- The "they've built it wrong" case; this one is the most common. People using - or in this case being made to use at work - tools that behind the scenes all use very cheap models (e.g. 4o-mini) with little context, half vibe-coded up, to save costs. The company making "MagicSchool" doesn't care, they want to maximize those profit margins and they're selling to school administration, not teachers, who only look at the costs and don't ever actually use the products themselves. Just like classic enterprise software in traditional companies. They need to tick boxes, show features that only show the happy path/case. It is perfectly possible to make it high quality, in a way that adds value, doesn't make shit up, and is properly validated. But especially in this niche, sales trumps everything. The hope is that at some point, this will change. We've seen the same play out with enterprise software to an extent; new such software does tend to be more usable on average than it used to be. It has taken a long time to get there though.
- The "you're holding it wrong" meme; users themselves directly using tools like Microsoft Copilot, 4o and friends (very outdated, free tiers, miles behind Claude/Gemini 2.5 pro/o3/etc.), along with having zero idea about what LLMs can and can't do, and obviously even less of an idea about inherent biases and prompting to prevent those. This combined with a complete lack of caring, along with a lack of competency - people lacking the basic critical thinking skills necessary to spot issues - is a deadly combo.
Of the problems with tasks and outcomes named in that thread, the large majority can indeed be done already with LLMs in a manner that both saves time and provides better quality than the level of those teachers rightly being criticized there. Teachers who are not even checking the output obviously don't give a single damn anyway, and that tells you enough about what the quality of their teaching would've been like pre-LLMs.
As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form throughout a course, which will reinforce the expectations of the students and reduce their mental load. The only way to do this is to prepare the material yourself.
Material created by LLM will have the issues you mentioned, yes, but it will also be less easy to teach, for the reasons mentioned above. In the US, where teaching is already in a terrible state, I wouldn't be surprised if this is accepted quietly, but it will have a long lasting negative impact on learning outcomes.
If we project this forward, a reliance on AI tools might also create a lower expectation of the quality of the material, which will drag the rest of the material down as well. This mirrors the rise of expendable mass produced products when we moved the knowledge needed to produce goods from workers to factory machines.
Commodities are one thing, you could argue that the decrease in quality is offset by volume (I wouldn't, but you could), but for teaching? Not a good idea. At most, let the students know how to use LLMs to look for information, and warn them of hallucinations and not being able to find the sources.
I recently taught a high school equivalent philosophy class, and wanted to design an exercise for my students to allocate a limited number of organs to recipients that were not directly comparable. I asked an LLM to generate recipient profiles for the students to choose between. First pass, the recipients all needed different organs, which kind of ruined the point of the dilemma! I told it so, and second pass was great.
Even with the extra handholding, the LLM made good materials faster than if I would have designed them manually. But if I had trusted it blindly, the materials would have been useless.
If you're teaching ethics in high school (which it sounds like you are), how many minutes does it take to write three or four paragraphs, one per case, highlighting different aspects that the student would need to take into account when making ethical decisions? I would estimate five to ten. A random assortment of cases from an LLM is unlikely to support the ethical themes you've talked about in the rest of the class, and the students are therefore also unlikely to be able to apply anything they've learned in class before then.
This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
By participating in the exercise during class. Introducing the cases, facilitating group discussions, and providing academic input when bringing the class back together for a review. I'm not just saying "hey take a look at this or whatever".
> If you're teaching ethics in high school (which it sounds like you are)
Briefly and temporarily. I have no formal pedagogic background. Input appreciated.
> This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
I may not have elaborated well enough on the context. I'm not creating slop in order to avoid doing work. I'm using the tools available to do more work faster - and sometimes coming across examples or cases that I realized I wouldn't have thought of myself. And, crucially, strictly supervising any and all work that the LLM produces.
If I had infinite time, then I'd happily spend it on meticulously handcrafting materials. But as this thread makes clear, that's a rare luxury in education.
> As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form throughout a course, which will reinforce the expectations of the students and reduce their mental load. The only way to do this is to prepare the material yourself.
It's absolutely necessary to have a good fundamental understanding of the material yourself. These teachers abusing AI and not even catching these obvious issues clearly don't have such an understanding - or they're not using any of it, which is effectively the same. In fact, they're likely to have a much worse understanding than your average frontier LLM, especially given this post is about high school level teaching.
> The only way to do this is to prepare the material yourself.
As brought up in other comments, what counts as "yourself"? For decades, teachers have been using premade lesson plans (third-party, school-supplied, or otherwise obtained) with minor edits. All teachers? Of course not, but it's completely normalized. Are they doing it themselves? If not, then the remainder did it together with Google and Wikipedia. Were they also not doing it themselves? Especially given how awful modern Google is (and the worldwide number of high school teachers using something like Kagi will be <100 people), simply using a frontier model, especially with web search enabled, is just a better version of doing that, if used in the same way.
None of this will be true for LLM output.
Smaller institutions are indeed better, but they are also less efficient. It's no wonder that only rich families can afford institutions like that.
Learning is hard, it's a struggle. Why learn when you can not learn?
Well I guess as long as you have an idea which model your teacher uses you are golden.
If it can cause serious societal issues right now, I wonder why it was released before being perfected. But then, what does perfected even look like? The product is darn good at the moment.
Looking around me, engineers do not understand that. Instead, they have exactly the same overblown expectations and actively push for LLMs everywhere. They will call you a Luddite [0] if you say anything else.
[0]: https://www.newyorker.com/books/page-turner/rethinking-the-l...
What's worse is that people still make the same mistake today.
Ummmm. No.
The Luddites were not opposed to progress or new machinery. The Luddites called for unemployment compensation and retraining for workers *displaced* by the new machinery (machinery they sometimes helped build!). This probably makes them amongst the most progressive people of the 1800s.
Also, what's the definition and point of "progress" to you? Because the way AI is shaking out to me screams the opposite of what I'd expect progress to look like. Assuming the likes of Altman (an individual who wants to harvest your biometric data for a scam shitcoin, by the way) can be believed and we indeed reach the singularity or AGI or whatever, is everyone except for the C-level (who is somehow magically exempt from the negative effects of this progress and irreplaceable somehow) losing their livelihoods and getting crushed under the boot of the wealthy and powerful "progress", in your eyes?
Am I? I'm saying this is what's going to happen (if people like Altman are correct), same as how the Luddites knew exactly what was going on. I'm not denying that we're not likely to stop AI development even if everyone loses their jobs, I'm saying that it's not what my vision of progress looks like.
You also conveniently left out the part I mentioned about those jobless people getting crushed by powerful boots and focused solely on what I said about job loss.
And again I'll ask, what exactly does "progress" mean for you? What world are we heading towards that counts as positive progress in your mind? Because from what I can tell you think we're going to be heading towards mass unemployment and... consider it a good thing, for some reason?
The behavior of the boosters is basically the opposite of how to make friends and influence people. I've been through plenty of hype cycles, and this is the first one where they seem to need to insult and threaten everyone.
I don't get it. And I don't feel any need to entertain it.
- "LLM X told that we should try to add this into configuration file – SOMETHING_SOMETHING = false."
- "There is no SOMETHING_SOMETHING configuration option, you have a full source, grep for it."
- "But should we try at least?"
A friend of mine insists that the market rates for their position are wrong because ChatGPT gave higher numbers. This is an example at the far end of the spectrum of confirmation bias: it matters little whether the tool was sold as a source of truth or not.
Sounds similar to social media.
Otherwise, yes, I am very concerned about society's use of LLMs -- particularly young people (students).
But now the very teachers themselves... Frankly, not surprised.
I've been using it to make me a much better tutor/mentor. But the cases outlined in (I'm assuming) the public education sector are very, very worrisome.
It is hilarious, though, how ready humans are to slurp up any opportunity to start shitting all over their work, their coworkers, and their students. Less time spent and less caring? Yes please! Then they become full-time salesmen of it to everyone else, and the social-interaction problems just explode: tribal standoffs, entirely new tribes, plotting and deception to keep getting what they've had a taste of.
Infighting distractions are so convenient as the government guts everything they work under
But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade. I would wager that the number of students and teachers using AI is now the majority population.
I face this dilemma on a daily basis when trying to do my job as a software developer. Let claude take over, and risk losing the only skill I had to differentiate myself in this harsh world? Or, take a chance on being the turtle and trying to win the race against the hare?
the good times are over, it happens. i remember watching Dall-E come out and feeling sorry for graphic designers, gloating in the knowledge that programming was too complex to automate. then they automated it.
a human is still required in the loop for vibe coding, as it's fairly fuckin useless without guidance, but i can see that changing too
To get good results out of an LLM you need to determine exactly what the system needs to do and how it should work.
That's programming! We just don't have to type all the semicolons ourselves any more.
I agree with you. And I'm not sure LLMs help me learn high-level concepts (yet). They certainly have those concepts inside their training data, and you can extract them if you do the work. But in a lot of domains (and this applies to someone old like me and someone young like my kids), knowing what to ask is the central problem.
This applies to what I see my kids doing with AI: I don't think LLMs, right now, encourage them to learn concepts as much as they quickly give them answers.
I don't see ChatGPT Study Mode as delivering on this, in my limited usage, but I would love to be wrong about that. It's a good direction indeed.
Probably this is the new frontier, where the best students are the ones that figure out how to use these tools to learn "deeply" rather than just jumping to the answers. Maybe that is how it has always been?
And if we're not having to code in Java then we never had to type all those in the first place! ;)
I am going to propose that no one should feel pressure to use any of the generative coding tools if they don't want to.
Why is "unlevel" in quotes? When it comes to physical activities, biological males have a huge advantage over biological females; high school boys routinely beat professional adult women's sport teams.
> But, when students use AI, and if there are some students that don't, the playing field is "unlevel" there as well. The students that don't perhaps want to learn a craft rather than take a shortcut to getting a grade.
I agree that this is a bigger problem than trans kids in sports. I think people are less upset about this because
1. It's a more recent development
2. They think that the kids using AI are actually putting themselves at a disadvantage, albeit one that will only become apparent after they graduate.
In client projects we see two hard costs pop up:
1. Human review time: still 2-4 min per 1k tokens, because hallucination isn't solved.
2. Inference $: for a 70B model at 16k context you pay ~$0.12 per 1k tokens - cheap for generation, expensive for bulk reading.
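To put those figures together, here's a minimal back-of-the-envelope sketch in Python using the numbers above; the $60/hour reviewer rate is my own assumption, purely for illustration:

    # Rough cost model for an LLM pipeline with a human validation loop.
    # Per-1k-token rates come from the comment above; the reviewer
    # hourly rate is an assumed, illustrative figure.
    REVIEW_MIN_PER_1K_TOK = 3.0       # midpoint of the 2-4 min estimate
    INFERENCE_USD_PER_1K_TOK = 0.12   # ~70B model at 16k context

    def pipeline_cost(tokens, reviewer_usd_per_hour=60.0):
        """Return (human_review_usd, inference_usd) for one batch."""
        review_usd = (tokens / 1000) * REVIEW_MIN_PER_1K_TOK / 60 * reviewer_usd_per_hour
        inference_usd = (tokens / 1000) * INFERENCE_USD_PER_1K_TOK
        return review_usd, inference_usd

    # For 100k tokens: (300.0, 12.0) - under these assumptions the
    # validation loop costs ~25x more than inference.
    print(pipeline_cost(100_000))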
So yes, AI will read for us, but whoever owns the *attention budget + validation loop* still controls comprehension. That’s where new leverage lives.
second order effect: across the entire population, the incentive to learn anything at all is removed
third order effect: society ceases to improve and regresses
but it's all good as I can generate boilerplate 30% faster!
They say they have good results?
Here's a review of AlphaSchool and its methods. Honestly, the review is a good one and very well written. It's worth your time if you have an inkling about alternative education and the use of AI in the classroom.
TLDR: The magic is not AI, it's that they bribe kids for good grades. Oops, sorry, 'motivate' kids for good grades.
Can someone answer what would realistically change if teachers did use ChatGPT in this way but the students never found out? Things would be more or less the same.
If the teacher had asked AI for more effective ways to ensure the students are learning the material, I really doubt a PowerPoint presentation would have been the result.
It doesn't change that this is just a quote from a reddit post and a link to it.