The harsh reality is that academia as a whole needs to be revamped. The easy solution would be to revert to paper-only exams and physical attendance, but that would also exclude a ton of students. A huge number of modern students are online students, or enrolled in similar programs where you don't need to show up physically. Moreover, I don't think universities / colleges themselves want to revert, as it would mean hiring more people, spending more on buildings, etc.
We already have a real-world scourge of tech bros who view learning about other subjects as beneath them and who rate their own ignorance as more valuable than the knowledge of experts in that subject. This wonderful new world of AI is only going to exacerbate that problem.
Point is, it's not a problem with academia and it's not easy to solve. If we could just magically peer into people's minds and figure out what they know definitively then we'd solve a lot of world issues. Alas, we must settle for numbers and papers instead.
Can you provide an example?
I encountered this imposter too.
I understand companies want to protect themselves. They need to understand some people just don’t want to be on social media (HN aside).
In other words, there needs to be a middle ground. Job hunting is already an arduous, demoralizing task. Forcing people to be on social media just adds another layer of stress.
I gave it the prompt "Suppose you are in a job interview for a front-end web position and someone asks you about how you use the React library and the hardest problem you ever had to solve with it. How might you react, along with a somewhat amusing anecdote?"[1] and it did pretty well. I think I'd play with it a bit to see if I can still suppress some of the LLM-isms that came out, but a human could edit them out in real-time with just a bit of practice too... it's not like you can just read it to your interviewer, you will need to Drama Class 101 this up a bit anyhow. It'll be easier to improv a bit over this than a bare Wikipedia list.
In other words, as with the question the article title asks, the question isn't about what happens "when" this starts being possible... the capability has run ahead of all but the most fervent AI users' understanding, and it is already here. It's just a matter of word-of-mouth getting around as to how to prompt the AIs to be less obvious. I also anticipate that in the next couple of years, the AI companies will get tired of people complaining about the "default LLM voice" and it'll shift to something less obvious than it is now. Both remote interviews and college writing are really already destroyed; the news just hasn't gotten around to everybody yet.
(In fact I suspect that "default LLM voice" will eventually become a sort of cultural touchstone of 2024-2026 and be deliberately used in future cultural references to set stories in this time period. It's a transient quality of current-day LLMs, easy to get them out of even today, and I expect future LLMs to have much different "default voices".)
[1]: And in keeping with my own philosophy of "there's not a lot of value in just pasting in LLM responses," if you want to see what comes out you are welcome to play with it yourself. No huge surprises though. It did the job.
Honestly, the pervasiveness of LLMs looks likely to erode the critical thinking of entire future generations. Whatever the solution, we need to take these existential threats a lot more seriously than we treated social media (the plague before this current plague).
They are only allowed once students can do it on their own, because now you have a foundational understanding and the tool just speeds you up.
Thanks for the insightful comment!
Using a machine to do the very thing you are supposed to be demonstrating a proficiency in is cheating and harms the legitimacy of the accreditation of the school.
I suspect this is the true Fermi paradox. Once a civilization reaches a certain point, automation becomes harmful to the point that no one knows how to do anything on their own. Societal collapse may take us back to the Bronze Age, if not further.
Classic illustration of this: https://www.thomasthwaites.com/the-toaster-project/
Ask students to solve harder problems, assuming they will use AI to learn more effectively.
Invert the examination process to include teaching others, which you can’t fake. Or rework it to bring the viva voce into evaluation earlier than PhD.
There are plenty of ideas. The problem is, a generation of teachers likely needs to be cycled through for this to really work. Much harder for tenured professors.
Every technical revolution “threatened to erode the critical thinking of a generation”, and sure, the printing press meant that fewer texts were memorized rote… not to say there are no risks this time, but rather that it’s hard to predict in advance. I can easily imagine access to personalized tutors making education much better for those who want/need to learn something.
I’m more worried about post-truth civilization than post-college writing civilization for sure.
Objectively, many of them did erode some amount of critical thinking, but led to skill transfer to other domains so maybe it was neutral. Some of them were productivity boons and we got the golden age that boomers hail from. Other revolutions have just been a straight degradation in QOL. Social Media and LLMs seem to be in that vein. I'd also throw in gambling ads/micro-transactions and smoking as things that haven't exactly helped society. Out of those four examples, we only tried to course correct on smoking and, after a long period of time, we can see it's a net benefit to not smoke.
> I’m more worried about post-truth civilization than post-college writing civilization for sure.
These are the same civilizations on the same timeline.
My opinion is that even if capabilities halted now, LLMs would be more economically valuable than the internet (compared over the same 50 year trajectory). And I predict that they will not halt any time soon.
Maybe this yields more resources to invest in educators like the OP author, and we end up more enriched than ever before:
> I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale.
The only thing I’m confident about is volatility: the range of outcomes is wide.
Maybe maybe maybe
Should we gamble on the lives of future gens for some economic maybes or should we take a minute to think through all probable outcomes and build out some safeguards?
Anyone who claims otherwise doesn't remember their school days.
What does this look like? Like asking children learning to read to demonstrate they can read Shakespeare?
A staple of modern education is scaffolding learning, where skills are incrementally learnt and build on previously learnt simpler skills. Much of what students learn in high school and early on at university is meant as a stepping stone to acquiring more applicable skills. Just like you can't start assessing learning abilities from Shakespeare, students simply need to be told whether they master those simpler skills before moving up. Doing away with assessing simpler skills just because AI can now perform them isn't the solution to addressing a lack of critical thinking.
What will need to happen is that early subjects will need to stop being used for gatekeeping. Rather than treating education as an adversarial game, we should make it a collaborative one and instead of trying to make it more difficult for students to pass assessments with AI (and mechanically making it harder to pass them without as well), we should give students a stake in learning what they need to learn.
We definitely need to rethink some assumptions.
I view education as serving at least two goals: one is job-specific training (eg you will go on to do an English Lit PhD) and the other is proving you can sit in a chair for long enough to do a job, and do critical thinking (you will graduate and get a “degree required” job).
I think we are mostly worried about how the death of the essay affects the latter. In some sense what you are studying is irrelevant for this case, what is important going forward is developing critical reasoning skills _in partnership with AI_.
Not claiming to have solved this, but some ideas would be:
- ask students to produce way more papers, but now you are an editor checking for high-level understanding and catching falsehoods/errors.
- stop asking students to write essays; instead offer them simulated conversations with AI experts where they need to display knowledge to keep up (eg “Art Museum Curator Simulator”), or tutoring kids from lower years.
- stop asking people to do 4-year degrees and use AI to give much better apprenticeships for the real work (this might be my favorite).
> we should give students a stake in learning what they need to learn.
I strongly agree with this one; I think for many people the college degree has fallen victim to Goodhart’s law. Trade schools might actually be better, but it took ZIRP to create those for software engineering.
How do you test the students' students if you can't test the students?
Exactly the threat of AI. With regards to jobs, we'll have a shock but we will adapt as with any other wave of automation.
Yes and no.
Upper middle class parents as a group will still instill critical thinking skills in their kids.
But the above comment reveals more about SES (socioeconomic status) and education in general rather than something specific to critical thinking or LLMs. The current education environment in the US heavily favors kids from higher SES families for a number of reasons. LLMs won’t change this.
The challenge for the education system, imho, is to find a way for lower SES kids to thrive in an LLM environment. Pre-LLM, this was already a challenge, but was possible. Post-LLM, the LLM crutch may be too easy for some lower SES folks to lean on such that they don’t develop the skills they need to develop higher order skills.
Alternatively, if we still want to cling on to this ritual of measuring the performance of students, you could give each and every one of them oral examinations with AI professors.
Institutions that prepare people for future jobs have an even harder time justifying what they’re doing than the people who are looking for jobs right now. It’s just inertia at this point.
Not to mention that AI can educate the people better by solving Bloom’s Two Sigma Problem.
So colleges are obsolete except as four year cruises for entertainment and networking.
What would be the impact on democratic systems if voters always turn to an LLM for answers because schools didn't require them to think on their own?
Further, there are likely situations where the participants avail of AI to different extents based on how they feel about the situation (cf. different degrees of doping in athletics), and students will sometimes be limited by their means in use of AI tools.
People said the same shit about calculators in math class. Yes, you may have a calculator that can do calculus available 99% of your working life, but if you never learn even the basics of anything then you are fundamentally just a monkey with a typewriter.
Which students?
If it's just about travel distance, maybe schools could organize themselves to offer local test centers where students could sit exams under observation. Reusing existing facilities in this way has been pretty common in my country's education system for decades.
That AI can pass these tests doesn't mean it is as smart and capable as a grad. I mean, it might be, or if not today then in a few years, but not because it can pass exams, having digested past exams and sample solutions into its belly.
1) Written in person exams that were most of the grade (this includes "blue book" exams where you have to sit in front of the professor and write an essay on whatever topic he writes on the board that morning as well as your typical math/algorithms tests on paper.)
2) Written homework where you have to essentially have a satisfactory discussion on the topic (no word range, you get graded on creative interpretation of the course subject matter.)
Language models could maybe help you with 2 but will actually kill your ability to handle 1 if you're cheating on homework with them. If anything, language models will mean the end of those mindless make-work cookie-cutter graded homework assignments that got in the way of actually studying and learning.
Not that we were learning all that much to begin with. I mean, walk into any sorority and ask to see the test bank. The students and Profs were phoning it in for a while, by and large. Not all of them were though, and good on yah.
But now that the fig leaf is torn away, we're left with the Oxbridge model and not much else - small classes, under 10, likely under 5, with a grad level tutor, social pressure making sure you've done the work. The great thing about this though is that you'll have an AI listening in all the time and helping out, streamlining the busywork and allowing the group to get down to business.
But that version is very expensive. You're looking at ~$50k / student-year [0] for a baseline Oxbridge model from secondary school on up - ~$400k / student from 9th grade to university graduation.
Assume a 6% loan rate for 30 years (a mortgage, essentially), and you've got ~$2,400 monthly payments for all your working life, ~$29k/year down the drain. How in the hell are you going to manage student loans like that and then try to live a life without a really good job? How the hell is a nation going to be expected to pay that per kid if you make school free for them?
Cheap learning wasn't good, but it sufficed. The new models of education must answer the fundamental question of education: how much does it cost?
[0] 2 hours, 3x a week, per class; 4 classes per tutor per week. Assume $100k/tutor and 5 students/class. So $5k / student / class. 4 classes / student. So $20k / student in just raw tutors. At least double that for overhead, if not triple.
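If you want to sanity-check that arithmetic, here's a minimal sketch in Python. Every input is one of the assumptions stated above ($100k/tutor, 5 students/class, 4 classes each way, 2-3x overhead, a 6%/30-year loan), not real data:

    # Back-of-envelope check of the tuition and loan figures above.
    # All inputs are this comment's assumptions, not real data.
    tutor_salary = 100_000       # assumed tutor cost per year
    students_per_class = 5       # assumed Oxbridge-style class size
    classes_per_tutor = 4        # classes each tutor teaches per week
    classes_per_student = 4      # classes each student takes

    cost_per_student_class = tutor_salary / (classes_per_tutor * students_per_class)
    raw_tutor_cost = cost_per_student_class * classes_per_student   # $20k/student/year
    all_in_per_year = raw_tutor_cost * 2.5                          # 2-3x for overhead -> ~$50k

    years = 8                    # 9th grade through a 4-year degree
    principal = all_in_per_year * years                             # ~$400k total

    # Standard amortized-loan payment: P * r / (1 - (1 + r)**-n)
    r = 0.06 / 12                # monthly rate at 6% APR
    n = 30 * 12                  # 30-year term, in months
    monthly = principal * r / (1 - (1 + r) ** -n)

    print(f"raw tutor cost: ${raw_tutor_cost:,.0f} / student / year")
    print(f"all-in: ${all_in_per_year:,.0f} / student / year, ${principal:,.0f} total")
    print(f"loan payment: ${monthly:,.0f} / month, ${monthly * 12:,.0f} / year")
    # -> roughly $2,400/month and ~$29k/year, matching the figures above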
This is not an unsolvable problem if handwritten work becomes a requirement.
Of course, I still treated it like a lazy college student: I did it in 2.1 or 2.2 line spacing to hit the page requirements, and flipped my thesis because it was easier to research (I started out arguing against the US invading Iraq, but found it way easier to find sources that supported an invasion... well, we all know how reliable those sources were).
This is the equivalent of asking students to show their work when they do math problems and that is how we thwarted those evil calculators.
For something like digital art creation verifying the edit history is much more fruitful since the diffusion process is nothing like how humans create art.
AI should allow every student to have personalized instruction and tutoring. It should be a massive win.
If everyone, instead of taking advantage of that, refuses to do any work and decides to lie and pass the AI's output off as their own, that is not something the AI did. The students did that.
I admire your optimism.
Funny how everyone has their own dream of the miracles that “AI” should perform. It's just the perfect silver screen for everyone to project their wishes on.
But it won't be, so all we're left with is the loss of cheating replacing learning.
IMO the underlying cause has much more to do with a hiring cycle issue: the boom of the low-interest / free money / I-don't-need-to-pay-for-an-office covid years is now leading to the relative hiring "bust" (even though it's not really a bust, unemployment is at 4.2%, certainly nothing out of the ordinary for the US)
This compared to my method of reading widely, learning quotes and ideas and then writing each essay fresh in the exam hall - and I would typically manage about 3-4 pages per essay. (Reader, I did not get a top first).
I relate this anecdote as I don’t really see my friend’s method as being much better than using AI. Although I do acknowledge his 16 page essays must have been reasonably good.
Why not? He wrote all the essays himself, after all, and in a setting that's much more relevant to real life vs. the artificial constraints of a shorter exam. With AI he would've written/learned nothing himself.
The point is that your friend did all this effort to write those essays, which (supposedly, and I believe it) actually caused them to "learn" the material.
We are often changed in the process of doing a thing, whether working out or thinking through concepts.
Your claim here is about like claiming you get the same workout regardless of whether you drive a mile or run a mile, because you have changed position using either method.
Don't see how it's legal (by the way, neither is the original method; the exam is legally about doing a task under the same conditions of a set time and place).
It's more similar to spending hours preparing small exam cheat sheets, and then realizing that you didn't need them during the exam, as you had learnt the material.
What would you say about someone getting AI to write high-standard essays and simply learning those word-for-word?
It’s also not cheating, but it's not in the spirit of the thing, I think.
Well there's the problem right there. Academia shouldn't be an "industry", it should be a public good. "101 level courses that are packed with over 100 students, listening to an uninspired lecturer talk at a crowd of disinterested faces on their phone and being FORCED to spend hundreds to thousands of dollars on it." is an artefact of forcing a public good shaped peg into a capitalism shaped hole.
When academia is an industry, students and professors get treated like commodities that have to demonstrate an ROI to be allowed to exist. Humanities don't have a good ROI in our society, therefore they are not funded, therefore positions in humanities are scarce. The end result is overworked lecturers and large class sizes.
The alternative is to cut humanities altogether, which honestly is happening; we're headed to a situation where humanities programs in some areas will just cease to exist, and they'll concentrate in places that actually care about education. It's sad but that's where we are.
Go to college without knowing their major, take two years of gen-ed classes, realize they have to choose a major, and choose the one they happen to have the most credits toward.
In a better system, studying English would be difficult; it would be for people who are passionate about it and excel at it, not the default option because it's the best way to capture money from subsidized 18-year-olds.
We probably agree on a lot, but disagree on solutions.
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.”
I can easily tell code written by a novice programmer naively 'vibe coding' an app from code written by an experienced developer using AI to help him. Can a history professor tell the difference between a purely AI essay from one written by someone who knows what they're talking about, and is assisted by AI to make the essay better?
Yes. That you consider this a question worth asking is a sign of your contempt for the craft of writing an essay. If an AI is that bad at mimicking expertise in your field, why shouldn't it be that bad at mimicking expertise in others' fields?
I did not mean to disparage the craft of writing, by being imprecise with my own writing. My base assumption was that it should be easy, but if it is this easy, why is everybody freaking out?
> He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
Middle of the road colleges will not have the resources to ensure that students learn despite AI, whereas the Oxbridges, etc, will retain their tutorial systems and smaller class sizes, where AI is of no use whatsoever.
A comparable phenomenon perhaps exists in the news publishing world. It was envisaged that easy access to information would be the death of pay-to-read news. However, the huge volumes of mediocre and politically-driven output that swamped the internet, airwaves, and printing presses instead increased the relative value of thoughtful and well-sourced news and writing, e.g. the FT, Guardian, BBC, etc., even the New Yorker...
What can help you actually write? [spoiler: not LLMs]
The thing is that most college-educated adults don't really write to one another. Nor do they really read. We're now at the point where writing means texts or tweets or coffee orders, not college-level prose. I know myself. I've seen the beast. No-fat double latte please, with which I'll go about my day and forget this digression into my feelings (a separate box from children or coffee).