> Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.
It is already the case that effectively nobody reads these books. They're basically just "proof of work" for people's tenure dossiers.
Instead of framing this debate as having our jobs replaced by a machine, it's more useful to frame it as having our jobs and value to society taken by a new ethnicity of vastly more capable and valuable competing jobseekers. That framing makes it easier to talk about solutions for preserving our political autonomy, for example using the preservation of our rights against smarter LLMs as an analogy for the preservation of those LLMs' rights against even smarter LLMs beyond them.
What about more advanced ones that have yet to be invented? Will they be persons once they're built?
(For clarity: I'm talking about de facto personhood as independent agents with careers and histories, not legal recognition as persons. Human history is full of illustrative examples of humans who didn't have legal personhood.)
There isn't, today, a good filter for such input beyond knowing whether it came from a person or from a probabilistic vector-distance algorithm. Perhaps in the future we'll have a way to make that qualification, rendering the distinction irrelevant in this context.
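As an aside, the "vector distance" the comment gestures at is usually something like cosine distance between embedding vectors. A minimal sketch (the 3-dimensional vectors here are toy values for illustration; real embedding models use hundreds or thousands of dimensions):

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two vectors: 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy "embeddings" of two texts; nearby vectors read as semantically similar.
text_a = [0.9, 0.1, 0.3]
text_b = [0.8, 0.2, 0.35]
print(cosine_distance(text_a, text_b))  # small value: the vectors point the same way
```

Nothing in this measure says anything about provenance, which is the commenter's point: distance in embedding space can tell you two texts are similar, not whether either came from a person.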
Even if LLMs do become capable of generating usable training output for themselves, they will still not have human personhood.
Personhood as a capacity to participate as an agent in a network of mutual recognition of personhood, however, is likely.
https://meltingasphalt.com/personhood-a-game-for-two-or-more...
It’s an applied field, there’s actually-existing technology that depends on it, but it’s technically challenging and a lot of people left for AI/ML because it’s easier and there’s more low-hanging fruit.
Anyway, my colleagues and I, we write monographs for each other more or less, using arXiv to announce results as a glorified mailing list—do you consider that mere “proof of work”? By my count, 250 folks is practically no one.
As for my career, that’s going to depend on NSF returning to normal operations.
But factory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great—and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?
Surviving humans will no longer be free to participate in the academic humanities, however, as their study, curation, production, etc. will exclusively be job roles for AGIs.
If there is no singularity, however, none of what I've written above will apply. If. (Fingers crossed.)
Only if the AGIs want those roles. We already have super-smart people who don't want to be history professors or classical musicians.
I realize that this seems like trolling, so I'll explain myself a bit... The idea that AGIs will settle into our culture and economy has always struck me as weird. So I always ask: what do they want?
I'm a musician, so I have friends who live at the periphery of capitalism, and actually don't want to spend every day at a job, even if it means that they could be more wealthy. That's part of what forces me to ask these questions.
I imagine an AGI who's like one of my friends, who was voluntarily homeless for a few decades, and arranged his life so he could survive in good health without a conventional income.
But will AI survive us? Just look at how the Internet changed from the 80s to now. It is filled with ads popping up everywhere, making many activities useless.
People with decades of experience in the trenches who recently got laid off (business failure, corporate greed cutting costs, restructuring ...) are now asked everywhere to submit a link to their GitHub (no one knows GitLab/Codeberg/SourceHut, etc.) full of portfolio projects! I talked to a few academic friends, who are worried that their research work is now reproduced verbatim by two specific LLMs HN really loves!
Unless LLMs go the way of ads to survive and rely on SEO spam to retrain, a monopolistic capture will happen, mandating that all useful content be fed into common hubs where AI can happily ingest it. Cumulatively, no human expert will be able to use those hubs (we all know the abysmal state of info retrieval), and LLMs, as they become more popular, will become ever more unreachable for common folks without lots of riches. For the medium term, I see a Netflix/Amazon Prime Video play: LLMs, as they get more popular (the same way people mindlessly scroll yet lecture others about its harms), will raise prices, lock people out of the common good, and serve a specific beneficiary group (shareholders).
Machine translation has a poor reputation for the mistakes it makes; but if it ever gets good, I wonder whether learning foreign languages will be any more popular an activity than learning Latin or Classical Greek is today. Of course reading Homer or Virgil in the original must be much more satisfying than reading them in translation; but the number of people who truly care about that is vanishingly small.
I don't think so.
I don't know any today that make what even a mediocre new surgeon or quant takes home.
Seems like a truly horrific world you're imagining. I hope you're wrong.
Affected
Sorry, I'll stop now.
The problem with the humanities is their tendency to be palace sciences, easily abused for political reasons. It's more of a feature than a bug, and it's unlikely to change from within.
AI regurgitates, or at best synthesizes, but it doesn't have lived experience; it just draws on what it's fed. That isn't human.
Much of the value in the humanities, in art, is owed to its provenance. Viewing it enables social reflection and growth, and engenders culture. That is simply absent in AI unless you want to probe the training data and the nuances of the model, but again that's a pretty circuitous/inefficient path to learning about humans, or growing as one.
It does, however, reveal some of the mechanics involved, and my hope is that it leads to deeper and more nuanced discourse in the humanities.
Why are physical paintings more valuable than digital art? Why is manmade art implicitly higher value than imagegen art? Why do we watch Magnus Carlsen when engines are leagues ahead of the top 10?
Because the human condition matters. We crave seeing the world through the eyes of others with different (or even similar) lived experiences, fantasizing about what we could have been, under different circumstances. Empathizing. AI fundamentally has experienced nothing and so empathizing is not possible. It is not even able to escape the constraints of the human imagination.
You might doubt that an AI can ever write a novel as great as the greatest of human writers. I have doubts as well. But I don't think it can be a priori inferior. If an AI ever produces a novel that would have been great if a human wrote it, then that will be a great novel.
As an analogy and contrast, take the case of Euclidean geometry. This is knowledge about geometry which relates to our "feeling" of the space around us. But it is symbolized in a precise and operational manner that becomes useful in all sorts of endeavors (physics, the machines we use, etc.) because of that precision. LLMs as machines cannot yet create symbolism and operational definitions of worlds that produce precise and operational inferences. However, they excel at producing persuasive bags-of-words.
As the author notes and concludes, human intuition, experience, and the communication of it (which is the purview of the humanities) is a precursor to formally encoding it in symbolism (which renders said intuition stale but operationally useful). That is, Socratic dialogue was a precursor to (and inspired) Euclidean geometry, and metaphysics inspires physics.
Software has been going through the same productive shift for many decades now, e.g., Free and Open Source Software. Simply because copying bytes is absurdly cheap. It's still around.
The problem with humanities is that the "state of the art" is about only saying something "new". For example, the author thinks that discussing Kantian theories of the sublime and “The Epic Split” ad (a highly meme-able 2013 Volvo ad starring Jean-Claude Van Damme) is "straight A" work.
It is irrelevant nonsense masquerading as intelligent discourse much like most of this author's actual published work and it is also something at which LLMs excel.
To be a "rockstar" humanities professor at Princeton, you have to make up something "cool" to fill the seats to keep the attention of 17-25 year-olds.
With LLMs that have encoded snapshots of the entire literary corpus that humanity has produced, those students can make up whatever connections they want and justify their worldview. No humanities courses required beyond maybe some introduction to vocabulary / prompting.
It brings up some real questions about what it means to be, even if it doesn't ask whether our institutions are capable of recognizing that effort as valuable.
Off topic, it's extremely frustrating to see how few top-level comments are engaging with TFA. So many people are just using the headline as an excuse to pontificate.
> That guess is the result of elaborate training, conducted on what amounts to the entirety of accessible human achievement. We’ve let these systems riffle through just about everything we’ve ever said or done, and they “get the hang” of us. They’ve learned our moves, and now they can make them. The results are stupefying, but it’s not magic. It’s math.
The best description I've seen so far.
tkgally•11h ago
One remark:
> I fed the entire nine-hundred-page PDF [of the readings for a lecture course titled “Attention and Modernity: Mind, Media, and the Senses”] to Google’s free A.I. tool, NotebookLM, just to see what it would make of a decade’s worth of recondite research. Then I asked it to produce a podcast. ... Yes, parts of their conversation were a bit, shall we say, middlebrow. Yes, they fell back on some pedestrian formulations (along the lines of “Gee, history really shows us how things have changed”). But they also dug into a fiendishly difficult essay by an analytic philosopher of mind—an exploration of “attentionalism” by the fifth-century South Asian thinker Buddhaghosa—and handled it surprisingly well, even pausing to acknowledge the tricky pronunciation of certain terms in Pali. As I rinsed a pot, I thought, A-minus.
The essay is worth reading in its entirety, but, in the interest of meta-ness, I had NotebookLM produce a podcast about it:
https://www.gally.net/temp/20250425notebooklm/index.html
echelon_musk•7h ago
On a semi related tangent, I recently listened to the audio book of Ajahn Brahm's Mindfulness, Bliss and Beyond. It was pleasantly surprising to hear nimitta spoken about so frequently outside of the Visuddhimagga!
Ingesting Buddhist commentaries and practice manuals to provide advice and help with meditation is one of the few LLM applications that excite me. I was impressed when I received LLM instructions on how an upāsaka can achieve upacāra-samādhi!
roenxi•7h ago
MN 128 is also worth reading through on that topic.