I do think there's a solution to this, kind of, which dramatically reduces the probability of broad inductive biases taking over. And that's to ask questions with narrower scopes, and to ensure you're the one driving the conversation.
It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases will be more likely to take over.
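For instance (made-up prompts of my own, not from the article):

    Broad:  "Build me a REST API for a todo app."
    Narrow: "Write a single Flask route, POST /todos, that validates a JSON
             body with a required string field 'title' and returns 201 with
             the created object. Don't add auth, persistence, or anything
             else I didn't ask for."

With the first, the model fills every unspecified gap with its own defaults; with the second, there's far less room for it to do so.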
When people ask LLMs to build the world, they will do it in extremely biased ways. This makes sense. When you ask it specifics about narrow topics, this is still a problem, but greatly mitigated.
I suppose what's happening is an inversion of cognitive load: the human takes on more of it and selects the biases, so the LLM is less free to do so. This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I think I generally agree that these are cognitive muscles that need exercising, and letting an LLM do it all for you is potentially harmful. But I don't think we're trapped with the outcome; we do have agency, and with care it's a technology that can be quite beneficial.
Then I saw someone's Show HN post for their own vibecoded programming language project, and many of the feature bullet points were the same. Maybe it was partly coincidence (all modern PLs have a fair bit of overlap), but it really gave me pause, and I mostly lost interest in the project after that.
For example, a possible trajectory might be that many years in the future, because human thinking has degraded due to AI-assisted cognition, most people get a chip implant and AI assistance becomes integrated with the brain. Basically the same pattern as most everything else: technological augments solve for the new reality. I'm not saying this will happen; it's just one possible outcome.
Isn't this whole thesis negated by the fact that tool-calling web search exists? This just feels like a whole lot of words to say: don't treat an LLM as an always-up-to-date, infallible statistical predictor.
Probably just 95% of the users. You know, the non-techies.
(At present, Gemini's question-answering capability (which Google kind of makes its users use) seems extremely error-prone -- much worse than competing LLMs when asked the same question.)
I recently saw a video discussing a researcher who published a fake scientific article about a fictitious disease, with bogus author names and even a warning IN the article itself that stated "This is not a real disease, this article is not real" (paraphrasing). But AI still ended up picking up this article and serving information from it as if it were a real disease.
It even got cited in papers (which were later retracted, of course), but the fact that those papers got published in the first place is a serious issue.
Isn’t a lot of pretraining done by chopping sources up into short-context-window-sized pieces and then shoving them into the SGD process? The AI-in-training could be entirely incapable of correlating the beginning with the end of the article in its development of its supposed knowledge base.
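A minimal sketch of the chunking I mean (an assumption about typical pipelines, not a claim about any particular model's training setup; the names and window size are illustrative):

    # Tokenized documents are concatenated and sliced into fixed-size
    # windows, so a disclaimer at the top of a long article can land in a
    # different training example than the claims near the end.
    CONTEXT_WINDOW = 2048  # tokens per training example (illustrative)

    def make_training_chunks(token_streams):
        """Concatenate tokenized documents, then slice into window-sized chunks."""
        buffer, chunks = [], []
        for tokens in token_streams:
            buffer.extend(tokens)
            while len(buffer) >= CONTEXT_WINDOW:
                chunks.append(buffer[:CONTEXT_WINDOW])
                buffer = buffer[CONTEXT_WINDOW:]
        return chunks

Nothing in a scheme like this ties chunk N back to chunk 1 of the same document during training.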
It will not only answer confidently and incorrectly, but it also won't do a web search in obvious scenarios where it should.
The words here aren't meant to be a warning for people in this type of community falling victim to this type of thing; it's more for the general public that doesn't grasp the tools they are using, the people that won't ever wander across this article.
This, I think, is a huge reason we really need to jump into LLM-basics classes or something similar as soon as possible. People that others consider "smart" will talk about how great ChatGPT or something is; then someone will go try it out because the person they respect must be right, hop on the free model, get an absurdly inferior product, and not grasp why. They'll ask something that requires a web search to augment the info, not get that web search, and assume the confidently incorrect agent is correct.
The thesis is also, I think, not entirely about lacking that modern info at query time; it's more scattered than that. Someone asks what product they should use to mash potatoes, and a tool is suggested. Everyone who asks then receives that same recommendation, and instead of having a range of different styles of mashing potatoes, we all drift closer toward one style, and the range of variance in how food is prepared is slowly lost.
https://classics.mit.edu/Plato/phaedrus.html#:~:text=there%2...
Looks like even back then, they went "cool story bro" on that text...
Some points:
1. Technological inventions are not repetitions of the same phenomenon. Each invention is its own unique event; you cannot generalize from the experience with previous inventions to understand the effects of the latest ones.
2. Socrates may have been to a large degree right. Imagine that you and your society have been locked in the sewers, condemned to wade in shit for so long that you and your ancestors long ago forgot what fresh air feels like. What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit, and we're so much better off because we don't have to worry about sunburn"?
I don't remember phone numbers anymore. If I were to lose my phone, or the cloud, I'd be SOL re-adding everyone.
I remember a few numbers of my most direct contacts and depend on backups for everything else.
This is how I for one understood this.
2. Imagine a hunter-gatherer is time-travelled to 2026. You go to a cafe with him for lunch, and he learns that food is cheap, delicious, and abundant. He sees your house and thinks it's amazing compared to his cave. He thinks that 2026 must be absolute paradise. You explain to him: well, kinda, but also not really. Is the hunter-gatherer right?
He sees you spend your day working but rarely get to go outside or do anything active. Even when you're not working you sit behind a desk staring at a screen.
He wonders why you bother with all the technology when it made your life worse. Is he right?
Cumulatively, knowledge work (including, in particular, curating knowledge) is exceptionally energy intensive from an evolutionary standpoint. It does pay dividends, clearly, but to get compounding effects from it, being able to efficiently pass down big corpora of facts, ideas, processes, etc., is an absolute necessity.
Writing systems are the fundamental way through which we can do this. They worked for us for millennia, and we eventually built upon them to develop encodings used today to store information remarkably densely.
Writing systems are ‘a’ fundamental way to pass down large collections of facts, and favoring them is my personal bias too. We are prejudiced and naive, though:
- Those knotting systems in China and South America that preceded writing for millennia are also persistent and intricate
- Cave paintings are quite dense, drawings and art are direct visual representations with compound meanings (seasonal behaviour, hunting strategy, creation myths)
- Iconography of all forms persists as a rich visual language; hieroglyphics and equivalents carry deep social instruction with verbal reinforcement
- Stories with self-correction have many-tens-of-millennia consistency, categorically outstripping any other medium we have tested; the Aboriginal dream-stories capture humanity's shared storage during its global expansion
- Music is math. Song and dance captured all of the above in self-verifying and self-correcting fashion for hundreds upon hundreds of millennia before that.
And before we hit any complexity arguments, like a hard specification:
a) those formats leveraged human pattern recognition and meat-based compression (i.e. "every chunk in the 4,000-page OOXML specification is as simple as do-as-Word-did…")
b) find a video of African dance/drumming ceremonies; density is not the issue. A special hoot, a known drumbeat… there were continental signalling networks that terrified colonial explorers.
There is an argument that writing allows for corrosive decontextualization. Jesus cursed a fig tree. No one learning that tale the old ways would snicker. And thus history becomes not a tale but a grab bag of a child's letter blocks: you can spell anything you want.
Writings are subject to known biases such as publication bias, and so relying on them reduces the range of what you can consider.
Therefore, writing is bad for the same reasons that this post thinks that AI is bad.
Changes in what humans need to remember have, for as far back as we have written records, changed the skills humans hone over time. They change our fitness function. Some of those changes are bad for a while, and then get better. Others are just far better at all times. Others might get rejected. Either way, it takes a long time before we know what a technology does to us: see how cheap printing is directly linked to the wars of religion.
So it's not that AI could not be bad in the short run, or even in the long run: it appears to be the kind of technology that cannot be evaluated without significant adoption, and at that point, we are on this rollercoaster for a while whether we want it or not. See social media, or just political innovation, like liberal democracy or communism. We can make guesses, but many guesses made early on look ridiculous in hindsight, like someone complaining about humans relying on writing.
I'd probably start with "who locked us in this sewer?"
I was thinking about this recently: The difference between systemic (systematic) learning and opportunistic learning.
AI enables opportunistic learning, or just-in-time (JIT) learning. It gives the impression of infinite knowledge.
Most general concepts are well within the grasp of human understanding.
My curiosity re: the difference between systemic vs. opportunistic learning was about the effect of longer-term exposure to, and use of, a tool that enables opportunistic learning.
Likewise with AI the appearance of reasoning without the substance could lead to boring exchanges of plausible slop rather than meaningful discourse.
Simply put, at humanity-wide scales, written information is by far the most important thing you can have. There is kind of a Sorites paradox occurring, where individual knowledge that can be held by one person conflicts with systems knowledge that has to be redundant and easily transferable.
I’m not sure where LLMs lie on that spectrum. They allow faster access, but it also feels more limited.
Also thanks to Mia (she/her); this was a very interesting read.
This could be describing an internet argument where both parties google for expert articles that seem to support their point of view without really understanding anything about the subject.
her plumber offloaded to chatgpt.
"i just think it's good for humans to know how to do stuff."
are we talking about your sister or her plumber?
some knowledge is likely "cached" in the plumber. maybe he doesn't ask the same question twice. i'm sympathetic to the plumber, but i think your concerns of erosion of knowledge or skill are worth pushing on further.
Without further knowledge of what was going on it's hard to say why they used ChatGPT.
Yes
In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.
The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?
That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs for them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.
Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means being able to know what to look up and how to use the information retrieved from looking something up.
In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.
"how do I fix a clogged toilet?" would be bad..
And if the LLM gets that wrong? It's his job to know the codes or how to go to a reliable resource to find out the correct codes.
The first prompt style is, I think, a way society incidentally drifts toward a less interesting one, with less variety in solutions. The second one, I think, still allows people to exercise their potential to try a variety of things and keep that variety.
Obviously you can have a plumber that knows his stuff and one that doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.
- The plumber who turned up leaves without fixing the problem.
- The plumber fixes something he didn't know how to do by looking up the answer.
- The plumber attempts to fix something he didn't know how to do.
While it's great to have the plumber who knows how to do everything, they are rare and in high demand, so they cost way more than you can afford.
That is, of course, provided that you pay attention to whether it actually does the research. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, or how to keep nonsense apart from facts when both are presented with an equal amount of conviction. That's not a user problem; it's an education problem.
Children learning in schools should not become product managers. If they are, what exactly is the "product" that they are "managing"? Reducing everything to, and looking at everything from, a corporate viewpoint is bizarre.
Regarding education, I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have a future super-powerful AI generate highly personalized per-student programs, create bespoke video games where success can only happen once the student has validated all the notions you wanted them to validate, etc.
Slightly FTFY.
The framing of questions massively affects the results you get from discussion with humans, and I'd argue it's even more pronounced with LLMs.
I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So probably I'll misunderstand most of what they say anyway.
But I am not sure you can compartmentalize the specific skill we can out-source to AI. I would not agree with "you don't need to be able to think in your head."
This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.
Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.
The thing about these fear pieces is that concepts like the hollowed mind are reductive, and that reductionism is based on a reductive view of (usually other) people.
But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.
Would you attempt to, for example, simultaneously modify for available ingredients, adjust for the number of diners, and time-optimize the prep method for a recipe you've never cooked before if you were following an old-school cookbook? No. You'd have to be a pretty solid chef to try all that at once.
Using AI, you might branch out confidently into new areas, executing all of these modifications simultaneously, and even adapting the output for a specific audience or language.
This toy example shows an important property of AI as decision support systems, which are well studied in the military domain: using these systems, we build confidence to act in unfamiliar domains, thereby extending our reach. From this experience we can learn more. The fact that the learning may then occur through the experience (i.e. during or after it) rather than beforehand is secondary. It's still there. The fact that we didn't know the language the AI translated into for our chef is totally irrelevant.
Sitting comfortably at the effective apex of millions of years of human cognitive and technology development with the entire world's knowledge at our fingertips, every day we can extend confidence in novel domains through AI, and enjoy it. We should be feeling pretty damn "developed".
Rote formalism and fixed paths in pedagogy are gone: good riddance. This is the hacker age.
Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.
A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search would it not? Lots of people type the same question into google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.
AI is just the current scapegoat.
Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.
Either way though I think there's a much simpler way to express what she's trying to say. Offloading thinking to AI is bad because it's less flexible and doesn't easily update its reasoning with new information.