Hmmmmm.
Bias is useful and inevitable
Good journalists are open about their angle. Bad journalists tell you they are "unbiased" and "just bringing you the facts".
LLMs are much harder to fact-check because they can make anything up from their training data and weights, without citing sources.
People already just blindly believe whatever is put in front of them by the algorithm gods. Even the "@Grok is this real" spam on every X post is an improvement.
e.g. Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots.
https://thebulletin.org/2025/03/russian-networks-flood-the-i...
All you're saying is that you can't imagine working on a task that is longer than 1 Google Search.
If "I'm feeling lucky" works by magic, that doesn't mean your life is free of all searching, it just means you get the answer to each Google Search in fewer steps, which means the overall complexity of tasks that you can handle goes up. That's good!
It doesn't mean you miss out on the journey of learning and being confused, it just means you're learning and being confused about more complicated things.
The world the author is describing already has LLMs in it. Whether the author likes it or not, they are here to stay. So to build a model of the world, you still need to consult an LLM, understand how it can give plausible-looking answers, learn how to leverage the tool effectively, and make it part of your toolkit. That does not mean you stop reading manuals, books, or blogs. It just means you add LLMs to that list.
Just in case you didn’t know, you can append ? to any query and get a quick answer straight away.
So a reasoning LLM that does the comparisons and checks “like a human” still fails the author’s test.
That said, this still feels like a skill issue. If you want to learn, see opposing views, and gather evidence to form your own opinions, LLMs can still help massively. You just have to treat them as research assistants instead of answer providers.
The downside of the internet is that we get to see people agonizing over their inability to adapt to change.
It feels like I'm reading an article crying "if you buy a car, you will lose your horse-shoeing skills!" every day lately.
I’m currently working on my first major project that incorporates heavy LLM contributions. It’s coming along great.
I started with Machine Code and individual gate ICs, so my knowledge goes way down past the roots.
I don’t miss it, at all. Occasionally, my understanding of stuff has been helped by that depth of experience, but, for the most part, it’s been irrelevant. It’s a first-stage booster, dropping back into the atmosphere.
I will say that my original training as a bench tech has been very useful, as I’m good at finding and fixing bugs, but a lot of my experience is in the rear-view mirror.
I have been routinely googling even the most basic stuff, for many years. It hasn’t corroded my intellect (yet), and I’m doing the same kind of thing with an LLM.
Not being sneered at by some insecure kid is nice.
So if I can do both with these tools, then great. I want to cognitively offload in a way that allows me to focus on the important bits. And I'm writing instructions to the LLM to help me do that, e.g. 'help teach me this bit'. A builder and tutor at once.
I usually use ChatGPT, as a chat (as opposed to an agent).
It explains everything quite well (if sometimes a bit verbose).
Today, I am going to do an experiment: I’ll be asking it to rewrite the headerdocs in one of my files (I tend to have about a 50% comment/code ratio), so it generates effective docc documentation. I suspect the result will be good.
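For anyone who hasn’t used DocC: it builds documentation from the triple-slash header comments, so the shape I’m hoping the model produces is roughly this (a made-up function, purely to show the markup):

    /// Returns the prime numbers up to (and including) `limit`.
    ///
    /// Uses a simple sieve, which is fine for small limits.
    ///
    /// - Parameter limit: The largest value to test; values below 2 yield an empty array.
    /// - Returns: The primes in ascending order.
    func primes(upTo limit: Int) -> [Int] {
        guard limit >= 2 else { return [] }
        var isPrime = [Bool](repeating: true, count: limit + 1)
        isPrime[0] = false
        isPrime[1] = false
        for p in 2...limit where isPrime[p] {
            var multiple = p * p
            while multiple <= limit {
                isPrime[multiple] = false
                multiple += p
            }
        }
        return isPrime.enumerated().compactMap { $0.element ? $0.offset : nil }
    }

Whether the model can keep that quality up across a whole file is exactly what I want to find out.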
How very adult of you.
I have spent my entire career, being the dumbest guy in the room, and I’m not exactly a dunce. It can sometimes be quite humbling, but I’ve had great opportunities to learn.
People will often be willing to go out of their way to help you understand, if you treat them with respect; even if they are being jerks.
Life’s too short, to be spending in constant battle.
But it does mean that there are limits. A lot of folks start at points higher than mine, and can go much further than me.
That’s fine; as long as I understand and accept my limitations, as well as my strengths.
It’s a long story, but I spent the majority of my career at some pretty demanding and high-functioning places. I had to learn to be self-critical, without being self-abusive.
Also, I have terrible memory. I suspect it comes from … experimentation … in my youth. Doesn’t make me stupid, but has given me a talent for leveraging reference materials.
1. It causes a step change in productivity for those that use it well, and as a result a step change in the expectations on productivity for dev teams. Folks simply expect things to be done faster now and that’s annoying folks that have to do the building.
2. It’s removed much of the mystique of dev. When the CEO is vibe coding legit apps on their own, suddenly the SDE team is no longer this mysterious oracle that one can’t challenge or question because nobody else can do what they do. Now everyone can do what they do. Not to the same degree, yes, but it’s completely changed the dynamic and that’s just annoying some devs.
SDEs aren’t going away, but we will likely need fewer moving forward, and the expectations on how long things take have changed forever. Like anything in tech, we’re not going back to the old way, so you either evolve or you get cycled out.
I also hear the “but LLMs only produce unrecognizable junk that one can’t maintain” angle, but that implies dev teams have been shipping beautiful artwork. Truth is most dev teams have been shipping undocumented fragile junk for years. While LLMs occasionally do odd things, in my experience LLM code is actually better documented and structured than what most dev teams produce and because of that it’s actually easier to hand off to others vs the typical codebase full of hacky workarounds and half-completed documentation.
Most LLM documentation is extremely poor unless you literally can’t understand code. It’s communicating obvious things that the code makes apparent. Good documentation covers what’s not obvious from reading the code. I’m amazed at how many devs think running Claude init is a good idea.
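To make that concrete, a throwaway Swift illustration (names invented): the first comment just restates the signature, the second covers behavior you can’t see from it.

    import Foundation

    struct Account {
        var email: String
        var displayName: String = ""

        // The restate-the-obvious style that often gets generated:
        /// Sets the display name to the given name.
        mutating func setDisplayName(_ name: String) {
            displayName = name
        }

        // Documentation that covers what the code doesn't make apparent:
        /// Updates the display name, trimming whitespace; an empty result
        /// falls back to the email's local part so the UI never shows a blank name.
        mutating func updateDisplayName(_ name: String) {
            let trimmed = name.trimmingCharacters(in: .whitespacesAndNewlines)
            displayName = trimmed.isEmpty
                ? String(email.prefix(while: { $0 != "@" }))
                : trimmed
        }
    }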
I’ve noticed a constant trend throughout my career that low performing devs generate and are impressed with this kind of documentation.
> and structured than what most dev teams produce
Left alone, LLM structure is terrible. Speed running the cheapest outsourced slop is not a good thing unless you’re already paying for outsourced slop. Huge win in that case.
LLMs are median equalizers. I think a large contingent of devs (especially given the money grab around 2021) are not that good, and LLMs really are a step improvement for them. I’m not denying there are use cases for competent devs, but so far that seems much narrower (albeit insanely impressive) than the benefit low performers (think) they receive. This fact is important to keep in mind when discussing the future of dev.
I get the sense a lot of new and middling engineers view this as some shortcut to success. They're angry about getting passed over for promotions, pushback on their PRs, being questioned about why things are taking so long (hint: because they weren't actually putting in a full day's work), or not making the grade for FAANG.
Now here come LLMs, and it's their chance to get stuff done without having to do much work. No need to have put in the work to learn and study. Now all those who were keeping them down before are the REAL problem, because they are too slow, too out of date. What a gift! It's the great mediocrity uprising!
I haven't quite wrapped my head around it but there is definitely some weird social stuff going on.
"Because what would be missing isn’t information but the experience. And experience is where intellect actually gets trained."
From my experience, LLMs don't cause this effect. You still get to explore a ton of dead ends and whatnot, just on a much higher level.
"You get the answer but nothing else (keep in mind we are assuming that it's a good answer)."
On the contrary here - you get to ask a ton of follow-up questions easily, something you don't get to do with books.
"I never so far asked GPT about something that I'm specialized at, and it gave me a sufficient answer that I would expect from someone who is as much as expert as me in that given field."
LLMs are at a junior-to-mid level in any field (and going higher every year), not senior or master level. Is that anything new? Their strength is, among other things, in making connections between fields, and also their availability.
If you have the option to talk to a specialist in your field who has time 24/7 to discuss ideas with you, that's great, but also highly unusual. If you don't have such a person, an LLM that is junior-to-mid is way better than plain books.
The only real "catch" researchers found was timing. If you introduce them before a kid has "automaticity" (around 4th grade), they never develop a baseline number sense, which makes high-level math harder later on.
It's a pretty clean parallel for LLMs. The tool isn't the problem, but using it to bypass the "becoming" phase of a skill usually backfires. If you use an LLM before you know how to structure an argument or a block of code, you're just building on sand.
I keep seeing sentiments like this, but to me they're still very much stuck in the past.
We once needed to develop our intellects in order to solve problems. It was a necessary part of the process. Solving a particular problem would flood our brains with dopamine, and we would feel good for achieving our goal, and thus continue to develop our intellect in the hope of achieving a similar rush in the future.
Now that we have a machine that can solve our problems for us, intellect just plain isn't necessary anymore. We can solve our problems immediately and skip the roundabout intellect-building process entirely. That's a liberating thing.
A search engine always gives multiple results and it is on me to read and compare. I usually look at the first half a dozen or so results.
A well written Wikipedia article will represent multiple views and critical opinions if there are any. But if there aren't, then you may want to expand your search. Same goes for other encyclopedias or articles you are reading.
The problem, again, is that an LLM answer does not have the quality of a well written article you may find elsewhere. That may change in the future, but until then I find LLMs are a waste of time for me.
For instance, I wanted to know if it's better to set up a call with a client at a coffee shop or at their office. Almost all LLMs will give you whatever answer you want, based on how much stress you put on one particular option.
I'm not saying that traditional search engines will show you only one-sided results, but you'll see Reddit threads that have a bit of conversation before arriving at some conclusion. You'd read it and go, hmm, this question had an XYZ setup and they picked the coffee shop as the better place. My own situation is perhaps (for the sake of argument) not XYZ, so it's better to have the meeting at the client's office.
This is a community-driven decision, and it gives me some basis if I want to revisit my choice later.
With an LLM, I find that it will happily just give you one of the two options and keep switching mid-conversation if you ask it to. That's what makes me a bit skeptical.
With technical stuff, there are direct solutions that fit your use case, and you can just use them and move on.
mark_l_watson•17h ago
I like to read philosophy and after I read a passage and think about it, I find it useful to copy the passage into a decent model and ask for its interpretation, or if it is something old ask about word choice or meaning.
I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights.
Another counter example: I have never found runtime error traces from languages like Haskell and Common Lisp to be that clear. If the error is not clear to me, sometimes using a model gets me past an error quickly.
All that said, I think the author is right-on correct that using LLMs should not be an excuse to not think for oneself.
mr_mitm•16h ago
In my opinion it's entirely comparable to anything else that augments human capability. Is it a good thing that I can drive somewhere instead of walking? It can be. If driving 50 miles means I get there in an hour instead of two days, it can be a good thing, even though it could also be a bad thing if I replace all walking with driving. It just expands my horizon in the sense that I can now reach places I otherwise couldn't reasonably reach.
Why can't it be the same with thinking? If the machine does some thinking for me, it can be helpful. That doesn't mean I should delegate all thinking to the machine.
rgoulter•16h ago
e.g. By using a cryptography library / algorithm someone else has written, I don't need to think about it (although someone has done the thinking, I hope!). Or by using a high level language, I don't need to think about how to write assembly / machine code.
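A minimal sketch of the crypto case, assuming Swift and Apple's CryptoKit (any mainstream library makes the same point) -- all the hard thinking lives behind the one call:

    import CryptoKit
    import Foundation

    // The design and analysis of SHA-256 was someone else's thinking;
    // at the call site it is a single line.
    let message = Data("pay invoice #42".utf8)
    let digest = SHA256.hash(data: message)
    print(digest.map { String(format: "%02x", $0) }.joined())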
Or with a tool like a spell-checker: since it checks the document, you don't have to worry about spelling mistakes.
What upsets people is the imbalance: tasks which previously required some thought/effort can now be done effortlessly. -- Stuff like "write out a document" used to signal that effort had been put into it.
XenophileJKO•16h ago
Those are all things for which many people already outsource their thinking to other people.
hectormalot•16h ago
For some work, similar to the GP’s philosophy example, LLMs can help with depth/quality. It’s additive to your own thinking. -> quality approach
For other things I take a quantity approach: having 8 subagents research, implement, review, improve, review (etc.) a feature in a non-critical part of our code, or investigate a bug together with some traces. It’s displacing my own thinking, but that’s ok; it makes up for it with the speed and amount of work it can do. -> quantity approach
It’s become mostly a matter of picking the right approach depending on the problem I’m trying to solve.
delusional•16h ago
I don't mean to be judgemental. It's possible this is a personal observation, but I do wonder if it's not universal. I find that if I give an inch to these models' thinking, I instantly become lazy. It doesn't really matter whether they produce interesting output; what matters is that I stop trying to produce interesting thoughts, because I can't help wondering if the LLM wouldn't have output the same thing. I become TOO output focused. I mistake reading an interpretation for actually integrating knowledge into my own thinking, and I disregard following along with the author.
I love reading philosophy as well. Dialectic of Enlightenment profoundly shaped how I view the world, but there was not a single part of that book that I could have given you a coherent interpretation of as I read it. The interpretations all come now, years after I read it. I can't help but wonder if those interpretations would have been different, had my subconscious been satiated by cheap explanations from the lie robot.
kolinko•15h ago
An analogy would be: if GPS allows you to not worry about which turn to take, you can finally focus on where you want to go.
simianwords•13h ago
Sure, if you want to read a novel, don't ask an LLM about it.
When you want to learn something quickly, then use LLMs. But you would know how much compression is going on. This is something we do routinely anyway. If I want to know something about taxes, I read the first Google result and get the gist of it. But I’m still better off and didn’t have to take a full course.
carlosjobim•12h ago
That is like listening to music and asking somebody if you liked a song.
therealdrag0•1h ago