LLMs are making me dumber

https://vvvincent.me/llms-are-making-me-dumber/
69•vincentcheng•1mo ago

Comments

etblg•1mo ago
> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails.

Think I'd rather just have the bullet points in the first place, to be honest; it has to be easier and quicker to read than an LLM soup of filler paragraphs.

Tempest1981•1mo ago
For sure. If I get an email with 3 dense paragraphs, I'm more likely to mark it unread and come back to it later, after processing the other 20 emails in my inbox.
jebarker•1mo ago
"While CS undergrads are still required to take classes on assembly, most productive SWEs never interact with assembly. Moving up the ladder of abstraction has consistently been good."

Gotta disagree. Adding abstraction has yielded benefits but it certainly hasn't been consistently good. For example, see the modern web.

zamderax•1mo ago
It’s been overall good. Being able to access a web app or website by entering a URL is impressive!
PessimalDecimal•1mo ago
Many browsers, especially Chrome, have abstracted away direct interaction with URLs. Would you also consider that good?
zdragnar•1mo ago
You can still do that if you want to. Most people don't.

Back before I got a cell phone, I had many many phone numbers memorized. Once I got a cell phone with a contacts list, I just stopped. Now I have my parents and my wife's phone numbers memorized, and that's it.

URLs are much the same. On most websites, if I can see the domain is the one that I expect to be on, that's all I really care about. There's a few pages that I interact with the URL directly, but it's a minority.

jebarker•1mo ago
You can serve web pages and render them in browsers all written in C. I'll concede that that's a useful level of abstraction over assembly.
nativeit•1mo ago
A functioning URL is impressive?
PessimalDecimal•1mo ago
The analogy likening LLMs to compilers is extremely specious. In both steps, the text written by the user/programmer is higher-level and thus "easier" but beyond that, the analogy doesn't hold.

- Natural language is not precise and has no spec, unlike programming languages.

- The translation from C (or other higher-level language) to assembly by a given compiler is determined in a way that the behavior of an LLM is not.

- On the flip side, the amount of control given to the tool versus what is specified by the programmer is wildly different between the two.

rvz•1mo ago
Exactly. The industry has encouraged mediocrity and inefficiency with over-abstraction and by abusing technologies in areas where they don't make sense for basic software.

This is what you have seen with the rise of some of the worst technologies (JavaScript) being used in places where they shouldn't be, because some engineers want to keep using one language for everything.

Which is how you get basic desktop apps written in Electron, each taking up 500MB on disk and using 1.2GB of memory. That doesn't scale well on a typical user's 8GB laptop.

Not saying it should be in assembly either (which also doesn't make sense), but the fact that today's excuse is that a SWE is used to one language is really a poor excuse.

Nothing wrong with using high-level compiled languages to write native desktop apps that compile to an executable.

lispisok•1mo ago
>This is what you have seen with the rise with some of the worst technologies (Javascript) being used in places where it shouldn't because some engineers want to keep using one language for everything.

NodeJS was the biggest mistake our industry made and I will die on this hill. It has taken the crown from null. People have been trying to claw it back with TypeScript, but the real solution was to drop JS altogether. JS becoming the language of the browser was an artifact of history, from when we didn't know where this internet thing was going. By the time NodeJS was invented we should have known better.

resonious•1mo ago
Very subjective but IME, understanding assembly is correlated with being a skilled web developer. Even though you don't actually write assembly while doing web dev.
Swizec•1mo ago
> When I need to write an email, I often bullet-point what I want to write and ask the LLM to write out a coherent, cordial email. I’ve gotten worse at writing emails

Just send the bullet points! Nobody wants the prose. It’s a business email not an art. This is a hill I will die on.

Prose has its uses when you want to transmit vibes/feelings/... For actionable info communication between busy people, terse and to the point is better and more polite.

It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.

sshine•1mo ago
> Just send the bullet points! Nobody wants the prose. […] It’s bad enough when I have to read people waffling. Please don’t make me read LLM waffle.

I use LLMs to shorten my emails.

seadan83•1mo ago
I think this is the author's point. The ability to write short and concisely is a skill. So goes the saying: "If I had more time, I would have written a shorter letter."

Using LLMs to do that shortening is potentially hindering that practice.

The author's point I think is less about sending LLM waffle, it's a lot more that they can't send something that is indistinguishable from LLM waffle anyways due to skills issue - because the LLM is so often used instead of building that skill.

I think the question is largely, can the LLM results be used for human learning and human training, or is it purely a shortcut for skills - in which case those skills never form or atrophy.

sshine•1mo ago
> the question is largely, can the LLM results be used for human learning and human training, or is it purely a shortcut for skills

I agree. And I think everyone who uses them does a combination. The biggest danger lies with new learners who mistake immediate completion of a training task for having learned something.

I will drive a stick-shift once a year, and automatic every other day. Surely my skill will atrophy, but the convenience is worth it.

neom•1mo ago
I think it's a fair hill to die on, I'll join you. I go so far as to say if I take a very direct tone with you after a formality and you keep up the formalities, it's a bit of a red flag. Gimmie the words with what you want only, please.
rahimnathwani•1mo ago
The exception is when you're sending emails to people who don't have the same background knowledge or assumptions as you do.

Imagine:

  Write a coherent but succinct email to Ms Griffin, principal of the school where my 8yo son goes, explaining;
  - Quizlet good for short term recollection
  - no point memorising stuff if going to forget later
  - better to start using anki, and only memorize stuff where worth remembering forever
belorn•1mo ago
That seems like an effective way to get Ms Griffin annoyed. Given the prevalence of cheating in education, they might be much more likely to identify that an LLM was used to generate the text, after which they label the email as spam and the parent as someone who would send them such spam.
KevinMS•1mo ago
> Just send the bullet points! Nobody wants the prose.

But the recipient can just ask AI to convert the prose into bullet points.

roxolotl•1mo ago
That’s what I can do with my newfound time now that LLMs write my emails for me: use LLMs to convert others' emails into bullet points!
AlecSchueler•1mo ago
Bonus global warming for everyone!
MisterTea•1mo ago
A long time ago I would write these stupid long, wordy, emails to my manager summing up my work week. He finally told me, "please, keep it short and sweet. I don't need to know every wire or line of code you touched. Just summarize it into a few sentences." Best conversation ever. Went from 2 hours of typing Friday afternoon to 10 minutes or so. I'm stumped as to why we went backwards.
the_arun•1mo ago
Now it has gotten more informal - slack. Not sure how many still use emails for internal communications.
outlore•1mo ago
it’s like Saitama sensei said; keep it to twenty words or less
MisterTea•1mo ago
Yeah I was going full Genos for sure...
lucyjojo•1mo ago
this heavily depends on your interlocutor.
holografix•1mo ago
There’ll be a move to oral ability assessment across the board.

Oral exams, face to face interviews, etc.

If you think of the LLM as a tireless coach and instructor and not a junior employee you’ll have a wonderful opportunity. LLMs have taught me so so much in the last 12 months. They have also removed so many roadblocks and got me to where I wanted to be quicker. Eg: I don’t particularly care about learning Make atm but I do want to compile something to WASM.

handfuloflight•1mo ago
Better check if that's really a "hearing aid", then.
bryan0•1mo ago
I think another good historical analogy is the invention of writing. In Phaedrus[0] Plato argued that it may make people dumber.

0: https://en.m.wikipedia.org/wiki/Phaedrus_(dialogue)

namaria•1mo ago
> I think another good historical analogy is the invention of writing. In Phaedrus[0] Plato argued that it may make people dumber.

No, he doesn't. Plato quotes Socrates quoting a mythical Egyptian king talking with the god who had supposedly created writing and wanted to gift it to the Egyptians. The entire conversation is much more nuanced. For one, writing had existed for three millennia by the time this dialogue was written, and alphabetic Greek writing had existed for several centuries.

Plato does make the point that access to text is not enough to acquire knowledge and that it can foster a sense of false understanding in people who conflate knowing about something with knowing something, which I think is quite relevant when you see people claiming they can learn things from asking LLMs about it.

bryan0•1mo ago
Thanks for explaining this in a much better way
idopmstuff•1mo ago
I think the list of historical analogies is missing the biggest one - the internet.

Memorization used to be a much more important skill than it is today. I am probably worse at rote memorization than I was when I was 13. Am I dumber? I would say no - I've just adapted to the fact that memorization is much less important in a world where I have access to basically the entire recorded history of human knowledge anywhere, anytime.

LLMs are just another very powerful technology that changes what subdomains of intelligence matter. When you have an LLM that can write code better than any human being (and since I know I will get testy HN replies about how LLMs can't do that, I will clarify here that I mean this is a thing that is not true today but will be in the future), the skill that matters shifts from writing code to defining the problem or designing the product.

> Looking at historical examples, successful cases of offloading occurred because the skills are either easily contained (navigation) or we still know how to perform the tasks manually but simply don’t need to anymore (calculator). The difference this time is that intelligence cannot easily be confined.

This is true, but I think it just means we'll see a more extreme kind of the same change we've seen as we've created powerful new tools in the past. I think it's helpful to think of the tool less as intelligence and more as the products of intelligence that are relevant, like generating high quality code or doing financial analysis. You'll have tools that can do those things extremely well, and it'll be up to you to make use of them rather than worrying about the loss of now-irrelevant skills.

rvz•1mo ago
> Assembly to C to Python. Almost no one writes assembly by hand anymore (unless you work at Deepseek)

Unless you are maintaining hardware or device drivers which is done at any company that makes hardware such as: Apple, Google, Microsoft, Nvidia, SpaceX, Intel, AMD, ARM, Tesla and the list goes on.

jebarker•1mo ago
Yep. Or writing video codecs or other performance-critical software. It's amazing that people make blanket statements like this when really they're just not familiar with what other SWEs are doing.
the_snooze•1mo ago
More broadly, there's a lot of value in knowing how to work with constrained systems: things that have to be offline, or radiation-hardened, or low-power, or low-spec, etc. Those tend to be resilient systems; i.e., things that people can quietly rely on instead of being subject to "move fast and break things."

Building web apps that you can update willy-nilly while running on arbitrarily powerful and always-available hardware isn't the entirety of software engineering.

NewsaHackO•1mo ago
Correct, cream-of-the-crop software engineers doing bleeding-edge work will most likely never be supplanted by LLMs. I think the issue is that 90% of programmers do not do such work. The things most software engineers actually do (front-end web dev using a popular framework, MVC-like apps, gluing together APIs and libraries into a custom configuration of an otherwise commonly solved problem, etc.) are the things LLMs excel at, and they will continue to improve as time goes on.
legohead•1mo ago
I love LLMs, and actually feel they are making me smarter.

I'll be thinking of something in the car, like how do torque converters work? And then I start live talk session with GPT and we start talking about it. Unlike a Wikipedia article that just straight tells you how it works, I can dive down into each detail that is confusing to me until I fully understand it. It's incredible, for the curious.

steve_adams_86•1mo ago
If you're curious about torque converters I suspect you're careful about this, but what's your information vetting process? I use LLMs via text, so I can verify info as it streams in. How do you verify what's spoken to you in a car?
0xEFF•1mo ago
I do the same as GP on a regular 2-hour drive I take up I-5.

The vetting process is the same as if I were driving up I-5 with a gear head friend of mine having a conversation with them as we go.

legohead•1mo ago
If something sounds off I just tell it I think it's wrong or to double check itself, similar to what I do with text.
spacemadness•1mo ago
I'd also rather use them as a tutor of sorts than as a "please do things for me" tool. I think they're quite useful in that regard, albeit I know not to trust them fully as the only source of information.
ZeWaka•1mo ago
> While CS undergrads are still required to take classes on assembly, most productive SWEs never interact with assembly.

You may think this, but the principles are extremely relevant even in much 'higher tiers' of programming, such as the front-end. Performance optimization is always relevant, and understanding the concepts you learn from learning assembly is crucial.

Such courses also generally encourage a depth of understanding of the whole computing stack. This is more and more relevant in the modern age, where we have technologies such as Web*Assembly*.
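To make the parent's point concrete with a generic illustration (my own example, not from the thread): even in Python, knowing what the runtime does underneath explains real performance differences. Strings are immutable, so repeated `+=` may have to reallocate and copy the growing buffer (quadratic in the worst case), while `''.join` measures once and copies once.

```python
# Illustrative only: low-level awareness paying off far above assembly.
def concat_naive(parts):
    out = ""
    for p in parts:
        out += p          # may reallocate and copy the whole buffer
    return out

def concat_join(parts):
    return "".join(parts) # single measuring pass, single copy

parts = ["x"] * 10_000
assert concat_naive(parts) == concat_join(parts) == "x" * 10_000
```

(CPython sometimes optimizes in-place `+=` on strings, so the gap varies, but the underlying model is what tells you which form is safe to rely on.)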

pglevy•1mo ago
The author mentions it at the end, but continuing to experience long-form content like reading books and listening to multi-hour podcasts — particularly in other knowledge domains — should counteract this.
000ooo000•1mo ago
That's simply consumption and is by no means a stand-in for actual problem solving and learning.
kelsey98765431•1mo ago
I have used a simple benchmark for productivity related personal workflow changes that has always served me very well and kept me feeling good about what i choose to use to do my job, and that's what i now call the airplane mode test. can i use this tool completely offline, isolated from the rest of the world? growing up wifi was not ubiquitous for me. i didn't live somewhere remote, i was just an early adopter of computers and carried a laptop in school when that was considered a rare piece of kit. learning programming i always kept the philosophy of can i do this on the train while going home in mind when selecting how i wrote code. i kept man pages handy and learned how to search them properly (man -k . | grep), learned how to access the gnu info pages on my machine, and even found the glibc manual tucked away safely in my /usr/share directory which i hadn't know was there. over the years i stayed away from stack overflow and google when i was writing code as much as possible, first looking at the resources available to me on my local machine.

i now have qwen3. it runs locally on my machine. it can vibe code, it can oneshot, it can reason about complex non code problems and give me tons of background information on the entire world. i used to keep a copy of wikipedia offline, only some few gigabytes for the text version and even if that is too much there's reduced selection versions available in kiwix.

i am fine with llms taking over a lot of that tedious work, because i am confident i will always have this tool the same as all my other tools and data that i backup and keep for myself. that means its ok for me to cheat a bit here and there and reach out to the higher power models in the cloud the same way i would sometimes google an error message before even reading it if i am doing work to pay my bills. i have these rungs of the ladder to climb down from and feel like i am not falling into oblivion.

i think the phrase that sums this up best is work smarter not harder. im ok with accepting a smarter way of doing things, as long as i can always depend on it being there for me in an adverse situation.

tiffanyh•1mo ago
I’ve been afraid of this as well.

Which is why I try to treat LLMs like a “calculator” to check my work.

I do things myself, then after I do it myself - ask an LLM to do the same.

That way, I’m still thinking critically, and as a result I actually get more benefit from the LLM, since I can be more specific in having it help me fill in gaps.

bgwalter•1mo ago
For a critical article it lists quite a lot of pro-LLM analogies, which are false in my opinion.

The pocket calculator simplifies exactly one part of math and probably isn't even used that much by research mathematicians.

Chess programs are obviously forbidden in competitions. Should we forbid LLMs for programming? In line with the headline though, Magnus Carlsen said that he does not use chess programs that much and that they do make him dumber. His seconds of course use them in preparation for competitions.

LLMs destroy the essence of humanity: Free thought. We are fortunate that many people have come to the same conclusion and are fighting back.

AceJohnny2•1mo ago
> Historical Analogies

I want to add another one to the author's list, which I think is even more relevant:

Writing.

Story goes, the Celtic druids relied on oral tradition and rejected writing, because they figured relying on writing was a crutch that made them weaker. They're gone now and, because of that choice, most of their culture with them.

Like Assembly to C to Python, as the author points out, LLMs allow us to move up the ladder of abstraction. There are obvious downsides to this, but I expect the evolution is inevitable.

The complaints about that evolution are also inevitable. We lose control, and expertise is less valued. We experts stand to lose a lot, especially if we clutch to the old ways. We're in the midst of a sea-change, we know what we're losing, but we're not sure what we're gaining.

PessimalDecimal•1mo ago
> Story goes, the Celtic druids relied on oral tradition and rejected writing, because they figured relying on writing was a crutch that made them weaker. They're gone now and, because of that choice, most of their culture with them.

Can you help me complete this analogy? By failing to rely on "writing" (read: LLMs), what will fail to be recorded and therefore remembered? Is the idea that if knowledge isn't encompassed by an LLM, in the future it will be inaccessible to anyone?

NewsaHackO•1mo ago
Sure! I am not the OP, but it seems like the analogy is how being a Luddite and refusing to integrate modern tools leads to being left behind and becoming irrelevant. Another more contemporary example: when intravascular techniques were first being developed, many CT surgeons felt as though those procedures were beneath them and gladly let cardiologists take point for those while they continued to do open procedures. Because of this, they lost a lot of ground in billable procedures, and it negatively affected compensation and demand for CT surgeons. Now, cardiologists can do some minimally invasive valve repairs and ASD closures, which will continue to take business away from CT surgeons. If you refuse to adapt to new technologies, you will be left behind.
steve_adams_86•1mo ago
"However… even this might still be too slow. Why understand every line of code deeply if you can just build and ship?"

Because the journey is the destination. Using AI extensively so far appears to be a path that mostly allows for a regression to the mean. Caring about what you're doing, being intentional, and having presence of mind is what leads to interesting outcomes, even if every step along the way isn't engaging or yielding the same output as telling an LLM to do it.

I suppose if you don't care about what you're doing, go ahead and get an LLM to do it. But if it isn't worth doing yourself... Why are you doing it?

Really, do you need those Chrome extensions?

Alternatively, though... If you do, but they aren't mission critical, maybe it's fine to have an LLM puke it out.

For something that really matters to you though, I'd recommend being deep in it and taking whatever time it takes.

Also the tutor approach seems great to me. I don't feel like it's making me dumber. Using LLMs to produce code seemed to make me lazy and dumber though, so I've largely backed off. I'll use it to scaffold narrow implementations, but that's it.

skywhopper•1mo ago
The vibe coding examples are interesting to me. Okay, you can create chrome extensions and personal apps with these tools. The author seems to take it as a given that that’s the extent of useful programming. How do they work in maintaining huge applications over time that require interactions between dozens or hundreds or thousands of streams of work?
karaterobot•1mo ago
> No one lamented the advent of calculators.

It's interesting that he lists a number of historical precedents, like the invention of the calculator, or the mechanization of labor in the industrial revolution, and explains how they are different than AI. With the exception of chess, I think he's wrong about the effects of all of them.

For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did. People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did. People (in aggregate) are worse at those skills now.

Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.

The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place? Is it crazy to think that not memorizing things because we can access printed (and digitized) material might have larger, unforeseen consequences on our brains, or our societies? Could mechanizing menial labor have induced some change in how we think, or have any long term effects on our bodies?

I think we're seeing—and will continue to see—that there are knock-on effects to technology that we can't predict beforehand. We think we're making a simple exchange of an old, difficult skill for a new, easy one, but we're actually causing a more far-reaching cascade of changes that nobody can warn us of in advance.

And, to me, the even scarier thing is that those of us who don't live through those changes will have no basis for comparison to know whether the trade-off was worth it.

drewcoo•1mo ago
Calculators also ruined the ability to understand and use logarithms (slide rules).
aminsadeghi•1mo ago
I'd argue that using calculators instead of learning how addition is done does hurt kids' ability to do mental arithmetic. It's an experiment we haven't tried, or at least not in places I've lived. Sure, once you get how addition is done, feel free to free up your mind by skipping 2+ digit arithmetic with a calculator. Same as: sure, once you've learned what caching is and implemented a small prototype, feel free to ask Claude to implement caching for you.
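In that spirit, the "small prototype" of caching is genuinely only a few lines. A minimal memoization sketch (names and structure are illustrative, not from the comment):

```python
import functools

# A hand-rolled memoization cache: the kind of prototype worth writing once
# before delegating cache-building to a tool.
def memoize(fn):
    cache = {}
    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:       # compute only on a cache miss
            cache[args] = fn(*args)
        return cache[args]
    wrapper.cache = cache           # expose the cache for inspection
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(30) == 832040  # fast, because subproblems are computed once
```

Once you've internalized that, reaching for `functools.lru_cache` (the stdlib equivalent) is delegation, not delegated understanding.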
etblg•1mo ago
> For instance, people did lament the invention of calculators, saying it would harm kids' ability to do mental arithmetic. And it did.

> Fortunately for us, we replaced those skills with technology that allowed us to do something analogous to them, but faster and more easily.

Don't kids still learn to do arithmetic in their head first? I haven't been in a school in decades but I remember doing it all sans calculator in elementary school. When you move on up to higher level stuff you end up using a calculator, but it's not like we skip that step entirely, do we?

aminsadeghi•1mo ago
Exactly! Steph Ango (Obsidian creator) has said it well in his "Don't delegate understanding" essay: https://stephango.com/understand
karaterobot•1mo ago
I bet they do, but it's not a question of learning the skills in school. It's more about using those skills, which I don't think is as common today.
threatofrain•1mo ago
I wonder if in the place of many lower level skills one is then freed to explore higher order skills. We now have very fancy calculators, such as in the form of tools like notebooks that connect to data sources and run transformations and show visualizations.
karaterobot•1mo ago
Looking around, I don't see too many people exploring higher order skills using that spare brain power. I think at the margins, you've probably got really smart scientists/engineers/philosophers doing that, but what does the ordinary person in the street do? This is the grumpiest thing I'll say today, but it seems like they just scroll social media on their phones while streaming media plays in the background.
jrapdx3•1mo ago
> "People also said that GPS navigation would hurt people's ability to use a map, or create one in their heads. And I'm old enough to say: it absolutely did."

Thing is some people never were good at reading/using maps, much less creating them. Even with GPS at hand I still prefer seeing a map to know where I'm going. Anyway, retaining at least a modicum of "classic" skills is beneficial. After all, GPS isn't infallible. As with all complex technologies, possibility of failure warrants having alternatives.

I was recently on a cruise, someone asked the ship's navigator whether officers were trained on using old instruments like the sextant. He replied that they were, and continue to drill on their use. Sure, the ship has up-to-date equipment, but knowing the "old ways" is potentially still relevant.

> "The question is: what are the second- and third-order effects of losing those skills, or not learning them in the first place?"

Naturally, old skills fade with the advent of newer methods. There's a shortage of farriers, people who shoe horses. Very few people are being apprenticed in the trade. (Though I'm told the work pays very well.) Owning horses is a niche but robust interest, so farriers have full workloads; the occupation is not disappearing.

Point is that in real-world terms losing skills diminishes the richness of human lives, because there's value in all constructive human endeavor. Similarly, an individual's life is enriched by acquiring fundamental skills even if seldom used. Of course we have to parcel out our time wisely, but sparing a bit of time to exercise basic capabilities is probably a good idea.

ayrtondesozzla•1mo ago
I recommend using Anki (or whatever software does the job) to commit everyday, normal stuff that comes up to long-term memory.

Anki has desktop and phone apps, and if you make an account online, you can connect both to it and sync across the two devices with no effort. I can do my daily review and add cards from laptop or phone whenever something comes up.

I use no subdecks, and zero complex features. Add cards, edit in the "browser", delete sometimes if I've second thoughts. 40 new cards each day, reviewing is ~45 mins and a joy.

All that to say - it's a direct antidote to the issues being described here. I rush to new things less, and spend much more time consolidating and forming links between stuff I know or "knew".

It's directly pushing me towards behaviour that fits the reality of how my brain works. Tabs are being closed with me saying to myself - I'll learn the name of the author and book for now, that's a good start.

Great for birthdays, names, an anecdote you loved, a little idea you had, fleshing out your geography, history, knowledge of plants, lyrics, nuggets from the Common Lisp book you're doing, etc etc.

So for me one huge thing to reclaim your brain and get acquainted with your memory is - flashcards!

tailspin2019•1mo ago
This is very interesting. I’m aware of Anki/flash cards for helping with learning specific topics, but this more general usage that you’re describing is extremely intriguing!

I’m inspired to add something like this to my workflow.

I’d be curious to hear more about how/why you got started with this habit?

ayrtondesozzla•1mo ago
https://augmentingcognition.com/ltm.html I can't remember what I saw that led me to that essay, but it got me thinking I should give flashcards another go.

Like many people, I guess, I had also tried once or twice, but it was very much the common pattern of downloading a couple of other people's decks and then stopping a couple of days later.

I'm not implementing what he suggests very much at all in the end - but the idea is at least hinted at there, I think.

ayrtondesozzla•1mo ago
For the specific idea, I'm experimenting with a piece of wisdom shared with me once by a friend who had an immense repertoire of Irish traditional tunes she remembered and played effortlessly. I knew a few hundred tunes; she knew thousands.

I lamented the long road ahead of me, all these tunes to learn, and she said, no no, don't worry, just start by learning the tunes you already know!

senordevnyc•1mo ago
Why is it scary? How much time do you spend fretting over the hunter gatherer skills you missed out on?
firejake308•1mo ago
> GPS. It’s so reliable that I’m fine being unable to navigate. I’ve never gotten in a situation where I wish I had learned to navigate without Google Maps beforehand.

A dying phone and needing to get back home from a new place late at night is certainly possible, so I think it is worth having at least a basic knowledge of the major highways in your locality and which direction each one goes.

kranke155•1mo ago
Social media caused societal decay. Dating apps led to a loneliness epidemic. AI will make us dumber.

Digital applications lead to the opposite of what they were meant to do. This is a very reliable indicator for outcomes.

alganet•1mo ago
From my perspective, the kind of loss expected with LLMs does not reveal itself in one generation.

What you described is more akin to laziness than loss of knowledge. It is also a trap. Your text almost reads as satire of the notion that AI could be harmful for learning, because we all know we can relearn those things. And we can, for now.

Several generations of it, when people start to forget simple things, is where the danger lies. We don't know if it will come to that or not.

CelestialMystic•1mo ago
> My first response to most problems is to ask an LLM, and this might atrophy my ability to come up with better solutions since my starting point is already in the LLM-solution space.

People were doing this with Stack Overflow, blogs, and forums. It doesn't matter if you look up pre-existing solutions. What matters is whether you understand them properly. If you do, that's fine; if you don't, you will produce poor code.

> Before the rise of LLMs, learning was a prerequisite for output. Learning by building projects has always been the best way to improve at coding, but now, you can build things without deeply understanding the implementation.

People completed whole projects all the time before LLMs without deeply understanding the code. I've had to work with large amounts of code where it was clear people never read the docs and never understood the libraries and frameworks they were working with. Many people seem to do "Cargo Cult Programming", where they just follow what someone else has done and adapt it just enough to solve their problem.

I've seen people take snippets from Stack Overflow wholesale and just fiddle with them until they work, without really understanding them.

LLMs are just a continuation of this pattern. Many people just want to do their hours and get paid and are not interested and/or capable of actually understanding fully what they are working on.

> GPS. It’s so reliable that I’m fine being unable to navigate. I’ve never gotten in a situation where I wish I had learned to navigate without Google Maps beforehand. But this is also a narrow skill that isn’t foundational to other higher-order ones. Maybe software engineering will be something as obsolete as navigating where you can wholly offload it? However, that seems unlikely given the difference in complexity of the two tasks.

I think the author will learn the hard way. You shouldn't rely on Google Maps. Literally less than two weeks ago, Google Maps was non-functional for me (I ran out of data), and I ended up using road signs and driving towards town names I recognised to navigate back. Learning basic navigational methods is a good idea.

comrade1234•1mo ago
Hey man, it’s not necessarily the llm.
jasonthorsness•1mo ago
I learn a lot from asking LLMs to do things especially in areas like front-end development where I don't know most features of CSS, HTML5, or React. All you have to do is read the code the LLM writes and ask it follow-up questions.

LLMs can accelerate learning. Everyone is optimistic about the idea of personalized tutors improving education. You can already use them like that while working on real-world projects.

cadamsdotcom•1mo ago
> If you do work at the very frontier, LLMs definitely aren’t as helpful and, for very good programmers, their use of these models for coding is fundamentally different.

This needs a mention anytime someone says they struggle to get value from LLMs.

cadamsdotcom•1mo ago
> Total offloading cripples true learning but maximizes short-term speed and output, and finding the right balance is crucial.

LLMs help you go fast but going fast keeps you shallow.

Luckily you can fractally take apart something the LLM made for you with the help of, you guessed it, an LLM. “Why did you make this a constant?” “Why is this hoisted?” “What makes this more performant than that?” .. and pretty soon, you’re not dumb anymore. At least in that area.

LLMs are the ultimate tool for self-directed learners.

countWSS•1mo ago
The whole idea of "LLMs replacing thinking" exists because people don't want to understand the output. Just a few prompts like "What are the flaws of the code below?" would improve understanding and allow you to plan ahead better.
TheChaplain•1mo ago
An LLM is just another tool; use it wrong and you'll suffer for it.

I use it as a tutor, to teach me how to do certain tasks and explain things I do not yet understand.

When I create things like articles, I ask it to review and point out grammar/spelling mistakes.

And sometimes I use it as a search engine to find sources where I can validate information.

NetRunnerSu•1mo ago
sounds like φ matched orders

https://dmf-archive.github.io/

Havoc•1mo ago
Wild that students are forced to consciously decide not to learn because that seems like the better strategic play.
coffeebeqn•1mo ago
Where are people getting this 5-10x productivity number? It's more like 1.2x in the best scenario, and 0x in the worst case. That's my experience working in a large startup, with both large legacy and new codebases, and all the models and tools at our disposal.