I'm now aware of that problem and haven't run into it since, but in retrospect I was pretty shocked that I confidently headed off in the wrong direction when the tool I was using was, by any objective measure, much better.
I agree with this:
"the key to navigating successfully is being able to read and understand a map and how it relates to your surroundings"
https://www.mountaineering.scot/safety-and-skills/essential-...
Can you point to a study to back this up? Otherwise, it's anecdata.
Have sword skills declined since the introduction of guns? Surely people still have hands and understand how to move swords, and they use knives to cut food. The skill level is the same...
But we know that, in aggregate, most people have switched to relying on a technological advancement. By sheer numbers there isn't the same culture around swords as in the past, despite there being more self-proclaimed 'experts'.
Take 100 Gen Z vs. 100 Gen X and you'll likely find a smidgen more of one group than the other able to find a location without a phone.
I actually agree with you on this!
But... I have very very good directional sense, and as far as I can tell it's innate. My whole life I've been able to remember pathing and maintain proper orientation. I don't think this has anything to do with lack of navigation aids (online or otherwise) during formative years.
But I'm talking about geospatial sense within the brain. If your point is that people no longer learn and improve the skill of map-reading then yes that should be self-evident.
The first paragraph of the conclusions section is also stimulating and I think aptly applies to this discussion of using AI as a tool.
> it is important to mention the bidirectionality of the relationship between GPS use and navigation abilities: Individuals with poorer ability to learn spatial information and form environmental knowledge tend to use assisted navigation systems more frequently in daily life, thus weakening their navigation abilities. This intriguing link might suggest that individuals who have a weaker “internal” ability to use spatial knowledge to navigate their surroundings are also more prone to rely on “external” devices or systems to navigate successfully. Therefore, other psychological factors (e.g., self-efficacy; Miola et al., 2023) might moderate this bidirectional relationship, and researchers need to further elucidate it.
It’s the vape of IT.
It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.
Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.
Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators" who use ChatGPT to analyze their own and their partners motivations to find better solutions to common problems.
It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.
I'm not sure about that. I don't have first- or second-hand experience with this, but I've been hearing about a lot of cases of people really getting into a sort of relationship with an AI, and I can understand a bit of the appeal. You can "have someone" who's entirely non-judgemental, who's always there for you when you want to chat about your stuff, and isn't ever making demands of you. It's definitely nothing close to a real relationship, but I do think it's objectively better than the worst of human relationships, and is probably better for your psyche than being lonely.
For better or for worse, I imagine that we'll see rapid growth in human-AI relationships over the coming decade, driven by improvements in memory and long-term planning (and possibly robotic bodies) on the one hand, and a growth of the loneliness epidemic on the other.
Code without AI - sharp skills; your brain works and you come up with better solutions, etc.
Code with AI - skills decline after merely a week or two; you forget how to think, and because you rely on AI for simpler and simpler tasks, your total output is less and worse than if you were to DIY it.
Smug face: “weeeell, how can you say you’re a real programmer if you use a compiler? You need to write raw assembly”, “how can you call yourself a reeeeal programmer if you don’t know your computer down to every register?”, “real programmurs do not use SO/Google” and all the rest of the crap. It is all nerds trying to make themselves feel good by inflating their egos with trivia that is not interesting to anyone.
Well, what do you know? I’m still in business, despite relying a lot on Google/SO, and still create solutions that fix real human problems.
If AI can make 9 to 5 more bearable for the majority of people and provide value in terms of less cognitive load, let’s fucking go then.
But I'm 100% sure I have some "natural" neural connections based on those experiences, and those help me even when working in high-level languages.
By the way, I am using LLMs. They help until they don't. One real-life example I'm hitting at work: they keep mixing up CouchDB and Couchbase when you ask about features. Their training dataset doesn't seem to be large enough in that area.
This is not what founder culture is about.
That train of thought leads to writing assembly language in ed. ;-)
I think developers as a group have a tendency to spend too much time "inside baseball" and forget what the tools we're good at are actually used for.
Farmers don't defend the scythe, spend time doing leetscythe katas or go to scything seminars. They think about the harvest.
(Ok, some farmers started the sport of Tractor Pulling when the tractor came along and forgot about the harvest but still!) :)
Hard disagree. LLVM will always outperform me at writing assembly; it won't just give up and fail randomly when it meets a particularly non-trivial problem, forcing me to write assembly by hand to fix it. If LLMs were 100% reliable on the tasks I have to do, I don't think anyone here would seriously debate the issue of mental attrition (you don't see people complaining about calculators). The problem is that in too many cases the LLM will only get so far, and you will still have to switch to doing actual programming to get the task finished. The worse you get at that last part, the more your skillset converges to exactly the type of things an LLM (and therefore everyone else with a keyboard) can reliably do.
You can go to the Walmart outside town on foot and carry your stuff back. But it is much faster - and less exhausting - to use the car. Which means you can spend more quality time on things you enjoy.
One could also do the drive (use AI) and then get some fresh air afterwards (personal projects, code golf, solving interesting problems), but I don’t think everyone has the willpower for that, or the desire to consider it.
( Of course dear reader, YOU won't randomly kill people because you're a "good driver". )
And it will be the same thing with AI. You want to ask it a question that you can verify the answer to, and then you actually verify it? No problem. But then you have corporations using it for "content moderation", shadow banning actual human beings when it gets it wrong, and then those people commit suicide because they think no one cares about them, when really the AI wrongly pegged them as a bot and heartlessly isolated them from every other living person.
Exercise is good.
Being outside is good.
New experiences happen when you're on foot.
You see more things on foot.
Etc etc. We make our lives way too efficient and we atrophy basic skills. There are benefits to doing things manually. Hustle culture is quite bad for us.
Going by foot or bicycle is so healthy for us for a myriad of reasons.
Economies of scale do mean you can get a fluffy blanket imported from China at $5, less than the cost of a coffee at Starbucks, but for food necessities Walmart isn’t even that cheap or abundant compared to other chains.
When 75% of the West is overweight or obese, and when the leading causes of death are quite literally sloth and gluttony, I think I'd take my chances... We're drowning in an insane quantity of low-quality food and gadgets.
And small local stores come with higher prices - which leads more people, even in small towns with local butchers and bakers, to get into their ride and go to the Lidl or Aldi on the outskirts.
Much like companies will realise LLM-using devs are more efficient by some random metric (do I hear: Story points and feature counts?), and will require LLM use from their employees.
The car analogy has that covered already. When Gutenberg was printing bibles, those things sold like warm bread rolls - these days, printing books is barely profitable. The trick with new disruptive tech always is to be an early adopter - not the long tail.
In the past, reaching a point like an unexpected error or having to look at some docs would act like a "speed bump" and let me breathe, and typically from there I'd acknowledge how tired I am and stop for the moment.
With AI those speed bumps still exist, but there's sometimes just a bit of extra momentum that keeps me from slowing down enough to have that moment of reflection on how exhausted I am.
And the AI doesn't even have to be right for that to happen: sometimes just reading a suggestion that's specific to the current situation can trigger your own train of thought that's hard to rein back in.
Suppose you want to know how some git command works. If you have to read the manual to find out, you end up reading about four other features you didn't know existed before you get to the thing you set out to look for to begin with, and then you have those things in your brain when you need them later.
If you can just type it into a search box and it spits back a command to paste into the terminal, it's "faster" -- this time -- but then you never actually learn how it works, so what happens when you get to a question the search box can't answer?
I remember where I can get information on the internet, not the information itself. I rely on google for many things, but find myself increasingly using AI instead since the signal/noise ratio on google is getting worse.
"Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."
In terms of connections made, Brain Only beats Search User, Search User beats LLM User.
So, yes. If those measured connections mean something, it's the same but worse.
At least for now, while Apple and Google haven't put "AI" in the contacts list. Can't guarantee tomorrow.
The question is: were they wrong? I'm not sure I could continue doing my job as an SWE if I lost access to search engines, and I certainly don't remember phone numbers anymore. As for Socrates, we found that the ability to forget about something (while still maintaining some record of it) was actually a benefit of writing, not a flaw. I think in all these cases we found that to some extent they were right, but either the benefits outweighed the cost of reliance, or the cost was the benefit.
I'm sure each one had its worst-case scenario where we'd all turn into brainless slugs offloading all our critical thinking to the computer or the phone or a piece of paper, and that obviously didn't happen. So it might not happen here either, but there's a good chance we will lose something as a result of this, and the question is whether the benefits still outweigh the costs.
With only 20 minutes, I’m not even trying to do a search. No surprise the people using LLMs have zero recollection of what they wrote.
Plus, they spend ages discussing correct quoting (why?) and statistical analysis via NLP, which is entirely useless.
Very little space is dedicated to knowing if the essays are actually any good.
Overall pretty disappointing.
This is still true whether or not the claim is accurate, as it allows for actual relevant and constructive critique of the work.
Take the claim "my geospatial skills have atrophied due to use of Google Maps" - yet I can use Google Maps once to quickly find a good path, and go back next time without using it. I can judge when the suggestions seem awkward and adjust.
Tools augment skills and you can use them for speedier success if you know what you're doing.
The people who need hand-held alarmism are mediocre.
I think what we are seeing is that learning and education have not adapted to these new tools yet. Producing a string of words that counts as an essay has become easier. If this frees up a student's time to do more sports or work on their science project, that's a huge net positive, even if for the essay itself it's a net negative. The essay does not exist in a school vacuum.
The thing students might not understand is: their reduced recall will make them worse at the exam... Well, they will hopefully draw their own conclusions after their first failed exam.
I think the quantitative study is important, but this qualitative interpretation is missing the point. Recall->Learning is a pretty terrible way to define learning; reproducing is the lowest step on the ladder to mastery.
I thought a lot about it and realised discriminating is much easier than generating.
I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.
I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.
We can determine whether an LLM generation is good or bad in a lot of cases. As a crude strategy, then, we can discard the bad cases and keep generating until we achieve our task. LLMs are useful only because of this disparity between discrimination and generation.
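A minimal sketch of that crude generate-and-filter loop in Python (generate() and looks_good() are hypothetical stand-ins for the LLM call and your own judgment, not any real API):

    import random

    def generate(prompt):
        # Hypothetical stand-in for an LLM call: cheap, plentiful, unreliable.
        return f"draft #{random.randint(0, 9)} for {prompt!r}"

    def looks_good(draft):
        # Hypothetical stand-in for human discrimination: judging is easy
        # even when producing the artifact yourself would be hard.
        return draft.startswith("draft #7")

    def generate_until_good(prompt, max_tries=50):
        for _ in range(max_tries):
            draft = generate(prompt)
            if looks_good(draft):
                return draft
        return None  # the filter alone can't rescue a generator that never hits

    print(generate_until_good("a decent short story"))

The loop only pays off because discrimination is cheap relative to generation; if judging were as hard as producing, it would buy you nothing.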
These two skills are separate. Generation skills are hard to acquire and very valuable, and they will atrophy if you don't keep exercising them.
I don't think this is necessarily true for more complex tasks, especially not in areas that require deep evaluation. For example, reviewing 5 non-trivial PRs is probably harder and more time-consuming than writing them yourself.
The reason why it works well for images and short stories is because the filter you are applying is "I like it, vs. I don't like it", rather than "it's good vs. it's not good".
So having someone else do a task for you entirely makes your brain work less on that task? Impossible.
Aboriginal storytelling is claimed to pass on events from 7k+ years ago.
https://www.tandfonline.com/doi/abs/10.1080/00049182.2015.10...
I remember hearing that the entire epics of the Iliad and the Odyssey were all done via memorization and only spoken... How do you think those poets' memories compared to a child who reads Bob the Builder books?
Similarly (IIRC), Socrates thought the written word wasn't great for communicating, because it lacks the nuance of face-to-face communication.
I wonder if they ever realised that it could also be a giant knowledge amplifier.
I remember some old quote about how people used to ask their parents and grandparents questions, got answers that were just as likely to be bullshit, and then believed that for the rest of their lives because they had no alternative info to go on. You had to invest so much time turning a library upside down and searching through books to find what you needed, if it even had the right book.
Search engines solved that part, but you still needed to know what to search for and study the subject a little first. LLMs solve the final hurdle: going from the dumbest possible, wrongly posed question to knowing exactly what to search for, in seconds. If this doesn't result in a knowledge explosion, I don't know what will.