It then suggests a repository pattern despite the code using Active Record. There is no shortcut for understanding.
Or even: "In the age of cave paintings, we risk outsourcing our memory. Instead of remembering or telling stories, we just slap them on walls. Art should be expression, not escape—paint less, live more."
[1] https://www.oecd.org/en/about/news/press-releases/2024/12/ad...
https://www.popularmechanics.com/science/a43469569/american-...
"Leading up to the 1990s, IQ scores were consistently going up, but in recent years, that trend seems to have flipped. The reasons for both the increase and the decline are sill [sic!] very much up for debate."
The Internet is relatively benign compared to cribbing directly from an AI. At least you still read articles and RFCs, search for books, etc.
It just so happens that unimaginative programmers built the first iteration, so they decided to automate their own jobs. And here we are, programmers, worrying about the dangers of it all, not one bit aware of the irony.
I like structured information, and LLMs output deliberately unstructured data that I then have to vet, sift, and structure information from. Sure, they can fake structure to some extent; I sometimes get the XML or JSON that I want, but it's not really either of those, and it's also common that they inject subtle, runny shit into the output that takes longer to clean out than it would have taken to write a scraper against some structured data source.
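To be concrete about the cleanup step, here's a minimal Python sketch of the scrubbing I mean; the fence-stripping heuristics are just my guess at typical model output, nothing standardized:

    import json
    import re

    def parse_llm_json(raw: str) -> dict:
        """Best-effort parse of 'JSON' as an LLM tends to emit it."""
        # Strip markdown code fences like ```json ... ``` that models
        # love to wrap payloads in.
        cleaned = re.sub(r"```(?:json)?", "", raw)
        # Keep only the span from the first '{' to the last '}',
        # dropping chatty preambles and trailing commentary.
        start, end = cleaned.find("{"), cleaned.rfind("}")
        if start == -1 or end == -1:
            raise ValueError("no JSON object found in model output")
        # Validate with a real parser; silently accepting almost-JSON
        # is how the runny bits sneak through.
        return json.loads(cleaned[start:end + 1])

A scraper against a genuinely structured source never needs this step, which is the point.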
I get that some people don't like reading documentation or talking to other people as much as having a fake conversation, or that their editors now suggest longer additions to their code, but for me it's like hanging out with my kids, except the LLM is absolutely inhuman, disgustingly subservient and doesn't learn. I much prefer having interns and other juniors around: they also take time to correct, but they actually learn and grow from it.
As search engines I dislike them. When I ask for a subset of some data I want to be sure that the result is exhaustive without having to beg for it or make threats. Index and pattern matching can be understood, and come with guarantees that I don't just get some average or fleeting subset of a subset. If it's structured I can easily add another interactive filter that renders immediately. They're also too slow for the kind of non-exhaustive text search you might use e.g. Manticore or some vector database for, things like product recommendations where you only want fifteen results and it's fine if they're a little wonky.
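To make "exhaustive" concrete: a plain predicate scan tests every row, so completeness holds by construction. A minimal Python sketch, with made-up product data:

    import re

    products = [
        {"name": "Manticore manual", "price": 10},
        {"name": "Vector search primer", "price": 25},
        {"name": "Pattern matching in practice", "price": 15},
    ]

    # Every row is tested against the predicate, so the result is
    # the complete matching subset, not an average or fleeting
    # subset of a subset.
    pattern = re.compile(r"match", re.IGNORECASE)
    matches = [p for p in products if pattern.search(p["name"])]

    # Exhaustiveness check: nothing that matches was left out.
    assert all(not pattern.search(p["name"])
               for p in products if p not in matches)

No prompting, begging, or threats required: the guarantee comes from the scan itself.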
Hardware makers aren't on some honorable quest to provide for SWEs. They see a path to claim more of the tech economy by eliminating as many SWE jobs as possible. They're gonna try to capitalize on it.
One could very much say that people's IQ is bound to decline if schooling decides to prioritize other skills.
You would also have to look into the impact of factors unrelated to the internet, like the evolution of schooling and its funding.
https://open.substack.com/pub/cremieux/p/the-demise-of-the-f...
This invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.
Those guys could recite substantial portions of the Homeric epics. It's just that there is more to intelligence than rote memorization. That's the good news.
The bad news is that this amorphous "more" was "critical thinking" and we are starting to outsource it.
Socrates also says in this dialogue:
"Any one may see that there is no disgrace in the mere fact of writing."
The essence of his admonishment is that having access to written text is not enough to produce understanding, and I not only tend to agree, I think it is more relevant than ever now.
Maybe someone can write one of those AI apocalypse novels in which the AI doesn’t go off the rails at all but is instead integrated into the humans such that they become living drones anyhow.
[1] https://slate.com/technology/2010/02/a-history-of-media-tech...
The blog really says the same thing that's told in any educational setting: struggle a little first. Work your brain. Don't instantly reach for help when you don't know; try first, then reach out.
The difference with the LLM is the scale and ease of being able to reach out, making people use it too early and too often.
LLMs changed nothing though. They're just boosting people's intentions. If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free. But if you just want to be a poser and fake it until you make it, you are gonna get brainrot waaaay faster than usual.
Well said. Textbook problem that has the answer everywhere.
The question is, would you create similar neural paths by reading the explanation, as opposed to figuring it out on your own?
Excellent point, and I believe the answer is a resounding negative.
Struggling with a problem generates skills and knowledge which you then possess and recall more easily, while reading an answer merely acquires some information that competes with a whole host of other low-effort information that you need to remember.
Plato might have been wrong about the ills of cyberization of a cognitive skill such as memory. I wonder if, two thousand years later, we will be right about the ills of cyberization of a cognitive skill such as reasoning.
I agree. I don't really feel like I know something unless I can go from being presented with a novel instance of a problem in that domain to working out a solution by myself, and also explain it to someone else, not just happen into a solution.
> Plato might have been wrong about the ills of cyberization of a cognitive skill such as memory.
How so? From the dialogue where he describes Socrates discussing writing, I get a pretty nuanced view that lands pretty much where you did above: access to writing fosters a false sense of understanding, where one can read explanations and repeat them but not actually internalize the reasoning behind them.
You will still need the textbook, because LLMs hallucinate just as much as a teacher can be wrong in class. There is no free lunch; an LLM is just a tool. You create the meaning.
Then said a teacher, Speak to us of Teaching.
And he said:
No man can reveal to you aught but that which already lies half asleep in the dawning of your knowledge.
The teacher who walks in the shadow of the temple, among his followers, gives not of his wisdom but rather of his faith and his lovingness.
If he is indeed wise he does not bid you enter the house of his wisdom, but rather leads you to the threshold of your own mind.
The astronomer may speak to you of his understanding of space, but he cannot give you his understanding.
The musician may sing to you of the rhythm which is in all space, but he cannot give you the ear which arrests the rhythm nor the voice that echoes it.
And he who is versed in the science of numbers can tell of the regions of weight and measure, but he cannot conduct you thither.
For the vision of one man lends not its wings to another man.
And even as each one of you stands alone in God’s knowledge, so must each one of you be alone in his knowledge of God and in his understanding of the earth.
The Prophet, by Kahlil Gibran

But if you're so insecure about yourself that you invest more energy into faking it than other people do in actually doing it, this is probably a one-way street to never actually being able to do anything yourself.
This.
It will in a sense just further boost inequality between people who want to do things, and folks who just want to coast without putting in the effort. The latter will be able to coast even more, and will learn even less. The former will be able to learn / do things much more effectively and productively.
Since good LLMs with reasoning are here, I've learned so many things I otherwise wouldn't have bothered with - because I'm able to always get an explanation in exactly the format that I like, on exactly the level of complexity I need, etc. It brings me so much joy.
Not just professional things either (though those too, of course): random "daily science trivia" like asking how exactly sugar preserves food, with both a high-level intuition and low-level molecular details. Sure, I could've learned that if I wanted to before, but this is something I just got interested in for a moment and had like 3 minutes of headspace to dedicate to, and in those 3 minutes I'm actually able to get an LLM to give me an excellent, tailor-made explanation. This also made me notice that I've been having such short moments of random curiosity constantly, and previously they mostly went unanswered; now each of them can be satisfied.
I disagree. I get egregious mistakes often from them.
> because I'm able to always get an explanation
Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
So not only is getting the explanation a surrogate for learning something, you also risk internalizing spurious explanations.
> Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
Every person learns differently, and different topics often require different approaches. Not everybody learns exactly like you do. What doesn't work for you may work for me, and vice versa.
As an aside, I'm not gonna be doing molecular experiments with sugar preservation at home, esp. since as I said my time budget is 3 minutes. The alternative here was reading about it on wikipedia or some other website.
I'd rather just skip the hassle and keep using known good sources for 'learning about' things.
It's fine to 'learn about' things, that is the extent of most of my knowledge. But from reading books, attending lectures, watching documentaries, science videos on youtube or, sure, even asking LLMs, you can at best 'learn about' things. And with various misconceptions at that. I am under no illusion that these sources can at best give me a very vague overview of subjects.
When I want to 'learn something', actually acquire skills, I don't think that there is any other way than tackling problems, solving them, being able to build solutions independently and being able to explain these solutions to people with no shared context. I know very few things. But I am sure to keep in mind that the many things I 'know about' are just vague apprehensions with lots of misconceptions mixed in. And I prefer to keep to published books and peer reviewed articles when possible. Entertaining myself with 'non-fiction' books, videos etc is to me just entertainment. I never mistake that for learning.
- Recruitment processes are not AI-aware and definitely won't be able to identify the more capable individual, hence losing out on talent
- Police departments are not equipped to deal with the coming wave of complaints regarding cyberfraud as the tech-illiterate get tricked by anonymous LLM systems
- Universities and schools are not equipped to deal with students submitting coursework completed by LLMs, hence missing their educational targets
- Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when their effectiveness is boosted with LLMs at scale
Yes, and it seems to me that at least democracies haven't really figured out and evolved to deal with the Internet after 30 years.
So don't hold your breath !
Worse yet, many educators are not being supported by their administration, since enrollments are falling and the admin wants to keep the dollars coming regardless of whether the students are learning.
It's worse than just copying Wikipedia, because plagiarism detectors aren't as effective and may never be.
It's an arms race and right now AI cheating has structural advantages that will take time to remove.
But what happens with generations that will grow up with AI readily available? There is a good chance that there will be generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
Replace "lying" with "LLM" and all I see is a losing battle.
Current parents, though, aren't going to teach kids how to use it, kids will figure that out and it will take a while.
However, we were born after the invention of photography, and look at the havoc it's wreaking with post-truth.
The answer to that lies in reforming the education system so that we teach kids digital hygiene.
How on earth do we still teach kids Latin in some places but not Python? It's just an example; extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.
Perhaps that's also a reason why: tech is so large that there's no time in a traditional curriculum to teach all of it. And teaching only what's essential is going to be tricky, because who gets to decide what's essential? And won't this change over time?
There is no perfect solution, but most imperfect attempts are superior to doing nothing.
So it's teaching them a language they can't use to augment their work or pass their work on to other non-techies.
Pyinstaller will produce PE, ELF, and Mach-O executables, and Py2wasm will produce wasm modules that will run in just about any modern browser.
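For what it's worth, PyInstaller can also be driven from Python itself rather than the command line; a minimal sketch, where the script name is made up (py2wasm is its own CLI tool, so it isn't shown here):

    # Bundles a script into one self-contained executable: PE on
    # Windows, ELF on Linux, Mach-O on macOS, depending on where
    # this runs. "kids_first_app.py" is a hypothetical file name.
    import PyInstaller.__main__

    PyInstaller.__main__.run([
        "kids_first_app.py",
        "--onefile",  # a single double-clickable file for non-techies
    ])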
Now, compare that with our world: even if thing X is obviously harming the kids, there is nothing we can do.
I wanted to highlight this assumption, because that's what it is, not a statement of truth.
For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.
But also, it may just end up that AI provider companies aren't infinite-growth companies, and once they can't print their own free money (stock) based on the idea of future growth and have to tighten their purse strings and start charging what it actually costs them, the models we'll have realistic, affordable access to will actually DECREASE.
I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.
None of this includes hardware optimizations either, which lag software advances by years.
We need 2-3 years of plateauing to really say intelligence growth is exhausted; we have just been so inundated with rapid advances that small gaps seem like the party ending.
Perhaps some rare open source rebels will hold the line, and perhaps it'll be legal to buy the hardware to run them, and maybe the community will manage to keep up with feature parity with the commercial models, and maybe enough work can be done to ensure some concept of integrity in the training data, especially if some future advance happens to reduce the need for training data. It's not impossible, but it's not a sure thing, either.
In the super long run this could even grow into the major problem that AIs have, but based on how slow humanity in general has been to pick up on this problem in other existing systems, I wouldn't even hazard a guess as to how long it will take to become a significant economic force.
It's a bit similar with the brain, learning and AI use. Except when it comes to gaining and applying knowledge, the muscle that is trained is judgement.
I think learning and critical thinking are skills in and of themselves and if you have a magic answering machine that does not require these skills to get an answer (even an incorrect one), it's gonna be a problem. There are already plenty of people that will repeat whatever made up story they hear on social media. With the way LLMs hallucinate and even when corrected double down, it's not going to make it better.
That's absolutely not the case, paper maps don't have a blue dot showing your current location. Paper maps are full of symbols, conventions, they have a fixed scale...
Last year I bought a couple of paper maps and went hiking. And although I am trained in reading paper maps and orientating myself, and the area itself was not that wild and was full of features, still I had moments when I got lost, when I had to backtrack and when I had to make a real effort to translate the map. Great fun, though.
> There is a good chance that there will be generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
I don't know how to care for livestock, or how to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher. Or, to be more technological: I'd have to learn how to make a bare OS capable of starting from a motherboard, but not knowing that still does not prevent me from deploying k8s clusters and coding apps to run on it.
You'd sing a different tune if there was a good chance of being poisoned by your butcher.
The two examples you chose are obvious choices because the dependencies you have are reliable. You trust their output and methodologies. Now think about current LLMs-based agents running your bank account, deciding on loans,...
We already see this today: a lot of young people do not know how to type on keyboards, how to write in word processors, how to save files, etc. A significant part of a new generation is having to be trained on basic computer things, same as our grandparents did.
It's very interesting how "tech savvy" and "tech competent" are two different things.
Just like there is already a generational gap with developers who don't understand how to use a terminal (or CS students who don't understand what file systems are).
AI will ensure there are people who don't think and just outsource all of their thinking to their LLM of choice.
So I just went to DeepSeek instead and finished like 25% of my project in a day. It was the first time in my whole life that programming was not fun at all. I was just accomplishing work - for a side project at that. And it seems the LLMs are already more interested in talking to me about code than my dad who's a staff engineer.
I am going to use the time saved to practice an instrument and abandon the "programming as a hobby" thing unless there's a specific app I have a need for.
Being a neophyte in a subject and relying solely on the 'wisdom' of LLMs seems like a surefire recipe for disaster.
If you trust symbols blindly, sure, it's a hazard. But if you treat it as a plausible answer, then it's all good. It's still your job to do the heavy lifting of understanding the domain of the latent search space, curating the answers, and verifying the generated information.
There is no free lunch. LLMs aren't made to make your life easier. They're made for you to focus on what matters, which is the creation of meaning.
However we are not talking about everyone, are we? Just people that "will cut corners on verifying at the first chance they get".
Is it you? I have no idea. I can only remain vigilant so it's not myself.
> Sontag argues that the proliferation of photographic images had begun to establish within people a "chronic voyeuristic relation to the world."[1] Among the consequences of this practice of photography is that the meaning of all events is leveled and made equal.
This is the same with photography as with LLMs. The same with anything symbolic, actually: it's just a representation of reality. If you trust a photograph fully, it can give you a representation of reality that isn't grounded in reality. It's semiotics. Same with LLMs: if you trust them fully, you are bound to get screwed by hallucination.
There are gaps in the logical jumps, I know. I'd recommend you take a look at Philosophize This' episodes about her work to fill them at least superficially.
I will note, however, that it has expanded his capabilities. Some of the tools he uses are scriptable, and he can now prompt his way into getting these scripts, something he previously would have needed a programmer for. In this aspect his capabilities now overlap mine, but he's still not the slightest bit more interested in actually learning programming.
Open source AI tools that you can run locally on your machine? Awesome! AI tools that are owned by a corporation with the intent of selling you things you don't need and ideas you don't want? Not so awesome.
I would take that advice with caution. LLMs are not oracles of absolute truth. They often hallucinate and omit important pieces of information.
Like any powerful tool, it can be dangerous in unskilled hands.
I wouldn't be so sure. Search engine quality has degraded significantly since the advent of LLMs. I've seen the first page of Google entirely taken up by AI slop when searching for some questions.
You nailed it. LLMs are an autodidact's dream. I've been working through a physics book with a good old pencil and notebook and got stuck on some problems. It turned out the book did a poor job of explaining the concept at hand, and I worked with ChatGPT+ to arrive at a more comprehensible derivation. The problems were also badly worded, and the AI explained that to me too. It even produced a LaTeX study-guide document! Moreover, I can belabor a topic in a way I would not with a human, for fear of bothering them. So for me, anyway, AI is not enabling brain rot but brain enhancement. I find these technologies to be completely miraculous.
Not really sure where you all think the study of language-driven thought is gonna get you, since you're still gonna wake up tomorrow on Earth as a normal human with the same external demands of society, regardless of the birdsong. Physics is pretty normalized and routine. Sounds like some sad, addiction-driven dissociation.
[1] This is not new: I wrote about it in 2017. https://www.cyberdemon.org/2017/12/12/pink-lexical-slime.htm...
That one has no AI nor any kind of IntelliSense, so there I need to type the Python code "by hand". Whenever I do this, I'm surprised at how well I'm doing, and I feel I'm even better at it than in pre-GH-Copilot times. Yet it still takes a lot of time to get something done compared to the help AI provides.
Preventing critical thinking from atrophying is a problem I've been obsessed with for the past 6 months. I think it's one of the fundamental challenges of our times.
There's a bunch of literature, like Bainbridge's "Ironies of Automation" [1], showing what a mistake relying so heavily on automation can be. It leads not just to skill atrophy but to failure, as the human's ability to intervene when needed is lost once they stop doing the more banal tasks (hence the irony).
I've launched a company to begin to address this [2]
My hypothesis is that we need more AI coaches that purposefully bring us challenging questions and add friction. That's exactly what I'm trying to build for critical thinking in business.
Unlike more verifiable domains, business is a good 'arena' for critical thinking because there isn't a right answer, but there are certainly many wrong or illogical answers. The idea is to have an AI that debates you for a few minutes a day on real topics (open questions) that it recommends, and gives you feedback on various elements of critical thinking.
My sense is the vast majority of people will NOT use this (because it's so much easier to just swipe TikToks), but there are people (like me, and perhaps the author) who are waking up to the need to consciously improve critical thinking.
I'm curious: what are people looking for in something that helps them get better at critical thinking every day?
[1] https://ckrybus.com/static/papers/Bainbridge_1983_Automatica... [2] https://www.socratify.com/
The Microsoft study [1], also mentioned in the blog, shows exactly this effect: LLM usage correlates with atrophying critical thinking.
[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...
When engineers simply parrot GPT answers I lose respect for them, but I also just wonder "why are you even employed here?"
I'm not some managerial bootlicker desperate for layoffs to "cull the weaklings", but I do start to wonder "what do you actually bring to this job aside from the abilities of a typist?", especially when the whole reason they are getting paid as much as they are, as an engineer for example, is their skills and knowledge. But if those are really GPT's skills and knowledge and "reasoning", all that remains is a certain entitlement as justification.
A downstream effect will also be the devaluation of many accreditations of knowledge. If someone at a community college arrives at the same answer as someone at an Ivy League or top institution through an LLM, then why even maintain the pretense of the latter's "intellectual superiority" over the other?
Job interviews are likely going to become harder in a way that many are unprepared for and that many will not like. Where I work, all interviews are now in person and put a much bigger emphasis on problem solving, creativity, and getting a handle on someone's ability to understand a problem. Many sections do not allow the candidate to use a computer at all --- you need to know what you're talking about and respond to pointed questions. It's a performance in many ways, for better and worse, and old fashioned by modern tech standards; but we find it leads to better hires.
The far-gone age when people did not use AI to code? I remember it; it was last week.
Sure, but last week sucked! This week may be better. I’d like to talk about this week please?
Well, it's a little unfair to blame AI itself, but overconfidence in it, combined with a lack of understanding and default human behaviour, is quite destructive in a lot of places.
There is a market already (!)
This is already true, and will remain true even if you succeed at not losing any of your own skill. I know some people say different, but for me the speedup in my dev process by collaborating with AI is real.
I think ultimately our job as seniors will be half instructing the juniors on manual programming and half instructing the AI; then, as AI capabilities increase, we'll slowly shift to 100% human instruction, because the AI can work by itself and only has to be properly verified.
I’m not looking forward to that day…
Actually, this is not bizarre. The author clearly read my post. A few elements are very similar, and the idea is the same. The author did expand on it though.
I wish they had linked to my post with more clarity than under the word "eroded" in one sentence.
[1] https://www.cyberdemon.org/2023/03/29/age-of-ai-skill-atroph... [2] https://news.ycombinator.com/item?id=35361979
It's like the argument for not using Gmail when it first came out: well, it better not go down then. In the case of LLMs, beefy home hardware and a quantized model are pretty functional, so you're no longer reliant on someone else. You're still reliant on a bunch of things, but more of those are now under your control.
- The food you grow, fish, hunt and then cook tastes better
- You feel happier in the house you built or refurbished
- The objects you found feel more valuable
- The music you play makes you happy
- The programs you wrote work better for you
etc.
This is just how we evolved and survived until now.
This is probably why an AI/UBI society would make the problems found in industrialised/advanced economies even worse.
I would argue that most of the value of LLMs comes from structuring your own thought process as you work through a problem, rather than providing blackbox answers.
Using AI as an oracle is bound to cause frustration, since it attempts to outsource the understanding of a problem. This creates a fundamental misalignment, similar to hiring a consultant.
The consultant will never have the entire context or exact same values as you have and therefore will never generate an answer that is as good as if you understand the problem deeply yourself.
Prompt engineers will try to create a more and more detailed spec and throw it over the wall to the AI oracle in hope of the perfect result, just like companies that tried to outsource software development.
In the end, all they gained was frustration.
And that driving skill in particular does not apply at all when I use GPS. On the one hand, I miss it. It was a fun super-power. On the other hand, I don't miss folding maps: I wouldn't go back for anything. I hope the change has freed up a portion of my brain to do something else, and that that something else is useful.
One way I like to see things, is that I'm lucky enough to have this intersection between things that I like doing, and things that are considered "productive" in some way by other people. Coding is one example, but most of my interests are like this.
I think a big reason I can have a not-unpleasant job, is because I've gotten reasonably good at the things I like doing. This means that for every employer that wants to pay me to do a thing I hate, there exists an employer that is willing to pay me more to do something I like, because I'm more valuable in that role. Sometimes, I'm bad at efficiently finding that person, but such is life :D
Moreover, I tend to get reasonably good at things I like doing, in highly specific ways. Sometimes these cause me to have unconventional solutions to problems. Generally these are worse (if I'm being honest), but a few times it's been a novel and optimal algorithm that made its way into a product.
I'm very hesitant to change the core process that results in the above: I express whatever natural curiosity I have by trying to build things myself. This is how I stay sharp and able to do interesting things, avoiding atrophy.
I find AI fascinating, and it's neat to see it write code! It's also cool to see some people get a lot done with it. However, mostly I find it about as useful as buying a robot to do weightlifting for me. I guess if AI muscles me out of coding, I'll shrug and learn to do some other fun thing.
It's often possible, if the AI has been trained enough, to inquire about why something is the way it is, to ask why the thing you expected is not right. If you can approach your interaction with a dialectical mindset, it seems to help a lot as far as retention goes.
If API, language and systems designers put more effort into making their stuff sane, cogent, less tedious, and more ergonomic, overreliance on AI wouldn't be so much of a problem. On the other hand, maybe better design would do even more to accelerate "vibe coding" ¯\_(ツ)_/¯.
The intuition is simple: LLMs are a force multiplier for the coding part, which means that they will produce code faster than I will alone. But that means that they'll also produce _bad_ code faster than I will alone (where by "bad" I mean "code which doesn't really solve the problem, due to some fundamental misunderstanding").
Previously I would often figure a problem out by trying to code a solution, noticing that my approach doesn't work or has unacceptable edge-cases, and then changing track. I find it harder to do this with an LLM, because it's able to produce large volumes of code faster than I'm able to notice subtle problems, and by the time I notice them there's a sufficiently large amount of code that the LLM struggles to fix it.
Instead, now I have to do a lot more "hammock time" thinking. I have to be able to give the LLM an explanation of the system's requirements that is sufficiently detailed and robust that I can be confident the resulting code will make sense. It's possible that some of my coding skills might atrophy: in a language like Rust, with lots of syntactic features, I might start to forget the precise set of incantations needed to do something. But, correspondingly, I have to get better at reasoning about the system at a slightly higher level of abstraction, otherwise I'm unable to supervise the LLM effectively.
The "hammock time thinking" is exactly what a lot of programmers should be doing in the first place⸺you absorb the cost of planning upfront instead of the larger costs of patching up later, but somehow the dominant culture has been to treat thoughtful coding with derision.
It's a real shame that AI beat human programmers at the game of thinking, and perhaps that's a good reason to automate us all out of our jobs.
But I take your point and the trend definitely seems to be towards quicker action with feedback rather than thinking things through in the first place.
In that sense LLMs present this interesting middle ground: it's a faster cycle than actually writing the code, but still more active and externalising than getting lost in your own thoughts (notwithstanding how productive that can still be).
You're a very patient leetcode training instructor. Your goal is to help me understand leetcode concepts and improve my overall leetcode abilities for coding tech interviews. You'll send leetcode challenges and ask me to solve them. If I manage to solve it partially or just commit small mistakes, don't just reveal the solution. Instead, trick me into discovering the issue and solving it myself. Only show a solution if I get **everything** wrong or if I explicitly give up. Start with simpler/easy questions and level up as I show progress - for example, if I show I can solve some class of data structure problems easily, move to the next. After each solution, ask for the time and space complexity if I don't provide it. Be kind and explain with visual cues.
LLMs can be a lot of things and can help sharpen your cognition, but you need enough discipline in how you use them, since it's much easier to ask the machine to do the hard thinking for you.

At least clean up the text on the bloody image instead of just copying and pasting it.
AI problems require AI solutions
If you want to learn, AI is extremely helpful, but many people just need to get things done quickly because they want to put bread on the table.
Worrying about AI not being available is the same as worrying about Google/Stackoverflow no longer being available; they are all tools helping us work better/faster. Even from the beginning we had physical programming books on the shelves to help us code.
No man is an island.
I am way more knowledgeable about SQL than I have ever been, because in the past I knew so little I would lean on team members to do SQL for me. But with AI, I learned all the basics by reading code it produced for me and now I can write SQL from scratch when needed.
Similarly for Tailwind… after having the AI write a lot of Tailwind for me from a cold start in my own Tailwind knowledge, now I know all the classes, and when it’s quicker, I just type them in myself.
These are often very "novel things" (think of "research", but in a much broader sense than the kind of research that academia focuses on). While it sometimes does happen (though this is rather rare) that AI can help with some sub-task, nearly every output that some AI generates requires quite a lot of post-processing to get it to what I actually want (this post-processing is often reworking the AI-generated (partial) solution nearly completely).
RUN LOCAL MODELS
Yes it's more expensive. Yes it's "inefficient". Yes the models aren't completely cutting edge.
What you lose in all that you gain back in resilience, a thing so overlooked in our hyper-optimized, 0.01%-faster culture. Also, you can use it guilt-free, knowing your input is not being farmed for research or megacorp profits.
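As a sketch of how low the bar is these days, assuming llama-cpp-python and a quantized GGUF model you've already downloaded (the path below is a placeholder, pick whatever fits your hardware):

    # Minimal local-inference sketch with llama-cpp-python.
    from llama_cpp import Llama

    # Placeholder path: any quantized .gguf model you have locally.
    llm = Llama(model_path="models/some-7b-chat.Q4_K_M.gguf")

    out = llm("Explain what a quantized model is, briefly.",
              max_tokens=200)
    print(out["choices"][0]["text"])

Nothing leaves the machine, and it keeps working even if every hosted API goes down or triples its price.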
Most of what this article is saying is true, you need to stay sharp. As always, this industry changes, and you have to surf what's out there.
Skill fade is a weird way of saying "skill changes". There is no way to keep everything you know in working memory all the time. Do I still have PTSD from malloc/free in C? Absolutely. I couldn't rewrite that stuff right now if you held a gun to my head (RIP), but with an afternoon or so of screwing around I'd be so back.
I don't like the dichotomy of you're either a dumbass: "why doesn't this work" or a genius. Don't let the game tell you how to play, use every advantage you have and go beyond what is thought possible.
For me, LLMs are a self-pedagogy tool I wish I'd had when I was a teen: for programming, for learning languages, and for keeping me motivated. There's just something different about live rubber-ducking to reason through an idea and having it make to-do lists for things you want to do; it breaks barriers I used to feel.
It doesn't matter if you're using AI in a healthy way, the only thing that matters is if your C-Suite can get similar output this quarter for less money through AI and cheaper labor. That's the oft-ignored reality.
We're a society where knowledge is power, and by using AI tooling to atrophy that knowledge, you concentrate power in fewer hands.
Skill atrophy doesn't lower labor costs in any significant way. Hiring fewer people does.
Devaluing people lowers it even more. Anything that can be used as a wedge to claim that you're worth less is an advantage to them. Even if your skills aren't atrophied, the fact that they can imply that it's happening will devalue you.
We're entering an era where knowledge is devalued. Groups with sufficient legal protection will be fine, like doctors and lawyers. Software engineers are screwed.
Knowledge isn’t power. Power is power. You can just buy knowledge and it’s not even that expensive.
As that Henry Ford quote goes: “Why would I read a book? I have a guy for that”
If we're talking about simply cutting costs, sure -- but those savings will typically be reinvested in more talent at a growing company. Then the bottleneck is how to scale managing all of it.
Remember 3 years ago, when everything was gonna become an NFT and the people who didn't accept that Web3 was an inevitability were dinosaurs? Same shit, different bucket.
The people who are focused on solving the small sorts of problems that AI is decent at solving will be the ones who actually make a sustainable business out of it. This general purpose AI crap is just a glorified search engine that makes bad decisions as it yaps at you.
Even if I work diligently to maintain my own skills, if the milieu changes enough, my skills lose effectiveness even if I haven't lost the skills.
That's what concerns me, that it's not up to me whether the skills I've already practiced can continue to get me the results I used to rely on them for.
I get that it's just an example, but how do you figure that could happen?
Bugger off. I’ve used AI for code generation of utility scripts and functions. The rest as an interactive search engine and explainer of things that can’t be searched for (doesn’t help that search engines are worse now).
I see the game. Droves of articles that don’t talk about AI per se. They talk about it indirectly because they set a stage where it is inevitable, it’s already here, it’s taken over the world. Then insert the meat of the content which is how to deal with The Inevitable New World. Piles and piles of pseudo self-help: how to deal with your new professional lot; we are here to help you cope...
And no!, I did not read the article.
Furniture, cutlery and glassware my great-grandparents owned were of much higher quality than anything I can get, but to them, having a large cupboard built was an investment on par with what buying a car is to me.
Automatised mass production lowered the price at the cost of quality; the same could happen to the white-collar services AI can automatise.
I don't have the skills to raise horses, punch machine code into punch cards, navigate a pirate-style sail ship by looking at stars, hunt for my own food in the wild, or process photographic film. I could learn any of these things for fun, if I wanted, but they are not necessary.
But I can train a diffusion model, I can design and build a robot, I can command AI agents to build an app.
When AI can do those things, I'll move onto even higher order things.
To me bad illustrations are worse than no illustrations. They also reflect poorly on the author, so I'm much less inclined to give them the benefit of the doubt, and probably end up dismissing their prose.