AI tools are making me lose interest in CS fundamentals

44•Tim25659•2h ago
With powerful AI coding assistants, I sometimes feel less motivated to study deep computer science topics like distributed systems and algorithms. AI can generate solutions quickly, which makes the effort of learning the fundamentals feel less urgent.

For those who have been in the industry longer, why do you think it’s still important to stay strong in CS fundamentals?

Comments

jbrozena22•2h ago
> why do you think it’s still important to stay strong in CS fundamentals?

I don't think anyone at any level has any idea what the future holds with this rapid pace of change. What some old-timers think is going to be useful in a post-Claude world isn't really meaningful.

I think if I had limited time to prioritize learning at the moment, I would prioritize comfort with AI tooling (e.g. getting comfortable doing 5 things shallowly in parallel) over going super deep in understanding.

bluefirebrand•2h ago
I view this a bit like asking "why bother getting a job when I could just get rich at the slot machines"

Knowledge is still power, even in the AI age. Arguably even more so now than ever. Even if the AI can build impressive stuff, it's your job to understand the stuff it builds. It's also your job to know what to ask the AI to build.

So yes. Don't stop learning for yourself just because AI is around

Be selective with what you learn, be deliberate in your choices, but you can never really go wrong with building strong fundamentals

Edit: What I can tell you almost for certain is that offloading all of your knowledge and thinking to LLMs is not going to work out very well in your favor

Tim25659•2h ago
One follow-up question I’ve been thinking about:

In the AI era, is it still worth spending significant time reading deep CS books like Designing Data-Intensive Applications by Martin Kleppmann?

Part of my hesitation is that AI tools can generate implementations for many distributed system patterns now. At the same time, I suspect that without understanding the underlying ideas (replication, consistency, partitioning, event logs, etc.), it’s hard to judge whether the AI-generated solution is actually correct.

For those who’ve read DDIA or similar books, did the knowledge meaningfully change how you design systems in practice?
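One of those underlying ideas, partitioning, is concrete enough to sketch in a few lines. This is a hypothetical illustration (the node names and function are made up for this sketch, not from the thread or from DDIA): naive modulo hash partitioning, the scheme DDIA cautions against because resizing the cluster remaps almost every key.

```python
import hashlib

# Hypothetical 3-node cluster; names are invented for illustration.
NODES = ["node-a", "node-b", "node-c"]

def partition_for(key: str) -> str:
    """Route a key to a node by hashing (naive modulo partitioning).

    Deterministic: the same key always lands on the same node. The
    catch DDIA points out: adding or removing a node changes the
    modulus and remaps almost every key, which is why real systems
    prefer consistent hashing or a fixed number of virtual partitions.
    """
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

assert partition_for("user:42") == partition_for("user:42")
```

Being able to write and critique a sketch like this is exactly the judgment the question is about: an AI can emit a partitioner instantly, but only the concepts tell you whether modulo hashing is acceptable for your workload.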

mbernstein•30m ago
Short answer: yes.

Longer answer: About 10 years ago I moved into leadership roles (VP Eng), and while I continued to write code for POCs, it hasn't been my primary role for quite some time. DDIA has been a book I pull out often when guiding leaders and members of my teams on building distributed systems. I'm writing more code these days because I can, and I still reference DDIA and have the second edition preordered.

babas03•1h ago
AI is great at giving you an answer, but fundamentals tell you if it's the right answer. Without the basics, you're not a pilot; you're a passenger in a self-driving car that doesn't know what a red light is. Stay strong in the fundamentals so you can be the one holding the steering wheel when the AI hits a hallucination at 70mph.

add-sub-mul-div•1h ago
It reminds me of the situation with self-driving that expects you to keep your full attention on the road while not driving so that you can take over at any time. It's clearly unrealistic.

It's not a failing of yours or anyone else's, but the idea that people will remain intellectually disciplined when they can use a shortcut machine is just not going to work.

kccqzy•1h ago
Because AI still hallucinates. Since you mentioned algorithms, today for fun I decided to ask Claude a pretty difficult algorithm problem. Claude confidently told me a greedy solution was enough, until I gave it a counterexample that made it switch to dynamic programming instead.

If you haven't learned the fundamentals, you are not in a position to judge whether AI is correct or not. And this isn't limited to AI; you also can't judge whether a human colleague writing code manually has written the right code.
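The commenter doesn't say which problem it was, but the classic coin-change example shows the same failure mode in miniature (this is an illustrative sketch, not the actual problem from the thread): greedy confidently picks a suboptimal answer, and dynamic programming finds the minimum.

```python
def greedy_coins(coins, amount):
    """Take the largest coin that fits at each step."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """Minimum number of coins via dynamic programming."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else None

# With coins {1, 3, 4} and amount 6:
# greedy takes 4+1+1 (3 coins); DP finds 3+3 (2 coins).
print(greedy_coins([1, 3, 4], 6))  # 3
print(dp_coins([1, 3, 4], 6))      # 2
```

Without knowing why greedy choice fails here (no exchange argument holds for these denominations), you have no way to tell which of the two confident answers an assistant gives you is right.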

rishabhaiover•7m ago
I'm curious, what was the algorithm problem?

kevv•6m ago
That is correct. But for how long? How long would it take for AI to learn all of this too? AI learns faster than humans, and even though that will never degrade the relevance of fundamentals, don't you think the bar for someone beginning to learn the fundamentals would just keep increasing exponentially?
royal__•1h ago
These tools actually make me more interested in CS fundamentals. Having strong conceptual understanding is as relevant as ever for making good judgement calls and staying connected with your work.

rossjudson•1h ago
This is the right answer. AI writing code for you? Then spend that time understanding what it is writing and the fundamentals behind it.

Does it work? How does it work? If you can't answer those questions, you should think carefully about what value you bring.

We're in this greenfield period where everybody's pet ideas can be brought to life. In other words...

Now anyone can make something nobody gives a shit about.

j3k3•1h ago
"Now anyone can make something nobody gives a shit about."

lol nice one

j3k3•1h ago
Ultimately humans are the judge of reality, not LLMs.

How can you be a good judge? You must have very strong foundations and fundamental understanding.

anonym29•1h ago
babas03 put it best IMO - https://news.ycombinator.com/item?id=47394432

I'd also second bluefirebrand's point that "it's your job to know what to ask the AI to build" - https://news.ycombinator.com/item?id=47394349

Those are great answers to the question you did ask, but I'd also like to answer a question you didn't ask: whether AI can improve your learning, rather than diminish it, and the answer is absolutely a resounding yes. You have a world-class expert that you can ask to explain a difficult concept to you in a million different ways with a million different diagrams; you have a tool that will draft a syllabus for you; you have a partner you can have a conversation with to probe the depth of your understanding on a topic you think you know, help you find the edges of your own knowledge, can tell you what lies beyond those edges, can tell you what books to go check out at your library to study those advanced topics, and so much more.

AI might feel like it makes learning irrelevant, but I'd argue it actually makes learning more engaging, more effective, more impactful, more detailed, more personalized, and more in-depth than anyone's ever had access to in human history.

pkulak•1h ago
I find I'm going even deeper lately. I, obviously, have to completely and _totally_ understand every line written before I will commit it, so if AI spits something out that I haven't seen before, I will generally get nerd sniped pretty good.

tartoran•1h ago
I think that AI, particularly LLMs, can be quite effective for learning, especially if you maintain a sense of curiosity. CS fundamentals, in particular, are well-suited for learning through LLMs because models have been trained on extensive CS material. You can explore different paradigms in various ways, ask questions, and dissect both questions and answers to deepen your understanding or develop better mental models. If you're interested in theory, you can focus on theoretical questions, but if you're more hands-on you can take a practical approach, ask for code examples, etc. If you finish a session and feel there's something there you want to retain, ask for flash cards.

tayo42•1h ago
For now, yeah, because you still need to direct the AI correctly, either with planning up front or by fixing its mistakes and identifying when it did something correct but not optimal.

ankurdhama•1h ago
Did you learn arithmetic in school even though calculator exist?

remarkEon•48m ago
Many did not. It's important to understand the distinction.

I was in middle and high school when calculators became the standard, but they were still expensive enough that we kept the TI-80 calculators on a backroom shelf and checked them out when there was an overnight problem set or homework assignment. In a roundabout way, I think I ended up understanding more about the underlying maths because of this.

So, no, many did not actually learn arithmetic in school. This isn't necessarily because of the calculator, but if you don't get a student to understand what arithmetic even is, then handing them a calculator may as well be handing them a magic wand that "does numbers".

wayfwdmachine•1h ago
"With powerful computers, I sometimes feel less motivated to study deep mathematical topics like differential equations and statistics. Computers can math quickly, which makes the effort of learning the fundamentals feel less urgent. For those who have been in the industry longer, why do you think it’s still important to stay strong in mathematical fundamentals?"

Because otherwise you are training to become a button pressing cocaine monkey?

lich_king•48m ago
I don't find your analogy compelling. More like "calculators make me less motivated to learn how to multiply four-digit numbers in my head". There used to be jobs for people who were good with numbers. They're pretty much gone, and it's not even much of a parlor trick, so no one bothers to learn these skills anymore.

If the best argument for going into CS is that LLMs sometimes make stuff up and will need human error checkers, I can see why people are less excited about that future. The cocaine monkey option might sound more fun.

atonse•1h ago
Read this article from the Bun people about how they used CS fundamentals (and that way of thinking) to improve Bun install's performance.

https://bun.com/blog/behind-the-scenes-of-bun-install

Then look at how Anthropic basically acqui-hired the entire Bun team. If CS fundamentals didn't matter, why would they do that?

Even Anthropic needs people that understand CS fundamentals, even though pretty much their entire team now writes code using AI.

And since then, Jarred Sumner has been relentlessly shaving performance bottlenecks from Claude Code. I have watched startup times come way down in the past couple of months.

Sumner might be using CC all day too. But an understanding of those fundamentals (more a way of thinking than specific algorithms) still matters.

Ycros•1h ago
How can you possibly make any informed statement about the solutions AI generates for you if you don't understand them?

petersonh•1h ago
If you really love CS, there's a future in it. If AI becomes the new substrate for civilization, we'll always need people who understand, to some degree, how these systems fundamentally work.

nilirl•1h ago
CS fundamentals are about framing an information problem so that it becomes solvable.

That'll always be useful.

What's less useful, and what's changed in my own behavior, is that I no longer read tool-specific books. I used to devour books from Manning, O'Reilly, etc. I haven't read a single one since LLMs took off.

hedora•1h ago
Fundamentals are the only thing left to learn in our field.

Either the AI doesn’t understand them, and you need to walk it down the correct path, or it does understand them, and you have to be able to have an intelligent conversation with it.

remarkEon•53m ago
I was wondering about this. I don't write software to pay the mortgage; I just write the occasional Python script, some SQL to update various dashboards, and R in my spare time when I'm looking at baseball stats or something. AI has had pretty much the opposite effect for me. Watching it write something has made me ask questions, get answers, and dig into details about things I never had the time to google on my own or spend an hour or several looking through Stack Overflow for.

I'd say my ability to write code has stayed about the same, but my understanding of what's going on in the background has increased significantly.

Before someone comes in here and says "you are only getting what the LLM is interpreting from prior written documentation": sure, yeah, I understand that. But these things are writing code in production environments now, are they not?

TehShrike•52m ago
There are two types of CS fundamentals: the ones that help in making useful software, and the rest of them.

AI tools still don't care about the former most of the time (e.g. maybe we shouldn't do a loop inside a loop every time we need to find a matching record; maybe we should just build a hashmap once).

And I don't care if they care about the latter.
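The nested-loop-versus-hashmap point is easy to demonstrate. A small sketch (the data and sizes are made up; timings are illustrative only) of the O(n·m) scan against building the index once:

```python
import random
import time

random.seed(0)
records = [{"id": i, "name": f"user{i}"} for i in range(2000)]
wanted = [random.randrange(2000) for _ in range(2000)]

# O(n*m): rescan every record for every lookup.
t0 = time.perf_counter()
slow = [next(r for r in records if r["id"] == w) for w in wanted]
t_slow = time.perf_counter() - t0

# O(n + m): build the index once; each lookup is then O(1) on average.
t0 = time.perf_counter()
by_id = {r["id"]: r for r in records}
fast = [by_id[w] for w in wanted]
t_fast = time.perf_counter() - t0

assert slow == fast  # same results, very different cost
print(f"nested scan: {t_slow:.4f}s  hashmap: {t_fast:.4f}s")
```

Both versions are "correct", which is exactly why a generated solution can look fine until the data grows; spotting the quadratic scan is the part the fundamentals buy you.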

serf•49m ago
Watching the difference between a non-CS person and a CS person using an LLM is all you need to reaffirm the belief that fundamentals are still a massive benefit, if not a requirement, for the deeper software.

codance•49m ago
I'd argue the opposite — AI tools make fundamentals more important, not less. When you can generate code instantly, the bottleneck shifts to knowing what to ask for and evaluating whether the output is correct.

jmward01•25m ago
There are two aspects to this. The desire to learn and the utility of learning. These are two very different things. Arguably the best programmers I have known have been explorers and hopped around a lot. Their primary skills have been flexibility and curiosity. The point here was their curiosity, not what they were curious about. Curiosity enabled them to attack new problems quickly and find solutions when others couldn't. Very often those solutions had nothing to do with skip lists or bubble sort. Studying algorithms is useful for general problem solving and hey, as a bonus, it helps sometimes when you are solving a real world problem, but staying curious is what really matters.

We have seen so many massive changes to software engineering in the last 30 years that it is hard to argue the clear utility of any specific topic or tool. When I first started, it really mattered that you understood bubble sort vs. quicksort because you probably had to code it yourself. Now very few people think twice about how sort happens in Python or how hashing mechanisms are implemented. It does, on occasion, help to know that, but not like it used to.

So that brings it back to what I think is a fundamental question: if CS topics are less interesting now, are you shifting that curiosity to something else? If so, then I wouldn't worry too much. If not, then that is something to be concerned about. So you don't care about red-black trees anymore, but you are getting into auto-generating Zork-like games with an LLM in your free time? You are probably on a good path if that is the case. If not, find a new curiosity outlet and don't beat yourself up about not studying the limits of a single-stack automaton.

diven_rastdus•24m ago
The motivation loop breaking is the real problem, not the fundamentals themselves becoming less useful. Fundamentals feel rewarding to learn when you immediately apply them — you learn how a hash map works, you write one, you feel the difference. AI short-circuits that feedback loop by giving you the answer before you've built the intuition. The fix isn't to avoid AI tools; it's to deliberately impose a delay — implement the thing yourself first, then compare with what the model produces. The gap between the two is where the learning still happens.
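The hash map is a good choice for "implement it yourself first" because a working one fits in a few lines. A toy separate-chaining map (the class name and bucket count are arbitrary; no resizing, so it is a learning sketch, not a real container) to write before comparing with what a model produces:

```python
class TinyHashMap:
    """A toy separate-chaining hash map: hash the key, pick a bucket,
    scan the bucket's (key, value) pairs. No resizing."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Collisions are fine: different keys may share a bucket.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

m = TinyHashMap()
m.put("a", 1)
m.put("a", 2)                    # overwrite
print(m.get("a"))                # 2
print(m.get("zzz", "missing"))   # missing
```

Writing this first makes the comparison useful: when the model's version differs (open addressing, resizing thresholds), you already have the intuition to ask why.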

nvme0n1p1•23m ago
ai slop account

__rito__•13m ago
I will keep learning fundamentals.

I studied Physics fundamentals even though I had a microwave or could buy an airplane ticket. And I deeply enjoyed it. I still do.

I will keep doing it with CS fundamentals. Simply because I enjoy it too much.

j45•9m ago
AI tools are used more effectively when aligned with CS fundamentals. Knowing what to ask and what to avoid is critical. You can power through or past without them, but incorrect areas in the context can compound and multiply.
