I won't pretend that it's turned me into a senior engineer, of course, but it's definitely gotten me over the 0 to 1 problem much quicker than I think I could have without it nudging my code in the right direction.
For what it's worth, I don't ask Copilot to write the code, I just use it as an advanced auto-complete, reading the suggestion to see if I agree with it before hitting tab.
BS code smells the same in any language.
Beginner devs don't even know what smelling means.
In the land of vibe-coders, the old man is king.
What happened is that companies tried to push the idea that this new AI thing would be inhospitable to anyone who is already an experienced programmer. The idea of a "new land", fair and equal to all. Smelling wouldn't matter, because all smells would be new and unfamiliar.
After insisting on this silly mistake for a while, they realized that experienced programmers were actually their only viable target audience, and attempted to change their approach accordingly. It's embarrassing.
It sort of is, indirectly, and I agree with pretty much everything.
But the bit about sycophancy was particularly enlightening. I actually thought "plain" ChatGPT-like interfaces could be good for learning. But the YouTube ROAS example is really powerful. If the student can skew the teacher's conclusions so much just by the way they phrase their questions/answers, we're going to mislead new programmers en masse.
I'm not even sure that the extensive prompting they say they use for their "Boots" is good enough.
I guess in the age of AI you still need someone to repeatedly reject your pull requests until you learn. And AI won't be that someone, at least for now.
We go back to this original prediction that the tool will help those who both want help and are painfully aware of LLMs' peculiar issues.
I always try to stay above this by prompting the question twice, with opposite biases. But of course I don't know which hidden biases I have that the LLM still reinforces.
Critical thinking is hard.
While I am not learning "coding" as a beginner anymore, I am constantly learning new frameworks, language features, algorithms etc. as is the norm in the industry, and I disagree it's bad to use AI auto-complete. Pre-AI IntelliSense-style autocomplete from Visual Studio or ReSharper makes learning new libraries and language features much easier. ReSharper for example will suggest taking advantage of new language features when you write something an older way, and many times this was my introduction to that new feature.
The new AI-based autocomplete can be even better; it demonstrates one way to do something and regardless of whether you use it or not, you can learn from it. You have to be curious enough to actually read what it is doing, but if you lack that curiosity it isn't AI's fault (before AI this was just "copy-pasting from Stack Overflow").
I want to be able to say I've tried and done things when speaking with highly technical people. I've been a 'programmer' since I was 10; I'm 35 now but never joined the workforce as a programmer, I don't know why. But now that AI is here, the love for coding, tinkering, making system-level things, trying things like WASM which may be the future of our www: these all give me that joy again. I found my limitations as a programmer and excelled because I have different skillsets.
I love learning that doing something MY way is a good idea, but that it's already been thought of and some amazing programmer has built the groundwork for it.
My Cursor AI agent even set up git for my projects so I can easily push with my SSH keys: do I know I can do that myself? Yes. Do I want to? No.
> ReSharper for example will suggest taking advantage of new language features when you write something an older way, and many times this was my introduction to that new feature.
That's actually news to me and sounds amazing. I started coding with C syntax when I was young. You learn habits then, it sticks with you.
I'm since enjoying python for backend things, flask for little webserver stuff and javascript for front-end things.
WASM Python ain't there yet, but I _love_ tinkering. I _love_ finding bugs. I _love_ poking and prodding at how things work. I'm almost always re-inventing the wheel with concepts but you know what? At least it's mine and I can tinker and learn.
Some of us enjoy the craft as a hobby and learning. Even within my teams some are more sophisticated tech wise than I am; to get on their level remotely requires me to tinker.
Oftentimes, the solutions I find for my problems are the simplest ones; engineering minds like to overcomplicate things.
It won't write big chunks, so it won't hinder your learning.
I bet you are most likely just blindly trusting the AI response and moving on. Sure, the code structure might check out and the calls it completed are sometimes fairly generic/predictable, but there will be plenty of situations where the behavior is just different enough, or the black boxes are something you have no idea about, and you are too lazy to check the docs and commit the code anyway.
AI autocomplete essentially searches Stack Overflow for you and pastes the first answer without context, adjusting it to match your code. If you are learning, just do the Stack Overflow search yourself, or prompt your favorite chatbot if you insist on using AI, so you at least get some explanation of why it is done this way.
As an experienced dev, I hate this trend. I don't need my hand held for 10 minutes; I need to see three specific lines of config that may or may not be somewhere in the video.
I grew up with a learning disability. I was extremely curious and able to hyper-focus on learning, but only when I found it interesting or easy to pick up and run with ideas. Other kids didn't have the same problem as me, so they excelled while I dragged behind. What I learned (after attending 5 schools in 2 years) is that I have to find my own path to learning that works for me.
It's impossible for me to focus on dense text. You could point a gun at my head and I still couldn't absorb the information. I need spatial learning. Moving pictures, flow charts, multi-level cutaways. Lists and sections broken up into hierarchies with clear simple headings and compartmentalized concepts. This way my brain can organize a literal map of the information for me to traverse later. But some other people might find that a nightmare.
At the same time, I learned programming extremely slowly, because I only used the methods that were easy to me. I just gave myself projects to accomplish and used trial and error to slowly learn how the language worked, along with a book for reference. It took me years to finally understand the academic underpinnings of how languages (and software) worked. I wish I could've seen a map of the different concepts, to reinforce what I needed to know to learn the next thing.
But there's also different kinds of information which need different learning methods. What's a sine, cosine, and tangent? I honestly still don't know, because the words themselves are foreign to me. For that I would need some kind of Duolingo-style repetitious-card-memory-trick-thing to even remember what word is what concept.
I don't know any framework to break up any subject into multiple course methods. And AI can't do it either. AI sucks at visualizations, and it doesn't have a deep understanding of how to teach things in multiple ways. EdTech needs to be extremely careful not to put all its cards into one "thing" if that thing can't do what people need. (That said: AI is great at quickly explaining things you don't know, and providing you an insanely fast path to the information you need)
In terms of CS itself, I feel like what we're lacking is a big-ass wikipedia or knowledge base. A lot of it is in Wikipedia, but not nearly detailed or interlinked enough. Once you have all the content, then you can reorganize them into different curriculums for different learning styles. But these are two separate problems. The tutorials are way too shallow, and the dense academic verbiage is far too detailed. You need a way to intermix them that's tailored to the user.
They are conversion functions between different fraction-based ways of measuring angles.
You can draw a right triangle for the angle you want to build and you can measure it based on the ratio of any two sides of the triangle.
You can also view the angle as a fraction of a circle. It's up to you to decide whether a full circle counts as 360 or 2pi (or 400 or 1 or whatever).
sin/cos/tan and their inverses let you convert between the two. Both are useful, neither is always better. The conversions let you use whichever is easier.
The sine/cosine names don't really make sense in Indo-European languages because they are based on terribly mangled old Arabic. No, they do not come from the Latin word "sinus" = bay or bend. Yes, they probably did affect the direction of the mangling because there was this nice Latin word that looked like it ought to have something to do with it... but they started out as Arabic.
The name of the tangent function comes from the geometric tangent as a line that touches a curve. Tangent comes from a Latin word that means to touch -- hence why the keys on a keyboard are called that in some languages. If you do some fancy geometric drawing involving a unit circle, a radius, and an angle, then the tangent function naturally appears as the length of a line segment that 1) just touches the perimeter of the circle and 2) is at a right angle to the radius.
Tan is the slope of the radius line: sin(angle)/cos(angle).
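Those relationships are easy to sanity-check yourself. A short Python sketch using only the standard `math` module, showing the ratio view and the angle view agreeing:

```python
import math

angle = math.radians(30)  # 30 degrees, i.e. 1/12 of a full circle (2*pi radians)

# Ratio view: sin and cos are side ratios of a right triangle with this angle.
opposite_over_hypotenuse = math.sin(angle)   # ~0.5
adjacent_over_hypotenuse = math.cos(angle)   # ~0.866

# tan really is the slope of the radius line: sin/cos.
slope = math.tan(angle)
assert abs(slope - math.sin(angle) / math.cos(angle)) < 1e-12

# The inverse functions convert a ratio back into an angle.
recovered = math.degrees(math.asin(opposite_over_hypotenuse))
assert abs(recovered - 30) < 1e-9
```

Whether you then express `recovered` in degrees, radians, or turns is just the choice of how big a full circle counts as.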
How do you remember the fraction for the slope of a line? I use a mnemonic: dydx ("dydex").
I draw on the ratios I memorized in high school, e.g. sin=opposite/hypotenuse, but 40+ years later sometimes I'm not sure so I look it up online.
My needs for trigonometry are separated on a scale of years, and at some point knowledge unused is knowledge forgotten.
But we have exactly the same number of reviewers. How the heck are we gonna deal with it when we cannot use LLMs for sanity checking LLM code?
Like literally yesterday I had a not-technical person who used codex to build an optimization algorithm, and due to the momentum it gained I was asked to “fix the rough edges and help with scaling”.
The entire thing was trash (it was trying to do naive search on a combinatorial problem with 1000s of integers, and was violating constraints with high probability, including integrality). I had to spend my whole day reviewing it and making a technical presentation to their leadership showing that it was just a polished turd.
Honestly, this may be the only way to go about it.
If you want to be seen as the hero who solves things instead of the realist who says why other solutions won’t work, this could be worth exploring.
But why didn't the AI expert solve it using ChatGPT? If it has to land with an expert for reimplementation from scratch after wasting a day reviewing slop, did we gain productivity?
Unit testing. LLMs are very good at writing tests and writing code that is testable (as long as you ask), and if you just check that the tests are actually calling the code, doing so with all the obvious edge cases, and that the results are correct, that's actually quite fast to review -- faster than reviewing the code.
And you can include things like performance testing in tests as well.
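As a hypothetical sketch of what that review target looks like (the `dedupe` function and its cases are my illustration, not from the comment), a test file can assert both edge-case correctness and a rough performance bound:

```python
import time

def dedupe(items):
    # Function under test: remove duplicates while preserving order.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Edge cases: empty input, all duplicates, order preserved.
assert dedupe([]) == []
assert dedupe([1, 1, 1]) == [1]
assert dedupe([3, 1, 3, 2]) == [3, 1, 2]

# Crude performance check: should be roughly linear, so a large input
# must finish well under a generous wall-clock budget.
start = time.perf_counter()
dedupe(list(range(100_000)) * 2)
elapsed = time.perf_counter() - start
assert elapsed < 1.0, f"dedupe too slow: {elapsed:.3f}s"
```

Reviewing a dozen assertions like these is much quicker than reviewing the implementation line by line.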
We're moving to a world where we work with definitions and tests and are less concerned with the precise details of how code is written within functions. Which is a big shift in mindset.
I’m pretty confident that most developers, again including myself, just really enjoy knowing something is done well. Being able to separate yourself from the code and fixate solely on the outcomes can sometimes get me past this.
The unit tests LLMs generate are also often crap, testing tautologies, making sure that your dependencies act as specified without testing the actual code, etc.
Having the LLM write the tests is… well, a recipe for destruction unless you babysit it and give it extremely specific restrictions (again, I’ve done this in mid to large sized projects with fairly comprehensive documentation on testing conventions and results have been mixed: sometimes the LLM does an okay job but tests obvious things, sometimes it ignores the instructions, sometimes it hardcodes or disables conditions…)
Inferring intent from plain english prompts and context is a powerful way for computers to guess what you want from underspecified requirements, but the problem of defining what you want specifically always requires you to convey some irreducible amount of information. Whether it’s code, highly specific plain english, or detailed tests, if you care about correctness they all basically converge to the same thing and the same amount of work.
But if you have a lot of unit tests and need to make a cross-cutting refactor you run into the same problem that you always have if all your coverage is at the unit level. Now your unit boundary is fundamentally different and you need to know how to lift and shift all the relevant tests to the relevant new places.
And so far I've been less impressed by the "agents"' attempts at cross-cutting integration testing since this usually requires selective and clever interface setup and refactoring.
LLMs have a habit of creating one-off things for particular unit test scenarios that doesn't scale well to that problem.
You got more diabetes? Use more insulin :x (insulin is very good at handling diabetes) (analogy).
Seniors would tell you: the more seniority you gain, the more code you delete. So I don't think more cushion for higher jumping is the solution; sometimes you don't need to jump from that high.
We're moving to Junior Generative Juniors, recursively.
LLMs can help with reviews as well. LLMs are not too bad at reviewing code; GPT 5 for example can find off-by-one, missed returns, all sorts of problems that are localized. I think they have a harder time with issues requiring a higher-level global understanding. I wonder if in the future you could fine-tune an LLM on a big codebase (maybe nightly or something) and it could be the first-level reviewer for all changes to that codebase.
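To make "localized" concrete, here is a toy instance of the off-by-one class of bug (my illustration, not from the comment) that this kind of review tends to catch:

```python
def last_n(items, n):
    """Return the last n elements of items (assumes 0 <= n <= len(items))."""
    # The buggy version a reviewer (human or LLM) might flag:
    #   return items[len(items) - n - 1:]   # off by one: returns n + 1 items
    # Corrected slice:
    return items[len(items) - n:] if n else []

assert last_n([1, 2, 3, 4], 2) == [3, 4]
assert last_n([1, 2, 3], 0) == []   # the buggy version would return [3] here
```

Bugs like this need only a few lines of context to spot, unlike design problems that require a global view of the codebase.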
Makes having good tests even more important. One technique I've found super helpful for coding with agents is to make the agent do TDD.
Basically ask the agent to come up with the test cases first, manually review those to make sure they make sense, then have the agent game itself to write code to pass the tests. I feel like doing TDD on my own manually is very tedious but having it be AI-assisted helps me move a lot faster.
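Concretely, the workflow might look like this (a hypothetical example; `slugify` and its cases are my invention, not the commenter's actual project):

```python
import re

# Step 1: the agent proposes test cases; a human reviews them for sense
# before any implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("Already-Slugged") == "already-slugged"
    assert slugify("") == ""

# Step 2: only after the tests are approved does the agent write code
# whose sole job is to make them pass.
def slugify(text):
    # Lowercase, trim, and collapse runs of non-alphanumerics into hyphens.
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

test_slugify()
```

The human effort concentrates on step 1, where a bad test case is much easier to spot than a subtle bug in the implementation.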
Indeed. For me this feels like an “I saw the best minds of my generation” moment.
I had a deep rooted emotional response to this. One of the most gruelling and somewhat distressing experiences of learning to program was going through a tutorial, kind of getting it, then trying to make my own spin of the same idea and getting completely stuck.
But I’m also convinced that this gruelling process was the highest density learning I’ve ever done. I’ve learned much more since then, and a lot of considerably more complex things. But I’ve never matched the same density of learning.
The closest was probably high school math. That deeply uncomfortable “this hurts my brain and is stressing me out” feeling that I suspect isn’t normal for everyone.
I view the rise of these tools and particularly efficacy in programming as an indictment against modern programming. The modern web is both amazing and horrific. If bureaucratic is "using or connected with many complicated rules and ways of doing things" (Britannica), then modern programming may be the ultimate poster child. Sure, we love to slap this on "civil institutions", but the fact that I need an automaton, answers based on probability, to guide me in how to navigate doing some of the simplest things, is pretty sad (IMO).
I used to counsel aspiring new programmers, "It's not about knowing a certain language or framework. Your single most important asset will be an aptitude to constantly keep relearning. Some trends will stand out along the way, but you'll never quit learning new tools and languages".
Maybe it's just my age, but it feels like we've overflowed at some point.
Early programming was too electrical, too mathematical, so pioneers sought to close the gap between coding and human thinking. And yet, after years of speculative funding, what we're left with is a whole different set of problems.
Is this not the true promise of technology?
> Students would watch (or fall asleep to) 6-hour videos, code along in their own editors, feel like they got it, and then freeze up the moment they had to write anything from scratch. Classic tutorial hell.
This is why, across history, the tried and true method of learning a craft is an apprenticeship. You, the junior, tag along with a senior. You work in a shop that is led by a senior-senior called a master. Apprentices become craftsmen, craftsmen become masters. AFAIK, the master does not 'offload' project guidance onto non-craftsmen; it is an expected part of the craftsman's role to be project/product manager/owner.
I've said this a million times to close friends and at this point I'm only half joking. We, and I'm including myself in the 'developer' crowd although I may not deserve it, have really dropped the ball in not being a 'guild' since way back when. At least since the late 1980's; and certainly since before the Original Boom of software dev as a profession (I'm assuming it was late 90's? I know not)
(Although I suspect that if that were the case we'd have fewer developers throughout the 00s and 10s, which may have impacted the development of the field itself in unexpected, but likely negative, ways)
Not reading and watching.
Pure and simple.
The alternative has been to massify education for 'students' (not apprentices) in passive lectures with 'exercises/homework', which does not work as well for most things and particularly for crafts.
BTW for a very minor portion of the population the 'student' route is just as effective as the 'apprentice' route, but these are in my experience the exception
If you're trying to work out at the gym on your own without reading anything about it first, you'll probably make a mess of it.
There's a lot of info out there about how to train at the gym, as well as how to write code. People who know how to read can certainly get a long way by reading a few simple tutorials.
Programming only clicked for me when I had a goal in mind and started reading documentation: change the color of the button when I click it. How to do something on click? How to change the color of an element? Etc. From there my goals became bigger and reading documentation and examples along the way got me to where I am today.
Video is the true deception. I was trying to design patterns for sewing recently, and as a novice I watched a few videos. And none of them ever stuck with me on how to design something myself. It was only when I read a book about pattern design that the concepts stuck. I think the friction of reading, parsing the info, and then acting on it is what allows learning to happen.
Doing and thinking solidifies it, teaches you to use the things you've read about
You need both
When I'm learning something new I like to skim a bunch of content upfront to get an idea of what's there
It doesn't. That's the problem.
It fills your brain with procedure, for a short time.
If you solidify the procedure, you will be able to perform that one task, which in software development is still useless.
Only at the next step, where you know enough to come up with your own new procedures, do you reach basic competence at software development. There are other professions like this, but in most, basic competence comes before you even solidify the procedures.
For most simple problems, it's true that taking the seemingly shortest path to solving the problem is good enough. There are other problems where you simply have to understand the abstractions at a deeper level than you can visualize in code. It's there that things like reading a textbook or taking a course can help.
I mean: if you're learning a new language/library/framework it's really useful to have a broad idea of what the tooling for it looks like.. what features does it offer? You can look up the details when you need to
It's really useful to have a broad knowledge of algorithms and what problems they're applicable to. Look up the details later
If you're going into a new domain.. know the broad, high level pieces of it. You don't need to pre-learn the specifics of websockets but if you don't even know they exist or what they're useful for in web development.. that's kind of a problem
Even more abstract concepts like how to design code there's a lot of good info out there
If every generation had to re-invent the wheel from scratch we'd never get anywhere. The problem people have is they think ONLY reading is enough
An apprentice model doesn't really change that. Your average electrician gets called to many more "here's new construction that we're wiring from scratch" jobs than your average corporate engineer gets "we need to set up a new project from scratch without copying any of our existing files or folders."
From where I stand, we're never going to find what you want in the workplace for reasons which predate LLMs: job hopping, remote/hybrid work, incurious managers etc.
Juniors need to just accept they will have to learn the hard way, on their own, asking occasional questions and looking at tutorials until stuff sticks.
But - I'm not really sure it's necessary in software. The skillset can be entirely self taught if you're intelligent enough. There are an abundance of resources and all it requires is a terminal. Good software engineering principles can be covered in a 200 page book.
You can't say the same for trades like plumber, electrician, etc. which still use apprenticeships.
> You can't say the same for trades like plumber, electrician, etc. which still use apprenticeships.
Yes you can, and yet they still have apprenticeships.
By contrast software just isn't comparable at all. You can sit at your desk, pay $0, and the only limitations to your experience is the amount of time you're willing to dedicate.
Citation needed - at least for anything software development. Every single respectable software dev I met around my age bracket or older (40+), was self-taught. Mostly because in the 80s or 90s there wasn't much opportunity. But computers shipped with handbooks how to program them, at that time.
And in our modern world, universities are still the best place for such apprenticeship. Not the ones per Mark Tarver's words (https://marktarver.com/professor.html), of course, but a self-respecting university will train its students with progressively challenging and practical assignments. We started with implementing simple data structures and algorithms and solving simple puzzles, all the way to implementing toy OSes, databases, persistent data structures, compilers, CPUs, discrete simulations, and machine learning models. We started with implementing functions and individual components and quickly moved to building things from scratch. I'm forever grateful for the training I received at my university.
You might have had a point a few decades ago when the information itself was difficult to find, but with the internet and online courses, it's easier than ever to teach yourself in a "nontraditional" setting.
Those classes unlocked a whole new level of programming for me. I just didn't know what I didn't know before.
People keep reinventing the same shit if they haven't learned about it before.
Sure, you can learn many things online. But for most things you just don't even know that they exist, you wouldn't know to search for them.
I spent a good portion of my life in Universities -- and went as far as one can go in terms of educational credentials and taught at the university level -- and I cannot disagree more.
Universities produce job skills incidentally, if at all. It's simply not their goal [1]. Even today, at the best CS programs in the country, it's possible to get a degree and still not be better than a very junior engineer at a software company (and quite a few graduates are worse).
> We started with implementing simple data structures and algorithms and solving simple puzzles all the way to implementing toy OSes, databases, persistent data structures, compilers, CPUs, discrete simulations, machine learning models.
This was not my experience, nor is it what I have seen in most university graduates. It's still quite possible for a CS grad to get a degree having only theoretical knowledge in these topics, and no actual ability to write code.
This leaves open the question of where "the best place" is to learn as-practiced programming [2], but I tend to agree with the root commenter that the best programmers come up through a de facto apprenticeship system, even if most of them spend time in universities along the way.
[1] Their goal is to produce professors. You may not realize this if you only went as far as the undergraduate diploma, but that is mostly what academics know, and so it is what they teach. The difference between the "best" CS programs and the others is that they have some professors with actual industry experience, but even then, most of them are academics through and through.
[2] Code academies suck in their own ways.
Having been self taught in both software and electrical engineering, I’ve experienced a lot of this.
In EE, it's amazing how many graduates come into the job without ever having used Altium/KiCad/Cadence for a nontrivial project, or who can give you a very precise definition of impedance but don't know how to break out an engineering calculator to set design rules for impedance-controlled differential pairs. Or worse yet, people who can give you all the theory of switch-mode power supplies but can't read datasheets and select parts in practice.
Looking back, I'd consider my University degree to be essentially a 4 year pause on growing my programming skills.
* in keeping with https://news.ycombinator.com/newsguidelines.html: "Please use the original title, unless it is misleading or linkbait."
Of course things were much simpler. You had an editor, and a compiler that you ran from the command line. At some point you would learn about Makefiles, but not before you would appreciate their value.
And there was no CI, no source control, no IDEs, no TDD frameworks.
I can see that throwing a brand new developer into something like Visual Studio would be overwhelming. Even I find it overwhelming after three decades. I still use emacs and a shell.
A: https://chatgpt.com/share/68e940a1-953c-8011-a8f2-3a1a0c51be... B: https://chatgpt.com/share/68e94067-ec74-8011-88e5-9d27670f31...
That's scary, because you cannot escape from the repercussions just by not being one of the dummies who relinquished all learning to AI.
You depend on being surrounded by other people who know what they are doing. And not just immediately surrounded, but in a broader scope.
"Now we do x,y, z, and voila! here you have it, a fully fledged (whatever)." Ok, but what did you just do? Why doesn't it work on my machine? etc. I've seen tutorials that do this stuff right and it's a very obvious night and day difference.
Take notes as you go. Type the code manually. Experiment with variations of the code. It does help your brain encode the information.
Tutorials fundamentally exist to serve a different purpose: to orient people within the subject matter, when they don't even know what question to ask. Going through steps in order is important so that the student can focus. Intentionally going down wrong paths can be counterproductive for the neophyte, because it means having about as much experience doing the wrong thing as the right thing. Debugging is a general skill, but technology-specific debugging can and probably should be taught separately from the "happy path".
A properly done tutorial will properly show the steps, and will have been tested to ensure that it can in fact be expected to work on everyone's machine. The parameters for success will be controlled as tightly as possible.
I feel the same about "what books do you recommend reading to learn Y" Have you tried looking at the online documentation?
Usually see it from people that have formal CS education. They learned one way to learn things and refuse to adapt to real life.
It's trained on the masses ... well, unfortunately, the masses are wasting a lot of dough.