I have interacted with software developers at conferences who cannot do basic things with computers, like navigating file systems, making changes to the Windows registry, finding and using environment variables, or diagnosing and fixing PC issues... Like, in a perfect world your IT department sorts this stuff for you, but I struggle to take seriously someone who claims to create software while seemingly lacking basic computer literacy in a number of areas.
And I'm sorry, "it compiles and runs" is the bare fucking minimum for software quality. We have machines these days that would run circles around my first PC in the late '90s, but despite that, everything is slower and runs worse. My desktop messaging apps are each currently sucking up over 600 MB of RAM, which is nearly 3 times what my original PC had total. Everything is some bloated shite that now requires internet access at all times or it utterly crashes and dies, and I'm sorry, but in my mind we have the seemingly large contingent of software developers out there who can't bloody use computers to thank for this. And cheap-ass management, to be clear, but I think these are nested problems.
…and rapidly becomes deprecated not due to quality but because the requirements for operation or development changed substantially. These second-order effects make the “compile and run” focus a paradoxically efficient and correct use of resources. Engineers, especially academically experienced ones, prematurely optimize for correctness and arbitrary dimensions of quality because they are disconnected from and motivated by interests orthogonal to their users.
Did they? Like, I have no data for this, nor would I know how one would set about getting it, but, like, from my personal experience and the experiences of folks I've spoken to for basically my entire career, the requirements we have for our software barely change at all. I do not expect Outlook to have chat and reaction functionality. I do not desire Windows to monitor my ongoing usage of my computer to make suggestions on how I might work more efficiently. These things were not requested by me or any user I have ever spoken to. In fact, I would take that a step further and say that if your scope and requirements are shifting that wildly, that often, then you did a poor job of finding them in the first place, irrespective of where they've now landed.
They are far more often the hysterical tinkerings demanded by product managers who must justify their salaries with some notion of what's "next" for Outlook, because someone at Microsoft decided that Outlook being a feature-complete, good email client was suddenly, for no particular reason, not good enough anymore.
And again, speaking from my and my friends' experiences, I would in fact love it very much, thank you, if Microsoft would just make their products good, function well, look nice, be nice to use, and then stop. Provide security updates, of course, maybe an occasional UI refresh if you've got some really good ideas for it, but apart from that, just stop changing it. Let it be feature-complete, quality software.
> Engineers, especially academically experienced ones, prematurely optimize for correctness and arbitrary dimensions of quality because they are disconnected from and motivated by interests orthogonal to their users.
I don't think we're disconnected at all from our users. I want, as a software developer, to turn out quality software that does the job we say it does on the box. My users (citation: many conversations with many of them) want the software to do what it says on the box, and do it well. These motivations are not orthogonal at all. Now, certainly it's possible to get so lost in the minutiae of design that one loses the plot; that's definitely where a good project manager will shine. However, to say these are different concerns entirely is, IMO, a bridge too far. My users probably don't give a shit about the technical minutiae of implementing a given feature: they care if it works. However, if I implement it correctly, with the standards I know to work well for that technology, then I will be happy, and they will be happy.
Their end-user software ranges from "bad but could be worse" to "outlandish crap that should be illegal to ship". Their user base, however, doesn't know much better, and decision makers in commercial settings have different priorities (choosing MS would not be held against you).
But even in tech circles MS Windows is still used. I know the excuses. MS can continue focusing their efforts on productising the clueless user who doesn't understand anything and doesn't give a shit about all the leaks, brittle drivers, performance degradation, registry cluttering, etc. MS follows the right $$ strategy; their numbers don't lie.
The way I conceptualize this is that there are two kinds of knowledge. The first is fundamental knowledge. If you learn what computational complexity is and how to use it, or what linear algebra is and why we care, then you're left with something. The second is what I call "transient" knowledge (a term I made up). If you learn by heart the DOM manipulation methods you have to invoke to make a webpage shiny (or, let's be real, the API of some framework), or what the difference between datetime and datetime2 in SQL Server 2017 is, then it looks like you know how to do stuff, but none of those things are fundamental to the way the underlying technologies work: they are mostly pieces of trivia that are the way they are because of historical happenstance rather than actual technical reasons.
To be effective at any given day job, one might need to learn a few pieces of knowledge of the second kind, but one should never confuse them with actual, real understanding. The problem is that the first kind can't be learned from YouTube videos in increments of 15 minutes.
That's what LLMs are exposing, IMO. If you don't know the syntax for lambdas in C# or how to structure components in React, any LLM will give you perfectly working code. If your code crumbles to pieces because you didn't design your database correctly or you're doing useless computations, you won't even know what you don't know.
This transcends software development, by the way. We talk about how problem solving is a skill, but in my experience it's more like physical form: if you don't keep yourself in shape, you won't be in shape when you need it. I see this a lot in kids: the best ones are much smarter than I was at their age, while the average ones struggle with long division.
What LLMs have done for most of my students is remove all the barriers to an answer they once had to work for. It’s easy to get hooked on fast answers and forget to ask why something works. That said, I think LLMs can support exploration—often beyond what Googling ever did—if we approach them the right way.
I’ve seen moments where students pushed back on a first answer and uncovered deeper insights, but only because they chose to dig. The real danger isn’t the tool, it’s forgetting how to use it thoughtfully.
If I'm pulled 27 different ways, then when I finally get around to another engineer's question, "I need help" is a demand for my synchronous time and focus. Whereas "I'm having problems with X, I need to Y, can you help me Z?" could turn into a chat, or it could mean I'm able to deliver the needed information at once and move on. Many people these days don't even bother to write questions. They write statements and expect you to infer the question from the statement.
On the flip side, a thing we could learn more from LLMs is how to give a good response by explaining our reasoning out loud. Not “do X” but instead “It sounds like you want to W, and that’s blocked by Y. That is happening because of Z. To fix it you need to X because it …”
This is one of my biggest pet peeves: not even asking for help, just stating a complaint.
You can sample that shit and make some loops in your DAW. Or just use a generative AI nowadays.
This is really a tragedy because the current technology is arguably one of the best things in existence for explaining "why?" to someone in a very personalized way. With application of discipline from my side, I can make the LLM lecture me until I genuinely understand the underlying principles of something. I keep hammering it with edge cases and hypotheticals until it comes back with "Exactly! ..." after reiterating my current understanding.
The challenge for educators seems the same as it has always been - How do you make the student want to dig deeper? What does it take to turn someone into a strong skeptic regarding tools or technology?
I'd propose the use of hallucinations as an educational tool. Put together a really nasty scenario (i.e., provoke a hallucination on purpose on behalf of the students that goes under their radar). Let them run with a misapprehension of the world for several weeks. Give them a test or lab assignment regarding this misapprehension. Fail 100% of the class on this assignment and have a special lecture afterward. Anyone who doesn't "get it" after this point should probably be filtered out anyways.
Gaining focus as a skill is something to work on with every batch of new students.
We're on the same page. I'm turning that around to say: let's remember focus isn't something we're naturally born with, it has to be built. Worked on hard. People coming to that task are increasingly damaged/injured imho.
No, the skill for the future is using AI to carve out a safe space for yourself, so you can focus without distractions!
Well said, and an interesting idea, but most of my LLM usage (besides copilot autocomplete) is actually very search-engine-esque. I ask it to explain existing design decisions, or to search for a library that fits my needs, or come up with related queries so I can learn more.
Once I've chosen a library or an approach for the task, I'll have the LLM write out some code. For anything significantly more substantive than copilot completions, I almost always do some exploring before I exploit.
"Arcane Information" is absolutely the worst possible use case I can imagine for LLMs right now. You might as well ask an intern to just make something up
Good engineers must also be responsive to their teammates, managers, customers, and the business. Great engineers also find a way to weave in periods of focus.
I’m curious how others navigate these?
It seems there was a large culture shift when Covid hit and non-async non-remote people all moved online and expected online to work like in person. I feel pushed to be more responsive at the cost of focus. On the flip side, I’ve given time and space to engineers so they could focus only to come back and find they had abused that time and trust. Or some well meaning engineers got lost in the weeds and lost the narrative of *why* they were focusing. It is super easy to measure responsiveness: how long did it take to respond. It’s much harder to measure quality and growth. Especially when being vulnerable about what you don’t know or the failure to make progress is a truly senior level skill.
How do we find balance?
I've been struggling with finding balance for years as a front-line manager who codes. I need to be responsive-ish to incoming queries but also have my own tasks. If I am too responsive, my own work easily gets pushed into my evenings, while my working hours go to everybody else.
The "weaving" in of periods of focus is maintained by ignoring notifications and checking them in batches. Nobody gets to interrupt me when I'm in focus mode (much chagrin for my wife) and I can actually get stuff done. This happened a lot by accident, I get enough notifications for long enough that I don't really hear or notice them just like I don't hear or notice the trains that pass near my house.
There are very few notifications that can’t wait a few hours for my attention and those that cannot have the expectation of being a phone call.
So we had to solve this problem pre-covid, and the solution remained the same during the pandemic when every org went full remote (at least temporarily).
There is no "one size fits all approach" because each engineer is different. We had dozens of engineers on our team, and you learn that people are very diverse in how they think/operate.
But we came up with a framework that was really successful.
1) Good faith is required: you mention personnel abusing time/trust; that's a different issue entirely, and no framework will be successful if people refuse to comply. This system only works if teammates trust the person. Terminate someone who can't be trusted.
2) "Know thyself": Many engineers wouldn't necessarily even know how THEY operated best (if they needed large chunks of focus time, or were fine multi-tasking, etc). We'd have them make a best guess when onboarding and then iterate and update as they figured out how they worked best.
3) Proactively Propagate Communication Standard: Most engineers would want large chunks of uninterrupted focus time, so we would tell them to EXPLICITLY tell their teammates or any other stakeholders WHEN they would be focusing and unresponsive (standardize it via schedule), and WHY (i.e., sell the idea). Bad feelings or optics are ALWAYS simply a matter of miscommunication so long as good faith exists. We'd also have them explain "escalation patterns", e.g., "if something is truly urgent, DM me on Slack a few times and, finally, call my phone."
4) Set comms status: Really this is just Slack/Teams, but basically, as a soft reminder to stakeholders, set your Slack status to "heads down building" or something so people remember that you aren't available due to focus time. It's really easy to sync your Slack status to calendar blocks to automate this (a rough sketch of what that can look like is below).
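A minimal sketch of what that kind of automation can look like (this assumes the slack_sdk Python package and a user token with the users.profile:write scope in a SLACK_TOKEN environment variable; the hardcoded focus schedule is just a stand-in for reading real calendar blocks):

    # Rough sketch: run this from cron every few minutes to flip your Slack status
    # on and off around scheduled focus blocks. The schedule is hardcoded here;
    # in practice you'd read it from your calendar instead.
    import os
    from datetime import datetime
    from slack_sdk import WebClient

    # (start_hour, start_min, end_hour, end_min) in local time -- placeholder blocks
    FOCUS_BLOCKS = [(9, 0, 11, 30), (14, 0, 16, 0)]

    client = WebClient(token=os.environ["SLACK_TOKEN"])  # user token, users.profile:write scope

    def update_status(now: datetime) -> None:
        for sh, sm, eh, em in FOCUS_BLOCKS:
            start = now.replace(hour=sh, minute=sm, second=0, microsecond=0)
            end = now.replace(hour=eh, minute=em, second=0, microsecond=0)
            if start <= now < end:
                client.users_profile_set(profile={
                    "status_text": "heads down building",
                    "status_emoji": ":no_bell:",
                    "status_expiration": int(end.timestamp()),  # Slack clears it when the block ends
                })
                return
        # Outside any focus block: clear the status.
        client.users_profile_set(profile={"status_text": "", "status_emoji": ""})

    if __name__ == "__main__":
        update_status(datetime.now())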
We also found that breaking the day into async task time and sync task time really helped optimize. Async tasks are tasks that can be completed in small chunks of time, like code review, checking email, Slack, etc. These might be large time sinks in aggregate, but generally you can break them into small time blocks and still be successful. We would have people set up their day so all the async tasks got done when they were already paying a context-switching cost, e.g., around scheduled agile cadence meetings. If you're doing a standup meeting, you're already going to be knocked out of flow, so you might as well use this time to also do PR review, async comms, etc. Naturally we had people stack their meetings when possible instead of peppering them throughout the day (more on how this was accomplished below).
Anyways, sometimes when an engineer of ours joined a new team, there might be a political challenge in not fitting into the existing "mold" of how that team communicated (if that team's comm standard didn't jibe with our engineer's). This resolved quickly every single time once our engineer proved to be much more productive and effective than the existing engineers (who were kneecapped by the terrible, distracting existing standard of meetings, constant Slack interruptions, etc.). We would even go as far as to tell stakeholders our engineers would not be attending less important meetings (not immediately, only once we had already proven ourselves a bit). The optics around this weren't great at first, but again, our engineers would start 1.5-2X'ing the productivity of the in-house engineers, and the political issues melted away very quickly.
TL;DR - Operate in good faith, decide your own best communication standard, propagate the standard out to your stakeholders explicitly, deliver and people will respect you and also your comms standard.
However, the future is uncertain: we will reach a point where most developers have used generated code for most of their lives and never developed the coding skills required to fully understand that code.
I guess we'll adapt to it. We always do. I mean, for example, I can no longer do long division on paper like I did in elementary school, so I rely totally on computers for all calculation.
Gonna need a BIG citation on that one, chief.
> Even though every test and benchmark we can come up with, LLMs do better with every generation.
Has it occurred to you that the people making the tests and benchmarks are, more often than not, the same people making the LLM? Like, yeah, if I'm given carte blanche to make my own test cases and I'm accountable to no one and nothing else, my output quality would be steadily going up too.
The other day I tried asking Copilot for a good framework for accomplishing a task, and it made one up. I tried the query again, more specifically, and it referred me to a framework in another language. And yes, I had specified the language.
OP has consumed so much LLM they’ve started to hallucinate themselves
If we assume that civilization is already teetering thanks to the smartphone/social media, the fallout of AI would make Thomas Cole blush.
Can humanity use "literacy aimbot" responsibly? I don't know.
It's just a cautionary tale. I'm not expecting to win an argument. I could come up with counter-anecdotes myself:
ABS made braking in slippery conditions easier and safer. People didn't learn to brake better; they still pushed the pedal harder thinking it would make the car stop faster, not realizing the complex dynamics of "making a car stop". That changed everything. It made cars safer.
Also, just an anecdote.
Sure, a lot of people need focus. Some people don't, they need to branch out. Some systems need aimbot (like ABS), some don't (like Gunbound).
The future should be home to all kinds of skills.
When I started in the 90s I could work on something for weeks without much interruption. These days there is almost always some scrum master, project manager or random other manager who wants to get an update or do some planning. Doing actual work seems to have taken a backseat to talking about work.
Copy-paste, copy-paste. No real understanding of the solutions, even in areas of my expertise. I just don't feel like understanding the flood of information without any real purpose behind the understanding. While I probably (?) get more done, I also just don't enjoy it. But I also can't go back to googling for hours now that this ready-made solution exists.
I wish it would have never been invented.
(Obviously scoped to my enjoyment of hobbyist projects, let's keep AI cancer research out of the picture..)
Newer models often end responses with questions and thoughts that encourage exploration, as do features like ChatGPT's follow up suggestions. However, a lot of work needs to be done with LLM interfaces to balance exploitation and exploration while avoiding limiting AI's capabilities.
You can ask the LLM to generate a number of solutions though - the exploration is possible and relatively easy then.
And I say that as someone who dislikes LLMs with a passion.
Ozzie_osman•3h ago
I've actually found that LLMs are great at exploration for me. I'd argue, even better than exploitation. I've solved many a complex problem by using an LLM as a thought partner. I've refined many ideas by getting the LLM to brainstorm with me. There's this awesome feedback loop you can create with the LLM when you're in exploration mode that is impossible to replicate on your own, and still somewhat difficult even with a human thought partner.
tombert•3h ago
I've started doing something that I have been meaning to do for years, which is to go through all the seminal papers on concurrency and make a minimal implementation of them. I did Raft recently, then Lamport timestamps, then a lot of the common Mutex algorithms, then Paxos, and now I'm working on Ambient Calculus.
I've attempted this before, but I would always get stuck on some detail that I didn't fully grasp in the paper and would abandon the project. Using ChatGPT, I've been able to unblock myself much more easily. I will ask it to clarify stuff in the paper, and sometimes it doesn't even matter if it's "wrong", so much as it's giving me some form of feedback and helping me think of other ideas for how to fix things.
Doing this, I manage to actually finish these projects, and I think I more or less understand them, and I certainly understand them more than I would have had I abandoned them a quarter of the way through like I usually do.
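For anyone curious, the core rule in something like Lamport timestamps really is tiny. The kind of minimal implementation I'm describing might look something like this rough Python sketch (the names and structure are my own, not from the paper):

    import threading

    class LamportClock:
        """Toy Lamport logical clock: a counter that orders events across processes."""

        def __init__(self) -> None:
            self._time = 0
            self._lock = threading.Lock()

        def local_event(self) -> int:
            # Rule 1: tick before every local event (sends included).
            with self._lock:
                self._time += 1
                return self._time

        def send(self) -> int:
            # A send is just a local event; the returned value travels with the message.
            return self.local_event()

        def receive(self, msg_time: int) -> int:
            # Rule 2: on receive, jump past both the sender's clock and our own.
            with self._lock:
                self._time = max(self._time, msg_time) + 1
                return self._time

    # Two processes exchanging one message:
    a, b = LamportClock(), LamportClock()
    a.local_event()        # a's clock: 1
    t = a.send()           # a's clock: 2, message carries timestamp 2
    print(b.receive(t))    # b's clock: 3, so the receive is ordered after the send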
boleary-gl•2h ago