Ask HN: How are you adapting your career in this AI era?
4•sarthaksaxena•2h ago
Comments
EdNutting•1h ago
Picking up another tool and figuring out where it's useful to integrate it into my workflow. Much the same as when I picked up BeyondCompare, VSCode (replacing Visual Studio) and numerous other tools that have come (and some, since gone).
The only major difference to past experiences of new tools is that AI appears to have a wide range of likely-looking uses (and even more _marketed_ uses), and only recently have specific use-cases/patterns started to emerge with any stability. Many of the likely-looking uses turn out to be minor or no improvement (in a good number of cases, actually worse), which cumulatively _change_ the workflow but don't _improve_ it. Then there's a few specific areas where it helps (sometimes enormously).
To be more concrete:
1. AI helps with being more specification-driven (AI UX people have inadvertently replaced the word "specification" with "plan"). Think upfront, do research, plan the design, then get AI to scaffold the code, then spend lots of time cleaning up and filling it out into a full production-worthy implementation.
2. AI can (on average) help with writing anything which is easily scaffolded from existing 'stuff': boilerplate code; adding an extra piece of infrastructure following established patterns; writing commit messages for small commits with clear intentions from the code, and similarly for PRs.
3. AI is useful as a search and diagnostic tool. Impenetrable or just long error message? AI can summarise that and pull out the useful specifics (e.g. the target line of code and the actual likely interpretations of the message). And Google search has become so poor for specific searches that I rely more and more on Claude for finding (verifiable-by-me) answers.
- Has it changed my workflow? Yes.
- Has it replaced me? Not even close.
- Has it displaced some of the work I used to do? Yes. More time spent on architecture and debugging than on writing code. Debugging workload has gone up due to the convincing-but-wrong code AI often generates that then takes a while to pull apart and fix; or when the code just doesn't match a production-worthy architecture despite extensive planning: too much training on open-source, which is made up of crummy code (by volume, not by popularity).
Sadly, (1) above has also meant that some of the joy of "diving into a problem and scrubbing around in the code to figure out what's going on" has been lost. Instead, just ask AI to "delve into it". For many people, this has removed a part of the process they found tedious. They just wanted to get to a solution. For some people, this has removed a part of the problem-solving challenge that was good fun. Professionally, it's a shift, and it's still hit or miss as to whether it's overall more productive or not. For hobby projects, it's a choice whether to start or continue using AI or not.
Parting thought: AI has been pretty great for web tech stuff. I can see why so many engineers (particularly in Silicon Valley) think it's going to rule the world. But outside of web tech (e.g. computer architecture), it's pretty pants. It's junior-engineer-quality/reliability on stuff it's had huge amounts of training on (web tech, infrastructure, fantasy art, etc.) but useless at things it's got much less coverage of (computer architecture, technical diagrams, 3D spatial reasoning, etc.). This is a comment on where LLMs like Claude and ChatGPT and others are at today. It is not a comment on the future potential, nor on what can be achieved using other forms of AI or combined forms of AI.
This is a personal viewpoint and experience, not on behalf of any current or former employer.
EdNutting•1h ago
P.s. In career terms, it's become a defining career area for now because there's basically no interesting work outside of AI-related stuff.
That'll change in a few years' time. The industry can't sustain this level of obsession forever (for one thing, venture capitalists will move on to the next big thing, as per the very definition of their business model).
For now, it's a case of make hay while the sun shines.
frje1400•56m ago
Taking wider responsibility, doing both dev and ops. Learning more about k8s since my company uses it. Trying to think more about testing and verification in general because I think that's what the bottleneck will be.