For example, how can doctors save time and spend more time one-on-one with patients? Automate the time-consuming, business-y tasks and that’s a win. Not job loss but potential quality of life improvement for the doctors! And there are many understaffed industries.
The balancing point will be reached. For now we are in early stages. I’m guessing it’ll take at least a decade or two for THE major breakthrough—whatever that may be. :)
Do you think they will spend more time with patients or take in more patients?
From what I have seen in my country they would do the latter.
Well, taking in more patients per doctor is what will decrease the cost for the patient (so would increasing the number of doctors). Often, I'd rather be shuffled in and out in half the time, and be charged less, than charged the same and be given more time to talk with the doctor.
Or: The AI tooling will be able to allow the lay-person to double-check the doctor's work, find their blind spots, and take their health into their own hands.
Example: I've been struggling with chronic sinus infections for ~5 years. 6 weeks ago I took all the notes from my doctor interactions and fed them into ChatGPT to do deep research on. In particular, it was able to answer a particularly confusing component: my ENT said he visually saw indications of an allergic reaction in my sinuses, but my allergy tests were negative. ChatGPT found an NIH study finding that 25% of patients had localized allergic reactions that did not show up on allergy tests performed elsewhere on the body (the skin of my shoulder, in my case). My ENT said flat out that isn't how allergies work and wanted to start me on a CPAP to condition the air while I was sleeping, plus a nebulizer treatment every few months. I decided to run an experiment and started taking an allergy pill before bed while waiting for the CPAP+nebulizer. So far, I haven't had even a twinge of sinus problems.
Ultimately, doctors are the experts doing the studies, but AI being there to help will certainly add value.
Avoiding any percentage of misdiagnoses is a huge win and time saver.
For example, if you enjoy cooking, or it is your job, you might be willing to pay for an artisan knife, even though you can buy a good knife for a few bucks. Same with clothes. They are extremely industrialized, but there are still plenty of tailors making a living from bespoke clothes.
We might do it for no other reason than an appreciation of the craft, but a lot of the time it is driven by a desire for high quality (and/or customization).
This makes me wonder if one day we will see artisan software developers (the idea of software craftsmanship is already here). LLMs and co. are good at outputting a lot of code very quickly, but they are often not good at producing quality code. And I sincerely doubt that will get much better; it seems to be more or less a consequence of the core technology behind LLMs. So unless we have a significant paradigm shift, I don't think it will improve much more. It already feels like we've reached the point of diminishing returns on this specific tech.
So what about making smaller but better software, for clients wanting nothing but the utmost quality? Just like bladesmiths, we will have a bunch of fancy new tools at our disposal to code better and faster, but the whole point of the exercise will still be to stay in control of the process and not hand all the decision power to a machine.
Would you rather the software that drives your car be "artisan software", where labels were carefully chosen by a human?
Software isn't the end product. It's a tool that's supposed to do something we want. We may want an artisan to design our house or iPhone, but we don't want them using "artisan hand crafted" rulers and compilers.
The reason I think we might not see this for software, even though we do for other goods, is that the output of a developer is not code; it is software. It's possible for good (fit for purpose, easy to use, fast, pretty, whatever metric) software to be built on bad code. The craft of the code is not necessarily apparent in the product the way it can be with physical goods.
Whether or not LLMs can consistently output "good" software is less clear to me and I'm not interested in trying to make a prediction about it. But if they do I don't see "hand crafted" code being a thing. No one cares about code.
That said, there’s another angle worth considering. AI has introduced a new kind of labor: prompt engineers.
These aren’t traditional programmers, but they interact with models to produce code-like output that resembles the work of software developers. It’s not perfect, and it raises plenty of concerns, but it does represent a shift in how we think about code generation and labor.
Regardless of which side of the fence you're on, I think we can all agree that this paradigm shift is happening, and arguments like the author's raise valid and important concerns.
At the same time, there are also compelling reasons some people choose to embrace AI tools.
In my opinion, the most crucial piece of all this is government policy. Policymakers need to get more involved to ensure we're prepared for this fast-moving and labor-disruptive technology.
Just my two cents and thanks for sharing.
I see it happening every day. It’s especially concerning when the person using the tool doesn’t really understand the output. That kind of disconnect can be dangerous. But at the same time, I’ve also read about cases where AI helped scientists accelerate research and save years of work. So there’s clearly potential for good.
In my case, I’m genuinely worried about where this technology could lead us. But I’m still hopeful. With enough awareness through voices like the author’s and continued public pressure, maybe policymakers will step up and take it seriously before things get out of hand. Thanks for your comment.
This capital versus labor dynamic is very common and an interesting way to frame things. But suppose you do take the view that all the wealth accrues to capital. What are the implications?
One implication would be to skip college, take that money, and invest it in the stock market. Why invest in labor when capital grows faster? Though I don't think anyone with this mindset would offer that advice; they'd rather dwell on the fact that they are laborers by design, with no hope of ever breaking out of that. Sure enough:
> I’m a labourer. A well-compensated one with plenty of bargaining power, for now. I don’t make my living profiting from capital. I have to sell my time, body, and expertise like everyone else in order to make the profits needed to support me and my family with life’s necessities.
Another point, regarding productivity:
> Did you know there’s no conclusive evidence that static type checking has any effect on developer productivity?
We don't need "conclusive evidence" for everything. You see this demand a lot, applied to all sorts of claims. I don't need a laboratory-controlled environment to prove that static type checking makes me more productive. How do I know? Because I've used both statically and dynamically typed languages, and I'm more productive in the statically typed ones. I remember the first time I introduced Flow, a static type checker, into my JavaScript project: the number of errors it surfaced was mind-boggling.
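The kind of error a checker surfaces can be sketched in a few lines. My project was JavaScript with Flow, but the same shape of bug shows up in Python with a checker like mypy; `total_price` here is a made-up illustration, not real code from that project.

```python
# Hedged sketch: a made-up function showing the kind of bug a static
# type checker (mypy here, Flow in my JS case) flags before runtime.

def total_price(prices: list[float], tax_rate: float) -> float:
    """Sum prices and apply a flat tax rate."""
    return sum(prices) * (1 + tax_rate)

# Types line up: the checker is silent and this runs fine.
print(total_price([9.99, 4.50], 0.08))

# Pass a string instead, and mypy reports it before the code ever runs:
# total_price([9.99, 4.50], "0.08")
# error: Argument 2 to "total_price" has incompatible type "str"; expected "float"
```

Dynamically typed code with the same typo only fails (or silently misbehaves) at runtime, which is exactly the class of error that piled up when I first ran a checker over an existing codebase.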
A lot of people agree with me and statically typed languages are popular and dynamically typed languages like Python are constantly adding typing tooling. It's a test of the market. People like these tools, presumably because they're making them more productive. Even if they're not, people like using them for whatever reason and that translates to something, right?
This scientism around everything is exhausting.
Because capital (in the sense you're referring to, which is money) isn't "real". The economy isn't based on money; it's based on goods and services, which require labour and natural resources. Money is just the lubricant that makes sure these two things are optimally distributed.
In simpler terms, if everyone skipped college and just invested in the stock market, the market would collapse, because no one would be producing the goods and services the market is trading in. And that's the rub (tragedy) of economics: it's a cooperative system. You are a part of it, not the temporarily embarrassed billionaire you might fancy yourself as. So if you have a strategy, you have to account for everyone using it, and see if it's sustainable.
This applies to cost reduction through wage suppression or tax avoidance. If only you do it, it works great for you; but if all the companies in the economy do it, suddenly consumers have no money to buy your products or fund the public infrastructure you need.
This is why I'm kind of on the side of the Luddites, as presented in the post. They've been portrayed as Neanderthals, backwards and scared of technological progress. But if they were people who understood that capitalists (not fans of capitalism, but people who own and deploy capital for a living) will ruin the whole economy with their greed and must be brought back in line, then I support that. The original Luddite movement might have failed, but the industrial revolution did necessitate revolts later on, where people had to get violent to secure basic protections like child labour laws, the minimum wage, the weekend, and paid leave. I think the information revolution will have that as well. Probably something like the 19th- to early-20th-century labour rights protests, with a Butlerian twist.
This is just silly. I can give advice to my son, "you should go to college", and on the margin I could think more people should go to college, and then somebody like you comes in with "If everyone went to college, who would pick up the garbage??" as though it's some profound statement.
If you think capital grows faster than labor (conditional on not 100% of the population doing this), then on the margin you should invest more in capital.
You're tilting at windmills here.
So you don't know. There are a couple of devs where I work who started using LLMs heavily to support their work. They earnestly claim that they are more productive, but their other team members disagree with that self-assessment and say they are no more or less productive than before using LLMs.
How to know whose assessment is more accurate? You need some sort of test that eliminates, as far as possible, subjectivity.
Belief that we are powerless plays right into their hands. And it is too psychologically damaging to hold over the long term. Better to acknowledge the reality that the propaganda machine is turned to 11, things are quite uncertain, but the game isn’t over yet.
Some good points in the rest of the post.
> It is during the struggle that I learn the most and improve the most as a programmer. It is this struggle I crave. It is what I enjoy about programming.
This explains the whole post to me. First of all, this is an area where using an AI can streamline design considerations before you get to banging your head against walls. But since this is an aspect the writer enjoys, there's no saving them. They've decided that they like things as they are, without AI. All the other cited reasons are post-decision justifications.
shrug
uberman•4h ago
So, not so great there.
Then recently I had to modernize a Python app written by someone who no longer works for the organization, circa Python 3.6. Several of the key libraries no longer supported the interfaces being called, and there was no documentation at all.
On a whim I asked an LLM to help modernize the code, file by file, and it cut the effort in half.
So, pretty great there.
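The libraries in that app were its own, but an illustrative stand-in gives the flavor: real stdlib interfaces that worked around Python 3.6 and were deprecated or removed in later versions, next to the modern replacements an LLM can mechanically swap in.

```python
# Illustrative stand-in, not the actual app: 3.6-era stdlib calls that
# later Pythons rejected, and their modern equivalents.
import math
import collections.abc

# 3.6-era call                         modern replacement
# from fractions import gcd        ->  math.gcd (fractions.gcd removed in 3.9)
# from collections import Mapping  ->  collections.abc.Mapping (alias removed in 3.10)

print(math.gcd(12, 18))                         # -> 6
print(isinstance({}, collections.abc.Mapping))  # -> True
```

This is exactly the kind of rote, well-documented transformation that an LLM handles quickly, since each fix is mechanical once the failing import or call is identified.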
mattgreenrocks•4h ago
Hallucinations seem fundamental to how LLMs work now, so AI is probably still a ways off from being able to absorb responsibility for decisions like a human can.
I’m sure this comment will offend both pro and anti AI camps. :)
coro_1•4h ago
You can, for example, give minimal input (a few scattered phrases) and get a rich answer back. More specificity, and research of your own, brings more out to play. The designs appear to level themselves to your own capabilities.
actuallyrizzn•2m ago
If you're a senior researcher who understands how to research, and you send a green junior researcher off to do the work without giving them tips or parameters to improve the research process, that's on you, not the junior.