Perhaps a monthly session to practice their skills would be useful? So they don’t atrophy…
I think we have to treat the algorithm as a medical tool here, whose maintenance will be prioritised as such. So your premise is similar to "If all the scalpels break...".
The premise is absolutely not the same.
Then we could just go Google it and/or skim the Wikipedia page. If you wanted more details you could follow the references, which only made the first step easier.
Now skills themselves will be subject to the same generalizing phenomenon as finding information.
We have not seen information-finding become better as technology has advanced. More people are able to become barely capable in many topics, and this has caused a lot of fragmentation and a general lowering of knowledge.
The overall degradation that happened with politics and public information will now be generalized to anything that AI can be applied to.
You race your MG? Hey, my exoskeleton has a circuit racer blob, we should go this weekend. You like to paint? I got this Bouguereau app, I'll paint some stuff for you. You're a physicist? The font for chalk writing just released, so maybe we can work on the grand unified theory sometime; you say your part and I can query the LLM and correct your mistakes.
Except at this point, market forces and going whole hog on neural networks, instead of sticking with just reflective, impartial indexing of the digital medium, have made it nigh impossible for technological aid to actually improve your ability to find niche things. Search engine optimization, plus the interests in shaping narratives, have made searchability take a plunge. Right now an unpolluted index may as well be a WMD for how hard it is to find or keep one operating.
Instead, I would hope that we can engineer around the downtime. Diagnosis is not as time-critical as other medical procedures, so if a system is down temporarily they can probably wait a short amount of time. And if a system is down for longer than that, then patients could be sent to another hospital.
That said, there might be other benefits to keeping doctors' skills up. For example, I think glitches in the diagnosis system could be picked up by doctors if they are double-checking the results. But if they are relying on the AI system exclusively, then unusual cases or glitches could result in bad diagnoses that could otherwise have been avoided.
The X-ray machine would still work; it’s connected directly to a PC. A doctor can look at the image on the computer without asking some fancy cloud AI.
A power outage on the other hand is a true worst case scenario but that’s different.
Like one of the biggest complaints I've heard around hospital IT systems is how brittle they are because there are a million different vendors tied to each component. Every new system we add makes it more brittle.
This is not new, lots of mission critical software systems are run like this.
Who gets the next generation "up to speed" if the teachers are always forgetting?
They aren’t parallel situations and you can’t cleanly graft these requirements over.
My follow up question is now “if junior doctors are exposed to AI through their training is doctor + AI still better overall?” e.g. do doctors need to train their ‘eye’ without using AI tools to benefit from them.
I agree.
I work in healthcare and if you take a tech view of all the data there are a lot of really low hanging fruit to pick to make things more standardised and efficient. One example is extracting data from patient records for clinical registries.
We are trying to automate that as much as possible but I have the nagging sense that we’re now depriving junior doctors of the opportunity to look over hundreds of records about patients treated for X to find the data and ‘get a feel’ for it. Do we now have to make sure we’re explicitly teaching something since it’s not implicitly being done anymore? Or was it a valueless exercise.
The assumptions that we make about training on the job are all very Chesterton’s fence, really.
In a perfect world, there would be no problem - those of us who enjoy the experience would vibe-code whatever they need, and the rest of us would develop our skills the way we want. Unfortunately, there is a group of CXOs, such as the ones at GitHub, who know better and force the only right way down their employees' throats.
To continue to torture analogies, and be borderline flippant, almost no one can work an abacus like the old masters. And I don't think it's worth worrying about. There is an opportunity cost in maintaining those abilities.
Paper link: https://www.thelancet.com/journals/langas/article/PIIS2468-1...
Support: Statistically speaking, on average, for each 1% increase in ADR, there is a 3% decrease in the risk of CRC (colorectal cancer).
My objection is all the decimal points without error bars. Freshman physics majors are beat on for not including reasonable error estimates during labs, which massively overstates how certain they should be; sophomores and juniors are beat on for propagating errors in dumb ways that massively understate how certain they should be.
Into this article up strolls a rando doctor (granted: with more certs than I will ever have) with a bunch of decimal points. One decimal point, but that still looks dumb to me. What is the precision of your measuring device? Do you have a model for your measuring device? Are you quite sure that the error bars, whose existence you don't even acknowledge, don't cancel out the study's result?
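For what it's worth, here's roughly the calculation I mean: a Wald 95% confidence interval for a difference of two proportions, plugged with the 28.4% vs. 22.4% ADR figures quoted downthread and sample sizes I made up entirely for illustration (this is a sketch, not the paper's analysis):

```python
# Wald 95% CI for the difference between two proportions.
# ADR values come from the figures quoted elsewhere in the thread;
# the sample sizes (800 colonoscopies per arm) are invented.
from math import sqrt

def proportion_diff_ci(p1, n1, p2, n2, z=1.96):
    """Approximate 95% confidence interval for p1 - p2."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

low, high = proportion_diff_ci(0.284, 800, 0.224, 800)
print(f"ADR drop: 6.0 points, 95% CI ({low * 100:.1f}, {high * 100:.1f}) points")
```

Even with generous made-up sample sizes, the interval is several points wide, which is exactly why quoting a bare "one decimal point" difference bugs me.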
> While short-term data from randomized controlled trials consistently demonstrate that AI assistance improves detection outcomes...
A compiler can be a fixed thing that does a fixed task. A cancer recognizer is something like a snapshot of people's image-recognition process during a period of time. These are judgements that can't be turned into set algorithms directly.
There was a discussion a while ago about how face recognition trained with Internet images has trouble with security camera footage, because security cameras don't produce that kind of image.
It sounds weird to say that what cancer looks like drifts over time, but I'm pretty sure it's actually true. Demographics change, the genes of even a stable group change over the generations, exactly how a nurse centers bodies changes over time, and all these changes can add up to the AI's judgement snapshot being out of date after some period. If the doctors whose judgements created the snapshots no longer have the original (subtle) skill, then you have a problem (unlike a compiler, whose literal operations remain constant and where updating involves fairly certain judgements).
The detector can be fine-tuned over time to improve accuracy and account for distribution shift. We should still have access to the same signals (did this turn out to be a tumor?) that doctors would use to update their own judgement.
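As a rough illustration of what that could look like (not any vendor's actual pipeline, and every threshold and case count here is invented), a drift check against those outcome labels might be as simple as:

```python
# Minimal sketch: flag distribution shift by comparing sensitivity on
# recently confirmed cases against a baseline, then queue fine-tuning.
from dataclasses import dataclass

@dataclass
class Case:
    prediction: bool   # the model flagged a lesion
    confirmed: bool    # pathology later confirmed a tumor

def sensitivity(cases):
    positives = [c for c in cases if c.confirmed]
    if not positives:
        return None
    return sum(c.prediction for c in positives) / len(positives)

def needs_retraining(recent_cases, baseline_sensitivity, tolerance=0.05):
    """True if sensitivity on recent confirmed cases has dropped too far."""
    current = sensitivity(recent_cases)
    return current is not None and current < baseline_sensitivity - tolerance

# Hypothetical quarterly check over newly labelled cases.
recent = [Case(True, True), Case(False, True), Case(True, False), Case(True, True)]
if needs_retraining(recent, baseline_sensitivity=0.93):
    print("Drift suspected: fine-tune on the recent labelled cases")
```

The point is just that the same pathology-confirmed outcomes that would update a doctor's judgement can also drive periodic retraining of the model.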
My solution is to increase the amount I write purely by hand.
The completions are usually no more than a few lines.
This speeds up my typing (not that my 60-70 wpm is “slow”) and lets me get to the next bit of thinking, without getting too much in the way, decreasing my syntax knowledge, or requiring me to put brainpower into checking the code it generates (since it was what I was about to type). And it hopefully avoids copyright issues: how can “what I was about to type” be infringing?
Typing speed isn’t usually considered a major bottleneck for programming, but I do think using an LLM this way actually increases my productivity somewhat. It’s not the typing speed increase itself (hell, I’m dubious there even is a real typing speed increase, since reading possible completions takes time, but it does feel faster).
It’s more that my adhd-ass brain has a tendency to get bored while typing and get distracted, either with irrelevant tasks or by going crazy with Don’t-Repeat-Yourself, wasting way more time creating complex, unneeded layers of abstraction.
Using an LLM as a fancy autocomplete helps me short circuit these bad tendencies. The resulting code is less DRY and way more KISS.
https://salmonmode.github.io/2020/08/14/the-harmful-obsessio...
I’m sure similar things have been said with:
- calculators & impact on math skills
- sewing machines & people’s stitching skills
- power tools & impacts on craftsmanship.
And for all of the above, there’s both pros and cons that result.
If we accidentally put ourselves in a position where humans’ fundamental skills are being eroded away, we could potentially lose our ability to make deep progress in any non-AI field and get stuck on a suboptimal and potentially dangerous trajectory.
For example, (a) we’ve lost the knowledge of how the Egyptian pyramids were built. Maybe that’s okay, maybe it’s not. (b) On a smaller scale, we’ve also forgotten how to build quality horse-and-buggies, and that’s probably fine since we now live in a world of cars. (c) We almost forgot how to send someone to the moon, and that was just in the last 50 years (and that’s very bad).
However, I would like someone to explain this to me: if I haven't needed these skills in enough time for them to atrophy, what catastrophic event has suddenly happened that means I now urgently need them?
This just sounds very much like the old "we've forgotten how to shoe our own horses!" argument to me, and exactly as relevant.
The scenario we want to avoid is:
"sorry, your claim was denied, the AI said your condition did not need that treatment. You'll have to sell your house."
https://news.ycombinator.com/item?id=44883350
https://www.thelancet.com/journals/langas/article/PIIS2468-1...
https://doi.org/10.1016/S2468-1253(25)00133-5
If someone finds a link to a pre-print or other open access, please post it in the thread, as this is just the abstract.
This looks correct. I noticed this part on the ScienceDirect page for this paper:
https://www.sciencedirect.com/science/article/abs/pii/S24681...
> In Press, Corrected Proof
There’s a lot of info about what those terms mean if you click on them, more than I think would make sense to quote, but I mention it because I noticed that info the other day when I saw the news and posted it to HN. I’m not sure what the corrections are, as I haven’t seen the corrected version, just the abstract and the pre-print, so thanks again for those.
As a layman, does ADR simply mean suspicion, or does it mean they correctly and accurately saw adenomas in 28.4% of patients before and now the rate is only 22.4%? Or just that they suspected it 6 percentage points more often before? Does the actual paper detail whether they simply stopped seeing illusions, or did they actually stop seeing meaningful things they used to see?
I'm sure the paper goes into more detail, but I'm more interested in the false positive vs false negatives than just overall %.
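To make the breakdown I'm after concrete, here's a tiny sketch with entirely invented counts (nothing here is from the paper), just showing how an overall detection rate splits into false negatives vs. false positives:

```python
# Split an overall detection figure into the error types that matter.
# The counts are hypothetical, purely to show the arithmetic.

def rates(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # real adenomas that were caught
    false_negative_rate = fn / (tp + fn)  # real adenomas that were missed
    false_positive_rate = fp / (fp + tn)  # clean patients flagged anyway
    return sensitivity, false_negative_rate, false_positive_rate

# Hypothetical: 200 true detections, 30 false alarms, 50 misses, 720 true negatives.
sens, fnr, fpr = rates(tp=200, fp=30, fn=50, tn=720)
print(f"sensitivity={sens:.2f}, false-negative rate={fnr:.2f}, false-positive rate={fpr:.2f}")
```

Two detectors with the same headline ADR can have very different mixes of misses and false alarms, which is why the overall percentage alone doesn't tell me much.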