
Claude Sonnet 4 now supports 1M tokens of context

https://www.anthropic.com/news/1m-context
940•adocomplete•11h ago•517 comments

Search all text in New York City

https://www.alltext.nyc/
127•Kortaggio•3h ago•29 comments

Ashet Home Computer

https://ashet.computer/
201•todsacerdoti•8h ago•43 comments

Show HN: Building a web search engine from scratch with 3B neural embeddings

https://blog.wilsonl.in/search-engine/
363•wilsonzlin•11h ago•59 comments

Journaling using Nix, Vim and coreutils

https://tangled.sh/@oppi.li/journal
86•icy•13h ago•29 comments

Bezier-rs – algorithms for Bézier segments and shapes

https://graphite.rs/libraries/bezier-rs/
16•jarek-foksa•3d ago•0 comments

Training language models to be warm and empathetic makes them less reliable

https://arxiv.org/abs/2507.21919
221•Cynddl•13h ago•221 comments

A gentle introduction to anchor positioning

https://webkit.org/blog/17240/a-gentle-introduction-to-anchor-positioning/
49•feross•4h ago•13 comments

Show HN: Omnara – Run Claude Code from anywhere

https://github.com/omnara-ai/omnara
220•kmansm27•10h ago•111 comments

Visualizing quaternions: An explorable video series (2018)

https://eater.net/quaternions
11•uncircle•3d ago•3 comments

Multimodal WFH setup: flight sim, EE lab, and music studio in 60 sq ft / 5.5 m²

https://www.sdo.group/study
190•brunohaid•3d ago•81 comments

Blender is Native on Windows 11 on Arm

https://www.thurrott.com/music-videos/324346/blender-is-native-on-windows-11-on-arm
125•thunderbong•4d ago•50 comments

WHY2025: How to become your own ISP [video]

https://media.ccc.de/v/why2025-9-how-to-become-your-own-isp
107•exiguus•10h ago•13 comments

LLMs aren't world models

https://yosefk.com/blog/llms-arent-world-models.html
242•ingve•2d ago•129 comments

Blender on iPad Is Finally Happening

https://www.creativebloq.com/3d/blender-on-ipad-is-finally-happening-and-it-could-be-the-app-every-artist-needs
20•walterbell•1h ago•7 comments

Launch HN: Design Arena (YC S25) – Head-to-head AI benchmark for aesthetics

61•grace77•11h ago•24 comments

A spellchecker used to be a major feat of software engineering (2008)

https://prog21.dadgum.com/29.html
140•Bogdanp•4d ago•129 comments

Go 1.25 Release Notes

https://go.dev/doc/go1.25
134•bitbasher•5h ago•25 comments

Why are there so many rationalist cults?

https://asteriskmag.com/issues/11/why-are-there-so-many-rationalist-cults
410•glenstein•12h ago•614 comments

RISC-V single-board computer for less than 40 euros

https://www.heise.de/en/news/RISC-V-single-board-computer-for-less-than-40-euros-10515044.html
131•doener•4d ago•75 comments

Fixing a loud PSU fan without dying

https://chameth.com/fixing-a-loud-psu-fan-without-dying/
22•sprawl_•3d ago•26 comments

The equality delete problem in Apache Iceberg

https://blog.dataengineerthings.org/the-equality-delete-problem-in-apache-iceberg-143dd451a974
47•dkgs•8h ago•23 comments

Evaluating LLMs playing text adventures

https://entropicthoughts.com/evaluating-llms-playing-text-adventures
94•todsacerdoti•11h ago•58 comments

Weave (YC W25) is hiring a founding AI engineer

https://www.ycombinator.com/companies/weave-3/jobs/SqFnIFE-founding-ai-engineer
1•adchurch•10h ago

Debian GNU/Hurd 2025 released

https://lists.debian.org/debian-hurd/2025/08/msg00038.html
190•jrepinc•3d ago•102 comments

Dumb to managed switch conversion (2010)

https://spritesmods.com/?art=rtl8366sb&page=1
40•userbinator•3d ago•17 comments

Galileo’s telescopes: Seeing is believing (2010)

https://www.historytoday.com/archive/history-matters/galileos-telescopes-seeing-believing
18•hhs•3d ago•7 comments

The Missing Protocol: Let Me Know

https://deanebarker.net/tech/blog/let-me-know/
81•deanebarker•7h ago•61 comments

Is Meta Scraping the Fediverse for AI?

https://wedistribute.org/2025/08/is-meta-scraping-the-fediverse-for-ai/
8•nogajun•1h ago•0 comments

Australian court finds Apple, Google guilty of being anticompetitive

https://www.ghacks.net/2025/08/12/australian-court-finds-apple-google-guilty-of-being-anticompetitive/
335•warrenm•13h ago•125 comments

AI Eroded Doctors' Ability to Spot Cancer Within Months in Study

https://www.bloomberg.com/news/articles/2025-08-12/ai-eroded-doctors-ability-to-spot-cancer-within-months-in-study
42•zzzeek•2h ago

Comments

throwup238•2h ago
https://archive.ph/whVMI
smitty1e•1h ago
Thanks for the usable archive link. AI erodes human skill like advertising erodes site utility.
frankc•2h ago
But it sounds like doctors with AI did better overall, or at least that is how I read the first couple of lines. If that is true, I don't really see a problem here. Compilers have eroded my ability to write assembly; that is true. If compilers went away, I would get back up to speed in a few weeks.
lurking_swe•1h ago
If the hospital IT system is temporarily down, I certainly expect my doctors to still be able to do their job. So it is a (small) problem that needs addressing.

Perhaps a monthly session to practice their skills would be useful, so they don't atrophy.

akoboldfrying•1h ago
> if the hospital IT system is temporarily down

I think we have to treat the algorithm as a medical tool here, whose maintenance will be prioritised as such. So your premise is similar to "If all the scalpels break...".

tylerrobinson•1h ago
I think you might agree, though, that the likelihood of one premise is significantly greater than the other!
akoboldfrying•1h ago
Sure. "MRI machine" would have been a better metaphor than "scalpel".
azemetre•1h ago
Which is easier to build resilient systems for: the one where you have a few dozen extra scalpels in a storage closet, or the one that requires offsite backups, separate generators, and constant maintenance?

The premise is absolutely not the same.

akoboldfrying•1h ago
s/scalpel/MRI machine/. How about now?
randcraw•1h ago
Or, each time the software is updated or replaced, the human in the loop will be unable to... be in the loop. Patients will be entirely dependent on software vendor corporations who will claim that their app is always correct and trustworthy. But what human (or other external unbiased trustworthy authority) will be available to check its work?
add-sub-mul-div•1h ago
"People will practice their skills" is the new "drivers will keep their attention on the road and remain ready to quickly take the wheel from the autonomous driver in an emergency."
xorbax•1h ago
It's like research. People had encyclopedias. If they wanted real, deep information about a subject, they had to specifically spend effort seeking out books or papers on that specific subject (which cover a far wider range, and exist in far greater number, than an encyclopedia ever could).

Then we could just go Google it, and/or skim the Wikipedia page. If you wanted more details you could follow references - which just made it easier to do the first point.

Now skills themselves will be subject to the same generalizing phenomenon as finding information.

We have not seen information-finding become better as technology has advanced. More people are able to become barely capable regarding many topics, and this has caused a lot of fragmentation and a general lowering of knowledge.

The overall degradation that happened with politics and public information will now be generalized to anything that AI can be applied to.

You race your MG? Hey, my exoskeleton has a circuit-racer blob; we should go this weekend. You like to paint? I got this Bouguereau app; I'll paint some stuff for you. You're a physicist? The font for chalk writing just released, so maybe we can work on the grand unified theory sometime: you say your part and I can query the LLM and correct your mistakes.

seanmcdirmid•1h ago
This already happened in aviation a long time ago: they have to do things to keep pilots paying attention and not falling asleep on a long haul where the autopilot is doing most of the work. It isn't clear at what point it will just be safer to not have pilots, if automated systems can handle exceptions, as well as takeoffs and landings, well enough.
accoil•1h ago
Does cancer progress that fast?
sothatsit•1h ago
If the AI systems allow my doctor to make more accurate diagnoses, then I wouldn't want a diagnosis done without them.

Instead, I would hope that we can engineer around the downtime. Diagnosis is not as time-critical as other medical procedures, so if a system is down temporarily they can probably wait a short amount of time. And if a system is down for longer than that, then patients could be sent to another hospital.

That said, there might be other benefits to keeping doctors' skills up. For example, I think glitches in the diagnosis system could be picked up by doctors if they are double-checking the results. But if they are relying on the AI system exclusively, then unusual cases or glitches could result in bad diagnoses that could otherwise have been avoided.

stavros•1h ago
Do you expect doctors to be able to image your insides if the X-ray machine is down too?
lurking_swe•1h ago
An X-ray machine can be used “locally” without uploading the images into the IT system, so I don't understand the question. If it was designed to be cloud-only, that would be horrendous design (IMO).

The X-ray machine would still work; it's connected directly to a PC. A doctor can look at the image on the computer without asking some fancy cloud AI.

A power outage, on the other hand, is a true worst-case scenario, but that's different.

stavros•55m ago
I'm not talking about the IT system, I'm talking about when the X-ray machine breaks, same as how we're talking about when the colonoscopy diagnosis machine breaks.
fzeroracer•46m ago
How often do you think the x-ray machine breaks vs how often software shits the bed?

Like one of the biggest complaints I've heard around hospital IT systems is how brittle they are because there are a million different vendors tied to each component. Every new system we add makes it more brittle.

BolexNOLA•1h ago
The stakes of a colonoscopy are typically way, way higher than your typical assembly projects.
akoboldfrying•1h ago
Think of it as a medical device, like an MRI machine. Should we have workarounds for when the MRI machine is down? I think we are better off allocating budget to keeping the MRI machine maintained and running, and assuming that as the normal state -- and likewise for this.
jghn•1h ago
In many cases these software tools are literally classified as medical devices by the FDA, with all of the regulatory compliance that comes with that.
xorbax•1h ago
I think the same thing about meals.
jackvalentine•1h ago
The compiler example is very helpful, thanks for posting it.

My follow-up question is now: “If junior doctors are exposed to AI through their training, is doctor + AI still better overall?” E.g., do doctors need to train their ‘eye’ without AI tools in order to benefit from them?

frankc•1h ago
I think that is a good question and we don't really know yet. I think we are going to have to overhaul a lot of how we educate people. Even if all AI progress stops today, there is still a massive incoming shift in how many professions operate.
jackvalentine•1h ago
> I think we are going to have to overhaul a lot of how we educate people.

I agree.

I work in healthcare, and if you take a tech view of all the data, there is a lot of really low-hanging fruit to pick to make things more standardised and efficient. One example is extracting data from patient records for clinical registries.

We are trying to automate that as much as possible, but I have the nagging sense that we're now depriving junior doctors of the opportunity to look over hundreds of records about patients treated for X to find the data and ‘get a feel’ for it. Do we now have to make sure we're explicitly teaching something, since it's not implicitly being done anymore? Or was it a valueless exercise?

The assumptions that we make about training on the job are all very Chesterton's fence, really.

WhyOhWhyQ•1h ago
You would get back up to speed in a few weeks. The guy who comes after you and never had formative years writing assembly would never get to the level you were at.
frankc•1h ago
Perhaps, but I don't think we should optimize for the scenario of going back to before these tools existed. Of course you need the medical equivalent of BCP, but it's understood that BCP doesn't imply you must maintain the same capacities, just that you can function until you get your systems back online.

To continue to torture analogies, and be borderline flippant: almost no one can work an abacus like the old masters, and I don't think it's worth worrying about. There is an opportunity cost in maintaining those abilities.

dweinus•1h ago
Except the paper doesn't say that doctors + AI performed better than doctors pre-AI. It is well documented that people will trust and depend on AI even if it is not better. The paper doesn't make it clear, but it's possible this is just lose-lose.

Paper link: https://www.thelancet.com/journals/langas/article/PIIS2468-1...

mcbrit•51m ago
The line: "The ADR of standard colonoscopy decreased significantly from 28·4% (226 of 795) before to 22·4% (145 of 648) after exposure to AI."

Support: statistically speaking, on average, for each 1% increase in ADR there is a 3% decrease in the risk of CRC (colorectal cancer).

My objection is all the decimal points without error bars. Freshman physics majors are beat on for not including reasonable error estimates during labs, which massively overstates how certain they should be; sophomores and juniors are beat on for propagating errors in dumb ways that massively understates how certain they should be.

Into this article strolls a rando doctor (granted: with more certs than I will ever have) with a bunch of decimal points. One decimal point, but that still looks dumb to me. What is the precision of your measuring device? Do you have a model for your measuring device? Are you quite sure that the error bars, whose existence you don't even acknowledge, don't cancel out the study?
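For what it's worth, the uncertainty the comment above is asking for can be sanity-checked from the quoted counts alone. The following is a minimal sketch of my own, not the paper's analysis: it puts normal-approximation (Wald) 95% confidence intervals on the two ADR proportions and applies the 3%-per-ADR-point rule of thumb quoted above. Both the interval formula and the final arithmetic are my assumptions.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# ADR before and after AI exposure, counts quoted from the paper
p_before, lo_b, hi_b = wald_ci(226, 795)   # ~28.4%
p_after,  lo_a, hi_a = wald_ci(145, 648)   # ~22.4%

print(f"before: {p_before:.1%} (95% CI {lo_b:.1%} to {hi_b:.1%})")
print(f"after:  {p_after:.1%} (95% CI {lo_a:.1%} to {hi_a:.1%})")

# Rule of thumb quoted above: each 1-point drop in ADR ~= 3% more relative CRC risk
drop_points = (p_before - p_after) * 100   # ~6 percentage points
implied_risk_increase = 3 * drop_points    # ~18% relative increase
print(f"implied relative CRC risk increase: ~{implied_risk_increase:.0f}%")
```

On these counts the two intervals (roughly 25% to 32% before, 19% to 26% after) barely overlap, which is consistent with the paper calling the drop significant while still supporting the point that the headline figures deserve explicit error bars.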

shaldengeki•1h ago
Should be in the first, not seventh paragraph: this was a survey of 19 doctors, who performed ~1400 colonoscopies.
bn-l•1h ago
I know it’s lowering my programming ability. I’m forgetting a lot of syntax.

My solution is to increase the amount I write purely by hand.

SethMurphy•1h ago
Avoiding copy and paste is the key, for me, to keeping my syntax memory.
frankc•1h ago
I think it's doing the same for me, but tbh I am OK with that and not trying to fix it. I do not want to go back to the world before Claude could knock out all of the tedious parts of programming.
stavros•1h ago
Good riddance to my syntax memory. When am I going to ever need it again? The skill I need now is reviewing and architecture.
AstroBen•54m ago
Being able to spend more of my time thinking about architecture has been amazing
seanmcdirmid•1h ago
I'm forgetting small nuanced details about programming systems that I only occasionally have to access.
mrmincent•1h ago
Doctors’ ability to taste diabetes in urine has also probably eroded since more effective methods have come on the market. If they’re more accurate with the use of AI, why would you continue without it?
alberth•1h ago
Another perspective…

I’m sure similar things have been said with:

- calculators & impact on math skills

- sewing machines & people’s stitching skills

- power tools & impacts on craftsmanship.

And for all of the above, there’s both pros and cons that result.

ViscountPenguin•1h ago
My concern is that people seemingly lack the ability to be discerning about when and where to use new technologies. A world in which more deep thought was put into where to apply AI almost certainly wouldn't feature things like AI image generation, as an example.

If we accidentally put ourselves in a position where humans' fundamental skills are being eroded away, we could potentially lose our ability to make deep progress in any non-AI field and get stuck on a suboptimal and potentially dangerous trajectory.

alberth•57m ago
I completely agree — it’s a tricky human challenge.

For example: (a) we’ve lost the knowledge of how the Egyptian pyramids were built; maybe that’s okay, maybe it’s not. (b) On a smaller scale, we’ve also forgotten how to build quality horse-and-buggies, and that’s probably fine since we now live in a world of cars. (c) We almost forgot how to send someone to the moon, and that was just within the last 50 years (and that’s very bad).

stavros•1h ago
Every time I see "but your skills will atrophy" arguments like this, they always leave an implied "and you'll need them!" lingering, which is a neat trick because then you never need to explain.

However, I would like someone to explain this to me: if I haven't needed these skills in enough time for them to atrophy, what catastrophic event has suddenly happened that means I now urgently need them?

This just sounds very much like the old "we've forgotten how to shoe our own horses!" argument to me, and exactly as relevant.

zzzeek•47m ago
I think it's a problem when decisions about disease regimens are turned over to software which then becomes the sole arbiter of these decisions, because humans no longer know how to verify the results and have no choice but to trust the machines entirely.

The scenario we want to avoid is:

"sorry, your claim was denied, the AI said your condition did not need that treatment. You'll have to sell your house."