frontpage.

From GPT-4 to GPT-5: Measuring Progress in Medical Language Understanding [pdf]

https://www.fertrevino.com/docs/gpt5_medhelm.pdf
38•fertrevino•2h ago•15 comments

Uv format: Code Formatting Comes to uv (experimentally)

https://pydevtools.com/blog/uv-format-code-formatting-comes-to-uv-experimentally/
115•tanelpoder•4h ago•79 comments

Happy 0b100000th Birthday, Debian

https://lists.debian.org/debian-devel-announce/2025/08/msg00006.html
14•pabs3•3d ago•0 comments

Crimes with Python's Pattern Matching (2022)

https://www.hillelwayne.com/post/python-abc/
106•agluszak•5h ago•38 comments

An interactive guide to SVG paths

https://www.joshwcomeau.com/svg/interactive-guide-to-paths/
188•joshwcomeau•3d ago•20 comments

Elegant mathematics bending the future of design

https://actu.epfl.ch/news/elegant-mathematics-bending-the-future-of-design/
36•robinhouston•3d ago•0 comments

Show HN: Changefly ID + Anonymized Identity and Age Verification

https://www.changefly.com/blog/2025/08/anonymized-identity-and-age-verification-a-new-era-of-privacy-for-changefly-id
9•davidandgoli4th•5h ago•2 comments

AI tooling must be disclosed for contributions

https://github.com/ghostty-org/ghostty/pull/8289
492•freetonik•6h ago•257 comments

DeepSeek-v3.1 Release

https://api-docs.deepseek.com/news/news250821
262•wertyk•5h ago•59 comments

My other email client is a daemon

https://feyor.sh/blog/my-other-email-client-is-a-mail-daemon/
86•aebtebeten•16h ago•17 comments

Beyond sensor data: Foundation models of behavioral data from wearables

https://arxiv.org/abs/2507.00191
189•brandonb•10h ago•41 comments

Miles from the ocean, there's diving beneath the streets of Budapest

https://www.cnn.com/2025/08/18/travel/budapest-diving-molnar-janos-cave
98•thm•3d ago•13 comments

Show HN: Splice – CAD for Cable Harnesses and Electrical Assemblies

https://splice-cad.com
21•djsdjs•3h ago•4 comments

Text.ai (YC X25) Is Hiring Founding Full-Stack Engineer

https://www.ycombinator.com/companies/text-ai/jobs/OJBr0v2-founding-full-stack-engineer
1•RushiSushi•3h ago

Weaponizing image scaling against production AI systems

https://blog.trailofbits.com/2025/08/21/weaponizing-image-scaling-against-production-ai-systems/
311•tatersolid•12h ago•83 comments

How well does the money laundering control system work?

https://www.journals.uchicago.edu/doi/10.1086/735665
176•PaulHoule•11h ago•176 comments

The Onion Brought Back Its Print Edition. The Gamble Is Paying Off

https://www.wsj.com/business/media/the-onion-print-subscribers-6c24649c
63•andsoitis•2h ago•11 comments

Beyond the Logo: How We're Weaving Full Images Inside QR Codes

https://blog.nitroqr.com/beyond-the-logo-how-were-weaving-full-images-inside-qr-codes
36•bhasinanant•3d ago•14 comments

Using Podman, Compose and BuildKit

https://emersion.fr/blog/2025/using-podman-compose-and-buildkit/
241•LaSombra•14h ago•79 comments

Philosophical Thoughts on Kolmogorov-Arnold Networks (2024)

https://kindxiaoming.github.io/blog/2024/kolmogorov-arnold-networks/
8•jxmorris12•3d ago•0 comments

Show HN: OS X Mavericks Forever

https://mavericksforever.com/
289•Wowfunhappy•3d ago•120 comments

Building AI products in the probabilistic era

https://giansegato.com/essays/probabilistic-era
85•sdan•6h ago•50 comments

The power of two random choices (2012)

https://brooker.co.za/blog/2012/01/17/two-random.html
41•signa11•3d ago•3 comments

Privately-Owned Rail Cars

https://www.amtrak.com/privately-owned-rail-cars
91•jasoncartwright•12h ago•130 comments

Mirage 2 – Generative World Engine

https://demo.dynamicslab.ai/chaos
14•selimonder•3h ago•4 comments

Mark Zuckerberg freezes AI hiring amid bubble fears

https://www.telegraph.co.uk/business/2025/08/21/zuckerberg-freezes-ai-hiring-amid-bubble-fears/
673•pera•13h ago•679 comments

The contrarian physics podcast subculture

https://timothynguyen.org/2025/08/21/physics-grifters-eric-weinstein-sabine-hossenfelder-and-a-crisis-of-credibility/
152•Emerson1•7h ago•180 comments

Launch HN: Skope (YC S25) – Outcome-based pricing for software products

38•benjsm•9h ago•30 comments

I forced every engineer to take sales calls and they rewrote our platform

https://old.reddit.com/r/Entrepreneur/comments/1mw5yfg/forced_every_engineer_to_take_sales_calls_they/
246•bilsbie•9h ago•171 comments

The Core of Rust

https://jyn.dev/the-core-of-rust/
142•zdw•8h ago•118 comments

In the long run, LLMs make us dumber

https://desunit.com/blog/in-the-long-run-llms-make-us-dumber/
76•speckx•5h ago

Comments

cabacon•3h ago
Plato's _Phaedrus_ features Socrates arguing against writing; "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."

I have heard people argue that the use of calculators (and later, specifically graphing calculators) would make people worse at math; quick searching found papers like https://files.eric.ed.gov/fulltext/ED525547.pdf discussing the topic.

I can't see how the "LLMs make us dumber" argument is different than those. I think calculators are a great tool, and people trained in a calculator-having environment certainly seem to be able to do math. I can't see that writing has done anything but improve our ability to reason over time. What makes LLMs different?

chankstein38•3h ago
Because they do it all for us, and they frequently do it wrong. We're not offloading the calculation or the typing to the tool; we're using it to solve the whole problem for us.

Calculators don't solve problems, they solve equations. Writing didn't kill our memories because there's still so much to remember that we almost have to write things down to be able to retain it.

If, instead of doing your own research, presenting the LLM with your solution, and letting it point out errors, you just type "How do I make ____?", it's doing the entire thought process for you right there. And it may be leading you wrong.

That's my view on how it's different, at least. They're not calculators or writing. They're text robots that present solutions confidently and offer to do more work immediately afterwards, usually ending a response with "Want me to write you a quick python script to handle that?"

A thought experiment: if you're someone who has used a calculator to calculate 20% tips your whole life, try to calculate one without it. Maybe you specifically don't struggle because you're good at math or have a lot of math experience elsewhere, but if you have approached it the way this article is calling bad, you'd simply have no clue where to start.

cabacon•3h ago
I guess my point is that the argument being made is "if you lift dumbbells with a forklift, you aren't getting strong by exercising". And that's correct. But that doesn't mean that the existence of forklifts makes us weaker.

So, I guess I'm just saying that LLMs are a tool like any other. Their existence doesn't make you worse at what they do unless you forgo thinking when you use them. You can use a calculator to efficiently solve a wrong equation - you have to think about what it is going to solve for you. You can use an LLM to make a bad argument for you - you have to think about the output you're going to have it produce for you.

I was just feeling anti-alarmist-headline - there's no intrinsic reason we'd get dumber because LLMs exist. We could, but I think history has shown that this kind of alarmism doesn't come to fruition.

chankstein38•3h ago
Fair! I'd definitely agree with that! I don't really know the author's intentions here but my read of this article is that it's for the people that ARE skipping thinking entirely using them. I agree completely, to me LLMs are effectively a slightly more useful (sometimes vastly more useful) search engine. They help me find out about features or mechanisms I didn't know existed and help demonstrate their value for me. I am still the one doing the thinking.

I'd argue we're using them "right" though.

tines•3h ago
The analogy falls apart because calculating isn't math. Calculating is more like spelling, while math is more akin to writing. Writing and math are creative, spelling and calculating are not.
toss1•3h ago
>>What makes LLMs different?

Good question!

Writing or calculators likely do reduce our ability to memorize vast amounts of text or do arithmetic in our heads; but to write or do math with writing and calculation, we still must fully load those intermediate facts into our brain and fully understand what was previously written down or calculated to wield and wrangle it into a new piece of work.

In contrast, LLMs (unless used with great care as only one research input) can produce a fully written answer without ever really requiring the 'author' to fully load the details of the work into their brain. LLMs basically reduce the task to editing, not writing. Since editing is not the same as writing, it is no surprise this study shows a serious inability to remember quotes from the "written" piece.

Perhaps it is similar to learning a new language, wherein we are able to read the new language at a higher level of complexity much sooner than we can write or speak it?

cabacon•3h ago
I have a kid in high school who uses LLMs to get feedback on essays he has written. It will come back with responses like "you failed to give good evidence to support your point that [X]", or "most readers prefer you to include more elaboration on how you changed subject from [Y] to [Z]".

You (and another respondent) both cite the case where someone unthinkingly generates a large swath of text using the LLM, but that's not the only modality for incorporating LLMs into writing. I'm with you both on your examples, fwiw; I just think that thinking only about that way of using LLMs for writing puts blinders on to the productive ways that they can be used.

It feels to me like people are reacting to the idea that we haven't figured out how to work it into our pedagogy, and that their existence hurts certain ways we've become accustomed to measuring whether people have learned what we intended them to learn. There's certainly a lot of societal adaptation that should put guardrails around their utility to us, but when I see "They will make us dumb!" it just sets off a contrarian reaction in me.

blamestross•3h ago
It's all about who "us" is.

Individuals? Most information technology makes us dumber in isolation, but with the tools we end up net faster.

The scary thing is that it is less about making things "better" than it is making them cheaper. AI isn't winning on skill, it's winning on being "80% the quality at 20% the price."

So if you see "us" as the economic super-organism managed by very powerful people, then it makes us a lot smarter!

tptacek•3h ago
I buy this for writing. There's a very limited set of things GPT is good at for improving my writing (basic sentence voice and structure stuff, overusing words), but mostly I find it makes my writing worse, and I don't trust any argument it makes because, as the post observes, I haven't thought it through and had the opportunity to second-guess it myself.

Also it has a high opinion of Bryan Ferry. Deeply untrustworthy.

But I don't buy this at all for software development. I find myself thinking more carefully and more expansively, at the same time, about solving programming problems when I'm assisted by an LLM agent, because there's minimal exertion to trying multiple paths out and seeing how they work out. Without an agent, every new function I write is a kind of bet on how the software is going to come out in the end, and like every human I'm loss-averse, so I'm not good at cutting my losses on the bad bets. Agents free me from that.

chankstein38•3h ago
That's wild. My experience has been vastly different. ChatGPT, Claude, Claude Code, Gemini, whatever it may be: even the simplest scripts I've had them write usually come out with issues. As far as writing functions is concerned, it's way less risky for me to write functions based on my prior knowledge than to ask ChatGPT to write the entire thing for me and just paste it in and call it good.

I do use it for learning and to help me access new concepts I've never thought about, but if you're not proving what it's writing yourself and understanding what it's written yourself, then I hope I never have to work on code you've written. If you are, then you are not doing what the article is talking about.

tptacek•3h ago
I don't know what you're having it write; I mostly have it write Go. When I ask it to write shell scripts, its shell scripts are better than what I would have written (my daily drivers are whatever Sketch.dev is using under the hood --- Claude, I assume --- and Gemini).

I've been writing Go since ~2012 and coding since ~1995. I read everything I merge to `main`. The code it produces is solid. I don't know that it one-shots stuff; I work iteratively and don't care enough to try to make it do that, I just care about the endpoint. The outcomes are very good.

I know I'm not alone in having this experience.

chankstein38•3h ago
That makes sense! I frequently have it write python. I'll say though, working on Go for more than a decade and coding for more than a lot of people have been alive is likely proof you're not one of the people this article is talking about. I don't think I've been made stupider by LLMs either but, like someone else said, maybe a bit lazier about things. I am not the author so I should stop talking as if I know their thoughts but, at least in my opinion, this message is more important for the swathes of people who don't have 10-20 years of experience solving complex problems.
tptacek•3h ago
I'm not that old.

Xmd5a•2h ago
c'mon

reptation•3h ago
Remake/Remodel is an all-time great! https://youtu.be/m-zSnO7sbXg?list=RDm-zSnO7sbXg
spondylosaurus•2h ago
> Also it has a high opinion of Bryan Ferry. Deeply untrustworthy.

Whoa, whoa, are we talking Bryan Ferry as an artist, or Bryan Ferry as a guy? Because I love me some Roxy Music but have heard that Bryan is kind of a dick.

devmor•21m ago
> I find myself thinking more carefully and more expansively, at the same time, about solving programming problems when I'm assisted by an LLM agent

Every developer that uses LLMs believes this. And every time they are objectively measured, it is shown that they are wrong. Just look at the FOSS study from METR, or the cognitive bias study by Microsoft.

If you understand how this applies to writing, can you not connect the dots and realize that it is giving you a false sense of productivity?

dothereading•3h ago
I agree with this, but at the same time I think LLMs will make anyone who wants to learn much smarter.
j45•3h ago
If it's doing the thinking for you, it's just like social media, but much more intense.
deepsun•3h ago
Plato was against writing, as it makes us dumber.

https://fs.blog/an-old-argument-against-writing/

FollowingTheDao•3h ago
Sounds like you're saying this in favor of AI, but I'm taking it as just in favor of both AI and writing.
neom•3h ago
Bit tangential, but I find oral traditions really interesting, the sheer scale of what can be done is quite impressive: https://blog.education.nationalgeographic.org/2016/04/08/abo... -- https://en.wikipedia.org/wiki/Songline
mrec•2h ago
I don't think it's too tangential. If you haven't already seen it, I suspect you'd enjoy this; it really made an impact on me when I first read it, especially the ending.

https://www.fantasticanachronism.com/p/having-had-no-predece...

Refreeze5224•3h ago
I imagine his memory and those of people who memorized instead of wrote were better. So by that metric, writing is making people dumber. It's just not all that relevant today, and we don't prioritize memorization to the extent Plato and the ancient Greeks probably did.
pixl97•2h ago
Civilization is the process of externalizing our individual needs to others.

We externalize our information to books. We externalize our jobs to specialists. We externalize our shelter to home builders. We externalize our food to farmers. We externalize our water to municipalities.

Individually we may be weaker because of it. Yet in the end we are all stronger, and now billions of us can live at levels unimaginable in the past.

NitpickLawyer•1h ago
> It's just not all that relevant today, and we don't prioritize memorization to the extent Plato and the ancient Greeks probably did.

Funny enough, that's kinda what we're seeing with LLMs. We're past the "regurgitate the training set" now, and we're more interested in mixing and matching stuff in the context window so we get to a desired goal (i.e. tool use, search, "thinking" and so on). How about that...

jazzyjackson•2h ago
It probably made us worse orators
timoth3y•2h ago
There is some important nuance needed.

Plato was not against writing. In fact, he wrote prolifically. Plato's writings form the basis of Western Philosophy.

Plato's teacher Socrates was against writing, and Plato agreed that writing is inferior to dialog in some ways: memory, inquiry, deeper understanding, etc.

We know this because Plato wrote it all down.

I think it would be more accurate to say that Plato appreciated the advantages of both writing and the Socratic method.

TZubiri•2h ago
That's very interesting, especially since we think of Plato as philosophy that is learned by reading.

To some extent Plato wrote, though, which is mostly how we can learn about him, but most of his writings are dialogues between characters.

Also a lot of what we know is written by his disciples.

watwut•2h ago
If only people stopped misinterpreting what past people said or wrote for facile arguments.
whydoineedthis•3h ago
Similar fear mongering when calculators came about. No one got dumber, we just got faster at doing simple math. Working out complex math will always be interesting to those who really want to do it, and the rest likely won't contribute much anyway - they're just consumers. Let the kids have their wordy calculators; it actually may unblock critical paths of success needed for someone to really go deep.
BizarroLand•3h ago
Yep. I force memorized so many calculations because our teachers constantly told us that in the future we wouldn't always have a calculator with us.

It was helpful, I got pretty far along in collegiate math without tutors or assistance thanks to the hard calculation skills I drilled into my head.

But, counterpoint, if I leave my calculator/computer/all in one everything device at home on any given day it can ruin my entire day. I haven't gone 72 hours without a calculator in nearly a decade.

CuriouslyC•3h ago
LLMs haven't made me dumber, but they have made me lazier. I think about writing code by hand now and groan.
sitzkrieg•3h ago
that's kinda embarrassing
CuriouslyC•3h ago
Would you groan if you had to take public transit while your car was broken down?

If you love to knit that's cool but don't get on me because I'd rather buy a factory sweater and get on with my day.

I love creating things, I love solving problems, I love designing elegant systems. I don't love mashing keys.

blibble•3h ago
my public transport is faster and cheaper than driving...
WD-42•2h ago
I love how you immediately go for public transit as an analogy for something regrettable. Fits.
CuriouslyC•2h ago
I put my time in on the city bus, brother. In the time that I had the displeasure, I had bodily fluids thrown on me and someone almost stabbed me. Maybe you have first class magic fairy buses with plush reclining seats and civil neighbors, but that's not the norm.
WD-42•1h ago
I used to take BART into the city for years before I moved away from the bay. Yeah, I'd sometimes see a rare scuffle, and occasionally there would be someone who smelled like they had shat themselves three times living in one of the cars, but overall it was a positive experience. I've never been a more prolific reader than I was during those days. It still blows my mind that I had coworkers who would choose to drive over the bay bridge and get nothing done other than work on their road rage rather than take BART.

Maybe it was worse where you were.

Avshalom•1h ago
public transit drives you places, it does the work for you, like the llm.

biking instead of driving would be a better analogy... which you might have caught if llms hadn't made you dumber.

scarface_74•3h ago
I don’t get paid to “write code”. I use my 30 years of professional industry experience to either make the company money or to save the company money and in exchange for my labor, they put money in my account and formerly RSUs in my brokerage account.

It’s not about “passion”. It’s purely transactional and I will use any tool that is available to me to do it.

If an LLM can make me more efficient at that so be it. I’m also not spending months getting a server room built out to hold a SAN that can store a whopping 3TB of storage like in 2002. I write 4 lines of Yaml to provision an S3 bucket.
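
For a sense of how small that step has become, something like this boto3 sketch does the same job in Python rather than YAML (illustrative only; the bucket name is made up, and a real setup would normally live in infrastructure-as-code):

    import boto3

    # Illustrative only: create a bucket with default settings.
    # The bucket name is invented for this example; outside us-east-1 a
    # CreateBucketConfiguration with the region would also be required.
    s3 = boto3.client("s3")
    s3.create_bucket(Bucket="example-team-artifacts-2025")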

sys_64738•2h ago
How much of your coding time do you spend writing mundane, repetitive code that should just instantly appear? I think there's a dual benefit: it removes the drudgery of that while also giving you more time to write new, interesting code to achieve your goals and priorities sooner.
nomel•2h ago
Wait until you hear how code used to be written, and how "lazy" you have it!

Everyone, yourself included, enjoys things being easier when moving towards a solution. Programming is a means to an end, to solve an actual problem.

pixl97•2h ago
Why?

Or are you growing everything you need to eat by yourself?

Keep it to programming then: I'm sure you write all your own libraries, right? In assembly, that is.

Everything about modern life, especially programming, is about enabling more with less work.

gerdesj•2h ago
So how do you interact with your LLM? (by hand perhaps?)

I find prompt fettling a great way of getting to grips with a problem. If I can explain my problem effectively enough to get a reasonable start on an answer, then I likely thoroughly understand my problem.

An LLM is like a really fancy slide rule or calculator (I own both). It does have a habit of getting pissed and talking bollocks periodically. Mind you, so do I.

CuriouslyC•2h ago
I voice chat with chatgpt to generate a very very detailed architecture document / spec / prd / whatever, in an implementation checklist order, then I take that and save it as a "VISION.md" file in the repo, and queue up a command for the agent to start working on the problem. I have detailed subagent and task forking logic in my claude setup, so I can get plans that involve 7-8 subagents (some being invoked by claude -p for parallel execution) and take a project from that spec to a very real project in ~6-8 hours, during which time my agents will typically phone home to see if they should continue maybe 2 or 3 times (I have anti stopping hooks but they're lazy SoBs).
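
The parallel fan-out looks roughly like this (a sketch, not my exact setup: it assumes the claude CLI's print mode, and the task wording and spec file name are illustrative):

    import subprocess

    # Rough illustration of fanning out scoped tasks through the claude CLI's
    # print mode ("claude -p"), all pointing at the same VISION.md spec.
    # Task descriptions are invented for this example.
    tasks = [
        "Implement the data-layer checklist items from VISION.md",
        "Implement the API checklist items from VISION.md",
        "Write tests for the completed checklist items in VISION.md",
    ]
    procs = [subprocess.Popen(["claude", "-p", task]) for task in tasks]
    for proc in procs:
        proc.wait()  # block until every parallel run has finished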
BizarroLand•3h ago
Dumb is more the inability to make expedient, salient, and useful decisions either from the lack of knowledge or the fundamental incapability to process the available knowledge.

Dumb is accidental or genetic.

AI won't affect how dumb we are.

I think it will decrease the utility of crystallized knowledge skills and increase our fluid knowledge skills. Smart people will still find ways to thrive in the environment.

Human intelligence will continue moving forward.

Avshalom•1h ago
it made doctors worse at diagnosing cancer and diagnosing is very much "mak[ing] expedient, salient, and useful decisions"
codespin•3h ago
Just as the engine replaced physical strength, artificial intelligence, through models like large language models, is now replacing cognitive labor and thought.

From the article: "Muscles grow by lifting weights." Yet we do that now as a hobby and not as a critical job. I'm not sure I want to live in a world where thinking is a gym-like activity; however, if you go back 200 years it would probably be difficult to explain the situation today to someone living in a world where most people are doing physical labor or using animals to do it.

amdivia•2h ago
I doubt that would happen.

The engine provides artificial strength, granted, but AI does not provide artificial intelligence. It's a misnomer.

pixl97•2h ago
>back 200 years it would probably be difficult to explain the

"Almost everyone lives a life closer to that of nobility or the merchent class"

I'm sure the vast majority of the people from that time would rather live in ours if explained that way.

stevage•1h ago
>I'm not sure I want to live in a world where thinking is a gym like activity

That kind of describes the experience of retired people who do sudokus to stave off dementia. I suspect it's a bit akin to going from being a lumberjack to doing 10 squats a day though.

evanjrowley•2h ago
Impressive research, but I can't help feeling like it's fundamentally flawed. The analysis considered "essay ownership" a property of LLM, Search, and Brain-Only participants, but what would have been more valuable is flipping all of the graphs based on perceived ownership levels. On average, fewer LLM users felt a sense of ownership, and this should not surprise anyone. The researchers lumped together people who let LLMs do all of the writing with those who used LLMs in constructive ways. What would have been more interesting is studying the LLM users who maintained a sense of ownership, because then we could learn more ways to use LLMs that potentially make us smarter.

I also feel like there's more to be said about LLMs fostering the ability to ask questions better than you might if you primarily used search. If the objective was to write, for example, about an esoteric organic chemistry topic, and a "No Brain" group of non-experts was only allowed to formulate a response by asking real-life experts as much as they can about the esoteric topic, then would users more experienced with LLMs come out ahead on the essay score? Understanding how to leverage a tight communication loop most effectively is a skill that the non-LLM groups in this study should be evaluated on.

hnuser123456•2h ago
When working well, they enable us to offload needing to memorize a Wikipedia's worth of information and think about higher-level problems. We become more intelligent at higher-level solutions. Of course people don't know what was written if they were required to "submit an essay" where the main grade is whether or not they submitted one and the topic may not have been one that interested them. Ask people to write essays about things they're truly, honestly interested in, and people who have access to an LLM are likely able to enrich their knowledge faster than those without.
nateglims•2h ago
The problem is in the chain of learning required to understand or master something. If you offload the foundational things that you learn and chunk, you will hinder yourself. We can see this right now in the reading dilemma in US schools, where early decoding skills were misunderstood and it led to significant struggles later.
macawfish•2h ago
I disagree that this is a given. In the long run they just allow us to stay in other levels of abstraction for longer periods of time, whether those be lazy/ignorant or otherwise.
stevage•2h ago
I've been thinking about this a bit in the context of a current, very complex and challenging side project.

When I first began, I tried vibe coding the backend, but found the feeling of disconnect from "my" code too uncomfortable, so I abandoned that approach.

I've been relying pretty heavily on ChatGPT to help me learn unfamiliar technologies, but perhaps because of its fallibility with DuckDB, I've been spending a lot of time in the documentation and writing my own queries, and I think I'm going through enough cognitive difficulty to be learning properly.

There was one time when I'd spent probably at least a day trying to optimise a query in Postgres (yes, two database technologies, don't ask) without much success, and ChatGPT completely solved it in about 10 minutes. Incredible result, helped me learn some useful techniques. There are so many rabbit holes on this project it's nice not to have to go down every single one by myself.

On HN there's often this split between anti-AI and AI evangelists but there really is a lot of space in the middle: judicious use of AI for specific purposes, managing the risks and benefits, etc.

(Side note...did the OP really mention the highly discredited broken windows theory?)

christophilus•1h ago
I’ve done something similar recently. AI is really useful at helping me learn the Hare programming language, and some related UNIX stuff I was unfamiliar with. It’s wrong maybe 20% of the time, but that’s often quickly apparent to me (I’m pretty experienced— 25 years in the industry). It also helps me bounce ideas around and consider alternative approaches. I don’t want to go back to a pre-Claude world.
senectus1•1h ago
I use it as little more than a search engine.

Did Google and Yahoo make us dumber?