frontpage.

Tell HN: Help restore the tax deduction for software dev in the US (Section 174)

1169•dang•5h ago•459 comments

Containerization is a Swift package for running Linux containers on macOS

https://github.com/apple/containerization
125•gok•1h ago•33 comments

Apple announces Foundation Models and Containerization frameworks, etc

https://www.apple.com/newsroom/2025/06/apple-supercharges-its-tools-and-technologies-for-developers/
404•thm•4h ago•255 comments

Show HN: Munal OS: a graphical experimental OS with WASM sandboxing

https://github.com/Askannz/munal-os
135•Gazoche•4h ago•52 comments

Apple introduces a universal design across platforms

https://www.apple.com/newsroom/2025/06/apple-introduces-a-delightful-and-elegant-new-software-design/
344•meetpateltech•5h ago•536 comments

What methylene blue can (and can’t) do for the brain

https://neurofrontiers.blog/what-methylene-blue-can-and-cant-do-for-the-brain/
62•wiry•3d ago•29 comments

Domains I Love

https://www.ahmedsaoudi.com/blog/domains-i-love/
29•ahmedfromtunis•1h ago•15 comments

Launch HN: Chonkie (YC X25) – Open-Source Library for Advanced Chunking

84•snyy•6h ago•30 comments

Go is a good fit for agents

https://docs.hatchet.run/blog/go-agents
86•abelanger•5d ago•67 comments

Show HN: Somo – a human-friendly alternative to netstat

https://github.com/theopfr/somo
61•hollow64•4h ago•19 comments

Doctors could hack the nervous system with ultrasound

https://spectrum.ieee.org/focused-ultrasound-stimulation-inflammation-diabetes
107•purpleko•7h ago•11 comments

Hokusai Moyo Gafu: an album of dyeing patterns

https://ndlsearch.ndl.go.jp/en/imagebank/theme/hokusaimoyo
119•fanf2•7h ago•13 comments

Bruteforcing the phone number of any Google user

https://brutecat.com/articles/leaking-google-phones
401•brutecat•8h ago•128 comments

Pi in Pascal's Triangle

https://www.cut-the-knot.org/arithmetic/algebra/PiInPascal.shtml
36•senfiaj•3d ago•3 comments

Algovivo: an energy-based formulation for soft-bodied virtual creatures

https://juniorrojas.com/algovivo/
48•tzury•6h ago•3 comments

Why quadratic funding is not optimal

https://jonathanwarden.com/quadratic-funding-is-not-optimal/
88•jwarden•7h ago•69 comments

The new Gödel Prize winner tastes great and is less filling

https://blog.computationalcomplexity.org/2025/06/the-new-godel-prize-winner-tastes-great.html
85•baruchel•7h ago•23 comments

Show HN: Most users won't report bugs unless you make it stupidly easy

136•lakshikag•7h ago•73 comments

A bit more on Twitter/X's new encrypted messaging

https://blog.cryptographyengineering.com/2025/06/09/a-bit-more-on-twitter-xs-new-encrypted-messaging/
92•vishnuharidas•3h ago•58 comments

How do you prototype a nice language?

https://kevinlynagh.com/newsletter/2025_06_03_prototyping_a_language/
8•surprisetalk•3d ago•0 comments

Myanmar's chinlone ball sport threatened by conflict and rattan shortages

https://www.aljazeera.com/gallery/2025/6/5/myanmars-chinlone-ball-sport-threatened-by-conflict-and-rattan-shortages
13•YeGoblynQueenne•4d ago•0 comments

A man rebuilding the last Inca rope bridge

https://www.atlasobscura.com/articles/last-inca-rope-bridge-qeswachaka-tradition
55•kaonwarb•2d ago•14 comments

Finding Shawn Mendes (2019)

https://ericneyman.wordpress.com/2019/11/26/finding-shawn-mendes/
325•jzwinck•15h ago•51 comments

Astronomers have discovered a mysterious object flashing signals from deep space

https://www.livescience.com/space/unlike-anything-we-have-seen-before-astronomers-discover-mysterious-object-firing-strange-signals-at-earth-every-44-minutes
53•gmays•2h ago•28 comments

Show HN: Glowstick – type level tensor shapes in stable rust

https://github.com/nicksenger/glowstick
31•bietroi•6h ago•3 comments

Maypole Dance of Braid Like Groups (2009)

https://divisbyzero.com/2009/05/04/the-maypole-braid-group/
32•srean•7h ago•3 comments

LLMs are cheap

https://www.snellman.net/blog/archive/2025-06-02-llms-are-cheap/
279•Bogdanp•10h ago•250 comments

RFK Jr. ousts entire CDC vaccine advisory committee

https://apnews.com/article/kennedy-cdc-acip-vaccines-3790c89f45b6314c5c7b686db0e3a8f9
45•doener•41m ago•4 comments

Potential and Limitation of High-Frequency Cores and Caches (2024)

https://arch.cs.ucdavis.edu/simulation/2024/08/06/potentiallimitationhighfreqcorescaches.html
18•matt_d•3d ago•10 comments

Why Android can't use CDC Ethernet (2023)

https://jordemort.dev/blog/why-android-cant-use-cdc-ethernet/
325•goodburb•1d ago•130 comments

Trusting your own judgement on 'AI' is a risk

https://www.baldurbjarnason.com/2025/trusting-your-own-judgement-on-ai/
86•todsacerdoti•6h ago

Comments

whatevermom•4h ago
Wish more AI bros would read this.
rblatz•3h ago
Why? This is a trash article that basically says "I don't care how much more productive you claim to be with AI, you are a faulty narrator falling into a psychological trap. Any reports or experience of success with AI should be completely ignored. Don't even look at AI until the bigger, smarter science people evaluate it for us simple folk."
tptacek•3h ago
Fun note, someone on Bsky pointed out that this piece is a kind of sequel to an earlier piece by the same author titled (and I am not making this up) "React, Electron, and LLMs have a common purpose: the labour arbitrage theory of dev tool popularity."
mwcampbell•13m ago
You present that title as if it's obviously outlandish, but I think that earlier article does a reasonably good job of backing up that title.
flatline•4h ago
What's up with the typescript rant? Don't we have decades of research about the improvements in productivity and correctness brought by static type checking? It starts out strong but stuff like this really detracts from the overall point the article is trying to make, and it stands out as an incorrect and unjustified (and unjustifiable) point.
fhars•4h ago
Maybe they were alluding to the fact that typescript's type system is unsound?
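(Editor's note: the unsoundness fhars mentions is real and deliberate — TypeScript trades soundness for ergonomics, so some programs type-check yet misbehave at runtime. A minimal sketch using array covariance, one of the classic cases; the class names are invented for illustration:)

```typescript
// Arrays are covariant in TypeScript: a Dog[] may be aliased as an
// Animal[], after which a plain Animal can be pushed into it.
class Animal { name = "generic animal"; }
class Dog extends Animal { bark(): string { return "woof"; } }

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs; // accepted by the compiler
animals.push(new Animal());     // type-checks, but dogs[1] is not a Dog

// Calling dogs[1].bark() would now throw a TypeError at runtime,
// even though the static type of dogs[1] is Dog.
const suspect = dogs[1] as Dog;
console.log(typeof suspect.bark); // "undefined" despite the Dog type
```

(Method-parameter bivariance under the default compiler flags is another well-known hole; `strictFunctionTypes` closes part of it, but not for methods.)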
tptacek•4h ago
They do not appear to be alluding to type system soundness.
zzzeek•4h ago
> Don't we have decades of research about the improvements in productivity and correctness brought by static type checking?

correct me if I'm wrong those studies would not have looked at TypeScript itself, which for all I know could be complete garbage designed to lock you into MSFT products.

kristianc•4h ago
It’s ironic, as static typing exists precisely because developers don’t trust their instincts.

It’s a formal acknowledgement that humans make mistakes, implicit assumptions are dangerous and that code should be validated before it runs. That’s literally the whole point, and if developers by type were YOLO I’ll push it anyway, TS wouldn’t have got anywhere near the traction it has. Static typing is a monument to distrust.
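(Editor's note: the "monument to distrust" point is easy to make concrete — the compiler catches exactly the kind of slip a confident human waves through. A tiny sketch; the `User` shape and the typo are invented for illustration:)

```typescript
interface User {
  id: number;
  email: string;
}

function greet(user: User): string {
  return `Hello, ${user.email}`;
}

// The compiler rejects a misspelled field before the code ever runs:
// greet({ id: 1, emial: "ada@example.com" });
//   compile-time error: 'emial' does not exist in type 'User'.
//   Did you mean 'email'?

console.log(greet({ id: 1, email: "ada@example.com" }));
```

In plain JavaScript the same typo silently produces `"Hello, undefined"` and surfaces only at runtime, if at all.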

sdlifer•4h ago
I used to think this way, but TypeScript allows you to autocomplete faster, which lets you develop faster.
diggan•3h ago
You mean like the autocomplete server/LSP shows you the results faster if you're doing TS than JS? Sounds like a weird editor quirk to bring up if so. In neovim the autocomplete is as fast in JS as any other language, not sure why it wouldn't be like that for you.
Dan42•4h ago
> Don't we have decades of research about the improvements in productivity and correctness brought by static type checking?

Yes, we have decades of such research, and the aggregate result of all those studies is that no productivity gain can be significantly demonstrated for static over dynamic, and vice-versa.

bazoom42•3h ago
Are you referring to a specific review?
tgma•3h ago
Not sure what result you are referring to, but in my experience many of the academic research papers use “students” as test subjects. This is especially fucked up when you want software engineering results. Outside Google et al., where you can get corporate-sanctioned software engineering data at scale, I would be wary that most academic results in the area could be garbage.
rblatz•3h ago
The whole thing is basically you can't know anything, and personal experience can't be trusted. So we are all helpless until "Science" tells us what to do. Until then sit on your hands.

Where is the proof that Javascript is a better language than Typescript? How do you know if you should be writing in Java/Python/C#/Rust/etc? Probably should wait to create your startup lest you fall into a psychological trap. That is the ultimate conclusion of this article.

It's ok to learn and experiment with things, and to build up your own understanding of the world based on your lived experiences. You need to be open minded and reevaluate your positions as more formalized understandings become available, but to say it's too dangerous to use AI because science hasn't tested anything is absurd.

rightbyte•2h ago
> Where is the proof that Javascript is a better language than Typescript?

This is an interesting question really. It feels like it would be really hard to do a study on that. I guess the strength of TS would show up mainly as program complexity grows such that you can't compare toy problems in student exams or what ever.

jpeloquin•3h ago
> Don't we have decades of research about the improvements in productivity and correctness brought by static type checking?

It seems messy. Just one example that I remember because it was on HN before: https://www.hillelwayne.com/post/this-is-how-science-happens...

agentultra•3h ago
As far as I know we don't have decades of research about the improvements in productivity and correctness brought by static type systems.

We have one study on test driven development. Another study that attempted to reproduce the results but found flaws in the original. Nothing conclusive.

The field of empirical research in software development practices is... woefully underfunded and incomplete. I think all we can say is, "more data needed."

hwayne did a talk on this [0].

If you ever try to read the literature it's spartan. We certainly haven't improved enough in recent years to make conclusions about the productivity of LLM-based coding tools. We have a study on CoPilot by Microsoft employees who studied Microsoft employees using it (Microsoft owns and develops CoPilot). There's another study that suggests CoPilot increases error rates in code bases by 41%.

What the author is getting at is that you can't rely on personal anecdotes and blog posts and social media influencers to understand the effects of AI on productivity.

If we want to know how it affects productivity we need to fund more and better studies.

[0] https://www.hillelwayne.com/talks/ese/

moritzwarhier•28m ago
> the debate about types, TypeScript, and web development is, for example, largely anecdotal gossip not backed by much in terms of structured research

I'd be interested in the type of structured research the author is interested in. Could it also be researched whether Go or PHP is better for web development? In some sense, I guess. Both are probably more efficient than writing Apache extensions in assembler, but who knows?

ramoz•4h ago
The gradient effect on this page - idk what you think you are doing, but I had to stop reading. Extremely discombobulating (idk how else to describe it). Using mobile.
ramoz•4h ago
The iOS Safari reader makes it much better. Sorry for the rant.
GardenLetter27•4h ago
Typescript is a psychological hazard?
john-radio•4h ago
Welcome to the antimemetics division. This is not your first day.
croes•3h ago
That we are easily tricked was obvious when we switched from „don’t let FAANG get your data“ to „here is all my code and data so I can ask AI questions about it and let it rewrite it.“
sdlifer•3h ago
I wanted to like this, and there’s value in realizing that LLMs often generate a bunch of content that takes longer to understand just to modify it to do the right thing; so yes, it “lies” and that’s a hazard. But, I’ve been using LLMs a lot daily for the past several months, and they’re better than the article lets on.

The FUD to spread is not that AI is a psychological hazard, but that critical reasoning and training are much, much more important than they once were, it’s only going to get more difficult, and a large percentage of white-collar workers, artists and musicians will likely lose their jobs.

Animats•3h ago
> The FUD to spread is not that AI is a psychological hazard, but that critical reasoning and training are much, much more important than they once were.

Not sure which side of the argument this statement is promoting.

There must be something for which humans are essential. Right? Hello? Anybody? It's not looking good for new college graduates.[1]

[1] https://www.usatoday.com/story/money/2025/06/05/ai-replacing...

tptacek•2h ago
Why do you assume this is going to be particularly bad for new entrants and not for veterans?
Animats•2h ago
Because the entry level jobs are going away first.
tptacek•1h ago
Yes, but: why should that be the case? Entry-level programmers are inexpensive.
simonw•3h ago
This article starts with this section about how easily we can trick ourselves and ignore clear evidence of something:

> But Cialdini’s book was a turning point because it highlighted the very real limitations to human reasoning. No matter how smart you were, the mechanisms of your thinkings could easily be tricked in ways that completely bypassed your logical thinking and could insert ideas and trigger decisions that were not in your best interest.

The author is an outspoken AI skeptic, who then spends the rest of the article arguing that, despite clear evidence, LLMs are not a useful tool for software engineering.

I would encourage them to re-read the first half of their article and question if maybe they are falling victim to what it describes!

Baldur calls for scientific research to demonstrate if LLMs are useful programming productivity enhancements or not. I would hope that, if such research goes against their beliefs, they would chose to reassess.

(I'm not holding my breath with respect to good research: I've read a bunch of academic papers on software development productivity in the past and found most of them to be pretty disappointing: this field is notoriously difficult to measure.)

Beijinger•3h ago
Man, I have no idea of programming and I wrote a hackernews clone in one day with chatgpt.
Beijinger•39m ago
Why the down vote? It is true.

I took a template (very little of the code should be the same), added language support, and included RSS feeds. Here it is: http://news.expatcircle.com/

(registration does not work. Have to upload the latest version. This thing is not really live yet. I will make it available on my github).

BTW, VS Studio is the best software MS ever produced.

vouaobrasil•3h ago
I think the question of whether LLMs are useful for software engineering is not the right question at all.

The better question should be whether long-term LLM use in software will make the overall software landscape better or worse. For example, LLM use could theoretically allow "better" software engineering by reducing bugs, making coding complex interfaces easier --- but in the long run, that could also increase complexity, making the overall user experience worse because everything is going to be rebuilt on more complex software/hardware infrastructures.

And, the top 10% of coder use of LLMs could also make their software better but make 90% of the bottom-end worse due to shoddy coding. Is that an acceptable trade-off?

The problem is, if we only look at one variable, or "software engineering efficiency" measured in some operational way, we ignore the grander effects on the ecosystem, which I think will be primarily negative due to the bottom 90% effect (what people actually use will be nightmarish, even if a few large programs can be improved).

simonw•3h ago
If we assume that LLMs will make the software ecosystem worse rather than better, I think we have two options:

1. Attempt to prevent LLMs from being used to write software. I can't begin to imagine how that would work at this point.

2. Figure out things we can do to try and ensure that the software ecosystem gets better rather than worse given the existence of these new tools.

I'm ready to invest my efforts in 2, personally.

vouaobrasil•3h ago
I would rather not play the prisoner's dilemma at all, and focus on 1 if possible. I don't code much, but when I do code or create stuff, I do it without LLMs, from scratch, and at least some of my code is used in production :)
lumenwrites•3h ago
For a person so eager to psychoanalyze others, the author sure seems oblivious to his own biases.
moritzwarhier•38m ago
I came to say the same thing.

This is a bad introduction in my view, because it makes me imagine the author as the opposite of what they paint themselves as:

> The reason why I’m outlining just how weird I was as a teenager and a young man is that software developers in particular are prone to being convinced by these hazards and few in the field seem to have ever had that “oh my, I can’t always trust my own judgement and reasoning” moment that I had.

Complaining about "biases" that you don't have yourself, as one of "very few" in your profession, makes me want to quit the article. I'm reading on, because it's not written badly in form, and might make interesting points.

But painting oneself explicitly as one of the few humble people there are (while composing a blog post about a widely discussed topic, arguing for a widely discussed opinion): it doesn't inspire confidence.

Sure, I skimmed the top-level comment before reading (about typescript), that might have biased me in some way.

But in the end, this thorough interest in psychological manipulation, along with the belief that only a select few would know about this, like it's a secret, has been a trait of a certain kind of person in my life whom I'm not interested in listening to.

Maybe it's just a tone thing, so now, I'm gonna continue the article.

The mentioned popular book is certainly interesting, I wanted to read that for a while.

So maybe the author did convince me of something there.

dist-epoch•3h ago
Author in this article:

> Our only recourse as a field is the same as with naturopathy: scientific studies by impartial researchers. That takes time, which means we have a responsibility to hold off as research plays out, much like we do with promising drugs

Author in another article:

> Most of the hype is bullshit. AI is already full of grifters, cons, and snake oil salesmen, and it’s only going to get worse.

https://illusion.baldurbjarnason.com/

So I assume he has science research at hand to back up his claim that AI is full of grifters, cons, ... and that it will get worse.

colonCapitalDee•3h ago
I'm not going to wait for some scientist to tell me whether AI is useful or not. I'm going to use it myself and form my own opinion, I'm going to look at other people using it and listen to their opinions, and I'm going to follow scientific consensus once it forms. Sure, my opinion may be wrong. But that's the price you pay for having an opinion.

I also have to point out that the author's maligning of the now famous Cloudflare experiment is totally misguided.

"There are no controls or alternate experiments" -- there are tons and tons of implementations of the OAuth spec done without AI.

"We also have to take their (Cloudflare’s) word for it that this is actually code of an equal quality to what they’d get by another method." -- we do not. It was developed publicly in Github for a reason.

No, this was not a highly controlled lab experiment. It does not settle the issue once and for all. But it is an excellent case study, and a strong piece of evidence that AI is actually useful, and discarding it based on bad vibes is just dumb. You could discard it for other reasons! Perhaps after a more thorough review, we will discover that the implementation was actually full of bugs. That would be a strong piece of evidence that AI is less useful than we thought. Or maybe you concede that AI was useful in this specific instance, but still think that for development where there isn't a clearly defined spec AI is much less useful. Or maybe AI was only useful because the engineer guiding it was highly skilled, and anything a highly skilled engineer works on is likely to be pretty good. But just throwing the whole thing out because it doesn't meet your personal definition of scientific rigor is not useful.

I do hear where the author is coming from on the psychological dangers of AI, but the author's preferred solution of "simply do not use it" is not what I'm going to do. It would be more useful if instead of fearmongering, the author gave concrete examples of the psychological dangers of AI. A controlled experiment would be best of course, but I'd take a Cloudflare style case study too. And if that evidence can not be provided, then perhaps the psychological danger of AI is overstated?

fungiblecog•2h ago
Whether you think LLMs in coding are good or bad largely depends on what you think of current software dev practice. He only gets to this towards the end of the article, but it is the main source of personal bias.

If you think the shoddy code currently put into production is fine you're likely to view LLM generated code as miraculous.

If you think that we should stop reinventing variations on the same shoddy code over and over - and instead find ways of reusing existing solid code and generally improving quality (this was the promise of object orientation back in the nineties, which now looks laughable) - then you'll think LLMs are a cynical way to speed up the throughput of garbage code while employing fewer crappy programmers.

disambiguation•2h ago
Open question to HN: In your opinion, what product or piece of work best represents the high-water mark of AI achievement - LLMs or otherwise? I find articles like this less viable in the face of real counterexamples. I see a few comments already challenging the author for downplaying the Cloudflare OAuth project - is that repo the current champion of SOTA LLMs?
tptacek•1h ago
People will disagree but I think this is a category error. If you're looking for a shining example of sharp, crystallized code to stack up against Fabrice Bellard, you're not going to find it, because that's not what LLM agents do.

'kentonv said this best on another thread:

"It's not the typing itself that constrains, it's the detailed but non-essential decision-making. Every line of code requires making several decisions, like naming variables, deciding basic structure, etc. Many of these fine-grained decisions are obvious or don't matter, but it's still mentally taxing" [... they go on from here].

(Thread: https://news.ycombinator.com/item?id=44209249).

What does that look like on a scoreboard? I guess you'll have to wait a while. Most things that most people write, even when they're successful, aren't notable as code artifacts. A small fraction of successful projects do get that kind of notability; at some point in the next couple years, a couple of them will likely be significantly LLM-assisted, just because it's a really effective way of working. But the sparkliest bits of code in those projects are still likely to be mostly human.

supern0va•1h ago
This piece is rather puzzling. The author is essentially claiming that no one can trust their own judgement on AI (or anything?), and that the lack of scientific research means we should be in a "wait and see" pattern.

...and yet, the author literally just published a book called "The Intelligence Illusion: Why generative models are bad for business": https://www.baldurbjarnason.com/2024/intelligence-illusion-2...

It seems they might have missed "motivated reasoning" in their study of human cognitive faults.

tengbretson•34m ago
Has it ever been scientifically validated with peer review that digging building foundations with an excavator is more effective than using a shovel? Has the author ever held a shovel?