
Code never mattered in the first place

https://mar.coconauts.net/blog/posts/code-never-mattered/
1•marbartolome•2m ago•0 comments

The Place of Houses: Questionnaire [pdf]

http://www.duodickinson.com/Yours_complete.pdf
1•Kaibeezy•2m ago•0 comments

How to improve code quality of Claude Code and codex (on 2026-05)

1•david_d8912•2m ago•0 comments

It is distressing that AI does not know the seven cardinal virtues

https://en.wikipedia.org/wiki/Seven_virtues
2•chasil•9m ago•2 comments

You made me rich, thank you

https://github.com/theori-io/copy-fail-CVE-2026-31431/issues/128
2•mfi•10m ago•0 comments

Donlyn Lyndon, Last Surviving Creator of the Sea Ranch, Dies at 90

https://www.nytimes.com/2026/05/05/arts/design/donlyn-lyndon-dead.html
2•Kaibeezy•10m ago•1 comment

NHS England withdraws public software over AI hacking fears

https://www.computing.co.uk/news/2026/security/nhs-england-withdraws-public-software-over-hacking...
1•latein•11m ago•0 comments

Show HN: Yumi – Your workspace for AI chat, notes, and research

https://askyumi.app
1•yumi-dev•11m ago•0 comments

Fedora is now the default Linux recommendation, and Ubuntu did this to itself

https://www.xda-developers.com/fedora-becoming-default-linux-recommendation-ubuntu-fault/
1•bundie•12m ago•0 comments

The Deletion Test – The Phoenix Architecture

https://aicoding.leaflet.pub/3md5ftetaes2e
2•fagnerbrack•14m ago•0 comments

The 80% Problem in Agentic Coding

https://addyo.substack.com/p/the-80-problem-in-agentic-coding
1•fagnerbrack•14m ago•0 comments

How the Lobsters front page works – nilenso blog

https://blog.nilenso.com/blog/2026/01/20/lobsters-front-page/
1•fagnerbrack•14m ago•0 comments

The Boring Internet

https://www.terrygodier.com/the-boring-internet
2•crowdhailer•16m ago•1 comment

Code coverage tells you what you didn't test – not whether your tests are good

https://bubble.ro/2026/05/04/code-coverage-in-ci-cd-what-it-really-tells-you-and-what-it-doesnt/
2•birdculture•19m ago•0 comments

Exclusive / House Committees Probe Cursor Parent, Airbnb over Chinese AI

https://www.semafor.com/article/04/29/2026/house-committee-probes-cursor-parent-airbnb-over-chine...
1•Palmik•19m ago•0 comments

The Economy Will Be 10x the Size in 10 Years [video]

https://www.youtube.com/watch?v=N5KCm_55xeQ
2•andsoitis•21m ago•0 comments

Rayhan Khilji

https://www.tryoptic.app/
1•khilji•25m ago•0 comments

Where the Despairing Log On, and Learn Ways to Die (2021)

https://www.nytimes.com/interactive/2021/12/09/us/where-the-despairing-log-on.html
2•Cider9986•27m ago•0 comments

Running a Local LLM Coding Server on MacBook Pro M5 Pro 48 GB

https://blog.kulman.sk/running-local-llm-coding-server/
3•ingve•29m ago•0 comments

GitAgentProtocol (Open Gap)

https://github.com/open-gitagent/gitagent-protocol
2•mpgirro•29m ago•0 comments

Mr_hacker

https://docs.google.com/document/d/1Ab1-CHNJNyoOxybkH0ngjsl-3soLt87J1VYmpF01-sw/edit?tab=t.0#head...
1•deadwin_yt•29m ago•0 comments

We rebuilt our auth back end in under an hour using a DSL/compiler approach

https://github.com/nikoma/carrier
2•nikoma777•34m ago•0 comments

Progressive Web Components

https://arielsalminen.com/2026/progressive-web-components/
2•mpweiher•35m ago•0 comments

RCP8.5 Is Officially Dead

https://rogerpielkejr.substack.com/p/rcp85-is-officially-dead
1•mpweiher•35m ago•0 comments

Show HN: Cybersecurity Phishing Guard for Chrome using local LLMs for privacy

https://github.com/tommyjepsen/local-llm-phishing-guard-for-chrome
2•tommyjepsen•35m ago•0 comments

Ask Claude (LRB)

https://www.lrb.co.uk/the-paper/v48/n08/paul-taylor/diary
2•OZYMANDIASAK•37m ago•0 comments

Ask HN: Is quantum computing worth the struggle?

2•alexyan0431•41m ago•4 comments

Cornered Rats and Personal Betrayals (1997)

https://www.latimes.com/archives/la-xpm-1997-oct-20-ca-44690-story.html
2•robtherobber•47m ago•0 comments

The vi family

https://lpar.ATH0.com/posts/2026/05/the-vi-family/
2•hggh•48m ago•2 comments

A/B Testing for Alien Life

https://arxiv.org/abs/2605.02969
1•pppone•48m ago•0 comments

Mark Cuban: OpenAI Will Never Return the $1T It's Investing [video]

https://www.youtube.com/watch?v=oEVHNvE_jDw
14•operatingthetan•1h ago

Comments

jqpabc123•39m ago
In my experience, Cuban is generally pretty good at stripping away the stupidity and BS.
aurareturn•16m ago
Sometimes he is the stupidity and BS.
rwmj•14m ago
He's stating the obvious, but perhaps it needed to be said.
aurareturn•27m ago
He's right, there is a race. It's going to be a natural monopoly or duopoly because the cost to train the next SOTA model is always increasing. I can see that there are only 3 companies competing for the duopoly or monopoly realistically: OpenAI, Anthropic, and Google. Everyone else has fallen behind. The flywheel of generate more revenue, get more data, get more compute train a better model might already be too great to overcome for anyone else.

I don't understand why he thinks OpenAI can't be one of the duopolists or become the monopoly. OpenAI's models are always the first or second best overall - usually the first. They are also leading the consumer market by a wide margin. And they made a strategic decision that is paying off: committing to more compute early on, while Anthropic is hammered by a lack of compute.

PS. They've raised ~$200b total, not $1 trillion.

jqpabc123•17m ago
https://www.reuters.com/business/openai-makes-five-year-plan...
aurareturn•16m ago
This is a 5 year pledge - likely based on hitting revenue goals and not just using investor money.
libertine•13m ago
Out of those 3, only Google seems to be in the position to reach that kind of profit levels due to distribution and advertising.

Claude is kicking ass in the niche of coding and processes.

$1 trillion is a lot of money for something that's not differentiated and protected in a massive market.

Does it look like OpenAI has that in place?

Cuban thinks they don't, and won't.

aurareturn•8m ago
I wrote about how I think OpenAI is going to kill it in advertisements here: https://news.ycombinator.com/item?id=46087109

Claude is kicking ass in coding but it seems like Codex is catching up fast. Claude Code's PR has taken a hit recently due to the lack of compute forcing Anthropic to dumb down the models. Codex has been gaining momentum.

Chip manufacturing isn't really differentiated either - that didn't stop TSMC from becoming the monopoly for high-end chip nodes, capturing 90%+ of the advanced chip market. The reason is that Rock's Law makes it too expensive to build the next node unless you've generated enough revenue from the current one. I don't see why it wouldn't be the same for SOTA models.

atwrk•12m ago
How can this become a monopoly/duopoly? There is no moat, the Chinese providers will continue to hunt the market leader at 10% of the price, there is no network effect (OpenAI's Sora was a play in that direction and failed).

I'm constantly amazed how this AGI/monopoly narrative can be kept up so long in the West, it just doesn't make sense (unless the state creates said monopoly by forbidding competition).

aurareturn•7m ago
There is clearly a moat - or Claude Code wouldn't be generating over $10b in ARR.
piker•3m ago
That's not what "moat" means. Claude Code has a castle. A "moat" is what protects the castle from invaders: things like high switching costs, proprietary formats, and network effects, none of which are there.

In other comments people mention the "flywheel" of data and money feeding training, but there's a view that at some point the baseline open-weight models are "good enough" that the money will dry up.

preommr•8m ago
> I can see that there are only 3 companies competing for the duopoly or monopoly realistically: OpenAI, Anthropic, and Google.

I could see people saying this in 2022, but now? No chance.

Chinese models keep demonstrating that SOTA can be approximated for a fraction of the cost. The innovation out of these companies keeps showing diminishing returns, with a greater emphasis on the tooling and application layer. Having the right workflow with the right data is more important than having the right model. We could freeze AI now, and I'd bet good money that the current state of things is good enough to be, if not first, at least competitive for the next few years.

Even if we do end up with an oligopoly situation, it'll be less like Microsoft in the 90s and more like Microsoft now, where they give Windows away essentially for free, support WSL, and focus on cloud services rather than their OS.

Jare•18m ago
> Fewer people applying for patents, because the minute you apply for the patent, it's available to everybody, which means every model can train on it

We know LLM companies have, for lack of a better word, "sidestepped" the copyright on millions of works with their "transformative fair use" arguments. Are LLMs also a way to sidestep patents?

pjc50•4m ago
LLMs are accelerants. They enable people to infringe patents and copyrights at a much larger scale. As we know from previous examples, if you break the law enough as a company, eventually they have to let you keep doing it.