frontpage.

Pentagon Used Anthropic's Claude in Maduro Venezuela Raid

https://www.wsj.com/politics/national-security/pentagon-used-anthropics-claude-in-maduro-venezuel...
1•2OEH8eoCRo0•22s ago•0 comments

Spotify says its best developers haven't written code since Dec, thanks to AI

https://techcrunch.com/2026/02/12/spotify-says-its-best-developers-havent-written-a-line-of-code-...
1•samspenc•38s ago•0 comments

Is your website ready for GTM

https://docsalot.dev/tools/gtm-audit
1•fazkan•1m ago•0 comments

Show HN: Claw Patrol – so that's where the quota went

https://finds.one/clawpatrol
1•frankbyte•3m ago•0 comments

OK, so Anthropic's AI built a C compiler. That don't impress me much

https://www.theregister.com/2026/02/13/anthropic_c_compiler/
2•nickorlow•4m ago•0 comments

A Climate Supercomputer Is Getting New Bosses. It's Not Clear Who.

https://www.nytimes.com/2026/02/13/climate/derecho-supercomputer-ncar.html
1•mitchbob•5m ago•1 comments

A Survey on Federated Fine-Tuning of Large Language Models

https://openreview.net/forum?id=rnCqbuIWnn
1•mldev_exe•8m ago•0 comments

Tackling the problem of "naturalness" in voice AI [video]

https://www.youtube.com/watch?v=7zIEC8tmWkA
1•underfox•9m ago•1 comments

What Happened to Amazon. How Founders Become Day Two, Take Company with Them

https://markatwood.substack.com/p/what-what-happened-to-amazon-or-dont
1•slyall•10m ago•0 comments

Show HN: API-pilot – deterministic API key resolution with runtime validation

https://github.com/Avichay1977/api-pilot
1•avi7777•11m ago•1 comments

The wonder of modern drywall

https://www.worksinprogress.news/p/the-wonder-of-modern-drywall
30•jger15•19h ago•57 comments

Something Big Is Happening. Here's What It Is

https://medium.com/@vishalmisra/something-big-is-happening-heres-what-it-actually-is-9523482c4e00
1•rfonseca•12m ago•0 comments

Thousands of Amateur Gamblers Are Beating Wall Street PhDs

https://www.nytimes.com/2026/02/11/business/economy/forecasts-prediction-markets-economy.html
2•bookofjoe•16m ago•2 comments

Ontologies are all you need

https://lexifina.com/blog/ontologies-are-all-you-need
2•alansaber•18m ago•0 comments

Show HN: We open-sourced MusePro, a Metal-based realtime AI drawing app for iOS

https://github.com/StyleOf/MusePro
1•okaris•23m ago•0 comments

Launching Interop 2026

https://hacks.mozilla.org/2026/02/launching-interop-2026/
1•linolevan•24m ago•1 comments

Show HN: Create a clean tree graph of your projects with my App on iOS

https://apps.apple.com/us/app/motive-project-visualiser/id6754777255
1•Seth_k•26m ago•0 comments

Seven Billion Reasons for Facebook to Abandon Its Face Recognition Plans

https://www.eff.org/deeplinks/2026/02/seven-billion-reasons-facebook-abandon-its-face-recognition...
3•hn_acker•28m ago•0 comments

Andreessen vs. Thiel

https://web.archive.org/web/20200318115004/https://allenleein.github.io/2019/06/12/games2.html
1•eamag•30m ago•0 comments

Show HN: Infoseclist.com – Compare 90 cybersecurity tools ranked by practitioners

https://infoseclist.com/
1•aleks5678•31m ago•0 comments

Show HN: Clonar – A Node.js RAG pipeline with 8-stage multihop reasoning

https://github.com/clonar714-jpg/clonar
1•sowmith-tsrc•31m ago•1 comments

Grub 2.0

https://grubcrawler.dev
3•kordlessagain•32m ago•0 comments

Cmux: Tmux for Claude Code

https://github.com/craigsc/cmux
3•Soupy•33m ago•1 comments

Trump FTC wants Apple News to promote more Fox News and Breitbart stories

https://arstechnica.com/tech-policy/2026/02/trump-ftc-denies-being-speech-police-but-says-apple-n...
9•pseudalopex•33m ago•0 comments

Posteo and Mailbox.org: Many authorities do not create encrypted requests

https://www.heise.de/en/news/Posteo-and-Mailbox-org-Many-authorities-do-not-create-encrypted-requ...
2•doener•33m ago•0 comments

Google Might Think Your Website Is Down

https://codeinput.com/blog/google-seo
2•janpio•35m ago•0 comments

Show HN: TrustVector – Trust evaluations for AI models, agents, & MCP

https://github.com/guard0-ai/TrustVector
2•hckdisc•36m ago•1 comments

An AI Agent Published a Hit Piece on Me [pdf]

https://img.sauf.ca/pictures/2026-02-12/88fce2f8bbe49f40d83dec69800a2aa9.pdf
1•ColinWright•37m ago•2 comments

4K Restoration: 1984 Super Bowl Apple Macintosh Ad by Ridley Scott [video]

https://www.youtube.com/watch?v=ErwS24cBZPc
1•ipnon•37m ago•0 comments

Show HN: First Embeddable Web Agent

https://www.rtrvr.ai/blog/10-billion-proof-point-every-website-needs-ai-agent
2•arjunchint•38m ago•1 comments

Something Big Is Coming (Annotated by Ed Zitron) [pdf]

https://www.dropbox.com/scl/fi/qw6k5c3m575cq21p7jjac/Something-Big-Is-Coming-Annotated.pdf?dl=0&e=1&noscript=1&rlkey=qlr0mgnlpjifo5xkon2crhrhw
22•frizlab•1h ago

Comments

pwillia7•1h ago
context?
dcre•1h ago
It's a response to this: https://shumer.dev/something-big-is-happening

The post is silly, but I do not expect Zitron's commentary to be particularly illuminating as he is a charlatan himself. I could point to many examples, but here is a blog post I wrote about one case of him trying very hard to not understand a simple and familiar situation: https://crespo.business/posts/cost-of-inference/.

gjsman-1000•59m ago
Everyone's a charlatan until their claims come true. For that matter, your rebuttal comes with its own statements of faith, "I just don’t buy it."
dcre•55m ago
Everyone is free to make their own judgment about who is offering a genuine analysis that clarifies reality rather than obscuring it.
meowface•53m ago
Someone who predicts 15 of the last 2 recessions is a charlatan even when their claims come true.
gjsman-1000•51m ago
But who was the charlatan? The person predicting the recession, or the government that headed off the predicted recession by adding another $5T to the debt pile, almost inevitably causing a recession later, at a more politically convenient time for those in power today? The recession happened as predicted; the government just absorbed it for another day.
meowface•48m ago
The person predicting the recession. Even if the government were preventing each new recession through historically unrivalled foresight, the predictor should eventually start incorporating that into the prediction.

If the prediction is "there will be a recession within the next 20 years", then, okay. If it's https://podcasts.apple.com/us/podcast/the-a-i-bubble-is-burs... every single month...

dcre•46m ago
I'm going with the pathologically incurious guy who is wrong in essentially every detail.
paulryanrogers•59m ago
> ...as he is a charlatan himself.

What's the evidence for that?

dcre•57m ago
See edit. Tens of thousands of lines of borderline gibberish for the gullible.
paulryanrogers•30m ago
Thanks for the link. You make some good points.

I still fear for what AI training will cost (financially and ecologically). The outputs also seem like a force multiplier that's more likely to be used for bad than good, at least without better guardrails. And it doesn't seem to make people any better, aside from a narrow view of productivity.

Hopefully Ed is wrong. Or at least there are more articulate and methodical skeptics who can keep us grounded.

meowface•6m ago
>And it doesn't seem to make people any better, aside from a narrow view of productivity.

This could be said about almost any new technology. Spreadsheets, word processors, nearly any tech startup.

People who use LLMs daily generally feel their lives are better because of them. Yes, including the non-"4o cultist psychosis" types.

As for harms: thoughtful AI worriers and doomers have been trying to sound those alarms for decades, but AI skeptics generally shoot it all down because it would require accepting what "hype" and "boosters" say about likely future capabilities, or something like that.

meowface•54m ago
One of the most widely ridiculed and discredited AI skeptics, outmatched only by Gary Marcus.

Note the date, then imagine this take repeated every single month up to now: https://podcasts.apple.com/us/podcast/the-a-i-bubble-is-burs...

Not to say all AI skepticism (especially concerning very short timelines) is necessarily unwarranted, but Zitron and Marcus are just professional contrarians selling a message to people who want their biases and priors affirmed.

int32_64•49m ago
The guy comes across as a non-technical grifter: https://archive.is/m9pHl
devin•1h ago
This is in reply to this post the other day, which did numbers: https://x.com/mattshumer_/status/2021256989876109403
snowwrestler•29m ago
Also in reply (satirically):

Something Small is Happening

https://x.com/johnpalmer/status/2021966462198460849?s=12

dang•54m ago
Recent and related:

Something Big Is Happening - https://news.ycombinator.com/item?id=46973011 - Feb 2026 (73 comments)

gjsman-1000•52m ago
I don't know whether Ed Zitron is telling the truth.

I do know that Suleyman, Altman, and Amodei have lied, lied, and lied repeatedly, whether intentional or not.

For that matter, I do not believe AGI will happen in our lifetimes. https://timdettmers.com/2025/12/10/why-agi-will-not-happen/

habitable5•32m ago
There's a good article about why AGI is not happening and is instead the religion of Silicon Valley: https://fluxus.io/article/alchemy-2-electric-boogaloo. It's good despite being written by a promptfondler.
kn8•2m ago
However, it already did? Interesting how everyone seems to have a different perspective on that.
johnfn•52m ago
I'm not sure what to say about calling someone a "liar" for stating that AI can work for hours unattended. I can prompt AI and have it run for an hour+ at a time and get good results out of it. I have no reason to lie; this is just a factual statement, sort of like saying that my test suite runs for an hour or something. Yes, you need to prompt it correctly and have the right environment and so forth, but it is absolutely not a "lie".
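For concreteness, here is a minimal sketch of the unattended fix-and-retest loop being described, assuming a Python project with a pytest suite; call_model is a hypothetical placeholder, not any particular harness's actual API:

    import subprocess

    def run_tests() -> tuple[bool, str]:
        # Run the project's test suite and hand its output back to the model.
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def call_model(task: str, feedback: str) -> None:
        # Hypothetical: a real harness would ask the model for edits given the
        # task and the latest test output, then apply the returned changes.
        raise NotImplementedError("placeholder for a real model call")

    def agent_loop(task: str, max_iterations: int = 50) -> bool:
        feedback = ""
        for _ in range(max_iterations):
            call_model(task, feedback)   # the model edits the code
            ok, feedback = run_tests()   # the harness checks the result
            if ok:
                return True              # "satisfied": the tests are green
        return False                     # budget exhausted; a human steps in

The "works for an hour unattended" claim comes down to how many of these iterations complete before the loop needs a human.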
gjsman-1000•52m ago
Yes; and you can also find a bear that dances if you visit a circus. Therefore, saying bears can't dance is a lie.
johnfn•50m ago
I don't really understand what you are trying to say with this comment.
gjsman-1000•47m ago
Something can be factually true, but in so rare a circumstance that the claim is simultaneously true and so misleading it's practically a lie. Just like AIs that think for hours without guidance: that implies full automation is imminent, when the reality is it only works correctly about 20-30% of the time.
dcre•44m ago
Do you think they can work for 5 minutes without guidance? Because that's something Ed said would not and could never happen, and the people who said it would were dupes and idiots.
johnfn•33m ago
I use AI for an hour+ without interference fairly regularly, typically once a day, sometimes more. Why would you doubt that to the point that you call people like me a liar?
spwa4•30m ago
If you actually read the post you'll see the reasons to call him a liar:

1) faking benchmarks and lying about a model he profited from commercially (ie. fraud)

2) implying that only a few people (like himself) saw COVID coming. This is a lie: it was the New York Times that published a huge article on the coronavirus at the time indicated, and he, of course, didn't see it coming

3) he doesn't just fail to disclose his commercial interests in what he's peddling, he denies them

4) he confidently states that AI builds the next generation of AI, which he can't know and which has not been stated anywhere

The list goes on.

johnfn•23m ago
I did actually read the post -- or at least the first two pages, until the increasingly unhinged comments started to get a little redundant and I figured I had gotten the gist.

> implying that only a few people (like himself) saw COVID coming

Nowhere does the post imply this. The post says COVID was an exponential curve, and he thinks that AI is a similar curve. There is nothing in there saying that only he was the one to see this. The comment, and you, are responding to a sentiment that doesn't exist in the document.

> he confidently states that AI builds the next generation of AI, which he can't know

Anthropic reported in December that 55% of engineers use Claude for debugging on a daily basis [1]. I am not sure how you come to the conclusion that this "has not been stated anywhere".

I would respond to your other points but I feel like these are so thoroughly incorrect that I should probably stop here.

[1] https://www.anthropic.com/research/how-ai-is-transforming-wo...

NitpickLawyer•44m ago
> Commented [9]: This is fundamentally untrue. An LLM can certainly spit out thousands of lines of code, but "opens the app itself" is definitely up for question, as is "clicks the buttons" considering how unreliable basically every computer-use LLM is. "It iterates like a developer would, fixing and refining until it's satisfied" is just a bald-faced lie. What're you talking about? This is not what these models do, nor what Codex or Claude code does. This is a clever and sinister way to write, because it abuses the soft edges of the truth - while coding LLMs can test products, or scan/fix some bugs, this suggests they A) do this autonomously without human input, B) they do this correctly every time (or ever!), C) that there is some sort of internal "standard" they follow and D) that all of this just happens without any human involvement

---

Ummm. Yeah, no. This actually works. No idea why bozos who obviously don't use the tools write about how the tools don't do this or that. Yes they do. I know because I use them. Today's best agentic harnesses can absolutely do all of the above. Not perfect by any means, not every time, but enough to be useful to me. As some people say "stop larping". If you don't know how a tool works, or what it can do, why the hell would you comment on something so authoritatively? This is very bad.

(I'll make a note that the original article was written by a 100% certified grifter. I happened to be online on LocalLLaMA when that whole debacle happened. He's a quack. No doubt about it. But from the quote I just pasted, so is the commenter. Quacks commenting on quacks. This is so futile.)
