frontpage.

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•50s ago•1 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
1•keepamovin•1m ago•0 comments

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•4m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•6m ago•0 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•6m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•11m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•12m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•13m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•16m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
2•breve•17m ago•0 comments

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•19m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•21m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•24m ago•1 comments

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•25m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
5•tempodox•25m ago•2 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•30m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•33m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
6•petethomas•36m ago•2 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•41m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•56m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
3•init0•1h ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•1h ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
2•fkdk•1h ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
2•ukuina•1h ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•1h ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•1h ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
3•endorphine•1h ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•1h ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•1h ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
2•computer23•1h ago•0 comments

Make Fun of Them

https://www.wheresyoured.at/make-fun-of-them/
56•disgruntledphd2•7mo ago

Comments

PaulHoule•7mo ago
I find any claim that superintelligence helps with physics to be a hoot.

Dark matter is the most notable contradiction in physics today: there is a complete mismatch between the physics we see in the lab, in the solar system, and in globular clusters, and the physics we see at the galactic scale. Contrast that with Newton's unified treatment of gravity on Earth and in the solar system.

There is no lack of darkon candidates or MOND ideas [1]; what is lacking is an experiment or observation that can confirm one or the other. Similarly, a 1000x bigger TeraKamiokande or GigaKATRIN could constrain proton decay or put some precision on the neutrino mass, but both of these are basically blue-collar problems.

[1] I used to like MOND, but the more I've looked at it the more I've adopted the mainstream view of "this dark matter has a galaxy in it" as opposed to "this galaxy has dark matter in it". MOND fever is driven by a revisionist history in which dark matter was discovered by Vera Rubin, not Zwicky [2], and that privileges galactic rotation curves (which MOND does great at; a short sketch of why follows this comment) over many other kinds of evidence for DM.

[2] ... which I'd love to believe since Rubin did her work at my Uni!
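
A minimal sketch of the deep-MOND limit behind that rotation-curve remark (an addition for context, not part of the comment): where the Newtonian acceleration g_N = GM/r^2 falls below Milgrom's constant a_0 (about 1.2e-10 m/s^2), MOND replaces it with a = sqrt(g_N * a_0). For a circular orbit, balancing this against the centripetal acceleration gives

\[
  \frac{v^{2}}{r} = \sqrt{\frac{G M a_0}{r^{2}}}
  \quad\Longrightarrow\quad
  v^{4} = G M a_0 ,
\]

so the orbital speed comes out independent of radius (a flat rotation curve) and scales only with the baryonic mass M; cluster-scale and cosmological evidence for dark matter lies outside this limit.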

alganet•7mo ago
That's not the point of the article though.

PaulHoule•7mo ago
Altman claimed superintelligence would revolutionize physics. This is one of many bullshit statements attributed to Altman, just one I feel qualified to counter. I could say plenty about software dev too.

alganet•7mo ago
Lots of things were said about AI. I could take this sort of discussion to any subject I want.

The article tries to cast these personalities as a "master manipulator" figure. It doesn't matter if 99% of the text actually _criticizes_ these personalities.

What matters is the takeaway a typical reader would get from reading it. It's along the lines of "they're not tech geniuses, they're manipulators".

This takeaway is carefully designed to cater to selected audiences (well, the author claims to be a media trainer, so fair game, I guess).

I think the intent is to actually _promote_ these personalities. I know it sounds contradictory, but as I said, it gives them too much credit for "being good manipulators".

Which audiences are catered to and how they are expected to react is an exercise I'll leave for the reader.

That's a general overview of what this article does. Nothing related to actual claims.

PaulHoule•7mo ago
Yeah, it's like the way you have to ask leftists "What is your expectation for how many votes this post will move in the next election, D or R?" with the recognition that their radical posturing might really be something the Koch Organization should fund.

alganet•7mo ago
I don't understand your comparison.

PaulHoule•7mo ago
There's a theory that "the meaning of a communication is its effect". If I make a message that I think is a left-wing message but it causes sufficient right-wing backlash to motivate opponents to oppose me, my message wasn't really a left-wing message but a right-wing message from that viewpoint -- one that benefits my enemies.

Isn't that what you're saying here? The author pretty obviously thinks that Altman and company are horrible bullshitters and that's a bad thing, but you seem to think that somebody could come to the conclusion that they are actually really good bullshitters.

alganet•7mo ago
My original comment included a political example, but I removed it before posting precisely to avoid confusion between what I said and simple polarization schemes.

It's not about backlash. Media has several ways to deliver a different message, to different audiences, using a single piece.

What I mean by "audiences" is much more granular than "against" or "in favor".

The article attempts to do that (there are hints of this granularity of target profiles all over it), but it's not very subtle at doing it.

alganet•7mo ago
> a savvy negotiator and manipulator

You give them too much street cred. I'm not convinced they're even good at that.

lenerdenator•7mo ago
If you're referring specifically to the Altman brothers, ask them where you can find a soda and a sunduh for a qwarter on Rowte Farty-Far.

Teasing over the various Midwestern accents is sort of like dealing with boxing great Joe Louis: you can run, but you just can't hide.

Gothmog69•7mo ago
So the author can't conceive of a situation where a scientist can use AI to reduce his busywork by 4 hours a day? He's the one who sounds stupid here.

oytis•7mo ago
Like, to write grant reports? That's already happening at scale.

PaulHoule•7mo ago
Any productivity gained in writing grant reports is lost on the back end evaluating them.

oytis•7mo ago
I'd bet AI is used there as well.

Spivak•7mo ago
I think the author leaves out one important point which is that most people sound like idiots when put on the spot and asked to talk about things outside their core competency, and for these men that core competency is business. It's entirely possible they're bad at that as well but a priori you would probably expect them to do a lot better.

It's the Gell-Mann amnesia effect applied to a person: we listen to CEOs talk about science, tech, and engineering and they sound like morons because we know these fields. But their audience is specifically people who don't, and to them, thanks to the effect, they sound like they know what they're talking about.

hbn•7mo ago
I read the first few paragraphs, but when I saw the size of my scrollbar I decided the author is putting way too much thought and effort into owning a VC-funded tech CEO for playing the game you play when you're running a VC-funded tech company.

"Our product is the second coming of Christ and if you give me money now you'll 100000x your investment!" is the correct answer to all questions when you're in that position. I'm not saying it's admirable, but it's what you do to keep money coming in for the time being. It's not that deep.

satai•7mo ago
In that case everyone should just answer "piss off, salesman" and walk away, because their words are of no value.

esafak•7mo ago
I was almost tempted to run this lengthy article through ChatGPT for a summary, but the irony was too much, so I just stopped reading.

ergonaught•7mo ago
Author makes a number of valid and valuable points, but desperately needed to edit this down for concision out of respect for everyone's time (including their own), taking their own advice ("clearly articulate what you're saying"). Don't make using an LLM to extract your point seem like such a good idea, eh?

amai•7mo ago
Has this guy ever heard the American president speaking? Compared to Trump, Altman et al. are geniuses.

bediger4000•7mo ago
Zitron is correct. He reminds me of Linux advocates, who are also correct.

desktopninja•7mo ago
I'm in the camp now that asks: if AI is so good, why are we still tethered to big tech? Why hasn't an untrained human prompted a product out that is 10 times better than anything big tech has to offer? After all, intelligence is free :-)

oytis•7mo ago
It's not even that. Why haven't trained humans who are now 10 times more productive created amazing new operating systems, programming languages, game engines, browsers, etc.? We should have had a lot of outstanding products since the AI hype started, but instead every company except the few doing foundational models seems to be just stuck and confused.

rcarmo•7mo ago
This was worth reading for the Snowflake section alone. I've seen it happen in real life.

ch_fr•7mo ago
Ed Zitron's writing is inflammatory, but his points are extremely easy to grasp. I always find these reads very therapeutic, as looking at too much genAI discussion can make it seem like I'm the crazy one.

GenAI has all of the media attention in the world, all the capital in the world, and a huge amount of human resources put into it. I think (or at least hope) that this fact isn't controversial to anyone, be they for or against it. We can then ask ourselves whether having models that can write an e-shop API really is an acceptable result, given the near-incomprehensible amounts spent on it.

One could say "It has also led to advances in [other field of expertise]", but couldn't a fraction of that money have achieved greater results if invested directly in that field, to build actual specialized tools and structures? That's an unfalsifiable hypothetical, but reading Sam Altman's "Gentle Singularity" [1] blog post, it seems like wild guesses are a perfectly fair arguing ground.

On a small tangent about "Gentle Singularity", I think it's not fair to scoff at Ed's delivery when Sam Altman also pulls a lot of sneaky tricks when addressing the public:

> being amazed that it can make life-saving medical diagnoses

The classification model that spots tumors has nothing to do with his product category; I find it very dishonest to sandwich this one example between two examples of generative AI.

> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.

"The world wants a lot more of both" doesn't quite justify the flood of slop on every single art-sharing platform. That's like invoking "the world wants a lot more communication" to hand-wave away the tens of spam calls you get each day. "As long as they embrace the new tools" is just parroting the "adapt or die, Luddite!" argument. From the CEO of the world's foremost AI company I expect more than the average rage-bait comment you'd see on a forum; the fact that this is somehow an "improvement" is taken for granted, even though Sam is talking about fields he's never even dabbled in.

The statement probably doesn't carry much weight since my biases are transparent, but I believe there's just so much more intellectual honesty in Ed's arguments than in much of what Sam Altman says.

[1] https://blog.samaltman.com/the-gentle-singularity