
Make Fun of Them

https://www.wheresyoured.at/make-fun-of-them/
56•disgruntledphd2•7mo ago

Comments

PaulHoule•7mo ago
I find any claim that superintelligence helps with physics to be a hoot.

Dark matter is the most notable contradiction in physics today, where there is a complete mismatch between the physics we see in the lab, in the solar system, and in globular clusters, and the physics we see at the galactic scale. Contrast that to Newton's unified treatment of gravity on Earth and in the Solar System.

There is no lack of darkon candidates or MOND ideas [1]; what is lacking is an experiment or observation that can confirm one or the other. Similarly, a 1000x bigger TeraKamiokande or GigaKATRIN could constrain proton decay or put some precision on the neutrino mass, but both of these are basically blue-collar problems.

[1] I used to like MOND but the more I've looked at it the more I've adopted the mainstream view of "this dark matter has a galaxy in it" as opposed to "this galaxy has dark matter in it". MOND fever is driven by a revisionist history where dark matter was discovered by Vera Rubin, not Zwicky [2] and that privileges galactic rotation curves (which MOND does great at) over many other kinds of evidence for DM.

[2] ... which I'd love to believe since Rubin did her work at my Uni!

alganet•7mo ago
That's not the point of the article though.
PaulHoule•7mo ago
Altman claimed superintelligence would revolutionize physics. This is one of many bullshit statements attributed to Altman, just one I feel qualified to counter. I could say plenty about software dev too.
alganet•7mo ago
Lots of things were said about AI. I could take this sort of discussion to any subject I want.

The article tries to put these personalities into a "master manipulator" figure. It doesn't matter if 99% of the text actually _criticizes_ these personalities.

What matters is the takeaway a typical reader would get from reading it. It's along the lines of "they're not tech geniuses, they're manipulators".

This takeaway is carefully designed to cater to selected audiences (well, the author claims to be a media trainer, so fair game, I guess).

I think the intent is to actually _promote_ these personalities. I know it sounds contradictory, but as I said, it gives them too much credit for "being good manipulators".

Which audiences are catered to and how they are expected to react is an exercise I'll leave for the reader.

That's a general overview of what this article does. Nothing related to actual claims.

PaulHoule•7mo ago
Yeah, it's like the way you have to ask leftists "What is your expectation for how many votes this post will move the next election D or R?" with the recognition that their radical posturing might really be something the Koch Organization should fund.
alganet•7mo ago
I don't understand your comparison.
PaulHoule•7mo ago
There's a theory that "the meaning of a communication is its effect". If I make a message that I think is a left-wing message but it causes sufficient right-wing backlash to motivate opponents to oppose me, my message wasn't really a left-wing message but a right-wing message from that viewpoint -- one that benefits my enemies.

Isn't that what you're saying here? The author pretty obviously thinks that Altman and company are horrible bullshitters and that's a bad thing, but you seem to think that somebody could come to the conclusion that they are actually really good bullshitters.

alganet•7mo ago
My original comment included a political example, but I removed it before posting precisely to avoid confusion between what I said and simple polarization schemes.

It's not about backlash. Media has several ways to deliver a different message, to different audiences, using a single piece.

What I mean by "audiences" is much more granular than "against" or "in favor".

The article attempts to do that (there are hints of this granularity of target profiles all over it), but it's not very subtle at doing it.

alganet•7mo ago
> a savvy negotiator and manipulator

You give them too much street cred. I'm not convinced they're even good at that.

lenerdenator•7mo ago
If you're referring specifically to the Altman brothers, ask them where you can find a soda and a sunduh for a qwarter on Rowte Farty-Far.

Teasing over the various Midwestern accents is sort of like dealing with boxing great Joe Louis: you can run, but you just can't hide.

Gothmog69•7mo ago
So the author can't conceive of a situation where a scientist can use AI to reduce his busy work by 4 hours a day? He's the one that sounds stupid here.
oytis•7mo ago
Like, to write grant reports? That's already happening at scale.
PaulHoule•7mo ago
Any productivity gained in writing grant reports is lost on the back end evaluating them.
oytis•7mo ago
I'd bet AI is used there as well
Spivak•7mo ago
I think the author leaves out one important point which is that most people sound like idiots when put on the spot and asked to talk about things outside their core competency, and for these men that core competency is business. It's entirely possible they're bad at that as well but a priori you would probably expect them to do a lot better.

It's the human person Gell-Mann effect, we listen to CEOs talk about science, tech, and engineering and they sound like morons because we know these fields. But their audience is specifically people who don't-- and to them, thanks to the effect, they sound like they know what they're talking about.

hbn•7mo ago
I read the first few paragraphs, but when I saw the size of my scrollbar, I decided the author is putting way too much thought and effort into owning a VC-funded tech CEO playing the game you play when you're running a VC-funded tech company.

"Our product is the second coming of Christ and if you give me money now you'll 100000x your investment!" is the correct answer to all questions when you're in that position. I'm not saying it's admirable, but it's what you do to keep money coming in for the time being. It's not that deep.

satai•7mo ago
In that case everyone should just answer "piss off, salesman" and walk away, because their words are of no value.
esafak•7mo ago
I was almost tempted to run this lengthy article through ChatGPT for a summary, but the irony was too much so I just stopped reading.
ergonaught•7mo ago
Author makes a number of valid and valuable points, but desperately needed to edit this down for concision out of respect for everyone's time (including their own), taking their own advice ("clearly articulate what you're saying"). Don't make using an LLM to extract your point seem like such a good idea, eh?
amai•7mo ago
Has this guy ever heard the American president speak? Compared to Trump, Altman et al. are geniuses.
bediger4000•7mo ago
Zitron is correct. He reminds me of Linux advocates, who are also correct.
desktopninja•7mo ago
I'm in the camp now that asks the question: if AI is so good, why are we still tethered to big tech? Why hasn't an untrained human prompted out a product that is 10 times better than anything big tech has to offer? After all, intelligence is free :-)
oytis•7mo ago
It's not even that. Why haven't trained humans who are now supposedly 10 times more productive created new amazing operating systems, programming languages, game engines, browsers, etc.? We should have seen a lot of outstanding products since the AI hype started, but instead every company except the few doing foundational models seems to be just stuck and confused.
rcarmo•7mo ago
This was worth reading for the Snowflake section alone. I’ve seen it happen in real life.
ch_fr•7mo ago
Ed Zitron's writing is inflammatory, but his points are extremely easy to grasp. I always find those reads very therapeutic, as looking at too much genAI discussion can make it seem like I'm the crazy one.

GenAI has all of the media attention in the world, all the capital in the world, and a huge amount of human resources put into it. I think (or at least hope) that this fact isn't controversial to anyone, be they for or against it. We can then ask ourselves whether having models that can write an e-shop API really is an acceptable result, given the near-incomprehensible amounts spent on it.

One could say "It has also led to advances in [other field of expertise]", but couldn't a fraction of that money have achieved greater results if invested directly in that field? To build actual specialized tools and structures? That's an unfalsifiable hypothetical, but reading Sam Altman's "Gentle Singularity[1]" blogpost, it seems like wild guesses are a perfectly fair arguing ground.

On a small tangent about "Gentle Singularity", I think it's not fair to scoff at Ed's delivery when Sam Altman also pulls a lot of sneaky tricks when addressing the public:

> being amazed that it can make life-saving medical diagnoses

The classification model that spots tumors has nothing to do with his product category; I find it very dishonest to sandwich this one example between two examples of generative AI.

> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.

"the world wants a lot more of both" doesn't quite justify the flood of slop on every single art-sharing platform. That's like saying the world wants a lot more communication to hand-wave away the tens of spam calls you get each day. "As long as they embrace the new tools" is just parroting the "adapt or die, Luddite!" argument. From the CEO of the world's foremost AI company I expect more than the average rage-bait comment you'd see on a forum; the fact that it's somehow an "improvement" is taken for granted, even though Sam is talking about fields he's never even dabbled in.

The statement probably doesn't weigh much since my biases are transparent, but I believe there's just so much more intellectual honesty in Ed's arguments than in much of what Sam Altman says.

[1] https://blog.samaltman.com/the-gentle-singularity