Sincerity Wins the War

https://www.wheresyoured.at/sic/
62•treadump•7mo ago

Comments

tptacek•7mo ago
In what way is this piece saying something different than Zitron said on June 9 in his "Never Forget What They've Done" piece?

https://www.wheresyoured.at/never-forget-what-theyve-done/

yifanl•7mo ago
You mean, besides how this one is targeted at journalists and that one is targeted at the tech industry?

The difference, besides everything else, is expectations: he expects the tech industry to overhype things because they're salespeople, and he expects journalists to call them out when they're overhyping things.

tptacek•7mo ago
Can you say more? I'm asking seriously; I'd like to have a better understanding of "how to read" Zitron, because these pieces are long and emotive. Are they basically just responses to whatever the latest news is, the way David Gerard writes about blockchain stuff? Because I did see the utility in that kind of writing.
yifanl•7mo ago
Are you asking because of the perceived poor reception to your recent blogpost?

Because I'd say a lot of your questions are answered directly in this latest piece.

tptacek•7mo ago
I've never written anything that got a better reception than that blog post, much to my chagrin, so I don't know where you're coming from with that. But put it aside: what parts of this Zitron piece are responsive to what I wrote? That would be super helpful to understand.
altairprime•7mo ago
The message, if stripped of emotion and audience, may not be wholly unique. The target audience of the two pieces is certainly different, and the emotional tone as well. Zitron doesn’t seem to direct his writing towards any single ‘audience’ week-to-week, as some others do; but each post does always appear to have a target in particular. So the way I would read Zitron, then, is to ask: ‘what is he trying to persuade who of, and how does his emotional intensity uniquely promote that outcome?’ (relative to less-intense others).
rwmj•7mo ago
The article is a long complaint about "churnalism". It's not exactly a new complaint. Nick Davies wrote a whole book about it in 2008 [1]. But it's getting worse and worse so it's worth reminding people of it.

[1] https://en.wikipedia.org/wiki/Flat_Earth_News_(book)

tptacek•7mo ago
Thanks, that makes much more sense than my original read of the article.
potatolicious•7mo ago
I think there's some overlap in the thrust of both pieces but they're pretty distinct topic-wise?

The first piece strikes me as a polemic against "enshittification" - the idea that the industry has done good things, but now subsists on a combination of hot air and making existing good things worse. It further makes the specific point that LLMs belong in the "hot air" category and that they share little with other innovations of the past.

It does touch on what he perceives as overly-friendly press coverage of the above, but I didn't read the piece as focusing on that point.

FWIW, I find Zitron to be an... unreliable commentator on this subject, to put it mildly, but I am not entirely unsympathetic to the point.

The second piece is more specifically focused on the overly-friendly press coverage, and the idea that journalists are either overly credulous, especially to fantastical claims about the tech, or openly corrupted by the parties they are meant to cover.

tptacek•7mo ago
I feel like I pick up all that stuff but I don't really understand who the audience is for it. It would make more sense to me if these companies were public and taking investment (I mean, Google and Meta are, but they're not "AI plays", and this most recent piece focuses on Anthropic and OpenAI). Then the point of the piece would be, like David Gerard's blockchain pieces, "don't invest in this".
potatolicious•7mo ago
That's kinda my main beef with Ed's writing - it's pretty unfocused. Both pieces you linked demonstrate this - he careens from point to point.

The lack of focus I feel like comes from the fact that his audience is an amalgamation of multiple groups, much of which is the "tech is over, it's all grifters now" cadre, so a lot of this isn't really meant to be a persuasive argument of anything but rather just a dumping ground of grievances.

I alluded to finding him an unreliable narrator of this topic, and this is why. So much of this audience is so fully committed to "this is spicy autocomplete, a totally non-functional grift on par with NFTs" as a position that it compromises any of the other points he's trying to make.

FWIW I do find some things of his sympathetic - particularly around how much structural risk we're taking on every time the VCs decide to line up the hype cannon behind something. That said, I think it's also fair to say that this is my projection of his argument, because his actual arguments are often too muddied to even draw that level of specificity.

[edit] I continue to follow Ed's writing because a) I think there are glimmers of something in there and b) I treat it as a temperature read of a substantial minority of public opinion. The level of disillusionment and rising anger against our industry is concerning.

camgunz•7mo ago
I'm not a subscriber, and I find Zitron's style to be too rage bait to really absorb it all, but I share his tech skeptic perspective. I don't think he has an exhortation as direct as Gerard's "don't invest in this", but his general "there's something fundamentally grifty about tech" resonates, and I think TFA's basic "the press is recklessly credulous to tech industry claims" premise is more or less on point.

(I looked up the interview [0] to be sure and I wanna say I remembered this perfectly I am the greatest) My anecdata here is I quit listening to Hard Fork (Kevin Roose & Casey Newton's NYT podcast about tech) after Roose was interviewing... the Cruise CEO. I'll put the exchange here:

Kevin Roose: Do you feel a similar sense of responsibility for the people who are currently driving for a living to find them new work or to create new work for them?

Kyle Vogt: I think we have to contribute to it, for sure. You know, I think — and that could take the form of to the extent we’re able providing training programs or alternate jobs for people. But more than that, I think it’s interacting with our government and our regulators and letting them know this is coming, when and how, and giving them some notice so we can plan ahead a little bit.

---

I quit listening because I wanted _any follow up_ to that at all (there are zero followups in the entire interview--they should rename the podcast to "No Followups" or "Spew Literally Anything To Our Listeners"), like, "who in Congress are you talking to", "what are your plans for training programs", "how have you staffed these efforts", but like, they start joking about rolling gyms or some bullshit (the irony of putting a fuckin stationary bike in a self-driving car honestly is too much, it just is too much I can't take it). I wouldn't write a screed like Zitron did. I would simply say the press doesn't treat tech seriously. Can you imagine a political interview where you have color commentary? It would literally be a joke.

I don't know if this is a satisfying answer to your "who is the audience for this" question, but TL;DR: I think it's basically... I wouldn't call them tech skeptics but rather people who think tech is actually hugely important and deserves to be treated with greater seriousness.

[0]: https://www.nytimes.com/2023/05/12/podcasts/googles-ai-bonan...

tptacek•7mo ago
Ok so this all makes sense but it is basically the exact message I got off the other Zitron post. Tech skepticism I get! But these posts are like 8,000 words; are people skepticizing tech... recreationally?

The post upthread that suggested this was basically just a criticism of how journalism operates was clearer to me (still a point I think he could have made in half as many words, but at least a distinct point).

camgunz•7mo ago
Oh this is fair; I haven't deeply read a Zitron article in like... some time. If you're looking for the Matt Levine of tech you'll have to keep looking.
davidgerard•7mo ago
The canonical post length at Pivot to AI is 250 words and the canonical video length is 4 minutes, though I go over way too often these days.

> are people skepticizing tech... recreationally?

As recreations go, it'd be a socially productive one! But do you seriously not understand that Ed is writing this stuff because he means it?

camgunz•7mo ago
We may be being a little hard on him; I do think he's sincere (I think a lot of stuff gets audience captured, but I absolutely don't think that's the case here). Maybe the way I would phrase it is to me, Zitron's writing is preaching to the choir. But to someone kind of at the beginning of the "hey this feels... a little rotten" journey I think he's a great entree, and I love linking people there.
tptacek•7mo ago
I 100% believe they mean it.
davidgerard•7mo ago
Ed and I basically agree that this shit is shit. People whine about his tone in direct proportion to how many directly useful worked-out numbers he provides. I also agree strongly with his declaration that he will never forgive them for what they've done to the computer. I'm a techie and he's a non-techie, but, y'know, he's not wrong at all.
camgunz•7mo ago
Yeah maybe I should subscribe. I read Never Forget What They've Done [0] which I don't remember getting linked on HN and is it a little much? Yeah. Do I feel exactly the same? Also yeah. OK I convinced myself to subscribe haha, I did it.

[0]: https://www.wheresyoured.at/never-forget-what-theyve-done/

KerrAvon•7mo ago
We need more reality-grounded takes like this one. I do have a quibble:

> These LLMs also have “agents” - but for the sake of argument, I’d like to call them “bots.” Bots, because the term “agent” is bullshit and used to make things sound like they can do more than they can[…]

I'd argue "agents" is actually reasonable technical jargon for this purpose, with a history. Tog on Interface (circa 1990) uses the term for a smart software feature in an app from that time period.

MisterKent•7mo ago
My tech friends and I cannot wait for this agentic bubble to pop. Much like the dotcom bubble, there's absolutely value in AI but the hype is absurd and is actively hurting investments into reasonable things (like just good UX).

The hype and zealotry remind me of a cult. And as I go higher up the chain at my big tech company, the more culty they are in their beliefs. And the less they believe AI can do their specific jobs, and the less they have actually tried to use AI beyond badly summarizing documents they barely read before.

AI, as far as I can tell, has been a net negative for humans. It's made labor cheaper, answers less reliable, reduced the value we placed on creativity and professionals in general, allows mass disinformation, and mostly results in people being lazier and not learning the basics of anything. There are of course spots of brightness, but the hype bubble needs to burst so we can move on.

fullshark•7mo ago
What depresses me is all these people that are leading us with these stupid decisions re: AI will get bonuses and promotions after the bubble pops. All the useless effort getting AI everywhere will be forgotten, no one will care or remember the idiotic decisions and we will all be chasing the new new thing.

Sincerity will not win in the end. VC money and the quest for insurmountable tech driven cash flows is what drives everything. The age of software being driven by sincere engineers trying to build is dead outside niche projects.

JohnMakin•7mo ago
My belief that's kind of settling in after a few years of observation is that I absolutely believe the "hype" claim that AI is a force multiplier. However, lots of things out there are terrible and shouldn't be force multiplied (spam, phishing, scams, etc.), or, say, people who are very bad at their jobs. If the output of people like that is multiplied, it clearly can and will be very bad. I have seen this play out at a small scale already on some teams I've worked with.

For the maybe ~1-5% of people out there that have something valuable to contribute (that's my number, and I fully believe it) then I think it can be good, but those types also seem to be the most wary of it.

JohnMakin•7mo ago
> My CoreWeave analysis may seem silly to some because its value has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock.

I think the underlying belief causing people to regard things like this as "silly", or to think AI criticism is overstated, is that the market does not really make mistakes, at least not in the aggregate. So, if XYZ company's CEO says "Our product is doing ABC 300000% better and will take over the world!" and its value/revenue is also going up at the same time, that is seen as a sign that the market has validated this view, and it is infallible (to a point). Of course, this ignores that the market has historically and often been completely wrong, and that this type of reasoning is entirely circular - pay no attention to the man (marketing team) behind the curtain or think about it too hard.

radialstub•7mo ago
> market has validated this view, and it is infallible (to a point)

Irrational Exuberance. Speculative bubbles are scarily common.

bananalychee•7mo ago
What the author presents as "sincerity" comes off as injecting (his) biased views into reporting. The post devolves into a tedious series of anecdotes that ostensibly prove that "context" can reframe a story, and he argues that sincere reporting should take that context into account, which is reasonable in principle, but he doesn't seem to realize that he's only presenting context that suits his worldview and tosses out the rest. For example, he decries journalists being wrong or underemphasizing his bias by failing to account for data that proves him right in retrospect. In the same paragraph, he smears reporters for under-weighing and over-weighing soft data. That's easy to do in hindsight. My takeaway is that he undermines his own premise by demonstrating everything that can go wrong in opinionated reporting: cherry-picking, double standards, and confirmation bias.

P.S.: the most surprising thing to me about this blog post is that it went through an editor.

jszymborski•7mo ago
> comes off as injecting (his) biased views into reporting

The trouble is that not adding context is also a choice, which also reveals an author's belief on the topic, except with sufficient plausible deniability. This is why the article describes it as cowardly. It isn't sufficient to defer to people in positions of power. You may appear neutral to those who don't bother to think about it, but in truth you're just adopting the position of the person whose anecdata you've unthinkingly regurgitated. The job of a journalist is to think, apply rigorous thought, do research, and challenge the status quo.

There is no "unbiased" media, just sincere and insincere. Good will arguments and bad will arguments.

We all perceive the world some way, and it isn't always how other people perceive it. What one calls boss-coddling, another might call common sense business. As long as you do your homework, "stand on your shit", and don't just remasticate the pablum handed to you from on high, we'll be fine. Sadly, as pointed out in the article, we're sorta drowning in soggy pablum these days.

bananalychee•7mo ago
Dispensing with editorialism is a choice, yes, but that only translates to bias if it's done inconsistently. Meanwhile, while contextualizing, and to a greater extent reframing, can also be done in a fair and objective manner, doing it well and consistently is much more difficult.

I don't think that Zitron cares about objectivity nearly as much as he cares about his worldview being validated by reporters, thus the idea that failing to inject context [which promotes that worldview] is inherently insincere. Since journalism is a fairly ideologically homogeneous profession, I can understand how that might appeal to him, but I doubt he'd make that argument from the other side of the fence.