
An untidy history of AI across four books

https://hedgehogreview.com/issues/lessons-of-babel/articles/perplexity
126•ewf•4mo ago

Comments

jstanley•4mo ago
The four books discussed in the article are:

AI Snake Oil – by Arvind Narayanan and Sayash Kapoor

Nexus – by Yuval Noah Harari

Genesis – by Henry Kissinger, Craig Mundie, and Eric Schmidt

The Singularity Is Nearer – by Ray Kurzweil

ks2048•4mo ago
Henry Kissinger, noted AI expert.
homarp•4mo ago
Well, Craig Mundie and Eric Schmidt aren't much better.
KerrAvon•4mo ago
it's not complimentary about them, FWIW
Igrom•4mo ago
As criticized in the featured article, yes.
imperfect_light•4mo ago
He did so well with Theranos
DebtDeflation•4mo ago
In 20 years, Kurzweil will write another book entitled, "The Singularity is Almost Here".
FungalRaincloud•4mo ago
He's 77 years old. Let the man retire, damn.
kelseyfrog•4mo ago
Pills in; books out. That's the deal.
tim333•4mo ago
He should have merged with AI by then.
rishi_rt•4mo ago
Arvind Narayanan seems to be the only guy qualified enough to be called an expert.
dang•4mo ago
HN's own https://news.ycombinator.com/user?id=randomwalker!
randomwalker•4mo ago
Thanks! HN was part of the origin story of the book in question.

In 2018 or 2019 I saw a comment here that said that most people don't appreciate the distinction between domains with low irreducible error that benefit from fancy models with complex decision boundaries (like computer vision) and domains with high irreducible error where such models don't add much value over something simple like logistic regression.

It's an obvious-in-retrospect observation, but it made me realize that this is the source of a lot of confusion and hype about AI (such as the idea that we can use it to predict crime accurately). I gave a talk elaborating on this point, which went viral, and then led to the book with my coauthor Sayash Kapoor. More surprisingly, despite being seemingly obvious it led to a productive research agenda.
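(To make the distinction concrete, here is a minimal sketch on synthetic data. The two domains and model choices are illustrative stand-ins of my own construction, not anything from the book: scikit-learn's make_moons plays the "complex boundary, low noise" domain, and a weak linear signal buried in noise plays the "high irreducible error" domain.)

  # Toy comparison: a flexible model vs. logistic regression on a
  # low-noise domain with a complex boundary, and on a high-noise domain.
  import numpy as np
  from sklearn.datasets import make_moons
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score

  rng = np.random.default_rng(0)

  # Domain 1: complex decision boundary, little noise (vision-like).
  X1, y1 = make_moons(n_samples=2000, noise=0.1, random_state=0)

  # Domain 2: the label is only weakly determined by the features
  # (social-prediction-like); most of the variance is pure noise.
  X2 = rng.normal(size=(2000, 5))
  y2 = (X2[:, 0] + rng.normal(scale=3.0, size=2000) > 0).astype(int)

  for name, X, y in [("low irreducible error", X1, y1),
                     ("high irreducible error", X2, y2)]:
      simple = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
      fancy = cross_val_score(
          RandomForestClassifier(n_estimators=200, random_state=0),
          X, y, cv=5).mean()
      print(f"{name}: logistic={simple:.2f}  forest={fancy:.2f}")

On the first domain the forest's flexible boundary pays off; on the second, both models run into roughly the same ceiling, because the remaining error lives in the noise rather than in the model.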

While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.

CamperBob2•4mo ago
It's hard to miss the similarity between your book's title and Cliff Stoll's 1995 Silicon Snake Oil, an indictment of the general concept of the "information superhighway" that was starting to resonate with the public. Stoll is a really smart guy, but that particular book hasn't held up too well:

   "Few aspects of daily life require computers... They're
   irrelevant to cooking, driving, visiting, negotiating,
   eating, hiking, dancing, speaking, and gossiping. You
   don't need a computer to... recite a poem or say a
   prayer." Computers can't, Stoll claims, provide a richer
   or better life.
(excerpted from the Amazon summary at https://www.amazon.com/Silicon-Snake-Oil-Thoughts-Informatio...).

So, was this something that you guys were conscious of when you chose your own book's title? How well have you future-proofed your central thesis?

randomwalker•4mo ago
Yes, we're aware! Fortunately our book is not a broad indictment of AI :) And none of our claims are premised on tasks people can do remaining out of reach for AI. More here: https://www.normaltech.ai/p/faq-about-the-book-and-our-writi...

Our more recent essay (and ongoing book project), "AI as Normal Technology", is about our vision of AI impacts over a longer timescale than "AI Snake Oil" looks at: https://www.normaltech.ai/p/ai-as-normal-technology

I would categorize our views as techno-optimist, but people understand that term in many different ways, so you be the judge.

dang•4mo ago
> While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.

Sounds like a job for the community! Maybe someone will track it down...

Edit: I tried something like https://hn.algolia.com/?dateEnd=1577836800&dateRange=custom&... (note the custom date range) but didn't find anything that quite matches your description.

z2•4mo ago
https://news.ycombinator.com/item?id=14944613

This was from 2017, and it made such an impression on me that I could find it on my first search attempt!

ontouchstart•4mo ago
“… machine learning everything that focuses on dealing with problems with a complex structure and low noise, and statistics everything that focuses on dealing with problems with a large amount of noise.”
ontouchstart•4mo ago
https://news.ycombinator.com/item?id=20010182
NooneAtAll3•4mo ago
what does irreducible error mean?
mikepalmer•4mo ago
And how do you know it's irreducible? In the sense of knowing there's no short program to describe it (Kolmogorov style).
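(For reference, since two commenters asked: in the textbook setting y = f(x) + ε with E[ε] = 0 and Var(ε) = σ², the expected squared error of any fitted model f̂ decomposes as below. This is standard statistical-learning material, not something from the thread.)

  % Bias-variance decomposition; the sigma^2 term is the irreducible part.
  \mathbb{E}\big[(y - \hat{f}(x))^2\big]
    = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
    + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
    + \underbrace{\sigma^2}_{\text{irreducible error}}

No model, however fancy, can push expected error below σ²: it is the noise left over once x is fixed. "Irreducible" is always relative to the available features, which is the thrust of mikepalmer's question; in practice you can't prove the residual noise has no structure, only that it has none your features capture.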
eco•4mo ago
That's one of the things that drives me nuts about all the public discourse about AI and our future. The vast majority of words written/spoken on the subject are by generic "thought leaders" who really have no greater understanding of AI than anyone else who uses it regularly.
libraryofbabel•4mo ago
And the article agrees with you, and is pretty scathing about all the books except Narayanan’s (which is also the only book with a balanced anti-hype perspective):

> A puzzling characteristic of many AI prophets is their unfamiliarity with the technology itself

> After reading these books, I began to question whether “hype” is a sufficient term for describing an uncoordinated yet global campaign of obfuscation and manipulation advanced by many Silicon Valley leaders, researchers, and journalists

mmaia•4mo ago
A characteristic of the field since the beginning. Reading What Computers Can't Do in college (early 2000s) was an important contrast for me.

> A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.

> Dreyfus' last paper detailed the ongoing history of the "first step fallacy", where AI researchers tend to wildly extrapolate initial success as promising, perhaps even guaranteeing, wild future successes.

https://en.wikipedia.org/wiki/Hubert_Dreyfus's_views_on_arti...

red75prime•4mo ago
Expert futurologist? Anyway. The article has very little substance. "See those ridiculous predictions," mostly. If there's anything about fundamental or practical limitations of the current machine learning approaches (deep learning, transformers, RL, and so on), I haven't seen it.
kouru225•4mo ago
Ok, so what is this publication? Apparently they've been around since the 90s, but I've never heard of them. Their title and its reference suggest a very strong philosophical stance about something, and I imagine that gives them political leanings, but I can't tell what those leanings are.
FungalRaincloud•4mo ago
The Hedgehog Review? Yes, they've been around since 1999, and publish a few times a year. But I'm not sure why you're leaping to a strong political leaning. They're an academic journal published by the University of Virginia. I don't religiously follow them, but I've been cursorily aware of them for a while, and I don't think I've ever considered them to lean one way or another when reading their publications.
jayers•4mo ago
They don't have any political leanings but they do have a philosophical project. If you dig into the site a little you'll find that they're published by the Institute for Advanced Studies in Culture (housed at UVA) and IASC exists to promote research into the contradictions of modernity, by examining how culture manifests itself in metaphor, symbol, ideals, principles, institutions, and material objects [1]. I've been a reader of THR for a few years and I'd say generally they publish articles that promote moral realism and humanism. They're sort of metaphysically open-minded.

[1]: https://iasculture.org/about/vision

PeterStuer•4mo ago
Read "brainmakers", even though it completely ignores Europe's and the East's significant contributions to AI history https://www.newquistbooks.com/brainmakers/brainmakers.html
bmau5•4mo ago
Just finished this and I enjoyed it, though as you mention it is very America-centric.
intelkishan•4mo ago
Have you read 'The Quest for Artificial Intelligence' by Nils J. Nilsson? I think it gives a good overview of the development of AI up to the 2000s.

Link: https://ai.stanford.edu/~nilsson/QAI/qai.pdf

RyanShook•4mo ago
Just finished reading The Thinking Machine. Highly recommend it if you're interested in how Nvidia became the most valuable company on earth: https://amzn.to/42z8JPF
coolandsmartrr•4mo ago
How does that compare to "The Nvidia Way" by Tae Kim?
adastra22•4mo ago
“Machines Who Think” is conspicuously missing from the list.
bbor•4mo ago
Apologies in advance for the passionate critique, but I just can't help but attack what I see as a faux-intellectual, misleading piece. It starts with a notoriously biased pop-science book that assumes its conclusions before any investigation begins ("AI is bad" hidden behind a thin veneer of "oh, but not good AI"), and just goes downhill from there. It's honestly shocking that the brief discussion of that book is intended in a laudatory manner:

  A big part of the problem, the authors maintain, is confusion about the meaning of artificial intelligence itself, a confusion that sustains and originates in the present AI commercial boom.
This is just blatantly untrue to anyone who bothered to learn the names skipped over with a brief "once upon a time, there was symbolic AI" -- from Turing to Minsky, Neumann to Pearl, Shannon to McCarthy, on and on and on. This incredible article from "Quote Investigator" lays out the situation well, going all the way back to 1971: https://quoteinvestigator.com/2024/06/20/not-ai/ Personally, my favorite phrasing of this sentiment is the one preferred by Hofstadter: "AI is whatever hasn't been done yet."

  Narayanan and Kapoor are particularly worried about the conflation of generative AI, which produces content through probabilistic response to human input, and predictive AI, which is purported to accurately forecast outcomes in the world, whether those be the success of a job candidate or the likelihood of a civil war. While products employing generative AI are “immature, unreliable, and prone to misuse,” Narayanan and Kapoor write, those using predictive AI “not only [do] not work today but will likely never work.”
1. That distinction is vacuous at best. Even if we exclude all symbolic AI (pure and hybridized) from the term "AI", literally all machine learning models produce probabilistic responses to inputs -- that's why it's called the "inference" step! (See the sketch after this list.) This kind of false dichotomy is employed regularly by passionate amateurs on bsky and Reddit because it lets them hate bad AI while leaving a vague carveout for things they can't argue against, like cancer detection systems, but without any real basis it's more obfuscation than distinction. God forbid any of these people convince the EU parliament to pass laws based on this idea...

2. The idea that using ML to predict outcomes "does not work" is so obviously wrong that I don't really feel the need to argue against it. Perhaps weather models, content moderation systems, NLP analyzers, spatial modelers, and the vast universe of other examples are all not really AI in the first place, in their book? In that case, what is "predictive AI"? Just a few cherry-picked examples of local governments trying to cheap out on bureaucratic processes, I guess?
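(The sketch promised in point 1, a toy construction of mine rather than anything from the book: a "predictive" classifier and a "generative" bigram model both compute a probability distribution conditioned on their input; the main difference is whether you take the argmax or sample from it.)

  # Both models below return P(output | input); the data is made up.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)

  # "Predictive AI": a classifier yields a distribution over labels,
  # not a verdict.
  X = rng.normal(size=(500, 4))
  y = (X[:, 0] + rng.normal(scale=2.0, size=500) > 0).astype(int)
  clf = LogisticRegression().fit(X, y)
  print(clf.predict_proba(X[:1]))  # e.g. [[0.55 0.45]]

  # "Generative AI" in miniature: a bigram model yields a distribution
  # over next words, and we sample from it instead of taking the argmax.
  corpus = "the cat sat on the mat the dog sat on the rug".split()
  nxt = {}
  for a, b in zip(corpus, corpus[1:]):
      nxt.setdefault(a, []).append(b)
  word, out = "the", ["the"]
  for _ in range(5):
      if word not in nxt:
          break  # "rug" has no recorded successor
      word = str(rng.choice(nxt[word]))
      out.append(word)
  print(" ".join(out))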

After this brief intro, we arrive at the meat of the article. Picking on a Harari book seems like beating a dead horse, but y'know, sometimes that's fun! Still, the specific criticisms fall flat:

  [Harari] offers the example of “present-day chess-playing AI” that are “taught nothing except the basic rules of the game.” Never mind that Stockfish, currently the world’s most successful chess engine, is programmed with several human game strategies
That's just blatantly untrue, and even when it was true (pre-2023[1]), it's a misleading anecdote that obscures an overwhelming trend.

  Harari fails to explain that while machine-learning models assemble a template of solutions to a specific problem (e.g., the best possible move in a given chess position), the framework in which those problems and solutions are defined is entirely constructed by engineers.
That's an absurd way to describe modern deep learning, where the Bitter Lesson[2] is cited as gospel. Yes, technically all neural network topologies are laid out by humans at some level, but just saying that is another misleading snippet of the truth at best; even the author later acknowledges "the opacity of machine-learning tools is a genuine technical problem". How can both things be the case?

  Harari bungles straightforward issues and ideas concerning artificial intelligence... But Harari, attempting to argue that the alignment problem is a timeless conundrum, applies [the alignment problem] to historical events that did not materially involve artificial intelligence
Yes, he's applying the concept in a broader way than usual. That doesn't make it invalid, and I'm 100% sure that even someone like Harari is well aware of what he's doing there. Describing this as "bungling straightforward ideas" rather than "saying something I disagree with" is, well... bungled!

Finally, there's the criticism about the COMPAS system that ProPublica uncovered (the true GOATs in any story). But what exactly is the criticism there? "He was critical, yes, but not critical in exactly the way I prefer"? That applies to pretty much every book ever in some way or another...

I'll skip going through the other two as closely--because I'm on the anti-markdown site, where walls of text are the only option--but it's all just the same tired assumptions wrapped in a condescending attitude. The writers of Genesis are far from experts in AI, but regardless, the criticisms of both them and Kurzweil come down to variations on one theme: "these people think AI is a big deal, which is obviously wrong, because it's not". I don't think you need me to tell you that this is not a solid argument.

I mean... Ugh. Criticizing the idea of a technological singularity as an "imaginary event" that "consists almost entirely in extrapolation" is again technically true, but the implied pejorative usage of these terms is completely unfounded; it is no more imaginary than climate change, nuclear war, or the simple empirical assumption that the sun will rise again tomorrow.

It's especially tiring to read this when we're literally in the middle of the singularity right now, which is quite obvious if you recall the real meaning of the term ("a point where our models must be discarded and a new reality rules"[3]), rather than the somewhat bungled description here, which relates more to Intelligence Explosions ("sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version, and that this process could continue to the point of vastly exceeding human intelligence"[4]).

The only people who still think the future of AI('s effect on humanity) is predictable post-2022 are the ones who are dogmatically certain that computers as we know them will always be crappy tools at best. I implore you, privileged reader: do not fall into this comforting trap. Face the future with us, despite the terror. Posterity is counting on us.

[1] https://github.com/official-stockfish/Stockfish/commit/af110...

[2] http://www.incompleteideas.net/IncIdeas/BitterLesson.html

[3] https://edoras.sdsu.edu/~vinge/misc/singularity.html

[4] https://intelligence.org/files/IEM.pdf

YeGoblynQueenne•4mo ago
>> That's an absurd way to describe modern deep learning, where the Bitter Lesson[2] is cited as gospel. Yes, technically all neural network topologies are laid out by humans at some level, but just saying that is another misleading snippet of the truth at best; even the author later acknowledges "the opacity of machine-learning tools is a genuine technical problem". How can both things be the case?

Sorry, why is it misleading to recognise that all neural networks are created manually by human engineers?

bbminner•4mo ago
A real history of AI should start with Pierre-Simon Laplace developing a closed-form solution to least squares in the 18th century :)
NoMoreNicksLeft•4mo ago
Meh... I've been trying to pounce on HN posts that review books (or even mention them), as it's difficult to find titles to download. I jumped into this one (it has four!) only to discover that I've got two of them already.
YeGoblynQueenne•4mo ago
Sounds like another "history of AI" that only really starts in 2011/12.