frontpage.

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
1•schwentkerr•3m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
1•blenderob•4m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
1•gmays•5m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
1•gurjeet•5m ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•7m ago•0 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•7m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•9m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•10m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•10m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•11m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•12m ago•1 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•12m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•13m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•13m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•13m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•13m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•15m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•16m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•16m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•16m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•17m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•17m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•17m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•18m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•19m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•21m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•22m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•22m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•23m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•24m ago•1 comments

The Case That A.I. Is Thinking

https://www.newyorker.com/newsletter/the-daily/is-ai-amazing-or-are-we-simple
5•jsomers•2mo ago

Comments

jsomers•2mo ago
This was posted when it came out here: https://news.ycombinator.com/item?id=45802029. It generated a lot of comments -- more heat than light, possibly -- and I wonder if instead of just taking the title as a jumping-off point, folks could engage with the meat of the article itself.

(I wrote the article. I'm a longtime HN user. I find that threads here lately have gotten very jumpy-offy -- commenters use a specific article about e.g. icebergs melting to have a conversation about climate change and climate change denial, instead of to talk about the merits of the particular article -- and I was hoping to nudge folks to read the full piece, then comment on specific parts of it. I'm not sure that'll work but figured it's worth a try!)

emtel•2mo ago
I think the burden to show that AI is not thinking lies on the skeptics. There are two broad categories of arguments that skeptics use to show this, and they are both pretty bad.

The first category is what I'd call "the simplifying metaphor", in which it is claimed that AIs are actually "just" something very simple, and therefore do not think.

- "AIs just pick the most likely next token"

- "AI is just a blurry jpeg of the web" (Ted Chiang)

- "AIs are just stochastic parrots"

The problem with all of these is that "just" is doing an awful lot of work. For instance, if AIs "just" pick the most likely next token, it is going to matter a lot _how_ they do that. And one way they could do that is... by thinking.

There are many different stochastic processes that you could use to try to build a chat bot. LLMs are the only one so far that actually works well, and any serious critique has to explain why LLMs work better than (say) Markov chains despite "just" doing the same fundamental thing.
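To make that concrete: a bigram Markov chain also "just picks the most likely next token", which shows how little that description constrains the underlying mechanism. A minimal sketch (the function names and toy corpus are my own illustration, not anything from the article):

```python
from collections import defaultdict

def train_bigram(tokens):
    """Record every observed successor of each token -- a maximally
    simple 'next token' model."""
    model = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def most_likely_next(model, token):
    """Return the most frequent successor -- literally 'picking the
    most likely next token', with no thinking anywhere in sight."""
    followers = model.get(token)
    if not followers:
        return None
    return max(set(followers), key=followers.count)

corpus = "the cat sat on the mat the cat ate the rat".split()
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # -> "cat" ("cat" follows "the" twice)
```

Both this and an LLM expose the same interface (context in, next-token distribution out); the entire difference, and the entire debate, lives in how that conditional distribution is computed.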

The second category of argument is "AIs are dumb". Here, skeptics claim that because AIs fail at task X, they aren't thinking, because any agent capable of thought would be able to do task X. For instance, AIs hallucinate, or AIs fail to follow explicit instructions, and so on.

But this line of argument is also very poor, because we clearly don't want to define "thinking" as "a process by which an agent avoids all mistakes". That would exclude humans as well. It seems we need a theory that splits the universe of intellectual tasks into "those that require thinking" and "those that don't", and then we need to show that AI is good only at the latter, while humans are good at both. But unless I missed it no such theory is forthcoming.

sema4hacker•2mo ago
"Splitting the universe of intellectual tasks" would be a gigantic job. Various AI implementations already fail at so many tasks it seems reasonable for skeptics to claim the AI is not yet thinking, and the burden is on the implementers to fix that.
emtel•2mo ago
> "Splitting the universe of intellectual tasks" would be a gigantic job

What I mean is a theory that allows you to categorize any given task according to whether it requires "thinking" or not, not literally cataloging all conceivable tasks.

_wire_•2mo ago
All you need to make your case is an intelligible definition of thought as an activity.

So far your claim is trapped behind the observation that when an AI produces an output, it looks like thought to you.

In the vein of Searle's arguments about the appearance of cognition, and of your premise, consider the mechanics of consulting a book with respect to the mechanics (so to speak) of solicited thought:

There's something you want to know, so you pick up a book and prompt the TOC or index and it returns a page of stored thought. Depending completely on your judgment, the thought retrieved is deemed appropriate and useful.

No one argues that books think.

Explain how interacting with an LLM to retrieve thought stored in its matrix is distinct from consulting a book in a manner that manifests thought.

If the distinction is only in complexity of internal functioning of the device's retrieval mechanism, then explain precisely what about the mechanism of the LLM brings its functioning into the realm of thought that a book doesn't.

To do that you'll first need to formulate a definition of thinking that's about more than retrieval of stored thoughts.

Or are you truly saying that your 'knowing thinking when you see it' is sufficient for a scientific discourse on the matter?