frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
111•theblazehen•2d ago•29 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
658•klaussilveira•13h ago•193 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
947•xnx•19h ago•550 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•39 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
49•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
14•kaonwarb•3d ago•19 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•116 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
329•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•20h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
487•todsacerdoti•21h ago•241 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
287•eljojo•16h ago•168 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
410•lstoll•20h ago•278 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
22•jesperordrup•4h ago•13 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
60•kmm•5d ago•5 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
89•quibono•4d ago•21 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
7•speckx•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
253•i5heu•16h ago•195 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
15•bikenaga•3d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1065•cdrnsf•23h ago•444 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
148•vmatsiiako•19h ago•67 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
181•limoce•3d ago•97 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
145•SerCe•10h ago•134 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
31•gmays•9h ago•12 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Minds, brains, and programs (1980) [pdf]

https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf
103•measurablefunc•3mo ago

Comments

jmkni•3mo ago
Long read, I'm sure it's fascinating, will get through it in time

Just Googling the author, he died last month sadly

measurablefunc•3mo ago
It's the responses & counter-responses that are long. The actual article by Searle is only a few pages.
BadThink6655321•3mo ago
A ridiculous argument. Turing machines don't know anything about the program they are executing. In fact, Turing machines don't "know" anything. Turing machines don't know how to fly a plane, translate a language, or play chess. The program does. And Searle puts the man in the room in the place of the Turing machine.
wk_end•3mo ago
So what, in the analogy, would be the program? Surely it's not the printed rules, so I think you're making the "systems reply" - that the program that knows Chinese is some sort of metaphysical "system" that arises from the man using the rules - which is the first thing Searle tries to rebut.

> let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

In other words, even if you put the man in place of everything, there's still a gap between mechanically manipulating symbols and actual understanding.

BadThink6655321•3mo ago
Only because "actual understanding" is ambiguously defined. Meaning is an association of A with B. Our brains have a large associative array with the symbols for the sound "dog" is associated with the image of "dog' which is associated with the behavior of "dog" which is associated with the feel of "dog", ... We associate the symbols for the word "hamburger" with the symbols for the taste of "hamburger", with ... We undersand something when our past associations match current inputs and can predict furture inputs.
siglesias•3mo ago
"Actual understanding" means you have a grounding for the word down to conscious experience and you have a sense of certainty about its associations. I don't understand "sweetness" because I competently use the word "sweet." I understand sweetness because I have a sense of it all the way down to the experience of sweetness AND the natural positive associations and feelings I have with it. There HAS to be some distinction between understanding all the way down to sensation and a competent or convincing deployment of that symbol without those sensations. If we think about how we "train" AI to "understand" sweetness, we're basically telling it when and when not to use that symbol in the context of other symbols (or visual inputs). We don't do this when we teach a child that word. The child has an inner experience he can associate with other tastes.
mannykannot•3mo ago
The irony here is that an LLM performs the very thing that Searle has the human operator do. If it is the sort of interaction that does not need intelligence, then no conclusion about the feasibility of AGI can be drawn from contemplating it. Searle’s arguments have been overtaken by technology.
siglesias•3mo ago
Can you expand on this? The thought experiment is just about showing that there is more to having a mind than having a program. It’s not an argument about the capabilities of LLMs or AGI. Though it’s worth noting that behavioral criteria continue to lead people to overestimate the capabilities or promise of AI.
mannykannot•3mo ago
LLMs are capable of performing the task specified for the Chinese room over a wide range of complex topics and for a considerable length of time. While it is true that their productions are wrong or ill-conceived more often than one would expect from a well-informed human, and sometimes look like the work of a rather stupid one, the burden now rests on Searle's successors to show that every such interaction is purely syntactic.
bonobo•3mo ago
You mentioned experience, but it's not clear to me if you mean that it's a requirement for "actual understanding." Is this what you're saying? If so, does that mean a male gynecologist doesn't have an "actual understanding" of menstrual cycles and menopause?

I think about astronomers and the things they know about stars that are impossible to experience even from afar, like sizes and temperatures. No one has ever seen a black hole with their own eyes, but they read a lot about it, collected data, made calculations, and now they can have meaningful discussions with their peers and come to new conclusions from "processing and correlating" new data with all this information in their minds. That's "actual understanding" to me.

One could say they are experiencing this information exchange, but I'd argue we can say the same about the translator in the Chinese room. He does not have the same understanding of Chinese as we humans do, associating words with memories and feelings and other human experiences, but he does know that a given symbol evokes the use of other specific symbols. Some sequences require the usage of lots of symbols, some are somewhat ambiguous, and some require him to fetch a symbol that he hasn't used in a long time, maybe doesn't even know where he stored it. To me this looks a lot like the processes that happen inside our minds, with the exception that his form of "understanding" and the experiences that this evokes in him are completely alien to us. Just like an AGI would possibly be.

I'm not comfortable looking at the translator's point of view as if he's analogous to a mind. To me he's the correlator, the process inside our minds that makes these associations. This is not us; it's not under our conscious control; from our perspective it just happens, and we know today it's a result of our neural networks. We emerge somehow from this process. Similarly, it seems to me that the experience of knowing Chinese belongs to the whole room, not the guy handling symbols. It's a weird conclusion, and I still don't know what to think of it.

siglesias•3mo ago
When I say "experience," I mean a sufficient grounding of certainty about what a word means, which includes how it's used, how it relates to the world that I'm experiencing, but also the mood or valence the word carries. I can't feel your pain, or maybe you've been to a country that I haven't been to and you're conveying that experience to me. Maybe you've been to outer space. I'm not saying to understand you I need to literally have had the exact experience as you, but I should be able to sufficiently relate to the words you are saying in order to understand what you are saying. If I can't sufficiently relate, I say I don't understand. You can see how this differs from what an AI is doing. The AI is drawing on relationships between symbols, but it doesn't really have a self, or experience, etc etc.

The process of fetching symbols, as you put it, doesn't feel at all like what I do when somebody asks me what it was like to listen to the Beatles for the first time and I form a description.

mannykannot•3mo ago
People are doing things they personally do not understand, just by following the rules, all the time. One does not need to understand why celestial navigation works in order to do it, for example. Heck, most kids can learn arithmetic (and perform it in their heads) without being able to explain why it works, and many (including their teachers, sometimes) never achieve that understanding. Searle’s failure to recognize this very real possibility amounts to tacit question-begging.
TheOtherHobbes•3mo ago
Yes, it's a wrong-end-of-the-telescope kind of answer.

A human simulates a Turing machine to do... something. The human is acting mechanically. So what?

If there's any meaning, it exists outside the machine and the human simulating it.

You need another human to understand the results.

All Searle has done is distract everyone from whatever is going on inside that other human.

rcxdude•3mo ago
In that case you've basically just created a split-brain situation (I mean the actual phenomenon of someone who has had the main connection between the two hemispheres of the brain severed). There's one system, which is the man plus the rules he has internalized, and there's what the man himself consciously understands, and there's no reason the two are necessarily communicating in some deeper way, in much the same way that a split-brain patient may be able to point to something they see in one side of their vision when asked but be unable to say what it is.

(Also, IMO, the question of whether the program understands Chinese mainly depends on whether you would describe an unconscious person as understanding anything.)

I also can't help but think of this sketch when this topic comes up (even though, importantly, it is not quite the same thing): https://www.youtube.com/watch?v=6vgoEhsJORU

glyco•3mo ago
You and Searle both seem to not understand a simple, obvious fact about the world, which is that (inhomogeneous) things don't have the same thing inside. A chicken pie, for example, doesn't have any chicken pie inside. There's chicken inside, but that's not chicken pie. There's sauce, vegetables and pastry, but those aren't chicken pie either. All these things together still may not make a chicken pie. The 'chickenpieness' of the pie is an additional fact, not derivable from any facts about its components.

As with pie, so with 'understanding'. A system which understands can be expected to not contain anything which understands. So if you find a system which contains nothing which understands, this tells you nothing about whether the system understands[0].

Somehow both you and Searle have managed to find this simple fact about pie to be 'the grip of an ideology' and 'metaphysical'. But it really isn't.

[0] And vice-versa, as in Searle's pointlessly overcomplicated example of a system which understands Chinese containing one which doesn't containing one which does.

gradschool•3mo ago
tl;dr:

If a computer could have an intelligent conversation, then a person could manually execute the same program to the same effect, and since that person could do so without understanding the conversation, computers aren't sentient.

Analogously, some day I might be on life support. The life support machines won't understand what I'm saying. Therefore I won't mean it.

31337Logic•3mo ago
Wow. That was remarkably way off base.
rcxdude•3mo ago
I think it gets to the heart of the matter quite succinctly, but the more I see discussions of this, the more I think there are two viewpoints that just don't seem to overlap (as in, people seem to find the Chinese room either obviously true or obviously false, and there's not really an argument or elaboration that will change their minds).
generuso•3mo ago
It all started with ELIZA. Although Weizenbaum, the author of the chatbot, always emphasized that the program was performing a rather simple manipulation of the input, mostly based on pattern matching and rephrasing, the popular press completely overhyped the capabilities of the program, with some serious articles debating whether it would be a good substitute for psychiatrists, etc.
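
To give a sense of how little machinery that involved, here is a rough illustrative sketch of the mechanism (pattern matching plus pronoun-swapping rephrasing); the patterns and replies below are invented for this example, not Weizenbaum's actual DOCTOR script:

  import re

  # Illustrative ELIZA-style responder: match a pattern, swap pronouns,
  # and echo the input back as a question. The rules are made up for
  # illustration; the real script was larger but worked the same way.
  PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

  RULES = [
      (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
      (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
      (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
  ]

  def reflect(fragment):
      # Swap first-person words for second-person ones.
      return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in fragment.split())

  def respond(utterance):
      for pattern, template in RULES:
          match = pattern.search(utterance)
          if match:
              return template.format(reflect(match.group(1)))
      return "Please go on."  # default when nothing matches

  print(respond("I feel anxious about my thesis"))
  # -> Why do you feel anxious about your thesis?

Nothing in it knows anything about anxiety or theses; it only shuffles strings, which is exactly why reading "understanding" into it was such an overreach.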

So, many people, including Searle, wanted to push back on reading too much into what the program was doing. This was a completely justified reaction -- ELIZA simply lacked the complexity which is presumably required to implement anything resembling flexible understanding of conversation.

That was the setting. In his original (in)famous article, Searle started with a great question, which went something like: "What is required for a machine to understand anything?"

Unfortunately, instead of trying to sketch out what might be required for understanding, and what kinds of machines would have such facilities (which of course is very hard even now), he went into dazzling the readers with a "shocking" but rather irrelevant story. This is how stage magicians operate -- they distract a member of the audience with some glaring nonsense, while stuffing their pockets with pigeons and handkerchiefs. That is what Searle did in his article -- "if a Turing Machine were implemented by a living person, the person would not understand a bit of the program that they were running! Oh my God! So shocking!" And yet this distracted just about everyone from the original question. Even now philosophers have two hundred different types of answers to Searle's article!

Although one could and should have explained that ELIZA could not "think" or "understand" -- which was Searle's original motivation -- this of course doesn't imply any kind of fundamental principle that no machine could ever think or understand. After all, many people agree that biological brains are extremely complex, but are nevertheless "machines" governed by ordinary physics.

Searle himself was rather evasive regarding what exactly he wanted to say in this regard -- from what I understand, his position has evolved considerably over the years in response to criticism, but he avoided stating this clearly. In later years he was willing to admit that brains were machines, and that such machines could think and understand, but somehow he still believed that man-made computers could never implement a virtual brain.

fellowniusmonk•3mo ago
Meaning bootstrapped consciousness; just ask DNA and RNA.

I don't get any of these anthropocentric arguments. Meaning predates humanity and consciousness; that's what DNA is. Meaning primitives are just state changes, the same thing as physical primitives.

Syntactic meaning exists even without an interpreter, in the same way physical "rock" structures existed before there were observers; it just picks up causal leverage when there is one.

Only a stateless universe would have no meaning. Nothing doesn't exist, meaninglessness doesn't exist; these are just abstractions we've invented.

Call it the logos if that's what you need, call it field perturbations; reality has just been traveling up the meaning-complexity chain, but complex meaning is just the structural arrangement of meaning simples.

Stars emit photons, humans emit complex meaning. Maybe we'll be part of the causal chain that solves entropy; until then we are the only empirically observed, random-walk write heads of maximally complex meaning in the universe.

We are super rare and special as far as we've empirically observed, but that doesn't mean we get our own weird metaphysical (if that even exists) carve-out.

marshfarm•3mo ago
There's much more meaning than can be loaded into statements, thoughts, etc. And conscious will is a post-hoc after effect.

Any computer has far less access to the meaning load we experience, since we don't compute thoughts: thoughts aren't about things, there is no content to thoughts, and there are no references, representations, symbols, grammars, or words in brains.

Searle is only at the beginning of this refutation of computers; we're much further along now.

It's just actions, syntax and space. Meaning is both an illusion and fantastically exponential. That contradiction has to be continually made correlational.

fellowniusmonk•3mo ago
Meaning is an illusion? That's absurdly wrong; it's a performative contradiction to even say such a thing. You might not like semantic meaning, but it, like information, physically exists, and even if you're a solipsist you can't deny state change, and state change is a meaning primitive; meaning primitives are one thing that must exist.

This isn't woo, this is just empirical observation, and no one is capable of credibly denying state change.

marshfarm•3mo ago
The idea of meaning is contradictory; it's not strictly an illusion. There's a huge difference. State changes mean differences; they don't ensure meaning. This is an obvious criterion. We have tasks and the demands are variable. We can assign meaning, but where is the credibility? Is it ever objectively understood? No. That's contradictory.

You have to look at mental events and grasp not only what they are, both material and process, but how they come to happen; they're both prior and post-hoc, etc.

I study meaning in the brain. We are not sure if it exists, and the meaning we see in events and tasks comes at a massive load. Any one event can have hundreds, even thousands, of meaningful changes to self, environment, and others. That's contradictory. Searle is not even scratching the surface of the problem.

https://arxiv.org/vc/arxiv/papers/1811/1811.06825v2.pdf

https://www.frontiersin.org/journals/psychology/articles/10....

https://pubmed.ncbi.nlm.nih.gov/39282373/

https://aeon.co/essays/your-brain-does-not-process-informati...

fellowniusmonk•3mo ago
What does ensure meaning? Interpretation?

If that's your position, that's where we disagree, state changes in isolation and state changes in sequence are all meaning.

State change is the primitive of meaning, starting at the fermion. There is no such thing as meaninglessness, just uncomplex, non-cohered meaning primitives; the moment they start to be associated through natural processes you have increasingly complex meaning sequences and structures through coherence.

We move up the meaning ladder, high entropy meaning (rng) is decohered primitives, low entropy meaning is maximally cohered meaning like human speech or dna.

Meaning interactions (quantum field interactions) create particles and information. Meaning is upstream, not downstream.

Now people hate when you point out that semantic/structural meaning is meaning, but it's the only non-fuzzy definition I've ever seen, and with complexity measures we can reproducibly examine emissions objectively for semantic complexity across all emitter types.

The reason everyone has such crappy and contradictory interpretations of meaning is that they are trying to turn a primitive into something that is derived or emergent, and it's simply not; you can observe the chain of low to high complexity without having to look at human structures.

This meaning predates consciousness; even if you are a dualist you have to recognize that DNA and RNA bootstrap each "brain receiver" structure.

Meaning exists without an interpreter, the reason so many people get caught up in the definition is because they can't let go of anthropocentric views of meaning, meaning comes before consciousness, logic, rationality, in the same way the atom comes before the arrangement of atoms rockwise.

Even RNG, the RNG emissions from stars, let's say, which is maximally decohered meaning, has been made meaningful to the point of extreme utility by humans via encryption.

Now, you may be a dualist, and that's fine; the physical reality of state change doesn't preclude dualism. It sets a physical empirical floor, not an interpretive ceiling.

Even some very odd complaints about human interpretation, like still images being interpreted as movement somehow being a problem: in the viewing frame you are 100% seeing state changes, and all you need for meaning are state changes; each frame is still, but the photon stream carried to our eyeballs is varying, and that's all you need.

Anyway, you make meaning. You are a unique write head in the generation of meaning; we can't ex ante calculate how important you are for our causal survival because the future stretches out for an indeterminate time, and we haven't yet ruled out that entropy can be reversed in some sense, so you are an important meaning generator that needs to be preserved. Our very species, the very universe, may depend on the meaning you create in the network. (Is reversing entropy even locally likely? I doubt it, but we haven't ruled it out yet; it's still early days.)

marshfarm•3mo ago
Without being a dualist, we can say from neurobiology, ecological psych, coord dynamics, neural reuse that meaning isn't simply upstream.

Technically it can't be, because the language problem is post-hoc.

You're an engineer so you have a synthetic view of meaning, but it has nothing to do with intelligence. I'd study how you gained that view of meaning.

A meaning ladder is arbitrary; quantum field dynamics can easily be perceived as Darwinism, and human speech isn't meaningful: it's external and arbitrary and suffers from the conduit-metaphor paradox. The meaning is again derived from the actual tasks; scientifically, no speech act ever coheres the exact same mental state or action-syntax.

Sorry you're using a synthetic notion of meaning that's post-hoc. Doesn't hold in terms of intelligence. Not even Barbour (who sees storytelling in particles) et al would assign meaning to Fermions or other state changes. It's good science fiction, but it's not science.

In neuroscience we call isolated upstream meaning "wax fruit." You can see it is fruit, but bite into it, the semantic is tasteless (in many dimensions).

fellowniusmonk•3mo ago
[flagged]
Marshferm•3mo ago
Scientists hacking engineers who pretend meaning is in fermions is one of the great experiences here. Don't sell it short, engineer. Science is coming to overtake binary. And if you ever get to sign a paper for a presidential session at a top-level conference, you'll know what it's like to practice science and not debate ideas merely in social media.
31337Logic•3mo ago
RIP John Searle, and thanks for all the fish.
musicale•3mo ago
I've always thought that Searle's argument relied on misleading and/or tautological definitions, and I liked Nils Nilsson's rebuttal:

"For the purposes that Searle has in mind, it is difficult to maintain a useful distinction between programs that multiply and programs that simulate programs that multiply. If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought."

I also find Searle's rebuttal to the systems reply to be unconvincing:

> If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

Perhaps the overall argument is both more and less convincing in the age of LLMs, which are very good at translation and other tasks but still make seemingly obvious mistakes. I wonder (though I doubt it) whether Searle might have been convinced if, by following the instructions, the operator of the room ended up creating, among other recognizable and tangible artifacts, an accurate scale model of the city of Beijing and an account of its history, and referred to both in answering questions. (I might call this the "world model" reply.)

In any case, I'm sad that Prof. Searle is no longer with us to argue with.

https://news.ycombinator.com/item?id=45563627

tug2024•3mo ago
Searle’s argument is like a captain claiming his ship isn’t sailing because the compass is inside a cabin, not on deck.

Nilsson points out: if the vessel moves as if it’s cutting through waves, most sailors would say it’s sailing. Even Searle’s “deep thought” may just be a convincing simulation, but the wake is real enough.

The systems reply? Claiming the ship can’t navigate because the captain doesn’t understand the ropes feels like denying the ocean exists while staring at the harbor.

In the age of LLMs, the seas are charted better than ever, yet storms of obvious mistakes and rows of confusion, misguided and misled folk still appear. Perhaps a model city of Beijing as old town, new streets, and maps can sway Searle readers in the 21st century!

Alas, the old captain has sailed into the horizon, leaving the debate with the currents.

emil-lp•3mo ago
Related: John Searle has died (Oct 2025)

146 points, 216 comments

https://news.ycombinator.com/item?id=45563627

jedberg•3mo ago
I knew this title looked familiar! It was required reading when I took Searle's course. I always thought it funny that CogSci majors (basically the AI major at Berkeley in the 90s) were required to take a course from a guy who strongly believed that computers can't think.

It would be like making every STEM major take a religion course.

actionfromafar•3mo ago
Not a bad idea, actually. Religion is a big deal and it can only help to know the basics of how it works. Some of the fanboi behavior common in tech is at least religion adjacent.
countrymile•3mo ago
Not sure that equivalence works: cognitive science doesn't require that people believe that computers can think, and STEM doesn't require that people think of the world in a purely mechanistic way - e.g., historically, many scientists were looking for the rules of a lawgiver.

Apologies if I'm misreading you here.

epixu•3mo ago
What do you think of this thought experiment? https://publish.obsidian.md/offmark/The+Chinese+Room+by+John...