frontpage.

For centuries massive meals amazed visitors to Korea (2019)

https://www.atlasobscura.com/articles/history-of-korean-food
67•carabiner•4h ago•25 comments

Wireguard FPGA

https://github.com/chili-chips-ba/wireguard-fpga
437•hasheddan•12h ago•111 comments

Tauri binding for Python through Pyo3

https://github.com/pytauri/pytauri
45•0x1997•4d ago•1 comment

Fastmail Desktop App

https://www.fastmail.com/blog/desktop-app/
20•soheilpro•1h ago•5 comments

MicroPythonOS – An Android-like OS for microcontrollers

https://micropythonos.com
49•alefnula•3d ago•9 comments

Ask HN: What are you working on? (October 2025)

166•david927•9h ago•412 comments

Reflections on 2 Years Running Developer Relations

https://databased.pedramnavid.com/p/reflections-on-2-years-running-developer
17•mooreds•6d ago•0 comments

Show HN: Baby's first international landline

https://wip.tf/posts/telefonefix-building-babys-first-international-landline/
84•nbr23•4d ago•18 comments

Emacs agent-shell (powered by ACP)

https://xenodium.com/introducing-agent-shell
142•Karrot_Kream•9h ago•14 comments

Database Linting and Analysis for PostgreSQL

https://pglinter.readthedocs.io/en/latest/
65•fljdin•4d ago•9 comments

Bird photographer of the year gives a lesson in planning and patience

https://www.thisiscolossal.com/2025/09/2025-bird-photographer-of-the-year-contest/
105•surprisetalk•1w ago•12 comments

A years-long Turkish alphabet bug in the Kotlin compiler

https://sam-cooper.medium.com/the-country-that-broke-kotlin-84bdd0afb237
91•Bogdanp•12h ago•88 comments

Keyboard Holders, Generation 1

https://cceckman.com/writing/keyboard-holders-gen1/
33•hannahilea•3d ago•0 comments

Three ways formally verified code can go wrong in practice

https://buttondown.com/hillelwayne/archive/three-ways-formally-verified-code-can-go-wrong-in/
89•todsacerdoti•23h ago•53 comments

Free software hasn't won

https://dorotac.eu/posts/fosswon/
180•LorenDB•7h ago•207 comments

3D-Printed Automatic Weather Station

https://3dpaws.comet.ucar.edu
54•hyperbovine•3d ago•10 comments

Show HN: Aidlab – Health Data for Devs

16•guzik•1d ago•3 comments

MAML – A new configuration language

https://maml.dev/
54•birdculture•8h ago•56 comments

The Tiny Teams Playbook

https://www.latent.space/p/tiny
97•tilt•4d ago•30 comments

John Searle has died

https://www.nytimes.com/2025/10/12/books/john-searle-dead.html
68•sgustard•4h ago•51 comments

Novelty Automation

https://www.novelty-automation.com/
28•gregsadetsky•6h ago•5 comments

Thishereness

https://www.lrb.co.uk/the-paper/v47/n18/erin-maglaque/thishereness
4•benbreen•5d ago•1 comment

Completing a BASIC language interpreter in 2025

https://nanochess.org/ecs_basic_2.html
73•nanochess•10h ago•7 comments

Despite what's happening in the USA, renewables are winning globally

https://thebulletin.org/2025/10/despite-whats-happening-in-the-usa-renewables-are-winning-globally/
108•pseudolus•3h ago•84 comments

Countering Trusting Trust Through Diverse Double-Compiling (DDC)

https://dwheeler.com/trusting-trust/
18•ibobev•3h ago•2 comments

Edge AI for Beginners

https://github.com/microsoft/edgeai-for-beginners
136•bakigul•9h ago•46 comments

An initial investigation into WDDM on ReactOS

https://reactos.org/blogs/investigating-wddm/
34•LorenDB•8h ago•2 comments

Show HN: I built a simple ambient sound app with no ads or subscriptions

https://ambisounds.app/
159•alpaca121•15h ago•68 comments

Constraint satisfaction to optimize item selection for bundles in Minecraft

https://www.robw.fyi/2025/10/12/using-constraint-satisfaction-to-optimize-item-selection-for-bund...
29•someguy101010•11h ago•9 comments

A whirlwind introduction to dataflow graphs (2018)

https://fgiesen.wordpress.com/2018/03/05/a-whirlwind-introduction-to-dataflow-graphs/
27•shoo•1d ago•0 comments

John Searle has died

https://www.nytimes.com/2025/10/12/books/john-searle-dead.html
68•sgustard•4h ago

Comments

toomuchtodo•4h ago
https://archive.today/41HwM

https://en.wikipedia.org/wiki/John_Searle

kmoser•4h ago
> Professor Searle concluded that psychological states could never be attributed to computer programs, and that it was wrong to compare the brain to hardware or the mind to software.

Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and is constantly changing states while you are alive; a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.

levocardia•3h ago
Searle had an even stronger version of that belief, though: he believed that a full computational simulation of all of those gazillion inputs, being stimulated in all those manifold ways, would still not be conscious and not have a 'mind' in the human sense. The NYT obituary quotes him comparing a computer simulation of a building fire against the actual building going up in flames.
block_dagger•3h ago
When I read that analogy, I found it inept. Fire is a well defined physical process. Understanding / cognition is not necessarily physical and certainly not well defined.
voidhorse•3h ago
But that acknowledgement would itself lend Searle's argument credence, because much of the brain = computer thesis depends on a fundamental premise: that both brains and digital computers realize computation under the same physical constraints, that the "physical substrate" doesn't matter, and that there is necessarily nothing special about biophysical systems beyond computational or resource complexity. (The same thinking, by the way, leads to arguments that an abacus and a computer are essentially "the same"; at root these are all fallacies of unwarranted abstraction and reductionism.)

The history of the brain computer equation idea is fascinating and incredibly shaky. Basically a couple of cyberneticists posed a brain = computer analogy back in the 50s with wildly little justification and everyone just ran with it anyway and very few people (Searle is one of those few) have actually challenged it.

lo_zamoyski•2h ago
> The history of the brain computer equation idea is fascinating and incredibly shaky. Basically a couple of cyberneticists posed a brain = computer analogy back in the 50s with wildly little justification and everyone just ran with it anyway and very few people (Searle is one of those few) have actually challenged it.

And that's something that often happens whenever some phenomenon falls under scientific investigation, like mechanical force or hydraulics or electricity or quantum mechanics or whatever.

jacquesm•23m ago
Roger Penrose would be another.
freejazz•3h ago
Isn't that beside the point? The point is that something would actually burn down.
anigbrowl•2h ago
https://home.sandiego.edu/~baber/analytic/Lem1979.html
wzdd•2h ago
GP's point is that burning something down is by definition something that requires a specific physical process. It's not obvious that thinking is the same. So when someone says something like "just as a simulation of fire isn't the same as an actual fire (in a very important way!), a simulation of thinking isn't the same as actual thinking" they're arguing circularly, having already accepted their conclusion that both acts necessarily require a specific physical process. Daniel Dennett called this sort of argument an "intuition pump", which relies on a misleading but intuitive analogy to get you to accept an otherwise-difficult-to-prove conclusion.

To be fair to Searle, I don't think he advanced this as an argument, but more of an illustration of his belief that thinking was indeed a physical process specific to brains.

measurablefunc•1h ago
He explains it in the original paper¹ & says in no uncertain terms that he believes the brain is a machine & minds are implementable on machines. What he is actually arguing is that substrate independent digital computation will never be a sufficient explanation for conscious experience. He says that brains are proof that consciousness is physical & mechanical but not digital. Searle is not against the computationalist hypothesis of minds, he admits that there is nothing special about minds in terms of physical processes but he doesn't reduce everything to substrate independent digital computation & conclude that minds are just software running on brains. There are a bunch of subtle distinctions that people miss when they try to refute Searle's argument.

¹https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...

visarga•2h ago
Simulated fire would burn down simulated building
measurablefunc•1h ago
If everything is simulated then "simulated(x)" is a vacuous predicate & tells you nothing so you might as well throw it away & speak directly in terms of the objects instead of wrapping/prepending everything w/ "simulated".
lo_zamoyski•2h ago
That's debatable, but it is also irrelevant, as the key to the argument here is that computation is by definition an abstract and strictly syntactic construct - one that has no objective reality vis-a-vis the physical devices we use to simulate computation and call "computers" - while semantics or intentionality are essential to human intelligence. And no amount of syntax can somehow magically transmute into semantics.
cannonpr•3h ago
I think the statement above and yours both seem to ignore "Turing complete" systems, which would indicate that a computer is entirely capable of simulating the brain (though perhaps not before the heat death of the universe; that's yet to be proven and depends a lot on what the brain is really doing underneath in terms of crunching).
voidhorse•3h ago
This depends on the assumption that all brain activity is the process of realizing computable functions. I'm not really aware of any strong philosophical or neurological positions that have established this beyond dispute. Not to resurrect vitalism or something, but we'd first need to establish that biological systems are reducible to strictly physical systems. Even so, I think there's some reason to think that the highly complex social historical process of human development might complicate things a bit more than just brute force "simulate enough neurons". Worse, whose brain exactly do you simulate? We are all different. How do we determine which minute differences in neural architecture matter?
lo_zamoyski•2h ago
> we'd first need to establish that biological systems are reducible to strictly physical systems.

Or even more fundamentally, that physics captures all physical phenomena, which it doesn't. The methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects while also drawing on layers of abstractions where it is easy to mistakenly attribute features of these abstractions to reality.

DaveZale•3h ago
Yes. I took an intro neuroscience course a few years ago. Even to understand what is happening in one neuron during one input from one dendrite requires differential equations. And there are positive and negative inputs and modulations... it is bewildering! And how many billions of neurons with hundreds of interactions with surrounding neurons? And bundles of them, many still unknown?
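The kind of differential equation involved can be sketched with the simplest standard model, a leaky integrate-and-fire neuron. This is a toy illustration only (all parameter values below are conventional textbook defaults, not from the thread); real models such as Hodgkin-Huxley track several coupled ODEs per neuron.

```python
# Minimal leaky integrate-and-fire neuron, integrated with forward Euler.
# dv/dt = (v_rest - v + I) / tau ; when v crosses threshold, record a
# spike and reset the membrane potential.

def simulate_lif(inputs, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Return spike times (ms) for a sequence of input currents."""
    v = v_rest
    spikes = []
    for step, current in enumerate(inputs):
        # Euler step of the membrane equation; `current` can be
        # positive (excitatory) or negative (inhibitory).
        v += dt * ((v_rest - v + current) / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset  # reset after the spike
    return spikes

# 100 ms of constant excitatory drive produces a regular spike train.
spikes = simulate_lif([20.0] * 1000)
```

Even this single-compartment caricature needs numerical ODE integration; a biologically faithful neuron multiplies that complexity many times over.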
throwaway78940•3h ago
Searle was known for the Chinese Room experiment, which demonstrated language in its translational states to be a strong enclitic feature of various judgements of the intermediary.
p1esk•2h ago
Do you need differential equations to understand what’s happening in a transistor?
anigbrowl•2h ago
> a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening.

This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e. sleep) all the time.

p1esk•2h ago
So you’re saying a brain is a computer, right?
kmoser•56m ago
In the sense that it can perform computations, yes. But the underlying mechanisms are vastly different from a modern digital computer, making them extremely different devices that are alike in only a vague sense.
ggm•3h ago
> Informed once that the listing of an introductory philosophy course featured pictures of René Descartes, David Hume and himself, Professor Searle replied, “Who are those other two guys?” (the article)
jfengel•3h ago
Oh, bad timing. AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI. It's very close to the Chinese Room, which I had always dismissed as misleading. It's a great opportunity to investigate a former pure thought experiment. He'd have loved to see where it went.
anigbrowl•2h ago
I'm generally against LLM recreations of dead people but AI John Searle could be pretty entertaining.
bitwize•1h ago
I'm reminded of how the AIs in Her created a replica of Alan Watts to help them wrestle with some major philosophical problems as they evolved.
lo_zamoyski•1h ago
> AI is currently in a remarkable state, where it passes the Turing test but is still not fully AGI.

Appealing to the Turing test suggests a misunderstanding of Searle's arguments. It doesn't matter how well computational methods can simulate the appearance of intelligence. What matters is whether we are dealing with intelligence. Since semantics/intentionality is what is most essential to intelligence, and computation as defined by computer science is a purely abstract syntactic process, it follows that intelligence is not essentially computational.

> It's very close to the Chinese Room, which I had always dismissed as misleading.

Why is it misleading? And how would LLMs change anything? Nothing essential has changed. All LLMs introduce is scale.

somenameforme•45m ago
The Turing Test has not been meaningfully passed. Instead we redefined the test to make it passable. In Turing's original concept the competent investigator and participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to do the same, and to pass. Instead modern takes have paired incompetent investigators alongside participants colluding with the machine, probably in an effort to be part of 'something historic'.

In "both" (probably more, referencing the two most high profile - Eugene and the LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. For instance here is dialog from a human in one of the tests:

----

[16:31:08] Judge: don't you thing the imitation game was more interesting before Turing got to it?

[16:32:03] Entity: I don't know. That was a long time ago.

[16:33:32] Judge: so you need to guess if I am male or female

[16:34:21] Entity: you have to be male or female

[16:34:34] Judge: or computer

----

And the tests are typically time constrained by woefully poor typing skills (is this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of just several words each. The above snip was a complete interaction, so you get 2 responses from a human trying to trick the judge into deciding he's a computer. And obviously a judge determining that the above was probably a computer says absolutely nothing about the quality of responses from the computer - instead it's some weird anti-Turing Test where humans successfully act like a [bad] computer, ruining the entire point of the test.

The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that. I suspect in a true run of the Turing Test we're still nowhere even remotely close to passing it.

gennarro•3h ago
If you are wondering, it’s not the Doc guy with a similar name: https://en.wikipedia.org/wiki/Doc_Searls (But he was a PhD)
mellosouls•3h ago
Non-paywalled obit:

https://www.theguardian.com/world/2025/oct/05/john-searle-ob...

His most famous argument:

https://en.wikipedia.org/wiki/Chinese_room

tasty_freeze•2h ago
I find the Chinese room argument to be nearly toothless.

The human running around inside the room doing the translation work simply by looking up transformation rules in a huge rulebook may produce an accurate translation, but that human still doesn't know a lick of Chinese. Ergo (they claim) computers might simulate consciousness, but will never be conscious.

But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just like the human in the Searle room doesn't know how to speak Chinese.

slowmovintarget•2h ago
There is no translation going on in that thought experiment, though. There is text processing. That is, the man in the room receives Chinese text through a slot in the door. He uses a book of complex instructions that tells him what to do with that text, and he produces more Chinese text as a response according to those instructions.

Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with the semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.

Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
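The purely syntactic processing described above can be caricatured in a few lines: the "rulebook" is just a mapping from input strings to output strings, with no representation of meaning anywhere. (The rulebook entries below are invented for illustration; a real Chinese Room rulebook would be astronomically larger.)

```python
# Caricature of the Chinese Room: the man matches the shape of the
# incoming symbols against a rulebook and copies out the prescribed
# reply. Nothing in this process represents what the symbols mean.

RULEBOOK = {
    "你好": "你好！",            # a greeting maps to a greeting
    "你会说中文吗？": "会。",     # "Can you speak Chinese?" -> "Yes."
}

def room(message: str) -> str:
    # Pure symbol lookup: no parsing, no semantics, no world model.
    # Unrecognized input gets a canned "Please say that again."
    return RULEBOOK.get(message, "请再说一遍。")

print(room("你好"))
```

From outside the slot in the door, the replies may look fluent; Searle's point is that nothing inside the room, man or mapping, understands them.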

randallsquared•2h ago
> It only operates algorithmically on the input, which is distinctly not what people do when they read something.

That's not at all clear!

> Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.

All of that is called into question with some LLM output. It's hard to understand how some of that could be produced without some emergent model of the world.

slowmovintarget•1h ago
In the thought experiment as constructed it is abundantly clear. It's the point.

LLM output doesn't call that into question at all. Token production through a distance function in a high-dimensional vector representation space of language tokens gets you a long way. It doesn't get you understanding.

I'll take Penrose's notions that consciousness is not computation any day.
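The "distance function in vector space" picture can be made concrete with a toy nearest-neighbor lookup over token embeddings. To be clear, this is a deliberate simplification in the spirit of the comment above, not how production LLMs actually decode (they compute a learned softmax over logits), and the vectors are made up.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_token(context_vec, vocab):
    # Pick the candidate token whose embedding lies closest to the
    # context vector: a purely geometric operation over symbols.
    return max(vocab, key=lambda tok: cosine(context_vec, vocab[tok]))

# Hypothetical 2-D "embeddings" for three tokens.
vocab = {"fire": [0.9, 0.1], "water": [0.1, 0.9], "smoke": [0.8, 0.3]}
print(nearest_token([1.0, 0.2], vocab))  # "fire" is geometrically closest
```

Whether such geometry-over-symbols can ever amount to understanding is exactly the point in dispute in this thread.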

Cogito•1h ago
Out of interest, what do you think it would look like if communicating was algorithmic?

I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know?

jacquesm•20m ago
I think it would end inspiration.
nextworddev•2h ago
Obviously a meat brain is incomparable to an LLM; they are different types of intelligence. Any sane person wouldn't claim an LLM to be conscious in the meat brain sense, but it may be conscious in an LLM way, like the duration of time where matrix multiplications are firing inside GPUs.
nurettin•27m ago
It just aligns generated words according to the input. It is missing individual agency and self sufficiency which is a hallmark of consciousness. We sometimes confuse the responses with actual thought because neural networks solved language so utterly and completely.
Kim_Bruning•2h ago
Oh, I've always wanted to debate him about the chinese room. I disagree with him, passionately. And that's the most fun debate to have. Especially when it's someone who is actually really skilled and knowledgeable and nuanced!

Maybe I should look up some of my other heroes and heretics while I have the chance. I mean, you don't need to cold e-mail them a challenge. Sometimes they're already known to be at events and such, after all!

siglesias•2h ago
Searle has written responses to dozens of replies to the Chinese Room. It's likely that you can find his rebuttals to your objection in the Stanford Encyclopedia of Philosophy's entry on the Chinese Room, or deeper in a source in the bibliography. Is your rebuttal listed here?

https://plato.stanford.edu/entries/chinese-room

danielbarla•13m ago
> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese.

I mean, I guess all arguments eventually boil down to something which is "obvious" to one person to mean A, and "obvious" to me to mean B.

anvandare•1h ago
All you have to do is train an LLM on the collected works and letters of John Searle; you could then pass your arguments along to the machine and out would come John Searle's thoughtful response...
ainiriand•59m ago
Something that would resemble 'John Searle's thoughtful response'...
bfkwlfkjf•1h ago
> It also claims that Jennifer Hudin, the director of the John Searle Center for Social Ontology, where the complainant had been employed as an assistant to Searle, has stated that Searle "has had sexual relationships with his students and others in the past in exchange for academic, monetary or other benefits".

Wiki

sgustard•1h ago
But she also claims he "was innocent and falsely accused": https://www.colinmcginn.net/john-searle/
jrflowers•1m ago
She could feel that the 2016 allegations specifically were unfounded while acknowledging the previous pattern of misconduct.

https://www.insidehighered.com/quicktakes/2017/04/10/earlier...

rahimnathwani•1h ago
I learned about Searle's death a few weeks ago, from this article: https://www.colinmcginn.net/john-searle/

It includes a letter that starts:

  I am Jennifer Hudin, John Searle’s secretary of 40 years.  I am writing to tell you that John died last week on the 17th of September.  The last two years of his life were hellish. His daughter-in-law, Andrea (Tom’s wife) took him to Tampa in 2024 and put him in a nursing home from which he never returned.  She emptied his house in Berkeley and put it on the rental market.  And no one was allowed to contact John, even to send him a birthday card on his birthday.
  
  It is for us, those who cared about John, deeply sad.
I'm surprised to see the NYT obituary published nearly a month after his death. I would have thought he'd be included in their stack of pre-written obituaries, meaning it could be updated and published within a day or two.
blast•57m ago
I found the delay puzzling too. But the NYT obit does link to https://www.colinmcginn.net/john-searle/ near the end.
jrflowers•1h ago
It is not very often that you hear about somebody raising the cost of rent for everyone in an entire city by ~28% in a single year[0]. He will certainly be remembered.

0. https://www.academia.edu/30805094/The_Success_and_Failure_of...

viccis•26m ago
He was a hack and a fraud. Hell has a new sophist.

I do agree with him about AI though. Strange (cask)bedfellows.