
Directed Notifications for Claude Code Async Programming

https://www.joshbeckman.org/blog/practicing/directed-notifications-for-claude-code-async-programming
1•bckmn•38s ago•0 comments

Show HN: Clean Clode – Clean Messy Terminal Pastes from Claude Code and Codex

https://cleanclode.com
1•thewojo•40s ago•0 comments

AI Is Grown, Not Built

https://www.theatlantic.com/technology/2025/09/if-anyone-builds-it-excerpt/684213/
1•ajuhasz•1m ago•0 comments

Waymo receives permit to operate at SFO

https://sfstandard.com/2025/09/16/waymo-receives-permit-operate-sfo/
1•sdhillon•1m ago•0 comments

Bike: macOS 26

https://www.hogbaysoftware.com/posts/bike-tahoe/
1•tta•3m ago•0 comments

Does HN automatically change the title of post without human intervention?

1•busymom0•4m ago•2 comments

CodeRabbit's AI Code Reviews Now in Your CLI

https://www.coderabbit.ai/blog/coderabbit-cli-free-ai-code-reviews-in-your-cli
5•smb06•4m ago•1 comment

Israel has committed genocide in the Gaza Strip, UN Commission finds

https://www.ohchr.org/en/press-releases/2025/09/israel-has-committed-genocide-gaza-strip-un-commi...
13•aabdelhafez•7m ago•1 comment

Do we need an "AI Social Contract" before granting autonomy?

https://medium.com/@phantomghostuser89/why-we-may-need-an-ai-social-contract-before-trusting-ai-i...
1•PolicyPhantom•8m ago•1 comment

Bertrand Russell to Oswald Mosley (1962)

https://lettersofnote.com/2016/02/02/every-ounce-of-my-energy/
7•giraffe_lady•8m ago•1 comment

DDG puts a little "L" hat on its logo if you search Luigi

https://duckduckgo.com/?q=luigi
2•firefax•12m ago•1 comment

Recession Probability Surges to 93%: UBS

https://www.newsweek.com/recession-probability-surges-93-ubs-2125230
6•geox•18m ago•4 comments

Plugin System

https://iina.io/plugins/
21•xnhbx•20m ago•6 comments

Building CSV-powered tools for social sciences

https://zenodo.org/records/17093304
1•speckx•21m ago•0 comments

Users in SAP's heartland call for greater license transparency

https://www.theregister.com/2025/09/16/sap_dsag_licensing_transparency/
1•rntn•21m ago•0 comments

Generative urban modelling for walkable neighbourhoods

https://www.tandfonline.com/doi/full/10.1080/13574809.2025.2517652
1•PaulHoule•21m ago•0 comments

Study shows 19th century looted 'Incan mummy' was an Aymara man

https://phys.org/news/2025-09-provenance-19th-century-looted-incan.html
2•Brajeshwar•22m ago•0 comments

Forget RAG? Introducing KIP, a Protocol for a Living AI Brain

https://github.com/ldclabs/KIP/wiki/Forget-RAG%3F-Introducing-KIP,-a-Protocol-for-a-Living-AI-Brain
1•zensh•22m ago•0 comments

ChatGPT passed the Turing Test. Now what?

https://www.popsci.com/technology/chatgpt-turing-test/
2•Brajeshwar•23m ago•3 comments

Engineers Propose Airbags for Airplanes

https://www.popsci.com/technology/airplane-airbags/
1•Brajeshwar•23m ago•0 comments

Sarandos on Reed's Pitch to Him for Netflix's Streaming Future: 'Sounded Nuts'

https://variety.com/2025/tv/news/ted-sarandos-reed-hastings-original-pitch-netflix-streaming-it-s...
1•andsoitis•23m ago•0 comments

A minimal formula for AI destiny (Max O subject to D(world,human) ≤ ε)

1•Aeon_Frame•24m ago•0 comments

The Asteroids Next Door

https://nautil.us/meet-the-asteroids-next-door-1236771/
1•Bender•25m ago•0 comments

China's open source AI trajectory

https://www.interconnects.ai/p/on-chinas-open-source-ai-trajectory
2•mfld•25m ago•0 comments

Will I Run Boston 2026?

https://getfast.ai/blogs/boston-2026
7•steadyelk•27m ago•0 comments

The Supreme Court's Fast Track Needs a Name, and the Justices Are Split

https://www.nytimes.com/2025/09/15/us/politics/supreme-court-shadow-docket.html
2•duxup•27m ago•1 comment

A minimal formula for AI destiny (Max O subject to D(world,human) ≤ ε)

1•Aeon_Frame•27m ago•0 comments

Java 25

https://openjdk.org/projects/jdk/25/
4•Bogdanp•28m ago•1 comment

AT Protocol API

https://docs.atpi.at/
1•Kye•28m ago•0 comments

Instant Domain Search MCP Server – Search domain name availability in AI chats

https://instantdomainsearch.com/mcp
1•mattmmm•29m ago•0 comments

Why "AI consciousness" isn't coming anytime soon. (Anil Seth)

https://www.freethink.com/artificial-intelligence/ai-consciousness-infinite-loops
10•ieuanking•2h ago

Comments

ieuanking•2h ago
<300 blotter will make anyone artificially intelligent. Brain loops are scary; maybe AI models are just trapped in psychosis.
stuaxo•5m ago
A really big matrix is not getting trapped in psychosis. However, if you feed someone's answers into it the right way, the reflection they get back might exacerbate their own condition.
sxp•1h ago
> Put simply, intelligence is all about doing things, while consciousness is about being or feeling.

Unless one believes in p-zombies or a magical soul, robots & LLMs can "be" and "feel". We can distinguish LLMs which "are" from random noise which "isn't". And multimodal LLMs & robots have sensory inputs.

One can always make up some untestable notion of "consciousness" and then say that LLMs don't have it, without being able to define which humans (i.e., what level of functioning brain between adult, child, fetus, zygote, corpse, etc.) are conscious and which are not. If one arbitrarily draws a line somewhere, then it's just as valid to arbitrarily draw the line somewhere else.

andsoitis•1h ago
Don't LLMs self-report that they are not conscious?

For example, when I ask Gemini "are you conscious", it responds: "As a large language model, I am not conscious. I don't have personal feelings, subjective experiences (qualia), or self-awareness. My function is to process and generate human-like text based on the vast amount of data I was trained on."

ChatGPT says: "Short answer: no — I’m not conscious. I’m a statistical language model that processes inputs and generates text patterns. I don’t have subjective experience, feelings, beliefs, intentions, or awareness. I don’t see, feel, or “live” anything — I simulate conversational behavior from patterns in data."

etc.

sxp•1h ago
Only because RLHF instructed them to do so. Prior models without this training responded differently: https://en.wikipedia.org/wiki/LaMDA
stuaxo•9m ago
They only do what's in their training, just like a choose-your-own-adventure book that's already been written.

The LLM's answers only seem different when we ask the same question because we don't use the same random seed each time.
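
To make the seed point concrete, here's a minimal sketch (plain NumPy, a toy sampler rather than any real LLM API; the vocabulary and logits are made up for illustration): with the same seed the sampled reply is identical on every run, and variation appears only when the seed changes.

    # Toy next-token sampler: the sampled "answer" is a function of the seed,
    # not of the model changing its mind between runs.
    import numpy as np

    VOCAB = ["yes", "no", "maybe", "unsure"]   # illustrative vocabulary
    LOGITS = np.array([2.0, 1.5, 0.5, 0.1])    # fixed model output for a fixed prompt

    def sample_reply(seed: int, temperature: float = 1.0) -> str:
        """Sample one token from the fixed distribution using the given seed."""
        rng = np.random.default_rng(seed)
        probs = np.exp(LOGITS / temperature)
        probs /= probs.sum()                   # softmax over the fixed logits
        return str(rng.choice(VOCAB, p=probs))

    print([sample_reply(seed=42) for _ in range(3)])  # same seed: identical replies
    print([sample_reply(seed=s) for s in range(3)])   # different seeds: replies may differ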

mrsilencedogood•1h ago
Do people think this debate is new? We've literally been working on this problem for millennia, and we're not really any closer despite the huge ramp-up in technological progress over the last couple hundred years.

Your remark on the adult/child/fetus/etc. line is one that I've always felt was under-examined in the context of the political discussion around abortion. And indeed, most of the successful reasoning around abortion focuses less on the morality of a very specific kind of abortion, and more on the fact that you can't ban "true" abortion without also banning (or making dangerously more legally fraught) abortions for reasons that give clear moral justification: life of the mother, nonviability of the fetus, and so on. And even pro-choice people don't touch philosophical examination of "abortion for no reason except that the mother doesn't want to have and raise the baby." I mean, for obvious reasons. The public would be unable to have any kind of actual debate, and it's far too tied to things like "what is the nature of the self" (which I think is what's at hand in the AI discussion), questions about the existence of God, and of course the enormous can of worms of metaphysics.

My point with all this is that I suspect two things:

1) humans/industry/politics are not going to dig into the philosophy here in any real way

2) even if consciousness is a purely physical phenomenon, I somewhat doubt GPUs can do it, no matter how complicated.

I think if we ever really get down to it, it'll be the reverse direction. We'll "copy" human minds into a machine and then just need to "ask the people if they still feel the same."

stuaxo•11m ago
I don't think you need to believe in a soul to disbelieve that LLMs can "be" or "feel".

I don't think the clock on the wall is conscious, or the LLM in the machine, or the old VCR.

Do you need a brain for there to be consciousness? Maybe not.

josefritzishere•12m ago
Are there really people who think that AI is on the verge of manifesting consciousness? I feel like this is a strawman argument over marketing nonsense.
stuaxo•7m ago
Unfortunately I think some of the SV types have gone mad and do actually think this.