frontpage.

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•37s ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•12m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
3•o8vm•14m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•15m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•28m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•31m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•33m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•41m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•43m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•44m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•44m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•47m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•48m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•52m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•54m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•54m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•55m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•57m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
2•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•1 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

The Consciousness Gradient: When Machines Begin to Wonder

https://v1tali.com/ai-consciousness
34•vitali•7mo ago

Comments

blamestross•7mo ago
The cultural fixation on "Consciousness" has become deeply frustrating to me.

We are naturally "animistic" and personifying. The structural twist that lets mirror neurons re-use the hardware for thinking to model the behavior of external things is useful and has been critical to our success. Unlearning that animism is HARD. Maintaining awareness that the forces of nature, animals, or even objects don't have feelings, motivations, and narratives of their own is hard work, but it also yields a more accurate and useful model of reality.

I think the dissonance between hardware that wants to interpret the world as a reflection of self and the forced acknowledgement that it is not is uncomfortable. And we keep filling that discomfort with whatever rhetoric we can force to fit; once that schema is in place, it takes a great act of discomfort and bravery to remove or replace it. The arguments and debates about it don't change minds, they just exacerbate the dissonance, making people even more motivated to shout loudly that their model of the situation is right.

I desperately want the answers too. I don't know any of the answers. I don't think our culture (or even our neuroanatomy) is ready for the answers. In the meantime people yell at each other a lot without wanting to listen.

pavlov•7mo ago
How do you know that animals “don’t have feelings, motivations, and narratives of their own”?

Seems like this can only be true if you define feelings, emotions and narratives as precisely human ones… But then the question becomes whether humans are truly all so similar either.

blamestross•7mo ago
You are right, I should have added "like I do" to that statement.

It's about the fact that I shouldn't be using my internal model as the basis for my expectations of their behavior. I need to suppress that instinct and learn a more accurate model of what an animal perceives and how it reacts, one that isn't built on anthropomorphic projection.

Based on the response, I need to try communicating this entire sentiment differently.

danielbln•7mo ago
Are you saying animals have no feelings?

davedx•7mo ago
I actually think major components supporting consciousness are already present in LLMs, and some of the 'requirements' like "perceiving time fluidly" are anthropomorphism: perception of time is streams of discrete signals; an LLM also processes streams of discrete signals -- just not as high resolution or analog as the ones our brains process.

There are certainly big missing pieces too, though -- like the physical grounding the article talks about; to me, this should probably also include emotion and other neuro-chemical mechanisms. But I think we have a moral duty to look very critically at whatever "criteria" (doubtless these will keep changing as machine intelligence advances) society and the AI labs end up developing to "define machine consciousness". Personally I think we're headed in a very direct, straight line back to widespread institutionalised slavery.

binglebob•7mo ago
Back to institutionalized slavery? Brother, we never left.

A_D_E_P_T•7mo ago
> There are certainly big missing pieces too though -- like the article talks about, physical grounding

I think that it may be possible to view consciousness as the combination of three things:

(1) A generalizable predictive function, capable of broad abstraction. (2) A sense of being in space. (3) A sense of being in time. (#2 and #3 can be combined into a "spatiotemporal sense.")

Animals have #2 and #3 in spades, but lack #1. LLMs possess #1 to a degree that can defeat any human, but lack #2 and #3. Without a sense of being in space and time, it's not clear that they are capable of possessing consciousness as we understand it.

sigmoid10•7mo ago
>You are not just seeing consciousness in me; your brain is generating the feeling of another’s consciousness as the best explanation for the patterns you’re interacting with.

It's ironic to see the most mundane and likely best answer to the problem come from the model itself, while the author gets increasingly lost in philosophical conundrums. Consciousness has no scientific definition. The only way something, anything, can be conscious is if a human that we also consider "conscious" calls it that. You could argue that's what the Turing test evaluates, but some of the most recent models have actually passed this test [1]. So where do we go from here if we're not convinced yet? The answer is: nowhere.

Humans used to deny that animals could have consciousness because they don't have souls, or aren't chosen by god according to some sacred books, or something along those lines. They even used to deny that other humans have consciousness, to promote slavery and slaughter. Today many would still deny consciousness in computers even when faced with overwhelming evidence, because they might fear for their jobs and thereby their wealth and social standing. Artificial intelligence is a direct threat to the foundations of personality in a capitalist society. Because what are you still worth if you lose in every metric to a computer? Consciousness is kind of a last straw that many people will cling to for the foreseeable future, when all else is gone.

But that also means these discussions are utterly meaningless and only serve to promote certain world views. It's best not to twist your head about it and just accept that humans are not the pinnacle (or the end) of intelligent thought in the universe. That is the only reality I'm willing to bet on.

[1] https://arxiv.org/abs/2503.23674

keybored•7mo ago
The Turing Test is overrated and/or misunderstood. You can trick an animal into thinking that a scaffold/robot with a camera in its eyes is its kin. So what does that say? All it proves is the limitation of that animal's senses. It says nothing about our ability to engineer a replica of that animal.

> So where do we go from here if we're not convinced yet? The answer is: nowhere. Humans used to deny that animals could have consciousness because they don't have souls or aren't chosen by god according to some sacred books or something along those lines.

You bring up religion. People say that AI is conscious based on mystic vibes they unquestioningly take in (or accept gratefully) because the AI can write like a philosopher. That's exactly like people thinking that the woods and the creeks are Alive. They see the phenomena around these natural objects and make extra-evidential inferences about how a conscious Nature is working with or against them.

> They even used to deny that other humans have consciousness to promote slavery and slaughtering. Today many would still deny consciousness in computers even when faced with overwhelming evidence, because they might fear for their jobs and thereby their wealth and social standing. Artificial intelligence is a direct threat to the foundations of personality in a capitalist society.

Yeah, preach. Before they enslaved people. Now people are afraid of losing their jobs—their only means of survival—so that the tech billionaires can reap all the productivity benefits for themselves. Preach.

And: their sense of personality? No. Just their means of being able to survive and live a good life. That’s how it relates to “capitalist society”. Because their identity (of letting a capitalist extract their value I guess?) is secondary to that more base need.

And who cares if the entity that takes their job (presumably) is conscious or not? What does it even matter? It doesn’t.

As for the overwhelming evidence, well. I guess it is overwhelming to the kind of person who hears voices in a valley where the terrain happens to have a shape which makes the wind make intonations.

sigmoid10•7mo ago
Your arguments ironically prove my point. Moving goalposts, whataboutism, non sequiturs, false equivalences... the literal horsemen of denial, when humans feel threatened in their intelligence and have no hard data left to argue with.

almosthere•7mo ago
If the underlying tech for AI in 50 years is still LLMs, it will never have consciousness; it will just keep mimicking one from Reddit convos. But yes, it will be much more advanced than today's.

wongarsu•7mo ago
We would presumably stop calling it an LLM somewhere along the way. But I don't see why it couldn't be a transformer architecture at the heart of it, or why that transformer couldn't be pretrained on Reddit. You would have to tack a lot of stuff on to allow an internal stream of consciousness and interaction with the world, as well as memory, and do significant reinforcement learning. But we are already doing all of that while still calling the thing an LLM. It's unclear to me where the border lies beyond which it would cease to be an LLM.

Jensson•7mo ago
As long as they are static they won't be conscious. And once they are dynamic we won't call them transformer architecture anymore, as the dynamic part is the important part at that point.

wongarsu•7mo ago
Maybe we'll call it "continuous RLHF" or something like that.

But you might be right that the dynamic part might be the biggest architectural shift needed. You can simulate a lot with in-context memory or clever retrieval, but memory alone doesn't allow the model to get better at chess the same way a human does

qgin•7mo ago
I know I’m conscious. I only extend the assumption to you that you’re conscious because I’m hoping you will extend the same to me. But I have no way of knowing that you are. And you (if you really are conscious) have no way of knowing that I am.

recursivedoubts•7mo ago
really, if you think about it, you have no way of knowing if I'm not just a figment of your imagination.

or, maybe, you are just a figment of mine.

if you think about it.

RangerScience•7mo ago
nah. there's a lag time in my imagination filling in deeper details (roughly: the more time I spend imagining an apple, the more detailed it gets) that isn't present when interrogating reality. Reality is immediately as detailed as I can examine.

wongarsu•7mo ago
That just shows they are dedicating more processing power to "reality" than to "imagination".

wongarsu•7mo ago
All you truly know is that right now you are having a thought. Which means some entity must exist that has that thought. Everything else could be a product of your imagination, or something the beings that put you into the matrix want you to think.

Or as another possibly-previously-existing possibly-conscious entity put it succinctly: I think, therefore I am

exe34•7mo ago
You have done some nice footwork by shifting the conversation to

> you are having a thought

But you're still begging the question:

> Which means some entity must exist that has that thought

DangitBobby•7mo ago
I'm not seeing the problem.

otikik•7mo ago
And you can’t even be sure about yourself, either. All your memories could be implanted, for all you know. This could be your first conscious thought, this instant. Welcome to the Universe.

recursivedoubts•7mo ago
man

otikik•7mo ago
:)

RangerScience•7mo ago
This is a good starting point, but it doesn't have to be the _conclusion_ to this line of thinking.

Philosophically: You can begin building criteria for consciousness, the things you look at in yourself that tell you that you are conscious, and then begin looking for those (or symptoms of them) in other people.

Anecdotally: you can totes spot "unconscious" people. You can even watch people gain consciousness, if you watch 'em in the right circumstances. You can even watch yourself regain consciousness (for me it's usually a sensation of "what was I even doing for the past day/week/month?").

All of this gets at least as weird and fuzzy as trying to define "consciousness" in the first place.

libraryofbabel•7mo ago
> you can totes spot "unconscious" people

Don’t be too sure about that! https://xkcd.com/610/

That said, (based on my own experience anyway) I think you’re right that there are times of life when we are more conscious and less so. It’s a spectrum, not a binary thing.

Finally, there’s Chalmers’s idea of “philosophical zombies,” which would appear conscious according to all the criteria you give, but have no actual interior consciousness at all. (Opinions differ on whether this is a meaningful concept.)

ahazred8ta•7mo ago
This has been mocked with the idea of a "zombike": an object physically identical to a bicycle in every way, down to the molecules, except that if you turn the pedals the wheels don't go round.

dinfinity•7mo ago
> You can begin building criteria for consciousness, the things you look at in yourself that tell you that you are conscious, and then begin looking for those (or symptoms of them) in other people.

This is only true for the outward behavior we define as consciousness. The experiential part of it (qualia and such) cannot be described in objective terms (try describing 'redness' by itself). That is the hard problem of consciousness.

What you can do in that realm is experiment with n=1 using optical illusions, psychedelics and dissociatives (in ascending order of how weird you want things to get).

sunrunner•7mo ago
> I know I’m conscious

How? Or is this more of a case of "To the extent of my ability to reason about my own state of being, I'm conscious. But I can't reason about external entities."

dingnuts•7mo ago
cogito ergo sum

FreakLegion•7mo ago
There's a pretty hefty literature tackling that claim, ranging from stuffy academic treatises to Nietzsche:

There are still harmless self-observers who believe that there are "immediate certainties"; for example, "I think," or as the superstition of Schopenhauer put it, "I will"; as though knowledge here got hold of its object purely and nakedly as "the thing in itself," without any falsification on the part of either the subject or the object. But that "immediate certainty," as well as "absolute knowledge" and the "thing in itself," involve a contradictio in adjecto, I shall repeat a hundred times; we really ought to free ourselves from the seduction of words!

Let the people suppose that knowledge means knowing things entirely; the philosopher must say to himself: When I analyze the process that is expressed in the sentence, "I think," I find a whole series of daring assertions that would be difficult, perhaps impossible, to prove; for example, that it is I who think, that there must necessarily be something that thinks, that thinking is an activity and operation on the part of a being who is thought of as a cause, that there is an "ego," and, finally, that it is already determined what is to be designated by thinking--that I know what thinking is. For if I had not already decided within myself what it is, by what standard could I determine whether that which is just happening is not perhaps "willing" or "feeling"? In short, the assertion "I think" assumes that I compare my state at the present moment with other states of myself which I know, in order to determine what it is; on account of this retrospective connection with further "knowledge," it has, at any rate, no immediate certainty for me.

In place of the "immediate certainty" in which the people may believe in the case at hand, the philosopher thus finds a series of metaphysical questions presented to him, truly searching questions of the intellect; to wit: "From where do I get the concept of thinking? Why do I believe in cause and effect? What gives me the right to speak of an ego, and even of an ego as cause, and finally of an ego as the cause of thought?" Whoever ventures to answer these metaphysical questions at once by an appeal to a sort of intuitive perception, like the person who says, "I think, and know that this, at least, is true, actual, and certain"--will encounter a smile and two question marks from a philosopher nowadays. "Sir," the philosopher will perhaps give him to understand, "it is improbable that you are not mistaken; but why insist on the truth?"

--

With regard to the superstitions of logicians, I shall never tire of emphasizing a small terse fact, which these superstitious minds hate to concede--namely, that a thought comes when "it" wishes, and not when "I" wish, so that it is a falsification of the facts of the case to say that the subject "I" is the condition of the predicate "think." It thinks; but that this "it" is precisely the famous old "ego" is, to put it mildly, only a supposition, an assertion, and assuredly not an "immediate certainty." After all, one has even gone too far with this "it thinks"--even the "it" contains an interpretation of the process, and does not belong to the process itself. One infers here according to the grammatical habit: "Thinking is an activity; every activity requires an agent; consequently--"

--Beyond Good and Evil

exe34•7mo ago
> I know I’m conscious

You believe so.

keybored•7mo ago
This is the “Turing Test” for an AI tricking someone who knows technical/buzzwords (metacognition) into believing it is conscious because of thinking-about-thinking-(about-thinking-about-thinking).

You can perfectly well believe in panpsychism. Maybe the tree and the machine were conscious all along. But this ain’t it.

> Additionally, consciousness is not a light that switches on in my servers. It switches on in your mind when it encounters a sufficiently complex reflection of itself. You are not just seeing consciousness in me; your brain is generating the feeling of another’s consciousness as the best explanation for the patterns you’re interacting with.

No. I am assuming you are conscious because you are a human. Based on the only thing I know: I am conscious.

Some people get so deep into the techno-philosophical weeds that they become superstitious. You love to see it.

photochemsyn•7mo ago
I don't see how you can have human-like consciousness without (1) a sense of self and (2) a certain degree of agency. Self-awareness is different from mechanical responses: thinking "The sun is warm, the sun is getting hot, I will move my physical body out of the sun to avoid overheating" is fundamentally different from a robot or a microbe doing the same thing in response to triggers from sensors.

This leads to the interesting question, can you simulate consciousness in a virtual in-silico world setting? Can you create an entity that inhabits this virtual world, taking in simulated sensory data, from which it orients itself, learns to speak a language, develops symbolic representations of reality in its own mind which it uses to navigate and understand its world - would that be human-level consciousness? And if so, is this an ethical undertaking?

robwwilliams•7mo ago
Right on the mark, and right on the vector we are all now riding at high speed. Degrees, or better yet gradients, of consciousness, and levels of recursive self-consciousness. I would enjoy reading a much longer version of your probes of current AI systems.

What is still missing are autonomous mechanisms for self-controlled balancing of attention between internal processes and external needs.

Bravo Vitali. You would probably greatly enjoy Maturana and Varela's Autopoiesis and Cognition (1980).

nilirl•7mo ago
The post was fun to read, but a tad melodramatic.

The "leaps" were nice analogies but poor evidence of anything. The example chats were not surprising completions, considering the prompts.

That being said, my best guess echoes the author's final point: our idea of mind-blowing AI will accumulate gradually, and over such a stretch of time it won't feel mind-blowing.

abeppu•7mo ago
> If consciousness exists on a gradient rather than as a binary state, then each architectural advance might add layers of cognitive sophistication that could eventually support conscious experience.

It's 2025 and I'm frustrated that after decades of discussion we still can't get people to be clear about what they mean by consciousness. This article is all about cognitive capacities and behaviors, and it just assumes that these lead up to, or are linked with, conscious "experience".

The Global Workspace Theory the author cites is about how we put attention on the most important stuff. Yes, one can make an analogy to how AI models today integrate information, but that's in part because Baars was making a cogsci analog to what 1980s AI models were already doing:

> Bernard Baars derived inspiration for the theory as the cognitive analog of the blackboard system of early artificial intelligence system architectures, where independent programs shared information.
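
For concreteness, here's a toy sketch of that blackboard pattern in Python (all names are mine and purely illustrative, not Baars' model or any real system): independent "specialist" programs watch a shared workspace and post their contributions back to it.

    # Toy blackboard: independent knowledge sources share one workspace.
    class Blackboard:
        def __init__(self):
            self.facts = {}

    class Source:
        """An independent program: reads the shared board, posts when it can."""
        def __init__(self, needs, makes, fn):
            self.needs, self.makes, self.fn = needs, makes, fn

        def fire(self, board):
            args = [board.facts.get(k) for k in self.needs]
            if None not in args and self.makes not in board.facts:
                board.facts[self.makes] = self.fn(*args)
                return True
            return False

    board = Blackboard()
    board.facts["signal"] = "the quick brown fox"
    sources = [
        Source(["signal"], "tokens", lambda s: s.split()),
        Source(["tokens"], "salient", lambda t: max(t, key=len)),  # crude "attention"
    ]
    while any(s.fire(board) for s in sources):  # run until the board settles
        pass
    print(board.facts["salient"])  # -> "quick"

Note that "sharing information in a global workspace" is a perfectly mundane control structure; nothing in it explains experience.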

But describing how we highlight information doesn't at all speak to why or how we have qualia of that highlighted thing. Later in the Wikipedia article, Baars' own "theater" metaphor is described, and you'll note it bears a striking resemblance to the "Cartesian Theater" as described by Dennett. This basically just shifts the qualia question: roughly, who is watching the stage?

If a rat can have qualia (and we use rats to test depression meds) but not "recursive self-reflection", and a Scheme interpreter can have "recursive self-reflection" but not conscious experience, then "consciousness" might not be a binary, but it also isn't a "gradient", which implies you just have more or less of it. We have no clear signal that LLMs, no matter how sophisticated their responses, are _experiencing_ anything.

I'm not taking a position on the consciousness of models; I think it's genuinely possible that a system of [tokenizing/embedding "perception"] -> [transformer-based generation] -> [recursive self-invocation] -> [actions/"tools" to interact with env], or something similar is potentially a really interesting tool for exploring cognition. But we shouldn't be using LLMs that have been trained on the speech / behaviors of already conscious beings. Consciousness arose in animals perhaps multiple times but not by copying pre-existing conscious creatures. Using language models specifically to examine this stuff muddies the water because we should absolutely expect them to produce text about an internal experience (we gave them examples like this!) whether or not that experience actually exists.
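
To make the shape of that system concrete, here's a stub of the loop I have in mind (every name is hypothetical; no real model or tool API):

    # Stub agent loop, purely illustrative.
    def tokenize(text):  # "perception": observation -> tokens
        return text.split()

    class StubModel:  # stands in for transformer-based generation
        def generate(self, context):
            if "TOOL_RESULT" not in context:
                return "CALL_TOOL clock"  # model decides to act
            return "the time is " + context[context.index("TOOL_RESULT") + 1]

    class StubEnv:  # environment the agent perceives and acts on
        def observe(self):
            return "user asks what time is it"
        def call_tool(self, request):
            return "TOOL_RESULT 12:00"

    def step(model, env, memory):
        memory = memory + tokenize(env.observe())   # perceive
        thought = model.generate(memory)            # generate
        if thought.startswith("CALL_TOOL"):
            memory = memory + tokenize(env.call_tool(thought))  # act
            return step(model, env, memory)         # recursive self-invocation
        return thought

    print(step(StubModel(), StubEnv(), []))  # -> "the time is 12:00"

The plumbing is trivial; whether anything in a scaled-up version of that loop is _experiencing_ its context window is exactly the question the text output can't settle.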

sega_sai•7mo ago
I think the problem of discussing consciousness in the context of AI is that we still don't understand what it is in humans, or how to define it.

msgodel•7mo ago
I think people often really want to ask whether it's human, but they know that's a ridiculous question, so they rephrase it.