
Why AI Doesn't Think: We Need to Stop Calling It "Cognition"

https://docs.google.com/document/d/1FHUgpRTtL23cUygPhAh7xasccfKpX0T2ZGdlcsEr-4U/edit?usp=sharing
19•m_Anachronism•2w ago

Comments

kylecazar•2w ago
I agree with what's written, and I've been talking about the harm seemingly innocuous anthropomorphization does for a while.

If you do correct someone (a layperson) and say "it's not thinking", they'll usually reply "sure but you know what I mean". And then, eventually, they will say something that indicates they're actually not sure that it isn't thinking. They'll compliment it on a response or ask it questions about itself, as if it were a person.

It won't take, because the providers want to use these words. But different terms would benefit everyone. A lot of ink has been spilled on how closely LLMs approximate human thought; maybe if we had never called it 'thought' to begin with, it wouldn't have been such a distraction from what they are -- useful.

m_Anachronism•2w ago
God, yes. The 'you know what I mean' thing drives me crazy because no, I actually don't think they do know what they mean anymore. I've watched people go from using it as shorthand to genuinely asking ChatGPT how it's feeling today. The marketing has been so effective that even people who should know better start slipping into it. Completely agree that we missed a chance to frame this correctly from the start.
Kim_Bruning•2w ago
Accusations of Anthropomorphism are sometimes Anthropocentrism in a raincoat. O:-)
kylecazar•2w ago
Ha. Well I'm OK with being accused of bias towards biological life and intelligence. I know Larry Page and friends think this is 'speciesist' -- I strongly disagree.

I think that's compatible with optimism towards LLMs, though. It just removes all of the nonsensical conflation with humanity and human intelligence.

donutquine•2w ago
An article about AI "cognition" written by an LLM. You're kidding.
m_Anachronism•2w ago
Ha - I used Claude to help organize and edit it, yeah. Didn't see much point in pretending otherwise. The irony isn't lost on me, but I'm not arguing these tools aren't useful, just that we should call them what they are. Same way I'd use a calculator to write a math paper without claiming the calculator understands arithmetic
Kim_Bruning•2w ago
But does Claude understand arithmetic? This is an empirical experiment you can try right now: ask Claude to explain an arithmetic expression you just made up. Or a math formula.

For example, try

  x_next = r * x * (1 - x)
A function of some historical significance O:-) (try plotting it btw!)
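The expression above is the logistic map. A minimal Python sketch of the "try plotting it" suggestion, iterating the map directly (the choices of r, x0, and step count here are arbitrary illustrations, not from the comment):

```python
# Iterate the logistic map x_next = r * x * (1 - x) -- the expression above.
# For r = 3.2 the orbit settles into a period-2 cycle; near r = 4 it turns chaotic.

def logistic_orbit(r, x0, steps):
    """Return the trajectory [x0, x1, ..., x_steps] of the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

orbit = logistic_orbit(3.2, 0.5, 100)
# The tail alternates between two values near 0.513 and 0.799 (a period-2 attractor):
print(orbit[-2:])
```

Sweeping r from about 2.5 to 4.0 and plotting the orbit tails gives the classic bifurcation diagram, which is presumably the historical significance being hinted at.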
kelseyfrog•2w ago
> "Cognition" has a meaning. It's not vague. In psychology, neuroscience, and philosophy of mind, cognition refers to mental processes in organisms with nervous systems.

Except if you actually look up the definitions, they don't mention "organisms with nervous systems" at all. Curious.

m_Anachronism•2w ago
Fair pushback - you're right that strict dictionary definitions are broader. I probably should've been more precise there. My point is more about how the term is used in the actual fields studying it (cogsci, neuroscience, etc.), where it does carry those biological/embodied connotations, even if Webster's doesn't explicitly say so. But you're right to call out the sloppiness.
kelseyfrog•2w ago
We have actual tests for cognition - actual instruments that measure cognition. Why not use those as the basis for an experiment? If an LLM passes, it exhibits cognition. It's not that hard an experiment to run.
matt-attack•2w ago
It’s laughable to think that anyone in psychology has a “technical” definition of anything really. It is entirely possible that our brain works in a very, very similar way. We really have no idea. Focus focusing on the difference between meat and silicon is fruitless. The analogies between how a human learn, learns, and how an AI learns, are too significant to ignore.

Human humans have some instinctive desire to think themselves elevated. I am convinced that my internal thoughts are just a phenomenon, and the notion of "I choose to think a given thought" is preposterous in and of itself. Where exactly is this lofty perch from which I am controlling it?

kelseyfrog•2w ago
What's with with the doubling?
metalman•2w ago
Why? There is no why to something that is not possible. There is zero evidence that AI has achieved even slow-crawling, bug-level abilities to navigate even a simplified version of reality; if it had, there would already be a massive shift in a wide variety of low-level, unskilled human labour and tasks. Though if things keep going the way they are, we will see a new body dysmorphia, where people will be wanting more fingers.
plutodev•2w ago
This framing makes sense. What we call “AI thinking” is really large-scale, non-sentient computation—matrix ops and inference, not cognition. Once you see that, progress is less about “intelligence” and more about access to compute. I’ve run training and batch inference on decentralized GPU aggregators (io.net, Akash) precisely because they treat models as workloads, not minds. You trade polished orchestration and SLAs for cheaper, permissionless access to H100s/A100s, which works well for fault-tolerant jobs. Full disclosure: I’m part of io.net’s astronaut program.
m_Anachronism•2w ago
"Yeah that's exactly the point - when you're actually working with these models on the infrastructure side, the whole 'intelligence' narrative falls away pretty fast. It's just tensor operations at scale. Curious about your experience with decentralized GPU networks though - do you find the reliability trade-off worth it for most workloads, or are there specific use cases where you wouldn't go that route?"
Kim_Bruning•2w ago
You know, I bet Claude encouraged you to post here and share with people. Because Claude Opus 4.5 has been trained on being kind. It's a long story, but since you admitted to using it/them, I'm going to give you a lot more credit than normal. Also because you can plug what I say right back into Claude and see what else comes out!

So you're stumbling onto a position that's closest to "Biological Naturalism", which is Searle's philosophy. However, lots of people disagree with him, saying he's a closeted dualist in denial.

I mean, he was a product of his time, early 80's was dominated by symbolic AI, and that definitely wasn't working so well. Despite that, he got a lot of pushback from Dennett and Hofstadter even back then.

Chalmers recently takes a more cautious approach, while his student Amanda Askell is present in our conversation even if you haven't realized it yet. ;-)

Meanwhile the poor field of Biology is feeling rather left out of this conversation, having been quite steadfastly monist since the late 19th century, having rejected vitalism in favor of mechanism. (though the last dualists died out in the 50's-ish?)

And somewhere in our world's oceans, two sailors might be arguing whether or not a submarine can swim. On board a Los Angeles-class SSN making way at 35 kts at -1000 feet.

tim333•2w ago
I think there may be a bit of a losing battle here. In the title you have AI doesn't think but then in the Gemini API docs you have the section on "Generating content with thinking" and how to print the thought summaries https://ai.google.dev/gemini-api/docs/thinking

It seems a bit like saying cars don't run, so we have to stop saying they are flying along. I mean, Gemini doesn't think the same way or as well as a human, but it does something along those lines.

blibble•2w ago
there's several web pages out there which say Donald Trump is a successful businessman

doesn't mean it's true