Anthropic's team cut ad creation time from 30 minutes to 30 seconds

https://claude.com/blog/how-anthropic-uses-claude-marketing
1•Brajeshwar•1m ago•0 comments

Show HN: Elysia JIT "Compiler", why it's one of the fastest JavaScript frameworks

https://elysiajs.com/internal/jit-compiler
1•saltyaom•2m ago•0 comments

Cache Monet

https://cachemonet.com
1•keepamovin•2m ago•0 comments

Chinese Propaganda in Infomaniak's Euria, and a Reflection on Open Source AI

https://gagliardoni.net/#20260208_euria
1•tomgag•3m ago•1 comments

Show HN: A free, browser-only PDF tools collection built with Kimi k2.5

https://pdfuck.com
2•Justin3go•5m ago•0 comments

Curating a Show on My Ineffable Mother, Ursula K. Le Guin

https://hyperallergic.com/curating-a-show-on-my-ineffable-mother-ursula-k-le-guin/
2•bryanrasmussen•11m ago•0 comments

Show HN: HackerStack.dev – 49 Curated AI Tools for Indie Hackers

https://hackerstack.dev
1•pascalicchio•18m ago•0 comments

Pensions Are a Ponzi Scheme

https://poddley.com/?searchParams=segmentIds=b53ff41f-25c9-4f35-98d6-36616757d35b
1•onesandofgrain•24m ago•7 comments

Divvy.club – Splitwise alternative that makes sense

https://divvy.club
1•filepod•25m ago•0 comments

Betterment data breach exposes 1.4M customers

https://www.americanbanker.com/news/1-4-million-data-breach-betterment-shinyhunters-salesforce
1•NewCzech•26m ago•0 comments

MIT Technology Review has confirmed that posts on Moltbook were fake

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
2•helloplanets•26m ago•0 comments

Epstein Science: the people Epstein discussed scientific topics with

https://edge.dog/templates/cml9p8slu0009gdj2p0l8xf4r
2•castalian•26m ago•0 comments

Bambuddy – a free, self-hosted management system for Bambu Lab printers

https://bambuddy.cool
2•maziggy•31m ago•1 comments

Every Failed M4 Gun Replacement Attempt

https://www.youtube.com/watch?v=jrnAU67_EWg
3•tomaytotomato•31m ago•1 comments

China ramps up energy boom flagged by Musk as key to AI race

https://techxplore.com/news/2026-02-china-ramps-energy-boom-flagged.html
2•myk-e•32m ago•0 comments

Show HN: ClawBox – Dedicated OpenClaw Hardware (Jetson Orin Nano, 67 TOPS, 20W)

https://openclawhardware.dev
2•superactro•34m ago•0 comments

Ask HN: AI never gets flustered, will that make us better as people or worse?

1•keepamovin•34m ago•0 comments

Show HN: HalalCodeCheck – Verify food ingredients offline

https://halalcodecheck.com/
3•pythonbase•37m ago•0 comments

Student makes cosmic dust in a lab, shining a light on the origin of life

https://www.cnn.com/2026/02/06/science/cosmic-dust-discovery-life-beginnings
1•Brajeshwar•39m ago•0 comments

In the Australian outback, we're listening for nuclear tests

https://www.abc.net.au/news/2026-02-08/australian-outback-nuclear-tests-listening-warramunga-faci...
6•defrost•39m ago•0 comments

'Hermès orange' iPhone sparks Apple comeback in China

https://www.ft.com/content/e2d78d04-7368-4b0c-abd5-591c03774c46
1•Brajeshwar•40m ago•0 comments

Show HN: Goxe – 19k logs/s on an i5

https://github.com/DumbNoxx/goxe
1•nxus_dev•41m ago•1 comments

The async builder pattern in Rust

https://blog.yoshuawuyts.com/async-finalizers/
2•fanf2•42m ago•0 comments

(Golang) Self-referential functions and the design of options

https://commandcenter.blogspot.com/2014/01/self-referential-functions-and-design.html
1•hambes•43m ago•0 comments

Show HN: Model Training Memory Simulator

https://czheo.github.io/2026/02/08/model-training-memory-simulator/
1•czheo•45m ago•0 comments

Claude Code Controller

https://github.com/The-Vibe-Company/claude-code-controller
1•shidhincr•49m ago•0 comments

Software design is now cheap

https://dottedmag.net/blog/cheap-design/
1•dottedmag•49m ago•0 comments

Show HN: Are You Random? – A game that predicts your "random" choices

https://github.com/OvidijusParsiunas/are-you-random
1•ovisource•54m ago•1 comments

Poland to probe possible links between Epstein and Russia

https://www.reuters.com/world/poland-probe-possible-links-between-epstein-russia-pm-tusk-says-202...
2•doener•1h ago•0 comments

Effectiveness of AI detection tools in identifying AI-generated articles

https://www.ijoms.com/article/S0901-5027(26)00025-1/fulltext
3•XzetaU8•1h ago•0 comments

AI isn't bored yet (but that might be the key)

1•vayllon•9mo ago
Ever since I introduced my son to LLMs (large language models), he hasn’t stopped asking whether AI will eventually think like humans.

Today’s AI is bioinspired by the human brain, mimicking how neurons connect and process information hierarchically. In turn, advances in AI are now inspiring neuroscientists to rethink how our brain works. This feedback loop is driving breakthroughs in both fields and forcing both to reconsider what thinking truly means. My 10-year-old son says that thinking is like meditating: boring yourself on purpose.

So, should we worry when AI starts feeling bored?

As a father working in deep learning (DL) and natural language processing (NLP) with a passion for neuroscience, I want to explore this fascinating technical-philosophical question: How does our brain think, and in what ways does it resemble AI?

Let's start with Daniel Kahneman, the renowned cognitive scientist who popularized the theory of two systems of thinking. He called them System 1, which is fast, intuitive, and automatic, and System 2, which is slower, deliberative, and logical.

From my perspective and knowledge of AI, I'd venture that the first works like an extremely powerful deep neural network (DNN), capable of processing large amounts of information in parallel. The second is a special type of thinking we could call “narrative” thinking, based on language.

Language processing is the most studied brain function, partly because it’s conscious. But while language sets us apart from animals, it’s not always the most efficient tool. Intuition, emerging from deep, interconnected neural networks, often outperforms it in creativity and speed.

The challenge with intuition, as with artificial DNNs, lies in its lack of explainability: both operate as black boxes, unable to reveal how they reach their conclusions. This lack of transparency generates mistrust, but it does not invalidate their usefulness. After all, the human mind and AI share this paradox: both are useful without always being transparent.

So we can say that we have two types of thinking: network-based thinking (implicit, rapid, and intuitive) and narrative thinking (sequential, linguistic, and conscious), and both are genuinely useful. These systems aren’t isolated: narrative thinking externalizes the ideas generated by the neural network.

When I talk about "ideas," I'm referring to complex, abstract thoughts that don't rely on language, similar to the latent representations in a DNN: internal encodings that encapsulate the essence of data through nonlinear patterns. These representations emerge intuitively, without linguistic intervention.
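
To make that analogy concrete, here is a minimal sketch in Python of what a latent representation looks like in code: a toy, untrained autoencoder (the layer sizes, weights, and input are made up purely for illustration) that squeezes an 8-dimensional input through a 2-dimensional bottleneck. The bottleneck activations are the "latent representation": a compact, non-linguistic encoding of the input.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy autoencoder weights; in a real model these would be learned from data.
    W_enc = rng.normal(size=(8, 2))   # encoder: 8-dim input -> 2-dim latent space
    W_dec = rng.normal(size=(2, 8))   # decoder: 2-dim latent -> 8-dim reconstruction

    def encode(x):
        # Nonlinear compression into the latent space.
        return np.tanh(x @ W_enc)

    def decode(z):
        # Approximate reconstruction back into the original space.
        return z @ W_dec

    x = rng.normal(size=8)            # some input "experience"
    z = encode(x)                     # its latent representation: just 2 numbers
    x_hat = decode(z)                 # the reconstruction

    print("latent representation:", z)
    print("reconstruction error:", np.mean((x - x_hat) ** 2))

Nothing in those two numbers is a word, yet they summarize the input well enough to rebuild a rough copy of it, which is close to what I mean by an idea that exists before language.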

"Language, on the other hand, is a superpower," I told my son. But, you know, with great power comes great responsibility.

Language is our ultimate tool for shaping reality: labeling the world (like AI’s feature tagging), constructing mental embeddings, and enabling self-supervised learning—through questions, trial-and-error, and the inner dialogue we call thought.
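
As a hedged illustration of the "embedding" idea, here is a tiny Python sketch with hand-made 3-dimensional vectors (real embeddings are learned and have hundreds or thousands of dimensions); the point is only that labels become geometry, and similarity becomes an angle between vectors.

    import numpy as np

    # Hypothetical, hand-made embeddings; real ones are learned from data.
    embeddings = {
        "dog":  np.array([0.9, 0.1, 0.0]),
        "wolf": np.array([0.8, 0.2, 0.1]),
        "car":  np.array([0.0, 0.1, 0.9]),
    }

    def cosine(a, b):
        # Similarity between two labels is the cosine of the angle between their vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(embeddings["dog"], embeddings["wolf"]))  # ~0.98: similar concepts
    print(cosine(embeddings["dog"], embeddings["car"]))   # ~0.01: unrelated concepts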

But language has its limits. Low-bandwidth by design, it’s slow, sequential, and lossy—like compressing a symphony into sheet music. Some nuances always escape the page.

Both systems operate as sophisticated prediction engines: powerful pattern recognizers wired by expectation. An LLM forecasts words based on statistical probabilities learned from its training data; our biological DNN works similarly.
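
To keep myself honest about what "forecasting words based on statistical probabilities" means, here is the simplest possible sketch: a bigram model in Python that counts which word follows which in a made-up toy corpus. This is not how any real LLM is implemented (an LLM replaces the counting with a trained neural network over far longer contexts), but the prediction-from-statistics idea is the same.

    from collections import Counter, defaultdict

    # Made-up toy corpus, purely for illustration.
    corpus = "the brain predicts the next word the brain predicts the future".split()

    # Count which word follows which (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(prev):
        counts = following[prev]
        total = sum(counts.values())
        # Probability distribution over the next word, from co-occurrence statistics alone.
        return {word: n / total for word, n in counts.items()}

    print(predict_next("the"))    # {'brain': 0.5, 'next': 0.25, 'future': 0.25}
    print(predict_next("brain"))  # {'predicts': 1.0}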

This becomes super clear in everyday moments. “Like when you sat in my desk chair and changed the height without me knowing,” I told my son. “Then I went to sit down, and I stumbled a little, right?” That tiny wobble isn’t just my body being surprised; it’s my brain going, “Whoa, something’s wrong!” My brain’s prediction of where the seat would be didn’t match reality.

“When will AI truly think like humans?” my son asked. “Perhaps when it gets genuinely bored. Until then, we’re safe” (or just impatient).

Comments

bigyabai•9mo ago
> Today’s AI is bioinspired by the human brain, mimicking how neurons connect and process information hierarchically.

This is not true; AI model weights do not connect to and influence each other like neurons. You should know better if you're a neuroscientist and deep learning researcher.