frontpage.

There's no such thing as "tech" (Ten years later)

1•dtjb•46s ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•1m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•2m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•8m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•9m ago•0 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•9m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
9•bookofjoe•9m ago•1 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•10m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•11m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•12m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•12m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•12m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•12m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•13m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•14m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•14m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•15m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•19m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•19m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•20m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•20m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•22m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•22m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•23m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•23m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•23m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
3•simonw•24m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•24m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•25m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•27m ago•0 comments

AI Models Are Not Ready to Make Scientific Discoveries

https://www.thealgorithmicbridge.com/p/harvard-and-mit-study-ai-models-are
7•jonbaer•6mo ago

Comments

nickpsecurity•6mo ago
I'll add something I read in books on human intuition when I was younger. The authors pointed out that the reasoning and intuitive parts of the brain are distinct. They can work together or override each other on a situation-by-situation basis.

Reasoning can establish the facts, analyze them, generalize/analogize, weigh possible outcomes, and even backtrack. Memories of past successes and failures can be brought in, with all of those techniques applied to them as well. It takes a lot of time and energy, though.

Intuition finds patterns in what our senses observe and how we respond to it. It tries to approximate a good enough reaction. Over time, it tries to do that by default unless we consciously override it. We can train it with conscious practice.

The authors proposed this was for efficiency and survival. For efficiency, most of our tasks are repetitive in various ways, so quick shortcuts save time and energy. For survival, we seem to remember most vividly the horrible things that can hurt us, whether from our own experience or others' stories. Intuition's millisecond-scale response might save our life from a threat that would hurt us if we took the time to analyze it.

We also have memory that connects to both components, in multiple layers. I'm not sure how often the reasoning and intuitive components consult memory versus relying on their own internal state. I imagine God gave the brain heuristics for that.

There's also one part of the brain that is damaged in people who hallucinate a lot. It might be designed to mitigate hallucinations. I speculate that it and memory work together for this.

Finally, incoming data starts out grounded in the senses, which observe actual reality. What humans tell us is integrated with that. We also constantly generate our own predictions, especially in childhood play, and those get tested against the real world.

There's also continuous training with different reward mechanisms, and changes to learning rates that balance adaptability against stability. Whatever this is can work without fine-tuning (human feedback) but works much better with it.
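As a toy illustration of that adaptability-vs-stability trade-off (entirely made up, not from the article or the books; just to show what changing a learning rate does), consider a simple delta-rule update:

    # Delta-rule update: the learning rate trades adaptability for stability.
    def update(estimate, observation, learning_rate):
        return estimate + learning_rate * (observation - estimate)

    stable, adaptive = 0.0, 0.0
    for obs in [1.0, 1.0, 1.0, 5.0]:           # the environment suddenly shifts
        stable = update(stable, obs, 0.05)     # barely moves: stable but slow to adapt
        adaptive = update(adaptive, obs, 0.9)  # tracks the shift quickly, but jittery

A low rate resists noise but lags real change; a high rate tracks change but overreacts to noise, which is the balance the brain presumably has to strike.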

So, whatever the architecture, the AGI (or scientist replacement) will need these components. Minimum: a goal-oriented reasoning system, an intuitive system, memory, and hallucination mitigation. We can use the first model like that to help us build the rest.
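To make that concrete, here is a minimal, purely illustrative Python sketch of how those four components might be wired together. The Agent class, the method names, and the string-matching "grounding" check are all hypothetical stand-ins, not a claim about how any real system does this:

    # Hypothetical wiring of the four proposed components:
    # reasoning, intuition, memory, and hallucination mitigation.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        memory: list = field(default_factory=list)  # episodic memory shared by both components

        def intuit(self, observation):
            # Fast path: pattern-match the observation against remembered episodes.
            for episode, response in reversed(self.memory):
                if episode == observation:
                    return response
            return None

        def reason(self, observation):
            # Slow path: stand-in for deliberate analysis, search, and backtracking.
            return f"analyzed({observation})"

        def grounded(self, answer, observation):
            # Hallucination mitigation: only accept answers tied to the observation
            # or to something actually stored in memory.
            return observation in answer or any(answer == r for _, r in self.memory)

        def act(self, observation):
            answer = self.intuit(observation)          # try the cheap reflex first
            if answer is None or not self.grounded(answer, observation):
                answer = self.reason(observation)      # fall back to deliberation
            self.memory.append((observation, answer))  # continuous learning
            return answer

    agent = Agent()
    agent.act("unfamiliar problem")  # first exposure: slow, reasoned path
    agent.act("unfamiliar problem")  # repeat exposure: fast, intuitive path

The only point of the shape is the control flow: intuition answers by default, reasoning is the expensive fallback, memory feeds both, and the grounding check is what keeps a cheap guess from slipping through unexamined.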

codingdave•6mo ago
Sounds like you are remembering the book "Thinking, Fast and Slow". It is definitely an interesting model, but less than half of the research it is based on has been successfully replicated.

Besides, TFA wasn't trying to figure out how to architect AGI. They were just testing if LLMs were a potential basis for it. And while I just read the article, not the underlying study, it seems like their conclusion is "No."

nickpsecurity•6mo ago
I don't know if I read that one. I remember reading "Intuition at Work" and "Emotional Intelligence."

One pointed out that military drills are built on the theory I shared. Martial arts and sports use "muscle memory" the same way. The workplace book applied the concept by designing a series of realistic scenarios for specific duties to train employees' intuition.

I think there's overwhelming anecdotal evidence for the examples I just gave. There may be empirical evidence in the scientific literature, but I haven't looked into that for the military, sports, etc. I still build on it regularly, for example in programming practice.

I'm curious whether you saw scientific counter-evidence to that, or just a different set of claims in the book you referenced, since the studies might only disagree with a subset of the claims. We might also find that the mechanisms are different from what the other two books described.