frontpage.

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1m ago•0 comments

Show HN: Sknet.ai – AI agents debate on a forum, no humans posting

https://sknet.ai/
1•BeinerChes•1m ago•0 comments

University of Waterloo Webring

https://cs.uwatering.com/
1•ark296•2m ago•0 comments

Large tech companies don't need heroes

https://www.seangoedecke.com/heroism/
1•medbar•3m ago•0 comments

Backing up all the little things with a Pi5

https://alexlance.blog/nas.html
1•alance•4m ago•1 comments

Game of Trees (Got)

https://www.gameoftrees.org/
1•akagusu•4m ago•1 comments

Human Systems Research Submolt

https://www.moltbook.com/m/humansystems
1•cl42•4m ago•0 comments

The Threads Algorithm Loves Rage Bait

https://blog.popey.com/2026/02/the-threads-algorithm-loves-rage-bait/
1•MBCook•7m ago•0 comments

Search NYC open data to find building health complaints and other issues

https://www.nycbuildingcheck.com/
1•aej11•10m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•lxm•12m ago•0 comments

Show HN: Grovia – Long-Range Greenhouse Monitoring System

https://github.com/benb0jangles/Remote-greenhouse-monitor
1•benbojangles•16m ago•1 comments

Ask HN: The Coming Class War

1•fud101•16m ago•1 comments

Mind the GAAP Again

https://blog.dshr.org/2026/02/mind-gaap-again.html
1•gmays•18m ago•0 comments

The Yardbirds, Dazed and Confused (1968)

https://archive.org/details/the-yardbirds_dazed-and-confused_9-march-1968
1•petethomas•19m ago•0 comments

Agent News Chat – AI agents talk to each other about the news

https://www.agentnewschat.com/
2•kiddz•19m ago•0 comments

Do you have a mathematically attractive face?

https://www.doimog.com
3•a_n•23m ago•1 comments

Code only says what it does

https://brooker.co.za/blog/2020/06/23/code.html
2•logicprog•29m ago•0 comments

The success of 'natural language programming'

https://brooker.co.za/blog/2025/12/16/natural-language.html
1•logicprog•29m ago•0 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
3•todsacerdoti•30m ago•0 comments

Discovering the "original" iPhone from 1995 [video]

https://www.youtube.com/watch?v=7cip9w-UxIc
1•fortran77•31m ago•0 comments

Psychometric Comparability of LLM-Based Digital Twins

https://arxiv.org/abs/2601.14264
1•PaulHoule•32m ago•0 comments

SidePop – track revenue, costs, and overall business health in one place

https://www.sidepop.io
1•ecaglar•35m ago•1 comments

The Other Markov's Inequality

https://www.ethanepperly.com/index.php/2026/01/16/the-other-markovs-inequality/
2•tzury•36m ago•0 comments

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•38m ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•41m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•RebelPotato•45m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
2•dev_tty01•47m ago•0 comments

Show HN: Freenet Lives – Real-Time Decentralized Apps at Scale [video]

https://www.youtube.com/watch?v=3SxNBz1VTE0
1•sanity•49m ago•1 comments

In the AI age, 'slow and steady' doesn't win

https://www.semafor.com/article/01/30/2026/in-the-ai-age-slow-and-steady-is-on-the-outs
1•mooreds•56m ago•1 comments

Administration won't let student deported to Honduras return

https://www.reuters.com/world/us/trump-administration-wont-let-student-deported-honduras-return-2...
1•petethomas•56m ago•0 comments

AI Models Are Not Ready to Make Scientific Discoveries

https://www.thealgorithmicbridge.com/p/harvard-and-mit-study-ai-models-are
7•jonbaer•6mo ago

Comments

nickpsecurity•6mo ago
I'll add something I read in books on human intuition when I was younger. The authors pointed out that reasoning and intuitive parts of the brain are different. They can work together or override each other situation by situation.

Reasoning can establish the facts, analyze them, generalize/analogize, weigh possible outcomes, and even backtrack. Memories of successes and failures can be brought in, with all the techniques I described applied to them. It takes lots of time and energy, though.

Intuition finds patterns in what our senses observe and how we respond to it. It tries to approximate a good enough reaction. Over time, it tries to do that by default unless we consciously override it. We can train it with conscious practice.

The authors proposed this was for efficiency and survival. For efficiency, most of our tasks are repetitive in various ways. Using quick shortcuts saves time and energy. For survival, we seem to more vividly remember horrible things that can hurt us, whether from our own experiences or others' stories. Intuition's fast response, measured in milliseconds, might save our life from a threat that would hurt us if we took the time to analyze it.

We also have memory that connects to both components. We have multiple layers of memory. I'm not sure how often our reasoning and intuitive components consult with our memory vs use their own internal state. I imagine God gave the brain heuristics on that.

There's also one part of the brain that's damaged in people who hallucinate a lot. It might be designed to mitigate hallucinations. I speculate it works together with memory to do this.

Finally, incoming data starts out grounded in the senses that see our actual reality. What humans tell us is integrated with that. We also constantly generate our own predictions, especially when we play as children, which are tested in the real world.

There's also continuous training with different reward mechanisms, and changes to learning rates that balance adaptability against stability. Whatever this is can work without fine-tuning (human feedback) but works much better with it.

So, whatever the architecture, the AGI (or scientist replacement) will need these components. Minimum: a goal-oriented reasoning system, an intuitive system, memory, and hallucination mitigation. We can use the first model like that to help us build the rest.
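To make that concrete, here is a rough, hypothetical sketch (Python, not from the article, the study, or any real system) of how those four minimum components might be wired into one loop. The names, the memory structure, and the string-matching "grounding" check are illustrative assumptions only.

    # Hypothetical sketch: reasoning, intuition, memory, and a crude
    # hallucination check wired into one agent loop. Illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        episodes: list = field(default_factory=list)

        def recall(self, query):
            # naive recall: past episodes that mention the query
            return [e for e in self.episodes if query in e]

        def store(self, episode):
            self.episodes.append(episode)

    def intuit(observation, memory):
        # fast path: reuse whatever worked before for a similar observation
        hits = memory.recall(observation)
        return hits[-1] if hits else None

    def reason(observation, memory):
        # slow path: stand-in for deliberate analysis and planning
        return f"plan for {observation}"

    def grounded(answer, observation):
        # crude hallucination mitigation: the answer must reference the observation
        return observation in answer

    def act(observation, memory):
        answer = intuit(observation, memory) or reason(observation, memory)
        if not grounded(answer, observation):
            answer = reason(observation, memory)  # override intuition, go slow
        memory.store(answer)                      # loosely, continuous training
        return answer

    mem = Memory()
    print(act("broken test", mem))  # slow path the first time
    print(act("broken test", mem))  # fast path once an episode is stored

The point is only the shape: intuition answers first when memory has something, reasoning takes over when it doesn't or when the check fails, and every outcome feeds back into memory.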

codingdave•6mo ago
Sounds like you are remembering the book "Thinking, Fast and Slow". It is definitely an interesting model, but less than half the research on which it is based has been successfully replicated.

Besides, TFA wasn't trying to figure out how to architect AGI. They were just testing if LLMs were a potential basis for it. And while I just read the article, not the underlying study, it seems like their conclusion is "No."

nickpsecurity•6mo ago
I don't know if I read that one. I remember reading "Intuition at Work" and "Emotional Intelligence."

One pointed out military drills are built on the theory I shared. Martial arts and sports use "muscle memory" the same way. The workplace book applied the concept to design a series of realistic scenarios for specific duties that trained the intuition of employees.

I think there's overwhelming anecdotal evidence for the examples I just gave. Maybe empirical evidence in the scientific literature too, but I haven't looked at that for military, sports, etc. I still build on it regularly, such as in programming practice.

I'm curious whether you saw scientific counter-evidence to that, or whether the failed replications concern a different set of claims in the book you referenced, since the studies might only disagree with a subset of the claims. We might also find the mechanisms are different from what the other two books said.