Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•15m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•21m ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•21m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•24m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•27m ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•37m ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•37m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•42m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•46m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•47m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•50m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•50m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•53m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comment

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comment

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comment

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comment

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
2•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comment

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Revisiting Minsky's Society of Mind in 2025

https://suthakamal.substack.com/p/revisiting-minskys-society-of-mind
119•suthakamal•7mo ago

Comments

suthakamal•7mo ago
As a teen in the '90s, I dismissed Marvin Minsky’s 1986 classic, The Society of Mind, as outdated. But decades later, as monolithic large language models reach their limits, Minsky’s vision—intelligence emerging from modular "agents"—seems strikingly prescient. Today’s Mixture-of-Experts models, multi-agent architectures, and internal oversight mechanisms are effectively operationalizing his insights, reshaping how we think about building robust, scalable, and aligned AI systems.
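To make the analogy concrete, here's a rough sketch (my own toy illustration, not any production MoE implementation) of the routing idea: a gate scores a set of specialist "experts" and sends each input to the top-scoring ones, so no single module handles everything.

```python
# Toy Mixture-of-Experts router (illustrative sketch only): each "expert"
# is a dumb specialist; a gate decides which experts see each input.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class Expert:
    """A simple specialist 'agent' with its own transformation."""
    def __init__(self, name, weight):
        self.name, self.weight = name, weight
    def __call__(self, x):
        return self.weight * x

class MoE:
    """Route each input to the top-k experts by gate score, then combine
    their outputs weighted by the (renormalized) gate probabilities."""
    def __init__(self, experts, gate_scores_fn, k=2):
        self.experts, self.gate, self.k = experts, gate_scores_fn, k
    def __call__(self, x):
        probs = softmax(self.gate(x))
        topk = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:self.k]
        norm = sum(probs[i] for i in topk)
        return sum(probs[i] / norm * self.experts[i](x) for i in topk)

experts = [Expert("double", 2.0), Expert("negate", -1.0), Expert("identity", 1.0)]
# Toy gate: prefers "double" for positive inputs, "negate" for negative ones.
gate = lambda x: [x, -x, 0.0]
moe = MoE(experts, gate, k=1)
print(moe(3.0))   # routed to "double" -> 6.0
print(moe(-3.0))  # routed to "negate" -> 3.0
```

The gate here is hand-written for clarity; in a real MoE it is itself a learned network.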
detourdog•7mo ago
I was very inspired by the book in 1988-89 as a second-year industrial design student. I think there was a thread about this on HN about two years ago.
generalizations•7mo ago
Finally someone mentions this. Maybe I've been in the wrong circles, but I've been wishing I had the time to implement a society-of-mind-inspired system ever since llamacpp got started, and I never saw anyone else reference it until now.
sva_•7mo ago
Honestly, I never really saw the point of it. It seems like introducing a whole bunch of inductive biases, which Richard Sutton's 'The Bitter Lesson' warned against.
suthakamal•7mo ago
Rich Sutton's views are far less interesting than Minsky's IMO.
RaftPeople•7mo ago
> Rich Sutton's views are far less interesting than Minsky's IMO.

I don't think Minsky's and Sutton's views are in contradiction; they seem to be orthogonal.

Minsky: the mind is just a collection of a bunch of function specific areas/modules/whatever you want to call them

Sutton: trying to embed human knowledge into the system (i.e. manually) is the least effective way to get there. Search and learning are more effective (especially as computational capabilities increase)

Minsky talks about what the structure of a generalized intelligent system looks like. Sutton talks about the most effective way to create the system, but does not exclude the possibility that there are many different functional areas specialized to handle specific domains that combine to create the whole.

People have paraphrased Sutton as simply "scale" is the answer and I disagreed because to me learning is critical, but I just read what he actually wrote and he emphasizes learning.
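As a toy illustration of the Sutton side (my own sketch, nothing from either author): an epsilon-greedy bandit agent that embeds no domain knowledge at all, and still converges on the best action purely through trial and learning.

```python
# Epsilon-greedy bandit: no human knowledge is baked in; the agent just
# tries actions and learns running average rewards from experience.
import random

random.seed(0)

true_means = [0.2, 0.5, 0.9]   # hidden payoff of each arm (unknown to agent)
estimates = [0.0] * 3
counts = [0] * 3

def pull(arm):
    return true_means[arm] + random.uniform(-0.1, 0.1)

for step in range(2000):
    if random.random() < 0.1:                               # explore
        arm = random.randrange(3)
    else:                                                   # exploit estimates
        arm = max(range(3), key=lambda a: estimates[a])
    reward = pull(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

best = max(range(3), key=lambda a: estimates[a])
print(best)  # the agent has learned that arm 2 pays best
```

The point of the sketch is only that the "knowledge" ends up in the learned estimates, not in hand-written rules.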

suthakamal•7mo ago
Okay, consider my perspective changed.

I take Sutton's Bitter Lesson to basically say that compute scale tends to win over projecting what we think makes sense as a structure for thinking.

I also think that as we move away from purely von Neumann architectures to more neuromorphic things, the algorithms we design and the ways those systems scale will change. Still, I think I agree that scaling compute / learning will continue to be a fruitful path.

lubujackson•7mo ago
I found and read this book from the library completely randomly like 20 years ago, and I still remember a lot of the concepts. Definitely seems like a foundational approach for how to architect intelligent systems with a computer. Back when I wasn't even thinking about any of that and was just interested in the philosophy, I thought his approach and the fullness of his ideas were remarkable. Glad to see it becoming a more central text!
fishnchips•7mo ago
Having studied sociology and psychology in my previous life, I am now surprised at how relevant some of the almost-forgotten ideas have become to my current life as a dev!
griffzhowl•7mo ago
Interesting. What kind of psychological ideas are most relevant?
fishnchips•7mo ago
Skinner's behaviorism for sure ("The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man").

But also Dennett's origins of consciousness.

What I mean here is that the discussion among the AI proponents and detractors about machines "thinking" or being "conscious" seems to ignore what neuropsychology and cognitive psychology found obvious for decades - that there is no uniform concept of "thinking" or "consciousness" in humans, either.

colechristensen•7mo ago
MIT OpenCourseWare course including video lectures taught by Minsky himself:

https://ocw.mit.edu/courses/6-868j-the-society-of-mind-fall-...

suthakamal•7mo ago
amazing find. thank you for sharing this!
homefree•7mo ago
> Eventually, I dismissed Minsky’s theory as an interesting relic of AI history, far removed from the sleek deep learning models and monolithic AI systems rising to prominence.

That was my read of it when I checked it out a few years ago, obsessed as the field was with explicit rule-based Lisp expert systems and "good old fashioned AI" ideas that never made much sense, were nothing like how our minds work, and were obvious dead ends that did little of anything actually useful (imo). All that stuff made the AI field a running joke for decades.

This feels a little like falsely attributing new ideas that work to old work that was pretty different? Is there something specific from Minsky that would change my mind about this?

I recall reading there were some early papers that suggested some neural network ideas more similar to the modern approach (iirc), but the hardware just didn't exist at the time for them to be tried. That stuff was pretty different from the mainstream ideas at the time though and distinct from Minsky's work (I thought).

spiderxxxx•7mo ago
I think you may be mistaking Society of Mind for a different book. It's not about Lisp or "good old fashioned AI" but about how the human mind may work - something that we could possibly simulate. It's a set of observations about how we perform thought. The ideas in the book are not tied to a specific technology, but to how a complex system such as the human brain works.
suthakamal•7mo ago
I don't think we're talking about the same book. Society of Mind is definitely not an in-the-weeds book that digs into things like Lisp, etc. in any detail. Instead of changing your mind, I'd encourage you to re-read Minsky's book if you found my essay compelling, and ignore it if not.
adastra22•7mo ago
You are surrounded by GOFAI programs that work well every moment of your life, from air traffic control planning to heuristics-based compiler optimization. GOFAI has this problem where as soon as it solves a problem and gets it working, it stops being "real AI" in the minds of the population writ large.
homefree•7mo ago
Because it isn't AI and it never was and had no path to becoming it, the new stuff is and the difference is obvious.
adastra22•7mo ago
Go read an AI textbook from the 80’s. It was all about optimizations and heuristics. That was the field.

Now if you write a SAT solver or a code optimizer you don't call it AI. But those algorithms were invented by AI researchers back when the population as a whole considered these sorts of things to be intelligent behavior.
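For what it's worth, here's the flavor of what those textbooks covered: a minimal DPLL-style backtracking SAT solver (my own sketch, not any particular textbook's code) - plain search plus a unit-propagation heuristic.

```python
# Minimal DPLL-style SAT solver sketch: backtracking search with unit
# propagation, the kind of algorithm 1980s AI textbooks treated as core AI.
# Clauses are lists of nonzero ints; -n means "not variable n" (DIMACS style).

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:  # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None            # empty clause: conflict
            if len(lits) == 1:         # unit clause: forced assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment              # all clauses satisfied
    var = abs(clauses[0][0])           # branch on an unassigned variable
    for choice in (True, False):
        result = dpll(clauses, {**assignment, var: choice})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])
print(model is not None)  # satisfiable
```

Modern CDCL solvers add clause learning and smarter heuristics on top of exactly this skeleton.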

homefree•7mo ago
I agree with you that it was called AI by the field, but that’s also why the field was a joke imo.

Until LLMs, everything casually called AI clearly wasn't intelligence, and the field was pretty uninteresting - it looked like a dead end with no idea how to actually build intelligence. That changed around 2014, but it wasn't because of GOFAI; it was because of a new approach.

mcphage•7mo ago
Philosophy has the same problem, as a field. Many fields of study have grown out of philosophy, but as soon as something is identified, people say “well that’s not Philosophy, that’s $X” … and then people act like philosophy is useless and hasn’t accomplished anything.
empiko•7mo ago
I completely agree with you and I am surprised by the praise in this thread. The entire research program this book represents has been dead for decades already.
photonthug•7mo ago
It seems like you might be confusing "research programs" with things like "branding" and surface-level terminology. And probably missing the fact that society-of-mind is about architecture more than implementation, so it's pretty agnostic about implementation details.

Here, enjoy this thing clearly building on SoM ideas, edited earlier this week: https://github.com/camel-ai/camel/blob/master/camel/societie...

suthakamal•7mo ago
I pretty clearly articulate the opposite. What's your evidence to support your claim?
empiko•7mo ago
The problem with your argument is that what you call an agent is nothing like what Minsky envisioned. The agents in Minsky's world are very simple rule-based entities ("nothing more than a few switches") that are composed in vast hierarchies. The argument Minsky is making is that if you compose enough simple agents in a smart way, an intelligence will emerge. What we use today as agents is nothing like that: each agent itself is considered intelligent (directly opposing Minsky's vision, "none of our agents is intelligent"), while organized along very simple principles.
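To make the contrast concrete, here's a toy sketch (my own illustration; the names loosely follow the book's BUILDER example, and none of this is Minsky's actual code) of an "agency" wired together from dumb rule agents, where no part is intelligent on its own:

```python
# A Minsky-style "agency" in miniature: FIND, GET, and PUT are trivially
# dumb agents; BUILDER just wires them together, and tower-building
# behavior emerges from the composition.

def find(world):
    """Dumb agent: return the position of any loose block, else None."""
    for pos, item in world["table"].items():
        if item == "block":
            return pos
    return None

def get(world, pos):
    """Dumb agent: pick up whatever is at pos."""
    world["hand"] = world["table"].pop(pos)

def put(world):
    """Dumb agent: drop the held item onto the tower."""
    world["tower"].append(world["hand"])
    world["hand"] = None

def builder(world):
    """The 'agency': a loop over the dumb agents, nothing more."""
    while (pos := find(world)) is not None:
        get(world, pos)
        put(world)

world = {"table": {1: "block", 3: "block", 5: "block"}, "hand": None, "tower": []}
builder(world)
print(world["tower"])  # ['block', 'block', 'block']
```

A modern LLM "agent" would replace each of these functions with a full model, which is the inversion being described above.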
homefree•7mo ago
This is reminding me of what I thought I was remembering. I don't have the book anymore, but I remember starting it and reading a few chapters before putting it back on the shelf; its core ideas seemed to have been shown to be wrong.
drannex•7mo ago
Good timing, I just started rereading my copy last week to get my vibe back.

Not only is it great for tech nerds such as ourselves, but it's also a great philosophy for thinking about and living life. Such a phenomenal read: easy, simple, wonderful format. I wish more tech-focused books were written in this style.

mblackstone•7mo ago
In 2004 I previewed Minsky's chapters-in-progress for "The Emotion Machine", and exchanged some comments with him (which was a thrill for me). Here is an excerpt from that exchange:

Me: I am one of your readers who falls into the gap between research and implementation: I do neither. However, I am enough of a reader of research, and have done enough implementation and software project management that when I read of ideas such as yours, I evaluate them for implementability. From this point of view, "The Society of Mind" was somewhat frustrating: while I could well believe in the plausibility of the ideas, and saw their value in organizing further thought, it was hard to see how they could be implemented. The ideas in "The Emotion Machine" feel more implementable.

Minsky: Indeed it was. So, in fact, the new book is the result of 15 years of trying to fix this, by replacing the 'bottom-up' approach of SoM by the 'top-down' ideas of the Emotion machine.

suthakamal•7mo ago
agree. A lot has changed in the last 20 years, which makes SoM much more applicable. I would've agreed in 2004 (and say as much in the essay).
neilv•7mo ago
It might've been Push Singh (Minsky's protege) who said that every page of Society of Mind was someone's AI dissertation waiting to happen.

When I took Minsky's Society of Mind class, IIRC, it actually had the format -- not of going through the pages and chapters -- but of him walking in, and talking about whatever he'd been working on that day, for writing The Emotion Machine. :)

ggm•7mo ago
Minsky disliked how Harry Harrison changed the end of "The Turing Option" and wrote a different ending.

(not directly related to the post but anyway)

frozenseven•7mo ago
Jürgen Schmidhuber's team is working on this, applying these ideas in a modern context:

https://arxiv.org/abs/2305.17066

https://github.com/metauto-ai/NLSOM

https://ieeexplore.ieee.org/document/10903668