frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•11m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
3•o8vm•13m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•14m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•27m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•30m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•32m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•40m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•42m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•43m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•43m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•46m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•47m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•51m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•53m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•53m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•54m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•56m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•59m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•1 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Revisiting Minsky's Society of Mind in 2025

https://suthakamal.substack.com/p/revisiting-minskys-society-of-mind
119•suthakamal•7mo ago

Comments

suthakamal•7mo ago
As a teen in the '90s, I dismissed Marvin Minsky’s 1986 classic, The Society of Mind, as outdated. But decades later, as monolithic large language models reach their limits, Minsky’s vision—intelligence emerging from modular "agents"—seems strikingly prescient. Today’s Mixture-of-Experts models, multi-agent architectures, and internal oversight mechanisms are effectively operationalizing his insights, reshaping how we think about building robust, scalable, and aligned AI systems.
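For readers who haven't seen one up close, the Mixture-of-Experts mechanism mentioned above can be sketched in a few lines of Python. This is a toy with invented shapes and a numpy-only gating function, not any real model's implementation: a router scores the "experts", and only the top few specialized modules contribute to each output.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy mixture-of-experts: each "expert" is a small linear map, and a
# gating network decides which experts handle a given input -- a loose
# echo of Minsky's specialized agents. All shapes here are arbitrary.
n_experts, d_in, d_out = 4, 8, 3
experts = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_in, n_experts))

def moe_forward(x, top_k=2):
    scores = softmax(x @ gate_w)        # gating weights over all experts
    top = np.argsort(scores)[-top_k:]   # route to the top-k experts only
    weights = scores[top] / scores[top].sum()  # renormalize the winners
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_in))
print(y.shape)  # (3,)
```

The sparsity is the point: most experts stay idle for any given input, which is what makes the "society of specialists" framing feel apt.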
detourdog•7mo ago
I was very inspired by the book in 1988-89 as a second-year industrial design student. I think there was a thread about it on HN about two years ago.
generalizations•7mo ago
Finally someone mentions this. Maybe I've been in the wrong circles, but I've been wishing I had the time to implement a society-of-mind-inspired system ever since llamacpp got started, and I never saw anyone else reference it until now.
sva_•7mo ago
Honestly, I never really saw the point of it. It seems like introducing a whole bunch of inductive biases, which Richard Sutton's 'The Bitter Lesson' warned against.
suthakamal•7mo ago
Rich Sutton's views are far less interesting than Minsky's IMO.
RaftPeople•7mo ago
> Rich Sutton's views are far less interesting than Minsky's IMO.

I don't think Minsky's and Sutton's views are in contradiction, they seem to be orthogonal.

Minsky: the mind is just a collection of a bunch of function specific areas/modules/whatever you want to call them

Sutton: trying to embed human knowledge into the system (i.e. manually) is the least effective way to get there. Search and learning are more effective (especially as computational capabilities increase)

Minsky talks about what the structure of a generalized intelligent system looks like. Sutton talks about the most effective way to create the system, but does not exclude the possibility that there are many different functional areas specialized to handle specific domains that combine to create the whole.

People have paraphrased Sutton as saying that "scale" is simply the answer, and I disagreed because to me learning is critical - but I just read what he actually wrote, and he emphasizes learning.

suthakamal•7mo ago
Okay, consider my perspective changed.

I take Sutton's Bitter Lesson to basically say that compute scale tends to win over projecting what we think makes sense as a structure for thinking.

I also think that as we move away from purely von Neumann architectures to more neuromorphic things, the algorithms we design and the ways those systems scale will change. Still, I think I agree that scaling compute / learning will continue to be a fruitful path.

lubujackson•7mo ago
I found and read this book from the library completely randomly like 20 years ago, and I still remember a lot of the concepts. It definitely seems like a foundational approach for how to architect intelligent systems with a computer. Before I was even thinking about any of that, when I was just interested in the philosophy, I thought his approach and the fullness of his ideas were remarkable. Glad to see it becoming a more central text!
fishnchips•7mo ago
Having studied sociology and psychology in my previous life, I am now surprised how relevant some of those almost-forgotten ideas have become to my current life as a dev!
griffzhowl•7mo ago
Interesting. What kind of psychological ideas are most relevant?
fishnchips•7mo ago
Skinner's behaviorism for sure ("The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man").

But also Dennett's origins of consciousness.

What I mean here is that the discussion among the AI proponents and detractors about machines "thinking" or being "conscious" seems to ignore what neuropsychology and cognitive psychology found obvious for decades - that there is no uniform concept of "thinking" or "consciousness" in humans, either.

colechristensen•7mo ago
MIT OpenCourseWare course including video lectures taught by Minsky himself:

https://ocw.mit.edu/courses/6-868j-the-society-of-mind-fall-...

suthakamal•7mo ago
amazing find. thank you for sharing this!
homefree•7mo ago
> Eventually, I dismissed Minsky’s theory as an interesting relic of AI history, far removed from the sleek deep learning models and monolithic AI systems rising to prominence.

That was my read of it when I checked it out a few years ago: obsessed with explicit rule-based Lisp expert systems and "good old fashioned AI" ideas that never made much sense, were nothing like how our minds work, and were obvious dead ends that did little of anything actually useful (imo). All that stuff made the AI field a running joke for decades.

This feels a little like falsely attributing new ideas that work to old work that was pretty different? Is there something specific from Minsky that would change my mind about this?

I recall reading there were some early papers that suggested some neural network ideas more similar to the modern approach (iirc), but the hardware just didn't exist at the time for them to be tried. That stuff was pretty different from the mainstream ideas at the time though and distinct from Minsky's work (I thought).

spiderxxxx•7mo ago
I think you may be mistaking Society of Mind for a different book. It's not about Lisp or "good old fashioned AI" but about how the human mind may work - something that we could possibly simulate. It offers observations about how we perform thought. The ideas in the book are not tied to a specific technology; they are about how a complex system such as the human brain works.
suthakamal•7mo ago
I don't think we're talking about the same book. Society of Mind is definitely not an in-the-weeds book that digs into things like Lisp, etc. in any detail. Instead of changing your mind, I'd encourage you to re-read Minsky's book if you found my essay compelling, and ignore it if not.
adastra22•7mo ago
You are surrounded by GOFAI programs that work well every moment of your life, from air traffic control planning to heuristics-based compiler optimization. GOFAI has this problem where as soon as it solves a problem and gets it working, it stops being "real AI" in the minds of the population writ large.
homefree•7mo ago
Because it isn't AI, it never was, and it had no path to becoming it; the new stuff is, and the difference is obvious.
adastra22•7mo ago
Go read an AI textbook from the '80s. It was all about optimizations and heuristics. That was the field.

Now if you write a SAT solver or a code optimizer you don't call it AI. But those algorithms were invented by AI researchers back when the population as a whole considered these sorts of things to be intelligent behavior.

homefree•7mo ago
I agree with you that it was called AI by the field, but that’s also why the field was a joke imo.

Until LLMs, everything casually called AI clearly wasn't intelligence, and the field was pretty uninteresting - it looked like a dead end with no idea how to actually build intelligence. That changed around 2014, but it wasn't because of GOFAI; it was because of a new approach.

mcphage•7mo ago
Philosophy has the same problem, as a field. Many fields of study have grown out of philosophy, but as soon as something is identified, people say “well that’s not Philosophy, that’s $X” … and then people act like philosophy is useless and hasn’t accomplished anything.
empiko•7mo ago
I completely agree with you, and I am surprised by the praise in this thread. The entire research program that this book represents has been dead for decades already.
photonthug•7mo ago
It seems like you might be confusing "research programs" with things like "branding" and surface-level terminology. And probably missing the fact that society-of-mind is about architecture more than implementation, so it's pretty agnostic about implementation details.

Here, enjoy this thing clearly building on SoM ideas, edited earlier this week: https://github.com/camel-ai/camel/blob/master/camel/societie...

suthakamal•7mo ago
I pretty clearly articulate the opposite. What's your evidence to support your claim?
empiko•7mo ago
The problem with your argument is that what you call an agent is nothing like what Minsky envisioned. The agents in Minsky's world are very simple rule-based entities ("nothing more than a few switches") composed in vast hierarchies. The argument Minsky is making is that if you compose enough simple agents in a smart way, intelligence will emerge. What we use today as agents is nothing like that: each agent is itself considered intelligent (directly opposing Minsky's vision that "none of our agents is intelligent"), while being organized along very simple principles.
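To make the contrast concrete, a Minsky-style agent really can be just "a few switches". Here's a hypothetical toy sketch in Python - all names are invented, loosely echoing SoM's block-builder example, and nothing here is from the book's text itself:

```python
# Each "agent" is a trivial rule, not an intelligent entity.
# Competence is supposed to emerge only from composing many of them.

def see_block(world):
    # Trivial detector agent: fires if a block is in view.
    return world.get("block_visible", False)

def hand_empty(world):
    # Trivial detector agent: fires if the hand holds nothing.
    return not world.get("holding", False)

def grasp(world):
    # Trivial effector agent: closes the hand.
    world["holding"] = True

def and_agent(conditions, action):
    """Composite agent: acts only when every sub-agent's condition holds."""
    def run(world):
        if all(cond(world) for cond in conditions):
            action(world)
    return run

# "Pick up" is not itself intelligent -- it is just two switches
# wired to one action, stacked one level up the hierarchy.
pick_up = and_agent([see_block, hand_empty], grasp)

world = {"block_visible": True}
pick_up(world)
print(world["holding"])  # True
```

A modern LLM "agent" inverts this: the unit is already a general reasoner, and the composition (a loop, a router) is the trivial part.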
homefree•7mo ago
This is reminding me of what I thought I was remembering. I don't have the book anymore, but I remember starting it and reading a few chapters before putting it back on the shelf; its core ideas seemed to have been shown to be wrong.
drannex•7mo ago
Good timing, I just started rereading my copy last week to get my vibe back.

Not only is it great for tech nerds such as ourselves, but it's also a great philosophy for thinking about and living life. Such a phenomenal read - easy, simple, wonderful format. I wish more tech-focused books were written in this style.

mblackstone•7mo ago
In 2004 I previewed Minsky's chapters-in-progress for "The Emotion Machine", and exchanged some comments with him (which was a thrill for me). Here is an excerpt from that exchange:

Me: I am one of your readers who falls into the gap between research and implementation: I do neither. However, I am enough of a reader of research, and have done enough implementation and software project management, that when I read of ideas such as yours, I evaluate them for implementability. From this point of view, "The Society of Mind" was somewhat frustrating: while I could well believe in the plausibility of the ideas, and saw their value in organizing further thought, it was hard to see how they could be implemented. The ideas in "The Emotion Machine" feel more implementable.

Minsky: Indeed it was. So, in fact, the new book is the result of 15 years of trying to fix this, by replacing the 'bottom-up' approach of SoM with the 'top-down' ideas of The Emotion Machine.

suthakamal•7mo ago
agree. A lot has changed in the last 20 years, which makes SoM much more applicable. I would've agreed in 2004 (and say as much in the essay).
neilv•7mo ago
It might've been Push Singh (Minsky's protégé) who said that every page of Society of Mind was someone's AI dissertation waiting to happen.

When I took Minsky's Society of Mind class, IIRC, it actually had the format -- not of going through the pages and chapters -- but of him walking in, and talking about whatever he'd been working on that day, for writing The Emotion Machine. :)

ggm•7mo ago
Minsky disliked how Harry Harrison changed the end of "The Turing Option" and wrote a different ending.

(not directly related to the post but anyway)

frozenseven•7mo ago
Jürgen Schmidhuber's team is working on this, applying these ideas in a modern context:

https://arxiv.org/abs/2305.17066

https://github.com/metauto-ai/NLSOM

https://ieeexplore.ieee.org/document/10903668