I don't think Minsky's and Sutton's views are in contradiction; they seem to be orthogonal.
Minsky: the mind is just a collection of function-specific areas/modules/whatever you want to call them.
Sutton: trying to embed human knowledge into the system (i.e. manually) is the least effective way to get there; search and learning are more effective (especially as computational capabilities increase).
Minsky talks about what the structure of a generalized intelligent system looks like. Sutton talks about the most effective way to create the system, but does not exclude the possibility that there are many different functional areas specialized to handle specific domains that combine to create the whole.
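To make the "orthogonal" point concrete, here is a toy sketch of my own (not anything Minsky or Sutton actually wrote): the overall structure is a society of narrow specialist modules (Minsky's claim about structure), while each module's behavior is learned from examples rather than hand-coded (Sutton's claim about method). All the names here are made up for illustration.

    # Toy sketch: a society of narrow specialist modules, each of which
    # acquires its behavior by learning from examples, not hand-written rules.

    class LearnedModule:
        """A tiny specialist that learns a lookup policy from examples."""
        def __init__(self, name):
            self.name = name
            self.table = {}              # learned mapping, no hand-coded rules

        def train(self, examples):       # examples: [(input, output), ...]
            for x, y in examples:
                self.table[x] = y

        def can_handle(self, x):
            return x in self.table

        def act(self, x):
            return self.table[x]

    class Society:
        """Minsky-style structure: many narrow modules behind one dispatcher."""
        def __init__(self, modules):
            self.modules = modules

        def act(self, x):
            for m in self.modules:
                if m.can_handle(x):
                    return f"{m.name}: {m.act(x)}"
            return "no module handles this yet"

    # Each specialist is produced by learning (Sutton), not by hand-coding rules.
    arith = LearnedModule("arithmetic"); arith.train([("2+2", "4"), ("3*3", "9")])
    greet = LearnedModule("greeting");   greet.train([("hi", "hello"), ("bye", "goodbye")])

    society = Society([arith, greet])
    print(society.act("2+2"))   # -> arithmetic: 4
    print(society.act("hi"))    # -> greeting: hello

The point of the toy is only that the two claims compose: you can keep the modular structure and still fill in every module by search and learning.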
People have paraphrased Sutton as saying that "scale" alone is the answer, and I disagreed because to me learning is critical, but I just read what he actually wrote and he does emphasize learning.
I take Sutton's Bitter Lesson to basically say that compute scale tends to win over projecting what we think makes sense as a structure for thinking.
I also think that as we move away from purely von Neumann architectures to more neuromorphic things, the algorithms we design and the ways those systems scale will change. Still, I think I agree that scaling compute and learning will continue to be a fruitful path.
But also Dennett's origins of consciousness.
What I mean here is that the discussion among AI proponents and detractors about machines "thinking" or being "conscious" seems to ignore what neuropsychology and cognitive psychology have treated as obvious for decades: that there is no uniform concept of "thinking" or "consciousness" in humans, either.
https://ocw.mit.edu/courses/6-868j-the-society-of-mind-fall-...
That was my read of it when I checked it out a few years ago: obsessed with explicit rule-based Lisp expert systems and "good old-fashioned AI" ideas that never made much sense, were nothing like how our minds work, and were obvious dead ends that did little of anything actually useful (imo). All that stuff made the AI field a running joke for decades.
This feels a little like falsely attributing new ideas that work to old work that was pretty different? Is there something specific from Minsky that would change my mind about this?
I recall reading that there were some early papers suggesting neural network ideas more similar to the modern approach, but the hardware just didn't exist at the time for them to be tried. That stuff was pretty different from the mainstream ideas of the era, though, and distinct from Minsky's work (I thought).
Now if you write a SAT solver or a code optimizer, you don't call it AI. But those algorithms were invented by AI researchers back when the population as a whole considered these sorts of things to be intelligent behavior.
Until LLMs, everything casually called AI clearly wasn't intelligence, and the field was pretty uninteresting: it looked like a dead end with no idea how to actually build intelligence. That changed around 2014, but it wasn't because of GOFAI; it was because of a new approach.
Here, enjoy this thing clearly building on SoM ideas, edited earlier this week: https://github.com/camel-ai/camel/blob/master/camel/societie...
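For anyone who doesn't want to click through, the pattern that file implements is roughly a "society" of role-playing agents passing messages. Below is my own stripped-down toy of that pattern; it is NOT camel's actual API, and the agent functions are canned stand-ins for LLM-backed agents.

    # Generic "society of agents" toy (not camel's API): two role-playing
    # agents exchange messages in a loop.

    def user_agent(task, last_reply):
        # Plays the "user" role: states the task, then asks for continuation.
        return f"Task: {task}" if last_reply is None else f"Please continue: {task}"

    def assistant_agent(message):
        # Plays the "assistant" role: a canned responder standing in for an LLM.
        return f"[assistant] working on -> {message}"

    def run_society(task, turns=3):
        reply = None
        for _ in range(turns):
            msg = user_agent(task, reply)
            reply = assistant_agent(msg)
            print(reply)

    run_society("summarize the Society of Mind")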
Not only is it great for tech nerds such as ourselves, but it's also a great philosophy for thinking about and living life. Such a phenomenal read: easy, simple, wonderful format. I wish more tech-focused books were written in this style.
Minsky: Indeed it was. So, in fact, the new book is the result of 15 years of trying to fix this, by replacing the 'bottom-up' approach of SoM by the 'top-down' ideas of The Emotion Machine.
When I took Minsky's Society of Mind class, IIRC, it actually had the format -- not of going through the pages and chapters -- but of him walking in and talking about whatever he'd been working on that day while writing The Emotion Machine. :)
(not directly related to the post but anyway)
https://arxiv.org/abs/2305.17066