>I am more interested in the possibility of producing models of the action of the brain than in the applications to practical computing...although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model... https://en.wikipedia.org/wiki/Unorganized_machine
Honestly, trying to reverse engineer something to understand how it works is interesting and potentially worthwhile! To me it's obvious that "broadly mechanistic" or causal explanations of specific cognitive functions can be created. I am not doubting that a "machine" can mimic human cognitive abilities, insofar as we can state or "tokenize" them precisely. I am pretty sure that is the whole basis of Cognitive Science.
But just because we can mimic those capacities, does that imply that the same mechanisms exist in nature? Herbert Simon made a distinction between "natural" and "artificial" systems: an LLM's function is to model language (and they do a damn good job of that!). Does the brain have one function, and what is it? If you build a submarine, does that tell you something about how fish swim? Even if it swims faster than any of the fish?
Artificial neural networks are already helping us understand brains. For example, there was a lot of debate about "universal grammar":
>humans possess an innate, biological predisposition for language acquisition, including a "Language Acquisition Device"...
and it now seems demonstrated that LLM-like neural networks are quite good at picking up language without an "acquisition device" beyond the general network.
The point of this thread and the paper isn't that cognition is not an important goal to understand, nor that it isn't computational (computation seems to be the best model we currently have). It's that AGI is (as the previous comment mentioned) a marketing term of little scientific value. It is too vague and carries more of the baggage of religious belief than of cold hard scientific inquiry. It used to just be called "AI", or, as was debated in the infancy of the field, just "complex information processing". The current for-profit companies (let's be clear, OpenAI is not really a charity) don't actually seem to care about understanding anything ... to an outsider they appear to maximize hype to drum up investment so that they can build a God, while some people get very very rich. To many in these communities, intelligence is some magical quantity that can "solve everything!" I am not sure which part of those beliefs is scientific. Why are we earmarking hundreds of billions of dollars (some of it public money) to benefit these companies?
>humans possess an innate, biological predisposition for language acquisition, including a "Language Acquisition Device"...
Would you say that one day someone just happened to find an LLM chilling under the sun, we spoke some words to it for a few years by pointing at things, and one day it was speaking full sentences and asking about the world? Or is it that a lot of engineering work went into specifically designing something for the purpose of generating text ... Do you think humans were designed to speak or to be intelligent, and by whom? Can dolphins, gorillas, and elephants also speak language? They have complex brains with a lot of neurons. Chomsky's point was roughly "if human, then can acquire language", so "a non-human can produce language" doesn't refute the central point. I am no expert on Chomsky and you may know much more about that. But again, it doesn't seem relevant to the actual thread.
Are you saying it's impossible to understand human brains?
As for "understanding" you have to be more precise about what you mean: we created LLMs and Transformer based ANNs (and ANNs themselves) and it appears we are all mystified by what they can do ... as though they are magic ... and will lead to Super-intelligence (an even more poorly defined term than regular-ass intelligence).
I'm not trying to be difficult, but I sometimes wonder what would happen if all of us took a step back and really tried to understand this tech before jumping to conclusions! "The thing that was designed to be a universal function approximator approximates the function we trained it to approximate! HOLY CRAP WE MAY HAVE MADE GOD!" It's clear that the technologies we currently have are miraculous and do amazing things! But are they really doing exactly what humans do? Is it possible to converge on similar destinations without taking the same route? Are we even at the exact same destination?
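To make that "universal approximator" quip concrete, here is a toy sketch (my own illustration, not from the thread or the paper): a tiny two-layer network trained by plain gradient descent to fit sin(x). That it succeeds is unremarkable by design; it says nothing about whether the mechanism resembles anything a brain does.

```python
# Toy illustration (mine, not from the thread): a two-layer tanh network
# is a universal approximator in the classic sense, so of course it fits
# the function we train it on. Here it fits sin(x) on [-pi, pi].
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))  # training inputs
y = np.sin(x)                                  # target function

# One hidden layer of 32 tanh units, scalar output.
W1 = rng.normal(0, 1.0, size=(1, 32))
b1 = np.zeros((1, 32))
W2 = rng.normal(0, 0.1, size=(32, 1))
b2 = np.zeros((1, 1))

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    # Backward pass for mean-squared-error loss.
    grad_pred = 2 * (pred - y) / len(x)
    gW2 = h.T @ grad_pred
    gb2 = grad_pred.sum(axis=0, keepdims=True)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(axis=0, keepdims=True)
    # Plain gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"final MSE: {mse:.5f}")  # small, exactly as designed
```

The low final error is the expected outcome of the training objective, not evidence of a human-like mechanism; that's the whole point of the sarcasm above.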
I am not an expert ... but to me anything that is associated with these companies is marketing. I understand that makes me a "stick in the mud" but it's not a crime to be skeptical! THAT SHOULD BE THE DEFAULT ... we used to believe in gods, demons, and monsters. Given that Anthropic is very very closely related to EA and Longtermism and given that this is the "slickest" paper I have ever read ...
If I had the mental capacity to have read a good amount of the internet and millions of pirated books ... I wouldn't be confused by perturbations in questions I have already previously seen.
I am sure there are lots of cogent rebuttals to what I am saying, and hey, maybe I'm just a sack of meat that is miffed about being replaced by a "superior intelligence" that is "more evolved". But that isn't how evolution works either, and it's troubling to see that sentiment becoming so prevalent.
I'll explain why very simply. The vision of AI and the vision of Virtual Reality both existed well before the technology. We envisioned humanoid robots well before we ever had a chance of making them. We also envisioned an all-knowing AI well before we had our current technology. We will continue to envision the end-state because it is the most natural conclusion. No human can fail to imagine the inevitable. Every human, technical or not, has the capacity to fully imagine this future, which means the entirety of the human race will be directed toward this foregone conclusion.
Like God and Death (and taxes). shrugs
Smith: It is inevitable, Mr. Anderson
If instead we called them what they are, Large Language Models, would you still say that they were hurtling inevitably towards Generalized Intelligence?