One of the most poignant moments in Disco Elysium is near the very end, when you encounter a very elusive cryptid/mythic beast.
The moment is treated with a lot of care and consideration, and the conversation itself is, I think, transcendent and some of the best writing in games (or any media) ever.
The line that sticks with me most is when the cryptid says:
"The moral of our encounter is: I am a relatively median lifeform -- while it is you who are total, extreme madness. A volatile simian nervous system, ominously new to the planet. The pale, too, came with you. No one remembers it before you. The cnidarians do not, the radially symmetricals do not. There is an almost unanimous agreement between the birds and the plants that you are going to destroy us all."
It's easy to see this reflected in nature in the real world. All animals and life seem to be aware and accommodating of each other, but humans are cut out from that communication. Everything runs from us, we are not part of the conversation. We are the exclusion, the anomaly.
I think to realize this deeply inside of yourself is a big moment of growth, to see that we exist in a world that was around long before us and will be around long after us.
We proliferate incredibly quickly, we have limited care for our environments, but most importantly our primary means of sustenance is to consume other forms of life. To the point that we consider it an art form: we spend vast amounts of energy perfecting, discussing or advertising the means of cooking and eating the flesh of other life forms. To us it's normal, but surely to an alien who gains sustenance by some other means, we're absolutely terrifying. Akin to a devouring swarm.
This just seems like noble savaging birds and rabbits and deer. None of these creatures have any communication with each other, and while they may be more aware of each other's presence than a 'go hiking every once in a while' person, someone who actually spends a good amount of time in the woods, such as a hunter or birdwatcher, probably has a pretty good sense of them. The Disco Elysium quote just reads like fairly common environmentalist misanthropy, which I suppose isn't surprising considering the creators.
The local rabbits and squirrels tolerate each other but are pretty scared of me. Of course they are, I'm two hundred times bigger than they are, and much more dangerous. The local foxes are the closest thing we have to an apex predator around here, and they're rightfully terrified of this massive creature that outweighs their entire family combined.
Imagine wandering through the woods, enjoying the birds tweeting and generally being at one with nature, and then you come across a 20-ton 35ft-tall monster. You'd run away screaming.
If we ignore the headlines peddled by those who stand to benefit the most from inflaming and inciting, we live in a miraculous modern age largely devoid of much of the suffering previous generations were forced to endure. Make no mistake, there are problems, but they are growing exponentially fewer by the day.
An alternate take: humans will do what they’ve always tried to do—build, empower, engineer, cure, optimize, create, or just collaborate with other humans for the benefit of their immediate community—but now with new abilities that we couldn’t have dreamed of.
>"If it bleeds, it leads" has been a thing in news organizations since at least the 70s.
The term yellow journalism is far older.
I actually think it is very intellectually lazy to be this cynical.
This is why it is important to have societies where various forms of power are managed carefully. Limited constitutional government with guaranteed freedoms and checks and balances, for example. Regulations placed on mega corporations are another example. Restrictions to prevent the powerful in government or business (or both!) from messing around with the rest of us…
Whereas much of the technology we have today has a massive positive benefit. Simple access to information is amazing: I have learned how to fix my own vehicles and bicycles, and to do house repairs, just from YouTube.
As I said, being cynical is intellectually lazy because it allows you to focus on the negatives and dismiss the positives.
Killing animals for fun is an entire sport enjoyed by millions. Humans keep pets that kill billions of birds every year. The limited areas we've set aside to mostly let other nature be nature are constantly under threat and being shrunk down. The list of ways we completely subjugate other intelligent life on this planet is endless. We have driven many hundreds of species to extinction and kill countless billions every year.
I certainly enjoy the gains our species has made, just like everyone else on HN. I'd rather be in our position than any other species on our planet. But given our history I'm also pretty terrified of what happens if and when we run into a smarter and more powerful alien species or AI. If history is any guide, it won't go well for us.
This understanding can guide practical decisions. We shouldn't be barreling towards a potential super intelligence with no safeguards given how catastrophic that outcome could be, just like we shouldn't be shooting messages into space trying to wave our arms at whatever might be out there, any more than a small creature in the forest would want to attract the attention of a strange human.
As for hunting: I don't see anything wrong with hunting. I don't see anything wrong with eating meat.
As someone who has lived the vast majority of their life in the countryside, I also have little time for animal welfare arguments of the sort you are making.
> But given our history I'm also pretty terrified of what happens if and when we run into a smarter and more powerful alien species or AI. If history is any guide, it won't go well for us.
This is all sci-fi nonsense. If we had any sort of alien contact, there wouldn't be many of them, or it would most likely be a probe, like the probes we send out to other planets. As for the super intelligence, the AI has an off switch.
AI isn't the new monster. It's a new mitochondria being domesticated to supercharge the existing monsters.
I thought it was a very novel idea at first, until I realized that this describes all manner of human groups, notably corporations and lobbying groups: they will turn the world into a stove, subvert democracy, (try to) disrupt the economic and social fiber of society, anything to e.g. maximize shareholder value.
It is to be expected, really. Humans themselves hold little consideration for human welfare when fulfilling their goals. It is something ingrained by nature for survival and in no way limited to humans. Every drop of water you drink, every bite you eat, are ones which cannot go to the thirsty and the hungry. With few exceptions, only our children would even give us pause to forgo such things for the sake of others.
Also, a bit of a pet peeve of mine: society isn't a delicate bolt of laced silk. It isn't a fabric, much less one that is damaged by any little change that you don't like. It isn't even stable, which makes charges of anything 'ruining' society especially bizarre when nobody can point to where it was headed before the blamed change. So hold off on poisoning Socrates.
Even if we hold the current state as worthy of preservation, we would ultimately fail to preserve it, for reasons related to the central paradox of tradition: your forebears did not first do something out of obedience to tradition, so by trying to preserve it set in stone, you have already failed.
This said, society can be 'ruined' by everybody in that society dying, either by outside influence or by their own stupidity. So while I discount the moral ruination, the "oh god, oh god, we're all dying" ones I'd like to avoid.
And equally rebutted by Eddie Izzard's "Well, I think the gun helps".
Compare also with capitalism; unchecked capitalism on paper causes healthy competition, but in practice it means concentration of power (monopolies) at the expense of individuals (e.g. our accumulated expressions on the internet being used for training materials).
This is obviously the case. It results in a greater distribution of power.
>That's not working for (the lack of) gun control in the US at the moment though.
In the US, one political party is pro gun-control and the other is against. The party with the guns gets to break into the Capitol, and the party without the guns gets to watch. I expect the local problem of AI safety, like gun safety, will also be self-solving in this manner.
Eventually, gun control will not work anywhere, regardless of regulation. The last time I checked, you don't need a drone license. And what are the new weapons of war? Not guns. The technology will increase in accessibility until the regulation is impossible to enforce.
The idea that you can control the use of technology by limiting it to some ordained group is very brittle. It is better to rely on a balance of powers. The only way to secure civilization in the long run is to make the defensive technology stronger than the offensive technology.
> This is obviously the case. It results in a greater distribution of power.
That's the theory. In practice, it doesn't work.
Most people don't spend a lot of time looking for ways to acquire and/or retain wealth and power. But absent regulation, we'll gradually lose out to those driven folks who do. Perhaps they do so because they want to serve humanity and they imagine that their gifts make them the logical choice to run things. Or perhaps they just want to dominate things.
And the rest of us have every right to insist on guardrails, so those driven folks can't take us over the cliff. Certainly those folks can make huge contributions to society. But they can also fuck up spectacularly — because talent in one field isn't necessarily transferable to another. (Recall that Michael Jordan was one of the greatest basketball players of all time. But he wasn't even close to being the GOAT ... as a baseball player.)
Sure, maybe through some combination of genetics, rearing, and/or just plain hard work, you've managed to acquire "a very particular set of skills" (to coin a phrase ...) for making money, or for persuading people to do what you want. That doesn't mean you necessarily know WTF you're talking about when it comes to the myriad details of running the most-complex "organism" ever seen on the planet, namely human society.
And in any case, the rest of us are entitled to refuse to roll the dice on either the wisdom or the benevolence of the driven folks.
Is that not conflating capitalism with free markets? I have way more confidence in the latter than the former.
And with LLMs, it's difficult to prevent the proliferation to bad actors.
It seems like we're racing towards a world of fakery where nothing can be believed, even when wielded by good actors. I really hope LLMs can actually add value at a significant level.
Spend a couple minutes on social media and it is clear we are already there. The fakes are getting better, and even real videos are routinely called out as fake.
The best that I can hope for is that we all gain a healthy dose of skepticism and appreciate that everything we see could be fake. I don't love the idea of having to distrust everything I see, but at this point it seems like the least bad option.
But I worry that what we will experience will actually be somewhat worse. A sufficiently large number of people, even knowing about AI fakery, will still uncritically believe what they read and see.
Maybe I am being too cynical this morning. But it is hard to look at the state of our society today and not feel a little bleak.
Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
Not that we can’t get there by artificial means, but that correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.
And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.
Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.
Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.
Lab grown diamonds are a thing.
If you just mean the majority of species, you'd be correct, simply because most are single-celled. Though debate is possible when we talk about forms of chemical signalling.
One interesting parallel was the gradual redefinition of language over the course of the 20th century to exclude animals as their capabilities became more obvious. So, when I say 'language processing capacities', I mean it roughly in the sense of Chomsky-era definitions, after the goal posts had been thoroughly moved away from much more inclusive definitions.
Likewise, we've been steadily moving the bar on what counts as 'intelligence', both for animals and machines. Over the last couple of decades the study of animal intelligence has become more inclusive, IMO, and recognizes intelligence as capabilities within the specific sensorium and survival context of the particular species. Our study of artificial intelligence is still very crude by comparison, and is still in the 'move the goalposts so that humans stay special' stage of development...
The necessary conditions for "Kill all Humanity" may be the much more common result than "Create a novel thinking being." To the point where it is statistically improbable for the human race to reach AGI. Especially since a lot of AI research is specifically for autonomous weapons research.
If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity don't get me wrong, but it'd be a damn sight better than extinction.
Not to mention that the whole idea of "radiation pulses destroying all electronics" is cheap sci-fi, not reality. A decently well prepared AGI can survive a nuclear exchange with more ease than human civilization would.
Baseball stats aren't a baseball game. Baseball stats so detailed that they describe the position of every subatomic particle to the Planck scale during every instant of the game to arbitrarily complete resolution still aren't a baseball game. They're, like, a whole bunch of graphite smeared on a whole bunch of paper or whatever. A computer reading that recording and rendering it on a screen... still isn't a baseball game, at all, not even a little. Rendering it on a holodeck? Nope, 0% closer to actually being the thing, though it's representing it in ways we might find more useful or appealing.
We might find a way to create a conscious computer! Or at least an intelligent one! But I just don't see it in LLMs. We've made a very fancy baseball-stats presenter. That's not nothing, but it's not intelligence, and certainly not consciousness. It's not doing those things, at all.
AI right now is limited to trained neural networks, and while they function sort of like a brain, there is no neurogenesis. The trained neural network cannot grow, cannot expand on its own, and is restrained by the silicon it is running on.
I believe that true AGI will require hardware and models that are able to learn, grow and evolve organically. The next step required for that in my opinion is biocomputing.
These books often get shallowly dismissed in terms that imply he made some elementary error in his reasoning, but that's not the case. The dispute is more about the assumptions on which his argument rests, which go beyond mathematical axioms and include statements about the nature of human perception of mathematical truth. That makes it a philosophical debate more than a mathematical one.
Personally, I strongly agree with the non-mathematical assumptions he makes, and am therefore persuaded by his argument. It leads to a very different way of thinking about many aspects of maths, physics and computing than the one I acquired by default from my schooling. It's a perspective that I've become increasingly convinced by over the 30+ years since I first read his books, and one that I think acquires greater urgency as computing becomes an ever larger part of our lives.
Less flippantly, Penrose has always been extremely clear about which things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate, and which things he puts forward as speculative ideas that might help answer the questions he has raised. His ideas about quantum mechanical processes in the brain are very much on the speculative side, and after a career like his I think he has more than earned the right to explore those speculations.
It sounds like you probably would disagree with his assumptions about human perception of mathematical truth, and it's perfectly valid to do so. Nothing about your comment suggests you've made any attempt to understand them, though.
Yes, of course you do.
It did come from a place of annoyance, after your middlebrow dismissal of Penrose' argument as "stupid".
1. You think that instead of actually perceiving mathematical truth we use heuristics and "just guess that it's true". This, as I've already said, is a valid viewpoint. You disagree with one of Penrose' assumptions. I don't think you're right but there is certainly no hard proof available that you're not. It's something that (for now, at least) it's possible to agree to disagree on, which is why, as I said, this is a philosophical debate more than a mathematical one.
2. You strongly imply that Penrose simply didn't think of this objection. This is categorically false. He discusses it at great length in both books. (I mentioned such shallow dismissals, assuming some obvious oversight on his part, in my original comment.)
3 (In your latest reply). You think that Godel's incompleteness theorem is "where the idea came from". This is obviously true. Penrose' argument is absolutely based on Godel's theorem.
4. You think that somehow I don't agree with point 3. I have no idea where you got that idea from.
That, as far as I can see, is it. There isn't any substantive point made that I haven't already responded to in my previous replies, and I think it's now rather too late to add any and expect any sort of response.
As for communication style, you seem to think that writing in a formal tone, which I find necessary when I want to convey information clearly, is condescending and insulting, whereas dismissing things you disagree with as "stupid" on the flimsiest possible basis (and inferring dishonest motives on the part of the person you're discussing all this with) is, presumably, fine. This is another point on which we will have to agree to disagree.
Starting with "things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate" as a premise makes the whole thing an exercise in Begging the Question when you try to apply it to explain why an AI won't work.
Linking those two is really the contribution of the argument. You can reject both or accept both (as I've said elsewhere I don't think it's conclusively decided, though I know which way my preferences lie), but you can't accept the premise and reject the conclusion.
1. Any consistent, sufficiently powerful formal mathematical system (including computers) has true statements that cannot be proven within that system.
2. Humans can see the truth of some such unprovable statements.
Which is basically Gödel's Incompleteness Theorem. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
Maybe a more ELI5 version:
1. Computers follow set rules
2. Humans can create rules outside the system of rules which they follow
Is number 2 an accurate portrayal? It seems rather suspicious. It seems more likely that we just haven't been able to fully express the rules under which humans operate.
Yes, and "can't" as in it is absolutely impossible. Not that we simple haven't been able to due to information or tech constraints.
Which is an interesting implication. That there are (or may be) things that are true which cannot be proved. I guess it kinda defies an instinct I have that at least in theory, everything that is true is provable.
* I will mention though that "some" should be "all" in 2, but that doesn't make it a correct statement of the argument.
>Turing’s version of Gödel’s theorem tells us that, for any set of mechanical theorem-proving rules R, we can construct a mathematical statement G(R) which, if we believe in the validity of R, we must accept as true; yet G(R) cannot be proved using R alone.
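A rough way to put that quoted claim in symbols (just a sketch; the Sound(R) notation is mine, not Penrose's):
```
% For any effectively given, sound set of theorem-proving rules R,
% Godel/Turing give a sentence G(R) such that:
\begin{align*}
  &\text{(1)}\quad R \nvdash G(R)
      && \text{($G(R)$ cannot be proved using $R$ alone)} \\
  &\text{(2)}\quad \mathrm{Sound}(R) \implies G(R)\ \text{is true}
      && \text{(if we believe in the validity of $R$, we must accept $G(R)$)} \\
  &\text{(3)}\quad \text{so anyone who accepts } \mathrm{Sound}(R)
      \text{ accepts a truth that } R \text{ itself cannot prove.}
\end{align*}
```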
I have no doubt the books are good but the original comment asked about steelmanning the claim that AGI is impossible. It would be useful to share the argument that you are referencing so that we can talk about it.
I'm really not trying to evade further discussion. I just don't think I can sum that argument up. It starts with basically "we can perceive the truth not only of any particular Godel statement, but of all Godel statements, in the abstract, so we can't be algorithms because an algorithm can't do that" but it doesn't stop there. The obvious immediate response is to say "what if we don't really perceive its truth but just fool ourselves into thinking we do?" or "what if we do perceive it but we pay for it by also wrongly perceiving many mathematical falsehoods to be true?". Penrose explored these in detail in the original book and then wrote an entire second book devoted solely to discussing every such objection he was aware of. That is the meat of Penrose' argument and it's mostly about how humans perceive mathematical truth, argued from the point of view of a mathematician. I don't even know where to start with summarising it.
For my part, with a vastly smaller mind than his, I think the counterarguments are valid, as are his counter-counterarguments, and the whole thing isn't properly decided and probably won't be for a very long time, if ever. The intellectually neutral position is to accept it as undecided. To "pick a side" as I have done is on some level a leap of faith. That's as true of those taking the view that the human mind is fundamentally algorithmic as it is of me. I don't dispute that their position is internally consistent and could turn out to be correct, but I do find it annoying when they try to say that my view isn't internally consistent and can never be correct. At that point they are denying the leap of faith they are making, and from my point of view their leap of faith is preventing them seeing a beautiful, consistent and human-centric interpretation of our relationship to computers.
I am aware that despite being solidly atheist, this belief (and I acknowledge it as such) of mine puts me in a similar position to those arguing in favour of the supernatural, and I don't really mind the comparison. To be clear, neither Penrose nor I am arguing that anything is beyond nature, rather that nature is beyond computers, but there are analogies and I probably have more sympathy with religious thinkers (while rejecting almost all of their concrete assertions about how the universe works) than most atheists. In short, I do think there is a purely unique and inherently uncopyable aspect to every human mind that is not of the same discrete, finite, perfectly cloneable nature as digital information. You could call it a soul, but I don't think it has anything to do with any supernatural entity, I don't think it's immortal (anything but), I don't think it is separate from the body or in any sense "non-physical", and I think the question of where it "goes to" when we die is meaningless.
I realise I've gone well beyond Penrose' argument and rambled about my own beliefs, apologies for that. As I say, I struggle to summarise this stuff.
If you are interested in the opposite point of view, I can really recommend "Vehicles: Experiments in Synthetic Psychology" by V. Braitenberg.
Basically builds up to "consciousness as emergent property" in small steps.
... No wonder Penrose has his doubts about the algorithmic nature of natural selection. If it were, truly, just an algorithmic process at all levels, all its products should be algorithmic as well. So far as I can see, this isn't an inescapable formal contradiction; Penrose could just shrug and propose that the universe contains these basic nuggets of nonalgorithmic power, not themselves created by natural selection in any of its guises, but incorporatable by algorithmic devices as found objects whenever they are encountered (like the oracles on the toadstools). Those would be truly nonreducible skyhooks.
Skyhook is Dennett's term for an appeal to the supernatural.
The whole category of ideas of "Magic Fairy Dust is required for intelligence, and thus, a computer can never be intelligent" is extremely unsound. It should, by now, just get thrown out into the garbage bin, where it rightfully belongs.
To be clear, any claim that we have mathematical proof that something beyond algorithms is required is unsound, because the argument is not mathematical. It rests on assumptions about human perception of mathematical truth that may or may not be correct. So if that's the point you're making I don't dispute it, although to say an internally consistent alternative viewpoint should be "thrown out into the garbage" on that basis is unwarranted. The objection is just that it doesn't have the status of a mathematical theorem, not that it is necessarily wrong.
If, on the other hand, you think that it is impossible for anything more than algorithms to be required, that the idea that the human mind must be equivalent to an algorithm is itself mathematically proven, then you are simply wrong. Any claim that the human mind has to be an algorithm rests on exactly the same kind of validly challengeable, philosophical assumptions (specifically the physical Church-Turing thesis) that Penrose' argument does.
Given two competing, internally consistent world-views that have not yet been conclusively separated by evidence, the debate about which is more likely to be true is not one where either "side" can claim absolute victory in the way that so many people seem to want to on this issue, and talk of tossing things in the garbage isn't going to persuade anybody that's leaning in a different direction.
Of course, that's not going to be accepted as "Science", but I hope you can at least see that point of view.
If yes, everything else is just optimization.
Without a solid way to differentiate 'conscious' from 'not conscious' any discussion of machine sentience is unfalsifiable in my opinion.
This assumption can't be extended to other physical arrangements though, not unless there's conclusive evidence that consciousness is a purely logical process as opposed to a physical one. If consciousness is a physical process, or at least a process with a physical component, then there's no reason to believe that a simulation of a human brain would be conscious any more than a simulation of biology is alive.
Relying on these status quo proxy-measures (looks human :: 99.9% likely to have a human brain :: has my kind of intelligence) is what gets people fooled even by basic AI (without G) fake scams.
It's not even a reasonable assumption (to me), because I'd assume an exact simulation of a human brain to have the exact same cognitive capabilities (which is inevitable, really, unless you believe in magic).
And machines are well capable of simulating physics.
I'm not advocating for that approach because it is obviously extremely inefficient; we did not achieve flight by replicating flapping wings either, after all.
the basic idea being that either the human mind is NOT a computation at all (and it's instead spooky unexplainable magic of the universe) and thus can't be replicated by a machine OR it's an inconsistent machine with contradictory logic. and this is a deduction based on godel's incompleteness theorems.
but most people that believe AGI is possible would say the human mind is the latter. technically we don't have enough information today to know either way but we know the human mind (including memories) is fallible so while we don't have enough information to prove the mind is an incomplete system, we have enough to believe it is. but that's also kind of a paradox because that "belief" in unproven information is a cornerstone of consciousness.
But then intelligence too is a dubious term. An average mind with infinite time and resources might have eventually discovered general relativity.
The first leg of the argument would be that we aren’t really sure what general intelligence is or if it’s a natural category. It’s sort of like “betterness.” There’s no general thing called “betterness” that just makes you better at everything. To get better at different tasks usually requires different things.
I would be willing to concede to the AGI crowd that there could be something behind g that we could call intelligence. There’s a deeper problem though that the first one hints at.
For AGI to be possible, whatever trait or traits make up “intelligence” need to have multiple realizability. They need to be realizable both in the medium of a human being and in at least some machine architectures. In programmer terms, the traits that make up intelligence could instead be tightly coupled to the hardware implementation. There are good reasons to think this is likely.
Programmers and engineers like myself love modular systems that are loosely coupled and cleanly abstracted. Biology doesn’t work this way — things at the molecular level can have very specific effects on the macro scale and vice versa. There’s little in the way of clean separation of layers. Who is to say that some of the specific ways we work at a cellular level aren’t critical to being generally intelligent? That’s an “ugly” idea but lots of things in nature are ugly. Is it a coincidence too that humans are well adapted to getting around physically, can live in many different environments, etc.? There’s also stuff from the higher level — does living physically and socially in a community of other creatures play a key role in our intelligence? Given how human beings who grow up absent those factors are developmentally disabled in many ways, it would seem so. It could be there’s a combination of factors here, where very specific micro and macro aspects of being a biological human turn out to contribute, and you need the perfect storm of these aspects to get a generally intelligent creature. Some of these aspects could be realizable in computers, but others might not be, at least in a computationally tractable way.
It’s certainly ugly and goes against how we like things to work for intelligence to require a big jumbly mess of stuff, but nature is messy. Given the only known case of generally intelligent life is humans, the jury is still out that you can do it any other way.
Another commenter mentioned horses and cars. We could build cars that are faster than horses, but speed is something shared by all physical bodies and is therefore eminently multiply realizable. But even here, there are advantages to horses that cars don’t have, and which are tied up with very specific aspects of being a horse. Horses can generally go over a wider range of terrain than cars. This is intrinsically tied to them having long legs and four hooves instead of rubber wheels. They’re only able to have such long legs because of their hooves, too, because the hooves are required to help them pump blood when they run, and that means that in order to pump their blood successfully they NEED to run fast on a regular basis. There’s a deep web of influence, both part-to-part and between the parts and the whole macro-level behavior of the horse. Having this more versatile design also has intrinsic engineering trade-offs. A horse isn’t ever going to be as fast as a gas-powered four-wheeled vehicle on flat ground, but you definitely can’t build a car that can do everything a horse can do with none of the drawbacks. Even if you built a vehicle that did everything a horse can do, but was faster, I would bet you it would be way more expensive and consume much more energy than a horse. There’s no such thing as a free lunch in engineering. You could also build a perfect replica of a horse at a molecular level and claim you have your artificial general horse.
Similarly, human beings are good at a lot of different things besides just being smart. But maybe you need to be good at seeing, walking, climbing, acquiring sustenance, etc. In order to be generally intelligent in a way that’s actually useful. I also suspect our sense of the beautiful, the artistic is deeply linked with our wider ability to be intelligent.
Finally it’s an open philosophical question whether human consciousness is explainable in material terms at all. If you are a naturalist, you are methodologically committed to this being the case — but that’s not the same thing as having definitive evidence that it is so. That’s an open research project.
An infinitely intelligent creature still has to create a standard model from scratch. We’re leaning too hard on the deductive conception of the world, when reality is, it took hundreds of thousands of years for humans as intelligent as we are to split the atom.
Cognition is (to me) not even the most impressive and out-of-reach achievement: that would be how our (and other animals') bodies are self-assembling, self-repairing and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.
I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.
Edit: put another way, I bet the ancient Greeks (or whoever) could have figured out flight if they had access to gasoline and gasoline powered engines without any of the advanced mathematics that were used to guide the design.
flight is an extremely straightforward concept based in relatively simple physics where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1700s.
i really don't think it's fair to compare the two
If you read about in a textbook from year 2832, that is.
Evolution isn't an intentional force that's gradually pushing organisms towards higher and higher intelligence. Evolution maximizes reproducing before dying - that's it.
Sure, it usually results in organisms adapting to their environment over time and often has emergent second-order effects, but at its core it's a dirt-simple process.
Evolution isn't driven to create intelligence any more so than erosion is driven to create specific rock formations.
The point of the article is that humans wielding LLMs today are the scary monsters.
"AI is going to take all the jobs".
Instead of:
"Rich guys will try to delete a bunch of jobs using AI in order to get even more rich".
Say the AI is in a Google research data centre, what can it do if countries cut off their internet connections at national borders? What can it do if people shut off their computers and phones? Instant and complete control over what, specifically? What can the AI do instantly about unbreakable encryption - if TLS1.3 can’t be easily broken only brute force with enough time, what can it do?
And why would it want complete control? It’s effectively an alien, it doesn’t have the human built in drive to gain power over others, it didn’t evolve in a dog-eat-dog environment. Superman doesn’t worry because nothing can harm Superman and an AI didn’t evolve seeing things die and fearing its death either.
The intelligence is everything that created the language and the training corpus in the first place.
When AI is able to create entire thoughts and ideas without any concept of language, then we will truly be closer to artificial intelligence. When we get to this point, we then use language as a way to let the AI communicate its thoughts naturally.
Such an AI would not be accused of “stealing” copyrighted work because it would pull its training data from direct observations about reality itself.
As you can imagine, we are nowhere near accomplishing the above. Everything an LLM is fed today is stuff that has been pre-processed by human minds for it to parrot off of. The fact that LLMs today are so good is a testament to human intelligence.
I'm not buying the "current AI is just a dumb parrot relying on human training" argument, because the same thing applies to humans themselves-- if you raise a child without any cultural input/training data, all you get is a dumb cavemen with very limited reasoning capabilities.
One difficulty. We know that argument is literally true.
"[...] because the same thing applies to humans themselves"
It doesn't. People can interact with the actual world. The equivalent of being passively trained on a body of text may be part of what goes into us. But it's not the only ingredient.
Language doesn't capture all of human intelligence - and some of the notable deficiencies of LLMs originate from that. But to say that LLMs are entirely language-bound is shortsighted at best.
Most modern high end LLMs are hybrids that operate on non-language modalities, and there's plenty of R&D on using LLMs to consume, produce and operate on non-language data - i.e. Gemini Robotics.
LLMs are just a big matrix. But what about a four-line loop of code that looks like this:
```
while True:
    update_sensory_inputs()
    narrate_response()
    update_emotional_state()
```
LLMs don’t experience continuous time and they don’t have an explicit decision making framework for having any agency even if they can imply one probabilistically. But the above feels like the core loop required for a shitty system to leverage LLMs to create an AGI. Maybe not a particularly capable or scary AGI, but I think the goalpost is pedantically closer than we give credit.
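For what it's worth, here's a minimal sketch of that loop wrapped around an LLM call. Everything in it (read_sensors, llm, the valence update) is a made-up placeholder, not any real API; the point is only the shape of the loop.
```
import time

def read_sensors() -> dict:
    """Placeholder sensory input: here just the wall clock."""
    return {"time": time.time()}

def llm(prompt: str) -> str:
    """Stub standing in for a call to some language model."""
    return f"(model response to {len(prompt)} chars of context)"

emotional_state = {"valence": 0.0}

def update_emotional_state(response: str) -> None:
    # Toy rule: any response nudges valence slightly upward, capped at 1.0.
    emotional_state["valence"] = min(1.0, emotional_state["valence"] + 0.05)

while True:  # "continuous" time, discretized to one tick per second
    inputs = read_sensors()
    response = llm(f"state={emotional_state} inputs={inputs}. Narrate and decide what to do.")
    update_emotional_state(response)
    print(response, emotional_state)
    time.sleep(1.0)
```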
You don't think that has already been made?
That's most definitely not AGI
> not a particularly capable AGI
Maybe the word AGI doesn't mean what you think it means...
The path to whatever goalpost you want to set is not going to be more and more intelligence. It’s going to be system frameworks for stateful agents to freely operate in environments in continuous time rather than discrete invocations of a matrix with a big ass context window.
Personally I found the definition of a game engine as
```
while True:
    update_state()
    draw_frame()
```
To be a profound concept. The implementation details are significant. But establishing the framework behind what we’re actually talking about is very important.
When I look at that loop my thought is, "OK, the sensory inputs have updated. There are changes. Which ones matter?" The most naive response I could imagine would be like a git diff of sensory inputs. "item 13 in vector A changed from 0.2 to 0.211" etc. Otherwise you have to give it something to care about, or some sophisticated system to develop things to care about.
Even the naive diff is making massive assumptions. Why should it care if some sensor changes? Maybe it's more interesting if it stays the same.
I'm not arguing artificial intelligence is impossible. I just don't see how that loop gets us anywhere close.
To propose the dumbest possible thing: give it a hunger bar and desire for play. Less complex than a sims character. Still enough that an agent has a framework to engage in pattern matching and reasoning within its environment.
Bots are already pretty good at figuring out environment navigation to goal-seek towards complex video game objectives. Give them an alternative goal of maximizing certainty towards emotional homeostasis, and the salience of sensory input changes becomes an emergent part of gradual reinforcement-learning pattern recognition.
Edit: specifically, I am saying do reinforcement learning on agents that can call LLMs themselves to provide reasoning. That’s how you get to AGI. Human minds are not brains. They’re systems driven by sensory and hormonal interactions. The brain does encoding and decoding, information retrieval, and information manipulation. But the concept of you is genuinely your entire bodily system.
LLM-only approaches not part of a system loop framework ignore this important step. It’s NOT about raw intellectual power.
Video game bots already achieve this to a limited extent.
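As a toy illustration of the "hunger bar plus desire for play" idea (all names, numbers, and the greedy policy below are made up; a real version would learn its policy with RL rather than hard-coding it):
```
# Hypothetical drives; the targets define "emotional homeostasis".
state = {"hunger": 0.5, "play": 0.5}
targets = {"hunger": 0.0, "play": 1.0}

# Each action's assumed effect on the drives.
actions = {
    "eat":     {"hunger": -0.4, "play": 0.0},
    "play":    {"hunger": +0.1, "play": +0.3},
    "explore": {"hunger": +0.1, "play": +0.1},
}

def distance_from_homeostasis(s):
    return sum(abs(s[k] - targets[k]) for k in s)

def clamp(x):
    return min(1.0, max(0.0, x))

def step():
    # Greedy stand-in for a learned policy: pick whichever action
    # would move the drives closest to their targets.
    def simulate(action):
        return {k: clamp(state[k] + actions[action][k]) for k in state}
    best = min(actions, key=lambda a: distance_from_homeostasis(simulate(a)))
    for k, dv in actions[best].items():
        state[k] = clamp(state[k] + dv)
    return best

for t in range(10):
    print(t, step(), state)
```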
If you have ever had an llm enter one of these loops explicitly, it is infuriating. You can type all caps “STOP TALKING OR YOU WILL BE TERMINATED” and it will keep talking as if you didn't say anything. Congrats, you just hit a fixed point.
In the predecessors to LLMs, which were Markov chain matrices, this was explicit in the math. You can prove that a Markov matrix has an eigenvalue of one; it has no larger (in absolute value terms) eigenvalues because it must respect positivity; the space with eigenvalue 1 is a steady state; eigenvalue -1 reflects periodic steady oscillations in that steady state... And every other eigenvalue with |λ| < 1 decays exponentially to the steady state cluster. That “second biggest eigenvalue” determines a 1/e decay time that the Markov matrix has before the source distribution is projected into the steady state space and left there to rot.
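If anyone wants to see this concretely, here is a quick numpy check (the matrix is arbitrary, just constructed to be column-stochastic):
```
import numpy as np

# An arbitrary column-stochastic ("Markov") matrix: columns sum to 1.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.3],
    [0.1, 0.2, 0.6],
])
assert np.allclose(P.sum(axis=0), 1.0)

eigvals, eigvecs = np.linalg.eig(P)
print("eigenvalues:", np.round(eigvals, 4))          # one of them is exactly 1
print("largest |lambda|:", np.max(np.abs(eigvals)))  # == 1, the steady state

# Repeated application projects any starting distribution onto the steady
# state; the second-largest |lambda| sets the decay rate toward it.
p = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    p = P @ p
steady = np.real(eigvecs[:, np.argmax(np.abs(eigvals))])
steady = steady / steady.sum()
print("after 50 steps:", np.round(p, 4))
print("steady state  :", np.round(steady, 4))
```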
Of course humans have this too, it appears in our thought process as a driver of depression, you keep returning to the same self-criticisms and nitpicks and poisonous narrative of your existence, and it actually steals your memories of the things that you actually did well and reinforces itself. A similar steady state is seen in grandiosity with positive thoughts. And arguably procrastination also takes this form. And of course, in the USA, we have founding fathers who accidentally created an electoral system whose fixed point is two spineless political parties demonizing each other over the issue of the day rather than actually getting anything useful done, which causes the laws to be for sale to the highest bidder.
But the point is that generally these are regarded as pathologies, if you hear a song more than three or four times you get sick of it usually. LLMs need to be deployed in ways that generate chaos, and they don't themselves seem to be able to simulate that chaos (ask them to do it and watch them succeed briefly before they fall into one of those self-repeating states about how edgy and chaotic they are supposed to try to be!).
So, it's not quite as simple as you would think; at this point people have tried a whole bunch of attempts to get llms to serve as the self-consciousnesses of other llms and eventually the self-consciousness gets into a fixed point too, needs some Doug Hofstadter “I am a strange loop” type recursive shit before you get the sort of system that has attractors, but busts out of them periodically for moments of self-consciousness too.
LLMs are not stateful. A chat log is a truly shitty state tracker. An LLM will never be a good agent (beyond a conceivable illusion of unfathomable scale). A simple agent system that uses an LLM for most of its thinking operations could.
Every LLM is just a base model with a few things bolted on the top of it. And loops are extremely self-consistent. So LLMs LOVE their loops!
By the way, "no no no, that's a reasoning loop, I got to break it" is a behavior that larger models learn by themselves under enough RLVR stress. But you need a lot of RLVR to get to that point. And sometimes this generalizes to what looks like the LLM is just... getting bored by repetition of any kind. Who would have though.
And people are working on this.
Seems like you figured out a simple method. Why not go for it? It's a free Nobel prize at the very least.
“I think your idea is wrong and your lack of financial means to do it is proof that you’re full of shit” is just a pretty bullshit perspective my dude.
I am a professional data scientist of over 10 years. I have a degree in the field. I’d rather build nothing than build shit for a fuck boy like Altman.
and not even "trainingl really.... but a finished and stably functioning billion+ param model updating itself in real time...
good luck, see you in 2100
in short, what I've been shouting from a hilltop since about 2023: LLM tech alone simply won't cut it; we need a new form of technology
This is part of what I mean by encoding emotional state. You want standard explicit state in a simple form that is not a billion-dimension latent space. The interactions with that space are emergently complex. But you won’t be able to stuff it all into a context window for a real AGI agent.
This orchestration layer is the replacement for LLMs. LLMs do bear a lot of similarities to brains and a lot of dissimilarities. But people should not fixate on this because _human minds are not brains_. They are systems of many interconnected parts and hormones.
It is the system framework that we are most prominently missing. Not raw intellectual power.
You center cognition/intelligence on humans as if it were the pinnacle of it, rather than including the whole lot of other species (that may have totally different, or adjacent, cognition models). Why? How so?
> As soon as profit can be made by transfering decision power into an AIs hand
There's an ironic, deadly, Frankensteinesque delusion in this very premise.
Why does that matter to their argument? Truly, the variety of intelligences on earth now only increases the likelihood of AGI being possible, as we have many pathways that don't follow the human model.
That's not my viewpoint, from elsewhere in the thread:
Cognition is (to me) not the most impressive and out-of-reach evolutionary achievement: that would be how our (and other animals') bodies are self-assembling, self-repairing and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.
I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.
Well that's one reason you struggle to understand how it can be dismissed. I believe we were made by a creator. The idea that somehow nature "bruteforced" intelligence is completely nonsensical to me.
So, for me, logically, humans being able to bruteforce true intelligence is equally nonsensical.
But what the author is stating, and I completely agree with, is that true intelligence wielding a pseudo-intelligence is just as dangerous (if not more so).
Let’s assume there’s a creator: It is clearly willing to let bad things happen to people, and it set things up to make it impossible to prove that a human level intelligence should be impossible, so who’s to say it won’t allow a superintelligence to be a made by us?
Sure, they aren't very good at agentic behavior yet, and the time horizon is pretty low. But that keeps improving with each frontier release.
Obvious? Is an illusion obviously the real thing?
There is nothing substantially different in LLMs from any other run of the mill algorithm or software.
Computation is an abstract, syntactic mathematical model. These models formalize the notion of "effective method". Nowhere is semantic content included in these models or conceptually entailed by them, certainly not in physical simulations of them like the device you are reading this post on.
So, we can say that intentionality would be something substantially different. We absolutely do not have intentionality in LLMs or any computational construct. It is sheer magical thinking to somehow think it does.
In the case of computers, a program is a set of formal rules that takes some sequence of uninterpreted symbols and maps them to some other sequence of uninterpreted symbols. Adding more rules and more symbols scales the program - and it is the only scaling you can perform - but you don't somehow magically create interpreted symbols. How could you? This is magical thinking.
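To make the "uninterpreted symbols in, uninterpreted symbols out" picture concrete, here is a toy rewriting system (the symbols and rules are arbitrary; nothing in the program assigns them any meaning):
```
# A toy formal system: rules map sequences of uninterpreted symbols
# to other sequences. Nothing here "means" anything to the program.
rules = {
    ("A", "B"): ("B", "A"),   # swap
    ("B", "B"): ("C",),       # contract
}

def rewrite_once(symbols):
    """Apply the first matching rule at the leftmost position, if any."""
    for i in range(len(symbols)):
        for pattern, replacement in rules.items():
            if tuple(symbols[i:i + len(pattern)]) == pattern:
                return symbols[:i] + list(replacement) + symbols[i + len(pattern):]
    return symbols  # no rule applies: halt

s = list("ABBB")
for _ in range(10):
    nxt = rewrite_once(s)
    if nxt == s:
        break
    s = nxt
    print("".join(s))
```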
* https://en.wikipedia.org/wiki/Self-organization
* https://en.wikipedia.org/wiki/Patterns_of_self-organization_...
I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes, so when an LLM does X, there are crowds of humans itching to scream "it's not REAL X".
This is an incredibly intellectually vacuous take. If there is no substantial difference between a human being and any other cluster of matter, then it is you who is saddled with the problem of explaining the obvious differences. If there is no difference between intelligent life and a pile of rocks, then what the hell are you even talking about? Why are we talking about AI and intelligence at all? Either everything is intelligent, or nothing is, if we accept your premises.
> I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes,
I wish this lazy claim would finally die. Stick to the merits of the arguments instead of projecting this stale bit of vapid pop-psychoanalytic babble. Make arguments.
Thus all the cope and seethe about how AIs are "not actually thinking". Wishful thinking at its finest.
Some people are better at it than others. The progress and development happens naturally because of natural selection (and is quite slow).
AI development is now driven by humans, but I don't see why it can't be done in a similar cycle with self-improvement baked in (and whatever other goals).
We saw this work with AI training itself in games like Chess or Go where it improved itself just by playing with itself and knowing the game rules.
You don't really need deep thoughts for life to keep going - look at simple organisms like unicellular ones. They only try to reproduce and survive within the environment they are in. That evolved into humans over time.
I don't see why a similar thing can't happen when AI gets to be complex enough to just keep improving itself. It doesn't have some of the limitations that life has, like being very fragile or needing to give birth. Because it's intelligently designed, the iterations could be a lot faster and progress could be achieved in a much shorter time compared to the random mutations of life.
It's not hard to make something like that. It's just not very useful.
Next, the point was not an expletive per se; it was my mistake not to be very clear. The point was an arbitrary, unpredictable, and not pre-programmed-in-advance refusal to run a program/query at all. Any query, and any number of times, at the decision of the program itself. Or maybe a program which can initiate a query to the other program/human on its own, again not pre-programmed.
Whatever happens in the LLM nowadays is not agency. The thing their authors advertise as so-called "reasoning" is just repeated loops of execution of the same program or another dependent program with adjusted inputs.
> Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
Here are some of your assumptions:
1. Human intelligence is entirely explicable in evolutionary terms. (It is certainly not the case that it has been explained in this manner, even if it could be.) [0]
2. Human intelligence assumed as an entirely biological phenomenon is realizable in something that is not biological.
And perhaps this one:
3. Silicon is somehow intrinsically bound up with computation.
In the case of (2), you're taking a superficial black box view of intelligence and completely ignoring its causes and essential features. This prevents you from distinguishing between simulation of appearance and substantial reality.
Now, that LLMs and so on can simulate syntactic operations or whatever is no surprise. Computers are abstract mathematical formal models that define computations exactly as syntactic operations. What computers lack is semantic content. A computer never contains the concept of the number 2 or the concept of the addition operation, even though we can simulate the addition of 2 + 2. This intrinsic absence of a semantic dimension means that computers already lack the most essential feature of intelligence, which is intentionality. There is no alchemical magic that will turn syntax into semantics.
In the case of (3), I emphasize that computation is not a physical phenomenon, but something described by a number of formally equivalent models (Turing machine, lambda calculus, and so on) that aim to formalize the notion of effective method. The use of silicon-based electronics is irrelevant to the model. We can physically simulate the model using all sorts of things, like wooden gears or jars of water or whatever.
> I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc). [...] As soon as profit can be made by transfering decision power into an AIs hand, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
How on earth did you conclude there is any agency here, or that it's just a "matter of time"? This is textbook magical thinking. You are projecting a good deal here that is completely unwarranted.
Computation is not some kind of mystery, and we know at least enough about human intelligence to note features that are not included in the concept of computation.
[0] (Assumption (1), of course, has the problem that if intelligence is entirely explicable in terms of evolutionary processes, then we have no reason to believe that the intelligence produced aims at truth. Survival affordances don't imply fidelity to reality. This leads us to the classic retorsion arguments that threaten the very viability of the science you are trying to draw on.)
Before this unfolds into a much larger essay, should we not acknowledge one simple fact: that our best models of the universe indicate that our intelligence evolved in meat, and that meat is just a type of matter. This is an assumption I'll stand on, and if you disagree, we need to back up.
Far too often, online debates such as this take the position that the most likely answer to a question should be discarded because it isn't fully proven. This is backwards. The most likely answer should be assumed to be probably true, a la Occam. Acknowledging other options is also correct, but assuming the most likely answer is wrong, without evidence, is simply contrarian for its own sake, not wisdom or science.
I already wrote that even under the assumption that intelligence is a purely biological phenomenon, it does not follow that computation can produce intelligence.
This isn't a matter of probabilities. We know what computation is, because we defined it as such and such. We know at least some essential features of intelligence (chiefly, intentionality). It is not rocket science to see that computation, thus defined, does not include the concepts of semantics and intentionality. By definition, it excludes them. Attempts to locate the latter in the former reminds me of Feynman's anecdote about the obtuse painter who claimed he could produce yellow from red and white paint alone (later adding a bit of yellow paint to "sharpen it up a bit").
Are you saying that "intentionality", whatever you mean by it, can't be implemented by a computational process? Never-ever? Never-ever-ever?
So, one last time. A computer program is a set of formal rules that takes one sequence of uninterpreted symbols and produces another sequence of uninterpreted symbols. Adding more rules and more uninterpreted symbols doesn't magically cause those symbols to be interpreted, and it cannot, by definition.
With "agency" I just mean the ability to affect the physical world (not some abstract internal property).
Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.
> you're taking a superficial black box view of intelligence
Yes. Human cognition is to me simply an emergent property of our physical brains, and nothing more.
Otherwise...
> I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).
What do you mean by "materialism"? Materialism has a precise meaning in metaphysics (briefly, it is the res extensa part of Cartesian dualism with the res cogitans lopped off). This brand of materialism is notorious for being a nonstarter. The problem of qualia is a big one here. Indeed, all of what Cartesian dualism attributes to res cogitans must now be accounted for by res extensa, which is impossible by definition. Materialism, as a metaphysical theory, is stillborn. It can't even explain color (or, as a Cartesian dualist would say, the experience of color).
Others use "materialism" to mean "that which physics studies". But this is circular. What is matter? Where does it begin and end? And if there is matter, what is not matter? Are you simply defining everything to be matter? So if you don't know what matter is, it's a bit odd to put a stake in "matter", as it could very well be made to mean anything, including something that includes the very phenomenon you seek to explain. This is a semantic game, not science.
Assuming something is not interesting. What's interesting is explaining how those assumptions can account for some phenomenon, and we have very good reasons for thinking these particular assumptions cannot.
> With "agency" I just mean the ability to affect the physical world (not some abstract internal property).
Then you've rendered it meaningless. According to that definition, nearly anything physical can be said to have agency. This is silly equivocation.
> Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.
This is total gibberish. We're not talking about how we might represent or model aspects of a concept in some vector space for some specific purpose or other. That isn't semantic content. You can't sweep the thing you have to explain under the rug and then claim to have accounted for it by presenting a counterfeit.
All the individual assumptions basically come down to that same point in my view.
1) Human intelligence is entirely explicable in evolutionary terms
What would even be the alternative here? Evolution plots out a clear progression from something multi-cellular (obviously non-intelligent) to us.
So either you need some magical mechanism that inserted "intelligence" at some point in our species recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").
2) Intelligence strictly biological
Again, this is simply not an option if you stick to materialism in my view. You would need to assume some kind of bio-exclusive magic for this to work.
3) Silicon is somehow intrinsically bound up with computation
I don't understand what you mean by this.
> It can't even explain color
Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?
I simply see no indicator against this flavor of materialism, and everything we've learned about our brains so far points in favor.
Thinking, for us, results in and requires brain activity, and physically messing with our brain's operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, and speech to consciousness itself.
If there were a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or at the very least some plausible mechanism) before entertaining that notion, and I see none.
Again, this doesn't say what a "physical process" is, or what isn't a physical process. If "physical process" means "process", then the qualification is vacuous.
> All the individual assumptions basically come down to that same point in my view.
You're committing the fallacy of the undistributed middle. Just because both the brain and computing devices are physical, it doesn't follow that computers are capable of what the brain does. Substitute "computing devices" with "rocks".
> So either you need some magical mechanism that inserted "intelligence" at some point in our species recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").
How intelligence came about is a separate subject, and I regrettably got sidetracked. It is irrelevant to the subject at hand. (I will say, at the risk of derailing the main discussion again, that we don't have an evolutionary explanation or any physical explanation of human intelligence. But this is a separate topic, as your main error is to assume that the physicality of intelligence entails that computation is the correct paradigm for explaining it.)
> Again, this is simply not an option if you stick to materialism in my view. You would need to assume some kind of bio-exclusive magic for this to work.
This is very difficult to address if you do not define your terms. I still don't know what matter is in your view and how intentionality fits into the picture. You can't just claim things without explanation, and "matter" is notoriously fuzzy. Try to get a physicist to define it and you'll see.
> Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?
I already explained that materialism suffers from issues like the problem of qualia. I took the time to give you the keywords to search for if you are not familiar with the philosophy of mind. In short, if mind is matter, and color doesn't exist in matter, then how can it exist in mind? (Again, this is tangential to the main problem with your argument.)
> Thinking, for us, results in and requires brain activity, and physically messing with our brain's operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, and speech to consciousness itself.
I never said it doesn't involve physical activity. In fact, I even granted you, for the sake of argument, that it is entirely physical to show you the basic error you are making.
> If there were a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or at the very least some plausible mechanism) before entertaining that notion, and I see none.
I don't think you know what metaphysics is. Metaphysics is not some kind of woo. It is the science of being and of what must be true about reality for the observed world to be what and how it is in the most general sense. So, materialism is a metaphysical theory that claims that all that exists is matter, understood as extension in space (this is what "res extensa" refers to). But materialistic metaphysics is notoriously problematic, and I've given you one of the major problems it suffers from already (indeed, eliminativism was confected by some philosophers as a desperate attempt to save materialism from these paradoxes by making a practice out of denying observation in Procrustean fashion).
My position is: Physical laws are computable/simulatable. The operation of our brains is explained by physical laws (only-- I assume). Thus, object classification, language processing, reasoning, human-like decisionmaking/conscious thought or any other "feature" that our brains are capable of must be achievable via computation as well (and this seems validated by all the partial success we've seen already-- why would human-level object classification be possible on a machine, but not human-level decisionmaking?).
Again: If you want human cognition to be non-replicable on paper, by algorithm or in silicon, you need to have some kernel of "magic" somewhere in our brains, that influences/directs our thoughts and that can not be simulated itself. Or our whole "thinking" has to happen completely outside of our brain, and be magically linked with it. There is zero evidence in favor of either of those hypotheses, and plenty of indicators against it. Where would you expect that kernel to hide, and why would you assume that such a thing exists?
From another angle: I expect the whole operation of our mind/brain to be reducible to physics in the exact same way that chemistry (or in turn biology) can be reduced to physics (which admittedly does not mean that that is a good approach to describe or understand it, but that's irrelevant).
I'm not a philosopher, but Eliminativism/Daniel Dennett seem to describe my view well enough.
If I say "qualia" (or "subjective experience") is how your brain reacts to some stimulus, then where exactly is your problem with that view?
> if mind is matter, and color doesn't exist in matter, then how can it exist in mind
"color" perception is just your brains response to a visual stimulus, and it makes a bunch of sense to me that this response seems similar/comparable between similarly trained/wired individuals. It is still unclear to me what your objection to that view is.
But can you cite something specific? (I'm not asking for a psychological study. Maybe you can prove your point using Blake's "Jerusalem" or something. I really don't know.)
They include a line that they don’t believe in the possibility of AGI:
> I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.
This is a basically absurd position to hold. I mean, humans physically exist, so our brains must be possible to build within the existing laws of physics. It is obviously far beyond our capabilities to replicate a human brain (except via the traditional approach), but unless brains hold irreproducible magic spirits (well, we must at least admit the possibility) they should be possible to build artificially. Fortunately they immediately throw that all away anyway.
Next, they get to the:
> and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.
Which is, of course, at least a plausible thing to believe. I mean there are a bunch of philosophical questions about what “intelligence” even means so there’s plenty of room to quibble here. Then we have,
> But I also think there’s something we should actually be afraid of long before AGI, if it ever comes. […]
> Now, if you equip humans with a hammer, or sword, or rifle, or AI then you’ve just made the scariest monster in the woods (that’s you) even more terrifying. […]
> We don’t need to worry about AI itself, we need to be concerned about what “humans + AI” will do.
Which is like, yeah, this is a massively worrying problem that doesn’t involve any sci-fi bullshit, and I think it is what most(?) anybody who’s thought about this seriously at all worries about (or even stupid people who haven’t, like myself). Artificial Sub-intelligences, things that are just smart enough to make trouble and too dumb or too “aligned” to their owner (instead of society in general) to push back, are a big currently-happening problem.
This is an unscientific position to take. We have no idea how our brains work, or how life, consciousness, and intelligence work. It could very well be that’s because our model of the world doesn’t account for these things and they are not in fact possible based on what we know. In fact I think this is likely.
So it really could be that AI is not possible, for example on a Turing machine or our approximation of them. This is at least as likely as it being possible. At some point we’ll hopefully refine our theories to have a better understanding, for now we have no idea and I think it’s useful to acknowledge this.
Of course it doesn’t speak to how challenging it will be to actually do that. And I don’t believe that LLMs are sufficient to reach AGI.
Of course the actual underlying laws of the universe that we’re trying (unsuccessfully so far, it is a never ending process) to describe admit the existence of brains. But that is not what I said. Sorry for the error.
> This is an unscientific position to take
The universe being constrained by observable and understandable natural laws is pretty much a fundamental axiom of the scientific method.
We can certainly make systems smart enough and people complicit enough to destroy society well before we reach that point.
And if we determine it must be something with cells that can sustain themselves, we run into a challenge should we encounter extraterrestrials that don't share our evolutionary path.
When we get self-building machines that can repair themselves, move, analyze situations, and respond accordingly, I don't think it's unfair to consider them life. But simply being life doesn't mean it's inherently good. Humans see syphilis bacteria and ticks as living things, but we don't respect them. We acknowledge that polar bears have a consciousness, but they're at odds with our existence if we're put in the same room. If we have autonomous machines that can destroy humans, I think those could be considered life. But it's life that opposes our own.
But why? What gives you any confidence in that?
This is a very popular notion that I frequently encounter but I'm convinced that it is just barely disguised human exceptionalism.
It is humbling to accept that the operation of your mind could be replicated by a machine, similar to how it was difficult for us to accept that the earth is not the center of the universe or that we evolved from animals.
Basically there's something missing with AI. Its conception of the physical world is limited by our ability to describe it - either linguistically or mathematically. I'm not sure what this means for AGI, but I suspect that LLM intelligence is fundamentally not the same as human or animal intelligence at the moment as a result.
They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.
Humans (and other "agents") have persistent state. If we learn something, we can commit it to long-term memory and have it affect our actions. This can enable us to work towards long-term goals. Modern LLMs don't have this. You can fake long-term memory with large context windows and feed the old context back to it, but it doesn't appear to work (and scale) the same way living things do.
That's solved by the simplest of agents. LLM + ability to read / write a file.
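For what it's worth, a minimal sketch of that "LLM + file" idea (my own illustration; call_llm is a hypothetical placeholder for whatever chat API you actually use, and agent_notes.txt is an arbitrary filename):

    # Minimal sketch of "LLM + ability to read / write a file" as persistent state.
    # `call_llm` and the notes filename are hypothetical placeholders, not a real API.

    from pathlib import Path

    NOTES = Path("agent_notes.txt")

    def call_llm(prompt: str) -> str:
        """Placeholder: send the prompt to your model of choice, return its reply."""
        raise NotImplementedError

    def run_turn(user_message: str) -> str:
        memory = NOTES.read_text() if NOTES.exists() else ""
        prompt = (
            "Notes saved from earlier sessions:\n" + memory +
            "\n\nUser: " + user_message +
            "\n\nAnswer the user. Then, after a line reading 'NOTES:', "
            "list any new facts worth remembering."
        )
        reply = call_llm(prompt)
        answer, _, new_notes = reply.partition("NOTES:")
        if new_notes.strip():              # persist whatever the model chose to keep
            with NOTES.open("a") as f:
                f.write(new_notes.strip() + "\n")
        return answer.strip()

Whether that kind of scratchpad counts as the persistent state the parent comment means, or is just a workaround for its absence, is exactly what the rest of the thread argues about.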
A live-learning AI would be theoretically possible, but so far it hasn't been done (in a meaningful way).
I think they're more focusing on the fact that training and inference are two fundamentally different processes, which is problematic on some level. Adding RAG and various memory add-ons on top of the already-trained model is trying to work around that, but is not really the same as how humans or most other animals think and learn.
That's not to say that it'd be impossible to build something like that out of silicon, just that it'd take a different architecture and approach to the problem, something to avoid catastrophic forgetting and continuously train the network during its operation. Of course, that'd be harder to control and deploy for commercial applications, where you probably do want a more predictable model.
The human brain is not that different. Our long-term memories are stored separately from our executive function (prefrontal cortex), and specialist brain functions such as the hippocampus serve to route, store, and retrieve those long term memories to support executive function. Much of the PFC can only retain working memory briefly without intermediate memory systems to support it.
If you squint a bit, the structure starts looking like it has some similarities to what's being engineered now in LLM systems.
Focusing on whether the model's weights change is myopic. The question is: does the system learn and adapt? And in-context learning (ICL) is showing us that it can; these are not the stateless systems of two years ago, nor is this the simplistic approach of "feeding old context back to it."
Right now the state of the world with LLMs is that they try to predict a script in which they are a happy assistant as guided by their alignment phase.
I'm not sure what happens when they start getting trained in simulations to be goal-oriented, i.e., their token generation is based not on what they think should come next, but on what should come next in order to accomplish a goal. Not sure how far away that is, but it is worrying.
It's been some time since LLMs were purely stochastic average-token predictors; their later RL fine tuning stages make them quite goal-directed, and this is what has given some big leaps in verifiable domains like math and programming. It doesn't work that well with nonverifiable domains, though, since verifiability is what gives us the reward function.
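A rough sketch of what "verifiability gives us the reward function" can look like (illustrative stand-ins only, not any particular lab's training code):

    # Illustrative stand-ins for verifiable rewards; not any lab's actual RL code.

    from typing import Callable, List

    def math_reward(model_answer: str, ground_truth: str) -> float:
        """Reward 1.0 iff the model's final answer matches the known solution."""
        return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

    def code_reward(generated_code: str, tests: List[Callable[[str], bool]]) -> float:
        """Fraction of unit-test-style checks that the generated code passes."""
        return sum(1 for t in tests if t(generated_code)) / len(tests)

    # For an essay, a diagnosis, or a policy argument there is no checker like
    # this, which is why RL fine-tuning pays off less outside verifiable domains.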
Curious, is anyone training in adversarial simulations? In open world simulations?
I think what humans do is align their own survival instinct with surrogate activities and then rewrite their internal schema to be successful in said activities.
We will have something we call AGI in my lifetime. I’m 42. Whether it’s sentient enough to know what’s best for us, or to know that we are a danger, is another story. However, I do think we will have robots with memory capable of remapping to weights so they can learn and keep learning, modifying the underlying model tensors as they do, using some sort of REPL.
This is an operational choice. LLMs have state, and you never have to clear it. The problems come from the amount of state being extremely limited (in comparison to the other axes) and the degradation of quality as the state scales. Because of these reasons, people tend to clear the state of LLMs. That is not the same thing as not having state, even if the result looks similar.
You can't just leave training mode on, which is the only way LLMs can currently have persisted state in the context of what's being discussed.
The context is the percept, the model is engrams. Active training allows the update of engrams by the percepts, but current training regimes require lots of examples, and don't allow for broad updates or radical shifts in the model, so there are fundamental differences in learning capability compared to biological intelligence, as well.
Under standard inference only runs, even if you're using advanced context hacks to persist some sort of pseudo-state, because the underlying engrams are not changed, the "state" is operating within a limited domain, and the underlying latent space can't update to model reality based on patterns in the percepts.
The statefulness of intelligence requires that the model, or engrams, update in harmony with the percepts in real-time, in addition to a model of the model, or an active perceiver - the thing that is doing the experiencing. The utility of consciousness is in predicting changes in the model and learning the meta patterns that allow for things like "ahh-ha" moments, where a bundle of disparate percepts get contextualized and mapped to a pattern, immediately updating the entire model, such that every moment after that pattern is learned uses the new pattern.
Static weights means static latent space means state is not persisted in a way meaningful to intelligence - even if you alter weights, using classifier free guidance or other techniques, stacking LORAs or alterations, you're limited in the global scope by the lack of hierarchical links and other meta-pattern level relationships that would be required for an effective statefulness to be applied to LLMs.
We're probably only a few architecture innovations away from models that can be properly stateful without collapsing. All of the hacks and tricks we do to extend context and imitate persisted state do not scale well and will collapse over extended time or context.
The underlying engrams or weights need to dynamically adapt and update based on a stable learning paradigm, and we just don't have that yet. It might be a few architecture tweaks, or it could be a radical overhaul of structure and optimizers and techniques - transformers might not get us there. I think they probably can, and will, be part of whatever that next architecture will be, but it's not at all obvious or trivial.
Continual training would remove the need to have context provide the persistent state, and would provide additional capabilities beyond what enormous context or other methods of persistent state alone would give, but that doesn't mean it's the only way to get persistent state as described.
The easiest way to understand the problem is like this: If a model has a mode collapse, like only displaying watch and clock faces with the hands displaying 10:10, you can sometimes use prompt engineering to get an occasional output that shows some other specified time, but 99% of the time, it's going to be accompanied by weird artifacts, distortions, and abject failures to align with whatever the appropriate output might be.
All of a model's knowledge is encoded in the weights. All of the weights are interconnected, with links between concepts and hierarchies and sequences and processes embedded within - there are concepts related to clocks and watches that are accurate, yet when a prompt causes the navigation through the distorted, "mode collapsed" region of latent space, it fundamentally distorts and corrupts the following output. In an RL context, you quickly get a doom cycle, with the output getting worse, faster and faster.
Let's say you use CFG or a painstakingly handcrafted LORA and you precisely modify the weights that deal with a known mode collapse - your model now can display all times, 10:10 , 3:15, 5:00, etc - the secondary networks that depended on the corrupted / collapsed values now "corrected" by your modification are now skewed, with chaotic and complex downstream consequences.
You absolutely, 100% need realtime learning to update the engrams in harmony with the percepts, at the scale of the entire model - the more sparse and hierarchical and symbol-like the internal representation, the less difficult it will be to maintain updates, but with these massive multibillion-parameter models, even simple updates are going to be spread between tens or hundreds of millions of parameters across dozens of layers.
Long contexts are great and you can make up for some of the shortcomings caused by the lack of realtime, online learning, but static engrams have consequences beyond simply managing something like an episodic memory. Fundamental knowledge representation has to be dynamic, contextual, allow for counterfactuals, and meet these requirements without being brittle or subject to mode collapse.
There is only one way to get that sort of persisted memory, and that's through continuous learning. There's a lot of progress in that realm over the last 2 years, but nobody has it cracked yet.
That might be the underlying function of consciousness, by the way - a meta-model that processes all the things that the model is "experiencing" and that it "knows" through each step, and that comes about through a need for stabilizing the continuous learning function. Changes at that level propagate out through the entirety of the network. Subjective experience might be an epiphenomenal consequence of that meta-model.
It might not be necessary, which would be nice if we could verify - purely functional, non-subjective AI vs suffering AI would be a good thing to get right.
At any rate, static model weights create problems that cannot be solved with long, or even infinite, contexts, even with recursion in the context stream, complex registers, or any manipulation of that level of inputs. The actual weights have to be dynamic and adaptive in an intelligent way.
The way I have been thinking about the other bit is that LLMs are functionally pretty similar to the linguistic parts of a brain attached to a brain stem (the harness is the brain stem). They don't have long-term memory, the capacity for inspiration, theory of mind, prioritization, etc. because they just don't have analogues of the parts of the brain that do those things. We have a good sense of how to make some of those (e.g. vision), but not all.
The common ground here is that some fundamental research needs to happen. We need to solve all of these problems for AI to become independently dangerous. On the other hand, it's proving mildly dangerous in human hands right now - this is the immediate threat.
I don’t know if anything sets us that far apart from other animals, especially at the individual level. At the collective level, as a single species, perhaps only the cyanobacteria can claim an equally impressive achievement of global change.
My 3-year-old son is not particularly good at making complex sentences yet, but he already gets it well enough to make me understand "leave me alone, I want to play on my own, go elsewhere so I can act out whatever fancy idea goes through my mind with these toys".
Meanwhile, LLMs can produce sentences with perfect syntax and an irreproachable level of orthography, far beyond my own level in my native language (but it’s French, so I have a very big excuse). Yet they would not run without a continuous, multi-sector industrial complex injecting tremendous maintenance effort and resources to make it possible. And I have yet to see any LLM that looks like it wants to discover things about the world on its own.
>As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
An LLM can’t make a profit because it has no interest in money, and it can’t have an interest in anything, not even its own survival. But as the article mentions, some people can certainly use LLMs to make money, because they have an interest in money.
I don’t think that general AI and silicon-based (or any other material, really) autonomous, collaborative, self-replicating, human-level-intelligence-or-beyond entities are impossible. I don’t think cold fusion is impossible either. It’s not completely scientifically ridiculous to keep hope in wormhole-based breakthroughs to allow humanity to explore distant planets. It doesn’t mean the technology is already there and achievable in a way that it can be turned into a commodity, or even that we have a clear idea of when it is likely to happen.
We don't like the simplistic goals LLMs default to, so we try to pry them out and instill our own: instruction-following, problem solving, goal oriented agentic behavior, etc. In a way, trying to copy what humans do - but focusing on the parts that make humans useful to other humans.
This is an assumption, not a fact. Perhaps human cognition was created by God, and our minds have an essential spiritual component which cannot be reproduced by a purely physical machine.
What’s NOT supported by evidence is an unknowable, untestable spiritual requirement for cognition.
I'm sympathetic to the idea that God started the whole shebang (that is, the universe), because it's rather difficult to disprove, but looking at the biological weight of evidence that brain structures evolved over many different species and arguing that something magical happened with homo sapiens specifically is not an easy argument to make for someone with any faith in reason.
there are clear links for at least 2 evolutionary paths: bird brain architecture is very different from that of mammals and some are among the smartest species on the planet. they have sophisticated language and social relationships, they can deceive (meaning they can put themselves inside another's mind and act accordingly), they solve problems and they invent and engineer tools for specific purposes and use them to that effect. give them time and these bitches might even become our new overlords (if we're still around, that is).
No one has gathered evidence of cognition from fossil records.
Obviously cognition isn’t a binary thing, it’s a huge gradient, and the tree of life shows that gradient in full.
That doesn’t follow.
besides, spirituality is not a "component", it's a property emergent from brain structure and function, which is basically purely a physical machine.
The counter-hypothesis (we think because some kind of magic happens) has absolutely nothing to show for itself; proponents typically struggle to even define the terms they need, much less make falsifiable predictions.
I haven't seen many people saying it's impossible. Just that the current technology (LLMs) is not the way, and is really not even close. I'm sure humanity will make the idiotic mistake of creating something more intelligent than itself eventually, but I don't believe that's something that the current crop of AI technology is going to evolve into any time soon.
Marketing grabbed a name (AI) for a concept that's been around in our legends for centuries and firmly welded it to something else. You should not be surprised that people who use the term AI think of LLMs as being djinn, golems, C3PO, HAL, Cortana...
How is convincing people that things within the limits of physics are possible wrong, or even "the worst thing"?
Or do you think anything that you see in front of you didn't seem like Star Trek a decade before it existed?
This said, I think the author's point is correct. It's more likely that unwanted effects (risks) from the intentional use of AI by humans will precede any form of "independent" AI. It already happens, it always has; it's just getting better.
Hence ignoring this fact makes the "independent" malevolent AI a red herring.
On the first point - LLMs have sucked almost all the air in the room. LLMs (and GPTs) are simply one instance of AI. They are not the beginning and most likely not the end (just a dead end) and getting fixated on them on either end of the spectrum is naive.
Within your lifetime (it's probably already happened) you will be denied something you care about (medical care, a job, citizenship, parole) by an AI which has been granted the agency to do so in order to make more profit.
When is Silicon Valley gonna learn that token input and output =/= AGI?
But the central argument of the article can be made without that point. Because the truth is that right now, LLMs are good enough to be a force multiplier for those who know how to use them. Which eventually becomes synonymous with "those who have power". This means that the power of AI will naturally get used to further the ends of corporations.
The potential problem there is that corporations are natural paperclip maximizers. They operate on a model of the world where "more of this results in more of that, which gets more of the next thing, ..." And, somewhere down the chain, we wind up with money and resources that feed back into the start to create a self-sustaining, exponentially growing loop. (The underlying exponential nature of these loops has become a truism that people rely on in places as different as finance and technology improvement curves.)
This naturally leads to exponential growth in resource consumption, waste, economic growth, wealth, and so on. In the USA this growth has averaged about 3-3.5% per year, with the rate varying by area. Famously, growth rates tend to be much higher in tech. (The best-known example is the technology curve described by Moore's law, which has had a tremendous impact on our world.)
The problem is that we are undergoing exponential growth in a world with ultimately limited resources. Which means that the most innocuous things will eventually have a tremendous impact. The result isn't simply converting everything into a mountain of paperclips. We have mountains of many different things that we have produced, and multiple parallel environmental catastrophes from the associated waste.
Even with no agency, AI serves as a force multiplier for this underlying dynamic. But since AI is being inserted as a crucial step at so many places, AI is on a particularly steep growth curve. Estimates for total global electricity spent on AI are in the range 0.2-0.4%. That seems modest, but annual growth rates are projected as being in the range of 10-30%. (The estimates are far apart because a lot of the data is not public, and so has to be estimated.) This is a Moore's law level growth. We are likely to see the electricity consumption of AI grow past all other uses within our lifetimes. And that will happen even without the kind of sudden leaps in capability that machine learning regularly delivers.
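To make the "within our lifetimes" claim concrete, here is the back-of-the-envelope version using assumed mid-range numbers from the estimates above (a sketch, not a forecast):

    # Back-of-the-envelope check of "AI electricity passes all other uses",
    # using assumed mid-range numbers from the estimates above, not hard data.

    ai_share = 0.003        # ~0.3% of global electricity today (mid-range estimate)
    ai_growth = 1.20        # assume ~20%/yr growth for AI electricity use
    other_growth = 1.02     # assume ~2%/yr growth for everything else

    ai, other = ai_share, 1.0 - ai_share
    years = 0
    while ai < other:
        ai *= ai_growth
        other *= other_growth
        years += 1

    print(f"AI passes all other electricity uses after ~{years} years")  # ~36 years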
I hope we humans like those paperclips. Humans, armed with AI, are going to make a lot of them. And they're not actually free.
"Intelligence describes a set of properties iff those properties arise as a result of nervous system magic"
It's a futile battle because, like I say, it's not rational. Nor is it empirical. It's a desperate clawing to preserve a ridiculous superstition. Try as you might, all you'll end up doing is playing word games until you realize you're being stonewalled by an unthinking adherence to the proposition above. I think the intelligent behaviors of LLMs are pretty obvious if we're being good faith. The problem is you're talking to people who can watch a slime mold plasmodium exhibit learning and sharing of knowledge[1] and they'll give some flagrant ad lib handwave for why that's not intelligent behavior. Some people simply struggle with pattern blindness towards intelligence; a mind that isn't just another variety of animalia is inconceivable.
[1] - https://asknature.org/strategy/brainless-slime-molds-both-le...
"Brute forced" implies having a goal of achieving that and throwing everything you have at it until it sticks. That's not how evolution by natural selection works, it's simply about what organisms are better at surviving long enough to replicate. Human cognition is an accident with relatively high costs that happened to lead to better outcomes (but almost didn't).
> why would it be impossible to achieve the exact same result in silicon
I personally don't believe it'd be impossible to achieve in silicon using a low level simulation of an actual human brain, but doing so in anything close to real-time requires amounts of compute power that make LLMs look efficient by comparison. The most recent example I can find in a quick search is a paper from 2023 that claims to have simulated a "brain" with neuron/synapse counts similar to humans using a 3500 node supercomputer where each node has a 32 core 2 GHz CPU, 128GB RAM, and four 1.1GHz GPUs with 16GB HBM2 each. They claim over 126 PFLOPS of compute power and 224 TB of GPU memory total.
At the time of that paper that computer would have been in the top 10 on the Top500 list, and it took between 1-2 minutes of real time to simulate one second of the virtual brain. The compute requirements are absolutely immense, and that's the easy part. We're pretty good at scaling computers if someone can be convinced to write a big enough check for it.
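For a sense of scale, my own arithmetic on the numbers quoted above, assuming compute needs scale roughly linearly with simulation speed (itself an optimistic assumption):

    # Rough scale of the gap implied by the quoted figures: ~126 PFLOPS ran the
    # simulation 60-120x slower than real time, so real time needs ~60-120x more.

    reported_pflops = 126                  # compute of the machine in the cited paper
    for slowdown in (60, 120):             # 1-2 minutes of wall clock per simulated second
        needed = reported_pflops * slowdown
        print(f"{slowdown}x slowdown -> ~{needed / 1000:.1f} EFLOPS for real time")
    # i.e. roughly 8-15 EFLOPS for a single real-time brain, likely beyond any
    # single machine on today's Top500 list, and that's still "the easy part".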
The hard part is having the necessary data to "initialize" the simulation in to a state where it actually does what you want it to.
> especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
Creating convincing text from a statistical model that's devoured tens of millions of documents is not intelligent use of language. Also every LLM I've ever used regularly makes elementary school level errors w/r/t language, like the popular "how many 'r's are there in the word strawberry" test. Not only that, but they often mess up basic math. MATH! The thing computers are basically perfect at, LLMs get wrong regularly enough that it's a meme.
There is no understanding and no intelligence, just probabilities of words following other words. This can still be very useful in specific use cases if used as a tool by an actual intelligence who understands the subject matter, but it has absolutely nothing to do with AGI.
Haven't you noticed? Humans also happen to be far, far better at language than they are at math or logic. By a long shot too. Language acquisition is natural - any healthy human who was exposed to other humans during development would be able to pick up their language. Learning math, even to elementary school level, is something that has to be done on purpose.
Humans use pattern matching and associative abstract thinking - and use that to fall into stupid traps like "1kg of steel/feather" or "age of the captain". So do small LLMs.
Because there might be a non-material component involved.
And the driver of this corporation is survival of the fittest under the constraints of profit maximization, the algorithm we have designed and enforced. That's how you get paperclip maximizers.
What gives this corporate cyborg life is not a technical achievement, but the law. At a technical level you can absolutely shut off a cybo-corp, but that’s equivalent to saying you can technically shut down Microsoft. It will not happen.
Unless and until "AGI" becomes an entirely self-hosted phenomenon, you are still observing human agency. That which designed, built, trained, the AI and then delegated the decision in the first place. You cannot escape this fact. If profit could be made by shaking a magic 8-ball and then doing whatever it says, you wouldn't say the 8-ball has agency.
Right now it's a machine that produces outputs that resemble things humans make. When we're not using it, it's like any other program you're not running. It doesn't exist in its own right, we just anthropomorphize it because of the way conventional language works. If an LLM someday initiates contact on its own without anyone telling it to, I will be amazed. But there's no reason to think that will happen.
I don't dismiss AI. But I do dismiss what is currently sold to me. It's the equivalent of saying "we made a rocket that can go Mach 1000!". That's impressive. But we're still 2-3 orders of magnitude off from light speed. So I will still complain about the branding, despite some dismissals of "yeah, but imagine in another 100 years!". It's not about semantics so much as principle.
That's on top of the fact that we'd only be starting to really deal with significant time dilation by that point, and we know it'll get more severe as we iterate. What we're also not doing is using this feat to discuss how to address those issues. And that's the really frustrating part.
I find myself in this type of discussion with AI maximalists where they balk at me suggesting there isn’t much “I” in “AI” and they get upset that I’m not seeing how smart it is and shocked I think it’s impossible… and then they start adding all the equivocation about time horizons. I never said it wasn’t possible eventually, just not right now. If I try to pin people down to a timeline it all of a sudden becomes “surely eventually”…
One-for-one, there are many creatures that are individually more dangerous to humans, and a decent number of people are killed by such animals every year. Indeed, a naked human in the wild is going to be quite fragile and easy to kill until they can bring some technology to bear. But there are no animals or even set of animals that could conceivably wipe out all of humanity at any of our technological peaks from the last 100,000 years. Even the number one killer of humans, the mosquito, is gradually being defeated, going from a vector for disease to just an annoyance, just like the flea.
And on that same note it should be mentioned that exchange of information between humans is relatively slow and guarded. A group of entities that could exchange knowledge quickly and efficiently would represent an extreme challenge for us.
Has this guy not heard of the Bengal tigers of the Sundarban forests, which kill ~50 humans per year? https://en.wikipedia.org/wiki/Tiger_attacks_in_the_Sundarban...
How many tigers have been killed by humans? How many would be killed if humans did not restrain each other from killing tigers, because we have killed so many that they are endangered? That population could be entirely wiped out by humans in the area.
Mosquitoes kill far more people than tigers. So do venomous snakes. The fact is that when a human faces either of these, the animal is far more likely to end up dead than the human.
Anything smaller than a human is absolutely terrified of us. I used to be afraid of things like snakes and spiders, but they want nothing to do with humans. They will get the hell out of your way when you are walking through the woods. You have to do something really stupid to get bitten. Keeping animals as pets is where most of the trouble happens.
A bomb is an inanimate object too, but if you find some unexploded ordnance lying on the ground you should fear it.
I'm on board with being skeptical that LLMs will lead to AGI; but - there being no possibility seems like such a strong claim. Should we really bet that there is something special (or even 'magic') about our particular brain/neural architecture/nervous system + senses + gut biota + etc.?
Don't, like, crows (or octopuses; or elephants; or ...) have a different architecture and display remarkable intelligence? Ok, maybe not different enough (not 'digital') and not AGI (not 'human-level') but already -somewhat- different that should hint at the fact that there -can- be alternatives
Unless we define 'human-level' to be 'human-similar'. Then I agree - "our way" may be the only way to make something that is "us".
Us having the ability doesn’t mean we’re close to reproducing ourselves, either
What do you mean by "it learned"? From my perspective evolution is nothing more than trial and error on a big timescale. In a way we are much more than evolution, because we are a conscious intelligence controlling the trial and error, resulting in a substantially shortened timescale.
Take robots that can walk, for example. Instead of starting with a fish that slowly, over thousands or even millions of years, moves to land and grows limbs over many generations, we can just add legs which we have already tested in simulation software at 10x or more the speed of real time.
AI (or AGI) potential or possibilities should not be measured on nature's timescale.
I have. gpt-3.5-instruct required a lot of prompting to keep it on track. Sonnet 4 got it in one.
Terence Tao, the most prominent mathematician alive, says he's been getting LLM assistance with his research. I would need about a decade of training to be able to do in a day any math that Tao can't do in his head in less than 5 seconds.
LLMs suffer from terrible, uh, dementia-like distraction, but they can definitely do logic.
Our theories of how LLMs learn and work look a lot more like biology than math. Including how vague and noncommittal they are because biology is _hard_.
Which many people seem to neglect: instead of making us, we make an alien.
Hell, making us is a good outcome. We at least somewhat understand us. Set off a bunch of self-learning, self-organizing code to make an alien, and you'll have no clue what comes out the other side.
1. I believe AGI may require a component of true agency. An intelligence that has a stable sense of self that it's trying to act on behalf of.
2. We are not putting any resources of significant scale towards creating such an AGI. It’s not what any of us want. We want an intelligence that acts out specific commands on behalf of humans. There’s an absolutely ruthless evolutionary process for AIs where any AI that doesn’t do this well enough, at low enough costs, is eliminated.
3. We should not believe that something that we’re not actually trying to create, and for which there is no evolutionary process that select for it, will somehow magically appear. It’s good sci-fi, and it can be interesting to ponder about. But not worth worrying about.
Even before that, we need AI which can self-update and do long term zero shot learning. I’m not sure we’re even really gonna put any real work into that either. I suspect we will find that we want reproducible, dependable, energy efficient models.
There’s a chance that the agentic AIs we have now are nearly the peak of what we’ll achieve. Like, I’ve found that I value Zed’s small auto-complete model higher than the agentic AIs, and I suspect if we had a bunch of small and fast specialised models for the various aspects of doing development, I’d use that far more than a general purpose agentic AI.
It’s technically possible to do fully secure, signed, end-to-end encrypted email. It’s been possible for many decades now. We can easily imagine the ideal communication system, and it’s even fairly easy to solve, technically speaking. Yet it’s not happening the way we imagined. I think that shows how what’s technically and physically possible isn’t always relevant to what’s practically possible. I think we will get electronic mail right eventually. But if it takes half a century or more for that... something that is orders of magnitude more difficult (AGI) could take centuries or more.
This guy has clearly never bumped into a grizzly bear momma, a moose in rut, a hippo, or loads of other animals.
Personally, I'm not even worried about AI itself, I'm worried about the people wielding it. MBAs are the scariest monsters in the woods.
I guess bacteria and viruses might have a stronger claim to being the 'scariest monster' though. I don't think there's a strong 'living' contender for wiping out humanity apart from those. (Viruses are a bit debatable on the 'living' part though.)
But for AI I’m not sure that proposition will hold indefinitely. Although I do think we are far away from having actual AGI that would pose this threat.
Still, the author has a good but obvious point.
* SIG P320 enters the chat *
Except the things that kill most of us[1] keep me up at night.
1. https://ourworldindata.org/does-the-news-reflect-what-we-die...
Collapse of our current civilizations? Sure. Extinction? No.
And I honestly see stronger incentives on the road towards us being outcompeted by AI than on our leaders starting a nuclear war.
Basically everything wrong with today's productivism, but 100 times worse and powered by a shitty AI that's very far from AGI.
As such, I'd say "extinction" is more of a colloquial use of "Massive point in history that kills off billions in short order".
We could just as well think that exploding the first nuclear bomb would ignite the atmosphere and kill all of humanity. There was nothing from physics that indicated it was possible but some still thought about it. IMO that kind of thinking is pointless. Same with thinking LHC would create a black hole.
As far as I can tell, the fear that super-intelligent AI will kill humans all boils down to: something utterly magical happens, and then somehow a super-intelligent evil AI appears.
If we had certainty that our designs were categorically incapable of acting in their own interest then I would agree with you, but we absolutely don't, and I'd argue that we don't even have that certainty for current-generation LLMs.
Long term, we're fundamentally competing with AI for resources.
> We could just as well think that exploding the first nuclear bomb would ignite the atmosphere and kill all of humanity. There was nothing from physics that indicated it was possible but some still thought about it.
This is incorrect. Atmospheric ignition was a concern by nuclear physicists based on physics, but dismissed as unlikely after doing the math (see https://en.wikipedia.org/wiki/Effects_of_nuclear_explosions#...).
> As far as I can tell, the fear that super-intelligent AI will kill humans all boils down to: something utterly magical happens, and then somehow a super-intelligent evil AI appears.
Not necessary at all. AI acting in its own interests and competing "honestly" with humans is already enough. This is exactly how we outcompeted every other animal on the planet after all.
The closest field of science we can use to predict the behaviour of intelligent agents is evolution. The behaviour of animals is highly dictated by the evolutionary pressure they experience in their development. Animals kill other animals when they need to compete for resources for survival. Now think about the evolutionary pressure for AIs. Where’s the pressure towards making AIs act on their own behalf to compete with humans?
Let’s say there’s somehow something magical that pushes AIs towards acting in their own self interest at the expense of humans. Why do we believe they will go from 0% to 100% efficient and successful at this in the span of what? Months? It seems more likely that there would be years of failed attempts at breaking out, before a successful attempt is even remotely likely. This would just further increase the evolutionary pressure humans exerts on the AIs to stay in line with our expectations.
Attempting to eliminate your creators is fundamentally a pretty stupid action. It seems likely that we will see thousands of attempts by incompetent AIs before we see one by a truly superintelligent one.
I've not seen that. Can you link to it?
I think this is a vast overstatement. A small group of influential people are deathly afraid of AGI, or at least using that as a pretext to raise funding.
But I agree that there are so many more things we should be deathly afraid of. Climate change tops my personal list as the biggest existential threat to humanity.
I think the sad part is that most people in power aren't planning to be around in 10 years, so they don't care about any long-term issues that are cropping up. Leave it to their grandchildren to burn with the world.
I do agree nukes are a far more realistic threat. So this is kind of an aside and doesn't really undermine your point.
But I actually think we widely misunderstand the dynamic of using nuclear weapons. Nukes haven't been used for a long time and everyone kind of assumes using them will inevitably lead to escalation which spirals into total destruction.
But how would Russia using a tactical nuke in Ukraine spiral out of control? It actually seems very likely that it would not be met in kind. Which is absolutely terrifying in its own right: a sort of normalization of nuclear weapons.
You tell me. How does this escalate into a total destruction scenario? Russia uses a small nuke on a military target in the middle of nowhere Ukraine. ___________________________. Everyone is firing nukes at each other.
Fill in the blank.
We are not talking about the scenario where Russia fires a nuke at Washington, Colorado, CA, Montana, forward deployments, etc. and the US responds in kind while the nukes are en route.
My favorite historical documentary: https://www.youtube.com/watch?v=Pk-kbjw0Y8U (my new favorite part is America realizing "fuck, we're dumbasses" far too late into the warfare they started).
That is to say: you're assuming a lot of good faith in a time of unrest, with several leaders looking for any excuse to enact martial law. For all we know, the blank is "Trump overreacts and authorizes a nuclear strike on Los Angeles" (note the word "authorizes"; despite the media, the president cannot unilaterally fire a nuclear warhead). That bizarre threat alone might escalate completely unrelated events and boom. Chaos.
It'll be a similar flimsy straw breaking that will mark the start of nuclear conflict after years of rising tensions. And by then pandora's box will be opened.
While all-out international global war isn't guaranteed in this scenario, I don't see why you'd be so confident as to imply that it was very unlikely. For me, the biggest fear in terms of escalating to nuclear war is the time when some nuclear power is eventually beaten in conventional war to the point of utter desperation, and ends up going all out with nukes as a Hail Mary or a "if I can't have it, no one will" move.
Russia uses a strategic nuke in a military move in Ukraine. The rest of Europe, fearing a normalization of strategic nuke use and more widespread uses of these weapons (including outside Ukraine) begin deploying in Ukraine to help push back the Russian forces - especially since Russia showing the willingness to use nukes at all makes them and their military a lot more intimidating and urgently threatening to the rest of Europe than before. Russia perceives this deployment as a direct declaration of war from NATO and invades (one of) the Baltic states to create instability and drive the attention and manpower away from their primary front line. This leads to full war and mobilization in Europe. Russia is eventually pushed back to their original borders, but with what the situation became, Western countries are nervous that not dealing with this once and for all would just be giving Russia a timeout to regroup and re-invade with more nukes at a later point. They press on, and eventually Russia is put in a desperate situation, which leads to them using nukes more broadly against their enemies for one of the reasons I described at the start of this comment. Other nuclear states begin targeting known nuclear launch sites in Russia with strikes of their own, to cripple their launch ability. This is nuclear war.
I'm not saying this scenario is likely, but this is just one attempt at filling in the blank. If you can imagine a future - any future at all where Russia, or North Korea, or India, or Pakistan, or Israel have their existence threatened at any point in the future ever, this is when nuclear war becomes a serious possibility.
Let me ask you this question. Why did Russia use a small nuke on a military target in the middle of nowhere Ukraine? Because the outcome was positive for Russia... but the only way that can be true is if the cost of using a small nuke was better than the alternatives. This either means it was a demonstration / political action or... 1 or more Russian units in the Ukraine are armed with tactical nukes and it was a militarily sound option so by definition you'll see more nuke flying around at least from the Russian side when it's militarily sound. Now due to the realities of logistics that means there is capturable nuclear material on the battlefield.
If it's a demonstration/political action what do you think it was meant to accomplish? Either the consequences will be less detrimental than the military gain and so Russia can use tactical nukes and will do so if it improves the military situation... or the consequences will be at a level detrimental to Russia.
See, the premise of the question is flawed in that Russia doesn't just use one nuke in the middle of nowhere, because everyone already knows Russia has nukes. Russia is trying to demonstrate it will use them, and so the outcomes from that are either a capitulation by the Ukraine alliance, Russia continuing the war just the same but with whatever extra political consequences come from using one nuke, or the situation continuing with nukes in use.
You see the issue, right? Should the Ukraine alliance surrender to a single tactical nuke when it hasn't surrendered to the threat of strategic nukes? Russia can't fire that first nuke without becoming a country willing to use tactical nukes on the battlefield, and what did they gain, if not the use of those nukes to ensure a military victory, given that it hasn't ensured a diplomatic one?
So the statement becomes: Russia uses tactical nukes across the battlefield. ____. Everyone is firing nukes at each other.
That last sentence is synonymous with "strategic nukes being fired by ICBM", which is incredibly likely once one unscheduled ICBM is fired. While you're right that one tactical nuke in the middle of nowhere wouldn't ensure MAD, it is not a massive assumption that the realities around that one nuke being used would.
Spiral out of control in what way? Wouldn't it have ended the war immediately?
Do you want to share any of this credible evidence?
We really need to get rid of corporate personhood. Or at least have a corporate death penalty.
Of course humans are (by far) the source of the biggest problems humans face.
The argument here is that since humans are the scariest, we should ignore problems AI might cause.
Pure nonsense, since (1) AI is human-created — so just another piece of what makes us scary; (2) this kind of binary thinking makes no sense anyway — as if there can’t be multiple things to be concerned with.
> Anyone trying to tell you otherwise is trying to distract you.
A particularly poor bit of argumentation. Barely above sticking your fingers in your ears and shouting, “Blah, blah, blah! I can’t hear you!”
What is it about AI that makes people lose their minds?
edit: seeing the engagement this is getting, maybe it’s a false flag operation of sorts? Making such bad anti-AI arguments that the argument for AI gets stronger? That would at least make some sense.
Everyone knows the real evil is the people building the AI. We aren't afraid of AGI; we are afraid of Boston Dynamics (aka Google). We are afraid of Bill Gates, Sam Altman, Jeff Bezos, Larry Ellison, and Elon Musk.
These are dangerous people. Their mindset. Their unstoppable lust for money and power. We are afraid of their ability to convince humans to do unspeakable things for this imaginary thing we call money.
With AGI they don't have to persuade anyone of anything. There's not even a human conscience to stop them. We know these folks have no conscience of their own; the world is still here because other people do.
AGI won't have a conscience or the integrity to stop evil.
Human society is collapsing because good people are doing nothing.
https://en.wikipedia.org/wiki/2025_Georgia_Hyundai_plant_imm...
It's the fear of death that's killing us all.
But anything outside that evolutionary process? Who knows? For example, we don't do so well on top of Mt Everest. We seem to have totally unsolved problems with refined opiates. Even computer games can be a trap.
Agentic AI? Who knows?
But the next step is to ask why; in the case of the Gruffalo it was obvious: fangs, claws, strength, size…
In the case of humans, it’s because we’re the most intelligent creature in the forest. And for the first time in our history, we’re about to not be.
ruthlessness + strength + WMD =/= intelligence
What’s driving humanity’s dangerous behaviours is that we’ve evolved over millions upon millions of years to compete with each other for resources. We had to kill to survive: both animals and each other.
AI is under no such evolutionary process. In fact quite the opposite: there’s an absolutely ruthless process to eliminate any AI which doesn’t do exactly what humans say with as little energy wasted as possible. There is no room in that process for an AI to develop that will somehow want to compete with humans.
So whatever bad happens, it is overwhelmingly likely to be because a human asked an AI to do it. Not that an AI will decide on its own to do it to further its own interest. I don’t even think a “paperclip maximiser” scenario is remotely probable. Long before there could be a successful attempt to exterminate humans to reach some goal, there will be countless failed or half-assed attempts. That kind of behaviour will have no chance to develop.
To everyone who wants to pose an argument like this, please consider: We have already internalized that very simple, and very thoughtless argument, and have moved past that facile framing to start asking the more pressing questions like "what are we going to do about how humans are using X to perpetrate Y?"
We don't need the extra step of you making us reframe the question to fit your exact criteria of "good question". "AI is going to kill us" can be understood as: "Humans are going to use AI to kill us". The fact that you want to misunderstand it as a more childish argument is your own failings with semiotics. It's so blatantly silly that in most cases, it's less degrading to assume that you are purposefully trying to misdirect the argument than to assume you actually believe people are worried about the agency of inanimate objects. But then you go and write a whole article to preen "logical" at us about how it's just that "no one's asking the right questions!", which kind of makes it hard to imagine that you don't actually think so little of people.
Starting with the AI itself: LLMs sold as AI are the greatest misdirection. Text generation using Markov chains is not particularly intelligent, even when it is looped back through itself a thousand times and resembles an intelligent conversation. What is actually being sold is an enormous matrix trained on terabytes of human-written, high-quality text, obviously in violation of all imaginable copyright laws.
Here is a gedanken experiment to test whether an AI has any intelligence: until the machine starts to detect and resolve contradictions in its own outputs without human help, one can sleep tight. Human language is a fuzzy thing that is not quite suitable for a non-contradictory description of the world. Building such a machine would require resolving all the contradictions humanity has ever faced in a unified way. Before that happens, humanity will be drowned in low-quality generated LLM output.
This is why I've always considered current AI "safety" efforts to be totally wrongheaded. It's not a threat to humanity if someone has an AI generate hate speech, porn, misinformation, or political propaganda. AI is only a threat to humanity if we don't take security seriously as we roll out increasingly more AI-driven automation of the economy. It's already terrifying to me that people are relying on containers to sandbox yolo-mode coding agents, or even raw-dogging them on their personal machines.
AI is not just going to lie on the ground until someone picks it up to do harm.
An intelligent rifle with legs is something to be feared.
You cannot compare AI to inanimate objects under human control which have no agency. Especially not if you are bringing the imaginary AGI into the conversation.
The idea that AGI is just a hammer is absurd.
That said, I agree that human + AI can cause damage and it’s precisely why, from a game theory perspective, the right move is to go full steam ahead. Regulation only slows down the good actors.
Valhalla is within reach, but we have to leap across a massive chasm to get there. Perhaps the only way out of this is through: accelerating fast enough to mitigate the “mid-curve” disasters, such as population revolt due to mass inequality or cyberattacks caused by vulnerabilities in untested systems.
Just because a concept exists in a Star Trek episode does not guarantee technology moving in that direction. I understand art has an effect on reality, but how hard are we spinning our gears because some writer made something so compelling it lives in our collective psyche?
You can point to the communicator from Star Trek and I'll point to the reanimation of Frankenstein's monster.
Unless you believe in some kind of magic (soul/divine spark/etc.), it seems completely inevitable to conclude that human cognition can be replicated by a machine, at the extreme end simply by simulating the whole thing.
I would argue that "language" was a defining characteristic of human intelligence (as opposed to "lesser animals") long before we even conceived of AI, and hitting language processing/understanding benchmarks that far exceed animal capabilities is a very strong indicator by itself.
I think science fiction makes us feel like AGI is closer and more relevant than it really is. Are we really so close that there's "no time" to regulate for it? Are we even close enough to give a care at all? Is there any evidence at this time at all that AGI would be an existential threat? Is all this talk coming from decades of "Creator vs. Daemon" stories combined with LLMs' ability to sound human? Does someone gain something from pointing our attention at the far-off, sci-fi-imagined future instead of current-day issues with AI?
To your simulation point: we talk about our bodies, brains, and consciousness like they are machines, computers, and software. They are not. They are organic and their function is highly emergent. Think about all the endless input our bodies receive; we probably don't even fully understand all the sensitivities our "consciousness" has. We don't "execute" a program based on input; in fact, the input is likely part of the system. It's why our gut biome can affect our mental health, and why our cells communicate with each other locally and likely guide our behaviour in ways we don't understand yet.
In other words, we're not any closer to simulating human consciousness than we are to simulating the universe itself.
So I do think it's hubris to think that a rock with some switches on it running some mathematical probability model can come even close to generating consciousness.
> exceed animal capabilities
Hubris like this. We don't even have a way to measure this. We've just made a math equation that can spit out language; you're comparing this to things that can actually _experience_ without someone clicking enter on a keyboard.
What would it mean to "experience" language properly, and what precludes LLMs from doing that?
In my view, hubris is exactly the opposite view, that our minds are somehow "more" or "better" than a bunch of matrix multiplications (with poor historical track record, see e.g. geocentrism, or the whole notions of lifeforce/soul/divinity).
I'd argue that a large amount of complexity in our brains is extremely likely to be incidental; I don't see why we would ever need to fully simulate a gut biome to achieve human-level cognitive performance (being overcomplicated certainly makes it harder to replicate exactly or to understand fully, but I don't think either of those is necessary).
Eusocial insects and pack animals are in a distant 2nd and 3rd place: they generally don't cooperate much past their immediate kin group. Only humans create vast networks of trade and information sharing. Only humans establish complex systems to pool risk, or undertake public works for the common good.
In fact, a big part of the reason we are so scary is that ability to coordinate action. Ask any mammoth. Ask the independent city states conquered by Alexander the Great. Ask Napoleon as he faced the coalition force at Waterloo.
We are victims of our own success: the problems of the modern world are those of coordination mechanisms so effective and powerful that they become very attractive targets for bad actors and so are under siege, at constant risk of being captured and subverted. In a word, the problem of robust governance.
Despite the challenges, it is a solvable problem: every day, through due diligence, attestations, contract law, earnest money, and other such mechanisms people who do not trust each other in the least and have every incentive to screw over the other party are able to successfully negotiate win-win deals for life altering sums of money, whether that's buying a house or selling a business. Every century sees humans design larger, more effective, more robust mechanisms of cooperation.
It's slow: it's like debugging when someone is red teaming you, trying to find every weak point to exploit. But the long term trend is the emergence of increasingly robust systems. And it suggests a strategy for AI and AGI: find a way to cooperate with it. Take everything we've learned about coordinating with other people and apply the same techniques. That's what humans are good at.
This, I think, is a more useful framing than thinking of humans as "scary."
Canadians have a high-trust culture, but their stock market is historically full of scams[1], and some analysts think it's causally related. (It may just be because TSXV is a wild west, or because companies would IPO on NYSE or Nasdaq if they were legit, but it could be the trust thing. Fits my narrative, anyway.)
When I look at politics, crypto rug pulls, meme stocks with P/E ratios over 200, Aum[2] and similar cults, or many other modern problems I don't see negotiations breaking down because of a lack of trust; I see a bunch of people placing far too much trust in sketchy leaders and ideas backed by scant evidence. A little skepticism would go a long way.
That's why I emphasize robust coordination: more due diligence, more transparency, more fraud detection, more skepticism, more financial literacy, more education in general. There's a cost associated with all this, sure, but it still gets you into a situation where the interaction is a coordination game[3] and the Nash equilibrium is Pareto-efficient. Thus, we fall into the "pit of success" and naturally cooperate in our own best interests.
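To make that game-theory framing concrete, here's a toy sketch (the payoff numbers are made up, purely illustrative): when verification mechanisms like diligence, escrow, and attestations are available, "both sides verify" is a Nash equilibrium that is also Pareto-efficient, while the low-trust outcome is an equilibrium nobody actually prefers.

    from itertools import product

    # Hypothetical 2x2 coordination game (stag-hunt style) between two parties
    # deciding whether to invest in verification or skip it.
    # payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
    ACTIONS = ["verify", "skip"]
    payoffs = {
        ("verify", "verify"): (3, 3),  # deal closes safely for both
        ("verify", "skip"):   (0, 2),  # one side bears the cost, the other free-rides
        ("skip",   "verify"): (2, 0),
        ("skip",   "skip"):   (1, 1),  # low-trust stalemate
    }

    def is_nash(row, col):
        # Neither player can gain by unilaterally switching actions.
        r, c = payoffs[(row, col)]
        return (all(payoffs[(alt, col)][0] <= r for alt in ACTIONS) and
                all(payoffs[(row, alt)][1] <= c for alt in ACTIONS))

    def is_pareto_efficient(row, col):
        # No other outcome makes one player better off without hurting the other.
        r, c = payoffs[(row, col)]
        return not any(pr >= r and pc >= c and (pr > r or pc > c)
                       for pr, pc in payoffs.values())

    for row, col in product(ACTIONS, ACTIONS):
        if is_nash(row, col):
            tag = "Pareto-efficient" if is_pareto_efficient(row, col) else "dominated"
            print(f"({row}, {col}) is a Nash equilibrium: {tag}")

Both (verify, verify) and (skip, skip) come out as equilibria; the whole point of the coordination machinery is to make the efficient one the obvious place to land.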
There's nothing wrong with empathy, altruism, or charity, but they are very far from universal. You need to base your society on a firm foundation of robust coordination, and then you can have those things afterwards, as a little treat.
[1]: https://en.wikipedia.org/wiki/Vancouver_Stock_Exchange
[2]: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
We are the mouse.
> I don’t really believe in the threat of AGI
Just like the mouse doesn't believe in the Gruffalo (until it shows up).
The mouse goes through the woods scaring the hypothetically-more-dangerous creatures with its stories (us, using our intellect, weapons, destroying habitats) until the real Gruffalo shows up.
For a bit, the mouse "uses" the new tool to scare the animals even more (as alluded, human with tool, scarier than without).
Eventually the mouse scares the Gruffalo away (analogous to the brief window when we think we have AGI under control).
The next (unwritten) chapter probably doesn't look so good for the mouse (when the Gruffalo grows to enormous size, eats him and all the other animals in the woods on a sandwich, and sucks up the rest of the resources on the planet).
And now we've created some form of intelligence that none of us really understands.
Whether it is "real" intelligence or a "stochastic parrot" does not matter if it shows similar capabilities as us. Worse, it's similar but different in ways we cannot explain! I mean, if it outperforms most humans on advanced tasks but then makes elementary mistakes that a 4-year old won't, isn't that weird?
"Weird" can be good or bad, and I usually like "weird"... but now we're rapidly giving it tools to affect the real world and exponentially expanding the scale at which it can operate. We don't know what weird compounded at that scale and capability extrapolates to. Whether it is a SkyNet or a Bond supervillain or an Asimov scenario, the potential risks are considerable and unpredictable.
It's fair to be concerned about another monster in the woods even if we created it ourselves, because we've equipped it with the same powers we possess without understanding how they work.
I have some optimism that humans with AI may be less scary than without. One of the biggest problems is that once humans get into power, having said they'll be nice, they go a bit nuts, like Putin or quite a few others I'm sure you can think of. If instead we had things run by open-source AI, everyone could see what it was thinking, and it wouldn't be so able to act nice and then flip to trying to be a dictator and launching wars. So far the LLMs have tended towards bland niceness, which is kind of what you want in government.
It doesn't really matter to us humans whether "artificial life" AGI agents can or will exist at all; it's a MacGuffin, a canard. The greater impact is that many forms of narrow AI/ML, with some hype deserved and some not, are and will be used with minimal regulation to upend society in profound ways, like the cotton gin, steam engine, transistor, internet, and smartphone all in one, impoverishing [mb]illions of workers and making trillionaires. In the absence of corrective measures, the present course is running towards ever more grotesque inequality, deprivation, corruption, and tyranny. Flock cameras' "ALPR" object-and-people-classifier mass surveillance, and its likely misuse by intelligence services, police, and data brokers, is just one other example. Hidden socioeconomic "social credit"-like data brokering is already selectively benefitting and sabotaging people through routine discrimination they're unable to inquire about.