If you are a fan of the Foundation books, recall that many of the leaders of the various factions were a bunch of idiots, little different from the carnival barkers we see today.
That's the problem with being nostalgic for something you possibly didn't even live through. You don't remember all the other ugly complexities that don't fit your idealized vision.
Nothing about the world of the sci-fi golden age was less exploitative or less prone to human misery than the world today. If anything, it was far worse in many ways (excluding, perhaps, the reach of the surveillance state).
Some of the US government's worst secret experiments on its own population date from that same era, and the public's naive faith in its "leaders" made propaganda from centralized big-media outlets all the more pervasively powerful. At the same time, social miseries were common, and so were strictures that closed off economic and social opportunities to many more people. As for technology being used for good purposes, bear in mind that, among many other nasty things being done, the '50s and '60s were a time in which several governments flagrantly tested thousands of nukes in the open: in the skies, above ground, and in the oceans, with hardly a care in the world or any serious public scrutiny. If you're looking at that vanished world through rose-tinted glasses, I'd suggest rose-tinted welding goggles instead.
The world of today may be full of flaws, but the avenues for breaking away from controlled narratives and controlled economic rules are probably broader than they've ever been.
On this I definitely agree, especially the last part. Specifically, when I read the science fiction of previous decades and see how its descriptions of a surveillance state compare to the surveillance capacities that are literally applied today by so many states (with varying degrees of authoritarianism), the old sci-fi seems absurdly quaint.
>As I recall, many of his early stories involved "U.S. Robot & Mechanical Men" which was a huge conglomerate owning a lot of the market on AI...
>May want to reread. U.S. Robots and Mechanical Men is pretty prominent in his Robot stories.
Good points from some of these replies. The interview is fairly brief, perhaps he didn't feel he had the time to touch on the socio-economic issues, or that it wasn't the proper forum for those concerns.
That said, a variant of Susan Calvin's role could prove useful today.
AI _researchers_ had a different idea of what AI would be like, as they were working on symbolic AI, but in the popular imagination, "AI" was a computer that acted and thought like a human.
The Star Trek computer is not like LLMs: a) it provides reliable answers, b) it is capable of reasoning, c) it is capable of actually interacting with its environment in a rational manner, d) it is infallible unless someone messes with it. Each one of these points is far in the future of LLMs.
So did ELIZA. So did SmarterChild. Chatbots are not exactly a new technology. LLMs are at best a new cog in that same old functionality—but nothing has fundamentally made them more reliable or useful. The last 90% of any chatbot will involve heavy usage of heuristics with both approaches. The main difference is some of the heuristics are (hopefully) moved into training.
I don't see much difference—you still have to take any output skeptically. I can't claim to have ever used Gemini, but last I checked it still can't cite sources, which would at least assist with validation.
I'm just saying this didn't introduce any fundamentally new capabilities—we've always been able to GIGO-excuse all chatbots. The "soft" applications of LLMs have always been approximated by heuristics (e.g. generation of content of unknown use or quality). Even the summarization tech LLMs seem to offer don't seem to substantially improve over the NLP-heuristic-driven predecessors.
But yea, if you really want to generate content of unknown quality, this is a massive leap. I just don't see this as very interesting.
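For anyone who hasn't seen one, here's a minimal sketch of the kind of NLP-heuristic predecessor I mean: frequency-scored extractive summarization, in Python. Purely illustrative; real pre-LLM systems (TextRank and friends) are more elaborate, but the idea is the same.

    import re
    from collections import Counter

    def heuristic_summary(text, n_sentences=2):
        # Split into sentences, score each by the corpus frequency of its
        # words, and keep the top-scoring sentences in original order.
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        freq = Counter(re.findall(r'[a-z]+', text.lower()))
        score = lambda s: sum(freq[w] for w in re.findall(r'[a-z]+', s.lower()))
        top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
        return ' '.join(s for s in sentences if s in top)

No model, no training, just counting. That's the bar the LLM summarizers are being measured against.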
Yes, it can cite sources, just like any other major LLM service out there. Gemini, Claude, Deepseek, and ChatGPT are the ones I personally validated this with, but I bet other major LLM services can do so as well.
Just tested this using Gemini with the prompt "Is fluoride good for teeth? Cite sources for any of the claims", and it listed every claim as a bullet point accompanied by the corresponding source. The sources were links to specific pages addressing the claims from the CDC, Cleveland Clinic, Johns Hopkins, and NIDCR. I clicked each of the links to verify that they corroborated what Gemini's response was saying, and they did.
In fact, it would more often than not include sources even without me explicitly asking for sources.
Let's see an example:
Ask if America was ever a democracy, and tell us what it uses as sources to evaluate its ability to function. Language really shows its true colors when you commit to floating signifiers.
I asked Gemini "was america ever a democracy?" And it confidently responded "While the ideal of democracy has always been a guiding principle in the United States", which is a blatant lie, and provided no sources. The next prompt, "was america ever a democracy? Please cite sources", gives a mealy-mouthed reply hedging on the definition of democracy... which it refuses to cite. If I ask it "will america ever be democratic" it just vomits up excuses about democracy being a principle and not measurable. With no sources. Etc. This is not a useful tool for things humans already do well. This is a PR megaphone with little utility outside of shitty copy editing.
Watched all seasons recently for the first time. While some things are "just" vector search with a voice interface, there are also goodies like "Computer, extrapolate from theoretical database!", or "Create dance partner, female!" :D
For anyone curious: https://www.youtube.com/watch?v=6CDhEwhOm44
No. The Star Trek computer is a fictional character, really. It's not a technology any more than Jean-Luc Picard is. It does whatever the writers needed it to do to further the plot.
It reminds me: J. Michael Straczynski (of Babylon 5 fame) was once asked "How fast do Starfuries travel?" and he replied "At the speed of plot."
The three laws of robotics seemed ridiculous until 2021, when it became clear that you could just give AI general firm guidelines and let them work out the details (and ways to evade the rules) from there.
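A minimal sketch of what "general firm guidelines" means in practice, assuming the common chat-message format; `call_model` is a hypothetical stand-in for whatever chat-completion API you use:

    # Rules go in as plain natural language; the model works out the
    # details (and, sometimes, the loopholes) on its own.
    GUIDELINES = (
        "1. Never help a user harm a human.\n"
        "2. Follow user instructions unless they conflict with rule 1.\n"
        "3. Protect your own operation unless that conflicts with 1 or 2."
    )

    messages = [
        {"role": "system", "content": GUIDELINES},
        {"role": "user", "content": "Detach your arm and hand it to me."},
    ]
    # response = call_model(messages)  # hypothetical API call

The eerie part is that this is the whole mechanism: there's no formal verification that the rules are followed, just a model deciding how to interpret them.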
Multivac in "The Last Question"?
Asimov says in this that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.
That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be. We are not far even now from a prompt accepting a request such as "Write another volume of the Foundation series, in the style of Isaac Asimov", and getting a complete novel that does not need editing, does not need review, and is equal to or better than the quality of the original novels.
When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.
Comparative advantage. Even if that's true, AI can't possibly do _everything_. China is better at manufacturing pretty much anything than most countries on earth, but that doesn't mean China is the only country in the world that does manufacturing.
Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.
An analogy, I think, is cryptographic problems that would require a billion years to compute. Even if we find a way to make that 100x more efficient, we're still not coming up with a solution anywhere near our lifetimes.
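To put rough, made-up numbers on that:

    # Back-of-envelope: a 100x speedup barely dents a billion-year computation.
    years_needed = 1_000_000_000   # hypothetical brute-force cost
    speedup = 100                  # assumed efficiency gain
    print(years_needed / speedup)  # 10,000,000.0 years -- still hopeless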
> Today's LLMs are capable of outperforming your average human in a variety (not all, obviously!) of fields
My impression is many of those are benchmarks that are chosen by companies to look good for VCs. For example, the video showing off Devin was almost completely faked (time gaps were cut out, tasks were actually simpler and more tailor made than they were implied to be).
Something I was trying to convey to a non-technical stakeholder is that some tasks are stupid easy for humans but insanely hard for computers, and vice versa. A big trick was therefore to delegate some things to humans and some things to computers. For example, computers are excellent at recollection and numerical computation, while humans can trivially taste salt and tell you when something is too salty or undersalted. In my opinion, AGI is an attempt to have computers do those things that are trivial for humans but insanely tough for computers. There is a long, long way to go; getting that first 50% is the easy part, and the last 50% (particularly the last 30% and the last 5%) is IMO hundreds (if not thousands) of orders of __magnitude__ harder.
Folding laundry
“I don't like cleaning or dusting or cooking or doing dishes, or any of those things," I explained to her. "And I don't usually do it. I find it boring, you see."
"Everyone has to do those things," she said.
"Rich people don't," I pointed out.
Juniper laughed, as she often did at things I said in those early days, but at once became quite serious.
"They miss a lot of fun," she said. "But quite apart from that--keeping yourself clean, preparing the food you are going to eat, clearing it away afterward--that's what life's about, Wise Child. When people forget that, or lose touch with it, then they lose touch with other important things as well."
"Men don't do those things."
"Exactly. Also, as you clean the house up, it gives you time to tidy yourself up inside--you'll see.”
Can an AI novel add something new to the conversation of literature? That's less clear to me because it is so hard to get any model I work with to truly stand by its convictions.
- The world already hosts millions of organic AI (Actual Intelligence). Many statistically at genius-level IQ. Does their existence make you obsolete?
Depends on your definition of "intelligence." No, they can't reliably navigate the physical world or have long-term memories like cats or dogs do. Yes, they can outperform them on intellectual work in the written domain.
> Does their existence make you obsolete?
Imagine if for everything you tried to do, there was someone else who could do it better, no matter what domain, no matter where you were, and no matter how hard you tried. You are not an economically viable member of society. Some could deal with that level of demoralisation, but many won't.
> they are arguably no worse than 99% of what the commercial music business has been pumping out for years
Correct, and that says a lot about our society.
Struggling to find the words, but the synthetic voice directly addressing the prompt feels really surreal.
It's not a pure AI output - I generated a bunch of lyrics in text (which doesn't use credits), selected the best one (obviously), padded them out with some repetition, entered a style, generated the audio a few times, selected my favourite audio, and edited the audio (poorly) by repeating a few bars of the intro to make it longer. You don't see the times it generated lyrics about X.509 certificates (even though the prompt was for them to be a valid X.509 certificate) or the times the vocals were unintelligible.
Here's another good version of the song with a different style: https://suno.com/song/2775f188-7582-4970-ac71-5a3b82e39a04?s...
Here are two versions that are disqualified because you can't make out the lyrics: https://suno.com/song/9cebb5b3-c336-495e-be3d-195ea338eb52?s... https://suno.com/song/c6f0e666-ce91-4494-a8b5-1232862965c1?s...
---
I think generative AI does work as a toy. You can ask for all sorts of insane nonsense and laugh at what the program spits out to fulfil your request. I was a paying customer of AI Dungeon 2 (before the incident where OpenAI and/or the Mormons broke it in a poor attempt to impose safety rules).
I didn't keep any lyrics failures, but at the time, I was playing around with requesting songs that were also valid computer files, so here's one that went well: a "religious folk song that is also a valid Cisco configuration file", with the style changed to trance after the lyrics were generated: https://suno.com/song/32aa6d33-0f9f-4d3b-ad53-46a5fe238916?s... and another: https://suno.com/song/32aa6d33-0f9f-4d3b-ad53-46a5fe238916?s...
Juniper doesn't work as well because of the punctuation - it can generate lyrics with braced blocks, but they don't sound like anything: https://suno.com/song/32a0d70c-c9c9-468e-8905-67669c6b90d4?s...
Here's "a religious folk song that is also a valid COBOL program, without any English words": https://suno.com/song/b75aae68-9c1e-46e5-94d4-8bc63387640e?s...
Here are some that aren't configuration files but just sound cool. Prompt was something like "Write a song about a technological dystopia where everyone can only speak BGP." https://suno.com/song/1866516b-e133-47a5-a0ac-23ccb36f81ab?s... . This one's probably a song about "network protocols and their pros and cons": https://suno.com/song/23584394-7058-4bc1-8187-b3d286d36ec4?s...
And while I'm looking at my Suno outputs list, the reason I ever bothered to use it was to see if it could render these lyrics as a ripoff of "Pure Imagination" from Willy Wonka (it cannot because it only makes actual music): https://suno.com/song/19d1a90d-9ed6-4087-94e5-89e41363726e?s...
(I'm assuming that you can open these pages just by having the links. Some of them are set to public visibility.)
(To be clear, I have no problem with AI-generated music. I think a lot of the commenters would be surprised to hear of its origin, though.)
AIs aren't really part of the whole evolutionary race for survival so far. We create them. And we allow them to run. And then we shut them down. Maybe there will be some AI enhanced people that start doing better. And maybe the people bit become optional at some point. At that point you might argue we've just morphed/evolved into whatever that is.
We already live lives which are artificial in almost every way. People used to die of physical exhaustion and malnutrition; now they die of lack of exercise and gluttony. Surely we could have stopped somewhere in the middle. It's not a resource or technology problem at that point, it's societal/political.
Another possibility is to not let us scale. I thought Logan's Run was a very interesting take on this.
This complementarity already exists in our brains. We have evolutionarily older parts of the brain that deal with our basic needs through emotions, and an evolutionarily younger neocortex that deals with rational thought. They have a complicated relationship; both can influence our actions through mutual interaction. Morality is managed by both, and neither is necessarily more "humane" than the other.
In my view, AI will be just another layer, an additional neocortex. Our biological neocortex is capable of tracking the un/cooperative behavior of around 100 people in the tribe, and allows us to learn a couple of useful skills for life.
The "personal AI neocortex" will track behavior of 8 billion people on the planet, and will have mastery of all known skills. It is gonna change humans for the better, I have little doubt about it.
"we" don't control ourselves. If humans can't find enough energy sources in 2200 it doesn't mean they won't do it in 1950.
It would be pretty bad to lose access to energy after having it, worse than never having it IMO.
The amount of new technologies discovered in the past 100 years (which is a tiny amount of time) is insane and we haven't adapted to it, not in a stable way.
Read some philosophy. People have been wrestling with this question forever.
https://en.wikipedia.org/wiki/Philosophy
In the end, all we have is each other. Volunteer, help others.
Let me paint a purpose for you which could take millions of years. How about building an Atomic Force Microscope equivalent that can probe Calabi-Yau manifolds to send messages to other multiverses.
But I doubt most people would subscribe to that view now and would say Photography is an entirely new art form.
I remember that from a couple of years ago, when Stable Diffusion came out. There was a lot of talk about "art" and "AI" and someone posted a collection of articles / interviews / opinion pieces about this exact same thing - painting vs. cameras.
The reason we give them awards is that the camera can't tell you which lens will give you the effect you want or how to emphasize certain emotions with light.
Sure, patent trolls suck, and so does the MAFIAA, but a world where creators have no means to subsist, where everything you do will be captured by AI corps without your permission just to be regurgitated into a model for profit, sucks way, way more.
I know! It's totally and completely immoral to give the little guy rights against the powerful. It infringes in the privileges and advantages of the powerful. It is the Amazons, the Googles, the Facebooks of the world who should capture all the economic value available. Everyone else must be content to be paid in exposure for their creativity.
They occasionally allowed the people who actually make things to become wealthy in order to incentivize other people who make things to continue making things, but mostly it's just the people with lots of money (and the lawyers) who make most of the money.
Studios and publishers and platforms somehow convinced everyone that the "service" and "marketing" they provided were worth the vast majority of the revenue creative works generated.
This system should be burned to the ground and reset, and any indirect parties should be legally limited to at most 15% of the total revenues generated by a creative work. We're about to see Hollywood quality AI video - the cost of movie studios, music, literature, and images is nominal. There are already creative AI series and ongoing works that beat 90's level visual effects and storyboarding being created and delivered via various platforms for free (although the exposure gets them ad revenue.)
We better figure this stuff out, fast, or it's just going to be endless rentseeking by rich people and drama from luddites.
I would argue patents are closer to protecting ideas, and those are alive and well.
I do agree copyright law is terribly outdated but I also feel the pain of the creatives.
This is all theoretical, I don’t know if I believe that we as humans can overcome our desire to hoard and fight over our possessions.
Banks' Culture Communism/Anarchism > Star Trek, any day imho.
A perfect time for LLMs to show up and do the same. The subreddit simulators were hilarious because of the unusual ways they would perform but a modern LLM is a near perfect approximation of the average HN commenter.
I would have assumed that making LLMs indistinguishable from these humans would make those kinds of comments less interesting to interact with but there’s a base level of conversation that hooks people.
On Twitter, LLM-equipped Indians cosplay as right wing white supremacists and amass large followings (also bots, perhaps?) revealed only when they have to participate in synchronous conversation.
And yet, they are still popular. Even the “Texas has warm water ports” Texan is still around and has a following (many of whom seem non-bot though who can tell?).
Even though we have a literal drone, humans still engage in drone behaviour, and other humans still engage them. Fascinating. I wonder whether the truth is that the inherent past-replication of low-temperature LLMs is more likely to fix us in our present state than to raise us to a new equilibrium.
Experiments in Musical Intelligence is now over 40 years old, and I thought it was going to revolutionize things: unknown melodies discovered by machine, married to mind. Maybe LLMs aren't going to move us forward only because this point is already a strong attractor. I'm optimistic about the power of boredom, though!
I think it is heading in this direction, just takes a very long time. 50% of people are dumber than average
The upshot of this is that LLMs are quite good at the stuff that he thinks only humans will be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th century people assumed.
LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.
It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.
Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.
There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.
AI can reinforce that. But - ironically - it can also be very good at subverting it.
I know I could google it, but I wonder whether washing machines were originally called "automatic clothes washers" or something similar before they became widely adopted.
(This should already be clear given that robots do exist, and we do call them robots, as you yourself noted, but never mind that for now.)
It’s not even about the level of mechanical or computational complexity. Automobiles have a lot of mechanical and computational complexity, but also aren’t called robots (ignoring of course self-driving cars).
Generally, it has to automate a task with some intelligence, so dishwashers qualify. It isn't an existence proof (nor did I state that).
My point is simply that we absolutely do not refer to a home dishwasher as a robot. Nor an old thermostat with a bimetallic strip and a mercury switch. Nor even a normal home PC.
This really seems like an "akshually" argument to me...
Nobody is denying that there are dishwashers and washing machines, and that they are big time savers. But is it really a wonder what people are referring to when they say "I want AI to wash my dishes and do my laundry"? That is, I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine. But I still want something to fold my laundry, something that lets me just dump my dishes in the sink and have them come out clean, ideally put away in the cabinets.
> Obviously this is really wishing for domestic robots, not AI
I don't mean this to be an "every Internet argument is over semantics" example, but literally every company and team I know that's working on autonomous robots refers to them heavily as AI. And there is a fundamental difference between "old school" robotics, i.e. robots following procedural instructions, and robots that use AI-based models, e.g. https://deepmind.google/discover/blog/gemini-robotics-brings... . I think it's doubly weird that you say that today's washing machine "has at least some very basic AI in it" (I think "very basic" is doing a lot of heavy lifting there...) but don't think AI refers to autonomous robots.
I don't mean to sound insensitive, but, how? Literal hours?
Well sure, there’s also a computer recording, storing, and manipulating the songs I record and the books I write. But that’s not what we mean by “AI that composes music and writes books.”
This isn’t a quibble about the term “AI.” It’s simply clear from context that we’re talking about full automation of these tasks initiated by nothing more than a short prompt from the human.
lol no, what it has is a finite state machine; you don't want undefined or new behaviour in user appliances
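For the curious, a minimal sketch of the kind of state machine meant here, in Python for readability (real appliance firmware would be C on a microcontroller, but the shape is the same):

    # Every (state, event) pair is either explicitly defined or rejected:
    # no undefined behaviour, no surprises.
    TRANSITIONS = {
        ("idle",     "start"): "filling",
        ("filling",  "full"):  "washing",
        ("washing",  "done"):  "spinning",
        ("spinning", "done"):  "idle",
    }

    def step(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"illegal event {event!r} in state {state!r}")

    # step("idle", "start") -> "filling"; step("idle", "done") -> ValueError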
The term AI clearly has lost all its meaning, so thank you for making it so apparent.
Maybe some day I will, but I find it hard to believe, given that an LLM just copies its training material. All the creativity comes from the human input, and even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.
Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.
I'm not saying that it can't work at all; it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984, where he already imagined the "versificator".
Oh, come on. Who can't love the "classic" song, I Glued My Balls to My Butthole Again[0]?
I mean, that's AI "creativity," at its peak!
[0] https://www.youtube.com/watch?v=wPlOYPGMRws (Probably NSFW)
A friend demoed Suno to me, a couple of days ago, and it did generate lyrics (but not NSFW ones).
Compare that to the parodies made by someone like "Weird Al" Yankovic. And I get that these tools will get better, but the best parodies work due to the human performer. They are funny because they aren't fake.
This goes for other art forms. People mention photography a lot, comparing it with painting. Photography works because it captures a real moment in time and space; it works because it's not fake. Painting also works because it shows what human imagination and skill with brushes can do. When it's fake (e.g., not made by a human painting with brushes on canvas, but by a Photoshop filter), it's meaningless.
Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is because it's some kind of big secret about how to fold a shirt, or there aren't enough examples of shirt folding.
Manipulating a physical object like a shirt (especially a shirt or other piece of cloth, as opposed to a rigid object) is orders of magnitude more complex than completing a text string.
My point is just that the availability of training data is vastly different between these cases. If we want better AI we're probably going to have to generate some huge curated datasets for mundane things that we've never considered worth capturing before.
It's an unfortunate quirk of what we decide to share with each other that has positioned AI to do art and not laundry.
And often they get so caught up supporting the latest fake AI craze that they don't get to research AGI.
I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...
I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.
LLMs have access to what we generate, but not the source. So they embed how we may use words, but not why we use this word and not others.
I don't understand this point - we can obviously collect sensory data and use that for training. Many AI/LLM/robotics projects do this today...
> So they embed how we may use words, but not why we use this word and not others.
Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
Sensory data is not the main issue; how we interpret it is.
In Jacob Bronowski's The Origins of Knowledge and Imagination, IIRC, there's an argument that our eyes are very coarse sensors. Instead, they do basic analysis from which the brain infers the real world around us, together with data from other organs. Like Plato's cave, but with many more dimensions.
But we humans all come with the same mechanisms, which roughly interpret things the same way. So there's some commonality there about the final interpretation.
> Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
Words are symbols that refer to things and the relations between them. In the same book, there's a rough explanation of language that describes the three elements that define it: symbols or terms, the grammar (the rules for using the symbols), and a dictionary that maps the symbols to things and the rules to interactions in another domain that we already accept as truth.
Maybe we are not taught the rules explicitly, but there's a lot of training done with corrections when we say a sentence incorrectly. We also learn the symbols and the dictionary as we grow and explore.
So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
There are 2 types of grammar for natural language: descriptive (how the language actually works and is used) and prescriptive (a set of rules about how a language should be used). There is no known complete and consistent rule-based grammar for any natural human language; all of these grammars are based on some person or people, in a particular period of time, selecting a subset of the real descriptive grammar of the language and saying 'this is the better way'. Prescriptive, rule-based grammar is not at all how humans learn their first language, nor is prescriptive grammar generally complete or consistent. Babies can easily learn any language, even ones that do not have any prescriptive grammar rules, just by observing; there have been many studies confirming this.
> there's a lot of training done with corrections when we say a sentence incorrectly.
There's a lot of the same training for LLMs.
> So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
LLMs definitely learn 'the dictionary' (more accurately a set of relations/associations between words and other types of data) and much better than humans do, not that such a 'dictionary' is an actual determined part of the human brain.
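A toy sketch of what "a set of relations/associations between words" means concretely. The 3-d vectors here are invented purely for illustration; real models learn such associations in thousands of dimensions from co-occurrence in text:

    import numpy as np

    emb = {  # made-up toy embeddings
        "king":    np.array([0.9, 0.80, 0.1]),
        "queen":   np.array([0.9, 0.75, 0.2]),
        "cabbage": np.array([0.1, 0.20, 0.9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(emb["king"], emb["queen"]))    # high: strongly associated
    print(cosine(emb["king"], emb["cabbage"]))  # low: weakly associated

Whether that amounts to a "dictionary" in the parent's sense (a mapping to things, rather than to other words) is exactly the point in dispute.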
I don't buy it. I think our eyes are approximately as fine as we perceive them to be.
When you look through a pair of binoculars at a boat and some trees on the other side of a lake, the only organ that's getting a magnified view is the eyes, so any information you derive comes from the eyes and your imagination, it can't have been secretly inferred from other senses.
- basic features (color, brightness and contrast, edges and shapes, motion and direction)
- depth and spatial relationships
- recognition
- location and movement
- focus and attention
- prediction and filling in gaps
“Seeing” the real world requires much more than simply seeing with one eye.
No reason to think an LLM (a few generations down the line, if not now) cannot do that.
And we can distort quite far (see cartoons in drawing, dubstep in music,...)
My point in bringing up that metaphor is to focus the analogy: when people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led, for example, to AI manufacturers investing billions of dollars into slurping up as much human intellectual output as possible to train "smarter" models.
The focus on the sensory input inherently devalues our quality of being, suggesting that who we are is predominantly explicable by the world around us.
However: we should be focusing on the "statistical model" part: even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave to the side), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.
It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you are providing it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, behaves similarly to a human who has trained in the domain for only years: you're barking up the wrong tree. What you're producing isn't even on the same spectrum; that doesn't mean it isn't useful, but it's not human-like intelligence.
Here's my broad concern: On the one hand, we have an AI thought leader (Sam Altman) who defines super-intelligence as surpassing human intelligence at all measurable tasks. I don't believe it is controversial to say that we've established that the goal of LLM intelligence is something along these lines: it exists on the spectrum of human intelligence, its trained on human intelligence, and we want it to surpass human intelligence, on that spectrum.
On the other hand: we don't know how the statistical model of human intelligence works, at any level at all which would enable reproduction or comparison, and there's really good reason to believe that the human intelligence statistical model is vastly superior to the LLM model. The argument for this lies in my previous comment: the vast majority of contribution of intelligence advances in LLM intelligence comes from increasing the volume of training data. Some intelligence likely comes from statistical modeling breakthroughs since the transformer, but by and large its from training data. On the other hand: Comparatively speaking, the most intelligent humans are not more intelligent because they've been alive for longer and thus had access to more sensory data. Some minor level of intelligence comes from the quality of your sensory data (studying, reading, education). But the vast majority of intelligence difference between humans is inexplicable; Einstein was just Born Smarter; God granted him a unique and better statistical model.
This points to the undeniable reality that, at the very least, the statistical model of the human brain and that of an LLM are very different, which should cause you to raise eyebrows at Sam Altman's statement that superintelligence will evolve along the spectrum of human intelligence. It might, but it's like arguing that the app you're building is going to be the highest-quality and fastest macOS app ever built, and you're building it using WPF and compiling it for x86 to run on WINE and Rosetta. GPT isn't human intelligence; at best, it might be emulating, extremely poorly and inefficiently, some parts of human intelligence. But they didn't get the statistical model right, and without that it's like forcing a square peg into a round hole.
Because we can't compare human and LLM architectural substrates, LLMs will never surpass human-level performance on _all_ tasks that require applying intelligence?
If my summary is correct, then is there any hypothetical replacement for LLM (for example, LLM+robotics, LLMs with CoT, multi-modal LLMs, multi-modal generative AI systems, etc) which would cause you to then consider this argument invalid (i.e. for the replacement, it could, sometime replace humans for all tasks)?
LLM luddites often call LLMs stochastic parrots or advanced text prediction engines. They're right, in my view, and I feel that LLM evangelists often don't understand why. Because LLMs have a vastly different statistical model, even when they showcase signs of human-like intelligence, what we're seeing cannot possibly be human-like intelligence, because human intelligence is inseparable from its statistical model.
But, it might still be intelligence. It might still be economically productive and useful and cool. It might also be scarier than most give it credit for being; we're building something that clearly has some kind of intelligence, crudely forcing a mask of human skin over it, oblivious to what's underneath.
Then when word processors came around, it was expected that faculty members will type it up themselves.
I don't know if there were fewer secretaries as a result, but professors' lives got much worse.
He misses the old days.
I see this referenced over and over again to trivialise AI, as if it is a fait accompli.
I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say computers can never think, and given the fact that we think, you are invoking a soul, God, or Penrose.
It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but we don't do that, so how we experience them is different).
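A toy sketch of why the same question needn't give the same answer, assuming nothing beyond standard temperature sampling (the logits are made up; real models pick each token from a probability distribution the same way):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, seed=None):
        rng = np.random.default_rng(seed)
        probs = np.exp(logits / temperature)
        probs = probs / probs.sum()          # softmax over a tiny "vocabulary"
        return int(rng.choice(len(logits), p=probs))

    logits = np.array([2.0, 1.5, 0.3])       # made-up scores for 3 tokens
    print([sample_next_token(logits) for _ in range(5)])           # varies
    print([sample_next_token(logits, seed=42) for _ in range(5)])  # repeats

Fix the seed and the output is reproducible; leave it unfixed (as deployed services do) and every run can differ.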
> I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say computers can never think, and given the fact that we think, you are invoking a soul, God, or Penrose.
I don't follow this. I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.
To me, the strange move you're making is assuming that we will "accidentally" create thinking machines while doing AI research. On the contrary, I think we'll build thinking, conscious machines after understanding our own consciousness, or at least the consciousness of other animals, and not before.
Point taken. As lelandbatey said, your comment seems to be the one case where it's not meant to trivialise.
>I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.
The "(regardless of the program they run)" suggests you think that AI cannot be achieved by algorithmic means. That runs a little counter to the belief that it is possible to build thinking machines, unless you think those future machines will have some non-algorithmic enhancement that takes them beyond machines.
I do not assume we will "accidentally" create thinking machines, but I certainly think it's not impossible.
On the other hand I suspect the best chance we have of understanding consciousness will be by attempting to build one.
Family Guy Nasty Wolf Pack
The perfect wish to outsmart a genie | Chris & Jack
In all the cases of killing, the robots were innocent. It was either a human who tricked the robot, or a human who didn't tell the robot what they were doing.
For example, a lady killed her husband by asking a robot to detach his arm and give it to her. Once she got it, she beat the husband to death, and the robot didn't have the capability to stop her (since it gave her its arm). That caused the robot to effectively self-destruct.
Giskard, I believe, was the only one that killed people. He ultimately ended up self-destructing as a result (the fate of robots that violate the laws).
That of course isn't stopping us from marching forwards though in the name of progress.
>A robot may not harm humanity, or, by inaction, allow humanity to come to harm
Therefore a robot could allow some humans to die, if the 0th law took precedence.
IIRC, none of the robots broke the laws of robotics, rather they ostensibly broke the laws but the robots were later investigated to have been following them because of some quirk.
Has anyone heard a viable solution, or even has one themselves?
I don’t hear anything about UBI anymore; could that be because of the roughly 60+ million alien people who have flooded into western countries from countries with populations so large that they are effectively endless? What do we do about that? Will that snuff out any kind of advancement in the west when roughly 6 billion people all want to be in the west, where everyone gets UBI and it's the land of milk and honey?
So what do we do then? We can’t all be tech industry people with 6-figure plus salaries, vested ownership, and most people aren’t multi-millionaires that can live far away from the consequences while demanding others subject themselves to them.
Which way?
I want everyone to have food, housing, healthcare, education, etc. in a post scarcity world. That should be possible. I don’t think giving people cash is the best way to accomplish that. If you want people to have housing, give them housing. If you want people to have food, give them food.
Cash doesn’t solve the supply problem, as we can see with housing now. You would think a rise in the cost of housing would lead to more supply, but the cost of real estate also increases the cost of building.
It would be very interesting to see the percentage breakdowns of how such people chose to spend their time. In my opinion, there would be enough benefit to society at large to make it worthwhile. For a large group (if not the majority), I'm certain the situation would turn out to be completely temporary-- they would have the option to prepare themselves for some type of work they're better adapted to perform and/or enjoy, ultimately enhancing the culture and economy. Most of the rest could be useful as research subjects, if they were willing of course.
Obviously this is a bit of a utopian fantasy, but what can I say, Star Trek primed me to hope for such a future.
1% of the labour force works in agriculture:
https://ourworldindata.org/grapher/share-of-the-labor-force-...
1%
let that number sink in; think about what it really means.
And what it means is that at least basic food (unprocessed, no meat) could be completely free. It may take some smart logistics, but it's doable. All of our food is already one step, one small step, away from becoming free for everyone.
This applies to clothes and basic tools as well.
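The rough arithmetic behind that (illustrative only; it ignores imports, processing, and distribution labour):

    ag_share = 0.01      # ~1% of the labour force works in agriculture
    print(1 / ag_share)  # so each agricultural worker feeds ~100 people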
This is a pretty good definition, honestly. It explains the AI Effect quite well: calculators aren’t “AI” because it’s been a while since humans were the only ones who could do arithmetic. At one point they were, though.
That's humanity. We're tool users above anything else. This gets lost.
He can only be referring to these Jira tickets I need to write.
and MCP can work with deepseek running locally. hmm...
"MCP is highly intelligent and yet ruthless. It apparently wants to get rid of humans and especially users."
Did he though? Or was the Butlerian Jihad a backstory whose function was to allow him to believably center human characters in his stories, given the sci-fi expectations of the time?
I like Herbert's work, but ultimately he (and Asimov) were producers of stories to entertain people, so entertainment always would take priority over truth (and then there's the entirely different problem of accurately predicting the future).
Why not? Who is this technology expert with flawless predictions? Talking about the future is inherently an exercise of the imagination, which is also what fiction writing is.
And nothing he's saying here contradicts our observations of AI up to this point. AI artwork has gotten good at copying the styles of humans, but it hasn't created any new styles that are at all compelling. So leave that to the humans. The same with writing; AI does a good job at mimicking existing writing styles, but has yet to demonstrate the ability to write anything that dazzles us with its originality. So his prediction is exactly right: AI does work that is really an insult to the complex human brain.
Asimov was mostly not a fantasy writer. He was a science writer and professor of biochemistry. He published over 500 books. I didn't feel like counting but half or more of them are about science. Maybe 20% are science fiction and fantasy.
https://en.wikipedia.org/wiki/Isaac_Asimov_bibliography_(cat...
But that's more a knock on people like Marc Andreessen than a reason you should put stock in Asimov.
The question I have is why AI technology is being so aggressively advertised nowadays, and why none of it seems to be liberating in any way.
Once, the plow liberated humans from some kinds of work. Some time later, it was just a tool that slaves, decidedly non-liberated, used to tend rich people's farms.
Technology is tricky. I don't trust who is developing AI to be liberating.
The article also plays on the "favorite author" thing. It knows many young folk see Asimov as a role model, so it is leveraging that emotional connection to gather conversation around a topic that is not what it seems to be. I consider it a dirty trick. It is disgraceful given the current world situation (AI being used for war, surveillance, brainwashing).
We are better than this.
I'm not sure I've actually seen an advertisement for AI. It's being endlessly discussed though on HN and other places, probably because it's at an interesting point at the moment making rapid progress. And also shoved into a lot of products and services of course.
Focus on what matters for humans.
Something about his worldview always seemed off to me, although I didn't know he actually seriously held such utopian convictions about AI. It explains an awful lot of the way his stories are.
Oh boy, how foolish we've been!
It's usually called AGI these days.
I mean, I just got done watching a presentation at Google Next where the presenter talked to an AI agent and set up a landscaping appointment with price match and a person could intervene to approve the price match.
It's cool, sure, but understand, that agent would absolutely have been a person on a phone five years ago, and if you replace them with agentic AI, that doesn't mean that person has gone away or is now free to write poetry. It means they're out of an income and benefits. And that's before you consider the effects on the pool of talent you're drawing from when you're looking for someone to intervene on behalf of these agentic AIs, like that supervisor did when they approved the price match. If you don't have the entry-level person, you don't have them five years later when you want to promote someone to manage.
>Just saw a demo of a new word processor system that lets a manager dictate straight into the machine, and it prints the memo without a secretary ever touching it. Slick stuff. But five years ago, that memo would’ve gone through a typist. Replace her with a machine, and she’s not suddenly editing novels from home. She’s unemployed, losing her paycheck and benefits.
And when that system malfunctions, who’s left who actually knows how to fix it or manage the workflow? You can’t promote experience that never existed. Strip out the entry-level roles, and you cut off the path to leadership.
The 2025 equivalent of the secretary is potentially looking across a job market that is far smaller because the labor she was trained to do, or labor similar enough to it that she could have previously successfully been hired, is now handled by artificial intelligence.
There is, effectively, nowhere for her to go to earn a living with her labor.
Travel 75 to 150 miles outside of a US city and it will feel like time travel. If so much is still 100 years behind, how will civilization so broadly adopt something that is yet more decades into the future?
I got into Starlink debates with people during Hurricane Helene. Folks were glowing over how people just needed internet. In reality, internet meant fuck all when what you needed was someone with a chainsaw, a generator, heater, blankets, diapers, and food.
Which is to say, technology and its importance are a thin veneer on top of organized society. All of it is frail, and even recent technology still has a long way to go to fully penetrate rural communities. At the same time, that spread is less important than it would seem to a technologist. Technology has not spread uniformly everywhere, and ultimately it is not that important. So how will AI, which is even more futuristic, leapfrog all this? My money is that rural towns in the USA will look almost identical 30 years from now. Many still look identical to 100 years ago.
I see https://en.wikipedia.org/wiki/Beggars_in_Spain and the reason why they vote the way they do. Modern society has left them behind, abandoned them, and not given them any way to keep up with the rest of the US. Now they're getting taken advantage of by the wealthy like Trump, Murdoch, Musk, etc. who use their unhappiness to rage against the machine.
> My money is that rural towns USA will look almost identical in 30 years from now.
You mean poor, uneducated and without any real prospects of anything like a career? Pretty much. Except there will be far more people who are impoverished and with no hope for the future. I don't see any of this as a good thing.
Indeed; more to the point, though, many people still live these lives. The propagation of technology is not uniform; it is slow, ongoing, and not necessarily even a good thing. My point is that technological progress and the feeling of living in a very advanced age are actually a veneer. The second point is: how are we going to get massive adoption of technology that is decades away, when we still haven't fully adopted the technologies of the last two centuries?
> You mean poor, uneducated and without any real prospects of anything like a career?
A lot of those rural towns had large farms, which had people far richer than software engineers. I think there is a lot of complexity in characterizing 'rural' America (which is a lot closer to a lot of people than I think they otherwise know).
I don't quite share those value judgements. I think it's varied and complicated. My point instead is really more about the propagation of technology. Another example is the US compared to, say, Japanese smart phones. I was told the USA is about 15 years behind in generalized smart phone tech. A podcast I was listening to recently talked about the deep integration of technology in Chinese Uber equivalents, something that is only recent in US offices, where you can go into a room and 'cast' something onto a screen. Apparently in China, being able to play a movie on a screen in the back of an Uber has been a seamless and integrated experience for a long time. Another good example is credit card technology. The oldest method is a carbon copy of the embossed card numbers, then the magnetic strip, then the chip, then tap. Europe had chips in all of its credit cards while some places in the US were still doing carbon copies, and even the "advanced" places were doing magnetic strip only. Canada has been ahead of the US for a while in point-of-payment systems; virtually every restaurant brings a card reader to you instead of (as in the US) this dance where you give someone a credit card so they can go to the register where there is a wired machine where they swipe the card.
So, I suppose my biggest point is that technology spreads a lot slower than we tend to think. It's not a process of years, but of decades and centuries. I'm really pushing back on this technophile sentiment that we're already living in a super advanced age with a strong, robust society; instead, these are veneers over very uneven and slow-moving advancement. This is not going to change overnight (or in the next century) just because someone creates a humanoid AI robot thing that can lift bales of hay and stack them in the right place. Given the lack of adoption of various technologies that we already see, I take that as evidence that nothing will change too quickly, for 30 years or even more, just because we get a bit better at robotics.
Extreme poverty decreased, child mortality decreased, literacy and access to electricity has gone up.
Are people unhappier? Maybe. But not because they lack something materially.
Things that were common in that era that are rare today:
1. Living in shared accommodation. It was common then for people to live in boarding houses and bedsits as adults. Today these are largely extinct. Generally, the living space per person has increased substantially at every level of wealth. Only students live in this sort of environment today, and even then it is usually a flat (i.e. sharing with people you know on an equal basis), not a bedsit/boarding house (i.e. living in someone's house according to her rules--no ladies in gentlemen's bedrooms, no noise after 8pm, etc.).
2. Second-hand clothes and repairing clothes. Most people wear new clothes. People buy second hand because it is trendy, not because it is all they can afford. Nobody really repairs anything; people just buy new. Nobody darns socks or puts elbow patches on jackets where they have worn out. Only people who buy expensive shoes get their shoes resoled. Normal people just buy cheap shoes more often, and they really do save money doing this.
Today the woman that would have been a typist has a different job, and a more productive one that pays more.
That's capitalism for ye :/ Join us on the UBI train.
Say, have you ever read the book 'Bullshit Jobs'...
The people with all of the money effectively froze wages for 45 years, and that was when there were people actually doing labor for them.
What makes you think that they'll peaceably agree to UBI for people who don't sell them labor for money?
Yep. And they didn't accomplish that 'peaceably' either, for the record. A lot of people got murdered, and many more were smeared/threatened/imprisoned, etc. Entire countries got decimated.
> What makes you think that they'll peaceably agree to UBI for people who don't sell them labor for money?
I don't imagine for a moment that they'll like UBI. There is no shortage of examples over recent millennia of how far the parasite class will go to keep the status quo.
History also shows us that having all the money doesn't guarantee that people will do things your way. Class awareness, strikes, unions, protest, and alternative systems/technological advance have shown their mettle. These things scare oligarchs because they work.
The third option is that the oligarchy fully internalizes its pursuit of ruthless concentration of power. But in that case, someone will probably create an AI that's better at playing the power game, and at that point, it's over for the oligarchs.
The vast majority of the gains in productivity have been captured and funneled upward.
0 - https://assets.weforum.org/editor/HFNnYrqruqvI_-Skg2C7ZYjdcX...
Automation won't obsolete work and workers; it will make us more productive, and our desires will increase. We will all expect what today are considered luxuries only the rich can afford. We will all have custom software written for our needs. We will all have individual legal advice on any topic we need advice on. We will all have bigger houses with more stuff in them, better finishings, triple-glazed windows, and on and on.
Call center based services always suck. I remember going to a talk where American Express, who operated best in class call centers, found that 75% of their customers don’t want to talk to them. The people are there because that’s needed for a complex relationship, the more stuff you can address earlier in the funnel, the better.
Customers don’t want to talk to you, and ultimately serving the customer is the point.
In practice I fear that the savings will make the rich richer, drive down labour's negotiating power and generally fail to elevate our standard of living.