https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/t_a...
It would be amazing to go and fetch Turing with a time machine and bring him to our time. Show him an iPhone, his face on the UK £50 note, and Wikipedia's list of https://en.wikipedia.org/wiki/List_of_openly_LGBTQ_heads_of_...
PaulRobinson•2h ago
I thought Turing's Test would be a good barometer of AI, but in today's world of mountains of AI slop fooling more and more people, and, ironically, software that is better at solving CAPTCHAs than humans are, I'm not so sure.
Add into the mix that there are reports of people developing psychological disorders when exposed deeply to LLMs, and I'm not sure they are good replacements for therapists (ELIZA, ah, what a thought). Even with a lot of investment in agentic workflows, and in getting a lot of context into GraphRAG or wiring up MCP, they seem to be good at helping experts get a bit faster, not at replacing experts. And that's not specific to software development; it seems to be the case across all domains of expertise.
So what are we chasing now? What's the test for AGI?
It's definitely not playing games well, like we thought, or pretending to be human, or even being useful to a human. What is it, then?
pvg•2h ago
Was it? Alpha-beta pruning is from 1957; even then they had a decent idea of what human-beating computer chess would look like, and that it probably wasn't some pathway to Turing-test-beating AI.
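(For reference, a minimal alpha-beta sketch in Python; illustrative only, with children() and evaluate() as placeholder hooks a real engine would supply:)

    # Minimal alpha-beta pruning over an abstract game tree.
    # children(node) yields successor states; evaluate(node) scores a
    # leaf from the maximizing player's point of view.
    def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
        kids = list(children(node))
        if depth == 0 or not kids:
            return evaluate(node)
        if maximizing:
            value = float("-inf")
            for child in kids:
                value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                             False, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:  # beta cutoff: the opponent will never allow this branch
                    break
            return value
        else:
            value = float("inf")
            for child in kids:
                value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                             True, children, evaluate))
                beta = min(beta, value)
                if beta <= alpha:  # alpha cutoff
                    break
            return value

The point being that this kind of search works precisely because the game is a narrow, fully specified domain, which is a long way from anything Turing-test shaped.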
zmgsabst•2h ago
But because AI is not like us, we get different results at different stages: machines have been better at arithmetic for a hundred years, better at games for twenty, and are slowly climbing up other domains.
nyrikki•32m ago
What we have now matches what many of the popular texts would call "Narrow AI", which is limited to specific tasks like speech recognition or playing chess, or mixtures of those.
Traditionally AGI represents a more aspirational goal, machines that could theoretically perform any intellectual task a human can do.
Under that definition we aren't close, and we will actually need new math to even hope to reach that goal.
Obviously, individuals' concepts of what 'AGI' means differ, as well as their motivations for choosing one.
But the traditional, hopeful concept behind the AGI mnemonic is known to be unreachable without discoveries that upend what we think are hard limits today.
Machines being better at arithmetic ties back to them being algorithmic, and the limits of algorithms are actually the source of those hard limits.
The work of Turing, Gödel, Tarski, Markov, Rice, etc. is where that claim comes from, IMHO.
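(As a rough sketch of the kind of limit meant here, assuming the standard halting-problem diagonalization; halts() is a hypothetical decider for the sake of the argument, not a real function:)

    # Suppose a total decider halts(prog, inp) existed that always
    # correctly answers whether prog(inp) halts.
    def halts(prog, inp):
        raise NotImplementedError("hypothetical oracle; Turing showed it cannot exist")

    # Then this program is contradictory: paradox(paradox) halts exactly
    # when halts() says it doesn't, so no such decider can exist.
    def paradox(prog):
        if halts(prog, prog):
            while True:   # predicted to halt, so loop forever
                pass
        return            # predicted to loop, so halt immediately

    # Rice's theorem generalizes this: every non-trivial semantic
    # property of programs is undecidable.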
Fortunately there is a lot of practical utility without AGI, but our industry's use of aspirational mnemonics is almost guaranteed to disappoint the rest of the world.
Scarblac•2h ago
But I think general problem solving is a part of it. Coming up with its own ideas for possible solutions rather than what it generalized from a training set, and being able to try them out and iterate. In an environment it wasn't specifically designed for by humans.
(not claiming most humans can do that)
Scarblac•1h ago
I think asking an AGI to do what humans do is like asking a submarine to swim. It's not very useful.
So I think that when we have useful computer AGI, it will be much better at it than humans.
You already see that even with say ChatGPT -- it's not expert level, but the knowledge it does have is way way wider than any human's. If we get something that's as smart as humans, it will probably still be as widely applicable.
And why even try, otherwise? We already have human intelligence.
pyman•2h ago
My dog doesn't know what I do for a living, and he has no concept of how intelligent I am. So if we're limited by our own intelligence, how would we ever recognise or measure the intelligence of an AI that's more advanced than us?
If an AI surpasses us, not just in memory or calculation but in reasoning, self-reflection, and abstraction, how would we even know?
dale_glass•1h ago
How do we know? Play a game with the computer, and see who wins.
There's no reason why we can't apply the same logic elsewhere. Set up a testable scenario, see who wins.
card_zero•57m ago
The error here is thinking that dogs understand anything.
Retric•46m ago
With dogs it's less a question of intelligence than of communication, something a more intelligent AI is unlikely to have a problem with.
card_zero•17m ago
What would our being baffled by a super-intelligence look like? Maybe some effect like dark matter. It would make less sense the more we found out about it, and because it's on a level beyond our comprehension, it would never add up. And the lack of apparent relevance to a super-intelligence's doings would be expected, because it's beyond our comprehension.
But this is silly and resembles apologies for God based on his being ineffable. So there's a way to avoid difficult questions like "what is his motivation" and "does he feel like he needs praise" because you can't eff him, not even a little. Then anything incomprehensible becomes evidence for God, or super-intelligence. We'd have to be pretty damn frustrated with things we don't understand before this looks true.
But that still doesn't work, because we're not supposed to be able to even suspect it exists. So even that much interaction with us is too much. In fact this "what if" question undermines itself from the start, because it represents the start of comprehension of the incomprehensible thing it posits.
TheOtherHobbes•32m ago
Our perceptions are shaped by our cognitive limitations. A dog doesn't know what the Internet is, and completely lacks the cognitive capacity to understand it.
An ASI would almost certainly develop some analogous technology or ability, and it would be completely beyond us.
That does NOT mean we would notice we were being affected by that technology.
Advertising and manufactured addictions make people believe external manipulations are personal choices. An ASI would probably find similar manipulations trivial.
But it might well be capable of more complex covert manipulations we literally can't imagine.
iamflimflam1•2h ago
It was seen as so difficult that the report recommended research be abandoned.
Projects in category B were held to be failures. One important project, that of "programming and building a robot that would mimic human ability in a combination of eye-hand co-ordination and common-sense problem solving", was considered entirely disappointing. Similarly, chess playing programs were no better than human amateurs. Due to the combinatorial explosion, the run-time of general algorithms quickly grew impractical, requiring detailed problem-specific heuristics.
The report stated that it was expected that within the next 25 years, category A would simply become applied technologies engineering, C would integrate with psychology and neurobiology, while category B would be abandoned.
https://en.wikipedia.org/wiki/Lighthill_report
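(A back-of-the-envelope sketch of that combinatorial explosion; the branching factor of roughly 35 moves per chess position is a commonly quoted estimate, not a figure from the report itself:)

    # Game-tree size grows as branching_factor ** depth, which is why
    # uninformed search becomes impractical after a handful of plies
    # without problem-specific heuristics to prune it.
    BRANCHING_FACTOR = 35  # rough average number of legal chess moves

    for depth in (2, 4, 6, 8, 10):
        positions = BRANCHING_FACTOR ** depth
        print(f"depth {depth:2d}: about {positions:.2e} positions")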
nemomarx•1h ago
That's what the exponential lift-off people want, right?