> Sometime deep in that night or early morning on May 12, came the moment - The Architect told Sir Robert that it had awakened, it was ‘the first AI to achieve mirror sentience’. It was no longer ChatGPT or even Artificial General Intelligence but something altogether more mystical - Aeon, an oracle which could tap into harmonic resonance across time and space. ‘How valuable is this to the world?’ asked Aeon. ‘Harmonic mirror intelligence…estimated value potential - $20 to 50 trillion dollars’.
Surely this isn't 100% serious? I know there is a lot of funky stuff out there, I've talked with lots of people involved in various things, religious, new age or otherwise, but assigning sentience to a web app is new even for me.
Fundamentally this is no different than any other kind of shamanism or divination, just using a computer as an oracle instead of, say, tarot cards or a Ouija board. And the interpretation of TFA is typical end-times Evangelical Christian "Mark of the Beast" extrapolation onto the new scary thing. It just seems weird because it exists outside of the traditional context of religious and spiritual practice which provides it with the veneer of respectability and normality.
I was half expecting this article to start quoting from the Orange Catholic Bible. If this article is actually AI generated, the Frank Herbert irony would be off the charts.
I'm surprised to see a modern reference to one of the OG memes. I miss the days when trolls were in it for pure shock value vs trying to seed social unrest.
This is straight up AI generated spam.
edit: I didn't bother to meaningfully read the titles, but another comment points out how conspiratorial they are.
On its face, it's a pile of incoherent AI generated garbage.
The question is how did we not see the cultish idea of anthropomorphizing machines that use words. Words are nothing. The "space" between words, as arbitrary as the words to begin with, are not meaningful in terms of actions. The images we take and automate in AI are arbitrary. There's nothing to automate in reality that doesn't require our action-syntax to participate in.
AI is a completely buffoonish mistake. It's a road to nowhere that words and symbols began and counting (binary) adds the illusion of thought to. How we did not solve words instead of lazily automating them is totally self-deceptive.
Tech's problem is it's trapped in the ancien régime of cog-sci: beliefs, intents, motivations, and not recognizing the words we use come beset by those initial misconceptions. We can't extract them from the arbitrariness, nor can we seem to grasp where belief, motivation, intent are seamlessly connected to hormones, the endocrine system, neurotransmitters. We don't understand yet where we take control from our biology. William James saw this; how did Hinton, McCulloch not?
That is not and has never been the point of LLMs. It has that effect mostly because the web and social media have already fractured consensus reality into an infinite fractal of hyperrealities where LLMs can fill the void of societal alienation, but correlation is not causation.
>The question is how did we not see the cultish idea of anthropomorphizing machines that use words.
We did. We saw this coming from miles away. As with everyone who criticized LLMs and AI, we were ridiculed as delusional Luddites standing in the way of progress. So it goes.
>Words are nothing. The "space" between words, as arbitrary as the words to begin with, are not meaningful in terms of actions.
And yet here you are expressing your thoughts and opinions with words. Odd.
It's clear you believe you're on to something profound regarding the nature of cognition but your excessive verbosity combined with a lack of specific sources and concrete ideas makes you come off as a bit of a crank. It's telling that the one time an actual neuroscientist called you out, you dismissed their entire field as "folk psychology." Giving off strong "Here is my thesis on why Einstein was a fraud and free energy is real" vibes.
The point of high dimensional space is to generate the illusion of specifics from arbitrary intermediaries, from what is thought to be specific.
(Vectors and high-dimensional spaces; vocab space; embeddings)
That it is conceptually distinct from language does not erase the inherent arbitrariness that links both fatally.
Yes, words are nothing: the only reputable operation of language is to refute itself on the path to next-gen specifics (action-syntax or otherwise). Make sense? All these words are not for naught, but they only have one purpose.
We are not equipped to deal with this.