> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.
Huh? I don't get it. e^t would also still be finite at heat death.
- Arthur Dent, H2G2
Damn, good read.
Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x
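For anyone tripped up by the distinction being argued here: e^t is enormous but finite at every finite t, whereas hyperbolic growth like 1/(C - t) genuinely diverges at a finite time t = C. A minimal Python sketch (C = 10 is an arbitrary illustrative constant):

    import math

    C = 10.0  # arbitrary blow-up time for the hyperbolic curve

    for t in [9.0, 9.9, 9.99, 9.999]:
        exp_val = math.exp(t)    # exponential: large, but finite for all finite t
        hyp_val = 1.0 / (C - t)  # hyperbolic: diverges as t approaches C
        print(f"t={t}: e^t={exp_val:.1f}, 1/(C-t)={hyp_val:.1f}")

That finite-time blow-up, not mere exponential growth, is the mathematical sense of "singularity" the first quote is gesturing at.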
Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.
– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965
https://www.baen.com/Chapters/9781618249203/9781618249203___...
> A thoughtful man named Maxwell Mouser had just produced a work of actinic philosophy. It took him seven minutes to write it. To write works of philosophy one used the flexible outlines and the idea indexes; one set the activator for such a wordage in each subsection; an adept would use the paradox, feed-in, and the striking-analogy blender; one calibrated the particular-slant and the personality-signature. It had to come out a good work, for excellence had become the automatic minimum for such productions. “I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy.
Sounds exactly like someone twiddling the knobs of an LLM.
Catastrophizing can be unhealthy and unproductive, but for those among us who can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.
... I'm reminded of the tales of various Sci-Fi authors that have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, mars exploration)...
That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.
you can easily see that, at a doubling rate of every 2 years, by 2020 we would already have had over 5 Facebook accounts per human on Earth.
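A minimal sketch of that extrapolation; the base count, base year, and doubling period below are illustrative assumptions, not the exact figures behind the "over 5 per human" claim:

    def doubling_projection(base_count, base_year, doubling_years, target_year):
        """Naive extrapolation: the count doubles every `doubling_years`."""
        doublings = (target_year - base_year) / doubling_years
        return base_count * 2 ** doublings

    # Hypothetical inputs: 1e9 accounts in 2012, doubling every 2 years.
    accounts_2020 = doubling_projection(1e9, 2012, 2, 2020)
    print(accounts_2020 / 7.8e9)  # ~2 accounts per human (2020 world population);
                                  # a shorter doubling period pushes it past 5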
---
I wouldn't say it's that much different. This has always been a key point of the singularity.
>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.
It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.
> the top post on hn right now: The Singularity will occur on a Tuesday
oh
And, yep! A lot of people absolutely believe it will and are acting accordingly.
It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.
And there are plenty of people that take issue with that too.
Unfortunately they're not the ones paying the price. And... stock options.
* Profits now and violence later
OR
* A little bit of taxes now and an easier acceleration
Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.
there is only one possible “egalitarian” forward-looking investment that paid off for everybody
I think the only exception to this is vaccines…and you saw how all that worked during Covid
Everything else, from the semiconductor to the vacuum cleaner, the automobile, airplanes, steam engines, I don’t care what it is, pick anything: it was developed in order to give a small group an advantage over all the other groups. It has always been the case and it will always be the case, because fundamentally, at the root nature of humanity, people do not care about the externalities, good or bad.
But critiques like that ignore uncertainty, risk, and unavoidably getting it "wrong" (on any and all dimensions), no matter what anyone did.
With a new virus successfully circumnavigating the globe in a very short period of time, with billions of potential brand new hosts to infect and adapt within, and no way to know ahead of time how virulent and deadly it could quickly evolve to be, the only sane response is to treat it as extremely high risk.
There is no book for that. Nobody here or anywhere knows the "right" response to a rapidly spreading (and killing) virus, unresponsive to current remedies. Because it is impossible to know ahead of time.
If you actually have an answer for that, you need to write that book.
And take into account that a lot of people involved in the last response are very cognizant that we/they can learn from what worked, what didn't, etc. That is the valuable kind of 20-20 vision.
A lot of at-risk people made it to the vaccines before getting COVID. They are probably happy about everything that reduced their risk for that long. And those that died, including people I know, might argue we could have done more, but they don't get to.
I don't think any non-nuanced view of the situation has merit.
I think the sane version of this is that Gen Z didn't just lose its education, it lost its socialization. I know someone in my university's administration who tracks students' general well-being; they said they were expecting it to bounce back after the pandemic and have found it hasn't. My son reports that if you go to any kind of public event, be it a sewing club or a music festival, people 18-35 are completely absent. My wife didn't believe him, but she went to a few events and found he was right.
You can blame screens or other trends that were going on before the pandemic, but the pandemic locked it in. At the rate we're going, if Gen Z doesn't turn it around in 10 years, there will not be a Gen Z+2.
So the argument that pandemic policy added a few years to elderly lives at the expense of the young, and of the children they might have had, is salient in my book -- I had to block a friend of mine on Facebook who hasn't wanted to talk about anything but masks and long COVID since 2021.
Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.
If you're looking for periods of high taxes and growing prosperity, 1950s in the US is a popular example. It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.
This book
https://www.amazon.com/Zero-Sum-Society-Distribution-Possibi...
tells the compelling story that the Mellon family teamed up with the steelworkers' union to use protectionism to protect the American steel industry's investments in obsolete open-hearth steel furnaces that couldn't compete on a fair market with the basic oxygen furnace process adopted by countries that had their obsolete furnaces blown up. The rest of US industry, such as our car industry, was dragged down by this because it was using expensive and inferior materials. I think this book had a huge impact in terms of convincing policymakers everywhere that tariffs are bad.
Funny the Mellon family went on to further political mischief
https://en.wikipedia.org/wiki/Richard_Mellon_Scaife#Oppositi...
The US is also shaping up to be the principal winner in Artificial Intelligence.
If, like everyone is postulating, this has the same transformative impact on robotics as it does on software, we're probably looking at prosperity that will make the 1950s look like table stakes.
I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care about the bad consequences (or even appreciate them - realistic or not).
It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.
It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be fewer of us.
Well, good luck. You have "only" the entire history of human kind on the other side of your argument :)
The fundamental unit of society …the human… is at its core fundamentally incapable of coordinating at the scale necessary to do this correctly
and so there is no solution because humans can’t plan or execute on a plan
I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe.
Depends on how you feel about Roko's basilisk.
It’s somewhat simplistic, but I find it gets the conversation rolling. Then I go “it’s great that we want to replace work but what are we going to do instead and how will we support ourselves?” It’s a real question!
At least that’s my personal goal
If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled
As it stands today and in all the annals of history there does not exist a system that does what I just described.
Bell Labs existed for the purpose of Bell Telephone…until it wasn’t needed by Bell anymore. Google moonshots existed for the shareholders of Google…until they were not useful for capital. All the work done at Sandia and White Sands labs was done in order to promote the power of the United States globally.
Find me some egalitarian organization that can persist outside the hands of some massive corporation or some government, one that can actually help people, and I might give somebody a chance. But that does not exist.
And no, Mondragon is not one of these.
I still do.
The difference is that as I realized what I'd done is built up walls so thick and high because of repeated cycles of alienation and traumas involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.
Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.
But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find eking out existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.
To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophical about our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.
Now, all that being said, the gap between you and me is less one of personal growth and more one of opinion about agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like, in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.
But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.
Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])
Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the idea of increased comfort and well-being, and the ability to do science, becomes more and more doubtful as humans lose agency.
[1]: https://www.smithsonianmag.com/smart-news/this-old-experimen...
But how is that useful in any way?
For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.
Uh yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" (two people converse) and assume that the fact that they produce similar outputs means that they must be "doing the same thing" and it's hard to see how LLMs could be doing this.
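In code terms, that stepwise loop is just autoregressive decoding. A minimal sketch, where `model` is a stand-in for any next-token predictor (a hypothetical interface, not a real library's API):

    def generate(model, prompt_tokens, max_new_tokens=100, eos_token=0):
        """Autoregressive decoding: predict one token, append it, repeat,
        so each 'step' conditions on everything generated so far."""
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            next_token = model.most_likely_next(tokens)  # one "step toward there"
            tokens.append(next_token)
            if next_token == eos_token:  # the model signals it is done
                break
        return tokens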
Sometimes things seem unbelievable simply because they aren't true.
Maybe you don't. To be clear, this is benefiting massively from hindsight, just as how if I didn't know that combustion engines worked, I probably wouldn't have dreamed up how to make one, but the emergent conversational capabilities from LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.
Obviously, that's the objective, but who's to say you'll reach a goal just because you set it ? And more importantly, who's the say you have any idea how the goal has actually been achieved ?
You don't need to think LLMs are magic to understand we have very little idea of what is going on inside the box.
No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.
To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.
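Concretely, the framing described above looks something like this; the preamble wording and the `model.complete` interface are illustrative, not any particular vendor's actual prompt or API:

    PREAMBLE = ("The following is a transcript of a dialogue between "
                "a human and a helpful, honest AI assistant.\n\n")

    def chat_turn(model, history, user_message):
        """Wrap the conversation as a transcript and let a base language
        model continue it, stopping when the 'Human:' role reappears."""
        prompt = PREAMBLE + history + f"Human: {user_message}\nAI:"
        reply = model.complete(prompt, stop=["\nHuman:"])  # hypothetical API
        return history + f"Human: {user_message}\nAI:{reply}\n", reply

The model never stops being a next-token predictor; the "chatbot" is a character in the document it has been asked to continue.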
Here comes my favorite notion of "epistemic takeover".
A crude form: make everybody believe that you have already won.
A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one has doubts about your having won, they believe that everyone else submits to you as the winner, and must act accordingly.
I don’t know how to get away from it because ultimately coordination depends on understanding what everybody believes, but I wish it would go away.
If outsiders could plausibly invest in China, some of this pressure could be dissipated for a while, but ultimately we need to order society on some basis that incentivizes dealing with practical problems instead of pushing paper around.
1. LLMs only serve to reduce the value of your labor to zero over time. They don't even need to be great tools; they just need to be perceived as "equally good" to engineers for the C-suite to lay everyone off and rehire at 50-25% of previous wages, repeating this cycle over a decade.
2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, since anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what else they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.
3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.
I used like 1.8bb Anthropic tokens last year. I won't be using it again; I won't be participating in this experiment. I've likely lost years of my life in "potential learning" from the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.
A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.
You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.
You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.
If one is looking for a quote that describes today's tech industry perfectly, that would be it.
Also using the MMLU as a metric in 2026 is truly unhinged.
Arrested Development?
The singularity is not something that’s going to be disputable
it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation
I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully
Everybody needs to be preparing for the world where it’s:
human plus machine
versus
human groups by themselves
across all possible categories of competition and collaboration
Nobody is going to do anything about it and if you are one of the people complaining about vibecoding you’re already out of the race
Oh, and by the way, it’s not gonna be with LLMs; it’s coming to you from RL + robotics
Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.
Crazy times we live in.
Don't worry about the future
Or worry, but know that worrying
Is as effective as trying to solve an algebra equation by chewing Bubble gum
The real troubles in your life
Are apt to be things that never crossed your worried mind
The kind that blindsides you at 4 p.m. on some idle Tuesday
- Everybody's free (to wear sunscreen)
Baz Luhrmann
(or maybe Mary Schmich)
Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
You won't read, except the output of your LLM. You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
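The "free to generate, expensive to detect" asymmetry is easy to see: fluent-looking junk can be produced at line rate by something as dumb as a word-level Markov chain, while distinguishing it from real prose takes actual analysis. A minimal sketch of the generation side only; this illustrates the asymmetry, not the Poison Fountain's actual method (which isn't described here):

    import random
    from collections import defaultdict

    def build_chain(words):
        """Map each word to the words observed immediately after it."""
        chain = defaultdict(list)
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def babble(chain, start, n_words=50):
        """Emit a locally plausible but globally meaningless word stream."""
        out = [start]
        for _ in range(n_words - 1):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)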
The problem isn’t in the thinking machines, it’s in who owns them and gets our rent. We need open source models running on dirt cheap hardware.
*edit* - seems in line with what the author is saying :)
> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
Also, the temptation to shitpost in this thread ...
4 years early for the Y2K38 bug.
Is it coincidence or Roko's Basilisk who has intervened to start the curve early?
I really don't care much if this is semi-satire, as someone else pointed out; the idea that AI will ever get "sentient" or explode into a singularity has to die out, pretty please. Just make some nice Titanfall-style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense please.
Doomsday: Friday, 13 November, A.D. 2026
There is an excellent blog post about it by Scott Alexander: "1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...
> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.
You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc. Ostensibly a network connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.
It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.
As long as you're
1) In a position where you can make the decisions on whether or not the company should move forward
and
2) Hold the stock units that will be exchanged for money if another company buys out your company
then there's really no way things won't be fine, short of criminal investigations/the rare successful shareholder lawsuit. You will likely walk away from your decision to weaken the company with more money than you had when you made the decision in the first place.
That's why many in the managerial class often hold up Jack Welch as a hero: he unlocked a new definition of competence where you could fail in business, but make money doing it. In his case, it was "spinning off" or "streamlining" businesses until there was nothing left and you could sell the scraps off to competitors. Slash-and-burn of paid workers via AI "replacement" is just another way of doing it.
    dx/dt = x^2

which has the solution

    x = 1/(C - t)

and is interesting in relation to the classic exponential growth equation

    dx/dt = x

because the rate of growth is proportional to x^2 rather than x, and it represents both the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as t -> C, but for small x it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy.
Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation

    dx/dt = (1 - x) x

thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
Scaling LLMs will not lead to AGI.
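To make the three regimes concrete, a minimal numerical sketch with forward Euler steps; the initial condition, step size, and blow-up cutoff are chosen only for illustration:

    def euler(f, x0, t0=0.0, t1=50.0, dt=0.001, cap=1e9):
        """Integrate dx/dt = f(x) with forward Euler; stop early if x blows up."""
        x, t = x0, t0
        while t < t1 and x < cap:
            x += f(x) * dt
            t += dt
        return x, t

    # Exponential: grows forever but is finite at every finite t.
    # Hyperbolic:  diverges at a finite time (near t = 1/x0 = 20 here).
    # Logistic:    saturates at the carrying capacity x = 1.
    for name, f in [("exponential", lambda x: x),
                    ("hyperbolic",  lambda x: x * x),
                    ("logistic",    lambda x: (1 - x) * x)]:
        print(name, euler(f, x0=0.05))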
I feel like I need to start more sprint stand-ups with this quote...
Let them have their fun. Related, some adults are watching The Matrix, a 26-year-old movie, for the first time today.
For some proof that it's not some common idea, I was recently listening to a fairly technical interview with a top AI researcher who presented the idea of the singularity in a very indirect way, never actually mentioning the word, as if he were the one who thought of it. I wanted to scream "Just say it!" halfway through. The ability to do that, without being laughed at, proves it's not some tired idea, for others.
The (social) Singularity is already happening in the form of a mass delusion that - especially in the Abrahamic apocalyptic cultures - creates a fertile breeding ground for all sorts of insanity.
Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).
We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.
I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.
* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)
The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?
So, "Falling of the night" ?
I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.
The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.
What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.
I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.