frontpage.

The Singularity will occur on a Tuesday

https://campedersen.com/singularity
280•ecto•2h ago•164 comments

Launch HN: Livedocs (YC W22) – An AI-native notebook for data analysis

https://livedocs.com
22•arsalanb•1h ago•6 comments

Show HN: Showboat and Rodney, so agents can demo what they've built

https://simonwillison.net/2026/Feb/10/showboat-and-rodney/
48•simonw•1h ago•27 comments

Simplifying Vulkan one subsystem at a time

https://www.khronos.org/blog/simplifying-vulkan-one-subsystem-at-a-time
160•amazari•6h ago•69 comments

Mathematicians disagree on the essential structure of the complex numbers

https://www.infinitelymore.xyz/p/complex-numbers-essential-structure
73•FillMaths•3h ago•60 comments

Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS)

https://github.com/rowboatlabs/rowboat
56•segmenta•2h ago•14 comments

Ex-GitHub CEO launches a new developer platform for AI agents

https://entire.io/blog/hello-entire-world/
103•meetpateltech•4h ago•80 comments

Clean-room implementation of Half-Life 2 on the Quake 1 engine

https://code.idtech.space/fn/hl2
255•klaussilveira•8h ago•49 comments

Markdown CLI viewer with VI keybindings

https://github.com/taf2/mdvi
22•taf2•1h ago•5 comments

China's Data Center Boom: A View from Zhangjiakou (2025)

https://sinocities.substack.com/p/chinas-data-center-boom-a-view-from
6•fzliu•25m ago•0 comments

Qwen-Image-2.0: Professional infographics, exquisite photorealism

https://qwen.ai/blog?id=qwen-image-2.0
290•meetpateltech•10h ago•145 comments

Show HN: I made paperboat.website, a platform for friends and creativity

https://paperboat.website/home/
33•yethiel•2h ago•22 comments

Google Handed ICE Student Journalist's Bank and Credit Card Numbers

https://theintercept.com/2026/02/10/google-ice-subpoena-student-journalist/
276•lehi•1h ago•105 comments

Oxide raises $200M Series C

https://oxide.computer/blog/our-200m-series-c
394•igrunert•5h ago•202 comments

Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs

https://arxiv.org/abs/2512.20798
495•tiny-automates•16h ago•322 comments

Show HN: I built a macOS tool for network engineers – it's called NetViews

https://www.netviews.app
119•n1sni•14h ago•38 comments

Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB

https://github.com/pretzelai/stripe-no-webhooks
18•prasoonds•2h ago•5 comments

The Switch to Linux and the Beginning of My Self-Hosting Journey

https://hazemkrimi.tech/blog/linux-self-hosting-journey/
6•kingcrimson1000•1h ago•0 comments

Parse, Don't Validate (2019)

https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/
179•shirian•4h ago•115 comments

A brief history of oral peptides

https://seangeiger.substack.com/p/a-brief-history-of-oral-peptides
8•odedfalik•22h ago•3 comments

The Evolution of Bengt Betjänt

https://andonlabs.com/blog/evolution-of-bengt
5•lukaspetersson•16h ago•0 comments

Competition is not market validation

https://www.ablg.io/blog/competition-is-not-validation
5•tonioab•3h ago•0 comments

Show HN: Deadlog – almost drop-in mutex for debugging Go deadlocks

https://github.com/stevenctl/deadlog
4•dirteater_•2h ago•0 comments

I started programming when I was 7. I'm 50 now and the thing I loved has changed

https://www.jamesdrandall.com/posts/the_thing_i_loved_has_changed/
371•jamesrandall•4h ago•321 comments

Semaglutide improves knee osteoarthritis independent of weight loss

https://www.cell.com/cell-metabolism/abstract/S1550-4131(26)00008-2
135•randycupertino•2h ago•96 comments

Show HN: Multimodal perception system for real-time conversation

https://raven.tavuslabs.org
7•mert_gerdan•46m ago•1 comment

Europe's $24T Breakup with Visa and Mastercard Has Begun

https://europeanbusinessmagazine.com/business/europes-24-trillion-breakup-with-visa-and-mastercar...
376•NewCzech•8h ago•347 comments

Redefining Go Functions

https://pboyd.io/posts/redefining-go-functions/
59•todsacerdoti•5h ago•16 comments

Vercel's CEO offers to cover expenses of 'Jmail'

https://www.threads.com/@qa_test_hq/post/DUkC_zjiGQh
172•vinnyglennon•4h ago•122 comments

Jury told that Meta, Google 'engineered addiction' at landmark US trial

https://techxplore.com/news/2026-02-jury-told-meta-google-addiction.html
399•geox•5h ago•307 comments

The Singularity will occur on a Tuesday

https://campedersen.com/singularity
278•ecto•2h ago

Comments

jmugan•1h ago
Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
ecto•1h ago
Perhaps they will revel in the friends they made along the way.
Krei-se•1h ago
If only we had a battle-tested-against-reality self-learning system.
zh3•1h ago
Fortuitously before the Unix date rollover in 2038. Nice.
ecto•1h ago
I didn't even realize - I hope my consciousness is uploaded with 64 bit integers!
thebruce87m•1h ago
You’ll regret this statement in 292 billion years
layer8•1h ago
I think we’ll manage to migrate to bignums by then.
GolfPopper•1h ago
The poster won't, but the digital slaves made from his upload surely will.
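The arithmetic behind the 2038 rollover and the 292-billion-year quip checks out; a quick sketch (Python, purely illustrative):

```python
from datetime import datetime, timezone

# A signed 32-bit time_t overflows at 2**31 - 1 seconds after the Unix epoch.
rollover_32 = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(rollover_32)  # 2038-01-19 03:14:07+00:00

# A signed 64-bit counter lasts roughly 292 billion years.
SECONDS_PER_YEAR = 365.2425 * 24 * 3600  # mean Gregorian year
years_64 = (2**63 - 1) / SECONDS_PER_YEAR
print(f"~{years_64 / 1e9:.0f} billion years")  # ~292 billion years
```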
markgall•1h ago
> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."

> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.

Huh? I don't get it. e^t would also still be finite at heat death.
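The distinction the article leans on (and the source of the confusion here) is that an exponential only diverges as t → ∞, while hyperbolic growth, where the growth rate scales with the square of the quantity, blows up at finite time. A throwaway numerical sketch, not from the article:

```python
def euler(f, x0, t_end, dt=1e-4, cap=1e12):
    """Integrate dx/dt = f(x) with explicit Euler; stop at t_end or when x hits cap."""
    x, t = x0, 0.0
    while t < t_end and x < cap:
        x += f(x) * dt
        t += dt
    return t, x

# Exponential growth: dx/dt = x. Finite at any finite horizon (here t = 10).
t_exp, x_exp = euler(lambda x: x, 1.0, t_end=10.0)

# Hyperbolic growth: dx/dt = x**2. The exact solution 1/(1 - t) diverges at
# t = 1, so the integrator slams into the cap just past t = 1, long before t = 10.
t_hyp, x_hyp = euler(lambda x: x * x, 1.0, t_end=10.0)

print(f"exponential at t={t_exp:.1f}: x ~ {x_exp:.0f}")  # ~22000, still finite
print(f"hyperbolic capped at t={t_hyp:.3f}")             # shortly after t = 1
```

So markgall is right that e^t is finite at heat death; the article's "infinitely patient singularity" line only makes sense for the hyperbolic case.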

ecto•1h ago
exponential = mañana
rcarmo•1h ago
"I could never get the hang of Tuesdays"

- Arthur Dent, H2G2

jama211•1h ago
Thursdays, unfortunately
vcanales•1h ago
> The pole at t≈8 isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.

Damn, good read.

adastra22•1h ago
We are already long past that point…
shantara•55m ago
It doesn’t help when quite a few Big Tech companies deliberately operate on the principle that they don’t have to follow the rules, they just have to change at a rate faster than the bureaucratic system can respond.
skulk•1h ago
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x

ecto•1h ago
Thanks. I dropped out of college
hinkley•1h ago
Once MRR becomes a priority over investment rounds, that tokens/$ curve will notch down and flatten substantially.
jrmg•1h ago
This is gold.

Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.

mesozoicpilgrim•1h ago
I'm trying to figure out if the LLM writing style is a feature or a bug
banannaise•1h ago
Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.
baalimago•1h ago
Well... I can't argue with facts. Especially not when they're in graph form.
OutOfHere•1h ago
I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.
gojomo•1h ago
"It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."

– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965

https://www.baen.com/Chapters/9781618249203/9781618249203___...

burkaman•21m ago
This is incredible.

> A thoughtful-man named Maxwell Mouser had just produced a work of actinic philosophy. It took him seven minutes to write it. To write works of philosophy one used the flexible outlines and the idea indexes; one set the activator for such a wordage in each subsection; an adept would use the paradox, feed-in, and the striking-analogy blender; one calibrated the particular-slant and the personality-signature. It had to come out a good work, for excellence had become the automatic minimum for such productions. “I will scatter a few nuts on the frosting,” said Maxwell, and he pushed the lever for that. This sifted handfuls of words like chthonic and heuristic and prozymeides through the thing so that nobody could doubt it was a work of philosophy.

Sounds exactly like someone twiddling the knobs of an LLM.

qoez•1h ago
Great read but damn those are some questionable curve fittings on some very scattered data points
aenis•1h ago
In other words, just another Tuesday.
jacquesm•1h ago
Better than some of the science papers I've tried to parse.
braden-lk•1h ago
lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?
bwestergard•1h ago
https://en.wikipedia.org/wiki/Messianism
inanutshellus•1h ago
We avoid catastrophe by thinking about new developments and how they can go wrong (and right).

Catastrophizing can be unhealthy and unproductive, but for those among us that can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.

... I'm reminded of the tales of various Sci-Fi authors that have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, mars exploration)...

That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.

jacquesm•1h ago
Yes, but if we don't do it 'they' will. Onwards!
unbalancedevh•1h ago
It depends on how you define humanity. The singularity implies that the current model isn't appropriate anymore, but it doesn't suggest how.
ragchronos•1h ago
This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south? If the trends described continue, the world will become a much worse place in a few years' time.
Krei-se•1h ago
https://cdn.statcdn.com/Infographic/images/normal/870.jpeg

You can easily see that, at a doubling rate of every two years, by 2020 we already had over five Facebook accounts per human on Earth.

GolfPopper•59m ago
Frank Herbert and Samuel Butler.
pixl97•1h ago
>That's a very different singularity than the one people argue about.

---

I wouldn't say it's that much different. This has always been a key point of the singularity:

>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.

It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.

jesse__•1h ago
The meme at the top is absolute gold considering the point of the article. 10/10
wffurr•1h ago
Why does one of them have the state flag of Ohio? What AI-and-Ohio-related news did I miss?
adzm•1h ago
Note that the only landmass on Earth is actually Ohio as well. Turns out, it's all Ohio. And it always has been. https://knowyourmeme.com/memes/wait-its-all-ohio-always-has-...
hipster_robot•1h ago
why is everything broken?

> the top post on hn right now: The Singularity will occur on a Tuesday

oh

aenis•1h ago
Damn. I had plans.
stego-tech•1h ago
This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.

And, yep! A lot of people absolutely believe it will and are acting accordingly.

It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.

jacquesm•1h ago
> “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”

And there are plenty of people that take issue with that too.

Unfortunately they're not the ones paying the price. And... stock options.

stego-tech•1h ago
History paints a pretty clear picture of the tradeoff:

* Profits now and violence later

OR

* Little bit of taxes now and accelerate easier

Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.

AndrewKemendo•59m ago
Every possible example of “progress” has either an individual or a state power purpose behind it.

There is only one possible “egalitarian” forward-looking investment that paid off for everybody.

I think the only exception to this is vaccines…and you saw how all that worked during Covid.

Everything else, from the semiconductor to the vacuum cleaner, the automobile, airplanes, steam engines, I don't care what you pick, was developed in order to give a small group an advantage over all the other groups. It has always been this case and it will always be this case, because fundamentally, at the root nature of humanity, we do not care about the externalities, good or bad.

jacquesm•48m ago
COVID has cured me (hah!) of the notion that humanity will be able to pull together when faced with a common enemy. That means global warming or the next pandemic are going to happen and we will not be able to stop it from happening because a solid percentage can't wait to jump off the ledge, and they'll push you off too.
AndrewKemendo•46m ago
Yeah buddy we agree
jpadkins•38m ago
COVID also cured me, but we reached different conclusions. I used to think that what happened in Nazi Germany was unthinkable or impossible in my country. After seeing the glee with which my fellow humans enforced and cheered on inhumane policies (closing schools, beaches, public parks, etc.) or mask mandates (which became a very public signal of whether you were part of the in-group or not), I realized how easy it was for the Nazis to control the German public. I lost a lot of faith in humanity: people aren't going to stand up for their fellow man if they can instead feel virtuous or superior by castigating those who don't follow the authority.
yifanl•26m ago
Great zinger buddy, you really showed off your wit.
Nevermark•15m ago
It is so easy to critique the response in hindsight. Or at the time.

But critiques like that ignore uncertainty, risk, and unavoidably getting it "wrong" (on any and all dimensions), no matter what anyone did.

With a new virus successfully circumnavigating the globe in a very short period of time, with billions of potential brand new hosts to infect and adapt within, and no way to know ahead of time how virulent and deadly it could quickly evolve to be, the only sane response is to treat it as extremely high risk.

There is no book for that. Nobody here or anywhere knows the "right" response to a rapidly spreading (and killing) virus, unresponsive to current remedies. Because it is impossible to know ahead of time.

If you actually have an answer for that, you need to write that book.

And take into account, that a lot of people involved in the last response, are very cognizant that we/they can learn from what worked, what didn't, etc. That is the valuable kind of 20-20 vision.

A lot of at-risk people made it to the vaccines before getting COVID. They are probably happy about everything that reduced their risk for that long. And those that died, including people I know, might argue we could have done more, but they don't get to.

I don't think any non-nuanced view of the situation has merit.

goatlover•11m ago
Never seen the attempt by governments to contain a global pandemic that killed millions and threatened to overwhelm healthcare compared to Nazism before, but why should I be surprised? Explains a lot about the sorry state of modern politics.
frocodillo•8m ago
I find it interesting that this is the conclusion you draw from this. I won’t go into a discussion on the efficacy of the various mandates and policies in reducing spread of the disease. Rather, I think it’s worth pointing out that a significant portion of the proponents of these policies likely supported them not because of a desire to follow the authority but because they sincerely believed that a (for them) relatively small sacrifice in personal freedom could lead to improved outcomes for their fellow humans. For them, it was never about blindly following authority or virtue signalling. It was only ever about doing what they perceived as the right thing to do.
PaulHoule•8m ago
"Nazi", "Fascist", etc are words you can use to lose any debate instantly no matter what your politics are.

I think the sane version of this is that Gen Z didn't just lose its education, it lost its socialization. I know someone who works in the administration of my uni, tracking the general well-being of students, who said they were expecting it to bounce back after the pandemic and they've found it hasn't. My son reports that if you go to any kind of public event, be it a sewing club or a music festival, people 18-35 are completely absent. My wife didn't believe him, but she went to a few events and found he was right.

You can blame screens or other trends that were going on before the pandemic, but the pandemic locked it in. At the rate we're going if Gen Z doesn't turn it around in 10 years there will not be a Gen Z+2.

So the argument that pandemic policy added a few years to elderly lives at the expense of the young and the children that they might have had is salient in my book -- I had to block a friend of mine on Facebook who hasn't wanted to talk about anything but masks and long COVID since 2021.

jpadkins•44m ago
Do you have a historical example of "Little bit of taxes now and accelerate easier"? I can't think of any.
nine_k•26m ago
If you replace "taxes" with more general "investment", it's everywhere. A good example is Amazon that has reworked itself from an online bookstore into a global supplier of everything by ruthlessly reinvesting the profits.

Taxes don't usually work as efficiently because the state is usually a much more sloppy investor. But it's far from hopeless, see DARPA.

If you're looking for periods of high taxes and growing prosperity, 1950s in the US is a popular example. It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.

PaulHoule•16m ago
With the odd story that we paid the price for it in the long term.

This book

https://www.amazon.com/Zero-Sum-Society-Distribution-Possibi...

tells the compelling story that the Mellon family teamed up with the steelworker's union to use protectionism to protect the American steel industry's investments in obsolete open hearth steel furnaces that couldn't compete on a fair market with the basic oxygen furnace process adopted by countries that had their obsolete furnaces blown up. The rest of US industry, such as our car industry, were dragged down by this because they were using expensive and inferior materials. I think this book had a huge impact in terms of convincing policymakers everywhere that tariffs are bad.

Funny the Mellon family went on to further political mischief

https://en.wikipedia.org/wiki/Richard_Mellon_Scaife#Oppositi...

vondur•7m ago
Ha, we gutted our manufacturing base, so if we bring it back it will now be state of the art! Not sure if that will work out for us, but hey, there is some precedent.
oceanplexian•13m ago
> It's not a great example though, because the US was the principal winner of WWII, the only large industrial country relatively unscathed by it.

The US is also shaping up to be the principal winner in Artificial Intelligence.

If, like everyone is postulating, this has the same transformative impact to Robotics as it does to software, we're probably looking at prosperity that will make the 1950s look like table stakes.

generic92034•1h ago
> Folks vibe with the latter

I am not convinced, though, it is still up to "the folks" if we change course. Billionaires and their sycophants may not care for the bad consequences (or even appreciate them - realistic or not).

stego-tech•1h ago
Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.

It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.

dakolli•36m ago
It seems pretty obvious to me that the ruling class is preparing for war to keep us occupied; just like in the 20s, they'll make young men and women so poor they'll beg to fight in a war.

It makes one wonder what they expect to come out the other side of such a late-stage/modern war, but I think what they care about is that there will be less of us.

NitpickLawyer•1h ago
> [...] prior to reforming society [...]

Well, good luck. You have "only" the entire history of human kind on the other side of your argument :)

stego-tech•1h ago
I never said it was an easy problem to solve, or one we’ve had success with before, but damnit, someone has to give a shit and try to do better.
AndrewKemendo•1h ago
Literally nobody’s trying, because there is no solution.

The fundamental unit of society …the human… is at its core incapable of coordinating at the scale necessary to do this correctly,

and so there is no solution, because humans can’t plan or execute on a plan.

sp527•59m ago
The likely outcome is that 99.99% of humanity lives a basic subsistence lifestyle ("UBI") and the elite and privileged few metaphorically (and somewhat literally) ascend to the heavens. Around half the planet already lives on <= $7/day. Prepare to join them.
accidentallfact•1h ago
Reality won't give a shit about what people believe.
mitthrowaway2•1h ago
> whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly.

I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe.

cgannett•1h ago
If people believe it's a threat and it is also real, then what matters is timing.
goatlover•5m ago
Which would also mean the accelerationists are potentially putting everyone at risk. I'd think a soft takeoff decades in the future would give us a much better chance of building the necessary safeguards and reorganizing society accordingly.
Negitivefrags•1h ago
> If the singularity does happen, then it hardly matters what people do or don't believe.

Depends on how you feel about Roko's basilisk.

VonTum•34m ago
God Roko's Basilisk is the most boring AI risk to catch the public consciousness. It's just Pascal's wager all over again, with the exact same rebuttal.
sigmoid10•1h ago
Depends on what a post singularity world looks like, with Roko's basilisk and everything.
afthonos•1h ago
I don’t think that’s quite right. I’d say instead that if the singularity does happen, there’s no telling which beliefs will have mattered.
Forgeties79•1h ago
I just point to Covid lockdowns and how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences real or imagined, etc. Humans need something to do. I don’t think it should be work all the time. But we need something to do or we just lose it.

It’s somewhat simplistic, but I find it gets the conversation rolling. Then I go, “it’s great that we want to replace work, but what are we going to do instead, and how will we support ourselves?” It’s a real question!

AndrewKemendo•1h ago
The goal is to eliminate humans as the primary actors on the planet entirely

At least that’s my personal goal

If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled

As it stands today and in all the annals of history there does not exist a system that does what I just described.

Bell Labs existed for the purpose of Bell Telephone…until it wasn’t needed by Bell anymore. Google’s moonshots existed for the shareholders of Google…until they were not useful for capital. All the work done at Sandia and White Sands labs was done to promote the power of the United States globally.

Find me some egalitarian organization that can persist outside the hands of some massive corporation or some government and that can actually help people, and I might give somebody a chance, but that does not exist.

And no, Mondragon is not one of these.

stego-tech•28m ago
Man, I used to think exactly like you do now, disgust with humans and all. I found comfort in machines instead of my fellow man, and sorely wanted a world governed by rigid structures, systems, and rules instead of the personal whims and fancies of whoever happened to have inherited power. I hated power structures, I loathed people who I perceived to stand in the way of my happiness.

I still do.

The difference is that I realized what I'd done was build up walls so thick and high because of repeated cycles of alienation and trauma involving humans. When my entire world came to a total end every two to four years - every relationship irreparably severed, every bit of local knowledge and wisdom rendered useless, thrown into brand new regions, people, systems, and structures like clockwork - I built that attitude to survive, to insulate myself from those harms. Once I was able to begin creating my own stability, asserting my own agency, I began to find the nuance of life - and thus, a measure of joy.

Sure, I hate the majority of drivers on the roads today. Yeah, I hate the systemic power structures that have given rise to profit motives over personal outcomes. I remain recalcitrant in the face of arbitrary and capricious decisions made with callous disregard to objective data or necessities. That won't ever change, at least with me; I'm a stubborn bastard.

But I've grown, changed, evolved as a person - and you can too. Being dissatisfied with the system is normal - rejecting humanity in favor of a more stringent system, while appealing to the mind, would be such a desolate and bleak place, devoid of the pleasures you currently find eking out existence, as to be debilitating to the psyche. Humans bring spontaneity and chaos to systems, a reminder that we can never "fix" something in place forever.

To dispense with humans is to ignore that any sentient species of comparable success has its own struggles, flaws, and imperfections. We are unique in that we're the first ones we know of to encounter all these self-inflicted harms and have the cognitive ability to wax philosophically for our own demise, out of some notion that the universe would be a better place without us in it, or that we simply do not deserve our own survival. Yet that's not to say we're actually the first, nor will we be the last - and in that lesson, I believe our bare minimum obligation is to try just a bit harder to survive, to progress, to do better by ourselves and others, as a lesson to those who come after.

Now all that being said, the gap between you and I is less one of personal growth and more of opinion of agency. Whereas you advocate for the erasure or nullification of the human species as a means to separate yourself from its messiness and hostilities, I'm of the opinion that you should be able to remove yourself from that messiness for as long as you like in a situation or setup you find personal comfort in. If you'd rather live vicariously via machine in a remote location, far, far away from the vestiges of human civilization, never interacting with another human for the rest of your life? I see no issue with that, and I believe society should provide you that option; hell, there's many a day I'd take such an exit myself, if available, at least for a time.

But where you and I will remain at odds is our opinion of humanity itself. We're flawed, we're stupid, we're short-sighted, we're ignorant, we're hostile, we're irrational, and yet we've conquered so much despite our shortcomings - or perhaps because of them. There's ample room for improvement, but succumbing to naked hostility towards them is itself giving in to your own human weakness.

fainpul•21m ago
Why would the machines want to work with you or any other human?
mtlmtlmtlmtl•18m ago
Well, demonstrably you have at least some measure of interest in interaction with other humans based on the undeniable fact that you are posting on this site, seemingly several times a day based on a cursory glance at your history.
nine_k•13m ago
This looks like a very comfortable, pleasant way of civilization suicide.

Not interacting with any other human means you're the last human in your genetic line. A widespread adherence to this idea means humanity dwindling and dying out voluntarily. (This has been reproduced in mice: [1])

Not having humans as primary actors likely means that their interests become more and more neglected by the system of machines that replaces them, and they, weaker by the day, are powerless to counter that. Hence the promise of increased comfort and well-being, and the ability to do science, becomes more and more doubtful as humans lose agency.

[1]: https://www.smithsonianmag.com/smart-news/this-old-experimen...

bheadmaster•1h ago
> here’s how LLMs actually work

But how is that useful in any way?

For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.

MarkusQ•58m ago
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Uh, yes, we do. It works in precisely the same way that you can walk from "here" to "there": by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" with the way two people converse, and assume that because they produce similar outputs they must be "doing the same thing"; then it's hard to see how LLMs could be doing it.
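The "step at a time" loop is easy to make concrete. A deliberately tiny sketch (Python; the bigram table below is made up for illustration, and a real LLM differs only in having a vastly better predictor):

```python
# Toy "language model": a bigram table mapping each token to its most likely
# successor. Generating a reply is just "predict the next token, append,
# repeat" -- the same loop a real LLM runs.
bigram = {
    "<start>": "how", "how": "are", "are": "you", "you": "doing",
    "doing": "today", "today": "<end>",
}

def generate(table, max_tokens=10):
    tokens, current = [], "<start>"
    for _ in range(max_tokens):
        current = table.get(current, "<end>")
        if current == "<end>":
            break
        tokens.append(current)
    return " ".join(tokens)

print(generate(bigram))  # how are you doing today
```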

Sometimes things seems unbelievable simply because they aren't true.

OkayPhysicist•57m ago
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Maybe you don't. To be clear, this benefits massively from hindsight - just as, if I didn't know how combustion engines worked, I probably wouldn't have dreamed one up - but the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

famouswaffles•30m ago
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

Obviously, that's the objective, but who's to say you'll reach a goal just because you set it? And more importantly, who's to say you have any idea how the goal has actually been achieved?

You don't need to think LLMs are magic to understand we have very little idea of what is going on inside the box.

dTal•23m ago
>In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question.

No it isn't. Type a question into a base model, one that hasn't been finetuned into being a chatbot, and the predicted continuation will be all sorts of crap, but very often another question, or a framing that positions the original question as rhetorical in order to make a point. Untuned raw language models have an incredible flair for suddenly and unexpectedly shifting context - it might output an answer to your question, then suddenly decide that the entire thing is part of some internet flamewar and generate a completely contradictory answer, complete with insults to the first poster. It's less like talking with an AI and more like opening random pages in Borges's infinite library.

To get a base language model to behave reliably like a chatbot, you have to explicitly feed it "a transcript of a dialogue between a human and an AI chatbot", and allow the language model to imagine what a helpful chatbot would say (and take control during the human parts). The fact that this works - that a mere statistical predictive language model bootstraps into a whole persona merely because you declared that it should, in natural English - well, I still see that as a pretty "magic" trick.

0x20cowboy•55m ago
"'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".
nine_k•39m ago
> * enough people believe it will happen and act accordingly*

Here comes my favorite notion of "epistemic takeover".

A crude form: make everybody believe that you have already won.

A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one has doubts about your having won, they believe that everyone else submits to you as the winner, and so they must act accordingly.

bee_rider•24m ago
This world where everybody’s very concerned with that “refined form” is annoying and exhausting. It causes discussions to become about speculative guesses about everybody else’s beliefs, not actual facts. In the end it breeds cynicism as “well yes, the belief is wrong, but everybody is stupid and believes it anyway,” becomes a stop-gap argument.

I don’t know how to get away from it because ultimately coordination depends on understanding what everybody believes, but I wish it would go away.

kelseyfrog•6m ago
Or just play into the fact that it's a Keynesian Beauty Contest [1]. Find the leverage in it and exploit it.

1. https://en.wikipedia.org/wiki/Keynesian_beauty_contest

ElevenLathe•4m ago
IMO this is a symptom of the falling rate of profit, especially in the developed world. If truly productivity-enhancing investment is effectively dead (or, equivalently, there is so much paper wealth chasing a withering set of profitable opportunities for investment), then capital's only game is to chase high valuations backed by future profits, which means playing the Keynesian beauty contest for keeps. This in turn means you must make ever-escalating claims of future profitability. Now, here we are in a world where multiple brand-name entrepreneurs are essentially saying that they are building the last investable technology ever, and getting people to believe it, because the alternative is to earn less than inflation on Procter and Gamble stock and never get to retire.

If outsiders could plausibly invest in China, some of this pressure could be dissipated for a while, but ultimately we need to order society on some basis that incentivizes dealing with practical problems instead of pushing paper around.

Terr_•10m ago
Refined 1.01 authoritarian form: Everybody knows you didn't win, and everybody knows the sentiment is universal... But everyone maintains the same outward facade that you won, because it's become a habit and because dissenters seem to have "accidents" falling out of high windows.
dakolli•39m ago
Just say it simply,

1. LLMs only serve to reduce the value of your labor to zero over time. They don't even need to be great tools; they just need to be perceived as "equally good" to engineers for the C-suite to lay everyone off and rehire at 25-50% of previous wages, repeating this cycle over a decade.

2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, since if they could, anyone could. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.

3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.

I used something like 1.8 billion Anthropic tokens last year. I won't be using it again; I won't be participating in this experiment. I've likely lost years of my life in "potential learning" to the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.

stego-tech•23m ago
I've said it simply, much like you, and it comes off as unhinged lunacy. Inviting them to learn themselves has been so much more successful than directed lectures, at least in my own experiments with discourse and teaching.

A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.

You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.

dakolli•16m ago
I agree, you're probably right! Thanks!
caycep•33m ago
I thought the answer was "42"
famouswaffles•28m ago
>It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”)

You do not know how LLMs work, and if anyone actually did, we wouldn't spend months and millions of dollars training one.

holoduke•21m ago
For ages most people believed in a religion. People are just not smart; they're sheepy followers.
soperj•8m ago
Most still do.
moffkalast•1h ago
> I am aware this is unhinged. We're doing it anyway.

If one is looking for a quote that describes today's tech industry perfectly, that would be it.

Also using the MMLU as a metric in 2026 is truly unhinged.

darepublic•1h ago
> Real data. Real model. Real date!

Arrested Development?

AndrewKemendo•1h ago
Y’all are hilarious

The singularity is not something that’s going to be disputable

it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation

I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully

Everybody needs to be preparing for the world where it's:

human plus machine

versus

human groups by themselves

across all possible categories of competition and collaboration

Nobody is going to do anything about it and if you are one of the people complaining about vibecoding you’re already out of the race

Oh and by the way it’s not gonna be with LLMs it’s coming to you from RL + robotics

jama211•1h ago
A fantastic read, even if it makes a lot of silly assumptions - this is OK because it's self-aware of them.

Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.

Crazy times we live in.

skrebbel•1h ago
Wait is that photo of earth the legendary Globus Polski? (https://www.ceneo.pl/59475374)
miguel_martin•1h ago
"Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)
lostmsu•1h ago
Your comment just self-defeated.
neilellis•1h ago
But you're not Everyone - they are a fictional hacker collective from a TV show.
bluejellybean•1h ago
Yet, here you are ;)
jacquesm•1h ago
Another one down.
root_axis•1h ago
If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
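For context, the quadratic growth being referenced is (presumably) the cost of naive self-attention: every token is compared against every other token, so the score matrix has n^2 entries. A toy illustration (sizes are illustrative, not any particular model's):

```python
def attention_score_entries(n_tokens: int) -> int:
    """Entries in the n x n attention score matrix of naive self-attention."""
    return n_tokens * n_tokens

# Doubling the context length quadruples the score matrix.
small = attention_score_entries(4_096)
large = attention_score_entries(8_192)
print(small, large, large / small)  # 16777216 67108864 4.0
```

So a 2x longer context costs 4x the attention compute and memory, which is the scaling wall the comment alludes to.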
1970-01-01•5m ago
Not anytime soon. All day I'm getting: "Claude's response could not be fully generated"
arscan•1h ago

  Don't worry about the future
  Or worry, but know that worrying
  Is as effective as trying to solve an algebra equation by chewing Bubble gum
  The real troubles in your life
  Are apt to be things that never crossed your worried mind
  The kind that blindsides you at 4 p.m. on some idle Tuesday

    - Everybody's free (to wear sunscreen)
         Baz Luhrmann
         (or maybe Mary Schmich)
jgrahamc•1h ago
Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.
octernion•1h ago
that was precisely my reaction as well. phew machines will deal with the timestamp issue and i can just sit on a beach while we singularityize or whatever.
jacquesm•1h ago
You won't be on the beach when you get turned into paperclips. The machines will come and harvest your ass.

Don't click here:

https://www.decisionproblem.com/paperclips/

octernion•47m ago
having played that when it came out, my conclusion was that no, i will definitely be able to be on a beach; i am too meaty and fleshy to be good paperclip
jacquesm•1h ago
I suspect that's the secret driver behind a lot of the push for the apocalypse.
PantaloonFlames•1h ago
This is what I come here for. Terrific.
atomic128•1h ago

    Once men turned their thinking over to machines
    in the hope that this would set them free.

    But that only permitted other men with machines
    to enslave them.

    ...

    Thou shalt not make a machine in the
    likeness of a human mind.

   -- Frank Herbert, Dune
You won't read, except the output of your LLM.

You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?

You won't think or analyze or understand. The LLM will do that.

This is the end of your humanity. Ultimately, the end of our species.

Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.

Join us, or better yet: deploy weapons of your own design.

debo_•1h ago
If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album
octernion•1h ago
do... do the "poison" people actually think that will make a difference? that's hilarious.
accidentallfact•1h ago
A better approach is to make AI bullshit people on purpose.
gojomo•1h ago
Like partial courses of antibiotics, this will only relatively advantage those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.
testaccount28•17m ago
yes. whoever has the best (least detectable) model is best poised to poison the ladder for everyone.
fellowmartian•29m ago
I think you’re missing the point of Dune. They had their Butlerian Jihad and won - the machines were banned. And what did it get them? Feudalism, cartels, stagnation. Does anyone seriously want to live in the Dune universe?

The problem isn't the thinking machines; it's who owns them and collects the rent. We need open source models running on dirt-cheap hardware.

accidentallfact•12m ago
The point of Dune is that the worst danger are people who obey authority without questioning it.
dirkc•1h ago
The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.

*edit* - seems inline with what the author is saying :)

> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.

neilellis•1h ago
End of the World? Must be Tuesday.
sempron64•1h ago
A hyperbolic curve doesn't model any underlying process; it's just a curve that goes vertical at a chosen point. That makes it a bad curve to fit to a process. Exponentials, at least, model a compounding or self-improving process.
H8crilA•1h ago
But this is a phase change process.

Also, the temptation to shitpost in this thread ...

sempron64•53m ago
I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit. Because it's not modeling a process, it's assigning an arbitrary zero point. Bad model.
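The fragility is easy to demonstrate. For x = 1/(C - t), the reciprocal 1/x is linear in t, so the fitted "singularity date" C is just the intercept of a straight-line fit, and a single extra point moves it. A sketch with synthetic data (my own made-up numbers, not the article's):

```python
def fit_blowup_year(years, values):
    """Least-squares fit of 1/x = C - t with the slope fixed at -1,
    which reduces to C = mean(1/x + t)."""
    return sum(1.0 / v + t for t, v in zip(years, values)) / len(years)

# Synthetic observations generated exactly from C = 2034.
years = [2020, 2022, 2024, 2026]
values = [1.0 / (2034 - t) for t in years]
print(round(fit_blowup_year(years, values), 2))  # 2034.0

# One extra observation with 50% measurement error shifts the date.
years2 = years + [2028]
values2 = values + [1.5 / (2034 - 2028)]
print(round(fit_blowup_year(years2, values2), 2))  # 2033.6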
banannaise•1h ago
You have not read far enough.
athrowaway3z•1h ago
> Tuesday, July 18, 2034

4 years early for the Y2K38 bug.

Is it coincidence, or has Roko's Basilisk intervened to start the curve early?
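For reference, the Y2K38 rollover: a signed 32-bit time_t maxes out at 2^31 - 1 seconds after the Unix epoch, which (fittingly) also falls on a Tuesday. A quick check:

```python
from datetime import datetime, timezone

# Last second representable in a signed 32-bit time_t.
LAST_32BIT_SECOND = 2**31 - 1  # 2147483647

rollover = datetime.fromtimestamp(LAST_32BIT_SECOND, tz=timezone.utc)
print(rollover.isoformat())     # 2038-01-19T03:14:07+00:00
print(rollover.strftime("%A"))  # Tuesday
```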

vagrantstreet•1h ago
Was expecting some mention of Universal Approximation Theorem

I really don't care much if this is semi-satire as someone else pointed out, the idea that AI will ever get "sentient" or explode into a singularity has to die out pretty please. Just make some nice Titanfall style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense please

bpodgursky•1h ago
2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.
MarkusQ•1h ago
Prior work with the same vibe: https://xkcd.com/1007/
cesarvarela•1h ago
Thanks, added to calendar.
cubefox•1h ago
A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper, titled:

  Doomsday: Friday, 13 November, A.D. 2026
There is an excellent blog post about it by Scott Alexander:

"1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...

ericmcer•1h ago
Great article, super fun.

> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.

You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc. Ostensibly a network connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.

It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.

malfist•32m ago
Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the numbers, and everything will be fine... for a while. You may even be able to lay off half your people if you're okay with KTLO'ing your business. This works great for companies that already hold monopoly power, where you can stagnate and still keep your customers and lock out competitors.
lenerdenator•13m ago
> Or it's just a logical continuation of "next quarter problem" thinking. You can lay off a lot of people, juice the number and everything will be fine....for a while

As long as you're

1) In a position where you can make the decisions on whether or not the company should move forward

and

2) Hold the stock units that will be exchanged for money if another company buys out your company

then there's really no way things won't be fine, short of criminal investigations/the rare successful shareholder lawsuit. You will likely walk away from your decision to weaken the company with more money than you had when you made the decision in the first place.

That's why many in the managerial class often hold up Jack Welch as a hero: he unlocked a new definition of competence where you could fail in business, but make money doing it. In his case, it was "spinning off" or "streamlining" businesses until there was nothing left and you could sell the scraps off to competitors. Slash-and-burn of paid workers via AI "replacement" is just another way of doing it.

jonplackett•1h ago
This assumes humanity can make it to 2034 without destroying itself some other way…
PaulHoule•58m ago
The simple model of an "intelligence explosion" is the obscure equation

  dx/dt = x^2

which has the solution

  x = 1/(C - t)

and is interesting in relation to the classic exponential growth equation

  dx/dt = x

because in the hyperbolic case the per-capita growth rate is itself proportional to x. This captures the idea of an "intelligence explosion" AND models why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as x -> C, but for x << C it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy.

Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation

  dx/dt = (1 - x) x

thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
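The qualitative difference shows up immediately in a numerical integration. A minimal sketch (forward Euler; my own illustrative step count and initial values):

```python
def euler(f, x0, t_end, steps=100_000):
    """Integrate dx/dt = f(x) from t=0 to t_end with forward Euler."""
    x, dt = x0, t_end / steps
    for _ in range(steps):
        x += f(x) * dt
    return x

# Hyperbolic dx/dt = x^2 with x(0) = 1 has exact solution x = 1/(1 - t),
# which blows up at t = C = 1. Just before the blow-up time...
hyp = euler(lambda x: x * x, 1.0, 0.99)

# ...exponential dx/dt = x has only reached about e^0.99,
exp_growth = euler(lambda x: x, 1.0, 0.99)

# ...and logistic dx/dt = (1 - x) x stays below its carrying capacity 1.
logistic = euler(lambda x: (1 - x) * x, 0.5, 0.99)

print(f"hyperbolic={hyp:.1f} exponential={exp_growth:.2f} logistic={logistic:.2f}")
```

With these values the hyperbolic run lands near the exact 1/(1 - 0.99) = 100 while the other two stay small: "growth is fast as x -> C, but for x << C it is glacial" in miniature.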
boca_honey•57m ago
Friendly reminder:

Scaling LLMs will not lead to AGI.

danesparza•53m ago
"I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.

I feel like I need to start more sprint stand-ups with this quote...

regnull•50m ago
Guys, yesterday I spent some time convincing an LLM from a leading provider that 2 cards plus 2 cards is 4 cards, which is one short of a flush. I think we are not too close to a singularity, as it stands.
charcircuit•14m ago
Why bring that up when you could bring up AI autonomously optimizing AI training and autonomously fixing bugs in AI training and inference code? Showing that AI is already accelerating self-improvement would better establish the claim that we are getting closer to the singularity.
dakolli•48m ago
Are people in San Francisco that stupid that they're having open-clawd meetups and talking about the Singularity non stop? Has San Francisco become just a cliche larp?
nomel•20m ago
There's all sorts of conversations like this that are genuinely exciting and fairly profound when you first consider them. Maybe you're older and have had enough conversations about the concept of a singularity that the topic is already boring to you.

Let them have their fun. Related: some adults are watching The Matrix, a 26-year-old movie, for the first time today.

For some proof that it's not some common idea: I was recently listening to a fairly technical interview with a top AI researcher, who presented the idea of the singularity in a very indirect way, never actually mentioning the word, as if he were the one who thought of it. I wanted to scream "Just say it!" halfway through. The fact that he could do that without being laughed at proves it's not some tired idea, for others.

wayfwdmachine•46m ago
Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway, because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).

The (social) Singularity is already happening in the form of a mass delusion that - especially in the abrahamic apocalyptical cultures - creates a fertile breeding ground for all sorts of insanity.

Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).

We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.

kpil•41m ago
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance."

I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market.

* We're cutting because of expected low revenue. (Negative)

* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)

* We're cutting because of AI. (Double-plus positive)

The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve we've seen in productivity since 1750?

svilen_dobrev•40m ago
> already exerting gravitational force on everything it touches.

So, "Falling of the night" ?

blahbob•27m ago
It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."
saulpw•21m ago
By Tom Toro for the New Yorker (2012).
overfeed•27m ago
> If things are accelerating (and they measurably are) the interesting question isn't whether. It's when.

I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.

lencastre•20m ago
I hope in the afternoon, the plumber is coming in the morning between 7 and 12, and it’s really difficult to pin those guys to a date
Scarblac•10m ago
https://www.economist.com/cdn-cgi/image/width=1096,quality=8...
wbshaw•9m ago
I got a strong ChatGPT vibe from that article.
api•4m ago
This really looks like it's describing a bubble, a mania. The tech is improving linearly, and most of the time such things asymptote. It'll hit a point of diminishing returns eventually. We're just not sure when.

The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.

What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.

I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.