
The Singularity will occur on a Tuesday

https://campedersen.com/singularity
197•ecto•2h ago

Comments

jmugan•1h ago
Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
ecto•1h ago
Perhaps they will revel in the friends they made along the way.
Krei-se•36m ago
If only we had a self-learning system battle-tested against reality.
zh3•1h ago
Fortuitously before the Unix date rollover in 2038. Nice.
ecto•1h ago
I didn't even realize - I hope my consciousness is uploaded with 64 bit integers!
thebruce87m•54m ago
You’ll regret this statement in 292 billion years
layer8•48m ago
I think we’ll manage to migrate to bignums by then.
GolfPopper•33m ago
The poster won't, but the digital slaves made from his upload surely will.
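The rollover being joked about in this subthread is easy to pin down. A quick sketch in Python (the 292-billion-year figure mentioned above is the 64-bit analogue of the 2038 overflow):

```python
from datetime import datetime, timezone

# Signed 32-bit time_t overflows 2**31 - 1 seconds after the Unix
# epoch (1970-01-01T00:00:00Z):
rollover_32 = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(rollover_32)  # 2038-01-19 03:14:07+00:00

# A signed 64-bit counter pushes the same overflow out to roughly
# 292 billion years, matching the figure in the thread:
years_64 = (2**63 - 1) / (365.25 * 24 * 3600)
print(f"~{years_64:.3g} years")
```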
markgall•1h ago
> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."

> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.

Huh? I don't get it. e^t would also still be finite at heat death.

ecto•1h ago
exponential = mañana
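markgall is right that e^t is also finite at any finite time; the claim being quoted only makes sense for hyperbolic growth, the one law that diverges at a finite t. A minimal sketch (the variable names and the x0 = 0.1 choice are illustrative, not from the article):

```python
import math

# Closed-form behavior of the three growth laws:
#   polynomial   x = t**47            finite for every finite t
#   exponential  x = x0 * e**t        finite for every finite t
#   hyperbolic   dx/dt = x**2  =>  x(t) = 1 / (1/x0 - t),
#                which diverges at the *finite* time t = 1/x0.
def hyperbolic(x0, t):
    pole = 1.0 / x0
    return math.inf if t >= pole else 1.0 / (pole - t)

x0 = 0.1  # puts the pole at t = 10
assert math.isfinite(9.9**47) and math.isfinite(math.exp(9.9))
print(hyperbolic(x0, 5.0))    # 0.2
print(hyperbolic(x0, 9.99))   # ~100: already exploding
print(hyperbolic(x0, 10.0))   # inf: the singularity, at finite t
```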
rcarmo•1h ago
"I could never get the hang of Tuesdays"

- Arthur Dent, H2G2

jama211•43m ago
Thursdays, unfortunately
vcanales•1h ago
> The pole at t≈8 isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.

Damn, good read.

adastra22•35m ago
We are already long past that point…
shantara•18m ago
It doesn’t help when quite a few Big Tech companies are deliberately operating on the principle that they don’t have to follow the rules, just change at a rate faster than the bureaucratic system can respond.
skulk•1h ago
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x

ecto•1h ago
Thanks. I dropped out of college
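The disagreement here is definitional. In exponential growth (dx/dt = x) the per-capita rate (dx/dt)/x is constant; a system that "accelerates its own growth" in the article's sense has a per-capita rate that itself grows, e.g. dx/dt = x², which is hyperbolic. A toy sketch of the distinction:

```python
# Per-capita growth rate g(x) = (dx/dt) / x for the two laws:
#   exponential: dx/dt = x    -> g(x) = 1  (steady compounding)
#   hyperbolic:  dx/dt = x*x  -> g(x) = x  (growth feeds back on itself)
for x in (1.0, 10.0, 100.0):
    g_exp = x / x          # constant, no matter how big x gets
    g_hyp = (x * x) / x    # grows linearly with x
    print(f"x={x:>6}: exponential rate {g_exp}, hyperbolic rate {g_hyp}")
```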
hinkley•1h ago
Once MRR becomes a priority over investment rounds, tokens/$ will notch down and flatten substantially.
jrmg•1h ago
This is gold.

Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.

mesozoicpilgrim•29m ago
I'm trying to figure out if the LLM writing style is a feature or a bug
banannaise•54m ago
Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.
baalimago•51m ago
Well... I can't argue with facts. Especially not when they're in graph form.
OutOfHere•50m ago
I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.
gojomo•49m ago
"It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."

– 'SLOW TUESDAY NIGHT', a 2600 word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965

https://www.baen.com/Chapters/9781618249203/9781618249203___...

qoez•49m ago
Great read but damn those are some questionable curve fittings on some very scattered data points
aenis•43m ago
In other words, just another Tuesday.
jacquesm•38m ago
Better than some of the science papers I've tried to parse.
braden-lk•49m ago
lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?
bwestergard•42m ago
https://en.wikipedia.org/wiki/Messianism
inanutshellus•42m ago
We avoid catastrophe by thinking about new developments and how they can go wrong (and right).

Catastrophizing can be unhealthy and unproductive, but for those among us who can affect the future of our societies (locally or higher), the results of that catastrophizing help guide legislation and "Overton window" morality.

... I'm reminded of the tales of various Sci-Fi authors that have been commissioned to write on the effects of hypothetical technologies on society and mankind (e.g. space elevators, mars exploration)...

That said, when the general public worries about hypotheticals they can do nothing about, there's nothing but downsides. So. There's a balance.

jacquesm•38m ago
Yes, but if we don't do it 'they' will. Onwards!
unbalancedevh•28m ago
It depends on how you define humanity. The singularity implies that the current model isn't appropriate anymore, but it doesn't suggest how.
ragchronos•48m ago
This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south? If the trends described continue, the world will become a much worse place in a few years' time.
Krei-se•37m ago
https://cdn.statcdn.com/Infographic/images/normal/870.jpeg

you can easily see that, at a doubling rate of every 2 years, in 2020 we already had over 5 Facebook accounts per human on Earth.

GolfPopper•22m ago
Frank Herbert and Samuel Butler.
pixl97•47m ago
>That's a very different singularity than the one people argue about.

---

I wouldn't say it's that much different. This has always been a key point of the singularity:

>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.

It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.

jesse__•46m ago
The meme at the top is absolute gold considering the point of the article. 10/10
wffurr•42m ago
Why does one of them have the state flag of Ohio? What AI-and-Ohio-related news did I miss?
adzm•32m ago
Note that the only landmass on Earth is actually Ohio as well. Turns out, it's all Ohio. And it always has been. https://knowyourmeme.com/memes/wait-its-all-ohio-always-has-...
hipster_robot•45m ago
why is everything broken?

> the top post on hn right now: The Singularity will occur on a Tuesday

oh

aenis•44m ago
Damn. I had plans.
stego-tech•43m ago
This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens is less important than whether enough people believe it will happen and act accordingly.

And, yep! A lot of people absolutely believe it will and are acting accordingly.

It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.

jacquesm•40m ago
> “here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”

And there are plenty of people that take issue with that too.

Unfortunately they're not the ones paying the price. And... stock options.

stego-tech•34m ago
History paints a pretty clear picture of the tradeoff:

* Profits now and violence later

OR

* Little bit of taxes now and accelerate easier

Unfortunately we’ve developed such a myopic, “FYGM” society that it’s explicitly the former option for the time being.

AndrewKemendo•22m ago
Every possible example of “progress” has either an individual or a state power purpose behind it.

There is only one possible “egalitarian” forward-looking investment that paid off for everybody.

I think the only exception to this is vaccines…and you saw how all that worked during Covid.

Everything else, from the semiconductor to the vacuum cleaner, the automobile, airplanes, steam engines, I don’t care what you pick: it was developed in order to give a small group an advantage over all the other groups. It has always been this case and it will always be this case, because fundamentally, at the root nature of humanity, people do not care about the externalities, good or bad.

jacquesm•12m ago
COVID has cured me (hah!) of the notion that humanity will be able to pull together when faced with a common enemy. That means global warming or the next pandemic are going to happen and we will not be able to stop it from happening because a solid percentage can't wait to jump off the ledge, and they'll push you off too.
AndrewKemendo•9m ago
Yeah buddy we agree
jpadkins•7m ago
Do you have a historical example of "Little bit of taxes now and accelerate easier"? I can't think of any.
generic92034•37m ago
> Folks vibe with the latter

I am not convinced, though, that it is still up to "the folks" whether we change course. Billionaires and their sycophants may not care about the bad consequences (or may even appreciate them - realistic or not).

stego-tech•32m ago
Oh, not only do they not care about the plebs and riff-raff now, but they’ve spent the past ten years building bunkers and compounds to try and save their own asses for when it happens.

It’s willful negligence on a societal scale. Any billionaire with a bunker is effectively saying they expect everyone to die and refuse to do anything to stop it.

NitpickLawyer•32m ago
> [...] prior to reforming society [...]

Well, good luck. You have "only" the entire history of human kind on the other side of your argument :)

stego-tech•31m ago
I never said it was an easy problem to solve, or one we’ve had success with before, but damnit, someone has to give a shit and try to do better.
AndrewKemendo•25m ago
Literally nobody’s trying, because there is no solution.

The fundamental unit of society, the human, is at its core incapable of coordinating at the scale necessary to do this correctly.

And so there is no solution, because humans can’t plan or execute on a plan.

sp527•22m ago
The likely outcome is that 99.99% of humanity lives a basic subsistence lifestyle ("UBI") and the elite and privileged few metaphorically (and somewhat literally) ascend to the heavens. Around half the planet already lives on <= $7/day. Prepare to join them.
accidentallfact•31m ago
Reality won't give a shit about what people believe.
mitthrowaway2•29m ago
> whether the singularity actually happens is less important than whether enough people believe it will happen and act accordingly.

I disagree. If the singularity doesn't happen, then what people do or don't believe matters a lot. If the singularity does happen, then it hardly matters what people do or don't believe.

cgannett•28m ago
if people believe its a threat and it is also real then what matters is timing
Negitivefrags•27m ago
> If the singularity does happen, then it hardly matters what people do or don't believe.

Depends on how you feel about Roko's basilisk.

sigmoid10•25m ago
Depends on what a post singularity world looks like, with Roko's basilisk and everything.
afthonos•25m ago
I don’t think that’s quite right. I’d say instead that if the singularity does happen, there’s no telling which beliefs will have mattered.
Forgeties79•28m ago
I just point to Covid lockdowns and how many people took up hobbies, how many just turned into recluses, how many broke the rules no matter the consequences, real or imagined, etc. Humans need something to do. I don’t think it should be work all the time. But we need something to do or we just lose it.

It’s somewhat simplistic, but I find it gets the conversation rolling. Then I go, “it’s great that we want to replace work, but what are we going to do instead, and how will we support ourselves?” It’s a real question!

AndrewKemendo•27m ago
The goal is to eliminate humans as the primary actors on the planet entirely

At least that’s my personal goal

If we get to the point where I can go through my life and never interact with another human again, and work with a bunch of machines and robots to do science and experiments and build things to explore our world and make my life easier and safer and healthier and more sustainable, I would be absolutely thrilled

As it stands today, and in all the annals of history, there does not exist a system that does what I just described.

Bell Labs existed for the purpose of Bell Telephone…until it wasn’t needed by Bell anymore. Google moonshots existed for the shareholders of Google…until they were not useful for capital. All the work done at Sandia and White Sands labs was done in order to promote the power of the United States globally.

Find me some egalitarian organization that can persist outside the hands of some massive corporation or some government and that can actually help people, and I might give somebody a chance. But that does not exist.

And no, Mondragon is not one of these.

bheadmaster•27m ago
> here’s how LLMs actually work

But how is that useful in any way?

For all we know, LLMs are black boxes. We really have no idea how the ability to have a conversation emerged from predicting the next token.

MarkusQ•21m ago
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Uh, yes, we do. It works in precisely the same way that you can walk from "here" to "there" by taking a step towards "there", and then repeating. The cognitive dissonance comes when we conflate this way of "having a conversation" with the way two people converse, and assume that because they produce similar outputs they must be "doing the same thing", which makes it hard to see how LLMs could be doing it.

Sometimes things seems unbelievable simply because they aren't true.

OkayPhysicist•20m ago
> We really have no idea how the ability to have a conversation emerged from predicting the next token.

Maybe you don't. To be clear, this benefits massively from hindsight (if I didn't know how combustion engines worked, I probably wouldn't have dreamed one up), but the emergent conversational capabilities of LLMs are pretty obvious. In a massive dataset of human writing, the answer to a question is by far the most common thing to follow a question. A normal conversational reply is the most common thing to follow a conversation opener. While impressive, these things aren't magic.

0x20cowboy•18m ago
"'If I wished,' O'Brien had said, 'I could float off this floor like a soap bubble.' Winston worked it out. 'If he thinks he floats off the floor, and if I simultaneously think I see him do it, then the thing happens'".
nine_k•2m ago
> *enough people believe it will happen and act accordingly*

Here comes my favorite notion of "epistemic takeover".

A crude form: make everybody believe that you have already won.

A refined form: make everybody believe that everybody else believes that you have already won. That is, even if one doubts that you have won, they believe that everyone else submits to you as the winner, and must act accordingly.

dakolli•2m ago
Just say it simply:

1. LLMs only serve to reduce the value of your labor to zero over time. They don't even need to be great tools; they just need to be perceived as "equally good" to engineers for the C-suite to lay everyone off and rehire at 25-50% of previous wages, repeating this cycle over a decade.

2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, as anyone could then do the same. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (makes you wonder what they are lying about). Your original ideas and that startup you think is going to save you aren't going to be worth anything if someone with minimal skills can copy them.

3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash one gets when taking shortcuts in life.

I used like 1.8bb Anthropic tokens last year. I won't be using it again; I won't be participating in this experiment. I've likely lost years of my life in "potential learning" from the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.

moffkalast•43m ago
> I am aware this is unhinged. We're doing it anyway.

If one is looking for a quote that describes today's tech industry perfectly, that would be it.

Also using the MMLU as a metric in 2026 is truly unhinged.

darepublic•42m ago
> Real data. Real model. Real date!

Arrested Development?

AndrewKemendo•42m ago
Y’all are hilarious

The singularity is not something that’s going to be disputable.

It’s going to be like a meteor slamming into society, and nobody’s gonna have any concept of what to do, even though we’ve had literal decades, even centuries, of possible preparation.

I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully.

Everybody needs to be preparing for the world where it’s:

human plus machine

versus

human groups by themselves

across all possible categories of competition and collaboration.

Nobody is going to do anything about it, and if you are one of the people complaining about vibecoding, you’re already out of the race.

Oh, and by the way, it’s not gonna be with LLMs; it’s coming to you from RL + robotics.

jama211•41m ago
A fantastic read, even if it makes a lot of silly assumptions - this is ok because it's self-aware about them.

Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.

Crazy times we live in.

skrebbel•40m ago
Wait is that photo of earth the legendary Globus Polski? (https://www.ceneo.pl/59475374)
miguel_martin•40m ago
"Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)
lostmsu•38m ago
Your comment just self-defeated.
neilellis•37m ago
But you're not Everyone - they are a fictional hacker collective from a TV show.
bluejellybean•36m ago
Yet, here you are ;)
jacquesm•33m ago
Another one down.
root_axis•40m ago
If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
arscan•39m ago

  Don't worry about the future
  Or worry, but know that worrying
  Is as effective as trying to solve an algebra equation by chewing Bubble gum
  The real troubles in your life
  Are apt to be things that never crossed your worried mind
  The kind that blindsides you at 4 p.m. on some idle Tuesday

    - Everybody's free (to wear sunscreen)
         Baz Luhrmann
         (or maybe Mary Schmich)
jgrahamc•38m ago
Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.
octernion•37m ago
that was precisely my reaction as well. phew machines will deal with the timestamp issue and i can just sit on a beach while we singularityize or whatever.
jacquesm•34m ago
You won't be on the beach when you get turned into paperclips. The machines will come and harvest your ass.

Don't click here:

https://www.decisionproblem.com/paperclips/

octernion•10m ago
having played that when it came out, my conclusion was that no, i will definitely be able to be on a beach; i am too meaty and fleshy to be good paperclip
jacquesm•35m ago
I suspect that's the secret driver behind a lot of the push for the apocalypse.
PantaloonFlames•38m ago
This is what I come here for. Terrific.
atomic128•37m ago

    Once men turned their thinking over to machines
    in the hope that this would set them free.

    But that only permitted other men with machines
    to enslave them.

    ...

    Thou shalt not make a machine in the
    likeness of a human mind.

   -- Frank Herbert, Dune

You won't read, except the output of your LLM.

You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?

You won't think or analyze or understand. The LLM will do that.

This is the end of your humanity. Ultimately, the end of our species.

Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.

Join us, or better yet: deploy weapons of your own design.

debo_•35m ago
If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album
octernion•26m ago
do... do the "poison" people actually think that will make a difference? that's hilarious.
accidentallfact•24m ago
A better approach is to make AI bullshit people on purpose.
gojomo•24m ago
Like partial courses of antibiotics, this will only relatively advantage those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.
dirkc•37m ago
The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.

*edit* - seems in line with what the author is saying :)

> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.

neilellis•36m ago
End of the World? Must be Tuesday.
sempron64•35m ago
A hyperbolic curve doesn't model an underlying process; it's just a curve that goes vertical at a chosen point. That makes it a bad curve to fit to a process. Exponentials, by contrast, make sense as a model of a compounding or self-improving process.
H8crilA•34m ago
But this is a phase change process.

Also, the temptation to shitpost in this thread ...

sempron64•16m ago
I read TFA. They found a best fit to a hyperbola. Great. One more data point will break the fit, because it's not modeling a process; it's assigning an arbitrary zero point. Bad model.
banannaise•32m ago
You have not read far enough.
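sempron64's point can be made concrete: if x(t) = A/(C − t), then 1/x is linear in t, so fitting a hyperbola amounts to fitting a line to reciprocals and reading off its zero crossing. The "singularity date" is just an extrapolated intercept. A sketch on synthetic data (A, C, and the sample times are arbitrary choices, not from the article):

```python
import numpy as np

# If x(t) = A / (C - t), then 1/x(t) = (C - t) / A is linear in t.
# Fitting the pole C is therefore just extrapolating a straight
# line through the reciprocals to where it crosses zero.
A, C = 2.0, 10.0                       # arbitrary ground truth
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = A / (C - t)                        # noiseless synthetic data

slope, intercept = np.polyfit(t, 1.0 / x, 1)  # degree-1 fit
pole = -intercept / slope              # predicted "singularity date"
print(pole)  # recovers C = 10.0 here; perturb one sample and it moves
```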
athrowaway3z•34m ago
> Tuesday, July 18, 2034

4 years early for the Y2K38 bug.

Is it coincidence, or has Roko's Basilisk intervened to start the curve early?

vagrantstreet•33m ago
Was expecting some mention of Universal Approximation Theorem

I really don't care much if this is semi-satire, as someone else pointed out; the idea that AI will ever get "sentient" or explode into a singularity has to die out, pretty please. Just make some nice Titanfall-style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense, please.

bpodgursky•28m ago
2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.
MarkusQ•27m ago
Prior work with the same vibe: https://xkcd.com/1007/
cesarvarela•27m ago
Thanks, added to calendar.
cubefox•26m ago
A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper, titled:

  Doomsday: Friday, 13 November, A.D. 2026
There is an excellent blog post about it by Scott Alexander:

"1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...

ericmcer•25m ago
Great article, super fun.

> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.

You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle-management, administrative, bureaucratic-type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.? Ostensibly a network-connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.

It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.

jonplackett•25m ago
This assumes humanity can make it to 2034 without destroying itself some other way…
PaulHoule•22m ago
The simple model of an "intelligence explosion" is the obscure equation

  dx/dt = x^2

which has the solution

  x = 1/(C - t)
and is interesting in relation to the classic exponential growth equation

  dx/dt = x

because its per-capita rate of growth is proportional to x itself. It represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as x -> C, but for x << C it is glacial). It's an obscure equation because it never gets a good discussion in the literature outside of an aside in one of Howard Odum's tomes on emergy.

Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation

  dx/dt = (1 - x) x
thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
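The qualitative difference between the three equations in this comment shows up even in a crude forward-Euler integration (the step size, initial value, and blow-up cap below are arbitrary choices for illustration):

```python
# Forward-Euler integration of the three growth laws:
#   exponential: dx/dt = x            grows fast, but finite at any t
#   hyperbolic:  dx/dt = x^2          blows up at finite t = 1/x0
#   logistic:    dx/dt = (1 - x) x    saturates at the carrying capacity
def euler(f, x0, dt=1e-3, t_end=5.0, cap=1e12):
    x, t = x0, 0.0
    while t < t_end and x < cap:
        x += f(x) * dt
        t += dt
    return t, x

for name, f in [("exponential", lambda x: x),
                ("hyperbolic",  lambda x: x * x),
                ("logistic",    lambda x: (1.0 - x) * x)]:
    t, x = euler(f, x0=0.5)
    print(f"{name:12s} stopped at t={t:.2f}, x={x:.3g}")
```

Only the hyperbolic run hits the cap before `t_end` (its pole is at t = 1/x0 = 2); the exponential run is large but finite, and the logistic run settles near 1.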
boca_honey•20m ago
Friendly reminder:

Scaling LLMs will not lead to AGI.

danesparza•16m ago
"I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.

I feel like I need to start more sprint stand-ups with this quote...

regnull•13m ago
Guys, yesterday I spent some time convincing an LLM from a leading provider that 2 cards plus 2 cards is 4 cards, which is one short of a flush. I think we are not too close to a singularity, as it stands.
dakolli•12m ago
Are people in San Francisco really so stupid that they're having open-clawd meetups and talking about the Singularity non-stop? Has San Francisco become just a cliché LARP?
wayfwdmachine•10m ago
Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway, because at that point all bets are off, and even a slow take-off will make things really fucking weird really quickly).

The (social) Singularity is already happening in the form of a mass delusion that - especially in the abrahamic apocalyptical cultures - creates a fertile breeding ground for all sorts of insanity.

Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the overton window of a single individual of your choice, what would be the total cost of doing that? If you answer is less than $10 - bingo!).

We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.

kpil•5m ago
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance."

I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate, yet have unreasonably different effects on the stock market.

* We're cutting because of expected low revenue. (Negative)

* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)

* We're cutting because of AI. (Double-plus positive)

The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve we've seen in productivity since 1750?

svilen_dobrev•3m ago
> already exerting gravitational force on everything it touches.

So, "Falling of the night" ?
