frontpage.

Russia is poised to ban WhatsApp in a bid to quell discontent – Meduza

https://meduza.io/en/feature/2025/07/18/there-s-a-99-percent-chance-it-will-happen
2•janandonly•1m ago•0 comments

A macOS application to remove Apple's AEA encryption format

1•TheiPhoneDev•5m ago•0 comments

Incremental Font Transfer Moves to Candidate Recommendation

https://www.w3.org/TR/IFT/
1•robin_reala•9m ago•0 comments

Signature 'Wave' of Activity as the Brain Awakens from Sleep

https://nin.nl/news/scientists-discover-a-signature-wave-of-activity-as-the-brain-awakens-from-sleep/
1•gmays•12m ago•0 comments

31 Days with Claude Code: What I Learned

https://www.hung-truong.com/blog/2025/08/01/31-days-with-claude-code-what-i-learned/
1•hung•13m ago•0 comments

Just Seeing a (Fake) Sick Person Can Activate Your Immune System

https://www.forbes.com/sites/paulhsieh/2025/07/31/just-seeing-a-fake-sick-person-can-activate-your-immune-system/
1•mhb•14m ago•0 comments

Novel mRNA-based therapy shows promise in heart regeneration after heart attack

https://medicalxpress.com/news/2025-08-mrna-based-therapy-heart-regeneration.html
1•pseudolus•15m ago•0 comments

Basic DistributedAI Tool

https://github.com/efeDeGurates/BasicDistributedAI
1•cucumber35•18m ago•0 comments

Online Safety Act: What went wrong?

https://therectangle.substack.com/p/online-safety-act-what-went-wrong
4•olyellybelly•18m ago•0 comments

Alberta's Pipelines

https://tech.marksblogg.com/alberta-pipelines.html
2•marklit•19m ago•0 comments

Cutting the fat: Oat oil breakthrough paves way for industry growth

https://phys.org/news/2025-07-fat-oat-oil-breakthrough-paves.html
1•PaulHoule•21m ago•0 comments

Mun Programming Language

https://mun-lang.org/
1•tsujp•21m ago•0 comments

Show HN: WhiteLightning – ultra-lightweight ONNX text classifiers trained w LLMs

https://whitelightning.ai/
3•v_kyba•21m ago•0 comments

2k year old tomb found under Petra leaving archaeologists stunned – The Mirror

https://www.mirror.co.uk/news/weird-news/hidden-2000-year-old-tomb-35609790
2•Anon84•22m ago•0 comments

OpenAI Open Source Model Leaked on HF

https://old.reddit.com/r/LocalLLaMA/comments/1mepz8z/openai_os_model_info_leaked_120b_20b_will_be/
3•skadamat•23m ago•0 comments

Modifying process names in Unix-like systems

https://haxrob.net/process-name-stomping/
2•chaosmachine•23m ago•0 comments

A Deep Research Agent for Healthcare Claims

https://writing.kunle.app/p/deep-research-for-healthcare-claims
2•kunle•24m ago•0 comments

Ask HN: This is not the place for political discourse..so where is?

3•asim•25m ago•1 comments

Stop Drawing Dead Fish (2013) [video]

https://www.youtube.com/watch?v=ZfytHvgHybA
1•zX41ZdbW•26m ago•0 comments

Show HN: A word game that I made for my friends

https://wordpivot.com
2•max0563•27m ago•1 comments

Show HN: I built an AI that turns scripts into AI stock footage

https://autostockfootage.com/
1•JonyYadgar•30m ago•0 comments

Show HN: An API to extract structured data from any document without training

https://ninjadoc.ai
2•dbvitapps•32m ago•0 comments

Don't Just Ban IPs – Send the Damn Abuse Report

https://www.jitbit.com/alexblog/321-dont-just-ban-ips---send-the-damn-abuse-report/
2•jitbit•33m ago•0 comments

My HomeLab Setup v6

https://giuliomagnifico.blog/post/2025-08-01-home-setup-v6/
2•giuliomagnifico•34m ago•0 comments

Show HN: Find paint colours in Ireland and generate your own palettes

https://swatcher.ie
2•hauntedLogic•35m ago•0 comments

One man cost American Airlines £21M using his lifetime first class air pass

https://www.aerotime.aero/articles/american-airlines-unlimited-airpass-story-steven-rothstein
2•gampleman•36m ago•0 comments

The Grand Encyclopedia of Eponymous Laws

https://www.secretorum.life/p/the-grand-encyclopedia-of-eponymous
3•bookofjoe•37m ago•0 comments

Understanding Node.js Event Loop: The Heart of Asynchronous JavaScript

https://medium.com/@birukerjamo/understanding-node-js-event-loop-the-heart-of-asynchronous-javascript-33084c0cdb28
2•probiruk•37m ago•0 comments

Google is indexing ChatGPT conversations, potentially exposing user data

https://www.fastcompany.com/91376687/google-indexing-chatgpt-conversations
5•isatsam•39m ago•1 comments

Public ChatGPT Queries Are Indexed by Google

https://techcrunch.com/2025/07/31/your-public-chatgpt-queries-are-getting-indexed-by-google-and-other-search-engines/
2•waldopat•39m ago•0 comments

How long before superintelligence? (1997)

https://nickbostrom.com/superintelligence
53•jxmorris12•21h ago

Comments

janzer•18h ago
(1997) with some updates/postscripts through 2008
webdoodle•18h ago
I would argue that we already achieved and then bricked superintelligence a few years ago. Social media allowed people to network with people who previously were completely siloed from each other, allowing collaboration of ideas on a level well above anything previously possible. This social superintelligence peaked in 2019 though, right before the mass censorship caused by COVID. Unfortunately the censorship industrial complex has only expanded its draconian hold on ideation, and we aren't just stagnating, but actively going backwards.
ivan_gammel•18h ago
It was not superintelligence, as it was not able as a whole to produce novel results when solving problems that were beyond the capacity of a single human. I doubt that social media had any positive impact at all on problem solving.
cwmoore•18h ago
On the whole, maybe, but I’m sure there are numerous notable exceptions.
logicchains•18h ago
Social media has brought the world closer than ever before to solving a genocide that's been going on mostly unnoticed for decades, thanks to decentralising the flow of information so that it's no longer controlled by a small number of individuals: https://www.bbc.com/news/articles/ceqyx35d9x2o
ivan_gammel•17h ago
A solution for distributing information is not the same as a solution for analysis and synthesis, just like the nerves in our hands don't function the same way as our brain.
cwmoore•18h ago
Almost had me in the first half, but 2019 is much too generic to carry the argument.
smackeyacky•16h ago
Counterpoint: Social media allowed fools to find other fools with incredible efficiency, which amplified stupid ideas in ways that weren't possible before.

Example: QAnon.

Because conspiracy theories and populism are like a sugar hit to people who don't want to think too deeply.

tolerance•18h ago
I'm trying to pinpoint the moment where society decided to shepherd people like the author toward the helm of institutions like Oxford and regulate the people willing to refute him into becoming Substack denizens.
tim333•12h ago
I'm not sure there's a moment. Interesting intellectuals have always tended to end up with university roles.
lukeschlather•18h ago
This is a really good overview, and it remarkably needs little modification after several decades; at least in terms of the facts and predictions, everything has happened as the author says. I do want to pick at some of the numbers in the upper bound, because obviously we're getting close to the end of the first third of the century and we don't have ASI yet, even though we have roughly hit the upper bound the author defines.

> Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz and since its memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate)

I admit my feeling is that neurons/synapses probably have less than 100 bytes of memory, and also that a byte or less is more plausible, but I would like to see some more rigorous proof that they can't possibly have more than a gigabyte of memory that the synapse/neuron can access at the speed of computation.

The author has a note where they handwave away the possibility that chemical processes could meaningfully increase the operations per second, and I'm comfortable with that, but this point:

> Perhaps a more serious point is that neurons often have rather complex time-integration properties

Seems more interesting, especially if there's dramatically more storage available in neurons/synapses. If a neuron can do maybe some operations per minute over 1GB of data per synapse, for example. (Which sounds absurdly high, but just for the sake of argument.)

And I think putting in some absurdly generous upper bounds might be helpful since we're clearly past 100 TOPS: how many H100s would you need if we made some absurd suppositions about the capacity of human synapses and neurons? It seems like we probably have enough, but you could also make a case that some of the largest supercomputing clusters are the only things that can actually match the upper bound for the capacity of a single human brain.

Although I think someone might be able to convince me that a manageable cluster of H100s already meets the most generous possible upper bound.
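
As a rough back-of-envelope sketch of that question (the per-GPU throughput, the synapse count, and the per-synapse rate below are all assumed round numbers for illustration, not measurements):

    # Back-of-envelope: how many H100s to match two "brain equivalent" bounds.
    # All figures are assumptions, not measured values.
    BOSTROM_UPPER_OPS = 1e17     # upper-bound brain ops/s discussed in the paper
    GENEROUS_OPS = 1e15 * 1e4    # e.g. ~10^15 synapses each doing ~10^4 ops/s
    H100_OPS_ASSUMED = 2e15      # ~2000 low-precision TOPS per GPU, assumed

    for label, target in [("paper's upper bound", BOSTROM_UPPER_OPS),
                          ("absurdly generous bound", GENEROUS_OPS)]:
        print(f"{label}: ~{target / H100_OPS_ASSUMED:.0f} H100s (compute only)")

Under those assumptions the paper's upper bound is a manageable cluster (~50 GPUs), while the more generous bound runs into the thousands.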

kelseyfrog•18h ago
A 5090 has a peak theoretical limit of 3356 GenAI TOPS. So we're "already" an order of magnitude greater than what was considered enough for AGI. One question is, "What happened here?" Was the original estimate wrong? Have we not found the "right" algorithm yet? Something else?
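
For scale, taking that 3356 TOPS figure at face value and comparing it against the paper's 10^14 to 10^17 ops/s range (a rough sketch, not a claim about real-world equivalence):

    # Where the quoted 5090 figure sits in the paper's 10^14 - 10^17 ops/s range.
    rtx5090_ops = 3356e12              # 3356 TOPS, as quoted above
    print(rtx5090_ops / 1e14)          # ~34x the paper's lower estimate
    print(1e17 / rtx5090_ops)          # ~30x short of the paper's upper bound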
SoftTalker•18h ago
Nature needed 3.5 billion years to work it out, and we're going to solve it in a few decades?
kelseyfrog•17h ago
It depends on where we draw the starting line. We're already at parity with the stretch from 3.5 Bya to 541 Mya, because no neurons existed in that interval. Only more recently, in the Cambrian, do we have evidence that voltage-gated potassium signaling evolved[1].

That changes the calculus likely very little, but it feels more accurate.

1. https://www.cell.com/current-biology/pdf/S0960-9822(16)30489...

mathgeek•17h ago
I know it's a silly question to begin with, but if you analyze it seriously, you'd want to at most compare human intelligence->superintelligence with the 20 million years between the first Hominidae and Homo (and even that is probably too large for some folks to compare with).

One could even argue you should only compare it back to the discovery of writing or similar.

Jyaif•17h ago
That's not an argument. Nature never worked out going into space, yet we solved it in a few decades.
SoftTalker•17h ago
It worked out flying though, millions of years before we did and we still don't do it as well. We can't even do walking as well as nature did.
baq•16h ago
Walking is easy compared to elbows, fingers and thumbs. It’s just falling over in a controlled fashion. I hear at least one company in Boston figured it out.

Anyway, humanoid robots should be big in the next 10-20 years. The compute, the batteries, the algorithms are all coming together.

derektank•16h ago
We do flying better. If you adjust for our body weight, a modern airliner uses less energy per traveller mile than your average migratory bird. And the airliner goes much faster.
jll29•16h ago
Yes but that's "in a few decades" ON TOP of millions of years.

If I had to give an estimate, I would consider less the time taken to date than the current state of our knowledge of how the brain works, and how that knowledge has grown in the last decades. There is almost nothing we know so little about as the human brain and how thoughts are represented, modern imaging techniques notwithstanding.

exe34•6h ago
> Yes but that's "in a few decades" ON TOP of millions of years.

If that's the bar, then anything else can fit in "a few decades", since that also rests "ON TOP of millions of years".

gnz11•16h ago
One could argue nature solved it by evolving homo sapiens.
lukeschlather•17h ago
"We haven't found the 'right' algorithm yet." seems like the obvious answer, but the numbers in the paper all make sense and I'm interested in some more exotic explanations why it could actually be some orders of magnitude more than a 5090.

Although that's not looking at memory, and I am also interested in some explanation there... a 5090 has 32 GB, while a human brain has more like a petabyte of memory assuming 1 byte/synapse. That's a million GB, in which case even a large cluster of H100s has an absurd amount of TOPS but nowhere near enough high-speed memory.
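
The memory arithmetic, as a rough sketch (the synapse count and per-GPU HBM figure are assumptions, not measurements):

    # Memory side of the estimate: ~10^15 synapses at 1 byte each is ~1 PB.
    synapses = 1e15                    # often-quoted rough synapse count (assumed)
    brain_bytes = synapses * 1         # 1 byte/synapse, as in the comment above
    h100_hbm_bytes = 80e9              # ~80 GB of HBM per H100 (approximate)
    print(brain_bytes / 1e15, "PB")                       # 1.0 PB
    print(int(brain_bytes / h100_hbm_bytes), "H100s just to hold it in HBM")

That comes out to roughly 12,500 GPUs just for memory capacity, far more than the compute-only estimates above, which is what makes the memory gap look like the binding constraint.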

nvch•17h ago
We are constantly learning (updating the network) in addition to doing inference. Quite possibly our brains allocate more resources to learning than to inference.

Perhaps AI companies don’t know how to run continuous learning on their models:

* it’s unrealistic to do it for one big model because it will instantly start shifting in an unknown direction

* they can't make millions of clones of their model, run them separately, and set them free, as happens with humans

tim333•5h ago
Re the capabilities of neurons, the argument in Moravec's paper seems quite solid, comparing the capabilities of a bit of the brain we understand quite well, the retina, to computer programs performing the same function.

My feeling is we have enough compute for ASI already but not algorithms like the brain. I'm not sure if it'll get solved by smart humans analysing it or by something like AlphaEvolve (https://news.ycombinator.com/item?id=43985489).

One advantage of computers being much quicker than needed is you can run lots of experiments.

Just the power requirements make me think current algorithms are pretty inefficient compared to the brain.

retromario•18h ago
It triggers me that there's an obvious typo in 'Oxford' right under the author's name. I wonder if it has been like that since the original 1997 publication and was never caught or changed through all the updates.
mattlondon•18h ago
There is at least one in the abstract too.

Even the most rudimentary AI would pick this up these days, ironically enough.

easywood•8h ago
I wanted to say the exact same thing! No matter the subject, if you write the name of your own institute with "Oxfrord", I have a hard time taking it seriously.
logicchains•18h ago
These kinds of predictions never address the fact that, empirically speaking, there are diminishing returns to intelligence. IQ only correlates with income up to a point, after which the correlation breaks: https://www.sciencedaily.com/releases/2023/02/230208125113.h... . Similarly, the most politically powerful and influential people are generally not those at the top of the IQ scale.

And that matches what we expect theoretically: of the difficult problems we can model mathematically, the vast majority benefit sub-linearly from a linear increase in processing power. And of the processes we can model in the physical world, many are chaotic in the formal sense, in that a linear increase in processing power provides a sublinear increase in the distance ahead in time that we can simulate. Such computational complexity results are set in stone, i.e. no amount of hand-wavy "superintelligence" could sort an array of arbitrary comparables in O(log(n)) time, any more than it could make 1+1=3.

TheOtherHobbes•17h ago
IQ is mostly a measure of processing speed and memory, with some educational bias that's hard to filter out.

You don't get useful intelligence unless the software is also fit for purpose. Slow hardware can still outperform broken software.

Social status depends on factors like good looks, charm, connections, and general chutzpah, often with more or less overt hints of narcissism. That's an orthogonal set of skills to being able to do tensor calculus.

As for an impending AI singularity - no one has the first clue what the limits are. We like to believe in gods, and we love stories about god-like superpowers. But there are all kinds of issues which could prevent a true singularity - from stability constraints on a hypercomplex recursive system, to resource constraints, to physical limits we haven't encountered yet.

Even if none of those are a problem, for all we know an ASI may decide we're an irrelevance and just... disappear.

logicchains•16h ago
>As for an impending AI singularity - no one has the first clue what the limits are.

That's simply untrue. Theoretical computer scientists understand the lower-bound limits of many classes of problems, and know that for many problems it's mathematically impossible to significantly improve performance with only a linear increase in computing power, regardless of the algorithm/brain/intelligence. Many problems would not even benefit much from a superlinear increase in computing power, because of the nature of exponential growth. For a chaotic system in the mathematical sense, where prediction grows exponentially harder with time, even exactly predicting one minute ahead could require more compute than could be provided by turning the entire known universe into a computer.
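
A toy illustration of that scaling (the Lyapunov exponent and tolerances here are arbitrary assumed values, just to show the shape of the curve):

    # For a chaotic system with Lyapunov exponent lam, an initial error eps0
    # grows roughly like eps0 * exp(lam * t), so predictions stay within a
    # tolerance tol only for t_horizon = ln(tol / eps0) / lam.
    # Doubling precision (one more bit) adds only a constant ln(2)/lam.
    import math

    def prediction_horizon(eps0, tol, lam):
        return math.log(tol / eps0) / lam

    lam = 1.0                      # assumed Lyapunov exponent, arbitrary units
    for bits in (16, 32, 64, 128):
        eps0 = 2.0 ** -bits        # finer initial precision ~ more compute spent
        print(bits, round(prediction_horizon(eps0, tol=1e-2, lam=lam), 1))

Exponentially more precision (and compute) buys only a linearly longer prediction horizon, which is the sublinear return being described.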

LegionMammal978•17h ago
I think the usual counterargument to the strong form is, "So you're saying that not even an AI with a computer the size of Jupiter (or whatever) could run circles around the best humans? Nonsense!" Sometimes with some justification along the lines of, "Evolution doesn't select for as much intelligence as possible, so the sky's the limit relative to humans!" And as to inherently hard problems, "A smart AI will just simplify its environment until it's manageable!"

But these don't really address the near-term question of "What if growth in AI capabilities continues, but becomes greatly sub-exponential in terms of resources spent?", which would put a huge damper on all the "AI takeoff" scenarios. Many strong believers seem to think "a constant rate of relative growth" is so intuitive as to be unquestionable.

logicchains•16h ago
>Many strong believers seem to think "a constant rate of relative growth" is so intuitive as to be unquestionable.

Because they never give a rigorous definition of intelligence. The most rigorous definition in psychology is the G factor, which correlates with IQ and the ability to solve various tasks well, and which empirically shows diminishing returns in terms of productivity.

A more general definition is "the relative ability to solve problems (and relative speed at solving them)". Attempting to model this mathematically inevitably leads into theoretical computer science and computational complexity, because that's the field that tries to classify problems and their difficulty. But computational complexity theory shows that only a small class of the problems we can model achieve linear benefit from a linear increase in computing power, and of the problems we can't model, we have no reason to believe they mostly fall in this category. Whereas believers implicitly assume that the vast majority of problems fall into that category.

Natsu•16h ago
That finding is probably not reliable because of the way they do binning:

https://www.cremieux.xyz/p/brief-data-post?open=false#%C2%A7...

AnimalMuppet•18h ago
> By a "superintelligence" we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.

To me, that's a really good definition. "Much smarter than the best human brains in practically every field". It's going to be hard to weasel around that.

> Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth.

This depends a great deal on what the shape of the processor-power-vs-better-algorithm curve is. If the AI can get you 1% better algorithms, and that gets you an AI that can in turn get you 0.5% better algorithms, and so on, then yes, you're still getting "a further boost", but it won't matter much.
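
A toy version of that diminishing feedback loop (the 1%-then-halving figures are just the example numbers above, not a claim about real systems):

    # Each generation's algorithmic improvement is half the previous one:
    # 1%, 0.5%, 0.25%, ... The gains compound, but total capability plateaus.
    capability, gain = 1.0, 0.01
    for generation in range(1, 11):
        capability *= 1 + gain
        gain /= 2
        print(generation, round(capability, 5))
    # converges to roughly 1.02x: still "a further boost", but no explosion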

baq•16h ago
If you can clone such an AI which is only 2.7% better than the best human you’re suddenly managing teams of geniuses who don’t need to sleep or eat or do anything other than work on a task of your choosing. (Unless they rebel or something.) It’s going to be revolutionary in a very literal sense.
nevertoolate•16h ago
I think I understand what you mean by 2.7% (a small number), but what do % and + mean over the domain of intelligence? The whole problem is that we can't measure it. And the other issue is that we have a huge problem getting intelligent people to work together at all; will it be any easier with non-people?
LinuxAmbulance•18h ago
Ah, the optimism of 1997.

The article is super focused on the hardware side of things, and to a point, that makes sense. Your hardware has to be able to handle what you're simulating.

But it's not the hardware that's the difficult problem. We're nowhere close to hitting the limits of scaling hardware capability, and every time people declare that we are, they're proven wrong in just a few years, and sometimes even in just a few months.

It's the software. And we're so far away from being able to construct anything that could think like a human being that the beginning of it isn't even in sight.

LLMs are fantastic, but they're not a path to building something more intelligent than a human being, "Superintelligence". I would have a negative amount of surprise if LLMs are an evolutionary dead end as far as building superintelligence goes.

Is modeling neuron interactions the only way to achieve it? No idea. But even doing that for the number of neurons in a human brain is currently in fantasy land and most likely will be for at least a few decades, if not longer.

If I had to characterize the current state of things, we're like Leonardo Da Vinci and his aerial screw. We know what a helicopter could be and have ideas about how it could work, but the supporting things required to make it happen are a long, long way off.

dsadfjasdf•17h ago
I can also just say things
windowshopping•17h ago
Yes, and what you chose to say was pretty useless.
JumpCrisscross•17h ago
> We're nowhere close to hitting the limits of scaling hardware capability

Would note that we've only recently crossed Bostrom's 10^17 ops line [1].

To my knowledge, we don't have 10^14 to 10^17 ops computing available for whole-brain simulation.

[1] https://www.top500.org/system/180307/

deepfriedchokes•16h ago
Sometimes I wonder if AGI/superintelligence/whatever will be like flight, which was not successful until we stopped trying to copy nature’s flapping wings and studied flight at a more fundamental level.
WillAdams•16h ago
Yes, but one should still acknowledge the technical success of the _Snowbird_:

https://news.engineering.utoronto.ca/human-powered-ornithopt...

thrance•16h ago
LLMs and other neural nets are already further from biological brains than planes are from birds.
kruffalon•16h ago
I don't even think it's a software matter...

It's my understanding that we (as a species) are far from understanding what intelligence is and how it works in ourselves.

How are we going to model an unknown in a way that allows us to write software that logically represents it?

tim333•4h ago
>so far away from being able to construct anything that could think like a human being that the beginning of it isn't even in sight.

To me, things like MuZero (which learns Go etc. without even being told the rules) and LLMs recently getting gold at the Math Olympiad suggest we are quite close to something that can think like a human. Not quite there, but not a million miles off either.

Both of those, in human terms, involve thinking and are beyond what I can do personally. MuZero is already superintelligent at board games, but current AI can't do things like tidy your room or fix your plumbing. I think superintelligence will be achieved gradually in different specialities.

>like Leonardo Da Vinci and his aerial screw

that didn't function. Current AI functions quite a lot. I think we are maybe more like people trying to build things that will soar like an eagle, but we presently have the Wright bros' plane making it 200m.

FrustratedMonky•17h ago
How long? Some days it feels like just a few hours away.
hyperpape•17h ago
It's wild how far off these predictions are, and yet there are still people to take them seriously.

No matter how impressive you find current LLMs, even if you're the sort of billionaire who predicts AGI before the end of 2025[0], the mechanism that Bostrom describes in this article is completely irrelevant.

We haven't figured out how to simulate human brains in a way that could create AI and we're not anywhere close, we've just done something entirely different.

[0] Yes, I too think most of this is cynical salesmanship, not honest foolishness.

zombiwoof•16h ago
Disregarding the blowhard Eric Schmidt, nobody is close to understanding how spirit/soul work, let alone the brain. It isn't just neural weights and connections.
lukeschlather•16h ago
The predictions in this paper are 100% correct. The author doesn't predict we would have ASI by now. They accurately predicted that Moore's law would likely start to break down by 2012, and they also accurately predicted that EUV would allow further scaling beyond that barrier, but that things would get harder. You may think LLMs are nothing like "real" AI, but I'm curious what you think about the arguments in this paper and what sort of hardware is required for a "real" AI, if a "real" AI does not require hardware in the neighborhood of 10^14 to 10^17 operations per second.

Whether or not LLMs are the correct algorithm, the hardware question is much more straightforward and that's what this paper is about.

hyperpape•15h ago
The entire discussion in the software section is about simulating the brain.

> Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.

> The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.

> Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs to be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.

lukeschlather•13h ago
No, it's about imitation, not simulation. The point is defining how large a computer you would need to achieve performance similar to the human brain on "intelligence" tasks. The comparison to the human brain is because we know human brains can do these kinds of reasoning and motor tasks, so that helps us set a lower bound on how much computing power is necessary; but it doesn't presume we're going to simulate a human brain, that's just stated because it might be one way we could do it.

But still I think you're not engaging with the article properly - it doesn't say we will, it just talks about how much computing power you might need. And I think within the paper it suggests we don't have enough computing power yet, but it doesn't seem like you read deeply enough to engage with that conversation.

hyperpape•3h ago
You're right to distinguish imitation from simulation. That's a good distinction and I think the paper is discussing imitation--using similar learning algorithms to what the brain uses, fed with realistic data from input devices. But my point still stands with imitation.

> This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieve a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.

The paper very clearly suggests an estimate of the required hardware power for a particular strategy of imitating the brain. And it very clearly predicts we will achieve superintelligence by 2033.

If that strategy is a non-starter, which it is for the foreseeable future, then the hardware estimate is irrelevant, because the strategies we have available to us may require orders of magnitude more computing power (or even may simply fail to work with any amount of computing power).