1. Commonwealth (tokamak w/ high temp superconducting magnets)
2. Helion (field reversed configuration, magnetic-inertial, pulsed) ....
?. Wendelstein (stellarator)
Maybe stellarators will be the common design in 2060 once fabrication tech has improved, but for the near future I think it's going to be one of the first two.
They are the only fusion startup I know of that was faster than their own timeline in the last year.
I'm sure they developed some really useful technology in the process of building the thing, but I suspect they would have made more progress faster if they had taken a more iterative approach.
The first transistor in Silicon Valley wasn’t made by Shockley.
[1] https://tae.com/tae-technologies-delivers-fusion-breakthroug...
There are huge advantages to muon-catalyzed fusion if they can get it to work. Plants would be orders of magnitude smaller and cheaper to build.
Tokamaks' main problem is plasma instabilities. While Commonwealth may achieve high Q briefly, nobody knows how the plasma will behave under those conditions, and long-duration operation may not be possible.
Stellarators, on the other hand, do not have plasma stability problems. So my bet is on those.
All approaches have huge hurdles to overcome. Helion may have bigger challenges on the Q side, but all in all I think the probabilities of being viable end up similar.
All other fusion power plants are thermal power plants. I suspect all thermal power plants will end up being economically unviable in the world of renewables, for various reasons. They’re just too bulky and slow, and require special consideration wrt cooling. It’s one of the reasons why gas power is king these days.
If we think really far ahead, the scaling of thermal power plants is limited by the heat they put out. It ends up contributing to global warming just from the thermal forcing they apply to the environment. The effect of the ones we have today is already surprisingly significant. Helion is a path to being able to produce a huge amount of energy with fairly limited impact on the environment (eventually limited by the thermal energy they dump, but perhaps they can use thermal radiation panels that dump the waste heat directly to space).
A useful fusion power plant needs a triple product of at least about 3e21 keV * s * m^-3.
They weren’t fusing things (at least, not much). This is a figure of merit that allows you to compare, across all the different fusion methods, how well you would be able to fuse the plasma if you were using burnable fuel such as deuterium and tritium (isotopes of hydrogen that have one or two extra neutrons).
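To make that concrete, here's a minimal sketch (the plasma parameters are made-up illustrative numbers, not anything from this thread) of how the triple product is computed and compared against the approximate D-T threshold:

    # Illustrative check of the Lawson triple product for D-T fusion.
    DT_THRESHOLD = 3e21  # keV * s / m^3, approximate threshold for D-T

    def triple_product(density_m3, temperature_keV, confinement_s):
        """Return n * T * tau_E in keV * s / m^3."""
        return density_m3 * temperature_keV * confinement_s

    # Hypothetical plasma: 1e20 m^-3 at 15 keV with 2 s energy confinement.
    value = triple_product(1e20, 15, 2)
    print(value, value >= DT_THRESHOLD)  # 3e21 -> just meets the threshold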
Why not capture and make use of it?
Isn't that the whole point of heat pumps? Grab energy from one locale, move it to another to do useful work?
The Earth radiates away almost exactly as much energy as it receives. It has to. Otherwise it would boil. Our biosphere, however, extracts a lot of available energy from that system. That results in the Sun shining low-entropy energy on the Earth, and the Earth radiating high-entropy radiation away.
Put another way, a universe that is homogeneous at 10 million degrees has plenty of energy. But it has zero useful energy, because you have no entropy gradient.
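As a back-of-the-envelope illustration of that balance (textbook numbers, my own sketch rather than anything from this thread): absorbed sunlight over the Earth's disk must equal thermal emission over the whole sphere, which pins down an equilibrium temperature:

    # Radiative balance: S * (1 - albedo) * pi R^2 = 4 pi R^2 * sigma * T^4
    S = 1361.0       # W/m^2, solar constant
    albedo = 0.3     # fraction of sunlight reflected
    sigma = 5.67e-8  # W/m^2/K^4, Stefan-Boltzmann constant

    T_eq = (S * (1 - albedo) / (4 * sigma)) ** 0.25
    print(f"Equilibrium temperature ~ {T_eq:.0f} K")  # ~255 K, before greenhouse warming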
You could presumably radiate it to space by moving the heat to something that can "see" a clear sky, but you can make this happen naturally on a far larger scale by reducing GHG content in the atmosphere and increasing the radiative efficiency of the entire planet's surface, as well as with various passive systems like cool roofs, albedo manipulation, and special materials that radiate at specific wavelengths.
When you cool a building or a data center or whatever, you can pump that heat into a high temperature fluid and send it to a sky-radiator instead of sending it to an air-exchange radiator. So heat produced in processes could be moved to radiator assemblies and “beamed” into space (I probably should have said radiated).
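For a sense of scale, here's a rough, hedged estimate of what such a sky-facing radiator could reject per square metre (the panel and effective sky temperatures below are assumptions on my part):

    sigma = 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant
    emissivity = 0.95
    T_panel = 330.0   # K, assumed waste-heat loop temperature (~57 C)
    T_sky = 255.0     # K, assumed effective clear-sky temperature

    net_W_per_m2 = emissivity * sigma * (T_panel**4 - T_sky**4)
    print(f"~{net_W_per_m2:.0f} W/m^2 net heat rejection")  # a few hundred W/m^2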
Imagining a future with ~3% growth: let's say fusion is deployed and everything goes electric in the next few years (not happening that fast, though), with AI data centers everywhere so individual-level AI (say, the personal OS-level stuff from the movie "Her") runs per human, and we reach the out-of-my-buttocks figure of 500 TWh/year in 10 years' time, which is crazy ... well, that would not "boil the world"!
The Sun delivers ~170,000 TWh per year. So 500 TWh still would not be that significant, and within the Sun's yearly delivery fluctuations.
The problem with energy generation today is that it's releasing gases, and these gases are disrupting the planet’s energy balance - especially how Earth gets rid of the massive energy it receives from the Sun. We do need to restore the balance between what comes in and what goes back out - fusion can help tackle that problem specifically, so it's beneficial overall even if it eventually adds a fractional percentage to the overall planetary energy bill.
I picture fusion as a complementary source, not the only one, and, once/if deployed, it would help close some of the key loopholes that prevent solar (and other renewables) from being deployed 100%.
It delivers 170,000 TWh per hour (i.e. 170,000 TW)!
3.14 * (6378 km)^2 * 1300 W/m^2 ≈ 166 PW
It's a ludicrous amount of energy - roughly the entire human annual energy usage is delivered every 70 minutes. The whole problem of AGW is that even a tiny modulation in absolute terms of things that affect the steady state (i.e. greenhouse gases) can have substantial effects. But it's also, presumably, going to be key to fixing the problem, if we do fix it.
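A quick sanity check of those figures (the annual human energy use number below is my own rough assumption, about 180,000 TWh/year):

    import math

    R_earth = 6.378e6        # m, Earth's radius
    solar_constant = 1300.0  # W/m^2 (rounded; ~1361 W/m^2 at the top of the atmosphere)

    intercepted_W = math.pi * R_earth**2 * solar_constant
    print(f"Intercepted solar power: {intercepted_W / 1e15:.0f} PW")  # ~166 PW

    # Assumed annual human primary energy use: ~180,000 TWh/year.
    human_annual_J = 180_000e12 * 3600  # TWh -> J
    minutes = human_annual_J / intercepted_W / 60
    print(f"The Sun delivers that much in ~{minutes:.0f} minutes")    # ~65 minutes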
For every major criterion that gets trounced, we somehow invent 4 or 5 new requirements for it to be "real" AGI. Now, it seems, AGI must display human-level intelligence at superhuman speed (servicing thousands of conversations at once), be as knowledgeable as the most knowledgeable 0.1% of humans across every facet of human knowledge, be superhumanly accurate, perfectly honest, and not make anything up.
I remember when AGI meant being able to generalize knowledge over problems not specifically accounted for in the algorithm… the ability to exhibit the “generalization” of knowledge, in contrast to algorithmic knowledge or expert systems. It was often referred to as “mouse level” or sometimes “dog-level” intelligence. Now we expect something vastly more capable than any being that has ever existed or it’s not “AGI” lmfao. “ASI” will probably have to solve all of the world’s problems and bring us all to the promised land before it will merit that moniker lol.
People base their notions of AI on science fiction, and it usually goes one of two ways in fiction.
Either a) Skynet awakens and kills us all, or
b) the singularity happens, AIs get so far ahead they become deities, and maybe the chosen elect transhumanists get swept up into some simulation that is basically a heavenly realm or something.
So yeah, bringing us to the promised land is an expectation of super AI that does seem to come out of certain types of science fiction.
So do we have that? As far as I know, we just have very, very large algorithms (to use your terminology). Give it any problem not in the training data and it fails.
As a sounding board and source of generally useful information, even my small locally hosted models generally outperform a substantial slice of the population.
We all know people we would not ask anything that mattered, because their ideas and opinions are typically not insightful or informative. Conversing with a 24b model is likely to have higher utility. Do these people then not exhibit “general intelligence”? I really think we generally accept pattern matching and next-token ramblings, hallucinations, and rampant failures of reasoning in stride from people, while applying a much, much higher bar to LLMs.
To me this makes no sense, because LLMs are compilations of human culture and their only functionality is to replicate human behavior. I think on average they do a pretty good job vs a random sampling of people, most of the time.
I guess we see this IRL when we internally label some people as "NPCs".
So does my local copy of Wikipedia.
But the lines do get blurry, and many real humans indeed seem to be no more than stochastic parrots pretending to understand.
Is it? AI is impressive and all, but I don't think any of them have passed the Turing test as defined by Turing (pop-culture conceptions of the Turing test are usually much weaker than what the paper actually proposes), although I'd be happy to be proven wrong.
I've just read the 1950 paper "Computing Machinery and Intelligence" [1], in which Turing proposes his "Imitation Game" (what's now known as a "Turing Test"), and I think your claim is very misleading.
The "Imitation Game" proposed in the paper is a test that involves one human examiner and two examinees, one being a human and the other a computer, both of which are trying to persuade the examiner that they are the real human; the examiner is charged with deciding which is which. The popular understanding of "Turing Test" involves a human examiner and just one examinee, which is either a human or a computer, and the test is to see whether the examiner can tell.
These are not identical tests -- but if both the real human examinee and the human examiner in Turing's original test are rational (trying to maximise their success rate), and each have the same expectations for how real humans behave, then the examiner would give the same answer for both forms of the test.
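A toy sketch of the two protocols (my own framing, not anything from Turing's paper; the score function stands in for the examiner's judgment):

    import random

    def score(transcript: str) -> float:
        # Placeholder for the examiner's "how human does this read?" judgment.
        return random.random()

    def original_imitation_game(transcript_a, transcript_b):
        """Turing's version: two examinees, the examiner picks which is the human."""
        return "A" if score(transcript_a) >= score(transcript_b) else "B"

    def popular_turing_test(transcript, threshold=0.5):
        """Popular version: one examinee, the examiner decides human or machine."""
        return "human" if score(transcript) >= threshold else "machine"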
Aside: The bulk of this 28-page paper anticipates possible objections to his "Imitation Game" as a worthwhile alternative to the original question "Can machines think?", including a theological argument and an argument based on the existence of extra-sensory perception (ESP), which he takes seriously as it was apparently strongly supported by experimental data at that time. It also cites Helen Keller as an example of how learning can be achieved through any mechanism that permits bidirectional communication between teacher and student, and on p. 457 anticipates reinforcement learning:
> We normally associate punishments and rewards with the teaching process. Some simple child-machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment-signal are unlikely to be repeated, whereas a reward-signal increased the probability of repetition of the events which led up to it.
[1]: https://archive.org/details/MIND--COMPUTING-MACHINERY-AND-IN...
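A minimal sketch of the reward/punishment idea Turing describes above (my own toy example, not code from the paper): a reward signal scales up the probability of the action that preceded it, a punishment signal scales it down:

    import random

    actions = ["A", "B", "C"]
    weights = {a: 1.0 for a in actions}

    def choose():
        return random.choices(actions, weights=[weights[a] for a in actions])[0]

    def reinforce(action, reward_signal, factor=1.5):
        """Scale the weight of the action that preceded the signal."""
        weights[action] *= factor if reward_signal else 1.0 / factor

    # Rewarding "B" a few times makes it the most likely choice.
    for _ in range(5):
        reinforce("B", reward_signal=True)
    print(choose(), weights)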
I disagree. Having a control and not having a control is a huge difference when conducting an experiment.
[Apologies for the goal post shifting]
Fusion is much better understood. We are not going to create "the wrong kind of fusion" and have to come up with a new plan.
How come we have to build it and test it to know if it works?
Do we lack a mathematical model?
Fission reactors are relatively "easy" to simulate as giant finite-element / Monte Carlo codes over rough voxels of space, each carrying material properties like thermal conductivity, heat capacity, etc. I happened to be involved with one such code that was 50+ years old and worked just fine, thanks to all the physicists and engineers who carefully crafted the model data and code to reflect what would likely happen in reality when testing new, conventional reactor designs.
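A very stripped-down sketch of that voxel idea (purely illustrative; the grid, properties, and numbers are invented, and real reactor codes are vastly more detailed):

    import numpy as np

    # 3D grid of voxels: a temperature field plus per-voxel material properties.
    shape = (20, 20, 20)
    T = np.full(shape, 300.0)      # K, coolant/structure temperature
    T[10, 10, 10] = 900.0          # a hot voxel standing in for a fuel region
    alpha = np.full(shape, 1e-5)   # m^2/s, thermal diffusivity per voxel
    dx, dt = 0.01, 0.1             # m, s

    def step(T, alpha, dx, dt):
        """One explicit finite-difference step of the heat equation."""
        lap = (
            np.roll(T, 1, 0) + np.roll(T, -1, 0) +
            np.roll(T, 1, 1) + np.roll(T, -1, 1) +
            np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6 * T
        ) / dx**2
        return T + dt * alpha * lap

    for _ in range(100):
        T = step(T, alpha, dx, dt)
    print(T.max())  # the hot spot diffuses into the surrounding voxels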
The problems with fusion are many orders-of-magnitude more involved and complex with wear, losses, and "fickleness" compared to fission.
Thus, experimental physics meeting engineering and manufacturing in new domains is expensive and hard work.
Maybe in 200 years there will be an open-source, 3D-printable fusion reactor. :D
[0] https://en.wikipedia.org/wiki/Edward_Norton_Lorenz#Chaos_the...
In a fluid, effects are local: a particle can only directly affect what it is in direct contact with.
In a plasma, every particle interacts with every other. One definition of a plasma is that the motion is dominated by electromagnetic effects rather than thermodynamic: by definition, if you have a plasma, the range of interactions isn't bounded by proximity.
This doesn't apply quite so much to (e.g.) laser ignition plasmas, partly because they're comparatively tiny, and partly because the timescales you're interested in are very short. So they do get simulated.
But bulk simulations the size of a practical reactor are simply impractical.
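An illustrative toy contrast (my own example, not a real plasma code): with only contact forces each particle touches a handful of neighbours, but with unscreened Coulomb interactions a naive simulation has to consider every pair, which is O(N^2) per timestep:

    import numpy as np

    N = 1_000
    rng = np.random.default_rng(0)
    pos = rng.random((N, 3))
    charge = rng.choice([-1.0, 1.0], size=N)

    def coulomb_forces(pos, charge):
        """Naive all-pairs Coulomb force (up to constants): O(N^2) pairs."""
        diff = pos[:, None, :] - pos[None, :, :]   # (N, N, 3) pair separations
        r2 = (diff ** 2).sum(-1) + np.eye(N)       # pad the diagonal to avoid /0
        f = (charge[:, None] * charge[None, :] * r2 ** -1.5)[..., None] * diff
        return f.sum(axis=1)

    forces = coulomb_forces(pos, charge)
    print(forces.shape)  # already heavy at N=1000; a reactor-scale plasma holds
                         # on the order of 1e20 particles per cubic metre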
But there is a German fusion startup about to build a stellarator.
https://www.proximafusion.com/about
(I assumed there was some sort of cooperation with Wendelstein, but found no mention of it on a quick look just now.)
Maybe there are complicated legal reasons for it.
But using "instead" implies zero sum thinking. Halting research in fusion does not cause investment in fission. They are only slightly related things.
With that rate of growth for batteries and the current improvements in battery capacity, will there be any need for a fusion plant, given that we won't see one come online for at least 10 to 15 years?
https://spectrum.ieee.org/china-nuclear-fusion-reactor
In general I was leaning towards your side, but after learning a bit more about Wendelstein via a German podcast whose hosts went there twice, I believe the time for fusion is indeed near.
For the current energy needs, I still would rather invest heavily in solar, wind and battery, though.
But recent breakthroughs in superconductors and advances in computing (confinement of the plasma needs lots of fast calculations) make it seem like a realistic goal to pursue for the mid and long term.
https://archive.is/OHy4l
https://phys.org/news/2025-06-wendelstein-nuclear-fusion.htm...