https://www.chinadaily.com.cn/a/202507/21/WS687d99bca310ad07...
Is this also common for EU/USA? Do we say "UK developers new method for ..." or "Researchers at Cambridge"?
I swear I'm not making any political statements, just wondering why we treat it as a homogeneous entity.
these would be completely normal headlines, in my experience.
Not really a big deal, though I'd have... you know... linked to the actual paper or maybe mentioned the professor's name more prominently.
I think most stuff I read emphasizes institute and researchers more heavily, but I can see why anyone doing public research might want to expand the scope of credit.
"China does thing" literally applies here, as both entities are Chinese-state educational and research institutions.
All the time. Just search for "UK/British scientists/engineers invent/develop" (with quotes). There are lots of headlines like that.
E.g. https://www.telegraph.co.uk/news/2025/05/10/british-scientis...
It's more that you don't really get any headlines out of China that aren't from or laundered through Chinese English-language media first. Whereas in the West, universities send their PR pieces to the Western media directly. No normal journalist is trolling Arxiv or Nature or whatever for juicy papers, and certainly not Chinese-language journals: they're just rehashing the media release, usually uncritically. Which is why you hear about every time someone sneezes near a new battery chemistry that "may revolutionise energy storage" but "needs more research into mass production methods" at MIT but almost never at Tsinghua, unless it's done the rounds first and caused a stir in the domain-specific publications.
Communist in name. Lots of private businesses though.
I guess when you're inside a country which produces the news, the actual location inside that country matters more than for people outside that country ...
E.g. for the US what's commonly called Americentrism - I guess a similar term exists for China.
I want 10 million dollar factories that can make 10 year old semiconductor chips.
You don't, pretty much. Exceptions exist, but not when it comes to mixed-signal processes.
Raspberry Pi would beg to differ.
Plenty of lower-power or older stuff (RPi included) uses older nodes just because that's what's available. Microcontrollers tend to use larger nodes (22, 40, or 55 nm) just because they don't need the super-high-speed stuff.
Also, the RPI5 uses 16nm, not 22nm. Still not modern, but not unheard of for stuff like SBCs where performance is not particularly important compared to cost.
Now that the law is close to coming to an end, the economics change. The latest tech provides negligible marginal benefits to the median consumer. So now it is possible to think of commoditizing a single process and making the factories much, much cheaper.
This material will never replace cheap materials, like silicon or silicon carbide, or even gallium nitride, in the bulk of semiconductor devices, e.g. in CPUs and memories, or in power semiconductor devices.
It will be reserved for a few high-speed devices, in special instruments that need high-speed signal processing or in radars or communication devices used in high-frequency bands (obviously these include military applications).
The first roadblock in the use of new semiconductor materials is that in the beginning nobody succeeds in making crystals that are both big enough and free enough of defects. For size, the ability to make 2-inch wafers is usually the threshold for enabling commercial applications.
The second problem is finding metallization systems that can achieve ohmic contacts and rectifying contacts on the semiconductor crystal, and the third is finding impurities that allow the polarity and the concentration of the charge carriers to be modified over wide enough ranges.
These two problems are particularly difficult for wide-bandgap semiconductors like gallium nitride, but they are unlikely to be difficult for indium selenide, which should behave similarly to zinc selenide or indium phosphide, for which there is much more experience.
Still, I guess I'm more interested in what Nikon is doing. Simplifying the process—eliminating the need for masks—could significantly decrease initial costs. https://www.techpowerup.com/339060/nikon-introduces-600x600-...
david927•4d ago
• 5–10x higher electron mobility
• Atomic thickness
• Tunable bandgap
• Lower power leakage
• Faster switching
So how big a deal is this news?
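For a rough sense of what the mobility bullet could mean in practice, here is a first-order scaling sketch (my own back-of-envelope, not from the article): at fixed geometry and overdrive voltage, a long-channel MOSFET's drive current scales roughly with carrier mobility, so the intrinsic gate delay C*V/I scales as 1/mobility.

```python
def relative_delay(mobility_ratio):
    """Intrinsic gate delay relative to a silicon baseline of 1.0,
    under the crude assumption that drive current scales linearly
    with carrier mobility (long-channel, fixed geometry/voltage)."""
    return 1.0 / mobility_ratio

# Using the low and high ends of the "5-10x" mobility claim above:
print(relative_delay(5.0))   # 0.2 (5x faster intrinsic switching)
print(relative_delay(10.0))  # 0.1
```

Real short-channel devices are velocity-saturated, so the actual speedup would be smaller; this is only an upper-bound intuition.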
igravious•3d ago
addaon•1d ago
adrian_b•1d ago
It has already been around two decades since silicon dioxide stopped being used as the gate insulator in high-performance transistors. The reason is that its dielectric constant is too low: for very small transistors the gate insulator would have to be so thin that it is impossible for it not to have holes, and also impossible to prevent electrons from tunneling through it.
Therefore silicon dioxide has been replaced by hafnium dioxide (a part of the hafnium may be substituted with zirconium or rare-earth elements), which has a much higher dielectric constant, and which is also chemically resistant enough to survive the following wafer processing steps.
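The thickness advantage of a high-k dielectric falls out of the parallel-plate approximation C/A = eps0 * eps_r / t. A minimal sketch, assuming commonly quoted relative permittivities (about 3.9 for SiO2; HfO2 is usually cited somewhere in the 16-25 range, with 25 taken here):

```python
EPS_SIO2 = 3.9   # relative permittivity of silicon dioxide
EPS_HFO2 = 25.0  # hafnium dioxide; varies with phase, 25 assumed

def equivalent_thickness(t_sio2_nm, eps_high_k):
    """Physical thickness of a high-k film that has the same
    capacitance per unit area as a SiO2 film t_sio2_nm thick
    (parallel-plate approximation)."""
    return t_sio2_nm * eps_high_k / EPS_SIO2

# An HfO2 gate matching a 1 nm SiO2 gate can be ~6.4 nm thick,
# thick enough to suppress pinholes and tunneling leakage.
print(round(equivalent_thickness(1.0, EPS_HFO2), 2))  # 6.41
```

This ratio is why the industry talks about "equivalent oxide thickness" (EOT) when comparing gate stacks.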
Because HfO2 cannot be grown in place, but it must be deposited in its entirety, it is much more difficult to ensure that the insulator-semiconductor interface is perfect, but the industry had to solve this problem decades ago, otherwise it would have been impossible to make the transistors smaller.
For semiconductors other than silicon, the gate insulator has always had to be deposited. Because this is hard, metal-insulator-semiconductor FETs not made of silicon or silicon carbide were only very rarely used in the past. The transistors that are not made of Si or SiC are typically either Schottky-gate FETs or heterojunction bipolar transistors.
chasil•1d ago
The film itself is one atomic layer in thickness?
I don't know how you would make "wells" that form a FET, either for the source and drain, or for the larger complementary wells of CMOS.
I don't know how advanced the thinking is to do this, or an equivalent.
https://www.science.org/doi/10.1126/science.adu3803
adrian_b•1d ago
Most gallium nitride devices, like those that are used now in miniature chargers for laptops/smartphones, are made like this.
Using this technique for indium selenide is a standard procedure, not something surprising.
The "wells" are made by doping with various kinds of atoms, which is normally done by ion implantation, i.e. an ion beam inserts the desired impurities into the crystal at the desired depth.
In very thin devices, like in most modern CMOS technologies, the doped zones no longer look like "wells". For an N-channel MOSFET, you just have from source to drain 3 zones of alternating polarity, n-p-n. The middle zone is surrounded partially or even totally by the gate insulator.
hristov•1d ago
gsf_emergency_2•1d ago
They won't tell anyone
0cf8612b2e1e•1d ago
adrian_b•1d ago
Currently, the major consumers of indium are all the screens for monitors, laptops and smartphones, which use indium oxide as a transparent conductor, then the LEDs used in lighting and indicators. Also the power devices with gallium nitride contain indium, and their use is increasing.
There are no dedicated indium mines. Indium is obtained as a byproduct of the extraction of other metals, primarily from zinc mining, but its concentration in zinc minerals is very small.
In order to produce more indium, one also has to produce more zinc, in an amount several orders of magnitude greater than the amount of indium produced. Once the demand for indium exceeds what is available from current zinc production, further increases in demand will raise the indium price much more steeply.
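To put "several orders of magnitude" into rough numbers, here is a back-of-envelope sketch. The 50 ppm recoverable-indium concentration is an illustrative figure I picked for the example, not sourced data:

```python
# Assumed recoverable indium fraction in processed zinc ore (50 ppm).
# Illustrative assumption only -- real concentrations vary widely.
IN_PER_ZN = 50e-6

def zinc_needed_tonnes(indium_tonnes):
    """Order-of-magnitude estimate of the zinc that must be mined
    to recover a given mass of byproduct indium."""
    return indium_tonnes / IN_PER_ZN

# Roughly 20,000 tonnes of zinc per tonne of indium under this
# assumption -- four orders of magnitude.
print(zinc_needed_tonnes(1.0))  # 20000.0
```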
Apart from indium, among the other chemical elements only the platinum-group metals show so great a mismatch between the amount that would be required by potential applications and the amount that is available on Earth. Selenium and tellurium are also close to them in this respect.
phkahler•22h ago
We also ramped production of those screens without some major supply chain problem requiring 10x zinc production.
I don't see scarcity as a problem.
ajross•1d ago
Where they all fail is production process, which amounts to transistor size, basically. Sure, you can make a really great, very efficient, measurably improved very much macroscopic transistor. But no one knows how to put a hundred billion of them on a chip, so... basically who cares?
Someday someone will figure it out, maybe. But announcing an exciting new chemistry says little to nothing.
adrian_b•1d ago
It is easy to make transistors as small as those on silicon with most other semiconductors, but the cost of the final product would be many times greater.
One important reason is that silicon is made into huge 12-inch wafers, while most other semiconductors are made into small 2-inch to 4-inch wafers, like silicon several decades ago. At each processing step on a silicon production line, one machine processes 9 to 36 times more transistors than it would if another semiconductor were used. Large wafers also waste much less area when making big dies.
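The 9-36x figure is just the wafer-area ratio. A quick sanity check (12-inch ≈ 300 mm, 4-inch ≈ 100 mm, 2-inch ≈ 50 mm; edge exclusion and flats ignored):

```python
import math

def wafer_area_mm2(diameter_mm):
    """Area of a circular wafer, ignoring edge exclusion and flats."""
    return math.pi * (diameter_mm / 2) ** 2

si_300 = wafer_area_mm2(300)   # standard 12-inch silicon wafer
w_100  = wafer_area_mm2(100)   # 4-inch wafer, common for other materials
w_50   = wafer_area_mm2(50)    # 2-inch wafer

print(si_300 / w_100)  # 9.0  -- vs a 4-inch wafer
print(si_300 / w_50)   # 36.0 -- vs a 2-inch wafer
```

Area scales with the square of the diameter, which is where the 9x and 36x endpoints come from.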
In general, for silicon there exist very big processing machines with very high productivity, while for other materials there is little difference between lab equipment and what can be used for commercial production. An integrated circuit made on a non-silicon material also requires more processing steps and more expensive materials.
In order to keep increasing the performance of CPUs and GPUs, the replacement of silicon with another semiconductor is unavoidable, perhaps a decade from now. However, that will be done only after all other possibilities of improving silicon devices have been completely exhausted, in order to avoid the increase in production costs.
ajross•1d ago
Yikes, [citation needed] here. No, it absolutely is not. All the etch and litho chemistry is highly specific to the substrate and dopants. You can't just feed a germanium crystal through an etcher tool in a TSMC fab and get anything but a brick out the other side.
I'm not aware of anyone anywhere doing low-nm lithography on anything but silicon (even in a demo context, or even announcing plans for the capacity), but I'm willing to be educated.
adrian_b•1d ago
The reason is that such integrated circuits could not compete in cost, so developing all the fabrication equipment for them would not be worthwhile.
On the other hand, on special discrete devices, e.g. microwave transistors, and on experimental devices, "low-nm" (which means tens of nm for the most advanced devices) has been done for a long time.
The low fabrication yields, which would be unacceptable for the mass production of integrated circuits, have much less relevance for small and expensive discrete devices and for experimental devices.