The paper[0] is actually about their logarithmic number system. Deep learning is given as an example, and their reference implementation is in PyTorch, but it is far from the only application.
Anything involving a large number of multiplications that produce extremely small or extremely large numbers could make use of their number representation.
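To make that concrete, here is a toy sketch (mine, not from the paper): the product of many small factors underflows even in float64, while the same product carried as a sum of logs is an ordinary, perfectly representable number.

```python
import math

# Toy illustration (not from the paper): a long chain of small
# factors underflows in double precision, but its log is just a sum.
factors = [1e-5] * 100                    # true product is 1e-500
print(math.prod(factors))                 # 0.0 -- underflows float64
print(sum(math.log(f) for f in factors))  # -1151.29... == log(1e-500)
```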
It builds on existing complex number implementations, which makes it fairly easy to implement in software and reasonably efficient. They provide implementations of a number of common operations, including dot product (building on PyTorch's preexisting, numerically stabilized log-sum-exp) and matrix multiplication.
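As a hedged sketch of what that looks like (my own toy code, not the paper's API), here is a log-space dot product for strictly positive values, delegating all the stabilization to torch.logsumexp; the paper's complex-log representation is what lifts the positivity restriction (an imaginary part of pi marks a negative value):

```python
import torch

def log_dot(log_a: torch.Tensor, log_b: torch.Tensor) -> torch.Tensor:
    # log(sum_i a_i * b_i) = logsumexp_i(log a_i + log b_i)
    return torch.logsumexp(log_a + log_b, dim=-1)

a = torch.full((1000,), 1e-30)
b = torch.full((1000,), 1e-30)
print(torch.dot(a, b))            # tensor(0.) -- every a_i * b_i underflows
print(log_dot(a.log(), b.log()))  # tensor(-131.24...) == log of the true 1e-57
```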
The main downside is that this is a very specialized number system: if you care about things other than chains of multiplications (say... addition?), you should probably stick with classical floating-point numbers, since every addition in log space costs a log-sum-exp rather than a single hardware add.
[0]: https://arxiv.org/abs/2510.03426