frontpage.

The Met Releases High-Def 3D Scans of 140 Famous Art Objects

https://www.openculture.com/2026/03/the-met-releases-high-definition-3d-scans-of-140-famous-art-o...
1•coloneltcb•1m ago•0 comments

How to start learning Web Development from scratch?

1•JoyBundle•3m ago•0 comments

There Are No Fees at America's Smallest Bank (2023)

https://www.bloomberg.com/news/features/2023-04-13/america-s-smallest-bank-is-kentland-federal-sa...
1•yazantapuz•4m ago•0 comments

Clean Room as a Service

https://malus.sh/index.html
2•Venn1•5m ago•0 comments

WolfIP: Lightweight TCP/IP stack with no dynamic memory allocations

https://github.com/wolfssl/wolfip
1•789c789c789c•5m ago•0 comments

Creating virtual block devices with ublk

https://jpospisil.com/posts/2026-01-13-creating-virtual-block-devices-with-ublk
1•shayonj•6m ago•0 comments

Lf-lean: The frontier of verified software engineering

https://theorem.dev/blog/lf-lean/
1•alpaylan•8m ago•0 comments

Iran Includes American Tech Giants on List of New Targets

https://gizmodo.com/iran-includes-american-tech-giants-on-list-of-new-targets-2000732530
3•Anon84•8m ago•0 comments

Atlassian CEO: AI doesn't replace people here, but we're firing them anyway

https://www.heise.de/en/news/Atlassian-CEO-AI-doesn-t-replace-people-here-but-we-re-firing-them-a...
2•layer8•8m ago•0 comments

Auto-copy files when creating Git worktrees or Jujutsu workspaces

https://cretezy.com/2026/worktree-copy/
1•CraftThatBlock•9m ago•0 comments

Emscripten Target Support for Swift

https://forums.swift.org/t/pitch-emscripten-target-support-for-swift/85310
2•maxdesiatov•10m ago•0 comments

Italian prosecutors seek trial for Amazon, 4 execs in alleged $1.4B tax evasion

https://www.reuters.com/world/italian-prosecutors-seek-trial-amazon-four-execs-over-alleged-14-bl...
2•amarcheschi•10m ago•0 comments

Rapid Customization for RAG and Context Engineering

https://www.rapidfire.ai/blogs/grounded-ai-starts-here-the-open-framework-for-rag-and-context-eng...
1•kbigdelysh•11m ago•0 comments

Amazon Forced Engineers to Use AI Coding Tools. Then It Lost 6.3M Orders

https://medium.com/@heinancabouly/amazon-forced-engineers-to-use-ai-coding-tools-then-it-lost-6-3...
3•HeinanCA•11m ago•0 comments

Colon cancer now leading cause of cancer deaths under 50 in US

https://www.theguardian.com/us-news/2026/mar/12/colon-cancer-leading-deaths
3•stevenwoo•12m ago•0 comments

We Built Our AI Agent

https://www.mendral.com/blog/how-we-built-our-ai-agent
2•aluzzardi•12m ago•0 comments

US- and Greek-owned tankers ablaze after Iran claims 'underwater drone' strike

https://www.lloydslist.com/LL1156592/US--and-Greek-owned-tankers-ablaze-after-Iran-claims-underwa...
10•everybodyknows•12m ago•0 comments

The Road Not Taken: A World Where IPv4 Evolved

https://owl.billpg.com/ipv4x/
2•billpg•14m ago•0 comments

NEXUS.Pulse: A Binary-First Broadcast Service. 1M Entities Streamed in 11.8µs

https://telemetry.intelligentaudio.net
1•NexusCore•14m ago•1 comments

Show HN: Sigyn – OSS native macOS secrets manager to replace .env (GUI+CLI)

https://connorguy.github.io/sigyn/
1•conguy•14m ago•1 comments

Asia rolls out 4-day weeks, WFH to solve fuel crisis caused by Iran war

https://fortune.com/2026/03/11/iran-war-fuel-crisis-asia-work-from-home-closed-schools-price-caps/
11•speckx•15m ago•0 comments

Chardet dispute shows how AI will kill software licensing

https://www.theregister.com/2026/03/06/ai_kills_software_licensing/
2•DGAP•15m ago•0 comments

Show HN: jj-benchmark – Evaluating AI agents on Jujutsu version control

https://tabbyml.github.io/jj-benchmark/
1•wsxiaoys•16m ago•0 comments

agent-shell 0.47 updates

https://xenodium.com/agent-shell-0-47-1-updates
1•xenodium•16m ago•0 comments

AI Is Heroin

https://pancake.bearblog.dev/2026-03-11-ai-is-heroin/
3•speckx•16m ago•2 comments

Show HN: Open-source project management tool

https://github.com/MislavNovalic/Axelo
1•mnovalic•16m ago•0 comments

Adding internationalization to a SaaS is easier than it used to be

1•LeanVibe•19m ago•0 comments

Show HN: An Embeddable SQLite Parser

https://github.com/sqliteai/liteparser
1•marcobambini•20m ago•0 comments

Show HN: I made PythonStarter so I could launch faster with no Next.js or React

https://pythonstarter.co/
1•dan_easterman•21m ago•1 comments

Grand jury subpoena for Signal user data in the United States District Court

https://signal.org/bigbrother/district-of-columbia/
4•nobody9999•22m ago•0 comments

Faster asin() was hiding in plain sight

https://16bpp.net/blog/post/faster-asin-was-hiding-in-plain-sight/
233•def-pri-pub•1d ago

Comments

erichocean•1d ago
Ideal HN content, thanks!
orangepanda•1d ago
> Nobody likes throwing away work they've done

I like throwing away work I've done. Frees up my mental capacity for other work to throw away.

adampunk•1d ago
We love to leave faster functions languishing in library code. The basis for Q3A’s fast inverse square root had been sitting in fdlibm since 1986, on the net since 1993: https://www.netlib.org/fdlibm/e_sqrt.c
def-pri-pub•21h ago
Funny enough, that fdlibm implementation of asin() did come up in my research. I believe it might have been more performant in the past. But taking a quick scan of `e_asin.c`, I see it doing something similar to the Cg asin() implementation, but with more terms and more multiplications, which I'd guess makes it slower. It also seems to take more branches, which could add to the slowdown.
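
(For reference, a sketch of the Cg-style cubic under discussion -- the coefficients are the Hastings / A&S 4.4.45 ones covered downthread; the function name and exact arrangement are illustrative, not the article's code:)

    #include <cmath>

    // Cg-style asin: pi/2 - sqrt(1 - |x|) * cubic, sign reapplied at the end.
    // Absolute error is on the order of 5e-5 over [-1, 1].
    float asin_cg_sketch(float x) {
        float a = std::fabs(x);
        float p = 1.5707288f + a * (-0.2121144f
                + a * (0.0742610f + a * -0.0187293f));
        float r = 1.5707963f - std::sqrt(1.0f - a) * p;
        return std::copysign(r, x);
    }
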
adampunk•19h ago
Yeah, Ng's work in fdlibm is cool and really clever in parts, but there's a lot of branching. Some of the ways they reach correct rounding are…so cool.
drsopp•1d ago
Did some quick calculations, and at this precision, it seems a table lookup might be able to fit in the L1 cache depending on the CPU model.
Pannoniae•1d ago
Microbenchmarks. A LUT will win many of them but you pessimise the rest of the code. So unless a significant (read: 20+%) portion of your code goes into the LUT, there isn't that much point to bother. For almost any pure calculation without I/O, it's better to do the arithmetic than to do memory access.
jcalvinowens•23h ago
Locality within the LUT matters too: if you know you're looking up identical or nearby-enough values to benefit from caching, an LUT can be more of a win. You only pay the cache cost for the portion you actually touch at runtime.

I could imagine some graphics workloads tend to compute asin() repeatedly with nearby input values. But I'd guess the locality isn't local enough to matter; only eight double-precision floats fit in a cache line.

hrmtst93837•5h ago
Cache size and replacement policies can ruin even a well-tuned LUT once your working set grows or other threads spray cache lines, so "just use a LUT" quietly turns into "debug the perf cliff" later. If the perf gain disappears under load or with real input sets, you realise too late it was just a best-case microbenchmark trick.
groundzeros2015•1d ago
I don’t want to fill up L1 for sin.
jcalvinowens•23h ago
Surely the loss in precision of a 32KB LUT for double precision asin() would be unacceptable?
Jyaif•23h ago
By interpolating between values you can get excellent results with LUTs much smaller than 32KB. Will it be faster than the computation from op, that I don't know.
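
(For concreteness, a minimal sketch of an interpolated LUT, assuming uniform spacing -- note that asin's slope blows up near ±1, exactly where uniform spacing is at its worst:)

    #include <cmath>
    #include <vector>

    // N uniformly spaced samples of asin over [-1, 1], with linear
    // interpolation between neighbors. Illustrative, not benchmark-tuned.
    struct AsinLut {
        std::vector<float> t;
        explicit AsinLut(int n) : t(n) {
            for (int i = 0; i < n; ++i)
                t[i] = std::asin(-1.0f + 2.0f * i / (n - 1));
        }
        float operator()(float x) const {
            float u = (x + 1.0f) * 0.5f * (t.size() - 1);  // table coordinate
            int i = (int)u;
            if (i >= (int)t.size() - 1) return t.back();
            return t[i] + (u - i) * (t[i + 1] - t[i]);     // lerp
        }
    };
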
jcalvinowens•23h ago
I'm very skeptical you wouldn't get perceptible visual artifacts if you rounded the trig functions to 4096 linear approximations. But I'd be happy to be proven wrong :)
drsopp•22h ago
I experimented a bit with the code. Various tables with different datatypes. There is enough noise from the Monte Carlo to not make a difference if you use smaller data types than double or float. Even dropping interpolation worked fine, and got the speed to be on par with the best in the article, but not faster.
jcalvinowens•22h ago
Does your benchmark use sequential or randomly ordered inputs? That would make a substantial difference with an LUT, I would think. But I'm guessing. Maybe 32K is so small it doesn't matter (if almost all of the LUT sits in the cache and is never displaced).

> if you use smaller data types than double or float. Even dropping interpolation worked fine,

That's kinda tautological isn't it? Of course reduced precision is acceptable where reduced precision is acceptable... I guess I'm assuming double precision was used for a good reason, it often isn't :)

drsopp•21h ago
I didn't inspect the rest of the code, but I guess the table is fetched from L2 on every call? I think the L1 data cache is flooded by other stuff going on all the time.

About dropping the interpolation: Yes, you are right of course. I was thinking about the speed. No noticeable speed improvement from dropping interpolation. The asin calls are only a small fraction of everything.

scottlamb•1d ago
Isn't the faster approach SIMD [edit: or GPU]? A 1.05x to 1.90x speedup is great. A 16x speedup is better!

They could be orthogonal improvements, but if I were prioritizing, I'd go for SIMD first.

I searched for asin on Intel's intrinsics guide. They have an AVX-512 intrinsic `_mm512_asin_ps`, but it says "sequence" rather than single-instruction. Presumably the actual sequence they use is in some header file somewhere, but I don't know off-hand where to look, so I don't know how it compares to a SIMDified version of `fast_asin_cg`.

https://www.intel.com/content/www/us/en/docs/intrinsics-guid...
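
(For comparison, a hypothetical 8-wide AVX2+FMA version of the Cg-style polynomial -- a sketch of what "SIMDified" could look like, not Intel's actual sequence:)

    #include <immintrin.h>

    // Eight asin approximations per call: the Hastings cubic evaluated with
    // FMAs; the sign is split off with bit ops and reapplied at the end.
    __m256 asin_cg_avx(__m256 x) {
        const __m256 msign = _mm256_set1_ps(-0.0f);
        __m256 sign = _mm256_and_ps(x, msign);       // isolate sign bits
        __m256 a    = _mm256_andnot_ps(msign, x);    // |x|
        __m256 p    = _mm256_set1_ps(-0.0187293f);
        p = _mm256_fmadd_ps(p, a, _mm256_set1_ps(0.0742610f));
        p = _mm256_fmadd_ps(p, a, _mm256_set1_ps(-0.2121144f));
        p = _mm256_fmadd_ps(p, a, _mm256_set1_ps(1.5707288f));
        __m256 s = _mm256_sqrt_ps(_mm256_sub_ps(_mm256_set1_ps(1.0f), a));
        __m256 r = _mm256_fnmadd_ps(s, p, _mm256_set1_ps(1.5707963f)); // pi/2 - s*p
        return _mm256_or_ps(r, sign);                // reapply sign
    }
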

TimorousBestie•1d ago
I don’t know much about raytracing but it’s probably tricky to orchestrate all those asin calls so that the input and output memory is aligned and contiguous. My uneducated intuition is that there’s little regularity as to which pixels will take which branches and will end up requiring which asin calls, but I might be wrong.
scottlamb•23h ago
I'd expect it to come down to data-oriented design: SoA (structure of arrays) rather than AoS (array of structures).

I skimmed the author's source code, and this is where I'd start: https://github.com/define-private-public/PSRayTracing/blob/8...

Instead of an `_objects`, I might try for a `_spheres`, `_boxes`, etc. (Or just `_lists` still using the virtual dispatch but for each list, rather than each object.) The `asin` seems to be used just for spheres. Within my `Spheres::closest_hit` (note plural), I'd work to SIMDify it. (I'd try to SIMDify the others too of course but apparently not with `asin`.) I think it's doable: https://github.com/define-private-public/PSRayTracing/blob/8...

I don't know much about ray tracers either (having only written a super-naive one back in college) but this is the general technique used to speed up games, I believe. Besides enabling SIMD, it's more cache-efficient and minimizes dispatch overhead.

edit: there's also stuff that you can hoist in this impl. Restructuring as SoA isn't strictly necessary to do that, but it might make it more obvious and natural. As an example, this `ray_dir.length_squared()` is the same for the whole list. You'd notice that when iterating over the spheres. https://github.com/define-private-public/PSRayTracing/blob/8...
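
(A minimal sketch of the SoA layout being suggested; `Spheres` and its fields are hypothetical names, not from the PSRayTracing source:)

    #include <vector>

    // AoS: each sphere is a separate object behind virtual dispatch.
    // SoA: one array per component, so a closest-hit loop over spheres
    // becomes a straight-line loop the compiler (or explicit SIMD) can
    // vectorize, and only the data actually needed streams through cache.
    struct Spheres {
        std::vector<float> cx, cy, cz;  // centers, one array per coordinate
        std::vector<float> radius_sq;   // radius^2, precomputed
    };
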

TimorousBestie•23h ago
This tracks with my experience and seems reasonable, yes. I tend to SoA all the things, sometimes to my coworkers’ amusement/annoyance.
def-pri-pub•21h ago
When I was working on this project, I was trying to restrict myself to the architecture of the original Ray Tracing in One Weekend book series. I am aware that things are not as SIMD friendly and that becomes a major bottleneck. While I am confident that an architectural change could yield a massive performance boost, it's something I don't want to spend my time on.

I think it's also more fun sometimes to take existing systems and to try to optimize them given whatever constraints exist. I've had to do that a lot in my day job already.

scottlamb•18h ago
I can relate to setting an arbitrary challenge for myself. fwiw, don't know where you draw the line of an architectural change, but I think that switching AoS -> SoA may actually be an approachably-sized mechanical refactor, and then taking advantage of it to SIMDify object lists can be done incrementally.

The value of course is contingent on there being a decent number of objects of a given type in the list rather than just a huge number of rays being sent to a small number of objects; I didn't evaluate that. If it's the other way around, the structure would be better flipped, and I don't know how reasonable that is with bounces (that maybe then aren't all being evaluated against the same objects?).

pixelesque•5h ago
It comes down to how "coherent" the rays are, and how much effort (compute) you want to put into sorting them into batches of rays.

With "primary" ray-tracing (i.e. camera rays, rays from surfaces to area lights), it's quite easy to batch them up and run SIMD operations on them.

But once you start doing global illumination, with rays bouncing off surfaces in all directions (and with complex materials, with multiple BSDF lobes, where lobes can be chosen stochastically), you start having to put a LOT of effort into sorting and batching rays such that they all (within a batch) hit the same objects or are going in roughly the same direction.

Am4TIfIsER0ppos•23h ago
I don't do much float work, but I don't think there is a single regular sine instruction, only the old x87 float-stack ones.

I was curious what "sequence" would end up being but my compiler is too old for that intrinsic. Even godbolt didn't help for gcc or clang but it did reveal that icc produced a call https://godbolt.org/z/a3EsKK4aY

nitwit005•20h ago
If you click libraries on godbolt, it's pulling in a bunch, including multiple SIMD libraries. You might have to fiddle with the libraries or build locally.
AlotOfReading•1d ago
I'm pretty sure it's not faster, but it was fun to write:

    #include <bit>      // std::bit_cast (C++20)
    #include <cmath>    // fabsf, copysignf
    #include <cstdint>

    float asin(float x) {
      constexpr float pi = 3.14159265f;
      float x2 = 1.0f - fabsf(x);
      uint32_t i = std::bit_cast<uint32_t>(x2);
      i = 0x5f3759df - (i >> 1);                  // fast inverse sqrt magic
      float inv = std::bit_cast<float>(i);        // inv ~= 1/sqrt(x2)
      return copysignf(pi/2 - pi/2*(x2*inv), x);  // x2*inv ~= sqrt(1-|x|)
    }
Courtesy of evil floating point bithacks.
def-pri-pub•1d ago
> floating point bithacks

The forbidden magic

chuckadams•1d ago
You brought Zalgo. I blame this decade on you.
moffkalast•1d ago
> float asinine(float x) {

FTFY :P

adampunk•23h ago
// what the fuck
jacquesm•22h ago
That could do with some subtitles.
irishcoffee•22h ago
https://en.wikipedia.org/wiki/Fast_inverse_square_root
teo_zero•9h ago
The bad thing about this method is that it's slower than native CPU instructions. The good thing is that the result is very precise for at least 2 values of x, namely 1.0 and -1.0

JK

LegionMammal978•1d ago
In general, I find that minimax approximation is an underappreciated tool, especially the quite simple Remez algorithm to generate an optimal polynomial approximation [0]. With some modifications, you can adapt it to optimize for either absolute or relative error within an interval, or even come up with rational-function approximations. (Though unfortunately, many presentations of the algorithm use overly-simple forms of sample point selection that can break down on nontrivial input curves, especially if they contain small oscillations.)

[0] https://en.wikipedia.org/wiki/Remez_algorithm
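(For concreteness, the core of one Remez exchange step in its standard textbook form -- not tied to any particular implementation:)

    % Given references x_0 < x_1 < ... < x_{n+1} in the interval, solve the
    % linear system below for the n+1 coefficients a_j and the levelled
    % error E, then move each x_i to a nearby local extremum of f - p and
    % repeat until the extrema equioscillate:
    \sum_{j=0}^{n} a_j x_i^{\,j} + (-1)^i E = f(x_i), \qquad i = 0, \dots, n+1
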

jason_s•23h ago
Not sure I would call Remez "simple"... it's all relative; I prefer Chebyshev approximation which is simpler than Remez.
stephencanon•23h ago
Ideally either one is just a library call to generate the coefficients. Remez can get into trouble near the endpoints of the interval for asin and require a little bit of manual intervention, however.
LegionMammal978•23h ago
Perhaps, but at least I find it very simple for the optimality properties it gives: there is no inherent need to say, "I know that better parameters likely exist, but the algorithm to find them would be hopelessly expensive," as is the case in many forms of minimax optimization.
herf•23h ago
They teach a lot of Taylor/Maclaurin series in math classes (and trig functions are sometimes computed with CORDIC, which is an old method too), but these are not used much in actual FPUs and libraries. Maybe we should update the curricula so people know better ways.
bee_rider•21h ago
Taylor series makes a lot more sense in a math class, right? It is straightforward, and (just for example) when you are thinking about whether or not a series converges in the limit, why care about the quality of the approximation after a set number of steps?
davrosthedalek•3h ago
Taylor series have quite different convergence behavior than a general polynomial approximation, or a polynomial fit for that matter. Many papers have been written that confuse this.

For example, 1/(x+2) has a pole at x=-2. The Taylor series around 0 will thus not converge for |x|>2. A polynomial approximation for, say, a range 0<x<L will converge for all L.
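
(A quick numeric illustration of that pole: at x = 3 the partial sums get worse with more terms, not better:)

    #include <cstdio>

    // Taylor series of 1/(x+2) about 0: (1/2) * sum_n (-x/2)^n, radius of
    // convergence 2. At x = 3 each term grows by a factor of 1.5.
    int main() {
        double x = 3.0, term = 0.5, sum = 0.0;
        for (int n = 0; n < 40; ++n) {
            sum += term;        // partial sum through degree n
            term *= -x / 2.0;   // next Taylor term
        }
        std::printf("partial sum: %g   true value: %g\n", sum, 1.0 / (x + 2.0));
    }
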

xyzzyz•3h ago
Not quite. The point of Taylor’s theorem is that the n-th degree Taylor polynomial around a is the best n-th degree polynomial approximation around a. However, it doesn’t say anything about how good of an approximation it is further away from the point a. In fact, in math, when you use Taylor approximation, you don’t usually care about the infinite Taylor series, only the finite component.
stephc_int13•1d ago
My favorite tool to experiment with math approximation is lolremez. And you can easily ask your LLM to do it for you.
glitchc•1d ago
The 4% improvement doesn't seem like it's worth the effort.

On a general note, instructions like division and square root are roughly equal to trig functions in cycle count on modern CPUs. So, replacing one with the other will not confer much benefit, as evidenced from the results. They're all typically implemented using LUTs, and it's hard to beat the performance of an optimized LUT, which is basically a multiplexer connected to some dedicated memory cells in hardware.

kstrauser•23h ago
> The 4% improvement doesn't seem like it's worth the effort.

People have gotten PhDs for smaller optimizations. I know. I've worked with them.

> instructions like division and square root are roughly equal to trig functions in cycle count on modern CPUs.

What's the x86-64 opcode for arcsin?

adrian_b•20h ago
Presumably the poster meant polynomial approximations of trigonometric functions, not instructions for trigonometric functions, which are missing in most CPUs, though many GPUs have such instructions.

x86-64 had instructions for the exponential and logarithmic functions in Xeon Phi, but those instructions have been removed in Skylake Server and the later Intel or AMD CPUs with AVX-512 support.

However, no instructions for trigonometric functions have been added after the Intel 80387, and those of the 8087 and 80387 are deprecated.

glitchc•20h ago
> What's the x86-64 opcode for arcsin?

Not required. ATAN and SQRTS(S|D) are sufficient; the half-angle approach in the article is the recommended way.

> People have gotten PhDs for smaller optimizations. I know. I've worked with them.

I understand the can, not sure about the should. Not trying to be snarky, we just seem to be producing PhDs with the slimmest of justifications. The bar needs to be higher.

kstrauser•19h ago
> I understand the can, not sure about the should. Not trying to be snarky, we just seem to be producing PhDs with the slimmest of justifications. The bar needs to be higher.

I couldn't disagree more. Sure, making a 4% faster asin isn't going to change the world, but if it makes all callers a teensy bit faster, multiplied by the number of callers using it, then it adds up. Imagine the savings for a hyperscaler if they managed to make a more common instruction 4% faster.

charcircuit•22h ago
The effort of typing about 10 words into an LLM is minimal.
tverbeure•22h ago
> The 4% improvement doesn't seem like it's worth the effort.

I've spent the past few months improving the performance of some work thing by ~8% and the fun I've been having reminds me of the nineties, when I tried to squeeze every last % of performance out of the 3D graphics engine that I wrote as a hobby.

def-pri-pub•21h ago
You'd be surprised how it actually is worth the effort, even just a 1% improvement. If you have the time, this is a great talk to listen to: https://www.youtube.com/watch?v=kPR8h4-qZdk

For a little toy ray tracer, it is pretty measly. But for a larger corporation (with a professional project) a 4% speed improvement can mean MASSIVE cost savings.

Some of these tiny improvements can also have a cascading effect. Imagine finding a +4%, a +2% somewhere else, +3% in neighboring code, and a bunch of +1%s here and there. Eventually you'll have built up something that is 15-20% faster. Down the road you'll come across those optimizations which can yield the big results too (e.g. the +25%).

glitchc•20h ago
It's a cool talk, but the relevance to the present problem escapes me.

If you're alluding to gcc vs fbstring's performance (circa 15:43), then the performance improvement is not because fbstring is faster/simpler, but due to a foundational gcc design decision to always use the heap for string variables. Also, at around 16:40, the speaker concedes that gcc's simpler size() implementation runs significantly faster (3x faster at 0.3 ns) when the test conditions are different.

jason_s•23h ago
While I'm glad to see the OP got a good minimax solution at the end, it seems like the article missed clarifying one of the key points: error waveforms over a specified interval are critical, and if you don't see the characteristic minimax-like wiggle, you're wasting an easy opportunity for improvement.

Taylor series in general are a poor choice, and Pade approximants of Taylor series are equally poor. If you're going to use Pade approximants, they should be of the original function.

I prefer Chebyshev approximation: https://www.embeddedrelated.com/showarticle/152.php which is often close enough to the more complicated Remez algorithm.

ogogmad•20h ago
Chebyshev polynomials cos(n arccos(x)) provide one of the proofs that every continuous function f:[0,1]->R can be uniformly approximated by polynomial functions. Bernstein polynomials provide a shorter proof, but perhaps not the best numerical method: https://en.wikipedia.org/wiki/Bernstein_polynomial#See_also
AlotOfReading•15h ago
Those don't guarantee that they can be well approximated by a polynomial of degree N though, like we have here. You can apply Jackson's inequality to calculate a maximum error bound, but the epsilon for degree 5 is pretty atrocious.
rockmeamedee•1h ago
I had no idea, but this "wiggle" is required for an optimal approximation; it's called the "equioscillation property" [https://en.wikipedia.org/wiki/Equioscillation_theorem].

For a polynomial P (of degree n) to approximate a function F on an interval with minimal absolute error, the max error value of |P - F| needs to be hit multiple times (n+2 times, to be precise). The polynomial has to "wiggle" back and forth between the top of the error bound and the bottom.

And even more surprisingly, this is a necessary _and sufficient_ condition for optimality! If you find a polynomial whose error alternates and hits its max error bound n+2 times, you know that no other polynomial of degree n can do better; that is the best error bound you can get for degree n.

Very cool!
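
(Stated compactly, for reference -- the standard form of the theorem:)

    % Equioscillation theorem: p* of degree <= n is the unique best uniform
    % approximation to a continuous f on [a,b] iff there are n+2 points
    % a <= x_0 < x_1 < ... < x_{n+1} <= b at which the error attains its
    % maximum with alternating sign:
    (f - p^*)(x_i) = \sigma (-1)^i \, \| f - p^* \|_\infty, \qquad \sigma \in \{-1, +1\}
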

exmadscientist•23h ago
This line:

> This amazing snippet of code was languishing in the docs of dead software, which in turn the original formula was scrawled away in a math textbook from the 60s.

was kind of telling for me. I have some background in this sort of work, and long ago concluded that there is pretty much nothing you can do to improve on existing code unless you have some new specific hardware or domain constraint, you're just looking for something quick-n-dirty for whatever reason, or you're willing to invest research-paper levels of time and effort. So to think that someone would call Abramowitz and Stegun "a math textbook from the 60s" is kind of funny. It's got a similar level of importance to its field as Knuth's Art of Computer Programming or stuff like that. It's not an obscure text. Yeah, you might forget what all is in it if you don't use it often, but you'd go "oh, of course that would be in there, wouldn't it...."

wolfi1•23h ago
Abramowitz/Stegun was updated in 2010 and now resides here: https://dlmf.nist.gov/
rerdavies•16h ago
Doesn't seem to be terribly up to date, though. It seems to use almost exclusively Taylor series, and seems to be completely uninterested in error analysis of any kind. Unless I'm missing something.
exmadscientist•8h ago
It's a general-purpose reference for mathematicians, not specifically for numerical analysis. Mathematicians are usually interested in the boring old power series centered at zero (Maclaurin series), so that's what gets prominence.
def-pri-pub•22h ago
These are books that my uni courses never had me read. I'm a little shocked at times at how my degree program skimped on some of the more famous texts.
neutronicus•20h ago
It is not a textbook; it is an extremely dense reference manual, so that honestly makes sense.

In physics grad school, professors would occasionally allude to it, and textbooks would cite it ... pretty often. So it's a thing anyone with postgraduate physics education should know exists, but you wouldn't ever be assigned it.

rendaw•5h ago
Presumably someone read it though, at some point, in order to be able to cite it.
neutronicus•4h ago
The relevant sections, at any rate
eesmith•1h ago
I didn't need Abramowitz and Stegun until grad school. In the 1990s. It was a well-known reference book for people at that level, not a textbook.

For my undergrad the CRC math handbook was enough.

axus•18h ago
In this case, AI understood history better than the human.
bsder•16h ago
Yeah, if you want something that's somewhat obscure, pull up Cody and Waite "Software Manual for the Elementary Functions".

And, lo and behold, the ASIN implementation is minimax.

mkbosmans•7h ago
One of the ways that the classics can be improved is not to take the analytic ideal coefficients and approximate them to the closest floating point number, but rather take those ideal coefficients as a starting point for a search of slightly better ones.

The SLEEF Vectorized Math Library [1] does this and therefore can usually provide accuracy guarantees for the whole floating point range with a polynomial order lower than theory would predict.

Its asinf function [2] is accurate to 1 ULP for all single precision floats, and is similar to the `asin_cg` from the article, with the main difference being that the sqrt is done on the input of the polynomial instead of the output.

[1] https://sleef.org/ [2] https://github.com/shibatch/sleef/blob/master/src/libm/sleef...

mkbosmans•7h ago
I'm sorry, that second reference was actually for the 3.5ULP variant. The 1 ULP is here: https://github.com/shibatch/sleef/blob/master/src/libm/sleef...
ok123456•23h ago
The Chebyshev approximation for asin is sum_n 2*T_n(x)/(pi*n*n); the even terms are 0.
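
(A sketch of evaluating such a truncated series with the Clenshaw recurrence; the 2/(pi*n^2) coefficients follow the parent comment's normalization, which I haven't verified:)

    // Clenshaw recurrence for sum_{k=1..N} c_k T_k(x), with c_k = 2/(pi*k^2)
    // for odd k and 0 for even k; c_0 = 0 since asin is odd.
    float asin_cheb(float x, int N) {
        float b1 = 0.0f, b2 = 0.0f;
        for (int k = N; k >= 1; --k) {
            float ck = (k & 1) ? 2.0f / (3.14159265f * k * k) : 0.0f;
            float b0 = ck + 2.0f * x * b1 - b2;  // b_k = c_k + 2x*b_{k+1} - b_{k+2}
            b2 = b1;
            b1 = b0;
        }
        return x * b1 - b2;  // = c_0 + x*b_1 - b_2, with c_0 = 0
    }
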
empiricus•23h ago
Does anyone know of resources on the algos used in the HW implementations of math functions? I mean the algos inside CPUs and GPUs: how they trade off transistor count, power consumption, and cycles, and which algos allow this.
groos•19h ago
Here's one way to do it.

https://en.wikipedia.org/wiki/CORDIC
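
(A minimal rotation-mode CORDIC sketch for sin/cos -- the shift-and-add style of scheme hardware can afford; illustrative only:)

    #include <cmath>

    // Rotate the vector (K, 0) toward `angle` (|angle| <= pi/2) in steps of
    // atan(2^-i); x and y converge to cos(angle) and sin(angle). Hardware
    // uses shifts and a small atan table where this sketch calls ldexp/atan.
    void cordic_sincos(double angle, double& s, double& c) {
        const int N = 32;
        double x = 0.6072529350088813;  // K = prod_i 1/sqrt(1 + 2^-2i)
        double y = 0.0, z = angle;
        for (int i = 0; i < N; ++i) {
            double d = (z >= 0.0) ? 1.0 : -1.0;
            double nx = x - d * std::ldexp(y, -i);  // y * 2^-i
            y = y + d * std::ldexp(x, -i);          // x * 2^-i
            x = nx;
            z -= d * std::atan(std::ldexp(1.0, -i));
        }
        c = x;
        s = y;
    }
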

xt00•23h ago
To be accurate, this is originally from Hastings 1955, Princeton: "Approximations for Digital Computers" by Cecil Hastings, pages 159-163; there are actually multiple versions of the approximation with different constants used. So the original work was done with the goal of being performant for computers of the 1950s. Then the famous Abramowitz and Stegun guys put that in formula 4.4.45 with permission, and then the NVIDIA Cg library wrote some code based upon the formula, likely with some optimizations.
rerdavies•15h ago
I ran this down because I have a particular interest in vectorizable function approximations, particularly those that exploit bit-banging to handle range normalization. (Anyone have a good reference for that?)

Regrettably, this is NOT from Hastings 1955. Hastings provides Taylor series and Chebyshev polynomial approximations. The OP's solution is a Pade approximation, which is not covered at all in Hastings.

xt00•14h ago
When you say "this is NOT from Hastings" I had to double check my post -- I guess you are saying that the Pade approximation is not from Hastings, which appears to be true. But the polynomial approximation that the OP referenced from NVIDIA, via A&S and ultimately from Hastings, definitely is in Hastings, on page 159. It is interesting that in the article the OP tried Taylor expansion and Pade approximation, but not the fairly standard "welp, let's just fit an Nth-order polynomial to the arcsin", which is what Hastings did back in the day.
sixo•23h ago
It appears that the real lesson here was to lean quite a bit more on theory than a programmer's usual roll-your-own heuristic would suggest.

A fantastic amount of collective human thought has been dedicated to function approximations in the last century; Taylor methods are over 200 years old and unlikely to come close to state-of-the-art.

cmovq•22h ago
> After all of the above work and that talk in mind, I decided to ask an LLM.

Impressive that an LLM managed to produce the answer from a 7-year-old Stack Overflow answer all on its own! [1] This would have been the first search result for "fast asin" before this article was published.

[1]: https://stackoverflow.com/a/26030435

def-pri-pub•21h ago
I did see that, but isn't the vast majority of that page talking about acos() instead?
seanhunter•18h ago
That's equivalent, right? acos(x) = pi/2 - asin(x)

So if you’ve got one that’s fast you have them both.
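
(In code, assuming some `fast_asin` exists -- the name is hypothetical:)

    float fast_asin(float x);  // assumed fast asin, e.g. the article's

    // Complementary-angle identity: a fast asin gives you a fast acos
    // for the cost of one subtraction (and vice versa).
    float fast_acos(float x) { return 1.5707963f - fast_asin(x); }
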

varispeed•20h ago
If you are interested in such "tricks", you should check out the classic Hacker's Delight by Henry Warren
debo_•20h ago
https://bash-org-archive.com/?427792
andyjohnson0•19h ago
Interesting article. A few years back I implemented a bunch of maths primitives, including trig functions, using Taylor series etc., to see how it was done. An interesting challenge, even at the elementary level I was working at.

So this article got me wondering how much accuracy is needed before computing a series beats pre-computed lookup tables and interpolation. Anyone got any relevant experience to share?

How much accuracy does ray tracing require?

peterabbitcook•19h ago
I am curious, did you check how much your benchmarks moved (time and errors), if at all, if you told the compiler to use --use_fast_math or -ffast-math?

There’s generally not a faster version of inverse trig functions to inline, but it might optimize some other stuff out.

Unrelated to that, I’ve seen implementations (ie julia/base/special/trig) that use a “rational approximation” to asin, did you go down that road at any point?

veltas•7h ago
Just a point that the constexpr/const use in that C++ code makes no difference to output, and is just noise really.
Skeime•6h ago
Wouldn't it also be much better to evaluate the Taylor polynomials using Horner's method, instead? (Maybe C++ can do this automatically, but given that there might be rounding differences, it probably won't.)
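
(For illustration, Horner's method turns the explicit powers into a chain of fused multiply-adds:)

    // Naive:  c[0] + c[1]*x + c[2]*x^2 + c[3]*x^3  (extra multiplies for powers)
    // Horner: c[0] + x*(c[1] + x*(c[2] + x*c[3]))  (one mul + one add per term)
    float horner(const float* c, int n, float x) {
        float r = c[n - 1];
        for (int i = n - 2; i >= 0; --i)
            r = r * x + c[i];  // each step is a candidate for FMA contraction
        return r;
    }
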
WithinReason•6h ago
> In any graphics application trigonometric functions are frequently used.

Counterpoint from the man himself, "avoiding trigonometry":

https://iquilezles.org/articles/noacos/
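
(The simplest instance of that idea -- compare cosines instead of angles; `within_angle` and its parameters are made-up names:)

    #include <cmath>

    // acos is monotone decreasing on [0, pi], so
    //   std::acos(dot_ab) < angle_limit
    // is equivalent to comparing against a precomputed cosine, with no
    // inverse trig in the hot path:
    bool within_angle(float dot_ab, float angle_limit) {
        float cos_limit = std::cos(angle_limit);  // hoist out of inner loops
        return dot_ab > cos_limit;
    }
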

djmips•4h ago
And further to that: https://fgiesen.wordpress.com/2010/10/21/finish-your-derivat...
coloneljelly•5h ago
In DSP math, it is common to use Chebyshev polynomial approximation. You can get incredibly precise results within your required bounds.
djmips•4h ago
To the blog poster:

Robin Green is an excellent resource

Faster Math Functions:

https://basesandframes.wordpress.com/wp-content/uploads/2016...

https://basesandframes.wordpress.com/wp-content/uploads/2016...

Even faster math functions GDC 2020:

https://www.gdcvault.com/play/1027337/Math-in-Game-Developme...

Ono-Sendai•3h ago
Here's my fast acos, which I think can be converted to an asin: https://forwardscattering.org/post/66