With the substitution x = 2u (so dx = 2 du):
Int[csc(x) dx] = 2 Int[csc(2u) du]
= 2 Int[du / (2 cos(u) sin(u))]
= Int[sec^2(u) du / tan(u)]
= log(tan(u)) + C
= log(tan(x/2)) + C
Then, with u = pi/2 - x (so du = -dx and sec(x) = csc(u)): Int[sec(x) dx] = -Int[csc(u) du] = -log(tan(u/2)) + C = -log(tan(pi/4 - x/2)) + C = log(tan(pi/4 + x/2)) + C.
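A quick numeric sanity check (a sketch in Python, nothing fancy): differentiate the two candidate antiderivatives with a central difference and compare against csc and sec.

```python
import math

def deriv(f, x, h=1e-6):
    # simple central-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

F_csc = lambda x: math.log(math.tan(x / 2))                # candidate for Int[csc]
F_sec = lambda x: math.log(math.tan(math.pi / 4 + x / 2))  # candidate for Int[sec]

for x in [0.3, 0.7, 1.1]:
    assert abs(deriv(F_csc, x) - 1 / math.sin(x)) < 1e-6
    assert abs(deriv(F_sec, x) - 1 / math.cos(x)) < 1e-6
print("both antiderivatives check out")
```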
Of course, this was no use to Mercator, because the logarithm hadn't been invented yet. But you aren't just pulling a magic factor out of nowhere. There is definitely a bit of cleverness in rearranging the fraction — you have to be used to trying to find instances of the power rule when dealing with integrals of fractions.
https://justine.lol/sectorlisp2/
And probably a small Forth too, with a dictionary defining every math word, something not so different from Lisp.
LLMs? 4 GB of RAM? Your grandpa's 486 with 16 MB of RAM can do calculus too.
You can do a lot of numerical maths just with a noddy spreadsheet of course.
https://en.m.wikipedia.org/wiki/PDP-10
https://en.m.wikipedia.org/wiki/Incompatible_Timesharing_Sys...
https://en.m.wikipedia.org/wiki/Macsyma
Fun fact: old Macsyma's math code still runs as-is on modern Linuxes/BSDs with Maxima. Even plots work the same, albeit in a different output format.
A 386 is far more powerful than this.
You could argue that the first useful thing electronic computers did was integration...
https://www.tandfonline.com/doi/full/10.1080/00295450.2021.1...
It's full circle. But with Lisp and lambda calculus, even an elementary school kid could understand integration, as you are literally describing the process as if it were made of Lego blocks.
Albeit it would be far easier in Forth. It's almost like telling the computer that multiplying is iterated addition, and dividing is iterated subtraction.
Floating-point numbers are done with special memory 'blocks', and you can 'teach' the computer to multiply numbers bigger than 65536 in the exact same way humans do with pen and paper.
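A sketch of that pen-and-paper idea, in Python rather than Forth for brevity: numbers are stored as lists of base-65536 "digits" (limbs, matching a 16-bit cell), and multiplication is the schoolbook algorithm with carries.

```python
BASE = 65536  # one 16-bit cell per "digit"

def to_limbs(n):
    # little-endian list of base-65536 digits
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def from_limbs(limbs):
    return sum(d * BASE**i for i, d in enumerate(limbs))

def limb_mul(a, b):
    # schoolbook long multiplication, carrying exactly as on paper
    out = [0] * (len(a) + len(b))
    for i, x in enumerate(a):
        carry = 0
        for j, y in enumerate(b):
            t = out[i + j] + x * y + carry
            out[i + j] = t % BASE
            carry = t // BASE
        out[i + len(b)] += carry
    return out

x, y = 123456789, 987654321
assert from_limbs(limb_mul(to_limbs(x), to_limbs(y))) == x * y
```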
Heck, you can implement floating point yourself by telling Forth how to do it, following the standard and defining f, f+, f/... and the output rules by hand. Slower than a Forth done in assembly? For sure; but natively, on old 80's computers, Forth was 10x faster than BASIC.
From that to calculus, it's just telling the computer *new rules*. And you don't need an LLM for that.

The inverse of cosine is arccosine (sometimes written acos or cos^{-1}). Secant is the reciprocal of cos, i.e. sec x = 1/cos(x).
Likewise cotan is the reciprocal of tan (1/tan). The inverse of tan is atan/arctan/tan^{-1}.
This is confusing for a lot of people because if you write x^{-1} that means 1/x. If you write f^{-1} and f is a function, then _generally_ it means the inverse of f. In the case of trig functions this is doubly confusing because people write sin^2 theta meaning (sin theta)^2 but sin^-1 theta means arcsin theta.
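A tiny illustration of the difference, using Python's math module (where the inverse is spelled asin, sidestepping the notation problem entirely):

```python
import math

theta = 0.5
# sin^2(theta) really does mean (sin theta)^2:
assert abs(math.sin(theta) ** 2 - math.sin(theta) * math.sin(theta)) < 1e-12

# ... but sin^{-1}(theta) means arcsin, NOT the reciprocal:
arcsin = math.asin(theta)      # the angle whose sine is 0.5 (pi/6)
cosec = 1 / math.sin(theta)    # the reciprocal, i.e. cosecant
assert abs(arcsin - cosec) > 1               # genuinely different quantities
assert abs(math.sin(arcsin) - theta) < 1e-12 # inverse: sin(asin x) == x
```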
That's why in my maths studies they started by teaching the inverse with a ^{-1}, so that when you see it you don't get confused, but then switched to preferring arcsin etc., as this is unambiguous, and if you learn to write this way you won't confuse others.
Inverse function: https://en.wikipedia.org/wiki/Inverse_function / https://fr.wikipedia.org/wiki/Bijection_r%C3%A9ciproque
Reciprocal: https://en.wikipedia.org/wiki/Multiplicative_inverse / https://fr.wikipedia.org/wiki/Inverse
Wikipedia seems to have chosen "multiplicative inverse" over "reciprocal" for title, even though they are clearly indicated as synonymous.
For example, "versine":
versin theta = 1 - cos theta.
There is also "haversine", which is (1 - cos theta)/2, and is apparently used in navigation: https://en.wikipedia.org/wiki/Versine
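For the navigation use, a hedged sketch of the classic haversine great-circle distance formula (the city coordinates below are just illustrative):

```python
import math

def hav(theta):
    # haversine: hav(theta) = (1 - cos theta) / 2
    return (1 - math.cos(theta)) / 2

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    # great-circle distance on a sphere of radius R (km), via the
    # haversine formula used in navigation
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    h = hav(lat2 - lat1) + math.cos(lat1) * math.cos(lat2) * hav(lon2 - lon1)
    return 2 * R * math.asin(math.sqrt(h))

# London to Paris, roughly 340 km as the crow flies
d = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
print(f"{d:.1f} km")
```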
That is r versin theta (i.e. r - r cos theta). Pretty cool, no? I mean, I've literally never had to find the length of that line, but that's how you would if you wanted to.
I think we used it in geometry in US high school, but only to complete an assignment or two to show we could use trig functions correctly. I had to relearn how they all worked to help my kid with homework; it's mostly "look at the angles and sides you have available and pick which trig function you need for the one you're solving for". I'm sure there are real-life uses for trig functions, and I hate to be one of those "when are we ever going to use this" types, but I've never used any of them outside of math classes.
(quick search, didn't find the old ones, but similar to these)
https://mathematicaldaily.weebly.com/secant-cosecant-cotange...
https://www.pinterest.com/pin/enter-image-description-here--...
... which were not used in my education, but whenever I saw them I wished they had been: they lay out a geometric interpretation of all of them. By "old" I mean "look like Leonardo drew them".
Personally, I thought they were nice to have, because coming up with the integral of 1/cos on the fly is pretty brutal in the middle of a long integral.
The fact that on many maps Europe is much smaller than it appears should just make you all the more impressed by its achievements.
And the author talks as if the logarithm was invented long after integration.
Basically our sir told us to multiply and divide by sec + tan and observe that it becomes something like Int[f(x)^(-1) f'(x) dx], and if we let f(x) = t then f'(x) dx becomes dt. Actually we can also prove the latter (I had to look at my notes because I haven't revised them yet), but it's basically: f(x) = t,
so f'(x) = dt/dx, so f'(x) dx = dt. Then we get
Int[f(x)^n f'(x) dx] = Int[t^n dt] (where t = f(x)); for n = -1 that is Int[t^-1 dt], so we get ln(t), and this t, or f(x), was actually sec x + tan x, so it's ln(sec x + tan x). In fact, by doing some cool trigonometry, we can write this as ln(tan(pi/4 + x/2)) + c.
Also, the integral of cosec x is ln(tan(x/2)) + c.
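The "cool trigonometry" step is the identity sec x + tan x = tan(pi/4 + x/2), so the two forms of the antiderivative really are the same function. A quick numeric check:

```python
import math

# Verify sec x + tan x == tan(pi/4 + x/2) at a few sample points,
# so ln(sec x + tan x) and ln(tan(pi/4 + x/2)) agree identically.
for x in [0.1, 0.5, 1.0, 1.3]:
    lhs = 1 / math.cos(x) + math.tan(x)
    rhs = math.tan(math.pi / 4 + x / 2)
    assert abs(lhs - rhs) < 1e-9
print("identity holds")
```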
I haven't read the article but damn, HN, this feels way too specific for me LOL.
And in fact our sir himself told us he would've let us do this too if we were in the normal batches (we are in a slightly higher batch, but most students are still normal, and it was easy to digest, to be honest). Except, while writing the previous comment, I actually found that our sir had complicated the step f'(x) = df(x)/dx by having us assume f(x) as t and so on. Maybe treating f(x) as its own variable like t makes it easier to understand, but it actually confused me a little when I was writing that comment. Still, nothing too hard.
I actually want to ask here, because I was too afraid to ask sir: is there a surefire way to solve any integral? Like, can computers solve any integral?
Numerically, sure (i.e. definite integrals can be evaluated for given values).
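For instance, a plain Simpson's-rule sketch evaluating the definite integral of sec from 0 to 1 and comparing it with the closed form discussed above:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

num = simpson(lambda x: 1 / math.cos(x), 0, 1)
exact = math.log(math.tan(math.pi / 4 + 0.5))  # ln tan(pi/4 + x/2) at x=1, minus 0 at x=0
assert abs(num - exact) < 1e-9
print(f"numeric {num:.12f} vs exact {exact:.12f}")
```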
(The solution is both possible and proven, and there is a goddamned YouTube video about the trick, and it's not a minor trick either, like the proofs of int(sec(x)) or int(1/x).)
In my textbook, and in current textbooks, it is said that it cannot be solved by elementary means (and it cannot), but it can be solved, and proven, by one whopper of an idea.
The research is left as an exercise.
ziofill•9mo ago
dataflow•9mo ago
mppm•9mo ago
If I understand correctly, the Hermite functions are the eigenfunctions of the Fourier Transform and thus all have this property -- with the Gaussian being a special case. But sech(x) is doubly interesting because it is not a Hermite function, though it can be represented as an infinite series thereof. Are there other well-behaved examples of this, or is sech(x) unique in that regard?
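A numeric illustration of the sech claim (a sketch, not a proof): under the convention F(k) = Int[f(x) e^{-2 pi i k x} dx], sech(pi x) is its own Fourier transform, just like the Gaussian e^{-pi x^2}. The integrand is even, so the transform reduces to a cosine integral we can truncate and do with Simpson's rule:

```python
import math

def sech(x):
    return 1 / math.cosh(x)

def ft_at(k, L=20.0, n=20000):
    # F(k) for f(x) = sech(pi*x), truncated to [-L, L];
    # sech decays like e^{-pi|x|}, so the tail beyond 20 is negligible
    h = 2 * L / n
    f = lambda x: sech(math.pi * x) * math.cos(2 * math.pi * k * x)
    s = f(-L) + f(L)
    s += 4 * sum(f(-L + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(-L + i * h) for i in range(2, n, 2))
    return s * h / 3

# self-transform: F(k) should equal sech(pi*k)
for k in [0.0, 0.5, 1.0]:
    assert abs(ft_at(k) - sech(math.pi * k)) < 1e-6
print("sech(pi x) is (numerically) its own Fourier transform")
```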
fxj•9mo ago
https://en.wikipedia.org/wiki/Dirac_comb
and for others:
http://www.systems.caltech.edu/dsp/ppv/papers/journal08post/...
abetusk•9mo ago
So, basically, the eigenfunctions of the Fourier transform are Hermite polynomials times a Gaussian [0] [1].
[0] https://math.stackexchange.com/questions/728670/functions-th...
[1] https://en.wikipedia.org/wiki/Hermite_polynomials#Hermite_fu...
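A quick check of the eigenfunction property for n = 1 (where the eigenvalue should be (-i)^1 = -i), under the unitary convention F(k) = (1/sqrt(2 pi)) Int[f(x) e^{-ikx} dx], again by truncating and using Simpson's rule:

```python
import cmath
import math

def ft_at(f, k, L=15.0, n=10000):
    # unitary Fourier transform of f at k, truncated to [-L, L]
    h = 2 * L / n
    g = lambda x: f(x) * cmath.exp(-1j * k * x)
    s = g(-L) + g(L)
    s += 4 * sum(g(-L + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(-L + i * h) for i in range(2, n, 2))
    return s * h / 3 / math.sqrt(2 * math.pi)

# first Hermite function (up to normalization): h1(x) = x * e^{-x^2/2}
h1 = lambda x: x * math.exp(-x * x / 2)

# F[h1] should be (-i) * h1, the n=1 eigenvalue
for k in [0.5, 1.0, 2.0]:
    assert abs(ft_at(h1, k) - (-1j) * h1(k)) < 1e-7
print("h1 is an eigenfunction with eigenvalue -i")
```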
perihelions•9mo ago
perihelions•9mo ago
kkylin•9mo ago
JoshTriplett•9mo ago
Makes sense, given that the definition of e goes hand in hand with the property that e^x is its own integral and derivative.