Hey,
I looked at the disassembly of some simple C++ programs (where, for example, the sin or sqrt function is applied to user input) and traced the .exes in OllyDbg 2. The calls get directed to the MSVCP100 dll, where there is some logic I don't fully understand (I guess it's checking some status settings from a register), and then the computation is done. For the sin function it was kind of weird: the computation of sin(x) wasn't traceable to a single instruction. I guess some kind of polynomial is used to approximate the sine, as I only saw multiplies and adds. The call to MSVCP's sqrt eventually led to an fsqrt (the FPU opcode), so I guess that is what is actually used, as opposed to the fsin instruction.
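For reference, the multiply-and-add pattern would match something like this sketch (plain truncated Taylor coefficients in Horner form; the actual constants MSVCP100 uses are certainly different):

```cpp
// A sketch of the kind of polynomial a library might use: sin(x) via a
// truncated Taylor series, evaluated in Horner form so it compiles down
// to nothing but multiplies and adds.
#include <cstdio>

double sin_poly(double x) {
    double x2 = x * x;
    // Taylor coefficients: 1, -1/3!, 1/5!, -1/7!, 1/9!
    return x * (1.0 + x2 * (-1.0/6.0 + x2 * (1.0/120.0
               + x2 * (-1.0/5040.0 + x2 * (1.0/362880.0)))));
}

int main() {
    std::printf("%f\n", sin_poly(0.5)); // ~0.479426, accurate for small |x|
}
```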
Why are the floating point functions implemented in such a weird way? Why aren't the FPU opcodes the standard way? And even if there is a reason to not use the FPU, wouldn't it be better to let the OS provide the floating point functions (as you wouldn't need to do a lot of checks that way)?
What's up with the floating point functions?
Re: What's up with the floating point functions?
kutkloon7 wrote: Why aren't the FPU opcodes the standard way?
No idea. For a language library, using the floating-point hardware is probably the best way for most users.

kutkloon7 wrote: wouldn't it be better to let the OS provide the floating point functions (as you wouldn't need to do a lot of checks that way)?
Assume we don't have a suitable hardware implementation of sin, so you have to implement it yourself. In Computer Approximations by Hart et al., you'll find 79 different ways to compute sin(x). The one you choose depends on your needs with regard to precision and computation latency. If you've got the resources you could even design your own approximation. Now, which one is your OS going to implement? It's probably easier to leave it to the language implementors.
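To make the precision/latency trade-off concrete, here's a rough sketch comparing two truncated series of different degree (plain Taylor truncations, not the tuned fits from Hart et al.):

```cpp
// Two approximations of sin(x) on [-pi/4, pi/4]: a short one (fewer
// multiplies, less precise) and a longer one. Measuring max error
// against the library shows what you buy with the extra terms.
#include <cmath>
#include <cstdio>

double sin_deg5(double x) { // degree 5: cheap
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0/6.0 + x2 * (1.0/120.0)));
}

double sin_deg9(double x) { // degree 9: two more terms, far more precise
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0/6.0 + x2 * (1.0/120.0
               + x2 * (-1.0/5040.0 + x2 * (1.0/362880.0)))));
}

int main() {
    double max5 = 0.0, max9 = 0.0;
    for (double x = -0.785398; x <= 0.785398; x += 1e-4) {
        max5 = std::fmax(max5, std::fabs(sin_deg5(x) - std::sin(x)));
        max9 = std::fmax(max9, std::fabs(sin_deg9(x) - std::sin(x)));
    }
    std::printf("max error, degree 5: %g\n", max5); // on the order of 1e-5
    std::printf("max error, degree 9: %g\n", max9); // on the order of 1e-9
}
```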
Every universe of discourse has its logical structure --- S. K. Langer.
- mathematician
- Member
- Posts: 437
- Joined: Fri Dec 15, 2006 5:26 pm
- Location: Church Stretton Uk
Re: What's up with the floating point functions?
Assuming that your calculus is up to scratch, you might want to google Maclaurin's Series.
That is probably how trig functions would be calculated, even if they were implemented in hardware.
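For the curious, a naive sketch of that idea in C++ (summing Maclaurin terms until they stop mattering; real libraries reduce the argument range first and use fixed polynomials instead of a loop):

```cpp
// sin(x) = x - x^3/3! + x^5/5! - ... summed term by term, where each
// term is derived from the previous one to avoid computing factorials.
#include <cmath>
#include <cstdio>

double sin_maclaurin(double x) {
    double term = x;   // first term: x^1/1!
    double sum  = x;
    for (int k = 1; std::fabs(term) > 1e-17 * std::fabs(sum); ++k) {
        // next term = previous * -x^2 / ((2k) * (2k+1))
        term *= -x * x / ((2.0 * k) * (2.0 * k + 1.0));
        sum  += term;
    }
    return sum;
}

int main() {
    std::printf("%.17g\n", sin_maclaurin(1.0)); // 0.8414709848078965...
    std::printf("%.17g\n", std::sin(1.0));      // library reference
}
```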
The continuous image of a connected set is connected.
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
Re: What's up with the floating point functions?
mathematician wrote: Assuming that your calculus is up to scratch, you might want to google Maclaurin's Series. That is probably how trig functions would be calculated, even if they were implemented in hardware.
The traditional method (which the 8087 used, and which your calculator probably uses) is CORDIC. These days, expect a polynomial approximation or a lookup table with interpolation, followed by a few iterations of refinement.
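A minimal sketch of rotation-mode CORDIC in C++, using doubles for readability (real hardware would use fixed-point and actual bit shifts, and fold the gain K into the starting value):

```cpp
// CORDIC, rotation mode: rotate the vector (1, 0) towards angle theta
// by successively smaller angles atan(2^-i), adding or subtracting
// depending on the residual angle z. After n steps, (x, y) is
// K * (cos(theta), sin(theta)) where K is the accumulated gain.
#include <cmath>
#include <cstdio>

void cordic(double theta, int n, double& s, double& c) {
    double x = 1.0, y = 0.0, z = theta, pow2 = 1.0, K = 1.0;
    for (int i = 0; i < n; ++i) {
        double xi = x, d = (z >= 0.0) ? 1.0 : -1.0;
        x -= d * y * pow2;                  // rotate by +/- atan(2^-i)
        y += d * xi * pow2;
        z -= d * std::atan(pow2);           // track remaining angle
        K *= std::sqrt(1.0 + pow2 * pow2);  // accumulated gain
        pow2 *= 0.5;
    }
    c = x / K;  // cos(theta)
    s = y / K;  // sin(theta)
}

int main() {
    double s, c;
    cordic(1.0, 40, s, c);  // converges for |theta| up to about 1.74 rad
    std::printf("sin: %.12f vs %.12f\n", s, std::sin(1.0));
    std::printf("cos: %.12f vs %.12f\n", c, std::cos(1.0));
}
```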
Re: What's up with the floating point functions?
Hi,

kutkloon7 wrote: Why are the floating point functions implemented in such a weird way? Why aren't the FPU opcodes the standard way? And even if there is a reason to not use the FPU, wouldn't it be better to let the OS provide the floating point functions (as you wouldn't need to do a lot of checks that way)?
As far as I can tell, the problem is precision.
The first step of computing "sin(x)" is to find "x % (2 * PI)". On the FPU, both PI itself and the result of "x % (2 * PI)" can't be more precise than an 80-bit floating point register, which means that (especially for large values of 'x') the result of "x % (2 * PI)" isn't as precise, and therefore the result of "sin(x % (2 * PI))" isn't as precise.
When doing it in software, you can use as much precision as you like for "x % (2 * PI)" (and also for "sin(x % (2 * PI))"). Basically, you're not limited to the precision of 80-bit floating point for the intermediate steps, so the final "80-bit precision" result is more precise.
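A quick demo of that precision loss (assuming the library's sin() performs its argument reduction with extra internal precision, which mainstream C runtimes do):

```cpp
// Reducing a large x with a rounded 2*PI destroys the result: the
// rounding error in two_pi (~2.4e-16) is scaled by roughly x / (2*PI)
// periods, i.e. about 0.04 radians of phase error at x = 1e15.
#include <cmath>
#include <cstdio>

int main() {
    const double two_pi = 6.283185307179586;       // 2*PI rounded to double
    double x = 1.0e15;
    double naive = std::sin(std::fmod(x, two_pi)); // reduce, then sin
    double good  = std::sin(x);                    // library's own reduction
    std::printf("naive reduction: %.17g\n", naive);
    std::printf("std::sin       : %.17g\n", good);
    // The two disagree already in the second decimal place.
}
```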
For performance (instead of precision) your compiler might have some sort of "fast math" option (e.g. "-ffast-math" for GCC). If this is enabled I'd expect it to generate an "fsin" instruction with no library call at all.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- mathematician
- Member
- Posts: 437
- Joined: Fri Dec 15, 2006 5:26 pm
- Location: Church Stretton Uk
Re: What's up with the floating point functions?
Let x be in radians (radians = degrees * π / 180). Then, as near as damn it,
cos(x) = 1 - x^2/2 + x^4/24 - x^6/720 [ + x^8/40320 - x^10/3628800 and so on ]
There is a similar series for sin(x), but it doesn't converge as fast.
Before the invention of computers, that had to be done by hand. Which is why it took twenty years to calculate the first table of logarithms.
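That truncated series, written out in Horner form and checked against the library's cos() (just an illustration; x must already be small, i.e. range-reduced):

```cpp
// The cos(x) series above: 1 - x^2/2! + x^4/4! - ... - x^10/10!,
// evaluated in Horner form.
#include <cmath>
#include <cstdio>

double cos_series(double x) {
    double x2 = x * x;
    return 1.0 + x2 * (-1.0/2.0 + x2 * (1.0/24.0 + x2 * (-1.0/720.0
               + x2 * (1.0/40320.0 + x2 * (-1.0/3628800.0)))));
}

int main() {
    double x = 0.5;
    std::printf("series : %.15f\n", cos_series(x)); // agrees to ~12 decimals
    std::printf("std    : %.15f\n", std::cos(x));
}
```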
The continuous image of a connected set is connected.
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
Re: What's up with the floating point functions?
Yup, the traditional Taylor-Maclaurin series (with factorials, because that makes reading easier):
cos: x^0/0! - x^2/2! + x^4/4! - x^6/6! + ...
sin: x^1/1! - x^3/3! + x^5/5! - x^7/7! + ...
exp: x^0/0! + x^1/1! + x^2/2! + x^3/3! + ...
Re: What's up with the floating point functions?
I should have paid better attention when I was in high school...