math/cmplx: Acos seems broken #6888
At least the breakage seems intentional: http://golang.org/src/pkg/math/cmplx/asin.go?s=1687:1721#L41

```go
// Asin returns the inverse sine of x.
func Asin(x complex128) complex128 {
	if imag(x) == 0 {
		if math.Abs(real(x)) > 1 {
			return complex(math.Pi/2, 0) // DOMAIN error
		}
		return complex(math.Asin(real(x)), 0)
	}
```
That test for the domain error appears to be misguided. If I remove it and let the code fall through to the general case (diff below), then I get 0 + 1.316958i, the same answer as Wolfram Alpha. My complex math is a bit rusty. Can you explain why the negative coefficient should be preferred?

```diff
diff -r b1edf8faa5d6 src/pkg/math/cmplx/asin.go
--- a/src/pkg/math/cmplx/asin.go	Wed Nov 13 21:29:34 2013 -0500
+++ b/src/pkg/math/cmplx/asin.go	Wed Dec 04 16:58:54 2013 -0500
@@ -49,10 +49,7 @@
 // Asin returns the inverse sine of x.
 func Asin(x complex128) complex128 {
-	if imag(x) == 0 {
-		if math.Abs(real(x)) > 1 {
-			return complex(math.Pi/2, 0) // DOMAIN error
-		}
+	if imag(x) == 0 && math.Abs(real(x)) <= 1 {
 		return complex(math.Asin(real(x)), 0)
 	}
 	ct := complex(-imag(x), real(x)) // i * x
@@ -65,11 +62,7 @@
 // Asinh returns the inverse hyperbolic sine of x.
 func Asinh(x complex128) complex128 {
-	// TODO check range
-	if imag(x) == 0 {
-		if math.Abs(real(x)) > 1 {
-			return complex(math.Pi/2, 0) // DOMAIN error
-		}
+	if imag(x) == 0 && math.Abs(real(x)) <= 1 {
 		return complex(math.Asinh(real(x)), 0)
 	}
 	xx := x * x
```
The cephes code, which we crib from, supposedly implements C9X complex arithmetic. I am not sure what standard or proposal to the standard that corresponds to, but, functionality-wise, it does not follow the behavior of a C99 libm. As for why the imaginary part should be negative, it seems to follow from the definition acos(x) = pi/2 - asin(x):

    acos(2) = pi/2 - asin(2)
            = (1.5707+0i) - (1.5707+1.3169i)
            = 0 - 1.3169i
I'm not sure about the cephes vs c99 inconsistency. We're not claiming to implement C99 semantics, of course. I'd like to understand why Wolfram Alpha (and by extension I assume Mathematica) has chosen a different convention. In general the decisions there seem to be quite well considered (Cos[2^120] notwithstanding). |
We might not be claiming to implement C99 semantics, but cephes does. Furthermore, C99 has a definition of how these functions are supposed to behave. If we believe what is claimed in the cephes documentation, we should observe that behavior too. Mathematica is a different computing environment from either Go or C. Trivially, it does not use hardware floats and it probably does not have a notion of a signed zero. Both of these features are important as far as the implementation of the complex elementary functions and their branch cuts are concerned. More fundamentally, its implementation decisions likely predated work that later became standard practice. For example:

```
$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cmath
>>> cmath.acos(2)
-1.3169578969248166j
>>>
```
I'm a bit confused about the correctness and wrongness of Mathematica versus libc here. cos is an even function, so the answer of acos is determined only up to sign, which is normally taken to be positive in mathematics. Is this correctness/wrongness related to some computer- or floating-point-specific point? Mathematica's answer is compatible with the common convention: https://en.wikipedia.org/wiki/Inverse_hyperbolic_function#Logarithmic_representation
Maybe I should have mentioned that cos(ix) = cosh(x) for completeness: https://en.wikipedia.org/wiki/Hyperbolic_function#Standard_algebraic_expressions — or directly mentioned this: https://en.wikipedia.org/wiki/Inverse_trigonometric_function#Logarithmic_forms
@cshapiro As long as the choice of an extra minus is consistent overall (cos being even), it is not an issue. But it appears that that's not the case here. For the "normal" domain, we (normally) have

```
>>> cmath.acos(0.5)
(1.0471975511965979-0j)
```

but...

```
>>> cmath.acos(2)
-1.3169578969248166j
```

whoops, we suddenly got an extra minus here. I don't think there is such a thing as "predated work that later became standard practice"; it's most likely just people copying each other's code. Of course, the answer is not wrong (cos is an even function). But there is an inconsistency. Mathematica is consistent overall.
I tried to look at Python's acos code: https://github.com/python-git/python/blob/master/Modules/cmathmodule.c#L126

Here's the Python version:

```python
import cmath
import math

def acos(z):
    s1 = cmath.sqrt(1-z)
    s2 = cmath.sqrt(1+z)
    x = 2*math.atan2(s1.real, s2.real)
    y = cmath.asinh(s2.real*s1.imag - s2.imag*s1.real)
    return complex(x, y)

print(acos(2))
```

This prints 1.3169578969248166j (using Python 3.3.3). Not really sure what's going on with Python. By the way, strictly speaking, making an arbitrary sign change at some point in a function causes problems in terms of continuity (unless the value of the function is 0 at that point, which is the case here).
I'm a bit confused by all of these comments. Anyway, there is a unique way of defining Acos(x) and Asin(x) so that they are continuous over the complex plane minus the rays (1,+infinity) and (-infinity,-1) (the branch cut locus), so that they coincide with their real analogues on (-1,1), and so that Acos(z) + Asin(z) = π/2 for all z in this domain. The values for real x such that |x|>1 may be debated (they are only defined up to sign), but should still be subject to the constraint that Acos(z) + Asin(z) = π/2. I don't understand either why acos(2)=-1.3169578969248166i is inconsistent in any way. It is consistent with Asin(2)=π/2+1.3169578969248166i.
As I mentioned, yes, there is a unique way. It's the logarithmic formula here: https://en.wikipedia.org/wiki/Inverse_trigonometric_function#Logarithmic_forms (you can re-express that equation in terms of hyperbolic functions and Arg, but it's the same thing). To see why it is inconsistent, try putting 0.5 and 2 into the logarithmic version (it gives positive for both 0.5 and 2) and compare that with Python's output (positive for 0.5, negative for 2). The choice of sign must not depend on the argument, so there mustn't be anything like "for x<=1 I'm choosing a positive sign in the logarithmic formula, but for x>1 I'm suddenly going to introduce an extra sign at the beginning of the formula".
It is not inconsistent. The logarithmic formula is not well defined for real numbers > 1 because it involves the choice of a branch for sqrt (whether sqrt(-1) = i or -i). You need to decide whether you want to make Acos/Asin continuous on the whole upper half-plane or not (to decide which branch to use for real x > 1). If I blindly use the logarithmic formula, it is continuous from below (weird): http://play.golang.org/p/4-d5obkJow Instead I can choose

```go
func acos(z complex128) complex128 {
	return 1i * cmplx.Log(z - 1i/cmplx.Sqrt(1/(1-z*z)))
}
```

and make it continuous from above.
Seriously? That "branch choice" has been well established for more than 100 years. sqrt(-1) is *the* definition of i in mathematics. That log expression itself is written with that definition in mind. And even if you're living in an alternate universe where i is -sqrt(-1), you must stick with that definition from the beginning to the end. In general, the asin expansion given there is OK. You should be careful when combining square roots of complex numbers, however; see https://en.wikipedia.org/wiki/Inverse_hyperbolic_function#Logarithmic_representation for a note related to acos (acosh there). The math is known, it's all analytic, and there is no need to involve anything computer-specific here. Anyway, I don't understand the resistance here, and I'm already tired of discussing well-known, simple, elementary math. I find it pointless to keep trying to convince you. As a final note, just let me say that Wolfram probably knows math better than you, and if python/libc/you/whatever disagrees with Mathematica on the output of an elementary function, it's most likely that Mathematica is correct. Good day.
There is no resistance; it's not at all the point to discuss elementary math here. If you apply the formula i*log(z - i*sqrt(1-z²)) to z = 2+0i you simply get:

    1-z² = -3-0i
    sqrt(1-z²) = -sqrt(3)i
    z - i*sqrt(1-z²) = 2-sqrt(3)
    i*log(z - i*sqrt(1-z²)) = -1.31695i

If you apply it to 2-0i:

    1-z² = -3+0i
    sqrt(1-z²) = +sqrt(3)i
    z - i*sqrt(1-z²) = 2+sqrt(3)
    i*log(z - i*sqrt(1-z²)) = +1.31695i

No mystery here.
And here's the Mathematica version of this trivial mathematical fact:
http://www.wolframalpha.com/input/?i=sqrt%28-3%29+%3D%3D+-sqrt%283%29+I
http://www.wolframalpha.com/input/?i=sqrt%28-3%29+%3D%3D+sqrt%283%29+I
You are confusing mathematics and floating-point arithmetic. I do not expect Acos(2+0i) and Acos(2-0i) to be equal, just like I do not expect 1/-0 and 1/+0 to be equal. I do expect Acos(2+0i) to correspond to the mathematical limit of Acos(2+εi) as ε goes to zero through positive values. Similarly, I do expect sqrt(-x+0i) and sqrt(-x-0i) to be different numbers (for real x > 0), because in the second case -x-0i belongs to the lower half of the complex plane, so its square root must be -sqrt(x)i, whereas -x+0i belongs to the upper half-plane, so sqrt(-x+0i) is sqrt(x)i. There is no mathematical mistake here; this behaviour is mathematically coherent and matches the specification given by the ISO 10967-3 standard. I have no problem with Mathematica not having the notion of a negative zero, since it does not use floating-point arithmetic. But we are indeed talking about floating-point numbers, not real numbers or complex numbers with real coefficients.
You first mentioned that sqrt(-3) can be either -sqrt(3)i or sqrt(3)i (which was not correct) and that this branch cut should be taken into account in order to get Python's "correct" answer. Why are we considering sqrt(-x+0i) and sqrt(-x-0i) now? Let me try to keep it simple. If it doesn't agree with the analytic result, it's wrong. It's that simple. Floating-point and computer implementations all have errors. If you get the sign totally wrong for a non-zero result, then it's a problem with the implementation, not mathematics. And let me add that acos *has* a discontinuity when crossing the real axis where the real part is greater than 1, so this is no revelation, and there is nothing wrong with that. See the plot in "Properties & Relations" at http://reference.wolfram.com/mathematica/ref/ArcCosh.html for instance. Anyway. Think/believe as you like. I tried to explain. However, it's not my responsibility to get it right.
CL https://golang.org/cl/46492 mentions this issue. |
As mentioned previously, Mathematica doesn't have a notion of signed zero and resolves branch cuts using the "counter-clockwise continuity" convention. In floating-point arithmetic, the correct way to resolve the ambiguity is using one-sided continuity to signed zero, as described in Kahan's "Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit". In any case, the error was in Sqrt, which failed to handle the sign when imag(x) == 0. The other issue was the spurious domain checks on the trigonometric/hyperbolic functions. The CL above fixes the behaviour on all of the branch cuts so that the functions are equal to their principal value on the entire complex plane.