How to get inverse trigonometric functions in Eiffel?

How do I use inverse trigonometric functions such as sin inverse, cos inverse, and tan inverse in Eiffel?
I tried atan, arctan, and arctangent, but none of them work. I was also unable to find it in any other source.
Thank you.

As per https://archive.eiffel.com/doc/online/eiffel50/intro/studio/index-09A/base/support/double_math.html,
using the DOUBLE_MATH class the names are arc_sine, arc_cosine and arc_tangent. There is no two-argument function like atan2.

Related

Recursive arc-length reparameterization of an arbitrary curve

I have a 3D parametric curve defined as P(t) = [x(t), y(t), z(t)].
I'm looking for a function to reparametrize this curve in terms of arc-length. I'm using OpenSCAD, which is a declarative language with no variables (constants only), so the solution needs to work recursively (and with no variables aside from global constants and function arguments).
More precisely, I need to write a function Q(s) that gives the point on P that is (approximately) distance s along the arc from the point where t=0. I already have functions for numeric integration and derivation that can be incorporated into the answer.
Any suggestions would be greatly appreciated!
P.S. It's not possible to pass functions as parameters in OpenSCAD; I usually get around this by just using global declarations.
The arc length sigma between the parameter values t = 0 and t = T is given by the integral
sigma(T) = integral from 0 to T of sqrt( x'(t)^2 + y'(t)^2 + z'(t)^2 ) dt
If you want to parametrize your curve by arc length, you have to invert this formula, i.e. solve sigma(T) = s for T. This is unfortunately rather difficult analytically, so in practice you solve it numerically. The simplest method is bisection; the computation quickly becomes heavy, so reusing previous results is ideal. Newton's method is also useful, because the derivative of sigma(t) is already known in closed form:
sigma'(t) = sqrt( x'(t)^2 + y'(t)^2 + z'(t)^2 )
Maybe not really the most helpful answer, but I hope it gives you some ideas. I cannot help you with the OpenSCAD implementation.
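For illustration, here is a rough sketch of the bisection idea in Python (not OpenSCAD); the helix x(t), y(t), z(t), the step sizes and t_max are placeholder choices you would replace with your own curve and your own integration/derivation routines:

import math

# Example curve (a helix); replace with your own x(t), y(t), z(t).
def x(t): return math.cos(t)
def y(t): return math.sin(t)
def z(t): return 0.5 * t

def speed(t, h=1e-5):
    # sigma'(t) = sqrt(x'(t)^2 + y'(t)^2 + z'(t)^2), via central differences
    dx = (x(t + h) - x(t - h)) / (2 * h)
    dy = (y(t + h) - y(t - h)) / (2 * h)
    dz = (z(t + h) - z(t - h)) / (2 * h)
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def arc_length(T, n=200):
    # sigma(T): composite midpoint rule on [0, T]
    dt = T / n
    return sum(speed((i + 0.5) * dt) for i in range(n)) * dt

def t_at_arc_length(s, t_max, tol=1e-6):
    # Invert sigma(t) = s by bisection; sigma is monotone since speed >= 0.
    # Assumes arc_length(t_max) >= s.
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arc_length(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Q(s, t_max=10.0):
    # Point on the curve approximately s units of arc length from P(0)
    t = t_at_arc_length(s, t_max)
    return [x(t), y(t), z(t)]

print(Q(1.0))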

Difference between Unicode cdot and * in Julia

I've started using the unicode cdot in place of * in my Julia code because I find it easier to read. I thought they were the same, but apparently there is a difference I don't understand. Is there documentation on this?
julia> 2pi⋅(0:1)
ERROR: MethodError: no method matching dot(::Float64, ::UnitRange{Int64})
Closest candidates are:
dot(::Number, ::Number) at linalg\generic.jl:301
dot{T<:Union{Float32,Float64},TI<:Integer}(::Array{T<:Union{Float32,Float64},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}, ::Array{T<:Union{Float32,Float64},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}) at linalg\matmul.jl:48
dot{T<:Union{Complex{Float32},Complex{Float64}},TI<:Integer}(::Array{T<:Union{Complex{Float32},Complex{Float64}},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}, ::Array{T<:Union{Complex{Float32},Complex{Float64}},1}, ::Union{Range{TI<:Integer},UnitRange{TI<:Integer}}) at linalg\matmul.jl:61
...
julia> 2pi*(0:1)
0.0:6.283185307179586:6.283185307179586
dot or ⋅ is not the same as multiplication (*). You can find out what it's for by typing ?dot:
help?> ⋅
search: ⋅
dot(x, y)
⋅(x,y)
Compute the dot product. For complex vectors, the first vector is conjugated. [...]
For more info about the dot product, see e.g. here.
It seems like you are conflating two different operators. The cdot aliases the dot function, while the asterisk * aliases multiplication routines.
I suspect that you want to do a dot product. The error that you see tells you that Julia does not know how to compute the dot product of a scalar floating point number (Float64) with an integer unit range (UnitRange{Int}). If you think about it, using dot here makes little sense.
In contrast, the second command 2pi*(0:1) computes the product of a scalar against the same UnitRange object. That simply rescales the range, and Julia has a method to do that.
A few options for you, depending on what you want to do:
Use * instead of dot here (easiest)
Code your own dot method to handle rescaling of UnitRange objects (probably not helpful)
Use elementwise multiplication .* (careful, not equal to dot!)

How can you find the condition number in Eigen?

In Matlab there are cond and rcond, and in LAPACK too. Is there any routine in Eigen to find the condition number of a matrix?
I have a Cholesky decomposition of a matrix and I want to check whether it is close to singular, but I cannot find a similar function in the docs.
UPDATE:
I think I can use something like this algorithm, which makes use of the triangular factorization. The method by Ilya is useful for more accurate answers, so I will mark it as correct.
Probably the easiest way to compute the condition number is via the expression
cond(A) = max(sigma) / min(sigma)
where sigma is the array of singular values from the SVD. The Eigen author suggests this code:
JacobiSVD<MatrixXd> svd(A);
double cond = svd.singularValues()(0)
    / svd.singularValues()(svd.singularValues().size() - 1);
Other (less efficient) ways are
cond(A) = max(|lambda|) / min(|lambda|)
cond(A) = norm2(A) * norm2(A^-1)
where lambda is the array of eigenvalues; the eigenvalue ratio matches the 2-norm condition number only for normal (e.g. symmetric) matrices.
It looks like the Cholesky decomposition does not directly help here, but I can't tell for sure at the moment.
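As a quick sanity check of the formulas above (in Python/NumPy rather than Eigen, and for a symmetric positive definite matrix, where the eigenvalue ratio applies exactly):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + np.eye(6)                      # symmetric positive definite test matrix

sigma = np.linalg.svd(A, compute_uv=False)   # singular values
lam = np.linalg.eigvalsh(A)                  # eigenvalues (real, ascending)

print(sigma.max() / sigma.min())             # max(sigma) / min(sigma)
print(lam.max() / lam.min())                 # max(lambda) / min(lambda)
print(np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))   # norm2(A) * norm2(A^-1)
print(np.linalg.cond(A))                     # built-in reference, same 2-norm definition

All four numbers agree for this matrix.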
You could use the Gershgorin circle theorem to get a rough estimate.
But as Ilya Popov has already pointed out, calculating the eigenvalues/singular values is more reliable. However, it doesn't make sense to calculate all eigenvalues, which gets very expensive. You only need the largest and the smallest ones; for those you can use the power method for the largest eigenvalue and inverse iteration for the smallest.
Or you could use a library that can do this already, e.g. Spectra.
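A rough sketch of that idea (again in Python/NumPy for readability rather than Eigen, and assuming a symmetric positive definite matrix, as in the Cholesky setting of the question):

import numpy as np

def extreme_eigenvalues(A, iters=500):
    # Largest eigenvalue via power iteration, smallest via inverse iteration.
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    w = v.copy()
    for _ in range(iters):
        v = A @ v                      # power iteration -> dominant eigenvector
        v /= np.linalg.norm(v)
        w = np.linalg.solve(A, w)      # inverse iteration -> eigenvector of the smallest eigenvalue
        w /= np.linalg.norm(w)
    return v @ A @ v, w @ A @ w        # Rayleigh quotients

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + np.eye(6)

lam_max, lam_min = extreme_eigenvalues(A)
print(lam_max / lam_min)               # estimated condition number
print(np.linalg.cond(A))               # reference

In real code you would of course reuse the Cholesky factor you already have instead of calling a dense solve from scratch in every iteration.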
You can use norms. In my robotics experience, this is computationally faster than singular values:
pseudoInverse(matrix).norm() * matrix.norm()
I found this to be 2.6X faster than singular values for 6x6 matrices. It's also recommended in this book:
B. Siciliano and O. Khatib, Springer Handbook of Robotics. Berlin: Springer Science and Business Media, 2008, p. 236.

Is this a correct way to find the derivative of the sigmoid function in python?

I came up with this code:
import math

def DSigmoid(value):
    return math.exp(float(value)) / ((1 + math.exp(float(value))) ** 2)
a.) Will this return the correct derivative?
b.) Is this an efficient method?
Friendly regards,
Daquicker
Looks correct to me. In general, two good ways of checking such a derivative computation are:
Wolfram Alpha. Inputting the sigmoid function 1/(1+e^(-t)), we are given an explicit formula for the derivative, which matches yours. To be a little more direct, you can input D[1/(1+e^(-t)), t] to get the derivative without all the additional information.
Compare it to a numerical approximation. In your case, I will assume you already have a function Sigmoid(value). Taking
Dapprox = (Sigmoid(value+epsilon) - Sigmoid(value)) / epsilon
for some small epsilon and comparing it to the output of your function DSigmoid(value) should catch all but the tiniest errors. In general, estimating the derivative numerically is the best way to double check that you've actually coded the derivative correctly, even if you're already sure about the formula, and it takes almost no effort.
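For example, a small self-contained check in Python (Sigmoid is written out here, since only DSigmoid appears in the question):

import math

def Sigmoid(value):
    return 1.0 / (1.0 + math.exp(-float(value)))

def DSigmoid(value):
    return math.exp(float(value)) / ((1 + math.exp(float(value))) ** 2)

epsilon = 1e-6
for value in (-2.0, 0.0, 3.5):
    Dapprox = (Sigmoid(value + epsilon) - Sigmoid(value)) / epsilon
    print(value, DSigmoid(value), Dapprox)   # the last two columns should agree to several digits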
In case numerical stability is an issue, there is another possibility: provided that you have a good implementation of the sigmoid available (such as in scipy) you can implement it as:
from scipy.special import expit as sigmoid
def sigmoid_grad(x):
    fx = sigmoid(x)
    return fx * (1 - fx)
Note that this is mathematically equivalent to the other expression.
In my case this solution worked, while the direct implementation caused floating point overflows when computing exp(-x).
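For instance, with the DSigmoid from the question:

import math
from scipy.special import expit as sigmoid

def DSigmoid(value):
    return math.exp(float(value)) / ((1 + math.exp(float(value))) ** 2)

def sigmoid_grad(x):
    fx = sigmoid(x)
    return fx * (1 - fx)

print(sigmoid_grad(1000.0))   # 0.0, no trouble
print(DSigmoid(1000.0))       # raises OverflowError: math range error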

Why are the arguments to atan2 Y,X rather than X,Y?

In C the atan2 function has the following signature:
double atan2( double y, double x );
Other languages do this as well. This is the only function I know of that takes its arguments in Y,X order rather than X,Y order, and it screws me up regularly because when I think coordinates, I think (X,Y).
Does anyone know why atan2's argument order convention is this way?
I believe it is because atan2 is related to arctan(y/x), so y appears on top.
Here's a nice link talking about it a bit: Angles and Directions
My assumption has always been that this is because of the trig definition, ie that
tan(theta) = opposite / adjacent
When working with the canonical angle from the origin, opposite is always Y and adjacent is always X, so:
atan2(opposite, adjacent) = theta
I.e., it was done that way so there's no ordering confusion with respect to the mathematical definition.
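A quick way to see the convention in action (Python's math.atan2 uses the same y, x order as C):

import math

x, y = -1.0, 1.0                 # a point in the second quadrant
print(math.atan(y / x))          # -0.785..., quadrant information is lost
print(math.atan2(y, x))          #  2.356... = 3*pi/4, the actual angle of (x, y)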
Suppose a right triangle with its opposite side called y and its adjacent side called x:
tan(angle) = y/x
arctan(tan(angle)) = arctan(y/x)
It's because in school, the mnemonic for calculating the gradient is rise over run, or in other words dy/dx, or more briefly y/x. And this order has snuck into the arguments of arctangent functions. So it's a historical artefact. For me it depends on what I'm thinking about when I use atan2. If I'm thinking about differentials, I get it right, and if I'm thinking about coordinate pairs, I get it wrong.
The order is atan2(X,Y) in Excel, so I think the reversed order is a programming thing. atan(Y/X) can easily be changed to atan2(Y,X) by putting a '2' between the 'n' and the '(' and replacing the '/' with a ',': only two operations. The opposite order would take four operations, and some of them would be more complex (cut and paste).
I often work out my math in Excel and then port it to .NET, so I get hung up on atan2 sometimes. It would be best if atan2 were standardized one way or the other.
It would be more convenient if atan2 had its arguments reversed. Then you wouldn't need to worry about flipping the arguments when computing polar angles. The Mathematica equivalent does just that: https://reference.wolfram.com/language/ref/ArcTan.html
Way back in the dawn of time, FORTRAN had an ATAN2 function with the less convenient argument order that, in this reference manual, is (somewhat inaccurately) described as arctan(arg1 / arg2).
It is plausible that the initial creator was fixated on atan2(arg1, arg2) being (more or less) arctan(arg1 / arg2), and that the decision was blindly copied from FORTRAN to C to C++ and Python and Java and JavaScript.
