Can anyone help me calculate arccos(x) with some formula?
I'm trying to do it in an environment (SAP WEBI) with a limited set of math functions (it has only cos, sin, tan, ...).
You can try using Newton's method: to solve cos(x) = a, take f(x) = cos(x) - a with f'(x) = -sin(x), so each step is x += (cos(x) - a)/sin(x):
function acos(a) {
    delta = 1e-5
    // a lousy first approximation
    x = pi*(1 - a)/2
    last = x
    x += (cos(x) - a)/sin(x)
    while ( abs(x - last) > delta ) {
        last = x
        x += (cos(x) - a)/sin(x)
    }
    return x
}
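For reference, a quick C++ rendering of the same iteration (my_acos and the tolerance are my own choices; note the division by sin(x) gets delicate as a approaches +/-1):
#include <cmath>
#include <cstdio>

// Newton's method for cos(x) = a: f(x) = cos(x) - a, f'(x) = -sin(x),
// so each step is x <- x + (cos(x) - a)/sin(x).
double my_acos(double a) {
    const double pi = 3.141592653589793;
    const double delta = 1e-10;        // convergence tolerance (my choice)
    double x = pi * (1.0 - a) / 2.0;   // the same lousy first approximation
    double last;
    do {
        last = x;
        x += (std::cos(x) - a) / std::sin(x);
    } while (std::fabs(x - last) > delta);
    return x;
}

int main() {
    std::printf("%.6f vs %.6f\n", my_acos(0.5), std::acos(0.5)); // ~1.047198
    return 0;
}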
From https://en.wikipedia.org/wiki/Inverse_trigonometric_functions :
# for -1 < x <= +1 :
acos(x) == 2*atan( sqrt(1-x*x)/(1+x) )
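If atan and sqrt happen to be available, the identity is easy to sanity-check against a reference implementation; a small C++ sketch (the helper name is mine):
#include <cmath>
#include <cstdio>

// acos via the half-angle identity, valid for -1 < x <= 1
double acos_via_atan(double x) {
    return 2.0 * std::atan(std::sqrt(1.0 - x * x) / (1.0 + x));
}

int main() {
    for (double x = -0.9; x < 1.0; x += 0.3)
        std::printf("x = % .1f   identity = %.6f   std::acos = %.6f\n",
                    x, acos_via_atan(x), std::acos(x));
    return 0;
}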
In WEBI there is no way to calculate acos directly, so there are two solutions:
1) create a new custom function in C++ and import it into WEBI
2) create a universe and use ACOS there.
How can I find intersection points in the graph shown below using the fsolve function (from Scilab)?
Here is what I've tried so far:
function y=f(x)
y = 30 + 0 * x;
endfunction
function y= g(x)
y=zeros(x)
k1 = find(x >= 5 & x <= 11);
if k1<>[] then
y(k1)= -59.535905 +24.763399*x(k1) -3.135727*x(k1)^2+0.1288967*x(k1)^3;
end;
k2=find(x >= 11 & x <= 12);
if k2 <> [] then
y(k2)=1023.4465 - 270.59543 * x(k2) + 23.715076 * x(k2)^2 - 0.684764 * x(k2)^3;
end;
k3 = find(x >= 12 & x <= 17);
if k3 <> [] then
y(k3) =-307.31448 + 62.094807 *x(k3) - 4.0091108 * x(k3)^2 + 0.0853523 * x(k3)^3;
end;
k4 = find(x >= 17 & x <= 50);
if k4 <> [] then
y(k4) = 161.42601 - 20.624104 *x(k4) + 0.8567075 * x(k4)^2 - 0.0100559 * x(k4)^3;
end;
endfunction
t=[5:50];
plot(t, g(t));
plot2d(t, f(t));
deff('res = fct', ['res(1) = f(x)'; 'res(2) = g(x)']);
k1=[5, 45];
xsol1 = fsolve(k1, f, g)
Your original post was utterly unreadable and chaotic. It took me a while to edit it and understand what you are trying to achieve. However, I will try to help you. Let's go step by step:
I am not sure why you have used the find function this way; probably you were trying to vectorize the g function? Please consider that Scilab does not broadcast functions over arrays by default: you need to either vectorize them or use feval to do so. Please read this other answer I have written before. find is a vectorized operation that takes a Boolean condition on an array and returns the indices of the elements satisfying it. For example, from the find page:
beers = ["Desperados", "Leffe", "Kronenbourg", "Heineken"];
find(beers == "Leffe")
returns 2 and
A = rand(1, 20);
w = find(A < 0.4)
returns the indices of the elements of A that are smaller than 0.4.
Please learn about conditionals, specifically the if, then, elseif, else, end statements. Once you learn these, you will not use the find function that way. If you end up with many ifs in a row, consider using select, case, else, end instead. Your second function could be written as:
function y = g(x)
if x < 5 | 50 < x then
error("Out of range");
elseif x <= 11 then
y = -59.535905 + 24.763399 * x - 3.135727 * x^2 + 0.1288967 * x^3;
return;
elseif x <= 12 then
y = 1023.4465 - 270.59543 * x + 23.715076 * x^2 - 0.684764 * x^3;
return;
elseif x <= 17 then
y = -307.31448 + 62.094807 * x - 4.0091108 * x^2 + 0.0853523 * x^3;
return;
else
y = 161.42601 - 20.624104 * x + 0.8567075 * x^2 - 0.0100559 * x^3;
end
endfunction
Now apparently you want to find the points on this curve which have a value of 30. Although there are methods to find these points automatically, plotting can be very helpful for finding the proper range:
t = [5:50];
plot(t, feval(t, g) - 30)
showing that the two solutions are in the ranges 20 < x1 < 30 and 40 < x2 < 50.
Now if we use fsolve with the proper initial values it gives us good results:
--> deff('[y] = g2(x)', 'y = g(x) - 30');
--> fsolve([25; 45], g2)
ans =
26.67373
48.396547
The third parameter of the fsolve function is the Jacobian/derivative of the g(x) function. You should either calculate the derivatives of the above polynomials manually (or use proper symbolic software like Maxima), or define them as polynomials using the poly function. See this tutorial for example. Then differentiate them, defining a new function like dgdx.
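For instance, differentiating the first cubic branch by hand is just the power rule; the other branches follow the same pattern:
d/dx ( -59.535905 + 24.763399*x - 3.135727*x^2 + 0.1288967*x^3 )
= 24.763399 - 6.271454*x + 0.3866901*x^2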
So I searched the internet looking for programs implementing Cramer's rule, and there were a few, but apparently those examples were for fixed-size matrices only, like 2x2 or 4x4.
However, I am looking for a way to solve an NxN matrix. So I started, and got to the point of asking the user for the size of the matrix and having them input its values, but I don't know how to move on from there.
I guess my next step is to apply Cramer's rule and get the answers, but I just don't know how. This is the step I'm missing. Can anybody help me, please?
First, you need to calculate the determinant of your equation system's matrix - that is, the matrix that consists of the coefficients (from the left-hand side of the equations) - let it be D.
Then, to calculate the value of a certain variable, take the matrix of your system (from the previous step), replace the coefficients of the corresponding column with the constant terms (from the right-hand side), calculate the determinant of the resulting matrix - let it be C - and divide C by D.
A bit more about the replacement from the previous step: say your matrix is 3x3 (as in the image), so you have a system of equations where every a coefficient is multiplied by x, every b by y, every c by z, and the ds are the constant terms. So, to calculate y, you replace the coefficients that are multiplied by y - the bs in this case - with the ds.
You perform the second step for every variable and your system gets solved.
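To make those two steps concrete, here is a hedged C++ sketch (the names are mine, and it assumes an NxN determinant routine with the shape of the recursive det shown further down in this thread):
#include <vector>

double det(double *matrix, int n); // any NxN determinant routine, e.g. the recursive one below

// Solve A*x = d by Cramer's rule: x[col] = det(A with column col replaced by d) / det(A).
// A is row-major n*n. Returns false if the system matrix is singular.
bool cramerSolve(const std::vector<double>& A, const std::vector<double>& d,
                 std::vector<double>& x, int n) {
    std::vector<double> C = A;              // working copy (det takes a non-const pointer)
    double D = det(C.data(), n);            // determinant of the system matrix
    if (D == 0.0) return false;             // singular: no unique solution
    x.assign(n, 0.0);
    for (int col = 0; col < n; ++col) {
        for (int row = 0; row < n; ++row)
            C[row * n + col] = d[row];      // swap the constant terms into this column
        x[col] = det(C.data(), n) / D;
        for (int row = 0; row < n; ++row)
            C[row * n + col] = A[row * n + col]; // restore the column
    }
    return true;
}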
You can find an example at https://rosettacode.org/wiki/Cramer%27s_rule#C
Although that specific example deals with a 4x4 matrix, the code is written to accommodate any size of square matrix.
What you need is to calculate determinants: Cramer's rule reduces solving an NxN system to computing determinants of NxN matrices.
If N is not big, you can use Cramer's rule (see the code below), which is quite straightforward. However, this method is not efficient; if your N is big, you need to resort to other methods, such as LU decomposition.
Assuming your data is double, and the result can be held by a double:
#include <stdio.h>
#include <stdlib.h>

/* Determinant by recursive cofactor expansion along the first row.
   Simple, but O(n!) - only suitable for small n. */
double det(double *matrix, int n) {
    if (n <= 1) return matrix[0];
    double *subMatrix = (double*)malloc((n - 1) * (n - 1) * sizeof(double));
    double result = 0.0;
    for (int i = 0; i < n; ++i) {
        /* build the minor: drop row 0 and column i */
        for (int j = 0; j < n - 1; ++j) {
            for (int k = 0; k < i; ++k)
                subMatrix[j*(n - 1) + k] = matrix[(j + 1)*n + k];
            for (int k = i + 1; k < n; ++k)
                subMatrix[j*(n - 1) + (k - 1)] = matrix[(j + 1)*n + k];
        }
        /* cofactor signs alternate along the first row */
        if (i % 2 == 0)
            result += matrix[0*n + i] * det(subMatrix, n - 1);
        else
            result -= matrix[0*n + i] * det(subMatrix, n - 1);
    }
    free(subMatrix);
    return result;
}
int main() {
double matrix[ ] = { 1,2,3,4,5,6,7,8,2,6,4,8,3,1,1,2 };
printf("%lf\n", det(matrix, 4));
return 0;
}
I have a Scilab program for averaging a 3D matrix and it works OK. However, instead of having the average just be a set value, I want it to be a certain sum of mass (sum(n*n*n)).
K = 100
N = 5
A = 1
mid = floor(N/2)
volume = rand(K, K, K)
cubeCount = floor( K / N )
for x=0:cubeCount-1
for y=0:cubeCount-1
for z=0:cubeCount-1
// Get a cube of NxNxN size
cube = 20;
//Calculate the average value of the voxels in the cube
avg = sum( cube ) / (N * N * N);
// Assign it to the center voxel
volume( N*x+mid+1, N*y+mid+1, N*z+mid+1 ) = avg
end
end
end
disp( volume )
If anyone has a simple solution to this, please tell me.
You seem to have just about said it yourself. All you would need to do is change what cube equals:
while sum(A * A * A) < 10
    A = A + 1;
end
cube = A;
This will give you the correct sum of mass of the voxels.
I tried to implement the Bessel function using that formula; this is the code:
function result=Bessel(num);
if num==0
result=bessel(0,1);
elseif num==1
result=bessel(1,1);
else
result=2*(num-1)*Bessel(num-1)-Bessel(num-2);
end;
But if I use MATLAB's bessel function to compare it with this one, I get wildly different values.
For example, if I type Bessel(20) it gives me 3.1689e+005 as the result; if instead I type bessel(20,1) it gives me 3.8735e-025, a totally different result.
Such recurrence relations are nice in mathematics, but numerically unstable when implementing algorithms using limited-precision representations of floating-point numbers.
Consider the following comparison:
x = 0:20;
y1 = arrayfun(@(n)besselj(n,1), x); %# builtin function
y2 = arrayfun(@Bessel, x); %# your function
semilogy(x,y1, x,y2), grid on
legend('besselj','Bessel')
title('J_\nu(z)'), xlabel('\nu'), ylabel('log scale')
So you can see how the computed values start to differ significantly after order 9.
According to MATLAB:
BESSELJ uses a MEX interface to a Fortran library by D. E. Amos.
and gives the following as references for their implementation:
D. E. Amos, "A subroutine package for Bessel functions of a complex
argument and nonnegative order", Sandia National Laboratory Report,
SAND85-1018, May, 1985.
D. E. Amos, "A portable package for Bessel functions of a complex
argument and nonnegative order", Trans. Math. Software, 1986.
The forward recurrence relation you are using is not stable. To see why, consider that the values of BesselJ(n,x) become smaller and smaller, by roughly a factor of 1/(2n) at each step. You can see this by looking at the first term of the Taylor series for J.
So, what you're doing is subtracting a large number from a multiple of a somewhat smaller number to get an even smaller number. Numerically, that's not going to work well.
Look at it this way. We know the result is of the order of 10^-25. You start out with numbers that are of the order of 1. So in order to get even one accurate digit out of this, we have to know the first two numbers with at least 25 digits precision. We clearly don't, and the recurrence actually diverges.
Using the same recurrence relation to go backwards, from high orders to low orders, is stable. When you start with correct values for J(20,1) and J(19,1), you can calculate all orders down to 0 with full accuracy as well. Why does this work? Because now the numbers are getting larger in each step. You're subtracting a very small number from an exact multiple of a larger number to get an even larger number.
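To illustrate, here is a hedged C++ sketch of that downward recurrence (Miller's algorithm), normalized via the identity J_0(x) + 2*(J_2(x) + J_4(x) + ...) = 1; the names are mine, and the starting-order margin of 20 is a crude heuristic that is plenty for x = 1 but not tuned for large arguments:
#include <cstdio>
#include <vector>

// Recur J_{k-1}(x) = (2k/x)*J_k(x) - J_{k+1}(x) downward from an arbitrary
// tiny seed well above nmax, then rescale the whole table at once.
std::vector<double> besselj_downward(int nmax, double x) {
    int start = nmax + 20;                 // crude safety margin above nmax
    std::vector<double> j(start + 2, 0.0); // j[start+1] = 0 stays as the seed pair
    j[start] = 1e-30;                      // arbitrary small seed value
    for (int k = start; k >= 1; --k)
        j[k - 1] = (2.0 * k / x) * j[k] - j[k + 1];
    double norm = j[0];                    // normalize: J_0 + 2*(J_2 + J_4 + ...) = 1
    for (int k = 2; k <= start; k += 2)
        norm += 2.0 * j[k];
    for (double& v : j) v /= norm;
    j.resize(nmax + 1);
    return j;
}

int main() {
    std::vector<double> jn = besselj_downward(20, 1.0);
    std::printf("J_20(1) ~ %g\n", jn[20]); // ~3.8735e-25, matching besselj(20,1)
    return 0;
}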
You can just modify the code below, which is for the spherical Bessel function. It is well tested and works for the full argument and order range. I am sorry it is in C#:
public static Complex bessel(int n, Complex z)
{
if (n == 0) return sin(z) / z;
if (n == 1) return sin(z) / (z * z) - cos(z) / z;
// forward recurrence is stable while the order stays at or below the argument
if (n <= System.Math.Abs(z.real))
{
Complex h0 = bessel(0, z);
Complex h1 = bessel(1, z);
Complex ret = 0;
for (int i = 2; i <= n; i++)
{
ret = (2 * i - 1) / z * h1 - h0;
h0 = h1;
h1 = ret;
if (double.IsInfinity(ret.real) || double.IsInfinity(ret.imag)) return double.PositiveInfinity;
}
return ret;
}
else
{
    // backward recurrence on the ratios j_v(z)/j_{v-1}(z), started at an
    // order v above n that is estimated below from the target precision
    double u = 2.0 * abs(z.real) / (2 * n + 1);
double a = 0.1;
double b = 0.175;
int v = n - (int)System.Math.Ceiling((System.Math.Log(0.5e-16 * (a + b * u * (2 - System.Math.Pow(u, 2)) / (1 - System.Math.Pow(u, 2))), 2)));
Complex ret = 0;
while (v > n - 1)
{
ret = z / (2 * v + 1.0 - z * ret);
v = v - 1;
}
Complex jnM1 = ret; // ret is now the ratio j_n/j_{n-1}; accumulate the product down to j_n/j_0
while (v > 0)
{
ret = z / (2 * v + 1.0 - z * ret);
jnM1 = jnM1 * ret;
v = v - 1;
}
return jnM1 * sin(z) / z; // jnM1 = j_n/j_0, and j_0(z) = sin(z)/z
}
}
I have 2 tables of values and want to scale the first one so that it matches the 2nd one as well as possible. Both have the same length. If both are drawn as graphs in a diagram, they should be as close to each other as possible. But I do not want quadratic weights, just simple linear ones.
My problem is that I have no idea how to actually compute the best scaling factor, because of the Abs function.
Some pseudocode:
//given:
float[] table1= ...;
float[] table2= ...;
//wanted:
float factor= ???; // I have no idea how to compute this
float remainingDifference=0;
for(int i=0; i<length; i++)
{
float scaledValue=table1[i] * factor;
//Sum up the differences. I use the Abs function because negative differences are differences too.
remainingDifference += Abs(scaledValue - table2[i]);
}
I want to compute the scaling factor so that the remainingDifference is minimal.
Simple linear weights are hard, like you said.
a_n = first sequence
b_n = second sequence
c = scaling factor
Your residual function is (sums are from i=1 to N, the number of points):
SUM( |a_i - c*b_i| )
Taking the derivative with respect to c yields:
d/dc SUM( |a_i - c*b_i| )
= -SUM( b_i * (a_i - c*b_i)/|a_i - c*b_i| )
Setting to 0 and solving for c is hard. I don't think there's an analytic way of doing that. You may want to try https://math.stackexchange.com/ to see if they have any bright ideas.
However if you work with quadratic weights, it becomes significantly simpler:
d/dc SUM( (a_i - c*b_i)^2 )
= SUM( -2*b_i*(a_i - c*b_i) )
= -2 * ( SUM(a_i*b_i) - c*SUM(b_i^2) ) = 0
=> SUM(a_i*b_i) - c*SUM(b_i^2) = 0
=> c = SUM(a_i*b_i) / SUM(b_i^2)
I strongly suggest the latter approach if you can.
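For what it's worth, a minimal C++ sketch of that closed form (the function name is mine; in the question's terms, a would be table2 and b would be table1):
#include <cstddef>

// Closed-form quadratic-weights factor: minimizes SUM((a_i - c*b_i)^2),
// which gives c = SUM(a_i*b_i) / SUM(b_i^2).
double quadraticFit(const float* a, const float* b, std::size_t n) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        num += static_cast<double>(a[i]) * b[i];
        den += static_cast<double>(b[i]) * b[i];
    }
    return den != 0.0 ? num / den : 0.0; // an all-zero b fits any factor
}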
I would suggest trying some sort of variant on Newton-Raphson.
Construct a function Diff(k) that looks at the difference in area between your two graphs between fixed markers A and B.
Mathematically, I guess it would be integral( x = A to B ){ f(x) - k * g(x) } dx.
Realistically, though, you could just subtract the values:
if you range from x = -10 to 10 and you have a data point for f(i) and g(i) at each integer i in [-10, 10] (i.e. 21 data points),
then you just take sum( i = -10 to 10 ){ f(i) - k * g(i) }.
Basically you would expect this function to look like a parabola: there will be an optimum k, and deviating slightly from it in either direction will increase the overall area difference,
and the bigger the deviation, the bigger the gap,
so this should be a pretty smooth function (if you have a lot of data points).
So you want to minimise Diff(k),
which means finding where the derivative d/dk Diff(k) = 0,
so just do Newton-Raphson on this new function D'(k).
Kick it off at k = 1 and it should home in on a solution pretty fast.
That's probably going to give you an optimal computation time.
If you want something simpler, just start with some k1 and k2 whose Diff values lie on either side of 0.
Say Diff(1.5) = -3 and Diff(2.9) = 7;
then you would pick a k about 3/10 of the way (10 = 7 - (-3)) between 1.5 and 2.9,
and depending on whether that yields a positive or negative value, use it as the new k1 or k2, rinse and repeat.
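In case it helps, a rough C++ sketch of that bracketing loop (falsePosition and the tolerance are my own names/choices; diff stands in for whatever Diff(k) you construct):
#include <cmath>

// Bracketing ("rinse and repeat") search for the k where diff(k) = 0.
// k1 and k2 must give diff values of opposite sign.
template <typename F>
double falsePosition(F diff, double k1, double k2, double tol = 1e-9) {
    double d1 = diff(k1), d2 = diff(k2);
    for (int it = 0; it < 100 && std::fabs(k2 - k1) > tol; ++it) {
        // interpolated guess: for diff values -3 and 7 this lands 3/10 of the way
        double k = k1 + (k2 - k1) * (-d1) / (d2 - d1);
        double d = diff(k);
        if (d == 0.0) return k;
        if ((d < 0.0) == (d1 < 0.0)) { k1 = k; d1 = d; } // replace the same-sign end
        else                         { k2 = k; d2 = d; }
    }
    return 0.5 * (k1 + k2);
}
Called, for example, as falsePosition([](double k){ return Diff(k); }, 1.5, 2.9).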
In case anyone stumbles upon this in the future, here is some code (C++).
The trick is to first sort the samples by the per-sample scaling factor pA[i]/pB[i] that would make each pair match exactly, then start at both ends and iterate towards the factor that yields the minimum absolute deviation (L1 norm).
Everything except the sort has a linear runtime, so the total runtime is O(n log n).
#include <algorithm> // std::sort
#include <cmath>     // std::abs, std::isfinite
#include <memory>    // std::unique_ptr

/*
 * Find x so that the sum over std::abs(pA[i]-pB[i]*x) from i=0 to (n-1) is minimal
 * Then return x
 */
float linearFit(const float* pA, const float* pB, int n)
{
/*
* Algebraic solution is not possible for the general case
* => iterative algorithm
*/
if (n < 0)
throw "linearFit has invalid argument: expected n >= 0";
if (n == 0)
return 0;//If there is nothing to fit, any factor is a perfect fit (sum is always 0)
if (n == 1)
return pA[0] / pB[0];//return x so that pA[0] = pB[0]*x
//If you don't like this, use a std::vector :P
std::unique_ptr<float[]> targetValues_(new float[n]);
std::unique_ptr<int[]> indices_(new int[n]);
//Get proper pointers (the checks above guarantee n > 1, so this is safe):
float* targetValues = targetValues_.get();//The value for x that would cause pA[i] = pB[i]*x
int* indices = indices_.get(); //Indices of useful (not NaN and not infinity) target values
int m = 0;//Number of useful target values
for (int i = 0; i < n; i++)
{
float a = pA[i];
float b = pB[i];
float targetValue = a / b;
targetValues[i] = targetValue;
if (std::isfinite(targetValue))
{
indices[m++] = i;
}
}
if (m <= 0)
return 0;
if (m == 1)
return targetValues[indices[0]];//If there is only one target value, then it has to be the best one.
//sort the indices by target value
std::sort(indices, indices + m, [&](int ia, int ib){
return targetValues[ia] < targetValues[ib];
});
//Start from the extremes and meet at the optimal solution somewhere in the middle:
int l = 0;
int r = m - 1;
// m >= 2 is guaranteed => l < r
float penaltyFactorL = std::abs(pB[indices[l]]);
float penaltyFactorR = std::abs(pB[indices[r]]);
while (l < r)
{
if (l == r - 1 && penaltyFactorL == penaltyFactorR)
{
break;
}
if (penaltyFactorL < penaltyFactorR)
{
l++;
if (l < r)
{
penaltyFactorL += std::abs(pB[indices[l]]);
}
}
else
{
r--;
if (l < r)
{
penaltyFactorR += std::abs(pB[indices[r]]);
}
}
}
//return the best target value
if (l == r)
return targetValues[indices[l]];
else
return (targetValues[indices[l]] + targetValues[indices[r]])*0.5;
}