Linear Scale vs. Log Scale - math

This may be a very simple question, but I haven't been in touch with algorithms for a while.
I have a logarithmic scale of 20, 100, 500, 2500, 12500, which maps to 1, 2, 3, 4, 5 respectively. Now I want to find out where the value 225 would lie on the scale above. And, going the other way round, what would the position 2.3 correspond to on the scale? It would be great if someone could help me with the answer and an explanation.

Note that each step in the scale multiplies the previous step by 5.
So the explicit formula for your output is
y = 4 * 5^x
or
x = log-base-5(y/4)
where
log-base-5(n) = log(n)/log(5)
if you want to compute it in code. That last line is called the change of base formula, and is explained here. You can use either a natural log or a common log on the right side of the formula; it doesn't matter.
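For the two concrete questions, a quick Python check (the function names are mine): 225 lands at about 2.5 on the scale, and position 2.3 corresponds to a value of about 162.

```python
import math

def scale_to_value(x):
    # position x on the 1..5 scale -> raw value, since y = 4 * 5^x
    return 4 * 5 ** x

def value_to_scale(y):
    # raw value -> position on the scale, via the change of base formula
    return math.log(y / 4) / math.log(5)

print(value_to_scale(225))   # -> about 2.5
print(scale_to_value(2.3))   # -> about 162
```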

Related

The reason for using a bias in networks?

It may be easy to see, but I still don't understand: why do we use a bias in a neural network? The weights' values get changed during training, which is what makes the algorithm learn. So why use a bias at all?
Because of linear equations.
Bias is another learned parameter. A single neuron will compute w*x + b where w is your weight parameter and b your bias.
Perhaps this helps you: let's assume you are dealing with a 2D Euclidean space that you'd like to classify with two labels. You can do that by computing a linear function and then classifying everything above it with one label and everything below it with another. If you did not use the bias you could only change the slope of your function, and your function would always pass through (0, 0). The bias gives you the possibility to define where that linear function intersects the y-axis, i.e. the point (0, y). E.g. without a bias you could not separate data that is only separable by a line that does not pass through the origin.
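A toy 1-D sketch of that point in Python (the data and function names are made up for illustration): a neuron without a bias has its decision boundary pinned to the origin, so it cannot separate classes split at x = 2.

```python
# Points with x > 2 are class 1, otherwise class 0.
# Without a bias the neuron computes w*x, so its boundary (w*x = 0)
# always sits at x = 0; with a bias it computes w*x + b and the
# boundary can sit anywhere, e.g. at x = 2 with w = 1, b = -2.

def classify_no_bias(x, w=1.0):
    return 1 if w * x > 0 else 0

def classify_with_bias(x, w=1.0, b=-2.0):
    return 1 if w * x + b > 0 else 0

data = [(1.0, 0), (1.5, 0), (2.5, 1), (3.0, 1)]

print([classify_no_bias(x) for x, _ in data])    # misclassifies the first two
print([classify_with_bias(x) for x, _ in data])  # matches all labels
```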

Calculating rho values for a lemniscate graph

I am trying to graph a lemniscate in polar coordinates in Scilab. The formula is
rho^2 = a^2*cos(2*theta).
The thing is that taking the square root of certain values would return an imaginary number, since the value under the root is negative.
clear
close
clc
clf
a=3;
theta=[0:((1*%pi)/180):((359*%pi)/180)];
rr=(a*a)*cos(2*theta);
rho=sqrt(rr);
polarplot(theta,rho,2);
Anyways, the program breaks itself when the negative rr values are reached since the square root is not properly defined for them.
All I need is the code to ignore those points and plot the others.
I don't know if this is understandable, but I hope someone does and can help me with this.
Thanks in advance.
You may ignore (e.g. filter out) those points, but there is an even easier solution: use only the real part of your result vector for the plot, with real:
polarplot(theta,real(rho),2);
You may also assign it to a new variable if you want to use it later:
rhoreal=real(rho);
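The question is in Scilab, but the filtering alternative is easy to show as a Python sketch (pure stdlib; the masking condition is the part that carries over):

```python
import math

a = 3.0
theta = [math.radians(d) for d in range(360)]
rr = [a * a * math.cos(2 * t) for t in theta]

# keep only the samples where rho^2 >= 0, i.e. where the lemniscate exists;
# in Scilab the equivalent mask would be something like k = find(rr >= 0)
points = [(t, math.sqrt(r)) for t, r in zip(theta, rr) if r >= 0]

print(len(points))  # roughly half of the 360 samples survive
```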

Rotate model around x,y,z axes, without gimbal lock, with input data always as x,y,z axes angle rotations

I have an input device that gives me 3 angles -- rotation around x,y,z axes.
Now I need to use these angles to rotate the 3D space, without gimbal lock. I thought I could convert to Quaternions, but apparently since I'm getting the data as 3 angles this won't help?
If that's the case, just how can I correctly rotate the space, keeping in mind that my input data simply is x,y,z axes rotation angles, so I can't just "avoid" that. Similarly, moving around the order of axes rotations won't help -- all axes will be used anyway, so shuffling the order around won't accomplish anything. But surely there must be a way to do this?
If it helps, the problem can pretty much be reduced to implementing this function:
void generateVectorsFromAngles(double &lastXRotation,
double &lastYRotation,
double &lastZRotation,
JD::Vector &up,
JD::Vector &viewing) {
JD::Vector yaxis = JD::Vector(0,0,1);
JD::Vector zaxis = JD::Vector(0,1,0);
JD::Vector xaxis = JD::Vector(1,0,0);
up.rotate(xaxis, lastXRotation);
up.rotate(yaxis, lastYRotation);
up.rotate(zaxis, lastZRotation);
viewing.rotate(xaxis, lastXRotation);
viewing.rotate(yaxis, lastYRotation);
viewing.rotate(zaxis, lastZRotation);
}
in a way that avoids gimbal lock.
If your device is giving you absolute X/Y/Z angles (which implies something like actual gimbals), it will have some specific sequence to describe what order the rotations occur in.
Since you say that "the order doesn't matter", this suggests your device is something like (almost certainly?) a 3-axis rate gyro, and you're getting differential angles. In this case, you want to combine your 3 differential angles into a rotation vector, and use this to update an orientation quaternion, as follows:
given differential angles (in radians):
dXrot, dYrot, dZrot
and current orientation quaternion Q such that:
{r=0, ijk=rot(v)} = Q {r=0, ijk=v} Q*
construct an update quaternion:
dQ = {r=1, i=dXrot/2, j=dYrot/2, k=dZrot/2}
and update your orientation:
Q' = normalize( quaternion_multiply(dQ, Q) )
Note that dQ is only a crude approximation of a unit quaternion (which makes the normalize() operation more important than usual). However, if your differential angles are not large, it is actually quite a good approximation. Even if your differential angles are large, this simple approximation makes less nonsense than many other things you could do. If you have problems with large differential angles, you might try adding a quadratic correction to improve your accuracy (as described in the third section).
However, a more likely problem is that any kind of repeated update like this tends to drift, simply from accumulated arithmetic error if nothing else. Also, your physical sensors will have bias -- e.g., your rate gyros will have offsets which, if not corrected for, will cause your orientation estimate Q to precess slowly. If this kind of drift matters to your application, you will need some way to detect/correct it if you want to maintain a stable system.
If you do have a problem with large differential angles, there is a trigonometric formula for computing an exact update quaternion dQ. The assumption is that the total rotation angle should be linearly proportional to the magnitude of the input vector; given this, you can compute an exact update quaternion as follows:
given differential half-angle vector (in radians):
dV = (dXrot, dYrot, dZrot)/2
compute the magnitude of the half-angle vector:
theta = |dV| = 0.5 * sqrt(dXrot^2 + dYrot^2 + dZrot^2)
then the update quaternion, as used above, is:
dQ = {r=cos(theta), ijk=dV*sin(theta)/theta}
= {r=cos(theta), ijk=normalize(dV)*sin(theta)}
Note that directly computing either sin(theta)/theta or normalize(dV) is singular near zero, but the limit value of the vector ijk near zero is simply ijk = dV = (dXrot,dYrot,dZrot), as in the approximation from the first section. If you do compute your update quaternion this way, the straightforward method is to check for this, and use the approximation for small theta (for which it is an extremely good approximation!).
Finally, another approach is to use a Taylor expansion for cos(theta) and sin(theta)/theta. This is an intermediate approach -- an improved approximation that increases the range of accuracy:
cos(x) ~ 1 - x^2/2 + x^4/24 - x^6/720 ...
sin(x)/x ~ 1 - x^2/6 + x^4/120 - x^6/5040 ...
So, the "quadratic correction" mentioned in the first section is:
dQ = {r=1-theta*theta*(1.0/2), ijk=dV*(1-theta*theta*(1.0/6))}
Q' = normalize( quaternion_multiply(dQ, Q) )
Additional terms will extend the accurate range of the approximation, but if you need more than +/-90 degrees per update, you should probably use the exact trig functions described in the second section. You could also use a Taylor expansion in combination with the exact trigonometric solution -- it may be helpful by allowing you to switch seamlessly between the approximation and the exact formula.
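The differential update described above (exact trig form plus the small-angle fallback) can be sketched in Python; quaternions here are plain (r, i, j, k) tuples, and the function names are mine, not from any particular library:

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions given as (r, i, j, k) tuples
    ar, ai, aj, ak = a
    br, bi, bj, bk = b
    return (ar*br - ai*bi - aj*bj - ak*bk,
            ar*bi + ai*br + aj*bk - ak*bj,
            ar*bj - ai*bk + aj*br + ak*bi,
            ar*bk + ai*bj - aj*bi + ak*br)

def normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def update_orientation(q, dXrot, dYrot, dZrot):
    # dV is the differential half-angle vector, theta its magnitude
    hx, hy, hz = dXrot / 2, dYrot / 2, dZrot / 2
    theta = math.sqrt(hx*hx + hy*hy + hz*hz)
    if theta < 1e-6:
        dq = (1.0, hx, hy, hz)           # small-angle approximation
    else:
        s = math.sin(theta) / theta      # exact trig form
        dq = (math.cos(theta), hx*s, hy*s, hz*s)
    return normalize(quat_mul(dq, q))

# a single 90-degree rotation about z, starting from identity
q = update_orientation((1.0, 0.0, 0.0, 0.0), 0.0, 0.0, math.pi / 2)
print(q)  # ≈ (0.707, 0, 0, 0.707)
```

Accumulating ninety 1-degree updates about z gives the same quaternion as one 90-degree update, up to rounding, which is a handy sanity check before worrying about the sensor-bias drift mentioned above.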
I think that the 'gimbal lock' is not a problem of computations/mathematics but rather a problem of some physical devices.
Given that you can represent any orientation with XYZ rotations, even at the 'gimbal lock point' there is an XYZ representation for any imaginable orientation change. Your physical gimbal may not be able to rotate this way, but the mathematics still works :).
The only problem here is your input device - if it's gimbal then it can lock, but you didn't give any details on that.
EDIT: OK, so after you added a function I think I see what you need. The function is perfectly correct. But sadly, you just can't get a nice, easy, continuous way of editing an orientation using XYZ axis rotations. I haven't seen such a solution even in professional 3D packages.
The only thing that comes to my mind is to treat your input like steering an aeroplane - you just have some initial orientation and you can rotate it around the X, Y or Z axis by some amount. Then you store the new orientation and clear your inputs. Rotations in 3DMax/Maya/Blender are done the same way.
If you give us more info about real-world usage you want to achieve we may get some better ideas.

Distribution Pattern (Texture) and Ramp Math?

I'm trying to achieve the ramp effect as seen here:
(source: splashdamage.com)
Blending the textures based on a distribution pattern is easy. Basically, just this (HLSL):
Result = lerp(SampleA, SampleB, DistributionPatternSample);
Which works, but without the ramp.
http://aaronm.nuclearglory.com/private/stackoverflow/result1.png
My first guess was that to incorporate "Ramp Factor" I could just do this:
Result = lerp(A, B, (1.0f - Ramp)*Distribution);
However, that does not work because if Ramp is also 1.0 the result would be zero, causing just 'A' to be used. This is what I get when Ramp is 1.0f with that method:
http://aaronm.nuclearglory.com/private/stackoverflow/result2.png
I've attempted to just multiply the ramp with the distribution, which is obviously incorrect. (Figured it's worth a shot to try and discover interesting effects. No interesting effect was discovered.)
I've also attempted subtracting the Ramp from the Distribution, like so:
Result = lerp(A, B, saturate(Distribution - Ramp));
But the issue with that is that the ramp is meant to control sharpness of the blend. So, that doesn't really do anything either.
I'm hoping someone can inform me what I need to do to accomplish this, mathematically. I'm trying to avoid branching because this is shader code. I can simulate branching by multiplying out results, but I'd prefer not to do this. I am also hoping someone can fill me in on why the math is formulated the way it is for the sharpness. Throwing around math without knowing how to use it can be troublesome.
For context, that top image was taken from here:
http://wiki.splashdamage.com/index.php/A_Simple_First_Megatexture
I understand how MegaTextures (the clip-map approach) and Virtual Texturing (the more advanced approach) work just fine. So I don't need any explanation on that. I'm just trying to implement this particular blend in a shader.
For reference, this is the distribution pattern texture I'm using.
http://aaronm.nuclearglory.com/private/stackoverflow/distribution.png
Their ramp width is essentially just a contrast change on the distribution map. A brute-force version of this is a simple rescale and clamp.
Things we want to preserve are that 0.5 maps to 0.5, and that the texture goes from 0 to 1 over a region of width w.
This gives
x = 0.5 + (x-0.5)/w
This means the final HLSL will look something like this:
Result = lerp(A, B, clamp( 0.5 + (Distribution-0.5)/w, 0, 1) );
Now if this ends up looking jaggy at the edges you can switch to using a smoothstep (note that HLSL's smoothstep takes the two edges first, then the value), in which case you'd get
Result = lerp(A, B, smoothstep( 0, 1, 0.5 + (Distribution-0.5)/w ) );
However, one thing to keep in mind here is that this type of thresholding works best with smooth-ish distribution patterns. I'm not sure if yours is going to be smooth enough (unless that is a small version of a megatexture, in which case you're probably OK).
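A scalar sketch of that remap, in Python rather than HLSL (the names are mine; `lerp` and `clamp01` mirror the HLSL intrinsics lerp and saturate):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def clamp01(x):
    return min(1.0, max(0.0, x))

def ramp_blend(a, b, distribution, w):
    # contrast remap: 0.5 stays at 0.5, and the 0 -> 1 transition
    # happens over a region of width w in the distribution values
    t = clamp01(0.5 + (distribution - 0.5) / w)
    return lerp(a, b, t)

print(ramp_blend(0.0, 1.0, 0.5, 0.1))  # -> 0.5 (midpoint is preserved)
print(ramp_blend(0.0, 1.0, 0.6, 1.0))  # soft blend, stays near 0.6
print(ramp_blend(0.0, 1.0, 0.6, 0.1))  # -> 1.0 (narrow w = hard threshold)
```

Shrinking w sharpens the transition; w = 1 leaves the original blend unchanged.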

Math function to find saturation point of a curve

Does anybody know an algorithm in C to find the saturation point of a saturation curve?
The curve can change its rate of increase sharply or smoothly, and it has noise in it, so it's not as simple as I thought.
I tried calculating the derivative of atan(delta_y/delta_x), but it doesn't work fine for all the curves.
It appears you're trying to ascertain, numerically, when the gradient of a function, fitted to some data points from a chemistry experiment, is less than one. It also seems like your data is noisy and you want to determine when the gradient would be less than one if the noise wasn't there.
Firstly, let's forget about the noise. You do not want to do this:
atan((y(i)-y(i-1))/(x(i)-x(i-1)))*180/PI
There is no need to compute the angle of the gradient when the gradient is right there. Just compare (y(i)-y(i-1))/(x(i)-x(i-1)) to 1.
Secondly, if there is noise you can't trust derivatives computed like that. But to do better we really need to know more about your problem. There are infinitely many ways to interpret your data. Is there noise in the x values, or just in the y values? Do we expect this curve to have a characteristic shape, or can it do anything?
I'll make a guess: This is some kind of chemistry thing where the y values rapidly increase but then the rate of increase slows down, so that in the absence of noise we have y = A(1-exp(-B*x)) for some A and B. If that's the case then you can use a non-linear regression algorithm to fit such a curve to your points and then test when the gradient of the fitted curve is less than 1.
But without more data, your question will be hard to answer. If you really are unwilling to give more information, I'd suggest a quick and dirty filtering of your data. E.g. at any time, estimate the true value of y by using a weighted average of the previous y values, with weights that drop off exponentially the further back in time you go. I.e. instead of using y[i], use z[i] where
z[i] = sum over j = 0 to i of w[i,j]*y[j] / sum over j = 0 to i of w[i,j]
where
w[i,j] = exp(A*(x[j]-x[i]))
and A is some number that you tune by hand until you get the results you want. Try this, plotting the z[i] as you tweak A, and see if it does what you want.
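A Python sketch of that filter plus the gradient-below-1 test (names are mine; A is the hand-tuned decay constant from the formula above, and the sample curve y = 5*(1-exp(-x)) is an assumed stand-in for the real data):

```python
import math

def smoothed(xs, ys, A):
    # z[i] = weighted average of y[0..i] with weights w[i,j] = exp(A*(x[j]-x[i]));
    # for A > 0, older samples (smaller x) count exponentially less
    zs = []
    for i in range(len(ys)):
        weights = [math.exp(A * (xs[j] - xs[i])) for j in range(i + 1)]
        zs.append(sum(w * y for w, y in zip(weights, ys)) / sum(weights))
    return zs

def saturation_index(xs, zs, threshold=1.0):
    # first index where the finite-difference gradient drops below threshold
    for i in range(1, len(zs)):
        if (zs[i] - zs[i - 1]) / (xs[i] - xs[i - 1]) < threshold:
            return i
    return None

xs = [i * 0.25 for i in range(17)]
ys = [5 * (1 - math.exp(-x)) for x in xs]   # gradient falls below 1 near x = ln 5
print(xs[saturation_index(xs, ys)])         # -> 1.75
```

On noisy data you would run saturation_index on smoothed(xs, ys, A) instead of the raw ys.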
We can get the maxima or minima of a curve quite easily from the function parameters of the curve, so I can't see why you are getting inconsistent results.
I think the problem might be in how you combine the noise curve with the original, so make sure you mix these curves in a proper way. There is nothing wrong with atan or any other math function you used; the problem is with your implementation, which you haven't specified here.