I cannot find any examples in CF of the Haversine formula (a formula for working out distances between two points on a sphere from their longitudes and latitudes).
Wikipedia has examples in other languages (http://en.wikipedia.org/wiki/Haversine_formula) but none in CF.
Below is an interpretation in CF by another developer; it is internal and not fully tested. I am interested to see how others have calculated this in CF, and I would also be interested in opinions on how the example below could be simplified.
var variables.intEarthRadius = 6371; // in km
var local.decRadius = arguments.radius / 1000; // convert radius given in metres to kilometres
var local.latMax = arguments.latitude + degree(local.decRadius / variables.intEarthRadius);
var local.latMin = arguments.latitude - degree(local.decRadius / variables.intEarthRadius);
var local.lngMax = arguments.longitude + degree(local.decRadius / variables.intEarthRadius / cos(radian(arguments.latitude)));
var local.lngMin = arguments.longitude - degree(local.decRadius / variables.intEarthRadius / cos(radian(arguments.latitude)));
private numeric function degree(required numeric radian) hint="I convert radians to degrees." {
return arguments.radian * 180 / pi();
}
private numeric function radian(required numeric degrees) hint="I convert degrees to radians." {
return arguments.degrees * pi() / 180;
}
Have you looked at this...
http://cflib.org/udf/getHaversineDistance
(New URL since CFLib.org switched to static site generator)
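For reference, the calculation such a UDF needs to perform boils down to the standard Haversine formula. Below is a minimal sketch in JavaScript rather than CFML (the function and constant names are illustrative); inputs are decimal degrees and the result is in kilometres.
var EARTH_RADIUS_KM = 6371;

function toRadians(degrees) {
    return degrees * Math.PI / 180;
}

function haversineDistanceKm(lat1, lon1, lat2, lon2) {
    var dLat = toRadians(lat2 - lat1);
    var dLon = toRadians(lon2 - lon1);
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    return EARTH_RADIUS_KM * c;
}

// e.g. haversineDistanceKm(51.5074, -0.1278, 48.8566, 2.3522) is roughly 344 km (London to Paris)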
My goal here is to improve the user experience so that the cursor goes where the user would intuitively expect it to when moving the joystick diagonally, whatever that means.
Consider a joystick that has a different configured speed for each direction.
e.g. Maybe the joystick has a defect where some directions are too sensitive and some aren't sensitive enough, so you're trying to correct for that. Or maybe you're playing an FPS where you rarely need to look up or down, so you lower the Y-sensitivity.
Here are our max speeds for each direction:
var map = {
x: 100,
y: 200,
}
The joystick input gives us a vector whose length ranges from 0 to 1.
Right now the joystick is tilted to the right 25% of the way and tilted up 50% of the way:
joystick = { dx: 0.25, dy: -0.50 }
Sheepishly, I'm not sure where to go from here.
Edit: I will try @Caderyn's solution:
var speeds = {
x: 100, // max speed of -100 to 100 on x-axis
y: 300, // max speed of -300 to 300 on y-axis
}
var joystick = { dx: 2, dy: -3 }
console.log('joystick normalized:', normalize(joystick))
var scalar = Math.sqrt(joystick.dx*joystick.dx / (speeds.x*speeds.x) + joystick.dy*joystick.dy / (speeds.y*speeds.y))
var scalar2 = Math.sqrt(joystick.dx*joystick.dx + joystick.dy*joystick.dy)
console.log('scalar1' , scalar) // length formula that uses max speeds
console.log('scalar2', scalar2) // regular length formula
// normalize using maxspeeds
var normalize1 = { dx: joystick.dx/scalar, dy: joystick.dy/scalar }
console.log('normalize1', normalize1, length(normalize1))
// regular normalize (no max speed lookup)
var normalize2 = { dx: joystick.dx/scalar2, dy: joystick.dy/scalar2 }
console.log('normalize2', normalize2, length(normalize2))
function length({dx, dy}) {
return Math.sqrt(dx*dx + dy*dy)
}
function normalize(vector) {
var {dx,dy} = vector
var len = length(vector)
return {dx: dx/len, dy: dy/len}
}
Am I missing something massive or does this give the same results as regular vector.len() and vector.normalize() that don't try to integrate the maxspeed data at all?
Three solutions:
You can simply multiply each component of the input vector by its respective max speed.
You can divide the vector itself by sqrt(dx^2/hSpeed^2 + dy^2/vSpeed^2).
You can multiply the vector itself by sqrt((dx^2 + dy^2) / (dx^2/hSpeed^2 + dy^2/vSpeed^2)), or return 0 if the input is (0, 0).
The second solution preserves the vector's direction, while the first tends to pull it toward the direction with the greater max speed. But if the domain of these functions is the unit disc, their image will be an ellipse whose radii are the two max speeds.
EDIT: the third method does what the second intended to do: if the input is A, it returns B such that a/b = c/d (the second method was returning C). The first and third approaches are sketched in code below.
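A minimal sketch of the first and third approaches in JavaScript (hSpeed, vSpeed and the function names are illustrative, not taken from the question):
// (dx, dy) is the raw joystick input on the unit disc; hSpeed/vSpeed are the max speeds.

// First approach: scale each axis independently (pulls the direction toward the faster axis).
function scalePerAxis(dx, dy, hSpeed, vSpeed) {
    return { dx: dx * hSpeed, dy: dy * vSpeed }
}

// Third approach: scale the whole vector so that a full tilt lands on the ellipse with
// radii hSpeed and vSpeed, keeping both the direction and the proportional tilt.
function scaleKeepingDirection(dx, dy, hSpeed, vSpeed) {
    var d2 = dx * dx + dy * dy
    if (d2 === 0) return { dx: 0, dy: 0 }
    var k = Math.sqrt(d2 / (dx * dx / (hSpeed * hSpeed) + dy * dy / (vSpeed * vSpeed)))
    return { dx: dx * k, dy: dy * k }
}

// e.g. scaleKeepingDirection(0.25, -0.5, 100, 200) gives roughly { dx: 39.5, dy: -79.1 },
// i.e. the same direction as the input, scaled onto the speed ellipse.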
I am looking at code references for a simple PID implementation on Arduino.
These are a few of the implementations:
YMFC
pid_error_temp = gyro_pitch_input - pid_pitch_setpoint;
pid_i_mem_pitch += pid_i_gain_pitch * pid_error_temp;
if(pid_i_mem_pitch > pid_max_pitch)pid_i_mem_pitch = pid_max_pitch;
else if(pid_i_mem_pitch < pid_max_pitch * -1)pid_i_mem_pitch = pid_max_pitch * -1;
pid_output_pitch = pid_p_gain_pitch * pid_error_temp + pid_i_mem_pitch + pid_d_gain_pitch * (pid_error_temp - pid_last_pitch_d_error);
if(pid_output_pitch > pid_max_pitch)pid_output_pitch = pid_max_pitch;
else if(pid_output_pitch < pid_max_pitch * -1)pid_output_pitch = pid_max_pitch * -1;
pid_last_pitch_d_error = pid_error_temp;
lobodol
error_sum[PITCH] += errors[PITCH];
deltaErr[PITCH] = errors[PITCH] - previous_error[PITCH];
previous_error[PITCH] = errors[PITCH];
pitch_pid = (errors[PITCH] * Kp[PITCH]) + (error_sum[PITCH] * Ki[PITCH]) + (deltaErr[PITCH] * Kd[PITCH]);
Arduino Forum Post
double PTerm = kp * error;
integral += error * (double) (timeChange * .000001);
ITerm = ki * integral;
// Derivative term using angle change
derivative = (input - lastInput) / (double)(timeChange * .000001);
DTerm = (-kd * derivative);
//Compute PID Output
double output = PTerm + ITerm + DTerm ;
brettbeauregard
void Compute()
{
    /*How long since we last calculated*/
    unsigned long now = millis();
    double timeChange = (double)(now - lastTime);
    /*Compute all the working error variables*/
    double error = Setpoint - Input;
    errSum += (error * timeChange);
    double dErr = (error - lastErr) / timeChange;
    /*Compute PID Output*/
    Output = kp * error + ki * errSum + kd * dErr;
    /*Remember some variables for next time*/
    lastErr = error;
    lastTime = now;
}
Can anyone give me explanations for the following:
lobodol & YMFC ignore the time constant - how does that affect the PID calculations?
In the YMFC code the I term is
pid_i_mem_pitch += pid_i_gain_pitch * pid_error_temp;
Why is he multiplying by the error?
lobodol just adds the present error to the accumulated error sum, while the other two multiply the error by the time change.
Any other simple implementation suggestions are also welcome.
The lobodol and YMFC systems work without using the time constant because the code is written in such a manner that the Arduino does nothing other than the control loop.
As such, the time difference between calculations of the P, I and D terms remains the same.
There is no difference between how you would tune these systems and how you would tune any other PID system.
While this works, it also means that the final tuned PID values apply only to these systems, running at their fixed loop rate, and not to any other.
The other systems use the time difference in their calculations. This means that their tuned PID values could be used with other systems as well (more reliably than those from lobodol and YMFC).
In the YMFC implementation
pid_i_mem_pitch += pid_i_gain_pitch * pid_error_temp;
Note the '+' in the '+=' operator: the error is still being accumulated. The gain multiplication is simply done before the addition instead of after it.
Both methods yield (theoretically) the same result.
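To make the time-constant point concrete, here is a rough sketch of a time-aware PID step in JavaScript (not any of the quoted Arduino implementations; the names are illustrative), together with how the fixed-loop-rate gains relate to it:
// With a constant loop time dt, the fixed-loop style (lobodol/YMFC) gives the same output
// if its gains absorb dt: ki_fixed = ki * dt and kd_fixed = kd / dt.
function pidStep(state, setpoint, input, dt, kp, ki, kd) {
    var error = setpoint - input
    state.integral += error * dt                       // I term accumulates error over time
    var derivative = (error - state.lastError) / dt    // D term is the rate of change of the error
    state.lastError = error
    return kp * error + ki * state.integral + kd * derivative
}

// e.g. var state = { integral: 0, lastError: 0 }
//      var output = pidStep(state, setpoint, measurement, 0.004, kp, ki, kd) // 250 Hz loop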
I hope this helps.
I currently have a function that updates a value using the sin function, and a timeFactor double that keeps track of how much time has passed since the program started:
double timeFactor = 0;
double delta;
while (running) {
    delta = currentTime - lastTime;
    timeFactor += delta;
    var objectX = sin(timeFactor);
}
However, I need to convert this code to use the delta rather than the timeFactor.
I.e. for updating to sin(time + delta) I only want to use the current value of sin(time) and anything calculated from the value of delta.
I.e. calculate sin(time + delta) == f(sin(time), delta)
How do I do this?
From math:
sin(A + B) == sin(A) * cos(B) + cos(A) * sin(B)
cos(A + B) == cos(A) * cos(B) - sin(A) * sin(B)
Store sin(A) and cos(A) in two variables.
Then, when updating them, use a temporary copy of one of them; otherwise you will update the second using the new value of the first instead of the old one.
Assuming:
persistent objectX stores the current sin value and is initialised with the initial sin(timeFactor)
persistent objectXc stores the current cos value and is initialised with the initial cos(timeFactor)
temporary objectXt stores a copy of objectX
("persistent" as in "keeps value across executions of update code",
in contrast to "temporary" as in "only keeps value during update code";
this is to avoid using the "global" attribute, which implies poor data design)
Update code:
objectXt = objectX;
objectX = objectX * cos(delta) + objectXc * sin(delta);
objectXc = objectXc * cos(delta) - objectXt * sin(delta);
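A self-contained JavaScript sketch of the same update (sin(delta) and cos(delta) depend only on delta, so they can be precomputed once if delta is constant):
var sinT = Math.sin(0)   // sin(timeFactor) at the start
var cosT = Math.cos(0)   // cos(timeFactor) at the start

function advance(delta) {
    var sinD = Math.sin(delta)
    var cosD = Math.cos(delta)
    var oldSinT = sinT
    sinT = sinT * cosD + cosT * sinD     // sin(time + delta)
    cosT = cosT * cosD - oldSinT * sinD  // cos(time + delta)
    return sinT                          // the new objectX
}

If many small steps accumulate, floating-point drift can be corrected now and then by rescaling the pair so that sinT*sinT + cosT*cosT stays equal to 1.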
Credit to John Coleman for spotting the problem in the initial idea of using
1 == sin(A)*sin(A) + cos(A)*cos(A)
That would actually have been
sin(time + delta) == f(sin(time), delta)
but it fails for 50% of a full period, because the sign of cos(A) cannot be recovered from sin(A) alone.
So I hope this,
sin(time + delta) == f(sin(time), cos(time), delta)
also helps.
How can I find a point C(x, y, z) between two points A(x, y, z) and B(x, y, z) in a three.js scene?
I know that with this: mid point I can find the middle point between them, but I don't want the middle point; I want to find the point between them that is at distance a from point A.
In this picture you can see what I mean:
Thank you.
Basically you need to get the direction vector between the two points (D), normalize it, and then use it to get the new point: NewPoint = PointA + D * Length.
You can pass the length either as a fraction (0..1) of the distance between the points or as an absolute value from 0 up to the length of the direction vector.
Here you can see some examples using both methods:
Using absolute value:
function getPointInBetweenByLen(pointA, pointB, length) {
var dir = pointB.clone().sub(pointA).normalize().multiplyScalar(length);
return pointA.clone().add(dir);
}
And to use with percentage (0..1)
function getPointInBetweenByPerc(pointA, pointB, percentage) {
var dir = pointB.clone().sub(pointA);
var len = dir.length();
dir = dir.normalize().multiplyScalar(len*percentage);
return pointA.clone().add(dir);
}
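For the original question, where a is the required distance from point A, the first function would be called like this (assuming A and B are THREE.Vector3 instances):
var C = getPointInBetweenByLen(A, B, a);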
See it in action: http://jsfiddle.net/8mnqjsge/
Hope it helps.
I know the question is for THREE.JS, but I ended up here looking for something similar in Babylon JS.
Just in case you are using a Babylon JS Vector3, the formula translates to:
function getPointInBetweenByPerc(pointA, pointB, percentage) {
var dir = pointB.clone().subtract(pointA);
var length = dir.length();
dir = dir.normalize().scale(length * percentage);
return pointA.clone().add(dir);
}
Hope it helps somebody.
This is known as a lerp between two points,
e.g. in three.js:
C = new THREE.Vector3()
C.lerpVectors(A, B, a)
In general this is just a single lerp (linear interpolation) on each axis, basically a * (1 - t) + b * t. Lerp can be described as follows:
function lerp (a, b, t) {
return a + t * (b - a)
}
In your case (see above), with t being the distance a divided by the distance from A to B:
C = {
    x: lerp(A.x, B.x, t),
    y: lerp(A.y, B.y, t),
    z: lerp(A.z, B.z, t)
}
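Note that lerpVectors expects its third argument as a fraction of the distance from A to B, so if a is an absolute distance it has to be normalised first (a short sketch, assuming A and B are THREE.Vector3 instances):
var t = a / A.distanceTo(B);                      // convert the absolute distance into a 0..1 fraction
var C = new THREE.Vector3().lerpVectors(A, B, t);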
I'm using Lucene to create a search engine. It's all going well, but I'm having to implement an algorithm for scoring results based on their relevancy and age. I have three inputs:
Relevancy score - an example would be 2.68065834
Age of document (as a UNIX epoch timestamp, i.e. the number of seconds since 1970) - an example would be 1380979800
Age skew (between 0 and 10, specified by the user; it controls how much of an effect the age of a document has on the overall score)
What I'm doing currently is basically:
ageOfDocumentInHours = age / 3600; //this is to avoid any overflows
ageModifier = ageOfDocumentInHours * ageSkew + 1; // skew of 0 results in relevancy * 1
overallScore = relevancy * ageModifier;
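To give a sense of scale, here is a quick worked calculation with my example numbers and a skew of 1:
ageOfDocumentInHours = 1380979800 / 3600;   // about 383,600
ageModifier = 383600 * 1 + 1;               // about 383,600
overallScore = 2.68065834 * 383600;         // roughly 1.03 million - the age modifier dwarfs the relevancy score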
I know nothing about statistics - is there a better way to do this?
Thanks,
Joe
This is what I ended up doing:
public override float CustomScore(int doc, float subQueryScore, float valSrcScore)
{
    float contentScore = subQueryScore;
    double start = 1262307661d; // epoch seconds for the start of 2010

    if (_dateVsContentModifier == 0)
    {
        return base.CustomScore(doc, subQueryScore, valSrcScore);
    }

    // valSrcScore carries the document's date as an epoch timestamp
    long epoch = (long)(DateTime.Now - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
    long docSinceStartHours = (long)Math.Ceiling((valSrcScore - start) / 3600);
    long nowSinceStartHours = (long)Math.Ceiling((epoch - start) / 3600);

    float ratio = (float)docSinceStartHours / (float)nowSinceStartHours; // Get a fraction where a document that was created this hour has a value of 1
    float ageScore = (ratio * _dateVsContentModifier) + 1; // We add 1 because we don't want the squaring below to make the value smaller

    float ageScoreAdjustedSoNewerIsBetter = 1;
    if (_newerContentModifier > 0)
    {
        // Square it, multiply by the modifier, then take the square root, so that newer content scores higher than older content
        ageScoreAdjustedSoNewerIsBetter = (float)Math.Sqrt((ageScore * ageScore) * _newerContentModifier);
    }

    return ageScoreAdjustedSoNewerIsBetter * contentScore;
}
The basic idea is that the age score is a fraction where 0 is the first day of 2010 and 1 is right now. This decimal value is then multiplied by _dateVsContentModifier, which optionally gives the date a boost over the relevancy score.
The age score is then squared, multiplied by _newerContentModifier and square-rooted. This causes newer content to have a higher score than older content.
Joe