Convert sin calculation to use per-frame delta - math

I currently have a function that updates a value using the sin function, and a timeFactor double that keeps track of how much time has passed since the program started:
double timeFactor = 0;
double delta;
while (running) {
    delta = currentTime - lastTime;
    timeFactor += delta;
    var objectX = sin(timeFactor);
}
However, I need to convert this code to use the delta rather than the timeFactor.
That is, to update to sin(time + delta) I only want to use the current value of sin(time) and anything calculated from the value of delta.
I.e. calculate sin(time + delta) == f(sin(time), delta)
How do I do this?

From math:
sin(A + B) == sin(A) * cos(B) + cos(A) * sin(B)
cos(A + B) == cos(A) * cos(B) - sin(A) * sin(B)
Store sin(A) and cos(A) in two variables.
Then, when updating them, use a temporary copy of one of them; otherwise you will update the second using the new value of the first instead of the old one.
Assuming:
persistent objectX stores the current sin value and is initialised with the initial sin(timeFactor)
persistent objectXc stores the current cos value and is initialised with the initial cos(timeFactor)
temporary objectXt stores a copy of objectX
("persistent" as in "keeps value across executions of update code",
in contrast to "temporary" as in "only keeps value during update code";
this is to avoid using the "global" attribute, which implies poor data design)
Update code:
objectXt = objectX;
objectX  = objectX  * cos(delta) + objectXc * sin(delta);
objectXc = objectXc * cos(delta) - objectXt * sin(delta);
Credit to John Coleman for spotting the problem in the initial idea to use
1 == sin(A)*sin(A) + cos(A)*cos(A)
That would actually have been
sin(time+delta) == f(sin(time), delta)
but it fails for 50% of a full period, because the sign of cos(A) cannot be recovered from sin(A) alone.
So I hope this
sin(time+delta)==f(sin(time), cos(time), delta)
also helps.
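A minimal sketch of the update loop (Python for brevity, frame time of 0.016 s is just an example): keep sin and cos of the running angle and advance both by delta with the angle-addition identities, then compare against computing sin of the accumulated time directly.

```python
import math

def advance(s, c, delta):
    """Advance (sin(t), cos(t)) to (sin(t+delta), cos(t+delta))
    using the angle-addition identities."""
    sd, cd = math.sin(delta), math.cos(delta)
    return s * cd + c * sd, c * cd - s * sd

s, c = math.sin(0.0), math.cos(0.0)
t, delta = 0.0, 0.016  # e.g. one frame at ~60 FPS
for _ in range(1000):
    s, c = advance(s, c, delta)
    t += delta

# The incrementally updated value tracks the direct computation:
assert abs(s - math.sin(t)) < 1e-9
```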

How is R able to sum an integer sequence so fast?

Create a large contiguous sequence of integers:
x <- 1:1e20
How is R able to compute the sum so fast?
sum(x)
Doesn't it have to loop over 1e20 elements in the vector and sum each element?
Summing up the comments:
R introduced something called ALTREP, or ALternate REPresentation for R objects. Its intent is to do some things more efficiently. From https://www.r-project.org/dsc/2017/slides/dsc2017.pdf, some examples include:
allow vector data to be in a memory-mapped file or distributed
allow compact representation of arithmetic sequences;
allow adding meta-data to objects;
allow computations/allocations to be deferred;
support alternative representations of environments.
The second and fourth bullets seem appropriate here.
We can see a hint of this in action by looking at what I'm inferring is at the core of the R sum primitive for altreps, at https://github.com/wch/r-source/blob/7c0449d81c853f781fb13e9c7118065aedaf2f7f/src/main/altclasses.c#L262:
static SEXP compact_intseq_Sum(SEXP x, Rboolean narm)
{
#ifdef COMPACT_INTSEQ_MUTABLE
    /* If the vector has been expanded it may have been modified. */
    if (COMPACT_SEQ_EXPANDED(x) != R_NilValue)
        return NULL;
#endif
    double tmp;
    SEXP info = COMPACT_SEQ_INFO(x);
    R_xlen_t size = COMPACT_INTSEQ_INFO_LENGTH(info);
    R_xlen_t n1 = COMPACT_INTSEQ_INFO_FIRST(info);
    int inc = COMPACT_INTSEQ_INFO_INCR(info);
    tmp = (size / 2.0) * (n1 + n1 + inc * (size - 1));
    if (tmp > INT_MAX || tmp < R_INT_MIN)
        /**** check for overflow of exact integer range? */
        return ScalarReal(tmp);
    else
        return ScalarInteger((int) tmp);
}
Namely, the reduction of an integer sequence without gaps is trivial. It's when there are gaps or NAs that things become a bit more complicated.
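The key line is just the closed-form arithmetic-series sum; a quick sketch of the same formula (hypothetical helper name, not R's API):

```python
def compact_seq_sum(first, inc, size):
    """Sum of the arithmetic sequence first, first+inc, ... (size terms).
    Mirrors the line: tmp = (size / 2.0) * (n1 + n1 + inc * (size - 1))"""
    return (size / 2.0) * (first + first + inc * (size - 1))

# Matches a literal loop over a small sequence like 1:100:
assert compact_seq_sum(1, 1, 100) == sum(range(1, 101))
```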
In action:
vec <- 1:1e10
sum(vec)
# [1] 5e+19
sum(vec[-10])
# Error: cannot allocate vector of size 37.3 Gb
### win11, R-4.2.2
Ideally we would see that sum(vec) == sum(vec[-10]) + 10, but we cannot: removing an element means the result is no longer a compact sequence, so the sequence-summing optimization cannot be used and the full vector must be materialized.

Asymptotic complexity of log(n) * log(log(n))

I was working through a problem last night where I had to insert into a priority queue n times, so the asymptotic complexity was n log n. However, n could be as large as 10^16, so I had to do better. I found a solution that only has to insert into the priority queue log n times, with everything else remaining constant time. So the complexity is log(n) * log(log(n)). Is that my asymptotic complexity, or can it be simplified further?
Here is the algorithm. I was able to reduce the complexity by using a hashmap to count the duplicate priorities that would be inserted into the priority queue and doing a single calculation based on that count.
I know it may not be intuitive from my code how n log n complexity is reduced to log n log log n. I had to walk through examples to figure out that n was reduced to log n. While solvedUpTo used to increase at the same rate as n, by ~n<=20 there were half as many steps to reach the same value of solvedUpTo, by ~n<=30 a third as many, quickly after that a quarter, and so on (all approximate, because I cannot remember the exact numbers).
The code is intentionally left ambiguous to what it is solving:
fun solve(n: Long, x: Long, y: Long): Long {
    val numCount = mutableMapOf<Long, Long>()
    val minQue: PriorityQueue<Long> = PriorityQueue<Long>()
    addToQueue(numCount, minQue, x, 1)
    addToQueue(numCount, minQue, y, 1)
    var answer = x + y
    var solvedUpTo = 2L
    while (solvedUpTo < n) {
        val next = minQue.poll()
        val nextCount = numCount.remove(next)!!
        val quantityToSolveFor = min(nextCount, n - solvedUpTo)
        answer = (answer + ((next + x + y) * quantityToSolveFor)).rem(1000000007)
        addToQueue(numCount, minQue, next + x, quantityToSolveFor)
        addToQueue(numCount, minQue, next + y, quantityToSolveFor)
        solvedUpTo += quantityToSolveFor
    }
    return answer
}

fun <K> addToQueue(numCount: MutableMap<K, Long>, minQue: PriorityQueue<K>, num: K, incrementBy: Long) {
    if (incrementMapAndCheckIfNew(numCount, num, incrementBy)) {
        minQue.add(num)
    }
}

// Returns true if just added
fun <K> incrementMapAndCheckIfNew(map: MutableMap<K, Long>, key: K, incrementBy: Long): Boolean {
    val prevKey = map.putIfAbsent(key, 0L)
    map[key] = map[key]!! + incrementBy
    return prevKey == null
}
Nope, O(log n log log n) is as simplified as that expression is going to get. You sometimes see runtimes like O(n log n log log n) popping up in number theory contexts, and there aren’t simpler common functions that quantities like these are equivalent to.
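For a sense of scale (a back-of-the-envelope check, not part of the original answer), the gap between the two bounds at n = 10^16 is enormous:

```python
import math

n = 10 ** 16
n_log_n = n * math.log2(n)                                 # original bound: ~5e17
log_n_log_log_n = math.log2(n) * math.log2(math.log2(n))   # improved bound: ~300

assert log_n_log_log_n < 400
assert n_log_n / log_n_log_log_n > 1e15
```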

PID Implementation

I am looking at code references for a simple PID implementation on Arduino.
These are a few implementations:
YMFC
pid_error_temp = gyro_pitch_input - pid_pitch_setpoint;
pid_i_mem_pitch += pid_i_gain_pitch * pid_error_temp;
if(pid_i_mem_pitch > pid_max_pitch)pid_i_mem_pitch = pid_max_pitch;
else if(pid_i_mem_pitch < pid_max_pitch * -1)pid_i_mem_pitch = pid_max_pitch * -1;
pid_output_pitch = pid_p_gain_pitch * pid_error_temp + pid_i_mem_pitch + pid_d_gain_pitch * (pid_error_temp - pid_last_pitch_d_error);
if(pid_output_pitch > pid_max_pitch)pid_output_pitch = pid_max_pitch;
else if(pid_output_pitch < pid_max_pitch * -1)pid_output_pitch = pid_max_pitch * -1;
pid_last_pitch_d_error = pid_error_temp;
lobodol
error_sum[PITCH] += errors[PITCH];
deltaErr[PITCH] = errors[PITCH] - previous_error[PITCH];
previous_error[PITCH] = errors[PITCH];
pitch_pid = (errors[PITCH] * Kp[PITCH]) + (error_sum[PITCH] * Ki[PITCH]) + (deltaErr[PITCH] * Kd[PITCH]);
Arduino Forum Post
double PTerm = kp * error;
integral += error * (double) (timeChange * .000001);
ITerm = ki * integral;
// Derivative term using angle change
derivative = (input - lastInput) / (double)(timeChange * .000001);
DTerm = (-kd * derivative);
//Compute PID Output
double output = PTerm + ITerm + DTerm ;
brettbeauregard
void Compute()
{
/*How long since we last calculated*/
unsigned long now = millis();
double timeChange = (double)(now - lastTime);
/*Compute all the working error variables*/
double error = Setpoint - Input;
errSum += (error * timeChange);
double dErr = (error - lastErr) / timeChange;
/*Compute PID Output*/
Output = kp * error + ki * errSum + kd * dErr;
/*Remember some variables for next time*/
lastErr = error;
lastTime = now;
}
Can anyone give me explanations for the following:
lobodol & YMFC ignore the elapsed time (dt). How does that affect the PID calculations?
In the YMFC code the I term is
pid_i_mem_pitch += pid_i_gain_pitch * pid_error_temp;
Why is he multiplying by the error? lobodol just adds the previous error to the present error, while the other two multiply it by the time change.
Any other simple implementation suggestions are also welcome.
The lobodol and YMFC systems work without using the elapsed time because the code is written so that the Arduino does nothing other than the control loop.
As such, the time between successive calculations of the P, I and D terms stays effectively constant, and that constant can be folded into the gains.
There is no difference in how you would tune these systems compared to any other PID system.
While this works, it also means the final tuned PID values apply only to these systems and cannot be transferred to any other.
The other implementations use the time difference explicitly in their calculations, which means their tuned PID values could be reused (more reliably than with lobodol and YMFC) on other systems as well.
In the YMFC implementation
pid_i_mem_pitch += pid_i_gain_pitch * pid_error_temp;
^
Note the '+' before the '=': the error is being accumulated; the gain multiplication is simply done before the addition rather than after it.
Both methods yield (theoretically) the same result.
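A minimal sketch (Python, not any of the quoted codebases, with made-up error samples and gain) of why the two integral-term styles agree: for a constant gain, scaling each error before accumulating equals accumulating first and scaling once.

```python
errors = [0.5, -0.2, 0.1, 0.4]   # hypothetical error samples
ki = 2.0                          # hypothetical integral gain

i_term_once = ki * sum(errors)    # lobodol style: accumulate raw error, scale at the end

i_term_each = 0.0                 # YMFC style: scale each sample, then accumulate
for e in errors:
    i_term_each += ki * e

assert abs(i_term_once - i_term_each) < 1e-12
```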
I hope this helps.

Calculation for scoring search results based or relevancy and age

I'm using Lucene to create a search engine. It's all going well, but I have to implement an algorithm for scoring results based on their relevancy and age. I have three inputs:
Relevancy score - for example 2.68065834
Age of the document (in UNIX epoch format, i.e. seconds since 1970) - for example 1380979800
Age skew (between 0 and 10, specified by the user; it controls how much effect the age of a document has on the overall score)
What I'm doing currently is basically:
ageOfDocumentInHours = age / 3600; // this is to avoid any overflows
ageModifier = ageOfDocumentInHours * ageSkew + 1; // skew of 0 results in relevancy * 1
overallScore = relevancy * ageModifier;
I know nothing about statistics - is there a better way to do this?
Thanks,
Joe
This is what I ended up doing:
public override float CustomScore(int doc, float subQueryScore, float valSrcScore)
{
    float contentScore = subQueryScore;
    double start = 1262307661d; // 2010
    if (_dateVsContentModifier == 0)
    {
        return base.CustomScore(doc, subQueryScore, valSrcScore);
    }

    long epoch = (long)(DateTime.Now - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
    long docSinceStartHours = (long)Math.Ceiling((valSrcScore - start) / 3600);
    long nowSinceStartHours = (long)Math.Ceiling((epoch - start) / 3600);
    float ratio = (float)docSinceStartHours / (float)nowSinceStartHours; // a document created this hour has a value of 1

    float ageScore = (ratio * _dateVsContentModifier) + 1; // add 1 so the squaring below cannot make the value smaller
    float ageScoreAdjustedSoNewerIsBetter = 1;
    if (_newerContentModifier > 0)
    {
        // Square it, multiply it, then take the square root. This boosts newer
        // content over older content beyond the plain linear ratio.
        ageScoreAdjustedSoNewerIsBetter = (float)Math.Sqrt((ageScore * ageScore) * _newerContentModifier);
    }
    return ageScoreAdjustedSoNewerIsBetter * contentScore;
}
The basic idea is that the age score is a fraction where 0 is the first day of 2010 and 1 is right now. This value is then multiplied by _dateVsContentModifier, which optionally gives the date a boost over the relevancy score.
The age score is then squared, multiplied by _newerContentModifier and square-rooted. This causes newer content to have a higher score than older content.
Joe
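Outside the Lucene API, the same age-ratio idea can be sketched like this (Python for brevity; the modifier values are hypothetical, not from the answer):

```python
import math
import time

def age_boost(doc_epoch, date_vs_content=1.0, newer_modifier=2.0):
    """Sketch of the scoring idea above: map the document's creation time
    to a 0..1 ratio (start of 2010 -> 0, now -> 1), add 1, then apply
    sqrt((ageScore^2) * modifier)."""
    start = 1262307661  # seconds since 1970 at the start of 2010
    ratio = (doc_epoch - start) / (time.time() - start)
    age_score = ratio * date_vs_content + 1  # +1 so the squaring cannot shrink it
    if newer_modifier > 0:
        return math.sqrt((age_score ** 2) * newer_modifier)
    return age_score

# A document created right now outranks one from 2013, all else being equal:
assert age_boost(time.time()) > age_boost(1380979800)
```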

Trapezoidal integral in MATLAB

Instead of MATLAB's built-in integration command, I want to use a basic self-created one. Do you have any idea how to fix the error? If I use MATLAB's quad command my algorithm works fine, but when I try to use my self-created integral function it, not surprisingly, does not work :(
M-File:
function y = trapapa(low, up, ints, fun)
y = 0;
step = (up - low) / ints;
for j = low : step : up
    y = y + feval(fun, j);
end
y = (y - (feval(fun, low) + feval(fun, up)) / 2) * step;
Main algorithm:
clear;
x0 = linspace(0, 4, 3);
y = linspace(0, 2, 3);
for i = 1:length(x0)
    for j = 1:length(y)
        x(i,j) = y(j) + x0(i);
        alpha = @(rho) ((5 - 2*x(i,j)) .* exp(y(j) - rho)) ./ 2;
        % int(i,j) = quad(alpha, 0, y(j))
        int(i,j) = trapapa(alpha, 0, y(j), 10)
    end
end
You are not following your function definition in the script. The fun parameter (variable alpha) is supposed to be the last one.
Try int(i,j)=trapapa(0,y(j),10,alpha)
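A re-sketch of the same trapezoidal rule with the corrected argument order, translated to Python so it can be checked against a known integral (the function name and signature mirror the M-file):

```python
import math

def trapapa(low, up, ints, fun):
    """Trapezoidal rule: sum every sample point, then subtract half of
    each endpoint and multiply by the step (as in the M-file above)."""
    step = (up - low) / ints
    total = sum(fun(low + k * step) for k in range(ints + 1))
    return (total - (fun(low) + fun(up)) / 2) * step

# The integral of sin from 0 to pi is exactly 2:
assert abs(trapapa(0.0, math.pi, 1000, math.sin) - 2.0) < 1e-4
```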