sympy: rootfinding of parameterized function in range - constraints

I'm having trouble finding the roots of a parameterized quintic polynomial. Background: I want to find the parameter s_f such that, for any given parameter d_f, the curvature of the polynomial stays below a threshold (it sounds complex, but the math is rather straightforward):
from sympy import *  # assuming a Jupyter/IPython session, where display() is available

# define quintic polynomial (jerk-minimized trajectory)
# see http://courses.shadmehrlab.org/Shortcourse/minimumjerk.pdf
s = symbols('s', real=True, positive=True)
s_f = symbols('s_f', real=True, positive=True, nonzero=True)
d_0 = 0
d_f = symbols('d_f', real=True, positive=True, nonzero=True)
d_of_s = d_0 + (d_f - d_0) * ( 10*(s/s_f)**3 - 15*(s/s_f)**4 + 6*(s/s_f)**5 )
display(d_of_s)
# define curvature of d_of_s
# see https://en.wikipedia.org/wiki/Curvature#In_terms_of_a_general_parametrization
y = d_of_s
dy = diff(d_of_s, s)
ddy = diff(dy, s)
x = s
dx = diff(x, s) # evaluates to 1
ddx = diff(dx, s) # evaluates to 0
k = (dx*ddy - dy*ddx) / ((dx*dx + dy*dy)**Rational(3,2))
# the goal is to find s_f for any given d_f, such that k(s) < some_threshold
# strategy: find the roots of the derivative of k in the range of s∈[0, s_f]
dk = diff(k, s)
dk = simplify(dk)
display(dk)
# now solve
res = solveset(dk, s, Interval(0, s_f).intersection(S.Reals))
display(res)
The function dk(s, d_f, s_f) has two roots in the interval s∈[0, s_f], but solveset returns this:
ConditionSet(s, Eq(5400*d_f**2*s**4*(-s**2 + 2*s*s_f - s_f**2)*(2*s**2 - 3*s*s_f + s_f**2)**2 + (900*d_f**2*s**4*(s**2 - 2*s*s_f + s_f**2)**2 + s_f**10)*(6*s**2 - 6*s*s_f + s_f**2), 0), Interval(0, s_f))
.. which, as far as I know, is equivalent to: "I can't solve this; there are infinitely many results." Well, that is true for the function in general: limit(dk, s, -oo) and limit(dk, s, +oo) are both zero. But since I stated the domain interval, why am I not getting the two roots I'm expecting? I'd also expect a more granular result:
- a set containing the roots for s < 0
- a set containing the roots for s > s_f
- the two roots for s ∈ [0, s_f]
I started with solve() and a lot of different assumptions on my symbols. I get different results for different assumptions, but no combination seems to yield what I need. When I state no assumptions, I get back a set with a huge condition and 8 roots that don't seem real or correct. In general, the constraints are:
- all symbols are real
- s_f > 0
- d_f > 0
- s ∈ [0, s_f] (domain range .. the polynomial is only evaluated in this interval)
I guess the problem is that I'm not setting up my solveset call correctly:
- How do I specify that s_f and d_f are real? As far as I know, the symbol assumptions are ignored when using solveset.
- How do I specify intervals and assumptions for the other symbols, i.e. the symbols other than the domain variable?
This is what d_of_s looks like for s_f = 1, d_f = 1.
And this is what dk(s) looks like (I plotted outside the domain range to visualize the problem).

Substitute in the known values of d_f and s_f and use real_roots to find the real roots of the numerator of dk. Keep the ones that have a value in the range of interest:
>>> s_fi = 3
>>> [i for i in real_roots(dk.subs(d_f,2).subs(s_f, s_fi).as_numer_denom()[0])
... if 0 <= i.n(2) <= s_fi]
[CRootOf(800*s**10 - 12000*s**9 + 74000*s**8 - 240000*s**7 + 432000*s**6 - 410400*s**5
+ 162000*s**4 - 4374*s**2 + 13122*s - 6561, 1), CRootOf(800*s**10 - 12000*s**9 +
74000*s**8 - 240000*s**7 + 432000*s**6 - 410400*s**5 + 162000*s**4 - 4374*s**2 +
13122*s - 6561, 2)]
I kept the CRootOf instances because they can be computed to arbitrary precision, e.g. to 3 digits:
>>> [i.n(3) for i in _]
[0.433, 2.57]
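For reference, the whole chain can be reproduced in one self-contained script. This is only a sketch: it assumes SymPy is available and hard-codes the example values d_f = 2 and s_f = 3 used above.
from sympy import symbols, diff, simplify, Rational, real_roots

s, s_f, d_f = symbols('s s_f d_f', real=True, positive=True)
d_of_s = d_f * (10*(s/s_f)**3 - 15*(s/s_f)**4 + 6*(s/s_f)**5)   # d_0 = 0

dy = diff(d_of_s, s)
ddy = diff(dy, s)
k = ddy / (1 + dy**2)**Rational(3, 2)    # curvature with x = s, so dx = 1 and ddx = 0
dk = simplify(diff(k, s))

s_fi = 3
num = dk.subs({d_f: 2, s_f: s_fi}).as_numer_denom()[0]
roots = [r for r in real_roots(num) if 0 <= r.n(2) <= s_fi]
print([r.n(3) for r in roots])           # the two roots reported above: [0.433, 2.57]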

Related

How do you simplify the difference between functions of x in R with respect to a Calculus context?

First of all, this looks like a fair amount of Calculus, so I predict that it would get forwarded to Cross-Validated by someone who thinks that this is TL;DR. But I think this is a programming question, so hear me out.
Imagine that I have the following functions in terms of x: f(x), g(x), h(x) ...
f(x) = 2x^2 + 4x - 30
g(x) = x^2 - x + 12
h(x) = f(x) - g(x) = (2x^2 + 4x - 30) - (x^2 - x + 12) = x^2 + 5*x - 42
Note: if I were to compute g(x) - f(x) here, I would get a different polynomial, but with the same roots, so it doesn't really matter: if I took the coefficients from g(x) - f(x), polyroot() would return the same x-intercepts, i.e. the intersection points where f(x) = g(x).
I am able to write h(x) = (2x^2 + 4x - 30) - (x^2 - x + 12), but I can't reduce it to x^2 + 5*x - 42, which is just a simplified version of the same function h(x). I need it in this form because computing the intersections of the functions requires the coefficients of the difference function. I would then use the points of intersection to integrate the greater function minus the smaller function over the range between the intersections, and this difference integral is simply the area between the functions.
So my goal is to compute the area between two intersecting functions.
My problem is that I want to automate the whole process, and to do that I need to simplify the h(x) difference function to 1*x^2 + 5*x - 42, whose coefficients in increasing order of degree are -42, 5, 1.
So let's just write the code:
fx <- function(x){2*x^2 + 4*x - 30}
gx <- function(x){1*x^2 - 1*x + 12}
hx <- function(x){fx - gx} # doesn't work because I can't pass it to curve(hx)
hx <- function(x){(2*x^2 + 4*x - 30) - (1*x^2 - 1*x + 12)} # works
but it is not in the form that I want.
> hx
function(x){(2*x^2 + 4*x - 30) - (1*x^2 - 1*x + 12)}
<bytecode: 0x000000001c0bfc10>
Errors:
> curve(hx)
Error in expression(fx) - expression(gx) :
non-numeric argument to binary operator
See this is why I need the coefficients.
> z <- polyroot(c(-42, 5, 1)) # polyroot functions give you the x-intercepts of a polynomial function.
> z
[1] 4.446222-0i -9.446222+0i
Of course I could just compute "x^2 + 5*x - 42" with pen and paper, but they say that programmers always want to find the most efficient algorithmic process with the least amount of work.
Now I need to see which function is greater than the other over the given range. There are two ways: visually or numerically. (This is for the Calculus II part.)
x = seq(from = -9.4, to = 4.4, by = 0.2)
fx_range = 2*x^2 + 4*x - 30
gx_range = 1*x^2 - 1*x + 12
> table(fx_range >= gx_range)
FALSE
70
> table(gx_range >= fx_range)
TRUE
70
It looks like the g(x) function is greater than or equal to the f(x) function over the range between the intersection points. So, according to calculus, I should evaluate the integral of g(x) - f(x). I was just doing f(x) - g(x) earlier for the polyroot function.
Area between curves = ∫ from -9.446222 to 4.446222 of [g(x) - f(x)] dx
= ∫ from -9.446222 to 4.446222 of [(x^2 - x + 12) - (2*x^2 + 4*x - 30)] dx
gx_minus_fx = function(x){(x^2 - x + 12) - (2*x^2 + 4*x - 30)}
Area = integrate(gx_minus_fx, lower = -9.446222, upper = 4.446222)
Area
446.8736 with absolute error < 5e-12 # This is exactly what I wanted to compute!
Now let's graphically check if I was supposed to subtract g(x) - f(x):
> curve(fx, main = "Functions with their Intersection Points", xlab = "x", ylab = "Functions of x", from = -9.446222, to = 4.446222)
> curve(gx, col = "red", add = TRUE)
> legend("topright", c("f(x) = 2x^2 + 4x - 30", "g(x) = x^2 - x + 12"), fill = c("black", "red"))
Yeah, I did it right!
So again, what I would like help with is figuring out how I could simplify
h(x) = f(x) - g(x) to x^2 + 5*x - 42.
This appears to be an algebraic problem. I showed that I could do high-level Calculus 2 in R, and I would just like to know if there is a way that I can automate this whole process for the h(x) function.
Thank you!!!
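Purely as an illustration of the kind of automation being asked for, and not as an R answer, here is a sketch of the same pipeline in Python with SymPy (the choice of library and all names are my own assumptions); the same idea should carry over to a symbolic or polynomial-arithmetic package in R.
from sympy import symbols, expand, Poly, solve, integrate

x = symbols('x')
f = 2*x**2 + 4*x - 30
g = x**2 - x + 12

h = expand(f - g)                                        # x**2 + 5*x - 42
coeffs = Poly(h, x).all_coeffs()                         # [1, 5, -42] (highest degree first; reverse for polyroot-style order)
lo, hi = sorted(solve(h, x), key=lambda r: r.evalf())    # intersection points
area = integrate(g - f, (x, lo, hi))                     # area between the curves
print(h, coeffs, [lo.n(7), hi.n(7)], area.n(7))          # roots ~ -9.446222, 4.446222; area ~ 446.8736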

Numerically stable evaluation of sqrt(x+a) - sqrt(x)

Is there an elegant way of evaluating the following expression in a numerically stable way, for the full parameter range x, a >= 0?
f(x,a) = sqrt(x+a) - sqrt(x)
Also, is there any programming language or library that provides this kind of function? If so, under what name? I have no specific problem requiring the above expression right now, but I have encountered it many times in the past and always thought that this problem must have been solved before!
Yes, there is! Provided that at least one of x and a is positive, you can use:
f(x, a) = a / (sqrt(x + a) + sqrt(x))
which is perfectly numerically stable, but hardly worth a library function in its own right. Of course, when x = a = 0, the result should be 0.
Explanation: sqrt(x + a) - sqrt(x) is equal to (sqrt(x + a) - sqrt(x)) * (sqrt(x + a) + sqrt(x)) / (sqrt(x + a) + sqrt(x)). Now multiply the first two terms to get sqrt(x+a)^2 - sqrt(x)^2, which simplifies to a.
Here's an example demonstrating the stability: the troublesome case for the original expression is where x + a and x are very close in value (or equivalently when a is much smaller in magnitude than x). For example, if x = 1 and a is small, we know from a Taylor expansion around 1 that sqrt(1 + a) should be 1 + a/2 - a^2/8 + O(a^3), so sqrt(1 + a) - sqrt(1) should be close to a/2 - a^2/8. Let's try that for a particular choice of small a. Here's the original function (written in Python, in this case, but you can treat it as pseudocode):
from math import sqrt

def f(x, a):
    return sqrt(x + a) - sqrt(x)
and here's the stable version:
def g(x, a):
    if a == 0:
        return 0.0
    else:
        return a / (sqrt(x + a) + sqrt(x))
Now let's see what we get with x = 1 and a = 2e-10:
>>> a = 2e-10
>>> f(1, a)
1.000000082740371e-10
>>> g(1, a)
9.999999999500001e-11
The value we should have got is (up to machine accuracy) a/2 - a^2/8; for this particular a, the cubic and higher-order terms are insignificant in the context of IEEE 754 double-precision floats, which only provide around 16 decimal digits of precision. Let's compute that value for comparison:
>>> a/2 - a**2/8
9.999999999500001e-11

Given a list of coefficients, create a polynomial

I want to create a polynomial with given coefficients. This seems very simple, but nothing I have found so far does what I want.
For example, in an environment like this:
n = 11
K = GF(4,'a')
R = PolynomialRing(GF(4,'a'),"x")
x = R.gen()
a = K.gen()
v = [1,a,0,0,1,1,1,a,a,0,1]
Given a list/vector v of length n (I will set this n and v at the beginning), I want to get the polynomial v(x) as the sum of v[i]*x^i.
(Actually, after getting this v(x), I am going to build the quotient ring GF(4,'a')[x] / <x^n - v(x)>.) Then I will say:
S = R.quotient(x^n-v(x), 'y')
y = S.gen()
But I couldn't figure out how to write v(x).
This is a frequently asked question in many places, so it is better to leave an answer here, even though the answer I have is very simple:
I just wrote R(v) and it gave me the polynomial:
n = 11
K = GF(4,'a')
R = PolynomialRing(GF(4,'a'),"x")
x = R.gen()
a = K.gen()
v = [1,a,0,0,1,1,1,a,a,0,1]
R(v)
x^10 + a*x^8 + a*x^7 + x^6 + x^5 + x^4 + a*x + 1
Basically (that is, ignoring the specifics of your polynomial ring) you have a list/vector v of length n and you require a polynomial which is the sum of all v[i]*x^i. Note that this sum equals the matrix product V.X where V is a one row matrix (essentially equal to the vector v) and X is a column matrix consisting of powers of x. In Maxima you could write
v: [1,a,0,0,1,1,1,a,a,0,1]$
n: length(v)$
V: matrix(v)$
X: genmatrix(lambda([i,j], x^(i-1)), n, 1)$
V.X;
The output is
x^10+a*x^8+a*x^7+x^6+x^5+x^4+a*x+1
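For comparison only, here is the same sum-of-v[i]*x^i construction sketched in Python with SymPy, treating a as an ordinary symbol rather than a GF(4) generator (so the finite-field arithmetic is ignored):
from sympy import symbols, Poly

x, a = symbols('x a')
v = [1, a, 0, 0, 1, 1, 1, a, a, 0, 1]
p = sum(c * x**i for i, c in enumerate(v))   # sum of v[i]*x**i
print(Poly(p, x).as_expr())                  # x**10 + a*x**8 + a*x**7 + x**6 + x**5 + x**4 + a*x + 1 (up to term ordering)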

translating matlab script to R

I've just been working through converting some MATLAB scripts to work in R; however, having never used MATLAB in my life, and not exactly being an expert on R, I'm having some trouble.
Edit: It's a script I was given, designed to correct temperature measurements for lag generated by insulation mass effects. My understanding is that it looks at the rate of change of the temperature and attempts to adjust for errors generated by the response time of the sensor. Unfortunately there is no literature available to me to give an indication of the numbers I should expect from the function, and the only way to find out will be to test it experimentally at a later date.
The original script:
function [Tc, dT] = CTD_TempTimelagCorrection(T0,Tau,t)
N1 = Tau/t;
Tc = T0;
N = 3;
for j=ceil(N/2):numel(T0)-ceil(N/2)
    A = nan(N,1);
    % Compute weights
    for k=1:N
        A(k) = (1/N) + N1 * ((12*k - (6*(N+1))) / (N*(N^2 - 1)));
    end
    A = A./sum(A);
    % Verify unity
    if sum(A) ~= 1
        disp('Error: Sum of weights is not unity');
    end
    Comp = nan(N,1);
    % Compute components
    for k=1:N
        Comp(k) = A(k)*T0(j - (ceil(N/2)) + k);
    end
    Tc(j) = sum(Comp);
    dT = Tc - T0;
end
And here is where I've managed to get to:
CTD_TempTimelagCorrection <- function(temp,Tau,t){
  ## Define which equation to use based on duration of lag and frequency
  ## With ESM2 profiler sampling @ 2 Hz: N1 > tau/t = TRUE
  N1 = Tau/t
  Tc = temp
  N = 3
  for(i in ceiling(N/2):length(temp)-ceiling(N/2)){
    A = matrix(nrow=N,ncol=1)
    # Compute weights
    for(k in 1:N){
      A[k] = (1/N) + N1 * ((12*k - (6*(N+1))) / (N*(N^2 - 1)))
    }
    A = A/sum(A)
    # Verify unity
    if(sum(A) != 1){
      print("Error: Sum of weights is not unity")
    }
    Comp = matrix(nrow=N,ncol=1)
    # Compute components
    for(k in 1:N){
      Comp[k] = A[k]*temp[i - (ceiling(N/2)) + k]
    }
    Tc[i] = sum(Comp)
    dT = Tc - temp
  }
  return(dT)
}
I think the problem is the Comp[k] line; could someone point out what I've done wrong? I'm not sure I can select the elements of the array in that way.
By the way, Tau = 1, t = 0.5, and temp (or T0) will be a vector.
Thanks
Edit: apparently my description of my code samples is too brief; I'm not really sure what more I could write that would be relevant and not just waste people's time. Is this enough, Mr Filter?
The error is as follows:
Error in Comp[k] = A[k] * temp[i - (ceiling(N/2)) + k] :
replacement has length zero
In addition: Warning message:
In Comp[k] = A[k] * temp[i - (ceiling(N/2)) + k] :
number of items to replace is not a multiple of replacement length
If you write print(i - (ceiling(N/2)) + k) before that line, you will see that you are using incorrect indices in temp[i - (ceiling(N/2)) + k], which means that nothing is returned to be inserted into Comp[k]. I assume this problem is due to MATLAB allowing 0 as an index while R does not, and to the way negative indices are handled (they don't work the same in both languages). You need to implement a fix so that the correct indices are produced.
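For reference rather than as the requested R translation, here is a sketch of the same three-point weighted filter in Python/NumPy; the function name is made up, and the 0-based loop bounds spell out exactly which window the MATLAB code addresses.
import math
import numpy as np

def ctd_temp_timelag_correction(T0, tau, t, N=3):
    """Sketch of the MATLAB filter above: a weighted moving window of width N."""
    T0 = np.asarray(T0, dtype=float)
    N1 = tau / t
    k = np.arange(1, N + 1)
    A = (1.0 / N) + N1 * ((12 * k - 6 * (N + 1)) / (N * (N**2 - 1)))
    A = A / A.sum()                      # normalize so the weights sum to 1
    Tc = T0.copy()
    half = math.ceil(N / 2)              # = 2 for N = 3, as in the MATLAB loop bounds
    # MATLAB: for j = ceil(N/2) : numel(T0)-ceil(N/2), window T0(j - ceil(N/2) + k), k = 1..N
    for j in range(half - 1, len(T0) - half):
        Tc[j] = np.dot(A, T0[j - half + 1 : j - half + 1 + N])
    return Tc, Tc - T0
Calling it as Tc, dT = ctd_temp_timelag_correction(temp, 1, 0.5) mirrors the Tau = 1, t = 0.5 values mentioned above.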

equivalent expressions

I'm trying to figure out equivalent expressions for the following equations using only bitwise, addition, and/or subtraction operators. I know there's supposed to be an answer (which furthermore generalizes to work for any modulus 2^a-1, where a is a power of 2), but for some reason I can't seem to figure out what the relation is.
Initial expressions:
x = n % (2^32-1);
c = (int)n / (2^32-1); // ints are 32-bit, but x, c, and n may have a greater number of bits
My procedure for the first expression was to take the result modulo 2^32, then try to make up the difference between the two remainders. I'm having trouble with this second part.
x = n & 0xFFFFFFFF + difference // how do I calculate difference?
I know that the difference n%(2^32) - n%(2^32-1) is periodic (with a period of 2^32*(2^32-1)), and there's a "spike up" starting at each multiple of 2^32-1 and ending at the next multiple of 2^32. After each multiple of 2^32, the difference plot decreases by 1 (hopefully my description makes sense).
The second expression could be calculated in a similar fashion:
c = n >> 32 + makeup // how do I calculate makeup?
I think makeup steadily increases by 1 at multiples of 2^32-1 (and decreases by 1 at multiples of 2^32), though I'm having trouble expressing this idea in terms of the available operators.
You can use these identities:
n mod (x - 1) = (((n div x) mod (x - 1)) + ((n mod x) mod (x - 1))) mod (x - 1)
n div (x - 1) = (n div x) + (((n div x) + (n mod x)) div (x - 1))
The first comes from (a*b + c) mod d = ((a mod d) * (b mod d) + (c mod d)) mod d.
The second comes from writing n = a*x + b = a*(x-1) + (a + b) and dividing by x - 1.
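A quick numerical spot check of both identities (a throwaway Python sketch with x = 2^32 and random 64-bit values of n):
import random

x = 2**32
for _ in range(10_000):
    n = random.randrange(0, 2**64)
    q, r = n // x, n % x                 # n div x and n mod x
    assert n % (x - 1) == ((q % (x - 1)) + (r % (x - 1))) % (x - 1)
    assert n // (x - 1) == q + (q + r) // (x - 1)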
I think I've figured out the answer to my question:
Compute c first, then use the result to compute x. This assumes that the comparison returns 1 for true and 0 for false, and that the shifts are all logical shifts.
c = (n>>32) + ((n & 0xFFFFFFFF) >= (0xFFFFFFFF - (n>>32)))
x = (0xFFFFFFFE - (n & 0xFFFFFFFF) - ((c - (n>>32))<<32)-c) & 0xFFFFFFFF
edit: changed x (only need to keep lower 32 bits, rest is "junk")
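And a quick sanity check of the formula for c (Python; n is kept below 2^64 - 1, since for that single value (n>>32) + (n & 0xFFFFFFFF) reaches 2*(2^32 - 1) and a single carry is no longer enough):
import random

M = 0xFFFFFFFF                     # 2**32 - 1
for _ in range(10_000):
    n = random.randrange(0, 2**64 - 1)
    c = (n >> 32) + ((n & 0xFFFFFFFF) >= (M - (n >> 32)))
    assert c == n // M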
