I am trying to write a circle-to-line-segment collision detection algorithm, which involves determining the intersection point of the circle and the line segment. The line segment represents the trajectory of a bullet over the last frame, which means the bullet should hit the first circle even if there are multiple circles intersecting the line.
I would like to obtain the t value of the intersection point, which measures how far along the line segment the intersection is. Computing the t value requires solving a quadratic equation via the formula t = (-b - sqrt(det)) / (2 * a). To make the code faster, I am trying to avoid sqrt entirely: instead of comparing t across all candidate circles to find the smallest, I would compare t^2. However, I am not sure how to find t^2 without sqrt, because squaring (-b - sqrt(det)) / (2 * a) by binomial expansion still leaves a sqrt term.
How can I compute t^2 = ((-b - sqrt(det)) / (2 * a))^2 without using the sqrt function?
You need to solve
(x0 - cx + dx * t)^2 + (y0 - cy + dy * t)^2 = r^2
for every (cx, cy, r) in the circle set. In the general case it is impossible to find the t value without solving the quadratic equation, and that requires sqrt.
But perhaps (we don't know all the problem details) you can build a spatial index structure (a space partition, e.g. a kd-tree) to avoid checking every circle.
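For completeness, here is a minimal Python sketch of the straightforward approach the answer describes: solve the quadratic for each circle and keep the smallest valid t. The function name first_hit and the tuple layout are my own choices, not from the question; note that sqrt is only reached for circles whose discriminant is non-negative.

import math

def first_hit(x0, y0, x1, y1, circles):
    # Return the smallest t in [0, 1] at which the segment
    # (x0, y0) -> (x1, y1) enters any circle, or None if it misses all.
    # circles is an iterable of (cx, cy, r) tuples.
    dx, dy = x1 - x0, y1 - y0
    best_t = None
    for cx, cy, r in circles:
        fx, fy = x0 - cx, y0 - cy
        a = dx * dx + dy * dy
        b = 2.0 * (fx * dx + fy * dy)
        c = fx * fx + fy * fy - r * r
        det = b * b - 4.0 * a * c
        if a == 0.0 or det < 0.0:
            continue  # zero-length segment, or the line misses this circle
        t = (-b - math.sqrt(det)) / (2.0 * a)  # entry point (smaller root)
        if 0.0 <= t <= 1.0 and (best_t is None or t < best_t):
            best_t = t
    return best_t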
Let's say we have a process over time, f(t), such as acceleration or position. This is the ground truth of what is happening.
Then we apply a measurement process of the form m(t) = k*f(t) + b, where k is a scaling factor and b is bias and noise combined. I assume that the noise is uniform and, with a large enough data set, will average out, so I don't care much about it.
For example, I'm measuring rotation with n accelerometers at different distances from the center of rotation. Then k would be the distance from the center of rotation (plus or minus all sorts of errors), and b would be bias terms, like gravity. But for now let's concentrate on the 1D case for simplicity.
If we have n measurements, each with its own scaling factor and bias, is there a way to extract them (relative or absolute) from the data?
Thus far I've gotten relative scaling factors out while testing, by numerically differentiating the data, comparing the differentiated functions to the first one, and averaging the result. I assume this works because differentiation gets rid of the constant terms, but how would I approach getting out the bias terms?
I haven't found a good enough solution yet, but I've tried to compute the following for the biases:
For n=2 measurements:
m1(t) = k1 * f(t) + b1
m2(t) = k2 * f(t) + b2
m2(t) - m1(t) = f(t) * (k2 - k1) + b2 - b1
If we make the assumption that b1 = 0, then we get the relative (I think) bias between the two functions, and solving for b2 yields:
b2 = m2(t) - m1(t) - f(t) * (k2 - k1)
And if we are going with relative bias, we can assume that m1(t) == f(t), which gives us:
b2 = m2(t) - m1(t)*(k2 - k1 + 1)
This is assuming I didn't make any mistakes in the calculation. If the math is right, then the problem is in my implementation, but that is more trivial to solve than a wrong thinking process.
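To make the approach concrete, here is a hedged numpy sketch of the differentiate-and-compare idea described above, treating m1 as the reference (k1 = 1, b1 = 0). All names are hypothetical, and this is only a sketch of the question's own reasoning, not a validated calibration method.

import numpy as np

def relative_calibration(m1, m2, dt=1.0):
    # Estimate k_rel and b_rel such that m2 ~= k_rel * m1 + b_rel,
    # treating m1 as the reference measurement (k1 = 1, b1 = 0).
    # m1, m2 are equally sampled 1-D arrays.
    d1 = np.gradient(m1, dt)           # differentiation removes the biases
    d2 = np.gradient(m2, dt)
    mask = np.abs(d1) > 1e-9           # avoid dividing by near-zero derivatives
    k_rel = np.mean(d2[mask] / d1[mask])
    b_rel = np.mean(m2 - k_rel * m1)   # averaging suppresses the noise
    return k_rel, b_rel

# Quick check with a known ground truth:
t = np.linspace(0.0, 10.0, 1000)
f = np.sin(t)
print(relative_calibration(f, 2.5 * f + 0.7, t[1] - t[0]))  # ~ (2.5, 0.7)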
This question and this question both show how to split a cubic Bézier curve at a particular parameterized value 0 ≤ t ≤ 1 along the curve, composing the original curve shape from two new segments. I need to split my Bézier curve at a point along the curve whose coordinate I know, but not the parameterized value t for the point.
For example, consider Adobe Illustrator, where the user can click on a curve to add a point into the path, without affecting the shape of the path.
Assuming I find the point on the curve closest to where the user clicks, how do I calculate the control points from this? Is there a formula to split a Bézier curve given a point on the curve?
Alternatively (and less desirably), given a point on the curve, is there a way to determine the parameterized value t corresponding to that point (other than using De Casteljau's algorithm in a binary search)?
My Bézier curve happens to only be in 2D, but a great answer would include the vector math needed to apply in arbitrary dimensions.
It is possible, and perhaps simpler, to determine the parametric value of a point on the curve without using De Casteljau's algorithm, but you will have to use heuristics to find a good starting value and similarly approximate the result.
One possible and fairly simple way is to use Newton's method, such that:
t_{n+1} = t_n - (bx(t_n) - cx) / bx'(t_n)
Where bx(t) refers to the x component of some Bézier curve in polynomial form with the control points x0, x1, x2 and x3, bx'(t) is the first derivative, and cx is a point on the curve such that:
cx = bx(t) | 0 < t < 1
the coefficients of bx(t) are:
A = -x0 + 3x1 - 3x2 + x3
B = 3x0 - 6x1 + 3x2
C = -3x0 + 3x1
D = x0
and:
bx(t) = At^3 + Bt^2 + Ct + D,
bx'(t) = 3At^2 + 2Bt + C
Now finding a good starting value to plug into Newton's method is the tricky part. For most curves which do not contain loops or cusps, you can simply use the formula:
t_0 = (cx - x0) / (x3 - x0) | x0 < x1 < x2 < x3
Now you already have:
bx(t_0) ≈ cx
So applying one or more iterations of Newton's method will give a better approximation of t for cx.
Note that the Newton-Raphson algorithm has quadratic convergence. In most cases a good starting value will result in negligible improvement after two iterations, i.e. less than half a pixel.
Finally it's worth noting that cubic Bezier curves have exact solutions for finding extrema via finding roots of the first derivative. So curves which are problematic can simply be subdivided at their extrema to remove loops or cusps, then better results can be obtained by analyzing the resulting section in question. Subdividing cubics in this way will satisfy the above constraint.
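Putting the pieces together, here is a hedged Python sketch of the whole procedure: the Newton iteration on the x component with the starting value above, followed by a De Casteljau split at the recovered t. The function names are my own, and the lerp works for points of any dimension.

def bezier_t_for_x(x0, x1, x2, x3, cx, iterations=3):
    # Approximate the parameter t with bx(t) = cx, using the starting
    # guess and Newton iteration described above. Assumes x0 < x3 and a
    # curve without loops or cusps in x over [0, 1].
    A = -x0 + 3*x1 - 3*x2 + x3
    B = 3*x0 - 6*x1 + 3*x2
    C = -3*x0 + 3*x1
    D = x0
    t = (cx - x0) / (x3 - x0)          # initial guess t_0
    for _ in range(iterations):
        bx = ((A*t + B)*t + C)*t + D   # bx(t), Horner form
        dbx = (3*A*t + 2*B)*t + C      # bx'(t)
        if dbx != 0:
            t -= (bx - cx) / dbx       # Newton step
    return min(max(t, 0.0), 1.0)

def split_bezier(p0, p1, p2, p3, t):
    # De Casteljau split of a cubic (points are coordinate tuples);
    # returns the control points of the two halves.
    lerp = lambda a, b, u: tuple(a[i] + (b[i] - a[i]) * u for i in range(len(a)))
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    s = lerp(r0, r1, t)                # the split point on the curve
    return (p0, q0, r0, s), (s, r1, q2, p3)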
Assuming complex numbers a1..an, what is the angle phi those numbers have to be rotated by (= multiplied by exp(i*phi)) to maximize the sum of the absolute values of the real parts?
in:=complex[N]
out:=in.*exp(i*phi)
f:=sum(abs(real(out)))
-> which phi maximizes f?
Is there an elegant solution (as in, not iterating over phi)?
It is not difficult to find the angle each number has to be multiplied by to make it real, but weighting those angles to find a single optimal one for all of them is difficult, because the rotation is obviously not linear. Something like
sum(phi_n .* abs(in)) / sum(abs(in))
does not work (it produces a lower sum than an angle found by iterating phi over -pi to pi).
Any ideas are appreciated.
Although there exists an analytic solution, it is usually too hard to calculate (it may be feasible for a small number of input variables n). I'll first go over this solution, then suggest alternatives.
Analytic solution
Given the input numbers (l_1, phi_1), (l_2, phi_2), ..., (l_n, phi_n), where l_i is the length and phi_i the angle of the i-th number, you want to find:
arg max_phi Sum_i |l_i cos(phi_i + phi)|
You only have one independent variable, so we start by differentiating the function with respect to phi:
f'(phi) = Sum_i -l_i sin(phi_i + phi) * abs'(l_i cos(phi_i + phi))
abs'(x) is either +1 or -1. Due to this discontinuity, we won't get around trying every combination, so you end up with 2^n variants of f'. The optimum is then one of the (usually four) arguments where f'(phi) = 0. This can be calculated as follows, where s_i denotes the sign of the i-th term, which is what you need to vary:
numerator = Sum_i s_i l_i sin(phi_i)
denominator = Sum_i l_i^2 + Sum_i Sum_{j>i} 2 l_i l_j s_i s_j cos(phi_i - phi_j)
Then, the four solution candidates are:
phi* = -arc cos( numerator / sqrt(denominator))
phi** = -arc cos(-numerator / sqrt(denominator))
phi*** = arc cos( numerator / sqrt(denominator))
phi**** = arc cos(-numerator / sqrt(denominator))
Find all candidates for every variation and take the one with maximum f(phi). However, as mentioned, this approach is not suitable for large n. You need 2^n variations of f and each variation requires O(n^2) time to construct the solution.
Numerical solution
An alternative is a numerical optimization approach. The challenge is that your function is not convex. Hence, if you find a local maximum, you cannot say if it is the global one. Most algorithms require good initialization. You could find the initial point by sampling the domain of phi and picking the best one. Then, try some of the standard approaches (Newton, Levenberg-Marquardt, BFGS).
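As an illustration of the sampling-plus-refinement idea, here is a hedged Python sketch. The function name best_phi and the simple bracket-shrinking refinement are my own choices; the answer suggests Newton, Levenberg-Marquardt, or BFGS for the refinement step instead.

import numpy as np

def best_phi(a, coarse=360, refine_iters=30):
    # Maximize f(phi) = sum(|Re(a_i * exp(1j*phi))|) by coarse sampling
    # plus local refinement. f has period pi (since |-x| = |x|), so only
    # [0, pi) needs to be searched.
    a = np.asarray(a, dtype=complex)
    f = lambda phi: np.abs(np.real(a * np.exp(1j * phi))).sum()
    phis = np.linspace(0.0, np.pi, coarse, endpoint=False)
    phi = phis[np.argmax([f(p) for p in phis])]
    step = np.pi / coarse
    for _ in range(refine_iters):      # shrink a bracket around the best sample
        step *= 0.5
        phi = max((phi - step, phi, phi + step), key=f)
    return phi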
This should be very simple. I have a function f(x), and I want to evaluate f'(x) for a given x in MATLAB.
All my searches have come up with symbolic math, which is not what I need; I need numerical differentiation.
E.g. if I define: fx = inline('x.^2')
I want to find, say, f'(3), which would be 6; I don't want to find 2x.
If your function is known to be twice differentiable, use
f'(x) = (f(x + h) - f(x - h)) / (2h)
which is second order accurate in h. If it is only once differentiable, use
f'(x) = (f(x + h) - f(x)) / h (*)
which is first order in h.
This is theory. In practice, things are quite tricky. I'll take the second formula (first order) as the analysis is simpler. Do the second order one as an exercise.
The very first observation is that you must make sure that (x + h) - x = h, otherwise you get huge errors. Indeed, f(x + h) and f(x) are close to each other (say 2.0456 and 2.0467), and when you subtract them, you lose a lot of significant figures (here the difference is 0.0011, which has 3 significant figures fewer than x). So any error on h is likely to have a huge impact on the result.
So, first step: fix a candidate h (I'll show you in a minute how to choose it), and take as the h for your computation the quantity h' = (x + h) - x. If you are using a language like C, you must take care to declare h or x as volatile for that computation not to be optimized away.
Next, the choice of h. The error in (*) has two parts: the truncation error and the roundoff error. The truncation error is because the formula is not exact:
(f(x + h) - f(x)) / h = f'(x) + e1(h)
where e1(h) = h / 2 * sup_{u in [x, x + h]} |f''(u)|.
The roundoff error comes from the fact that f(x + h) and f(x) are close to each other. It can be estimated roughly as
e2(h) ~ epsilon_f |f(x) / h|
where epsilon_f is the relative precision in the computation of f(x) (or f(x + h), which is close). This has to be assessed from your problem. For simple functions, epsilon_f can be taken as the machine epsilon. For more complicated ones, it can be worse than that by orders of magnitude.
So you want h which minimizes e1(h) + e2(h). Plugging everything together and optimizing in h yields
h ~ sqrt(2 * epsilon_f * f / f'')
which has to be estimated from your function. You can take rough estimates. When in doubt, take h ~ sqrt(epsilon), where epsilon is the machine accuracy. For the optimal choice of h, the relative accuracy to which the derivative is known is sqrt(epsilon_f), i.e. half the significant figures are correct.
In short: too small an h => roundoff error, too large an h => truncation error.
For the second order formula, same computation yields
h ~ (6 * epsilon_f / f''')^(1/3)
and a fractional accuracy of (epsilon_f)^(2/3) for the derivative (which is typically one or two significant figures better than the first order formula, assuming double precision).
If this is too imprecise, feel free to ask for more methods; there are a lot of tricks to get better accuracy. Richardson extrapolation is a good start for smooth functions. But those methods typically evaluate f quite a few times, which may or may not be what you want if your function is expensive.
If you are going to use numerical derivatives a lot of times at different points, it becomes interesting to construct a Chebyshev approximation.
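The question asks about MATLAB, but the recipe above is language-agnostic; here is a hedged Python sketch of the second-order formula with the step-size heuristic, assuming the unknown factor involving f and f''' is of order 1 and scaling h with |x|. The translation to MATLAB is direct.

import sys

def derivative(f, x):
    # Second-order central difference with the step-size heuristic above:
    # h ~ epsilon^(1/3), scaled by |x|, then recomputed as (x + h) - x so
    # that the step actually taken is exactly representable.
    eps = sys.float_info.epsilon           # ~2.2e-16 in double precision
    h = eps ** (1.0 / 3.0) * max(abs(x), 1.0)
    h = (x + h) - x                        # enforce (x + h) - x == h
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(derivative(lambda x: x ** 2, 3.0))   # ~6.0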
To get a numerical difference (symmetric difference), you calculate (f(x+dx)-f(x-dx))/(2*dx)
fx = @(x)x.^2;
fPrimeAt3 = (fx(3.1)-fx(2.9))/0.2;
Alternatively, you can create a vector of function values and apply DIFF, i.e.
xValues = 2:0.1:4;
fValues = fx(xValues);
df = diff(fValues)./0.1;
Note that diff takes the forward difference, and that it assumes dx equals 1, hence the division by the step (0.1) above.
However, in your case, you may be better off defining fx as a polynomial and evaluating the derivative of the function, rather than the function values.
Lacking the symbolic toolbox, nothing stops you from using Derivest, a tool for automatic adaptive numerical differentiation.
derivest(@sin,pi)
ans =
-1
For your example it does very nicely. In fact, it even provides an estimate of the error in the resulting approximation.
fx = inline('x.^2');
[fp,errest] = derivest(fx,3)
fp =
6
errest =
3.6308e-14
Did you try the diff (calculates differences and approximates a derivative), gradient, or polyder (calculates the derivative of a polynomial) functions?
You can read more on these functions by using help <commandname> in the MATLAB console, or by using the function browser in the Help menu.
For a given function in analytical form, you can evaluate the derivative at a desired point with the following code:
syms x
df = diff(x^2);
df3 = subs(df, 'x', 3);
fprintf('f''(3)=%f\n', df3);
For pure numerical derivatives use the already given solutions by Jonas and posdef.
If F := GF(p^n) is the finite field with p^n elements, where p is a prime number and n a natural number, is there any efficient algorithm to work out the product of two elements in F?
Here are my thoughts so far:
I know that the standard construction of F is to take an irreducible polynomial f of degree n over GF(p) and then view elements of F as polynomials in the quotient GF(p)[X]/(f), and I have a feeling that this is probably already the right approach, since polynomial multiplication and addition should be easy to implement; but I somehow fail to see how this can actually be done. For example, how would one choose an appropriate f, and how can I get the equivalence class of an arbitrary polynomial?
First pick an irreducible polynomial of degree n over GF(p). Just generate random ones; a random polynomial is irreducible with probability ~1/n.
To test your random polynomials, you'll need some code to factor polynomials over GF(p); see the Wikipedia page for some algorithms.
Then your elements of GF(p^n) are just polynomials of degree less than n over GF(p). Just do normal polynomial arithmetic and make sure to compute the remainder modulo your irreducible polynomial.
It's pretty easy to code up simple versions of this scheme. You can get arbitrarily complicated in how you implement, say, the modulo operation. See modular exponentiation, Montgomery multiplication, and multiplication using FFT.
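As a concrete illustration of this scheme, here is a hedged Python sketch of multiplication in GF(p^n) with elements stored as coefficient lists. All names are my own; a production implementation would be more careful and much faster.

def gf_mul(a, b, f, p):
    # Multiply a and b in GF(p^n) = GF(p)[X]/(f). a and b are coefficient
    # lists of length n (index i = coefficient of X^i); f is the monic
    # irreducible polynomial as a list of length n + 1.
    n = len(f) - 1
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):             # plain polynomial multiplication
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in range(len(prod) - 1, n - 1, -1):   # reduce modulo f (f is monic)
        c = prod[k]
        if c:
            for i in range(n + 1):         # subtract c * X^(k-n) * f(X)
                prod[k - n + i] = (prod[k - n + i] - c * f[i]) % p
    return prod[:n]

# Example in GF(2^3) with f(X) = X^3 + X + 1 = [1, 1, 0, 1]:
# gf_mul([1, 1, 0], [0, 0, 1], [1, 1, 0, 1], 2) returns [1, 1, 1],
# i.e. (1 + X) * X^2 = X^2 + X + 1 (mod f).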
Whether there is an efficient algorithm to multiply elements in GF(p^n) depends on how you are representing the elements of GF(p^n).
As you say, one way is indeed to work in GF(p)[X]/(f). Addition and multiplication are relatively straightforward here. However, determining a suitable irreducible polynomial f is not easy; as far as I know there isn't an efficient algorithm for calculating a suitable f.
Another way is to use what are called Zech's logarithms. Magma uses pre-computed tables of them for working with small finite fields. It is possible that GAP does too, although its documentation is less clear.
Computing with mathematical structures is often quite tricky. You're certainly not missing anything obvious here.
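To illustrate the table-based idea, here is a hedged Python sketch using plain discrete-log tables (Zech's logarithms proper go one step further and also handle addition in the logarithmic representation). It works in GF(2^4) with the reduction polynomial X^4 + X + 1, for which X happens to generate the multiplicative group; the polynomial choice and all names are my own assumptions.

def make_tables(poly=0b10011, bits=4):
    # Build exp/log tables for GF(2^bits), elements packed as bit masks.
    # poly is the reduction polynomial (default X^4 + X + 1, for which
    # X itself is a generator of the multiplicative group).
    size = (1 << bits) - 1                 # order of the multiplicative group
    exp_t, log_t = [0] * size, {}
    e = 1
    for k in range(size):
        exp_t[k], log_t[e] = e, k
        e <<= 1                            # multiply by X
        if e >> bits:
            e ^= poly                      # reduce modulo poly
    return exp_t, log_t

EXP, LOG = make_tables()

def mul(a, b):
    # g^i * g^j = g^((i + j) mod (2^4 - 1)); zero is handled separately.
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]

print(mul(0b0010, 0b1000))  # X * X^3 = X^4 = X + 1 -> prints 3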
It depends on your needs and on your field.
When you multiply you have to pick a generator of F^× (the multiplicative group). When you add you use the fact that F is a vector space over some smaller F_(p^m). In practice you often use a mixed approach. E.g. if you are working over F_256, take a generator X of F_256^×, and let G be its minimal polynomial over F_16. You now have
(Sum_{i<16} a_i X^i) * (Sum_{j<16} b_j X^j) = Sum_k Sum_{i+j=k} a_i b_j X^(i+j)
All you have to do to make multiplication efficient is store a multiplication table of F_16, and (using G) rewrite X^m in terms of lower powers of X and elements of F_16.
Finally, in the rare case where p^n = 2^(2^n), you get Conway's field of nimbers (look in Conway's "Winning Ways", or in Knuth's Volume 4A, Section 7.1.3), for which there are very efficient algorithms.
Galois Field Arithmetic Library (C++, mod 2 only; doesn't look like it supports other primes)
LinBox (C++)
MPFQ (C++)
I have no personal experience with these, however (I have made my own primitive C++ classes for Galois fields of degree 31 or less, nothing too exotic or worth copying). As one of the commenters mentioned, you might check mathoverflow.net; just ask nicely and make sure you've done your homework first. Someone there ought to know what kinds of mathematical software are suitable for manipulating finite fields, and it's close enough to MathOverflow's area of interest that a well-stated question should not get closed down.
Assume the question is about an algorithm performing multiplication in finite fields once a monic irreducible polynomial f(X) has been identified (otherwise consider Rabin's test for irreducibility).
You have two polynomials of degree at most n-1:
A(X) = a_0 + a_1*X + a_2*X^2 + ... + a_(n-1)*X^(n-1) and
B(X) = b_0 + b_1*X + b_2*X^2 + ... + b_(n-1)*X^(n-1)
The coefficients a_k, b_k are taken from the representatives {0, 1, ..., p-1} of Z/pZ.
The product is defined as
C(X) = A(X)*B(X) % f(X),
where the modulo operator "%" is the remainder of the polynomial division A(X)*B(X) / f(X).
The following is an approach with complexity O(n^2).
1.) By the distributive law the product can be decomposed as
B(X) * X^(n-1) * a_(n-1)
+ B(X) * X^(n-2) * a_(n-2)
+ ...
+ B(X) * a_0
=
(...(a_(n-1) * B(X) * X
+ a_(n-2) * B(X)) * X
+ a_(n-3) * B(X)) * X
...
+ a_1 * B(X)) * X
+ a_0 * B(X)
2.) Since the %-operator is a ring homomorphism from Z/pZ[X] onto GF(p^n), it can be applied in each step of the iteration above.
A(X)*B(X) % f(X) =
(...(a_(n-1) * B(X) * X % f(X)
+ a_(n-2) * B(X)) * X % f(X)
+ a_(n-3) * B(X)) * X % f(X)
...
+ a_1 * B(X)) * X % f(X)
+ a_0 * B(X)
3.) After each multiplication by X, i.e. a shift in coefficient space, you have a polynomial T_k(X) of degree n with leading term t_kn * X^n. Reduction modulo f(X) is done by
T_k(X) % f(X) = T_k(X) - t_kn*f(X),
which is a polynomial of degree n-1.
Finally, with the reduction polynomial
r(X) := f(X) - X^n and
T_k(X) =: t_kn * X^n + U_(n-1)(X),
T_k(X) % f(X) = t_kn * X^n + U_(n-1)(X) - t_kn * (r(X) + X^n)
             = U_(n-1)(X) - t_kn * r(X),
i.e. all steps can be done with polynomials of maximum degree n-1.
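Here is a hedged Python sketch of this shift-and-reduce scheme, with coefficient lists indexed by power of X; the names are my own.

def gf_mul_horner(a, b, f, p):
    # C(X) = A(X) * B(X) % f(X) via the Horner-style scheme above.
    # a, b: coefficient lists of length n (index i = coefficient of X^i);
    # f: monic irreducible polynomial, list of length n + 1. O(n^2) total.
    n = len(f) - 1
    neg_r = [(-c) % p for c in f[:n]]      # -r(X), since X^n = -r(X) (mod f)
    acc = [0] * n
    for ai in reversed(a):                 # a_(n-1) first, a_0 last
        t = acc[n - 1]                     # t_kn: coefficient spilling into X^n
        acc = [0] + acc[:n - 1]            # multiply accumulator by X
        acc = [(c + t * rc) % p for c, rc in zip(acc, neg_r)]   # U - t_kn * r(X)
        acc = [(c + ai * bc) % p for c, bc in zip(acc, b)]      # add a_i * B(X)
    return acc

# Example in GF(2^3) with f(X) = X^3 + X + 1:
# gf_mul_horner([1, 1, 0], [0, 0, 1], [1, 1, 0, 1], 2) returns [1, 1, 1],
# i.e. (1 + X) * X^2 = X^2 + X + 1 (mod f).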