Algorithm intersecting polytope and half line - R

I am looking for an algorithm in R to intersect a convex polytope with a line segment. I found several posts here on Stack Exchange for the planar case, but I am wondering whether such algorithms exist in higher dimensions. My Google searches didn't really produce a lot of answers.
The line segment is composed of a point inside and a point outside the convex polytope. Are there algorithms available in R that can do this in dimension N <= 10? Or does anyone know a reference so I can implement the algorithm myself? Is there information on the complexity of finding the polytope and the intersection?

For problems in computational geometry, dimension d > 3 usually might as well be d arbitrary. If you have the polytope as a collection of intersected halfspaces, then likely the only sensible thing to do is to intersect the line segment with each of the separating hyperplanes (by solving a system of d linear equations) and take the intersection closest to the point inside.
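If the polytope comes in that halfspace form, the whole procedure is a few lines. A minimal Python/NumPy sketch (my own illustration, not from the answer above), assuming the polytope is {x : A x <= b} with p_in strictly inside; parametrizing the segment turns each hyperplane crossing into one scalar equation in t:

import numpy as np

def clip_segment(A, b, p_in, p_out):
    # Walk along x(t) = p_in + t*(p_out - p_in), t in [0, 1], and take the
    # smallest t at which the segment exits a halfspace a_i . x <= b_i;
    # that crossing is the one closest to the inside point.
    d = p_out - p_in
    t_hit = 1.0
    for a_i, b_i in zip(A, b):
        denom = a_i @ d
        if denom > 0:  # segment heads outward through this face
            t_hit = min(t_hit, (b_i - a_i @ p_in) / denom)
    return p_in + t_hit * d

# Example: unit cube, segment from the center to a point past the face x = 1.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.array([1., 1., 1., 0., 0., 0.])
print(clip_segment(A, b, np.array([.5, .5, .5]), np.array([2., .5, .5])))
# [1.  0.5 0.5]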
If you have only the vertices of the polytope, or even just a set of points whose convex hull is the polytope, then the easiest approach given R's libraries is probably linear programming. (Conceivably you could compute the facets using an algorithm for high-dimensional convex hulls, but there can be Theta(n^floor(d/2)) of them, where n is the number of vertices.) I'm not familiar with LP solvers in R, so I'll write down the program mathematically; it shouldn't be too hard to translate. Let p_0 be the point outside, p_1 the point inside, and v_i the ith point defining the polytope.
maximize alpha_0
subject to
    for 1 <= j <= d:
        p_0[j] alpha_0 + p_1[j] alpha_1 - sum_{1 <= i <= n} v_i[j] beta_i = 0
    alpha_0 + alpha_1 = 1
    sum_{1 <= i <= n} beta_i = 1
    alpha_0 >= 0
    alpha_1 >= 0
    for 1 <= i <= n:
        beta_i >= 0
The intersection is defined by the point p_0 alpha_0 + p_1 alpha_1 (unless the program is infeasible, in which case there is no intersection).
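The program translates almost mechanically to any LP solver. Here is a sketch with SciPy's linprog rather than an R package (the function name is mine; variables are ordered [alpha_0, alpha_1, beta_1, ..., beta_n], and nonnegativity is linprog's default bound):

import numpy as np
from scipy.optimize import linprog

def segment_polytope_hit(p0, p1, V):
    # V is an (n, d) array of points whose convex hull is the polytope,
    # p0 is the point outside, p1 the point inside.
    n, d = V.shape
    c = np.zeros(2 + n)
    c[0] = -1.0                      # linprog minimizes, so maximize alpha_0
    A_eq = np.zeros((d + 2, 2 + n))
    A_eq[:d, 0] = p0                 # p_0[j] alpha_0 + p_1[j] alpha_1
    A_eq[:d, 1] = p1                 #   - sum_i v_i[j] beta_i = 0
    A_eq[:d, 2:] = -V.T
    A_eq[d, :2] = 1.0                # alpha_0 + alpha_1 = 1
    A_eq[d + 1, 2:] = 1.0            # sum_i beta_i = 1
    b_eq = np.zeros(d + 2)
    b_eq[d] = b_eq[d + 1] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq)   # infeasible => no intersection
    return res.x[0] * p0 + res.x[1] * p1 if res.success else None

# Unit square; segment from (2, 0.5) outside to (0.5, 0.5) inside:
V = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
print(segment_polytope_hit(np.array([2., .5]), np.array([.5, .5]), V))
# [1.  0.5]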


Split a cubic Bézier curve at a point

This question and this question both show how to split a cubic Bézier curve at a particular parameterized value 0 ≤ t ≤ 1 along the curve, composing the original curve shape from two new segments. I need to split my Bézier curve at a point along the curve whose coordinate I know, but not the parameterized value t for the point.
For example, consider Adobe Illustrator, where the user can click on a curve to add a point into the path, without affecting the shape of the path.
Assuming I find the point on the curve closest to where the user clicks, how do I calculate the control points from this? Is there a formula to split a Bézier curve given a point on the curve?
Alternatively (and less desirably), given a point on the curve, is there a way to determine the parameterized value t corresponding to that point (other than using De Casteljau's algorithm in a binary search)?
My Bézier curve happens to only be in 2D, but a great answer would include the vector math needed to apply in arbitrary dimensions.
It is possible, and perhaps simpler, to determine the parametric value of a point on the curve without using De Casteljau's algorithm, but you will have to use heuristics to find a good starting value and similarly approximate the result.
One possible, and fairly simple way is to use Newton's method such that:
t_{n+1} = t_n - ( b_x(t_n) - c_x ) / b_x'(t_n)
Where b_x(t) refers to the x component of some Bézier curve in polynomial form with the control points x_0, x_1, x_2 and x_3, b_x'(t) is the first derivative and c_x is a point on the curve such that:
c_x = b_x(t), 0 < t < 1
the coefficients of b_x(t) are:
A = -x_0 + 3x_1 - 3x_2 + x_3
B = 3x_0 - 6x_1 + 3x_2
C = -3x_0 + 3x_1
D = x_0
and:
b_x(t) = At^3 + Bt^2 + Ct + D,
b_x'(t) = 3At^2 + 2Bt + C
Now finding a good starting value to plug into Newton's method is the tricky part. For most curves which do not contain loops or cusps, you can simply use the formula:
t_n = ( c_x - x_0 ) / ( x_3 - x_0 ), provided x_0 < x_1 < x_2 < x_3
Now you already have:
b_x(t_n) ≈ c_x
So applying one or more iterations of Newton's method will give a better approximation of t for c_x.
Note that the Newton-Raphson algorithm has quadratic convergence. In most cases a good starting value will result in negligible improvement after two iterations, i.e. an error of less than half a pixel.
Finally, it's worth noting that cubic Bézier curves have exact solutions for their extrema via the roots of the first derivative. So curves which are problematic can simply be subdivided at their extrema to remove loops or cusps, and better results can then be obtained by analyzing the resulting section in question. Subdividing cubics in this way will satisfy the above constraint.
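To make the recipe concrete, here is a minimal Python sketch of the iteration (the function name and three-iteration default are my own choices; it assumes the monotone case x_0 < x_1 < x_2 < x_3 so the linear starting guess is valid):

def bezier_t_from_x(cx, x0, x1, x2, x3, iters=3):
    # Newton's method for b_x(t) = cx using the coefficients above.
    # Starting guess is the linear estimate; assumes x0 < x1 < x2 < x3.
    A = -x0 + 3*x1 - 3*x2 + x3
    B = 3*x0 - 6*x1 + 3*x2
    C = -3*x0 + 3*x1
    D = x0
    t = (cx - x0) / (x3 - x0)
    for _ in range(iters):
        bx = ((A*t + B)*t + C)*t + D      # b_x(t), Horner form
        dbx = (3*A*t + 2*B)*t + C         # b_x'(t)
        t -= (bx - cx) / dbx
    return t

# Example: control x-coordinates 0, 1, 2, 3 give b_x(t) = 3t, so t = cx/3.
print(bezier_t_from_x(1.5, 0.0, 1.0, 2.0, 3.0))   # 0.5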

Does anyone know how to raytrace NURBS of degree 2?

I've been writing a GLSL raytracer for a while and have made some improvements, but for a few days now I've been thinking it would be much faster to raytrace curved surfaces instead of many triangles, so I came across NURBS. If I write the equation down (expanded, so only +, -, *, /, sqrt and squaring) I can't see any way to get an intersection point with a ray.
Does anyone know how to raytrace a NURBS surface of degree 2?
This is my equation (no real NURBS equation):
given:
A, B, C, D, E, F, G, H, I (3D vectors; the nine control points)
a = 2(B-A)
b = 2B-A-C
c = 2(E-D)
d = 2E-D-F
e = 2(H-G)
f = 2H-G-I
(a to f are defined to make the equation a bit shorter later)
o, r (3D vectors; the ray origin and direction)
searched:
u, v (, t)
to solve:
(A + au - bu²) + 2v((D + cu - du²) - (A + au - bu²)) - v²(2(D + cu - du²) - (A + au - bu²) - (G + eu - fu²)) = o + rt
(NURB) = (LINE)
There is quite a bit of literature on the subject, for example https://www.researchgate.net/publication/232644373_Direct_and_fast_ray_tracing_of_NURBS_surfaces . This is for general NURBS; I'm not sure if you can simplify things for quadratic NURBS.
The basic idea is to think of your ray as the intersection of two planes N . r = a, M . r = b, with N, M normal vectors to the planes and a, b constants. If r = R(u,v) is your NURBS function, this gives you two equations in two variables to solve.
This is where I'm a little unsure. I think for quadratic NURBS you can represent the function as a quotient of two quadratic polynomials R(u,v) = P(u,v) / Q(u,v), where P is vector valued and Q is just a scalar polynomial. If so, the equations you want to solve are
N . P(u,v) = a Q(u,v)
M . P(u,v) = b Q(u,v)
that is, two quadratics in two variables. You can use a variety of numerical methods like Newton's method or gradient descent, and as the equations are quadratics it should converge relatively quickly.
You will need to consider each patch separately (0 < u < 1/3, 0 < v < 1/3 etc.) to cope with the piecewise nature of the functions.
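As a rough numerical sketch of the two-plane idea (Python/NumPy rather than GLSL; S stands in for any parametric patch R(u,v), and the starting guess, iteration count, and finite-difference Jacobian are placeholder choices, not a tuned implementation):

import numpy as np

def ray_planes(o, r):
    # Represent the ray o + t*r as the intersection of two planes
    # N . x = a and M . x = b, both containing the ray.
    r = r / np.linalg.norm(r)
    helper = np.array([1., 0., 0.]) if abs(r[0]) < 0.9 else np.array([0., 1., 0.])
    N = np.cross(r, helper); N /= np.linalg.norm(N)
    M = np.cross(r, N)
    return N, N @ o, M, M @ o

def newton_patch_hit(S, o, r, u0=0.5, v0=0.5, iters=20, h=1e-6):
    # Newton's method on F(u,v) = (N.S(u,v) - a, M.S(u,v) - b) with a
    # finite-difference Jacobian.
    N, a, M, b = ray_planes(o, r)
    F = lambda u, v: np.array([N @ S(u, v) - a, M @ S(u, v) - b])
    u, v = u0, v0
    for _ in range(iters):
        f = F(u, v)
        J = np.column_stack([(F(u + h, v) - f) / h, (F(u, v + h) - f) / h])
        du, dv = np.linalg.solve(J, -f)   # may fail at a singular Jacobian
        u, v = u + du, v + dv
    return u, v

# Example: paraboloid patch S(u,v) = (u, v, u^2 + v^2), vertical ray.
S = lambda u, v: np.array([u, v, u*u + v*v])
print(newton_patch_hit(S, np.array([.3, .4, 10.]), np.array([0., 0., -1.])))
# approximately (0.3, 0.4)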

Is it possible to create an admissible A* heuristic with zero-cost path values?

When we use A* with a non-admissible heuristic we can sometimes get a non-optimal path as a result.
But when zero-cost edges are allowed, the only admissible heuristic that comes to my mind is h(x) = 0, which turns A* into a "simple" Dijkstra's algorithm.
Am I correct? Is this the only possible admissible heuristic? What is the real loss of not using an admissible heuristic? Is there another path-finding algorithm that works better with zero-cost paths?
An example:
Suppose the following graph (the numbers above the edges show the costs):
   1      1      0      1      1
S --> V1 --> V2 --> V3 --> V4 --> G
Where:
S means start vertex
V means inner vertex
G means goal vertex
By looking at the graph, we see that C(S) = 4.
What heuristic function h(x) can I use? If I use Euclidean distance I get:
f(S) = g(S) + h(S)
f(S) = 0 + 5 = 5
We can see that this heuristic over-estimates the real distance; therefore, for a more complex graph, it may not find the optimal solution.
Not true. The heuristic function h(x) has argument x consisting of the current search state. It returns an estimate of the distance from x to the goal. In a simple graph, x is a graph node.
Admissibility requires that h(x) can only be an under-estimate (or equal to the goal distance). This condition is for each particular x. (You seem to be inferring the condition is for all possible x, which is far too strong. A* would be useless if this were necessary.)
The correct statement regarding the case you propose is that h(x) = 0 is necessary only when x is a state with distance zero to the goal; any other value would be an over-estimate. However, for any other x (in the same state space) that requires transitions with total cost at least C > 0 to reach the goal, we can use any h such that h(x) <= C.
Of course if x's distance to goal is zero, then x is the goal state and the search is complete. So your concern is vacuous - there are no cases where it's of interest.
Information to construct h(x) comes from your knowledge of the search space (e.g. characteristics of the graph). A bare, general graph alone doesn't provide anything useful. The best you can do is h(x) = cost of min weight outgoing edge of x for non-goal nodes and, as already discussed, h(x) = 0 for the goal. Again note this is a lower bound on distance to goal. It gives you Dijkstra!
To do better you need to know something about the graph's structure.
Edit
In your example, you are providing detailed knowledge, so making a good h is simple. You can use
       / 4 if x == S
       | 3 if x == V1
h(x) = { 2 if x == V2 or V3
       | 1 if x == V4
       \ 0 if x == G
or you can use any other function h'(x) such that h'(x) <= h(x) for all x. For example, this would be admissible:
        / 3 if x == S
        | 2 if x == V1
h'(x) = { 2 if x == V2 or V3
        | 1 if x == V4
        \ 0 if x == G
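To sanity-check these tables, here is a small textbook A* in Python run on the example graph with the first h (my own sketch; it covers only what this toy graph needs):

import heapq

def a_star(graph, h, start, goal):
    # graph: adjacency dict {node: [(neighbor, edge_cost), ...]}
    open_set = [(h(start), 0, start)]        # entries are (f, g, node)
    best_g = {start: 0}
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_set, (g2 + h(nbr), g2, nbr))
    return None

graph = {"S": [("V1", 1)], "V1": [("V2", 1)], "V2": [("V3", 0)],
         "V3": [("V4", 1)], "V4": [("G", 1)]}
h = {"S": 4, "V1": 3, "V2": 2, "V3": 2, "V4": 1, "G": 0}.get
print(a_star(graph, h, "S", "G"))   # -> 4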
Addition
The OP points out that for many problems, h(x) can be hard to choose! This is precisely correct. If you can't find a good admissible heuristic, then A* is the wrong algorithm! Nonetheless, A* is very effective for problems where heuristics can be found. Examples I've tried myself:
Graphs where Euclidean distance is a good lower bound on the possible distance between any two nodes. For example, each pair of cities A and B is separated by a distance D "as the crow flies," but the road distance from A to B is at least D in length and possibly much more, i.e. its cost C is greater than or equal to D. In this case, D makes a fine heuristic because it's a low estimate.
Puzzles where "distance" to the winning state involves moving game pieces. In this case, the number of pieces currently out of position with respect to the winning state is a fine heuristic. Examples are the 8-bishops problem from The 7th Guest (number of bishops not yet in their final positions) and the Magic Square problem (total Manhattan distance from all pieces' current positions to their correct positions in the winning state).

Get branch points of equation

Suppose I have a general function f(z, a), where z and a are both real, and the function f takes on real values for all z except in some interval (z1, z2), where it becomes complex. How do I determine z1 and z2 (which will be in terms of a) using Mathematica (or is this possible)? What are the limitations?
For a test example, consider the function f[z_,a_]=Sqrt[(z-a)(z-2a)]. For real z and a, this takes on real values except in the interval (a,2a), where it becomes imaginary. How do I find this interval in Mathematica?
In general, I'd like to know how one would go about finding it mathematically for a general case. For a function with just two variables like this, it'd probably be straightforward to do a contour plot of the Riemann surface and observe the branch cuts. But what if it is a multivariate function? Is there a general approach that one can take?
What you have appears to be a Riemann surface parametrized by 'a'. Consider the algebraic (or analytic) relation g(a,z) = 0 that would be spawned from this branch of a parametrized Riemann surface. In this case it is simply g^2 - (z - a)*(z - 2*a) == 0. More generally it might be obtained using GroebnerBasis, as below (no guarantee this will always work without some amount of user intervention).
grelation = First[GroebnerBasis[g - Sqrt[(z - a)*(z - 2*a)], {z, a, g}]]
Out[472]= 2 a^2 - g^2 - 3 a z + z^2
A necessary condition for the branch points, as functions of the parameter 'a', is that the zero set for 'g' not give a (single valued) function in a neighborhood of such points. This in turn means that the partial derivative of this relation with respect to g vanishes (this is from the implicit function theorem of multivariable calculus). So we find where grelation and its derivative both vanish, and solve for 'z' as a function of 'a'.
Solve[Eliminate[{grelation == 0, D[grelation, g] == 0}, g], z]
Out[481]= {{z -> a}, {z -> 2 a}}
Daniel Lichtblau
Wolfram Research
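The same elimination is easy to reproduce outside Mathematica; for instance, a SymPy sketch of the vanishing-partial-derivative condition used above:

import sympy as sp

z, a, g = sp.symbols('z a g')
relation = g**2 - (z - a) * (z - 2*a)            # the squared-out relation
branch = sp.solve([relation, sp.diff(relation, g)], [z, g])
print(branch)   # [(a, 0), (2*a, 0)]  ->  z = a and z = 2a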
For polynomial systems (and some other classes), Reduce can do the job.
E.g.
In[1]:= Reduce[Element[{a, z}, Reals]
&& !Element[Sqrt[(z - a) (z - 2 a)], Reals], z]
Out[1]= (a < 0 && 2a < z < a) || (a > 0 && a < z < 2a)
This type of approach also works (often giving very complicated solutions for functions with many branch cuts) for other combinations of elementary functions I checked.
To find the branch cuts (as opposed to the simple class of branch points you're interested in) in general, I don't know of a good approach. The best place to find the detailed conventions that Mathematica uses is the functions.wolfram.com site.
I do remember reading a good paper on this a while back... I'll try to find it....
That's right! The easiest approach I've seen for branch cut analysis uses the unwinding number. There's a paper about this, "Reasoning about the elementary functions of complex analysis", in the journal "Artificial Intelligence and Symbolic Computation". It and similar papers can be found at one of the authors' homepages: http://www.apmaths.uwo.ca/~djeffrey/offprints.html.
For general functions you cannot make Mathematica calculate it.
Even for polynomials, finding an exact answer takes time. I believe Mathematica uses some sort of quantifier elimination in Reduce, which takes time.
Without any restrictions on your functions (are they polynomials, continuous, smooth?) one can easily construct functions which Mathematica cannot simplify further:
f[x_,y_] := Abs[Zeta[y+0.5+x*I]]*I
If this function takes a real value for some real x and some -0.5 < y < 0 or 0 < y < 0.5, then you will have found a counterexample to the Riemann hypothesis, and I'm sure Mathematica cannot give a correct answer.

How to calculate n log n = c

I have a homework problem for my algorithms class asking me to calculate the maximum size of a problem that can be solved in a given number of operations using an O(n log n) algorithm (i.e., n log n = c). I was able to get an answer by approximating, but is there a clean way to get an exact answer?
There is no closed-form formula for this equation. Basically, you can transform the equation:
n log n = c
log(n^n) = c
n^n = exp(c)
Then, this equation has a solution of the form:
n = exp(W(c))
where W is the Lambert W function (see especially "Example 2"). It has been proved that W cannot be expressed in terms of elementary functions.
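The exact solution is still easy to evaluate numerically: SciPy exposes the Lambert W function directly. A sketch assuming the natural logarithm in n log n = c:

import numpy as np
from scipy.special import lambertw

def nlogn_inverse(c):
    # n * ln(n) = c has the solution n = exp(W(c)).
    return float(np.exp(lambertw(c).real))

n = nlogn_inverse(100.0)
print(n, n * np.log(n))   # the second value is ~100.0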
However, f(n) = n*log(n) is a monotonic function, so you can simply use bisection (here in Python):

import math

def nlogn(c):
    # Bisect on the monotonic function n -> n * log2(n).
    lower = 0.0
    upper = 10e10
    while True:
        middle = (lower + upper) / 2
        if lower == middle or middle == upper:
            return middle  # interval has collapsed to float precision
        if middle * math.log(middle, 2) > c:
            upper = middle
        else:
            lower = middle
The O notation only gives you the biggest term in the equation, i.e. the performance of your O(n log n) algorithm could actually be better represented by c = (n log n) + n + 53.
This means that without knowing the exact nature of the performance of your algorithm, you wouldn't be able to calculate the exact number of operations required to process a given amount of data.
But it is possible to calculate that the maximum number of operations required to process a data set of size n is more than a certain number, or conversely that the biggest problem set that can be solved, using that algorithm and that number of operations, is smaller than a certain number.
The O notation is useful for comparing two algorithms, e.g. an O(n^2) algorithm is asymptotically faster than an O(n^3) algorithm.
See Wikipedia for more info, plus some help with logs.
