I've been programming a GLSL raytracer for a while and have made some improvements, but for a few days now I've been thinking it would be much faster to raytrace curved surfaces instead of many triangles, which is how I came across NURBS. If I write the equation down (expanded so that it uses only +, -, *, /, sqrt and squaring) I can't see any way to get an intersection point with a ray.
Does any one of you know how to raytrace a NURBS of degree 2?
This is my equation (no real NURBS equation):
given:
A, B, C, D, E, F, G, H, I (3D vectors)
a = 2(B-A)
b = 2B-A-C
c = 2(E-D)
d = 2E-D-F
e = 2(H-G)
f = 2H-G-I
(a to f are defined just to make the equation below a bit shorter)
o, r (3D vectors again)
searched:
u, v (and t)
to solve:
(A+au-bu²) + 2v((D+cu-du²)-(A+au-bu²)) - v²(2(D+cu-du²)-(A+au-bu²)-(G+eu-fu²)) = o+rt
(NURB) = (LINE)
There is quite a bit of literature on the subject, for example https://www.researchgate.net/publication/232644373_Direct_and_fast_ray_tracing_of_NURBS_surfaces . That is for general NURBS; I'm not sure whether things simplify for quadratic NURBS.
The basic idea is to think of your ray as the intersection of two planes N . r = a, M . r = b, with N, M normal vectors to the planes and a, b constants. If r = R(u,v) is your NURBS function, this gives you two equations in two variables to solve.
This is where I'm a little unsure. I think for quadratic NURBS you can represent the function as a quotient of two quadratic polynomials, R(u,v) = P(u,v) / Q(u,v), where P is vector-valued and Q is just a scalar polynomial. If so, the equations you want to solve are
N . P(u,v) = a Q(u,v)
M . P(u,v) = b Q(u,v)
that is, two quadratics in two variables. You can use a variety of numerical methods like Newton's method or gradient descent, and as the equations are quadratic it should converge relatively quickly.
You will need to consider each patch separately (0 < u < 1/3, 0 < v < 1/3 etc.) to cope with the piecewise nature of the functions.
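To make the idea concrete, here is a minimal Newton-iteration sketch in Python (NumPy). It assumes you supply the patch as callables P(u,v) (vector valued) and Q(u,v) (scalar); the plane construction and the finite-difference Jacobian are my own simplifications for illustration, not part of any particular paper:

import numpy as np

def ray_as_planes(o, d):
    # represent the ray o + t*d as the intersection of two planes N.x = a, M.x = b
    d = d / np.linalg.norm(d)
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    N = np.cross(d, helper)
    N /= np.linalg.norm(N)
    M = np.cross(d, N)
    return N, N.dot(o), M, M.dot(o)

def intersect_patch(P, Q, o, d, u=0.5, v=0.5, iters=16, tol=1e-7):
    # Newton's method on F(u,v) = (N.P - a*Q, M.P - b*Q)
    N, a, M, b = ray_as_planes(np.asarray(o, float), np.asarray(d, float))
    F = lambda u, v: np.array([N.dot(P(u, v)) - a * Q(u, v),
                               M.dot(P(u, v)) - b * Q(u, v)])
    for _ in range(iters):
        f = F(u, v)
        if np.linalg.norm(f) < tol:
            break
        h = 1e-6  # finite-difference Jacobian, for brevity; use analytic derivatives in GLSL
        J = np.column_stack(((F(u + h, v) - f) / h, (F(u, v + h) - f) / h))
        du, dv = np.linalg.solve(J, -f)
        u, v = u + du, v + dv
    return u, v

The hit point is then P(u,v)/Q(u,v); t follows from projecting it onto the ray direction, and the resulting (u,v) still has to be checked against the patch bounds.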
This question and this question both show how to split a cubic Bézier curve at a particular parameterized value 0 ≤ t ≤ 1 along the curve, composing the original curve shape from two new segments. I need to split my Bézier curve at a point along the curve whose coordinate I know, but not the parameterized value t for the point.
For example, consider Adobe Illustrator, where the user can click on a curve to add a point into the path, without affecting the shape of the path.
Assuming I find the point on the curve closest to where the user clicks, how do I calculate the control points from this? Is there a formula to split a Bézier curve given a point on the curve?
Alternatively (and less desirably), given a point on the curve, is there a way to determine the parameterized value t corresponding to that point (other than using De Casteljau's algorithm in a binary search)?
My Bézier curve happens to only be in 2D, but a great answer would include the vector math needed to apply in arbitrary dimensions.
It is possible, and perhaps simpler, to determine the parametric value of a point on the curve without using De Casteljau's algorithm, but you will have to use heuristics to find a good starting value and similarly approximate the result.
One possible, and fairly simple way is to use Newton's method such that:
t_(n+1) = t_n - ( bx(t_n) - cx ) / bx'(t_n)
where bx(t) refers to the x component of some Bézier curve in polynomial form with the control points x0, x1, x2 and x3, bx'(t) is the first derivative and cx is a point on the curve such that:
cx = bx(t) | 0 < t < 1
the coefficients of bx(t) are:
A = -x0 + 3x1 - 3x2 + x3
B = 3x0 - 6x1 + 3x2
C = -3x0 + 3x1
D = x0
and:
bx(t) = At³ + Bt² + Ct + D,
bx'(t) = 3At² + 2Bt + C
Now finding a good starting value to plug into Newton's method is the tricky part. For most curves which do not contain loops or cusps, you can simply use the formula:
t_n = ( cx - x0 ) / ( x3 - x0 ) | x0 < x1 < x2 < x3
Now you already have:
bx(t_n) ≈ cx
So applying one or more iterations of Newton's method will give a better approximation of t for cx.
Note that the Newton-Raphson algorithm has quadratic convergence. In most cases a good starting value will result in negligible improvement after two iterations, i.e. less than half a pixel.
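As a small illustration, the whole scheme in Python (my own naming; a few iterations are usually plenty, per the convergence note above):

def bezier_t_for_x(cx, x0, x1, x2, x3, iterations=3):
    # polynomial coefficients of bx(t) = A t^3 + B t^2 + C t + D
    A = -x0 + 3*x1 - 3*x2 + x3
    B = 3*x0 - 6*x1 + 3*x2
    C = -3*x0 + 3*x1
    D = x0
    bx  = lambda t: ((A*t + B)*t + C)*t + D    # Horner form of the cubic
    dbx = lambda t: (3*A*t + 2*B)*t + C        # its first derivative
    t = (cx - x0) / (x3 - x0)                  # starting guess, assumes x0 < x1 < x2 < x3
    for _ in range(iterations):
        t -= (bx(t) - cx) / dbx(t)             # Newton-Raphson step
    return t

Once t is known you can split with De Casteljau as in the linked questions, which answers the original problem of adding a point without changing the path's shape.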
Finally it's worth noting that cubic Bézier curves have exact solutions for finding extrema by finding the roots of the first derivative. So curves which are problematic can simply be subdivided at their extrema to remove loops or cusps, and better results can be obtained by analyzing the resulting section in question. Subdividing cubics in this way will satisfy the above constraint.
I am looking for an algorithm in R to intersect a convex polytope with a line segment. I found several posts here on Stack Exchange for the planar case, but I am wondering if such algorithms exist in higher dimensions. My Google searches didn't really produce a lot of answers.
The line segment is composed of a point inside and a point outside the convex polytope. Are there algorithms available in R that can do this in dimension N <= 10? Or does anyone know a reference so I can implement the algorithm myself? Is there information on the complexity of finding the polytope and the intersection?
For problems in computational geometry, dimension d > 3 usually might as well be d arbitrary. If you have the polytope as a collection of intersected halfspaces, then likely the only sensible thing to do is to intersect the line segment with each of the separating hyperplanes (by solving a system of d linear equations) and take the intersection closest to the point inside.
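A sketch of that clipping loop in Python (the original question is about R, but the translation is direct); it assumes the polytope is given as A x <= b with the halfspace normals as rows of A, which is an assumption about your input format:

import numpy as np

def clip_segment(p_in, p_out, A, b):
    # p_in is inside A x <= b, p_out is outside; return the exit point closest to p_in
    p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
    d = p_out - p_in
    t_hit = 1.0
    for a_i, b_i in zip(np.asarray(A, float), np.asarray(b, float)):
        denom = a_i.dot(d)
        if denom > 0:  # the segment leaves this halfspace
            t_hit = min(t_hit, (b_i - a_i.dot(p_in)) / denom)
    return p_in + t_hit * d

This is O(m d) for m halfspaces in dimension d.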
If you have only the vertices of the polytope, or even just a set of points whose convex hull is the polytope, then the easiest approach given R's libraries is probably linear programming. (Conceivably you could compute the facets using an algorithm to find high-dimensional convex hulls, but there could be Theta(n^floor(d/2)) of them, where n is the number of vertices.) I'm not familiar with LP solvers in R, so I'll write down the program mathematically; it shouldn't be too hard to translate. Let p_0 be the point outside, p_1 be the point inside, and v_i be the ith point defining the polytope.
maximize alpha_0
subject to
for 1 <= j <= d,
p_0[j] alpha_0 + p_1[j] alpha_1 - sum_{1 <= i <= n} v_i[j] beta_i = 0
alpha_0 + alpha_1 = 1
sum_{1 <= i <= n} beta_i = 1
alpha_0 >= 0
alpha_1 >= 0
for 1 <= i <= n,
beta_i >= 0
The intersection is defined by the point p_0 alpha_0 + p_1 alpha_1 (unless the program is infeasible, in which case there is no intersection).
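The question asks for R, but to make the program concrete here is the same LP written with SciPy's linprog in Python (variables ordered [alpha_0, alpha_1, beta_1, ..., beta_n]); porting it to an R package such as lpSolve should be mechanical:

import numpy as np
from scipy.optimize import linprog

def segment_hull_intersection(p0, p1, V):
    # p0 outside, p1 inside, V: (n, d) array of points whose convex hull is the polytope
    p0, p1, V = np.asarray(p0, float), np.asarray(p1, float), np.asarray(V, float)
    n, d = V.shape
    # equalities: p0*a0 + p1*a1 - sum_i v_i*b_i = 0,  a0 + a1 = 1,  sum_i b_i = 1
    A_eq = np.zeros((d + 2, 2 + n))
    A_eq[:d, 0], A_eq[:d, 1], A_eq[:d, 2:] = p0, p1, -V.T
    A_eq[d, :2] = 1.0
    A_eq[d + 1, 2:] = 1.0
    b_eq = np.concatenate([np.zeros(d), [1.0, 1.0]])
    c = np.zeros(2 + n)
    c[0] = -1.0                              # maximize alpha_0  ==  minimize -alpha_0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq)   # default bounds are already >= 0
    if not res.success:
        return None                          # infeasible: no intersection
    return res.x[0] * p0 + res.x[1] * p1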
This should be very simple. I have a function f(x), and I want to evaluate f'(x) for a given x in MATLAB.
All my searches have come up with symbolic math, which is not what I need; I need numerical differentiation.
E.g. if I define: fx = inline('x.^2')
I want to find, say, f'(3), which would be 6; I don't want to find 2x.
If your function is known to be twice differentiable, use
f'(x) = (f(x + h) - f(x - h)) / (2h)
which is second order accurate in h. If it is only once differentiable, use
f'(x) = (f(x + h) - f(x)) / h (*)
which is first order in h.
This is theory. In practice, things are quite tricky. I'll take the second formula (first order) as the analysis is simpler. Do the second order one as an exercise.
The very first observation is that you must make sure that (x + h) - x = h, otherwise you get huge errors. Indeed, f(x + h) and f(x) are close to each other (say 2.0456 and 2.0467), and when you subtract them, you lose a lot of significant figures (here the difference is 0.0011, which has 3 significant figures fewer than x). So any error on h is likely to have a huge impact on the result.
So, first step, fix a candidate h (I'll show you in a minute how to choose it), and take as h for your computation the quantity h' = (x + h) - x. If you are using a language like C, you must take care to declare h or x as volatile so that this computation is not optimized away.
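In Python or MATLAB there is no compiler optimization to defend against, so the trick reduces to computing the step explicitly, for example:

def effective_step(x, h):
    # return h' = (x + h) - x, the increment that is actually representable at x
    return (x + h) - x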
Next, the choice of h. The error in (*) has two parts: the truncation error and the roundoff error. The truncation error is because the formula is not exact:
(f(x + h) - f(x)) / h = f'(x) + e1(h)
where e1(h) = h / 2 * sup_{t in [x, x+h]} |f''(t)|.
The roundoff error comes from the fact that f(x + h) and f(x) are close to each other. It can be estimated roughly as
e2(h) ~ epsilon_f |f(x) / h|
where epsilon_f is the relative precision in the computation of f(x) (or f(x + h), which is close). This has to be assessed from your problem. For simple functions, epsilon_f can be taken as the machine epsilon. For more complicated ones, it can be worse than that by orders of magnitude.
So you want h which minimizes e1(h) + e2(h). Plugging everything together and optimizing in h yields
h ~ sqrt(2 * epsilon_f * f / f'')
which has to be estimated from your function. You can take rough estimates. When in doubt, take h ~ sqrt(epsilon) where epsilon = machine accuracy. For the optimal choice of h, the relative accuracy to which the derivative is known is sqrt(epsilon_f), ie. half the significant figures are correct.
In short: too small a h => roundoff error, too large a h => truncation error.
For the second order formula, same computation yields
h ~ (6 * epsilon_f / f''')^(1/3)
and a fractional accuracy of (epsilon_f)^(2/3) for the derivative (which is typically one or two significant figures better than the first order formula, assuming double precision).
If this is too imprecise, feel free to ask for more methods, there are a lot of tricks to get better accuracy. Richardson extrapolation is a good start for smooth functions. But those methods typically compute f quite a few times, this may or not be what you want if your function is complex.
If you are going to use numerical derivatives a lot of times at different points, it becomes interesting to construct a Chebyshev approximation.
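Putting the two formulas and the recommended step sizes together, a small Python sketch (taking epsilon_f to be the machine epsilon, i.e. the optimistic case for simple functions; the same idea translates directly to MATLAB):

import math
import sys

def forward_diff(f, x, h=None):
    # first-order forward difference; default h ~ sqrt(eps), scaled by |x|
    if h is None:
        h = math.sqrt(sys.float_info.epsilon) * max(1.0, abs(x))
    h = (x + h) - x                          # use the exactly representable step
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=None):
    # second-order central difference; default h ~ eps**(1/3), scaled by |x|
    if h is None:
        h = sys.float_info.epsilon ** (1.0 / 3.0) * max(1.0, abs(x))
    h = (x + h) - x
    return (f(x + h) - f(x - h)) / (2.0 * h)

For the question's example, central_diff(lambda x: x**2, 3.0) agrees with the exact value 6 to roughly ten significant figures, while forward_diff gets around eight, consistent with the sqrt(epsilon_f) and epsilon_f^(2/3) estimates above.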
To get a numerical difference (symmetric difference), you calculate (f(x+dx)-f(x-dx))/(2*dx)
fx = @(x)x.^2;
fPrimeAt3 = (fx(3.1)-fx(2.9))/0.2;
Alternatively, you can create a vector of function values and apply DIFF, i.e.
xValues = 2:0.1:4;
fValues = fx(xValues);
df = diff(fValues)./0.1;
Note that diff takes the forward difference, and that it assumes that dx equals 1, which is why the result above is divided by the actual spacing of 0.1.
However, in your case, you may be better off defining fx as a polynomial and evaluating the derivative of the polynomial, rather than differencing function values.
Lacking the symbolic toolbox, nothing stops you from using Derivest, a tool for automatic adaptive numerical differentiation.
derivest(@sin,pi)
ans =
-1
For your example it does very nicely. In fact, it even provides an estimate of the error in the resulting approximation.
fx = inline('x.^2');
[fp,errest] = derivest(fx,3)
fp =
6
errest =
3.6308e-14
Did you try the diff (calculates differences and approximates a derivative), gradient, or polyder (calculates the derivative of a polynomial) functions?
You can read more about these functions by using help <commandname> in the MATLAB console, or by using the function browser in the Help menu.
For a given function in analytical form, you can evaluate the derivative at a desired point with the following code:
syms x
df = diff(x^2);
df3 = subs(df, 'x', 3);
fprintf('f''(3)=%f\n', df3);
For pure numerical derivatives use the already given solutions by Jonas and posdef.
I have a homework problem for my algorithms class asking me to calculate the maximum size of a problem that can be solved in a given number of operations using an O(n log n) algorithm (ie: n log n = c). I was able to get an answer by approximating, but is there a clean way to get an exact answer?
There is no closed-form formula for this equation. Basically, you can transform the equation:
n log n = c
log(n^n) = c
n^n = exp(c)
Then, this equation has a solution of the form:
n = exp(W(c))
where W is the Lambert W function (see especially "Example 2"). It has been proven that W cannot be expressed in terms of elementary functions.
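If you do want a direct (non-iterative) evaluation, SciPy exposes the Lambert W function; a small sketch (note the derivation above uses the natural log, so for a base-2 log you solve n·ln n = c·ln 2 instead):

import math
from scipy.special import lambertw

def max_problem_size(c, base=2):
    # largest real n with n * log_base(n) = c, via n = exp(W(c * ln(base)))
    return math.exp(lambertw(c * math.log(base)).real)

Taking the floor of that gives the largest integer problem size; for c = 10**6 and base 2 it comes out around 62,746.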
However, f(n) = n*log(n) is a monotonic function, so you can simply use bisection (here in Python):
import math

def nlogn(c):
    lower = 0.0
    upper = 10e10
    while True:
        middle = (lower + upper) / 2
        if lower == middle or middle == upper:
            return middle
        if middle * math.log(middle, 2) > c:
            upper = middle
        else:
            lower = middle
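Usage is then simply, for example:

n_max = nlogn(10**6)   # largest real n with n * log2(n) <= 10**6
print(int(n_max))      # about 62746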
The O notation only gives you the dominant term of the running time, i.e. the performance of your O(n log n) algorithm could actually be better represented by c = (n log n) + n + 53.
This means that without knowing the exact nature of the performance of your algorithm, you wouldn't be able to calculate the exact number of operations required to process a given amount of data.
But it is possible to calculate that the maximum number of operations required to process a data set of size n is more than a certain number, or conversely that the biggest problem set that can be solved, using that algorithm and that number of operations, is smaller than a certain number.
The O notation is useful for comparing two algorithms, i.e. an O(n^2) algorithm is faster than an O(n^3) algorithm, etc.
See Wikipedia for more info, and some help with logs.
I'm working on coding the Pohlig-Hellman algorithm, but I am having problems understanding the steps of the algorithm based on its definition.
Going by the Wikipedia article on the algorithm:
I know the first part, 1), is to calculate the prime factorization of p-1, which is fine.
However, I am not sure what I need to do in step 2), where you calculate the coefficients:
Let x2 = c0 + c1·2.
125^(180/2) = 125^90 ≡ 1 (mod 181), so c0 = 0.
125^(180/4) = 125^45 ≡ 1 (mod 181), so c1 = 0.
Thus, x2 = 0 + 0 = 0.
and 3) put the coefficients together and solve using the Chinese remainder theorem.
Can someone help by explaining this in plain English, or in pseudocode? I want to code the solution myself obviously, but I cannot make any more progress unless I understand the algorithm.
Note: I have done a lot of searching for this, and I read S. Pohlig and M. Hellman (1978), "An Improved Algorithm for Computing Logarithms over GF(p) and its Cryptographic Significance", but it's still not really making sense to me.
Thanks in advance
Update:
How come q (125) stays constant in this example, whereas in this example it appears that he is calculating a new q each time?
To be more specific I don't understand how the following is computed:
Now divide 7531 by a^c0 to get
7531 · a^(-2) ≡ 6735 (mod p).
Let's start with the main idea behind Pohlig-Hellman. Assume that we are given y, g and p and that we want to find x, such that
y == g^x (mod p).
(I'm using == to denote an equivalence relation.) To simplify things, I'm also assuming that the order of g is p-1, i.e. the smallest positive k with 1 == g^k (mod p) is k = p-1.
An inefficient method to find x, would be to simply try all values in the range 1 .. p-1.
Somewhat better is the "Baby-step giant-step" method that requires O(p^0.5) arithmetic operations. Both methods are quite slow for large p. Pohlig-Hellman is a significant improvement when p-1 has many factors. I.e. assume that
p-1 = n r
Then what Pohlig and Hellman propose is to solve the equation
y^n == (g^n)^z (mod p).
If we take logarithms to the basis g on both sides, this is the same as
n·log_g(y) == log_g(y^n) == n·z (mod p-1).
n can be divided out, giving
log_g(y) == z (mod r).
Hence x == z (mod r).
This is an improvement, since we only have to search a range 0 .. r-1 for a solution of z. And again "Baby-step giant-step" can be used to improve the search for z. Obviously, doing this once is not a complete solution yet. I.e. one has to repeat the algorithm above for every prime factor r of p-1 and then to use the Chinese remainder theorem to find x from the partial solutions. This works nicely if p-1 is square free.
If p-1 is divisible by a prime power then a similar idea can be used. For example let's assume that p-1 = m·q^k.
In the first step, we compute z such that x == z (mod q) as shown above. Next we want to extend this to a solution x == z' (mod q^2). E.g. if p-1 = m·q^2 then this means that we have to find z' such that
y^m == (g^m)^z' (mod p).
Since we already know that z' == z (mod q), z' must be in the set {z, z+q, z+2q, ..., z+(q-1)q}. Again we could either do an exhaustive search for z' or improve the search with "Baby-step giant-step". This step is repeated for every exponent of q, that is, from knowing x mod q^i we iteratively derive x mod q^(i+1).
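A compact Python sketch of the whole procedure, using a naive exhaustive search inside each small subgroup (baby-step giant-step would slot in there) and the Chinese remainder theorem at the end; the factorization of p-1 is passed in, since that part is assumed done:

def dlog_prime_power(y, g, p, q, k):
    # find x mod q^k with g^x == y (mod p), assuming the order of g is p-1
    n = p - 1
    x = 0
    gamma = pow(g, n // q, p)                # generator of the subgroup of order q
    for i in range(k):
        # strip off the digits of x found so far, then map into the order-q subgroup
        h = pow(y * pow(g, -x, p) % p, n // q**(i + 1), p)   # pow(g, -x, p): modular inverse (Python 3.8+)
        # exhaustive search for the next base-q digit; baby-step giant-step would go here
        d = next(d for d in range(q) if pow(gamma, d, p) == h)
        x += d * q**i
    return x

def crt(residues, moduli):
    # combine x == r_i (mod m_i) into x mod prod(m_i), for pairwise coprime moduli
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        x += m * ((r - x) * pow(m, -1, mi) % mi)
        m *= mi
    return x

def pohlig_hellman(y, g, p, factorization):
    # factorization: list of (q, k) pairs with p-1 == product of q**k
    residues = [dlog_prime_power(y, g, p, q, k) for q, k in factorization]
    moduli = [q**k for q, k in factorization]
    return crt(residues, moduli)

For the question's p = 181, the factorization argument would be [(2, 2), (3, 2), (5, 1)] since 180 = 2^2 · 3^2 · 5, and step 3) is exactly the crt() call at the end.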
I'm coding it up myself right now (Java). I'm using Pollard's rho to find the small prime factors of p-1, then using Pohlig-Hellman to solve for a DSA private key, y = g^x. I am having the same problem.
UPDATE: "To be more specific I don't understand how the following is computed: Now divide 7531 by a^c0 to get 7531(a^-2) = 6735 mod p."
If you find the modInverse (modular inverse) of a^c0, it will make sense.
Regards