I am looking for an (almost everywhere) differentiable function f(p1, p2, p3, p4) that, given four points, will give me a scale-agnostic measure of co-planarity. It is zero if the four points lie on the same plane and positive otherwise. Scale-agnostic means that when I uniformly scale all points, the planarity measure returns the same value.
I came up with something that is quite complex and not easy to optimize. Define u=p2-p1, v=p3-p1, w=p4-p1. Then the planarity measure is:
[(u x v) * w]² / (|u x v|² |w|²)
where x means cross product and '*' means dot product.
The numerator is (up to a constant factor) the square of the volume of the tetrahedron defined by the four points, and the denominator is a normalizing factor that turns this measure into the squared cosine of an angle. Because angles do not change under uniform scaling, this function satisfies all my requirements.
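For concreteness, here is a minimal NumPy sketch of this measure (the function name and the epsilon guard on the denominator are my own additions):

import numpy as np

def planarity(p1, p2, p3, p4, eps=1e-12):
    # zero iff the four points are coplanar; unchanged under uniform scaling
    u, v, w = p2 - p1, p3 - p1, p4 - p1
    n = np.cross(u, v)                  # normal of the plane through p1, p2, p3
    num = np.dot(n, w) ** 2             # squared (6x) tetrahedron volume
    den = np.dot(n, n) * np.dot(w, w)   # |u x v|^2 * |w|^2
    return num / (den + eps)            # squared cosine of the angle between n and w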
Does anybody know of something simpler?
Alex.
Edit:
I eventually used an Augmented Lagrangian method to perform optimization, so I don't need it to be scale agnostic. Just using the constraint (u x v) * w = 0 is enough, as the optimization procedure finds the correct Lagrange multiplier to compensate for the scale.
Your method seems OK; I'd do something like this for an efficient implementation:
Take u, v, w as you did
Normalize them: various tricks exist to evaluate the inverse square root efficiently with whatever precision you want, like this jewel. Most modern processors have builtins for this operation.
Take f = |det(u, v, w)| ( = (u x v) . w ). There are fast direct implementations for 3x3 matrices; see @batty's answer to this question.
This amounts to what you do without the squares. It is still homogeneous and almost everywhere differentiable. Take the square of the determinant if you want something differentiable everywhere.
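A sketch of the direct 3x3 determinant evaluation (plain Python; u, v, w are assumed to be 3-component sequences, normalized first as above, and then f = abs(triple_product(u, v, w))):

def triple_product(u, v, w):
    # det(u, v, w) = (u x v) . w, expanded along the first row
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))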
EDIT: @phkahler implicitly suggested using the ratio of the radius of the inscribed sphere to the radius of the circumscribed sphere as a measure of planarity. This is a bounded differentiable function of the points, invariant under scaling. However, it is at least as difficult to compute as what you (and I) suggest. In particular, computing the radius of the circumscribed sphere is very sensitive to roundoff errors.
A measure that should be symmetric with respect to point reorderings is:
((u x v).w)^2/(|u||v||w||u-v||u-w||v-w|)
which is proportional to the volume of the tetrahedron squared divided by all 6 edge lengths. It is not simpler than your formula or Alexandre C.'s, but it is not much more complicated. However, it does become unnecessarily singular when any two points coincide.
A better-behaved, order-insensitive formula is:
let a = u x v
b = v x w
c = w x u
(a.w)^2/(|a| + |b| + |c| + |a+b+c|)^3
which is something like the volume of the tetrahedron divided by the surface area, but raised to appropriate powers to make the whole thing scale-insensitive. This is also a bit more complex than your formula, but it works unless all 4 points are collinear.
How about
|(u x v) * w| / |u|^3
(and you can replace |x| with (x)^2 if you think that is simpler).
Related
If I have X things (let's just randomly say 300),
is there an algorithm that will arrange these things somewhat evenly around a central point? Like a 100-sided die or a 3D mesh of a sphere?
I'd rather have the things somewhat evenly spaced like this...
rather than this polar way...
P.S. For those interested, wondering why I want to do this?
Well, I'm doing these for fun, and after completing #7 I decided I'd like to represent the array of wires in 3D in Unity and watch them operate in a slowed-down manner.
Here is a simple transformation that maps a uniform sample in the rectangle [0, 2 pi] x [-1, 1] onto a uniform sample on the sphere of radius r:
T(phi, z) = (r cos(phi) sqrt(1 - z^2), r sin(phi) sqrt(1 - z^2), r z)
The reason why this transformation produces uniform samples on the sphere is that the area of any region T(U), obtained by transforming a region U of the rectangle, does not depend on the shape or position of U, only on its area.
To prove this mathematically, it is enough to verify that the norm of the cross product
| ∂T/∂phi x ∂T/∂z |
is constant (the area on the sphere is the integral of this norm with respect to phi and z).
Summarizing
To produce a random sample uniformly distributed on the sphere of radius r, do the following:
Produce a random sample (phi_1, ..., phi_n) uniformly distributed in [0, 2 pi].
Produce a random sample (z_1, ..., z_n) uniformly distributed in [-1, 1].
For every pair (phi_j, z_j), calculate T(phi_j, z_j) using the formula above.
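A minimal NumPy sketch of this recipe (the function name and the seed parameter are my own):

import numpy as np

def uniform_sphere_sample(n, r=1.0, seed=None):
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)   # uniform in [0, 2 pi]
    z = rng.uniform(-1.0, 1.0, n)            # uniform in [-1, 1]
    s = np.sqrt(1.0 - z * z)
    # T(phi, z) from above, applied component-wise
    return np.column_stack((r * np.cos(phi) * s, r * np.sin(phi) * s, r * z))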
Here's a three-step approach. 1a) Make more points than you need. 1b) Remove some. 2) Adjust the rest.
1a) To make more points than you need, take any quasiregular polyhedron with faces that tessellate (triangles, squares, diamonds). Tessellate the spherical faces by subdivision, generating more vertices. For example, if you use the regular icosahedron you get geodesic domes. (Subdivide by 2 and you get the dual to the C60 buckyball.) Working out exact formulas isn't hard. The number of new vertices per face is quadratic in the subdivision.
1b) Randomly remove enough points to get you down to your target number.
2) Use a force-directed layout algorithm to redistribute the vertices over the sphere. The underlying force graph is just that provided by the nearest neighbors in your underlying tessellation.
There are other ways to do step 1), such as just generating random points in any distribution. There is an advantage to starting with a quasiregular figure, though: force-directed algorithms have a reputation for poor convergence in some cases, and by starting with something that's already mostly optimal, you'll bypass most of the convergence problems you might otherwise have.
One elegant solution I came across recently is a spherical Fibonacci lattice (http://extremelearning.com.au/how-to-evenly-distribute-points-on-a-sphere-more-effectively-than-the-canonical-fibonacci-lattice/).
The nice thing about it is that you can specify the exact number of points you want.
// C# code example; Vector3 is System.Numerics.Vector3, MathF is System.MathF
Vector3[] SphericalFibonacciLattice(int n) {
    Vector3[] res = new Vector3[n];
    float goldenRatio = (1.0f + MathF.Sqrt(5.0f)) * 0.5f;
    for (int i = 0; i < n; i++)
    {
        // longitude advances by the golden angle; the acos term spaces the
        // points uniformly in z, which is area-uniform on the sphere
        float theta = 2.0f * MathF.PI * i / goldenRatio;
        float phi = MathF.Acos(1.0f - 2.0f * (i + 0.5f) / n);
        Vector3 p = new Vector3(MathF.Cos(theta) * MathF.Sin(phi),
                                MathF.Sin(theta) * MathF.Sin(phi),
                                MathF.Cos(phi));
        res[i] = p;
    }
    return res;
}
The linked article builds on this to create an even more uniform distribution, but even this basic version produces very nice results.
I have this formula
B = tan(atan(A) + C)
where A is the input, B is the output and C is a constant. The problem is that the sin, cos and tan functions are computationally expensive, and there is also quite a big loss of precision along the formula when it is calculated with 4-byte floats. I am in the process of optimizing my code, so is there any way to avoid using these functions, even if the total number of calculations is several times higher?
Further background: the numbers A, B and C are the ratios of the x/y coordinates of 3 points on a two-dimensional plane.
According to Wolfram Alpha, tan(atan(A)+C) can be written as (A+tan(C))/(1-A*tan(C)).
You can easily derive this by hand from the tangent sum formula:
tan(a + b) = (tan a + tan b)/(1 - tan a tan b).
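A sketch of the resulting evaluation; math.tan is called exactly once, at setup (the value of C below is just a placeholder):

import math

C = 0.1                # placeholder constant
TAN_C = math.tan(C)    # precomputed once

def f(a):
    # B = tan(atan(A) + C) == (A + tan(C)) / (1 - A*tan(C))
    return (a + TAN_C) / (1.0 - a * TAN_C)

Watch out for the pole at A*tan(C) = 1; the original expression blows up at the same input.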
If the implementation of tan in your math library is slow or inaccurate it's possible that faster or more precise implementations exist.
I'll presume that your formula is correct. Mark's comment essentially comes down to the idea that C must have units of an angle for the formula to make sense, but if C is a ratio, then it won't have the proper units. Mark has a valid question.
In the end, you will still need to compute a tangent, but there are things you can do to help a bit.
First, apply a simple trig identity, for the tangent of a sum. This, combined with the fact that tan(atan(A)) = A, reduces your formula to
B = (A + tan(C))/(1 - A*tan(C))
So you still need to compute ONE tangent, that of C (precompute tan(C) once). Nothing will get you around that.
However, there are ways to compute a tangent more efficiently than as a ratio sin(C)/cos(C). For example, a direct series approximation might be better. Or there is a trick using a series for the versine, which is itself more efficient to compute than the tangent series. And for small angles it can be rapidly convergent. You can assure small angles using range reduction tricks for that versine. Other tricks exist too.
atan(A) = atan(x_a/y_a) for some point a is the angle between the vector a = (x_a, y_a) and Oy. Because C is a constant, you can precompute a vector c = (x_c, y_c) of unit length, inclined to Oy at angle C. Then cos(atan(A)+C) can be expressed as the inner product of these vectors divided by the length of a. From cos you can get tan using the Pythagorean identity. In the end I got:
B = sqrt((x_a^2 + y_a^2)/(x_a*x_c + y_a*y_c)^2 - 1)
This might be more effective. Be careful with signs.
I have a set of points (with unknow coordinates) and the distance matrix. I need to find the coordinates of these points in order to plot them and show the solution of my algorithm.
I can set one of these points to the coordinate (0,0) to simplify, and find the others. Can anyone tell me if it's possible to find the coordinates of the other points, and if yes, how?
Thanks in advance!
EDIT
Forgot to say that I need the coordinates on x-y only
The answers based on angles are cumbersome to implement and can't be easily generalized to data in higher dimensions. A better approach is that mentioned in my and WimC's answers here: given the distance matrix D(i, j), define
M(i, j) = 0.5*(D(1, j)^2 + D(i, 1)^2 - D(i, j)^2)
which should be a positive semi-definite matrix with rank equal to the minimal Euclidean dimension k in which the points can be embedded. The coordinates of the points can then be obtained from the k eigenvectors v(i) of M corresponding to non-zero eigenvalues q(i): place the vectors sqrt(q(i))*v(i) as columns in an n x k matrix X; then each row of X is a point. In other words, sqrt(q(i))*v(i) gives the ith component of all of the points.
The eigenvalues and eigenvectors of a matrix can be obtained easily in most programming languages (e.g., using GSL in C/C++, using the built-in function eig in Matlab, using Numpy in Python, etc.)
Note that this particular method always places the first point at the origin, but any rotation, reflection, or translation of the points will also satisfy the original distance matrix.
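A NumPy sketch of this construction (0-based indexing, so the point with index 0 plays the role of "point 1" above and lands at the origin):

import numpy as np

def embed(D, k=2):
    # M(i, j) = 0.5*(D(0, j)^2 + D(i, 0)^2 - D(i, j)^2)
    M = 0.5 * (D[0, None, :] ** 2 + D[:, 0, None] ** 2 - D ** 2)
    q, V = np.linalg.eigh(M)          # M is symmetric, so eigh applies
    idx = np.argsort(q)[::-1][:k]     # the k largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(q[idx], 0.0))  # rows are points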
Step 1, arbitrarily assign one point P1 as (0,0).
Step 2, arbitrarily assign one point P2 along the positive x axis: (Dp1p2, 0).
Step 3, find a point P3 that is not (approximately) collinear with P1 and P2, i.e. one for which none of
Dp1p2 ≈ Dp1p3 + Dp2p3
Dp1p3 ≈ Dp1p2 + Dp2p3
Dp2p3 ≈ Dp1p3 + Dp1p2
holds, and set that point in the "positive" y domain (a point that meets any of these criteria lies on the P1P2 axis and should be placed there instead).
Use the law of cosines to determine the angle A at P1:
cos (A) = (Dp1p2^2 + Dp1p3^2 - Dp2p3^2)/(2*Dp1p2* Dp1p3)
P3 = (Dp1p3 * cos (A), Dp1p3 * sin(A))
You have now successfully built an orthonormal coordinate frame and placed three points in it.
Step 4: To determine all the other points, repeat step 3 to get a tentative coordinate (Xn, Yn).
Compare the distance {(Xn, Yn), (X3, Y3)} to Dp3pn in your matrix. If it is identical, you have successfully identified the coordinate for point n. Otherwise, the point n is at (Xn, -Yn).
Note there is an alternative to step 4, but it is too much math for a Saturday afternoon.
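A sketch of steps 1-4 (assuming D is a full n x n NumPy distance matrix and that the point with index 2 has been chosen non-collinear, as step 3 requires):

import numpy as np

def place_points(D, tol=1e-6):
    n = len(D)
    P = np.zeros((n, 2))
    P[1] = (D[0][1], 0.0)  # P2 on the positive x axis
    for i in range(2, n):
        # law of cosines for the angle at P1 (step 3)
        cos_a = (D[0][1]**2 + D[0][i]**2 - D[1][i]**2) / (2.0 * D[0][1] * D[0][i])
        sin_a = np.sqrt(max(0.0, 1.0 - cos_a * cos_a))
        P[i] = (D[0][i] * cos_a, D[0][i] * sin_a)  # tentative: positive y
        # step 4: check against P3's distance; mirror if it disagrees
        if i > 2 and abs(np.linalg.norm(P[i] - P[2]) - D[2][i]) > tol:
            P[i, 1] = -P[i, 1]
    return P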
If for points p, q, and r you have pq, qr, and rp in your matrix, you have a triangle.
Wherever you have a triangle in your matrix you can compute one of two solutions for that triangle (independent of a Euclidean transform of the triangle on the plane). That is, for each triangle you compute, its mirror image is also a triangle that satisfies the distance constraints on p, q, and r. The fact that there are two solutions even for a triangle leads to the chirality problem: you have to choose the chirality (orientation) of each triangle, and not all choices may lead to a feasible solution to the problem.
Nevertheless, I have some suggestions. If the number of entries is small, consider using simulated annealing. You could incorporate chirality into the annealing step. This will be slow for large systems, and it may not converge to a perfect solution, but for some problems it's the best you can do.
The second suggestion will not give you a perfect solution, but it will distribute the error: the method of least squares. In your case the objective function will be the error between the distances in your matrix, and actual distances between your points.
This is a math problem: derive the coordinate matrix X given only its distance matrix.
However, there is an efficient solution to this -- Multidimensional Scaling (MDS), which does some linear algebra. Simply put, it requires a pairwise Euclidean distance matrix D, and the output is the estimated coordinate matrix Y (perhaps rotated), which is an approximation to X. For programming purposes, just use sklearn.manifold.MDS in Python.
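A usage sketch, assuming scikit-learn is installed; dissimilarity="precomputed" makes MDS consume a ready-made distance matrix:

import numpy as np
from sklearn.manifold import MDS

D = np.array([[0.0, 1.0, 1.0],   # a toy 3 x 3 distance matrix
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
Y = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)
print(Y)  # 3 x 2 coordinates, determined up to rotation/reflection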
The "eigenvector" method given by the favourite replies above is very general and automatically outputs a set of coordinates as the OP requested, however I noticed that that algorithm does not even ask for a desired orientation (rotation angle) for the frame of the output points, the algorithm chooses that orientation all by itself!
People who use it might want to know at what angle the frame will be tipped before hand so I found an equation which gives the answer for the case of up to three input points, however I have not had time to generalize it to n-points and hope someone will do that and add it to this discussion. Here are the three angles the output sides will form with the x-axis as a function of the input side lengths:
angle side a = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2-b^2)^2)/(a^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
angle side b = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2+b^2-a^2)^2)/(4*b^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
angle side c = arcsin(sqrt(((c+b+a)*(c+b-a)*(c-b+a)*(-c+b+a)*(c^2+b^2-a^2)^2)/(4*c^4*((c^2+b^2-a^2)^2+(c^2-b^2)^2))))*180/Pi/2
Those equations also lead directly to a solution to the OP's problem of finding the coordinates of each point: the side lengths are already given by the OP as the input, and my equations give the slope of each side versus the x-axis of the solution, thus revealing the vector for each side of the polygon answer; summing those sides through vector addition up to a desired vertex produces the coordinate of that vertex. So if anyone can extend my angle equations to handle more than three input lengths (but I note: that might be impossible?), it might be a very fast way to the general solution of the OP's question, since the slow parts of the algorithms given above, like least-squares fitting or matrix equation solving, might be avoidable.
I am developing a rotate-around-axis algorithm in 3 dimensions. My inputs are
the axis I am revolving around, as a vector from my center point
the center point (obviously)
the angle I wish to rotate by
my current position
I am wondering if there is a way to do this without trigonometry, just with vector operations. Does anyone have a potential solution?
EDIT: Is there a way that I could rotate by pi/4 radians (45 degrees) each time, rather than an inputted angle theta? This might simplify things a bit, I don't know.
Rotations are inherently well-described by quaternions and rotation matrices.
It's a handy trick that unit quaternions nicely represent 3-D rotations just as well as (and in some senses, better than) rotation matrices. Converting a rotation by angle theta about a normalized axis (x, y, z), where x^2 + y^2 + z^2 = 1, does require a little bit of trigonometry: q = (cos(theta/2), x sin(theta/2), y sin(theta/2), z sin(theta/2)).
But from there on it's simple arithmetic.
A quaternion can be directly applied to rotate a vector with v' = q v q^-1, or converted to a rotation matrix.
This is a rotation around the origin, of course. To rotate around an arbitrary point p in space, simply translate by -p to the origin, rotate, then translate by p to return.
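A small sketch of that arithmetic; the identity v + 2 u x (u x v + w v) used below is a standard expansion of q v q^-1 for a unit quaternion q = (w, u):

import math
import numpy as np

def quat_from_axis_angle(axis, theta):
    # axis must be normalized; this is the only place trig appears
    s = math.sin(theta / 2.0)
    return np.array([math.cos(theta / 2.0), axis[0] * s, axis[1] * s, axis[2] * s])

def quat_rotate(q, v):
    # v' = q v q^-1 for a unit quaternion q = (w, u)
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

To rotate around a center point other than the origin: center + quat_rotate(q, position - center).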
use matrices: http://en.wikipedia.org/wiki/Rotation_matrix#Rotations_in_three_dimensions
If this is some sort of dumb homework problem, you can use a Taylor series approximation of the sine/cosine functions. Whether or not this "counts" as trigonometry is, I guess, up for debate. You could then use these values in a rotation matrix or quaternion, if you want to use vector operations.
But again, there's no practical reason to do this.
Are there other techniques that don't use trig functions? Possibly, but there are no known efficient, general (i.e. for arbitrary angles) ways to perform rotations without the use of trig functions.
However, based on your edit, you can precompute the sin and cos for a collection of angles you're interested in and store them in a lookup table. You need not be constrained in such a circumstance to π/4 increments, but you can do π/256 or π/1024 increments if you want. Also, you don't need two tables, since cos(θ) = sin(θ+π/2).
From there, you can use any of a number of interpolation methods to include simple rounding, linear interpolation or some sort of polynomial interpolation based on your needs.
You would then use either the matrix or quaternion based transformation to compute the rotated vector.
This will be faster than computing the sin and cos for general angles, though will require some additional space, and there will be an accuracy penalty as well. But if it satisfies your needs...
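A sketch of the table idea; the table size and the interpolation scheme are arbitrary choices here:

import math

N = 1024  # table covers [0, 2 pi) in steps of pi/512
SIN = [math.sin(2.0 * math.pi * i / N) for i in range(N)]

def sin_lut(theta):
    x = theta * N / (2.0 * math.pi)
    i = int(math.floor(x))
    f = x - i
    a, b = SIN[i % N], SIN[(i + 1) % N]
    return a + f * (b - a)                 # linear interpolation

def cos_lut(theta):
    return sin_lut(theta + math.pi / 2.0)  # cos(t) = sin(t + pi/2)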
There's a cheaper way than matrices; I think I've got it down to a sum/count of adders.
The perimeter box of the vector is as good as an angle, if you step in partitions of the box size (that's only a binary shift if it's a power of 2).
Then that would be a "box rotate"; just use the side report to tell you how far along the diagonal you would be, and then you can split it up into so many gradients, the circle shape.
I'd like to see someone prove that you can rotate without matrices or any trig like that, too.
Is it possible to rotate without trigonometry? Yes.
Is it useful to rotate without using trigonometry? Probably not.
The first option is a problem-level solution: Change your coordinate system to spherical or cylindrical coordinates.
Since you rotate around an axis, cylindrical coordinates of the form (alpha, radius, x3) will work.
Naming your center point O (for origin) and the point to rotate P, you can get the vector between them v=P-O. You also know the normal vector n of your plane of rotation (the vector you rotate around). With this, you can get the components of v that are parallel and orthogonal to n using a vector projection.
You have the freedom to choose how your new coordinate frame is rotated (relative to your original frame), so you can measure angles from the projection of v onto the plane of rotation. You also have the freedom to choose between degrees and radians.
From there, you can now rotate to your heart's content using addition and subtraction.
Using dot(.,.) to denote the scalar product it would look something like this in code
v_parallel = dot(v, n) / dot(n, n) * n
radius = norm(v - v_parallel)
x3 = norm(v_parallel)
new_axis = (v - v_parallel) / norm(v - v_parallel)
P_polar = (0, radius, x3)
# P rotated by 90 degrees
P_polar = (pi/2, radius, x3)
# P rotated by -10 degrees
P_polar = (-pi/36, radius, x3)
However, if you want to change back to a standard basis you will have to use trigonometry again. Hence why I said this approach exists, but may not be too useful in practice.
Another approach comes from the cool observation that you can describe any planar rotation using two reflections along two given axes (represented by two vectors). The plane of rotation is the plane spanned by the two vectors, and the angle of rotation is twice the angle between the two vectors.
You can reflect a vector using the vector projection from above; hence, you can do the entire process without trigonometry if you know the two vectors (let's call them x1 and x2).
tmp = v - 2 * dot(v, x1) / dot(x1, x1) * x1
v_rotated = tmp - 2 * dot(tmp, x2) / dot(x2, x2) * x2
The problem then turns into finding two vectors that are orthogonal to n and have an enclosing angle of alpha/2. How to do this is specific to your problem. For arbitrary alpha this is again the point where you can't dodge the trigonometry bullet; hence, it is again possible, but maybe not so viable in practice.
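A sketch of the double-reflection route, assuming x1 and x2 have already been chosen orthogonal to n with half the desired angle between them:

import numpy as np

def reflect(v, x):
    # reflection of v across the plane with normal x (the formula above)
    return v - 2.0 * np.dot(v, x) / np.dot(x, x) * x

def rotate_by_reflections(v, x1, x2):
    # two reflections compose to a rotation by twice the angle between x1 and x2
    return reflect(reflect(v, x1), x2)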
With help from Mathematica, it looks like we can rotate a point around a vector without Sin/Cos if you are willing to specify the amount of rotation as a number between -1 and 1, rather than an angle in radians.
The below starts with Mathematica's RotationTransform of a point {x,y,z} around a vector {u,v,w} by c radians (which contains many instances of Cos[c] and Sin[c]). It then substitutes all the Cos[c] with "c" and Sin[c] with Sqrt[1-c^2] (a trig identity for Sin in terms of Cos). Everything is simplified with the assumption that the rotation vector is normalized. The resulting equation produces the rotated point without any trig operations.
Note: as c ranges from -1 to 1 the point will only rotate through half a circle; the other half of the rotation can be achieved by flipping the signs on {u,v,w}.
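The resulting expression is equivalent to Rodrigues' rotation formula with the same substitution (Cos[c] -> c, Sin[c] -> Sqrt[1-c^2]); a sketch, assuming the axis {u,v,w} is normalized:

import numpy as np

def rotate_by_cosine(p, axis, c):
    # c = cos(angle) in [-1, 1]; covers half a turn, flip axis for the rest
    s = np.sqrt(1.0 - c * c)
    return p * c + np.cross(axis, p) * s + axis * np.dot(axis, p) * (1.0 - c)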
Given the points of a line and a quadratic bezier curve, how do you calculate their nearest point?
There exists a scientific paper on this question from INRIA: Computing the minimum distance between two Bézier curves (PDF here).
I once wrote a tool to do a similar task. Bezier splines are typically parametric cubic polynomials. To compute the square of the distance between a cubic segment and a line, this is just the square of the distance between two polynomial functions, itself just another polynomial function! Note that I said the square of the distance, not the square root.
Essentially, for any point on a cubic segment, one could compute the square of the distance from that point to the line. This will be a 6th order polynomial. Can we minimize that square of the distance? Yes. The minimum must occur where the derivative of that polynomial is zero. So differentiate, getting a 5th order polynomial. Use your favorite root finding tool that generates all of the roots numerically. Jenkins & Traub, whatever. Choose the correct solution from that set of roots, excluding any solutions that are complex, and only picking a solution if it lies inside the cubic segment in question. Make sure you exclude the points that correspond to local maxima of the distance.
All of this can be efficiently done, and no iterative optimizer besides a polynomial root finder need be used, thus one does not require the use of optimization tools that require starting values, finding only a solution near that starting value.
For example, in the 3-d figure I show a curve generated by a set of points in 3-d (in red). Then I took another set of points that lay on a circle outside, computed the closest point on the inner curve from each, and drew a line down to that curve. These points of minimum distance were generated by the scheme outlined above.
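Specializing this scheme to the question's quadratic curve against an (infinite) line: the signed distance d(t) = n . (B(t) - q) is quadratic in t, so the candidates for the minimum of d(t)^2 are the real roots of d and d', plus the segment ends. A sketch:

import numpy as np

def nearest_t(P0, P1, P2, q, n):
    # B(t) = (1-t)^2 P0 + 2 t (1-t) P1 + t^2 P2; q is a point on the line, n its normal
    n = n / np.linalg.norm(n)
    a = np.dot(n, P0 - 2.0 * P1 + P2)   # t^2 coefficient of d(t)
    b = np.dot(n, 2.0 * (P1 - P0))      # t^1 coefficient
    c = np.dot(n, P0 - q)               # t^0 coefficient
    cands = [0.0, 1.0]
    cands += [r.real for r in np.roots([a, b, c]) if abs(r.imag) < 1e-12]
    if a != 0.0:
        cands.append(-b / (2.0 * a))    # root of d'(t)
    cands = [t for t in cands if 0.0 <= t <= 1.0]
    return min(cands, key=lambda t: (a * t * t + b * t + c) ** 2)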
I just want to give you a few hints, first for the case Q.B. curve / segment:
To get a fast enough computation, I think you should first think about using a kind of 'bounding box' for your algorithm.
Say P0 is the first endpoint of the Q.B. curve, P2 the second endpoint, P1 the control point, and P3P4 the segment; then:
Compute the distances from P0, P1 and P2 to P3P4.
If P0 or P2 is the nearest point, this is the nearest point of the curve from P3P4. End. :=)
If P1 is the nearest point, and Pi (i = 0 or 2) is the second nearest point, the distance between PiP1 and P3P4 is an estimate of the distance you seek that might be precise enough, depending on your needs.
If you need to be more accurate: compute P1', which is the point on the Q.B. curve nearest to P1; you find it by applying the QBC formula with t = 0.5. The distance from PiP1' to P3P4 is an even more accurate estimate, but a more costly one.
Note that if the line defined by P1 and P1' intersects P3P4, then P1' is the closest point of the QBC from P3P4.
If P1P1' does not intersect P3P4, you're out of luck; you must go the hard way...
Now if (and when) you need precision:
Think about using a divide-and-conquer algorithm on the parameter of the curve:
Which is nearer to P3P4: P0P1' or P1'P2? If it is P0P1', then t is between 0 and 0.5, so compute Pm for t = 0.25.
Now which is nearer to P3P4: P0Pm or PmP1'? If it is PmP1', compute Pm2 for t = 0.25 + 0.125 = 0.375; then which is nearer, PmPm2 or Pm2P1'? And so on.
You will come to an accurate solution in no time: after something like 6 iterations your precision on t is 0.004! You might stop the search when the distance between two points falls below a given value (and not when the difference between two parameters does, since for a small change in parameter the points might still be far apart).
In fact, the principle of this algorithm is to approximate the curve with segments, more and more precisely each time.
For the curve/curve case I would first 'box' them as well to avoid useless computation: so first use segment/segment computation, then (maybe) segment/curve computation, and only if needed curve/curve computation.
For curve/curve, divide and conquer also works; it is more difficult to explain, but you might figure it out. :=)
Hope you can find a good balance between speed and accuracy with this. :=)
Edit: I think I found a nice solution for the general case. :-)
You should iterate on the (inner) bounding triangles of each Q.B.C.
So we have triangle T1 with points A, B, C having parameter values tA, tB, tC,
and triangle T2 with points D, E, F having parameter values tD, tE, tF.
Initially we have tA = 0, tB = 0.5, tC = 1.0, and the same for T2: tD = 0, tE = 0.5, tF = 1.0.
The idea is to call a procedure recursively that will split T1 and/or T2 into smaller triangles until we are happy with the precision reached.
The first step is to compute the distance from T1 to T2, keeping track of which segments were the nearest on each triangle. First 'trick': if on T1 the nearest segment is AC, stop the recursion on T1; the nearest point on curve 1 is either A or C. If on T2 the nearest segment is DF, stop the recursion on T2; the nearest point on curve 2 is either D or F. If we stopped the recursion for both, return distance = min(AD, AF, CD, CF). Then, if we still recurse on T1 and segment AB is the nearest, the new T1 becomes: A' = A, B' = the point of curve 1 with tB' = (tA + tB)/2 = 0.25, C' = old B. The same goes for T2: apply the recursion if needed and call the same algorithm on the new T1 and new T2. Stop the algorithm when the distance found between T1 and T2, minus the distance found between the previous T1 and T2, falls below a threshold.
The function might look like ComputeDistance(curveParam1, A, C, shouldSplitCurve1, curveParam2, D, F, shouldSplitCurve2, previousDistance), where the points also store their t parameters.
Note that distance(curve, segment) is just a particular case of this algorithm, and that you should implement distance(triangle, triangle) and distance(segment, triangle) to make it work. Have fun.
1. Simple, bad method: iterate point by point along the first curve and point by point along the second curve, and take the minimum.
2. Determine the mathematical function of the distance between the curves and minimize it:
|Fcur1(t) - Fcur2(s)| -> min
where the F's are vector-valued.
I think we can calculate the derivative of this to determine the extrema and get the nearest and farthest points.
I'll think about this some more and post a full response later.
Formulate your problem in terms of standard analysis: You have got a quantity to minimize (distance), so you formulate an equation for this quantity and find the points where the first derivatives are zero. Parameterize with a single parameter by using the curve's parameter p, which is between 0 for the first point and 1 for the last point.
In the line case, the equation is fairly simple: Get the x/y coordinates from the spline's equation and compute the distance to the given line via vector equations (scalar product with the line's normal).
In the curve's case, the analytical solution could get pretty complicated. You might want to use a numerical minimization technique such as Nelder-Mead or, since you have a 1D continuous problem, simple bisection.
In the case of a Bézier curve and a line
There are three candidates for the closest point to the line:
The place on the Bézier curve segment where the curve's tangent is parallel to the line (if such a place exists),
One end of the curve segment,
The other end of the curve segment.
Test all three; the shortest distance wins.
In the case of two Bézier curves
Depends if you want the exact analytical result, or if an optimised numerical result is good enough.
Analytical result
Given two Bézier curves A(t) and B(s), you can derive equations for their local orientation A'(t) and B'(s). The point pairs for which A'(t) = B'(s) are candidates, i.e. the (t, s) for which the curves are locally parallel. I haven't checked, but I assume that A'(t) - B'(s) = 0 can be solved analytically. If your curves are anything like those you show in your example, there should be either only one solution or no solution to that equation, but there could be two (or infinitely many in the case where the curves are identical up to translation -- in which case you can ignore this, because the winner will always be one of the curve segment endpoints).
In an approach similar to the curve-line case outlined above, test each of these point pairs, plus the curve segment endpoints. The shortest distance wins.
Numerical result
Let's say the points on the two Bézier curves are defined as A(t) and B(s). You want to minimize the distance d( t, s) = |A(t) - B(s)|. It's a simple two-parameter optimization problem: find the s and t that minimize d( t, s) with the constraints 0 ≤ t ≤ 1 and 0 ≤ s ≤ 1.
Since d = sqrt((xA - xB)² + (yA - yB)²), you can also just minimize the function f(t, s) = [d(t, s)]² to save a square root calculation.
There are numerous ready-made methods for such optimization problems. Pick and choose.
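A sketch of this route for two quadratic Béziers, assuming SciPy; several starting points are used to hedge against the local-minima issue noted below:

import numpy as np
from scipy.optimize import minimize

def bezier(P, t):
    # quadratic Bezier with control points P[0..2]
    return (1 - t)**2 * P[0] + 2 * t * (1 - t) * P[1] + t**2 * P[2]

def min_distance(A, B, starts=((0.25, 0.25), (0.75, 0.75), (0.5, 0.5))):
    f = lambda x: np.sum((bezier(A, x[0]) - bezier(B, x[1])) ** 2)
    best = min((minimize(f, s, bounds=[(0, 1), (0, 1)]) for s in starts),
               key=lambda r: r.fun)
    return np.sqrt(best.fun), best.x   # distance and the minimizing (t, s)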
Note that in both cases above, anything higher-order than quadratic Bézier curves can give you more than one local minimum, so this is something to watch out for. From the examples you give, it looks like your curves have no inflexion points, so this concern may not apply in your case.
The point where their normals match is their nearest point. That is: draw a line orthogonal to the given line; if that line is orthogonal to the curve as well, then the point of intersection is the nearest point.