Why is G3 continuity only achievable between two edges?
For example, curves 1/2/3, 4/5/6, 7/8/9, and 10/11/12 are all G3 continuous. The center surface is built with a G3 constraint on edges 5/2. Since curves 1/2/3 and 4/5/6 are already G3, how come edges 8/11 can only achieve G1 tangency?
Can't I just build a function that uses the first, second, and third derivatives at each edge control point (in u/v) to compute the 3 control points adjacent to that edge control point, and thereby achieve G3 on all 4 edges?
The center surface cannot be built with G3 continuity to all 4 surfaces because the 4 surfaces might not meet at the 4 corner points with G3 continuity. In fact, the given condition that curves 1/2/3, 4/5/6, 7/8/9 and 10/11/12 are all G3 continuous only ensures that the 4 surfaces meet at the 4 corner points with G1 continuity.
The following gives a bit more detail, per the OP's request.
Let's denote two of the 4 surfaces as Surface A and B with coincident control points P and Q as shown in the following picture.
The normal vector of surface A at point P is obtained by taking the cross product of vector(P,P1) and vector(P,P2), and that of surface B at point Q is the cross product of vector(Q,Q1) and vector(Q,Q2). Since curves 1 and 2 are connected with G3 continuity, vector(P,P1) is parallel to vector(Q,Q1). Similarly, vector(P,P2) is parallel to vector(Q,Q2). Therefore, we can conclude that surfaces A and B have the same unit normal vector at point P (or Q), which means the two surfaces meet with G1 continuity.
In order for surfaces A and B to meet with G2 continuity at point P, 3 more control points from each surface get involved (shown in the picture as green dots P3, P4 and P5 for surface A). All these 12 control points (6 from each surface) need to satisfy a specific relationship in order for the two surfaces to meet with G2 continuity. The fact that curves 1/2 and 8/9 are connected with G3 continuity only constrains the locations of P3 and P5, not the location of P4. Therefore, it does not ensure the two surfaces meet with G2 continuity at point P, let alone G3 continuity.
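The G1 argument can be sketched numerically. The control points below are hypothetical; the only property that matters is that vector(P,P1) is parallel to vector(Q,Q1) and vector(P,P2) is parallel to vector(Q,Q2), as guaranteed by the curve continuity:

```python
import numpy as np

# Hypothetical control points near the shared corner P (= Q).
# P1, P2 are the control points adjacent to P along surface A's edges;
# Q1, Q2 play the same role for surface B.
P  = np.array([0.0, 0.0, 0.0])
P1 = np.array([1.0, 0.0, 0.2])
P2 = np.array([0.0, 1.0, 0.1])
Q1 = P + 2.0 * (P1 - P)   # parallel to vector(P, P1), different length
Q2 = P + 0.5 * (P2 - P)   # parallel to vector(P, P2), different length

def unit_normal(origin, a, b):
    """Unit normal spanned by the two edge tangents at a corner."""
    n = np.cross(a - origin, b - origin)
    return n / np.linalg.norm(n)

nA = unit_normal(P, P1, P2)
nB = unit_normal(P, Q1, Q2)

# Parallel tangent pairs => identical unit normals => G1 at the corner.
print(np.allclose(nA, nB))  # True
```

This only checks the shared unit normal (G1); as the answer explains, G2 would additionally constrain the interior control point P4, which no curve-continuity condition touches.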
I want to have a normalized graph edit distance.
I'm using this function:
https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.similarity.graph_edit_distance.html#networkx.algorithms.similarity.graph_edit_distance
I'm trying to understand the graph_edit_distance function in order to be able to normalize it between 0 and 1, but I don't understand it fully.
For example:
def compare_graphs(Ga, Gb):
    draw_graph(Ga)
    draw_graph(Gb)
    graph_edit_distance = nx.graph_edit_distance(Ga, Gb, node_match=lambda x, y: x == y)
    return graph_edit_distance
compare_graphs(G1, G3)
Why is the graph_edit_distance = 4?
Graph construction:
e1 = [(1,2), (2,3)]
e3 = [(1,3), (3,1)]
G1 = nx.DiGraph()
G1.add_edges_from(e1)
G3 = nx.DiGraph()
G3.add_edges_from(e3)
The edit distance is measured by:
nx.graph_edit_distance(Ga, Gb, node_match=lambda x,y : x==y)
The difference from graph_edit_distance is that optimize_edit_paths also reports the edit path in terms of node indices.
This is the output of optimize_edit_paths:
list(optimize_edit_paths(G1, G2, node_match, edge_match,
                         node_subst_cost, node_del_cost, node_ins_cost,
                         edge_subst_cost, edge_del_cost, edge_ins_cost,
                         upper_bound, True))
Out[3]:
[([(1, 1), (2, None), (3, 3)],
[((1, 2), None), ((2, 3), None), (None, (1, 3)), (None, (3, 1))],
5.0),
([(1, 1), (2, 3), (3, None)],
[((1, 2), (1, 3)), (None, (3, 1)), ((2, 3), None)],
4.0)]
I know it should be the minimum sequence of node and edge edit operations transforming graph G1 into a graph isomorphic to G2.
When I try to count, I get:
1. Add node 2 to G3
2. Remove edge e1 = (1,3) from G3
3. Remove edge e2 = (3,1) from G3
4. Add edge e3 = (1,2) to G3
5. Add edge e4 = (2,3) to G3
graph_edit_distance = 5.
What am I missing?
Or alternatively, what can I do in order to normalize the distance I receive?
I thought about dividing by |V1| + |V2| + |E1| + |E2|, or dividing by max(|V1| + |E1|, |V2| + |E2|), but I'm not sure.
Thanks in advance.
I know it's an old post, but I am currently reading about GED and wanted to answer it for anyone looking for it in the future.
The graph edit distance is 4.
Reason:
In G3, 1 and 3 are connected in both directions, effectively an undirected edge, while in G1, 1 and 2 are connected by a single directed edge. The graph edit path will be:
Turn the undirected edge into a directed edge
Change 3 to 2 (substitution)
Add an edge to 2
Finally, add a node to that edge
The Graph Edit Distance is unbounded: the more nodes you add to one graph that the other graph doesn't have, the more edits you need to make them the same, so the GED can be arbitrarily large.
I haven't seen this proposed anywhere else, but I have an idea:
Instead of GED(G1,G2), you can compute GED(G1,G2)/[GED(G1,G0) + GED(G2,G0)],
where G0 is the empty graph.
The situation is analogous to the difference between real numbers.
Imagine that I give you |A-B| and |C-D|. They are not on the same footing.
E.g., you could have A=1, B=2 and C=1000, D=1001.
The differences are equal, but the relative differences are very different.
To convey that, you would compute |A-B|/(|A|+|B|) instead of just |A-B|.
This is symmetric to a swapping of A and B, and it's a relative distance.
Since it's relative, it can be compared to the other relative distance: |C-D|/(|C|+|D|). These relative distances are comparable, they're expressing a notion that is universal and applies to all pairs of numbers.
In summary, compute the relative GED, using G0, the null graph, like you would use 0 if you were measuring the relative distance between real numbers.
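A sketch of this relative GED on the graphs from the question. Note this uses networkx's default edit costs rather than the node_match from the question, so the raw distance may differ from 4; the point is only the normalization:

```python
import networkx as nx

def relative_ged(Ga, Gb):
    """Normalize graph edit distance by the total edit distance of both
    graphs to the empty (null) graph G0, giving a value in [0, 1]."""
    g0 = nx.DiGraph()  # G0, the empty graph
    denom = nx.graph_edit_distance(Ga, g0) + nx.graph_edit_distance(Gb, g0)
    if denom == 0:  # both graphs are already empty
        return 0.0
    return nx.graph_edit_distance(Ga, Gb) / denom

# The graphs from the question:
G1 = nx.DiGraph([(1, 2), (2, 3)])
G3 = nx.DiGraph([(1, 3), (3, 1)])

# GED(G1, G0) = 5 (delete 3 nodes + 2 edges), GED(G3, G0) = 4,
# so the denominator here is 9.
print(relative_ged(G1, G3))
```

Since the raw GED can never exceed the cost of deleting one graph entirely and inserting the other, the ratio stays in [0, 1].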
I have a 3D model (from Blender) with vertices, vertex normals (normalized), and faces (triangles). I need to calculate additional vertices and their normals. In other words, I need an algorithm to calculate the center vertex of a triangle from its three vertices and three vertex normals.
For example, in the picture we have vertices A, B, C. How do I calculate vertex D and its normal?
Or, even better, point E (center of one of the sides).
Could anybody help me?
If you want point D to lie exactly on the plane of ABC, then I suggest you use barycentric coordinates. Point D is the intersection of the medians, which is (1/3, 1/3, 1/3) in barycentric coordinates, i.e. D = 1/3A + 1/3B + 1/3C; E would be (0, 1/2, 1/2). The normal ND is calculated the same way, ND = 1/3NA + 1/3NB + 1/3NC (re-normalize it afterwards, since a blend of unit vectors is generally not unit length).
You didn't state the reason why you need to calculate D and E. I suppose you want to get more triangles in the mesh, and thus a better level of detail. In that case, PN-triangles should be used.
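A minimal sketch of the barycentric interpolation described above, with the re-normalization step included:

```python
import numpy as np

def interpolate(A, B, C, NA, NB, NC, w):
    """Barycentric interpolation of a position and a normal on triangle ABC.
    w = (wA, wB, wC) are barycentric weights summing to 1."""
    wA, wB, wC = w
    point  = wA * A + wB * B + wC * C
    normal = wA * NA + wB * NB + wC * NC
    # Re-normalize: a blend of unit normals is generally not unit length.
    normal = normal / np.linalg.norm(normal)
    return point, normal

A, B, C = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
NA = NB = NC = np.array([0., 0., 1.])

D, ND = interpolate(A, B, C, NA, NB, NC, (1/3, 1/3, 1/3))  # centroid
E, NE = interpolate(A, B, C, NA, NB, NC, (0.0, 0.5, 0.5))  # midpoint of BC
print(D, ND)
```

Note this keeps D on the flat plane of ABC; if you want the new vertices to bulge toward the curved surface implied by the normals, that is exactly what PN-triangles add on top of this.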
I have two pairs of latitude and longitude, but each pair also has an associated radius, because the coordinates may be more or less accurate. How to find the minimum distance between two circular areas on Earth?
The .NET code here calculates the distance between two precise geocoordinates, but does not take into account the associated radii.
Example
What is the shortest distance between the perimeters of these two circles, one in London, England and the other in Cancun, Mexico ?
51°31′2″ N 0°7′50″ W radius 90m
21°7′45″ N 86°45′49″ W radius 550m
Also, the distance between these two overlapping areas should be 0 meters:
21°7′46″ N 86°45′49″ W radius 50m
21°7′45″ N 86°45′49″ W radius 550m
A cheap and cheerful approximation would be to find the distance between the centres and then subtract the radius of each circle from that. If the result is negative the circles overlap and the minimum distance is 0; otherwise the minimum distance is the result. This would give the exact answer on a plane.
Actually, except for some odd cases involving nearly antipodal points, I think this will give the correct answer on a spheroid too. For (except for the odd cases) if the centres (A and B, say) are d apart, then there is a geodesic from A to B of length d. If we go a distance r (the radius of the circle about A) along the geodesic toward B, we reach a point a on the circle about A; similarly, going from B a distance s toward A, we reach a point b on the circle about B. The geodesic from A to B is also a geodesic from a to b, and the distance along it is d-r-s, so the distance from a to b is d-r-s. There can't be points (a', b', say) on the circles closer than that, for if there were, we could get from A to B by going from A to a', along the geodesic from a' to b', and then from b' to B; but the geodesic from A to B is the shortest route.
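A sketch of this cheap-and-cheerful approach, using the haversine formula for the centre distance. The coordinate values below are the question's DMS coordinates converted to decimal degrees; west longitudes are negative:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def circle_distance_m(lat1, lon1, r1, lat2, lon2, r2):
    """Minimum distance between the perimeters of two circles; 0 if they overlap."""
    return max(0.0, haversine_m(lat1, lon1, lat2, lon2) - r1 - r2)

# London (radius 90 m) and Cancun (radius 550 m), from the question:
london = (51.5172, -0.1306, 90.0)
cancun = (21.1292, -86.7636, 550.0)
print(circle_distance_m(*london, *cancun))

# Overlapping case from the question: prints 0.0
print(circle_distance_m(21.12944, -86.76361, 50.0, 21.12917, -86.76361, 550.0))
```

For the near-antipodal corner cases, or if sub-metre accuracy matters, a geodesic library on the WGS84 ellipsoid (e.g. geographiclib) would replace the spherical haversine step.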
Assuming the two regions are "close enough" that one can neglect the spherical nature of the problem, and assuming that those "confidence regions" are somehow important to the user of the result, a single number as a result would erase the uncertainty of the information (or the measurement errors). I would therefore recommend expecting not a number but an interval as an adequate result.
Let p1, p2 be two "close enough" centers of regions R1, R2.
Let u1, u2 be the uncertainties of the positions, in the same distance unit as p1, p2 are measured, i.e. the radii of those circles.
Center distance:
dc p1 p2 = |p2-p1|
BorderDistance Minimum:
bdmin p1 p2 u1 u2 = (dc p1 p2) - u1 - u2
BorderDistance Maximum:
bdmax p1 p2 u1 u2 = (dc p1 p2) + u1 + u2
Then, the distance between those regions is the interval:
[bdmin p1 p2 u1 u2, bdmax p1 p2 u1 u2]
let sq x = x * x

let distance p1 p2 : float =
    let x1, y1 = p1
    let x2, y2 = p2
    sqrt (sq (x2 - x1) + sq (y2 - y1))

let rdistance p1 u1 p2 u2 =
    ( (distance p1 p2) - u1 - u2
    , (distance p1 p2) + u1 + u2 )

let rdistance3 p1 u1 p2 u2 =
    let mi, ma = rdistance p1 u1 p2 u2
    (mi, distance p1 p2, ma)

let P1 = (0.0, 0.0)
let P2 = (10.0, 10.0)
let U1 = 2.0
let U2 = 5.0

printfn "as interval: %A" (rdistance P1 U1 P2 U2)
printfn "as interval with center: %A" (rdistance3 P1 U1 P2 U2)
as interval: (7.142135624, 21.14213562)
as interval with center: (7.142135624, 14.14213562, 21.14213562)
The latter version is nice as it gives users all 3 values so they can continue as they please, and also lets them get a feeling for the accuracy.
Discussion:
If the true data looks like the one in the picture, it does not pay off to use spherical geometry formulae for the computation, the reason being that the size of the circles is orders of magnitude larger than the error introduced by Euclidean geometry.
If, on the other hand, the true data were at significantly large distances, it would probably not matter whether the center points or the edges of the circles were used for the computation, as the radii of the circles would then be tiny compared to the distance. Then, though, spherical geometry is needed.
Last but not least, if this is only one step in a longer series of computations, it pays off to keep the accuracy information.
See for example the wikipedia article on interval arithmetic.
If you see U1, U2 as statistical parameters, such as the n% confidence region (think something like standard deviation), then you can try to find a statistical model and reason about it.
As a warm-up, suppose both P1 and P2 were measured points from the same statistical distribution (which they are obviously not); then the variance of both points would be the same (which is obviously not the case either). Then, given a series of P1, P2 pairs, you could estimate the underlying distribution and use something like a t-test to test the hypothesis P1 = P2.
Now, what you probably have as your U1, U2 is, in GPS layman's terms, called "dilution of precision" (DOP; some receivers actually report two numbers, HDOP and VDOP): a single number aggregating the uncertainty of the GPS fix computation. It is a function of many parameters the GPS receiver can actually observe:
Number of visible and used satellites used for the fix
Estimate of time accuracy
Accuracy information stated by the satellites
...
Let's say, the GPS receiver only sees 3 satellites. What it does is "measure" the distance to each satellite which is at a location known to the GPS receiver (the satellites send their positions). So from each of the satellites, the receiver can yield a sphere of its own position, with the radius of the sphere being the distance, the center being the location of the satellite.
Intersecting the spheres computed for each used satellite, one can obtain a volume in which the GPS receiver is located. In the absence of any measurement errors etc., it would actually be the exact location of the receiver... in case selective availability is turned off. SA is an artificial error satellites can add to their information, which decreases the accuracy civilian GPS receivers can obtain. I think it has been turned off for a while now...
Since the GPS receiver does not have an atomic clock, but the GPS satellites do have those, the estimation task of the receiver is not only to estimate its 3 coordinates, but also the state of its own cheap clock. This is why a GPS fix with only 3 satellites is also called a 2D fix (as the system of equations is still underdetermined). 4 and more satellites yield a 3D fix.
Beyond that basic theory of how it works, there are factors specific to the location of the GPS receiver. Apart from the number of satellites a receiver can use in a given location, there can be RF reflections etc., which can render the "distance computed from time delay" erroneous for one or more satellites. If, say, the GPS receiver sees many more than 4 satellites, it will be able to detect that some measurements are inconsistent with the remaining measurements.
All those aspects shown above are then computed into a single floating point number, named "Dilution of precision".
So, clearly it is not easy to apply basic statistical tests to the hypothesis P1 <> P2, and one would have to dig deeper than is possible here in this format.
I have a directed network where 50 nodes have a degree of 3 and another 50 have a degree of 10.
source("http://bioconductor.org/biocLite.R")
biocLite("graph")
#load graph and make the specified graph
library(graph)
degrees=c(rep(3,50),rep(10,50))
names(degrees)=paste("node",seq_along(degrees)) # nodes must be named
x=randomNodeGraph(degrees)
#verify graph
edges=edgeMatrix(x)
edgecount=table(as.vector(edges))
table(edgecount)
This is a directed network where the total degree is made up from both indegree and outdegree.
I would like to have a network where every indegree is also an outdegree and vice versa: for example, if node 1 has an edge to node 5, then node 5 also needs to have an edge to node 1. My main goal is to preserve the degree distribution, i.e. 50 nodes with a degree of 3 and 50 with a degree of 10.
Simply setting the graph to be undirected seems to do it:
x2 <- x
edgemode(x2) <- "undirected"
edges <- edgeMatrix(x2)
edgecount <- table(as.vector(edges))
table(edgecount)
Gives the same results as your code.
Also, an undirected graph will always have an edge from 5 to 1 if there is an edge from 1 to 5. A single edge satisfies this property.
Paul Shannon suggests the following:
library(graph)
library(igraph)
degrees=c(rep(3,50),rep(10,50))
g <- igraph.to.graphNEL(degree.sequence.game(degrees))
table(graph::degree(g))
This gives the same results as your code.
I came across an interesting question in my textbook, but no further answer or details were supplied :(
Given some points, A, B, C etc
and some distance relationships between those points:
A -> B = 23
A -> C = 45
B -> A = 23
B -> C = 78
C -> A = 45
C -> B = 78
So the distance between C and A is 45 units, A and B is 23 units, etc.
How to draw a map or some sort of representation? Is it just a case of constraining against those rules until you converge?
Since it is only 3 points, it is a simple triangle, and you know the distances of the three sides from the table: 23, 45, and 78 "units".
So you can plot any two of the points as a straight line, then do a little bit of math to determine the angle to the third point (and you already know the distance):
// a, b, and c are the distances, C is the angle.
c² = b² + a² - 2ba cosC
Solve that and you have the angle across point C so you can plot the third point.
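A sketch of this placement via the law of cosines. Note that three lengths must satisfy the triangle inequality for the angle to exist; the question's values 23, 45, 78 actually do not (23 + 45 < 78), so illustrative side lengths are used instead:

```python
import math

def third_point(a, b, c):
    """Place triangle with side lengths a=|BC|, b=|AC|, c=|AB| in the plane:
    A at the origin, B at (c, 0); return the coordinates of C.
    Law of cosines solved for the angle at A: a^2 = b^2 + c^2 - 2bc*cos(A)."""
    cos_A = (b * b + c * c - a * a) / (2 * b * c)
    A = math.acos(cos_A)  # requires the triangle inequality to hold
    return (b * math.cos(A), b * math.sin(A))

# Hypothetical side lengths that do form a valid triangle:
a, b, c = 50.0, 45.0, 23.0
C = third_point(a, b, c)
print(C)  # A = (0, 0), B = (23, 0), C as computed
```

Each further point is then placed the same way, intersecting its known distances to two already-plotted points, as the incremental procedure below describes.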
Edit (I originally missed that this was for N points, since it was only in the subject):
If you don't have all of the distances, then you will have to find three that do have all three legs defined to use as a starting point and plot those. After that, find another point that has distances defined to two of your existing points and calculate your new triangle with those three points and plot that one. Repeat this until you run out of points.
I think multidimensional scaling is what you want. For example, given distances between U.S. cities, you'll get something like this:
There may not be a way to perfectly satisfy your constraints in 2- or 3-D, but this will minimize the cost function.
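A sketch of classical (Torgerson) MDS in plain NumPy (sklearn.manifold.MDS is a ready-made alternative). Fed the question's 3×3 distance matrix, it returns planar coordinates whose pairwise distances approximate the inputs as well as possible:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) multidimensional scaling: embed n points in
    `dim` dimensions so pairwise distances approximate the matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:dim]         # take the largest `dim`
    L = np.sqrt(np.clip(vals[order], 0, None))   # negatives mean D is non-Euclidean
    return vecs[:, order] * L

# Distance matrix for A, B, C from the question:
D = np.array([[ 0., 23., 45.],
              [23.,  0., 78.],
              [45., 78.,  0.]])
X = classical_mds(D)
print(X)  # 3 points in the plane
```

The eigenvalue clipping is where the "may not perfectly satisfy" caveat shows up: negative eigenvalues indicate the distances are not exactly realizable in Euclidean space, and the embedding is then a least-squares-style approximation.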