I apologize for asking a question that is probably easy if you know how to solve it, and of which many versions have been asked before. However, I am creating a new post since I have not found an answer to this specific question.
Basically, I have a 200cm x 200cm square that I am recording with a camera above it. However, the camera distorts the square slightly, see example here. I am wondering how to transform the x,y coordinates in the camera image to real-life x,y coordinates (e.g., between 0-200 cm for each side). I understand that I probably need to apply some kind of transformation matrix, but I do not know which one, nor how to determine it. I haven't done any serious linear algebra in a long time, so I appreciate any pointers on what to read up on, or how to get it done. I am working in Python, so if there is ready-made code for doing the transformation, that would also be useful to know.
Thanks a lot!
I will show this using Python and numpy.
import numpy as np
First, you have to understand the projection model.
def apply_homography(H, p1):
    p = H @ p1.T
    return (p[:2] / p[2]).T
With some algebraic manipulation you can determine the points at the plane z=1 that produced the given points.
def revert_homography(H, p2):
    Hb = np.linalg.inv(H)
    # 1. figure out which z coordinate should be appended to p2
    #    in order to get z=1 for p1
    z = 1/(Hb[2,2] + (Hb[2,0]*p2[:,0] + Hb[2,1]*p2[:,1]))
    p2 = np.hstack([p2[:,:2] * z[:,None], z[:,None]])
    return p2 @ Hb.T
A projection is not invertible in general, but under the coplanarity assumption (all world points lie on the plane z=1) it can be inverted successfully.
Now, let's see how to determine the H matrix from the given points (assuming they are coplanar).
If you have the four corners in order, you can simply specify the (x,y) coordinates of each corner and then use the projection equations to determine the homography matrix, like here, or here.
Naively this would require at least 5 point correspondences, since each correspondence gives 2 equations and there are 9 coefficients; but we can fix one element of the matrix (H[2,2] = 1), which makes the system inhomogeneous, so the remaining 8 coefficients are determined by 4 correspondences.
def find_homography(p1, p2):
    A = np.zeros((8, 2*len(p1)))
    # x2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,0::2] = p1[:,0] * p2[:,0]
    A[7,0::2] = p1[:,1] * p2[:,0]
    # - (H[0,0]*x1 + H[0,1]*y1 + H[0,2])
    A[0,0::2] = -p1[:,0]
    A[1,0::2] = -p1[:,1]
    A[2,0::2] = -1
    # y2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,1::2] = p1[:,0] * p2[:,1]
    A[7,1::2] = p1[:,1] * p2[:,1]
    # - (H[1,0]*x1 + H[1,1]*y1 + H[1,2])
    A[3,1::2] = -p1[:,0]
    A[4,1::2] = -p1[:,1]
    A[5,1::2] = -1
    # assuming H[2,2] = 1 we can move its coefficient to the
    # independent term, making the system inhomogeneous
    b = np.zeros(2*len(p2))
    b[0::2] = -p2[:,0]
    b[1::2] = -p2[:,1]
    h = np.ones(9)
    h[:8] = np.linalg.lstsq(A.T, b, rcond=None)[0]
    return h.reshape(3,3)
Here is a complete usage example. I pick a random H and transform four random points; this is what you would have. I show how to find the transformation matrix H_. Next I create a test set of points, and show how to find the world coordinates from the image coordinates.
# Pick a random Homography
H = np.random.rand(3,3)
H[2,2] = 1
# Pick a set of random points
p1 = np.random.randn(4, 3)
p1[:,2] = 1
# The coordinates of the points in the image
p2 = apply_homography(H, p1)
# testing
# Create a set of random points
p_test = np.random.randn(20, 3)
p_test[:,2] = 1
p_test2 = apply_homography(H, p_test)
# Now using only the corners find the homography
# Find a homography transform
H_ = find_homography(p1, p2)
assert np.allclose(H, H_)
# Predict the plane points for the test points
p_test_predicted = revert_homography(H_, p_test2)
assert np.allclose(p_test_predicted, p_test)
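To connect this back to the original 200cm x 200cm square, here is a minimal sketch of the intended usage. The pixel coordinates of the square's corners are made up for illustration; in practice you would read them off your camera image.
# Hypothetical pixel coordinates of the square's four corners in the camera
# image (made-up values), listed in the same order as the world corners below.
corners_image = np.array([[102.0,  95.0],
                          [518.0, 108.0],
                          [530.0, 521.0],
                          [ 90.0, 507.0]])
# World coordinates of the same corners in cm, as homogeneous points (z=1)
corners_world = np.array([[  0.0,   0.0, 1.0],
                          [200.0,   0.0, 1.0],
                          [200.0, 200.0, 1.0],
                          [  0.0, 200.0, 1.0]])
# Estimate the homography mapping world coordinates (cm) to image coordinates (pixels)
H_cm_to_px = find_homography(corners_world, corners_image)
# Convert arbitrary image points back to cm on the table
pixels = np.array([[300.0, 310.0],
                   [120.0, 150.0]])
world_cm = revert_homography(H_cm_to_px, pixels)[:, :2]
print(world_cm)  # x,y positions in cm (inside 0-200 for points inside the square)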
For a graph in networkx, I have made a layout to draw a network graph using code below:
import pandas as pd
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt

data = pd.read_csv('data\\email-dept3.csv')
edges = [edge for edge in zip(data['source'],data['target'])]
print(len(edges))
G = nx.Graph()
G.add_edges_from(edges)
node_pos = nx.kamada_kawai_layout(G)
#I want to get the edge length as one attributes, but I don't know how to code this function
edge_length = calculate_edge_length()
nx.draw_networkx_nodes(G,node_pos,**options)#draw nodes
[nx.draw_networkx_edges(G,node_pos,edgelist=[key],alpha=np.amin([1,value*100]),width=2) for key,value in cent.items()]
plt.show()
And the result is:
What I want to do is get every edge's length in this graph. After the layout, every node has a position on screen, and each edge has a length determined by the positions of its two endpoints. But I can't find a method in networkx's API to get an edge's length, and I also don't know how to calculate this value myself.
If you need more information, please let me know.
I am trying all kinds of methods to adjust the transparency of edges; the length of an edge is one of my considerations.
Interesting idea! Seems like a worthwhile experiment; I'll let you decide if it works well or not. :-)
But in networkx's API, I can't find the method to get the edge's length
I think you have to compute them yourself. Fortunately, that's not too hard. Here's an example.
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10,10)
def example_graph():
    """
    Return the classic Karate Club network, but give text labels to the nodes.
    """
    labels = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
    kg = nx.karate_club_graph()
    edges = [(labels[i], labels[j]) for i,j in kg.edges()]
    G = nx.Graph()
    G.add_edges_from(edges)
    return G
# Test network
G = example_graph()
# Determine layout node positions
node_pos = nx.kamada_kawai_layout(G)
# Determine edge distances (from the node positions)
node_pos_df = pd.DataFrame(list(node_pos.values()), columns=['x', 'y'], index=list(node_pos.keys()))
node_pos_df = node_pos_df.rename_axis('label').sort_index()
edges = np.array(G.edges())
u_pos = node_pos_df.loc[edges[:, 0]].values
v_pos = node_pos_df.loc[edges[:, 1]].values
distances = np.linalg.norm(u_pos - v_pos, axis=1)
## Optional: Add the distances as edge attributes
#edge_distances = {(u,v): d for (u,v), d in zip(G.edges(), distances)}
#nx.set_edge_attributes(G, edge_distances, "layout_distance")
# Compute alpha: Set 0.15 as minimum alpha, 1.0 as maximum alpha
d_min, d_max = distances.min(), distances.max()
alphas = 1.0 - 0.85 * (distances - d_min) / (d_max - d_min)
# Draw graph
nx.draw_networkx_nodes(G, node_pos)
nx.draw_networkx_edges(G, node_pos, edgelist=G.edges(), alpha=alphas, width=2)
plt.show()
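For reference, the same distances can also be computed without pandas, straight from the node_pos dictionary; a small sketch using the variables defined above:
# Alternative: compute each edge's length directly from the layout dictionary
edge_lengths = {
    (u, v): float(np.linalg.norm(np.asarray(node_pos[u]) - np.asarray(node_pos[v])))
    for u, v in G.edges()
}
print(list(edge_lengths.items())[:3])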
Given a 2D uniform random variable we can generate a uniform distribution over a unit disk as discussed here.
My problem is similar in that I wish to uniformly sample the intersection area of two intersecting disks, where one disk is always the unit disk and the other can be freely moved and resized, like here.
I was trying to split the area into two regions (as depicted above) and sample each region individually based on the respective disk. My approach is based on the uniform disk algorithm cited above. To sample the first region, right of the center line, I would restrict theta to be within the two intersection points. Next, r would need to be projected based on that theta,
such that the points are pushed into the area between our mid line and the radius of the disk. The Python sample code can be found here.
u = uniform2D()
A; B;  // Intersection points
for p in allPoints
    theta = u.x * (getTheta(A) - getTheta(B)) + getTheta(B)
    r = sqrt(u.y + (1 - u.y) * length2(lineIntersection(theta)))
    p = (r * cos(theta), r * sin(theta))
However, this approach is rather expensive and, moreover, fails to preserve uniformity. Just to clarify, I do not want to use rejection sampling.
I am not sure if this is better than rejection sampling, but here is a solution for uniform sampling of a circle segment (with center angle <= pi) involving the numerical computation of an inverse function. (The uniform sampling of the intersection of two circles can then be composed of the sampling of segments, sectors and triangles - depending on how the intersection can be split into simpler figures.)
First we need to know how to generate a random value Z with a given distribution F, i.e. we want
P(Z < x) = F(x)
  <=>  (substitute x = F^-1(y))
P(Z < F^-1(y)) = F(F^-1(y)) = y
  <=>  (F is monotonic)
P(F(Z) < y) = y
This means: if Z has the requested distribution F, then F(Z) is distributed uniformly. The other way round:
Z = F^-1(Y),
where Y is distributed uniformly in [0,1], has the requested distribution.
If F is of the form
/ 0, x < a
F(x) = | (F0(x)-F0(a)) / (F0(b)-F0(a)), a <= x <= b
\ 1, b < x
then we can choose a Y0 uniformly in [F(a),F(b)] and set Z = F0^-1(Y0).
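As a generic illustration of this inverse-transform recipe (a toy example, not yet specific to the circle segment: the exponential distribution has a closed-form inverse CDF):
import random
from math import log

# F(x) = 1 - exp(-x) for x >= 0, so F^-1(y) = -log(1 - y)
def sample_exponential():
    y = random.random()   # Y uniform in [0,1)
    return -log(1 - y)    # Z = F^-1(Y) is distributed according to F

print([sample_exponential() for _ in range(5)])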
We choose to parametrize the segment by (theta,r), where the center angle theta is measured from one segment side. When the segment's center angle is alpha, the area of the segment intersected with a sector of angle theta starting where the segment starts is (for the unit circle, theta in [0,alpha/2])
F0_theta(theta) = 0.5*(theta - d*(s - d*tan(alpha/2-theta)))
where s = AB/2 = sin(alpha/2) and d = dist(M,AB) = cos(alpha/2) (the distance of the circle center to the segment). (The case alpha/2 <= theta <= alpha is symmetric and not considered here.)
We need a random theta with P(theta < x) = F_theta(x). The inverse of F_theta cannot be computed symbolically - it must be determined by a numerical root-finding algorithm (e.g. Newton-Raphson).
Once theta is fixed we need a random radius r in the range
[r_min, 1], r_min = d/cos(alpha/2-theta).
For x in [0, 1-r_min] the distribution must be
F0_r(x) = (x+r_min)^2 - r_min^2 = x^2 + 2*x*r_min.
Here the inverse can be computed symbolically:
F0_r^-1(y) = -r_min + sqrt(r_min^2+y)
Here is an implementation in Python for proof of concept:
import random
from math import sin, cos, tan, sqrt, pi
from scipy.optimize import newton
# area of segment of unit circle
# alpha: center angle of segment (0 <= alpha <= pi)
def segmentArea(alpha):
    return 0.5*(alpha - sin(alpha))
# generate a function that gives the area of a segment of a unit circle
# intersected with a sector of given angle, where the sector starts at one end of the segment.
# The returned function is valid for [0,alpha/2].
# For theta=alpha/2 the returned function gives half of the segment area.
# alpha: center angle of segment (0 <= alpha <= pi)
def segmentAreaByAngle_gen(alpha):
    alpha_2 = 0.5*alpha
    s, d = sin(alpha_2), cos(alpha_2)
    return lambda theta: 0.5*(theta - d*(s - d*tan(alpha_2-theta)))
# generate derivative function generated by segmentAreaByAngle_gen
def segmentAreaByAngleDeriv_gen(alpha):
    alpha_2 = 0.5*alpha
    d = cos(alpha_2)
    return lambda theta: (lambda dr=d/cos(alpha_2-theta): 0.5*(1 - dr*dr))()
# generate inverse of function generated by segmentAreaByAngle_gen
def segmentAreaByAngleInv_gen(alpha):
    x0 = sqrt(0.5*segmentArea(alpha))  # initial guess by approximating half of segment with right-angled triangle
    return lambda area: newton(lambda theta: segmentAreaByAngle_gen(alpha)(theta) - area, x0, segmentAreaByAngleDeriv_gen(alpha))
# for a segment of the unit circle in canonical position
# (i.e. symmetric to x-axis, on positive side of x-axis)
# generate uniformly distributed random point in upper half
def randomPointInSegmentHalf(alpha):
    FInv = segmentAreaByAngleInv_gen(alpha)
    areaRandom = random.uniform(0, 0.5*segmentArea(alpha))
    thetaRandom = FInv(areaRandom)
    alpha_2 = 0.5*alpha
    d = cos(alpha_2)
    rMin = d/cos(alpha_2 - thetaRandom)
    secAreaRandom = random.uniform(0, 1 - rMin*rMin)
    rRandom = sqrt(rMin*rMin + secAreaRandom)
    return rRandom*cos(alpha_2 - thetaRandom), rRandom*sin(alpha_2 - thetaRandom)
The visualisation seems to verify uniform distribution (of the upper half of a segment with center angle pi/2):
import matplotlib.pyplot as plot
segmentPoints = [randomPointInSegmentHalf(pi/2) for _ in range(500)]
plot.scatter(*zip(*segmentPoints))
plot.show()
I want to calculate the projected distance between two points, and between a point and a polygon. All coordinates are specified in the same projection, lat/lon (WGS84).
I calculated the distance between two points using pyproj as follows:
from pyproj import Proj, transform, Geod
geod = Geod(ellps='WGS84')
angle1,angle2,dist1 = geod.inv(wLong1, sLat1, wLong2, sLat2)
#this returns distance in m
I want to use the same function to calculate the distance between a point and a bounding box.
from shapely.geometry import box, Point

bbox = box(wLong1, sLat1, eLong1, nLat1)
point = Point(wLong2,sLat2)
dist2 = (point.distance(bbox))
Unlike the first example (dist1 in meters), I think the second example (dist2) returns the distance in degrees. How can I translate this value into meters, like in example 1?
You need the mean radius of the earth's curvature (rm) for the conversion.
from pyproj import Geod
from math import radians

# ... some code (geod = Geod(ellps='WGS84') and dist2 from the question)
a = geod.a                     # semi-major axis in meters
b = geod.b                     # semi-minor axis in meters
rm = (2.0*a + b)/3.0           # simple mean radius, as defined by IUGG
rm * radians(dist2)            # your dist in meters
More accurate formulas for rm exist, but the above is a good approximation.
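Putting the question and the answer together, a minimal end-to-end sketch (the coordinates below are made-up values for illustration):
from math import radians
from pyproj import Geod
from shapely.geometry import box, Point

# Made-up lon/lat values for illustration
wLong1, sLat1, eLong1, nLat1 = 13.0, 52.0, 13.5, 52.5   # bounding box
wLong2, sLat2 = 14.0, 53.0                              # query point

bbox = box(wLong1, sLat1, eLong1, nLat1)
point = Point(wLong2, sLat2)
dist2 = point.distance(bbox)          # planar distance in degrees

geod = Geod(ellps='WGS84')
rm = (2.0*geod.a + geod.b)/3.0        # IUGG mean radius in meters
dist2_m = rm * radians(dist2)         # approximate distance in meters
print(dist2_m)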
How do you find the 3 Euler angles between two 3D vectors?
When I have one vector and I want to get its rotation, this link can usually be used: Calculate rotations to look at a 3D point?
But how do I do it when calculating them relative to one another?
As others have already pointed out, your question should be revised. Let's call your vectors a and b. I assume that length(a) == length(b) > 0, otherwise I cannot answer the question.
Calculate the cross product of your vectors v = a x b; v gives the axis of rotation. By computing the dot product, you can get the cosine of the angle you should rotate with cos(angle) = dot(a,b)/(length(a)*length(b)), and with acos you can uniquely determine the angle (@Archie thanks for pointing out my earlier mistake). At this point you have the axis-angle representation of your rotation.
The remaining work is to convert this representation to the representation you are looking for: Euler angles. Conversion Axis-Angle to Euler is a way to do it, as you have found. You have to handle the degenerate case when v = [0, 0, 0], that is, when the angle is either 0 or 180 degrees.
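A minimal numpy sketch of the axis-angle computation described above (the degenerate case is only flagged, not resolved):
import numpy as np

def axis_angle_between(a, b):
    """Rotation axis and angle that take vector a onto vector b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    v = np.cross(a, b)                      # rotation axis (not normalized)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    if np.allclose(v, 0):
        # degenerate case: vectors are parallel (angle 0) or opposite (angle pi)
        return None, angle
    return v / np.linalg.norm(v), angle

axis, angle = axis_angle_between([1, 0, 0], [0, 1, 0])
print(axis, np.degrees(angle))   # [0. 0. 1.] 90.0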
I personally don't like Euler angles: they screw up the stability of your app and they are not appropriate for interpolation; see also
Strange behavior with android orientation sensor
Interpolating between rotation matrices
First, you would have to subtract vector one from vector two in order to get vector two relative to vector one. With these values you can calculate the Euler angles.
To understand the calculation from vector to Euler angles intuitively, let's imagine a sphere with a radius of 1 and the origin at its center. A vector represents a point on its surface in 3D coordinates. This point can also be defined by spherical 2D coordinates: latitude and longitude, or pitch and yaw respectively.
For the order "roll <- pitch <- yaw" the calculation can be done as follows:
To calculate the yaw you take the arctangent of the two planar coordinates (x and z), taking the quadrant into account.
yaw = atan2(x, z) *180.0/PI;
Pitch is quite the same, but as its plane is rotated along with yaw, the 'adjacent' side lies on two axes. To find its length we have to use the Pythagorean theorem.
float padj = sqrt(pow(x, 2) + pow(z, 2));
pitch = atan2(padj, y) *180.0/PI;
Notes:
Roll cannot be calculated, as a vector has no rotation around its own axis. I usually set it to 0.
The length of your vector is lost and cannot be converted back.
In Euler angles the order of your axes matters; mix them up and you will get different results.
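A small Python sketch of the yaw/pitch computation described above (assuming x, y, z are the components of the relative vector):
from math import atan2, sqrt, degrees

def vector_to_yaw_pitch(x, y, z):
    """Yaw and pitch (in degrees) of a direction vector; roll is undefined."""
    yaw = degrees(atan2(x, z))
    padj = sqrt(x*x + z*z)           # length of the projection onto the x-z plane
    pitch = degrees(atan2(padj, y))
    return yaw, pitch

print(vector_to_yaw_pitch(0.0, 0.0, 1.0))  # (0.0, 90.0)
print(vector_to_yaw_pitch(1.0, 0.0, 0.0))  # (90.0, 90.0)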
It took me a lot of time to find this answer, so I would like to share it with you now.
First, you need to find the rotation matrix, and then with scipy you can easily find the angles you want.
There is no short way to do this,
so let's first declare some functions...
import numpy as np
from scipy.spatial.transform import Rotation
def normalize(v):
    return v / np.linalg.norm(v)

def find_additional_vertical_vector(vector):
    ez = np.array([0, 0, 1])
    look_at_vector = normalize(vector)
    up_vector = normalize(ez - np.dot(look_at_vector, ez) * look_at_vector)
    return up_vector

def calc_rotation_matrix(v1_start, v2_start, v1_target, v2_target):
    """
    calculating M, the rotation matrix from base U to base V
    M @ U = V
    M = V @ U^-1
    """
    def get_base_matrices():
        u1_start = normalize(v1_start)
        u2_start = normalize(v2_start)
        u3_start = normalize(np.cross(u1_start, u2_start))
        u1_target = normalize(v1_target)
        u2_target = normalize(v2_target)
        u3_target = normalize(np.cross(u1_target, u2_target))
        U = np.hstack([u1_start.reshape(3, 1), u2_start.reshape(3, 1), u3_start.reshape(3, 1)])
        V = np.hstack([u1_target.reshape(3, 1), u2_target.reshape(3, 1), u3_target.reshape(3, 1)])
        return U, V

    def calc_base_transition_matrix():
        return np.dot(V, np.linalg.inv(U))

    if not np.isclose(np.dot(v1_target, v2_target), 0, atol=1e-03):
        raise ValueError("v1_target and v2_target must be vertical")

    U, V = get_base_matrices()
    return calc_base_transition_matrix()

def get_euler_rotation_angles(start_look_at_vector, target_look_at_vector, start_up_vector=None, target_up_vector=None):
    if start_up_vector is None:
        start_up_vector = find_additional_vertical_vector(start_look_at_vector)
    if target_up_vector is None:
        target_up_vector = find_additional_vertical_vector(target_look_at_vector)

    rot_mat = calc_rotation_matrix(start_look_at_vector, start_up_vector, target_look_at_vector, target_up_vector)
    is_equal = np.allclose(rot_mat @ start_look_at_vector, target_look_at_vector, atol=1e-03)
    print(f"rot_mat @ start_look_at_vector1 == target_look_at_vector1 is {is_equal}")
    rotation = Rotation.from_matrix(rot_mat)
    return rotation.as_euler(seq="xyz", degrees=True)
Finding the XYZ Euler rotation angles from 1 vector to another might give you more than one answer.
Assuming what you are rotating is the look_at_vector of some kind of shape, and you want this shape to stay right side up while still looking at the target_look_at_vector:
if __name__ == "__main__":
    # Example 1
    start_look_at_vector = normalize(np.random.random(3))
    target_look_at_vector = normalize(np.array([-0.70710688829422, 0.4156269133090973, -0.5720613598823547]))

    phi, theta, psi = get_euler_rotation_angles(start_look_at_vector, target_look_at_vector)
    print(f"phi_x_rotation={phi}, theta_y_rotation={theta}, psi_z_rotation={psi}")
Now if you want your shape to have a specific roll rotation, my code also supports that!
You just need to give the target_up_vector as a parameter as well.
Just make sure it is vertical (perpendicular) to the target_look_at_vector that you are giving.
if __name__ == "__main__":
    # Example 2
    # look and up vectors must be vertical (perpendicular) to each other
    start_look_at_vector = normalize(np.array([1, 2, 3]))
    start_up_vector = normalize(np.array([1, -3, 2]))
    target_look_at_vector = np.array([0.19283590755300162, 0.6597510192626469, -0.7263217228739983])
    target_up_vector = np.array([-0.13225754322703182, 0.7509361508721898, 0.6469955018014842])

    phi, theta, psi = get_euler_rotation_angles(
        start_look_at_vector, target_look_at_vector, start_up_vector, target_up_vector
    )
    print(f"phi_x_rotation={phi}, theta_y_rotation={theta}, psi_z_rotation={psi}")
Getting the rotation matrix in MATLAB is very easy, e.g.:
A = [1.353553385, 0.200000003, 0.35]
B = [1 2 3]
[q] = vrrotvec(A,B)
Rot_mat = vrrotvec2mat(q)
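For completeness, roughly the same axis-angle construction can be sketched in Python with numpy and scipy (this mirrors the MATLAB snippet above; it is not the MATLAB toolbox):
import numpy as np
from scipy.spatial.transform import Rotation

A = np.array([1.353553385, 0.200000003, 0.35])
B = np.array([1.0, 2.0, 3.0])

# axis-angle between A and B, then the corresponding rotation matrix
axis = np.cross(A, B)
axis = axis / np.linalg.norm(axis)
angle = np.arccos(np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B)))
rot_mat = Rotation.from_rotvec(axis * angle).as_matrix()
print(rot_mat)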