Constructing an Arc in Eyeshot

I am trying to construct arcs using Eyeshot 12.
I use the constructor: Arc(Plane, 2D center point, 2D start point, 2D end point).
I have two arcs. The end point of one of them is exactly the same as the start point of the other. In spite of that, Eyeshot constructs the arcs with a significant gap between these points. Is this a bug, or am I doing something wrong?
The parameters of my arcs are as follows:
Arc1: 2D center point = (-0.655572, 0.160451),
2D start point = (-0.008477, 0.049511),
2D end point = (0.000385, 0.1271105).
Arc2: 2D center point = (-1.789206, 0.218072),
2D start point = (0.000385, 0.1271105),
2D end point = (0.002240, 0.177704).

The radius of each arc is defined as the distance between the center and the start point. So if you pass an end point whose distance from the center is different, the arc will not pass through that end point.
In both of your arcs these distances differ, and that's why you get the gap:
C1-Sta1 = 0.65653607869255759
C1-End1 = 0.65680375668022029
C2-Sta2 = 1.7919012087063424
C2-End2 = 1.7919007635301683
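For reference, here is a minimal check (plain C#, no Eyeshot types assumed) that reproduces those distances:
using System;
double Dist(double x1, double y1, double x2, double y2)
    => Math.Sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
// Arc1: center->start vs. center->end
Console.WriteLine(Dist(-0.655572, 0.160451, -0.008477, 0.049511));  // 0.656536...
Console.WriteLine(Dist(-0.655572, 0.160451,  0.000385, 0.1271105)); // 0.656803...
// Arc2: center->start vs. center->end
Console.WriteLine(Dist(-1.789206, 0.218072,  0.000385, 0.1271105)); // 1.791901...
Console.WriteLine(Dist(-1.789206, 0.218072,  0.002240, 0.177704));  // 1.791900...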
So if you want the first arc to end at the point it shares with the second arc, you need to treat that point as the start point and then reverse the arc's orientation:
Plane pl = Plane.XY;
Point2D c1 = new Point2D(-0.655572, 0.160451);
Point2D c2 = new Point2D(-1.789206, 0.218072);
Point2D s1 = new Point2D(-0.008477, 0.049511);
Point2D s2 = new Point2D(0.000385, 0.1271105);
Point2D e1 = new Point2D(0.000385, 0.1271105);
Point2D e2 = new Point2D(0.002240, 0.177704);
// Build a1 on a mirrored plane (X and Y axes swapped), using the shared point e1
// as the start so the radius is measured to e1; then reverse the arc to restore
// the original direction.
Plane plInv = new Plane(pl.Origin, pl.AxisY, pl.AxisX);
Arc a1 = new Arc(plInv, plInv.Project(pl.PointAt(c1)), plInv.Project(pl.PointAt(e1)), plInv.Project(pl.PointAt(s1)));
a1.Reverse();
Arc a2 = new Arc(pl, c2, s2, e2);

Rotate a Vector 3D by a small amount

So I have a 3d vector (JavaScript + Three.js, but it doesn't really matter, since this isn't language-dependent) and I want to rotate it by a small amount in a random direction.
The background: I want random weapon spread in a 3d shooting game, so I have a vector where the player is aiming, but I need to rotate it slightly in a random direction, up to a maximum angle.
You could compute an offset vector in the plane defined by your direction (dir), add it to dir and then normalize to get the new direction.
If you can assume your dir vector never points up (assuming y-up), you can do something like this (some functions are made-up):
var yAxis = new THREE.Vector3(0.0, 1.0, 0.0);
var dir = new THREE.Vector3(...);
dir.normalize();
// Vectors defining the plane orthogonal to 'dir'.
var side = new THREE.Vector3();
var up = new THREE.Vector3();
// This will give a vector orthogonal to 'dir' and 'yAxis'.
side.crossVectors(dir, yAxis);
side.normalize();
// This will give a vector orthogonal to both 'dir' and 'side'.
// This represents the up direction with respect to 'dir'.
up.crossVectors(side, dir);
up.normalize();
// Maximum displacement angle.
var angle = rad(45.0);
// Create a random 2d vector representing the offset in the plane orthogonal
// to 'dir'.
// Alternatively you can draw a random angle in [0, 2*pi) and compute sin/cos.
var delta = new THREE.Vector2(rand(-1.0, 1.0), rand(-1.0, 1.0));
delta.normalize();
delta.multiplyScalar(Math.tan(angle));
// 'side' and 'up' define a plane orthogonal to 'dir', so here we're creating
// the 3d version of the offset vector.
side.multiplyScalar(delta.x);
up.multiplyScalar(delta.y);
// Define the new direction by offsetting 'dir' with the 2 vectors in the
// side/up plane.
var newDir = new THREE.Vector3(dir.x, dir.y, dir.z);
newDir.add(side);
newDir.add(up);
newDir.normalize();
// Just check that the angle between 'dir' and 'newDir' is the same as the
// chosen one.
console.log(Math.acos(dir.dot(newDir)) / Math.PI * 180.0);
If dir can also point up, then you need to generate side and up using dir alone.
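A possible sketch of that fallback (written in C# with System.Numerics here, since the math is language-independent; the 0.99 threshold is an arbitrary choice):
using System;
using System.Numerics;

static class SpreadBasis
{
    // Build 'side' and 'up' from 'dir' alone: pick the world axis least
    // aligned with 'dir' as a helper, so the cross product never degenerates.
    public static (Vector3 side, Vector3 up) FromDir(Vector3 dir)
    {
        dir = Vector3.Normalize(dir);
        Vector3 helper = Math.Abs(dir.Y) < 0.99f
            ? new Vector3(0f, 1f, 0f)   // y axis is safe to use
            : new Vector3(1f, 0f, 0f);  // dir is (nearly) vertical, fall back to x
        Vector3 side = Vector3.Normalize(Vector3.Cross(dir, helper));
        Vector3 up = Vector3.Normalize(Vector3.Cross(side, dir));
        return (side, up);
    }
}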
Hope this helps.

How to calculate a 3D rect that covers exactly the full screen in Unity?

I have a Quad facing the camera in a 3D scene. How can I calculate a position and size that make it cover the screen exactly in Unity?
With those four vectors you should be able to build your quad. They are in world-space coordinates. The 10f number is the distance from the camera to the vertices.
Vector3 p0 = camera.ScreenToWorldPoint(new Vector3(0, 0, 10f));
Vector3 p1 = camera.ScreenToWorldPoint(new Vector3(0, camera.pixelHeight, 10f));
Vector3 p2 = camera.ScreenToWorldPoint(new Vector3(camera.pixelWidth, camera.pixelHeight, 10f));
Vector3 p3 = camera.ScreenToWorldPoint(new Vector3(camera.pixelWidth, 0, 10f));
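To turn those four corners into geometry, a minimal sketch (it assumes the corner order above, bottom-left, top-left, top-right, bottom-right, and a MeshFilter on the same GameObject):
Mesh mesh = new Mesh();
mesh.vertices = new Vector3[] { p0, p1, p2, p3 };
// Two triangles; this winding makes the quad face the camera.
mesh.triangles = new int[] { 0, 1, 2, 0, 2, 3 };
mesh.RecalculateNormals();
GetComponent<MeshFilter>().mesh = mesh;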

Draw 3d faces as 2d

I have a 3d mesh and I would like to draw each face as a 2d shape.
What I have in mind is this:
for each face:
1. access the face normal
2. get a rotation matrix from the normal vector
3. multiply each vertex by the rotation matrix to get the vertices in a '2d-like' plane
4. take 2 coordinates from the transformed vertices
I don't know if this is the best way to do this, so any suggestion is welcome.
At the moment I'm trying to get a rotation matrix from the normal vector; how would I do this?
UPDATE:
Here is an explanation of what I need:
At the moment I have quads, but there's no problem converting them into triangles.
I want to rotate the vertices of a face so that one of the dimensions gets flattened.
I also need to store the original 3d rotation of the face; I imagine that would be the inverse rotation of the face normal.
I think I'm a bit lost in space :)
Here's a basic prototype I did using Processing:
void setup(){
  size(400,400,P3D);
  background(255);
  stroke(0,0,120);
  smooth();
  fill(0,120,0);
  PVector x = new PVector(1,0,0);
  PVector y = new PVector(0,1,0);
  PVector z = new PVector(0,0,1);
  PVector n = new PVector(0.378521084785,0.925412774086,0.0180059205741);//normal
  PVector p0 = new PVector(0.372828125954,-0.178844243288,1.35241031647);
  PVector p1 = new PVector(-1.25476706028,0.505195975304,0.412718296051);
  PVector p2 = new PVector(-0.372828245163,0.178844287992,-1.35241031647);
  PVector p3 = new PVector(1.2547672987,-0.505196034908,-0.412717700005);
  PVector[] face = {p0,p1,p2,p3};
  PVector[] face2d = new PVector[4];
  PVector nr = PVector.add(n,new PVector());//clone normal
  float rx = degrees(acos(n.dot(x)));//angle between normal and x axis
  float ry = degrees(acos(n.dot(y)));//angle between normal and y axis
  float rz = degrees(acos(n.dot(z)));//angle between normal and z axis
  PMatrix3D r = new PMatrix3D();
  //is this ok, or should I drop the builtin function and add
  //the rotations manually?
  r.rotateX(rx);
  r.rotateY(ry);
  r.rotateZ(rz);
  print("original: ");println(face);
  for(int i = 0 ; i < 4; i++){
    PVector rv = new PVector();
    PVector rn = new PVector();
    r.mult(face[i],rv);
    r.mult(nr,rn);
    face2d[i] = PVector.add(face[i],rv);
  }
  print("rotated: ");println(face2d);
  //draw
  float scale = 100.0;
  translate(width * .5,height * .5);//move to centre, Processing has 0,0 = top,left
  beginShape(QUADS);
  for(int i = 0 ; i < 4; i++){
    vertex(face2d[i].x * scale,face2d[i].y * scale,face2d[i].z * scale);
  }
  endShape();
  line(0,0,0,nr.x*scale,nr.y*scale,nr.z*scale);
  //what do I do with this ?
  float c = cos(0), s = sin(0);
  float x2 = n.x*n.x, y2 = n.y*n.y, z2 = n.z*n.z;
  PMatrix3D m = new PMatrix3D(x2+(1-x2)*c,         n.x*n.y*(1-c)-n.z*s, n.x*n.z*(1-c)+n.y*s, 0,
                              n.x*n.y*(1-c)+n.z*s, y2+(1-y2)*c,         n.y*n.z*(1-c)-n.x*s, 0,
                              n.x*n.z*(1-c)-n.y*s, n.y*n.z*(1-c)+n.x*s, z2+(1-z2)*c,         0,
                              0,                   0,                   0,                   1);
}
Update
Sorry if I'm getting annoying, but I don't seem to get it.
Here's a bit of python using Blender's API:
import Blender
from Blender import *
import math
from math import sin,cos,radians,degrees

def getRotMatrix(n):
    c = cos(0)
    s = sin(0)
    x2 = n.x*n.x
    y2 = n.y*n.y
    z2 = n.z*n.z
    l1 = x2+(1-x2)*c,         n.x*n.y*(1-c)-n.z*s, n.x*n.z*(1-c)+n.y*s
    l2 = n.x*n.y*(1-c)+n.z*s, y2+(1-y2)*c,         n.y*n.z*(1-c)-n.x*s
    l3 = n.x*n.z*(1-c)-n.y*s, n.y*n.z*(1-c)+n.x*s, z2+(1-z2)*c
    m = Mathutils.Matrix(l1,l2,l3)
    return m

scn = Scene.GetCurrent()
ob = scn.objects.active.getData(mesh=True) #access mesh
out = ob.name+'\n'
#face0
f = ob.faces[0]
n = f.v[0].no
out += 'face: ' + str(f)+'\n'
out += 'normal: ' + str(n)+'\n'
m = getRotMatrix(n)
m.invert()
rvs = []
for v in range(0,len(f.v)):
    out += 'original vertex'+str(v)+': ' + str(f.v[v].co) + '\n'
    rvs.append(m*f.v[v].co)
out += '\n'
for v in range(0,len(rvs)):
    out += 'rotated vertex'+str(v)+': ' + str(rvs[v]) + '\n'
f = open('out.txt','w')
f.write(out)
f.close()
All I do is get the current object, access the first face, get the normal, get the vertices, calculate the rotation matrix, invert it, then multiply it by each vertex.
Finally I write a simple output.
Here's the output for a default plane for which I rotated all the vertices manually by 30 degrees:
Plane.008
face: [MFace (0 3 2 1) 0]
normal: [0.000000, -0.499985, 0.866024](vector)
original vertex0: [1.000000, 0.866025, 0.500000](vector)
original vertex1: [-1.000000, 0.866026, 0.500000](vector)
original vertex2: [-1.000000, -0.866025, -0.500000](vector)
original vertex3: [1.000000, -0.866025, -0.500000](vector)
rotated vertex0: [1.000000, 0.866025, 1.000011](vector)
rotated vertex1: [-1.000000, 0.866026, 1.000012](vector)
rotated vertex2: [-1.000000, -0.866025, -1.000012](vector)
rotated vertex3: [1.000000, -0.866025, -1.000012](vector)
Here's the first face of the famous Suzanne mesh:
Suzanne.001
face: [MFace (46 0 2 44) 0]
normal: [0.987976, -0.010102, 0.154088](vector)
original vertex0: [0.468750, 0.242188, 0.757813](vector)
original vertex1: [0.437500, 0.164063, 0.765625](vector)
original vertex2: [0.500000, 0.093750, 0.687500](vector)
original vertex3: [0.562500, 0.242188, 0.671875](vector)
rotated vertex0: [0.468750, 0.242188, -0.795592](vector)
rotated vertex1: [0.437500, 0.164063, -0.803794](vector)
rotated vertex2: [0.500000, 0.093750, -0.721774](vector)
rotated vertex3: [0.562500, 0.242188, -0.705370](vector)
The vertices from the Plane.008 mesh are altered; the ones from Suzanne.001's mesh aren't. Shouldn't they be? Should I expect to get zeroes on one axis?
Once I've got the rotation matrix from the normal vector, what is the rotation on x, y and z?
Note: 1. Blender's Matrix supports the * operator. 2. In Blender's coordinate system Z points up; it looks like a right-handed system, rotated 90 degrees on X.
Thanks
That looks reasonable to me. Here's how to get a rotation matrix from a normal vector: the normal is the axis vector, and the angle is 0. You probably want the inverse rotation.
Is your mesh triangulated? I'm assuming it is. If so, you can do this without rotation matrices. Let the points of the face be A, B, C. Take any two vertices of the face, say A and B. Define the x axis along the vector AB. A is at (0,0) and B is at (|AB|,0). C can be determined from trigonometry using the angle between AC and AB (which you get from the dot product) and the length |AC|.
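A small sketch of that trigonometry in C# (System.Numerics; the helper name is mine):
using System;
using System.Numerics;

static class Flatten
{
    // Map triangle (A,B,C) into 2d: A at the origin, B on the x axis.
    public static (Vector2 a, Vector2 b, Vector2 c) Triangle(Vector3 A, Vector3 B, Vector3 C)
    {
        Vector3 ab = B - A;
        Vector3 ac = C - A;
        float abLen = ab.Length();
        float acLen = ac.Length();
        // Cosine of the angle between AB and AC, from the dot product.
        float cos = Vector3.Dot(ab, ac) / (abLen * acLen);
        float sin = MathF.Sqrt(MathF.Max(0f, 1f - cos * cos));
        return (new Vector2(0f, 0f),
                new Vector2(abLen, 0f),
                new Vector2(acLen * cos, acLen * sin));
    }
}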
You created the m matrix correctly. This is the rotation that corresponds to your normal vector. You can use the inverse of this matrix to "unrotate" your points. The normal of face2d will be x, i.e. point along the x-axis. So extract your 2d coordinates accordingly. (This assumes your quad is approximately planar.)
I don't know the library you are using (Processing), so I'm just assuming there are methods for m.invert() and an operator for applying a rotation matrix to a point. They may of course be called something else. Luckily the inverse of a pure rotation matrix is its transpose, and multiplying a matrix and a vector are straightforward to do manually if you need to.
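For completeness, applying the transpose by hand is tiny; a sketch in C# with a row-major float[9]:
// Apply the inverse of a pure rotation (its transpose) to vector v.
static float[] MultTranspose(float[] m, float[] v)
{
    return new float[] {
        m[0]*v[0] + m[3]*v[1] + m[6]*v[2],
        m[1]*v[0] + m[4]*v[1] + m[7]*v[2],
        m[2]*v[0] + m[5]*v[1] + m[8]*v[2]
    };
}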
void setup(){
  size(400,400,P3D);
  background(255);
  stroke(0,0,120);
  smooth();
  fill(0,120,0);
  PVector x = new PVector(1,0,0);
  PVector y = new PVector(0,1,0);
  PVector z = new PVector(0,0,1);
  PVector n = new PVector(0.378521084785,0.925412774086,0.0180059205741);//normal
  PVector p0 = new PVector(0.372828125954,-0.178844243288,1.35241031647);
  PVector p1 = new PVector(-1.25476706028,0.505195975304,0.412718296051);
  PVector p2 = new PVector(-0.372828245163,0.178844287992,-1.35241031647);
  PVector p3 = new PVector(1.2547672987,-0.505196034908,-0.412717700005);
  PVector[] face = {p0,p1,p2,p3};
  PVector[] face2d = new PVector[4];
  float c = cos(0), s = sin(0);
  float x2 = n.x*n.x, y2 = n.y*n.y, z2 = n.z*n.z;
  // Inverse (i.e. transpose) of the axis-angle rotation matrix.
  PMatrix3D m_inverse =
    new PMatrix3D(x2+(1-x2)*c,         n.x*n.y*(1-c)+n.z*s, n.x*n.z*(1-c)-n.y*s, 0,
                  n.x*n.y*(1-c)-n.z*s, y2+(1-y2)*c,         n.y*n.z*(1-c)+n.x*s, 0,
                  n.x*n.z*(1-c)+n.y*s, n.y*n.z*(1-c)-n.x*s, z2+(1-z2)*c,         0,
                  0,                   0,                   0,                   1);
  face2d[0] = m_inverse * p0; // Assuming there's an appropriate operator*().
  face2d[1] = m_inverse * p1;
  face2d[2] = m_inverse * p2;
  face2d[3] = m_inverse * p3;
  // print & draw as you did before...
}
For a face v0-v1-v3-v2, the vectors v3-v0 and v3-v2 together with the face normal already form a rotation matrix that would transform the 2d face into the 3d face.
A matrix represents a coordinate system. Each row (or column, depending on notation) corresponds to one axis of the new coordinate system. A 3d rotation/translation matrix can be represented as:
vx.x vx.y vx.z 0
vy.x vy.y vy.z 0
vz.x vz.y vz.z 0
vp.x vp.y vp.z 1
where vx is the x axis of the coordinate system, vy the y axis, vz the z axis, and vp the origin of the new system.
Assume that v3-v0 is the y axis (2nd row), v3-v2 the x axis (1st row), and the normal the z axis (3rd row). Build a matrix from them (normalize the edge vectors first), then invert the matrix. You'll get a matrix that will rotate the 3d face into a 2d face.
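In C# with System.Numerics, that might look like this (a sketch; it assumes the two edges are perpendicular, so the matrix is a pure rotation):
using System.Numerics;

static class FaceBasis
{
    // Rows are the face's x axis (v3-v2), y axis (v3-v0) and normal.
    // The inverse of this rotation is its transpose.
    public static Matrix4x4 UnrotateMatrix(Vector3 v0, Vector3 v2, Vector3 v3, Vector3 normal)
    {
        Vector3 ax = Vector3.Normalize(v2 - v3); // 1st row: x axis
        Vector3 ay = Vector3.Normalize(v0 - v3); // 2nd row: y axis
        Vector3 az = Vector3.Normalize(normal);  // 3rd row: z axis
        var m = new Matrix4x4(
            ax.X, ax.Y, ax.Z, 0,
            ay.X, ay.Y, ay.Z, 0,
            az.X, az.Y, az.Z, 0,
            0,    0,    0,    1);
        return Matrix4x4.Transpose(m);
    }
}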
I have a 3d mesh and I would like to draw each face as a 2d shape.
I suspect that UV unwrapping algorithms are closer to what you want to achieve than trying to get rotation matrix from 3d face.
That's very easy to achieve (note: by "face" I mean "triangle"):
Create a view matrix that represents a camera looking at the face:
- Determine the center of the face with bilinear interpolation.
- Determine the normal of the face.
- Position the camera some units from the face, opposite to the normal direction.
- Let the camera look at the center of the face.
- Set the camera's up vector to point from the center of the face toward any vertex.
- Set the aspect ratio to 1.
- Compute the view matrix from this data.
Create an orthographic projection matrix:
- Set the width and height of the view frustum large enough to contain the whole face (e.g. the length of the longest side of the face).
- Compute the projection matrix.
For every vertex v of the face, multiply it by both matrices: v * view * projection.
The result is a projection of your 3d faces into 2d space, as if you were looking at them exactly head-on without any perspective distortion. The final coordinates will be in normalized screen coordinates, where (-1, -1) is the bottom left corner, (0, 0) is the center and (1, 1) is the top right corner.
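A compact sketch of those steps with System.Numerics (CreateLookAt and CreateOrthographic are real API; the 10f camera distance and the frustum size are assumptions):
using System;
using System.Numerics;

static class FaceProjector
{
    // Look at a triangle head-on with an orthographic camera and
    // return its vertices in normalized 2d coordinates.
    public static Vector4[] Project(Vector3 a, Vector3 b, Vector3 c)
    {
        Vector3 center = (a + b + c) / 3f;
        Vector3 normal = Vector3.Normalize(Vector3.Cross(b - a, c - a));
        Vector3 eye = center + normal * 10f;        // camera in front of the face,
                                                    // looking opposite the normal
        Vector3 up = Vector3.Normalize(a - center); // up vector toward a vertex
        float size = 2f * MathF.Max((b - a).Length(),
                     MathF.Max((c - b).Length(), (a - c).Length()));
        Matrix4x4 view = Matrix4x4.CreateLookAt(eye, center, up);
        Matrix4x4 proj = Matrix4x4.CreateOrthographic(size, size, 0.1f, 100f);
        Vector3[] verts = { a, b, c };
        var result = new Vector4[3];
        for (int i = 0; i < 3; i++)
            result[i] = Vector4.Transform(new Vector4(verts[i], 1f), view * proj);
        return result; // x,y hold the 2d coordinates, in [-1, 1]
    }
}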

Bounding Boxes for Circle and Arcs in 3D

Given curves of type Circle and Circular-Arc in 3D space, what is a good way to compute accurate bounding boxes (world axis aligned)?
Edit: I found a solution for circles; I still need help with arcs.
C# snippet for solving BoundingBoxes for Circles:
public static BoundingBox CircleBBox(Circle circle)
{
    Point3d O = circle.Center;
    Vector3d N = circle.Normal;
    double ax = Angle(N, new Vector3d(1,0,0));
    double ay = Angle(N, new Vector3d(0,1,0));
    double az = Angle(N, new Vector3d(0,0,1));
    Vector3d R = new Vector3d(Math.Sin(ax), Math.Sin(ay), Math.Sin(az));
    R *= circle.Radius;
    return new BoundingBox(O - R, O + R);
}

private static double Angle(Vector3d A, Vector3d B)
{
    double dP = A * B;
    if (dP <= -1.0) { return Math.PI; }
    if (dP >= +1.0) { return 0.0; }
    return Math.Acos(dP);
}
One thing that's not specified is how you convert that angle range to points in space. So we'll start there and assume that the angle 0 maps to O + rX and the angle π/2 maps to O + rY, where O is the center of the circle and
X = (x1,x2,x3)
and
Y = (y1,y2,y3)
are unit vectors.
So the circle is swept out by the function
P(θ) = O + rcos(θ)X + rsin(θ)Y
where θ is in the closed interval [θstart,θend].
The derivative of P is
P'(θ) = -rsin(θ)X + rcos(θ)Y
For the purpose of computing a bounding box we're interested in the points where one of the coordinates reaches an extremal value, hence points where one of the coordinates of P' is zero.
Setting -rsin(θ)xi + rcos(θ)yi = 0 we get
tan(θ) = sin(θ)/cos(θ) = yi/xi.
So we're looking for θ where θ = arctan(yi/xi) for i in {1,2,3}.
You have to watch out for the details of the range of arctan(), and avoiding divide-by-zero, and that if θ is a solution then so is θ±k*π, and I'll leave those details to you.
All you have to do is find the set of θ corresponding to extremal values in your angle range, and compute the bounding box of their corresponding points on the circle, and you're done. It's possible that there are no extremal values in the angle range, in which case you compute the bounding box of the points corresponding to θstart and θend. In fact you may as well initialize your solution set of θ's with those two values, so you don't have to special case it.
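Putting that together for arcs, here's a hedged C# sketch (plain arrays instead of a geometry library; it assumes θstart ≤ θend ≤ θstart + 2π and unit vectors X, Y as above):
using System;
using System.Collections.Generic;

static class ArcBBox
{
    // P(t) = O + r*cos(t)*X + r*sin(t)*Y, t in [tStart, tEnd]
    public static void Compute(double[] O, double[] X, double[] Y, double r,
                               double tStart, double tEnd,
                               out double[] min, out double[] max)
    {
        min = new[] { double.MaxValue, double.MaxValue, double.MaxValue };
        max = new[] { double.MinValue, double.MinValue, double.MinValue };
        var ts = new List<double> { tStart, tEnd }; // endpoints seed the solution set
        for (int i = 0; i < 3; i++)
        {
            // Coordinate i is extremal where tan(t) = Y[i]/X[i]; Atan2 avoids
            // dividing by zero. The two extrema are t and t + PI.
            double t = Math.Atan2(Y[i], X[i]);
            for (int k = 0; k < 2; k++, t += Math.PI)
            {
                double tt = t;
                while (tt < tStart) tt += 2.0 * Math.PI;              // shift into range
                while (tt > tStart + 2.0 * Math.PI) tt -= 2.0 * Math.PI;
                if (tt <= tEnd) ts.Add(tt);
            }
        }
        foreach (double t in ts)
            for (int i = 0; i < 3; i++)
            {
                double p = O[i] + r * Math.Cos(t) * X[i] + r * Math.Sin(t) * Y[i];
                min[i] = Math.Min(min[i], p);
                max[i] = Math.Max(max[i], p);
            }
    }
}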

How can I turn a ray-plane intersection point into barycentric coordinates?

My problem:
How can I take two 3D points and lock them to a single axis? For instance, so that both their z coordinates are 0.
What I'm trying to do:
I have a set of 3D coordinates in a scene, representing a box with a pyramid on it. I also have a camera, represented by another 3D coordinate. I subtract the camera coordinate from the scene coordinate and normalize it, returning a vector that points to the camera. I then do a ray-plane intersection with a plane that is behind the camera point.
O + tD
Where O (origin) is the camera position, D is the direction from the scene point to the camera, and t is the ray parameter at which the ray intersects the plane.
I've searched far and wide, and as far as I can tell, this is called using a "pinhole camera".
The problem is not my camera rotation; I've eliminated that. The trouble is in translating the intersection point to barycentric (uv) coordinates.
The translation on the x-axis looks like this:
uaxis.x = -a_PlaneNormal.y;
uaxis.y = a_PlaneNormal.x;
uaxis.z = a_PlaneNormal.z;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
While the translation on the z-axis looks like this:
uaxis.x = -a_PlaneNormal.z;
uaxis.y = a_PlaneNormal.y;
uaxis.z = a_PlaneNormal.x;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
My question is: how can I turn a ray plane intersection point to barycentric coordinates on both the x and the z axis?
The usual formula for points (p) on a line, starting at (p0) with vector direction (v) is:
p = p0 + t*v
The criterion for a point (p) on a plane containing (p1) and with normal (n) is:
(p - p1).n = 0
So, plug&chug:
(p0 + t*v - p1).n = (p0-p1).n + t*(v.n) = 0
-> t = (p1-p0).n / v.n
-> p = p0 + ((p1-p0).n / v.n)*v
To check:
(p - p1).n = (p0-p1).n + ((p1-p0).n / v.n)*(v.n)
= (p0-p1).n + (p1-p0).n
= 0
If you want to fix the Z coordinate at a particular value, you need to choose a normal along the Z axis (which will define a plane parallel to XY plane).
Then, you have:
n = (0,0,1)
-> p = p0 + ((p1.z-p0.z)/v.z) * v
-> x and y offsets from p0 = ((p1.z-p0.z)/v.z) * (v.x,v.y)
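In code, the whole thing is a few lines; a C# sketch with System.Numerics (p0 = ray origin, v = direction, p1 = point on the plane, n = plane normal):
using System;
using System.Numerics;

static class RayPlane
{
    // p = p0 + t*v with (p - p1).n = 0; returns null when the ray is
    // parallel to the plane (v.n == 0).
    public static Vector3? Intersect(Vector3 p0, Vector3 v, Vector3 p1, Vector3 n)
    {
        float vDotN = Vector3.Dot(v, n);
        if (Math.Abs(vDotN) < 1e-9f) return null;
        float t = Vector3.Dot(p1 - p0, n) / vDotN;
        return p0 + t * v;
    }
}
For the fixed-z case above, call it with n = (0, 0, 1) and any p1 whose z is the target value.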
Finally, if you're trying to build a virtual "camera" for 3D computer graphics, the standard way to do this kind of thing is homogeneous coordinates. Ultimately, working with homogeneous coordinates is simpler (and usually faster) than the kind of ad hoc 3D vector algebra I have written above.
