I have a problem with my 3D project. It is quite complicated to describe its purpose, so I will abstract it to the minimum.
I have a live video stream of the Unity program, which I bring up to fullscreen (1920 x 1200). A user clicks on the screen to send the coordinates to the Unity app.
sending coords:
// relative coord
float x = mouse_x / 1920.0f;
float y = mouse_y / 1200.0f;
The receiver is the Unity app, which tries to turn this into a 3D coordinate and find a wall or an obstacle on which to place a mark.
Attempt 1:
// 1268 x 720 receiver viewport size
Ray ray = Camera.main.ScreenPointToRay(new Vector3(Position.x * 1268.0f, Position.y * 720.0f, 0));
Attempt 2:
// scaling by 1268 is not necessary here; viewport coordinates are normalized
Vector3 far = Camera.main.ViewportToWorldPoint(new Vector3(fix.Position.x, fix.Position.y, 1));
Vector3 near = Camera.main.ViewportToWorldPoint(new Vector3(fix.Position.x, fix.Position.y, 0));
Vector3 dir = far - near;
dir.Normalize();
Ray ray = new Ray(near, dir);
RaycastHit hitInfo;
if (Physics.Raycast(ray, out hitInfo))
{
    // place mark
}
Both attempts give the same result. If the coordinate is near the center, it lands in the center on the receiver as well. But the closer it gets to the edge, the farther it lands from where it should be. The picture shows what I think happens: the red circle is the current behaviour and the green one is what I expected. I'd rather have a ray going out at 90 degrees from the screen to the wall than one passing through the camera.
I really do not know what to do. Thank you very much in advance for your help.
You're right about your drawing, this is indeed what's happening.
Here is a test I did using Debug.DrawRay.
The red ray is the camera's forward direction, the output of this code:
Debug.DrawRay(Camera.main.transform.position, Camera.main.transform.forward * 100f, Color.red);
And here is the blue ray, drawn from the viewport point; this reproduces the behaviour you drew in red:
var viewportPointRay = Camera.main.ViewportPointToRay(viewportTouchPos);
Debug.DrawRay(viewportPointRay.origin, viewportPointRay.direction * 3f, Color.blue);
I was expecting a truly simple answer, but I wasn't able to find one. I did find a trick to do what you want, though.
var ray = new Ray(viewportPointRay.origin, Camera.main.transform.forward);
Debug.DrawRay(ray.origin, ray.direction, Color.green);
Result
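Putting the two pieces together for the original question, here is a minimal sketch of a receiver-side handler. It assumes the normalized coordinates arrive as a Vector2 in 0..1 viewport space; the method name and markPrefab are illustrative, not from the original posts:

void PlaceMark(Vector2 position)
{
    // Build the ray from the normalized viewport coordinates, then replace its
    // direction with the camera's forward vector so the ray leaves the screen
    // at 90 degrees instead of fanning out from the camera position.
    Ray viewportRay = Camera.main.ViewportPointToRay(new Vector3(position.x, position.y, 0f));
    Ray ray = new Ray(viewportRay.origin, Camera.main.transform.forward);

    RaycastHit hitInfo;
    if (Physics.Raycast(ray, out hitInfo))
    {
        // hitInfo.point is where the mark belongs, e.g.:
        // Instantiate(markPrefab, hitInfo.point, Quaternion.identity);
    }
}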
Related
I have a 3D game where I want an arrow to point in the direction of the mouse cursor, based on the mouse's angle relative to that object in a 2D view.
From the camera looking down at the board at a 90-degree x-angle it works fine. The image below is when I am at a 90-degree x-angle facing down on my game, with the arrow facing where my cursor is:
But when we take a step back and set the camera to a 45-degree x-angle, the direction the arrow faces is a bit off. The image below is when the arrow tries to face my mouse cursor while my camera is at a 45-degree x-angle:
Now let's look at the same situation with the camera shifted back to a 90-degree x-angle:
My current code is:
// Get the vectors of the 2 points, the pivot point which is the ball start and the position of the mouse.
Vector2 objectPoint = Camera.main.WorldToScreenPoint(_arrowTransform.position);
Vector2 mousePoint = (Vector2)Input.mousePosition;
float angle = Mathf.Atan2( mousePoint.y - objectPoint.y, mousePoint.x - objectPoint.x ) * 180 / Mathf.PI;
_arrowTransform.rotation = Quaternion.AngleAxis(-angle, Vector2.up) * Quaternion.Euler(90f, 0f, 0f);
What would I have to add in my Mathf.Atan2() to compensate for the camera rotation on the x and/or y to make sure when the user wants to move the camera how they please it will make sure to provide an accurate direction?
EDIT: The solution was in MotoSV's answer, using Plane. This let me get the exact point no matter what my camera angle was, based on my mouse position. The code that worked for me is below:
void Update()
{
    Plane groundPlane = new Plane(Vector3.up, _arrowTransform.position);
    Ray ray = _mainCamera.ScreenPointToRay(Input.mousePosition);
    float distance;
    if (groundPlane.Raycast(ray, out distance))
    {
        Vector3 point = ray.GetPoint(distance);
        _arrowTransform.LookAt(point);
    }
}
Although this does not answer your question directly with regards to the Mathf.Atan2 method, it is an alternative approach that may be useful.
This would be placed onto the game object that represents the arrow:
public class MouseController : MonoBehaviour
{
    private Camera _camera;

    private void Start()
    {
        _camera = GameObject.FindGameObjectWithTag("MainCamera").GetComponent<Camera>();
    }

    private void Update()
    {
        Plane groundPlane = new Plane(Vector3.up, this.transform.position);
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        float distance;
        Vector3 axis = Vector3.zero;

        if (groundPlane.Raycast(ray, out distance))
        {
            Vector3 point = ray.GetPoint(distance);
            axis = (point - this.transform.position).normalized;
            axis = new Vector3(axis.x, 0f, axis.z);
        }

        this.transform.rotation = Quaternion.LookRotation(axis);
    }
}
The basic idea is to:
1. Create a Plane instance centred at the game object's position
2. Convert the mouse screen position into a Ray that heads into the world, relative to the camera's current position and rotation
3. Cast that ray onto the Plane created in step #1
4. If the ray intersects the plane, use the GetPoint method to find out where on the plane the ray hit
5. Create a direction vector from the centre of the plane to the intersection point, and build a LookRotation from that vector
You can find out more information about the Plane class on the Unity - Plane documentation page.
I am using a Google Tango tablet to acquire point cloud data and RGB camera images. I want to create a 3D scan of the room. For that I need to map 2D image pixels to point cloud points. I will be doing this with a lot of point clouds and corresponding images. Thus I need to write a script that takes two inputs, 1. a point cloud and 2. an image taken from the same point in the same direction, and outputs a colored point cloud. How should I approach this, and which platforms would be simplest to use?
Here is the math to map a 3D point v to 2D pixel space in the camera image (assuming that v already incorporates the extrinsic camera position and orientation, see note at bottom*):
// Project to tangent space.
vec2 imageCoords = v.xy/v.z;
// Apply radial distortion.
float r2 = dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0 + k1*r2 + k2*r4 + k3*r6;
// Map to pixel space.
vec3 pixelCoords = cameraTransform*vec3(imageCoords, 1);
Where cameraTransform is the 3x3 matrix:
[ fx 0 cx ]
[ 0 fy cy ]
[ 0 0 1 ]
with fx, fy, cx, cy, k1, k2, k3 from TangoCameraIntrinsics.
pixelCoords is declared vec3 but is actually 2D in homogeneous coordinates. The third coordinate is always 1 and so can be ignored for practical purposes.
Note that if you want texture coordinates instead of pixel coordinates, that is just another linear transform that can be premultiplied onto cameraTransform ahead of time (as is any top-to-bottom vs. bottom-to-top scanline addressing).
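As a concrete illustration, here is a hedged sketch in Unity-style C# of folding a pixel-to-UV mapping (with a top-to-bottom flip) into the transform. imageWidth and imageHeight are assumed names for the color image dimensions, and cameraTransform is the 3x3 matrix above embedded in a Matrix4x4:

// Sketch only: uv.x = x / imageWidth, uv.y = 1 - y / imageHeight.
Matrix4x4 pixelToUV = Matrix4x4.identity;
pixelToUV[0, 0] = 1f / imageWidth;
pixelToUV[1, 1] = -1f / imageHeight;
pixelToUV[1, 2] = 1f;                                  // the offset rides on the homogeneous 1
Matrix4x4 uvTransform = pixelToUV * cameraTransform;   // premultiplied once, reused per point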
As for what "platform" (which I loosely interpreted as "language") is simplest, the native API seems to be the most straightforward way to get your hands on camera pixels, though it appears people have also succeeded with Unity and Java.
* Points delivered by TangoXYZij already incorporate the depth camera extrinsic transform. Technically, because the current developer tablet shares the same hardware between depth and color image acquisition, you won't be able to get a color image that exactly matches unless both your device and your scene are stationary. Fortunately in practice, most applications can probably assume that neither the camera pose nor the scene changes enough in one frame time to significantly affect color lookup.
This answer is not original; it is simply meant as a convenience for Unity users who would like the correct answer, as provided by @rhashimoto, worked out for them. My contribution (hopefully) is providing code that reduces the normal 16 multiplies and 12 adds (given Unity only does 4x4 matrices) to 2 multiplies and 2 adds by dropping out all of the zero results. I ran a little under a million points through the test, checking each time that my calculations agreed with the basic matrix calculations - defined as the absolute difference between the two results being less than machine epsilon. I'm as comfortable with this as I can be, knowing that @rhashimoto may show up and poke a giant hole in it :-)
If you want to switch back and forth, remember this is C#, so the USEMATRIXMATH define must appear at the beginning of the file.
Given there's only one Tango device right now, and I'm assuming the intrinsics are constant across all of the devices, I just dumped them in as constants, such that
fx = 1042.73999023438
fy = 1042.96997070313
cx = 637.273986816406
cy = 352.928985595703
k1 = 0.228532999753952
k2 = -0.663019001483917
k3 = 0.642908990383148
Yes, they could be declared as named constants, which would make things more readable, and C# is probably smart enough to optimize them out - however, I spent too much of my life in Agner Fog's writings, and will always be paranoid.
The commented out code at the bottom is for testing the difference, should you desire. You'll have to uncomment some other stuff, and comment out the returns if you want to test the results.
My thanks again to @rhashimoto; this is far, far better than what I had.
I have stayed true to his logic. Remember these are pixel coordinates, not UV coordinates - he is correct that you can premultiply the transform to get normalized UV values, but since he schooled me on this once already, I will stick with exactly the math he presented before I fiddle with it too much :-)
static public Vector2 PictureUV(Vector3 tangoDepthPoint)
{
    Vector2 imageCoords = new Vector2(tangoDepthPoint.x / tangoDepthPoint.z, tangoDepthPoint.y / tangoDepthPoint.z);
    float r2 = Vector2.Dot(imageCoords, imageCoords);
    float r4 = r2 * r2;
    float r6 = r2 * r4;
    imageCoords *= 1.0f + 0.228532999753952f * r2 - 0.663019001483917f * r4 + 0.642908990383148f * r6;
    Vector3 ic3 = new Vector3(imageCoords.x, imageCoords.y, 1);

#if USEMATRIXMATH
    Matrix4x4 cameraTransform = new Matrix4x4();
    cameraTransform.SetRow(0, new Vector4(1042.73999023438f, 0, 637.273986816406f, 0));
    cameraTransform.SetRow(1, new Vector4(0, 1042.96997070313f, 352.928985595703f, 0));
    cameraTransform.SetRow(2, new Vector4(0, 0, 1, 0));
    cameraTransform.SetRow(3, new Vector4(0, 0, 0, 1));
    Vector3 pixelCoords = cameraTransform * ic3;
    return new Vector2(pixelCoords.x, pixelCoords.y);
#else
    //float v1 = 1042.73999023438f * imageCoords.x + 637.273986816406f;
    //float v2 = 1042.96997070313f * imageCoords.y + 352.928985595703f;
    //float v3 = 1;
    return new Vector2(1042.73999023438f * imageCoords.x + 637.273986816406f, 1042.96997070313f * imageCoords.y + 352.928985595703f);
#endif

    //float dx = Math.Abs(v1 - pixelCoords.x);
    //float dy = Math.Abs(v2 - pixelCoords.y);
    //float dz = Math.Abs(v3 - pixelCoords.z);
    //if (dx > float.Epsilon || dy > float.Epsilon || dz > float.Epsilon)
    //    UnityEngine.Debug.Log("Well, that didn't work");
    //return new Vector2(v1, v2);
}
As one final note, do note that the code he provided is GLSL - if you're just using this for pretty pictures, use that; this version is for those who actually need to perform additional processing.
I am writing a volume render program that constantly adjusts some plane geometry so it always faces the camera. The plane geometry rotates whenever the camera rotates so that it appears stationary relative to everything else in the scene. (I use the camera's viewing direction as the normal vector for these plane geometries.)
Currently I am manually storing a custom rotation vector ('rotations') and applying its effects as follows in the render function:
gl2.glRotated(rotations.y, 1.0, 0.0, 0.0);
gl2.glRotated(rotations.x, 0.0, 1.0, 0.0);
Then later on I get the viewing direction by rotating the initial view direction (0,0,-1) around the x and y axes with the values from 'rotations'. This is done in the following manner; the final viewing direction is stored in 'view':
public Vec3f getViewingAngle(){
    //first rotate the viewing POINT
    //then find the vector from there to the center
    Vec3f view=new Vec3f(0,0,-1);
    float newZ=0;
    float ratio=(float) (Math.PI/180);
    float vA=(float) (-1f*rotations.y*(ratio));
    float hA=(float) (-1f*rotations.x)*ratio;
    //rotate about the x axis first
    float newY=(float) (view.y*Math.cos(vA)-view.z*Math.sin(vA));
    newZ=(float) (view.y*Math.sin(vA)+view.z*Math.cos(vA));
    view=new Vec3f(view.x,newY,newZ);
    //rotate about Y axis
    float newX=(float) (view.z*Math.sin(hA)+view.x*Math.cos(hA));
    newZ=(float) (view.z*Math.cos(hA)-view.x*Math.sin(hA));
    view=new Vec3f(newX,view.y,newZ);
    view=new Vec3f(view.x*-1f,view.y*-1f,view.z*-1f);
    //return the finalized normal viewing direction
    view=Vec3f.normalized(view);
    return view;
}
Now I am moving this program to a larger project wherein the camera rotation is handled by a 3rd party graphics library. I have no rotations vector. Is there some way I can get my view direction vector from:
GLfloat matrix[16];
glGetFloatv (GL_MODELVIEW_MATRIX, matrix);
I am looking at this for reference: http://3dengine.org/Modelview_matrix but I still don't get how to come up with the view direction. Can someone explain whether it is possible and how it works?
You'll want to look at this picture: http://db-in.com/images/local_vectors.jpg
The Direction-of-Flight (DOF) is the 3rd row of the modelview matrix:
GLfloat matrix[16];
glGetFloatv( GL_MODELVIEW_MATRIX, matrix );
float DOF[3];
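// OpenGL stores matrices in column-major order, so elements 2, 6 and 10
// together form the third row of the 4x4 matrix.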
DOF[0] = matrix[ 2 ]; // x
DOF[1] = matrix[ 6 ]; // y
DOF[2] = matrix[ 10 ]; // z
Reference:
http://blog.db-in.com/cameras-on-opengl-es-2-x/
Instead of trying to follow the modelview matrix to adjust your volume rasterizer's fragment impostor, you should just adjust the modelview matrix to your needs. OpenGL is not a scene graph; it's a drawing system, and you can, and should, change things however they suit you best.
Of course, if you must embed the volume rasterization into a larger scene, it may be necessary to extract certain info from the modelview matrix. The upper-left 3×3 submatrix contains the composite rotation of model and view. The 3rd column contains the view-rotated Z vector.
I have a 2D point (x,y) and I want to project it to a vector, so that I can perform a ray-trace to check if the user clicked on a 3D object. I have written all the other code, except that when I got back to my function for getting the vector from the xy coordinates of the mouse, I realized I was not accounting for field of view, and I don't want to guess what the factor would be, as 'voodoo' fixes are not a good idea for a library. Any math-magicians want to help? :-)
Here's my current code, which needs the camera's FOV applied:
sf::Vector3<float> Camera::Get3DVector(int Posx, int Posy, sf::Vector2<int> ScreenSize){
    //not using a "wide lens", and will maintain the aspect ratio of the viewport
    int window_x = Posx - ScreenSize.x/2;
    int window_y = (ScreenSize.y - Posy) - ScreenSize.y/2;
    float Ray_x = float(window_x)/float(ScreenSize.x/2);
    float Ray_y = float(window_y)/float(ScreenSize.y/2);
    sf::Vector3<float> Vector(Ray_x,Ray_y, -_zNear);
    // to global coords
    return MultiplyByMatrix((Vector/LengthOfVector(Vector)), _XMatrix, _YMatrix, _ZMatrix);
}
You're not too far off. One thing is to make sure your mouse coordinates are in -1 to 1 space (not 0 to 1).
Then you create 2 vectors:
Vector3 orig = Vector3(mouse.X,mouse.Y,0.0f);
Vector3 far = Vector3(mouse.X,mouse.Y,1.0f);
You also need the inverse of your perspective transform (or of the view-projection if you want world space):
Matrix ivp = Matrix::Invert(Projection);
Then you do:
Vector3 rayorigin = Vector3::TransformCoordinate(orig,ivp);
Vector3 rayfar = Vector3::TransformCoordinate(far,ivp);
If you want a ray, you also need direction, which is simply:
Vector3 raydir = Normalize(rayfar-rayorigin);
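For reference, here is the same unprojection as a hedged sketch in Unity-style C#. mouseNdc is an assumed Vector2 already mapped into -1..1 space, viewProjection is the combined view-projection matrix, and the 0/1 depth values follow this answer's convention; your API's NDC depth range may differ:

// Sketch only: unproject two points at the mouse position and build a picking ray.
Matrix4x4 ivp = viewProjection.inverse;
Vector3 rayOrigin = ivp.MultiplyPoint(new Vector3(mouseNdc.x, mouseNdc.y, 0f)); // near
Vector3 rayFar = ivp.MultiplyPoint(new Vector3(mouseNdc.x, mouseNdc.y, 1f));    // far
Vector3 rayDir = (rayFar - rayOrigin).normalized;                               // direction
// Matrix4x4.MultiplyPoint performs the homogeneous divide, matching TransformCoordinate.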
I want to know a way to flip an angle horizontally (mirror it left-right) without having to do many operations. Say I have an angle of 0 ("pointing right" in my code's coordinate system); the flipped angle should be 180 (pointing left). If it's 90 (pointing up), flipped it should still be 90. 89 becomes 91, and so on.
I could operate on the X/Y speeds implied by the angle, but that would slow things down, and I feel it's not the proper way to go.
I don't know much math, so I might be calling things by the wrong name... Can anyone help?
EDIT: Sorry I took long, I had to be out of the computer for long, OK...
http://img215.imageshack.us/img215/8095/screenshot031v.jpg
This screenshot might help. The structure above consists of two satellites and a beam linked to the white dot in the center. The two satellites should inherit the angle of the white dot (it's visible for debug purposes), so if it aims at an angle, they follow. The satellite on the left is mirrored, so I calculated it with 180-angle as suggested (it was my first try as well). As you can see, it is not mirrored but flipped, and when the white dot rotates, it rotates backwards. The other one does fine.
This is the angle recalculation for something linked to something else; pid is the parent and id the current object. pin.ang is the angle offset copied when the object is linked to another, so it keeps its position when rotated:
if (object[id].mirror)
    object[id].angle = 180 - (object[id].pin.ang + object[pid].angle);
else
    object[id].angle = object[id].pin.ang + object[pid].angle;
And this is the specific rotation part (OpenGL). The offx/offy is for things rotated off-center, like the beam about to come out there; everything else renders correctly:
glTranslatef(list[index[i]].x, list[index[i]].y, 0);
glRotatef(list[index[i]].angle, 0.0, 0.0, 1.0);
glTranslatef(list[index[i]].offx, -list[index[i]].offy, 0);
The rotation also seems to go wrong when a rotation speed is involved (an integer added to the current angle on every redraw, positive for rotating clockwise), like in this next one:
http://img216.imageshack.us/img216/7/screenshot032ulr.jpg
So it's definitely not 180-angle, despite how obvious that'd be. The mirroring is done by just reversing the texture coordinates, so it doesn't affect the angle. I am afraid it might be a quirk of the GL rotation.
The reflected amount (just looking at the maths) would be (180 - angle)
Angle | Reflection
------+-----------
0 | 180
90 | 90
89 | 91
91 | 89
360 | -180
270 | -90
Note the negatives if you fall below the "horizontal plane" - which you could leave as they are, or handle as a special case.
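If you would rather keep the result in the 0-360 range than special-case the negatives, here is a one-line sketch (C#, illustrative):

// Mirror left-right (0 -> 180, 90 -> 90, 270 -> 270), then wrap into [0, 360).
float flipped = ((180f - angle) % 360f + 360f) % 360f;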
Isn't it simply
result = 180-(your angle)
As already explained, you find the opposite angle by subtracting your angle from 180 degrees. Eg:
180 - yourangle
Directly manipulating the X/Y speeds would not be very cumbersome. You simply reverse the direction of the X speed by multiplying it by minus 1, for example: speedx = (-1) * speedx. This changes the left-right direction, e.g. something moving to the left would start moving to the right, and vice versa, while the vertical speed would be unaffected.
If you're using sine/cosine (sin/cos) to recalculate your X/Y speed components anyway, then the multiply-by-minus-one method would probably be more efficient. Ultimately it depends on the context of your program. If you're looking for a better solution, update your question with more details.
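To make the equivalence concrete, here is a small sketch (Unity C#, illustrative names) showing that negating the X speed is the same as reflecting the angle, since cos(180 - a) = -cos(a) while sin(180 - a) = sin(a):

// Decompose the speed into components from the angle (in degrees).
float vx = speed * Mathf.Cos(angle * Mathf.Deg2Rad);
float vy = speed * Mathf.Sin(angle * Mathf.Deg2Rad);

// Two equivalent left-right flips:
vx = -vx;                          // negate the horizontal component...
float flippedAngle = 180f - angle; // ...or reflect the angle itself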
This solution is for -Y oriented angles (like a watch)! For +X orientation (like school math) you need to swap X and Y.
public static float FlipAngleX(float angle)
{
    angle = NormalizeAngle(angle);
    angle = TwoPi - angle;
    return angle;
}

public static float FlipAngleY(float angle)
{
    angle = NormalizeAngle(angle);
    if (angle < Pi)
    {
        angle = Pi - angle;
    }
    else
    {
        angle = TwoPi - angle + Pi;
    }
    return angle;
}

/// <summary>
/// Keeps angle between 0 - Two Pi
/// </summary>
public static float NormalizeAngle(float angle)
{
    if (angle < 0)
    {
        int backRevolutions = (int)(-angle / TwoPi);
        return angle + TwoPi * (backRevolutions + 1);
    }
    else
    {
        return angle % TwoPi;
    }
}
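A quick usage sketch, assuming Pi and TwoPi are constants defined alongside these methods (e.g. Mathf.PI and 2f * Mathf.PI):

// Flipping 45 degrees (Pi/4 radians) across the Y axis gives 135 degrees (3*Pi/4).
float flipped = FlipAngleY(Mathf.PI / 4f);
Debug.Log(flipped); // prints roughly 2.356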
Aah, it seems the problem came from negative numbers after all. I made sure they stay positive and now the rotation works fine; I don't even need to recalculate the angle...
Thanks to everyone, I ended up figuring it out from bits of every response.
To flip counter-clockwise to clockwise (270 on the right -> 90 on the right):
360 - angle
--
To flip vertically (180 on top -> 0/360 on top):
Math.Normalize(angle - 180)
--
Both:
float flipped_vertical = 360 - angle;
float flipped_vertical_and_horizontal = Math.Normalize(flipped_vertical - 180);
Just 360-angle will flip your angle horizontally but not vertically.