Coordinate vs position - math

struct Point2D
{
    int x;
    int y;
};
There is a 2D tile-based map.
How should I name a variable of type Point2D that represents the coordinates of a specific tile?
Point2D tilePosition;
or
Point2D tileCoordinate;
I have trouble understanding the difference between coordinate and position.

A coordinate system is just a way to identify a position, so I prefer tilePosition. IMHO, a name is supposed to indicate the meaning of the variable, not its internal structure.
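For example, a hypothetical world-to-tile conversion (the player position and the 32-pixel tile size are made up for illustration) shows how the name carries the meaning while the struct stays generic:

// Both variables are Point2D; only the names tell you which space they live in.
Point2D playerWorldPosition = { 340, 215 };            // pixels in world space
Point2D tilePosition = { playerWorldPosition.x / 32,   // 32x32-pixel tiles assumed
                         playerWorldPosition.y / 32 }; // tile indices on the map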

Related

Unity - Find a point for a gameobject to look at the mouse while camera is at any angle

I have a 3D game where I want an arrow to point in a direction based on the mouse's angle relative to that object in a 2D view.
Now, from the camera looking down at the board at a 90 degree x-angle, it works fine. The below image is when I am at a 90 degree x-angle camera angle facing down on my game and have the arrow face where my cursor is:
But when we take a step back and have the camera at a 45 degree x-angle, the direction the arrow is facing is a bit off. The below image is when I have the arrow face my mouse cursor while my camera is at a 45 degree x-angle:
Now let's look at the above image but with the camera shifted back to a 90 degree x-angle:
My current code is:
// Get the vectors of the 2 points, the pivot point which is the ball start and the position of the mouse.
Vector2 objectPoint = Camera.main.WorldToScreenPoint(_arrowTransform.position);
Vector2 mousePoint = (Vector2)Input.mousePosition;
float angle = Mathf.Atan2( mousePoint.y - objectPoint.y, mousePoint.x - objectPoint.x ) * 180 / Mathf.PI;
_arrowTransform.rotation = Quaternion.AngleAxis(-angle, Vector2.up) * Quaternion.Euler(90f, 0f, 0f);
What would I have to add to my Mathf.Atan2() call to compensate for the camera rotation on the x and/or y axes, so that the user can move the camera however they please and still get an accurate direction?
EDIT: The solution was in MotoSV's answer, using Plane. This allowed me to get the exact point no matter what my camera angle was, based on my mouse position. The code that worked for me is below:
void Update()
{
    Plane groundPlane = new Plane(Vector3.up, new Vector3(_arrowTransform.position.x, _arrowTransform.position.y, _arrowTransform.position.z));
    Ray ray = _mainCamera.ScreenPointToRay(Input.mousePosition);
    float distance;
    if (groundPlane.Raycast(ray, out distance))
    {
        Vector3 point = ray.GetPoint(distance);
        _arrowTransform.LookAt(point);
    }
}
Although this does not answer your question directly with regard to the Mathf.Atan2 method, it is an alternative approach that may be useful.
This would be placed onto the game object that represents the arrow:
public class MouseController : MonoBehaviour
{
    private Camera _camera;

    private void Start()
    {
        _camera = GameObject.FindGameObjectWithTag("MainCamera").GetComponent<Camera>();
    }

    private void Update()
    {
        // Plane through the arrow's position, facing straight up.
        Plane groundPlane = new Plane(Vector3.up, this.transform.position);
        Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
        float distance;
        if (groundPlane.Raycast(ray, out distance))
        {
            Vector3 point = ray.GetPoint(distance);
            Vector3 axis = point - this.transform.position;
            axis.y = 0f; // keep the arrow level with the ground plane
            if (axis.sqrMagnitude > 0f) // avoid LookRotation(Vector3.zero)
            {
                this.transform.rotation = Quaternion.LookRotation(axis.normalized);
            }
        }
    }
}
The basic idea is to:
Create a Plane instance centred at the game object's position
Convert the mouse screen position into a Ray that heads into the world, relative to the camera's current position and rotation
Then cast that ray onto the Plane created in step #1
If the ray intersects the plane, you can use the GetPoint method to find out where on the plane the ray hit
Then create a direction vector from the centre of the plane to the intersection point and create a LookRotation based on that vector
You can find out more information about the Plane class on the Unity - Plane documentation page.

Perspective Projection effect correction

I was trying to plot 8 points in a 3D space, taken from the 8 vertices of the above 3D shape.
I used the following code:
#include "Coordinates2d.h"
#include "Point3d.h"
const double zoom = 500;
int main()
{
Coordinates2d::ShowWindow("3D Primitives!");
std::vector<Point3d> points;
points.push_back(Point3d(0,0,20));
points.push_back(Point3d(0,100,20));
points.push_back(Point3d(120,100,20));
points.push_back(Point3d(120,0,20));
points.push_back(Point3d(0,0,120));
points.push_back(Point3d(0,100,120));
points.push_back(Point3d(120,100,120));
points.push_back(Point3d(120,0,120));
for(int i=0 ; i<points.size() ; i++)
{
Coordinates2d::Draw(points[i], zoom);
}
Coordinates2d::Wait();
}
Where Point3d is defined like the following:
#ifndef _POINT_3D_
#define _POINT_3D_

#include "graphics.h"
#include "Matrix.h"
#include "Point2d.h"
#include <cmath>
#include <iostream>

struct Point3d
{
    double x;
    double y;
    double z;
public:
    Point3d();
    Point3d(double x, double y, double z);
    Point3d(Point3d const & point);
    Point3d & operator=(Point3d const & point);
    Point3d & operator+(int scalar);
    bool operator==(Point3d const & point);
    bool operator!=(Point3d const & point);

    Point3d Round()
    {
        return Point3d(floor(this->x + 0.5), floor(this->y + 0.5), floor(this->z + 0.5));
    }

    void Show()
    {
        std::cout << "(" << x << ", " << y << ", " << z << ")";
    }

    bool IsValid();
    double Distance(Point3d & point);
    void SetMatrix(const Matrix & mat);
    Matrix GetMatrix() const;

    // Simple perspective projection: scale x and y by zoom/(zoom - z).
    Point2d ConvertTo2d(double zoom)
    {
        return Point2d(x*zoom/(zoom-z), y*zoom/(zoom-z));
    }
};
#endif
#ifndef _COORDINATES_2D_
#define _COORDINATES_2D_

#include "graphics.h"
#include "Point2d.h"
#include "Point3d.h"
#include "Line3d.h"

class Coordinates2d
{
private:
    static Point2d origin;
public:
    static void Wait();
    static void ShowWindow(char str[]);
private:
    static void Draw(Point2d & pt);
public:
    static void Draw(Point3d & pt, double zoom)
    {
        Coordinates2d::Draw(pt.ConvertTo2d(zoom));
    }
};
#endif
I was expecting the output to be the following:
But the output became like the following:
I am actually interested in moving my viewing camera.
How can I achieve my desired result?
I see from the comments that you achieved your desired result with a clever formula. If you're interested in doing it the 'standard' graphics way, using matrices, I hope this post will help you.
I found an excellent page explaining projection matrices for OpenGL, which also extends to the general mathematics of projection.
If you want to go in depth, here is the very well written article: it explains its steps in detail and is overall highly commendable.
The below image shows the first part of what you're trying to do.
So the image on the left is the 'viewing volume' that you want your camera to see. You can see that in this case, the Center of Projection (basically the focal point of the camera) is at the origin.
But wait, you say, I don't WANT the center of projection to be at the origin! I know, we'll cover that later.
What we're doing here is taking the strangely shaped volume on the left and converting it to what we call 'normalized coordinates' on the right. So we're mapping our viewing volume onto the range of -1 to 1 in each direction. Basically, we mathematically stretch the irregularly shaped viewing volume into this 2x2x2 cube centered at the origin.
This operation is accomplished through the following matrix, again, from the excellent article I linked above.
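That matrix (the standard OpenGL frustum matrix; the six letters are defined just below) is:

$$
\begin{bmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{bmatrix}
$$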
So note you have six variables.
t = top
b = bottom
l = left
r = right
n = near
f = far
Those six variables define your viewing volume. Far is not labeled on the above image, but it is the distance of the farthest plane from the origin.
This projection matrix puts our viewing volume into normalized coordinates. Once coordinates are in this form, you can make the scene flat by simply ignoring the z coordinate, which is similar to some of the work you have done (nice work!).
So we're all set with that for viewing things from the origin. But let's say we don't want to view from the origin, and would prefer to view from, say, somewhere behind and to the side.
Well, we can do that! But instead of moving our viewing volume (we have the math all nicely worked out right here), it is, perhaps counter-intuitively, easier to move all the points we are trying to view.
This can be done by multiplying all of the points by a translation matrix.
Here is the wikipedia page for translation, from which I took the following matrix.
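With $V_x$, $V_y$, $V_z$ as the translation amounts, that matrix is:

$$
T =
\begin{bmatrix}
1 & 0 & 0 & V_x \\
0 & 1 & 0 & V_y \\
0 & 0 & 1 & V_z \\
0 & 0 & 0 & 1
\end{bmatrix}
$$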
Vx, Vy, and Vz are the amount we want to move things in the x, y, and z directions. Keep in mind, if we want to move the camera in the positive x direction, we need a negative Vx, and vice versa. This is because we are moving the points instead of the camera. Feel free to try it and see, if you want.
You may also have noticed that both of the matrices I showed are 4x4, while your coordinates are 3x1. This is because the matrices are meant to be used with homogeneous coordinates. These seem strange because they use 4 variables to represent a 3D point, but it's just x, y, z, and w, where you set w = 1 for your points. I believe this variable is used for depth buffers, among other things, but it is basically ubiquitous in graphics' matrix math, so you'll want to get used to using it.
Now that you have these matrices, you can apply the translation one to your points, then apply the perspective one to those points you got out. Then simply ignore the z components, and there you are! You have a 2D image from -1 to 1 in the x and y directions.
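If it helps to see those two steps in code, here is a minimal, self-contained sketch (the Vec4/Mat4 helpers and all the numbers are made up for illustration; a real renderer would use a matrix library). Note the divide by w at the end, which the matrix's bottom row makes equal to -z:

#include <array>
#include <cstdio>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Multiply a 4x4 matrix by a homogeneous point.
Vec4 mul(const Mat4 &m, const Vec4 &v)
{
    Vec4 r = {0, 0, 0, 0};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r[i] += m[i][j] * v[j];
    return r;
}

// Translation matrix (moves the points; negate to "move the camera").
Mat4 Translation(double vx, double vy, double vz)
{
    Mat4 m = {{{1,0,0,vx}, {0,1,0,vy}, {0,0,1,vz}, {0,0,0,1}}};
    return m;
}

// Perspective matrix built from the six viewing-volume variables above.
Mat4 Frustum(double l, double r, double b, double t, double n, double f)
{
    Mat4 m = {{{2*n/(r-l), 0,          (r+l)/(r-l),  0},
               {0,         2*n/(t-b),  (t+b)/(t-b),  0},
               {0,         0,         -(f+n)/(f-n), -2*f*n/(f-n)},
               {0,         0,         -1,            0}}};
    return m;
}

int main()
{
    Mat4 view = Translation(-60, -50, -200); // move the points in front of the "camera"
    Mat4 proj = Frustum(-1, 1, -1, 1, 1, 1000);
    Vec4 p = {120, 100, 20, 1};              // one vertex, w = 1
    Vec4 clip = mul(proj, mul(view, p));
    double x = clip[0] / clip[3];            // perspective divide: points inside the
    double y = clip[1] / clip[3];            // viewing volume land in -1..1
    std::printf("projected: (%f, %f)\n", x, y);
}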

How do I take a 2D point, and project it into a 3D Vector by a perspective camera

I have a 2D point (x, y) and I want to project it to a vector so that I can perform a ray trace to check whether the user clicked on a 3D object. I have written all the other code, except that when I got back to my function to get the vector from the xy coordinates of the mouse, I was not accounting for field of view, and I don't want to guess what the factor would be, as 'voodoo' fixes are not a good idea for a library. Any math-magicians want to help? :-)
Here's my current code, which needs the FOV of the camera applied:
sf::Vector3<float> Camera::Get3DVector(int Posx, int Posy, sf::Vector2<int> ScreenSize){
    // not using a "wide lens", and will maintain the aspect ratio of the viewport
    int window_x = Posx - ScreenSize.x/2;
    int window_y = (ScreenSize.y - Posy) - ScreenSize.y/2;
    float Ray_x = float(window_x)/float(ScreenSize.x/2);
    float Ray_y = float(window_y)/float(ScreenSize.y/2);
    sf::Vector3<float> Vector(Ray_x, Ray_y, -_zNear);
    // to global coords
    return MultiplyByMatrix((Vector/LengthOfVector(Vector)), _XMatrix, _YMatrix, _ZMatrix);
}
You're not too far off. One thing is to make sure your mouse position is in -1 to 1 space (not 0 to 1).
Then you create 2 vectors:
Vector3 orig = Vector3(mouse.X,mouse.Y,0.0f);
Vector3 far = Vector3(mouse.X,mouse.Y,1.0f);
You also need to use the inverse of your perspective transform (or view-projection if you want world space):
Matrix ivp = Matrix::Invert(Projection);
Then you do:
Vector3 rayorigin = Vector3::TransformCoordinate(orig,ivp);
Vector3 rayfar = Vector3::TransformCoordinate(far,ivp);
If you want a ray, you also need direction, which is simply:
Vector3 raydir = Normalize(rayfar-rayorigin);
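The same unprojection can be written against a concrete math library. Here is a sketch using GLM (an assumption on my part; the TransformCoordinate calls above are just "multiply by the inverse matrix, then divide by w"). Note that GLM follows the OpenGL convention, where the near plane sits at z = -1 in normalized coordinates rather than the 0 used above:

#include <glm/glm.hpp>

struct MouseRay
{
    glm::vec3 origin;
    glm::vec3 dir;
};

// ndcX/ndcY are the mouse position already mapped into -1..1 space.
MouseRay RayFromMouse(float ndcX, float ndcY, const glm::mat4 &viewProj)
{
    glm::mat4 inv = glm::inverse(viewProj);
    glm::vec4 nearPt = inv * glm::vec4(ndcX, ndcY, -1.0f, 1.0f); // near plane
    glm::vec4 farPt  = inv * glm::vec4(ndcX, ndcY,  1.0f, 1.0f); // far plane
    nearPt /= nearPt.w; // the divide TransformCoordinate does implicitly
    farPt  /= farPt.w;
    return { glm::vec3(nearPt), glm::normalize(glm::vec3(farPt - nearPt)) };
}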

QGraphicsItems - selection & rotation

I'd like to implement an application which allows the user to select a few QGraphicsItems and then rotate them as a group. I know that I could add all the items into one QGraphicsItemGroup, but I need to keep the Z-value of each item. Is that possible?
I also have a second question.
I'm trying to rotate a QGraphicsItem around some point (different from (0,0) - let's say (200,150)). After that operation I want to rotate the item once more, but this time around (0,0). I'm using the code below:
QPointF point(200,150); // point is (200,150) the first time and is then changed to (0,0)
qreal x = point.x();
qreal y = point.y();
item->setTransform(item->transform() * (QTransform().translate(x,y).rotate(angle).translate(-x,-y)));
I noticed that after the second rotation the item is not rotated around (0,0) but around some other point (I don't know which). I also noticed that if I change the order of the operations it all works fine.
What am I doing wrong?
Regarding your first problem: why should the Z-values be a problem when putting the items into a QGraphicsItemGroup?
Alternatively, you could iterate over the selected items and apply the transformation to each one directly, as in the sketch below.
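Here is a minimal sketch of that per-item approach (assuming a QGraphicsScene* named scene; the pivot and angle are placeholders). Because the items are never grouped, each one keeps its own Z-value:

// Rotate every selected item around a shared pivot without grouping them.
const QPointF scenePivot(200, 150);
const qreal angle = 45.0;
for (QGraphicsItem *item : scene->selectedItems())
{
    // Express the pivot in the item's local coordinates.
    QPointF p = item->mapFromScene(scenePivot);
    // Combine the pivot rotation with the item's existing transform.
    item->setTransform(QTransform().translate(p.x(), p.y())
                                   .rotate(angle)
                                   .translate(-p.x(), -p.y()),
                       true);
}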
I guess this snippet will solve your 2nd problem:
QGraphicsView view;
QGraphicsScene scene;
QPointF itemPosToRotate(-35,-35);
QPointF pivotPoint(25,25);
QGraphicsEllipseItem *pivotCircle = scene.addEllipse(-2.5,-2.5,5,5);
pivotCircle->setPos(pivotPoint);
QGraphicsRectItem *rect = scene.addRect(-5,-5,10,10);
rect->setPos(itemPosToRotate);
// draw some coordinate frame lines
scene.addLine(-100,0,100,0);
scene.addLine(0,100,0,-100);
// do a half-circle rotation
for(int j=0;j<=5;j++)
    for(int i=1;i<=20;i++) {
        rect = scene.addRect(-5,-5,10,10);
        rect->setPos(itemPosToRotate);
        QPointF itemCenter = rect->pos();
        QPointF pivot = pivotCircle->pos() - itemCenter;
        // your local rotation
        rect->setRotation(45);
        // your rotation around the pivot
        rect->setTransform(QTransform().translate(pivot.x(), pivot.y()).rotate(180.0 * (qreal)i/20.0).translate(-pivot.x(),-pivot.y()), true);
    }
view.setScene(&scene);
view.setTransform(view.transform().scale(2,2));
view.show();
EDIT:
In case you meant to rotate around the global coordinate frame origin, change the rotations to:
rect->setTransform(QTransform().translate(-itemCenter.x(), -itemCenter.y()).rotate(360.0 * (qreal)j/5.0).translate(itemCenter.x(),itemCenter.y()) );
rect->setTransform(QTransform().translate(pivot.x(), pivot.y()).rotate(180.0 * (qreal)i/20.0).translate(-pivot.x(),-pivot.y()),true);

Method to combine multiple affine transforms as if each was specified in un-transformed space

I'm looking for a way to combine affine transforms such that the effect is equivalent to using each transform to manipulate a shape in succession. The problem is that if I simply concatenate the transforms, then each successive transform's effect is interpreted in the existing transform's coordinate space.
For example, consider a square around the origin (-50,-50, 100,100). I want to rotate it, and then translate it down 100px. If I take a transform and rotate and then translate, the translation gets interpreted in the rotated coordinates. Instead, if I transform the shape itself to rotate it, and then transform that shape again to translate it, both transformations are interpreted in the "normal" un-transformed plane, and it gives me what I want.
The problem is that for what I'm doing many transforms may take place, each of which needs to be interpreted in the normal coordinate plane, but I don't want to store a stack of transforms, nor can I simply keep manipulating a shape, because I need to be able, at any time, to create the final transformed shape from the original starting shape.
I'm aware that for this simple example if I did the translate before the rotate I'd get the same result, but that's missing the point. I'm dealing with an arbitrary set of successive scale, translate, and rotate transforms, so simply putting them in a certain order doesn't cut it.
I have an inkling that there should be a way to concatenate transforms in such a way that you modify the new transform before you concatenate it, correcting for the existing transform so that the effect is that the new transform appears to have been applied as if it were referencing the un-transformed coordinate plane. For example, if you translate by (70.7, 70.7) in the above example instead of (0,100), the result becomes equivalent. I just can't figure out the math for how to alter the new transform in general so it works out correctly.
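One way to verify the 70.7 figure: if the existing transform is a rotation $R(\theta)$ and you concatenate a translation $t$, the world-space effect of that translation is $R(\theta)\,t$, so to achieve a desired world-space translation $t_w$ you must concatenate $t = R(\theta)^{-1}\,t_w$ instead:

$$R(45^\circ)^{-1}\begin{pmatrix}0\\100\end{pmatrix} = \begin{pmatrix}100\sin 45^\circ\\100\cos 45^\circ\end{pmatrix} \approx \begin{pmatrix}70.7\\70.7\end{pmatrix}$$

Equivalently, applying a new transform $N$ in un-transformed space on top of an existing transform $A$ amounts to forming $NA$ (pre-concatenation) rather than $AN$.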
Thanks for reading - hope this made sense. Here's the source of the example that created the screenshot:
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.awt.RenderingHints;
import java.awt.Shape;
import java.awt.geom.AffineTransform;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class TransformExample extends JPanel {
    @Override
    protected void paintComponent(Graphics _g) {
        super.paintComponent(_g);
        Graphics2D g = (Graphics2D) _g;
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
        g.translate(150, 100); // translate so we can see method 1 clearly
        paintConcatenate(g);
        g.translate(200, 0); // translate again so we can see method 2 to the right of method 1
        paintSuccessive(g);
    }

    private void paintConcatenate(Graphics2D g) {
        AffineTransform tx = new AffineTransform();
        Shape shape = new Rectangle(-50, -50, 100, 100);
        // Draw the 3 steps, altering the transform each time
        draw(g, shape, tx, Color.GRAY);
        tx.rotate(Math.PI / 4);
        draw(g, shape, tx, Color.GREEN);
        tx.translate(70.7, 70.7);
        draw(g, shape, tx, Color.PINK);
    }

    private void paintSuccessive(Graphics2D g) {
        Shape shape = new Rectangle(-50, -50, 100, 100);
        // Draw the 3 steps, altering the shape each time with a new transform
        draw(g, shape, null, Color.GRAY);
        shape = AffineTransform.getRotateInstance(Math.PI / 4).createTransformedShape(shape);
        draw(g, shape, null, Color.GREEN);
        shape = AffineTransform.getTranslateInstance(0, 100).createTransformedShape(shape);
        draw(g, shape, null, Color.PINK);
    }

    private void draw(Graphics2D g, Shape shape, AffineTransform tx, Color color) {
        if (tx != null) {
            shape = tx.createTransformedShape(shape);
        }
        g.setColor(color);
        g.fill(shape);
    }

    public static void main(String[] args) {
        JFrame f = new JFrame("Transform Example");
        f.setSize(500, 350);
        f.setContentPane(new TransformExample());
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}
(I'm working with Java2D, although I don't think the language or 2d library is all that pertinent here.)
I suggest you keep track of some absolute values and then do as few transformations as you can.
For example, store the translation matrix and the rotation angle around the origin.
int translate[2];
int rotate;
Now suppose you want to rotate the object around its center, then translate it somewhere, and then rotate it again around its center.
With affine transformations, rotations and translations aren't commutative, so if you apply a rotation, translation, rotation you'll get a wrong result.
But you can simply sum the rotation angles of the first and third rotations, and apply a single rotation and then the translation.
Hope this is clear.
When you rotate an object, you normally rotate around a specific point. It looks like you are just rotating around (0,0), which is usually not what you want.
To rotate around a specific point (x,y):
translate the point to the origin (-x, -y),
then rotate,
then translate back (x, y).
public static AffineTransform getRotateInstance(double theta, double anchorx, double anchory)
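This convenience method performs exactly the translate-rotate-translate composition described above:

$$R_{(x,y)}(\theta) = T(x,y)\,R(\theta)\,T(-x,-y)$$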