I have a camera in OpenGL. I had no problem with it until I added an FPS controller. The basic FPS behavior is OK: the camera moves forward, backward, left and right, and rotates toward the direction supplied by mouse input. The problems begin when the camera moves to the sides of, or behind, the target position. In such a case the camera's local forward, backward, left and right directions aren't updated based on its current forward look but remain the same as if it were right in front of the target. Example:
If the target object is at (0,0,0) and the camera is at (-50,0,0) (to the left of the target), looking at the target, then to move the camera back and forth I have to use the keys for left and right movement, while the backward/forward keys move the camera sideways.
Here is the code I use to calculate camera position, rotation and LookAt matrix:
void LookAtTarget(const vec3 &eye, const vec3 &center, const vec3 &up)
{
    this->_eye = eye;
    this->_center = center;
    this->_up = up;
    this->_direction = normalize(center - eye);
    _viewMatrix = lookAt(eye, center, up);
    _transform.SetModel(_viewMatrix);
    UpdateViewFrustum();
}
void SetPosition(const vec3 &position){
    this->_eye = position;
    this->_center = position + _direction;
    LookAtTarget(_eye, _center, _up);
}
void SetRotation(float rz, float ry, float rx){
    _rotationMatrix = mat4(1);
    vec3 direction(0.0f, 0.0f, -1.0f);
    vec3 up(0.0f, 1.0f, 0.0f);
    _rotationMatrix = eulerAngleYXZ(ry, rx, rz);
    vec4 rotatedDir = _rotationMatrix * vec4(direction, 1);
    this->_center = this->_eye + vec3(rotatedDir);
    this->_up = vec3(_rotationMatrix * vec4(up, 1));
    LookAtTarget(_eye, _center, _up);
}
Then in the render loop I set the camera's transformations:
while(true)
{
    display();
    fps->print(GetElapsedTime());
    if(glfwGetKey(GLFW_KEY_ESC) || !glfwGetWindowParam(GLFW_OPENED)){
        break;
    }
    calculateCameraMovement();
    moveCamera();
    view->GetScene()->GetCurrentCam()->SetRotation(0, -camYRot, -camXRot);
    view->GetScene()->GetCurrentCam()->SetPosition(camXPos, camYPos, camZPos);
}
The lookAt() method comes from the GLM math library.
I am pretty sure I have to multiply some of the vectors (eye, center, etc.) by the rotation matrix, but I am not sure which ones. I tried to multiply _viewMatrix by _rotationMatrix, but it creates a mess. The code for the FPS camera position and rotation calculation is taken from here. But for the actual rendering I use the programmable pipeline.
Update:
I solved the issue by adding a separate method which doesn't calculate the camera matrix using lookAt but instead uses the usual, basic approach:
void FpsMove(GLfloat x, GLfloat y, GLfloat z, float pitch, float yaw){
    _viewMatrix = rotate(mat4(1.0f), pitch, vec3(1, 0, 0));
    _viewMatrix = rotate(_viewMatrix, yaw, vec3(0, 1, 0));
    _viewMatrix = translate(_viewMatrix, vec3(-x, -y, -z));
    _transform.SetModel(_viewMatrix);
}
It solved the problem, but I still want to know how to make it work with the lookAt() method I presented here.
You need to change the forward direction of the camera, which is presumably fixed at (0,0,-1). You can do this by rotating the movement directions about the y axis by camYRot (as computed in the lookAt function) so that forward points in the same direction the camera is facing (in the plane made by the z and x axes).
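A minimal sketch of that with GLM (assumptions: camYRot is the camera's yaw in radians, as current GLM versions expect, and camPos, speed, dt and the key flags are hypothetical stand-ins for your input and timing code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate the fixed forward axis (0,0,-1) about Y by the camera's yaw so
// that "forward" lies along the camera's heading in the XZ plane.
glm::vec3 forward = glm::vec3(
    glm::rotate(glm::mat4(1.0f), camYRot, glm::vec3(0, 1, 0)) *
    glm::vec4(0, 0, -1, 0));
glm::vec3 right = glm::normalize(glm::cross(forward, glm::vec3(0, 1, 0)));

// The movement keys then step along the rotated axes instead of world X/Z:
if (keyForward) camPos += forward * speed * dt;
if (keyBack)    camPos -= forward * speed * dt;
if (keyRight)   camPos += right * speed * dt;
if (keyLeft)    camPos -= right * speed * dt;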
Related
I'm trying to convert world coordinates to screen coordinates. I have available: fov, screen width, screen height, camera position, camera angle and obviously the position of the object in world space.
This is what I tried:
glm::vec3 world_to_screen(glm::vec3 pos,
                          glm::vec3 cam_angle,
                          glm::vec3 cam_pos) {
    glm::mat4 projection = glm::perspective(
        glm::radians(FOV), (float)SCREEN_W / (float)SCREEN_H, NEAR, FAR);
    glm::mat4 model(1.0);
    model = glm::translate(model, cam_pos);
    model = glm::rotate(model, cam_angle.x, glm::vec3(1.0f, 0.0f, 0.0f));
    model = glm::rotate(model, cam_angle.y, glm::vec3(0.0f, 1.0f, 0.0f));
    model = glm::rotate(model, cam_angle.z, glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 view = glm::inverse(model);
    glm::mat4 modelview = view * model;
    return glm::project(pos, modelview, projection,
                        glm::vec4(0, 0, SCREEN_W, SCREEN_H));
}
However, it is not working: the output screen coordinates are over 30000 (I don't have a 30k monitor...) and I'm not sure what I did wrong.
There is a correlation, though: sometimes the screen coordinates happen to land on my screen (I draw an indicator at the position to see if I did it right), and if the object moves, the indicator also moves with (kinda) the same speed, etc.
Help is very much appreciated.
This code:
glm::mat4 view = glm::inverse(model);
glm::mat4 modelview = view * model;
is equivalent to glm::mat4 modelview(1.0f) (except for floating point inaccuracies).
The modelview matrix in computer graphics is the product of the camera's view matrix and the model matrix of the object you want to render. What you calculated is the model matrix of the camera (which you would use to place some 3D object into the world at the camera's position). You typically do not want to do that, and rendering an object with the camera's model matrix means the two cancel each other out, as the center of the object would be transformed to the position of the camera, which is the origin of eye space.
However, since your pos is given in world space, the model matrix of the object is implicitly the identity matrix, hence
glm::mat4 modelview = view;
is what you need here.
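Putting it together, here is a corrected sketch of the function from the question (same assumed constants FOV, SCREEN_W, SCREEN_H, NEAR, FAR and the same angle conventions):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::perspective, glm::project

glm::vec3 world_to_screen(glm::vec3 pos,
                          glm::vec3 cam_angle,
                          glm::vec3 cam_pos) {
    glm::mat4 projection = glm::perspective(
        glm::radians(FOV), (float)SCREEN_W / (float)SCREEN_H, NEAR, FAR);
    // model matrix of the camera, exactly as in the question
    glm::mat4 camera(1.0f);
    camera = glm::translate(camera, cam_pos);
    camera = glm::rotate(camera, cam_angle.x, glm::vec3(1.0f, 0.0f, 0.0f));
    camera = glm::rotate(camera, cam_angle.y, glm::vec3(0.0f, 1.0f, 0.0f));
    camera = glm::rotate(camera, cam_angle.z, glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 view = glm::inverse(camera);
    // pos is in world space, so the object's model matrix is the identity
    // and the "modelview" passed to glm::project is just the view matrix
    return glm::project(pos, view, projection,
                        glm::vec4(0, 0, SCREEN_W, SCREEN_H));
}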
I need to be able to unproject a screen pixel into object space using Vulkan, but somewhere my math is going wrong.
Here is the shader as it stands today for reference:
void main()
{
    // the depth of this pixel is between 0 and 1
    vec4 obj_space = vec4(float(gl_FragCoord.x) / ubo.screen_width,
                          float(gl_FragCoord.y) / ubo.screen_height,
                          gl_FragCoord.z, 1.0f);
    // this puts us in the normalized device coordinate range [-1,1]
    obj_space.xy = (obj_space.xy * 2.0f) - 1.0f;
    // these two lines put us in object space coordinates;
    // mvp_inverse is derived on the C++ side from:
    // glm::inverse(app.three_d_camera->get_projection_matrix() * app.three_d_camera->view_matrix * model);
    obj_space = ubo.mvp_inverse * obj_space;
    obj_space.xyz /= obj_space.w;
    // the resulting position here is wrong
    out_color = obj_space;
}
When I output the position as a color, the colors are off. I know I can simply pass the object-space position from the vertex shader to the fragment shader, but I'd like to understand why my math is not working; it will help me understand Vulkan and maybe teach me a little math.
Thanks!
I'm not entirely sure what your problem is, but let's go over the potential problems.
Remember, Vulkan clip space is:
positive y = down,
positive x = right,
positive z = out,
centered at the middle of the screen.
Additionally, despite OpenGL's GLSL docs saying it is centered at the bottom-left corner, in Vulkan gl_FragCoord is centered at the top-left corner.
In this step:
obj_space.xy = ( obj_space.xy * 2.0f ) -1.0f;
obj_space is now:
left x = -1.0
right x = 1.0
top y = -1.0
bottom y = 1.0
out z = 1.0
back z = 0.0
I'm almost entirely sure you don't mean for your object space to have Y negative at the top. The reasoning for y increasing from top to bottom is images and textures, which are ordered that way on the CPU, and are now ordered that way in Vulkan as well.
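If your object space is meant to be Y-up, one minimal fix (a sketch, reusing the names from the shader above) is to flip Y while the coordinate is still in NDC:

// flip Y in NDC before multiplying by the inverse MVP
obj_space.y = -obj_space.y;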
Some other notes:
You claim your inverse is derived from glm::inverse here:
glm::inverse(app.three_d_camera->get_projection_matrix() * app.three_d_camera->view_matrix * model);
But GLM uses OpenGL conventions for matrix dimensions and handedness, and unless you force it to the correct coordinate system, it is going to assume a right-handed system with positive Y up and negative Z out. You'll need to include the following #defines before it works correctly (or change your calculations to accommodate this):
#define GLM_FORCE_DEPTH_ZERO_TO_ONE
#define GLM_FORCE_LEFT_HANDED
Additionally, you'll need to modify your matrices to account for the negative Y direction. Here is an example of how I've handled this in the past (modifying the perspective matrix directly):
ubo.model = glm::translate(glm::mat4(1.0f), glm::vec3(pos_x,pos_y,pos_z));
ubo.model *= glm::rotate(glm::mat4(1.0f), time * glm::radians(0.0f), glm::vec3(0.0f, 0.0f, 1.0f));
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, -10.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float) swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1; // flip Y so the projected y axis matches Vulkan's clip space
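For reference, here is the same unprojection as a minimal CPU-side sketch (GLM assumed, with the #defines above in effect; the parameters are stand-ins for whatever your application actually has), which can be handy for checking one pixel against a known object-space position:

#include <glm/glm.hpp>

glm::vec4 unproject(float frag_x, float frag_y, float depth,
                    float screen_width, float screen_height,
                    const glm::mat4 &proj, const glm::mat4 &view,
                    const glm::mat4 &model)
{
    // window coordinates -> NDC; Vulkan depth is already in [0, 1]
    glm::vec4 ndc(frag_x / screen_width  * 2.0f - 1.0f,
                  frag_y / screen_height * 2.0f - 1.0f,
                  depth, 1.0f);
    glm::vec4 obj = glm::inverse(proj * view * model) * ndc;
    return obj / obj.w; // perspective divide back into object space
}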
Coded in Processing (processing.org):
I want to know when the mouse or another shape collides with a rectangle.
This is very easy, but I have one problem: I want it to work when the rectangle is rotated (example: rotate(radians(90))).
Both Kevin's and Asad's contributions are useful.
In terms of using the 2D renderer, you need to roll your own functionality for that. For this you should be familiar with a few bits and bobs of linear algebra (mainly vectors and matrices, and just a few operations anyway).
I am going to assume you're already familiar with 2D transformations (using pushMatrix()/popMatrix() along with translate(), rotate(), scale()); if not, I warmly recommend the 2D Transformations Processing tutorial.
I am going to explain some of the concepts only briefly (as it's a big topic on its own).
If you used translate()/rotate()/scale() before, it's all been matrix operations handled for you behind the scenes. In 2D, a transformation can be stored in a 3x3 matrix like so:
X Y T
1 0 0
0 1 0
0 0 1
The rotation and scale are stored in the 1st and 2nd columns (2 values each), while the translation is stored in the last column. In theory you could use a 2x3 matrix instead of a 3x3 one, but an NxN matrix has a few nice properties. One of them is simple multiplication with a vector: positions can be stored as vectors, and we transform a vector by multiplying it with a transformation matrix. If you treat a position as a single-column vector, the 3x3 form of the matrix makes the multiplication work out (see the matrix multiplication rules here), as in the example below.
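For example (just the math, nothing Processing-specific), a rotation by an angle θ combined with a translation by (tx, ty) fits in one 3x3 matrix, and multiplying it with a point (x, y) written as the column vector (x, y, 1) applies both transformations at once:

[ cosθ  -sinθ  tx ]   [ x ]   [ x·cosθ - y·sinθ + tx ]
[ sinθ   cosθ  ty ] * [ y ] = [ x·sinθ + y·cosθ + ty ]
[ 0      0     1  ]   [ 1 ]   [ 1                    ]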
In short:
You can store transformations in a matrix
You can apply these transformation to a vector using multiplication
Back to your issue: to check if a point is within a box that has transformations applied, you can do this:
convert the test point's coordinates to the box's transformed coordinate system by:
inverting the box's transformation matrix and
multiplying the point by the inverted transformation matrix.
This may be hard to grasp at first, but one way to look at it is to imagine rotating the whole 'world' (coordinate system) so that your rotated box becomes axis-aligned (essentially rotating in the opposite direction, i.e. inverting the transformation), then checking if the point is in the box.
Luckily all these matrix operations don't need to be implemented from scratch: PMatrix2D deals with this.
Here is a basic commented sketch explaining all the above:
Box box1, box2;

void setup(){
  size(400,400);
  box1 = new Box(200,100);
  box1.translate(75,100);
  box1.rotate(radians(30));
  box1.scale(1.1);
  box2 = new Box(100,200);
  box2.translate(275,150);
  box2.rotate(radians(-5));
  box2.scale(.95);
}
void draw(){
  background(255);
  box1.update(mouseX,mouseY);
  box2.update(mouseX,mouseY);
  box1.draw();
  box2.draw();
}
class Box{
  PMatrix2D coordinates = new PMatrix2D();//box coordinate system
  PMatrix2D reverseCoordinates = new PMatrix2D();//inverted coordinate system
  PVector reversedTestPoint = new PVector();//allocate reversed point as vector
  PVector testPoint = new PVector();//allocate regular point as vector
  float w,h;//box width and height
  boolean isHovered;

  Box(float w,float h){
    this.w = w;
    this.h = h;
  }
  //whenever we update the regular coordinate system, we update the reversed one too
  void updateReverseCoordinates(){
    reverseCoordinates = coordinates.get();//clone the original coordinate system
    reverseCoordinates.invert();//simply invert it
  }
  void translate(float x,float y){
    coordinates.translate(x,y);
    updateReverseCoordinates();
  }
  void rotate(float angle){
    coordinates.rotate(angle);
    updateReverseCoordinates();
  }
  void scale(float s){
    coordinates.scale(s);
    updateReverseCoordinates();
  }
  boolean isOver(float x,float y){
    reversedTestPoint.set(0,0);//reset the reversed test point
    testPoint.set(x,y);//set the x,y coordinates we want to test
    //transform the passed x,y coordinates to the reversed coordinates using matrix multiplication
    reverseCoordinates.mult(testPoint,reversedTestPoint);
    //simply test the bounding box
    return ((reversedTestPoint.x >= 0 && reversedTestPoint.x <= w) &&
            (reversedTestPoint.y >= 0 && reversedTestPoint.y <= h));
  }
  void update(float x,float y){
    isHovered = isOver(x,y);
  }
  void draw(){
    if(isHovered) fill(127);
    else fill(255);
    pushMatrix();
    applyMatrix(coordinates);
    rect(0,0,w,h);
    popMatrix();
  }
}
You're looking for the modelX() and modelY() functions. Just pass in mouseX and mouseY (z is 0) to find the position of the mouse in rotated space. Similarly, pass in the positions of your rectangles to find their rotated points.
Here's the example from the reference:
void setup() {
  size(500, 500, P3D);
  noFill();
}

void draw() {
  background(0);
  pushMatrix();
  // start at the middle of the screen
  translate(width/2, height/2, -200);
  // some random rotation to make things interesting
  rotateY(1.0); //yrot);
  rotateZ(2.0); //zrot);
  // rotate in X a little more each frame
  rotateX(frameCount / 100.0);
  // offset from center
  translate(0, 150, 0);
  // draw a white box outline at (0, 0, 0)
  stroke(255);
  box(50);
  // the box was drawn at (0, 0, 0), store that location
  float x = modelX(0, 0, 0);
  float y = modelY(0, 0, 0);
  float z = modelZ(0, 0, 0);
  // clear out all the transformations
  popMatrix();
  // draw another box at the same (x, y, z) coordinate as the other
  pushMatrix();
  translate(x, y, z);
  stroke(255, 0, 0);
  box(50);
  popMatrix();
}
I am writing a volume rendering program that constantly adjusts some plane geometry so it always faces the camera. The plane geometry rotates whenever the camera rotates so that it appears not to move, relative to everything else in the scene. (I use the camera's viewing direction as a normal vector to these plane geometries.)
Currently I am manually storing a custom rotation vector ('rotations') and applying its effects as follows in the render function:
gl2.glRotated(rotations.y, 1.0, 0.0, 0.0);
gl2.glRotated(rotations.x, 0.0, 1.0, 0.0);
Then later on I get the viewing direction by rotating the initial view direction (0,0,-1) around the x and y axes with the values from 'rotations'. This is done in the following manner, and the final viewing direction is stored in 'view':
public Vec3f getViewingAngle(){
  //first rotate the viewing POINT
  //then find the vector from there to the center
  Vec3f view = new Vec3f(0,0,-1);
  float newZ = 0;
  float ratio = (float) (Math.PI/180);
  float vA = (float) (-1f*rotations.y*(ratio));
  float hA = (float) (-1f*rotations.x)*ratio;
  //rotate about the x axis first
  float newY = (float) (view.y*Math.cos(vA)-view.z*Math.sin(vA));
  newZ = (float) (view.y*Math.sin(vA)+view.z*Math.cos(vA));
  view = new Vec3f(view.x,newY,newZ);
  //rotate about the y axis
  float newX = (float) (view.z*Math.sin(hA)+view.x*Math.cos(hA));
  newZ = (float) (view.z*Math.cos(hA)-view.x*Math.sin(hA));
  view = new Vec3f(newX,view.y,newZ);
  view = new Vec3f(view.x*-1f,view.y*-1f,view.z*-1f);
  //return the finalized, normalized viewing direction
  view = Vec3f.normalized(view);
  return view;
}
Now I am moving this program to a larger project in which the camera rotation is handled by a third-party graphics library. I have no rotations vector. Is there some way I can get my view direction vector from:
GLfloat matrix[16];
glGetFloatv (GL_MODELVIEW_MATRIX, matrix);
I am looking at this for reference: http://3dengine.org/Modelview_matrix, but I still don't get how to come up with the view direction. Can someone explain to me whether it is possible and how it works?
You'll want to look at this picture: http://db-in.com/images/local_vectors.jpg
The Direction-of-Flight (DOF) is the 3rd row.
GLfloat matrix[16];
glGetFloatv( GL_MODELVIEW_MATRIX, matrix );
float DOF[3];
DOF[0] = matrix[ 2 ]; // x
DOF[1] = matrix[ 6 ]; // y
DOF[2] = matrix[ 10 ]; // z
Reference:
http://blog.db-in.com/cameras-on-opengl-es-2-x/
Instead of trying to follow the modelview matrix to adjust your volume rasterizer's fragment impostor, you should just adjust the modelview matrix to your needs. OpenGL is not a scene graph; it's a drawing system, and you can, and should, change things however they suit you best.
Of course, if you must embed the volume rasterization into a larger scene, it may be necessary to extract certain info from the modelview matrix. The upper-left 3×3 submatrix contains the composite rotation of model and view. The 3rd column contains the view-rotated Z vector.
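Here is a minimal sketch of that extraction with GLM (an assumption; the question uses a raw float[16]). glm::mat4 is column-major, so m[c][r] indexes column c, row r, and when the matrix holds only the view transform the rows of its upper-left 3x3 are the camera's axes expressed in world coordinates:

#include <glm/glm.hpp>

glm::vec3 viewDirection(const glm::mat4 &m)
{
    // third row of the upper-left 3x3: the camera's Z axis in world space
    glm::vec3 zAxis(m[0][2], m[1][2], m[2][2]);
    // the camera looks down its negative Z axis
    return glm::normalize(-zAxis);
}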
I'd like to implement an application which allows the user to select a few QGraphicsItems and then rotate them as a group. I know that I could add all the items into one QGraphicsItemGroup, but I need to keep the Z-value of each item. Is that possible?
I also have a second question.
I'm trying to rotate a QGraphicsItem around some point (different from (0,0); let's say (200,150)). After that operation I want to rotate the item once more, but this time around (0,0). I'm using the code below:
QPointF point(200,150); // point is (200,150) the first time, then it is changed to (0,0)
qreal x = point.rx();
qreal y = point.ry();
item->setTransform(item->transform() * (QTransform().translate(x,y).rotate(angle).translate(-x,-y)));
I noticed that after the second rotation the item is not rotated around the point (0,0) but around some other point (I don't know which one). I also noticed that if I change the order of the operations, it all works fine.
What am I doing wrong?
Regarding your first problem: why should the z-values be a problem when putting the items into a QGraphicsItemGroup?
On the other hand, you could also iterate over the selected items and apply the transformation to each one, as in the sketch below.
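A minimal sketch of that iteration (assuming Qt's Graphics View framework; rotateSelection is a hypothetical helper, and the pivot is given in scene coordinates):

#include <QGraphicsScene>
#include <QGraphicsItem>
#include <QTransform>

// Rotate every selected item around a common scene-space pivot; each item
// keeps its own z-value because no grouping takes place.
void rotateSelection(QGraphicsScene *scene, const QPointF &scenePivot, qreal angle)
{
    const auto items = scene->selectedItems();
    for (QGraphicsItem *item : items) {
        // express the pivot in the item's local coordinates
        const QPointF p = item->mapFromScene(scenePivot);
        item->setTransform(QTransform().translate(p.x(), p.y())
                                       .rotate(angle)
                                       .translate(-p.x(), -p.y()),
                           true /* combine with the current transform */);
    }
}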
I guess this snippet will solve your 2nd problem:
QGraphicsView view;
QGraphicsScene scene;
QPointF itemPosToRotate(-35,-35);
QPointF pivotPoint(25,25);
QGraphicsEllipseItem *pivotCircle = scene.addEllipse(-2.5,-2.5,5,5);
pivotCircle->setPos(pivotPoint);
QGraphicsRectItem *rect = scene.addRect(-5,-5,10,10);
rect->setPos(itemPosToRotate);

// draw some coordinate frame lines
scene.addLine(-100,0,100,0);
scene.addLine(0,100,0,-100);

// do a half-circle rotation
for(int j=0;j<=5;j++)
    for(int i=1;i<=20;i++) {
        rect = scene.addRect(-5,-5,10,10);
        rect->setPos(itemPosToRotate);
        QPointF itemCenter = rect->pos();
        QPointF pivot = pivotCircle->pos() - itemCenter;
        // your local rotation
        rect->setRotation(45);
        // your rotation around the pivot
        rect->setTransform(QTransform().translate(pivot.x(), pivot.y())
                                       .rotate(180.0 * (qreal)i/20.0)
                                       .translate(-pivot.x(), -pivot.y()), true);
    }

view.setScene(&scene);
view.setTransform(view.transform().scale(2,2));
view.show();
EDIT:
In case you meant to rotate around the global coordinate frame's origin, change the rotations to:
rect->setTransform(QTransform().translate(-itemCenter.x(), -itemCenter.y()).rotate(360.0 * (qreal)j/5.0).translate(itemCenter.x(),itemCenter.y()) );
rect->setTransform(QTransform().translate(pivot.x(), pivot.y()).rotate(180.0 * (qreal)i/20.0).translate(-pivot.x(),-pivot.y()),true);