I really got stuck with drawing "roads" on Pixmap in Qt.
I have all coordinates as fractional values which are very close to each other (I got them by converting longitude/latitude to X/Y coordinates using Mercator's formulas). The Qt drawLine function I'm using takes only integer parameters for drawing on a pixmap (since nobody draws 2.5 pixels, for example). Moreover, the coordinate system starts at the top left corner, so I need to change it, like this:
Xnew = Xold
Ynew = Ymax - Yold
Now I have an ordinary X/Y coordinate system, with the Y-axis pointing up and the X-axis pointing right.
Here's my code, showing how I'm trying to draw the lines:
double minlat = 637800*log(tan(3.14/4 + 3.14*bounds[1]/360.0))/log(2.71);
double maxlat = 637800*log(tan(3.14/4 + 3.14*bounds[2]/360.0))/log(2.71);
std::vector<double> x;
std::vector<double> y;
QSize size = ui->label_2->size();
QImage pic(size.width(),size.height(),QImage::Format_ARGB32_Premultiplied);
pic.fill(Qt::transparent);
QPainter painter(&pic);
for (unsigned int i = 0; i < wayVector.size(); i++) {
    for (unsigned int j = 0; j < wayVector[i].refs.size(); j++) {
        x.push_back(637800*3.14*nodeHash[wayVector[i].refs[j]].lon/180.0);
        y.push_back(637800*log(tan(3.14/4 + 3.14*nodeHash[wayVector[i].refs[j]].lat/360.0))/log(2.71));
    }
    for (unsigned int j = 0; j < wayVector[i].refs.size() - 1; j++) {
        painter.setPen(Qt::green);
        double x1 = x[j]/(size.width()/(maxlon-minlon));
        double y1 = maxlat*size.height()/(maxlat-minlat) - y[j]*size.height()/(maxlat-minlat);
        double x2 = x[j+1]/(size.width()/(maxlon-minlon));
        double y2 = maxlat*size.height()/(maxlat-minlat) - y[j+1]*size.height()/(maxlat-minlat);
        painter.drawLine(x1, y1, x2, y2);
    }
    x.clear();
    y.clear();
}
But as soon as I pass x1, y1, x2, y2 to the drawLine function they are converted to integers and everything goes wrong: all the X/Y coordinates become the same (because they are so close to each other).
I really don't know how I can draw these lines on a pixmap.
Any ideas?
There are 5 different drawLine() overloads. Use void QPainter::drawLine(const QPointF& p1, const QPointF& p2) or void QPainter::drawLine(const QLineF& line) instead. The types ending with F use doubles.
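For example, the inner drawing loop could keep the fractional precision like this (a minimal sketch based on the code above; the antialiasing hint is optional but helps with sub-pixel lines):

painter.setRenderHint(QPainter::Antialiasing); // optional: smooth sub-pixel lines
painter.setPen(Qt::green);
for (unsigned int j = 0; j < wayVector[i].refs.size() - 1; j++) {
    QPointF p1(x[j]   / (size.width() / (maxlon - minlon)),
               maxlat * size.height() / (maxlat - minlat) - y[j]   * size.height() / (maxlat - minlat));
    QPointF p2(x[j+1] / (size.width() / (maxlon - minlon)),
               maxlat * size.height() / (maxlat - minlat) - y[j+1] * size.height() / (maxlat - minlat));
    painter.drawLine(p1, p2); // QPointF overload keeps the doubles intact
}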
I've been trying to represent a 2d array of images as an isometric grid in Processing, however I cannot seem to get their placement right.
The images do not get placed next to each other (as in, the tiles do not touch), even though the x and y points seem to indicate they should be (as the cartesian view works and the isometric conversion equations seem to be correct).
Here is what I mean:
I think I may be treating my translations and rotations wrong, but after hours of googling I cannot find how.
My full code for this implementation can be seen here. It is full Processing code and overcomplicated, but a simpler version can be seen below.
color grass = color(20, 255, 20); //Grass tiles lay within wall tiles. These are usually images, but here they are colours for simplicity
color wall = color(150, 150, 150);

void setup() {
  size(600, 600);
  noLoop();
}

void draw() {
  int rectWidth = 30;
  float scale = 2; //Used to grow the shapes larger
  float gap = rectWidth * scale; //The gap between each "tile", to allow tiles to fit next to each other
  int rows = 4, cols = 4; //How many rows and columns there are in the grid
  translate(300, 200);
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      /* x and y calculations */
      float cartesianX = col * gap; //The standard cartesian x and y points. These place the tiles next to each other on the cartesian plane
      float cartesianY = row * gap;
      float isometricX = (cartesianX - cartesianY); //The isometric x and y points. The equations calculate them from the cartesian ones
      float isometricY = (cartesianX + cartesianY) / 2;
      /* transformations and placement */
      pushMatrix(); //Pushes the transform and rotate matrix onto a stack, allowing it to be reset after each loop
      translate(isometricX, isometricY); //Translate to the point where the tile needs to be placed
      scale(scale, scale / 2); //Scale the tile, making it twice as wide as it is high
      rotate(radians(45)); //Rotate the tile into place
      //Work out what colour to set the box to
      if (row == 0 || col == 0 || row == rows - 1 || col == cols - 1) fill(wall);
      else fill(grass);
      rect(0, 0, rectWidth, rectWidth);
      popMatrix();
    }
  }
}
Let's look closer at how you're using two values:
int rectWidth = 30;
This is the size of the rectangles. Makes sense.
float gap = rectWidth * scale;
This is the distance between the left sides of adjacent rectangles. In other words, you're using it to place the rectangles. When it is greater than the size of the rectangles, you'll have space between them. And since you're multiplying rectWidth by scale (which is 2), it's going to be greater than rectWidth.
In other words, if you make your gap equal to rectWidth, you don't get any spaces:
float gap = rectWidth;
Of course, that means you can probably get rid of your gap variable entirely, but it might come in handy if you want to space the rectangles out to make their borders thicker or something.
I am trying to draw some ellipses as if they were on the perimeter of an imaginary circle. I have done my logic, but I do not see where it fails. Basically, I move the starting point where I want it, then get locations using trigonometry, given that the angle and the hypotenuse are known. See the code:
// Curve for 5 number
translate(width/6*3 - 30, width/6*4);
for (int alpha = 0; alpha < 120; alpha = alpha + 5) {
  int radius = (int)random(30) + 20;
  int xpos = (int)cos(alpha)*350;
  int ypos = (int)sin(alpha)*350;
  ellipse(xpos, ypos, radius, radius);
}
}
cos() and sin() expect radians. Try sin(radians(alpha)).
Also, you should probably make xpos and ypos floats; the (int) cast applies to cos()/sin() alone, truncating them to -1, 0 or 1 before the multiplication by 350.
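For illustration, here is the same computation with both fixes applied, as a plain C++ sketch (std::acos(-1.0) stands in for Processing's PI; the actual ellipse() call is left out):

#include <cmath>

int main() {
    const double PI = std::acos(-1.0);
    for (int alpha = 0; alpha < 120; alpha += 5) {
        // keep positions as floating point and convert degrees to radians first
        double xpos = std::cos(alpha * PI / 180.0) * 350.0;
        double ypos = std::sin(alpha * PI / 180.0) * 350.0;
        // draw the ellipse at (xpos, ypos) here
    }
    return 0;
}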
I would like to draw a textured circle in Direct3D which looks like a real 3D sphere. For this purpose, I took a texture of a billard ball and tried to write a pixel shader in HLSL, which maps it onto a simple pre-transformed quad in such a way that it looks like a 3-dimensional sphere (apart from the lighting, of course).
This is what I've got so far:
struct PS_INPUT
{
    float2 Texture : TEXCOORD0;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

sampler2D Tex0;

// main function
PS_OUTPUT ps_main( PS_INPUT In )
{
    // default color for points outside the sphere (alpha=0, i.e. invisible)
    PS_OUTPUT Out;
    Out.Color = float4(0, 0, 0, 0);
    float pi = acos(-1);

    // map texel coordinates to [-1, 1]
    float x = 2.0 * (In.Texture.x - 0.5);
    float y = 2.0 * (In.Texture.y - 0.5);
    float r = sqrt(x * x + y * y);

    // if the texel is not inside the sphere
    if(r > 1.0f)
        return Out;

    // 3D position on the front half of the sphere
    float p[3] = {x, y, sqrt(1 - x*x + y*y)};

    // calculate UV mapping
    float u = 0.5 + atan2(p[2], p[0]) / (2.0*pi);
    float v = 0.5 - asin(p[1]) / pi;

    // do some simple antialiasing
    float alpha = saturate((1-r) * 32); // scale by half quad width

    Out.Color = tex2D(Tex0, float2(u, v));
    Out.Color.a = alpha;
    return Out;
}
The texture coordinates of my quad range from 0 to 1, so I first map them to [-1, 1]. After that I followed the formula in this article to calculate the correct texture coordinates for the current point.
At first, the outcome looked ok, but I'd like to be able to rotate this illusion of a sphere arbitrarily. So I gradually increased u in the hope of rotating the sphere around the vertical axis. This is the result:
As you can see, the imprint of the ball looks unnaturally deformed when it reaches the edge. Can anyone see any reason for this? And additionally, how could I implement rotations around an arbitrary axis?
Thanks in advance!
I finally found the mistake myself: the calculation of the z value corresponding to the current point (x, y) on the front half of the sphere was wrong. It must of course be:
float p[3] = {x, y, sqrt(1 - x*x - y*y)};
That's all, it works as expected now. Furthermore, I figured out how to rotate the sphere: you just have to rotate the point p before calculating u and v, by multiplying it with a 3D rotation matrix, like this one for example.
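For instance, a rotation by an angle theta around the vertical axis could be sketched like this (theta is an assumed shader constant, not part of the original code):

// rotate the sphere point p around the y axis before the UV calculation
float c = cos(theta), s = sin(theta);
float rx =  c * p[0] + s * p[2];   // standard y-axis rotation matrix
float rz = -s * p[0] + c * p[2];
// then compute u and v from (rx, p[1], rz) instead of p
float u = 0.5 + atan2(rz, rx) / (2.0 * pi);
float v = 0.5 - asin(p[1]) / pi;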
The result looks like the following:
If anyone has any advice as to how I could smooth the texture a little bit, please leave a comment.
I have two vectors in a game. One vector is the player, one vector is an object. I also have a vector that specifies the direction the player is facing. The direction vector has no z part; it is a unit vector (magnitude 1) anchored at the origin.
I want to calculate the angle between the direction the soldier is currently facing and the object, so I can correctly pan some audio (stereo only).
The diagram below describes my problem. I want to calculate the angle between the two dashed lines. One dashed line connects the player and the object, and the other is a line representing the direction the player is facing from the point the player is at.
At the moment, I am doing this (assume player, object and direction are all vectors with three components: x, y and z):
Vector3d v1 = direction;
Vector3d v2 = object - player;
v1.normalise();
v2.normalise();
float angle = acos(dotProduct(v1, v2));
But it seems to give me incorrect results. Any advice?
Test of code:
Vector3d soldier = Vector3d(1.f, 1.f, 0.f);
Vector3d object = Vector3d(1.f, -1.f, 0.f);
Vector3d dir = Vector3d(1.f, 0.f, 0.f);
Vector3d v1 = dir;
Vector3d v2 = object - soldier;
long steps = 360;
for (long step = 0; step < steps; step++) {
    float rad = (float)step * (M_PI / 180.f);
    v1.x = cosf(rad);
    v1.y = sinf(rad);
    v1.normalise();
    float dx = dotProduct(v2, v1);
    float dy = dotProduct(v2, soldier);
    float vangle = atan2(dx, dy);
}
You should always use atan2 when computing angular deltas, and then normalize.
The reason is that, for example, acos is a function with domain -1..1; even after normalizing, if the input's absolute value gets slightly bigger than 1 (because of floating-point approximations) the function will fail, even though in such a case you clearly want an angle of 0 or PI. Also, acos cannot measure the full range -PI..PI, and you'd need explicit sign tests to find the correct quadrant.
By contrast, atan2's only singularity is at (0, 0) (where of course it doesn't make sense to compute an angle), and its codomain is the full circle -PI..PI.
Here is an example in C++
// Absolute angle 1
double a1 = atan2(object.y - player.y, object.x - player.x);
// Absolute angle 2
double a2 = atan2(direction.y, direction.x);
// Relative angle
double rel_angle = a1 - a2;
// Normalize to -PI .. +PI
rel_angle -= floor((rel_angle + PI)/(2*PI)) * (2*PI);
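A quick sanity check with the test values from the question (player at (1, 1), object at (1, -1), facing along +x):

#include <cmath>
#include <cstdio>

int main() {
    const double PI = std::acos(-1.0);
    double a1 = std::atan2(-1.0 - 1.0, 1.0 - 1.0); // object relative to player: -PI/2
    double a2 = std::atan2(0.0, 1.0);              // facing direction: 0
    double rel_angle = a1 - a2;
    rel_angle -= std::floor((rel_angle + PI) / (2*PI)) * (2*PI);
    std::printf("%f\n", rel_angle); // prints -1.570796: object is 90 degrees to one side
    return 0;
}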
In the case of a general 3D orientation you need two orthogonal directions, e.g. the vector of where the nose is pointing and the vector to where your right ear is.
In that case the formulas are just slightly more complex, but simpler if you have the dot product handy:
// I'm assuming that '*' is defined as the dot product
// between two vectors: x1*x2 + y1*y2 + z1*z2
double dx = (object - player) * nose_direction;
double dy = (object - player) * right_ear_direction;
double angle = atan2(dx, dy); // Already in -PI ... PI range
In 3D space, you also need to compute the axis:
Vector3d axis = normalise(crossProduct(normalise(v1), normalise(v2)));
I have a renderer using DirectX and OpenGL, and a 3D scene. The viewport and the window are of the same dimensions.
How do I implement picking given mouse coordinates x and y in a platform independent way?
If you can, do the picking on the CPU by calculating a ray from the eye through the mouse pointer and intersect it with your models.
If this isn't an option I would go with some type of ID rendering. Assign each object you want to pick a unique color, render the objects with these colors and finally read out the color from the framebuffer under the mouse pointer.
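A minimal sketch of that readback step (OpenGL flavor; it assumes each pickable object was just rendered with its ID encoded as a flat RGB color):

#include <GL/gl.h>

// Sketch: decode the object ID under the mouse after an ID-color render pass.
unsigned int pickObjectId(int mouse_x, int mouse_y, int viewport_height)
{
    unsigned char pixel[3];
    // flip y because glReadPixels uses a bottom-left origin
    glReadPixels(mouse_x, viewport_height - mouse_y - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    // reverse of the encoding used when rendering: id -> (r, g, b)
    return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}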
EDIT: If the question is how to construct the ray from the mouse coordinates, you need the following: a projection matrix P and the camera transform C. If the coordinates of the mouse pointer are (x, y) and the size of the viewport is (width, height), one position in clip space along the ray is:
mouse_clip = [
    float(x) * 2 / float(width) - 1,
    1 - float(y) * 2 / float(height),
    0,
    1]
(Notice that I flipped the y-axis, since the origin of mouse coordinates is usually in the upper left corner.)
The following is also true:
mouse_clip = P * C * mouse_worldspace
Which gives:
mouse_worldspace = inverse(C) * inverse(P) * mouse_clip
We now have:
p = C.position(); //origin of camera in worldspace
n = normalize(mouse_worldspace - p); //unit vector from p through mouse pos in worldspace
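Putting it together, the computation might be sketched as follows (assuming a math library such as GLM; P, C and the camera position come from your renderer):

#include <glm/glm.hpp>

// Sketch: build a world-space picking ray from mouse coordinates.
// P = projection matrix, C = camera transform, cameraPos = p above.
glm::vec3 pickRayDirection(float x, float y, float width, float height,
                           const glm::mat4& P, const glm::mat4& C,
                           const glm::vec3& cameraPos)
{
    glm::vec4 mouse_clip(x * 2.0f / width - 1.0f,
                         1.0f - y * 2.0f / height, // flip y (origin top-left)
                         0.0f,
                         1.0f);
    glm::vec4 mouse_world = glm::inverse(C) * glm::inverse(P) * mouse_clip;
    mouse_world /= mouse_world.w;                 // undo the perspective divide
    return glm::normalize(glm::vec3(mouse_world) - cameraPos); // n
}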
Here's the viewing frustum:
First you need to determine where on the nearplane the mouse click happened:
rescale the window coordinates (0..640,0..480) to [-1,1], with (-1,-1) at the bottom-left corner and (1,1) at the top-right.
'undo' the projection by multiplying the scaled coordinates by what I call the 'unview' matrix: unview = (P * M).inverse() = M.inverse() * P.inverse(), where M is the ModelView matrix and P is the projection matrix.
Then determine where the camera is in worldspace, and draw a ray starting at the camera and passing through the point you found on the nearplane.
The camera is at M.inverse().col(4), i.e. the final column of the inverse ModelView matrix.
Final pseudocode:
normalised_x = 2 * mouse_x / win_width - 1
normalised_y = 1 - 2 * mouse_y / win_height
// note the y pos is inverted, so +y is at the top of the screen
unviewMat = (projectionMat * modelViewMat).inverse()
near_point = unviewMat * Vec(normalised_x, normalised_y, 0, 1)
camera_pos = ray_origin = modelViewMat.inverse().col(4)
ray_dir = near_point - camera_pos
Well, pretty simple, the theory behind this is always the same:
1) Unproject your 2D coordinate twice into 3D space (each API has its own function for this, but you can implement your own if you want): once at min Z, once at max Z.
2) With these two points, calculate the vector that goes from min Z and points to max Z.
3) With the vector and a point, calculate the ray that goes from min Z to max Z.
4) Now you have a ray; with it you can do a ray-triangle/ray-plane/ray-something intersection and get your result (see the sketch below)...
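In OpenGL, those steps might look roughly like this (a sketch; gluUnProject performs the unprojection, with winZ = 0 and winZ = 1 for the near and far planes, and the matrices are fetched from the fixed-function state):

#include <GL/glu.h>
#include <cmath>

// Sketch: build a picking ray by unprojecting the mouse position twice.
void mouseRay(int mouse_x, int mouse_y, double origin[3], double dir[3])
{
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    double y = viewport[3] - mouse_y; // window origin is top-left, GL's is bottom-left
    double nx, ny, nz, fx, fy, fz;
    gluUnProject(mouse_x, y, 0.0, modelview, projection, viewport, &nx, &ny, &nz); // min Z
    gluUnProject(mouse_x, y, 1.0, modelview, projection, viewport, &fx, &fy, &fz); // max Z

    double dx = fx - nx, dy = fy - ny, dz = fz - nz;
    double len = std::sqrt(dx*dx + dy*dy + dz*dz);
    origin[0] = nx; origin[1] = ny; origin[2] = nz;
    dir[0] = dx/len; dir[1] = dy/len; dir[2] = dz/len; // normalized direction
}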
I have little DirectX experience, but I'm sure it's similar to OpenGL. What you want is the gluUnProject call.
Assuming you have a valid Z buffer you can query the contents of the Z buffer at a mouse position with:
// obtain the viewport, modelview matrix and projection matrix
// you may keep the viewport and projection matrices throughout the program if you don't change them
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
// obtain the Z position (not world coordinates but in range 0 - 1)
GLfloat z_cursor;
glReadPixels(x_cursor, y_cursor, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z_cursor);
// obtain the world coordinates
GLdouble x, y, z;
gluUnProject(x_cursor, y_cursor, z_cursor, modelview, projection, viewport, &x, &y, &z);
If you don't want to use GLU, you can also implement gluUnProject yourself; its functionality is relatively simple and is described at opengl.org.
Ok, this topic is old but it was the best I found on the subject, and it helped me a bit, so I'll post here for those who are following ;-)
This is the way I got it to work without having to compute the inverse of Projection matrix:
void Application::leftButtonPress(u32 x, u32 y) {
    GL::Viewport vp = GL::getViewport(); // just a call to glGet GL_VIEWPORT
    vec3f p = vec3f::from(
        ((float)(vp.width - x) / (float)vp.width),
        ((float)y / (float)vp.height),
        1.);
    // alternatively vec3f p = vec3f::from(
    //     ((float)x / (float)vp.width),
    //     ((float)(vp.height - y) / (float)vp.height),
    //     1.);
    p *= vec3f::from(APP_FRUSTUM_WIDTH, APP_FRUSTUM_HEIGHT, 1.);
    p += vec3f::from(APP_FRUSTUM_LEFT, APP_FRUSTUM_BOTTOM, 0.);
    // now p elements are in (-1, 1)
    vec3f near = p * vec3f::from(APP_FRUSTUM_NEAR);
    vec3f far = p * vec3f::from(APP_FRUSTUM_FAR);
    // ray in world coordinates
    Ray ray = { _camera->getPos(), -(_camera->getBasis() * (far - near).normalize()) };
    _ray->set(ray.origin, ray.dir, 10000.); // debugging vertex array to see the ray on screen
    Node* node = _scene->collide(ray, Transform());
    cout << "node is : " << node << endl;
}
This assumes a perspective projection, but the question never arises for the orthographic one in the first place.
I've got the same situation with ordinary ray picking, but something is wrong. I've performed the unproject operation the proper way, but it just doesn't work. I think I've made some mistake, but can't figure out where. My matrix multiplication, inverse, and vector-by-matrix multiplication all seem to work fine; I've tested them.
In my code I'm reacting to WM_LBUTTONDOWN. lParam returns the [Y][X] coordinates as two words in a dword. I extract them, then convert to normalized space; I've checked that this part also works fine. When I click the lower left corner I get values close to -1, -1, and good values for the other three corners. I'm then using the line_points.vtx array for debugging, and it's not even close to reality.
unsigned int x_coord = lParam & 0x0000ffff;                 // X raw coord
unsigned int y_coord = client_area.bottom - (lParam >> 16); // Y raw coord

double xn = ((double)x_coord / client_area.right) * 2 - 1;  // X in [-1, +1]
double yn = 1 - ((double)y_coord / client_area.bottom) * 2; // Y in [-1, +1]

_declspec(align(16)) gl_vec4 pt_eye(xn, yn, 0.0, 1.0);
gl_mat4 view_matrix_inversed;
gl_mat4 projection_matrix_inversed;
cam.matrixProjection.inverse(&projection_matrix_inversed);
cam.matrixView.inverse(&view_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye, &projection_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye, &view_matrix_inversed);

line_points.vtx[line_points.count*4]     = pt_eye.x - cam.pos.x;
line_points.vtx[line_points.count*4 + 1] = pt_eye.y - cam.pos.y;
line_points.vtx[line_points.count*4 + 2] = pt_eye.z - cam.pos.z;
line_points.vtx[line_points.count*4 + 3] = 1.0;