I'm trying to figure out the math to find a random point inside a cube.
I have a small piece of code, but it doesn't take the rotation of the cube into account.
Here are some images of my results.
Here you can see the cube is rotated to some degree, but when I generate points they retain the shape of the unrotated cube (I think the term is axis-aligned, but I'm not sure).
I'm using a Vector to represent the extent of the cube, but for the life of me I can't figure out how to get the points to follow it when it's rotated.
Can someone point me in the right direction as to how I would do this?
EDIT1:
Now it's misaligned, and it gets even weirder when I rotate it sideways.
Can someone walk me through it from the beginning? I think my baseline math is wrong to begin with.
Generate the points in the unrotated (axis-aligned) position, then apply the rotation (and also check the origin of your coordinates).
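A minimal sketch of that idea (the Vec3 and Mat3 types here are placeholders, not your engine's API): pick the point inside the axis-aligned cube centred on the origin, rotate that offset, then translate it to the cube's centre.

#include <cstdlib>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // row-major rotation matrix (placeholder type)

Vec3 rotate(const Mat3& r, const Vec3& v) {
    return { r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z,
             r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z,
             r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z };
}

float randRange(float lo, float hi) {
    return lo + (hi - lo) * (std::rand() / (float)RAND_MAX);
}

// center = cube position, extent = half-size along each local axis,
// rotation = the cube's orientation.
Vec3 randomPointInCube(const Vec3& center, const Vec3& extent, const Mat3& rotation) {
    // 1. Random point inside the axis-aligned cube centred on the origin.
    Vec3 local { randRange(-extent.x, extent.x),
                 randRange(-extent.y, extent.y),
                 randRange(-extent.z, extent.z) };
    // 2. Rotate the local offset, then translate to the cube's centre.
    Vec3 r = rotate(rotation, local);
    return { center.x + r.x, center.y + r.y, center.z + r.z };
}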
Apologies for asking this question, but I'm having trouble getting my head around 2D geometry and transforms (probably due to lack of sleep); I can't visualize things in my mind's eye. Please could you help?
I'm using Qt and QTransform, although this is largely irrelevant as this is a mathematical problem. I have an image that takes up the whole viewport, I zoom into the image at a point (zoomPos) that is clicked on. I accomplish this with the following transform:
zoomTransform.translate(zoomPos.x(), zoomPos.y());
zoomTransform.scale(zoomFactor, zoomFactor);
zoomTransform.translate(-zoomPos.x(), -zoomPos.y());
What I wish to calculate are the point coordinates of the center of the scaled (zoomed) image in terms of the original (unscaled) coordinate system. Another way of explaining this is: I wish to calculate the point coordinates of the original image that is the center of the scaled (zoomed) image. I hope that makes sense.
I tried using QTransform::map, which maps a point to the coordinate system defined by the transform. I think I have to use an inverted zoomTransform (not sure), and I'm also not sure which coordinates to map from.
Thanks for reading.
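A rough sketch of the inverse-mapping idea mentioned above (the viewport object here is a stand-in for whatever widget holds the image, not part of the original code): the centre of the zoomed view, expressed in the original image coordinates, is the viewport centre mapped through the inverse of zoomTransform.

QPointF viewportCenter(viewport.width() / 2.0, viewport.height() / 2.0);

bool invertible = false;
QTransform inverse = zoomTransform.inverted(&invertible);
if (invertible) {
    // The centre of the zoomed view in the original (unscaled) coordinate system.
    QPointF centerInOriginal = inverse.map(viewportCenter);
}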
I want to calculate the angle of view (or field of view) from a photograph, without knowing anything about the camera, so that I can use that information in a 3D environment.
I have to use trigonometry to solve this (most probably arctan), but I'm not proficient enough in math.
Can somebody please help?
Please have a look at this example.
I assume the angle between the line CENTER-LEFT and CENTER-RIGHT is 90° in reality.
I know the distances (in pixels) of point C to the vanishing points VP-left and VP-right.
Furthermore the height of the image is the angle of view in my 3D environment.
Thanks!
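One way to get there, sketched under two assumptions (the principal point sits at the image centre C, and the two vanishing directions really are perpendicular, as in the 90° assumption above): the focal length in pixels is the geometric mean of the two distances from C to the vanishing points, and the field of view then follows from arctan.

#include <cmath>

// d1, d2: pixel distances from the image centre C to VP-left and VP-right,
// measured along the horizon line. Assumes the principal point is the image
// centre and the two vanishing directions are perpendicular.
double horizontalFovDegrees(double d1, double d2, double imageWidth) {
    double focalPixels = std::sqrt(d1 * d2);                  // focal length in pixels
    double fov = 2.0 * std::atan(imageWidth / (2.0 * focalPixels));
    return fov * 180.0 / M_PI;                                // radians -> degrees
}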
I have four points in a 3D space, example:
(0,0,1)
(1,0,1)
(1,0,2)
(0,0,2)
Then I have a 2D position on that square plane:
x = 0.5
y = 0.5
I need to find the 3D point corresponding to that position on the plane. In this example it's easy: (0.5, 0, 1.5), because Y is zero. But imagine that Y was not zero (and not the same at all four corners), so that the plane is leaning in some direction. How would I calculate the point in that case?
I imagine this should be a pretty easy thing to solve, but I can't figure it out. Please answer in programming terms and not in straight math terms, if possible.
Update with image: the gray plane (made out of two triangles) is the real one that actually exists. I create a non-existing plane on top of it; the ABCD corners are exactly the same, but it doesn't slope. What I need to do is project a pixel (the blue one in the example) from the non-existing plane onto the existing plane. It will be in the exact same location, except that it gains a Y value from the sloping plane.
(Couldn't actually make the image appear because I need 10 reputation to show it, wtf?)
What I've been able to work out so far on my own is which of the two triangles in the gray plane to use, and the normal of that triangle. I basically just need to figure out how to project the pixel.
Figured it out mostly thanks to http://gamedeveloperjourney.blogspot.com/2009/04/point-plane-collision-detection.html
It made me realize I had to check the normal a bit more closely; it turns out my plane's grid was being rendered a little differently from the actual coordinates of the vertices. No wonder this was so hard to get right! The pixel was projected correctly but rendered incorrectly.
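For anyone who lands here with the same problem, here is a small sketch of the projection step, assuming you already have the triangle's normal n and one of its vertices p0: keep the pixel's X and Z, and solve the plane equation for Y.

struct Vec3 { float x, y, z; };

// Projects 'point' straight down along Y onto the plane defined by normal 'n'
// and a point on the plane 'p0'. Assumes n.y is not zero (the plane is not vertical).
Vec3 projectAlongY(const Vec3& point, const Vec3& n, const Vec3& p0) {
    // Plane equation: n . (q - p0) = 0. Keep point.x and point.z, solve for q.y.
    float y = p0.y - (n.x * (point.x - p0.x) + n.z * (point.z - p0.z)) / n.y;
    return { point.x, y, point.z };
}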
I am planning to make a program which will have some circular shapes moving inside an oddly shaped polygon.
I can't seem to figure out how to do the collision detection with the edges and have the shapes bounce back correctly.
I am sure this problem has been solved before, but I can't find a nice example.
My main problems are:
Figuring out if the circle has hit the edge of its surrounding polygon.
Once a hit occurs, calculating the normal at the hit point so I can work out the reflection vector.
Can anyone point me in the right direction?
Thanks, Jason
You need to do a circle line intersection test.
To make it faster, you can first check the bounding boxes. For example, if the start and end point of the line are both to the left of the leftmost coordinate of the circle, there can't be an intersection.
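A small sketch of both steps, assuming each polygon edge is a segment from a to b and the circle has centre c, radius r and velocity v: find the closest point on the segment to c; if it is closer than r you have a hit, and the unit vector from that contact point to the centre is the normal used for the reflection v - 2(v·n)n.

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

float dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

// Closest point to p on the segment from a to b.
Vec2 closestPointOnSegment(const Vec2& a, const Vec2& b, const Vec2& p) {
    Vec2 ab { b.x - a.x, b.y - a.y };
    float t = dot({ p.x - a.x, p.y - a.y }, ab) / dot(ab, ab);
    t = std::clamp(t, 0.0f, 1.0f);
    return { a.x + t * ab.x, a.y + t * ab.y };
}

// Returns true if the circle (centre c, radius r) touches the edge a-b.
// On a hit, 'velocity' is reflected about the edge normal.
bool bounceOffEdge(const Vec2& a, const Vec2& b, const Vec2& c, float r, Vec2& velocity) {
    Vec2 closest = closestPointOnSegment(a, b, c);
    Vec2 d { c.x - closest.x, c.y - closest.y };
    float dist = std::sqrt(dot(d, d));
    if (dist > r || dist == 0.0f) return false;   // no hit (or centre exactly on the edge)

    Vec2 n { d.x / dist, d.y / dist };            // normal at the hit point
    float vn = dot(velocity, n);
    velocity = { velocity.x - 2.0f * vn * n.x,
                 velocity.y - 2.0f * vn * n.y };  // reflection: v - 2 (v.n) n
    return true;
}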
Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle with 90-degree corners. However, because it's a photo, perspective is involved and the parallel edges of the face converge towards the horizon.
Given such a rectangle, how do you calculate the 2D angle of the edge vectors of a face that is at right angles to it?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
example http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg
So if you ignore the picture for a moment and concentrate on the lines: is there enough information in one of the face outlines (the interior angles and so on) to know the path of the face on the other side of the corner? What would the formula be?
We know that both faces are rectangles (that is, each corner is a right angle) and that they are at right angles to each other. So how do you determine the vector of the second face using only knowledge of the position of the first?
It's quite easy: you should use basic two-point perspective rules.
First of all you need 2 vanishing points, one to the left and one to the right of your object. They'll both stay on the same horizon line.
alt text http://img62.imageshack.us/img62/9669/perspectiveh.png
After placing the horizon (which sets the eye height) and the vanishing points (their positions change the field of view), you can easily calculate where your lines go. Of course, you need to be able to calculate the line that passes through two points; I think you can do that.
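A small sketch of the "intersect two lines" machinery this relies on, assuming you have pixel coordinates for the corners of the blue face: intersecting its top and bottom edges gives its vanishing point, and the horizon is the line through that point.

struct Pt { double x, y; };

// Intersection of the line through (a1, a2) with the line through (b1, b2).
// Intersecting the top and bottom edges of the marked face gives its vanishing point.
Pt intersectLines(Pt a1, Pt a2, Pt b1, Pt b2) {
    // Each line written as Ax + By + C = 0.
    double A1 = a2.y - a1.y, B1 = a1.x - a2.x, C1 = -(A1 * a1.x + B1 * a1.y);
    double A2 = b2.y - b1.y, B2 = b1.x - b2.x, C2 = -(A2 * b1.x + B2 * b1.y);
    double det = A1 * B2 - A2 * B1;               // 0 means the lines are parallel
    return { (B1 * C2 - B2 * C1) / det, (A2 * C1 - A1 * C2) / det };
}

Once you have the second vanishing point vp on the same horizon, the red edge through a shared corner P simply runs along the direction (vp.x - P.x, vp.y - P.y).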
Honestly, what I'd do is a Hough Transform on the image and determine a way to identify the red lines from the image. To find the red lines, I'd find any lines in the transform that touch your blue ones. The good thing about the transform is that you get angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.
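Not Matlab, but the same idea sketched with OpenCV in C++ (assuming a grayscale cv::Mat named photo loaded elsewhere); each detected line comes back as a (rho, theta) pair, so the angle really is part of the result:

#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Vec2f> detectLines(const cv::Mat& photo) {
    cv::Mat edges;
    cv::Canny(photo, edges, 50, 150);                      // edge map first
    std::vector<cv::Vec2f> lines;                          // each entry is (rho, theta)
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 120);     // 1 px and 1 degree bins
    return lines;
}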