Calculating position for rotated image - apache-flex

I have a couple of images, representing tents, that look like this:
The red and blue parts on each side of the tents are doorways, and several tents can be connected together via these doorways. For example, I want to connect the two blue doorways so that they match up like in this picture:
If the first tent is stationary, around which point do I rotate the second tent and how do I calculate where to place it?
Currently, I have the upper left corner of each doorway as an x and a y value, together with the width and direction (in degrees) of the door. I'm treating the doorways as one dimensional, so they don't have heights. Would another representation suit this better? Perhaps a start point and an end point plus direction?
I'm coding this in Flex/AS3, but I'm more after a way of thinking than code, though code would be appreciated too!

Got this fixed, after many mangled brain cells. What I did first was to move the registration point of each tent to its center and calculate the doorways from there. I also changed the doorways to single-point representations, each set at the center of its doorway.
To get the position of the second tent, I did the following:
1. Rotate the first tent so that the right doorway faces north/upwards.
2. Rotate the second tent so that the right doorway faces south/downwards.
3. Calculate the position of the first tent's doorway using the standard rotation formulas (see the Wikipedia link below).
4. Calculate the position of the second tent by placing its doorway at the same point as the first tent's doorway.
5. Rotate the coordinate of the second tent by as much as the first tent was rotated, but in the opposite direction.
Image from Wikipedia: http://en.wikipedia.org/wiki/Rotation_%28mathematics%29
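The steps above can be sketched numerically. This is a minimal sketch in Python rather than AS3, with function names of my own invention; it rotates a point about a pivot exactly as in the Wikipedia article linked above, and then applies the align-doorways idea in one go:

```python
import math

def rotate_point(x, y, cx, cy, degrees):
    """Rotate (x, y) counter-clockwise by `degrees` about the pivot (cx, cy)."""
    rad = math.radians(degrees)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(rad) - dy * math.sin(rad),
            cy + dx * math.sin(rad) + dy * math.cos(rad))

def place_second_tent(door1, door1_dir, center2, door2, door2_dir):
    """Return (new_center, rotation_degrees) for tent 2 so that its doorway
    lands on tent 1's doorway `door1` and faces the opposite direction.
    `door2` is tent 2's doorway in tent 2's own (unrotated) coordinates."""
    # The doorways must face each other, i.e. differ by 180 degrees.
    rotation = (door1_dir + 180.0 - door2_dir) % 360.0
    # Rotate tent 2's doorway about tent 2's centre by that amount.
    rx, ry = rotate_point(door2[0], door2[1], center2[0], center2[1], rotation)
    # Translate tent 2 so the rotated doorway coincides with door1.
    new_center = (door1[0] - (rx - center2[0]),
                  door1[1] - (ry - center2[1]))
    return new_center, rotation

# Tent 1's doorway at (10, 0) facing east; tent 2 centred at the origin
# with its doorway at (5, 0), also facing east: tent 2 must flip 180
# degrees and end up centred at (15, 0).
center, rot = place_second_tent((10.0, 0.0), 0.0, (0.0, 0.0), (5.0, 0.0), 0.0)
```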


Weird phenomenon with three.js plane

This is the first question I've ever asked on here! Apologies in advance if I've done it wrong somehow.
I have written a program which stacks up spheres in three.js.
Each sphere starts with randomly generated (within certain bounds) x and z co-ordinates, and a y co-ordinate high above the ground plane. It casts rays from each of the sphere's vertices to see how far down it can fall before it intersects with an existing mesh.
For each sphere, I test it in 80 different random xz positions, see where it can fall the furthest, and then 'drop' it into that position.
This is intended to create bubble towers like this one:
However, I have noticed that when I make the bubble radius very small and the base dimensions of the tower large, this happens:
If I turn the number of sampled positions down from 80, this effect is less apparent. For some reason, three.js seems to think that the spheres can fall further at the corners of the base square. The origin is exactly at the center of the base square - perhaps this is relevant.
When I console log all the fall-distances I'm receiving from the raycaster, they are indeed larger the further away you get from the center of the square... but only at the 11th or 12th decimal place.
This is not so much a problem I am trying to solve (I could just round fall distances to the nearest 10th decimal place before I pick the largest one), but something I am very curious about. Does anyone know why this is happening? Has anybody come across something similar to this before?
EDIT:
I edited my code to shift everything so that the origin is no longer at the center of the base square:
So am I correct in thinking... this phenomenon is something to do with distance from the origin, rather than anything relating to the surface onto which the balls are falling?
Indeed, the pattern you are seeing arises exactly because the corners and edges of the bottom of your tower are furthest from the origin where you are dropping the balls. You are creating a right triangle (see image below) whose vertical leg is the line from the drop origin straight down to the point directly below it on the mesh floor (meeting the floor at a right angle - thus the name, right triangle). The hypotenuse is always the longest side of a right triangle, and the further out your rays land from the point just below the origin, the longer the hypotenuse will be, and the more your algorithm will favor that longer distance (no matter how fractional the difference).
Increasing the size of the tower base exaggerates this effect, because the hypotenuse measurements can grow even larger. Reducing the size of the balls also favors the pattern you are seeing: each ball now takes up less space, so the distant measurements to the corners don't fill in as quickly as they would with larger balls, and more balls congregate at the edges before the rest of the space fills in.
Moving your drop origin to one side or another creates longer distances (hypotenuses) to the opposite sides and corners, so the balls fill in those distant locations first.
The reason you see less of an effect when you reduce the sample size from 80 to, say, 20 is that there are simply fewer chances to detect the more distant locations to which a ball could fall (an odds game).
A right triangle:
A back-of-the-napkin sketch:
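The right-triangle argument can be checked with a few lines of Python. This is a toy model, not the three.js code; the drop height and base size are made-up numbers:

```python
import math

DROP_HEIGHT = 100.0  # assumed height of the drop origin above the floor plane

def ray_length(x, z):
    """Straight-line distance from the drop origin (0, DROP_HEIGHT, 0) to the
    floor point (x, 0, z) - i.e. the hypotenuse of the right triangle whose
    vertical leg is DROP_HEIGHT and whose horizontal leg is the offset."""
    horizontal = math.hypot(x, z)
    return math.hypot(horizontal, DROP_HEIGHT)

center_dist = ray_length(0.0, 0.0)     # directly below the origin
corner_dist = ray_length(50.0, 50.0)   # a corner of the base square
```

Any algorithm that picks the largest such distance will therefore favor the corners, however small the difference.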

determine rectangle rotation point

I would like to know how to compute rotation components of a rectangle in space according to four given points in a projection plane.
Hard to depict in a single sentence, thus I explain my needs.
I have a 3D world viewed from a static camera (located in <0,0,0>).
I have a known rectangular shape (a picture, actually) that I want to place in that space.
I can only define points (up to four) in a spherical/rectangular reference frame (the camera looking at <0°,0°> (spherical) or <0,0,1000> (rectangular)).
I consider the given polygon to be my rectangle shape rotated by (rX,rY,rZ). Three points are supposed to be enough; four points may over-constrain the problem. I'm not sure for now.
I want to determine rX, rY and rZ, the rectangle rotation about its center.
--- My first attempt at solving this constraint problem was to fix the first point: given its spherical coordinates, I "project" this point onto a camera-facing plane at z=1000. Quite easy; this gives me a point.
Then the second point is considered to lie on a ray from <0,0,0>, which leaves an infinity of solutions; but I fix this by knowing the width (w) and height (h) of my rectangle: I then get two solutions for my second point, one "in front of" the first point and the other "far away"... I now have an edge of my rectangle. Two, in fact.
And from there, I don't know what to do. If in the end I have my four points, I don't have a clue about how to calculate the rotation equivalency...
It's hard to be lost in Mathematics...
To get an idea of the goal of all this: I make photospheres and I want to "insert" in them images. For instance, I got on my photo a TV screen, and I want to place a picture in the screen. I know my screen size (or I can guess it), I know the size of the image I want to place in (actually, it has the same aspect ratio), and I know the four screen corner positions in my space (spherical or euclidian). My software allow my to place an image in the scene and to rotate it as I want. I can zoom it (to give the feeling of depth)... I then can do all this manually, but it is a long try-fail process and never exact. I would like then to be able to type in the screen corner positions, and get the final image place and rotation attributes in a click...
The question in pictures:
Images presenting steps of the problem
Note that on the page I present actual images of my app: I had to manually rotate and scale the picture to make it fit the screen, but it is not photoshopped. The parameters found are:
Scale: 0.86362
rX = 18.9375
rY = -12.5875
rZ = -0.105881
center position: <-9.55, 18.76, 1000>
Note: rotation is not enough to set the picture up: we also need scale and translation. I assume the scale can be found once a first edge is fixed (the first two points determine two candidate solutions as initial constraints, and since I then know the edge length and the picture's width and height, I can deduce the scale). But the software kindly allows me to modify the picture's width and height, so the real constraint is just to make sure the four points describe a rectangle in space, which is simple to check with vectors. Here, the problem seems to be placing the fourth point as a valid rectangle corner, and then deducing the rotation from that rectangle. As for the translation, it is the center (diagonal cross) of the points once they are fixed.
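The first projection step described above (fixing a point on the camera-facing plane at z=1000) can be sketched as follows. This is Python with an assumed angle convention (theta horizontal, phi vertical, <0°,0°> looking straight down +z), not the asker's actual code:

```python
import math

def project_to_plane(theta_deg, phi_deg, z_plane=1000.0):
    """Intersect the camera ray given by spherical angles (theta_deg
    horizontal, phi_deg vertical; <0,0> looks down +z) with the
    camera-facing plane z = z_plane. The camera sits at the origin."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    # Unit direction vector of the ray from the camera.
    dx = math.sin(theta) * math.cos(phi)
    dy = math.sin(phi)
    dz = math.cos(theta) * math.cos(phi)
    # Stretch the ray until it reaches the plane z = z_plane.
    t = z_plane / dz
    return (dx * t, dy * t, z_plane)
```

Looking straight ahead (`project_to_plane(0, 0)`) gives `(0, 0, 1000)`; every other direction lands further out on the same plane.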

Collision detection with oddly shaped polygons

I am planning to make a program which will have some circular shapes moving inside an oddly shaped polygon.
I can't seem to figure out how to do the collision detection with the edges and have the shapes bounce back correctly.
I am sure this problem has been solved before, but I can't find a nice example.
My main problems are:
Figuring out if the circle has hit the edge of its surrounding polygon.
Once a hit occurs calculate the normal of the hit point to figure out the reflection vector.
Can anyone point me in the right direction?
Thanks, Jason
You need to do a circle line intersection test.
To make it faster, you can first check the bounding boxes. For example, if the start and end point of the line are both to the left of the leftmost coordinate of the circle, there can't be an intersection.
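Both steps (the hit test and the normal for the reflection) can be sketched in a few lines. This is a minimal Python sketch, treating each polygon edge as a segment; all names are my own:

```python
import math

def closest_point_on_segment(px, py, ax, ay, bx, by):
    """Point on segment A-B closest to (px, py)."""
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))  # clamp to stay on the segment
    return ax + t * abx, ay + t * aby

def circle_hits_segment(cx, cy, r, ax, ay, bx, by):
    """True if the circle centred at (cx, cy) with radius r touches A-B."""
    qx, qy = closest_point_on_segment(cx, cy, ax, ay, bx, by)
    return math.hypot(cx - qx, cy - qy) <= r

def reflect(vx, vy, nx, ny):
    """Reflect velocity (vx, vy) about the unit normal (nx, ny):
    v' = v - 2 (v . n) n."""
    d = vx * nx + vy * ny
    return vx - 2.0 * d * nx, vy - 2.0 * d * ny
```

The hit point from `closest_point_on_segment` also gives you the normal for free: it is the normalized vector from that point to the circle's centre.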

XNA Track rotated pixel positions

I'm making a game in XNA where a tank has to move over a landscape.
I need to be able to find the bottom of the tank when it is rotated, so I can make it move up and down as the player drives over the landscape.
For example, if I have a sprite with its top-left corner at (400, 300) and I rotate it by 45 degrees around its center, how do I find the new locations of the bottom track?
Thanks
Thanks for the reply, Langaurd.
I have looked at the linked article before, but didn't understand how it works.
I'm making a 2D side-scrolling game. As the player moves left and right, the tank has to tilt to follow the contour of the terrain.
I have two vectors: one stores the back bottom of the track and one stores the front bottom of the track.
I have tried
Vector2 backBottom = new Vector2(5, 25);
Vector2 frontBottom = new Vector2(5, 32);
backBottom = Vector2.Transform(backBottom+position, Matrix.CreateRotationZ(angle));
frontBottom = Vector2.Transform(frontBottom+position, Matrix.CreateRotationZ(angle));
but that gave me some strange values
Not 100% clear on exactly what it is you are trying to do. You mention a sprite, which is 2D, but your description is in 3D terms. If you are doing a 2D side view, then you can't tell the tank is rotated 45 degrees. If you are doing a 2D top-down view, then you shouldn't really care where the bottom of the tread is.
In any case, two suggestions. If you are die-hard on tracking rotated pixels, then read this article: 2D collision with Transformed Pixels from the creators.xna.com site. However, I would recommend tracking vectors. Use two vectors to represent the track locations, and then use Vector2.Transform to rotate them with the tank. You could then use the vectors to check whether the tracks have hit something, what angle they are at, etc.
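The Vector2.Transform suggestion can be sketched in Python (a stand-in for the XNA calls, not actual XNA code). Note the order of operations: the snippet in the question adds `position` before rotating, which spins the whole world position around the screen origin and produces exactly the "strange values" described; rotate the local offset first, then translate:

```python
import math

def rotate_offset(ox, oy, angle):
    """Rotate a local offset by `angle` radians - what
    Vector2.Transform(offset, Matrix.CreateRotationZ(angle)) does."""
    c, s = math.cos(angle), math.sin(angle)
    return ox * c - oy * s, ox * s + oy * c

def track_world_position(position, origin, local_point, angle):
    """World position of `local_point` (sprite coordinates) for a sprite
    rotated by `angle` about `origin` and drawn at `position`."""
    # 1. Express the point relative to the rotation centre.
    ox, oy = local_point[0] - origin[0], local_point[1] - origin[1]
    # 2. Rotate that local offset.
    rx, ry = rotate_offset(ox, oy, angle)
    # 3. Only now translate into world space.
    return position[0] + rx, position[1] + ry
```

With the tank at (400, 300) and no rotation, the back-bottom point (5, 25) lands at (405, 325); at 90 degrees it swings to (375, 305), as expected.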
You need to define a clearer orientation for your sprite. I would use a Front and an Up vector for the tank, and rotate both of them together by the angle your tank is tilted, depending on the terrain. Let's say these vectors sit at the center of your sprite, and your sprite is rotated exactly like your Up and Front vectors. Now just multiply your half-height by the negated Up vector and you have your local bottom center; add your tank position and you have the world position of your bottom track.
Important: don't mix up a point, which can be expressed by a vector, with a true direction vector, which has no position and only indicates a direction. For directions, it's important to normalize the vector.
Sorry for the vague answer, but your question is a little bit vague too.

Calculating 2D angles for 3D objects in perspective

Imagine a photo, with the face of a building marked out.
It's given that the face of the building is a rectangle with 90-degree corners. However, because it's a photo, perspective is involved and the parallel edges of the face converge toward the horizon.
Given such a rectangle, how do you calculate the 2D angles of the edge vectors of a second face that is at right angles to it?
In the image below, the blue is the face marked on the photo, and I'm wondering how to calculate the 2D vector of the red lines of the other face:
example http://img689.imageshack.us/img689/2060/leslievillestarbuckscor.jpg
So if you ignore the picture for a moment, and concentrate on the lines, is there enough information in one of the face outlines - the interior angles and such - to know the path of the face on the other side of the corner? What would the formula be?
We know that both are rectangles - that is that each corner is a right angle - and that they are at right angles to each other. So how do you determine the vector of the second face using only knowledge of the position of the first?
It's quite easy: you should use basic two-point perspective rules.
First of all you need two vanishing points, one to the left and one to the right of your object. They both sit on the same horizon line.
alt text http://img62.imageshack.us/img62/9669/perspectiveh.png
After placing the horizon (which sets the eye height) and the vanishing points (their positions determine the field of view), you can easily calculate where your lines go (of course, you need to be able to compute the line that passes through two points; I think you can do that).
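The two line operations this answer relies on (the line through two points, and the intersection of two lines, which is how a vanishing point is found from two edges) can be sketched in Python:

```python
def line_through(p, q):
    """Coefficients (a, b, c) of the line a*x + b*y = c through p and q."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    c = a * p[0] + b * p[1]
    return a, b, c

def intersect(l1, l2):
    """Intersection point of two lines in (a, b, c) form, or None if
    they are (numerically) parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

Extend two "horizontal" edges of the blue face with `line_through` and feed them to `intersect` to get one vanishing point; the red lines are then the lines from the corners of the shared vertical edge through the other vanishing point.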
Honestly, what I'd do is a Hough Transform on the image and determine a way to identify the red lines from the image. To find the red lines, I'd find any lines in the transform that touch your blue ones. The good thing about the transform is that you get angle information for free.
Since you know that you're looking at lines, you could also do a Radon Transform and look for peaks at particular angles; it's essentially the same thing.
Matlab has some nice functionality for this kind of work.
