How can I seamlessly wrap map tiles around cylindrically? - math

I'm creating a game that takes place on a map, and the player should be able to scroll around it. I'm using real-world data from NASA as a 5400 by 2700 pixel image split into 4 smaller ones, each corresponding to a quadrant of the map:
How I split up the image:
The player views the world through a camera with a 4:3 aspect ratio, which can be moved around. Its width and height are described by two variables x and y, currently 480 and 360 respectively.
Model of the camera:
In practice, the camera is "fixed" and instead the tiles move. The camera's center is described as two variables: xcam and ycam.
Currently, the 4 tiles move and hide flawlessly. The problem arises when the camera passes over the "edge" at 180 degrees longitude. What should happen is that the tiles on the other side should appear and move as if the world were a cylinder, without any noticeable gaps. I update xcam with this equation:
xcam = ((xcam + (2700 - x)) mod (5400 - x)) - (2700 - x)
And the tiles' centers update according to these equations (I will focus only on tiles 1 and 2 for simplicity):
tile1_x = xcam - 1350
tile1_y = ycam + 650
tile2_x = xcam + 1350
tile2_y = ycam + 650
Using this, whenever the camera moves past the leftmost edge of tile 1, it "skips": instead of tile 1 remaining visible alongside tile 2, the view jumps so that tile 2's rightmost edge sits at the camera's rightmost edge.
Here's what happens in reality: (image), and here's what I want to happen: (image).
So, is there any way to update the equations I'm using (or even completely redo everything) so that I can get smooth wrapping?

I think you are unnecessarily hard-coding the number of tiles and their sizes, which binds your code to this particular data set. In my opinion it would be better to store them in variables, so they can be modified in one place if the data ever changes. This also lets us write more flexible code.
So, let's assume we have variables:
// logical size of the whole Earth's map,
// currently 2 and 2
int ncols, nrows;
// single tile's size, currently 2700 and 1350
int wtile, htile;
// the whole Earth map's size
// always ncols*wtile and nrows*htile
int wmap, hmap;
Tile tiles[nrows][ncols];
// viewport's center in map coordinates
int xcam, ycam;
// viewport's size in map coordinates, currently 480 and 360
int wcam, hcam;
Whenever we update the player's position, we need to make sure the position falls within the allowed range. But first we need to establish the coordinate system in order to define that range. For example, if x values span from 0 to wmap-1, increasing rightwards (towards East), and y values span from 0 to hmap-1, increasing downwards (towards South), then:
// player's displacement
int dx, dy;
xcam = (xcam + dx) mod wmap
ycam = (ycam + dy) mod hmap
assures the camera position is always within the map. (This assumes the mod operator always returns a non-negative value. If it works like the C language % operator, which returns a negative result for a negative dividend, add the divisor first to keep the first argument non-negative: xcam = (xcam + dx + wmap) mod wmap, etc.)
If you'd rather have xcam,ycam = 0,0 at the center of the map (that is, at the Greenwich meridian and the equator), then the allowed range would be -wmap/2 through wmap/2-1 for x and -hmap/2 through hmap/2-1 for y. Then:
xcam = (xcam + dx + wmap/2) mod wmap - wmap/2
ycam = (ycam + dy + hmap/2) mod hmap - hmap/2
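As a quick sketch of these two update formulas (Python here; note that Python's % operator already returns a non-negative result for a positive divisor, so no extra correction is needed — the function name and the default sizes are just the values from this question):

```python
def wrap_camera(xcam, ycam, dx, dy, wmap=5400, hmap=2700):
    """Move the camera by (dx, dy) in a coordinate system centered on
    (0, 0), wrapping into [-wmap/2, wmap/2) and [-hmap/2, hmap/2)."""
    xcam = (xcam + dx + wmap // 2) % wmap - wmap // 2
    ycam = (ycam + dy + hmap // 2) % hmap - hmap // 2
    return xcam, ycam
```

For example, moving 20 px east from x = 2690 wraps the camera around to the western edge instead of running off the map.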
More generally, let x0, y0 denote the 'zero' position of camera relative to the upper-left corner of the map. Then we can update the camera position by transforming it to the map's coordinates, then shifting and wrapping, and finally transforming back to camera's coordinates:
xmap = xcam + x0
ymap = ycam + y0
xmap = (xmap + dx) mod wmap
ymap = (ymap + dy) mod hmap
xcam = xmap - x0
ycam = ymap - y0
or, more compactly:
xcam = (xcam + dx + x0) mod wmap - x0
ycam = (ycam + dy + y0) mod hmap - y0
Now, when we know the position of the viewport (camera) relative to the map, we need to fill it with the map tiles. And a new decision must be made here.
When we travel from Anchorage, Alaska (western hemisphere) to the North, we eventually reach the North Pole, and then we find ourselves in the eastern hemisphere, heading South. If we proceed in the same direction, we get to Kuusamo, Finland, then Saint Petersburg, Russia, then Kiev, Ukraine... But that is travel to the South! We usually do not describe it as a continuation of the initial northward route. Consequently, we do not show the part 'past the pole' as an upside-down extension of the map. Hence the map should never show tiles above row 0 or below row nrows-1.
On the other hand, when we travel along circles of latitude, we smoothly cross the 0 and 180 meridians and switch between the eastern and western hemisphere. So if the camera view covers area on both sides of the left or right edge of the map, we need to continue filling the view with tiles from the other end of the tiles array. If we use a map scaled down, so that it is smaller than the viewport, we may even need to iterate that more than once!
The left edge of a camera view corresponds to the 'longitude' of xleft = xcam - wcam/2 and the right one to xrght = xcam + wcam/2. So we can step across the viewport by the tile's width to find out appropriate columns and show them:
x = xleft
repeat
show a column at x
x = x + wtile
until x >= xrght
The 'show a column at x' part requires finding the appropriate column, then iterating down the column to show the corresponding tiles. Let's find out which tiles fit the camera view:
ytop = ycam - hcam/2
ybot = ycam + hcam/2
y=ytop
repeat
show a tile at x,y
y = y + htile
until y >= ybot
To show a tile we need to locate the appropriate tile in the array and then draw it at the appropriate position in the camera view.
However, we treat column number differently from the row number: columns wrap while rows do not:
row = y/htile
if (0 <= row) and (row < nrows) then
col = (x/wtile) mod ncols
xtile = x - (x mod wtile)
ytile = y - (y mod htile)
display tile[row][col] at xtile,ytile
endif
Of course xtile and ytile are our map-scale longitude and latitude, so the 'display tile at' routine must transform them to the camera view coordinates by subtracting the camera position from them:
xinview = xtile - xcam
yinview = ytile - ycam
and then apply the resulting values relative to the camera view's center at the displaying device (screen).
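Putting the two loops and the row/column logic together, a minimal sketch (assuming, as in the pseudocode above, that xcam and ycam are map coordinates measured from the map's top-left corner; the function name is mine):

```python
import math

def visible_tiles(xcam, ycam, wcam, hcam, wtile, htile, ncols, nrows):
    """Yield (row, col, xinview, yinview) for every tile overlapping the
    camera view; columns wrap around the map's edges, rows do not."""
    xleft, xrght = xcam - wcam // 2, xcam + wcam // 2
    ytop, ybot = ycam - hcam // 2, ycam + hcam // 2
    x = math.floor(xleft / wtile) * wtile   # snap to the tile grid
    while x < xrght:
        y = math.floor(ytop / htile) * htile
        while y < ybot:
            row = y // htile
            if 0 <= row < nrows:            # rows never wrap (no upside-down map)
                col = (x // wtile) % ncols  # columns wrap cylindrically
                yield row, col, x - xcam, y - ycam
            y += htile
        x += wtile
```

With the data from the question (2x2 tiles of 2700x1350, a 480x360 camera), placing the camera near the right edge of the map makes the loop emit the rightmost column followed by the wrapped-around leftmost one.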
Another level of complication will appear if you want to implement zooming the view in and out, that is, dynamic scaling of the map, but I'm sure you'll work out yourself which calculations need the zoom factor applied for correct results. :)

Related

How to efficiently compute the future position of a point that will move in a box and bounce on its walls (2D)?

I have a simple maths/physics problem here: in a Cartesian coordinate system, I have a point that moves in time with a known velocity. The point is inside a box, and bounces orthogonally off its walls.
Here is a quick example I did on paint:
What we know: The red point position, and its velocity which is defined by an angle θ and a speed. Of course we know the dimensions of the green box.
In the example, I've drawn its approximate trajectory in yellow; let's say that after a determined, known period of time, the red point ends up at the blue point. What would be the most efficient way to compute the blue point's position?
I've thought about computing every "bounce point" with trigonometry and vector projection, but I feel like it's a waste of resources, because trigonometric functions are usually very processor-hungry. I'll have more than a thousand points to compute like that, so I really need a more efficient way to do it.
If anyone has any idea, I'd be very grateful.
Apart from programming considerations, this has an interesting solution from a geometric point of view: you can find the position of the point at a specific time T without considering its trajectory during 0 < t < T.
For a moment, forget the size and the boundaries of the box, and assume the point can move along a straight line forever. Then the point has constant velocity components vx = v*cos(θ), vy = v*sin(θ), and at time T its virtual position will be x' = x0 + vx * T, y' = y0 + vy * T.
Now you need to map the virtual position (x',y') into the actual position (x,y). See image below
You can repeatedly reflect the virtual point w.r.t. the borders until it comes back into the reference (initial) box; that is the actual point. Now the question is how to do this math: how do we find (x,y) knowing (x',y')?
Denote by a and b the size of the box along x and y respectively. Then nx = floor(x'/a) and ny = floor(y'/b) indicate how many boxes away the virtual point is from the reference box, and dx = x'-nx*a and dy = y'-ny*b give the relative position of the virtual point inside its virtual box.
Now you can find the true position (x,y): if nx is even, then x = dx, else x = a-dx; similarly, if ny is even, then y = dy, else y = b-dy. In other words, an even number of reflections along an axis puts the true point and the virtual point at the same relative position, while an odd number makes them mirror images of each other.
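The whole "unfolding" idea fits in a few lines; here is a sketch in Python (the function name is mine):

```python
import math

def bounce_position(x0, y0, vx, vy, a, b, T):
    """Position at time T of a point bouncing inside an a-by-b box,
    found by "unfolding" the reflections rather than simulating them."""
    xv, yv = x0 + vx * T, y0 + vy * T                # virtual straight-line position
    nx, ny = math.floor(xv / a), math.floor(yv / b)  # which virtual box copy
    dx, dy = xv - nx * a, yv - ny * b                # position inside that copy
    x = dx if nx % 2 == 0 else a - dx                # even reflections: unchanged
    y = dy if ny % 2 == 0 else b - dy                # odd reflections: mirrored
    return x, y
```

For instance, a point starting at x = 0 in a 10-wide box and moving right at 3 units/s for 5 s travels 15 units: it bounces off the right wall and ends up back at x = 5, which is what the formula returns without ever computing the bounce point.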
You don't need to use trigonometric functions all the time. Instead, get the normalized direction vector once, as (dx, dy) = (cos(θ), sin(θ)).
After bouncing off a vertical wall, the x-component changes its sign: dx = -dx; after bouncing off a horizontal wall, the y-component changes its sign: dy = -dy. You can see the calculations are blazingly simple.
If you (for some reason) prefer to use angles, use the angle transformations from here (for a ball with non-zero radius):
if ((ball.x + ball.radius) >= window.width || (ball.x - ball.radius) <= 0)
    ball.theta = M_PI - ball.theta;
else if ((ball.y + ball.radius) >= window.height || (ball.y - ball.radius) <= 0)
    ball.theta = -ball.theta;
To get point of bouncing:
Starting point (X0, Y0)
Ray angle Theta, c = Cos(Theta), s = Sin(Theta);
Rectangle coordinates: bottom left (X1,Y1), top right (X2,Y2)
if c >= 0 then //moving right
XX = X2
else
XX = X1
if s >= 0 then //moving up
YY = Y2
else
YY = Y1
if c = 0 then //vertical ray
return Intersection = (X0, YY)
if s = 0 then //horizontal ray
return Intersection = (XX, Y0)
tx = (XX - X0) / c //parameter when vertical edge is met
ty = (YY - Y0) / s //parameter when horizontal edge is met
if tx <= ty then //vertical first
return Intersection = (XX, Y0 + tx * s)
else //horizontal first
return Intersection = (X0 + ty * c, YY)
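A direct transcription of this pseudocode into Python might look as follows (the function name is mine; note that the exact comparisons c == 0 and s == 0 only trigger for exactly axis-aligned rays — for angles like pi/2 the cosine is merely tiny, and the tx/ty comparison still picks the right wall):

```python
import math

def ray_box_exit(x0, y0, theta, x1, y1, x2, y2):
    """First wall hit by a ray from (x0, y0) at angle theta inside the
    axis-aligned box with bottom-left (x1, y1) and top-right (x2, y2)."""
    c, s = math.cos(theta), math.sin(theta)
    xx = x2 if c >= 0 else x1   # candidate vertical wall (right/left)
    yy = y2 if s >= 0 else y1   # candidate horizontal wall (top/bottom)
    if c == 0:                  # straight up/down
        return x0, yy
    if s == 0:                  # straight left/right
        return xx, y0
    tx = (xx - x0) / c          # ray parameter at the vertical wall
    ty = (yy - y0) / s          # ray parameter at the horizontal wall
    if tx <= ty:                # vertical wall is met first
        return xx, y0 + tx * s
    return x0 + ty * c, yy      # horizontal wall is met first
```

For example, a ray going straight right from the box center hits the middle of the right wall, and a 45-degree ray from the center exits at the top-right corner.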

Calculate new center of a map after zoom-event

I am trying to figure out how to zoom in to a map, meaning I zoom to the position of the mouse and back again.
For this I have to recalculate the center of the map for every zoom iteration.
I use these formulas for zooming in and it works fine
// amount is 1.0 when zooming in and -1.0 when zooming out
newCenterX = (eventPoint.getX() - (mapWidth / 2)) * resolution + center.getX();
newCenterY = ((eventPoint.getY() - (mapHeight / 2)) * resolution / (-amount)) + center.getY();
But unfortunately I can't figure out how to zoom out, I just kinda can't get my head around it, so a little help from some math-enthusiast would be greatly appreciated. Thanks.
Your question is not quite clear. I assume that your question is: if I change zoom on the map, where do I have to move the location of the center of the image on the Earth such that the mouse still points to the same position on the Earth.
I'm not aware of a closed formula for anything reasonably close to the real case (i.e. taking into account the curvature of the Earth). For a close enough zoom, where you can approximate the surface of the Earth by a flat local rectangle, the question becomes more or less the question of zooming on images.
Let's introduce some notation. Xr - real X position on the original image (Earth) in pixels or whatever. Xi - position on the scaled image in pixels. W - width of the zoomed view in pixels. Z - zoom level. Any of those notations might be additionally modified with index and/or suffix c meaning "center". Example: Xrc1 - X position of the center of the image at zoom level #1 on the original image (Earth).
If we want to calculate Xi from Xr, the formula is:
(Xi - Xic)*Z = (Xr - Xrc)
And obviously Xic is always W/2.
Now consider that we have a zoom level Z1, the mouse points to Xi, and the user scales to some other zoom level Z2. We want to find where to move Xrc1 such that for our point Xr on the real image (Earth) the projections are the same, i.e. Xi1 = Xi2 = Xi. So
(Xi - W/2)*Z1 = (Xr - Xrc1)
(Xi - W/2)*Z2 = (Xr - Xrc2)
To solve this for Xrc2, let's multiply the first equation by Z2 and the second by Z1:
(Xr - Xrc1)*Z2 = (Xi - W/2)*Z1*Z2 = (Xr - Xrc2)*Z1
So
Xrc2 = (Xrc1*Z2 + Xr*(Z1-Z2)) / Z1
Or, if we use K to denote the ratio of scales Z2/Z1:
Xrc2 = Xrc1*K + Xr*(1-K)
Sanity checks:
If Xr = Xrc1 i.e. mouse is pointing to the center, Xrc2 = Xrc1
If the mouse points to the corner and K is 0.5 (zoom twice as close), the distance from the center to Xr is halved, as expected.
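The final formula is one line of code; a sketch in Python (the names are mine, the symbols match the derivation above):

```python
def recenter_on_zoom(xrc1, xr, z1, z2):
    """New center coordinate after zooming from level z1 to z2 while
    keeping the map point xr fixed under the mouse (same formula for y)."""
    k = z2 / z1                     # K, the ratio of scales
    return xrc1 * k + xr * (1 - k)
```

Both sanity checks hold: a mouse at the center leaves the center in place, and zooming twice as close moves the center halfway toward the point under the mouse.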
The problem was that I used the "new resolution" for the calculation, after it had already changed due to the zoom event. Instead I needed the "old resolution", which is half the new resolution.
Also, I had to account for the x-offset from the center, which is reversed when zooming out.
So the revised formulas for zooming are
Double resolutionFactor = zoomLevels > 0 ? resolution : (resolution / 2);
newCenterX = ((eventX - (mapWidth / 2)) * zoomLevels) * resolutionFactor + oldCenterX;
newCenterY = ((eventY - (mapHeight / 2)) * resolutionFactor / (-zoomLevels)) + oldCenterY;

Using trigonometry to calculate angle of movement using mouse position

I'm building a game in Lua for fun (even if you don't know Lua, you can probably help me with this as it applies to any programming language). My problem is I have an x and y variable defined in a table for the player:
player = {}
player.x = 10
player.y = 10
player.velocity = 50
My goal is to have the player move towards the mouse's position on the screen. I currently have it set up to increase/decrease the x and y values on every update depending on the mouse position. My code looks something like this:
function update(delta_time) -- delta_time is time in milliseconds since last update
  if mouse.x > screen.width and mouse.y < screen.height then
    player.x = player.x + player.velocity * delta_time
    player.y = player.y + player.velocity * delta_time
  end
end
That was just one example of a direction I would define. My problem is that I don't want gigantic blocks of flow control checking which quadrant the mouse's x and y position fall in and adjusting the player's x and y accordingly. I would rather have fluid 360-degree movement that carries the player towards the angle of the mouse from the center.
Another problem: when I move the player to the right of the screen, I simply increase the x value, but when I move the player to the northeast, I increase the x AND y values. This means the player goes 2 TIMES faster depending on how fine the angle of movement is. When I make angles like north-east-east and north-west-west, the player goes 3 TIMES faster, because I increase/decrease y by 2 and x by 1. I have no idea how to fix this. I am really good with math and trig, but I am bad at applying it to my game. All I need is someone to switch the lights on for me and I will understand. Thank you for your time if you actually read all this.
Compute a vector from the player position to the mouse position. Normalize this vector (i.e., divide it by its length), then multiply by player.velocity, and then add it to player.x and player.y. This way, the speed is constant, and you get smooth movement in all directions.
-- define the difference vector
vec = {}
vec.x = mouse.x - player.x
vec.y = mouse.y - player.y
-- compute its length, to normalize
vec_len = math.sqrt(vec.x ^ 2 + vec.y ^ 2)
-- normalize
vec.x = vec.x / vec_len
vec.y = vec.y / vec_len
-- move the player
player.x = player.x + vec.x * player.velocity * delta_time
player.y = player.y + vec.y * player.velocity * delta_time
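The same idea in Python (the question is in Lua, but the math is identical), with two guards worth adding in practice: one for the degenerate zero-length vector when the player is already at the target, and one against overshooting it on a large time step — both names are mine:

```python
import math

def step_toward(px, py, tx, ty, velocity, delta_time):
    """Move (px, py) toward (tx, ty) at a constant speed regardless
    of direction."""
    dx, dy = tx - px, ty - py
    dist = math.hypot(dx, dy)       # length of the difference vector
    if dist == 0:
        return px, py               # already at the target: avoid 0/0
    step = velocity * delta_time
    if step >= dist:
        return tx, ty               # don't overshoot the target
    return px + dx / dist * step, py + dy / dist * step
```

Diagonal targets now move the player at exactly the same speed as horizontal ones, which fixes the "2 TIMES faster" problem from the question.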

Estimated position of vector after time, angle and speed

How to calculate 3D vector position after some time of movement at given angle and speed?
I have these variables available: current position, horizontal angle, vertical angle and speed.
I want to calculate position in the future.
Speed is defined as:
float distMade = this->Position().GetDistanceTo(lastPosition);
float speed = (distMade / timeFromLastCheck) * 1000; // result per sec
// checking every 100ms
Vertical angle coordinate system:
Facing 100% down -PI/2 (-1.57)
Facing 100% up PI/2 (1.57)
Horizontal angle:
Radian system, facing north = PI/2
Facing west = PI
Position 3d vector: x, y, z where z is height level.
It looks like you are trying to predict a future position based on current position and previous position, and know the duration between them.
In this case, it seems like you don't need the angular directions at all. Just keep your "speed" as a vector.
speed = (position() - lastPosition) / (time-last_time);
future_position = position()+(future_time-time)*speed;
If your vector objects don't have operators overloaded, look for some that do or perform the calculation on each x,y,z component independently.
This, of course, does not take any acceleration into account; it just predicts based on current velocity. You could smooth it out by averaging over the last 5-10 speeds to get a slightly less jittery prediction. If you want to account for acceleration, you'll have to track last_speed the same way you are currently tracking last_position; acceleration is just speed - last_speed, and you'd likely want to average that as well.
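A minimal sketch of this vector-based prediction, using plain tuples for the positions (Python; the function name is mine — if your vector type overloads the arithmetic operators, the body collapses to the two lines shown above):

```python
def predict_position(position, last_position, t, last_t, future_t):
    """Linearly extrapolate a 3D position from two timed samples,
    keeping "speed" as a per-axis vector instead of angles."""
    speed = tuple((p - q) / (t - last_t)            # per-axis velocity
                  for p, q in zip(position, last_position))
    return tuple(p + (future_t - t) * v for p, v in zip(position, speed))
```

For example, a point that moved from x=0 to x=10 in one second is predicted at x=20 one second later.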
In that case your speed is a Cartesian velocity, so:
get point positions in cartesian space in different time
P0=(x0,y0,z0); - last position [units]
P1=(x1,y1,z1); - actual position [units]
dt=0.1; - time difference between obtaining P0,P1
compute the new position at time t passed since obtaining P1:
P(t)=P1+(P1-P0)*t/dt
expand:
x=x1+(x1-x0)*t/dt
y=y1+(y1-y0)*t/dt
z=z1+(z1-z0)*t/dt
if you need angh, angv, dist (and the origin of your coordinate system is (0,0,0)), then use this, or modify it for your coordinate system:
dist = |P|=sqrt(x*x+y*y+z*z)
angv=asin(z/dist)
angh=atan2(y,x)
this is for: Z axis = UP, Y axis = North, X axis = East
if the origin is not (0,0,0), just subtract it from P before the conversion
If your horizontal angle is azimuthal angle, and vertical angle is elevation,
then
X = X0 + V * t * Cos(Azimuth) * Cos(Elevation)
Y = Y0 + V * t * Sin(Azimuth) * Cos(Elevation)
Z = Z0 + V * t * Sin(Elevation)
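As a quick sketch of these three formulas (Python; assuming, as the question states, that z is the height axis and the angles are the azimuth and elevation — the function name is mine):

```python
import math

def advance(x0, y0, z0, v, t, azimuth, elevation):
    """Position after time t at speed v along the given azimuth
    and elevation angles (z is the height axis)."""
    x = x0 + v * t * math.cos(azimuth) * math.cos(elevation)
    y = y0 + v * t * math.sin(azimuth) * math.cos(elevation)
    z = z0 + v * t * math.sin(elevation)
    return x, y, z
```

A sanity check: with zero elevation the motion stays in the horizontal plane, and with elevation pi/2 (facing 100% up) all of the speed goes into z.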

Angular displacement on canvas

I have a square (100px x 100px) with origin at 0,0 (upper left).
When I move the mouse, let's say 10 pixels in x and y, I move the origin according to the displacement, and the origin becomes 10,10. Simple; works fine!
When I rotate the square, my rotation function rotates it fine, but after the square is rotated, let's say 10 degrees, the origin point should move according to the rotation. And now I have no idea what formula I have to apply to make that happen!
I checked Wikipedia, but I think it's too complicated:
http://en.wikipedia.org/wiki/Angular_displacement
and
http://en.wikipedia.org/wiki/Cosine#Sine.2C_cosine.2C_and_tangent
Example: after a 90 deg rotation to the left, the origin is now at the lower left, and when I move the mouse to the right, the picture goes UP!!!!
If I understand your problem correctly, you are applying an offset to the rectangle points based on your mouse position, then rotating the resulting points about the origin.
Instead, try applying your mouse offset after you do your rotation, not before.
Suppose you have a figure and you want to rotate it by angle alpha and translate it so that point (cx, cy) of the figure gets to point (sx, sy) after the transformation.
The transformation is
transformed_x = x*cos(alpha) - y*sin(alpha) + offset_x
transformed_y = x*sin(alpha) + y*cos(alpha) + offset_y
to compute desired offset_x and offset_y values you just need to put your requirement about (cx, cy) and (sx, sy) into the above equations:
sx = cx*cos(alpha) - cy*sin(alpha) + offset_x
sy = cx*sin(alpha) + cy*cos(alpha) + offset_y
and now you can easily extract the offset values from that:
offset_x = sx - cx*cos(alpha) + cy*sin(alpha)
offset_y = sy - cx*sin(alpha) - cy*cos(alpha)
To set up canvas transform for it you need just to call
context.translate(sx - cx*Math.cos(alpha) + cy*Math.sin(alpha),
sy - cx*Math.sin(alpha) - cy*Math.cos(alpha));
context.rotate(alpha);
You can see a little demo of this formula following this link.
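The offset computation can also be checked numerically; by construction, the chosen figure point (cx, cy) lands exactly on (sx, sy) after the transform (a Python sketch, names are mine):

```python
import math

def rotate_with_anchor(points, alpha, cx, cy, sx, sy):
    """Rotate points by alpha about the origin, then translate so that
    figure point (cx, cy) ends up exactly at (sx, sy)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    offset_x = sx - cx * ca + cy * sa   # same expressions as above
    offset_y = sy - cx * sa - cy * ca
    return [(x * ca - y * sa + offset_x, x * sa + y * ca + offset_y)
            for x, y in points]
```

For a 100x100 square rotated 90 degrees about its center (50, 50), pinning that center to any screen point keeps it there regardless of the angle, which is exactly the behavior the question asks for.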
