Calculate new center of a map after zoom-event

I am trying to figure out how to zoom a map in and out towards the position of the mouse.
For this I have to recalculate the center of the map for every zoom iteration.
I use these formulas for zooming in, and they work fine:
// amount is 1.0 when zooming in and -1.0 when zooming out
newCenterX = (eventPoint.getX() - (mapWidth / 2)) * resolution + center.getX();
newCenterY = ((eventPoint.getY() - (mapHeight / 2)) * resolution / (-amount)) + center.getY();
But unfortunately I can't figure out how to zoom out; I just can't get my head around it, so a little help from some math enthusiast would be greatly appreciated. Thanks.

Your question is not quite clear. I assume you are asking: if I change the zoom of the map, where do I have to move the center of the image on the Earth so that the mouse still points to the same position on the Earth?
I'm not aware of a closed formula for anything reasonably close to the real case (i.e. one taking into account the curvature of the Earth). For the simple case of a close-enough zoom, where you can approximate the surface of the Earth by a locally flat rectangle, the question becomes more or less the question of zooming on images.
Let's introduce some notation:
Xr - real X position on the original image (the Earth), in pixels or whatever
Xi - position on the scaled image, in pixels
W - width of the zoomed view, in pixels
Z - zoom level
Any of these notations may be further modified with an index and/or the suffix c, meaning "center". Example: Xrc1 - the X position of the center of the image at zoom level #1 on the original image (the Earth).
If we want to calculate Xi from Xr, the formula is:
(Xi - Xic)*Z = (Xr - Xrc)
And obviously Xic is always W/2.
Now suppose we are at zoom level Z1, the mouse points to Xi, and the user scales to some other zoom level Z2. We want to find where to move Xrc1 such that the projections of our point Xr on the real image (the Earth) stay the same, i.e. Xi1 = Xi2 = Xi. So
(Xi - W/2)*Z1 = (Xr - Xrc1)
(Xi - W/2)*Z2 = (Xr - Xrc2)
To solve this for Xrc2, let's multiply the first equation by Z2 and the second by Z1:
(Xr - Xrc1)*Z2 = (Xi - W/2)*Z1*Z2 = (Xr - Xrc2)*Z1
So
Xrc2 = (Xrc1*Z2 + Xr*(Z1-Z2)) / Z1
Or, if we use K as the name for the ratio of scales Z2/Z1:
Xrc2 = Xrc1*K + Xr*(1-K)
Sanity checks:
If Xr = Xrc1, i.e. the mouse points to the center, then Xrc2 = Xrc1.
If the mouse points to a corner and K is 0.5 (zoom twice as close), the center moves to the midpoint between the old center and Xr, i.e. twice as close to Xr, as it obviously should.
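As a quick JavaScript sketch of this result (names follow the notation above; note that in this convention a smaller Z means a closer zoom, since Z multiplies screen offsets to get real offsets):
function newCenter(xrc1, xr, z1, z2) {
  var k = z2 / z1;                 // K = Z2/Z1, the ratio of zoom scales
  return xrc1 * k + xr * (1 - k);  // Xrc2 = Xrc1*K + Xr*(1-K)
}
// Sanity checks from above:
newCenter(100, 100, 1, 0.5); // mouse at the center: returns 100, the center stays put
newCenter(100, 200, 1, 0.5); // K = 0.5 (twice as close): returns 150, halfway toward Xr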

The problem was that I used the "new resolution" for the calculation, after it had already changed due to the zoom event. Instead I needed the "old resolution", which is half the new resolution.
Also, I had to account for the x-offset from the center, which, when zooming out, is reversed.
So the revised formulas for zooming are:
Double resolutionFactor = zoomLevels > 0 ? resolution : (resolution / 2);
newCenterX = ((eventX - (mapWidth / 2)) * zoomLevels) * resolutionFactor + oldCenterX;
newCenterY = ((eventY - (mapHeight / 2)) * resolutionFactor / (-zoomLevels)) + oldCenterY;

How can I seamlessly wrap map tiles around cylindrically?

I'm creating a game that takes place on a map, and the player should be able to scroll around the map. I'm using real-world data from NASA as a 5400 by 2700 pixel image split into 4 smaller ones, each corresponding to a hemisphere:
[Image: how I split up the image]
The player will be viewing the world through a camera, currently in a 4:3 aspect ratio, which can be moved around. Its width and height can be described by two variables x and y, currently 480 and 360 respectively.
[Image: model of the camera]
In practice, the camera is "fixed" and instead the tiles move. The camera's center is described as two variables: xcam and ycam.
Currently, the 4 tiles move and hide flawlessly. The problem arises when the camera passes over the "edge" at 180 degrees longitude. What should happen is that the tiles on the other side should show and move as if the world were a cylinder, without any noticeable gaps. I update xcam with this equation:
xcam = ((xcam + (2700 - x)) mod (5400 - x)) - (2700 - x)
And the tiles' centers update according to these equations (I will focus only on tiles 1 and 2 for simplicity):
tile1_x = xcam - 1350
tile1_y = ycam + 650
tile2_x = xcam + 1350
tile2_y = ycam + 650
Using this, whenever the camera moves past the leftmost edge of tile 1, it "skips": instead of tile 1 still being visible with tile 2 in view, it jumps so that tile 2's rightmost edge sits at the camera's rightmost edge.
[Images: what actually happens vs. what I want to happen]
So, is there any way to update the equations I'm using (or even completely redo everything) so that I can get smooth wrapping?
I think you are unnecessarily hard-coding the number of tiles and their sizes, and thus binding your code to those data. In my opinion it would be better to store them in variables, so they can be easily modified in one place if the data ever change. This also allows us to write more flexible code.
So, let's assume we have variables:
// logical size of the whole Earth's map,
// currently 2 and 2
int ncols, nrows;
// single tile's size, currently 2700 and 1350
int wtile, htile;
// the whole Earth map's size
// always ncols*wtile and nrows*htile
int wmap, hmap;
Tile tiles[nrows][ncols];
// viewport's center in map coordinates
int xcam, ycam;
// viewport's size in map coordinates, currently 480 and 360
int wcam, hcam;
Whenever we update the player's position, we need to make sure the position falls within the allowed range. But we need to establish the coordinate system first in order to define that range. For example, if x values span from 0 to wmap-1, increasing rightwards (towards East), and y values span from 0 to hmap-1, increasing downwards (towards South), then:
// player's displacement
int dx, dy;
xcam = (xcam + dx) mod wmap
ycam = (ycam + dy) mod hmap
assures the camera position is always within the map. (This assumes the mod operator always returns a non-negative value. Should it work like the C language % operator, which returns a negative result for a negative dividend, one needs to add the divisor first to make sure the first argument is non-negative: xcam = (xcam + dx + wmap) mod wmap, etc.)
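JavaScript's % follows the C convention, for example, so a small helper (my naming) does the trick:
function mod(a, n) {
  // always returns a value in [0, n), even for a negative dividend
  return ((a % n) + n) % n;
}
// mod(-30, 5400) === 5370, so xcam = mod(xcam + dx, wmap) always stays within the map.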
If you'd rather have xcam,ycam = 0,0 at the center of the map (that is, at the Greenwich meridian and the equator), then the allowed range would be -wmap/2 through wmap/2-1 for x and -hmap/2 through hmap/2-1 for y. Then:
xcam = (xcam + dx + wmap/2) mod wmap - wmap/2
ycam = (ycam + dy + hmap/2) mod hmap - hmap/2
More generally, let x0, y0 denote the 'zero' position of camera relative to the upper-left corner of the map. Then we can update the camera position by transforming it to the map's coordinates, then shifting and wrapping, and finally transforming back to camera's coordinates:
xmap = xcam + x0
ymap = ycam + y0
xmap = (xmap + dx) mod wmap
ymap = (ymap + dy) mod hmap
xcam = xmap - x0
ycam = ymap - y0
or, more compactly:
xcam = (xcam + dx + x0) mod wmap - x0
ycam = (ycam + dy + y0) mod hmap - y0
Now, when we know the position of the viewport (camera) relative to the map, we need to fill it with the map tiles. And a new decision must be made here.
When we travel from Anchorage, Alaska (western hemisphere) to the North, we eventually reach the North Pole and then find ourselves in the eastern hemisphere, heading South. If we proceed in the same direction, we'll get to Kuusamo, Finland, then Saint Petersburg, Russia, then Kiev, Ukraine... But that would be travel to the South! We usually do not describe it as the next part of the initial North route. Consequently, we do not show the part 'past the pole' as an upside-down extension of the map. Hence the map should never show tiles above row number 0 or below row nrows-1.
On the other hand, when we travel along circles of latitude, we smoothly cross the 0 and 180 meridians and switch between the eastern and western hemispheres. So if the camera view covers an area on both sides of the left or right edge of the map, we need to continue filling the view with tiles from the other end of the tiles array. If we use a scaled-down map, smaller than the viewport, we may even need to iterate more than once!
The left edge of the camera view corresponds to the 'longitude' xleft = xcam - wcam/2 and the right one to xrght = xcam + wcam/2. So we can step across the viewport by the tile's width to find the appropriate columns and show them:
x = xleft
repeat
    show a column at x
    x = x + wtile
until x >= xrght
The 'show a column at x' part requires finding the appropriate column, then iterating down the column to show the corresponding tiles. Let's find out which tiles fit the camera view:
ytop = ycam - hcam/2
ybot = ycam + hcam/2
y = ytop
repeat
    show a tile at x,y
    y = y + htile
until y >= ybot
To show a tile we need to locate the appropriate one and then draw it at the appropriate position in the camera view. However, we treat the column number differently from the row number: columns wrap while rows do not:
row = y/htile
if (0 <= row) and (row < nrows) then
    col = (x/wtile) mod ncols
    xtile = x - (x mod wtile)
    ytile = y - (y mod htile)
    display tile[row][col] at xtile,ytile
endif
Of course xtile and ytile are our map-scale longitude and latitude, so the 'display tile at' routine must transform them to the camera view coordinates by subtracting the camera position from them:
xinview = xtile - xcam
yinview = ytile - ycam
and then apply the resulting values relative to the camera view's center on the display device (screen).
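Put together, a minimal JavaScript sketch of the whole tile loop might look like this (it assumes a canvas 2D context, a tiles array of images, and the mod helper from above; stepping one tile past the right and bottom edges keeps partially visible tiles covered):
function drawVisibleTiles(ctx) {
  var xleft = xcam - wcam / 2, xrght = xcam + wcam / 2;
  var ytop = ycam - hcam / 2, ybot = ycam + hcam / 2;
  for (var x = xleft; x < xrght + wtile; x += wtile) {
    for (var y = ytop; y < ybot + htile; y += htile) {
      var row = Math.floor(y / htile);
      if (row < 0 || row >= nrows) continue;        // rows never wrap past the poles
      var col = mod(Math.floor(x / wtile), ncols);  // columns wrap around the map edge
      var xtile = x - mod(x, wtile);                // tile's upper-left corner, map coords
      var ytile = y - mod(y, htile);
      // transform map coordinates to view coordinates relative to the camera center
      ctx.drawImage(tiles[row][col],
                    xtile - xcam + wcam / 2,
                    ytile - ycam + hcam / 2);
    }
  }
}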
Another level of complication will appear if you want to implement zooming the view in and out, that is, dynamic scaling of the map, but I'm sure you'll find out yourself which calculations need the zoom factor applied for correct results. :)

Calculation of viewport coordinates

I read an article about normalized device coordinates (on the German DGL wiki), and it provides the following example:
"Let's consider that we had a Viewport with dimensions 1024 pixel(width) and 768 pixel height. A point P with absolute, not normalized, coordinates P(350/210) would be in normalized coordinates P(-0,32/-0,59).These coordinates can now be projected on a Viewport (800x600) just by multiplying the normalized device coordinates (similar to vector scaling) with the size of the viewport. In this case the result would be P(273/164).
Somehow I can't understand how one gets to the results provided (I mean 273/164 and -0.32/-0.59). Could somebody explain to me how to calculate the coordinates?
P.S.: This is the article - https://wiki.delphigl.com/index.php/Normalisierte_Ger%C3%A4tekoordinate
Thank you!
That article is definitely lacking description. I can get you part of the way there; maybe someone with more math can help finish.
According to this answer, the formulas to convert non-normalized coords to normalized coords are:
Nx = (Cx / Sx) * 2 - 1
Ny = 1 - (Cy / Sy) * 2
(where Cx/y = Coordinate X/Y; Sx/y = Screen X/Y; and Nx/y = Normalized X/Y).
Plugging the example's numbers in:
Nx = (350/1024) * 2 - 1 = -0.31640625
Ny = 1 - (210/768) * 2 = 0.453125
...or (-0.32, 0.45).
Reversing this to get the new coords:
Cx = (1 + -0.31640625) / 2 * 800 = 273.4375
Cy = (1 - 0.453125) / 2 * 600 = 164.0625
Note that the Y value doesn't match the article's. This is probably because my calculation doesn't account for the aspect ratio, and it should, since these screens have a 0.75 aspect ratio while NDC's is 1. This SO answer may help too.
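A small JavaScript sketch of the same round trip, using the formulas above (the function names are mine):
function toNDC(cx, cy, sw, sh) {
  return { x: (cx / sw) * 2 - 1, y: 1 - (cy / sh) * 2 };
}
function fromNDC(nx, ny, sw, sh) {
  return { x: (1 + nx) / 2 * sw, y: (1 - ny) / 2 * sh };
}
var n = toNDC(350, 210, 1024, 768);  // { x: -0.31640625, y: 0.453125 }
var p = fromNDC(n.x, n.y, 800, 600); // { x: 273.4375, y: 164.0625 }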

How to calculate the z-distance of a camera to view an image at 100% of its original scale in a 3D space

How can one calculate the camera distance from an object in 3D space (an image in this case) such that the image is displayed at its original pixel width?
Am I right in assuming that this is possible given the aspect ratio of the camera, fov, and the original width/height of the image in pixels?
(In case it is relevant, I am using THREE.js in this particular instance).
Thanks to anyone who can help or lead me in the right direction!
Thanks everyone for all the input!
After doing some digging and then working out how this all fits into the exact problem I was trying to solve with THREE.js, this is the answer I came up with in JavaScript for the target Z distance at which things display at their original scale:
var vFOV = this.camera.fov * (Math.PI / 180); // convert VERTICAL fov to radians
var targetZ = window.innerHeight / (2 * Math.tan(vFOV / 2));
I was trying to figure out which one to mark as the answer but I kind of combined all of them into this solution.
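The reasoning behind the formula: a perspective camera with vertical fov vFOV sees a height of 2 * z * tan(vFOV / 2) world units at distance z, so if one world unit equals one pixel, solving 2 * z * tan(vFOV / 2) = window.innerHeight for z gives the snippet above. A quick sanity check in JavaScript (the numbers are illustrative):
var vFOV = 75 * Math.PI / 180;                // 75-degree vertical fov
var targetZ = 600 / (2 * Math.tan(vFOV / 2)); // ~390.9 for a 600px-tall view
2 * targetZ * Math.tan(vFOV / 2);             // 600: the frustum exactly spans 600px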
Trigonometrically:
A line segment of length l, parallel to the view plane, with one end on the optical axis and at a perpendicular distance n from the camera, will subtend arctan(l/n) on the camera. You can arrive at that result by simple trigonometry.
Hence if your field of view in the direction of the line is q, amounting to p pixels, the segment will end up occupying p*arctan(l/n)/q pixels.
So, using y as the output number of pixels:
y = p*arctan(l/n)/q
y*q/p = arctan(l/n)
l/tan(y*q/p) = n
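As a JavaScript one-liner for that last line (assuming q is in radians, consistent with arctan above):
// distance n at which a segment of world length l occupies y of the p pixels
// spanned by a field of view of q radians
function distanceForPixels(l, y, q, p) {
  return l / Math.tan(y * q / p);
}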
Linear algebra:
In a camera with a field-of-view of 90 degrees and a viewport of 2w pixels wide, the projection into screen space is equivalent to:
x' = w - w*x/z
When the line is parallel to the view plane, its length on screen is the difference between two such x's, so the constant w cancels and, by the normal associativity and commutativity rules:
l' = w*l/z
Hence:
z = w*l/l'
If your field of view is actually q degrees rather than 90, then you can use the cotangent to scale appropriately.
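A minimal sketch of this result as a JavaScript function, with the cotangent scaling folded in (assuming q is the full field of view in radians and the viewport is 2*w pixels wide):
// z at which a segment of world length l projects to lPixels pixels on screen
function zForPixelLength(l, lPixels, w, q) {
  return (w * l) / (lPixels * Math.tan(q / 2)); // reduces to w*l/l' when q is 90 degrees
}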
In your original question you said that you're using CSS3D. I suggest that you do the following:
Set up an orthographic camera with fov = 1..179 degrees, where left = screenWidth / 2, right = screenWidth / -2, top = screenHeight / 2, bottom = screenHeight / -2. The near and far planes do not affect CSS3D rendering, as far as I can tell from experience.
camera = new THREE.OrthographicCamera(left, right, top, bottom, near, far);
camera.fov = 75;
Now you need to calculate the distance between the camera and the object such that, when the object is projected using the camera with the settings above, it has 1:1 coordinate correspondence on screen. This can be done in the following way:
var camscale = Math.tan(( camera.fov / 2 ) / 180 * Math.PI);
var camfix = screenHeight / 2 / camscale;
Place your div at position x, y, z and set the camera's position to 0, 0, z + camfix.
This should give you 1:1 coordinate correspondence between the rendered result and your pixel values in CSS / div styles. Remember that the origin is in the center and the object's position is the center of the object, so you need to adjust if you want to specify coordinates from the top-left corner, for example:
object.x = ( screenWidth - objectWidth ) / 2 + positionLeft
object.y = ( screenHeight - objectHeight ) / 2 + positionTop
object.z = 0
I hope this helps. I was struggling with the same thing (exact control of the CSS3D scene) but figured out that the orthographic camera plus a viewport-size-adjusted distance from the object did the trick. Don't alter the camera rotation or its x and y coordinates; just fiddle with the z and you're safe.
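Pulled together, a hedged sketch of this recipe (THREE.js; screenWidth, screenHeight, objectWidth, objectHeight, positionLeft and positionTop are assumed to come from your layout, and object is your CSS3D object):
// Orthographic camera sized to the screen, with fov set for CSS3D as above
var camera = new THREE.OrthographicCamera(
    screenWidth / 2, screenWidth / -2,
    screenHeight / 2, screenHeight / -2, 1, 1000);
camera.fov = 75;
// Distance at which the projection gives 1:1 pixel correspondence
var camscale = Math.tan((camera.fov / 2) / 180 * Math.PI);
var camfix = screenHeight / 2 / camscale;
// Center-origin adjustment for top-left-style coordinates
object.position.x = (screenWidth - objectWidth) / 2 + positionLeft;
object.position.y = (screenHeight - objectHeight) / 2 + positionTop;
object.position.z = 0;
camera.position.set(0, 0, object.position.z + camfix);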

Angular displacement on canvas

I have a square (100px x 100px) with origin at 0,0 (upper left).
When I move the mouse, let's say 10 pixels in x and y, I move the origin according to the displacement and the origin becomes 10,10. Simple. Works fine!
When I rotate the square, my rotation function rotates it fine, but after the square is rotated, let's say 10 degrees, the origin point should move according to the rotation. And now I have no idea what formula I have to apply to make that happen!
I checked Wikipedia, but I think it's too complicated.
http://en.wikipedia.org/wiki/Angular_displacement
and
http://en.wikipedia.org/wiki/Cosine#Sine.2C_cosine.2C_and_tangent
Example: after a 90 degree rotation to the left, the origin is now at the lower left; now when I move the mouse to the right, the picture goes UP!!!!
If I understand your problem correctly, you are applying an offset to the rectangle points based on your mouse position, then rotating the resulting points about the origin.
Instead, try applying your mouse offset after you do your rotation, not before.
Suppose you have a figure and you want to rotate it by angle alpha and translate it so that point (cx, cy) of the figure gets to point (sx, sy) after the transformation.
The transformation is
transformed_x = x*cos(alpha) - y*sin(alpha) + offset_x
transformed_y = x*sin(alpha) + y*cos(alpha) + offset_y
to compute desired offset_x and offset_y values you just need to put your requirement about (cx, cy) and (sx, sy) into the above equations:
sx = cx*cos(alpha) - cy*sin(alpha) + offset_x
sy = cx*sin(alpha) + cy*cos(alpha) + offset_y
and now you can easily extract the offset values from that:
offset_x = sx - cx*cos(alpha) + cy*sin(alpha)
offset_y = sy - cx*sin(alpha) - cy*cos(alpha)
To set up the canvas transform for this you just need to call:
context.translate(sx - cx*Math.cos(alpha) + cy*Math.sin(alpha),
sy - cx*Math.sin(alpha) - cy*Math.cos(alpha));
context.rotate(alpha);
You can see a little demo of this formula following this link.
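For reference, here's a minimal runnable sketch of the same formula (my own demo, not the linked one): it draws a 100x100 square rotated by alpha with its corner (cx, cy) pinned to the point (sx, sy).
// Assumes a <canvas id="c"> element on the page
var ctx = document.getElementById('c').getContext('2d');
var alpha = 10 * Math.PI / 180; // 10 degrees
var cx = 0, cy = 0;             // pin the square's upper-left corner
var sx = 150, sy = 120;         // where that corner should land
ctx.setTransform(1, 0, 0, 1, 0, 0); // reset any previous transform
ctx.translate(sx - cx * Math.cos(alpha) + cy * Math.sin(alpha),
              sy - cx * Math.sin(alpha) - cy * Math.cos(alpha));
ctx.rotate(alpha);
ctx.fillRect(0, 0, 100, 100);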

I've got my 2D/3D conversion working perfectly, how to do perspective

Although the context of this question is making a 2D/3D game, the problem I have boils down to some math.
Although it's a 2.5D world, let's pretend it's just 2D for this question.
// xa: x-accent, the x coordinate of the projection
// mapP: a coordinate on a map which need to be projected
// _Dist_ values are constants for the projection; choosing them correctly will result in e.g. an isometric projection
xa = mapP.x * xDistX + mapP.y * xDistY;
ya = mapP.x * yDistX + mapP.y * yDistY;
xDistX and yDistX determine the angle of the x-axis, and xDistY and yDistY determine the angle of the y-axis on the projection (and also the size of the grid, but let's assume this is 1 pixel for simplicity).
x-axis-angle = atan(yDistX/xDistX)
y-axis-angle = atan(yDistY/xDistY)
a "normal" coordinate system like this
--------------- x
|
|
|
|
|
y
has values like this:
xDistX = 1;
yDistX = 0;
xDistY = 0;
yDistY = 1;
So every step in the x direction moves the projection 1 pixel to the right and 0 pixels down. Every step in the y direction moves the projection 0 pixels to the right and 1 pixel down.
By choosing the correct xDistX, yDistX, xDistY, yDistY, you can project any trimetric or dimetric system (which is why I chose this).
So far so good; when this is drawn, everything turns out okay. If "my system" and mindset are clear, let's move on to perspective.
I wanted to add some perspective to this grid, so I added some extras like this:
camera = new MapPoint(60, 60);
dx = mapP.x - camera.x; // delta x
dy = mapP.y - camera.y; // delta y
dist = Math.sqrt(dx * dx + dy * dy); // dist is the distance to the camera (Pythagoras); all objects must be in front of the camera
fac = 1 - dist / 100; // this formula determines the amount of perspective
xa = fac * (mapP.x * xDistX + mapP.y * xDistY) ;
ya = fac * (mapP.x * yDistX + mapP.y * yDistY );
Now the really hard part... what if you have an (xa,ya) point on the projection and want to calculate the original point (x,y)?
For the first case (without perspective) I did find the inverse function, but how can this be done for the formula with perspective? My math skills are not quite up to the challenge.
(I vaguely remember from a long time ago that Mathematica could create inverse functions for some special cases... could it solve this problem? Could someone maybe try?)
The function you've defined doesn't have an inverse. Just as an example, as user207422 already pointed out, anything that's 100 units away from the camera gets mapped to (xa,ya)=(0,0), so the inverse isn't uniquely defined.
More importantly, that's not how you calculate perspective. Generally the perspective scaling factor is defined to be viewdist/zdist where zdist is the perpendicular distance from the camera to the object and viewdist is a constant which is the distance from the camera to the hypothetical screen onto which everything is being projected. (See the diagram here, but feel free to ignore everything else on that page.) The scaling factor you're using in your example doesn't have the same behaviour.
Here's a stab at converting your code into a correct perspective calculation (note I'm not simplifying to 2D; perspective is about projecting three dimensions onto two, so trying to simplify the problem to 2D is kind of pointless):
camera = new MapPoint(60, 60, 10);
camera_z = camera.x*zDistX + camera.y*zDistY + camera.z*zDistZ;
// viewdist is the distance from the viewer's eye to the screen in
// "world units". You'll have to fiddle with this, probably.
viewdist = 10.0;
xa = mapP.x*xDistX + mapP.y*xDistY + mapP.z*xDistZ;
ya = mapP.x*yDistX + mapP.y*yDistY + mapP.z*yDistZ;
za = mapP.x*zDistX + mapP.y*zDistY + mapP.z*zDistZ;
zdist = camera_z - za;
scaling_factor = viewdist / zdist;
xa *= scaling_factor;
ya *= scaling_factor;
You're only going to return xa and ya from this function; za is just for the perspective calculation. I'm assuming the "za-direction" points out of the screen, so if the pre-projection x-axis points towards the viewer then zDistX should be positive and vice-versa, and similarly for zDistY. For a trimetric projection you would probably have xDistZ==0, yDistZ<0, and zDistZ==0. This would make the pre-projection z-axis point straight up post-projection.
Now the bad news: this function doesn't have an inverse either. Any point (xa,ya) is the image of an infinite number of points (x,y,z). But! If you assume that z=0, then you can solve for x and y, which is possibly good enough.
To do that you'll have to do some linear algebra. Compute camera_x and camera_y similarly to camera_z. Those are the post-transformation coordinates of the camera. The point on the screen has post-transformation coordinates (xa, ya, camera_z - viewdist). Draw a line through those two points, and calculate where it intersects the plane spanned by the vectors (xDistX, yDistX, zDistX) and (xDistY, yDistY, zDistY). In other words, you need to solve the equations:
x*xDistX + y*xDistY == s*camera_x + (1-s)*xa
x*yDistX + y*yDistY == s*camera_y + (1-s)*ya
x*zDistX + y*zDistY == s*camera_z + (1-s)*(camera_z - viewdist)
It's not pretty, but it will work.
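As a sketch (my own JavaScript, not part of the answer), the three equations can be rearranged into a 3x3 linear system in x, y and s and solved with Cramer's rule; the *Dist* constants, camera_x/y/z and viewdist are assumed to be defined as above:
function unproject(xa, ya) {
  // A * [x, y, s]^T = b, from moving the s-terms to the left-hand side
  var A = [
    [xDistX, xDistY, -(camera_x - xa)],
    [yDistX, yDistY, -(camera_y - ya)],
    [zDistX, zDistY, -viewdist]
  ];
  var b = [xa, ya, camera_z - viewdist];
  function det3(m) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
  }
  function withColumn(m, k, col) {
    // copy of m with column k replaced by col
    return m.map(function (row, i) {
      return row.map(function (v, j) { return j === k ? col[i] : v; });
    });
  }
  var d = det3(A);
  if (d === 0) return null; // line of sight parallel to the ground plane
  return { x: det3(withColumn(A, 0, b)) / d,
           y: det3(withColumn(A, 1, b)) / d };
}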
I think that with your post I can solve the problem. Still, to clarify some things:
Solving the problem in 2D is useless indeed, but this was only done to make the problem easier to grasp (for me and for the readers here). My program actually gives a perfect 3D projection (I checked it with 3D images rendered with Blender). I did leave something out about the inverse function, though. The inverse function is only for coordinates between 0..camera.x * 0.5 and 0..camera.y * 0.5. So in my example between 0 and 30. But even then I have doubts about my function.
In my projection the z-axis is always straight up, so to calculate the height of an object I only used the viewing angle. But since you can't actually fly or jump into the sky, everything has only a 2D point. This also means that when you try to solve for x and y, z really is 0.
I know not every function has an inverse, and some functions do, but only for a particular domain. My basic thought in all this was: if I can draw a grid using a function, every point on that grid maps to exactly one map point. I can read the x and y coordinates, so if I just had the correct function I would be able to calculate the inverse.
But there is no better replacement than some good solid math, and I'm very glad you took the time to give a very helpful response :).
