Values inside an OpenLayers map should change when units switch from metric to imperial, like the scale line does

I'm using an OpenLayers map with a few points on it; clicking each point shows its Northing and Easting values in a pop-up.
When the user changes the unit system from metric to imperial, the scale line unit in the map changes. Similarly, I need the displayed Northing and Easting values of a point to change based on the unit system (metric - metre, imperial - foot). Below is a sample unit conversion.
E.g. in metric (metres):
North - 26.27
East - 20
In imperial (feet):
North - 86.20
East - 65.62
I expect the same conversion to happen inside the map when the unit system is changed. I can receive the changed unit system inside the map, but how do I handle the unit conversion dynamically in the map?

The ScaleLine control isn't linked to your popup, which presumably uses projection units. You would need to change the code for your popups, for example:
var x = coord[0];
var y = coord[1];
if (scaleline.getUnits() == 'imperial') {
  // convert metres to feet: 1 ft = 12 in * 2.54 cm = 0.3048 m
  x = x * 100 / (2.54 * 12);
  y = y * 100 / (2.54 * 12);
}
content.innerHTML = 'North - ' + y.toFixed(2) + '<br>East - ' + x.toFixed(2);
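If it helps to see the arithmetic on its own, here is a minimal sketch in Python (the function and constant names are mine, not part of OpenLayers; it assumes the projection units are metres and uses the international foot, 0.3048 m):
# illustrative only: convert a northing/easting pair for display
METRES_PER_FOOT = 0.3048

def to_display_units(northing_m, easting_m, unit_system):
    # return (northing, easting) in metres or feet depending on the unit system
    if unit_system == 'imperial':
        return northing_m / METRES_PER_FOOT, easting_m / METRES_PER_FOOT
    return northing_m, easting_m

print(to_display_units(26.27, 20.0, 'imperial'))  # roughly (86.19, 65.62)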

Related

How can I seamlessly wrap map tiles around a cylinder?

I'm creating a game that takes place on a map, and the player should be able to scroll around the map. I'm using real-world data from NASA as a 5700 by 2700 pixel image split into 4 smaller ones, each corresponding to a hemisphere:
How I split up the image:
The player will view the world through a camera, currently with a 4:3 aspect ratio, which can be moved around. Its width and height can be described by two variables x and y, currently 480 and 360 respectively.
Model of the camera:
In practice, the camera is "fixed" and the tiles move instead. The camera's center is described by two variables: xcam and ycam.
Currently, the 4 tiles move and hide flawlessly. The problem arises when the camera passes over the "edge" at 180 degrees longitude. What should happen is that the tiles on one side show and move as if the world were a cylinder, without any noticeable gaps. I update xcam with this equation:
xcam = ((xcam + (2700 - x)) mod (5400 - x)) - (2700 - x)
And the tiles' centers update according to these equations (I will focus only on tiles 1 and 2 for simplicity):
tile1_x = xcam - 1350
tile1_y = ycam + 650
tile2_x = xcam + 1350
tile2_y = ycam + 650
Using this, whenever the camera moves past the leftmost edge of tile 1, it "skips": instead of tile 1 still being visible with tile 2 in view, everything jumps so that tile 2's rightmost edge lines up with the camera's rightmost edge.
(Two images, omitted here, show what happens in reality versus what I want to happen.)
So, is there any way to update the equations I'm using (or even completely redo everything) so that I can get smooth wrapping?
I think you are unnecessarily hard-coding the number of tiles and their sizes, and thus binding your code to those data. In my opinion it would be better to store them in variables, so that they can be easily modified in one place if the data ever change. This also allows us to write more flexible code.
So, let's assume we have variables:
// logical size of the whole Earth's map,
// currently 2 and 2
int ncols, nrows;
// single tile's size, currently 2700 and 1350
int wtile, htile;
// the whole Earth map's size
// always ncols*wtile and nrows*htile
int wmap, hmap;
Tile tiles[nrows][ncols];
// viewport's center in map coordinates
int xcam, ycam;
// viewport's size in map coordinates, currently 480 and 360
int wcam, hcam;
Whenever we update the player's position, we need to make sure the position falls within the allowed range. But first we need to establish the coordinate system in order to define that range. For example, if x values span from 0 to wmap-1, increasing rightwards (towards East), and y values span from 0 to hmap-1, increasing downwards (towards South), then:
// player's displacement
int dx, dy;
xcam = (xcam + dx) mod wmap
ycam = (ycam + dy) mod hmap
assures the camera position is always within the map. (This assumes the mod operator always returns a non-negative value. If it works like the C language % operator, which returns a negative result for a negative dividend, you need to add the divisor first to make sure the first argument is non-negative: xcam = (xcam + dx + wmap) mod wmap, etc.)
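To make the caveat concrete, here is a small Python aside (math.fmod mimics the C-style % behaviour, while Python's own % already returns a non-negative result for a positive divisor):
import math

wmap = 5400
xcam, dx = 100, -250                      # move left past the map's edge

print((xcam + dx) % wmap)                 # 5250   -> wraps correctly
print(math.fmod(xcam + dx, wmap))         # -150.0 -> C-style %, negative result
print(math.fmod(xcam + dx + wmap, wmap))  # 5250.0 -> adding the divisor first fixes it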
If you'd rather have xcam,ycam = 0,0 at the center of the map (that is, at the Greenwich meridian and the equator), then the allowed range would be -wmap/2 through wmap/2-1 for x and -hmap/2 through hmap/2-1 for y. Then:
xcam = (xcam + dx + wmap/2) mod wmap - wmap/2
ycam = (ycam + dy + hmap/2) mod hmap - hmap/2
More generally, let x0, y0 denote the 'zero' position of the camera relative to the upper-left corner of the map. Then we can update the camera position by transforming it to the map's coordinates, then shifting and wrapping, and finally transforming back to the camera's coordinates:
xmap = xcam + x0
ymap = ycam + y0
xmap = (xmap + dx) mod wmap
ymap = (ymap + dy) mod hmap
xcam = xmap - x0
ycam = ymap - y0
or, more compactly:
xcam = (xcam + dx + x0) mod wmap - x0
ycam = (ycam + dy + y0) mod hmap - y0
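As a minimal Python sketch of this update (the names follow the pseudocode above; pick x0 = y0 = 0 for corner-origin coordinates, or x0 = wmap/2, y0 = hmap/2 for a centered origin):
def update_camera(xcam, ycam, dx, dy, wmap, hmap, x0=0, y0=0):
    # shift into map coordinates, wrap, and shift back;
    # Python's % already stays non-negative for a positive modulus
    xcam = (xcam + dx + x0) % wmap - x0
    ycam = (ycam + dy + y0) % hmap - y0
    return xcam, ycam

print(update_camera(5300, 100, 200, 0, 5400, 2700))  # -> (100, 100)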
Now, when we know the position of the viewport (camera) relative to the map, we need to fill it with the map tiles. And a new decision must be made here.
When we travel from Anchorage, Alaska (western hemisphere) to the North, we eventually reach the North Pole, and then we find ourselves in the eastern hemisphere, heading South. If we proceed in the same direction, we get to Kuusamo, Finland, then Saint Petersburg, Russia, then Kiev, Ukraine... But that would be a journey to the South! We usually do not describe it as the next part of the initial northward route. Consequently, we do not show the part 'past the pole' as an upside-down extension of the map. Hence the map should never show tiles above row number 0 or below row nrows-1.
On the other hand, when we travel along circles of latitude, we smoothly cross the 0 and 180 meridians and switch between the eastern and western hemispheres. So if the camera view covers an area on both sides of the left or right edge of the map, we need to continue filling the view with tiles from the other end of the tiles array. If we use a scaled-down map, so that it is smaller than the viewport, we may even need to iterate more than once!
The left edge of the camera view corresponds to the 'longitude' xleft = xcam - wcam/2 and the right one to xrght = xcam + wcam/2. So we can start at the beginning of the column containing the left edge (otherwise a column that is only partly visible at the right edge could be skipped) and step across the viewport by the tile's width to find the appropriate columns and show them:
x = xleft - (xleft mod wtile)
repeat
show a column at x
x = x + wtile
until x >= xrght
The 'show a column at x' part requires finding the appropriate column, then iterating down the column to show the corresponding tiles. Let's find out which tiles fit the camera view:
ytop = ycam - hcam/2
ybot = ycam + hcam/2
y = ytop - (ytop mod htile)
repeat
show a tile at x,y
y = y + htile
until y >= ybot
To show the tile we need to locate the appropriate tile and then send it to the appropriate position in the camera view.
However, we treat the column number differently from the row number: columns wrap while rows do not:
row = y/htile
if (0 <= row) and (row < nrows) then
col = (x/wtile) mod ncols
xtile = x - (x mod wtile)
ytile = y - (y mod htile)
display tile[row][col] at xtile,ytile
endif
Of course xtile and ytile are our map-scale longitude and latitude, so the 'display tile at' routine must transform them to the camera view coordinates by subtracting the camera position from them:
xinview = xtile - xcam
yinview = ytile - ycam
and then apply the resulting values relative to the camera view's center at the displaying device (screen).
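Collected into one routine, the whole loop might look like this (a Python sketch of the pseudocode above; it assumes xcam, ycam are corner-origin map coordinates, and display_tile stands in for whatever drawing call your engine provides, taking view-relative coordinates):
def draw_visible_tiles(xcam, ycam, wcam, hcam,
                       tiles, wtile, htile, nrows, ncols, display_tile):
    # draw every tile overlapping the camera view; columns wrap, rows do not
    xleft, xrght = xcam - wcam // 2, xcam + wcam // 2
    ytop, ybot = ycam - hcam // 2, ycam + hcam // 2
    x = xleft - xleft % wtile               # start of the column containing xleft
    while x < xrght:
        y = ytop - ytop % htile             # start of the row containing ytop
        while y < ybot:
            row = y // htile
            if 0 <= row < nrows:            # never show tiles 'past the poles'
                col = (x // wtile) % ncols  # columns wrap around the 180th meridian
                # x, y are the tile's map coordinates; subtract the camera
                # position to get coordinates in the camera view
                display_tile(tiles[row][col], x - xcam, y - ycam)
            y += htile
        x += wtile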
Another level of complication appears if you want to implement zooming the view in and out, that is, dynamic scaling of the map, but I'm sure you'll work out for yourself which calculations need the zoom factor applied for correct results. :)

Creating 2D grid or mesh inside a cross section profile

2D grid in R?
I have a river cross section with its profile given as Y versus Z, and I would like to create a mesh between the channel bed and a given water depth. Either a triangular or a rectangular mesh is OK. The grid can be 0.01 x 0.01 m or any other spacing. My target is to get the Y and Z coordinates of each mesh point.
Thanks in advance for your kind cooperation.
profile data
Y=c(-30,-2,0,8,20,31)
Z=c(30,10,2,9,30,39)
Since I posted this, I kept working on it and finally I was able to develop the code.
Anyone can use it.
# here is the cross section profile
Y=c(-30,-2,0,8,20,31)
Z=c(30,10,2,9,30,39)
# to create a grid of points inside the cross section,
# select the y and z grid spacing
ygrid= 50 #cm
zgrid= 20 #cm
Ym = seq(min(Y),max(Y),ygrid/100) #y grid coordinates along the section
# create an interpolation function
f_z=approxfun(Y,Z)
Zm = f_z(Ym) #interpolated z coordinates of the section perimeter
plot(Ym, Zm,type="b")
# the depth for which we have the observed surface velocity or discharge Q
Depth_study = 6.05
# elevation of the channel bed (lowest point of the profile)
Z_bed = min(Z)
# create different depths from the section bed up to the study depth
Depthm = seq(Z_bed,Depth_study,zgrid/100) # different water depths w.r.t. the 0,0 point
# now, for each depth, take the indices for which Zm <= Depthm
list_points_mesh<-vector("list",length=length(Depthm))
Y_mesh<-vector("list",length=length(Depthm))
Z_mesh<-vector("list",length=length(Depthm))
for (j in 1:length(Depthm)) {
list_points_mesh[[j]] = which(Zm<=Depthm[j]) #indices of all points below the Depthm[j] elevation
Y_mesh[[j]] = Ym[list_points_mesh[[j]]] #now we create pairs of points using the indices
Z_mesh[[j]] = rep(Depthm[j],length(list_points_mesh[[j]]))
}
# since the answer comes in the form of lists, we flatten them with unlist()
ym=unlist(Y_mesh) #y coordinates of the grid points
zm=unlist(Z_mesh) #z coordinates of the grid points
mesh_coord <- data.frame(ym,zm) #list of points inside the section
points(ym,zm,pch=".")

Translating Screen Coordinates [ x, y ] to Camera Pan and Tilt angles

I have an IP camera that can PTZ. I am currently streaming the live feed into the browser and want to allow the user to click a point on the screen; the camera should then pan and tilt so that the clicked position becomes the center of the view.
My camera pans 360 degrees and tilts from -55 to 90 degrees.
Is there any algorithm that will guide me to achieve my goal?
Let's start by declaring a 3D coordinate system around the camera (the origin). I will use the following: The z-axis points upwards. The x-axis is the camera direction with pan=tilt=0 and positive pan angles will move the camera towards the positive y-axis.
Then, the transform for a given pan/tilt configuration is:
T = Ry(-tilt) * Rz(pan)
This is the transform that positions our virtual image plane in 3D space. Let's keep that in mind and go to the image plane.
If we know the vertical and horizontal field of view and assume that lens distortions are already corrected, we can set up our image plane as follows: The image plane is 1 unit away from the camera (just by declaration) in the view direction. Let the center be the plane's local origin. Then, its horizontal extents are +- tan(fovx / 2) and its vertical extents are +- tan(fovy / 2).
Now, given a pixel position (x, y) in this image (origin in the top left corner), we first need to convert this location into a 3D direction. We start by calculating the local coordinates in the image plane. This is for the image's pixel width w and pixel height h:
lx = (2 * x / w - 1) * tan(fovx / 2)
ly = (-2 * y / h + 1) * tan(fovy / 2) (local y-axis points upwards)
lz = 1 (image plane is 1 unit away)
This is the ray that contains the corresponding pixel under the assumption that there is no pan or tilt yet. But now it is time to get rid of this assumption. That's where our initial transform comes into play. We just need to transform this ray:
tx = cos(pan) * cos(tilt) * lx - cos(tilt) * sin(pan) * ly - sin(tilt) * lz
ty = sin(pan) * lx + cos(pan) * ly
tz = cos(pan) * sin(tilt) * lx - sin(pan) * sin(tilt) * ly + cos(tilt) * lz
The resulting direction now describes the ray that contains the specified pixel in the global coordinate system that we set up in the beginning. All that's left is to calculate the new pan/tilt parameters:
tilt = atan2(tz, tx)
pan = asin(ty / sqrt(tx^2 + ty^2 + tz^2))
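For reference, here is the procedure transcribed literally into Python (a sketch only: it follows the formulas above exactly, assumes angles in radians, and assumes the fields of view fovx, fovy and the image's pixel size w, h are known for your camera; you will want to verify the angle conventions against your device):
import math

def click_to_pan_tilt(x, y, w, h, fovx, fovy, pan, tilt):
    # local ray through the clicked pixel on an image plane 1 unit away
    lx = (2 * x / w - 1) * math.tan(fovx / 2)
    ly = (-2 * y / h + 1) * math.tan(fovy / 2)
    lz = 1.0
    # rotate the ray by the current pan/tilt configuration (formulas above)
    tx = math.cos(pan) * math.cos(tilt) * lx - math.cos(tilt) * math.sin(pan) * ly - math.sin(tilt) * lz
    ty = math.sin(pan) * lx + math.cos(pan) * ly
    tz = math.cos(pan) * math.sin(tilt) * lx - math.sin(pan) * math.sin(tilt) * ly + math.cos(tilt) * lz
    # read the new pan/tilt back from the rotated direction
    new_tilt = math.atan2(tz, tx)
    new_pan = math.asin(ty / math.sqrt(tx * tx + ty * ty + tz * tz))
    return new_pan, new_tilt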

Calculation of viewport coordinates

I read an article about normalized device coordinates (on the German DGL wiki), and the following example is provided:
"Let's consider that we have a viewport with dimensions of 1024 pixels (width) and 768 pixels (height). A point P with absolute, non-normalized coordinates P(350/210) would have the normalized coordinates P(-0.32/-0.59). These coordinates can now be projected onto a viewport (800x600) just by multiplying the normalized device coordinates (similar to vector scaling) by the size of the viewport. In this case the result would be P(273/164)."
Somehow I can't understand how one gets to the results provided (I mean 273/164 and -0.32/-0.59). Could somebody explain to me how to calculate these coordinates?
P.S. : This is the article - https://wiki.delphigl.com/index.php/Normalisierte_Ger%C3%A4tekoordinate
Thank you!
That article is definitely lacking description. I can get you part of the way there; maybe someone with more math can help finish.
According to this answer, the formulas to convert non-normalized coords to normalized coords are:
Nx = (Cx / Sx) * 2 - 1
Ny = 1 - (Cy / Sy) * 2
(where Cx/y = Coordinate X/Y, Sx/y = Screen X/Y, and Nx/y = Normalized X/Y).
Plugging the example's numbers in:
Nx = (350/1024) * 2 - 1 = -0.31640625
Ny = 1 - (210/768) * 2 = 0.453125
...or (-0.32, 0.45).
Reversing this to get the new coords:
Cx = (1 + -0.31640625) / 2 * 800 = 273.4375
Cy = (1 - 0.453125) / 2 * 600 = 164.0625
Note that the Y value doesn't match the article's. This is probably because my calculation doesn't account for the aspect ratio, and it should, since these screens have a 0.75 aspect ratio while NDC's is 1. This SO answer may help too.
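For completeness, the whole round trip fits in a few lines (a small Python sketch of the formulas above; the function names are mine):
def to_ndc(cx, cy, sw, sh):
    # pixel coordinates (origin top-left) -> normalized device coordinates
    return (cx / sw) * 2 - 1, 1 - (cy / sh) * 2

def from_ndc(nx, ny, sw, sh):
    # normalized device coordinates -> pixel coordinates on another viewport
    return (1 + nx) / 2 * sw, (1 - ny) / 2 * sh

nx, ny = to_ndc(350, 210, 1024, 768)   # -> (-0.31640625, 0.453125)
print(from_ndc(nx, ny, 800, 600))      # -> (273.4375, 164.0625)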

Using trigonometry to calculate angle of movement using mouse position

I'm building a game in Lua for fun (even if you don't know Lua, you can probably help me with this as it applies to any programming language). My problem is I have an x and y variable defined in a table for the player:
player = {}
player.x = 10
player.y = 10
player.velocity = 50
My goal is to have the player move towards the mouse's position on the screen. I currently have it set up to increase/decrease the x and y values on every update depending on the mouse position. My code looks something like this:
function update(delta_time) -- delta_time is time in milliseconds since last update
if mouse.x > screen.width and mouse.y < screen.height then
player.x = player.x + player.velocity * delta_time
player.y = player.y + player.velocity * delta_time
end
That was just one example of a direction that I would define. My problem is that I don't want gigantic blocks of flow control checking which quadrant the mouse's x and y position is in and adjusting the player's x and y position accordingly. I would rather have fluid 360-degree movement that moves the player towards whatever angle the mouse sits at relative to the center.
Another problem I have is that when I move the player to the right of the screen, I simply increase the x value, but when I move the player to the northeast, I increase the x AND y values. This means the player goes 2 times faster depending on how fine the angle of movement is. When I make north-east-east and north-west-west angles, the player goes 3 times faster, because I increase/decrease y by 2 and x by 1. I have no idea how to fix this. I am really good with math and trig, but I am bad at applying it to my game. All I need is someone to switch the lights on for me and I will understand. Thank you for your time if you actually read all this.
Compute a vector from the player position to the mouse position. Normalize this vector (i.e., divide it by its length), then multiply by player.velocity, and then add it to player.x and player.y. This way, the speed is constant, and you get smooth movement in all directions.
-- define the difference vector
vec = {}
vec.x = mouse.x - player.x
vec.y = mouse.y - player.y
-- compute its length, to normalize
vec_len = math.sqrt(vec.x^2 + vec.y^2)
-- normalize and move the player; skip when the mouse is exactly on the
-- player, which would otherwise divide by zero
if vec_len > 0 then
vec.x = vec.x / vec_len
vec.y = vec.y / vec_len
player.x = player.x + vec.x * player.velocity * delta_time
player.y = player.y + vec.y * player.velocity * delta_time
end
