I am trying to use a user-drawn rectangle as a query against a Google Fusion Table containing KML polygon data. The problem is that the map displays the entire extent of any polygon found within the rectangle's bounds, and does not clip those polygons to the rectangle. Any help would be greatly appreciated.
Thanks
layerZoom = new google.maps.FusionTablesLayer({
  query: {
    select: 'geometry',
    from: tableid1,
    where: "ST_INTERSECTS(geometry, RECTANGLE(LATLNG(" + sw.lat() + "," + sw.lng() + "), LATLNG(" + ne.lat() + "," + ne.lng() + ")))"
  }
});
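For context, here is a minimal sketch of how sw and ne might be obtained from the user-drawn rectangle, assuming it is a google.maps.Rectangle created with a DrawingManager (the drawingManager variable and the event wiring are illustrative, not part of the original code):

// Assumes 'drawingManager' is a google.maps.drawing.DrawingManager attached to the map.
google.maps.event.addListener(drawingManager, 'rectanglecomplete', function (rectangle) {
  var bounds = rectangle.getBounds();
  var sw = bounds.getSouthWest();  // google.maps.LatLng of the south-west corner
  var ne = bounds.getNorthEast();  // google.maps.LatLng of the north-east corner
  // sw and ne are then interpolated into the ST_INTERSECTS where clause above
});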
I'm using VegaLite and VegaDatasets to show a Mercator projection of the world map, and plot a point at a specific latitude and longitude. So far, this works to produce the map:
using VegaLite, VegaDatasets, DataFrames

data = DataFrame(x=[-45], y=[35])
scale = 100
world = dataset("world-110m")

@vlplot(width=700, height=500,
    config={
        view={stroke=:transparent}
    }
) +
@vlplot(
    data={
        values=world,
        format={type=:topojson, feature="countries"}
    },
    mark={type=:geoshape, stroke=:darkgray, strokeWidth=1},
    projection={type=:mercator, scale=scale, center=[0, 35]},
    color={value="#eee"}
)
producing the expected world map. However, when I plot the point in the dataframe "data", I try to set the domain in the X and Y directions to (-180, 180) and (-90, 90), respectively:
+ @vlplot(data=data,
    :point,
    x={:x, scale={domain=(-180, 180)}},
    y={:y, scale={domain=(-90, 90)}}
)
resulting in a plot where the domains do not reach the edges of the horizontal and vertical axes. Additionally, the point seems to be plotted incorrectly within the defined grid.
Using the @vlplot syntax, how can I properly set up a latitude/longitude grid on my map, and plot a single point?
For plotting geographic shapes and points in VegaLite, you should specify the coordinates with the longitude and latitude encoding channels rather than x and y. So your last block should be (the projection should be included):
+ @vlplot(data=data,
    :point,
    longitude=:x,
    latitude=:y,
    projection={type=:mercator, scale=scale, center=[0, 35]}
)
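For reference, this is roughly the Vega-Lite layer specification that the point layer compiles to; the longitude/latitude channels (rather than x/y scales) are what place the point through the map projection. The spec below is an illustrative sketch, not output copied from the compiler:

// Approximate Vega-Lite unit spec for the point layer.
var pointLayer = {
  data: { values: [{ x: -45, y: 35 }] },
  mark: "point",
  projection: { type: "mercator", scale: 100, center: [0, 35] },
  encoding: {
    longitude: { field: "x", type: "quantitative" },
    latitude:  { field: "y", type: "quantitative" }
  }
};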
I have an ee.Image that I export to TFRecord, following this tutorial: https://developers.google.com/earth-engine/guides/tfrecord.
I use this function:
task = ee.batch.Export.image.toDrive(
    image=image,
    description=name,
    folder=folder,
    fileNamePrefix=name,
    region=region,
    scale=30,
    fileFormat='TFRecord',
    formatOptions={
        'patchDimensions': [128, 128],
        'kernelSize': [1, 1],
        'compressed': True,
    }
)
task.start()  # the export only runs once the task is started
After classifying my image, I want to convert it to KML. For that, I need the geodesic coordinates of my image's corners.
Normally, I would get them using ee.image.geometry().bounds(). However, when converting the ee.Image to TFRecord, the patch dimensions (128, 128) do not evenly divide the bounding box, so the border tiles along the greatest x/y edges are dropped. Hence, the coordinates of the 4 corners of my image change (except for the top-left corner).
So, given the coordinates of the top-left corner of my image and knowing the number of pixels (128, 128), I want to recover the geodesic coordinates of the four corners.
How do I get the geodesic size of my pixel? That is:
x2 = x1 + size*128
y2 = y1 + size*128
Note: I know that my pixel size is 30 meters!
Can anyone help? Thanks
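For reference, here is a rough sketch of that corner arithmetic (written in JavaScript for illustration; the ~111,320 m per degree equirectangular approximation and the function/variable names are assumptions, not Earth Engine API calls):

// Approximate the patch corners from the top-left corner, a 30 m pixel and a
// 128 x 128 patch, using a simple spherical approximation (not an exact
// geodesic computation, but usually close enough at this scale).
var pixelMeters = 30;
var patchSize = 128;
var metersPerDegLat = 111320;  // roughly the length of one degree of latitude

function patchCorners(topLeftLon, topLeftLat) {
  var metersPerDegLon = metersPerDegLat * Math.cos(topLeftLat * Math.PI / 180);
  var dLon = (pixelMeters * patchSize) / metersPerDegLon;  // patch width in degrees
  var dLat = (pixelMeters * patchSize) / metersPerDegLat;  // patch height in degrees
  return {
    topLeft:     [topLeftLon,        topLeftLat],
    topRight:    [topLeftLon + dLon, topLeftLat],
    bottomLeft:  [topLeftLon,        topLeftLat - dLat],
    bottomRight: [topLeftLon + dLon, topLeftLat - dLat]
  };
}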
I am able to get the current position of the camera, i.e., its x, y, z coordinates in A-Frame.
In the code below, I am making my camera move forward.
function move_camera_forward() {
  var x = $("#cam").attr("position").x;
  var y = $("#cam").attr("position").y;
  var z = $("#cam").attr("position").z;
  // Step 0.2 units along the negative z axis
  var updated_pos = x + " " + y + " " + String(Number(z) - 0.2);
  $("#cam").attr("position", updated_pos);
}
But this moves the camera along the z axis irrespective of the direction the camera is facing. I want to move the camera based on the direction it is facing. If the camera is facing, let's say, 45 degrees, I want to update all three coordinates. For this I need to find out which direction the camera is facing. How can I do this? Does it have something to do with fov?
I finally figured out how to do this. The camera has a rotation attribute which gives me its angle of rotation. With this data and a bit of trigonometry, we can find the updated position. The code below moves the camera in the direction the user is looking.
var new_x = 0;
var new_z = 0;

function move_camera_forward() {
  var x = $("#cam").attr("position").x;
  var y = $("#cam").attr("position").y;
  var z = $("#cam").attr("position").z;
  // Yaw of the camera, converted from degrees to radians
  var radian = -($("#cam").attr("rotation").y) * (Math.PI / 180);
  // Accumulate a 0.1-unit step along the direction the camera faces
  new_z = new_z + (0.1 * Math.cos(radian));
  new_x = new_x + (0.1 * Math.sin(radian));
  var new_pos = new_x + " " + y + " " + (-new_z);
  console.log(new_pos);
  $("#cam").attr("position", new_pos);
}
You can dive into the three.js API to get any additional info that A-Frame doesn't necessarily bubble to the surface. You can get the camera object using
var camera = document.querySelector('[camera]').object3D
and then you have access to all of the vector data for the camera. To get the direction the camera is facing, you can use camera.getWorldDirection(), which returns a Vector3 with x, y and z values.
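As a minimal sketch of how that direction vector could drive the forward movement (the #cam id is carried over from the question, and fetching the underlying THREE camera via A-Frame's getObject3D is an assumption about how the scene is set up):

// Step the camera 0.2 units toward wherever it is currently looking.
function move_camera_forward() {
  var cameraEl = document.querySelector('#cam');
  var threeCamera = cameraEl.getObject3D('camera');  // underlying THREE.PerspectiveCamera
  var dir = new THREE.Vector3();
  threeCamera.getWorldDirection(dir);                // unit vector pointing where the camera looks
  var pos = cameraEl.getAttribute('position');
  cameraEl.setAttribute('position', {
    x: pos.x + dir.x * 0.2,
    y: pos.y,                                        // keep the height unchanged
    z: pos.z + dir.z * 0.2
  });
}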
If I have 2 points, the first one "29.98671,31.21431" and the second one "29.97864,31.17557", I put them into Google Maps and get the route between them. I also have another point, "29.987201, 31.188547", and I want to find the nearest point on that road to it.
Is there a way to do this using R? Help please.
1) Get route between two points.
library(ggmap)
# output = 'all' so we get the polyline of the path along the road
my_route <- route(from = "29.98671,31.21431",
to = "29.97864,31.17557",
structure = "route",
output = "all")
my_polyline <- my_route$routes[[1]]$legs[[1]]$steps[[1]]$polyline$points
2) Decode the polyline into a series of points, using the function from this linked question:
How to decode encoded polylines from OSRM and plotting route geometry?
# DecodeLineR <- function(encoded) {... see linked question ...}
route_points <- DecodeLineR(my_polyline)
3) Plot all the route points, along with our new point
new_point <- data.frame(lat=29.987201, lng=31.188547)
ggplot(route_points, aes(x=lng, y=lat)) +
geom_point(shape=21, alpha=0.5) +
geom_point(data = new_point, color = 'red') +
coord_quickmap() +
theme_linedraw()
4) Find which "route point" is closest to the new point
# get each distance in miles (great circle distance in miles)
library(fields)
route_points$distance_to_new <- t(rdist.earth(new_point, route_points))
# it's this one:
route_points[which.min(route_points$distance_to_new), ]
Answer: the 76th point on the polyline is closest, at ~0.019 miles away.
lat lng distance_to_new
76 29.98688 31.18853 0.01903183
My tile engine is coming along. It can draw square, hexagonal and isometric staggered viewpoints. Where I'm struggling is with the isometric rotated (or diamond) viewpoint. Below is a picture of a 10x10 diamond map and the (simplified) code used to draw it. The tiles are 128x64.
http://garrypettet.com/images/forum_images/5%20col%20x%205%20rows.png
for row = 0 to rowLimit
  for column = 0 to columnLimit
    x = column * (TileWidth/2) + (row * (TileWidth/2)) + Origin.X
    y = (column * (TileHeight/2)) - (row * (TileHeight/2)) + Origin.Y
    // Draw the tile's image
    buffer.Graphics.DrawPicture(Tiles(column, row).Image, x, y)
  next column
next row

// Draw the buffer to the canvas
g.DrawPicture(buffer, 0, 0)
I know that this will draw the whole contents of Tiles() and not just the tiles visible on screen, but I'm trying to get the basics working first.
What I can't figure out is an easy way to convert x,y coordinates on the map to tile column,row coordinates. I tried to reverse:
x = column * (TileWidth/2) + (row * (TileWidth/2)) + Origin.X
y = (column * (TileHeight/2)) - (row * (TileHeight/2)) + Origin.Y
to work out column and row given x and y, and came up with this:
column = ((x/2) - (Origin.X/2) + y + Origin.Y) / TileHeight
row = ((x/2) - (Origin.X/2) - y - Origin.Y) / TileHeight
But that doesn't seem to work. Can anyone think of a better way to do this? Is there a better way to transform a grid of rectangles into a diamond and back again (given that I know very little about matrices)?
Thanks,
I am not sure I can follow the details of your problem, but if you are just looking to solve your formulas for column and row in terms of x and y, then
column = (x - Origin.X)/TileWidth + (y - Origin.Y)/TileHeight
row    = (x - Origin.X)/TileWidth - (y - Origin.Y)/TileHeight
The easiest way to get these expressions is to divide the x equation by TileWidth and the y equation by TileHeight; adding the two results gives column, and subtracting them gives row.
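A quick round-trip check of those formulas, sketched in JavaScript with the 128x64 tile size from the question (the origin values are arbitrary; the tile engine itself is not JavaScript):

// Forward (column,row) -> (x,y) and inverse (x,y) -> (column,row) for the diamond layout.
var TileWidth = 128, TileHeight = 64;
var Origin = { X: 400, Y: 50 };

function toScreen(column, row) {
  return {
    x: column * (TileWidth / 2) + row * (TileWidth / 2) + Origin.X,
    y: column * (TileHeight / 2) - row * (TileHeight / 2) + Origin.Y
  };
}

function toTile(x, y) {
  return {
    column: (x - Origin.X) / TileWidth + (y - Origin.Y) / TileHeight,
    row:    (x - Origin.X) / TileWidth - (y - Origin.Y) / TileHeight
  };
}

var p = toScreen(3, 7);        // { x: 1040, y: -78 }
console.log(toTile(p.x, p.y)); // { column: 3, row: 7 }

In practice you would floor the results of toTile to get integer tile indices, since an arbitrary screen point lands inside a tile rather than exactly on its anchor.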