Understanding the geometry response in HERE API

I am trying to get road geometry from a HERE API request of the form:
https://s.fleet.ls.hereapi.com/1/tile.json?layer=ROAD_GEOM_FC3&level=11&tilex=2157&tiley=1620&apiKey={MY_API_KEY}
Here is a typical geometry response:
LAT "5246282,,1,1,1"
LON "960310,30,24,13,10"
How exactly should I interpret this?
I am assuming the first point is 52.46282, 9.60310, but what is the logic behind this, and what do the subsequent comma-separated numbers mean?
A solution using the above numbers would be great.

Try this request:
https://fleet.ls.hereapi.com/1/doc/layer.json?apiKey={{HERE_API_KEY}}&layer=ROAD_GEOM_FC3
You can see that the attributes LAT and LON have this description:
"Latitude coordinates [10^-5 degree WGS84] along the polyline. Comma
separated. Each value is relative to the previous."
Example:
"5246282" has 5 decimals like 52.46282, the next value after the comma is a sum(positive value) or minus(negative value) on the previous value, like that: "5246282,5" = "52.46282,52.46287". If the next value is empty so repeat the last value again.
This means that:
LAT "5246282,,1,1,1"
LON "960310,30,24,13,10"
decodes to:
LAT "52.46282,52.46282,52.46283,52.46284,52.46285"
LON "9.60310,9.60340,9.60364,9.60377,9.60387"
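Here is a minimal R sketch of that decoding rule (the function name decode_coords is just illustrative, not part of the HERE API): split on commas, treat empty fields as a delta of 0, accumulate the deltas, and divide by 10^5.
# Decode a delta-encoded HERE coordinate string into degrees
decode_coords <- function(s) {
  parts <- strsplit(s, ",", fixed = TRUE)[[1]]
  parts[parts == ""] <- "0"          # an empty field repeats the previous value (delta = 0)
  cumsum(as.numeric(parts)) / 1e5    # deltas are in units of 10^-5 degrees (WGS84)
}
decode_coords("5246282,,1,1,1")
#[1] 52.46282 52.46282 52.46283 52.46284 52.46285
decode_coords("960310,30,24,13,10")
#[1] 9.60310 9.60340 9.60364 9.60377 9.60387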

Related

Moving spatial data off grid cell corners

I have a seemingly simple question that I can't seem to figure out. I have a large dataset of millions of data points. Each data point represents a single fish, with its biological information as well as when and where it was caught. I am running some statistics on these data and have been having issues, which I have finally tracked down to some data points having latitude and longitude values that fall exactly on the corners of the grid cells I am using to bin my data. When fish whose lats and longs fall exactly on grid cell corners are grouped into their appropriate grid cell, they end up being duplicated four times (once for each cell that touches the corner their lat and long identify).
Needless to say this is bad, and I need to force those animals to have lats and longs that don't put them exactly on a grid cell corner. I realize there are probably lots of ways to correct something like this, but what I really need is a simple way to identify latitudes and longitudes that have integer values, and then to modify them by a very small amount (randomly adding or subtracting) so as to shift them into a specific cell without creating a bias by shifting them all the same way.
I hope this explanation makes sense. I have included a very simple example in order to provide a workable problem.
fish <- data.frame(fish = 1:10,
                   lat  = c(25, 25, 25, 25.01, 25.2, 25.1, 25.5, 25.7, 25, 25),
                   long = c(140, 140, 140, 140.23, 140.01, 140.44, 140.2, 140.05, 140, 140))
In this fish data frame there are 10 fish, each with an associated latitude and longitude. Fish 1, 2, 3, 9, and 10 have integer lat and long values that will place them exactly on the corners of my grid cells. I need some way of shifting just these values by something like plus or minus 0.01.
I can identify which lats or longs are integers easily enough with something like:
library(dplyr)
near(fish$lat, as.integer(fish$lat))
But I am struggling to find a way to then modify all the integer values by some small amount.
To answer my own question: I was able to work this out this morning with some pretty basic code, see below. All it takes is writing a function that actually tests for whole numbers, since is.integer() checks how a value is stored rather than whether it is a whole number.
# is.integer() checks the storage type, so define a helper that tests whether a value is a whole number
is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
# Use ifelse() to jitter only the whole-number values of lat and long
fish$jitter_lat  <- ifelse(is.wholenumber(fish$lat),  fish$lat  + rnorm(nrow(fish), mean = 0, sd = 0.01), fish$lat)
fish$jitter_long <- ifelse(is.wholenumber(fish$long), fish$long + rnorm(nrow(fish), mean = 0, sd = 0.01), fish$long)
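For readers already using the pipe above, the same jitter can be written with dplyr; this is just an equivalent sketch of the same approach, assuming the is.wholenumber() helper defined above.
library(dplyr)
fish <- fish %>%
  mutate(jitter_lat  = if_else(is.wholenumber(lat),  lat  + rnorm(n(), mean = 0, sd = 0.01), lat),
         jitter_long = if_else(is.wholenumber(long), long + rnorm(n(), mean = 0, sd = 0.01), long))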

Extracting data from lower layers in a Rasterbrick

So I'm extracting data from a RasterBrick I made using the method from this question: How to extract data from a RasterBrick?
In addition to obtaining the data from the layer given by the date, I want to extract the data from months prior. My best guess is to do this with something like:
sapply(1:nrow(pts), function(i){extract(b, cbind(pts$x[i],pts$y[i]), layer=pts$layerindex[i-1], nl=1)})
So the extraction should look at layerindex i-1, which should then give the data for one month earlier. For example, a point with layerindex = 5 should look at layer 5-1 = 4.
However, it doesn't do this and seems to give either some random number or a duplicate from months prior. What would be the correct way to go about this?
Your code is taking the value from the layer of the previous point, not from the previous layer.
To see this, imagine we are looking at the point in row 2 (i = 2). The expression in your code that selects the layer is pts$layerindex[i-1], which is pts$layerindex[1]; in other words, the layer of the point in row 1.
The fix is easy enough. For clarity I will write the function separately:
foo <- function(i) extract(b, cbind(pts$x[i], pts$y[i]), layer = pts$layerindex[i] - 1, nl = 1)
sapply(1:nrow(pts), foo)
I have not tested it, but this should be all.

Aggregate geolocation data [long / lat]

I have a big dataset with a lot of geolocation data (long / lat) that I want to map based on frequency. I just want to show the frequencies of cities and areas, not of each exact location. Since the geo data may vary slightly within each city, the data has to be aggregated / clustered.
Unfortunately, just rounding the numbers does not work. I have already tried to create a matrix with the distance between each pair of points, but my vector memory is not sufficient. Is there a simpler way?
This is what the original data looks like:
$long $lat
12.40495 52.52001
13.40233 52.50141
13.37698 52.51607
13.38886 52.51704
13.42927 52.48457
9.993682 53.55108
9.992470 53.55334
10.000654 53.55034
11.58198 48.13513
11.51450 48.13910
... ...
The result should look like this:
$long $lat $count
13.40495 52.52001 5
9.993682 53.55108 3
11.58198 48.13513 2
... ... ...
EDIT 1:
To cluster the points to one single point, a range of 25-50 km is fine.
EDIT 2:
This is what the map looks like if I don't aggregate the points. I want to prevent the circles from overlapping.
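One way to avoid building a full distance matrix is to snap each point to a coarse grid and count points per cell. A rough sketch, assuming the coordinates sit in a data frame geo with columns long and lat (hypothetical names); a cell width of 0.25 degrees of latitude is roughly 28 km, which fits the 25-50 km range from EDIT 1.
library(dplyr)
cell <- 0.25  # grid width in degrees; ~28 km in latitude (longitude cells shrink towards the poles)
geo_agg <- geo %>%
  mutate(long_bin = round(long / cell) * cell,
         lat_bin  = round(lat / cell) * cell) %>%
  count(long_bin, lat_bin, name = "count")
The cell centres (long_bin, lat_bin) then stand in for the cities; if the exact city coordinates are needed, the most frequent original coordinate within each cell could be kept instead.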

Find which Geocoded points lie within a circle of radius 25 miles of a specific point

I'm trying to plot certain medical facilities that lie within a 25-mile radius of a certain geocoded point.
Dataset of facilities looks like this:
Name Lat Long Type Color
A 42.09336 -76.79659 X green
B 43.75840 -74.25250 X green
C 43.16816 -77.60332 Y blue
...
The list of facilities, however, spans the whole country (USA), but I only want to plot the facilities that fall within the circle. The center of the buffer circle is the point (long = -73.857932, lat = 41.514096) and the radius is 25 miles.
So for the dataset I need to plot, I have to filter the list of facilities down to those inside the circle, keeping their latitude, longitude, type and color.
I'm really new at this and on a tight deadline, so if someone could explain that would be great.
PS: I also want to count the type of facility (but I guess that would be a simple dplyr %>% n() once the filter is created, right?)
You can use the function distHaversine from the geosphere package. Note that it expects coordinates in (longitude, latitude) order (assuming df is your dataframe and -77, 43 are the longitude and latitude of your reference point):
geosphere::distHaversine(c(-77, 43), df[, c("Long", "Lat")]) / 1609.344 <= 25
# logical vector: TRUE for every facility within 25 miles
The default output is in meters, so dividing by 1609.344 converts it to miles.
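Applied to the actual centre from the question, a possible end-to-end sketch could look like this (the data frame name facilities is assumed, with the columns shown above):
library(dplyr)
library(geosphere)
center <- c(-73.857932, 41.514096)   # longitude, latitude of the reference point
facilities_25mi <- facilities %>%
  mutate(dist_miles = distHaversine(center, cbind(Long, Lat)) / 1609.344) %>%
  filter(dist_miles <= 25)
# Count the facilities of each type inside the circle
facilities_25mi %>% count(Type)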

Incorrect values returned by sp::over function?

I'm extracting elevation data of a route from a Digital Elevation Model using
my.elev <- over(new.points, mygrid)
new.points is a SpatialPoints object with the coordinates (long/lat) of about 7000 points, transformed to the CRS of mygrid
mygrid is a SpatialGridDataFrame with more than 8 million elements
(more info in my previous question)
Since I had several NA values in my.elev, I debugged my code and found that almost all the points in new.points are repeated more than once (in my route a few segments are crossed twice):
- the first occurrence has the correct corresponding my.elev value
- the second one has an NA value (or sometimes a quite different value)
I can easily solve the problem by eliminating the duplicated values in new.points, but I wonder why the over function doesn't return the same value for the same point.
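A minimal sketch of the duplicate-removal workaround mentioned above, assuming new.points and mygrid are as described:
library(sp)
# Flag coordinates that already appeared earlier in new.points
dup <- duplicated(coordinates(new.points))
# Query the grid only once per unique location
my.elev <- over(new.points[!dup, ], mygrid)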
