Let's say, for example, that I have a few weather stations. Each station measures (e.g.) temperature, and each is in a different location.
Now I'm somewhere between those stations. I know their coordinates and I know my coordinates. How do I estimate the temperature at my location?
One simple method: build a triangulation of the weather stations,
find the triangle that contains your location, and use barycentric coordinates
to get a weight for each station's influence. Then estimate the temperature as the weighted combination:
T(P) = w * T(A) + u * T(B) + v * T(C)
Example of barycentric coordinate calculation
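To make the weighted combination concrete, here is a minimal Python sketch (the function names and the closed-form 2x2 solve are my own; any triangulation library would supply the containing triangle):

```python
def barycentric_weights(p, a, b, c):
    """Barycentric weights (w, u, v) of point p in triangle (a, b, c).

    Closed-form solution of p = w*a + u*b + v*c with w + u + v = 1.
    Each point is an (x, y) tuple.
    """
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    px, py = p
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    u = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w, u, 1.0 - w - u

def interpolate_temperature(p, stations):
    """stations: three ((x, y), temperature) pairs, the triangle's vertices."""
    (a, ta), (b, tb), (c, tc) = stations
    w, u, v = barycentric_weights(p, a, b, c)
    return w * ta + u * tb + v * tc  # T(P) = w*T(A) + u*T(B) + v*T(C)
```

Inside the triangle all three weights lie between 0 and 1; a negative weight tells you P is outside, i.e. you picked the wrong triangle.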
I am trying to simulate N distances between a fixed point and other points randomly distributed around it within a given radius.
One way I've thought of is to simulate coordinates for the random points, then calculate the distances, then exclude distances greater than the given radius (say r = 250m):
X <- runif(N, -250, 250) # simulate random X coordinate
Y <- runif(N, -250, 250) # simulate random Y coordinate
distance <- sqrt(X^2 + Y^2) # calculate distance from random points to center
distance <- distance[distance < 250] # only include values within given radius
However, I am wondering if there is a way to simulate these distances without simulating the coordinates themselves. My end goal is to do this in JAGS, so solutions that work in JAGS are preferred. Is there a probability distribution that describes the distances to such random points? An ideal solution would look something like this:
distance ~ pDistribution(N, 250)
or alternatively in JAGS:
for (i in 1:N) {
  distance[i] ~ pDistribution(250)
}
#jlhoward had a good idea with thinking in polar coordinates - that's what got me going in the right direction. However, by drawing r <- runif(N, 0, 250) you would end up with points clustered around the center. For a uniformly random distribution of points throughout the circle, there must be more points at greater distances from the center (because circumference and area grow with radius). It turns out you can achieve this with r <- 250 * sqrt(runif(N, 0, 1)). For my problem, all I needed was to generate these distances (i.e., radii), not the actual points, so this code is an adequate solution. A great video on finding random points in a circle is what helped me finally figure it out.
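For comparison, the same sqrt trick in Python (`uniform_disk_radii` is my own name; the R one-liner `r <- 250 * sqrt(runif(N, 0, 1))` above is the equivalent):

```python
import math
import random

def uniform_disk_radii(n, radius, rng=random):
    """Distances from the centre for n points uniform over a disk.

    The sqrt transform gives P(r <= x) = (x/R)^2, which is the
    area-proportional CDF, so the implied points are uniform over the
    disk rather than clustered at the centre.
    """
    return [radius * math.sqrt(rng.random()) for _ in range(n)]
```

A quick sanity check on the output: for points uniform over a disk of radius R, the mean distance from the centre should be 2R/3, not R/2.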
I have a data set of around 36K hotels geocoded with latitude and longitude.
For each of them, I would need to know how many other hotels (and which ones) fall within concentric circles of different radii around it (2 miles, 5 miles, 10 miles).
For example, the dataset looks like this:
ID Latitude Longitude Rooms
1 N K 200
2 N K 150
3 N K 80
4 N K 140
5 N K 100
I would need a measure of density for each hotel in each concentric circle (normally calculated by dividing the number of rooms of the focal hotel by the total number of rooms in its concentric circle).
Normally, I would calculate the distance between every pair of points and then filter for the ones within each radius, but with 36k points that would take a long time: I would be computing the distance between every pair of points when, for each point, I probably only need the distances to 4-5 others at most.
Do you have an idea on how to calculate the distance and then the density efficiently using R or ArcGIS?
Thanks
It seems the best way to make your code more efficient is not to find a more efficient distance-calculating algorithm, but to apply that algorithm to only a small subset of candidate hotels.
You could do a rough "square" approximation very quickly:
1. Make a new dataset of hotels sorted by latitude.
2. Make a new dataset of hotels sorted by longitude.
Then, for each hotel:
3. Make two new empty lists: hotels_in_lat_range and hotels_in_long_range.
4. Start at your hotel in the latitude-sorted dataset and go up until you reach an upper limit, adding hotels to hotels_in_lat_range as you go along.
5. Go back down from your hotel until you reach a lower limit, again adding hotels to hotels_in_lat_range.
6. Repeat steps 4 and 5 on the longitude-sorted dataset, adding hotels to hotels_in_long_range.
7. For every hotel that is in both lists, calculate the distance between your test hotel and that hotel. If the distance is less than your circle radius, include it when you calculate the density.
For the upper and lower limits of latitude and longitude, I'd recommend using the following approximation (I wrote this in Python because I don't know R):
min_lat = max(-89.9, test_lat - 4 * math.degrees(test_rad / Earth_rad))
max_lat = min(89.9, test_lat + 4 * math.degrees(test_rad / Earth_rad))
min_long = max(
    -180.0,
    test_long - 4 * math.degrees(
        test_rad / (Earth_rad * min(math.cos(math.radians(min_lat)),
                                    math.cos(math.radians(max_lat))))
    )
)
max_long = min(
    180.0,
    test_long + 4 * math.degrees(
        test_rad / (Earth_rad * min(math.cos(math.radians(min_lat)),
                                    math.cos(math.radians(max_lat))))
    )
)
This is a reasonable approximation when your testing radius is significantly smaller than the Earth's radius. I'd recommend staying within 100 miles.
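A minimal Python sketch of the scheme above (function names and the `hotels` list-of-tuples format are my own invention): sort once by latitude, cut out the latitude band with binary search, then apply the longitude band and the exact great-circle check only to the survivors.

```python
import bisect
import math

EARTH_RAD_MI = 3959.0  # mean Earth radius in miles

def haversine_mi(lat1, lon1, lat2, lon2):
    # exact great-circle distance in miles between two lat/lon points (degrees)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RAD_MI * math.asin(math.sqrt(a))

def hotels_within(hotels, i, radius_mi):
    """Indices of hotels within radius_mi of hotels[i].

    hotels: list of (lat, lon) in degrees. A coarse lat/lon box computed
    from the angular radius prunes candidates before the exact check.
    (In real use, build the sorted index once, outside the per-hotel loop.)
    """
    lat0, lon0 = hotels[i]
    dlat = math.degrees(radius_mi / EARTH_RAD_MI)
    # a degree of longitude shrinks by cos(latitude); guard near the poles
    coslat = max(0.01, math.cos(math.radians(lat0)))
    dlon = dlat / coslat

    order = sorted(range(len(hotels)), key=lambda j: hotels[j][0])
    lats = [hotels[j][0] for j in order]
    lo = bisect.bisect_left(lats, lat0 - dlat)
    hi = bisect.bisect_right(lats, lat0 + dlat)

    out = []
    for j in order[lo:hi]:
        if j == i or abs(hotels[j][1] - lon0) > dlon:
            continue  # skip self, and anything outside the longitude band
        if haversine_mi(lat0, lon0, *hotels[j]) <= radius_mi:
            out.append(j)
    return sorted(out)
```

The density measure from the question is then the focal hotel's rooms divided by the total rooms over the returned indices plus the focal hotel itself.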
I'm ashamed to bother you with a stupid (but, to me, very necessary) question. I have a bunch of lat/lon points distributed almost randomly within a rectangle of ca. two by three degrees (latitude by longitude).
I need to calculate the maximum distance to the second-nearest neighbor as well as the maximum distance to the farthest neighbor. I calculated these using package spatstat,
d2 <- max(nndist(data[,2:3], k = 2))
dn <- max(nndist(data[,2:3], k = nrow(data) - 1))
, respectively, and the distances obtained ranged from 0.3 to 4.2.
I need these distances in kilometers.
So I supposed that the distances provided by nndist were expressed in radians.
If θ = a / r, where θ is the subtended angle in radians, a is the arc length, and r is the Earth's radius, then a = θr.
However, the distances transformed in this way are:
a = 6371 * 0.3 = 1911.3, and
a = 6371 * 4.2 = 26758.2
This is evidently wrong, since the maximum distance between the farthest points, measured with e.g. QGIS, is just 480 km…
Can anybody point out where I am mistaken?
Thanks a lot in advance!!!
nndist simply calculates the Euclidean distance; it does no unit conversion. You gave it values in degrees, so it returns values whose units are degrees (not radians).
Thus
6371*0.3*pi/180 = 33.36
will give an approximation of the distance between these points in kilometres.
A better approach would be to use great-circle distances (e.g. via the geosphere or gstat packages), or to project the lat/long coordinates onto an appropriate map projection (rgdal::spTransform will do this), after which nndist will calculate your distances in metres.
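A quick Python check of the conversion (both helpers are my own, not part of spatstat): multiplying a degree-valued distance by the Earth's radius overshoots by a factor of 180/π; converting degrees to radians first gives the right scale.

```python
import math

def deg_nndist_to_km(d_deg, earth_radius_km=6371.0):
    """Convert a distance measured in degrees (Euclidean distance on raw
    lat/lon) to an approximate arc length in km. Only reasonable for
    small extents away from the poles."""
    return earth_radius_km * math.radians(d_deg)

def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))
```

With the asker's numbers, deg_nndist_to_km(0.3) gives roughly the 33 km figure from the answer, and the 4.2-degree maximum comes out near the ~480 km measured in QGIS rather than thousands of km.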
I'm having trouble figuring out how to calculate line-of-sight (LOS) between two (lat, lon) points in R code. Any advice on how to approach this problem would be appreciated. I would like to use the raster package for reading in the terrain elevation data. It seems the spgrass package could be leveraged (based on http://grass.osgeo.org/grass70/manuals/r.viewshed.html), but I wanted to avoid loading up a GIS. Thanks.
If you just want to know if point A can see point B then sample a large number of elevations from the line joining A to B to form a terrain profile and then see if the straight line from A to B intersects the polygon formed by that profile. If it doesn't, then A can see B. Coding that is fairly trivial. Conversely you could sample a number of points along the straight line from A to B and see if any of them have an elevation below the terrain elevation.
If you have a large number of points to compute, or if your raster is very detailed, or if you want to compute the entire area visible from a point, then that might take a while to run.
Also, unless your data is over a large part of the earth, convert to a regular metric grid (eg a UTM zone) and assume a flat earth.
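A hedged Python sketch of the second variant above, sampling points along the straight line and comparing against the terrain (the `elevation` callable stands in for a DEM lookup; all names are my own):

```python
def line_of_sight(elevation, a, b, h1=0.0, h2=0.0, nsamples=200):
    """Can a see b over the terrain?

    elevation: callable (x, y) -> ground height, standing in for a DEM
    a, b: (x, y) in the same planar units; h1, h2: observer/target offsets.
    Samples the sight line and checks no terrain sample rises above it.
    """
    (x1, y1), (x2, y2) = a, b
    z1 = elevation(x1, y1) + h1
    z2 = elevation(x2, y2) + h2
    for k in range(nsamples + 1):
        t = k / nsamples
        x, y = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
        sight = z1 + t * (z2 - z1)  # height of the sight line at this sample
        if elevation(x, y) > sight:
            return False            # terrain blocks the view
    return True
```

In practice nsamples should be matched to the raster resolution, as the R answer below does.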
I don't know of any existing package having this functionality, but using GRASS really isn't that much of a hassle.
Here's some code that uses raster and plyr:
cansee <- function(r, xy1, xy2, h1=0, h2=0){
### can xy1 see xy2 on DEM r?
### r is a DEM in same x,y, z units
### xy1 and xy2 are 2-length vectors of x,y coords
### h1 and h2 are extra height offsets
### (eg top of mast, observer on a ladder etc)
xyz = rasterprofile(r, xy1, xy2)
np = nrow(xyz) - 1
h1 = xyz$z[1] + h1
h2 = xyz$z[np + 1] + h2   # elevation at the far end (last sample)
hpath = h1 + (0:np)*(h2-h1)/np
return(!any(hpath < xyz$z))
}
viewTo <- function(r, xy, xy2, h1=0, h2=0, progress="none"){
## xy2 is a matrix of x,y coords (not a data frame)
require(plyr)
aaply(xy2, 1, function(d){cansee(r,xy,d,h1,h2)}, .progress=progress)
}
rasterprofile <- function(r, xy1, xy2){
### sample a raster along a straight line between two points
### try to match the sampling size to the raster resolution
dx = sqrt( (xy1[1]-xy2[1])^2 + (xy1[2]-xy2[2])^2 )
nsteps = 1 + round(dx/ min(res(r)))
xc = xy1[1] + (0:nsteps) * (xy2[1]-xy1[1])/nsteps
yc = xy1[2] + (0:nsteps) * (xy2[2]-xy1[2])/nsteps
data.frame(x=xc, y=yc, z=r[cellFromXY(r,cbind(xc,yc))])
}
Hopefully fairly self-explanatory, but it maybe needs some real documentation. I produced a map with it (image not shown here) of the points from which a 50m high person can see a 2m high tower at the red dot. Yes, I got those numbers wrong when I ran it. It took about 20 minutes on my 4-year-old PC. I suspect GRASS could do this almost instantaneously, and more correctly too.
Say you have n GPS coordinates. How could you work out the central GPS point between them?
In case it helps anyone now or in the future, here's an algorithm that's valid even for points near the poles (if I haven't made a silly math mistake, that is ;-):
Convert the latitude/longitude coordinates to 3D Cartesian coordinates:
x = cos(lat) * cos(lon)
y = cos(lat) * sin(lon)
z = sin(lat)
Compute the average of x, the average of y, and the average of z:
x_avg = sum(x) / count(x)
y_avg = sum(y) / count(y)
z_avg = sum(z) / count(z)
Convert that average direction back to latitude and longitude (atan2 handles the quadrants that a bare arctan(y / x) would get wrong):
lat_avg = atan2(z_avg, sqrt(x_avg ** 2 + y_avg ** 2))
lon_avg = atan2(y_avg, x_avg)
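The three steps above, as one small Python function (`centroid_latlon` is my own name):

```python
import math

def centroid_latlon(points):
    """Mean direction of lat/lon points (degrees) via 3-D unit vectors.

    Works near the poles and across the 180° meridian; atan2 resolves
    the quadrants correctly.
    """
    x = y = z = 0.0
    for lat, lon in points:
        phi, lam = math.radians(lat), math.radians(lon)
        x += math.cos(phi) * math.cos(lam)
        y += math.cos(phi) * math.sin(lam)
        z += math.sin(phi)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon
```

For two points straddling the date line, e.g. (0°, 179°) and (0°, -179°), this correctly returns a centre on the 180° meridian, where a naive average of the longitudes would give 0°.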
Depends on what you mean by the central GPS point. You could simply take the average of all the points, as suggested by Stephen - but keep in mind that GPS coordinates are not continuous: this will fail spectacularly around discontinuities such as the poles or the 180° meridian.
In most cases you'll need to convert to a coordinate system that doesn't have this issue.
You could also compute the point that minimizes the sum of the distances to all the GPS points (the geometric median). You'll need to look into great-circle calculations for this.
Further, each GPS fix might carry a higher or lower degree of uncertainty; you should take that into account and weight the points accordingly.
What exactly are you trying to find out?
-Adam