R: find distance between line and point at fixed bearing angles [duplicate]

This question already has answers here:
Find nearest distance from spatial point with direction specified (3 answers)
This is my reproducible example:
########################################
library(sf)
# matrix of lon lat for the definition of the linestring
m<-rbind(
c(12.09136, 45.86471),
c(12.09120, 45.86495),
c(12.09136, 45.86531),
c(12.09137, 45.86540),
c(12.09188, 45.86585),
c(12.09200, 45.86592),
c(12.09264, 45.86622),
c(12.09329, 45.86624),
c(12.09393, 45.86597),
c(12.09410, 45.86585),
c(12.09423, 45.86540),
c(12.09411, 45.86495),
c(12.09393, 45.86471),
c(12.09383, 45.86451),
c(12.09329, 45.86414),
c(12.09264, 45.86413),
c(12.09200, 45.86425),
c(12.09151, 45.86451),
c(12.09136, 45.86471)
)
# define a linestring
ls<-st_linestring(m)
# create a simple feature with appropriate crs
ls<-st_sfc(ls, crs=4326)
# and now again going through the very same
# definition process for a point
# define a point
pt <- st_point(c(12.09286,45.86557))
# create simple feature with appropriate crs
pt<-st_sfc(pt, crs = 4326)
plot(ls)
plot(pt, add=TRUE)
# this is computing the minimum distance from the point to the line
st_distance(ls, pt)
###############
Given the above toy dataset, I need a proper method to calculate:
1 - the distance from each vertex of the line to the given point: this is probably easily accomplished by calculating the distance between each pair of points (line vertex vs. point) with the Pythagorean theorem, although I'm dubious about that because of the CRS in use (i.e. EPSG 4326, in degree units), so I probably first need to convert the whole dataset to another reference system (with metric units) - see the sketch after this list...
2 - the distance between the point and the line at fixed bearing angles (10°, 20°, 30°, ..., 360° from North): and this is where I'm really lost...
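For point 1, this is a rough sketch of what I have in mind (assuming EPSG:32633, UTM zone 33N, is an acceptable metric CRS for this area; please correct me if this is not the idiomatic 'sf' way):
ls_m <- st_transform(ls, 32633)   # line reprojected to metres
pt_m <- st_transform(pt, 32633)   # point reprojected to metres
vtx <- st_cast(ls_m, "POINT")     # one POINT per vertex of the line
st_distance(vtx, pt_m)            # distance (in metres) from every vertex to the point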
Please give me some help on how to properly proceed with the calculation, possibly using the 'sf' standard that I'm now trying to get familiar with.
Thanks

Thank you for pointing me in the right direction.
I worked out my final solution, which I'm posting here for the sake of completeness.
# my reproducible example
library(sf)
# matrix of lon lat for the definition of the linestring
m<-rbind(
c(12.09136, 45.86471),
c(12.09120, 45.86495),
c(12.09136, 45.86531),
c(12.09137, 45.86540),
c(12.09188, 45.86585),
c(12.09200, 45.86592),
c(12.09264, 45.86622),
c(12.09329, 45.86624),
c(12.09393, 45.86597),
c(12.09410, 45.86585),
c(12.09423, 45.86540),
c(12.09411, 45.86495),
c(12.09393, 45.86471),
c(12.09383, 45.86451),
c(12.09329, 45.86414),
c(12.09264, 45.86413),
c(12.09200, 45.86425),
c(12.09151, 45.86451),
c(12.09136, 45.86471)
)
# define the linestring
ls<-st_linestring(m)
# create a simple feature linestring with appropriate crs
ls<-st_sfc(ls, crs=4326)
# and now again going through the very same
# definition process for a point
# define the origin point
pt <- st_point(c(12.09286,45.86557))
# create simple feature point with appropriate crs
pt<-st_sfc(pt, crs = 4326)
plot(ls)
plot(pt, add=TRUE)
# get minimum distance from the origin point to the line
dist_min<-st_distance(ls, pt)
# get coordinates of the origin point
pt_orig<-st_coordinates(pt)
# load library for later use of the function destPoint()
library(geosphere)
# create vector of bearing angles in 10-degree steps
b_angles<-seq(0, 350, 10)
# create empty container for final result as data frame
result<-data.frame(bearing=NULL, distance=NULL)
for(i in 1:length(b_angles)){
result[i,"bearing"]<-b_angles[i]
# calculate destination point coordinates at bearing angle i
# and at a fixed safe distance (i.e. 100 times the minimum distance)
# so as to avoid a null intersection in the next step
pt_dest<-destPoint(p=pt_orig, b=b_angles[i],d=dist_min*100)
# define linestring from origin to destination
b_ls<-st_sfc(st_linestring(rbind(pt_orig, pt_dest)), crs=4326)
# get the intersection point between two features
pt_int<-st_intersection(ls, b_ls)
# get the distance
d<-st_distance(pt, pt_int)
result[i,"distance"]<-d
}
I stuck as much as possible to the "sf" approach, which gives the following warning inside the for loop whenever st_intersection() is executed: "although coordinates are longitude/latitude, st_intersection assumes that they are planar".
Considering the short distances I'm working with, this seems an acceptable approximation.
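If the warning is a concern, a variant I can think of (just a sketch, assuming EPSG:32633 is an acceptable projected CRS for this area) is to transform both geometries before intersecting, so that the planar assumption actually holds:
ls_m   <- st_transform(ls, 32633)
b_ls_m <- st_transform(b_ls, 32633)
pt_int <- st_intersection(ls_m, b_ls_m)                  # planar intersection is now legitimate
d <- st_distance(st_transform(pt, 32633), pt_int)        # distance in metres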
By the way, as far as I understand, there is no function corresponding to geosphere::destPoint within the "sf" package.
Thanks

Related

Identifying points located near to a polygon's boundary

I am trying to identify all points (postcodes in my case) that are located near to the coastline of the UK (i.e., a polygon). I am using R to process this.
I downloaded the geographical outline of United Kingdom from here as a shapefile. A list of all postcodes for the UK were accessed from the ONS here. Please note that the latter file is very large (211MB zipped).
To begin, I loaded both files into R and then converted them to the same coordinate reference system (OSGB1936; 27700). For the polygon of the UK, I converted this to lines that represent the boundary/coastline (note that while Northern Ireland shares a common boundary with Ireland, I will subset any postcodes erroneously matched as near the coastline by lat/long later). I then converted the points into spatial points.
# Load libraries
library(sf)
library(data.table)
# Load data
uk_shp <- read_sf("./GBR_adm/GBR_adm0.shp") # Load UK shapefile (ignore that the download file says GBR; it is the UK)
uk_shp <- st_transform(uk_shp, crs = 27700) # Convert to co-ordinate reference system (CRS) that allow buffers in correct units later (note: 4326 is World CRS)
uk_coast <- st_cast(uk_shp,"MULTILINESTRING") # Convert polygon to a line (i.e., coastline)
# Load in postcodes
pcd <- fread("./ONSPD_FEB_2022_UK/Data/ONSPD_FEB_2022_UK.csv") # Load all postcodes for Great Britain - this is a very large file so I also create a single
pcd <- pcd[, c(1:3, 43:44)] # Drop unnecessary information/columns to save memory
# Convert to spatial points data frame
pcd_sp <- pcd %>% # For object of postcodes
st_as_sf(coords = c("long", "lat")) %>% # Define as spatial object and identify which columns tell us the position of points
st_set_crs(27700) # Set CRS
I originally thought the most efficient approach would be to define what a coastal region is (here, within 5 km of the coastline), create a buffer around the coastline to represent that, and then use a point-in-polygon function to select all points within the buffer. However, the code below had not finished running overnight, which probably suggests that it was the incorrect approach, and I am unsure why it is taking so long.
uk_buf <- st_buffer(uk_coast, 5000) # Create 5km buffer around the coastline
pcd_coastal <- st_intersection(uk_buf, pcd_sp) # Point-in-polygon (i.e., keep only the postcodes that are located in the buffer region)
So I changed my approach to calculate the straight-line distance of each point to the nearest coastline. In running the code below, it gives incorrect distances. For example below, I select one postcode (AB12 4XP) which is located ~2.6km from the coastline, however the code below gives ~82km which is very wrong. I had tried st_nearest_feature() but could not get it to work (it may do, but was beyond my attempts).
test <- pcd_sp[pcd_sp$pcd == "AB124XP",] # Subset test postcode
dist <- st_distance(test, uk_coast, by_element = TRUE, which = "Euclidean") # Calculate distance
I am unsure how to proceed from here - I don't think it is the wrong CRS. It might be that the multilinestring conversion is causing problems. Does anyone have suggestions what to do?
sf has an st_is_within_distance function that can test if points are within a distance of a line. My test data is 10,000 random points in the bounding box of the UK shape, and the UK shape in OSGB grid coordinates.
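One way to build such test data (just a sketch; any random sample over the bounding box will do) is:
pts <- st_sf(geometry = st_sample(st_as_sfc(st_bbox(uk_coast)), 10000))  # 10,000 random points in the UK bounding box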
> system.time({indist = st_is_within_distance(uk_coast, pts, dist=5000)})
user system elapsed
30.907 0.003 30.928
But this isn't building a spatial index. The docs say that it does build a spatial index if the coordinates are "geographic" and the flag for using spherical geometry is set. I don't understand why it can't build one for cartesian coordinates, but let's see how much faster it is...
Transform takes no time at all:
> ukLL = st_transform(uk_coast, 4326)
> ptsLL = st_transform(pts, 4326)
Then test...
system.time({indistLL = st_is_within_distance(ukLL, ptsLL, dist=5000)})
user system elapsed
1.405 0.000 1.404
Just over a second. Any difference between the two? Let's see:
> setdiff(indistLL[[1]], indist[[1]])
[1] 3123
> setdiff(indist[[1]], indistLL[[1]])
integer(0)
So point 3123 is in the set using lat-long, but not the set using OSGB. There's nothing in OSGB that isn't in the lat-long set.
Quick plot to show the selected points:
> plot(uk_coast$geometry)
> plot(pts$geometry[indistLL[[1]]], add=TRUE)

How can I work out the distance of data points (lat/long) from the edges of a shapefile in R and then apply a criterion to the data points?

I have data points of a species observed using camera traps and would like to measure the distance of each camera trap site (CameraStation) to the edge of a national park using R. I have a shapefile of the park (shp) and want to apply a criterion to CameraStation(s) which are <5km from the edge. My data frame (df) consists of multiple events/observations (EventID) per CameraStation. The aim is to analyse when events near the park edge are most frequent given other environmental factors such as Season, Moon Phase and DayNight (also columns in DF).
I found a package called distance in R but this is for distance sampling and not what I want to do. Which package is relevant in this situation?
I expect the following outcome:
EventID  CameraStation  Distance(km)  Within 5km
0001     Station 1       4.3          Yes
0002     Station 1       4.3          Yes
0003     Station 2      16.2          No
0004     Station 3       0.5          Yes
...
Here's a general solution, adapted from Spacedman's answer to this question at gis.stackexchange. Note: this solution requires working in a projected coordinate system. You can transform to a projected CRS if needed using spTransform.
The gDistance function of the rgeos package calculates the distance between geometries, but for the case of points inside a polygon the distance is zero. The trick is to create a new "mask" polygon where the original polygon is a hole cut out from the mask. Then we can measure the distance between points in the hole and the mask, which is the distance to the edge of the original polygon that we really care about.
We'll use the shape file of the Yellowstone National Park Boundary found on this page.
library(sp) # for SpatialPoints and proj4string
library(rgdal) # to read shapefile with readOGR
library(rgeos) # for gDistance, gDifference, and gBuffer
# ab67 was the name of the shape file I downloaded.
yellowstone.shp <- readOGR("ab67")
# gBuffer enlarges the boundary of the polygon by the amount specified by `width`.
# The units of `width` (meters in this case) can be found in the proj4string
# for the polygon.
yellowstone_buffer <- gBuffer(yellowstone.shp, width = 5000)
# gDifference calculates the difference between the polygons, i.e. what's
# in one and not in the other. That's our mask.
mask <- gDifference(yellowstone_buffer, yellowstone.shp)
# Some points inside the park
pts <- list(x = c(536587.281264245, 507432.037861251, 542517.161278414,
477782.637790409, 517315.171218198),
y = c(85158.0056377799, 77251.498952222, 15976.0721391485,
40683.9055315169, -3790.19457474617))
# Sanity checking the mask and our points.
plot(mask)
points(pts)
# Put the points in a SpatialPointsDataFrame with camera id in a data field.
spts.df <- SpatialPointsDataFrame(pts, data = data.frame(Camera = ordered(1:length(pts$x))))
# Give our SpatialPointsDataFrame the same spatial reference as the polygon.
proj4string(spts.df) <- proj4string(yellowstone.shp)
# Calculate distances (km) from points to edge and put in a new column.
spts.df$km_to_edge <- apply(gDistance(spts.df, mask, byid=TRUE), 2, min)/1000
# Determine which records are within 5 km of an edge and note in new column.
spts.df$edge <- ifelse(spts.df$km_to_edge < 5, TRUE, FALSE)
# Results
spts.df
# coordinates Camera km_to_edge edge
# 1 (536587.3, 85158.01) 1 1.855010 TRUE
# 2 (507432, 77251.5) 2 9.762755 FALSE
# 3 (542517.2, 15976.07) 3 11.668700 FALSE
# 4 (477782.6, 40683.91) 4 4.579638 TRUE
# 5 (517315.2, -3790.195) 5 8.211961 FALSE
Here's a quick solution.
Simplify the outline of your shapefile into N points. Then calculate the minimum distance for each camera trap to every point in the outline of the national park.
library(sp)        # for spsample() and coordinates()
library(geosphere) # for distm() and distHaversine()
n <- 500 ##The number of points summarizing the shapefile
NPs <- ##Your shapefile goes here
NP.pts <- spsample(NPs, n = n, type = "regular")
CP.pts <- ## Coordinates for a single trap
distances<-distm(coordinates(CP.pts), coordinates(NP.pts), fun = distHaversine)/1000
##Distance in Km between the trap to each point in the perimeter of the shapefile:
distances
Use distances to find the minimum distance between the shapefile and that given trap. This approach can easily be generalized using for loops or apply functions.
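For example, a minimal sketch (assuming CP.pts now holds the coordinates of all camera traps, one row per trap, rather than a single trap):
d_km <- distm(coordinates(CP.pts), coordinates(NP.pts), fun = distHaversine)/1000
min_dist_km <- apply(d_km, 1, min)  # nearest distance (km) from each trap to the park outline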
I had a problem with the points data frame and shapefile being projected, so instead I used the example in this link to answer my question:
https://gis.stackexchange.com/questions/225102/calculate-distance-between-points-and-nearest-polygon-in-r
Basically, I used this code (reordered so the points are promoted to a spatial object before computing distances):
df2 # my data frame with points (has Longitude and Latitude columns)
shp # my shapefile (non-projected)
coordinates(df2) <- ~Longitude+Latitude # Longitude and Latitude are columns in my df
dist.mat <- geosphere::dist2Line(p = df2, line = shp)
dmat <- data.frame(dist.mat) # turned the result (distance, lon, lat, ...) into a data frame
dmat$km5 <- ifelse(dmat$distance < 5000, TRUE, FALSE) # distance is in meters (5000 m = 5 km)
coordinates(dmat) <- ~lon+lat # nearest points on the park edge, as spatial points
df2$distance <- dmat$distance # added the new Distance column to my df

Does this point stand within a polygon?

Very simple situation: a polygon defines a geographical area and I want to know whether a point, given by its GPS coordinates, lies within that polygon.
I went through many SO questions and have tried various functions and packages like sp, but cannot make out why it fails.
I tried with this very simple function:
https://www.rdocumentation.org/packages/SDMTools/versions/1.1-221/topics/pnt.in.poly
install.packages("SDMTools") # version 1.1-221
library(SDMTools)
## Coordinates of the polygon corners
lat <- c(48.43119, 48.43119, 48.42647, 48.400031, 48.39775, 48.40624, 48.42060, 48.42544, 48.42943 )
lon <- c(-71.06970, -71.04180, -71.03889, -71.04944, -71.05991, -71.06764, -71.06223, -71.06987, -71.07004)
pol = cbind(lat=lat,lng=lon)
## Point to be tested
x <- data.frame(lng=-71.05609, lat=48.40909)
## Visualization, this point clearly stands in the middle of the polygon
plot(rbind(pol, x))
polygon(pol,col='#99999990')
## Is that point in the polygon?
out = pnt.in.poly(x, pol)
## Well, no (pip=0)
print(out)
The example given for this function works for me, but this simple case does not... why is that?
I have not used the method that you are using, but I have one from within sp which works flawlessly on your point and polygon.
I cherry-picked your code and left the lat and lon as vectors and the point coordinates as values to suit the function's requirements.
But you could just as easily have made a data frame and used the columns explicitly as lat/lon values.
Here is the gist of it:
require(sp)
## Your polygon
lat <- c(48.43119, 48.43119, 48.42647, 48.400031, 48.39775, 48.40624, 48.42060, 48.42544, 48.42943 )
lon <- c(-71.06970, -71.04180, -71.03889, -71.04944, -71.05991, -71.06764, -71.06223, -71.06987, -71.07004)
## Your Point
lng=-71.05609
lt=48.40909
# sp function which tests for points in polygons
point.in.polygon(lt, lng, lat, lon, mode.checked=FALSE)
And here is the output:
[1] 1
The interpretation of this from the documentation:
integer array values are:
0 point is strictly exterior to polygon
1 point is strictly interior to polygon
2 point lies on the relative interior of an edge of polygon
3 point is a vertex of polygon
As your point is a 1 based on this, it should be wholly within the polygon, as your map shows! The key to getting good output with these types of data is serving the variables in the right formats.
You could just as easily have a data frame df with df$lat and df$lon as the two polygon variables, as well as a test frame test with test$lat and test$lon as a series of points. You would just substitute each of those into the call as such:
point.in.polygon(test$lat, test$lon, df$lat, df$lon, mode.checked=FALSE)
And it would return a vector of 0s, 1s, 2s and 3s, one value per point in test.
Just be sure you get it in the right format first!
I can't see it explicitly stated in the documentation for ?pnt.in.poly, but it appears the ordering of the lng and lat columns matters. You need to swap the column ordering in your pol and it works.
pol = cbind(lat=lat, lng=lon)
pnt.in.poly(x, pol)
# lng lat pip
# 1 -71.05609 48.40909 0
pol = cbind(lng=lon, lat=lat)
pnt.in.poly(x, pol)
# lng lat pip
# 1 -71.05609 48.40909 1
In spatial geometry, lng is often thought of as the x-axis and lat as the y-axis, which you'll see is reversed in your plot().

Create Grid in R for kriging in gstat

lat long
7.16 124.21
8.6 123.35
8.43 124.28
8.15 125.08
Consider these coordinates, these coordinates correspond to weather stations that measure rainfall data.
The intro to the gstat package in R uses the meuse dataset. At some point in this tutorial: https://rpubs.com/nabilabd/118172, the author makes use of a "meuse.grid" in this line of code:
data("meuse.grid")
I do not have such a file and I do not know how to create it. Can I create one using these coordinates? Or at least point me to material that discusses how to create a custom grid for a custom area (i.e. not using administrative boundaries from GADM).
I'm probably wording this wrong; I don't even know if this question makes sense to R-savvy people. Still, I would love to hear some direction, or at least tips. Thanks a lot!
Total noob at R and statistics.
EDIT: See the sample grid used in the tutorial I posted; that's the kind of thing I want to make.
EDIT 2: Would this method be viable? https://rstudio-pubs-static.s3.amazonaws.com/46259_d328295794034414944deea60552a942.html
I am going to share my approach to create a grid for kriging. There are probably more efficient or elegant ways to achieve the same task, but I hope this will be a start to facilitate some discussions.
The original poster was thinking about 1 km for every 10 pixels, but that is probably too much. I am going to create a grid with cell size equal to 1 km * 1 km. In addition, the original poster did not specify an origin of the grid, so I will spend some time determining a good starting point. I also assume that the Spherical Mercator projected coordinate system is the appropriate choice for the projection. This is a common projection for Google Maps or OpenStreetMap.
1. Load Packages
I am going to use the following packages. sp, rgdal, and raster are packages that provide many useful functions for spatial analysis. leaflet and mapview are packages for quick exploratory visualization of spatial data.
# Load packages
library(sp)
library(rgdal)
library(raster)
library(leaflet)
library(mapview)
2. Exploratory Visualization of the station locations
I created an interactive map to inspect the location of the four stations. Because the original poster provided the latitude and longitude of these four stations, I can create a SpatialPointsDataFrame with Latitude/Longitude projection. Notice the EPSG code for Latitude/Longitude projection is 4326. To learn more about EPSG code, please see this tutorial (https://www.nceas.ucsb.edu/~frazier/RSpatialGuides/OverviewCoordinateReferenceSystems.pdf).
# Create a data frame showing the **Latitude/Longitude**
station <- data.frame(lat = c(7.16, 8.6, 8.43, 8.15),
long = c(124.21, 123.35, 124.28, 125.08),
station = 1:4)
# Convert to SpatialPointsDataFrame
coordinates(station) <- ~long + lat
# Set the projection. They were latitude and longitude, so use WGS84 long-lat projection
proj4string(station) <- CRS("+init=epsg:4326")
# View the station location using the mapview function
mapview(station)
The mapview function will create an interactive map. We can use this map to determine a suitable location for the origin of the grid.
3. Determine the origin
After inspecting the map, I decided that the origin could be around longitude 123 and latitude 7. This origin will be on the lower left of the grid. Now I need to find the coordinate representing the same point under Spherical Mercator projection.
# Set the origin
ori <- SpatialPoints(cbind(123, 7), proj4string = CRS("+init=epsg:4326"))
# Convert the projection of ori
# Use EPSG: 3857 (Spherical Mercator)
ori_t <- spTransform(ori, CRSobj = CRS("+init=epsg:3857"))
I first created a SpatialPoints object based on the latitude and longitude of the origin. After that I used spTransform to perform the projection transformation. The object ori_t is now the origin in the Spherical Mercator projection. Notice that the EPSG code for Spherical Mercator is 3857.
To see the value of coordinates, we can use the coordinates function as follows.
coordinates(ori_t)
coords.x1 coords.x2
[1,] 13692297 781182.2
4. Determine the extent of the grid
Now I need to decide the extent of the grid that can cover all the four points and the desired area for kriging, which depends on the cell size and the number of cells. The following code sets up the extent based on the information. I have decided that the cell size is 1 km * 1 km, but I need to experiment on what would be a good cell number for both x- and y-direction.
# The origin has been rounded to the nearest 100
x_ori <- round(coordinates(ori_t)[1, 1]/100) * 100
y_ori <- round(coordinates(ori_t)[1, 2]/100) * 100
# Define how many cells for x and y axis
x_cell <- 250
y_cell <- 200
# Define the resolution to be 1000 meters
cell_size <- 1000
# Create the extent
ext <- extent(x_ori, x_ori + (x_cell * cell_size), y_ori, y_ori + (y_cell * cell_size))
Based on the extent I created, I can create a raster layer with all values equal to 0. Then I can use the mapview function again to see if the raster and the four stations match well.
# Initialize a raster layer
ras <- raster(ext)
# Set the resolution to cell_size (1000 m)
res(ras) <- c(cell_size, cell_size)
ras[] <- 0
# Project the raster
projection(ras) <- CRS("+init=epsg:3857")
# Create interactive map
mapview(station) + mapview(ras)
I repeated this process several times. Finally I decided that the number of cells is 250 and 200 for x- and y-direction, respectively.
5. Create spatial grid
Now I have created a raster layer with proper extent. I can first save this raster as a GeoTiff for future use.
# Save the raster layer
writeRaster(ras, filename = "ras.tif", format="GTiff")
Finally, to use the kriging functions from the package gstat, I need to convert the raster to SpatialPixels.
# Convert to spatial pixel
st_grid <- rasterToPoints(ras, spatial = TRUE)
gridded(st_grid) <- TRUE
st_grid <- as(st_grid, "SpatialPixels")
The st_grid is a SpatialPixels that can be used in kriging.
This is an iterative process to determine a suitable grid. Throughout the process, users can change the projection, origin, cell size, or cell number depending on the needs of their analysis.
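For completeness, a rough sketch of how st_grid might then be used in ordinary kriging (rain, rainfall, and v_fit are hypothetical here: a SpatialPointsDataFrame of station rainfall reprojected to EPSG:3857, and a variogram model fitted to it):
library(gstat)
k <- krige(rainfall ~ 1, locations = rain, newdata = st_grid, model = v_fit)
spplot(k["var1.pred"])  # map of the kriging predictions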
@yzw and @Edzer bring up good points for creating a regular rectangular grid, but sometimes there is a need to create an irregular grid over a defined polygon, usually for kriging.
This is a sparsely documented topic. One good answer can be found here. I expand on it with code below:
Consider the built-in meuse dataset. meuse.grid is an irregularly shaped grid. How do we make a grid like meuse.grid for our unique study area?
library(sp)
library(ggplot2)
data(meuse.grid)
ggplot(data = meuse.grid) + geom_point(aes(x, y))
Imagine an irregularly shaped SpatialPolygon or SpatialPolygonsDataFrame, called spdf. You first build a regular rectangular grid over it, then subset the points in that regular grid by the irregularly-shaped polygon.
# First, make a rectangular grid over your `SpatialPolygonsDataFrame`
grd <- makegrid(spdf, n = 100)
colnames(grd) <- c("x", "y")
# Next, convert the grid to `SpatialPoints` and subset these points by the polygon.
grd_pts <- SpatialPoints(
coords = grd,
proj4string = CRS(proj4string(spdf))
)
# subset all points in `grd_pts` that fall within `spdf`
grd_pts_in <- grd_pts[spdf, ]
# Then, visualize your clipped grid which can be used for kriging
ggplot(as.data.frame(coordinates(grd_pts_in))) +
geom_point(aes(x, y))
If you have your study area as a polygon, imported as a SpatialPolygons, you could either use package raster to rasterize it, or use sp::spsample to sample it using sampling type regular.
If you don't have such a polygon, you can create points regularly spread over a rectangular long/lat area using expand.grid, using seq to generate a sequence of long and lat values.
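For instance, a minimal sketch of that second option (the extent and spacing are purely illustrative, chosen to roughly cover the four stations above):
library(sp)
grd_ll <- expand.grid(long = seq(123, 125.5, by = 0.05),
                      lat  = seq(7, 9, by = 0.05))
coordinates(grd_ll) <- ~long + lat
proj4string(grd_ll) <- CRS("+init=epsg:4326")
gridded(grd_ll) <- TRUE  # promote the regularly spaced points to a SpatialPixels grid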

Unit length in spatstat

I have what may be a very simplistic question on the Kest function in spatstat. I'm using the Kest function in spatstat to assess spatial randomness in a dataset. I have loaded lat and long values spread over London and converted them to a ppp object, using the ripras function to specify the spatial domain. When I run my Kest analysis on my ppp and plot the graph, I end up with an r value on the x-axis; although I know this is a distance measurement, I don't know what units it's using. I get this summary output:
Planar point pattern: 113 points
Average intensity 407.9378 points per square unit
Coordinates are given to 9 decimal places
Window: polygonal boundary
single connected closed polygon with 14 vertices
enclosing rectangle: [-0.5532963, 0.3519148] x [51.2901, 51.7022] units
Window area = 0.277003 square units
with the max r on the x-axis being 0.1 units and the K(r) on the y-axis being 0.04. How do I figure out what unit of distance these equate to?
Your lat,lon coordinates correspond to points on a sphere (or ellipsoid or whatever) used as a model for planet Earth. Essentially, spatstat assumes you are using coordinates projected on a flat map. This conversion could be done with e.g. the sp package (using Buckingham Palace as an example):
library(sp)
lat = c(51.501476)
lon = c(-0.140634)
xy = data.frame(lon, lat)
coordinates(xy) <- c("lon", "lat")
proj4string(xy) <- CRS("+proj=longlat +datum=WGS84")
NE <- spTransform(xy, CRS("+proj=utm +zone=30 +ellps=WGS84"))
NE <- as.data.frame(NE)
The result is a data.frame with projected coordinates (Easting, Northing) in metres. Then you can continue your analysis from there. To assign a unit label like "m" for prettier labels in figures, use the function unitname on your ppp object (assuming the object is called X): unitname(X) <- "m"
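Putting it together, a rough sketch of the full workflow (my_lon and my_lat stand in for your vectors of London coordinates; the projection step is the same as above):
library(spatstat)
xy <- data.frame(lon = my_lon, lat = my_lat)
coordinates(xy) <- c("lon", "lat")
proj4string(xy) <- CRS("+proj=longlat +datum=WGS84")
NE <- as.data.frame(spTransform(xy, CRS("+proj=utm +zone=30 +ellps=WGS84")))
X <- ppp(NE$lon, NE$lat, window = ripras(NE$lon, NE$lat))  # window estimated from the projected points
unitname(X) <- "m"
plot(Kest(X))  # r is now in metres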
If the function is able to accept geographic coordinates, then it is using a great-circle equation to calculate distance. This normally results in units that are in kilometres.
It is not good practice to perform point pattern analysis on non-projected data. If possible, you should project your data into a coordinate system whose units are distance units. I believe that most of the functions in spatstat use Euclidean distance, which is quite inappropriate for coordinates in decimal degrees. Since there is no latlong argument in the Kest function, I do not believe that your results are valid.
The K function itself (i.e. the theoretical K-function, not just the computer code) assumes that the space is flat rather than curved.
This would probably be a reasonable approximation in your case (points scattered over a few dozen kilometres) but not for a point pattern scattered over a continent. That is, in general the planar K-function should not be used for point patterns on a sphere.
The other posts are correct. The Kest function expects the coordinates to be given in an isometric coordinate system. You just need to express the spatial locations in a coordinate system in which the x and y coordinates are measured in the same distance units. Longitude and latitude are not measured in the same distance units because one degree (say) of longitude does not represent the same distance as one degree of latitude. Ege Rubak's example using spTransform is probably the best way to go.
