Spatial filtering by proximity in R

I have occurrence points for a species, and I'd like to remove potential sampling bias (where some regions might have a much greater density of points than others). One way to do this would be to select a maximal subset of points such that no two are within a certain distance X of each other. Essentially, I would prevent points from being too close to each other.
Are there any existing R functions to do this? I've searched through various spatial packages, but haven't found anything, and can't figure out exactly how to implement this myself.
An example occurrence point dataset can be downloaded here.
Thanks!

I've written a new version of this function that no longer really follows rMaternII.
The input can either be a SpatialPoints, SpatialPointsDataFrame or matrix object.
Seems to work well, but suggestions welcome!
filterByProximity <- function(xy, dist, mapUnits = F) {
    # xy can be either a SpatialPoints or SPDF object, or a matrix
    # dist is in km if mapUnits=F, in mapUnits otherwise
    # requires the sp package for spDists()
    if (!mapUnits) {
        d <- spDists(xy, longlat = T)
    }
    if (mapUnits) {
        d <- spDists(xy, longlat = F)
    }
    diag(d) <- NA
    close <- (d <= dist)
    diag(close) <- NA
    closePts <- which(close, arr.ind = T)
    discard <- matrix(nrow = 2, ncol = 2)
    if (nrow(closePts) > 0) {
        while (nrow(closePts) > 0) {
            if ((!paste(closePts[1,1], closePts[1,2], sep='_') %in% paste(discard[,1], discard[,2], sep='_')) &
                (!paste(closePts[1,2], closePts[1,1], sep='_') %in% paste(discard[,1], discard[,2], sep='_'))) {
                discard <- rbind(discard, closePts[1,])
                # drop = FALSE keeps closePts a matrix when only one row remains
                closePts <- closePts[-union(which(closePts[,1] == closePts[1,1]),
                                            which(closePts[,2] == closePts[1,1])), , drop = FALSE]
            }
        }
        discard <- discard[complete.cases(discard), , drop = FALSE]
        return(xy[-discard[,1],])
    }
    if (nrow(closePts) == 0) {
        return(xy)
    }
}
Let's test it:
require(rgeos)
require(sp)
pts <- readWKT("MULTIPOINT ((3.5 2), (1 1), (2 2), (4.5 3), (4.5 4.5), (5 5), (1 5))")
pts2 <- filterByProximity(pts,dist=2, mapUnits=T)
plot(pts)
axis(1)
axis(2)
apply(as.data.frame(pts),1,function(x) plot(gBuffer(SpatialPoints(coords=matrix(c(x[1],x[2]),nrow=1)),width=2),add=T))
plot(pts2,add=T,col='blue',pch=20,cex=2)

There is also an R package called spThin that performs spatial thinning of point data. It was developed to reduce the effects of sampling bias in species distribution models, and it runs multiple iterations to optimize the number of records retained. The function is quite easy to use; the vignette can be found here. There is also a paper in Ecography with details about the technique.
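As a minimal sketch of how spThin is typically called, assuming your occurrences are in a data frame occ_df with columns species, longitude and latitude (these names are placeholders; adjust them to your data):
library(spThin)
thinned <- thin(loc.data = occ_df,
                lat.col = "latitude", long.col = "longitude",
                spec.col = "species",
                thin.par = 10,     # minimum distance between retained points, in km
                reps = 100,        # number of thinning repetitions
                locs.thinned.list.return = TRUE,
                write.files = FALSE, write.log.file = FALSE)
# keep the repetition that retained the most records
occ_thin <- thinned[[which.max(sapply(thinned, nrow))]]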

Following Josh O'Brien's advice, I looked at spatstat's rMaternI function, and came up with the following. It seems to work pretty well.
The distance is in map units. It would be nice to incorporate one of R's distance functions that always returns distances in meters, rather than input units, but I couldn't figure that out...
require(spatstat)
require(maptools)
occ <- readShapeSpatial('occurrence_example.shp')
filterByProximity <- function(occ, dist) {
    pts <- as.ppp.SpatialPoints(occ)
    d <- nndist(pts)
    z <- which(d > dist)
    return(occ[z,])
}
occ2 <- filterByProximity(occ,dist=0.2)
plot(occ)
plot(occ2,add=T,col='blue',pch=20)
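Regarding the wish above to work in kilometres rather than map units, one possible sketch uses sp::spDists(longlat = TRUE), which returns great-circle distances in km (this assumes occ holds longitude/latitude coordinates):
library(sp)
d_km <- spDists(occ, longlat = TRUE)        # pairwise great-circle distances, km
diag(d_km) <- NA
nn_km <- apply(d_km, 1, min, na.rm = TRUE)  # nearest-neighbour distance per point
occ2 <- occ[nn_km > 0.2, ]                  # the threshold is now in km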

Rather than removing data points, you might consider spatial declustering. This involves giving points in clusters a lower weight than outlying points. The two simplest ways to do this involve a polygonal segmentation, like a Voronoi diagram, or some arbitrary grid. Both methods will weight points in each region according to the area of the region.
For example, if we take the points in your test (1,1), (2,2), (4.5,4.5), (5,5), (1,5) and apply a regular 2-by-2 mesh, where each cell is three units on a side, then the five points fall into three cells. The points (1,1) and (2,2), falling into the cell [0,3]×[0,3], would each have weight 1/(no. of points in the cell × total no. of occupied cells) = 1/(2 × 3). The same goes for the points (4.5,4.5) and (5,5) in the cell (3,6]×(3,6]. The "outlier" (1,5) would have weight 1/(1 × 3). The nice thing about this technique is that it is a quick way to generate a density-based weighting scheme.
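To make the arithmetic concrete, here is a minimal sketch of the grid weights for those five points (the ceiling(x / 3) cell index is just one way to reproduce the [0,3] / (3,6] cells described above):
pts <- data.frame(x = c(1, 2, 4.5, 5, 1), y = c(1, 2, 4.5, 5, 5))
cell <- paste(ceiling(pts$x / 3), ceiling(pts$y / 3))  # id of the 3-unit cell each point falls in
n_in_cell <- table(cell)[cell]                         # points sharing each point's cell
n_occupied <- length(unique(cell))                     # number of occupied cells
pts$weight <- 1 / (as.numeric(n_in_cell) * n_occupied)
pts
# weights: (1,1),(2,2) -> 1/6;  (4.5,4.5),(5,5) -> 1/6;  (1,5) -> 1/3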
A polygonal segmentation involves drawing a polygon around each point and using the area of that polygon to calculate the weight. Generally, the polygons completely cover the entire region, and the weights are calculated as the inverse of the area of each polygon. A Voronoi diagram is usually used for this, but polygonal segmentations may be calculated using other techniques, or may be specified by hand.
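For the Voronoi-based weights, a minimal sketch with the deldir package (note that deldir clips tiles to a rectangular window, so border points get inflated areas unless you supply a tighter study-area window):
library(deldir)
pts <- data.frame(x = c(1, 2, 4.5, 5, 1), y = c(1, 2, 4.5, 5, 5))
dd <- deldir(pts$x, pts$y)
pts$weight <- 1 / dd$summary$dir.area        # inverse Voronoi tile area
pts$weight <- pts$weight / sum(pts$weight)   # normalise so weights sum to 1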

Related

DBSCAN Clustering returning single cluster with noise points

I am trying to perform DBSCAN clustering on the data https://www.kaggle.com/arjunbhasin2013/ccdata. I have cleaned the data and applied the algorithm.
data1 <- read.csv('C:\\Users\\write\\Documents\\R\\data\\Project\\Clustering\\CC GENERAL.csv')
head(data1)
data1 <- data1[,2:18]
dim(data1)
colnames(data1)
head(data1,2)
#to check if data has empty col or rows
library(purrr)
is_empty(data1)
#to check if data has duplicates
library(dplyr)
any(duplicated(data1))
#to check if data has NA values
any(is.na(data1))
data1 <- na.omit(data1)
any(is.na(data1))
dim(data1)
Algorithm was applied as follows.
#DBSCAN
data1 <- scale(data1)
library(fpc)
library(dbscan)
set.seed(500)
#to find optimal eps
kNNdistplot(data1, k = 34)
abline(h = 4, lty = 3)
The figure shows the 'knee' used to identify the eps value. Since there are 17 attributes to be considered for clustering, I have taken k = 17 × 2 = 34.
db <- dbscan(data1,eps = 4,minPts = 34)
db
The result I obtained is "The clustering contains 1 cluster(s) and 147 noise points."
No matter what values I use for eps and minPts, the result is the same.
Can anyone tell where I have gone wrong?
Thanks in advance.
You have two options:
Increase the neighbourhood radius around each point (the eps parameter).
Decrease the minimum number of points (minPts) required to define a core point.
I would start by decreasing minPts: I think it is set very high, and if DBSCAN cannot find that many points within the radius, it will not group more points into a cluster.
A typical problem with using DBSCAN (and clustering in general) is that real data typically does not fall into nice clusters, but forms one connected point cloud. In this case, DBSCAN will always find only a single cluster. You can check this with several methods. The most direct method would be to use a pairs plot (a scatterplot matrix):
plot(as.data.frame(data1))
Since you have many variables, the scatterplot panels are very small, but you can see that the points are very close together in almost all panels. DBSCAN will connect all points in these dense areas into a single cluster. k-means will just partition the dense area.
Another option is to check for clusterability with methods like VAT or iVAT (https://link.springer.com/chapter/10.1007/978-3-642-13657-3_5).
library("seriation")
## calculate distances for a small sample
d <- dist(data1[sample(seq(nrow(data1)), size = 1000), ])
iVAT(d)
You will see that the plot shows no block structure around the diagonal indicating that clustering will not find much.
To improve clustering, you need to work on the data. You can remove irrelevant variables, you may have very skewed variables that should be transformed first. You could also try non-linear embedding before clustering.
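For instance, a rough sketch of transforming heavily skewed variables before scaling and re-inspecting the kNN distance plot (the skewness cutoff, the log1p transform, and the file path are assumptions, not part of the original analysis):
raw <- na.omit(read.csv("CC GENERAL.csv")[, 2:18])
skewed <- sapply(raw, function(v) abs(mean((v - mean(v))^3) / sd(v)^3) > 1)  # rough skewness check
skewed <- !is.na(skewed) & skewed
raw[skewed] <- lapply(raw[skewed], function(v) log1p(v - min(v)))            # shift so log1p is defined
data2 <- scale(raw)
library(dbscan)
kNNdistplot(data2, k = 34)   # re-inspect the knee, then re-run dbscan() with a new eps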

Shrink convex hull

I have a bunch of points in 2D space and have calculated a convex hull for them. I would now like to "tighten" the hull so that it no longer necessarily encompasses all points. In the typical nails-in-board-with-rubber-band analogy, what I'd like to achieve is to be able to tune the elasticity of the rubber band and allow nails to bend at pressure above some limit. That's just an analogy; there is no real physics here. This would kind of be related to the reduction in hull area if a given point was removed, but not quite, because there could be two points that are very close to each other. This is not necessarily related to outlier detection, because you could imagine a pattern where a large fraction of the nails would bend if they are on a narrow line (imagine a hammer shape for example). All of this has to be reasonably fast for thousands of points. Any hints where I should look in terms of algorithms? An implementation in R would be perfect, but not needed.
EDIT AFTER COMMENT: The three points I've labelled are those with largest potential for reducing the hull area if they are excluded. In the plot there is no other set of three points that would result in a larger area reduction. A naive implementation of what I'm looking for would maybe be to randomly sample some fraction of the points, calculate the hull area, remove each point on the hull iteratively, recalculate the area, repeat many times and remove points that tend to lead to high area reduction. Maybe this could be implemented in some random forest variant? It's not quite right though, because I would like the points to be removed one by one so that you get the following result. If you looked at all points in one go it would possibly be best to trim from the edges of the "hammer head".
Suppose I have a set of points like this:
set.seed(69)
x <- runif(20)
y <- runif(20)
plot(x, y)
Then it is easy to find the subset points that sit on the convex hull by doing:
ss <- chull(x, y)
This means we can plot the convex hull by doing:
lines(x[c(ss, ss[1])], y[c(ss, ss[1])], col = "red")
Now we can randomly remove one of the points that sits on the convex hull (i.e. "bend a nail") by doing:
bend <- sample(ss, 1)
x <- x[-bend]
y <- y[-bend]
And we can then repeat the process of finding the convex hull of this new set of points:
ss <- chull(x, y)
lines(x[c(ss, ss[1])], y[c(ss, ss[1])], col = "blue", lty = 2)
To get the point which will, on removal, cause the greatest reduction in area, one option would be the following function:
library(sp)
shrink <- function(coords)
{
    ss <- chull(coords[, 1], coords[, 2])
    outlier <- ss[which.min(sapply(seq_along(ss),
                  function(i) Polygon(coords[ss[-i], ], hole = FALSE)@area))]
    coords[-outlier, ]
}
So you could do something like:
coords <- cbind(x, y)
new_coords <- shrink(coords)
new_chull <- new_coords[chull(new_coords[, 1], new_coords[, 2]),]
new_chull <- rbind(new_chull, new_chull[1,])
plot(x, y)
lines(new_chull[,1], new_chull[, 2], col = "red")
Of course, you could do this in a loop so that new_coords is fed back into shrink multiple times.
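For instance, a minimal sketch that peels off three points in sequence and plots the tightened hull (the number of iterations is an arbitrary choice):
coords <- cbind(x, y)
for (i in 1:3) coords <- shrink(coords)   # remove the three "loosest" hull points
plot(x, y)
polygon(coords[chull(coords), ], border = "red")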
Calculate a robust center and covariance using cov.mcd in MASS and the Mahalanobis distance of each point from it (using mahalanobis from the stats package). We then show a quantile plot of the Mahalanobis distances using PlotMD from modi and also show the associated outliers in red in the second plot. (There are other functions in modi that may be of interest as well.)
library(MASS)
library(modi)
set.seed(69)
x <- runif(20)
y <- runif(20)
m <- cbind(x, y)
mcd <- cov.mcd(m)
md <- mahalanobis(m, mcd$center, mcd$cov)
stats <- PlotMD(md, 2, alpha = 0.90)
giving a quantile plot of the Mahalanobis distances (screenshot not shown here),
and we show the convex hull using lines and the outliers in red:
plot(m)
ix <- chull(m)
lines(m[c(ix, ix[1]), ])
wx <- which(md > stats$halpha)
points(m[wx, ], col = "red", pch = 20)
Thank you both! I've tried various methods for outlier detection, but it's not quite what I was looking for. They have worked badly due to weird shapes of my clusters. I know I talked about convex hull area, but I think filtering on segment lengths yields better results and is closer to what I really wanted. Then it would look something like this:
shrink <- function(xy, max_length = 30){
    to_keep <- 1:(dim(xy)[1])
    centroid <- c(mean(xy[,1]), mean(xy[,2]))
    while (TRUE){
        ss <- chull(xy[,1], xy[,2])
        ss <- c(ss, ss[1])
        # squared lengths of the hull segments
        lengths <- sapply(1:(length(ss)-1), function(i) sum((xy[ss[i+1],] - xy[ss[i],])^2))
        # This gets the point with the longest convex hull segment. chull returns points
        # in clockwise order, so the point to remove is either this one or the one
        # after it. Remove the one furthest from the centroid.
        max_point <- which.max(lengths)
        if (lengths[max_point] < max_length) return(to_keep)
        if (sum((xy[ss[max_point],] - centroid)^2) > sum((xy[ss[max_point + 1],] - centroid)^2)){
            xy <- xy[-ss[max_point],]
            to_keep <- to_keep[-ss[max_point]]
        } else {
            xy <- xy[-ss[max_point + 1],]
            to_keep <- to_keep[-ss[max_point + 1]]
        }
    }
}
It's not optimal because it factors in the distance to the centroid, which I would have liked to avoid, and there is a max_length parameter that should be calculated from the data instead of being hard-coded.
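A sketch of one possibility for deriving max_length from the data, assuming xy is the coordinate matrix passed to shrink (the multiplier 10 is an arbitrary assumption, not what was actually used):
ss <- chull(xy)
ss <- c(ss, ss[1])
seg2 <- sapply(1:(length(ss) - 1), function(i) sum((xy[ss[i + 1], ] - xy[ss[i], ])^2))
max_length <- 10 * median(seg2)   # threshold on squared segment length
filtered <- shrink(xy, max_length = max_length)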
Without the filter (plot not shown): it looks like this because there are 500,000 cells in here, and many of them end up "wrong" when projecting from ~20,000 dimensions down to 2.
With the filter (plot not shown): note that it filters out points at the tips of some clusters. This is less than optimal but ok. The overlap between some clusters is true and should be there.

Find correct 2D translation of a subset of coordinates

I have a problem I wish to solve in R with example data below. I know this must have been solved many times but I have not been able to find a solution that works for me in R.
The core of what I want to do is to find how to translate a set of 2D coordinates so that it best fits into another, larger set of 2D coordinates. Imagine, for example, having a Polaroid photo of a small piece of the starry sky with you out at night, and you want to hold it up in a position so it matches the stars' current positions.
Here is how to generate data similar to my real problem:
# create reference points (the "starry sky")
set.seed(99)
ref_coords = data.frame(x = runif(50,0,100), y = runif(50,0,100))
# take a subset of the coordinates to serve as the points we
# are looking for ("the Polaroid")
my_coords_final = ref_coords[c(5,12,15,24,31,34,48,49),]
# add a little bit of variation as compared to reference points
# (data should very similar, but have a little bit of noise)
set.seed(100)
my_coords_final$x = my_coords_final$x+rnorm(8,0,.1)
set.seed(101)
my_coords_final$y = my_coords_final$y+rnorm(8,0,.1)
# create "start values" by, e.g., translating the points we are
# looking for to start at (0,0)
my_coords_start =apply(my_coords_final,2,function(x) x-min(x))
# Plot of example data, goal is to find the dotted vector that
# corresponds to the translation needed
plot(ref_coords, cex = 1.2) # "Starry sky"
points(my_coords_start,pch=20, col = "red") # start position of "Polaroid"
points(my_coords_final,pch=20, col = "blue") # corrected position of "Polaroid"
segments(my_coords_start[1,1],my_coords_start[1,2],
my_coords_final[1,1],my_coords_final[1,2],lty="dotted")
Plotting the data as above should yield:
The result I want is basically what the dotted line in the plot above represents, i.e. a delta in x and y that I could apply to the start coordinates to move them to their correct position in the reference grid.
Details about the real data
There should be close to no rotational or scaling difference between my points and the reference points.
My real data is around 1000 reference points and up to a few hundred points to search (could use less if more efficient)
I expect to have to search about 10 to 20 sets of reference points to find my match, as many of the reference sets will not contain my points.
Thank you for your time, I'd really appreciate any input!
EDIT: To clarify, the right plot represents the reference data. The left plot represents the points that I want to translate across the reference data in order to find a position where they best match the reference. That position, in this case, is represented by the blue dots in the previous figure.
Finally, any working strategy must not use the data in my_coords_final, but rather reproduce that set of coordinates starting from my_coords_start using ref_coords.
So, the previous approach I posted (see edit history), which used optim() to minimize the sum of distances between points, will only work in the limited circumstance where the point distribution used as reference data sits in the middle of the point field. The solution that satisfies the question, and still seems workable for a few thousand points, is a brute-force delta-and-comparison algorithm: for each reference point, compute the shift that would align it with the first point of the search set, apply that shift to the reference data, and then count how many of the search points fall within a minimum distance threshold of a shifted reference point (the threshold is needed to account for the noise in the data):
## A brute-force approach where min_dist can be used to
## ameliorate some random noise:
min_dist <- 5
win_thresh <- 0
win_thresh_old <- 0

for (i in 1:nrow(ref_coords)) {
    x2 <- my_coords_start[,1]
    y2 <- my_coords_start[,2]
    x1 <- ref_coords[,1] + (x2[1] - ref_coords[i,1])
    y1 <- ref_coords[,2] + (y2[1] - ref_coords[i,2])

    ## Calculate all pairwise distances between reference and field data:
    dists <- dist(cbind(c(x1, x2), c(y1, y2)), "euclidean")

    ## Only take distances for the sampled data:
    dists <- as.matrix(dists)[-(1:length(x1)), ]

    ## Calculate the number of distances within the minimum
    ## distance threshold minus the diagonal portion:
    win_thresh <- sum(rowSums(dists < min_dist) > 1)

    ## If we have more "matches" than our best then calculate a new
    ## dx and dy:
    if (win_thresh > win_thresh_old) {
        win_thresh_old <- win_thresh
        dx <- (x2[1] - ref_coords[i,1])
        dy <- (y2[1] - ref_coords[i,2])
    }
}
## Plot estimated correction (your delta x and delta y) calculated
## from the brute force calculation of shifts:
points(
x=ref_coords[,1] + dx,
y=ref_coords[,2] + dy,
cex=1.5, col = "red"
)
I'm very interested to know if there's anyone that solves this in a more efficient manner for the number of points in the test data, possibly using a statistical or optimization algorithm.

R code that evaluates line-of-sight (LOS) between two (lat, lon) points

I'm having trouble figuring out how to calculate line-of-sight (LOS) between two (lat, lon) points, within R code. Any advice on how to approach this problem would be appreciated. I would like to use the R package - raster - for reading in the terrain elevation data. It seems the spgrass package could be leveraged (based on http://grass.osgeo.org/grass70/manuals/r.viewshed.html) but I wanted to avoid loading up a GIS. Thanks.
If you just want to know if point A can see point B then sample a large number of elevations from the line joining A to B to form a terrain profile and then see if the straight line from A to B intersects the polygon formed by that profile. If it doesn't, then A can see B. Coding that is fairly trivial. Conversely you could sample a number of points along the straight line from A to B and see if any of them have an elevation below the terrain elevation.
If you have a large number of points to compute, or if your raster is very detailed, or if you want to compute the entire area visible from a point, then that might take a while to run.
Also, unless your data is over a large part of the earth, convert to a regular metric grid (eg a UTM zone) and assume a flat earth.
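A small sketch of that projection step with sp and rgdal (the UTM zone and the coordinates are placeholders):
library(sp)
library(rgdal)   # provides the spTransform methods for sp objects
lon <- c(14.05, 14.12); lat <- c(50.01, 50.07)   # placeholder lon/lat values
pts_ll  <- SpatialPoints(cbind(lon, lat), proj4string = CRS("+proj=longlat +datum=WGS84"))
pts_utm <- spTransform(pts_ll, CRS("+proj=utm +zone=33 +datum=WGS84 +units=m"))
## for the elevation raster: raster::projectRaster(dem, crs = proj4string(pts_utm))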
I don't know of any existing package having this functionality, but using GRASS really isn't that much of a hassle.
Here's some code that uses raster and plyr:
cansee <- function(r, xy1, xy2, h1=0, h2=0){
    ### can xy1 see xy2 on DEM r?
    ### r is a DEM in same x,y,z units
    ### xy1 and xy2 are 2-length vectors of x,y coords
    ### h1 and h2 are extra height offsets
    ### (eg top of mast, observer on a ladder etc)
    xyz = rasterprofile(r, xy1, xy2)
    np = nrow(xyz)-1
    h1 = xyz$z[1] + h1
    h2 = xyz$z[np+1] + h2   # elevation at the far end of the profile
    hpath = h1 + (0:np)*(h2-h1)/np
    return(!any(hpath < xyz$z))
}

viewTo <- function(r, xy, xy2, h1=0, h2=0, progress="none"){
    ## xy2 is a matrix of x,y coords (not a data frame)
    require(plyr)
    aaply(xy2, 1, function(d){cansee(r, xy, d, h1, h2)}, .progress=progress)
}

rasterprofile <- function(r, xy1, xy2){
    ### sample a raster along a straight line between two points
    ### try to match the sampling size to the raster resolution
    dx = sqrt((xy1[1]-xy2[1])^2 + (xy1[2]-xy2[2])^2)
    nsteps = 1 + round(dx / min(res(r)))
    xc = xy1[1] + (0:nsteps) * (xy2[1]-xy1[1])/nsteps
    yc = xy1[2] + (0:nsteps) * (xy2[2]-xy1[2])/nsteps
    data.frame(x=xc, y=yc, z=r[cellFromXY(r, cbind(xc, yc))])
}
Hopefully fairly self-explanatory, but it maybe needs some real documentation. I used it to produce a map (not shown here) of the points where a 50 m high person can see a 2 m high tower at a red dot. Yes, I got those numbers wrong when I ran it. It took about 20 minutes to run on my 4-year-old PC. I suspect GRASS could do this almost instantaneously, and more correctly too.
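For context, a usage sketch under assumed inputs: a small synthetic DEM and a hypothetical tower location (nothing below comes from the original post).
library(raster)
set.seed(1)
dem <- raster(nrow = 100, ncol = 100, xmn = 0, xmx = 1000, ymn = 0, ymx = 1000)
dem[] <- 100 + 50 * sin(xFromCell(dem, 1:ncell(dem)) / 100) + rnorm(ncell(dem), sd = 5)
tower <- c(500, 500)                                   # hypothetical target, map units
cand  <- xyFromCell(dem, seq(1, ncell(dem), by = 25))  # sample of candidate observer cells
## which candidates have line of sight to a 2 m tower, for a 1.7 m observer?
vis <- viewTo(dem, xy = tower, xy2 = cand, h1 = 2, h2 = 1.7)
plot(dem)
points(cand[vis, , drop = FALSE], pch = 20, cex = 0.4)
points(tower[1], tower[2], col = "red", pch = 19)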

Identify a linear feature on a raster map and return a linear shape object using R

I would like to identify linear features, such as roads and rivers, on raster maps and convert them to a linear spatial object (SpatialLines class) using R.
The raster and sp packages can be used to convert features from rasters to polygon vector objects (SpatialPolygons class). rasterToPolygons() will extract cells of a certain value from a raster and return a polygon object. The product can be simplified using the dissolve=TRUE option, which calls routines in the rgeos package to do this.
This all works just fine, but I would prefer it to be a SpatialLines object. How can I do this?
Consider this example:
## Produce a sinuous linear feature on a raster as an example
library(raster)
r <- raster(nrow=400, ncol=400, xmn=0, ymn=0, xmx=400, ymx=400)
r[] <- NA
x <-seq(1, 100, by=0.01)
r[cellFromRowCol(r, round((sin(0.2*x) + cos(0.06*x)+2)*100), round(x*4))] <- 1
## Quick trick to make it three cells wide
r[edge(r, type="outer")] <- 1
## Plot
plot(r, legend=FALSE, axes=FALSE)
## Convert linear feature to a SpatialPolygons object
library(rgeos)
rPoly <- rasterToPolygons(r, fun=function(x) x==1, dissolve=TRUE)
plot(rPoly)
Would the best approach be to find a centre line through the polygon?
Or is there existing code available to do this?
EDIT: Thanks to @mdsumner for pointing out that this is called skeletonization.
Here's my effort. The plan is:
densify the lines
compute a Delaunay triangulation
take the midpoints, and take those points that are in the polygon
build a distance-weighted minimum spanning tree
find its graph diameter path
The densifying code for starters:
densify <- function(xy, n=5){
    ## densify a 2-col matrix
    cbind(dens(xy[,1], n=n), dens(xy[,2], n=n))
}

dens <- function(x, n=5){
    ## densify a vector
    out = rep(NA, 1+(length(x)-1)*(n+1))
    ss = seq(1, length(out), by=(n+1))
    out[ss] = x
    for(s in 1:(length(x)-1)){
        out[(1+ss[s]):(ss[s+1]-1)] = seq(x[s], x[s+1], len=(n+2))[-c(1, n+2)]
    }
    out
}
And now the main course:
simplecentre <- function(xyP, dense){
    require(deldir)
    require(splancs)
    require(igraph)
    require(rgeos)

    ### optionally add extra points
    if(!missing(dense)){
        xy = densify(xyP, dense)
    } else {
        xy = xyP
    }

    ### compute triangulation
    d = deldir(xy[,1], xy[,2])

    ### find midpoints of triangle sides
    mids = cbind((d$delsgs[,'x1'] + d$delsgs[,'x2'])/2,
                 (d$delsgs[,'y1'] + d$delsgs[,'y2'])/2)

    ### get points that are inside the polygon
    sr = SpatialPolygons(list(Polygons(list(Polygon(xyP)), ID=1)))
    ins = over(SpatialPoints(mids), sr)

    ### select the points, keeping only those well away from the polygon edge
    pts = mids[!is.na(ins),]
    dPoly = gDistance(as(sr, "SpatialLines"), SpatialPoints(pts), byid=TRUE)
    pts = pts[dPoly > max(dPoly/1.5),]

    ### now build a minimum spanning tree weighted on the distance
    G = graph.adjacency(as.matrix(dist(pts)), weighted=TRUE, mode="upper")
    T = minimum.spanning.tree(G, weighted=TRUE)

    ### get a diameter
    path = get.diameter(T)
    if(length(path) != vcount(T)){
        stop("Path not linear - try increasing dens parameter")
    }

    ### path should be the sequence of points in order
    ### (note: written for igraph < 0.6, which used 0-based vertex ids;
    ### with modern igraph use pts[path,] instead of pts[path+1,])
    list(pts=pts[path+1,], tree=T)
}
Instead of the buffering of the earlier version, I compute the distance from each midpoint to the boundary of the polygon, and only take points that are (a) inside, and (b) further from the edge than 1/1.5 (about two thirds) of the distance of the inside point that is furthest from the edge.
Problems can arise if the polygon kinks back on itself, with long segments and no densification. In that case the diameter path of the spanning tree does not visit every point (the tree branches), and the code stops with an error telling you to increase the dens parameter.
As a test, I digitized a line (s, SpatialLines object), buffered it (p), then computed the centreline and superimposed them:
s = capture()
p = gBuffer(s,width=0.2)
plot(p,col="#cdeaff")
plot(s,add=TRUE,lwd=3,col="red")
scp = simplecentre(onering(p))
lines(scp$pts,col="white")
The 'onering' function just gets the coordinates of one ring from a SpatialPolygons thing that should only be one ring:
onering = function(p){p@polygons[[1]]@Polygons[[1]]@coords}
Capture spatial lines features with the 'capture' function:
capture = function(){
    p = locator(type="l")
    SpatialLines(list(Lines(list(Line(cbind(p$x, p$y))), ID=1)))
}
Thanks to @klewis at gis.stackexchange.com for linking to this elegant algorithm for finding the centre line (in response to a related question I asked there).
The process requires finding the coordinates on the edge of a polygon describing the linear feature and performing a Voronoi tessellation of those points. The coordinates of the Voronoi tiles that fall within the polygon of the linear feature fall on the centre line. Turn these points into a line.
Voronoi tessellation is done really efficiently in R using the deldir package, and intersections of polygons and points with the rgeos package.
## Find points on boundary of rPoly (see question)
rPolyPts <- coordinates(as(as(rPoly, "SpatialLinesDataFrame"),
"SpatialPointsDataFrame"))
## Perform Voronoi tessellation of those points and extract coordinates of tiles
library(deldir)
rVoronoi <- tile.list(deldir(rPolyPts[, 1], rPolyPts[,2]))
rVoronoiPts <- SpatialPoints(do.call(rbind,
lapply(rVoronoi, function(x) cbind(x$x, x$y))))
## Find the points on the Voronoi tiles that fall inside
## the linear feature polygon
## N.B. That the width parameter may need to be adjusted if coordinate
## system is fractional (i.e. if longlat), but must be negative, and less
## than the dimension of a cell on the original raster.
library(rgeos)
rLinePts <- gIntersection(gBuffer(rPoly, width=-1), rVoronoiPts)
## Create SpatialLines object
rLine <- SpatialLines(list(Lines(Line(rLinePts), ID="1")))
The resulting SpatialLines object:
You can get the boundary of that polygon as SpatialLines by direct coercion:
rLines <- as(rPoly, "SpatialLinesDataFrame")
Summarizing the coordinates down to a single "centre line" would be possible, but nothing immediate that I know of. I think that process is generally called "skeletonization":
http://en.wikipedia.org/wiki/Topological_skeleton
I think the ideal solution would be to build a negative buffer that dynamically reaches the minimum width without breaking apart when the value is too large: it would keep the object connected and eventually draw a line once that width is reached. Unfortunately, this could be very computationally demanding, because it would probably have to be done in steps, checking at each step whether the value at a particular location is enough to yield a point of the centre line. It might not need an infinite number of steps, or at least the step size could be parametrized.
I don't know how to implement this for now.
