How can I get the length of the longest line (side) of each polygon? Which package would be least problematic? I wonder if there is any function other than iterating point by point (a dummy example is below). I also wonder whether there is a method for getting the bearings and/or angles of each segment relative to its neighbours, but since some experts are sensitive about asking more than one question at a time, I will leave that for another time ;) Thanks in advance
library(sp)
r1 = cbind(c(180114, 180553, 181127, 180114), c(332349, 332057, 332342, 332349))
r2 = cbind(c(180042, 180545, 180553, 180042), c(332373, 332026, 331426, 332373))
r3 = cbind(c(179110, 179907, 180433, 179110), c(331086, 330620, 330494, 331086))
r4 = cbind(c(180304, 180403, 179632, 180304), c(332791, 333204, 333635, 332791))
sr1=Polygons(list(Polygon(r1)),"r1")
sr2=Polygons(list(Polygon(r2)),"r2")
sr3=Polygons(list(Polygon(r3)),"r3")
sr4=Polygons(list(Polygon(r4)),"r4")
sr=SpatialPolygons(list(sr1,sr2,sr3,sr4))
srdf=SpatialPolygonsDataFrame(sr, data.frame(cbind(1:4,5:2),
row.names=c("r1","r2","r3","r4")))
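The point-by-point approach I would like to avoid would be something like this (plain base R; each ring repeats its first vertex, so the distances between consecutive rows give the sides):
# naive approach: distance between consecutive vertices, keep the maximum
longest_side <- function(coords){
  d <- sqrt(rowSums((coords[-1, ] - coords[-nrow(coords), ])^2))
  max(d)
}
sapply(list(r1, r2, r3, r4), longest_side)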
Using an R package called GeoAxe you can split geospatial objects into pieces. Once you have the segments individually, you can use sp's SpatialLines class together with SpatialLinesLengths() to get each segment's length, and from there select the longest one.
A possible downside of this method is that the input data must be in a spatial format, e.g. a shapefile (.shp), but luckily there are also packages for creating those.
Does this help?
library(sf)
splitPolygon <- function(x){
  # build a two-point linestring from a row of c(Xstart, Ystart, Xend, Yend)
  st_segment = function(r){st_linestring(t(matrix(unlist(r), 2, 2)))}
  # XY coordinates of the ring; sf repeats the first vertex at the end,
  # so pairing consecutive rows already yields every side of the polygon
  m <- data.frame(st_coordinates(x)[, 1:2])
  nr <- nrow(m)
  m2 <- cbind(m[-nr, ], m[-1, ])
  names(m2) <- c("Xstart", "Ystart", "Xend", "Yend")
  m2$geom <- st_sfc(sapply(seq_len(nrow(m2)),
                           function(i){st_segment(m2[i, ])}, simplify = FALSE))
  st_sf(m2, crs = st_crs(x))
}
longest_side_of_polygon <- function(x){
  df <- splitPolygon(x)
  l <- st_length(df)
  # coordinates of the longest segment, returned as a linestring
  lmat <- st_coordinates(df[which.max(l), ])[, 1:2]
  st_sfc(st_linestring(rbind(lmat[1, ], lmat[2, ])),
         crs = st_crs(x))
}
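A quick usage sketch with the sp objects built above (st_as_sf() from sf converts them; the variable names here are just for illustration):
library(sf)
sf_polys <- st_as_sf(srdf)   # sp -> sf
# longest side of each of the four polygons, and its length
longest <- lapply(seq_len(nrow(sf_polys)),
                  function(i) longest_side_of_polygon(sf_polys[i, ]))
sapply(longest, st_length)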
I have a raster map with many patches (clumps of contiguous cells with the same value). What I need to do is obtain the coordinates of the center (or close to the center) of each patch.
I am very inexperienced with the raster package, but it seems I can get coordinates only if I know the position of the cells in the map. Is there any way to get coordinates given a value of the cells instead? Thank you
If by patch you mean clumps, the raster package allows you to find, and isolate, clumps. Taking the clump() example from the raster package and extending it:
library(raster)
library(igraph)  # clump() needs igraph
# if the coin/modeltools packages are loaded, detach them first;
# clump() will error otherwise (see Notes below)
# detach("package:coin", unload=TRUE)
r <- raster(ncols=12, nrows=12)
set.seed(0)
r[] <- round(runif(ncell(r)) * 0.7)
rc <- clump(r)
clump_id <- getValues(rc)
xy <- xyFromCell(rc,1:ncell(rc))
df <- data.frame(xy, clump_id, is_clump = rc[] %in% freq(rc, useNA = 'no')[,1])
df[df$is_clump == T, ]
plot(r)
plot(rc)
text(df[df$is_clump == T, 1:2], labels = df[df$is_clump == T, 3])
The result may not be as interesting as you might expect.
Do it all over with directions = 4:
rc <- clump(r, directions = 4)
clump_id <- getValues(rc)
xy <- xyFromCell(rc,1:ncell(rc))
df <- data.frame(xy, clump_id, is_clump = rc[] %in% freq(rc, useNA = 'no')[,1])
df[df$is_clump == T, ]
to get a different clump map, and maybe compute the clump 'centroids':
library(plyr)
dfm <- ddply(df[df$is_clump == T, ], .(clump_id), summarise,
             xm = mean(x), ym = mean(y))
plot(rc)
text(dfm[, 2:3], labels = dfm$clump_id)
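If you would rather not load plyr, a base-R aggregate() call gives the same table (a sketch using the df built above):
# base-R equivalent of the ddply step
dfm <- aggregate(cbind(xm = x, ym = y) ~ clump_id,
                 data = df[df$is_clump == T, ], FUN = mean)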
Notes:
clump() will raise an error if you use it without first detaching the modeltools package. modeltools is loaded by coin and possibly other statistical packages.
You could take the mean of the coordinates of each patch:
# some dummy data
m <- matrix(c(
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,4,4,0,
0,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0,4,4,0,
0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,4,4,0,
0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,
0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,
0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,
0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,
0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,
0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,
0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,
0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,
0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,
0,0,2,3,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,
0,0,2,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0), nrow=20, byrow=T)
# create a raster (requires the raster package)
library(raster)
r <- raster(m)
# convert raster to points
p <- data.frame(rasterToPoints(r))
# filter out background
p <- p[p$layer > 0, ]
# for each patch calc mean coordinates
sapply(split(p[, c("x", "y")], p$layer), colMeans)
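To sanity-check the result you could plot the computed centres on the raster (a small sketch reusing the objects above):
ctr <- sapply(split(p[, c("x", "y")], p$layer), colMeans)
plot(r)
points(t(ctr), pch = 3)                        # one cross per patch centre
text(t(ctr), labels = colnames(ctr), pos = 3)  # label with the patch value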
I have a raster stack with 27 rasters in it, and 27 corresponding polygons in a SpatialPolygonsDataFrame. I want to take polygon[i], overlay it on raster[i], extract and sum the values from raster[i], count the number of cells within polygon[i], and then divide the summed value by the number of cells. In other words, the raster is a utilization distribution (a kernel density of use), and I want to know how much use is occurring in the area of the polygon where it overlaps the raster, divided by the number of cells in the polygon to account for the polygon's size.
I have a script that was given to me that does this, only it was written to extract data from a single raster using any number of polygons in the data frame. It works, it's ugly, and I would now like to convert it into something more streamlined.
This is the code I've been given, with my summary of what I think is going on:
msum99Kern07 = SpatialPolygonsDataFrame (many polygons)
KERNWolfPIX07m = raster (a single raster; I have 27 rasters that I put into a stack)
#Extracting value from raster to many polygons
sRISK_Moose07m<- extract(KERNWolfPIX07m, msum99Kern07,df=FALSE,method='bilinear')
#Calculate THE SUM FOR EACH polygon#
sRISK_Moose07m<-unlist(lapply(sRISK_Moose07m, function(x) if (!is.null(x)) sum(x, na.rm=TRUE) else NA ))
sRISK_Moose07m<-as.data.frame(sRISK_Moose07m)
#I'm not sure why these next commands are needed; I'm only guessing
#data.frame(levels): as there are many polygons, create a data frame to put the info into
ID_SUM_07<-as.data.frame(levels(as.factor(msum07locs$ID2)))
#ADD ID TO THE risk data frame
sRISK_Moose07m$ID<-ID_SUM_07[,1]
#NUMBER OF CELLS WITHIN EACH POLYGON: EXTRACT CELLS PER POLYGON
NB_SUM2007m<-cellFromPolygon(KERNWolfPIX07m, msum99Kern07)
NB_SUM07m<-unlist(lapply(NB_SUM2007m, function(x) if (!is.null(x)) length(x) else NA ))
#####CONVERT TO DATA FRAME
NB_SUM07m<-as.data.frame(NB_SUM07m)
###ADD THE NB OF CELLS TO THE RISK_SUM FILE###
sRISK_Moose07m$NB_CELLS<-NB_SUM07m[,1]
###DIVIDING VALUE by NB CELLS##
sRISK_Moose07m$DIVID<-sRISK_Moose07m$sRISK_Moose07m/sRISK_Moose07m$NB_CELLS
Now, I have my spatial polygon data frame with 27 polygons and my raster stack with 27 rasters. I want to select raster[i] and polygon[i], then extract, sum, and calculate the kernel density of the overlapping area. One thing to keep in mind: I may get an error because it is possible that a polygon and a raster do not overlap, and I don't know how to check for that in R at all.
The script I have started:
moose99kern = spatial polygon data frame 27 moose
Rastwtrial = stack of 27 rasters having the same unique name as the ID in moose99kern
mkernID=unique(moose99kern$id)
for (i in seq_along(mkernID)){
  r = Rastwtrial[Rastwtrial[[i]] == mkernID[i]] #pick from the raster stack the raster that has the same name
  mp = moose99kern[moose99kern$id == mkernID[i], ] #pick from the SpatialPolygonsDataFrame the polygon that has the same name
  RISK_MooseTrial <- extract(r, mp, df=TRUE, method='bilinear')
  risksum = sapply(RISK_MooseTrial, function(x) if (!is.null(x)) sum(x, na.rm=TRUE) else NA) #sum all the values that were extracted from the raster
My script doesn't even start to work because I don't know how to index a raster stack. And even going through one raster and one polygon at a time, I'm not sure what to do next in the code. If this is too much for StackOverflow I apologize; I'm just seriously stuck and have nowhere to turn.
Here is test data with two individuals for the polygons:
dput(mtestpoly)
new("SpatialPolygonsDataFrame"
, data = structure(list(id = structure(1:2, .Label = c("F01001_1", "F07002_1"
), class = "factor"), area = c(1259.93082578125, 966.364499511719
)), .Names = c("id", "area"), row.names = c("F01001_1", "F07002_1"
), class = "data.frame")
, polygons = list(<S4 object of class structure("Polygons", package = "sp")>,
<S4 object of class structure("Polygons", package = "sp")>)
, plotOrder = 1:2
, bbox = structure(c(6619693.77161797, 1480549.31292137, 6625570.48348294,
1485861.5586371), .Dim = c(2L, 2L), .Dimnames = list(c("x", "y"
), c("min", "max")))
, proj4string = new("CRS"
, projargs = NA_character_
)
)
dput(Rastwtest)
new("RasterStack"
, filename = ""
, layers = list(<S4 object of class structure("RasterLayer", package = "raster")>,
<S4 object of class structure("RasterLayer", package = "raster")>)
, title = character(0)
, extent = new("Extent"
, xmin = 1452505.6959799
, xmax = 1515444.7110552
, ymin = 6575235.1959799
, ymax = 6646756.8040201
)
, rotated = FALSE
, rotation = new(".Rotation"
, geotrans = numeric(0)
, transfun = function ()
NULL
)
, ncols = 176L
, nrows = 200L
, crs = new("CRS"
, projargs = NA_character_
)
, z = list()
, layernames = "Do not use the layernames slot (it is obsolete and will be removed)\nUse function 'names'"
)
Maybe I'm missing something, but I think you are overcomplicating the problem. As I see it, you have:
a stack of rasters (a list of rasters): ss
a list of polygons of the same length as ss: polys
You need to apply extract() to each pair (layer, poly) from (ss, polys):
sapply(1:nlayers(ss), function(i) {
  m <- extract(ss[[i]], polys[i], method='bilinear', na.rm=TRUE)[[1]]
  d <- ifelse(!is.null(m), sum(m)/length(m), NA)
  d
})
Here is an example with two layers, since you didn't give a reproducible one:
## generate some data
library(raster)
cds1 <- rbind(c(-180,-20), c(-160,5), c(-60, 0), c(-160,-60), c(-180,-20))
cds2 <- rbind(c(80,0), c(100,60), c(120,0), c(120,-55), c(80,0))
## In your case you need something like as(moose99kern, "SpatialPolygons")
polys <- SpatialPolygons(list(Polygons(list(Polygon(cds1)), 1),
Polygons(list(Polygon(cds2)), 2)))
r <- raster(ncol=36, nrow=18)
r[] <- 1:ncell(r)
r1 <- raster(ncol=36, nrow=18)
r1[] <- seq(-1,-2,length.out=ncell(r1))
ss <- stack(r,r1)
## density compute
sapply(1:nlayers(ss), function(i) {
  ## sum of the values of the cells of raster ss[[i]] covered by the polygon polys[i]
  m <- extract(ss[[i]], polys[i], method='bilinear', na.rm=TRUE)[[1]]
  d <- ifelse(!is.null(m), sum(m)/length(m), NA)
  d
})
[1] 387.815789 -1.494714
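To guard against the non-overlap case raised in the question, here is a minimal sketch of a coarse bounding-box test before extracting; overlaps() is a hypothetical helper built on raster's Extent slots, not a raster function:
# coarse overlap test between a raster and a polygon via their extents
overlaps <- function(r, p){
  er <- extent(r); ep <- extent(p)
  ep@xmin <= er@xmax && ep@xmax >= er@xmin &&
    ep@ymin <= er@ymax && ep@ymax >= er@ymin
}
sapply(1:nlayers(ss), function(i) {
  if (!overlaps(ss[[i]], polys[i])) return(NA)  # skip non-overlapping pairs
  m <- extract(ss[[i]], polys[i], na.rm=TRUE)[[1]]
  if (is.null(m)) NA else sum(m)/length(m)
})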
When you are asking questions about R, always use a simple reproducible example rather than your own data; unless perhaps what you want to do works for such an example but not for your data, in which case you should still show the working example and the error message you are getting. You can typically start with the examples in the help files, as below from ?extract:
library(raster)
r <- raster(ncol=36, nrow=18)
r[] <- 1:ncell(r)
s <- stack(r, r*2)
cds1 <- rbind(c(-180,-20), c(-160,5), c(-60, 0), c(-160,-60), c(-180,-20))
cds2 <- rbind(c(80,0), c(100,60), c(120,0), c(120,-55), c(80,0))
polys <- SpatialPolygons(list(Polygons(list(Polygon(cds1)), 1),
Polygons(list(Polygon(cds2)), 2)))
v <- extract(s, polys, small=TRUE)
# number of cells for each polygon
sapply(v, NROW)
# mean for each polygon
sapply(v, function(x) apply(x, 2, mean, na.rm=T))
The functions in sapply need to be refined if some of your polygons are outside of the raster (i.e., extract() returns NULL for them), but the small=TRUE option should avoid problems with very small polygons inside the raster. Also note that there is no "method" argument when extracting with SpatialPolygons* objects.
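For instance, a NULL-safe variant of the per-polygon mean might look like this (a sketch; it returns NA for polygons that miss the raster entirely):
sapply(v, function(x) {
  if (is.null(x)) rep(NA, nlayers(s)) else apply(x, 2, mean, na.rm=TRUE)
})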
Do not use a loop, except to prevent memory problems when you have very many cells for each polygon.