Hope someone can help. I have a large dataset from which I have generated 10 estUDs with the same grid and h value (h = 200). Here is a subset of just two of them. I can visualise them using image(liud), but when I try to use the function getverticeshr I get an error that the subscript is out of bounds. I have tried changing the grid and the value of h to no avail. I wonder if it is something to do with the way I am combining them into and out of a list?
library(adehabitatHR)
## combine the individual UDs into one object
liud <- list(Y2889a, Y2889b)
class(liud) <- "estUDm"
image(liud)  # plot all estUDs
v <- getverticeshr(liud)
I have reproduced the error with the puechabonsp dataset below
library(adehabitatHR)
## Load the data
data(puechabonsp)
loc <- puechabonsp$relocs
## have a look at the data
head(as.data.frame(loc))
## the first column of this data frame is the ID
## Estimation of UD for each of the animals (two here as an example)
udBrock <- kernelUD(loc[as.data.frame(loc)[,1]=="Brock",], grid=200)
udCalou <- kernelUD(loc[as.data.frame(loc)[,1]=="Calou",], grid=200)
liud <- list(udBrock, udCalou)
class(liud) <- "estUDm"
image(liud)  # plot all estUDs
v <- getverticeshr(liud)
Thanks for your comment Chris, I should have explained my dataset. I have 10 animals and have generated random points based on recorded polygons for each animal. I have run this 100 times per animal. My aim is to generate a mean utilization distribution for each animal based on all 100 runs. So far I have used this code:
xybat <- subset(bat.master, bat.master$id =="Y2889a",select=x:loopno )
#change to spatial points
xy <- xybat[1:2]    # first two columns saved as coords
df <- xybat[-1:-3]  # remove unneeded columns for the UD
SPDF <- SpatialPointsDataFrame(coords=xy, data=df)  # combine df and xy
udHR <- kernelUD(SPDF, h = 200, grid=habitat, kern="epa")
## I would proceed using the raster package
library(raster)
ud <- stack(lapply(udHR, raster))
## You can now check the first one
plot(ud[[1]])
## or look at all of them
#plot(ud)
## take the mean
plot(udm <- mean(ud))
## now you can either proceed in raster and calculate your isopleths, or convert it back to an estUD; this is a bit of a hack and not the nicest way to do it
Y2889a<- udHR[[1]]
Y2889a@grid <- as(udm, "GridTopology")
so if I follow your suggestion and run the kernelUD function on the whole dataset, I still need to stack each animal's UDs separately and then combine them into an estUDm, and I am back to the same problem. I hope you can help me come up with a solution.
Best wishes,
Simone
This is basically just a generalisation of my previous answer to multiple animals; maybe it is useful:
library(adehabitatHR)
library(raster)
## generate some dummy data for 15 animals, each with 10 replications
pts <- replicate(15, SpatialPointsDataFrame(coords=cbind(rnorm(1000), rnorm(1000)),
data=data.frame(id=rep(1:10, each=100))))
## generate uds
uds <- lapply(pts, function(x) kernelUD(x, h = "href", same4all = TRUE, kern = "bivnorm"))
udsr <- lapply(uds, function(x) stack(lapply(x, raster)))
## You can now check the first one
plot(udsr[[1]][[1]])
## or look at all 10 UDs of the first animal
plot(udsr[[1]])
## take the mean
udsm <- lapply(udsr, mean)
## go back to adehabitat
for (i in seq_along(udsm)) {
  uds[[i]] <- uds[[i]][[1]]
  uds[[i]]@grid <- as(udsm[[i]], "GridTopology")
}
## now you can work with uds as if it were a list of HR estimates
iso95 <- lapply(uds, getverticeshr, percent=95)
## plot first animal
plot(iso95[[1]])
## plot second animal
plot(iso95[[2]])
I have been struggling with this for hours.
I have a shapefile (called "shp") containing 177 polygons i.e. 177 counties. This shapefile is overlaid on a raster. My raster (called "ras") is made of pixels having different pollution values.
Now I would like to extract all pixel values and their number of occurrences for each polygon.
This is exactly what the QGIS function "zonal histogram" is doing. But I would like to do the exact same thing in R.
I tried the extract() function and I managed to get a mean value per county, which is already a first step, but I would like to get the pixel-value distribution (a histogram) instead.
Could someone give me a hand?
Many thanks,
Marie-Laure
Thanks a lot for your help. Next time I promise I will be careful and explain my issue in more detail.
With your help I managed to find a solution.
I also used this website : http://zevross.com/blog/2015/03/30/map-and-analyze-raster-data-in-r/
For information, first I had to uninstall the "tidyr" package because there was a conflict with the extract() function (tidyr also provides an extract() that masked the raster version).
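As an alternative to uninstalling, calling the function with its namespace should also avoid the masking:
ext <- raster::extract(ras, shp, method='simple')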
In case it can help someone, here is the final code:
# Libraries loading
library(raster)
library(rgdal)
library(sp)
# raster layer import
ras <- raster("C:/*.tif")
# shapefile layer import
shp <- shapefile("C:/*.shp")
# Extract the raster pixel values for each county
ext <- extract(ras, shp, method='simple')
# Function to tabulate pixel values by region & return a data frame
tabFunc <- function(indx, extracted, region, regname) {
  dat <- as.data.frame(table(extracted[[indx]]))
  dat$name <- region[[regname]][[indx]]
  return(dat)
}
# run through each county & compute a table of the number
# of raster cells by pixel value. ("CODE" is the county code)
tabs <- lapply(seq(ext), tabFunc, ext, shp, "CODE")
# assemble into one data frame
df <- do.call(rbind, tabs)
# to see the data frame in R
print(df)
# table export
write.csv(df,"C:/*.csv", row.names = FALSE)
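If a wide table is preferred (one row per county and one column per pixel value, as in the QGIS "zonal histogram" output), here is a small sketch using the columns produced above (Var1 and Freq are the default names from table(), name is added by tabFunc):
wide <- as.data.frame.matrix(xtabs(Freq ~ name + Var1, data = df))
print(wide)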
Here is a minimal, self-contained, reproducible example (almost literally from ?raster::extract, so not difficult to make)
library(raster)
r <- raster(ncol=36, nrow=18, vals=rep(1:9, 72))
cds1 <- rbind(c(-180,-20), c(-160,5), c(-60, 0), c(-160,-60), c(-180,-20))
cds2 <- rbind(c(80,0), c(100,60), c(120,0), c(120,-55), c(80,0))
polys <- spPolygons(cds1, cds2)
Now you can do
v <- extract(r, polys)
par(mfrow=c(1,2))
z <- lapply(v, hist)
Or more fancy
mains <- c("first", "second")
par(mfrow=c(1,2))
z <- lapply(1:length(v), function(i) hist(v[[i]], main=mains[i]))
Or do you want a barplot?
z <- lapply(1:length(v), function(i) barplot(table(v[[i]]), main=mains[i]))
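And if you want the counts as a data frame rather than a plot, a small sketch (similar in spirit to the tabFunc approach above):
tabs <- lapply(v, function(x) as.data.frame(table(x)))
counts <- do.call(rbind, Map(cbind, polygon = seq_along(tabs), tabs))
counts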
I want to create a weight matrix based on distance. My code at the moment looks as follows and works for a smaller sample of the data. However, with the large dataset (569424 individuals in 24077 locations) it doesn't go through. The problem arises at the nb2blocknb function. So my question would be: how can I optimize my code for large datasets?
library(spdep)
# load all survey data
DHS <- read.csv("Daten/final.csv")
attach(DHS)
# define coordinates matrix
coormat <- cbind(DHS$location, DHS$lon_s, DHS$lat_s)
coorm <- cbind(DHS$lon_s, DHS$lat_s)
colnames(coormat) <- c("location", "lon_s", "lat_s")
coo <- cbind(unique(coormat))
c <- as.data.frame(coo)
coor <- cbind(c$lon_s, c$lat_s)
# get a list of neighbouring locations that are within 50 km of each other
neighbor <- dnearneigh(coor, d1 = 0, d2 = 50, row.names=c$location, longlat=TRUE, bound=c("GE", "LE"))
# get neighborhood list on individual level
nb <- nb2blocknb(neighbor, as.character(DHS$location))
# weight matrix in list format
nbweights.lw <- nb2listw(nb, style="B", zero.policy=TRUE)
Thanks a lot for your help!
You're trying to make 1.3e10 distance calculations; the results would be in the GB.
I think you'd want to limit either the maximum distance or the number of nearest neighbours you're looking for. Try nn2 from the RANN package:
library('RANN')
nearest_neighbours_w_distance <- nn2(coordinatesA, coordinatesB, 10)
note that this operation is not symmetric (Switching coordinatesA and coordinatesB gives different results).
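The result is a list with the neighbour indices and the corresponding distances (nn.idx and nn.dists), so, for example:
closest_dist <- nearest_neighbours_w_distance$nn.dists[, 1]  # distance from each query point to its nearest neighbour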
Also, you would first have to convert your GPS coordinates to a coordinate reference system in which you can calculate Euclidean distances, for example UTM (code not tested):
library("sp")
gps2utm <- function(gps_coordinates_matrix, utmzone){
  spdf <- SpatialPoints(gps_coordinates_matrix[, 1:2])
  proj4string(spdf) <- CRS("+proj=longlat +datum=WGS84")
  return(spTransform(spdf, CRS(paste0("+proj=utm +zone=", utmzone, " +ellps=WGS84"))))
}
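A rough usage sketch, assuming coordinatesA and coordinatesB are two-column lon/lat matrices and all points fall in a single UTM zone (zone 36 is just a placeholder):
utmA <- coordinates(gps2utm(coordinatesA, 36))
utmB <- coordinates(gps2utm(coordinatesB, 36))
nearest_neighbours_w_distance <- nn2(utmA, utmB, 10)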
Below I have a set of points with locations and attributes.
I have one problem here: the attr column is not passed into the final point_grid_loc.
Secondly, what I want to do next is take 1 random point from each grid cell and return it as a data.frame or SpatialPointsDataFrame of points.
I am struggling with how to approach it:
# Load libraries
library(sp)
library(gstat)
# Set seed for reproducible results
set.seed(34)
x <- c(5.9,6.5,7.1,3.1,5.6,8.1,6.3,5.8,2.1,8.8,5.3,6.8,9.9,2.5,5.8,9.1,2.4,2.5,9.2)
y <- c(3.6,6.5,5.4,5.2,1.1,5.1,2.7,3.8,6.07,4.4,7.3,1.8,9.2,8.5,6.8,9.3,2.5,9.2,2.5)
attr <- c(23,56,2,34,7,89,45,34,2,34,5,67,8,99,6,65,3,32,12)
initialdata <- data.frame(x,y,attr)
colnames(initialdata) <- c("x","y","attr")
# Creating a SpatialPointsDataFrame:
coords <- data.frame(initialdata$x,initialdata$y)
coords <- SpatialPoints(coords, proj4string=CRS(as.character(NA)), bbox = NULL)
initialdata_DF <- data.frame(coords,initialdata$attr)
initialdata_SPDF <- SpatialPointsDataFrame(coords,initialdata_DF)
#==============#
cellsize <- 3
#==============#
# Creating a grid which will constitute a mesh for stratified sampling
# Info how to include CSR p. 50 yellow book
bb<- bbox(coords)
cs <- c(cellsize,cellsize)
cc <- bb[,1] + (cs/2)
cd <- ceiling(diff(t(bb))/cs)
initialdata_grd <- GridTopology(cellcentre.offset = cc, cellsize = cs,
cells.dim = cd)
initialdata_SG <- SpatialGrid(initialdata_grd) # Final grid created here
# Plot the results:
plot(initialdata_SG)
plot(initialdata_SPDF, add=T,col="blue", pch="+")
# Convert the grid cells to polygons:
poly <- as.SpatialPolygons.GridTopology(initialdata_grd)
# Identifies which point is in which grid/polygon location:
point_grid_loc <- data.frame(initialdata_SG,grid=over(initialdata_SPDF,poly))
I think you're running into trouble at the last step because you're calling the wrong object. If you want to add grid location to your spatial data, try:
initialdata_SPDF$grid <- over(initialdata_SPDF, poly)
To do the sampling part, you can use a split/apply/combine approach, like this:
# Split the spatial data into a list of data frames by grid location
gridlist <- split(initialdata_SPDF, initialdata_SPDF$grid)
# Sample one row from each data frame (grid cell) in the resulting list; see sample() help for details on that part
samples <- lapply(gridlist, function(x) x[sample(1:nrow(x), 1, FALSE),])
# Bind those rows back together in a new data frame
sampledgrid <- do.call(rbind, samples)
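A quick visual check of the result (assuming the objects created in the question are still in the workspace):
plot(initialdata_SG)
plot(sampledgrid, add=TRUE, col="red", pch=16)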
I have some data for 10 animals from which I have generated some random points. Each data set I have replicated 100 times. Below I have separated out 1 animal from the data and generated kernelUDs for each rep. I would now like some way to combine the UDs so I can produce a summed estimated density map, from which I can then go on and measure 50% and 90% home ranges along with other metrics.
library(adehabitatHR)
bat.master <- read.csv("C:/Users/Sim/Dropbox/Wallington GIS/bat.master")
names(bat.master)
# subset data frame to 1st bat only
xybat1 <- subset(bat.master, bat.master$id == "Y2889a", select=x:loopno)
# change to spatial points
xy <- xybat1[1:2]    # first two columns saved as coords
df <- xybat1[-1:-3]  # remove the columns not needed for the UD
SPDF <- SpatialPointsDataFrame(coords=xy, data=df)  # combine df and xy
ud1 <- kernelUD(SPDF, h = "href", same4all = TRUE, kern = "bivnorm")
Not sure if I understood your question right, but you could try something like this:
library(adehabitatHR)
## generate some dummy data
SPDF <- SpatialPointsDataFrame(coords=cbind(rnorm(1000), rnorm(1000)),
data=data.frame(id=rep(1:10, each=100)))
udHR <- kernelUD(SPDF, h = "href", same4all = TRUE, kern = "bivnorm")
## I would proceed using the raster package
library(raster)
ud1 <- stack(lapply(udHR, raster))
## You can now check the first one
plot(ud1[[1]])
## or look at all of them
plot(ud1)
## take the mean
plot(udm <- mean(ud1))
## now you can either proceed in raster and calculate your isopleths, or convert it back to an estUD; this is a bit of a hack and not the nicest way to do it
udHR <- udHR[[1]]
udHR@grid <- as(udm, "GridTopology")
## now you can work with udHR as if it were a HR estimate
plot(getverticeshr(udHR, percent=95))
plot(getverticeshr(udHR, percent=50), add=TRUE)
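If you prefer to stay in raster for the isopleths, one possible sketch (not part of the answer above) is to turn the mean UD into a cumulative-probability surface and polygonise the cells inside a given volume contour:
## normalise so the cell values sum to 1
udm_prob <- udm / cellStats(udm, sum)
v <- sort(values(udm_prob), decreasing = TRUE)
thresh95 <- v[which(cumsum(v) >= 0.95)[1]]  # smallest cell value inside the 95% contour
## dissolve=TRUE requires the rgeos package
iso95 <- rasterToPolygons(udm_prob >= thresh95, fun = function(x) x == 1, dissolve = TRUE)
plot(iso95)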
I am trying to estimate home range overlap among a batch of animals that I have estimated home ranges for through both the MCP and kernelUD methods in the adehabitatHR package. I realize there is a function for calculating overlap with the kernel method, but can anyone tell me how best to calculate overlap for MCP?
I gather the rgeos package has a gIntersection function that I have begun to play around with.
Has anyone come up with a relatively straightforward method/code, whether in adehabitatHR, rgeos, or elsewhere?
Maybe something like this helps:
library(adehabitatHR)
library(rgeos)
## Use some simulated data
ani1 <- SpatialPoints(matrix(rnorm(200, mean=2), ncol=2))
ani2 <- SpatialPoints(matrix(rnorm(200, mean=1), ncol=2))
## Function to calculate overlaps
gOverlap <- function(hr1, hr2, ...) {
  a <- gIntersection(hr1, hr2, ...)
  if (is.null(a)) {
    return(0)
  }
  gArea(a, byid=TRUE) / gArea(hr1, byid=TRUE)
}
## Calculate home ranges
hr1 <- mcp(ani1)
hr2 <- mcp(ani2)
## Calculate HR overlap
gOverlap(hr1, hr2)
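Note that the measure is directional (the intersection area is divided by the area of the first home range), so the reverse comparison will generally give a different value:
gOverlap(hr2, hr1)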
Running overlap one pair at a time can be very inefficient when you have many animals or many collaring sessions. I'm sure there is a cleaner way to do this (and I welcome suggestions on refinements!), but this worked for me:
# Relevant libraries (I think they're all here!)
library(adehabitatHR)
library(geosphere)
library(rgdal)  # for writeOGR
library(rgeos)
library(nlme)   # for gapply
# Define an overlap function for a single dyad
dyad.overlap <- function(cp, dyads.df){
  p1 <- subset(cp, cp@data$id == dyads.df[1])
  p2 <- subset(cp, cp@data$id == dyads.df[2])
  Overlap <- ifelse(is.null(gIntersection(p1, p2)), 0, gArea(gIntersection(p1, p2)) / gArea(p1))  # puts 0 if no overlap between the dyad
  return(Overlap)
}
# Define a function for the overlap of all animals from a given time period
mcp.overlaps <- function(period.df){
  period.df$Animal <- factor(period.df$Animal)  # drop any ids not relevant to that period
  count <- length(unique(period.df$Animal))     # number of individuals for the period
  anim <- unique(period.df$Animal)              # names of the individuals for the period
  xy <- SpatialPointsDataFrame(period.df[c("Easting","Southing")], data=data.frame(id=period.df$Animal))  # create SPDF
  proj4string(xy) <- CRS("+init=epsg:32750")    # define projection
  cp <- adehabitatHR::mcp(xy, percent=95)       # SpatialPolygonsDataFrame of the 95% MCPs
  writeOGR(obj=cp, dsn="tempdir", layer=paste(period.df$file_folder[1], "95 Percent MCPs"), driver="ESRI Shapefile")  # export shp of the 95% MCPs for the period, if desired
  dyads <- combn(anim, 2, simplify=TRUE)        # all possible dyad combinations
  dyads.df <- data.frame(A1=dyads[1,], A2=dyads[2,])  # a data frame of the pairs
  dyads.df$Overlap <- apply(dyads.df, 1, dyad.overlap, cp=cp)
  dyads.df$Period <- period.df$file_folder[1]
  return(dyads.df)
}
#Now run the overlap for each time period included in the data frame
All.Overlaps<-do.call(rbind.data.frame, gapply(df, groups=df$file_folder, FUN=mcp.overlaps))
Of course, if you only have a single time period (e.g., a year of data and looking at annual home ranges), all you have to do is:
All.Overlaps <- mcp.overlaps(df)
This assumes that all data is in a single data frame (df), with the variables defined as follows (adjust names to suit your data):
file_folder: collaring period/session of interest
Animal: unique identifier for an individual
Easting/Southing: your coordinate columns (here assumed to be projected, UTM zone 50S / epsg:32750; adjust the projection as needed)
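For illustration only, a hypothetical df with the expected structure (made-up values) could be built like this and then passed to mcp.overlaps or gapply as above:
df <- data.frame(file_folder = rep(c("Session1", "Session2"), each = 300),
                 Animal      = rep(c("A1", "A2", "A3"), 200),
                 Easting     = runif(600, 500000, 510000),
                 Southing    = runif(600, 9200000, 9210000))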
I have modified Johannes' answer a little; maybe it helps a little more:
library(adehabitatHR)
library(rgeos)
data(puechabonsp)
rel <- puechabonsp$relocs
cp <- mcp(rel[,1])
## Set Up a matrix which will store the results
mat <- matrix(NA,4,4,dimnames=list(c("1","2","3","4"),
c("1","2","3","4")))
## Set Up the loop
gOverlap <- function(hr, number, matrix){
  for(i in c(1:number)){
    for(j in c(1:number)){
      a <- gIntersection(hr[i,], hr[j,])
      if (is.null(a)){
        matrix[i, j] <- 0
      } else {
        matrix[i, j] <- gArea(a)
      }
    }
  }
  return(matrix)
}
##Test the function
gOverlap(cp, 4, mat)
##Plotting to corroborate
plot(cp)
The output is a matrix with every pairwise intersection area. To obtain the total overlap you only have to sum the rows or columns, excluding the self-intersections. Finally, you can add or remove individuals just by modifying the initial matrix.
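If you would rather have proportional overlap than raw intersection areas, a small tweak (just a sketch) is to divide by the area of the row animal's home range inside the loop:
matrix[i, j] <- gArea(a) / gArea(hr[i, ])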
All the best,