R: Gstat universal cokriging resolution

I am trying to do universal cokriging in R with the gstat package. I have a script that I was helped with, but now I'm stuck and can't ask for assistance from the original source.
The problem is that I can't change the output resolution of the cokriged data. I would like to import the interpolated map into ArcMap, but point-to-raster leaves me with a very low resolution.
My script is as follows:
library(raster)
library(gstat)
library(sp)
library(rgdal)
library(FitAR)
Loading my dataset, which contains coordinates and sampled values:
kova<-read.table("katvus_point_modif3.txt",sep=" ",header=T)
coordinates(kova)=~POINT_X+POINT_Y
Loading depth values at the same coordinates as above; this is my covariate:
Sygavus<-read.table("sygavus_point_cokrig.txt",sep=" ",header=T)
coordinates(Sygavus)=~POINT_X+POINT_Y
overlay <- over(kova,Sygavus)
kova$Sygavus <- overlay$Sygavus
This is supposed to set the boundary for my interpolation; the file is a shapefile exported from ArcMap:
border <- shapefile("area_2014.shp")
projection(kova)=projection(border)
This is supposed to create a grid for cokriging, and res= should let me specify what resolution I want the output to be, but no matter what number I use, the output does not change.
grid <- spsample(border,type="regular",res=25)
I remove overlapping points:
zero <- zerodist(kova)
kova <- kova[-zero[,2],]
I load the depth covariate raster file. This is a depth raster exported from ArcMap in ASCII form:
depth <- raster("htp_depth_covar.asc")
projection(depth)=projection(border)
overlay <- extract(depth,kova)
kova$depth <- overlay
I remove NA values from the overlain depth values. (These values should be the same as the previously loaded depth covariate table at the respective coordinates, but if I leave this part out, the script stops working.)
kova <- kova[!is.na(kova$depth),]
kova.gstat <- gstat(id="Kova",formula=kova~depth,data=kova)
kova.gstat <- gstat(kova.gstat,id="Sygavus",formula=Sygavus~depth,data=kova)
var.kova <- variogram(kova.gstat)
plot(var.kova)
kova.gstat <- gstat(kova.gstat,id=c("Kova","Sygavus"),model=vgm(psill=cov(kova$kova,kova$Sygavus),model="Mat",range=12000,nugget=0))
kova.gstat <- fit.lmc(var.kova,kova.gstat,model=vgm(psill=cov(kova$kova,kova$Sygavus),model="Mat",range=12000,nugget=0))
plot(var.kova,kova.gstat$model)
overlay <- extract(depth,grid)
grid <- as.data.frame(grid)
grid$depth <- overlay
coordinates(grid)=~x1+x2
projection(grid)=projection(border)
krige <- predict.gstat(kova.gstat,grid)
spplot(krige,c("Kova.pred"))
write.table(krige, "kova.raster1.ck.csv", sep=";", dec=",", row.names=F)
Any help in understanding the gstat cokriging and the script overall would be greatly appreciated!

Because you don't provide a reproducible example I can only guess, but I think that spsample ignores the res=25 argument. Try n=1000 instead and then increase that value to get higher resolution.
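For reference, spsample() takes either n (an approximate number of points) or cellsize (the grid spacing in map units); unknown arguments such as res are silently ignored. A minimal sketch (assuming your map units are metres, so cellsize=25 means a 25 m grid; worth verifying against your sp version's documentation):
grid <- spsample(border, n=1000, type="regular") # roughly 1000 regularly spaced points
grid <- spsample(border, n=1, type="regular", cellsize=25) # 25-unit spacing; cellsize should take precedence over n
Also, since the prediction points then lie on a regular grid, you can skip the CSV and point-to-raster steps entirely and write a raster directly. A sketch, with a hypothetical output file name:
gridded(krige) <- TRUE # promote the regular prediction points to a SpatialPixelsDataFrame
writeRaster(raster(krige["Kova.pred"]), "kova_ck.tif", overwrite=TRUE)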

Related

Raster calculation in R

I have two files from this website:
https://sedac.ciesin.columbia.edu/data/set/gpw-v4-population-count-rev11/data-download
and a shapefile of China from this website:
https://gadm.org/download_country_v3.html
I would like to compute the difference between the raster population layers, so that I can show a map where each pixel represents the change in population in China.
I used this code
library(raster)
library(sf)
library(tmap)
p_15 <- terra::rast("gpw-v4-population-count-rev11_2015_2pt5_min_tif/gpw_v4_population_count_rev11_2015_2pt5_min.tif")
p_20 <- terra::rast("gpw-v4-population-count-rev11_2020_2pt5_min_tif/gpw_v4_population_count_rev11_2020_2pt5_min.tif")
CHN <- sf::read_sf("gadm36_CHN_shp/gadm36_CHN_1.shp")
CHN <- sf::st_transform(CHN, crs="epsg:4490")|> terra::vect()
p_15<- terra::project(p_15,'EPSG:4490')
p_20 <- terra::project(p_20,'EPSG:4490')
p_15_crop <- terra::crop(p_15, CHN)
p_20_crop <- terra::crop(p_20, CHN)
p_15_mask <- mask(p_15_crop, CHN)
p_20_mask <- mask(p_20_crop, CHN)
Everything in the code above works fine.
Now I used overlay from the raster package to calculate the difference between the population layers to show the change in each pixel.
I used this code:
diff1520 <- overlay(p_15_mask, p_20_mask, fun=function(x,y){return(y-x)})
But I got an error message saying the method is not applicable. What is wrong with the code?
By the way, I also tried the geodata package, but it did not solve my problem.
Simply subtracting the objects will work. But if you still want to apply a function to a SpatRaster, you can use terra::lapp, which is equivalent to raster::overlay. The main difference is that you have to combine the layers first.
library(terra)
p_mask <- c(p_15_mask, p_20_mask)
diff1520 <- lapp(p_mask, fun=function(x,y){return(y-x)})
It's probably because you created your masks with terra, so the masks are SpatRaster objects, and you tried to use the overlay() function from raster, which only works with raster objects.
You can do what you want with
diff1520 <- p_20_mask - p_15_mask
That's the basic terra way.
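For a quick look at the result, terra's plot method will map the per-pixel change (the title string is just an example):
plot(diff1520, main="Population change 2015-2020")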

Query raster brick layer based on another raster in R

I have a NetCDF file of global oceanographic (OmegaA) data at relatively coarse spatial resolution with 33 depth levels. I also have a global bathymetry raster at much finer resolution. My goal is to extract the seabed OmegaA data from the NetCDF file, using the bathymetry data to determine the desired depth. My code so far:
library(raster)
library(rgdal)
library(ncdf4)
# Aragonite data. Defaults to CRS WGS84
ncin <- nc_open("C:/..../GLODAPv2.2016b.OmegaA.nc")
ncin.depth <- ncvar_get(ncin, "Depth")# 33 depth levels
omegaA.brk <- brick("C:/.../GLODAPv2.2016b.OmegaA.nc")
omegaA.brk <- rotate(omegaA.brk) # because the netCDF is in lon 0-360
# depth raster. CRS WGS84
r<-raster("C:/....GEBCO.tif")
# resample the raster brick to the resolution that matches the bathymetry raster
omegaA.brk <-resample(omegaA.brk, r, method="bilinear")
# create blank final raster
omegaA.rast <- raster(ncol = r@ncols, nrow = r@nrows)
extent(omegaA.rast) <- extent(r)
omegaA.rast[] <- NA_real_
# create vector of indices of desired depth values
depth.values<-getValues(r)
depth.values.index<-which(!is.na(depth.values))
# loop to find appropriate raster brick layer, and extract the value at the desired index, and insert into blank raster
for (p in depth.values.index) {
dep.index <-which(abs(ncin.depth+depth.values[p]) == min(abs(ncin.depth+depth.values[p]))) ## this sometimes results in multiple levels being selected
brk.level <- omegaA.brk[[dep.index]] # can be more than one level if multiple layers were selected above
omegaA.rast[p] <-omegaA.brk[[1]][p] ## here I choose the first level if multiple levels have been selected above
print(paste(p, "of", length(depth.values.index))) # counter to look at progress.
}
The problem: the result is a raster with massive gaps (NAs) in it where there should be data. The gaps often take a distinctive shape, e.g. following a contour or a long straight line. I've pasted a cropped example.
I think this could be because either 1) for some reason the 'which' statement in the loop is not finding a match, or 2) a misalignment of the projections is created, which I've read can happen when using 'rotate'.
I've tried to make sure all the extents, resolutions, number of cells, and CRS's are all the same, which they seem to be.
To speed up the process I've cropped the global brick and bathy raster to my area of interest, again checking that all the spatial resolutions, etc etc match - I've not included those steps here for simplicity.
At a loss. Any help welcome!
Without a reproducible example, this kind of problem is hard to solve. I can't tell where your problem is, but I'll present the approach I would try. Maybe it's good, maybe it's bad, I don't know, but it may inspire you to find a way around your problem.
To my understanding, you have a brick of OmegaA (33 layers/depths) and a bathymetry raster. You want to get the OmegaA value at the bottom of the sea. Here is how I would do it:
Resample the OmegaA brick to the same resolution and extent as the bathymetry raster.
Transform the bathymetry raster into a raster brick of 33 layers of 0-1. E.g., if the sea bottom is at 200 m for one particular pixel, then this pixel is 0 on every depth layer other than the 200 m one, where it is 1. To program this, I would go the long way, something like:
r_1 <- r
values(r_1) <- values(r)==10 # where 10 is the depth (it could be a range with < or >)
r_2 <- r
values(r_2) <- values(r)==20
...
r_33 <- r
values(r_33) <- values(r)==250
r_brick <- brick(r_1, r_2, ..., r_33)
Then you multiply the two raster bricks. They have the same dimensions, so it should be easy. The output should be a raster brick of 33 layers with 0 everywhere except at the bottom of the sea, where it holds the OmegaA value.
Combine all the layers of the brick obtained previously into a single raster with a sum.
This should work. If you have problems dealing with raster bricks, you could turn the data into base R arrays, which could be simpler.
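Along those lines, a minimal array-based sketch of the same idea (untested; it assumes, like your loop, that ncin.depth is positive while the bathymetry values are negative, and that omegaA.brk has already been resampled to match r):
depth.values <- getValues(r)
ok <- which(!is.na(depth.values)) # skip land/NA cells
# index of the depth level closest to the seabed, per cell
nearest <- apply(abs(outer(depth.values[ok], ncin.depth, "+")), 1, which.min)
# getValues on a brick returns a cells x layers matrix; pick one layer per cell
omega.values <- getValues(omegaA.brk)
seabed <- rep(NA_real_, length(depth.values))
seabed[ok] <- omega.values[cbind(ok, nearest)]
omegaA.rast <- raster(r)
omegaA.rast[] <- seabed
As a side benefit, which.min always returns the first minimum, so the multiple-match problem in your which() statement disappears.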
Good luck.

Dealing with unordered XY points to create a polygon shapefile in R

I've inherited a geodatabase of polygons of lakes for which I am trying to create sampling grids on each lake. My current strategy is to export the spatial data to CSV, use R to run a loop that creates the grids on each lake, and then write to a new shapefile. However, here is my problem: when exporting to CSV, the WKT strings get messed up and put onto different lines. Okay, no problem; I moved on to exporting just the geometry to CSV so that I get X-Y values. When I simply plot the points they look perfect (using plot(y~x)), but the points are not in order. So, when I transform the data to a SpatialPolygon with the sp package in R using the following sequence:
XY-points -> Polygon -> Polygons -> SpatialPolygon
and then plot the SpatialPolygon I get this:
I know this is an artifact of incorrectly ordered points, because when I order the points by X and then by Y and run the same procedure, here is what I get:
This is what the correct plotting is supposed to look like (X-Y data plotted with open circles):
Here is a short reproducible example of what I am trying to deal with:
library(sp)
# correct polygon
data <- data.frame(x=c(1:10, 10:1), y=c(5:1, 1:10, 10:6))
# plot(y~x, data=data)
correct.data.points <- rbind(data, data[1,]) # to close the ring for a polygon
correct.data.coords <- as.matrix(cbind(correct.data.points))
correct.data.poly <- Polygon(correct.data.coords, hole=F)
correct.data.poly <- Polygons(list(correct.data.poly), ID=0)
correct.data.poly.sp <- SpatialPolygons(list(correct.data.poly))
plot(correct.data.poly.sp)
# incorrect polygon
scr.data <- data[c(sample(1:20)),]
# plot(y~x, data=scr.data)
scr.data.points <- rbind(scr.data, scr.data[1,]) # to close the ring for a polygon
scr.data.coords <- as.matrix(cbind(scr.data.points))
scr.data.poly <- Polygon(scr.data.coords, hole=F)
scr.data.poly <- Polygons(list(scr.data.poly), ID=0)
scr.data.poly.sp <- SpatialPolygons(list(scr.data.poly))
plot(scr.data.poly.sp)
Any thoughts? Thanks for any help or insight anyone can provide. Also, for reference, I am using QGIS 2.6.0 and the MMQGIS Python plugin to do the geometry exporting.
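In case it helps, one way to recover the ring order, assuming each lake outline is roughly convex (or at least star-shaped around its centroid), is to sort the points by their angle around the centroid before building the polygon. A sketch using the scrambled example above:
ctr <- colMeans(scr.data) # approximate centroid of the ring
ang <- atan2(scr.data$y - ctr["y"], scr.data$x - ctr["x"])
ordered.data <- scr.data[order(ang),]
ordered.points <- rbind(ordered.data, ordered.data[1,]) # close the ring
ordered.poly <- Polygons(list(Polygon(as.matrix(ordered.points), hole=F)), ID=0)
plot(SpatialPolygons(list(ordered.poly)))
For strongly concave shorelines this will still fail, in which case preserving the original vertex order during export is the safer route.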

In R, how to average spatial points data over spatial grid squares

Managed to solve the problem now; see the Solution below.
I have a set of around 50 thousand points that have coordinates and one value associated with them. I would like to be able to place the points into a grid, averaging the associated value of all points that fall into each grid square. So I want to end up with an object that identifies each grid square and gives the average inside it.
I have the data in a spatial points data frame and a spatial grid object, if that helps.
Improving the question: I have definitely done some searching; sorry about the initial state of the question. I had only managed to frame it inside my own head and hadn't had to communicate it to anyone else before...
Here is example data that hopefully illustrates the problem more clearly
library(sp)
##make some data
longi <- runif(100,0,10)
lati <- runif(100,0,10)
value <- runif(100,20,30)
##put in data frame then change to spatial data frame
df <- data.frame("lon"=longi,"lat"=lati,"val"=value)
coordinates(df) <- c("lon","lat")
proj4string(df) <- CRS("+proj=longlat")
##create a grid that bounds the data
grd <- GridTopology(cellcentre.offset=bbox(df)[,1],
cellsize=c(1,1),cells.dim=c(11,11))
sg <- SpatialGrid(grd)
Then I hope to get an object, be it a vector/data frame/list, that gives me the average of value in each grid cell/square and some way of identifying which cell it is.
Solution
##convert the grid into a polygon##
polys <- as.SpatialPolygons.GridTopology(grd)
proj4string(polys) <- CRS("+proj=longlat")
##can now use the function over to select the correct points and average them
results <- rep(0, length(polys))
for(i in 1:length(polys)) {
results[i] = mean(df$val[which(!is.na(over(x=df,y=polys[i])))])
}
My question now is if this is the best way to do it or is there a more efficient way?
Your description is vague at best. Please try to ask more specific questions, preferably with code illustrating what you have already tried. Averaging a single value in your point data or a single raster cell makes absolutely no sense.
The best guess at an answer I can provide is to use raster extract() to assign the raster values to an sp point object and then use tapply() to aggregate the values to your grouping values in the points. You can use the coordinates of the points to identify the cell location or, alternately, the cell numbers returned from extract (per the example below).
require(raster)
require(sp)
# Create example data
r <- raster(ncol=500, nrow=500)
r[] <- runif(ncell(r))
pts <- sampleRandom(r, 100, sp=TRUE)
# Add a grouping value to points
pts@data <- data.frame(ID=rownames(pts@data), group=c( rep(1,25),rep(2,25),
rep(3,25),rep(4,25)) )
# Extract raster values and add to the @data slot data.frame. Note, the "cells"
# attribute indicates the cell index in the raster.
pts@data <- data.frame(pts@data, extract(r, pts, cellnumbers=TRUE))
head(pts@data)
# Use tapply to calculate group means
tapply(pts@data$layer, pts@data$group, FUN=mean)
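To average by grid cell rather than by the arbitrary group, the same tapply() call can aggregate on the cell index that extract() returned:
tapply(pts@data$layer, pts@data$cells, FUN=mean)
And for the loop in the asker's own Solution above, over() can do that aggregation in a single call through its fn argument (a sketch using the polys and df objects defined there):
results2 <- over(polys, df, fn=mean)$val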

Having trouble calculating Home Range area

I am having a lot of trouble in R calculating the area of the home range of an animal. I thought that once I had produced a home range (if I've done it correctly) calculating the area would be easy, but no.
I've pasted some of the code I've been trying below. Would anyone have any insight?
# Load package
library(adehabitat)
#Load file Frodo
dd <- read.csv(file.choose(), header = T)
# Plot the home range
xy <- dd[,c("X","Y")]
id <- dd[,"name"]
hr<- mcp(xy,id,percent=95)
plot(hr)
points(xy[xy$id=="frodo",])
#Great. Home range produced. Now calculate area
area <- mcp.area(xy, id, percent=95)
# Result 2.287789e-09 Ha. Way too small. Maybe it doesn't like lat/long.
# Will try and convert coordinates into m or km
# Load map project
library(mapproj)
x<-mapproject(t$X,t$Y,projection="mercator")
# It's converted it to something, but it's not metres or kilometres.
# I'll try and run it anyway
xy <- x[,c("X","Y")]
# incorrect number of dimensions
# Ill try Project 4
library(proj4)
xy <- dd[,c("X","Y")]
tr <- ptransform(xy/180*pi, '+proj=latlong +ellps=sphere',
'+proj=merc +ellps=sphere')
View(tr)
# There seems to be a Z column filled with 0's.
# Is that going to affect anything?
# Let's look at the data
plot(tr)
# Looks good, Lets try and create a home range
xy <- tr[,c("x","y")]
# 'incorrect number of dimensions'
No idea what the problem is. I don't know if I'm on the right track or doing something completely wrong.
In order to calculate area you need your points in a projected coordinate systems (area in long/lat would just be units of degree). The type of projection you use is going to have a big effect on the resulting area. For instance the Mercator projection distorts area away from the Equator -- you might want to look into the best equal-area projection for your location. I am going to answer the programming part of your question, once you find the right projections to use you can substitute them in.
require(sp)
require(rgdal)
orig.points <- dd[,c("X","Y")]
# geographic coordinate system of your points
c1 <- CRS("+proj=latlong +ellps=sphere")
# define as SpatialPoints
p1 <- SpatialPoints(orig.points, proj4string=c1)
# define projected coordinate system of your choice, I am using the one you
# defined above, but see:
# http://www.remotesensing.org/geotiff/proj_list/mercator_1sp.html
# to make sure your definition of the mercator projection is appropriate
c2 <- CRS("+proj=merc +ellps=sphere")
p2 <- spTransform(p1, c2) # project points
# convert to Polygon (this automatically computes the area as an attribute)
poly <- Polygon(p2)
poly@area # will print out the area
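With a metre-based projection such as the Mercator definition above, poly@area is in square metres, so a quick conversion (assuming that CRS) is:
poly@area / 10000 # square metres to hectares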
