How to translate this time series calculation into a raster calculation?

I'm trying to reproduce this vector (time series) calculation code:
gamma.parameters <- fitdistr(may_baseline_3months[may_baseline_3months > 0], "gamma")
as a raster calculation.
What this code does is fit a gamma distribution, by maximum likelihood estimation, to a vector (time series) may_baseline_3months.
I want to calculate the same thing, but with a raster stack.
I tried doing this with the calc() function:
f1 <- function(x) {
  library(MASS)
  return(fitdistr(x, "gamma"))
}
gamma.parameters <- calc(x = may_baseline_3months, fun = f1)
but it didn't work:
Error in .calcTest(x[1:5], fun, na.rm, forcefun, forceapply) :
  cannot use this function
Note: My raster stack has only 4 layers.
EDIT
You can download example data here: spi
The fitdistr call is part of the procedure for my main goal: I'm trying to calculate the Standardized Precipitation Index (SPI). I have already done it with a time series of monthly precipitation over 30 years.
Here is the time series code, up to the line where I'm stuck:
data<-read.csv("guatemala_spi.csv",header = T,sep=";")
dates<-data[,1]
rain_1month<-data[,2]
rain_3months <- 0
# Set the first 2 elements to NA because the rainfall is accumulated over 3 months
for (i in c(1:2)) {
  rain_3months[i] <- NA
}
# Accumulate the rainfall for the rest of the data
number_of_months <- length(rain_1month)
for (j in c(3:number_of_months)) {
  rain_3months[j] <- 0.0
  for (i in c(0:2)) {
    rain_3months[j] <- rain_3months[j] + rain_1month[j - i]
  }
}
# Extract a time series for the month of interest (May)
may_rain_3months <- rain_3months[substr(dates, 5, 6) == "05"]
dates_may <- dates[substr(dates, 5, 6) == "05"]
number_of_years<-length(dates_may)
# Fit the gamma distribution by maximum likelihood estimation
start_year <- 1971
end_year <- 2010
start_index <- which(substr(dates_may, 1, 4) == start_year)
end_index <- which(substr(dates_may, 1, 4) == end_year)
may_baseline_3months <- may_rain_3months[start_index:end_index]
library(MASS)
gamma.parameters <- fitdistr(may_baseline_3months[may_baseline_3months > 0], "gamma")
That last line is the one that I'm having problems calculating for a raster stack.
Here's what I have so far in raster form:
Example multi-layer raster here (Monthly precipitation 2001 to 2004, 48 layers in total)
#Initiating a dates vector
dates<-c("200101","200102","200103","200104","200105","200106","200107","200108","200109","200110","200111","200112",
"200201","200202","200203","200204","200205","200206","200207","200208","200209","200210","200211","200212",
"200301","200302","200303","200304","200305","200306","200307","200308","200309","200310","200311","200312",
"200401","200402","200403","200404","200405","200406","200407","200408","200409","200410","200411","200412")
# Initialize an NA raster
rain_3months_1layer <- raster(nrow = 1600, ncol = 1673,
                              extent(-118.4539, -34.80395, -50, 30),
                              res = c(0.05, 0.05))
values(rain_3months_1layer) <- NA
# Create a 48-layer raster stack of NAs
rain_3months <- stack(mget(rep("rain_3months_1layer", 48)))
# Read the data
rain_1month <- stack("chirps_rain_1month.tif")
# Accumulate the rainfall
number_of_months <- nlayers(rain_1month)
for (j in c(3:number_of_months)) {
  rain_3months[[j]] <- 0.0
  for (i in c(0:2)) {
    rain_3months[[j]] <- rain_3months[[j]] + rain_1month[[j - i]]
  }
}
# Extract the rasters for the month of interest (May)
may_rain_3months <- stack(rain_3months[[which(substr(dates, 5, 6) == "05")]])
dates_may <- dates[substr(dates, 5, 6) == "05"]
number_of_years <- length(dates_may)
# Fit the gamma distribution by maximum likelihood estimation
start_year <- 2001
end_year <- 2004
start_index <- which(substr(dates_may, 1, 4) == start_year)
end_index <- which(substr(dates_may, 1, 4) == end_year)
may_baseline_3months <- stack(may_rain_3months[[start_index:end_index]])
library(MASS)
f1 <- function(x) {
  library(MASS)
  return(fitdistr(x, "gamma"))
}
gamma.parameters <- calc(x = may_baseline_3months, fun = f1)
I can't get calc() to apply fitdistr() to the raster stack.

You need to make a function that calc can use. Your function f1 returns an object of class fitdistr. The calc function does not know what to do with that:
library(MASS)
set.seed(0)
x <- runif(10)
f1 <- function(x) {
  return(fitdistr(x, "gamma"))
}
a <- f1(x)
class(a)
# [1] "fitdistr"
a
# shape rate
# 4.401575 6.931571
# (1.898550) (3.167113)
You need a function that returns numbers. Like f2:
f2 <- function(x) {
  fitdistr(x, "gamma")$estimate
}
b <- f2(x)
class(b)
#[1] "numeric"
b
# shape rate
#4.401575 6.931571
Test f2 with calc:
library(raster)
r <- raster(nrow = 10, ncol = 10)  # template raster for the example
s <- stack(lapply(1:12, function(i) setValues(r, runif(ncell(r)))))
r <- calc(s, f2)
I assume that this answers your question. I cannot be sure, because your question is far too complex. The first thing you need to do with a problem like this is to create a simple example, as I have done above.
Next question
Error in stats::optim(x = c(7, 7, 7, 7), par = list(shape = Inf, rate = Inf), :
  non-finite value supplied by optim.
That is a different issue: you are providing fitdistr with values it cannot deal with. You can add a try() clause to skip over those. You could also identify which cells this happens in, and what the values are, to see if there is something else you should do.
f3 <- function(x) {
  x <- try(fitdistr(x, "gamma")$estimate, silent = TRUE)
  if (class(x) == 'try-error') { c(-9999, -9999) } else { x }
}
x[1] <- NA
f2(x)
#Error in fitdistr(x, "gamma") : 'x' contains missing or infinite values
f3(x)
#[1] -9999 -9999
Note that you need to make sure that the number of values returned by f3 is always the same; in this case, two values. Here I use -9999 so that you can identify the affected cells. You can also use NA.
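Putting it together, here is a minimal sketch reusing the toy stack from above; converting the -9999 placeholder back to NA at the end is just one way to handle the flagged cells:
library(raster)
library(MASS)
# Same kind of toy stack as above: 12 layers of uniform random values
r <- raster(nrow = 10, ncol = 10)
s <- stack(lapply(1:12, function(i) setValues(r, runif(ncell(r)))))
# f3 as defined above
f3 <- function(x) {
  x <- try(fitdistr(x, "gamma")$estimate, silent = TRUE)
  if (class(x) == 'try-error') { c(-9999, -9999) } else { x }
}
params <- calc(s, f3)            # two-layer output: shape and rate
names(params) <- c("shape", "rate")
params[params == -9999] <- NA    # turn the placeholder back into NA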

Related

Function to call different columns for calculating correlation and confidence intervals using bootstrap in R

Here is the problem I am currently facing: I have a data frame (let's call it A) of 200 observations (rows) and 12 variables (columns), and I am trying to find a confidence interval, using bootstrap, based on the correlation between two variables in the data frame.
My Data:
library(boot)
library(tidyverse)
library(psychometric)
hsb2 <- read.table("https://stats.idre.ucla.edu/stat/data/hsb2.csv", sep=",", header=T)
Here I am trying to find the confidence interval using the bootstrap-based correlation. I wrote code for that and it works:
orig.cor <- cor(hsb2$math, hsb2$write, method = "spearman")
orig.cor
k <- CIr(r = orig.cor, n = 21, level = .95)
k
n <- length(hsb2$math)
#n
B <- 5000
boot.cor.all <- NULL
for (i in 1:B) {
  index <- sample(1:n, replace = T)
  boot.v2 <- hsb2$math[index]
  boot.v1 <- hsb2$write[index]
  boot.cor <- cor(boot.v1, boot.v2, method = "spearman")
  boot.cor.all <- c(boot.cor.all, boot.cor)
}
ci_boot <- quantile(boot.cor.all, prob = c(0.025, 0.975))
ci_boot
Result:
[1] 0.6439442
[1] 0.2939780 0.8416635
2.5% 97.5%
0.5556964 0.7211145
Here is the actual problem I am facing: I have to write a function to get the result for other variable pairs, but this function is not working:
bo <- function(v1, v2, df) {
  orig.cor <- cor(df$v1, df$v2, method = "spearman")
  orig.ci <- CIr(r = orig.cor, n = 21, level = .95)
  B <- 5000
  n <- length(df$v1)
  boot.cor.all <- NULL
  for (i in 1:B) {
    index <- sample(1:n, replace = T)
    boot.hvltt2 <- df$v1[index]
    boot.hvltt <- df$v2[index]
    boot.cor <- cor(boot.hvltt2, boot.hvltt, method = "spearman")
    boot.cor.all <- c(boot.cor.all, boot.cor)
  }
  ci_boot <- quantile(boot.cor.all, prob = c(0.025, 0.975))
  return(orig.cor, orig.ci, ci_boot)
}
After calling this function:
bo(math, write, hsb2)
bo(math, read, hsb2)
bo(female, write, hsb2)
bo(female, read, hsb2)
I get this error:
Error in cor(df$v1, df$v2, method = "spearman") : supply both 'x' and 'y' or a matrix-like 'x'
How do I write this function correctly?
I want the result of each function call to be stored in a data frame like the one below:
Variable1  Variable2  Orig Cor  Orig CI                Bootstrap CI
math       write       0.643     0.2939780  0.8416635   0.5556964   0.7211145
math       read        0.66      0.3242639  0.8511580   0.5736904   0.7400174
female     read       -0.059    -0.4787978  0.3820967  -0.20432743  0.08176896
female     write
science    write
science    read
The logic was right; I just had to make some changes to how you access the elements of df. R doesn't recognize the objects math and write because they are columns inside the data frame. One way to pass them as arguments to the function is to define them as strings, v1 = "math", and then access them with df[, v1]:
bo <- function(v1, v2, df) {
  orig.cor <- cor(df[, v1], df[, v2], method = "spearman")
  orig.ci <- CIr(r = orig.cor, n = 21, level = .95)
  B <- 5000
  n <- nrow(df) # changed length to nrow
  boot.cor.all <- NULL
  for (i in 1:B) {
    index <- sample(1:n, replace = T)
    boot.hvltt2 <- df[index, v1]
    boot.hvltt <- df[index, v2]
    boot.cor <- cor(boot.hvltt2, boot.hvltt, method = "spearman")
    boot.cor.all <- c(boot.cor.all, boot.cor)
  }
  ci_boot <- quantile(boot.cor.all, prob = c(0.025, 0.975))
  return(list(orig.cor, orig.ci, ci_boot)) # wrap the returns in a list
}
bo("math", "write", hsb2)

Calculate maximum length of consecutive days above a certain threshold in a raster stack

I would like to calculate the maximum length of consecutive days above a threshold t given a raster stack s as shown below:
library(raster)
set.seed(112)
x1 <- raster(nrows=10, ncols=10)
x2=x3=x4=x5=x6=x1
x1[]= runif(ncell(x1))
x2[]= runif(ncell(x1))
x3[]= runif(ncell(x1))
x4[]= runif(ncell(x1))
x5[]= runif(ncell(x1))
x6[]= runif(ncell(x1))
s=stack(x1,x2,x3,x4,x5,x6)*56
Here is my current function.
fun <- function(x, t) {
  y <- rle((x > t) * 1)
  z <- y$lengths[y$values == 1]
  return(max(z, 0))
}
I have also set a parameter q for export, as advised in the cluster {raster} help page:
q <- 0
I expect a raster layer as output, but instead the error below pops up:
[1] "cannot use this function"
attr(,"class")
[1] "snow-try-error" "try-error"
Error in clusterR(s, calc, args = list(fun = fun), export = "q") :
cluster error
What could be the problem?
First, if you use random values in your example data, please also set the random seed so it's reproducible.
library(raster)
set.seed(42)
x1 <- raster(nrows=10, ncols=10)
s <- do.call(stack,lapply(1:6,function(x) setValues(x1,runif(ncell(x1)))*56))
As to your question, the only thing you need is a simple function that can be passed into calc to obtain the desired results:
cd <- function(x, t) {
  y <- rle((x > t) * 1)
  z <- y$lengths[y$values == 1]
  return(max(z, 0))
}
This function uses rle, or run-length encoding, to calculate the lengths of consecutive runs in a vector. In this case I'm looking for the maximum number of consecutive 1s, which come from multiplying TRUE values (value above threshold t) by 1.
In the end you want to return the maximum run of the value 1, with 0 as a fallback in case there is no occurrence (side note: 1 indicates a single, non-consecutive occurrence).
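To make that concrete, here is what the rle step computes for a single pixel's series, using a made-up vector and a threshold of 40:
x <- c(10, 50, 55, 60, 20, 45)   # toy series for one pixel
y <- rle((x > 40) * 1)
y$lengths                        # 1 3 1 1
y$values                         # 0 1 0 1
max(y$lengths[y$values == 1], 0) # 3, the longest run above the threshold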
Finally, cd can be passed into calc, in this case using a threshold of 40:
plot(calc(s,function(x) cd(x,40)))

How to extrapolate a raster in R

I am trying to downscale climatic conditions using the methodology in this article with the R software. I am almost there, but I am missing a couple of steps.
Packages and data needed
For this example I uploaded some data to the archive.org website. To load the required packages and data used in this example, use the following code:
library(raster)
library(rgdal)
download.file("https://archive.org/download/Downscaling/BatPatagonia.rds", "Bat.rds")
download.file("https://archive.org/download/Downscaling/TempMinPatNow.rds", "Tmin.rds")
BatPatagonia <- readRDS("Bat.rds")
TempMinPatNow <- readRDS("Tmin.rds")
BatPatagonia is a raster file with the bathymetry and altitude of the area, extracted and transformed from the GEBCO dataset, while TempMinPatNow is the minimum temperature of the same area for January, extracted from WorldClim. The plots of the datasets are shown below:
The goal of this question
In order to downscale past data from the Last Glacial Maximum, I need to model what the current climate would be like if the sea level were the same as it was in the past. To do that, I use the GEBCO data to figure out more or less where the coast was. According to the methodology in the article cited above, these are the first three steps to follow:
Create a DEM for land above 20 meters above sea level
Calculate a Multiple Linear Regression in a moving window
Extrapolate coefficients to the ocean
Point 3 is what I have been struggling to do. I will show how I did the first 2 points, and then show what I have been looking at while trying to solve point 3.
1. Create a DEM for land 20 meters above sea level
To do this I took the BatPatagonia raster and replaced all values below 20 meters with NA values, using the following code:
Elev20 <- BatPatagonia
values(Elev20) <- ifelse(values(Elev20) <= 20, NA, values(Elev20))
The resulting raster is shown in the following image
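(Side note: an equivalent way to do this masking, assuming the standard raster package API, is reclassify, which maps the whole interval from -Inf to 20 to NA in one call:)
Elev20 <- reclassify(BatPatagonia, cbind(-Inf, 20, NA))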
2. Calculate a Multiple Linear Regression in a moving window
According to the manuscript (page 2591), the next step is to fit a multiple linear regression in a moving window, for altitudes over 20 meters. The formula in the paper is an image not reproduced here; as implemented in the code below, the model regresses minimum temperature on elevation, longitude, and latitude (Tmin = b0 + b1*Elev + b2*Longitude + b3*Latitude).
We already have the elevation data, but we also need rasters for latitude and longitude. For that we use the following code, where we first create the latitude and longitude rasters:
Latitud <- BatPatagonia
Longitud <- BatPatagonia
data_matrix <- raster::xyFromCell(BatPatagonia, 1:ncell(BatPatagonia))
values(Latitud) <- data_matrix[, 2]
values(Longitud) <- data_matrix[, 1]
We will multiply these by a raster mask of the areas that have elevations over 20 meters, so that we keep only the values we need:
Elev20Mask <- BatPatagonia
values(Elev20Mask) <- ifelse(values(Elev20Mask) <= 20, NA, 1)
Longitud <- Elev20Mask*Longitud
Latitud <- Elev20Mask*Latitud
Now I will build a stack with the response and predictor variables:
Preds <- stack(Elev20, Longitud, Latitud, TempMinPatNow)
names(Preds) <- c("Elev", "Longitud", "Latitud", "Tmin")
The resulting stack is shown in the following figure:
As stated in the paper, the moving window should be 25 by 25 cells, resulting in a total of 625 cells. However, the authors state that if the moving window has fewer than 170 cells with data, the regression should not be performed, and that it should have a maximum of 624 cells, in order to ensure that we only model areas close to the coast. The result of this multiple regression with the moving window should be a stack with the local intercept and the local estimate of each of the betas in the equation shown above. I found out how to do this with the following code, using the getValuesFocal function (this loop takes a while):
# First we establish the 25 by 25 window
w <- c(25, 25)
# Then we create the empty layers for the resulting stack
intercept <- Preds[[1]]
intercept[] <- NA
elevationEst <- intercept
latitudeEst <- intercept
longitudeEst <- intercept
Now we start the code:
for (rl in 1:nrow(Preds)) {
  v <- getValuesFocal(Preds[[1:4]], row = rl, nrows = 1, ngb = w, array = FALSE)
  int <- rep(NA, nrow(v[[1]]))
  x1 <- rep(NA, nrow(v[[1]]))
  x2 <- rep(NA, nrow(v[[1]]))
  x3 <- rep(NA, nrow(v[[1]]))
  x4 <- rep(NA, nrow(v[[1]]))
  for (i in 1:nrow(v[[1]])) {
    xy <- na.omit(data.frame(x1 = v[[1]][i, ], x2 = v[[2]][i, ],
                             x3 = v[[3]][i, ], y = v[[4]][i, ]))
    if (nrow(xy) > 170 & nrow(xy) <= 624) {
      coefs <- coefficients(lm(as.numeric(xy$y) ~ as.numeric(xy$x1) +
                                 as.numeric(xy$x2) + as.numeric(xy$x3)))
      int[i] <- coefs[1]
      x1[i] <- coefs[2]
      x2[i] <- coefs[3]
      x3[i] <- coefs[4]
    } else {
      int[i] <- NA
      x1[i] <- NA
      x2[i] <- NA
      x3[i] <- NA
    }
  }
  intercept[rl, ] <- int
  elevationEst[rl, ] <- x1
  longitudeEst[rl, ] <- x2
  latitudeEst[rl, ] <- x3
  message(paste(rl, "of", nrow(Preds), "ready"))
}
Coeffs <- stack(intercept, elevationEst, latitudeEst, longitudeEst,
                (intercept + Preds$Elev * elevationEst +
                   Preds$Longitud * longitudeEst + Preds$Latitud * latitudeEst),
                Preds$Tmin)
names(Coeffs) <- c("intercept", "elevationEst", "longitudeEst", "latitudeEst", "fitted", "Observed")
The result of this loop is the Coeffs stack, shown below:
This is where I got stuck:
Extrapolate coefficients to the ocean
The goal now is to extrapolate the first 4 rasters of the Coeffs stack (intercept, elevationEst, longitudeEst and latitudeEst) to where the coast should be according to the Last Glacial Maximum, when the sea was 120 meters shallower:
MaxGlacier <- BatPatagonia
values(MaxGlacier) <- ifelse(values(MaxGlacier) < -120, NA,1)
The projected coastline is shown in the following map:
The way the authors projected the coefficients to the coast was by filling the gaps, solving Poisson's equation with the poisson_grid_fill function of the NCL language from NCAR. But I would like to keep it simple and try to do it all in the same language. I also found a similar function in Python.
I would be happy with any extrapolation process that works well, I am not limiting my search to that algorithm.
I found several R packages that fill gaps, such as the Gapfill package, and even found a review of gap-filling methods, but most of them are for interpolation, and mostly for NDVI layers that can be filled based on other layers.
Any ideas on how to move forward with this?
Thanks
Thinking back several decades to my physics undergrad days, we used Laplace relaxation to solve these types of Poisson equation problems. I'm not sure, but I guess that may also be how poisson_grid_fill works. The process is simple: relaxation is an iterative process where we calculate each cell, except those that form the boundary condition, as the mean of the cells that are horizontally or vertically adjacent, then repeat until the result approaches a stable solution.
In your case, the cells for which you already have values provide your boundary condition, and we can iterate over the others. Something like this (demonstrated here for the intercept coefficient - you can do the others the same way):
gaps = which(is.na(intercept)[])
intercept.ext = intercept
w = matrix(c(0, 0.25, 0, 0.25, 0, 0.25, 0, 0.25, 0), nc = 3, nr = 3)
max.it = 1000
for (i in 1:max.it) intercept.ext[gaps] = focal(intercept.ext, w = w, na.rm = TRUE)[gaps]
intercept.ext = mask(intercept.ext, MaxGlacier)
Edit
Here's the same process embedded in a function, to demonstrate how you might use a while loop that continues until a desired tolerance is reached (or a maximum number of iterations is exceeded). Note that this function is to demonstrate the principle, and is not optimised for speed.
gap.fill = function(r, max.it = 1e4, tol = 1e-2, verbose = FALSE) {
  gaps = which(is.na(r)[])
  r.filled = r
  w = matrix(c(0, 0.25, 0, 0.25, 0, 0.25, 0, 0.25, 0), nc = 3, nr = 3)
  i = 0
  while (i < max.it) {
    i = i + 1
    new.vals = focal(r.filled, w = w, na.rm = TRUE)[gaps]
    max.residual = suppressWarnings(max(abs(r.filled[gaps] - new.vals), na.rm = TRUE))
    if (verbose) print(paste('Iteration', i, ': residual = ', max.residual))
    r.filled[gaps] = new.vals
    if (is.finite(max.residual) & max.residual <= tol) break
  }
  return(r.filled)
}
intercept.ext = gap.fill(intercept)
intercept.ext = mask(intercept.ext, MaxGlacier)
plot(stack(intercept, intercept.ext))
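Since the other coefficients can be done the same way, here is a sketch (assuming the objects from the question are in scope) that fills and masks all four layers in one go:
# Fill and mask each coefficient raster, then restack them
coef.layers <- list(intercept, elevationEst, longitudeEst, latitudeEst)
coef.filled <- stack(lapply(coef.layers, function(r) mask(gap.fill(r), MaxGlacier)))
names(coef.filled) <- c("intercept", "elevationEst", "longitudeEst", "latitudeEst")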

Error in fsolve in R when trying to fill a matrix with values; looking for explanation and/or solution

Solution Found
I am trying to plot a volatility surface using "persp" in R. To do so I need to fill a matrix, z, with implied volatilities.
I have a data frame of the strike prices, time and market prices. Data only contains call options.
AAPL <- #data
df <- data.frame(AAPL$Strike.Price,AAPL$Time.Left,AAPL$Market.Price)
I currently have a matrix, zz, that has stock prices in the first column, times as the headers and the respective market prices in columns 2, 3 and 4. It is important to note that some values of the market prices are missing (NA).
zz <- cast(df, df.Strike.Price ~ df.Time.Left)
For my x, y axis, I define the vectors:
x0 <- zz$df.Strike.Price #Strike prices for calculation of imp. vol.
x <- zz$df.Strike.Price / 153.06 #Axis for plotting
y <- c(time1, time2, time3)
Now the z matrix for plotting implied volatility. I start with an empty matrix
z = matrix(data=NA,nrow=length(x0),ncol=length(y))
Then I attempt to fill the matrix, leaving NA for values that cannot be calculated:
for (i in 1:length(x0)) {
  for (j in 1:length(y)) {
    # Formula for the Black-Scholes call option price (no dividends)
    BS = function(X, T, sigma) {
      # Parameters: stock value is the same for all options; r is an arbitrarily selected constant
      S = 153.06; r = 0.05
      d1 = (log(S/X) + (r + sigma^2/2)*T) / (sigma*sqrt(T))
      d2 = d1 - sigma*sqrt(T)
      # Price for call options
      price = S*pnorm(d1) - X*exp(-r*T)*pnorm(d2)
      return(price)
    }
    # To address NA entries in zz
    if (is.na(zz[i, j+1] == TRUE)) {
      z[i, j] = NA
    }
    # This is the part of the code that causes issues
    else {
      # Function for fsolve: the Black-Scholes price minus the market price
      A = function(sigma) {
        a = BS(x0[i], y[j], sigma) - zz[i, j+1]
        return(a)
      }
      V = fsolve(A, 0.5) # Should give the implied volatility from market data
      z[i, j] = V
    }
  }
}
Upon executing this piece of code I get the error message:
Error in if (norm(s, "F") < tol || norm(as.matrix(ynew), "F") < tol) break :
missing value where TRUE/FALSE needed
I'm not sure what the error is about. Is there a way to overcome this problem, or an alternative method for getting the implied volatilities instead of using fsolve?
The error has to do with the changes in sigma becoming too small for fsolve. I was able to find another function that can solve non-linear equations and used that instead: nleqslv, from the package of the same name.
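For reference, a minimal self-contained sketch of that substitution; the strike, maturity and market price below are made-up values, not taken from the AAPL data:
library(nleqslv)
# Black-Scholes call price (no dividends), as defined in the question
BS <- function(X, T, sigma, S = 153.06, r = 0.05) {
  d1 <- (log(S / X) + (r + sigma^2 / 2) * T) / (sigma * sqrt(T))
  d2 <- d1 - sigma * sqrt(T)
  S * pnorm(d1) - X * exp(-r * T) * pnorm(d2)
}
# Root function: model price minus a hypothetical market price of 10
A <- function(sigma) BS(X = 150, T = 0.5, sigma = sigma) - 10
V <- nleqslv(0.5, A)$x   # nleqslv returns the root in the $x component
V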

extracting residuals from pixel by pixel regression

I am trying to extract the residuals from a regression run pixel by pixel on a raster stack of NDVI/precipitation. My script works when I run it with a small part of my data, but when I try to run it on the whole of my study area I get: "Error in setValues(out, x) : values must be numeric, integer, logical or factor"
The lm works, since I can extract both the slope and the intercept; I just can't extract the residuals.
Any idea of how this could be fixed?
Here is my script:
setwd("F:/working folder/test")
gimms <- list.files(pattern="*ndvi.tif")
ndvi <- stack(gimms)
precip <- list.files(pattern="*pre.tif")
pre <- stack(precip)
s <- stack(ndvi,pre)
residualfun = function(x) {
  if (is.na(x[1])) {
    NA
  } else {
    m <- lm(x[1:6] ~ x[7:12], na.action = na.exclude)
    r <- residuals.lm(m)
    return(r)
  }
}
res <- calc(s,residualfun)
And here is my data: https://1drv.ms/u/s!AhwCgWqhyyDclJRjhh6GtentxFOKwQ
Your function only tests whether the first layer shows NA values before fitting the model, but there may be NAs in other layers. You know that, because you added na.action = na.exclude to your lm fit.
The problem is that if the model removes some values because of NAs, the residuals will only have the length of the non-NA values. This means that your resulting r vector will have different lengths depending on the number of NA values in the layers. calc is then not able to combine results of different lengths into a stack with a defined number of layers.
To avoid that, you need to fix the length of r in your function and assign residuals only to the non-NA positions.
I propose the following function, which now works on the dataset you provided. I added (1) the possibility to compare more layers of each type if you want to extend your exploration (with nlayers), (2) a check that avoids fitting the model if there are only two values to compare in each layer (a perfect model), (3) a try() so that if for any reason the model cannot be fitted, the output is -1e32, which is easy to find for further testing.
library(raster)
setwd("/mnt/Data/Stackoverflow/test")
gimms <- list.files(pattern="*ndvi.tif")
ndvi <- stack(gimms)
precip <- list.files(pattern="*pre.tif")
pre <- stack(precip)
s <- stack(ndvi,pre)
# Number of layers of each
nlayers <- 6
residualfun <- function(x) {
  r <- rep(NA, nlayers)
  obs <- x[1:nlayers]
  cov <- x[nlayers + 1:nlayers]
  # Remove NA values before fitting the model
  x.nona <- which(!is.na(obs) & !is.na(cov))
  # If more than 2 points, proceed to lm
  if (length(x.nona) > 2) {
    m <- NA
    try(m <- lm(obs[x.nona] ~ cov[x.nona]))
    # If the model worked, calculate residuals
    if (is(m)[1] == "lm") {
      r[x.nona] <- residuals.lm(m)
    } else {
      # Alternate value to find where the model did not work
      r[x.nona] <- -1e32
    }
  }
  return(r)
}
res <- calc(s, residualfun)
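As a follow-up to the -1e32 flag, a quick sketch for locating the pixels where the fit failed:
# Cell numbers where any layer carries the -1e32 failure flag
failed.cells <- unique(which(values(res) == -1e32, arr.ind = TRUE)[, 1])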
