polynomial fitting on spectral data - r

I want to fit a polynomial function (max. 3rd order) to each raster cell across all my spectral bands (Landsat bands 1-7), creating a new raster (stack) that holds the coefficients.
I have my data (including NA values) in a stack with 6 layers (Landsat bands 1-7, excluding 6).
I guess I somehow need to tell the polynomial function at which spectral wavelengths the bands are located
Landsat 7 wavelengths (micrometers):
Band 1: 0.45-0.52
Band 2: 0.52-0.60
Band 3: 0.63-0.69
Band 4: 0.77-0.90
Band 5: 1.55-1.75
Band 7: 2.09-2.35
so that it can fit them properly.
Does anyone have an idea how to do this polynomial fit for each cell and extract the coefficients in R? Thanks for any help!

Your question is not very clear, as you do not specify what you are fitting against. I am guessing it is the band number. You can do something like this:
library(raster)
b <- brick(system.file("external/rlogo.grd", package="raster"))
b[[2]][125:225] <- NA
s <- stack(b, flip(b, 'y'))
names(s) <- paste0('b', 1:6)
bands <- 1:6
f <- function(x) {
  # in case of NAs; match the number of coefficients returned
  if (any(is.na(x))) return(c(NA, NA, NA))
  m <- lm(x ~ bands + I(bands^2))
  coefficients(m)
}
z <- calc(s, f)
z
plot(z)
If you need to speed this up, you can follow the example here:
https://gis.stackexchange.com/questions/144211/want-cell-linear-regression-values-for-a-netcdf-or-multi-band-raster/144408#144408
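If you want to fit against the actual band-centre wavelengths rather than the band index, a minimal sketch (assuming the midpoints of the wavelength ranges listed in the question, and a full 3rd-order polynomial) could look like this:
# band-centre wavelengths (micrometers), midpoints of the ranges listed in the question
wl <- c(0.485, 0.56, 0.66, 0.835, 1.65, 2.22)
f3 <- function(x) {
  # return one NA per coefficient when a cell has missing values
  if (any(is.na(x))) return(rep(NA, 4))
  m <- lm(x ~ wl + I(wl^2) + I(wl^3))
  coefficients(m)
}
z3 <- calc(s, f3)
names(z3) <- c("intercept", "wl", "wl2", "wl3")
plot(z3)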


Linear regression on each raster pixel to predict a future month (in R)

I have successfully run this code, which I found here:
Can't Calculate pixel-wise regression in R on raster stack with fun
library(raster)
# Example data
r <- raster(nrow=15, ncol=10)
set.seed(0)
# Make 6 rasters (1 raster per month) and assign each pixel's value randomly
s <- stack(lapply(1:6, function(i) setValues(r, rnorm(ncell(r), i, 3))))
names(s) <- paste0('Month', 1:6)
# Extract each pixel's values
x <- values(s)
# Fit a linear regression model
m <- lm(Month6 ~ ., data=data.frame(x))
# Prediction raster
p <- predict(s, m)
If you run that code, p will be a raster. But I am still confused: how do I make a raster for a future month? For example, I want a 'Month8' raster based on the 6 previous rasters.
What I mean is that each pixel has its own linear regression equation (where X = Month1, ..., Month6). If I plug in X = Month8, I should get 150 values of Y, one for each pixel of the raster, for the 8th month.
What I have done
# Let's make a data frame for a clearer view of my data
x <- values(s)
DF <- data.frame(x)
# Make X the month and y the target
library(data.table)
DF_T <- transpose(DF)
Month <- seq(1,nrow(DF_T))
DF_T <- cbind(Month, DF_T)
# Fit a regression for the first pixel
V1_lr <- lm(V1 ~ Month, data=DF_T)
# Prediction for the 8th month for that pixel
V1_p <- predict(V1_lr, data.frame(Month=8))
V1_p
This is just one pixel. I want the entire raster for 'Month8'.
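One way to get the whole 'Month8' raster at once (a minimal sketch, assuming a per-pixel linear trend over the month number, as in the single-pixel example above) is to fit the regression inside calc() and predict at Month = 8:
library(raster)
predfun <- function(x) {
  # x holds the six monthly values of one pixel
  if (any(is.na(x))) return(NA)
  d <- data.frame(y = x, Month = 1:6)
  m <- lm(y ~ Month, data = d)
  predict(m, newdata = data.frame(Month = 8))
}
Month8 <- calc(s, predfun)
plot(Month8)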

Maximum pseudo-likelihood estimator for soft-core point process

I am trying to fit a soft-core point process model to a point pattern using maximum pseudo-likelihood. I followed the instructions given in the paper by Baddeley and Turner.
Here is the R code I came up with:
library(deldir)
library(tidyverse)
library(fields)
#MPLE
# irregular parameter k
k <- 0.4
## Generate a 50x50 grid of dummy points; "RA" and "DE" are the x and y coordinates
## (ramin/ramax and demin/demax are the observation-window limits, defined elsewhere)
dum.x <- seq(ramin, ramax, length = 50)
dum.y <- seq(demin, demax, length = 50)
dum <- expand.grid(dum.x, dum.y)
colnames(dum) <- c("RA", "DE")
## Combine with data and specify which is data point and which is dummy, X is the point pattern to be fitted
bind.x <- bind_rows(X, dum) %>%
mutate(Ind = c(rep(1, nrow(X)), rep(0, nrow(dum))))
## Calculate Quadrature weights using Voronoi cell area
w <- deldir(bind.x$RA, bind.x$DE)$summary$dir.area
## Response
y <- bind.x$Ind/w
# the sum of distances between all pairs of points (the sufficient statistics)
tmp <- cbind(bind.x$RA, bind.x$DE)
t1 <- rdist(tmp)^(-2/k)
t1[t1 == Inf] <- 0
t1 <- rowSums(t1)
t <- -t1
# fit the model using quasipoisson regression
fit <- glm(y ~ t, family = quasipoisson, weights = w)
However, the fitted parameter for t is negative, which is obviously not a valid value for a soft-core point process. Also, my point pattern is actually simulated from a soft-core process, so it does not make sense that the fitted parameter is negative. I tried my best to find bugs in the code but I can't seem to find any. The only potential issue I see is that my sufficient statistic is extremely large (on the order of 10^14), which I fear may cause numerical issues. But the statistic is large because my observation window spans a very small extent and the average distance between a pair of points is around 0.006, so a sufficient statistic based on these distances will certainly be very large. My intuition tells me that this alone should not cause a numerical problem and make the fitted parameter negative.
Can anybody help and check if my code is correct? Thanks very much!
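For reference, a quick cross-check of the hand-rolled fit is possible with spatstat's ppm(), which also fits by maximum pseudo-likelihood (a sketch, assuming the data frame X and the window limits ramin/ramax, demin/demax are as defined above):
library(spatstat)
# build a ppp object from the point coordinates and the observation window
X_ppp <- ppp(X$RA, X$DE, window = owin(c(ramin, ramax), c(demin, demax)))
# fit a soft-core model with the same irregular parameter kappa = k
fit_ref <- ppm(X_ppp ~ 1, Softcore(kappa = k))
coef(fit_ref)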

extracting residuals from pixel by pixel regression

I am trying to extract the residuals from a regression run pixel by pixel on a raster stack of NDVI/precipitation. My script works when I run it on a small part of my data, but when I try to run it on my whole study area I get: "Error in setValues(out, x) : values must be numeric, integer, logical or factor".
The lm works, since I can extract both slope and intercept; I just can't extract the residuals.
Any idea how this could be fixed?
Here is my script:
setwd("F:/working folder/test")
gimms <- list.files(pattern="*ndvi.tif")
ndvi <- stack(gimms)
precip <- list.files(pattern="*pre.tif")
pre <- stack(precip)
s <- stack(ndvi,pre)
residualfun = function(x) { if (is.na(x[1])){ NA } else { m <- lm(x[1:6] ~ x[7:12], na.action=na.exclude)
r <- residuals.lm(m)
return (r)}}
res <- calc(s,residualfun)
And here is my data: https://1drv.ms/u/s!AhwCgWqhyyDclJRjhh6GtentxFOKwQ
Your function only tests whether the first layer has an NA value before deciding to fit the model, but there may be NAs in other layers. You know that, because you added na.action = na.exclude to your lm fit.
The problem is that if the model drops some values because of NAs, the residuals will only have the length of the non-NA values. This means your resulting r vector will have different lengths depending on the number of NA values in the layers, and calc is then not able to combine results of different lengths into a stack with a fixed number of layers.
To avoid that, you need to fix the length of r in your function and assign residuals only to the non-NA positions.
I propose the following function, which works on the dataset you provided. I added (1) the possibility to compare more layers of each variable if you want to extend your exploration (via nlayers), (2) a check that skips fitting the model if there are only two value pairs in a cell (which would give a perfect fit), and (3) a try() so that, if for any reason the model cannot be fitted, the output is set to -1e32, which is easy to find for further testing.
library(raster)
setwd("/mnt/Data/Stackoverflow/test")
gimms <- list.files(pattern="*ndvi.tif")
ndvi <- stack(gimms)
precip <- list.files(pattern="*pre.tif")
pre <- stack(precip)
s <- stack(ndvi,pre)
# Number of layers of each
nlayers <- 6
residualfun <- function(x) {
  r <- rep(NA, nlayers)
  obs <- x[1:nlayers]
  cov <- x[nlayers + 1:nlayers]
  # Remove NA values before model
  x.nona <- which(!is.na(obs) & !is.na(cov))
  # If more than 2 points proceed to lm
  if (length(x.nona) > 2) {
    m <- NA
    try(m <- lm(obs[x.nona] ~ cov[x.nona]))
    # If model worked, calculate residuals
    if (is(m)[1] == "lm") {
      r[x.nona] <- residuals.lm(m)
    } else {
      # alternate value to find where model did not work
      r[x.nona] <- -1e32
    }
  }
  return(r)
}
res <- calc(s, residualfun)
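A quick check on the output is then to count how many cells got the -1e32 sentinel, i.e. where lm() could not be fitted:
# count cells where the model could not be fitted (sentinel value -1e32)
sum(values(res) == -1e32, na.rm = TRUE)
plot(res)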

R: How to get a sum of two distributions?

I have a simple question.
I would like to get the distribution of the sum of two non-parametric distributions.
Here is an example.
There are two cities, each with 10 houses, and we know the energy consumption of each house. I want to get the probability distribution of the sum of the consumption of one house chosen at random from each city.
A1 <- c(1,2,3,3,3,4,4,5,6,7) #10 houses' energy consumption for city A
B1 <- c(11,13,15,17,17,18,18,19,20,22) #10 houses' energy consumption for city B
I have the probability distributions of A1 and B1; how can I get the probability distribution of A1+B1?
If I just use A1+B1 in R, it gives 12 15 18 20 20 22 22 24 26 29. However, I don't think this is right, because there is no order to the houses.
When I change the order of the houses, it gives different results.
# Original
A1 <- c(1,2,3,3,3,4,4,5,6,7)
B1 <- c(11,13,15,17,17,18,18,19,20,22)
#change order 1
A2 <- c(7,6,5,4,4,3,3,3,2,1)
B2 <- c(22,20,19,18,18,17,17,15,13,11)
#change order 2
A3 <- c(3,3,3,4,4,5,6,7,1,2)
B3 <- c(17,17,18,18,19,13,20,11,22,15)
sum1 <- A1+B1; sum1
sum2 <- A1+B2; sum2
sum3 <- A3+B3; sum3
The red lines (in my plots) are sum1, sum2, and sum3. I am not sure how I can get the distribution of the sum of the two distributions. Please give me any ideas. Thanks!
(If these were normal or uniform distributions I could get the sum distribution easily, but they are not normal and there is no ordering.)
In theory, the distribution of the sum of two independent random variables is the convolution of their PDFs:
PDF(Z) = PDF(Y) * PDF(X), where * denotes convolution and Z = X + Y.
So this case can be computed by convolution.
# your data
A1 <- c(1,2,3,3,3,4,4,5,6,7) #10 houses' energy consumption for city A
B1 <- c(11,13,15,17,17,18,18,19,20,22) #10 houses' energy consumption for city B
# compute PDF/CDF
PDF_A1 <- table(A1)/length(A1)
CDF_A1 <- cumsum(PDF_A1)
PDF_B1 <- table(B1)/length(B1)
CDF_B1 <- cumsum(PDF_B1)
# compute the sum distribution
PDF_C1 <- convolve(PDF_B1, PDF_A1, type = "open")
# plotting
plot(PDF_C1, type="l", axe=F, main="PDF of A1+B1")
box()
axis(2)
# FIXME: is my understand for X correct?
axis(1, at=seq(1:14), labels=(c(names(PDF_A1)[-1],names(PDF_B1))))
Note:
CDF: cumulative distribution function
PDF: probability density function
## To make the x-values correspond to actual sums, consider:
## compute PDF
## pad zeros in probability vectors to convolve
r <- range(c(A1, B1))
pdfA <- pdfB <- vector('numeric', diff(r)+1L)
PDF_A1 <- table(A1)/length(A1) # same as what you have done
PDF_B1 <- table(B1)/length(B1)
pdfA[as.numeric(names(PDF_A1))] <- as.vector(PDF_A1) # fill the values
pdfB[as.numeric(names(PDF_B1))] <- as.vector(PDF_B1)
## compute the convolution and plot
res <- convolve(pdfA, rev(pdfB), type = "open")
plot(res, type="h", xlab='Sum', ylab='')
## In this simple case (with discrete distribution) you can compare
## to previous solution
tst <- rowSums(expand.grid(A1, B1))
plot(table(tst) / sum(as.vector(table(tst))), type='h')
Edit:
Now that I better understand the question, and having seen @jeremycg's answer, I think I have a different approach that will scale better with sample size.
Rather than relying on the values in A1 and B1 being the only possible values in the distribution, we can treat them as samples from a distribution. To avoid imposing a particular form on that distribution, I'll use an empirical equivalent: the sample density. Using the density function, we can infer the relative probabilities of sampling a continuous range of household energy uses in either town. We can then randomly draw an arbitrary number of energies (with replacement) from the density()$x values, weighting the draws with prob=density()$y; i.e., peaks in the density plot are at x-values that should be resampled more often.
As a heuristic, an oversimplified statement would be that mean(A1) is 3.8 and mean(B1) is 17, so the sum of energy uses from the two cities should be, on average, about 20.8. Using this as a "does it make sense" check, I think the following approach gives the kind of result you want.
sample_sum <- function(A, B, n, ...){
  qss <- function(X, n, ...){
    r_X <- range(X)
    dens_X <- density(X, ...)
    sample(dens_X$x, size=n, prob=dens_X$y, replace=TRUE)
  }
  sample_A <- qss(A, n=n, ...)
  sample_B <- qss(B, n=n, ...)
  sample_A + sample_B
}
ss <- sample_sum(A1, B1, n=100, from=0)
png("~/Desktop/answer.png", width=5, height=5, units="in", res=150)
plot(density(ss))
dev.off()
Note that I bounded the density estimate at 0, because I'm assuming you don't want to infer negative energies. I see that the peak in the resulting density is just above 20, so "it makes sense".
The potential advantage here is that you don't need to look at every possible combination of energies from the houses in the two cities to understand the distribution of summed energy uses. If you can define the distribution of both, you can define the distribution of paired sums.
Finally, the computation time is trivial, especially compared to the approach of finding all combinations. E.g., with 10 million houses in each city, if I try the expand.grid approach I get an "Error: cannot allocate vector of size 372529.0 Gb", whereas the sample_sum approach takes 0.12 seconds.
Of course, if the answer doesn't help you, the speed is worthless ;)
You probably want something like:
rowSums(expand.grid(A1, B1))
Using expand.grid will get you a dataframe of all combinations of A1 and B1, and rowSums will add them.
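To turn the enumerated sums into a probability distribution (the same idea as the tst comparison shown above), you can tabulate them:
sums <- rowSums(expand.grid(A1, B1))
pmf <- table(sums) / length(sums)
plot(pmf, type="h", xlab="Sum", ylab="Probability")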
Is it not the case that sorting the distribution prior to adding solves this problem?
A1 <- c(1,2,3,3,3,4,4,5,6,7) #10 houses' energy consumption for city A
B1 <- c(11,13,15,17,17,18,18,19,20,22) #10 houses' energy consumption for city B
sort(A1)+sort(B1)

Moving window regression

I want to perform a moving window regression on every pixel of two raster stacks representing Band 3 and Band 4 of Landsat data. The result should be two additional stacks, one representing the intercept and the other the slope of the regression.
So layer 1 of stack "B3" and stack "B4" results in layer 1 of stack "intercept" and stack "slope"; layer 2 of stack B3 and stack B4 results in layer 2, and so on.
I already came across the gwr function, but I want to stay within the raster package.
I know that focal must somehow be involved in order to set my moving window (which should be 3x3 pixels), and some linear model like lm(as.matrix(b3) ~ as.matrix(b4)), although I don't think that gets me the values pixel-wise...
Instead of a raster stack, a layer-by-layer approach is also possible (so it does not necessarily have to be a raster stack of Band 3).
Does anyone have a clue how to program this in R?
Here is one approach, adapted from ?raster::localFun:
library(raster)
set.seed(0)
b <- stack(system.file("external/rlogo.grd", package="raster"))
x <- flip(b[[2]], 'y') + runif(ncell(b))
y <- b[[1]] + runif(ncell(b))
# local regression:
rfun <- function(x, y, ...) {
  d <- na.omit(data.frame(x, y))
  if (nrow(d) < 3) return(NA)
  m <- lm(y ~ x, data=d)
  # return slope
  coefficients(m)[2]
}
ff <- localFun(x, y, fun=rfun)
plot(ff)
Unfortunately, you will have to run this twice to get both the slope and the intercept (coefficients(m)[1]).
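A second pass with a function that returns the intercept instead, stacked with the slope layer from above, could look like this:
rfun_int <- function(x, y, ...) {
  d <- na.omit(data.frame(x, y))
  if (nrow(d) < 3) return(NA)
  m <- lm(y ~ x, data=d)
  # return intercept
  coefficients(m)[1]
}
fi <- localFun(x, y, fun=rfun_int)
out <- stack(fi, ff)
names(out) <- c("intercept", "slope")
plot(out)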
