Finding p value between two rasters in R

How can a p value be obtained between two rasters?
I currently have two rasters, and I would like to compute the p value of the correlation between them.
I convert both into data frames with na.rm=T.
df1<-as.data.frame(r1,na.rm=T)
df2<-as.data.frame(r2,na.rm=T)
cor.test(df1$gc,df2$ip)$p.value
Error in cor.test.default(df1$gc, df2$ip) :
'x' and 'y' must have the same length
Even if I don't use na.rm, I get this:
df1<-as.data.frame(r1)
df2<-as.data.frame(r2)
cor.test(df1$gc,df2$ip)$p.value
[1] 0

You have to pass numeric vectors to cor.test(). You get there by converting to a data.frame and extracting the column, but it's probably easier to use the raster function values():
library(raster)
set.seed(666)
r1 <- raster(matrix(rnorm(100), 10, 10))
r2 <- raster(matrix(rnorm(100), 10, 10))
cor.test(values(r1), values(r2))$p.value
[1] 0.07144313
The error when you remove NA is because the two vectors do not have the same length, i.e. in one vector you have more NA cells than in the other. Depending on how many NAs you have, the p-value you get at the end (0) may be meaningful or not. Did you try making a simple boxplot or distribution plot of the values in both rasters? This may help you understand whether the p-value is simply an artefact of too many NAs.
d <- data.frame(
x = c(values(r1), values(r2)),
y = c(rep(1, ncell(r1)), rep(2, ncell(r2)))
)
boxplot(d$x ~ d$y)
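If the real rasters contain NA cells, one way to keep the two vectors the same length is to drop cells that are NA in either layer before calling cor.test() — a minimal sketch along the lines of the values() approach above:
v1 <- values(r1)
v2 <- values(r2)
ok <- complete.cases(v1, v2)   # keep only cells that are valid in both layers
cor.test(v1[ok], v2[ok])$p.value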
/Emilio

Related

Calculate Euclidean distance between multiple pairs of points in dataframe in R

I'm trying to calculate the Euclidean distance between pairs of points in a dataframe in R, and there's an ID for each pair:
ID <- sample(1:10, 10, replace=FALSE)
P <- runif(10, min=1, max=3)
S <- runif(10, min=1, max=3)
testdf <- data.frame(ID, P, S)
I found several ways to calculate the Euclidean distance in R, but I'm either getting an error, getting only 1 value back (so it's computing a single distance over the entire vectors), or ending up with a matrix, when all I need is a 4th column with the distance for each pair (columns 'P' and 'S'). I'm a bit confused by matrices, so I'm not sure how to work with that result.
Tried making a function and applying it to the 2 columns but I get an error:
testdf$V <- apply(testdf[ , c('P', 'S')], 1, function(P, S) sqrt(sum(P^2, S^2)))
# Error in FUN(newX[, i], ...) : argument "S" is missing, with no default
Then tried using the dist() function in the stats package but it only returns 1 value:
(Same problem if I follow the method here: https://www.statology.org/euclidean-distance-in-r/)
P <- testdf$P
S <- testdf$S
testProbMatrix <- rbind(P, S)
stats::dist(testProbMatrix, method = "euclidean")
# returns only 1 distance
Returns a matrix
(Here's a nice explanation why: Calculate the distances between pairs of points in r)
stats::dist(cbind(P, S), method = "euclidean")
But I'm confused how to pull the distances out of the matrix and attach them to the correct ID for each pair of points. I don't understand why I have to make a matrix instead of just applying the function to the dataframe - matrices have always confused me.
I think this is the same question as here (Finding euclidean distance between all pair of points) but for R instead of Python
Thanks for the help!
Try this out if you would just like to add another column to your dataframe
testdf$distance <- sqrt((P^2 + S^2))
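If you do want the apply() approach from the question, note that apply() passes each row to the function as a single vector, so the function should take one argument — a sketch that gives the same result per row:
testdf$V <- apply(testdf[, c('P', 'S')], 1, function(row) sqrt(sum(row^2)))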

How to solve prcomp.default(): cannot rescale a constant/zero column to unit variance

I have a data set of 9 samples (rows) with 51608 variables (columns) and I keep getting the error whenever I try to scale it:
This works fine
pca = prcomp(pca_data)
However,
pca = prcomp(pca_data, scale = T)
gives
> Error in prcomp.default(pca_data, center = T, scale = T) :
cannot rescale a constant/zero column to unit variance
Obviously it's a little hard to post a reproducible example. Any ideas what the deal could be?
Looking for constant columns:
sapply(1:ncol(pca_data), function(x){
length = unique(pca_data[, x]) %>% length
}) %>% table
Output:
.
2 3 4 5 6 7 8 9
3892 4189 2124 1783 1622 2078 5179 30741
So no constant columns. Same with NA's -
is.na(pca_data) %>% sum
>[1] 0
This works fine:
pca_data = scale(pca_data)
But then afterwards both still give the exact same error:
pca = prcomp(pca_data)
pca = prcomp(pca_data, center = F, scale = F)
So why can't I manage to get a scaled PCA on this data? OK, let's make 100% sure that it's not constant.
pca_data = pca_data + rnorm(nrow(pca_data) * ncol(pca_data))
Same errors. Numeric data?
sapply( 1:nrow(pca_data), function(row){
sapply(1:ncol(pca_data), function(column){
!is.numeric(pca_data[row, column])
})
} ) %>% sum
Still the same errors. I'm out of ideas.
Edit: more details, and a hack that at least works around it.
Later, I was still having a hard time clustering this data, e.g.:
Error in hclust(d, method = "ward.D") :
NaN dissimilarity value in intermediate results.
Trimming values under a certain cutoff, e.g. < 1, to zero had no effect. What finally worked was trimming all columns that had more than x zeros in the column. It worked for # zeros <= 6, but 7+ gave errors. No idea if this means that this is a problem in general or if it just happened to catch a problematic column. I'd still be happy to hear if anyone has any ideas why, because this should work just fine as long as no variable is all zeros (or constant in another way).
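Roughly what the trimming looked like (a sketch; 6 is the cutoff that happened to work):
pca_data <- pca_data[, colSums(pca_data == 0) <= 6]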
I don't think you're looking for zero-variance columns correctly. Let's try with some dummy data. First, an acceptable 10x100 matrix:
mat <- matrix(rnorm(1000, 0), nrow = 10)
And one with a zero-variance column. Let's call it oopsmat.
const <- rep(0.1,100)
oopsmat <- cbind(const, mat)
The first few elements of oopsmat look like this:
const
[1,] 0.1 0.75048899 0.5997527 -0.151815650 0.01002536 0.6736613 -0.225324647 -0.64374844 -0.7879052
[2,] 0.1 0.09143491 -0.8732389 -1.844355560 0.23682805 0.4353462 -0.148243210 0.61859245 0.5691021
[3,] 0.1 -0.80649512 1.3929716 -1.438738923 -0.09881381 0.2504555 -0.857300053 -0.98528008 0.9816383
[4,] 0.1 0.49174471 -0.8110623 -0.941413109 -0.70916436 1.3332522 0.003040624 0.29067871 -0.3752594
[5,] 0.1 1.20068447 -0.9811222 0.928731706 -1.97469637 -1.1374734 0.661594937 2.96029102 0.6040814
Let's try scaled and unscaled PCAs on oopsmat:
PCs <- prcomp(oopsmat) #works
PCs <- prcomp(oopsmat, scale. = T) #not forgetting the dot
#Error in prcomp.default(oopsmat, scale. = T) :
#cannot rescale a constant/zero column to unit variance
Because you can't divide by the standard deviation when it is zero. To identify the zero-variance column, we can use which as follows to get the variable name.
which(apply(oopsmat, 2, var)==0)
#const
#1
And to remove zero variance columns from the dataset, you can use the same apply expression, setting variance not equal to zero.
oopsmat[ , which(apply(oopsmat, 2, var) != 0)]
Hope that helps make things clearer!
In addition to Joe's answer, check that the classes of the columns in your data frame are numeric.
If some columns are stored as integers (integer64, for example), you can end up with variances of 0, causing the scaling to fail.
So if,
class(my_df$some_column)
is an integer64, for example, then do the following
my_df$some_column <- as.numeric(my_df$some_column)
Hope this helps someone.
The error is because one of the columns has constant values.
Calculate the standard deviation of all the numeric columns to find the zero-variance variables.
If the standard deviation is zero, you can remove the variable and compute the PCA.
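A minimal sketch of that approach (assuming pca_data is all numeric):
sds <- apply(pca_data, 2, sd)
names(which(sds == 0))                          # the constant columns, if any
pca <- prcomp(pca_data[, sds != 0], scale. = TRUE)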

log- and z-transforming my data in R

I'm preparing my data for a PCA, for which I need to standardize it. I've been following someone else's code in vegan but am not getting a mean of zero and SD of 1, as I should be.
I'm using a data set called musci which has 13 variables, three of which are labels to identify my data.
log.musci<-log(musci[,4:13],10)
stand.musci<-decostand(log.musci,method="standardize",MARGIN=2)
When I then check for mean=0 and SD=1...
colMeans(stand.musci)
sapply(stand.musci,sd)
I get mean values ranging from -8.9 to 3.8 and SD values are just listed as NA (for every data point in my data set rather than for each variable). If I leave out the last variable in my standardization, i.e.
log.musci<-log(musci[,4:12],10)
the means don't change, but the SDs now all have a value of 1.
Any ideas of where I've gone wrong?
Cheers!
Your data is likely a matrix.
## Sample data
dat <- as.matrix(data.frame(a=rnorm(100, 10, 4), b=rexp(100, 0.4)))
So, either convert to a data.frame and use sapply to operate on columns
dat <- data.frame(dat)
scaled <- sapply(dat, scale)
colMeans(scaled)
# a b
# -2.307095e-16 2.164935e-17
apply(scaled, 2, sd)
# a b
# 1 1
or use apply to do columnwise operations
scaled <- apply(dat, 2, scale)
A z-transformation is quite easy to do manually.
See below using a small vector of data.
data <- c(1,2,3,4,5,6,7,8,9,10)
data
mean(data)
sd(data)
z <- ((data - mean(data))/(sd(data)))
z
mean(z) == 0
sd(z) == 1
The logarithm transformation (assuming you mean a natural logarithm) is done using the log() function.
log(data)
Hope this helps!
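Putting the two steps from the question together in base R, a minimal sketch (assuming columns 4:13 of musci are strictly positive numeric variables):
log.musci <- log10(musci[, 4:13])
stand.musci <- as.data.frame(scale(log.musci))  # scale() returns a matrix
colMeans(stand.musci)          # approximately 0 for every column
apply(stand.musci, 2, sd)      # 1 for every column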

Using mat2listw function in R to create spatial weights matrix

I am attempting to create a weights object in R with the mat2listw function. I have a very large spatial weights matrix (roughly 22,000x22,000)
that was created in Excel and read into R, and I'm now trying to implement:
library(spdep)
SW=mat2listw(matrix)
I am getting the following error:
Error in if (any(x < 0)) stop("values in x cannot be negative") :
missing value where TRUE/FALSE needed
What's going wrong here? My current matrix is all 0's and 1's, with no
missing values and no negative elements. What am I missing?
I'd appreciate any advice. Thanks in advance for your help!
Here is a simple test to your previous comment:
library(spdep)
m1 <-matrix(rbinom(100, 1, 0.5), ncol =10, nrow = 10) #create a random 10 * 10 matrix
m2 <- m1 # create a duplicate of the first matrix
m2[5,4] <- NA # assign an NA value in the second matrix
SW <- mat2listw(m1) # create weight list matrix
SW2 <- mat2listw(m2) # create weight list matrix
The first matrix does not fail, but the second one does. The real question now is why your weights matrix ends up containing NAs. Have you considered creating the spatial weights matrix in R instead, e.g. using dnearneigh() or another spdep function?
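If the matrix read in from Excel does contain NAs (blank cells, for example), here is a quick way to find and handle them — a sketch reusing the object name from the question, and assuming a missing entry should mean "not a neighbour":
sum(is.na(matrix))                      # how many missing values there are
which(is.na(matrix), arr.ind = TRUE)    # where they are (row, column)
matrix[is.na(matrix)] <- 0              # treat missing as 0 before mat2listw()
SW <- mat2listw(matrix)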

Maximum first derivative in for values in a data frame R

Good day, I am looking for some help in processing my dataset. I have 14000 rows and 500 columns and I am trying to get the maximum value of the first derivative for individual rows in different column groups. I have my data saved as a data frame with the first column being the name of a variable. My data looks like this:
Species Spec400 Spec405 Spec410 Spec415
1 AfricanOilPalm_1_Lf_1 0.2400900 0.2318345 0.2329633 0.2432734
2 AfricanOilPalm_1_Lf_10 0.1783162 0.1808581 0.1844433 0.1960315
3 AfricanOilPalm_1_Lf_11 0.1699646 0.1722618 0.1615062 0.1766804
4 AfricanOilPalm_1_Lf_12 0.1685733 0.1743336 0.1669799 0.1818896
5 AfricanOilPalm_1_Lf_13 0.1747400 0.1772355 0.1735916 0.1800227
For each of the variables in the Species column, I want to get the maximum derivative from Spec495 to Spec550, for example. This is what I did before I ran into errors.
x<-c(495,500,505,510,515,520,525,530,535,540,545,550) ## get x values of reflectance (Spec495 to Spec550)
y.data.f<-hsp[,21:32]##get row values for the required columns
y<-as.numeric(y.data.f[1,])##convert to a vector, for just the first row of data
library(pspline) ##Using a spline so a derivative maybe calculated from a list of numeric values
I really wanted to avoid using a loop because of the time it takes, but this is the only way I know of thus far
for (j in 1:14900) {
  y <- as.numeric(y.data.f[j, ])
  a1d <- max(predict(sm.spline(x, y), x, 1))
  write.table(a1d, file = "a1-d-appended.csv", sep = ",",
              col.names = FALSE, append = TRUE)
}
This loop runs up until the 7861st value and then gives this error:
Error in smooth.Pspline(x = ux, y = tmp[, 1], w = tmp[, 2], method = method, :
NA/NaN/Inf in foreign function call (arg 6)
I am sure there must be a way to avoid using a loop, maybe using the plyr package, but I can't figure out how to do so, nor which package would be best to get the value for maximum derivative.
Can anyone offer some insight or suggestions? Thanks in advance
First differences are the numerical analog of first derivatives when the x-dimension is evenly spaced. So something along the lines of:
which.max(diff(predict(sm.spline(x, y))$ysmth))
... will return the location of the maximum (positive) slope of the smoothed spline. If you wanted the maximal slope, allowing it to be either negative or positive, you would use abs() around the predict()$ysmth. If you are having difficulties with non-finite values, then using an is.finite index will clear both Inf and NaN difficulties:
predy <- predict(sm.spline(x, y))$ysmth
predx <- predict(sm.spline(x, y))$x
is.na(predy) <- !is.finite(predy)
plot(predx, predy,   # NA values will not blow up R's plotting function,
     # ... they just create discontinuities.
     main = "First Derivative")
