How to get N values along with Pearson correlation? - r

I am using the code below to calculate the correlation map between two datasets. This code worked fine and produced the expected correlation map (image omitted here).
I would also like to get another map displaying how many pairs were used in the calculation of each pixel, so that I get a map of N along with the map of correlation.
As per Paul Hiemstra, this function gives cor and N:
cor_withN = function(...) {
  cor_obj = cor.test(...)
  print(sprintf("N = %s", cor_obj$parameter + 2))
  return(data.frame(cor = cor_obj$estimate, N = cor_obj$parameter + 2))
}
cor_withN(runif(100), runif(100))
[1] "N = 100"
          cor   N
cor 0.1718225 100
When I simply replaced cor with cor_withN, I got this error:
Error in cor.test.default(...) : not enough finite observations
How can I apply this function in my code to get two maps, one of correlation and one of N values?

1. Error
Error in cor.test.default(...) : not enough finite observations
According to the cor.test source (http://svn.r-project.org/R/trunk/src/library/stats/R/cor.test.R), this error can appear in two cases:
You are using Pearson's correlation and have fewer than 3 finite pairs of observations.
You are using Kendall's or Spearman's correlation and have fewer than 2 pairs.
Indeed, cor.test(c(1,2), c(2,3)) causes exactly the same error, while cor(c(1,2), c(2,3)) gives an answer.
Note that cor.test uses complete.cases(x, y) for its calculations. So look into your data - probably there are not enough complete pairs somewhere.
2. Function
cor returns a numeric value, while your function cor_withN returns a data.frame, so you cannot simply replace one with the other.
As I understand it, you just need a matrix of size 1440x720 which will be plotted over the map. In that case you can use cor for the first plot, and a simple function returning the number of pairs used to calculate the correlation for the second. The function itself can be as simple as:
cor_withN <- function(...) {
  cor.test(...)$parameter + 2
}
UPDATE: After comment
If cor_withN must return NA when there are fewer than 3 pairs, it should be modified:
cor_withN <- function(...) {
  res <- try(cor.test(...)$parameter + 2, silent = TRUE)
  if (inherits(res, "try-error")) NA else res
}
This function tries to compute the correlation and returns the number of pairs if it succeeds, or NA if cor.test fails.
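To get the two maps you describe, one straightforward option is to loop over the grid cells once and fill two matrices, one with the correlation and one with N. The sketch below is mine, not from the answer above: the array names dat1 and dat2 (dimension 1440 x 720 x time) and the helper cor_N are assumptions about how your data are stored.
# Sketch: assumes dat1 and dat2 are arrays of dimension 1440 x 720 x time
cor_N <- function(x, y) {
  ok <- complete.cases(x, y)
  n  <- sum(ok)
  if (n < 3) return(c(cor = NA, N = n))  # too few finite pairs for cor.test
  ct <- cor.test(x[ok], y[ok])
  c(cor = unname(ct$estimate), N = n)
}
cor_map <- matrix(NA, nrow = 1440, ncol = 720)
N_map   <- matrix(NA, nrow = 1440, ncol = 720)
for (i in 1:1440) {
  for (j in 1:720) {
    res <- cor_N(dat1[i, j, ], dat2[i, j, ])
    cor_map[i, j] <- res["cor"]
    N_map[i, j]   <- res["N"]
  }
}
# cor_map and N_map can now each be plotted as their own map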

Related

Principal component analysis using R. Automatic and manual results do not match

Two different methods of principal component analysis were conducted to analyze the following data (ch082.dat) using the R code in Box 1 below. https://drive.google.com/file/d/1xykl6ln-bUnXIs-jIA3n5S3XgHjQbkWB/view?usp=sharing
The first method uses the rotation matrix (see 'ans_mat' under '#rotated data' in the Box 1 code), and
the second method uses the 'prcomp' function (see 'rpca' under '#rotated data' in the Box 1 code).
However, there is a subtle discrepancy between the results of the rotation-matrix method and the prcomp method.
My Question
What should I do so that the result of the rotation-matrix-based method matches the result of the 'prcomp' function?
As far as I have tried with various data, including other data sets, the actual discrepancies seem to be limited to scale shifts and mirroring (sign-flip) transformations.
The results of the rotation-matrix-based method are shown in the left panel.
The results of the prcomp-based method are shown in the right panel.
Mirror inversion can be seen for the "ch082.dat" data (see Fig. 1).
It seems that, for some j, the sign of the jth eigenvector of the correlation matrix and the sign of the jth column of the prcomp output may be reversed. If some eigenvalues are (nearly) equal, the difference may be more complex than a mirror inversion.
Fig.1
There is a scale shift for the Box 2 data (see Fig. 2), despite the centering and normalization applied to the data.
Fig.2
Box.1
#dataload
##Use the 'setwd' function to specify the directory containing 'ch082.dat'.
##For example, if you put this file directly under the C drive of your Windows PC, you can run the following command.
setwd("C:/") #Depending on where you put the file, you may need to change the path.
getwd()
w1<-read.table("ch082.dat",header = TRUE,row.names = 1,fileEncoding = "UTF-8")
w1
#Function for standardizing data
#Thanks to https://qiita.com/ohisama2/items/5922fac0c8a6c21fcbf8
standalize <- function(data)
{
  for (i in length(data[1, ]))
  {
    x <- as.matrix(data[, i])
    y <- (x - mean(x)/sd(x))
    data[, i] <- y
  }
  return(data)
}
#Method using rotation matrix
z_=standalize(w1)
B_mat=cor(z_) #Compute correlation matrix
eigen_m <- eigen(B_mat)
sample_mat <- as.matrix(z_)
ans_mat=sample_mat
for (j in 1:length(sample_mat[1, ])) {
  ans_mat[, j] = sample_mat %*% eigen_m$vectors[, j]
}
#Method using "rpca" function
rpca <- prcomp(w1,center=TRUE, scale=TRUE)
#eigen vectors
eigen_m$vectors
rpca
#rotated data
ans_mat
rpca$x
#Graph Plots
par(mfrow=c(1,2))
plot(
ans_mat[,1],
ans_mat[,2],
main="Rotation using eigenvectors"
)
plot(rpca$x[,1], rpca$x[,2],
main="Principal component score")
par(mfrow=c(1,1))
#summary
summary(rpca)$importance
Box.2
sample_data <- data.frame(
  X = c(2, 4, 6, 5, 7, 8, 10),
  Y = c(6, 8, 10, 11, 9, 12, 14)
)
X = c(2, 4, 6, 5, 7, 8, 10)
Y = c(6, 8, 10, 11, 9, 12, 14)
plot(Y ~ X)
w1 = sample_data
Reference
https://logics-of-blue.com/principal-components-analysis/
(Written in Japanese)
The two sets of results agree. First, we can simplify your code a bit: you don't need your standardizing function or the for loop.
z_ <- scale(w1)
B_mat <- cor(z_)
eigen_m <- eigen(B_mat)
ans_mat <- z_ %*% eigen_m$vectors
Now the prcomp version:
z_pca <- prcomp(z_)
z_pca$sdev^2 # Equals eigen_m$values
z_pca$rotation # Equals eigen_m$vectors
z_pca$x # Equals ans_mat
Your original code mislabeled the ans_mat columns; they are actually the principal component scores. You can fix that with
colnames(ans_mat) <- colnames(z_pca$x)
The PC loadings (and therefore the scores) are not uniquely defined with respect to reflection. In other words, multiplying all of the loadings or scores in one component by -1 flips them but does not change their relationships to one another. Multiply z_pca$x[, 1] by -1 and the plots will match:
z_pca$x[, 1] <- z_pca$x[, 1] * -1
dev.new(width=10, height=6)
par(mfrow=c(1,2))
plot(ans_mat[,1], ans_mat[,2], main="Rotation using eigenvectors")
plot(z_pca$x[,1], z_pca$x[,2], main="Principal component score")
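If you would rather not flip components by hand, a common convention is to fix the sign of each eigenvector so that its largest-magnitude loading is positive, and to apply the same rule to prcomp's rotation. The sketch below is mine (the helper name align_signs is made up); it reuses z_, eigen_m and z_pca from above, and it only resolves sign flips, not the genuine differences that can arise when eigenvalues are (nearly) equal:
# force the largest-magnitude entry of each column to be positive
align_signs <- function(V) {
  flip <- apply(V, 2, function(v) sign(v[which.max(abs(v))]))
  sweep(V, 2, flip, "*")
}
vec_aligned <- align_signs(eigen_m$vectors)
rot_aligned <- align_signs(z_pca$rotation)
all.equal(unname(vec_aligned), unname(rot_aligned))  # TRUE when the eigenvalues are distinct
scores_aligned <- z_ %*% vec_aligned                 # scores under the common sign convention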

Generating n new datasets by randomly sampling existing data, and then applying a function to new datasets

For a paper I'm writing, I have subsetted a larger dataset into 3 groups because I thought the strength of correlations between 2 variables in those groups would differ (they did). I want to see whether subsetting my data into random groupings would also significantly affect the strength of correlations (i.e., whether what I'm seeing is just an effect of subsetting, or whether those groupings are actually meaningful).
To this end, I am trying to generate n new data frames by randomly sampling 150 rows from an existing dataset, and then I want to calculate correlation coefficients for two variables in each of those n new data frames, saving the correlation coefficient and its significance to a new file.
But, HOW?
I can do it manually, e.g., with dplyr, something like
newdata <- sample_n(Random_sample_data, 150)
output <- cor.test(newdata$x, newdata$y, method="kendall")
I'd obviously like not to type this out 1,000 or 100,000 times, and I have been trying things with loops and lapply (see below), but they haven't worked (undoubtedly due to something really obvious that I'm missing!).
Here I have tried to assign each row to a different group, with 10 groups in total, and then to do correlations between x and y by those groups:
Random_sample_data <- select(Range_corrected, x, y)
cat <- sample(1:10, 1229, replace = TRUE)
Random_sample_cats <- cbind(Random_sample_data, cat)
correlation <- function(c) {
  c <- cor.test(x, y, method = "kendall")
  return(c)
}
b <- daply(Random_sample_cats, .(cat), correlation)
Error message:
Error in cor.test(x, y, method = "kendall") :
object 'x' not found
Once you have the code for what you want to do once, you can put it in replicate to do it n times. Here's a reproducible example on built-in data
library(dplyr)  # for sample_n()
result = replicate(n = 10, expr = {
  newdata <- sample_n(mtcars, 10)
  output <- cor.test(newdata$wt, newdata$qsec, method = "kendall")
})
replicate will save the result of the last line of your expression (output <- ...) for each replication. It will attempt to simplify the result; in this case cor.test returns a list of length 8, so replicate simplifies the results into a matrix with 8 rows and 10 columns (one column per replication).
You may want to clean up the results a little so that, e.g., you only save the p-value. Here, we store only the p-value, so the result is a vector with one p-value per replication rather than a matrix:
result = replicate(n = 10, expr = {
  newdata <- sample_n(mtcars, 10)
  cor.test(newdata$wt, newdata$qsec, method = "kendall")$p.value
})
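If you want to keep both the correlation estimate and the p-value for each replication (for example to write them to a file, as the question describes), one option is to have the expression return a one-row data frame and bind the replications together afterwards. This is a sketch of mine, again using mtcars as stand-in data; the output file name is just an example:
library(dplyr)  # for sample_n()
result_list <- replicate(n = 10, simplify = FALSE, expr = {
  newdata <- sample_n(mtcars, 10)
  ct <- cor.test(newdata$wt, newdata$qsec, method = "kendall")
  data.frame(tau = unname(ct$estimate), p.value = ct$p.value)
})
result_df <- do.call(rbind, result_list)  # one row per replication
write.csv(result_df, "random_subset_correlations.csv", row.names = FALSE)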

Maximum first derivative for values in a data frame in R

Good day, I am looking for some help with processing my dataset. I have 14000 rows and 500 columns, and I am trying to get the maximum value of the first derivative for individual rows over different column groups. My data are saved as a data frame with the first column being the name of a variable. My data look like this:
Species Spec400 Spec405 Spec410 Spec415
1 AfricanOilPalm_1_Lf_1 0.2400900 0.2318345 0.2329633 0.2432734
2 AfricanOilPalm_1_Lf_10 0.1783162 0.1808581 0.1844433 0.1960315
3 AfricanOilPalm_1_Lf_11 0.1699646 0.1722618 0.1615062 0.1766804
4 AfricanOilPalm_1_Lf_12 0.1685733 0.1743336 0.1669799 0.1818896
5 AfricanOilPalm_1_Lf_13 0.1747400 0.1772355 0.1735916 0.1800227
For each of the variables in the Species column, I want to get the maximum derivative from, for example, Spec495 to Spec550. This is what I did before I ran into errors.
x <- c(495, 500, 505, 510, 515, 520, 525, 530, 535, 540, 545, 550)  ## x values of reflectance (Spec495 to Spec550)
y.data.f <- hsp[, 21:32]        ## row values for the required columns
y <- as.numeric(y.data.f[1, ])  ## convert just the first row of data to a vector
library(pspline)  ## using a spline so a derivative may be calculated from a list of numeric values
I really wanted to avoid using a loop because of the time it takes, but this is the only way I know of thus far
for (j in 1:14900) {
  y <- as.numeric(y.data.f[j, ])
  a1d <- max(predict(sm.spline(x, y), x, 1))
  write.table(a1d, file = "a1-d-appended.csv", sep = ",",
              col.names = FALSE, append = TRUE)
}
This loop runs up until the 7861st value, then gives this error:
Error in smooth.Pspline(x = ux, y = tmp[, 1], w = tmp[, 2], method = method, :
NA/NaN/Inf in foreign function call (arg 6)
I am sure there must be a way to avoid using a loop, maybe using the plyr package, but I can't figure out how to do so, nor which package would be best for getting the maximum derivative.
Can anyone offer some insight or suggestions? Thanks in advance.
First differences are the numerical analog of first derivatives when the x-dimension is evenly spaced. So something along the lines of:
which.max( diff( predict(sm.spline(x, y))$ysmth ) )
... will return the location of the maximum (positive) slope of the smoothed spline. If you wanted the maximal slope, allowing it to be either negative or positive, you would use abs() around the predict()$ysmth. If you are having difficulties with non-finite values, then using an is.finite index will clear both Inf and NaN difficulties:
predy <- predict(sm.spline(x, y))$ysmth
predx <- predict(sm.spline(x, y))$x
is.na(predy) <- !is.finite(predy)
plot(predx, predy,  # NA values will not blow up R's plotting functions,
                    # ... they just create discontinuities
     main = "First Derivative")

cost function in cv.glm of boot library in R

I am trying to use the cross-validation function cv.glm from the boot library in R to determine the number of misclassifications when a logistic regression GLM is applied.
The function has the following signature:
cv.glm(data, glmfit, cost, K)
with the first two denoting the data and the model, and K specifying the k-fold.
My problem is the cost parameter, which is defined as:
cost: A function of two vector arguments specifying the cost function
for the crossvalidation. The first argument to cost should correspond
to the observed responses and the second argument should correspond to
the predicted or fitted responses from the generalized linear model.
cost must return a non-negative scalar value. The default is the
average squared error function.
I guess for classification it would make sense to have a function which returns the rate of misclassification, something like:
nrow(subset(data, (predict >= 0.5 & data$response == "no") |
(predict < 0.5 & data$response == "yes")))
which is of course not even syntactically correct.
Unfortunately, my limited R knowledge has let me waste hours on this, and I was wondering if someone could point me in the right direction.
It sounds like you might do well to just use the cost function (i.e. the one named cost) defined further down in the "Examples" section of ?cv.glm. Quoting from that section:
# [...] Since the response is a binary variable an
# appropriate cost function is
cost <- function(r, pi = 0) mean(abs(r-pi) > 0.5)
This does essentially what you were trying to do with your example. Replacing your "no" and "yes" with 0 and 1, let's say you have two vectors, predict and response. Then cost() is nicely designed to take them and return the mean misclassification rate:
## Simulate some reasonable data
set.seed(1)
predict <- seq(0.1, 0.9, by=0.1)
response <- rbinom(n=length(predict), prob=predict, size=1)
response
# [1] 0 0 0 1 0 0 0 1 1
## Demonstrate the function 'cost()' in action
cost(response, predict)
# [1] 0.3333333 ## Which is right, as 3/9 elements (4, 6, & 7) are misclassified
## (assuming you use 0.5 as the cutoff for your predictions).
I'm guessing the trickiest bit of this will be getting your mind fully wrapped around the idea of passing a function in as an argument. (At least that was, for the longest time, the hardest part of using the boot package for me, which requires that move in a fair number of places.)
Added on 2016-03-22:
The function cost() given above is, in my opinion, unnecessarily obfuscated; the following alternative does exactly the same thing in a more expressive way:
cost <- function(r, pi = 0) {
mean((pi < 0.5) & r==1 | (pi > 0.5) & r==0)
}
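As a quick sanity check (this call is mine, not part of the original answer), applying the alternative definition to the simulated response and predict vectors from above reproduces the earlier result:
cost(response, predict)
# [1] 0.3333333  ## same misclassification rate as with the original definition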
I will try to explain the cost function in simple words. Let's take the
cv.glm(data, glmfit, cost, K) arguments step by step:
data
The data consist of many observations; think of them as a series of data points.
glmfit
It is the generalized linear model fitted to the data above. But there is a catch: cv.glm splits the data into K parts of roughly equal size, uses each part in turn as the test set while refitting the model on the rest (the training set), and predicts the held-out part. The predictions for each fold form a series with the same number of elements as the held-out part.
cost
Cost function. It takes two arguments: first, the observed responses in the test set, and second, the predictions from glmfit for that test set. The default is the average squared error function: it sums the squared differences between the observed and predicted data points (the two series have the same number of elements) and divides by the number of points.
K
The number of groups into which the input should be split. The default gives leave-one-out cross-validation.
Judging from your cost-function description, your input (x) would be a set of numbers between 0 and 1 (0-0.5 = no and 0.5-1 = yes) and your output (y) is 'yes' or 'no'. So the error (e) between observation and prediction would be:
cost <- function(x, y){
  e <- 0
  for (i in 1:length(x)){
    if (x[i] > 0.5) {
      d <- if (y[i] == 'yes') 0 else x[i] - 0.5
    } else {
      d <- if (y[i] == 'no')  0 else 0.5 - x[i]
    }
    e <- e + d*d       # accumulate squared error
  }
  e <- e / length(x)   # mean squared error
  return(e)
}
Sources : http://www.cs.cmu.edu/~schneide/tut5/node42.html
The cost function can optionally be defined if there is one you prefer over the default average squared error. If you want to do so, you write a function that returns the cost you want to minimize, using two inputs: (1) the vector of known labels that you are predicting, and (2) the vector of predicted probabilities from your model for those corresponding labels. So for the cost function that (I think) you described in your post, you are looking for a function that returns the average rate of accurate classifications, which would look something like this:
cost <- function(labels, pred){
  mean(labels == ifelse(pred > 0.5, 1, 0))
}
With that function defined you can then pass it into your cv.glm() call, although I wouldn't recommend using your own cost function over the default one unless you have a reason to. Your example isn't reproducible, so here is another one:
> library(boot)
>
> cost <- function(labels,pred){
+ mean(labels==ifelse(pred > 0.5, 1, 0))
+ }
>
> #make model
> nodal.glm <- glm(r ~ stage+xray+acid, binomial, data = nodal)
> #run cv with your cost function
> (nodal.glm.err <- cv.glm(nodal, nodal.glm, cost, nrow(nodal)))
$call
cv.glm(data = nodal, glmfit = nodal.glm, cost = cost, K = nrow(nodal))
$K
[1] 53
$delta
[1] 0.8113208 0.8113208
$seed
[1] 403 213 -2068233650 1849869992 -1836368725 -1035813431 1075589592 -782251898
...
The cost function defined in the example for cv.glm clearly assumes that the predictions are probabilities, which requires the type="response" argument in the predict function. The documentation in library(boot) should state this explicitly; otherwise one would be forced to assume that the default type="link" is used inside cv.glm, in which case the cost function would not work as intended.
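To make the scale issue concrete, here is a small self-contained illustration (my own example, not from the boot documentation): predict() on a binomial glm returns log-odds by default and probabilities only with type = "response", and the 0.5 cutoff used by the cost functions above only makes sense on the probability scale.
fit <- glm(am ~ wt, family = binomial, data = mtcars)
head(predict(fit))                     # default type = "link": log-odds, unbounded
head(predict(fit, type = "response"))  # probabilities in [0, 1]
mean(mtcars$am == ifelse(predict(fit, type = "response") > 0.5, 1, 0))  # accuracy with a 0.5 cutoff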

producing a grid of results in R

I am writing some code to do a maximum likelihood estimation of some parameter values, and I am trying to create a surface plot of parameter values taken from the optim function. I need to create a grid to do so, and it is the grid-creation part that is confounding me.
My MLE function looks like:
loglike <- function(par, dat, scale) {
  ptp  <- dat[1:length(dat)-1]
  ptp1 <- dat[2:length(dat)]
  r     <- par['r']
  k     <- par['k']
  sigma <- par['sigma']
  if (scale == 'log') {
    return(sum(dnorm(log(ptp1) - log(ptp)*exp(r - (ptp/k)), mean = 0, sd = sigma, log = T)))
  }
  if (scale == 'sqrt') {
    return(sum(dnorm(sqrt(ptp1) - sqrt(ptp)*exp(r - (ptp/k)), mean = 0, sd = sigma, log = T)))
  }
  if (scale == 'linear') {
    return(sum(dnorm(ptp1 - ptp*exp(r - (ptp/k)), mean = 0, sd = sigma, log = T)))
  }
}
I have already created some data from optim, giving me corresponding parameter values.
I have tried taking output from the optim function and putting it into the expand.grid function like:
gridlog<-expand.grid(logs[,"r"],logs[,"sigma"],logs[,"k"])
But all this is doing is creating a large matrix filled with all the same values.
The data going into the expand.grid function come from:
logs <- list()
for (i in seq(1, 300, 0.1)) {
  logs[i] <- optim(par = c(r = i, k = i, sigma = i), fn = loglike, dat = dat,
                   scale = 'log', method = 'Nelder-Mead', control = list(fnscale = -1))
}
logs <- do.call(rbind, logs)
This creates a 300-row matrix of corresponding sigmas, rs and ks.
My data is:
c(100, 128.675595618645, 75.436115414503, 146.398449792328, 102.419994706974,
207.397726741841, 23.4579309898438, 42.4085746569567, 119.498216389673,
59.7845591706614, 119.37201616882, 252.047672957539, 28.3165331949818,
57.4918213065119, 311.615538092141, 8.53779749227741, 31.5382580618134,
115.617013730077, 43.6907812963781, 70.9139870053552, 123.004040266686,
132.575148404208, 114.813947981006, 115.950032495637, 120.891472762661,
97.0207348527786, 235.618894638631, 17.0936655960759, 49.4419128844531,
112.476950569973, 58.3241789008329, 80.0300102105128, 103.248819284132,
99.1968765946717, 113.905769052605, 143.181386861766, 62.962989192695,
174.054591300157, 39.9156352770331, 81.8344415290292, 176.631480374326,
51.5564038694108, 131.542259464434, 72.5981749979889, 38.9733086158719,
126.808054274927, 73.6960412245896, 62.5484608101147, 55.539355637003,
137.888502803112, 106.921926717155, 140.000738390606, 162.512046122238,
26.2949484171288, 80.4110888678422, 74.0481779531392, 33.9890286552257,
142.477859644323, 55.1820570626643, 107.242498924143, 56.8497685792794,
143.676120209843, 84.2334844367379, 67.0330079913484, 109.96246704725,
157.216290273118, 59.4585552091703, 67.2986524284706, 55.2529503291083,
38.932960005221, 62.7454169122216, 210.687014199037, 38.7348882392115,
75.6645116341029, 115.924283193145, 117.772958122253, 45.5313134644358,
112.306998515583, 38.7001172906923, 66.1308507048062, 122.516808638813,
38.8283932430479, 168.014298040365, 38.0902373313928, 117.414876109978,
168.615976661456, 66.5037228223079, 94.4482610053865, 505.254990783834,
1.05181785078369, 1.77594058056118, 4.36034444400473, 12.1485473106491,
82.2373017835424, 58.9775202042162, 132.907299665772, 51.2346939236555,
123.251093218535, 143.077217943039, 96.1524852870813)
Any help anyone could give would be greatly appreciated!!
#find optimum:
fit<-optim(par=c(r=1,k=1,sigma=1),fn=loglike,dat=dat,scale='log',
method='Nelder-Mead',control=list(fnscale=-1))
fit$par
r k sigma
0.3911590 254.4989317 0.5159761
# make grid around optimum with few selected sigma values:
rs<-seq(0.01,1,length=30)
ks<-seq(230,280,length=30)
sigmas<-c(0.25,0.5159761,0.75)
# this will contains all parameter combinations
# and the corresponding likelihood values
mlegrid<-cbind(as.matrix(expand.grid(rs,ks,sigmas)),0) #Matrix
colnames(mlegrid)<-c('r','k','sigma','likelihood')
for (i in 1:nrow(mlegrid)) {  # go through all combinations
  mlegrid[i, 4] <- loglike(par = mlegrid[i, 1:3], dat = dat, scale = 'log')
}
mlegrid[which.max(mlegrid[,4]),]
r k sigma likelihood
0.3855172 257.5862069 0.5159761 -74.9940496
# almost the same as from optim
# (differences due to sparse grid, more dense gives more accurate results)
# for interactive plots; static versions can be made with the `persp` function (see the sketch below)
library(rgl)
persp3d(x=rs,y=ks,
z=matrix(mlegrid[mlegrid[,3]==sigmas[1],4],nrow=length(rs)),col=2)
#with sigma from optim
persp3d(x=rs,y=ks,
z=matrix(mlegrid[mlegrid[,3]==sigmas[2],4],nrow=length(rs)),col=2)
persp3d(x=rs,y=ks,
z=matrix(mlegrid[mlegrid[,3]==sigmas[3],4],nrow=length(rs)),col=2)
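For a static (non-rgl) version of the same surface, base graphics can be used on the same grid. A minimal sketch, reusing mlegrid, rs, ks and sigmas from above (the persp viewing angles are arbitrary choices of mine):
# log-likelihood surface for the sigma value found by optim
z2 <- matrix(mlegrid[mlegrid[, 3] == sigmas[2], 4], nrow = length(rs))
persp(x = rs, y = ks, z = z2, theta = 30, phi = 25,
      xlab = "r", ylab = "k", zlab = "log-likelihood")
filled.contour(x = rs, y = ks, z = z2,
               xlab = "r", ylab = "k", main = "Log-likelihood surface")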
