Error in as.Date.numeric() : 'origin' must be supplied

I have received a paper whose authors included the R files for their empirical results. However, I am having some problems trying to run their code:
data <- vni$R[198:length(vni$R)]; date <- vni$Date[198:length(vni$R)]
l <- length(data)
rw_length <- 52 # 52 weeks (~ 1 year)
bound <- vector()
avr <- vector()
for (i in (rw_length+1):l) {
  AVR.test <- AutoBoot.test(data[(i-rw_length):i], nboot=2000, "Normal", c(0.025, 0.975))
  bound <- append(bound, AVR.test$CI.stat)
  avr <- append(avr, AVR.test$test.stat)
}
lower <- bound[seq(1, length(bound), 2)]
upper <- bound[seq(2, length(bound), 2)]
results <- matrix(c(date[(rw_length+1):l],data[(rw_length+1):l],avr,upper, lower),ncol=5, dimnames = list(c(),c("Date", "Return", "AVR", "Upper", "Lower")))
And I get the following error:
Error in as.Date.numeric(e) : 'origin' must be supplied
for the line:
results <- matrix(c(date[(rw_length+1):l], data[(rw_length+1):l], avr, upper, lower), ncol=5, dimnames = list(c(), c("Date", "Return", "AVR", "Upper", "Lower")))
My dataset is:
Date P R
1 2001-03-23 259.60 0.0000000000
2 2001-03-30 269.30 0.0366840150
3 2001-04-06 284.69 0.0555748690
4 2001-04-13 300.36 0.0535808860
5 2001-04-20 317.76 0.0563146260
...
935 2019-02-15 950.89 0.0454163960
936 2019-02-22 988.91 0.0392049380
937 2019-03-01 979.63 -0.0094283770
Could you please help me with this issue?
Thanks a lot!

Everything in a matrix must be the same class. You often run into this when a string ends up among numbers:
m <- matrix(0, nr=2, nc=2)
m
# [,1] [,2]
# [1,] 0 0
# [2,] 0 0
m[1] <- "a"
m
# [,1] [,2]
# [1,] "a" "0"
# [2,] "0" "0"
In this case, you have Date (first column) and, presumably, numeric (all the others? no idea what AutoBoot returns). Because R coerces from the least-complex class towards the most-complex one (here, from numeric to Date), it tries to convert the numeric values to Date, and as.Date.numeric fails without an origin:
matrix(c(Sys.Date(), 1.1))
# Error in as.Date.numeric(e) : 'origin' must be supplied
I suggest that trying to store this in a matrix is therefore fundamentally flawed. If you want to store a Date object among numbers, you have two options:
1. Store it in a data.frame, where each column can have its own class.
2. Pre-convert the "Date" data to numeric and store it as a number. This means that if/when you need the dates to be of class Date again, you'll need as.Date(..., origin="1970-01-01").
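For instance, a minimal sketch of option 1, reusing the objects computed in the question (date, data, avr, upper, lower):
results <- data.frame(
  Date   = date[(rw_length+1):l],  # keeps its Date class
  Return = data[(rw_length+1):l],
  AVR    = avr,
  Upper  = upper,
  Lower  = lower
)
str(results)  # Date stays Date; the other columns stay numeric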


Calculate the 20 steps sum of a variable

I have a daily database of returns from a portfolio. For a model I am replicating, the authors calculate, for each month, the realized variance RVt from the daily returns in the previous 21 sessions.
To do this here is a small example of how I am trying to calculate it:
x <- rnorm(24795, 0, 0.2) # generate random numbers to simulate my sample
x_2 <- x^2 # the model specifies working with the squared returns
# I need the monthly sum of the squared returns. For this I create a matrix
# with length(x)/20 rows, because each month consists of 20 trading sessions
rv <- matrix(NA, nrow=(length(x_2)/20), ncol=1)
# I create the first step
rv[1] <- sum(x_2[1:20])
# I create a loop to sum from x_2[21:40] and continue
# with these 20-step sums
for (i in 2:1239){
  rv[i] <- sum(x_2[i+20:i+39])
}
rv
The problem is that my loop is summing as:
x_2[21:40]
x_2[22:41]
x_2[23:42]
instead of
x_2[21:40]
x_2[41:60]
x_2[61:80]
Does anyone know what I am doing wrong?
The formula from the paper (shown as an image in the original post) defines RVt as the sum of the squared daily returns over the sessions of the month.
Thanks
Miguel
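A note on the root cause: in R the sequence operator ":" binds tighter than "+", so x_2[i+20:i+39] is parsed as x_2[i + (20:i) + 39], not x_2[(i+20):(i+39)]. Spelling out the block bounds fixes the loop; a minimal sketch reusing the objects defined above:
for (i in 2:1239){
  rv[i] <- sum(x_2[((i-1)*20 + 1):(i*20)]) # i = 2 sums x_2[21:40], i = 3 sums x_2[41:60], ...
}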
We could use seq to build the start and end index of each block:
i1 <- seq(21, length(x_2), by = 20)
i1 <- i1[-length(i1)]
i2 <- c(i1[-1] - 1, length(x_2))
head(i1)
#[1] 21 41 61 81 101 121
head(i2)
#[1] 40 60 80 100 120 140
rv[-1] <- unlist(Map(function(i, j) sum(x_2[i:j]), i1, i2))
Output:
> head(rv)
[,1]
[1,] 1.0533125
[2,] 1.0914327
[3,] 0.7530577
[4,] 1.0559202
[5,] 0.6579956
[6,] 0.9139404
> tail(rv)
[,1]
[1234,] 0.7115833
[1235,] 0.6104712
[1236,] 0.6161004
[1237,] 0.7440868
[1238,] 0.7284476
[1239,] 1.8718138
We can use tapply, dividing every run of 20 numbers into its own group:
result <- tapply(x_2, ceiling(seq_along(x_2)/20), sum)
Verify the result:
head(result)
#         1         2         3         4         5         6
# 1.0872762 0.4487953 1.1764887 0.8852306 0.8394201 1.0295633
sum(x_2[1:20])
#[1] 1.087276
sum(x_2[21:40])
#[1] 0.4487953
sum(x_2[41:60])
#[1] 1.176489
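Another base-R option is to reshape and sum columns; a sketch that keeps only the complete 20-session blocks (the 15 leftover observations at the end are dropped):
n <- floor(length(x_2) / 20) # number of complete blocks
rv2 <- colSums(matrix(x_2[1:(n * 20)], nrow = 20)) # each column holds one block of 20
head(rv2)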

coercing numeric and then graph from matrix in R

I have a symmetric distance matrix (x):
0 2.6096 2.3601 5.6109
2.6096 0 1.7045 6.8441
2.3601 1.7045 0 6.5946
5.6109 6.8441 6.5946 0
which I would like to analyse as a graph in order to compute its spectral density. To do that, I want to follow these steps (with igraph):
x_mat <- as.matrix(x, matrix.type="adjacency") # get adjacency matrix
x_graph <- graph.adjacency(x_mat) # convert to graph
x_lap <- graph.laplacian(x_graph) # get the graph Laplacian
x_eig <- eigen(x_lap,symmetric=TRUE,only.values=TRUE)
(I'm not sure how to plot the spectral density, but I'm not even there yet)
But I'm having trouble from the start. I can get my matrix to be a matrix:
x_mat <- as.matrix(x,matrix.type="adjacency")
is.matrix(x_mat)
[1] TRUE
x_mat
[,1]
[1,] Numeric,39204
But I cannot coerce it to numeric:
mode(x_mat) <- "numeric"
Error in eval(expr, envir, enclos) :
(list) object cannot be coerced to type 'double'
I need the adjacency matrix to be numeric in order to move along my pipeline. Any advice? Alternative methods to achieve my goal, of course, are also welcome.
Thanks in advance.
data.matrix should provide what you need.
df <- read.table(header=F, text='
0 2.6096 2.3601 5.6109
2.6096 0 1.7045 6.8441
2.3601 1.7045 0 6.5946
5.6109 6.8441 6.5946 0
')
mat <- data.matrix(df)
> is.matrix(mat)
[1] TRUE
> is.numeric(mat)
[1] TRUE
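From there, a sketch of the rest of the pipeline in the question (assuming a recent igraph; weighted = TRUE makes the distances edge weights, and the sparse Laplacian is densified for eigen):
library(igraph)
g <- graph_from_adjacency_matrix(mat, mode = "undirected", weighted = TRUE, diag = FALSE)
lap <- laplacian_matrix(g) # a sparse Matrix
ev <- eigen(as.matrix(lap), symmetric = TRUE, only.values = TRUE)$values
plot(density(ev), main = "Spectral density of the graph Laplacian")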

Pairwise Correlation Table

I'm new to R, so I apologize if this is a straightforward question; however, I've done quite a bit of searching this evening and can't seem to figure it out. I've got a data frame with a whole slew of variables, and what I'd like to do is create a table of the correlations among a subset of them: basically the equivalent of "pwcorr" in Stata, or "correlations" in SPSS. The one key requirement is that I want not only the r but also the significance associated with each value.
Any ideas? This seems like it should be very simple, but I can't seem to figure out a good way.
Bill Venables offers this solution in an answer on the R mailing list, to which I've made some slight modifications:
cor.prob <- function(X, dfr = nrow(X) - 2) {
  R <- cor(X)
  above <- row(R) < col(R) # upper triangle
  r2 <- R[above]^2
  Fstat <- r2 * dfr / (1 - r2)
  R[above] <- 1 - pf(Fstat, 1, dfr) # p-value for each correlation
  cor.mat <- t(R) # transpose: the p-values move to the lower triangle
  cor.mat[upper.tri(cor.mat)] <- NA
  cor.mat
}
So let's test it out:
set.seed(123)
data <- matrix(rnorm(100), 20, 5)
cor.prob(data)
          [,1]      [,2]      [,3]      [,4]      [,5]
[1,] 1.0000000        NA        NA        NA        NA
[2,] 0.7005361 1.0000000        NA        NA        NA
[3,] 0.5990483 0.6816955 1.0000000        NA        NA
[4,] 0.6098357 0.3287116 0.5325167 1.0000000        NA
[5,] 0.3364028 0.1121927 0.1329906 0.5962835         1
Does that line up with cor.test?
cor.test(data[,2], data[,3])
Pearson's product-moment correlation
data: data[, 2] and data[, 3]
t = 0.4169, df = 18, p-value = 0.6817
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.3603246 0.5178982
sample estimates:
cor
0.09778865
Seems to work OK. Note that the off-diagonal values shown are the p-values, not the correlations themselves: the 0.6816955 in cell [3,2] matches the p-value from cor.test above.
Here is something that I just made. I stumbled on this post because I was looking for a way to take every pair of variables and get a tidy n-by-4 data frame: columns 1 and 2 hold the variable names, and columns 3 and 4 hold the absolute and signed correlation. Just pass the function a data frame of numeric and integer values.
pairwiseCor <- function(dataframe){
  pairs <- combn(names(dataframe), 2, simplify=FALSE)
  df <- data.frame(Variable1=rep(0,length(pairs)), Variable2=rep(0,length(pairs)),
                   AbsCor=rep(0,length(pairs)), Cor=rep(0,length(pairs)))
  for(i in 1:length(pairs)){
    df[i,1] <- pairs[[i]][1]
    df[i,2] <- pairs[[i]][2]
    df[i,3] <- round(abs(cor(dataframe[,pairs[[i]][1]], dataframe[,pairs[[i]][2]])),4)
    df[i,4] <- round(cor(dataframe[,pairs[[i]][1]], dataframe[,pairs[[i]][2]]),4)
  }
  pairwiseCorDF <- df
  pairwiseCorDF <- pairwiseCorDF[order(pairwiseCorDF$AbsCor, decreasing=TRUE),]
  row.names(pairwiseCorDF) <- 1:length(pairs)
  pairwiseCorDF <<- pairwiseCorDF # side effect: also assigns the result to the global environment
  pairwiseCorDF
}
This is what the output is:
> head(pairwiseCorDF)
         Variable1        Variable2 AbsCor     Cor
1        roll_belt     accel_belt_z 0.9920 -0.9920
2 gyros_dumbbell_x gyros_dumbbell_z 0.9839 -0.9839
3        roll_belt total_accel_belt 0.9811  0.9811
4 total_accel_belt     accel_belt_z 0.9752 -0.9752
5       pitch_belt     accel_belt_x 0.9658 -0.9658
6 gyros_dumbbell_z  gyros_forearm_z 0.9491  0.9491
I've found that the R package picante does a nice job with this problem. You can pass your dataset to the cor.table function and get a table of correlations and p-values for all of your variables. You can specify Pearson's r or Spearman's rho in the function. See this link for help:
http://www.inside-r.org/packages/cran/picante/docs/cor.table
Also remember to remove any non-numeric columns from your dataset prior to running the function. Here's an example piece of code:
install.packages("picante")
library(picante)
#Insert the name of your dataset in the code below
cor.table(dataset, cor.method="pearson")
You can use the sjt.corr function from the sjPlot package, which gives you a nicely formatted correlation table, ready for use in your Office application.
Simplest function call is just to pass the data frame:
sjt.corr(df)
See examples here.
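For completeness, Hmisc::rcorr is another common way to get both the correlations and their p-values in one call; a sketch, assuming your numeric data frame is called dataset:
library(Hmisc)
res <- rcorr(as.matrix(dataset), type = "pearson")
res$r # matrix of correlations
res$P # matrix of p-values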

Why does the calculation of Cohen's kappa fail across different packages on this contingency table?

I have a contingency table for which I would like to calculate Cohen's kappa, the level of agreement. I have tried three different packages, all of which seem to fail to some degree. The package e1071 has a function specifically for a contingency table, but that too seems to fail. Below is reproducible code; you will need to install the packages concord, e1071, and irr.
# Recreate my contingency table, output with dput
conf.mat<-structure(c(810531L, 289024L, 164757L, 114316L), .Dim = c(2L,
2L), .Dimnames = structure(list(landsat_2000_bin = c("0", "1"
), MOD12_2000_binForest = c("0", "1")), .Names = c("landsat_2000_bin",
"MOD12_2000_binForest")), class = "table")
library(concord)
cohen.kappa(conf.mat)
library(e1071)
classAgreement(conf.mat, match.names=TRUE)
library(irr)
kappa2(conf.mat)
The output I get from running this is:
> cohen.kappa(conf.mat)
Kappa test for nominally classified data
4 categories - 2 methods
kappa (Cohen) = 0 , Z = NaN , p = NaN
kappa (Siegel) = -0.333333 , Z = -0.816497 , p = 0.792892
kappa (2*PA-1) = -1
> classAgreement(conf.mat, match.names=TRUE)
$diag
[1] 0.6708459
$kappa
[1] NA
$rand
[1] 0.5583764
$crand
[1] 0.0594124
Warning message:
In ni[lev] * nj[lev] : NAs produced by integer overflow
> kappa2(conf.mat)
Cohen's Kappa for 2 Raters (Weights: unweighted)
Subjects = 2
Raters = 2
Kappa = 0
z = NaN
p-value = NaN
Could anyone advise on why these might fail? I have a large dataset, but as this table is simple I didn't think that could cause such problems.
In the first function, cohen.kappa, you need to specify that you are passing count data rather than an n*m matrix of n subjects and m raters:
cohen.kappa(conf.mat, 'count')
The second function is trickier. Your matrix is stored as integer rather than numeric, and R's integers are 32-bit, so they can't hold really big numbers. When you multiply two of your big counts together, the result overflows. For example:
i <- 975288
j <- 1099555
class(i)
# [1] "numeric"
i*j
# [1] 1.072383e+12
as.integer(i)*as.integer(j)
# [1] NA
# Warning message:
# In as.integer(i) * as.integer(j) : NAs produced by integer overflow
So you need to convert your matrix to numeric before calling classAgreement:
classAgreement(matrix(as.numeric(conf.mat), nrow=2))
Finally, take a look at the documentation for ?kappa2. It requires an n*m matrix of subjects and raters, as explained above, so it just won't work with your (efficient) contingency-table structure.
Do you need to know specifically why those fail? Here is a function that computes the statistic kappa = (po - pe) / (1 - pe) directly from the table, where po is the observed agreement and pe is the agreement expected by chance (written in a hurry, so I might clean it up later; see the kappa wiki). Because it divides the counts before multiplying anything, the arithmetic happens in doubles and the integer-overflow problem never arises:
kap <- function(x) {
  a <- (x[1,1] + x[2,2]) / sum(x) # observed agreement po
  e <- (sum(x[1,]) / sum(x)) * (sum(x[,1]) / sum(x)) + # expected agreement pe,
       (1 - (sum(x[1,]) / sum(x))) * (1 - (sum(x[,1]) / sum(x))) # from the marginals
  (a-e)/(1-e)
}
Tests/output:
> (x = matrix(c(20,5,10,15), nrow=2, byrow=T))
     [,1] [,2]
[1,]   20    5
[2,]   10   15
> kap(x)
[1] 0.4
> (x = matrix(c(45,15,25,15), nrow=2, byrow=T))
     [,1] [,2]
[1,]   45   15
[2,]   25   15
> kap(x)
[1] 0.1304348
> (x = matrix(c(25,35,5,35), nrow=2, byrow=T))
     [,1] [,2]
[1,]   25   35
[2,]    5   35
> kap(x)
[1] 0.2592593
> kap(conf.mat)
[1] 0.1258621
