lapply function with 2 count variables - r

I am very green in R, so there is probably a very easy solution to this:
I want to calculate the average correlation between the column vectors in a square matrix:
x <- matrix(rnorm(10000), ncol = 100)
aux <- matrix(seq(1, 10000))
loop <- sapply(aux, function(i, j) cov(x[, i], x[, j]))
cor_x <- mean(loop)
When evaluating the sapply line I get the error 'subscript out of bounds'. I know I can do this via a script, but is there any way to achieve this in one line of code?

No need for any loops. Just use mean(cov(x)), which does this very efficiently.

The problem is due to aux. The variable aux has to range from 1 to 100 since you have 100 columns, but your aux is a sequence along the rows of x and hence ranges from 1 to 10000. The anonymous function also needs its second index supplied, which a nested sapply provides. It will work with the following code:
aux <- seq(1, 100)
loop <- sapply(aux, function(i) sapply(aux, function(j) cov(x[, i], x[, j])))
Afterwards, you can calculate the mean covariance with:
cor_x <- mean(loop)
If you want to exclude duplicate pairs (cov(X,Y) is identical to cov(Y,X) by symmetry), you can use:
cor_x <- mean(loop[upper.tri(loop, diag = TRUE)])
If you also want to exclude cov(X,X), i.e., variance, you can use:
cor_x <- mean(loop[upper.tri(loop)])
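To see how these answers fit together, here is a minimal sketch (the seed is an assumption, added only for reproducibility): cov(x) returns the full 100 x 100 covariance matrix of the columns, so mean(cov(x)) averages all pairs including the diagonal variances, while the upper.tri variants restrict the average to unique pairs:
set.seed(1)                           # assumption: fixed seed, for reproducibility only
x <- matrix(rnorm(10000), ncol = 100)
cx <- cov(x)                          # full 100 x 100 covariance matrix of the columns
mean(cx)                              # all pairs, variances included
mean(cx[upper.tri(cx, diag = TRUE)])  # unique pairs, variances included
mean(cx[upper.tri(cx)])               # unique off-diagonal pairs only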

Related

Coding in R - how to do rolling window without a for-loop? For-loop too slow

I understand for-loops are slow in R, and the suite of apply() functions is designed to be used instead (in many cases).
However, I can't figure out how to use those functions in my situation, and advice would be greatly appreciated.
I have a list/vector of values (let's say length = 10,000) and at every point, starting at the 21st value, I need to take the standard deviation of the trailing 20 values. So at the 21st value, I take the SD of values 1 through 21; at the 22nd value, I take SD(2:22), and so on.
So you see I have a rolling window where I need to take the SD() of the previous 20 indices. Is there any way to accomplish this faster, without a for-loop?
I found a solution to my question.
The zoo package has a function called rollapply, which does exactly that: it applies a function over a rolling window.
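A minimal sketch of rollapply for this case (the width of 21 is an assumption, matching the y + 1 values each window spans in the benchmark below):
library(zoo)
x <- rnorm(10000)
# SD over a rolling window of 21 values, each window anchored at its
# right edge, i.e. sd(x[i:(i + 20)]) for every valid starting index i
rolled <- rollapply(x, width = 21, FUN = sd, align = "right")
The benchmark below compares the lapply approach against a plain for-loop: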
library(microbenchmark)
library(ggplot2)
# dummy vector
n <- 50
x <- sample(1:100, n, replace = TRUE)
# parameters
y <- 20              # window length (each window spans y + 1 values)
z <- length(x) - y   # final starting index
# benchmark
xx <- microbenchmark(
  lapply = {
    a <- lapply(1:z, \(i) sd(x[i:(i + y)]))
  },
  loop = {
    b <- vector("list", z)
    for (i in 1:z) {
      b[[i]] <- sd(x[i:(i + y)])
    }
  },
  times = 30
)
# plot
autoplot(xx) +
  ggtitle(paste('vector of size', n))
It would appear that while lapply has a speed advantage on smaller vectors, a loop should be used with longer vectors.
I would maintain, however, that loops are not slow per se, as long as they are not applied incorrectly (such as iterating row-by-row over a data frame).

Calculating difference between points in vector

I'm trying to calculate the difference between all points in a vector of length 10605 in R. For example, I am trying to do this:
for (i in 1:10605) {
  for (j in 1:10605) {
    differences[i] = housedata$Mean_household_income[i] - housedata$Mean_household_income[j]
  }
}
It is taking so long to compute, and I'm thinking there's a more timely way to calculate the difference between all the points with each other in this vector. Does anyone have any suggestions?
Thanks!
Seems like the dist function should do that. Distance matrices are only lower triangular because distance(x,y) == distance(y,x):
my.distances <- dist(housedata$Mean_household_income)
It's going to be faster since the work is done in C code, which you can see for yourself by typing:
dist
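For a one-dimensional input, dist computes the absolute pairwise differences; a small sketch with toy values standing in for the income column:
inc <- c(100, 250, 400)   # toy stand-in for housedata$Mean_household_income
d <- dist(inc)            # lower-triangular |x_i - x_j|
as.matrix(d)              # expand to a full symmetric matrix if needed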
You could loop through an incrementally shifted/wrapped copy of the vector and subtract the two vectors. You still have to loop through the length of the data once and shift and subtract the vector each time, but it will probably save some time.
Here is an example:
# make a shift/wrap function
shift <- function(df, offset) {
  df[((1:length(df)) - 1 - offset) %% length(df) + 1]
}
# make some data
data <- seq(1, 4)
# make an empty vector to hold the results
difs <- vector()
# loop through the data, subtracting the shifted copy each time
for (i in 1:length(data)) {
  shifted <- shift(data, i)
  result <- data - shifted
  difs <- c(difs, result)
}
print(difs)
What about using outer? It uses a vectorized function (here -) on all combinations of two vectors and stores the results in a matrix.
For example,
x <- runif(10605)
system.time(
  differences <- outer(x, x, '-')
)
takes one second on my computer.
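On a small vector the result is easy to inspect (toy values):
v <- c(1, 5, 10)
outer(v, v, '-')
#      [,1] [,2] [,3]
# [1,]    0   -4   -9
# [2,]    4    0   -5
# [3,]    9    5    0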

Sum a list of variables in R

I am trying to sum up a list of variables.
qdisgust <- c(2,3,5,8,12,17,18,22,23,25,28,29,31,33)
vqdisgust <- list()
n <- length(qdisgust)
lhs <- paste("mydata$Disgust_", qdisgust, sep="")
eq <- paste("vqdisgust <- c(lhs)")
eval(parse(text=eq))
I successfully get all the variables in the list, but then I am not able to get the sum of them. I assume there would be an even simpler way to do this.
Thank you for your help!
If we are looking to get the sum of multiple columns in 'mydata', we can use colSums
colSums(mydata)
If qdisgust is the index of columns, either
colSums(mydata[paste0("Disgust_", qdisgust)])
or
colSums(mydata[grep("Disgust_", names(mydata))])
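A minimal reproducible sketch (mydata and its Disgust_ columns are made up here):
mydata <- data.frame(Disgust_2 = c(1, 2),
                     Disgust_3 = c(3, 4),
                     Other     = c(9, 9))
qdisgust <- c(2, 3)
colSums(mydata[paste0("Disgust_", qdisgust)])    # select by constructed names
colSums(mydata[grep("Disgust_", names(mydata))]) # select by name pattern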

R - apply over increasing submatrices, instead of individual rows/cols

So I've been pondering how to do this without a for loop and I couldn't come up with a good answer. Here is an example of what I mean:
sampleData <- matrix(rnorm(25, 0, 1), 5, 5)
meanVec <- vector(length = length(sampleData[, 1]))
for (i in 1:length(sampleData[, 1])) {
  subMat <- sampleData[1:i, ]
  ifelse(i == 1, sumVec <- sum(subMat), sumVec <- apply(subMat, 2, sum))
  meanVec[i] <- mean(sumVec)
}
meanVec
The actual matrix I want to do this to is reasonably large, and to be honest, for this application it won't make a huge difference in speed, but it's a question I think should be answered:
How can I get rid of that for loop and replace with some *ply call?
Edit: In the example given, I generate sample data and define a vector whose length equals the number of rows of the matrix.
The for loop does the following steps:
1) takes a submatrix, from row 1 to row i
2) if i is 1, it just sums up the values in that vector
3) if i is not 1, it gets the sum of each row, then takes the mean of those sums and stores it in position i of the vector meanVec.
Finally, it prints out meanVec.
This does what you describe:
cumsum(rowSums(sampleData))/seq_len(nrow(sampleData))
However, your code doesn't do the same: apply(subMat, 2, sum) takes column sums, not row sums.
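A quick sketch verifying the one-liner against a literal reading of the description, i.e. the mean of the row sums of the growing submatrix (the seed is an assumption, for reproducibility):
set.seed(42)   # assumption: fixed seed, for reproducibility only
sampleData <- matrix(rnorm(25), 5, 5)
described <- sapply(seq_len(nrow(sampleData)), function(i) {
  mean(rowSums(sampleData[1:i, , drop = FALSE]))
})
oneLiner <- cumsum(rowSums(sampleData)) / seq_len(nrow(sampleData))
all.equal(described, oneLiner)   # TRUE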

R - Operating on subset of columns from dataframe with ddply

I have a large-ish dataframe (40000 observations of 800 variables) and wish to operate on a range of columns of every observation with something akin to dot product. This is how I implemented it:
matrixattempt <- as.matrix(dframe)
takerow <- function(k) { as.vector(matrixattempt[k, ]) }
takedot0 <- function(k) { sqrt(sum(data0averrow * takerow(k)[2:785])) }
for (k in 1:40000) {
  print(k)
  dframe$dot0aver[k] <- takedot0(k)
}
The print is just to keep track of what's going on. data0averrow is a numeric vector, same size as takerow(k)[2:785], that has been pre-defined.
This is running, and from a few tests running correctly, but it is very slow.
I searched for dot product for a subset of columns, and found this question, but could not figure out how to apply it to my setup. ddply sounds like it should work faster (although I do not want to do splitting and would have to use the same define-id trick that the referenced questioner did). Any insight/hints?
Try this:
sqrt(colSums(t(matrixattempt[, 2:785]) * data0averrow))
or equivalently:
sqrt(matrixattempt[, 2:785] %*% data0averrow)
Alternatively, multiply the columns elementwise with sweep and use rowSums on the result:
dframe$dot0aver <- sqrt(rowSums(
  sweep(matrixattempt[, 2:785], 2, data0averrow, `*`)
))
It's the square root of the dot product of data0averrow with columns 2:785 of each row.
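A small check with made-up dimensions, comparing the original loop to the vectorized form (m, w, and cols are toy stand-ins for matrixattempt, data0averrow, and 2:785):
set.seed(1)                            # assumption, for reproducibility
m    <- matrix(abs(rnorm(60)), 10, 6)  # toy stand-in for matrixattempt
w    <- abs(rnorm(4))                  # toy stand-in for data0averrow
cols <- 2:5                            # toy stand-in for 2:785
loopRes <- sapply(1:nrow(m), function(k) sqrt(sum(w * m[k, cols])))
vecRes  <- sqrt(as.vector(m[, cols] %*% w))
all.equal(loopRes, vecRes)             # TRUE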
