Creating names for a matrix of data in R

I have a simple 12 x 2 matrix called m that contains my dataset (see below).
Question
Why do I run into an error when I use dimnames(m) to create names for the two columns of my data? Is there a better way to create column names for this data in R?
Here is my R code:
Group1 = rnorm(6, 7); Group2 = rnorm(6, 9)
Level = gl(n = 2, k = 6)
m = matrix(c(Group1, Group2, Level), nrow = 12, ncol = 2)
dimnames(m) <- list( DV = Group1, Level = Level)

Replace the dimnames(m) call with:
colnames(m) <- c("DV","Level")
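The error occurs because dimnames() expects a list with one component per dimension: row names of length 12 and column names of length 2. The original call passes the data vectors themselves, which have the wrong lengths. A minimal equivalent sketch using dimnames() directly:
# NULL leaves the row names unset; the second component names the 2 columns
dimnames(m) <- list(NULL, c("DV", "Level"))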

Related

Finding index of array of matrices, that is closest to each element of another matrix in R

I have an array Q of size nquantiles by nfeatures by nfeatures. Essentially, the slice Q[1,,] gives me the first quantile of my data across all nfeatures by nfeatures entries.
What I am interested in is using another matrix M (again of size nfeatures by nfeatures) that represents some other data, and asking in which quantile of Q each element of M lies.
What would be the quickest way to do this?
I reckon I could do a double for loop across all rows and columns of the matrix M and come up with a solution similar to this: Finding the closest index to a value in R
But doing this over all nfeatures x nfeatures values will be very inefficient. I am hoping that there might exist a vectorized way of approaching this problem, but I am at a loss as to how to approach it.
Here is a reproducible example of the slow, O(N^2) approach:
#Generate some data
set.seed(235)
data = rnorm(n = 100, mean = 0, sd = 1)
list_of_matrices = list(matrix(data = data[1:25], ncol = 5, nrow = 5),
                        matrix(data = data[26:50], ncol = 5, nrow = 5),
                        matrix(data = data[51:75], ncol = 5, nrow = 5),
                        matrix(data = data[76:100], ncol = 5, nrow = 5))
#Get the quantiles (5 quantiles here)
Q <- apply(simplify2array(list_of_matrices), 1:2, quantile, probs = seq(0, 1, length = 5))
#dim(Q)
#Q should have dims nquantiles by nfeatures by nfeatures
#Generate some other matrix M (true data)
M = matrix(data = rnorm(n = 25, mean = 0, sd = 1), nrow = 5, ncol = 5)
#Loop through rows and columns of M to find which index of the array matches up closest with element M[i,j]
results = matrix(data = NA, nrow = 5, ncol = 5)
for (i in 1:nrow(M)) {
  for (j in 1:ncol(M)) {
    true_value = M[i, j]
    #Subset Q to the ith and jth element (vector of nquantiles)
    quantiles = Q[, i, j]
    results[i, j] = which.min(abs(quantiles - true_value))
  }
}
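One possible vectorized approach, sketched with base R's sweep() and apply(): sweep() subtracts M[i, j] from the quantile vector Q[, i, j] for every cell at once, and apply() then picks the index of the nearest quantile.
#Subtract M from each quantile slice, then take the index of the smallest
#absolute difference along the quantile dimension
diffs <- sweep(Q, c(2, 3), M, FUN = "-")
results_vec <- apply(abs(diffs), c(2, 3), which.min)
#identical(results, results_vec) should be TRUE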

Applying a distance matrix to multiple data frames

I have 20 data frames of different lengths, but all the same number of columns. I would like to run some analyses, in this case a distance matrix using vegan, for each of these data frames. I have searched around and just figure I am missing a step somewhere.
The dummy data below uses 5 data frames, and I have been trying to use lapply.
df1<- matrix(data = c(1:100), nrow = 10, ncol = 10)
df2<- matrix(data = c(1:150), nrow = 15, ncol = 10)
df3<- matrix(data = c(1:50), nrow = 5, ncol = 10)
df4<- matrix(data = c(1:200), nrow = 20, ncol = 10)
df5<- matrix(data = c(1:100), nrow = 10, ncol = 10)
Y<- list(df1, df2, df3, df4, df5)
Y.dc <- lapply(Y, dist.ldc(Y, "chord"))
I have also tried just running it on the list directly, and I keep getting errors there too.
Y.dc<- dist.ldc(Y, "chord")
Ideally, I would like to not run 20 lines/chunks of code for each frame.
Eventually, I would also like to be able to generate nMDS plots, and run PERMANOVAs on each of the data frames all at once as well. Would I need to write/run a function in order to accomplish that?
A valid syntax:
Y.dc <- lapply(Y, dist.ldc, method = "chord")
(I assumed the function dist.ldc comes from the package adespatial, which I don't know.)
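For the nMDS and PERMANOVA follow-up, the same lapply pattern should extend. A sketch, assuming the resulting distance matrices are accepted by vegan's metaMDS() and adonis2(), and with groups as a hypothetical grouping factor you would supply yourself:
library(vegan)
## nMDS on each distance matrix
Y.mds <- lapply(Y.dc, metaMDS)
## PERMANOVA on each distance matrix ('groups' is hypothetical, not defined above)
# Y.perm <- lapply(Y.dc, function(d) adonis2(d ~ groups))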

How can I append spearman Rho stat to new object?

I am carrying out a number of Spearman's rank correlations and I want to produce a list of all the rho estimates automatically.
Here is some sample data:
A <- data.frame('Area' = c(4, 6, 5),
                'flow' = c(1, 1, 1))
B <- data.frame('Area' = c(6, 8, 4),
                'flow' = c(1, 2, 1))
files <- list(A, B)
frames <- list('A', 'B')
I currently have the following code that carries out a correlation for each data frame in the list:
lapply(files, function (x)
  cor.test(~flow + Area, data = x, method = 'spearman'))
However, what I would like to do is add another line to this to extract the Rho estimation for each correlation and append this to a new list.
How can I do this?
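One approach (a sketch): cor.test() returns an "htest" object whose estimate component holds rho, so sapply() can pull it out in the same pass:
## Extract the rho estimate from each correlation test
rhos <- sapply(files, function(x)
  cor.test(~flow + Area, data = x, method = 'spearman')$estimate)
rhos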

Achieving t random variables with each different df and ncp in R?

I'm trying to generate 5 random t variates using rt(), with each of the 5 having a particular df (respectively, from 1 to 5) and a particular ncp (respectively, seq(0, 1, l = 5)). So, 5 random t-variables each having a different df and a different ncp.
To achieve the above, I tried the below with no success. What would be efficient R code to achieve what I described?
vec.rt = Vectorize(function(n, df, ncp) rt(n, df, ncp), c("n", "df", "ncp"))
vec.rt(n = 5, df = 1:5, ncp = seq(0, 1, l = 5))
Or
mapply(FUN = rt, n = 5, df = 1:5, ncp = seq(0, 1, l = 5))
Notice for:
rt(n = 5, df = 1:5, ncp = seq(0, 1, l = 5))
R gives the following warning:
Warning message:
In if (is.na(ncp)) { :
  the condition has length > 1 and only the first element will be used
Rephrasing your question helps to find an answer: you want a sample of length 1 (n = 1) from each of 5 random variables, each having different parameters.
mapply(FUN = rt, n = 1, df = 1:5, ncp = seq(0, 1, l = 5))
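For the same reason, the Vectorize() wrapper from the question should also work once n = 1, since Vectorize() reduces to mapply() internally:
vec.rt(n = 1, df = 1:5, ncp = seq(0, 1, l = 5))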

ffbase: merge on columns X and Y and closest column Z

I would like to accomplish the following using ffdf: merge on columns X and Y and the closest time, and then merge on the closest column B. However, the procedure I know from smaller samples involves outer merges (as shown below). What is a way around this for a large sample that won't fit in memory (and probably wouldn't work with sqldf), using ffbase? If that is not possible, what would be the best library for this?
As a reproducible example:
library(ff)      ## provides as.ffdf
library(ffbase)  ## provides ffdfdply
set.seed(1)
df.ff <- as.ffdf(cbind(expand.grid(x = 1:3, y = 1:5), time = round(runif(15) * 30)))
to.merge.ff <- as.ffdf(data.frame(x = c(2, 2, 2, 3, 2), y = c(1, 1, 1, 5, 4),
                                  time = c(17, 12, 11.6, 22.5, 2),
                                  val = letters[1:5], stringsAsFactors = F))
I borrow the following example from @ChinmayPatil to highlight the similar procedure I would like to follow (R - merge dataframes on matching A, B and *closest* C?):
require(data.table)
set.seed(1)
df <- setDT(cbind(expand.grid(x = 1:3, y = 1:5), time = round(runif(15) * 30)))
to.merge <- setDT(data.frame(x = c(2, 2, 2, 3, 2), y = c(1, 1, 1, 5, 4), time = c(17, 12, 11.6, 22.5, 2), val = letters[1:5], stringsAsFactors = F))
## First do a left outer merge
A <- merge(to.merge,df, by = c('x','y'), all.x = T )
## Then calculate a diff column as such
A$diff <- abs(A$time.x - A$time.y)
## Then take the row index of the minimum distance per (x, y) group
A[ , .I[which.min(diff)] , by = c('x', 'y' ) ]
Given that my question got so few views and no answers, I will describe the approach I came up with to solve this problem, in the hope that someone might find it useful (or even as a reminder for my future self):
To me, the most difficult aspect of performing this match on one column and then a nearest match on another column was that I kept thinking an outer join (as described in the post) was necessary. The solution is pretty simple using data.table and ffdfdply. For the purpose of illustration, assume there is one large ffdf object and one regular data.table that fits in memory:
### Large ffdf object (lengths chosen so the shorter columns recycle evenly)
A <- as.ffdf(data.table(dates.A = seq.Date(as.Date('2008-01-01'), as.Date('2008-01-31'), by = '10 days'),
                        letters.A = LETTERS[1:4], value.A = runif(4)))
### Small data.table that fits in memory
B <- data.table(date.B = seq.Date(as.Date('2008-01-01'), as.Date('2008-01-04'), by = 'days'),
                letters.B = LETTERS[1:4], value.B = runif(4))
Then you can simply define a function that does the merging using data.table and roll = 'nearest':
merge.ff <- function(x){
  setDT(x)
  ## create shared key columns on both tables
  x[, ':=' (dates.merge = dates.A, letters.merge = letters.A)]
  B[, ':=' (dates.merge = date.B, letters.merge = letters.B)]
  setkeyv(x, c('letters.merge', 'dates.merge'))
  setkeyv(B, c('letters.merge', 'dates.merge'))
  ## rolling join: exact match on letters, nearest match on dates
  as.data.frame(B[x, roll = 'nearest'])
}
and apply it to A:
result <- ffdfdply(A, split = A$dates.A, FUN = merge.ff)
The key was essentially using data.table's rolling join inside the function passed to ffdfdply. It seemed to be quite efficient.
