I have two matrices. One contains the quantities of each product a client bought (quantitymatrix); the other contains the unit prices at which the client bought those products (pricematrix).
How can I run a linear regression with the matrices so that I obtain the slope for each product?
Your data:
quantity <- matrix(c(4,2,6, 9,4,3, 1,1,2, 3,1,5), 3, 4)
price <- matrix(c(1,0.5,8, 4.2,1.2,2, 2,5,2, 1,2.5,1), 3, 4)
First, transform your two matrices into a single data frame. (You can avoid this step if you want, but I think it makes things much more straightforward):
df <- data.frame(quantity = as.numeric(quantity),
                 price = as.numeric(price),
                 product = rep(1:4, each = 3),
                 ID = 1:3)
Then, run the linear models by groups:
lms <- by(df, df$product, FUN = function(x) lm(price~quantity, data = x))
And get the slopes:
slopes <- sapply(lms, coef)[2,]
If, however, you want to keep the original matrices as they are, you can run a simple loop:
slopes <- numeric(ncol(price))
for (i in 1:ncol(price)) {
  model <- lm(price[, i] ~ quantity[, i])
  slopes[i] <- coef(model)[2]
}
NB: this solution assumes that the two matrices have identical dimensions.
And if you want to avoid loops, the following solution may be faster:
f <- function(x, y) coef(lm(y ~ x))[2]                         # slope of y regressed on x
l <- function(m) lapply(seq_len(ncol(m)), function(i) m[, i])  # split a matrix into a list of its columns
mapply(f, l(quantity), l(price))
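With the sample data above, the approaches should return the same four slopes. A quick sanity check, as a sketch (it assumes quantity, price, f, l and lms from above are still in the workspace; the names slopes_by and slopes_map are only for illustration):
slopes_by <- sapply(lms, coef)[2, ]               # slopes from the by() approach
slopes_map <- mapply(f, l(quantity), l(price))    # slopes from the mapply approach
all.equal(unname(slopes_by), unname(slopes_map))  # should be TRUE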
Related
I have two data matrices, both with the same dimensions. I want to extract the same series of columns from each, treat each series as a vector, and then calculate different error measures, for example the mean absolute error (mae), the mean percentage error (mape) and the root mean square error (rmse). My data matrices are quite high-dimensional, so I try to explain with an example and calculate these errors manually as:
mat1<- matrix(6:75,ncol=10,byrow=T)
mat2<- matrix(30:99,ncol=10,byrow=T)
mat1_seri1 <- as.vector(mat1[,c(1+(0:4)*2)])
mat1_seri2<- as.vector(mat1[,c(2+(0:4)*2)])
mat2_seri1 <- as.vector(mat2[,c(1+(0:4)*2)])
mat2_seri2 <- as.vector(mat2[,c(2+(0:4)*2)])
mae1<-mean(abs(mat1_seri1-mat2_seri1))
mae2<-mean(abs(mat1_seri2-mat2_seri2))
For mape
mape1<- mean(abs(mat1_seri1-mat2_seri1)/mat1_seri1)*100
mape2<- mean(abs(mat1_seri2-mat2_seri2)/mat1_seri2)*100
Similarly, I calculate rmse from its formula. As I have large data matrices, doing this manually is quite time-consuming. Is it possible to do this with a loop which outputs the error terms (mae, mape, rmse) for each series separately?
I'm not sure if this is what you are looking for, but here is a function that could automate the process, maybe there is also a better way:
fn <- function(m1, m2) {
  stopifnot(dim(m1) == dim(m2))
  mat1_seri1 <- as.vector(m1[, (1:ncol(m1))[(1:ncol(m1)) %% 2 != 0]])  # odd-numbered columns of m1
  mat1_seri2 <- as.vector(m1[, (1:ncol(m1))[!(1:ncol(m1)) %% 2]])      # even-numbered columns of m1
  mat2_seri1 <- as.vector(m2[, (1:ncol(m2))[(1:ncol(m2)) %% 2 != 0]])  # odd-numbered columns of m2
  mat2_seri2 <- as.vector(m2[, (1:ncol(m2))[!(1:ncol(m2)) %% 2]])      # even-numbered columns of m2
  mae1 <- mean(abs(mat1_seri1 - mat2_seri1))
  mae2 <- mean(abs(mat1_seri2 - mat2_seri2))
  mape1 <- mean(abs(mat1_seri1 - mat2_seri1) / mat1_seri1) * 100
  mape2 <- mean(abs(mat1_seri2 - mat2_seri2) / mat1_seri2) * 100
  setNames(as.data.frame(matrix(c(mae1, mae2, mape1, mape2), ncol = 4)),
           c("mae1", "mae2", "mape1", "mape2"))
}
fn(mat1, mat2)
  mae1 mae2    mape1    mape2
1   24   24 92.62581 86.89572
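The question also asks for rmse; the same pattern extends naturally. A minimal sketch, using the seri vectors already built inside fn (you could add these two lines to the function and two more columns to its output):
rmse1 <- sqrt(mean((mat1_seri1 - mat2_seri1)^2))  # root mean square error, series 1
rmse2 <- sqrt(mean((mat1_seri2 - mat2_seri2)^2))  # root mean square error, series 2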
I have a list in R which looks something like this:
b0 <- 5; b1 <- 2
f <- function(x) b0 + b1*x
Nsim <- 100
my.list <- vector("list", Nsim)
for(i in 1:Nsim){
  x <- rep(0, 1000)
  y <- x
  y[1] <- f(x[1])
  for(j in 2:1000){
    x[j] <- x[j-1] + rnorm(1, 0, 0.1)
    y[j] <- f(x[j])
  }
  my.list[[i]]$x <- x
  my.list[[i]]$y <- y
}
In reality, f is the result of an optimisation routine, x tracks the input value over time and y contains the function values that are generated. So in essence, I have Nsim time series. I want to plot metrics of these time series over time by averaging over the index i, for instance the average performance of the algorithm over time.
At the moment I'm doing this with a bespoke function for each metric I want to calculate (e.g. the mean squared error of x from the true value of x, another for generating error bars, and so on). I want to use something like lapply to average over i so I can visualise how x and y evolve over time, but that doesn't do the right thing.
What I want to output is a pointwise summary of the results. As an analogy, if my.list[[i]]$x were instead stored as a matrix, I could take colMeans() to see the average value of x over "time".
Is there a function/package which is good for working with lists of lists?
At least for what has been presented, there is no real reason to use a list of lists. The x's are all the same and equal to 1, 2, 3, ..., so this could be represented by a matrix with the x component being implicit or represented by row names, or we could represent this as a ts object or zoo object. In the last two cases, if X is the object, time(X) is the common x.
mat <- sapply(my.list, "[[", "y")
ts(mat)
library(zoo); zoo(mat)
Alternately, get rid of my.list and construct one of these directly in the code.
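Either way, once the simulations sit in a matrix like mat above (rows are time points, columns are simulations), the pointwise summaries asked about become row-wise operations. A minimal sketch, assuming mat from the code above:
rowMeans(mat)      # average y at each time point, across the Nsim simulations
apply(mat, 1, sd)  # pointwise standard deviation, e.g. for error bars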
I'm using the mice package to create multiple imputations. I want to create a correlation matrix (and a matrix of p-values for the correlation coefficients). I use miceadds::micombine.cor to do this, but it gives a data frame with the variables in the first two columns and then a number of columns containing r, p, t-values, and the like.
I'm looking for a way to turn this data frame into a "good old" matrix with the correlation coefficient between x and y in position [x,y], and a matrix with p-values. Does anyone have an easy way to do this?
Here's some code to reproduce:
library(mice)
library(missForest)  # for prodNA()
data <- mtcars
mt.mis <- prodNA(mtcars, noNA = 0.1)
imputed <- mice(mt.mis, m = 5, maxit = 5, method = "pmm")
correlations <- miceadds::micombine.cor(mi.res = imputed, variables = c(1:3))
What I'm looking for is something like the output from cor(mtcars). Who can help?
I ended up writing my own function. It can probably be done much more efficiently, but this is what I made.
cormatrix <- function(r, N){
  x <- 1
  cormatrix <- matrix(nrow = N, ncol = N) # create empty matrix
  for (i in 1:N) {
    for (j in i:N) {
      if (j > i) {
        cormatrix[i, j] <- r[x]
        cormatrix[j, i] <- r[x]
        x <- x + 1
      }
    }
  }
  diag(cormatrix) <- 1
  cormatrix
}
You can call it with the r column of the micombine.cor output and the number of variables in your model as arguments, so for example cormatrix(correlations$r, 3) for the three variables selected above.
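Since micombine.cor also returns the pooled p-values (in a column named p in current miceadds versions; worth checking against your output), the same helper can be reused for the p-value matrix. A small sketch; the diagonal is not a meaningful p-value, so it is set to NA here:
rmat <- cormatrix(correlations$r, 3)  # correlation matrix
pmat <- cormatrix(correlations$p, 3)  # p-value matrix, filled the same way
diag(pmat) <- NA                      # no p-value for a variable with itself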
I am calculating an index that needs a matrix of species x sites and a matrix of cophenetic distances between species (generated from a phylogenetic tree). This block of code gives the objects needed to calculate it (site and tree):
library(ape)#phylogenetic tree
library(picante)#ses.mpd calculation
library(purrr)#list of distance matrices
#Sample matrix
set.seed(0000)
site <- matrix(data = sample(c(0, 1), 15, prob = c(0.4, 0.6), replace = T), ncol = 5, nrow = 3)
colnames(site) <- c("t1", "t2", "t3", "t4", "t5")
rownames(site) <- c("samp1", "samp2", "samp3")
#Sample phylogenetic tree
tree <- rcoal(5)
#Reordering species names in the community to match the order in the tree
site <- site[, tree$tip.label]
From the output above, I need to calculate ses.mpd 100 times using the same community matrix but changing the distance matrix (100 of them, stored in a list of 4 GB). I used for loops to calculate ses.mpd, but I realised that it would take more than a month to get the output! I have used lapply before, but I do not know how to use it this time, nor how to use purrr::map. I have seen similar questions here: Apply a function to list of matrices and here: Calculate function for all row combinations of two matrices in R, but none of them actually resembles my problem. Here is the code I used with a for loop (updated by #Parfait). I need any way faster than a loop to get the same output. Any suggestion? Thank you very much!
# Empty list for the resolved trees
many.trees <- list()
# Creates 5 resolved trees with the function ape::multi2di
for(i in 1:5){
  many.trees[[i]] <- multi2di(tree)
}
# For each resolved tree, creates a distance matrix
many.dists <- map(many.trees, cophenetic)
# ses.mpd using each of the distance matrices above
out <- list()
for(i in 1:5){
  out[[i]] <- ses.mpd(site, many.dists[[i]]) # Thanks, #Parfait.
}
Consider an apply-family solution for more compact code; it avoids the bookkeeping of initializing empty lists and assigning into them.
# Creates 5 resolved trees with the function ape::multi2di
many.trees <- replicate(5, multi2di(tree), simplify = FALSE)
# For each resolved tree, creates a distance matrix
many.dists <- lapply(many.trees, cophenetic)
# ses.mpd using each of the distance matrices above
out_nested <- lapply(many.dists, function(d) ses.mpd(site, d))
To retain names (if the inputs carry them), change lapply to sapply with simplify = FALSE (equivalent to lapply, but it keeps USE.NAMES = TRUE). The results would then be named lists.
# For each resolved tree, creates a distance matrix
many.dists <- sapply(many.trees, cophenetic, simplify = FALSE)
# ses.mpd using each of the distance matrices above
out <- sapply(many.dists, function(d) ses.mpd(site, d), simplify = FALSE)
out$first_name
out$second_name
out$third_name
out$fourth_name
out$fifth_name
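If the real bottleneck is running ses.mpd on 100 large distance matrices rather than the looping construct itself, the same call can also be spread over several cores with the base parallel package. A sketch, not a drop-in guarantee: mc.cores = 4 is an arbitrary choice, and mclapply relies on forking, so on Windows you would need parLapply with a cluster instead:
library(parallel)
# run ses.mpd on each distance matrix on a separate forked worker
out_par <- mclapply(many.dists, function(d) ses.mpd(site, d), mc.cores = 4)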
I have a (k x n) matrix. I have initially managed to linearly regress (using the lm function) column 1 with each and every other column and extracted only the coefficients.
fore.choose <- matrix(0, 1, NCOL(assets))
for(i in seq(1, NCOL(assets), 1))
{
  abc <- lm(assets[,1] ~ assets[,i])$coefficients
  fore.choose[1, i] <- abc[2:length(abc)]
}
The coefficients are placed in the fore.choose matrix.
What I now need to do is to linearly regress column 2 with each and every other column, and then column 3 and so on and so forth and extract only the coefficients.
The output will be a square matrix of OLS univariate coefficients. Kind of similar to a correlation matrix, but it is the beta coefficients I am interested in.
fore.choose <- matrix(0, 1, NCOL(assets))
will initially need to become
fore.choose <- matrix(0, NCOL(assets), NCOL(assets))
I'd just compute the coefficients directly from the correlation matrix, using beta = cor(x,y)*sd(x)/sd(y) (the slope from regressing x on y), like this:
# set up some sample data
set.seed(1)
d <- matrix(rnorm(50), ncol=5)
# get the coefficients: entry [i, j] is the slope from regressing column i on column j
s <- apply(d, 2, sd)
cor(d) * outer(s, s, "/")
You could also use lsfit to get the coefficients of one term on all the others at once and then only have one loop to do:
sapply(1:ncol(d), function(i) {
  coef(lsfit(d[,i], d))[2,]
})
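Both versions should produce the same matrix of slopes, up to dimension names. A quick check as a sketch, reusing d and s from above (the names beta_cor and beta_lsfit are only for illustration):
beta_cor <- cor(d) * outer(s, s, "/")
beta_lsfit <- sapply(1:ncol(d), function(i) coef(lsfit(d[,i], d))[2,])
all.equal(beta_cor, beta_lsfit, check.attributes = FALSE)  # should be TRUE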
I'm sure there must be a more elegant way than nested loops:
fore.choose <- matrix(NA, NCOL(assets), NCOL(assets))
abc <- NULL
for(i in seq_len(ncol(assets))){    # loop over "dependent" columns
  for(j in seq_len(ncol(assets))){  # loop over "independent" columns
    abc <- lm(assets[,i] ~ assets[,j])$coefficients
    fore.choose[i, j] <- abc[-1]
  }
}
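One way to tighten this while keeping lm() is to let sapply do the looping. A sketch, assuming assets is a numeric matrix; result[i, j] is again the slope from regressing column i on column j:
fore.choose <- sapply(seq_len(ncol(assets)), function(j) {  # j: "independent" column
  sapply(seq_len(ncol(assets)), function(i) {               # i: "dependent" column
    coef(lm(assets[, i] ~ assets[, j]))[2]
  })
})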