R: multicollinearity issues using glib(), Bayesian Model Averaging (BMA package)
I am experiencing difficulties estimating a BMA model via glib() due to multicollinearity issues, even though I have clearly specified which columns to use. Please find the details below.
The data I'll be using for the estimation via Bayesian Model Averaging:
Cij <- c(357848,766940,610542,482940,527326,574398,146342,139950,227229,67948,
352118,884021,933894,1183289,445745,320996,527804,266172,425046,
290507,1001799,926219,1016654,750816,146923,495992,280405,
310608,1108250,776189,1562400,272482,352053,206286,
443160,693190,991983,769488,504851,470639,
396132,937085,847498,805037,705960,
440832,847631,1131398,1063269,
359480,1061648,1443370,
376686,986608,
344014)
n <- length(Cij)
TT <- trunc(sqrt(2*n))
i <- rep(1:TT, TT:1) # row numbers: year of origin
j <- sequence(TT:1)  # column numbers: year of development
k <- i + j - 1       # diagonal numbers: year of payment
#Since k=i+j-1, we have to leave out another dummy in order to avoid multicollinearity
k <- ifelse(k == 2, 1, k)
I want to evaluate the effect of i and j both as levels and as factors, though of course not in the same model. Since i and j can each be included as a factor, as a level, or excluded, and k can be included as a level or excluded, there are 18 (3x3x2) models in total. This brings us to the following data frame:
X <- data.frame(Cij,i.factor=as.factor(i),j.factor=as.factor(j),k,i,j)
X <- model.matrix(Cij ~ -1 + i.factor + j.factor + k + i + j,X)
X <- as.data.frame(X[,-1])
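A quick way to see the problem before calling glib() (my addition, reconstructing the design from the code above with qr()): because i and j are each linear combinations of a constant plus their own dummies, the columns i, j and the two blocks of dummies are jointly dependent, so the full X is rank-deficient even though each individual model may be fine.

```r
# Rebuild the triangle's design matrix; no Cij values are needed for this check
TT <- 10
i <- rep(1:TT, TT:1)
j <- sequence(TT:1)
k <- ifelse(i + j - 1 == 2, 1, i + j - 1)
Xm <- model.matrix(~ -1 + factor(i) + factor(j) + k + i + j)[, -1]
# rank is strictly less than the number of columns: the full matrix is singular
c(rank = qr(Xm)$rank, columns = ncol(Xm))
```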
Next, the following declaration specifies which variables to consider in each of the 18 models. As far as I can tell, no linear dependence exists within any of these specifications.
model.set <- rbind(
c(rep(0,9),rep(0,9),0,0,0),
c(rep(0,9),rep(0,9),0,1,0),
c(rep(0,9),rep(0,9),0,0,1),
c(rep(0,9),rep(0,9),1,0,0),
c(rep(1,9),rep(0,9),0,0,0),
c(rep(0,9),rep(1,9),0,0,0),
c(rep(0,9),rep(0,9),0,1,1),
c(rep(0,9),rep(0,9),1,1,0),
c(rep(0,9),rep(1,9),0,1,0),
c(rep(0,9),rep(0,9),1,0,1),
c(rep(1,9),rep(0,9),0,0,1),
c(rep(1,9),rep(0,9),1,0,0),
c(rep(0,9),rep(1,9),1,0,0),
c(rep(1,9),rep(1,9),0,0,0),
c(rep(0,9),rep(0,9),1,1,1),
c(rep(0,9),rep(1,9),1,1,0),
c(rep(1,9),rep(0,9),1,0,1),
c(rep(1,9),rep(1,9),1,0,0))
Then I call the glib() function, telling it to select the specified columns from X according to model.set.
library(BMA)
model.glib <- glib(X,Cij,error="poisson", link="log",models=model.set)
which results in the error
Error in glim(x, y, n, error = error, link = link, scale = scale) : X matrix is not full rank
The function first checks whether the full matrix X is of full column rank before it selects the columns specified by model.set. How do I circumvent this, or is there another way to include all 18 models in the glib() call?
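One possible workaround (my sketch, not from the original thread): since glib() checks the rank of the full X before it ever subsets, fit each candidate model separately with glm() instead and combine the fits yourself via BIC-based approximate posterior model probabilities. A self-contained illustration with a synthetic triangle and two of the 18 candidate models:

```r
set.seed(1)
TT <- 10
i <- rep(1:TT, TT:1)
j <- sequence(TT:1)
k <- ifelse(i + j - 1 == 2, 1, i + j - 1)
Cij.syn <- rpois(length(i), exp(8 + 0.1 * i + 0.05 * j))  # synthetic claims data
X <- as.data.frame(model.matrix(~ -1 + factor(i) + factor(j) + k + i + j)[, -1])
names(X) <- make.names(names(X), unique = TRUE)  # syntactic names for the formula
model.set <- rbind(c(rep(0, 18), 0, 1, 0),   # i as a level only
                   c(rep(0, 18), 1, 1, 1))   # k, i and j as levels
# fit each specification on its own (full-rank) column subset
fits <- apply(model.set, 1, function(m) {
  dat <- cbind(Cij.syn = Cij.syn, X[, which(m == 1), drop = FALSE])
  glm(Cij.syn ~ ., data = dat, family = poisson(link = "log"))
})
bic <- sapply(fits, BIC)
post <- exp(-(bic - min(bic)) / 2)
post <- post / sum(post)   # approximate posterior model probabilities
```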
Thank you in advance.
Related
R - Extract coefficients from a factor of lm object using conditions
I have fitted an lm with the following code:

Eq1_females <- lm(earnings ~ event_time + factor(age) + factor(year) - 1, data = females)

Now I would like to calculate a predicted value based on the factor coefficients, but this predicted value depends on certain conditions in the data. I therefore create a list of the coefficients and want to extract the factor coefficient for age = k and year = y, but it keeps returning 0 or NA. However, if I input a number (e.g. 34) instead of k, it does give the right value. I tried two different approaches:

estimates <- coef(Eq1_females)
k <- females$age[1]
Eq1_females$coefficients["factor(age)k"]

and

estimates <- coef(Eq1_females)
k <- females$age[1]
beta_age <- estimates[grep("^factor\\(age\\)k", names(estimates))]

(note that in the end, I would like to loop over different rows of females$age). What does work is

beta_age <- estimates[grep("^factor\\(age\\)34", names(estimates))]

Could anyone tell me if there is a way to get the code to work with k in the beta_age formula? Thanks a lot in advance!
Answer: paste the right number onto the regex pattern using paste0:

beta <- estimates[grep(paste0("^factor\\(Petal.Width\\)", k), names(estimates))]

This returns:

factor(Petal.Width)0.2
              3.764947

Rationale: in "^factor\\(age\\)k", the k is treated as the literal character k, whereas you are referring to the variable k. By using paste(..., sep = "") or paste0(...) you can simply paste the value of k onto the base pattern.
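A small alternative to the regex route (my note, using the built-in iris data that the answer's Petal.Width example apparently comes from): the coefficient names are fully determined by the factor level, so you can index by the exact name with paste0 and skip grep entirely.

```r
fit <- lm(Sepal.Length ~ factor(Petal.Width) - 1, data = iris)
estimates <- coef(fit)
k <- iris$Petal.Width[1]                          # 0.2 for the first row
beta <- estimates[paste0("factor(Petal.Width)", k)]  # exact-name lookup
names(beta)   # "factor(Petal.Width)0.2"
```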
R function to find which of 3 variables correlates most with another value?
I am conducting a study that analyzes speakers' production by measuring their average F2 values. What I need is an R function that tells me whether these F2 values are related to 3 other variables and, if so, which relationship is strongest. These variables have been coded as 1, 2 or 3 for things like "yes"/"no" answers, or whether responses are positive, neutral or negative (1, 2, 3 respectively). Is there a particular technique or R function/test we can use to approach this problem? I've considered ANOVA or a t-test, but am unsure whether they would give me what I need.
A quick solution might look like this. Here, the cor function is used; read its help page (?cor) to understand what is calculated. By default, the Pearson correlation coefficient is used. The function below returns the variable with the highest Pearson correlation with the reference variable.

set.seed(111)
x <- rnorm(100)
y <- rnorm(100)
z <- rnorm(100)
ref <- 0.5*x + 0.5*rnorm(100)

find_max_corr <- function(vars, ref){
  val <- sapply(vars, cor, y = ref)
  val[which.max(val)]
}
find_max_corr(list('x' = x, 'y' = y, 'z' = z), ref)
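One caveat worth adding (my suggestion, not from the original answer): since the three predictors are categorical codes (1/2/3), Pearson correlation treats them as numeric, which may not be what is wanted. An alternative is a one-way ANOVA of F2 on each predictor treated as a factor, comparing p-values; the variable names below are hypothetical stand-ins for the asker's data.

```r
set.seed(111)
f2 <- rnorm(90)                          # stand-in for the measured F2 values
v1 <- sample(1:3, 90, replace = TRUE)    # three coded (1/2/3) variables
v2 <- sample(1:3, 90, replace = TRUE)
v3 <- sample(1:3, 90, replace = TRUE)
# one-way ANOVA per predictor; extract the p-value of the factor term
pvals <- sapply(list(v1 = v1, v2 = v2, v3 = v3), function(v)
  summary(aov(f2 ~ factor(v)))[[1]][["Pr(>F)"]][1])
names(which.min(pvals))   # the variable with the strongest association
```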
Permutations and combinations of all the columns in R
I want to check all the permutations and combinations of columns while selecting models in R. I have 8 columns in my data set, and the piece of code below lets me check some of the models, but not all. Models like columns 1+6 or 1+2+5 will not be covered by this loop. Is there a better way to accomplish this?

best_model <- rep(0, 3) # store the best model in this array
for(i in 1:8){
  for(j in 1:8){
    for(x in k){
      diabetes_prediction <- knn(train = diabetes_training[, i:j],
                                 test = diabetes_test[, i:j],
                                 cl = diabetes_train_labels, k = x)
      accuracy[x] <- 100 * sum(diabetes_test_labels == diabetes_prediction) / 183
      if(best_model[1] < accuracy[x]){
        best_model[1] <- accuracy[x]
        best_model[2] <- i
        best_model[3] <- j
      }
    }
  }
}
Well, this answer isn't complete, but maybe it'll get you started. You want to be able to subset by all possible subsets of columns, so instead of i:j for some i and j, you want to be able to subset by c(1,6) or c(1,2,5), etc. Using the sets package, you can form the power set (set of all subsets) of a set. That's the easy part. I'm new to R, so the hard part for me is understanding the difference between sets, lists, vectors, etc. I'm used to Mathematica, in which they're all the same.

library(sets)
my.set <- 1:8 # you want column indices from 1 to 8
my.power.set <- set_power(my.set) # the set of all subsets of those indices
my.names <- c("a") # I don't know how to index into sets, so I created names (numbers of type character)
for(i in 1:length(my.power.set)) { my.names[i] <- as.character(i) }
names(my.power.set) <- my.names
my.indices <- vector("list", length(my.power.set) - 1)
for(i in 2:length(my.power.set)) { my.indices[i-1] <- as.vector(my.power.set[[my.names[i]]]) } # this is the line I couldn't get to work

I wanted to create a list of lists called my.indices, so that my.indices[i] was a subset of {1,2,...,8} that could be used in place of where you have i:j. Then your for loop would have to run over 1:length(my.indices). But alas, I have been spoiled by Mathematica, and thus cannot decipher the incredibly complicated world of R data types.
Solved it; below is the code with explanatory comments. (Note two fixes to the original posting: the loop range must be 2:(2^n - 1), since 2:2^n-1 parses as (2:2^n)-1 and would include i = 1, and the inner loop variable is renamed to avoid shadowing i.)

# find the best model for this data
number_of_columns_to_model <- ncol(diabetes_training) - 1
best_model <- c()
best_model_accuracy <- 0
for(i in 2:(2^number_of_columns_to_model - 1)){
  # i = 1 is skipped, as it doesn't represent any model
  # convert i to binary, e.g. i = 5 gives combination = 0 0 0 0 0 1 0 1
  combination <- as.binary(i, n = number_of_columns_to_model) # from the binaryLogic package
  model <- c()
  for(b in 1:length(combination)){
    # choose which columns to consider depending on the combination
    if(combination[b]) model <- c(model, b)
  }
  for(x in k){
    # for the columns selected by model, find the accuracy for k = 1:27
    diabetes_prediction <- knn(train = diabetes_training[, model, with = FALSE],
                               test = diabetes_test[, model, with = FALSE],
                               cl = diabetes_train_labels, k = x)
    accuracy[x] <- 100 * sum(diabetes_test_labels == diabetes_prediction) / length(diabetes_test_labels)
    if(best_model_accuracy < accuracy[x]){
      best_model_accuracy <- accuracy[x]
      best_model <- model
      print(model)
    }
  }
}
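An alternative enumeration (my sketch): base R's combn() generates every subset of a given size directly, which avoids the binary-encoding step and the binaryLogic dependency; each element of all_subsets can then be used as the column index set for one candidate model.

```r
cols <- 1:8
# all non-empty subsets of the 8 column indices, grouped by subset size
all_subsets <- unlist(lapply(seq_along(cols),
                             function(m) combn(cols, m, simplify = FALSE)),
                      recursive = FALSE)
length(all_subsets)   # 2^8 - 1 = 255 non-empty subsets
all_subsets[[10]]     # e.g. one of the two-column models
```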
I trained with Pima.tr and tested with Pima.te. KNN accuracy was 78% for pre-processed predictors and 80% without pre-processing (because of the large influence of some variables). The 80% performance is on par with a logistic regression model, and you don't need to preprocess variables for logistic regression. Random forest and logistic regression provide a hint on which variables to drop, so you don't need to evaluate all possible combinations. Another way is to look at a matrix scatter plot. You get a sense that there is a difference between type 0 and type 1 for npreg, glu, bmi and age. You also notice the highly skewed ped and age, and that there may be an anomalous data point between skin and the other variables (you may need to remove that observation before going further). A box plot of skin vs. type shows that for type Yes an extreme outlier exists (try removing it). You also notice that most of the boxes for the Yes type are higher than for the No type, so these variables may add predictive power to the model (you can confirm this through a Wilcoxon rank-sum test). The high correlation between skin and bmi means that you can use one or the other, or an interaction of both. Another approach to reducing the number of predictors is to use PCA.
How to work with binary constraints in linear optimization?
I have two input matrices, dt (10x3) and wt (3x3), that I need to use to find the optimal decision matrix Par (10x3) so as to maximize an objective function. The R code below gives some direction on the problem (with sample inputs):

#Input matrices
dt <- matrix(runif(30), 10, 3)
wt <- matrix(c(1,0,0,0,2,0,0,0,1), 3, 3) # weights

#Objective function
Obj <- function(Par) {
  P <- matrix(Par, nrow = 10, byrow = FALSE) # reshape
  X <- t((dt %*% wt)[,1]) %*% P[,1]
  Y <- t((dt %*% wt)[,2]) %*% P[,2]
  Z <- t((dt %*% wt)[,3]) %*% P[,3]
  as.numeric(X + Y + Z) # maximize
}

Now I am struggling to apply the following constraints to the problem:

1) The matrix Par can only contain binary values (0 or 1).
2) rowSums(Par) = 1 (a row can only have a 1 in one of the three columns).
3) sum(Par[,1]) <= 5, sum(Par[,2]) <= 6, and sum(Par[,3]) <= 4.
4) X/(X+Y+Z) < 0.35 and Y/(X+Y+Z) < 0.4 (X, Y, Z as defined in the objective function).

I tried coding the constraints in constrOptim, but am not sure how to input binary and integer constraints. I am reading up on lpSolve, but can't figure it out. Any help much appreciated. Thanks!
I believe this is indeed a MIP, so there are no issues with convexity. If I am correct, the model is a linear mixed-integer program (the original answer presents the algebraic formulation as an image, not reproduced here). This model can easily be transcribed into R. Note that LP/MIP solvers do not use functions for the objective and constraints (as opposed to NLP solvers); in R one typically builds up matrices of LP coefficients. Note: I had to make the limits on the column sums much larger (I used 50, 60, 40).
Based on Erwin's response, I am able to formulate the model using lpSolveAPI in R (make.lp, set.type, add.constraint are from that package). However, I am still struggling to add the final constraint to the model (the 4th constraint in my question above). Here's what I have so far:

#Input dimensions
r <- 10
c <- 3

#Input matrices
dt <- matrix(runif(r*c), r, c)
wt <- matrix(c(1,0,0,0,2,0,0,0,1), 3, 3) # weights

#Column limits
c.limit <- c(60, 50, 70)

#Create structure for lpSolveAPI
ncol <- r * c
lp.create <- make.lp(ncol = ncol)
set.type(lp.create, columns = 1:ncol, type = "binary")

#Create objective values
obj.vals <- as.vector(t(dt %*% wt))
set.objfn(lp.create, obj.vals)
lp.control(lp.create, sense = 'max')

#Constraints: sum of parameters in every row (rowSums) <= 1
for (i in 1:r){
  add.constraint(lp.create, xt = c(1,1,1), indices = c(3*i-2, 3*i-1, 3*i), rhs = 1, type = "<=")
}

#Constraints: sum of parameters in every column (colSums) <= column limit (defined above)
for (i in 1:c){
  add.constraint(lp.create, xt = rep(1, r), indices = seq(i, ncol, by = c), rhs = c.limit[i], type = "<=")
}

#Constraint on the column objectives (t((dt %*% wt)[,i]) %*% P[,i] <= limits defined in the problem)
#NOT SURE HOW TO APPLY A CONSTRAINT THAT DEPENDS ON THE OBJECTIVE FUNCTION

solve(lp.create)
get.objective(lp.create) # 20
final.par <- matrix(get.variables(lp.create), ncol = c, byrow = TRUE) # reshape

Any help that can get me to the finish line is much appreciated :) Thanks
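The missing 4th constraint can in fact be linearized (my derivation, not from the thread): X/(X+Y+Z) < 0.35 is equivalent to 0.65*X - 0.35*Y - 0.35*Z < 0 (assuming X+Y+Z > 0), and X, Y, Z are themselves linear in the binary variables, so it becomes just another row of LP coefficients (similarly, 0.6*Y - 0.4*X - 0.4*Z < 0 for the second ratio). A sketch that builds the coefficient vector in the same row-major variable order as obj.vals above:

```r
set.seed(42)
r <- 10; c <- 3
dt <- matrix(runif(r * c), r, c)
wt <- matrix(c(1, 0, 0, 0, 2, 0, 0, 0, 1), 3, 3)
v <- dt %*% wt                    # v[, m] holds the coefficients of P[, m]
# spread each column's coefficients onto its own slots in the flattened,
# row-major variable vector (variable order: row 1 cols 1..3, row 2 cols 1..3, ...)
coefX <- as.vector(t(cbind(v[, 1], 0, 0)))   # nonzero only on column-1 variables
coefY <- as.vector(t(cbind(0, v[, 2], 0)))
coefZ <- as.vector(t(cbind(0, 0, v[, 3])))
xt4 <- 0.65 * coefX - 0.35 * coefY - 0.35 * coefZ
# then, with the lp.create object from above:
# add.constraint(lp.create, xt = xt4, rhs = 0, type = "<=")
# (lpSolveAPI has no strict "<"; "<=" with rhs 0 is the usual approximation)
```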
R: interaction between continuous and categorical vars in 'isat' regression ('gets' package)
I want to calculate the differential response of y to x (continuous) depending on the categorical variable z. In the standard lm setup this would be:

lm(y ~ x:z)

However, I want to do this while allowing for impulse indicator saturation (IIS) in the 'gets' package. The following syntax produces an error:

isat(y, mxreg = x:z, iis = TRUE)

The error message is of the form:

Error in solve.qr(out, tol = tol, LAPACK = LAPACK) : singular matrix 'a' in 'solve'
1: In x:z : numerical expression has 96 elements: only the first used
2: In x:z : numerical expression has 96 elements: only the first used

How should I modify the syntax? Thank you!
At the moment, alas, isat doesn't provide the same functionality as lm for categorical/character variables, nor for using * and :. We hope to address that in a future release. In the meantime you'll have to create distinct variables in your dataset representing the interaction. I guess something like the following:

library(gets)
N <- 100
x <- rnorm(N)
z <- c(rep("A", N/4), rep("B", N/4), rep("C", N/4), rep("D", N/4))
e <- rnorm(N)
y <- 0.5*x*as.numeric(z=="A") + 1.5*x*as.numeric(z=="B") - 0.75*x*as.numeric(z=="C") + 5*x*as.numeric(z=="D") + e

lm.reg <- lm(y ~ x:z)
arx.reg.0 <- arx(y, mxreg = x:z)

data <- data.frame(y, x, z, stringsAsFactors = FALSE)
for(i in z[duplicated(z) == FALSE]) {
  data[[paste("Zx", i, sep = ".")]] <- data$x * as.numeric(data$z == i)
}

arx.reg.1 <- arx(data$y, mxreg = data[, c("x", "Zx.A", "Zx.B", "Zx.C")])
isat.1 <- isat(data$y, mc = TRUE, mxreg = data[, c("x", "Zx.A", "Zx.B", "Zx.C")], max.block.size = 20)

Note that since you'll be creating dummies for each category, there's a chance those dummies will cause singularity of your matrix of explanatory variables (if, as in my example, isat automatically uses 4 blocks). Using the argument max.block.size enables you to avoid this problem. Let me know if I haven't addressed your particular point.
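For completeness, a shorter way to build the same interaction columns (my sketch, not from the original answer): base R's model.matrix() can generate the per-level x slopes directly, and the resulting numeric matrix could then be passed as mxreg; the exact column names shown are base R's defaults, not something documented by gets.

```r
set.seed(1)
N <- 100
x <- rnorm(N)
z <- rep(c("A", "B", "C", "D"), each = N / 4)
# interaction-only formula with no intercept: one x-slope column per level of z
mm <- model.matrix(~ x:z - 1)
colnames(mm)   # columns such as "x:zA", "x:zB", "x:zC", "x:zD"
```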