Identify principal components from a biplot in R

I'm doing a principal component analysis. After getting the results, how do I identify the first couple of principal predictors? The biplot is messy, and it's hard to read the predictor names.
Which part of the PCA results should I look into? Put differently: how do I determine the most important predictors that explain, let's say, 80% of the variance in the data? We may know that, e.g., the first 5 components do this, but each principal component is just a combination of predictors. How do I identify those "important" predictors?

See this answer: Principal Components Analysis - how to get the contribution (%) of each parameter to a Prin.Comp.?
The information is stored within your PCA results.
If you used prcomp(), then $rotation is what you are after; if you used princomp(), then $loadings holds the key.
E.g.:
require(graphics)
data("USArrests")
# prcomp(): the loadings are stored in $rotation
pca_1 <- prcomp(USArrests, scale = TRUE)
load_1 <- with(pca_1, unclass(rotation))
aload_1 <- abs(load_1)
# normalise each column so the contributions sum to 1 per component
sweep(aload_1, 2, colSums(aload_1), "/")
#               PC1       PC2       PC3        PC4
# Murder   0.2761363 0.2540139 0.1890303 0.40186493
# Assault  0.3005008 0.1141873 0.1485443 0.46016113
# UrbanPop 0.1433452 0.5301651 0.2094067 0.08286886
# Rape     0.2800177 0.1016337 0.4530187 0.05510509
# princomp(): the loadings are stored in $loadings
pca_2 <- princomp(USArrests, cor = TRUE)
load_2 <- with(pca_2, unclass(loadings))
aload_2 <- abs(load_2)
sweep(aload_2, 2, colSums(aload_2), "/")
#            Comp.1    Comp.2    Comp.3     Comp.4
# Murder   0.2761363 0.2540139 0.1890303 0.40186493
# Assault  0.3005008 0.1141873 0.1485443 0.46016113
# UrbanPop 0.1433452 0.5301651 0.2094067 0.08286886
# Rape     0.2800177 0.1016337 0.4530187 0.05510509
As you can see, Murder, Assault, and Rape each contribute ~30% to PC1, whereas UrbanPop contributes only ~14% to PC1 yet is the major contributor to PC2 (~53%).
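To tie this back to the "80% of variance" part of the question, here is a small sketch (my addition, building on the pca_1 object above) that first picks the number of components needed and then inspects only those columns of the contribution table:
summary(pca_1)  # the "Cumulative Proportion" row shows variance explained
cum_var <- cumsum(pca_1$sdev^2) / sum(pca_1$sdev^2)
k <- which(cum_var >= 0.80)[1]   # first k components covering >= 80% of variance
sweep(abs(pca_1$rotation), 2, colSums(abs(pca_1$rotation)), "/")[, 1:k, drop = FALSE]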

Using predict in metafor when each author has multiple rows in the data

I'm running a meta-analysis in which I'm interested in the effect of X on the effect of age on habitat use (raw mean values and variances), using the metafor package.
An example of one of my models is:
mod6 <- rma.mv(
  yi = Used_value,
  V = Used_variance,
  slab = Citation,
  mods = ~ Age + poly(Slope, degree = 2),
  random = ~ 1 | Region,
  data = vel.focal,
  method = "ML"
)
My justification for not using Citation as a random effect is that using only Region accounts for more of the heterogeneity than when random = list(~ 1 | Citation/ID, ~ 1 | Region) or when Citation/ID is used by itself.
What I need as output is the prediction for each age by region, but the predict() function for the model (and the associated forest plot) spits out a prediction for each row, as it assumes each row in the data is a unique study. In my case it is not, as my input values are separated by age and season.
predict(mod6)
                                    pred     se   ci.lb   ci.ub   pi.lb   pi.ub
Riehle and Griffith 1993.1        9.3437 2.3588  4.7205 13.9668  0.2362 18.4511
Riehle and Griffith 1993.2        9.3437 2.3588  4.7205 13.9668  0.2362 18.4511
Riehle and Griffith 1993.3        9.3437 2.3588  4.7205 13.9668  0.2362 18.4511
Spina 2000.1                      8.7706 2.7386  3.4030 14.1382 -0.7364 18.2776
Spina 2000.2                      8.5407 2.7339  3.1824 13.8991 -0.9611 18.0426
Spina 2000.3                      8.5584 2.7406  3.1868 13.9299 -0.9509 18.0676
Vondracek and Longanecker 1993.1 12.6116 2.5138  7.6847 17.5385  3.3462 21.8769
Vondracek and Longanecker 1993.2 12.6116 2.5138  7.6847 17.5385  3.3462 21.8769
Vondracek and Longanecker 1993.3 12.3817 2.5327  7.4176 17.3458  3.0965 21.6669
Vondracek and Longanecker 1993.4 12.3817 2.5327  7.4176 17.3458  3.0965 21.6669
Does anybody know a way to modify the arguments inside predict() to tell it how you want your predictions output or to tell it that there are multiple rows per slab?
You need to use the newmods argument to specify the values for Age for which you want predicted values. You will have to plug in something for the linear and quadratic terms for the Slope variable as well (e.g., holding Slope constant at its mean and hence the quadratic term will just be the mean squared). Region is not a fixed effect, so it is not relevant if you want to compute predicted values based on the fixed effects. If you want to compute BLUPs for those random effects, you can do so with ranef(). One can then combine the predictions based on the fixed effects with the BLUPs. That would be the general idea, but implementing this will require a bit of programming.
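For illustration, a minimal sketch of that idea (my code, untested; it assumes raw polynomial terms, i.e. mods = ~ Age + poly(Slope, 2, raw = TRUE), so that the quadratic column is just Slope squared):
slope_mean <- mean(vel.focal$Slope)          # hold Slope at its mean
ages <- sort(unique(vel.focal$Age))          # the ages you want predictions for
predict(mod6,
        newmods = cbind(Age    = ages,
                        Slope  = slope_mean,    # linear term
                        Slope2 = slope_mean^2)) # quadratic term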

I am working on an ordered logit model and tried to compute the estimates and proportional odds in R. Is this correct?

Question: Have a look at data set Two.csv. It contains a potentially dependent binary variable Y, and two potentially independent variables {X1, X2} for each unit of measurement.
(a) Read data set Two.csv into R and have a look at the format of the dependent variable. Discuss three models which might be appropriate in this data situation. Discuss which aspects speak in favor of each model, and which aspects against.
(b) Suppose variable Y measures financial ratings A: y = 1, B: y = 2, and C: y = 3, that is, the creditworthiness A: high, B: intermediate, C: low for unit of measurement firm i. Model Y by means of an ordered logit model as a function of {X1, X2} and estimate your model by means of a built-in command.
(c) Explain the proportional-odds assumption and test whether the assumption is critical in the context of the data set at hand.
## a) Read data set Two.csv into R and have a look at the format of the dependent variable.
O <- read.table("C:/Users/DELL/Downloads/ExamQEIII2021/Two.csv", header = TRUE, sep = ";")
str(O)
dim(O)
View(O)
## b) Fit the ordered logit model.
library(oglmx)
ologit <- oglmx(y ~ x1 + x2, data = O, link = "logit",
                constantMEAN = FALSE, constantSD = FALSE,
                delta = 0, threshparam = NULL)
results.ologis <- ologit.reg(y ~ x1 + x2, data = O)  # the package's ordered-logit convenience wrapper
summary(results.ologis)
## x1  1.46251
## x2 -0.45391
margins.oglmx(results.ologis, ascontinuous = FALSE)  # built-in command for average marginal effects (AME)
## c) Explain the proportional-odds assumption and test whether the assumption
##    is critical in the context of the data set at hand.
# ordered logit WITH proportional odds (PO)
library(VGAM)
a <- vglm(y ~ x1 + x2, family = cumulative(parallel = TRUE), data = O)
summary(a)
# ordered logit WITHOUT proportional odds [a assumes PO; c does not]
c <- vglm(y ~ x1 + x2, family = cumulative(parallel = FALSE), data = O)
summary(c)
# deviance-based test of the PO assumption
pchisq(deviance(a) - deviance(c), df.residual(a) - df.residual(c), lower.tail = FALSE)
## 0.4936413 -- no significant difference in the variance left unexplained; we
## cannot conclude that the PO assumption is critical.
# The same test via a likelihood-ratio statistic:
LLa <- logLik(a)  # smaller (restricted) model
LLc <- logLik(c)  # larger (unrestricted) model
LL <- 2 * (LLc - LLa)
1 - pchisq(LL, df.residual(a) - df.residual(c))
## 0.4936413 [same p-value as above]
## Conclusion: the likelihoods do not differ significantly, so relaxing the PO
## assumption does not improve the fit.
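As an optional cross-check (my addition, not part of the original answer), the proportional-odds fit can be reproduced with MASS::polr; note that polr's sign convention for the slope coefficients differs from VGAM's cumulative() parameterisation:
library(MASS)
O$y <- factor(O$y, ordered = TRUE)  # polr() needs an ordered factor response
fit_polr <- polr(y ~ x1 + x2, data = O, method = "logistic", Hess = TRUE)
summary(fit_polr)  # estimates should match vglm(..., parallel = TRUE) up to sign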

"convergence" for a derived quantity in JAGS/R2Jags

UPDATE: Now with Traceplot example
UPDATE: Now with new traceplot
I am trying to adapt Outhwaite et al.'s (2018) code for occupancy modelling and have a couple of questions that I just can't seem to find an answer for...
Code used to create model
cat(
"model{
  ### Model ###
  # State model
  for (i in 1:nsite){
    for (t in 1:nyear){
      z[i,t] ~ dbern(psi[i,t])
      logit(psi[i,t]) <- b[t] + u[i]
    }}
  # Observation model
  for (j in 1:nvisit) {
    y[j] ~ dbern(Py[j] + 0.0001)
    Py[j] <- z[Site[j],Year[j]] * p[j]
    logit(p[j]) <- a[Year[j]] + c*logL[j]
  }
  ### Priors ###
  # State model priors
  for (t in 1:nyear){
    b[t] ~ dunif(-10,10)        # fixed year effect
  }
  for (i in 1:nsite) {
    u[i] ~ dnorm(0, tau.u)      # random site effect
  }
  tau.u <- 1/(sd.u * sd.u)
  sd.u ~ dunif(0, 5)            # half-uniform hyperprior
  # Observation model priors
  for (t in 1:nyear) {
    a[t] ~ dnorm(mu.a, tau.a)   # random year effect
  }
  mu.a ~ dnorm(0, 0.01)
  tau.a <- 1/(sd.a * sd.a)
  sd.a ~ dunif(0, 5)            # half-uniform hyperprior
  c ~ dunif(-10, 10)            # sampling effort effect
  ### Derived parameters ###
  # Finite-sample occupancy: proportion of occupied sites
  for (t in 1:nyear) {
    psi.fs[t] <- sum(z[1:nsite,t]) / nsite
  }
  #data# nyear, nsite, nvisit, y, logL, Site, Year
}", file = "bmmodel.txt"
)
Note that dbern(Py[j]+0.0001) includes a correction factor since dbern(0) is not supported in JAGS.
I am running the model on some plant data just basically trying it out to see if it runs and converges and behaves as I would expect it to.
Question number 1(ANSWERED): I am interested in the quantity psi.fs[t]. But since the model calculates this quantity after the actual modelling process, can convergence be assessed for psi.fs[t]?
R code for running model with R2JAGS
jagsrespsi <- jags(data.list, inits = test.inits,
                   n.chains = 2, n.iter = 15000, n.thin = 3,
                   DIC = TRUE,
                   model.file = paste0(modeltype, "model.txt"),
                   parameters.to.save = c("psi.fs"))
Question number 2: When I use traceplot(jagsrespsi), the traceplot seems all over the place, but the Rhat in jagsrespsi$BUGSoutput is 1 for all my years. gelman.diag(as.mcmc(jagsrespsi)) also indicates convergence. The same goes for monitoring psi!
I am very astonished by this model behaviour and suspect there is something wrong... but I have no idea where to look.
Yes, you can check psi.fs[] for convergence in exactly the same way as you check the convergence of the model's parameters. That's exactly what happens, for example, in a logistic regression, where the fitted probabilities of response are calculated as exp(z)/(1 + exp(z)) for some linear predictor z.
When you say the traceplot is "all over the place", what do you mean? This could be either good or bad. Can you show an example? A "good" traceplot looks like a "fat, hairy caterpillar": consecutive samples taken from all regions of the sample space, a horizontal hair ball. Although written for SAS, this page gives a reasonable high level description of what a good trace plot looks like, and what problems might be indicated by less-than-ideal examples.
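If it helps, you can pull up the trace for just the derived quantity (a sketch; it assumes the R2jags fit above, whose traceplot() can select a monitored parameter by name):
traceplot(jagsrespsi, varname = "psi.fs", ask = FALSE)  # only the psi.fs chains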
In response to your edit to include the trace plot...
That doesn't look like a particularly good traceplot to me: there seems to be some negative autocorrelation between successive samples. Have you calculated the effective sample size (ESS)?
But the plot may look a little odd because your chain is very short, IMHO. You can use the ESS to get a very rough approximation of the accuracy of an estimated probability: the worst-case half-width of a CI for a binomial proportion is +/- 2*sqrt(0.5*0.5/N), where N is the sample size (or the ESS in this case). So even if the efficiency of your MCMC process is 1, so that the ESS equals the chain length, the accuracy of your estimates is only +/- 0.02. To estimate a probability to 2 decimal places (so that the half-width of the CI is no more than 0.005), you need an ESS of 40,000.
There's nothing wrong with using short chain lengths during testing, but for "production" runs I would always use a chain length much greater than 2,500. (And I'd also use multiple chains so that I can use Gelman-Rubin statistics to test for convergence.)
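A quick sketch of that ESS check with coda (my addition; assumes the jagsrespsi object from above):
library(coda)
mc <- as.mcmc(jagsrespsi)       # R2jags fit as an mcmc object
ess <- effectiveSize(mc)        # effective sample size per monitored node
2 * sqrt(0.5 * 0.5 / ess)       # worst-case CI half-width at that ESS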

Grouping Variables in Multilevel Linear Models

I am trying to learn hierarchical models in R and I have generated some sample data for myself. I am having trouble with the correct syntax for coding a multilevel regression problem.
I generated some data for salaries in a business school. I made the salaries depend linearly on the number of years of employment and the total number of publications by the faculty member. The faculty are in various departments, and I made the base salary (intercept) different for each department and also the yearly hike (slope) different for each department. This way, the intercept (base salary) and the slope with respect to experience (in years) depend on the nested level (department), while the slope with respect to the other explanatory variable (publications) does not. What would be the correct syntax to model this in R?
Here's my data:
Data <- data.frame(Sl_No = 1:40,
                   Dept  = as.factor(sample(c("Mark","IT","Fin","Strat","Ops"),
                                            40, replace = TRUE)),
                   Years = round(runif(40, 1, 10)))
Data$Pubs <- round(Data$Years * runif(40, 1, 3))
lookup_table <- data.frame(Dept  = c("Mark","IT","Fin","Strat","Ops"),
                           base  = c(100000,140000,150000,150000,120000),
                           slope = c(6000,5000,3000,2000,4000))
Data <- merge(Data, lookup_table, by = "Dept")
Data$Salary <- Data$base + Data$slope*Data$Years + Data$Pubs*10000 +
               rnorm(nrow(Data))*10000
Data$base  <- NULL
Data$slope <- NULL
I have tried the following:
1)
multilevel_model<-lmer(Salary~1|Dept+Pubs+Years|Dept, data = Data)
Error in model.matrix.default(eval(substitute(~foo, list(foo = x[[2]]))), :
model frame and formula mismatch in model.matrix()
2)
multilevel_model<-lmer(`Salary`~ Dept + `Pubs`+`Years`|Dept , data = Data)
boundary (singular) fit: see ?isSingular
I want to see the estimates of the salary intercept and yearly hike by Dept and the estimate of the effect of publication as a standalone (pooled). Right now I am not getting the code to work at all.
I know the base salary and the yearly hike by dept and the effect of a publication (since I generated it).
Dept     base  slope
Fin    150000   3000
Mark   100000   6000
Ops    120000   4000
IT     140000   5000
Strat  150000   2000
Every publication increases the salary by 10,000.
ANSWER:
Thanks to @Ben's answer here, I think the correct model is
multilevel_model<-lmer(Salary~(1|Dept)+ Pubs +(0+Years|Dept), data = Data)
This gives me the following fixed effects by running
summary(multilevel_model)
Fixed effects:
            Estimate Std. Error t value
(Intercept) 131667.4    10461.0   12.59
Pubs         10235.0      550.8   18.58
Correlation of Fixed Effects:
Pubs -0.081
The Department level coefficients are as follows:
coef(multilevel_model)
$Dept
          Years (Intercept)     Pubs
Fin   3072.5133    148757.6 10235.02
IT    5156.6774    136710.7 10235.02
Mark  5435.8301    102858.3 10235.02
Ops   3433.1433    118287.1 10235.02
Strat  963.9366    151723.1 10235.02
These are pretty good estimates of the original values. Now I need to learn to assess "how good" they are. :)
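One way to start assessing "how good" (a sketch I added; se.ranef() is from the arm package):
confint(multilevel_model, method = "profile")  # CIs for fixed effects and variance components
library(arm)
se.ranef(multilevel_model)  # approximate standard errors of the random effects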
(1)
multilevel_model<-lmer(`Total Salary`~ 1|Dept +
`Publications`+`Years of Exp`|Dept , data = sample_data)
I can't immediately diagnose why this gives a syntax error, but parentheses are generally recommended around random-effect terms because the | operator has high precedence in formulas. Thus the right-hand-side (RHS) formula
~ (1|Dept) + (`Publications`+`Years of Exp`|Dept)
might work, except that it would be problematic because both terms contain the same intercept term: if you wanted to do this you'd probably need
~ (1|Dept) + (0+`Publications`+`Years of Exp`|Dept)
(2)
~ Dept + `Publications`+`Years of Exp`|Dept
It doesn't really make any sense to put the same variable (Dept) on both the left- and right-hand sides of the bar.
You should probably use
~ pubs + years_exp + (1 + years_exp|Dept)
Since in principle the effect of publication could vary across departments, the maximal model would be
~ pubs + years_exp + (1 + pubs + years_exp|Dept)
It rarely makes sense to include a random effect without its corresponding fixed effect.
Note that you may get singular fits even if you have the right model; see the ?isSingular man page.
If the 18 observations listed above represent your whole data set, it's very likely too small to fit the maximal model successfully. Rule of thumb is that you need 10-20 observations per parameter estimated, and the maximal model has (intercept + 2 fixed-effect parameters + (3*4)/2 = 6 random-effect parameters) = 9 parameters. (Since it's simulated, you can easily simulate a bigger data set ...)
I'd recommend renaming variables in your data frame so you don't have to fuss with backtick-protecting variable names with spaces in them ...
The GLMM FAQ has more on model specification
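Putting the advice together, a minimal runnable sketch (my assembly, using the question's original column names rather than the renamed ones):
library(lme4)
m <- lmer(Salary ~ Pubs + Years + (1 + Years | Dept), data = Data)
summary(m)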

In R, when fitting a regression with ordinal predictor variables, how do you suppress one of the polynomial contrast levels?

Below is some of the summary data from a mixed model I have run in R (produced by summary()):
Fixed effects:
                    Estimate Std. Error        df t value Pr(>|t|)
(Intercept)       -3.295e-01  1.227e-01 3.740e+01  -2.683   0.0108 *
STANDING.L         8.447e-02  7.091e-02 7.346e+02   1.188   0.2354
STANDING.Q        -4.624e-03  5.940e-02 7.323e+02  -0.078   0.9380
STANDING.C         2.899e-03  5.560e-02 7.327e+02   0.052   0.9585
FIRST.CLASS1       2.643e-02  7.017e-02 7.308e+02   0.376   0.7068
CAREER.L           1.300e-01  5.917e-02 7.345e+02   2.189   0.0289 *
CAREER.Q           8.914e-04  7.370e-02 7.295e+02   0.012   0.9904
GENDER1            9.411e-02  5.892e-02 7.296e+02   1.596   0.1109
HS.COURSES.L      -3.996e-02  7.819e-02 7.347e+02  -0.510   0.6102
HS.COURSES.Q       4.977e-02  6.674e-02 7.322e+02   0.745   0.4567
HS.COURSES.C       2.087e-02  5.735e-02 7.298e+02   0.364   0.7163
PARENT.LIVE1       5.770e-03  8.434e-02 7.296e+02   0.068   0.9455
CHILD.SETTING.L    1.241e-01  6.027e-02 7.288e+02   2.057   0.0400 *
CHILD.SETTING.Q   -4.911e-02  4.879e-02 7.268e+02  -1.006   0.3146
ES.EXTRA.L         2.702e-02  8.202e-02 7.287e+02   0.329   0.7421
ES.EXTRA.Q         1.267e-01  7.761e-02 7.274e+02   1.631   0.1032
ES.EXTRA.C         8.317e-02  7.533e-02 7.287e+02   1.104   0.2701
TEACH.TAUGHT1      2.475e-01  6.316e-02 7.268e+02   3.918 9.79e-05 ***
SOME1ELSE.TAUGHT1 -1.818e-03  6.116e-02 7.277e+02  -0.030   0.9763
Several of my predictor variables are ordinal, as indicated by the Linear (.L), Quadratic (.Q), and sometimes Cubic (.C) terms that are being automatically generated for them. My question is this: How could I re-run this same regression removing, say, the ES.EXTRA.C term? In other words, I want to suppress one or more of the automatically-generated polynomial contrasts but potentially keep others. I would have thought update() could do this, but I haven't been able to get it to work.
I can't share my actual data, but this code will create a few outcomes that are sort of similar and include an illustration of smci's answer below as well:
set.seed(151)  # lock in a fixed random structure for these data
Y.data <- sort(round(rnorm(100, 75, 10)))          # random Y data, basically the same form as mine
X.data1 <- as.ordered(rep(c(1,2,3,4), each = 25))  # random X data, similar in form to mine
# What I had been doing (albeit using lmer() instead of lm()); it creates the
# polynomial terms automatically:
summary(lm(Y.data ~ X.data1))
summary(lm(Y.data ~ poly(X.data1, 3)))              # returns an error because X.data1 is not numeric
# Very similar to the first call, but this time I control which polynomial
# terms are included:
summary(lm(Y.data ~ poly(as.numeric(X.data1), 3)))
summary(lm(Y.data ~ poly(as.numeric(X.data1), 2)))  # the cubic term is suppressed now, as desired
As a follow-up, is there a way using poly() to get only a certain mixture of polynomial terms? Say, the cubic and fourth power ones only? I have no idea why one would want to do that, but it seems like something worth knowing...
UPDATE: after you posted your code:
As I guessed you're building a model using polynomials of ordinal variables:
fit <- lm(y ~ poly(STANDING,3) + FIRST.CLASS + poly(CAREER,2) + GENDER +
              poly(HS.COURSES,3) + poly(CHILD.SETTING,2) + poly(ES.EXTRA,3) ...)
If you want to prevent cubic terms, use poly(..., 2).
If you really want only cubic and quartic terms, with no quadratic or linear ones, a hack is to use I(STANDING^3) + I(STANDING^4), although those will be raw polynomials (not orthogonal, centered, and scaled the way poly() builds them). I have never seen a need for this; it sounds like a very strange request.
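For the follow-up about keeping only, say, the cubic and quartic terms of the orthogonal basis, one option (my sketch, with made-up data) is to index the columns of poly() directly:
set.seed(1)
x <- runif(100)                     # generic numeric predictor (made up)
y <- x^3 - 2*x^4 + rnorm(100, 0, 0.05)
summary(lm(y ~ poly(x, 4)[, 3:4]))  # keep only the cubic and quartic columns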
See related:
How to model polynomial regression in R?
UCLA: "R library Contrast coding systems for categorical variables"
FOOTNOTE: lmer() is for mixed-effects models; if you don't know what that is, don't use it, use plain lm().
