Create a Factor Analysis diagram/cluster visualization using R

I conducted a Factor Analysis in R and discovered there are 4 latent variables (factors).
I am trying to create a diagram/cluster visualization to better represent my findings.
I would like something like this:
Here is my output:
print(factor2$loadings,cutoff = 0.3)
Loadings:
MR2 MR1 MR4 MR3
Inflatie 0.796
Dobanda 0.439
optq20 0.627
optq22 0.661
optq25 0.489
optq27 0.462
optq28 0.651
optq29 0.359
optq30 0.636
optq36 0.322
optq37 0.621
optq38 0.517
optq39 0.620
optq43 0.543
MR2 MR1 MR4 MR3
SS loadings 1.560 1.524 1.225 0.873
Proportion Var 0.111 0.109 0.087 0.062
Cumulative Var 0.111 0.220 0.308 0.370
I have searched all over the Internet but could not find any useful code for this.
Thank you!
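A minimal sketch of one common option, assuming factor2 is the object returned by psych's fa(): the psych package's fa.diagram() draws exactly this kind of factor diagram, with arrows from each factor to the items loading on it.
library(psych)
# fa.diagram() draws a path diagram from a factor analysis object;
# cut = 0.3 hides loadings below 0.3, matching the print() cutoff above
fa.diagram(factor2, cut = 0.3, simple = TRUE)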

Related

How to conduct meta-analysis for thousands of genes at one time using R package "metafor"

I'm trying to do a meta-analysis on many genes using the R package "metafor". I know how to do it one gene at a time, but it would be impractical to do so for thousands of genes. Could somebody help me out with this? I'd appreciate any suggestions!
I have the se and HR results for all the genes in objects named 'se_summary' and 'HR_summary', respectively.
I need to use both the se and HR of these genes from five studies ("ICGC", "TCGA", "G71", "G62", "G8") as input to conduct the meta-analysis.
The code I used to do the meta-analysis for a single gene (using gene AAK1 as an example) is:
library(metafor)
se.AAK1 <- as.numeric(se_summary[rownames(se_summary) == 'AAK1',][,-1])
HR.AAK1 <- as.numeric(HR_summary[rownames(HR_summary) == 'AAK1',][,-1])
beta.AAK1 <- log(HR.AAK1)
####First, fit the random-effects model to see whether the test for heterogeneity is significant.
pool.AAK1 <- rma(beta.AAK1, sei=se.AAK1)
summary(pool.AAK1)
#### and this gives the following output:
#>Random-Effects Model (k = 5; tau^2 estimator: REML)
#> logLik deviance AIC BIC AICc
#> -2.5686 5.1372 9.1372 7.9098 21.1372
#>tau^2 (estimated amount of total heterogeneity): 0.0870 (SE = 0.1176)
#>tau (square root of estimated tau^2 value): 0.2950
#>I^2 (total heterogeneity / total variability): 53.67%
#>H^2 (total variability / sampling variability): 2.16
#>Test for Heterogeneity:
#>Q(df = 4) = 8.5490, p-val = 0.0734
#>Model Results:
#>estimate se zval pval ci.lb ci.ub
#> -0.3206 0.1832 -1.7500 0.0801 -0.6797 0.0385 .
#>---
#>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
####If I^2 > 50%, we keep the random-effects model; if I^2 <= 50%, we use the fixed-effects model instead.
pool.AAK1 <- rma(beta.AAK1, sei=se.AAK1, method="FE")
summary(pool.AAK1)
####this gives the following output:
#>Fixed-Effects Model (k = 5)
#> logLik deviance AIC BIC AICc
#> -2.5793 8.5490 7.1587 6.7681 8.4920
#>Test for Heterogeneity:
#>Q(df = 4) = 8.5490, p-val = 0.0734
#>Model Results:
#>estimate se zval pval ci.lb ci.ub
#> -0.2564 0.1191 -2.1524 0.0314 -0.4898 -0.0229 *
#>---
#>Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
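A minimal sketch of encoding that rule directly, assuming the objects defined above; rma() results store the I^2 statistic in $I2:
pool.AAK1 <- rma(beta.AAK1, sei = se.AAK1)  # random-effects fit first
if (pool.AAK1$I2 <= 50) {
  # heterogeneity is modest, so refit as a fixed-effects model
  pool.AAK1 <- rma(beta.AAK1, sei = se.AAK1, method = "FE")
}
summary(pool.AAK1)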
This works just fine for one gene, but I need to do it for all of these genes at once and then export the output, including the heterogeneity p-value and all the model results (estimate, se, zval, pval, ci.lb, ci.ub), to one .txt file, one row per gene. The output should look like this:
Gene_symbol Heterogeneity_p-val estimate se zval pval ci.lb ci.ub
AAK1 0.0734 -0.2564 0.1191 -2.1524 0.0314 -0.4898 -0.0229
A2M 0.9664 0.1688 0.1173 1.4388 0.1502 -0.0611 0.3987
In case it is needed, here is a piece of the sample data "se_summary":
Gene_symbol ICGC_se TCGA_se G71_se G62_se G8_se
A1CF 0.312 0.21 0.219 0.292 0.381
A2M 0.305 0.21 0.219 0.292 0.387
A2ML1 0.314 0.211 0.222 0.289 0.389
A4GALT 0.305 0.21 0.225 0.288 0.388
A4GNT 0.306 0.211 0.222 0.288 0.385
AAAS 0.308 0.213 0.223 0.298 0.38
AACS 0.307 0.209 0.221 0.287 0.38
AADAC 0.302 0.212 0.221 0.293 0.404
AADAT 0.308 0.214 0.22 0.288 0.391
AAK1 0.304 0.209 0.22 0.303 0.438
AAMP 0.303 0.211 0.222 0.288 0.394
And a piece of the sample data "HR_summary":
Gene_symbol ICGC_HR TCGA_HR G71_HR G62_HR G8_HR
A1CF 1.689 1.427 0.864 1.884 1.133
A2M 1.234 1.102 1.11 1.369 1.338
A2ML1 0.563 0.747 0.535 1.002 0.752
A4GALT 0.969 0.891 0.613 0.985 0.882
A4GNT 1.486 0.764 1.051 1.317 1.465
AAAS 1.51 1.178 1.076 0.467 0.681
AACS 1.4 1.022 1.255 1.006 1.416
AADAC 0.979 0.642 1.236 1.581 1.234
AADAT 1.366 1.405 1.18 1.057 1.408
AAK1 1.04 0.923 0.881 0.469 0.329
AAMP 1.122 0.639 1.473 0.964 1.284
Point 1: if your data were collected from different populations, you should not use a fixed-effects model, because the HR could differ among populations.
Point 2: if you convert HR to log(HR), then the SE should be the SE of log(HR).
Your data:
se_summary=data.frame(
Gene_symbol=c("A1CF","A2M","A2ML1","A4GALT","A4GNT","AAAS","AACS","AADAC","AADAT","AAK1","AAMP"),
ICGC_se=c(0.312,0.305,0.314,0.305,0.306,0.308,0.307,0.302,0.308,0.304,0.303),
TCGA_se=c(0.21,0.21,0.211,0.21,0.211,0.213,0.209,0.212,0.214,0.209,0.211),
G71_se=c(0.219,0.219,0.222,0.225,0.222,0.223,0.221,0.221,0.22,0.22,0.222),
G62_se=c(0.292,0.292,0.289,0.288,0.288,0.298,0.287,0.293,0.288,0.303,0.288),
G8_se=c(0.381,0.387,0.389,0.388,0.385,0.38,0.38,0.404,0.391,0.438,0.394))
and
HR_summary=data.frame(
Gene_symbol=c("A1CF","A2M","A2ML1","A4GALT","A4GNT","AAAS","AACS","AADAC","AADAT","AAK1","AAMP"),
ICGC_HR=c(1.689,1.234,0.563,0.969,1.486,1.51,1.4,0.979,1.366,1.04,1.122),
TCGA_HR=c(1.427,1.102,0.747,0.891,0.764,1.178,1.022,0.642,1.405,0.923,0.639),
G71_HR=c(0.864,1.11,0.535,0.613,1.051,1.076,1.255,1.236,1.18,0.881,1.473),
G62_HR=c(1.884,1.369,1.002,0.985,1.317,0.467,1.006,1.581,1.057,0.469,0.964),
G8_HR=c(1.133,1.338,0.752,0.882,1.465,0.681,1.416,1.234,1.408,0.329,1.284))
1) Merge the data:
data=cbind(se_summary,log(HR_summary[,-1]))
2) A function to calculate the pooled log HR:
met = function(x) {
  # columns 2:6 of 'data' hold the standard errors, columns 7:11 the log HRs
  y = rma(as.numeric(x[7:11]), sei = as.numeric(x[2:6]))
  # collect the pooled estimate and the usual model statistics
  y = c(y$b, y$beta, y$se, y$zval, y$pval, y$ci.lb, y$ci.ub, y$tau2, y$I2)
  y
}
3) Apply the function to all rows:
results = data.frame(t(apply(data, 1, met)))
# use the gene symbols (not the default row numbers) as row names
rownames(results) = data$Gene_symbol
colnames(results) = c("b", "beta", "se", "zval", "pval", "ci.lb", "ci.ub", "tau2", "I2")
4) Results:
> results
b beta se zval pval
A1CF 0.27683114 0.27683114 0.1538070 1.7998601 0.071882735
A2M 0.16877042 0.16877042 0.1172977 1.4388214 0.150201136
A2ML1 -0.37676308 -0.37676308 0.1182825 -3.1852811 0.001446134
A4GALT -0.18975044 -0.18975044 0.1179515 -1.6087159 0.107678477
A4GNT 0.09500277 0.09500277 0.1392486 0.6822528 0.495079085
AAAS -0.07012629 -0.07012629 0.2000932 -0.3504680 0.725987468
AACS 0.15333550 0.15333550 0.1170061 1.3104915 0.190029610
AADAC 0.04902471 0.04902471 0.1738017 0.2820727 0.777887764
AADAT 0.23785528 0.23785528 0.1181503 2.0131593 0.044097875
AAK1 -0.32062727 -0.32062727 0.1832183 -1.7499744 0.080122725
AAMP 0.02722082 0.02722082 0.1724461 0.1578512 0.874574077
ci.lb ci.ub tau2 I2
A1CF -0.024625107 0.57828740 0.04413257 37.89339
A2M -0.061128821 0.39866965 0.00000000 0.00000
A2ML1 -0.608592552 -0.14493360 0.00000000 0.00000
A4GALT -0.420931120 0.04143024 0.00000000 0.00000
A4GNT -0.177919527 0.36792508 0.02455208 25.35146
AAAS -0.462301836 0.32204926 0.12145183 62.23915
AACS -0.075992239 0.38266324 0.00000000 0.00000
AADAC -0.291620349 0.38966978 0.07385974 50.18761
AADAT 0.006285038 0.46942552 0.00000000 0.00000
AAK1 -0.679728455 0.03847392 0.08700387 53.66905
AAMP -0.310767314 0.36520895 0.07266674 50.07330
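To export one row per gene to a tab-delimited .txt file, as the question asks, something like the following should work (the file name is just an example):
write.table(cbind(Gene_symbol = rownames(results), results),
            file = "meta_results.txt", sep = "\t",
            row.names = FALSE, quote = FALSE)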
Put the data in long format, with the effect sizes and the standard errors side by side, then split by gene and apply rma() to each piece. You can write your own version of broom's tidy() function just for rma objects.
library(metafor)
library(reshape)     # melt()
library(dplyr)       # %>%, mutate(), rename()
library(data.table)  # rbindlist()
se_summary<-read.table(text="
Gene_symbol ICGC_se TCGA_se G71_se G62_se G8_se
AADAT 0.308 0.214 0.22 0.288 0.391
AAK1 0.304 0.209 0.22 0.303 0.438
AAMP 0.303 0.211 0.222 0.288 0.394
",header=T)
HR_summary<-read.table(text="
Gene_symbol ICGC_HR TCGA_HR G71_HR G62_HR G8_HR
AADAT 1.366 1.405 1.18 1.057 1.408
AAK1 1.04 0.923 0.881 0.469 0.329
AAMP 1.122 0.639 1.473 0.964 1.284
",header=T)
HR_summary <- melt(HR_summary, id.vars = "Gene_symbol") %>%
  # keep only the study name, stripping the "_HR" suffix
  mutate(variable = sapply(strsplit(as.character(variable), split = "_", fixed = TRUE), function(x) x[1])) %>%
  rename(study = variable)
se_summary <- melt(se_summary, id.vars = "Gene_symbol") %>%
  # same for the "_se" suffix
  mutate(variable = sapply(strsplit(as.character(variable), split = "_", fixed = TRUE), function(x) x[1])) %>%
  rename(study = variable)
HR_summary <- merge(HR_summary, se_summary, by = c("Gene_symbol", "study"), suffixes = c(".HR", ".se"))
tidy.rma <- function(x) {
  # the main results: pooled estimate, its uncertainty, the number of
  # studies k, and the heterogeneity test p-value (QEp)
  data.frame(estimate = x$b, se = x$se, zval = x$zval, pval = x$pval,
             ci.lb = x$ci.lb, ci.ub = x$ci.ub, k = x$k, Heterog_pv = x$QEp)
}
rbindlist(lapply(split(HR_summary, droplevels(HR_summary$Gene_symbol)),
                 # analyse on the log-HR scale, as in the question
                 function(x) with(x, tidy.rma(rma(yi = log(value.HR), sei = value.se, method = "FE")))),
          idcol = "Gene_symbol")

Two types of variables in a single Heatmap (using R)

I have to plot data from immunized animals in a way that visualizes possible correlations with protection. As background, when we vaccinate an animal it produces antibodies, which may or may not be linked to protection. We immunized cattle with 9 different proteins and measured antibody titers, which go up to 1.5 (optical density, O.D.). We also measured tick load, which goes up to 5000. Each animal has different titers for each protein and a different tick load; maybe some proteins are more important for protection than others, and we think a heatmap could illustrate this.
TL;DR: Plot a heatmap with one variable (Ticks) that ranges from 6 up to 5000, and another set of variables (Prot1 to Prot9) that go up to 1.5.
A sample of my data:
Animal Group Ticks Prot1 Prot2 Prot3 Prot4 Prot5 Prot6 Prot7 Prot8 Prot9
G1-54-102 control 3030 0.734 0.402 0.620 0.455 0.674 0.550 0.654 0.508 0.618
G1-130-102 control 5469 0.765 0.440 0.647 0.354 0.528 0.525 0.542 0.481 0.658
G1-133-102 control 2070 0.367 0.326 0.386 0.219 0.301 0.231 0.339 0.247 0.291
G3-153-102 vaccinated 150 0.890 0.524 0.928 0.403 0.919 0.593 0.901 0.379 0.647
G3-200-102 vaccinated 97 1.370 0.957 1.183 0.658 1.103 0.981 1.051 0.534 1.144
G3-807-102 vaccinated 606 0.975 0.706 1.058 0.626 1.135 0.967 0.938 0.428 1.035
I have little knowledge of R, but I'm really excited to learn more about it, so feel free to post whatever code you want and I will try my best to understand it.
Thank you in advance.
Luiz
Here is an option using the ggplot2 package to create a heatmap. You will need to convert your data frame from wide format to long format. It is also important to convert the Ticks column from numeric to factor if the numbers are discrete.
library(tidyverse)
library(viridis)
dat2 <- dat %>%
gather(Prot, Value, starts_with("Prot"))
ggplot(dat2, aes(x = factor(Ticks), y = Prot, fill = Value)) +
geom_tile() +
scale_fill_viridis()
DATA
dat <- read.table(text = "Animal Group Ticks Prot1 Prot2 Prot3 Prot4 Prot5 Prot6 Prot7 Prot8 Prot9
'G1-54-102' control 3030 0.734 0.402 0.620 0.455 0.674 0.550 0.654 0.508 0.618
'G1-130-102' control 5469 0.765 0.440 0.647 0.354 0.528 0.525 0.542 0.481 0.658
'G1-133-102' control 2070 0.367 0.326 0.386 0.219 0.301 0.231 0.339 0.247 0.291
'G3-153-102' vaccinated 150 0.890 0.524 0.928 0.403 0.919 0.593 0.901 0.379 0.647
'G3-200-102' vaccinated 97 1.370 0.957 1.183 0.658 1.103 0.981 1.051 0.534 1.144
'G3-807-102' vaccinated 606 0.975 0.706 1.058 0.626 1.135 0.967 0.938 0.428 1.035",
header = TRUE, stringsAsFactors = FALSE)
In the newest version of ggplot2 / the tidyverse, you don't even need to explicitly load the viridis package; the scale is included via scale_fill_viridis_c(). Exciting times!
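Under that assumption, the plotting call above reduces to:
ggplot(dat2, aes(x = factor(Ticks), y = Prot, fill = Value)) +
  geom_tile() +
  scale_fill_viridis_c()  # no library(viridis) needed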

Simultaneous factor solution - Factor analysis

In order to replicate the results of a previous study, I am trying to apply a factor-analysis method for a matrix that is described in Horst (1965) as "basic structure with simultaneous factor solution".
How would I approach this method in R?
Given a matrix m, and assuming for instance that I extract two factors, I have tried applying the following:
fa(r = cor(m), rotate = 'none', factors = 2)
but I don't think this approach is right.
Just found out:
library(psych)
principal(r = cor(m), rotate = "none", nfactors = 2)
does the job. Horst refers to what is also called an eigenvalue decomposition. It can also be done using eigen(), attaining the same result.
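A minimal sketch of that equivalence, assuming m is the raw data matrix: the unrotated loadings are the eigenvectors of the correlation matrix scaled by the square roots of their eigenvalues, so the two columns below should match the principal() loadings up to sign.
e <- eigen(cor(m))
# loadings = eigenvectors scaled by sqrt(eigenvalues); keep two components
loadings_eigen <- e$vectors[, 1:2] %*% diag(sqrt(e$values[1:2]))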
...not really: the loadings seem pretty close, but looking at the maths I am not sure the method described is akin to an eigenvalue decomposition. In fact, looking more closely, the method is applied directly to the raw data, and no product-moment calculations are required.
I am (slowly) trying to work out the maths myself and to understand what the computation instructions describe.
For your information, here is the standardized matrix that is used for the calculation carried out in the example in the original textbook:
0.444 0.627 1.458 1.754 2.967 2.585 0.970 0.616 0.853
2.648 2.563 1.950 -1.341 -1.015 -0.700 0.904 0.976 0.150
-0.104 -0.159 0.049 0.510 -0.378 -0.468 2.217 2.378 2.291
-0.970 -1.216 -1.129 -0.079 -0.378 -0.645 0.287 0.312 -2.266
-1.164 -1.060 -1.485 -1.878 -0.021 -0.530 -1.483 0.190 -0.429
-0.956 -1.122 -0.938 -1.282 -0.779 0.121 0.447 -1.565 -0.429
0.198 -0.242 -0.055 0.021 0.526 -1.528 -0.575 -1.244 -0.114
-0.035 -0.485 1.129 -0.014 -0.894 -0.316 -1.421 -0.705 -0.349
-1.050 0.786 -0.048 0.101 -0.354 -0.433 -0.298 -0.377 -0.256
0.298 0.197 -0.010 0.558 0.253 0.464 -0.284 -0.240 -0.031
0.568 0.367 -0.429 0.811 -0.007 0.786 -0.250 0.081 0.541
0.125 -0.256 -0.492 0.839 0.079 0.665 -0.513 -0.422 0.039
Here are the computation instructions and examples.
I was wondering if this is just a standard approach in factor analysis or in principal component analysis, and if so, which one? The introduction says that this method is a rank-reduction-type solution, in the sense that subtracting the major product of the factor score and factor loading matrices from the data yields a residual whose rank is precisely that of the original matrix less the number of factors.
This particular type of analysis is "direct" in the sense that it is carried out directly on the raw data (at most the normalized matrix).
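A small numerical check of that rank-reduction claim, as a sketch under the assumption that Z holds the 12 x 9 standardized matrix listed above, and using an SVD-based two-factor fit in place of Horst's direct computation:
s <- svd(Z)
scores_m   <- s$u[, 1:2]                     # "factor score" matrix
loadings_m <- s$v[, 1:2] %*% diag(s$d[1:2])  # "factor loading" matrix
residual <- Z - scores_m %*% t(loadings_m)   # remove the two-factor part
qr(residual)$rank                            # equals qr(Z)$rank - 2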

How to set the level above which to display factor loadings from factanal() in R?

I was performing factor analysis on the dataset state.x77, which ships with R. After running the analysis, I inspected the factor loadings.
> output = factanal(state.x77, factors=3, rotation="promax")
> ld = output$loadings
> ld
Loadings:
Factor1 Factor2 Factor3
Population 0.161 0.239 -0.316
Income -0.149 0.681
Illiteracy 0.446 -0.284 -0.393
Life Exp -0.924 0.172 -0.221
Murder 0.917 0.103 -0.129
HS Grad -0.414 0.731
Frost 0.107 1.046
Area 0.387 0.585 0.101
Factor1 Factor2 Factor3
SS loadings 2.274 1.519 1.424
Proportion Var 0.284 0.190 0.178
Cumulative Var 0.284 0.474 0.652
It looks like, by default, R suppresses all loadings with absolute value less than 0.1. I was wondering if there is a way to set this threshold by hand, say to 0.3 instead of 0.1?
Try this:
print(output$loadings, cutoff = 0.3)
See ?print.loadings for the details.

Difference between proc princomp in SAS and princomp command in R?

I am currently trying to obtain equivalent results from the proc princomp command in SAS and the princomp() command in R (in the stats package). The results I am getting are very similar, leading me to suspect that this isn't a problem with different option settings in the two commands. However, the outputs are also different enough that the component scores for each data row are notably different. They are also sign-reversed, but this doesn't matter, of course.
The end goal of this analysis is to produce a set of coefficients from the PCA to score data outside the PCA routine (i.e. a formula that can be applied to new datasets to easily produce scored data).
Without posting all my data, I'm hoping someone can provide some information on how these two commands may differ in their calculations. I don't know enough about the PCA math to determine if this is a conceptual difference in the processes or just something like an internal rounding difference. For simplicity, I'll post the eigenvectors for PC1 and PC2 only.
In SAS:
proc princomp data=climate out=pc_out outstat=pc_outstat;
var MAT MWMT MCMT logMAP logMSP CMI cmiJJA DD_5 NFFD;
run;
returns
Eigenvectors
Prin1 Prin2 Prin3 Prin4 Prin5 Prin6 Prin7 Prin8 Prin9
MAT 0.372 0.257 -.035 -.033 -.106 0.270 -.036 0.216 -.811
MWMT 0.381 0.077 0.160 -.261 0.627 0.137 -.054 0.497 0.302
MCMT 0.341 0.324 -.229 0.046 -.544 0.421 0.045 0.059 0.493
logMAP -.184 0.609 -.311 -.357 -.041 -.548 0.183 0.183 0.000
logMSP -.205 0.506 0.747 -.137 -.040 0.159 -.156 -.266 0.033
CMI -.336 0.287 -.451 0.096 0.486 0.499 0.050 -.318 -.031
cmiJJA -.365 0.179 0.112 0.688 -.019 0.012 0.015 0.588 0.018
DD_5 0.379 0.142 0.173 0.368 0.183 -.173 0.725 -.282 0.007
NFFD 0.363 0.242 -.136 0.402 0.158 -.351 -.637 -.264 0.052
In R:
PCA.model <- princomp(climate[,c("MAT","MWMT","MCMT","logMAP","logMSP","CMI","cmiJJA","DD.5","NFFD")], scores=T, cor=T)
PCA.model$loadings
returns
Eigenvectors
Comp.1 Comp.2 Comp.3 Comp.4 Comp.5 Comp.6 Comp.7 Comp.8 Comp.9
MAT -0.372 -0.269 0.126 -0.250 0.270 0.789
MWMT -0.387 -0.171 0.675 0.494 -0.325
MCMT -0.339 -0.332 0.250 0.164 -0.500 -0.414 -0.510
logMAP 0.174 -0.604 0.309 0.252 0.619 -0.213 0.125
logMSP 0.202 -0.501 -0.727 0.223 -0.162 0.175 -0.268
CMI 0.334 -0.293 0.459 -0.222 0.471 -0.495 -0.271
cmiJJA 0.365 -0.199 -0.174 -0.612 -0.247 0.590
DD.5 -0.382 -0.143 -0.186 -0.421 -0.695 -0.360
NFFD -0.368 -0.227 -0.487 0.309 0.655 -0.205
As you can see, the values are similar (sign reversed), but not identical. The differences matter in the scored data, the first row of which looks like this:
Prin1 Prin2 Prin3 Prin4 Prin5 Prin6 Prin7 Prin8 Prin9
SAS -1.95 1.68 -0.54 0.72 -1.07 0.10 -0.66 -0.02 0.05
R 1.61 -1.99 0.52 -0.42 -1.13 -0.16 0.79 0.12 -0.09
If I use a GLM (in SAS) or lm() (in R) to calculate the coefficients from the scored data, I get very similar numbers (inverse sign), with the exception of the intercept. Like so:
in SAS:
proc glm order=data data=pc_out;
model Prin1 = MAT MWMT MCMT logMAP logMSP CMI cmiJJA DD_5 NFFD;
run;
in R:
scored <- cbind(PCA.model$scores, climate)
pca.lm <- lm(Comp.1~MAT+MWMT+MCMT+logMAP+logMSP+CMI+cmiJJA+DD.5+NFFD, data=scored)
returns
Coefficients:
(Int) MAT MWMT MCMT logMAP logMSP CMI cmiJJA DD.5 NFFD
SAS 0.42 0.04 0.06 0.03 -0.65 -0.69 -0.003 -0.01 0.0002 0.004
R -0.59 -0.04 -0.06 -0.03 0.62 0.68 0.004 0.02 -0.0002 -0.004
So it would seem that the model intercept is changing the values in the scored data. Any thoughts on why this happens (why the intercept is different) would be appreciated.
Thanks again to all those who commented. Embarrassingly, the differences I found between the SAS proc princomp and R princomp() procedures were actually the product of a data error that I made. Sorry to those who took the time to help answer.
But rather than let this question go to waste, I will offer what I found to be statistically equivalent procedures for SAS and R when running a principal component analysis (PCA).
The following procedures are statistically equivalent, given data named 'mydata' and variables named 'Var1', 'Var2', and 'Var3'.
In SAS:
* Run the PCA on your data;
proc princomp data=mydata out=pc_out outstat=pc_outstat;
var Var1 Var2 Var3;
run;
* Use GLM on the individual components to obtain the coefficients to calculate the PCA scoring;
proc glm order=data data=pc_out;
model Prin1 = Var1 Var2 Var3;
run;
In R:
PCA.model <- princomp(mydata[, c("Var1", "Var2", "Var3")], scores = TRUE, cor = TRUE)
# predict() applies the PCA scoring; here it reproduces PCA.model$scores,
# but the same call will score any new dataset with the same variables
scored_new <- predict(PCA.model, mydata)
# bind the component scores to the data, then regress to get the coefficients
scored <- cbind(PCA.model$scores, mydata)
lm(Comp.1 ~ Var1 + Var2 + Var3, data = scored)
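As a follow-up sketch (an illustration, not part of the original answer): because Comp.1 is an exact linear combination of the variables, the lm() fit is exact, and its coefficients give the explicit scoring formula the question is after.
cf <- coef(lm(Comp.1 ~ Var1 + Var2 + Var3, data = scored))
# apply the formula by hand and check it reproduces the princomp() scores
manual_score <- cf["(Intercept)"] +
  as.matrix(mydata[, c("Var1", "Var2", "Var3")]) %*% cf[-1]
all.equal(as.numeric(manual_score), as.numeric(PCA.model$scores[, "Comp.1"]))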
