How can I compute Hotelling's T2 after PCA in R?

I need to compute Hotelling's T2 and SPE (Q) after a PCA analysis. I did it using the pca function from the mdatools library, but the PCs it computes are different from the ones returned by the prcomp or princomp functions. Why?
library(mdatools)
NF4.3.pca4 <- pca(NF4.3, ncomp = 15, center = TRUE, scale = TRUE)
res <- NF4.3.pca4$calres
NF4.3.pca <- prcomp(NF4.3, center = TRUE, scale. = TRUE)  # different eigenvalues
Is there another way to calculate T2 and SPE from principal components?
Data:
ASSORB_CAT1;ASSORB_CAT3;ASSORB_VOLANO;AZOTO_IN
0.03662109;23.55957;-12.30469;39.3
0;25.36621;-11.09619;39.2
-0.02441406;21.92383;-11.26709;39.2
-0.02441406;23.10791;-11.07178;39.1
-0.04882813;22.81494;-10.57129;39.59975
0;24.24316;-11.23047;39.89737
0;22.63184;-11.43799;39.8
-0.04882813;24.34082;-13.61084;39.5
0;21.83838;-11.1084;39.4
0;24.3042;-12.08496;39.3
0;24.67041;-12.40234;39.3

I will explain on another dataset, as I do not have access to the one you are analysing. The two methods, applied to my dataset, give identical results. What can be confusing is that stats_pca$sdev holds the square roots of the eigenvalues, whereas mdatools_pca$eigenvals reports the eigenvalues themselves.
library(mdatools)
data("mtcars")
stats_pca <- prcomp(mtcars, center=TRUE, scale.=TRUE)
mdatools_pca <- mdatools::pca(mtcars, center=TRUE, scale=TRUE)
all.equal(sqrt(mdatools_pca$eigenvals)[1:length(stats_pca$sdev)], stats_pca$sdev)
# TRUE
If you want to go on with Hotelling's T2, I would recommend this read: PCA and Hotelling's T^2 for confidence intervals in R.
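For completeness, here is a minimal sketch of computing T2 and SPE (Q) directly from a prcomp fit once you have settled on a number of components A; A = 3 is just an illustrative choice, and the sketch assumes the same centering and scaling as the PCA model:
A <- 3                                      # retained components (illustrative choice)
fit <- prcomp(NF4.3, center = TRUE, scale. = TRUE)
Xs  <- scale(NF4.3)                         # same preprocessing as the model
Tsc <- fit$x[, 1:A, drop = FALSE]           # scores
P   <- fit$rotation[, 1:A, drop = FALSE]    # loadings
lam <- fit$sdev[1:A]^2                      # eigenvalues (score variances)

T2  <- rowSums(sweep(Tsc^2, 2, lam, "/"))   # Hotelling's T2 per observation
E   <- Xs - Tsc %*% t(P)                    # reconstruction residuals
SPE <- rowSums(E^2)                         # Q / SPE per observation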

Related

What is the purpose of using preProcess from the "caret" package in R?

"Hi everyone. When I see them using the K Nearest Network to classify the groups. I don't know why they just use the preProcess to standardize the data. Here are the code"
preProc <- preProcess(UB2[3:12])
UBn <- predict(preProc, UB2)
set.seed(12)
UBKm <- kmeans(UBn[3:12], centers = 5, iter.max = 1000)
You use preProcess to scale and center your variables, basically to have them in the same range.
If the columns have very different ranges and you apply kmeans directly, the clustering will be driven mainly by the columns with the larger values, since they dominate the within-cluster variance that kmeans minimizes.
For example, we simulate three clusters that can be separated on variables of different scales:
library(caret)
library(MASS)
library(rgl)
set.seed(111)
Sigma <- matrix(c(10,1,1, 1,1,1, 1,1,1), 3, 3)
X <- rbind(mvrnorm(n = 200, c(50, 1, 1), Sigma),
           mvrnorm(n = 200, c(20, 5, 1), Sigma),
           mvrnorm(n = 200, c(20, 2.5, 2.5), Sigma))
X <- data.frame(X, cluster = factor(rep(1:3, each = 200)))
plot3d(X[, 1:3], col = rep(1:3, each = 200))
Note that X1 is in the range 0 to 60 while X2 and X3 are roughly between -1 and 10.
If we do kmeans without scaling:
clus = kmeans(X[,1:3],3)
COLS = heat.colors(3)
plot3d(X[,1:3],col=COLS[clus$cluster])
It primarily splits on X1 and ignores X2 and X3, resulting in a split of the original cluster 1.
So we scale and cluster:
clus = kmeans(predict(preProcess(X[,1:3]),X[,1:3]),3)
COLS = heat.colors(3)
plot3d(X[,1:3],col=COLS[clus$cluster])
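As a quick check (my addition, not part of the original answer), you can cross-tabulate the cluster assignments against the simulated labels to see the effect of scaling:
# Compare raw vs. scaled k-means against the known simulated clusters
clus_raw    <- kmeans(X[, 1:3], 3)
clus_scaled <- kmeans(predict(preProcess(X[, 1:3]), X[, 1:3]), 3)
table(true = X$cluster, raw = clus_raw$cluster)       # original cluster 1 gets split
table(true = X$cluster, scaled = clus_scaled$cluster) # close to the true grouping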

Maximum pseudo-likelihood estimator for soft-core point process

I am trying to fit a soft-core point process model to a point pattern by maximum pseudo-likelihood. I followed the instructions given in this paper by Baddeley and Turner, and here is the R code I came up with:
library(deldir)
library(tidyverse)
library(fields)

## MPLE
# irregularity parameter kappa (k)
k <- 0.4
## Generate a 50 x 50 grid of dummy points; "RA" and "DE" are the x and y coordinates
## (ramin, ramax, demin, demax are the limits of the observation window)
dum.x <- seq(ramin, ramax, length = 50)
dum.y <- seq(demin, demax, length = 50)
dum <- expand.grid(dum.x, dum.y)
colnames(dum) <- c("RA", "DE")
## Combine with the data and flag which rows are data points and which are dummy points;
## X is the point pattern to be fitted
bind.x <- bind_rows(X, dum) %>%
  mutate(Ind = c(rep(1, nrow(X)), rep(0, nrow(dum))))
## Calculate quadrature weights from the Voronoi (Dirichlet) cell areas
w <- deldir(bind.x$RA, bind.x$DE)$summary$dir.area
## Response
y <- bind.x$Ind / w
## Sufficient statistic: the sum of pairwise distance terms
tmp <- cbind(bind.x$RA, bind.x$DE)
t1 <- rdist(tmp)^(-2 / k)
t1[t1 == Inf] <- 0
t1 <- rowSums(t1)
t <- -t1
## Fit the model using quasi-Poisson regression
fit <- glm(y ~ t, family = quasipoisson, weights = w)
However, the fitted coefficient for t is negative, which is obviously not a valid value for a soft-core point process. My point pattern was actually simulated from a soft-core process, so it does not make sense that the fitted parameter is negative. I tried my best to find a bug in the code but I can't seem to find one. The only potential issue I see is that my sufficient statistic is extremely large (on the order of 10^14), which I fear may cause numerical issues. But the statistic is large because my observation window spans a very small area and the average distance between a pair of points is around 0.006, so a sufficient statistic based on this will certainly be very large, and my intuition tells me it should not cause a numerical problem or make the fitted parameter negative.
Can anybody help and check if my code is correct? Thanks very much!
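Not an answer to the bug hunt itself, but one way to sanity-check a hand-rolled Berman-Turner fit is to fit the same model with spatstat's ppm(), which maximizes the pseudo-likelihood for the soft-core interaction internally. This is only a sketch: it assumes X can be turned into a ppp object on the observation window (ramin, ramax) x (demin, demax).
library(spatstat)
# Cross-check of the hand-coded MPLE using spatstat's built-in soft-core model
Xppp <- ppp(X$RA, X$DE, window = owin(c(ramin, ramax), c(demin, demax)))
fit_spatstat <- ppm(Xppp ~ 1, Softcore(kappa = 0.4))
coef(fit_spatstat)  # compare the interaction coefficient with the glm fit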

Different results when performing PCA in R with princomp() and principal()

I tried to use princomp() and principal() to do PCA in R with the USArrests data set. However, I got two different results for the loadings/rotation and the scores.
First, I centered and normalised the original data frame so that it is easier to compare the outputs.
library(psych)
trans_func <- function(x) {
  (x - mean(x)) / sd(x)
}
A <- USArrests
USArrests <- apply(USArrests, 2, trans_func)
princompPCA <- princomp(USArrests, cor = TRUE)
principalPCA <- principal(USArrests, nfactors = 4, scores = TRUE, rotate = "none", scale = TRUE)
Then I got the results for the loadings and scores using the following commands:
princompPCA$loadings
principalPCA$loadings
Could you please help me explain why there is a difference, and how we can interpret these results?
At the very end of the help page for ?principal:
"The eigen vectors are rescaled by the sqrt of the eigen values to produce the component loadings more typical in factor analysis."
So principal returns the scaled loadings. In fact, principal produces a factor model estimated by the principal component method.
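A quick way to verify this on the USArrests example above (a sketch of my own, not from the psych documentation): rescale princomp's eigenvectors by the square roots of the eigenvalues and compare them with principal's loadings, allowing for sign flips of individual components.
ev <- princompPCA$sdev^2                                 # eigenvalues
L_princomp  <- unclass(princompPCA$loadings) %*% diag(sqrt(ev))
L_principal <- unclass(principalPCA$loadings)
max(abs(abs(L_princomp) - abs(L_principal)))             # ~ 0 up to rounding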
Four years later, I would like to provide a more accurate answer to this question. I will use the iris data as an example.
data = iris[, 1:4]
First, do PCA via an eigendecomposition of the covariance matrix:
eigen_res = eigen(cov(data))
l = eigen_res$values
q = eigen_res$vectors
Then the eigenvector corresponding to the largest eigenvalue gives the loadings of the first component:
q[,1]
We can treat this as the reference, or the correct answer. Now let's check the results from the different R functions.
First, by function 'princomp'
res1 = princomp(data)
res1$loadings[,1]
# compare with
q[,1]
No problem; this function just returns the same result as 'eigen'. Now move on to 'principal':
library(psych)
res2 = principal(data, nfactors=4, rotate="none")
# the loadings of the first PC is
res2$loadings[,1]
# compare it with the results by eigendecomposition
sqrt(l[1])*q[,1] # re-scale the eigen vector by sqrt of eigen value
You may find that they are still different. The problem is that the 'principal' function does the eigendecomposition on the correlation matrix by default. Note: PCA is not invariant to rescaling of the variables. If you modify the call to
res2 = principal(data, nfactors=4, rotate="none", cor="cov")
# the loadings of the first PC is
res2$loadings[,1]
# compare it with the results by eigendecomposition
sqrt(l[1])*q[,1] # re-scale the eigen vector by sqrt of eigen value
Now, you will get the same results as 'eigen' and 'princomp'.
To summarize:
If you want to do PCA, you'd better use the 'princomp' function.
PCA is a special case of the factor model, or a simplified version of the factor model; it is just equivalent to an eigendecomposition.
We can apply PCA to get an approximation of a factor model. It doesn't care about the specific factors, i.e. the epsilons in a factor model, so if you change the number of factors in your model you will get the same estimates of the loadings (see the sketch after this summary). This is different from maximum likelihood estimation.
If you are estimating a factor model, you'd better use the 'principal' function, since it provides more functionality, such as rotation and calculating the scores by different methods.
Rescaling the loadings of a PCA model doesn't affect the results much, since you still project the data onto the same optimal directions, i.e. the ones that maximize the variance of the resulting PCs.
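To illustrate the point about the principal-component method (a sketch I am adding, using the same iris data as above): the loading estimates for a given component should not change when you extract more components, unlike a maximum likelihood factor fit.
library(psych)
p1 <- principal(data, nfactors = 1, rotate = "none")
p2 <- principal(data, nfactors = 2, rotate = "none")
# the first column of loadings is the same regardless of nfactors (up to sign)
all.equal(abs(p1$loadings[, 1]), abs(p2$loadings[, 1]))  # expected TRUE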
library(psych)
ev <- eigen(R)  # R is a correlation matrix of DATA (DATA is assumed to have 2 columns)
ev$vectors %*% diag(ev$values) %*% t(ev$vectors)
pc <- princomp(scale(DATA, center = FALSE, scale = TRUE), cor = TRUE)
p <- principal(DATA, nfactors = 2, rotate = "none")  # nfactors matches the 2 columns

# eigenvalues
ev$values^0.5
pc$sdev
p$values^0.5

# eigenvectors - loadings
ev$vectors
pc$loadings
p$weights %*% diag(p$values^0.5)
pc$loadings %*% diag(pc$sdev)
p$loadings

# weights
ee <- diag(0, 2)
for (j in 1:2) {
  for (i in 1:2) {
    ee[i, j] <- ev$vectors[i, j] / p$values[j]^0.5
  }
}
ee

# scores
s <- as.matrix(scale(DATA, center = TRUE, scale = TRUE)) %*% ev$vectors
scale(s)
p$scores
scale(pc$scores)

Hierarchical Cluster using dissimilarity matrix R

I have a mixed-type data matrix Data_String of size 947 x 41 that contains numeric and categorical attributes.
I produced a 947 x 947 distance matrix using the daisy() function with the Gower distance measure in RStudio.
d <- daisy(Data_String, metric = "gower", stand = FALSE, type = list(symm = c("V1", "V13"), asymm = c("V8", "V9", "V10")))
I then applied hierarchical clustering to the dissimilarity matrix d.
# hclust
hc <- hclust(d, method="complete")
plot(hc)
rect.hclust(hc, 4)
cut <- cutree(hc, k = 1:5)
View(cut)
#Diana
d_as <- as.matrix(d)
DianaCluster <- diana(d_as, diss = TRUE, keep.diss = TRUE)
print(DianaCluster)
plot(DianaCluster)
The code above produced two dendrogram plots. (Note: I couldn't upload the images here since I don't have enough reputation points.)
I am struggling to understand the results. Can anyone please
1- suggest a way in R to simplify the interpretation of my results, or
2- explain how I can link them back to my source data, since all the results are based on the dissimilarity matrix?
Please take a look at https://stats.stackexchange.com/questions/130974/how-to-use-both-binary-and-continuous-variables-together-in-clustering. It explains how to use a Gower dissimilarity matrix with hclust. Hope this helps!
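Regarding point 2, here is a small sketch of my own for linking the clusters back to the source rows; cutree keeps the row order of the data passed to daisy, and the attribute names below are just placeholders from your own column naming:
# Attach the cluster labels to the source data, then profile the clusters
Data_String$hc_cluster <- cutree(hc, k = 4)
# e.g. distribution of a categorical attribute across clusters (V1 as an example)
table(cluster = Data_String$hc_cluster, Data_String$V1)
# cluster means of the numeric attributes
num_cols <- sapply(Data_String, is.numeric)
aggregate(Data_String[num_cols],
          by = list(cluster = Data_String$hc_cluster), FUN = mean)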

Using anova() on gamma distributions gives seemingly random p-values

I am trying to determine whether there is a significant difference between two Gamma distributions. One distribution has (shape, scale) = (shapeRef, scaleRef) while the other has (shape, scale) = (shapeTarget, scaleTarget). I tried to do an analysis of variance with the following code:
library(mgcv)  # for gam() (assuming the mgcv implementation)
n <- 10000
x <- rgamma(n, shape = shapeRef, scale = scaleRef)
y <- rgamma(n, shape = shapeTarget, scale = scaleTarget)
glmm1 <- gam(y ~ x, family = Gamma(link = log))
anova(glmm1)
The resulting p-values keep changing and can be anywhere from below 0.1 to above 0.9.
Am I going about this the wrong way?
Edit: I now use the following code instead:
f <- gl(2, n)
x <- rgamma(n, shape = shapeRef, scale = scaleRef)
y <- rgamma(n, shape = shapeTarget, scale = scaleTarget)
xy <- c(x, y)
anova(glm(xy ~ f, family = Gamma(link = log)), test = "F")
But every time I run it I get a different p-value.
You will indeed get a different p-value every time you run this, if you pick different realizations every time. Just like your data values are random variables, which you'd expect to vary each time you ran an experiment, so is the p-value. If the null hypothesis is true (which was the case in your initial attempts), then the p-values will be uniformly distributed between 0 and 1.
Function to generate simulated data:
simfun <- function(n = 100, shapeRef = 2, shapeTarget = 2,
                   scaleRef = 1, scaleTarget = 2) {
  f <- gl(2, n)
  x <- rgamma(n, shape = shapeRef, scale = scaleRef)
  y <- rgamma(n, shape = shapeTarget, scale = scaleTarget)
  xy <- c(x, y)
  data.frame(xy, f)
}
Function to run anova() and extract the p-value:
sumfun <- function(d) {
  aa <- anova(glm(xy ~ f, family = Gamma(link = log), data = d), test = "F")
  aa["f", "Pr(>F)"]
}
Try it out, 500 times:
set.seed(101)
r <- replicate(500,sumfun(simfun()))
The p-values are always very small (the difference in scale parameters is easily distinguishable), but they do vary:
par(las=1,bty="l") ## cosmetic
hist(log10(r),col="gray",breaks=50)
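By contrast, a quick sketch of the null case (identical shape and scale in both groups) shows the roughly uniform p-value distribution described above:
# Null case: both groups are Gamma(shape = 2, scale = 1)
set.seed(102)
r0 <- replicate(500, sumfun(simfun(shapeTarget = 2, scaleTarget = 1)))
hist(r0, col = "gray", breaks = 20)  # approximately flat on (0, 1)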
