I did some digging around, but I'm still very new to the concept of Latin hypercube sampling. I found this example, which uses the lhs package:
library(lhs)
set.seed(1)
randomLHS(5,2)
           [,1]       [,2]
[1,] 0.84119491 0.89953985
[2,] 0.03531135 0.74352370
[3,] 0.33740457 0.59838122
[4,] 0.47682074 0.07600704
[5,] 0.75396828 0.35548904
From my understanding, the entries in the resulting matrix are the coordinates of 5 points that will be used to determine combinations of two continuous variables.
I'm trying to do a simulation with 5 categorical variables. The number of levels per variable ranges from 2 to 5, which results in 2 x 3 x 4 x 2 x 5 = 240 scenarios. I'd like to cut that down as much as possible, so I was thinking of using a Latin hypercube, but I'm confused about how to proceed. Any ideas would be much appreciated!
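For what it's worth, here is a rough sketch of what I was imagining (I'm not sure it's statistically sound): draw a Latin hypercube with one column per factor and bin each uniform coordinate into that factor's levels.
library(lhs)
set.seed(1)
n_runs   <- 20                                # how many scenarios I'd like to keep
n_levels <- c(2, 3, 4, 2, 5)                  # levels of my five factors
u <- randomLHS(n_runs, length(n_levels))      # uniform draws in (0, 1)
# bin each coordinate into that factor's levels: ceiling(u * k) gives 1..k
design <- sapply(seq_along(n_levels), function(j) ceiling(u[, j] * n_levels[j]))
colnames(design) <- paste0("factor", seq_along(n_levels))
head(design)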
Also, do you know of any good resources that explain how to analyze the results from Latin hypercube sampling?
I'd recommend sticking with the full factorial with 240 design points, for the following reasons.
Heck, this is what computers are for: automating tedious computational tasks. 240 design points is nothing when you're running this on a computer! You can easily automate the process with nested loops iterating through the levels, one loop per factor. Don't forget an innermost loop for replications. If each simulation takes more than a minute or two, break the work across multiple cores or multiple machines. One of my students recently did this for his MS thesis work and was able to run more than a million simulated experiments over a weekend.
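For instance, a minimal sketch in R, where run_simulation() is a hypothetical stand-in for your model and the level counts are the ones from your question:
# one row per design point: 2 x 3 x 4 x 2 x 5 = 240 combinations
scenarios <- expand.grid(A = 1:2, B = 1:3, C = 1:4, D = 1:2, E = 1:5)
n_reps <- 10                                  # replications per design point
out <- vector("list", nrow(scenarios) * n_reps)
k <- 0
for (i in seq_len(nrow(scenarios))) {         # one pass per design point
  for (r in seq_len(n_reps)) {                # innermost loop: replications
    k <- k + 1
    y <- run_simulation(scenarios[i, ])       # hypothetical: your simulation goes here
    out[[k]] <- cbind(scenarios[i, ], rep = r, y = y)
  }
}
results <- do.call(rbind, out)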
With continuous factors, you generally assume some degree of smoothness in the response surface and infer/project the response between adjacent design points based on regression. With categorical data, inference isn't valid for excluded factor combinations, and interactions may very well be the dominant effects. Unless you do the full factorial, the combinations you omit may or may not be the most important ones, but the point is that you'll never know if you didn't sample there.
In general, you use the same analysis tools you would use for any other kind of sampling: regression, logistic regression, ANOVA, partition trees, and so on. For categorical factors, I'm a fan of partition trees.
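For the partition-tree option, a sketch with the rpart package, continuing the hypothetical results data frame from the sketch above:
library(rpart)
# coerce the five factors to R factors so splits are on levels, not thresholds
results[c("A", "B", "C", "D", "E")] <-
  lapply(results[c("A", "B", "C", "D", "E")], factor)
fit <- rpart(y ~ A + B + C + D + E, data = results)
printcp(fit)             # complexity table: which splits actually matter
plot(fit); text(fit)     # quick look at the fitted tree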
I am trying to carry out hierarchical cluster analysis (based on Ward's method) on a large dataset (thousands of records and 13 variables) representing multi-species observations of marine predators, to identify possible significant clusters in species composition.
Each record has date, time, etc., and presence/absence data (0/1) for each species.
I attempted hierarchical clustering with the function pvclust. I transposed the data (pvclust works on transposed tables), then I ran pvclust on the data, selecting Jaccard distances ("binary" in R) as the distance measure (suitable for species presence/absence data) and Ward's method ("ward.D2"). I used "parallel = TRUE" to reduce computation time. However, with the default nboot = 1000, my computer was not able to finish the computation in hours and finally I got an error, so I tried with a lower nboot (100).
I cannot provide my dataset here, and I do not think it makes sense to provide a small test dataset, as one of the main issues here seems to be the size itself of the dataset. However, I am providing the lines of code I used for the transposition, clustering and plotting:
library(pvclust)

tdata <- t(data)                      # pvclust clusters columns, so transpose first
cluster <- pvclust(tdata, method.hclust = "ward.D2", method.dist = "binary",
                   nboot = 100, parallel = TRUE)
plot(cluster, labels = FALSE)
This is the dendrogram I obtained (never mind the confusion at the lower levels due to overlap of branches).
As you can see, the p-values at the upper branches of the dendrogram all seem to be 0.
Now, I understand that my data may not be perfect, but I still think there is something wrong with the method I am using, as I would not expect all these values to be zero even with very low significance in the clusters.
So my questions would be:
Is there anything I got wrong in the pvclust function itself?
Could my low nboot (due to a "weak" computer) be a reason for the non-significance of my results?
Are there other functions in R I could try for hierarchical clustering that also deliver p-values?
Thanks in advance!
EDIT: I have tried to run the same code on a subset of 500 records with nboot = 1000. This worked in a reasonable computation time, but the output is still not very satisfying; see the second dendrogram, obtained for a subset of 500 records with nboot = 1000.
I'm trying to replicate some Stata results in R and am having a lot of trouble. Specifically, I want to recover the same eigenvalues as Stata does in exploratory factor analysis. To provide a specific example, the factor help in Stata uses bg2 data (something about physician costs) and gives you the following results:
webuse bg2
factor bg2cost1-bg2cost6
(obs=568)
Factor analysis/correlation Number of obs = 568
Method: principal factors Retained factors = 3
Rotation: (unrotated) Number of params = 15
--------------------------------------------------------------------------
Factor | Eigenvalue Difference Proportion Cumulative
-------------+------------------------------------------------------------
Factor1 | 0.85389 0.31282 1.0310 1.0310
Factor2 | 0.54107 0.51786 0.6533 1.6844
Factor3 | 0.02321 0.17288 0.0280 1.7124
Factor4 | -0.14967 0.03951 -0.1807 1.5317
Factor5 | -0.18918 0.06197 -0.2284 1.3033
Factor6 | -0.25115 . -0.3033 1.0000
--------------------------------------------------------------------------
LR test: independent vs. saturated: chi2(15) = 269.07 Prob>chi2 = 0.0000
I'm interested in the eigenvalues in the first column of the table. When I use the same data in R, I get the following results:
library(foreign)
bg2 = read.dta("bg2.dta")
eigen(cor(bg2))
$values
[1] 1.7110112 1.4036760 1.0600963 0.8609456 0.7164879 0.6642889 0.5834942
As you can see, these values are quite different from Stata's results. It is likely that the two programs are using different means of calculating the eigenvalues, but I've tried a wide variety of different methods of extracting the eigenvalues, including most (if not all) of the options in R commands fa, factanal, principal, and maybe some other R commands. I simply cannot extract the same eigenvalues as Stata. I've also read through Stata's manual to try and figure out exactly what method Stata uses, but couldn't figure it out with enough specificity.
I'd love any help! Please let me know if you need any additional information to answer the question.
I would advise against carrying out a factor analysis on all the variables in the bg2 data as one of the variables is clinid, which is an arbitrary identifier 1..568 and carries no information, except by accident.
Sensibly or not, you are not using the same data, as you worked on the 6 cost variables in Stata and those PLUS the identifier in R.
Another way to notice that would be to spot that you got 6 eigenvalues in one case and 7 in the other.
Nevertheless the important principle is that eigen(cor(bg2)) is just going to give you the eigenvalues from a principal component analysis based on the correlation matrix. So you can verify that pca in Stata would match what you report from R.
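For example, a quick sketch (assuming the variable names shown in the Stata output):
library(foreign)
bg2 <- read.dta("bg2.dta")
costs <- bg2[, paste0("bg2cost", 1:6)]   # the 6 cost items, without clinid
eigen(cor(costs))$values                 # 6 eigenvalues of the correlation matrix,
                                         # which is what -pca- on these variables
                                         # reports in Stata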
So far, so clear.
But your larger question remains. I don't know how to mimic Stata's (default) factor analysis in R. You may need a factor analysis expert, if any hang around here.
In short, PCA is not equal to principal axis method factor analysis.
Different methods of calculating eigenvalues are not the issue here. I'd bet that given the same matrix Stata and R match up well in reporting eigenvalues. The point is that different techniques mean different eigenvalues in principle.
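If an R user wants to chase the factor analysis side, one direction that is often suggested is principal axis factoring in the psych package; I can't vouch that it reproduces Stata's defaults:
library(psych)
# 'costs' is the data frame of the six cost items defined above
pa <- fa(cor(costs), nfactors = 3, fm = "pa", rotate = "none", n.obs = 568)
pa$values     # eigenvalues of the reduced (common factor) correlation matrix
pa$e.values   # eigenvalues of the original correlation matrix, for comparison
# NB: I have not checked whether these match Stata's -factor- defaults.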
P.S. I am not an R person, but I think what you call R commands are strictly R functions. In turn I am open to correction on that small point.
I am comparing several variables using R: 8 variables, each stored as a vector of length 1000. That is, a 1000 x 8 matrix whose 8 columns represent the 8 variables.
Then I call
boxplot(test)
and I get something like this:
The mean values of the 8 variables are very close to each other, which makes comparison and interpretation very hard. Can I include all the outliers in my plot? Then the whole range would be easier to compare. Are there any other suggestions for distinguishing these variables?
Here is the boxplot in question (since the OP doesn't have the rep to post pictures):
It looks like the medians (and likely also the means) are pretty much identical, but the variances differ between the eight categories, with category 1 having the lowest and 8 the highest variance. Depending on the real question involved, these two pieces of information (similar median/mean, different variance) may already be enough.
If you want a formal significance test whether the variances are equal, you can use Hartley's or Bartlett's test. If you want to formally test equality of means with unequal variances (so ANOVA is not appropriate), look here.
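In R, a minimal sketch, assuming your 1000 x 8 matrix is called test; here I use Welch's ANOVA via oneway.test as one way to compare means without assuming equal variances:
# long format: one column of values, one grouping factor for the 8 variables
long <- data.frame(value = as.vector(test),
                   group = factor(rep(1:8, each = nrow(test))))
bartlett.test(value ~ group, data = long)   # H0: the 8 variances are equal
oneway.test(value ~ group, data = long,     # Welch ANOVA: compares means without
            var.equal = FALSE)              #   assuming equal variances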
I want to cluster my data (kmeans or hclust) in R. My data are ordinal: Likert-scale ratings measuring the causes of cost escalation. I have 41 causes ("variables"), each rated from 1 to 5, where 1 is no effect and 5 is a major effect, and about 160 observations (respondents who rank the causes). Any help on how to cluster the 41 causes based on the observations would be appreciated. Do I have to convert the scale to percentages or z-scores before clustering, or anything else that would help? I really need your help! Here is the data to play with: https://docs.google.com/spreadsheet/ccc?key=0AlrR2eXjV8nXdGtLdlYzVk01cE96Rzg2NzRpbEZjUFE&usp=sharing
I want to cluster the variables (the columns) in terms of similarity of occurrence across observations. I followed the code at statmethods.net/advstats/cluster.html, but I couldn't cluster the variables that way. I also followed the work at mattpeeples.net/kmeans.html#help, but I don't know why he converts the data to percentages and then Z-score standardizes them.
It isn't clear to me whether you want to cluster the rows (the observations) in terms of similarity in the variables, or cluster the variables (the columns) in terms of similarity of occurrence across observations.
Anyway, see package cluster. This is a recommended package that comes with all R installations.
Read ?daisy for details of what is done with ordinal data. This metric can be used in functions such as agnes (for hierarchical clustering) or pam (for partitioning about medoids, a more robust version of k-means).
By default, these will cluster the rows/observations. Simply transpose the data object using t() if you want to cluster the columns (variables), although that may well mess up the data depending on how you have stored them.
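Here is a minimal sketch of that workflow, assuming the 160 x 41 Likert responses sit in a data frame called likert:
library(cluster)
# treat each 1-5 item as an ordered factor so daisy() handles it as ordinal
likert[] <- lapply(likert, ordered, levels = 1:5)
d <- daisy(likert, metric = "gower")   # dissimilarities between the 160 observations
ag <- agnes(d, method = "ward")        # hierarchical clustering
plot(ag, which.plots = 2)              # dendrogram
pm <- pam(d, k = 4, diss = TRUE)       # partitioning around medoids; k = 4 is just
pm$clustering                          #   an illustrative choice
# to cluster the 41 causes instead, you would transpose first (t(likert)),
# with the caveat above that this changes how the values are treated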
Converting the data to percentages is called normalization; it puts all the variables in the range 0 to 1.
If the data are not normalized, you run the risk of biasing the analysis toward dimensions with large values.
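For example, a quick sketch (df stands in for your data):
# min-max normalization: rescale each column to the range 0-1
minmax <- function(x) (x - min(x)) / (max(x) - min(x))
norm01 <- as.data.frame(lapply(df, minmax))   # 'df' stands in for your data frame
# z-score standardization: mean 0, standard deviation 1 for each column
zscores <- scale(df)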
The correlation matrix is so large (50000 by 50000) that it is not efficient for calculating what I want. What I want to do is break it down into groups and treat each as a separate correlation matrix. However, how do I deal with the dependence between those smaller correlation matrices? I have been researching online all day but nothing has come up. There should be some algorithm out there related to approximating large correlation matrices like this, right?
Even a 4 x 4 correlation matrix is sensitive to errors. In any case, here are some links that might help:
http://www.oxford-man.ox.ac.uk/documents/papers/2011OMI08_Sheppard.pdf
http://www.kevinsheppard.com/images/4/47/Chapter8.pdf
http://arxiv.org/PS_cache/arxiv/pdf/1009/1009.5331v1.pdf
http://cran.r-project.org/web/packages/tawny/index.html
http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf
http://nurometic.com/quantitative-finance/tawny/portfolio-optimization-with-tawny
http://quantivity.wordpress.com/2011/04/17/minimum-variance-portfolios/