I want to rotate my ordination so that the vector of an external variable lines up with the first dimension. When using an MDS as the ordination, I would use MDSrotate from vegan. Is there a way to do the same thing for a PCA?
Here is some dummy data.
df1 = data.frame(A=c(1,2,3,5.5,5,2), B=c(2,2,2,2,0.5,1), C=c(1.5,0,0,2.1,3,1), D=c(0.2,1,2,1,0.8,2), E=c(0.4,0.6,0.2,1.1,2,3))
env = data.frame(one=c(1.1,0,0.2,2,2.8,1.1), two=c(2,1,1,1.2,0.3,2))
library(vegan)
pca <- rda(df1, scale = TRUE)
summary(pca)
biplot(pca, scaling = 3)
(fit1 <- envfit(pca, env, perm = 999))
plot(fit1)
I want to rotate the scores so that 'one' lines up with PC1. Is this possible? I've searched for a long time and can't find anything that appears to be equivalent to MDSrotate. Thank you
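One idea I have been toying with (I am not certain it is equivalent to what MDSrotate does) is to rotate the site scores by hand in the PC1-PC2 plane, using the envfit arrow for 'one' to get the angle. A rough sketch:
library(vegan)
scrs  <- scores(pca, display = "sites", choices = 1:2)   # site scores on PC1-PC2
arrow <- scores(fit1, display = "vectors")["one", ]      # fitted vector for 'one'
theta <- atan2(arrow[2], arrow[1])                       # angle of 'one' relative to PC1
R     <- matrix(c(cos(theta), sin(theta),
                  -sin(theta), cos(theta)), 2, 2)        # rotates row vectors by -theta
scrs.rot <- scrs %*% R                                   # 'one' now points along axis 1
plot(scrs.rot, xlab = "Rotated axis 1", ylab = "Rotated axis 2", asp = 1)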
Consider this two-dimensional random walk:
X_t = X_{t-1} + Z_t,   Y_t = Y_{t-1} + W_t,   with X_0 = Y_0 = 0,
where Z_t, W_t, t = 1, 2, 3, … are independent and identically distributed standard normal random variables.
I am having trouble finding a way to simulate and plot the sample path of (X, Y) for t = 0, 1, …, 100. I was given a sample:
The following code is an example of the way I am used to plotting random walks in R:
set.seed(13579)
r<-sample(c(-1,1),size=100,replace=T,prob=c(0.5,0.5))
r <- c(10, r)
(w<-cumsum(r))
w<-as.ts(w)
plot(w,main="random walk")
I am not very sure how to achieve this.
The problem I am having is that this kind of code has a more "simple" result, with a line that only goes either up or down by -1 or +1:
while the plot I need to create also goes from left to right and vice versa.
Would you help me correct the code I know so that it fits my task, or suggest a smarter way to go about it? It would be greatly appreciated.
Cheers!
Instead of using sample, you need to use rnorm(100) to draw 100 samples from a standard normal distribution. Since the walk starts at [0, 0], we need to prepend a 0 and do a cumsum on the result, i.e. cumsum(c(0, rnorm(100))).
We want to do this for both the x and y variables, then plot. The whole thing can be done in a single line of code in base R:
plot(x = cumsum(c(0, rnorm(100))), y = cumsum(c(0, rnorm(100))), type = 'l')
I'm working on calculating the robust working range of a process. For this purpose I'm building models from DOE data and simulating data with a Monte Carlo approach. Filtering the data with a criterion on the response leads to an allowed space (see the plots for better visualization).
In the example below, there are 3 variables and the goal is to calculate the biggest possible square (aligned with the axes) within the allowed space. This would describe the working range of the process. The coding is just to get every variable onto the same range (-1 to 1).
library(tidyverse)
library(MASS)
library(ggplot2)
library(gridExtra)
library(rgl)
df<-data.frame(
X1=runif(100,0,2),
X2=runif(100,10,30),
X3=runif(100,5,75))%>%
mutate(Y1=2*X1-2*X2+X3)
f1<-Y1~X1+X2+X3
model1<- lm(f1, data=df)
m.c <- NULL
n=10000
for (k in 1:n)
{
X1=runif(1,0,2)
X2=runif(1,10,30)
X3=runif(1,5,75)
m.c = rbind(m.c, data.frame(X1, X2, X3))
}
m.c_coded<-m.c%>%
mutate(predict1=predict(model1, newdata = .))%>%
mutate(X1=(X1-1)/1)%>%
mutate(X2=(X2-20)/10)%>%
mutate(X3=(X3-40)/35)
Space<- m.c_coded%>%
filter(predict1<=0)
p1<-ggplot(Space)+
geom_point(aes(X1, X2))+
xlim(-1,1)+
ylim(-1,1)
p2<-ggplot(Space)+
geom_point(aes(X1, X3))+
xlim(-1,1)+
ylim(-1,1)
p3<-ggplot(Space)+
geom_point(aes(X2, X3))+
xlim(-1,1)+
ylim(-1,1)
grid.arrange(arrangeGrob(p1,p2,p3, nrow = 1), nrow = 1)
MODR_plot3D<-plot3d( x=Space$X1, y=Space$X2, z=Space$X3, type = "p",
xlim = c(-1,1), ylim = c(-1,1), zlim = c(-1,1)
)
There are specialized programs (DOE software) which can calculate this so-called design space, but I want to implement it in my R script. Sadly, I have no idea how to calculate the position (edges) of this square. My approach would be to find the maximum distance from the center of the square to the surface.
Does anyone have an idea how I can calculate this cube in a proper way? If possible, I would also like to extend this to the n-dimensional case.
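The closest I have come is a brute-force check for a fixed centre (cube_halfwidth is just a helper I made up, not from any package): it grows an axis-aligned cube in the coded space until a simulated point inside the cube violates the criterion (predict1 <= 0). The centre c(0, 0, 0) and the step size are arbitrary choices, and it neither finds the best centre nor scales well, so I suspect there is a better way:
cube_halfwidth <- function(centre, points, ok, steps = seq(0.01, 1, by = 0.01)) {
  best <- 0
  for (h in steps) {
    # which simulated points fall inside the cube of half-width h around the centre
    inside <- apply(points, 1, function(p) all(abs(p - centre) <= h))
    if (any(inside & !ok)) break   # a failing point falls inside the cube
    best <- h
  }
  best
}
pts <- as.matrix(m.c_coded[, c("X1", "X2", "X3")])
ok  <- m.c_coded$predict1 <= 0
cube_halfwidth(centre = c(0, 0, 0), points = pts, ok = ok)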
Here is a reproducible example:
set.seed(10)
pick <- sample(nrow(iris),nrow(iris)/2)
iris.training <- iris[pick,]
iris.testing <- iris[-pick,]
pca.training <- prcomp(iris.training[-5])
pca.testing <- prcomp(iris.testing[-5])
autoplot(pca.training,loadings.label=T,loadings=T)
autoplot(pca.testing,loadings.label=T,loadings=T)
Which produces the following output:
As one can see, PCA on iris.training and on iris.testing produces very similar biplots, but the first principal component has reversed its sign; they are mirrored. Is it possible to force a 180 degree rotation on the two components?
You are not returning the rotated variables. The changed code is below; notice retx=TRUE.
set.seed(10)
pick <- sample(nrow(iris),nrow(iris)/2)
iris.training <- iris[pick,]
iris.testing <- iris[-pick,]
pca.training <- prcomp(iris.training[-5], retx=TRUE)
pca.testing <- prcomp(iris.testing[-5], retx=TRUE)
autoplot(pca.training,loadings.label=TRUE,loadings=TRUE)
autoplot(pca.testing,loadings.label=TRUE,loadings=TRUE)
It produced the following outputs for training and testing.
I'm assuming autoplot is the function from the ggfortify package. There are probably two ways to do this. The easiest is to just ask to reverse the x axis, by writing
autoplot(pca.testing,loadings.label=TRUE,loadings=TRUE) + scale_x_reverse()
Notice that this didn't change any values: the X axis now runs from positive to negative instead of the usual direction.
The second is to modify the pca.testing object to swap the signs on the x axis.
This is statistically valid: PCA doesn't determine the signs of any components. But it's a bit tricky, because the signs show up in two places: the x component for the data points, and the rotation component for the arrows:
pca.testing$x[,1] <- - pca.testing$x[,1]
pca.testing$rotation[,1] <- -pca.testing$rotation[,1]
autoplot(pca.testing,loadings.label=TRUE,loadings=TRUE)
Not related to your question, but some advice: don't use T, use TRUE; otherwise the next time you have temperature data, you may inadvertently change the value and cause havoc with your analysis.
I'm working on some bioacoustic analysis and got stuck on an issue that I believe can be worked out mathematically. I'll use a sound sample from the seewave package:
library(seewave)
library(tuneR)
data(tico)
By storing a spectrogram (i.e. a graphic representation of the sound wave tico) in an R object, we can now deal with the wave file computationally.
s <- spectro(tico, plot=F)
class(s)
>[1] "list"
length(s)
# [1] 3
The object created, s, consists of two numerical vectors, x = s$time and y = s$freq, representing the X and Y axes respectively, and a matrix z = s$amp of amplitude values whose dimensions match the lengths of x and y. z is, in effect, a 3D surface that can be plotted using persp3D (plot3D), plot_ly (plotly) or plot3d (rgl). Alternatively, the wave file can be plotted in 3D using seewave if one wishes to visualize it as an interactive rgl plot.
spectro3D(tico)
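Coming back to the plot3D route mentioned above, a minimal persp3D sketch (the viewing angles theta and phi are arbitrary choices):
library(plot3D)
# t(s$amp) has one row per time step and one column per frequency bin,
# matching the x (time) and y (frequency) vectors
persp3D(x = s$time, y = s$freq, z = t(s$amp), theta = 30, phi = 25,
        xlab = "Time (s)", ylab = "Frequency (kHz)", zlab = "Amplitude (dB)")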
That being said, the analysis I'm conducting aims to calculate contours of relative amplitude:
con <- contourLines(x=s$time, y=s$freq, z=t(s$amp), levels=seq(-25, -25, 1))
Select the longest contour:
n.con <- numeric(length(con))
for(i in 1:length(con)) n.con[i] <- length(con[[i]]$x)
n.max <- which.max(n.con)
con.max <- con[[n.max]]
And then plot the selected contour against the spectrogram of tico:
spectro(tico, grid=F, osc=F, scale=F)
polygon(x=con.max$x, y=con.max$y, lwd=2)
Now comes the tricky part. I must find a way to "subset" the matrix of amplitude values s$amp using the coordinates of the longest contour con.max. What I aim to achieve is a new matrix containing only the amplitude values inside the polygon. The remaining parts of the spectrogram should then appear as blank spaces.
One approach I thought could work would be to create a loop that replaces every value outside the polygon with a given amplitude value (e.g. -25 dB). I once used a similar approach to remove the values below -30 dB and it worked out perfectly:
for(i in 1:length(s$amp)){if(s$amp[i] == -Inf |s$amp[i] <= -30)
{s$amp[i] <- -30}}
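Extending that idea to the polygon, the closest I have gotten is a rough sketch using point.in.polygon from the sp package (I am not sure this is the right tool, and the -30 dB fill value is arbitrary):
library(sp)
# s$amp has one row per frequency and one column per time step,
# so build the matching grid of (time, frequency) coordinates
grd <- expand.grid(time = s$time, freq = s$freq)
inside <- point.in.polygon(grd$time, grd$freq, con.max$x, con.max$y) > 0
# reshape to the layout of t(s$amp) (time rows, frequency columns),
# then transpose so the mask lines up with s$amp
mask <- t(matrix(inside, nrow = length(s$time), ncol = length(s$freq)))
amp.masked <- s$amp
amp.masked[!mask] <- -30   # blank out everything outside the contour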
Another thought would be to create a new matrix with the same dimensions as s$amp, subset s$amp using the coordinates of the contour, and then place the subset into the new matrix. Roughly:
mt <- matrix(-30, nrow=nrow(s$amp), ncol = ncol(s$amp))
sb <- s$amp[con.max$y, con.max$x]
new.mt <- c(mt, sb)
s$amp <- new.mt
I'll appreciate any help.
I wish to present a distance matrix in an article I am writing, and I am looking for good visualization for it.
So far I have come across balloon plots (I used one here, but I don't think it will work in this case), heatmaps (here is a nice example, but they don't allow one to present the numbers in the table; correct me if I am wrong. Maybe half the table in colors and half with numbers would be cool), and lastly correlation ellipse plots (here is some code and an example; using a shape is cool, but I am not sure how to use it here).
There are also various clustering methods, but they aggregate the data (which is not what I want), whereas I want to present all of the data.
Example data:
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dist(nba[1:20, -1])
I am open to ideas.
You could also use force-directed graph drawing algorithms to visualize a distance matrix, e.g.
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dist_m <- as.matrix(dist(nba[1:20, -1]))
dist_mi <- 1/dist_m # one over, as qgraph takes similarity matrices as input
library(qgraph)
jpeg('example_forcedraw.jpg', width=1000, height=1000, unit='px')
qgraph(dist_mi, layout='spring', vsize=3)
dev.off()
Tal, this is a quick way to overlay text on a heatmap. Note that this relies on image rather than heatmap, as the latter offsets the plot, making it more difficult to put the text in the correct position.
To be honest, I think this graph shows too much information, making it a bit difficult to read... you may want to write only specific values.
Also, another quick option is to save your graph as a PDF, import it into Inkscape (or similar software), and manually add the text where needed.
Hope this helps
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dst <- dist(nba[1:20, -1])
dst <- data.matrix(dst)
dim <- ncol(dst)
image(1:dim, 1:dim, dst, axes = FALSE, xlab="", ylab="")
axis(1, 1:dim, nba[1:20,1], cex.axis = 0.5, las=3)
axis(2, 1:dim, nba[1:20,1], cex.axis = 0.5, las=1)
text(expand.grid(1:dim, 1:dim), sprintf("%0.1f", dst), cex=0.6)
A Voronoi Diagram (a plot of a Voronoi Decomposition) is one way to visually represent a Distance Matrix (DM).
They are also simple to create and plot using R--you can do both in a single line of R code.
If you're not familiar with this aspect of computational geometry, the relationship between the two (VD & DM) is straightforward, though a brief summary might be helpful.
A distance matrix--i.e., a 2D matrix showing the distance between a point and every other point--is an intermediate output during kNN computation (i.e., k-nearest neighbors, a machine learning algorithm which predicts the value of a given data point based on the weighted average value of its 'k' closest neighbors, distance-wise, where 'k' is some integer, usually between 3 and 5).
kNN is conceptually very simple--each data point in your training set is in essence a 'position' in some n-dimensional space, so the next step is to calculate the distance between each point and every other point using some distance metric (e.g., Euclidean, Manhattan, etc.). While the training step--i.e., constructing the distance matrix--is straightforward, using it to predict the value of new data points is practically encumbered by the data retrieval--finding the closest 3 or 4 points from among several thousand or several million scattered in n-dimensional space.
Two data structures are commonly used to address that problem: kd-trees and Voronoi decompositions (aka "Dirichlet tessellation").
A Voronoi decomposition (VD) is uniquely determined by a distance matrix--i.e., there's a 1:1 map; so indeed it is a visual representation of the distance matrix, although again, that's not their purpose--their primary purpose is the efficient storage of the data used for kNN-based prediction.
Beyond that, whether it's a good idea to represent a distance matrix this way probably depends most of all on your audience. To most, the relationship between a VD and the antecedent distance matrix will not be intuitive. But that doesn't make it incorrect--if someone without any statistics training wanted to know if two populations had similar probability distributions and you showed them a Q-Q plot, they would probably think you haven't engaged their question. So for those who know what they are looking at, a VD is a compact, complete, and accurate representation of a DM.
So how do you make one?
A Voronoi decomp is constructed by selecting (usually at random) a subset of points from within the training set (this number varies by circumstances, but if we had 1,000,000 points, then 100 is a reasonable number for this subset). These 100 data points are the Voronoi centers ("VC").
The basic idea behind a Voronoi decomp is that rather than having to sift through the 1,000,000 data points to find the nearest neighbors, you only have to look at these 100; then once you find the closest VC, your search for the actual nearest neighbors is restricted to just the points within that Voronoi cell. Next, for each data point in the training set, calculate the VC it is closest to. Finally, for each VC and its associated points, calculate the convex hull--conceptually, just the outer boundary formed by that VC's assigned points that are farthest from the VC. This convex hull around the Voronoi center forms a "Voronoi cell." A complete VD is the result from applying those three steps to each VC in your training set. This will give you a perfect tessellation of the surface (see the diagram below).
To calculate a VD in R, use the tripack package. The key function is 'voronoi.mosaic', to which you just pass in the x and y coordinates separately--the raw data, not the DM--then you can just pass the result to 'plot'.
library(tripack)
plot(voronoi.mosaic(runif(100), runif(100), duplicate="remove"))
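To try it on the distance matrix from the question, one option (an assumption on my part; any 2-D embedding of the points would do) is to get 2-D coordinates from the distance matrix via classical MDS (cmdscale) and feed those to voronoi.mosaic:
library(tripack)
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
xy  <- cmdscale(dist(nba[1:20, -1]), k = 2)   # 2-D embedding of the distance matrix
plot(voronoi.mosaic(xy[, 1], xy[, 2], duplicate = "remove"))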
You may want to consider looking at a 2-D projection of your matrix (multidimensional scaling). Here is a link to how to do it in R.
Otherwise, I think you are on the right track with heatmaps. You can add in your numbers without too much difficulty. For example, building off of Learn R:
library(ggplot2)
library(plyr)
library(arm)
library(reshape2)
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
nba$Name <- with(nba, reorder(Name, PTS))
nba.m <- melt(nba)
nba.m <- ddply(nba.m, .(variable), transform, rescale = rescale(value))
(p <- ggplot(nba.m, aes(variable, Name)) +
   geom_tile(aes(fill = rescale), colour = "white") +
   scale_fill_gradient(low = "white", high = "steelblue") +
   geom_text(aes(label = round(rescale, 1))))
A dendrogram based on a hierarchical cluster analysis can be useful:
http://www.statmethods.net/advstats/cluster.html
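For the example data in the question, a minimal sketch (complete linkage is simply hclust's default; any linkage would work):
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
hc  <- hclust(dist(nba[1:20, -1]))
plot(hc, labels = nba$Name[1:20], main = "Cluster dendrogram of the distance matrix")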
A 2-D or 3-D multidimensional scaling analysis in R:
http://www.statmethods.net/advstats/mds.html
If you want to go into 3+ dimensions, you might want to explore ggobi / rggobi:
http://www.ggobi.org/rggobi/
In the book "Numerical Ecology" by Borcard et al. 2011 they used a function called *coldiss.r *
You can find it here: http://ichthyology.usm.edu/courses/multivariate/coldiss.R
It color codes the distances and even orders the records by dissimilarity.
Another good package would be the seriation package (see the sketch after the reference below).
Reference:
Borcard, D., Gillet, F. & Legendre, P. (2011) Numerical Ecology with R. Springer.
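A minimal sketch of the seriation option mentioned above (dissplot reorders the dissimilarity matrix with the package's default method and displays it as an image):
library(seriation)
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
d   <- dist(nba[1:20, -1])
dissplot(d)   # reordered image of the distance matrix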
A solution using Multidimensional Scaling
data = read.csv("http://datasets.flowingdata.com/ppg2008.csv", sep = ",")
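# build squared Euclidean distances by hand from the Gram matrix (tcrossprod):
# ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 * <x_i, x_j>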
dst = tcrossprod(as.matrix(data[,-1]))
dst = matrix(rep(diag(dst), 50L), ncol = 50L, byrow = TRUE) +
matrix(rep(diag(dst), 50L), ncol = 50L, byrow = FALSE) - 2*dst
library(MASS)
mds = isoMDS(dst)
#remove {type = "n"} to see dots
plot(mds$points, type = "n", pch = 20, cex = 3, col = adjustcolor("black", alpha = 0.3), xlab = "X", ylab = "Y")
text(mds$points, labels = rownames(data), cex = 0.75)