I need to plot an ordination plot showing only, let's say, the 20 most abundant species.
I tried taking the column sums of the species and then selecting only those above a certain total:
library(vegan)
data(dune)
abu <- colSums(dune)
abu
sol <- metaMDS(dune)
sol
plot(sol, type="text", display="species", select = abu > 40)
I get this error: select is not a graphical parameter
I would expect to see only a small number of species, but that does not happen.
How do you show only a small number of species in the NMDS plot?
This is not straightforward. You are getting an error because select is not a parameter of this plot function. Unfortunately, the result of the analysis is not a data.frame that could be handled easily (e.g. with the tidyverse), and even more unfortunately, the plot() function called here is not the standard plot but a method defined specifically for objects of this class. The authors of that method did not foresee your need, so we must make the plot manually. But to do that, we need to understand what it is plotting and how.
Let us find out more about the object sol:
class(sol)
# [1] "metaMDS" "monoMDS"
methods(class="metaMDS")
# [1] goodness nobs plot points print scores sppscores<- text
Oh good, we have a plot method. After a moment of digging, we find it in the vegan package (it is not exported, so we need to access it as vegan:::plot.metaMDS). It turns out to be a wrapper around a function called ordiplot. We can open the function with edit() to figure out what it is doing. Stripped of the code we do not need, it essentially boils down to the following:
Y <- scores(sol, display="species")
plot(Y, type="n")
text(Y[,1], Y[,2], rownames(Y), col="red")
This is, more or less, your plot. Choosing the species to show is now trivial, but first we must make sure that rows of Y are in the same order as columns of dune:
all(colnames(dune) == rownames(Y))
Y.sel <- Y[colSums(dune) > 40, ]
plot(Y.sel[,1], Y.sel[,2], type="n", xlim=c(-.8, .8), ylim=c(-.4, .4))
text(Y.sel[,1], Y.sel[,2], rownames(Y.sel), col="red")
We can of course make a much nicer plot, for example with ggplot (it is definitely possible to make a much nicer plot with base R as well). We could also show the abundance of the plants using the size aesthetic:
library(ggplot2)
library(ggrepel)
Y <- data.frame(Y)
Y$abundance <- colSums(dune)
Y$labels <- rownames(Y)
ggplot(Y, aes(x=NMDS1, y=NMDS2, size=abundance)) +
geom_point() + geom_text_repel(aes(label=labels)) +
theme_minimal()
To filter the species by abundance, we now can do the following:
library(tidyverse)
Y %>% filter(abundance > 40) %>%
ggplot(aes(x=NMDS1, y=NMDS2, size=abundance)) +
geom_point() + geom_text_repel(aes(label=labels)) +
theme_minimal()
Related
I simulated 10 realisations of 200 observations each from an MA(3) process and plotted them. The code I used is the following, and the result is in the picture:
Now I want to estimate the autocorrelation function (ACF) for each realisation and plot those in the same plot as seen below:
I want to plot them like this, as lines of different colours joining the points, not the classical ACF plot with the spikes. I tried some things but I cannot figure out how to do it. Any help, please?
The code I used for the first part (I am working with R in RStudio):
sim1 <- replicate(10,arima.sim(model = list(ma=c(0.7,-0.7,-0.4), order=c(0,0,3)), n = 200))
matplot(sim1, lty=1, type="l", col=rainbow(11))
I don't know how to do this without using ggplot2, so here it is:
First, some adaptations. I decreased the number of simulations to 3, so it's easier to understand the graphs; I added a Time column to serve as the x-axis; I created the ACF for each simulation with apply and stored it in a new data frame; and I pivoted the data to long format for use with ggplot:
sim1 <- cbind("Time"=1:200, replicate(3,arima.sim(model = list(ma=c(0.7,-0.7,-0.4), order=c(0,0,3)), n = 200)))
acfsim1 <- cbind("Lag"=1:200, apply(sim1[,2:4], 2, function(x){acf(x, 200, plot=FALSE)$acf}))
dfsim1 = tidyr::pivot_longer(as.data.frame(sim1), cols=2:4)
dfacfsim1 = tidyr::pivot_longer(as.data.frame(acfsim1), cols=2:4)
Now for the plots. First the one you already had:
library(ggplot2)
ggplot(dfsim1, aes(y=value, color=name, x=Time)) +
geom_line()
Output:
Now the graph that you want, first using a line graph:
ggplot(dfsim1, aes(y=value, color=name, x=Lag)) +
geom_line(data=dfacfsim1)
Output:
Now with a segment graph:
ggplot(dfsim1, aes(y=value, color=name, x=Lag)) +
geom_segment(data=dfacfsim1, aes(xend=Lag, yend=0))
Output:
I have a set of times that I would like to plot on a histogram.
Toy example:
df <- data.frame(time = c(1,2,2,3,4,5,5,5,6,7,7,7,9,9, ">10"))
The problem is that one value is ">10" and refers to the number of times that more than 10 seconds were observed. The other time points are all numbers referring to the actual time. Now, I would like to create a histogram that treats all numbers as numeric and combines them in bins when appropriate, while plotting the counts of the ">10" at the side of the distribution, but not in a separate plot. I have tried to call geom_histogram twice, once with the continuous data and once with the discrete data in a separate column but that gives me the following error:
Error: Discrete value supplied to continuous scale
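The exact code wasn't included, but a minimal reconstruction of that kind of attempt (the numeric_time and over10 columns are purely illustrative) fails the same way:
library(ggplot2)
df$numeric_time <- as.numeric(as.character(df$time)) # ">10" becomes NA
df$over10 <- ifelse(df$time == ">10", ">10", NA) # hypothetical second, discrete column
ggplot(df) +
geom_histogram(aes(x = numeric_time)) + # first layer sets up a continuous x scale
geom_bar(aes(x = over10)) # discrete values on that scale trigger the error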
Happy to hear suggestions!
Here's a somewhat involved solution, but I believe it best answers your question: you want to place a bar representing the ">10" values (i.e. the non-numeric ones) next to a typical histogram plot. Critically, you want to maintain the "binning" associated with a histogram, which means you are not looking to simply switch to a discrete scale and represent the histogram with an ordinary barplot.
The Data
Since you want to retain histogram features, I'm going to use an example dataset that is a bit more involved than the one you gave us. I'm just going to specify a uniform distribution (n=100) with 20 ">10" values thrown in.
set.seed(123)
df<- data.frame(time=c(runif(100,0,10), rep(">10",20)))
As prepared, df$time is a character vector, but for a histogram we need it to be numeric. We're simply going to force it to numeric and accept that the ">10" values will be coerced to NA. This is fine, since in the end we're just going to count those NA values and represent them with a bar. While I'm at it, I'm creating a subset of df that will be used for the bar representing our NAs (">10"), using the count() function, which returns a data frame consisting of one row and one column: df_na$n, which is 20 in this case.
library(dplyr)
df$time <- as.numeric(df$time) #force numeric and get NA for everything else
df_na <- count(subset(df, is.na(time)))
The Plot(s)
For the actual plot, you are asking to create a combination of (1) a histogram, and (2) a barplot. These are not the same plot, but more importantly, they cannot share the same axis, since by definition, the histogram needs a continuous axis and "NA" values or ">10" is not a numeric/continuous value. The solution here is to make two separate plots, then combine them with a bit of magic thanks to cowplot.
The histogram is created quite easily. I'm saving the number of bins for demonstration purposes later. Here's the basic plot:
bin_num <- 12 # using this later
library(ggplot2)
p1 <- ggplot(df, aes(x=time)) + theme_classic() +
geom_histogram(color='gray25', fill='blue', alpha=0.3, bins=bin_num)
Thanks to the subsetting previously, the barplot for the NA values is easy too:
p2 <- ggplot(df_na, aes(x=">10", y=n)) + theme_classic() +
geom_col(color='gray25', fill='red', alpha=0.3)
Yikes! That looks horrible, but have patience.
Stitching them together
You can simply run plot_grid(p1, p2) and you get something workable... but it leaves quite a lot to be desired:
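In code, that naive combination is just the following (note that cowplot has to be loaded; it is not attached anywhere else in this answer):
library(cowplot)
plot_grid(p1, p2)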
There are problems here. I'll enumerate them, then show you the final code for how I address them:
Need to remove some elements from the NA barplot. Namely, the y axis entirely and the title for x axis (but it can't be NULL or the x axes won't line up properly). These are theme() elements that are easily removed via ggplot.
The NA barplot is taking up WAY too much room. Need to cut the width down. We address this by accessing the rel_widths= argument of plot_grid(). Easy peasy.
How do we know how to set the upper limit of the y scale? This is a bit more involved, since it will depend on the ..count.. stat for p1 as well as the number of NA values. You can access the maximum count for a histogram using ggplot_build(), which is part of ggplot2 (see the quick illustration after this list).
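As a quick illustration of that last point (a sketch; it assumes p1 has already been created as above):
# the layer data computed by ggplot contains one row per bin, including its count
head(ggplot_build(p1)$data[[1]][, c("x", "xmin", "xmax", "count")])
max(ggplot_build(p1)$data[[1]]$count) # height of the tallest bin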
So, the final code requires the creation of the basic p1 and p2 plots, then adds to them in order to fix the limits. I'm also adding an annotation for number of bins to p1 so that we can track how well the upper limit setting works. Here's the code and some example plots where bin_num is set at 12 and 5, respectively:
# basic plots
p1 <- ggplot(df, aes(x=time)) + theme_classic() +
geom_histogram(color='gray25', fill='blue', alpha=0.3, bins=bin_num)
p2 <- ggplot(df_na, aes(x=">10", y=n)) + theme_classic() +
geom_col(color='gray25', fill='red', alpha=0.3) +
labs(x="") + theme(axis.line.y=element_blank(), axis.text.y=element_blank(),
axis.title.y=element_blank(), axis.ticks.y=element_blank()
) +
scale_x_discrete(expand=expansion(add=1))
#set upper y scale limit
max_count <- max(c(max(ggplot_build(p1)$data[[1]]$count), df_na$n))
# fix limits for plots
p1 <- p1 + scale_y_continuous(limits=c(0,max_count), expand=expansion(mult=c(0,0.15))) +
annotate('text', x=0, y=max_count, label=paste('Bins:', bin_num)) # for demo purposes
p2 <- p2 + scale_y_continuous(limits=c(0,max_count), expand=expansion(mult=c(0,0.15)))
plot_grid(p1, p2, rel_widths=c(1,0.2))
So, our upper limit fixing works. You can get really crazy playing around with positioning, etc and the plot_grid() function, but I think it works pretty well this way.
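For instance, one possible tweak (just a sketch; the argument values are illustrative) is to align the two panels with cowplot's align/axis arguments:
plot_grid(p1, p2, rel_widths=c(1, 0.2), align="h", axis="tb")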
Perhaps, this is what you are looking for:
library(dplyr)
library(ggplot2)
df1 <- data.frame(x=sample(1:12,50,rep=T))
df2 <- df1 %>% group_by(x) %>%
dplyr::summarise(y=n()) %>% subset(x<11)
df3 <- subset(df1, x>10) %>% dplyr::summarise(y=n()) %>% mutate(x=11)
df <- rbind(df2,df3 )
label <- ifelse((df$x<11),as.character(df$x),">10")
p <- ggplot(df, aes(x=x,y=y,color=x,fill=x)) +
geom_bar(stat="identity", position = "dodge") +
scale_x_continuous(breaks=df$x,labels=label)
p
and you get the following output:
Please note that sometimes you could have some of the bars missing depending on the sample.
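If you want to guarantee a bar at every position even when a value never occurs in the sample, one option (a sketch using tidyr, not part of the original code) is to fill the missing counts with zeros right after building df and before computing label:
df <- tidyr::complete(df, x=1:11, fill=list(y=0L)) # fills any of the positions 1-11 missing from the sample with a zero count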
I have a dataset of leaf trait measurements made at multiple sites at two contrasting seasons. I am interested to explore the association/line fit between a pair of traits and to differentiate the seasons at each site.
Rather than a linear regression, I would prefer to use the Standardised Major Axis approach within the smatr package:
e.g. sma.site1 <- sma(TraitA ~ TraitB * Visit, data=subset(myfile, Site=="Site1")) # testing the null hypothesis of common slopes for the two Visits (Seasons) at a given Site.
I can produce a handy lattice plot in ggplot2 with a separate panel for each Site and the points differentiated by Visit:
e.g. qplot(TraitB, TraitA, data=myfile, colour=Visit) + facet_wrap(~Site, ncol=2)
However, if I add trend lines fitted with the additional argument in ggplot2:
+ geom_smooth(aes(group=Visit), method="lm", se=F)
… those lines are not a good match for the sma coefficients.
What I would like to do is fit the lines suggested by the sma test onto the ggplot lattice. Is there an easy, or efficient, way to do that?
I know that I can subset the data, produce a plot for each site, add the relevant lines with + geom_abline() and then stitch the separate plots up together with grid.arrange(). But that feels very long-winded.
I would be grateful for any pointers.
I don't know anything about the smatr package, but you should be able to tweak this to get the right values. Since you provided no data, I used the leaf data from the example in the package. The basic idea is to pull out the slope and intercept from the returned sma object and then facet the geom_abline. I may be misinterpreting the object, though.
library(smatr)
library(ggplot2)
data(leaflife)
do.call(rbind, lapply(unique(leaflife$site), function(x) {
obj <- sma(longev~lma*rain, data=subset(leaflife, site=x))
data.frame(site=x,
intercept=obj$coef[[1]][1, 1],
slope=obj$coef[[1]][2, 1])
})) -> fits
gg <- ggplot(leaflife)
gg <- gg + geom_point(aes(x=lma, y=longev, color=soilp))
gg <- gg + geom_abline(data=fits, aes(slope=slope, intercept=intercept))
gg <- gg + facet_wrap(~site, ncol=2)
gg
I just saw this question and am not sure if you are still interested in this. I ran the code by hrbrmstr and found that actually the only thing you need to change is:
obj <- sma(longev~lma*rain, data=subset(leaflife, site == x))
then you can get the plot with four lines, one for each group.
My datasets are pretty large and rendering generated QQ plots is slow and sometimes even freezes my browser. I know that one option that I have is simply to downsample the data vector. However, I wanted to try hex binning technique instead of downsampling. Unfortunately, I couldn't make it work (two of my several attempts are shown below). If downsampling is possible to achieve using hex binning (which I suspect is, as it's similar to histograms), I'd appreciate, if someone could show me how to do it. I use ggplot2. Thanks!
g <- ggplot(df, aes(x=var)) + stat_qq(aes(x = var), geom = "hex")
g <- ggplot(df, aes(x = var, y = ..density..)) +
geom_hex(aes(sample = var), stat = "qq")
print (g)
The first call results in the following error message:
Error: stat_qq requires the following missing aesthetics: sample
The second call results in this message:
Error in eval(expr, envir, enclos) : object 'density' not found
UPDATE: I think a more correct variant is this, but I'm not sure what the arguments should be:
g <- ggplot(df, aes(??, ??)) + stat_binhex()
Not sure if this is what you are looking for exactly, but I offer a couple of ways to do hexagonal binning. First with ggplot, as you are trying to do, and second with the hexbin package, which looks better to me, but that is just my preference.
library(ggplot2)
x <- rgamma(1000,8,2)
y <- rnorm(1000,4,1.5)
binFrame <- data.frame(x,y)
qplot(x,y,data=binFrame, geom='bin2d') # with ggplot...rectangular binning actually
library(hexbin)
hexbinplot(y~x, data=binFrame) # with hexbin...actually hexagonal binning
Edit:
So I was thinking a bit about this at lunch, and I think the fundamental issue is that hexbinning is a multidimensional data-reduction technique, while it seems like you are trying to do univariate QQ plots on a really large sample, but with hexbin in ggplot. At any rate, I can think of a way to do hexbin plots with ggplot, but the best I came up with is to start from scratch and manually construct both the theoretical quantiles (x) and the sample quantiles (y). So here is what I came up with.
Basic QQ-Plot Manually
# setting up manual QQ plot used to plot with and with out hexbins
xSamp <- rgamma(1000,8,.5) # sample data
len <- 1000
i <- seq(1,len,by=1)
probSeq <- (i-.5)/len # probability grid
invCDF <- qnorm(probSeq,0,1) # theoretical quantiles for standard normal, but you could compare your sample to any distribution
orderGam <- xSamp[order(xSamp)] # ordered sample
df <- data.frame(invCDF,orderGam)
plot(invCDF,orderGam,xlab="Standard Normal Theoretical Quantiles",ylab="Standardized Data Quantiles",main="QQ-Plot")
abline(lm(orderGam~invCDF),col="red",lwd=2)
QQ Plot With Hexbins in ggplot:
ggplot(df, aes(invCDF, orderGam)) + stat_binhex() + geom_smooth(method="lm")
So at the end of the day this might not scale up readily, but if you are looking to do true multidimensional tests of normality you might think about chi-square plots for multivariate normality. cheers
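A minimal sketch of what such a chi-square plot could look like (toy bivariate data; ordered squared Mahalanobis distances plotted against theoretical chi-square quantiles):
set.seed(42)
X <- cbind(rnorm(500), rnorm(500, 4, 1.5)) # toy bivariate sample
d2 <- sort(mahalanobis(X, colMeans(X), cov(X))) # ordered squared Mahalanobis distances
qc <- qchisq(ppoints(nrow(X)), df=ncol(X)) # theoretical chi-square quantiles
plot(qc, d2, xlab="Chi-square quantiles", ylab="Ordered squared Mahalanobis distances")
abline(0, 1, col="red", lwd=2) # points close to this line are consistent with multivariate normality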
I am using ggplot2 to explore the result of some testing on an agent-based model. The model can end in one of three rounds per realization, and as such I am interested in how player utilities differ in terms of what round the game ends and their relative position in 2D space.
All this is to say that I have generated a facet_wrap plot to show this for each round, but I would also like to annotate each plot with the cor(x,y) for the subset of data represented in each facet. Is there a way to tell ggplot2 that I would like the annotation to use the subset of data generated by facet_wrap? Here is the code I have so far, and what it is producing
library(ggplot2)
# Load data
abm.data<-read.csv("ABM_results.csv")
# Create new column for area of Pareto set
attach(abm.data)
area<-abs(((x3*(y2-y1))+(x2*(y1-y3))+(x1*(y3-y2)))/2)
abm.data<-transform(abm.data,area=area)
detach(abm.data)
# Compare area of Pareto set with player utility
png("area_p1.png",res=100,pointsize=20,height=500,width=1600)
area.p1<-ggplot(abm.data,aes(x=area))+geom_point(aes(y=U1_2,colour="Player 1",alpha=0.4))+facet_wrap(~round,ncol=3)+
annotate("text",0.375,-1.25,label=paste("rho=",round(cor(abm.data$area,abm.data$U1_2),2)), parse=TRUE)+
scale_colour_manual(values=c("Player 1"="red"))
area.p1+xlab("Area of Pareto Set")+ylab("Player Utility at Game End")+
ggtitle("Final Player 1 Utility by Pareto Set Size and Round Game Ends")+
theme(legend.position="none")
dev.off()
(source: drewconway.com)
As you can see, there are two problems:
The \rho value is of the full dataset, rather than the subsets by 'round'. Is there a way to get the cor(x,y) to print based on only the data shown in each plot?
The annotation should read "\rho=some_value" but instead I get "=(\rho,value);" is there a way to fix this?
To fix the second problem use
annotate("text", 0.375, -1.25,
label=paste("rho==", round(cor(abm.data$area, abm.data$U1_2), 2)),
parse=TRUE)
i.e. "rho==".
Edit: Here is a solution to the first problem:
library("plyr")
library("ggplot2")
set.seed(1)
df <- data.frame(x=rnorm(300), y=rnorm(300), cl=gl(3,100)) # create test data
df.cor <- ddply(df, .(cl), function(val) sprintf("rho==%.2f", cor(val$x, val$y)))
p1 <- ggplot(data=df, aes(x=x)) +
geom_point(aes(y=y, colour="col1", alpha=0.4)) +
facet_wrap(~ cl, ncol=3) +
geom_text(data=df.cor, aes(x=0, y=3, label=V1), parse=TRUE) +
scale_colour_manual(values=c("col1"="red")) +
theme(legend.position="none")
print(p1)
The same question may be asked about adding segments to each facet. These more general problems can be solved in the same way: instead of annotate("segment", ...), use geom_segment(), and in general, for any geom_foo, define a data.frame holding the per-facet data and pass it through the data= argument.
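For example, a minimal sketch of that pattern, reusing the df and p1 objects above (the segment coordinates are purely illustrative):
df.seg <- data.frame(cl=unique(df$cl), x=-2, xend=2, y=-2.5, yend=-2.5) # one row per facet
p1 + geom_segment(data=df.seg, aes(x=x, y=y, xend=xend, yend=yend), inherit.aes=FALSE)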