I'm trying to plot 18,000 distributions as a heatmap-type thing in R.
One row can easily be plotted as a histogram, but since I need to represent so many, a heatmap is the only option I can think of.
This is not currently working because all the heatmap/imaging functions seem to do some kind of clustering/comparison of the rows instead of just plotting each distribution the way a histogram would.
Does anyone know how to get around this, or a better way of representing a large number of distributions?
mat <- replicate(100, rnorm(100))   # toy data: a 100 x 100 matrix of normal draws
hist(mat[1, ], breaks = 60)         # one row plotted as a histogram
library(plot3D)                     # image2D() comes from the plot3D package
image2D(z = mat, border = "black")
image2D doesn't seem to do the trick...
Thanks
Edit 12/06/18:
Using
library(denstrip)
does the trick for anyone who needs to visualise differences in a large number of distributions.
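A minimal sketch of how denstrip can be used this way (the layout and strip width below are just illustrative choices): each distribution becomes a shaded horizontal density strip, stacked on a common axis.
library(denstrip)
m <- replicate(100, rnorm(100))                          # one distribution per column
plot(range(m), c(0, ncol(m) + 1), type = "n",
     xlab = "value", ylab = "distribution", yaxt = "n")  # empty canvas to hold the strips
for (i in seq_len(ncol(m)))
  denstrip(m[, i], at = i, width = 0.9)                  # kernel density drawn as a shaded strip at height i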
You could overlay a lot of density plots, using transparency to get a sense of overlap.
m <- replicate(100, rnorm(100))
plot(range(m), c(0, 0.5), type = 'n')                                        # empty frame to draw into
for (i in 1:ncol(m)) lines(density(m[, i]), col = rgb(0.5, 0.5, 0.5, 0.5))   # semi-transparent grey lines
Related
How can I get the area under overlapping density curves?
How can I solve the problem in R? (There is a Python solution here: Calculate overlap area of two functions.)
set.seed(1234)
df <- data.frame(
  sex = factor(rep(c("F", "M"), each = 200)),
  weight = round(c(rnorm(200, mean = 55, sd = 5),
                   rnorm(200, mean = 65, sd = 5)))
)
(Source: http://www.sthda.com/english/wiki/ggplot2-density-plot-quick-start-guide-r-software-and-data-visualization )
ggplot(df, aes(x=weight, color=sex, fill=sex)) +
geom_density(aes(y=..density..), alpha=0.5)
"The points used in the plot are returned by ggplot_build(), so you can access them." So now, I have the points, and I can feed them to approxfun, but my problem is that i don't know how to subtract the density functions.
Any help greatly appreciated! (And I believe in high demand, there is no solution for this readily available.)
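For reference, a minimal sketch of the ggplot_build()/approxfun() route described above (the group numbering, 1 = F and 2 = M, is an assumption about how ggplot orders the factor levels):
library(ggplot2)
p <- ggplot(df, aes(x=weight, color=sex, fill=sex)) +
  geom_density(aes(y=..density..), alpha=0.5)
pb <- ggplot_build(p)$data[[1]]                            # the points ggplot used to draw the curves
fF <- approxfun(pb$x[pb$group == 1], pb$y[pb$group == 1])  # interpolated density for the first group
fM <- approxfun(pb$x[pb$group == 2], pb$y[pb$group == 2])  # interpolated density for the second group
diffFM <- function(z) fF(z) - fM(z)                        # the difference, ready for uniroot()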
I was looking for a way to do this for empirical data and had the problem of multiple intersections mentioned by user5878028. After some digging I found a very simple solution, even for a total R noob like me:
Install and load the libraries "overlapping" (which performs the calculation) and "lattice" (which displays the result):
library(overlapping)
library(lattice)
Then define a variable "x" as a list that contains the two density distributions you want to compare. For this example, the two datasets "data1" and "data2" are both columns in a text file called "yourfile":
x <- list(X1=yourfile$data1, X2=yourfile$data2)
Then just tell it to display the output as a plot which will also display the estimated % overlap:
out <- overlap(x, plot=TRUE)
I hope this helps someone like it helped me! Here's an example overlap plot
I will make a few base R plots, but the plots are not actually part of the solution; they are just there to confirm that I am getting the right answer.
You can get each of the density functions and solve for where they intersect.
## Create the two density functions and display
FDensity = approxfun(density(df$weight[df$sex=="F"], from=40, to=80))
MDensity = approxfun(density(df$weight[df$sex=="M"], from=40, to=80))
plot(FDensity, xlim=c(40,80), ylab="Density")
curve(MDensity, add=TRUE)
Now solve for the intersection
## Solve for the intersection and plot to confirm
FminusM = function(x) { FDensity(x) - MDensity(x) }
Intersect = uniroot(FminusM, c(40, 80))$root
points(Intersect, FDensity(Intersect), pch=20, col="red")
Now we can just integrate to get the area of the overlap.
integrate(MDensity, 40,Intersect)$value +
integrate(FDensity, Intersect, 80)$value
[1] 0.2952838
The two methods proposed above give different results.
If the data used in the integration answer is given to the overlap() function, it reports an overlap of about 0.18, while the intersection-and-integration approach gives about 0.29 (presumably because the two approaches estimate the underlying densities differently).
library(overlapping)
X1 <- df$weight[df$sex=="F"]
X2 <- df$weight[df$sex=="M"]
x <- list(X1=X1, X2=X2)
out <- overlap(x, plot=TRUE)
out$OV
X1-X2
0.1754
I'm looking for a technique in R similar to the hold all command in Matlab.
In Matlab I generate some data:
x = normrnd(0,1,1000,1);
[a,b]=hist(x,20);
L=b(2)-b(1);
area=sum(L*a);
frequency=a/area;
bar(b,frequency,1);
hold all;
range=b(1):0.1:b(20);
f1=normpdf(range,0,1);
f2=normpdf(range,2,2);
plot1=plot(range,f1,'r');
plot2=plot(range,f2,'m');
hold off;
I would like to create something similar in R. I've tried this:
x <- rnorm(1000)
h <- hist(x, breaks = 20)
a <- h$counts
b <- h$mids
L <- b[2] - b[1]
area <- sum(L*a)
frequency = a/area
range <- seq(b[1],b[20], by = 0.1)
f1 <- dnorm(range,0,1)
f2 <- dnorm(range,2,2)
barplot(frequency, names.arg = c(b))
And I stopped here, since I don't know how to add another graph to the current plot. I tried to use ggplot2, but I don't have much experience with it and failed to create the barplot with that library.
If there is a way to do this with ggplot2, I would like to know it with an explanation, since I want to learn it. I will appreciate a solution with the traditional plot system as well.
P.S. I used barplot(frequency, names.arg = c(b)) because I read here that there is no equivalent in R for Matlab's bar function.
Sometimes it is better to tell us what you are trying to do rather than how you are trying to do it. From the looks of your R code, your barplot is just a scaled histogram, and from the rest of the R code and my guesses about the Matlab code, you want to add reference curves for normal distributions. If that is correct, then you are going about this the long way in R. The following R code is much simpler:
x <- rnorm(1000)
hist(x, prob=TRUE)
curve(dnorm(x,0,1), add=TRUE)
curve(dnorm(x,2,2), add=TRUE)
Even better would be to add col='blue' or similar to the curve calls. If you really feel the need to choose your own x values, you can replace the calls to curve with:
lines(range, dnorm(range, 0, 1) )
lines(range, dnorm(range, 2, 2) )
If you really want to learn how to add lines to a barplot, be aware that the default x positions of the bars may not be what you expect. Look at the updateusr function in the TeachingDemos package for examples of adding lines to a barplot.
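Since the question also asks about ggplot2, here is a minimal sketch of the same idea (assuming ggplot2 is installed; the bin count and colours are arbitrary choices): geom_histogram with y = ..density.. gives the density-scaled histogram, and stat_function overlays the two normal curves.
library(ggplot2)
x <- rnorm(1000)
ggplot(data.frame(x = x), aes(x = x)) +
  geom_histogram(aes(y = ..density..), bins = 20, fill = "grey80", colour = "black") +  # density-scaled histogram
  stat_function(fun = dnorm, args = list(mean = 0, sd = 1), colour = "blue") +          # N(0,1) reference curve
  stat_function(fun = dnorm, args = list(mean = 2, sd = 2), colour = "red")             # N(2,2) reference curve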
To create a parallel coordinate plot I wanted to use the ggparcoord() function from the GGally package. The following code gives a reproducible example.
set.seed(3674)
k <- rep(1:3, each=30)
x <- k + rnorm(mean=10, sd=.2,n=90)
y <- -2*k + rnorm(mean=10, sd=.4,n=90)
z <- 3*k + rnorm(mean=10, sd=.6,n=90)
dat <- data.frame(group=factor(k),x,y,z)
library(GGally)
ggparcoord(dat,columns=1:4,groupColumn = 1)
Notice in the picture that the colour scale for group is continuous even though I defined the group variable as a factor. Is there any way to display the plot with three discrete colours instead?
I have looked at some other posts that discuss various other ways of doing parallel coordinate plots here, but I really want to do this with the ggparcoord() function from GGally. I appreciate your time in thinking about this problem.
Your code was almost correct: columns=1:4 was not right in this case. You need to drop the groupColumn column from columns:
ggparcoord(dat,columns=2:4,groupColumn = 1)
I have a couple of cumulative empirical density functions that I would like to plot on top of each other to illustrate differences between the two curves. As was pointed out in a previous question, the function to draw the ECDF is simply plot(Ecdf()), and as I read the fine manual page, I determined that I can plot multiple ECDFs on top of each other using something like the following:
require( Hmisc )
set.seed(3)
g <- c(rep(1, 20), rep(2, 20))
Ecdf(c( rnorm(20), rnorm(20)), group=g)
However, my curves sometimes overlap a bit and it can be hard to tell which is which, just like the example above, which produces this graph:
I would really like the two CDFs to have different colours, but I can't figure out how to do that. Any tips?
If memory serves, I have done this in the past. As I recall, you needed to trick it, as Ecdf() is so darn parameterised. I think help(ecdf) hints that it is just a plot of step functions, so you could estimate two or more ECDFs, plot one and then annotate via lines().
Edit Turns out it is as easy as
R> Ecdf(c(rnorm(20), rnorm(20)), group=g, col=c('blue', 'orange'))
as the help page clearly states the col= argument. But I have also found some scriptlets where I used plot.stepfun() explicitly.
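For completeness, a small sketch of that plot.stepfun()/ecdf() route (the seed, colours and sample sizes here are just illustrative):
set.seed(3)
e1 <- ecdf(rnorm(20))
e2 <- ecdf(rnorm(20))
plot(e1, col = "blue", main = "Two ECDFs")  # plot.ecdf draws the first step function
lines(e2, col = "orange")                   # the stepfun lines method overlays the second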
You can add each curve one at a time (each with its own style), e.g.
Ecdf(rnorm(20), lwd = 2)
Ecdf(rnorm(20),add = TRUE, col = 'red', lty = 1)
Without using Ecdf (in case Hmisc isn't available):
set.seed(3)
mat <- cbind(rnorm(20), rnorm(20))
matplot(apply(mat, 2, sort), seq(20)/20, type='s')
I have come across a number of situations where I want to plot more points than I really ought to; the main holdup is that when I share my plots with people or embed them in papers, they occupy too much space. It's very straightforward to randomly sample rows of a data frame.
If I want a truly random sample for a point plot, it's easy to say:
qplot(x, y, data = myDf[sample(1:nrow(myDf), 1000), ])
However, I was wondering if there are more effective (ideally canned) ways to choose the number of plotted points so that your actual data is accurately reflected in the plot. So here is an example.
Suppose I am plotting something like the CCDF of a heavy tailed distribution, e.g.
ccdf <- function(myList, density = FALSE)
{
  # generates the CCDF of a list or vector
  freqs <- table(myList)
  X <- rev(as.numeric(names(freqs)))
  Y <- cumsum(rev(as.numeric(freqs)))  # cumulative counts, accumulated from the largest value down
  data.frame(x = X, count = Y)
}
qplot(x,count,data=ccdf(rlnorm(10000,3,2.4)),log='xy')
This will produce a plot where the points become increasingly dense along the x and y axes. Here it would be ideal to plot fewer points at large x or y values.
Does anybody have any tips or suggestions for dealing with similar issues?
Thanks,
-e
I tend to use png files rather than vector-based graphics such as pdf or eps in this situation. The files are much smaller, although you lose some resolution.
If it's a more conventional scatterplot, then using semi-transparent colours also helps, as well as solving the over-plotting problem. For example,
library(ggplot2)
library(scales)   # provides alpha()
x <- rnorm(10000); y <- rnorm(10000)
qplot(x, y, colour = I(alpha("blue", 1/25)))
Beyond Rob's suggestions, one plotting function I like, because it does the 'thinning' for you, is hexbin; an example is at the R Graph Gallery.
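A minimal hexbin sketch (the simulated data and bin count here are illustrative assumptions, not from the original post):
library(hexbin)
x <- rnorm(10000); y <- rnorm(10000)
bin <- hexbin(x, y, xbins = 50)  # bin the points into hexagonal cells
plot(bin)                        # cells are shaded by count, so no individual points are drawn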
Here is one possible solution for downsampling a plot with respect to the x-axis when it is log-transformed. It log-transforms the x values, rounds that quantity, and picks the row with the median x value in each resulting bin:
downsampled_qplot <- function(x, y, data, rounding = 0, ...) {
  # assumes we are doing log='xy' or log='x'
  group <- factor(round(log(data$x), rounding))           # bin rows by rounded log(x)
  d <- do.call(rbind, by(data, group,
      function(X) X[order(X$x)[ceiling(nrow(X)/2)], ]))   # keep the row with the median x in each bin
  qplot(x, count, data = d, ...)
}
Using the definition of ccdf() from above, we can then compare the original plot of the CCDF of the distribution with the downsampled version:
myccdf=ccdf(rlnorm(10000,3,2.4))
qplot(x,count,data=myccdf,log='xy',main='original')
downsampled_qplot(x,count,data=myccdf,log='xy',rounding=1,main='rounding = 1')
downsampled_qplot(x,count,data=myccdf,log='xy',rounding=0,main='rounding = 0')
In PDF format, the original plot takes up 640K, and the downsampled versions occupy 20K and 8K, respectively.
I'd either make image files (png or jpeg devices), as Rob already mentioned, or make a 2D histogram. An alternative to the 2D histogram is a smoothed scatterplot; it makes a similar graphic but has a smoother cutoff from dense to sparse regions of space.
If you've never seen addictedtor before, it's worth a look. It has some very nice graphics generated in R with images and sample code.
Here's the sample code from the addictedtor site:
2-d histogram:
require(gplots)
# example data, bivariate normal, no correlation
x <- rnorm(2000, sd=4)
y <- rnorm(2000, sd=1)
# separate scales for each axis, this looks circular
hist2d(x,y, nbins=50, col = c("white",heat.colors(16)))
rug(x,side=1)
rug(y,side=2)
box()
smoothscatter:
library("geneplotter") ## from BioConductor
require("RColorBrewer") ## from CRAN
x1 <- matrix(rnorm(1e4), ncol=2)
x2 <- matrix(rnorm(1e4, mean=3, sd=1.5), ncol=2)
x <- rbind(x1,x2)
layout(matrix(1:4, ncol=2, byrow=TRUE))
op <- par(mar=rep(2,4))
smoothScatter(x, nrpoints=0)
smoothScatter(x)
smoothScatter(x, nrpoints=Inf,
              colramp=colorRampPalette(brewer.pal(9,"YlOrRd")),
              bandwidth=40)
colors <- densCols(x)
plot(x, col=colors, pch=20)
par(op)