I would like to use ggplot2 to illustrate the difference between two similar density distributions. Here is a toy example of the type of data I have:
library(ggplot2)
# Make toy data
n_sp <- 100000
n_dup <- 50000
D <- data.frame(
  event = c(rep("sp", n_sp), rep("dup", n_dup)),
  q = c(rnorm(n_sp, mean = 2.0), rnorm(n_dup, mean = 2.1))
)
# Standard density plot
ggplot(D, aes(x = q, y = ..density.., col = event)) +
  geom_freqpoly()
Rather than separately plotting the density for each category (dup and sp) as above, how could I plot a single line that shows the difference between these distributions?
In the toy example above, if I subtracted the dup density distribution from the sp density distribution, the resulting line would be above zero on the left side of the plot (since there is an abundance of smaller sp values) and below zero on the right (since there is an abundance of larger dup values). Note that there may be a different number of observations of type dup and sp.
More generally - what is the best way to show differences between similar density distributions?
There may be a way to do this within ggplot, but frequently it's easiest to do the calculations beforehand. In this case, call density on each subset of q over the same range, then subtract the y values. Using dplyr (translate to base R or data.table if you wish),
library(dplyr)
library(ggplot2)
D %>%
  group_by(event) %>%
  # calculate densities for each group over the same range; store in a list column
  summarise(d = list(density(q, from = min(.$q), to = max(.$q)))) %>%
  # make a new data.frame from the two density objects
  do(data.frame(x = .$d[[1]]$x,                  # grab one set of x values (the grids are identical)
                y = .$d[[1]]$y - .$d[[2]]$y)) %>% # and subtract the y values
  ggplot(aes(x, y)) + # now plot
  geom_line()
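For comparison, here is a rough base R translation of the same idea (a sketch, assuming the same D as above; density() defaults to n = 512 points, so matching from/to gives identical x grids):
# base R version: compute both densities over a shared range, then subtract
rng <- range(D$q)
d_sp <- density(D$q[D$event == "sp"], from = rng[1], to = rng[2])
d_dup <- density(D$q[D$event == "dup"], from = rng[1], to = rng[2])
# the x grids match because from/to (and the default n = 512) are the same
plot(d_sp$x, d_sp$y - d_dup$y, type = "l", xlab = "q", ylab = "density difference")
abline(h = 0, lty = 2)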
Related
I have data where sample counts have been pre-calculated for bins across a range, and the bins are overlapping and uneven sizes. Looks something like:
x2 <- data.frame("BinFrom" = c(1,1,2,2,4,4,4,5,5,5,8,8,8,9,11,14,17,18,19),
                 "BinTo" = c(3,6,4,8,5,8,6,10,12,6,7,15,11,10,20,20,18,19,20),
                 "Count" = c(1000,2400,15,2000,20,3800,10,6000,4200,10,25,3000,2800,10,1300,9000,10,5,40))
I wish to generate a histogram and density plot for these data. Is there a way to do this?
ggdensity etc. expect the expanded data. I attempted to force that format by expanding on the mid-point of the bins, e.g.:
x2 <- x2 %>% mutate(MidBin = BinFrom + ((BinTo - BinFrom)/2)) # dplyr
xp <- x2 %>% expandRows(., "Count") # expandRows is from splitstackshape
ggdensity(xp, "MidBin") # ggdensity is from ggpubr
but this loses important data, and is not possible with my actual data frame because the row expansion exhausts the vector memory.
All help appreciated
Create a new data frame with one row per base position, and count the total overlapping bin counts at each position:
base <- cbind.data.frame(base = min(x2$BinFrom):max(x2$BinTo))
base$overlap <- sapply(base$base, function(x) sum(x2$Count[x >= x2$BinFrom & x <= x2$BinTo]))
Then plot, either as bars or as a filled area:
ggplot(base, aes(x = base, y = overlap)) + geom_bar(stat = "identity")
# or
ggplot(base, aes(x = base, y = overlap)) + geom_area(alpha = 0.25)
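If you also want a density-style curve without expanding rows (which sidesteps the memory problem mentioned above), one option is a weighted density estimate over the base positions. This is a sketch that treats each position as an observation weighted by its overlap count:
# weighted kernel density; the weights must sum to 1
d <- density(base$base, weights = base$overlap / sum(base$overlap))
plot(d, main = "Weighted density over base positions")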
Let's say I have a histogram with two overlapping groups. Here's a possible command from ggplot2 and a pretend output graph.
ggplot(data, aes(x = Variable1, fill = BinaryVariable)) + geom_histogram(position = "identity")
So what I have is the frequency or count of each event. What I'd like to do instead is to get the difference between the two events in each bin. Is this possible? How?
For example, if we do RED minus BLUE:
Value at x=2 would be ~ -10
Value at x=4 would be ~ 40 - 200 = -160
Value at x=6 would be ~ 190 - 25 = 155
Value at x=8 would be ~ 10
I'd prefer to do this using ggplot2, but another way would be fine. My data frame is set up like this toy example (the actual dimensions are 25000 rows x 30 columns). EDITED: Here is example data to work with: GIST Example
ID    Variable1  BinaryVariable
1     50         T
2     55         T
3     51         N
..    ..         ..
1000  1001       T
1001  1944       T
1002  1042       N
As you can see from my example, I'm interested in a histogram to plot Variable1 (a continuous variable) separately for each BinaryVariable (T or N). But what I really want is the difference between their frequencies.
So, in order to do this we need to make sure that the "bins" we use for the histograms are the same for both levels of your indicator variable. Here's a somewhat naive solution (in base R):
df = data.frame(y = c(rnorm(50), rnorm(50, mean = 1)),
                x = rep(c(0,1), each = 50))
#full hist
fullhist = hist(df$y, breaks = 20) #specify more breaks than probably necessary
#create histograms for 0 & 1 using breaks from full histogram
zerohist = with(subset(df, x == 0), hist(y, breaks = fullhist$breaks))
oneshist = with(subset(df, x == 1), hist(y, breaks = fullhist$breaks))
#combine the hists
combhist = fullhist
combhist$counts = zerohist$counts - oneshist$counts
plot(combhist)
So we specify how many breaks should be used (based on values from the histogram on the full data), and then we compute the differences in the counts at each of those breaks.
PS It might be helpful to examine what the non-graphical output of hist() is.
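For instance, str() reveals the pieces used above; a histogram object is just a list (component names as in base R's hist()):
str(fullhist)
# List of 6, of class "histogram", including:
#  $ breaks : the bin edges (reused for the subgroup histograms above)
#  $ counts : integer counts per bin (what we subtracted)
#  $ density: per-bin density values
#  $ mids   : bin midpoints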
Here's a solution that uses ggplot as requested.
The key idea is to use ggplot_build to get the rectangles computed by geom_histogram. From those you can compute the differences in each bin and then create a new plot using geom_rect.
Set up and create a mock dataset with lognormal data:
library(ggplot2)
library(data.table)
theme_set(theme_bw())
n1<-500
n2<-500
k1 <- exp(rnorm(n1,8,0.7))
k2 <- exp(rnorm(n2,10,1))
df <- data.table(k=c(k1,k2),label=c(rep('k1',n1),rep('k2',n2)))
Create the first plot
p <- ggplot(df, aes(x=k,group=label,color=label)) + geom_histogram(bins=40) + scale_x_log10()
Get the rectangles using ggplot_build
p_data <- as.data.table(ggplot_build(p)$data[1])[, .(count, y, xmin, xmax, group)]  # keep y as well; it is renamed below
p1_data <- p_data[group==1]
p2_data <- p_data[group==2]
Join on the x-coordinates to compute the differences. Note that the y-values aren't the counts, but the y-coordinates of the first plot.
newplot_data <- merge(p1_data, p2_data, by=c('xmin','xmax'), suffixes = c('.p1','.p2'))
newplot_data[, diff := count.p1 - count.p2]  # := modifies the data.table in place
setnames(newplot_data, old=c('y.p1','y.p2'), new=c('k1','k2'))
df2 <- melt(newplot_data,id.vars =c('xmin','xmax'),measure.vars=c('k1','diff','k2'))
Make the final plot:
ggplot(df2, aes(xmin=xmin,xmax=xmax,ymax=value,ymin=0,group=variable,color=variable)) + geom_rect()
Of course the scales and legends still need to be fixed, but that's a different topic.
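As one possible cleanup (a sketch; note that the xmin/xmax values returned by ggplot_build are already on the log10 scale, so we relabel the axis rather than re-transform):
ggplot(df2, aes(xmin = xmin, xmax = xmax, ymax = value, ymin = 0,
                group = variable, color = variable)) +
  geom_rect() +
  labs(x = "log10(k)", y = "count", color = NULL)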
The specific example: imagine x is some continuous variable between 0 and 10, the red line is the distribution of "goods", and the blue is the distribution of "bads". I'd like to see if there is value in incorporating this variable into checking for 'goodness', but I'd first like to quantify the amount of stuff in the areas where blue > red.
Because this is a distribution chart the scales look the same, but in reality there is 98 times more good in my sample, which complicates things: it's not actually just measuring the area under the curve, but rather measuring the bad sample in the regions where its distribution is greater than the red.
I've been working to learn R, but am not even sure how to approach this one; any help appreciated.
EDIT
sample data:
http://pastebin.com/7L3Xc2KU <- a few million rows of that, essentially.
the graph is created with
graph <- qplot(sample_x, bad_is_1, data = sample_data, geom = "density", color = bad_is_1)
The only way I can think of to do this is to calculate the area between the curves using simple trapezoids. First we manually compute the densities:
d0 <- density(sample$sample_x[sample$bad_is_1==0])
d1 <- density(sample$sample_x[sample$bad_is_1==1])
Now we create functions that will interpolate between our observed density points
f0 <- approxfun(d0$x, d0$y)
f1 <- approxfun(d1$x, d1$y)
Next we find the x range of the overlap of the densities
ovrng <- c(max(min(d0$x), min(d1$x)), min(max(d0$x), max(d1$x)))
and divide that into 500 sections
i <- seq(min(ovrng), max(ovrng), length.out=500)
Now we calculate the distance between the density curves
h <- f0(i)-f1(i)
and using the formula for the area of a trapezoid we add up the area for the regions where d0 > d1, i.e. where h is positive (flip the sign of h to measure the regions where d1 > d0 instead):
area <- sum((h[-1] + h[-length(h)]) / 2 * diff(i) * (h[-1] >= 0))
# [1] 0.1957627
We can plot the region using
plot(d0, main="d0=black, d1=green")
lines(d1, col="green")
jj<-which(h>0 & seq_along(h) %% 5==0); j<-i[jj];
segments(j, f1(j), j, f1(j)+h[jj])
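As a sanity check on the trapezoid sum, you could also integrate the positive part of the gap directly (a sketch; integrate() may need more subdivisions for wiggly densities):
# integrate only where d0 exceeds d1; pmax() zeroes out the rest
gap <- function(x) pmax(f0(x) - f1(x), 0)
integrate(gap, lower = min(ovrng), upper = max(ovrng), subdivisions = 1000L)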
Here's a way to shade the area between two density plots and calculate the magnitude of that area.
# Create some fake data
set.seed(10)
dat = data.frame(x = c(rnorm(1000, 0, 5), rnorm(2000, 0, 1)),
                 group = c(rep("Bad", 1000), rep("Good", 2000)))
# Plot densities
# Use y=..count.. to get counts on the vertical axis
p1 = ggplot(dat) +
geom_density(aes(x=x, y=..count.., colour=group), lwd=1)
Some extra calculations to shade the area between the two density plots
(adapted from this SO question):
pp1 = ggplot_build(p1)
# Create a new data frame with densities for the two groups ("Bad" and "Good")
dat2 = data.frame(x = pp1$data[[1]]$x[pp1$data[[1]]$group==1],
                  ymin = pp1$data[[1]]$y[pp1$data[[1]]$group==1],
                  ymax = pp1$data[[1]]$y[pp1$data[[1]]$group==2])
# We want ymax and ymin to differ only when the density of "Good"
# is greater than the density of "Bad"
dat2$ymax[dat2$ymax < dat2$ymin] = dat2$ymin[dat2$ymax < dat2$ymin]
# Shade the area between "Good" and "Bad"
p1a = p1 +
geom_ribbon(data=dat2, aes(x=x, ymin=ymin, ymax=ymax), fill='yellow', alpha=0.5)
Here are the two plots:
To get the area (number of values) in specific ranges of Good and Bad, use the density function on each group (or you can continue to work with the data pulled from ggplot as above, but this way you get more direct control over how the density distribution is generated):
## Calculate densities for Bad and Good.
# Use same number of points and same x-range for each group, so that the density
# values will line up. Use a higher value for n to get a finer x-grid for the density
# values. Use a power of 2 for n, because the density function rounds up to the nearest
# power of 2 anyway.
bad = density(dat$x[dat$group=="Bad"],
n=1024, from=min(dat$x), to=max(dat$x))
good = density(dat$x[dat$group=="Good"],
n=1024, from=min(dat$x), to=max(dat$x))
## Normalize so that densities sum to number of rows in each group
# Number of rows in each group
counts = tapply(dat$x, dat$group, length)
bad$y = counts[1]/sum(bad$y) * bad$y
good$y = counts[2]/sum(good$y) * good$y
## Results
# Number of "Good" in region where "Good" exceeds "Bad"
sum(good$y[good$y > bad$y])
[1] 1931.495 # Out of 2000 total in the data frame
# Number of "Bad" in region where "Good" exceeds "Bad"
sum(bad$y[good$y > bad$y])
[1] 317.7315 # Out of 1000 total in the data frame
stat_density2d is a really nice display for dense scatter plots, but I could not find any explanation of what the density actually means. I have a plot with densities ranging from 0 to 400. What is the unit of this scale?
Thanks!
The density values will depend on the range of x and y in your dataset.
stat_density2d(...) uses kde2d(...) in the MASS package to calculate the 2-dimensional kernel density estimate, based on bivariate normal distributions. The density at a point is scaled so that the integral of density over all x and y = 1. So if your data is highly localized, or if the range for x and y is small, you can get large numbers for density.
You can see this in the following simple example:
library(ggplot2)
set.seed(1)
df1 <- data.frame(x=c(rnorm(50,0,5),rnorm(50,20,5)),
                  y=c(rnorm(50,0,5),rnorm(50,20,5)))
ggplot(df1, aes(x,y)) + geom_point()+
stat_density2d(geom="path",aes(color=..level..))
set.seed(1)
df2 <- data.frame(x=c(rnorm(50,0,5),rnorm(50,20,5))/100,
                  y=c(rnorm(50,0,5),rnorm(50,20,5))/100)
ggplot(df2, aes(x,y)) + geom_point()+
stat_density2d(geom="path",aes(color=..level..))
These two data frames are identical except that in df2 the scale is 1/100 that of df1 (in each direction), and therefore the density levels are 10,000 times greater in the plot of df2.
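You can check the integrates-to-one scaling directly with kde2d (a sketch approximating the integral by a Riemann sum over kde2d's grid):
library(MASS)
d <- kde2d(df1$x, df1$y, n = 100)
dx <- diff(d$x[1:2])  # grid spacing in x
dy <- diff(d$y[1:2])  # grid spacing in y
sum(d$z) * dx * dy    # approximately 1, for df1 and df2 alike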
I have a data frame of two variables, x and y, in R. What I want to do is bin each entry by its value of x, but then display the density of the value of y for all entries in each bin. More specifically, for each interval in units of x, I want to plot the sum (of all values of y of entries whose values of x are in the specific interval) / (sum of all values of y for all entries). I know how to do this manually via vector manipulation, but I have to make a lot of these plots, and wanted to know if there was a quicker way to do this, maybe via some advanced use of hist.
You could generate the groupings using cut and then use a facet_grid to display the multiple histograms:
# Sample data with y depending on x
set.seed(144)
dat <- data.frame(x=rnorm(1000))
dat$y <- dat$x + rnorm(1000)
# Generate bins of x values
dat$grp <- cut(dat$x, breaks=2)
# Plot
library(ggplot2)
ggplot(dat, aes(x=y)) + geom_histogram() + facet_grid(grp~.)
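If you literally want sum(y in bin) / sum(all y) rather than per-bin histograms, here is a short sketch reusing the grp bins created by cut above:
# fraction of the total y falling in each x bin
frac_df <- aggregate(y ~ grp, data = dat, FUN = sum)
frac_df$frac <- frac_df$y / sum(dat$y)
ggplot(frac_df, aes(x = grp, y = frac)) + geom_col()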