I've written an R script that loops through a data.frame, making multiple complex plots that each include a histogram. The problem is that the histograms often show a tall, uninformative peak at x=0 or x=1 that obscures the rest of the data, which is more informative. I have figured out that I can hide the tall peak by defining the limits of the x and y axes of each histogram, as seen in the code below. What I really need to figure out is how to define the y-axis limits so that they are optimized for the second-largest peak in each histogram.
Here's some code that simulates my data and plots histograms with different sorts of axis limits imposed:
require(ggplot2)
set.seed(5)
df = data.frame(matrix(sample(c(1:10), 1000, replace = TRUE, prob = c(0.8,0.01,0.01,0.01,0.01,0.01,0.01,0.01,0.01,0.01)), nrow=100))
cols = names(df)
for (i in c(1:length(cols))) {
  my_col = cols[i]
  p1 = ggplot(df, aes_string(my_col)) + geom_histogram(bins = 10)
  print(p1)
  p2 = p1 + ggtitle(paste("Fixed X Limits", my_col)) + scale_x_continuous(limits = c(1, 10))
  print(p2)
  p3 = p1 + ggtitle(paste("Fixed Y Limits", my_col)) + scale_y_continuous(limits = c(0, 3))
  print(p3)
  p4 = p1 + ggtitle(paste("Fixed X & Y Limits", my_col)) +
    scale_y_continuous(limits = c(0, 3)) + scale_x_continuous(limits = c(1, 10))
  print(p4)
}
The problem is that with this simulated data I can hard-code y-limits and have a reasonable expectation that they will work well for all the histograms. With my real data, the size of the peaks varies wildly between the numerous histograms I am producing. I've tried defining the y-limit with various formulas based on descriptive statistics like the mean, median and range, but nothing I've come up with works well for all cases.
If I could define the y-limit in relation to the second-tallest peak of the histogram, I would have something that was perfectly suited for each situation.
I am not sure how ggplot builds its histograms, but one method would be to grab the results from hist:
maxDensities <- sapply(df, function(i) max(hist(i, plot = FALSE)$density))
# take the second-highest peak across the columns:
myYlim <- rev(sort(maxDensities))[2]
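If you want the second-tallest bin within each histogram, on the count scale that geom_histogram uses by default, something like this should work (a sketch reusing p1 and my_col from your loop; note hist's default breaks won't exactly match bins = 10, so treat the heights as approximations, and the 1.1 padding factor is an arbitrary choice):
# approximate second-tallest bin per column, on the count scale
# (plot = FALSE stops hist from drawing intermediate base-graphics plots)
secondCounts <- sapply(df, function(i) sort(hist(i, plot = FALSE)$counts, decreasing = TRUE)[2])
# coord_cartesian zooms the y-axis without dropping the tall bar's data
p1 + coord_cartesian(ylim = c(0, secondCounts[my_col] * 1.1))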
I would process the data to determine the height you need.
Something along the lines of:
sort(table(cut(df$X1, breaks = 10)), decreasing = TRUE)[2]
Working from the inside out:
cut bins the data (not really needed with integer data like yours, but probably needed with real-valued data).
table then counts how many values fall in each of those bins.
sort(..., decreasing = TRUE) sorts the table from highest to lowest.
[2] takes the 2nd-highest value.
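Applied inside your plotting loop, that might look something like this (a sketch; coord_cartesian zooms without removing the tall bar from the bin computation, and the 1.1 padding factor is arbitrary):
for (my_col in cols) {
  # count of the second-most-populated bin for this column
  second_peak <- sort(table(cut(df[[my_col]], breaks = 10)), decreasing = TRUE)[2]
  p <- ggplot(df, aes_string(my_col)) +
    geom_histogram(bins = 10) +
    coord_cartesian(ylim = c(0, second_peak * 1.1))
  print(p)
}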
I have measurements of approximately 1000 variables in 2 groups, with 10 replicates in each; in other words, I have 2 data frames, each with 10 columns and 1000 rows.
I would like to show the distribution of my measurements in the two groups, to pick up variables that differ significantly between the groups. My initial idea was to do a large scatter plot where the x-coordinate would iterate over the variables, the y-coordinate would be the measurement, and the points would be color-coded by group. It doesn't quite work as expected, however; I get a scatter plot matrix instead.
I tried to go with a boxplot,
ratios1 <- as.data.frame(matrix(rnorm(10000) * 100, 1000, 10))
boxplot(t(log2(ratios1)), horizontal = T)
which sort of works, but all the lines for the boxes make the plot undecipherable, even for a single group (see figure below). Then I tried to remove the boxes and add the points afterwards, as suggested here:
boxplot(t(log2(ratios1)), horizontal = T, border = "white")
points(t(log2(ratios1)), pch=1)
But that didn't quite work either, as I only got the first variable drawn on the graph.
How can I display this type of information?
First of all, columns correspond to variables and rows to observations, not the other way around.
set.seed(42)
ratios1 <- as.data.frame(matrix(rnorm(10000) * 100, 10, 1000))
You could plot quantiles like this:
library(reshape2)
ratios2 <- melt(ratios1)
library(ggplot2)
ggplot(ratios2, aes(x = as.numeric(variable), y = value)) +
  stat_summary(fun.data = function(y) as.data.frame(setNames(as.list(quantile(y, probs = c(0.025, 0.5, 0.975))), c("ymin", "y", "ymax"))),
               color = "blue") +
  stat_summary(fun.data = function(y) as.data.frame(setNames(as.list(quantile(y, probs = c(0.25, 0.5, 0.75))), c("ymin", "y", "ymax"))),
               color = "red") +
  xlab("variable")
There are no groups in your example data, so I don't know what to do with that; maybe you could facet by group. However, I don't think this kind of plot would be very useful for your goal of "pick[ing] up variables that differ significantly between the groups". I would do a hypothesis test with the appropriate correction for alpha error inflation.
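As a minimal sketch of that testing step, assuming the second group is stored the same way as ratios1 (the name group2 below is hypothetical):
set.seed(43)
group2 <- as.data.frame(matrix(rnorm(10000) * 100, 10, 1000))
# one Welch t-test per variable (column), then Benjamini-Hochberg adjustment
pvals <- mapply(function(a, b) t.test(a, b)$p.value, ratios1, group2)
padj <- p.adjust(pvals, method = "BH")
which(padj < 0.05)  # variables that differ between the groups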
Can anyone tell me how to create a plot which features 3 different sets of data? In general, I have 3 different matrices of data, all of dimension 1*1001, and I wish to plot all 3 on the same graph.
I have managed to get one matrix to plot, and to assemble the code to create the other 2 matrices, but not to plot them. B[i,] is randomly generated data. What I would like to know is what code would get all 3 plots together on one graph.
Code for one matrix:
ntime <- 1000
average.price.at.each.timestep <- matrix(0, nrow = 1, ncol = ntime + 1)
for (i in 1:(ntime + 1)) {
  average.price.at.each.timestep[i] <- mean(B[i, ])
}
matplot(t, t(average.price.at.each.timestep), type="l", lty=1, main="MC Price of a Zero Coupon Bond", ylab="Price", xlab = "Option Exercise Date")
Code for 3:
average.price.at.each.timestep <- matrix(0, nrow = 1, ncol = ntime + 1)
s.e.at.each.time <- matrix(0, nrow = 1, ncol = ntime + 1)
upper.c.l.at <- matrix(0, nrow = 1, ncol = ntime + 1)
lower.c.l.at <- matrix(0, nrow = 1, ncol = ntime + 1)
std <- function(x) sd(x) / sqrt(length(x))
for (i in 1:(ntime + 1)) {
  average.price.at.each.timestep[i] <- mean(B[i, ])
  s.e.at.each.time[i] <- std(B[i, ])
  upper.c.l.at[i] <- average.price.at.each.timestep[i] + 1.96 * s.e.at.each.time[i]
  lower.c.l.at[i] <- average.price.at.each.timestep[i] - 1.96 * s.e.at.each.time[i]
}
I'm still struggling with this, as I cannot get the given solutions to work with my data sets. I have now included the code below, which generates the matrix B, as a working example so you can see the data I am dealing with. As you can see, it produces a plot of the different price paths; what I would like is a plot of the average price together with confidence intervals around that average.
# Define Bond Price Parameters
#
P<-1 #par value
# Define Vasicek Model Parameters
#
rev.rate<-0.3 #speed of reversion
long.term.mean<-0.1 #long term level of the mean
sigma<-0.05 #volatility
r0<-0.03 #spot interest rate
Strike<-0.05
# Define Simulation Parameters
#
T<-50 #time to expiry
ntime<-1000 #number of timesteps
yearstep<-ntime/T #yearstep
npaths<-1000 #number of paths
dt<-T/ntime #timestep
R <- matrix(0,nrow=ntime+1,ncol=npaths) #matrix of Vasicek interest rate values
B <- matrix(0,nrow=ntime+1,ncol=npaths) # matrix of Bond Prices
R[1,]<-r0 #specifies that all paths start at specified spot rate
B[1,]<-P
# loop which fills matrix R with multiple paths of interest rates as they evolve over time,
# a stochastic process based on the standard normal distribution
for (j in 1:npaths) {
  for (i in 1:ntime) {
    dZ  <- rnorm(1, mean = 0, sd = 1) * sqrt(dt)
    Rij <- R[i, j]
    Bij <- B[i, j]
    dr  <- rev.rate * (long.term.mean - Rij) * dt + sigma * dZ
    R[i + 1, j] <- Rij + dr
    B[i + 1, j] <- Bij * exp(-R[i + 1, j] * dt)
  }
}
t<-seq(0,T,dt)
par(mfcol = c(3,3))
matplot(t, B[,1:pmin(20,npaths)], type="l", lty=1, main="Price of a Zero Coupon Bond", ylab="Price", xlab = "Time to Expiry")
Your example isn't reproducible, so I created some fake data that I hope is structured similarly to yours. If this isn't what you were looking for, let me know and I'll update as needed.
# Fake data
ntime <- 100
mat1 <- matrix(rnorm(ntime+1, 10, 2), nrow=1, ncol=ntime+1)
mat2 <- matrix(rnorm(ntime+1, 20, 2), nrow=1, ncol=ntime+1)
mat3 <- matrix(rnorm(ntime+1, 30, 2), nrow=1, ncol=ntime+1)
matplot(1:(ntime + 1), t(mat1), type = "l", lty = 1, ylim = c(0, max(c(mat1, mat2, mat3))),
        main = "MC Price of a Zero Coupon Bond",
        ylab = "Price", xlab = "Option Exercise Date")
# Add lines for mat2 and mat3
lines(1:(ntime + 1), mat2, col = "red")
lines(1:(ntime + 1), mat3, col = "blue")
UPDATE: Is this what you're trying to do?
matplot(t, t(average.price.at.each.timestep), type="l", lty=1,
main="MC Price of a Zero Coupon Bond", ylab="Price",
xlab = "Option Exercise Date")
matlines(t, t(upper.c.l.at), lty=2, col="red")
matlines(t, t(lower.c.l.at), lty=2, col="green")
See plot below. If you have multiple columns that you want to plot (as in your updated example, where you plot 20 separate paths) and you want to add lower and upper CIs for all of them (though this would make the plot unreadable), just build matrices of upper and lower CI values with one column per path in average.price.at.each.timestep, and use matlines to add them to your existing plot of the multiple paths.
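A sketch of that, where upper.ci and lower.ci are hypothetical matrices of the same shape as the block of plotted paths (one column per path):
n.shown <- 20
matplot(t, B[, 1:n.shown], type = "l", lty = 1, ylab = "Price", xlab = "Time to Expiry")
matlines(t, upper.ci[, 1:n.shown], lty = 2, col = "red")
matlines(t, lower.ci[, 1:n.shown], lty = 2, col = "green")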
This is doable using ggplot2 and reshape2. The structures you have are a little awkward, which you could improve by using a data frame instead of a matrix.
#Dummy data
average.price.at.each.timestep <- rnorm(1000, sd=0.01)
s.e.at.each.time <- rnorm(1000, sd=0.0005, mean=1)
#CIs (note you can vectorise this):
upper.c.l.at <- average.price.at.each.timestep+1.96*s.e.at.each.time
lower.c.l.at <- average.price.at.each.timestep-1.96*s.e.at.each.time
#create a data frame:
prices <- data.frame(time = 1:length(average.price.at.each.timestep),
                     price = average.price.at.each.timestep,
                     upperCI = upper.c.l.at,
                     lowerCI = lower.c.l.at)
library(reshape2)
#turn the data frame into time, variable, value triplets
prices.t <- melt(prices, id.vars=c("time"))
#plot
library(ggplot2)
ggplot(prices.t, aes(time, value, colour=variable)) + geom_line()
This produces the following plot:
This can be improved somewhat by using geom_ribbon instead:
ggplot(prices, aes(time, price)) +
  geom_ribbon(aes(ymin = lowerCI, ymax = upperCI), alpha = 0.1) +
  geom_line()
Which produces this plot:
Here's another, slightly different ggplot solution that does not require you to calculate the confidence limits first - ggplot does it for you.
# create sample dataset
set.seed(1) # for reproducible example
B <- matrix(rnorm(1000,mean=rep(10+1:10/2,each=10)),nc=10)
library(ggplot2)
library(reshape2) # for melt(...)
gg <- melt(data.frame(date=1:nrow(B),B), id="date")
ggplot(gg, aes(x = date, y = value)) +
  stat_summary(fun.y = mean, geom = "line") +
  stat_summary(fun.y = function(y) mean(y) - 1.96 * sd(y) / sqrt(length(y)),
               geom = "line", linetype = "dotted", color = "blue") +
  stat_summary(fun.y = function(y) mean(y) + 1.96 * sd(y) / sqrt(length(y)),
               geom = "line", linetype = "dotted", color = "blue") +
  theme_bw()
stat_summary(...) summarizes the y-values for a given value of x (the date). So in the first call, it calculates the mean, in the second the lowerCL, and in the third the upperCL.
You could also create a CL(...) function, and call that:
CL <- function(x, level = 0.95, type = c("lower", "upper")) {
  fact <- c(lower = -1, upper = 1)
  mean(x) - fact[type] * qnorm((1 - level) / 2) * sd(x) / sqrt(length(x))
}
ggplot(gg, aes(x = date, y = value)) +
  stat_summary(fun.y = mean, geom = "line") +
  stat_summary(fun.y = CL, fun.args = list(type = "lower"), geom = "line", linetype = "dotted", color = "blue") +
  stat_summary(fun.y = CL, fun.args = list(type = "upper"), geom = "line", linetype = "dotted", color = "blue") +
  theme_bw()
This produces a plot identical to the one above.
I have 28 variables, M1, M2, ..., M28, for which I compute a certain statistic x.
df = data.frame(model = paste("M", 1:28, sep = ""), x = runif(28, 1, 1.05))
levels = seq(0.8, 1.2, 0.05)
I would like to plot this data as follows:
Each circle (contour) represents a level of the statistic x. The three blue lines simply represent three different scenarios.
The dataframe included in this example represents one scenario. The blue line would simply join the values of all the models M1 to M28 for that specific scenario.
Is there any tool in R that allows for such a plot? I tried contour(), but the contours are not drawn as perfect circles.
Any help would be appreciated. Thanks!
Here is a ggplot solution:
library(ggplot2)
ggplot(data = df, aes(x = model, y = x, group = 1)) +
  geom_line() + coord_polar() +
  scale_y_continuous(limits = range(levels), breaks = levels, labels = levels)
Note this is a little confusing because of the names in your data frame: x is really the y variable here, and model the real x, so the axis labels seem odd.
EDIT: I had to set the factor levels for model in the data frame so they plot in the correct order, as sketched below.
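That step might look like this (a minimal sketch; without it, the default alphabetical order puts M10 right after M1):
df$model <- factor(df$model, levels = paste0("M", 1:28))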
I have a dataset having the unique IDs of manufacturing units, the industrial classification of their outputs (CAT) and the number of people each unit employs (EMP). I want to graphically show that EMP varies by CAT, i.e. employment size in general varies by the kind of output a unit produces. I tried boxplots arranged by median EMP:
a = read.csv("/filepath/plot.csv", header=T, stringsAsFactors=F)
bymedian = with(a, reorder(CAT, log(as.numeric(as.character(EMP))), median))
boxplot(log(EMP) ~ bymedian, data=a, horizontal=F, notch=T, pch=1, cex=.25, col="gray95", boxwex=.25, las=2, outline=F)
The problem is that because of the large number of categories (400+), the plot becomes very messy. Is there a cleaner way of showing what I am trying to do?
Using ggplot2, you can show what you are trying to do with scale_x_discrete, labelling only every 20th category so the axis stays readable.
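Since a isn't shared, here is hypothetical dummy data to make the example below runnable (stand-in values, not your real categories):
set.seed(1)
a <- data.frame(CAT = factor(sample(sprintf("C%03d", 1:400), 5000, replace = TRUE)),
                EMP = rlnorm(5000, meanlog = 3, sdlog = 1))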
library(ggplot2)
a$bymedian <- with(a, reorder(CAT, log(EMP), median))
p <- ggplot(a, aes(y = log(EMP), x = bymedian)) +
  geom_boxplot()
breaks <- levels(a$bymedian)[seq(1, nlevels(a$bymedian), 20)]
p + scale_x_discrete(breaks = breaks, labels = breaks)