Scatter plot with error bars - r

How can I generate the following plot in R? The points shown in the plot are the averages, and their ranges correspond to the minimal and maximal values.
I have data in two files (below is an example).
x y
1 0.8773
1 0.8722
1 0.8816
1 0.8834
1 0.8759
1 0.8890
1 0.8727
2 0.9047
2 0.9062
2 0.8998
2 0.9044
2 0.8960
.. ...

First of all: it is very unfortunate and surprising that R cannot draw error bars "out of the box".
Here is my favourite workaround; the advantage is that you do not need any extra packages. The trick is to draw arrows (!) but with little horizontal bars instead of arrowheads (!!!). This not-so-straightforward idea comes from the R Wiki Tips and is reproduced here as a worked-out example.
Let's assume you have a vector of "average values" avg and another vector of "standard deviations" sdev, both of the same length n. Let's make the abscissa just the index of these "measurements", so x <- 1:n. Using these, here are the plotting commands:
plot(x, avg,
     ylim = range(c(avg - sdev, avg + sdev)),
     pch = 19, xlab = "Measurements", ylab = "Mean +/- SD",
     main = "Scatter plot with std.dev error bars")
# hack: we draw arrows but with very special "arrowheads"
arrows(x, avg - sdev, x, avg + sdev, length = 0.05, angle = 90, code = 3)
The result looks like this:
In the arrows(...) call, length=0.05 is the size of the "arrowhead" in inches, angle=90 specifies that the "arrowhead" is perpendicular to the shaft of the arrow, and the particularly intuitive code=3 parameter specifies that we want to draw an arrowhead on both ends of the arrow.
For horizontal error bars the following changes are necessary, assuming that the sdev vector now contains the errors in the x values and the y values are the ordinates:
plot(x, y,
     xlim = range(c(x - sdev, x + sdev)),
     pch = 19, ...)
# horizontal error bars
arrows(x - sdev, y, x + sdev, y, length = 0.05, angle = 90, code = 3)
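Applied to the question's data layout (several y values per x, plotted as the mean with a min-max range), a minimal sketch using only base R; the file name data1.txt is hypothetical and stands in for one of the OP's two files:
# a sketch, assuming a two-column file with headers x and y as in the question
dat <- read.table("data1.txt", header = TRUE)  # hypothetical file name
avg <- tapply(dat$y, dat$x, mean)
lo  <- tapply(dat$y, dat$x, min)
hi  <- tapply(dat$y, dat$x, max)
xs  <- as.numeric(names(avg))
plot(xs, avg, pch = 19, ylim = range(c(lo, hi)),
     xlab = "x", ylab = "y (mean and min-max range)")
arrows(xs, lo, xs, hi, length = 0.05, angle = 90, code = 3)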

Using ggplot and a little dplyr for data manipulation:
set.seed(42)
df <- data.frame(x = rep(1:10,each=5), y = rnorm(50))
library(ggplot2)
library(dplyr)
df.summary <- df %>%
  group_by(x) %>%
  summarize(ymin = min(y),
            ymax = max(y),
            ymean = mean(y))
ggplot(df.summary, aes(x = x, y = ymean)) +
  geom_point(size = 2) +
  geom_errorbar(aes(ymin = ymin, ymax = ymax))
If there is an additional grouping column (the OP's example plot has two error bars per x value, since the data come from two files), then combine all the data into one data frame at the start, add the grouping variable to the dplyr::group_by call (e.g., group_by(x, file) if file is the name of the column), and add it as a grouping aesthetic in the ggplot call, e.g., aes(x = x, y = ymean, group = file), as sketched below.
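A minimal sketch of that grouped variant, assuming a hypothetical file column that records which file each row came from (here mapped to colour so the two series are also visually distinct):
library(ggplot2)
library(dplyr)
set.seed(42)
# hypothetical combined data; 'file' labels the source file of each row
df <- data.frame(x = rep(1:10, each = 10),
                 file = rep(c("a", "b"), times = 50),
                 y = rnorm(100))
df.summary <- df %>%
  group_by(x, file) %>%
  summarize(ymin = min(y), ymax = max(y), ymean = mean(y), .groups = "drop")
ggplot(df.summary, aes(x = x, y = ymean, colour = file)) +
  geom_point(size = 2) +
  geom_errorbar(aes(ymin = ymin, ymax = ymax), width = 0.2)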

#some example data
set.seed(42)
df <- data.frame(x = rep(1:10,each=5), y = rnorm(50))
#calculate mean, min and max for each x-value
library(plyr)
df2 <- ddply(df,.(x),function(df) c(mean=mean(df$y),min=min(df$y),max=max(df$y)))
#plot error bars
library(Hmisc)
with(df2,errbar(x,mean,max,min))
grid(nx=NA,ny=NULL)

To summarize Laryx Decidua's answer:
define and use a function like the following
plot.with.errorbars <- function(x, y, err, ylim = NULL, ...) {
  if (is.null(ylim))
    ylim <- c(min(y - err), max(y + err))
  plot(x, y, ylim = ylim, pch = 19, ...)
  arrows(x, y - err, x, y + err, length = 0.05, angle = 90, code = 3)
}
where one can override the automatic ylim, and also pass extra parameters such as main, xlab, ylab.
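A hypothetical call, with made-up vectors just to illustrate the interface:
# hypothetical usage of the helper defined above
x    <- 1:10
avg  <- rnorm(10, mean = 5)
sdev <- runif(10, 0.2, 0.8)
plot.with.errorbars(x, avg, sdev,
                    main = "Means with error bars",
                    xlab = "Measurement", ylab = "Mean +/- SD")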

Another (easier - at least for me) way to do this is below.
install.packages("ggplot2movies")
data(movies, package="ggplot2movies")
Plot average length vs. rating:
rating_by_len <- tapply(movies$length,
                        movies$rating,
                        mean)
plot(as.numeric(names(rating_by_len)), rating_by_len, ylim = c(0, 200),
     xlab = "Rating", ylab = "Length", main = "Average Length by Movie Rating", pch = 21)
Add error bars to the plot: mean - sd, mean + sd
sds <- tapply(movies$length, movies$rating, sd)
upper <- rating_by_len + sds
lower <- rating_by_len - sds
segments(x0 = as.numeric(names(rating_by_len)),
         y0 = lower,
         y1 = upper)
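If you also want the little caps on these bars, the same arrows() trick from the first answer can replace segments(); a sketch using the same endpoints:
arrows(x0 = as.numeric(names(rating_by_len)), y0 = lower, y1 = upper,
       length = 0.05, angle = 90, code = 3)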
Hope that helps.

I put together start-to-finish code for a hypothetical experiment with ten measurements, each replicated three times, just for fun and with the help of other Stack Overflow users. Thank you... Loops are just one option, since apply-style functions could be used instead (a loop-free sketch follows the code below), but I like to see what happens.
# Create fake data
x <- rep(1:10, each = 3)
y <- rnorm(30, mean = 4, sd = 1)
# Loop to get standard deviation from data
sd.y <- NULL
for (i in 1:10) {
  sd.y[i] <- sd(y[(1 + (i - 1) * 3):(3 + (i - 1) * 3)])
}
sd.y <- rep(sd.y, each = 3)
# Loop to get mean from data
mean.y <- NULL
for (i in 1:10) {
  mean.y[i] <- mean(y[(1 + (i - 1) * 3):(3 + (i - 1) * 3)])
}
mean.y <- rep(mean.y, each = 3)
# Put together the data to view it so far
data <- cbind(x, y, mean.y, sd.y)
# Make an empty matrix to fill with shrunk data
data.1 <- matrix(data = NA, nrow = 10, ncol = 4)
colnames(data.1) <- c("X", "Y", "MEAN", "SD")
# Loop to put data into shrunk format
for (i in 1:10) {
  data.1[i, ] <- data[(1 + (i - 1) * 3), ]
}
# Create atomic vectors for arrows
x <- data.1[, 1]
mean.exp <- data.1[, 3]
sd.exp <- data.1[, 4]
# Plot the data
plot(x, mean.exp, ylim = range(c(mean.exp - sd.exp, mean.exp + sd.exp)))
abline(h = 4)
arrows(x, mean.exp - sd.exp, x, mean.exp + sd.exp, length = 0.05, angle = 90, code = 3)
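As hinted above, the per-group summaries can also be computed without loops; a sketch with tapply, assuming the original x and y vectors from the top of the example (i.e. before x is overwritten):
# loop-free equivalent (a sketch): summarise y by the grouping vector x
mean.exp <- tapply(y, x, mean)
sd.exp   <- tapply(y, x, sd)
xs       <- as.numeric(names(mean.exp))
plot(xs, mean.exp, ylim = range(c(mean.exp - sd.exp, mean.exp + sd.exp)))
abline(h = 4)
arrows(xs, mean.exp - sd.exp, xs, mean.exp + sd.exp, length = 0.05, angle = 90, code = 3)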

Related


How to plot multiple columns at the same time? [duplicate]

I would like to plot y1 and y2 in the same plot.
x <- seq(-2, 2, 0.05)
y1 <- pnorm(x)
y2 <- pnorm(x, 1, 1)
plot(x, y1, type = "l", col = "red")
plot(x, y2, type = "l", col = "green")
But when I do it like this, they are not plotted in the same plot together.
In Matlab one can do hold on, but does anyone know how to do this in R?
lines() or points() will add to the existing graph, but will not create a new window. So you'd need to do
plot(x,y1,type="l",col="red")
lines(x,y2,col="green")
You can also use par and plot on the same graph but with a different axis. Something like the following:
plot( x, y1, type="l", col="red" )
par(new=TRUE)
plot( x, y2, type="l", col="green" )
If you read in detail about par in R, you will be able to generate really interesting graphs. Another book to look at is Paul Murrell's R Graphics.
When constructing multilayer plots, one should consider the ggplot2 package. The idea is to create a graphical object with basic aesthetics and enhance it incrementally.
ggplot style requires the data to be packed in a data.frame.
# Data generation
x <- seq(-2, 2, 0.05)
y1 <- pnorm(x)
y2 <- pnorm(x,1,1)
df <- data.frame(x,y1,y2)
Basic solution:
require(ggplot2)
ggplot(df, aes(x)) +                        # basic graphical object
  geom_line(aes(y = y1), colour = "red") +  # first layer
  geom_line(aes(y = y2), colour = "green")  # second layer
Here the + operator is used to add extra layers to the basic object.
With ggplot you have access to the graphical object at every stage of plotting. A typical step-by-step setup can look like this:
g <- ggplot(df, aes(x))
g <- g + geom_line(aes(y=y1), colour="red")
g <- g + geom_line(aes(y=y2), colour="green")
g
g produces the plot, and you can see it at every stage (well, after the creation of at least one layer). Further enhancements of the plot are also made with the created object. For example, we can add labels for the axes:
g <- g + ylab("Y") + xlab("X")
g
Final g looks like:
UPDATE (2013-11-08):
As pointed out in comments, ggplot's philosophy suggests using data in long format.
You can refer to this answer in order to see the corresponding code.
I think that the answer you are looking for is:
plot(first thing to plot)
plot(second thing to plot,add=TRUE)
Use the matplot function:
matplot(x, cbind(y1, y2), type = "l", col = c("red", "green"), lty = c(1, 1))
Use this if y1 and y2 are evaluated at the same x points. It scales the y-axis to fit whichever is bigger (y1 or y2), unlike some of the other answers here that will clip y2 if it gets bigger than y1 (the ggplot solutions mostly handle this fine).
Alternatively, and if the two lines don't have the same x-coordinates, set the axis limits on the first plot and add:
x1 <- seq(-2, 2, 0.05)
x2 <- seq(-3, 3, 0.05)
y1 <- pnorm(x1)
y2 <- pnorm(x2,1,1)
plot(x1,y1,ylim=range(c(y1,y2)),xlim=range(c(x1,x2)), type="l",col="red")
lines(x2,y2,col="green")
I am astonished that this question is 4 years old and nobody has mentioned matplot or xlim/ylim...
tl;dr: You want to use curve (with add=TRUE) or lines.
I disagree with par(new=TRUE) because that will double-print tick marks and axis labels. For example, here is the output of plot(sin); par(new=TRUE); plot(function(x) x**2):
Look how messed up the vertical axis labels are! Since the ranges are different, you would need to set ylim=c(lowest point between the two functions, highest point between the two functions), which is less easy than what I'm about to show you, and way less easy if you want to add not just two curves, but many.
What always confused me about plotting is the difference between curve and lines. (If you can't remember that these are the names of the two important plotting commands, just sing it.)
Here's the big difference between curve and lines.
curve will plot a function, like curve(sin). lines plots points with x and y values, like: lines( x=0:10, y=sin(0:10) ).
And here's a minor difference: curve needs to be called with add=TRUE for what you're trying to do, while lines already assumes you're adding to an existing plot.
Here's the result of calling plot(0:2); curve(sin).
Behind the scenes, check out methods(plot). And check body( plot.function )[[5]]. When you call plot(sin) R figures out that sin is a function (not y values) and uses the plot.function method, which ends up calling curve. So curve is the tool meant to handle functions.
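A minimal sketch of that approach, with a shared y range so neither curve gets clipped (the second function is just a made-up example):
f <- function(x) x^2                 # hypothetical second function
xs <- seq(0, 2*pi, length.out = 200)
plot(sin, 0, 2*pi, ylim = range(sin(xs), f(xs)), col = "red", ylab = "y")
curve(f, add = TRUE, col = "blue")   # add to the existing plot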
If you want to split the plot into two columns (two plots next to each other), you can do it like this:
par(mfrow=c(1,2))
plot(x)
plot(y)
As described by #redmode, you may plot the two lines in the same graphical device using ggplot. In that answer the data were in a 'wide' format. However, when using ggplot it is generally most convenient to keep the data in a data frame in a 'long' format. Then, by using different 'grouping variables' in the aesthetics arguments, properties of the line, such as linetype or colour, will vary according to the grouping variable, and corresponding legends will appear.
In this case, we can use the colour aesthetic, which matches the colour of the lines to different levels of a variable in the data set (here: y1 vs y2). But first we need to melt the data from wide to long format, using e.g. the function melt from the reshape2 package. Other methods to reshape the data are described here: Reshaping data.frame from wide to long format.
library(ggplot2)
library(reshape2)
# original data in a 'wide' format
x <- seq(-2, 2, 0.05)
y1 <- pnorm(x)
y2 <- pnorm(x, 1, 1)
df <- data.frame(x, y1, y2)
# melt the data to a long format
df2 <- melt(data = df, id.vars = "x")
# plot, using the aesthetics argument 'colour'
ggplot(data = df2, aes(x = x, y = value, colour = variable)) + geom_line()
If you are using base graphics (i.e. not lattice/ grid graphics), then you can mimic MATLAB's hold on feature by using the points/lines/polygons functions to add additional details to your plots without starting a new plot. In the case of a multiplot layout, you can use par(mfg=...) to pick which plot you add things to.
That is, you can use points for the overplot:
plot(x1, y1,col='red')
points(x2,y2,col='blue')
Idiomatic MATLAB plot(x1,y1,x2,y2) can be translated to R with ggplot2, for example, in this way:
x1 <- seq(1,10,.2)
df1 <- data.frame(x=x1,y=log(x1),type="Log")
x2 <- seq(1,10)
df2 <- data.frame(x=x2,y=cumsum(1/x2),type="Harmonic")
df <- rbind(df1,df2)
library(ggplot2)
ggplot(df)+geom_line(aes(x,y,colour=type))
Inspired by Tingting Zhao's Dual line plots with different range of x-axis Using ggplot2.
Rather than keeping the values to be plotted in an array, store them in a matrix. By default the entire matrix will be treated as one data set. However, if you add the same number of modifiers to the plot (e.g. col) as you have rows in the matrix, R will figure out that each row should be treated independently. For example:
x <- matrix(c(21, 50, 80, 41), nrow = 2)
y <- matrix(c(1, 2, 1, 2), nrow = 2)
plot(x, y, col = c("red", "blue"))
This should work unless your data sets are of differing sizes.
You could use the ggplotly() function from the plotly package to turn any of the ggplot2 examples here into an interactive plot, but I think this sort of plot is better without ggplot2:
# load plotly
library(plotly)
x <- seq(-2, 2, 0.05)
y1 <- pnorm(x)
y2 <- pnorm(x, 1, 1)
plot_ly(x = x) %>%
  add_lines(y = y1, color = I("red"), name = "Red") %>%
  add_lines(y = y2, color = I("green"), name = "Green")
You can also create your plot using ggvis:
library(ggvis)
x <- seq(-2, 2, 0.05)
y1 <- pnorm(x)
y2 <- pnorm(x,1,1)
df <- data.frame(x, y1, y2)
df %>%
  ggvis(~x, ~y1, stroke := 'red') %>%
  layer_paths() %>%
  layer_paths(data = df, x = ~x, y = ~y2, stroke := 'blue')
This will create the following plot:
Using plotly (adding a solution with primary and secondary y-axes, since it seems to be missing):
library(plotly)
x <- seq(-2, 2, 0.05)
y1 <- pnorm(x)
y2 <- pnorm(x, 1, 1)
df <- cbind.data.frame(x, y1, y2)
plot_ly(df) %>%
  add_trace(x = ~x, y = ~y1, name = 'Line 1', type = 'scatter',
            mode = 'lines+markers', connectgaps = TRUE) %>%
  add_trace(x = ~x, y = ~y2, name = 'Line 2', type = 'scatter',
            mode = 'lines+markers', connectgaps = TRUE, yaxis = "y2") %>%
  layout(title = 'Title',
         xaxis = list(title = "X-axis title"),
         yaxis2 = list(side = 'right', overlaying = "y", title = 'secondary y axis',
                       showgrid = FALSE, zeroline = FALSE))
Screenshot from working demo:
We can also use the lattice library:
library(lattice)
x <- seq(-2,2,0.05)
y1 <- pnorm(x)
y2 <- pnorm(x,1,1)
xyplot(y1 + y2 ~ x, ylab = "y1 and y2", type = "l", auto.key = list(points = FALSE,lines = TRUE))
For specific colors:
xyplot(y1 + y2 ~ x, ylab = "y1 and y2", type = "l",
       auto.key = list(points = FALSE, lines = TRUE),
       par.settings = list(superpose.line = list(col = c("red", "green"))))
Use curve for mathematical functions, and use add=TRUE to draw on the same plot and axes.
curve( log2 , to=5 , col="black", ylab="log's(.)")
curve( log , add=TRUE , col="red" )
curve( log10, add=TRUE , col="blue" )
abline( h=0 )


Plotting two Poisson processes on one plot

I have two Poisson processes:
n <- 100
x <- seq(0, 10, length = 1000)
y1 <- cumsum(rpois(1000, 1 / n))
y2 <- -cumsum(rpois(1000, 1 / n))
I would like to plot them in one plot and expect that y1 lies above the x-axis and y2 lies below the x-axis. I tried the following code:
plot(x, y1)
par(new = TRUE)
plot(x, y2, col = "red",
     axes = FALSE,
     xlab = '', ylab = '',
     xlim = c(0, 10), ylim = c(min(y2), max(y1)))
but it did not work. Can someone please tell me how to fix this? (I am working in R.)
Many thanks in advance
How about
plot(x,y1, ylim=range(y1,y2), type="l")
lines(x, y2, col="red")
I would suggest trying to avoid multiple calls to plot with par(new=TRUE). That is usually very messy. Here we use lines() to add to an existing plot. The only catch is that the x and y limits won't change based on the new data, so we use ylim in the first plot() call to set a range appropriate for all the data.
Or, if you don't want to worry about limits (as MrFlick mentioned) or the number of lines, you can also tidy up your data and use melt and ggplot:
df <- data.frame(x, y1, y2)
library(reshape2)
library(ggplot2)
mdf <- melt(df, "x")
ggplot(mdf, aes(x, value, color = variable)) +
  geom_line()

Surface plot in R, comparable to surf() in MATLAB

I want to plot a matrix of z values with x rows and y columns as a surface similar to this graph from MATLAB.
Surface plot:
Code to generate matrix:
# Parameters
shape <- 1.849241
scale <- 38.87986
x <- seq(from = -241.440, to = 241.440, by = 0.240)  # length 2013
y <- seq(from = -241.440, to = 241.440, by = 0.240)
matrix_fun <- matrix(data = 0, nrow = length(x), ncol = length(y))
# Generate two-dimensional travel distance probability density function
for (i in 1:length(x)) {
  for (j in 1:length(y)) {
    dxy <- sqrt(x[i]^2 + y[j]^2)
    prob <- 1/(scale^(shape) * gamma(shape)) * dxy^(shape - 1) * exp(-(dxy/scale))
    matrix_fun[i, j] <- prob
  }
}
# Rescale 2-d pdf to sum to 1
a <- sum(matrix_fun)
matrix_scale <- matrix_fun/a
I am able to generate surface plots using a couple of methods (persp(), persp3d(), surface3d()), but the colors aren't displaying the z values (the probabilities held within the matrix). The z values only seem to be displayed as heights, not as differentiated colors as in the MATLAB figure.
Example of graph code and graphs:
library(rgl)
persp3d(x=x, y=y, z=matrix_scale, color=rainbow(25, start=min(matrix_scale), end=max(matrix_scale)))
surface3d(x=x, y=y, z=matrix_scale, color=rainbow(25, start=min(matrix_scale), end=max(matrix_scale)))
persp(x=x, y=y, z=matrix_scale, theta=30, phi=30, col=rainbow(25, start=min(matrix_scale), end=max(matrix_scale)), border=NA)
Image of the last graph
Any other tips to recreate the image in R would be most appreciated (e.g. legend bar, axis tick marks, etc.).
So here's a ggplot solution which seems to come a little bit closer to the MATLAB plot
# Parameters
shape<-1.849241
scale<-38.87986
x<-seq(from = -241.440, to = 241.440, by = 2.40)
y<-seq(from = -241.440, to = 241.440, by = 2.40)
df <- expand.grid(x=x,y=y)
df$dxy <- with(df,sqrt(x^2+y^2))
df$prob <- dgamma(df$dxy,shape=shape,scale=scale)
df$prob <- df$prob/sum(df$prob)
library(ggplot2)
library(colorRamps) # for matlab.like(...)
library(scales) # for labels=scientific
ggplot(df, aes(x, y)) +
  geom_tile(aes(fill = prob)) +
  scale_fill_gradientn(colours = matlab.like(10), labels = scientific)
BTW: You can generate your data frame of probabilities much more efficiently using the built-in dgamma(...) function, rather than calculating it yourself.
In line with alexis_laz's comment, here is an example using filled.contour. You might want to increase your by to 2.40 since the finer granularity increases the time it takes to generate the plot by a lot but doesn't improve quality.
filled.contour(x = x, y = y, z = matrix_scale, color = terrain.colors)
# terrain.colors is in the base grDevices package
If you want something closer to your color scheme above, you can fiddle with the rainbow function:
filled.contour(x = x, y = y, z = matrix_scale,
               color = (function(n, ...) rep(rev(rainbow(n/2, ...)[1:9]), each = 3)))
Finer granularity:
filled.contour(x = x, y = y, z = matrix_scale, nlevels = 150,
               color = (function(n, ...)
                 rev(rep(rainbow(50, start = 0, end = 0.75, ...), each = 3))[5:150]))
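The base persp() colours can also be tied to the z values directly by colouring each surface facet. A minimal sketch, assuming the x, y and matrix_scale objects from the question and the matlab.like palette from the colorRamps package used above (with the full 0.240-spaced grid this is slow, so the coarser by = 2.40 grid is advisable):
library(colorRamps)  # for matlab.like(), as in the ggplot answer
# colour each facet by the average z value of its four corners
nrz <- nrow(matrix_scale)
ncz <- ncol(matrix_scale)
zfacet <- (matrix_scale[-1, -1] + matrix_scale[-1, -ncz] +
           matrix_scale[-nrz, -1] + matrix_scale[-nrz, -ncz]) / 4
pal <- matlab.like(100)
facetcol <- cut(zfacet, 100)  # bin facet heights into 100 colour classes
persp(x = x, y = y, z = matrix_scale, theta = 30, phi = 30,
      col = pal[facetcol], border = NA)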
