I have data that is computed on a 2D polar mesh:
# The mesh created is in two dimensions: r and theta.
# Mesh steps in theta are regular, while mesh steps in r are more refined
# close to the origin
nb.theta <- 50
theta.max <- 130
theta <- seq(0, theta.max, length.out = nb.theta)
nb.r <- 80
# r goes from r0 to rMax
q0 <- 1.1
z <- seq(1, nb.r)
rMax <- 30
r0 <- rMax / (q0 ^ nb.r - 1)
r <- r0 * (q0 ^ z - 1)
# Now let's add some data
mesh <- as.data.frame(expand.grid(r = r, theta = theta))
mesh$value <- mesh$r * mesh$theta / theta.max
Now, I want to plot the mesh in R (preferably with ggplot2). I tried:
ggplot(mesh, aes(r, theta, color = value)) + geom_point() + coord_polar(theta = "y")
But the result is far from satisfactory:
Ideally, I would like to have cells filled and not just points. I also would like the plot not to be a full circle: I only have data from 0 to 130 degrees.
Is this possible?
This should solve the circle issue:
ggplot(mesh, aes(r, theta, color = value)) +
  geom_point() +
  coord_polar(theta = "y") +
  scale_y_continuous(limits = c(0, 360))
We can use geom_tile rather than geom_point so that we fill the mesh. We need to calculate the width of each cell. Here I've just set it to r/10, which is approximately correct; you will be able to calculate it exactly (a sketch of the exact calculation follows below).
Adding ylim ensures that only part of the circle is filled.
mesh <- expand.grid(r = r, theta = theta)
mesh$value <- mesh$r * mesh$theta / theta.max
mesh$width <- mesh$r/10
ggplot(mesh, aes(r, theta, fill = value, width = width)) +
  geom_tile() +
  coord_polar(theta = "y") +
  ylim(0, 360)
NB: expand.grid() already returns a data.frame, so the as.data.frame() conversion isn't needed.
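If you want the exact width rather than the r/10 approximation, it follows from the way r is constructed in the question (r = r0 * (q0^z - 1)): the spacing between consecutive radii is r0 * q0^z * (q0 - 1) = (r + r0) * (q0 - 1). A minimal sketch using the objects defined above:
# exact spacing between consecutive radii (reduces to roughly r/10 when r >> r0, since q0 = 1.1)
mesh$width <- (mesh$r + r0) * (q0 - 1)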
I'm trying to create a contour plot for a very specific function. The contour plot I've obtained through ggplot2 is quite similar to Wolfram Alpha's plot, although I'm not getting the same "grid-like" (white stripes) behavior as in the latter source.
How could I get similar white stripes like those if I'm not sure what those values are? Should I increase the amount of points?
library(tidyverse)
v <- function(r, q) {
  value <- pmax(0, 3 * q - 1 + r - 3 * r * q)^2 +
    pmax(0, r - 3 * r * q)^2 +
    pmax(0, -2 + 2 * r + 2 * q - 2 * r * q)^2 +
    pmax(0, 2 * q - 3 * r * q)^2
  return(value)
}
r <- q <- seq(0, 1, 0.001)
vertices <- expand_grid(r, q)
vertices %>%
  mutate(v = v(r, q)) %>%
  ggplot(aes(x = r, y = q, z = v)) +
  geom_contour_filled()
How can I reproduce the following plot (from this source) in R? In particular, I am having problems with the log base 10 x and y axes.
I know about the parameter log="xy", but I don't know whether you can control the base of the logarithmic scale (my guess is that 10 is the default?), and I'm not having any luck with the specific issue below.
Leaving aside the power law red line, I was playing with
x = rlnorm(1e4,0,10)
h = hist(x, prob=T, plot=F)
plot(h$count, log="xy", type="l", lend=2)
without success.
Use the pdf of the lognormal in base 10.
[Generalising it to other log bases is straightforward.]
We can then plot the pdf on a log10-log10 scale.
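In other words, if \(\log_{10} X \sim \mathcal{N}(\mu, \sigma^2)\) and we write \(w = \log_{10} x\), the density that the function below evaluates is
\[ p_X(x) = \frac{\log_{10} e}{\sigma \sqrt{2\pi}\, 10^{w}} \exp\!\left( -\frac{(w - \mu)^2}{2\sigma^2} \right), \]
i.e. the usual lognormal change-of-variables formula written with base-10 logs.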
(gg)plotting
# lognormal base-10 pdf; w is in log10 units
lognorm_base10 <- function(w, mu, sigma) {
  log10(exp(1)) / (sqrt(2 * pi * sigma^2) * 10^w) * exp(-(w - mu)^2 / (2 * sigma^2))
}
# Generate data for mu = 0, sigma = 10
x <- seq(0, 10, length.out = 100)
y <- lognorm_base10(x, 0, 10)
# Plot
library(ggplot2)
gg <- ggplot(data.frame(x = x, y = y), aes(x, y))
gg <- gg + geom_line() + scale_y_log10()
gg <- gg + labs(x = "log10(x)", y = "log10(p)")
gg
Plotting without ggplot
plot(x, log10(y), type = "l")
For reasons I won't go into I need to plot a vertical normal curve on a blank ggplot2 graph. The following code gets it done as a series of points with x,y coordinates
dfBlank <- data.frame()
g <- ggplot(dfBlank) + xlim(0.58,1) + ylim(-0.2,113.2)
hdiLo <- 31.88
hdiHi <- 73.43
yComb <- seq(hdiLo, hdiHi, length = 75)
xVals <- 0.79 - (0.06*dnorm(yComb, 52.65, 10.67))/0.05
dfVertCurve <- data.frame(x = xVals, y = yComb)
g + geom_point(data = dfVertCurve, aes(x = x, y = y), size = 0.01)
The curve is clearly discernible but is a series of points. The lines() function in basic plot would turn these points into a smooth line.
Is there a ggplot2 equivalent?
I see two different ways to do it.
geom_segment
The first uses geom_segment to 'link' each point with its next one.
hdiLo <- 31.88
hdiHi <- 73.43
yComb <- seq(hdiLo, hdiHi, length = 75)
xVals <- 0.79 - (0.06*dnorm(yComb, 52.65, 10.67))/0.05
dfVertCurve <- data.frame(x = xVals, y = yComb)
library(ggplot2)
ggplot() +
  xlim(0.58, 1) +
  ylim(-0.2, 113.2) +
  geom_segment(data = dfVertCurve, aes(x = x, xend = dplyr::lead(x), y = y, yend = dplyr::lead(y)), size = 0.01)
#> Warning: Removed 1 rows containing missing values (geom_segment).
As you can see, it just links the points you created. The last point does not have a next one, so that segment is removed (see the warning).
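A side note: since dfVertCurve is already ordered by y, geom_path(), which connects observations in the order they appear in the data (much like lines() in base graphics), would also draw the same curve; a minimal sketch:
ggplot() +
  xlim(0.58, 1) +
  ylim(-0.2, 113.2) +
  geom_path(data = dfVertCurve, aes(x = x, y = y))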
stat_function
The second one, which I think is better and more ggplot-ish, uses stat_function().
library(ggplot2)
f = function(x) .79 - (.06 * dnorm(x, 52.65, 10.67)) / .05
hdiLo <- 31.88
hdiHi <- 73.43
yComb <- seq(hdiLo, hdiHi, length = 75)
ggplot() +
  xlim(-0.2, 113.2) +
  ylim(0.58, 1) +
  stat_function(data = data.frame(yComb), fun = f) +
  coord_flip()
This builds a proper function (y = f(x)) and plots it. Note that it is built along the x axis and then flipped with coord_flip(), which is why xlim and ylim are swapped.
Can ggplot2 be used to produce a so-called topoplot (often used in neuroscience)?
Sample data:
label x y signal
1 R3 0.64924459 0.91228430 2.0261520
2 R4 0.78789621 0.78234410 1.7880972
3 R5 0.93169511 0.72980685 0.9170998
4 R6 0.48406513 0.82383895 3.1933129
Full sample data.
Rows represent individual electrodes. Columns x and y represent the projection into 2D space and the column signal is essentially the z-axis representing voltage measured at a given electrode.
stat_contour doesn't work, apparently due to the irregular grid.
geom_density_2d only provides a density estimate of x and y.
geom_raster is either not suited to this task or I must be using it incorrectly, since it quickly runs out of memory.
Smoothing (like in the image on the right) and head contours (nose, ears) aren't necessary.
I want to avoid Matlab and transforming the data so that it fits this or that toolbox… Many thanks!
Update (26 January 2016)
The closest I've been able to get to my objective is via
library(colorRamps)
ggplot(channels, aes(x, y, z = signal)) + stat_summary_2d() + scale_fill_gradientn(colours=matlab.like(20))
which produces an image like this:
Update 2 (27 January 2016)
I've tried @alexforrence's approach with the full data and this is the result:
It's a great start, but there are a couple of issues:
The last call (ggplot()) takes about 40 seconds on an Intel i7 4790K, while Matlab toolboxes manage to generate these almost instantly; my ‘emergency solution’ above takes about a second.
As you can see, the upper and lower borders of the central part appear to be ‘sliced’; I'm not sure what causes this, but it could be related to the third issue.
I'm getting these warnings:
1: Removed 170235 rows containing non-finite values (stat_contour).
2: Removed 170235 rows containing non-finite values (stat_contour).
Update 3 (27 January 2016)
Comparison between two plots produced with different interp(xo, yo) and stat_contour(binwidth) values:
The edges become ragged if one chooses a low interp(xo, yo) resolution, in this case xo = yo = seq(0, 1, length = 100):
Here's a potential start:
First, we'll attach some packages. I'm using akima to do linear interpolation, though it looks like EEGLAB uses some sort of spherical interpolation here (the data was a little sparse to try it).
library(ggplot2)
library(akima)
library(reshape2)
Next, reading in the data:
dat <- read.table(text = " label x y signal
1 R3 0.64924459 0.91228430 2.0261520
2 R4 0.78789621 0.78234410 1.7880972
3 R5 0.93169511 0.72980685 0.9170998
4 R6 0.48406513 0.82383895 3.1933129")
We'll interpolate the data, and stick that in a data frame.
datmat <- interp(dat$x, dat$y, dat$signal,
                 xo = seq(0, 1, length = 1000),
                 yo = seq(0, 1, length = 1000))
datmat2 <- melt(datmat$z)
names(datmat2) <- c('x', 'y', 'value')
datmat2[, 1:2] <- datmat2[, 1:2] / 1000 # scale the grid indices back to [0, 1]
I'm going to borrow from some previous answers. The circleFun below is from Draw a circle with ggplot2.
circleFun <- function(center = c(0, 0), diameter = 1, npoints = 100) {
  r <- diameter / 2
  tt <- seq(0, 2 * pi, length.out = npoints)
  xx <- center[1] + r * cos(tt)
  yy <- center[2] + r * sin(tt)
  return(data.frame(x = xx, y = yy))
}
circledat <- circleFun(c(.5, .5), 1, npoints = 100) # center on [.5, .5]
# ignore anything outside the circle
datmat2$incircle <- (datmat2$x - .5)^2 + (datmat2$y - .5)^2 < .5^2 # flag points inside the head circle
datmat2 <- datmat2[datmat2$incircle, ]
And I really liked the look of the contour plot in R plot filled.contour() output in ggplot2, so we'll borrow that one.
ggplot(datmat2, aes(x, y, z = value)) +
  geom_tile(aes(fill = value)) +
  stat_contour(aes(fill = ..level..), geom = 'polygon', binwidth = 0.01) +
  geom_contour(colour = 'white', alpha = 0.5) +
  scale_fill_distiller(palette = "Spectral", na.value = NA) +
  geom_path(data = circledat, aes(x, y, z = NULL)) +
  # draw the nose (haven't drawn ears yet)
  geom_line(data = data.frame(x = c(0.45, 0.5, .55), y = c(1, 1.05, 1)),
            aes(x, y, z = NULL)) +
  # add points for the electrodes
  geom_point(data = dat, aes(x, y, z = NULL, fill = NULL),
             shape = 21, colour = 'black', fill = 'white', size = 2) +
  theme_bw()
With improvements mentioned in the comments (setting extrap = TRUE and linear = FALSE in the interp call to fill in gaps and do a spline smoothing, respectively, and removing NAs before plotting), we get:
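For reference, a sketch of what that modified call might look like (extrap and linear are the akima::interp arguments mentioned above; the NA removal is the last line):
datmat <- interp(dat$x, dat$y, dat$signal,
                 xo = seq(0, 1, length = 1000),
                 yo = seq(0, 1, length = 1000),
                 linear = FALSE, extrap = TRUE)
datmat2 <- melt(datmat$z)
names(datmat2) <- c('x', 'y', 'value')
datmat2[, 1:2] <- datmat2[, 1:2] / 1000
datmat2 <- datmat2[!is.na(datmat2$value), ] # drop NAs before plotting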
mgcv can do spherical splines. This replaces akima (the chunk containing interp() isn't necessary).
library(mgcv)
spl1 <- gam(signal ~ s(x, y, bs = 'sos'), data = dat)
# fine grid, coarser is faster
datmat2 <- data.frame(expand.grid(x = seq(0, 1, 0.001), y = seq(0, 1, 0.001)))
resp <- predict(spl1, datmat2, type = "response")
datmat2$value <- resp
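From here the plotting is the same as above; presumably you would again clip to the head circle before reusing the ggplot() call, e.g.:
datmat2$incircle <- (datmat2$x - .5)^2 + (datmat2$y - .5)^2 < .5^2
datmat2 <- datmat2[datmat2$incircle, ]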
I am trying to replot the following figure in a more legible way. Observe that I am trying to plot both lines and points. However, there are far too many points being plotted and the line is getting covered up. Is there a way I can plot:
Different lines for different datasets
Different points shapes for different datasets but limit the number of points to say 30-50
Add the line and point information to the legend
My plotting code is here (It was too big for SO)
Do you need something like this?
library(ggplot2)
library(grid) # for unit()
transData$Type2 <- factor(transData$Type, labels = c("Some Info for P", "Some Info for Q"))
ggplot(transData, aes(x = Value, y = ecd)) +
  geom_line(aes(group = Type2, colour = Type2, linetype = Type2), size = 1.5) +
  # draw only ~30 evenly spaced rows as points
  geom_point(aes(shape = Type2), data = transData[round(seq(1, nrow(transData), length = 30)), ], size = 5) +
  theme(legend.position = "top", legend.key.width = unit(3, "line"))
You can plot large, partially transparent points: the denser areas will appear darker.
p <- ggplot(transData, aes(x=Value, y=ecd, group=Type))
p +
geom_point(size=20, colour=rgb(0,0,0,.02)) +
geom_line(aes(colour=Type), size=3)
The following code adds points that are more or less evenly spaced along each curve, though they're not necessarily actual data points (they could be interpolated):
barbedize <- function(x, y, N=10, ...){
  # order the points along x
  ind <- order(x)
  x <- x[ind]
  y <- y[ind]
  # segment lengths and cumulative arc length along the curve
  lengths <- c(0, sqrt(diff(x)^2 + diff(y)^2))
  l <- cumsum(lengths)
  tl <- l[length(l)]
  # target positions, evenly spaced along the total arc length
  el <- seq(0, to=tl, length=N+1)[-1]
  res <-
    sapply(el[-length(el)], function(ii){
      # find the segment containing this target position and
      # interpolate linearly within it
      int <- findInterval(ii, l)
      xx <- x[int:(int+1)]
      yy <- y[int:(int+1)]
      dx <- diff(xx)
      dy <- diff(yy)
      new.length <- ii - l[int]
      segment.length <- lengths[int+1]
      ratio <- new.length / segment.length
      xend <- x[int] + ratio * dx
      yend <- y[int] + ratio * dy
      c(x=xend, y=yend)
    })
  as.data.frame(t(res))
}
library(plyr)
few_points <- ddply(transData, "Type", function(d, ...)
  barbedize(d$Value, d$ecd, ...), N = 10)
ggplot(transData, aes(x = Value, y = ecd)) +
  geom_line(aes(group = Type, colour = Type, linetype = Type), size = 1) +
  geom_point(aes(x = x, y = y, colour = Type, shape = Type), data = few_points, size = 3)
(This is a quick and dirty proof of principle; barbedize should be cleaned up and written more efficiently...)
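For what it's worth, an alternative sketch of such a cleanup (my own variant, using base R approx() to interpolate along the cumulative arc length; barbedize2 is just a hypothetical name):
barbedize2 <- function(x, y, N = 10) {
  ord <- order(x)
  x <- x[ord]
  y <- y[ord]
  l <- cumsum(c(0, sqrt(diff(x)^2 + diff(y)^2)))          # cumulative arc length
  el <- seq(0, max(l), length.out = N + 1)[-c(1, N + 1)]  # interior target positions
  data.frame(x = approx(l, x, xout = el)$y,
             y = approx(l, y, xout = el)$y)
}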