Log-transformed density function not plotting correctly in R

I'm trying to log-transform the x axis of a density plot and get unexpected results. The code without the transformation works fine:
library(ggplot2)
data = data.frame(x=c(1,2,10,11,1000))
dens = density(data$x)
densy = sapply(data$x, function(x) { dens$y[findInterval(x, dens$x)] })
ggplot(data, aes(x = x)) +
  geom_density() +
  geom_point(y = densy)
If I add scale_x_log10(), I get the following result:
Apart from the y values having been rescaled, something seems to have happened to the x values as well -- the peaks of the density function are not quite where the points are.
Am I using the log transformation incorrectly here?

The shape of the density curve changes after the transformation because the distribution of the data has changed and the bandwidths are different. If you set a bandwidth of bw=1000 before the transformation and bw=10 after it, you will get two normal-looking densities (with different y-axis values, because the support is much larger in the first case). Here is an example showing how varying the bandwidth changes the shape of the density.
data = data.frame(x=c(1,2,10,11,1000), y=0)
## Examine how changing the bandwidth changes the shape of the curve
par(mfrow=c(2,1))
greys <- colorRampPalette(c("black", "red"))(10)
plot(density(data$x), main="No Transform")
points(data, pch=19)
plot(density(log10(data$x)), ylim=c(0,2), main="Log-transform w/ varying bw")
points(log10(data$x), data$y, pch=19)
for (i in 1:10) {
  lines(density(log10(data$x), bw=0.02*i), col=greys[i])
}
legend("topright", legend=paste(0.02*1:10), col=greys, lty=1, cex=0.8)

Related

R how to automatically adjust y axis when using basic plot with xlim

I'm trying to use base R (and would like to stick to it for this problem) to plot a specific portion of a dataset.
My example data looks like below:
x <- c(1:100)
y <- sort(runif(100, min=0, max=1000))
When I plot this with plot(x,y, type='l'), I get a plot with a y axis that shows 0 to 1000. However, when I plot only a specific x range, my y axis still shows 0 to 1000. I would like to zoom in to reduce the y axis range.
For example,
plot(x,y, type='l', xlim=c(40,60))
plot(x,y, type='l', xlim=c(80,90))
both produce plots with a y axis that ranges c(0,1000). But I'd like to zoom in so that the y axis range for the first plot is something like c(300,700) and for the second plot c(700,1000). (300, 700, and 1000 are arbitrary numbers, just to illustrate the idea of zooming into the plot.) Is there a way to do this without setting a specific ylim?
I'd like to avoid using ylim because I'm plotting and saving in a for loop and I can't write a ylim that is suitable for all plots. I've thought of doing something like ylim = max(y)*1.5, but again, since I'm cutting the y values off based on xlim, this doesn't help with zooming in whenever xlim changes.
Subset the relevant data and plot that:
lower = 40
upper = 60
ind = which(x >= lower & x <= upper)
plot(x[ind], y[ind], type = "l")
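Since the question mentions plotting and saving inside a for loop, here is a minimal sketch of that workflow (the window bounds and output file names are purely illustrative):
windows <- list(c(40, 60), c(80, 90))                # illustrative x ranges
for (w in windows) {
  ind <- which(x >= w[1] & x <= w[2])
  png(paste0("plot_", w[1], "_", w[2], ".png"))      # hypothetical output file names
  plot(x[ind], y[ind], type = "l")                   # y-axis adapts to the subset
  dev.off()
}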

Plot theoretical distribution against the real data histogram in one figure

I want to plot a histogram of real data and compare it with a theoretical normal distribution in one figure, but the scales look different: the two plots end up with different scales.
# you can generate some random data for ystar, which is the real data
x<-seq(-4,4,length=200)
y<-dnorm(x,mean=0, sd=1)
plot(x,y, type = "l", lwd = 2, xlim = c(-3.5,3.5),ylim=c(0,0.7))
par(new = TRUE)
hist(ystar,xlim = c(-10,10),freq = FALSE,ylim=c(0,0.7),breaks = 50)
Desired output:
Assuming that ystar is a vector, you should change this:
y<-dnorm(x,mean=0, sd=1)
To:
y<-dnorm(x,mean=mean(ystar), sd=sd(ystar))
This will produce a distribution function that approximately matches the histogram.
You should then be able to use the same x-limits for both the histogram and the theoretical distribution, which will eliminate the strange overlapping axis labels you have in your current version.
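Putting that together, a minimal sketch (ystar is simulated here purely so the example runs; substitute your real data):
set.seed(1)
ystar <- rnorm(500, mean = 1, sd = 2)   # placeholder for the real data
hist(ystar, freq = FALSE, breaks = 50, xlim = c(-10, 10))
curve(dnorm(x, mean = mean(ystar), sd = sd(ystar)), add = TRUE, lwd = 2)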

How to fit R histogram within axes limits [0,1]

Suppose I generate data using x <- rnorm(10000) and then plot a simple histogram using hist(x).
This obviously shows that the data is normal, but the x and y axes are determined by the values generated. How could I adjust x so that the histogram will still appear as a normal curve, but on a plot whose bounds are x=[0,1] and y=[0,1]. I tried using this normalization method from another answer, https://stats.stackexchange.com/questions/70801/how-to-normalize-data-to-0-1-range, and setting xlim and ylim to c(0,1), but the result was not what I wanted, as it basically just fills up the entire plot.
I'm not sure what you mean by 'fills up the whole plot'. This code seems to work fine:
x <- rnorm(1000)
z <- (x - min(x))/(max(x) - min(x))
hist(z)
Then if you want the y-axis on a scale of 0-1:
hist1 <- hist(z)
hist1$counts <- hist1$counts/sum(hist1$counts)
plot(hist1, ylim = c(0,1)) ## Looks squished to me if you include the ylim argument

Smoothing using kernel and loess in R

I am trying to smooth my data set using a kernel or loess smoothing method, but the results are not clear or not what I want. I have several questions.
My x data is "conc" and my y data is "depth", measured in, e.g., cm.
1) Kernel smooth
k <- kernel("daniell", 150)
plot(k)
K <- kernapply(conc, k)
plot(conc~depth)
lines(K, col = "red")
Here, my data is smoothed by frequency=150. Does this mean that every data point is averaged over the neighbouring 150 data points on each side (right and left)? And what does "daniell" mean? I could not find an explanation online.
2) Loess smooth
p<-qplot(depth, conc, data=total)
p1 <- p + geom_smooth(method = "loess", size = 1, level=0.95)
Here, what are the defaults of the loess smoothing function? If I want to smooth my data over 150 points as in the case above (a moving average over every 150 data points), how can I modify this code?
3) To show the y-axis on a log scale, I used log10(conc) instead of conc, and that worked, but I cannot change the y-axis tick labels. I tried scale_y_log10(limits = c(1,1e3)) in my code to show tick labels like 10^0, 10^1, 10^2, ..., but it did not work.
Thanks a lot for your help.
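No answer is recorded here for this question, but for point 3 the scales::trans_format trick from the last histogram answer below should carry over; a rough sketch, assuming the data frame total with columns depth and conc from the question (span is loess's own smoothing parameter, shown only as the knob closest to the window size asked about):
library(ggplot2)
library(scales)
ggplot(total, aes(x = depth, y = conc)) +
  geom_point() +
  geom_smooth(method = "loess", span = 0.75) +   # span = fraction of the data used in each local fit
  scale_y_log10(breaks = c(1, 10, 100, 1000),
                labels = trans_format("log10", math_format(10^.x)))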

Histogram with Logarithmic Scale and custom breaks

I'm trying to generate a histogram in R with a logarithmic scale for y. Currently I do:
hist(mydata$V3, breaks=c(0,1,2,3,4,5,25))
This gives me a histogram, but the density between 0 to 1 is so great (about a million values difference) that you can barely make out any of the other bars.
Then I've tried doing:
mydata_hist <- hist(mydata$V3, breaks=c(0,1,2,3,4,5,25), plot=FALSE)
plot(mydata_hist$counts, log="xy", pch=20, col="blue")
It gives me sorta what I want, but the bottom shows me the values 1-6 rather than 0, 1, 2, 3, 4, 5, 25. It's also showing the data as points rather than bars. barplot works but then I don't get any bottom axis.
A histogram is a poor man's density estimate. Note that in your call to hist() with default arguments, you get frequencies, not probabilities -- add prob=TRUE to the call if you want probabilities.
As for the log axis problem, don't use 'x' if you do not want the x-axis transformed:
plot(mydata_hist$counts, log="y", type='h', lwd=10, lend=2)
gets you bars on a log-y scale -- the look-and-feel is still a little different but can probably be tweaked.
Lastly, you can also do hist(log(x), ...) to get a histogram of the log of your data.
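For example (a one-line sketch using log1p rather than log, since the breaks in the question start at 0):
hist(log1p(mydata$V3))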
Another option would be to use the package ggplot2.
ggplot(mydata, aes(x = V3)) + geom_histogram() + scale_x_log10()
It's not entirely clear from your question whether you want a logged x-axis or a logged y-axis. A logged y-axis is not a good idea when using bars because they are anchored at zero, which becomes negative infinity when logged. You can work around this problem by using a frequency polygon or density plot.
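A minimal sketch of the frequency-polygon workaround (the bin count is arbitrary here):
ggplot(mydata, aes(x = V3)) +
  geom_freqpoly(bins = 30) +
  scale_y_log10()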
Dirk's answer is a great one. If you want an appearance like what hist produces, you can also try this:
buckets <- c(0,1,2,3,4,5,25)
mydata_hist <- hist(mydata$V3, breaks=buckets, plot=FALSE)
bp <- barplot(mydata_hist$counts, log="y", col="white", names.arg=buckets[-1]) # one label per bar, using the upper break
text(bp, mydata_hist$counts, labels=mydata_hist$counts, pos=1)
The last line is optional; it adds value labels just under the top of each bar, which can be useful on a log-scale graph, but it can also be omitted.
I also pass main, xlab, and ylab parameters to provide a plot title, x-axis label, and y-axis label.
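For instance (the title and label text here are only illustrative):
bp <- barplot(mydata_hist$counts, log="y", col="white", names.arg=buckets[-1],
              main="Histogram of V3", xlab="V3", ylab="Count")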
Run the hist() function without making a graph, log-transform the counts, and then draw the figure.
hist.data = hist(my.data, plot=F)
hist.data$counts = log(hist.data$counts, 2)
plot(hist.data)
It should look just like the regular histogram, but the y-axis will be log2 Frequency.
I've put together a function that behaves identically to hist in the default case, but accepts the log argument. It uses several tricks from other posters, but adds a few of its own. hist(x) and myhist(x) look identical.
The original problem would be solved with:
myhist(mydata$V3, breaks=c(0,1,2,3,4,5,25), log="xy")
The function:
myhist <- function(x, ..., breaks="Sturges",
                   main = paste("Histogram of", xname),
                   xlab = xname,
                   ylab = "Frequency") {
  xname = paste(deparse(substitute(x), 500), collapse="\n")
  h = hist(x, breaks=breaks, plot=FALSE)
  plot(h$breaks, c(NA,h$counts), type='S', main=main,
       xlab=xlab, ylab=ylab, axes=FALSE, ...)
  axis(1)
  axis(2)
  lines(h$breaks, c(h$counts,NA), type='s')
  lines(h$breaks, c(NA,h$counts), type='h')
  lines(h$breaks, c(h$counts,NA), type='h')
  lines(h$breaks, rep(0,length(h$breaks)), type='S')
  invisible(h)
}
Exercise for the reader: Unfortunately, not everything that works with hist works with myhist as it stands. That should be fixable with a bit more effort, though.
Here's a pretty ggplot2 solution:
library(ggplot2)
library(scales) # makes pretty labels on the x-axis
breaks = c(0,1,2,3,4,5,25)
ggplot(mydata, aes(x = V3)) +
  geom_histogram(breaks = log10(breaks)) +
  scale_x_log10(
    breaks = breaks,
    labels = scales::trans_format("log10", scales::math_format(10^.x))
  )
Note that to set the breaks in geom_histogram, they had to be transformed to work with scale_x_log10.
