Creating a beta distribution Q-Q plot - r

My task is to generate 100 random numbers from a beta distribution and to compare that random sample with the beta distribution using a quantile-quantile plot.
This is my attempt:
library(MASS)
library(qualityTools)
Random_Numbers_Beta <- rbeta(100, 1, 1)
qqPlot(Random_Numbers_Beta, "beta", list(shape = 1, rate = 1))
Unfortunately something is wrong. This is an error which occurs:
Error in (function (x, densfun, start, ...) :
'start' must be a named list
Can something be done with that issue?

First, you have to specify that list(shape = 1, rate = 1) is the start parameter; right now that list is being treated as the value of the confbounds parameter. Second, the parameters are actually not shape and rate, but shape1 and shape2, as in, e.g., ?dbeta.
qqPlot(Random_Numbers_Beta, "beta", start = list(shape1 = 1, shape2 = 1))
Again inspecting ?qqPlot you may see that ... is for "further graphical parameters: (see par)." Hence, you may modify the plot the way you like; e.g., adding col = 'red'.
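For instance, something like this should draw the points in red (col is simply passed through ... as one of those graphical parameters):
qqPlot(Random_Numbers_Beta, "beta", start = list(shape1 = 1, shape2 = 1), col = "red")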
Also notice that Beta(1,1) is simply the uniform distribution on [0,1] and, hence, its quantile function is the identity function. That is, qbeta(x, 1, 1) == x for any x in [0,1]. So, you may also simply work directly with
x <- seq(0, 1, length = 500)
plot(quantile(Random_Numbers_Beta, x), x)
abline(a = 0, b = 1, col = 'red')
if you don't need the confidence bounds.
One can notice, however, that the two plots are a little different. Given your task, it would seem that you need the second one.
In the first one, it looks like qqPlot fits a beta distribution to your data and uses the quantiles of that fitted distribution, which apparently isn't exactly the identity function. That is, it doesn't use your exact knowledge of the parameters; the second plot does.
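If you want to check this, one quick way (my own addition, assuming a maximum-likelihood fit along the lines of what qqPlot does internally) is to fit the beta distribution yourself with MASS::fitdistr and inspect the estimated parameters:
# Fit a beta distribution to the simulated data by maximum likelihood;
# the estimates are usually close to, but not exactly, (1, 1), which is
# why the fitted quantiles are not exactly the identity function.
fit <- MASS::fitdistr(Random_Numbers_Beta, "beta",
                      start = list(shape1 = 1, shape2 = 1))
fit$estimate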

Related

How to smooth a curve in R?

location_difference <- c(0, 0.5, 1, 1.5, 2)
Power <- c(0, 0.2, 0.4, 0.6, 0.8, 1)
plot(location_difference, Power)
The author of the paper said he smoothed the curve using a weighted moving average with weight vector w = (0.25, 0.5, 0.25), but he did not explain how he did this or which function he used. I am really confused.
Up front, as @MartinWettstein cautions, be careful about when you smooth data and what you do with it (what you infer from it). Having said that, a simple weighted moving average might look like this.
# replacement data
x <- seq(0, 2, len = 5)
y <- c(0, 0.02, 0.65, 1, 1)
# smoothed
ysm <- zoo::rollapply(c(NA, y, NA), 3,
                      function(a) Hmisc::wtd.mean(a, c(0.25, 0.5, 0.25), na.rm = TRUE),
                      partial = FALSE)
# plot
plot(x, y, type = "b", pch = 16)
lines(x, ysm, col = "red")
Notes:
the zoo:: package provides a rolling window (3-wide here), calling the function once for indices 1-3, then again for indices 2-4, then 3-5, 4-6, etc.
with rolling-window operations, realize that they can be center-aligned (the default of zoo::rollapply) or left/right-aligned. There are some good explanations here: How to calculate 7-day moving average in R?
I surround the y data with NAs so that I can mimic a partial window. Normally with rolling-window ops, if k=3, then the resulting vector is length(y) - (k-1) long. I'm inferring that you want to include data on the ends, so the first smoothed data point would be effectively (0.5*0 + 0.25*0.02)/0.75, the second smoothed data point (0.25*0 + 0.5*0.02 + 0.25*0.65)/1, and the last smoothed data point (0.25*1 + 0.5*1)/0.75. That is, omitting the 0.25 times a missing data point. That's a guess and can easily be adjusted based on your real needs.
I'm using Hmisc::wtd.mean, though it is trivial to write this weighted-mean function yourself (a minimal version is sketched after these notes).
This is suggestive only, and not meant to be authoritative. Just to help you begin exploring your smoothing processes.
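In the same suggestive spirit, here is a hand-rolled version of that weighted mean (this sketch is mine, not part of the original answer), plus a base-R alternative via stats::filter, which applies the same centred weights but returns NA at the two ends instead of using partial windows:
# Weighted mean that drops NA values and renormalises the remaining weights
wmean <- function(a, w) {
  ok <- !is.na(a)
  sum(a[ok] * w[ok]) / sum(w[ok])
}
ysm2 <- zoo::rollapply(c(NA, y, NA), 3, function(a) wmean(a, c(0.25, 0.5, 0.25)))

# Base-R alternative: same centred weights, but NA at both ends of the series
ysm3 <- as.numeric(stats::filter(y, c(0.25, 0.5, 0.25), sides = 2))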

QQ plot in r from tassel pipeline

For my GWAS analysis I am using the TASSEL pipeline. In my GWAS I am studying two correlated traits.
I want to plot a Q-Q plot for the two traits in one figure, like the one we can obtain from the TASSEL program.
Does anyone have a suggestion for which R package I can use to do that?
With the qq() command from the qqman package I can draw a Q-Q plot for each trait in a separate figure, but I want a single plot that includes both of my traits, as I had in TASSEL.
Any suggestions?
A QQ-Plot in your case compares quantiles of the empirical distribution of your result to quantiles of the distribution that you'd expect theoretically if the null hypothesis is true.
If you have n data points, it makes sense to compare the n-quantiles, because then the actual quantiles of your empirical distribution are just your data points, ordered.
The theoretical distribution of p-values is the uniform distribution. Think about it: that's exactly the reason they exist. If a measurement is assigned, for example, a p-value of 0.05, you'd expect this or a more extreme measurement by pure chance (under the null hypothesis) in only 5% of your experiments, if you repeated the experiment very often. A measurement with p = 0.5 is expected in 50% of the cases. So, generalizing to any value p, the cumulative distribution function is
CDF(p) = P[measurement with p-value ≤ p] = p.
Look it up on Wikipedia: that is the CDF of the uniform distribution on [0, 1].
Therefore, the expected n-quantiles for your QQ-Plot are {1/n, 2/n, ... n/n}. (They represent the case that the null hypothesis is true)
So, now we have the theoretical quantiles (x-axis) and the actual quantiles. In R code, this is something like
expected_quantiles <- function(pvalues){
  n <- length(pvalues)
  actual_quantiles <- sort(pvalues)
  expected_quantiles <- seq_along(pvalues) / n
  data.frame(expected = expected_quantiles, actual = actual_quantiles)
}
You can take the -log10 of these values and plot them, for example like so
testdata1 <- c(runif(98, 0, 1), 1e-4, 2e-5)
testdata2 <- c(runif(96, 0, 1), 1e-3, 2e-3, 2e-4)
qq <- lapply(list(d1 = testdata1, d2 = testdata2), expected_quantiles)
xlim <- rev(-log10(range(rbind(qq$d1, qq$d2)$expected))) * c(1, 1.1)
ylim <- rev(-log10(range(rbind(qq$d1, qq$d2)$actual))) * c(1, 1.1)
plot(NULL, xlim = xlim, ylim = ylim)
points(x = -log10(qq$d1$expected), y = -log10(qq$d1$actual), col = "red")
points(x = -log10(qq$d2$expected), y = -log10(qq$d2$actual), col = "blue")
abline(a = 0, b = 1)
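Purely cosmetic, and not part of the answer above: you could also label the axes and add a legend so the two traits are distinguishable:
plot(NULL, xlim = xlim, ylim = ylim,
     xlab = "-log10(expected p-value)", ylab = "-log10(observed p-value)")
points(-log10(qq$d1$expected), -log10(qq$d1$actual), col = "red")
points(-log10(qq$d2$expected), -log10(qq$d2$actual), col = "blue")
abline(a = 0, b = 1)
legend("topleft", legend = c("trait 1", "trait 2"), col = c("red", "blue"), pch = 1)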

R-package beeswarm generates same x-coordinates

I am working on a script where I need to calculate the coordinates for a beeswarm plot without immediately plotting. When I use beeswarm, I get x-coordinates that aren't swarmed and are more or less the same value (Plot 1 in the code below).
But if I generate the same plot again, it swarms correctly (Plot 2).
And if I call dev.off() first, I again get no swarming (Plot 3).
The code I used:
library(beeswarm)

n <- 250
df <- data.frame(x = floor(runif(n, 0, 5)),
                 y = rnorm(n = n, mean = 500, sd = 100))

# Plot 1:
A <- with(df, beeswarm(y ~ x, do.plot = FALSE))
plot(x = A$x, y = A$y)

# Plot 2:
A <- with(df, beeswarm(y ~ x, do.plot = FALSE))
plot(x = A$x, y = A$y)
dev.off()

# Plot 3:
A <- with(df, beeswarm(y ~ x, do.plot = FALSE))
plot(x = A$x, y = A$y)
It seems to me like beeswarm uses something like the current plot parameters (or however it is called) to do the swarming and therefore chokes when a plot isn't showing. I have tried to play around with beeswarm parameters such as spacing, breaks, corral, corralWidth, priority, and xlim, but it does not make a difference. FYI: If do.plot is set to TRUE the x-coordinates are calculated correctly, but this is not helpful as I don't want to plot immediately.
Any tips or comments are greatly appreciated!
You're right; beeswarm uses the current plot parameters to calculate the amount of space to leave between points. It seems that setting "do.plot=FALSE" does not do what one would expect, and I'm not sure why I included this parameter.
If you want to control the parameters manually, you could use the functions swarmx or swarmy instead. These functions must be applied to each group separately, e.g.
dfsplitswarmed <- by(df, df$x, function(aa)
  swarmx(aa$x, aa$y, xsize = 0.075, ysize = 7.5, cex = 1, log = ""))
dfswarmed <- do.call(rbind, dfsplitswarmed)
plot(dfswarmed)
In this case, I set the xsize and ysize values based on what the function would default to for this particular data set. If you can find a set of xsize/ysize values that work for your data, this approach might work for you.
Otherwise, perhaps a simpler approach would be to leave do.plot=TRUE, and then discard the plots.
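One way to do that (an untested sketch of mine, not the package author's recommendation) is to send the plot to a null device, so the coordinates are computed with do.plot = TRUE but nothing is ever displayed:
pdf(NULL)                        # throw-away graphics device; nothing is written to disk
A <- with(df, beeswarm(y ~ x))   # do.plot defaults to TRUE, so the points get swarmed
dev.off()                        # discard the invisible plot
plot(A$x, A$y)                   # plot the swarmed coordinates however you like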

R Statistics Distributions Plotting

I am having some trouble with a Statistics homework assignment.
I am required to graphically represent the density and the distribution function in two side-by-side plots, for a set of parameters of my choice (there must be a minimum of 4), for the Student, Fisher (F) and Chi-squared distributions.
Let's take only the example of the Student distribution.
From what I have found searching the internet, I have come up with this:
First, I need to generate some random values.
x <- rnorm(20, 0, 1)
Question 1: do I need to generate 4 of these?
Then I have to plot these values with:
plot(dt( x, df = 1))
plot(pt( x, df = 1))
But how do I do this for four sets of parameters? They should be represented in the same plot.
Is this a good approach so far?
Please, tell me if I'm wrong.
To plot several densities of a certain distribution, you have to first have a support vector, in this case x below.
Then compute the values of the densities with the parameters of your choice.
Then plot them.
In the code that follows, I will plot 4 Student-t pdfs, with degrees of freedom 1 to 4.
x <- seq(-5, 5, by = 0.01)  # The support vector
y <- sapply(1:4, function(d) dt(x, df = d))
# Open an empty plot first
plot(1, type = "n", xlim = c(-5, 5), ylim = c(0, 0.5))
for (i in 1:4) {
  lines(x, y[, i], col = i)
}
Then you can make the graph prettier, by adding a main title, changing the axis titles, etc.
If you want other distributions, such as the F or Chi-squared, you will use x strictly positive, for instance x <- seq(0.0001, 10, by = 0.01).
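Since the assignment also asks for the distribution function next to the density, here is one way (my own sketch, building on the code above rather than part of the original answer) to put both panels side by side with par(mfrow = ...):
x <- seq(-5, 5, by = 0.01)
op <- par(mfrow = c(1, 2))   # two plots next to each other
# Densities
plot(1, type = "n", xlim = c(-5, 5), ylim = c(0, 0.5),
     xlab = "x", ylab = "density", main = "Student-t pdf")
for (i in 1:4) lines(x, dt(x, df = i), col = i)
# Distribution functions
plot(1, type = "n", xlim = c(-5, 5), ylim = c(0, 1),
     xlab = "x", ylab = "cumulative probability", main = "Student-t cdf")
for (i in 1:4) lines(x, pt(x, df = i), col = i)
legend("bottomright", legend = paste("df =", 1:4), col = 1:4, lty = 1)
par(op)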

Using user-defined functions within "curve" function in R graphics

I need to produce normally distributed density plots with different total areas (summing to 1). Using the following function, I can specify lambda, which gives the relative area:
sdnorm <- function(x, mean=0, sd=1, lambda=1){lambda*dnorm(x, mean=mean, sd=sd)}
I then want to plot the function with different sets of parameters. Using ggplot2, this code works:
require(ggplot2)
qplot(x, geom = "blank") +
  stat_function(fun = sdnorm, args = list(mean = 8, sd = 2, lambda = 0.7)) +
  stat_function(fun = sdnorm, args = list(mean = 18, sd = 4, lambda = 0.30))
but I really want to do this in base R graphics, for which I think I need to use the "curve" function. However, I am struggling to get this to work.
If you take a look at the help file for ? curve, you'll see that the first argument can be a number of different things:
The name of a function, or a call or an expression written as a function of x which will evaluate to an object of the same length as x.
This means you can specify the first argument as either a function name or an expression, so you could just do:
curve(sdnorm)
to get a plot of the function with its default arguments. Otherwise, to recreate your ggplot2 representation you would want to do:
curve(sdnorm(x, mean=8,sd=2,lambda=0.7), from = 0, to = 30)
curve(sdnorm(x, mean=18,sd=4,lambda=0.30), add = TRUE)
The result:
You can do the following in base R
x <- seq(0, 50, 1)
plot(x, sdnorm(x, mean = 8, sd = 2, lambda = 0.7), type = 'l', ylab = 'y')
lines(x, sdnorm(x, mean = 18, sd = 4, lambda = 0.30))
EDIT I added ylab = 'y' and updated the picture to have the y-axis re-labeled.
This should get you started.
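If it helps, a legend makes it clear which curve is which (a small addition of my own to either of the answers above):
curve(sdnorm(x, mean = 8, sd = 2, lambda = 0.7), from = 0, to = 30,
      col = "blue", ylab = "y")
curve(sdnorm(x, mean = 18, sd = 4, lambda = 0.3), add = TRUE, col = "red")
legend("topright",
       legend = c("mean 8, sd 2, lambda 0.7", "mean 18, sd 4, lambda 0.3"),
       col = c("blue", "red"), lty = 1)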

Resources