Kolmogorov-Smirnov test for the Laplace distribution in R

I have a problem with the ks.test function in R. I have a Laplace distribution:
# inverse CDF of a zero-mean Laplace distribution with rate a
ldes <- function(y, a) {
  ifelse(y < 0.5, 1/a * log(2 * y), -1/a * log(2 * (1 - y)))
}
a <- 1
set.seed(1)
y = runif(1000, 0, 1)
ld <- ldes(y, a)
So I need to do the KS test, but I can't find anything about the second argument that should go in there, like:
ks.test(my_lnorm, "plnorm", meanlog = -5, sdlog = 5)
for a lognormal distribution, or:
ks.test(my_log, "plogis", location = 2, scale = 3)
for a logistic distribution.
Thanks.

You can try some package for the Laplace distribution, for example disclap (if it satisfies your need; otherwise some continuous analog).
library(disclap)
ks.test(ld, "pdisclap", 0.5) # choose the right value of parameter p (p=0.5 is arbitrary)
One-sample Kolmogorov-Smirnov test
data: ld
D = 0.3333, p-value < 2.2e-16
alternative hypothesis: two-sided
As can be seen from the result of the hypothesis test, the null hypothesis (that the sample was drawn from the specified distribution) is rejected.
y2 <- rdisclap(1000, p=0.5) # generate some simulated datapoints
plot(ecdf(ld), xlim = range(c(ld, y2))) # compare ecdfs
plot(ecdf(y2), add = TRUE, lty = "dashed")
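Alternatively, since ld is a continuous sample, you can pass a hand-written Laplace CDF to ks.test directly; ks.test accepts any CDF function as its second argument. The helper plaplace below is a made-up name, not from a package, and it assumes a zero-mean Laplace with rate a:
plaplace <- function(q, a = 1) {
  # CDF of a zero-mean Laplace distribution with rate a
  ifelse(q < 0, 0.5 * exp(a * q), 1 - 0.5 * exp(-a * q))
}
ks.test(ld, plaplace, a = 1)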

Related

Bootstrap t test in R without reliance on MKinfer package

I would like to get a CI for a paired t-test using the bootstrap in R. Unfortunately, I don't have privileges to install MKinfer. Using MKinfer I would do this (as here: T-test with bootstrap in R):
boot.t.test(
  x = iris["Petal.Length"],
  y = iris["Sepal.Length"],
  alternative = c("two.sided"),
  mu = 0,
  #paired = TRUE,
  conf.level = 0.95,
  R = 9999
)
How would I do this for paired data, with CIs and p-values, without relying on MKinfer (relying on boot would be fine)?
Here is an example using the boot package with R = 1000 bootstrap replicates:
library(boot)
x <- iris$Petal.Length
y <- iris$Sepal.Length
change_in_mean <- function(df, indices) t.test(
  df[indices, 1], df[indices, 2], paired = TRUE, var.equal = FALSE)$estimate
model <- boot(
  data = cbind(x, y),
  statistic = change_in_mean,
  R = 1000)
We can calculate the confidence interval of the estimated change in the mean using boot.ci
boot.ci(model, type = "norm")
#BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
#Based on 1000 bootstrap replicates
#
#CALL :
#boot.ci(boot.out = model, type = "norm")
#
#Intervals :
#Level Normal
#95% (-2.262, -1.905 )
#Calculations and Intervals on Original Scale
Note that this is very close to the CI reported by t.test
t.test(x, y, paired = TRUE, var.equal = FALSE)
#
# Paired t-test
#
#data: x and y
#t = -22.813, df = 149, p-value < 2.2e-16
#alternative hypothesis: true difference in means is not equal to 0
#95 percent confidence interval:
# -2.265959 -1.904708
#sample estimates:
#mean of the differences
# -2.085333
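The question also asks for p-values. As a hedged sketch (not part of the answer above), one way is to bootstrap the paired t statistic after centering the differences at zero, so the resampling happens under the null hypothesis:
d <- x - y
t_obs <- mean(d) / (sd(d) / sqrt(length(d)))   # observed paired t statistic
d0 <- d - mean(d)                              # impose the null of zero mean difference
t_boot <- replicate(9999, {
  s <- sample(d0, replace = TRUE)
  mean(s) / (sd(s) / sqrt(length(s)))
})
p_value <- mean(abs(t_boot) >= abs(t_obs))     # two-sided bootstrap p-value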

gofCopula in R giving same p-values

I am working with copulas, and to select the best copula I want to use a goodness-of-fit test. However, my code is giving me the same p-value for all the copulas, which is certainly wrong. Here is my code:
library(readxl)
library(copula)
dat <- read_excel("SMI.xlsx")
data <- cbind(dat[,2], dat[,3], dat[,4])
dataNZ <- cbind(data[,1], data[,3])
#definition of copulas
#Normal copula
nc <- ellipCopula(family = "normal", param = 0.5, dim = 2, dispstr = "un")
#Gumbel-Hougaard copula
gc <- gumbelCopula(dim = 2, param = 1.1)
#GOF tests
u <- pobs(dataNZ) # presumably the pseudo-observations of dataNZ; u was not defined above
gofmplnNZ <- gofCopula(nc, u, estim.method = "mpl")
gofmplgNZ <- gofCopula(gc, u, estim.method = "mpl")
To show exactly what I mean, here are the results of gofmplnNZ and gofmplgNZ:
> gofmplnNZ
Parametric bootstrap-based goodness-of-fit test of Normal copula, dim. d = 2, with 'method'="Sn",
'estim.method'="mpl":
data: x
statistic = 0.062807, parameter = 0.45013, p-value = 0.0004995
> gofmplgNZ
Parametric bootstrap-based goodness-of-fit test of Gumbel copula, dim. d = 2, with 'method'="Sn",
'estim.method'="mpl":
data: x
statistic = 0.1805, parameter = 1.4038, p-value = 0.0004995
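One plausible explanation (my assumption, not something stated in the post): gofCopula() computes a parametric-bootstrap p-value of the form (0.5 + number of exceedances) / (N + 1), so with the default N = 1000 replicates the smallest value it can report is
0.5 / (1000 + 1)   # = 0.0004995..., exactly the value printed for both copulas
In that case both copulas are simply rejected as strongly as the bootstrap can express, and the identical p-values are a floor effect rather than a bug in the copula selection.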

R: Calculating p-value using simulations

I wrote this code to run a test statistic on two samples of random observations, x and y:
mean.test <- function(x, y, B=10000,
                      alternative=c("two.sided","less","greater"))
{
  p.value <- 0
  alternative <- match.arg(alternative)
  s <- replicate(B, (mean(sample(c(x,y), B, replace=TRUE)) -
                       mean(sample(c(x,y), B, replace=TRUE))))
  t <- mean(x) - mean(y)
  p.value <- 2*(1 - pnorm(abs(quantile(T, 0.01)), mean = 0, sd = 1,
                          lower.tail = TRUE, log.p = FALSE)) #try to calculate p value
  data.name <- deparse(substitute(c(x,y)))
  names(t) <- "difference in means"
  zero <- 0
  names(zero) <- "difference in means"
  return(structure(list(statistic = t, p.value = p.value,
                        method = "mean test", data.name = data.name,
                        observed = c(x,y), alternative = alternative,
                        null.value = zero),
                   class = "htest"))
}
The code uses Monte Carlo simulation to generate the distribution of the test statistic mean(x) - mean(y) and then calculates the p-value, but apparently I misdefined this p-value, because for:
> set.seed(0)
> mean.test(rnorm(1000,3,2),rnorm(2000,4,3))
the output should look like:
mean test
data: c(rnorm(1000, 3, 2), rnorm(2000, 4, 3))
difference in means = -1.0967, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
but I got this instead:
mean test
data: c(rnorm(1000, 3, 2), rnorm(2000, 4, 3))
difference in means = -1.0967, p-value = 0.8087
alternative hypothesis: true difference in means is not equal to 0
Can someone explain the bug to me?
As far as I can tell, your code has numerous mistakes in it (a corrected sketch follows this list):
quantile(T, 0.01): here T == TRUE, so you're computing a quantile of the value 1.
The object s is never used.
mean(sample(c(x,y), B, replace=TRUE)): what are you trying to do here? The c() function pools x and y, and sampling from the pool like this makes no sense since you don't know what population they come from.
When you calculate the test statistic t, it should depend on the variance (and the sample sizes).
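As a hedged illustration of how the simulation could be written (my own sketch of the apparent intent, not the answerer's code): pool the two samples, repeatedly split the pool at random into groups of the original sizes, and compare the observed difference in means with the resulting permutation distribution.
mean.test <- function(x, y, B = 10000) {
  t.obs <- mean(x) - mean(y)                 # observed test statistic
  pooled <- c(x, y)
  nx <- length(x)
  t.perm <- replicate(B, {
    idx <- sample(length(pooled), nx)        # random relabelling of the pooled data
    mean(pooled[idx]) - mean(pooled[-idx])
  })
  p.value <- mean(abs(t.perm) >= abs(t.obs)) # two-sided permutation p-value
  list(statistic = t.obs, p.value = p.value)
}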

Continuous PowerTransform/BoxCox Transformation in R

I have a dataset that I need to transform into a normal distribution.
First, generate a reproducible dataset.
df <- runif(500, 0, 100)
Second, define a function. This function will keep transforming the data until the Shapiro-Wilk P-value exceeds the threshold (0.10 in the code below). The transformed data will be created and named y.
library(car) # powerTransform() comes from the car package
BoxCoxTrans <- function(y)
{
  lambda <- 1
  constant <- 0
  while(shapiro.test(y)$p.value < 0.10)
  {
    constant <- abs(min(y, na.rm = TRUE)) + 0.001
    y <- y + constant
    lambda <- powerTransform(y)$lambda
    y <- y ^ lambda
  }
  assign("y", y, envir = .GlobalEnv)
}
Third, test df
shapiro.test(df)
Shapiro-Wilk normality test
data: df
W = 0.95997, p-value = 2.05e-10
Because P < 0.05, transform df
BoxCoxTrans(df)
Then it gives me the following error message:
Error in qr.resid(xqr, w * fam(Y, lambda, j = TRUE)) :
NA/NaN/Inf in foreign function call (arg 5)
What did I do wrong?
You could use a Box-Muller Transformation to generate an approximately normal distribution from a random uniform distribution. This might be more appropriate than a Box-Cox Transformation, which AFAIK is typically applied to convert a skewed distribution into one that is almost normal.
Here's an example of a Box-Muller Transformation applied to a set of uniformly distributed numbers:
set.seed(1234)
size <- 5000
a <- runif(size)
b <- runif(size)
y <- sqrt(-2 * log(a)) * cos(2 * pi * b)
plot(density(y), main = "Example of Box-Muller Transformation", xlab="x", ylab="f(x)")
library(nortest)
#> lillie.test(y)
#
# Lilliefors (Kolmogorov-Smirnov) normality test
#
#data: y
#D = 0.009062, p-value = 0.4099
#
#> shapiro.test(y)
#
# Shapiro-Wilk normality test
#
#data: y
#W = 0.99943, p-value = 0.1301
#
Hope this helps.
Add
print(summary(y))
before the end of your while loop and watch your computation explode. In any event, repetitively applying Box-Cox makes no sense because you get the ML(-like) estimator of the transformation parameter from the first application. Moreover, why would you expect a power transformation to normalize a uniform distribution?
John
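Following John's point, here is a minimal one-shot sketch (my own, assuming car::powerTransform as in the question) that shifts the data to be strictly positive and applies the estimated transformation exactly once. As John notes, a power transformation will generally not normalize a uniform sample, so the test may still reject:
library(car)
y <- df - min(df) + 0.001                  # make the values strictly positive
lambda <- powerTransform(y)$lambda         # single ML-type estimate of lambda
y_t <- if (abs(lambda) < 1e-8) log(y) else (y^lambda - 1) / lambda
shapiro.test(y_t)                          # one check, no loop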

Calculate AUC in R?

Given a vector of scores and a vector of actual class labels, how do you calculate a single-number AUC metric for a binary classifier in the R language or in simple English?
Page 9 of "AUC: a Better Measure..." seems to require knowing the class labels, and here is an example in MATLAB where I don't understand
R(Actual == 1))
Because R (not to be confused with the R language) is defined a vector but used as a function?
With the pROC package you can use the function auc(), as in this example from its help page:
> data(aSAH)
>
> # Syntax (response, predictor):
> auc(aSAH$outcome, aSAH$s100b)
Area under the curve: 0.7314
The ROCR package will calculate the AUC among other statistics:
auc.tmp <- performance(pred, "auc"); auc <- as.numeric(auc.tmp@y.values)
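Note that pred must already be a ROCR prediction object. A minimal self-contained sketch (the scores and labels below are made up for illustration):
library(ROCR)
scores <- c(0.9, 0.4, 0.8, 0.3)  # hypothetical classifier scores
labels <- c(1, 1, 0, 0)          # true class labels
pred <- prediction(scores, labels)
as.numeric(performance(pred, "auc")@y.values)  # 0.75 for this toy example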
As mentioned by others, you can compute the AUC using the ROCR package. With the ROCR package you can also plot the ROC curve, lift curve and other model selection measures.
You can compute the AUC directly without using any package by using the fact that the AUC is equal to the probability that a true positive is scored greater than a true negative.
For example, if pos.scores is a vector containing the scores of the positive examples, and neg.scores is a vector containing the scores of the negative examples, then the AUC is approximated by:
> mean(sample(pos.scores,1000,replace=T) > sample(neg.scores,1000,replace=T))
[1] 0.7261
This will give an approximation of the AUC. You can also estimate the variance of the AUC by bootstrapping:
> aucs = replicate(1000,mean(sample(pos.scores,1000,replace=T) > sample(neg.scores,1000,replace=T)))
Without any additional packages:
true_Y = c(1,1,1,1,2,1,2,1,2,2)
probs = c(1,0.999,0.999,0.973,0.568,0.421,0.382,0.377,0.146,0.11)
getROC_AUC = function(probs, true_Y){
  probsSort = sort(probs, decreasing = TRUE, index.return = TRUE)
  val = unlist(probsSort$x)
  idx = unlist(probsSort$ix)
  roc_y = true_Y[idx]
  stack_x = cumsum(roc_y == 2)/sum(roc_y == 2)  # class 2 = negatives (FPR)
  stack_y = cumsum(roc_y == 1)/sum(roc_y == 1)  # class 1 = positives (TPR)
  auc = sum((stack_x[2:length(roc_y)] - stack_x[1:(length(roc_y)-1)]) * stack_y[2:length(roc_y)])
  return(list(stack_x=stack_x, stack_y=stack_y, auc=auc))
}
aList = getROC_AUC(probs, true_Y)
stack_x = unlist(aList$stack_x)
stack_y = unlist(aList$stack_y)
auc = unlist(aList$auc)
plot(stack_x, stack_y, type = "l", col = "blue", xlab = "False Positive Rate", ylab = "True Positive Rate", main = "ROC")
axis(1, seq(0.0,1.0,0.1))
axis(2, seq(0.0,1.0,0.1))
abline(h=seq(0.0,1.0,0.1), v=seq(0.0,1.0,0.1), col="gray", lty=3)
legend(0.7, 0.3, sprintf("%3.3f",auc), lty=c(1,1), lwd=c(2.5,2.5), col="blue", title = "AUC")
I found some of the solutions here to be slow and/or confusing (and some of them don't handle ties correctly), so I wrote my own data.table-based function auc_roc() in my R package mltools.
library(data.table)
library(mltools)
preds <- c(.1, .3, .3, .9)
actuals <- c(0, 0, 1, 1)
auc_roc(preds, actuals) # 0.875
auc_roc(preds, actuals, returnDT=TRUE)
Pred CountFalse CountTrue CumulativeFPR CumulativeTPR AdditionalArea CumulativeArea
1: 0.9 0 1 0.0 0.5 0.000 0.000
2: 0.3 1 1 0.5 1.0 0.375 0.375
3: 0.1 1 0 1.0 1.0 0.500 0.875
You can learn more about AUROC in this blog post by Miron Kursa:
https://mbq.me/blog/augh-roc/
He provides a fast function for AUROC:
# By Miron Kursa https://mbq.me
auroc <- function(score, bool) {
  n1 <- sum(!bool)  # number of negatives
  n2 <- sum(bool)   # number of positives
  U <- sum(rank(score)[!bool]) - n1 * (n1 + 1) / 2  # Mann-Whitney U of the negatives
  return(1 - U / n1 / n2)
}
Let's test it:
set.seed(42)
score <- rnorm(1e3)
bool <- sample(c(TRUE, FALSE), 1e3, replace = TRUE)
pROC::auc(bool, score)
mltools::auc_roc(score, bool)
ROCR::performance(ROCR::prediction(score, bool), "auc")@y.values[[1]]
auroc(score, bool)
0.51371668847094
0.51371668847094
0.51371668847094
0.51371668847094
auroc() is 100 times faster than pROC::auc() and computeAUC() (the function defined in a later answer below).
auroc() is 10 times faster than mltools::auc_roc() and ROCR::performance().
library(microbenchmark)
print(microbenchmark(
  pROC::auc(bool, score),
  computeAUC(score[bool], score[!bool]),
  mltools::auc_roc(score, bool),
  ROCR::performance(ROCR::prediction(score, bool), "auc")@y.values,
  auroc(score, bool)
))
Unit: microseconds
                                                              expr       min         lq       mean     median        uq        max neval cld
                                            pROC::auc(bool, score) 21000.146 22005.3350 23738.3447 22206.5730 22710.853  32628.347   100   d
                             computeAUC(score[bool], score[!bool]) 11878.605 12323.0305 16173.0645 12378.5540 12624.981 233701.511   100   c
                                     mltools::auc_roc(score, bool)  5750.651  6186.0245  6495.5158  6325.3955  6573.993  14698.244   100   b
 ROCR::performance(ROCR::prediction(score, bool), "auc")@y.values  2899.573  3019.6310  3300.1961  3068.0240  3237.534  11995.667   100  ab
                                                auroc(score, bool)   236.531   245.4755   253.1109   251.8505   257.578    300.506   100   a
Combining code from ISL 9.6.3 ROC Curves, along with @J. Won.'s answer to this question and a few more places, the following plots the ROC curve and prints the AUC in the bottom right of the plot.
Below probs is a numeric vector of predicted probabilities for binary classification and test$label contains the true labels of the test data.
require(ROCR)
require(pROC)
rocplot <- function(pred, truth, ...) {
  predob = prediction(pred, truth)
  perf = performance(predob, "tpr", "fpr")
  plot(perf, ...)
  area <- auc(truth, pred)
  area <- format(round(area, 4), nsmall = 4)
  text(x = 0.8, y = 0.1, labels = paste("AUC =", area))
  # the reference x = y line
  segments(x0 = 0, y0 = 0, x1 = 1, y1 = 1, col = "gray", lty = 2)
}
rocplot(probs, test$label, col="blue")
This gives an ROC plot with the AUC shown in the bottom-right corner.
I usually use the function ROC from the DiagnosisMed package. I like the graph it produces. The AUC is returned along with its confidence interval, and it is also shown on the graph.
ROC(classLabels,scores,Full=TRUE)
Along the lines of erik's response, you should also be able to calculate the AUC directly by comparing all possible pairs of values from pos.scores and neg.scores:
score.pairs <- merge(pos.scores, neg.scores)
names(score.pairs) <- c("pos.score", "neg.score")
sum(score.pairs$pos.score > score.pairs$neg.score) / nrow(score.pairs)
This is certainly less efficient than the sample approach or pROC::auc, but more stable than the former, and it requires no package installation, unlike the latter.
Related: when I tried this it gave similar results to pROC's value, but not exactly the same (off by 0.02 or so); the result was closer to the sample approach with very high N. If anyone has ideas why that might be I'd be interested.
The currently top-voted answer is incorrect, because it disregards ties. When a positive and a negative score are equal, that pair should contribute 0.5 to the AUC. Below is a corrected example.
computeAUC <- function(pos.scores, neg.scores, n_sample=100000) {
  # Args:
  #   pos.scores: scores of positive observations
  #   neg.scores: scores of negative observations
  #   n_sample  : number of samples used to approximate the AUC
  pos.sample <- sample(pos.scores, n_sample, replace=TRUE)
  neg.sample <- sample(neg.scores, n_sample, replace=TRUE)
  mean(1.0*(pos.sample > neg.sample) + 0.5*(pos.sample == neg.sample))
}
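An exact, sampling-free variant of the same idea (my sketch, using the rank/Mann-Whitney identity) also handles ties, because rank() assigns midranks, which gives tied pairs the same 0.5 credit:
exactAUC <- function(pos.scores, neg.scores) {
  r <- rank(c(pos.scores, neg.scores))  # midranks give ties 0.5 credit
  n.pos <- length(pos.scores)
  n.neg <- length(neg.scores)
  (sum(r[seq_len(n.pos)]) - n.pos * (n.pos + 1) / 2) / (n.pos * n.neg)
}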
Calculating the AUC with the Metrics package is very easy and straightforward:
library(Metrics)
actual <- c(0, 0, 1, 1)
predicted <- c(.1, .3, .3, .9)
auc(actual, predicted)
0.875
