I have a homework assignment in a course called 'Monte Carlo methods' and I'm stuck on one task.
The task is as follows:
Using the MC method, find an approximate value of the integral (see the picture below), where D is the region [0, ∞) × [0, ∞) × [0, ∞).
Also, find the error bound for probability 0.99.
[image: the integral]
What I've done so far is in the code below.
My main question is: which distribution should I choose for the generated values, and what is the core logic behind that choice?
At the moment I'm using a plain normal distribution with parameters (0, 1) just to get some initial results, but as the results show, the error is very large, so this is probably not the best choice.
Thanks in advance!
n <- 100000
alfa <- 0.01 # 1-0.99
# for now we draw from the normal distribution, because the shape of the integrand
# is fairly similar to the normal density function
# gen <- function(n){
# return(matrix(runif(3*n, 0, Inf),ncol=3))
# }
gen <- function(n){
return(matrix(rnorm(3*n, 0, 1),ncol=3))
}
g <- function(x){
#tihedus <- dunif(x[,1],0,Inf)*dunif(x[,2],0,Inf)*dunif(x[,3],0,Inf)
tihedus <- dnorm(x[,1],0,1)*dnorm(x[,2],0,1)*dnorm(x[,3],0,1) # proposal density (dnorm, not rnorm)
# integrand / proposal density, restricted to the positive octant D
return( (x[,1]+x[,2])*exp(-(x[,1]+x[,2]+2*x[,3]))/(x[,1]^2+x[,2]+x[,3]+1) / tihedus*((x[,1]>=0) * (x[,2]>=0) * (x[,3]>=0)) )
}
MC(gen, g, n, alfa)
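For completeness, the MC() helper called at the end is not shown above; it is roughly the following (a sketch, assuming it returns the plain Monte Carlo estimate together with a normal-approximation error bound at confidence level 1 - alfa):
MC <- function(gen, g, n, alfa){
  x   <- gen(n)                               # n proposal draws, one per row
  y   <- g(x)                                 # integrand / proposal density at each draw
  est <- mean(y)                              # Monte Carlo estimate of the integral
  err <- qnorm(1 - alfa/2) * sd(y) / sqrt(n)  # half-width of the (1 - alfa) confidence interval
  list(estimate = est, error = err)
}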
I can do it for the two-sample t test, but not for the median test, the Wilcoxon test, or the Hodges–Lehmann test.
data_2000 <- c(500,450,600,700,550,551,552)
data_2019 <- c(560,460,620,720,540,600,750)
mean(data_2000)
mean(data_2019)
mean(data_2019) - mean(data_2000)
combined_data <- c(data_2000, data_2019)
set.seed(123)
null_dist <- c()
for (i in 1:100000) {
shuffled_data <- sample(combined_data)
shuffled_2000 <- shuffled_data[1:7]
shuffled_2019 <- shuffled_data[8:14]
null_dist[i] <- mean(shuffled_2019) - mean(shuffled_2000)
}
(p_value <- (sum(null_dist >= 49.57143) + sum(null_dist <= -49.57143)) / length(null_dist))
I think this is what you're trying to do; I altered your code as little as possible. There are packages like infer that will do this for you, and the for loop is not the most efficient, but it's plenty good enough and may help you learn. Since we're looping anyway, I computed the mean and median at the same time, as all other parts of the code are identical. ifelse is a nice, easy way to make 1s and 0s to sum.
data_2000 <- c(500,450,600,700,550,551,552)
data_2019 <- c(560,460,620,720,540,600,750)
delta_mean <- mean(data_2019) - mean(data_2000)
delta_median <- median(data_2019) - median(data_2000)
combined_data <- c(data_2000, data_2019)
trials <- 100000
set.seed(123)
mean_diff <- c()
median_diff <- c()
for (i in 1:trials) {
shuffled_data <- sample(combined_data)
shuffled_2000 <- shuffled_data[1:7]
shuffled_2019 <- shuffled_data[8:14]
mean_diff[i] <- mean(shuffled_2019) - mean(shuffled_2000)
median_diff[i] <- median(shuffled_2019) - median(shuffled_2000)
}
p_mean <- sum(ifelse(mean_diff > delta_mean | mean_diff < -1 * delta_mean, 1, 0)) / trials
p_median <- sum(ifelse(median_diff > delta_median | median_diff < -1 * delta_median, 1, 0)) / trials
p_mean
#> [1] 0.31888
p_median
#> [1] 0.24446
Following up on your question about the HL test, quoting Wikipedia:
The Hodges–Lehmann statistic also estimates the difference between two populations. For two sets of data with m and n observations, the set of two-element sets made of them is their Cartesian product, which contains m × n pairs of points (one from each set); each such pair defines one difference of values. The Hodges–Lehmann statistic is the median of the m × n differences.
You could run it on your data with the following code.
Do NOT run it 100,000 times; the answer is the same every time because you're already making all 49 possible pairings.
hl_df <- expand.grid(data_2019, data_2000)
hl_df$pair_diffs <- hl_df$Var1 - hl_df$Var2
median(hl_df$pair_diffs)
[1] 49
You can do the Wilcoxon test with wilcox.test in the stats package (loaded by default as part of R core). You need to set exact = FALSE because an exact p-value is not possible if there are ties.
wilcox.test(data_2019, data_2000, exact = FALSE)
Wilcoxon rank sum test with continuity correction
data: data_2019 and data_2000
W = 33.5, p-value = 0.2769
alternative hypothesis: true location shift is not equal to 0
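Incidentally (a cross-check, not something the original answer included), wilcox.test() with conf.int = TRUE also reports an estimate of the location shift that is the Hodges–Lehmann estimator, so it should essentially agree with the median of the pairwise differences computed above:
# The "difference in location" estimate from wilcox.test() is the Hodges-Lehmann
# estimator, so it should be close to the median of the 49 pairwise differences.
wilcox.test(data_2019, data_2000, conf.int = TRUE, exact = FALSE)$estimate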
I'll update this when I figure out how to do the other tests.
I am using the following R code, taken from a published paper (citation below). This is the code:
int2=function(x,r,n,p) {
(1+x)^((n-1-p)/2)*(1+(1-r^2)*x)^(-(n-1)/2)*x^(-3/2)*exp(-n/(2*x))}
integrate(f=int2,lower=0,upper=Inf,n=530,r=sqrt(.245),p=3, stop.on.error=FALSE)
When I run it, I get the error "non-finite function value". Yet Maple is able to compute this as 4.046018765*10^27.
I tried using "integral" in package pracma, which gives me a different error:
Error in if (delta < tol) break : missing value where TRUE/FALSE needed
The overall goal is to compute a ratio of two integrals, as described in Wetzels & Wagenmakers (2012) "A default Bayesian hypothesis test for correlations" (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3505519/). The entire function is as follows:
jzs.pcorbf = function(r0, r1, p0, p1, n) {
int = function(r,n,p,g) {
(1+g)^((n-1-p)/2)*(1+(1-r^2)*g)^(-(n-1)/2)*g^(-3/2)*exp(-n/(2*g))};
bf10=integrate(int, lower=0,upper=Inf,r=r1,p=p1,n=n)$value/
integrate(int,lower=0,upper=Inf,r=r0,p=p0,n=n)$value;
return(bf10)
}
Thanks!
The issue is that your integral function is generating NaN values when called with x values in its domain. You're integrating from 0 to Infinity, so let's check a valid x value of 1000:
int2(1000, sqrt(0.245), 530, 3)
# [1] NaN
Your objective multiplies four pieces:
x <- 1000
r <- sqrt(0.245)
n <- 530
p <- 3
(1+x)^((n-1-p)/2)
# [1] Inf
(1+(1-r^2)*x)^(-(n-1)/2)
# [1] 0
x^(-3/2)
# [1] 3.162278e-05
exp(-n/(2*x))
# [1] 0.7672059
We can now see that the issue is that you're multiplying infinity by 0 (or rather something numerically equal to infinity times something numerically equal to 0), which is causing the numerical issues. Instead of calculating a*b*c*d, it will be more stable to calculate exp(log(a) + log(b) + log(c) + log(d)) (using the identity that log(a*b*c*d) = log(a)+log(b)+log(c)+log(d)). One other quick note -- the value x=0 needs a special case.
int3 = function(x, r, n, p) {
loga <- ((n-1-p)/2) * log(1+x)
logb <- (-(n-1)/2) * log(1+(1-r^2)*x)
logc <- -3/2 * log(x)
logd <- -n/(2*x)
return(ifelse(x == 0, 0, exp(loga + logb + logc + logd)))
}
integrate(f=int3,lower=0,upper=Inf,n=530,r=sqrt(.245),p=3, stop.on.error=FALSE)
# 1.553185e+27 with absolute error < 2.6e+18
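For the overall goal (the ratio of the two integrals), the same log-scale rewrite can be dropped into the question's jzs.pcorbf(); a sketch, reusing the structure from the question:
# Sketch: jzs.pcorbf() with the integrand rewritten on the log scale, so that
# both integrals in the Bayes factor are numerically stable.
jzs.pcorbf <- function(r0, r1, p0, p1, n) {
  int <- function(g, r, n, p) {
    ifelse(g == 0, 0,
           exp(((n - 1 - p)/2) * log(1 + g) +
               (-(n - 1)/2)    * log(1 + (1 - r^2) * g) +
               (-3/2)          * log(g) -
               n / (2 * g)))
  }
  integrate(int, lower = 0, upper = Inf, r = r1, p = p1, n = n)$value /
    integrate(int, lower = 0, upper = Inf, r = r0, p = p0, n = n)$value
}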
I would like to construct a sequence of length 50 of the following type:
X[n+1] = 4 * X[n] * (1 - X[n])
For your information, this is the Logistic Map for r = 4. In the case of the Logistic Map with parameter r = 4 and an initial state in (0, 1), the attractor is also the interval (0, 1), and the probability measure corresponds to the beta distribution with parameters a = 0.5 and b = 0.5. (The Logistic Map is a polynomial mapping (equivalently, a recurrence relation) of degree 2, often cited as an archetypal example of how complex, chaotic behaviour can arise from very simple non-linear dynamical equations.) How can I do this in R?
There are some ready-to-use solutions on the net. I cite the general solution from mage's blog, where you can find a more detailed description.
logistic.map <- function(r, x, N, M){
## r: bifurcation parameter
## x: initial value
## N: number of iterations
## M: the last M+1 iteration points are returned
z <- 1:N
z[1] <- x
for(i in c(1:(N-1))){
z[i+1] <- r *z[i] * (1 - z[i])
}
## Return the last M iterations
z[c((N-M):N)]
}
For the OP's example:
logistic.map(4,0.2,50,49)
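As a side note (not part of the original answer), you can eyeball the Beta(0.5, 0.5) invariant measure mentioned in the question by running the map for a long time and comparing the histogram of the orbit with the Beta density:
# Rough check of the Beta(0.5, 0.5) claim: iterate for a long time, keep the
# trailing part of the orbit, and overlay the Beta(0.5, 0.5) density.
orbit <- logistic.map(r = 4, x = 0.2, N = 100000, M = 99000)
hist(orbit, breaks = 50, freq = FALSE, main = "Logistic map orbit, r = 4")
curve(dbeta(x, 0.5, 0.5), from = 0.001, to = 0.999, add = TRUE, col = "red")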
This isn't really an R question, is it? More basic programming. Anyway, you probably need an accumulator and a value to process.
values <- 0.2 ## this accumulates as a vector, starting with 0.2
xn <- values ## xn gets the first value
for (it in 2:50) { ## start the loop from the second iteration
xn <- 4L*xn*(1L-xn) ## perform the sequence function
values <- c(values, xn) ## add the new value to the vector
}
values
# [1] 0.2000000000 0.6400000000 0.9216000000 0.2890137600 0.8219392261 0.5854205387 0.9708133262 0.1133392473 0.4019738493 0.9615634951 0.1478365599 0.5039236459
# [13] 0.9999384200 0.0002463048 0.0009849765 0.0039360251 0.0156821314 0.0617448085 0.2317295484 0.7121238592 0.8200138734 0.5903644834 0.9673370405 0.1263843622
# [25] 0.4416454208 0.9863789723 0.0537419811 0.2034151221 0.6481496409 0.9122067356 0.3203424285 0.8708926280 0.4497546341 0.9899016128 0.0399856390 0.1535471506
# [37] 0.5198816927 0.9984188732 0.0063145074 0.0250985376 0.0978744041 0.3531800204 0.9137755744 0.3151590962 0.8633353611 0.4719496615 0.9968527140 0.0125495222
# [49] 0.0495681269 0.1884445109
I have written this function that computes the MLE from a Cauchy distribution numerically based on the Newton-Raphson algorithm:
mlec <- function(x,theta0=median(x),numstp=100,eps=0.01){
numfin <- numstp
ic <- 0
istop <- 0
while(istop==0){
ic <- ic+1
ltheta <- -2*sum((x-theta0)/(1+(x-theta0)^2))
lprimetheta <- -2*(sum(2*(x-theta0)^2/
(1+(x-theta0)^2)^2-1/(1+(x-theta0)^2)^2))
theta1 <- theta0-(ltheta/lprimetheta)
check <- abs((theta1-theta0)/theta1)
if(check < eps ) { istop <- 1 }
theta0 <- theta1
}
list(theta1=theta1,check=check,realnumstps=ic)
}
The goal is then to generate observations from a Cauchy distribution with location parameter 2 and see how the MLE performs. The problem is that while for some samples the MLE runs wonderfully, for others I get the strange error
Error in if (check < eps) { : missing value where TRUE/FALSE needed
What is going on here? I have defined what "check" is so that shouldn't happen.
Thank you.
I've added a little bit of instrumentation (see the cat() statement in the middle) and an option (fixed=TRUE) to use a corrected second-derivative expression (see below):
mlec <- function(x,theta0=median(x),numstp=100,eps=0.01,
debug=TRUE,fixed=FALSE){
numfin <- numstp
ic <- 0
istop <- 0
while(istop==0){
ic <- ic+1
ltheta <- -2*sum((x-theta0)/(1+(x-theta0)^2))
lprimetheta <- -2*(sum(2*(x-theta0)^2/
(1+(x-theta0)^2)^2-1/(1+(x-theta0)^2)^2))
if (!fixed) {
theta1 <- theta0-(ltheta/lprimetheta)
} else theta1 <- theta0-ltheta/ff(theta0)
check <- abs((theta1-theta0)/theta1)
if (debug) cat(ic,ltheta,lprimetheta,theta0,theta1,check,"\n")
if(check < eps ) { istop <- 1 }
theta0 <- theta1
}
list(theta1=theta1,check=check,realnumstps=ic)
}
set.seed(1)
x <- rcauchy(100,2)
mlec(x)
Here's the tail end of the output:
## ic ltheta lprimetheta theta0 theta1 check
## 427 -4.48838e-75 -2.014555e-151 -4.455951e+76 -6.683926e+76 0.3333333
## 428 -2.992253e-75 -8.953579e-152 -6.683926e+76 -1.002589e+77 0.3333333
## 429 -1.994835e-75 -3.979368e-152 -1.002589e+77 -1.503883e+77 0.3333333
## 430 -1.32989e-75 0 -1.503883e+77 -Inf NaN
Now, why is it happening? Either you've got a bug somewhere, or the algorithm is unstable. tl;dr it turns out the answer is actually "both": your second-derivative expression seems wrong, but even if it were correct, the N-R algorithm is extremely unstable for this problem.
Here are your derivative and second-derivative functions (I'm wrapping them with Vectorize() for convenience so I can evaluate these at multiple theta values simultaneously):
lthetafun <- Vectorize(function(theta) {
-2*sum((x-theta)/(1+(x-theta)^2))
})
lprimethetafun <- Vectorize(function(theta) {
-2*(sum(2*(x-theta)^2/
(1+(x-theta)^2)^2-1/(1+(x-theta)^2)^2))
})
A negative log-likelihood function based on the built-in dcauchy function:
thetafun <- Vectorize(function(theta) -sum(dcauchy(x,theta,log=TRUE)))
Check differentiation (at an arbitrarily chosen point):
library("numDeriv")
all.equal(grad(thetafun,2),lthetafun(2)) ## TRUE
Check second derivative:
hessian(thetafun,2) ## 36.13297
lprimethetafun(2) ## 8.609859: ???
I think your second-derivative expression is wrong.
The following alternative second-derivative function is based on lazily cheating with Wolfram Alpha, differentiating your gradient function (which matches with the finite-difference approximation):
ff <- Vectorize(function(theta)
2*sum(((x-theta)^2-1)/((x-theta)^2+1)^2))
ff(2) ## matches hessian() above.
But it looks like you may have further problems.
The negative log-likelihood surface looks OK:
curve(thetafun, from=-10,to=10,n=501)
But trouble is on the horizon:
curve(lthetafun, from=-10,to=10, n=501)
This looks irregular, and going up one level to the second derivative shows that it is:
curve(ff, from=-10, to=10, n=501)
Here's the curve of N-R updates:
ff2 <- function(x) x-lthetafun(x)/ff(x)
curve(ff2, from=-10, to=10, n=501,ylim=c(-100,100))
Yikes! This indicates why the Newton-Raphson method could go wrong unless you start close enough to the minimum (any time the likelihood surface has an inflection point, the N-R updating curve will have a pole ...). Further analysis of the problem would probably tell you why the second derivative of the Cauchy is so scary.
If you just want to find the MLE you can do it by some more robust 1-D method:
library("bbmle")
mle2(x~dcauchy(location=m),
data=data.frame(x),
start=list(m=median(x)),
method="Brent",
lower=-100,upper=100)
##
## Call:
## mle2(minuslogl = x ~ dcauchy(location = m), start = list(m = median(x)),
## method = "Brent", data = data.frame(x), lower = -100, upper = 100)
##
## Coefficients:
## m
## 1.90179
##
## Log-likelihood: -262.96
##
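A base-R alternative (not in the original answer) that avoids the extra package is to minimize the negative log-likelihood directly with optimize():
# Base-R alternative: minimize the negative log-likelihood over a bracketing
# interval; this should land very close to the mle2() result above.
optimize(function(m) -sum(dcauchy(x, location = m, log = TRUE)),
         interval = c(-100, 100))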
If you start close enough, N-R seems to work OK:
mlec(x,1.85,debug=FALSE,fixed=TRUE,eps=0.0001)
## $theta1
## [1] 1.901592
##
## $check
## [1] 5.214763e-05
##
## $realnumstps
## [1] 37079