How to integrate a function when the lower limit is unknown? (R)

I use RStudio for my university coursework, and I have a task where I need help:
Response times of people were measured, and the following density function was found:
f(x) = 0.62 * (1/x)
Only positive reaction times between 1 and 5 seconds were measured.
In which interval [c, 5] do the top 30 percent of the response times fall? Calculate c!
Normally I would integrate this way:
integrand_2 <- function(x) {0.62 * (1/x)}
integrate(integrand_2, lower = , upper = 5)
But as you can see, I have the problem that the lower limit is unknown. How can I find this unknown lower limit (c)?

Are you looking for this? The idea is to treat the unknown lower limit z as the root of the equation "integral from z to 5 of f(x) dx = 0.3" and solve it numerically with uniroot:
f <- function(z) integrate(function(x) 0.62 / x, z, 5)$value - 0.3
res <- uniroot(f, c(1, 5))$root
Then we have
> res
[1] 3.081973
> integrate(function(x) 0.62 / x, res, 5)$value
[1] 0.2999982

Here is a way, though it does not give quite the same result as ThomasIsCoding's answer.
pdf <- function(x) {
  0.62/x
}
cdf <- function(x) {
  integrate(pdf, lower = 1, upper = x)$value
}
u <- uniroot(function(x) cdf(x) - 0.7, c(1, 5))
u$root
#[1] 3.092671
1 - cdf(u$root)
#[1] 0.2999982
But if the exact normalizing constant 1/log(5) is used instead of the rounded value 0.62, the result comes closer to the true value.
pdf <- function(x) {
  1/log(5)/x
}
# Same cdf
u <- uniroot(function(x) cdf(x) - 0.7, c(1, 5))
u$root
#[1] 3.085178
1 - cdf(u$root)
#[1] 0.2999982
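As a quick sanity check (not part of either answer above), c can also be derived in closed form: with the exact density f(x) = 1/(x*log(5)), requiring the integral from c to 5 to equal 0.3 gives (log(5) - log(c))/log(5) = 0.3, i.e. c = 5^0.7.
# closed-form value of c under the exact normalizing constant 1/log(5)
5^0.7
#[1] 3.085169
This agrees with the uniroot results above up to their default tolerance.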

Generate vector of 'random' proportions of a given length within specific boundaries

I want to generate a vector of a given length, e.g., n = 5. Each value in the vector should be a proportion (i.e., a value between 0 and 1) so that across n elements they sum up to 1.
Unfortunately, I have two vectors: one (mymins) defines the allowed lower boundaries of each proportion and the other (mymaxs) defines the allowed top boundaries of each proportion.
In my example below the desired proportion for the first element is allowed to fall anywhere between 0.3 and 0.9. And for the last element, the desired proportion is allowed to fall between 0.05 and 0.7.
mymins <- c(0.3, 0.1, 0, 0.2, 0.05)
mymaxs <- c(0.9, 1, 1, 1, 0.7)
Let's assume that mymins are always 'legitimate' (i.e., their sum is never larger than 1).
How could I find a set of 5 proportions such that they all sum to 1 but lie within the boundaries?
Here is what I tried:
n <- 5
mydif <- mymaxs - mymins      # possible range for each proportion
myorder <- rank(mydif)        # order those differences from smallest to largest
mytarget <- sum(mydif)        # sum up the 5 ranges
x <- sort(runif(n))[myorder]  # generate 5 random values and sort them in the order of mydif
x2 <- mymins + x / sum(x) * mytarget  # rescale random values to sum up to mytarget and add them to mymins
x3 <- x2/sum(x2)              # rescale x2 to sum up to 1
As you can see, I am not quite there, because after the rescaling some values fall outside of their allowed boundaries.
I should probably also mention that I need this operation to be fast - because I am using it in an optimization loop.
I also tried to find a solution using optim; however, the problem is that it always finds the same solution, and I need to generate a DIFFERENT solution every time I compute the proportions:
myfun <- function(x) {
  x <- round(x, 4)
  abovemins <- x - mymins
  n_belowmins <- sum(abovemins < 0)
  if (n_belowmins > 0) return(100000)
  belowmax <- x - mymaxs
  n_abovemax <- sum(belowmax > 0)
  if (n_abovemax > 0) return(100000)
  mydist <- abs(sum(x) - 1)
  return(mydist)
}
myopt <- optim(par = mymins + 0.01, fn = myfun)
myopt$par
sum(round(myopt$par, 4))
Thank you very much for your suggestions!
Perhaps it's better to think of this in a different way. Your samples actually need to sum to 0.35 (which is 1 - sum(mymins)) and then be added on to the minimum values:
constrained_sample <- function(mymins, mymaxs)
{
  sizes <- mymaxs - mymins
  samp <- runif(5) * sizes
  samp / sum(samp) * (1 - sum(mymins)) + mymins
}
It works like this:
constrained_sample(mymins, mymaxs)
#> [1] 0.31728333 0.17839397 0.07196067 0.29146744 0.14089459
We can test this works by running the following loop, which will print a message to the console if any of the criteria aren't met:
for(i in 1:1000)
{
  test <- constrained_sample(mymins, mymaxs)
  if(!all(test > mymins) | !all(test < mymaxs) | abs(sum(test) - 1) > 1e-6) cat("failure")
}
This throws no errors, since the criteria are always met. However, as @GregorThomas points out, the bounds aren't realistic in this case. We can see the range of solutions constrained by your conditions using a boxplot:
library(ggplot2)

samp <- constrained_sample(mymins, mymaxs)
for(i in 1:999) samp <- rbind(samp, constrained_sample(mymins, mymaxs))
df <- data.frame(val = c(samp[,1], samp[,2], samp[,3], samp[,4], samp[,5]),
                 index = factor(rep(1:5, each = 1000)))
ggplot(df, aes(x = index, y = val)) + geom_boxplot()
Because you need 5 random numbers to sum to 1, you really only have 4 independent numbers and one dependent number.
mymins <- c(0.3, 0.1, 0, 0.2, 0.05)
mymaxs <- c(0.9, 1, 1, 1, 0.7)
set.seed(42)
iter <- 1000
while(iter > 0 &&
      (
        (1 - sum(x <- runif(4, mymins[-5], mymaxs[-5]))) < mymins[5] ||
          (1 - sum(x)) > mymaxs[5]
      )
) iter <- iter - 1
if (iter < 1) {
  # failed
  stop("unable to find something within 1000 iterations")
} else {
  x <- c(x, 1 - sum(x))
}
sum(x)
# [1] 1
all(mymins <= x & x <= mymaxs)
# [1] TRUE
x
# [1] 0.37732330 0.21618036 0.07225311 0.24250359 0.09173965
The reason I use iter there is to make sure you don't spend an effectively infinite amount of time searching. If your mymins and mymaxs combination makes this mathematically infeasible (as your first example was), then you don't need to spin forever. If it is mathematically improbable to find a solution in a reasonable amount of time, you need to weigh how long you are willing to keep trying.
One reason this takes so long is that we are iteratively pulling entropy. If you expect this to go for a long time, then it is generally better to pre-calculate as much as you think you'll need (overall) and run things as a matrix.
set.seed(42)
n <- 10000
m <- matrix(runif(prod(n, length(mymins)-1)), nrow = n)
m <- t(t(m) * (mymaxs[-5] - mymins[-5]) + mymins[-5])
remainders <- (1 - rowSums(m))
ind <- mymins[5] <= remainders & remainders <= mymaxs[5]
table(ind)
# ind
# FALSE TRUE
# 9981 19
m <- cbind(m[ind,,drop=FALSE], remainders[ind])
nrow(m)
# [1] 19
rowSums(m)
# [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
head(m)
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0.3405821 0.1306152 0.05931363 0.2199362 0.24955282
# [2,] 0.3601376 0.1367465 0.20235704 0.2477507 0.05300821
# [3,] 0.4469526 0.1279795 0.02265618 0.2881733 0.11423845
# [4,] 0.5450527 0.1029903 0.07503371 0.2052423 0.07168103
# [5,] 0.3161519 0.1469783 0.15290720 0.3268470 0.05711557
# [6,] 0.4782448 0.1185735 0.01664063 0.2178225 0.16871845
all(
mymins[1] <= m[,1] & m[,1] <= mymaxs[1],
mymins[2] <= m[,2] & m[,2] <= mymaxs[2],
mymins[3] <= m[,3] & m[,3] <= mymaxs[3],
mymins[4] <= m[,4] & m[,4] <= mymaxs[4],
mymins[5] <= m[,5] & m[,5] <= mymaxs[5]
)
# [1] TRUE
This time it took 10000 attempts to make 19 valid combinations. It might take more or fewer attempts based on randomness, so ymmv with regards to how much you need to pre-generate.
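If you need a given number of valid combinations, a rough sizing rule (purely illustrative, assuming the acceptance rate stays near the 19/10000 observed above) is to divide the target count by that rate:
k <- 100               # desired number of valid combinations
p_hat <- 19 / 10000    # observed acceptance rate from the run above
ceiling(k / p_hat)     # candidate rows needed on average
# [1] 52632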
If your example bounds are realistic, we can refine them quite a bit, narrowing the range of possibilities. For the current version of the question with:
mymins = c(0.3, 0.1, 0, 0.2, 0.05)
mymaxs = c(0.9, 1, 1, 1, 0.7)
What's the max for x[1]? Well, if x[2:5] take on their minimum values, they will add up to 0.1 + 0 + 0.2 + 0.05 = 0.35, so based on the other mins alone we know that the max value for x[1] is 1 - 0.35 = 0.65. The 0.9 in mymaxs is way too high.
We can calculate the actual max values taking the minimum of the max values based on the minimums and the mymaxs vector:
new_max = pmin(mymaxs, 1 - (sum(mymins) - mymins))
new_max
# [1] 0.65 0.45 0.35 0.55 0.40
We can similarly revise the min bounds, though in this case even the revised max bounds new_max are high enough that they have no impact on the minimums.
new_min = pmax(mymins, 1 - (sum(new_max) - new_max))
new_min
# [1] 0.30 0.10 0.00 0.20 0.05
With these adjustments, we should be able to see easily if any solutions are possible (all(new_min < new_max)). And then generating random numbers as in r2evans's answer should go much quicker using the new bounds.
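As an illustration (a sketch that combines these refined bounds with the vectorized rejection sampler from the answer above; it is not part of either original answer), drawing the first four coordinates from [new_min, new_max] should raise the acceptance rate considerably:
set.seed(42)
n <- 10000
m <- matrix(runif(n * 4), nrow = n)
m <- t(t(m) * (new_max[-5] - new_min[-5]) + new_min[-5])
remainders <- 1 - rowSums(m)
keep <- new_min[5] <= remainders & remainders <= new_max[5]
sum(keep)   # far more valid rows than the 19 found with the raw bounds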

Optimize function in r with the boundary value

I have the following function and want to find $X$ satisfying this requirement:
$$\frac{X^{2}(1.5)^{2}e^{1.5X}}{2} < 1$$
I wrote the following R function:
f <- function(X) 0.5*X^2 * 1.5^2 * exp(X*1.5) < 1
optimize(f, c(0, 1))
But it is giving me an error. I want to find the value of X satisfying the requirement. Thank you for the help.
If we define ff as
ff <- function(X) 0.5*X^2 * 1.5^2 * exp(X*1.5)
then graphing it
curve(ff)
we see that ff(0) = 0 and ff(x) is monotonically increasing in x. The largest value of x for which ff(x) <= 1 can be calculated as the solution to ff(x) = 1 which occurs at the minimum of g:
g <- function(x) (ff(x) - 1)^2
optimize(g, c(0, 1))
giving:
$minimum
[1] 0.6008074
$objective
[1] 1.058761e-09
Thus any value of x between 0 and 0.6008074 gives a value of ff in the closed interval [0, 1].
# create graph
curve(ff)
opt <- optimize(g, c(0, 1))
abline(h = 0:1)
abline(v = c(0, opt$minimum))
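Since ff is monotonically increasing on this interval, the boundary point can also be found directly with uniroot (an alternative sketch, not part of the answer above):
# ff(0) - 1 = -1 < 0 and ff(1) - 1 > 0, so the root lies in (0, 1)
uniroot(function(x) ff(x) - 1, c(0, 1))$root
# approximately 0.6008, matching the optimize result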

Manual simulation of Markov Chain in R

Consider the Markov chain with state space S = {1, 2}, transition matrix
P = [ 1/2  1/2 ]
    [  0    1  ]
and initial distribution α = (1/2, 1/2).
Simulate 5 steps of the Markov chain (that is, simulate X0, X1, ..., X5). Repeat the simulation 100 times. Use the results of your simulations to solve the following problems.
Estimate P(X1 = 1 | X0 = 1). Compare your result with the exact probability.
My solution:
# returns the distribution of X_n
# (note: matrixpower() is not a base R function)
func2 <- function(alpha1, mat1, n1)
{
  xn <- alpha1 %*% matrixpower(mat1, n1 + 1)
  return (xn)
}
alpha <- c(0.5, 0.5)
mat <- matrix(c(0.5, 0.5, 0, 1), nrow=2, ncol=2)
n <- 10
for (variable in 1:100)
{
  print(func2(alpha, mat, n))
}
What is the difference if I run this code once or 100 times (as is said in the problem-statement)?
How can I find the conditional probability from here on?
Let
alpha <- c(1, 1) / 2
mat <- matrix(c(1 / 2, 0, 1 / 2, 1), nrow = 2, ncol = 2) # Different than yours
be the initial distribution and the transition matrix. Your func2 only finds the n-th step distribution, which isn't needed here, and doesn't simulate anything. Instead we may use
chainSim <- function(alpha, mat, n) {
  out <- numeric(n)
  out[1] <- sample(1:2, 1, prob = alpha)
  for(i in 2:n)
    out[i] <- sample(1:2, 1, prob = mat[out[i - 1], ])
  out
}
where out[1] is generated using only the initial distribution and then for subsequent terms we use the transition matrix.
Then we have
set.seed(1)
# Doing once
chainSim(alpha, mat, 1 + 5)
# [1] 2 2 2 2 2 2
so the chain started at 2 and got stuck there due to the specified transition probabilities.
Doing it for 100 times we have
# Doing 100 times
sim <- replicate(chainSim(alpha, mat, 1 + 5), n = 100)
rowMeans(sim - 1)
# [1] 0.52 0.78 0.87 0.94 0.99 1.00
where the last line shows how often we ended up in state 2 rather than 1. That gives one (out of many) reasons why 100 repetitions are more informative: we got stuck at state 2 doing just a single simulation, while repeating it for 100 times we explored more possible paths.
Then the conditional probability can be found with
mean(sim[2, sim[1, ] == 1] == 1)
# [1] 0.4583333
while the true probability is 0.5 (given by the upper left entry of the transition matrix).
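To make the comparison explicit, the exact conditional probability can be read straight from the transition matrix:
# exact P(X1 = 1 | X0 = 1) is the upper-left entry of the transition matrix
mat[1, 1]
# [1] 0.5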

Finding the largest argument of a function subject to a bound

As the title suggests, I would like to solve the following problem. Let f denote a certain function and let f0 denote a given constant. Is there an economical way of finding max{x : f(x) <= f0}?
Here is what an example would look like:
f = function(x) (x-2)^2
f0 = 0.4
and in that case the correct answer would be about 2.63 (namely 2 + sqrt(0.4)). Thank you in advance.
One possibility to optimize with constraints would be to define a version of your function f which returns -Inf whenever the constraint is not met, so that infeasible points can never win the maximization:
f <- function(x) (x-2)^2
f0 <- 0.4
f_optim <- function(x, a = f0) ifelse(f(x) <= a, f(x), -Inf)
optimize(f_optim, c(-10, 10), a = f0, maximum = TRUE, tol = .Machine$double.eps)
$maximum
[1] 2.632456
$objective
[1] 0.4
Another option is to minimize the absolute difference between f(x) and f0, which converges to a boundary point where f(x) = f0:
f0 = 0.04
f = function(x) (x - 2)^2
g = function(x, f0) {
  delta = f0 - f(x)
  abs(delta)
}
optimize(g, c(0, 10), f0 = f0, maximum = FALSE, tol = .Machine$double.eps)
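Since f increases to the right of its minimizer, another direct route (a sketch, not taken from the answers above) is to solve f(x) = f0 with uniroot on a bracket to the right of the minimum:
f <- function(x) (x - 2)^2
f0 <- 0.4
# f(2) - f0 < 0 and f(10) - f0 > 0, so the bracket (2, 10) contains the root
uniroot(function(x) f(x) - f0, c(2, 10))$root
# approximately 2.632456, i.e. 2 + sqrt(0.4)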

Triple integral in R (how to specifying the domain)

I would like to compute the triple integral of a function of three variables f(x,y,z) in R. I'm using the package cubature and the function adaptIntegrate(). The integrand is equal to 1 only on a certain domain (x < y < z, and 0 otherwise), which I don't know how to specify. I'm trying two different implementations of the function, but neither of them works:
# First implementation
fxyz <- function(w) {
  x <- w[1]
  y <- w[2]
  z <- w[3]
  x*y*z*(x < y)&(y < z)
}
# Second implementation
fxyz <- function(w) {
  x <- w[1]
  y <- w[2]
  z <- w[3]
  if(x < y & y < z)
    out <- 1
  else
    out <- 0
  out
}
# Computation of integral
library(cubature)
lower <- rep(0,3)
upper <- rep(1, 3)
adaptIntegrate(f=fxyz, lowerLimit=lower, upperLimit=upper, fDim = 3)
Any idea on how to specify the domain correctly?
I don't know about the cubature package, but you can do this by repeated application of base R's integrate function for one-dimensional integration.
f.xyz <- function(x, y, z) ifelse(x < y & y < z, 1, 0)
f.yz <- Vectorize(function(y, z) integrate(f.xyz, 0, 1, y = y, z = z)$value,
                  vectorize.args = "y")
f.z <- Vectorize(function(z) integrate(f.yz, 0, 1, z = z)$value,
                 vectorize.args = "z")
integrate(f.z, 0, 1)
# 0.1666632 with absolute error < 9.7e-05
You'll probably want to play with the control arguments to set the numeric tolerances; small errors in the inner integration can turn into big ones on the outside.
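For example (an illustrative tweak, not from the original answer), a tighter rel.tol can be passed through to the inner integrate calls:
# tighter inner tolerance so small errors don't compound in the outer integrals
f.yz <- Vectorize(function(y, z) integrate(f.xyz, 0, 1, y = y, z = z,
                                           rel.tol = 1e-8)$value,
                  vectorize.args = "y")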
In your first function the return value is wrong: it should be as.numeric(x <= y)*as.numeric(y <= z). In your second function you should also use <= instead of <, otherwise adaptIntegrate won't work correctly. You also need to specify a maximum number of evaluations. Try this:
library(cubature)
lower <- rep(0,3)
upper <- rep(1,3)
# First implementation (modified)
fxyz <- function(w) {
  x <- w[1]
  y <- w[2]
  z <- w[3]
  as.numeric(x <= y)*as.numeric(y <= z)
}
adaptIntegrate(f = fxyz, lowerLimit = lower, upperLimit = upper, doChecking = TRUE,
               maxEval = 2000000, absError = 10e-5, tol = 1e-5)
#$integral
#[1] 0.1664146
#$error
#[1] 0.0001851699
#$functionEvaluations
#[1] 2000031
#$returnCode
#[1] 0
The domain 0 <= x <= y <= z <= 1 is the "canonical" simplex. To integrate over a simplex, use the SimplicialCubature package.
library(SimplicialCubature)
f <- function(x) 1
S <- CanonicalSimplex(3)
adaptIntegrateSimplex(f, S)
$integral
[1] 0.1666667
$estAbsError
[1] 1.666667e-13
$functionEvaluations
[1] 55
$returnCode
[1] 0
$message
[1] "OK"
Note that integrating the constant function f(x) = 1 over the simplex simply gives the volume of the simplex, which is 1/6, so the numerical integration is unnecessary for this example:
SimplexVolume(S)
[1] 0.1666667
