I would like to write code to solve this kind of equation:

integral from x1 = 0 to t0 of [ integral from x2 = x1 to t0 of exp(-mu*(x1+x2)) * (1 - exp(-lambda*(x1+x2))) dx2 ] dx1

For that I wrote the code below, but it does not solve the problem. Do you have any ideas on how to solve this kind of integral in R?
t_0 = 15
mu = 0.1
lambda = 0.8
f = function(x1,x2) exp(mu*(x1+x2))*dexp(log(lambda)*(x1+x2))
f_comp = function(x2) f(x1,x2) # note: x1 is not defined in this function's scope
f_1 = function(x1) {integrate(f_comp, upper = t_0, lower = x1)} # returns an 'integrate' object, not a numeric vector
result = integrate(f = f_1, lower = 0, upper = t_0)$value # integrate() needs a vectorized function returning numeric values
Edit:
Given the answer below, I adapted the code to my example, but I still think it is not correct; at the very least, a value of 0 for the integral does not make sense.
integrate(function(x1) {
  sapply(x1, function(x1) {
    ## note: log(lambda) < 0 here, so dexp() is evaluated at negative
    ## arguments and returns 0 almost everywhere -- hence the zero result
    integrate(function(x2) exp(mu*(x1+x2))*dexp(log(lambda)*(x1+x2)), lower = x1, upper = t_0)$value
  })
}, 0, t_0)
By the way, I would like to build a general procedure for this (that is why I do not just calculate the integral by hand): not only double integrals but n-tuple integrals as well, so I need a general procedure for this kind of calculation.
Draw a picture of the domain of integration. It is a simplex (a triangle) with vertices (0,0), (0,t0), and (t0,t0). To evaluate an integral over a simplex, the SimplicialCubature package is the way to go; it also covers the n-dimensional case you ask about (see the sketch after the check below).
t0 = 15
mu = 0.1
lambda = 0.8
library(SimplicialCubature)
f <- function(xy){
  x <- xy[1]; y <- xy[2]
  exp(-mu*(x+y)) * (1-exp(-lambda*(x+y)))
}
S <- cbind(c(0,0), c(0,t0), c(t0,t0))
adaptIntegrateSimplex(f, S)$integral
# 29.55906
integrate(function(x1) {
  sapply(x1, function(x1) {
    integrate(function(x2) exp(-mu*(x1+x2))*(1-exp(-lambda*(x1+x2))),
              lower = x1, upper = t0)$value
  })
}, 0, t0)$value
# 29.55906
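Since you ultimately want n-tuple integrals: the region 0 <= x1 <= ... <= xn <= t0 is a simplex in any dimension, so the same call works unchanged. Here is a minimal sketch for n = 3 (the integrand f3 simply extends the example above; it is my illustration, not something from the question):
f3 <- function(x) exp(-mu*sum(x)) * (1 - exp(-lambda*sum(x)))
S3 <- cbind(c(0,0,0), c(0,0,t0), c(0,t0,t0), c(t0,t0,t0)) # each column is a vertex of {0 <= x1 <= x2 <= x3 <= t0}
adaptIntegrateSimplex(f3, S3)$integral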
I am trying to simulate fractional Brownian motion (fBm) via its integral representation. I know that there are faster methods out there, but I would like to play around with the kernel function inside the integral.
My approach was to simulate the stochastic integral cumulatively.
set.seed(100)
dt = 0.01 # note: the time grid below actually uses spacing 0.1
T = seq(-50, 10, 0.1)
n = length(T)
m = length(T)
Gamma = matrix(0, n, m)
H = 0.9
exponent <- function(a, pow) (abs(a)^pow)*sign(a)
for(j in 1:length(T)){
  zeile = numeric(length(T)) # reset the path for each j
  y = sqrt(dt)*rnorm(n=1, mean = 0, sd = 1) # normally distributed r.v.
  zeile[1] = (max(exponent(T[j] - T[1], H-0.5), 0) - max(exponent(-T[1], H-0.5), 0))*y # first entry of one path
  for(i in 1:(length(T)-1)){
    y1 = sqrt(dt)*rnorm(n=1, mean = 0, sd = 1)
    y2 = sqrt(dt)*rnorm(n=1, mean = 0, sd = 1)
    zeile[i+1] = zeile[i] + max(exponent(T[j] - T[i], H-0.5), 0)*y1 - max(exponent(-T[i], H-0.5), 0)*y2
  }
  Gamma[j,] = zeile
}
normalV = rnorm(length(T), mean = 0, sd = 1)
path = Gamma %*% normalV
plot(T, path, type = "l")
T is the interval over which we plot. The first for loop goes over each time point and fixes it for the second for loop, which fills one row of the matrix. After we have our matrix, I thought I had to multiply it by an N(0,1) vector in order to get one path of fBm. Clearly I do not.
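For what it is worth, here is a minimal sketch of how I would discretize the integral directly, assuming the Mandelbrot-Van Ness kernel (with the normalizing constant dropped) and one shared vector of Brownian increments instead of fresh draws per row; this is my reading of the representation, not a verified solution:
set.seed(100)
H <- 0.9
dt <- 0.1 # grid spacing, consistent with the sequences below
s <- seq(-50, 10, by = dt) # integration grid, truncating -Inf at -50
t <- seq(0, 10, by = dt) # times at which the path is evaluated
exponent <- function(a, pow) (abs(a)^pow)*sign(a)
kern <- function(t, s) pmax(exponent(t - s, H - 0.5), 0) - pmax(exponent(-s, H - 0.5), 0)
dW <- sqrt(dt)*rnorm(length(s)) # ONE set of increments, shared by every time point
path <- sapply(t, function(ti) sum(kern(ti, s)*dW))
plot(t, path, type = "l")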
I am working on a portfolio optimization algorithm, and part of the problem consists in generating moment-matching scenarios.
Owing to its simplicity and speed, my choice was to follow the paper "An algorithm for moment-matching scenario generation with application to financial portfolio optimization" (Ponomareva, Roman and Date).
The problem is that, even though the mathematics is very simple, some of the probability weights p_i come out negative, although the formulas in the paper should ensure otherwise. If I put the algorithm in a loop until it finds a positive combination, it essentially runs forever.
Here is the part of the code, based on the paper, where things get stuck:
dummy1 = 0
while (dummy1 <= 0 | dummy1 >= 1) {
  dummy1 = round(rnorm(1, mean = 0.5, sd = 0.25), 2)
}
diag.cov.returns = diag(cov.returns)
Z = dummy1 * sqrt(diag.cov.returns) # vector Z according to the paper's formula
ZZT = Z %*% t(Z)
LLT = cov.returns - ZZT
L = chol(LLT) # Cholesky decomposition to get matrix L
s = sample(1:5, 1)
F1 = 0
F2 = -1
S = (2*N*s) + 3
while (((4*F2) - (3*F1*F1)) < 0) {
  #Gamma = (2*s*s)*(((N*mean.fourth) - (0.75*(sum(Z^4)* (N*mean.third/sum(Z^3))^2)))/sum(L^4))
  #Gamma is necessary if we want to get p from a uniform distribution
  #U = runif(s, 0, 1)
  U = rgamma(s, shape = 1, scale = ((1/exp(1)):1))
  #p = (s*(N/Gamma)) + ((1/(2*N*s)) - (s/(N*Gamma)))*U
  p = -log(U, base = exp(1))
  p = p/(((2*sum(p)) + max(p))*N*s) # these values should be positive and bounded between 0 and 1
  q1 = 1/p
  pz = p
  p[s+1] = 1 - (2*N*sum(p)) # extra point needed to get the 3 moment-matching probabilities
  F1 = (N*mean.third*sqrt(p[s+1]))/(sum(Z^3))
  F2 = p[s+1]*(((N*mean.fourth) - (1/(2*s*s))*sum(L^4)*(sum(1/p)))/sum(Z^4))
}
alpha = (0.5*F1) + 0.5*sqrt((4*F2) - (3*F1*F1))
beta = -(0.5*F1) + 0.5*sqrt((4*F2) - (3*F1*F1))
w1 = 1/(alpha*(alpha+beta))
w2 = 1/(beta*(alpha+beta))
w0 = 1 - (1/(alpha*beta))
P = rep(pz, 2*N) # probabilities starting from p, plus 3 extra to match the third and fourth moments
P[(2*N*s)+1] = p[s+1]*w0
P[(2*N*s)+2] = p[s+1]*w1
P[(2*N*s)+3] = p[s+1]*w2
Unfortunately I cannot disclose the input dataset containing the funds' returns, but I can be more specific. Start from a data.frame() containing N assets' returns (in my case, 11 funds with monthly returns from 30/01/2001 to 30/09/2020), and compute the mean returns, the covariance matrix, the central third and fourth moments (NOT skewness and kurtosis), and their averages. The algorithm then proceeds as reported above. The point where I get stuck is that p also takes negative values; this is a problem because the first s elements of p are later used as probabilities in P.
I hope this makes the problem clearer. I should add that the paper reports the data used by the authors, but importing it into R would have to be done manually. In any case, any data.frame() containing assets' returns will do.
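For reference, here is a hypothetical setup for the quantities the snippet above assumes; the names and the use of averaged central moments are my reading of the description (any data.frame of asset returns will do, so I simply simulate one):
returns <- as.data.frame(matrix(rnorm(237*11, 0.005, 0.04), ncol = 11)) # 237 monthly returns, 11 funds
N <- ncol(returns)
mean.returns <- colMeans(returns)
cov.returns <- cov(returns)
centered <- sweep(as.matrix(returns), 2, mean.returns)
mean.third <- mean(colMeans(centered^3)) # average central third moment
mean.fourth <- mean(colMeans(centered^4)) # average central fourth moment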
I have the following function, and I need it to be maximized instead of minimized.
adbudgReturn = function(Spend, a, b, c, d){
  adbudgReturn = sum(b + (a-b)*((Spend^c)/(d + (Spend^c))))
  return(adbudgReturn)
}
FP_param <- c(95000,0,1.15,700000)
FB_param <- c(23111.55,0,1.15,20000)
GA_param <- c(115004,1409,1.457,2000000)
y = c(0.333333,0.333333,0.333333)
TotalSpend <- function(Budget, y){
  FP_clicks = adbudgReturn(Budget * y[1], FP_param[1], FP_param[2], FP_param[3], FP_param[4])
  FB_clicks = adbudgReturn(Budget * y[2], FB_param[1], FB_param[2], FB_param[3], FB_param[4])
  GA_clicks = adbudgReturn(Budget * y[3], GA_param[1], GA_param[2], GA_param[3], GA_param[4])
  return(FP_clicks + FB_clicks + GA_clicks) # note: return(total = ...) is a syntax error in R
}
startValVec = c(0.33333,0.333333,0.3333333)
minValVec = c(0,0.2,0)
maxValVec = c(0.8, 1, 0.8)
MaxClicks_optim.parms <- nlminb(objective = TotalSpend, start = startValVec,
                                lower = minValVec,
                                upper = maxValVec,
                                control = list(iter.max=100000, eval.max=20000),
                                Budget = 10000)
I have tried adding a minus sign in front of the nlminb function, i.e.:
-nlminb(..)
but without any success. Any help will be appreciated.
Also, I would like to add a constraint so that the sum of maxValVec = 1.
Other optimization functions in R, such as optim(), have a built-in fnscale control parameter you can use to switch from minimization to maximization (i.e. optim(..., control=list(fnscale=-1))), but nlminb doesn't appear to. So you either need to flip the sign in your original objective function, or (possibly more transparently) make a wrapper function that inverts the sign, e.g.
max_obj <- function(...) -1*TotalSpend(...)
MaxClicks_optim.parms <- nlminb(objective = max_obj,
[ .... everything else as before ... ] )
Note that the ... in the max_obj() definition are literal. The only part of the solution above that needs to be filled in is the [ .... everything else as before ... ] part. To be absolutely explicit:
max_obj <- function(...) -1*TotalSpend(...)
MaxClicks_optim.parms <- nlminb(objective = max_obj,
                                start = startValVec,
                                lower = minValVec,
                                upper = maxValVec,
                                control = list(iter.max=100000, eval.max=20000),
                                Budget = 1e4)
If you were using a user-specified gradient argument you'd have to wrap that too.
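For example, here is a purely illustrative sketch with a finite-difference gradient (the original code defines no gradient; TotalSpend_grad and max_grad are my names, not part of the question):
TotalSpend_grad <- function(y, Budget) { # hypothetical numerical gradient of TotalSpend w.r.t. y
  eps <- 1e-6
  sapply(seq_along(y), function(i) {
    h <- replace(numeric(length(y)), i, eps)
    (TotalSpend(Budget, y + h) - TotalSpend(Budget, y - h))/(2*eps)
  })
}
max_grad <- function(...) -1*TotalSpend_grad(...) # same sign flip as max_obj
nlminb(objective = max_obj, gradient = max_grad, start = startValVec,
       lower = minValVec, upper = maxValVec, Budget = 1e4)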
This CV question points out that you can maximize by minimizing the negative of a function, but doesn't go into the nuts and bolts.
An optim()-based solution would look something like:
optim(fn = TotalSpend,
      par = startValVec,
      lower = minValVec,
      upper = maxValVec,
      method = "L-BFGS-B",
      control = list(maxit=100000, fnscale=-1),
      Budget = 1e4)
L-BFGS-B is the only method built into optim() that does box-constrained optimization. Note also that optim() doesn't have separate controls for the maximum number of iterations and the maximum number of function evaluations.
Here is an example with a simple parabolic function. It works the same with nlminb and optim:
## ==== Some preliminaries ========================
par(mfrow=c(1,2))
a <- b <- seq(-10, 10, 0.1)
## ==== Search for a minimum ======================
# function has minimum
f1 <- function(a, b) {
  (a - 1)^2 + (b - 2)^2
}
## show function, blue color is low
image(a, b, outer(a, b, f1), col=topo.colors(16))
## wrapper: combine parameters
g1 <- function(p) f1(p["a"], p["b"])
## minimization
(ret <- nlminb(c(a=0, b=0), g1))
## show minimum
points(t(ret$par), pch="+", cex=2)
## ==== Search for a maximum =======================
## function has a maximum
f2 <- function(a, b) {
  -(a - 1)^2 - (b + 2)^2
}
## brown color is high
image(a, b, outer(a, b, f2), col=topo.colors(16))
## wrapper: combine parameters, invert sign
g2 <- function(p) -f2(p["a"], p["b"])
## minimization of negative objective = maximization
(ret <- nlminb(c(a=0, b=0), g2))
## show maximum
points(t(ret$par), pch="+", cex=2)
In my R code below, I was wondering how I could find out what rh1 is when y == 0.5.
Note that y uses atanh(rh1), which can be converted back to rh1 using tanh().
rh1 <- seq(-1, 0.1, by = 0.001)
y <- pnorm(-0.13, atanh(rh1), 0.2)
plot(rh1, y, type = "l")
Analytical solution
Consider a normal distribution X ~ N(mu, 0.2), where 0.2 is the standard deviation. We want to find mu such that Pr(X < -0.13) = y.
Recall your previous question and my answer over there: Determine a normal distribution given its quantile information. Here we have something simpler, as there is only one unknown parameter and one piece of quantile information.
Again, we start by standardization:
Pr {X < -0.13} = y
=> Pr { (X - mu) / 0.2 < (-0.13 - mu) / 0.2 } = y
=> Pr { Z < (-0.13 - mu) / 0.2 } = y    # Z ~ N(0,1)
=> (-0.13 - mu) / 0.2 = qnorm(y)
=> mu = -0.13 - 0.2 * qnorm(y)
Now let atanh(rh1) = mu, i.e. rh1 = tanh(mu). In short, the analytical solution is:
tanh( -0.13 - 0.2 * qnorm(y) )
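As a quick check for y = 0.5: qnorm(0.5) is 0, so the analytical solution collapses to tanh(-0.13).
tanh(-0.13 - 0.2 * qnorm(0.5))
# about -0.12927, in line with the numerical solutions below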
Numerical solution
It is a root finding problem. We first build the following function f, and we aim to find its root, i.e., the rh1 so that f(rh1) = 0.
f <- function (rh1, y) pnorm(-0.13, atanh(rh1), 0.2) - y
The simplest bracketing root-finding method is bisection; in R, root finding on an interval is provided by uniroot. I recommend reading Uniroot solution in R for how to work with it in general.
curve(f(x, 0.5), from = -1, to = 0.1); abline(h = 0, lty = 2)
We see there is a root in the interval (-0.2, 0), so:
uniroot(f, c(-0.2, 0), y = 0.5)$root
# [1] -0.129243
Your function is monotonic, so you can just create the inverse function.
rh1 <- seq(-1,.1,by=.001)
y <- pnorm(-.13,atanh(rh1),.2)
InverseFun = approxfun(y, rh1)
InverseFun(0.5)
[1] -0.1292726
I would like to integrate the following function, named betalog:
g <- function(x, a, b){
  if (a < 0 | b < 0) stop()
  temp <- (a-1)*log(x) + (b-1)*log(1-x)
  return( exp(temp) )
}
betalog <- function(x, a, b){
  temp <- g(x=x, a=a, b=b) * log(x/(1-x))
  return( temp )
}
The function g is the integrand of the beta function. In theory, betalog should be integrable over any interval [0, alpha] with 0 < alpha < 1, as long as a > 0 and b > 0.
However, I cannot numerically integrate betalog when a is very small:
a <- 0.00001
b <- 1
alpha <- 0.5
integrate(betalog,a=a,b=b,lower=0,upper=alpha,subdivisions=1000000L)
Error in integrate(betalog, a = a, b = b, lower = 0, upper = alpha, subdivisions = 1000000L) :
  non-finite function value
In fact, I cannot even compute the incomplete beta function with R's integrate function when a is very small:
integrate(g,a=a,b=b,lower=0,upper=alpha,subdivisions=1000000L)
Error in integrate(g, a = a, b = b, lower = 0, upper = alpha, subdivisions = 1000000L) :
  roundoff error is detected in the extrapolation table
Can anyone give me a tip for integrating such incomplete-beta-like functions in R?
> betalog(0, a, b)
[1] -Inf
Your function is singular at the lower bound. Recall that to compute an improper integral you must replace the singular bound with a dummy variable and take the limit from the correct side toward that bound. In particular,
> integrate(betalog,a=a,b=b,lower=0.000001,upper=alpha,subdivisions=10000000L)
-94.60292 with absolute error < 0.00014
> integrate(betalog,a=a,b=b,lower=.Machine$double.xmin * 1000,upper=alpha,subdivisions=10000000L)
-244894.7 with absolute error < 10
> integrate(betalog,a=a,b=b,lower=.Machine$double.xmin,upper=alpha,subdivisions=10000000L)
Error in integrate(betalog, a = a, b = b, lower = .Machine$double.xmin, :
non-finite function value
At first glance one might suspect that the integral diverges. In fact it converges for a > 0, but almost all of its (enormous) mass sits in a region near 0 that the quadrature never samples; even state-of-the-art symbolic algebra systems have trouble with it:
http://www.wolframalpha.com/input/?i=Integral%28x%5E%280.00001+-1%29+ln%28x%2F%281-x%29%29%2C+x%2C0%2C+0.5%29
Whatever the case, R's integrate is not the right tool for this problem.
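To see this concretely for the special case b = 1 (so the (1-x)^(b-1) factor is 1): the dominant piece of the integrand near 0 is x^(a-1)*log(x), whose antiderivative x^a*log(x)/a - x^a/a^2 tends to 0 as x -> 0+, so the integral is finite but of order -1/a^2:
a <- 0.00001; alpha <- 0.5
alpha^a*log(alpha)/a - alpha^a/a^2 # roughly -1e10
The remaining piece, -x^(a-1)*log(1-x), contributes only about +0.58 (a dilogarithm term), so the true value is around -1e10 -- far beyond what adaptive quadrature starting from any representable positive lower bound can recover.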