Inverse function in R

So I have a function f(a, b, rho) = r. Given a, b, and r, I would like to find rho. I came across the inverse function in R, but it seems that when the function tries to find rho it cannot tell which of its arguments is rho, and it also cannot pick up the given a and b. In addition, I know that rho will be between 0 and 1.
a = -.7
b = 2
r <- function(rho,a,b){
# basically here I defined a very long function of r
# in terms of a and b and rho
}
R_inverse <- inverse(function(rho) r(rho, -0.7, 2), 0, 1)
# r_value is just a random value
R_inverse(r_value)
This does not work. I would appreciate any help with the inverse function, or any alternative way to find rho given r, a, and b.
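In case it helps, here is a minimal sketch of one common workaround, assuming r is continuous and monotone in rho on [0, 1]: hold a and b fixed and bracket the root with uniroot rather than using a generic inverse helper. The body of r below is a made-up placeholder, not the actual function.

# Sketch only: invert r in its first argument with uniroot, holding a and b fixed.
a <- -0.7
b <- 2
r <- function(rho, a, b) {
  exp(a * rho) + b * rho   # placeholder body; monotone in rho on [0, 1]
}
# Find rho in [0, 1] such that r(rho, a, b) == r_value
invert_r <- function(r_value, a, b) {
  uniroot(function(rho) r(rho, a, b) - r_value, lower = 0, upper = 1)$root
}
invert_r(2, a, b)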

How to solve an objective function having an exponential term with a different base in CVXR?

I am using CVXR to solve a concave objective function. The decision variable x is one-dimensional, and the objective function is the sum of two logarithmic terms, the second of which involves exponentials with different bases a and b (e.g., a^x, b^x); a and b are constants.
My full objective function is:
(-x*sum(ln(y))) + ln((1-x)/((a^(1-x))-(b^(1-x))))
where y is a given 1-D vector of data.
When I add the second term having (a^x and b^x) to the objective function, I keep getting
Error in a^(1 - x): non-numeric argument to binary operator
Is there any atom function in CVXR that can be used to code constant^x?
Here is my code:
library(CVXR)
a <- 7
b <- 0.3
M=1000
x_i # is a given vector of 1-D data
x <- Variable(1)
nominator <- (1-x)
denominator <- (1/((a^(1-x))-(b^(1-x))))
obj <- (-x*sum(log(x_i)) + M*log(nominator/denominator)) # change M to the length of x_i later
constr <- list(x>0)
prob <- Problem(Maximize(obj), constr)
result <- solve(prob)
alpha_hat <- result$getValue(x)
Please tell me what I am doing wrong. I appreciate your help in advance.
Do some math:
2 = e^(log 2)
2^x = (e^(log 2))^x = e^(x log 2)
So you can try
denominator <- 1/(exp(log(a)*(1-x)) - exp(log(b)*(1-x)))
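A quick sanity check of the identity with plain numbers (my own addition; as I understand it, CVXR's exp atom accepts the affine expression log(a) * (1 - x), while ^ with a Variable exponent does not):

# Verify a^x == exp(log(a) * x) numerically in plain R
a <- 7; x <- 0.4
all.equal(a^x, exp(log(a) * x))  # TRUE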

compute the integration in R

I compute a cumulative distribution function, whose result should lie in [0, 1]. The equation for computing the CDF is:
\begin{align}
F = \int_{\hat{a}}^{x} \frac{2}{\hat{b}-\hat{a}} \, {\sum_{k=0}^{N-1}}' C_{k} \cos\!\left( (y - \hat{a}) \frac{k \pi}{\hat{b} - \hat{a}} \right) dy
\end{align}
where
C_k is a vector,
the cosine term is a vector, and
length(C_k) = length(cosine term) = N.
I am sure the equation is correct, but I am afraid my code is incorrect.
Here is my code:
integrand <- function(x, myCk) {
  (2/(b - a)) * (t(myCk) %*% as.matrix(cos((x - hat.a) * uk)))
}
f <- function(x) { integrand(x, myCk) }
# define a vectorized version of this function
fv <- Vectorize(f, "x")
res <- integrate(fv, upper = r, lower = hat.a, subdivisions = 2000)$value
res returns the cumulative distribution function, and the result can be larger than 1.
myCk is a vector generated by another function.
hat.a is the lower bound for integration, and it is negative.
uk is a vector generated by a function; the length of uk equals the length of myCk.
I appreciate your advice!
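For what it's worth, here is a self-contained version with made-up inputs; everything in it is an assumption of mine, including the placeholder coefficients, the definition uk = k*pi/(hat.b - hat.a), and the primed-sum convention that the k = 0 term is weighted by 1/2:

# Self-contained sketch with invented data, not the poster's actual inputs
hat.a <- -2; hat.b <- 2
N <- 16
k <- 0:(N - 1)
uk <- k * pi / (hat.b - hat.a)   # assumed definition of uk
myCk <- exp(-k)                  # placeholder coefficients
myCk[1] <- myCk[1] / 2           # primed sum: halve the k = 0 term
integrand <- function(x, myCk) {
  (2 / (hat.b - hat.a)) * sum(myCk * cos((x - hat.a) * uk))
}
fv <- Vectorize(function(x) integrand(x, myCk), "x")
integrate(fv, lower = hat.a, upper = 1, subdivisions = 2000)$value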

Nonlinear equation involving summations in R

I am having the hardest time trying to implement this equation in a nonlinear solver in R. I am trying both the nleqslv and BB packages, but so far I am getting nothing but errors. I have searched and read documentation until my eyes bled, but I cannot wrap my brain around it. The equation itself works like this:
The Equation
s2 * sum(price^(2*x+2)) - s2.bar * sum(price^(2*x)) = 0
Where s2, s2.bar and price are known vectors of equal length.
The last attempt I tried in BB was this:
gamma = function(x){
n = length(x)
f = numeric(n)
f[n] = s2*sum(price^(2*x[n]+2)) - s2.bar*sum(price^(2*x[n]))
f
}
g0 = rnorm(length(price))
results = BBsolve(par=g0, fn=gamma)
From your description of the various parts used in the function, you seem to have muddled up the formula.
Your function gamma should most probably be written as
gamma <- function(x){
f <- s2*sum(price^(2*x+2)) - s2.bar*sum(price^(2*x))
f
}
s2, price and s2.bar are vectors according to your description, so the formula you gave will return a vector.
Since you have not given any data, we cannot test. I have tried testing with randomly generated values for s2, price and s2.bar; sometimes one gets a solution with both nleqslv and BB, but not always.
In the case of package nleqslv, the default method will not always work.
Since the package provides several methods, you should use its testnslv function to see whether any of them finds a solution, as in the sketch below.
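To illustrate, here is a sketch with randomly generated data (the values are invented by me, and, as noted, a solution may not exist for arbitrary inputs):

# Sketch on invented data: with x a vector of the same length as price, the
# system is square, as nleqslv requires; testnslv tries all method/global
# strategy combinations and reports which, if any, terminate at a solution.
library(nleqslv)
set.seed(42)
n <- 5
price <- runif(n, 0.5, 2)
s2 <- runif(n)
s2.bar <- runif(n)
gamma <- function(x) {
  s2 * sum(price^(2 * x + 2)) - s2.bar * sum(price^(2 * x))
}
testnslv(rnorm(n), gamma)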

Pass vector of lambdas to Poisson(), or guidance on idiomatic function composition

I'm trying to learn a little Julia by doing some Bayesian analysis. In Peter Hoff's textbook, he describes a process of sampling from the posterior predictive distribution of a Poisson-Gamma model in which he:
Samples values from the gamma distribution
Samples values from the Poisson distribution, passing a vector of lambdas
Here is what this looks like in R:
a <- 2
b <- 1
sy1 <- 217; n1 <- 111
theta1.mc <- rgamma(1000, a+sy1, b+n1)
y1.mc <- rpois(1000, theta1.mc)
In Julia, I see that distributions can't take a vector of parameters. So, I end up doing something like this:
using Distributions
a = 2
b = 1
sy1 = 217; n1 = 111
theta_mc = rand(Gamma(a+217, 1/(b+n1)), 5000)
y1_mc = map(x -> rand(Poisson(x)), theta_mc)
While I was initially put off by the distribution function not taking a vector and working Just Like R™, I like that I don't need to set my number of samples more than once. That said, I'm not sure I'm doing this idiomatically, either in terms of how people would work with the Distributions package, or more generically in how to compose functions.
Can anyone suggest a better, more idiomatic approach than my example code?
I would usually do something like the following, which uses list comprehensions:
a, b = 2, 1
sy1, n1 = 217, 111
theta_mc = rand(Gamma(a + sy1, 1 / (b + n1)), 1000)
y1_mc = [rand(Poisson(theta)) for theta in theta_mc]
One source of confusion may be that Poisson isn't really a function; it's a type constructor that returns an object. So vectorizing over theta doesn't really make sense, since that wouldn't construct one object but many, which would then require another step to call rand on each generated object.
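As a side note of my own (assuming Distributions.jl and a reasonably recent Julia), dot broadcasting expresses that construct-many-objects-then-sample idea in a single line:

using Distributions
theta_mc = rand(Gamma(2 + 217, 1 / (1 + 111)), 1000)
y1_mc = rand.(Poisson.(theta_mc))  # one Poisson object per theta, one draw from each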

Fitting an inverse function

I have a function which looks like:
g(x) = f(x) - a^b / f(x)^b
g(x) - known function, data vector provided.
f(x) - hidden process.
a,b - parameters of this function.
From the above we get the relation:
f(x) = G^(-1)(g(x)), where G(u) = u - a^b/u^b
My goal is to optimize parameters a and b such that f(x) is as close as possible to a normal distribution. Looking at a normal Q-Q plot of f(x) (attached), my aim is to minimize the distance between f(x) and the straight line representing the normal distribution by optimizing the parameters a and b.
I wrote the below code:
g_fun <- function(x) {x - a^b/x^b}
inverse = function (f, lower = 0, upper = 2000) {
function (y) uniroot((function (x) f(x) - y), lower = lower, upper = upper)[1]
}
f_func = inverse(function(x) g_fun(x))
# let's made up an example
# g(x) values are known
g <- c(-0.016339, 0.029646, -0.0255258, 0.003352, -0.053258, -0.018971, 0.005172,
0.067114, 0.026415, 0.051062)
# Calculate f(x) by using the inverse of g(x), when a=a0 and b=b0
for (i in 1:10) {
f[i] <- f_fun(g[i])
}
I have two questions:
How to pass parameters a and b to the functions?
How to perform this optimization task, meaning find a and b such that f(x) would approximate normal distribution.
Not sure how you were able to produce the Q-Q plot, since your provided examples do not work: you are not specifying the values of a and b, and you are defining f_func but calling f_fun. Anyway, here is my answer to your questions:
How to pass parameters a and b to the functions? - Just pass them as arguments to the functions.
How to perform this optimization task, meaning find a and b such that f(x) would approximate normal distribution? - The same way any optimization task is done. Define a cost function, then minimize it.
Here is the revised code: I have added a and b as parameters, removed the inverse helper and incorporated it inside f_func, which can now take vector input, so there is no need for a for loop.
g_fun <- function(x,a,b) {x - a^b/x^b}
f_func = function(y,a,b,lower = 0, upper = 2000){
sapply(y,function(z) { uniroot(function(x) g_fun(x,a,b) - z, lower = lower, upper = upper)$root})
}
# g(x) values are known
g <- c(-0.016339, 0.029646, -0.0255258, 0.003352, -0.053258, -0.018971, 0.005172,
0.067114, 0.026415, 0.051062)
f <- f_func(g,1,1) # using a = 1 and b = 1
#[1] 0.9918427 1.0149329 0.9873386 1.0016774 0.9737270 0.9905320 1.0025893
#[8] 1.0341199 1.0132947 1.0258569
f_func(g,2,10)
#[1] 1.876408 1.880554 1.875578 1.878138 1.873094 1.876170 1.878304 1.884049
#[9] 1.880256 1.882544
Now for the optimization part, it depends on what you mean by f(x) approximating a normal distribution. You can compare the mean square error from the Q-Q line if you want. Also, since you say approximate, how close is good enough? You can go with shapiro.test and keep searching until you find a p-value above 0.05 (beware that there may not be a solution).
shapiro.test(f_func(g,1,2))$p.value
[1] 0.9484821
cost <- function(par) -shapiro.test(f_func(g, par[1], par[2]))$p.value # negated, since optim minimizes and we want a large p-value
Now that we have a cost function, how do we go about minimizing it? There are many different ways to do numerical optimization; take a look at the optim function: http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optim.html.
optim(c(1,1),cost)
This final call can still fail, because uniroot errors out when the trial values of a and b do not bracket a root in [lower, upper]; without proper data and context this is as far as I can go. Hope this helps.
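One possible safeguard (a sketch of my own, not part of the original answer): wrap the Shapiro-Wilk p-value in tryCatch so that parameter values for which f_func errors just receive a large penalty instead of aborting the optimization.

# Sketch: penalize (a, b) values where f_func errors, assuming we want to
# maximize the Shapiro-Wilk p-value (so we minimize its negative).
safe_cost <- function(par) {
  p <- tryCatch(shapiro.test(f_func(g, par[1], par[2]))$p.value,
                error = function(e) NA_real_)
  if (is.na(p)) 1e6 else -p  # large penalty when no root is bracketed
}
optim(c(1, 1), safe_cost)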
