I tried to minimize the following function:
func <- function(qq) {
  x <- qq[1]
  y <- qq[2]
  output <- 1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2
  return(output)
}
subject to x + y = 1 and 0 <= x, y <= 1. To use gosolnp() from the Rsolnp package, I first defined cons to pass to the eqfun argument:
cons <- function(qq) sum(qq)
Then I applied gosolnp function:
install.packages("Rsolnp")
require(Rsolnp)
res <- gosolnp(fun = func, LB = c(0, 0), UB = c(1, 1), eqfun = cons, eqB = 1)
res$pars
[1] 0.8028775 0.1971225
res$values
[1] 2.606528e-09 -5.551115e-17
The answer should be x = 0 and y = 1, but, as you can verify, every run of gosolnp returns a new point at which func is approximately (not exactly) zero. Mathematica and Maple optimize this function very quickly and give the true answer, x = 0 and y = 1, whereas every run in R gives a new solution, which is not correct.
I also tried other optimization functions, such as spg() (in the BB package) and DEoptim(), but the problem remained unsolved.
So my questions are:
1. Is there any way to minimize func in R?
2. Is there a difference in precision between R and Mathematica, and why can Mathematica give the exact answer while R cannot?
Thank you in advance
If you have two variables x and y with y = 1 - x, then you really have a problem in just one variable, x. Noting that, you can reparametrise your function as
1 - 2 * x + x^2 - 2 * (1 - x) + 2 * x * (1 - x) + (1 - x)^2
and going through the algebra (the constant, x and x^2 terms each cancel: (1 - 2 + 1) + (-2 + 2 + 2 - 2) * x + (1 - 2 + 1) * x^2 = 0) shows that this is constant as a function of x. Thus any value of x in [0, 1] is a solution, and which one your algorithm converges to will basically be random: it depends on numerical roundoff and your choice of starting point.
The fact that gosolnp's returned value is zero to within the limits of numerical precision should have been a tipoff, or even just plotting the curve.
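A quick numerical check along the constraint confirms this (a minimal sketch using func as defined in the question):
# Evaluate func at points satisfying x + y = 1: every value is 0 (up to rounding)
xs <- seq(0, 1, by = 0.1)
sapply(xs, function(x) func(c(x, 1 - x)))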
I can't speak to these particular packages, but nloptr(...) in package nloptr seems to work well:
# Non-Linear Optimization (package::nloptr)
F <- function(v) {
  x <- v[1]
  y <- v[2]
  1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2
}
Hc <- function(v) 1 - sum(v)
library(nloptr)
opt <- nloptr(x0 = c(1/2, 1/2), eval_f = F, lb = c(0, 0), ub = c(1, 1),
              eval_g_eq = Hc,
              opts = list(algorithm = "NLOPT_GN_ISRES", maxeval = 1e6))
opt$solution
# [1] 0.0005506997 0.9994492982
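Since the objective is identically zero on the feasible set, the particular solution returned will vary from run to run; checking the objective value in nloptr's result object makes this clear:
opt$objective
# close to 0 within tolerance; the exact value, like the solution, varies by run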
Once the constraint y = 1 - x is substituted in, your function is identically equal to 0, so there is no point in trying to minimize it.
library(Ryacas)
x <- Sym("x")
y <- 1-x
Simplify( 1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2)
which gives:
expression(0)
I am trying to graph the function ((sin(a*(pi/10)) + x)^2 + (cos(a*(pi/10)) + y)^2 - 1)/(0.7*abs(x)) = y (with a being any value from 1 to 20) in R, but I am struggling because all the R plotting functions seem to require equations in the form y = f(x), with no y inside f(x) and no trigonometric functions applied to y.
I've tried:
curve((((sin(1*(pi/10))+x)^2 + (cos(1*(pi/10))+y)^2 -1)/(0.7*abs(x))))
and
curve((((sin(1*(pi/10))+x)^2 + (cos(1*(pi/10))+y)^2 -1)/(0.7*abs(x)))-y)
where I've specified y as:
y <- seq(-3,3,length=100) (and a=1)
but I get warning messages of:
Warning messages:
1: In (sin(1 * (pi/10)) + x)^2 + (cos(1 * (pi/10)) + y)^2 :
longer object length is not a multiple of shorter object length
2: In (((sin(1 * (pi/10)) + x)^2 + (cos(1 * (pi/10)) + y)^2 - 1)/(0.7 * :
longer object length is not a multiple of shorter object length
and the graphs it produces are not right (I have checked by plotting the equation in Wolfram Alpha).
My problem seems to be the same as the one here: Graphing more complicated trigonometric functions in R (sorry if this isn't the right way to link to other questions; I'm new to Stack Overflow), but it hasn't been satisfactorily answered.
Any help would be much appreciated!
This is an implicit equation, so rather than using curve(), evaluate the difference between the two sides over a grid and trace its zero contour:
a <- 1
f <- function(x, y) {
  y - ((sin(a * (pi/10)) + x)^2 + (cos(a * (pi/10)) + y)^2 - 1) / (0.7 * abs(x))
}
x <- seq(-1.5, 1, len = 200)
y <- seq(-2.5, 0.5, len = 200)
z <- outer(x, y, f)                      # evaluate f over the grid
cr <- contourLines(x, y, z, levels = 0)  # extract the zero-level curve
plot(cr[[1]]$x, cr[[1]]$y, type = "l")
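contourLines() returns one list element per contour segment, so if the zero set splits into several branches the line above draws only the first; a small extension (my addition, using base graphics) draws them all:
plot(range(x), range(y), type = "n", xlab = "x", ylab = "y")
invisible(lapply(cr, function(seg) lines(seg$x, seg$y)))  # draw every branch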
I have the equation a <- x * t - 2 * x and I want to solve it for t.
So basically, set a = 0 and solve for t. I am new to the R packages for solving equations, and I need one that handles complex roots, since the original equations I am working with have real and imaginary roots. I am looking for an algebraic solution only, not a numerical one.
I tried:
a <- x * t - 2 * x
solve(a,t)
I run into an error:
Error in solve.default(a, t) : 'a' (1000 x 1) must be square
You can use Ryacas to get the solution as an expression of x:
library(Ryacas)
x <- Sym("x")
t <- Sym("t")
Solve(x*t-2*x == 0, t)
# Yacas vector:
# [1] t == 2 * x/x
As you can see, the solution is t=2 (assuming x is not zero).
Let's try a less trivial example:
Solve(x*t-2*x == 1, t)
# Yacas vector:
# [1] t == (2 * x + 1)/x
If you want to get a function which provides the solution as a function of x, you can do:
solution <- Solve(x*t-2*x == 1, t)
f <- function(x){}
body(f) <- yacas(paste0("t Where ", solution))$text
f
# function (x)
# (2 * x + 1)/x
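The generated function can then be evaluated numerically; for instance, f(2) should give (2 * 2 + 1)/2:
f(2)
# [1] 2.5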
You might be looking for optimize:
a <- function(x, t) x * t - 2 * x
optimize(a, lower = -100, upper = 100, t = 10)  # minimises over x with t fixed
optimize(a, lower = -100, upper = 100, x = 2)   # minimises over t with x fixed
If you need more help, please provide a reproducible example.
I am trying to implement a vectorized exponentially weighted moving standard deviation using R. Is this the correct approach?
ewma <- function(x, alpha) {
  c(stats::filter(x * alpha, 1 - alpha, "recursive", init = x[1]))
}
ewmsd <- function(x, alpha) {
  sqerror <- na.omit((x - lag(ewma(x, alpha)))^2)
  ewmvar <- c(stats::filter(sqerror * alpha, 1 - alpha, "recursive", init = 0))
  c(NA, sqrt(ewmvar))
}
I'm guessing it's not, since its output differs from that of Python's pandas.Series.ewm().std().
When I run
ewmsd(x = 0:9, alpha = 0.96)
the output is
[1] NA 0.2236068 0.4874679 0.7953500 1.1353903 1.4993855 1.8812961 2.2764708 2.6812160 3.0925367
However, with
pd.Series(range(10)).ewm(alpha = 0.96).std()
the output is
0 NaN
1 0.707107
2 0.746729
3 0.750825
4 0.751135
5 0.751155
6 0.751156
7 0.751157
8 0.751157
9 0.751157
According to the pandas documentation, pandas.Series.ewm() accepts an adjust parameter, which defaults to True. When adjust=True, the exponentially weighted moving average from pandas.Series.ewm().mean() is calculated through explicit weights rather than recursively. Naturally, this affects the standard deviation output as well. See this GitHub issue and this question for more info.
Here's a vectorized solution in R:
ewmsd <- function(x, alpha) {
  n <- length(x)
  sapply(
    1:n,
    function(i, x, alpha) {
      y <- x[1:i]
      m <- length(y)
      weights <- (1 - alpha)^((m - 1):0)       # adjust = True weights
      ewma <- sum(weights * y) / sum(weights)  # weighted moving average
      bias <- sum(weights)^2 / (sum(weights)^2 - sum(weights^2))  # bias correction
      sqrt(bias * sum(weights * (y - ewma)^2) / sum(weights))     # weighted SD
    },
    x = x,
    alpha = alpha
  )
}
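Running this on the question's input reproduces the pandas output above; the first element is NaN because the bias correction divides by zero when there is only one observation:
round(ewmsd(x = 0:9, alpha = 0.96), 6)
#  [1]      NaN 0.707107 0.746729 0.750825 0.751135 0.751155 0.751156 0.751157
#  [9] 0.751157 0.751157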
I'm trying to use R to estimate E[u(X)] where u is a utility function and X is a random variable. More specifically, I want to be able to rank E[u(X)] and E[u(Y)] for two random variables X and Y -- only the ranking matters.
My problem is that u(x) = -exp(-sigma * x) for some sigma > 0, and this converges very rapidly to zero as x grows. So I have many cases where I expect, say, E[u(X)] > E[u(Y)], but because both values are so close to zero, my simulation cannot distinguish them.
Does anyone have any advice for me?
I am only interested in ranking the two expected utilities, so u(x) can be replaced by any u.tilde(x) = a * u(x) + b, where a > 0 and b can be any number.
Below is an example where X and Y are both normal (in which case I think there is a closed form solution, but pretend X and Y have complicated distributions that I can only simulate from).
get.u <- function(sigma = 1) {
  stopifnot(sigma > 0)
  utility <- function(x) {
    return(-exp(-sigma * x))
  }
  return(utility)
}
u <- get.u(sigma = 1)
curve(u, from = 0, to = 10)  # converges very rapidly to zero
n <- 10^4
x <- rnorm(n, 10^4, sd = 10)
y <- rnorm(n, 10^4, sd = 10^3)
mean(u(x)) == mean(u(y))  # returns TRUE (both are 0), but I expect E[u(X)] > E[u(Y)]
## An example of replacing u with a*u + b
get.scaled.u <- function(sigma = 1) {
  stopifnot(sigma > 0)  # risk averse
  utility <- function(x) {
    return(-exp(-sigma * x + sigma * 10^4))
  }
  return(utility)
}
u <- get.scaled.u(sigma = 1)
mean(u(x)) > mean(u(y))  # TRUE, as desired
x <- rnorm(n, 10^4, sd = 10^3)
y <- rnorm(n, 10^4, sd = 2 * 10^3)
mean(u(x)) > mean(u(y))  # FALSE again -- they're both -Inf
Is finding a clever way to scale u the correct way to deal with this problem? For example, suppose X and Y both have bounded support -- if I know the bounds, how can I scale u to guarantee that a*u + b will be neither too close to -Inf, nor too close to zero?
Edit: I didn't know about multiple precision packages. Rmpfr is helpful:
library(Rmpfr)
x.precise <- mpfr(x, 100)  # 100 bits of precision
y.precise <- mpfr(y, 100)
mean(u(x.precise)) > mean(u(y.precise))  # TRUE
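An alternative that avoids extended precision entirely (a sketch of my own, not part of the original post): since E[u(X)] = -E[exp(-sigma * X)], ranking expected utilities is equivalent to ranking log E[exp(-sigma * X)] in reverse, and that quantity can be computed stably with the log-sum-exp trick:
# Stable log(mean(exp(v))): shift by the maximum so exp() never over- or underflows
log_mean_exp <- function(v) {
  m <- max(v)
  m + log(mean(exp(v - m)))
}
sigma <- 1
# E[u(X)] > E[u(Y)]  <=>  log E[exp(-sigma * X)] < log E[exp(-sigma * Y)]
log_mean_exp(-sigma * x) < log_mean_exp(-sigma * y)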
Not for the first time, I suspect the answer is quite simple, but searching for R solutions is often hard work, and after two hours it's probably time to ask someone...
I am working with a non-linear formula (this is only the first version of it; it will actually become non-linear soon), and to test my initial values I would like to simply calculate the fitted values over a series of x values.
Here is some code:
x <- c(1,2,3,4,5,6,7,8,9,10,11,12) #etc
y <- c(NA,332,248,234,84,56,26,24,27,33,37,25) #etc
# This is my formula I shall soon expand
fEst <- y ~ 1 / (x / a + 1) * b
# Initial value
a <- 800
# Initial value based on inverted formula and second measure
b <- y[2] * (x[2] / a + 1)
# Can i use my formula fEst to do this step?
p <- 1 / (x / a + 1) * b
The point is that I am still working on the formula, and it seems strange to make each change twice...
What I found was the package nls2, where something like this was possible, and a function apply.a.formula, which seems to be an element of another package. But as this is a very basic use of a formula, I guess base R already has the appropriate functions. Just... where?
Thanks!
I came across this thread whilst looking up the avenues you'd tried and the solution posted by Gabor. Note that apply.a.formula() is a made-up function name that the OP in that thread was hoping to find a real function for.
Using the example that Gabor provided in the thread this is a solution using the nls2 package:
## your data
x <- c(1,2,3,4,5,6,7,8,9,10,11,12) #etc
y <- c(NA,332,248,234,84,56,26,24,27,33,37,25) #etc
# This is my formula I shall soon expand
fEst <- y ~ 1 / (x / a + 1) * b
# Initial value
a <- 800
# Initial value based on inverted formula and second measure
b <- y[2] * (x[2] / a + 1)
## install.packages("nls2", depend = TRUE) if not installed
require(nls2)
fitted(nls2(fEst, start = c(a = a, b = b), alg = "brute"))
The last line gives:
R> fitted(nls2(fEst, start = c(a = a, b = b), alg = "brute"))
[1] 332.4145 332.0000 331.5866 331.1741 330.7627 330.3524 329.9430 329.5347
[9] 329.1273 328.7210 328.3157 327.9113
attr(,"label")
[1] "Fitted values"
which is essentially the same as what 1 / (x / a + 1) * b would give:
R> 1 / (x / a + 1) * b
[1] 332.4145 332.0000 331.5866 331.1741 330.7627 330.3524 329.9430 329.5347
[9] 329.1273 328.7210 328.3157 327.9113
From the comments, Carl Witthoft notes that if you want to generalise equations like 1 / (x / a + 1) * b, then a function can be a useful way of encapsulating the operation without typing the expression out every time. For example
myeqn <- function(a, b, x) { 1 / (x / a + 1) * b }
R> myeqn(a, b, x)
[1] 332.4145 332.0000 331.5866 331.1741 330.7627 330.3524 329.9430 329.5347
[9] 329.1273 328.7210 328.3157 327.9113