I am trying to solve a system of nonlinear equations with constraints on the unknowns.
Here is my code:
fun <- function(x) {
b0 <- (0.64*1+(1-0.64)*x[1]*(x[2]*x[1]-1)+x[1]*1*(1-x[2])*x[3])/(x[1]-1) -1805*2.85*0.64
b1plus <- (0.64*1+(1-0.64)*x[1]*(x[2]*x[1]-1.01)+x[1]*1.01*(1-x[2])*x[3])/(1.01*(x[1]-1)) -1805*2.85*0.64*(1+0.00235)
b1minus <- (0.64*1+(1-0.64)*x[1]*(x[2]*x[1]-0.99)+x[1]*0.99*(1-x[2])*x[3])/(0.99*(x[1]-1)) -1805*2.85*0.64*(1-0.00235)
return(c(b0,b1plus,b1minus))
}
library(rootSolve)
multiroot(fun, c(1.5, 0, 0))
However, the result I get is far outside the expected range. I would like to constrain x[1] to the range (1.5, 4), x[2] to (0, 1), and x[3] to (0, 10000). How can I do that?
Thank you!!
Methods like 'Newton-Raphson' in multiroot or nleqslv do not work well with bound constraints. One possible approach is to square and sum the components of your function
fun1 <- function(x) sum(fun(x)^2)
and then treat this as a global optimization problem where you hope for a minimum value of 0.0. For example, the GenSA package provides an implementation of generalized simulated annealing that works reasonably well in low dimensions.
library(GenSA)
res <- GenSA(par = NULL, fn = fun1,
             lower = c(1.5, 0, 0), upper = c(4, 1, 10000),
             control = list(maxit = 10e5))
res$value; res$par
## [1] 119.7869
## [1] 4.00 0.00 2469.44
Several tries did not find a lower function value than this one, which makes me think there is no common root in the constraint box you requested.
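As a cross-check (this sketch is my own addition, not part of the original answer), you can polish the GenSA result with a box-constrained local optimizer; if that also stalls well above zero, it supports the conclusion that no root lies inside the box.
res_loc <- optim(res$par, fn = fun1, method = "L-BFGS-B",
                 lower = c(1.5, 0, 0), upper = c(4, 1, 10000))
res_loc$value   # remains far from 0 if no root exists in the box
res_loc$par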
I tried to solve these non-linear equations using nleqslv, but it does not work well. I believe the reason is that I didn't separate the two unknowns onto different sides of the equations.
My questions are:
1. Are there any other packages that could solve equations of this kind?
2. Is there any effective way in R to rearrange the equations so that they meet the requirements of the nleqslv package?
Thank you guys.
Here is the code; p[1] and p[2] are the two unknowns I want to solve for.
dslnex<-function(p){
p<-numeric(2)
0.015=sum(exp(Calib2$Median_Score*p[1]+p[2])*weight_pd_bad)
cum_dr<-0
for (i in 1:length(label)){
cum_dr[i]<-exp(Calib2$Median_Score*p[1]+p[2][1:i]*weight_pd_bad[1:i]/0.015
}
mid<-0
for (i in 1:length(label)){
mid[i]<-sum(cum_dr[1:i])/2
}
0.4=(sum(mid*weight_pd_bad)-0.5)/(0.5*(1-0.015))
}
pstart<-c(-0.000679354,-4.203065891)
z<- nleqslv(pstart, dslnex, jacobian=TRUE,control=list(btol=.01))
Following up on my comment, I have rewritten your function as follows, correcting errors and inefficiencies.
Errors and other changes are given as inline comments.
# no need to use dslnex as name for your function
# dslnex <- function(p){
# any valid name will do
f <- function(p) {
# do not do this
# you are overwriting p as passed by nleqslv
# p<-numeric(2)
# declare return vector
y <- numeric(2)
y[1] <- 0.015 - (sum(exp(Calib2$Median_Score*p[1]+p[2])*weight_pd_bad))
# do not do this
# cum_dr is initialized as a scalar and will be made into a vector
# which will be grown as a new element is inserted (can be very inefficient)
# cum_dr<-0
# so declare cum_dr to be a vector with length(label) elements
cum_dr <- numeric(length(label))
for (i in 1:length(label)){
cum_dr[i] <- exp(Calib2$Median_Score*p[1]+p[2][1:i]*weight_pd_bad[1:i]/0.015)
}
# same problem as above
# mid<-0
mid <- numeric(length(label))
for (i in 1:length(label)){
mid[i]<-sum(cum_dr[1:i])/2
}
y[2] <- 0.4 - (sum(mid*weight_pd_bad)-0.5)/(0.5*(1-0.015))
# return vector y
y
}
pstart <-c(-0.000679354,-4.203065891)
z <- nleqslv(pstart, f, jacobian=TRUE, control=list(btol=.01))
nleqslv is intended for solving systems of equations of the form f(x) = 0, and the system must be square.
So the function must return a vector with the same length as the x-vector.
You should now be able to proceed, provided your system of equations has a solution and there are no further errors in your equations. I have my doubts about the [1:i] in the expression for cum_dr and in the expression for mid[i]. The loop calculating mid can probably be written as a single statement: mid <- cumsum(cum_dr)/2. Up to you.
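To illustrate the "square system of residuals" requirement, here is a minimal, self-contained sketch (my own example, modelled on the one in the nleqslv documentation, and independent of your data): each component is written as "expression minus target" so that the solver looks for f(x) = c(0, 0).
library(nleqslv)
f2 <- function(x) {
    y <- numeric(2)
    y[1] <- x[1]^2 + x[2]^2 - 2          # first equation, rearranged to equal 0
    y[2] <- exp(x[1] - 1) + x[2]^3 - 2   # second equation, rearranged to equal 0
    y
}
nleqslv(c(2, 0.5), f2)$x   # the known root is c(1, 1)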
I am trying to solve a non-linear optimization problem using the function donlp2 in R. My goal is to find the maximum value of the following function:
442.8658*(x1+1)^(0.008752747)*(y1+1)^(0.555782)+(x2+1)^(0.008752747)*(y2+1)^(0.555782)
There are no non-linear constraints. The linear constraints are listed below:
x1+x2<=20000;
y1+y2<=20000;
x1<=4662.41;
x2<=149339;
y1<=14013.94;
y2<=1342738;
x1>=0;
x2>=0;
y1>=0;
y2>=0;
Below is my code:
p <- c(rep(0,4))
par.l <- c(rep(0,4))
par.u <- c(4662.41, 149339, 14013.94, 1342738)
fn <- function(par){
  x1 <- par[1]; y1 <- par[3]
  x2 <- par[2]; y2 <- par[4]
  y <- 1 / (442.8658*(x1+1)^(0.008752747)*(y1+1)^(0.555782)
            + (x2+1)^(0.008752747)*(y2+1)^(0.555782))
}
A <- matrix(c(rep(c(1,0),2), rep(c(0,1),2)), nrow=2)
lin.l <- c(-Inf, 20000)
lin.u <- c(-Inf, 20000)
ret <- donlp2(p, fn, par.u=par.u, par.l=par.l, A=A, lin.l=lin.l, lin.u=lin.u)
I searched and found some related posts saying that donlp2 is only good for finding the minimum value of a function, which is why I took the reciprocal in the objective function.
The code ran without errors, but I have concerns about the results, since I can easily find other values that give a greater objective value, i.e. the minimization of the reciprocal objective is not actually achieved.
I also found that when I change the initial value or the lower bound of x1, x2, y1, y2, the results change dramatically. For example, if I set p=c(rep(0,4)), par.l<-c(rep(1,4)) instead of p=c(rep(0,4)), par.l<-c(rep(0,4)), the results change from
$par
[1] 2.410409e+00 5.442753e-03 1.000000e+04 1.000000e+04
to
$par
[1] 2331.748 74670.025 3180.113 16819.887
Any ideas? I appreciate your suggestions and help!
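For what it's worth, here is a sketch of an alternative transformation (my own addition, not part of the original post): minimizers are usually given the negative of the objective rather than its reciprocal. For a strictly positive objective both have the same argmax, but the reciprocal rescales the problem and can make the optimizer's job harder. Reusing the setup from the question:
fn_neg <- function(par){
  x1 <- par[1]; y1 <- par[3]
  x2 <- par[2]; y2 <- par[4]
  ## minimize the negated objective instead of its reciprocal
  -(442.8658*(x1+1)^(0.008752747)*(y1+1)^(0.555782)
    + (x2+1)^(0.008752747)*(y2+1)^(0.555782))
}
ret_neg <- donlp2(p, fn_neg, par.u=par.u, par.l=par.l, A=A, lin.l=lin.l, lin.u=lin.u)
## the maximum of the original objective is minus the minimized value;
## inspect str(ret_neg) for the component that holds the objective value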
I've been writing some code that iteratively performs binomial draws (using rbinom), and for some argument values the size can end up being large, which causes R (3.1.1; both the official and Homebrew builds were tested, so it is unlikely to be compiler related) to return an unexpected NA. For example:
rbinom(1,2^32,0.95)
is what I'd expect to work, but gives NA back. However, running with size=2^31 or prob≤0.5 works.
The fine manual mentions that inversion is used when size < .Machine$integer.max is false; could this be the issue?
Looking at the source, rbinom does the equivalent (in C code) of the following for such large sizes:
qbinom(runif(n), size, prob, FALSE)
And indeed:
set.seed(42)
rbinom(1,2^31,0.95)
#[1] 2040095619
set.seed(42)
qbinom(runif(1), 2^31, 0.95, F)
#[1] 2040095619
However:
set.seed(42)
rbinom(1,2^32,0.95)
#[1] NA
set.seed(42)
qbinom(runif(1), 2^32, 0.95, F)
#[1] 4080199349
As @BenBolker points out, rbinom returns an integer, and if the return value is larger than .Machine$integer.max (2147483647 on my machine), NA gets returned. In contrast, qbinom returns a double. I don't know why, and it doesn't seem to be documented.
So, it seems like there is at least undocumented behavior and you should probably report it.
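A quick illustration of that point (my own check, not from the original post): the draw is fine as a double, but it cannot be represented as an R integer, so the coercion produces NA.
typeof(rbinom(1, 10, 0.5))               # "integer"
typeof(qbinom(runif(1), 10, 0.5, FALSE)) # "double"
.Machine$integer.max                     # 2147483647
as.integer(4080199349)                   # NA, with a coercion warning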
I agree that (in the absence of documentation saying this is a problem) this is a bug. A reasonable workaround would be to use the Normal approximation, which should be very good indeed (and faster) for such large values. (I originally meant this to be short and simple, but it ended up getting a little out of hand.)
rbinom_safe <- function(n, size, prob, max.size = 2^31) {
    maxlen <- max(length(size), length(prob), n)
    prob <- rep(prob, length.out = maxlen)
    size <- rep(size, length.out = maxlen)
    res <- numeric(n)
    ## identify the draws whose size is too large for the integer-returning rbinom
    bigvals <- size > max.size
    if (nbig <- sum(bigvals > 0)) {
        ## rounded Normal approximation for the large-size draws
        m <- (size*prob)[bigvals]
        sd <- sqrt(size*prob*(1-prob))[bigvals]
        res[bigvals] <- round(rnorm(nbig, mean = m, sd = sd))
    }
    if (nbig < n) {
        ## ordinary rbinom for the rest
        res[!bigvals] <- rbinom(n - nbig, size[!bigvals], prob[!bigvals])
    }
    return(res)
}
set.seed(101)
size <- c(1,5,10,2^31,2^32)
rbinom_safe(5,size,prob=0.95)
rbinom_safe(5,3,prob=0.95)
rbinom_safe(5,2^32,prob=0.95)
The Normal approximation should work reasonably well whenever the mean is many standard deviations away from 0 or N (whichever is closer). For large N this should be OK unless p is very extreme. For example:
n <- 2^31
p <- 0.95
m <- n*p
sd <- sqrt(n*p*(1-p))
set.seed(101)
rr <- rbinom_safe(10000,n,prob=p)
hist(rr,freq=FALSE,col="gray",breaks=50)
curve(dnorm(x,mean=m,sd=sd),col=2,add=TRUE)
dd <- round(seq(m-5*sd,m+5*sd,length.out=101))
midpts <- (dd[-1]+dd[-length(dd)])/2
lines(midpts,c(diff(sapply(dd,pbinom,size=n,prob=p))/diff(dd)[1]),
col="blue",lty=2)
This is the intended behaviour, but there are two issues:
1) The NA induced by coercion should raise a warning
2) The fact that discrete random variables have storage mode integer should be documented.
I have fixed 1) and will modify the documentation to fix 2) when I have a little more time.
I am very new to programming and have been essentially learning by trial and error, but I have reached a problem I do not know how to approach. I need to do a double integration over a triangular area in R. As the usual integrate function doesn't seem able to handle this, I tried using the cubature package (edited; see below for the full code).
Update/Edit:
I've been working on this more and am still coming up against the same issue. I understand that I have to ensure that values are within the appropriate bounds with respect to the asin calculation. However, this still isn't getting around the fundamental problem of the triangular area. Perhaps it will be clearer if I post my full code below:
L <- 25
n <- -4
area <- 30
distances <- L*seq(0.005, 100, 0.05)
cond <- area*pi
d <- 5
fun <- function(x=1, r=0)
{
  if (x < cond) {
    return(0)
  } else {
    return((-1)*((n+2)/(2*pi*(L^2)))*(1+((x/L)^2))^(n/2)*(1/pi)*(1/pi)*acos(d/x)*asin(sqrt((pi*area)/d+r)))
  }
}
fun(5)
fun(300)
library(cubature)
integrationone <- function()
{
integrand <- adaptIntegrate(fun, lowerLimit=c(d,0), upperLimit=c(80,80))
return(integrand$integral)
}
integrationone()
warnings()
From looking at the warning messages, R seems unable to carry out the evaluation of the conditional argument while integrating over x, so I still can't get values for only the exact area I want to integrate over. Does anyone have any ideas or advice?
I don't think the code behind adaptIntegrate will help you understand what is happening. You can type adaptIntegrate at the console to see the code; it is essentially a call to a C routine.
In order to understand what is happening, I think you first need to understand what you are integrating. Try to simplify your function to see its domain of definition.
INV_PI <- 1/pi
## n, L, d, area and r are taken from the surrounding environment
## (in the original function r was a default argument, r = 0)
fun <- function(X){
  scale <- -1*((n+2)/(2*pi*(L^2)))*INV_PI^2 * acos(d/(d+r))
  res <- scale*asin(sqrt((pi*area)/X)) * (1+((X/L)^2))^(n/2)
  sqrt(prod(res))
}
Here there are two terms in X, but only one can cause a problem:
asin(sqrt((pi*area)/X))
asin is defined only on [-1, 1], and sqrt is defined only for non-negative numbers.
So here fun is defined on [pi*area, Inf), and you have to integrate over this domain.
For example:
low.Lim <- pi*area
doubleintegration <- function()
{
integrand <- adaptIntegrate(fun, lowerLimit=c(low.Lim,low.Lim),
upperLimit=c(200*low.Lim,200*low.Lim))
return(integrand$integral)
}
doubleintegration()
[1] 0.1331089
I have many rows and on every row I compute the uniroot of a non-linear function. I have a quad-core Ubuntu machine which hasn't stopped running my code for two days now. Not surprisingly, I'm looking for ways to speed things up ;-)
After some research, I noticed that only one core is currently used and parallelization is the thing to do. Digging deeper, I came to the conclusion (maybe incorrectly?) that the package foreach isn't really meant for my problem because too much overhead is produced (see, for example, SO). A good alternative seems to be multicore for Unix machines. In particular, the pvec function seems to be the most efficient one after I checked the help page.
However, if I understand it correctly, this function only takes one vector and splits it up accordingly. I need a function that can be parallelized, but takes multiple vectors (or a data.frame instead), just like the mapply function does. Is there anything out there that I missed?
Here is a small example of what I want to do: (Note that I include a plyr example here because it can be an alternative to the base mapply function and it has a parallelize option. However, it is slower in my implementation and internally, it calls foreach to parallelize, so I think it won't help. Is that correct?)
library(plyr)
library(foreach)
n <- 10000
df <- data.frame(P = rnorm(n, mean=100, sd=10),
B0 = rnorm(n, mean=40, sd=5),
CF1 = rnorm(n, mean=30, sd=10),
CF2 = rnorm(n, mean=30, sd=5),
CF3 = rnorm(n, mean=90, sd=8))
get_uniroot <- function(P, B0, CF1, CF2, CF3) {
uniroot(function(x) {-P + B0 + CF1/x + CF2/x^2 + CF3/x^3},
lower = 1,
upper = 10,
tol = 0.00001)$root
}
system.time(x1 <- mapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3))
#user system elapsed
#0.91 0.00 0.90
system.time(x2 <- mdply(df, get_uniroot))
#user system elapsed
#5.85 0.00 5.85
system.time(x3 <- foreach(P=df$P, B0=df$B0, CF1=df$CF1, CF2=df$CF2, CF3=df$CF3, .combine = "c") %do% {
get_uniroot(P, B0, CF1, CF2, CF3)})
#user system elapsed
# 10.30 0.00 10.36
all.equal(x1, x2$V1) #TRUE
all.equal(x1, x3) #TRUE
Also, I tried to implement Ryan Thompson's function chunkapply from the SO link above (I only got rid of the doMC part because I couldn't install it; his example still works after adjusting his function), but I didn't get it to work. However, since it uses foreach, I thought the same arguments mentioned above apply, so I didn't try for too long.
#chunkapply(get_uniroot, list(P=df$P, B0=df$B0, CF1=df$CF1, CF2=df$CF2, CF3=df$CF3))
#Error in { : task 1 failed - "invalid function value in 'zeroin'"
PS: I know that I could just increase tol to reduce the number of steps that are necessary to find a uniroot. However, I already set tol as big as possible.
I'd use the parallel package that's been built into R since 2.14 and work with matrices. You can then simply use mclapply like this:
dfm <- as.matrix(df)
result <- mclapply(seq_len(nrow(dfm)),
function(x) do.call(get_uniroot,as.list(dfm[x,])),
mc.cores=4L
)
unlist(result)
This basically does the same thing mapply does, but in parallel.
But...
Mind you that parallelization always incurs some overhead as well. As I explained in the question you link to, going parallel only pays off if your inner function takes significantly longer to compute than the overhead involved. In your case, the uniroot call is pretty fast, so you might consider cutting your data frame into bigger chunks and combining mapply and mclapply. A possible way to do this is:
ncores <- 4
id <- floor(
quantile(0:nrow(df),
1-(0:ncores)/ncores
)
)
idm <- embed(id,2)
mapply_uniroot <- function(id){
tmp <- df[(id[1]+1):id[2],]
mapply(get_uniroot, tmp$P, tmp$B0, tmp$CF1, tmp$CF2, tmp$CF3)
}
result <-mclapply(nrow(idm):1,
function(x) mapply_uniroot(idm[x,]),
mc.cores=ncores)
final <- unlist(result)
This might need some tweaking, but it essentially breaks your df into exactly as many chunks as there are cores and runs mapply on every core. To show this works:
> x1 <- mapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3)
> all.equal(final,x1)
[1] TRUE
It's an old topic, but FYI you now have parallel::mcmapply (see its documentation). Don't forget to set mc.cores; I usually use mc.cores = parallel::detectCores() - 1 to leave one CPU free for OS operations.
x4 <- mcmapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3,mc.cores=parallel::detectCores()-1)
This isn't exactly a best practices suggestion, but considerable speed-up can be had by identifying the root for all parameters in a 'vectorized' fashion. For instance,
bisect <-
function(f, interval, ..., lower=min(interval), upper=max(interval),
f.lower=f(lower, ...), f.upper=f(upper, ...), maxiter=20)
{
nrow <- length(f.lower)
bounds <- matrix(c(lower, upper), nrow, 2, byrow=TRUE)
for (i in seq_len(maxiter)) {
## move lower or upper bound to mid-point, preserving opposite signs
mid <- rowSums(bounds) / 2
updt <- ifelse(f(mid, ...) > 0, 0L, nrow) + seq_len(nrow)
bounds[updt] <- mid
}
rowSums(bounds) / 2
}
and then
> system.time(x2 <- with(df, {
+     PB0 <- B0 - P    # P and B0 enter the equation only through B0 - P
+     f <- function(x, PB0, CF1, CF2, CF3)
+         PB0 + CF1/x + CF2/x^2 + CF3/x^3
+     bisect(f, c(1, 10), PB0, CF1, CF2, CF3)
+ }))
user system elapsed
0.180 0.000 0.181
> range(x1 - x2)
[1] -6.282406e-06 6.658593e-06
versus about 1.3s for application of uniroot separately to each. This also combined P and B0 into a single value ahead of time, since that is how they enter the equation.
The bounds on the final value are +/- diff(interval) * (.5 ^ maxiter) or so. A fancier implementation would replace bisection with linear or quadratic interpolation (as in the reference cited in ?uniroot), but then uniform efficient convergence (and in all cases error handling) would be more tricky to arrange.
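As a quick sanity check on that bound (my own addition): with interval = c(1, 10) and the default maxiter = 20,
diff(c(1, 10)) * 0.5^20
## [1] 8.583069e-06
which is consistent with the range(x1 - x2) reported above.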