Finding Constraints Matrix in constrOptim (R)

I would like to set the constraints for constrOptim to optimize the following function:
logistic <- function(b, x, target) {
  b1 <- b[1]
  b2 <- b[2]
  log <- function(x) { 1 / (1 + exp(-(b1 + b2 * x))) }
  abs(mean(log(x)) - target)
}
Optimization to target=.75 can generally be done using
optim(c(1, 1), logistic, x = data, target = .75, hessian = TRUE, method = 'SANN')
Method SANN seems necessary, as the function is not differentiable.
The constraint is that b2>0 (condition 1) or b2<0 (condition 2), while b1 can be any real number. But how can I extend this function to constrOptim? Specifically, I do not know how to specify arguments ui and ci.
Alternative approaches to optimization under these constraints are also welcome. Thanks.
EDIT
I think I found a workaround. We can enforce the constraint b2<0 by redefining logistic as
logistic_lt <- function(b, x, target) {
  b1 <- b[1]
  b2 <- b[2]
  if (b2 > 0) { b2 <- b2 * (-1) }
  log <- function(x) { 1 / (1 + exp(-(b1 + b2 * x))) }
  abs(mean(log(x)) - target)
}
Still I'd be interested in a solution involving constrOptim.
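For reference, a minimal, untested sketch of how ui and ci might be specified for the b2>0 case (condition 1): constrOptim enforces ui %*% theta - ci >= 0, so a single constraint row c(0, 1) with ci = 0 restricts b2, and the starting value must lie strictly inside the feasible region. This reuses logistic and data from above.
## constrain b2 > 0 via ui %*% b - ci >= 0 (row c(0, 1), ci = 0);
## with grad = NULL, constrOptim falls back to Nelder-Mead internally
constrOptim(theta = c(1, 1), f = logistic, grad = NULL,
            ui = matrix(c(0, 1), nrow = 1), ci = 0,
            x = data, target = .75)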

Related

Maximizing a function using optim in R where one of the parameters is an integer

I have a function that I need to maximize that contains 3 parameters, one of which is an integer.
How do I let the optim function know to maximize (instead of minimize, which is the default)?
And how do I let it know that one of the parameters is an integer?
Will it work if one of the parameters is binary or categorical?
Max vs min is easy (set fnscale=-1 in the control parameter).
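For instance (a minimal sketch with a hypothetical objective f and starting values):
## flip the sign of the objective so optim maximizes f
optim(par = c(1, 1), fn = f, control = list(fnscale = -1))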
Integer parameters are not easy. I don't know of a simple out-of-the-box solution for this, hopefully someone else does.
Most of the methods implemented in optim assume continuous parameter spaces. (method="SANN" will work since you can give it explicit rules for updating - see the examples - but it's tricky to get it to work efficiently.) Most of the optimizers listed in the Optimization Task View are for continuous optimization - the section on global/stochastic gives the most options for mixed discrete/continuous problems.
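As a rough sketch of the SANN idea (again with a hypothetical objective f, here over two continuous parameters and one integer): for method="SANN" the gr argument is a function that generates the next candidate point, so the integer parameter can be updated with integer-valued moves.
## candidate-generation rule: Gaussian moves for the continuous parameters,
## unit steps for the integer parameter
new_cand <- function(par) {
  c(par[1] + rnorm(1, sd = 0.1),
    par[2] + rnorm(1, sd = 0.1),
    par[3] + sample(c(-1, 0, 1), 1))
}
optim(par = c(1, 1, 5), fn = f, gr = new_cand, method = "SANN",
      control = list(fnscale = -1))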
If the range of plausible integers is reasonably small you can use brute force (i.e., optimize over the two continuous parameters for each of a range of fixed integer values); you could also use bisection search over the integers.
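A minimal sketch of the brute-force approach (hypothetical objective f(par, k) with two continuous parameters in par and a fixed integer k):
## optimize the continuous parameters for each candidate integer k,
## then keep the k whose maximized objective value is largest
ks <- 0:10
fits <- lapply(ks, function(k) optim(c(1, 1), fn = f, k = k,
                                     control = list(fnscale = -1)))
best_k <- ks[which.max(sapply(fits, `[[`, "value"))]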

Recursive arc-length reparameterization of an arbitrary curve

I have a 3D parametric curve defined as P(t) = [x(t), y(t), z(t)].
I'm looking for a function to reparametrize this curve in terms of arc-length. I'm using OpenSCAD, which is a declarative language with no variables (constants only), so the solution needs to work recursively (and with no variables aside from global constants and function arguments).
More precisely, I need to write a function Q(s) that gives the point on P that is (approximately) distance s along the arc from the point where t=0. I already have functions for numeric integration and differentiation that can be incorporated into the answer.
Any suggestions would be greatly appreciated!
P.S. It's not possible to pass functions as parameters in OpenSCAD; I usually get around this by just using global declarations.
The length of an arc sigma between parameter values t=0 and t=T can be computed by solving the following integral:
sigma(T) = Integral[ sqrt[ x'(t)^2 + y'(t)^2 + z'(t)^2 ],{t,0,T}]
If you want to parametrize your curve by arc length, you have to invert this formula. This is unfortunately rather difficult from a mathematical point of view. The simplest method is to implement a bisection method as a numeric solver. The computation quickly becomes heavy, so reusing previous results is ideal. The secant method is also useful, as the derivative of sigma(t) is already known and equals
sigma'(t) = sqrt[ x'(t)^2 + y'(t)^2 + z'(t)^2]
Maybe not really the most helpful answer, but I hope it gives you some ideas. I cannot help you with the OpenSCAD implementation.
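To illustrate the bisection idea (an R sketch only, not OpenSCAD; it assumes a numeric arc-length function sigma computed with your integration routine and monotonically increasing in its argument):
## find T such that sigma(T) is approximately s, by bisection on [0, t_max]
invert_sigma <- function(s, sigma, t_max, tol = 1e-6) {
  lo <- 0
  hi <- t_max
  while (hi - lo > tol) {
    mid <- (lo + hi) / 2
    if (sigma(mid) < s) lo <- mid else hi <- mid
  }
  (lo + hi) / 2
}
## Q(s) is then P(invert_sigma(s, sigma, t_max))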

Change Objective Function in nls.lm() in R

I'm using the function nls.lm {package: minpack.lm} to optimize a parameterisation of a hydrological model. The function is working quite well, but I want to use another objective function (OF). Normally, the objective function "fn" in nls.lm is defined as
A function that returns a vector of residuals, the sum square of which
is to be minimized. The first argument of fn must be par.
Now I want to use the Nash-Sutcliff-Efficiency, which is defined as
NSE <- 1 - (sum((obs - sim)^2) / sum((obs - mean(obs))^2))
or another OF. The problem is that nls.lm minimizes the expression sum(x^2) and only the x is modifiable. I know that the best fit is NSE = 1. Thus 1 - NSE creates a real minimization problem.
BTW: Example 1 from the nls.lm help page is a good example; there
observed - getPred(p,xx)
is minimized, which actually means that
sum((observed - getPred(p,xx))^2)
is minimized by the nls.lm function, where getPred(p,xx) returns sim.
Any suggestion would be helpful. Thanks in advance.
Micha
nls.lm (and the related functions nls and nlsLM) is designed to minimize the sum of squares of the residuals. For the problem you seek to solve, I would try a general-purpose minimizer.
If the problem is 'not too hard' (that is, it has a single global minimum and is smooth), you could try to apply 'optim' to it (I would try the 'Nelder-Mead' and 'BFGS' options first), or the 'bobyqa' function from the package 'minqa', among other functions.
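For example, a minimal sketch of minimizing 1 - NSE with optim (sim_model and start_values are hypothetical placeholders for your hydrological model and its starting parameters):
## objective: 1 - NSE, which is 0 at a perfect fit and grows with misfit
one_minus_nse <- function(par, obs) {
  sim <- sim_model(par)  # hypothetical model returning simulated values
  sum((obs - sim)^2) / sum((obs - mean(obs))^2)
}
fit <- optim(par = start_values, fn = one_minus_nse, obs = obs,
             method = "Nelder-Mead")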
If the problem requires a global optimizer, you could try the 'GenSA' function from package 'GenSA', the 'genoud' function from the package 'rgenoud', or the 'DEoptim' function from package 'DEoptim', among other options. A review on 'Global Optimization in R' is forthcoming in the Journal of Statistical Software, and that will give a more complete overview of applicable functions.

Is this a correct way to find the derivative of the sigmoid function in python?

I came up with this code:
import math

def DSigmoid(value):
    return math.exp(float(value)) / ((1 + math.exp(float(value))) ** 2)
a.) Will this return the correct derivative?
b.) Is this an efficient method?
Friendly regards,
Daquicker
Looks correct to me. In general, two good ways of checking such a derivative computation are:
Wolfram Alpha. Inputting the sigmoid function 1/(1+e^(-t)), we are given an explicit formula for the derivative, which matches yours. To be a little more direct, you can input D[1/(1+e^(-t)), t] to get the derivative without all the additional information.
Compare it to a numerical approximation. In your case, I will assume you already have a function Sigmoid(value). Taking
Dapprox = (Sigmoid(value+epsilon) - Sigmoid(value)) / epsilon
for some small epsilon and comparing it to the output of your function DSigmoid(value) should catch all but the tiniest errors. In general, estimating the derivative numerically is the best way to double check that you've actually coded the derivative correctly, even if you're already sure about the formula, and it takes almost no effort.
In case numerical stability is an issue, there is another possibility: provided that you have a good implementation of the sigmoid available (such as in scipy) you can implement it as:
from scipy.special import expit as sigmoid
def sigmoid_grad(x):
    fx = sigmoid(x)
    return fx * (1 - fx)
Note that this is mathematically equivalent to the other expression.
In my case this solution worked, while the direct implementation caused floating point overflows when computing exp(-x).

How to solve nested ODE equations

We can use the deSolve package in R for ordinary differential equations (ODEs); however, I can't find a way to solve two nested ODE equations. Suppose
b'(t) = beta - k*b(t);
a'(t) = alpha - b(t)*gamma;
where ' means differentiation. How can we solve for a and b then? Since a' is a function of b, we have to solve for a and b simultaneously.
I got an error when I tried to nest the solver for b inside the ODE solution for a:
Error in lsoda(y, times, func, parms, ...) : The used combination of solvers cannot be nested.
I may be confused, but you seem to be describing coupled equations, which lsoda can handle perfectly well, as follows (I implemented your ODEs but made up some parameters since I didn't know what you had in mind.)
gfun <- function(t, y, parms, ...) {
  ## 'with' trick lets us write the gradient in terms of variable/parameter names
  with(as.list(c(y, parms)),
       list(c(b = beta - k*b, a = alpha - b*gamma), NULL))
}
library(deSolve)
L1 <- lsoda(y = c(b = 1, a = 1),
            times = seq(0, 10, by = 0.1),
            func = gfun,
            parms = c(alpha = 0.1, beta = 0.2, gamma = 0.05, k = 0.01))
matplot(L1[,1], L1[,-1], type = "l", lty = 1, bty = "l", las = 1)
PS: this seems to be a set of coupled linear ODEs, so you should actually be able to get a full closed-form solution rather than solving them numerically. (I'm too lazy to do that right now; b(t) can be solved immediately (an "affine" equation), a(t) can be solved by integration.)
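For completeness, a sketch of that closed form (assuming initial conditions b(0) = b0, a(0) = a0, and k != 0):
b(t) = beta/k + (b0 - beta/k)*exp(-k*t)
a(t) = a0 + alpha*t - gamma*( (beta/k)*t + (b0 - beta/k)*(1 - exp(-k*t))/k )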

Resources