I am dealing with a constrained GMM problem in MATLAB. I find that some equality constraints have zero Lagrange multipliers. However, I want these equality constraints to be satisfied.
Are there any solutions?
Besides, may I introduce an error term into the equality constraints and make the objective the sum of two GMM criteria (one being the original GMM objective, the other built from the error in the equality constraints)? I did not find any references; perhaps I was googling with the wrong keywords. Any suggestions?
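In other words, I have in mind something like the following penalized formulation (just a sketch; the weighting matrices $W$, $V$ and the penalty weight $\mu$ are choices I would still have to make):

$$\min_{\theta,\,\varepsilon}\; g(\theta)^{\top} W\, g(\theta) \;+\; \mu\,\varepsilon^{\top} V\,\varepsilon \quad \text{subject to} \quad h(\theta) = \varepsilon,$$

where $g(\theta)$ are the original moment conditions and $h(\theta) = 0$ are the equality constraints. Substituting $\varepsilon = h(\theta)$, the objective becomes $g(\theta)^{\top} W g(\theta) + \mu\, h(\theta)^{\top} V h(\theta)$, i.e. the original GMM criterion plus a quadratic penalty on the constraint violation.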
Thank you so much! I really appreciate it!
The autocorrelation command acf is slightly off the correct value. The command I am giving is acf(X, lag.max = 1) and the output is 0.881, while the same autocorrelation calculation done using the command cor(X[1:41], X[2:42]) gives the value 0.9452. I searched a lot to understand whether there is some syntax I am missing or whether the two computations have some fundamental difference, but could not find suitable resources. Please help.
The equation for autocorrelation used by acf() and in all time series textbooks is
$$r_k = \frac{\sum_{t=k+1}^{T} (y_t - \bar{y})(y_{t-k} - \bar{y})}{\sum_{t=1}^{T} (y_t - \bar{y})^2},$$
where $\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t$ is the mean of all $T$ observations.
See https://otexts.com/fpp2/autocorrelation.html for example.
The equation for regular correlation, applied to the pairs $(y_t, y_{t-k})$, is slightly different:
$$r_k = \frac{\sum_{t=k+1}^{T} (y_t - \bar{y}_{(1)})(y_{t-k} - \bar{y}_{(2)})}{\sqrt{\sum_{t=k+1}^{T} (y_t - \bar{y}_{(1)})^2}\;\sqrt{\sum_{t=k+1}^{T} (y_{t-k} - \bar{y}_{(2)})^2}},$$
where $\bar{y}_{(1)}$ is the mean of the last $T-k$ observations and $\bar{y}_{(2)}$ is the mean of the first $T-k$ observations.
The simpler formula is better because it uses all available data in the denominator.
Note also that you might expect the top formula to include the multiplier $T/(T-k)$ given the different number of terms in the summations. But this is not included to ensure that the corresponding autocorrelation matrix is nonnegative definite.
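To see the difference numerically, here is a small R sketch (the series X below is simulated placeholder data, not your 42 observations):

```r
# Compare acf()'s lag-1 estimate with a hand computation and with cor()
set.seed(1)
X <- cumsum(rnorm(42))              # placeholder series; substitute your own data
n <- length(X)
m <- mean(X)                        # mean of ALL n observations

# acf-style estimate: full-sample mean, full-sample denominator
r1_hand <- sum((X[2:n] - m) * (X[1:(n - 1)] - m)) / sum((X - m)^2)

# built-in acf(): matches r1_hand
r1_acf <- acf(X, lag.max = 1, plot = FALSE)$acf[2]

# ordinary correlation of the two shifted subseries: each subseries uses its
# own mean and a different denominator, so the value generally differs
r1_cor <- cor(X[1:(n - 1)], X[2:n])

c(hand = r1_hand, acf = r1_acf, cor = r1_cor)
```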
I am working on a problem which has two linear constraints, including one equality constraint. I am using the SPEA2 algorithm. The constraints are given below.
I have tried the penalty function approach but had difficulty selecting the penalty parameters. Secondly, I have used the constrained dominance relation approach but again could not get feasible solutions. Please advise...
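For reference, by "constrained dominance relation" I mean, roughly, Deb's feasibility rule; here is a sketch of the comparison I implemented (f, g and h below are hypothetical stand-ins for my objective vector, inequality constraint and equality constraint):

```r
# Sketch of the constrained dominance (feasibility) rule used for comparisons
violation <- function(x, g, h, eps = 1e-6) {
  # total constraint violation for g(x) <= 0 and h(x) = 0 (tolerance eps)
  sum(pmax(0, g(x))) + sum(pmax(0, abs(h(x)) - eps))
}

constrained_dominates <- function(x, y, f, g, h) {
  vx <- violation(x, g, h)
  vy <- violation(y, g, h)
  if (vx == 0 && vy > 0) return(TRUE)      # feasible beats infeasible
  if (vx > 0 && vy == 0) return(FALSE)
  if (vx > 0 && vy > 0)  return(vx < vy)   # both infeasible: smaller violation wins
  all(f(x) <= f(y)) && any(f(x) < f(y))    # both feasible: usual Pareto dominance
}
```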
Recently, I constructed a statistical model for which the negative log-likelihood is to be minimized. There are nine parameters to estimate (in fact I want to add two more later). Several optimization methods in R have been used, including optim, GenSA, DEoptim, and Solnp, and I obtained a satisfactory minimum.
In the next step, to compute t-values, it is necessary to compute the standard errors:
sqrt(diag(solve(hessian)))
However, an error occurs because the Hessian matrix is not positive semi-definite, so negative numbers appear among the main diagonal elements. I have tried optimHess and numericHessian to compute the Hessian differently (the resulting Hessians differ) but failed all the same. The work is stuck.
I think this problem is common in multi-parameter statistics. How should I proceed in this situation?
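For reference, the intended workflow is roughly the following (nll and theta0 are hypothetical names for my negative log-likelihood and starting values):

```r
# Sketch of the intended t-value computation
fit  <- optim(theta0, nll, method = "BFGS", hessian = TRUE)
se   <- sqrt(diag(solve(fit$hessian)))   # this is the step that fails
tval <- fit$par / se
```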
There is a paper by Jeff Gill and Gary King discussing this issue; it may help. Essentially, even though in theory the Hessian should be positive definite at the minimum, because of numerical issues it may not be. The paper discusses methods for dealing with such matrices.
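As a pragmatic stopgap (not exactly the generalized inverse/generalized Cholesky approach of that paper, but in the same spirit), you can repair the Hessian numerically before inverting it; a rough R sketch, where H is your numerical Hessian:

```r
# Two common numerical repairs for a non-positive-definite Hessian H
library(MASS)    # ginv(): Moore-Penrose pseudo-inverse
library(Matrix)  # nearPD(): nearest positive-definite matrix

se_from_hessian <- function(H) {
  H_pd    <- as.matrix(nearPD(H)$mat)     # repair 1: project onto nearest PD matrix
  se_near <- sqrt(diag(solve(H_pd)))

  diag_gi <- diag(ginv(H))                # repair 2: pseudo-inverse of H
  se_ginv <- sqrt(pmax(diag_gi, 0))       # guard against small negative entries

  list(nearPD = se_near, ginv = se_ginv)
}
```

Treat the resulting standard errors with caution: if the Hessian is far from positive definite, the model may be poorly identified and no numerical repair will make the t-values trustworthy.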
It has happened many times that the values of design variables go out of their bounds (for example, with a lower bound of 0.0, a design variable was set to -0.004 by the optimizer), and the constraints seem to be ignored (for example, a minimum constraint of 1.0 on an output variable was not satisfied).
I am using OpenMDAO version 1.6.4 with ScipyOptimizer, SLSQP, force_fd, and step_size = 1.0e-4.
Any ideas why these things happen? Am I getting the settings wrong? Is it possibly a bug? If not, how can I avoid this?
Any feedback is appreciated.
Without seeing any code, it's hard to know for sure, but SLSQP has been known to violate variable bounds, especially when design variables are poorly scaled. Try scaling things so your design variables vary between 0 and 1. That should help it work better.
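Concretely, the generic linear rescaling (independent of any particular OpenMDAO setting) replaces each design variable $x$ with

$$\hat{x} = \frac{x - x_{\mathrm{lb}}}{x_{\mathrm{ub}} - x_{\mathrm{lb}}} \in [0,\,1], \qquad x = x_{\mathrm{lb}} + \hat{x}\,(x_{\mathrm{ub}} - x_{\mathrm{lb}}),$$

so the optimizer works with $\hat{x}$ while your model still sees the physical value $x$.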
It took me quite some time to re-run my code with scaled variables.
I got converged results, and the values of all design variables are within their bounds; however, one constraint is still not satisfied. I set a constraint on a parameter to be in [0, 1.5], but the final result is 1.73.
I am thinking of applying a stricter convergence criterion and re-running my code to see whether the constraint will then be satisfied.
Do you think a tighter convergence tolerance will help satisfy all the constraints? Your advice will be appreciated.
I'm using IPOPT within Julia. My objective function will throw an error for certain parameter values (specifically, though I assume this doesn't matter, it involves a Cholesky decomposition of a covariance matrix and so requires that the covariance matrix be positive definite). As such, I nonlinearly constrain the parameters so that they cannot produce an error. Despite this constraint, IPOPT still insists on evaluating the objective function at parameters which cause it to throw an error. This causes my script to crash, resulting in misery and pain.
I'm interested in why, in general, IPOPT would evaluate the objective function at parameters that breach the constraints. (I've ensured that it is indeed checking the constraints before evaluating the function.) If possible, I would like to know how I can stop it from doing this.
I have set IPOPT's 'bound_relax_factor' parameter to zero; this doesn't help. I understand I could ask the objective function to return NaN instead of throwing an error, but when I do IPOPT seems to get even more confused and does not end up converging. Poor thing.
I'm happy to provide some example code if it would help.
Many thanks in advance :):)
EDIT:
A commenter suggested I ask my objective function to return a bad objective value when the constraints are violated. Unfortunately this is what happens when I do:
I'm not sure why Ipopt would go from a point evaluating at $2.0016 \times 10^2$ to a point evaluating at $10^{10}$; I worry there's something quite fundamental about IPOPT I'm not understanding.
Setting 'constr_viol_tol' and 'acceptable_constr_viol_tol' to their minimal values doesn't noticeably affect the optimisation, nor does 'over-constraining' my parameters (i.e. ensuring they cannot be anywhere near an unacceptable value).
The only constraints that Ipopt is guaranteed to satisfy at all intermediate iterations are simple upper and lower bounds on variables. Any other linear or nonlinear equality or inequality constraint will not necessarily be satisfied until the solver has finished converging at the final iteration (if it can get to a point that satisfies termination conditions). Guaranteeing that intermediate iterates are always feasible in the presence of arbitrary non-convex equality and inequality constraints is not tractable. The Newton step direction is based on local first and second order derivative information, so will be an approximation and may leave the space of feasible points if the problem has nontrivial curvature. Think about the space of points where x * y == constant as an example.
You should reformulate your problem to avoid needing to evaluate objective or constraint functions at invalid points. For example, instead of taking the Cholesky factorization of a covariance matrix constructed from your data, introduce a unit lower triangular matrix L and a diagonal matrix D. Impose lower bound constraints D[i, i] >= 0 for all i in 1:size(D,1), and nonlinear equality constraints L * D * L' == A where A is your covariance matrix. Then use L * sqrtm(D) anywhere you need to operate on the Cholesky factorization (this is a possibly semidefinite factorization, so more of a modified Cholesky representation than the classical strictly positive definite L * L' factorization).
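Written out, the reformulated constraint set is roughly

$$L D L^{\top} = A, \qquad D_{ii} \ge 0 \ \ (i = 1, \dots, n), \qquad L_{ii} = 1,\ \ L_{ij} = 0 \ \text{for } j > i,$$

with $L D^{1/2}$ standing in for the Cholesky factor wherever the objective needs it.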
Note that if your problem is convex, then there is likely a specialized formulation that a conic solver will be more efficient at solving than a general-purpose nonlinear solver like Ipopt.