CPLEX OPL run doesn't answer when I add a simple constraint

I have a small project with CPLEX OPL. My model has almost 40 constraints. It works properly and gives an objective function value of 90. The objective function is:
maximize sum(k in konteyner, s in sandik, x in ex) sx[k][s][x]+
sum(k in konteyner, s in sandik, y in vay) sy[k][s][y]+
sum(k in konteyner, s in sandik, z in zed) sz[k][s][z];
In this situation my model works with all the other constraints. However, when I add a constraint that shouldn't constrain anything, the model runs but doesn't give an answer; in the end it just says "OPL run doesn't answer." Yet this constraint cannot affect anything.
The added constraint is below:
forall(s1 in sandik, s2 in sandik, k in konteyner, x in ex, y in vay, z in
zed: s1 < s2)
{
sx[k][s1][x] + sy[k][s1][y] + sz[k][s1][z] + sx[k][s2][x] + sy[k][s2][y] +
sz[k][s2][z] <= 99999999 ;
}
Note: sx,sy,sz are boolean decision variables.
Note 2: Normally the value in the last constraint isn't 99999999; I made it that large just so it wouldn't constrain anything for now.
Note 3: The value is normally 5. When I set it to 5, it still doesn't work.
Thank you for your answers.


How to solve an equation y = ax^2 + bx + c when x is unknown and y is known

I have this equation:
y = -0.00248793*x^2+20.77173764*x-371.01805798
and I would like to obtain x when I supply values of "y".
edited explanation 2/06/20:
I want to pass a vector as my "y", and receive a vector as output as well.
This problem is a biological one: I performed a cytokine bead array (CBA) and established a reference curve, which is sigmoidal.
After establishing the degree of the equation by fitting the following:
fitil6_1=lm(Standards$`IL6 median`~poly(concentration,1,raw=TRUE))
fitil6_2=lm(Standards$`IL6 median`~poly(concentration,2,raw=TRUE))
fitil6_3=lm(Standards$`IL6 median`~poly(concentration,3,raw=TRUE))
fitil6_4=lm(Standards$`IL6 median`~poly(concentration,4,raw=TRUE))
lines(concentration,predict(fitil6_1,data.frame(x=concentration)),col="red")
lines(concentration,predict(fitil6_2,data.frame(x=concentration)),col="green")
lines(concentration,predict(fitil6_3,data.frame(x=concentration)),col="blue")
lines(concentration,predict(fitil6_4,data.frame(x=concentration)),col="purple")
legend(20,40000,legend=c("de grau 1","de grau 2","de grau 3","de grau 4"),lty=1,col=c("red","green","blue","purple"))
I have chosen the degree-2 formula as it fits my points best for this cytokine (and for most cytokines in this study).
So when I run
coef(fitil6_2)
(Intercept) poly(concentration, 2, raw = TRUE)1 poly(concentration, 2, raw = TRUE)2
-8.262381e+02 2.371377e+01 -2.847135e-03
I receive that output and then I am able to build the formula (in this case):
y=-2.847135e-03 *x^2+2.371377e+01*x-8.262381e+02
but as it is the y value that I know, it's pretty difficult to isolate x!
(end of the editing)
I have tried many things, like writing function(x, y), but when you specify this you need to give a value for y, so really I am a little bit lost!
Thank you
As @Dave2e said, you can solve this particular example by algebra. But you might need a programmatic solution, or you might be using the quadratic just as an easy example. In which case...
Rewrite your problem as "what value of x satisfies -0.00248793*x^2+20.77173764*x-371.01805798 - y = 0?".
There are plenty of ways to find the zeroes of a function, and that's what you've turned your problem into. Suppose your known value of y is 10...
f <- function(x, y) {
-0.00248793*x^2+20.77173764*x-371.01805798 - y
}
answer <- stats::uniroot(f, interval=c(0, 50), y=10)
# Check we've got the right answer
f(answer$root, 10)
Giving
[1] -1.186322e-10
Using this method, you do need to find or guess a range within which the answer might lie. That's the purpose of the interval=c(0, 50) part of the call to uniroot. You can read the online help for more information about the value returned by uniroot and things you might want to look out for.
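The edited question asks for a whole vector of y values at once; the same root-finding idea carries over by just looping over that vector. Here is a stdlib-only sketch in Python (for readers outside R), using bisection on f(x) - y with the quadratic from the question and an assumed search interval of [0, 50]:

```python
def f(x):
    # Quadratic from the question
    return -0.00248793 * x**2 + 20.77173764 * x - 371.01805798

def solve_for_x(y, lo=0.0, hi=50.0, tol=1e-10):
    """Bisection on f(x) - y over [lo, hi]; assumes f crosses y in there."""
    g = lambda x: f(x) - y
    assert g(lo) * g(hi) < 0, "no sign change in the interval"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# A vector of known y values gives a vector of x values
ys = [10, 100, 250]
xs = [solve_for_x(y) for y in ys]
```

Each returned x can be checked by plugging it back into f; in R the analogous move is wrapping the uniroot call in sapply over the y vector.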
Thank you to all who answered; I have just started on this site. This worked for me:
solving for x with the quadratic formula, after moving y into the constant term:
delta <- function(y, a, b, c) {
  k <- (-b + sqrt(b^2 - 4*a*(c - y)))/(2*a)
  print(k)
}
delta(citoquines_valero$`IFNg median`,-1.957128e-03,1.665741e+01,-7.522327e+02)
#I will use that one as a provisional solution.
#I have also been told to use this; as posted it didn't run, so here it is with the if/else structure fixed:
result <- function(y, a, b, c) {
  # Discriminant of a*x^2 + b*x + (c - y) = 0
  delta <- function(y, a, b, c) {
    b^2 - 4*a*(c - y)
  }
  d <- delta(y, a, b, c)
  if (d > 0) {            # first case D > 0: two real roots
    x_1 <- (-b + sqrt(d))/(2*a)
    x_2 <- (-b - sqrt(d))/(2*a)
    if (x_1 >= 0) {
      return(x_1)
    } else if (x_2 >= 0) {
      return(x_2)
    }
  } else if (d == 0) {    # second case D = 0: one real root
    return(-b/(2*a))
  } else {                # third case D < 0
    return("There are no real roots.")
  }
}

Optimizing for global minimum

I am attempting to use optimize() to find the minimum value of n for the following function (Clopper-Pearson lower bound):
f <- function (n, p=0.5)
(1 + (n - p*n + 1) /
(p*n*qf(p= .025, df1= 2*p, df2= 2*(n - p + 1))))^-1
And here is how I attempted to optimize it:
n_clop <- optimize(f, c(300,400), maximum = FALSE, p=0.5)
n_clop
I did this over the interval [300, 400] because I suspect the value lies within it, but ultimately I would like to do the optimization between 0 and infinity. The command seems to be finding a local minimum: whatever interval I give, it returns the lower bound of that interval as the minimum, which is not what I expect from Clopper-Pearson. So, my two questions are how to properly find a global minimum in R, and how to do so over any interval?
I've very briefly looked over the Wikipedia page you linked and don't see any obvious typos in your formula (although I feel like it should be 0.975=1-alpha/2 rather than 0.025=alpha/2?). However, evaluating the function you've coded over a very broad scale suggests that there are no local minima that are messing you up. My strong guess would be that either your logic is wrong (i.e., n->0 is really the right answer) or that you haven't coded what you think you're coding, due to a typo (possibly in the Wikipedia article, although that seems unlikely) or a thinko.
f <- function (n, p=0.5)
(1 + (n - p*n + 1) /
(p*n*qf(p= .025, df1= 2*p, df2= 2*(n - p + 1))))^-1
Confirm that you're getting the right answer for the interval you chose:
curve(f(x), 300, 400)
Evaluating over a broad range (n = 0.00001 to 1000000):
curve(f(10^x), -5, 7)
As @MrFlick suggests, global optimization is hard. You could start with optim(..., method="SANN"), but the best answer is definitely case-specific.
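To make the broad-scan advice concrete, here is a generic "coarse grid, then local refine" sketch in Python (stdlib only; the function at the bottom is a hypothetical stand-in with a known minimum at x = 3, not the Clopper-Pearson bound):

```python
import math

def grid_then_refine(f, lo_exp, hi_exp, n_grid=200, tol=1e-8):
    """Scan f on a log-spaced grid over [10**lo_exp, 10**hi_exp], then run a
    golden-section search on the bracket around the best grid point."""
    xs = [10 ** (lo_exp + (hi_exp - lo_exp) * i / (n_grid - 1))
          for i in range(n_grid)]
    i = min(range(n_grid), key=lambda j: f(xs[j]))
    a = xs[max(i - 1, 0)]
    b = xs[min(i + 1, n_grid - 1)]
    gr = (math.sqrt(5) - 1) / 2          # golden-ratio conjugate
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

# Stand-in objective, unimodal with its minimum at x = 3
best = grid_then_refine(lambda x: (math.log10(x) - math.log10(3)) ** 2, -2, 4)
```

The coarse scan plays the role of the curve() plots above: it tells you which bracket is worth refining, which is exactly what a blind call to optimize() over an arbitrary interval cannot do.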

Prolog Recursion (Factorial of a Power Function)

I am having some trouble with my CS assignment. I am trying to call a rule that I created previously from within a new rule that will calculate the factorial of a power function (e.g. Y = (X^N)!). I think the problem with my code is that Y in exp(Y,X,N) is not carrying over when I call factorial(Y,Z), but I am not entirely sure. I have been trying to find an example of this, but I haven't been able to find anything.
I am not expecting an answer since this is homework, but any help would be greatly appreciated.
Here is my code:
/* 1.2: Write recursive rules exp(Y, X, N) to compute mathematical function Y = X^N, where Y is used
to hold the result, X and N are non-negative integers, and X and N cannot be 0 at the same time
as 0^0 is undefined. The program must print an error message if X = N = 0.
*/
exp(_,0,0) :-
write('0^0 is undefined').
exp(1,_,0).
exp(Y,X,N) :-
N > 0, !, N1 is N - 1, exp(Y1, X, N1), Y is X * Y1.
/* 1.3: Write recursive rules factorial(Y,X,N) to compute Y = (X^N)! This function can be described as the
factorial of exp. The rules must use the exp that you designed.
*/
factorial(0,X) :-
X is 1.
factorial(N,X) :-
N> 0, N1 is N - 1, factorial(N1,X1), X is X1 * N.
factorial(Y,X,N) :-
exp(Y,X,N), factorial(Y,Z).
The Z variable in factorial/3 is mentioned only once (a so-called 'singleton variable'), so whatever it gets unified with can never be used anywhere else.
As noted in the comments under the question, short-circuiting it to _ won't work either; you have to unify it with a sensible value. Ask yourself what you want to compute, and link the head of the clause to exp and factorial through parameters, i.e. introduce a parameter "in the middle" that is not mentioned in the head.
Edit: I'll rename your variables for you; maybe you'll see more clearly what you did:
factorial(Y,X,Result) :-
exp(Y,X,Result), factorial(Y,UnusedResult).
now you should see what your factorial/3 really computes, and how to fix it.

OpenMDAO 1.2.0 implicit component

I'm new to OpenMDAO and I'm still learning how to formulate problems.
For a simple example, let's say I have 3 input variables with the given bounds:
1 <= x <= 10
0 <= y <= 10
1 <= z <= 10
and I have 4 outputs, defined as:
f1 = x * y
f2 = 2 * z
g1 = x + y - 1
g2 = z
My goal is to minimize f1 * g1 while enforcing the constraints f1 = f2 and g1 = g2. For example, one solution is x=3, y=4, z=6 (no idea if this is optimal).
For this simple problem, you can probably just feed the output equality constraints to the driver. However, for my actual problem it's hard to find an initial starting point that satisfies all the constraints, and as a result the optimizer fails to make any progress. I figure maybe I could define y and z as states in an implicit component and have a nonlinear solver figure out the right values of y and z given x, then feed x to the optimization driver.
Is this a possible approach? If so, what would the implicit component look like in this case? I looked at the Sellar problem tutorial but I wasn't able to translate it to this case.
You could create an implicit component if you want. In that case, you would define an apply_nonlinear method in your component to compute the residuals. That is done for the Sellar problem here.
In your case since you have a 2 equation set of residuals which are both dependent on the state variables, I suggest you create a single array state variable of length 2, call it foo (I used a new variable to avoid any confusion, but name it whatever you want!). Then you will define two residuals, one for each element of the residual array of the new state variable.
Something like:
resids['foo'][0] = params['x'] * unknowns['foo'][0] - 2 * unknowns['foo'][1]
resids['foo'][1] = params['x'] + unknowns['foo'][0] - 1 - unknowns['foo'][1]
If you wanted to keep the state variable names separate you could, and it will still work. You'll just have to arbitrarily assign one residual equation to one variable and one to the other.
Then the only thing left is to add a nonlinear solver to the group containing your implicit component, and it should work. If you choose to use a Newton solver, you'll either need to set fd_options['force_fd'] = True or define the derivatives of your residuals with respect to all params and state variables.
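To see that the two residuals above really do pin down the states for a given x, here is a small stand-alone check in plain Python (not the OpenMDAO API; foo[0] plays the role of y and foo[1] the role of z, and the Newton loop is what a nonlinear solver would do for you):

```python
def residuals(x, foo):
    # r0: f1 - f2 = x*y - 2*z ; r1: g1 - g2 = x + y - 1 - z
    return [x * foo[0] - 2 * foo[1],
            x + foo[0] - 1 - foo[1]]

def solve_states(x, foo=(1.0, 1.0), iters=50):
    """Newton's method on the 2x2 residual system; the Jacobian is the
    constant matrix [[x, -2], [1, -1]], inverted here via Cramer's rule."""
    y, z = foo
    for _ in range(iters):
        r0, r1 = residuals(x, (y, z))
        det = x * (-1) - (-2) * 1          # det of [[x, -2], [1, -1]]
        if abs(det) < 1e-12:
            raise ZeroDivisionError("singular Jacobian (x = 2)")
        # Solve J * [dy, dz] = [-r0, -r1]
        dy = (r0 - 2 * r1) / det
        dz = (r0 - x * r1) / det
        y, z = y + dy, z + dz
    return y, z
```

For x = 3 this converges to y = 4 and z = 6, the example solution given in the question, so the residual formulation is consistent with the original constraints.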

Minimization with constraint on all parameters in R

I want to minimize a simple linear function Y = x1 + x2 + x3 + x4 + x5 using ordinary least squares with the constraint that the sum of all coefficients has to equal 5. How can I accomplish this in R? All of the packages I've seen seem to allow constraints on individual coefficients, but I can't figure out how to set a single constraint affecting all coefficients jointly. I'm not tied to OLS; if this requires an iterative approach, that's fine as well.
The basic math is as follows: we start with
mu = a0 + a1*x1 + a2*x2 + a3*x3 + a4*x4
and we want to find a0-a4 to minimize the SSQ between mu and our response variable y.
If we replace the last parameter (say a4) with C - a1 - a2 - a3 to honour the constraint, we end up with a new set of linear equations:
mu = a0 + a1*x1 + a2*x2 + a3*x3 + (C-a1-a2-a3)*x4
= a0 + a1*(x1-x4) + a2*(x2-x4) + a3*(x3-x4) + C*x4
(note that a4 has disappeared ...)
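The substitution is easy to sanity-check numerically. A quick check in Python (the coefficient values, C, and the x values are all arbitrary, chosen only for illustration):

```python
import random

random.seed(1)
a0, a1, a2, a3, C = 0.5, 1.2, -0.7, 2.0, 5.0
a4 = C - a1 - a2 - a3          # enforced by the sum-to-C constraint

for _ in range(5):
    x1, x2, x3, x4 = (random.random() for _ in range(4))
    direct = a0 + a1*x1 + a2*x2 + a3*x3 + a4*x4
    swept = a0 + a1*(x1 - x4) + a2*(x2 - x4) + a3*(x3 - x4) + C*x4
    assert abs(direct - swept) < 1e-12   # identical by construction
```

Since the two expressions agree for any inputs, fitting the "swept" form with C fixed is the same problem as fitting the original form under the constraint.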
Something like this (untested!) implements it in R.
Original data frame:
d <- data.frame(y=runif(20),
x1=runif(20),
x2=runif(20),
x3=runif(20),
x4=runif(20))
Create a transformed version where all but the last column have the last column "swept out", e.g. x1 -> x1-x4; x2 -> x2-x4; ...
dtrans <- data.frame(y=d$y,
sweep(d[,2:4],
1,
d[,5],
"-"),
x4=d$x4)
Rename to tx1, tx2, ... to minimize confusion:
names(dtrans)[2:4] <- paste("t",names(dtrans[2:4]),sep="")
Sum-of-coefficients constraint:
constr <- 5
Now fit the model with an offset:
lm(y~tx1+tx2+tx3,offset=constr*x4,data=dtrans)
It wouldn't be too hard to make this more general.
This requires a little more thought and manipulation than simply specifying a constraint to a canned optimization program. On the other hand, (1) it could easily be wrapped in a convenience function; (2) it's much more efficient than calling a general-purpose optimizer, since the problem is still linear (and in fact one dimension smaller than the one you started with). It could even be done with big data (e.g. biglm). (Actually, it occurs to me that if this is a linear model, you don't even need the offset, although using the offset means you don't have to compute a0=intercept-C*x4 after you finish.)
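A stripped-down two-coefficient version shows the mechanics outside R (plain Python, made-up noiseless data): minimize sum((y - a1*x1 - a2*x2)^2) subject to a1 + a2 = C. Substituting a2 = C - a1 leaves a one-dimensional least-squares problem with a closed form:

```python
def constrained_fit(y, x1, x2, C):
    """Fit y ~ a1*x1 + a2*x2 with a1 + a2 = C (no intercept, for brevity).
    Substituting a2 = C - a1 gives u = a1*v with u = y - C*x2, v = x1 - x2,
    so a1 is an ordinary one-variable least-squares slope."""
    u = [yi - C * x2i for yi, x2i in zip(y, x2)]
    v = [x1i - x2i for x1i, x2i in zip(x1, x2)]
    a1 = sum(ui * vi for ui, vi in zip(u, v)) / sum(vi * vi for vi in v)
    return a1, C - a1

# Made-up data generated without noise from a1 = 3, a2 = 2 (so C = 5)
x1 = [0.1, 0.5, 0.9, 1.3, 2.0]
x2 = [1.0, 0.2, 0.7, 0.4, 0.6]
y = [3 * a + 2 * b for a, b in zip(x1, x2)]
a1, a2 = constrained_fit(y, x1, x2, C=5.0)
```

Because the data were generated exactly under the constraint, the fit recovers a1 = 3 and a2 = 2; with noisy data the same closed form gives the constrained least-squares solution.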
Since you said you are open to other approaches, this can also be solved in terms of a quadratic programming (QP):
Minimize a quadratic objective: the sum of the squared errors,
subject to a linear constraint: your weights must sum to 5.
Assuming X is your n-by-5 matrix and Y is a vector of length n, this would solve for your optimal weights:
library(limSolve)
lsei(A = X,
B = Y,
E = matrix(1, nrow = 1, ncol = 5),
F = 5)
