What's the difference between indicator constraints "->" and "<->" in an .lp format file?

I am completely new to CPLEX and I am trying to solve a problem with CPLEX 12.9. I don't know the difference between the indicator constraint operators "->" and "<->". Does "->" mean "if ... then" and "<->" mean "if and only if"?
An example is shown below:
i1: x14 = 1 <-> x(0) = 0
i2: x13 = 1 -> x14 = 1

Yes: "->" means "if ... then", and "<->" means equivalence ("if and only if").
For example, if you write in OPL
dvar int x;
dvar boolean b;
subject to
{
  b==(x==1);
}
then in the lp file you will see
i1: x1 = 1 <-> x = 1
and if you write
dvar int x;
dvar boolean b;
subject to
{
  (b==1)=>(x==1);
}
then in the lp file you will see
i2: x1 = 1 -> x3 = 1
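If you happen to drive CPLEX from Python rather than OPL, the same two forms can be generated with the docplex package. This is a minimal sketch under that assumption; the model and variable names are illustrative, not from the original post:

from docplex.mp.model import Model

mdl = Model(name="indicator_demo")  # illustrative model name
x = mdl.integer_var(name="x")
b = mdl.binary_var(name="b")

# One-way implication: b = 1  ->  x = 1
mdl.add_indicator(b, x == 1)

# Equivalence: b = 1  <->  x = 1
mdl.add_equivalence(b, x == 1)

# The exported .lp file contains the "->" and "<->" constraint forms
mdl.export_as_lp("indicator_demo.lp")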


On submission it is giving a runtime error (NZEC)

t = int(input())
while t > 0:
    c = 0
    n, h, y1, y2, e = list(map(int, input().split()))
    for i in range(n):
        x0, x1 = list(map(int, input().split()))
        if x0 == 1:
            if x1 < h - y1:
                e -= 1
        else:
            if y2 < x1:
                e -= 1
        if e > 0:
            c += 1
        else:
            break
    print(c)
    t -= 1
It passes the sample test cases, but on submission it shows a runtime error (NZEC).
Here is the link to the question: https://www.codechef.com/problems/PIPSQUIK
The problem is that you're reading the inputs and processing them simultaneously. In some test cases a situation can arise where e <= 0 while you still have some x0 x1 pairs left to read (i.e. i < n-1). In that case you break out of the for loop because e <= 0, and in the next iteration of the while loop you try to read 5 values with n,h,y1,y2,e = list(map(int, input().split())) but receive only the 2 values x0 x1. That throws ValueError: not enough values to unpack (expected 5, got 2), so not all the test cases pass.
To fix this, read all the inputs first and then process them according to your current logic.
t = int(input())
while t > 0:
    c = 0
    n, h, y1, y2, e = list(map(int, input().split()))
    inputs = []
    for i in range(n):
        inputs.append(list(map(int, input().split())))
    for inp in inputs:
        x0, x1 = inp
        if x0 == 1:
            if x1 < h - y1:
                e -= 1
        else:
            if y2 < x1:
                e -= 1
        if e > 0:
            c += 1
        else:
            break
    print(c)
    t -= 1
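A variant of the same fix (my own sketch, with a hypothetical take helper) is to read all of stdin up front and walk through the tokens, so the parser can never fall out of sync with the line structure:

import sys

data = sys.stdin.read().split()
pos = 0

def take(k):
    # Return the next k integers from the token stream.
    global pos
    vals = list(map(int, data[pos:pos + k]))
    pos += k
    return vals

t, = take(1)
for _ in range(t):
    n, h, y1, y2, e = take(5)
    barriers = [take(2) for _ in range(n)]  # read everything before processing
    c = 0
    for x0, x1 in barriers:
        if x0 == 1:
            if x1 < h - y1:
                e -= 1
        else:
            if y2 < x1:
                e -= 1
        if e > 0:
            c += 1
        else:
            break
    print(c)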

What are the different versions of arithmetic swap and why do they work?

I think we should all be familiar with the arithmetic swap algorithm, which swaps two variables without using a third variable. I have found that there are two variations of the arithmetic swap. Please consider the following:
Variation 1.
int a = 2;
int b = 3;
a = a + b;
b = a - b;
a = a - b;
Variation 2.
int a = 2;
int b = 3;
b = b - a;
a = a + b;
b = a - b;
I want to know why there are two distinct variations of the arithmetic swap and why they work. Are there other variations that achieve the same result, and how are they related? Is there an elegant mathematical formula that justifies why the arithmetic swap works, for all variations? Is there an underlying truth relating these two variations?
Break each variable out as what it represents.
Variation 1:
a = 2
b = 3
a1 = a + b
b1 = a1 - b = (a + b) - b = a
a2 = a1 - b1 = (a + b) - a = b
Variation 2:
a = 2
b = 3
b1 = b - a
a1 = a + b1 = a + (b - a) = b
b2 = a1 - b1 = b - (b - a) = a
There's no underlying truth other than the fact that the math works out. Remember that each time you do an assignment, it's effectively a new "variable" on the math side.
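As a quick check, here is a small Python sketch (my own illustration) that runs both variations; note that in fixed-width integer types a + b can overflow, which Python's arbitrary-precision integers sidestep:

def swap_v1(a, b):
    a = a + b   # a1 = a + b
    b = a - b   # b1 = (a + b) - b = original a
    a = a - b   # a2 = (a + b) - a = original b
    return a, b

def swap_v2(a, b):
    b = b - a   # b1 = b - a
    a = a + b   # a1 = a + (b - a) = original b
    b = a - b   # b2 = a1 - b1 = original a
    return a, b

print(swap_v1(2, 3))  # (3, 2)
print(swap_v2(2, 3))  # (3, 2)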

Why does R.predict.svm return a list of the wrong size?

I am trying to use the R type provider to fit and predict with a Support Vector Machines model. I was able to fit the model, but when I try to predict, the returned vector has the same length as the training vector, which it should not.
I tried the equivalent code directly in R and the returned list has the correct length.
Why is this happening?
Here is an example:
open System
open RDotNet
open RProvider
open RProvider.stats
open RProvider.e1071

// Random number generator
let rng = Random()
let rand () = rng.NextDouble()

// Generate fake X1 and X2
let X1s = [ for i in 0 .. 9 -> 10. * rand () ] // length = 10
let X2s = [ for i in 0 .. 9 -> 5. * rand () ]  // length = 10
let Z1s = [ for i in 0 .. 5 -> 10. * rand () ] // length = 6
let Z2s = [ for i in 0 .. 5 -> 5. * rand () ]  // length = 6

// Build Ys
let Ys = [0; 1; 0; 1; 0; 1; 0; 1; 0; 1]

let XMat =
    ["X1", box X1s; "X2", box X2s]
    |> namedParams
    |> R.cbind

let ZMat =
    ["Z1", box Z1s; "Z2", box Z2s]
    |> namedParams
    |> R.cbind

let svm_model =
    ["x", box XMat; "y", box Ys; "type", box "C"; "gamma", box 1.0]
    |> namedParams
    |> R.svm

let svm_predict = R.predict(svm_model, ZMat)

let res =
    if svm_predict.Type = RDotNet.Internals.SymbolicExpressionType.IntegerVector then
        svm_predict.AsInteger()
        |> List.ofSeq
    else failwithf "Expecting a Numeric but got a %A" svm_predict.Type

printfn "The predicted values are: %A" res
// The predicted values are: [1; 2; 1; 2; 1; 2; 1; 1; 1; 2]
And here is the original R code:
library(stats)
library(e1071)
# Random number generator
x1 <- 10 * rnorm(10)
x2 <- 5 * rnorm(10)
x = cbind(x1, x2)
z1 <- 10 * rnorm(5)
z2 <- 5 * rnorm(5)
z = cbind(z1, z2)
zs <- c(0,1,0,1,0,1,0,1,0,1)
svm_fit = svm(x=x,y=zs,type="C",gamma=1.0)
svm_pred = predict(svm_fit, z)
print(svm_pred)
1 2 3 4 5
1 0 1 1 1
Levels: 0 1
I suspect the issue might be in how parameters are passed to the R.predict function. I'm not an expert on SVMs, so I'm not sure what result this should give, but when I call it as follows, I get results similar to your R version:
let svm_predict =
    namedParams ["object", box svm_model; "newdata", box ZMat]
    |> R.predict
I think what's going on is that the R type provider infers some information about the parameter names of the predict function, but is not able to figure out exactly what the second parameter is, and so rather than passing it as newdata, it passes it as something else.

Constrained Regression in R

I'm using the R type provider from F# to access some regression-related R functionality. I would like to estimate a regression with a constraint on the regression coefficients, so that their weighted average is 0; the weights sum to 1. The example below is simplified, as I have dozens of coefficients with varying weights. I show only the R code:
y1 <- runif(n = 50, min = 0.02, max = 0.05)
y2 <- runif(n = 50, min = 0.01, max = 0.03)
y <- c(y1, y2)
x1 <- c(rep(0, 50), rep(1, 50))
x2 <- c(rep(1, 50), rep(0, 50))
lm(y ~ x1 + x2)
This gives the output of
> lm(y~x1+x2)
Call:
lm(formula = y ~ x1 + x2)
Coefficients:
(Intercept)           x1           x2
    0.03468     -0.01460           NA
as expected. However, I would like to place a constraint on x1 and x2 so that their weighted average is (0.5 * x1 + 0.5 * x2) = 0. In that case the intercept becomes mean(y) = 0.02737966, and the x1 and x2 coefficients show the offset from this value (-0.006 and +0.007 respectively). The packages quadprog and mgcv seem applicable; however, I wasn't able to apply the constraints.
Maybe not exactly an answer to your question, since it asks for doing the optimization in R, but maybe the following helps. It uses the NLopt library, which I think is what R uses under the hood anyway. Let me know if you need help formulating the MLE, but for a linear model with Gaussian assumptions and no endogeneity it should be straightforward enough.
Note that even though LN_COBYLA doesn't use user-supplied gradients, the match ... with pattern in cFunc and oFunc ignores it. I tried LD_LBFGS, but that doesn't support AddEqualZeroConstraint().
[EDIT]
Adding a complete example you can use as a template. It's not idiomatic and quite ugly, but it illustrates the point. However, in this example, the constraints will cause the fit to degenerate. You need NLOptNet, MathNet.Numerics, and FSharp.Charting. Maybe it helps other people looking to do constrained optimization in F#.
open System
open System.IO
open FSharp.Core.Operators
open FSharp.Charting
open MathNet.Numerics
open MathNet.Numerics.LinearAlgebra
open MathNet.Numerics.LinearAlgebra.Double
open MathNet.Numerics.Distributions
open DiffSharp.Numerical.Float64
open NLoptNet

// Matrix operator shorthands
let (.*) (m1 : Matrix<float>) (m2 : Matrix<float>) =
    m1.Multiply(m2)
let (.+) (m1 : Matrix<float>) (m2 : Matrix<float>) =
    m1.Add(m2)
let (.-) (m1 : Matrix<float>) (m2 : Matrix<float>) =
    m1.Subtract(m2)

// Covariance structure for the simulated regressors
let V = matrix [[1.;  0.5; 0.2]
                [0.5; 1.;  0. ]
                [0.2; 0.;  1. ]]

// Simulate correlated data via the Cholesky factor
let dat = (DenseMatrix.init 200 3 (fun i j -> Normal.Sample(0., 1.))) .* V.Cholesky().Factor
let y  = DenseMatrix.init 200 1 (fun i j -> 0.)
let x0 = DenseMatrix.init 200 1 (fun i j -> 0.)
let x1 = DenseMatrix.init 200 1 (fun i j -> 0.)
for i in 0 .. 199 do
    y.[i, 0]  <- dat.[i, 0]
    x0.[i, 0] <- dat.[i, 1]
    x1.[i, 0] <- dat.[i, 2]

// Mean squared error of the linear model
let ll (th : float array) =
    let t1 = x0.Multiply(th.[0]) .+ x1.Multiply(th.[1])
    let res = (y .- t1).PointwisePower(2.)
    res.ColumnAbsoluteSums().Sum() / 200.

// Objective; the gradient is only filled in for gradient-based solvers
let oFunc (th : float array) (gradvec : float array) =
    match gradvec with
    | null -> ()
    | _ -> (grad ll th).CopyTo(gradvec, 0)
    ll th

// Constraint function: th.[0] + th.[1] = 0
let cFunc (th : float array) (gradvec : float array) =
    match gradvec with
    | null -> ()
    | _ -> (grad ll th).CopyTo(gradvec, 0)
    th.[0] + th.[1]

let fitFunc () =
    let solver = new NLoptSolver(NLoptAlgorithm.LN_COBYLA, uint32(2), 1e-7, 100000)
    solver.SetLowerBounds([| -10.; -10. |])
    solver.SetUpperBounds([| 10.; 10. |])
    //solver.AddEqualZeroConstraint(cFunc)
    solver.SetMinObjective(oFunc)
    let initialValues = [| 1.; 2. |]
    let objReached, finalScore = solver.Optimize(initialValues)
    objReached |> printfn "%A"
    let fittedParams = initialValues
    fittedParams |> printfn "%A"
    fittedParams

let fittedParams = fitFunc () |> DenseVector

// Fitted values from the estimated coefficients
let yh = DenseMatrix.init 200 1 (fun i j -> 0.)
for i in 0 .. 199 do
    yh.[i, 0] <- dat.[i, 1] * fittedParams.[0] + dat.[i, 2] * fittedParams.[1]

Chart.Combine([ Chart.Line(y.Column(0), Name = "y")
                Chart.Line(yh.Column(0), Name = "yh")
                |> Chart.WithLegend(Title = "Model", Enabled = true) ])
|> Chart.Show
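For comparison, the same constrained least-squares idea is short to sketch in Python with SciPy (my own illustration, not part of the original answer; it assumes the 0.5 * x1 + 0.5 * x2 = 0 constraint from the question):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = np.concatenate([rng.uniform(0.02, 0.05, 50), rng.uniform(0.01, 0.03, 50)])
x1 = np.concatenate([np.zeros(50), np.ones(50)])
x2 = np.concatenate([np.ones(50), np.zeros(50)])
X = np.column_stack([np.ones(100), x1, x2])  # columns: intercept, x1, x2

def sse(beta):
    # Sum of squared errors of the linear model
    return np.sum((y - X @ beta) ** 2)

# Equality constraint: weighted average of the slope coefficients is zero
cons = [{"type": "eq", "fun": lambda beta: 0.5 * beta[1] + 0.5 * beta[2]}]
res = minimize(sse, x0=np.zeros(3), constraints=cons)
print(res.x)  # intercept near mean(y); slopes are offsets around it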

Find the value used for XOR

I have the initial address and the output. I need to find out what was used for the XOR:
129.94.5.93:46 XOR ????? == 10.165.7.201:14512
XOR has an interesting property: if you apply it to one of its operands and the result, you get the other operand back. In other words, if
r = a ^ b
then
b = r ^ a
where a and b are operands, and r is the result.
Hence, the data with which the original has been XOR-ed is
139.251.2.148:14494
Here is a short program in C# to produce this result from your data:
var a = new[] {129, 94, 5, 93, 46};
var b = new[] {10, 165, 7, 201, 14512};
var c = new int[a.Length];
for (int i = 0; i != a.Length; i++) {
    c[i] = a[i] ^ b[i];
    Console.WriteLine("a={0} b={1} c={2} back={3}", a[i], b[i], c[i], c[i] ^ a[i]);
}
XOR is a "reversible" function of sorts, so:
A XOR B = C
A XOR C = B
Therefore, if you just XOR the two values that you do have, you will get the missing number:
129.94.5.93:46 XOR X == 10.165.7.201:14512
X == 129.94.5.93:46 XOR 10.165.7.201:14512
The easiest way to figure this out is to look at the binary representation of each number (let's take the first number on each side):
    129 = 10000001
XOR 139 = 10001011
==================
     10 = 00001010
From this we can see that 129 XOR 139 == 10 is equivalent to 129 XOR 10 == 139.
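To verify the whole recovery end to end, here is a quick Python sketch of the same idea:

# Recover the XOR key field by field, then confirm it reverses the mapping.
left  = [129, 94, 5, 93, 46]      # 129.94.5.93:46
right = [10, 165, 7, 201, 14512]  # 10.165.7.201:14512
key = [l ^ r for l, r in zip(left, right)]
print(key)                                 # [139, 251, 2, 148, 14494]
print([l ^ k for l, k in zip(left, key)])  # recovers the right-hand values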
