Is there a way to tell MiniZinc to project solutions onto a subset of the variables? Or is there any other way to enumerate all solutions that are unique with respect to the evaluation of some subset of the variables?
I have tried to use FlatZinc annotations directly in MiniZinc, but it does not work, since the flattening process adds more defines_var annotations and my annotations are ignored.
I tried the following model in MiniZinc 2.0 (https://www.minizinc.org/2.0/index.html) and it seems to work as you expect, i.e. only x1 and x2 are projected (printed) in the result.
int: n = 3;
var 1..n: x1;
var 1..n: x2;
var 1..n: x3;
solve satisfy;
constraint
x3 > 1
;
output [ show([x1,x2])];
The result is:
[1, 1]
----------
[2, 1]
----------
[3, 1]
----------
[1, 2]
----------
[2, 2]
----------
[3, 2]
----------
[1, 3]
----------
[2, 3]
----------
[3, 3]
----------
==========
In MiniZinc 1.6, x1 and x2 are printed repeatedly for each value of x3.
Update:
However, if x3 is involved in any constraints (in any interesting way), the behaviour seems to be the same as in version 1.6, and that's probably not what you want...
Update2:
I asked one of the developers of Gecode about this and he answered:
Regarding the projection semantics, this really depends on the solver. Gecode for instance should not produce duplicate solutions (based on what is mentioned in the output statement), whereas g12fd does, AFAIK. So the answer would be that projection is defined by the output item, but only some solvers guarantee uniqueness.
When we tested this, we found a bug in Gecode that made it not comply with this answer. It is now fixed (in the SVN version).
The following model now gives just the distinct answers for x1 and x2 (using the fixed Gecode version):
int: n = 5;
var 1..n: x1;
var 1..n: x2;
var 1..n: x3;
solve satisfy;
constraint
x2 + x1 < 5 /\
x2 + x3 > x1
;
output [ "x:", show([x1,x2])];
The solutions given (with the fixed Gecode solver) are now:
x:[1, 1]
x:[2, 1]
x:[3, 1]
x:[1, 2]
x:[2, 2]
x:[1, 3]
==========
This works both for MiniZinc 1.6 and MiniZinc 2.0.
The solution uses FlatZinc directly. Assume that we add a constraint x3 = x1 + x2:
var 1..3: x1 :: output_var;
var 1..3: x2 :: output_var;
var 1..3: x3 :: is_defined_var :: var_is_introduced;
constraint int_plus(x1, x2, x3) :: defines_var(x3);
constraint int_le_reif(1, x3, true);
solve satisfy;
The output of flatzinc -a returned the correct combinations of values:
x1 = 1;
x2 = 1;
----------
x1 = 2;
x2 = 1;
----------
x1 = 1;
x2 = 2;
----------
==========
My observation is that auxiliary variables must be:
declared with the annotations is_defined_var and var_is_introduced, and
defined in some constraint annotated with defines_var.
Annotating only the variable declaration has no effect at all: the solver treats such a variable as an ordinary decision variable.
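For illustration, here is the same FlatZinc model with the defines_var annotation dropped from the constraint; according to the observation above, x3 is then treated as an ordinary decision variable, so the (x1, x2) combinations are repeated for each value of x3:
var 1..3: x1 :: output_var;
var 1..3: x2 :: output_var;
var 1..3: x3 :: is_defined_var :: var_is_introduced;
constraint int_plus(x1, x2, x3);
constraint int_le_reif(1, x3, true);
solve satisfy;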
I have a two-element vector whose elements can only be 0 or 1. For the sake of this example, suppose x = [0, 1]. Suppose also there are four objects y00, y01, y10, y11. My goal is to update the corresponding y (y01 in this example) according to the current value of x.
I am aware I can do this using a series of if statements:
if x == [0, 0]
y00 += 1
elseif x == [0, 1]
y01 += 1
elseif x == [1, 0]
y10 += 1
elseif x == [1, 1]
y11 += 1
end
However, I understand this can be done more succinctly using Julia's metaprogramming, although I'm unfamiliar with its usage and can't figure out how.
I want to be able to express something like y{x[1]}{x[2]} += 1 (which is obviously wrong); basically, to be able to refer to and modify the correct y according to the current value of x.
So far, I've been able to retrieve the actual value of the correct y (but I can't get a reference to the y object itself) with something like
eval(Symbol(string("y", x[1], x[2])))
I'm sorry if I did not use the appropriate lingo, but I hope I made myself clear.
There's a much more elegant way using StaticArrays. You can define a common type for your y values, which will behave like a matrix (which I assume the ys represent?) and define a lot of things for you:
julia> mutable struct Thing2 <: FieldMatrix{2, 2, Float64}
y00::Float64
y01::Float64
y10::Float64
y11::Float64
end
julia> M = rand(Thing2)
2×2 Thing2 with indices SOneTo(2)×SOneTo(2):
0.695919 0.624941
0.404213 0.0317816
julia> M.y00 += 1
1.6959194941562996
julia> M[1, 2] += 1
1.6249412302897646
julia> M * [2, 3]
2-element SArray{Tuple{2},Float64,1,2} with indices SOneTo(2):
10.266662679181893
0.9037708026795666
(Side note: Julia indices begin at 1, so it might be more idiomatic to use one-based indices for y as well. Alternatively, you can create array types with custom indexing, but that's more work, again.)
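If you do want to keep zero-based indices, a minimal sketch of the custom-indexing route, assuming the OffsetArrays package (which the side note above only alludes to):
using OffsetArrays

# a 2×2 matrix whose rows and columns are indexed 0:1 instead of 1:2
y = OffsetArray(zeros(2, 2), 0:1, 0:1)

x = [0, 1]
y[x...] += 1   # updates the entry at (0, 1); no index shifting needed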
How about using x as linear indices into an array Y?
x = reshape(1:4, 2, 2)   # here x is reused as a 2×2 lookup table of the linear indices 1..4
Y = zeros(4);
Y[ x[1,2] ] += 1         # the bit pattern (0, 1) corresponds to position [1, 2], i.e. Y[3]
Any time you find yourself naming variables with sequential numbers it's a HUGE RED FLAG that you should just use an array instead. No need to make it so complicated with a custom static array or linear indexing; you can just make y a plain old 2x2 array. The straightforward transformation is:
y = zeros(2,2)
if x == [0, 0]
y[1,1] += 1
elseif x == [0, 1]
y[1,2] += 1
elseif x == [1, 0]
y[2,1] += 1
elseif x == [1, 1]
y[2,2] += 1
end
Now you can start seeing a pattern here and simplify this by using x as an index directly into y:
y[(x .+ 1)...] += 1
I'm doing two things there: I'm adding one to all the elements of x and then I'm splatting those elements into the indexing expression so they're treated as a two-dimensional lookup. From here, you could make this more Julian by just using one-based indices from the get-go and potentially making x a Tuple or CartesianIndex for improved performance.
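For instance, a minimal sketch of the Tuple/CartesianIndex variant (the names simply follow the example above):
y = zeros(2, 2)
x = (0, 1)                    # x stored as a Tuple rather than a Vector
i = CartesianIndex(x .+ 1)    # shift to one-based indices: CartesianIndex(1, 2)
y[i] += 1                     # updates y[1, 2]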
I'm a beginner to Prolog and have two requirements:
f(1) = 1
f(x) = 5x + x^2 + f(x - 1)
rules:
f(1,1).
f(X,Y) :-
    Y is 5 * X + X * X + f(X-1,Y).
query:
f(4,X).
Output:
ERROR: is/2: Arguments are not sufficiently instantiated
How can I add the value of f(X-1)?
This can be easily solved by using auxiliary variables.
For example, consider:
f(1, 1).
f(X, Y) :-
    Y #= 5*X + X^2 + T1,
    T2 #= X - 1,
    f(T2, T1).
This is a straight-forward translation of the rules you give, using auxiliary variables T1 and T2 which stand for the partial expressions f(X-1) and X-1, respectively. As #BallpointBen correctly notes, it is not sufficient to use the terms themselves, because these terms are different from their arithmetic evaluation. In particular, -(2,1) is not the integer 1, but 2 - 1 #= 1 does hold!
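A quick way to see the difference at the toplevel (assuming library(clpfd), or your system's equivalent, is loaded):
?- X = 2-1, X == 1.
false.

?- 2 - 1 #= 1.
true.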
Depending on your Prolog system, you may currently still need to import a library to use the predicate (#=)/2, which expresses equality of integer expressions.
Your example query now already yields a solution:
?- f(4, X).
X = 75 .
Note that the predicate does not terminate universally in this case:
?- f(4, X), false.
nontermination
We can easily make it so with an additional constraint:
f(1, 1).
f(X, Y) :-
    X #> 1,
    Y #= 5*X + X^2 + T1,
    T2 #= X - 1,
    f(T2, T1).
Now we have:
?- f(4, X).
X = 75 ;
false.
Note that we can use this as a true relation, also in the most general case:
?- f(X, Y).
X = Y, Y = 1 ;
X = 2,
Y = 15 ;
X = 3,
Y = 39 ;
X = 4,
Y = 75 ;
etc.
Versions based on lower-level arithmetic typically only cover a very limited subset of instances of such queries. I therefore recommend that you use (#=)/2 instead of (is)/2. Especially for beginners, using (is)/2 is too hard to understand. Take the many related questions filed under instantiation-error as evidence, and see clpfd for declarative solutions.
The issue is that you are trying to evaluate f(X-1,Y) as if it were a number, but of course it is a predicate that may be true or false. After some tinkering, I found this solution:
f(1,1).
f(X,Y) :- X > 0, Z is X-1, f(Z,N), Y is 5*X + X*X + N.
The trick is to let it find its way down to f(1,N) first, without evaluating anything; then let the results bubble back up by satisfying Y is 5*X + X*X + N. In Prolog, the order of goals matters for the search: f(Z,N) has to be satisfied first so that N has a value in the statement Y is 5*X + X*X + N.
Also, note the condition X > 0 to avoid infinite recursion.
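With these clauses the original query now succeeds, for example (shown as printed by SWI-Prolog; the value matches the clpfd version above):
?- f(4, X).
X = 75 .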
I have a Prolog program with the following grammar:
sum --> [+], mult, sum | mult | num.
mult --> [*], num, xer.
xer --> [x] | [^], [x], num.
num --> [2] | [3] ... etc
I have an abstract tree representation of my expressions. For example, mul(num(2),var(x)), which corresponds to [*,2,x], is valid.
I want to be able to create all expressions that satisfy a given x and solution, using
allExpressions(Tree, X, Solution).
For example:
?- allExpressions(Tree, 2, 6).
Tree = mul(num(3),var(x))
Tree = sum(num(2),mul(num(2),var(x)))
etc.
Due to my grammar, this will obviously not be an unlimited set of expressions.
I have already programmed an evaluation(Tree, X, Solution) predicate which calculates the answer given the X variable. So what I need help with is generating the possible set of expressions for a given X variable and solution.
Any ideas on how to approach this? Thanks.
That's easy: Since all of your arithmetic operations can only increase the value of expressions, it is simple to limit the depth when searching for solutions. Simply describe inductively what a solution can look like. You can do it for example with SWI-Prolog's finite domain constraints for addition and multiplication like this:
:- use_module(library(clpfd)).
expression(var(x), X, X).
expression(num(N), _, N) :- phrase(num, [N]).
expression(mul(A,B), X, N) :-
    N1 * N2 #= N,
    N1 #> 1,
    N2 #> 1,
    expression(A, X, N1),
    expression(B, X, N2).
expression(sum(A,B), X, N) :-
    N1 + N2 #= N,
    N1 #> 1,
    N2 #> 1,
    expression(A, X, N1),
    expression(B, X, N2).
I leave the other operations as an exercise.
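Note that expression/3 calls phrase(num, [N]), so it also needs the num//0 rule from the grammar in the question; a minimal sketch covering only the first two digits of the elided rule:
num --> [2].
num --> [3].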
Example query and some results:
?- expression(Tree, 2, 6).
Tree = mul(var(x), num(3)) ;
Tree = mul(num(2), num(3)) ;
[...solutions omitted...]
Tree = sum(num(2), mul(num(2), var(x))) ;
Tree = sum(num(2), mul(num(2), num(2))) ;
[...solutions omitted...]
Tree = sum(sum(num(2), num(2)), num(2)) ;
false.
+1 for using a clean, non-defaulty representation for expression trees (var(x), num(N) etc.), which lets you use pattern matching when reasoning about it.
This may be quite a basic question for someone who knows linear programming.
Most of the LP problems that I have seen have a format similar to the following:
max 3x+4y
subject to 4x-5y = -34
3x-5y = 10 (and similar other constraints)
So, in other words, we have the same number of unknowns in the objective and in the constraint functions.
My problem is that I have one unknown variable in the objective function and three unknowns in the constraint functions.
The problem is like this
Objective function: min w1
subject to:
w1 + 0.1676x + 0.1692y >= 0.1666
w1 - 0.1676x - 0.1692y >= -0.1666
w1 + 0.3039x + 0.3058y >= 0.3
w1 - 0.3039x - 0.3058y >= -0.3
x + y = 1
x >= 0
y >= 0
As can be seen, the objective function has only one unknown, i.e. w1, and the constraint functions have three (or, not counting w1, two) unknowns, i.e. w1, x and y.
Can somebody please guide me on how to solve this problem, especially using R or MATLAB's linear programming tools?
Your objective only involves w1, but you can still view it as a function of w1, x, y, where the coefficient of w1 is 1 and the coefficients of x and y are zero:
min w1*1 + x*0 + y*0
Once you see this you can formulate it in the usual way as a "standard" LP.
Prasad is correct. The number of unknowns in the objective function does not matter. You can view unknowns that are not present as having a zero coefficient.
This LP is easily solved using MATLAB's linprog function; see the linprog documentation for more details.
% We lay out the variables as X = [w1; x; y]
c = [1; 0; 0]; % The objective is w1 = c'*X
% Construct the constraint matrix
% Inequality constraints will be written as Ain*X <= bin
% w1 x y
Ain = [ -1 -0.1676 -0.1692;
-1 0.1676 0.1692;
-1 -0.3039 -0.3058;
-1 0.3039 0.3058;
];
bin = [ -0.1666; 0.1666; -0.3; 0.3];
% Construct equality constraints Aeq*X == beq
Aeq = [ 0 1 1];
beq = 1;
%Construct lower and upper bounds l <= X <= u
l = [ -inf; 0; 0];
u = inf(3,1);
% Solve the LP using linprog
[X, optval] = linprog(c,Ain,bin,Aeq,beq,l,u);
% Extract the solution
w1 = X(1);
x = X(2);
y = X(3);
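A quick sanity check one could run after the script above (a sketch reusing the variables defined there; the tolerance 1e-9 is arbitrary):
fprintf('w1 = %g, x = %g, y = %g\n', w1, x, y);
assert(all(Ain*X <= bin + 1e-9));   % inequality constraints hold
assert(abs(Aeq*X - beq) < 1e-9);    % x + y == 1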
I'm implementing the NTRUEncrypt algorithm. According to an NTRU tutorial, a polynomial f has an inverse g such that f*g = 1 mod p, i.e. the polynomial multiplied by its inverse and reduced modulo p gives 1 (here p = 3). I get the concept, but in an example they provide, a polynomial f = -1 + X + X^2 - X^4 + X^6 + X^9 - X^10, which we will represent as the array [-1,1,1,0,-1,0,1,0,0,1,-1], has an inverse g of [1,2,0,2,2,1,0,2,1,2,0], so that when we multiply them and reduce the result modulo 3 we should get 1. However, when I use the NTRU algorithm for multiplying and reducing them I get -2.
Here is my algorithm for multiplying them written in Java:
public static int[] PolMulFun(int a[], int b[], int c[], int N, int M)
{
    for (int k = N - 1; k >= 0; k--)
    {
        c[k] = 0;
        int j = k + 1;
        for (int i = N - 1; i >= 0; i--)
        {
            if (j == N)
            {
                j = 0;
            }
            if (a[i] != 0 && b[j] != 0)
            {
                c[k] = (c[k] + (a[i] * b[j])) % M;
            }
            j = j + 1;
        }
    }
    return c;
}
It basically takes in polynomial a and multiplies it by b, returning the result in c. N specifies the degree of the polynomials plus 1 (in the example above, N = 11), and M is the reduction modulus (in the example above, 3).
Why am I getting -2 and not 1?
-2 == 1 mod 3, so the calculation is fine, but Java's modulus (remainder) operator takes the sign of the dividend, so for mod n+1 its output range is [-n .. n] instead of the standard mathematical [0 .. n].
Just stick an if (c[k] < 0) c[k] += M; after your c[k]=...%M line, and you should be fine.
Edit: actually, best to put it right at the end of the outermost (k) for-loop.
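Putting it together, a sketch of how the fixed method could look, with the normalization at the end of the outer (k) loop as suggested (everything else is unchanged from the question's code):
public static int[] PolMulFun(int a[], int b[], int c[], int N, int M)
{
    for (int k = N - 1; k >= 0; k--)
    {
        c[k] = 0;
        int j = k + 1;
        for (int i = N - 1; i >= 0; i--)
        {
            if (j == N)
            {
                j = 0;
            }
            if (a[i] != 0 && b[j] != 0)
            {
                c[k] = (c[k] + (a[i] * b[j])) % M;
            }
            j = j + 1;
        }
        // c[k] is now in the range (-M, M); shift negative remainders into [0, M-1]
        if (c[k] < 0)
        {
            c[k] += M;
        }
    }
    return c;
}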