cvxpy constrained normalization equations (abs) - constraints

I am working on an optimization problem (A*v = b) where I would like to rank a set of alternatives X = {x1,x2,x3,x4}. However, I have the following normalization constraint: |v[i] - v[j]| <= 1, which can be written as -1 <= v[i] - v[j] <= 1.
My code is as follows:
import cvxpy as cp
n = len(X)  # number of alternatives
v = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A*v - b))
constraints = [0 <= v]
# Normalization condition -1 <= v[i] - v[j] <= 1
for i in range(n):
    for j in range(n):
        constraints = [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
prob = cp.Problem(objective, constraints)
# The optimal objective value is returned by `prob.solve()`.
result = prob.solve()
# The optimal value for v is stored in `v.value`.
va2 = v.value
Which outputs:
[-0.15 0.45 -0.35 0.05]
This result is not close to what it should be and even contains negative values. I think my code for the normalization constraint is most probably wrong.

You are not appending your constraints; you are overwriting them each time. Instead of this line
constraints = [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
You should have
constraints += [-1 <= v[i]-v[j], 1 >= v[i]-v[j]]
For cleanliness you may want to change this
for i in range(n):
    for j in range(n):
To only consider each pair once:
for i in range(n):
    for j in range(i+1, n):
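Putting both fixes together, a minimal corrected version could look like this (assuming A, b, and X are defined as in the question; A @ v is the matrix-multiplication syntax newer CVXPY versions prefer over A*v):
import cvxpy as cp
n = len(X)  # number of alternatives
v = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A @ v - b))
constraints = [0 <= v]
# normalization condition -1 <= v[i] - v[j] <= 1, each pair once
for i in range(n):
    for j in range(i + 1, n):
        constraints += [-1 <= v[i] - v[j], 1 >= v[i] - v[j]]
prob = cp.Problem(objective, constraints)
result = prob.solve()
va2 = v.value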

Julia JuMP feasibility slack of constraints

In Julia, using JuMP, I am setting up a simple optimization problem (MWE; the real problem is much bigger).
model = Model()
set_optimizer(model, MosekTools.Optimizer)
@variable(model, 0 <= x[1:2])
@constraint(model, sum(x) <= 2)
@constraint(model, 1 <= sum(x))
@objective(model, Min, sum(x))
print(model)
Which gives this model:
Min x[1] + x[2]
Subject to
x[1] + x[2] ≤ 2.0
-x[1] - x[2] ≤ -1.0
x[1] ≥ 0.0
x[2] ≥ 0.0
I optimize this model via optimize!(model).
Now, obviously, the constraint x[1] + x[2] <= 2 is redundant, and at the optimum it has a feasibility slack of 1. My goal is to determine all the constraints that have slacks larger than 0 and display the slacks. Then I will delete those from the model.
To this end, I iterate over the constraints which are not variable bounds and print their values.
for (F, S) in list_of_constraint_types(model)
    # Iterate over constraint types
    if F != JuMP.VariableRef # skip variable bounds
        for ci in all_constraints(model, F, S)
            println(value(ci))
        end
    end
end
However, because I print the value of the constraints, I get the left-hand sides:
1.0
-1.0
I want to instead see the slacks as
1
0
How may I do this? Note that I am not necessarily interested in linear programs, so things like shadow_value are not useful for me.
Based on the accepted answer, I am adding a MWE that solves this problem.
model = Model()
set_optimizer(model, MosekTools.Optimizer)
@variable(model, 0 <= x[1:2])
@constraint(model, sum(x) <= 2)
@constraint(model, 1 <= sum(x))
@constraint(model, 0.9 <= sum(x))
@objective(model, Min, sum(x))
print(model)
optimize!(model)
constraints_to_delete = vec([])
for (F, S) in list_of_constraint_types(model)
    if F != JuMP.VariableRef
        for ci in all_constraints(model, F, S)
            slack = normalized_rhs(ci) - value(ci)
            if slack > 1e-5
                push!(constraints_to_delete, ci)
                println(slack)
                # delete(model, ci)  # do not delete while iterating
            end
        end
    end
end
for c in constraints_to_delete
    delete(model, c)
end
print(model)
Read this (hot off the press) tutorial: https://jump.dev/JuMP.jl/dev/tutorials/linear/lp_sensitivity/.
Although focused on LPs, it shows how to compute slacks etc using normalized_rhs(ci) - value(ci).
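One caveat: normalized_rhs(ci) - value(ci) is the slack of a less-or-equal constraint. For a greater-or-equal constraint the slack is value(ci) - normalized_rhs(ci), so if the model mixes both directions you may want to branch on the set type S before deciding which constraints are redundant.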

What do multiple objective functions mean in Julia jump?

I have multiple objective functions for the same model in Julia JuMP, created with @objective in a for loop. What does it mean to have multiple objective functions in Julia? Which objective is minimized, or are all the objectives minimized jointly? How would they be minimized jointly?
using JuMP
using MosekTools
K = 3
N = 2
penalties = [1.0, 3.9, 8.7]
function fac1(r::Number, i::Number, l::Number)
    fac1 = 1.0
    for m in 0:r-1
        fac1 *= (i-m)*(l-m)
    end
    return fac1
end
function fac2(r::Number, i::Number, l::Number, tau::Float64)
    return tau ^ (i + l - 2r + 1)/(i + l - 2r + 1)
end
function Q_r(i::Number, l::Number, r::Number, tau::Float64)
    if i >= r && l >= r
        return 2 * fac1(r, i, l) * fac2(r, i, l, tau)
    else
        return 0.0
    end
end
function Q(i::Number, l::Number, tau::Number)
    elem = 0
    for r in 0:N
        elem += penalties[r + 1] * Q_r(i, l, r, tau)
    end
    return elem
end
# discrete segment starting times
mat = Array{Float64, 3}(undef, K, N+1, N+1)
function Q_mat()
    for k in 0:K-1
        for i in 1:N+1
            for j in 1:N+1
                mat[k+1, i, j] = Q(i, j, convert(Float64, k))
            end
        end
    end
    return mat  # after the k loop, so all K slices are filled
end
function A_tau(r::Number, n::Number, tau::Float64)
    fac = 1
    for m in 1:r
        fac *= (n - (m - 1))
    end
    if n >= r
        return fac * tau ^ (n - r)
    else
        return 0.0
    end
end
function A_tau_mat(tau::Float64)
    mat = Array{Float64, 2}(undef, N+1, N+1)
    for i in 1:N+1
        for j in 1:N+1
            mat[i, j] = A_tau(i, j, tau)
        end
    end
    return mat
end
function A_0(r::Number, n::Number)
    if r == n
        fac = 1
        for m in 1:r
            fac *= r - (m - 1)
        end
        return fac
    else
        return 0.0
    end
end
m = Model(optimizer_with_attributes(Mosek.Optimizer, "QUIET" => false, "INTPNT_CO_TOL_DFEAS" => 1e-7))
@variable(m, A[i=1:K+1,j=1:K,k=1:N+1,l=1:N+1])
@variable(m, p[i=1:K+1,j=1:N+1])
# constraint difference might be a small fractional difference.
# assuming that time difference is 1 second starting from 0.
for i in 1:K
    @constraint(m, -A_tau_mat(convert(Float64, i-1)) * p[i] .+ A_tau_mat(convert(Float64, i-1)) * p[i+1] .== [0.0, 0.0, 0.0])
end
for i in 1:K+1
    @constraint(m, A_tau_mat(convert(Float64, i-1)) * p[i] .== [1.0 12.0 13.0])
end
@constraint(m, A_tau_mat(convert(Float64, K+1)) * p[K+1] .== [0.0 0.0 0.0])
for i in 1:K+1
    @objective(m, Min, p[i]' * Q_mat()[i] * p[i])
end
optimize!(m)
println("p value is ", value.(p))
println(A_tau_mat(0.0), A_tau_mat(1.0), A_tau_mat(2.0))
With standard JuMP you can have only one objective function at a time. Running another @objective macro just overwrites the previous one.
Consider the following code:
julia> m = Model(GLPK.Optimizer);
julia> @variable(m, x >= 0)
x
julia> @objective(m, Max, 2x)
2 x
julia> @objective(m, Min, 2x)
2 x
2 x
julia> println(m)
Min 2 x
Subject to
x >= 0.0
As you can see, only one objective function is left.
However, there is indeed an area of optimization called multi-criteria optimization, where the goal is to find the Pareto frontier.
There is a Julia package for multi-criteria optimization named MultiJuMP. Here is a sample code:
using MultiJuMP, JuMP
using Clp
const mmodel = multi_model(Clp.Optimizer, linear = true)
const y = @variable(mmodel, 0 <= y <= 10.0)
const z = @variable(mmodel, 0 <= z <= 10.0)
@constraint(mmodel, y + z <= 15.0)
const exp_obj1 = @expression(mmodel, -y + 0.05 * z)
const exp_obj2 = @expression(mmodel, 0.05 * y - z)
const obj1 = SingleObjective(exp_obj1)
const obj2 = SingleObjective(exp_obj2)
const multim = get_multidata(mmodel)
multim.objectives = [obj1, obj2]
optimize!(mmodel, method = WeightedSum())
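WeightedSum() scalarizes the objectives into a single one, roughly min w1*f1(x) + w2*f2(x) for some weights, so each solve still minimizes a single function; varying the weights traces out different points of the Pareto frontier.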
This library also supports plotting of the Pareto frontier.
The disadvantage is that as of today it does not seem to be actively maintained (however it works with the current Julia and JuMP versions).

Implementing FFT over finite fields

I would like to implement multiplication of polynomials using NTT. I followed Number-theoretic transform (integer DFT) and it seems to work.
Now I would like to implement multiplication of polynomials over finite fields Z_p[x] where p is arbitrary prime number.
Does it change anything that the coefficients are now bounded by p, compared to the former unbounded case?
In particular, the original NTT required finding a prime number N as the working modulus that is larger than (magnitude of largest element of input vector)^2 * (length of input vector) + 1 so that the result never overflows. If the result is going to be reduced modulo that prime p anyway, how small can the working modulus be? Note that p - 1 does not have to be of the form (some positive integer) * (length of input vector).
Edit: I copy-pasted the source from the link above to illustrate the problem:
#
# Number-theoretic transform library (Python 2, 3)
#
# Copyright (c) 2017 Project Nayuki
# All rights reserved. Contact Nayuki for licensing.
# https://www.nayuki.io/page/number-theoretic-transform-integer-dft
#
import itertools, numbers
def find_params_and_transform(invec, minmod):
    check_int(minmod)
    mod = find_modulus(len(invec), minmod)
    root = find_primitive_root(len(invec), mod - 1, mod)
    return (transform(invec, root, mod), root, mod)
def check_int(n):
    if not isinstance(n, numbers.Integral):
        raise TypeError()
def find_modulus(veclen, minimum):
    check_int(veclen)
    check_int(minimum)
    if veclen < 1 or minimum < 1:
        raise ValueError()
    start = (minimum - 1 + veclen - 1) // veclen
    for i in itertools.count(max(start, 1)):
        n = i * veclen + 1
        assert n >= minimum
        if is_prime(n):
            return n
def is_prime(n):
    check_int(n)
    if n <= 1:
        raise ValueError()
    return all((n % i != 0) for i in range(2, sqrt(n) + 1))
def sqrt(n):
    check_int(n)
    if n < 0:
        raise ValueError()
    i = 1
    while i * i <= n:
        i *= 2
    result = 0
    while i > 0:
        if (result + i)**2 <= n:
            result += i
        i //= 2
    return result
def find_primitive_root(degree, totient, mod):
    check_int(degree)
    check_int(totient)
    check_int(mod)
    if not (1 <= degree <= totient < mod):
        raise ValueError()
    if totient % degree != 0:
        raise ValueError()
    gen = find_generator(totient, mod)
    root = pow(gen, totient // degree, mod)
    assert 0 <= root < mod
    return root
def find_generator(totient, mod):
    check_int(totient)
    check_int(mod)
    if not (1 <= totient < mod):
        raise ValueError()
    for i in range(1, mod):
        if is_generator(i, totient, mod):
            return i
    raise ValueError("No generator exists")
def is_generator(val, totient, mod):
    check_int(val)
    check_int(totient)
    check_int(mod)
    if not (0 <= val < mod):
        raise ValueError()
    if not (1 <= totient < mod):
        raise ValueError()
    pf = unique_prime_factors(totient)
    return pow(val, totient, mod) == 1 and all((pow(val, totient // p, mod) != 1) for p in pf)
def unique_prime_factors(n):
    check_int(n)
    if n < 1:
        raise ValueError()
    result = []
    i = 2
    end = sqrt(n)
    while i <= end:
        if n % i == 0:
            n //= i
            result.append(i)
            while n % i == 0:
                n //= i
            end = sqrt(n)
        i += 1
    if n > 1:
        result.append(n)
    return result
def transform(invec, root, mod):
    check_int(root)
    check_int(mod)
    if len(invec) >= mod:
        raise ValueError()
    if not all((0 <= val < mod) for val in invec):
        raise ValueError()
    if not (1 <= root < mod):
        raise ValueError()
    outvec = []
    for i in range(len(invec)):
        temp = 0
        for (j, val) in enumerate(invec):
            temp += val * pow(root, i * j, mod)
            temp %= mod
        outvec.append(temp)
    return outvec
def inverse_transform(invec, root, mod):
    outvec = transform(invec, reciprocal(root, mod), mod)
    scaler = reciprocal(len(invec), mod)
    return [(val * scaler % mod) for val in outvec]
def reciprocal(n, mod):
    check_int(n)
    check_int(mod)
    if not (0 <= n < mod):
        raise ValueError()
    x, y = mod, n
    a, b = 0, 1
    while y != 0:
        a, b = b, a - x // y * b
        x, y = y, x % y
    if x == 1:
        return a % mod
    else:
        raise ValueError("Reciprocal does not exist")
def circular_convolve(vec0, vec1):
    if not (0 < len(vec0) == len(vec1)):
        raise ValueError()
    if any((val < 0) for val in itertools.chain(vec0, vec1)):
        raise ValueError()
    maxval = max(val for val in itertools.chain(vec0, vec1))
    minmod = maxval**2 * len(vec0) + 1
    temp0, root, mod = find_params_and_transform(vec0, minmod)
    temp1 = transform(vec1, root, mod)
    temp2 = [(x * y % mod) for (x, y) in zip(temp0, temp1)]
    return inverse_transform(temp2, root, mod)
vec0 = [24, 12, 28, 8, 0, 0, 0, 0]
vec1 = [4, 26, 29, 23, 0, 0, 0, 0]
print(circular_convolve(vec0, vec1))
def modulo(vec, prime):
    return [x % prime for x in vec]
print(modulo(circular_convolve(vec0, vec1), 31))
Prints:
[96, 672, 1120, 1660, 1296, 876, 184, 0]
[3, 21, 4, 17, 25, 8, 29, 0]
However, when I change minmod = maxval**2 * len(vec0) + 1 to minmod = maxval + 1, it stops working:
[14, 16, 13, 20, 25, 15, 20, 0]
[14, 16, 13, 20, 25, 15, 20, 0]
What is the smallest minmod (N in the link above) for the multiplication to work as expected?
If your input of n integers is bounded by some prime q (any modulus q, not just a prime, behaves the same), you can use it as the max value + 1, but beware: you can not use it as the NTT prime p, because the NTT prime p has special properties. All of them are here:
Translation from Complex-FFT to Finite-Field-FFT
So the max value of each input is q-1, but during your task's computation (convolution on 2 NTT results) the magnitude of the first-layer results can rise up to n*(q-1), and as we are doing convolution on them, the input magnitude of the final iNTT can rise up to:
m = n*(q-1)^2
If you are doing different operations on the NTTs, the equation for m might change.
Now let us get back to p. In a nutshell, you can use any prime p that upholds these:
p mod n == 1
p > m
and there exist 1 <= r, L < p such that:
p mod (L-1) == 0
r^(L*i) mod p == 1 // i = { 0, n }
r^(L*i) mod p != 1 // i = { 1, 2, 3, ..., n-1 }
If all this is satisfied, then r^L is an n-th root of unity mod p, and p can be used for the NTT. To find such a prime (and also r, L), look at the link above (there is C++ code that finds them).
For example, during string multiplication we take 2 strings, do NTT on both, convolve the results, and iNTT the result back (whose size is the sum of both input sizes). So for example:
99999999999999999999999999999999
*99999999999999999999999999999999
----------------------------------------------------------------
9999999999999999999999999999999800000000000000000000000000000001
Here q = 10 and both operands are 32 nines, so n = 32, hence m = 9*9*32 = 2592, and the found prime is p = 2689. As you can see, the result matches, so no overflow occurs. However, if I used any smaller prime that still fits all the other conditions, the result would not match. I chose this example specifically to stretch the NTT values as much as possible (all values are q-1 and both sizes are the same power of 2).
In case your NTT is fast and n is not a power of 2, you need to zero-pad each NTT to the nearest higher or equal power-of-2 size. That should not affect the value of m, as zero padding should not increase the magnitude of the values. My testing confirms it, so for convolution you can use:
m = (n1+n2)*((q-1)^2)/2
where n1, n2 are the raw input sizes before zero padding.
For more info about implementing NTT you can check out mine in C++ (extensively optimized):
Modular arithmetics and NTT (finite field DFT) optimizations
So to answer your questions:
Yes, you can take advantage of the fact that the input is mod q, but you can not use q as p!
You can use minmod = n * (maxval + 1) only for a single NTT (or the first layer of NTTs), but as you are chaining them with convolution during your NTT usage, you can not use that for the final iNTT stage!
However, as I mentioned in the comments, the easiest approach is to use the max possible p that fits in the data type you are using and is usable for all supported power-of-2 input sizes.
That basically renders your question irrelevant. The only case I can think of where this is not possible or desired is arbitrary-precision numbers, where there is "no" max limit. Many performance issues are tied to a variable p, as the search for p is really slow (possibly even slower than the NTT itself), and a variable p also disables many of the modular-arithmetic optimizations needed, making the NTT really slow.
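To make that search concrete, here is a small Python sketch (my own illustration, not the C++ code linked above; the helper names are made up). It finds a prime p with p mod n == 1 and p > m, together with a primitive n-th root of unity W mod p. Note it may return a smaller prime than the 2689 quoted above, since the linked C++ code applies its extra r, L conditions during the search:
def is_prime_simple(n):
    # trial division is fine for a sketch
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
def find_ntt_prime_and_root(n, m):
    # smallest prime p with p % n == 1 and p > m, plus a root W
    # with W^n == 1 (mod p) and W^i != 1 (mod p) for 0 < i < n
    k = m // n + 1
    while True:
        p = k * n + 1
        if is_prime_simple(p):
            for g in range(2, p):
                W = pow(g, (p - 1) // n, p)
                if all(pow(W, i, p) != 1 for i in range(1, n)):
                    return p, W
        k += 1
print(find_ntt_prime_and_root(32, 2592))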

Minimize the maximum variable

I have a Mixed Integer Programming problem. The objective is to minimize the maximum value in a vector of variables. Each variable has an upper bound of 5. The problem is like this:
m = Model(solver = GLPKSolverMIP())
@objective(m, Min, max(x[i] for i=1:12))
@variable(m, 0 <= x[i] <= 5, Int)
@constraint(m, sum(x[i] for i=1:12) == 12)
status = solve(m)
Taking the max over variables is not part of the Julia JuMP syntax, so I modified the problem to
t = 1
while t <= 5 && (status == :NotSolved || status == :Infeasible)
    m = Model(solver = GLPKSolverMIP())
    i = 1:12
    @objective(m, Min, max(x[i] for i=1:12))
    @variable(m, 0 <= x[i] <= t, Int)
    @constraint(m, sum(x[i] for i=1:12) == 12)
    status = solve(m)
    t += 1
end
This does the job by solving the problem iteratively, starting with an upper bound of 1 on the variables and increasing it by one until the solution is feasible. Is this really the best way to do this?
The question asks to minimize a maximum; this maximum can be held in an auxiliary variable, which we then minimize. To do so, add constraints that force the new variable to actually be an upper bound on x. In code:
using GLPKMathProgInterface
using JuMP
m = Model(solver = GLPKSolverMIP())
@variable(m, 0 <= x[i=1:3] <= 5, Int) # define variables
@variable(m, 0 <= t <= 12) # define auxiliary variable
@constraint(m, t .>= x) # constrain t to be the max
@constraint(m, sum(x[i] for i=1:3) == 12) # the meat of the constraints
@objective(m, Min, t) # we wish to minimize the max
status = solve(m)
Now we can inspect the solution:
julia> getValue(t)
4.0
julia> getValue(x)
3-element Array{Float64,1}:
 4.0
 4.0
 4.0
The actual problem the poster wants to solve is probably more complex than this, but it can be handled by a variation on this framework.

Calculating the modulo of two intervals

I want to understand how the modulus operator works when applied to two intervals. Adding, subtracting and multiplying two intervals is trivial to implement in code, but how do you do it for modulus?
I'd be happy if someone can show me the formula, sample code or a link which explains how it works.
Background info: You have two integers x_lo < x < x_hi and y_lo < y < y_hi. What are the lower and upper bounds for mod(x, y)?
Edit: I'm unsure whether it is possible to come up with the minimal bounds in an efficient manner (without calculating the mod for all x or for all y). If it is not, I'll accept an accurate but non-optimal answer for the bounds. Obviously, [-inf,+inf] is a correct answer then :) but I want a bound that is more limited in size.
It turns out, this is an interesting problem. The assumption I make is that for integer intervals, modulo is defined with respect to truncated division (round towards 0).
As a consequence, mod(-a,m) == -mod(a,m) for all a, m. Moreover, sign(mod(a,m)) == sign(a).
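For example, with truncated division mod(-7, 3) == -1, whereas floored division (Python's % operator) gives (-7) % 3 == 2.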
Definitions, before we start
Closed interval from a to b: [a,b]
Empty interval: [] := [+Inf,-Inf]
Negation: -[a,b] := [-b,-a]
Union: [a,b] u [c,d] := [min(a,c),max(b,d)]
Absolute value: |m| := max(m,-m)
Simpler Case: Fixed modulus m
It is easier to start with a fixed m. We will later generalize this to the modulo of two intervals. The definition builds up recursively. It should be no problem to implement this in your favorite programming language. Pseudocode:
def mod1([a,b], m):
    // (1): empty interval
    if a > b || m == 0:
        return []
    // (2): compute modulo with positive interval and negate
    else if b < 0:
        return -mod1([-b,-a], m)
    // (3): split into negative and non-negative interval, compute and join
    else if a < 0:
        return mod1([a,-1], m) u mod1([0,b], m)
    // (4): there is no k > 0 such that a < k*m <= b
    else if b-a < |m| && a % m <= b % m:
        return [a % m, b % m]
    // (5): we can't do better than that
    else:
        return [0,|m|-1]
Up to this point, we can't do better. The resulting interval in (5) might be an over-approximation, but it is the best we can get with a single interval; if we were allowed to return a set of intervals, we could be more precise.
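For concreteness, here is a direct Python translation of mod1 (my sketch, not part of the original answer; tmod is a helper I introduce because Python's % implements floored rather than truncated modulo):
def tmod(a, m):
    # truncated modulo: the result takes the sign of the dividend a
    r = abs(a) % abs(m)
    return -r if a < 0 else r
def mod1(a, b, m):
    # interval [a, b] modulo a fixed integer m, returned as (lo, hi);
    # (inf, -inf) encodes the empty interval
    if a > b or m == 0:                              # (1)
        return (float('inf'), float('-inf'))
    if b < 0:                                        # (2)
        lo, hi = mod1(-b, -a, m)
        return (-hi, -lo)
    if a < 0:                                        # (3)
        lo1, hi1 = mod1(a, -1, m)
        lo2, hi2 = mod1(0, b, m)
        return (min(lo1, lo2), max(hi1, hi2))
    if b - a < abs(m) and tmod(a, m) <= tmod(b, m):  # (4)
        return (tmod(a, m), tmod(b, m))
    return (0, abs(m) - 1)                           # (5)
print(mod1(1, 3, 5))  # (1, 3): no wrap-around, case (4)
print(mod1(2, 5, 4))  # (0, 3): wraps around, case (5)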
General case
The same ideas apply to the case where our modulus is an interval itself. Here we go:
def mod2([a,b], [m,n]):
    // (1): empty interval
    if a > b || m > n:
        return []
    // (2): compute modulo with positive interval and negate
    else if b < 0:
        return -mod2([-b,-a], [m,n])
    // (3): split into negative and non-negative interval, compute, and join
    else if a < 0:
        return mod2([a,-1], [m,n]) u mod2([0,b], [m,n])
    // (4): use the simpler function from before
    else if m == n:
        return mod1([a,b], m)
    // (5): use only non-negative m and n
    else if n <= 0:
        return mod2([a,b], [-n,-m])
    // (6): similar to (5), make modulus non-negative
    else if m <= 0:
        return mod2([a,b], [1, max(-m,n)])
    // (7): compare to (4) in mod1, check b-a < |modulus|
    else if b-a >= n:
        return [0,n-1]
    // (8): similar to (7), split interval, compute, and join
    else if b-a >= m:
        return [0, b-a-1] u mod2([a,b], [b-a+1,n])
    // (9): modulo has no effect
    else if m > b:
        return [a,b]
    // (10): there is some overlapping of [a,b] and [m,n]
    else if n > b:
        return [0,b]
    // (11): either compute all possibilities and join, or be imprecise
    else:
        return [0,n-1] // imprecise
Have fun! :)
Let mod = mod(x, y).
In general, 0 <= mod < y, so 0 <= mod < y_hi always holds.
But we can do better in some specific cases:
- if: x_hi < y_lo then div(x, y) = 0, then x_lo < mod < x_hi
- if: x_lo > y_hi then div(x, y) > 0, then y_lo < mod < y_hi
- if: x_lo < y_lo < y_hi < x_hi, then y_lo < mod < y_hi
- if: x_lo < y_lo < x_hi < y_hi, then y_lo < mod < x_hi
- if: y_lo < x_lo < y_hi < x_hi, then y_lo < mod < y_hi
....
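Whichever bounds you use, it is easy to sanity-check them against an exhaustive enumeration on small ranges (a sketch of mine; math.fmod gives the truncated modulo discussed in the other answer):
import math
def mod_bounds_bruteforce(x_lo, x_hi, y_lo, y_hi):
    # exact min/max of mod(x, y) over both integer intervals
    vals = [int(math.fmod(x, y))
            for x in range(x_lo, x_hi + 1)
            for y in range(y_lo, y_hi + 1) if y != 0]
    return (min(vals), max(vals))
print(mod_bounds_bruteforce(2, 5, 3, 4))  # exact bounds for x in [2,5], y in [3,4]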
