Why is the @objective value negative? - julia

I have this model and I am using Cbc:
@variable(premex, PRODAMOUNT[op_k in keys(_ORDER_PRODUCTs_ALL), u_k in keys(UNITS), t in TIME], Int, lower_bound = 0)
@objective(
    premex,
    Min,
    sum(
        sum(
            (
                (iszero(
                    sum(
                        PRODAMOUNT[op_k, u_k, t] * _PRODUCTs_ALL[op["product"]]["bagSize"]
                        for (op_k, op) in _ORDER_PRODUCTs_ALL
                    )
                ) ? 0 : u["cap"]) -
                sum(
                    PRODAMOUNT[op_k, u_k, t] * _PRODUCTs_ALL[op["product"]]["bagSize"]
                    for (op_k, op) in _ORDER_PRODUCTs_ALL
                )
            ) * u["util_cost1"]
            for (u_k, u) in UNITS
        )
        for t in TIME
    )
)
And here is a constraint that prevents PRODAMOUNT on any one UNIT / t from exceeding that unit's maximum capacity:
for t in TIME
    @constraint(
        premex,
        [u_k in keys(UNITS)],
        sum(
            PRODAMOUNT[op_k, u_k, t] * _PRODUCTs_ALL[op["product"]]["bagSize"]
            for (op_k, op) in _ORDER_PRODUCTs_ALL
        )
        <= UNITS[u_k]["cap"]
    )
end
Objective value: -461275000.00000000
How can that be? Why is the value negative?
Note that UNITS[u_k]["cap"] and u["cap"] are the same.

Please provide a link when cross posting: https://discourse.julialang.org/t/why-the-objective-value-is-negative/46781
Your objective value is negative because the `iszero(...) ? 0 : u["cap"]` condition is evaluated while the expression is being built, before anything is passed to Cbc; the first term becomes a fixed constant, leaving only the negated summation to drive the objective.
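The pitfall generalizes: any plain Julia conditional inside `@objective` is resolved once, at model-build time, so it can never react to the eventual solution values. A minimal Python sketch of the same trap, with a hypothetical `SymbolicSum` class standing in for a JuMP expression:

```python
class SymbolicSum:
    """Toy stand-in for a solver's symbolic expression."""
    def __init__(self, terms):
        self.terms = terms  # variable name -> coefficient

    def is_zero(self):
        # A *structural* check: does the expression have any terms?
        # It cannot know what value the solver will assign later.
        return not self.terms


prod = SymbolicSum({"PRODAMOUNT[1,1,1]": 25.0})  # bagSize-weighted sum
cap = 100.0

# This ternary runs NOW, while the objective is being built. Whichever
# branch it takes is frozen in before the solver ever starts.
first_term = 0 if prod.is_zero() else cap
print(first_term)
```

To make an objective term depend on whether production is zero *at the solution*, you need a binary indicator variable plus a big-M or indicator constraint, not a Julia-level conditional.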

Related

How to check if a user-defined function is already registered in Julia/JuMP

I want to check if a user-defined function is already registered in JuMP/julia. Here's an example:
function foo( f, f1, f2 )
    if !function_is_registered(:f) # This is what I'm looking for
        JuMP.register(:f, 1, f, f1, f2)
    end
    ####
    # Optimization problem here using f
    # Leads to some return statement
    ####
end
f(x) = exp( A * x )
f1(x) = A * exp( A * x )
f2(x) = A * A * exp( A * x )
# Function to register
A = 2
use1 = foo(f, f1, f2)
use2 = foo(f, f1, f2)
# This second usage would fail without the check. Can't re-register f.
As should be obvious from the comments, the check is needed for the second usage. As far as I can tell, JuMP registers functions at a global level - once registered they can't be re-defined locally (right? If they can, this solves my problem too!).
This will do what you want.
using JuMP
using Ipopt
function set_A_sol( A )
    # Local redefinition of f
    f  = (x) -> exp( A * x ) - x
    f1 = (x) -> A * exp( A * x ) - 1.0
    f2 = (x) -> A * A * exp( A * x )
    try
        JuMP.register(:f, 1, f, f1, f2)
    catch e
        if e.msg == "Operator f has already been defined"
            ind = pop!( ReverseDiffSparse.univariate_operator_to_id, :f )
            deleteat!( ReverseDiffSparse.univariate_operators, ind )
            pop!( ReverseDiffSparse.user_univariate_operator_f, ind )
            pop!( ReverseDiffSparse.user_univariate_operator_fprime, ind )
            pop!( ReverseDiffSparse.user_univariate_operator_fprimeprime, ind )
            JuMP.register(:f, 1, f, f1, f2)
        end
    end
    mod = Model(solver=Ipopt.IpoptSolver(print_level=0))
    @variable(mod, -Inf <= x <= Inf)
    @NLobjective(mod, Min, f(x))
    status = solve(mod)
    return getvalue(x)
end
julia> ans1 = set_A_sol(0.5)
1.3862943611200509
julia> ans2 = set_A_sol(1.0)
0.0
julia> ans3 = set_A_sol(2.0)
-0.34657359027997264
Explanation:
If you look at the register function, defined in nlp.jl, "registering" involves adding the symbol to dictionaries held in ReverseDiffSparse.
Register a function and inspect those dictionaries manually to see what they look like.
"De-registering" then simply involves removing all traces of :f and its derivatives from every place where they were recorded.
Here's an extended answer based on Tasos's suggestions (thanks Tasos!).
tl;dr: You can use a try-catch statement to guard registration of something that is already registered. You can also change parameters in the objective function from the global environment, but you cannot wrap them in functions.
The following effectively permits checking for redefinition of a function:
function foo2( f, f1, f2 )
    try
        JuMP.register(:f, 1, f, f1, f2)
    catch
    end
    ####
    # Optimization problem here using f
    # Leads to some return statement
    ####
end
What's even better is that you can actually exploit the naive way JuMP looks up f to change parameters in the objective function (although you need to rebuild the model each time, since you can't put an @NLparameter inside a user-defined objective). For example:
using JuMP
using Ipopt

# Period objective function
f  = (x) -> exp( A * x ) - x
f1 = (x) -> A * exp( A * x ) - 1.0
f2 = (x) -> A * A * exp( A * x )
JuMP.register(:f, 1, f, f1, f2)

A = 1.0
mod = Model(solver=Ipopt.IpoptSolver(print_level=0))
@variable(mod, -Inf <= x <= Inf)
@NLobjective(mod, Min, f(x))
status = solve(mod)
println("x = ", getvalue(x))
# Returns 0

A = 2.0
mod = Model(solver=Ipopt.IpoptSolver(print_level=0))
@variable(mod, -Inf <= x <= Inf)
@NLobjective(mod, Min, f(x))
status = solve(mod)
println("x = ", getvalue(x))
# Returns -0.34657 (correct)
You can even redefine f to something totally different and that will still work too. However, you can't wrap this in a function. For example:
function set_A_sol( A )
    # Local redefinition of f
    f  = (x) -> exp( A * x ) - x
    f1 = (x) -> A * exp( A * x ) - 1.0
    f2 = (x) -> A * A * exp( A * x )
    try
        JuMP.register(:f, 1, f, f1, f2)
    catch
    end
    mod = Model(solver=Ipopt.IpoptSolver(print_level=0))
    @variable(mod, -Inf <= x <= Inf)
    @NLobjective(mod, Min, f(x))
    status = solve(mod)
    return getvalue(x)
end
ans1 = set_A_sol(0.5)
ans2 = set_A_sol(1.0)
ans3 = set_A_sol(2.0)
# All return 1.38629
I don't totally understand why, but it appears that the first time A is set inside set_A_sol, the JuMP registration fixes A once and for all. Since this is ultimately what I want to be able to do, I'm still stuck. Suggestions welcome!

Differential Eq using deSolve in R

My apologies for being unclear earlier. I now understand the function a bit more, but could use some assistance on a few aspects.
I would like to get back a relationship of conversion ( X ) versus volume ( V ), or the other way around would be fine as well. It seems to me that the traditional "times" argument is what I want to replace with an X sequence from 0 to 1 (X is conversion, remember, so it is bounded by 0 and 1).
Below, r.w is the reaction rate, a function of the partial pressures at any given moment, which are described as P.w, P.x, P.y, and P.z and are themselves functions of the initial conditions (P.w0, v.0) and the conversion X.
Thank you in advance
rm(list = ls())
library(deSolve)  # for ode()

weight <- function( Vols, State, Pars ) {
  with(as.list(c(State, Pars)), {
    y     <- 1
    delta <- 2
    ya.0  <- 0.4
    eps   <- ya.0 * delta
    temp  <- 800
    R     <- 8.314
    k.2   <- exp( (35000 / (R * temp)) - 7.912 )
    K.3   <- exp( 4.084 / temp - 4.33 )
    P.w <- P.w0 * (1 - X)   * y / (1 + eps * X)
    P.x <- P.w0 * (1 - 2*X) * y / (1 + eps * X)
    P.y <- P.w0 * (1 + X)   * y / (1 + eps * X)
    P.z <- P.w0 * (1 + 4*X) * y / (1 + eps * X)
    r.w  <- k.2 * (K.3 * P.w * P.x^2 - P.y * P.z^4)
    F.w0 <- P.w0 * v.0 / (R * temp)
    dX.dq <- r.w / F.w0
    return(list(dX.dq))
  })
}

pars <- c( y = 1,
           P.w0 = 23,
           v.0 = 120 )
yini <- c( X = 0 )
vols <- seq( 0, 100, by = 1 )
out  <- ode( yini, vols, weight, pars )
Just running
vol.func(0,0,params)
i.e., evaluating the gradient at the initial conditions, gives NaN. The proper way to diagnose this is to divide your complex gradient expressions up into separate terms and see which one is causing trouble. I'm not going to go through this in detail, but as @Sixiang.Hu points out in comments above, you're dividing by V in your gradient function, which will cause infinite values if the numerator is finite or NaN values if the numerator is zero ...
More generally, it's not clear whether you understand that the first argument to the gradient function (your vol.func) is supposed to be the current time, not a value of the state variable. Perhaps V is supposed to be your state variable, and X should be a parameter ...?
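That diagnostic step is cheap to automate: call the gradient function once at the initial state before handing it to the integrator. A Python translation of the `weight` function above (parameter values copied from the R code; a sketch for sanity-checking, not a replacement for deSolve):

```python
import math

def weight(V, X, P_w0=23.0, v_0=120.0, y=1.0):
    """Gradient dX/dV, translated line-by-line from the R function."""
    delta, ya_0 = 2.0, 0.4
    eps = ya_0 * delta
    temp, R = 800.0, 8.314
    k2 = math.exp(35000.0 / (R * temp) - 7.912)
    K3 = math.exp(4.084 / temp - 4.33)
    P_w = P_w0 * (1 - X) * y / (1 + eps * X)
    P_x = P_w0 * (1 - 2 * X) * y / (1 + eps * X)
    P_y = P_w0 * (1 + X) * y / (1 + eps * X)
    P_z = P_w0 * (1 + 4 * X) * y / (1 + eps * X)
    r_w = k2 * (K3 * P_w * P_x ** 2 - P_y * P_z ** 4)
    F_w0 = P_w0 * v_0 / (R * temp)
    return r_w / F_w0

# Evaluate once at the initial conditions (V = 0, X = 0):
g0 = weight(0.0, 0.0)
print(math.isfinite(g0), g0 < 0)  # finite here, but large and negative
```

With these parameters the initial gradient is finite but huge in magnitude and negative, so X immediately leaves [0, 1]; that is exactly the kind of term-by-term surprise this pre-flight check is meant to surface.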

Sorting with parity in julia

Suppose I have the following array:
[6,3,3,5,6]
Is there an already-implemented way to sort the array that also returns the number of transpositions the algorithm had to make to sort it?
For instance, the 6 has to move 3 places to the right for the array to be ordered, which would give me parity -1.
The general problem is to sort an arbitrary array (all integers, possibly with repeated values!) and to know the parity of the permutation the algorithm performed.
a = [6,3,3,5,6]
sortperm(a) - collect(1:length(a))
Results in
5-element Array{Int64,1}:
  1
  1
  1
 -3
  0
sortperm tells you which original index lands in each position of the sorted array. Subtracting collect(1:length(a)) compares each of those indices with its original position.
If your array is small, you can compute the determinant of the permutation matrix:
using LinearAlgebra  # for det

function permutation_sign_1(p)
    n = length(p)
    A = zeros(n, n)
    for i in 1:n
        A[i, p[i]] = 1
    end
    det(A)
end
In general, you can decompose the permutation into a product of cycles, count the cycles of even length, and return the parity of that count.
function permutation_sign_2(p)
    n = length(p)
    not_seen = Set{Int}(1:n)
    seen = Set{Int}()
    cycles = Array{Int,1}[]
    while !isempty(not_seen)
        cycle = Int[]
        x = pop!(not_seen)
        while !in(x, seen)
            push!(cycle, x)
            push!(seen, x)
            x = p[x]
            pop!(not_seen, x, 0)
        end
        push!(cycles, cycle)
    end
    cycle_lengths = map(length, cycles)
    even_cycles = filter(i -> i % 2 == 0, cycle_lengths)
    length(even_cycles) % 2 == 0 ? 1 : -1
end
The parity of a permutation can also be obtained from its number of inversions, which can be computed by slightly modifying the merge sort algorithm.
Since counting inversions is also how Kendall's tau is computed (see `StatsBase.corkendall`), there is already an implementation:
using StatsBase

function permutation_sign_3(p)
    x = copy(p)
    number_of_inversions = StatsBase.swaps!(x)
    number_of_inversions % 2 == 0 ? +1 : -1
end
On your example, those three functions give the same result:
x = [6,3,3,5,6]
p = sortperm(x)
permutation_sign_1( p )
permutation_sign_2( p )
permutation_sign_3( p ) # -1
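The inversion-count definition behind the third function is also easy to state directly; here is a quadratic-time Python sketch (fine for small permutations, unlike the O(n log n) merge-sort version):

```python
def permutation_sign(p):
    """Sign of a permutation: +1 if the number of inversions
    (pairs appearing out of order) is even, else -1."""
    inversions = sum(
        1
        for i in range(len(p))
        for j in range(i + 1, len(p))
        if p[i] > p[j]
    )
    return 1 if inversions % 2 == 0 else -1


# sortperm([6,3,3,5,6]) gives [2,3,4,1,5]; 0-based that is [1,2,3,0,4]
print(permutation_sign([1, 2, 3, 0, 4]))  # -1
```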

How to calculate the entropy of a coin flip

I would like to know ...
In repeated coin flips, how do I calculate the entropy of the random variable X that represents the number of flips until getting "heads" for the first time?
The variable X can take any integer value from 1 through infinity. The probabilities are:
p(X = i) = (1/2)^i
The entropy is:
H = - Sum {i from 1 to infinity} ( p(X = i) * log2(p(X = i)) )
= - Sum {i from 1 to infinity} ( 1/2^i * log2(1/2^i) )
= - Sum {i from 1 to infinity} ( 1/2^i * i * log2(1/2) )
= Sum {i from 1 to infinity} ( 1/2^i * i )
Using the identity Sum {i from 1 to infinity} ( i * x^i ) = x / (1-x)^2 with x = 1/2, this sum evaluates to 2, so:
H = 2 bits
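A quick numerical check of that closed form (Python; the partial sums of i/2^i converge to 2 rapidly):

```python
# H = sum_{i>=1} i / 2**i; truncating at i = 99 is far past convergence
H = sum(i / 2 ** i for i in range(1, 100))
print(H)  # 2.0, up to floating-point rounding
```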

Math - how to calculate arccos formula?

Can anyone help me calculate arccos(x) with some formula?
I'm trying to do it in an environment (SAP WebI) with a limited set of math functions (only cos, sin, tan, ...).
You can try using Newton's method:
function acos(a) {
    delta = 1e-5
    // a lousy first approximation
    x = pi * (1 - a) / 2
    last = x
    x += (cos(x) - a) / sin(x)
    while ( abs(x - last) > delta ) {
        last = x
        x += (cos(x) - a) / sin(x)
    }
    return x
}
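A runnable Python version of the same iteration, checked against math.acos (it stops when successive iterates agree to within delta; valid for -1 < a < 1, where sin(x) stays nonzero at the root):

```python
import math

def acos_newton(a, delta=1e-10):
    """Newton's method on cos(x) - a = 0:
    x_{k+1} = x_k + (cos(x_k) - a) / sin(x_k)."""
    x = math.pi * (1 - a) / 2  # the same rough first approximation
    while True:
        last = x
        x += (math.cos(x) - a) / math.sin(x)
        if abs(x - last) <= delta:
            return x

print(acos_newton(0.5))  # ~1.0471975 (pi/3)
```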
From https://en.wikipedia.org/wiki/Inverse_trigonometric_functions :
# for -1 < x <= +1 :
acos(x) == 2*atan( sqrt(1-x*x)/(1+x) )
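The identity is easy to spot-check numerically (Python; note the restriction -1 < x <= 1, since the denominator 1 + x vanishes at x = -1):

```python
import math

# Half-angle identity: acos(x) == 2*atan(sqrt(1 - x*x) / (1 + x))
for x in (-0.99, -0.5, 0.0, 0.5, 1.0):
    lhs = math.acos(x)
    rhs = 2 * math.atan(math.sqrt(1 - x * x) / (1 + x))
    assert abs(lhs - rhs) < 1e-12
print("identity holds on all test points")
```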
In WebI there is no built-in way to calculate ACOS, so there are two solutions:
1) create a new custom function in C++ and import it into WebI
2) create a universe and use ACOS there.
