Binary variable in Gurobi solver - julia

I run my code in Julia, in a notebook, using the Gurobi solver. The problem is that I have defined a binary variable, but Gurobi returns a fractional value. For example, the result should be the optimal number of buses in the system, but I get 7.999.

This is how integer programming works: solvers only enforce integrality up to a tolerance. Please see the Gurobi knowledge base article "Why does Gurobi sometimes return values for integer variables that are not integers?"
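In practice there are two hedged fixes, assuming the model is built through JuMP (the variable, objective, and constraint below are illustrative stand-ins, not the asker's model): round the reported value yourself, and/or tighten Gurobi's integrality tolerance IntFeasTol (default 1e-5):
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "IntFeasTol", 1e-9)  # tighten integrality tolerance (default 1e-5)
@variable(model, n_buses >= 0, Int)     # hypothetical integer variable
@objective(model, Min, n_buses)         # hypothetical objective
@constraint(model, n_buses >= 7.999)    # hypothetical constraint
optimize!(model)
n = round(Int, value(n_buses))          # round away any residual 7.999-style value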

Related

Mathematical constrained optimization in R

I have a mathematical optimization problem that I wish to solve in R; consider this system/problem:
How can I solve this problem in R?
In this model, Budget, p_l for all l, and mu_target are fixed constants, while mu is a given m-dimensional vector and R is a given n-by-m matrix.
I have looked into constrOptim and lp, but I cannot see how to implement the constraints.
Those functions require that I supply a "constraint" matrix, but my problem is that I simply don't know how to design that matrix. There are not many examples with decision variables on both sides of the equations.
Have a look at the nloptr package. It has quite extensive documentation with examples, and lots of algorithms to choose from depending on what problem you are trying to solve.
NLoptr link
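For concreteness, here is a hedged sketch of how constraints of the kind described (a budget on sum(p_l * x_l) and a target on mu' x) could be passed to nloptr() as plain R functions rather than a constraint matrix; all of the data and the objective below are made up:
library(nloptr)

m <- 3
mu <- c(0.05, 0.07, 0.06); p <- c(1, 2, 1.5)   # hypothetical data
Budget <- 10; mu_target <- 0.06

eval_f <- function(x) sum(x^2)                 # hypothetical objective
# nloptr expects inequality constraints in the form g(x) <= 0:
eval_g_ineq <- function(x) c(sum(p * x) - Budget,      # sum(p_l x_l) <= Budget
                             mu_target - sum(mu * x))  # mu' x >= mu_target

res <- nloptr(x0 = rep(1, m), eval_f = eval_f,
              lb = rep(0, m),
              eval_g_ineq = eval_g_ineq,
              opts = list(algorithm = "NLOPT_LN_COBYLA",  # derivative-free, handles nonlinear constraints
                          xtol_rel = 1e-8, maxeval = 10000))
res$solution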

Affinity Propagation in Julia with the Clustering pkg

I would like to use the Affinity Propagation algorithm from the Clustering pkg in Julia.
I have a collection of n points with m variables. I created an m×n array, but I would like to know what the S input is in the function affinityprop(S::DenseMatrix{T}; ...).
The Python sklearn implementation seems to take the m×n array as input.
Most likely it expects an affinity (similarity) matrix.
But there is clearly a lack of documentation; you should file an issue asking for proper documentation to be included with Clustering.jl.
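If that is right, a minimal sketch looks like the following, assuming Distances.jl for the pairwise similarities (negative squared Euclidean distances, which is also sklearn's default affinity); the data here is random filler:
using Clustering, Distances, LinearAlgebra, Statistics

X = rand(5, 100)                          # m = 5 variables, n = 100 points as columns
S = -pairwise(SqEuclidean(), X, dims=2)   # n-by-n similarity: negative squared distances
S[diagind(S)] .= median(S)                # diagonal holds the "preferences"; median is a common default
result = affinityprop(S)
assignments(result)                       # cluster index for each of the n points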

Is there an equivalent to matlab's rcond() function in Julia?

I'm porting some matlab code that uses rcond() to test for singularity, as also recommended here (for matlab singularity testing).
I see that there is a cond() function in Julia (as also in Matlab), but rcond() doesn't appear to be available by default:
ERROR: rcond not defined
I'd assume that rcond(), like the Matlab version, is more efficient than 1/cond(). Is there such a function in Julia, perhaps in an add-on package?
Julia calculates the 2-norm condition number as the ratio of the maximum to the minimum singular value (got to love open source, no more MATLAB black boxes!).
Julia doesn't have an rcond function in Base, and I'm unaware of one in any package. If it did, it would just be the reciprocal: the ratio of the minimum to the maximum instead. I'm not sure why it's efficient in MATLAB, but it's quite possible that whatever the reason, it doesn't carry through to Julia.
Matlab's rcond is an optimization based upon the fact that it's an estimate of the reciprocal condition number for square matrices. In my testing, and given that its help mentions LAPACK's 1-norm estimator, it appears to use LAPACK's dgecon.f. In fact, this is exactly what Julia does when you ask for the condition number of a square matrix in the 1- or Inf-norm.
So you can simply define
rcond(A::StridedMatrix) = 1/cond(A,1)
You can save Julia from twice-inverting LAPACK's result by manually combining cond(::StridedMatrix) and cond(::LU), but the savings there will almost certainly be immeasurable. There is a measurable saving, however, in taking norm(A, 1) directly instead of reconstructing a matrix similar to A from its LU factorization.
rcond(A::StridedMatrix) = LAPACK.gecon!('1', lufact(A).factors, norm(A, 1))
In my tests, this behaves identically to Matlab's rcond (2014b), and provides a decent speedup.
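Note that the snippet above predates Julia 1.0; a hedged translation to current Julia renames lufact to lu and uses opnorm for the matrix 1-norm:
using LinearAlgebra

# Same idea in Julia 1.x: lufact -> lu, matrix 1-norm via opnorm rather than norm
rcond(A::StridedMatrix) = LinearAlgebra.LAPACK.gecon!('1', lu(A).factors, opnorm(A, 1))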

Operations on long numbers in R

I aim to use maximum likelihood methods (usually about 10^5 iterations) with a probability distribution that creates very big integers and very small float values that cannot be stored in a numeric or float type.
I thought I would use as.bigq in the gmp package. My issue is that one can only add, subtract, multiply, and divide two objects of class/type bigq, while my distribution actually contains logarithm, power, gamma, and confluent hypergeometric functions.
What is my best option to deal with this issue?
Should I use another package?
Should I code all these functions for bigq objects?
Coding these functions in R may make them very slow, right?
How do I write the logarithm function using only the +, -, *, / operators? Should I approximate it with a Taylor series expansion?
How do I write the power function using only the +, -, *, / operators when the exponent is not an integer?
How do I write the confluent hypergeometric function (the equivalent of the Hypergeometric1F1Regularized[..] function in Mathematica)?
I could eventually write these functions in C and call them from R, but that sounds like a lot of complicated work for little gain, especially if I also have to use the gmp library in C to handle these big numbers.
Most likely all your problems can be solved with Rmpfr, which allows you to use all of the functions returned by getGroupMembers("Math") with arbitrary precision.
Vignette: http://cran.r-project.org/web/packages/Rmpfr/vignettes/Rmpfr-pkg.pdf
Simple example of what it can do:
library(Rmpfr)
test <- mpfr(rnorm(100, mean = 0, sd = 0.0001), 240)  # 100 tiny values at 240-bit precision
Reduce("*", test)  # a product this small would underflow in double precision
I don't THINK it has hypergeometric functions though...
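As a hedged illustration of that Math-group coverage, the logarithm, gamma, and non-integer powers from the question all work directly on mpfr numbers:
library(Rmpfr)

x <- mpfr(1500, precBits = 240)
log(x)                         # logarithm at 240-bit precision
gamma(x)                       # gamma(1500) overflows a double, but not an mpfr
x^mpfr(0.5, precBits = 240)    # non-integer power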

Optimization in R with arbitrary constraints

I have done it in Excel but need to run a proper simulation in R.
I need to minimize a function F(x) (x is a vector) subject to the constraints that sum(x) = 1, all values in x lie in [0,1], and another function G(x) > G_0.
I have tried optim and constrOptim. Neither of them offers this option.
The problem you are referring to is (presumably) a non-linear optimization with non-linear constraints. This is one of the most general optimization problems.
The package I have used for these purposes is called nloptr: see here. From my experience, it is both versatile and fast. You can specify equality and inequality constraints by setting eval_g_eq and eval_g_ineq, respectively. If the Jacobians are known explicitly (can be derived analytically), specify them for faster convergence; otherwise a numerical approximation is used.
Use this list as a general reference to optimization problems.
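As a concrete, hedged sketch (F, G, and G0 below are placeholders for the functions in the question), the three constraints map onto lb/ub, eval_g_eq, and eval_g_ineq. Note that nloptr expects constraints in the form g(x) <= 0, and the global ISRES algorithm is one of the few that accepts nonlinear equality constraints:
library(nloptr)

F  <- function(x) sum((x - c(0.1, 0.2, 0.3, 0.4))^2)   # hypothetical objective
G  <- function(x) prod(x)                              # hypothetical G
G0 <- 1e-4

res <- nloptr(
  x0 = rep(0.25, 4),
  eval_f = F,
  lb = rep(0, 4), ub = rep(1, 4),             # x in [0, 1]
  eval_g_eq   = function(x) sum(x) - 1,       # sum(x) = 1
  eval_g_ineq = function(x) G0 - G(x),        # G(x) >= G0, written as g(x) <= 0
  opts = list(algorithm = "NLOPT_GN_ISRES", xtol_rel = 1e-8, maxeval = 1e5)
)
res$solution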
Write the set of equations using the Lagrange multiplier, then solve using the R command nlm.
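A hedged sketch of that route, for the equality constraint alone and a hypothetical F(x) = sum(x^2) whose gradient 2x is known analytically: stack the stationarity conditions of the Lagrangian and minimize their squared residuals with nlm():
n <- 4
kkt <- function(z) {
  x <- z[1:n]; lambda <- z[n + 1]
  r <- c(2 * x + lambda,    # gradient of F(x) + lambda * (sum(x) - 1) w.r.t. x
         sum(x) - 1)        # the constraint itself
  sum(r^2)                  # zero exactly at a stationary point
}
sol <- nlm(kkt, p = rep(0.1, n + 1))
sol$estimate[1:n]           # converges to rep(1/n, n)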
You can do this with the OpenMx package (currently hosted at the site listed below; a 2.0 release on CRAN is planned this year).
It is a general-purpose package mostly used for structural equation modelling, but it handles nonlinear constraints.
For your case, build an mxModel() with your algebras expressed in mxAlgebra() calls and the constraints in mxConstraint() calls.
When you mxRun() the model, the algebras will be solved within the constraints, if possible.
http://openmx.psyc.virginia.edu/
