Is there a way to set the optimality gap in the CBC solver? As of now I am able to set a maximum solve time on the solver, but I cannot figure out how to set the optimality gap as the stopping criterion. Thanks a lot.
using JuMP, Cbc
m = Model( solver = CbcSolver(Sec=70*60))
You can write:
using JuMP, Cbc
model = Model()
set_optimizer(model, Cbc.Optimizer)
set_optimizer_attribute(model, "ratioGap", 0.05)
as per the documentation here:
https://github.com/jump-dev/Cbc.jl
ratioGap -- Terminate after optimality gap is smaller than this relative fraction.
If you are using Pyomo rather than JuMP, the same CBC option can be set with:
solver.options['ratioGAP'] = 0.01
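For completeness, here is a minimal Pyomo sketch (a toy model of my own, assuming the cbc executable is on your PATH) that combines the time limit from the question with the gap:

from pyomo.environ import ConcreteModel, Var, Objective, Binary, maximize, SolverFactory

model = ConcreteModel()
model.x = Var([1, 2], domain=Binary)
model.obj = Objective(expr=model.x[1] + 2 * model.x[2], sense=maximize)

solver = SolverFactory('cbc')
solver.options['seconds'] = 70 * 60   # time limit, as in the question
solver.options['ratioGAP'] = 0.05     # stop once the relative gap is below 5%
solver.solve(model)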
The parameters "atol" and "rtol" in the sklearn.neighbors.KernelDensity class default to 0. What does this mean?
Does it mean it uses all the data points to calculate the likelihood?
What will happen when they are not set to 0?
You can check the sklearn documentation:
atol : float, default=0
The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution.
rtol : float, default=0
The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution.
Intuitively, it means that when sklearn calculates the kernel density, the program may stop early and accept a small, bounded error in the result rather than computing it to full precision. It's a trade-off between time and accuracy. You can experiment with what range of atol/rtol you can accept during the development stage, so you don't need to wait so long when testing the code.
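As a rough sketch of the trade-off (the timings and the exact error depend on your data and bandwidth, so treat the numbers as illustrative):

import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.randn(100_000, 1)  # a large sample, so the tolerances actually matter

# exact evaluation (the defaults: atol=0, rtol=0)
kde_exact = KernelDensity(bandwidth=0.5).fit(X)
# approximate evaluation: allow a small relative error per point
kde_fast = KernelDensity(bandwidth=0.5, rtol=1e-4).fit(X)

grid = np.linspace(-4, 4, 200).reshape(-1, 1)
exact = kde_exact.score_samples(grid)
fast = kde_fast.score_samples(grid)
print(np.max(np.abs(exact - fast)))  # small, bounded by the tolerance

The second score_samples call is typically noticeably faster, at the cost of the small error printed at the end.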
I cannot put the whole code here, and I was not able to reproduce the problem with a small example, but here is the beginning of the code:
using JuMP, Cbc, StatsBase
n = 3;
V = 1:(2n+1);
model = Model(with_optimizer(Cbc.Optimizer, seconds=120));
@variable(model, x[V], Bin);
...
@objective(model, Min, total_blah);
JuMP.optimize!(model)
result = termination_status(model)
JuMP.objective_value(model)
xsol = JuMP.value.(x);
The problem I have is that the solver returns a solution in which some entries of xsol have values like 0.99995, whereas I am expecting binary values, i.e. either 0 or 1.
Can someone explain this behavior?
I looked this up and CBC has an option called integerTolerance (or integerT) that helps CBC to decide whether a variable is integer valued. Using CBC.exe, I see:
Coin:integerTolerance
integerTolerance has value 1e-006
Indeed the default is 1e-6. You cannot set it to zero, but you can make it smaller (the valid range is 1e-020 to 0.5). (The only solver I know of that allows this tolerance to be set to zero is CPLEX; usually doing that leads to longer solution times.)
In general I would advise keeping it as it is. If small deviations from integer values bother you, round the integer variables in the solution before printing. This gives better-looking solutions (but the rounding step may make the solution slightly infeasible).
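As a sketch of that rounding step (a hypothetical helper; the tolerance is widened to 1e-4 here so it catches values like the 0.99995 from the question):

def round_near_integers(values, tol=1e-4):
    # round values that are within tol of an integer; leave the rest untouched
    out = []
    for v in values:
        r = round(v)
        out.append(r if abs(v - r) <= tol else v)
    return out

print(round_near_integers([0.99995, 1e-7, 0.5]))  # [1, 0, 0.5]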
I have a simple question. I am trying to understand why there is a large difference between the network responses given by the GPU (CUDA) and the CPU. Here's a minimal example:
require 'torch'
require 'nn'
require 'cunn'
require 'paths'
-- a small convnet
net = nn.Sequential()
net:add(nn.SpatialConvolution(3,16, 3,3))
net:add(nn.SpatialConvolution(16,8, 3,3))
net:add(nn.SpatialConvolution(8,1, 3,3))
-- randomize weights
local w = net:getParameters()
w:copy(torch.Tensor(w:nElement()):uniform(-1000,1000))
-- random input
x = torch.Tensor(3, 10, 10):uniform(-1,1)
-- network on gpu
net:cuda()
y = net:forward(x:cuda())
print(y)
-- network on cpu
y2 = net:clone():double():forward(x)
print(y2)
-- check difference (typically ~10000)
print("Mean Abs. Diff:")
print(torch.abs(y2-y:double()):sum()/y2:nElement())
Am I doing something wrong here, or is this an expected difference between CPU and GPU computation?
It turns out, even though the mean absolute difference can be large, the percentage difference is quite small (on the order of 1e-5%):
print("Mean Abs. % Diff:")
print(torch.abs(y2-y:double()):cdiv(torch.abs(y2)):sum() / y2:nElement())
Is the mean absolute difference large due to some difference in how CUDA handles floating-point precision compared to the CPU?
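Here is a plausible numpy sketch of the underlying effect (an assumption on my part: float32 rounding combined with a different accumulation order, which is the usual source of small CPU/GPU discrepancies). With large-magnitude values, like those produced by weights drawn from uniform(-1000, 1000), the absolute error scales with the magnitudes while the relative error stays near float32 machine epsilon:

import numpy as np

rng = np.random.RandomState(0)
a = rng.uniform(0, 1000, size=100_000).astype(np.float32)

s1 = np.sum(a)           # numpy's pairwise summation
s2 = np.float32(0.0)
for v in a:              # naive sequential accumulation: a different order
    s2 += v

print(abs(float(s1) - float(s2)))              # "large" in absolute terms
print(abs(float(s1) - float(s2)) / float(s1))  # tiny in relative terms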
I would like to have a function f(x) that gives good pseudo-random numbers in uniform distribution according to value x. I am aware of linear congruential generators; however, these work in iterations, i.e. I provide the initial seed and then I get a sequence of random values one by one. This is not what I want, because if I want to get, let's say, the 200000th number in the sequence, I have to compute numbers 1 ... 199999. I need a function that is given by one simple formula using basic operations such as +, *, mod, etc. I am also aware of hash functions, but I didn't find any that suits these needs. I might come up with some function myself, but I'd like to use something that has been tested to give decent pseudo-random values. Is there anything like that being used?
You might consider multiplicative congruential generators. These are linear congruential generators without the additive constant: X_{i+1} = a * X_i % c for suitable constants a and c. Expanding this out for a few iterations will convince you that X_k = a^k * X_0 % c, where X_0 is your seed value. This can be calculated in O(log(k)) time using fast modular exponentiation. There is no need to calculate the first 199,999 values to get the 200,000th: you can find it in a number of steps proportional to log2(200,000), about 18.
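For instance, a Python sketch using the classic MINSTD constants (Python's three-argument pow is fast modular exponentiation):

A = 16807        # MINSTD multiplier
M = 2**31 - 1    # Mersenne prime modulus

def mcg_nth(seed, k):
    # k-th value of X_{i+1} = A*X_i % M, computed directly as A^k * X_0 % M
    return (pow(A, k, M) * seed) % M

# check against stepping the generator one value at a time
x = 12345
for _ in range(200_000):
    x = (A * x) % M
assert x == mcg_nth(12345, 200_000)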
Actually, for an LCG with an additive constant it works as well. There is a paper by F. Brown, "Random Number Generation with Arbitrary Stride", Trans. Am. Nucl. Soc. (Nov. 1994). Based on this paper there is a reasonable LCG with decent quality and a log2(N) skip-ahead feature, used by the well-known Monte Carlo package MCNP5. A C++ implementation is at https://github.com/Iwan-Zotow/LCG-PLE63/. A further development of this idea (RNGs with logarithmic skip-ahead) is the pretty decent family of generators at http://www.pcg-random.org/
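A sketch of the skip-ahead for a full LCG (glibc-style constants, chosen just for illustration). The trick is that applying f(x) = (a*x + c) % m twice gives (a^2*x + c*(a+1)) % m, so the affine map can be squared just like a number:

def lcg_skip(seed, k, a, c, m):
    # value k steps ahead of X_{i+1} = (a*X_i + c) % m, in O(log k) time
    a_k, c_k = 1, 0               # identity map: x -> 1*x + 0
    while k > 0:
        if k & 1:                 # fold the current power of the map into the result
            a_k = (a * a_k) % m
            c_k = (a * c_k + c) % m
        c = (c * (a + 1)) % m     # square the map: f(f(x)) = a^2*x + c*(a+1)
        a = (a * a) % m
        k >>= 1
    return (a_k * seed + c_k) % m

a, c, m, x = 1103515245, 12345, 2**31, 42
y = x
for _ in range(200_000):
    y = (a * y + c) % m
assert y == lcg_skip(x, 200_000, a, c, m)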
You could use a simple encryption algorithm that can encrypt the numbers 1, 2, 3, ... Since encryption is reversible, each input number will have a unique output. The 200000th number in your sequence is encrypt(key, 200000). Use DES for 64 bit numbers, AES for 128 bit numbers and you can roll your own simple Feistel cipher for 32 bit or 16 bit numbers.
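As an illustration, a toy 4-round Feistel network on 32-bit values (the constants are arbitrary; this is a sketch of the idea, not a vetted cipher). Since each round is reversible, the whole map is a bijection, so f(1), f(2), f(3), ... never repeat:

def feistel32(i, key=0x9E3779B9, rounds=4):
    # reversible mapping from an index to a pseudo-random 32-bit value
    left, right = (i >> 16) & 0xFFFF, i & 0xFFFF
    for r in range(rounds):
        # simple round function; any mixing of (right, key, round number) works
        f = (right * 0x5BD1 + (key ^ r)) & 0xFFFF
        left, right = right, left ^ f
    return (left << 16) | right

# distinct inputs give distinct outputs
assert len({feistel32(i) for i in range(100_000)}) == 100_000
print(feistel32(200_000))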
I am running clustering using the Mclust function. I need to get the number of iterations the algorithm used to reach the answer, but I can't seem to find it anywhere. I would not mind using another function that performs Gaussian mixture modelling via EM if it provides the number of iterations as part of its output.
There seems to be no clear way to extract this, as far as I can tell. Here's a pretty hacky and approximate way to get at it though.
You can set the maximum number of iterations for EM using the control parameter,
x<-c(rnorm(100),rnorm(100,10,1))
mod<-Mclust(x,control = emControl(itmax=100))
These are set to Inf by default, so the EM terminates only when the log-likelihood changes in increments smaller than the tolerance. If you set itmax, the EM will still terminate, but it will throw a warning that the algorithm stopped before reaching the tolerance bounds.
So you could adjust itmax a few times to get a sense of how many iterations are required before the EM terminates naturally. For example,
mod<-Mclust(x,control = emControl(itmax=102))
throws a warning but
mod<-Mclust(x,control = emControl(itmax=103))
does not.
So it seems that 103 iterations are required to reach the exit conditions (with the default tolerance parameters).
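If you are open to a non-R alternative, scikit-learn's GaussianMixture reports the EM iteration count directly. A Python sketch mirroring the data above:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# two well-separated Gaussian clusters, as in the R example
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(10, 1, 100)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, max_iter=500, tol=1e-8).fit(x)
print(gm.n_iter_)      # number of EM iterations actually used
print(gm.converged_)   # whether EM reached the tolerance before max_iter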