How is it possible to manage the gap of the CPLEX solver in Julia?

We can set the MIP gap manually in CPLEX:
IRP=Model(solver=CplexSolver(CPX_PARAM_EPGAP=0.00000001))
But if we want to attain a 0.02% gap, is this correct?
IRP=Model(solver=CplexSolver(CPX_PARAM_EPGAP=0.02))
Or is this correct?
IRP=Model(solver=CplexSolver(CPX_PARAM_EPGAP=0.0002))
Could you please tell me which of these is correct? Thanks very much.

According to the description in the IBM Knowledge Center for CPLEX, for 0.02% you need to enter 0.0002 = 0.02 * 0.01. Therefore, the second one is correct.
IRP=Model(solver=CplexSolver(CPX_PARAM_EPGAP=0.0002))
When the value
|bestbound-bestinteger|/(1e-10+|bestinteger|)
falls below the value of this parameter, the mixed integer
optimization is stopped.
For example, to instruct CPLEX to stop as soon as it has found a
feasible integer solution proved to be within five percent of optimal,
set the relative MIP gap tolerance to 0.05.
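To make the conversion explicit, here is a minimal sketch in Julia, using the same older JuMP/MathProgBase CplexSolver syntax as the question (on current JuMP versions the model is built with CPLEX.Optimizer and the gap is set through an optimizer attribute instead):
using JuMP, CPLEX
# CPX_PARAM_EPGAP expects a fraction, so convert the percentage first:
# 0.02% = 0.02 / 100 = 0.0002
gap_percent  = 0.02
gap_fraction = gap_percent / 100   # == 0.0002
IRP = Model(solver=CplexSolver(CPX_PARAM_EPGAP=gap_fraction))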

Related

How does one set a floating-point to infinity in Open Shading Language?

I'm doing some materials work right now using Open Shading Language (OSL), and it has a convenient function, isinf(), which will determine whether a floating-point is infinite or not...
However, I can't find anything in the documentation about actually setting a variable to infinite. I'm instead going to be setting it to "irrationally large", which will certainly work well enough for my purposes (effectively cell noise generation), but I'm curious whether there's a built-in way to express infinity in OSL?
The problem is that OSL tries very, very hard not to let you generate non-finite numbers, and there is no call that intentionally gives you an infinity value. You could use what in C would be FLT_MAX: 3.402823466e+38
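For reference, a tiny sketch in Julia (not OSL) confirming that this constant is the largest finite single-precision value, and that only overflowing past it produces an actual infinity:
# floatmax(Float32) is the largest finite 32-bit float, i.e. C's FLT_MAX.
big = floatmax(Float32)     # 3.4028235f38
println(isinf(big))         # false: still finite, so isinf() won't fire
println(isinf(2f0 * big))   # true: overflowing past floatmax yields Inf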

How to decide which mode to use for 'kaiming_normal' initialization

I have read several codebases that do layer initialization using PyTorch's nn.init.kaiming_normal_(). Some use the fan_in mode, which is the default. Of the many examples, one can be found here and is shown below.
init.kaiming_normal(m.weight.data, a=0, mode='fan_in')
However, sometimes I see people using the fan_out mode, as seen here and shown below.
if isinstance(m, nn.Conv2d):
    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
Can someone give me some guidelines or tips to help me decide which mode to select? Furthermore, I am working on image super-resolution and denoising tasks in PyTorch; which mode would be more beneficial for those?
According to the documentation:
Choosing 'fan_in' preserves the magnitude of the variance of the
weights in the forward pass. Choosing 'fan_out' preserves the
magnitudes in the backwards pass.
and according to Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015):
We note that it is sufficient to use either Eqn.(14) or
Eqn.(10)
where Eqn.(10) and Eqn.(14) are fan_in and fan_out respectively. Furthermore:
This means that if the initialization properly scales the backward
signal, then this is also the case for the forward signal; and vice
versa. For all models in this paper, both forms can make them converge
so, all in all, it doesn't matter much; it's more about what you are after. I assume that if you suspect your backward pass might be more "chaotic" (greater variance), it is worth changing the mode to fan_out. This might happen when the loss oscillates a lot (e.g. very easy examples followed by very hard ones).
The correct choice of nonlinearity is more important, where nonlinearity is the activation you are using after the layer you are currently initializing. Current defaults set it to leaky_relu with a=0, which is effectively the same as relu. If you are using leaky_relu you should change a to its slope.
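To make the variance-preservation point concrete, here is a small self-contained sketch in Julia rather than PyTorch; the kaiming helper is a hand-rolled stand-in for kaiming_normal_ that draws weights from N(0, 2/fan). With fan_in scaling, the pre-activation variance stays roughly constant through a stack of ReLU layers instead of exploding or vanishing:
using Statistics, Random
Random.seed!(0)
# Hand-rolled Kaiming-normal init for ReLU: entries ~ N(0, 2/fan).
kaiming(fan, nout, nin) = randn(Float32, nout, nin) .* sqrt(2f0 / fan)
function forward_variance(width, nsamples, depth)
    y = randn(Float32, width, nsamples)   # unit-variance pre-activations
    for _ in 1:depth
        W = kaiming(width, width, width)  # fan == fan_in == width here
        y = W * max.(y, 0f0)              # ReLU, then the next linear layer
    end
    return var(y)
end
println(forward_variance(512, 2_000, 5))  # stays near 1.0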

Semi-total derivative approximation with varying finite difference steps

I recently learned about the semi-total derivative approximation feature. I started to use this feature with B-splines and an explicit component. My current problem is that my design variables are inputs from two different components, similar to the XDSM below. As far as I can see, it is not possible to set up different finite-difference steps for different design variables. So, looking at the XDSM again, the control points, x and z, would have to share identical FD steps, i.e.
model.approx_totals(step=1)
works but
model.approx_totals(step=np.ones(5))
won't work. I guess one remedy is to use a relative step size, but some of my input bounds vary from 0 to xx, so maybe the relative step size is not the best. Is there a way to feed in FD steps as a vector, or something similar to:
for out in outputs:
    for dep, fdstep in zip(inputs, inputsteps):
        self.declare_partials(of=out, wrt=dep, method='fd', step=fdstep, form='central')
As of OpenMDAO V2.4, you don't have the ability to set per-variable FD step sizes when using approx_totals. The best option is just to use relative step sizes.
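To see why relative steps act as a per-variable remedy, here is a generic central-difference sketch in Julia, not OpenMDAO's API; fd_gradient and its minstep floor (a guard for variables sitting at 0, the case the question worries about) are made up for illustration:
# Central-difference gradient with a relative step per variable:
# step_i = rel * |x_i|, floored at minstep so variables at 0 still move.
function fd_gradient(f, x; rel=1e-6, minstep=1e-8)
    g = similar(x)
    for i in eachindex(x)
        h = max(rel * abs(x[i]), minstep)
        xp = copy(x); xp[i] += h
        xm = copy(x); xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2h)
    end
    return g
end
# Variables with very different scales get proportionate steps:
f(x) = x[1]^2 + 1e6 * x[2]^2
println(fd_gradient(f, [1e3, 1e-3]))   # ≈ [2000.0, 2000.0]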

Julia and system of ordinary differential equations

I want to try to solve a system of ordinary differential equations, perhaps parallelized, and came across Julia and DifferentialEquations.jl. The system looks like
x'(t) = f(t)*z(t)
y'(t) = g(t)*z(t)
z'(t) = f(t)*(1 - 2*x(t))/2 - g(t)*y(t)
over 10^2 < t < 10^14, but my boundary conditions are given at the final time:
x(10^14) == 0
y(10^14) == 0
z(10^14) == 0
Could someone please explain to me how to set up this problem in Julia? I checked the documentation and could only find u0 as a parameter, and it doesn't give details on specifying boundary conditions at the right-hand end of the interval. Many thanks!
You're looking to solve a boundary value problem (BVP). While this area is currently less developed than other parts of DifferentialEquations.jl, methods for it do exist and are shown in the tutorial on solving BVPs. The MIRK4 method may be the one to try.
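For illustration, here is a sketch following the general two-point BVProblem pattern from that tutorial. The coefficient functions f(t) and g(t) are placeholders, since the question does not give them, and the exact call signatures may differ between BoundaryValueDiffEq.jl versions:
using BoundaryValueDiffEq
f(t) = 1 / t   # placeholder coefficient
g(t) = 1 / t   # placeholder coefficient
function rhs!(du, u, p, t)   # u = [x, y, z]
    x, y, z = u
    du[1] = f(t) * z
    du[2] = g(t) * z
    du[3] = f(t) * (1 - 2x) / 2 - g(t) * y
end
function bc!(residual, sol, p, t)   # all conditions at the final time
    residual[1] = sol[end][1]   # x(10^14) == 0
    residual[2] = sol[end][2]   # y(10^14) == 0
    residual[3] = sol[end][3]   # z(10^14) == 0
end
u0_guess = [0.0, 0.0, 0.0]
bvp = BVProblem(rhs!, bc!, u0_guess, (1e2, 1e14))
sol = solve(bvp, MIRK4(), dt=1e13)   # MIRK4 requires an explicit dt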
I will note, however, that your timescale is quite large and may lead to numerical errors. Either higher-precision numbers (BigFloat, ArbFloat, DoubleFloat) may be required for handling that range, or you may want to rescale time in your equations so that it better fits standard double-precision floating-point numbers (Float64).

Arithmetic library for tracking worst-case error

Is there any library or tool that allows for knowing the maximum accumulated error in arithmetic operations?
For example if I make some iterative calculation ...
myVars = initialValues;
while (notEnded) {
    myVars = updateMyVars(myVars);
}
... I want to know at the end not only the calculated values, but also the potential error (the range of possible values if the result of each individual operation took the range limits of each operand).
I have already written a Java class called EADouble.java (EA for Error Accounting) which holds and updates the maximum positive and negative errors along with the calculated value, for some basic operations, but I'm afraid I might be reinventing a square wheel.
Any libraries/whatever in Java/whatever? Any suggestions?
Updated on July 11th: examined existing libraries and added a link to sample code.
As commented by fellows, there is the concept of interval arithmetic, and there was a previous question (A good uncertainty (interval) arithmetic library?) on the topic. There are just a couple of small issues with respect to my intent:
I care more about the "main" value than about the upper and lower bounds. However, adding that extra value to an open library should be straightforward.
Accounting for the error as an independent floating-point value might allow for finer accuracy (e.g. for addition, the upper bound would be incremented by just half a ULP instead of a whole ULP); see the sketch after this list.
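Here is a minimal Julia sketch of that second point; the EAFloat struct and the halfulp helper are invented names, and the propagation is first-order only, so it is a simplification of what EADouble does:
struct EAFloat
    val::Float64
    err::Float64               # accumulated worst-case absolute error
end
halfulp(x) = eps(abs(x)) / 2   # half a ULP at x: one operation's rounding
function Base.:+(a::EAFloat, b::EAFloat)
    v = a.val + b.val
    EAFloat(v, a.err + b.err + halfulp(v))
end
function Base.:*(a::EAFloat, b::EAFloat)
    v = a.val * b.val
    # input errors scaled by the other operand, plus the product's rounding
    EAFloat(v, abs(a.val) * b.err + abs(b.val) * a.err + halfulp(v))
end
x = EAFloat(0.1, halfulp(0.1))   # 0.1 itself was rounded once already
y = (x + x) * x
println((y.val, y.err))          # the value and its worst-case error bound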
Libraries I had a look at:
ia_math (Java. Just would have to add the main value. My favourite so far)
Boost/numeric/Interval (C++, Very complex/complete)
ErrorProp (Java, accounts value, and error as standard deviation)
The sample code (TestEADouble.java) runs a ballistic simulation and a calculation of the number e without problems. However, those are not very demanding scenarios.
Probably way too late, but look at BIAS/Profil: http://www.ti3.tuhh.de/keil/profil/index_e.html
It is pretty complete and simple, it accounts for computer rounding error, and if your errors are centered it gives easy access to your nominal value (through Mid(...)).
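And for comparison with the libraries above, the same worst-case tracking is only a few lines with Julia's IntervalArithmetic.jl (assuming that package; names such as interval, mid and diam follow its recent API and may differ across versions):
using IntervalArithmetic
# Every operation rounds outward, so the interval always encloses the
# exact result: the accumulated worst-case error comes for free.
function iterate_with_bounds(n)
    x = interval(1.0)   # a thin interval around 1.0
    for _ in 1:n
        x = x * interval(1.01) + interval(0.001)
    end
    return x
end
x = iterate_with_bounds(1000)
println(mid(x))    # the "main" value the question wants to keep
println(diam(x))   # the width, i.e. the accumulated worst-case error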
