If I have a design variable with lower and upper bounds of 0 and 1e6 and an initial value of 1e5, it is surely very insensitive to the default finite difference step of 1e-6. Is the correct way of overcoming this problem:
a) changing the FD step size, e.g. to 5e4, or
b) scaling the design variable with a 'scaler' of 1e-6 and setting the lower and upper bounds to 0 and 1, while keeping the default FD step?
I think "a" is your best bet if you are using the latest (OpenMDAO 2.x).
When you call declare_partials for a specific derivative in a component, or when you call approx_totals on a group, you can pass in an optional argument called "step", which contains the desired stepsize. Since your variable spans [0, 1e6], I think maybe a step size between 1e1 and 1e3 would work for you.
Idea "b" wouldn't actually work at present for fixing the FD problem. The step size you declare is applied to the unscaled value of the input, so you would still have the same precision problem. This is true for both kinds of scaling (1. specified on add_output, and 2. specified on add_design_var). Note though that you may still want to scale this problem anyway because the optimizer may work better on a scaled problem. If you do this then, you should still declare the larger "step" size mentioned above.
BTW, another option is to use a relative stepsize in the 'fd' calculation by setting the "step_calc" argument to "rel". This turns the absolute stepsize into a relative stepsize. However, I don't recommend this here because your range includes zero, and when it is close to zero, the stepsize falls back to an absolute one to prevent it from being too tiny.
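Putting both options together in a minimal sketch (the component, variable names, and placeholder physics are made up for illustration):

import openmdao.api as om

class MyComp(om.ExplicitComponent):  # hypothetical component
    def setup(self):
        self.add_input('x', val=1.0e5)
        self.add_output('y', val=0.0)
        # option (a): an absolute step sized for a variable spanning [0, 1e6]
        self.declare_partials('y', 'x', method='fd', step=1.0e2, form='central')
        # alternative: a relative step, which falls back to absolute near zero
        # self.declare_partials('y', 'x', method='fd', step=1.0e-6, step_calc='rel')

    def compute(self, inputs, outputs):
        outputs['y'] = inputs['x'] ** 2  # placeholder physics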
Related
I have a component that includes np.sqrt(1-x). This works fine for normal operation, since all inputs will strictly be between 0 and 1. However, when checking partials and providing an input array that goes all the way up to 1, the finite differencer will step past 1, and break the component. The inputs shouldn't be less than 0 either, so simply switching the direction of the finite difference wouldn't work.
The workaround is just using np.linspace(0, 0.99, 400) instead of np.linspace(0, 1, 400).
Is it possible to set allowable bounds for the finite differencing?
As of V3.1 there isn't a way to set bounds like that. Just make sure that you test the derivatives around a more well-posed point, like around 0.5.
If you're having trouble getting things to work in the optimization context, since you might bump into both bounds ... well, that's why analytic derivatives are better :)
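A minimal sketch of that workaround (the component and array size are invented for illustration): give the component analytic partials and keep the check points strictly inside (0, 1) so the checker's FD steps cannot cross the boundary:

import numpy as np
import openmdao.api as om

class SqrtComp(om.ExplicitComponent):  # hypothetical component computing sqrt(1 - x)
    def setup(self):
        n = 400
        self.add_input('x', shape=n)
        self.add_output('y', shape=n)
        self.declare_partials('y', 'x', rows=np.arange(n), cols=np.arange(n))

    def compute(self, inputs, outputs):
        outputs['y'] = np.sqrt(1.0 - inputs['x'])

    def compute_partials(self, inputs, partials):
        partials['y', 'x'] = -0.5 / np.sqrt(1.0 - inputs['x'])

prob = om.Problem()
prob.model.add_subsystem('comp', SqrtComp())
prob.setup()
# stop short of 1 so the FD steps taken by check_partials stay in-bounds
prob.set_val('comp.x', np.linspace(0.0, 0.99, 400))
prob.run_model()
prob.check_partials(compact_print=True)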
I recently learned about the semi-total derivative approximation feature. I started to use this feature with bsplines and an explicit component. My current problem is that my design variables are inputs from two different components, similar to the XDSM below. As far as I can see, it is not possible to set up different finite difference steps for different design variables. So, looking at the XDSM again, the control points, x and z, should have identical FD steps, i.e.
model.approx_totals(step=1)
works but
model.approx_totals(step=np.ones(5))
won't work. I guess one remedy is to use a relative step size, but some of my input bounds vary from 0 to xx, so maybe the relative step size is not the best. Is there a way to feed in FD steps as a vector, or something similar to:
for out in outputs:
    for dep, fdstep in zip(inputs, inputsteps):
        self.declare_partials(of=out, wrt=dep, method='fd', step=fdstep, form='central')
As of OpenMDAO V2.4, you don't have the ability to set per-variable FD step sizes when using approx_totals. The best option is just to use relative step sizes.
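For instance (group name assumed), a relative step applies a different effective step to each variable's current magnitude:

# effective step ~ step * |value|; near zero it falls back to an absolute step
model.approx_totals(method='fd', step=1.0e-6, step_calc='rel', form='central')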
I am using the finite difference scheme to find gradients.
Let's say I have 2 outputs (y1, y2) and 1 input (x) in a single component, and I know in advance that the sensitivity of y1 with respect to x is not the same as the sensitivity of y2 to x. Thus I could potentially use two different steps for those, as in:
self.declare_partials(of='y1', wrt='x', method='fd', step=0.01, form='central')
self.declare_partials(of='y2', wrt='x', method='fd', step=0.05, form='central')
There is nothing that stops me (algorithmically), but it is not clear what exactly the OpenMDAO gradient calculation would do in this case.
Does it exchange information between the two step sizes by looking at their ratios, or does it simply treat them independently, thereby doubling the computational time?
I just tested this, and it does the finite difference twice with the two different step sizes, and only saves the requested outputs for each step. I don't think we could do anything with the ratios as you suggested, since the reason for using different step sizes to resolve individual outputs is that you don't trust the accuracy of the outputs at the smaller (or larger) step size.
This is a fair question about the effect of the API. In typical FD applications you would get only 1 function call per design variable for forward and backward difference and 2 function calls for central difference.
However in this case, you have asked for two different step sizes for two different outputs, both with central difference. So here, you'll end up with 4 function calls to compute all the derivatives. dy1_dx will be computed using the step size of .01 and dy2_dx will be computed with a step size of .05.
There is no crosstalk between the two different FD calls, and you do end up with more function calls than you would have if you just specified a single step size via:
self.declare_partials(of='*', wrt='x', method='fd', step=0.05, form='central')
If the cost is something you can bear, and you get improved accuracy, then you could use this method to get different step sizes for different outputs.
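A small runnable sketch of this situation (the component and its placeholder outputs are invented); adding a counter to compute() will show the four evaluations:

import numpy as np
import openmdao.api as om

class TwoOutComp(om.ExplicitComponent):  # hypothetical component
    def setup(self):
        self.add_input('x', val=1.0)
        self.add_output('y1', val=0.0)
        self.add_output('y2', val=0.0)
        # two separate FD declarations -> two separate central-difference sweeps
        self.declare_partials(of='y1', wrt='x', method='fd', step=0.01, form='central')
        self.declare_partials(of='y2', wrt='x', method='fd', step=0.05, form='central')

    def compute(self, inputs, outputs):
        outputs['y1'] = inputs['x'] ** 2            # smooth output
        outputs['y2'] = np.exp(10.0 * inputs['x'])  # stiffer output

prob = om.Problem()
prob.model.add_subsystem('ivc', om.IndepVarComp('x', 1.0))
prob.model.add_subsystem('comp', TwoOutComp())
prob.model.connect('ivc.x', 'comp.x')
prob.setup()
prob.run_model()
# 2 evaluations per central-difference declaration: 4 compute() calls in total
print(prob.compute_totals(of=['comp.y1', 'comp.y2'], wrt=['ivc.x']))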
I can see that my design variable exceeds its limits (using COBYLA in this case).
I have a sample setup with a single design variable where the optimum lies around 0.
I set 'lower=0'.
I want this to be a very strict limit, because negative values yield NaN for my solver.
The optimizer goes, e.g.:
1, 2, 0, -0.125000000e-01, -1.56250000e-02, -1.95312500e-03, -2.44140625e-04, -3.05175781e-05, -3.81469727e-06, -5.00000000e-07
I am guessing this is optimizer-type dependent, but is there a way to enforce the bound more strictly?
Unfortunately, COBYLA does not strictly respect variable bounds (see the scipy docs). The best you can do is to add them as linear constraints, and it will attempt to enforce them at the optimum point.
You can try SLSQP, though. It does strictly respect the bounds.
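In plain scipy terms (toy objective invented for illustration), the "bound as a constraint" idea looks like this; note that COBYLA may still probe infeasible points during the search and only enforces the constraint at the solution:

from scipy.optimize import minimize

def f(x):
    return (x[0] + 0.5) ** 2  # unconstrained minimum at -0.5, so the x >= 0 bound is active

# express x >= 0 as an inequality constraint instead of relying on 'bounds'
res = minimize(f, x0=[1.0], method='COBYLA',
               constraints=[{'type': 'ineq', 'fun': lambda x: x[0]}])
print(res.x)  # should land at (or very near) 0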
I have a range of numbers in (0, 1].
I would like to take the natural log of these numbers, and then store them as 8.8 fixed point.
My formula is K*ln(x) + (1<<16),
but I am not sure what the best value is for K .
My thinking is that if x doubles, then ln(x) increases by ln(2), so the fixed-point value should increase by 1 (i.e. by 256 raw Q8.8 units).
So, this would mean K = 256/ln(2)
Does this make sense?
As x approaches 0, ln(x) will diverge to negative infinity. So you are essentially trying to map an infinite domain to a finite range.
If you do so in a linear way, you have to cut off at some point. If you choose your cut-off at too low a value, you'll be wasting precision for the numbers you represent. If you choose too high a cut-off, too many values will be clamped to the minimal element of the range. Without knowledge about the distribution of the points, it will be very hard to guess a suitable balance here.
So perhaps you could apply a non-linear map instead of the linear one you proposed. Something like the exponential function? That would mean you'd actually store x instead of ln(x). So I'd say: if you want to store values from [0, 1) in 16 bits without too much loss of information, you'd just use Q0.16, i.e. all the digits in the fractional part. For (0, 1] you can either store 1 − x, or special-case x = 1 so that you encode it as 0 instead. If you have Q8.8 numbers, you'd multiply your numbers by 2^8 = 256 first, but if you have access to the bit representation, that multiplication would be a waste of time.
I guess you had a reason you'd want to store logarithms, so this answer may not be what you were hoping for. I don't see an easier way around the underlying problem, though, so you may have to reconsider some of your ideas.
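To make the trade-off concrete, here is a small sketch (function names and test values assumed) contrasting the proposed log encoding with the plain Q0.16 idea:

import math

def encode_q0_16(x):
    # store x from (0, 1] in 16 bits by encoding 1 - x, so x = 1 maps to 0
    return int((1.0 - x) * (1 << 16))

def encode_log(x, K=256.0 / math.log(2.0)):
    # the questioner's scheme: K*ln(x) + (1 << 16); each halving of x subtracts 256
    raw = round(K * math.log(x) + (1 << 16))
    return max(0, min(raw, (1 << 16) - 1))  # clamp: note x = 1 itself overflows 16 bits

for x in (1.0, 0.5, 0.25, 1e-3):
    print(x, encode_q0_16(x), encode_log(x))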