Error when checking partial derivatives of slinear structured metamodel - openmdao

I am using the MetaModelStructuredComp Component to perform an interpolation in a 2D grid.
When I select the 'slinear' method, the interpolation itself appears to work correctly, but checking the partial derivatives with a complex step returns a large error (on the order of 10^-1) for the derivatives with respect to the second dimension at the second node (that node sits exactly on a grid point, though so does the first node).
This does not happen with any of the other methods: 'cubic' returns errors on the order of 10^-15, and 'scipy_slinear' checked with finite differences is on the order of 10^-10. For that entry, the 'scipy_slinear' check returns analytic and finite-difference values that are almost identical to the finite-difference value from the 'slinear' method (around -0.03338752), whereas the 'slinear' analytic value is -0.05107948.
I am not sure if it's something I am missing, or if there is an error in the analytic partials for 'slinear'.
In my code, the first dimension is alpha, with training data of shape (12,), and the second dimension is mach, with shape (5,). I am checking two outputs, C_D (12, 5) and C_L (12, 5), and both show the same large error.
The LiftDragCoefficientsMetaModelPretrimmedGroup code is:
import numpy as np
import openmdao.api as om


class LiftDragCoefficientsMetaModelPretrimmedGroup(om.Group):

    def initialize(self):
        self.options.declare('num_nodes', types=int,
                             desc='Number of nodes to be evaluated in the RHS')
        self.options.declare('machs', default=np.arange(10),
                             desc='Vector of machs defining grid')
        self.options.declare('alphas', default=np.arange(10),
                             desc='Vector of alphas defining grid')
        self.options.declare('C_D_grid', default=np.zeros(10),
                             desc='Drag coefficients from grid')
        self.options.declare('C_L_grid', default=np.zeros(10),
                             desc='Lift coefficients from grid')
        self.options.declare('extrapolate', types=bool,
                             desc='Allow extrapolation if true', default=True)
        self.options.declare('interp_method', types=str,
                             desc='Interpolation method', default='slinear')

    def setup(self):
        comp = om.MetaModelStructuredComp(method=self.options['interp_method'],
                                          extrapolate=self.options['extrapolate'],
                                          vec_size=self.options['num_nodes'])
        comp.add_output('C_L', self.options['C_L_grid'].mean(), self.options['C_L_grid'])
        comp.add_output('C_D', self.options['C_D_grid'].mean(), self.options['C_D_grid'])
        comp.add_input('alpha', self.options['alphas'].mean(), self.options['alphas'])
        comp.add_input('mach', self.options['machs'].mean(), self.options['machs'])
        self.add_subsystem('comp', comp, promotes=["*"])
        self.comp._no_check_partials = False  # override skipping of check_partials
The driver routine used is:
from openmdao.utils.assert_utils import assert_check_partials

model = om.Group()
model.add_subsystem('InterpSubsystem',
                    LiftDragCoefficientsMetaModelPretrimmedGroup(
                        num_nodes=3,
                        machs=rw.machs,
                        alphas=rw.alphas * np.pi / 180.0,
                        C_L_grid=rw.c_Lt_grid,
                        C_D_grid=rw.c_Dt_grid,
                        interp_method='slinear',
                        extrapolate=False))
p = om.Problem(model)
p.setup(force_alloc_complex=True)
p.set_val('InterpSubsystem.alpha', np.array([35 * np.pi / 180, 10 * np.pi / 180, 8.5 * np.pi / 180]))
p.set_val('InterpSubsystem.mach', np.array([5, 7, 7.5]))
p.run_model()
print(p['InterpSubsystem.C_L'])
print(p['InterpSubsystem.C_L'] - np.array([rw.c_Lt_grid[8, 1], rw.c_Lt_grid[3, 3], 0]))
print(p['InterpSubsystem.C_D'])
print(p['InterpSubsystem.C_D'] - np.array([rw.c_Dt_grid[8, 1], rw.c_Dt_grid[3, 3], 0]))
cpd = p.check_partials(compact_print=False, method='cs')
assert_check_partials(cpd, atol=1.0E-7, rtol=1.0E-7)
The check_partials output is the following:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
InterpSubsystem.comp: 'C_L' wrt 'alpha'
Analytic Magnitude : 1.313053e+01
Fd Magnitude : 1.313053e+01 (cs:None)
Absolute Error (Jan - Jfd) : 8.881784e-16
Relative Error (Jan - Jfd) / Jfd : 6.764226e-17
Raw Analytic Derivative (Jfor)
[[7.45312355 0. 0. ]
[0. 8.300603 0. ]
[0. 0. 6.92543442]]
Raw FD Derivative (Jfd)
[[7.45312355 0. 0. ]
[0. 8.300603 0. ]
[0. 0. 6.92543442]]
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
InterpSubsystem.comp: 'C_L' wrt 'mach'
Analytic Magnitude : 1.331524e-01
Fd Magnitude : 1.274173e-01 (cs:None)
Absolute Error (Jan - Jfd) : 1.769196e-02 *
Relative Error (Jan - Jfd) / Jfd : 1.388505e-01 *
Raw Analytic Derivative (Jfor)
[[-0.11945724 0. 0. ]
[ 0. -0.05107948 0. ]
[ 0. 0. -0.02916183]]
Raw FD Derivative (Jfd)
[[-0.11945724 0. 0. ]
[ 0. -0.03338752 0. ]
[ 0. 0. -0.02916183]]
--------------------------------
Component: InterpSubsystem.comp
--------------------------------
< output > wrt < variable > | abs/rel | norm | norm value
--------------------------- | ------- | ------ | --------------------
C_L wrt mach | abs | fwd-fd | 0.017691955836769413
C_L wrt mach | rel | fwd-fd | 0.13885048495831204
C_D wrt mach | abs | fwd-fd | 0.004933779456698817
C_D wrt mach | rel | fwd-fd | 0.05287403976054622

So, what I think is happening here (and what I stumbled upon with my own random table) is that 7.0 is one of the points at which your Mach data is defined. With the 'slinear' method, the derivative is discontinuous at that point; that is just one of the disadvantages of the method.
The discrepancy in the check happens because the bracketing algorithm, which figures out which bin you are interpolating in, chooses the "left" bin when you are exactly on a grid point, while the finite difference or complex step samples the "right" bin, because the default check direction is "forward".
To alleviate future confusion, we're going to set the check direction to "backward" for the structured metamodel component whenever the 'slinear' method is chosen. The derivatives matched when I made this change. This change should make it into OpenMDAO 3.11.
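Until that release, you can reproduce the matching check yourself by forcing the check to difference in the same direction as the bracketing algorithm. A minimal sketch using check_partials' standard method/form arguments; the looser tolerances are an assumption to account for first-order finite-difference accuracy:
# Check against a *backward* finite difference so the step stays in the same
# ("left") bin that the slinear bracketing algorithm picks on a grid point.
cpd = p.check_partials(compact_print=False, method='fd', form='backward')
assert_check_partials(cpd, atol=1.0E-6, rtol=1.0E-6)
With a backward step, both the analytic and numerical derivatives are one-sided from the same bin, so the mach entries at grid points should agree.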

Related

How to get values from every iteration in JuMP

I'm solving a particular optimization problem in Julia using JuMP and Ipopt, and I have trouble retrieving the history of values, i.e. the value of x from every iteration.
I couldn't find anything useful in the documentation.
Minimal example:
using JuMP
import Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, -2.0 <= x <= 2.0, start = -2.0)
@NLobjective(model, Min, (x - 1.0) ^ 2)
optimize!(model)
value(x)
and I'd like to see the value of x from every iteration, not only the last, so I can create a plot of x vs. iteration.
Looking for any help :)
Each solver has a parameter controlling how verbose its output is.
In the case of Ipopt, before calling optimize!(model) you can do:
set_optimizer_attribute(model, "print_level", 7)
In the logs, look for curr_x (here is part of the log):
**************************************************
*** Summary of Iteration: 6:
**************************************************
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
6 3.8455657e-13 0.00e+00 8.39e-17 -5.7 5.74e-05 - 1.00e+00 1.00e+00f 1
**************************************************
*** Beginning Iteration 6 from the following point:
**************************************************
Current barrier parameter mu = 1.8449144625279479e-06
Current fraction-to-the-boundary parameter tau = 9.9999815508553747e-01
||curr_x||_inf = 9.9999937987374388e-01
||curr_s||_inf = 0.0000000000000000e+00
||curr_y_c||_inf = 0.0000000000000000e+00
||curr_y_d||_inf = 0.0000000000000000e+00
||curr_z_L||_inf = 6.1403864613595829e-07
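If you want to pull those per-iteration numbers out programmatically rather than by eye, one option is to write the Ipopt log to a file (via Ipopt's output_file option) and parse it afterwards. Note that at this print level the log only reports norms of the iterate, not the full vector x. A minimal Python sketch; the file name and the regular expressions are assumptions based on the log format shown above:
import re

# Assumes the log was written with:
#   set_optimizer_attribute(model, "output_file", "ipopt.log")
#   set_optimizer_attribute(model, "print_level", 7)
iters, x_norms = [], []
current_iter = None
with open("ipopt.log") as f:
    for raw in f:
        line = raw.strip()
        m = re.match(r"\*\*\* Beginning Iteration (\d+)", line)
        if m:
            current_iter = int(m.group(1))
        m = re.match(r"\|\|curr_x\|\|_inf\s*=\s*([0-9.eE+-]+)", line)
        if m and current_iter is not None:
            iters.append(current_iter)
            x_norms.append(float(m.group(1)))

# iters and x_norms can now be plotted against each other to see how the
# inf-norm of x evolves over the iterations.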
This is currently not possible. But there's an open issue: https://github.com/jump-dev/Ipopt.jl/issues/281

Logic of computing a^b, and is power a keyword?

I found the following code that is meant to compute a^b (Cracking the Coding Interview, Ch. VI Big O).
What's the logic of return a * power(a, b - 1);? Is it recursion of some sort?
Is power a keyword here or just pseudocode?
int power(int a, int b)
{
    if (b < 0) {
        return a; // error
    } else if (b == 0) {
        return 1;
    } else {
        return a * power(a, b - 1);
    }
}
power is just the name of the function.
Yes, this is recursion, as we are expressing the given problem in terms of a smaller problem of the same type:
let a=2 and b=4; we want to calculate power(2,4) -- the large (original) problem.
Now we express this in terms of a smaller one,
i.e. 2*power(2,4-1) -- a smaller problem of the same type, power(2,3),
i.e. a*power(a,b-1).
The ifs at the start handle the base cases, i.e. when b drops below 1.
This is a recursive function. That is, the function is defined in terms of itself, with a base case that prevents the recursion from running indefinitely.
power is the name of the function.
For example, 4^3 is equal to 4 * 4^2. That is, 4 raised to the third power can be calculated by multiplying 4 and 4 raised to the second power. And 4^2 can be calculated as 4 * 4^1, which can be simplified to 4 * 4, since the base case of the recursion specifies that 4^1 = 4. Combining this together, 4^3 = 4 * 4^2 = 4 * 4 * 4^1 = 4 * 4 * 4 = 64.
power here is just the name of the function being defined and NOT a keyword.
Now, suppose you want to find 2^10. You can write the same thing as 2*(2^9), as 2*2*(2^8), as 2*2*2*(2^7), and so on, down to 2*2*2*2*2*2*2*2*2*(2^1).
This is what a * power(a, b - 1) is doing, in a recursive manner.
Here is a dry run of the code for finding 2^4.
The initial call to the function will be power(2,4); the complete stack trace is shown below:
power(2,4) ---> returns a*power(2,3), i.e., 2*8 = 16
  |
power(2,3) ---> returns a*power(2,2), i.e., 2*4 = 8
  |
power(2,2) ---> returns a*power(2,1), i.e., 2*2 = 4
  |
power(2,1) ---> returns a*power(2,0), i.e., 2*1 = 2
  |
power(2,0) ---> returns 1 as b == 0
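To make that unwinding concrete and runnable, here is a direct transcription into Python that prints each stack frame as it is entered (a sketch for illustration; the book's error branch is replaced with an exception here):
def power(a, b, depth=0):
    # Recursive a^b for non-negative integer b, printing each call frame.
    print("  " * depth + f"power({a}, {b})")
    if b < 0:
        raise ValueError("negative exponent not handled")
    if b == 0:
        return 1                           # base case: a^0 = 1
    return a * power(a, b - 1, depth + 1)  # a^b = a * a^(b-1)

print(power(2, 4))  # prints the five nested calls shown above, then 16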

Progress of Intrinsic Dimension Calculation in R

I am using the R packages "ider" and "intrinsicDimension". Only one function in the "intrinsicDimension" package, pcaLocalDimEst, has a verbose option; none of the functions in "ider" has one.
Is there any way to get the progress of the calculations?
For instance, if I use the kernel version of the correlation dimension estimator for determining the intrinsic dimension:
estconvU <- convU(x=df, maxDim=20)
How do I obtain the progress of the calculation?
Type the following:
fix(convU)
Modify the first line by adding "verbose=FALSE" to the end of the function's argument list:
function (x, maxDim = 5, DM = FALSE, verbose=FALSE) # <- add this "verbose=FALSE"
Then, scroll down to line 19 and add the following AFTER the for loop initialisation:
19: for (l in 1:maxDim) {
20: if(verbose) cat(paste("Working...", l, "\n")) # Add this line.
Then click the Save button at the bottom. If you made a mistake, R will complain.
If not, call the convU function but add verbose=TRUE and you should see some progress messages appear. For example, from the help page of convU:
x <- gendata(DataName='SwissRoll', n=1200)
estconvU <- convU(x=x, verbose = TRUE)
Working... 1
Working... 2
Working... 3
Working... 4
Working... 5

Creating a stochastic SIR model in Julia

I am new to Julia and want to create a stochastic SIR model by following http://epirecip.es/epicookbook/chapters/sir-stochastic-discretestate-continuoustime/julia
I have written my own interpretation, which is nearly the same:
# Following the Gillespie algorithm:
# 1. Initialization of states & parameters.
# 2. Monte Carlo step: random process/step selection.
# 3. Update all states, e.g. I = I + 1 (increase of infected by 1 person). Note: only +/- by 1.
# 4. Repeat until stopTime.
# p            - Parameter array: β, ɣ for infection rate and recovery rate, resp.
# initialState - initial states of S, I, R.
# stopTime     - total run time.
using Plots, Distributions

function stochasticSIR(p, initialState, stopTime)
    # Hold the states of S, I, R separately w/ a NamedTuple. See '? NamedTuple' in the REPL for details.
    # Populate the data storage arrays with the initial data and initialize the run time.
    sirData = (dataₛ = [initialState[1]], dataᵢ = [initialState[2]], dataᵣ = [initialState[3]], time = [0]);
    while sirData.time[end] < stopTime
        if sirData.dataᵢ[end] == 0 # If somehow # of infected = 0, break the loop.
            break
        end
        # Probabilities of each process (infection, recovery). p[1] = β and p[2] = ɣ
        probᵢ = p[1] * sirData.dataₛ[end] * sirData.dataᵢ[end];
        probᵣ = p[2] * sirData.dataᵣ[end];
        probₜ = probᵢ + probᵣ; # Total reaction rate
        # When the next process happens
        k = rand(Exponential(1/probₜ));
        push!(sirData.time, sirData.time[end] + k) # time step by k
        # Probability that the reaction is:
        # probᵢ, probᵣ resp. is: probᵢ / probₜ, probᵣ / probₜ
        randNum = rand();
        # Update the states by randomly picking a process (Gillespie algo.)
        if randNum < (probᵢ / probₜ)
            push!(sirData.dataₛ, sirData.dataₛ[end] - 1);
            push!(sirData.dataᵢ, sirData.dataᵢ[end] + 1);
        else
            push!(sirData.dataᵢ, sirData.dataᵢ[end] - 1);
            push!(sirData.dataᵣ, sirData.dataᵣ[end] + 1)
        end
    end
end

sirOutput = stochasticSIR([0.0001, 0.05], [999, 1, 0], 200)
#plot(hcat(sirData.dataₛ, sirData.dataᵢ, sirData.dataᵣ), sirData.time)
Error:
InexactError: Int64(2.508057234147307)
Stacktrace:
 [1] Int64 at .\float.jl:709 [inlined]
 [2] convert at .\number.jl:7 [inlined]
 [3] push! at .\array.jl:868 [inlined]
 [4] stochasticSIR(::Array{Float64,1}, ::Array{Int64,1}, ::Int64) at .\In[9]:33
 [5] top-level scope at In[9]:51
Could someone please explain why I receive this error? It does not tell me which line it occurred on (I am using a Jupyter notebook), and I do not understand it.
First error
You have to qualify your references to time as sirData.time.
The error message is a bit confusing because time is also a function in Base, so it is automatically in scope.
Second error
You need your data to be represented as Float64, so you have to explicitly type your input array:
sirOutput = stochasticSIR([0.0001, 0.05], Float64[999, 1, 0], 200)
Alternatively, you can create the array with float literals: [999.0, 1, 0]. If you create an array with only integer literals, Julia will create an integer array.
I'm not sure Stack Overflow is the best venue for this, as you seem to be editing the original post as you go along with new errors.
Your current error at the time of writing, InexactError: Int64(2.50805...), tells you that you are trying to create an integer from a Float64 floating-point number, which you can't do without rounding explicitly.
I would highly recommend reading the Julia docs to get the hang of basic usage, and maybe using the Julia Discourse forum for more interactive back-and-forth debugging with the community.
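For reference, here is the same Gillespie loop written in plain Python; a minimal sketch of the algorithm itself, not a fix for the Julia code above. Note two details: the recovery propensity uses the infected count (gamma * I), which is the standard SIR form, and all four lists are pushed on every event so they stay the same length:
import random

def stochastic_sir(beta, gamma, s0, i0, r0, stop_time):
    # Gillespie simulation of an SIR model; returns per-event state histories.
    S, I, R, t = [s0], [i0], [r0], [0.0]
    while t[-1] < stop_time and I[-1] > 0:
        rate_inf = beta * S[-1] * I[-1]   # infection propensity
        rate_rec = gamma * I[-1]          # recovery propensity (gamma * I)
        total = rate_inf + rate_rec
        # Exponentially distributed waiting time to the next event
        t.append(t[-1] + random.expovariate(total))
        if random.random() < rate_inf / total:   # infection event
            S.append(S[-1] - 1); I.append(I[-1] + 1); R.append(R[-1])
        else:                                    # recovery event
            S.append(S[-1]); I.append(I[-1] - 1); R.append(R[-1] + 1)
    return S, I, R, t

S, I, R, t = stochastic_sir(0.0001, 0.05, 999, 1, 0, 200)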

How to make and, or, not, xor, plus using only subtraction

I read that there is a computer that uses only subtraction.
How is that possible? For plus it's pretty easy.
The logical operators, I think, can be made using subtraction with a constant.
What do you guys think?
Plus +
is easy, as you already have minus implemented:
x + y = x - (0-y)
NOT !
In a standard ALU it is usual to compute subtraction by addition:
-x = !x + 1
So from this, the negation is:
!x = -1 - x
AND &, OR |, XOR ^
Sorry, I have no clue about efficient AND, OR, XOR implementations without more info about the architecture, other than testing each bit individually from MSB to LSB. So first you need to extract the bit values from a number; let's assume 4-bit unsigned integers for simplicity, so x = (x3,x2,x1,x0), where x3 is the MSB and x0 is the LSB.
if (x>=8) { x3=1; x-=8; } else x3=0;
if (x>=4) { x2=1; x-=4; } else x2=0;
if (x>=2) { x1=1; x-=2; } else x1=0;
if (x>=1) { x0=1; x-=1; } else x0=0;
And this is how to get the number back
x=0
if (x0) x+=1;
if (x1) x+=2;
if (x2) x+=4;
if (x3) x+=8;
or like this:
x=15
if (!x0) x-=1;
if (!x1) x-=2;
if (!x2) x-=4;
if (!x3) x-=8;
Now we can do the AND, OR, XOR operations:
z=x&y // AND
z0=(x0+y0==2);
z1=(x1+y1==2);
z2=(x2+y2==2);
z3=(x3+y3==2);
z=x|y // OR
z0=(x0+y0>0);
z1=(x1+y1>0);
z2=(x2+y2>0);
z3=(x3+y3>0);
z=x^y // XOR
z0=(x0+y0==1);
z1=(x1+y1==1);
z2=(x2+y2==1);
z3=(x3+y3==1);
PS: the comparison is just subtraction plus examination of the carry and zero flags. Also, all the + operations can be rewritten and optimized to use - to better suit this weird architecture (a runnable sketch of the whole scheme follows the bit-shift section below).
bit shift <<,>>
z=x>>1
z0=x1;
z1=x2;
z2=x3;
z3=0;
z=x<<1
z0=0;
z1=x0;
z2=x1;
z3=x2;
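Putting the pieces together, here is a runnable sketch of the whole scheme in Python for 4-bit unsigned values. Bits are extracted with compare-and-subtract only (the comparison stands in for the "subtraction + flags" primitive above), and AND/OR/XOR are rebuilt from the per-bit sums exactly as in the pseudocode:
def bits4(x):
    # Decompose a 4-bit unsigned value using only compare-and-subtract.
    out = []
    for weight in (8, 4, 2, 1):          # x3 (MSB) down to x0 (LSB)
        if x >= weight:                  # comparison = subtraction + flag test
            out.append(1)
            x -= weight
        else:
            out.append(0)
    return out                           # [x3, x2, x1, x0]

def from_bits4(bits):
    # Rebuild the value by subtracting the missing weights from 15.
    x = 15
    for bit, weight in zip(bits, (8, 4, 2, 1)):
        if not bit:
            x -= weight
    return x

def bitop4(x, y, op):
    # AND/OR/XOR on 4-bit values, using only per-bit sums and comparisons.
    zb = []
    for xb, yb in zip(bits4(x), bits4(y)):
        s = xb + yb                      # s is 0, 1 or 2
        if op == "and":
            zb.append(1 if s == 2 else 0)    # both bits set
        elif op == "or":
            zb.append(1 if s > 0 else 0)     # at least one bit set
        else:
            zb.append(1 if s == 1 else 0)    # xor: exactly one bit set
    return from_bits4(zb)

assert bitop4(12, 10, "and") == 12 & 10  # 8
assert bitop4(12, 10, "or") == 12 | 10   # 14
assert bitop4(12, 10, "xor") == 12 ^ 10  # 6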
