Is it possible to set initial values to use in optimisation? - openmdao

I'm currently using SLSQP, and defining my design variables like so:
p.model.add_design_var('indeps.upperWeights', lower=np.array([1E-3, 1E-3, 1E-3]))
p.model.add_design_var('indeps.lowerWeights', upper=np.array([-1E-3, -1E-3, -1E-3]))
p.model.add_constraint('cl', equals=1)
p.model.add_objective('cd')
p.driver = om.ScipyOptimizeDriver()
However, it insists on trying [1, 1, 1] for both variables. I can't override with val=[...] in the component because of how the program is structured.
Is it possible to get the optimiser to accept some initial values, instead of having everything without an explicit default set to 1?

By default, OpenMDAO initializes variables to a value of 1.0 (this tends to avoid unintentional divide-by-zero if things were initialized to zero).
Specifying shape=... on an input or output results in the variable values being populated with 1.0.
Specifying val=... uses the given value as the default instead, as in the sketch below.
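For example, inside a hypothetical component's setup() (the names here are for illustration only, assuming numpy is imported as np):
self.add_input('x', shape=3)            # values default to ones
self.add_output('y', val=np.zeros(2))   # uses the given default value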
But that's only the default values. Typically, when you run an optimization, you need to specify initial values of the variables for the given problem at hand. This is done after setup, through the problem object.
The set_val and get_val methods on the problem let you work in whatever units you choose, converting as needed (using Newtons here as an example):
p.set_val('indeps.upperWeights', np.array([1E-3, 1E-3, 1E-3]), units='N')
p.set_val('indeps.lowerWeights', np.array([-1E-3, -1E-3, -1E-3]), units='N')
There's a corresponding get_val method to retrieve values in the desired units after optimization.
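For example, after the driver has finished (units='N' continues the Newtons example above):
upperVals = p.get_val('indeps.upperWeights', units='N')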
You can also access the problem object as though it were a dictionary, although doing so removes the ability to specify units (you get the variable values in their native units).
p['indeps.upperWeights'] = np.array([1E-3, 1E-3, 1E-3])
p['indeps.lowerWeights'] = np.array([-1E-3, -1E-3, -1E-3])
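Putting it together, the order of operations looks like this (a minimal sketch; the component behind 'indeps' is assumed from the question):
import numpy as np
import openmdao.api as om

p = om.Problem()
# ... build p.model with the 'indeps' component here ...
p.model.add_design_var('indeps.upperWeights', lower=np.array([1E-3, 1E-3, 1E-3]))
p.model.add_design_var('indeps.lowerWeights', upper=np.array([-1E-3, -1E-3, -1E-3]))
p.model.add_constraint('cl', equals=1)
p.model.add_objective('cd')
p.driver = om.ScipyOptimizeDriver()

p.setup()

# initial values go in after setup() and before run_driver()
p.set_val('indeps.upperWeights', np.array([1E-3, 1E-3, 1E-3]))
p.set_val('indeps.lowerWeights', np.array([-1E-3, -1E-3, -1E-3]))

p.run_driver()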

Related

How can exp(x) be used in xcos?

I'm trying to simulate Ke^(-θs)/(τs + 1), but xcos won't let me use exp(s) in my CLR block. Is there any way around that?
Also, how can I create an xcos model without values for the variables, and then assign the values through the editor?
Thanks!
Your transfer function represents a time delay θ in series with a first-order system; use the following block to approximate the delay part: https://help.scilab.org/docs/6.1.0/en_US/TIME_DELAY.html
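If you only need a rational approximation, the classic first-order Padé form is e^(-θs) ≈ (1 - θs/2)/(1 + θs/2), which turns K·e^(-θs)/(τs + 1) into a ratio of polynomials that a CLR block can represent.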
Depending on what you mean by "simulate Ke^(-θs)/(τs + 1)", you may try or use
https://help.scilab.org/docs/6.1.0/en_US/scifunc_block_m.html
or
https://help.scilab.org/docs/6.1.0/en_US/EXPRESSION.html
The second part of your question is quite unclear as well.
Usually, parameters (not variables) are defined in the context of the diagram. If by variables you mean the input signal, then you must create and use a block from the Sources palette, whose output is then wired as the input to your processing block.

What is the matching VOI LUT function value for EFV_Default in DCMTK library?

I'm using the DCMTK library. I called getVoiLutFunction(), which returns one of three enum values (EFV_Linear, EFV_Sigmoid, EFV_Default), and for my current CT image I get EFV_Default.
I looked into the standard documentation and found that a VOI LUT function can have one of three values (LINEAR, LINEAR_EXACT, SIGMOID), and that LINEAR is the default when the VOI LUT Function attribute is absent. I'm confused: which of these matches DCMTK's EFV_Default enum?
PS: I'm dealing with CT images.
AFAIK, EFV_Default is the enumeration literal expressing "not set to a well-known value yet", e.g.:
in the (default) constructor
when reading a monochrome image for which the VOI LUT attributes are not present
It might e.g. be used to trigger calculation of a window from the image's histogram.
So you should not set this value explicitly, but read it as an indication of whether the VOI transformation is non-linear (explicitly set), linear (explicitly set), or linear (by default).
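As a sketch of how you might branch on it (assuming the DicomImage class from dcmimgle; treat the header path and exact signatures as assumptions to check against your DCMTK version):
#include "dcmtk/dcmimgle/dcmimage.h"

DicomImage image("ct_image.dcm");  // hypothetical file name
switch (image.getVoiLutFunction())
{
    case EFV_Linear:   /* LINEAR was set explicitly in the dataset */ break;
    case EFV_Sigmoid:  /* SIGMOID was set explicitly */ break;
    case EFV_Default:  /* attribute absent; per the standard, treat as LINEAR */ break;
}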

Function doesn't change value (R)

I have written a function that takes two arguments: a number in 0:16, and a vector which contains four parameter values.
The output of the function does change if I change the parameters in the vector, but it does not change if I change the number in 0:16.
I can add that the function I'm having trouble with includes another function (called 'pi') which takes the same arguments.
I have checked that the 'pi' function does actually change value if I change the number in 0:16 (and it also changes if I change the values of the parameters).
Firstly, here is my code:
pterm_ny <- function(x, theta) {
  # third term of pi() divided by the full sum
  (1 - sum(theta[1:2])) * (theta[4]^x) * exp(-theta[4]) / pi(x, theta)
}
pi <- function(x, theta) {
  # indicator term at x == 0 plus two theta^x * exp(-theta) terms
  theta[1] * (x == 0) +
    theta[2] * (theta[3]^x) * exp(-theta[3]) +
    (1 - sum(theta[1:2])) * (theta[4]^x) * exp(-theta[4])
}
This returns 0.75 for pterm_ny(i, c(0.2, 0.2, 2, 2)), where i = 1,...,16, and 0.2634 for i = 0, which tells me that the indicator-function part of 'pi' does work.
With respect to raising a number to a certain power, I have been told that one should wrap the desired number in I(); as an example, it would look like:
x^I(2)
I have tried to do that in my code, but that didn't help either.
I can't remember the argument for doing it, but I expect that it's to ensure that the number in parentheses is interpreted as an integer.
My end goal is to get 17 different values of 'pterm', and to accomplish that I was thinking of using sapply like this:
sapply(0:16, pterm_ny, theta = c(0.2, 0.2, 2, 2))
I really hope that someone can point out what I'm missing here.
In advance, thank you!
You have a theta[4]^x * exp(-theta[4]) factor both in your main expression and in your pi() function. With your test vector theta = c(0.2, 0.2, 2, 2) you have theta[3] == theta[4], so for x > 0 every term of pi() carries that same factor and it cancels out of the ratio, leaving the result invariant to changes in x (see the quick check after the notes below).
Also:
you might want to avoid using pi as your function name, as it masks the built-in constant (3.14159...); this can sometimes cause confusion
the advice about using the "as is" function I() to protect powers is only relevant within formulas, e.g. as used in lm() (linear regression). (It would be used as I(x^2), not x^I(2).)
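To see the cancellation concretely, here is a quick check using the question's own functions (the values match those reported above):
theta <- c(0.2, 0.2, 2, 2)
sapply(0:16, pterm_ny, theta = theta)
# x = 0 gives ~0.2634; every x >= 1 gives 0.6 / (0.2 + 0.6) = 0.75,
# because theta[3] == theta[4] makes the theta[4]^x * exp(-theta[4])
# factor common to numerator and denominator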

torch/nn: What's the canonical way to multiply by a constant matrix?

nn.MM requires a table argument of the matrices that will be multiplied. In my case, one of the matrices is the output of some previously defined model (e.g. an nn.Sequential) and the other is just a constant matrix. How can I inject a constant into nn's pipeline, and should I be worried that the optimizer will start changing it if I do?
I'm aware that I could solve the injection problem by:
Writing my own nn.Module. This seems heavy-handed.
Breaking the model into two parts and manually injecting the constant. I really want the model to just be some nn.Module subclass that gets called with :forward(input) and allows consumers to be blissfully ignorant of the existence of the constant.
Using nn.ParallelTable, but that would also expose the constant to model consumers.
Using nn.Linear with no bias and overwriting the weights. I'm just not sure how to prevent the optimizer from performing the update.
You can create an nn.Linear and then override its :accGradParameters to be a no-op function:
m = nn.Linear(100,200)
-- copy your weights / bias into m.weight / m.bias
m.accGradParameters = function() end
-- m is a constant multiplier thing
Use MulConstant (note that it multiplies by a scalar constant; the second argument selects the in-place variant):
m = nn.MulConstant(7, true)(myMatrix)

Boolean (BitArray) multidimensional array indexing or masking in Julia?

As part of a larger algorithm, I need to produce the residuals of an array relative to a specified limit. In other words, I need to produce an array which, given someArray, comprises elements which encode the amount by which the corresponding element of someArray exceeds a limit value. My initial inclination was to use a distributed comparison to determine when a value has exceeded the threshold. As follows:
# Generate some test data.
residualLimit = 1
someArray = 2.1.*(rand(10,10,3).-0.5)
# Determine the residuals.
someArrayResiduals = (residualLimit-someArray)[(residualLimit-someArray.<0)]
The problem is that someArrayResiduals is a one-dimensional vector containing the residual values, rather than a mask of (residualLimit-someArray). If you check (residualLimit-someArray.<0) you'll find that it is behaving as expected; it's producing a BitArray. The question is, why doesn't Julia allow you to use this BitArray to mask someArray?
Casting the Bools in the BitArray to Ints using int() and distributing using .* produces the desired result, but is a bit inelegant... See the following:
# Generate some test data.
residualLimit = 1
someArray = 2.1.*(rand(10,10,3).-0.5)
# Determine the residuals.
someArrayResiduals = (residualLimit-someArray).*int(residualLimit-someArray.<0)
# This array should be (and is) limited at residualLimit. This is correct...
someArrayLimited = someArray + someArrayResiduals
Anyone know why a BitArray can't be used to mask an array? Or, any way that this entire process can be simplified?
Thanks, all!
Indexing with a logical array simply selects the elements at indices where the logical array is true. You can think of it as transforming the logical index array with find before doing the indexing expression. Note that this can be used in both array indexing and indexed assignment. These logical arrays are often themselves called masks, but indexing is more like a "selection" operation than a clamping operation.
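For instance, with v = [1.0, -2.0, 3.0], the expression v[v .> 0] returns the flat vector [1.0, 3.0] (just the selected elements), which is why the question's expression comes back one-dimensional.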
The suggestions in the comments are good, but you can also solve your problem using logical indexing with indexed assignment:
overLimitMask = someArray .> residualLimit
someArray[overLimitMask] = residualLimit
In this case, though, I think the most readable way to solve this problem is with min or clamp: min(someArray, residualLimit) or clamp(someArray, -residualLimit, residualLimit).
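For completeness, a quick sketch of both approaches using the question's setup (note that current Julia requires the dots shown here, .= for the masked assignment and clamp.(...) for elementwise clamping; the syntax above predates that requirement):
residualLimit = 1
someArray = 2.1 .* (rand(10, 10, 3) .- 0.5)
# limit in place via logical indexing with indexed assignment ...
someArray[someArray .> residualLimit] .= residualLimit
# ... or produce a limited copy without mutating someArray
someArrayLimited = clamp.(someArray, -residualLimit, residualLimit)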
