What is the matching VOI LUT function value for EFV_Default in DCMTK library?

I'm using the DCMTK library. I called getVoiLutFunction(), which can return three different enum values (EFV_Linear, EFV_Sigmoid, EFV_Default), and for my current CT image I get EFV_Default.
I looked into the standard documentation and found that the VOI LUT Function attribute can have one of three values (LINEAR, LINEAR_EXACT, SIGMOID), and that LINEAR is the default when the attribute is absent. I'm confused: which of these corresponds to DCMTK's EFV_Default enum?
PS: I'm dealing with CT images.

AFAIK, EFV_Default is the enumeration literal expressing "not set to a well-known value yet", e.g.:
in the (default) constructor
when reading a monochrome image for which the VOI LUT attributes are not present
It might e.g. be used to trigger calculation of a window from the image's histogram.
So you should not set this value explicitly, but read it as an indication of whether the VOI transformation is non-linear (explicitly set), linear (explicitly set), or linear (by default).
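For illustration, here is a minimal sketch of how an application might react to the three values. This is an assumption about typical usage, not a fixed recipe; it relies on dcmimgle's DicomImage methods setWindow() and setHistogramWindow():

#include "dcmtk/dcmimgle/dcmimage.h"

// Sketch: choose a windowing strategy based on the reported VOI LUT function.
// Error handling and the actual rendering are omitted for brevity.
void applyVoiWindow(DicomImage &image)
{
    switch (image.getVoiLutFunction())
    {
    case EFV_Sigmoid:  // VOI LUT Function = SIGMOID was explicitly present
    case EFV_Linear:   // VOI LUT Function = LINEAR was explicitly present
        image.setWindow(0);              // use the first stored VOI window
        break;
    case EFV_Default:  // attribute absent: LINEAR by default
    default:
        if (!image.setWindow(0))         // no stored window center/width either
            image.setHistogramWindow();  // e.g. derive a window from the histogram
        break;
    }
}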

Related

Is it possible to set initial values to use in optimisation?

I'm currently using SLSQP, and defining my design variables like so:
p.model.add_design_var('indeps.upperWeights', lower=np.array([1E-3, 1E-3, 1E-3]))
p.model.add_design_var('indeps.lowerWeights', upper=np.array([-1E-3, -1E-3, -1E-3]))
p.model.add_constraint('cl', equals=1)
p.model.add_objective('cd')
p.driver = om.ScipyOptimizeDriver()
However, it insists on trying [1, 1, 1] for both variables. I can't override with val=[...] in the component because of how the program is structured.
Is it possible to get the optimiser to accept some initial values instead of trying to set anything without a default value to 1?
By default, OpenMDAO initializes variables to a value of 1.0 (this tends to avoid unintentional divide-by-zero if things were initialized to zero).
Specifying shape=... on an input or output results in the variable values being populated with 1.0.
Specifying val=... uses the given value as the default value.
But those are only the default values. Typically, when you run an optimization, you need to specify initial values of the variables for the problem at hand. This is done after setup, through the problem object.
The set_val and get_val methods on the problem object allow the user to convert units (using newtons here, for example):
p.set_val('indeps.upperWeights', np.array([1E-3, 1E-3, 1E-3]), units='N')
p.set_val('indeps.lowerWeights', np.array([-1E-3, -1E-3, -1E-3]), units='N')
There's a corresponding get_val method to retrieve values in the desired units after optimization.
You can also access the problem object as though it were a dictionary, although doing so removes the ability to specify units (you get each variable's value in its native units).
p['indeps.upperWeights'] = np.array([1E-3, 1E-3, 1E-3])
p['indeps.lowerWeights'] = np.array([-1E-3, -1E-3, -1E-3])

Expressible vs denotable values

Which programming languages have values that are expressible but not denotable? And what would this imply?
I don't really understand the difference. At the moment I think it means a functional language, because then you can't give variables values, only point to them?
Is this completely wrong?
According to these lecture notes by David Schmidt:
Expressible values are values that can be produced by expressions in code, like strings, numbers, lambdas/anonymous functions (in languages that support them), etc.
Denotable values are values that can be named (bound to an identifier) and referred to later, like values of variables or named functions.
For example, a language can have syntax for declaring named functions, but no expression syntax for anonymous functions. So (if I understand that correctly) in this language, functions would be denotable but not expressible.
The only example I could find of values that are expressible but not denotable is error values (in some theoretical languages, p. 11), which can be produced by an expression (like 1/0) but cannot be bound to an identifier (saved in a variable).
(This assumes that the assignment statement propagates the error instead of simply storing the error value in the variable.)
Anonymous types are also somewhat similar. For example in C#, you can define an anonymous object, which has an anonymous type that cannot be bound to an identifier (is not denotable):
// anonymous objects can only be saved into a variable by using type inference
var obj = new { Name = "Farah", Kind = "Human" };
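A loosely similar situation (my own illustration, not part of the answer above) exists in C++: every lambda expression has a distinct, unnamed closure type, so the value is expressible and can be bound via type inference, but its type cannot be written out by name:

#include <type_traits>

int main() {
    // The closure object is expressible and can be bound with 'auto',
    // but its unique closure type has no name that could be spelled out.
    auto f = [](int x) { return x + 1; };
    static_assert(!std::is_same_v<decltype(f), int (*)(int)>,
                  "the closure type is distinct from a plain function pointer type");
    return f(41) == 42 ? 0 : 1;
}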

In rstan, are initial parameter values that are specified via a list applied on the constrained support or the unconstrained support?

The help file for rstan::stan has the following to say about the init argument:
init="random" (default):
Let Stan generate random initial values for
all parameters. The seed of the random number generator used by Stan
can be specified via the seed argument. If the seed for Stan is fixed,
the same initial values are used. The default is to randomly generate
initial values between -2 and 2 on the unconstrained support. The
optional additional parameter init_r can be set to some value other
than 2 to change the range of the randomly generated inits.
init="0", init=0:
Initialize all parameters to zero on the unconstrained
support.
inits via list:
Set initial values by providing a list equal
in length to the number of chains. The elements of this list should
themselves be named lists, where each of these named lists has the
name of a parameter and is used to specify the initial values for that
parameter for the corresponding chain.
Unfortunately, this does not make it clear whether initial parameter values specified via a list are applied on the constrained support or the unconstrained support. For example, if I have the following parameter block,
parameters {
  real<lower=3, upper=7> theta;
}
and I call stan as follows,
rstan::stan(file, data = standata, init = list(list(theta = 5)), chains = 1)
is the initial value of theta equal to 5 on the constrained support or the unconstrained support?
Constrained. In the documentation, it says that when init is a list of lists that
The elements of this list should themselves be named lists, where each of these named lists has the name of a parameter and is used to specify the initial values for that parameter for the corresponding chain.
If it pertains to the parameters block of a Stan program, then it is referring to the constrained space. The unconstrained space does not necessarily match up to the dimensions of the constrained space for things like covariance matrices, simplexes, etc.
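For intuition, consider the transform Stan documents for a scalar with lower and upper bounds, theta = lower + (upper - lower) * inv_logit(y). The constrained initial value theta = 5 then corresponds to the unconstrained value y = logit((5 - 3) / (7 - 3)) = logit(0.5) = 0, and it is this y that the sampler actually works with internally. (This is a worked illustration based on the documented transform, not output from rstan.)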

How to Compare Pointers in LLVM-IR?

I want to analyze the pointer values in LLVM IR.
As described in the documentation of the LLVM Value class:
Value is a very important LLVM class. It is the base class of all
values computed by a program that may be used as operands to other
values. Value is the super class of other important classes such as
Instruction and Function. All Values have a Type. Type is not a
subclass of Value. Some values can have a name and they belong to some
Module. Setting the name on the Value automatically updates the
module's symbol table.
To test whether a Value is a pointer, there is the function a->getType()->isPointerTy(). LLVM also provides a PointerType class; however, there are no direct APIs to compare the values of pointers.
So I wonder how to compare these pointer values, to test whether they are equal or not. I know there is AliasAnalysis, but I have doubts about the AliasAnalysis results, so I want to validate them myself.
The quick solution is to use IRBuilder::CreatePtrDiff. This will compute the difference between the two pointers, and return an i64 result. If the pointers are equal, this will be zero, and otherwise, it will be nonzero.
It might seem excessive, seeing as CreatePtrDiff will make an extra effort to compute the result in terms of number of elements rather than number of bytes, but in all likelihood that extra division will get optimized out.
The other option is to use a ptrtoint instruction, with a reasonably large result type such as i64, and then do an integer comparison.
From the online reference:
Value * CreatePtrDiff (Value *LHS, Value *RHS, const Twine &Name="")
Return the i64 difference between two pointer values, dividing out the size of the pointed-to objects.
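As a sketch of the second option (an illustration only; it assumes an IRBuilder<> is already positioned at the insertion point, and the helper name is made up):

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Value.h"

using namespace llvm;

// Sketch: emit IR that tests whether two pointer values hold the same address.
// Casts both pointers to i64 and compares them, yielding an i1 result.
Value *emitPointerEquality(IRBuilder<> &B, Value *LHS, Value *RHS) {
    Type *I64 = B.getInt64Ty();
    Value *L = B.CreatePtrToInt(LHS, I64, "lhs.addr");
    Value *R = B.CreatePtrToInt(RHS, I64, "rhs.addr");
    return B.CreateICmpEQ(L, R, "ptr.eq");  // true iff the addresses are equal
}

Equivalently, the result of CreatePtrDiff can be compared against zero with CreateICmpEQ; note that recent LLVM releases additionally take the element type as the first argument to CreatePtrDiff, so the exact signature depends on your LLVM version.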

Test equality of sets in Erlang

Given a data structure for sets, testing two sets for equality seems to be a desirable task, and indeed many implementations allow this (e.g. built-in sets in Python).
There are different set implementations in Erlang: sets, ordsets, gb_sets. Their documentation does not indicate whether it is possible to test equality using term comparison ("=="), nor do they provide explicit functions for testing equality.
Some naive cases seem to allow equality testing with "==", but I have a larger application where I'm able to produce sets and gb_sets that are equal (tested with the function below) but do not compare equal with "==". ordsets always compare equal. Unfortunately, I haven't found a way to produce a minimal example of cases where equal sets do not compare equal with "==".
For reliably testing equality I use the following function, based on this theorem on set equality:
%% @doc Compare two sets for equality.
-spec sets_equal(sets:set(), sets:set()) -> boolean().
sets_equal(Set1, Set2) ->
    sets:is_subset(Set1, Set2) andalso sets:is_subset(Set2, Set1).
My questions:
Is there a rationale for why the Erlang set implementations do not offer explicit equality testing?
How can the different behavior of "==" across the set implementations be explained?
How can I produce a minimal example of sets that are equal according to the code above but do not compare equal with "=="?
Some thoughts on question 2:
The documentation for sets states that "The representation of a set is not defined.", whereas the documentation for ordsets states that "An ordset is a representation of a set". The documentation for gb_sets does not give any comparable indication.
The following comment from the source code of the sets implementation seems to reiterate the statement from the documentation:
Note that as the order of the keys is undefined we may freely reorder keys within a bucket.
My interpretation is that term comparison with "==" in Erlang works on the representation of the sets, i.e. two sets only compare equal if their representations are identical. This would explain the different behavior of the different set implementations, but it also reinforces the question of why there is no explicit equality comparison.
ordsets are implemented as sorted lists, and the implementation is fairly open and meant to be visible. Two equal ordsets are going to compare equal (==), although == means that 1.0 is equal to 1; they won't necessarily compare as strictly equal (=:=).
sets are implemented as a form of hash table, and their internal representation does not lend itself to any form of direct comparison; when hash collisions happen, the last element added is prepended to the list for the given hash entry. This prepend operation is sensitive to the order in which the elements are added.
gb_sets are implemented as a general balanced tree, and the structure of the tree depends on the order in which the elements were inserted and on when rebalancing took place. They are not safe to compare directly.
To compare two sets of the same type together, an easy way is to call Mod:is_subset(A,B) andalso Mod:is_subset(B,A) -- two sets can only be subsets of each other when they're equal.
