How to handle NaN in Bosun?

I have 2 metrics and am trying to find the difference between their average values as a percentage, like 100*(m1+m2)/m1, but this obviously produces NaN if m1 drops to zero.
How should I handle this case if I don't want to alert when the metrics turn to zero?

Bosun has short-circuit-like behavior with bools. Since Bosun's expression language lacks if statements, you need to use a bool operation to check whether the divisor is 0 first:
$foo = 0
$foo && 1/$foo
Since $foo is zero, the statement is "not true", so 1/$foo is not factored into the final calculation.
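Applied to the question above, the same guard can wrap the whole percentage expression. A minimal sketch, assuming $m1 and $m2 are hypothetical variables already holding the two averaged metrics:
$pct = $m1 && 100*($m1+$m2)/$m1
When $m1 is 0 the right-hand side is never factored in, so $pct short-circuits to a false (0) result instead of NaN, and an alert condition built on it will not fire just because the metric dropped to zero.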

Related

Why does Yosys synthesize the sequential statement to a constant?

I have the Verilog statement below:
module test (A, B, CLK);
input A, CLK;
output reg B;
always @(posedge CLK)
    if (A) B <= 1'b1;
endmodule
I am expecting a register. However, after I synthesize it with Yosys, I get the following result:
assign B = 1'b1;
I don't understand why Yosys translates the above Verilog statement to a constant 1.
Please advise, thanks!
Your B has two possible values:
1'bx during initialization (see IEEE Std 1364, 4.2.2 Variable declarations),
1'b1 when A is equal to 1'b1.
You really have only one defined value. That means it can be optimized to a hardwired 1'b1.
This is not a Yosys fault; all (or almost all) synthesis software will behave the same way. If you want it to work (assuming I've guessed what you want), you have to allow B to take two different values. You can do that with an initial value of 1'b0 or with a reset to 1'b0.
I suggest using a reset instead of an initial value, because an initial value can end up implemented as A connected to the register's set pin.
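For example, a minimal sketch of the reset-based variant, adding a hypothetical RST input:
module test (A, B, CLK, RST);
input A, CLK, RST;
output reg B;
always @(posedge CLK or posedge RST)
    if (RST) B <= 1'b0;
    else if (A) B <= 1'b1;
endmodule
Now B can take both 1'b0 and 1'b1, so the synthesizer has to keep a flip-flop.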
Interesting! I noticed that if you assign an initial value of zero to the register (e.g. output reg B = 1'b0) you do get a flip-flop. (I used read_verilog <your_code.v> ; synth ; show.)
However, an initial value of one still produces the constant output you mention. So perhaps what's happening here (and I'm only speculating) is that when an initial value is not given, yosys is free to pick its own, in which case it picks 1'b1, so that the whole circuit is equivalent to a simple hard-wired constant? Only when the initial value is zero is the flip-flop necessary?
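For reference, the zero-initialized variant described in that comment would look roughly like this:
module test (A, B, CLK);
input A, CLK;
output reg B = 1'b0;
always @(posedge CLK)
    if (A) B <= 1'b1;
endmodule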

scipy optimize trust-constr disobeyed linear constraints

I'm trying to use scipy.optimize.minimize with method trust-constr to optimize a function of 8 variables. Unfortunately, the function is too complicated to post in full here; it involves around 1000 terms, each of which involves an integral. But here is an excerpt that I think already shows it doesn't work as expected.
def objective(variables):
    starts_neg = variables[0]
    starts_neu = variables[1]
    etc
    return _____
(starts_neg and starts_neu are the two variables that cause the problem)
Having defined the objective function, I then need to define the constraints. There are 12 constraints involving the 8 variables; but I'll just show the ones involving the offending variables.
from scipy.optimize import LinearConstraint, minimize

constraint_matrix = [[0 for j in range(8)] for i in range(12)]
constraint_matrix[0][0] = 1
constraint_matrix[1][1] = 1
constraint_matrix[2][0] = 1
constraint_matrix[2][1] = 1
etc
lower_bounds = [10**(-12) for i in range(12)]
upper_bounds = [1 for i in range(12)]
prob_constraints = LinearConstraint(constraint_matrix, lower_bounds, upper_bounds, keep_feasible=True)
My intent here is to say 0 < starts_neg < 1, 0 < starts_neu < 1, 0 < starts_neg + starts_neu < 1. The lower bounds are changed from 0 to 10^-12 to avoid nan errors, since the objective function involves taking the logs of the variables.
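In other words, the first three rows of the 12-by-8 matrix are meant to encode the following (a sketch; the remaining six columns of each row are zero):
# row 0: [1, 0, 0, 0, 0, 0, 0, 0]  ->  1e-12 <= starts_neg               <= 1
# row 1: [0, 1, 0, 0, 0, 0, 0, 0]  ->  1e-12 <= starts_neu               <= 1
# row 2: [1, 1, 0, 0, 0, 0, 0, 0]  ->  1e-12 <= starts_neg + starts_neu  <= 1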
I then give scipy an initial estimate x0=[estimate,estimate,etc.]
Lastly, call optimize as follows:
result=minimize(objective,x0,method='trust-constr',constraints=[prob_constraints],options={'xtol':10**(-9)}).x
Unfortunately, this yielded a nan error. So I tried inserting the following in the objective function and running again:
if starts_neg <= 0 or starts_neg >= 1 or starts_neu <= 0 or starts_neu >= 1 or starts_neu + starts_neg >= 1:
    print(starts_neg, starts_neu)
This outputs -0.02436406136453448 0.7588112085953852 before the nan error & traceback, which seems too large a constraint violation to be explained by rounding error. And no, this was not the initial estimate x0; I checked for that too.
So clearly scipy disobeyed one of my constraints, despite my setting keep_feasible=True. Did I set up something wrong? Sorry the function is too long to include in full.

General Comparisons vs Value Comparisons

Why does XQuery treat the following expressions differently?
() = 2 returns false (general comparison)
() eq 2 returns an empty sequence (value comparison)
This effect is explained in the XQuery specifications. For XQuery 3, it is in chapter 3.7.1, Value Comparisons (highlighting added by me):
Atomization is applied to the operand. The result of this operation is called the atomized operand.
If the atomized operand is an empty sequence, the result of the value comparison is an empty sequence, and the implementation need not evaluate the other operand or apply the operator. However, an implementation may choose to evaluate the other operand in order to determine whether it raises an error.
Thus, if you're comparing two single-element sequences (or scalar values, which are equivalent to those), you will, as expected, receive a true/false value:
1 eq 2 is false
2 eq 2 is true
(1) eq 2 is false
(2) eq 2 is true
(2) eq (2) is true
and so on
But, if one or both of the operands is the empty list, you will receive the empty list instead:
() eq 2 is ()
2 eq () is ()
() eq () is ()
This behavior allows you to pass through empty sequences, which can be used as a kind of null value here. As @adamretter added in the comments, the empty sequence () has the effective boolean value false, so even if you run something like if ( () eq 2) ..., you won't observe anything surprising.
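For example, a small sketch of that behavior:
if (() eq 2) then "matched" else "no match"   (: () eq 2 is (), whose effective boolean value is false, so this returns "no match" :)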
If either of the operands is a sequence of more than one element, it is a type error.
General comparison, $sequence1 = $sequence2, tests whether any element in $sequence1 has an equal element in $sequence2. As this semantically already supports sequences of arbitrary length, no special treatment of the empty sequence is required.
Why?
The difference comes from the requirements imposed by the operators' signatures. If you compare sequences of arbitrary length in a set-based manner, there is no reason to include any special case for empty sequences: if an empty sequence is involved, the comparison is simply false by definition.
For the operators comparing single values, one has to consider the case where an empty sequence is passed; the decision was not to raise an error, but to return a value that behaves like false: the empty sequence. This allows you to use the empty sequence as a kind of null value when the value is unknown; anything compared to an unknown value can never be true, but need not (necessarily) be false. If you need to, you can check for an empty(...) result: if it is empty, one of the compared values was unknown; otherwise they are simply different. In Java and other languages, a null value would be used to achieve similar results; in Haskell there is Data.Maybe.
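A short sketch of that empty(...) check, using a hypothetical variable $a bound to the unknown value:
let $a := ()
return
  if (empty($a eq 2)) then "unknown"
  else if ($a eq 2) then "equal"
  else "different"
Here the result is "unknown", because $a eq 2 evaluates to the empty sequence.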

When would indeterminate NULL in PL/SQL be useful?

I was reading some PL/SQL documentation, and I am seeing that NULL in PL/SQL is indeterminate.
In other words:
x := 5;
y := NULL;
...
IF x != y THEN -- yields NULL, not TRUE
sequence_of_statements; -- not executed
END IF;
The statement would not evaluate to true, because the value of y is unknown and therefore it is unknown if x != y.
I am not finding much info other than the facts stated above, and how to deal with this in PL/SQL. What I would like to know is, when would something like this be useful?
This is three-valued logic; see http://en.wikipedia.org/wiki/Three-valued_logic and, specific to SQL, http://en.wikipedia.org/wiki/Null_(SQL).
It follows the concept that a NULL value means: this value is currently unknown, and might be filled with something real in the future. Hence, the behavior is defined in a way that stays correct for all possible future non-null values. For example, true or unknown is true: no matter whether the unknown (which is the truth value of NULL) is later replaced by something true or something false, the outcome will be true. However, true and unknown is unknown, as the result will be true if the unknown is later replaced by a true value, but false if it is later replaced by something false.
And finally, this behavior is not "non-deterministic", as the result is well defined and you get the same result on each execution, which is by definition deterministic. It is just defined in a way that is a bit more complex than the standard two-valued Boolean logic used in most other programming languages. A non-deterministic function would be dbms_random.random, as it returns a different value each time it is called, or even SYSTIMESTAMP, which also returns different values if called several times.
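A small sketch of those rules in PL/SQL, using a BOOLEAN variable left at its default NULL:
DECLARE
  flag BOOLEAN;                                        -- NULL, i.e. unknown
BEGIN
  IF TRUE OR flag THEN
    DBMS_OUTPUT.PUT_LINE('TRUE OR unknown is TRUE');   -- this branch runs
  END IF;
  IF TRUE AND flag THEN
    DBMS_OUTPUT.PUT_LINE('never reached');             -- TRUE AND unknown is unknown, so this is skipped
  END IF;
END;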
You can find a good explanation of why NULL was introduced, and more, on Wikipedia.
In PL/SQL you deal with NULL by
using IS (NOT) NULL as a comparison, when you want to test against NULL
using the COALESCE and NVL functions, when you want to substitute NULL with something else, as in IF NVL(SALARY, 0) = 0
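Putting both together, a minimal PL/SQL sketch with hypothetical variables:
DECLARE
  x NUMBER := 5;
  y NUMBER;                                    -- NULL
BEGIN
  IF x != y THEN                               -- yields NULL, so the branch is skipped
    DBMS_OUTPUT.PUT_LINE('different');
  END IF;
  IF y IS NULL THEN                            -- TRUE
    DBMS_OUTPUT.PUT_LINE('y is unknown');
  END IF;
  IF NVL(y, 0) = 0 THEN                        -- substitutes 0 for NULL, so TRUE
    DBMS_OUTPUT.PUT_LINE('treating NULL as 0');
  END IF;
END;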

How can I bubble an "impossible" value up via a recursive algorithm?

I have a recursive algorithm with special cases, for example a path algorithm where I want to add 1 to the distance if the path is good, but return -1 if it hits a dead-end. This is problematic when solving maximization problems with a bunch of recursive calls.
Is there a better way to code the following:
def rec(n):
    if n == 1:
        return -1
    if n == 0:
        return 1
    val = rec(n - 2)
    if val == -1:
        return -1
    else:
        return val + 1
Therefore, rec(4) = 3 and rec(3) = -1.
In Python, not really. You could make it clearer in Python by returning None rather than -1; this has the advantage that erroneously adding to the invalid value will throw an exception.
A language that has a more rigorous type system and a good concept of 'maybe' or optional values makes it a snap. Say, Haskell:
rec :: Int -> Maybe Int
rec 1 = Nothing
rec 0 = Just 1
rec n = fmap ((+) 1) $ rec (n - 2)
The fmap invocation means that it will add 1 to whatever is in the box if it is Just x, and return the invalid value (Nothing) unchanged. Of course, you can design your own more sophisticated type that allows for multiple error conditions or whatever and still obtain similarly simple results. This is pretty much just as easy in OCaml, F#, Standard ML, and Scala.
You can simulate this approach with None in Python by defining a helper function:
def mapMaybe(obj, f):
    if obj is None:
        return None
    else:
        return f(obj)
Then you can do:
return mapMaybe(val, lambda x: x + 1)
But I don't know that I would really recommend doing that.
Simulating a trick from Scala's book, it would also be possible to wrap all of this up in generator comprehensions (untested):
def maybe(x):
    if x is not None:
        yield x

def firstMaybe(it):
    try:
        return next(it)
    except StopIteration:
        return None
Then:
return firstMaybe(x + 1 for x in maybe(val))
But that's really non-standard, non-idiomatic Python.
A useful technique is to select a "no solution available" value such that processing it as though it represented a solution would still yield a "no solution available" value. If low numbers represent optimal solutions, for example, one could choose a value which is larger than any valid solution of interest. If one is using integer types, one would have to make sure the "no solution available" value is small enough that operating on it as though it were a valid solution would not cause numerical overflow (e.g. if recursive calls always assume that the cost of a solution from some position will be one greater than the cost of a solution generated by the recursive call, then using values greater than 999,999,999 to represent "no solution available" should work; if code might regard the cost of a solution as being the sum of two other solutions, however, it may be necessary to choose a smaller value). Alternatively, one might benefit from using floating-point types, since a value of "positive infinity" compares greater than any other value, and adding any positive or finite amount to it won't change that.
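For example, a minimal sketch of the floating-point variant, recasting the rec from the question as a cost to be minimized (so the sentinel must compare worse, i.e. greater, than any real answer):
import math

NO_SOLUTION = math.inf   # inf compares greater than any finite cost, and inf + 1 is still inf

def rec(n):
    if n == 1:
        return NO_SOLUTION        # dead end: propagates through the arithmetic unchanged
    if n == 0:
        return 1
    return rec(n - 2) + 1         # no special-casing of the sentinel is needed

Here rec(3) is inf and rec(4) is 3; a caller can use math.isinf(result) to detect "no solution available".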
