When would indeterminate NULL in PL/SQL be useful?

I was reading some PL/SQL documentation, and I am seeing that NULL in PL/SQL is indeterminate.
In other words:
x := 5;
y := NULL;
...
IF x != y THEN   -- yields NULL, not TRUE
   sequence_of_statements;  -- not executed
END IF;
The statement would not evaluate to true, because the value of y is unknown and therefore it is unknown if x != y.
I am not finding much info other than the facts stated above and how to deal with this in PL/SQL. What I would like to know is: when would something like this be useful?

This is three-valued logic; see http://en.wikipedia.org/wiki/Three-valued_logic and, specific to SQL, http://en.wikipedia.org/wiki/Null_(SQL).
It follows the concept that a NULL value means: this value is currently unknown and might be filled with something real in the future. Hence, the behavior is defined in a way that would be correct for every possible future non-null value. E.g. true or unknown is true, because no matter whether the unknown (which is the truth value of NULL) is later replaced by something true or something false, the outcome will be true. However, true and unknown is unknown: the result will be true if the unknown is later replaced by a true value, but false if the unknown is later replaced by something false.
And finally, this behavior is not "non-deterministic", as the result is well defined and you get the same result on each execution, which is by definition deterministic. It is just defined in a way that is a bit more complex than the standard two-valued Boolean logic used in most other programming languages. A non-deterministic function would be dbms_random.random, as it returns a different value each time it is called, or even SYSTIMESTAMP, which also returns different values if called several times.
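As an aside, Julia's missing value (which comes up in one of the related questions below) follows the same three-valued logic, so a quick sketch in Julia rather than PL/SQL may help illustrate it; ismissing and coalesce play the roles of IS NULL and NVL here:

x = 5
y = missing

x != y           # missing -- unknown; an if on this raises an error rather than running
true | missing   # true    -- whatever the unknown turns out to be, the OR is already true
true & missing   # missing -- the outcome still depends on the unknown value
false & missing  # false   -- the AND is false regardless of the unknown value

ismissing(y)         # true -- the analogue of y IS NULL
coalesce(y, 0) == 0  # true -- the analogue of NVL(y, 0) = 0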

You can find a good explanation of why NULL was introduced, and more, in Wikipedia.
In PL/SQL you deal with NULL by
using IS (NOT) NULL as a comparison, when you want to test against NULL
using the COALESCE and NVL functions, when you want to substitute NULL with something else, as in IF NVL(SALARY, 0) = 0

Related

Best way of mimicking Scala Option[T] in Julia?

I think that Scala's Option[T] is quite useful to handle some exceptional cases, so I would like to use this concept in Julia as well.
For example, if we write the following Scala code in Julia,
def div(x: Double, y: Double): Option[Double] = {
  if (y == 0.0) None else Some(x / y)
}
I guess the following code would do the job, but is there any better way of doing this?
function div(x::Float64, y::Float64)::Array{Union{Float64, Missing}}
    if (y == 0.0)
        [missing]
    else
        [x / y]
    end
end
In Julia, Union is only an untagged union, which makes a bit of a difference.
In your case, missing might be a very idiomatic solution, depending on the application -- the purpose of missing is to get propagated through following operations, like a chain of map would in Scala:
div(1.0, 0) + 1 === missing
can be compared to
div(1.0, 0).map(_ + 1) == None
But note that this happens automatically, until you hit some function that doesn't know about missing. (Note that even x == missing evaluates to missing!) Think of it as a propagating null.
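For instance, a minimal sketch (safediv is my own name here, assuming a variant that returns missing directly rather than wrapping it in an array) of how the value propagates and how you eventually leave the chain explicitly:

# A variant of the question's div that returns missing directly instead of [missing].
safediv(x::Float64, y::Float64) = y == 0.0 ? missing : x / y

z = safediv(1.0, 0.0) + 1  # missing -- keeps propagating through arithmetic
ismissing(z)               # true    -- the explicit test (z == missing would itself be missing)
coalesce(z, 0.0)           # 0.0     -- substitute a default to leave the missing chain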
The other variant, which is a bit more like Option[T], is Union{Some{T}, Nothing}. This type will force you to explicitly take care of handling both cases: Some needs to be unwrapped, and nothing does not get propagated and will error soon. The semantics are somewhat different, too: missing is more like an N/A value in data handling, while Some/nothing come closer to Option for things that might or might not exist (e.g., the head of a possibly empty list).
Note that often just Union{T, Nothing} is used. In most cases this makes no difference in semantics and is easier to handle: due to Union being untagged, T <: Union{T, Nothing}, and values behave just like plain T. But if you need to distinguish a nothing with None semantics from a nothing with T semantics, which can occur in generic functions, you need the additional Some layer to get Some(nothing) and nothing.
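A minimal sketch of that Option[T]-like variant (optdiv is a made-up name, not from the question):

# Union{Some{Float64}, Nothing} forces the caller to handle both cases explicitly.
optdiv(x::Float64, y::Float64)::Union{Some{Float64}, Nothing} =
    y == 0.0 ? nothing : Some(x / y)

r = optdiv(4.0, 2.0)
if r === nothing
    println("division by zero")
else
    println(something(r) + 1)  # something() unwraps Some(2.0) to 2.0
end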

Parameters of function in Julia

Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified? If we want that behavior anyway, we have to go through a rather artificial process of representing the data as a single-element array.
Ada, which had the same kind of limitation, abandoned it in its 2012 redesign, to the great satisfaction of its users. A small keyword (like out in Ada) could very well indicate that modifications to a parameter should remain visible after the call.
From my experience in Julia it is useful to understand the difference between a value and a binding.
Values
Each value in Julia has a concrete type and a location in memory. Values can be mutable or immutable. In particular, when you define your own composite type you can decide if objects of this type should be mutable (mutable struct) or immutable (struct).
Of course Julia has built-in types, and some of them are mutable (e.g. arrays) while others are immutable (e.g. numbers, strings). There are design trade-offs between them. From my perspective, two major benefits of immutable values are:
if a compiler works with immutable values it can perform many optimizations to speed up the code;
a user can be sure that passing an immutable value to a function will not change it, and such encapsulation can simplify code analysis.
However, if you want to wrap an immutable value in a mutable wrapper, a standard way to do it is to use Ref like this:
julia> x = Ref(1)
Base.RefValue{Int64}(1)
julia> x[]
1
julia> x[] = 10
10
julia> x
Base.RefValue{Int64}(10)
julia> x[]
10
You can pass such values to a function and modify them inside. Of course Ref introduces a different type so method implementation has to be a bit different.
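For example, a small sketch (increment! is just a made-up name for illustration):

# The function receives the Ref wrapper and updates its contents in place.
function increment!(r::Ref{Int})
    r[] += 1
    return nothing
end

x = Ref(1)
increment!(x)
x[]  # 2 -- the binding x is untouched; only the wrapped value changed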
Variables
A variable is a name bound to a value. In general, except for some special cases like:
rebinding a variable from module A in module B;
redefining some constants, e.g. trying to reassign a function name with a non-function value;
rebinding a variable that has a specified type of allowed values with a value that cannot be converted to this type;
you can rebind a variable to point to any value you wish. Rebinding is performed most of the time using = or some special constructs (like in for, let or catch statements).
Now, getting to the point: a function is passed a value, not a binding. You can modify the binding of a function parameter (in other words: you can rebind the value that the parameter points to), but this parameter is a fresh variable whose scope lies inside the function.
If, for instance, we wanted a call like:
x = 10
f(x)
to change the binding of the variable x, it is impossible because f does not even know of the existence of x. It only gets passed its value. In particular, as I have noted above, adding such functionality would break the rule that module A cannot rebind variables from module B, as f might be defined in a different module than the one where x is defined.
What to do
Actually it is easy enough to work without this feature from my experience:
What I typically do is simply return a value from the function and assign it to a variable. In Julia this is very easy because of the tuple unpacking syntax, e.g. x, y, z = f(x, y, z), where f can be defined e.g. as f(x, y, z) = 2x, 3y, 4z;
You can use macros, which get expanded before code execution and thus can have the effect of modifying the binding of a variable, e.g. macro plusone(x) return esc(:($x = $x + 1)) end; now writing y = 100; @plusone(y) will change the binding of y (a runnable version is spelled out after this list);
Finally, you can use Ref as discussed above (or any other mutable wrapper, as you have noted in your question).
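For completeness, the macro item from the list above spelled out as a runnable snippet (the same @plusone, nothing new added):

# The @plusone macro rebinds the caller's variable via an escaped assignment expression.
macro plusone(x)
    return esc(:($x = $x + 1))
end

y = 100
@plusone(y)
y  # 101 -- the caller's binding of y was rebound by the macro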
"Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified?" asked by Schemer
Your question is wrong because you assume the wrong things.
Parameters are variables
When you pass things to a function, often those things are values and not variables.
for example:
function double(x::Int64)
    2 * x
end
Now what happens when you call it using
double(4)
What would be the point of the function modifying its parameter x? It's pointless. Furthermore, the function has no idea how it is called.
Furthermore, Julia is built for speed.
A function that modifies its parameters will be hard to optimise because it causes side effects. A side effect is when a procedure/function changes objects/things outside of its scope.
If a function does not modify a variable that is part of its calling parameters, then you can be safe knowing that:
the variable will not have its value changed
the result of the function can be optimised to a constant
not calling the function will not break the program's behaviour
Those three factors above are what make FUNCTIONAL languages fast and NON-FUNCTIONAL languages slow.
Furthermore, when you move into parallel programming or multi-threaded programming, you absolutely DO NOT WANT a variable having its value changed without you (the programmer) knowing about it.
"How would you implement with your proposed macro, the function F(x) which returns a boolean value and modifies c by c:= c + 1. F can be used in the following piece of Ada code : c:= 0; While F(c) Loop ... End Loop;" asked by Schemer
I would write
function F(x)
    boolean_result = perform_some_logic()
    return (boolean_result, x + 1)
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end
"Unfortunately no, because, and I should have said that, c has to take again the value 0 when F return the value False (c increases as long the Loop lives and return to 0 when it dies). " said Schemer
Then I would write
function F(x)
    boolean_result = perform_some_logic()
    if boolean_result == true
        return (true, x + 1)
    else
        return (false, 0)
    end
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end

General Comparisons vs Value Comparisons

Why does XQuery treat the following expressions differently?
() = 2 returns false (general Comparison)
() eq 2 returns an empty sequence (value Comparison)
This effect is explained in the XQuery specifications. For XQuery 3, it is in chapter 3.7.1, Value Comparisons (highlighting added by me):
Atomization is applied to the operand. The result of this operation is called the atomized operand.
If the atomized operand is an empty sequence, the result of the value comparison is an empty sequence, and the implementation need not evaluate the other operand or apply the operator. However, an implementation may choose to evaluate the other operand in order to determine whether it raises an error.
Thus, if you're comparing two single-element sequences (or scalar values, which are equivalent to those), you will, as expected, receive a true/false value:
1 eq 2 is false
2 eq 2 is true
(1) eq 2 is false
(2) eq 2 is true
(2) eq (2) is true
and so on
But if one or both of the operands is the empty sequence, you will receive the empty sequence instead:
() eq 2 is ()
2 eq () is ()
() eq () is ()
This behavior allows you to pass through empty sequences, which can be used as a kind of null value here. As @adamretter added in the comments, the empty sequence () has an effective boolean value of false, so even if you run something like if ( () eq 2 ) ..., you won't observe anything surprising.
If any of the operands contains a list of more than one element, it is a type error.
General comparison, $sequence1 = $sequence2, tests whether any element in $sequence1 has an equal element in $sequence2. As this already supports sequences of arbitrary length semantically, no special case for the empty sequence is needed.
Why?
The difference comes from the requirements imposed by the operators' signatures. If you compare sequences of arbitrary length in a set-based manner, there is no reason to include any special cases for empty sequences -- if an empty sequence is included, the comparison is automatically false by definition.
For the operators comparing single values, one has to consider the case where an empty sequence is passed; the decision was not to raise an error, but to return a value whose effective boolean value is false: the empty sequence. This allows using the empty sequence as a kind of null value when the value is unknown; anything compared to an unknown value can never be true, but must not (necessarily) be false either. If you need to, you can check for an empty(...) result: if it is empty, one of the values being compared was unknown; otherwise they are simply different. In Java and other languages, a null value would have been used to achieve similar results; in Haskell there's Data.Maybe.

One insert kills Recursion in drools

This is related to my previous question. If I don't have the insert, it goes into a recursive loop as expected. But if I do have the insert, the program ends. What am I missing here?
rule "Recurse"
when
f : Fibonacci(value == 0)
not Fibonacci(sequence == 0)
then
System.out.println(f.sequence + "/" + f.value);
insert(new Fibonacci(f.sequence - 1));
f.value = 0;
update(f);
end
For the purpose of explaining this example, let's assume:
there is only one rule in the system
that the initial fact set provided to the rule engine meets the criteria of the when in that rule
that sequence is a positive integer value
Firstly, we consider the case where the insert is commented out:
We know that the Working Memory contains at least one object that has value == 0 and there are no objects that have sequence == 0. (I find the more verbose form of not slightly more legible; you can replace not Fibonacci(...) with not ( exists Fibonacci(...) ).) Note that the rule is valid if there is a single object that meets both criteria.
The consequence sets the object's value to zero and notifies the engine that this object has changed. An infinite loop is then encountered as there is no object in the system with sequence == 0 and we've set the value to be such that this object will trigger the rule to fire.
Now, let's consider the case where the insert is uncommented:
We already know that the initial fact set fires the rule at least once. The consequence is that an object with a decremented sequence is now placed in working memory, and the object referenced by f has its value set to zero (it isn't actually changed, since it was already zero) and is updated. There is now a mechanism by which the end condition can be met: at some point an object with a zero sequence will be inserted, and that meets the end condition.
In short: the engine will exit when there is a Fibonacci object with sequence zero in it.
I, err.., think that this system might need a little bit of changing before it will output the Fibonacci sequence. You need a way to reference the previous two Fibonacci numbers to evaluate the one being set; the recursive method is much more elegant ;)

Check whether a tuple of variables cannot be constrained any further, in Mozart/Oz

Greetings,
The idea can be best given with an example:
Suppose we have a vector vec(a:{FD.int 1#100} b:{FD.int 1#100} c:{FD.int 1#100}).
I want to be able to add constraints to this vector, up to the point where every additional constraint I add
does not contribute any more information, i.e. does not constrain vec.a, vec.b and vec.c any further.
Is it possible to do it in Mozart/Oz?
I'd like to think of it this way.
In a loop:
Access the constraint store,
Check whether it has changed,
Terminate if there is no change.
You can check the state of a finite domain variable with the functions in the FD.reflect module. The FD.reflect.dom function seems especially useful in this context.
To get the current domain of every field in a record, you can map this function over records:
declare
fun {GetDomains Vec}
   {Record.map Vec FD.reflect.dom}
end
The initial result in your example would be:
vec(a:[1#100] b:[1#100] c:[1#100])
Now you can compare the result of this function before and after adding constraints, to see if anything happens.
Two limitations:
This will only work with constraints which actually change the domain of at least one variable. Some constraints change the constraint store but do not change any domains, for example an equality constraint with an unbound variable.
Using reflection like this does not work well with concurrency, i.e. if constraints are added from multiple threads, this will introduce race conditions.
If you need an example on how to use the GetDomains function in a loop, let me know...
EDIT: With a hint from an old mailing list message, I came up with this general solution which should work with all types of constraints. It works by speculatively executing a constraint in a subordinate computation space.
declare
Vec = vec(a:{FD.int 1#100} b:{FD.int 1#100} c:{FD.int 1#100})

%% A number of constraints as a list of procedures
Constraints =
[proc {$} Vec.a <: 50 end
 proc {$} Vec.b =: Vec.a end
 proc {$} Vec.b <: 50 end
 proc {$} Vec.a =: Vec.b end
]

%% Tentatively executes a constraint C (represented as a procedure).
%% If it is already entailed by the current constraint store, returns false.
%% Otherwise merges the space (and thereby finally executes the constraint)
%% and returns true.
fun {ExecuteConstraint C}
   %% create a computation space which tentatively executes C
   S = {Space.new
        proc {$ Root}
           {C}
           Root = unit
        end
       }
in
   %% check whether the computation space is entailed
   case {Space.askVerbose S}
   of succeeded(entailed) then false
   else
      {Wait {Space.merge S}}
      true
   end
end

for C in Constraints I in 1..4 break:Break do
   {System.showInfo "Trying constraint "#I#" ..."}
   if {Not {ExecuteConstraint C}} then
      {System.showInfo "Constraint "#I#" is already entailed. Stopping."}
      {Break}
   end
   {Show Vec}
end
{Show Vec}
