I am trying to implement a rule in the Drools language. I need to add up two integer values from two variables. I have tried several different ways, but none of them seem to work: I get a failure due either to the "+" symbol or to the "eval" function. I need to do the summation and then compare the result to another value for the rule to fire. These are the combinations I have already tried:
$generationGrain : ($smallerGenerationIndex.intValue() + App.GENERATION_TIME.intValue() )
$generationGrain : $smallerGenerationIndex.intValue() + App.GENERATION_TIME.intValue()
$generationGrain : eval($smallerGenerationIndex.intValue() + App.GENERATION_TIME.intValue())
$generationGrain : eval($smallerGenerationIndex + App.GENERATION_TIME)
$generationGrain : ($smallerGenerationIndex + App.GENERATION_TIME )
Is it just not possible, or am I missing something? Should I instead create one rule that performs the summation in its then part, and then another rule with the conditional against the other value?
Integer has a static sum operation you could use in this case.
$generationGrain: Integer() from Integer.sum( $smallerGenerationIndex, App.GENERATION_TIME )
That would also let you apply constraints. For example, if you wanted to write a rule that fires only if the sum is negative, you could write:
$generationGrain: Integer(this < 0) from ...
Alternatively, if you wanted to sum values across a list of objects, you could use an accumulate with a sum function. But for two discrete numbers I'd do it like this.
(The other numeric wrapper types also have similar functions, e.g. Double.)
You are trying to reuse a computation and give it a business name. A DSL would be suitable here:
[when]SmallerGenerationIndex = $smallerGenerationIndex.intValue()
[when]GenerationTime = App.GENERATION_TIME.intValue()
[when]GenerationGrain = (SmallerGenerationIndex + GenerationTime)
Now in your rules you can reuse a value with a clear business meaning. It will be expanded to the longer expression each time, but that technical detail stays hidden from human eyes.
Related
I need to create a new variable in my data frame that is the output of an equation with many nested parentheses. Part of this equation takes the form of the last line below:
temp=36
Tc = 647.097
( ( 1-273.15+temp )/Tc )^1.5
where temp will be a variable, and Tc will be a constant. However, when I run the code the result is always NA.
But if I break the code down into the number that I know results from
( 1-273.15+temp )/Tc
and then add the exponent like so
-0.3649376^1.5
then the code works as it should.
Why is R not able to correctly output the calculation ( ( 1-273.15+temp )/Tc )^1.5 ?
And more importantly, how can I get R to give me the result of ( ( 1-273.15+temp )/Tc )^1.5 while retaining my use of objects for the constant and the variable?
I need to address this, because the full equation is even worse, where the problem I describe above is nested within itself:
e_sat_test <- Pc^( ( Tc/(273.15+temp ) ) *
( a1*( (1-273.15+temp)/Tc ) + a2*( (1-273.15+temp)/Tc )^1.5 +
a3*( (1-273.15+temp)/Tc )^3 + a4* ( (1-273.15+temp)/Tc )^3.5 +
a5*( (1-273.15+temp)/Tc)^4 + a6*( (1-273.15+temp)/Tc )^7.5 ) )
The problem is that
-0.3649376^1.5
is interpreted as
-(0.3649376^1.5)
not
(-0.3649376)^1.5
because the exponent operator has a higher precedence. And when you take something to a .5 exponent, that's like taking a square root, and square roots of negative numbers aren't defined for plain numeric vectors in R (unless you want to use complex numbers). Your calculation is simply NaN for your data values because the result is not real. You might want to check your formula again.
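The precedence trap is easy to demonstrate outside R as well; here is a minimal sketch in Python, whose ** operator likewise binds tighter than unary minus (math.pow raises a domain error where R's ^ returns NaN):

import math

# Exponentiation binds tighter than unary minus, so this is -(0.3649376 ** 1.5):
print(-0.3649376 ** 1.5)       # approximately -0.2205

# Applying the fractional power to the negative value itself is a domain
# error, the analogue of R returning NaN for (-0.3649376)^1.5:
try:
    math.pow(-0.3649376, 1.5)
except ValueError as err:
    print(err)                 # math domain error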
From the documentation:
Users are sometimes surprised by the value returned, for example why (-8)^(1/3) is NaN. For double inputs, R makes use of IEC 60559 arithmetic on all platforms, together with the C system function pow for the ^ operator. The relevant standards define the result in many corner cases. In particular, the result in the example above is mandated by the C99 standard. On many Unix-alike systems the command man pow gives details of the values in a large number of corner cases.
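As a concrete illustration of "check your formula": Wagner-style saturation-pressure equations use the term 1 - T/Tc, so the intended grouping may well be (1-(273.15+temp)/Tc) rather than ((1-273.15+temp)/Tc). That is an assumption about the intended formula, not something the question states. With that grouping the base is positive and the power is real (same arithmetic, shown here in Python):

temp = 36          # the variable from the question
Tc = 647.097       # the constant from the question

# Assuming the intended term is 1 - T/Tc, i.e. 1 - (273.15 + temp)/Tc:
tau = 1 - (273.15 + temp) / Tc
print(tau)         # approximately 0.5222, positive
print(tau ** 1.5)  # approximately 0.3774, a real number, no NaN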
I have written a function that takes two arguments: a number in 0:16, and a vector which contains four parameter values.
The output of the function does change if I change the parameters in the vector, but it does not change if I change the number in 0:16.
I can add that the function I'm having trouble with calls another function (called 'pi'), which takes the same arguments.
I have checked that the 'pi' function does actually change value if I change the number in 0:16 (and it also changes if I change the values of the parameters).
Firstly, here is my code:
pterm_ny <- function(x, theta){
  (1 - sum(theta[1:2])) * (theta[4]^x) * exp(-theta[4]) / pi(x, theta)
}
pi <- function(x, theta){
  theta[1] * (x == 0) +
    theta[2] * (theta[3]^x) * exp(-theta[3]) +
    (1 - sum(theta[1:2])) * (theta[4]^x) * exp(-theta[4])
}
This returns 0.75 for pterm_ny(i, c(0.2, 0.2, 2, 2)), where i = 1, ..., 16, and 0.2634 for i = 0, which tells me that the indicator-function part in 'pi' does work.
With respect to raising a number to a certain power, I have been told that one should wrap the desired number in I(); as an example it would look like:
x^I(2)
I have tried to do that in my code, but that didn't help either.
I can't remember the reason given for doing it, but I expect that it's to ensure that the number in parentheses is interpreted as an integer.
My end goal is to get 17 different values of 'pterm', and to accomplish that I was thinking of using the sapply function like this:
sapply(0:16, pterm_ny, theta = c(0.2, 0.2, 2, 2))
I really hope that someone can point out what I'm missing here.
In advance, thank you!
You have a theta[4]^x term both in your main expression and in your pi() function; these cancel out, leaving the result invariant to changes in x (derivation below) ...
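To spell that out for the example call pterm_ny(x, c(0.2, 0.2, 2, 2)): for x >= 1 the indicator term vanishes, and because theta[3] = theta[4] = 2 the common factor cancels:

$$\mathrm{pterm}(x) = \frac{(1-\theta_1-\theta_2)\,\theta_4^{x}e^{-\theta_4}}{\theta_2\,\theta_3^{x}e^{-\theta_3} + (1-\theta_1-\theta_2)\,\theta_4^{x}e^{-\theta_4}} = \frac{0.6\cdot 2^{x}e^{-2}}{0.2\cdot 2^{x}e^{-2} + 0.6\cdot 2^{x}e^{-2}} = \frac{0.6}{0.8} = 0.75$$

for every x >= 1, which is exactly the constant value you observed.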
Also:
you might want to avoid using pi as your function name, as it's also a built-in constant (3.14159...); this can sometimes cause confusion
the advice about using the "as is" function I() to protect powers is only relevant within formulas, e.g. as used in lm() (linear regression). (It would be used as I(x^2), not x^I(2).)
I want to write a version of a function that accepts a supplementary argument. The difference from the initial version resides in only a few lines of code, potentially within loops. A typical example is to use a vector of weights w.
One solution is to write a completely new function:
function f(a::Vector)
...
for i in 1:length(a)
...
s += a[i]
...
end
...
end
function f(a::Vector, w::Vector)
...
for i in 1:length(a)
...
s += a[i] * w[i]
...
end
...
end
This solution duplicates code and therefore makes the program harder to maintain.
I could split ... into different helper functions, which are called by both functions, but the resulting code would be hard to follow.
Another solution is to write only one function and use a ? : conditional for each line that should change:
function f(a, w::Union(Nothing, Vector) = nothing)
....
for i in 1:length(a)
...
s += (w == nothing) ? a[i] : a[i] * w[i]
...
end
....
end
This code has to check a condition at every iteration of the loop, which does not sound efficient compared to the first version.
I'm sure there is a better solution, maybe using macros. What would be a good way to deal with this?
There are lots of ways to do this sort of thing, ranging from optional arguments to custom types to metaprogramming with @eval'ed code generation (this would splice in the changes for each new method as you loop over a list of possibilities).
I think in this case I'd use a combination of the approaches suggested by @ColinTBowers and @GnimucKey.
It's fairly simple to define a custom array type that is all ones:
immutable Ones{N} <: AbstractArray{Int,N}
dims::NTuple{N, Int}
end
Base.size(O::Ones) = O.dims
Base.getindex(O::Ones, I::Int...) = (checkbounds(O, I...); 1)
I've chosen to use an Int as the element type since it tends to promote well. Now all you need is to be a bit more flexible in your argument list and you're good to go:
function f(a::Vector, w::AbstractVector=Ones(size(a)))
…
This should have a lower overhead than either of the other proposed solutions; getindex should inline nicely as a bounds check and the number 1, there's no type instability, and you don't need to rewrite your algorithm. If you're sure that all your accesses are in-bounds, you could even remove the bounds checking as an additional optimization. Or on a recent 0.4, you could define and use Base.unsafe_getindex(O::Ones, I::Int...) = 1 (that won't quite work on 0.3 since it's not guaranteed to be defined for all AbstractArrays).
In this case, using optional arguments may do the trick.
Just make the w argument default to a vector of ones, e.g. w::Vector = ones(length(a)).
I've come up against this problem a few times. If you want to avoid the conditional if statement inside the loop, one possibility is to use multiple dispatch over some dummy types. For example:
abstract MyFuncTypes
type FuncWithNoWeight <: MyFuncTypes; end
evaluate(x::Vector, i::Int, ::FuncWithNoWeight) = x[i]
type FuncWithWeight{T} <: MyFuncTypes
w::Vector{T}
end
evaluate(x::Vector, i::Int, wT::FuncWithWeight) = x[i] * wT.w[i]
function f(a, w::MyFuncTypes=FuncWithNoWeight())
....
for i in 1:length(a)
...
s += evaluate(a, i, w)
...
end
....
end
I extend the evaluate method over FuncWithNoWeight and FuncWithWeight in order to get the appropriate behaviour. I also nest these types under an abstract type MyFuncTypes, which is the second input to f (with a default value of FuncWithNoWeight()). From here, multiple dispatch and Julia's type system take care of the rest.
One neat thing about this approach is that if you later decide you want to add a third type of behaviour inside the loop (not necessarily even a weighting; pretty much any type of transformation is possible), it is as simple as defining a new type, nesting it under MyFuncTypes, and extending the evaluate method to the new type.
UPDATE: As Matt B. has pointed out, the first version of my answer accidentally introduced type instability into the function. As a general rule, I find that if Matt posts something it is worth paying close attention (hint, hint, check out his answer). I'm still learning a lot about Julia (and am answering questions on Stack Overflow to facilitate that learning). I've updated my answer to remove the type instability Matt pointed out.
Suppose I wish to define a recursive function theta whose functionality should be apparent below.
The following definition will work.
theta[0] = 0;
theta[i_ ] := theta[i-1] + 1
However, this will not work.
theta[0] = 0;
theta[i_ + 1] := theta[i] + 1
My question is, is it possible to make something like the second definition work, where I can define the function based on the i+1 term instead of the i term?
I understand that they are mathematically equivalent, but I am curious about whether Mathematica will permit something like the second syntax.
It is perfectly feasible to make your second definition work, once you understand that default automatic simplifications are performed, often before you can get control, and once you then use your definition with arguments that actually match the pattern.
Example
In[1]:= theta[i_ + 1] := Sin[i]+1;
theta[a + 1]
Out[2]= 1+Sin[a]
but then you probably expect to use this as
In[3]:= theta[8]
Out[3]= theta[8]
and that fails because you defined a function that matches the sum of something and one, but gave it just an integer, and you have no definition that matches that. Even this fails
In[4]:= theta[7 + 1]
Out[4]= theta[8]
because the default automatic rules turn the sum of two integers into an integer and you are back to the previous case.
It is sometimes said that Mathematica does "structural" matching: if the structures of two expressions match, then Mathematica accepts this as a match. This is very different from the sort of matching that anyone with a bit of mathematical maturity would use. A decade or more ago someone wrote an article in The Mathematica Journal showing that it would be possible to use a more mathematical version of matching within Mathematica. I think that was completely ignored and nothing more was ever done with it. It would be nice if someone with the necessary skill could bring that code up to the current version of Mathematica, but I think this might be a substantial challenge.
There is always "a way". For example:
ClearAll[a];
a[i_] = a[i] /. First@RSolve[{a[i + 1] == a[i] + 1, a[0] == 0}, a[i], i]
I've been watching this MSDN video with Brian Beckman and I'd like to better understand something he says:
Every imperative programmer goes through this phase of learning that functions can be replaced with table lookups
Now, I'm a C# programmer who never went to university, so perhaps somewhere along the line I missed out on something everyone else learned to understand.
What does Brian mean by:
functions can be replaced with table lookups
Are there practical examples of this being done and does it apply to all functions? He gives the example of the sin function, which I can make sense of, but how do I make sense of this in more general terms?
Brian just showed that functions are data too. A function in general is just a mapping of one set to another: y = f(x) is a mapping of set X to set Y, f: X -> Y. A table is a mapping as well: [x1, x2, ..., xn] -> [y1, y2, ..., yn].
If a function operates on a finite set (which is the case in programming), then it can be replaced with a table that represents that mapping. As Brian mentioned, every imperative programmer goes through this phase of understanding that functions can be replaced with table lookups, if only for performance reasons.
But that doesn't mean all functions easily can, or should, be replaced with tables. It only means that you theoretically can do it for every function. So the conclusion would be that functions are data, because tables are (in the context of programming, of course).
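As a minimal sketch of the idea (in Python, with a hypothetical square function standing in for any function on a finite domain):

# Any function on a finite domain ...
def square(n):
    return n * n

# ... can be replaced by a table that records the same mapping:
square_table = {n: square(n) for n in range(17)}

# A call and a lookup are now interchangeable:
assert square_table[4] == square(4) == 16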
There is a lovely trick in Mathematica that creates a table as a side effect of evaluating function-calls-as-rewrite-rules. Consider the classic slow Fibonacci:
fib[1] = 1
fib[2] = 1
fib[n_] := fib[n-1] + fib[n-2]
The first two lines create table entries for the inputs 1 and 2. This is exactly the same as saying
fibTable = {};
fibTable[1] = 1;
fibTable[2] = 1;
in JavaScript. The third line of Mathematica says "please install a rewrite rule that will replace any occurrence of fib[n_], after substituting the pattern variable n_ with the actual argument of the occurrence, with fib[n-1] + fib[n-2]." The rewriter will iterate this procedure, and eventually produce the value of fib[n] after an exponential number of rewrites. This is just like the recursive function-call form that we get in JavaScript with
function fib(n) {
var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
return result;
}
Notice it checks the table first for the two values we have explicitly stored before making the recursive calls. The Mathematica evaluator does this check automatically, because the order of presentation of the rules is important -- Mathematica checks the more specific rules first and the more general rules later. That's why Mathematica has two assignment forms (= and :=): the former is for specific rules whose right-hand sides can be evaluated at the time the rule is defined; the latter is for general rules whose right-hand sides must be evaluated when the rule is applied.
Now, in Mathematica, if we say
fib[4]
it gets rewritten to
fib[3] + fib[2]
then to
fib[2] + fib[1] + 1
then to
1 + 1 + 1
and finally to 3, which does not change on the next rewrite. You can imagine that if we say fib[35], we will generate enormous expressions, fill up memory, and melt the CPU. But the trick is to replace the final rewrite rule with the following:
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
This says "please replace every occurrence of fib[n_] with an expression that will install a new specific rule for the value of fib[n] and also produce the value." This one runs much faster because it expands the rule-base -- the table of values! -- at run time.
We can do likewise in JavaScript
function fib(n) {
var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
fibTable[n] = result;
return result;
}
This runs MUCH faster than the prior definition of fib.
This is called "automemoization" [sic -- not "memorization" but "memoization" as in creating a memo for yourself].
Of course, in the real world, you must manage the sizes of the tables that get created. To inspect the tables in Mathematica, do
DownValues[fib]
To inspect them in JavaScript, do just
fibTable
in a REPL such as the one provided by Node.js.
In the context of functional programming, there is the concept of referential transparency. A function that is referentially transparent can be replaced with its value for any given argument (or set of arguments), without changing the behaviour of the program.
For example, consider a function F that takes 1 argument, n. F is referentially transparent, so F(n) can be replaced with the value of F evaluated at n. It makes no difference to the program.
In C#, this would look like:
public class Square
{
    public static int apply(int n)
    {
        return n * n;
    }

    public static void Main()
    {
        // Should print 4
        Console.WriteLine(Square.apply(2));
    }
}
(I'm not very familiar with C#, coming from a Java background, so you'll have to forgive me if this example isn't quite syntactically correct).
It's obvious here that the function apply cannot have any other value than 4 when called with an argument of 2, since it's just returning the square of its argument. The value of the function only depends on its argument, n; in other words, referential transparency.
I ask you, then, what the difference is between Console.WriteLine(Square.apply(2)) and Console.WriteLine(4). The answer is, there's no difference at all, for all intents and purposes. We could go through the entire program, replacing every instance of Square.apply(n) with the value returned by Square.apply(n), and the results would be exactly the same.
So what did Brian Beckman mean with his statement about replacing function calls with a table lookup? He was referring to this property of referentially transparent functions. If Square.apply(2) can be replaced with 4 with no impact on program behaviour, then why not just cache the value when the first call is made, and put it in a table indexed by the function's arguments? A lookup table for values of Square.apply(n) would look somewhat like this:
n: 0 1 2 3 4 5 ...
Square.apply(n): 0 1 4 9 16 25 ...
And for any call to Square.apply(n), instead of calling the function, we can simply find the cached value for n in the table and replace the function call with that value. It's fairly obvious that this will most likely bring about a large speed increase in the program.
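That caching strategy is exactly the memoization described in the earlier answer. A minimal sketch in Python (functools.lru_cache does the table bookkeeping; the print call is only there to make cache misses visible):

from functools import lru_cache

@lru_cache(maxsize=None)
def square(n):
    print("computing square(%d)" % n)  # runs only on a cache miss
    return n * n

square(2)  # prints "computing square(2)" and returns 4
square(2)  # silent: the value comes straight from the lookup table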