VHDL fixed-point power operator (**) - math

I am coding a GL fractional operator in VHDL to put on an FPGA. I am using the IEEE.fixed_pkg package in order to have the ufixed and sfixed types and their operations.
The problem is that at some point I need to do a raising-to-power calculation (h**alpha), where both h and alpha are fixed-point numbers.
When I tried the ** operator with ufixed, I got: No feasible entries for infix operator "**". I realized that this operation is not implemented in this package.
Now, is there a way to raise fixed-point numbers to a power (both base and exponent being fixed-point) without implementing the operation myself (as this is not the focus of the project)?
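For what it's worth, should you end up having to build it: the standard reduction (plain math, not a fixed_pkg feature) turns the one hard operation into two functions with well-known hardware implementations (CORDIC or table-based log/exp), and the restriction h > 0 is automatic for a ufixed base:

\[ h^{\alpha} = e^{\alpha \ln h} = 2^{\alpha \log_2 h}, \qquad h > 0 \]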

Related

What are the exact problems with Octave's power operator?

From the Octave docs about arithmetic operators, § x ** y (power operator):
The implementation of this operator needs to be improved.
Ouch. So basically the documentation says "Please be careful, there are problems with this operator" but falls short of specifying what exact problems are to be expected.
What exactly must a user be aware of when using the ** operator in Octave?

In R, incomplete gamma function with complex input?

Incomplete gamma functions can be calculated in R with pgamma, or with gamma_inc_Q from library(gsl), or with gammainc from library(expint). However, all of these functions take only real input.
I need an implementation of the incomplete gamma function which will take complex input. Specifically, I have an integer for the first argument, and a complex number for the second argument (the limit in the integral).
This function is well-defined for complex inputs (see Wikipedia), and I've been calculating it in Mathematica. It doesn't seem to be built into R though, and I don't see it in any libraries.
So, can anyone suggest a shorter path to doing these calculations, than looking up an algorithm, implementing it in C, and writing an R interface?
(If I do have to implement it myself, here's the only algorithm for complex inputs that I've found: Kostlan & Gokhman 1987)
Here is an implementation, assuming you want the lower incomplete gamma function. I've compared a couple of values with Wolfram and they match.
library(CharFun)

## lower incomplete gamma via Kummer's confluent hypergeometric function:
## gamma(s, z) = z^s * exp(-z) * 1F1(1; s+1; z) / s
incgamma <- function(s, z) {
  z^s * exp(-z) * hypergeom1F1(z, 1, s + 1) / s
}
Perhaps the evaluation fails for large s.
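For reference, the identity the implementation relies on, a standard confluent-hypergeometric representation of the lower incomplete gamma function, is:

\[ \gamma(s, z) = \frac{z^{s} e^{-z}}{s} \, {}_{1}F_{1}(1;\, s + 1;\, z) \]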
EDIT
Looks like CharFun has been removed from CRAN. You can use IncGamma in HypergeoMat:
> library(HypergeoMat)
> IncGamma(m=50, 2+2i, 5-6i)
[1] 0.3841221+0.3348439i
The result is the same on Wolfram.

Negative Exponents throwing NaN in Fortran

Very basic Fortran question. The following function returns a NaN and I can't seem to figure out why:
F_diameter = 1. - (2.71828**(-1.0*((-1. / 30.)**1.4)))
I've fed 2.71828 in rather than using exp(), but both fail the same way. I've noticed that I only get a NaN when the fraction (-1. / 30.) is negative. Positive values evaluate OK.
Thanks a lot
The problem is that you are taking a root of a negative number, which would give you a complex result. This is more obvious if you imagine e.g.
(-1) ** (3/2)
which is equivalent to
(sqrt(-1)) ** 3
In other words, your fractional exponent can't trivially operate on a negative number.
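If the intent was to apply the power to the magnitude and carry the sign through (an assumption on my part; the right fix depends on what the formula should model), a working sketch in modern Fortran would be:

program fix_nan
  implicit none
  real :: x, f_diameter
  x = -1.0 / 30.0
  ! abs(x)**1.4 is well-defined since abs(x) > 0; sign(a, b) then
  ! restores the minus: it returns abs(a) with the sign of b.
  f_diameter = 1.0 - exp(-sign(abs(x)**1.4, x))
  print *, f_diameter
end program fix_nan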
There is another interesting point here that I learned today and want to add to ire_and_curses' answer: Fortran compilers seem to compute integer powers by successive multiplication.
For example
PROGRAM Test
  PRINT *, (-23) ** 6
END PROGRAM
works fine and gives 148035889 as the answer.
But for REAL exponents, the compiler uses logarithms: y**x = 10**(x * log(y)) (maybe compilers today do it differently, but my book says so). Since the logarithm of a negative number is complex, this does not work:
PROGRAM Test
  PRINT *, (-23) ** 6.1
END PROGRAM
and even gives a compiler error:
Error: Raising a negative REAL at (1) to a REAL power is prohibited
From a mathematical point of view, this problem is also quite interesting: https://math.stackexchange.com/questions/1211/non-integer-powers-of-negative-numbers

R writing style - require vs. ::

OK, we're all familiar with the double-colon operator in R. Whenever I'm about to write some function, I use require(<pkgname>), but I was always thinking about using :: instead. Using require in custom functions is better practice than library, since require returns a warning and FALSE, unlike library, which throws an error if you provide the name of a non-existent package.
On the other hand, the :: operator gets the variable from the package, while require loads the whole package (at least I hope so), so speed differences came first to my mind: :: must be faster than require.
And I did some analysis in order to check that - I wrote two simple functions that load the read.systat function from the foreign package, with require and :: respectively, then import the Iris.syd dataset that ships with the foreign package; I replicated the functions 1000 times each (which was shamelessly arbitrary) and... crunched some numbers.
Strangely (or not), I found significant differences in terms of user CPU and elapsed time, while there were no significant differences in terms of system CPU. And a yet stranger conclusion: :: is actually slower! The documentation for :: is very blunt, and just by looking at the sources it's obvious that :: should perform better!
require
#!/usr/local/bin/r
## with require
fn1 <- function() {
  require(foreign)
  read.systat("Iris.syd", to.data.frame = TRUE)
}
## times
n <- 1e3
sink("require.txt")
print(t(replicate(n, system.time(fn1()))))
sink()
double colon
#!/usr/local/bin/r
## with ::
fn2 <- function() {
  foreign::read.systat("Iris.syd", to.data.frame = TRUE)
}
## times
n <- 1e3
sink("double_colon.txt")
print(t(replicate(n, system.time(fn2()))))
sink()
Grab CSV data here. Some stats:
user CPU: W = 475366, p-value = 0.04738, MRr = 975.866, MRc = 1025.134
system CPU: W = 503312.5, p-value = 0.7305, MRr = 1003.8125, MRc = 997.1875
elapsed time: W = 403299.5, p-value < 2.2e-16, MRr = 903.7995, MRc = 1097.2005
MRr is the mean rank for require, MRc likewise for ::. I must have done something wrong here. It just doesn't make any sense... Execution time for :: seems way slower!!! I may have screwed something up, you shouldn't discard that option...
OK... I've wasted my time only to see that there is some difference, and carried out a completely useless analysis, so, back to the question:
"Why should one prefer require over :: when writing a function?"
=)
"Why should one prefer require over ::
when writing a function?"
I usually prefer require due to the nice TRUE/FALSE return value that lets me deal with the possibility of the package not being available up front, before getting into the code. Crash as early as possible, instead of halfway through your analysis.
I only use :: when I need to make sure I am using the correct version of a function, not a version from some other package that is masking the name.
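A minimal sketch of that fail-fast pattern (the function name and file path here are made up for illustration):

my_analysis <- function(path) {
  ## bail out immediately if the dependency is missing,
  ## rather than halfway through the analysis
  if (!require(foreign)) {
    stop("Package 'foreign' is required but not installed.")
  }
  read.systat(path, to.data.frame = TRUE)
}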
On the other hand, the :: operator gets the variable from the package, while require loads the whole package (at least I hope so), so speed differences came first to my mind. :: must be faster than require.
I think you may be ignoring the effects of lazy loading which is used by the foreign package according to the first page of its manual. Essentially, packages that use lazy loading defer the loading of objects, such as functions, until the objects are called upon for the first time. So your argument that ":: must be faster than require" is not necessarily true as foreign is not loading all of its contents into memory when you attach it with require. For full details on lazy loading, see Prof. Ripley's article in RNews, Volume 4, Issue 2.
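A quick way to see this from a fresh R session (assuming foreign is installed but not yet loaded): the first :: access loads the package's namespace, and later accesses reuse it.

"foreign" %in% loadedNamespaces()  # FALSE in a fresh session
invisible(foreign::read.systat)    # first :: access loads the namespace
"foreign" %in% loadedNamespaces()  # TRUE, yet nothing was attached to the search path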
Since the time to load a package is almost always small compared to the time you spend trying to figure out what the code you wrote six months ago was about, in this case coding for clarity is the most important thing.
For scripts, having a call to require or library at the start lets you know which packages you need straight away.
Similarly, calling require (or a wrapper like requirePackage in Hmisc or try_require in ggplot2) at the start of a function is the most unambiguous way of showing that you need to use that package.
:: should be reserved for cases when you have naming conflicts between packages – compare, e.g.,
Hmisc::is.discrete
and
plyr::is.discrete
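For example, a short sketch of the masking situation (assuming both packages are installed):

library(Hmisc)
library(plyr)                  # plyr's is.discrete now masks Hmisc's
is.discrete(letters)           # silently dispatches to plyr's version
Hmisc::is.discrete(letters)    # explicitly selects Hmisc's version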

Efficiency of stack-based expression evaluation for math parsing

I have to write, for academic purposes, an application that plots user-input expressions like: f(x) = 1 - exp(3^(5*ln(cosx)) + x)
The approach I've chosen for the parser is to convert the expression into RPN with the shunting-yard algorithm, treating primitive functions like "cos" as unary operators. This means the function written above would be converted into a series of tokens like:
1, 3, 5, x, cos, ln, *, ^, x, +, exp, -
The problem is that to plot the function I have to evaluate it LOTS of times, so applying the stack-evaluation algorithm for each input value would be very inefficient.
How can I solve this? Do I have to forget the RPN idea?
How much is "LOTS of times"? A million?
What kind of functions could be input? Can we assume they are continuous?
Did you try measuring how well your code performs?
(Sorry, started off with questions!)
You could try one of the two approaches (or both) described briefly below (there are probably many more):
1) Parse Trees.
You could create a parse tree. Then do what most compilers do to optimize expressions: constant folding, common subexpression elimination (which you could achieve by linking together the common expression subtrees and caching the results), etc.
Then you could use lazy evaluation techniques to avoid evaluating whole subtrees. For instance, if you have a tree

  *
 / \
A   B

where A evaluates to 0, you could avoid evaluating B entirely, as you know the result is 0. With RPN you would lose out on this lazy evaluation.
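A minimal sketch of that short-circuit in C (the Node type here is made up for illustration):

struct Node {
    enum { CONST, VAR, MUL /*, ... */ } kind;
    double value;               /* CONST: the constant's value      */
    int var;                    /* VAR:   index into the inputs     */
    struct Node *left, *right;  /* MUL:   the two operand subtrees  */
};

double eval(const struct Node *n, const double *inputs) {
    switch (n->kind) {
    case CONST: return n->value;
    case VAR:   return inputs[n->var];
    case MUL: {
        double a = eval(n->left, inputs);
        if (a == 0.0)
            return 0.0;         /* lazy: the right subtree is never visited */
        return a * eval(n->right, inputs);
    }
    }
    return 0.0;                 /* unreachable for well-formed trees */
}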
2) Interpolation
Assuming your function is continuous, you could approximate it to a high degree of accuracy using polynomial interpolation. This way you do the complicated evaluation of the function only a few times (depending on the degree of the polynomial you choose), and then do fast polynomial evaluations for the rest of the time.
To create the initial set of data, you could just use approach 1 or just stick to using your RPN, as you would only be generating a few values.
So if you use Interpolation, you could keep your RPN...
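A minimal sketch of the evaluation side in C (plain Lagrange form; in practice you would sample at Chebyshev nodes to avoid Runge's phenomenon):

/* Evaluate the degree-(n-1) Lagrange interpolating polynomial through
   the points (xs[i], ys[i]) at x.  O(n^2), fine for low degrees. */
double lagrange(const double *xs, const double *ys, int n, double x) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double term = ys[i];
        for (int j = 0; j < n; j++)
            if (j != i)
                term *= (x - xs[j]) / (xs[i] - xs[j]);
        sum += term;
    }
    return sum;
}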
Hope that helps!
Why reinvent the wheel? Use a fast scripting language instead.
Integrating something like Lua into your code will take very little time and be very fast.
You'll usually be able to byte-compile your expression, and that should result in code that runs very fast, certainly fast enough for simple 1D graphs.
I recommend Lua, as it's fast and integrates with C/C++ more easily than any other scripting language. Another good option would be Python, but while it's better known, I found it trickier to integrate.
Why not keep around a parse tree (I use "tree" loosely; in your case it's a sequence of operations), and mark input variables accordingly? (E.g., for inputs x, y, z, annotate "x" with 0 to signify the first input variable, "y" with 1 to signify the second, etc.)
That way you can parse the expression once, keep the parse tree, take in an array of inputs, and apply the parse tree to evaluate.
If you're worrying about the performance aspects of the evaluation step (vs. the parsing step), I don't think you'd do much better unless you get into vectorizing (applying your parse tree on a vector of inputs at once) or hard-coding the operations into a fixed function.
What I do is use the shunting algorithm to produce the RPN. I then "compile" the RPN into a tokenised form that can be executed (interpretively) repeatedly without re-parsing the expression.
Michael Anderson suggested Lua. If you want to try Lua for just this task, see my ae library.
Inefficient in what sense? There's machine time and programmer time. Is there a standard for how fast it needs to run with a particular level of complexity? Is it more important to finish the assignment and move on to the next one (perfectionists sometimes never finish)?
All those steps have to happen for each input value. Yes, you could have a heuristic that scans the list of operations and cleans it up a bit. Yes, you could compile some of it down to assembly instead of calling +, * etc. as high level functions. You can compare vectorization (doing all the +'s then all the *'s etc, with a vector of values) to doing the whole procedure for one value at a time. But do you need to?
I mean, what do you think happens if you plot a function in gnuplot or Mathematica?
Your simple interpretation of RPN should work just fine, especially since it contains:
- math library functions like cos, exp, and ^ (pow, which involves logs)
- symbol table lookup
Hopefully, your symbol table (with variables like x in it) will be short and simple.
The library functions will most likely be your biggest time-takers, so unless your interpreter is poorly written, it will not be a problem.
If, however, you really gotta go for speed, you could translate the expression into C code, compile and link it into a DLL on the fly, and load it (takes about a second). That, plus memoized versions of the math functions, could give you the best performance.
P.S. For parsing, your syntax is pretty vanilla, so a simple recursive-descent parser (about a page of code, O(n) same as shunting-yard) should work just fine. In fact, you might just be able to compute the result as you parse (if math functions are taking most of the time), and not bother with parse trees, RPN, any of that stuff.
I think this RPN-based library can serve the purpose: http://expressionoasis.vedantatree.com/
I used it in one of my calculator projects and it works well. It is small and simple, but extensible.
One optimization would be to replace the stack with an array of values and implement the evaluator as a three-address machine, where each operation loads from two (or one) locations and stores to a third. This can make for very tight code:
struct Op {
    enum {
        add, sub, mul, div,
        cos, sin, tan,
        //....
    } op;        /* which operation to perform            */
    int a, b, d; /* source slots a, b; destination slot d */
};

void go(struct Op* ops, int n, float* v) {
    for (int i = 0; i < n; i++) {
        switch (ops[i].op) {
        case add: v[ops[i].d] = v[ops[i].a] + v[ops[i].b]; break;
        case sub: v[ops[i].d] = v[ops[i].a] - v[ops[i].b]; break;
        case mul: v[ops[i].d] = v[ops[i].a] * v[ops[i].b]; break;
        case div: v[ops[i].d] = v[ops[i].a] / v[ops[i].b]; break;
        //...
        }
    }
}
The conversion from RPN to three-address code should be easy, as three-address code is a generalization.
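For instance, f(x) = x*x + 1 might be encoded like this (the slot layout is arbitrary):

/* v[0] = x (input), v[1] = 1.0 (preloaded constant), v[2..3] = temporaries */
struct Op prog[] = {
    { mul, 0, 0, 2 },   /* v[2] = v[0] * v[0] */
    { add, 2, 1, 3 },   /* v[3] = v[2] + v[1] */
};
/* go(prog, 2, v);  the result ends up in v[3] */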
