Sage math numerical evaluation

I'm using Sage to do some calculations, and I found that numerical evaluation works quite differently from Python's.
For example, evalf() no longer works; instead there are n() and gp().
My questions are:
What are the different ways of doing numerical evaluation in Sage, and how do they differ?
What is the difference between n() and gp()? Why does the latter seem to be much slower?

I assume you are referring to SymPy when you say evalf. In any case, n() or numerical_approx() is the equivalent; see the documentation. The default is 53 bits of precision.
You shouldn't be thinking of gp() though, unless you really want to use the GP/PARI interpreter or convert something to GP. gp() hands the computation to a separate GP process through a text interface, which is why it seems so much slower than the native n().

Use of set.seed() in statistics

This is an elementary question, so I apologize if I am missing something obvious. I'm in an advanced statistics class, albeit the first at my university to use R. The assignment is primarily to get us used to using R; it asks us to calculate the log of the square root of 125 and to use the set.seed() function. I am confused about the set.seed() aspect. I understand that it is used for random number generation in simulations, but I don't understand where it is applied within the code. This is what I did:
125 %>%
log() %>%
sqrt() %>%
set.seed(100)
Is this how it is supposed to be used?
No. Someone will probably give a fuller answer, but:
- set.seed() does not affect the log() or sqrt() operations at all. Maybe this was supposed to be a two-part question: (1) calculate the log of the square root of 125; (2) use set.seed() to set the state of the pseudo-random number generator.
- You can use the pipe (%>%) operator to compose log() and sqrt(), but (unless you have been specifically instructed to do it this way for some reason) it's overkill; also note that 125 %>% log() %>% sqrt() applies the functions left to right and so computes sqrt(log(125)), not the log of the square root. You really might as well write it in the more "normal" way: log(sqrt(125)). A minimal sketch of both parts is below.
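Here is one way to do the two parts as separate steps, assuming the exercise just wants a seed set before any random draws (the runif() call is only an illustrative draw, not part of the original assignment):
set.seed(100)    # fixes the state of the pseudo-random number generator
runif(1)         # any draw after the seed is now reproducible across runs
log(sqrt(125))   # the arithmetic part; set.seed() plays no role here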

R numerical method similar to vpasolve in MATLAB

I am trying to solve an equation numerically in R and would like a method that performs similarly to vpasolve in MATLAB. I have a nonlinear equation (involving a lot of log functions) which, when solved in R with uniroot, gives a completely different answer from what vpasolve gives in MATLAB.
First, a word of caution: it's often much more productive to learn that there's a better way to do something than to cling to the way you are used to doing it.
Edit: I went back to MATLAB and realized that the "vpa" collection uses extended (variable) precision. Is that absolutely necessary for your purposes? If not, then my suggestions below may suffice.
If you do require extended precision, then perhaps the Rmpfr::unirootR function will suffice. I would point out that, since all these solvers generate an approximate solution (as opposed to an analytic one), the use of extended-precision operations seems a bit pointless.
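If you do go down the extended-precision route, a minimal sketch with unirootR might look like the following (the equation is a toy stand-in, since the original one isn't shown):
library(Rmpfr)                      # extended-precision numerics via MPFR
f <- function(x) log(x) - 1         # toy equation with known root x = e
unirootR(f, interval = mpfr(c(2, 3), precBits = 200), tol = 1e-40)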
Next, you need to determine whether MATLAB's vpasolve or R's uniroot is getting you the correct answer. Or maybe you are simply converging to a root that's not the one you want, in which case you need to read up on setting limits on the starting conditions or the search region.
Finally, in addition to uniroot, I recommend you learn to use the R packages BBsolve, nleqslv, rootSolve, and ktsolve (disclaimer: I am the owner and maintainer of ktsolve). These packages are pretty flexible and may lead you to better solutions to your original problem. A small bracketing example with uniroot follows.
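Since the original equation isn't shown, here is a hypothetical log-heavy equation solved with uniroot over an explicit bracketing interval, just to illustrate pinning down the root you actually want:
f <- function(x) log(x) + log(x + 2) - 1     # true root where x*(x + 2) = e, about 0.928
uniroot(f, interval = c(0.1, 10), tol = 1e-12)$root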

Are series representations of functions ever practically used to graph in computer science?

As you probably know, functions can be represented as an infinite series. For example, f(x) = cos(x) can be represented by its Taylor series, cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ... . My question is whether this is ever used practically in programming for any type of application. I know it can be used; I was just wondering if it actually is for serious projects.
Aside from infinite series, there are other representations of functions which can be useful for computing approximations. Asymptotic series, identities involving other "elementary" functions, and interpolation in a table of values are all used in different contexts. Take a look at Abramowitz & Stegun's "Handbook of Mathematical Functions" to get an idea of the variety of possibilities. Also look at the source code of popular libraries or systems such as R, NumPy, SciPy, or Octave to see what approaches have been used by the authors of that software.
Specifically about series approximations for trigonometric functions, I think that might be a reasonable thing to do, but only if the range of the argument is first reduced (via identities) so that it is as small as possible; a sketch of the idea is below.
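For instance, here is a minimal R sketch (not production code) of a truncated cosine series with a crude range reduction; the name my_cos and the term count are just illustrative choices:
my_cos <- function(x, terms = 12) {
  x <- x %% (2 * pi)                 # reduce the argument modulo 2*pi ...
  if (x > pi) x <- x - 2 * pi        # ... and map it into [-pi, pi], where the series converges quickly
  k <- 0:(terms - 1)
  sum((-1)^k * x^(2 * k) / factorial(2 * k))
}
my_cos(100) - cos(100)               # difference from the built-in is tiny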
Approximation of functions is a great topic; good luck and have fun.

How to transpose a high-degree tensor in an FP way?

I'm currently working on a math library.
It now supports several matrix operations:
- Plus
- Product
- Dot
- Get & Set
- Transpose
- Multiply
- Determinant
I always want to generalize everything I can.
I was thinking about a recursive way to implement the transpose of a matrix, but I just couldn't figure it out.
Can anybody help?
I would advise you against trying to write a recursive method to transpose a matrix.
The idea is easy:
transpose(A)(i, j) = A(j, i)
Recursion isn't hard in this case. You can see the stopping condition: a 1x1 matrix with a single value is its own transpose. Build it up for 2x2, etc.
The problem is that this will be terribly inefficient, both in terms of stack depth and memory, for any matrix beyond a trivial size. People who apply linear algebra to real problems can require tens of thousands, or even billions, of degrees of freedom.
You also don't mention meaningful, practical cases like sparse or banded matrices, etc.
You're better off doing it with a straightforward declarative approach, like the sketch below.
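Here is a minimal sketch of that index-swap approach (written in R for brevity; the same double loop carries over directly to JavaScript):
my_transpose <- function(A) {
  B <- matrix(0, nrow = ncol(A), ncol = nrow(A))   # result has swapped dimensions
  for (i in seq_len(nrow(A)))
    for (j in seq_len(ncol(A)))
      B[j, i] <- A[i, j]                           # entry (i, j) moves to (j, i)
  B
}
A <- matrix(1:6, nrow = 2)
all.equal(my_transpose(A), t(A))                   # TRUE: matches the built-in t()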
Haskell uses BLAS as its backing implementation. It's a more functional language than JavaScript. Perhaps you could crib some ideas by looking at the source code.
I'd recommend that you do the simple thing first, get it all working, and then branch out from there.
Here's a question to ask yourself: Why would anyone want to do serious numerical work using JavaScript? What will your library offer that's an improvement on what's available?
If you want to learn how to reinvent wheels, by all means proceed. Just understand that you aren't the first.

Rllvm and compiler packages: R compilation

This is a fairly general question about the future of R: is there any hope of seeing a merger of compiler and Rllvm (from Omegahat), or another JIT compilation scheme for R? (I know there is Ra, but it has not been updated recently.)
In my tests the speed gains from compiler are marginal for "complicated" functions...
What matters isn't how complicated a function is but what kinds of computations it performs. The compiler will make the most difference for functions dominated by interpreter overhead, such as ones that perform mostly simple operations on scalar or other small data. In cases like that I have seen a factor of 3 for artificial examples and a bit better than a factor of 2 for some production code. Functions that spend most of their time in operations implemented in native code, like linear algebra operations, will see little benefit.
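To see the kind of case where byte compilation pays off, here is a small sketch using compiler::cmpfun on an interpreter-bound scalar loop (timings will vary, and recent R versions JIT-compile functions automatically, which narrows the gap):
library(compiler)
f <- function(n) { s <- 0; for (i in 1:n) s <- s + i %% 7; s }   # mostly simple scalar operations
fc <- cmpfun(f)                  # byte-compile the same function
system.time(f(1e6))
system.time(fc(1e6))             # typically noticeably faster than the uncompiled f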
This is just the first release of the compiler, and it will evolve over time. LLVM is one of several possible directions we will look at, but probably not for a while. In any case, I would expect using something like LLVM to provide further improvements in the cases where the current compiler already makes a difference, but not to add much in the cases where it does not.
(Moving from a comment to an answer ...)
This sounds more like a question for the R development mailing list. Based on my general impressions I would say "probably not". Are your complicated functions already based on heavily vectorized (and hence efficient) functions? I think a more promising direction for not-so-easily-automatically-optimized situations is the increasing simplicity of embedding C++ and the like (e.g., via Rcpp, using the inline package if necessary); a small sketch is below.
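For example, a minimal sketch of embedding C++ with Rcpp (sumSq is just an illustrative name, not an existing function):
library(Rcpp)
cppFunction("
double sumSq(NumericVector x) {
  double s = 0;
  for (int i = 0; i < x.size(); i++) s += x[i] * x[i];  // the hot loop runs in compiled code
  return s;
}")
sumSq(as.numeric(1:10))   # 385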
