I want to get a list of the available math functions without looking in the index. I tried the ls(), methods(), and lsf() commands, but I could not figure it out.
This will get you started, but I don't know of a way to single out the "math functions" in base. You could run this, copy the output to Excel, and sort it as you wish to get a set:
lsf.str("package:base")
See ?Math for the mathematical functions:
Group "Math":
abs, sign, sqrt, floor, ceiling, trunc, round, signif
exp, log, expm1, log1p, cos, sin, tan, cospi, sinpi, tanpi, acos,
asin, atan
cosh, sinh, tanh, acosh, asinh, atanh
lgamma, gamma, digamma, trigamma
cumsum, cumprod, cummax, cummin
Members of this group dispatch on x. Most members accept only one
argument, but members log, round and signif accept one or two
arguments, and trunc accepts one or more.
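If you want that list programmatically rather than by reading the help page, getGroupMembers() from the methods package (attached by default) should do it:
getGroupMembers("Math")      ## the functions listed above
## and similarly for the other group generics:
getGroupMembers("Arith")
getGroupMembers("Summary")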
Have you looked at
Advanced R, Vocabulary (by Hadley Wickham)?
http://adv-r.had.co.nz/Vocabulary.html
That gives you the basic math functions in 'The basics' section plus much more. All conveniently in one place. And the full book is an awesome read too (IMHO the best out there for understanding how to really work with R).
I am interested in Julia and would like to understand a couple of things before I dive into it. I would like to have a look at some working code that calculates this expression.
In that expression everything is a constant except for the Bessel functions, of course. The number n is an integer and "e" is an eccentricity (ranging from 0 to, say, 0.999).
For a given value of n I would like to compute h_c,n; e.g. if n=2, then h_c,2.
No, I am not tricking you into coding for me.
I am used to working with shell scripts and bc, and plotting with gnuplot. I would like to have something more flexible than all of that, and this would be a good example to start looking at Julia. Thanks!
For the best tutorial on doing equations/mathematics in Julia have a look at https://github.com/mossr/BeautifulAlgorithms.jl
This will give you an excellent overview along with an initial feel for the language.
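If it helps, here is roughly what such a function could look like in Julia. This is only a placeholder sketch: I have not reproduced your actual expression for h_c,n, and besselj comes from the SpecialFunctions.jl package.
using SpecialFunctions        # provides besselj(n, x), Bessel function of the first kind

# placeholder: substitute your actual expression for h_c,n in the body
function h_c(n::Integer, e::Real)
    @assert 0.0 <= e < 1.0    # eccentricity range from the question
    return besselj(n, n * e)  # illustrative call only, not your formula
end

h_c(2, 0.5)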
I am looking for the code of the base R formula parser or interpreter, that translates the formula the user types into the variables and transformations used to bridge the data to the model matrix. A number of packages have their own formula interpreters that supplement or replace the base R interpreter, e.g. rmutil, gamlss.nl, ttBulk.
At a minimum, the following symbols have a distinct meaning in the formula context. I am looking for the code that implements that meaning.
~, 1, 0, +, -, *, /, :, ^, ., |, I, %in%
In addition, the functions below seem to be used mainly within the formula context, but I am not sure if they operate in a distinct way in that context. Some may have meaning only in model-fitting functions beyond lm or from particular packages. In some cases I am not sure that they have a meaning outside of the formula context.
C
||
poly
offset
strata
cluster
contrasts
ns
lo
bs
s
What I really want is an expository piece or tutorial at a level of detail that would let me figure out, e.g. which of the operations above commute, which are distributive over which other ones, which ones have inverse operations. But I gather that no such exposition exists.
I'd also like to get a complete list of functions that mean something different inside a formula, if such a list can be had. There is nothing in the R Language Definition or in R Internals about these special meanings, and, e.g., methods("|") gives me methods for hex and octal. The best discussion I have seen is still Statistical Models in S, Chap. 2, sect. 2.3.1, but I believe this is incomplete and maybe also not current.
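For concreteness, here is the kind of inspection I can already do with terms(); it shows how a formula is expanded, but not the code that implements the expansion:
f <- y ~ a + b + a:b + I(x^2) - 1
tf <- terms(f)
attr(tf, "term.labels")   ## the expanded terms
attr(tf, "intercept")     ## 0 here, because of the "- 1"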
I like solving my math problems (high school) using R, as it is faster than writing on a piece of paper. One problem I'm having is that I have to keep writing the multiplication sign, for example:
9x^2 + 24x + 16 yields: Error: unexpected symbol in "9x"
Is there any way in R to write 4x for multiplication, instead of having to write 4*x?
It would save me some time not having to write one extra character the whole time! Thanks
No. A number immediately followed by a name, with no space or operator in between, simply isn't valid syntax in R.
Take a step back and look at the syntax rules for, say, Excel, Matlab, Python, Mathematica. Every language has its rules, generally (:-) ) with good reason. For example, in R, the following are legal object names:
foo
foo.bar
foo1
foo39
But 39foo is not legal. So if you wanted any sequence [0-9][Letters] or the reverse to indicate multiplication, you'd have a conflict with naming rules.
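To make the contrast concrete:
x <- 2
## 9x^2 + 24*x + 16    # parse error: unexpected symbol in "9x"
9*x^2 + 24*x + 16      # 100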
I was trying to learn SciPy, using it for mixed integration and differentiation, but at the very first step I encountered the following problems.
For numerical differentiation, it seems that the only SciPy function that works for callable functions is scipy.derivative(), if I'm right!? However, I couldn't get it to work:
First, when I do not want to specify the point at which the differentiation is to be taken, e.g. when the differentiation is under an integral, so that it is the integral that should assign the numerical values to its integrand's variable, not me. As a simple example I tried this code in Sage's notebook:
import scipy as sp
from scipy import integrate, derivative
var('y')
f=lambda x: 10^10*sin(x)
g=lambda x,y: f(x+y^2)
I=integrate.quad( sp.derivative(f(y),y, dx=0.00001, n=1, order=7) , 0, pi)[0]; show(I)
show( integral(diff(f(y),y),y,0,1).n() )
It also gives the warning "Warning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated.", and I don't know what this warning means, as it persists even when I increase "dx" and decrease the "order".
Second, when I want to find the derivative of a multivariable function like g(x,y) in the above example, something like sp.derivative(g(x,y), (x,0.5), dx=0.01, n=1, order=3) gives an error, as is easily expected.
Looking forward to hearing from you about how to resolve the above-mentioned problems with numerical differentiation.
Best Regards
There are some strange problems with your code that suggest you need to brush up on some Python! I don't know how you even made these definitions in Python, since they are not legal syntax.
First, I think you are using an older version of scipy. In recent versions (at least from 0.12+) you need from scipy.misc import derivative. derivative is not in the scipy global namespace.
Second, var is not defined, although it is not necessary anyway (I think you meant to import sympy first and use sympy.var('y')). sin has also not been imported from math (or numpy, if you prefer). show is not a valid function in sympy or scipy.
^ is not the power operator in Python. You meant **.
You seem to be mixing up the idea of symbolic and numeric calculus operations here. scipy won't numerically differentiate an expression involving a symbolic object -- the second argument to derivative is supposed to be the point at which you wish to take the derivative (i.e. a number). As you say you are trying to do numeric differentiation, I'll resolve the issue for that purpose.
from scipy import integrate
from scipy.misc import derivative
from math import *
f = lambda x: 10**10*sin(x)
df = lambda x: derivative(f, x, dx=0.00001, n=1, order=7)
I = integrate.quad( df, 0, pi)[0]
Now, this last expression generates the warning you mentioned, and the value returned is not very close to zero at -0.0731642869874073 in absolute terms, although that's not bad relative to the scale of f. You have to appreciate the issues of roundoff error in finite differencing. Your function f varies on your interval between 0 and 10^10! It probably seems paradoxical, but making the dx value for differentiation too small can actually magnify roundoff error and cause numerical instability. See the second graph here ("Example showing the difficulty of choosing h due to both rounding error and formula error") for an explanation: http://en.wikipedia.org/wiki/Numerical_differentiation
In fact, in this case, you need to increase it, say to 0.001: df = lambda x: derivative(f, x, dx=0.001, n=1, order=7)
Then, you can integrate safely, with no terrible roundoff.
I=integrate.quad( df, 0, pi)[0]
I don't recommend throwing away the second return value from quad. It's an important verification of what happened, as it is "an estimate of the absolute error in the result". In this case, I == 0.0012846582250212652 and the abs error is ~ 0.00022, which is not bad (the interval that implies still does not include zero). Maybe some more fiddling with the dx and absolute tolerances for quad will get you an even better solution, but hopefully you get the idea.
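That is, something like:
I, abserr = integrate.quad(df, 0, pi)   # keep the absolute error estimate as well
print(I, abserr)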
For your second problem, you simply need to create a proper scalar function (call it gx) that represents g(x,y) along y=0.5 (this is partial application, often loosely called currying in computer science).
g = lambda x, y: f(x+y**2)
gx = lambda x: g(x, 0.5)
derivative(gx, 0.2, dx=0.01, n=1, order=3)
gives you a value of the derivative at x=0.2. Naturally, the value is huge given the scale of f. You can integrate using quad like I showed you above.
If you want to be able to differentiate g itself, you need a different numerical differentiation function. I don't think scipy or numpy support this, although you could hack together a central difference calculation by making a 2D fine mesh (size dx) and using numpy.gradient. There are probably other library solutions that I'm not aware of, but I know my PyDSTool software contains a function diff that will do that (if you rewrite g to take one array argument instead). It uses Ridder's method and is inspired by the Numerical Recipes pseudocode.
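As a rough sketch of that mesh idea (the grid bounds and step sizes here are just illustrative, not a polished solution):
import numpy as np

g = lambda x, y: 1e10 * np.sin(x + y**2)

# tabulate g on a fine 2D grid and let numpy.gradient do central differences
dx = dy = 1e-3
xs = np.arange(0.0, 1.0, dx)
ys = np.arange(0.0, 1.0, dy)
X, Y = np.meshgrid(xs, ys, indexing='ij')   # X varies along axis 0, Y along axis 1
G = g(X, Y)

dG_dx, dG_dy = np.gradient(G, dx, dy)       # partial derivatives on the grid
# e.g. an approximation of dg/dx near (x, y) = (0.2, 0.5):
i, j = int(round(0.2 / dx)), int(round(0.5 / dy))
print(dG_dx[i, j])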
On the Wikipedia page about summation it says that the equivalent operation in Haskell is to use foldl. My question is: Is there any reason why it says to use this instead of sum? Is one more 'purist' than the other, or is there no real difference?
foldl is a general tail-recursive reduce function. Recursion is the usual way of thinking about manipulating lists of items in functional programming languages, and provides an alternative to loop iteration that is often much more elegant. In the case of a reduce function like fold, the tail-recursive implementation is very efficient. As others have explained, sum is then just a convenient mnemonic for foldl (+) 0 l.
Presumably its use on the Wikipedia page is to illustrate the general principle of summation through tail recursion. But since the Haskell Prelude contains sum, which is shorter and easier to understand, you should use that in your code.
Here's a nice discussion of Haskell's fold functions with simple examples that's well worth reading.
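To make the recursion point concrete (mySum and sumViaFold are just illustrative names):
-- summing by explicit recursion over the list...
mySum :: Num a => [a] -> a
mySum []     = 0
mySum (x:xs) = x + mySum xs

-- ...and the same thing expressed as a left fold
sumViaFold :: Num a => [a] -> a
sumViaFold = foldl (+) 0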
I don't see where it says anything about Haskell or foldl on that Wikipedia page, but sum in Haskell is just a more specific case of foldl. It can be implemented like this, for example:
sum l = foldl (+) 0 l
Which can be reduced to:
sum = foldl (+) 0
One thing to note is that sum may be lazier than you would want, so consider using foldl'.
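For example (foldl' lives in Data.List):
import Data.List (foldl')

-- a strict left fold: it forces the accumulator at each step, so it avoids
-- building up a long chain of unevaluated thunks on large lists
sum' :: Num a => [a] -> a
sum' = foldl' (+) 0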
As stated by the others, there's no difference. However, a sum-call is easier to read than a fold-call, so I'd go for sum if you need summation.
There is no difference. That page is simply saying that sum is implemented using foldl. Just use sum whenever you need to calculate the sum of a list of numbers.
The concept of summation can be extended to non-numeric types: all you need is something equivalent to a (+) operation and a zero value. In other words, you need a monoid. This leads to the Haskell function mconcat, which returns the "sum" of a list of values of a monoid type. The default mconcat is of course defined in terms of mappend, which plays the role of the plus operation.
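A small illustration, using the Sum wrapper from Data.Monoid:
import Data.Monoid (Sum(..), getSum)

-- ordinary summation recovered from the monoid machinery: wrap each number
-- in Sum, whose mappend is (+) and whose mempty is 0
total :: Int
total = getSum (mconcat (map Sum [1, 2, 3, 4]))   -- 10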