Is there a way to perform arbitrary-precision exponentiation in Clojure? I've tried Math/pow and the expt function from clojure.math.numeric-tower, but both return only limited precision. For example:
(with-precision 100 (expt 2 1/2))
=> 1.4142135623730951
How do I get more digits?
Apfloat for Java provides fast arbitrary-precision arithmetic. If your project uses Leiningen, you can easily pull it in by adding the following dependency to your project.clj file:
[org.apfloat/apfloat "1.6.3"]
You can perform arbitrary precision exponentiation in Clojure using Apfloat. For example:
user> (import '(org.apfloat Apfloat ApfloatMath))
org.apfloat.ApfloatMath
user> (-> (Apfloat. 2M 100) (ApfloatMath/pow (Apfloat. 0.5M 100)))
1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727
math/expt is likely not the function you are looking for, as it returns a double rather than a BigDecimal in this context and hence ignores your with-precision form. From its docstring:
Returns an exact number if the base is an exact number and the power is an integer, otherwise returns a double.
user> (type (with-precision 100 (math/expt 2M 1/2)))
java.lang.Double
The answer to this question seems to cover how to get arbitrary precision out of BigDecimal exponentiation; BigDecimal seems not to provide this "out of the box".
Does anyone know an algorithm for a Pow2() function for fractions?
For a function of the following form, where BigRational stands for any rational number type, with arbitrarily large integers for numerator and denominator:
BigRational Pow2(BigRational exponent, int maxdigits)
For the inverse function, BigRational Log2(BigRational exponent, int maxdigits), I have already found a very nice algorithm that uses several identities, converges quickly, and is many times faster than corresponding Log (Ln) or Log10 functions.
Of course, Pow2(x) can be computed as Exp(x * Log(2)), but the point is to avoid Exp, since that function is relatively slow in arbitrary-precision arithmetic.
I am currently working on a library for arbitrary-precision arithmetic based on some new foundations:
https://github.com/c-ohle/RationalNumerics
An efficient algorithm for such a Pow2 function that outperforms an Exp based on Taylor series could improve the performance of many other functions and algorithms for arbitrary-precision arithmetic.
I need to find the next power of two smaller than a given number in Julia.
i.e. smallerpoweroftwo(15) should return 8 but smallerpoweroftwo(17) should return 16
I have this so far, but searching through the string of bits seems a bit hacky to me. Maybe it's not ... Any ideas?
function smallerpoweroftwo(n::Int)
    2^(length(bits(n)) - search(bits(n), '1'))
end
Thanks!
Edit:
I was mainly wondering whether there is a more elegant way to do this using just bitwise arithmetic. Or is there a bit-length function somewhere, like in some other languages?
Julia's standard library has prevpow2 and nextpow2 functions:
help?> prevpow2
search: prevpow2 prevpow prevprod
prevpow2(n)
The largest power of two not greater than n. Returns 0 for n==0, and returns -prevpow2(-n) for negative
arguments.
help?> nextpow2
search: nextpow2 nextpow nextprod
nextpow2(n)
The smallest power of two not less than n. Returns 0 for n==0, and returns -nextpow2(-n) for negative
arguments.
The prevpow2 function should do what you want.
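For instance, checking it against your examples at the REPL:

julia> prevpow2(15)
8

julia> prevpow2(17)
16

Note that, per the documentation above ("not greater than n"), prevpow2(16) returns 16; if you need a power of two strictly smaller than the input, prevpow2(n - 1) gives that. (In newer Julia versions these functions were folded into prevpow(2, n) and nextpow(2, n).)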
How about this?¹
2^floor(Int, log(2,n-1))
¹ Added the exponent to the solution after a comment by jverzani.
I want to calculate pi, but I have quite a few limitations. Variables can only hold up to 5 decimal places, and I only have the following operators:
Addition
Subtraction
Multiplication
Division
Exponents
Square roots
Sin
Cos
Basic Loops, Conditionals, and relational operators.
The BBP algorithm seems useless here, because even though it would not need arbitrary precision, I cannot convert between bases. I'm not aware of any other formulas that can find the nth digit of pi in base 10.
Would it even be possible to calculate pi using these constraints?
BBP can be modified to give π in base 10. There's a Java implementation on GitHub. (I believe that the screenshot of the algorithm description is taken from Pi Unleashed by Arndt/Haenel.)
You'll need the modulo operation and a means to calculate the closest integer to the logarithm of a number, but you can perform them using the operations you have and loops.
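For instance, here is a minimal sketch (written as Python purely for illustration) of both missing primitives built from subtraction, multiplication, comparisons, and loops:

def mod(a, b):
    # a mod b by repeated subtraction (assumes a >= 0 and b > 0)
    while a >= b:
        a = a - b
    return a

def ilog(n, base):
    # largest integer k with base^k <= n, using only
    # multiplication, comparison, and a loop (assumes n >= 1, base >= 2)
    k = 0
    p = base
    while p <= n:
        p = p * base
        k = k + 1
    return k

ilog computes the floor of the logarithm; getting the closest integer instead takes one more comparison.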
Fold (aka reduce) is considered a very important higher-order function. Map can be expressed in terms of fold (see here). But this sounds more academic than practical to me. A typical use could be to get the sum, product, or maximum of numbers, but these functions usually accept any number of arguments. So why write (fold + 0 '(2 3 5)) when (+ 2 3 5) works fine? My question is: in what situation is it easiest or most natural to use fold?
The point of fold is that it's more abstract. It's not that you can do things that you couldn't before, it's that you can do them more easily.
Using a fold, you can generalize any function defined on two elements to apply to an arbitrary number of elements. This is a win because it's usually much easier to write, test, maintain, and modify a single function that applies to two arguments than one that operates on a whole list. And it's always easier to write, test, maintain, etc. one simple function instead of two with similar-but-not-quite functionality.
Since fold (and, for that matter, map, filter, and friends) has well-defined behaviour, it's often much easier to understand code using these functions than code using explicit recursion.
Basically, once you have the one version, you get the other "for free". Ultimately, you end up doing less work to get the same result.
Here are a few simple examples where reduce works really well.
Find the sum of the maximum values of each sub-list
Clojure:
user=> (def x '((1 2 3) (4 5) (0 9 1)))
#'user/x
user=> (reduce #(+ %1 (apply max %2)) 0 x)
17
Racket:
> (define x '((1 2 3) (4 5) (0 9 1)))
> (foldl (lambda (a b) (+ b (apply max a))) 0 x)
17
Construct a map from a list
Clojure:
user=> (def y '(("dog" "bark") ("cat" "meow") ("pig" "oink")))
#'user/y
user=> (def z (reduce #(assoc %1 (first %2) (second %2)) {} y))
#'user/z
user=> (z "pig")
"oink"
For a more complicated Clojure example featuring reduce, check out my solution to Project Euler problems 18 & 67.
See also: reduce vs. apply
In Common Lisp, functions don't accept an unlimited number of arguments.
Every Common Lisp implementation defines a constant, CALL-ARGUMENTS-LIMIT, which must be 50 or larger.
This means that every implementation must support calls with at least 50 arguments, but portable code cannot count on more than that.
This limit exists to allow compilers to possibly use optimized calling schemes and to not provide the general case, where an unlimited number of arguments could be passed.
Thus, to process large lists or vectors (more than 50 elements) in portable Common Lisp code, it is necessary to use iteration constructs, reduce, map, and similar. It is also why you should write (reduce '+ large-list) rather than (apply '+ large-list).
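For example, a minimal sketch (the list here is purely illustrative):

;; a list much longer than the minimum CALL-ARGUMENTS-LIMIT of 50
(defparameter *large-list* (loop repeat 100000 collect 1))

;; (apply #'+ *large-list*)  ; may exceed CALL-ARGUMENTS-LIMIT and signal an error
(reduce #'+ *large-list*)    ; => 100000 in any conforming implementation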
Code using fold is usually awkward to read. That's why people prefer map, filter, exists, sum, and so on, when available. These days I'm primarily writing compilers and interpreters; here are some ways I use fold:
Compute the set of free variables for a function, expression, or type
Add a function's parameters to the symbol table, e.g., for type checking
Accumulate the collection of all sensible error messages generated from a sequence of definitions
Add all the predefined classes to a Smalltalk interpreter at boot time
What all these uses have in common is that they're accumulating information about a sequence into some kind of set or dictionary. Eminently practical.
Your example (+ 2 3 4) only works because you know the number of arguments beforehand. Folds work on lists the size of which can vary.
fold/reduce is the general version of the "cdr-ing down a list" pattern. Every algorithm that processes each element of a sequence in order, computing some return value along the way, can be expressed with it. It's basically the functional version of the foreach loop.
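For instance, a quick Clojure sketch (summing is just an example): the explicit recursion

(defn sum-rec [xs]
  (if (empty? xs)
    0
    (+ (first xs) (sum-rec (rest xs)))))

collapses into a single reduce:

(defn sum-fold [xs]
  (reduce + 0 xs))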
Here's an example that nobody else mentioned yet.
By using a function with a small, well-defined interface like "fold", you can replace its implementation without breaking the programs that use it. You could, for example, make a distributed version that runs on thousands of PCs, so a sorting algorithm that used it would become a distributed sort, and so on. Your programs become more robust, simpler, and faster.
Your example is a trivial one: + already takes any number of arguments, runs quickly in little memory, and has already been written and debugged by whoever wrote your compiler. Those properties are not often true of algorithms I need to run.
I'm writing a program in Python and I need to find the derivative of a function (a function expressed as a string).
For example: x^2+3*x
Its derivative is: 2*x+3
Are there any scripts available, or is there something helpful you can tell me?
If you are limited to polynomials (which appears to be the case), there are basically three steps:
Parse the input string into a list of coefficients to x^n.
Convert that list of coefficients into a new list of coefficients, following the rules for differentiating a polynomial.
Take the list of coefficients for the derivative and create a nice string describing the derivative polynomial.
If you need to handle polynomials like a*x^15125 + x^2 + c, using a dict for the coefficients may make sense, but it requires a little more attention when iterating through the terms.
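For instance, a minimal sketch of steps 2 and 3 (the function names and dict representation are illustrative; parsing is omitted), with a polynomial stored as a dict mapping exponent to coefficient:

def differentiate(coeffs):
    # power rule: c*x^n -> (c*n)*x^(n-1); constant terms vanish
    return {n - 1: c * n for n, c in coeffs.items() if n != 0}

def to_string(coeffs):
    terms = ["%s*x^%s" % (c, n) for n, c in sorted(coeffs.items(), reverse=True)]
    return " + ".join(terms) if terms else "0"

poly = {2: 1, 1: 3}                    # x^2 + 3*x
print(to_string(differentiate(poly)))  # 2*x^1 + 3*x^0, i.e. 2*x + 3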
sympy does it well.
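A quick demonstration (note that SymPy expects ** rather than ^ for powers):

from sympy import diff, sympify

print(diff(sympify("x**2 + 3*x")))  # prints 2*x + 3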
You may find what you are looking for in the answers already provided. I, however, would like to give a short explanation on how to compute symbolic derivatives.
The business is based on operator overloading and the chain rule of derivatives. For instance, the derivative of v^n is n*v^(n-1)*dv/dx, right? So, if you have v=3*x and n=3, what would the derivative be? The answer: if f(x)=(3*x)^3, then the derivative is:
f'(x)=3*(3*x)^2*(d/dx(3*x))=3*(3*x)^2*(3)=3^4*x^2
The chain rule allows you to "chain" the operation: each individual derivative is simple, and you just "chain" the complexity. Another example: the derivative of u*v is v*du/dx+u*dv/dx, right? If you get a complicated function, you just chain it, say:
d/dx(x^3*sin(x))
u=x^3; v=sin(x)
du/dx=3*x^2; dv/dx=cos(x)
d/dx(u*v)=v*du/dx+u*dv/dx=3*x^2*sin(x)+x^3*cos(x)
As you can see, differentiation is only a chain of simple operations.
Now, operator overloading.
If you can write a parser (try Pyparsing), then you can ask it to evaluate both the function and its derivative! I've done this (using Flex/Bison) just for fun, and it is quite powerful. To give you the idea: the derivative is computed recursively by overloading the corresponding operator and recursively applying the chain rule, so the evaluation of "*" would correspond to u*v for the function value and u*der(v)+v*der(u) for the derivative value (try it in C++, it is also fun).
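Here is a minimal sketch of that idea in Python (the Dual class and its names are my own invention), handling just + and *:

class Dual:
    # carries a value and its derivative through a computation
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u*v)' = u*dv + v*du
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

x = Dual(2.0, 1.0)     # seed: dx/dx = 1
f = x * x * x + 3 * x  # f(x) = x^3 + 3*x
print(f.val, f.der)    # 14.0 15.0, since f(2) = 14 and f'(2) = 3*2^2 + 3 = 15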
So there you go, I know you don't mean to write your own parser - by all means use existing code (visit www.autodiff.org for automatic differentiation of Fortran and C/C++ code). But it is always interesting to know how this stuff works.
Cheers,
Juan
Better late than never?
I've always done symbolic differentiation in whatever language by working with a parse tree.
But I also recently became aware of another method using complex numbers.
The parse tree approach consists of translating the following tiny Lisp code into whatever language you like:
(defun diff (s x)
  (cond
    ((eq s x) 1)    ; d/dx of x itself is 1
    ((atom s) 0)    ; constants and other variables differentiate to 0
    ((or (eq (car s) '+) (eq (car s) '-))
     (list (car s)  ; the derivative distributes over + and -
           (diff (cadr s) x)
           (diff (caddr s) x)))
    ;; ... and so on for multiplication, division, and basic functions
    ))
and following it with an appropriate simplifier, so you get rid of additions of 0, multiplying by 1, etc.
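For example, with just the + and - branches shown above:

(diff '(+ x 3) 'x)  ; => (+ 1 0), which the simplifier then reduces to 1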
But the complex method, while completely numeric, has a certain magical quality. Instead of programming your computation F in double precision, do it in double precision complex.
Then, if you need the derivative of the computation with respect to variable X, set the imaginary part of X to a very small number h, like 1e-100.
Then do the calculation and get the result R.
Now real(R) is the result you would normally get, and imag(R)/h = dF/dX
to very high accuracy!
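Here's a small sketch in Python (F is just an example function):

def F(x):
    return x**3 + 3*x

h = 1e-100
X = complex(2.0, h)  # set the imaginary part of X to h
R = F(X)
print(R.real)        # 14.0 -> the ordinary value F(2)
print(R.imag / h)    # 15.0 -> dF/dX at 2, i.e. 3*2^2 + 3

One caveat: every operation in F must extend analytically to complex numbers, so things like abs() or branching on the value will break the trick.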
How does it work? Take the case of multiplying complex numbers:
(a+bi)(c+di) = ac + i(ad+bc) - bd
Now suppose the imaginary parts are all zero, except we want the derivative with respect to a.
We set b to a very small number h. Now what do we get?
(a+hi)(c) = ac + hci
So the real part of this is ac, as you would expect, and the imaginary part, divided by h, is c, which is the derivative of ac with respect to a.
The same sort of reasoning seems to apply to all the differentiation rules.
Symbolic Differentiation is an impressive introduction to the subject, at least for a non-specialist like me :) The code is written in C++, by the way.
Look up automatic differentiation. There are tools for Python. Also, this.
If you are thinking of writing the differentiation program from scratch, without using other libraries for help, then the algorithm/approach for computing the derivative of any algebraic equation that I described in my blog may be helpful.
You can try creating a class that represents a limit rigorously and then evaluating it for (f(x)-f(a))/(x-a) as x approaches a. That should give a pretty accurate value of the derivative.
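A crude numeric version of the same idea (the step size h here is an ad-hoc choice, not a rigorous limit):

def numeric_derivative(f, a, h=1e-6):
    return (f(a + h) - f(a)) / h

print(numeric_derivative(lambda x: x**2 + 3*x, 2.0))  # ~7.0, since f'(x) = 2*x + 3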
If you're using a string as input, you can split it into individual terms using the + or - characters as delimiters. Then apply the power rule to each term: say you have x^3, which by the power rule becomes 3*x^2; or a more complicated term like a/(x^3), i.e. a*x^(-3), where you can treat the other variables as constants, and the power rule gives -3*a*x^(-4), i.e. -3*a/(x^4). The power rule alone should be enough, though it requires some care in factoring each term first.
Unless you use an already-made library, this is quite complex, because you need to parse and handle functions and expressions.
Differentiation by itself is an easy task, since it's mechanical and can be done algorithmically, but you need a basic structure to store a function.