Algorithm to find derivative - math

I'm writing a program in Python, and I need to find the derivative of a function (a function expressed as a string).
For example: x^2+3*x
Its derivative is: 2*x+3
Are there any scripts available, or is there something helpful you can tell me?

If you are limited to polynomials (which appears to be the case), there are basically three steps:
1. Parse the input string into a list of coefficients to x^n.
2. Convert that list of coefficients into a new list of coefficients, following the rules for differentiating a polynomial.
3. Take the list of coefficients for the derivative and create a nice string describing the derivative polynomial.
If you need to handle sparse polynomials like a*x^15125 + x^2 + c, using a dict for the coefficients may make sense, but it requires a little more attention when iterating through it.
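A minimal sketch of those three steps, assuming a single variable x and integer coefficients; the naive parser here only handles inputs shaped like the example (x^2+3*x):

import re

def parse_poly(s):
    # Step 1: parse a string like "x^2+3*x" into a dict {exponent: coefficient}.
    coeffs = {}
    s = s.replace(" ", "").replace("-", "+-")
    for term in filter(None, s.split("+")):
        m = re.fullmatch(r"(-?\d*)\*?(x(?:\^(\d+))?)?", term)
        coef, var, exp = m.group(1), m.group(2), m.group(3)
        c = -1 if coef == "-" else 1 if coef == "" else int(coef)
        n = 0 if var is None else int(exp) if exp else 1
        coeffs[n] = coeffs.get(n, 0) + c
    return coeffs

def differentiate(coeffs):
    # Step 2: power rule, d/dx(c*x^n) = c*n*x^(n-1).
    return {n - 1: c * n for n, c in coeffs.items() if n != 0}

def format_poly(coeffs):
    # Step 3: turn the coefficient dict back into a readable string.
    terms = []
    for n in sorted(coeffs, reverse=True):
        c = coeffs[n]
        if c:
            terms.append(str(c) if n == 0 else f"{c}*x" if n == 1 else f"{c}*x^{n}")
    return "+".join(terms).replace("+-", "-") or "0"

print(format_poly(differentiate(parse_poly("x^2+3*x"))))  # prints 2*x+3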

SymPy does it well.
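For example (note that SymPy spells powers with ** rather than ^, so the input string is adjusted accordingly):

import sympy as sp

x = sp.symbols("x")
expr = sp.sympify("x**2 + 3*x")  # build the expression from a string
print(sp.diff(expr, x))          # prints 2*x + 3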

You may find what you are looking for in the answers already provided. I, however, would like to give a short explanation of how to compute symbolic derivatives.
The whole business is based on operator overloading and the chain rule. For instance, the derivative of v^n is n*v^(n-1)*dv/dx, right? So, if you have v = 3*x and n = 3, what would the derivative be? The answer: if f(x) = (3*x)^3, then the derivative is:
f'(x) = 3*(3*x)^2 * d/dx(3*x) = 3*(3*x)^2 * 3 = 3^4*x^2
The chain rule allows you to "chain" the operations: each individual derivative is simple, and you just "chain" the complexity. Another example: the derivative of u*v is v*du/dx + u*dv/dx, right? If you get a complicated function, you just chain it, say:
d/dx(x^3*sin(x))
u=x^3; v=sin(x)
du/dx=3*x^2; dv/dx=cos(x)
d/dx(x^3*sin(x)) = v*du/dx + u*dv/dx = 3*x^2*sin(x) + x^3*cos(x)
As you can see, differentiation is only a chain of simple operations.
Now, operator overloading.
If you can write a parser (try Pyparsing), then you can have it evaluate both the function and its derivative! I've done this (using Flex/Bison), just for fun, and it is quite powerful. For you to get the idea: the derivative is computed recursively by overloading the corresponding operator and recursively applying the chain rule, so the evaluation of "*" would correspond to u*v for the function value and u*der(v)+v*der(u) for the derivative value (try it in C++; it is also fun).
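To make the idea concrete, here is a minimal Python sketch of that scheme: each value carries its derivative along, and the overloaded operators apply the product and power rules. The class name Dual and the small set of operators covered are my own choices, just for illustration:

class Dual:
    # A (value, derivative) pair; overloaded operators propagate both.
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        # product rule: (u*v)' = u'*v + u*v'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

    def __pow__(self, n):
        # power rule plus chain rule: (u^n)' = n*u^(n-1)*u'
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.der)

x = Dual(2.0, 1.0)      # seed dx/dx = 1
f = (3 * x) ** 3        # f(x) = (3*x)^3
print(f.val, f.der)     # 216.0 324.0 (f(2) and f'(2), since f'(x) = 81*x^2)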
So there you go, I know you don't mean to write your own parser - by all means use existing code (visit www.autodiff.org for automatic differentiation of Fortran and C/C++ code). But it is always interesting to know how this stuff works.
Cheers,
Juan

Better late than never?
I've always done symbolic differentiation in whatever language by working with a parse tree.
But I also recently became aware of another method using complex numbers.
The parse tree approach consists of translating the following tiny Lisp code into whatever language you like:
(defun diff (s x)
  (cond ((eq s x) 1)
        ((atom s) 0)
        ((or (eq (car s) '+) (eq (car s) '-))
         (list (car s)
               (diff (cadr s) x)
               (diff (caddr s) x)))
        ; ... and so on for multiplication, division, and basic functions
        ))
and following it with an appropriate simplifier, so you get rid of additions of 0, multiplications by 1, etc.
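For instance, a direct Python translation of that sketch, using nested tuples like ('+', u, v) as the parse tree:

def diff(s, x):
    if s == x:
        return 1                      # d/dx of x is 1
    if not isinstance(s, tuple):
        return 0                      # constants and other variables
    op, u, v = s
    if op in ('+', '-'):
        return (op, diff(u, x), diff(v, x))
    if op == '*':
        return ('+', ('*', diff(u, x), v), ('*', u, diff(v, x)))
    # ... and so on for division and basic functions
    raise ValueError(f"unhandled operator: {op}")

print(diff(('*', 'x', 'x'), 'x'))     # ('+', ('*', 1, 'x'), ('*', 'x', 1))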
But the complex method, while completely numeric, has a certain magical quality. Instead of programming your computation F in double precision, do it in double precision complex.
Then, if you need the derivative of the computation with respect to variable X, set the imaginary part of X to a very small number h, like 1e-100.
Then do the calculation and get the result R.
Now real(R) is the result you would normally get, and imag(R)/h = dF/dX
to very high accuracy!
How does it work? Take the case of multiplying complex numbers:
(a+bi)(c+di) = (ac - bd) + (ad + bc)i
Now suppose the imaginary parts are all zero, except we want the derivative with respect to a.
We set b to a very small number h. Now what do we get?
(a+hi)(c) = ac + hci
So the real part of this is ac, as you would expect, and the imaginary part, divided by h, is c, which is the derivative of ac with respect to a.
The same sort of reasoning seems to apply to all the differentiation rules.
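A quick way to convince yourself, sketched in Python with its built-in complex type (the test function and the point x = 1.5 are arbitrary choices):

import cmath

def f(x):
    return x**3 * cmath.sin(x)

h = 1e-100
r = f(1.5 + h * 1j)
print(r.real)        # the ordinary value f(1.5)
print(r.imag / h)    # f'(1.5) = 3*x^2*sin(x) + x^3*cos(x), to machine precision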

Symbolic Differentiation is an impressive introduction to the subject, at least for a non-specialist like me :) The code is written in C++, btw.

Look up automatic differentiation. There are tools for Python.

If you are thinking of writing the differentiation program from scratch, without using other libraries to help, then the algorithm/approach for computing the derivative of any algebraic expression that I described in my blog will be helpful.

You can try creating a class that represents a limit rigorously, and then evaluate it for (f(x)-f(a))/(x-a) as x approaches a. That should give a pretty accurate value of the limit.
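A crude numerical sketch of that idea: evaluate the difference quotient for x very close to a (here a single small step h, an arbitrary choice, stands in for the limiting process):

def difference_quotient(f, a, h=1e-6):
    # (f(x) - f(a)) / (x - a) evaluated at x = a + h
    return (f(a + h) - f(a)) / h

print(difference_quotient(lambda x: x**2 + 3*x, 2.0))  # ≈ 7.0, since f'(x) = 2*x + 3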

If you're using a string as input, you can split it into individual terms using the + or - characters as delimiters. Now apply the power rule to each term: say you have x^3, the power rule gives 3x^2; or for a more complicated term like a/(x^3), i.e. a*(x^-3), treat the other variables as constants, and differentiating x^-3 gives -3a/(x^4). The power rule alone should be enough, but it will require extensive factoring.

Unless you use an already-made library, this is quite complex, because you need to parse and handle functions and expressions.
Differentiation by itself is an easy task, since it is mechanical and can be done algorithmically, but you need a basic structure to store a function.

Related

Power with integer exponents in Isabelle

Here is my definition of power for integer exponents following this mailing-list post:
definition
"ipow x n = (if n < 0 then (1 / x) ^ n else x ^ n)"
notation ipow (infixr "^⇩i" 80)
Is there a better way to define it?
Is there an existing theory in Isabelle that already includes it so that I can reuse its results?
Context
I am dealing with complex exponentials; for instance, after proving a theorem about them, I realized I need to work with integers n, not just naturals, and this involves using powers to take the n out of the exponential.
I don't think something like this exists in the library. However, you have a typo in your definition. I believe you want something like
definition
"ipow x n = (if n < 0 then (1 / x) ^ nat (-n) else x ^ nat n)"
Apart from that, it is fine. You could write inverse x ^ nat (-n) instead, but it should make little difference in practice. I would suggest the name int_power, since the corresponding operation with natural exponents is called power.
Personally, I would avoid introducing a new constant like this, because in order to use it productively you also need an extensive collection of theorems around it, which means quite a bit of (tedious) work. Do you really need to talk about integers here? I find that one can often get around it in practice (in particular, note that the exponentials in question are periodic anyway).
It may be useful to introduce such a power operator nevertheless; all I'm saying is you should be aware of the trade-off.
Side note: an often-overlooked function in Isabelle that is useful when talking about exponentials like this is cis (as in ‘cosine + i · sine’). cis x is equivalent to exp(ix) for real x.

Convergence and vectors theories

Is there a convergence theory in Isabelle/HOL? I need to define ∥x(t)∥ ⟶ 0 as t ⟶ ∞.
Also, I'm looking for a theory of vectors; I found a matrix theory, but I couldn't find one for vectors. Does such a theory exist in Isabelle/HOL?
Cheers.
Convergence etc. are expressed with filters in Isabelle (see the corresponding documentation).
In your case, that would be something like
filterlim (λt. norm (x t)) (nhds 0) at_top
or, using the tendsto abbreviation,
((λt. norm (x t)) ⤏ 0) at_top
where ⤏ is the Isabelle symbol \<longlongrightarrow>, which can be input using the abbreviation --->.
As a side note, I am wondering why you are writing it that way in the first place, seeing as it is equivalent to
filterlim x (nhds 0) at_top
or, with the tendsto syntax:
(x ⤏ 0) at_top
Reasoning with these filters can be tricky at first, but it has the advantage of providing a unified framework for limits and other topological concepts, and once you get the hang of it, it is very elegant.
As for vectors, just import ~~/src/HOL/Analysis/Analysis. That should have everything you need. Ideally, build the HOL-Analysis session image by starting Isabelle/jEdit with isabelle jedit -l HOL-Analysis. Then you won't have to process all of Isabelle's analysis library every time you start the system.
I assume that by ‘vectors’ you mean concrete finite-dimensional real vector spaces like ℝn. This is provided by ~~/src/HOL/Analysis/Finite_Cartesian_Product.thy, which is part of HOL-Analysis. This provides the vec type, which takes two parameters: the component type (probably real in your case) and the index type, which specifies the dimension of the vector space.
There is also a pre-defined type n for every positive integer n, so that you can write e.g. (real, 3) vec for the vector space ℝ³. There is also type syntax so that you can write 'a ^ 'n for ('a, 'n) vec.

Assumptions in Mathematica's NullSpace Command for Symbolic Matrices

When executing Mathematica's NullSpace command on a symbolic matrix, Mathematica makes some assumptions about the variables and I would like to know what they are.
For example,
In[1]:= NullSpace[{{a, b}, {c, d}}]
Out[1]= {}
but the unstated assumption is that
a d != b c.
How can I determine what assumptions the NullSpace command uses?
The underlying assumptions, so to speak, are enforced by internal uses of PossibleZeroQ. If that function cannot deem an expression to be zero then it will be regarded as nonzero, hence eligible for use as a pivot in row reduction (which is generally what is used for symbolic NullSpace).
---edit---
The question was raised regarding what might be visible in the zero testing used in symbolic linear algebra. By default the zero-testing calls go through internal routines; PossibleZeroQ was later built on top of those.
There is always a question in Mathematica kernel code development of what should go through the main evaluator loop and what (e.g. for purposes of speed) should short circuit. Only the former is readily traced.
One can influence the process in symbolic linear algebra by specifying a non-default zero test. Could be e.g.
myTest[ee_] := (Print[zerotesting[ee]]; PossibleZeroQ[ee])
and then use ZeroTest -> myTest in NullSpace.
---end edit---
I found this: if you expand your matrix by one column, the assumption shows up:
NullSpace[{{a, b, 1}, {c, d, 1}}]
{{-((-b+d)/(-b c+a d)),-((a-c)/(-b c+a d)),1}}
Perhaps useful in some situations.

How do I efficiently find the maximum value in an array containing values of a smooth function?

I have a function that takes a floating-point number and returns a floating-point number. It can be assumed that if you were to graph the output of this function, it would be 'n'-shaped, i.e. there would be a single maximum point and no other points on the function with a zero slope. We also know that the input value that yields this maximum output lies between two known points, perhaps 0.0 and 1.0.
I need to efficiently find the input value that yields the maximum output value to some degree of approximation, without doing an exhaustive search.
I'm looking for something similar to Newton's Method which finds the roots of a function, but since my function is opaque I can't get its derivative.
I would like to down-thumb all the other answers so far, for various reasons, but I won't.
An excellent and efficient method for minimizing (or maximizing) smooth functions when derivatives are not available is successive parabolic interpolation. It is common to write the algorithm so it temporarily switches to golden-section search when parabolic interpolation does not progress as fast as golden section would; that combination is Brent's minimizer.
I wrote such an algorithm in C++. Any offers?
UPDATE: There is a C version of the Brent minimizer in GSL. The archives are here: ftp://ftp.club.cc.cmu.edu/gnu/gsl/ Note that it will be covered by some flavor of GNU "copyleft."
As I write this, the latest-and-greatest appears to be gsl-1.14.tar.gz. The minimizer is located in the file gsl-1.14/min/brent.c. It appears to have termination criteria similar to what I implemented. I have not studied how it decides to switch to golden section, but for the OP, that is probably moot.
UPDATE 2: I googled up a public-domain Java version, translated from FORTRAN. I cannot vouch for its quality. http://www1.fpl.fs.fed.us/Fmin.java I notice that the hard-coded machine epsilon ("machine precision" in the comments) is half the value for a typical PC today. Change the value of eps to 2.22045e-16.
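For reference, here is a minimal golden-section search in Python for a maximum on [a, b]; this is only the fallback half of the strategy described above, not Brent's full algorithm with its parabolic-interpolation steps:

def golden_max(f, a, b, tol=1e-8):
    # Shrink [a, b] around the maximum of a unimodal f using the golden ratio.
    invphi = (5 ** 0.5 - 1) / 2              # 1/phi ≈ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) > f(d):                      # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_max(lambda x: -x * (x - 1.0), 0.0, 1.0))   # ≈ 0.5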
Edit 2: The method described in Jive Dadson's answer is a better way to go about this. I'm leaving my answer up since it's easier to implement, if speed isn't too much of an issue.
Use a form of binary search, combined with a numeric derivative approximation.
Given the interval [a, b], let x = (a + b) / 2.
Let epsilon be something very small.
Is f(x + epsilon) - f(x) positive? If yes, the function is still growing at x, so recursively search the interval [x, b].
Otherwise, search the interval [a, x].
There might be a problem if the max lies between x and x + epsilon, but you might give this a try.
Edit: The advantage of this approach is that it exploits the known properties of the function in question. That is, I assumed that by "n"-shaped you meant increasing-max-decreasing. Here's some Python code I wrote to test the algorithm:
def f(x):
    return -x * (x - 1.0)

def findMax(function, a, b, maxSlope):
    x = (a + b) / 2.0
    e = 0.0001
    slope = (function(x + e) - function(x)) / e
    if abs(slope) < maxSlope:
        return x
    if slope > 0:
        return findMax(function, x, b, maxSlope)
    else:
        return findMax(function, a, x, maxSlope)
Typing findMax(f, 0, 3, 0.01) should return 0.504, as desired.
For optimizing a concave function, which is the type of function you are talking about, without evaluating the derivative I would use the secant method.
Given the two initial values x[0]=0.0 and x[1]=1.0 I would proceed to compute the next approximations as:
def next_x(x, xprev):
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))
and thus compute x[2], x[3], ... until the change in x becomes small enough.
Edit: As Jive explains, this solution is for root finding which is not the question posed. For optimization the proper solution is the Brent minimizer as explained in his answer.
The Levenberg-Marquardt algorithm is an optimizer in the spirit of Newton's method. It has a C/C++ implementation, levmar, that doesn't require you to define the derivative function; instead it evaluates the objective function in the current neighborhood to move toward the maximum.
BTW: the website appears to have been updated since I last visited it; I hope it's the same one I remember. Apparently it now also supports other languages.
Given that it's only a function of a single variable and has one extremum in the interval, you don't really need Newton's method. Some sort of line search algorithm should suffice. This wikipedia article is actually not a bad starting point, if short on details. Note in particular that you could just use the method described under "direct search", starting with the end points of your interval as your two points.
I'm not sure if you'd consider that an "exhaustive search", but I think it should actually be pretty fast for this sort of function (that is, a continuous, smooth function with only one local extremum in the given interval).
You could reduce it to a simple linear fit on the deltas, finding the place where it crosses the x-axis. A linear fit can be done very quickly.
Or just take 3 points (left/top/right) and fit the parabola; a sketch follows at the end of this answer.
It depends mostly on the nature of the underlying relation between x and y, I think.
Edit: this is for the case where you have an array of values, as the question's title states. When you have a function, use Newton-Raphson.
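A sketch of the three-point parabola idea: fit y = a*x^2 + b*x + c through the (left, top, right) samples and return the x of the vertex (the sample points in the demo are arbitrary):

def parabola_vertex_x(x1, y1, x2, y2, x3, y3):
    # Coefficients of the interpolating parabola y = a*x^2 + b*x + c.
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)              # x-coordinate of the vertex

print(parabola_vertex_x(0.0, 0.0, 0.5, 0.25, 1.0, 0.0))  # 0.5, the max of -x*(x-1)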

Derivative Calculator

I'm interested in building a derivative calculator. I've racked my brains over the problem, but I haven't found the right solution. Do you have a hint on how to start? Thanks.
Sorry, I should clarify: I want symbolic differentiation.
Let's say you have the function f(x) = x^3 + 2x^2 + x
I want to display the derivative, in this case f'(x) = 3x^2 + 4x + 1
I'd like to implement it in objective-c for the iPhone.
I assume that you're trying to find the exact derivative of a function. (Symbolic differentiation)
You need to parse the mathematical expression and store the individual operations in the function in a tree structure.
For example, x + sin²(x) would be stored as a + operation, applied to the expression x and a ^ (exponentiation) operation of sin(x) and 2.
You can then recursively differentiate the tree by applying the rules of differentiation to each node. For example, a + node would become u' + v', and a * node would become u*v' + v*u'.
You need to remember your calculus. Basically you need two things: a table of derivatives of basic functions, and rules for how to differentiate compound expressions (like d(f + g)/dx = df/dx + dg/dx). Then take an expression parser and recursively go over the tree. (http://www.sosmath.com/tables/derivative/derivative.html)
Parse your string into an S-expression (even though this is usually done in a Lisp context, you can do an equivalent thing in pretty much any language), easiest with lex/yacc or equivalent, then write a recursive "derive" function. In an OCaml-ish dialect, something like this:
let rec derive var = function
  | Const(_) -> Const(0)
  | Var(x) -> if x = var then Const(1) else Deriv(Var(x), Var(var))
  | Add(x, y) -> Add(derive var x, derive var y)
  | Mul(a, b) -> Add(Mul(a, derive var b), Mul(derive var a, b))
  ...
(If you don't know OCaml syntax: derive is a two-parameter recursive function, with the first parameter the variable name and the second being matched in the successive lines; for example, if this parameter is a structure of the form Add(x, y), return the Add structure built from two fields, with the values of derived x and derived y; and similarly for the other cases of what derive might receive as a parameter; _ in the first pattern means "match anything".)
After this you might want some clean-up function to tidy up the resulting expression (reducing fractions etc.), but this gets complicated and is not necessary for differentiation itself (i.e. what you get without it is still a correct answer).
When your transformation of the s-exp is done, convert the resulting s-exp back into string form, again with a recursive function.
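That last step can be as small as this; a sketch using nested tuples such as ('+', ('*', 2, 'x'), 1) as the s-expression, parenthesizing everything for simplicity:

def to_string(t):
    # Recursively print a tuple tree back as an infix string.
    if not isinstance(t, tuple):
        return str(t)
    op, a, b = t
    return f"({to_string(a)} {op} {to_string(b)})"

print(to_string(('+', ('*', 2, 'x'), 1)))   # ((2 * x) + 1)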
SLaks already described the procedure for symbolic differentiation. I'd just like to add a few things:
Symbolic math is mostly parsing and tree transformations. ANTLR is a great tool for both. I'd suggest starting with the great book Language Implementation Patterns.
There are open-source programs that do what you want (e.g. Maxima). Dissecting such a program might be interesting, too (but it's probably easier to understand what's going on if you try to write it yourself first).
Probably you also want some kind of simplification of the output. For example, just applying the basic derivative rules to the expression 2 * x would yield 2 + 0*x. This can also be done by tree processing (e.g. by transforming 0 * [...] to 0 and [...] + 0 to [...], and so on); see the sketch below.
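A sketch of such a clean-up pass, again on nested-tuple expressions; the rewrite rules shown are just the ones mentioned above:

def simplify(t):
    # Bottom-up rewriting: 0*e -> 0, 1*e -> e, e+0 -> e.
    if not isinstance(t, tuple):
        return t
    op, a, b = t
    a, b = simplify(a), simplify(b)
    if op == '*':
        if a == 0 or b == 0:
            return 0
        if a == 1:
            return b
        if b == 1:
            return a
    if op == '+':
        if a == 0:
            return b
        if b == 0:
            return a
    return (op, a, b)

print(simplify(('+', ('*', 2, 1), ('*', 0, 'x'))))   # 2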
For what kinds of operations do you want to compute a derivative? If you allow trigonometric functions like sine, cosine, and tangent, these are probably best stored in a table, while others like polynomials may be much easier to handle. Are you allowing functions to have multiple inputs, e.g. f(x, y) rather than just f(x)?
Polynomials in a single variable would be my suggested starting point; then consider adding trigonometric, logarithmic, exponential, and other advanced functions, whose derivatives may be harder to compute.
Symbolic differentiation over common functions (+, -, *, /, ^, sin, cos, etc.) ignoring regions where the function or its derivative is undefined is easy. What's difficult, perhaps counterintuitively, is simplifying the result afterward.
To do the differentiation, store the operations in a tree (or even just in Polish notation) and make a table of the derivative of each of the elementary operations. Then repeatedly apply the chain rule and the elementary derivatives, together with setting the derivative of a constant to 0. This is fast and easy to implement.
