Can't get sympy to reduce a differentiated expression - jupyter-notebook

Consider the following code, which is best viewed in a Jupyter notebook because of the mathematical output that the variables a, b, c below contain:
# cell1
import sympy as sp
from sympy import init_printing
init_printing()
u, x = sp.symbols('u x')
f = sp.Function('f')
# cell2: building my function
a = f(x)**2
a
# cell3: doing some stuff to the function
b = a.diff(x).subs([(x,u**2)])
b
# cell4: now I have decided what f should be
c = b.replace(f, lambda x: x**2 + x)
c
But I can't get c to actually evaluate symbolically, so that I get an expression that doesn't contain a derivative. I have tried everything (simplify, cse, etc.); nothing seems to work.
Of course, I could have specified a concrete function at the beginning, and then I wouldn't have had this problem. But I want to keep everything up to b abstract and switch functions only at that stage, because for mathematical reasons it is important for me to see what expression b looks like before a concrete function is in place, and only afterwards investigate what happens when I plug in different concrete functions.

You can use the .doit() method to trigger the evaluation of derivatives:
In [25]: c
Out[25]: 2⋅(u⁴ + u²)⋅(d/dx (x² + x))│x=u²
In [26]: c.doit()
Out[26]: 2⋅(2⋅u² + 1)⋅(u⁴ + u²)
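As a minimal sketch of the workflow described in the question (same symbols as above; the sin variant is just an illustrative alternative function), b can be kept abstract and different concrete functions plugged in afterwards, with .doit() evaluating the leftover derivative:
import sympy as sp
u, x = sp.symbols('u x')
f = sp.Function('f')
# abstract expression, differentiated and substituted before f is chosen
b = (f(x)**2).diff(x).subs([(x, u**2)])
# plug in different concrete functions, then evaluate the remaining derivative
c1 = b.replace(f, lambda arg: arg**2 + arg).doit()  # equals 2*(2*u**2 + 1)*(u**4 + u**2)
c2 = b.replace(f, lambda arg: sp.sin(arg)).doit()   # equals 2*sin(u**2)*cos(u**2)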

Related

How do I evaluate an expression with trigonometric functions & also complex numbers in Sagemath?

I wanted to verify that the n-th roots of unity actually are n-th roots of unity, i.e. that (root)^n = 1.
I was trying to use sagemath to do this.
For example, for ordinary expressions Sage seems to evaluate things fine:
sage: x = var('x')
sage: f(x) = (x+2)^3
sage: f(5)
343
But I am unable to do this
sage: a = var('a')
sage: b = var('b')
sage: f(a, b) = (a + i*b)^3
sage: f(cos((2*pi)/3) , sin((2*pi)/3))
(1/2*I*sqrt(3) - 1/2)^3
How do I make Sage raise it to the power 3 and evaluate it?
A sage expression has several methods to manipulate it, including expanding, factoring and simplifying:
e = f(cos((2*pi)/3) , sin((2*pi)/3))
e.expand()
e.simplify()
e.full_simplify()
e.factor()
You can see the list of all available methods by typing the name of the variable, followed by a dot, and then pressing Tab: e.<tab>.
In your case, it would appear e.full_simplify() should do the trick.
Relevant documentation:
sage doc: Symbolic Expressions;
sage doc: Tutorial for Symbolics and Plotting
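If you just want to double-check the result outside of Sage, here is a small sketch of the same verification using sympy in plain Python (a stand-in for illustration, not the Sage API):
import sympy as sp
n = 3
# the primitive n-th root of unity, written as cos + i*sin as in the question
root = sp.cos(2*sp.pi/n) + sp.I*sp.sin(2*sp.pi/n)
print(sp.simplify(root**n))  # 1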

Plot of function, DomainError. Exponentiation yielding a complex result requires a complex argument

Background
I read here that Newton's method fails on the function x^(1/3) when its initial guess is 1. I am trying to test this in a Julia Jupyter notebook.
I want to plot the function x^(1/3),
and then I want to run this code:
using ForwardDiff, Roots
f = x -> x^(1/3)
D(f) = x -> ForwardDiff.derivative(f, float(x))
x = find_zero((f, D(f)), 1, Roots.Newton(), verbose=true)
Problem:
How do I plot the function x^(1/3) over a range, e.g. (-1, 1)?
I tried
f = x->x^(1/3)
plot(f,-1,1)
I got a DomainError (the one in the question title).
I changed code to
f = x->(x+0im)^(1/3)
plot(f,-1,1)
I got a plot, but not the one I want. I want my plot to look like the plot of x^(1/3) on Google; however, I cannot get more than half of it.
That's because x^(1/3) does not always return a real result, let alone the real cube root of x. For negative numbers, exponentiation with non-integer powers (like 1/3 or 1.254) mathematically yields a complex result. Because of type-stability requirements in Julia, applying this operation to a negative Real raises a DomainError instead of silently returning a Complex. This behavior is also noted in the Frequently Asked Questions section of the Julia manual.
julia> (-1)^(1/3)
ERROR: DomainError with -1.0:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
julia> Complex(-1)^(1/3)
0.5 + 0.8660254037844386im
Note that the behavior of returning a complex number for exponentiation of negative values is not really different from, say, MATLAB's behavior:
>> (-1)^(1/3)
ans =
0.5000 + 0.8660i
What you want, however, is to plot the real cube root.
You can go with
plot(x -> x < 0 ? -(-x)^(1//3) : x^(1//3), -1, 1)
to enforce real cube root or use the built-in cbrt function for that instead.
plot(cbrt, -1, 1)
It also has an alias ∛.
plot(∛, -1, 1)
f(x) = x^(1/3) is an odd function, so you only need to evaluate it on [0, 1].
The plot on [-1, 0] is then deduced by symmetry, as follows.
The code is below
import numpy as np
import matplotlib.pyplot as plt
# Function f
f = lambda x: x**(1/3)
fig, ax = plt.subplots()
x1 = np.linspace(0, 1, num = 100)
x2 = np.linspace(-1, 0, num = 100)
ax.plot(x1, f(x1))           # f on [0, 1]
ax.plot(x2, -f(x1[::-1]))    # odd reflection of the same values onto [-1, 0]
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.show()
Plot
That Google plot makes no sense to me. For x > 0 it's ok, but for negative values of x the correct result is complex, and the Google plot appears to be showing the negative of the absolute value, which is strange.
The output from MATLAB, which is less fussy about types than Julia, does not agree with your plot: positive x values give a real-valued answer, while negative x give a complex-valued answer. The reason Julia errors for negative inputs is that it is very concerned with type stability: having the output type of a function depend on the input value would cause a type instability, which harms performance. This is less of a concern for MATLAB, Python, etc.
If you want a plot similar to the one above in Julia, you can define your function like this:
f = x -> sign(x) * abs(complex(x)^(1/3))
Edit: Actually, a better and faster version is
f = x -> sign(x) * abs(x)^(1/3)
Yeah, it looks awkward, but that's because you want a really strange plot, which imho makes no sense for the function x^(1/3).

Julia: adding anonymous functions together

If I define some anonymous functions a(x) and b(x) as
a = x -> x^2
b = x -> 2x
it would be helpful for recursive problems to add them together, say over the duration of some loop:
for i=1:5
a = x -> a(x) + b(x)
end
where the goal would be to have this represented internally each loop iteration as
a = x -> x^2 + 2x
a = x -> x^2 + 2x + x^2 + 2x
a = x -> x^2 + 2x + x^2 + 2x + x^2 + 2x
...
But this fails and returns an error. I'm assuming it's because calling the new a(x) is interpreted as something like a(2) = x^2 + x^2 + ... + x^2 + 2x:
julia> a(2)
ERROR: StackOverflowError:
in (::##35#36)(::Int64) at ./REPL[115]:0
in (::##35#36)(::Int64) at ./REPL[115]:1 (repeats 26666 times)
Is there any way around this?
You can do exactly what you're looking for using the let keyword:
a = x -> x^2
b = x -> 2x
for i=1:5
a = let a = a; x -> a(x) + b(x); end
end
a(2) # returns 24
Explanation
The let keyword allows you to create a block with local scope, and return the last statement in the block back to its caller scope. (contrast that with the begin keyword for instance, which does not introduce new scope).
If you pass a sequence of "assignments" to the let keyword, these become variables local to the block (allowing you, therefore, to re-use variable names that already exist in your workspace). The declaration let a = a is perfectly valid and means "create a local variable a which is initialised from the a variable of the outer scope" --- though if we wanted to be really clear, we could have written it like this instead:
for i=1:5
a = let a_old = a
x -> a_old(x) + b(x);
end
end
then again, if you were willing to use an a_old variable, you could have just done this instead:
for i=1:5; a_old = a; a = x-> a_old(x) + b(x); end
let is a very useful keyword: it's extremely handy for creating on-the-spot closures; in fact, this is exactly what we did here: we have returned a closure, where the "local variable a" essentially became a closed variable.
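As an aside (in Python rather than Julia, purely for comparison; the names are illustrative), the same capture-the-current-value trick looks like this:
a = lambda x: x**2
b = lambda x: 2*x
for i in range(5):
    # bind the current a to a_old instead of referring to the rebound name a
    a = (lambda a_old: (lambda x: a_old(x) + b(x)))(a)
print(a(2))  # 24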
PS. Since MATLAB was mentioned: what you're doing when you evaluate a = @(x) a(x) + b(x) in MATLAB is essentially creating a closure. In MATLAB you can inspect all the closed variables (i.e. the 'workspace' of the closure) using the functions command.
PPS. The Dr Livingstone, I presume?
Using Polynomials package could be a way. This would go:
julia> using Polynomials # install with Pkg.add("Polynomials")
julia> x = Poly([0,1])
Poly(x)
julia> a = x^2
Poly(x^2)
julia> b = 2x
Poly(2*x)
julia> a = a+b
Poly(2*x + x^2)
julia> a(2.0)
8.0
The reason this works is that the behavior you want is essentially symbolic manipulation of functions. Julia does not work this way (it compiles functions to native code rather than manipulating them as expressions), but it is flexible. If fancier functions than polynomials are required, maybe a symbolic math package would help (there is SymPy, but I haven't used it).
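Along those lines, here is a small sketch of the symbolic route using Python's sympy (an illustration of the suggestion above, not code from the original answer):
import sympy as sp
x = sp.symbols('x')
a = x**2
b = 2*x
# accumulate symbolically, mirroring the loop in the question
for i in range(5):
    a = a + b
print(a)             # -> x**2 + 10*x
print(a.subs(x, 2))  # 24
# lambdify turns the expression back into an ordinary callable if needed
f = sp.lambdify(x, a)
print(f(2))          # 24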
This:
a = x -> a(x) + b(x)
is a recursive call with no stopping condition. It has nothing to do with Julia. As soon as you define this, the previous definition (x^2) is overridden and has nothing to do with the stack or your result. It doesn't exist anymore. What you're trying to do is:
a(2) = a(2)+2*2 = (a(2)+2*2)+2*2 = ((a(2)+2*2)+2*2)+2*2 = ...
etc. The 2*2 will not even be substituted; I just wrote it to be clear. You probably want to define
c = x -> a(x) + b(x)
EDIT
I see now that, coming from MATLAB, you're expecting the syntax to mean something else. What you wrote is, in nearly all languages, a recursive call, which you do not want. What you do want is something like:
concatFuncs = (f1, f2) -> (x -> f1(x) + f2(x))
This piece of code takes any two functions accepting an x and puts a + between the resulting calls. It will work with anything that '+' works with. So:
summed = concatFuncs(a,b)
is the function you need.

Interpretation of logn^logn?

How do I interpret log n^log n? One way is to take it as log(n^(log n)) = (log n)·(log n); the other is (log n)^(log n). The two would give different answers.
For example, taking base 2 and n = 1024, in the first case we get 10*10 as the answer, and in the second case we get 10^10. Or am I doing something wrong?
From a programmer's viewpoint, a good way to better understand a function is to plot it at different parts of its domain.
But what is the domain of f(x) := ln(x)^ln(x)? Well, given that the exponent ln(x) is not an integer in general, the base ln(x) cannot be negative. Since ln(x) is negative for 0 < x < 1 and not even defined for x <= 0, x cannot be smaller than 1.
But what about x = 1? Given that ln(1) = 0, we would get 0^0, which is not defined either. So, let's plot f(x) for x between 1.000001 and 1.1.
The plot reveals that there would be no harm in extending the definition of f(x) at 1 in this way (let me use pseudocode here):
f(x) := if x = 1 then 1 else ln(x)^ln(x)
Now, let's see what happens for larger values of x. Here is a plot between 1 and 10:
This plot is also interesting because it exposes a singular behavior between 1 and 3, so let's plot that part of the domain to see it better:
There are a couple of questions that one could ask by looking at this plot. For instance, what is the value of x such that f(x)=1? Mm... this value is visibly between 2.7 and 2.8 (much closer to 2.7). And what number do we know that is a little bit larger than 2.7? This number should be related to the ln function, right? Well, ln is logarithm in base e and the number e is something like 2.71828182845904.... So, it looks like a good candidate, doesn't it? Let's see:
f(e) = ln(e)^ln(e) = 1^1 = 1!
So, yes, the answer to our question is e.
Another interesting value of x is the one where the curve has a minimum, which lies somewhere between 1.4 and 1.5. But since this answer is getting too long, I will stop here. Of course, you can keep plotting and answering your own questions as you happen to encounter them. And remember, you can use iterative numeric algorithms to find values of x or f(x) that, for whatever reason, appear interesting to you.
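If you want to reproduce the plots described above, a short matplotlib sketch will do (the ranges are taken from the text; the styling is arbitrary):
import numpy as np
import matplotlib.pyplot as plt
# f(x) = ln(x)^ln(x), plotted on the two ranges discussed above
f = lambda x: np.log(x) ** np.log(x)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
x1 = np.linspace(1.000001, 1.1, 500)
axes[0].plot(x1, f(x1))
axes[0].set_title('ln(x)^ln(x) near x = 1')
x2 = np.linspace(1.000001, 10, 500)
axes[1].plot(x2, f(x2))
axes[1].set_title('ln(x)^ln(x) on (1, 10]')
plt.show()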
Because log(n^log n)=(log n)^2, I would assume that log n^log n should be interpreted as (log n)^(log n). Otherwise, there's no point in the exponentiation. But whoever wrote that down for you should have clarified.
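As a quick numeric sanity check of the two readings (in Python, with the same numbers as in the question):
import math
n = 1024
lg = math.log2(n)        # 10
print(math.log2(n**lg))  # 100.0 -> log(n^(log n)) = (log n)^2
print(lg**lg)            # 10000000000.0 -> (log n)^(log n) = 10^10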

Unable to solve for x for an equation with independent variable in the exponent

I am trying to help a friend in finance research. I wish to solve, for x, the equation that looks like :
g, h, c, p, a, b are all constants.
I guess the first step would be to find its derivative. This I did, using an online derivative calculator at http://www.derivative-calculator.net/. I got this:
Further, I am trying to solve for x, assuming this is equal to zero. None of the online 'solve for x' tools are able to do it. I have tried Wolfram Alpha's online solve-for-x tool, QuickMath, CynMath, etc. All of them report that it 'cannot be solved'. I am looking for a solution like x = blah-blah-blah. I have also tried the online MATLAB/Octave tools at CompileOnline/TutorialPoint. What can I do to solve for x (preferably without having to install MATLAB etc.)? Is there anything about these equations that renders them unsolvable by Wolfram Alpha or similar online 'solve for x' tools?
You could just write a function in R that evaluates your equation for a given value of x:
solvex <- function(x) {
g = 1
h = 1
c = 1
p = 1
b = 1
a = 1
g * (1 - exp(h*x + c)) + p * (1 - exp(b*x + a))
}
Then, to plot the values over some range (and see where they cross zero), do something like:
x <- seq(-100, 100, 1)
yseq <- sapply(x, solvex)
plot(x, yseq, type = "l")
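If you want an explicit or numeric solution rather than just a plot, here is a sketch using Python's sympy; the form of the equation is taken from the R function above, and the constant values are arbitrary placeholders:
import sympy as sp
x, g, h, c, p, a, b = sp.symbols('x g h c p a b')
# equation of the same form as in the R function above
eq = g*(1 - sp.exp(h*x + c)) + p*(1 - sp.exp(b*x + a))
# with every constant set to 1 it collapses to 2 - 2*exp(x + 1), solvable in closed form
print(sp.solve(eq.subs({g: 1, h: 1, c: 1, p: 1, b: 1, a: 1}), x))  # [-1]
# with general constants there is usually no closed form; fall back to a numeric root
vals = {g: 1, h: 2, c: 0.5, p: 3, b: 0.7, a: -1}  # arbitrary example values
print(sp.nsolve(eq.subs(vals), x, 0))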
Finding a solution using Mathematica:
