Here's a block of code that plots a function over a range, as well as its value at a single input:
using Plots
a = 1.0
f(x::Float64) = -x^2 - a
scatter(f, -3:.1:3)      # plot f over the range
scatter!([a], [f(a)])    # highlight the point (a, f(a))
I would like to plot the line tangent to that point, like so:
Is there a pattern or simple tool for doing so?
That depends on what you mean by "pattern or simple tool" - the easiest way is to just derive the derivative by hand and then plot that as a function:
hand_gradient(x) = -2x
and then add plot!(hand_gradient, 0:0.01:3) to your plot.
Of course that can be a bit tedious with more complicated functions or when you want to plot lots of gradients, so another way would be to utilise Julia's excellent automatic differentiation capabilities. Comparing all the different options is a bit beyond the scope of this answer, but check out https://juliadiff.org/ if you're interested. Here, I will be using the widely used Zygote library:
julia> using Plots, Zygote
julia> a = 1.0;
julia> f(x) = -x^2 - a;
[NB I have slightly amended your f definition to be in line with the plot you posted, which is an inverted parabola]
Note that here I am not restricting the type of the input argument x to f - this is crucial for automatic differentiation to work, as it is implemented by running a different number type (a Dual) through your function. In general, restricting argument types in this way is an anti-pattern in Julia - it does not help performance, but it makes your code less interoperable with other parts of the ecosystem, as you can see here if you try to automatically differentiate through f(x::Float64).
Now let's use Zygote to provide gradients for us:
julia> f'
#43 (generic function with 1 method)
As you can see, f' now returns an anonymous function - this is the derivative of f, as you can verify by evaluating it at a specific point:
julia> f'(2)
-4.0
Now all we need to do is leverage this to construct a function that itself returns a function which traces out the line of the gradient:
julia> gradient_line(f, x₀) = (x -> f(x₀) + f'(x₀)*(x-x₀))
gradient_line (generic function with 1 method)
This function takes a function f and a point x₀ at which we want the tangent, and returns an anonymous function that gives the value of the tangent at each value of x. Putting this to use:
julia> default(markerstrokecolor = "white", linewidth = 2);
julia> scatter(f, -3:0.1:3, label = "f(x) = -x² - 1", xlabel = "x", ylabel = "f(x)");
julia> scatter!([1], [f(1)], label = "", markersize = 10);
julia> plot!(gradient_line(f, 1), 0:0.1:3, label = "f'(1)", color = 2);
julia> scatter!([-2], [f(-2)], label = "", markersize = 10, color = 3);
julia> plot!(gradient_line(f, -2), -3:0.1:0, label = "f'(-2)", color = 3)
It is overkill for this problem, but you could use the CalculusWithJulia package which wraps up a tangent operator (along with some other conveniences) similar to what is derived in the previous answers:
using CalculusWithJulia # ignore any warnings
using Plots
f(x) = sin(x)
a, b = 0, pi/2
c = pi/4
plot(f, a, b)
plot!(tangent(f,c), a, b)
Well, the tool is called high school math :)
You can simply calculate the slope (m) and intercept (b) of the tangent (mx + b) and then plot it. To determine the former, we need to compute the derivative of the function f at the point a, i.e. f'(a). The simplest possible estimator is the difference quotient (I assume that it would be cheating to just derive the parabola analytically):
m = (f(a+Δ) - f(a))/Δ
Having the slope, our tangent should go through the point (a, f(a)). Hence we have to choose b accordingly:
b = f(a) - m*a
Choosing a suitably small value for Δ, e.g. Δ = 0.01, we obtain:
Δ = 0.01
m = (f(a+Δ) - f(a))/Δ
b = f(a) - m*a
scatter(f, -3:.1:3)
scatter!([a], [f(a)])
plot!(x -> m*x + b, 0, 3)
Higher-order estimators for the derivative can be found in FiniteDifferences.jl and FiniteDiff.jl, for example. Alternatively, you could use automatic differentiation (AD) tools such as ForwardDiff.jl to obtain the local derivative (see Nils' answer for an example).
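For instance, here is a minimal sketch of the ForwardDiff route, reusing f, a and the tangent construction from above (ForwardDiff.derivative is the only new call; everything else is assumed from the previous snippet):
using ForwardDiff
m = ForwardDiff.derivative(f, a)   # local slope at a, no hand-derived formula needed
b = f(a) - m*a                     # intercept so the tangent passes through (a, f(a))
plot!(x -> m*x + b, 0, 3)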
Background
I read here that Newton's method fails on the function x^(1/3) when its initial step is 1. I am trying to test it in a Julia Jupyter notebook.
I want to plot the function x^(1/3),
and then I want to run this code:
using Roots, ForwardDiff
f = x -> x^(1/3)
D(f) = x -> ForwardDiff.derivative(f, float(x))
x = find_zero((f, D(f)), 1, Roots.Newton(), verbose=true)
Problem:
How do I plot the function x^(1/3) over a range, e.g. (-1, 1)?
I tried
f = x->x^(1/3)
plot(f,-1,1)
I got
I changed the code to
f = x->(x+0im)^(1/3)
plot(f,-1,1)
I got
I want my plot to look like the plot of x^(1/3) on Google.
However, I cannot plot more than half of it.
That's because x^(1/3) does not always return a real result, i.e. the real cube root of x. For negative numbers, exponentiation with some powers (like 1/3 or 1.254, and I suppose all non-integers) would return a Complex. Because of Julia's type-stability requirements, this operation applied to a negative Real instead throws a DomainError. This behavior is also noted in the Frequently Asked Questions section of the Julia manual.
julia> (-1)^(1/3)
ERROR: DomainError with -1.0:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
julia> Complex(-1)^(1/3)
0.5 + 0.8660254037844386im
Note that the behavior of returning a complex number for the exponentiation of negative values is not really different from, say, MATLAB's behavior:
>> (-1)^(1/3)
ans =
0.5000 + 0.8660i
What you want, however, is to plot the real cube root.
You can go with
plot(x -> x < 0 ? -(-x)^(1//3) : x^(1//3), -1, 1)
to enforce real cube root or use the built-in cbrt function for that instead.
plot(cbrt, -1, 1)
It also has an alias ∛.
plot(∛, -1, 1)
f(x) is an odd function, so you can just use [0, 1] as the input range.
The plot on [-1, 0] is then deduced by symmetry, as follows.
The code is below
import numpy as np
import matplotlib.pyplot as plt
# Function f
f = lambda x: x**(1/3)
fig, ax = plt.subplots()
x1 = np.linspace(0, 1, num = 100)
x2 = np.linspace(-1, 0, num = 100)
ax.plot(x1, f(x1))
ax.plot(x2, -f(x1[::-1]))
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.show()
That Google plot makes no sense to me. For x > 0 it's ok, but for negative values of x the correct result is complex, and the Google plot appears to be showing the negative of the absolute value, which is strange.
Below you can see the output from Matlab, which is less fussy about types than Julia. As you can see it does not agree with your plot.
From the plot you can see that positive x values give a real-valued answer, while negative x give a complex-valued answer. The reason Julia errors for negative inputs is that it is very concerned with type stability. Having the output type of a function depend on the input value would cause a type instability, which harms performance. This is less of a concern for Matlab or Python, etc.
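To illustrate the type-stability point, consider a hypothetical helper (not part of this answer) whose return type depends on the value of its input - exactly what Julia tries to avoid:
unstable_cbrt(x) = x < 0 ? Complex(x)^(1/3) : x^(1/3)   # return type depends on the value of x
unstable_cbrt(8.0)    # 2.0 (a Float64)
unstable_cbrt(-8.0)   # 1.0 + 1.732...im (a ComplexF64)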
If you want a plot similar to the Matlab one above in Julia, you can define your function like this:
f = x -> sign(x) * abs(complex(x)^(1/3))
Edit: Actually, a better and faster version is
f = x -> sign(x) * abs(x)^(1/3)
Yeah, it looks awkward, but that's because you want a really strange plot, which imho makes no sense for the function x^(1/3).
I am new to Julia, I would like to solve this system:
where k1 and k2 are constant parameters. However, I = 0 when y < 0, or I = K·y otherwise, where K is a constant value.
I followed the tutorial about ODE. The question is, how to solve this piecewise differential equation in DifferentialEquations.jl?
Answered on the OP's cross post on Julia Discourse; copied here for completeness.
Here is a (mildly) interesting example $x''+x'+x=\pm p_1$ where the sign of $p_1$ changes when a switching manifold is encountered at $x=p_2$. To make things more interesting, consider hysteresis in the switching manifold such that $p_2\mapsto -p_2$ whenever the switching manifold is crossed.
The code is relatively straightforward; the StaticArrays usage (SVector/MVector) can be ignored - it is only there for speed.
using OrdinaryDiffEq
using StaticArrays
f(x, p, t) = SVector(x[2], -x[2]-x[1]+p[1]) # x'' + x' + x = ±p₁
h(u, t, integrator) = u[1]-integrator.p[2] # switching surface x = ±p₂;
g(integrator) = (integrator.p .= -integrator.p) # impact map (p₁, p₂) = -(p₁, p₂)
prob = ODEProblem(f, # RHS
SVector(0.0, 1.0), # initial value
(0.0, 100.0), # time interval
MVector(1.0, 1.0)) # parameters
cb = ContinuousCallback(h, g)
sol = solve(prob, Vern6(), callback=cb, dtmax=0.1)
Then plot sol[2,:] against sol[1,:] to see the phase plane - a nice non-smooth limit cycle in this case.
Note that if you try to use interpolation of the resulting solution (i.e., sol(t)) you need to be very careful around the points that have a discontinuous derivative as the interpolant goes a little awry. That's why I've used dtmax=0.1 to get a smoother solution output in this case. (I'm probably not using the most appropriate integrator either but it's the one that I was using in a previous piece of code that I copied-and-pasted.)
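For completeness, a minimal sketch of the phase-plane plot described above (assuming Plots.jl and the sol object from the previous block):
using Plots
plot(sol[1, :], sol[2, :], xlabel = "x", ylabel = "x'", label = "")   # the non-smooth limit cycle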
If I define some anonymous functions a(x) and b(x) as
a = x -> x^2
b = x -> 2x
it would be helpful for recursive problems to add them together, say over the duration of some loop:
for i=1:5
a = x -> a(x) + b(x)
end
where the goal would be to have this represented internally each loop iteration as
a = x -> x^2 + 2x
a = x -> x^2 + 2x + x^2 + 2x
a = x -> x^2 + 2x + x^2 + 2x + x^2 + 2x
...
But this fails and returns an error. I'm assuming it's because calling the new a(x) gets interpreted as something like a(2) = x^2 + x^2 + ... + x^2 + 2x:
julia> a(2)
ERROR: StackOverflowError:
in (::##35#36)(::Int64) at ./REPL[115]:0
in (::##35#36)(::Int64) at ./REPL[115]:1 (repeats 26666 times)
Is there any way around this?
You can do exactly what you're looking for using the let keyword:
a = x -> x^2
b = x -> 2x
for i=1:5
a = let a = a; x -> a(x) + b(x); end
end
a(2) # returns 24
Explanation
The let keyword allows you to create a block with local scope, and return the last statement in the block back to its caller scope. (contrast that with the begin keyword for instance, which does not introduce new scope).
If you pass a sequence of "assignments" to the let keyword, these become variables local to the block (allowing you, therefore, to re-use variable names that already exist in your workspace). The declaration let a = a is perfectly valid and means "create a local variable a which is initialised from the a variable of the outer scope" --- though if we wanted to be really clear, we could have written it like this instead:
for i=1:5
a = let a_old = a
x -> a_old(x) + b(x);
end
end
then again, if you were willing to use an a_old variable, you could have just done this instead:
for i=1:5; a_old = a; a = x-> a_old(x) + b(x); end
let is a very useful keyword: it's extremely handy for creating on-the-spot closures; in fact, this is exactly what we did here: we have returned a closure, where the "local variable a" essentially became a closed variable.
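As a small, hypothetical illustration of that "on-the-spot closure" idea (not taken from the question itself):
counter = let n = 0
    () -> (n += 1)   # the anonymous function closes over the block-local n
end
counter()   # 1
counter()   # 2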
PS. Since MATLAB was mentioned: what you're doing when you evaluate a = @(x) a(x) + b(x) in MATLAB is essentially creating a closure. In MATLAB you can inspect all the closed-over variables (i.e. the 'workspace' of the closure) using the functions command.
PPS. The Dr Livingstone, I presume?
Using the Polynomials package could be a way. This would go:
julia> using Polynomials # install with Pkg.add("Polynomials")
julia> x = Poly([0,1])
Poly(x)
julia> a = x^2
Poly(x^2)
julia> b = 2x
Poly(2*x)
julia> a = a+b
Poly(2*x + x^2)
julia> a(2.0)
8.0
The reason this works is that the behavior you want is essentially symbolic manipulation of functions. Julia does not work this way (it is a compiled language, not a symbolic one), but it is flexible. If fancier functions than polynomials are required, maybe a symbolic math package would help (there is SymPy, but I haven't used it).
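For what it's worth, a rough sketch of what that symbolic route might look like with SymPy.jl (I haven't verified this against a specific version, so treat the exact API as an assumption):
using SymPy            # install with Pkg.add("SymPy")
x = symbols("x")
a = x^2                # symbolic expression, not a Julia function
b = 2x
c = a + b              # x^2 + 2*x, built up symbolically
c(2)                   # substitute x = 2 -> 8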
This:
a = x -> a(x) + b(x)
is a recursive call with no stopping condition. It has nothing to do with Julia. As soon as you define this, the previous definition (x^2) is overridden and has nothing to do with the stack or your result. It doesn't exist anymore. What you're trying to do is:
a(2) = a(2)+2*2 = (a(2)+2*2)+2*2 = ((a(2)+2*2)+2*2)+2*2 = ...
etc. The 2*2 will not even be substituted, I just wrote it to be clear. You probably want to define
c = x -> a(x) + b(x)
EDIT
I see now that, coming from MATLAB, you're expecting the syntax to mean something else. What you wrote is, in nearly all languages, a recursive call, which you do not want. What you do want is something like:
concatFuncs = (f1, f2) -> (x -> f1(x) + f2(x))
This piece of code will take any two functions accepting an x and add the results of calling them. This will work with anything that '+' works with. So:
summed = concatFuncs(a,b)
is the function you need.
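Putting that together into a runnable sketch (using the concatFuncs helper named above):
a = x -> x^2
b = x -> 2x
concatFuncs = (f1, f2) -> (x -> f1(x) + f2(x))
summed = concatFuncs(a, b)
summed(2)   # 4 + 4 = 8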
I've written a little function that gives me a value based on a sine wave when I put in a float between 0 and 1. I'm using it to lerp things around in a game.
public static class Utilities
{
    public static float SineMe(float prop)
    {
        float output = (prop*180f)-90f;           // map 0..1 to -90..90 degrees
        output = Mathf.Sin(output*Mathf.Deg2Rad); // sine of that angle: -1..1
        output = (output+1f)/2f;                  // rescale to 0..1
        return output;
    }
}
It works fine. But I was wondering: is there a mathematical way of altering the sine wave so I can make it 'steeper' or 'shallower' in the middle?
In the diagram below the blue curve is a sine wave, I'm wondering if I can make it more like the green line.
What you're showing already isn't really sine - the range of sine is between -1 and +1. You're applying the linear function f(x) = (x+1)/2 to change that range. So place another function between the sine and that transform.
To change the shape, you need a non-linear function. So, here's a cubic equation you might try...
g(x) = Ax^3 + Bx^2 + Cx + D
D = 0
C = p
B = 3 - 3C
A = 1 - (B + C)
The parameter p should be given a value between 0.0 and 9.0. If it's 1.0, g(x) is the identity function (the output is the unmodified input). With values between 0.0 and 1.0, it will tend to "fatten" your sine wave (push it away from 0.0 and towards 1.0 or -1.0) which is what you seem to require.
I once "designed" this function as a way to get "fractal waveforms". Using values of p between 1.0 and 9.0 (and particularly between around 3.0 and 6.0) iterative application of this formula is chaotic. I stole the idea from the population fluctuation modelling chaotic function by R. M. May, but that's a quadratic - I wanted something symmetric, so I needed a cubic function. Not really relevant here, and a pretty aweful idea as it happens. Although you certainly get chaotic waveforms, what that really means is huge problems with aliassing - change the sample rate and you get a very different sound. Still, without the iteration, maybe this will give you what you need.
If you iterate enough times with p between 0.0 and 1.0, you end up with a square wave with slightly rounded corners.
Most likely you can just choose a value of p between 0.0 and 1.0, apply that function once, then apply your function to change the range and you'll get what you want.
By the way, there's already a comment suggesting a cheat sheet of "easing functions". "Easing" is a term from animation, and computer animation software often uses Bezier curves for that purpose - the same Bezier curves that vector graphics software often uses. Bezier curves come in quadratic and cubic variants, with cubic being the more common. So what this is doing probably isn't that different. However, cubic Bezier easing gives you more control - you can control the "ease-in" independently of the "ease-out", where my function only provides one parameter.
You can use y(x) = 1-(1-x)^n for x in [0..1] as a transform function.
You just have to apply it to the absolute value of your sine and carry the sign of the sine over to the result. In that way you can tweak the slope of the sine by increasing n. So what you want is this:
float sinus = Mathf.Sin(output*Mathf.Deg2Rad);
int sign = (sinus >= 0 ? 1 : -1);
int n = 4; // slope parameter
float waveform = sign * ( 1-Mathf.Pow(1-Mathf.Abs(sinus), n) );
You can root the sine function to make it steeper (only working for positive values). The higher the root, the steeper the sine.
Graph of a steeper sine wave function
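A minimal sketch of that idea (written in Julia rather than C# for brevity; only meaningful where sin(x) is non-negative):
steeper(x; n = 3) = sin(x)^(1/n)   # larger n gives a steeper rise towards 1 on the positive half-wave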
I discovered this nifty trick for a steeper sine wave (0..1).
f(x) = cos(sin(x)^3)^10
If you need (-1..1):
2 * (f(x) - 0.5)
I think I found the solution.
(0.5+sin(x*π-π/2)/2)^((2*(1-x))^k)
in the interval x = [0.0, 1.0]
with k controlling the steepness:
k = 0.0 for the unmodified sine (purple)
k=1.0 (green)
k=2.0 (blue)
https://www.desmos.com/calculator/wdtfsassev
I was looking for a similar function, not for the whole sine but just half the period.
I bumped into the Logistic function:
f(x) = L / (1 + e^(-k(x-x0)))
where
e = the natural logarithm base (also known as Euler's number),
x0 = the x-value of the sigmoid's midpoint,
L = the curve's maximum value, and
k = the steepness of the curve.
See https://en.wikipedia.org/wiki/Logistic_function
Works for me
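A small sketch of that curve (in Julia for brevity; the parameter values are only illustrative, not from the answer):
logistic(x; L = 1.0, k = 10.0, x0 = 0.5) = L / (1 + exp(-k * (x - x0)))   # larger k makes the middle steeper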
What about
sign(sin(x))*sqrt(abs(sin(x)))
https://www.desmos.com/calculator/5nn34xqkfr