This should hopefully not be a very hard question—I'm just not very experienced in R.
If I want to graph a simple sine wave, all I have to do is:
x=seq(-20,20,0.001)
y=sin(x)
plot(x,y,type="l")
But let's say I want to graph a relationship with trigonometric functions on both sides, such as sin(x) = cos(y). Typing:
sin(x) = cos(y)
gives me the following error:
Error in sin(x) = cos(y) : could not find function "sin<-"
Now, the obvious solution is just to rearrange it in terms of one variable, such as x = asin(cos(y)). But for much more complicated equations, with multiple nested trigonometric functions on both sides, this is no longer viable.
I'm sure I'm missing something extremely obvious, but what is it?
If you want to plot the relationship with sin(x) on the x axis and cos(y) on the y axis:
plot(sin(x), cos(y), type = "l")
Or did I misunderstand the question?
The error is caused by your equal sign. The expression sin(x) = cos(y) is an assignment. If you would like to check where they are equal, then you should write sin(x) == cos(y).
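If you actually want to draw the curve where sin(x) = cos(y), one way (a minimal sketch using base R's outer() and contour(); the grid range and resolution here are arbitrary) is to evaluate sin(x) - cos(y) on a grid and plot its zero contour:
x <- seq(-10, 10, length.out = 400)
y <- seq(-10, 10, length.out = 400)
z <- outer(x, y, function(x, y) sin(x) - cos(y))  # zero exactly where sin(x) = cos(y)
contour(x, y, z, levels = 0, drawlabels = FALSE)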
Related
I want to optimize a function for a multivariate surface given a range of values for one variable.
For example, take the equation for the following quadratic surface:
z = x + x^2 + xy + y^2 + y.
How would I find values of y that maximize z given all possible values of x? The result should be a line along the surface that maximizes z at every value of x.
I have found a lot of resources online that explain how to find maxima and minima, as well as saddle points, but I am not sure if that approach will be relevant - the slope of the surface along that line will usually not be 0, so I don't think it makes sense to use derivatives here.
I am new to calculus and mathematical optimization. I would be thrilled if someone would point me to a resource that could help me out with this problem.
Thank you!
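For what it's worth, the usual way to get such a profile is to hold x fixed and treat z as a function of y alone: setting the partial derivative dz/dy = x + 2y + 1 equal to zero gives the critical line y = -(x + 1)/2. For this particular surface the second derivative in y is +2, so that line actually minimizes z along each slice (z is unbounded above in y for fixed x, so a maximizing line only exists if y is restricted to a finite range), but the same "zero the partial derivative in y" recipe is what produces a curve on the surface that is optimal at every value of x.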
Background
I read here that Newton's method fails on the function x^(1/3) when its initial guess is 1. I am trying to test this in a Julia Jupyter notebook.
I want to plot the function x^(1/3), and then I want to run this code:
using Roots, ForwardDiff   # find_zero/Newton and automatic differentiation

f = x -> x^(1/3)
D(f) = x -> ForwardDiff.derivative(f, float(x))
x = find_zero((f, D(f)), 1, Roots.Newton(), verbose=true)
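(Incidentally, the failure is easy to check by hand: for f(x) = x^(1/3), taking the real cube root, f'(x) = (1/3) * x^(-2/3), so the Newton update is x_{n+1} = x_n - f(x_n)/f'(x_n) = x_n - 3*x_n = -2*x_n. Starting from x_0 = 1 the iterates are 1, -2, 4, -8, ..., so the method diverges instead of converging to the root at 0.)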
Problem:
How do I plot the function x^(1/3) over a range, e.g. (-1, 1)?
I tried:
using Plots
f = x -> x^(1/3)
plot(f, -1, 1)
I got a DomainError, so I changed the code to:
f = x->(x+0im)^(1/3)
plot(f,-1,1)
That was still not what I wanted: I want my plot to look like the plot of x^(1/3) that Google shows, but I cannot get more than half of it.
That's because x^(1/3) does not always have a real result, and ^ does not compute the real cube root of x. For negative numbers, exponentiation with a non-integer power (1/3, 1.254, and so on) would have to return a Complex. Because of Julia's type-stability requirements, this operation applied to a negative Real therefore throws a DomainError instead. This behavior is also noted in the Frequently Asked Questions section of the Julia manual.
julia> (-1)^(1/3)
ERROR: DomainError with -1.0:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
julia> Complex(-1)^(1/3)
0.5 + 0.8660254037844386im
Note that returning a complex number when exponentiating negative values is not really different from, say, MATLAB's behavior:
>> (-1)^(1/3)
ans =
0.5000 + 0.8660i
What you want, however, is to plot the real cube root.
You can go with
plot(x -> x < 0 ? -(-x)^(1//3) : x^(1//3), -1, 1)
to force the real cube root, or use the built-in cbrt function for that instead.
plot(cbrt, -1, 1)
It also has an alias ∛.
plot(∛, -1, 1)
f(x) = x^(1/3) is an odd function, so you can just use [0, 1] as the input range.
The plot on [-1, 0] is then deduced by the symmetry f(-x) = -f(x).
The code is below:
import numpy as np
import matplotlib.pyplot as plt
# Function f
f = lambda x: x**(1/3)
fig, ax = plt.subplots()
x1 = np.linspace(0, 1, num = 100)
x2 = np.linspace(-1, 0, num = 100)
ax.plot(x1, f(x1))           # cube root on [0, 1]
ax.plot(x2, -f(x1[::-1]))    # reflect through the origin: f(-x) = -f(x)
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.show()
Plot
That Google plot makes no sense to me. For x > 0 it's ok, but for negative values of x the correct result is complex, and the Google plot appears to be showing the negative of the absolute value, which is strange.
Below you can see the output from Matlab, which is less fussy about types than Julia. As you can see it does not agree with your plot.
From the plot you can see that positive x values give a real-valued answer, while negative x give a complex-valued answer. The reason Julia errors for negative inputs is that the language is very concerned with type stability: having the output type of a function depend on the input value would cause a type instability, which harms performance. This is less of a concern for Matlab or Python, etc.
If you want a plot similar to the one above in Julia, you can define your function like this:
f = x -> sign(x) * abs(complex(x)^(1/3))
Edit: Actually, a better and faster version is
f = x -> sign(x) * abs(x)^(1/3)
Yeah, it looks awkward, but that's because you want a really strange plot, which imho makes no sense for the function x^(1/3).
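(For x < 0, sign(x) * abs(x)^(1/3) is just -(-x)^(1/3), i.e. the real cube root, so this definition agrees with the cbrt-based answer above.)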
I am new to Julia and I would like to solve this system:
where k1 and k2 are constant parameters. However, I = 0 when y < 0, and I = K*y otherwise, where K is a constant value.
I followed the tutorial about ODEs. The question is: how do I solve this piecewise differential equation in DifferentialEquations.jl?
Answered on the OP's cross post on Julia Discourse; copied here for completeness.
Here is a (mildly) interesting example $x''+x'+x=\pm p_1$ where the sign of $p_1$ changes when a switching manifold is encountered at $x=p_2$. To make things more interesting, consider hysteresis in the switching manifold such that $p_2\mapsto -p_2$ whenever the switching manifold is crossed.
The code is relatively straightforward; the StaticArrays/SVector/MVector can be ignored, they are only for speed.
using OrdinaryDiffEq
using StaticArrays
f(x, p, t) = SVector(x[2], -x[2]-x[1]+p[1]) # x'' + x' + x = ±p₁
h(u, t, integrator) = u[1]-integrator.p[2] # switching surface x = ±p₂;
g(integrator) = (integrator.p .= -integrator.p) # impact map (p₁, p₂) = -(p₁, p₂)
prob = ODEProblem(f, # RHS
SVector(0.0, 1.0), # initial value
(0.0, 100.0), # time interval
MVector(1.0, 1.0)) # parameters
cb = ContinuousCallback(h, g)
sol = solve(prob, Vern6(), callback=cb, dtmax=0.1)
Then plot sol[2,:] against sol[1,:] to see the phase plane - a nice non-smooth limit cycle in this case.
Note that if you try to use interpolation of the resulting solution (i.e., sol(t)) you need to be very careful around the points that have a discontinuous derivative as the interpolant goes a little awry. That's why I've used dtmax=0.1 to get a smoother solution output in this case. (I'm probably not using the most appropriate integrator either but it's the one that I was using in a previous piece of code that I copied-and-pasted.)
I'm trying to plot the following implicit formula in R:
1 = x^2 + 4*(y^2) + x*y
which should be an ellipse. I'd like to randomly sample the x values and then generate the graph based on those.
Here's a related thread, but the solutions there seem to be specific to the 3D case. This question has been more resistant to Googling than I would have expected, so maybe the R language calls implicit formulas something else.
Thanks in advance!
There are two things you may be missing. When plotting implicit functions with the contour technique, you need to move all terms to the right-hand side so that your implicit function becomes:
0 = -1+ x^2 + 4*(y^2) + x*y
Then using the contour value of zero will make sense:
x<-seq(-1.1,1.1,length=1000)
y<-seq(-1,1,length=1000)
z<-outer(x,y,function(x,y) 4*y^2+x^2+x*y -1 )
contour(x,y,z,levels=0)
I got a sign wrong on the first version. #mnels' was correct.
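If you specifically want the "randomly sample the x values" part, another option (a minimal sketch; the sample size, seed, and plotting parameters are arbitrary) is to solve the quadratic in y for each sampled x. Rearranging 1 = x^2 + 4*(y^2) + x*y gives 4*y^2 + x*y + (x^2 - 1) = 0, which has real roots only when 16 - 15*x^2 >= 0:
set.seed(1)
x <- runif(500, -4/sqrt(15), 4/sqrt(15))   # x values for which real solutions exist
disc <- 16 - 15*x^2                        # discriminant of 4*y^2 + x*y + (x^2 - 1)
y_upper <- (-x + sqrt(disc)) / 8
y_lower <- (-x - sqrt(disc)) / 8
plot(c(x, x), c(y_upper, y_lower), pch = 16, cex = 0.4, asp = 1,
     xlab = "x", ylab = "y")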
I have to plot the function f(x) = ln(20 - e^x) in Octave, and I use the command:
x = -5:0.1:5;
y = log(20 - exp(x));
plot(x,y)
But the graph is not correct, because when I check in Wolfram Alpha it is not the same. Any help is appreciated!
You plotted in Octave ln(20-e^x):
whereas what you put into Wolfram Alpha was e^x + e^y = 20, which looks like this:
Which is exactly the same relation. The only difference here is that for e^x + e^y = 20 Wolfram Alpha plots only the real solutions (the blue line), whereas for ln(20 - e^x) both Wolfram Alpha and Octave plot the full set of solutions, including the complex ones (although Octave plots only the real part of the complex solution).
If you look carefully you see that for x < ln(20) the imaginary part shown in Wolfram Alpha is 0, whereas for x > ln(20) there is a non-zero imaginary part (equal to π, since the logarithm of a negative real number is the log of its magnitude plus iπ). Octave just plots the real parts, as it ignores imaginary parts when plotting a complex signal. Just check whos y on your command line and it will tell you it is a complex variable.
I'm on MATLAB, but your console output should be similar:
>> x = -5:0.1:5;
y = log(20 - exp(x));
plot(x,y)
Warning: Imaginary parts of complex X and/or Y arguments ignored
>> whos y
Name      Size            Bytes  Class     Attributes
y         1x101            1616  double    complex
which tells you A) when you plot the function that it is a complex signal, and B) that y is indeed complex, as it should be for values of x > ln(20).