Find zero of a nonlinear equation using Julia

After a process using SymPy in Julia, I generated a system of nonlinear equations. For simplicity, I am going to use an approximation here for the case of a single nonlinear equation. What I get is something like this equation:
R = (p) -> -5.0488*p + p^2.81 - 3.38/( p^(-1.0) )^2.0
I can plot the R function
using Plots
plot(R, 0,8)
We can see that the R function has two zeros: p = 0 and another with 5.850 < p < 8.75. I would like to find the positive zero. For this, I tried the nlsolve function, but it throws an error:
using NLsolve
nlsolve(R , 5.8)
MethodError: no method matching nlsolve(::var"#1337#1338", ::Float64)
Closest candidates are:
nlsolve(::Any, ::Any, !Matched::AbstractArray; inplace, kwargs...)
First, where am I going wrong with the nlsolve function?
If possible, I would also appreciate a solution using the SymPy package in Julia.

This question has been answered on the Julia discourse here: https://discourse.julialang.org/t/find-zero-of-a-nonlinear-equation-using-julia/61974
It's always helpful to cross-reference when asking on multiple platforms.
For reference, the solution was:
using NLsolve
function R(F, p)  # p is a vector too, not a scalar
    F[1] = -5.0488*p[1] + p[1]^2.81 - 3.38/( p[1]^(-1.0) )^2.0
end
nlsolve(R, [5.8])
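Since the residual here is a scalar function of a single variable, a bracketing root finder is another option. Below is a minimal sketch using the Roots.jl package (not part of the original answer); the bracket 5.8 to 8 is an assumption read off the plot above.
using Roots

# Scalar residual from the question.
R = p -> -5.0488*p + p^2.81 - 3.38/( p^(-1.0) )^2.0

# Bisection on an interval where R changes sign (bracket chosen from the plot).
find_zero(R, (5.8, 8.0))   # positive root near p ≈ 5.9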

Related

Julia: How do you subtract a Normal distribution from a Chi distribution? [duplicate]

As part of a mini-project where I am numerically solving a linear differential equation, I have to subtract a probability distribution from another distribution. Is there a way to do this in Julia? When I try:
a = Chi(3) - Uniform(0,1)
There is no method set up for this:
MethodError: no method matching -(::Chi{Float64}, ::Uniform{Float64})
Closest candidates are:
-(::UnivariateDistribution, ::Real) at C:\Users\Acer\.julia\packages\Distributions\Fl5RM\src\univariate\locationscale.jl:139
-(::ChainRulesCore.AbstractThunk, ::Any) at C:\Users\Acer\.julia\packages\ChainRulesCore\sHMAp\src\tangent_types\thunks.jl:30
-(::ChainRulesCore.ZeroTangent, ::Any) at C:\Users\Acer\.julia\packages\ChainRulesCore\sHMAp\src\tangent_arithmetic.jl:101
...
As I have said, the convolve function is defined in Distributions.jl. Here is the documentation: https://juliastats.org/Distributions.jl/stable/convolution/. However, this is not enough for your purposes, as commented above.
Let me help you derive the PDF of Chi(3) - Uniform(0,1), assuming the two are independent.
Let X be the distribution you want. Then X = Chi(3) + Uniform(-1,0), and its PDF is f(x) = C(x+1) - C(x), where C is the CDF of the Chi(3) distribution.
So there is a closed form for the PDF of your distribution in terms of the CDF of Chi(3).
(I am doing the computations in my head, so it would be good if you double-checked them.)
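That closed form is straightforward to evaluate with Distributions.jl. A minimal sketch (please double-check the derivation, as the answer itself suggests):
using Distributions

# PDF of X = Chi(3) - Uniform(0,1) (independent), via f(x) = C(x+1) - C(x),
# where C is the CDF of Chi(3).
C = Chi(3)
f_X(x) = cdf(C, x + 1) - cdf(C, x)

f_X(1.0)   # density of the difference at x = 1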

How can I write this line of code in MATLAB (currently R)?

How can I write this line of code in MATLAB (currently R)?
vcov_beta_hat <- c(sigma2_hat) * solve(t(X) %*% X)
My attempt is,
vcov_beta_hat = [sigma2_hat.*((X'*X))];
However, I am struggling to understand what the 'c' is doing in the R code.
Whilst the other answer addresses the fact that solve is what is missing from your MATLAB code, solve can mean a number of different things in R.
If solve is called with a single argument (no comma), it is not solving anything and is actually taking the inverse:
Inverse of A is inv(A) in MATLAB and solve(A) in R.
Therefore, vcov_beta_hat = [sigma2_hat.*inv((X'*X))];
c(a,b,c) denotes a vector in R. In MATLAB, you would write
vec = [a b c];
Also, you need to find the equivalent of the R solve() function. So far, your MATLAB code just multiplies X' with X and does not invert or solve anything.
linsolve should be a good starting point.
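For readers following the rest of this page in Julia, the same estimator is also a one-liner there. A sketch with hypothetical placeholder inputs (X and sigma2_hat below are made-up examples, not values from the question):
using LinearAlgebra

X = randn(10, 3)     # hypothetical design matrix
sigma2_hat = 1.5     # hypothetical residual variance estimate

# Julia analogue of the R line c(sigma2_hat) * solve(t(X) %*% X)
vcov_beta_hat = sigma2_hat * inv(X' * X)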

How can I resolve an exponential function for x in R?

I want to analyse a logarithmic growth curve in more detail. In particular, I would like to know the time point when the slope becomes >0 (which is the starting point of growth after a lag phase).
Therefore I fitted a logarithmic function to my growth data with the grofit package of R. I got values for the three parameters (lambda, mu, maximal asymptote).
Now I thought I could use the first derivative of the logarithmic growth function, set mu=0 (the slope at any time point during growth), and in this way solve the equation for the time (x). I'm not sure if this is possible, since mu=0 will hold over a longer timespan at the beginning of the curve (so there is no unique time point). But maybe I could approximate that point by setting mu=0.01, which should be more specific.
Anyway I used the Deriv package to find the first derivative of my logarithmic function:
Deriv(a/(1+exp(((4*b)/a)*(c-x)+2)), "x")
where a = asymptote, b = maximal slope, c = lambda.
As a result I got:
{.e2 <- exp(2 + 4 * (b * (c - x)/a))
4 * (.e2 * b/(.e2 + 1)^2)}
Or in normal writing:
f'(x)=(4*exp(2+((4b(c-x))/a))*b)/((exp(2+((4b(c-x))/a))+1)^2)
Now I would like to solve this function for x with f'(x)=0.01. Can anyone tell me how best to do it?
Also, do you have comments on my way of thinking or on the R functions I used?
Thank you.
Anne
Using a root-solving function is more appropriate than using an optimization function. Here fn should return f'(x) - 0.01 (the expression used in the optim answer below, but without the abs), so that its roots are exactly the points where the slope equals 0.01.
I'll give an example with two packages.
It would also be a good idea to plot the function for a range of values, like this:
curve(fn,-.1,.1)
You can see that using the base R function uniroot will present problems, since it needs function values of opposite sign at the endpoints of the interval.
Using package nleqslv like this
library(nleqslv)
nleqslv(1,fn)
gives
$x
[1] 0.003388598
$fvec
[1] 8.293101e-10
$termcd
[1] 1
$message
[1] "Function criterion near zero"
<more info> ......
Using function fsolve from package pracma
library(pracma)
fsolve(fn,1)
gives
$x
[1] 0.003388585
$fval
[1] 3.136539e-10
The solutions given by both packages are very close to each other.
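As a cross-check in Julia (the language of the main question on this page), the same root can be found with a bracketing solver. A sketch assuming a = b = c = 1, the values used in the optim answer below:
using Roots

a, b, c = 1.0, 1.0, 1.0

# f'(x) from the question.
fp(x) = 4 * exp(2 + 4b * (c - x) / a) * b / (exp(2 + 4b * (c - x) / a) + 1)^2

# Solve f'(x) = 0.01 on a bracket where the residual changes sign.
find_zero(x -> fp(x) - 0.01, (0.0, 1.0))   # ≈ 0.00339, matching the R results above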
This might not be the best approach, but you can use the optim function to find the solution. Check the code below: I am basically trying to find the value of x which minimizes abs(f'(x) - 0.01).
The starting seed value for x may be important; the optim function might not converge for some seeds.
fn <- function(x){
  a <- 1
  b <- 1
  c <- 1
  return( abs((4*exp(2+((4*b*(c-x))/a))*b) / ((exp(2+((4*b*(c-x))/a))+1)^2) - 0.01) )
}
x <- optim(10,fn)
x$par
Thank you very much for your efforts. Unfortunately, none of the above solutions worked for me :-(
I figured the problem out the old fashioned way (pencil + paper + mathematics book).
Have a good day
Anne

Why do the inverse t-distributions for small values differ in Matlab and R?

I would like to evaluate the inverse Student's t-distribution function for small values, e.g., 1e-18, in Matlab. The degrees of freedom is 2.
Unfortunately, Matlab returns NaN:
tinv(1e-18,2)
NaN
However, if I use R's built-in function:
qt(1e-18,2)
-707106781
The result is sensible. Why can Matlab not evaluate the function for this small value? The Matlab and R results are quite similar down to about 1e-15, but for smaller values the difference is considerable:
tinv(1e-16,2)/qt(1e-16,2) = 1.05
Does anyone know what the difference is between the algorithms implemented in Matlab and R and, if R gives correct results, how I could effectively calculate the inverse t-distribution in Matlab for smaller values?
It appears that R's qt may use a completely different algorithm than Matlab's tinv. I think that you and others should report this deficiency to The MathWorks by filing a service request. By the way, in R2014b and R2015a, -Inf is returned instead of NaN for small values (about eps/8 and less) of the first argument, p. This is more sensible, but I think they should do better.
In the interim, there are several workarounds.
Special Cases
First, in the case of the Student's t-distribution, there are several simple analytic solutions to the inverse CDF or quantile function for certain integer parameters of ν. For your example of ν = 2:
% for v = 2
p = 1e-18;
x = (2*p-1)./sqrt(2*p.*(1-p))
which returns -7.071067811865475e+08. At a minimum, Matlab's tinv should include these special cases (they only do so for ν = 1). It would probably improve the accuracy and speed of these particular solutions as well.
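A quick cross-check of this closed form in Julia (the language of the main question on this page; not part of the original answer):
# nu = 2 analytic quantile evaluated in double precision
p = 1e-18
x = (2p - 1) / sqrt(2p * (1 - p))   # ≈ -7.0710678e8, matching the value above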
Numeric Inverse
The tinv function is based on the betaincinv function. It appears that it may be this function that is responsible for the loss of precision for small values of the first argument, p. However, as suggested by the OP, one can use the CDF function, tcdf, and root-finding methods to evaluate the inverse CDF numerically. The tcdf function is based on betainc, which doesn't appear to be as sensitive. Using fzero:
p = 1e-18;
v = 2;
x = fzero(@(x) tcdf(x,v) - p, 0)
This returns -7.071067811865468e+08. Note that this method is not very robust for values of p close to 1.
Symbolic Solutions
For more general cases, you can take advantage of symbolic math and variable-precision arithmetic. You can use identities in terms of Gaussian hypergeometric functions, 2F1, as given here for the CDF. Thus, using solve and hypergeom:
% Supposedly valid for x^2 < v, but appears to work for your example
p = sym('1e-18');
v = sym(2);
syms x
F = 0.5+x*gamma((v+1)/2)*hypergeom([0.5 (v+1)/2],1.5,-x^2/v)/(sqrt(sym('pi')*v)*gamma(v/2));
sol_x = solve(p==F,x);
vpa(sol_x)
As noted above, the tinv function is based on the betaincinv function. There is no equivalent function, or even an incomplete beta function, in the Symbolic Math Toolbox or MuPAD, but a similar 2F1 relation for the incomplete beta function can be used:
p = sym('1e-18');
v = sym(2);
syms x
a = v/2;
F = 1-x^a*hypergeom([a 0.5],a+1,x)/(a*beta(a,0.5));
sol_x = solve(2*abs(p-0.5)==F,x);
sol_x = sign(p-0.5).*sqrt(v.*(1-sol_x)./sol_x);
vpa(sol_x)
Both symbolic schemes return results that agree with each other: -707106781.186547523340184, using the default value of digits.
I've not fully validated the two symbolic methods above, so I can't vouch for their correctness in all cases. The code would also need to be vectorized and will be slower than a fully numerical solution.
