I wanted to use the Julia ODE package. I saw this example online:
tspan = [0 2*pi()]
y_0 = [1 0]'
F = (t, y) -> [0 1; -1 0]*y
ode23(F, tspan, y_0)
(source: https://github.com/JuliaLang/julia/blob/84757050b26ed549b9aee77ac7c204d9963285a2/j/ode.j)
Yet when I run it I get the following error:
ERROR: DimensionMismatch("*")
in generic_matmatmul! at linalg/matmul.jl:372
in * at linalg/matmul.jl:117
in anonymous at none:1
in ode23 at /home/rm/.julia/v0.4/ODE/src/ODE.jl:67
A simple example would help.
The example you linked to is from 2011, and the code has at least two errors. First, calling pi() is incorrect: pi is now a constant. Second, the code has moved from Base into the ODE package. A working example (using Julia 0.4) can be seen at:
https://github.com/JuliaLang/ODE.jl/blob/master/src/ODE.jl#L36-39
using ODE
tspan = [0, 2*pi]
y0 = [1, 0]
F = (t, y) -> [0 1; -1 0]*y
ode23(F, tspan, y0)
Note that I know nothing about solving these types of equations; I just know some of the history of moving things from Base into separate packages.
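If it's useful: ode23 returns both the time steps it took and the solution values at those steps, so (assuming the same argument order as in the example above, which may differ across ODE.jl versions) you can capture them like this:
tout, yout = ode23(F, tspan, y0)  # tout: time points, yout: solution at each time point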
I am dealing with a nested optimization problem in Julia 1.7.3. In particular, I need to optimize a function (say f1) that, in turn, depends on the optimization result of another function (say f2). Here is a minimal example to illustrate my problem:
using Optim
function f2(x::Float64, y::Float64)
return (x^2 - x - y)^2
end
function f1(y::Float64)
x₁ = optimize(x -> f2(x,y), -10, 10).minimizer
return (y*x₁ - 0.5)^2
end
To get the optimizer of f1, I do
y₁ = optimize(y -> f1(y), -10, 10).minimizer
To get the optimizer of f2, I do
x₁ = optimize(x -> f2(x,y₁), -10, 10).minimizer
However, this last step seems very inefficient because it requires an extra optimization call. The optimizer of f2 is indeed already computed while optimizing f1 (i.e., x₁). Is there a way to retrieve x₁ without an extra optimization step (e.g., saving x₁ during the last iteration step of f1)?
Note: one option is to merge the two optimization problems and simultaneously optimize the objective function with respect to x and y. However, I cannot follow this approach in the actual application I am dealing with.
You may store x₁ in a mutable struct, e.g.
using Optim
mutable struct Minimizer{T}
x::T
y::T
end
function f2(x::Float64, y::Float64)
return (x^2 - x - y)^2
end
function f1!(m::Minimizer, y::Float64)
x₁ = optimize(x -> f2(x,y), -10, 10).minimizer
m.x = x₁
return (y*x₁ - 0.5)^2
end
M = Minimizer(NaN, NaN)
M.y = optimize(y -> f1!(M, y), -10, 10).minimizer
@show M
# M = Minimizer{Float64}(1.2971565074975993, 0.3854585027525779)
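A minimal alternative sketch, if you would rather not define a struct, is to capture x₁ in a Ref (the name x_inner is mine). One caveat applies to both versions: the stored value corresponds to the last y at which the outer objective was evaluated, which may differ slightly from the minimizer that optimize returns.
using Optim

f2(x, y) = (x^2 - x - y)^2

x_inner = Ref(NaN)  # holds the inner minimizer from the most recent call

function f1(y)
    x₁ = optimize(x -> f2(x, y), -10, 10).minimizer
    x_inner[] = x₁  # record it as a side effect
    return (y*x₁ - 0.5)^2
end

y₁ = optimize(f1, -10, 10).minimizer
x₁ = x_inner[]  # retrieved without an extra optimization call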
I'm using symbolic math in Julia. When I do the differentiation it comes out very nicely, but I can't get the coefficients out.
using SymPy
@vars x y
z = x*(10 + 5*x + 4*y)
dz = diff(z,x)
x_s = solveset(dz,x)
How do I get the coefficients out of x_s?
You can use elements to get the elements of a finite set as an array:
julia> elements(x_s)
1-element Array{Sym,1}:
-2*y/5 - 1
Getting the coefficients can be done in different ways; here we convert the value to a Polynomial type, then use its coeffs method:
julia> p = sympy.Poly(elements(x_s)[1], y)
Poly(-2/5*y - 1, y, domain='QQ')
julia> p.coeffs()
2-element Array{Sym,1}:
-2/5
-1
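If you'd rather not build a Poly at all, a small alternative sketch is to query the expression directly with SymPy's coeff method (SymPy.jl forwards such dot calls to SymPy, as with p.coeffs() above):
expr = elements(x_s)[1]  # -2*y/5 - 1
expr.coeff(y, 1)         # coefficient of y: -2/5
expr.coeff(y, 0)         # constant term: -1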
As per my comment, the following works but isn't exactly what I'd describe as pretty:
julia> x_s.__pyobject__.args[1].__pyobject__.args[1]
-1
julia> x_s.__pyobject__.args[1].__pyobject__.args[2]
-2⋅y
─────
5
julia> x_s.__pyobject__.args[1].__pyobject__.args[2].__pyobject__.args[1]
-2/5
I couldn't find an accessor function in SymPy.jl that simplifies this, but as you say, this could be the basis for rolling your own.
I need the (analytical) derivatives of the PDFs/log PDFs/CDFs of the most common probability distributions w.r.t. their parameters in R. Is there a way to obtain these functions?
The gamlss.dist package provides the derivatives of the log PDFs of many probability distributions (code for the normal distribution). Is there anything similar for PDFs/CDFs?
Edit: Admittedly, the derivatives of the PDFs can be obtained from the derivatives of the log PDFs by a simple application of the chain rule, but I don't think a similar thing is possible for the CDFs...
OP mentioned that calculating the derivatives once is OK, so I'll talk about that. I use Maxima, but the same thing could be done with SymPy or other computer algebra systems; it might even be possible in R, but I didn't investigate.
In Maxima, probability distributions are in the distrib add-on package which you load via load(distrib). You can find documentation for all the cdf functions by entering ?? cdf_ at the interactive input prompt.
Maxima applies partial evaluation to functions -- if some variables don't have defined values, that's OK; the result simply has those variables left undefined in it. So you can say diff(cdf_foo(x, a, b), a) to get a derivative w.r.t. a, for example, with free variables x, a, and b.
You can generate code via grind, which produces output suitable for Maxima, but other languages will understand the expressions.
There are several ways to do this stuff. Here's just a first attempt.
(%i1) load (distrib) $
(%i2) fundef (cdf_weibull);
(%o2) cdf_weibull(x, a, b) := if maybe((a > 0) and (b > 0)) = false
          then error("cdf_weibull: parameters a and b must be greater than 0")
          else (1 - exp(-(x/b)^a))*unit_step(x)
(%i3) assume (a > 0, b > 0);
(%o3) [a > 0, b > 0]
(%i4) diff (cdf_weibull (x, a, b), a);
(%o4) -%e^-(x^a/b^a)*unit_step(x)*((log(b)*x^a)/b^a - (x^a*log(x))/b^a)
(%i5) grind (%);
-%e^-(x^a/b^a)*unit_step(x)*((log(b)*x^a)/b^a-(x^a*log(x))/b^a)$
(%o5) done
(%i6) diff (cdf_weibull (x, a, b), b);
(%o6) -a*b^((-a)-1)*x^a*%e^-(x^a/b^a)*unit_step(x)
(%i7) grind (%);
-a*b^((-a)-1)*x^a*%e^-(x^a/b^a)*unit_step(x)$
(%o7) done
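If you would rather stay in Julia, here is a rough equivalent sketch with SymPy.jl; I write the Weibull CDF out by hand for x >= 0, so there is no unit_step factor:
using SymPy

x, a, b = symbols("x a b", positive=true)
cdf_weibull = 1 - exp(-(x/b)^a)  # Weibull CDF, valid for x >= 0

diff(cdf_weibull, a)  # derivative w.r.t. the shape parameter a
diff(cdf_weibull, b)  # derivative w.r.t. the scale parameter b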
I am having some trouble with my CS assignment. I am trying to call another rule that I created previously within a new rule that will calculate the factorial of a power function (e.g., Y = (X^N)!). I think the problem with my code is that Y in exp(Y,X,N) is not carrying over when I call factorial(Y,Z), but I am not entirely sure. I have been trying to find an example of this, but I haven't been able to find anything.
I am not expecting an answer since this is homework, but any help would be greatly appreciated.
Here is my code:
/* 1.2: Write recursive rules exp(Y, X, N) to compute mathematical function Y = X^N, where Y is used
to hold the result, X and N are non-negative integers, and X and N cannot be 0 at the same time
as 0^0 is undefined. The program must print an error message if X = N = 0.
*/
exp(_,0,0) :-
write('0^0 is undefined').
exp(1,_,0).
exp(Y,X,N) :-
N > 0, !, N1 is N - 1, exp(Y1, X, N1), Y is X * Y1.
/* 1.3: Write recursive rules factorial(Y,X,N) to compute Y = (X^N)! This function can be described as the
factorial of exp. The rules must use the exp that you designed.
*/
factorial(0,X) :-
X is 1.
factorial(N,X) :-
N > 0, N1 is N - 1, factorial(N1,X1), X is X1 * N.
factorial(Y,X,N) :-
exp(Y,X,N), factorial(Y,Z).
The problem is the Z variable in factorial/3: it is mentioned only once (a so-called 'singleton variable'), so the value computed into it can never be used anywhere else.
As noted in the comments under the question, short-circuiting it to _ won't work either; you have to unify it with a sensible value. Think about what you want to compute: link the head of the clause with exp and factorial through parameters, i.e., introduce a parameter "in the middle" that is not mentioned in the head.
Edit: I'll rename your variables for you; maybe you'll see more clearly what you did:
factorial(Y,X,Result) :-
exp(Y,X,Result), factorial(Y,UnusedResult).
Now you should see what your factorial/3 really computes, and how to fix it.
All,
I've just been starting to play around with the Julia language and am enjoying it quite a bit. At the end of the 3rd tutorial there's an interesting problem: genericize the quadratic formula such that it solves for the roots of any n-order polynomial equation.
This struck me as (a) an interesting programming problem and (b) an interesting Julia problem. Has anyone out there solved this one? For reference, here is the Julia code with a couple toy examples. Again, the idea is to make this generic for any n-order polynomial.
Cheers,
Aaron
function derivative(f)
return function(x)
# pick a small value for h
h = x == 0 ? sqrt(eps(Float64)) : sqrt(eps(Float64)) * x
# floating point arithmetic gymnastics
xph = x + h
dx = xph - x
# evaluate f at x + h
f1 = f(xph)
# evaluate f at x
f0 = f(x)
# divide the difference by h
return (f1 - f0) / dx
end
end
function quadratic(f)
f1 = derivative(f)
c = f(0.0)
b = f1(0.0)
a = f(1.0) - b - c
return (-b + sqrt(b^2 - 4a*c + 0im))/2a, (-b - sqrt(b^2 - 4a*c + 0im))/2a
end
quadratic((x) -> x^2 - x - 2)
quadratic((x) -> x^2 + 2)
The package PolynomialRoots.jl provides the function roots() to find all (real and complex) roots of polynomials of any order. The only mandatory argument is the array with coefficients of the polynomial in ascending order.
For example, in order to find the roots of
6x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1
after loading the package (using PolynomialRoots) you can use
julia> roots([1, 2, 3, 4, 5, 6])
5-element Array{Complex{Float64},1}:
0.294195-0.668367im
-0.670332+2.77556e-17im
0.294195+0.668367im
-0.375695-0.570175im
-0.375695+0.570175im
The package is a Julia implementation of the root-finding algorithm described in this paper: http://arxiv.org/abs/1203.1034
PolynomialRoots.jl also has support for arbitrary-precision calculation. This is useful for solving equations that cannot be solved in double precision. For example,
julia> r = roots([94906268.375, -189812534, 94906265.625]);
julia> (r[1], r[2])
(1.0000000144879793 - 0.0im,1.0000000144879788 + 0.0im)
gives the wrong result for this polynomial. Passing the input array in arbitrary precision instead forces arbitrary-precision calculations, which provide the right answer (see https://en.wikipedia.org/wiki/Loss_of_significance):
julia> r = roots([BigFloat(94906268.375), BigFloat(-189812534), BigFloat(94906265.625)]);
julia> (Float64(r[1]), Float64(r[2]))
(1.0000000289759583,1.0)
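As a quick sanity check against the two toy examples in the question (coefficients again in ascending order; the expected roots are in the comments):
using PolynomialRoots

roots([-2.0, -1.0, 1.0])  # x^2 - x - 2  ->  roots 2 and -1
roots([2.0, 0.0, 1.0])    # x^2 + 2      ->  roots ±sqrt(2)*im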
There are no algebraic formulae for general polynomials of degree five and above (in fact there can't be; see here). So theoretically, you could proceed using the same methodology as for solutions to cubics and quartics, but even that would be a lot of hard work, given the very unwieldy formulae for the roots of quartics. You could also use a CAS like SymPy to find those formulae.