I'm using JuMP v0.20.0 with the Ipopt optimizer, and I'm trying to solve a system of nonlinear equations in a loop, where the problem statement varies based on what I'm looping over.
Suppose I have this really simple problem of picking $$t_1,\dots, t_N$$ to minimize the nonlinear objective $$\sum_{i=1}^{N} t_i^2$$. When I run this without looping, I have the following code:
using JuMP, Optim, Ipopt, NLsolve

N = 8  # example value; N just needs to be defined before the model is built

m = Model(Ipopt.Optimizer)
@variable(m, t[1:N] >= 0.00000001)

function solve_Aik(tlist...)
    t = collect(tlist)
    return sum([t[i]^2 for i in 1:N])
end

register(m, :solve_Aik, N, solve_Aik, autodiff=true)
@NLobjective(m, Min, solve_Aik(t...))
optimize!(m)
solution = [value.(t[i]) for i = 1:N]
the last line of which provides me my solution just fine.
However, as soon as I put this in a loop (without even providing the number I'm looping over to the problem), I can no longer recover my solution, with an error of "MethodError: no method matching value(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#107#109"{var"#solve_Aik#378"},Float64},Float64,8})". See the code below:
nums = [1, 2, 3]
for num in nums
    m = Model(Ipopt.Optimizer)
    @variable(m, t[1:N] >= 0.00000001)

    function solve_Aik(tlist...)
        t = collect(tlist)
        return sum([t[i]^2 for i in 1:N])
    end

    register(m, :solve_Aik, N, solve_Aik, autodiff=true)
    @NLobjective(m, Min, solve_Aik(t...))
    optimize!(m)
    solution = [value.(t[i]) for i = 1:N]
end
The last line providing the solution is what Julia gets hung up on. Has anyone else encountered a similar issue? TIA!
My guess, based on the error message, is that by a quirk of Julia's scoping rules the assignment t = collect(tlist) inside solve_Aik rebinds the JuMP variable t defined in the body of the for loop (the closure captures t instead of creating a new local), so t ends up holding ForwardDiff.Dual values. Try using a different name for the variable inside solve_Aik.
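For example, a minimal sketch of that rename (the rest of the loop body stays exactly as above; xs is just an arbitrary local name):
function solve_Aik(tlist...)
    xs = collect(tlist)  # any name other than t, so the JuMP variable t is not rebound
    return sum([xs[i]^2 for i in 1:N])
end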
I am trying to use DifferentialEquations.jl in Julia, and it works all right until I try to use it on a second-order ODE.
Consider for instance the second-order ODE
x''(t) = x'(t) + 2x(t), with initial conditions
x'(0) = 0, x(0) = 1,
which has an analytic solution given by x(t) = (2/3) exp(-t) + (1/3) exp(2t).
To solve it numerically, I run the following code:
using DifferentialEquations;

# in-place second-order form: ddu holds x'', du is x', u is x
function f_simple(ddu, du, u, p, t)
    ddu[1] = du[1] + 2*u[1]
end;

du0 = [0.]            # x'(0)
u0 = [1.]             # x(0)
tspan = (0.0, 5.0)
prob2 = SecondOrderODEProblem(f_simple, du0, u0, tspan)
sol = solve(prob2, reltol=1e-8, abstol=1e-8);
With that,
sol(3)[2] = 122.57014434362732
whereas the analytic solution yields 134.50945587649028, and so I'm a bit lost here.
According to the documentation for DifferentialEquations.jl, Vern7() is appropriate for high-accuracy solutions to non-stiff equations:
sol = solve(prob2, Vern7(), reltol=1e-8, abstol=1e-8)
julia> println(sol(3)[2])
134.5094558872943
On my machine, this matches the analytical solution quite closely. I'm not exactly sure which method is used by default: the documentation indicates that solve picks an appropriate solver when one isn't specified.
For more information on Vern7(), check out Jim Verner's page on Runge-Kutta algorithms.
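As a quick sanity check, here is a small sketch comparing the numerical value against the analytic solution quoted in the question (just base Julia, nothing library-specific):
x_exact(t) = (2/3)*exp(-t) + (1/3)*exp(2*t)   # analytic solution from the question
println(abs(sol(3)[2] - x_exact(3)))          # should be very small with Vern7() and tight tolerances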
I am calculating the solution of Chen's chaotic system using the differential transform method. The code that I am using is:
x=zeros(1,7);
x(1)=-0.1;
y=zeros(1,7);
y(1)=0.5;
z=zeros(1,7);
z(1)=-0.6;
for k=0:5
    x(k+2)=(40*gamma(1+k)/gamma(2+k))*(y(k+1)-x(k+1));
    sum=0;
    for l=0:k
        sum=sum+x(l+1)*z(k+1-l);
    end
    y(k+2)=(gamma(1+k)/gamma(2+k))*(-12*x(k+1)-sum+28*y(k+1));
    sum=0;
    for l=0:k
        sum=sum+x(l+1)*y(k+1-l);
    end
    z(k+2)=(gamma(1+k)/(1+k))*(sum-3*z(k+1));
end
s=fliplr(x);
t=0:0.05:2;
a=polyval(s,t);
plot(t,a)
What this code does is calculate x(k), y(k) and z(k), which are the coefficients of the polynomial approximating the solution: x(t) = sum_{k=0}^infinity x(k) t^k, and similarly for y(t) and z(t). But this code doesn't give the desired output of a chaotic sequence; the graph of x(t) that I am getting is: [plot of x(t) omitted]
This is not an answer, but a clearer and (programmatically speaking) more correct way to write your loop:
for k = 1:6
    x(k+1) = (40*1/k)*(y(k)-x(k));
    temp_sum = sum(x(1:k).*z(k:-1:1),2);
    y(k+1) = (1/k)*(-12*x(k)-temp_sum+28*y(k));
    temp_sum = sum(x(1:k).*y(k:-1:1),2);
    z(k+1) = (1/k)*(temp_sum-3*z(k));
end
The most important issue here is not overloading the built-in function sum (I replaced it with temp_sum). Other improvements include vectorizing the inner loops (using sum), indexing that starts at 1 (instead of writing k+1 all the time), and removing unnecessary calls to gamma (since gamma(k)/gamma(k+1) = 1/k).
I am having some trouble with my CS assignment. I am trying to call another rule that I created previously within a new rule that will calculate the factorial of a power function (e.g. Y = (X^N)!). I think the problem with my code is that Y in exp(Y,X,N) is not carrying over when I call factorial(Y,Z), but I am not entirely sure. I have been trying to find an example of this, but I haven't been able to find anything.
I am not expecting an answer since this is homework, but any help would be greatly appreciated.
Here is my code:
/* 1.2: Write recursive rules exp(Y, X, N) to compute mathematical function Y = X^N, where Y is used
to hold the result, X and N are non-negative integers, and X and N cannot be 0 at the same time
as 0^0 is undefined. The program must print an error message if X = N = 0.
*/
exp(_,0,0) :-
    write('0^0 is undefined').
exp(1,_,0).
exp(Y,X,N) :-
    N > 0, !, N1 is N - 1, exp(Y1, X, N1), Y is X * Y1.
/* 1.3: Write recursive rules factorial(Y,X,N) to compute Y = (X^N)! This function can be described as the
   factorial of exp. The rules must use the exp that you designed.
*/
factorial(0,X) :-
    X is 1.
factorial(N,X) :-
    N > 0, N1 is N - 1, factorial(N1,X1), X is X1 * N.
factorial(Y,X,N) :-
    exp(Y,X,N), factorial(Y,Z).
The Z variable in factorial/3 is mentioned only once (a so-called 'singleton variable'), so whatever it gets bound to is never used or returned.
As noted in the comments under the question, short-circuiting it to _ won't work either; you have to unify it with a sensible value. Think about what you want to compute, and link the head of the clause with exp and factorial through their parameters, i.e. introduce a parameter "in the middle" that is not mentioned in the head.
Edit: I'll rename your variables for you; maybe you'll see more clearly what you did:
factorial(Y,X,Result) :-
    exp(Y,X,Result), factorial(Y,UnusedResult).
now you should see what your factorial/3 really computes, and how to fix it.
I am trying to evaluate a function in Scilab using the following steps:
x = poly(0,'x')
y = (x^18+x^11)^3   // the function (the function is variable)
y1 = derivat(y)     // first derivative
y2 = derivat(y1)    // second derivative
y3 = derivat(y2)    // third derivative
I need to evaluate the 3 derivatives at an arbitrary point.
I know the function evstr(expression), but it does not work with the return value of derivat.
I tried to use string(y), but it returns something strange.
How can I do this: either cast the return value of derivat to a string so it can be evaluated with evstr, or evaluate the n-th derivative at any point some other way in Scilab?
To evaluate numerical derivatives of almost any kind of function (of one or several variables) up to machine precision (you won't get better results by evaluating symbolic expressions obtained by hand), you can use the complex step method (google these terms and you will find a bunch of references). For example:
function y = f(x)
    s = poly(0,'s');
    p = (s-s^2)^3;
    y = horner(p,x).*exp(-x.^2);
endfunction

x = linspace(-1,1,100);
d = imag(f(x+complex(0,1e-100)))/1e-100;
true_d = exp(-x.^2).*(-1+x).^2.*x.^2.*(3-6*x-2*x.^2+2*x.^3);
disp(max(abs(d-true_d)))
--> disp(max(abs(d-true_d)))
1.776D-15
To evaluate a symbolic polynomial at a particular point or points, use the horner command. Example:
t = 0:0.1:1
v1 = horner(y1, t)
plot(t, v1)
This is the closest I got to a solution to this problem.
He proposes using:
old = 'f';
for i=1:n
    new = 'd'+string(i)+'f';
    deff('y='+new+'(x)','y=numderivative('+old+',x)');
    old = new;
end
I know, it's horrible, but I think there is no better solution, at least in Scilab.
I found a way:
function y = deriva(f, v, n, h)
    deff("y = DF0(x)", "y="+f)
    if n == 0 then
        y = DF0(v);
    else
        for i=1:(n-1)
            deff("y=DF"+string(i)+"(x)", "y=numderivative(DF"+string(i-1)+",x,"+string(h)+",4)");
        end
        deff("y=DFN(x)", "y=numderivative(DF"+string(n-1)+",x,"+string(h)+",4)");
        y = DFN(v);
    end
endfunction
disp(deriva("x.*x", 3, 2, 0.0001));
This correctly calculates numerical derivatives of nth order. But it needs to have the function passed as a string. Errors can get pretty large, and time to compute tends to go up fast as a function of n.
I need to make a histogram, and my data points each carry a statistical weight. The standard hist function isn't equipped to handle this. I could of course import the numpy.histogram function, which handles weighted data just fine, but I thought it would be a good exercise in learning julia to try and augment the hist() function to accept weights as an optional (named) argument.
I started by looking at the julia source for hist(), and was able to modify it slightly (if amateurishly -- suggestions for improvements welcome), to get it sort of working:
function sturges(n)  # Sturges' formula
    n == 0 && return one(n)
    iceil(log2(n)) + 1
end

function weightedhist!{HT}(h::AbstractArray{HT}, v::AbstractVector, edg::AbstractVector; init::Bool=true, weights::AbstractVector = ones(HT,length(v)))
    n = length(edg) - 1
    length(weights) == length(v) || error("length(weights) must equal length(v)")
    length(h) == n || error("length(h) must equal length(edg) - 1.")
    if init
        fill!(h, zero(HT))
    end
    for j = 1:length(v)
        i = searchsortedfirst(edg, v[j]) - 1
        if 1 <= i <= n
            h[i] += weights[j]
        end
    end
    edg, h
end
weightedhist(v::AbstractVector, edg::AbstractVector; weights::AbstractVector = ones(Int,length(v))) = weightedhist!(Array(Float64, length(edg)-1), v, edg; weights=weights)
weightedhist(v::AbstractVector, n::Integer; weights::AbstractVector = ones(Int,length(v))) = weightedhist(v, histrange(v,n); weights=weights)
weightedhist(v::AbstractVector; weights::AbstractVector = ones(Int,length(v))) = weightedhist(v, sturges(length(v)); weights=weights)
If I generate some random data with
v = randn(10^5);
w = rand(length(v));
edges = floor(minimum(v)):0.1:ceil(maximum(v));
then weightedhist(v, edges; weights=w) agrees with numpy.histogram(v, edges, weights=w). If I leave out the optional keyword argument for weights, then weightedhist(v, edges) agrees with the built in hist(v, edges), and weightedhist(v) agrees with the built in hist(v), except for the fact that my function outputs floats rather than ints when no weights are provided.
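For concreteness, the calls I am comparing look like this (using the definitions above; each call returns an (edges, counts) tuple):
edg1, h1 = weightedhist(v, edges; weights=w)   # weighted counts, Float64 as expected
edg2, h2 = weightedhist(v, edges)              # unweighted, but h2 comes out Float64 rather than Int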
I don't understand why this is the case (is h getting created as a float array? promoted?), and I'd like my function to fall back on the behavior of the built-in one as closely as possible when no weights are provided.
Can anyone suggest why my function is outputting floats, and how I might change that behavior to output ints when no weights are provided? I'd like to do this without first creating the h array and then converting it from one type to another, since I'd like the code to be as fast as possible.
If I understand correctly, when you call
weightedhist(v, edges)
you are using the first of your three "extra" definitions at the bottom.
This calls
weightedhist!(Array(Float64, length(edg)-1), v, edg; weights=weights)
so in your "main" weightedhist! the type parameter HT will be Float64, h will be filled with Float64 values, and hence the output is Float64. So changing it to Array(eltype(weights), length(edg)-1) would be sufficient, I believe.
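Concretely, a minimal sketch of that change (only the first convenience method changes; everything else stays as in the question):
weightedhist(v::AbstractVector, edg::AbstractVector; weights::AbstractVector = ones(Int,length(v))) = weightedhist!(Array(eltype(weights), length(edg)-1), v, edg; weights=weights)
With the default weights being ones(Int, length(v)), unweighted calls now produce Int counts, while Float64 weights still give Float64 bins.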