On this page about differential equations, it talks about writing the output to the first input of the function we want to solve. For example, suppose we have this set of differential equations (the Lorenz system):

du1/dt = 10*(u2 - u1)
du2/dt = u1*(28 - u3) - u2
du3/dt = u1*u2 - (8/3)*u3

The code in Julia defining the function is:
function lorenz(du, u, p, t)
    du[1] = 10.0*(u[2] - u[1])        # results are written into du in place
    du[2] = u[1]*(28.0 - u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
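For context, a function written this way is then handed to the solver roughly as follows (a minimal sketch; u0 and tspan are just placeholder values):

using DifferentialEquations

u0 = [1.0, 0.0, 0.0]                  # initial state (placeholder values)
tspan = (0.0, 100.0)                  # time span to solve over (placeholder)
prob = ODEProblem(lorenz, u0, tspan)  # lorenz is the in-place function above
sol = solve(prob)                     # the solver preallocates du and passes it to lorenz at every step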
Here the output is du, which is also an input. It seems this way of defining a function has something to do with array allocation.
I do not understand what they mean by giving the output to the first input of the function.
I encountered this problem after I specified a differential evolution (DE) algorithm and an initial population of multilayer perceptron (MLP) networks. The task requires evolving a population of MLPs by DE. I tried to use the Evolutionary package, but failed at this problem. I am just a beginner in Julia. Can anyone help me with this, or is there another way to implement DE to evolve MLPs? I find it hard to reuse code without a similar example to follow, and I can't find any Julia example of evolving an MLP by DE. The code is attached below.
Here are the snippets of code:
begin
    features = Iris.features();
    slabels = Iris.labels();
    classes = unique(slabels)    # unique classes in the dataset
    nclasses = length(classes)   # number of classes
    d, n = size(features)        # dimension and size of the dataset
end
Define the MLP:
model = Chain(Dense(d, 15, relu), Dense(15, nclasses))
Rewrite initial_population to generate a group of MLPs:
begin
    import Evolutionary.initial_population
    function initial_population(method::M, individual::Chain;
                                rng::Random.AbstractRNG=Random.default_rng(),
                                kwargs...) where {M<:Evolutionary.AbstractOptimizer}
        θ, re = Flux.destructure(individual);
        [re(randn(rng, length(θ))) for i in 1:Evolutionary.population_size(method)]
    end
end
Define the DE algorithm; I just used arbitrary parameters:
algo2 = DE(
    populationSize = 150,
    F = 0.9,
    n = 1,
    K = 0.5*(1.9),
    selection = rouletteinv
)
popu = initial_population(algo2, model)
In the source code of Evolutionary.jl, it seems that to use the optimize() function I need to pass a constraint, but I am not sure. I have tried every method of optimize, but it still reported an error. What's worse, I am not sure how to use a box constraint: I don't know how to set upper and lower bounds in this case, so I tried using no constraint at all, but that still failed. I also tried an arbitrary box constraint just to get optimize() to run, but that failed too. The reported error is in the picture attached.
cnst = BoxConstraints([0.5, 0.5], [2.0, 2.0])
res2 = Evolutionary.optimize(fitness,cnst,algo2,popu,opts)
So far, all I do is define a DE algorithm, an initial population, and an MLP network. There is also a uniform_mlp(), which destructures an MLP into a vector, performs the crossover operator, and reconstructs a new MLP from the result:
function uniform_mlp(m1::T, m2::T; rng::Random.AbstractRNG=Random.default_rng()) where {T <: Chain}
    θ1, re1 = Flux.destructure(m1);
    θ2, re2 = Flux.destructure(m2);
    c1, c2 = UX(θ1, θ2; rng=rng)
    return re1(c1), re2(c2)
end
There is also a mutation function:
function gaussian_mlp(σ::Real = 1.0)
    vop = gaussian(σ)
    function mutation(recombinant::T; rng::Random.AbstractRNG=Random.default_rng()) where {T <: Chain}
        θ, re = Flux.destructure(recombinant)
        return re(convert(Vector{Float32}, vop(θ; rng=rng)))
    end
    return mutation
end
The easiest way to use this is through Optimization.jl. There is an Evolutionary.jl wrapper that makes it use the standardized Optimization.jl interface. This looks like:
using Optimization, OptimizationEvolutionary
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
f = OptimizationFunction(rosenbrock)
prob = Optimization.OptimizationProblem(f, x0, p, lb = [-1.0,-1.0], ub = [1.0,1.0])
sol = solve(prob, Evolutionary.DE())
Though, given previous measurements of global optimizer performance, we would recommend BlackBoxOptim's methods as well; this can be done simply by changing the optimizer dispatch:
using Optimization, OptimizationBBO
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(), maxiters=100000, maxtime=1000.0)
This is also a DE method, but one with an adaptive radius and other tweaks that performs much better on average.
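To connect this interface back to the MLP in the question, a rough, untested sketch could look like the following; the fitness definition, the use of Flux's logitcrossentropy, and the ±5 parameter bounds are illustrative assumptions rather than required choices:

using Flux, Optimization, OptimizationEvolutionary

# Flatten the MLP from the question into a parameter vector and a rebuilder.
θ0, re = Flux.destructure(model)

# Hypothetical fitness: cross-entropy of the rebuilt network on the Iris data
# (features, slabels, and classes are the variables defined in the question).
labels_onehot = Flux.onehotbatch(slabels, classes)
fitness(θ, p) = Flux.Losses.logitcrossentropy(re(θ)(features), labels_onehot)

optf = OptimizationFunction(fitness)
prob = Optimization.OptimizationProblem(optf, θ0,
    lb = fill(-5f0, length(θ0)),   # arbitrary box bounds on the weights
    ub = fill(5f0, length(θ0)))
sol = solve(prob, Evolutionary.DE(populationSize = 150), maxiters = 200)

best_mlp = re(sol.u)   # rebuild the evolved network from the optimized parameters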
I have derived a survival function for a system of components (ignore the details of how this system is set up) and I am trying to maximize its expectation, or more specifically, to maximize the expected value of the function:
surv_func <- function(x, mu) {(exp(-(x/(mu))^(1/3))*((1-exp(-(4/3)*x^(3/2)))+exp(-(-(4/3)*x^(3/2)))))*exp(-(x/(3-mu))^(1/3))}
and I am supposed to use the function (the PDF containing my tasks hints at it)
optimize()
and the expected value for a function can be computed with
# Computes the expected value of the function "function"
E <- integrate(function, 0, Inf)
but my function depends on both x and mu. The expected value could (obviously) be computed if the integrand had no mu and depended only on x. For those interested, the mu comes from the fact that one of the components has a Weibull distribution with parameters (1/3, mu), and the 3 - mu comes from the fact that another component has a Weibull distribution with parameters (1/3, lambda). The task imposes the constraint mu + lambda = 3, so I thought that substituting lambda = 3 - mu in the second Weibull distribution and maximizing this problem would yield not only mu but also lambda.
If I try to, just for the sake of learning about R, compute the expected value using the code below (in the console window), it just gives me the following:
> E <- integrate(surv_func,0,Inf)
Error in (function (x, mu) : argument "mu" is missing, with no default
I am new to R and seem to be a little bit "slow" at learning. How can I approach this problem?
Here is my code:
using Plots
using SpecialFunctions
using QuadGK
kappa = 1
B = 1
xi = (kappa/B)^4
function correlation(x)
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, 1e6)[1]/kappa
end
r = range(-20, 20, length = 1001)
plot(r, correlation(r))
I get an error on the Bessel function. I understand that the argument is the problem and that it should be of type ::BigFloat, ::Float16, or ::Float32, but I don't know how to achieve that. I tried to write x .* q instead, but the problem remains the same; I get the error:
ERROR: MethodError: no method matching besselj0(::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}})
Also, I'm searching for a way to write +infinity instead of 1e6.
Just replace correlation(r) with correlation.(r) in your code to use broadcasting, as is explained here.
The core of your problem is that in Julia functions do not automatically work elementwise on arrays; you usually have to opt in via broadcasting (especially when you work with numerical code). Here is a basic example:
julia> sin(1)
0.8414709848078965
julia> sin([1])
ERROR: MethodError: no method matching sin(::Array{Int64,1})
Closest candidates are:
sin(::BigFloat) at mpfr.jl:727
sin(::Missing) at math.jl:1197
sin(::Complex{Float16}) at math.jl:1145
...
Stacktrace:
[1] top-level scope at REPL[2]:1
julia> sin.([1])
1-element Array{Float64,1}:
0.8414709848078965
However, in your case the correlation function is quite expensive. In such cases I usually use ProgressMeter.jl to monitor the progress of the computations (it shows how long you can expect the computations to take). So you can write:
using ProgressMeter
result = @showprogress map(correlation, r)
and use the map function to apply the correlation function to all elements of r (in this case the result will be the same as with broadcasting).
Finally, your computations will be much faster if you do not use global variables in quadgk. It is better to pass kappa and xi as arguments to the function like this:
function correlation(x, kappa, xi)
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, 1e6)[1]/kappa
end
result = @showprogress map(x -> correlation(x, kappa, xi), r)
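On the +infinity part of the question: quadgk also accepts Inf as an endpoint (it maps the infinite interval to a finite one internally), so the 1e6 cutoff is not strictly needed. A small sketch, though convergence for this oscillatory Bessel integrand is worth checking:

function correlation(x, kappa, xi)
    # Inf is a valid upper limit for quadgk; no finite cutoff is required
    quadgk(q -> q * SpecialFunctions.besselj0(x*q)/(q^4 + xi), 0, Inf)[1]/kappa
end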
I'm trying to set up a "Solver" function to find the normal depth of a channel (yn). The parameters are given in the code below, where I can estimate one side of the equation. All other parameters are functions of yn. I need to find the yn that solves the equation A*(R^(2/3)) = nQSo.
So=0.001
n=0.013
Q=30
B=10
nQSo=(n*Q)/(So^(1/2))
A=B*yn
P=B+2*yn
R=A/P
A*(R^(2/3)) = nQSo
You can take a look at optimize
So=0.001
n=0.013
Q=30
B=10
nQSo=(n*Q)/(So^(1/2))
error = function(yn, nQSo){
  A = B*yn
  P = B + 2*yn
  R = A/P
  return(abs(A*(R^(2/3)) - nQSo))
}
optimize(error,interval = c(0,2),nQSo = nQSo)
The result, as you can see, is yn = 1.239066.
I am taking a numerical calculus class, and we are not required to know any Scilab programming beyond the very basics, which are taught through a booklet, since the class is mostly theoretical. I was reading the booklet and found this Scilab code meant to find a root of a function through the bisection method.
The problem is, I can't find a way to make it work. I tried to call it with bissecao(x,-1,1,0.1,40), but it didn't work.
The error I got was:
at line 3 of function bissecao ( E:\Downloads\bisseccao3.sce line 3 )
Invalid index.
Since I highly doubt that the code itself is broken, and I searched for anything that seemed wrong to no avail, I guess I am probably calling it wrong somehow.
The code is the following:
function p = bissecao(f, a, b, TOL, N)
    i = 1
    fa = f(a)
    while (i <= N)
        // iteration of the bisection
        p = a + (b-a)/2
        fp = f(p)
        // stop condition
        if ((fp == 0) | ((b-a)/2 < TOL)) then
            return p
        end
        // bisects the interval
        i = i+1
        if (fa * fp > 0) then
            a = p
            fa = fp
        else
            b = p
        end
    end
    error('Max number of iter. exceeded!')
endfunction
Here f is a function (I guess), a and b are the limits of the interval over which we iterate, TOL is the tolerance at which the program terminates close to a zero, and N is the maximum number of iterations.
Any help on how to make this run is greatly appreciated.
Error in bissecao
The only error your bissecao function has is the call to return:
In a function, return stops the execution of the function;
[x1,..,xn]=return(a1,..,an) stops the execution of the function and
puts the local variables ai in the calling environment under the names xi.
So you should either call it without any argument (input or output), and the function will exit and return p.
Or you could call y1 = return(p), and the function will exit and p will be stored in y1.
It is better to use the argument-free form of return in functions, to avoid changing the values of variables in the parent/calling script/function (a possible side effect).
The argument form is more useful when interactively debugging with pause:
In pause mode, it allows returning to the lower level.
[x1,..,xn]=return(a1,..,an) returns to the lower level and puts the local
variables ai in the calling environment under the names xi.
Error in calling bissecao
The problem may come from your call bissecao(x,-1,1,0.1,40), because you didn't define x. Just fixing this by creating a function solves the problem:
function y=x(t)
    y = t + 0.3
endfunction
x0=bissecao(x,-1,1,0.1,40) // changed 'return p' to 'return'
disp(x0) // gives -0.3 as expected