Multi-objective optimization using the BlackBoxOptim.jl package - Julia

I’m running the example code to optimize fitness_schaffer1(x) = (sum(abs2, x), sum(abs2, x .- 2.0)) using Method=:borg_moea.
Code:
fitness_schaffer1(x) = (sum(abs2, x), sum(abs2, x .- 2.0))
res = bboptimize(fitness_schaffer1; Method=:borg_moea, FitnessScheme=ParetoFitnessScheme{2}(is_minimizing=true), SearchRange=(-10.0, 10.0), NumDimensions=3, ϵ=0.05, MaxSteps=50000, TraceInterval=1.0, TraceMode=:verbose);
I'd like to figure out the following:
ParetoFitnessScheme{2}
What does the 2 inside the curly brackets represent?
pf = pareto_frontier(res)
This returns a 120-element Vector. Is there any place to change the default value of 120?
Details of the 120-element Vector
I'd like to figure out what its fields represent.
120-element Vector{BlackBoxOptim.FrontierIndividualWrapper{Tuple{Float64, Float64}, IndexedTupleFitness{2, Float64}}}:
BlackBoxOptim.FrontierIndividualWrapper{Tuple{Float64, Float64}, IndexedTupleFitness{2, Float64}}(BlackBoxOptim.FrontierIndividual{IndexedTupleFitness{2, Float64}}(IndexedTupleFitness{2, Float64}((0.9980588444521499, 6.076993313826746), 7.075052158278895, (19, 121), 1.1024139914614826), [0.5678841558214585, 0.581514678153916, 0.5808675486809768], 3, 23878, 13, 1.64915418452e9), (0.9980588444521499, 6.076993313826746))
(0.9980588444521499, 6.076993313826746) = (value of fitness function 1, value of fitness function 2)
7.075052158278895 = sum of the values of fitness functions 1 and 2
(19, 121) = any idea what these represent?
1.1024139914614826 = any idea what this represents?
[0.5678841558214585, 0.581514678153916, 0.5808675486809768] = candidate values from res, i.e. the solution vector
3, 23878, 13, 1.64915418452e9 = any idea what these represent?
(0.9980588444521499, 6.076993313826746) = the same fitness tuple, repeated as the last field of the wrapper
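For what it's worth, here is a short sketch of how I would poke at both questions. This is hedged guesswork plus the accessors the BlackBoxOptim README itself applies to frontier elements (fitness and BlackBoxOptim.params); my reading is that the 2 in ParetoFitnessScheme{2} is simply the number of objectives.

# The 2 appears to be the number of objectives: fitness_schaffer1 returns a
# 2-tuple, so the scheme is parameterized with N = 2. A 3-objective problem
# would presumably use ParetoFitnessScheme{3}.
pf = pareto_frontier(res)  # res from the bboptimize call above

# Extract the fitness tuples and the decision vectors without digging into
# the FrontierIndividualWrapper internals:
fit_vals  = [fitness(elm) for elm in pf]               # Vector of (f1, f2) values
solutions = [BlackBoxOptim.params(elm) for elm in pf]  # Vector of 3-element x vectors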


Weighted average cost of short-term stock trading

I'm new to programming and trying to learn Julia. I tried to compute the weighted average cost of short-term stock trading activities as I did before in R. I rewrote the code in Julia; unfortunately, it returns an incorrect result in data frame format.
I tried to investigate the result of each iteration step by changing return vwavg to println([volume[i], s, unitprice[i], value[i], t, vwavg[i], u]) and the output is correct. Is it a problem with rounding?
I would really appreciate your help.
using DataFrames  # needed for DataFrame and transform

# create trial dataset
df = DataFrame(qty   = [3, 2, 2, -7, 4, 4, -3, -2, 4, 4, -2, -3],
               price = [100.0, 99.0, 101.0, 103.0, 95.0, 93.0, 90.0, 90.0, 93.0, 95.0, 93.0, 92.0])

# create function for weighted average cost of stock price
function vwacost(volume, unitprice)
    value = Vector{Float64}(undef, length(volume))
    vwavg = Vector{Float64}(undef, length(volume))
    for i in 1:length(volume)
        s = 0
        t = 0
        u = 0
        if volume[i] > 0
            value[i] = (volume[i] * unitprice[i]) + t
            volume[i] = volume[i] + s
            vwavg[i] = value[i] / volume[i]
            u = vwavg[i]
            s = volume[i]
            t = value[i]
        else
            volume[i] = volume[i] + s
            value[i] = u * volume[i]
            s = volume[i]
            t = value[i]
            vwavg[i] = u
        end
        return vwavg
    end
end

out = transform(df, [:qty, :price] => vwacost)
Simple error:
for i in 1:length(volume)
    ...
    return vwavg
end

should be:

for i in 1:length(volume)
    ...
end
return vwavg
You are currently returning the result after the first loop iteration, which is why your resulting vwavg vector has only one (the first) calculated entry, with all other entries being zero/whatever was in memory when you created the vwavg vector in the first place.
OK, the second problem, the original df being mutated (which makes the result incorrect when the function is run again), can be solved by copy(df):
out = select(copy(df), [:qty, :price] => vwacost => :avgcost)
Thus, the original df will not change and the result will be consistent over time.
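For reference, a minimal rewrite sketch (mine, not the original poster's code) that applies both fixes and also avoids mutating the input columns entirely, so no copy(df) is needed. It assumes the running totals s, t, u were meant to persist across rows, which the flattened indentation of the question leaves ambiguous:

# assumes `df` and `using DataFrames` from above
function vwacost_fixed(volume, unitprice)
    vol   = float.(volume)          # local copy, so the df column is untouched
    value = zeros(length(vol))
    vwavg = zeros(length(vol))
    s = t = u = 0.0                 # running volume, value, and average
    for i in eachindex(vol)
        if vol[i] > 0
            value[i] = vol[i] * unitprice[i] + t
            vol[i]  += s
            vwavg[i] = value[i] / vol[i]
            u, s, t  = vwavg[i], vol[i], value[i]
        else
            vol[i]  += s
            value[i] = u * vol[i]
            s, t     = vol[i], value[i]
            vwavg[i] = u
        end
    end
    return vwavg                    # returned after the whole loop, not inside it
end

out = transform(df, [:qty, :price] => vwacost_fixed => :avgcost)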

Julia: I can't make a streamplot with my own function of the data

I'm writing my dissertation and I need to make a streamplot from velocity matrices.
I have solved the Navier-Stokes equations, and I have one 19×67 matrix of u-velocity and another 19×67 matrix of v-velocity.
To obtain a continuous function I have done a bilinear interpolation, but I have a problem with the plotting.
I don't know if I'm explaining myself very well, but here is the code.
# BILINEAR INTERPOLATION #
X = 2
Y = 0.67
x_pos = findlast(x -> x < X, x)
y_pos = findlast(x -> x < Y, y)
x1 = (x_pos - 1) * Dx
x2 = x_pos * Dx
y1 = ((y_pos - 1) - 0.5) * Dy
y2 = (y_pos - 0.5) * Dy
u1 = u[y_pos-1, x_pos-1]
u2 = u[y_pos-1, x_pos]
u3 = u[y_pos, x_pos-1]
u4 = u[y_pos, x_pos]
u_int(Y, X) = (1 / (Dx * Dy)) * ((x2 .- X) .* (y2 .- Y) .* u1 + (X .- x1) .* (y2 .- Y) .* u2 +
                                 (x2 .- X) .* (Y .- y1) .* u3 + (X .- x1) .* (Y .- y1) .* u4)
xx1 = ((x_pos - 1) - 0.5) * Dx
xx2 = (x_pos - 0.5) * Dx
yy1 = (y_pos - 1) * Dy
yy2 = y_pos * Dy
v1 = v[y_pos-1, x_pos-1]
v2 = v[y_pos-1, x_pos]
v3 = v[y_pos, x_pos-1]
v4 = v[y_pos, x_pos]
v_int(Y, X) = (1 / (Dx * Dy)) * ((x2 - X) * (y2 - Y) * v1 + (X - x1) * (y2 - Y) * v2 +
                                 (x2 - X) * (Y - y1) * v3 + (X - x1) * (Y - y1) * v4)

# PLOT #
function stream(Y, X)
    u_c = u_int(Y, X)
    v_c = v_int(Y, X)
    return u_c, v_c
end

using CairoMakie
let
    fig = Figure(resolution = (600, 400))
    ax = Axis(fig[1, 1], xlabel = "x", ylabel = "y", backgroundcolor = :black)
    streamplot!(ax, stream, -2 .. 4, -2 .. 2, colormap = Reverse(:plasma),
                gridsize = (32, 32), arrow_size = 10)
    display(fig)
end;
Any solution?
If you know another method using another package, please tell me.
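Not a confirmed diagnosis, but one thing worth checking: Makie's streamplot expects the field function to take a single point and return a Point2f, while the stream function above takes two separate arguments and returns a plain tuple. Below is a minimal self-contained sketch along those lines, with a hypothetical toy field standing in for the interpolated u_int/v_int:

using CairoMakie

# Hypothetical toy field standing in for the bilinear interpolants:
u_toy(y, x) = -y
v_toy(y, x) = x

# streamplot calls this with a single point p = (x, y) and wants a Point2f back:
stream(p::Point2) = Point2f(u_toy(p[2], p[1]), v_toy(p[2], p[1]))

let
    fig = Figure(resolution = (600, 400))
    ax = Axis(fig[1, 1], xlabel = "x", ylabel = "y", backgroundcolor = :black)
    streamplot!(ax, stream, -2 .. 4, -2 .. 2, colormap = Reverse(:plasma),
                gridsize = (32, 32), arrow_size = 10)
    display(fig)
end;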

Julia JuMP: making sure a nonlinear objective function has the correct function signature so that autodifferentiation works properly?

So I wrote a minimal example to show what I'm trying to do. Basically, I want to solve an optimization problem with multiple variables. When I tried to do this in JuMP, I had issues with my function obj not being able to take a ForwardDiff object.
I looked at this question, which seemed to be about the function signature: Restricting function signatures while using ForwardDiff in Julia. I did this in my obj function, and for insurance did it in my sub-function as well, but I still get the error:
LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float64(::T) where T<:Number at boot.jl:715
Float64(::Int8) at float.jl:60
This still does not work. I feel like I have the bulk of the code correct, just some weird type issue going on that I have to clear up so autodifferentiation works...
Any suggestions?
using JuMP
using Ipopt
using LinearAlgebra

function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2, 2] = var
    return eye
end

m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0 * pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)

# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
Use this instead:

function obj(x::Vector{T}) where {T}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{T}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T}
    eye = Matrix{T}(I, 2, 2)
    eye[2, 2] = var
    return eye
end
Essentially, anywhere you see Float64, replace it by the type in the incoming argument.
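As a quick illustration of why that matters (a sketch, assuming ForwardDiff is installed): a Matrix{Float64} simply cannot store the ForwardDiff.Dual numbers that autodiff pushes through the function, which is exactly the MethodError shown above.

using ForwardDiff

d = ForwardDiff.Dual(1.0, 1.0)    # the kind of number autodiff passes in
a = zeros(Float64, 2, 2)
# a[1, 1] = d                     # would throw: no method matching Float64(::Dual)
aT = zeros(typeof(d), 2, 2)       # container element type taken from the input
aT[1, 1] = d                      # works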
I found the problem:
In my mat_fun, the element type of the returned matrix had to be Real in order for it to propagate through. Before, it was Float64, which was not consistent with the fact that, I guess, all types have to be Real with autodifferentiation. Even though a Float64 is clearly a Real, it looks like the inheritance isn't preserved, i.e. you have to make sure everything that is returned and inputted is of type Real.
using JuMP
using Ipopt
using LinearAlgebra

function obj(x::AbstractVector{T}) where {T<:Real}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    # println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = zeros(Real, (2, 2))
    eye[2, 2] = var
    return eye
end

m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0 * pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)

# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))

An unexpected variable reference change during a Julia Zygote gradient pullback

I'm using Julia Flux.jl to train models. But when I customize my model, there is an issue: a variable reference in the gradient pullback function doesn't behave as expected. For simplicity, the code reproducing the issue is shown below:
using Flux  # for Dense, params, and gradient

let
    x = rand(Float32, (10, 1))
    v_layer = [
        Dense(10, 11),
        Dense(10, 12),
    ]
    ps = Flux.Params([
        vcat((collect.(params.(v_layer)))...)...,
    ])
    function model(x)
        out = x
        for (layer_idx, layer) = enumerate(v_layer)
            @show layer_idx, size(x)
            out = layer(x)
        end
        return out
    end
    gs = gradient(ps) do
        sum(model(x))
    end
end
If we run the code, the size of x in the for loop changes, according to the @show macro output:
(layer_idx, size(x)) = (1, (10, 1))
(layer_idx, size(x)) = (2, (11, 1))
And we get the following error message:
DimensionMismatch("A has dimensions (12,10) but B has dimensions (11,1)")
I think rebinding out in the for loop should not change what x refers to, but that is what happened. Besides, if we put the whole body of the model function inside the gradient do block as follows:
gs = gradient(ps) do
    out = x
    for (layer_idx, layer) = enumerate(v_layer)
        @show layer_idx, size(x)
        out = layer(x)
    end
    sum(out)
end
it works just fine. Why is this?
P.S. A slightly more complex but more practical version of the code is as follows, and it produces the same error.
let
    v_layer_A = [
        Dense(10, 11),
        Dense(11, 12),
    ]
    v_layer_B = [
        Dense(10, 11),
        Dense(10, 12),
    ]
    x = rand(Float32, (10, 1))
    ps = Flux.Params([
        vcat((collect.(params.(v_layer_A)))...)...,
        vcat((collect.(params.(v_layer_B)))...)...,
    ])
    function model(x)
        out = x
        for (idx, (layer_A, layer_B)) = enumerate(zip(v_layer_A, v_layer_B))
            @show idx, size(x)
            out_A = layer_A(out)
            out_B = layer_B(x)
            out = out_A .* out_B
        end
        return out
    end
    gs = gradient(ps) do
        sum(model(x))
    end
end
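As a side note, unrelated to the bug itself: I believe Flux.params can traverse containers of layers directly, so the Params construction above can likely be simplified (a hedged suggestion, not something the original post does):

ps = Flux.params(v_layer)                 # first example
ps = Flux.params(v_layer_A, v_layer_B)    # second example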

What does "argmax().I" mean in Julia?

Here is a great example from StatWithJuliaBook (please see the code below).
It demos how to smooth a plot of a starry sky, stars.png.
My question is about argmax().I. According to the author, "Note the use of the trailing “.I” at the end of each argmax, which extracts the values of the co-ordinates in column-major."
What does it mean? Is there another parameter? I can't find any description in the documentation.
According to the author, it seems to be the position of the column-wise maximum value, yet when I tried argmax(gImg, dims=2), the result was different.
julia> yOriginal, xOriginal = argmax(gImg).I
(192, 168)

julia> yy, xx = argmax(gImg, dims = 2)
400×1 Matrix{CartesianIndex{2}}:
 CartesianIndex(1, 187)
 CartesianIndex(2, 229)
 ⋮
 CartesianIndex(399, 207)
 CartesianIndex(400, 285)

julia> yy, xx
(CartesianIndex(1, 187), CartesianIndex(2, 229))
Please advise.
using Plots, Images; pyplot()

img = load("stars.png")
gImg = red.(img) * 0.299 + green.(img) * 0.587 + blue.(img) * 0.114
rows, cols = size(img)
println("Highest intensity pixel: ", findmax(gImg))

function boxBlur(image, x, y, d)
    if x <= d || y <= d || x >= cols - d || y >= rows - d
        return image[x, y]
    else
        total = 0.0
        for xi = x-d:x+d
            for yi = y-d:y+d
                total += image[xi, yi]
            end
        end
        return total / ((2d + 1)^2)
    end
end

blurImg = [boxBlur(gImg, x, y, 5) for x in 1:cols, y in 1:rows]

yOriginal, xOriginal = argmax(gImg).I
yBoxBlur, xBoxBlur = argmax(blurImg).I

p1 = heatmap(gImg, c = :Greys, yflip = true)
p1 = scatter!((xOriginal, yOriginal), ms = 60, ma = 0, msw = 4, msc = :red)
p2 = heatmap(blurImg, c = :Greys, yflip = true)
p2 = scatter!((xBoxBlur, yBoxBlur), ms = 60, ma = 0, msw = 4, msc = :red)
plot(p1, p2, size = (800, 400), ratio = :equal, xlims = (0, cols), ylims = (0, rows),
     colorbar_entry = false, border = :none, legend = :none)
I is a field of the CartesianIndex object that argmax returns when its argument has more than one dimension.
If in doubt, always try using dump.
Please consider the code below:
julia> arr = rand(4,4)
4×4 Matrix{Float64}:
0.971271 0.0350186 0.20805 0.284678
0.348161 0.19649 0.30343 0.291894
0.385583 0.990593 0.216894 0.814146
0.283823 0.750008 0.266643 0.473104
julia> el = argmax(arr)
CartesianIndex(3, 2)
julia> dump(el)
CartesianIndex{2}
I: Tuple{Int64, Int64}
1: Int64 3
2: Int64 2
However, getting CartesianIndex object data via its internal structure is not very elegant. The nice Julian way to do it is to use the appropriate method:
julia> Tuple(el)
(3, 2)
Or just access the indices directly:
julia> el[1], el[2]
(3, 2)
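On the dims = 2 observation in the question: argmax(A) returns the single CartesianIndex of the global maximum, whereas argmax(A, dims = 2) returns one CartesianIndex per row (the maximum within each row), which is why the two results differ. A small sketch:

julia> A = [1 9 3;
            7 2 8];

julia> argmax(A)            # global maximum (the 9)
CartesianIndex(1, 2)

julia> argmax(A, dims = 2)  # per-row maxima
2×1 Matrix{CartesianIndex{2}}:
 CartesianIndex(1, 2)
 CartesianIndex(2, 3)

julia> Tuple(argmax(A))     # same as argmax(A).I, without touching internals
(1, 2)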
