I'm trying to apply automatic differentiation (ForwardDiff) to a function that contains an instance of find_zero (Roots) and am encountering an error that seems to relate to find_zero not accepting the ForwardDiff.Dual type.
Here's a (contrived) minimal working example that illustrates the issue:
using Distributions
using Roots
using StatsFuns
using ForwardDiff
function test_fun(θ::AbstractVector{T}) where T
    μ, σ, p = θ
    z_star = find_zero(z -> logistic(z) - p, 0.0)
    return pdf(Normal(μ, σ), z_star)
end
test_fun([0.0,1.0,0.75])
ForwardDiff.gradient(test_fun,[0.0,1.0,0.75])
This results in the following error:
ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float64(::T) where T<:Number at boot.jl:716
Float64(::Irrational{:invsqrt2}) at irrationals.jl:189
...
Stacktrace:
[1] convert(::Type{Float64}, ::ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}) at ./number.jl:7
[2] setproperty!(::Roots.UnivariateZeroState{Float64,ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}, ::Symbol, ::ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}) at ./Base.jl:34
[3] update_state(::Roots.Secant, ::Roots.DerivativeFree{Roots.DerivativeFree{var"#5#6"{ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}}}, ::Roots.UnivariateZeroState{Float64,ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}, ::Roots.UnivariateZeroOptions{Float64,Float64,ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3},ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}) at /bbkinghome/asharris/.julia/packages/Roots/TZpjF/src/derivative_free.jl:163
[4] find_zero(::Roots.Secant, ::Roots.AlefeldPotraShi, ::Roots.DerivativeFree{Roots.DerivativeFree{var"#5#6"{ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}}}, ::Roots.UnivariateZeroState{Float64,ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}, ::Roots.UnivariateZeroOptions{Float64,Float64,ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3},ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}, ::Roots.NullTracks) at /bbkinghome/asharris/.julia/packages/Roots/TZpjF/src/find_zero.jl:868
[5] find_zero(::Roots.DerivativeFree{var"#5#6"{ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3}}}, ::Float64, ::Roots.Secant, ::Roots.AlefeldPotraShi; tracks::Roots.NullTracks, verbose::Bool, p::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /bbkinghome/asharris/.julia/packages/Roots/TZpjF/src/find_zero.jl:689
[6] #find_zero#36 at /bbkinghome/asharris/.julia/packages/Roots/TZpjF/src/derivative_free.jl:123 [inlined]
[7] find_zero at /bbkinghome/asharris/.julia/packages/Roots/TZpjF/src/derivative_free.jl:120 [inlined]
[8] #find_zero#5 at /bbkinghome/asharris/.julia/packages/Roots/TZpjF/src/find_zero.jl:707 [inlined]
[9] find_zero at /bbkinghome/asharris/.julia/packages/Roots/TZpjF/src/find_zero.jl:707 [inlined]
[10] test_fun at ./REPL[7856]:3 [inlined]
[11] vector_mode_dual_eval at /bbkinghome/asharris/.julia/packages/ForwardDiff/QOqCN/src/apiutils.jl:37 [inlined]
[12] vector_mode_gradient(::typeof(test_fun), ::Array{Float64,1}, ::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3,Array{ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3},1}}) at /bbkinghome/asharris/.julia/packages/ForwardDiff/QOqCN/src/gradient.jl:106
[13] gradient(::Function, ::Array{Float64,1}, ::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3,Array{ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3},1}}, ::Val{true}) at /bbkinghome/asharris/.julia/packages/ForwardDiff/QOqCN/src/gradient.jl:19
[14] gradient(::Function, ::Array{Float64,1}, ::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3,Array{ForwardDiff.Dual{ForwardDiff.Tag{typeof(test_fun),Float64},Float64,3},1}}) at /bbkinghome/asharris/.julia/packages/ForwardDiff/QOqCN/src/gradient.jl:17 (repeats 2 times)
[15] top-level scope at REPL[7858]:1
[16] run_repl(::REPL.AbstractREPL, ::Any) at /builddir/build/BUILD/julia/build/usr/share/julia/stdlib/v1.5/REPL/src/REPL.jl:288
I have limited experience using the ForwardDiff package and am probably misunderstanding how the Dual type works, so I would really appreciate it if someone knows how to solve this issue. Thanks so much!
z_star = find_zero(z -> logistic(z) - p, 0.0)
You have a fixed initial guess, 0.0, which is a plain Float64 rather than a Dual, so Roots sets up its internal state with Float64 slots and then fails when the secant step tries to store a Dual in one of them. Make the initial guess dual by matching the element type of θ:
z_star = find_zero(z -> logistic(z) - p, zero(eltype(θ)))
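Here is how the full example reads with that one change (a sketch; everything else is taken verbatim from the question):
using Distributions
using Roots
using StatsFuns
using ForwardDiff

function test_fun(θ::AbstractVector{T}) where T
    μ, σ, p = θ
    # seed the solver with a value of the same type as θ's elements,
    # so the Dual partials can propagate through the iterations
    z_star = find_zero(z -> logistic(z) - p, zero(eltype(θ)))
    return pdf(Normal(μ, σ), z_star)
end

test_fun([0.0, 1.0, 0.75])
ForwardDiff.gradient(test_fun, [0.0, 1.0, 0.75])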
In the REPL, after using Plots, executing bar(["One","Two","Three"], [1,2,3]) works; it produces a vertical bar chart.
However, I would like to have the x axis as the y axis and vice versa: the same data, just represented with horizontal bars. How would I go about doing this?
Trying bar([1,2,3], ["One","Two","Three"]) gives:
ERROR: MethodError: no method matching AbstractFloat(::Type{String})
Closest candidates are:
AbstractFloat(::Bool) at float.jl:258
AbstractFloat(::Int8) at float.jl:259
AbstractFloat(::Int16) at float.jl:260
...
Stacktrace:
[1] float(::Type{T} where T) at ./float.jl:277
[2] _preprocess_barlike(::RecipesPipeline.DefaultsDict, ::Array{Float64,1}, ::Array{String,1}) at /home/dan/.julia/packages/Plots/GDtiZ/src/recipes.jl:509
[3] macro expansion at /home/dan/.julia/packages/Plots/GDtiZ/src/recipes.jl:359 [inlined]
[4] apply_recipe(::RecipesPipeline.DefaultsDict, ::Type{Val{:bar}}, ::Array{Float64,1}, ::Array{String,1}, ::Nothing) at /home/dan/.julia/packages/RecipesBase/aQmWx/src/RecipesBase.jl:281
[5] _process_seriesrecipe(::Plots.Plot{Plots.GRBackend}, ::RecipesPipeline.DefaultsDict) at /home/dan/.julia/packages/RecipesPipeline/tkFmN/src/series_recipe.jl:48
[6] _process_seriesrecipes!(::Plots.Plot{Plots.GRBackend}, ::Array{Dict{Symbol,Any},1}) at /home/dan/.julia/packages/RecipesPipeline/tkFmN/src/series_recipe.jl:25
[7] recipe_pipeline!(::Plots.Plot{Plots.GRBackend}, ::Dict{Symbol,Any}, ::Tuple{Array{Int64,1},Array{String,1}}) at /home/dan/.julia/packages/RecipesPipeline/tkFmN/src/RecipesPipeline.jl:96
[8] _plot!(::Plots.Plot{Plots.GRBackend}, ::Dict{Symbol,Any}, ::Tuple{Array{Int64,1},Array{String,1}}) at /home/dan/.julia/packages/Plots/GDtiZ/src/plot.jl:167
[9] plot(::Array{Int64,1}, ::Vararg{Any,N} where N; kw::Base.Iterators.Pairs{Symbol,Symbol,Tuple{Symbol},NamedTuple{(:seriestype,),Tuple{Symbol}}}) at /home/dan/.julia/packages/Plots/GDtiZ/src/plot.jl:57
[10] bar(::Array{Int64,1}, ::Vararg{Any,N} where N; kw::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/dan/.julia/packages/RecipesBase/aQmWx/src/RecipesBase.jl:402
[11] bar(::Array{Int64,1}, ::Vararg{Any,N} where N) at /home/dan/.julia/packages/RecipesBase/aQmWx/src/RecipesBase.jl:402
[12] top-level scope at REPL[41]:1
Executing bar(y=["One","Two","Three"], x=[1,2,3]) doesn't give me what I want either.
I am thoroughly confused about what to try next.
bar(["One","Two","Three"], [1,2,3], orientation=:h)
"Horizontal or vertical orientation for bar types. Values :h, :hor, :horizontal correspond to horizontal (sideways, anchored to y-axis), and :v, :vert, and :vertical correspond to vertical (the default)."[1]
REF:
[1] http://docs.juliaplots.org/latest/generated/attributes_series/
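A short sketch showing both orientations side by side (same data as in the question):
using Plots

labels = ["One", "Two", "Three"]
values = [1, 2, 3]

bar(labels, values)                    # default: vertical bars
bar(labels, values, orientation = :h)  # same data, drawn as horizontal bars
Note that the positional arguments stay in the same order; only the orientation attribute changes how the bars are drawn.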
Suppose I make my own custom vector type with its own custom show method:
struct MyVector{T} <: AbstractVector{T}
    v::Vector{T}
end

function Base.show(io::IO, v::MyVector{T}) where {T}
    println(io, "My custom vector with eltype $T with elements")
    for i in eachindex(v)
        println(io, " ", v.v[i])
    end
end
If I try making one of these objects at the REPL I get unexpected errors related to functions I never intended to call:
julia> MyVector([1, 2, 3])
Error showing value of type MyVector{Int64}:
ERROR: MethodError: no method matching size(::MyVector{Int64})
Closest candidates are:
size(::AbstractArray{T,N}, ::Any) where {T, N} at abstractarray.jl:38
size(::BitArray{1}) at bitarray.jl:77
size(::BitArray{1}, ::Integer) at bitarray.jl:81
...
Stacktrace:
[1] axes at ./abstractarray.jl:75 [inlined]
[2] summary(::IOContext{REPL.Terminals.TTYTerminal}, ::MyVector{Int64}) at ./show.jl:1877
[3] show(::IOContext{REPL.Terminals.TTYTerminal}, ::MIME{Symbol("text/plain")}, ::MyVector{Int64}) at ./arrayshow.jl:316
[4] display(::REPL.REPLDisplay, ::MIME{Symbol("text/plain")}, ::Any) at /Users/mason/julia/usr/share/julia/stdlib/v1.3/REPL/src/REPL.jl:132
[5] display(::REPL.REPLDisplay, ::Any) at /Users/mason/julia/usr/share/julia/stdlib/v1.3/REPL/src/REPL.jl:136
[6] display(::Any) at ./multimedia.jl:323
...
Okay, whatever, so I'll implement Base.size so it'll leave me alone:
julia> Base.size(v::MyVector) = size(v.v)
julia> MyVector([1, 2, 3])
3-element MyVector{Int64}:
Error showing value of type MyVector{Int64}:
ERROR: getindex not defined for MyVector{Int64}
Stacktrace:
[1] error(::String, ::Type) at ./error.jl:42
[2] error_if_canonical_getindex(::IndexCartesian, ::MyVector{Int64}, ::Int64) at ./abstractarray.jl:991
[3] _getindex at ./abstractarray.jl:980 [inlined]
[4] getindex at ./abstractarray.jl:981 [inlined]
[5] isassigned(::MyVector{Int64}, ::Int64, ::Int64) at ./abstractarray.jl:405
[6] alignment(::IOContext{REPL.Terminals.TTYTerminal}, ::MyVector{Int64}, ::UnitRange{Int64}, ::UnitRange{Int64}, ::Int64, ::Int64, ::Int64) at ./arrayshow.jl:67
[7] print_matrix(::IOContext{REPL.Terminals.TTYTerminal}, ::MyVector{Int64}, ::String, ::String, ::String, ::String, ::String, ::String, ::Int64, ::Int64) at ./arrayshow.jl:186
[8] print_matrix at ./arrayshow.jl:159 [inlined]
[9] print_array at ./arrayshow.jl:308 [inlined]
[10] show(::IOContext{REPL.Terminals.TTYTerminal}, ::MIME{Symbol("text/plain")}, ::MyVector{Int64}) at ./arrayshow.jl:345
[11] display(::REPL.REPLDisplay, ::MIME{Symbol("text/plain")}, ::Any) at /Users/mason/julia/usr/share/julia/stdlib/v1.3/REPL/src/REPL.jl:132
[12] display(::REPL.REPLDisplay, ::Any) at /Users/mason/julia/usr/share/julia/stdlib/v1.3/REPL/src/REPL.jl:136
[13] display(::Any) at ./multimedia.jl:323
...
Hmm, now it wants getindex:
julia> Base.getindex(v::MyVector, args...) = getindex(v.v, args...)
julia> MyVector([1, 2, 3])
3-element MyVector{Int64}:
1
2
3
What? That wasn't the print formatting I told it to do! What's going on here?
The problem is that in Julia, Base defines a method Base.show(io::IO, ::MIME"text/plain", X::AbstractArray) which, for the purposes of REPL display, is actually more specific than your Base.show(io::IO, v::MyVector). This section of the Julia manual describes the sort of custom printing that AbstractArray uses. So if we want to use our custom show method, we instead need to do:
julia> function Base.show(io::IO, ::MIME"text/plain", v::MyVector{T}) where {T}
           println(io, "My custom vector with eltype $T and elements")
           for i in eachindex(v)
               println(io, " ", v.v[i])
           end
       end
julia> MyVector([1, 2, 3])
My custom vector with eltype Int64 and elements
1
2
3
See also: https://discourse.julialang.org/t/extending-base-show-for-array-of-types/31289
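For reference, here is the whole exercise consolidated into one sketch: the two interface methods (size and getindex) that the default array machinery needs, plus the three-argument show that the REPL's display path actually calls. The plain two-argument show(io, v) is still what print uses and what gets called when a MyVector appears inside another container, so you can keep a custom method for that as well.
struct MyVector{T} <: AbstractVector{T}
    v::Vector{T}
end

# minimal AbstractArray interface
Base.size(v::MyVector) = size(v.v)
Base.getindex(v::MyVector, i::Int) = v.v[i]

# used by the REPL, which requests the MIME"text/plain" form
function Base.show(io::IO, ::MIME"text/plain", v::MyVector{T}) where {T}
    println(io, "My custom vector with eltype $T and elements")
    for i in eachindex(v)
        println(io, " ", v.v[i])
    end
end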
When I tried to calculate the following in the REPL, I got:
julia> -2.3^-7.6
-0.0017818389423254909
But the result given by my calculator is
0.0005506 + 0.001694 i
Just to be safe I tried it again, this time assigning the numbers to variables first, and now it complains. Why does it not complain the first time?
julia> a = -2.3; b = -7.6; a^b
ERROR: DomainError with -2.6:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Stacktrace:
[1] throw_exp_domainerror(::Float64) at ./math.jl:35
[2] ^(::Float64, ::Float64) at ./math.jl:769
[3] top-level scope at none:0
[4] eval at ./boot.jl:319 [inlined]
[5] #85 at /Users/ssiew/.julia/packages/Atom/jodeb/src/repl.jl:129 [inlined]
[6] with_logstate(::getfield(Main, Symbol("##85#87")),::Base.CoreLogging.LogState) at ./logging.jl:397
[7] with_logger(::Function, ::Atom.Progress.JunoProgressLogger) at ./logging.jl:493
[8] top-level scope at /Users/ssiew/.julia/packages/Atom/jodeb/src/repl.jl:128
This is an order-of-operations issue. You can see how Julia is parsing that expression:
julia> parse("-2.3^-7.6")
:(-(2.3 ^ -7.6))
and so the reason you don't have any problems is that you're actually taking 2.3 ^ (-7.6), which is 0.0017818389423254909, and then flipping the sign.
Your second approach is equivalent to making sure that the x in x^y really is negative:
julia> parse("(-2.3)^-7.6")
:(-2.3 ^ -7.6)
julia> eval(parse("(-2.3)^-7.6"))
ERROR: DomainError:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Stacktrace:
[1] nan_dom_err at ./math.jl:300 [inlined]
[2] ^(::Float64, ::Float64) at ./math.jl:699
[3] eval(::Module, ::Any) at ./boot.jl:235
[4] eval(::Any) at ./boot.jl:234
And if we follow that instruction, we get what you expect:
julia> Complex(-2.3)^-7.6
0.0005506185144176565 + 0.0016946295370871215im
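On current Julia versions the expression parser is called as Meta.parse rather than parse, but the precedence is the same, so you can verify this yourself:
julia> Meta.parse("-2.3^-7.6")      # unary minus applies after ^
:(-(2.3 ^ -7.6))

julia> Meta.parse("(-2.3)^-7.6")    # parentheses force a negative base
:(-2.3 ^ -7.6)

julia> (-2.3 + 0im)^-7.6            # complex base, as the error message suggests
0.0005506185144176565 + 0.0016946295370871215im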
Dear users of the Julia language, I have a problem when using the optimize function of the Optim package. What is the error in the code below?
using Optim
using Distributions
rng = MersenneTwister(1234);
d = Weibull(1,1)
x = rand(d,1000)
function pdf_weibull(x, lambda, k)
    k/lambda * (x/lambda).^(k-1) * exp((-x/lambda)^k)
end

function obj(x::Vector, lambda, k)
    soma = 0
    for i in x
        soma = soma + log(pdf_weibull(i, lambda, k))
    end
    -soma
end
obj(x, pars) = obj(x, pars...)
optimize(vars -> obj(x, vars...), [1.0,1.0])
Output
julia> optimize(vars -> obj(x, vars...), [1.0,1.0])
ERROR: DomainError:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Stacktrace:
[1] nan_dom_err at ./math.jl:300 [inlined]
[2] ^ at ./math.jl:699 [inlined]
[3] (::##2#4)(::Float64, ::Float64, ::Float64) at ./<missing>:0
[4] pdf_weibull(::Float64, ::Float64, ::Float64) at ./REPL[6]:2
[5] obj(::Array{Float64,1}, ::Float64, ::Float64) at ./REPL[7]:4
[6] (::##5#6)(::Array{Float64,1}) at ./REPL[11]:1
[7] value(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}) at /home/pedro/.julia/v0.6/NLSolversBase/src/interface.jl:19
[8] initial_state(::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}, ::Optim.Options{Float64,Void}, ::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}) at /home/pedro/.julia/v0.6/Optim/src/multivariate/solvers/zeroth_order/nelder_mead.jl:139
[9] optimize(::NLSolversBase.NonDifferentiable{Float64,Array{Float64,1},Val{false}}, ::Array{Float64,1}, ::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}, ::Optim.Options{Float64,Void}) at /home/pedro/.julia/v0.6/Optim/src/multivariate/optimize/optimize.jl:25
[10] #optimize#151(::Array{Any,1}, ::Function, ::Tuple{##5#6}, ::Array{Float64,1}) at /home/pedro/.julia/v0.6/Optim/src/multivariate/optimize/interface.jl:62
[11] #optimize#148(::Array{Any,1}, ::Function, ::Function, ::Array{Float64,1}) at /home/pedro/.julia/v0.6/Optim/src/multivariate/optimize/interface.jl:52
[12] optimize(::Function, ::Array{Float64,1}) at /home/pedro/.julia/v0.6/Optim/src/multivariate/optimize/interface.jl:52
[13] macro expansion at ./REPL.jl:97 [inlined]
[14] (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:73
It is a simple problem: obtaining the maximum likelihood estimates of the parameters that index the Weibull distribution.
Best regards.
The reason for your problem is that your definition of pdf_weibull is incorrect. Here is a corrected definition:
function pdf_weibull(x, lambda, k)
    k/lambda * (x/lambda)^(k-1) * exp(-(x/lambda)^k)
end
Note that I have moved the - sign so that it negates the whole (x/lambda)^k term inside exp (and dropped the unnecessary broadcasting dot, since x is a scalar here). If you change this, everything will work as expected.
Now, why does Julia complain with a DomainError? Because of the error in your code, you end up trying to compute something like (-1.0)^0.5. In Julia, ^ is implemented in a type-stable way. This means, in particular, that when it is passed a Float64 as both arguments it is guaranteed to return a Float64 or throw an error. Clearly (-1.0)^0.5 cannot be computed in the real domain, which is why an error is thrown. If you instead wrote (-1 + 0im)^0.5 there would be no error: you are passing a complex number to ^, so the result can also be a complex number in a type-stable way.
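Putting it together, here is a sketch of the full corrected script (on Julia 1.x, MersenneTwister additionally requires using Random; the parameter names and starting values are taken from the question):
using Optim
using Distributions
using Random

rng = MersenneTwister(1234)
d = Weibull(1, 1)
x = rand(rng, d, 1000)

function pdf_weibull(x, lambda, k)
    # the minus sign now negates the whole (x/lambda)^k term
    k / lambda * (x / lambda)^(k - 1) * exp(-(x / lambda)^k)
end

function obj(x::Vector, lambda, k)
    soma = 0.0
    for i in x
        soma += log(pdf_weibull(i, lambda, k))
    end
    return -soma
end

res = optimize(vars -> obj(x, vars...), [1.0, 1.0])
Optim.minimizer(res)   # maximum likelihood estimates of (lambda, k)
Unbounded Nelder-Mead can in principle step to negative parameter values (which would again throw a DomainError), so for a starting point far from the optimum it can be safer to optimize over log(lambda) and log(k).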