I am trying to estimate a spatial autoregressive (SAR) model in Julia using Jim LeSage's MATLAB code. I first have to maximize the concentrated log-likelihood function with respect to the rho parameter.
I wrote the following likelihood function in Julia:
function like_sar(rho, epe0, eped, epe0d, n, W)
    # PURPOSE: evaluates the concentrated log-likelihood for the
    # spatial autoregressive model using sparse matrix algorithms
    # ---------------------------------------------------
    # USAGE: llike = like_sar(rho, epe0, eped, epe0d, n, W)
    # where: rho   = spatial autoregressive parameter
    #        epe0  = see below
    #        eped  = see below
    #        epe0d = see below
    #        n     = # of obs
    #        b0    = AI*xs'*ys;
    #        bd    = AI*xs'*Wys;
    #        e0    = ys - xs*b0;
    #        ed    = Wys - xs*bd;
    #        epe0  = e0'*e0;
    #        eped  = ed'*ed;
    #        epe0d = ed'*e0;
    z = epe0 - 2*rho*epe0d + rho*rho*eped
    A = speye(n) - rho*W
    sar_like = (n/2)*log(z) - log(det(A))
    return sar_like, rho
end
I generate data and pass all of the arguments to the function, and it returns the value of the likelihood function and the rho parameter.
However, when I try to use the Optim package to maximize this likelihood, I receive the following error:
optimize(like_sar,[rho,epe0,eped,epe0d,n,W])
ERROR: MethodError: no method matching zero(::Type{Any})
Closest candidates are:
zero(::Type{Base.LibGit2.GitHash}) at libgit2\oid.jl:106
zero(::Type{Base.Pkg.Resolve.VersionWeights.VWPreBuildItem}) at pkg\resolve\versionweight.jl:82
zero(::Type{Base.Pkg.Resolve.VersionWeights.VWPreBuild}) at pkg\resolve\versionweight.jl:124
...
Stacktrace:
[1] promote_objtype(::Optim.NelderMead{Optim.AffineSimplexer,Optim.AdaptiveParameters}, ::Array{Any,1}, ::Function) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:39
[2] #optimize#151(::Array{Any,1}, ::Function, ::Tuple{#like_sar}, ::Array{Any,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:57
[3] #optimize#148(::Array{Any,1}, ::Function, ::Function, ::Array{Any,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:52
[4] optimize(::Function, ::Array{Any,1}) at C:\Users\dolacomb\.julia\v0.6\Optim\src\multivariate/optimize\interface.jl:52
[5] eval(::Module, ::Any) at .\boot.jl:235
I'm not sure what I am doing wrong here, as it seems like a fairly simple univariate optimization over rho, but I'm fairly new to coding in Julia.
Any help would be greatly appreciated. I am planning on converting all of the LeSage code to Julia and have already done a majority of the Bayesian routines (which are much easier, IMHO) and the support functions, e.g. log determinant calculations, credible intervals, weight matrix creation, etc.
If I understand your case correctly, you need to do univariate optimization, in which case it is best to use https://github.com/JuliaNLSolvers/Optim.jl/blob/master/docs/src/user/minimization.md#minimizing-a-univariate-function-on-a-bounded-interval (if you know the initial interval; in your problem I would guess it should be [-1, 1]).
Then you should pass the solver a function that takes one argument and returns one value. In your case a simple anonymous function will do, which leads to the following call:
optimize(rho -> -like_sar(rho,epe0,eped,epe0d,n,W)[1], -1, 1)
Of course, you would have to have epe0, eped, epe0d, n, and W defined in the enclosing scope of the call for this to work.
In the definition I added a minus sign before like_sar, as optimize minimizes a function.
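Putting it together, a minimal sketch of the full call (assuming epe0, eped, epe0d, n, and W have already been computed from your data; the variable names below are illustrative) would be:

using Optim

# Bounded univariate optimization uses Brent's method by default.
res = optimize(rho -> -like_sar(rho, epe0, eped, epe0d, n, W)[1], -1.0, 1.0)

rho_hat = Optim.minimizer(res)  # the rho that maximizes the log-likelihood
llmax   = -Optim.minimum(res)   # the maximized log-likelihood value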
I am trying to write a function in Julia that solves the nonlinear equation f(x)=0 using Newton iteration. I am very much a beginner in Julia so bear with me. In the assignment, my instructor provided this first line of code:
newton(f, x_0; maxiterations=10, tolerance=1e-14, epsilon=1e-7)
He also provided this statement: "the optional epsilon argument provides the value of ε used in the finite-difference approximation f′(x) ≈ (f(x + ε) − f(x)) / ε." In the past, I've created a function for Newton's Method in MATLAB but I assumed that the first derivative of f(x) must be one of the function's inputs. In this assignment, it appears that he wants me to use this approximation formula.
Anyways, here is my code so far.
function newton(f,x_0; maxiterations=10, tolerance=1e-14, epsilon=1e-7)
x = [x_0] # assign x_0 to x and make x a vector
fd = (f(x.+epsilon) - f(x))./epsilon # fd is the first derivative of f(x), calculated from
# the finite-difference approximation
# create for loop to begin iteration
for n = 0:maxiterations
if abs(x[n+1]-x[n]) < tolerance # if the absolute value of the difference of x[n+1] and x[n]
# is less than the tolerance, then the value of x is returned
return x
end
if abs(f(x[n])) < tolerance # if the absolute value of f(x[n]) is less than the tolerance,
# then the value of x is returned
return x
end
push!(x, x[n] - (f(x[n]))/fd) # push each calculated value to the end of vector x,
# and continue iterating
end
return x # after iteration is complete, return the vector x
end
After executing this function, I defined the equation which should be used to determine sqrt(13) and called the newton function with an initial guess of x_0=3.
f(x) = x^2 - 13
newton(f,3)
Here is the error message I'm encountering after the newton function is called:
MethodError: no method matching ^(::Vector{Float64}, ::Int64)
Closest candidates are:
^(::Union{AbstractChar, AbstractString}, ::Integer) at strings/basic.jl:730
^(::LinearAlgebra.Hermitian, ::Integer) at /Applications/Julia-1.8.app/Contents/Resources/julia/share/julia/stdlib/v1.8/LinearAlgebra/src/symmetric.jl:696
^(::LinearAlgebra.Hermitian{T, S} where S<:(AbstractMatrix{<:T}), ::Real) where T at /Applications/Julia-1.8.app/Contents/Resources/julia/share/julia/stdlib/v1.8/LinearAlgebra/src/symmetric.jl:707
...
Stacktrace:
[1] literal_pow
  @ ./intfuncs.jl:340 [inlined]
[2] f(x::Vector{Float64})
  @ Main ./In[35]:3
[3] newton(f::typeof(f), x_0::Int64; maxiterations::Int64, tolerance::Float64, epsilon::Float64)
  @ Main ./In[34]:5
[4] newton(f::Function, x_0::Int64)
  @ Main ./In[34]:2
[5] top-level scope
  @ In[35]:4
[6] eval
  @ ./boot.jl:368 [inlined]
[7] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
  @ Base ./loading.jl:1428
In this problem, I'm supposed to make my function return a vector of successive approximations and make a logarithmic error plot that includes a curve for the theoretical error estimate for Newton iteration. I would appreciate any guidance in correcting the issue(s) in my code so I can move on to create the error plot.
Thank you.
You can use
newton(x->x.^2 .- 13, 3.)
to manipulate vectors instead of scalars. But your newton code is still buggy:
you start with a zero index (not valid in Julia), x[n+1] is used before it is assigned, and some mathematical steps are missing. I can propose the following:
function newton(f, x_0; maxiterations=10, tolerance=1e-14, epsilon=1e-7)
    x = x_0
    iterates = [x]  # iterates will store the sequence of iterates
    # create for loop to begin iteration
    for n = 1:maxiterations
        fd = (f(x .+ epsilon) - f(x)) ./ epsilon  # finite-difference approximation of f'(x)
        tmp = x - f(x) / fd
        if abs(tmp - x) < tolerance || abs(f(x)) < tolerance
            break
        end
        x = tmp
        push!(iterates, x)
    end
    return iterates
end
Now the result looks correct:
julia> newton(x->x.^2 .- 13, 3.)
5-element Vector{Float64}:
3.0
3.6666666569022053
3.6060606066709835
3.605551311439882
3.60555127546399
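For the logarithmic error plot mentioned in the question, a minimal sketch (the choice of Plots.jl here is my assumption, not part of the assignment) compares each iterate against sqrt(13) on a log scale:

using Plots

iterates = newton(x -> x.^2 .- 13, 3.0)
errs = abs.(iterates .- sqrt(13))  # true error at each iterate
plot(errs, yscale=:log10, marker=:circle,
     xlabel="iteration n", ylabel="|x_n - sqrt(13)|", label="observed error")
# Newton's method converges quadratically, so e_{n+1} ≈ C*e_n^2
# can be overlaid as the theoretical error estimate.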
This is my first post so I apologize for any formatting issues.
I'm trying to calculate the expected value of a collection of numbers in Julia, given a probability distribution that is the mixture of two Beta distributions. Using the following code gives the error seen below
using Distributions, Expectations, Statistics
d = MixtureModel([Beta(1,1),Beta(3,1.6)],[0.5,0.5])
E = expectation(d)
E*rand(32,1)
MethodError: no method matching *(::MixtureExpectation{Vector{IterableExpectation{Vector{Float64}, Vector{Float64}}}, Vector{Float64}}, ::Matrix{Float64})
If I use just a single Beta distribution, the above syntax works fine:
d = Beta(1,1)
E = expectation(d)
E*rand(32,1)
Out = 0.503
And if I use function notation in the expectation, I can calculate expectations of functions using the Mixture model as well.
d = MixtureModel([Beta(1,1),Beta(3,1.6)],[0.5,0.5])
E = expectation(d)
E(x -> x^2)
It just seems not to work when using the multiplication notation shown above.
A single distribution yields an IterableExpectation, which allows multiplication by an array, while a mixture distribution yields a MixtureExpectation, which allows multiplication only by a scalar. You can run typeof(E) to check the type in your code.
julia> methodswith(IterableExpectation)
[1] *(r::Real, e::IterableExpectation) in Expectations at C:\JuliaPkg\Julia-1.8.0\packages\Expectations\hZ5Gh\src\iterable.jl:53
[2] *(e::IterableExpectation, h::AbstractArray) in Expectations at C:\JuliaPkg\Julia-1.8.0\packages\Expectations\hZ5Gh\src\iterable.jl:44
...
julia> methodswith(MixtureExpectation)
[1] *(r::Real, e::MixtureExpectation) in Expectations at C:\JuliaPkg\Julia-1.8.0\packages\Expectations\hZ5Gh\src\mixturemodels.jl:15
...
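If you only need expected values under the mixture, one workaround (a sketch that uses only the functional form, which the thread shows does work for mixtures) is to combine the component expectations by hand using the weights from Distributions.jl:

using Distributions, Expectations

d = MixtureModel([Beta(1,1), Beta(3,1.6)], [0.5, 0.5])
w = probs(d)        # mixture weights
cs = components(d)  # the two Beta components
f(x) = x^2
# E_mixture[f] is the weighted sum of the component expectations
sum(w[i] * expectation(cs[i])(f) for i in eachindex(w))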
I am trying to solve an economic problem using the SymPy package in Julia. In this economic problem I have exogenous variables and endogenous variables, and I am indexing them all. I have two questions:
How to access the indexed variables to pass: calibrated values (to exogenous variables, calibrated in another environment) or formulas (to endogenous variables, determined by the first-order conditions of the agents' maximization problem using pencil and paper). This will also allow me to study the behavior of the equilibrium when I perturb exogenous variables. First, consider my attempt to pass calibrated values to exogenous variables.
using SymPy
# To index
n, N = sympy.symbols("n N", integer=true)
N = 3 # It can change
# Household
#exogenous variables
α = sympy.IndexedBase("α")
@syms γ
α2 = sympy.Sum(α[n], (n, 1, N))
equation_1 = Eq(α2 + γ, 1)
Equation_1 says that the α's plus γ sum to one. So I would like to pass values to the α vector according to another vector, alpha3, with calibrated parameters.
# Suppose
alpha3 = [1,2,3]
for n in 1:N
    α[n] = alpha3[n]
end
MethodError: no method matching setindex!(::Sym, ::Int64, ::Int64)
I will certainly do this step once the system is solved. Now, I want to pass formulas or expressions as functions of prices. Prices are endogenous and unknown variables. (As said before, the expressions were calculated using paper and pencil.)
# Price vector, Endogenous, unknown in the system equations
P = sympy.IndexedBase("P")
# Other exogenous variables to be calibrated.
z = sympy.IndexedBase("z")
s = sympy.IndexedBase("s")
Y = sympy.IndexedBase("Y")
# S[n] and D[n], supply and demand, are endogenous, but determined by the first-order conditions of the agents' maximization problem
# Supply and Demand
S = sympy.IndexedBase("S")
D = sympy.IndexedBase("D")
# (Hypothetical functions that I have to pass)
# S[n] = s[n]*P[n]
# D[n] = z[n]/P[n]
Once I can write the formulas for S[n] and D[n], consider the second question:
How do I specify the indexed endogenous variables (all prices in their indexed format P[n]) as the unknowns in the system of non-linear equations? I will ignore the possibility of not solving my system; suppose my system has a single solution or infinitely many (a manifold). So let's assume that I have more equations than variables:
# For all n, I want to determine N indexed equations (looping?)
Eq_n = Eq(S[n] - D[n],0)
# Some other equations relating the P[n]'s
Eq0 = Eq(sympy.Sum(P[n]*Y[n] , (n, 1, N)), 0 )
# Equations system
eq_system = [Eq_n,Eq0]
# Solving
solveset(eq_system,P[n])
Many thanks
There isn't any direct support for the IndexedBase feature of SymPy. As such, the syntax alpha[n] is not available. You can call the method __getitem__ directly, as with
alpha.__getitem__(n)
I don't see a corresponding __setitem__ documented, so I'm not sure whether
α[n]= alpha3[n]
is valid in sympy itself. But if there is some other assignment method, you would likely just call that instead of using [ for assignment.
As for the last question about equations, I'm not sure, but you would presumably find the size of the IndexedBase object and use that to loop.
If possible, using native Julia constructs would be preferred. For this example, you might just consider an array of variables. The recently changed @syms macro makes this easy to generate.
For example, I think the following mostly replicates what you are trying to do:
@syms n::integer, N::integer
#exogenous variables
N = 3
@syms α[1:3] # hard code 3 here or use `α = [Sym("αᵢ$i") for i ∈ 1:N]`
@syms γ
α2 = sum(α[i] for i ∈ 1:N)
equation_1 = Eq(α2 + γ, 1)
alpha3 = [1,2,3]
for n in 1:N
    α[n] = alpha3[n]
end
@syms P[1:3], z[1:3], s[1:3], Y[1:3], S[1:3], D[1:3]
Eq_n = [Eq(S[n], D[n]) for n ∈ 1:N]
Eq0 = Eq(sum(P .* Y), 0)
eq_system = [Eq_n; Eq0]  # flatten into a single vector of equations
solve(eq_system, P)      # solve for the price vector
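To illustrate the last step with the hypothetical functional forms from the question (S[n] = s[n]*P[n] and D[n] = z[n]/P[n]; this is only a sketch, and solve may return several roots), market clearing can be imposed and solved for the prices directly:

# impose the hypothetical supply and demand forms and solve S[n] = D[n]
market = [Eq(s[i]*P[i] - z[i]/P[i], 0) for i ∈ 1:N]
solve(market, P)  # P is the Vector{Sym} created by @syms P[1:3]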
Analogous to the example given in GSL.jl/examples/Quadrature.jl, I am trying to integrate a function. However, since this function has a singularity, I need to use the Cauchy weight. My idea was to use the following code:
using GSL
function Q(p)
    ws_size = 200
    ws = GSL.integration_workspace_alloc(ws_size)
    f_ = x -> 1/(x+p)
    f = GSL.@gsl_function(f_)
    result = Cdouble[0][1]
    epsrel = 1e-10
    epsabs = 1e-10
    abserr = Cdouble[0][1]
    limit = Csize_t[0][1]
    result = integration_qawc(f, 0., 1.e4, p, epsabs, epsrel, limit, ws, result, abserr)
    GSL.integration_workspace_free(ws)
    return result
end
However, I get the following error
UndefVarError: f_ not defined
Stacktrace:
[1] (::getfield(Main, Symbol("##117#118")))(::Float64, ::Ptr{Nothing}) at /home/varantir/.julia/packages/GSL/IVE5m/src/manual_wrappers.jl:45
[2] integration_qawc at /home/varantir/.julia/packages/GSL/IVE5m/src/gen/direct_wrappers/gsl_integration_h.jl:570 [inlined]
[3] Q(::Float64) at ./In[250]:14
[4] top-level scope at In[251]:1
Which seems a little bit strange to me, since I clearly have defined f_. Any ideas?
For a weird reason, this doesn't throw an error but returns 0:
function Q(p)
    ws_size = 200
    ws = GSL.integration_workspace_alloc(ws_size)
    f0(x::Float64)::Float64 = 1/(x+p)
    f = GSL.@gsl_function(f0)
    result = Cdouble[0][1]
    epsrel = 1e-10
    epsabs = 1e-10
    abserr = Cdouble[0][1]
    limit = Csize_t[0][1]
    result = integration_qawc(f, 0., 1.e4, p, epsabs, epsrel, limit, ws, result, abserr)
    GSL.integration_workspace_free(ws)
    return result
end
From the docs of integration_qawc:
The adaptive bisection algorithm of QAG is used, with modifications to ensure that subdivisions do not occur at the singular point x = c. When a subinterval contains the point x = c or is close to it then a special 25-point modified Clenshaw-Curtis rule is used to control the singularity. Further away from the singularity the algorithm uses an ordinary 15-point Gauss-Kronrod integration rule.
As an alternative, using QuadGK.jl:
using QuadGK
function G2(p)
    f(x) = 1/(x+p)
    a = 0.0
    b = 1e4
    if a < -p < b
        res, err = quadgk(f, a, -p, b, rtol=1e-10, atol=1e-10)
        return res
    else
        res, err = quadgk(f, a, b, rtol=1e-10, atol=1e-10)
        return res
    end
end
From the QuadGK docs:
The algorithm is an adaptive Gauss-Kronrod integration technique: the integral in each interval is estimated using a Kronrod rule (2*order+1 points) and the error is estimated using an embedded Gauss rule (order points). The interval with the largest error is then subdivided into two intervals and the process is repeated until the desired error tolerance is achieved.
These quadrature rules work best for smooth functions within each interval, so if your function has a known discontinuity or other singularity, it is best to subdivide your interval to put the singularity at an endpoint. For example, if f has a discontinuity at x=0.7 and you want to integrate from 0 to 1, you should use quadgk(f, 0,0.7,1) to subdivide the interval at the point of discontinuity. The integrand is never evaluated exactly at the endpoints of the intervals, so it is possible to integrate functions that diverge at the endpoints as long as the singularity is integrable (for example, a log(x) or 1/sqrt(x) singularity).
The default order is 7, so it is equivalent to the GSL integration.
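For p > 0 the integrand has no singularity inside [0, 1e4], so the QuadGK result can be sanity-checked against the closed form ∫₀ᵇ dx/(x+p) = log((b+p)/p):

p = 2.0
G2(p) ≈ log((1e4 + p)/p)  # closed form of the integral for p > 0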
I have the following code that evaluates the likelihood function for a spatial autoregressive model in Julia:
function like_sar2(betas, rho, sige, y, x, W)
    n = length(y)
    A = speye(n) - rho*W
    e = y - x*betas - rho*sparse(W)*y
    epe = e'*e
    tmp2 = 1/(2*sige)
    llike = -(n/2)*log(pi) - (n/2)*log(sige) + log(det(A)) - tmp2*epe
end
I am trying to maximize this function, but I'm not sure how to pass the different-sized function inputs so that the Optim.jl package will accept them. I have tried the following:
optimize(like_sar2,[betas;rho;sige;y;x;W],BFGS())
and
optimize(like_sar2,tuple(betas,rho,sige,y,x,W),BFGS())
In the first case, the matrix in brackets does not conform due to a dimension mismatch, and in the second, the Optim package doesn't allow tuples.
I'd like to maximize this likelihood function and have it return the numerical Hessian matrix (using the Optim options) so that I can compute t-statistics for the parameters.
If there is an easier way to obtain the numerical Hessian for such a function I'd use that, but it appears that packages like ForwardDiff only accept single inputs.
Any help would be greatly appreciated!
Not 100% sure I correctly understand how your function works, but it seems to me like you're using the likelihood to estimate the coefficient vector beta, with the other input variables fixed. The way to do this would be to amend the function as follows:
using Optim
# Initialize some parameters
coeffs = rand(10)
rho = 0.1
ys = rand(10)
xs = rand(10,10)
Wmat = rand(10,10)
sige=0.5
# Construct likelihood with parameters fixed at pre-defined values
function like_sar2(β::Vector{Float64}, ρ=rho, σε=sige, y=ys, x=xs, W=Wmat)
    n = length(y)
    A = speye(n) - ρ*W
    ε = y - x*β - ρ*sparse(W)*y
    epe = ε'*ε
    tmp2 = 1/(2*σε)
    llike = -(n/2)*log(π) - (n/2)*log(σε) + log(det(A)) - tmp2*epe
end
# Optimize, with starting value zero for all beta coefficients
# (note that optimize minimizes, so we pass the negative log-likelihood)
optimize(b -> -like_sar2(b), zeros(10), NelderMead())
If you need to optimize over more than your beta parameters (in the general autoregressive models I've used, the autocorrelation parameter was often estimated jointly with the other coefficients), you could do this by packing it into the beta vector and unpacking within the function, like so:
append!(coeffs,rho)
function like_sar3(coeffs::Vector{Float64}, σε=sige, y=ys, x=xs, W=Wmat)
    β = coeffs[1:10]; ρ = coeffs[11]
    n = length(y)
    A = speye(n) - ρ*W
    ε = y - x*β - ρ*sparse(W)*y
    epe = ε'*ε
    tmp2 = 1/(2*σε)
    llike = -(n/2)*log(π) - (n/2)*log(σε) + log(det(A)) - tmp2*epe
end
The key is that you end up with one vector of inputs to pass into your function.
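For the numerical Hessian and t-statistics, one possible sketch (my suggestion, not LeSage's routine; it assumes the finite-difference hessian from Calculus.jl, and on Julia 1.x inv and diag require using LinearAlgebra) is to minimize the negative log-likelihood and invert the Hessian at the optimum:

using Optim, Calculus

negll(θ) = -like_sar3(θ)  # optimize minimizes, so negate the log-likelihood
res = optimize(negll, vcat(zeros(10), 0.1), NelderMead())
θhat = Optim.minimizer(res)
H = Calculus.hessian(negll, θhat)  # numerical Hessian of the negative log-likelihood
vcov = inv(H)                      # asymptotic covariance estimate
tstats = θhat ./ sqrt.(diag(vcov))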