F# - Recursive formula for kurtosis

I am now adapting my formula for skewness to make a kurtosis function in F#. Unfortunately, it is again returning incorrect results.
Here is my code:
let kurtosis_aux (m, m2, m3, m4, k) x =
    m + (x - m)/k,
    m2 + ((x - m)*(x - m)*(k - 1.0))/k,
    m3 + ((x - m)*(x - m)*(x - m)*(k - 1.0)*(k - 2.0))/(k * k) - (3.0 * (x - m) * m2)/k,
    m4 + ((x - m)*(x - m)*(x - m)*(x - m)*(k - 1.0)*(k * k - (3.0 * k) + 3.0))/(k * k * k) + (6.0 * (x - m)*(x - m) * m2)/(k * k) - (4.0 * (x - m) * m3)/k,
    k + 1.0;;

let kurtosis xs =
    let _, m2, m3, m4, n = Seq.fold kurtosis_aux (0.0, 0.0, 0.0, 0.0, 1.0) xs
    ((n - 1.0) * m4 / (m2 * m2)) - 3.0;;
Finally, I test on a small vector, for which I should get approximately 2.94631:
kurtosis [|9.0; 2.0; 6.0; 3.0; 29.0|];;
But instead FSI returns -0.05369308728.
The error must be in the m4 part of the kurtosis_aux function or in the kurtosis function itself; the other accumulators are all used in the skewness function and work correctly.
Again, I am hugely grateful for any and all help.

Remove the -3.0 in the last line.
With the -3.0 you are calculating excess kurtosis; note that 2.94631 - 3.0 is exactly the -0.05369 you are seeing.
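For reference, a minimal sketch of the corrected function (kurtosis_aux is unchanged; n - 1.0 is the element count because the counter k starts at 1.0):
let kurtosis xs =
    // Pearson (non-excess) kurtosis: n * m4 / m2^2
    let _, m2, m3, m4, n = Seq.fold kurtosis_aux (0.0, 0.0, 0.0, 0.0, 1.0) xs
    (n - 1.0) * m4 / (m2 * m2);;

kurtosis [|9.0; 2.0; 6.0; 3.0; 29.0|];;  // ≈ 2.94631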

MathOptInterface.OTHER_ERROR when trying to use ISRES of NLopt through JuMP

I am trying to minimize a nonlinear function with nonlinear inequality constraints using NLopt and JuMP.
In my test code below, I am minimizing a function with a known global minimum.
Local optimizers such as LD_MMA fail to find this global minimum, so I am trying to use global optimizers of NLopt that allow nonlinear inequality constraints.
However, when I check my termination status, it says “termination_status(model) = MathOptInterface.OTHER_ERROR”. I am not sure which part of my code to check for this error. What could be the cause?
I am using JuMP since in the future I plan to use other solvers such as KNITRO as well, but should I rather use the NLopt syntax?
Below is my code:
# THIS IS A CODE TO SOLVE FOR THE TOY MODEL
# THE EQUILIBRIUM IS CHARACTERIZED BY A NONLINEAR SYSTEM OF ODEs OF INCREASING FUNCTIONS B(x) and S(y)
# THE GOAL IS TO APPROXIMATE B(x) and S(y) WITH POLYNOMIALS
# FIND THE POLYNOMIAL COEFFICIENTS THAT MINIMIZE THE LEAST SQUARES OF THE EQUILIBRIUM EQUATIONS
# load packages
using Roots, NLopt, JuMP
# model primitives and other parameters
k = .5 # equal split
d = 1 # degree of polynomial
nparam = 2*d+2 # number of parameters to estimate
m = 10 # number of grids
m -= 1
vGrid = range(0,1,m) # discretize values
c1 = 0 # lower bound for B'() and S'()
c2 = 2 # lower and upper bounds for offers
c3 = 1 # lower and upper bounds for the parameters to be estimated
# objective function to be minimized
function obj(α::T...) where {T<:Real}
    # split parameters
    αb = α[1:d+1]   # coefficients for B(x)
    αs = α[d+2:end] # coefficients for S(y)
    # define B(x), B'(x), S(y), and S'(y)
    B(v) = sum([αb[i] * v .^ (i-1) for i in 1:d+1])
    B1(v) = sum([αb[i] * (i-1) * v ^ (i-2) for i in 2:d+1])
    S(v) = sum([αs[i] * v .^ (i-1) for i in 1:d+1])
    S1(v) = sum([αs[i] * (i-1) * v ^ (i-2) for i in 2:d+1])
    # the equilibrium is characterized by the following first order conditions
    # FOCb(y) = B(k * y * S1(y) + S(y)) - S(y)
    # FOCs(x) = S(- (1-k) * (1-x) * B1(x) + B(x)) - B(x)
    function FOCb(y)
        sy = S(y)
        binv = find_zero(q -> B(q) - sy, (-c2, c2))
        return k * y * S1(y) + sy - binv
    end
    function FOCs(x)
        bx = B(x)
        sinv = find_zero(q -> S(q) - bx, (-c2, c2))
        return (1-k) * (1-x) * B1(x) - B(x) + sinv
    end
    # evaluate the FOCs at each grid point and return the sum of squares
    Eb = [FOCb(y) for y in vGrid]
    Es = [FOCs(x) for x in vGrid]
    E = [Eb; Es]
    return E' * E
end
# this is the actual global minimum
αa = [1/12, 2/3, 1/4, 2/3]
obj(αa...)
# do optimization
model = Model(NLopt.Optimizer)
set_optimizer_attribute(model, "algorithm", :GN_ISRES)
@variable(model, -c3 <= α[1:nparam] <= c3)
@NLconstraint(model, [j = 1:m], sum(α[i] * (i-1) * vGrid[j] ^ (i-2) for i in 2:d+1) >= c1) # B should be increasing
@NLconstraint(model, [j = 1:m], sum(α[d+1+i] * (i-1) * vGrid[j] ^ (i-2) for i in 2:d+1) >= c1) # S should be increasing
register(model, :obj, nparam, obj, autodiff=true)
@NLobjective(model, Min, obj(α...))
println("")
println("Initial values:")
for i in 1:nparam
    set_start_value(α[i], αa[i] + rand() * .1)
    println(start_value(α[i]))
end
JuMP.optimize!(model)
println("")
@show termination_status(model)
@show objective_value(model)
println("")
println("Solution:")
sol = [value(α[i]) for i in 1:nparam]
My output:
Initial values:
0.11233072522513032
0.7631843020124309
0.3331559403539963
0.7161240026812674
termination_status(model) = MathOptInterface.OTHER_ERROR
objective_value(model) = 0.19116585196576466
Solution:
4-element Vector{Float64}:
0.11233072522513032
0.7631843020124309
0.3331559403539963
0.7161240026812674
I answered on the Julia forum: https://discourse.julialang.org/t/mathoptinterface-other-error-when-trying-to-use-isres-of-nlopt-through-jump/87420/2.
Posting my answer for posterity:
You have multiple issues:
range(0,1,m) should be range(0, 1; length = m) (how did this work otherwise?). This is true for Julia 1.6; the range(start, stop, length) method was only added in Julia v1.8.
Sometimes your objective function errors because the root doesn't exist. If I run with Ipopt, I get
ERROR: ArgumentError: The interval [a,b] is not a bracketing interval.
You need f(a) and f(b) to have different signs (f(a) * f(b) < 0).
Consider a different bracket or try fzero(f, c) with an initial guess c.
Here's what I would do (note the switch from the bracketing find_zero to fzero with an initial guess, and to Ipopt while debugging):
using JuMP
import Ipopt
import Roots
function main()
    k, d, c1, c2, c3, m = 0.5, 1, 0, 2, 1, 10
    nparam = 2 * d + 2
    m -= 1
    vGrid = range(0, 1; length = m)
    function obj(α::T...) where {T<:Real}
        αb, αs = α[1:d+1], α[d+2:end]
        B(v) = sum(αb[i] * v^(i-1) for i in 1:d+1)
        B1(v) = sum(αb[i] * (i-1) * v^(i-2) for i in 2:d+1)
        S(v) = sum(αs[i] * v^(i-1) for i in 1:d+1)
        S1(v) = sum(αs[i] * (i-1) * v^(i-2) for i in 2:d+1)
        function FOCb(y)
            sy = S(y)
            binv = Roots.fzero(q -> B(q) - sy, zero(T))
            return k * y * S1(y) + sy - binv
        end
        function FOCs(x)
            bx = B(x)
            sinv = Roots.fzero(q -> S(q) - bx, zero(T))
            return (1-k) * (1-x) * B1(x) - B(x) + sinv
        end
        return sum(FOCb(x)^2 + FOCs(x)^2 for x in vGrid)
    end
    αa = [1/12, 2/3, 1/4, 2/3]
    model = Model(Ipopt.Optimizer)
    @variable(model, -c3 <= α[i=1:nparam] <= c3, start = αa[i] + 0.1 * rand())
    @constraints(model, begin
        [j = 1:m], sum(α[i] * (i-1) * vGrid[j]^(i-2) for i in 2:d+1) >= c1
        [j = 1:m], sum(α[d+1+i] * (i-1) * vGrid[j]^(i-2) for i in 2:d+1) >= c1
    end)
    register(model, :obj, nparam, obj; autodiff = true)
    @NLobjective(model, Min, obj(α...))
    optimize!(model)
    print(solution_summary(model))
    return value.(α)
end
main()
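Once the objective evaluates cleanly, switching back to the global NLopt algorithm from the question is a small change (a sketch reusing the question's own attribute call; the rest of main() stays the same):
import NLopt
model = Model(NLopt.Optimizer)
set_optimizer_attribute(model, "algorithm", :GN_ISRES)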

Find maximum angle of box in slot

How would I find the maximum possible angle (a) at which a rectangle of width (W) can sit within a slot of width (w) and depth (h)? See my crude drawing below.
Considering w = hh + WW in the picture, we can write the equation
h * tan(a) + W / cos(a) = w
Then, using the half-angle formulas and the substitution t = tan(a/2):
h * 2 * t / (1 - t^2) + W * (1 + t^2) / (1 - t^2) = w
h * 2 * t + W * (1 + t^2) = (1 - t^2) * w
t^2 * (W + w) + t * (2*h) + (W - w) = 0
We have a quadratic equation; solve it for the unknown t, then get the critical angle as
a = 2 * atan(t)
Quick check: the Python example below, for the picture above, gives the correct angle value of 18.3 degrees:
import math
h = 2
W = 4.12
w = 5
t = (math.sqrt(h*h-W*W+w*w) - h) / (W + w)
a = math.degrees(2 * math.atan(t))
print(a)
Just to elaborate on the above answer, as it is not necessarily obvious, this is why you can write the equation
h * tan(a) + W / cos(a) = w
At the maximum angle the box touches both corners of the slot: descending the slot depth h along the tilted side advances you horizontally by h * tan(a), and the box's own width W, measured horizontally, takes up W / cos(a); together these two segments (the hh and WW in the picture) must exactly fill the slot width w.
PS: I suppose that the justification for "why a is the maximum angle" is obvious
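As a quick numerical sanity check (a small sketch reusing the values from the example above), plugging the solved angle back into the original equation recovers w:
import math

# values from the example above
h, W, w = 2.0, 4.12, 5.0
# positive root of t^2*(W + w) + t*(2*h) + (W - w) = 0
t = (math.sqrt(h*h - W*W + w*w) - h) / (W + w)
a = 2 * math.atan(t)
print(math.degrees(a))                    # ~18.3 degrees
print(h * math.tan(a) + W / math.cos(a))  # ~5.0, i.e. w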

Error thrown when adding vectorised constraints to JuMP

I am trying to reproduce this model - the code in the tutorial is for an old version of JuMP/Julia and does not run.
However, when I try to add the constraint:
@constraint(model, con, c[i = 1:N] .== (((1 - τ) * (1 - l[i]) .* w[i]) + e[i]))
I get the error Unexpected assignment in expression 'c[i = 1:N]'.
Here is the reprex:
using Random
using Distributions
using JuMP
using Ipopt
Random.seed!(123)
N = 1000
γ = 0.5
τ = 0.2
ϵ = rand(Normal(0, 1), N)
wage = rand(Normal(10, 1), N)
consumption = (γ * (1 - τ) * wage) + (γ * ϵ)
leisure = (1 - γ) .+ (( 1 - γ) * ϵ) ./ (( 1 - τ ) * wage)
model = Model(Ipopt.Optimizer)
@variable(model, c[i = 1:N] >= 0)
@variable(model, 0 <= l[i = 1:N] <= 1)
@constraint(model, con, c[i = 1:N] .== (((1 - τ) * (1 - l[i]) .* w[i]) + e[i]))
@NLobjective(model, Max, sum(γ * log(c[i]) + (1 - γ) * log(l[i]) for i in 1:N))
Does anyone know why this is being thrown and how to fix it?
Any and all help appreciated!
Running Julia 1.5.1
With the c[i = 1:N] syntax in JuMP you can only define variables.
For the constraints, one way you could do it is just:
w = wage # not in your code
e = ϵ # not in your code
@constraint(model, con[i = 1:N], c[i] == ((1 - τ) * (1 - l[i]) * w[i]) + e[i])
Przemyslaw's answer is a good one. If you want to stick with the vectorized syntax, you can go:
using JuMP

N = 1_000
e = rand(N)
w = rand(N)
τ = 0.2
model = Model()
@variable(model, c[i = 1:N] >= 0)
@variable(model, 0 <= l[i = 1:N] <= 1)
@constraint(model, c .== (1 - τ) .* (1 .- l) .* w .+ e)
Here is the JuMP documentation for constraints: https://jump.dev/JuMP.jl/stable/constraints
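For completeness, a minimal end-to-end sketch combining the vectorized constraint with the question's objective (the small positive bounds on c and l are an added assumption, just to keep log() away from zero):
using JuMP, Ipopt

N, γ, τ = 1_000, 0.5, 0.2
e = rand(N)
w = rand(N)
model = Model(Ipopt.Optimizer)
@variable(model, c[i = 1:N] >= 1e-4)              # consumption
@variable(model, 1e-4 <= l[i = 1:N] <= 1 - 1e-4)  # leisure
@constraint(model, c .== (1 - τ) .* (1 .- l) .* w .+ e)
@NLobjective(model, Max, sum(γ * log(c[i]) + (1 - γ) * log(l[i]) for i in 1:N))
optimize!(model)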

How to compute this double integral in r?

I want to compute an integral of the following density function:
Using the packages "rmutil" and "psych" in R, I tried:
X = c(8,1,2,3)
Y = c(5,2,4,6)
correlation = cov(X,Y) / (SD(X) * SD(Y))
bvtnorm <- function(x, y, mu_x = mean(X), mu_y = mean(Y), sigma_x = SD(X), sigma_y = SD(Y), rho = correlation) {
  force(x)
  force(y)
  function(x, y)
    1 / (2 * pi * sigma_x * sigma_y * sqrt(1 - rho ^ 2)) *
      exp(- 1 / (2 * (1 - rho ^ 2)) * ((x - mu_x) / sigma_x) ^ 2 +
            ((y - mu_y) / sigma_y) ^ 2 - 2 * rho * (x - mu_x) * (y - mu_y) /
            (sigma_x * sigma_y))
}
f2 <- bvtnorm(x, y)
print("sum_double_integral :")
integral_1 = int2(f2, a=c(-Inf,-Inf), b=c(Inf,Inf)) # should normally give 1
print(integral_1) # gives NaN
The problem:
This integral should give 1, but it gives NaN.
I don't know how to fix the problem; I tried to force() the x and y variables without success.
You were missing a pair of parentheses. The corrected code looks like:
library(rmutil)

X = c(8,1,2,3)
Y = c(5,2,4,6)
correlation = cor(X, Y)
bvtnorm <- function(x, y, mu_x = mean(X), mu_y = mean(Y), sigma_x = sd(X), sigma_y = sd(Y), rho = correlation) {
  function(x, y)
    1 / (2 * pi * sigma_x * sigma_y * sqrt(1 - rho ^ 2)) *
      exp(- 1 / (2 * (1 - rho ^ 2)) * (((x - mu_x) / sigma_x) ^ 2 +
            ((y - mu_y) / sigma_y) ^ 2 - 2 * rho * (x - mu_x) * (y - mu_y) /
            (sigma_x * sigma_y)))
}
f2 <- bvtnorm(x, y)
print("sum_double_integral :")
integral_1 = int2(f2, a=c(-Inf,-Inf), b=c(Inf,Inf)) # should normally give 1
print(integral_1) # prints 1.000047
This was hard to spot. From a debugging point of view, I found it helpful to first integrate over a finite domain. Trying it with things like [-1,1] first and then [-2,2] (on both axes) showed that the integrals were blowing up rather than converging. After that, I looked at the grouping even more carefully.
I also cleaned up the code a bit. I dropped SD in favor of the built-in sd, since I don't see the motivation in importing the package psych just to make the code less readable (less flippantly, dropping psych from the question makes it easier for others to reproduce; there is no good reason to include a package which isn't used in any essential way). I also dropped the force() calls, which were doing nothing, and used the built-in function cor for calculating the correlation.
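A minimal sketch of that finite-domain sanity check (assuming the corrected f2 above is already defined):
# debugging trick from above: integrate over growing finite boxes first;
# with the buggy grouping these values blow up instead of settling down
for (r in 1:3) {
  print(int2(f2, a = c(-r, -r), b = c(r, r)))
}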

Optim using gradient Error: "no method matching"

I’m trying to optimize a function using one of the algorithms that require a gradient. Basically, I’m trying to learn how to optimize a function with a gradient in Julia. I’m fairly confident that my gradient is specified correctly: the similarly defined Matlab function for the gradient gives me the same values as the Julia one for some test values of the arguments, and the Matlab version using fminunc with the gradient seems to optimize the function fine.
However, when I run the Julia script, I get the following error:
julia> include("ex2b.jl")
ERROR: `g!` has no method matching g!(::Array{Float64,1}, ::Array{Float64,1})
while loading ...\ex2b.jl, in expression starting on line 64
I'm running Julia 0.3.2 on a Windows 7 32-bit machine. Here is the code (basically a translation of some Matlab to Julia):
using Optim
function mapFeature(X1, X2)
    degrees = 5
    out = ones(size(X1)[1])
    for i in range(1, degrees+1)
        for j in range(0, i+1)
            term = reshape((X1.^(i-j) .* X2.^(j)), size(X1.^(i-j))[1], 1)
            out = hcat(out, term)
        end
    end
    return out
end

function sigmoid(z)
    return 1 ./ (1 + exp(-z))
end

function costFunc_logistic(theta, X, y, lam)
    m = length(y)
    regularization = sum(theta[2:end].^2) * lam / (2 * m)
    return sum(-y .* log(sigmoid(X * theta)) - (1 - y) .* log(1 - sigmoid(X * theta))) ./ m + regularization
end

function costFunc_logistic_gradient!(theta, X, y, lam, m)
    grad = X' * (sigmoid(X * theta) .- y) ./ m
    grad[2:end] = grad[2:end] + theta[2:end] .* lam / m
    return grad
end

data = readcsv("ex2data2.txt")
X = mapFeature(data[:,1], data[:,2])
m, n = size(data)
y = data[:, end]
theta = zeros(size(X)[2])
lam = 1.0
f(theta::Array) = costFunc_logistic(theta, X, y, lam)
g!(theta::Array) = costFunc_logistic_gradient!(theta, X, y, lam, m)
optimize(f, g!, theta, method = :l_bfgs)
And here is some of the data:
0.051267,0.69956,1
-0.092742,0.68494,1
-0.21371,0.69225,1
-0.375,0.50219,1
-0.51325,0.46564,1
-0.52477,0.2098,1
-0.39804,0.034357,1
-0.30588,-0.19225,1
0.016705,-0.40424,1
0.13191,-0.51389,1
0.38537,-0.56506,1
0.52938,-0.5212,1
0.63882,-0.24342,1
0.73675,-0.18494,1
0.54666,0.48757,1
0.322,0.5826,1
0.16647,0.53874,1
-0.046659,0.81652,1
-0.17339,0.69956,1
-0.47869,0.63377,1
-0.60541,0.59722,1
-0.62846,0.33406,1
-0.59389,0.005117,1
-0.42108,-0.27266,1
-0.11578,-0.39693,1
0.20104,-0.60161,1
0.46601,-0.53582,1
0.67339,-0.53582,1
-0.13882,0.54605,1
-0.29435,0.77997,1
-0.26555,0.96272,1
-0.16187,0.8019,1
-0.17339,0.64839,1
-0.28283,0.47295,1
-0.36348,0.31213,1
-0.30012,0.027047,1
-0.23675,-0.21418,1
-0.06394,-0.18494,1
0.062788,-0.16301,1
0.22984,-0.41155,1
0.2932,-0.2288,1
0.48329,-0.18494,1
0.64459,-0.14108,1
0.46025,0.012427,1
0.6273,0.15863,1
0.57546,0.26827,1
0.72523,0.44371,1
0.22408,0.52412,1
0.44297,0.67032,1
0.322,0.69225,1
0.13767,0.57529,1
-0.0063364,0.39985,1
-0.092742,0.55336,1
-0.20795,0.35599,1
-0.20795,0.17325,1
-0.43836,0.21711,1
-0.21947,-0.016813,1
-0.13882,-0.27266,1
0.18376,0.93348,0
0.22408,0.77997,0
Let me know if you need additional details. By the way, this relates to a Coursera machine learning course, if you're curious.
The gradient argument should not be a function that computes and returns the gradient, but a function that stores it in place (hence the exclamation mark in the function name, and the second Array argument in the error message).
The following seems to work.
function g!(theta::Array, storage::Array)
    storage[:] = costFunc_logistic_gradient!(theta, X, y, lam, m)
end
optimize(f, g!, theta, method = :l_bfgs)
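A quick way to convince yourself the two-argument convention is wired up correctly (a small sketch reusing g! and theta from above) is to call it by hand and inspect the storage vector it fills:
storage = zeros(length(theta))
g!(theta, storage)     # fills storage in place
println(storage[1:3])  # first few gradient entries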
The same using closures and currying (a version for those who are used to a function that returns both the cost and the gradient):
function cost_gradient(θ, X, y, λ)
    m = length(y);
    return (θ::Array) -> begin
        h = sigmoid(X * θ); # (m,n+1)*(n+1,1) -> (m,1)
        J = (1 / m) * sum(-y .* log(h) .- (1 - y) .* log(1 - h)) + λ / (2 * m) * sum(θ[2:end] .^ 2);
    end, (θ::Array, storage::Array) -> begin
        h = sigmoid(X * θ); # (m,n+1)*(n+1,1) -> (m,1)
        storage[:] = (1 / m) * (X' * (h .- y)) + (λ / m) * [0; θ[2:end]];
    end
end
Then, somewhere in the code:
initialθ = zeros(n,1);
f, g! = cost_gradient(initialθ, X, y, λ);
res = optimize(f, g!, initialθ, method = :cg, iterations = your_iterations);
θ = res.minimum;
