OK, so I figured out how to plot the credible intervals for a univariate linear model in Turing.jl using the following code (I'm replicating Statistical Rethinking by McElreath; this particular exercise is in chapter 4). If anyone has already plotted these types of models with Turing and can give me a guide, that would be great!
Univariate model code:
using Turing
using StatsPlots
using Plots
height = df2.height
weight = df2.weight
@model heightmodel(y, x) = begin
#priors
α ~ Normal(178, 100)
σ ~ Uniform(0, 50)
β ~ LogNormal(0, 10)
x_bar = mean(x)
#model
μ = α .+ (x.-x_bar).*β
y ~ MvNormal(μ, σ)
end
chns = sample(heightmodel(height, weight), NUTS(), 100000)
## code 4.43
describe(chns) |> display
# covariance and correlation
alph = get(chns, :α)[1].data
bet = get(chns, :β)[1].data
sigm = get(chns, :σ)[1].data
vecs = (alph[1:352], bet[1:352])
arr = vcat(transpose.(vecs)...)'
ss = [vec(alph + bet.*(x)) for x in 25:1:70]
arrr = vcat(transpose.(ss)...)'
plot([mean(arrr[:,x]) for x in 1:46],25:1:70, ribbon = ([-1*(quantile(arrr[:,x],[0.1,0.9])[1] - mean(arrr[:,x])) for x in 1:46], [quantile(arrr[:,x],[0.1,0.9])[2] - mean(arrr[:,x]) for x in 1:46]))
Credible interval Univariate:
However, when I try to replicate it with a multivariate function, very strange things get drawn:
Multivariate model code:
weight_s = (df.weight .-mean(df.weight))./std(df.weight)
weight_s² = weight_s.^2
@model heightmodel(height, weight, weight²) = begin
#priors
α ~ Normal(178, 20)
σ ~ Uniform(0, 50)
β1 ~ LogNormal(0, 1)
β2 ~ Normal(0, 1)
#model
μ = α .+ weight.*β1 + weight².*β2
height ~ MvNormal(μ, σ)
end
chns = sample(heightmodel(height, weight_s, weight_s²), NUTS(), 100000)
describe(chns) |> display
### painting the fit
alph = get(chns, :α)[1].data
bet1 = get(chns, :β1)[1].data
bet2 = get(chns, :β2)[1].data
vecs = (alph[1:99000], bet1[1:99000], bet2[1:99000])
arr = vcat(transpose.(vecs)...)'
polinomial = [vec(alph + bet1.*(x) + bet2.*(x.^2)) for x in -2:0.01:2]
arrr = vcat(transpose.(polinomial)...)'
plot([mean(arrr[:,x]) for x in 1:401],-2:0.01:2, ribbon = ([-1*(quantile(arrr[:,x],[0.1,0.9])[1] - mean(arrr[:,x])) for x in 1:46], [quantile(arrr[:,x],[0.1,0.9])[2] - mean(arrr[:,x]) for x in 1:46]))
Credible interval Multivariate:
In the Julia Slack channel (https://slackinvite.julialang.org/), Jens was kind enough to give me the answer. The credit goes to him; he doesn't have an SO account.
The main problem was that I was overcomplicating things and computing the mean in the wrong way, and in a very inefficient and strange way, might I add. Each parameter has a vector of 99000 draws from the posterior distribution. I was trying to take the mean across a hand-built matrix of those draws, but it is much easier to define the fit as a function first and then plot it; that way you don't make the mistakes I made when calculating the mean.
Old code:
vecs = (alph[1:99000], bet1[1:99000], bet2[1:99000])
arr = vcat(transpose.(vecs)...)'
[mean(arrr[:,x]) for x in 1:401]
can be written as:
testweights = -2:0.01:2
arr = [fheight.(w, res.α, res.β1, res.β2) for w in testweights]
m = [mean(v) for v in arr]
Moreover, the way Jens defined the credible intervals is much more elegant and more idiomatic Julia:
Jens' code:
quantiles = [quantile(v, [0.1, 0.9]) for v in arr]
lower = [q[1] - m for (q, m) in zip(quantiles, m)]
upper = [q[2] - m for (q, m) in zip(quantiles, m)]
My code:
ribbon = ([-1*(quantile(arrr[:,x],[0.1,0.9])[1] - mean(arrr[:,x])) for x in 1:46], [quantile(arrr[:,x],[0.1,0.9])[2] - mean(arrr[:,x]) for x in 1:46])
Complete Solution:
weight_s = (d.weight .-mean(d.weight))./std(d.weight)
height = d.height
@model heightmodel(height, weight) = begin
#priors
α ~ Normal(178, 20)
σ ~ Uniform(0, 50)
β1 ~ LogNormal(0, 1)
β2 ~ Normal(0, 1)
#model
μ = α .+ weight .* β1 + weight.^2 .* β2
# or μ = fheight.(weight, α, β1, β2) if we are defining fheight anyway
height ~ MvNormal(μ, σ)
end
chns = sample(heightmodel(height, weight_s), NUTS(), 10000)
describe(chns) |> display
res = DataFrame(chns) # requires DataFrames.jl (using DataFrames)
fheight(weight, α, β1, β2) = α + weight * β1 + weight^2 * β2
testweights = -2:0.01:2
arr = [fheight.(w, res.α, res.β1, res.β2) for w in testweights]
m = [mean(v) for v in arr]
quantiles = [quantile(v, [0.1, 0.9]) for v in arr]
lower = [q[1] - m for (q, m) in zip(quantiles, m)]
upper = [q[2] - m for (q, m) in zip(quantiles, m)]
plot(testweights, m, ribbon = [lower, upper])
Related
Is there an R algorithm that fits smoothing splines while minimizing L,
L = ρ ∑_{i=0}^{n-1} w_i (y_i − S(x_i))² + (1 − ρ) ∫_{x_0}^{x_{n-1}} (S''(x))² dx
Maybe it's possible with smooth.spline, but I didn't manage to find the right parameters.
(The equation can be seen more clearly here: https://www.iro.umontreal.ca/~simardr/ssj/doc/html/umontreal/iro/lecuyer/functionfit/SmoothingCubicSpline.html)
pspline::smooth.Pspline( x = input_x,
y = input_y,
norder = 2,
method = 1,
spar = rho)
will do the job.
I've tried to reproduce the model from a PyMC3 and Stan comparison, but it seems to run slowly, and when I look at @code_warntype there are some things (K and N, I think) that the compiler seemingly infers as Any.
I've tried adding types, though I can't add types to turing_model's arguments, and things are complicated within turing_model because it uses autodiff variables and not the usual types. I put all the code into the function do_it to avoid globals, because they say that globals can slow things down. (It actually seems slower, though.)
Any suggestions as to what's causing the problem? The turing_model code is what's iterating, so that should make the most difference.
using Turing, StatsPlots, Random
sigmoid(x) = 1.0 / (1.0 + exp(-x))
function scale(w0::Float64, w1::Array{Float64,1})
scale = √(w0^2 + sum(w1 .^ 2))
return w0 / scale, w1 ./ scale
end
function do_it(iterations::Int64)::Chains
K = 10 # predictor dimension
N = 1000 # number of data samples
X = rand(N, K) # predictors (1000, 10)
w1 = rand(K) # weights (10,)
w0 = -median(X * w1) # 50% of elements for each class (number)
w0, w1 = scale(w0, w1) # unit length (euclidean)
w_true = [w0, w1...]
y = (w0 .+ (X * w1)) .> 0.0 # labels
y = [Float64(x) for x in y]
σ = 5.0
σm = [x == y ? σ : 0.0 for x in 1:K, y in 1:K]
@model turing_model(X, y, σ, σm) = begin
w0_pred ~ Normal(0.0, σ)
w1_pred ~ MvNormal(σm)
p = sigmoid.(w0_pred .+ (X * w1_pred))
@inbounds for n in 1:length(y)
y[n] ~ Bernoulli(p[n])
end
end
@time chain = sample(turing_model(X, y, σ, σm), NUTS(iterations, 200, 0.65));
# ϵ = 0.5
# τ = 10
# @time chain = sample(turing_model(X, y, σ), HMC(iterations, ϵ, τ));
return (w_true=w_true, chains=chain::Chains)
end
chain = do_it(1000)
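As a side note on the remark above that globals can slow things down: this is a well-known Julia behaviour, because the compiler cannot assume the type of a non-const global. A minimal sketch, unrelated to the model above and with purely illustrative names, of how that shows up under @code_warntype:
# Non-const global: the compiler must treat `a` as Any inside `f_global`.
a = 3.0
f_global() = a + 1
# Two ways to avoid the problem: a const global, or passing the value as an argument.
const b = 3.0
f_const() = b + 1
f_arg(x) = x + 1
# @code_warntype f_global()  # return type is inferred as Any
# @code_warntype f_arg(3.0)  # fully inferred as Float64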
I have an array Z in Julia which represents an image of a 2D Gaussian function. I.e. Z[i,j] is the height of the Gaussian at pixel i,j. I would like to determine the parameters of the Gaussian (mean and covariance), presumably by some sort of curve fitting.
I've looked into various methods for fitting Z: I first tried the Distributions package, but it is designed for a somewhat different situation (randomly selected points). Then I tried the LsqFit package, but it seems to be tailored for 1D fitting, as it is throwing errors when I try to fit 2D data, and there is no documentation I can find to lead me to a solution.
How can I fit a Gaussian to a 2D array in Julia?
The simplest approach is to use Optim.jl. Here is some example code (it is not optimized for speed, but it should show you how you can handle the problem):
using Distributions, Optim
# generate some sample data
true_d = MvNormal([1.0, 0.0], [2.0 1.0; 1.0 3.0])
const xr = -3:0.1:3
const yr = -3:0.1:3
const s = 5.0
const m = [s * pdf(true_d, [x, y]) for x in xr, y in yr]
decode(x) = (mu=x[1:2], sig=[x[3] x[4]; x[4] x[5]], s=x[6])
function objective(x)
mu, sig, s = decode(x)
try # sig might be infeasible so we have to handle this case
est_d = MvNormal(mu, sig)
ref_m = [s * pdf(est_d, [x, y]) for x in xr, y in yr]
sum((a-b)^2 for (a,b) in zip(ref_m, m))
catch
sum(m)
end
end
# test for an example starting point
result = optimize(objective, [1.0, 0.0, 1.0, 0.0, 1.0, 1.0])
decode(result.minimizer)
Alternatively you could use constrained optimization e.g. like this:
using Distributions, JuMP, NLopt
true_d = MvNormal([1.0, 0.0], [2.0 1.0; 1.0 3.0])
const xr = -3:0.1:3
const yr = -3:0.1:3
const s = 5.0
const Z = [s * pdf(true_d, [x, y]) for x in xr, y in yr]
m = Model(solver=NLoptSolver(algorithm=:LD_MMA))
@variable(m, m1)
@variable(m, m2)
@variable(m, sig11 >= 0.001)
@variable(m, sig12)
@variable(m, sig22 >= 0.001)
@variable(m, sc >= 0.001)
function obj(m1, m2, sig11, sig12, sig22, sc)
est_d = MvNormal([m1, m2], [sig11 sig12; sig12 sig22])
ref_Z = [sc * pdf(est_d, [x, y]) for x in xr, y in yr]
sum((a-b)^2 for (a,b) in zip(ref_Z, Z))
end
JuMP.register(m, :obj, 6, obj, autodiff=true)
@NLobjective(m, Min, obj(m1, m2, sig11, sig12, sig22, sc))
@NLconstraint(m, sig12*sig12 + 0.001 <= sig11*sig22)
setvalue(m1, 0.0)
setvalue(m2, 0.0)
setvalue(sig11, 1.0)
setvalue(sig12, 0.0)
setvalue(sig22, 1.0)
setvalue(sc, 1.0)
status = solve(m)
getvalue.([m1, m2, sig11, sig12, sig22, sc])
In principle, you have a loss function
loss(μ, Σ) = sum(dist(Z[i,j], N([x(i), y(j)], μ, Σ)) for i in Ri, j in Rj)
where x and y convert your indices to points on the axes (for which you need to know the grid distance and offset positions), and Ri and Rj the ranges of the indices. dist is the distance measure you use, eg. squared difference.
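As a small sketch of what those conversions might look like, assuming a uniform grid whose origin and spacing you know; the array and the numbers below (x0, y0, dx, dy) are placeholders, not values from the question:
Z = rand(61, 61)          # stand-in for the image array from the question
x0, y0 = -3.0, -3.0       # coordinates corresponding to Z[1, 1]
dx, dy = 0.1, 0.1         # grid spacing along each axis
x(i) = x0 + (i - 1) * dx  # row index    -> x coordinate
y(j) = y0 + (j - 1) * dy  # column index -> y coordinate
Ri, Rj = axes(Z)          # index ranges used in the sum of the loss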
You should be able to pass this into an optimizer by packing μ and Σ into a single vector:
pack(μ, Σ) = [μ; vec(Σ)]
unpack(v) = @views v[1:N], reshape(v[N+1:end], N, N)
loss_packed(v) = loss(unpack(v)...)
where in your case N = 2. (Maybe the unpacking deserves some optimization to get rid of unnecessary copying.)
Another thing is that we have to ensure that Σ is positive semidefinite (and hence also symmetric). One way to do that is to parametrize the packed loss function differently, and optimize over some lower triangular matrix L, such that Σ = L * L'. In the case N = 2, we can write this as
using LinearAlgebra # for LowerTriangular
unpack(v) = v[1:2], LowerTriangular([v[3] zero(v[3]); v[4] v[5]])
loss_packed(v) = let (μ, L) = unpack(v)
loss(μ, L * L')
end
(This is of course open to further optimization, such as expanding the multiplication directly into loss.) A different way is to specify the condition as constraints in the optimizer.
For the optimizer to work you probably need the derivative of loss_packed. Either you calculate it manually (helped by a good choice of dist), or perhaps more easily by using a log transformation (if you're lucky, you'll find a way to reduce it to a linear problem...). Alternatively, you could use an optimizer that does automatic differentiation.
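For that last option, a minimal sketch of handing loss_packed to Optim.jl with forward-mode automatic differentiation; it assumes loss_packed and unpack are defined as above, and the starting vector is just an illustrative guess (μ = (0, 0) and L = I):
using Optim
v0 = [0.0, 0.0, 1.0, 0.0, 1.0]  # packed as [μ1, μ2, L11, L21, L22]
# autodiff = :forward lets Optim compute the gradient of loss_packed itself.
result = optimize(loss_packed, v0, BFGS(); autodiff = :forward)
μ_hat, L_hat = unpack(result.minimizer)
Σ_hat = L_hat * L_hat'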
I am trying to implement the gradient descent algorithm from scratch to find the slope and intercept values for my linear fit line.
Using the package and calculating the slope and intercept, I get slope = 0.04 and intercept = 7.2, but when I use my gradient descent algorithm for the same problem, both the slope and intercept come out as (-infinity, -infinity).
Here is my code
x= [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
y=[2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]
function GradientDescent()
m=0
c=0
for i=1:10000
for k=1:length(x)
Yp = m*x[k] + c
E = y[k]-Yp # error in predicted value
dm = 2*E*(-x[k]) # partial derivative of cost function w.r.t. slope (m)
dc = 2*E*(-1) # partial derivative of cost function w.r.t. intercept (c)
m = m + (dm * 0.001)
c = c + (dc * 0.001)
end
end
return m,c
end
Values = GradientDescent() # after running values = (-inf,-inf)
I have not done the math, but instead wrote the tests. It seems you got a sign error when assigning m and c.
Also, writing the tests really helps, and Julia makes it simple :)
function GradientDescent(x, y)
m=0.0
c=0.0
for i=1:10000
for k=1:length(x)
Yp = m*x[k] + c
E = y[k]-Yp
dm = 2*E*(-x[k])
dc = 2*E*(-1)
m = m - (dm * 0.001)
c = c - (dc * 0.001)
end
end
return m,c
end
using Base.Test
#testset "gradient descent" begin
#testset "slope $slope" for slope in [0, 1, 2]
#testset "intercept for $intercept" for intercept in [0, 1, 2]
x = 1:20
y = broadcast(x -> slope * x + intercept, x)
computed_slope, computed_intercept = GradientDescent(x, y)
@test slope ≈ computed_slope atol=1e-8
@test intercept ≈ computed_intercept atol=1e-8
end
end
end
I can't get your exact numbers, but this is close. Perhaps it helps?
# 141 ?
datax = [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
datay = [2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]
function gradientdescent()
m = 0
b = 0
learning_rate = 0.00001
for n in 1:10000
for i in 1:length(datay)
x = datax[i]
y = datay[i]
guess = m * x + b
error = y - guess
dm = 2error * x
dc = 2error
m += dm * learning_rate
b += dc * learning_rate
end
end
return m, b
end
gradientdescent()
(-0.04, 17.35)
It seems that adjusting the learning rate is critical...
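To make that concrete, here is a small sketch (a hypothetical helper, not from the original post) that runs the corrected update with two step sizes on the data above. With x values as large as 141, a step of 0.001 overshoots and the iterates blow up, while 0.00001 converges:
function gd(datax, datay, learning_rate; iters = 10_000)
    m, b = 0.0, 0.0
    for _ in 1:iters, i in 1:length(datay)
        err = datay[i] - (m * datax[i] + b)
        m += 2 * err * datax[i] * learning_rate  # corrected sign, as above
        b += 2 * err * learning_rate
    end
    return m, b
end
gd(datax, datay, 0.001)    # diverges: the parameters overflow to Inf/NaN
gd(datax, datay, 0.00001)  # converges to roughly the values shown above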
This is an interpolation problem:
I have a function z = z(x, y), and I know the relationship between x and y in the form x = f(y, x_0), where the x_0's are the starting points of the curves at time y = 0. Let's assume x_0 = [0 1 2] has three values. For each value of x_0 I get a curve in R^2: x1 = f1(y), x2 = f2(y) and x3 = f3(y), and I draw the curves z1, z2, z3 in R^3 using (x1, f1), (x2, f2) and (x3, f3). How can I interpolate z1, z2, z3 to get a surface?
I will be grateful for any help,
mgm
Using your notation, and some arbitrary example relationships for x = f(x0, y) and z = f(x,y), this is how you do it (I also added a plot of the direct calculation for reference):
% Define grid
x0_orig = 0:2;
y_orig = 0:3;
[x0, y] = meshgrid(x0_orig, y_orig);
% Calculate x (replace the relationship with your own)
x = x0 + 0.1 * y.^2;
% Calculate z (replace the relationship with your own)
z = 0.1 * (x.^2 + y.^2);
% Plot
subplot(1,3,1)
surf(x, y, z)
xlabel('x')
ylabel('y')
zlabel('z')
title('Original data')
%%%%%%%%%%
% Interpolate with finer grid
x0i = 0:0.25:2;
yi = 0:0.25:3;
xi = interp2(x0_orig, y_orig, x, x0i, yi');
[x0i yi] = meshgrid(x0i, yi);
zi = interp2(x0, y, z, x0i, yi);
subplot(1,3,2)
surf(xi, yi, zi);
title('Interpolated data')
%%%%%%%%%%
% Recalculate directly with finer grid
x0i = 0:0.25:2;
yi = 0:0.25:3;
[x0i yi] = meshgrid(x0i, yi);
xi = x0i + 0.1 * yi.^2;
zi = 0.1 * (xi.^2 + yi.^2);
subplot(1,3,3)
surf(xi, yi, zi)
title('Recalculated directly')