Bayesian ODE with Julia

I have been trying to implement a Bayesian ODE. In the oil industry, we use the following equation to fit production data and then forecast.
The ODE (as implemented in Arps! below) is du/dt = K * u^(1+n),
where 0 < n < 1; n and K are parameters obtained by fitting the raw production data, in my case K = 0.17 and n = 0.87.
My initial code :
using DiffEqFlux, OrdinaryDiffEq, Flux, Optim, Plots, AdvancedHMC
function Arps!(du, u, p, t)
    y = u
    K, n = p
    du .= K .* y .* y .^ n   # in-place update so the solver sees the derivative
end
tspan = (1.0, 200.0)
tsteps = range(1, 200, length = 200)
u0 = [5505.99]
p = [0.17, 0.87]
prob1 = ODEProblem(Arps!, u0, tspan)
sol_ode = solve(prob1, Vern7(), saveat = tsteps)
Not sure how to solve this issue :
MethodError: no method matching iterate(::DiffEqBase.NullParameters)

You didn't pass any parameters to your ODE problem, so inside Arps! the argument p is DiffEqBase.NullParameters, which cannot be destructured by K, n = p. Construct the problem with the parameter vector: prob1 = ODEProblem(Arps!, u0, tspan, p).
For the Bayesian part, look at the tutorial:
https://turing.ml/dev/tutorials/10-bayesiandiffeq/

Related

Parameters of truncated normal distribution using R

How can we numerically solve these equations using R when E_{μ,σ}(X) = 1 and Var_{μ,σ}(X) = 1? I am interested in finding the values of μ and σ. The equations are the mean and variance of a normal distribution truncated to [a, b]:
E(X) = μ + σ (φ(α) − φ(β)) / (Φ(β) − Φ(α))
Var(X) = σ² [1 − (β φ(β) − α φ(α)) / (Φ(β) − Φ(α)) − ((φ(α) − φ(β)) / (Φ(β) − Φ(α)))²]
Here α = (a − μ)/σ and β = (b − μ)/σ. I used the following code, but I'm not getting an answer. Is there any other code or method I may use to get what I want?
library(rootSolve)  # provides multiroot()
mubar <- 1
sigmabar <- 1
a <- 0.5
b <- 5.5
model <- function(x) c(
  F1 = mubar - x[1] + x[2] * ((pnorm((b - x[1]) / x[2]) - pnorm(a - x[1]) / x[2]) /
                                (dnorm((b - x[1]) / x[2]) - dnorm((a - x[1]) / x[2]))),
  F2 = sigmabar^2 - x[2]^2 * (1 - (((b - x[1]) / x[2] * pnorm((b - x[1]) / x[2]) -
                                      (a - x[1]) / x[2] * pnorm((a - x[1]) / x[2])) /
                                     (dnorm((b - x[1]) / x[2]) - dnorm((a - x[1]) / x[2]))) -
                               ((pnorm((b - x[1]) / x[2]) - pnorm((a - x[1]) / x[2])) /
                                  (dnorm((b - x[1]) / x[2]) - dnorm((a - x[1]) / x[2])))^2))
(ss <- multiroot(f = model, start = c(1, 1)))
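For reference, here is a minimal sketch of how the two moment equations could be written directly from the standard truncated-normal formulas above (using dnorm for φ and pnorm for Φ; the helper name moments and the target_* variables are illustrative):
library(rootSolve)
a <- 0.5
b <- 5.5
target_mean <- 1
target_var <- 1
moments <- function(x) {
  mu <- x[1]; sigma <- x[2]
  alpha <- (a - mu) / sigma
  beta <- (b - mu) / sigma
  Z <- pnorm(beta) - pnorm(alpha)                       # normalising constant
  m <- mu + sigma * (dnorm(alpha) - dnorm(beta)) / Z    # truncated mean
  v <- sigma^2 * (1 - (beta * dnorm(beta) - alpha * dnorm(alpha)) / Z -
                    ((dnorm(alpha) - dnorm(beta)) / Z)^2)  # truncated variance
  c(F1 = m - target_mean, F2 = v - target_var)
}
multiroot(f = moments, start = c(1, 1))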

How to solve "impacts()" neighbors length error after running spdep::lagsarlm (Spatial Autoregressive Regression model)?

I have 9,150 polygons in my dataset. I was trying to run a spatial autoregressive (SAR) lag model in spdep to test spatial dependence of my outcome variable. After running the model, I wanted to examine the direct/indirect impacts, but I encountered an error that seems to have something to do with the length of the neighbours list in the weights object not being equal to n.
I tried running the very same equation as an SLX (spatially lagged X) model, and impacts() worked fine, even though there were some polygons in my set that had no neighbours. I Googled and looked at the spdep documentation, but couldn't find a clue on how to solve this error.
# Defining queen contiguity neighbors for polyset and storing the matrix as list
q.nbrs <- poly2nb(polyset)
listweights <- nb2listw(q.nbrs, zero.policy = TRUE)
# Defining the model
model.equation <- TIME ~ A + B + C
# Run SAR model
reg <- lagsarlm(model.equation, data = polyset, listw = listweights, zero.policy = TRUE)
# Run impacts() to show direct/indirect impacts
impacts(reg, listw = listweights, zero.policy = TRUE)
Error in intImpacts(rho = rho, beta = beta, P = P, n = n, mu = mu, Sigma = Sigma, :
length(listweights$neighbours) == n is not TRUE
I know that this is a question from 2019, but maybe it can help people dealing with the same problem. I found out that in my case the problem was the class of the dataset: the data = polyset object should be of class "SpatialPolygonsDataFrame", which can be achieved by converting your data:
polyset_spatial_sf <- sf::as_Spatial(polyset, IDs = polyset$ID)
Then rerun your code.
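A minimal sketch of that rerun, reusing the object and variable names from the question and assuming polyset is an sf object with an ID column:
library(spdep)
library(sf)
# Convert the sf object to a SpatialPolygonsDataFrame before building the weights
polyset_spatial_sf <- sf::as_Spatial(polyset, IDs = polyset$ID)
q.nbrs <- poly2nb(polyset_spatial_sf)
listweights <- nb2listw(q.nbrs, zero.policy = TRUE)
reg <- lagsarlm(TIME ~ A + B + C, data = polyset_spatial_sf,
                listw = listweights, zero.policy = TRUE)
impacts(reg, listw = listweights, zero.policy = TRUE)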

Power law fitted by `fitdist()` function in package `fitdistrplus`

I generate some random variables using the rplcon() function in the poweRlaw package:
library(poweRlaw)
data <- rplcon(1000, 10, 2)
Now I want to know which known distribution fits the data best. Lognormal? Exponential? Gamma? Power law? Power law with exponential cutoff?
So I use the fitdist() function in the fitdistrplus package:
library(fitdistrplus)
fit.lnormdl <- fitdist(data, "lnorm")
fit.gammadl <- fitdist(data, "gamma", lower = c(0, 0))
fit.expdl <- fitdist(data, "exp")
Since the power law and the power law with exponential cutoff are not among the base probability distributions listed in the CRAN Task View: Probability Distributions, I wrote the d, p, q functions of the power law based on example 4 of ?fitdist:
dplcon <- function(x, xmin, alpha, log = FALSE)
{
  if (log) {
    pdf = log(alpha - 1) - log(xmin) - alpha * (log(x/xmin))
    pdf[x < xmin] = -Inf
  }
  else {
    pdf = (alpha - 1)/xmin * (x/xmin)^(-alpha)
    pdf[x < xmin] = 0
  }
  pdf
}
pplcon <- function(q, xmin, alpha, lower.tail = TRUE)
{
  cdf = 1 - (q/xmin)^(-alpha + 1)
  if (!lower.tail)
    cdf = 1 - cdf
  cdf[q < round(xmin)] = 0
  cdf
}
qplcon <- function(p, xmin, alpha) alpha*p^(1/(1-xmin))
Finally, I use the code below to estimate the parameters xmin and alpha of the power law:
fitpl <- fitdist(data,"plcon",start = list(xmin=1,alpha=1))
But it throws an error:
<simpleError in optim(par = vstart, fn = fnobj, fix.arg = fix.arg, obs = data, ddistnam = ddistname, hessian = TRUE, method = meth, lower = lower, upper = upper, ...): function cannot be evaluated at initial parameters>
Error in fitdist(data, "plcon", start = list(xmin = 1, alpha = 1)) :
the function mle failed to estimate the parameters,
with the error code 100
I searched Google and Stack Overflow, and many similar error questions appear, but after reading and trying them, no solution works for my issue. What should I do to estimate the parameters correctly?
Thank you to everyone who can do me a favor!
This was an interesting one. I am not entirely happy with what I discovered, but I will tell you what I have found and see if it helps.
On calling the fitdist function, by default it wants to use mledist from the same package. This in turn results in a call to stats::optim, which is a general optimisation function. In its return value it gives a convergence error code; see ?optim for details. The 100 you see is not one of the codes returned by optim, so I pulled apart the code for mledist and fitdist to find where that error code comes from. Unfortunately it is defined in more than one place and is a general trap error code. If you break down all of the code, what fitdist is trying to do here is the following, subject to various checks etc. beforehand:
fnobj <- function(par, fix.arg, obs, ddistnam) {
  -sum(do.call(ddistnam, c(list(obs), as.list(par),
                           as.list(fix.arg), log = TRUE)))
}
vstart = list(xmin = 5, alpha = 5)
ddistname=dplcon
fix.arg = NULL
meth = "Nelder-Mead"
lower = -Inf
upper = Inf
optim(par = vstart, fn = fnobj,
fix.arg = fix.arg, obs = data, ddistnam = ddistname,
hessian = TRUE, method = meth, lower = lower,
upper = upper)
If we run this code we find a more useful error: "function cannot be evaluated at initial parameters", which makes sense if we look at the function definition: having xmin = 0 or alpha = 1 will yield a log-likelihood of -Inf. OK, so the next thought is to try different initial values. I tried a few random choices, but they all returned a new error, "non-finite finite-difference value [1]".
Searching the optim source further for the origin of these two errors, they are not part of the R source itself; there is, however, a .External2 call, so I can only assume the errors come from there. The non-finite error implies that one of the function evaluations somewhere gives a non-numeric result. The function dplcon will do so when alpha <= 1 or xmin <= 0. fitdist lets you specify additional arguments that get passed to mledist (or to other fitting functions, depending on which method you choose; mle is the default), of which lower is one, for controlling lower bounds on the parameters to be optimised. So I tried imposing these limits and trying again:
fitpl <- fitdist(data,"plcon",start = list(xmin=1,alpha=2), lower = c(xmin = 0, alpha = 1))
Annoyingly this still gives an error code 100. Tracking it down yields the error "L-BFGS-B needs finite values of 'fn'". The optimisation method has changed from the default Nelder-Mead because you specified bounds, and somewhere in the external C code this error arises, presumably close to the limits of xmin or alpha, where the numerical stability of the calculation as we approach infinity matters.
I decided to do quantile matching rather than maximum likelihood to try to find out more:
fitpl <- fitdist(data,"plcon",start = list(xmin=1,alpha=2),
method= "qme",probs = c(1/3,2/3))
fitpl
## Fitting of the distribution ' plcon ' by matching quantiles
## Parameters:
## estimate
## xmin 0.02135157
## alpha 46.65914353
which suggests that the optimum value of xmin is close to 0, its lower limit. The reason I am not satisfied is that I can't get a maximum-likelihood fit of the distribution using fitdist; however, hopefully this explanation helps, and the quantile matching gives an alternative.
Edit:
After learning a little more about power law distributions in general, it makes sense that this does not work as you expect. The power parameter alpha has a likelihood function which can be maximised conditional on a given xmin. However, no such expression exists for xmin, since the likelihood function is increasing in xmin. Typically, estimation of xmin comes from a Kolmogorov-Smirnov statistic; see this mathoverflow question and the d_jss_paper vignette of the poweRlaw package for more info and associated references.
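For reference, the conditional maximum-likelihood estimate of alpha for a continuous power law has the standard closed form alpha-hat = 1 + n / sum(log(x_i / xmin)); a small illustrative helper (the function name alpha_mle is mine, not part of any package):
# Closed-form MLE of alpha for a continuous power law, conditional on a given xmin
alpha_mle <- function(x, xmin) {
  x <- x[x >= xmin]
  1 + length(x) / sum(log(x / xmin))
}
alpha_mle(data, 10)  # using the true xmin = 10 that generated the simulated data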
There is functionality to estimate the parameters of the power law distribution in the poweRlaw package itself.
m = conpl$new(data)                 # continuous power-law object
xminhat = estimate_xmin(m)$xmin     # xmin estimated via the Kolmogorov-Smirnov approach
m$setXmin(xminhat)
alphahat = estimate_pars(m)$pars    # MLE of alpha conditional on the estimated xmin
c(xmin = xminhat, alpha = alphahat)

Passing hypothesis in wald.test in R

Please forgive me, as I am new to this forum. The study requires checking whether the sum of coefficients equals 0. In EViews the test can be written as c(2)+c(3)+c(4)=0, where c(2) is the coefficient of the 2nd term and so forth. The code for the same in R is:
require(Hmisc) # this package is used to generate lags
require(aod)   # this package is used to conduct the Wald test
output <- lm(formula = s_dep ~ m_dep + Lag(m_dep, -1) + Lag(m_dep, -2) + s_rtn, data = qs_eq_comm)
wald.test(b = coef(object = output), Sigma = vcov(object = output), Terms = 2:4, H0 = 2 + 3 + 4)
# H0 = 2+3+4 checks if the sum is zero
This gives the error: Error in wald.test(b = coef(object = output), Sigma = vcov(object = output), : Vectors of tested coefficients and of null hypothesis have different lengths. As per the aod package, the documentation specifies the format as
wald.test(Sigma, b, Terms = NULL, L = NULL, H0 = NULL,df = NULL, verbose = FALSE)
Please help to conduct this test.
In the wald.test function, either Terms or L can be passed as the parameter. L is a matrix conformable to b, such that its product with b, i.e. L %*% b, gives the linear combinations of the coefficients to be tested. Create the L matrix first and then conduct the Wald test.
l <- cbind(0, +1, +1, +1, 0)  # 0 for the intercept and s_rtn, 1 for the three m_dep terms
wald.test(b = coef(object = output), Sigma = vcov(object = output), L = l)
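A minimal self-contained sketch of the same idea on simulated data (all variable names here are illustrative), showing that L %*% b picks out the sum of the 2nd to 4th coefficients:
library(aod)
set.seed(1)
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100), x4 = rnorm(100))
df$y <- 1 + 0.5 * df$x1 - 0.2 * df$x2 - 0.3 * df$x3 + 0.1 * df$x4 + rnorm(100)
fit <- lm(y ~ x1 + x2 + x3 + x4, data = df)
L <- cbind(0, 1, 1, 1, 0)  # tests beta_x1 + beta_x2 + beta_x3 = 0
wald.test(b = coef(fit), Sigma = vcov(fit), L = L)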

Fit state space model using dlm

I am about to fit a state space model to a univariate time series (y_t). The model I am trying to fit is:
y_t = F x_t + \epsilon_t,   \epsilon_t \sim N(0, V)
x_{t+1} = G x_t + w_t,      w_t \sim N(0, W)
x_0 \sim N(m_0, C_0)
I use the following R-code:
library(dlm)
# Create a function of the unknown parameters, which returns a dlm object
Build <- function(theta) {
  dlm(FF = theta[1], GG = theta[2], V = theta[3], W = theta[4],
      m0 = theta[5], C0 = theta[6])
}
# Fit the model to the data using MLE
f1 <- dlmMLE(y, parm = c(1, 1, 0.1, 0.1, 0, 0.1), Build)
But I get the following error message (after running f1):
Error in dlm(FF = theta[1], GG = theta[2], V = theta[3], W = theta [4], :
V is not a valid variance matrix
My problem is that I don't understand why V is not a valid variance matrix.
Does anyone know what is wrong?
Thank you in advance
Regards fuente
EDIT:
I tried doing the same, but instead of my real data I used:
y <- rnorm(72,6.44,1.97)
This produced, however, an error involving W (and not V...):
Error in dlm(FF = theta[1], GG = theta[2], V = theta[3], W = theta[4], :
W is not a valid variance matrix
I'm confused. Does it have something to do with the starting values passed to parm=...?
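For what it's worth, a common reason dlmMLE reports "not a valid variance matrix" is that optim proposes negative values for the variance entries during the search. A sketch of the usual workaround, optimising the log-variances and exponentiating inside the build function (a guess at a fix, not fuente's original code):
# Sketch: keep V, W and C0 positive by optimising their logs
library(dlm)
Build <- function(theta) {
  dlm(FF = theta[1], GG = theta[2],
      V = exp(theta[3]), W = exp(theta[4]),  # exp() keeps the variances positive
      m0 = theta[5], C0 = exp(theta[6]))
}
y <- rnorm(72, 6.44, 1.97)                   # the simulated series from the edit above
f1 <- dlmMLE(y, parm = c(1, 1, log(0.1), log(0.1), 0, log(0.1)), Build)
f1$convergence                               # 0 means optim reported convergence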
