I want to estimate the forward-looking version of the Taylor rule equation using iterative nonlinear GMM:
I have the data for all the variables in the model, namely the inflation rate, the unemployment gap, and the effective federal funds rate, and what I am trying to estimate is the corresponding set of parameters.
Where I need help is with the usage of the gmm() function in the {gmm} R package. I *think* the arguments of the function that I need are:
gmm(g, x, type = "iterative",...)
where g is the formula (i.e., the model stated above), x is the data vector (or matrix), and type is the type of GMM to use.
My problem is with the data matrix argument: I do not know how to construct it (it's not that I don't know how matrices work in R; it's that none of the examples I have seen on the internet resemble what I am attempting here). Also, this is my first time using the gmm() function in R. Is there anything else I need to know?
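For context, here is a sketch of how I imagine the data matrix might be assembled (the column names and toy values below are just my guesses, not from my actual dataset):

```r
# Sketch: one column per series, ordered consistently with how the
# moment-condition formula/function will index them (toy values)
infl <- c(2.1, 2.3, 1.9, 2.0, 2.2)       # inflation rate
ugap <- c(0.5, 0.4, 0.6, 0.5, 0.3)       # unemployment gap
ffr  <- c(1.50, 1.60, 1.40, 1.45, 1.55)  # effective federal funds rate
x <- cbind(ffr = ffr, infl = infl, ugap = ugap)
```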
Your help will be much appreciated. Thank you :)
I am having difficulty programming a nonlinear mixed-effects model in R using the nlme package. I am not concerned with fitting to data at this point; rather, I just want to make sure I have the model coded correctly. The model I am trying to code looks like this in equation form:
y = (a + X)(1 - exp(-bZ))^(c+dX) + e
where X is the random effect varying across a single grouping level we can call g1, Z is a variable in the data, and e is described by N(0, LZ) where L is a parameter estimated from the data.
I am struggling to understand the correct coding syntax, specifically surrounding the (c+dX) and the error structure.
From my best understanding, I can get the code for a simpler equation if we remove the d parameter, leaving the equation as:
y = (a + X)(1 - exp(-bZ))^(c+X) + e
m1 <- nlme(y ~ a*(1-exp(-b*Z))^c, fixed = a+b+c~1, random = a+c~1|g1, start = c(a=1, b=1, c=1))
However, I'm not sure how to actually program the equation I want with c+dX. Another concern is that X should be the same random effect used in both places in the equation; will this formulation give me that? And lastly, for the error structure, I am completely lost on how to include it in the code, and I only become more confused when reading the R documentation. Any and all help would be greatly appreciated, as I'm certain this is just a lack of understanding on my part. Thank you!
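The closest I have found for the error structure is nlme's weights argument: since e ~ N(0, LZ) means the residual variance is proportional to Z, varFixed(~Z) would fix Var(e_i) = sigma^2 * Z_i, with sigma^2 playing the role of L. A sketch (I'm not certain this is the right construct):

```r
library(nlme)  # nlme ships with the standard R distribution

# Variance structure with Var(e) proportional to Z; would be passed as
#   nlme(..., weights = varFixed(~Z), ...)
vf <- varFixed(~Z)
```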
I was working on a dataset, trying to re-perform an already-run statistical analysis, and I came across the following function call:
binomial()$linkinv(fixef(m))
after running the following model
summary((m = glmer(T1.ACC ~ COND + (COND | ID), d9only, family = binomial)))
My first question is: what exactly is this function for? Because in other command lines, the reciprocal of this code, as well as a slightly modified version based on it, are also used:
1) 1- binomial()$linkinv(fixef())
2) d9only$fit = binomial()$linkinv(model.matrix(m) %*% fixef(m)) # the meaning of the %*% operator is quite mysterious to me too.
Moreover, another function present is the following one:
binomial_pred_ci()
To be honest, I've searched through the whole script and there is no custom function by that name, nor could I find the package it might have been called from. Does anyone know where it may come from? Maybe the package 'runjags'? If so, any hint on how to download it?
Thanks for your answers
I agree with most of @Oliver's answer. I will add a few comments (since I had an answer partly composed already).
I would be very wary of the script you are following: some parts look wrong (I could obviously be mistaken since these bits are taken completely out of context ...)
binomial()$linkinv refers to the inverse link function for the model used. By default (which applies in this case since no optional link= argument has been specified), this is the inverse-logit or logistic function. A nearly equivalent function is available via plogis(), but using $linkinv could be better in some cases since it generalizes to binomial analyses done with other link functions (e.g. probit or cloglog).
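For the default logit link, the equivalence can be checked directly:

```r
fam <- binomial()            # link = "logit" by default
eta <- c(-2, 0, 2)           # values on the linear-predictor scale
p_link <- fam$linkinv(eta)   # inverse-logit via the family object
p_plog <- plogis(eta)        # the same computation via plogis()
```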
As @Oliver mentions, applying the inverse link function to the coefficients is at least weird; I would even say wrong. Researchers often exponentiate coefficients estimated on the logit/log-odds scale to obtain odds ratios, but applying the inverse link (usually the logistic function) to them is rarely correct.
binomial()$linkinv(model.matrix(m) %*% fixef(m)) is indeed computing the predicted values on the link scale and converting them back to the data (= probability) scale. You can get the same results more reliably (handling missing values, etc.) by using predict(m, type = "response", re.form = ~0) (this extends @Oliver's answer to a call that also applies the inverse-link function for you).
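The same equivalence holds for a plain glm(), which makes it easy to demonstrate without a multilevel dataset (the toy data and formula below are made up; the point is that the manual matrix computation matches predict()):

```r
# Toy logistic regression as a stand-in for the glmer model
d <- data.frame(y = c(0, 0, 1, 0, 1, 1, 1, 1),
                x = c(1, 2, 3, 4, 5, 6, 7, 8))
m <- glm(y ~ x, data = d, family = binomial)

# Manual route: linear predictor via %*%, then back-transform
p_manual <- binomial()$linkinv(model.matrix(m) %*% coef(m))

# Built-in route
p_pred <- predict(m, type = "response")
```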
I don't know what binomial_pred_ci is either, but I would suggest you look at predictInterval() from the merTools package ...
PS: none of this has much to do with runjags, which uses an entirely different model structure. Presumably the glmer models are being fitted for comparison ...
help(binomial) describes the link function and inverse link function and their uses. binomial()$linkinv is the binomial inverse-link function (the sigmoid function) prob(y|eta) = 1 / (1 + exp(-eta)), where eta is the linear predictor. Using this with the coefficients (or fixed effects) is a bit odd, but it is not unusual as a way to get an idea of how large the effect of each coefficient is. I would not encourage it, however.
%*% is the matrix multiplication operator, while model.matrix(m) (for lme4) extracts the fixed-effect model matrix. So model.matrix(m) %*% fixef(m) is the linear predictor using only the fixed effects; it is the same as predict(m, re.form = ~0). This is often used when you want only the fixed-effect part of the model, either because you want to correct for between-group variation or because you are predicting new data.
binomial_pred_ci: no idea. Guessing it's a function for prediction confidence intervals.
I've fit a model using a 5-parameter logistic fit with the drm() function from the drc package. I apologize if this is a dumb question; I'm just getting started with R.
If dose is on my x-axis and response is on my y-axis, it is very easy to use this model to predict my response based on a given dose: you can use either the function PR or predict. However, I want to estimate a dose for a given response, and I can't find a function to do this. For my assay, I fit my data to a standard curve and have now measured a response from my unknowns. I would like to estimate concentration (dose) based on this response. I could fit the data in the opposite direction (flip x and y), but the fit differs slightly and that's not a very conventional strategy. If anyone has any suggestions, I would greatly appreciate them. Thank you.
In case anyone comes across this, the easiest way to do this is to use the ED() function with the argument type = "absolute".
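For reference, the same inversion can also be done generically with base R's uniroot() on the fitted curve, which avoids depending on any package-specific helper. A sketch with a hand-written 4-parameter logistic (all coefficient values here are made up; in practice you would use the coefficients from your drm fit):

```r
# 4PL curve (illustrative coefficients, not from a real fit)
f <- function(dose, b = 1.2, lower = 0.1, upper = 2.5, ed50 = 10) {
  lower + (upper - lower) / (1 + exp(b * (log(dose) - log(ed50))))
}

target <- 1.5  # measured response from an unknown sample

# Solve f(dose) = target numerically over a wide dose interval
dose_hat <- uniroot(function(d) f(d) - target,
                    interval = c(1e-6, 1e6), tol = 1e-9)$root
```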
I'm making a PLS model using the packages "pls" and "ChemometricswithR". I'm able to fit the model, but I have a problem: I did a leave-one-out validation, and if I ask for the coefficients I see only one equation (I suppose the average of all the equations developed in the leave-one-out validation).
Is there a way to see all the "n" equations (where n is the number of the observations in my matrix) with all the slopes coefficients?
This is the model I used: mod2 <- plsr(SH_uve ~ matrix_uve, ncomp = 11, data = dataset_uve, validation = "LOO", jackknife = TRUE)
This would be easier to answer if you gave more information, e.g. how you are calling the functions. Based on what you said you are doing, I'm assuming you are using the functions crossval() and PCA() from the packages "pls" and "ChemometricswithR" respectively. I'm not familiar with these functions, but the documentation states that the coefficients are "(only if jackknife is TRUE) an array with the jackknifed regression coefficients. The dimensions correspond to the predictors, responses, number of components, and segments, respectively". So I would say: make sure jackknife = TRUE and that you are specifying the correct number of segments in crossval(). If you are using different functions, you should edit your question and add the relevant information.
OK, I found the solution.
The model I used is:
mod2 <- plsr(SH_uve ~ matrix_uve, ncomp = 11, data = dataset_uve, validation = "LOO", jackknife = TRUE)
The coefficients matrix is inside the mod2 object. I extracted it with the command:
coefficients <- mod2$validation$coefficients[,,11,] and obtained the coefficient matrix for all the equations used in the leave-one-out cross-validation.
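For anyone reading later: the jackknifed coefficient array is indexed as [predictor, response, component, segment], so [,,11,] keeps every predictor and every LOO segment at 11 components. A toy array of the same shape (the dimension sizes below are made up) shows the subsetting:

```r
# predictors x responses x components x segments (made-up sizes: 5, 1, 11, 20)
a <- array(rnorm(5 * 1 * 11 * 20), dim = c(5, 1, 11, 20))

coefs <- a[, , 11, ]  # slopes at 11 components, one column per left-out obs
dim(coefs)            # 5 x 20
```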
I have data points that represent a logarithmic function.
Is there an approach where I can just estimate the function that describes this data using R?
Thanks.
I assume you mean that you have vectors y and x and you are trying to fit a function y(x) = A*log(x).
First of all, fitting the log directly is a bad idea, because it doesn't behave well numerically. Luckily, we have x(y) = exp(y/A), so we can fit an exponential function instead, which is much more convenient. We can do it using nonlinear least squares:
nls(x ~ exp(y/A), start = list(A = 1), algorithm = "port")
where start is an initial guess for A. This approach is a numerical optimization, so it may fail.
A more stable way is to transform it to a linear relationship, log(x(y)) = y/A, and fit a straight line using lm:
lm(log(x)~y)
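A quick simulated check of both routes (the true A and the noise level are arbitrary choices for the demo):

```r
set.seed(1)
A_true <- 2
x <- seq(1, 10, length.out = 200)
y <- A_true * log(x) + rnorm(200, sd = 0.05)  # noisy y = A*log(x)

# Linear route: log(x) = y/A, so the slope of lm(log(x) ~ y) estimates 1/A
fit_lm <- lm(log(x) ~ y)
A_lm <- 1 / coef(fit_lm)[["y"]]

# Nonlinear route: fit the inverted exponential form directly
fit_nls <- nls(x ~ exp(y / A), start = list(A = 1), algorithm = "port")
A_nls <- coef(fit_nls)[["A"]]
```

Both estimates should land close to the true A = 2 on data this clean.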
If I understand correctly, you want to estimate a function given some (x, y) values of it. If so, check the following links.
Read about this:
http://en.wikipedia.org/wiki/Spline_%28mathematics%29
http://en.wikipedia.org/wiki/Polynomial_interpolation
http://en.wikipedia.org/wiki/Newton_polynomial
http://en.wikipedia.org/wiki/Lagrange_polynomial
Googled it:
http://www.stat.wisc.edu/~xie/smooth_spline_tutorial.html
http://stat.ethz.ch/R-manual/R-patched/library/stats/html/smooth.spline.html
http://www.image.ucar.edu/GSP/Software/Fields/Help/splint.html
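For the smoothing-spline route in those last links, base R's smooth.spline() needs no extra packages; a sketch on simulated logarithmic data (the data here are made up for illustration):

```r
set.seed(42)
x <- seq(1, 20, length.out = 100)
y <- log(x) + rnorm(100, sd = 0.1)  # noisy logarithmic data (simulated)

fit <- smooth.spline(x, y)          # nonparametric estimate of the function
y_hat <- predict(fit, x = 10)$y     # estimated value at x = 10 (~ log(10))
```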
I have never used R, so I am not sure whether those work, but if you have Matlab I can explain more.