I am looking for an R package which can run "Spatial Vector Autoregression".
tandfonline.com/doi/full/10.1080/17421770701346689
According to Chen and Conley (2001), this is a "vector autoregression (VAR) whose coefficient matrix and shock covariance matrix are functions of economic distances between agents. The impact of other agents’ variables on the conditional mean of a given agent’s variable is a function of their economic distances from this agent. Similarly, covariances of VAR shocks are functions of distances between agents in the previous period, a property we refer to as being isotropic."
(Chen, X. & Conley, T. G. (2001) A new semiparametric spatial model for panel time series, Journal of Econometrics, 105, 59–83.)
Surprisingly, however, I could only find packages for "Spatial Autoregression", which is not what I need for my purpose. Could someone help me find a package for this? Failing that, is there an established way to fit this Spatial Vector Autoregression model in R?
I think I've found what you're looking for: devtools::install_github("James-Thorson/VAST"). VAST stands for "Vector-Autoregressive Spatio-Temporal". It is essentially a wrapper around an existing spatial-modelling package, extending it with the vector-autoregressive structure.
You can see coding examples here. If you want to look at help, use ?VAST::VAST and select one of the three hyperlinks at the bottom of the short description and details (make_settings, fit_model, and plot_results).
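To give a feel for the workflow those three functions imply, here is a rough, untested sketch. The bundled example data set (load_example), the purpose = "index2" setting, and the column names are assumptions taken from the package's own examples, so treat it as orientation rather than a recipe:

# Rough, untested sketch of the VAST workflow; data set and argument values
# are assumptions based on the package documentation
library(VAST)
example  <- load_example(data_set = "EBS_pollock")   # bundled demo data (assumed)
settings <- make_settings(n_x = 100,                 # number of spatial knots
                          Region = example$Region,
                          purpose = "index2")
fit <- fit_model(settings = settings,
                 Lat_i = example$sampling_data[, "Lat"],
                 Lon_i = example$sampling_data[, "Lon"],
                 t_i   = example$sampling_data[, "Year"],
                 b_i   = example$sampling_data[, "Catch_KG"],
                 a_i   = example$sampling_data[, "AreaSwept_km2"])
plot(fit)   # dispatches to the plotting routines; see ?plot_results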
Please note:
When I installed this package to check out what it included, it came back with a conflict: the package TMB required an earlier version of the Matrix package. I had not had TMB installed before installing this package, and I had no issues installing TMB independently (without any conflict over the Matrix version). However, when I loaded VAST it still gave me that error. When I loaded TMB first and then VAST, I didn't receive the warning and both libraries loaded.
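In other words, loading the libraries in this order avoided the warning for me:

# Loading TMB before VAST avoided the Matrix-version warning in my session
library(TMB)
library(VAST)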
I've tried to search for the answer to this question but have not found one.
Does the CochranArmitageTest() function in R support adjustment for additional variables, beyond the two-level dependent variable and the k-level independent variable? If it does, how should the x argument (frequency table or matrix) be structured for the function?
Best regards
That particular test is written specifically to assess a single ordinal or categorical variable against a binary outcome variable. It is not, as far as I know, modifiable.
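For reference, the basic call looks like this (assuming you mean DescTools::CochranArmitageTest; the counts below are made up). It accepts only a single 2 x k frequency table, with no argument for additional covariates:

# Hypothetical 2 x k table: binary outcome (rows) by ordered exposure (columns)
library(DescTools)
tab <- matrix(c(26, 35, 24, 10,
                41, 30, 33, 21),
              nrow = 2, byrow = TRUE,
              dimnames = list(outcome = c("case", "control"),
                              dose = c("0", "1", "2", "3")))
CochranArmitageTest(tab)   # no way to pass adjustment variables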
However, it is my understanding that the Bioconductor project, which curates a lot of pharmaceutical, biological, and genetics packages in R, hosts a package developed about 5 years ago to work with multiple categorical or ordinal variables and a binary outcome.
It is the globaltest package, which you can install directly from the Bioconductor repository with the following:
BiocManager::install("globaltest")
There is a PDF vignette explaining the whole package.
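You can pull it up from within R once the package is installed; the gt() function it documents is the package's main testing function:

library(globaltest)
browseVignettes("globaltest")   # lists the package vignettes, including the PDF guide
?gt                             # help page for the main testing function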
This is a follow-up to a previous question I asked a while back that was recently answered.
I have built several gbm models with dismo::gbm.step, which relies on the gbm fitting functions found in R package gbm, as well as cross validation tools from R package splines.
As part of my analysis, I would like to use some of the graphical tools available in R (e.g. perspective plots) to visualize pairwise interactions in the data. Both the gbm and the dismo packages have functions for detecting and modelling interactions in the data.
The implementation in dismo is explained in Elith et al. (2008) and returns a statistic which indicates departures of the model predictions from a linear combination of the predictors, while holding all other predictors at their means.
The implementation in gbm uses Friedman's H statistic (Friedman & Popescu, 2005); it returns a different metric and does NOT hold the other variables at their means.
The interactions modelled and plotted with dismo::gbm.interactions are great and have been very informative. However, I would also like to use gbm::interact.gbm, partly for publication strength and also to compare the results from the two methods.
If I try to run gbm::interact.gbm on a gbm.object created with dismo, an error is returned:
"Error in is.factor(data[, x$var.names[j]]) :
argument "data" is missing, with no default"
I understand dismo::gbm.step adds extra data to the gbm model that the authors thought would be useful.
I also understand that the answer to my question lies somewhere in the source code.
My question is...
Is it possible to modify a gbm object created in dismo so it can be used with gbm::interact.gbm? If so, would this be accomplished by...
a. Modifying the gbm object created in dismo::gbm.step?
b. Modifying the source code for gbm::interact.gbm?
c. Doing something else?
I will be going through the source code trying to solve this myself, if I come up with a solution before anyone answers I will answer my own question.
The gbm::interact.gbm function requires data as an argument; its signature is interact.gbm(x, data, i.var = 1, n.trees = x$n.trees).
The dismo gbm.object is essentially the same as the gbm gbm.object, but with extra information attached so I don't imagine changing the gbm.object would help.
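So the most likely fix is simply to supply the original training data yourself. A minimal, untested sketch (the model object, data frame, and predictor names below are placeholders):

library(gbm)
library(dismo)
# my_gbm:  a model fitted earlier with dismo::gbm.step()  (placeholder name)
# my_data: the data frame that model was fitted to        (placeholder name)
h <- interact.gbm(x = my_gbm,
                  data = my_data,                 # the argument the error complains about
                  i.var = c("pred1", "pred2"))    # hypothetical predictor names
h   # Friedman's H statistic for the chosen pair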
I am trying to find a reference which explains how standard errors are computed for local polynomial regression. Specifically, in R one can use the loess function to get a model object and then use the predict function to retrieve standard errors. Is there a reference somewhere for what is actually happening? And in the case where there may be serial correlation in the residuals, one would need to adjust for this using Newey-West type methods; is there a way to use the sandwich package to do this, as you would for a regular OLS fit with lm?
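For concreteness, this is the kind of call I mean (using the built-in cars data, nothing specific to my problem):

# Fit a local polynomial regression and ask predict() for standard errors
fit  <- loess(dist ~ speed, data = cars)
pred <- predict(fit, se = TRUE)   # list with $fit, $se.fit, $residual.scale, $df
head(pred$se.fit)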
I tried looking at the source but the standard error computation calls a C function.
The "Source" section of ?loess tells you that the underlying C-code comes from the cloess package of Cleveland et al., and points you to its web home:
Source:
The 1998 version of ‘cloess’ package of Cleveland, Grosse and
Shyu. A later version is available as ‘dloess’ at http://www.netlib.org/a/.
Going there, you will find a link to a 50-page document (warning: PostScript) that should tell you everything you need to know about this implementation of loess. In Cleveland's words:
This guide describes crucial steps in the proper analysis of data using
loess. Please read it.
Of particular interest will be the first couple pages of "Section 4: Statistical and Computational Methods".
I am working with several large databases (e.g. PISA and NAEP) that use a complex survey design with replicate weights and multiple plausible values. I can address the former using the survey package. However, does there exist an R package/function to analyze the latter?
For reference, I have found this article to provide a good overview of the issue: http://www.ierinstitute.org/fileadmin/Documents/IERI_Monograph/IERI_Monograph_Volume_02_Chapter_01.pdf
I'm not sure how the general idea of 'plausible values' differs from using multiple imputation to generate several sets of imputed values (such as the Amelia package does). But Thomas Lumley's mitools package can be used to combine the various sets of imputed values, and it may be that it can also be used to combine your sets of plausible values to obtain the 'correct' standard errors of the estimates.
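If that works for plausible values, the mechanics would look roughly like the sketch below. This is only an outline: the list of data frames, the score column, and the predictor are placeholders, and with PISA/NAEP you would use the survey-design versions of these calls rather than plain lm:

library(mitools)
# pv_datasets: a hypothetical list of data frames, one per plausible value,
# each storing its plausible value in a common column called 'score'
imps <- imputationList(pv_datasets)
fits <- with(imps, lm(score ~ ses))   # fit the same model to each dataset ('ses' is a placeholder)
summary(MIcombine(fits))              # combine estimates and SEs with Rubin-type rules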
Daniel Caro developed an R package for large-scale assessments. You can find it here: http://cran.r-project.org/web/packages/intsvy/index.html
This is a code example using the regression command over the plausible values for Mathematics:
# Table I.2.3a, p. 305, International Report 2012
pisa.reg.pv(pvlabel="MATH", x="ST04Q01", by = "IDCNTRYL", data=pisa)
However, I'm not sure whether this package can be used to analyze NAEP data.
I hope this fulfills your purposes, at least partially.
As of survey version 3.36 there's withPV:
# Example data shipped with mitools: PISA maths with five plausible values
data(pisamaths, package="mitools")

# Complex survey design for the PISA sample
des <- svydesign(id=~SCHOOLID+STIDSTD, strata=~STRATUM, nest=TRUE,
                 weights=~W_FSCHWT+condwt, data=pisamaths)
options(survey.lonely.psu="remove")

# withPV() runs the analysis once per plausible value, substituting each PV for 'maths'
results <- withPV(list(maths~PV1MATH+PV2MATH+PV3MATH+PV4MATH+PV5MATH),
                  data=des,
                  action=quote(svyglm(maths~ST04Q01*(PCGIRLS+SMRATIO)+MATHEFF+OPENPS,
                                      design=des)))

# Combine the fits with Rubin-type rules
summary(MIcombine(results))
I'm testing a simple moving-average crossover strategy in R. Instead of running a huge simulation over the two-dimensional parameter space (length of the short-term moving average, length of the long-term moving average), I'd like to implement the Particle Swarm Optimization algorithm to find the optimal parameter values. I've been browsing the web and read that this algorithm is very effective. Moreover, the way the algorithm works fascinates me...
Does anybody of you guys have experience with implementing this algorithm in R? Are there useful packages that can be used?
Thanks a lot for your comments.
Martin
Well, there is a package available on CRAN called pso, and indeed it is a particle swarm optimizer (PSO).
I recommend this package.
It is under active development (last update 22 Sep 2010) and is consistent with the reference implementation of PSO. In addition, the package includes functions for diagnostics and plotting results.
It certainly appears to be a sophisticated package, yet the main function interface (psoptim) is straightforward: just pass in a few parameters that describe your problem domain, plus a cost function.
More precisely, the key arguments to pass in when you call psoptim are:
- a vector whose length gives the dimension of the problem (par);
- lower and upper bounds for each variable (lower, upper); and
- a cost function (fn).
There are other parameters in the psoptim method signature; those are generally related to convergence criteria and the like.
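As a concrete illustration for your use case, a call might look like the sketch below. backtest_loss is a hypothetical function you would write to return the (negative) performance of the crossover strategy for given window lengths, and the bounds are arbitrary:

library(pso)
# Cost function over the two window lengths (short, long); psoptim works on
# continuous values, so round them and penalise infeasible combinations
cost <- function(p) {
  short <- round(p[1]); long <- round(p[2])
  if (short >= long) return(1e6)     # enforce short < long
  backtest_loss(short, long)         # hypothetical back-test: smaller is better
}
res <- psoptim(par = c(NA, NA),      # NA entries are initialised randomly within the bounds
               fn = cost,
               lower = c(5, 20), upper = c(50, 250))
res$par     # candidate (short, long) window lengths
res$value   # corresponding cost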
Are there any other PSO implementations in R?
There is an R package called ppso for parallel PSO. It is available on R-Forge. I do not know much about this package; I have downloaded it and skimmed the documentation, but that's it.
Beyond those two, none that I am aware of. About three months ago I looked for R implementations of the more popular meta-heuristics, and these are the only PSO implementations I came across. The R bindings to the GNU Scientific Library (GSL) have a simulated annealing algorithm, but none of the biologically inspired meta-heuristics.
The other place to look is of course the CRAN Task View on Optimization. I did not find another PSO implementation beyond what I've recited here, though there are quite a few packages listed there, and most of them I did not check beyond the name and one-sentence summary.