I would like to save the results of a Bayesian estimation function (e.g. brm) to file (e.g. in LaTeX format) for sharing results and preparing publications. It appears that existing packages (e.g. xtable, stargazer) are designed with non-Bayesian statistics in mind and cannot handle these model objects. Are there any existing packages or available code to handle this task (before I begin to reinvent the wheel)? I have found tools for making tables from models estimated using JAGS/BUGS here, but brms uses Stan to estimate models.
If you call launch_shinystan on the object and go to the Estimate tab, there is a "Generate LaTeX table" link that gives you a bunch of options to check on the left and outputs the syntax on the right.
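A minimal illustration of that workflow (assuming fit is the brmsfit returned by brm and the shinystan package is installed):
library(shinystan)
launch_shinystan(fit)   # open the app, then use the Estimate tab > "Generate LaTeX table"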
For posterity's sake, for basic tables one may also access specific parts of the model, for example:
summary(model)$fixed
where model is a brmsfit object, and pass that to xtable or another function to output LaTeX tables.
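For instance, a hedged sketch of that route (assuming model is a fitted brmsfit and the xtable package is installed; the output file name is arbitrary):
library(xtable)
fixed <- summary(model)$fixed                      # population-level estimates, CIs, Rhat, ESS
print(xtable(fixed), file = "fixed-effects.tex")   # write the LaTeX table to disk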
I have been working on an ML project (done inside an R project) that resulted in some ML models (built with caret) ALONG WITH code that uses those models for additional analysis.
As the next phase, I am "deploying" these models by creating an R package that my collaborators can use for analysis of new data, where that analysis includes USING the trained ML models. This package includes functions that generate reports, where, embedded in that report, is the application of the trained ML models to the new data sets.
I am trying to identify the "right" way to include those trained models in the package. (Note, currently each model is saved in its own .rds file).
I want to be able to use those models inside of package functions.
I also want to consider the possibility of "updating" the models to a new version at a later date.
So ... should I:
Include the .rds files in inst/extdata
Include as part of sysdata.rda
Put them in an external data package (which seems reasonable, except almost all examples in tutorials expect a data package to include data.frame-ish objects.)
With regard to that third option ... I note that these models likely imply that there are some additional NAMESPACE issues at play, as the models will require a whole bunch of caret-related stuff to be usable. Is that NAMESPACE modification required to be in the "data" package or in the package that I am building that will "use" the models?
My first intention is to go for option 1. There is no need to go for other formats such as PMML, as you only want to run it within R, so I consider .rda/.rds as natively best. As long as your models are not huge, it should be fine to share with collaborators (but maybe not for a CRAN package). I see that option 3 sounds convenient, but why separate the models and the functions? Freshly trained models would then come with a new package version anyway, since you would need to release the data package again. I don't see much gained this way, but I don't have much experience with data packages.
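For what it's worth, a rough sketch of how option 1 could look inside a package function (the file, package, and function names are illustrative, not from the question):
# Model ships in inst/extdata/model_A.rds of a hypothetical package "mypackage"
predict_new_data <- function(newdata) {
  path  <- system.file("extdata", "model_A.rds", package = "mypackage")
  model <- readRDS(path)                              # load the trained caret model
  # caret should be in Imports so predict() dispatches to predict.train()
  predict(model, newdata = newdata, type = "prob")
}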
I'm building a species distribution model/habitat suitability model using the package biomod2.
Maxent allows the user to choose one of four output formats (see title) when the Java application is used directly. However, when Maxent is called by functions in the package biomod2 (e.g. BIOMOD_Modeling), there doesn't seem to be an option to specify the output format, nor is there any indication which one has been chosen. I think cloglog is the default, so it is likely that one, but I would like to be sure.
This tutorial has some images showing the differences between some of the outputs: https://biodiversityinformatics.amnh.org/open_source/maxent/Maxent_tutorial2017.pdf
Thanks for your help!
I emailed the developers, and Damien Georges told me that the output format for MAXENT in biomod2 is logistic and that (so far) there is no way to change it.
I want to replicate in texreg the functionality contained in stargazer via the arguments omit and omit.labels (see here). Unfortunately, I cannot use the stargazer package as it does not support the model I am using and is not extensible. Since texreg is easily extensible I can get it to work with my models. I can also easily omit some output from texreg with the omit.coef argument. What I can't seem to figure out is how to insert labels for the omitted coefficients. Does this exist in texreg? Does anyone have experience trying to write this functionality into an extract function? Alternatively, has anyone figured out how to extend stargazer to work with a custom model?
Context: I am writing a presentation in knitr and need to convert the output of some estimators into latex which will then get converted to pdf for my beamer presentation. The output has a bunch of covariates and thus is too long to display nicely in beamer. I want to truncate the output by omitting some covariates and inserting in their place a line indicating whether these covariates have been included in the model or not, e.g. collapse the variables "County Population", "County Income", etc. into a line that reads "County controls" and then have "Yes" or "No" to indicate whether these controls were included in the estimate or not. Ideally, someone could help me figure out a way to do this in texreg. If not, I would be open to other packages/approaches, e.g. xtable.
A possible option is the GitHub version of huxtable. (I am the author.) This has a huxreg function which creates a table from a bunch of regressions, much like texreg: it'll work for anything that has a broom::tidy method defined for it. You can then edit the table much like a normal data frame, and just rbind in the rows you want (see the sketch below).
You'll need to install it using devtools::install_github if you want to try this route.
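A rough sketch of that approach (the model objects m1 and m2, the coefficient names, and the labels are made up for illustration):
library(huxtable)
# Build the regression table, dropping the county-level coefficients
ht <- huxreg(m1, m2, omit_coefs = c("county_pop", "county_income"))
# rbind in an indicator row in place of the omitted controls
ht <- rbind(ht, c("County controls", "Yes", "Yes"))
to_latex(ht)   # LaTeX code for the knitr/beamer document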
I created an XML file using the pmml function from the pmml library in R:
library(ada)    # for ada()
library(pmml)   # for pmml()
library(XML)    # for saveXML()
# Train the boosted model on the iOS training set
adamodel_iOS=ada(label~.,data=train_iOS, iter=ntrees, verbose=TRUE, loss="ada", bag.frac=0.7, nu=0.1, control=defctrl, type="real")
# Class probabilities for the training data
Ptrain_iOS = predict(adamodel_iOS,newdata=train_iOS, type="prob")
# Export the fitted model as PMML and write it to disk
adapmml_iOS=pmml(adamodel_iOS)
saveXML(adapmml_iOS,"model_iOS.xml")
save.image()
After training the model in the first line, I computed the corresponding probabilities for the training data.
Now I want to use this XML file to generate predictions on a set of data (basically the training set again). How do I do that in R? I see that in Java and Spark we can load the XML file generated by the pmml function, and then there are functions which can make predictions.
Basically, I am looking for a function in R that can take this XML file as input and return an object which, in turn, takes some data points as input and returns their probabilities of having label 0 and 1.
I found a link, Can PMML models be read in R?, but it does not help.
Check this link for the list of PMML producers and consumers. As you can see, R is listed as a producer, not a consumer. The algorithms for which R can produce the corresponding PMML files are also listed.
The most comprehensive tool for PMML validation, conversion, and also for scoring data using PMML models is ADAPA, which is not free.
KNIME is an open-source drag-and-drop analytics tool which supports both import and export of PMML files (not for all models, and the features are limited). It also supports R, Python, and Java.
Although it was a long time ago, I still want to share that you can use the reticulate package to call the Python pypmml package to implement this in R. To make it friendlier and make prediction look more like R's predict function, I have wrapped it in a package; the address of the package is here.
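A hedged sketch of that reticulate route (it assumes Python, pandas, and the pypmml package are available to reticulate; object names are illustrative):
library(reticulate)
pypmml <- import("pypmml")
model  <- pypmml$Model$fromFile("model_iOS.xml")   # load the exported PMML model
preds  <- model$predict(r_to_py(train_iOS))        # probabilities for new (here: training) rows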
I've been using the xtable package for a long time, and I'm looking forward to writing my first package in R... so I reckon that if I have some "cool" idea that's worth carrying out, there's a great chance that somebody got there before me... =)
I'm interested in functions/packages specialized for LaTeX table creation (through R, of course). I stumbled on the quantreg package, which has a latex.table function. Any suggestion for similar function(s)/package(s)?
P.S.
I'm thinking about building a webapp in which users can define their own presets/templates of tables, choose style, statistics, etc. It's an early thought, though... =)
I sometimes divide the task of creating LaTeX tables into two parts:
I'll write the table environment, caption, and tabular environment commands directly in my LaTeX document.
I'll export just the body of the table from R using a custom function.
The R export part involves several steps:
Starting with a matrix of the whole table including any headings:
Add any LaTeX specific formatting to the table. E.g., enclose digits in dollar symbols to ensure that negative numbers display correctly.
Collapse each row into a single character value by joining the columns with ampersands (&) and appending the end-of-row symbol "\\"
Add any horizontal lines to be displayed in the table. I use the booktabs LaTeX package.
Export the resulting character vector using the write function (a sketch of such a helper is shown below)
The exported text file is then imported using the \input command in LaTeX. I ensure that the file name corresponds to the table label.
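Here is a minimal sketch of such an export helper; the function name, the dollar-sign regex, and the booktabs layout are illustrative rather than my exact code:
# tab is a character matrix whose first row holds the column headings
export_table_body <- function(tab, file) {
  # Enclose numeric cells in dollar signs so negative numbers display correctly
  body <- apply(tab, c(1, 2), function(x)
    if (grepl("^-?[0-9.]+$", x)) paste0("$", x, "$") else x)
  # Collapse each row: columns joined by " & ", rows terminated by " \\"
  rows <- paste(apply(body, 1, paste, collapse = " & "), "\\\\")
  # booktabs rules around the heading row and at the end of the table body
  lines <- c("\\toprule", rows[1], "\\midrule", rows[-1], "\\bottomrule")
  write(lines, file = file)   # the file is then pulled in with \input{} in LaTeX
}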
I have used this approach in the context of writing journal articles.
In these cases, there are a lot of different types of tables (e.g., multi-page tables, landscape tables, tables requiring extended margins, tables requiring particular alignment, tables where I want to change the wording of the table title). In this setting, I've mostly found it easier to just export the data from R. In this way, the result is reproducible research, but it is easier to tweak aspects of table design in the LaTeX document. And in the context of journal articles, there are usually not too many tables and rather specific formatting requirements.
However, I imagine if I were producing large numbers of batch reports, I'd consider exporting more aspects directly from R.
Beyond xtable and Hmisc as listed by Rob, there are also at least
apsrtable which formats LaTeX tables from one or more model objects
p2lh which exports R to LaTeX and HTML
RcmdrPlugin.Export which graphically exports output to LaTeX or HTML
reporttools which generates LaTeX tables of descriptive statistics
This was just based on a quick search. So there may be more for you to look at before you try to hook it into a webapp. Good luck.
In addition to the packages mentioned above, there is the stargazer package. It works well with objects from many commonly used functions and packages (lm, glm, svyglm, plm, survival, AER, pscl, and others), as well as with zelig objects.
Apart from xtable, there's the latex function in the Hmisc package.
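For completeness, a tiny example of that route (the data and file name are arbitrary):
library(Hmisc)
latex(head(mtcars), file = "mtcars.tex")   # writes LaTeX table code for the data frame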