Model variables are described in the Frama-C manual (both in the spec and the "implementation" version).
However, I am unable to parse even simple fragments taken from the manual. For example,
//@ model set<integer> forbidden = \empty;
and
//@ model integer x = 0;
both lead to parse errors.
Are model variables really supported? If so, what am I doing wrong?
The version of Frama-C I'm using is Nitrogen, on macOS.
Thanks,
Eduardo
As mentioned on p. 11 of the "implementation" version of the ACSL manual, the features printed in a red font are not yet available in Frama-C. Indeed, in Nitrogen neither model variables nor model fields are implemented.
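If you only need extra state for specification purposes, ghost variables (which are implemented in Frama-C) can sometimes serve as a workaround. A minimal sketch with a hypothetical counter; note this is not a drop-in replacement for model variables, since ghost code lives at the C level rather than the logic level:

```c
/* Hypothetical example: track a count in a ghost variable instead of a
   model variable. The //@ annotations are read by Frama-C; to the C
   compiler they are ordinary comments. */
//@ ghost int forbidden_count = 0;

/*@ ensures forbidden_count == \old(forbidden_count) + 1; */
void add_forbidden(void) {
  //@ ghost forbidden_count++;
}
```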
I have an older random forest model I built with the Rborist package, and I recently stopped being able to make predictions with it. I think older versions of the package produced models of class Rborist and starting with version 0.3-1 it makes models of class rfArb. So the predict method also changed from predict.Rborist to predict.rfArb.
I was originally getting a message that there was "no applicable method for 'predict' applied to an object of class 'Rborist'" (since that class had been deprecated, I think). Then I manually changed the class of my old model to rfArb and started getting a new error message:
Error in predict.rfArb(surv_mod_rf, newdata = trees) :
Sampler state needed for prediction
It looks to me like this has to do with the way rfArb objects are constructed (they have a sampler vector that records how many times each observation in the sample was sampled, or some such), which is different from the way Rborist objects are constructed. But please let me know if I'm misunderstanding this.
Is there any way to update my old model so I can use the recent version of Rborist to make predictions, or are the only options using an older version of the package or rebuilding the model?
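A quick way to check whether the old object actually carries the state the new predict method needs (the object name is from the question; the "sampler" component name is an assumption based on the error message):

```r
# Sketch: inspect the old model object.
class(surv_mod_rf)                   # "Rborist" (old) vs. "rfArb" (new)
"sampler" %in% names(surv_mod_rf)    # FALSE suggests the state is absent

# If the sampler state is genuinely missing from the old object, the
# realistic options are likely retraining with the current package or
# predicting under the old package version.
```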
R version used: 3.6.3; mlr3 version: 0.4.0-9000; mlr3proba version: 0.1.6.9000; xgboost version: 0.90.0.2 (as stated in the RStudio package manager).
Unfortunately, when applying surv.xgboost for training and prediction, no distr output is produced, contrary to what the documentation states (https://mlr3proba.mlr-org.com/reference/LearnerSurvXgboost.html); only crank and lp outputs are produced.
Also please note that the documentation link above is unstable: it sometimes points to the new mlr3proba version 0.2.0 and throws a 404 error, while at other times it works and shows the surv.xgboost documentation for mlr3proba 0.1.6.
Please let me know if you would like me to provide any further details concerning the issue. Thank you in advance for your time.
Hi, thanks for using mlr3proba! Good spot on the documentation problem; I will get that fixed ASAP. xgboost does not natively predict distr, so this is a mistake in the documentation. You can check this with LearnerSurvXgboost$new()$predict_types. However, it is easy to compose a distribution prediction:
library(mlr3); library(mlr3proba); library(mlr3pipelines)
learn = distrcompositor(lrn("surv.xgboost"), estimator = "kaplan", form = "ph")
You could change the form and estimator arguments, though since xgboost assumes a PH form, these are the most sensible options.
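To illustrate end to end, a sketch using the rats survival task that ships with mlr3proba (depending on the mlr3proba version, you may need to wrap the composed result in GraphLearner$new() before training):

```r
library(mlr3); library(mlr3proba); library(mlr3pipelines)

task = tsk("rats")  # example survival task shipped with mlr3proba
learn = distrcompositor(lrn("surv.xgboost"), estimator = "kaplan", form = "ph")

learn$train(task)
p = learn$predict(task)
p$distr   # distribution prediction, now produced alongside crank and lp
```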
Let me know if the code chunk doesn't work for some reason, and if it does please mark as answered :)
Raphael
I had created a DecisionTree model in Julia using some features that I had created through an algorithm.
model_rest2 = DecisionTreeClassifier(n_subfeatures=0)
@time DecisionTree.fit!(model_rest2, convert(Matrix, df3[:, [16:45; 49:50; 52:81; 83:end]]), df3[:, :type_1])
I seem to have modified the feature-building steps, so the model no longer runs at test time: the number of features in the model differs from the number of features available in the test set. Is there a way to find the list of features used by the model so that I can add the missing ones?
Since I didn't get any answers, I changed my modus operandi and specified all the column names instead of the numeric column ranges. This ensures I don't run into this issue again.
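That fix can be sketched as follows (the column names here are hypothetical; Matrix(df) replaces convert(Matrix, df) in recent DataFrames versions):

```julia
using DecisionTree, DataFrames

# Hypothetical feature names; selecting by name keeps training and test
# sets aligned even if the column layout of df3 changes later.
feature_cols = [:age, :height, :weight]

model = DecisionTreeClassifier(n_subfeatures = 0)
X = Matrix(df3[:, feature_cols])
DecisionTree.fit!(model, X, df3[:, :type_1])

# At test time, select the same names in the same order:
X_test = Matrix(test_df[:, feature_cols])
DecisionTree.predict(model, X_test)
```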
I'm using jags to model engineering inverse problems from a Bayesian framework.
I would like to know whether I can include a function to define the mu parameter in the JAGS model. For example:
# Define the model:
modelString = "
model {
  for ( i in 1:Ntotal ) {
    myData[i] ~ dnorm( mu[i] , 1/sigma^2 )
    mu[i] = function(c,fi){...}
  }
  c ~ dnorm( 9 , 1/9 )
  fi ~ dnorm( 24 , 1/4 )
}
"
When I include the function, I get an error:
Error parsing model file:
syntax error on line 6 near "{"
Is there some way to include a function inside the model?
Thanks
The short answer is that there is no way to define a new function directly in BUGS/JAGS in the way that you want, because BUGS is not a programming language. You are limited to the functions and distributions listed in the JAGS user manual, or those made available by loading external JAGS modules such as runjags, jags-wiener, or (currently a small number of) others.
The slightly longer version is that you can define your own functions and distributions in JAGS by writing your own module to specify your desired function/distribution in C++ and then loading that into JAGS. The official JAGS documentation is currently light on details, but there is a tutorial published:
Wabersich, D., and J. Vandekerckhove. 2014. Extending JAGS: A tutorial on adding custom distributions to JAGS (with a diffusion model example). Behavior Research Methods 46:15-28. doi:10.3758/s13428-013-0369-3.
This obviously requires familiarity with C++, but it is not that difficult if you are already a C++ coder. Installing the module is much easier if you embed the JAGS extension module within an R package, as the runjags package does (look in its /src directory). If you are not already a C++ coder, it is best to seek assistance.
Hope that helps,
Matt
---
Edit: it is also worth saying that there is probably a way of doing what you want in BUGS/JAGS; it is just that the approach you tried (writing a function inside the JAGS model) is not viable. If you explain your actual problem in more detail (probably in a new question), you might get a solution you had not considered.
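As that edit suggests, the usual route is to write the expression for mu[i] directly as a deterministic node using JAGS's built-in functions, rather than defining a function. A sketch with a hypothetical functional form (the exponential decay, the covariate x, and the sigma prior are all assumptions for illustration):

```r
modelString = "
model {
  for ( i in 1:Ntotal ) {
    myData[i] ~ dnorm( mu[i] , 1/sigma^2 )
    # Deterministic node: a hypothetical functional form written inline
    # with built-in JAGS functions instead of a user-defined function.
    mu[i] <- c * exp( -fi * x[i] )
  }
  c ~ dnorm( 9 , 1/9 )
  fi ~ dnorm( 24 , 1/4 )
  sigma ~ dunif( 0 , 10 )
}
"
```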
I am pretty new to the Julia language (using Version 0.6.0 (2017-06-19 13:05 UTC), the official x86_64-w64-mingw32 release from http://julialang.org/, on a Windows 7 machine). I have an R background and found the implementations of mixed models to be slow for very big data sets (n > 2,000,000, p > 100). Hence, I searched for alternatives, and Julia seems to be lightning fast when it comes to estimation time.
The question I want to raise here is about the MixedModels.jl package by dmbates. Despite its incredible speed compared to, e.g., lme4, I was wondering if there is also a prediction function. Here is an MWE that pulls the Dyestuff data from R's lme4 package:
using MixedModels, RCall
R> library("lme4")
R> data(Dyestuff)
Dyestuff = rcopy(R"Dyestuff");
mm = fit!(lmm(@formula(Yield ~ 1 + (1 | Batch)), Dyestuff));
So how can I do predictions using something like:
predict(mm, newdata = Dyestuff)
Many thanks in advance.
Please note that Julia has not yet reached v1.0, and there are often big, breaking API changes between releases. Likewise, MixedModels.jl is in active development and must track Julia's API changes in its own API. The information here is (hopefully) correct at the time of writing.
Looking at the source code of MixedModels.jl at the current revision e566fcf, there is no predict() method, but there is a fitted() method, which inherits from / overrides StatsBase.fitted(). It should be easy enough to also write a predict() method overriding StatsBase.predict() and submit it as a pull request. You might want to look at the simulate() method -- instead of generating new data based on existing data, you would use data passed as an argument.
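In the meantime, in-sample predictions are available via fitted(); a sketch against the pre-1.0 MixedModels API used in the question (names may have changed in later releases):

```julia
using MixedModels, RCall

# Dyestuff data from R's lme4 package, as in the question
Dyestuff = rcopy(R"lme4::Dyestuff")

mm = fit!(lmm(@formula(Yield ~ 1 + (1 | Batch)), Dyestuff))

# Conditional fitted values for the training data; for genuinely new
# data, a predict() overriding StatsBase.predict() would still be needed.
yhat = fitted(mm)
```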