I believe that a Bayesian classifier is based on a statistical model. But after training a Bayesian model, I can save it and no longer need the training dataset to predict on test data. For example, if I build a Bayesian model from
y (labels) and X (samples),
can I treat the trained model as an equation, say in terms of weights and a bias?
If so, how can I extract the weights and bias, and what does the formula look like? If not, what form does the equation take?
Yes: from the docs, a trained classifier has two attributes, intercept_ and coef_, which are useful if you want to interpret the naive Bayes classifier (NBC) as a linear model.
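To see why those two attributes are all you need at prediction time, here is a minimal sketch in base R of the Bernoulli naive Bayes algebra on made-up data; the vector w and scalar b below play the role of coef_ and intercept_ (all data and names are illustrative):

# Bernoulli naive Bayes written out as a linear model: the decision
# rule reduces to sign(w . x + b), so only w and b must be stored.
set.seed(1)
X <- matrix(rbinom(200 * 5, 1, 0.4), nrow = 200)  # binary features
y <- rbinom(200, 1, 0.5)                          # binary labels

# Per-class feature probabilities with Laplace smoothing
p1 <- (colSums(X[y == 1, ]) + 1) / (sum(y == 1) + 2)
p0 <- (colSums(X[y == 0, ]) + 1) / (sum(y == 0) + 2)

# Log-odds form: log P(y=1|x) - log P(y=0|x) = b + w . x
w <- log(p1 / (1 - p1)) - log(p0 / (1 - p0))      # the "weights" (coef_)
b <- log(mean(y) / (1 - mean(y))) +
     sum(log(1 - p1)) - sum(log(1 - p0))          # the "bias" (intercept_)

pred <- as.integer(X %*% w + b > 0)               # no training data needed

So the trained model really is a linear equation in the features, and the training set can be discarded once w and b are saved.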
I am working with longitudinal data with three waves and have first estimated a hybrid probit regression model (a within-between regression model). I ran the analyses in Stata using the following code:
* Random-effects probit with cluster-robust SEs; m_* variables are person
* means (between effects), d_* variables are deviations from the person
* mean (within effects)
xtprobit AdhLastWeekNewT m_povertyT d_povertyT ///
    BoyGirlT m_AgeT d_AgeT m_RuralT d_RuralT m_HHSizeTotalT d_HHSizeTotalT Timepoint ///
    , re vce(cluster ID)
As a next step, I want to estimate a structural equation model to look at possible mediators. Does anyone know how I can implement a hybrid structural equation model in Stata or R? Are there any papers you are aware of that use a similar approach?
Thanks a lot!
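Not a full answer, but the within-between ("hybrid") decomposition itself is straightforward to rebuild in R as a first step, whichever SEM tool is used afterwards (Stata's gsem supports probit links; lavaan is a common R choice). A minimal sketch, assuming a long-format data frame df with columns ID and poverty (names illustrative):

# Within-between (hybrid) split: person mean plus deviation from that mean
library(dplyr)

df <- df %>%
  group_by(ID) %>%
  mutate(
    m_poverty = mean(poverty, na.rm = TRUE),  # between part (m_ prefix above)
    d_poverty = poverty - m_poverty           # within part (d_ prefix above)
  ) %>%
  ungroup()

# The m_* and d_* variables can then enter the structural equations of an
# SEM just as they enter the xtprobit call above.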
I'd like to use the 'e1071' library for fitting an SVM model. So far, I've built a model that fits a regression curve to the data set
(take a look at the purple curve):
However, I want the SVM model to "follow" the data, such that the prediction for each value is as close as possible to the actual data. I think this is possible, judging by this graph, which shows how an SVM model (model 2) can behave much like an ARIMA model (model 1):
I tried changing the kernel to no avail. Any help will be much appreciated.
Fine-tuning an SVM is no easy task. Have you considered other models, for example GAMs (generalized additive models)? These work well on very curvy data.
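For what it's worth, here is a minimal sketch of both routes, assuming a data frame dat with columns x and y, sorted by x (names illustrative). Raising gamma and cost while shrinking epsilon lets the SVM fit hug the data more closely, and the GAM is the "curvy data" alternative suggested above:

library(e1071)
library(mgcv)

# 1) SVM regression: larger gamma/cost and smaller epsilon let the curve
#    follow the data more tightly (at the risk of overfitting)
tuned <- tune.svm(y ~ x, data = dat,
                  gamma = 10^(-2:2), cost = 10^(0:3),
                  epsilon = c(0.01, 0.1))
fit_svm <- tuned$best.model

# 2) GAM: a penalized spline whose wiggliness adapts to the data
fit_gam <- gam(y ~ s(x), data = dat)

plot(dat$x, dat$y)
lines(dat$x, predict(fit_svm, dat), col = "purple")
lines(dat$x, predict(fit_gam), col = "blue")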
I am trying to apply an existing Conjoint model to a new data set (holdout sample) in order to determine the goodness of the fit.
I run a Conjoint analysis and obtain a model. The Conjoint function automatically generates a graphical output, the Residual standard error, Multiple R-squared, F-statistic, adjusted R-squared, p-value, and part-worths (utilities).
Now I want to run the Conjoint model on the holdout samples to know how the model performs on the new data.
Ideally, I would like to find out what the best practice looks like for assessing the goodness of fit of an existing model on new data.
Many thanks
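One way to sketch this in R (illustrative names throughout; the conjoint package estimates the part-worths with an underlying linear model, so refitting with lm gives the same utilities plus a predict method for new profiles):

# train/holdout are data frames with the profile factors and a rating
fit <- lm(rating ~ price + brand + feature, data = train)
pred <- predict(fit, newdata = holdout)

# Out-of-sample goodness of fit: holdout R^2 and RMSE
ss_res <- sum((holdout$rating - pred)^2)
ss_tot <- sum((holdout$rating - mean(holdout$rating))^2)
r2_holdout <- 1 - ss_res / ss_tot
rmse <- sqrt(mean((holdout$rating - pred)^2))

# Kendall's tau between predicted and observed holdout ratings is another
# conventional holdout check in conjoint studies
cor(pred, holdout$rating, method = "kendall")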
I'm trying to learn about MARS/Earth models for classification and am using "classif.earth" in the mlr package in R. My issue is that the mlr documentation says that "classif.earth" performs flexible discriminant analysis using the earth algorithm.
However, when I look at the code:
(https://github.com/mlr-org/mlr/blob/master/R/RLearner_classif_earth.R)
I don't see a call to fda in the mda package; rather, it directs earth to fit a GLM with a default logit link.
So tell me if I'm wrong, but it seems to me that "classif.earth" is not doing flexible discriminant analysis but rather fitting a logistic regression on top of the earth model.
The implementation uses MARS to perform the FDA, where the MARS model determines the different groups. You can find more information in this paper; I quote from the abstract:
Linear discriminant analysis is equivalent to multiresponse linear regression [...] to represent the groups.
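To see the two routes side by side, here is a minimal sketch on iris (illustrative data): (a) what classif.earth effectively does, an earth basis with a binomial GLM, and (b) flexible discriminant analysis with earth as the regression engine, as in the paper:

library(earth)
library(mda)

# (a) earth basis + logistic (binomial) GLM, as in classif.earth
dat2 <- droplevels(iris[iris$Species != "setosa", ])  # two-class subset
fit_glm <- earth(Species ~ ., data = dat2, glm = list(family = binomial))

# (b) FDA, using earth to fit the regression that scores the groups
fit_fda <- fda(Species ~ ., data = iris, method = earth)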
In Azure ML, I have a predictive regression model using boosted decision tree regression and it is reasonably accurate.
The input dataset has over 450 columns and the model has done a good job of predicting against test data sets, without over-fitting.
To report on the results, I need to know which features/columns the model mainly used to make predictions, but I can't find this information easily when looking at the trained model data.
How do I identify this information? I'm happy to import the result dataset into R to help find it; I just need pointers on what direction to start working in.
Usually in Microsoft Azure Machine Learning, the features that contribute most to predictions are found in the output of the Train Model module.
But when you use a decision-tree algorithm, the output of the Train Model module is the set of constructed trees themselves, which looks like this:
To know which features had an impact on predictions when using decision-tree algorithms, you can use the Permutation Feature Importance module. Look at the sample experiment below:
The parameters of Permutation Feature Importance are Random Seed and Metric for Measuring Performance (in this case, Regression - Coefficient of Determination).
The left input of Permutation Feature Importance is your trained model, and the right input is your test data.
The output of Permutation Feature Importance looks like this:
You can add an Execute R Script module to extract the features and scores from the Permutation Feature Importance output.
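A minimal sketch of such an Execute R Script (Azure ML Studio); the column names Feature and Score are assumptions, so adjust them to whatever the Permutation Feature Importance output actually uses:

# Read the Permutation Feature Importance table from the left input port
dat <- maml.mapInputPort(1)

# Keep, say, the ten highest-scoring features
top <- head(dat[order(-dat$Score), ], 10)   # Score: assumed column name

# Send the reduced table to the module's output port
maml.mapOutputPort("top")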