I have used the leaps package in R to perform forward and backward feature selection. However, I want to automate the cross-validation and prediction steps. How can I do forward/backward selection in caret?
In the leaps package you can do it this way:
library(leaps)

# forward stepwise selection over up to 20 predictors
forward <- regsubsets(x ~ ., data, nvmax = 20,
                      method = "forward")
You should be able to run a stepwise regression in caret::train() with method = "glmStepAIC", which wraps stepAIC() from the MASS package. For details, see the list of models supported by caret on the caret documentation website.
The caret test cases for this model are accessible on the caret GitHub repository.
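A minimal sketch of what that call could look like, assuming a data frame df with a response column y (both placeholders); I believe the direction argument is forwarded through train()'s ... to MASS::stepAIC:

library(caret)
library(MASS)

set.seed(42)
fit <- train(y ~ ., data = df,
             method = "glmStepAIC",   # stepwise selection via MASS::stepAIC
             direction = "backward",  # assumption: passed through to stepAIC; "forward" also works
             trControl = trainControl(method = "cv", number = 10))
summary(fit$finalModel)

If you would rather stay close to leaps itself, caret's model list also includes "leapForward" and "leapBackward", which wrap regsubsets().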
Is it possible to calculate the Approximate Weight of Evidence (AWE) from information obtained via the mclust R package?
According to the R documentation, you should have access to the function awe(tree, data) since R version 1.1.7.
From the example on the linked page (reproduced here in case the link breaks):
data(iris)
iris.m <- iris[, 1:4]   # the old S-style `_` assignment replaced with `<-`
awe.val <- awe(mhtree(iris.m), iris.m)
plot(awe.val)
Following the formula from Banfield, J. and Raftery, A. (1993), "Model-based Gaussian and non-Gaussian clustering", Biometrics, 49, 803-821:

-2 * model$loglik + model$d * (log(model$n) + 1.5)

where model is the fitted model for the selected number of clusters. I am keeping this question up in the hope that it may help someone in the future.
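A hedged sketch of computing that quantity from a current mclust fit. This assumes that model$d in the formula above corresponds to the number of estimated parameters, which modern Mclust objects expose as df (alongside loglik and n):

library(mclust)

fit <- Mclust(iris[, 1:4], G = 3)   # example fit with 3 clusters
awe <- -2 * fit$loglik + fit$df * (log(fit$n) + 1.5)   # assumption: df plays the role of model$d
awe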
I started studying machine learning in R and I am curious about something.
We try to find a model with good accuracy by using train and test data for prediction, but instead of going through a machine learning process, can't we just predict the future with a regression model?
I want to know how machine learning changes the results: will the plot of a machine learning model look different from the plot of a regression model?
Regression models are themselves machine learning methods. You can implement a regression model, train it (meaning the algorithm estimates the coefficients, just as in ordinary regression) and then evaluate it on your test set.
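A minimal sketch of that train/test workflow with a plain linear regression on the built-in mtcars data (the variables chosen here are just for illustration):

set.seed(1)
idx   <- sample(nrow(mtcars), 0.7 * nrow(mtcars))
train <- mtcars[idx, ]    # data used to estimate coefficients
test  <- mtcars[-idx, ]   # held-out data

fit  <- lm(mpg ~ wt + hp, data = train)   # "training" the regression model
pred <- predict(fit, newdata = test)      # prediction on unseen data
sqrt(mean((pred - test$mpg)^2))           # test RMSE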
One task of machine learning / data science is making predictions. But I also want more insight into the variables of my model.
To get more insight, I have tried different methods:
Logistic regression (the output provides some 'insights' into the influence of the different variables, see: Checking interpretation of GLM summary in R)
The xgb.plot.importance function applied to a boosted tree (on the Titanic data set).
And I found a great article on how to explain a boosted tree, but unfortunately its code no longer works (see: https://medium.com/applied-data-science/new-r-package-the-xgboost-explainer-51dd7d1aa211).
My question: are there other methods that give you (or, even better, the business) more insight into which variables influence the target variable? And of course: is the influence positive or negative, and how large is it?
You could also try lasso regression (https://stats.stackexchange.com/questions/17251/what-is-the-lasso-in-regression-analysis), which essentially selects the variables that influence the response variable most.
The glmnet package provides support for this type of regression.
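A minimal sketch with glmnet, using mtcars as a stand-in for your data; the nonzero coefficients that survive the penalty indicate which variables matter and in which direction:

library(glmnet)

set.seed(1)
x <- model.matrix(mpg ~ . - 1, data = mtcars)   # numeric design matrix
y <- mtcars$mpg
cvfit <- cv.glmnet(x, y, alpha = 1)   # alpha = 1 selects the lasso
coef(cvfit, s = "lambda.min")         # dropped variables show as "."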
I could not find any documentation on how to perform all-vs-all multi-class classification with the kernlab package in R. Any kind of help would be appreciated.
Well, apparently the ksvm function of the package does it automatically, as it says here.
This is how to use it (I quote from the link above):

svp <- ksvm(xtrain, ytrain, type = "C-svc", kernel = "vanilladot", C = 100, scaled = c())
And this is the comment below it:
"Question 12
Test the ability of an SVM to predict the class of the disease from gene expression. Check the influence of the parameters.
Finally, we may want to predict the type and stage of the diseases. We are then confronted with a multi-class problem, since the variable to predict can take more than two values:

y <- ALL$BT
print(y)

Fortunately, kernlab automatically implements multi-class SVM by an all-versus-all strategy to combine several binary SVMs."
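A minimal sketch on the built-in iris data, which has three classes; ksvm handles the multi-class case with its one-against-one (all-vs-all) scheme without any extra arguments:

library(kernlab)

data(iris)
model <- ksvm(Species ~ ., data = iris, type = "C-svc",
              kernel = "vanilladot", C = 100)
table(predict(model, iris), iris$Species)   # confusion matrix on the training data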
'loess' is implemented in R's 'stats' package and 'locfit' in the 'locfit' package. They are both nonparametric regression methods that use local regression. What are the differences between the two methods?
Based on this introduction, it appears that locfit is a generalization of loess; in fact, you can obtain a loess fit using locfit. But locfit also has additional options to fit more general models, including logistic-regression-style fits and general density estimation. It can also fit loess-style models with a different weighting formula or even a varying bandwidth.
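A hedged sketch fitting the same smooth both ways on the built-in cars data; the nn argument of lp() plays a role roughly similar to loess's span:

library(locfit)

fit_loess  <- loess(dist ~ speed, data = cars, span = 0.75)
fit_locfit <- locfit(dist ~ lp(speed, nn = 0.75), data = cars)

plot(cars$speed, cars$dist, xlab = "speed", ylab = "dist")
lines(cars$speed, predict(fit_loess, cars), col = "blue")            # loess fit
lines(cars$speed, predict(fit_locfit, newdata = cars), col = "red")  # locfit fit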