Using the LDA example from the quanteda package:
require(quanteda)
require(quanteda.corpora)
require(lubridate)
require(topicmodels)

corp_news <- download('data_corpus_guardian')
corp_news_subset <- corpus_subset(corp_news, year(date) >= 2016)
ndoc(corp_news_subset)

dfmat_news <- dfm(corp_news, remove_punct = TRUE, remove = stopwords('en')) %>%
    dfm_remove(c('*-time', '*-timeUpdated', 'GMT', 'BST')) %>%
    dfm_trim(min_termfreq = 0.95, termfreq_type = "quantile",
             max_docfreq = 0.1, docfreq_type = "prop")
dfmat_news <- dfmat_news[ntoken(dfmat_news) > 0, ]

dtm <- convert(dfmat_news, to = "topicmodels")
lda <- LDA(dtm, k = 10)
Are there any metrics that can help determine the appropriate number of topics? I need this because my texts are short and I don't know if the performance is adequate. Also, is there any way to get a performance measure (e.g. precision/recall) to compare how well LDA does with different features?
There are several goodness-of-fit (GoF) metrics you can use to assess an LDA model. The most common is perplexity, which you can compute through the function perplexity() in the topicmodels package. The idea, stemming from unsupervised methods, is to run multiple LDA models with different numbers of topics. As the number of topics increases, you should see the perplexity decrease. The way you select the optimal model is to look for a "knee" in the plot: stop either when you find the knee or when the incremental decrease becomes negligible. Think of the scree plot you use when running a Principal Component Analysis.
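A minimal sketch of that procedure, reusing your dtm from the question (ideally you would compute perplexity on held-out documents rather than the training data):
ks <- seq(5, 50, by = 5)
perp <- sapply(ks, function(k) {
    m <- LDA(dtm, k = k, control = list(seed = 1234))
    perplexity(m, dtm)   # lower is better; look for the "knee" across k
})
plot(ks, perp, type = "b", xlab = "number of topics", ylab = "perplexity")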
Having said that, there is an R package called ldatuning which implements four additional metrics based on density-based clustering and on the Kullback-Leibler divergence. Three of them can be used with both VEM and Gibbs inference, while the method by Griffiths can only be used with Gibbs. For some of these metrics you look for the minimum, for others the maximum. You can also always compute the log-likelihood of your model, which you want to maximize. Extracting the log-likelihood from an LDA object is straightforward. Let's assume you have an LDA model called ldamodel:
loglikelihood <- as.numeric(logLik(ldamodel))
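If you want to try the ldatuning metrics mentioned above, a minimal sketch (again reusing your dtm; the grid of topic numbers is arbitrary) would be:
library(ldatuning)
result <- FindTopicsNumber(dtm,
                           topics = seq(5, 50, by = 5),
                           metrics = c("Griffiths2004", "CaoJuan2009", "Arun2010", "Deveaud2014"),
                           method = "Gibbs",
                           control = list(seed = 1234))
FindTopicsNumber_plot(result)  # minimize CaoJuan2009 and Arun2010, maximize the other two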
There is a lot of research around this topic. For instance, you can have a look at these papers:
Gerlach et al. (2018)
Fortunato (2010)
In addition, you can have a look at the preprint of a paper I am working on with a colleague of mine, which uses simple parametric tests to evaluate GoF. We also developed an R package which can be used over a list of LDA models of class LDA from topicmodels. You can find the paper here and the package here. You are more than welcome to submit any issue you may find in the package. The paper is under review at the moment, but again, comments are more than welcome.
Hope this helps!
I am having trouble getting the Brier score for my machine learning predictive models. The outcome "y" is categorical (1 or 0), and the predictors are a mix of continuous and categorical variables.
I have created four models with different predictors; I will call them "model_1" to "model_4" here (apart from the predictors, the other parameters are the same). Example code for my model is:
library(randomForestSRC)

Model_1 <- rfsrc(y ~ ., data = TrainTest, ntree = 1000,
                 mtry = 30, nodesize = 1, nsplit = 1,
                 na.action = "na.impute", nimpute = 3, seed = 10,
                 importance = TRUE)
When I run "Model_1" in R, I get the following results:
My question is: how can I get the predicted probability for those 412 people? And how do I find the observed probability for each person? Do I need to calculate it by hand? I found the function BrierScore() in the "DescTools" package, but when I tried "BrierScore(Model_1)", it gave me no results.
The code I added:
library(scoring)
library(DescTools)
BrierScore(Raw_SB)
class(TrainTest$VL_supress03)
TrainTest$VL_supress03_nu<-as.numeric(as.character(TrainTest$VL_supress03))
class(TrainTest$VL_supress03_nu)
prediction_Raw_SB = predict(Raw_SB, TrainTest)
BrierScore(prediction_Raw_SB, as.numeric(TrainTest$VL_supress03) - 1)
BrierScore(prediction_Raw_SB, as.numeric(as.character(TrainTest$VL_supress03)) - 1)
BrierScore(prediction_Raw_SB, TrainTest$VL_supress03_nu - 1)
When I tried this code, I got many error messages.
One assumption I am making about your approach is that you want to compute the Brier score on the data you trained your model on, which is usually not the correct approach (google "train-test split" if you need more info there). You should therefore reflect on whether your overall approach is correct.
The BrierScore function in DescTools only has a dedicated method for glm models; otherwise, it expects as input a vector of predicted probabilities and a vector of true values (see ?BrierScore).
What you need to do is predict on your data using:
prediction <- predict(Model_1, TrainTest, na.action = "na.impute")
and then compute the Brier score using:
BrierScore(as.numeric(TrainTest$y) - 1, prediction$predicted[, 1L])
(Note that we transform TrainTest$y into a numeric vector of 0's and 1's in order to compute the Brier score.)
Note: the randomForestSRC package also prints a normalized Brier score when you call print(prediction).
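For reference, the Brier score is just the mean squared difference between the predicted probability and the 0/1 outcome, so you can also compute it by hand (a sketch reusing prediction and TrainTest$y from above):
y01 <- as.numeric(TrainTest$y) - 1   # outcome recoded as 0/1
p <- prediction$predicted[, 1L]      # pick the column matching the class coded as 1;
                                     # check colnames(prediction$predicted) to be sure
mean((p - y01)^2)                    # the Brier score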
In general, using one of the available machine-learning frameworks in R (mlr3, tidymodels, caret) might simplify this for you and prevent a lot of errors of this kind, especially if you are less experienced in ML.
See e.g. this chapter in the mlr3 book for more information.
For reference, here is some very similar code using the mlr3 package, automatically also taking care of train-test splits.
data(breast, package = "randomForestSRC")  # target variable is "status"

library(mlr3)
library(mlr3extralearners)

task <- TaskClassif$new(id = "breast", backend = breast, target = "status")
algo <- lrn("classif.rfsrc", na.action = "na.impute", predict_type = "prob")

# 80/20 holdout split, scored with the binary Brier score on the held-out part
resample(task, algo, rsmp("holdout", ratio = 0.8))$score(msr("classif.bbrier"))
I would like to perform leave-one-out cross-validation (LOO-CV) for a CAP in R. The CAP was calculated using capscale in the R package vegan and is a canonical analysis of principal coordinates, similar to an rda or cca but based on another dissimilarity matrix, in my case Bray-Curtis. I have found that within predict.cca there is the function calibrate.cca, but I cannot make it work.
https://www.rdocumentation.org/packages/vegan/versions/2.4-2/topics/predict.cca
This is what I have (based on the sample data mite available in vegan):
library(vegan)
data(mite, mite.env)
str(mite.env)  # "SubsDens", "WatrCont", "Substrate", "Shrub", "Topo"

miteBC <- vegdist(mite, method = "bray")  # Bray-Curtis dissimilarity matrix

miteCAP <- capscale(miteBC ~ Substrate + Shrub + Topo, data = mite.env,  # CAP via capscale
                    distance = "bray", metaMDSdist = FALSE)
summary(miteCAP)
anova(miteCAP)
anova(miteCAP, by = "axis")
anova(miteCAP, by = "margin")

calibrate.cca(miteCAP, type = c("response"))  # error: could not find function "calibrate.cca"
In the program Primer this is done automatically within the CAP function ("Leave-one-out Allocation of Observations to Groups"), where each sample is automatically assigned to a group and a mis-classification error is obtained (similar to a classification randomForest, which I have already done). But I would like to do it in R, and it should be possible with vegan::capscale.
Any help is very much appreciated!
The function vegan::calibrate does not have an argument type and never returns a "response"; check its documentation. It performs environmental calibration and returns the predicted values of the constraints (Substrate, Shrub, Topo) on the scale of the model matrix, and with factors these hardly make sense directly.
There is no direct option for LOO: you have to do it by hand, cycling through the points and using the complete left-out point as the newdata. However, I'd suggest k-fold cross-validation as a better alternative for estimating predictive power: LOO changes the data too little and gives an over-optimistic view of predictive power.
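A rough, untested sketch of such a by-hand LOO loop with your mite example (fitting on the raw community data rather than a precomputed dist, so that capscale retains the species information needed to project the left-out site):
library(vegan)
data(mite, mite.env)
n <- nrow(mite)
loo_pred <- vector("list", n)
for (i in seq_len(n)) {
    ## refit without site i; capscale computes the Bray-Curtis dissimilarities internally
    fit_i <- capscale(mite[-i, ] ~ Substrate + Shrub + Topo,
                      data = mite.env[-i, ], distance = "bray")
    ## predicted constraint values for the left-out site, on the model-matrix scale
    loo_pred[[i]] <- calibrate(fit_i, newdata = mite[i, , drop = FALSE])
}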
I am applying the functions from the flexclust package for hard competitive learning clustering, and I am having trouble with convergence.
I am using this algorithm because I was looking for a method to perform weighted clustering, giving different weights to groups of variables. I chose hard competitive learning based on an answer to a previous question (Weighted Kmeans R).
I am trying to find the optimal number of clusters, and to do so I am using the function stepFlexclust with the following code:
new("flexclustControl") ## check the default values
fc_control <- new("flexclustControl")
fc_control#iter.max <- 500 ### 500 iterations
fc_control#verbose <- 1 # this will set the verbose to TRUE
fc_control#tolerance <- 0.01
### I want to give more weight to the first 24 variables of the dataframe
my_weights <- rep(c(1, 0.064), c(24, 31))
set.seed(1908)
hardcl <- stepFlexclust(x=df, k=c(7:20), nrep=100, verbose=TRUE,
FUN = cclust, dist = "euclidean", method = "hardcl", weights=my_weights, #Parameters for hard competitive learning
control = fc_control,
multicore=TRUE)
However, the algorithm does not converge, even with 500 iterations. I would appreciate any suggestions. Should I increase the number of iterations? Is this a sign that something else is going wrong, or did I make a mistake in the R commands?
Thanks in advance.
Two things answer my question (along with a comment on weighting variables for k-means, or rather for hard competitive learning):
The weights are for observations (= rows of x), not variables (= columns of x), so using hardcl for weighting variables is wrong.
In hardcl or neural gas you need many more iterations than in standard k-means: one k-means iteration uses the complete data set to update the centroids, while hard competitive learning uses only a single observation per iteration. Compared to k-means, multiply the number of iterations by your sample size.
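As a rough sketch of that scaling (reusing the fc_control and df objects from the question):
## one hard-competitive-learning iteration touches a single observation,
## so scale the iteration budget by the number of rows
fc_control@iter.max <- 500 * nrow(df)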
In the traditional gbm package, we can use
predict.gbm(model, newdata = ..., n.trees = ...)
so that we can compare results for different numbers of trees on the test data.
In h2o.gbm, although I can set n.tree, it seems to have no effect on the result; it is always the same as the default model:
h2o.test.pred <- as.vector(h2o.predict(h2o.gbm.model, newdata = test.frame, n.tree = 100))
R2(h2o.test.pred, test.mat$y)
[1] -0.00714109

h2o.test.pred <- as.vector(h2o.predict(h2o.gbm.model, newdata = test.frame, n.tree = 10))
R2(h2o.test.pred, test.mat$y)
[1] -0.00714109
Does anybody have a similar problem? How can I solve it? h2o.gbm is much faster than gbm, so it would be great if I could get detailed results for each number of trees.
I don't think H2O supports what you are describing. But if what you are after is the performance against the number of trees used, that can be done at model-building time.
library(h2o)
h2o.init()

iris <- as.h2o(iris)
parts <- h2o.splitFrame(iris, c(0.8, 0.1))
train <- parts[[1]]
valid <- parts[[2]]
test <- parts[[3]]

m <- h2o.gbm(1:4, 5, train,
             validation_frame = valid,
             ntrees = 100,             # maximum desired
             score_tree_interval = 1)  # score after every tree

h2o.scoreHistory(m)
plot(m)
The score history will show the evaluation after each new tree is added, and plot(m) will show a chart of this. It looks like 20 trees are plenty for iris!
BTW, if your real purpose is to find the optimum number of trees to use, then switch early stopping on and it will do that automatically for you. (Just make sure you are using both validation and test data frames.)
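A hedged sketch of that early-stopping setup, reusing train and valid from above (the stopping settings are just illustrative):
m2 <- h2o.gbm(1:4, 5, train,
              validation_frame = valid,
              ntrees = 1000,                # generous upper bound
              score_tree_interval = 1,
              stopping_rounds = 5,          # stop when 5 scoring rounds bring no improvement
              stopping_metric = "logloss",
              stopping_tolerance = 1e-4)
h2o.scoreHistory(m2)  # shows how many trees were actually built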
As of 3.20.0.6, H2O does support this. The method you are looking for is staged_predict_proba. For classification models it produces the predicted class probabilities after each iteration (tree), for every observation in your testing frame. For regression models (i.e. when the response is numerical), although this is not really documented, it produces the actual prediction for every observation in your testing frame.
From these predictions it is also easy to compute various performance metrics (AUC, R2, etc.), assuming that is what you are after.
Python API:
staged_predict_proba = model.staged_predict_proba(test)
R API:
staged_predict_proba <- h2o.staged_predict_proba(model, prostate.test)
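For example, a rough sketch of computing R2 after each tree for a regression model (reusing h2o.gbm.model, test.frame and test.mat$y from the earlier question, and assuming the returned frame has one column per tree, as it does for regression):
staged <- as.data.frame(h2o.staged_predict_proba(h2o.gbm.model, test.frame))
r2_by_tree <- sapply(staged, function(pred)
    1 - sum((test.mat$y - pred)^2) / sum((test.mat$y - mean(test.mat$y))^2))
plot(r2_by_tree, type = "b", xlab = "number of trees", ylab = "R-squared")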
I've been working on a Support Vector Machine model using RStudio. However, I'm ending up with a low accuracy rate and I don't know how to fix it. I'm expecting an accuracy rate higher than 90%.
Here is my code:
install.packages("caTools")
install.packages("class")
library(caTools)
library(class)
install.packages("ISLR")
library(ISLR)
Collegedata<-College[,-1]
Collegedata[,-17]<-scale(Collegedata[,-17])
med<-median(Collegedata$Grad.Rate)
Grad.Rate<-Collegedata$Grad.Rate>=med
Grad.Rate1<-as.numeric(Grad.Rate)
Collegedata<-data.frame(Collegedata,Grad.Rate1)
corcollege<-cor(Collegedata)
Collegedata<-Collegedata[,-2:-3]
Collegedata<-Collegedata[,-4]
Collegedata<-Collegedata[,-7]
Collegedata<-Collegedata[,-13]
# SVM
install.packages("e1071")
library("e1071")
collegesplit <- sample.split(Collegedata$Grad.Rate1, SplitRatio = 0.8)  # split on the outcome
collegetrain <- subset(Collegedata, collegesplit == TRUE)
collegetest <- subset(Collegedata, collegesplit == FALSE)
collegetrain<-data.frame(collegetrain)
svm.model.college <- svm(Grad.Rate1 ~ ., data = collegetrain, type = "C-classification",
                         cost = 1, gamma = 0.125, cross = 10)
svm.pred.college <- predict(svm.model.college, collegetest[,-13])
table(pred = svm.pred.college, true = collegetest[,13])
install.packages('ROCR')
library(ROCR)
ROC=predict(svm.model.college,newdata=collegetest)
ROC<-as.vector(ROC)
ROC<-as.numeric(ROC)
pred=prediction(ROC,collegetest$Grad.Rate1)
perf=performance(pred,'tpr','fpr')
plot(perf)
as.numeric(performance(pred, 'auc')@y.values)
It's been a while since I've coded SVMs, so I can't offer a full solution, but I can say that SVMs are incredibly sensitive to the choice of hyperparameters, e.g. cost and gamma. You'll want to perform a grid search over some sequence of values to determine the optimal ones; I recommend using the tune.svm() or best.tune() functions in the e1071 package for this. Further, while the default Gaussian kernel is often a good choice, it is not guaranteed to be the best, so you could also try the linear kernel.
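As a rough illustration of that grid search (a sketch reusing collegetrain and collegetest from your question; the cost/gamma grids are arbitrary starting points):
library(e1071)
set.seed(1)
tuned <- tune.svm(factor(Grad.Rate1) ~ ., data = collegetrain,
                  cost = 2^(-2:6), gamma = 2^(-6:1))
summary(tuned)                        # cross-validated error for each cost/gamma pair
best.svm.college <- tuned$best.model
svm.pred.college <- predict(best.svm.college, collegetest[, -13])
table(pred = svm.pred.college, true = collegetest[, 13])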
You may find this paper useful in helping develop a framework for building your model.