I am using the lda package in R to perform Latent Dirichlet Allocation modelling. However, each time I run the program I get a different output.
Using set.seed() doesn't seem to help like with the topicmodels package.
Assuming an identical input, is there a way to ensure that identical topics are found on subsequent executions of the code?
I execute the function as follows:
set.seed(11)
fit1 <- lda.collapsed.gibbs.sampler(documents = documents, K = topics, vocab = vocab,
                                    num.iterations = iterations, alpha = alpha,
                                    eta = eta, initial = NULL, burnin = 500,
                                    compute.log.likelihood = TRUE)
Related
This question is an extension of the following question: No Model Stored with Mlr3.
I have been performing nested resampling to get an unbiased metric of model performance. If I don't specify store_models=TRUE then I get Error: No model stored at the end of the run. However, if I specify store_models=TRUE in both the at and resample calls then RStudio crashes due to RAM consumption.
I have now tried the following code in which I specified store_models=TRUE for just the at call:
MSvCon<-read.csv("MS v Control Proteomics Final.csv", row.names=1)
MSvCon$Status<-as.factor(MSvCon$Status)
MSvCon[,2:4399]<-scale(MSvCon[,2:4399], center=TRUE, scale=TRUE)
set.seed(123, "L'Ecuyer")
task = as_task_classif(MSvCon, target = "Status")
learner = lrn("classif.ranger", importance = "impurity", num.trees=10000)
set_threads(learner, n = 8)
measure = msr("classif.fbeta", beta=1, average="micro")
terminator = trm("none")
resampling_inner = rsmp("repeated_cv", folds = 10, repeats = 10)
at = AutoFSelector$new(
  learner = learner,
  resampling = resampling_inner,
  measure = measure,
  terminator = terminator,
  fselect = fs("rfe", n_features = 1, feature_fraction = 0.5, recursive = FALSE),
  store_models = TRUE)
resampling_outer = rsmp("repeated_cv", folds = 10, repeats = 10)
rr = resample(task, at, resampling_outer)
After finishing, I am able to extract the performance measures successfully. However, when I try to use extract_inner_fselect_results and extract_inner_fselect_archives to check which features were selected and their importance measures, I get a NULL result.
Do you have any suggestions on what I would need to adjust in my code to see this information? I anticipate that adding store_models=TRUE to the resample call would do it, but the RAM consumption issue (even with 128GB on RStudio Workbench) prevents that. Is there a way around this?
The archives of the inner resampling are stored in the model slot of the AutoFSelectors, i.e. without store_models = TRUE in resample() you cannot access the inner results and archives. I will write a workaround for you and answer in the other question.
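For reference, a minimal sketch of that access path, assuming the memory cost of store_models = TRUE in resample() could be tolerated:
# keep the fitted AutoFSelector objects (and with them the inner archives)
rr = resample(task, at, resampling_outer, store_models = TRUE)
# inner feature-selection results and archives, one entry per outer iteration
extract_inner_fselect_results(rr)
extract_inner_fselect_archives(rr)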
I'm trying to build a regression model in R using LightGBM, and I'm getting a bit confused about some functions and when/how to use them.
The first one is what I've written in the title: what's the difference between lgb.train() and lightgbm()?
The description in the documentation (https://cran.r-project.org/web/packages/lightgbm/lightgbm.pdf) says that lgb.train is 'Logic to train with LightGBM' and lightgbm is 'Simple interface for training a LightGBM model', while both return an lgb.Booster, a trained model.
One difference I've found is that lgb.train() does not work with valids = , while lightgbm() does.
The second one is about the function lgb.cv(), regarding cross validation in LightGBM. How do you apply the output of lgb.cv() to a model?
As I understood from the documentation I've linked above, it seems like the output of both lgb.cv and lgb.train is a model.
Is it correct to use it like the example below?
lgbcv <- lgb.cv(params,
                lgbtrain,
                nrounds = 1000,
                nfold = 5,
                early_stopping_rounds = 100,
                learning_rate = 1.0)

lgbcv <- lightgbm(params,
                  lgbtrain,
                  nrounds = 1000,
                  early_stopping_rounds = 100,
                  learning_rate = 1.0)
Thank you in advance!
what's the difference between lgb.train() and lightgbm()?
These functions both train a LightGBM model, they're just slightly different interfaces. The biggest difference is in how training data are prepared. LightGBM training requires a special LightGBM-specific representation of the training data, called a Dataset. To use lgb.train(), you have to construct one of these beforehand with lgb.Dataset(). lightgbm(), on the other hand, can accept a data frame, data.table, or matrix and will create the Dataset object for you.
Choose whichever method you feel has a more friendly interface...both will produce a single trained LightGBM model (class "lgb.Booster").
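For concreteness, here is a minimal sketch of the two interfaces side by side, adapted from the agaricus example that ships with the package (the parameter values are illustrative only):
library(lightgbm)
data(agaricus.train, package = "lightgbm")
params <- list(objective = "regression", metric = "l2")
# lgb.train(): you build the Dataset object yourself
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
bst1 <- lgb.train(params = params, data = dtrain, nrounds = 10L)
# lightgbm(): pass the raw matrix and label; the Dataset is created for you
bst2 <- lightgbm(data = agaricus.train$data, label = agaricus.train$label,
                 params = params, nrounds = 10L)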
that lgb.train() does not work with valids = , while lightgbm() does.
This is not correct. Both functions accept the keyword argument valids. Run ?lgb.train and ?lightgbm for documentation on those methods.
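For example, along the lines of the example in ?lgb.train (continuing from the snippet above, using the agaricus test split as a stand-in validation set):
data(agaricus.test, package = "lightgbm")
dvalid <- lgb.Dataset.create.valid(dtrain, agaricus.test$data, label = agaricus.test$label)
# pass a named list of validation Datasets via valids =
bst <- lgb.train(params = params, data = dtrain, nrounds = 10L,
                 valids = list(validation = dvalid))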
How do you apply the output of lgb.cv() to a model?
I'm not sure what you mean, but you can find an example of how to use lgb.cv() in the docs that show up when you run ?lgb.cv.
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(objective = "regression", metric = "l2")
model <- lgb.cv(
  params = params
  , data = dtrain
  , nrounds = 5L
  , nfold = 3L
  , min_data = 1L
  , learning_rate = 1.0
)
This returns an object of class "lgb.CVBooster". That object has multiple "lgb.Booster" objects in it (the trained models that lightgbm() or lgb.train() produce).
You can extract any one of these from model$boosters. However, in practice I don't recommend using the models from lgb.cv() directly. The goal of cross-validation is to get an estimate of the generalization error for a model. So you can use lgb.cv() to figure out the expected error for a given dataset + set of parameters (by looking at model$record_evals and model$best_score).
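In other words, a common pattern (just a sketch of that workflow, reusing the objects from the example above) is to use lgb.cv() only to evaluate a dataset + parameter combination, and then fit the model you actually keep separately:
# inspect the cross-validated results for this dataset + parameter combination
model$best_score
model$record_evals
# then train the final model on all of the training data
final_model <- lgb.train(params = params, data = dtrain, nrounds = 5L)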
I am using the textmineR package to find the documents most similar to a given list of documents. I used the following code to generate the TCM (not the DTM):
tcm <- CreateTcm(doc_vec = text_df$Description,
                 skipgram_window = 20,
                 verbose = FALSE,
                 cpus = 2)
which is then used to fit an LDA model:
# note the number of topics is arbitrary here
# see extensions for more info
model <- FitLdaModel(dtm = tcm,
                     k = 25,
                     iterations = 200, # I usually recommend at least 500 iterations or more
                     burnin = 180,
                     alpha = 0.1,
                     beta = 0.05,
                     optimize_alpha = TRUE,
                     calc_likelihood = TRUE,
                     calc_coherence = TRUE,
                     calc_r2 = TRUE,
                     cpus = 2)
Now the model parameter theta here gives word-per-topic loadings rather than document-per-topic loadings. I want to retrieve the document number from the document-per-topic loadings. Please suggest a method to obtain the document-per-topic distribution from this model while passing a term co-occurrence matrix.
I have tried to back-connect to get the document number from the document-per-topic loadings, but without success, following the guidelines given at https://cran.r-project.org/web/packages/textmineR/vignettes/d_text_embeddings.html
An 11-month-old question, but giving it a shot anyway.
Technically, theta with LDA embeddings gives you P(topic|word) and phi still gives you P(word|topic). If I understand you correctly, you want to embed whole documents under this model? If so, here's how you'd do it.
library(textmineR)
# create a tcm
tcm <- CreateTcm(nih_sample$ABSTRACT_TEXT, skipgram_window = 10)
# fit an LDA model
m <- FitLdaModel(dtm = tcm, k = 100, iterations = 100, burnin = 75)
# pull your documents into a dtm
d <- nih_sample_dtm
# get them predicted under the model
# I recommend using the "dot" method for prediction with embeddings as sparsity may
# result in underflow and throw an error using the default "gibbs" method
p <- predict(object = m, newdata = d, method = "dot")
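The resulting p is a document-by-topic matrix, so its rows are the document-per-topic distributions you are after; for example (plain base R, nothing textmineR-specific):
# topic distribution for the first document in d
p[1, ]
# most likely topic for each document
most_likely_topic <- apply(p, 1, which.max)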
I am trying to find out the topic document probabilities after running the lda model using text2vec package in R.
Following commands generate the model:
lda_model <- LDA$new(n_topics = n_topics, doc_topic_prior = 0.1, topic_word_prior = 0.01)
doc_topic_distr <- lda_model$fit_transform(x = quantdfm, n_iter = 2000,
                                           convergence_tol = 0.00001,
                                           n_check_convergence = 10,
                                           progressbar = FALSE)
quantdfm is the DTM built with the quanteda package, which I am plugging into the $fit_transform method.
I noticed that doc_topic_distr contains the topic-document probabilities (without even asking for normalization). Is this correct? I ask because in a previous post, How to get topic probability table from text2vec LDA, Dmitriy Selivanov suggested deriving such probabilities using:
doc_topic_prob = normalize(doc_topic_distr, norm = "l1")
whereas when I use that command, doc_topic_distr and doc_topic_prob have the same values (I had expected the former to contain integer counts as opposed to the fractions in the latter).
Please suggest if this is the expected behavior of the code, or I have missed something here.
Thanks.
According to the up-to-date documentation, LDA's fit_transform() returns topic probabilities.
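So what you are seeing is the expected behaviour. As a quick sanity check (plain base R, using the doc_topic_distr from your call above), each row should already sum to roughly 1:
# each document's topic distribution should sum to (approximately) 1
summary(rowSums(doc_topic_distr))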
I am trying to extract topic assignments from a fit I built with R's 'lda' package. I created a fit:
fit <- lda.collapsed.gibbs.sampler(documents = documents, K = K, vocab = vocab,
                                   num.iterations = G, alpha = alpha, eta = eta,
                                   initial = NULL, burnin = 0,
                                   compute.log.likelihood = TRUE)
...and would like to extract a probability for each topic-document assignment, or simply the most likely topic for each document. With the 'topicmodels' package I can just call
topics(fit)
to get that (as in LDA with topicmodels, how can I see which topics different documents belong to?)
How can I get the same with 'lda'?
I haven't used R's 'lda' package, but I do use the 'topicmodels' package in R.
I can create the LDA fit for, let's say, 5 topics using
topic.fit <- LDA(dtm, 5)  # dtm is the document-term matrix
Now if you want to extract the probability of each topic-document assignment, use
topic.fit@gamma[1:5, ]  # the gamma slot contains the document-topic matrix
and to get the most likely topic you can use
most.likely.topic <- topics(topic.fit, 1)
Hope this answers your question.