Output of keras in R cannot be used to predict

I use keras and tensorflow to run an LSTM in R to predict some stock market prices.
Here I am providing the code where, instead of stock market prices, I just use one randomly generated vector VECTOR of length 100. Then I consider a training period of the first 80 values and try to predict the 20 test values...
What am I doing wrong?
I am getting this error:
Error in UseMethod("predict") :
  no applicable method for 'predict' applied to an object of class "keras_training_history"
Thank you
library(tensorflow)
library(keras)
set.seed(12345)
VECTOR=rnorm(100,2,5)
VECTOR_training=VECTOR[1:80]
VECTOR_test=VECTOR[81:100]
training_rescaled=scale(VECTOR_training)
#I also calculate the scale factors because I will need them when
#transforming the predictions back to the original scale
scale_factors=matrix(NA,nrow=1,ncol=2)
scale_factors=c(mean(VECTOR_training), sd(VECTOR_training))
#We want to predict 20 days, so we need to base each prediction on 20 data points.
prediction_stocks=20
lag_stocks=prediction_stocks
test_rescaled =training_rescaled[(length(VECTOR_training)- prediction_stocks + 1):length(VECTOR_training)]
#We lag the data 20 times, so that each prediction is based on 20 values, and arrange the lagged values into columns. Then we transform it into the desired 3D form.
x_train_data_stocks=t(sapply(1:(length(VECTOR_training)-lag_stocks-prediction_stocks+1),
function(x) training_rescaled[x:(x+lag_stocks-1),1]
))
# now we transform it into 3D form
x_train_arr_stocks=array(
data=as.numeric(unlist(x_train_data_stocks)),
dim=c(
nrow(x_train_data_stocks),
lag_stocks,
1
)
)
#Now we apply similar transformation for the Y values.
y_train_data_stocks=t(sapply(
(1 + lag_stocks):(length(training_rescaled) - prediction_stocks + 1),
function(x) training_rescaled[x:(x + prediction_stocks - 1)]
))
y_train_arr_stocks= array(
data = as.numeric(unlist(y_train_data_stocks)),
dim = c(
nrow(y_train_data_stocks),
prediction_stocks,
1
)
)
#In the same manner we need to prepare input data for the prediction
#list_test_rescaled
# this time our array just has one sample, as we intend to perform one 20-days prediction
x_pred_arr_stocks=array(
data = test_rescaled,
dim = c(
1,
lag_stocks,
1
)
)
###lstm forecast prova
set.seed(12345)
lstm_model <- keras_model_sequential()
lstm_model_prova=
layer_lstm(lstm_model,units = 70, # size of the layer
batch_input_shape = c(1, 20, 1), # batch size, timesteps, features
return_sequences = TRUE,
stateful = TRUE) %>%
# fraction of the units to drop for the linear transformation of the inputs
layer_dropout(rate = 0.5) %>%
layer_lstm(units = 50,
return_sequences = TRUE,
stateful = TRUE) %>%
layer_dropout(rate = 0.5) %>%
time_distributed(keras::layer_dense(units = 1))
lstm_model_compile=compile(lstm_model_prova,loss = 'mae', optimizer = 'adam', metrics = 'accuracy')
lstm_fit_prova=fit(lstm_model_compile,
x = x_train_arr_stocks[[1]],
y = y_train_arr_stocks[[1]],
batch_size = 1,
epochs = 20,
verbose = 0,
shuffle = FALSE
)
lstm_forecast_prova=predict(lstm_fit_prova,x_pred_arr_stocks, batch_size = 1)
It works if I use
lstm_forecast_prova=predict(lstm_model_compile,x_pred_arr_stocks, batch_size = 1)
But shouldn't I use the fitted model in order to make the predictions?
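For context: in the R interface, fit() trains the model in place and returns only a keras_training_history object (the training log), which is why predict() has no method for it; the compiled and fitted model object is what you predict from. A minimal sketch of that, including the back-transformation with the scale factors saved above (mean first, sd second):
# predict from the model object itself; fit() already updated its weights
lstm_forecast_prova = predict(lstm_model_compile, x_pred_arr_stocks, batch_size = 1)
# undo the scaling applied to the training data
lstm_forecast_rescaled = lstm_forecast_prova * scale_factors[2] + scale_factors[1]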
Also, if I plot the fitted model, the accuracy is 0. And actually on my real data the predictions do not make any sense. So what does it mean that the accuracy is 0? Maybe something is wrong with the lstm parameters?
Thank you in advance!!

Related

r: for loop to simulate predictions when random sampling is applied

I am trying to simulate how the replacement/reassignment of values in random samples affects the predictions as conveyed by AUC.
I have a tumor classification in a dataframe denoted df$who which has levels 1, 2, 3 corresponding to the severity of the tumor lesion.
Intro to the question
Let's say the baseline data looks like this:
set.seed(1)
df <- data.frame(
who = as.factor(sample(1:3, size = 6000, replace = TRUE, prob = c(0.8, 0.15, 0.05))),
age = round(runif(n = 6000, min = 18, max = 95), digits = 1),
gender = sample(c("m", "f"), size = 6000, replace = TRUE, prob = c(1/3, 2/3)),
event.time = runif(n = 6000, min = 8, max = 120),
event = as.factor(sample(0:2, size = 6000, replace = TRUE, prob = c(0.25, 0.2, 0.55)))
)
And a standard cause-specific Cox regression looks like:
library(survival)
a_baseline <- coxph(Surv(event.time, event == 1) ~ who + age + gender, data = df, x = TRUE)
From this, AUC can be obtained as a measure of predictive performance; here, a leave-one-out bootstrap of the 5-year prediction of df$event == 1.
library(riskRegression)
u <- Score(list("baseline" = a_baseline),
Surv(event.time, event == 1) ~ 1,
data = df,
times = 60,
plots = "cal",
B = 50,
split.method = "loob",
metrics = c("auc", "brier")
)
# The AUC is then obtained
u$AUC$score$AUC[2]
Question
I want to simulate how re-classifying a random 5% of df$who == 1 to df$who == 2 affects the 5-year prediction of df$event == 1.
I want to create 10 separate simulated subsets of the baseline data df, each containing a random reallocation of 5% of df$who == 1 to df$who == 2. Then, I want to use each of these 10 simulated subsets to predict the 5-year risk of df$event == 1.
I have applied a for loop to this. The expected output is a dataframe that tells me which of the 10 simulated datasets yielded the highest and lowest u$AUC$score$AUC[2] (i.e., the best and worst prediction).
I am new to for loops, but here is my attempt (which obviously did not work).
all_auc <- data.frame() ## create a dataframe to fill in AUC from all 10 simulated sub-datasets
for(i in 1:10){ #1:10 represent the simulated datasets from 1 to 10
df[i] <- df #allocating baseline data to each of the 10 datasets
df[i]$who[sample(which(df[i]$who==1), round(0.05*length(which(df[i]$who==1))))]=2 #create the random 5% allocation of who==1 to who==2 in the i-th simulated dataset
ith_cox <- coxph(Surv(event.time, event == 1) ~ who + age + gender, data = df[i], x = TRUE) #create the i-th Cox regression based on the i-th dataset
# create the predictions based on the i-th Cox
u[i] <- Score(list("baseline" = ith_cox),
Surv(event.time, event == 1) ~ 1,
data = df[i],
times = 60,
plots = "cal",
B = 50,
split.method = "loob",
metrics = c("auc", "brier")
)
# summarize all AUC from all 10 sub-datasets
all_auc <- u[i]$AUC$score$AUC[2]
}
(1) I could not get this for loop to work as described, and
(2) the final dataframe all_auc should provide only which of the 10 datasets yielded the worst and best predictions (I will then use these two data sets for further analysis).
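For what it's worth, here is a minimal sketch of one way to structure the loop, keeping the coxph() and Score() calls from the question and assuming the only goal is to record one AUC per simulated dataset and then look up the best and worst:
library(survival)
library(riskRegression)

n_sim <- 10
all_auc <- data.frame(dataset = seq_len(n_sim), auc = NA_real_)

for (i in seq_len(n_sim)) {
  df_i <- df  # start each simulation from the baseline data
  # reassign a random 5% of who == 1 to who == 2
  idx <- sample(which(df_i$who == 1), round(0.05 * sum(df_i$who == 1)))
  df_i$who[idx] <- 2
  ith_cox <- coxph(Surv(event.time, event == 1) ~ who + age + gender,
                   data = df_i, x = TRUE)
  u_i <- Score(list("baseline" = ith_cox),
               Surv(event.time, event == 1) ~ 1,
               data = df_i, times = 60, plots = "cal",
               B = 50, split.method = "loob",
               metrics = c("auc", "brier"))
  all_auc$auc[i] <- u_i$AUC$score$AUC[2]
}

# simulated datasets with the best and worst 5-year prediction
all_auc[which.max(all_auc$auc), ]
all_auc[which.min(all_auc$auc), ]
Keeping only the AUC per iteration (rather than every full Score() object) also keeps memory manageable when scaling this up to 10,000 datasets.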
A final note
This is only a reproducible example. The for loop will be applied to 10,000 simulated datasets in our analysis. I do not know if this affects the answer, but it illustrates the importance of the result: a dataframe (or vector?) that simply tells me which simulated dataset yielded the best vs. worst predictions, so that I can subsequently use these two dataframes for further analysis, e.g. df2930 and df8939.

MLP model using Keras package in R fails to learn (High training and val_accuracy but very poor performance on test data)

UPDATE
To help anyone looking for similar answers to this question: I was able to increase AUC by balancing the dataset. I did this by making the following edit to the code:
history <- model %>%
  fit(train_nn,
      train_target, # when using OHE this becomes train_label
      epochs = 100,
      batch_size = 32,
      validation_split = 0.10,
      class_weight = list(
        "0" = nrow(dataset[dataset[, 134] == 1, ]) / nrow(dataset[dataset[, 134] == 0, ]),
        "1" = 1
      ))
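An equivalent, arguably more readable way to derive those weights (a sketch only, assuming column 134 of dataset holds the 0/1 target as in the snippet above) is to count the two classes once and reuse the ratio:
class_counts <- table(dataset[, 134])
wts <- list("0" = as.numeric(class_counts["1"] / class_counts["0"]),
            "1" = 1)

history <- model %>%
  fit(train_nn, train_target,
      epochs = 100, batch_size = 32,
      validation_split = 0.10,
      class_weight = wts)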
End of Update
I am currently studying biases in the predictions of neural network models. Using data from the fintech company Bondora, I am attempting to create an MLP model to predict loan acceptance. The dataset contains multiple categorical and numerical variables. I created a categorical variable called "reject_loan" (which serves as my target variable) that is 1 if a loan defaults within 1 year of origination and 0 otherwise. Now I am attempting to create an MLP model to predict "reject_loan".
Problem: Even though accuracy and validation accuracy are both high (around 83% and around 90% respectively; the original post showed the training curves for loss, val_loss, acc and val_accuracy), predictions on the test data are very poor. The model usually predicts only one class for all observations, or is able to make only very few correct predictions of the other class. AUC always hovers close to 50%.
I have tried a variety of approaches in pre-processing and in model parameters. Some of the major approaches are below:
Using OHE for all categorical variables (including the target), normalizing the numerical variables, and then using relu activation for the hidden layers, softmax for the output, and categorical cross-entropy as the loss function
No OHE, normalizing the numerical variables, and then using relu activation for the hidden layers, sigmoid for the output, and binary cross-entropy as the loss function
Using elu activation for the hidden layers to avoid dying relu units
Using multiple hidden layers with and without regularizer (l1 and l2)
Using dropouts
Using SGD and ADAM as optimizers (i.e. either SGD or ADAM)
Decreasing learning rate (lowest used is 0.000001)
Nothing has worked to increase predictive power. I should also mention that I have trained an XGBoost model on the same dataset with an AUC of around 90% (the ROC curve and AUC of one of the runs were shown in the original post).
I would very much appreciate it if someone could help me with this issue.
My model code is below:
#divide into train and test
set.seed (1234)
#dividing cons in 80:20 train:test sample
sample <- sample(2, nrow(dataset), replace = T, prob = c(0.80,0.20))
train <- dataset[sample==1,1:ncol(dataset)-1]
test <- dataset[sample==2,1:ncol(dataset)-1]
train_target <- dataset[sample==1, ncol(dataset)]
test_target <- dataset[sample==2, ncol(dataset)]
#One hot encoding
train_label <- to_categorical(train_target)
test_label <- to_categorical(test_target)
#Create sequential model
model <- keras_model_sequential()
model %>%
layer_dense(units = 16,
activation = 'elu',
input_shape = c(ncol(train_nn)),
kernel_regularizer = regularizer_l1_l2(l1 = 0.2, l2 = 0.2)) %>%
layer_dropout(0.2) %>%
layer_dense(units = 8,
activation = 'elu',
kernel_regularizer = regularizer_l1_l2(l1 = 0.2, l2 = 0.2)) %>%
layer_dropout(0.4) %>%
layer_dense(units = 8,
activation = 'elu') %>%
layer_dense(units = 1,
activation = 'sigmoid') #In this iteration sigmoid is used, but I have also used softmax with an OHE coding of the target and units = 2
#compile
opt = optimizer_sgd(lr = 0.001,
momentum = 0,
decay = 0,
nesterov = FALSE,
clipnorm = NULL,
clipvalue = NULL
)
opt2 = optimizer_adam(lr = 0.000001,
beta_1 = 0.9,
beta_2 = 0.999,
epsilon = NULL,
decay = 0,
amsgrad = FALSE,
clipnorm = NULL,
clipvalue = NULL
)
model %>%
compile(loss = 'binary_crossentropy', #also used categorical_crossentropy in some iterations
optimizer = opt2,
metrics = 'accuracy')
#Fit model
clbck = callback_reduce_lr_on_plateau(monitor='val_loss', factor=0.1, patience=2)
history <- model %>%
fit(train_nn,
train_target, #when using OHE becomes train_label
epoch = 100,
batch_size = 32,
validation_split = 0.10)
#Evaluate model with test data
nn_model_3 <- model %>% evaluate(test, test_target) #When using OHE this becomes test_label
#Prediction & confusion matrix - test data
prob <- model %>%
predict_proba(test)
pred <- model %>%
predict_classes(test)
nn_conf_table_3 <- table(Predicted = pred, Actual = test_target)
nn_probability_table_3 <- cbind (prob, pred, test_target)
#auc AND roc
par(pty = "s")
nn_roc_3 <- roc(test_target, pred, plot=T, percent=T, lwd = 3, print.auc=T)
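A side observation (not part of the original post): passing hard 0/1 class predictions to roc() collapses the curve to a single threshold, which by itself can push AUC toward 50%; feeding the predicted probabilities instead gives the usual curve. A sketch, assuming prob holds the sigmoid outputs from predict_proba() above:
library(pROC)
nn_roc_prob <- roc(test_target, as.numeric(prob), plot = TRUE, percent = TRUE, lwd = 3, print.auc = TRUE)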

Keras LSTM and multiple input feature: how to define parameters

I am discovering Keras in R and the LSTM. Following this blog post, I want to predict a time series, and I would like to use various past time points (t-1, t-2) to predict the point at t.
Here is what I tried so far:
library(data.table)
library(tensorflow)
library(keras)
Serie <- c(5.66333333333333, 5.51916666666667, 5.43416666666667, 5.33833333333333,
5.44916666666667, 6.2025, 6.57916666666667, 6.70666666666667,
6.95083333333333, 8.1775, 8.55083333333333, 8.42166666666667,
8.01333333333333, 8.99833333333333, 11.0025, 10.3116666666667,
10.51, 10.9916666666667, 10.6116666666667, 10.8475, 13.7841666666667,
16.2916666666667, 15.9975, 14.3683333333333, 13.4041666666667,
11.8666666666667, 9.11916666666667, 9.47862416666667, 9.08404666666667,
8.79606166666667, 9.93211091666667, 9.03834041666667, 8.58787275,
6.77499383333333, 7.21377583333333, 7.53497175, 6.31212966666667,
5.5825105, 4.64021041666667, 4.608787, 5.39446983333333, 4.93945983333333,
4.8612215, 4.13088808333333, 4.09916575, 3.40943183333333, 3.79573258333333,
4.30319966666667, 4.23431266666667, 3.64880758333333, 3.11700716666667,
3.321058, 2.53599408333333, 2.20433991666667, 1.66643905833333,
0.84187275, 0.467880658333333, 0.810507858333333, 0.795)
Npoints <- 2 # number of previous points to take into account
I then create a data frame with the lagged time series, and create a test and train set:
supervised <- data.table(x = diff(Serie, differences = 1))
supervised[,c(paste0("x-",1:Npoints)) := lapply(1:Npoints,function(i){c(rep(NA,i),x[1:(.N-i)])})] # create shifted versions
# take the non NA
supervised <- supervised[!is.na(get(paste0("x-",Npoints)))]
head(supervised)
# Split dataset into training and testing sets
N = nrow(supervised)
n = round(N *0.7, digits = 0)
train = supervised[1:n, ]
test = supervised[(n+1):N, ]
I rescale the data
scale_data = function(train, test, feature_range = c(0, 1)) {
x = train
fr_min = feature_range[1]
fr_max = feature_range[2]
std_train = ((x - min(x,na.rm = T) ) / (max(x,na.rm = T) - min(x,na.rm = T) ))
std_test = ((test - min(x,na.rm = T) ) / (max(x,na.rm = T) - min(x,na.rm = T) ))
scaled_train = std_train *(fr_max -fr_min) + fr_min
scaled_test = std_test *(fr_max -fr_min) + fr_min
return( list(scaled_train = as.vector(scaled_train), scaled_test = as.vector(scaled_test) ,scaler= c(min =min(x,na.rm = T), max = max(x,na.rm = T))) )
}
Scaled = scale_data(train, test, c(-1, 1))
# define x and y train
y_train = as.vector(Scaled$scaled_train[, 1])
x_train = Scaled$scaled_train[, -1]
And following this post, I reshape the data into 3D:
x_train_reshaped <- array(NA,dim= c(1,dim(x_train)))
x_train_reshaped[1,,] <- as.matrix(x_train)
I build the following model and try to start training:
model <- keras_model_sequential()
model%>%
layer_lstm(units = 1, batch_size = 1, input_shape = dim(x_train), stateful= TRUE)%>%
layer_dense(units = 1)
# compile model ####
model %>% compile(
loss = 'mean_squared_error',
optimizer = optimizer_adam( lr= 0.02, decay = 1e-6 ),
metrics = c('accuracy')
)
# make a test
model %>% fit(x_train_reshaped, y_train, epochs=1, batch_size=1, verbose=1, shuffle=FALSE)
but I get the following error:
Error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: No data provided for "dense_11". Need data for each key in: ['dense_11']
Trying to reshape the data differently didn't help.
What am I doing wrong?
Keras and tensorflow in R cannot recognise the size of your input/target data when they are data frames.
y_train is both a data.table and a data.frame:
class(y_train)
[1] "data.table" "data.frame"
The keras fit documentation states: "y: Vector, matrix, or array of target (label) data (or list if the model has multiple outputs)." Similarly, for x.
Unfortunately, there still appears to be an input and/or target dimensionality mismatch when y_train is cast to a matrix:
model %>%
fit(x_train_reshaped, as.matrix(y_train), epochs=1, batch_size=1, verbose=1, shuffle=FALSE)
Error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: Input arrays should have the same number of samples as target arrays.
Found 1 input samples and 39 target samples.
Hope this answer helps you, or someone else, make further progress.
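As a follow-up sketch (not part of the original answer): the remaining mismatch comes from packing all 39 lagged rows into a single sample of shape (1, 39, 2). One way to line the dimensions up is to make each lagged row its own sample of shape (timesteps = Npoints, features = 1) and adjust input_shape to match; column 1 of Scaled$scaled_train is assumed to be the target and the remaining columns the lags, as in the post:
x_train_mat <- as.matrix(x_train)                 # 39 rows, Npoints lag columns
x_train_reshaped <- array(x_train_mat,
                          dim = c(nrow(x_train_mat), ncol(x_train_mat), 1))
y_train_vec <- as.numeric(as.matrix(y_train))     # 39 targets, one per sample

model <- keras_model_sequential()
model %>%
  layer_lstm(units = 1, batch_size = 1,
             input_shape = c(ncol(x_train_mat), 1), stateful = TRUE) %>%
  layer_dense(units = 1)

model %>% compile(
  loss = 'mean_squared_error',
  optimizer = optimizer_adam(lr = 0.02, decay = 1e-6)
)

model %>% fit(x_train_reshaped, y_train_vec,
              epochs = 1, batch_size = 1, verbose = 1, shuffle = FALSE)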

Understanding Keras prediction output of a rnn model in R

I'm trying out the Keras package in R by doing this tutorial about forecasting the temperature. However, the tutorial has no explanation on how to predict with the trained RNN model and I wonder how to do this. To train a model I used the following code copied from the tutorial:
dir.create("~/Downloads/jena_climate", recursive = TRUE)
download.file(
"https://s3.amazonaws.com/keras-datasets/jena_climate_2009_2016.csv.zip",
"~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip"
)
unzip(
"~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip",
exdir = "~/Downloads/jena_climate"
)
library(readr)
data_dir <- "~/Downloads/jena_climate"
fname <- file.path(data_dir, "jena_climate_2009_2016.csv")
data <- read_csv(fname)
data <- data.matrix(data[,-1])
train_data <- data[1:200000,]
mean <- apply(train_data, 2, mean)
std <- apply(train_data, 2, sd)
data <- scale(data, center = mean, scale = std)
generator <- function(data, lookback, delay, min_index, max_index,
shuffle = FALSE, batch_size = 128, step = 6) {
if (is.null(max_index))
max_index <- nrow(data) - delay - 1
i <- min_index + lookback
function() {
if (shuffle) {
rows <- sample(c((min_index+lookback):max_index), size = batch_size)
} else {
if (i + batch_size >= max_index)
i <<- min_index + lookback
rows <- c(i:min(i+batch_size, max_index))
i <<- i + length(rows)
}
samples <- array(0, dim = c(length(rows),
lookback / step,
dim(data)[[-1]]))
targets <- array(0, dim = c(length(rows)))
for (j in 1:length(rows)) {
indices <- seq(rows[[j]] - lookback, rows[[j]],
length.out = dim(samples)[[2]])
samples[j,,] <- data[indices,]
targets[[j]] <- data[rows[[j]] + delay,2]
}
list(samples, targets)
}
}
lookback <- 1440
step <- 6
delay <- 144
batch_size <- 128
train_gen <- generator(
data,
lookback = lookback,
delay = delay,
min_index = 1,
max_index = 200000,
shuffle = TRUE,
step = step,
batch_size = batch_size
)
val_gen = generator(
data,
lookback = lookback,
delay = delay,
min_index = 200001,
max_index = 300000,
step = step,
batch_size = batch_size
)
test_gen <- generator(
data,
lookback = lookback,
delay = delay,
min_index = 300001,
max_index = NULL,
step = step,
batch_size = batch_size
)
# How many steps to draw from val_gen in order to see the entire validation set
val_steps <- (300000 - 200001 - lookback) / batch_size
# How many steps to draw from test_gen in order to see the entire test set
test_steps <- (nrow(data) - 300001 - lookback) / batch_size
library(keras)
model <- keras_model_sequential() %>%
layer_flatten(input_shape = c(lookback / step, dim(data)[-1])) %>%
layer_dense(units = 32, activation = "relu") %>%
layer_dense(units = 1)
model %>% compile(
optimizer = optimizer_rmsprop(),
loss = "mae"
)
history <- model %>% fit_generator(
train_gen,
steps_per_epoch = 500,
epochs = 20,
validation_data = val_gen,
validation_steps = val_steps
)
I tried to predict the temperature with the code below. If I am correct, this should give me the normalized predicted temperature for every batch. So when I denormalize the values and average them, I get the predicted temperature. Is this correct, and if so, for which time is the prediction made (latest observation time + delay)?
prediction.set <- test_gen()[[1]]
prediction <- predict(model, prediction.set)
Also, what is the correct way to use keras::predict_generator() and the test_gen() function? If I use the following code:
model %>% predict_generator(generator = test_gen,
steps = test_steps)
it gives this error:
error in py_call_impl(callable, dots$args, dots$keywords) :
ValueError: Error when checking model input: the list of Numpy
arrays that you are passing to your model is not the size the model expected.
Expected to see 1 array(s), but instead got the following list of 2 arrays:
[array([[[ 0.50394005, 0.6441838 , 0.5990761 , ..., 0.22060473,
0.2018686 , -1.7336458 ],
[ 0.5475698 , 0.63853574, 0.5890239 , ..., -0.45618412,
-0.45030192, -1.724062...
Note: my familiarity with the syntax of R is very limited, so unfortunately I can't give you an answer using R. Instead, I am using Python in my answer. I hope you can easily translate my words, at least, back to R.
... If I am correct, this should give me the normalized predicted
temperature for every batch.
Yes, that's right. The predictions would be normalized since you have trained it with normalized labels:
data <- scale(data, center = mean, scale = std)
Therefore, you would need to denormalize the values using the computed mean and std to find the real predictions:
pred = model.predict(test_data)
denorm_pred = pred * std + mean
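In R, using the column-wise mean and std vectors computed when the data were scaled, and given that the generator draws its targets from column 2 of data (the temperature), the back-transformation would look roughly like this:
# undo the normalization for the temperature column (column 2 of `data`)
denorm_pred <- prediction * std[2] + mean[2]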
... for which time is then predicted (latest observation time +
delay?)
That's right. Concretely, since in this particular dataset a new observation is recorded every ten minutes and you have set delay = 144, the predicted value is the temperature 24 hours (i.e. 144 * 10 = 1440 minutes = 24 hours) after the last given observation.
Also, what is the correct way to use keras::predict_generator() and
the test_gen() function?
predict_generator takes a generator that outputs only test samples, not labels (we don't need labels when performing prediction; labels are needed when training, i.e. fit_generator(), and when evaluating the model, i.e. evaluate_generator()). That's why the error says it expected one array but got two. So you need to define a generator that yields only the test samples; one alternative, in Python, is to wrap your existing generator inside another function that yields only the input samples (I don't know whether you can do this in R or not):
def pred_generator(gen):
for data, labels in gen:
yield data # discards labels
preds = model.predict_generator(pred_generator(test_generator), number_of_steps)
You need to provide one other argument, which is the number of generator steps needed to cover all the samples in the test data: num_steps = total_number_of_samples / batch_size. For example, if you have 1000 samples and the generator yields 10 samples at a time, you need to run the generator for 1000 / 10 = 100 steps.
Bonus: To see how good your model performs you can use evaluate_generator using the existing test generator (i.e. test_gen):
loss = model.evaluate_generator(test_gen, number_of_steps)
The given loss is also normalized and to denormalize it (to get a better sense of prediction error) you just need to multiply it by std (you don't need to add mean since you are using mae, i.e. mean absolute error, as the loss function):
denorm_loss = loss * std
This would tell you how much your predictions are off on average. For example, if you are predicting the temperature, a denorm_loss of 5 means that the predictions are on average 5 degrees off (i.e. are either less or more than the actual value).
Update: For prediction, you can define a new generator using an existing generator in R like this:
pred_generator <- function(gen) {
function() { # wrap it in a function to make it callable
gen()[1] # call the given generator and get the first element (i.e. samples)
}
}
preds <- model %>%
predict_generator(
generator = pred_generator(test_gen), # pass test_gen directly to pred_generator without calling it
steps = test_steps
)
evaluate_generator(model, test_gen, test_steps)
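And, continuing the R part of the answer, the evaluated MAE can be put back on the original temperature scale in the same way (a sketch, reusing the std vector computed when the data were normalized):
loss <- evaluate_generator(model, test_gen, test_steps)
# multiply by the temperature column's sd to express the MAE in degrees
denorm_loss <- loss[[1]] * std[2]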

Evaluate stm Model

I'm working on an STM model (topic modelling) and I'd like to evaluate and verify the model, but I'm not sure how to do it. My code is:
Corpus.STM <- readCorpus(dtm, type = "slam")
Model choice:
BestM1. <- searchK(Corpus.STM$documents, Corpus.STM$vocab, K=c(10,20, 30, 40, 50, 60), proportion = .4, heldout.seed = 1, prevalence=~ cvJahr+ cvDienstgrad+ cvLand, data=Jahr.Land )
BestM2. <- searchK(Corpus.STM$documents, Corpus.STM$vocab, K=c(85,110), proportion = .4, heldout.seed = 1, prevalence=~ cvJahr+ cvDienstgrad+ cvLand, data=Jahr.Land )
BestM3. <- searchK(Corpus.STM$documents, Corpus.STM$vocab, K=c(20,21,22,23,24,25,26,27,28,29,30), proportion = .4, heldout.seed = 1, prevalence=~ cvJahr+ cvDienstgrad+ cvLand, data=Jahr.Land )
str(BestM1.)
plot.searchK(BestM1.)
plot.searchK(BestM2.)
plot.searchK(BestM3.)
#27 seems to be a good choice
#Heldout
set.seed(1)
heldout<- make.heldout(Corpus.STM$documents, Corpus.STM$vocab, proportion = .5,seed = 1)
stm.mod1 <- stm(heldout$documents, heldout$vocab, K =27, seed = 1, init.type = "Spectral", max.em.its = 100 )
heldout.evaluation <- eval.heldout(stm.mod1, heldout$missing)
heldout.evaluation
#evaluation heldout
labelTopics(stm.mod1)
plot.STM(stm.mod1, type="labels", n=5, frexweight = 0.25)
cloud(stm.mod1, topic=5)
plot.STM(stm.mod1, type="summary", labeltype="frex", topics=c(1:5), n=8)
I'm not sure how to interpret the output of "eval.heldout". Additionally, I want to make sure that the model doesn't overfit, but I'm not sure how that would work.
eval.heldout() calculates the held-out log-likelihood using document completion. The number you want is heldout.evaluation$expected.heldout, which is the average of the held-out log-likelihood values across documents. Unfortunately there is no unambiguous measure of whether or not the model is "overfit." The plot.searchK() call you have will give you a plot of the held-out log-likelihood over different values of K, and if that number decreases as K goes up, one explanation is certainly overfitting.
Sorry to not have a clearer answer but unfortunately there are no hard and fast rules here.
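A small sketch of how those held-out numbers are typically inspected side by side (the column names in the searchK() results table are an assumption based on the standard stm output):
# average held-out log-likelihood of the fitted K = 27 model
heldout.evaluation$expected.heldout

# held-out likelihood for each K tried in the search
# (higher is better; a steady drop as K grows is one hint of overfitting)
BestM1.$results[, c("K", "heldout")]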
