set.seed and publishing a model in R

Do you include set.seed() as part of the final model when you are ready to distribute/expose your model to the real world? Or do you only use set.seed() during training and validation?
Once you decide on your method, parameters, etc., do you run the model without setting the seed and launch that model? Or do you pick the seed that performed best during validation?
Thanks!

I doubt different seeds give results that different unless your method involves random sampling of features or observations. Still, I believe you answered your own question: to reproduce the exact result you would need to include the seed value with your final model. It is not of great significance, however, so you don't need to worry about it too much.
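For example (just a sketch - the glm model, the mtcars split and the file name are arbitrary stand-ins for your own pipeline), you can fix the seed in the training script and store it alongside the fitted object, so the exact fit can be reproduced later:

# Sketch only: fix the seed, make a random split, fit, and keep the seed with the model.
seed <- 123
set.seed(seed)

train_idx <- sample(nrow(mtcars), size = floor(0.7 * nrow(mtcars)))  # the random part
fit <- glm(am ~ mpg + wt, data = mtcars[train_idx, ], family = binomial)

# Store the seed together with the fitted object for exact reproducibility.
saveRDS(list(model = fit, seed = seed), file = "final_model.rds")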

Related

When to use Train Validation Test sets

I know this question is quite common, but I have looked at all the questions that have been asked before and I still can't understand why we also need a validation set.
I know sometimes people only use a train set and a test set, so why do we also need a validation set?
And how do we use it?
For example, in order to impute missing data, should I impute these 3 different sets separately or not?
Thank you!
I will try to answer with an example.
If I'm training a neural network or doing linear regression using only train and test data, I can check my test loss at each iteration and stop when the test loss begins to grow, or keep a snapshot of the model with the lowest test loss.
In some sense this is "overfitting" to my test data, since I decide when to stop based on it.
If I were using train, validation and test data, I could do the same process as above with the validation set instead of the test set, and then, after I decide that my model is done training, test it on the never-before-seen test data to get a less biased estimate of my model's predictive performance.
For the second part of the question, I would suggest treating at least the test data as independent and imputing its missing data separately, but it depends on the situation and the data.
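For example (a rough sketch only - the 70/15/15 proportions, the airquality data and the simple mean imputation are arbitrary choices), you can split first and then impute each set only from its own values:

# Sketch: 70/15/15 split of airquality, then mean-impute Ozone within each set separately.
set.seed(1)
n <- nrow(airquality)
idx <- sample(n)                                      # shuffled row indices
train <- airquality[idx[1:floor(0.70 * n)], ]
valid <- airquality[idx[(floor(0.70 * n) + 1):floor(0.85 * n)], ]
test  <- airquality[idx[(floor(0.85 * n) + 1):n], ]

impute_mean <- function(d, col) {
  d[[col]][is.na(d[[col]])] <- mean(d[[col]], na.rm = TRUE)   # fill NAs with this set's own mean
  d
}
train <- impute_mean(train, "Ozone")
valid <- impute_mean(valid, "Ozone")
test  <- impute_mean(test,  "Ozone")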

Can ensemble classifiers underperform the best single classifier?

I have recently run an ensemble classifier in mlr (R) on a multicenter data set. I noticed that the ensemble over three classifiers (which were trained on different data modalities) was worse than the best single classifier.
This was unexpected to me. I was using logistic regression (without any parameter optimization) as the simple classifier and Partial Least Squares (PLS) Discriminant Analysis as the super learner, since the base-learner predictions ought to be correlated. I also tested different super learners such as naive Bayes and logistic regression. The results did not change.
Here are my specific questions:
1) Do you know whether this can in principle occur?
(I also googled a bit and found this blog that seems to indicate that it can:
https://blogs.sas.com/content/sgf/2017/03/10/are-ensemble-classifiers-always-better-than-single-classifiers/)
2) Especially if you are as surprised as I was: do you know of any checks I could do in mlr to make sure that there isn't a bug? I have tried a different cross-validation scheme (originally I used leave-center-out CV, but since some centers provided very little data, I wasn't sure whether this might lead to odd fits of the super learner), and the result still holds. I also tried combining different data modalities, and they show the same phenomenon.
I would be grateful to hear whether you have experienced this and, if not, whether you know what the problem could be.
Thanks in advance!
Yes, this can happen - ensembles do not always guarantee a better result. More details on the cases where this can happen are also discussed in this Cross Validated question.
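One sanity check outside your own pipeline is to rebuild a small stacked model on a toy task and compare it with its base learners. The sketch below assumes mlr's makeStackedLearner() interface and the learner IDs shown (classif.logreg, classif.rpart, classif.lda); please check them against your installed mlr version:

library(mlr)

# Toy binary task; replace with your own multicenter data.
dat  <- droplevels(iris[iris$Species != "setosa", ])
task <- makeClassifTask(data = dat, target = "Species")

base <- list(
  makeLearner("classif.logreg", predict.type = "prob"),
  makeLearner("classif.rpart",  predict.type = "prob"),
  makeLearner("classif.lda",    predict.type = "prob")
)

# Stacked learner: cross-validated base-learner predictions feed the super learner.
stack <- makeStackedLearner(base.learners = base,
                            super.learner = "classif.logreg",
                            predict.type  = "prob",
                            method        = "stack.cv")

rdesc <- makeResampleDesc("CV", iters = 5, stratify = TRUE)
set.seed(42)
res_stack <- resample(stack, task, rdesc, measures = auc)
res_base  <- lapply(base, function(l) resample(l, task, rdesc, measures = auc))

res_stack$aggr                          # AUC of the ensemble
sapply(res_base, function(r) r$aggr)    # AUC of each base learner

If the ensemble also trails the best base learner on a clean toy task like this, that would suggest the behaviour is a general property of stacking rather than a bug specific to your pipeline.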

Different values when fitting a boosted tree twice

I use the R package adabag to fit boosted trees to a (large) data set (140 observations with 3,845 predictors).
I executed this method twice with the same parameters and the same data set, and each time I got different accuracy values (I defined a simple function which gives the accuracy for a given data set).
Did I make a mistake, or is it usual that each fit returns different accuracy values? Is this caused by the size of the data set?
The function which returns the accuracy given the predicted values and the true test-set values:
err <- function(pred_d, test_d) {
  abs.acc <- sum(pred_d == test_d)     # number of correct predictions
  rel.acc <- abs.acc / length(test_d)  # proportion of correct predictions
  c(abs.acc, rel.acc)                  # return absolute and relative accuracy
}
New edit (9.1.2017):
An important follow-up question in the above context:
As far as I can see, I do not use any "pseudo-randomness objects" (such as generated random numbers) in my code, because I essentially just fit trees (using the R package rpart) and boosted trees (using the R package adabag) to a large data set. Can you explain where the pseudo-randomness enters when I execute my code?
Edit 1: A similar phenomenon also happens with single trees (using the R package rpart).
Edit 2: The phenomenon did not occur with single trees (using rpart) on the iris data set.
There's no reason you should expect to get the same results if you didn't set your seed (with set.seed()).
It doesn't matter which seed you set if you're doing statistics rather than information security. You might run your model with several different seeds to check its sensitivity. You just have to set the seed before anything that involves pseudo-randomness; most people set it at the beginning of their code.
This is ubiquitous in statistics; it affects all probabilistic models and processes across all languages.
Note that in information security it's important to have a (pseudo-)random seed that cannot easily be guessed by brute-force attacks, because (in a nutshell) knowing a seed value used internally by a security program paves the way for it to be hacked. In science and statistics it's the opposite: you and anyone you share your code/research with should know the seed, to ensure reproducibility.
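In the adabag case the pseudo-randomness most likely enters through boosting's resampling/reweighting of the observations at each iteration (and, for rpart, possibly through its internal cross-validation folds if you prune based on them) - this is an assumption about your setup, since the code itself is not shown. The effect of set.seed() on any such random draw is easy to see:

# Same seed, same "random" draw; no seed, a different draw.
set.seed(2017)
s1 <- sample(140, 100)    # e.g. selecting a random subset of 100 of 140 observations

set.seed(2017)
s2 <- sample(140, 100)
identical(s1, s2)         # TRUE: reproducible

s3 <- sample(140, 100)    # seed not reset
identical(s1, s3)         # almost certainly FALSE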
https://en.wikipedia.org/wiki/Random_seed
http://www.grasshopper3d.com/forum/topics/what-are-random-seed-values

R Supervised Latent Dirichlet Allocation Package

I'm using this LDA package for R. Specifically, I am trying to do supervised latent Dirichlet allocation (sLDA). The linked package has an slda.em function. However, what confuses me is that it asks for alpha, eta and variance parameters. As far as I understand, these parameters are unknowns in the model. So my question is: did the author of the package mean that these are initial guesses for the parameters? If so, there doesn't seem to be a way of accessing them from the result of running slda.em.
Aside from coding the extra EM steps in the algorithm, is there a suggested way to guess reasonable values for these parameters?
Since you are trying to build a supervised model, the typical approach would be to use cross-validation to determine the model parameters: hold out some of the data as your test set, train a model on the remaining data, and evaluate the model's performance, repeating this k times. You then repeat the procedure with different model parameters to determine which ones give the best performance.
In the specific case of slda, I would run demo(slda) to see the author's implementation of it. When you run the demo, you'll see that he sets alpha = 1.0, eta = 0.1 and variance = 0.25. I'd suggest using these as your starting point, and then use cross-validation to find better parameters if you need to improve model performance.
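A rough skeleton of that grid search could look like the following; fit_and_score() is a purely hypothetical stub that you would replace with an actual slda.em() fit plus an evaluation of its predictions on the held-out fold:

# Hypothetical skeleton: k-fold CV over a grid of (alpha, eta, variance).
fit_and_score <- function(train_idx, test_idx, alpha, eta, variance) {
  # Placeholder: fit slda.em() on the training fold here and return a
  # performance score (e.g. predictive R^2) on the held-out fold.
  runif(1)
}

k      <- 5
n_docs <- 100                                    # assumed number of documents
folds  <- sample(rep(1:k, length.out = n_docs))  # random fold assignment

grid <- expand.grid(alpha    = c(0.1, 1, 10),
                    eta      = c(0.01, 0.1, 1),
                    variance = c(0.1, 0.25, 0.5))

grid$score <- apply(grid, 1, function(p) {
  mean(sapply(1:k, function(f) {
    fit_and_score(which(folds != f), which(folds == f),
                  alpha = p["alpha"], eta = p["eta"], variance = p["variance"])
  }))
})

grid[which.max(grid$score), ]   # parameter combination with the best CV score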

Does the seed value affect the training results in R?

I am trying to create a model with my data divided into a training (70%), validation (15%) and testing (15%) set. After running the model I get a certain accuracy (ROC) and certain values in my confusion matrix, but every time I change the seed value, the output changes. How do I address this? Is this the expected behavior? If so, how do I decide which value to report as the final output?
set.seed() defines a starting point for the generation of random values. Running an analysis with the same seed should return the same result; using a different seed can result in different output, in your case probably because of a different split into training, validation and testing sets.
If the differences are acceptably small, then your model is robust to different splits into training, testing and validation. If the differences are large, then your model is not robust and should not be trusted; you will have to change the way the data is split (stratification might help) or revise the model.
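A quick way to see how sensitive your result is (the 70/15/15 split, the glm model and the iris-based toy data below are placeholders for your own pipeline) is to repeat the split-fit-evaluate cycle over several seeds and look at the spread of the resulting scores:

# Toy binary problem; swap in your own data, model and metric.
dat   <- droplevels(iris[iris$Species != "setosa", ])
dat$y <- as.integer(dat$Species == "virginica")

acc_for_seed <- function(seed) {
  set.seed(seed)
  n   <- nrow(dat)
  idx <- sample(n)
  train <- dat[idx[1:floor(0.70 * n)], ]
  test  <- dat[idx[(floor(0.85 * n) + 1):n], ]    # final 15% as the test set

  fit  <- glm(y ~ Sepal.Length + Sepal.Width, data = train, family = binomial)
  pred <- as.integer(predict(fit, newdata = test, type = "response") > 0.5)
  mean(pred == test$y)                            # test accuracy for this seed
}

accs <- sapply(1:20, acc_for_seed)
summary(accs)
sd(accs)   # small spread: robust to the split; large spread: investigate further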
