How to set class_weight in the keras package of R?

I am using the keras package in R to train a deep learning model. My data set is highly imbalanced, so I want to set the class_weight argument in the fit function. Here is the fit call and the arguments I used for my model:
history <- model %>% fit(
  trainData, trainClass,
  epochs = 5, batch_size = 1000,
  class_weight = ????,
  validation_split = 0.2
)
In Python I can set class_weight as follows:
class_weight={0:1, 1:30}
But I am not sure how to do it in R. The R help describes class_weight as follows:
Optional named list mapping indices (integers) to a weight (float) to
apply to the model's loss for the samples from this class during
training. This can be useful to tell the model to "pay more attention"
to samples from an under-represented class.
Any idea or suggestions?

class_weight needs to be a list, so
history <- model %>% fit(
  trainData, trainClass,
  epochs = 5, batch_size = 1000,
  class_weight = list("0" = 1, "1" = 30),
  validation_split = 0.2
)
seems to work. Keras internally uses a function called as_class_weights to convert the list to a Python dictionary (see https://rdrr.io/cran/keras/src/R/model.R).
class_weight <- dict(list('0'=1,'1'=10))
class_weight
>>> {0: 1.0, 1: 10.0}
This looks just like the Python dictionary that you mentioned above.

I found a generic solution in Python, so I converted it into R:
library(dplyr)  # for %>% and select()

counter <- funModeling::freq(Y_data_aux_tr, plot = FALSE) %>% select(var, frequency)
majority <- max(counter$frequency)
counter$weight <- ceiling(majority / counter$frequency)  # heavier weight for rarer classes
l_weights <- setNames(as.list(counter$weight), counter$var)
Using it:
fit(..., class_weight = l_weights)
A word of advice if you are using fit_generator: since the weights are based on frequency, having a different number of training and validation samples may bias the validation results. They should be equally sized.
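As a base-R variant of the same idea (a sketch; it assumes the labels live in a vector such as trainClass from the question and needs no extra packages):
# class frequencies, e.g. for 0/1 labels
counts <- table(trainClass)
# heavier weight for the rarer classes, rounded up as above
weights <- ceiling(max(counts) / counts)
# named list in the form fit() expects, e.g. "0" = 1, "1" = 30 for a 30:1 imbalance
l_weights <- setNames(as.list(as.numeric(weights)), names(counts))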

Related

Input error when fitting a neural network model in R

I get an error when I try to run this code. I followed a guide on YouTube for building a neural network. Everything works except when I try to run this code to fit the model:
```
history <- modnn %>% fit(
  train_X_matrix, train_Y, epochs = 50, batch_size = 600,
  validation_data = list(validation_X_matrix, validation_Y))
```
This is the error I get when I try to run the code above; all the names you see are the names of the columns, i.e. the features of the model:
[error in visual studio](https://i.stack.imgur.com/ZaPXw.png)
Some extra info about the variables I use: here I created a matrix of the input variables, as was done in the guide. I also tried train_X_data as the input, but that gave the same error immediately rather than after 1 epoch.
```
# dependent and independent variables in 1 dataframe
train_X_data <- data.frame(train_X,train_y)
validation_X_data <- data.frame(validation_X,validation_y)
train_X_matrix <- model.matrix(average_daily_rate ~. -1 , data = train_X_data)
train_Y <- train_X_data$average_daily_rate
validation_X_matrix <- model.matrix(average_daily_rate ~. -1, data = validation_X_data)
validation_Y <- validation_X_data$average_daily_rate
```
The model I use is just a simple single-layer model for testing.
# 1) single layer model structure
# step 1: make architecture powerful enough
modnn <- keras_model_sequential() %>%
  layer_dense(units = 500, activation = "relu",
              input_shape = ncol(train_X)) %>%
  layer_dense(units = 1)
summary(modnn)

modnn %>% compile(loss = "mse",
                  optimizer = optimizer_rmsprop(),
                  metrics = list("mean_absolute_error"))
The error occurs after running the first epoch. I thought it was because the model could not read the names of the columns, but I tried a lot of things and nothing seemed to work.
Does anyone have an idea on how to fix this?

What's the difference between lgb.train() and lightgbm() in R?

I'm trying to build a regression model in R using LightGBM, and I'm getting a bit confused about some functions and when/how to use them.
The first one is what I've written in the title: what's the difference between lgb.train() and lightgbm()?
The description in the documentation (https://cran.r-project.org/web/packages/lightgbm/lightgbm.pdf) says that lgb.train is 'Logic to train with LightGBM' and lightgbm is 'Simple interface for training a LightGBM model', while both return an lgb.Booster, a trained model.
One difference I've found is that lgb.train() does not work with valids = , while lightgbm() does.
The second one is about the function lgb.cv(), which does cross-validation in LightGBM. How do you apply the output of lgb.cv() to a model?
As I understood from the documentation I've linked above, it seems like the output of both lgb.cv and lgb.train is a model.
Is it correct to use it like the example below?
lgbcv <- lgb.cv(params,
                lgbtrain,
                nrounds = 1000,
                nfold = 5,
                early_stopping_rounds = 100,
                learning_rate = 1.0)

lgbcv <- lightgbm(params,
                  lgbtrain,
                  nrounds = 1000,
                  early_stopping_rounds = 100,
                  learning_rate = 1.0)
Thank you in advance!
what's the difference between lgb.train() and lightgbm()?
These functions both train a LightGBM model, they're just slightly different interfaces. The biggest difference is in how training data are prepared. LightGBM training requires a special LightGBM-specific representation of the training data, called a Dataset. To use lgb.train(), you have to construct one of these beforehand with lgb.Dataset(). lightgbm(), on the other hand, can accept a data frame, data.table, or matrix and will create the Dataset object for you.
Choose whichever method you feel has a more friendly interface...both will produce a single trained LightGBM model (class "lgb.Booster").
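To make the difference concrete, here is a minimal sketch using the bundled agaricus data (your own lgbtrain object and parameters would slot in the same way):
library(lightgbm)
data(agaricus.train, package = "lightgbm")
params <- list(objective = "regression", metric = "l2")

# lgb.train(): you construct the Dataset yourself and can pass named validation sets
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
bst1 <- lgb.train(params = params, data = dtrain, nrounds = 5L,
                  valids = list(train = dtrain))

# lightgbm(): pass the raw matrix and label directly; the Dataset is built for you
bst2 <- lightgbm(data = agaricus.train$data, label = agaricus.train$label,
                 params = params, nrounds = 5L)

class(bst1)  # "lgb.Booster"
class(bst2)  # "lgb.Booster"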
that lgb.train() does not work with valids = , while lightgbm() does.
This is not correct. Both functions accept the keyword argument valids. Run ?lgb.train and ?lightgbm for documentation on those methods.
How do you apply the output of lgb.cv() to a model?
I'm not sure what you mean, but you can find an example of how to use lgb.cv() in the docs that show up when you run ?lgb.cv.
library(lightgbm)
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(objective = "regression", metric = "l2")
model <- lgb.cv(
  params = params
  , data = dtrain
  , nrounds = 5L
  , nfold = 3L
  , min_data = 1L
  , learning_rate = 1.0
)
This returns an object of class "lgb.CVBooster". That object has multiple "lgb.Booster" objects in it (the trained models that lightgbm() or lgb.train() produce).
You can extract any one of these from model$boosters. However, in practice I don't recommend using the models from lgb.cv() directly. The goal of cross-validation is to get an estimate of the generalization error for a model. So you can use lgb.cv() to figure out the expected error for a given dataset + set of parameters (by looking at model$record_evals and model$best_score).
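For instance, continuing from the snippet above, the usual pattern is to read off the summary fields of the returned object rather than reuse its per-fold boosters (a sketch using only the documented "lgb.CVBooster" fields):
# best boosting round and cross-validated score at that round
model$best_iter
model$best_score

# full per-iteration evaluation history across folds
str(model$record_evals, max.level = 2)

# the per-fold boosters are here, but you would normally retrain on the full
# data with lgb.train()/lightgbm() using the settings chosen via lgb.cv()
length(model$boosters)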

Is there a way to know how far R has gotten on a random forest model?

I am currently learning about random forests and how to create them in R. However, as I have discovered, creating these trees can be quite time consuming, and sometimes I do not know how far R has gotten or whether it has crashed, so I close R in a panic. I use the randomForest package, and my code is as follows:
model <- randomForest(def ~ .,
                      data = mydataset,
                      mtry = 4,
                      ntree = 200,
                      importance = TRUE)
Is there a way to make R show me how far it has gotten at any time, or when it is finished with one tree and is continuing to the next?
In situations such as these, you are typically looking for an argument that makes the function more verbose. This is typically something like verbose = TRUE, but it varies, and some functions do not offer any kind of verbosity setting.
In your case, you just have to look up the help of randomForest (with ?randomForest::randomForest) to find the argument do.trace.
do.trace
If set to TRUE, give a more verbose output as randomForest is run. If set to some integer, then running output is printed for every do.trace trees.
In other words, you can enable verbosity with:
model <- randomForest(def ~ ., data = mydataset, mtry = 4,
                      ntree = 200, importance = TRUE, do.trace = TRUE)
or, to print some feedback every 100 trees:
model <- randomForest(def ~ ., data = mydataset, mtry = 4,
                      ntree = 200, importance = TRUE, do.trace = 100)
It is always a good reflex to check the function's manual as a first step. If you use RStudio, you can use the Help pane instead of ? or ??.

How to retrieve elastic net coefficients?

I am using the caret package to train an elastic net model on my dataset modDat. I take a grid search approach paired with repeated cross-validation to select the optimal values of the lambda and fraction parameters required by the elastic net function. My code is shown below.
library(caret)
library(elasticnet)
grid <- expand.grid(
  lambda = seq(0.5, 0.7, by = 0.1),
  fraction = seq(0, 1, by = 0.1)
)

ctrl <- trainControl(
  method = 'repeatedcv',
  number = 5,   # folds
  repeats = 10, # repeats
  classProbs = FALSE
)

set.seed(1)
enetTune <- train(
  y ~ .,
  data = modDat,
  method = 'enet',
  metric = 'RMSE',
  tuneGrid = grid,
  verbose = FALSE,
  trControl = ctrl
)
I can get predictions using y_hat <- predict(enetTune, modDat), but I cannot view the coefficients underlying the predictions.
I have tried coef(enetTune$finalModel), but the only thing returned is NULL. I suspect that I have to give the coef() function more information, but I am not sure how to do this.
In addition, I would like to produce a box plot of the 50 sets of coefficients (10 repeats of 5 folds) associated with the optimal lambda and fraction parameters.
To see the coefficients, use predict:
predict(enetTune$finalModel, type = "coefficients")
See ?predict.enet for more information on how to get specific coefficients.
Following on from the answer by @Weihuang Wong, you can get the coefficients from the final model using the following code:
predict.enet(enetTune$finalModel, s=enetTune$bestTune[1, "fraction"], type="coef", mode="fraction")$coefficients
To me what works best is stats::predict, as in @Weihuang Wong's answer. However, as the OP pointed out in a comment, that provides a list of coefficients for every value of lambda tested.
The important thing to understand here is that when you use predict, your intention is precisely to predict the value of the parameters, not really to retrieve them. You should be aware of that and explore the options available.
In this case, you can use the same function with the argument s for the penalty parameter lambda. Remember that you are still predicting, but this time you will get the coefficients you are looking for.
stats::predict(enetTune$finalModel, type = "coefficients", s = enetTune$bestTune$lambda)

R Neural Net Issues

Here is the updated code. My issue is with the output of "results"; I'll post it below in this format for readability.
library("neuralnet")
library("ggplot2")
setwd("C:/Users/Aaron/Documents/UMUC/R/Data For Assignments")
trainset <- read.csv("SOTS.csv")
head(trainset)
## val data classification
str(trainset)
## building the neural network
risknet <- neuralnet(Overall.Risk.Value ~ Finance + Personnel + Information.Dissemenation.C,
                     data = trainset, hidden = 10, lifesign = "minimal",
                     linear.output = FALSE, threshold = 0.1)
##plot nn
plot(risknet, rep="best")
##import scoring set
score_set <- read.csv("SOSS.csv")
##select subsets-training and scoring match
score_test <- subset(score_set, select = c("Finance", "Personnel", "Information.Dissemenation.C"))
##display values of score_test
head(score_test)
##neural network compute function score_test and the neural net "risknet"
risknet.results <- compute(risknet, score_test)
##Actual value of Overall.Risk.Value variable wanting to predict. net.result = a matrix containing the overall result of the neural network
results <- data.frame(Actual = score_set$Overall.Risk.Value, Prediction = risknet.results$net.result)
results[1:14, ]
The output of results is not as expected. For instance, the actual data is a number between 5 and 8, whereas "Prediction" displays outputs of 0.9995... for each result.
Thanks again for the help.
This is how you train and predict:
1. Use training data to learn model parameters (the variable risknet in your case)
2. Use those parameters to predict scores on test data
Here is an example very similar to yours that explains how this is done.
The default activation function in neuralnet is "logistic". When linear.output is set to FALSE, it ensures that the output is mapped by the activation function to the interval [0, 1] (R Journal, neuralnet, Frauke Günther).
I just changed linear.output = TRUE in your code and the final result looks much better.
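For reference, the only change needed is in the neuralnet() call from your script (everything else stays the same):
# regression target (values between 5 and 8), so keep the output neuron linear
risknet <- neuralnet(Overall.Risk.Value ~ Finance + Personnel + Information.Dissemenation.C,
                     data = trainset, hidden = 10, lifesign = "minimal",
                     linear.output = TRUE, threshold = 0.1)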
Thanks for the help!
