I made a neural network in R using the keras package. It is essentially the same model I had already built in Python, trained on the same data in the same order. However, when I run it in R, the model doesn't seem to be fitting at all.
When I call predict on the model, it returns the same value regardless of the input.
I'm guessing the weights are zeroing out and it's returning just the bias.
Here's how I built the model:
model <- keras_model_sequential()
model %>%
  layer_dense(units = 256, activation = 'relu', input_shape = c(18)) %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dropout(rate = 0.25) %>%
  layer_dense(units = 32, activation = 'relu') %>%
  layer_dropout(rate = 0.25) %>%
  layer_dense(units = 16, activation = 'relu') %>%
  layer_dropout(rate = 0.25) %>%
  layer_dense(units = 8, activation = 'relu') %>%
  layer_dense(units = 2, activation = 'softmax')
Here's the output when I call predict:
model %>%
  predict(nbainput_test_x)
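One way to test the guess about zeroed-out weights is to pull them back out of the fitted model and look at their ranges. A minimal sketch, assuming the model object built above has already been trained:
w <- get_weights(model)  # list of kernel matrices and bias vectors, one pair per dense layer
lapply(w, range)         # near-zero ranges for the kernels would confirm the suspicion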
I trained a CNN using Keras and R with the TensorFlow backend to classify multispectral images. I want to calculate saliency maps per input band for a single input image. My idea is to take the mean of the saliency map for each channel to find out which input band contributed most to the classification. Is this possible, and if so, how? Everywhere I looked I only found Python implementations of saliency maps.
Let's assume this is my network, and in the end I want to calculate saliency maps for all three channels of one image so I know which channel is most important:
library(keras)
# download & load data
cifar <- dataset_cifar10()
# set up model
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu", input_shape = c(32, 32, 3)) %>%
  layer_max_pooling_2d() %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")
# compile model
model %>% compile(
  optimizer = "adam",
  loss = "sparse_categorical_crossentropy",
  metrics = "accuracy"
)
# run model
history <- model %>%
  fit(
    x = cifar$train$x,
    y = cifar$train$y,
    epochs = 10
  )
# pick out one image
test_img <- cifar$test$x[1, , , ]
# what now?
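One possible route (a sketch only, using the tensorflow R package directly rather than a ready-made saliency helper) is to record the forward pass on a GradientTape, take the gradient of the winning class score with respect to the input pixels, and then average the absolute gradients over the spatial dimensions to get one value per band. The names input, grads and per_channel below are just illustrative:
library(tensorflow)
# keep the batch dimension and convert the image to a float tensor
input <- tf$constant(cifar$test$x[1, , , , drop = FALSE], dtype = "float32")
with(tf$GradientTape() %as% tape, {
  tape$watch(input)                              # track gradients w.r.t. the input image
  preds <- model(input)                          # forward pass, shape (1, 10)
  top_score <- tf$reduce_max(preds, axis = 1L)   # score of the predicted class
})
grads <- tape$gradient(top_score, input)         # same shape as the input: (1, 32, 32, 3)
saliency <- tf$abs(grads)
# mean absolute gradient per input channel (band)
per_channel <- tf$reduce_mean(saliency, axis = c(0L, 1L, 2L))
as.array(per_channel)                            # three values, one per band
This is the plain "vanilla gradient" notion of saliency; variants such as SmoothGrad or Grad-CAM would need extra steps on top of this.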
To fit a classification model in R, I have been using library(kerasR). To control the learning rate, kerasR says to use
compile(optimizer=Adam(lr = 0.001, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-08, decay = 0, clipnorm = -1, clipvalue = -1), loss = 'binary_crossentropy', metrics = c('categorical_accuracy') )
But it gives me an error like this:
Error in modules$keras.optimizers$Adam(lr = lr, beta_1 = beta_2,
beta_2 = beta_2, : attempt to apply non-function
I also tried keras_compile but still get the same error.
I can change the optimizer in compile, but the largest learning rate is 0.01 and I want to try 0.2.
model <- keras_model_sequential()
model %>%
  layer_dense(units = 512, activation = 'relu', input_shape = ncol(X_train)) %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.1) %>%
  layer_dense(units = 2, activation = 'sigmoid') %>%
  compile(
    optimizer = 'Adam',
    loss = 'binary_crossentropy',
    metrics = c('categorical_accuracy')
  )
I think the issue is that you are mixing two different libraries, kerasR and keras. You should use only one of them. You first use the keras_model_sequential() function, which is from keras, and then you try to use the Adam() function, which is from the kerasR library. You can find the differences between these two libraries here: https://www.datacamp.com/community/tutorials/keras-r-deep-learning#differences
The following code works for me, using only the keras library.
library(keras)
model <- keras_model_sequential()
model %>%
  layer_dense(units = 512, activation = 'relu', input_shape = ncol(X_train)) %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.1) %>%
  layer_dense(units = 2, activation = 'sigmoid') %>%
  compile(
    optimizer = optimizer_adam(lr = 0.2),
    loss = 'binary_crossentropy',
    metrics = c('accuracy')
  )
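The practical difference is that passing the string 'Adam' always uses the default hyperparameters, whereas passing an optimizer object such as optimizer_adam(lr = 0.2) lets you set the learning rate (and beta_1, decay, and so on) to whatever you want to try.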
I am building a classification model with keras in R, and my code is as follows:
model <- keras_model_sequential()
model %>%
  layer_dense(units = 256, activation = 'relu', input_shape = ncol(x_train), kernel_regularizer = regularizer_l2(0.001)) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 128, activation = 'relu', kernel_regularizer = regularizer_l2(0.001)) %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 2, activation = 'sigmoid')
history <- model %>% compile(
  loss = 'binary_crossentropy',
  optimizer = 'adam',
  metrics = c('accuracy')
)
model %>% fit(x_train,
              y_train,
              epochs = 50,
              batch_size = 128,
              validation_data = (x_val, y_val))
Everything is fine, but when I try to pass the outside data (x_val, y_val) to be used as validation data via validation_data, I get this error:
Error: unexpected ',' in:
" batch_size = 128,
validation_data =(x_val,"
If I simply use validation_split = 0.2, everything works.
I have looked at the code many times but could not figure out what is wrong here.
Can somebody please help me with this?
Many thanks,
Ho
The issue is how the validation data is passed: it should be a list, because R has no tuple type (Python does).
According to the keras documentation:
validation_data - Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. This could be a list (x_val, y_val) or a list (x_val, y_val, val_sample_weights). validation_data will override validation_split.
So we just replace (x_val, y_val) with list(x_val, y_val):
model %>%
  fit(x_train,
      y_train,
      epochs = 50,
      batch_size = 128,
      validation_data = list(x_val, y_val))
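More generally, wherever the Python Keras API expects a tuple, the R interface expects a list() (or a plain vector such as c(32, 32, 3) for shapes); a bare parenthesised pair like (x_val, y_val) is not valid R syntax at all, which is exactly what the unexpected ',' parse error is complaining about.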
This is my first time playing with Keras. I tried to run the model and look at the loss and accuracy, but for some reason the loss for val_loss is not being plotted.
My code:
model <- keras_model_sequential() %>%
  layer_dense(units = 256, activation = "relu", input_shape = dim(train.X)[[2]]) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 1, activation = "sigmoid")
model %>% compile(
  optimizer = "rmsprop",          # could also configure optimizer_rmsprop(lr = 0.001)
  loss = "binary_crossentropy",   # custom loss -> loss_binary_crossentropy
  metrics = c("accuracy")         # metric_binary_accuracy
)
history <- model %>% fit(
  train.X,
  train.Y,
  epochs = 100,
  batch_size = 64,
  validation_data = list(x_val, y_val)
)
My results (plot not shown here):
I would really appreciate it if someone could explain to me why val_loss is not being plotted.
When I run the following R script I get summary information about a keras model and its added layers, but no confirmation that the model has been compiled. How do I check whether the compile step has been completed?
library(keras)
model <- keras_model_sequential()
model %>%
  layer_dense(units = 64, activation = 'relu', input_shape = c(20)) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 10, activation = 'softmax') %>%
  compile(
    loss = 'categorical_crossentropy',
    optimizer = optimizer_sgd(lr = 0.01, decay = 1e-6,
                              momentum = 0.9, nesterov = TRUE),
    metrics = c('accuracy')
  )
summary(model)
Check the built flag:
library(keras)
model <- keras_model_sequential()
model$built # FALSE
model %>%
  layer_dense(units = 64, activation = 'relu', input_shape = c(20)) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dropout(rate = 0.5) %>%
  layer_activation(activation = 'relu') %>%
  layer_dense(units = 10) %>%
  layer_activation(activation = 'softmax')
model$built # FALSE
model %>%
  compile(
    loss = 'categorical_crossentropy',
    optimizer = optimizer_sgd(lr = 0.01, decay = 1e-6,
                              momentum = 0.9, nesterov = TRUE),
    metrics = c('accuracy')
  )
model$built # TRUE
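Note that built tracks whether the model's weights have been created, so depending on the keras version it can flip to TRUE before compile() is ever called. As an alternative check (an assumption on my part, so please verify on your own setup), you could look at whether an optimizer has been attached, since attaching one is exactly what compile() does:
model$optimizer # assumption: empty/NULL until compile() attaches an optimizer; behaviour may differ between versions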