I'm trying to run the code from section 5.4.2 of F. Chollet's book "Deep Learning with R", on visualizing convnet filters. The code is as follows:
library(keras)
model <- application_vgg16(
  weights = 'imagenet',
  include_top = FALSE
)
layer_name <- 'block3_conv1'
filter_index <- 1
layer_output <- get_layer(model, layer_name)$output
loss <- k_mean(layer_output[,,,filter_index])
grads <- k_gradients(loss, model$input)[[1]]
but it throws:
Error in py_call_impl(): ! RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.
According to this GitHub issue, this can easily be resolved by adding two lines of code at the beginning of the script:
library(tensorflow)
tf$compat$v1$disable_eager_execution()
and indeed, tf$executing_eagerly() now returns FALSE instead of TRUE. However, k_gradients() then throws another error:
Error in `py_call_impl()`:
! TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='tf.math.reduce_mean/Mean:0', description="created by layer 'tf.math.reduce_mean'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.
How to overcome this issue and visualize convnet filters?
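For reference, the direction the error message points to would be to keep eager execution enabled and compute the gradient with tf$GradientTape() instead of k_gradients(). Below is a minimal sketch of that approach (assuming TF 2.x; the 150x150 input size and the random starting image are my own choices), though I'm not sure it is the intended fix:
library(keras)
library(tensorflow)

model <- application_vgg16(weights = "imagenet", include_top = FALSE)

# a sub-model mapping the input image to the activations of block3_conv1
feature_extractor <- keras_model(
  inputs = model$input,
  outputs = get_layer(model, "block3_conv1")$output
)

filter_index <- 1
img <- tf$random$uniform(shape = c(1L, 150L, 150L, 3L))  # random starting image

with(tf$GradientTape() %as% tape, {
  tape$watch(img)                       # img is not a variable, so watch it
  activation <- feature_extractor(img)
  loss <- tf$reduce_mean(activation[, , , filter_index])
})
grads <- tape$gradient(loss, img)       # plays the role of k_gradients() above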
Related
this post is related to my earlier How to define a Python Class which uses R code, but called from rTorch? .
I came across the torch package in R (https://torch.mlverse.org/docs/index.html) which allows to Define a DataSet class definition. Yet, I also need to be able to define a model class like class MyModelClass(torch.nn.Module) in Python. Is this possible in the torch package in R?
When I tried to do it with reticulate it did not work - there were conflicts like
ImportError: /User/homes/mreichstein/miniconda3/envs/r-torch/lib/python3.6/site-packages/torch/lib/libtorch_python.so: undefined symbol: _ZTINSt6thread6_StateE
It also would not make much sense, since torch isn't wrapping Python.
But it is loosing at lot of flexibility, which rTorch has (but see my problem in the upper post).
Thanks for any help!
Markus
You can do that directly using R's torch package, which seems quite comprehensive, at least for basic tasks.
Neural networks
Here is an example of how to create an nn.Sequential model:
library(torch)
model <- nn_sequential(
  nn_linear(D_in, H),
  nn_relu(),
  nn_linear(H, D_out)
)
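(D_in, H and D_out are just the layer dimensions and have to be defined before the snippet runs; for example, with arbitrary values:)
D_in <- 10; H <- 16; D_out <- 1   # arbitrary example dimensions
model(torch_randn(5, D_in))       # forward pass: returns a 5 x D_out tensor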
Below is a custom nn_module (the counterpart of torch.nn.Module) implementing a simple dense (torch.nn.Linear-like) layer (source):
library(torch)
# a custom module: a single dense layer implemented from scratch,
# with its weight and bias registered as parameters.
dense <- nn_module(
  classname = "dense",
  # the initialize function runs whenever we instantiate the model
  initialize = function(in_features, out_features) {
    # just so you can see when this function is called
    cat("Calling initialize!")
    # we use nn_parameter to indicate that these tensors are special
    # and should be treated as parameters by `nn_module`.
    self$w <- nn_parameter(torch_randn(in_features, out_features))
    self$b <- nn_parameter(torch_zeros(out_features))
  },
  # this function is called whenever we call our model on input.
  forward = function(x) {
    cat("Calling forward!")
    torch_mm(x, self$w) + self$b
  }
)
model <- dense(3, 1)
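Calling the instance then triggers forward(); a quick usage sketch (the 10 x 3 input is an arbitrary choice matching the 3 input features):
x <- torch_randn(10, 3)   # 10 observations, 3 features each
y <- model(x)             # prints "Calling forward!" and returns a 10 x 1 tensor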
Another example, this time using nn_linear (torch.nn.Linear) layers to create a neural network (source):
two_layer_net <- nn_module(
  "two_layer_net",
  initialize = function(D_in, H, D_out) {
    self$linear1 <- nn_linear(D_in, H)
    self$linear2 <- nn_linear(H, D_out)
  },
  forward = function(x) {
    x %>%
      self$linear1() %>%
      nnf_relu() %>%
      self$linear2()
  }
)
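As a quick usage sketch (the dimensions, learning rate, number of steps and random data are arbitrary choices of mine, not part of the original example):
library(torch)

D_in <- 10; H <- 32; D_out <- 1
net <- two_layer_net(D_in, H, D_out)

x <- torch_randn(64, D_in)   # random training inputs
y <- torch_randn(64, D_out)  # random targets

optimizer <- optim_sgd(net$parameters, lr = 0.01)
for (step in 1:100) {
  optimizer$zero_grad()
  loss <- nnf_mse_loss(net(x), y)
  loss$backward()
  optimizer$step()
}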
There are also other resources, like this one (using control flow and weight sharing).
Other
Looking at the reference, it seems most of the layers are already provided (I didn't notice transformer layers at a quick glance, but this is minor).
As far as I can tell, the basic building blocks for neural networks, their training, etc. are in place (there is even JIT, so sharing models between languages should be possible).
I’m trying to implement parallel computing in an R package that calls C from R with the .C function. It seems that the nodes of the cluster can’t access the dynamic library. I have made a parallel socket cluster, like this:
library(parallel)
cl <- makeCluster(2)
I would like to evaluate a C function called valgrad from my R package on each of the nodes in my cluster using clusterEvalQ from the R package parallel. However, my code produces an error. I compile my package, but when I run
out <- clusterEvalQ(cl, cresults <- .C(C_valgrad, …))
where … represents the arguments of the C function valgrad, I get this error:
Error in checkForRemoteErrors(lapply(cl, recvResult)) :
2 nodes produced errors; first error: object 'C_valgrad' not found
I suspect there is a problem with clusterEvalQ’s ability to access the dynamic library. I attempted to fix this problem by loading the glmm package into the cluster using
clusterEvalQ(cl, library(glmm))
but that did not fix the problem.
I can evaluate valgrad on each of the nodes using the foreach function from the foreach R package, like this:
out <- foreach(1:no_cores) %dopar% {.C(C_valgrad, …)}
where no_cores is the number of nodes in my cluster. However, this approach doesn't allow the results of evaluating valgrad to be used in any subsequent calculation on the cluster.
How can I either
(1) make the results of the evaluation of valgrad accessible for later calculations on the cluster or
(2) use clusterEvalQ to evaluate valgrad?
You have to load the external library, but this is not done with library() calls; it's done with dyn.load().
The following two functions are useful if you work with more than one operating system; they use the built-in variable .Platform$dynlib.ext.
Note also the unload function. You will need it if you develop a library of C functions: if you change a C function, the dynamic library has to be unloaded and then (the new version) reloaded before testing it.
See Writing R Extensions (file R-exts.pdf in the doc folder, or on CRAN), section 5.
dynLoad <- function(dynlib) {
  dynlib <- paste(dynlib, .Platform$dynlib.ext, sep = "")
  dyn.load(dynlib)
}

dynUnload <- function(dynlib) {
  dynlib <- paste(dynlib, .Platform$dynlib.ext, sep = "")
  dyn.unload(dynlib)
}
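Tying this back to the question: the shared library has to be loaded on every worker before .C() can find the routine. A sketch of that pattern, assuming the compiled code lives in the installed glmm package's libs/ directory (the exact path and the arguments of valgrad are assumptions):
library(parallel)

cl <- makeCluster(2)

# path to the package's compiled code, without the extension;
# dynLoad() from above appends .Platform$dynlib.ext
lib_path <- file.path(find.package("glmm"), "libs", "glmm")
clusterExport(cl, c("lib_path", "dynLoad"))

clusterEvalQ(cl, {
  library(glmm)
  dynLoad(lib_path)   # load the shared library on this worker
})

# once the library is loaded on every node, the C routine can be resolved by
# name; replace the placeholder arguments with valgrad's real ones
# out <- clusterEvalQ(cl, .C("valgrad", ...))

stopCluster(cl)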
I have built a deep learning model with the h2o package in R. I obtained a model with good performance and I want to save it. However, everything I have tried has failed. Base R provides the functions save() and save.image(), and I used save() to store my model. But when I try to use the saved model on new data, I get an error saying the model object is not found. I have been stuck on this problem for a few days. If you have any good ideas, please share them. Thanks for reading!
load("F:/R/Rstudy/myfile") ##download the saved file
library(h2o)
h2o.init()
Te <- read.csv("F:/Rdata/Test.csv") ## import testing data
Te <- as.h2o(Te)
Te[,2] <- as.factor(Te[,2])
perf <- h2o.performance(model, Te) ## test model
ERROR: Unexpected HTTP Status code: 404 Not Found (url = http://localhost:54321/3/ModelMetrics/models/DeepLearning_model_R_1533035975237_1/frames/RTMP_sid_8185_2)
ERROR MESSAGE:
Object 'DeepLearning_model_R_1533035975237_1' not found in function: predict for argument: model
You can use the code below to save and reload the model.
build the model
model <- h2o.deeplearning(params)
save the model
model_path <- h2o.saveModel(object=model, path=getwd(), force=TRUE)
print(model_path)
/tmp/mymodel/DeepLearning_model_R_1441838096933
load the model
saved_model <- h2o.loadModel(model_path)
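Applied to the original question, the test step would then use the reloaded model rather than an object restored with load(); a sketch (file paths taken from the question, model_path from a previous h2o.saveModel() call):
library(h2o)
h2o.init()

saved_model <- h2o.loadModel(model_path)            # reload the saved model

Te <- as.h2o(read.csv("F:/Rdata/Test.csv"))         # import testing data
Te[, 2] <- as.factor(Te[, 2])
perf <- h2o.performance(saved_model, newdata = Te)  # test the model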
Reference - http://docs.h2o.ai/h2o/latest-stable/h2o-docs/save-and-load-model.html
Hope this helps,
ND
I'm trying to speed up my R code with the future package, using a multicore plan on Linux. In the future definition I'm creating a Java object and trying to pass it to .jcall(), but I'm getting a null value for the Java object inside the future. Could anyone please help me resolve this? Below is sample code:
library("future")
plan(multicore)
library(rJava)
.jinit()
# preprocess is a user-defined function
preprocess <- function(a) {
  # some preprocessing task here
  # time-consuming statistical analysis here
  return(lreturn)  # returns a list of 3 components
}
my_value <- preprocess(a = value)

obj <- .jnew("java.custom.class")

f <- future({
  .jcall(obj, "V", "CustomJavaMethod", my_value)
})
Basically I'm dealing with large streaming data. In the code above I'm sending each streamed string to a user-defined function for statistical analysis, which returns a list of 3 components. I then want to send this list to the custom Java class [ java.custom.class ] for further processing using the custom Java method [ CustomJavaMethod ].
Without future my code runs fine, but I receive about 12 streaming records per minute and the processing can't keep up, so delays build up. I'm currently on a Unix machine with 16 cores. With the future package the processing is fast enough, but when I traced through my code I found that something goes wrong in the .jcall.
Hope this clarifies my pain.
(Author of the future package here:)
Unfortunately, there are certain types of objects in R that cannot be sent to another R process for further processing. To clarify, this is a limitation of those types of objects, not of the parallel framework in use (here the future framework). The simplest example of such an object is a file connection, e.g. con <- file("my-local-file.txt", open = "wb"). I've documented some examples in Section 'Non-exportable objects' of the 'Common Issues with Solutions' vignette (https://cran.r-project.org/web/packages/future/vignettes/future-4-issues.html).
As mentioned in the vignette, you can set an option (*) such that the future framework looks for these types of objects and gives an informative error before attempting to launch the future ("early stopping"). Here is your example with this check activated:
library("future")
plan(multisession)
## Assert that global objects can be sent back and forth between
## the main R process and background R processes ("workers")
options(future.globals.onReference = "error")
library("rJava")
.jinit()
end <- .jnew("java/lang/String", " World!")
f <- future({
  start <- .jnew("java/lang/String", "Hello")
  .jcall(start, "Ljava/lang/String;", "concat", end)
})
# Error in FALSE :
# Detected a non-exportable reference ('externalptr') in one of the
# globals ('end' of class 'jobjRef') used in the future expression
So, yes, your example actually works when using plan(multicore). The reason is that 'multicore' uses forked processes (available on Unix and macOS but not Windows). However, I would try my best not to limit your software to parallelizing only on "forkable" systems; if you can find an alternative approach, I would aim for that. That way your code will also work on, say, a huge cloud cluster.
(*) The reason these checks are not enabled by default is that (a) they are still in beta testing, and (b) they come with overhead, because we basically need to scan all the globals for unsupported objects. Whether these checks will be enabled by default in the future will be discussed over at https://github.com/HenrikBengtsson/future.
The code in the question calls an unknown method on a custom Java class we don't have, and my_value is undefined, so it's hard to know what you are really trying to achieve.
Take a look at the following example, maybe you can get inspiration from it:
library(future)
plan(multicore)

library(rJava)
.jinit()

end <- .jnew("java/lang/String", " World!")

f <- future({
  start <- .jnew("java/lang/String", "Hello")
  .jcall(start, "Ljava/lang/String;", "concat", end)
})
value(f)
[1] "Hello World!"
I created a model using Apache OpenNLP's command line tool to recognize named entities. The below code created the model using the file sentences4OpenNLP.txt as a training set.
opennlp TokenNameFinderTrainer -type maxent -model C:\Users\Documents\en-ner-org.bin -lang en -data C:\Users\Documents\apache-opennlp-1.6.0\sentences4OpenNLP.txt -encoding UTF-8
I tested the model from the command line by passing it sentences to tag, and the model seemed to be working well. However, I am unable to successfully use the model from R. I am using the below lines in attempts to create an organization annotating function. Using the same code to load a model downloaded from OpenNLP works fine.
modelNER <- "C:/Users/Documents/en-ner-org.bin"
oa <- openNLP::Maxent_Entity_Annotator(language = "en",
kind = "organization",
probs = TRUE,
model = modelNER)
When the above code is run I get an error saying:
Could not instantiate the opennlp.tools.namefind.TokenNameFinderFactory. The initialization throw an exception.
opennlp.tools.util.ext.ExtensionNotLoadedException: Unable to find implementation for opennlp.tools.util.BaseToolFactory, the class or service opennlp.tools.namefind.TokenNameFinderFactory could not be located!
at opennlp.tools.util.ext.ExtensionLoader.instantiateExtension(ExtensionLoader.java:97)
at opennlp.tools.util.BaseToolFactory.create(BaseToolFactory.java:106)
at opennlp.tools.util.model.BaseModel.initializeFactory(BaseModel.java:254)
Error in .jnew("opennlp.tools.namefind.TokenNameFinderModel", .jcast(.jnew("java.io.FileInputStream", :
java.lang.IllegalArgumentException: opennlp.tools.util.InvalidFormatException: Could not instantiate the opennlp.tools.namefind.TokenNameFinderFactory. The initialization throw an exception.
at opennlp.tools.util.model.BaseModel.loadModel(BaseModel.java:237)
at opennlp.tools.util.model.BaseModel.<init>(BaseModel.java:181)
at opennlp.tools.namefind.TokenNameFinderModel.<init>(TokenNameFinderModel.java:110)
Any advice on how to fix the error would be a big help. Thanks in advance.
Resolved the error. The R function openNLP::Maxent_Entity_Annotator was not compatible with the named entity recognition (NER) model being produced by OpenNLP 1.6.0. Building the NER model using OpenNLP 1.5.3 resulted in openNLP::Maxent_Entity_Annotator running without error.
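For completeness, a sketch of how such an annotator is typically applied once it loads (the sentence is made up; sentence and word token annotations are required before the entity annotator can run):
library(NLP)
library(openNLP)

s <- as.String("The Apache Software Foundation released OpenNLP 1.5.3.")

# sentence and word token annotations must exist before entity annotation
a <- annotate(s, list(Maxent_Sent_Token_Annotator(),
                      Maxent_Word_Token_Annotator()))

# 'oa' is the organization annotator created from the custom model above
entities <- annotate(s, oa, a)
entities[entities$type == "entity"]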