TensorRT and tf.while_loop

I'm converting a custom model to TensorRT that contains a while_loop. The while_loop iterates through a fixed number of steps.
This GitHub issue states TensorRT has a problem with while_loop:
https://github.com/tensorflow/tensorrt/issues/87
The current error I'm getting is below; it corresponds to the temporary variable I'm creating in the while loop.
uff.model.exceptions.UffException: Const node conversion requested, but node is not Const
Does TensorRT support while_loops? My code is implemented in TF 1.14. If not, would TensorRT support a for loop implemented in TF 2.0?
I'm able to create a UFF model if I remove the while_loop.
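For reference, here's a minimal sketch of the kind of fixed-step loop I mean (TF 1.14; the loop body and shapes are placeholders, not my actual model):
import tensorflow as tf  # TF 1.14

def cond(i, acc):
    return i < 10  # fixed number of steps

def body(i, acc):
    # placeholder body: the temporary accumulator is updated each step
    return i + 1, acc + 1.0

i0 = tf.constant(0)
acc0 = tf.zeros([4])  # the temporary variable created for the loop
_, result = tf.while_loop(cond, body, [i0, acc0])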
Thanks in advance.

Related

'ConcatenatedDoc2Vec' object has no attribute 'docvecs'

I am a beginner in machine learning and am trying document embedding for a university project. I work with Google Colab and Jupyter Notebook (via Anaconda). The problem is that my code runs perfectly in Google Colab, but if I execute the same code in Jupyter Notebook (via Anaconda) I run into an error with the ConcatenatedDoc2Vec object.
With this function I build the vector features for a Classifier (e.g. Logistic Regression).
def build_vectors(model, length, vector_size):
    vector = np.zeros((length, vector_size))
    for i in range(0, length):
        prefix = 'tag' + '_' + str(i)
        vector[i] = model.docvecs[prefix]
    return vector
I concatenate two Doc2Vec models (d2v_dm, d2v_dbow); both work perfectly through the whole code and have no problems with the function build_vectors():
d2v_combined = ConcatenatedDoc2Vec([d2v_dm, d2v_dbow])
But if I run the function build_vectors() with the concatenated model:
#Compute combined Vector size
d2v_combined_vector_size = d2v_dm.vector_size + d2v_dbow.vector_size
d2v_combined_vec= build_vectors(d2v_combined, len(X_tagged), d2v_combined_vector_size)
I receive this error (but only if I run this in Jupyter Notebook (via Anaconda) -> no problem with this code in the Notebook in Google Colab):
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [20], in <cell line: 4>()
1 #Compute combined Vector size
2 d2v_combined_vector_size = d2v_dm.vector_size + d2v_dbow.vector_size
----> 4 d2v_combined_vec= build_vectors(d2v_combined, len(X_tagged), d2v_combined_vector_size)
Input In [11], in build_vectors(model, length, vector_size)
3 for i in range(0, length):
4 prefix = 'tag' + '_' + str(i)
----> 5 vector[i] = model.docvecs[prefix]
6 return vector
AttributeError: 'ConcatenatedDoc2Vec' object has no attribute 'docvecs'
This is mysterious to me: the code works in Google Colab but not in Jupyter Notebook via Anaconda, and I did not find anything on the web to solve my problem.
If it's working in one place but not the other, you're probably using different versions of the relevant libraries – in this case, gensim.
Does the following show exactly the same version in both places?
import gensim
print(gensim.__version__)
If not, the most immediate workaround would be to make the place where it doesn't work match the place where it does, by force-installing the same explicit version – pip install gensim==VERSION (where VERSION is the target version) – then ensuring your notebook is restarted to pick up the change.
Beware, though, that unless starting from a fresh environment, this could introduce other library-version mismatches!
Other things to note:
Last I looked, Colab was using an over-4-year-old version of Gensim (3.6.0), despite more recent releases with many fixes & performance improvements. It's often best to stay at, or closer to, the latest versions of any key libraries used by your project; this answer describes how to trigger the installation of a more recent Gensim at Colab. (Though of course, the initial effect of that might be to cause the same breakage at Colab, since your code was adapted for the older version.)
In more-recent Gensim versions, the property formerly called docvecs is now called just dv - so some older code erroring this way may only need docvecs replaced with dv to work. (Other tips for migrating older code to the latest Gensim conventions are available at: https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4 )
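For example, the rename in code (a minimal sketch, assuming a Doc2Vec model named model trained under Gensim 4.x, with a tag named 'tag_0'):
# Gensim >= 4.0: use .dv where older code used .docvecs
vec = model.dv['tag_0']
# Gensim 3.x equivalent: vec = model.docvecs['tag_0']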
It's unclear where you're pulling the ConcatenatedDoc2Vec class from. A class of that name exists in some Gensim demo/test code, as a very minimal shim that was at one time used in attempts to reproduce the results of the original "Paragraph Vector" (aka Doc2Vec) paper. But beware: that's not a usual way to use Doc2Vec, and the class of that name barely does anything outside its original narrow purpose.
Further, beware that as far as I know, no one has ever reproduced the full claimed performance of the two-kinds-of-doc-vectors-concatenated approach reported in that paper, even using the same data, described technique, and evaluation. The claimed results likely relied on some other undisclosed techniques, or some error in the writeup. So if you're trying to mimic that, don't get too frustrated. And know that most uses of Doc2Vec just pick one mode.
If you have your own separate reasons for creating concatenated feature-vectors, from multiple algorithms, you should probably write your own code for that, not limited to the peculiar two-modes-of-Doc2Vec code from that one experiment.
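For instance, a minimal sketch of such do-it-yourself concatenation (assuming two trained Doc2Vec models, d2v_dm and d2v_dbow, under Gensim 4.x, sharing the same tags):
import numpy as np

def combined_vector(tag, models):
    # concatenate the per-model doc-vectors for a single tag
    return np.concatenate([m.dv[tag] for m in models])

vec = combined_vector('tag_0', [d2v_dm, d2v_dbow])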

Using reticulate with targets

I'm having this weird issue where my target, which interfaces with a slightly customized Python module (installed with pip install --editable) through reticulate, gives different results when called from an interactive R session than when targets is started directly from the command line, even when I make sure the other argument(s) to tar_make are identical (callr_function = NULL, which I use for interactive debugging). The function is deterministic and should be returning exactly the same result, but isn't.
It's tricky to provide a reproducible example, but if truly necessary I'll invest the required time in it. I'd like tips on how to debug this and identify the exact issue. I already safeguarded against potential pointer issues; the Python object is not passed around between different targets/environments (anymore), rather it's immediately used to compute the result of interest. I also checked that the same Python version is being used by printing the result of reticulate::py_config() to screen, and verified that both approaches are using the same version of the customized module.
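For reference, this is roughly the check I mean (the target name py_info is just illustrative):
# interactive session
reticulate::py_config()

# inside _targets.R, capture the same info from a target
targets::tar_target(py_info, capture.output(reticulate::py_config()))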
Thanks in advance!

Runtime customizable settings

I am coming from Python and am looking for the equivalent of a runtime-customisable setting.
In a Python package I would just have a
package.settings dictionary
and could change those settings with a simple:
package.settings[key] = value.
I tried to simulate dictionaries in R with a named list (I am aware of the hash package).
If I import the package, and want to change the list with:
package::settings$key = value
I get the error:
Error in package::settings <- value : object 'package' not found
How can I implement runtime customisable settings in R?
I know that I sometimes have to change my mental model (especially when switching languages ;), so if a dictionary is not the correct data structure, I am happy to hear it.
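The closest idiom I've found so far is an environment with accessor functions (a sketch with illustrative names; environments are mutable in place), but I'm not sure it's idiomatic:
# inside the package
.settings <- new.env(parent = emptyenv())
.settings$verbose <- FALSE  # a default value

set_setting <- function(key, value) assign(key, value, envir = .settings)
get_setting <- function(key) get(key, envir = .settings)

# at runtime, from outside the package
package::set_setting("verbose", TRUE)
package::get_setting("verbose")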

Training the SyntaxNet POS Tagger in SyntaxNet

I had some difficulties training the SyntaxNet POS tagger and parser, and I found a good solution, which I address in the Answers section. If you have got stuck on one of the following problems, this documentation really helps:
1. The training, testing, and tuning data sets introduced by Universal Dependencies were in .conllu format, and I did not know how to change the format to a .conll file; even after I found conllu-formconvert.py and conllu_to_conllx.pl, I still didn't have a clue how to use them. If you have a problem like this, the documentation has a Python file named convert.py, which is called in the main body of train.sh and train_p.sh to convert the downloaded datasets to files readable by SyntaxNet.
2. Whenever I ran bazel test (in one Stack Overflow question and answer I was told to run bazel test on parser_trainer_test.sh), it failed and gave me this error in test.log: path to save model cannot be found: --model_path=$TMP_DIR/brain_parser/greedy/$PARAMS/model
The documentation splits training the POS tagger and the parser, and shows how to use different directories in parser_trainer and parser_eval. Even if you don't want to use the document itself, you can update your files based on it.
3. For me, training the parser took one day, so don't panic; it takes time "if you do not use a GPU server", said dsindex.
I got one answer on GitHub, from dsindex, and I found it very useful.
The documentation at https://github.com/dsindex/syntaxnet includes:
convert_corpus
train_pos_tagger
preprocess_with_tagger
As dsindex said, and I quote: "I thought you want to train pos-tagger. if then, run ./train.sh"

The cause of "bad magic number" error when loading a workspace and how to avoid it?

I tried to load my R workspace and received this error:
Error: bad restore file magic number (file may be corrupted) -- no data loaded
In addition: Warning message:
file ‘WORKSPACE_Wedding_Weekend_September’ has magic number '#gets'
Use of save versions prior to 2 is deprecated
I'm not particularly interested in the technical details, but mostly in how I caused it and how I can prevent it in the future. Here are some notes on the situation:
I'm running R 2.15.1 on a MacBook Pro running Windows XP on a bootcamp partition.
There is something obviously wrong with this workspace file, since it weighs in at only ~80 KB while all my others are usually >10,000 KB.
Over the weekend I was running an external modeling program in R and storing its output in different objects. I ran several iterations of the model over the course of several days, e.g. output_Saturday <- call_model().
There is nothing special about the model output; it's just a list with slots for betas, VC matrices, model specification, etc.
I got that error when I accidentally used load() instead of source() or readRDS().
Also worth noting the following from a document by the R Core Team summarizing changes in versions of R after v3.5.0 (here):
R has new serialization format (version 3) which supports custom serialization of
ALTREP framework objects... Serialized data in format 3 cannot be read by versions of R prior to version 3.5.0.
I encountered this issue when I saved a workspace in v3.6.0, and then shared the file with a colleague that was using v3.4.2. I was able to resolve the issue by adding "version=2" to my save function.
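For example (mydata is a placeholder for whatever object was saved under R >= 3.5.0):
save(mydata, file = "mydata.RData", version = 2)   # readable by R < 3.5.0
saveRDS(mydata, "mydata.rds", version = 2)         # saveRDS() accepts the same argument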
Assuming your file is named "myfile.ext"
If the file you're trying to load is not an R-script, for which you would use
source("myfile.ext")
you might try the readRDS function and assign the result to a variable name:
my.data <- readRDS("myfile.ext")
The magic number comes from UNIX-type systems where the first few bytes of a file held a marker indicating the file type.
This error indicates you are trying to load a non-valid file type into R. For some reason, R no longer recognizes this file as an R workspace file.
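If you're curious, you can inspect those first bytes yourself (reusing the file name from above):
readBin("myfile.ext", what = "raw", n = 5)
# a save()-produced workspace is gzip-compressed by default, so a healthy
# file typically starts with the gzip magic bytes 1f 8b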
Install the readr package, then use library(readr).
It also occurs when you try to load() an rds object instead of using
object <- readRDS("object.rds")
I got the error when the file had been saved with saveRDS() rather than save(). Saving with save() instead, e.g. save(iris, file="data/iris.RData"), fixed the issue for me. I found this info here.
Also note that with save()/load() the object is loaded back in under the same name it was initially saved with (i.e. you can't rename it until it's already loaded into the R environment under the name it had when you saved it).
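A short sketch of that pairing and naming behaviour (using a copy of the built-in iris data set):
# saveRDS()/readRDS() are a pair; the result can be assigned to any name
saveRDS(iris, "iris.rds")
flowers <- readRDS("iris.rds")

# save()/load() are a pair; load() restores the original name
my_iris <- iris
save(my_iris, file = "my_iris.RData")
rm(my_iris)
load("my_iris.RData")   # `my_iris` reappears under its original name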
I had this problem when I saved the .RData file in an older version of R and then tried to open it in a newer one. I solved it by updating my R version to the newest.
If you are working with devtools try to save the files with:
devtools::use_data(x, internal = TRUE)
Then, delete all files saved previously.
From the doc:
internal – If FALSE, saves each object in individual .rda files in the data directory. These are available whenever the package is loaded. If TRUE, stores all objects in a single R/sysdata.rda file. These objects are only available within the package.
This error occurred when I updated my R and RStudio versions and loaded files I had created under my prior version. So I reinstalled my prior R version and everything worked as it should.
