I'm trying to run the oolong package to validate a couple of topic models I've created, using both an STM model and a seededLDA model (this code won't be reproducible):
oolong_test1a <- witi(input_model = model_stm_byt, input_corpus = YS$body)
or
oolong_test1a <- witi(input_model = slda_howard_docs, input_corpus = howard_df$content)
In both cases it successfully creates an oolong test in my global environment. However, when I run either the word intrusion or topic intrusion test, I get this error in both my console and my viewer:
Listening on http://127.0.0.1:7122
Warning: Error in value[[3L]]: Couldn't normalize path in `addResourcePath`, with arguments: `prefix` = 'miniUI-0.1.1.1'; `directoryPath` = 'D:/temp/RtmpAh8J5r/RLIBS_35b54642a1c09/miniUI/www'
[No stack trace available]
I couldn't find any reference to this error anywhere else. I've checked I'm running the most recent version of oolong.
I've also tried to run it on the model/corpus that comes supplied with oolong, so this code is reproducible:
oolong_test <- witi(input_model = abstracts_keyatm, input_corpus = abstracts$text, userid = "Julia")
oolong_test$do_word_intrusion_test()
oolong_test$do_topic_intrusion_test()
This generates the same errors.
There is a new version on GitHub that fixes this issue:
devtools::install_github("chainsawriot/oolong")
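After installing the new version, restart R and recreate the test object before launching the tests; an oolong object made with the old version may still reference the old package code. A minimal sketch using the reproducible example above:
# Restart R after installing, then rebuild the test with the new package
library(oolong)
oolong_test <- witi(input_model = abstracts_keyatm, input_corpus = abstracts$text, userid = "Julia")
oolong_test$do_word_intrusion_test()
oolong_test$do_topic_intrusion_test()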
I am trying to use the Azure ML SDK in RStudio. I used Estimator but got an error stating that the estimator is deprecated and advising me to use ScriptRunConfig; when I used that, it is not recognized as a function and fails to run. See the errors below. Please advise.
I have already loaded library(azuremlsdk), which should include azureml.core so that the ScriptRunConfig function is recognized. Is this a version compatibility issue? If so, which version should I use for ScriptRunConfig, and how do I load a specific R version in Azure ML Compute (the RStudio web interface, not RStudio Desktop)?
First code snippet and its errors
est <- estimator(source_directory = "train-and-deploy-first-model",
                 entry_script = "accidents.R",
                 script_params = list("--data_folder" = ds$path(target_path)),
                 compute_target = compute_target)
cran_packages, github_packages, custom_url_packages, custom_docker_image, image_registry_details, use_gpu, environment_variables, and shm_size parameters will be deprecated. Please create an environment object with them using r_environment() and pass the environment object to the estimator().'enabled' is deprecated. Please use the azureml.core.runconfig.DockerConfiguration object with the 'use_docker' param instead.
'Estimator' is deprecated. Please use 'ScriptRunConfig' from 'azureml.core.script_run_config' with your own defined environment or an Azure ML curated environment.
Second code snippet, trying to fix the above, and its error
config <- ScriptRunConfig(source_directory = ".",
                          script = "accidents.R",
                          compute_target = compute_target)
Error in ScriptRunConfig(source_directory = ".", script = "accidents.R", :
could not find function "ScriptRunConfig"
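In the meantime, the first deprecation warning's own advice can be followed with azuremlsdk's r_environment(). A sketch (untested; "accidents-env" is a made-up environment name):
# Build an environment object as the warning suggests, then pass it to
# estimator() in place of the deprecated package/Docker parameters.
env <- r_environment(name = "accidents-env")
est <- estimator(source_directory = "train-and-deploy-first-model",
                 entry_script = "accidents.R",
                 script_params = list("--data_folder" = ds$path(target_path)),
                 compute_target = compute_target,
                 environment = env)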
I'm trying to run some code in R to get data from the fredr package, but I'm having trouble understanding the error R shows me.
The code I have:
library(fredr)
fredr_set_key("...")
cpi <- fredr::fredr(series_id = "CPIAUCSL",
                    observation_start = as.Date("1960-01-01"),
                    observation_end = as.Date("2005-12-01"))
The error I get:
Error in (function (endpoint, ..., to_frame = TRUE, print_req = FALSE) :
400: Bad Request. The value for variable api_key is not a 32 character alpha-numeric lower-case string. Read https://research.stlouisfed.org/docs/api/api_key.html for more information.
This code runs perfectly on the computer of my professor (who is a Windows user), so I think the problem may be related to my Mac, but I'm really not sure.
macOS 10.15.4
Did you use your API key? You should request one here: https://research.stlouisfed.org/docs/api/api_key.html
Then replace ... with your API key.
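Once you have a key, one way to avoid hard-coding it is to put it in ~/.Renviron; fredr reads the FRED_API_KEY environment variable. A minimal sketch:
# In ~/.Renviron, add one line (no quotes): FRED_API_KEY=your32charlowercasekey
library(fredr)
fredr_set_key(Sys.getenv("FRED_API_KEY"))
cpi <- fredr::fredr(series_id = "CPIAUCSL",
                    observation_start = as.Date("1960-01-01"),
                    observation_end = as.Date("2005-12-01"))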
I am looking into using MXNet LSTM modelling for time-series analysis for a problem I am currently working on.
As a way of understanding how to implement this, I am following the example code given by MXNet at this link: https://mxnet.incubator.apache.org/tutorials/r/MultidimLstm.html
When running this script after downloading the necessary data to my local source, I am able to execute the code fine until I get to the following section to train the model:
## train the network
system.time(model <- mx.model.buckets(symbol = symbol,
                                      train.data = train.data,
                                      eval.data = eval.data,
                                      num.round = 100,
                                      ctx = ctx,
                                      verbose = TRUE,
                                      metric = mx.metric.mse.seq,
                                      initializer = initializer,
                                      optimizer = optimizer,
                                      batch.end.callback = NULL,
                                      epoch.end.callback = epoch.end.callback))
When running this section, the following error occurs once a connection to the API has been made:
Error in mx.nd.internal.as.array(nd) :
[14:22:53] c:\jenkins\workspace\mxnet\mxnet\src\operator\./rnn-inl.h:359:
Check failed: param_.p == 0 (0.2 vs. 0) Dropout is not supported at the moment.
Is there currently a problem internally within the MXNet R package that makes it unable to run this code? I can't imagine they would provide a tutorial example for the package that is not executable.
My other thought is that it is something to do with my local device's execution and connection to the API. I haven't been able to find any information about this being a problem for other users, though.
Any inputs or suggestions would be greatly appreciated thanks.
Looks like you're running an old version of the R package. I think following the instructions on this page to build a recent R package should resolve this issue.
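If rebuilding isn't an option, the check quoted in the error (param_.p == 0, i.e. dropout must be zero) suggests a possible workaround: set dropout = 0 when defining the symbol. A sketch based on the tutorial's rnn.graph() call; every argument other than dropout is assumed from the tutorial and may differ:
# Workaround sketch (untested): disable dropout so the old build's
# "Dropout is not supported" check passes.
symbol <- rnn.graph(num_rnn_layer = 1,
                    num_hidden = 5,
                    input_size = NULL,
                    num_embed = NULL,
                    num_decode = 1,
                    masking = FALSE,
                    loss_output = "linear",
                    dropout = 0,        # the tutorial's 0.2 triggers the error
                    ignore_label = -1,
                    cell_type = "lstm",
                    output_last_state = TRUE,
                    config = "one-to-one")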
I created a model using Apache OpenNLP's command line tool to recognize named entities. The below code created the model using the file sentences4OpenNLP.txt as a training set.
opennlp TokenNameFinderTrainer -type maxent -model C:\Users\Documents\en-ner-org.bin -lang en -data C:\Users\Documents\apache-opennlp-1.6.0\sentences4OpenNLP.txt -encoding UTF-8
I tested the model from the command line by passing it sentences to tag, and it seemed to be working well. However, I am unable to use the model successfully from R. I am using the lines below in an attempt to create an organization-annotating function. Using the same code to load a model downloaded from OpenNLP works fine.
modelNER <- "C:/Users/Documents/en-ner-org.bin"
oa <- openNLP::Maxent_Entity_Annotator(language = "en",
                                       kind = "organization",
                                       probs = TRUE,
                                       model = modelNER)
When the above code is run I get an error saying:
Could not instantiate the opennlp.tools.namefind.TokenNameFinderFactory. The initialization throw an exception.
opennlp.tools.util.ext.ExtensionNotLoadedException: Unable to find implementation for opennlp.tools.util.BaseToolFactory, the class or service opennlp.tools.namefind.TokenNameFinderFactory could not be located!
at opennlp.tools.util.ext.ExtensionLoader.instantiateExtension(ExtensionLoader.java:97)
at opennlp.tools.util.BaseToolFactory.create(BaseToolFactory.java:106)
at opennlp.tools.util.model.BaseModel.initializeFactory(BaseModel.java:254)
Error in .jnew("opennlp.tools.namefind.TokenNameFinderModel", .jcast(.jnew("java.io.FileInputStream", :
java.lang.IllegalArgumentException: opennlp.tools.util.InvalidFormatException: Could not instantiate the opennlp.tools.namefind.TokenNameFinderFactory. The initialization throw an exception.
at opennlp.tools.util.model.BaseModel.loadModel(BaseModel.java:237)
at opennlp.tools.util.model.BaseModel.<init>(BaseModel.java:181)
at opennlp.tools.namefind.TokenNameFinderModel.<init>(TokenNameFinderModel.java:110)
Any advice on how to fix the error would be a big help. Thanks in advance.
Resolved the error. The R function openNLP::Maxent_Entity_Annotator was not compatible with the named entity recognition (NER) model being produced by OpenNLP 1.6.0. Building the NER model using OpenNLP 1.5.3 resulted in openNLP::Maxent_Entity_Annotator running without error.
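A quick way to check the retrained 1.5.3 model from R, assuming the standard NLP/openNLP annotation pipeline (the sample sentence here is made up):
library(NLP)
library(openNLP)
# Load the retrained model exactly as in the question
oa <- openNLP::Maxent_Entity_Annotator(language = "en", kind = "organization",
                                       probs = TRUE,
                                       model = "C:/Users/Documents/en-ner-org.bin")
s <- as.String("The Apache Software Foundation hosts the OpenNLP project.")
# Entity annotators need sentence and word annotations first
a <- NLP::annotate(s, list(Maxent_Sent_Token_Annotator(),
                           Maxent_Word_Token_Annotator()))
orgs <- NLP::annotate(s, oa, a)
s[orgs[orgs$type == "entity"]]  # extract the spans tagged as organizations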
I am running gamboost using caret. The model was generated, but the running process gives the error messages shown below. I am not clear what they mean.
rd <- train(formula0, data = df.clean, method = "gamBoost",
            trControl = train_control, metric = "RMSE")
The model looks like this: [model summary attached as an image, not reproduced here]
The error messages are: [error messages attached as an image, not reproduced here]
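Since formula0, df.clean, and train_control are not shown, here is a self-contained sketch with made-up data that exercises the same call. Note the method string: the question uses "gamBoost", but in current caret releases the tag for the mboost-backed boosted GAM is "gamboost":
library(caret)
library(mboost)  # backend for caret's "gamboost" method
set.seed(1)
# Made-up stand-ins for the question's df.clean, formula0 and train_control
df.clean <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))
train_control <- trainControl(method = "cv", number = 5)
rd <- train(y ~ x1 + x2, data = df.clean, method = "gamboost",
            trControl = train_control, metric = "RMSE")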