Import R forecast library JAR files into Java

I am trying to use the R package 'forecast' from NetBeans. I have managed to set up the JRI connection, and I have also imported the javaGD library and experimented with it with some success. The problem with the forecast package is that I cannot find corresponding JAR files to include as a library in my project. I load it normally with re.eval("library(forecast)"), but when I call one of the library's functions, a null value is returned. Although I am fairly sure that the code is correct, I am posting it just in case.
Thanks in advance.
Rengine re = new Rengine(Rargs, false, null);
System.out.println("rengine created, waiting for R!");
if (!re.waitForR()) {
    System.out.println("cannot load R");
    return;
}
re.eval("library(forecast)");
re.eval("library(tseries)");
re.eval("myData <- read.csv('C:/.../I-35E-NB_1.csv', header=F, dec='.', sep=',')");
System.out.println(re.eval("myData"));
re.eval("timeSeries <- ts(myData,start=1,frequency=24)");
System.out.println("this is time series object : " + re.eval("timeSeries"));
re.eval("fitModel <- auto.arima(timeSeries)");
REXP fc = re.eval("forecast(fitModel, n=20)");
System.out.println("this is the forecast output values: " + fc);

You did not convert the values from R into Java. You should first create a numeric vector from the auto.arima/forecast output in R, and then use the method .asDoubleArray() to read it into Java.
I gave a complete example in "How I can load add-on R libraries into JRI and execute from Java?", which shows exactly how to use the auto.arima function in Java using JRI.
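For instance, the R side of that approach could look like the sketch below (object names are illustrative, matching the question's code); the Java side would then retrieve the vector with re.eval("fcMean").asDoubleArray().
# R-side sketch: flatten the forecast into a plain numeric vector
fitModel <- auto.arima(timeSeries)
fc <- forecast(fitModel, h = 20)   # 'h' is the forecast horizon
fcMean <- as.numeric(fc$mean)      # point forecasts, readable from Java as a double[]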

Related

How to export sf object to GDB using RPyGeo in R (Windows)?

I have a bunch of sf objects I'd like to export to GDB from R. I'm running R 4.0.2 on Windows 10. In this case the sf objects are all vector point data. The main reasons to export to GDB are to keep longer field names (the shapefile truncation is very annoying), and because GDBs are more desirable storage locations for our workflows.
Yes, I know about the arcgisbinding package. I've got it to work in a test script, but it's pretty unstable, often crashing and requiring a restart of R. This is a problem because the sf objects I'd like to export come at the end of an already long Rmd that reads in, formats, and cleans the data, so it's not a simple matter of re-running the script until arc.write doesn't break. I could break up the script, but then I'd still have to read in a bunch of shapefiles. One option I haven't yet explored is using reticulate to call a Python script instead of trying to do everything in R, but we're trying to do our analysis all in one place, if possible.
I'm pretty sure I've managed to set up RPyGeo appropriately, first setting my Python path using the reticulate package. I'm doing it this way because IT restrictions mean I can't edit PATH variables on my machine.
#package calls
library(sf)
library(spData)
library(reticulate)
#set python version in reticulate
py_path <- "C:/Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/python.exe"
reticulate::use_python(python = py_path, required = TRUE)
#call RPyGeo
library(RPyGeo) # for potential point export
#output gdb
out.gdb <- "C:/LOCAL_PROJECTS/Output/Output.gdb"
#RPyGeo Parameters
# Note that, in order to use RPyGeo you need a working ArcMap or ArcGIS Pro installation on your computer.
# python path - note that this will change depending on which version of Arc one is using
# py_path <- "C:/Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/python.exe"
arcpy <- rpygeo_build_env(workspace = out.gdb,
overwrite = TRUE,
extensions = c("Spatial","DataInteroperability"),
path = py_path)
I've tried a bunch of different tools to export an sf object, here using dummy data that is also used in the RPyGeo vignette:
data(nz, package = "spData")
arcpy$Copy_management(in_data = nz,out_data = "nz_test")
arcpy$Copy_management(in_data = nz,out_data = file.path(out.gdb,"nz"))
arcpy$FeatureClassToGeodatabase_conversion(Input_Features = nz,Output_Geodatabase = out.gdb)
arcpy$FeatureClassToFeatureClass_conversion(in_features = nz,out_path = out.gdb,out_name = "nz")
arcpy$QuickExport_interop(Input = nz,Output = file.path(out.gdb,"nz"))
arcpy$CopyFeatures_management(in_features = nz,out_feature_class = file.path(out.gdb,"nz"))
arcpy$CopyFeatures_management(in_features = nz,out_feature_class = "nz")
Each time I get an error, for example:
Error in py_call_impl(callable, dots$args, dots$keywords) :
RuntimeError: Object: Error in executing tool
Detailed traceback:
File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 3232, in CopyFeatures
raise e
File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 3229, in CopyFeatures
retval = convertArcObjectToPythonObject(gp.CopyFeatures_management(*gp_fixargs((in_features, out_feature_class, config_keyword, spatial_grid_1, spatial_grid_2, spatial_grid_3), True)))
File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\geoprocessing\_base.py", line 511, in <lambda>
return lambda *args: val(*gp_fixargs(args, True))
I'm not an expert in ArcPy by any means. Nor am I an expert in tracing errors inside packages. Am I making a simple syntax mistake? Is there something else that I'm missing? Any help would be much appreciated!

How to convert Tensorflow Object Detection API model to TFLite?

I am trying to convert a TensorFlow Object Detection model (ssd-mobilenet-v2-fpnlite, from the TensorFlow 2 Detection Model Zoo) to TFLite. First, I train the model using model_main_tf2.py and then use export_tflite_graph_tf2.py to export a SavedModel (.pb). However, when it comes to converting the .pb file to .tflite, it throws this error:
OSError: SavedModel file does not exist at: /content/gdrive/My Drive/models/research/object_detection/fine_tuned_model/saved_model/saved_model.pb/{saved_model.pbtxt|saved_model.pb}
To convert the .pb file I used:
import os
import tensorflow as tf
SAVED_MODEL_PATH = os.path.join(os.getcwd(), 'object_detection', 'fine_tuned_model', 'saved_model', 'saved_model.pb')
# SAVED_MODEL_PATH: '/content/gdrive/My Drive/models/research/object_detection/exported_model/saved_model/saved_model.pb'
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_PATH)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("detect.tflite", "wb").write(tflite_model)
or "tflite_convert" from command line, but with the same error. I also tried to run it with the latest tf-nightly version as it suggests here, but the outcome is the same. I tried to pass the path with various ways, it seems like the .pd is not well written (not the right file). Is there a way to manage to convert the model to tflite so as to implement it to android? Thank you!
Your saved_model path should be "/content/gdrive/My Drive/models/research/object_detection/fine_tuned_model/saved_model/". It is the folder, not a file inside that folder.
For a quick test, try typing in a terminal:
tflite_convert \
--saved_model_dir="path to saved_folder" \
--output_file="path to tflite file u want to save"
I don't have enough reputation to just comment, but the problem here seems to be your SAVED_MODEL_PATH.
You could try hardcoding the path and removing the saved_model.pb file name from it. I don't remember exactly what the trick is here, but it is definitely due to the path.

R h2o load a saved model from disk in MOJO or POJO format

I'm catching up on h2o's MOJO and POJO model format. I'm able to save a model in MOJO/POJO with
h2o.download_mojo(model, path = "/media/somewhere/tmp") # ok
h2o.download_pojo(model, path = "/media/somewhere/tmp") # ok
which writes a file with a name like mymodel.zip or mymodel.java to that directory.
However, it's not clear to me how to read it back into the server in R. I tried,
saved_model2 <- h2o.loadModel("/media/somewhere/tmp/mymodel.java") # not work
saved_model3 <- h2o.loadModel("/media/somewhere/tmp/mymodel.zip") # not work
but got an error message like this:
ERROR: Unexpected HTTP Status code: 400 Bad Request (url = http://localhost:54321/99/Models.bin/)
java.lang.IllegalArgumentException
[1] "java.lang.IllegalArgumentException: Missing magic number 0x1CED at stream start"
....
Error in .h2o.doSafeREST(h2oRestApiVersion = h2oRestApiVersion, urlSuffix = page, :
ERROR MESSAGE:
Missing magic number 0x1CED at stream start
If you are looking to make predictions on an H2O model in R, then you have three options (which method you choose depends on your use-case):
You can use a binary model instead of a MOJO (or POJO). For this method, you export the model to disk using h2o.saveModel(), load it back into the H2O cluster using h2o.loadModel(), and make predictions using predict(model, test). This method requires having an H2O cluster running.
If you'd still prefer to export a model to MOJO (or POJO) format, you can use the h2o.mojo_predict_df() or h2o.mojo_predict_csv() function in R to generate predictions on a test set (from an R data.frame or a CSV file); see the sketch after this list.
As an alternative to #2, if your data is in JSON format, you can use h2o.predict_json(), but it will only score one row at a time.
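A rough sketch of the first two options in R (paths, file names, and the test frame here are placeholders, so adjust them to your setup):
# Option 1: binary model round trip (needs a running H2O cluster)
model_path <- h2o.saveModel(model, path = "/media/somewhere/tmp")
saved_model <- h2o.loadModel(model_path)
preds <- h2o.predict(saved_model, test)
# Option 2: score a CSV file against a MOJO, without loading the model into the cluster
preds_df <- h2o.mojo_predict_csv(input_csv_path = "test.csv",
                                 mojo_zip_path = "/media/somewhere/tmp/mymodel.zip",
                                 genmodel_jar_path = "/media/somewhere/tmp/h2o-genmodel.jar")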
h2o.loadModel is meant to be used with h2o.saveModel. If you want to compile and run a MOJO you need to do the following:
First, let's say you created a MOJO from a GBM:
library(h2o)
h2o.init(nthreads=-1)
path = "http://h2o-public-test-data.s3.amazonaws.com/smalldata/prostate/prostate.csv"
h2o_df = h2o.importFile(path)
h2o_df$RACE = as.factor(h2o_df$RACE)
model = h2o.gbm(y="CAPSULE",
x=c("AGE", "RACE", "PSA", "GLEASON"),
training_frame=h2o_df,
distribution="bernoulli",
ntrees=100,
max_depth=4,
learn_rate=0.1)
and then downloaded the MOJO and the resulting h2o-genmodel.jar file to a new experiment folder. Note that the h2o-genmodel.jar file is a library that supports scoring and contains the required readers and interpreters. This file is required when MOJO models are deployed to production.
modelfile <- h2o.download_mojo(model, path = "~/experiment/", get_genmodel_jar = TRUE)
print(paste("Model saved to", modelfile))
[1] "Model saved to /Users/user/GBM_model_R_1475248925871_74.zip"
Then you would open a new terminal window and change into the experiment directory where you have the MOJO .zip file and the h2o-genmodel.jar.
$ cd experiment
Then you would create your main program in the experiment folder by creating a new file called main.java (for example, using "vim main.java"). Include the following contents. Note that this file is referencing the GBM model created above using R.
import java.io.*;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.prediction.*;
import hex.genmodel.MojoModel;
public class main {
    public static void main(String[] args) throws Exception {
        EasyPredictModelWrapper model = new EasyPredictModelWrapper(MojoModel.load("GBM_model_R_1475248925871_74.zip"));

        RowData row = new RowData();
        row.put("AGE", "68");
        row.put("RACE", "2");
        row.put("DCAPS", "2");
        row.put("VOL", "0");
        row.put("GLEASON", "6");

        BinomialModelPrediction p = model.predictBinomial(row);
        System.out.println("Has penetrated the prostatic capsule (1=yes; 0=no): " + p.label);
        System.out.print("Class probabilities: ");
        for (int i = 0; i < p.classProbabilities.length; i++) {
            if (i > 0) {
                System.out.print(",");
            }
            System.out.print(p.classProbabilities[i]);
        }
        System.out.println("");
    }
}
Then compile and run in the second terminal window to display the predicted probabilities:
$ javac -cp h2o-genmodel.jar -J-Xms2g -J-XX:MaxPermSize=128m main.java
$ java -cp .:h2o-genmodel.jar main
Newer versions of H2O have the ability to import MOJOs via the python API:
# re-import saved MOJO
imported_model = h2o.import_mojo(path)
new_observations = h2o.import_file(path='new_observations.csv')
predictions = imported_model.predict(new_observations)
Caution: MOJO cannot be re-imported into python in older H2O versions, which lack the h2o.import_mojo() function.
So h2o.save_model() seems to have lost its role: we can just use my_model.save_mojo() (note that it is a method of the model object, not an h2o module function), since these files can be used not only for Java app deployment but also in Python (in fact, a Python-Java bridge is still used for that internally).
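The same convenience exists on the R side in recent h2o releases as well; the sketch below assumes a version that provides h2o.import_mojo(), so check it against your installation:
# assumes a recent h2o R package with h2o.import_mojo()
imported_model <- h2o.import_mojo("/media/somewhere/tmp/mymodel.zip")
new_observations <- h2o.importFile("new_observations.csv")
preds <- h2o.predict(imported_model, new_observations)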

Load variable once in DeployR

I have a model trained and stored in a file called "rpartModel.RData". To use this model in my R script on DeployR, I need to load the model every time the script is called.
Is there any way to load the file only once and have the variable available to the R scripts?
My code:
library(caret)
load("rpartModel.RData") #no way to run just once and be used as global?
predict(fitRPart,kyphosis[10,])
Found it.
Using the RBroker framework I can specify a file to be preloaded, as in this Java example: https://github.com/deployr/java-example-fraud-score/blob/master/src/main/java/com/revo/deployr/rbroker/example/service/FraudService.java
PoolPreloadOptions preloadOptions = new PoolPreloadOptions();
preloadOptions.filename = "rpartModel.rData";

How to catch R background code in rmr map reduce in Rhadoop

I am new to RHadoop. I am able to run MapReduce functions of the rmr package with Hadoop. Basically, in the background R runs this MapReduce code in Java; that is, R converts the R MapReduce code into Java. So can I get the background Java code when running MapReduce?
Can anyone help me?
In RHadoop, R is not converting the R MapReduce code to Java. RHadoop provides a MapReduce interface; the mapper and reducer can be written in R code and then called from R.
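For illustration, a minimal rmr2 job looks roughly like this (the classic squares example from the rmr2 tutorial); both the map function and the call that launches the job are plain R:
library(rmr2)
small.ints <- to.dfs(1:1000)                    # push the input to HDFS
squares <- mapreduce(input = small.ints,
                     map = function(k, v) keyval(v, v^2))
head(as.data.frame(from.dfs(squares)))          # pull the results back into R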
The RHadoop packages submit the R code to the Hadoop cluster using Hadoop streaming. Hadoop streaming is a utility that comes with the Hadoop distribution; it allows you to create and run MapReduce jobs with any executable or script as the mapper and/or the reducer.
You can see how this works by going through the RHadoop package code on GitHub.
The RHadoop package submits the Hadoop streaming job using the system() command in R.
You can get an idea of this from the streaming.R script in the rmr package; the relevant code is given below.
final.command =
  paste(
    hadoop.command,
    stream.mapred.io,
    if (is.null(backend.parameters)) ""
    else do.call(paste.options, backend.parameters),
    input,
    output,
    mapper,
    combiner,
    reducer,
    image.cmd.line,
    m.fl,
    r.fl,
    c.fl,
    input.format.opt,
    output.format.opt,
    "2>&1")
if (verbose) {
  retval = system(final.command)
  if (retval != 0) stop("hadoop streaming failed with error code ", retval, "\n")
} else {
  console.output = tryCatch(system(final.command, intern = TRUE),
                            warning = function(e) stop(e))
  0
}}
