Using MATLAB Production Server (MPS) and WebFigure with servlets

I would like to know how to pass a plot from MATLAB so it can be displayed as a WebFigure on a servlet page. Note that I'm using MPS, so I'm not packaging the MATLAB code into Java; I'm just using a client proxy to the MATLAB function.
My example MATLAB function:
function varargout = mymagicplot(in, displayPlot)
    x = magic(in);
    varargout{1} = x;
    if (strcmp(displayPlot, 'Plot'))
        varargout{2} = {plot(x)};
    end
On the servlet side:
interface MatlabMagic {
    public Object[] mymagicplot(int num_outargs, int size, String plotOption) throws IOException, MATLABException;
}
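For context, here is roughly how such a proxy is obtained with the MPS Java client API; the host, port, and archive name below are placeholders, and the leading int of the proxy method is the number of requested outputs (a sketch, not my exact deployment):

import java.net.URL;
import com.mathworks.mps.client.MWClient;
import com.mathworks.mps.client.MWHttpClient;

// Sketch only: URL and archive name are assumptions about the deployment.
MWClient client = new MWHttpClient();
MatlabMagic proxy = client.createProxy(
        new URL("http://localhost:9910/mymagicArchive"), MatlabMagic.class);
Object[] out = proxy.mymagicplot(2, 4, "Plot");  // request 2 outputs for magic(4)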
The question is: how do I code the display of the plot as a WebFigure on the servlet page?

I tried a workaround by splitting my MATLAB code into two functions.
The first function is used by the client proxy.
function m = mymagic(in)
    m = magic(in);
end
The second function is packaged into Java classes by the Library Compiler.
function returnfigure = mygetwebfiguremagicplot(in)
    h = figure;
    set(h, 'Visible', 'off');
    plot(in);
    returnfigure = webfigure(h);
    close(h);
end
This way, I can call the mymagic function's deployable archive (.ctf) on MPS to return the results to the servlet, and then plot them as a WebFigure using the Java classes created from the second MATLAB function.
This is just one possible workable solution that I can think of.
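For the workaround's display step, the embedding in the servlet could look roughly like the sketch below. The WebFigureHtmlGenerator names and the web.xml mapping are assumptions to verify against the MATLAB Builder JA WebFigures documentation:

// Sketch only: assumes the WebFigures servlet from javabuilder.jar is mapped
// to /WebFigures/* in web.xml and that the deployed class returns a WebFigure.
WebFigure fig = (WebFigure) ((Object[]) plotter.mygetwebfiguremagicplot(1, magicResult))[0];
request.getSession().setAttribute("MagicPlot", fig);  // expose it to the WebFigures servlet

// API names from memory; check WebFigureHtmlGenerator's actual signatures.
WebFigureHtmlGenerator gen = new WebFigureHtmlGenerator("/WebFigures", request);
response.getWriter().println(gen.getHtmlEmbedString(fig, "MagicPlot", "session", 420, 315));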

Related

Calling a generic C library from R

I am trying to write some simple R bindings for a C library, tdjson, which many languages can interface with directly.
I compiled the source for the library, got a fully working build (libtdjson.so), and tested it with Python.
Here is a reference implementation in Python using the exact same library:
from ctypes import *
import json

# load the shared library
tdjson_path = "tdlib/lib/libtdjson.so"
tdjson = CDLL(tdjson_path)
_td_execute = tdjson.td_execute
_td_execute.restype = c_char_p
_td_execute.argtypes = [c_char_p]

def td_execute(query):
    query = json.dumps(query).encode('utf-8')
    result = _td_execute(query)
    if result:
        result = json.loads(result.decode('utf-8'))
    return result

# test TDLib's execute method
test_command = {'@type': 'getTextEntities',
                'text': '@telegram /test_command https://telegram.org telegram.me',
                '@extra': ['5', 7.0, 'a']}
td_execute(test_command)
When I try to interface with the library in R, I do not get any return value from the function calls; I only get a list with one item which contains the original call. Does anyone know how to fix that?
This is what I've tried in R:
library(jsonlite)
dyn.load("tdlib/lib/libtdjson.so", type = "External")

td_execute <- function(query) {
  query <- jsonlite::toJSON(query, auto_unbox = TRUE)
  result <- .C("td_execute", charToRaw(query))
  return(result)
}

test_command <- list("@type" = "getTextEntities",
                     "text" = "@telegram /test_command https://telegram.org telegram.me",
                     "@extra" = c("5", 7.0, "a"))
t <- td_execute(test_command)
rawToChar(t[[1]])
# t only contains the original JSON string
The only return values from the lists are basically an echo of the function call parameters.
It is not possible in this case: .C and .Call can only invoke functions compiled against R's calling conventions, so a generic shared library needs a thin wrapper compiled with the R-specific header files.
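By way of illustration, the usual route is a thin wrapper compiled against R's headers and called via .Call; here is a minimal sketch (the wrapper file and the R_td_execute name are mine):

/* wrapper.c -- build with: R CMD SHLIB wrapper.c -Ltdlib/lib -ltdjson */
#include <R.h>
#include <Rinternals.h>

extern const char *td_execute(const char *request);  /* from libtdjson */

SEXP R_td_execute(SEXP query)
{
    const char *res = td_execute(CHAR(STRING_ELT(query, 0)));
    /* td_execute may return NULL; map that to R's NULL */
    return res ? Rf_mkString(res) : R_NilValue;
}

On the R side, after dyn.load("wrapper.so"), the call becomes .Call("R_td_execute", as.character(jsonlite::toJSON(test_command, auto_unbox = TRUE))), which returns the JSON reply as a character string instead of echoing the arguments.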

Is it possible to change the value of an R6 function? (Good OOP programming style?)

I'm coming from a C++ background and trying to make use of it for OOP programming in R with the R6 package.
Consider the following typical situation when writing large OOP code: you have a class with several (possibly many) functions, each of which may be quite complex and many lines long:
# file CTest.R
cTest <- R6Class(
  "CTest",
  public = list(
    z = 10,
    fDo1 = function() {
      # very long and complex code goes here
      self$z <- self$z * 2; self$z
    },
    fDo2 = function() {
      # another very long and complex code goes here
      print(self)
    }
  )
) # "CTest"
Naturally, you don't want to put ALL your long and varied functions into the same file (CTest.R) - it would become messy and unmanageable.
In C++, the normal way to organize such code is to declare your functions in a .h file and then define each complex function in its own .cpp file. This makes collaborative code writing possible, including efficient source control.
So I tried to do something similar in R: first declare the function as in the code above, and then assign the "actual long and complex" code to it later (which I would eventually put in a separate file, CTest-Do1.R):
cTest$fDo1 <- function() {
  self$z <- self$z * 100000; self$z
}
Now I test if it works:
> tt <- cTest$new(); tt; tt$fDo1(); tt
<CTest>
  Public:
    clone: function (deep = FALSE)
    fDo1: function ()
    fDo2: function ()
    z: 10
[1] 20
<CTest>
  Public:
    clone: function (deep = FALSE)
    fDo1: function ()
    fDo2: function ()
    z: 20
No, it does not: as the output above shows, the function has not been changed - z went from 10 to 20, so the original *2 body ran, not the newly assigned one.
Any advice?
Thanks to Grothendieck's comment above, there's a reasonable workaround to make it work.
Instead of this:
# CTest-Do1_doesnotwork.R
cTest$fDo1 <- function() {
  ...
}
write this:
# CTest-Do1_works.R
cTest$set(
  overwrite = TRUE, "public", "fDo1",
  function() {
    ...
  }
)
This code can now live in a separate file, as originally desired.
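For instance, the pieces can then be sourced in order; note that $set() modifies the generator, so only instances created afterwards pick up the new method:

source("CTest.R")            # defines the cTest generator
source("CTest-Do1_works.R")  # overrides fDo1 via cTest$set()

tt <- cTest$new()            # created after set(), so it gets the new fDo1
tt$fDo1()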
I still wonder, though: is the way described above actually the common (best) practice for writing large OOP code in the R community? It looks a bit strange to me.
If not, what is (beyond just using source()), so as to enable collaborative coding and source control for the separate parts (functions) of a class?
I came here also searching for R6 best practice. One way that I've seen (here) is to define the functions elsewhere as normal R functions and pass in self, private, etc. as required:
cTest <- R6::R6Class("CTest",
  public = list(
    fDo1 = function()
      cTestfDo1(self),
    fDo2 = function(x)
      cTestfDo2(self, private, x)
  ))
and elsewhere have
cTestfDo1 <- function(self) {
  self$z <- self$z * 2; self$z
}
and somewhere else
cTestfDo2 <- function(self, private, x) {
  self$z * private$q + x
}
etc.
I don't know if it's best practice or efficient, but the class definition looks neat this way, and if the cTestfDo1 functions are not exported, it's relatively tidy in the namespace too.
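For what it's worth, a self-contained sketch of this pattern (with a private field q added, which the snippet above assumes, so that fDo2 actually runs):

library(R6)

cTestfDo1 <- function(self) {
  self$z <- self$z * 2
  self$z
}

cTestfDo2 <- function(self, private, x) {
  self$z * private$q + x
}

cTest <- R6Class("CTest",
  public = list(
    z = 10,
    fDo1 = function() cTestfDo1(self),
    fDo2 = function(x) cTestfDo2(self, private, x)
  ),
  private = list(q = 2)
)

tt <- cTest$new()
tt$fDo1()   # [1] 20
tt$fDo2(1)  # [1] 41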

Function keeps repeating in Octave

My code is written in a file named "plot.m".
If I put the following code in plot.m, then when I call plot("20%"), the Octave GUI keeps opening new windows with new figures indefinitely.
function X = plot(folderName)
  X = 0;
  data = load([folderName, "\\summary.txt"]);
  NUM_SURVIVED = data(1);
  NUM_DATA = size(data)(1) - 1;
  FINAL_WEALTH = data(2 : NUM_DATA);
  % plot FINAL_WEALTH
  figure;
  plot(1:numel(FINAL_WEALTH), FINAL_WEALTH, '-b', 'LineWidth', 2);
  xlabel('x');
  ylabel('FINAL_WEALTH');
end
However, if I put the following code in plot.m and run it, the program works as intended and plots the data from "summary.txt".
data = load("20%\\summary.txt");
NUM_SURVIVED = data(1);
NUM_DATA = size(data)(1) - 1;
FINAL_WEALTH = data(2 : NUM_DATA);
% plot FINAL_WEALTH
figure;
plot(1:numel(FINAL_WEALTH), FINAL_WEALTH, '-b', 'LineWidth', 2);
xlabel('x');
ylabel('FINAL_WEALTH');
Any idea what I am doing wrong in the first section of code? I would like to write it as a function so that I can call it multiple times for different folder names.
When you call plot from the function plot, you get endless recursion. Rename your function and its file.
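For example, with the function and file renamed (plot_wealth is just my choice of name), the inner call resolves to the built-in plot and the recursion disappears:

% plot_wealth.m  (rename of plot.m)
function X = plot_wealth(folderName)
  X = 0;
  data = load([folderName, "\\summary.txt"]);
  NUM_DATA = size(data)(1) - 1;
  FINAL_WEALTH = data(2 : NUM_DATA);
  figure;
  plot(1:numel(FINAL_WEALTH), FINAL_WEALTH, '-b', 'LineWidth', 2);  % now the built-in plot
  xlabel('x');
  ylabel('FINAL_WEALTH');
end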
Just adding to Michael's answer: if you really wanted to name your function "plot" and override the built-in plot function, but still be able to call the built-in version inside it, this is actually possible using the builtin function. Your code would then look like this:
function X = plot (folderName)
  % same code as before
  figure;
  builtin ("plot", 1:numel(FINAL_WEALTH), FINAL_WEALTH, '-b', 'LineWidth', 2);
  xlabel ('x');
  ylabel ('FINAL_WEALTH');
end
Obviously, whether it's a good idea to overload such a core function in the first place is an entirely different discussion topic. (Hint: don't!)

How to log using futile logger from within a parallel method in R?

I am using futile.logger in R for logging.
I have a parallel algorithm implemented using snowfall in R. Each core of the parallel process logs intermediate output to the logger, but this output does not show up in the log.
Can we log using futile.logger from within a parallel job using snowfall?
Adding how it was done:
My specific case was a bit different. I am calling a C function from R through a shared object that I created. The function is an iterative algorithm, and I need its output logged every few iterations. I was interested in logging from the C function to futile.logger. Why futile.logger? Because this is part of a web application, and it makes sense to have all output for a user session in a consistent format.
This is the general approach I followed based on the accepted answer.
# init script
# iteration-logger namespace global variable
assign("ITER_LOGGER_NAMESPACE", "iter.logger", envir = .GlobalEnv)

loginit <- function(logfile) {
  require('futile.logger')
  flog.layout(layout.simple, name = ITER_LOGGER_NAMESPACE)
  flog.threshold(TRACE, name = ITER_LOGGER_NAMESPACE)
  flog.appender(appender.file(logfile), name = ITER_LOGGER_NAMESPACE)
  NULL
}
parallel_funct_call_in_R <- function(required args) {
  require('snowfall')
  sfSetMaxCPUs()
  sfInit(parallel = TRUE, cpus = NUM_CPU)
  sfLibrary(required libs)
  sfExport(required vars, including the logger namespace variable ITER_LOGGER_NAMESPACE)
  iterLoggers <- sprintf(file.path(myloggingdir, 'iterativeLogger_%02d.log', fsep = .Platform$file.sep), seq_len(NUM_CPU))
  sfClusterApply(iterLoggers, loginit)
  sfSource(required files)
  estimates <- sfLapply(list_to_apply_over, func_calling_C_from_R, required args)
  sfStop()
  return(estimates)
}
iterTrackNumFromC <- function(numvec) {
  # convert numvec to JSON and log using flog.info;
  # the logger namespace has already been registered on the individual cores
  flog.info("%s", toJSON(numvec), name = ITER_LOGGER_NAMESPACE)
}

func_calling_C_from_R <- function(args) {
  # load the shared object using dyn.load
  estimates <- .C("C_func", args, list(iterTrackNumFromC))  # can use .Call also, I guess
  return(estimates)
}
Now the C function:
void C_func(other args, char **R_loggerfunc) {  /* R_loggerfunc is passed iterTrackNumFromC */
    // do stuff
    // call the function that logs numeric values to futile.logger
    logNumericVecInR(*R_loggerfunc, NumVec, len_NumVec);
}

void logNumericVecInR(char *Rfunc_logger, double *NumVec, int len_NumVec) {
    long nargs = 1;
    void *arguments[1];
    arguments[0] = (double *) NumVec;
    char *modes[1];
    modes[0] = "double";
    long lengths[1];
    lengths[0] = len_NumVec;
    char *results[1];
    /* void call_R(char *func, long nargs, void **arguments, char **modes,
                   long *lengths, char **names, long nres, char **results) */
    call_R(Rfunc_logger, nargs, arguments, modes, lengths, (char **) 0, (long) 1, results);
}
Hope this helps. If there is an easier way for R and C to share a common logger, please let me know.
A simple way to use the futile.logger package from a snowfall program is to use the sfInit slaveOutfile='' option so that worker output isn't redirected.
library(snowfall)
sfInit(parallel = TRUE, cpus = 3, slaveOutfile = '')
sfLibrary(futile.logger)

work <- function(i) {
  flog.info('Got task %d', i)
  i
}

sfLapply(1:10, work)
sfStop()
This is the snowfall interface to the snow makeCluster outfile='' option. It may not work properly with GUI interfaces such as Rgui, depending on how they handle process output, but it does work on Windows using Rterm.exe.
I think that it's better to specify different log files for each worker. Here's an example of that:
library(snowfall)
nworkers <- 3
sfInit(parallel = TRUE, cpus = nworkers)

loginit <- function(logfile) {
  library(futile.logger)
  flog.appender(appender.file(logfile))
  NULL
}
sfClusterApply(sprintf('out_%02d.log', seq_len(nworkers)), loginit)

work <- function(i) {
  flog.info('Got task %d', i)
  i
}
sfLapply(1:10, work)
sfStop()
This avoids all of the extra output coming from snow and puts each worker's log messages into a separate file which can be less confusing.

rJava: using java/util/Vector with a certain template class

I'm currently writing an R script which uses a Java .jar that makes use of the java.util.Vector class; in this case, a (non-native) method returns a Vector of a custom class. In the Java source code:
public static Vector<ClassName> methodname(String param)
I found nothing in the rJava documentation on how to handle a generic (template) class like Vector, or what to write when using .jcall or any other method.
I'm currently trying to do something like this:
v <- .jnew("java/util/Vector")
b <- .jcall(v, returnSig = "Ljava/util/Vector", method = "methodname",param)
but R obviously throws an exception:
method methodname with signature (Ljava/lang/String;)Ljava/util/Vector not found
How do I work the template class into this command? Or for that matter, how do I create a vector of a certain class in the first place? Is this possible?
rJava does not know about Java generics; there is no syntax that will create a Vector of a given type. You can only create Vectors of Objects.
Why are you sticking with the old .jcall API when you can use the J system, which lets you use Java objects much more nicely:
> v <- new( J("java.util.Vector") )
> v$add( 1:10 )
[1] TRUE
> v$size()
[1] 1
# code completion
> v$
v$add( v$getClass() v$removeElement(
v$addAll( v$hashCode() v$removeElementAt(
v$addElement( v$indexOf( v$retainAll(
v$capacity() v$insertElementAt( v$set(
v$clear() v$isEmpty() v$setElementAt(
v$clone() v$iterator() v$setSize(
v$contains( v$lastElement() v$size()
v$containsAll( v$lastIndexOf( v$subList(
v$copyInto( v$listIterator( v$toArray(
v$elementAt( v$listIterator() v$toArray()
v$elements() v$notify() v$toString()
v$ensureCapacity( v$notifyAll() v$trimToSize()
v$equals( v$remove( v$wait(
v$firstElement() v$removeAll( v$wait()
v$get( v$removeAllElements()
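Building on that: since generics are erased at run time, the returned Vector holds plain Objects, and you cast the elements back after retrieval. A sketch with hypothetical class and method names; note the trailing ';' that JNI return signatures require (its absence is what triggered the "not found" error above):

library(rJava)
.jinit(classpath = "myjar.jar")   # jar path is a placeholder

# static call; the return signature must end with ';'
v <- .jcall("com/example/MyClass", "Ljava/util/Vector;", "methodname", "some param")

v$size()
elem <- .jcast(v$get(0L), "com/example/ClassName")  # cast the plain Object back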
