How do I make a `Pipe` or `TTY` with a custom callback in Julia?

I'd like to fancy up my embedding of Julia in a MATLAB mex function by hooking up Julia's STDIN, STDOUT, and STDERR to the MATLAB terminal. The documentation for redirect_std[in|out|err] says that the stream that I pass in as the argument needs to be a TTY or a Pipe (or a TcpSocket, which wouldn't seem to apply).
I know how I will define the right callbacks for each stream (basically, wrappers around calls to MATLAB's input and fprintf), but I'm not sure how to construct the required stream.

Pipe was renamed PipeEndpoint in https://github.com/JuliaLang/julia/pull/12739, but the corresponding documentation was not updated and PipeEndpoint is now considered internal. Even so, creating the pipe up front is still doable:
pipe = Pipe()
Base.link_pipe(pipe)
redirect_stdout(pipe.in)
@async while !eof(pipe)
    data = readavailable(pipe)
    # Pass data to whatever function handles display here
end
Furthermore, the no-argument version of these functions already creates a pipe object, so the recommended way to do this would be:
(rd, wr) = redirect_stdout()
@async while !eof(rd)
    data = readavailable(rd)
    # Pass data to whatever function handles display here
end
Nevertheless, all of this is less clear than it could be, so I have created a pull request to clean up this API: https://github.com/JuliaLang/julia/pull/18253. Once that pull request is merged, the link_pipe call will become unnecessary and pipe can be passed directly into redirect_stdout. Further, the return value from the no-argument version will become a regular Pipe.
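For completeness, here is a minimal sketch of what the capture loop looks like once a Pipe can be passed straight to redirect_stdout, i.e. assuming a Julia version in which the pull request above has been merged, so the link_pipe call is no longer needed:
# Sketch only: assumes redirect_stdout accepts a Pipe directly,
# as described for the pull request above.
pipe = Pipe()
redirect_stdout(pipe)          # subsequent stdout writes land in `pipe`
@async while !eof(pipe)
    data = readavailable(pipe)
    # Pass data to whatever function handles display here,
    # e.g. a wrapper around MATLAB's fprintf.
end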

Related

Redirect standard output produced by a call to a C function from Julia using ccall

I am making a Julia wrapper for a C/C++ library. The C/C++ functions that I am wrapping write to standard output. Is there a way to redirect those messages from the Julia side without commenting/removing the write statements from the C/C++ code?
You can use redirect_stdout for this.
oldstd = stdout
redirect_stdout(somewhere_else)
ccall(:printf, Cint, (Cstring,), "Hello World!")
Base.Libc.flush_cstdio() # flushing C stdio may be necessary to keep the output order correct
redirect_stdout(oldstd) # recover original stdout
You may want to use the redirect_stdout(f::Function, stream) method instead. Here, f should be a function that takes no parameters (e.g. () -> do_something(...)). This method automatically restores the original stdout afterwards. Using do syntax:
redirect_stdout(somewhere) do
    ccall(:printf, Cint, (Cstring,), "Hello World!")
    Base.Libc.flush_cstdio() # might be needed
end
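If the goal is to capture the C library's output into a Julia string rather than merely divert it, one hedged variant is the following sketch, which builds on the pattern above and uses the no-argument form of redirect_stdout that returns the two ends of a fresh pipe:
# Sketch: capture C-level stdout into a Julia string.
old_stdout = stdout
rd, wr = redirect_stdout()                 # redirect stdout to a fresh pipe
ccall(:printf, Cint, (Cstring,), "Hello World!\n")
Base.Libc.flush_cstdio()                   # flush C stdio before restoring
redirect_stdout(old_stdout)                # restore the original stdout
close(wr)                                  # close the write end so the read hits EOF
captured = read(rd, String)                # everything the C code printed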

Multiple variables in return object of function in R. Want to run it for multiple argument cases

How do I retrieve outputs from objects in an array as described in the background?
I have a function in R that returns multiple variables. For example, if my function is called function_ABC, then:
a <- function_ABC(input_var)
gives a such that a$var1, a$var2, and a$var3 exist.
I have multiple cases to run, so I have put them in an array:
input_var <- c(1, 2, ...15)
For storing the outputs, I declared var such that:
var <- c(v1, v2, v3, .... v15)
Then I run:
assign(v1[i],function(input_var(i)))
However, after that I am unable to access these variables as v1[1]$var1. I can access them as: v1$var1, or v3$var1, etc. But this means I need to write 15*3 commands to retrieve my output.
Is there an easier way to do this?
Push your whole input set into an array Arr[ ].
Open a multi-threaded executor E of a certain size N.
Using a for loop on the input array Arr[], submit your function calls as a Callable job to the executor E. While submitting each job, hold the reference to the FutureTask in another Array FTArr[ ].
When all the FutureTask jobs are executed, you may retrieve the output for each of them by running another for loop on FTArr[ ].
Note:
• Make sure to add a synchronized block in your func_ABC wherever you access shared resources, to avoid deadlocks.
• Refer to the link below if you want to know more about using a count-down latch. A count-down latch helps you find out exactly when all the child threads have finished execution.
https://www.geeksforgeeks.org/countdownlatch-in-java/

How to make REST POST calls in R

I have been using jsonlite to make REST GET calls. The number of parameters has increased, and I am wondering how to make a REST POST call using R.
Per your request ...
library(jsonlite)
BLUF
Instead of
fromJSON(myurl, ...)
you need to call httr::POST directly:
txt <- httr::content(httr::POST(myurl, ...), as = "text")
fromJSON(txt)
Explanation
The basic mechanism for using jsonlite::fromJSON is to pass it a string. In the special case that you pass a URL (regex: ^https?://), it does you the courtesy of calling httr::GET and taking its output as your intended input. You can see this intent by typing jsonlite::fromJSON to view its source and finding the line with if (grepl("^https?://", .... If you then look for the function download_raw, you won't find it right away, since it is an unexported function; you can access it as jsonlite:::download_raw (note the third colon).
Looking at that function's source, you'll see that it makes direct calls to httr::GET. You can mimic how download_raw calls httr::GET, modifying the arguments as needed. (It might be informative to compare help(httr::GET) and help(httr::POST) and look for the differences between them. Spoiler: look at the body argument, which can be a list of keys/values. The examples are helpful.)

Function signature not found despite showing with methods(...)

I am new to Julia, so this might be trivial.
I have a function definition within a module that looks like (using URIParser):
function add!(graph::Graph,
              subject::URI,
              predicate::URI,
              object::URI)
    ...
end
Outside of the module, I call:
add!(g, URIParser.URI("http://test.org/1"), URIParser.URI("http://test.org/2"), URIParser.URI("http://test.org/1"))
Which gives me this error:
ERROR: no method add!(Graph,URI,URI,URI)
in include at boot.jl:238
in include_from_node1 at loading.jl:114
at /Users/jbaran/src/RDF/src/RDF.jl:79
Weird, because I can see a matching signature:
julia> methods(RDF.add!)
# 4 methods for generic function "add!":
add!(graph::Graph,subject::URI,predicate::URI,object::Number) at /Users/jbaran/src/RDF/src/RDF.jl:29
add!(graph::Graph,subject::URI,predicate::URI,object::String) at /Users/jbaran/src/RDF/src/RDF.jl:36
add!(graph::Graph,subject::URI,predicate::URI,object::URI) at /Users/jbaran/src/RDF/src/RDF.jl:43
add!(graph::Graph,statement::Statement) at /Users/jbaran/src/RDF/src/RDF.jl:68
At first I thought it was my use of object::Union(...), but even when I define three functions with Number, String, and URI, I get this error.
Is there something obvious that I am missing? I am using Julia 0.2.1 x86_64-apple-darwin12.5.0, by the way.
Thanks,
Kim
This looks like you may be getting bitten by the very slight difference between method extension and function shadowing.
Here's the short of it. When you write function add!(::Graph, ...); …; end, Julia looks at just your local scope to see if add! is already defined. If it is, then it will extend that function with this new method signature. But if it's not already defined locally, then Julia creates a new local variable add! for that function.
As JMW's comment suggests, I bet that you have two independent add! functions. Base.add! and RDF.add!. In your RDF module, you're shadowing the definition of Base.add!. This is similar to how you can name a local variable pi = 3 without affecting the real Base.pi in other scopes. But in this case, you want to merge your methods with the Base.add! function and let multiple dispatch take care of the resolution.
There are two ways to get the method extension behavior:
Within your module RDF scope, say import Base: add!. This explicitly brings Base.add! into your local scope as add!, allowing method extension.
Explicitly define your methods as function Base.add!(graph::Graph, …). I like this form as it more explicitly documents your intentions to extend the Base function at the definition site.
This could definitely be better documented. There's a short reference to this in the Modules section, and there's currently a pull request that should be merged soon that will help.
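Here is a minimal sketch of the shadowing-versus-extension difference. It uses Base.push! for illustration (add! is no longer a Base function in current Julia) and a hypothetical Graph type:
module RDF

struct Graph
    triples::Vector{NTuple{3,String}}
end

# Without the import below, `push!(g::Graph, ...) = ...` would create a brand-new
# function RDF.push! that shadows Base.push!; callers dispatching on Base.push!
# would then report "no method" for Graph, just like the add! error above.
import Base: push!   # bring the generic function into scope so we extend it

push!(g::Graph, t::NTuple{3,String}) = (push!(g.triples, t); g)

end # module

g = RDF.Graph(NTuple{3,String}[])
push!(g, ("http://test.org/1", "http://test.org/2", "http://test.org/3"))  # dispatches to RDF's method
Writing the method as function Base.push!(g::Graph, ...) has the same effect as the import and makes the intent to extend the Base function explicit at the definition site.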

Passing arguments to execl

I want to create my own pipeline like in Unix terminal (just to practice). It should take applications to execute in quotes like that:
pipeline "ls -l" "grep" ....
I know that I should use fork(), execl() (exec*), and the API to redirect stdin and stdout. But is there an alternative to execl that executes an application with its arguments given as a single argument containing both the application path and the arguments? In other words, is there a way to avoid parsing ls -l manually and instead pass it as one argument to execl?
If you have only a single command line instead of an argument vector, let the shell do the parsing for you:
execl("/bin/sh", "sh", "-c", the_command_line, NULL);
Of course, don't let untrusted remote user input into this command line. But if you are dealing with untrusted remote user input to begin with, you should arrange to pass an actual list of isolated arguments to the target application, as per normal usage of exec[vl], rather than a single command line.
Realistically, you can only really use execl() when the number of arguments to the command is known at compile time. In a shell, you'll normally use execv() or execvp() instead; these can handle an arbitrary number of arguments to the command to be executed. In theory, you use execv() when the path name of the command is given and execvp() (which does a PATH-based search for the command) when it isn't. However, execvp() handles the 'path given' case too, so you can simply use execvp().
So, for your pipeline command, you'll end up with one child using something equivalent to:
char *args_1[] = { "ls", "-l", 0 };
execvp(args_1[0], args_1);
The other child will end up using something equivalent to:
char *args_2[] = { "grep", "pattern", 0 };
execvp(args_2[0], args_2);
Except, of course, that you'll have created those strings from the command line arguments instead of by initialization as shown. Note that grep requires a pattern to search for.
You've still got plumbing issues to resolve. Make sure you close enough pipe file descriptors. When you dup() or dup2() a pipe to standard input or standard output, you close both the file descriptors from the pipe() function.
