I've written Python code that computes an IP address programmatically, which I then want to use in an external connection program. I don't know how to pass the computed value to the subprocess call:
import subprocess
from subprocess import call
some_ip = "192.0.2.0"  # Actually the result of some computation,
                       # so I can't just paste it into the call below.
subprocess.call("given.exe -connect host (some_ip)::5631 -Password")
I've read what I could and found similar questions, but I truly cannot understand this step of getting the value of some_ip into the subprocess call. If someone could explain this to me it would be greatly appreciated.
If you don't use shell=True (and I don't recommend shell=True unless you really know what you're doing, as shell mode can have security implications), subprocess.call takes the command as a sequence (e.g. a list) of its components: first the executable name, then the arguments you want to pass to it. All of those should be strings, but whether they are string literals, variables holding a string, or function calls returning a string doesn't matter.
Thus, the following should work:
import subprocess
some_ip = "192.0.2.0" # Actually the result of some computation.
subprocess.call(
    ["given.exe", "-connect", "host", "{}::5631".format(some_ip), "-Password"])
I'm using str's format method to replace the {} placeholder in "{}::5631" with the string in some_ip.
If you invoke it as subprocess.call(...), then
import subprocess
is sufficient and
from subprocess import call
is unnecessary. The latter would be needed if you want to invoke the function as just call(...). In that case the former import would be unneeded.
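For example, with the second import style, the same call (reusing the example command and the placeholder IP from the question) would read:
from subprocess import call

some_ip = "192.0.2.0"  # placeholder; use your computed value here
call(["given.exe", "-connect", "host", "{}::5631".format(some_ip), "-Password"])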
Is there a way to access all the variables/arguments passed through the command line or a variable file (the -V option) during Robot Framework execution? I know that in Python a script can access them through sys.argv.
The answer for getting the CLI arguments is inside your question - just look at the content of sys.argv and you'll see everything that was passed to the executor:
${args}=    Evaluate    sys.argv    sys
Log To Console    ${args}
That'll return a list where the executable itself (run.py) is the first member, and all arguments and their values are present in the order given during execution:
['C:/my_directories/rf-venv/Lib/site-packages/robot/run.py', '--outputdir', 'logs', '--variable', 'USE_BROWSERSTACK:true', '--variable', 'IS_DEV_ENVIRONMENT:false', '--include', 'worky', 'suites\\test_file.robot']
You explicitly mention variable files; those are a little bit trickier - the framework parses the files itself and creates the variables according to its own rules. You can naturally see them in the CLI args up there, and the other possibility is to use the built-in keyword Get Variables, which "Returns a dictionary containing all variables in the current scope." (quote from its documentation). Keep in mind though that these are all variables - not only the ones passed on the command line, but also the ones defined in the suite, in imported keywords, etc.
You have Log Variables to see their names and values "at current scope".
There is no possibility to see the arguments passed to robot.
I'd like to fancy up my embedding of Julia in a MATLAB mex function by hooking up Julia's STDIN, STDOUT, and STDERR to the MATLAB terminal. The documentation for redirect_std[in|out|err] says that the stream that I pass in as the argument needs to be a TTY or a Pipe (or a TcpSocket, which wouldn't seem to apply).
I know how I will define the right callbacks for each stream (basically, wrappers around calls to MATLAB's input and fprintf), but I'm not sure how to construct the required stream.
Pipe was renamed PipeEndpoint in https://github.com/JuliaLang/julia/pull/12739, but the corresponding documentation was not updated and PipeEndpoint is now considered internal. Even so, creating the pipe up front is still doable:
pipe = Pipe()
Base.link_pipe(pipe)
redirect_stdout(pipe.in)
@async while !eof(pipe)
    data = readavailable(pipe)
    # Pass data to whatever function handles display here
end
Furthermore, the no-argument version of these functions already creates a pipe object, so the recommended way to do this would be:
(rd, wr) = redirect_stdout()
@async while !eof(rd)
    data = readavailable(rd)
    # Pass data to whatever function handles display here
end
Nevertheless, all of this is less clear than it could be, so I have created a pull request to clean up this API: https://github.com/JuliaLang/julia/pull/18253. Once that pull request is merged, the link_pipe call will become unnecessary and pipe can be passed directly into redirect_stdout. Further, the return value from the no-argument version will become a regular Pipe.
I have a list comprehension:
thingie=[f(a,x,c) for x in some_list]
which I am parallelising as follows:
from multiprocessing import Pool
pool=Pool(processes=4)
thingie=pool.map(lambda x: f(a,x,c), some_list)
but I get the following error:
_pickle.PicklingError: Can't pickle <function <lambda> at 0x7f60b3b0e9d8>:
attribute lookup <lambda> on __main__ failed
I have tried to install the pathos package which apparently addresses this issue, but when I try to import it I get the error:
ImportError: No module named 'pathos'
OK, so this answer is just for the record; I figured it out with the author of the question during a conversation in the comments.
multiprocessing needs to transport every object between processes, so it uses pickle to serialize it in one process and deserialize it in another. That all works well, but pickle cannot serialize a lambda: pickle doesn't store a function's code at all, only a reference to it (its module and qualified name) that is looked up again when unpickling, and a lambda has no importable name it could be looked up by.
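You can see the difference with plain pickle, outside multiprocessing (a quick illustrative snippet; the names are arbitrary):
import pickle

def add_one(x):          # module-level function: pickled as a reference to __main__.add_one
    return x + 1

pickle.dumps(add_one)          # works
pickle.dumps(lambda x: x + 1)  # raises the same PicklingError as in your traceback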
It won't be any problem if you use map() with a one-argument function - you can pass that function instead of a lambda. If you have more arguments, like in your example, you need to define a small wrapper with the def keyword:
from multiprocessing import Pool

def f(x, y, z):
    print(x, y, z)

def f_wrapper(y):
    return f(1, y, "a")

pool = Pool(processes=4)
result = pool.map(f_wrapper, [7, 9, 11])
Just before I close this: I found another way to do this in Python 3, using functools. Say I have a function f with three arguments, f(a, x, c), one of which I want to map over, say x. I can use the following code to do basically what @FilipMalczak suggests:
import functools
from multiprocessing import Pool

f1 = functools.partial(f, 10)     # fix a = 10 (first positional argument)
f2 = functools.partial(f1, c=10)  # fix c = 10 by keyword; x stays free
pool = Pool(processes=4)
final_answer = pool.map(f2, some_list)  # calls f(10, x, c=10) for each x in some_list
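For completeness, here is a self-contained version of the same idea with a toy f and input list (the names and values are just illustrative):
import functools
from multiprocessing import Pool

def f(a, x, c):
    return a + x + c  # stand-in for the real computation

if __name__ == "__main__":  # guard required for multiprocessing on spawn-based platforms
    some_list = [1, 2, 3]
    f_fixed = functools.partial(f, 10, c=100)   # a fixed positionally, c by keyword, x free
    with Pool(processes=4) as pool:
        print(pool.map(f_fixed, some_list))     # [111, 112, 113]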
I have tried
let _ = Unix.create_process "ls" [||] Unix.stdin Unix.stdout Unix.stderr
in utop, and it crashes the whole session.
If I write that into a .ml file, compile it and run it, it crashes the terminal and Ubuntu throws a system error.
But why?
The right way to call it is:
let pid = Unix.create_process "ls" [|"ls"|] Unix.stdin Unix.stdout Unix.stderr
The first element of the array must be the "command" name.
On some systems /bin/ls is a link to some bigger executable that looks at argv.(0) to know how to behave (cf. BusyBox), so you really need to provide that info.
(You see that more often with /usr/bin/vi, which on many systems is now a symlink to vim.)
Unix.create_process actually calls fork and then does an execvpe, which itself calls the execv primitive (in the OCaml C implementation of the Unix module).
That function then calls cstringvect (a helper function on the C side of the module implementation), which translates the args parameter into an array of C strings with the last entry set to NULL. However, execve and the like expect, by convention (see the execve(2) Linux man page), the first entry of that array to be the name of the program:
argv is an array of argument strings passed to the new program. By
convention, the first of these strings should contain the filename
associated with the file being executed.
That first entry (or rather, the copy it receives) can actually be changed by the program receiving these args, and it is what gets displayed by ps, top, etc.
I want to create my own pipeline like in a Unix terminal (just to practice). It should take the applications to execute in quotes, like this:
pipeline "ls -l" "grep" ....
I know that I should use fork(), execl() (or another exec* function) and the API for redirecting stdin and stdout. But is there an alternative to execl that executes an app with its arguments given as just one string containing the application path and the arguments? In other words, is there a way to avoid parsing "ls -l" manually and instead pass it as a single argument to execl?
If you have only a single command line instead of an argument vector, let the shell do the parsing for you:
execl("/bin/sh", "sh", "-c", the_command_line, (char *) NULL);
Of course, don't let untrusted remote user input into this command line. If you are dealing with untrusted remote user input to begin with, you should arrange to pass an actual list of isolated arguments to the target application, as per the normal usage of exec[vl], rather than a single command line.
Realistically, you can only really use execl() when the number of arguments to the command is known at compile time. In a shell, you'll normally use execv() or execvp() instead; these can handle an arbitrary number of arguments to the command to be executed. In theory, you use execv() when the path name of the command is given and execvp() (which does a PATH-based search for the command) when it isn't. However, execvp() handles the 'path given' case as well, so simply use execvp().
So, for your pipeline command, you'll end up with one child using something equivalent to:
char *args_1[] = { "ls", "-l", 0 };
execvp(args_1[0], args_1);
The other child will end up using something equivalent to:
char *args_2[] = { "grep", "pattern", 0 };
execvp(args_2[0], args_2);
Except, of course, that you'll have created those strings from the command line arguments instead of by initialization as shown. Note that grep requires a pattern to search for.
You've still got plumbing issues to resolve. Make sure you close enough pipe file descriptors: after you dup() or dup2() a pipe end onto standard input or standard output, close both of the file descriptors returned by the pipe() function.
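To make that plumbing concrete, here is a minimal sketch of how the two execvp() calls above could be wired together with pipe(), dup2() and the necessary close() calls (most error handling is omitted, and "pattern" is just a placeholder):
/* Sketch: run the equivalent of "ls -l | grep pattern" with fork/execvp. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid1 = fork();
    if (pid1 == 0) {                       /* first child: ls -l */
        char *args_1[] = { "ls", "-l", 0 };
        dup2(fd[1], STDOUT_FILENO);        /* write end becomes stdout */
        close(fd[0]);                      /* close both original pipe fds */
        close(fd[1]);
        execvp(args_1[0], args_1);
        perror("execvp ls");
        _exit(127);
    }

    pid_t pid2 = fork();
    if (pid2 == 0) {                       /* second child: grep pattern */
        char *args_2[] = { "grep", "pattern", 0 };
        dup2(fd[0], STDIN_FILENO);         /* read end becomes stdin */
        close(fd[0]);
        close(fd[1]);
        execvp(args_2[0], args_2);
        perror("execvp grep");
        _exit(127);
    }

    close(fd[0]);                          /* parent must close its copies too, */
    close(fd[1]);                          /* or grep never sees end-of-file    */
    waitpid(pid1, NULL, 0);
    waitpid(pid2, NULL, 0);
    return 0;
}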