I'd like my programs to be reusable later, like the Unix utilities (by using pipes, for example).
If there isn't a standard way to do this, can you give me some tips?
Thank you.
Standard Unix command-line utilities are "simple", in that they accept some input, do some processing, then produce output. As long as your utilities can read from stdin (input) and write to stdout (output), they can be chained together with pipes.
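For example, a filter only has to read stdin and write stdout to be pipeline-friendly. Here is a minimal sketch in C (the program name upper is just an illustration):
#include <stdio.h>
#include <ctype.h>

/* Read stdin, uppercase it, write stdout. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(toupper(c));
    return 0;
}
Compiled to upper, it can sit anywhere in a pipeline, e.g. ls -l | ./upper | grep FOO.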
In practice I prefer to write R code with Notepad++ and NppToR, where the default shortcut keys provide the following functions:
F8: Pass line or selection
Shift+F8: Pass to point of cursor (from the very beginning)
Ctrl+F8: Pass entire file at once
Ctrl+Shift+F8: Pass by source (i.e., source("C:/Users/lenovo/Desktop/yourRcode.r"))
Julia is said to be as simple as R or Python but much faster than both, almost as fast as C or Fortran, so I am trying to write my code in Julia.
According to julia-NotepadPlusPlus, one can use Julia with Notepad++ and AutoHotkey to achieve the following:
Win-F12 -> Start Julia
Left_Shift-Enter -> Evaluate current line
Right_Shift-Enter -> Evaluate selected block
I want to write a NppToJulia.ahk file to link Notepad++ and Julia, providing the same functions as the R/NppToR/Notepad++ setup:
F8: Pass line or selection
Shift+F8: Pass to point of cursor (from the very beginning)
Ctrl+F8: Pass entire file at once
Ctrl+Shift+F8: Pass by source (i.e., include("C:/Users/lenovo/Desktop/yourJuliaCode.jl"))
As I know nothing about AutoHotkey, can anyone give me some hints?
I want a short script that opens a Julia REPL in a specific mode, for instance the shell> mode or the C++ > mode (from Cxx.jl). How can this be achieved?
Update:
After getting an answer I created a script to start Julia REPL in Cxx.jl C++ mode (and pre-run some C++ code). See it here: https://github.com/cdsousa/cxxrepl.jl.
Whatever this may be good for...
The easiest way (without having dug into the innards of Base.REPL) is to write the appropriate character to STDIN, e.g.
write(STDIN.buffer,'?');
If you want to start the REPL and drop to shell mode immediately, call julia as
julia -i -e "write(STDIN.buffer, ';')"
Assume that I have a script that can be run in either of the following ways.
./foo arg1 arg2
./foo
Is there a generally accepted way to denote that arg1 and arg2 aren't mandatory arguments when printing the correct usage of the command?
I've sometimes noticed usage printed with the arguments wrapped in brackets, as in the following printout.
Usage: ./foo [arg1] [arg2]
Do these brackets mean that the argument is optional or is there another generally accepted way to denote that an argument is optional?
I suppose this is as much a standard as anything:
The Open Group Base Specifications Issue 7, IEEE Std 1003.1, 2013 Edition, Ch. 12: Utility Conventions
It doesn't, however, mention several notations I have commonly seen used over the years to denote various meanings:
square brackets: [optional option]
angle brackets: <required argument>
curly braces: {default values}
parentheses: (miscellaneous info)
Edit: I should add that these are just conventions. The important thing is to pick a sensible convention, state it clearly, and stick to it consistently. Be flexible and adopt the conventions that seem most frequently encountered on your target platform(s); they will be the easiest for users to adapt to.
I personally have not seen a 'standard' that denotes that a switch is optional (in the way there are standards defining how certain languages are written, for example); it really is personal choice. But according to IBM's docs, Wikipedia, the IEEE, and numerous shell scripts and command-line programs I've seen, the de facto convention is to treat square-bracketed ([]) parameters as optional. An example from Linux:
ping (output trimmed...)
usage: ping [-c count] [-t ttl] host
where [-c count] and [-t ttl] are optional parameters but host is not (as defined in the help).
I personally follow the de facto convention as well, using [] to mark optional parameters and making sure to note that in the usage text of the script/program.
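For instance, a minimal usage message following that convention, for a hypothetical foo utility that takes up to two optional arguments, might look like this in C:
#include <stdio.h>
#include <stdlib.h>

/* Print the usage text, marking optional arguments with []. */
static void usage(const char *prog)
{
    fprintf(stderr, "Usage: %s [arg1] [arg2]\n", prog);
    exit(2);
}

int main(int argc, char **argv)
{
    if (argc > 3)   /* zero, one, or two arguments are all acceptable */
        usage(argv[0]);
    /* ... */
    return 0;
}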
I should note that a computer standard should define how something happens and its failure paths (either true failure or undefined behavior), along the lines of: the command-line interpreter shall treat arguments as optional when enclosed in square brackets, shall treat X as Y when Z, and so on, much as the ISO C standard says how a function must be formed for it to be valid (otherwise it fails). Given that no command-line interpreter, from ash to zsh and everything in between, fails a script for treating [] as anything but optional, one could say there is no true standard.
Yes, the square brackets indicate optional arguments in Unix man pages.
From "man man":
[-abc] any or all arguments within [ ] are optional.
I've never wondered whether they're formally specified somewhere; I've always just assumed they come from conventions used in abstract algebra, in particular in BNF grammars.
I want to create my own pipeline, like in a Unix terminal (just to practice). It should take the applications to execute as quoted arguments, like this:
pipeline "ls -l" "grep" ....
I know that I should use fork(), execl() (exec*), and the API to redirect stdin and stdout. But is there an alternative to execl() that executes an app with its arguments from just one argument containing both the application path and the arguments? Is there a way to avoid parsing "ls -l" manually and instead pass it as one argument to execl()?
If you have only a single command line instead of an argument vector, let the shell do the parsing for you:
execl("/bin/sh", "sh", "-c", the_command_line, NULL);
Of course, don't let untrusted remote user input into this command line. If you are dealing with untrusted remote input to begin with, you should arrange to pass an actual list of isolated arguments to the target application, as per normal usage of exec[vl], rather than a single command line.
Realistically, you can only really use execl() when the number of arguments to the command is known at compile time. In a shell, you'll normally use execv() or execvp() instead; these can handle an arbitrary number of arguments to the command to be executed. In theory, you use execv() when the path name of the command is given and execvp() (which does a PATH-based search for the command) when it isn't. However, execvp() handles the 'path given' case too, so simply use execvp().
So, for your pipeline command, you'll end up with one child using something equivalent to:
char *args_1[] = { "ls", "-l", 0 };
execvp(args_1[0], args_1);
The other child will end up using something equivalent to:
char *args_2[] = { "grep", "pattern", 0 };
execvp(args_2[0], args_2);
Except, of course, that you'll have created those strings from the command line arguments instead of by initialization as shown. Note that grep requires a pattern to search for.
You've still got plumbing issues to resolve. Make sure you close enough pipe file descriptors: when you dup() or dup2() a pipe end onto standard input or standard output, close both of the file descriptors returned by the pipe() function.
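Putting it together, a minimal sketch of the plumbing for the fixed two-stage pipeline ls -l | grep pattern (error handling mostly omitted for brevity):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0)                 /* first child: ls -l */
    {
        dup2(fd[1], STDOUT_FILENO);  /* write end becomes stdout */
        close(fd[0]);                /* close both pipe descriptors */
        close(fd[1]);
        char *args[] = { "ls", "-l", 0 };
        execvp(args[0], args);
        perror("execvp ls");
        exit(1);
    }

    if (fork() == 0)                 /* second child: grep pattern */
    {
        dup2(fd[0], STDIN_FILENO);   /* read end becomes stdin */
        close(fd[0]);
        close(fd[1]);
        char *args[] = { "grep", "pattern", 0 };
        execvp(args[0], args);
        perror("execvp grep");
        exit(1);
    }

    close(fd[0]);                    /* parent must close both ends too, */
    close(fd[1]);                    /* or grep never sees EOF */
    while (wait(NULL) > 0)
        ;
    return 0;
}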
I am a bit of an R novice, and I am stuck with what seems like a simple problem, yet touches pretty deep questions about how and when things get evaluated in R.
I am using Rserve quite a bit; the typical syntax to get things evaluated remotely is a bit of a pain to type repeatedly:
RSeval(connection, quote(try(command)))
So I would like a function r which does the same thing with just the call:
r(command)
My first, naive, bound to fail attempt involved:
r <- function(command) {
    RSeval(c, quote(try(command)))
}
You've guessed it: this sends, literally, try(command) to my confused Rserve daemon. I want command to be partially evaluated, if that makes any sense -- i.e. replaced by what I typed as an argument, but without evaluating it locally.
I looked for solutions to this and browsed through the documentation for quote, substitute, eval, call, etc., but I was not able to find something that worked. Either command gets evaluated locally, or not at all.
This is not a big problem; I can type the whole damn quote(try()) thing every time. But at this point I am mostly curious as to how to get this to work!
EDIT:
More explanations as to what I want to do.
In the text above, command is meant to be a call to a function, ideally -- i.e., not a character string: something like a <- 3 or assign("a", 3) rather than "a<-3" or quote(a<-3).
I believe that this is part of what makes this tricky. It seems really hard to tell R not to evaluate this locally, but only send it literally. Basically I would like my function to be a bit like quote(), which does not evaluate its argument.
Some explanation about my intentions. I wish to use Rserve frequently to pass commands to a remote R daemon. The commands would be my own (or my colleagues) and the daemon protected by firewall and authentication (and would not be run as root) -- so there is no worry of malicious commands being passed.
To be perfectly honest, this is not a big issue, and I would be pretty happy to always use RSeval(c, quote(try())). At this point I see this more as an interesting inquiry into the subtleties of R :-)
You probably want to use substitute(): it can give you the argument unevaluated, which you can then build into the call.
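A minimal sketch of that idea (assuming conn is an open Rserve connection; the name is just for illustration):
r <- function(command, conn) {
    # capture the caller's expression without evaluating it locally
    expr <- substitute(command)
    # build try(<expr>) and send it to the daemon
    RSeval(conn, call("try", expr))
}
So r(a <- 3, conn) sends try(a <- 3) to the daemon without evaluating it locally.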
I'm not sure if I understood you correctly -- would eval(parse(text = command)) do the trick? Notice that command is a character string here, so you can easily pass it as a function argument. If I'm getting the point...
Anyway, evaluating user-specified commands is potentially dangerous, and therefore not recommended. You should either install AppArmor and tweak it (which is not an easy task), or drop the whole evaluation thing...