I'm executing a batch file inside an R script. I'd like to run this and another large section of the R script twice using a foreach loop.
foreach(i = 1:2, .combine = rbind) %do% {
  shell.exec("\\\\network\\path\\to\\batch\\script.ext")
  # rest of the R script
}
One silly problem, though, is that this batch file generates data, and that data is read from a SQL Server LocalDB connection inside the loop. I thought at first that the script would execute the batch file, wait for it to finish, and then move on. However (it seems obvious in hindsight), the script instead executes the batch file, tries to grab data that hasn't been created yet (because the batch file hasn't finished running), and then executes the batch file again before the first run completes.
I've been trying to find a way to delay the rest of the script from executing until the batch script has finished, but have not come up with anything yet. I'd appreciate any insights.
Use system2 instead of shell.exec. system2 calls are blocking: the function waits until the external program has finished running. On most systems, this can be used directly to run scripts. On Windows, you may have to invoke rundll32 to execute a script:
cmd = c('rundll32.exe', 'Shell32.dll,ShellExecute', 'NULL', 'open', shQuote(scriptpath))
system2(cmd[1], args = cmd[-1])
Windows users may instead use shell, whose wait argument defaults to TRUE, so R waits for the command to complete. You may choose whether or not to "intern" the result.
On Unix-like systems, use system, which also defaults to wait = TRUE.
If your batch file simply launches another process and terminates, then it may need to be modified to either wait for completion or return a suitable process or file indicator that can be monitored.
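For the original question, a minimal sketch (assuming Windows, the foreach package, and that the UNC path from the question points to a runnable batch file) would be to swap shell.exec for shell, so each iteration blocks until the batch file exits:
library(foreach)
foreach(i = 1:2, .combine = rbind) %do% {
  # shell() hands the command to cmd.exe and, with wait = TRUE, blocks until it finishes
  shell(shQuote("\\\\network\\path\\to\\batch\\script.ext"), wait = TRUE)
  # ... rest of the R script, which can now read the data the batch file generated ...
}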
Related
I would like to run a lot of system commands asynchronously in the first part of my code, but then wait for them to finish in the second part of my code.
Right now I do that in the following way
for(command in commands){
system(command, wait = FALSE, intern = FALSE)
}
Sys.sleep(300)
#run second part of my code using the files generated by the commands above
In other words, right now I just run the commands asynchronously and guess that they will be completed within 300 seconds. Is there a way to explicitly check whether the commands have all finished before continuing, while still running them asynchronously?
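One way to do this on a Unix-like system (a sketch only; the marker file names are made up) is to have each background command create a small sentinel file when it finishes, then poll for those files instead of guessing a fixed delay:
done <- sprintf("job_%d.done", seq_along(commands))  # hypothetical marker files
for (i in seq_along(commands)) {
  # run the command and touch its marker inside one subshell, so the whole group runs in the background
  system(sprintf("(%s; touch %s)", commands[i], done[i]), wait = FALSE, intern = FALSE)
}
while (!all(file.exists(done))) Sys.sleep(1)  # continue only once every marker exists
file.remove(done)
# run second part of the code using the files generated by the commands above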
I run SAS batch jobs on a UNIX server and usually encounter the problem that, in batch, I cannot overwrite SAS datasets that were created by my user locally without changing the authorization level of each file in Windows. Is it possible to sign on with my user ID and password when initializing the batch job, so that the batch job gets full authorization to my own files?
Another issue is that I don't have authorization to run UNIX commands using PIPE in a local remote session on the server, so I cannot terminate my own sessions. It is, on the other hand, possible to run PIPE in batch, but this only lets me terminate batch jobs. So I also wonder whether it is possible to run a pipe command in batch using my ID and password, since the batch user does not have authorization to cancel "local remote sessions" running under my user.
Example code for terminating process:
%let processid = 6938710;
%let unixcmd = "kill &processid";
%put executing &unixcmd;
filename unixcmd pipe &unixcmd.;
data _null_; infile unixcmd; input; run;  /* the pipe command only executes when the fileref is read */
There's a good and complete answer to your first point in the SAS support page on this topic.
You can use the umask Unix command to specify the default file permission policy used for the permanent datasets created during a SAS session (be it batch or not).
If you are launching a Unix script which invokes a SAS batch session, you can put a umask command just before the sas invocation.
Otherwise you can adopt a more permanent solution including the umask command in one of the places specified in the above SAS support article.
You are probably interested in something like:
umask 002
This will assign rw-rw-r-- file permissions to all newly created datasets.
I have a local server installed on my system and have to start it from within my R function.
Here is how I start it:
cmd <- "sh start-server.sh"
system(cmd, wait = FALSE)
I have to perform computations after starting the server; basically, my function has to start the server and then proceed with further steps.
The server starts, but the further steps of the program are not executed. The cursor keeps waiting after the server is started.
Please suggest how to go about this.
My problem is finally solved. I had to add Sys.sleep(), and it runs after waiting for a few seconds. Thank you for the help.
You need to determine whether or not the problem is with the server script or with R's execution of the script. Try:
Running sh start-server.sh directly from a command prompt and seeing what happens.
Running something simple via R's system function, e.g., system("ls", wait = FALSE).
By default, system waits for the executed command to terminate before returning. Add wait=FALSE if you want it to return immediately.
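If the fixed Sys.sleep() mentioned above feels fragile, one possible refinement (a sketch only; the port is an assumption, use whatever your server actually listens on) is to start the server in the background and poll until it accepts connections:
system("sh start-server.sh", wait = FALSE)  # returns immediately; the server starts in the background
repeat {
  up <- tryCatch({
    con <- suppressWarnings(socketConnection("localhost", port = 8080, blocking = TRUE, timeout = 1))
    close(con)
    TRUE
  }, error = function(e) FALSE)
  if (up) break   # the server is accepting connections
  Sys.sleep(1)    # not up yet, wait a moment and try again
}
# ... further computations that need the running server ...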
I'm running batch Java on an IBM mainframe under JZOS. The job creates 0 to 6 ".txt" outputs depending on what it finds in the database. Then I need to convert those files from Unix to MVS (EBCDIC), and I'm using the OCOPY command running under IKJEFT01. However, when a particular output was not created, I get a JCL error and the job ends. I'd like to check for the presence or absence of each file name and set a condition code to control whether the IKJEFT01 steps are executed, but I don't know what to use to access the Unix file pathnames.
I have resolved this issue by writing a COBOL program to check the converted MVS files and set return codes to control the execution of subsequent JCL steps. The completed job is now undergoing user acceptance testing. Perhaps it sounds like a kludge, but it does work and I'm happy to share this solution.
The simplest way to do this in JCL is to use BPXBATCH as follows:
//EXIST EXEC PGM=BPXBATCH,
// PARM='pgm /bin/cat /full/path/to/USS/file.txt'
//*
// IF (EXIST.RC = 0) THEN
//* do whatever you need to
// ENDIF
If the file exists, the step ends with CC 0 and the IF succeeds. If the file does not exist, you get a non-zero CC (256, I believe), and the IF fails.
Since there is no //STDOUT DD statement, there's no output written to JES.
The only drawback is that it is another job step, and if you have a lot of procs (like a compile/assemble job), you can run into the 255 step limit.
I have an executable (no source) that I need to wrap, to make sure that it is not called more than once at a time. I immediately think of some sort of queue wrapper, but how do I actually make it so that my wrapper is called instead of the executable itself? Is there a better way to do this? The solution needs to be invisible because the users are other applications. Any information/recommendations are appreciated.
Method 1: Put the executable in some location not in the standard path. Create a shell script that checks for a sentinel file and, if the sentinel file is absent, creates it, executes the program, waits for the program to complete, then deletes the sentinel file. If the sentinel file is present, the script enters a loop with a short delay (1 second? How long does a typical run of this program take? Take that and halve it), checks the sentinel file again, and so on.
Method 2: Create a separate program that does the same thing as the script, but uses a system-level semaphore or lock instead. You could even simply use a read/write lock on a file. The program would fork() and exec() the real program, waiting for the child to exit before releasing the lock.
If the users are other applications, you can just rename the executable (e.g. name -> name.real) and call the wrapper with the original name. To make sure that it's only called once at a time, you can use the pidof command (e.g. pidof name.real) to check if the program is running already (pidof actually gives you the PID of the running process, so that you can use stuff such as kill or whatever to send signals to it).