I am running the mpirun command from a crontab job and got this error:
/opt/pgi/linux86-64/2013/mpi/mpich/bin/mpirun: line 75: /linux86-64/2013/mpi/mpich/bin/mpirun.args: No such file or directory
The mpirun.args file actually is under the /opt/pgi/linux86-64/2013/mpi/mpich/bin/ directory. How can I let the crontab environment know where to locate it?
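A sketch of what is probably happening (my assumption, based on the truncated path in the error, not something the question confirms): the mpirun wrapper builds the path to mpirun.args from an environment variable that is set by the interactive shell's start-up files but missing from cron's minimal environment, so the /opt/pgi prefix comes out empty. A small wrapper script that exports the variable before calling mpirun could look roughly like this; the variable name PGI, the process count, the program path and the log path are all illustrative placeholders of mine:

#!/bin/sh
# run_mpirun.sh -- hypothetical wrapper; PGI, -np 4 and the program path
# are guesses for illustration, not taken from the question
export PGI=/opt/pgi
export PATH=$PGI/linux86-64/2013/mpi/mpich/bin:$PATH
mpirun -np 4 /path/to/your/mpi_program

and a crontab entry that calls the wrapper:

0 * * * * /path/to/run_mpirun.sh >> /tmp/mpirun_cron.log 2>&1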
I'm working on a home automation system using a Raspberry Pi. As part of this, I'd like the rPi to pull a file from my web server once a minute. I've been using rsync, but I've run into an error I can't figure out.
This command works fine when run at the command line on my rPi:
rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --progress username@example.com:/home/user/example.com/cmd.txt /home/pi/sprinkler/input/cmd.txt
...but when it runs in cron, it produces this error in my log:
Unexpected local arg: /home/pi/sprinkler/input/
If arg is a remote file/dir, prefix it with a colon (:).
rsync error: syntax or usage error (code 1) at main.c(1375) [Receiver=3.1.2]
...and I just answered my own question. Extensive googling didn't turn up an answer, but I tried putting my rsync command into a bash script and running the script from cron instead of the raw command, and now everything works!
I'll put this here in case anyone else stumbles over this issue. Here's the script, which I called "sync.sh":
#!/bin/bash
# attempting a bash shell to use rsync to grab our file
rsync -avz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" \
    --progress user@example.com:/home/user/example.com/vinhus/tovinhus/cmd.txt \
    /home/pi/sprinkler/input/
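For completeness, a crontab entry along these lines would run the script once a minute; the script location and the log path are my own placeholders, and the script needs to be executable (chmod +x sync.sh) first:

* * * * * /home/pi/sprinkler/sync.sh >> /home/pi/sprinkler/sync.log 2>&1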
I am trying to run an .exe that sits on a remote machine with PSEXEC, but it does not run properly.
The exe is a converted python script:
usage: WindowsUpdates3.exe [-h] [-v] [-m] [-i] [-u] [-r] [-s SKIP]
List, download and update Windows clients
optional arguments:
-h, --help show this help message and exit
-v, --version Show program version and exit
-m, --list-missing List missing updates
-i, --list-installed List installed updates
-u, --update List and install missing updates
-r, --reboot Reboot after installing updates if needed
-s SKIP, --skip SKIP Skips these KB numbers
I am able to run the .exe as intended with a PSEXEC console session:
PSEXEC \\<hostname> cmd
I navigate to the exe, run my.exe -i, and it works the same way as when I execute it on the machine locally.
When I try to execute the file directly, some functionality does not work:
PSEXEC.exe \\<hostname> -h "C:\WindowsUpdates\WindowsUpdates3.exe" -i
...
C:\WindowsUpdates\WindowsUpdates3.exe exited on <hostname> with error code 0.
I am able to get the help menu (-h) and the version (-v) of the exe with the above command.
The other arguments return nothing but exit code 0, and the --update argument throws a com_error (-2147024891), which translates to access denied...
How can this be, given that I have the same privileges as when I spawn a cmd session?
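Two variations that might be worth testing (my own suggestions, not part of the original question): run the exe under the SYSTEM account, or pass explicit credentials, since the token PsExec builds for a direct non-interactive launch can differ from the one you get inside an interactive cmd session. The domain, account name and password below are placeholders:

REM run under the SYSTEM account instead of the connecting user
PSEXEC.exe \\<hostname> -s "C:\WindowsUpdates\WindowsUpdates3.exe" -i

REM or supply explicit credentials together with the elevated token (-h)
PSEXEC.exe \\<hostname> -u DOMAIN\adminuser -p password -h "C:\WindowsUpdates\WindowsUpdates3.exe" -i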
Does tcsh support launching itself in a remote directory via an argument?
The setup I am dealing with does not allow me to chdir to the remote directory before invoking tcsh, and I'd like to avoid having to create a .sh file for this workflow.
Here are the available arguments I see for v6.19:
> tcsh --help
tcsh 6.19.00 (Astron) 2015-05-21 (x86_64-unknown-Linux) options wide,nls,dl,al,kan,rh,color,filec
-b file batch mode, read and execute commands from 'file'
-c command run 'command' from next argument
-d load directory stack from '~/.cshdirs'
-Dname[=value] define environment variable `name' to `value' (DomainOS only)
-e exit on any error
-f start faster by ignoring the start-up file
-F use fork() instead of vfork() when spawning (ConvexOS only)
-i interactive, even when input is not from a terminal
-l act as a login shell, must be the only option specified
-m load the start-up file, whether or not owned by effective user
-n file no execute mode, just check syntax of the following `file'
-q accept SIGQUIT for running under a debugger
-s read commands from standard input
-t read one line from standard input
-v echo commands after history substitution
-V like -v but including commands read from the start-up file
-x echo commands immediately before execution
-X like -x but including commands read from the start-up file
--help print this message and exit
--version print the version shell variable and exit
This works, but is suboptimal because it launches two instances of tcsh:
tcsh -c 'cd /tmp && tcsh'
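One refinement that might help (my suggestion, not something from the original question) is to exec the inner shell, so the -c wrapper is replaced by the interactive shell instead of staying around as a parent process:

tcsh -c 'cd /tmp && exec tcsh'

Only one tcsh process remains once the interactive shell is up, although the start-up cost of launching tcsh is still paid twice.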
I found many similar questions on the internet, but none of them resolved my problem. I am working on a Solaris 5.10 machine. Inside a shell script, a customized command is run like this:
palf -f ${basePath}/palf_file.DAT -e ${basePath}/LOG/palf_file.log
This command only runs when logged in as the "palf" user. The script, and therefore this command, runs perfectly from the command prompt, but crontab is not able to run it.
I tried a few things. I changed the entry in my crontab file as below, but then it could not even run the script:
40 15 * * * palf bash /opt/bin/scripts/script.sh
Then I tried to edit a crontab as the "palf" user with the command below, but it gave me an "invalid options" error:
crontab -u palf -e
I also tried
crontab -e palf
It opened a crontab file, but it was the same as root's crontab file, not the user-specific one.
Nothing worked for me. Could anyone please help here? Thanks.
palf -f ${basePath}/palf_file.DAT -e ${basePath}/LOG/palf_file.log
if [[ $? -ne 0 ]]; then
    logger "palf command failed. Please check..." 1
else
    logger "palf command successfully executed..." 0
fi
This is how I am checking the status of the palf command, and it prints "palf command failed. Please check..." via the logger function every time it runs from the cron job.
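One thing worth trying, sketched under my own assumptions (the log path is a placeholder): a per-user crontab line has no user field, so the 40 15 * * * palf bash ... entry above asks cron to run a command literally named palf. Keeping the entry in root's crontab but switching to the palf user explicitly avoids that, and su - also gives the palf user's login environment, which the palf command apparently needs:

40 15 * * * su - palf -c "bash /opt/bin/scripts/script.sh" >> /tmp/palf_cron.log 2>&1

The redirection captures output that cron would otherwise mail or discard, which makes it easier to see why the palf command reports a failure.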
In an Amazon EC2 terminal, I type sudo nano crontab -e to bring up the editor. I have the following (empty line at the end included):
@reboot echo "Running RMV scrape & R Shiny via: nano crontab -e"
@reboot nohup python /home/ec2-user/RMV/RMV_scrape.py &
@reboot nohup shiny-server &
@reboot service start httpd
@hourly cp -f /home/ec2-user/RMV/wait_times.csv /var/shiny-server/www/wait_times.csv
Here, I'm trying to run (a) my program, (b) apache, (c) R Shiny server and (d) a script that runs hourly to copy a file.
For some reason, this fails to run. pgrep cron does show that cron runs on startup. It shouldn't be a permissions issue because I ran crontab using sudo. I had one relative pathname in my .py script, but I changed it to an absolute pathname.
I've consulted:
https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work
http://www.unix.com/answers-to-frequently-asked-questions/13527-cron-crontab.html
Any ideas why this may not be working?
I think your problem is with the command you used to edit the crontab. sudo nano crontab -e does not edit the crontab; you made a file named crontab in whatever directory you were working in, but real crontab files live under /var and are not intended to be edited directly. For any given user, crontab -e will edit that user's crontab using the editor specified in the EDITOR environment variable, so to edit root's crontab the command is sudo crontab -e.
That said, adding entries to root's crontab is probably not what you want. You probably want to use the system crontab for something like this. In almost all cases the system crontab is /etc/crontab, which can be edited with sudo nano /etc/crontab. Note that in the system crontab you need to add the user to run the command as, between the time and command fields, e.g.
@reboot root echo "Running RMV scrape & R Shiny via: nano crontab -e"
Also note that cron uses a very minimal PATH environment variable for security reasons. If a command you issue is not on that path, it will not execute. Remember either to add the paths you need to the crontab PATH (specified in the particular crontab file) or to use the full path to a given executable from the (filesystem) root directory.
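Putting the two points together, a system crontab built from the entries in the question might look roughly like this; the PATH value, the absolute locations of nohup, python, shiny-server, service and cp, and the log redirections are my assumptions for a typical Amazon Linux box, not verified values:

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
@reboot root /usr/bin/nohup /usr/bin/python /home/ec2-user/RMV/RMV_scrape.py >> /var/log/rmv_scrape.log 2>&1 &
@reboot root /usr/bin/nohup /usr/bin/shiny-server >> /var/log/shiny-server.log 2>&1 &
@reboot root /sbin/service httpd start
@hourly root /bin/cp -f /home/ec2-user/RMV/wait_times.csv /var/shiny-server/www/wait_times.csv

(I also wrote the Apache line as service httpd start, i.e. service name before the action.)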