I have a script in tcsh, and for some reason it's not working. Can anyone point out the issue?
#!/usr/bin/tcsh
while true
do
if ls /sample/test3 >& /dev/null ; then /sample/app_code/run/initiateLoad.sh ; endif
sleep 1800
done
The path /sample/test3 contains files, so the if condition should succeed and start the shell script, but that's not happening.
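For reference, the script mixes Bourne-shell keywords (do, done, then) with csh's endif, which tcsh cannot parse: tcsh uses while (expr) ... end and if (expr) then ... endif. A minimal sketch of the same loop in native tcsh syntax (paths taken from the question) would be:

#!/usr/bin/tcsh
# Every 30 minutes: if /sample/test3 is listable, kick off the load script.
while (1)
    ls /sample/test3 >& /dev/null
    if ($status == 0) then
        /sample/app_code/run/initiateLoad.sh
    endif
    sleep 1800
end

Alternatively, keep the Bourne syntax and change the shebang to #!/bin/sh (replacing endif with fi). Note also that ls succeeds on an empty directory too, so this condition only tests that the path exists, not that it contains files.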
I am running a batch script that starts several .exe and .R scripts and waits for them to finish before doing something else. Right now, the code that checks whether they have finished runs tasklist and looks for any of the .exes or Rscript.exe still being active:
:LOOP
REM Dump any matching process names into a file, then test the file's size.
set check_if_ran=files_running.txt
tasklist | findstr "exe1.exe exe2.exe Rscript.exe" > %check_if_ran%
TIMEOUT 1 >nul
for %%F in (%check_if_ran%) do set "file_size=%%~zF"
REM Appending 0 guards against an empty variable: "00" means zero bytes.
if %file_size%0 NEQ 00 (
    echo still running
    TIMEOUT 10 >nul
    goto :LOOP
) else (
    echo all ran
)
which worked fine in the past, but now other, independent Rscript processes might be running concurrently. Is there any way to change the name of each Rscript process, instead of them all being Rscript.exe? Or is there some other way (saving the PIDs of the processes I start, perhaps)?
Thanks
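For illustration, one commonly suggested approach is to launch each process with a unique console window title and filter tasklist on that title instead of the image name. A sketch (the titles here are hypothetical, and this relies on the processes keeping their console window title, which you can verify with tasklist /V):

REM Launch each process with a distinctive window title.
start "mybatch_exe1" exe1.exe
start "mybatch_rscript" Rscript.exe myscript.R

REM Later, list only processes whose window title matches ours.
tasklist /FI "WINDOWTITLE eq mybatch_*" > files_running.txt

Another route is to capture PIDs at launch (e.g. via wmic process call create) and poll tasklist /FI "PID eq <pid>" for each one.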
I have a number of Python workers managed by supervisord that should continuously print to stdout (after each completed task) if they are working properly. However, they tend to hang, and we've had difficulty finding the bug. Ideally supervisord would notice that they haven't printed in X minutes and restart them; the tasks are idempotent, so non-graceful restarts are fine. Is there any supervisord feature or addon that can do this? Or another supervisor-like program that has this out of the box?
We are already using http://superlance.readthedocs.io/en/latest/memmon.html to kill if memory usage skyrockets, which mitigates some of the hangs, but a hang that doesn't cause a memory leak can still cause the workers to reach a standstill.
One possible solution would be to wrap your Python script in a bash script that monitors it and exits if there is no output to stdout for a period of time.
For example:
kill-if-hung.sh
#!/usr/bin/env bash
set -e

# Exit if the wrapped command prints nothing for this many seconds.
TIMEOUT=60
LAST_CHANGED="$(date +%s)"

# Background ticker: signal ourselves every second so check_output runs.
{
    set -e
    while true; do
        sleep 1
        kill -USR1 $$
    done
} &

trap check_output USR1
check_output() {
    CURRENT="$(date +%s)"
    if [[ $((CURRENT - LAST_CHANGED)) -ge $TIMEOUT ]]; then
        echo "Process STDOUT hasn't printed in $TIMEOUT seconds"
        echo "Considering process hung and exiting"
        exit 1
    fi
}

# Named pipe through which we observe the wrapped command's stdout.
STDOUT_PIPE=$(mktemp -u)
mkfifo "$STDOUT_PIPE"

trap cleanup EXIT
cleanup() {
    kill -- -$$ # Send TERM to child processes
    [[ -p $STDOUT_PIPE ]] && rm -f "$STDOUT_PIPE"
}

# Run the wrapped command with its stdout going into the pipe.
"$@" >"$STDOUT_PIPE" || exit 2 &

# Relay output and record the time of the last line we saw.
while true; do
    if read -r tmp; then
        echo "$tmp"
        LAST_CHANGED="$(date +%s)"
    fi
done <"$STDOUT_PIPE"
Then you would run a Python script under supervisord like: kill-if-hung.sh python -u some-script.py (-u disables output buffering; alternatively, set PYTHONUNBUFFERED).
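For context, a minimal supervisord program stanza wiring this up might look like the following (paths hypothetical); stopasgroup/killasgroup make sure the wrapped child dies along with the wrapper:

[program:worker]
command=/opt/scripts/kill-if-hung.sh python -u /opt/scripts/some-script.py
autorestart=true
stopasgroup=true
killasgroup=true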
I'm sure you could imagine a Python script that would do something similar.
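As a rough, untested sketch of that idea in pure Python (all names hypothetical): a watchdog thread kills the child once stdout has been quiet for too long:

#!/usr/bin/env python3
# Hypothetical watchdog: run a command, kill it if stdout is silent too long.
import subprocess
import sys
import threading
import time

TIMEOUT = 60  # seconds of stdout silence before the child is considered hung

def main():
    proc = subprocess.Popen(sys.argv[1:], stdout=subprocess.PIPE,
                            text=True, bufsize=1)
    last_output = [time.monotonic()]  # list so the thread sees updates

    def watchdog():
        while proc.poll() is None:
            time.sleep(1)
            if time.monotonic() - last_output[0] >= TIMEOUT:
                print(f"No stdout for {TIMEOUT}s; killing child", file=sys.stderr)
                proc.kill()
                return

    threading.Thread(target=watchdog, daemon=True).start()
    for line in proc.stdout:  # relay output, noting when we last saw any
        sys.stdout.write(line)
        last_output[0] = time.monotonic()
    sys.exit(proc.wait())

if __name__ == "__main__":
    main()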
I found many similar questions on the internet, but none of them resolved my problem. I am working on a Solaris 5.10 machine. Inside a shell script, a customized command is run like below:
palf -f ${basePath}/palf_file.DAT -e ${basePath}/LOG/palf_file.log
This command only runs while logged in as the "palf" user. The script, and hence this command, runs perfectly from the command prompt, but cron is not able to run it.
I tried a few things. I changed the entry in my crontab file as below, which could not even run the script:
40 15 * * * palf bash /opt/bin/scripts/script.sh
Then I tried to edit a crontab as the "palf" user using the command below, but it gave me an "invalid options" error.
crontab -u palf -e
I also tried
crontab -e palf
It opened a crontab file, but it was the same as root's crontab, not the user-specific one.
Nothing worked for me. Could anyone please help here? Thanks.
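For what it's worth, a sketch of the usual Solaris route (assuming you are root; the EDITOR choice is arbitrary):

# Solaris crontab takes the username as a positional argument (no -u flag):
EDITOR=vi crontab -e palf

# Or switch to the user and edit their own crontab:
su - palf -c "crontab -e"

# Note: a per-user crontab entry has no user column (that column only
# exists in Linux's /etc/crontab), so inside palf's crontab the line is:
40 15 * * * bash /opt/bin/scripts/script.sh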
palf -f ${basePath}/palf_file.DAT -e ${basePath}/LOG/palf_file.log
if [[ $? -ne 0 ]]; then
    logger "palf command failed. Please check..." 1
else
    logger "palf command successfully executed..." 0
fi
This is how I am checking the status of the palf command, and it prints "palf command failed. Please check..." via the logger function every time it runs from the cron job.
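One hedged debugging step (the debug log name is hypothetical): capture the command's own stdout/stderr under cron, since cron runs with a minimal environment and the real error message is otherwise lost:

# Append palf's own output to a debug log so the cron-time failure is visible.
palf -f ${basePath}/palf_file.DAT -e ${basePath}/LOG/palf_file.log \
    >> ${basePath}/LOG/palf_cron_debug.log 2>&1
if [[ $? -ne 0 ]]; then
    logger "palf command failed. Please check..." 1
fi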
I am copying some files, so the result can go either way.
eg:
>cp -R bin/*.ksh ../backup/
>cp bin/file.sh ../backup/bin/
When I execute the above commands, the files get copied. There is no response from the system if the copy is successful; if not, it prints the error in the terminal itself, e.g. cp: file.sh: No such file or directory.
Now I want to log the error message or, if the copy succeeds, log my own custom message to a file. How can I do that?
Any help is appreciated.
Thanks
Try writing this in a shell script:
# These lines ensure only one instance of the script runs at a time
# (flock-based lock; a commonly shared snippet).
ME=$(basename "$0")
LCK="./${ME}.LCK"
exec 8>"$LCK"

LOGFILE=~/mycp.log

if flock -n -x 8; then
    # 2>&1 redirects any error output into $LOGFILE as well
    cp -R bin/*.ksh ../backup/ >> "$LOGFILE" 2>&1
    # $? holds the exit status of the last command; cp returns 0 on success
    if [ $? -eq 0 ]; then
        echo 'copied successfully' >> "$LOGFILE"
    else
        echo 'copy failed, see the error above' >> "$LOGFILE"
    fi
fi
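If you don't need the single-instance lock, a minimal per-command alternative (same log file assumed) is to chain the logging onto each cp directly:

# cp's own error text goes to the log; then append a custom status line.
cp bin/file.sh ../backup/bin/ >> ~/mycp.log 2>&1 \
    && echo 'file.sh copied successfully' >> ~/mycp.log \
    || echo 'file.sh copy failed' >> ~/mycp.log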
I'm running a script from ASP.NET/C# using SharpSsh. When the script runs and I do a ps -ef | grep from Unix, I see the same script running twice: once under csh -c and once under ksh. The script's shebang is ksh, so I'm not sure why a copy of csh is also running. Also, if I run the same script directly from Unix, only one copy runs, with ksh. There is no other shell being launched from within the script.
Most Unix/Linux systems now have a command or option that shows process trees as an indented list, like the one below; look for the -t or -T options to ps, or the ptree command:
USER     PID     PPID    START     TT     TIME    CMD
daemon   1       1       11-03-06  ?      0       init
myusr    221568  1       11-03-07  tty10  1.00s    \_ -ksh
myusr    350976  221568  07:52:11  tty10  0        |   \_ ps -efT
I bet you'll see that the csh process is the user's login shell invoking your script as an argument (you may need different ps options to see csh's full command line), and as a subprocess you'll see ksh executing your script, with further subprocesses under ksh for any external commands the script calls.
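A quick sketch of tree-viewing commands for checking this (the PID is hypothetical):

# Linux: BSD-style tree of all processes, or one process's subtree
ps axjf
pstree -p 221568

# Solaris: ancestors and children of a given PID
ptree 221568

# Portable fallback: eyeball the PPID column yourself
ps -ef | grep -E 'csh|ksh'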
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, or give it a + (or -) as a useful answer.