AIX Interactive Shell - unix

I was hoping I could distinguish between a script run interactively and one run by 'at' or 'cron'. If the user is running a script at the command line I want to send output to their screen, but if it's running via 'at' or 'cron' then the output should go to a log file.
I searched online and saw many suggestions (although not AIX-specific) about using "$-". That sounded promising, but when I tried it, it wasn't as useful as hoped.
If I type 'echo "$-"' at the prompt I get "ims" back. If I create a script with the echo command, it returns "h". If I submit the script via "at" I get "h", and if I have cron run it I get "h".
I also looked at using TERM and PS1, but again they don't allow me to distinguish between a script run by either 'cron' or 'at' versus a script invoked at the command line.
Is there something else I could try on AIX?
Thanks.
Glenn, here's a script I'm running. I get the desired result using "tty -s" but not with "$-". Am I doing something wrong? As you can see, the "$-" test always says I'm not in an interactive shell.
#!/bin/ksh93
echo "tty -s"
if tty -s
then
    echo " - I'm an interactive shell"
else
    echo " - I'm not interactive"
fi
echo "\$-"
case $- in
    *i*) echo " - I'm an interactive shell";;
    *)   echo " - I'm not interactive";;
esac
I get these three result sets when the script is run via 1) the command line, 2) "at", and 3) cron.
command line result
tty -s
- I'm an interactive shell
$-
- I'm not interactive
at result
tty -s
- I'm not interactive
$-
- I'm not interactive
Cron result
tty -s
- I'm not interactive
$-
- I'm not interactive
By running at the command line I mean I'm running "script.ksh >> script.log". If I run "echo $-" from the command line it returns 'ims', but when $- is referenced within a script it always returns 'h'.

If the user is running a script on the command line I want to put output to their screen, but if it's running via 'at' or 'cron' then the output would go to a log file.
That's what standard output and redirection are good for. You can also use the command tty -s to determine whether your standard input is a terminal or not. (If you wish to check your standard output instead: tty -s <&1.)
Example (~projects/tmp/tty_test.sh):
#!/bin/sh
exec >>~projects/tmp/tty_test.log 2>&1
tty
if tty -s; then
    echo terminal
else
    echo otherwise
fi

You want to check if $- contains the letter i. If it does, you have an interactive shell:
case $- in
    *i*) echo "I'm an interactive shell";;
    *)   echo "I'm not interactive";;
esac
Testing
$ ksh
$ case $- in
> *i*) echo "I'm an interactive shell";;
> *) echo "I'm not interactive";;
> esac
I'm an interactive shell
$ ksh <<'END'
> case $- in
> *i*) echo "I'm an interactive shell";;
> *) echo "I'm not interactive";;
> esac
> END
I'm not interactive
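The i flag is a property of the shell invocation, not of the terminal, which is why a script never sees it: the shell running the script was started non-interactively. As a hedged aside, you can force the flag with the -i option even when stdin is not a terminal (sketched here with bash as a stand-in for ksh, which accepts the same option):

```shell
# With -i the shell sets the i flag in $- even though stdin is not a
# terminal here. Job-control warnings are discarded via 2>/dev/null.
bash -i -c 'case $- in
  *i*) echo "interactive";;
  *)   echo "not interactive";;
esac' 2>/dev/null
```

This confirms that $- reports how the shell was started, not whether a terminal is attached, which is exactly why tty -s and [ -t 0 ] behave differently.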
Given the comments below, you probably want to use this builtin test:
if [ -t 0 ]; then
    : # script running interactively
else
    # running non-interactively (cron or at)
    # redirect output to a log file
    exec 1>/path/to/logfile 2>&1
fi
# and now, your script can just output to stdout/stderr
# and the output will go to the screen or to the logfile
echo "hello"
date
print -u2 "bye"
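Note that [ -t 0 ] tests stdin. Since the original invocation was "script.ksh >> script.log", testing stdout with [ -t 1 ] gives a different answer under redirection, which may be the more relevant check. A small sketch (the function name and log path are made up for illustration):

```shell
# [ -t N ] asks whether file descriptor N is attached to a terminal.
# Under "script.ksh >> script.log", stdin may still be a terminal
# while stdout is not.
check_fds() {
    if [ -t 0 ]; then echo "stdin: terminal"; else echo "stdin: not a terminal"; fi
    if [ -t 1 ]; then echo "stdout: terminal"; else echo "stdout: not a terminal"; fi
}

# Simulate the redirected case: stdout sent to a file
check_fds > /tmp/fd_test.log
cat /tmp/fd_test.log
```

When run from a terminal with stdout redirected, the stdin line reports a terminal while the stdout line does not; under cron or at, neither is a terminal.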

Related

Shell Options in Deno

Is there a way to read shell options in Deno? For example, to detect whether the current shell is in interactive mode, I would normally check to see if $- has an i in it:
if [[ $- == *i* ]]; then
echo "interactive"
else
echo "not interactive"
fi
I can of course use Deno.run to execute ['bash', '-c', 'echo $-'], but is there any more elegant way to get access to this information?
EDIT: Actually running a bash command to print out the shell options doesn't appear to work for me either. The subprocess always reports itself as non-interactive.
You can use Deno.isatty to make this determination. Example:
const isInteractive = Deno.isatty(Deno.stdin.rid);
console.log(`${isInteractive ? '' : 'not '}interactive`);

How to get an Rscript to return a status code in non-interactive bash mode

I am trying to get the status code out of an Rscript run non-interactively from a bash script. This step is part of a larger data processing cycle that involves db2 scripts among other things.
So I have the following contents in a script sample.sh:
Rscript --verbose --no-restore --no-save /home/R/scripts/sample.r >> sample.rout
When this sample.sh is run it always returns a status code of 0, irrespective of whether the sample.r script ran fully or errored out in an intermediate step.
I tried the following things, but no luck:
1 - In the sample.sh file, I added an if/else condition on the return code, like below, but it again wrote back 0 despite sample.r failing in one of its functions.
if Rscript --verbose --no-restore --no-save /home/R/scripts/sample.r >> sample.rout
then
echo -e "0"
else
echo -e "1"
fi
2 - I also tried a wrapper script, as in a sample.wrapper.sh file:
r=0
a=$(./sample.sh)
r=$?
echo -e "\n return code of the script is: $a\n"
echo -e "\n The process completed with status: $r"
Here also I did not get the expected '1' on either variable (a or r) in the case of failure of sample.r in an intermediate step. Ideally, I would like a way to capture the error (as '1') in a.
Could someone please advise how to get Rscript to return '0' only when the entire script completes without errors, and '1' in all other cases?
I greatly appreciate the input! Thank you!
I solved the problem by returning the status code in addition to the echo. Below is the code snippet from the sample.sh script. In addition, in the sample.R code I added tryCatch to catch the errors and quit(status = 1).
function fun {
    if Rscript --verbose --no-restore --no-save /home/R/scripts/sample.r > sample.rout 2>&1
    then
        echo -e "0"
        return 0
    else
        echo -e "1"
        return 1
    fi
}
fun
thanks everyone for your inputs.
The above code works for me. I modified it so that I could reuse the function and have it exit when there's an error:
Rscript_with_status () {
    rscript=$1
    if Rscript --vanilla "$rscript"
    then
        return 0
    else
        exit 1
    fi
}
Run R scripts with:
Rscript_with_status /path/to/script/sample.r
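The core mechanism both snippets rely on is that $? holds the exit status of the last command, and a bare if tests that status directly. A minimal sketch, with /bin/false standing in for a failing Rscript invocation (swap in the real command in practice):

```shell
# /bin/false exits with status 1, standing in for an Rscript run
# that failed; /bin/true would stand in for a successful one.
/bin/false
status=$?            # capture the exit status immediately,
                     # before any other command overwrites it
echo "rc=$status"    # a non-zero value signals failure to the caller
```

Capturing $? into a variable right away matters: even an intervening echo resets it.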
Your remote script needs to provide a proper exit status.
You can make a first test by putting, e.g., "exit 1" at the end of the remote script and seeing that it makes a difference.
remote.sh:
#!/bin/sh
exit 1
From local machine:
ssh -l username remoteip /home/username/remote.sh
echo $?
1
But the remote script should also hand you the exit status of the last executed command. Experiment further by modifying your remote script:
#!/bin/sh
#exit 1
/bin/false
The exit status of the remote command will now also be 1.

Running a script according to shebang line

I've got a script on my computer named test.py. What I've been doing so far to run the program is type python test.py into the terminal.
Is there a command on Unix operating systems that doesn't require the user to specify the program he/she uses to run the script but that will instead run the script using whichever program the shebang line is pointing to?
For example, I'm looking for a command that would let me type some_command test.txt into the terminal, and if the first line of test.txt is #!/usr/bin/python, the script would be interpreted as a Python script, but if the first line is #!/path/to/javascript/interpreter, the script would be interpreted as JavaScript.
This is the default behavior of executing a file in general; all you have to do is make the script executable with
chmod u+x test.txt
Then (assuming test.txt is in your current directory) every time you type
./test.txt
it will look at the shebang line and use the program named there to run test.txt.
If you really want to duplicate the built-in functionality, try this:
#!/bin/sh
x=$1
shift
p=$(sed -n 's/^#!//p;q' "$x" | grep .) && exec $p "$x" "$@"
exec "$x" "$@"
echo "$0: $x: No can do" >&2
Maybe call it start to remind you of the similarly useful Windows command.
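The sed invocation does the interesting work: it prints the first line with the leading #! stripped, then quits, and the grep . guards against files with no shebang. In isolation, against a throwaway demo file (path made up for illustration):

```shell
# Write a small demo script and extract the interpreter named
# on its shebang line.
printf '#!/bin/sh\necho hello\n' > /tmp/demo_script
sed -n 's/^#!//p;q' /tmp/demo_script
```

For a file whose first line does not start with #!, sed prints nothing, grep . then fails, and the wrapper above falls through to exec'ing the file directly.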

logging unix "cp" (copy) command response

I am copying some files, so the result can go either way, e.g.:
>cp -R bin/*.ksh ../backup/
>cp bin/file.sh ../backup/bin/
When I execute the above commands the files get copied, with no response from the system if the copy is successful. If not, it prints the error in the terminal itself: cp: file.sh: No such file or directory.
Now, I want to log the error message, or, if it is successful, log my own custom message to a file. How can I do this?
Any help is appreciated.
Thanks
Try writing this in a shell script:
#these three lines are to check if script is already running.
#got this from some site don't remember :(
ME=`basename "$0"`;
LCK="./${ME}.LCK";
exec 8>$LCK;

LOGFILE=~/mycp.log

if flock -n -x 8; then
    # 2>&1 will redirect any error or other output to $LOGFILE
    cp -R bin/*.ksh ../backup/ >> $LOGFILE 2>&1
    # $? is a shell variable that contains the outcome of the last command
    # cp will return 0 if there was no error
    if [ $? -eq 0 ]; then
        echo 'copied successfully' >> $LOGFILE
    fi
fi
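Since the inner test checks $? immediately after the cp, the two steps can also be collapsed into a single if, which avoids the classic pitfall of some other command clobbering $? in between. A sketch with made-up paths:

```shell
# Test cp's exit status directly in the if, logging either outcome.
LOGFILE=/tmp/mycp.log
SRC=/tmp/cp_demo_src
DEST=/tmp/cp_demo_dest

rm -f "$LOGFILE"
echo "data" > "$SRC"

if cp "$SRC" "$DEST" >>"$LOGFILE" 2>&1; then
    echo "copied successfully" >> "$LOGFILE"
else
    echo "copy failed" >> "$LOGFILE"
fi
cat "$LOGFILE"
```

If the source file does not exist, cp's error message lands in the log via 2>&1 and the else branch records "copy failed".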

Cron job stderr to email AND log file?

I have a cron job:
$SP_s/StartDailyS1.sh >$LP_s/MirrorLogS1.txt
Where SP_s is the path to the script and LP_s is the path for the log file. This sends stdout to the log file and stderr to my email.
How do I?:
1) send both stdout AND stderr to the logfile,
2) AND send stderr to email
or to put it another way: stderr to both the logfile and the email, and stdout only to the logfile.
UPDATE:
None of the answers I've gotten so far follow the criteria I set out or seem suited to a cron job.
I saw this, which is intended to "send the STDOUT and STDERR from a command to one file, and then just STDERR to another file" (posted by zazzybob on unix.com), which seems close to what I want to do and I was wondering if it would inspire someone more clever than I:
(( my_command 3>&1 1>&2 2>&3 ) | tee error_only.log ) > all.log 2>&1
I want cron to send STDERR to email rather than 'another file'.
Not sure why nobody mentioned this.
With cron, if you specify MAILTO= in the user's crontab,
STDOUT is already sent via mail.
Example
[temp]$ sudo crontab -u user1 -l
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=user1
# transfer order shipping file every 3 minutes past the quarter hour
3,19,33,48 * * * * /home/user1/.bin/trans.sh
Since I was just looking at the info page for tee (trying to figure out how to do the same thing), I can answer the last bit of this for you.
This is most of the way there:
(( my_command 3>&1 1>&2 2>&3 ) | tee error_only.log ) > all.log 2>&1
but replace "error_only.log" with ">(email_command)"
(( my_command 3>&1 1>&2 2>&3 ) | tee >(/bin/mail -s "SUBJECT" "EMAIL") ) > all.log 2>&1
Note: according to tee's docs this will work in bash, but not in /bin/sh. If you're putting this in a cron script (like in /etc/cron.daily/) then you can just put #!/bin/bash at the top. However, if you're putting it as a one-liner in a crontab, you may need to wrap it in bash -c "".
If you can do with having stdout/err in separate files, this should do:
($SP_s/StartDailyS1.sh 2>&1 >$LP_s/MirrorLogS1.txt.stdout) | tee $LP_s/MirrorLogS1.txt.stderr
Unless I'm missing something:
command 2>&1 >> file.log | tee -a file.log
2>&1 redirects stderr to stdout
>> redirects regular command stdout to logfile
| tee duplicates stderr (from 2>&1) to the logfile and passes it through to stdout, to be mailed by cron to MAILTO
I tested it with
(echo Hello & echo 1>&2 World) 2>&1 >> x | tee -a x
which indeed shows World on the console and both texts within x.
The ugly thing is the duplicate file name. And the different buffering from stdout/stderr might make text in file.log a bit messy I guess.
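The test above can be reproduced with throwaway names: only err should appear on the console (via tee, hence mailable by cron), while the log collects both streams. A sketch using a made-up log path:

```shell
# stdout goes only to the log file; stderr is swapped into the pipe
# (printed by tee) and also appended to the same log.
rm -f /tmp/both.log
( echo out; echo err 1>&2 ) 2>&1 >> /tmp/both.log | tee -a /tmp/both.log
```

The redirections apply left to right: 2>&1 points stderr at the current stdout (the pipe), and only then does >> move stdout to the file, which is why the two streams end up in different places.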
A bit tricky if you want stdout and stderr combined in one file, with stderr yet tee'd into its own stream.
This ought to do it (error-checking, clean-up and generalized robustness omitted):
#! /bin/sh
CMD=..../StartDailyS1.sh
LOGFILE=..../MirrorLogS1.txt
FIFO=/tmp/fifo
>$LOGFILE
mkfifo $FIFO 2>/dev/null || :
tee < $FIFO -a $LOGFILE >&2 &
$CMD 2>$FIFO >>$LOGFILE
stderr is sent to a named pipe, picked up by tee(1) where it is appended to the logfile (wherein is also appended your command's stdout) and tee'd back to regular stderr.
My experience (Ubuntu) is that 'crontab' only emails 'stderr' (I have the output directed to a log file which is then archived). That is useful, but I wanted confirmation that the script ran (even when there were no errors on 'stderr'), and some details about how long it took, which I find is a good way to spot potential trouble.
I found the easiest way to wrap my head around this problem was to write the script with some duplicated 'echo's in it. The regular 'echo's end up in the log file. For the important non-error bits I want in my crontab 'MAILTO' email, I use an 'echo' that is directed to stderr with '1>&2'.
Thus this:
Frmt_s="+>>%y%m%d %H%M%S($HOSTNAME): " # =Format of timestamp: "<YYMMDD HHMMSS>(<machine name>): "
echo `date "$Frmt_s"`"'$0' started." # '$0' is path & filename
echo `date "$Frmt_s"`"'$0' started." 1>&2 # message to stderr
# REPORT:
echo ""
echo "================================================"
echo "================================================" 1>&2 # message to stderr
TotalMins_i=$(( TotalSecs_i / 60 )) # calculate elapsed mins
RemainderSecs_i=$(( TotalSecs_i-(TotalMins_i*60) ))
Title_s="TOTAL run time"
Summary_s=$Summary_s$'\n'$(printf "%-20s%3s:%02d" "$Title_s" $TotalMins_i $RemainderSecs_i)
echo "$Summary_s"
echo "$Summary_s" 1>&2 # message to stderr
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" 1>&2 # message to stderr
echo ""
echo `date "$Frmt_s"`"TotalSecs_i: $TotalSecs_i"
echo `date "$Frmt_s"`"'$0' concluded." # '$0' is path & filename
echo `date "$Frmt_s"`"'$0' concluded." 1>&2 # message to stderr
Sends me an email containing this (when there are no errors, the lines beginning 'ssh:' and 'rsync:' do not appear):
170408 030001(sb03): '/mnt/data1/LoSR/backup_losr_to_elwd.sh' started.
ssh: connect to host 000.000.000.000 port 0000: Connection timed out
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.0]
ssh: connect to host 000.000.000.000 port 0000: Connection timed out
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.0]
================================================
S6 SUMMARY (mins:secs):
'Templates' 2:07
'Clients' 2:08
'Backups' 0:10
'Homes' 0:02
'NetAppsS6' 10:19
'TemplatesNew' 0:01
'S6Www' 0:02
'Aabak' 4:44
'Aaldf' 0:01
'ateam.ldf' 0:01
'Aa50Ini' 0:02
'Aadmin50Ini' 0:01
'GenerateTemplates' 0:01
'BackupScripts' 0:01
TOTAL run time 19:40
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
170408 031941(sb03): '/mnt/data1/LoSR/backup_losr_to_elwd.sh' concluded.
This doesn't satisfy my initial desire to "send both stdout AND stderr to the logfile" (stderr, and only the 'echo'ed lines with '1>&2' go to the email; stdout goes to the log), but I find this is better than my initially imagined solution, as the email finds me and I don't have to go looking for problems in the log file.
I think the solution would be:
$SP_s/StartDailyS1.sh 2>&1 >> $LP_s/MirrorLogS1.txt | tee -a $LP_s/MirrorLogS1.txt
This will:
append standard output to $LP_s/MirrorLogS1.txt
append standard error to $LP_s/MirrorLogS1.txt
print standard error, so that cron will send a mail in case of error
I assume you are using bash. You redirect stdout and stderr like so:
1> LOG_FILE
2> LOG_FILE
To send a mail containing the stderr in the body, do something like this:
2> MESSAGE_FILE
/bin/mail -s "SUBJECT" "EMAIL_ADDRESS" < MESSAGE_FILE
I'm not sure if you can do the above in only one pass, as in:
/bin/mail -s "SUBJECT" "EMAIL_ADDRESS" <2
You could write another cron job to read the log file and "display" it (really just let cron email it to you).
