zsh: reset stdout/stderr redirection

Something funky I gathered from the zsh man pages is how you can redirect stdout and stderr somewhere else, e.g. to a file. It works like this:
logfile=/tmp/logfile
# Create a file descriptor and associate it with the logfile
integer logfd
exec {logfd}>>${logfile}
echo "This goes to the console"
echo "This also goes to the console" >&2
echo "This goes to the logfile" >&{logfd}
# Now redirect everything to stdout and stderr to the logfile
# No output will be printed on the console
exec >&${logfd} 2>&1
print "This goes to the log file"
print "This also goes to the log file" >&2
For completeness' sake, a file descriptor can be closed by issuing exec {logfd}>&-.
There's just one thing I can't figure out: how do you reset zsh's redirections so that further output is again printed just to the console?

Found it after issuing ls -l /proc/self/fd/. Apparently file descriptor 0 (stdin) can be used: it still points to the console.
So, first we redirect back to that file descriptor:
exec >&0 2>&1
And now the log file can be safely closed:
exec {logfd}>&-
Ideal for scripts.
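A related trick that avoids relying on fd 0: save the original stdout and stderr in their own descriptors before redirecting, and restore from those afterwards. A minimal sketch (zsh; savedout and savederr are just placeholder names):
logfile=/tmp/logfile
# Save copies of the current stdout and stderr
exec {savedout}>&1 {savederr}>&2
# Send everything to the logfile
exec >>${logfile} 2>&1
print "This goes to the log file"
# Restore the originals and close the saved descriptors
exec >&${savedout} 2>&${savederr}
exec {savedout}>&- {savederr}>&-
print "Back on the console"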

Related

Checking for a file of current date in remote server through unix shell script

I've written a script that checks for a specific file of the format "OLO2OLO_$DATE.txt.zip" on the FTP server and then copies it to my local machine:
/usr/bin/ftp -n 93.179.136.9 << !EOF!
user $USR $PASSWD
cd "/0009/Codici Migrazione"
get $FILE
bye
!EOF!
echo "$FILE"
But I'm not getting the desired result from this.
This line triggers the error.
SOURCE_FOLDER="/0009/"Codici Migrazione""
It tries to execute the command Migrazione" with the environment variable SOURCE_FOLDER set to /0009/Codici, which doesn't exist.
What you probably wanted to do was:
SOURCE_FOLDER="/0009/Codici Migrazione"
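And, for what it's worth, the variable then needs quoting where it is used so the space survives. A rough sketch of the ftp session from the question with the corrected variable dropped in (it reuses $USR, $PASSWD, $FILE and the host from above):
SOURCE_FOLDER="/0009/Codici Migrazione"
/usr/bin/ftp -n 93.179.136.9 << !EOF!
user $USR $PASSWD
cd "$SOURCE_FOLDER"
get $FILE
bye
!EOF!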

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, it produces a lot of output. nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
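Putting those pieces together, a fully detached invocation looks something like this (command is a stand-in for whatever you are running):
nohup command </dev/null >/dev/null 2>&1 &
disown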
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
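Note that the order matters, because redirections are processed left to right; a quick illustration, with command as a placeholder:
command >/dev/null 2>&1   # stdout discarded, then stderr sent to the same place: both discarded
command 2>&1 >/dev/null   # stderr sent to where stdout currently points (the terminal), then stdout discarded: errors still reach the screen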
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
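A quick way to see both behaviours (cat simply reads its standard input):
cat </dev/null        # exits immediately: the first read just sees end-of-file
cat >/dev/null <&1    # the merge points stdin at an output-only descriptor, so the read fails instead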
nohup some_command > /dev/null 2>&1&
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command
with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
The following command will let you run something in the background without getting nohup.out:
nohup command | tee &
In this way, you will be able to get console output while running a script on the remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to ask for the password again, thus an awkward mechanism is needed for this variant.
If you have a bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection practically.
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes into the STDOUT filestream (file descriptor 1).
The error command's output goes into the STDERR filestream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection :
./zz.sh > zfile.txt
In the above, "echo" (STDOUT) goes into the zfile.txt. Whereas "error" (STDERR) is displayed on the screen.
The above is the same as :
./zz.sh 1> zfile.txt
Now you can try the opposite, and redirect "error" STDERR into the file. The STDOUT from "echo" command goes to the screen.
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT 1 to zfile.txt
THEN, send STDERR 2 to STDOUT 1 itself (by using &1 pointer).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can pack the whole thing inside nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
For example, I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

logging unix "cp" (copy) command response

I am copying some files, so the result can go either way.
e.g.:
>cp -R bin/*.ksh ../backup/
>cp bin/file.sh ../backup/bin/
When I execute the above commands, the files get copied. There is no response from the system if the copy is successful. If not, it prints the error in the terminal itself, e.g. cp: file.sh: No such file or directory.
Now I want to log the error message, or, if it is successful, log my own custom message to a file. How can I do this?
Any help is appreciated.
Thanks
try writing this in a shell script:
#these three lines are to check if script is already running.
#got this from some site don't remember :(
ME=`basename "$0"`;
LCK="./${ME}.LCK";
exec 8>$LCK;
LOGFILE=~/mycp.log
if flock -n -x 8; then
# 2>&1 will redirect any error or other output to $LOGFILE
cp -R bin/*.ksh ../backup/ >> $LOGFILE 2>&1
# $? is a shell variable that contains the exit status of the last command
# cp will return 0 if there was no error
if [ $? -eq 0 ]; then
echo 'copied successfully' >> $LOGFILE
fi
fi
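A small variation: the exit status can be tested directly in the if, without going through $?. A sketch assuming the same $LOGFILE and paths as above:
if cp -R bin/*.ksh ../backup/ >> "$LOGFILE" 2>&1; then
echo 'copied successfully' >> "$LOGFILE"
else
echo 'copy failed, see the message above' >> "$LOGFILE"
fi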

Cron job stderr to email AND log file?

I have a cron job:
$SP_s/StartDailyS1.sh >$LP_s/MirrorLogS1.txt
Where SP_s is the path to the script and LP_s is the path for the log file. This sends stdout to the log file and stderr to my email.
How do I?:
1) send both stdout AND stderr to the logfile,
2) AND send stderr to email
or to put it another way: stderr to both the logfile and the email, and stdout only to the logfile.
UPDATE:
None of the answers I've gotten so far follow the criteria I set out or seem suited to a cron job.
I saw this, which is intended to "send the STDOUT and STDERR from a command to one file, and then just STDERR to another file" (posted by zazzybob on unix.com), which seems close to what I want to do and I was wondering if it would inspire someone more clever than I:
(( my_command 3>&1 1>&2 2>&3 ) | tee error_only.log ) > all.log 2>&1
I want cron to send STDERR to email rather than 'another file'.
Not sure why nobody mentioned this.
With cron, if you specify MAILTO= in the user's crontab, STDOUT is already sent via mail.
Example
[temp]$ sudo crontab -u user1 -l
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=user1
# transfer order shipping file every 3 minutes past the quarter hour
3,19,33,48 * * * * /home/user1/.bin/trans.sh
Since I was just looking at the info page for tee (trying to figure out how to do the same thing), I can answer the last bit of this for you.
This is most of the way there:
(( my_command 3>&1 1>&2 2>&3 ) | tee error_only.log ) > all.log 2>&1
but replace "error_only.log" with ">(email_command)"
(( my_command 3>&1 1>&2 2>&3 ) | tee >(/bin/mail -s "SUBJECT" "EMAIL") ) > all.log 2>&1
Note: according to tee's docs this will work in bash, but not in /bin/sh. If you're putting this in a cron script (like in /etc/cron.daily/) then you can just put #!/bin/bash at the top. However, if you're putting it as a one-liner in a crontab, then you may need to wrap it in bash -c "..."
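For example, a hypothetical crontab entry wrapping the whole pipeline in bash -c (the paths, subject and address are placeholders):
0 2 * * * bash -c '(( /path/to/my_command 3>&1 1>&2 2>&3 ) | tee >(/bin/mail -s "SUBJECT" "EMAIL") ) > /path/to/all.log 2>&1'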
If you can do with having stdout/err in separate files, this should do:
($SP_s/StartDailyS1.sh 2>&1 >$LP_s/MirrorLogS1.txt.stdout) | tee $LP_s/MirrorLogS1.txt.stderr
Unless I'm missing something:
command 2>&1 >> file.log | tee -a file.log
2>&1 redirects stderr to stdout
>> redirects regular command stdout to logfile
| tee duplicates stderr (from 2>&1) to the logfile and passes it through to stdout, to be mailed by cron to MAILTO
I tested it with
(echo Hello & echo 1>&2 World) 2>&1 >> x | tee -a x
Which indeed shows World in the console and both texts within x
The ugly thing is the duplicate file name. And the different buffering from stdout/stderr might make text in file.log a bit messy I guess.
A bit tricky if you want stdout and stderr combined in one file, with stderr yet tee'd into its own stream.
This ought to do it (error-checking, clean-up and generalized robustness omitted):
#! /bin/sh
CMD=..../StartDailyS1.sh
LOGFILE=..../MirrorLogS1.txt
FIFO=/tmp/fifo
>$LOGFILE
mkfifo $FIFO 2>/dev/null || :
tee < $FIFO -a $LOGFILE >&2 &
$CMD 2>$FIFO >>$LOGFILE
stderr is sent to a named pipe, picked up by tee(1) where it is appended to the logfile (wherein is also appended your command's stdout) and tee'd back to regular stderr.
My experience (ubuntu) is that 'crontab' only emails 'stderr' (I have the output directed to a log file which is then archived). That is useful, but I wanted a confirmation that the script ran (even when no errors to 'stderr'), and some details about how long it took, which I find is a good way to spot potential trouble.
I found the way I could most easily wrap my head around this problem was to write the script with some duplicate 'echo's in it. The extensive regular 'echo's end up in the log file. For the important non-error bits I want in my crontab 'MAILTO' email, I used an 'echo' that is directed to stderr with '1>&2'.
Thus this:
Frmt_s="+>>%y%m%d %H%M%S($HOSTNAME): " # =Format of timestamp: "<YYMMDD HHMMSS>(<machine name>): "
echo `date "$Frmt_s"`"'$0' started." # '$0' is path & filename
echo `date "$Frmt_s"`"'$0' started." 1>&2 # message to stderr
# REPORT:
echo ""
echo "================================================"
echo "================================================" 1>&2 # message to stderr
TotalMins_i=$(( TotalSecs_i / 60 )) # calculate elapsed mins
RemainderSecs_i=$(( TotalSecs_i-(TotalMins_i*60) ))
Title_s="TOTAL run time"
Summary_s=$Summary_s$'\n'$(printf "%-20s%3s:%02d" "$Title_s" $TotalMins_i $RemainderSecs_i)
echo "$Summary_s"
echo "$Summary_s" 1>&2 # message to stderr
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" 1>&2 # message to stderr
echo ""
echo `date "$Frmt_s"`"TotalSecs_i: $TotalSecs_i"
echo `date "$Frmt_s"`"'$0' concluded." # '$0' is path & filename
echo `date "$Frmt_s"`"'$0' concluded." 1>&2 # message to stderr
Sends me an email containing this (when there are no errors, the lines beginning 'ssh:' and 'rsync:' do not appear):
170408 030001(sb03): '/mnt/data1/LoSR/backup_losr_to_elwd.sh' started.
ssh: connect to host 000.000.000.000 port 0000: Connection timed out
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.0]
ssh: connect to host 000.000.000.000 port 0000: Connection timed out
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.0]
================================================
S6 SUMMARY (mins:secs):
'Templates' 2:07
'Clients' 2:08
'Backups' 0:10
'Homes' 0:02
'NetAppsS6' 10:19
'TemplatesNew' 0:01
'S6Www' 0:02
'Aabak' 4:44
'Aaldf' 0:01
'ateam.ldf' 0:01
'Aa50Ini' 0:02
'Aadmin50Ini' 0:01
'GenerateTemplates' 0:01
'BackupScripts' 0:01
TOTAL run time 19:40
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
170408 031941(sb03): '/mnt/data1/LoSR/backup_losr_to_elwd.sh' concluded.
This doesn't satisfy my initial desire to "send both stdout AND stderr to the logfile" (stderr, and only the 'echo'ed lines with '1>&2' go to the email; stdout goes to the log), but I find this is better than my initially imagined solution, as the email finds me and I don't have to go looking for problems in the log file.
I think the solution would be:
$SP_s/StartDailyS1.sh 2>&1 >> $LP_s/MirrorLogS1.txt | tee -a $LP_s/MirrorLogS1.txt
This will:
append standard output to $LP_s/MirrorLogS1.txt
append standard error to $LP_s/MirrorLogS1.txt
print standard error, so that cron will send a mail in case of error
I assume you are using bash. You redirect stdout and stderr like so:
1> LOG_FILE
2> LOG_FILE
to send a mail containing the stderr in the body something like this
2> MESSAGE_FILE
/bin/mail -s "SUBJECT" "EMAIL_ADDRESS" < MESSAGE_FILE
I'm not sure if you can do the above in a single step, like this:
/bin/mail -s "SUBJECT" "EMAIL_ADDRESS" <2
You could try writing another cronjob to read the log file and "display" the log (really just let cron email it to you)

Checking ftp return codes from Unix script

I am currently creating an overnight job that calls a Unix script which in turn creates and transfers a file using ftp. I would like to check all possible return codes. The man page for ftp doesn't list return codes. Does anyone know where to find a list? Anyone with experience with this? We have other scripts that grep for certain return strings in the log, and they send an email when in error. However, they often miss unanticipated codes.
I am then putting the reason into the log and the email.
The ftp command does not return anything other than zero on most implementations that I've come across.
It's much better to process the three-digit codes in the log, and if you're sending a binary file, you can check that the number of bytes sent was correct.
The three-digit codes are called 'series codes' and a list can be found here.
I wrote a script to transfer only one file at a time, and in that script I use grep to check for the "226 Transfer complete" message. If it finds it, grep returns 0.
ftp -niv < "$2"_ftp.tmp | grep "^226 "
Install the ncftp package. It comes with ncftpget and ncftpput which will each attempt to upload/download a single file, and return with a descriptive error code if there is a problem. See the “Diagnostics” section of the man page.
I think it is easier to run ftp and check its exit code to see if something went wrong.
I did it like the example below:
# ...
ftp -i -n $HOST 2>&1 1> $FTPLOG << EOF
quote USER $USER
quote PASS $PASSWD
cd $RFOLDER
binary
put $FOLDER/$FILE.sql.Z $FILE.sql.Z
bye
EOF
# Check the ftp util exit code (0 is ok, every else means an error occurred!)
EXITFTP=$?
if test $EXITFTP -ne 0; then echo "$D ERROR FTP" >> $LOG; exit 3; fi
if (grep "^Not connected." $FTPLOG); then echo "$D ERROR FTP CONNECT" >> $LOG; fi
if (grep "No such file" $FTPLOG); then echo "$D ERROR FTP NO SUCH FILE" >> $LOG; fi
if (grep "access denied" $FTPLOG ); then echo "$D ERROR FTP ACCESS DENIED" >> $LOG; fi
if (grep "^Please login" $FTPLOG ); then echo "$D ERROR FTP LOGIN" >> $LOG; fi
Edit: To catch errors I grep the output of the ftp command, but truly it's not the best solution.
I don't know how familiar you are with a scripting language like Perl, Python or Ruby. They all have an FTP module which can be used. This enables you to check for errors after each command. Here is an example in Perl:
#!/usr/bin/perl -w
use Net::FTP;
$ftp = Net::FTP->new("example.net") or die "Cannot connect to example.net: $#";
$ftp->login("username", "password") or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub") or die "Cannot change working directory ", $ftp->message;
$ftp->binary;
$ftp->put("foo.bar") or die "Failed to upload ", $ftp->message;
$ftp->quit;
For this logic to work, the user needs to redirect STDERR as well from the ftp command, as below:
ftp -i -n $HOST >$FTPLOG 2>&1 << EOF
The command below will always assign 0 (success), because the ftp command won't return success or failure, so the user should not depend on it:
EXITFTP=$?
Lame answer, I know, but how about getting the ftp sources and seeing for yourself?
I like the solution from Anurag; for the bytes-transferred problem I have extended the command with grep -v "bytes",
i.e.
grep "^530" ftp_out2.txt | grep -v "byte"
Instead of 530 you can use all the error codes, as Anurag did.
You said you wanted to FTP the file there, but you didn't say whether or not regular BSD FTP client was the only way you wanted to get it there. BSD FTP doesn't give you a return code for error conditions necessitating all that parsing, but there are a whole series of other Unix programs that can be used to transfer files by FTP if you or your administrator will install them. I will give you some examples of ways to transfer a file by FTP while still catching all error conditions with little amounts of code.
FTPUSER is your ftp user login name
FTPPASS is your ftp password
FILE is the local file you want to upload without any path info (e.g. file1.txt, not /whatever/file1.txt or whatever/file1.txt)
FTPHOST is the remote machine you want to FTP to
REMOTEDIR is an ABSOLUTE PATH to the location on the remote machine you want to upload to
Here are the examples:
curl --user $FTPUSER:$FTPPASS -T $FILE ftp://$FTPHOST/%2f$REMOTEDIR
ftp-upload --host $FTPHOST --user $FTPUSER --password $FTPPASS --as $REMOTEDIR/$FILE $FILE
tnftp -u ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE $FILE
wput $FILE ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE
All of these programs will return a nonzero exit code if anything at all goes wrong, along with text that indicates what failed. You can test for this and then do whatever you want with the output, log it, email it, etc as you wished.
Please note the following however:
"%2f" is used in URLs to indicate that the following path is an absolute path on the remote machine. However, if your FTP server chroots you, you won't be able to bypass this.
for the commands above that use an actual URL (ftp://etc) to the server with the user and password embedded in it, the username and password MUST be URL-encoded if they contain special characters.
In some cases you can be flexible with the remote directory being absolute and local file being just the plain filename once you are familiar with the syntax of each program. You might just have to add a local directory environment variable or just hardcode everything.
IF you really, absolutely MUST use regular FTP client, one way you can test for failure is by, inside your script, including first a command that PUTs the file, followed by another that does a GET of the same file returning it under a different name. After FTP exits, simply test for the existence of the downloaded file in your shell script, or even checksum it against the original to make sure it transferred correctly. Yeah that stinks, but in my opinion it is better to have code that is easy to read than do tons of parsing for every possible error condition. BSD FTP is just not all that great.
Here is what I finally went with. Thanks for all the help. All the answers help lead me in the right direction.
It may be a little overkill, checking both the result and the log, but it should cover all of the bases.
echo "open ftp_ip
pwd
binary
lcd /out
cd /in
mput datafile.csv
quit"|ftp -iv > ftpreturn.log
ftpresult=$?
bytesindatafile=`wc -c datafile.csv | cut -d " " -f 1`
bytestransferred=`grep -e '^[0-9]* bytes sent' ftpreturn.log | cut -d " " -f 1`
ftptransfercomplete=`grep -e '226 ' ftpreturn.log | cut -d " " -f 1`
echo "-- FTP result code: $ftpresult" >> ftpreturn.log
echo "-- bytes in datafile: $bytesindatafile bytes" >> ftpreturn.log
echo "-- bytes transferred: $bytestransferred bytes sent" >> ftpreturn.log
if [ "$ftpresult" != "0" ] || [ "$bytestransferred" != "$bytesindatafile" ] || ["$ftptransfercomplete" != "226" ]
then
echo "-- *abend* FTP Error occurred" >> ftpreturn.log
mailx -s 'FTP error' `cat email.lst` < ftpreturn.log
else
echo "-- file sent via ftp successfully" >> ftpreturn.log
fi
Why not just store all output from the command to a log file, then check the return code from the command and, if it's not 0, send the log file in the email?
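A sketch of that idea, reusing the mailx/email.lst pattern from the accepted answer (ftpjob.sh stands in for whatever script drives the transfer; per the discussion above, that script has to turn its log checks into a nonzero exit, since plain ftp usually exits 0 even on failure):
if ! ./ftpjob.sh > ftpjob.log 2>&1; then
mailx -s 'FTP job failed' `cat email.lst` < ftpjob.log
fi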
