Script code to check whether a file exists in FTP - unix

Can someone help me write a Unix script to check for the file ABC.txt in an FTP location and report "Ok" or "No File"? The result "Ok" or "No File" should be written to the file XYZ.
Regards,
Sriram

If I understand your question right:
#!/bin/bash
# List the file; temp.txt will contain one line if ABC.txt exists, none otherwise
ls -l ABC.txt > temp.txt 2>/dev/null
# Count the lines and strip everything except the digits
wc -l temp.txt > temp2.txt
some_var=$(sed 's/[^0-9]//g' temp2.txt)
if [ "$some_var" -eq 1 ]
then
echo "Ok"
else
echo "No file"
fi
rm -f temp.txt temp2.txt
That's the script that answers the question "Is ABC.txt present in the directory?"
But you need to log in to the remote host in order to use this, so: follow the routine described here http://www.linuxproblem.org/art_9.html to be able to SSH to the given host without requiring a password, and then add the following line at the top of the above script: "ssh user@host".
Hope that helps.
P.S. Using SSH is way easier, and I think it's better than using FTP :)
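To tie it back to the original question, here is a minimal sketch of the remote check over passwordless SSH; the user, host, and remote path are placeholders you would replace with your own, and the result is written to the file XYZ as asked:
#!/bin/bash
# Placeholders - replace with your real remote user, host, and path
REMOTE="user@host"
REMOTE_FILE="/path/on/remote/ABC.txt"
RESULT_FILE="XYZ"
# "test -f" on the remote side exits 0 only if the file exists as a regular file
if ssh "$REMOTE" "test -f '$REMOTE_FILE'"; then
echo "Ok" > "$RESULT_FILE"
else
echo "No File" > "$RESULT_FILE"
fi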

Related

shell script for checking files in a directory with count

I am trying to write a shell script to check the files in a particular path.
If the files are available I need to get a success mail, otherwise I need to get a failure mail.
The problem with my script is that even if only 1 file is available I get the success mail, but daily I receive 9 files; if even 1 file is missing I need to get the failure mail. Please help me write a script for the above logic.
cd /file path
if [ -f $(date '+%Y%m%d') file name ]; then
echo "Hi Team, Input Files have been received successfully" |  mailx -s "SUCCESS" -r "FILE_CHK" userid#doamin.com
else
echo "Hi Team, Input Files have NOT been received . Please check" |  mailx -s "FAILED" -r "FILE_CHK" userid#doamin.com
fi
exit
You need to check that all the files exist and only if this is the case send the mail that it was successful. If you have only one file not present, then you should directly send the error message.
The following code prototype does the trick:
#!/bin/bash
files=( "file1" "file2" "file3" )
for i in "${files[#]}"
do
echo "Checking if file: $i exists."
if [ ! -f $i ]; then
echo "Hi Team, Input File $i has NOT been received! Please check" | mailx -s "FAILED" -r "FILE_CHK" userid#doamin.com
exit 0;
fi
done
echo "Hi Team, Input Files have been received successfully" | mailx -s "SUCCESS" -r "FILE_CHK" userid#doamin.com
Basically you have a list of files you need to check; you check element by element that each one exists, and if any element of the list is not present you send the failure notification by mail and exit.
If and only if all the files are present do you send the success message.
Last but not least, this script is just a skeleton that you need to adapt to your particular needs (adding a timestamp etc).
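For example, if your 9 daily files carry a date stamp in their names, the list could be built like this; the directory and file name patterns here are only assumptions to illustrate the idea:
#!/bin/bash
# Assumed directory and name pattern - adjust to your real 9 daily files
dir=/file_path
today=$(date '+%Y%m%d')
files=( "$dir/${today}_file1.txt" "$dir/${today}_file2.txt" "$dir/${today}_file3.txt" )
# ... list all 9 expected files here
for i in "${files[@]}"
do
if [ ! -f "$i" ]; then
echo "Hi Team, Input File $i has NOT been received! Please check" | mailx -s "FAILED" -r "FILE_CHK" userid@domain.com
exit 0
fi
done
echo "Hi Team, Input Files have been received successfully" | mailx -s "SUCCESS" -r "FILE_CHK" userid@domain.com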

Unix script changing directory

I am in the root directory, and I am creating a script that will take me from root > Home > Logs and, inside Logs, delete 3 log files.
The script will check if they exist; if YES, it will delete them.
I am facing some syntax problems, if you could help.
Thanks
My code:
#!/bin/sh
cd Home/Log
if [ -e error1.log ]
then
rm error1
fi
if [ -e error2.log ]
then
rm error1
fi
if [ -e error3.log ]
then
rm error1
fi
When I execute the file in root using ./delete, here is what I am getting as errors:
$ ./delete
: No such file or directoryme/Log
./delete: line 14: syntax error near unexpected token `fi'
I am in root directory
When writing a script, it's almost always better not to assume things like that. If you know where the files are and it's not important that they're somewhere relative to what happens to be your current working directory, just name them.
Here are three ways you could accomplish what you want safely.
#!/bin/sh
dir=/Home/Log
rm -f ${dir}/error1.log ${dir}/error2.log ${dir}/error3.log
or
#!/bin/sh
dir=/Home/Log
rm -f ${dir}/error{1,2,3}.log
or
#!/bin/sh
set -e
cd /Home/Log && rm -f error1.log error2.log error3.log
For anything nontrivial, set -e is your friend. In your example, nothing happens later in the script. What you don't want is to keep going thinking you've changed directories, but haven't, and wind up scribbling somewhere you didn't intend. Many have lost much that way.
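As a minimal illustration of why set -e helps here (same paths as above):
#!/bin/sh
set -e                                   # stop the script as soon as any command fails
cd /Home/Log                             # if this cd fails, execution stops right here ...
rm -f error1.log error2.log error3.log   # ... so the rm can never run in the wrong directory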

Shell script to sort & mv file based on date

I'm new to Unix. I have searched a lot of info but still don't know how to do this in bash.
What I know is to use the command ls -tr|xargs -i ksh -c "mv {} ../tmp/" to move file by file.
Now I need to make a script that sorts all of these files by system date and moves them into a directory, with only the 1000 oldest files being moved.
Example files are like these:
KPK.AWQ07102011.66.6708.01
KPK.AWQ07102011.68.6708.01
KPK.EER07102011.561.8312.13
KPK.WWS07102011.806.3287.13
----------- This is the script that I have created -------
if [ ! -d /app/RAID/Source_Files/test/testfolder ] then
echo "test directory does not exist!"
mkdir /app/RAID/Source_Files/calvin/testfolder
echo "unused_file directory created!"
fi
echo "Moving xx oldest files to test directory"
ls -tr /app/RAID/Source_Files/test/*.Z|head -1000|xargs -i ksh -c "mv {} /app/RAID/Source_Files/test/testfolder/"
The problems with this script are:
1) Unix prompts a syntax error near 'if'
2) The move command works, but it creates a new file named testfolder instead of moving into the directory testfolder (testfolder has already been created in this path)
Can anyone give me a hand? Thanks
Could this help?
mv `ls -tr|head -1000` ../tmp/
head -n takes the first n lines of the previous command's output (here the 1000 oldest files). The backticks allow the result of the ls and head commands to be used as arguments to mv.
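Alternatively, if you want to keep your own script's structure, a sketch along these lines should work. Note the semicolon before then (its absence is what caused the syntax error), and note that your if branch creates .../calvin/testfolder while the mv targets .../test/testfolder, so the destination directory may not actually exist when the mv runs; the paths below are assumptions to adjust to your environment:
#!/bin/sh
src=/app/RAID/Source_Files/test
dest=/app/RAID/Source_Files/test/testfolder
# a ";" (or a newline) is required between the test and "then"
if [ ! -d "$dest" ]; then
echo "test directory does not exist!"
mkdir -p "$dest"
echo "testfolder directory created!"
fi
echo "Moving 1000 oldest files to test directory"
# -tr lists oldest first; take the first 1000 and move them one by one
ls -tr "$src"/*.Z | head -1000 | xargs -i ksh -c "mv {} $dest/"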

shell script help - checking if a file exists

I'm not sure why this code isn't working. It's not getting to the copy command.
I successfully run this manually on the command line (without the check).
I don't think I'm performing a correct file check? Is there a better, cleaner way to write this?
I just want to make sure the file exists and, if so, copy it over. Thanks.
#!/bin/bash
if [ $# != 1 ]; then
echo "Usage: getcnf.sh <remote-host>" 2>&1
exit 1
fi
#Declare variables
HOURDATE=`date '+%Y%m%d%H%M'`
STAMP=`date '+%Y%m%d-%H:%M'`
REMOTE_MYCNF=/var/log/mysoft/mysoft.log
BACKUP_DIR=/home/mysql/dev/logs/
export REMOTE_MYCNF HOURDATE STAMP
#Copy file over
echo "Checking for mysoft.log file $REMOTE_MYCNF $STAMP" 2>&1
if [ -f $REMOTE_MYCNF ]; then
echo "File exists lets bring a copy over...." 2>&1
/usr/bin/scp $1:$REMOTE_MYCNF $BACKUP_DIR$1.mysoft.log
echo "END CP" 2>&1
exit 0
else
echo "Unable to get file" 2>&1
exit 0
fi
You are checking for a file that exists on the remote computer; it seems like you should do:
ssh $host "test -f $file"
if [ $? = 0 ]; then
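Expanded into a fuller sketch using the script's own variables (with $1 as the remote host), the copy section might become:
# run the existence test on the remote host given in $1 instead of locally
if ssh "$1" "test -f $REMOTE_MYCNF"; then
echo "File exists lets bring a copy over...."
/usr/bin/scp "$1:$REMOTE_MYCNF" "$BACKUP_DIR$1.mysoft.log"
else
echo "Unable to get file"
fi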
use sh -x script.sh to see what is happening.
You are testing for the existence of a remote file
$1:$REMOTE_MYCNF
using the local name $REMOTE_MYCNF. The if test is never satisfied.
You don't check that $1 is set.
Your file check runs on the local machine - not on the remote.
Change your if to:
if [ ! -f $REMOTE_MYCNF -o ! -d $REMOTE_MYCNF ];

Checking ftp return codes from Unix script

I am currently creating an overnight job that calls a Unix script which in turn creates and transfers a file using ftp. I would like to check all possible return codes. The man page for ftp doesn't list return codes. Does anyone know where to find a list? Anyone with experience with this? We have other scripts that grep for certain return strings in the log, and they send an email when in error. However, they often miss unanticipated codes.
I am then putting the reason into the log and the email.
The ftp command does not return anything other than zero on most implementations that I've come across.
It's much better to process the three-digit codes in the log - and if you're sending a binary file, you can check that the number of bytes sent was correct.
The three-digit codes are called 'series codes' and a list can be found here
I wrote a script to transfer only one file at a time and in that script use grep to check for the 226 Transfer complete message. If it finds it, grep returns 0.
ftp -niv < "$2"_ftp.tmp | grep "^226 "
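Since grep's exit status then tells you whether the 226 line was found, the calling script can branch on it directly, for example:
if ftp -niv < "$2"_ftp.tmp | grep "^226 " > /dev/null; then
echo "Transfer complete"
else
echo "Transfer failed" >&2
exit 1
fi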
Install the ncftp package. It comes with ncftpget and ncftpput which will each attempt to upload/download a single file, and return with a descriptive error code if there is a problem. See the “Diagnostics” section of the man page.
I think it is easier to run ftp and check the exit code of ftp to see if something went wrong.
I did this like the example below:
# ...
ftp -i -n $HOST 2>&1 1> $FTPLOG << EOF
quote USER $USER
quote PASS $PASSWD
cd $RFOLDER
binary
put $FOLDER/$FILE.sql.Z $FILE.sql.Z
bye
EOF
# Check the ftp util exit code (0 is ok, every else means an error occurred!)
EXITFTP=$?
if test $EXITFTP -ne 0; then echo "$D ERROR FTP" >> $LOG; exit 3; fi
if (grep "^Not connected." $FTPLOG); then echo "$D ERROR FTP CONNECT" >> $LOG; fi
if (grep "No such file" $FTPLOG); then echo "$D ERROR FTP NO SUCH FILE" >> $LOG; fi
if (grep "access denied" $FTPLOG ); then echo "$D ERROR FTP ACCESS DENIED" >> $LOG; fi
if (grep "^Please login" $FTPLOG ); then echo "$D ERROR FTP LOGIN" >> $LOG; fi
Edit: To catch errors I grep the output of the ftp command, but truly it's not the best solution.
I don't know how familiar you are with a scripting language like Perl, Python or Ruby. They all have an FTP module which you can use. This enables you to check for errors after each command. Here is an example in Perl:
#!/usr/bin/perl -w
use Net::FTP;
$ftp = Net::FTP->new("example.net") or die "Cannot connect to example.net: $@";
$ftp->login("username", "password") or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub") or die "Cannot change working directory ", $ftp->message;
$ftp->binary;
$ftp->put("foo.bar") or die "Failed to upload ", $ftp->message;
$ftp->quit;
For this logic to work, the user needs to redirect STDERR as well from the ftp command, as below:
ftp -i -n $HOST >$FTPLOG 2>&1 << EOF
The command below will always be assigned 0 (success), because the ftp command won't return success or failure, so the user should not depend on it:
EXITFTP=$?
Lame answer, I know, but how about getting the ftp sources and seeing for yourself?
I like the solution from Anurag; for the bytes-transferred problem I have extended the command with grep -v "byte",
i.e.
grep "^530" ftp_out2.txt | grep -v "byte"
Instead of 530 you can use all the error codes as Anurag did.
You said you wanted to FTP the file there, but you didn't say whether or not regular BSD FTP client was the only way you wanted to get it there. BSD FTP doesn't give you a return code for error conditions necessitating all that parsing, but there are a whole series of other Unix programs that can be used to transfer files by FTP if you or your administrator will install them. I will give you some examples of ways to transfer a file by FTP while still catching all error conditions with little amounts of code.
FTPUSER is your ftp user login name
FTPPASS is your ftp password
FILE is the local file you want to upload without any path info (e.g. file1.txt, not /whatever/file1.txt or whatever/file1.txt)
FTPHOST is the remote machine you want to FTP to
REMOTEDIR is an ABSOLUTE PATH to the location on the remote machine you want to upload to
Here are the examples:
curl --user $FTPUSER:$FTPPASS -T $FILE ftp://$FTPHOST/%2f$REMOTEDIR
ftp-upload --host $FTPHOST --user $FTPUSER --password $FTPPASS --as $REMOTEDIR/$FILE $FILE
tnftp -u ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE $FILE
wput $FILE ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE
All of these programs will return a nonzero exit code if anything at all goes wrong, along with text that indicates what failed. You can test for this and then do whatever you want with the output, log it, email it, etc as you wished.
Please note the following however:
"%2f" is used in URLs to indicate that the following path is an absolute path on the remote machine. However, if your FTP server chroots you, you won't be able to bypass this.
for the commands above that use an actual URL (ftp://etc) to the server with the user and password embedded in it, the username and password MUST be URL-encoded if they contain special characters.
In some cases you can be flexible with the remote directory being absolute and local file being just the plain filename once you are familiar with the syntax of each program. You might just have to add a local directory environment variable or just hardcode everything.
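As an example of that error handling, a sketch built around curl and the variables above might look like this; the log file name and mail recipient are placeholders:
#!/bin/sh
LOGFILE=ftp_upload.log
if curl --user "$FTPUSER:$FTPPASS" -T "$FILE" "ftp://$FTPHOST/%2f$REMOTEDIR/" > "$LOGFILE" 2>&1
then
echo "Upload of $FILE succeeded" >> "$LOGFILE"
else
# curl's nonzero exit code means something went wrong; mail the log
mailx -s "FTP upload of $FILE failed" admin@example.com < "$LOGFILE"
exit 1
fi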
IF you really, absolutely MUST use regular FTP client, one way you can test for failure is by, inside your script, including first a command that PUTs the file, followed by another that does a GET of the same file returning it under a different name. After FTP exits, simply test for the existence of the downloaded file in your shell script, or even checksum it against the original to make sure it transferred correctly. Yeah that stinks, but in my opinion it is better to have code that is easy to read than do tons of parsing for every possible error condition. BSD FTP is just not all that great.
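A rough sketch of that put-and-fetch-back check (host, credentials, and file name are placeholders):
#!/bin/sh
# Placeholders - replace with your own host, credentials and file
HOST=ftp.example.com
FILE=datafile.csv
# upload the file, then fetch it back under a different name
ftp -inv "$HOST" <<EOF
user myuser mypassword
binary
put $FILE
get $FILE $FILE.check
bye
EOF
# if the round-tripped copy exists and matches the original, the transfer worked
if [ -f "$FILE.check" ] && cmp -s "$FILE" "$FILE.check"; then
echo "Transfer verified"
rm -f "$FILE.check"
else
echo "Transfer failed" >&2
exit 1
fi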
Here is what I finally went with. Thanks for all the help. All the answers help lead me in the right direction.
It may be a little overkill, checking both the result and the log, but it should cover all of the bases.
echo "open ftp_ip
pwd
binary
lcd /out
cd /in
mput datafile.csv
quit"|ftp -iv > ftpreturn.log
ftpresult=$?
bytesindatafile=`wc -c datafile.csv | cut -d " " -f 1`
bytestransferred=`grep -e '^[0-9]* bytes sent' ftpreturn.log | cut -d " " -f 1`
ftptransfercomplete=`grep -e '226 ' ftpreturn.log | cut -d " " -f 1`
echo "-- FTP result code: $ftpresult" >> ftpreturn.log
echo "-- bytes in datafile: $bytesindatafile bytes" >> ftpreturn.log
echo "-- bytes transferred: $bytestransferred bytes sent" >> ftpreturn.log
if [ "$ftpresult" != "0" ] || [ "$bytestransferred" != "$bytesindatafile" ] || ["$ftptransfercomplete" != "226" ]
then
echo "-- *abend* FTP Error occurred" >> ftpreturn.log
mailx -s 'FTP error' `cat email.lst` < ftpreturn.log
else
echo "-- file sent via ftp successfully" >> ftpreturn.log
fi
Why not just store all output from the command to a log file, then check the return code from the command and, if it's not 0, send the log file in the email?
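In its simplest form that would be something like the following; the transfer command, log path, and recipient are placeholders:
#!/bin/sh
LOG=/tmp/ftp_job.log
# replace your_ftp_command with the actual transfer command; capture stdout and stderr
your_ftp_command > "$LOG" 2>&1
if [ $? -ne 0 ]; then
mailx -s "FTP job failed" someone@example.com < "$LOG"
fi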
