I've written a script that checks the FTP server for a specific file (named in the format OLO2OLO_$DATE.txt.zip) and then copies it to my local machine:
/usr/bin/ftp -n 93.179.136.9 << !EOF!
user $USR $PASSWD
cd "/0009/Codici Migrazione"
get $FILE
bye
!EOF!
echo "$FILE"
But I'm not getting the desired result from this.
This line triggers the error:
SOURCE_FOLDER="/0009/"Codici Migrazione""
It tries to execute the command Migrazione with the environment variable SOURCE_FOLDER set to /0009/Codici, which doesn't exist.
What you probably wanted to do was:
SOURCE_FOLDER="/0009/Codici Migrazione"
I have a Korn shell script that calls SQL*Plus to check for data and spool it into a .CSV file on UNIX.
This script works fine on Unix: it creates the file and returns 0.
When launching the job from UC4 AppWorx, I want it to attach the spooled UNIX file to the notification sent by the job when it finishes.
I want this to work this way:
1. I launch the job.
2. It checks the data; if data is found, it creates a file with the .CSV extension in the /tmp directory on UNIX.
3. When the job finishes, it sends me an email with the spool file (.CSV) attached.
Is there any way to do this? How can I make it work?
Thanks.
You'll need to create a chain in AppWorx (the docs should be able to walk you through it). This chain will have one or more jobs.
First, you don't need a ksh script to call SQL*Plus. You can invoke SQL*Plus directly. Write the script as a .sql file (it can include SQL*Plus directives, SQL, and PL/SQL as needed). Set the job's "program type" to AWSQLP and point it at the .sql script that you have made available to AppWorx.
The SQL*Plus script can use logic to decide whether it should create a file. If it should, it can write files out directly (though getting proper .csv files from it can be a pain).
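For example, here is a minimal sketch of such a .sql script; the table my_table, the delimiter, and the path /tmp/out.csv are all placeholder assumptions. It uses the common trick of spooling to /dev/null when there is nothing to report:

whenever sqlerror exit 1
set heading off feedback off pagesize 0 trimspool on
-- Decide the spool target based on whether there is any data.
column fname new_value spool_target
select case when count(*) > 0 then '/tmp/out.csv' else '/dev/null' end fname from my_table;
spool &spool_target
select col1 || ';' || col2 from my_table;
spool off
exit 0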
Then, attach a notification to the job, and the notification object should be set to do an email attachment. You'll have to use the "pattern" type, and put in a full filepath to the csv. Substitution variables can be used if you want a new filename each invocation.
Depending on your version, some of these options can be moved around a bit (we just upgraded last year; UC4 no longer owns it). Click on the help menu and go to the documentation entry... it's not the best in the world, but far from the worst.
First of all, thanks for answering.
I usually create a job, let's say SEM_CHECK_THINGS, with one prompt defined in UC4 that runs a query against the database to check whether a table, test_table for example, has data. To do this I use: select decode(count(*), 0, 'N', 'Y') from test_table;
This job also executes a simple Korn shell script on Unix:
KShell Content:
echo "Job Name: $1"
echo "Job Control Flag: $2"
jobName=$1
jobFlag=$2
echo "Job ${jobName} started ..." >> $logFile
date >> $logFile
if [[ ${jobFlag} == "Y" ]]; then
echo "Job ${jobName} executed successfully with data found." >> $logFile
echo "Job ${jobName} executed successfully with data found."
exit 1
else
echo "Job ${jobName} finished with no data found." >> $logFile
echo "Job ${jobName} finished with no data found."
exit 0
fi
I usually force "ABORT" by using Exit 1 if data is found to request another Job that will execute an .SQL that will spool the data from the test_table.
whenever sqlerror exit sql.sqlcode
whenever sqlerror exit 1
prompt this is a test
set echo off
set trimspool on
set trimout off
set linesize 1500
set feedback on
set newpage none
SET HEADING OFF
set und off
set pagesize 10000
alter session set nls_date_format = 'dd-MON-yyyy HH24:MI:SS';
spool &1
SELECT 'PREV_RESULTSET;LAST_RESULTSET;NR_COUNT' FROM DUAL
UNION ALL
SELECT PREV_RESULTSET||';'||LAST_RESULTSET||';'||COUNT(1) NR_COUNT
FROM SEM_REPORT_PEDIDOS
GROUP BY PREV_RESULTSET, LAST_RESULTSET;
spool off
exit 1
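For reference, &1 in the spool command is SQL*Plus's first command-line argument, so the invocation looks something like this (the script name and credentials are placeholders):

sqlplus -s "$DBUSER/$DBPASS" @spool_report.sql /tmp/testspool.csv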
By using the Spool &1 and by "Hardcoding" the file testspool.csv in the "Other Output" option in the Notification of that Job i managed to do this, to receive an email with the content i need/want from that table.
But i want really want its to make this in one single job, make a validation, if data is found then Spool and attach .CSV file to the email notification sent by that job.
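One way to collapse this into a single job is to let one script do both the check and the spool, and let the job's notification attach the file. A sketch only, with placeholder credentials ($DBUSER/$DBPASS) and a hypothetical script name; the spool path must match the notification's "Other Output" pattern:

#!/bin/ksh
spoolFile=/tmp/testspool.csv
flag=$(sqlplus -s "$DBUSER/$DBPASS" <<'EOF'
set heading off feedback off pagesize 0
select decode(count(*), 0, 'N', 'Y') from SEM_REPORT_PEDIDOS;
exit
EOF
)
# Spool only when the check found data; the notification then attaches the file.
if [ "$flag" = "Y" ]; then
    sqlplus -s "$DBUSER/$DBPASS" @spool_report.sql "$spoolFile"
fi
exit 0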
I have a requirement to connect to an FTP server from Unix, and download a particular file which has the most recent date/time stamp.
For example here is what the file name might look like: FILE_NAME_W5215.ZIP
The "W5215" part is the date/time stamp.
If I was to try and get the latest file locally I would do something like this:
ls -t FILE_NAME_W*.ZIP | head -1
however that doesn't work on the remote server.
I don't know what OS the FTP server is running on. I know that when I establish the connection, a lot of the commands that I can do locally on Unix don't work when I'm connected to the FTP.
Any ideas, thoughts, suggestions would be greatly appreciated.
You can do something like this:
Get the file list on the FTP server into a temp file:
ftp -n $SERVER >tempfile <<EOF
user $USER $PASSWORD
ls -t
quit
EOF
Get the latest filename from the list:
filename=`cut -c57- tempfile | head -1`
Note: in the ls listing, the filename starts at the 57th column; change this offset if necessary.
Now get that particular file from the FTP server:
ftp -n $SERVER<<EOF
user $USER $PASSWORD
get $filename
quit
EOF
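Incidentally, the fixed 57-column cut above is fragile across servers. A less position-dependent sketch (assuming the filenames contain no spaces) takes the last whitespace-separated field of the newest entry instead:

filename=`head -1 tempfile | awk '{print $NF}'`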
I'm running the following command (where the variables have valid values for the ssh command, and $file is a .sql file).
nohup ssh -qn ${ssh_user}@${dbs} "sqlplus $dbuser/${dbpswd}@${dbname} <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
When I used the above command without nohup before the ssh command, after an hour or so my connection from the source server (where I'm running ssh) would get a "Connection reset...." error/message, hanging my bash shell script (which contains this ssh command). When I use nohup, I don't see the connection issue.
Here's what I'm trying to get; I need your help with the following.
Change the command shown above so that it will NOT create a nohup.out file.
(Did I read that I can use > instead of | tee ... and use 2>&1?)
I DO NOT want to run the command with a "&" (in the background).
I DO want a LOG file for the sqlplus session that's running on the target DB server via the ssh command/connection (initiated from the source server).
Thanks.
You can still lose the connection when running ssh under nohup, so it's not really a good solution. If possible, I would recommend that you copy the .sql file via scp to the target server, then ssh in to the server, open a screen session, and run the command from there (or run it under nohup). Is that an option?
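A sketch of that approach, reusing the question's variables; the screen session name is an assumption, and yourfile.sql stands in for the staged $file:

# From the source server: stage the script, then log in.
scp "$file" ${ssh_user}@${dbs}:/tmp/
ssh ${ssh_user}@${dbs}
# On the target server: run sqlplus inside screen so a dropped connection doesn't kill it.
screen -S sqlrun
sqlplus $dbuser/${dbpswd}@${dbname} @/tmp/yourfile.sql > /tmp/yourfile.ssh.log 2>&1
# Detach with Ctrl-a d; reattach later with: screen -r sqlrun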
I am currently creating an overnight job that calls a Unix script, which in turn creates and transfers a file using ftp. I would like to check all possible return codes. The man page for ftp doesn't list return codes. Does anyone know where to find a list? Does anyone have experience with this? We have other scripts that grep for certain return strings in the log and send an email on error. However, they often miss unanticipated codes.
I am then putting the reason into the log and the email.
The ftp command does not return anything other than zero on most implementations that I've come across.
It's much better to process the three-digit codes in the log, and if you're sending a binary file, you can check that the number of bytes sent was correct.
The three-digit codes are called 'series codes' and a list can be found here
I wrote a script to transfer only one file at a time, and in that script I use grep to check for the "226 Transfer complete" message. If it finds it, grep returns 0.
ftp -niv < "$2"_ftp.tmp | grep "^226 "
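Building on that, a sketch of wiring grep's result into the script's own exit status (assuming "$2"_ftp.tmp holds the ftp command file, as above):

if ftp -niv < "$2"_ftp.tmp | grep -q "^226 "; then
    echo "transfer complete"
else
    echo "transfer failed" >&2
    exit 1
fi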
Install the ncftp package. It comes with ncftpget and ncftpput, which will each attempt to download/upload a single file and return a descriptive error code if there is a problem. See the "Diagnostics" section of the man page.
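A usage sketch; the host, directories, and file name are placeholders:

# ncftpget exits nonzero and prints a diagnostic when the download fails.
ncftpget -u "$FTPUSER" -p "$FTPPASS" ftp.example.com /local/dir /remote/dir/file.txt || exit 1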
I think it is easier to run ftp and check its exit code to see if something went wrong.
I did it like the example below:
# ...
ftp -i -n $HOST 2>&1 1> $FTPLOG << EOF
quote USER $USER
quote PASS $PASSWD
cd $RFOLDER
binary
put $FOLDER/$FILE.sql.Z $FILE.sql.Z
bye
EOF
# Check the ftp utility's exit code (0 is OK; everything else means an error occurred!)
EXITFTP=$?
if test $EXITFTP -ne 0; then echo "$D ERROR FTP" >> $LOG; exit 3; fi
if (grep "^Not connected." $FTPLOG); then echo "$D ERROR FTP CONNECT" >> $LOG; fi
if (grep "No such file" $FTPLOG); then echo "$D ERROR FTP NO SUCH FILE" >> $LOG; fi
if (grep "access denied" $FTPLOG ); then echo "$D ERROR FTP ACCESS DENIED" >> $LOG; fi
if (grep "^Please login" $FTPLOG ); then echo "$D ERROR FTP LOGIN" >> $LOG; fi
Edit: to catch errors I grep the output of the ftp command, but truly it's not the best solution.
I don't know how familiar you are with a scripting language like Perl, Python or Ruby. They all have an FTP module which can be used. This enables you to check for errors after each command. Here is an example in Perl:
#!/usr/bin/perl -w
use Net::FTP;
$ftp = Net::FTP->new("example.net") or die "Cannot connect to example.net: $@";
$ftp->login("username", "password") or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub") or die "Cannot change working directory ", $ftp->message;
$ftp->binary;
$ftp->put("foo.bar") or die "Failed to upload ", $ftp->message;
$ftp->quit;
For this logic to work, you need to redirect STDERR from the ftp command as well, as below:
ftp -i -n $HOST >$FTPLOG 2>&1 << EOF
The command below will always assign 0 (success), because the ftp command won't report success or failure through its exit code, so you should not depend on it:
EXITFTP=$?
Lame answer, I know, but how about getting the ftp sources and seeing for yourself?
I like the solution from Anurag; for the bytes-transferred problem I have extended the command with grep -v "byte",
i.e.
grep "^530" ftp_out2.txt | grep -v "byte"
Instead of 530 you can use all the error codes, as Anurag did.
You said you wanted to FTP the file there, but you didn't say whether the regular BSD FTP client was the only way you wanted to get it there. BSD FTP doesn't give you a return code for error conditions, necessitating all that parsing, but there is a whole series of other Unix programs that can be used to transfer files by FTP, if you or your administrator will install them. I will give you some examples of ways to transfer a file by FTP while still catching all error conditions with a small amount of code.
FTPUSER is your ftp user login name
FTPPASS is your ftp password
FILE is the local file you want to upload, without any path info (e.g. file1.txt, not /whatever/file1.txt or whatever/file1.txt)
FTPHOST is the remote machine you want to FTP to
REMOTEDIR is an ABSOLUTE PATH to the location on the remote machine you want to upload to
Here are the examples:
curl --user $FTPUSER:$FTPPASS -T $FILE ftp://$FTPHOST/%2f$REMOTEDIR
ftp-upload --host $FTPHOST --user $FTPUSER --password $FTPPASS --as $REMOTEDIR/$FILE $FILE
tnftp -u ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE $FILE
wput $FILE ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE
All of these programs will return a nonzero exit code if anything at all goes wrong, along with text that indicates what failed. You can test for this and then do whatever you want with the output: log it, email it, etc.
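For example, a sketch of such a test around the curl invocation above; the log file name and mail recipient are placeholder assumptions:

if ! curl --user "$FTPUSER:$FTPPASS" -T "$FILE" "ftp://$FTPHOST/%2f$REMOTEDIR/" > upload.log 2>&1; then
    # curl exited nonzero; the log holds its diagnostic text.
    mailx -s "FTP upload failed" admin@example.com < upload.log
    exit 1
fi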
Please note the following however:
"%2f" is used in URLs to indicate that the following path is an absolute path on the remote machine. However, if your FTP server chroots you, you won't be able to bypass this.
for the commands above that use an actual URL (ftp://etc) to the server with the user and password embedded in it, the username and password MUST be URL-encoded if it contains special characters.
In some cases you can be flexible with the remote directory being absolute and local file being just the plain filename once you are familiar with the syntax of each program. You might just have to add a local directory environment variable or just hardcode everything.
IF you really, absolutely MUST use the regular FTP client, one way you can test for failure is, inside your script, to include first a command that PUTs the file, followed by another that does a GET of the same file, returning it under a different name. After FTP exits, simply test for the existence of the downloaded file in your shell script, or even checksum it against the original to make sure it transferred correctly. Yeah, that stinks, but in my opinion it is better to have code that is easy to read than to do tons of parsing for every possible error condition. BSD FTP is just not all that great.
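A minimal sketch of that round-trip check; host, credentials, and the .check suffix are placeholders:

ftp -n "$FTPHOST" << EOF
user $FTPUSER $FTPPASS
put $FILE
get $FILE $FILE.check
bye
EOF
# If the round trip failed, the downloaded copy is missing or differs.
if ! cmp -s "$FILE" "$FILE.check"; then
    echo "FTP transfer verification failed" >&2
    exit 1
fi
rm -f "$FILE.check"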
Here is what I finally went with. Thanks for all the help; all the answers helped lead me in the right direction.
It may be a little overkill, checking both the result and the log, but it should cover all of the bases.
echo "open ftp_ip
pwd
binary
lcd /out
cd /in
mput datafile.csv
quit"|ftp -iv > ftpreturn.log
ftpresult=$?
bytesindatafile=`wc -c datafile.csv | cut -d " " -f 1`
bytestransferred=`grep -e '^[0-9]* bytes sent' ftpreturn.log | cut -d " " -f 1`
ftptransfercomplete=`grep -e '226 ' ftpreturn.log | cut -d " " -f 1`
echo "-- FTP result code: $ftpresult" >> ftpreturn.log
echo "-- bytes in datafile: $bytesindatafile bytes" >> ftpreturn.log
echo "-- bytes transferred: $bytestransferred bytes sent" >> ftpreturn.log
if [ "$ftpresult" != "0" ] || [ "$bytestransferred" != "$bytesindatafile" ] || ["$ftptransfercomplete" != "226" ]
then
echo "-- *abend* FTP Error occurred" >> ftpreturn.log
mailx -s 'FTP error' `cat email.lst` < ftpreturn.log
else
echo "-- file sent via ftp successfully" >> ftpreturn.log
fi
Why not just store all output from the command in a log file, then check the command's return code and, if it's not 0, send the log file in the email?
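A sketch of that simpler pattern; the host and recipient are placeholders (though, as noted above, many ftp clients exit 0 even on failure, so checking the log as well is safer):

ftp -inv ftp.example.com > ftp.log 2>&1 << EOF
user $USER $PASSWORD
put datafile.csv
bye
EOF
if [ $? -ne 0 ]; then
    mailx -s "FTP job failed" admin@example.com < ftp.log
fi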