SFTP file to get and remove files from remote server

I'm trying to write an expect script to pull files from a remote server onto a local folder, and delete them from the remote as they're pulled in. The script I have that doesn't remove them is:
#!/usr/bin/expect
spawn sftp -oHostKeyAlgorithms=+ssh-dss sftp://<username>@<ftp server>
expect "<username>@<ftp server>'s password:"
send "<password>\n"
expect "sftp>"
send "cd <folder>\n"
expect "sftp>"
send "get *\n"
expect "sftp>"
send "exit\n"
interact
I could add "rm *" after the get command, but sometimes it loses connection to the server while getting the files and the script stops, so I'd like to remove files as I get them.
I tried setting up a for loop like this:
#!/usr/bin/expect
spawn sftp -oHostKeyAlgorithms=+ssh-dss sftp://<username>@<ftp server>
expect "<username>@<ftp server>'s password:"
send "<password>\n"
expect "sftp>"
send "cd <folder>\n"
expect "sftp>"
set filelist [expr { send "ls -1\n" } ]
foreach file $filelist {
send "get $file\n"
expect "sftp>"
send "rm $file\n"
expect "sftp>"
}
send "exit\n"
interact
But I get:
invalid bareword "send" in expression " send "ls -1\n" "; should be "$send" or "{send}" or "send(...)" or ...
Can anyone tell me what I'm doing wrong in that script, or if there's another way to achieve what I want?

The error message you get comes from the line
set filelist [expr { send "ls -1\n" } ]
This is because the expr command evaluates arithmetic and logical expressions (as documented at https://www.tcl-lang.org/man/tcl8.6/TclCmd/expr.htm), but send "ls -1\n" is not an expression it understands.
If you were trying to read a list of files from your local machine you could do this with
set filelist [exec ls -1]
However, what you really want here is to read a list of files through the ssh connection to the remote machine. This is a little more complicated: you need to use expect to loop over the lines you get back until you see the prompt again, something like this (untested):
send "ls -1\r"
expect {
"sftp>" {}
-re "(.*)\n" {
lappend filelist $expect_out(1,string)
exp_continue
}
}
For more info see https://www.tcl-lang.org/man/expect5.31/expect.1.html and https://www.tcl-lang.org/man/tcl8.6/TclCmd/contents.htm .
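If losing the connection mid-transfer is the main worry, another option avoids expect's interactive parsing entirely: sftp's -b flag runs a batch of commands non-interactively and aborts on the first failing command. A sketch that builds a get-then-rm batch file from a list of remote names (host and file names here are placeholders; -b also disables password prompting, so this assumes key-based or otherwise non-interactive authentication):

```shell
#!/bin/sh
# Emit an sftp batch file that downloads, then deletes, each listed file.
# $1 = file containing one remote filename per line.
make_batch() {
    while IFS= read -r f; do
        [ -n "$f" ] || continue       # skip blank lines
        printf 'get %s\n' "$f"
        printf 'rm %s\n' "$f"
    done < "$1"
}
```

Feeding the output to something like `sftp -b - user@host` then transfers and removes one file at a time, so a dropped connection only costs the file in flight.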

Related

SFTP scripting: unable to change directories

I'm relatively new to using sftp in scripting format (bash shell on Mac OSX High Sierra). I am having issues changing dirs once logged into the remote server. I want to cd to 'FTP PDF (Download) to CR'
Here is my script(edited):
#!/bin/bash
expect -c "
spawn sftp ClaimReturn@8.19.132.155
expect \"Password\"
send \"xxxxxxx\r\"
expect \"sftp>\"
send \"cd CR\ Reports\r\"
#DIR TO CD to "CR REPORTS"
expect \"sftp>\"
send \"bye\r\"
expect \"#\"
"
This is really just a formatted comment expanding on @meuh's comment.
You're having quoting trouble. You could use single quotes or a quoted heredoc to make your life easier.
#!/bin/bash
expect <<'END_EXPECT'
spawn sftp ClaimReturn@8.19.132.155
expect "Password"
send "xxxxxxx\r"
expect "sftp>"
send "cd 'CR Reports'\r"
#DIR TO CD to "CR REPORTS"
expect "sftp>"
send "bye\r"
expect "#"
END_EXPECT
Or, just an expect script:
#!/usr/bin/expect -f
spawn sftp ClaimReturn@8.19.132.155
expect "Password"
send "xxxxxxx\r"
expect "sftp>"
send "cd 'CR Reports'\r"
#DIR TO CD to "CR REPORTS"
expect "sftp>"
send "bye\r"
expect "#"
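The difference the quoted heredoc delimiter makes can be shown with a minimal demo: with an unquoted delimiter the shell expands variables inside the body, while <<'END' passes the body through verbatim, leaving all interpretation to expect:

```shell
#!/bin/sh
# Unquoted heredoc delimiter: the shell expands $var in the body.
# Quoted delimiter ('END'): the body is passed through verbatim.
var=expanded

unquoted=$(cat <<END
$var "quoted"
END
)

quoted=$(cat <<'END'
$var "quoted"
END
)

echo "unquoted: $unquoted"   # unquoted: expanded "quoted"
echo "quoted:   $quoted"     # quoted:   $var "quoted"
```

This is also why the `expect -c "…"` form is so fiddly: every quote and backslash in the expect body has to survive one round of shell parsing first.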

How to transfer in sftp with password? put command not sending the full file

I am new to UNIX and need help finding the correct approach to send a file.
I have to send a big file (1 GB); transferring it manually over sftp takes approximately 10 minutes. We tried the script below because we have to log in with a password.
The problem here is that before the file has transferred completely, the script drops out of the SFTP connection with no error.
Script:
expect -c "
spawn sftp ${remote_user}@${remote_host}
expect \"password\"
send ${remote_pswd}\r
expect sftp>
send \" cd ${remote_path}\r \"
expect sftp>
send \" lcd ${source_path}\r \"
expect sftp>
send \" put ${source_file} \r \"
expect sftp>
send \" echo $? \r \"
expect sftp>
send \"bye\" "
Log:
spawn sftp DataStageIM2@192.168.79.15
DataStageIM2@192.168.79.15's password:
Connected to 192.168.79.15.
sftp> cd /users/StoreStockManagement/ReferenceData/Inbound
sftp> lcd /staging/oretail/external/data/DSPRD/Output/Pricing/INT340
sftp> mput hhtstore_price.dat
Uploading hhtstore_price.dat to /users/StoreStockManagement/ReferenceData/Inbound/hhtstore_price.dat
hhtstore_price.dat 3% 189MB 18.1MB/s 04:31 ETA+ [[ 0 -ne 0 ]]
Here, after transferring 3% of the file, the script exits and I cannot see the file on the remote side. When I try sftp manually it works; it is only with the script that it fails to copy.
Can someone help here?
The default timeout value for the expect command is 10 seconds. So, after put, expect will wait for 10 seconds to see the prompt, then timeout, then continue on with the script.
Clearly you want to wait for however long is necessary to transfer the file, so add this to your script:
set timeout -1
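Putting it together with the asker's placeholders, the line goes right at the top of the expect body, before any command that can outlive the 10-second default (untested sketch):

```
expect -c "
set timeout -1
spawn sftp ${remote_user}@${remote_host}
expect \"password\"
send \"${remote_pswd}\r\"
expect \"sftp>\"
send \"cd ${remote_path}\r\"
expect \"sftp>\"
send \"put ${source_file}\r\"
expect \"sftp>\"
send \"bye\r\"
"
```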

Do expect scripts need a focused window session to work?

I have the following expect script to sync a local folder with a remote one:
#!/usr/bin/expect -f
# Expect script to interact with password-based commands. It synchronizes a local
# folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
set password [lrange $argv 0 0]
set user_ip [lrange $argv 1 1]
set dir1 [lrange $argv 2 2]
set dir2 [lrange $argv 3 3]
set target [lrange $argv 4 4]
set timeout 10
# now connect to remote UNIX box (ipaddr) with given script to execute
spawn rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
It works properly if I execute it in a gnome-terminal window, but it stops at the password request if I execute it without one (such as via the ALT+F2 combination, with cron, or with a startup script).
I haven't found information on whether expect needs an active terminal window to interact correctly.
Has anybody else experienced this strange behaviour? Is it a feature or a bug? Any solution?
Thank you.
Your script has several errors. A quick re-write:
#!/usr/bin/expect -f
# Expect script to interact with password-based commands. It synchronizes a local
# folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
lassign $argv password user_ip dir1 dir2 target
set timeout 10
spawn /bin/sh
set sh_prompt {\$ $}
expect -re $sh_prompt
match_max 100000
# now connect to remote UNIX box (ipaddr) with given script to execute
send "rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
# Look for password prompt
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
}
-re $sh_prompt
}
send "rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
send "ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
Main points:
you were spawning several commands instead of spawning a shell and sending the commands to it
you put a comment outside of an action block (more details below)
the interact command gives control back to the user, which you don't want in a cron script
Why a comment in a multi-pattern expect block is bad:
Tcl doesn't treat comments like other languages do: the comment character only acts like a comment when it appears in a place that a command can go. That's why you see end-of-line comments in expect/tcl code like this
command arg arg ... ;# this is the comment
If that semi-colon were missing, the # would be handled as just another argument for the command.
A multi-pattern expect command looks like
expect pattern1 {body1} pattern2 {body2} ...
or with line continuations
expect \
pattern1 {body1} \
pattern2 {body2} \
...
Or in braces (best style, and as you've written)
expect {
pattern1 {body1}
pattern2 {body2}
...
}
The pattern may optionally be preceded by -exact, -regexp, -glob and --
When you put a comment in there, like this:
expect {
pattern1 {body1}
# this is a comment
pattern2 {body2}
...
}
Expect is not looking for a new command there: it will interpret the block like this
expect {
pattern1 {body1}
# this
is a
comment pattern2
{body2} ...
}
When you put the comment inside an action body, as I've done above, then you're safe because the body is evaluated according to the rules of Tcl (spelled out in the 12 whole rules here).
Phew. Hope that helps. I highly recommend that you check out the book for all the details.
As I commented on Glenn's answer, I saw that the problem wasn't the terminal window but the way the script is called.
My expect script is called several times by another bash script with the bare line "/path/expect-script-name.exp [parameters]". Opening a terminal window (in any desktop environment), I can execute the caller script with "/path/bash-script-name.sh". In this way everything runs well, because the shebang is used to call the right interpreter (in this case expect).
I added to the system start-up list the bash script (i.e. the caller of the expect script), which runs without a focused terminal window. This last way gives errors.
The solution is to call the expect script explicitly from the bash script: "expect /path/expect-script-name.exp".
I found that without this explicit call, the shell dash manages all the scripts (including the expect scripts).

SFTP prompting for password even though password is in script

I am trying to transfer a file from one server to a remote server using SFTP. The client is not ready for key setup, so I have to use a password.
I have gone through other questions on this forum related to SFTP and tried them all, but it is still not working in my case.
My Script :-
#!/bin/sh
# sample automatic ftp script to dump a file
USER="username"
PASSWORD="password"
HOST="hostname"
sftp $USER@$HOST << EOF
$PASSOWRD
cd test_path
put test_file.txt
quit
EOF
You have a misprint in your script: you are writing $PASSOWRD instead of $PASSWORD, so it substitutes an empty string.
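Misprints like this can be caught mechanically: with set -u (nounset), the shell treats expansion of an unset variable as a fatal error instead of silently substituting an empty string:

```shell
#!/bin/sh
# set -u makes expanding a misspelled (hence unset) variable fatal.
set -u
PASSWORD="secret"
echo "$PASSWORD"        # prints: secret
# echo "$PASSOWRD"      # would abort here (exact message varies by shell,
#                       # e.g. "PASSOWRD: parameter not set")
```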
You can do this using expect. It's very easy and simple;
#!/usr/bin/expect
spawn sftp <userid>@<server>
expect "password:"
send "<password>\n"
expect "sftp>"
send "cd <remote directory>\r"
expect "sftp>"
send "mput * \r"
expect "sftp>"
send "quit \r"
Try the steps below:
lftp -u $user,$passwd sftp://$host << --EOF--
cd $directory
put $srcfile
quit
--EOF--

Checking ftp return codes from Unix script

I am currently creating an overnight job that calls a Unix script which in turn creates and transfers a file using ftp. I would like to check all possible return codes. The man page for ftp doesn't list return codes. Does anyone know where to find a list? Anyone with experience with this? We have other scripts that grep for certain return strings in the log, and they send an email when in error. However, they often miss unanticipated codes.
I am then putting the reason into the log and the email.
The ftp command does not return anything other than zero on most implementations that I've come across.
It's much better to process the three-digit reply codes in the log, and if you're sending a binary file, you can check that the number of bytes sent was correct.
The three digit codes are called 'series codes' and a list can be found here
I wrote a script to transfer only one file at a time and in that script use grep to check for the 226 Transfer complete message. If it finds it, grep returns 0.
ftp -niv < "$2"_ftp.tmp | grep "^226 "
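That pipeline generalizes to a tiny helper: grep -q sets the exit status without printing anything, so a caller can branch on whether the "226 Transfer complete" reply ever appeared in a saved session log:

```shell
#!/bin/sh
# Succeed (exit 0) iff the saved ftp session log contains a 226 reply.
transfer_ok() {
    grep -q '^226 ' "$1"
}
```

For example: `transfer_ok session.log || echo 'transfer failed'`.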
Install the ncftp package. It comes with ncftpget and ncftpput which will each attempt to upload/download a single file, and return with a descriptive error code if there is a problem. See the “Diagnostics” section of the man page.
I think it is easier to run the ftp and check the exit code of ftp if something gone wrong.
I did this like the example below:
# ...
ftp -i -n $HOST 2>&1 1> $FTPLOG << EOF
quote USER $USER
quote PASS $PASSWD
cd $RFOLDER
binary
put $FOLDER/$FILE.sql.Z $FILE.sql.Z
bye
EOF
# Check the ftp util exit code (0 is ok, every else means an error occurred!)
EXITFTP=$?
if test $EXITFTP -ne 0; then echo "$D ERROR FTP" >> $LOG; exit 3; fi
if (grep "^Not connected." $FTPLOG); then echo "$D ERROR FTP CONNECT" >> $LOG; fi
if (grep "No such file" $FTPLOG); then echo "$D ERROR FTP NO SUCH FILE" >> $LOG; fi
if (grep "access denied" $FTPLOG ); then echo "$D ERROR FTP ACCESS DENIED" >> $LOG; fi
if (grep "^Please login" $FTPLOG ); then echo "$D ERROR FTP LOGIN" >> $LOG; fi
Edit: To catch errors I grep the output of the ftp command, but truly it's not the best solution.
I don't know how familiar you are with a scripting language like Perl, Python or Ruby. They all have an FTP module which can be used. This enables you to check for errors after each command. Here is an example in Perl:
#!/usr/bin/perl -w
use Net::FTP;
$ftp = Net::FTP->new("example.net") or die "Cannot connect to example.net: $@";
$ftp->login("username", "password") or die "Cannot login ", $ftp->message;
$ftp->cwd("/pub") or die "Cannot change working directory ", $ftp->message;
$ftp->binary;
$ftp->put("foo.bar") or die "Failed to upload ", $ftp->message;
$ftp->quit;
For this logic to work, the user needs to redirect STDERR as well from the ftp command, as below:
ftp -i -n $HOST >$FTPLOG 2>&1 << EOF
The command below will always assign 0 (success), because the ftp command won't report success or failure through its exit code, so the user should not depend on it:
EXITFTP=$?
Lame answer, I know, but how about getting the ftp sources and seeing for yourself?
I like the solution from Anurag; for the bytes-transferred problem I have extended the command with grep -v "byte", i.e.
grep "^530" ftp_out2.txt | grep -v "byte"
Instead of 530 you can use all the error codes, as Anurag did.
You said you wanted to FTP the file there, but you didn't say whether or not the regular BSD FTP client was the only way you wanted to get it there. BSD FTP doesn't give you a return code for error conditions, necessitating all that parsing, but there is a whole series of other Unix programs that can be used to transfer files by FTP if you or your administrator will install them. I will give you some examples of ways to transfer a file by FTP while still catching all error conditions with small amounts of code.
FTPUSER is your ftp user login name
FTPPASS is your ftp password
FILE is the local file you want to upload without any path info (e.g. file1.txt, not /whatever/file1.txt or whatever/file1.txt)
FTPHOST is the remote machine you want to FTP to
REMOTEDIR is an ABSOLUTE PATH to the location on the remote machine you want to upload to
Here are the examples:
curl --user $FTPUSER:$FTPPASS -T $FILE ftp://$FTPHOST/%2f$REMOTEDIR
ftp-upload --host $FTPHOST --user $FTPUSER --password $FTPPASS --as $REMOTEDIR/$FILE $FILE
tnftp -u ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE $FILE
wput $FILE ftp://$FTPUSER:$FTPPASS@$FTPHOST/%2f$REMOTEDIR/$FILE
All of these programs will return a nonzero exit code if anything at all goes wrong, along with text that indicates what failed. You can test for this and then do whatever you want with the output, log it, email it, etc as you wished.
Please note the following however:
"%2f" is used in URLs to indicate that the following path is an absolute path on the remote machine. However, if your FTP server chroots you, you won't be able to bypass this.
for the commands above that use an actual URL (ftp://etc) to the server with the user and password embedded in it, the username and password MUST be URL-encoded if it contains special characters.
In some cases you can be flexible with the remote directory being absolute and local file being just the plain filename once you are familiar with the syntax of each program. You might just have to add a local directory environment variable or just hardcode everything.
If you really, absolutely MUST use the regular FTP client, one way you can test for failure is by including in your script first a command that PUTs the file, followed by another that does a GET of the same file, returning it under a different name. After FTP exits, simply test for the existence of the downloaded file in your shell script, or even checksum it against the original to make sure it transferred correctly. Yeah, that stinks, but in my opinion it is better to have code that is easy to read than do tons of parsing for every possible error condition. BSD FTP is just not all that great.
Here is what I finally went with. Thanks for all the help. All the answers help lead me in the right direction.
It may be a little overkill, checking both the result and the log, but it should cover all of the bases.
echo "open ftp_ip
pwd
binary
lcd /out
cd /in
mput datafile.csv
quit"|ftp -iv > ftpreturn.log
ftpresult=$?
bytesindatafile=`wc -c datafile.csv | cut -d " " -f 1`
bytestransferred=`grep -e '^[0-9]* bytes sent' ftpreturn.log | cut -d " " -f 1`
ftptransfercomplete=`grep -e '226 ' ftpreturn.log | cut -d " " -f 1`
echo "-- FTP result code: $ftpresult" >> ftpreturn.log
echo "-- bytes in datafile: $bytesindatafile bytes" >> ftpreturn.log
echo "-- bytes transferred: $bytestransferred bytes sent" >> ftpreturn.log
if [ "$ftpresult" != "0" ] || [ "$bytestransferred" != "$bytesindatafile" ] || [ "$ftptransfercomplete" != "226" ]
then
echo "-- *abend* FTP Error occurred" >> ftpreturn.log
mailx -s 'FTP error' `cat email.lst` < ftpreturn.log
else
echo "-- file sent via ftp successfully" >> ftpreturn.log
fi
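The byte-count comparison in the script above can be factored into a helper that is easy to test offline (the "NNN bytes sent" line matches what the BSD ftp client prints in verbose mode; adjust the pattern for your client):

```shell
#!/bin/sh
# Compare a local file's size with the byte count in the ftp session log.
# $1 = local file, $2 = log containing a line like "12345 bytes sent in ..."
bytes_match() {
    local_bytes=$(wc -c < "$1" | tr -d ' ')
    sent_bytes=$(grep -e '^[0-9]* bytes sent' "$2" | cut -d ' ' -f 1)
    # string comparison avoids surprises from empty/whitespace-padded values
    [ "$local_bytes" = "${sent_bytes:-}" ]
}
```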
Why not just store all output from the command to a log file, then check the return code from the command and, if it's not 0, send the log file in the email?
