Do expect scripts need a focused terminal window to work? - rsync

I have the following expect script to sync a local folder with a remote one:
#!/usr/bin/expect -f
# Expect script to interact with password-based commands. It synchronizes a
# local folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
set password [lrange $argv 0 0]
set user_ip [lrange $argv 1 1]
set dir1 [lrange $argv 2 2]
set dir2 [lrange $argv 3 3]
set target [lrange $argv 4 4]
set timeout 10
# now connect to remote UNIX box (ipaddr) with given script to execute
spawn rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
It works properly if I execute it in a gnome-terminal window, but it stops at the password request if I execute it without a focused terminal (such as via the ALT+F2 run dialog, with cron, or from a startup script).
I haven't found any information on whether expect needs an active terminal window to interact correctly.
Has anybody else seen this strange behaviour? Is it a feature or a bug? Any solution?
Thank you.

Your script has several errors. A quick re-write:
#!/usr/bin/expect -f
# Expect script to interact with password-based commands. It synchronizes a
# local folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
lassign $argv password user_ip dir1 dir2 target
set timeout 10
spawn /bin/sh
set sh_prompt {\$ $}
expect -re $sh_prompt
match_max 100000
# now connect to remote UNIX box (ipaddr) with given script to execute
send "rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
# Look for password prompt
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
}
-re $sh_prompt
}
send "rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
send "ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
Main points:
you were spawning several commands instead of spawning a shell and sending the commands to it
you put a comment outside of an action block (more details below)
the interact command gives control back to the user, which you don't want in a cron script
Why a comment in a multi-pattern expect block is bad:
Tcl doesn't treat comments like other languages do: the comment character only acts as a comment when it appears in a place where a command can start. That's why you see end-of-line comments in expect/tcl code like this
command arg arg ... ;# this is the comment
If that semi-colon were missing, the # would be handled as just another argument of the command.
A multi-pattern expect command looks like
expect pattern1 {body1} pattern2 {body2} ...
or with line continuations
expect \
pattern1 {body1} \
pattern2 {body2} \
...
Or in braces (best style, and as you've written)
expect {
pattern1 {body1}
pattern2 {body2}
...
}
The pattern may be optionally preceded with -exact, -regexp, -glob and --
When you put a comment in there like this:
expect {
pattern1 {body1}
# this is a comment
pattern2 {body2}
...
}
Expect is not looking for a new command there: it will interpret the block like this
expect {
pattern1 {body1}
# this
is a
comment pattern2
{body2} ...
}
When you put the comment inside an action body, as I've done above, then you're safe, because the body is evaluated according to the rules of Tcl (spelled out in the twelve rules of the Tcl(n) man page).
Phew. Hope that helps. I highly recommend that you check out the book for all the details.

As I commented on Glenn's answer, I found that the problem wasn't the terminal window but the way the script is called.
My expect script is called several times by another bash script with the bare line "/path/expect-script-name.exp [parameters]". If I open a terminal window (in any desktop environment) and run the caller script as "/path/bash-script-name.sh", everything runs well, because the shebang is used to pick the right interpreter (in this case expect).
I then added the bash script (i.e. the caller of the expect script) to the system start-up list, where it runs without a focused terminal window. Run this way, it gives errors.
The solution is to call the expect script explicitly from the bash script: "expect /path/expect-script-name.exp".
I found that without this explicit call the shell dash interprets all the scripts (including the expect scripts).
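The shebang-vs-interpreter behaviour is easy to demonstrate with a plain shell sketch (the path and file name here are illustrative only):

```shell
# When a script is run as "sh script" (or "dash script"), the named shell
# interprets it and the shebang line is just a comment; only direct
# execution (./script) makes the kernel honour the shebang.
cat > /tmp/demo.sh <<'EOF'
#!/bin/echo shebang-ran:
echo "body run by the invoking shell"
EOF
chmod +x /tmp/demo.sh
/tmp/demo.sh      # kernel honours the shebang: /bin/echo prints its arguments
sh /tmp/demo.sh   # sh ignores the shebang and runs the body itself
```

The same mechanism explains the original failure: invoked as a bare argument to dash, the expect script was being read by dash, which does not understand expect commands.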

Related

SFTP file to get and remove files from remote server

I'm trying to write an expect script to pull files from a remote server onto a local folder, and delete them from the remote as they're pulled in. The script I have that doesn't remove them is:
#!/usr/bin/expect
spawn sftp -oHostKeyAlgorithms=+ssh-dss sftp://<username>@<ftp server>
expect "<username>@<ftp server>'s password:"
send "<password>\n"
expect "sftp>"
send "cd <folder>\n"
expect "sftp>"
send "get *"
expect "sftp>"
send "exit\n"
interact
I could add "rm *" after the get command, but sometimes it loses connection to the server while getting the files and the script stops, so I'd like to remove files as I get them.
I tried setting up a for loop like this:
#!/usr/bin/expect
spawn sftp -oHostKeyAlgorithms=+ssh-dss sftp://<username>@<ftp server>
expect "<username>@<ftp server>'s password:"
send "<password>\n"
expect "sftp>"
send "cd <folder>\n"
expect "sftp>"
set filelist [expr { send "ls -1\n" } ]
foreach file $filelist {
send "get $file\n"
expect "sftp>"
send "rm $file\n"
expect "sftp>"
}
send "exit\n"
interact
But I get:
invalid bareword "send" in expression " send "ls -1\n" "; should be "$send" or "{send}" or "send(...)" or ...
Can anyone tell me what I'm doing wrong in that script, or if there's another way to achieve what I want?
The error message you get comes from the line
set filelist [expr { send "ls -1\n" } ]
This is because the expr command evaluates arithmetic and logical expressions (as documented at https://www.tcl-lang.org/man/tcl8.6/TclCmd/expr.htm) but send "ls -1\n" is not an expression it understands.
If you were trying to read a list of files from your local machine you could do this with
set filelist [exec ls -1]
However, what you really want here is to read the list of files through the ssh connection from the remote machine. This is a little more complicated: you need to use expect to loop over the lines you get back until you see the prompt again, something like this (untested):
send "ls -1\r"
expect {
"sftp>" {}
-re "(.*)\n" {
lappend filelist $expect_out(1,string)
exp_continue
}
}
For more info see https://www.tcl-lang.org/man/expect5.31/expect.1.html and https://www.tcl-lang.org/man/tcl8.6/TclCmd/contents.htm .
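If per-file get-and-remove is the goal, another route (a sketch, not the method above) is to generate an sftp batch file from the listing and run it non-interactively, which requires key-based authentication. The remote directory and the stubbed file list here are hypothetical:

```shell
# Build a batch file that gets then removes each file in turn; if the
# connection drops, only the files not yet fetched remain on the server.
# "files" would come from the ls -1 listing; it is stubbed here.
files="a.csv b.csv"
{
  echo "cd incoming"
  for f in $files; do
    echo "get $f"
    echo "rm $f"
  done
} > /tmp/fetch.batch
# run with: sftp -b /tmp/fetch.batch user@host
# (-b aborts on the first failed command, so a file is only removed
#  after its get succeeded)
```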

SNMP TRAPS sending to other file, than /var/log/messages

I have configuration
snmptrapd.conf like below:
disableAuthorization yes
authCommunity log,execute,net public
I want to redirect all messages to another file, e.g. /var/log/snmp.log, instead of /var/log/messages. I also tried reconfiguring the rsyslog.conf file:
snmp.* /var/log/snmp.log
but I get an error like this:
sie 17 12:50:47 snmp rsyslogd[20398]: unknown facility name "snmp" [v8.24.0]
My question is: how do I redirect all SNMP traps to another file using rsyslog.conf or snmptrapd.conf?
I know that I can save the output manually with a command like the one below, but I need a working daemon running as a service, not a single command from a bash shell.
snmptrapd -f -Le -A -Lf /var/log/snmptrapd.log
You can use the -t option with snmptrapd:
snmptrapd -tLf /your-log-location/yourlogfile.log --disableAuthorization=yes
Try this:
# LOGFILE="path to logfile"
# specify the pathname of the logfile; if none or the empty string "" is
# given, use the syslog() mechanism to log the traps
# Default: ""
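Since snmptrapd logs through syslog without a dedicated "snmp" facility (hence the "unknown facility name" error), rsyslog can instead match on the program name. A sketch of a drop-in file; the path and file name follow the usual /etc/rsyslog.d convention but are not mandated:

```
# /etc/rsyslog.d/30-snmptrapd.conf
# Route snmptrapd's messages to their own file and stop further processing,
# so they no longer land in /var/log/messages.
if $programname == 'snmptrapd' then /var/log/snmp.log
& stop
```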

minicom script exiting immediately

I have written the following minicom script:
sleep 20
send "\n"
expect {
"#" break
}
send "\n"
send "uname -a"
expect {
"Linux:" break
}
The command used to run the script is:
sudo minicom -S v.runscript -C minicom.log
But when I run this command, once I enter the password for sudo it exits immediately. Even the sleep at the start is not executed, and the minicom.log file is empty.
What might I be missing in the script or in the command used to run it?
Note about script:
When I use 'sudo minicom' manually, it takes around 10 seconds to give the prompt. So I have included 'sleep 20' at the start.
Also I am not prompted for login and password if the earlier session was exited with user still logged in. So I do not expect login / password prompts while using run script also.
You should write:
send "\n"
expect {
"#" break
timeout 20
}
send "\n"
send "uname -a"
expect {
"Linux:" break
}

Using expect script for gpg - password decryption - does not work

Hi, I am fairly new to expect scripting. I am trying to use gpg for password encryption/decryption. Encryption poses no issues. For decryption, I am trying to automate it using an expect script.
The basic command I am trying to use is: gpg -o <output file> -d <.gpg file with encrypted password>
When I run this command, stand alone, it asks for passphrase, when I enter it, it creates the output file, as expected. The output file has password in it.
When I run this command using an expect script so that the passphrase can be provided automatically at run time, the expect does not create the output file.
Any help is appreciated. It does not show any errors; the output is:
spawn gpg -o /home/gandhipr/passwdfile -d /home/gandhipr/passfile.gpg
gpg: CAST5 encrypted data
Enter passphrase:
Below is my expect script.
#!/usr/bin/expect
set timeout 1
set passdir [lindex $argv 0]
set passfile [lindex $argv 1]
set passfilegpg [lindex $argv 2]
set passphrase [lindex $argv 3]
spawn gpg -o $passdir$passfile -d $passdir$passfilegpg
expect "Enter passphrase:"
send "$passphrase\n"
exp_internal 1
exit 0;
interact
Use \r instead of \n in the send command: \r is the carriage return character, which mimics the user hitting Enter.
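For completeness, gpg can also be driven without expect at all: in batch mode it reads the passphrase from a file descriptor. A self-contained sketch (gpg 2.1+ needs --pinentry-mode loopback; the file names and passphrase are made up for the round trip):

```shell
# Encrypt then decrypt symmetrically, passing the passphrase on stdin
# (--passphrase-fd 0) instead of answering the prompt interactively.
dir=$(mktemp -d)
echo "the secret" > "$dir/plain"
printf 'mypass\n' | gpg --batch --yes --pinentry-mode loopback \
    --passphrase-fd 0 -c -o "$dir/passfile.gpg" "$dir/plain"
printf 'mypass\n' | gpg --batch --yes --pinentry-mode loopback \
    --passphrase-fd 0 -o "$dir/passwdfile" -d "$dir/passfile.gpg"
cat "$dir/passwdfile"
```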

moving from one to another server in shell script

Here is the scenario,
$hostname
server1
I have the below script in server1,
#!/bin/ksh
echo "Enter server name:"
read server
rsh -n ${server} -l mquser "/opt/hd/ca/scripts/envscripts.ksh"
qdisplay
# script ends.
In the above script I log into another server, say server2, and execute the script "envscripts.ksh", which sets a few aliases (such as "qdisplay") defined in it.
I can successfully log in to server2, but I am unable to use the aliases set by "envscripts.ksh".
I get the error below:
-bash: qdisplay: command not found
Can someone please point out what needs to be corrected here?
Thanks,
Vignesh
The other responses and comments are correct. Your rsh command needs to execute both the ksh script and the subsequent command in the same invocation. However, I thought I'd offer an additional suggestion.
It appears that you are writing custom instrumentation for WebSphere MQ. Your approach is to remote shell to the WMQ server and execute a command to display queue attributes (probably depth).
The objective of writing your own instrumentation is admirable, however attempting to do it as remote shell is not an optimal approach. It requires you to maintain a library of scripts on each MQ server and in some cases to maintain these scripts in different languages.
I would suggest that a MUCH better approach is to use the MQSC client available in SupportPac MO72. This allows you to write the scripts once, and then execute them from a central server. Since the MQSC commands are all done via MQ client, the same script handles Windows, UNIX, Linux, iSeries, etc.
For example, you could write a script that remotely queried queue depths and printed a list of all queues with depth > 0. You could then either execute this script directly against a given queue manager or write a script to iterate through a list of queue managers and collect the same report for the entire network. Since the scripts are all running on the one central server, you do not have to worry about getting $PATH right, differences in commands like tr or grep, where ksh or perl are installed, etc., etc.
Ten years ago I wrote the scripts you are working on when my WMQ network was small. When the network got bigger, these platform differences ate me alive and I was unable to keep the automation up and running. When I switched to using WMQ client and had only one set of scripts I was able to keep it maintained with far less time and effort.
The following script assumes that the QMgr name is the same as the host name except in UPPER CASE. You could instead pass QMgr name, hostname, port and channel on the command line to make the script useful where QMgr names do not match the host name.
#!/usr/bin/perl -w
#-------------------------------------------------------------------------------
# mqsc.pl
#
# Wrapper for MO72 SupportPac mqsc executable
# Supply parm file name on command line and host names via STDIN.
# Program attempts to connect to hostname on SYSTEM.AUTO.SVRCONN and port 1414
# redirecting parm file into mqsc.
#
# Intended usage is...
#
# mqsc.pl parmfile.mqsc
# host1
# host2
#
# -- or --
#
# mqsc.pl parmfile.mqsc < nodelist
#
# -- or --
#
# cat nodelist | mqsc.pl parmfile.mqsc
#
#-------------------------------------------------------------------------------
use strict;
$SIG{ALRM} = sub { die "timeout" };
$ENV{PATH} =~ s/:$//;
my $File = shift;
die "No mqsc parm file name supplied!" unless $File;
die "File '$File' does not exist!\n" unless -e $File;
while (<>) {
my @Results;
chomp;
next if /^\s*[#*]/; # Allow comments using # or *
s/^\s+//; # Delete leading whitespace
s/\s+$//; # Delete trailing whitespace
# Do not accept hosts with embedded spaces in the name
die "ERROR: Invalid host name '$_'\n" if /\s/;
# Silently skip blank lines
next unless ($_);
my $QMgrName = uc($_);
#----------------------------------------------------------------------------
# Run the parm file in
eval {
alarm(10);
@Results = `mqsc -E -l -h $_ -p detmsg=1,prompt="",width=512 -c SYSTEM.AUTO.SVRCONN < $File 2>&1 | grep -v "^MQSC Ended"`;
};
if ($@) {
if ($@ =~ /timeout/) {
print "Timed out connecting to $_\n";
} else {
print "Unexpected error connecting to $_: $!\n";
}
}
alarm(0);
if (@Results) {
print join("\t", @Results, "\n");
}
}
exit;
The parmfile.mqsc is any valid MQSC script. One that gathers all the queue depths looks like this:
DISPLAY QL(*) CURDEPTH
I think the real problem is that the r(o)sh cmd only executes the remote envscripts.ksh file and that your script is then trying to execute qdisplay on your local machine.
You need to 'glue' the two commands together so they are both executed remotely.
EDITED per comment from Gilles (He is correct)
rsh -n ${server} -l mquser ". /opt/hd/ca/scripts/envscripts.ksh ; qdisplay"
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, or give it a + (or -) as a useful answer