I have written an Expect script that logs in to a remote system, executes a sequence of commands and captures the output in a log file.
Everything works fine except that, when I check the log file, some commands appear to be sent twice: the next command to be sent shows up in the middle of the previous command's output, and is then sent again when the prompt is detected (which is the correct execution). Also, this issue does NOT occur in all cases, which is even more baffling.
I would like to add that I have customized the prompt to include " ---> ", to make output parsing easier for another script.
Here's the Expect code:
set prompt "(]|%|#|>|\\$)"
# go to bash shell
expect -re $prompt
send "/bin/bash\r"
# customize the prompt
expect -re $prompt
send "PS1=\"\\u#\\H ---> \"\r"
# set new prompt into variable
expect -re $prompt
set newPrompt " ---> "
# opens file containing command list
set commFile [open commands.txt]
# reads each line containing commands from file, stores it in "$theLine" variable and sends it.
while {[gets $commFile theLine] >= 0} {
    expect "$newPrompt"
    send "$theLine\r"
}
close $commFile
This is how my output appears.
"prompt --->" command1
----output----
----output----
command2
----output----
----output----
"prompt --->" command2
----output----
----output----
Hope you get the idea.
I don't understand this behaviour nor was I able to find any solutions to this elsewhere. Any ideas?
There's a bit of a logic problem: after you send the PS1=... command you expect the old prompt, and inside the loop you expect the new prompt before sending each command. The trouble is that " ---> " also appears in the echo of the PS1 assignment itself, so an unanchored expect can match that echo instead of a real prompt; from then on the loop is one prompt ahead and sends the next command while the previous one is still producing output. Send first, then wait for the prompt that the command produces, and anchor the pattern at the end of the input. Does this help?
send "PS1=\"\\u#\\H ---> \"\r"
set newPrompt { ---> $}
expect -re $newPrompt
set commFile [open commands.txt]
while {[gets $commFile theLine] >= 0} {
    send "$theLine\r"
    expect -re "$newPrompt"
}
How can I avoid generating the substring "...[C[C[C[0k" after commands issued through Expect scripts?
For example, I have the following lines in an expect script:
#set variables, parameters etc
...
spawn telnet $IP $PORT
expect -nocase "name:"
send -- "$USER\r"
expect -nocase "password:"
send -- "$PASS\r"
expect -re "$prompt"
send -- "terminal length 0\r"
expect -re "$prompt"
send -- "show vlan\r"
expect -re "$prompt"
send -- "logout\r"
expect eof
After execution I receive the undesirable substrings:
Username:[C[C[C[C[C[C[C[C[C[0Kadmin
Password:[C[C[C[C[C[C[C[C[C[0K**************
switch#
[C[C[C[C[C[C[C[C[C[0Kterminal length 0
switch#[C[C[C[C[C[C[C[C[C[0Kshow vlan
...
#'show vlan' command result here
...
switch#[C[C[C[C[C[C[C[C[C[0Klogout
Connection closed by foreign host.
Does anyone have a tip on how I can avoid generating these "[C[C[C[C[C[C[C[C[C[0K" strings? Another issue is that I cannot use another tool (sed, awk, tr, etc.) to remove the strings; I need Expect not to generate them in the first place.
Expect is not generating these; it is the Cisco telnet service that you are connecting to. It sends terminal escape sequences (here, cursor-forward and erase-to-end-of-line codes) to control how the line is drawn.
You could try changing the TERM variable before calling expect, or setting it in your expect script before the spawn telnet command. Try setting TERM=unknown. Otherwise look into the arguments that your telnet program supports to disable escape sequences.
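For example, a minimal sketch of the second suggestion, setting TERM inside the Expect script before the spawn (IP and PORT are the variables from the question):
set env(TERM) unknown   ;# advertise a terminal with no escape-sequence support
spawn telnet $IP $PORT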
I use the following calls to spawn a process, get back file descriptors for its stdin, stdout, and stderr, and watch for output on its stdout:
[widget.pid, widget.stdin, widget.stdout, widget.stderr] = \
    gobject.spawn_async(['/bin/sed', 's/hi/by/g'],
                        standard_input=True,
                        standard_output=True,
                        standard_error=True)
gobject.io_add_watch(widget.stdout,
                     gobject.IO_IN,
                     getdata)
I then write lines to widget.stdin, expecting to trigger the callback function getdata.
What I find is that the callback getdata is called only when I close widget.stdin.
From the terminal, on the other hand, sed echoes each completed line sent to its stdin, so I expect that sed generates output whenever it sees a completed line; it seems it just isn't receiving the lines one at a time.
I'm not clear on how I can force the lines written to widget.stdin to be seen at /bin/sed, while leaving the connection open to send more lines. The python -u flag does not seem to make any difference. Any ideas? Thanks.
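For reference, a minimal sketch of what the getdata callback might look like, assuming pygtk's usual (source, condition) callback signature and reading straight from the integer file descriptor returned by spawn_async:
import os

def getdata(fd, condition):
    # read up to 512 bytes of the child's output
    data = os.read(fd, 512)
    print repr(data)
    return True  # returning True keeps the watch installed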
I think I may have asked the wrong question; I will submit this under a different title.
What I'm finding is that the watch on the child's stdout is only triggered in the parent once the pipe to the child has filled; in other words, it appears that the child (the sed process) does not actually receive any input until the pipe has filled. When I set the pipe size to 4096 bytes and prefill the pipe to the child with 4096 bytes, I can then send 512-byte chunks, and my parent process's io_add_watch callback, which reads up to 512 bytes, gets called every time I send a new chunk to the child.
All,
I want to execute a Unix command in an Expect script. The command outputs the rsize value for a process. I haven't programmed in Expect before.
This is my code:
#!/usr/bin/expect
set some_host "some host"
set Mycmd "top -l 1 -stats pid,rsize,command | grep Process_Name| awk '{print \$2};'"
spawn telnet localhost $some_host
expect "login:"
send "myDevice\r"
expect "Password:"
send "$password\r"
expect "\$"
send "$Mycmd\r"
When I execute this, I don't get any output. What's the correct syntax to execute the Unix command? How do I get this to work so that I get the correct rsize value as the output?
It's always a good idea to try escaping with ASCII codes: try \0442 in place of \$2, or something like \\$2. Also, you can debug the script to find out why you get no output by inserting exp_internal 1 (without quotes) as the second line.
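For instance, the top of the script would become (everything else unchanged):
#!/usr/bin/expect
exp_internal 1   ;# print Expect's internal pattern-matching diagnostics
set some_host "some host"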
Is it possible to get the output without having to add the "interact" statement?
Yes, it is. Other statements that wait for the output will also do; you could add, e.g.,
expect -re "\n\[0-9]+"
to the end of your script.
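For example (a sketch, reusing the variables from the question), the tail of the script could become:
send "$Mycmd\r"
expect -re "\n(\[0-9]+)"
# $expect_out(1,string) now holds the digits matched by the parenthesized group
puts "rsize: $expect_out(1,string)"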
For example,
I executed "pwd" and it shows the current working directory. If I then want to reuse that result in another command, it would be convenient to get it via a Unix command or a built-in variable. Does such a thing exist?
You can get the result, as in the return code, using $?. To get the output, you'll need to explicitly keep it around, e.g. with:
MYVAR=`pwd`
echo $MYVAR
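Equivalently, with the modern $(...) form of command substitution (a minimal sketch):
dir=$(pwd)              # capture pwd's standard output
echo "working in $dir"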
Use $? in order to get the status of the last executed command. Its value will be zero if the last executed command was successful, and non-zero otherwise.
The internal variable $? holds the return value of the last executed command or program. Example: http://tldp.org/LDP/abs/html/complexfunct.html#MAX.
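For instance:
grep -q root /etc/passwd
echo $?    # prints 0 if a match was found, non-zero otherwise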
You can also try using pipes (|) to connect commands directly, if you don't need to store the result first. I am constantly piping long directory listings over to more so I can page through the results:
ls -al | more
So if you want to use the output of pwd as input to another program, you can try piping it to more:
pwd | more
Is there an alternative to tee which captures standard output and standard error of the command being executed and exits with the same exit status as the processed command?
Something like the following:
eet -a some.log -- mycommand --foo --bar
Where "eet" is an imaginary alternative to "tee" :) (-a means append and -- separates the captured command). It shouldn't be hard to hack such a command, but maybe it already exists and I'm not aware of it?
This works with Bash:
(
    set -o pipefail
    mycommand --foo --bar | tee some.log
)
The parentheses are there to limit the effect of pipefail to just the one command.
From the bash(1) man page:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
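A sketch of how this could be wrapped into the imaginary eet (assuming Bash; the -a/some.log handling is hard-coded for brevity):
#!/bin/bash
# eet-like wrapper: ./eet mycommand --foo --bar
set -o pipefail
"$@" 2>&1 | tee -a some.log
Since the script's exit status is that of its last pipeline, a failure of the wrapped command propagates out.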
I stumbled upon a couple of interesting solutions at Capture Exit Code Using Pipe & Tee.
There is the $PIPESTATUS array available in Bash (a bare $PIPESTATUS expands to its first element, the exit status of the first command in the pipeline). Note that it is overwritten by every subsequent command, so save the value first:
false | tee /dev/null
status=${PIPESTATUS[0]}
[ $status -eq 0 ] || exit $status
And the simplest prototype of "eet" in Perl may look as follows:
open MAKE, "command 2>&1 |" or die;   # run the command, merging stderr into stdout
open (LOGFILE, ">>some.log") or die;
while (<MAKE>) {
    print LOGFILE $_;   # append each line to the log...
    print;              # ...and echo it to our own stdout
}
close MAKE;             # sets $? to the command's wait status
my $exit = $? >> 8;
close LOGFILE;
exit $exit;             # propagate the command's exit status
Here's an eet. Works with every Bash I can get my hands on, from 2.05b to 4.0.
#!/bin/bash
tee_args=()
while [[ $# -gt 0 && $1 != -- ]]; do
    tee_args=("${tee_args[@]}" "$1")
    shift
done
shift
# now ${tee_args[*]} has the arguments before --,
# and $* has the arguments after --
# redirect standard out through a pipe to tee
exec > >(tee "${tee_args[@]}")
# do the *real* exec of the desired program
exec "$@"
(pipefail and $PIPESTATUS are nice, but I recall them being introduced in 3.1 or thereabouts.)
This is what I consider to be the best pure-Bourne-shell solution to use as the base upon which you could build your "eet":
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; echo $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out – command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, echo will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor three.
While command1 is running, its stdout is being piped to command2 (echo's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor one – because we want file descriptor one clear for when we bring the echo output on file descriptor three back down into file descriptor one so that the command substitution (the backticks) can capture it.
The final bit of magic is that first exec 4>&1 we did as a separate command – it opens file descriptor four as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it – but, since command2's output is going to file descriptor four as far as the command substitution is concerned, the command substitution doesn't capture it – however, once it gets "out" of the command substitution, it is effectively still going to the script's overall file descriptor one.
(The exec 4>&1 has to be a separate command to work with many common shells. In some shells it works if you just put it on the same line as the variable assignment, after the closing backtick of the substitution.)
(I use compound commands ({ ... }) in my example, but subshells (( ... )) would also work. The subshell will just cause a redundant forking and awaiting of a child process, since each side of a pipe and the inside of a command substitution already normally implies a fork and await of a child process, and I don't know of any shell being coded to recognize that it can skip one of those forks because it's already done or is about to do the other.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the echo's output jumps over command2 so that command2 doesn't catch it, and then command2's output jumps over and out of the command substitution just as echo lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its way to the standard output, just as in a normal pipe.
Also, as I understand it, at the end of this command, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out.
A caveat is that it is possible that command1 will at some point end up using file descriptors three or four, or that command2 or any of the later commands will use file descriptor four, so to be more hygienic, we would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; echo $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit the file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure that command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
Almost no program uses pre-opened file descriptors three and four directly, so you almost never have to worry about it, but the latter form is probably best kept in mind for general-purpose cases.
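Applied to the question's use case, with tee playing the role of command2 (a sketch):
exec 4>&1
exitstatus=`{ { mycommand --foo --bar 2>&1 3>&-; echo $? 1>&3; } 4>&- | tee -a some.log 1>&4; } 3>&1`
exec 4>&-
exit "$exitstatus"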
mycommand --foo --bar 2>&1 | tee -a some.log; exit "${PIPESTATUS[0]}"
(The often-suggested form { mycommand --foo --bar 2>&1; ret=$?; } | tee -a some.log; (exit $ret) does not work: each side of a Bash pipeline runs in a subshell, so the ret assigned on the left-hand side is not visible afterwards.)
KornShell, all in one line:
foo; RET_VAL=$?; if test ${RET_VAL} != 0; then echo $RET_VAL; echo "Error occurred!" > /tmp/out.err; exit 2; fi | tee >> /tmp/out.err; if test ${RET_VAL} != 0; then exit $RET_VAL; fi
#!/bin/sh
logfile="$1"
shift
exec 2>&1
exec "$#" | tee "$logfile"
Hopefully this works for you.
Assuming Bash or Z shell (zsh),
my_command >>my_log 2>&1
N.B. The sequence of redirection and duplication of standard error onto standard output is significant!
I didn't realise you wanted to see the output on screen as well. This will of course direct all output to the file my_log.
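To keep the on-screen copy as well, this combines naturally with tee, at the cost of the exit-status caveats discussed above:
my_command 2>&1 | tee -a my_log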