minicom script exiting immediately - automated-tests

I have written the following minicom script:
sleep 20
send "\n"
expect {
"#" break
}
send "\n"
send "uname -a"
expect {
"Linux:" break
}
The command used to run the script is:
sudo minicom -S v.runscript -C minicom.log
But when I run this command, once I enter the password for sudo, it exits immediately. Even the sleep at the start does not run, and the minicom.log file is empty.
What might be missing in the script or in the command used to run it?
Notes about the script:
When I use 'sudo minicom' manually, it takes around 10 seconds to show the prompt, so I have included 'sleep 20' at the start.
Also, I am not prompted for login and password if the earlier session was exited with the user still logged in, so I do not expect login/password prompts when using the runscript either.

Drop the initial sleep and give the first expect block a timeout instead, so the script waits up to 20 seconds for the prompt rather than sleeping unconditionally. You should write:
send "\n"
expect {
"#" break
timeout 20
}
send "\n"
send "uname -a"
expect {
"Linux:" break
}
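A note on why this helps (hedged, based on runscript's documented behaviour): an expect block waits for one of its patterns to match, and a timeout entry inside the block caps that wait, so the roughly 10-second delay before the prompt is absorbed by the first expect instead of an unconditional sleep. The invocation from the question is unchanged:
sudo minicom -S v.runscript -C minicom.log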

Related

sudo asks for a password instead of getting it from stdin

I have a script running in an open terminal window:
while sleep 345600; \
do pass="$(security find-generic-password -w -s 'service' -a 'account')"; \
sudo --stdin <<< "${pass}" head /etc/hosts; \
done
When, for a test, I manually run this script with sleep set to 1, it works as intended: sudo gets the pass without user interaction. When I then run the script with the 4-day delay, it does not run the same way at the scheduled time; sudo waits for the password from the user's terminal (i.e. typed manually!). I can even set the pass variable to contain the actual plain-text password, to no avail.
Why this difference?
It's probably safer to add the particular command you need to the sudoers config and allow it to be run without a password (see https://apple.stackexchange.com/q/398656 for an example of this on macOS).
If that's not an option, you can try the --askpass (-A) option: sudo then runs the helper program named by the SUDO_ASKPASS environment variable and reads the password from its standard output. Put the find-generic-password command in a helper script and point SUDO_ASKPASS at it.
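A minimal sketch, assuming a hypothetical helper at ~/bin/askpass.sh (the service and account names are the ones from the loop above):
#!/bin/sh
# askpass helper: sudo runs this program and reads the password from its stdout
exec security find-generic-password -w -s 'service' -a 'account'
Then point sudo at it:
chmod +x ~/bin/askpass.sh
export SUDO_ASKPASS="$HOME/bin/askpass.sh"
sudo --askpass head /etc/hosts    # equivalently: sudo -A head /etc/hosts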

how to transfer in sftp with password? put command not sending the full file

I am new to UNIX and need help finding the correct approach to send a file.
I have to send a big file (1 GB), and transferring it manually over sftp takes approx 10 minutes. We tried the script below because we have to log in with a password.
The problem is that before the file transfers completely, the script drops out of the SFTP connection with no error.
Script:
expect -c "
spawn sftp ${remote_user}@${remote_host}
expect \"password\"
send ${remote_pswd}\r
expect sftp>
send \" cd ${remote_path}\r \"
expect sftp>
send \" lcd ${source_path}\r \"
expect sftp>
send \" put ${source_file} \r \"
expect sftp>
send \" echo $? \r \"
expect sftp>
send \"bye\" "
Log:
spawn sftp DataStageIM2@192.168.79.15
DataStageIM2@192.168.79.15's password:
Connected to 192.168.79.15.
sftp> cd /users/StoreStockManagement/ReferenceData/Inbound
sftp> lcd /staging/oretail/external/data/DSPRD/Output/Pricing/INT340
sftp> mput hhtstore_price.dat
Uploading hhtstore_price.dat to /users/StoreStockManagement/ReferenceData/Inbound/hhtstore_price.dat
hhtstore_price.dat 3% 189MB 18.1MB/s 04:31 ETA+ [[ 0 -ne 0 ]]
Here, after transferring 3% of the file, the script exits and I cannot see the file on the remote side. When I try sftp manually it works; only with the script does the copy fail.
Can someone help here?
The default timeout value for the expect command is 10 seconds. So, after put, expect waits 10 seconds to see the prompt, then times out and continues on with the script.
Clearly you want to wait however long is necessary for the file to transfer, so add this to your script:
set timeout -1
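A minimal sketch of where that line goes, reusing the variables from the script above (quoting simplified for clarity):
expect -c "
set timeout -1
spawn sftp ${remote_user}@${remote_host}
expect \"password\"
send \"${remote_pswd}\r\"
expect \"sftp>\"
send \"put ${source_file}\r\"
expect \"sftp>\"
send \"bye\r\"
"
With the timeout disabled, expect waits indefinitely for each sftp> prompt, so the put can finish no matter how long the transfer takes.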

paramiko and nohup

OK, so I have paramiko v2.2.1 and I am trying to log in to a machine and restart a service. The service script basically starts a process via nohup. However, if I allow paramiko to disconnect as soon as it is done, the started process terminates with a PIPE signal when it writes to stdout.
If I start the service by ssh'ing into the box and starting it manually, there is no issue and it runs in the background fine. Also, if I add a sleep(10) before disconnecting (close) paramiko, it also seems to work just fine.
The service is started from an init.d script via a line like this:
env LD_LIBRARY_PATH=$bin_path nohup $bin_path/ServerLoop.sh \
"$bin_path/Service service args" "$@" &
Where ServerLoop.sh simply calls the service forever in a loop like this so it will never die:
SERVER=$1
shift
ARGS=$@
logger $ARGS
while [ 1 ]; do
$SERVER $ARGS
STATUS=$?   # capture the exit code for the log message below
logger "$SERVER terminated with exit code: $STATUS. Server has been restarted"
sleep 1
done
I have noticed that when I start the service by ssh'ing into the box, I get a nohup.out file written to the root directory. However, when I run through paramiko, no nohup.out is written anywhere on the system. I.e., this is after I manually ssh into the box and start the service:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
/nohup.out
And this is after I run through paramiko:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
As I understand it, nohup only redirects the output to nohup.out "if standard output is a terminal" (from the manual); otherwise it assumes the output is already being saved to a file and does not redirect. Hence I tried the following:
In [43]: import paramiko
In [44]: paramiko.__version__
Out[44]: '2.2.1'
In [45]: ssh = paramiko.SSHClient()
In [46]: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
In [47]: ssh.connect(ip, username='root', password=not_for_so_sorry, look_for_keys=False, allow_agent=False)
In [48]: stdin, stdout, stderr = ssh.exec_command("tty")
In [49]: stdout.read()
Out[49]: 'not a tty\n'
So I am thinking that nohup is not redirecting to nohup.out when I run it through paramiko because tty does not report a terminal. I don't know why adding a sleep(10) would fix this, though, as the service is quite verbose when run from the command line.
I have also noticed that if the service is started from a manual ssh, its tty in the ps ax output is still set to the ssh tty; however, if the process is started by paramiko, its tty in the ps ax output is "?". Since both processes are run through nohup, I would have expected this to be the same.
If the problem is that nohup is indeed not redirecting the output to nohup.out because of the tty, is there a way to force this to happen, or a better way to run this sort of command via paramiko?
Thanks all, any help with this would be great :)
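One common workaround, sketched as the shell command string you could pass to paramiko's exec_command() (the init script path and log file below are placeholders): redirect all three standard streams before backgrounding, so nothing is attached to the SSH channel, nohup has nothing to detect, and the process cannot receive a PIPE signal when paramiko disconnects.
# remote command string for exec_command(); paths are hypothetical
nohup /etc/init.d/myservice start </dev/null >>/var/log/myservice.log 2>&1 &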

Expect scripts need a focused window session to work?

I have the following expect script to sync a local folder with a remote one:
#!/usr/bin/expect -f
# Expect script to interact with password-based commands. It synchronizes a local
# folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
set password [lrange $argv 0 0]
set user_ip [lrange $argv 1 1]
set dir1 [lrange $argv 2 2]
set dir2 [lrange $argv 3 3]
set target [lrange $argv 4 4]
set timeout 10
# now connect to remote UNIX box (ipaddr) with given script to execute
spawn rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
It works properly if I execute it in a gnome-terminal window, but it stops at the password request if I execute it without a focused terminal (such as via the ALT+F2 run dialog, cron, or a startup script).
I haven't found information on whether expect needs an active terminal window to interact correctly.
Has somebody else experienced this strange behaviour? Is it a feature or a bug? Any solution?
Thank you.
Your script has several errors. A quick re-write:
#!/usr/bin/expect -f
# Expect script to interact with password-based commands. It synchronizes a local
# folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
lassign $argv password user_ip dir1 dir2 target
set timeout 10
spawn /bin/sh
set sh_prompt {\$ $}
expect -re $sh_prompt
match_max 100000
# now connect to remote UNIX box (ipaddr) with given script to execute
send "rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
# Look for password prompt
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
}
-re $sh_prompt
}
send "rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
send "ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
Main points:
you were spawning several commands instead of spawning a shell and sending the commands to it
you put a comment outside of an action block (more details below)
the interact command gives control back to the user, which you don't want in a cron script
Why a comment in a multi-pattern expect block is bad:
Tcl doesn't treat comments the way other languages do: the comment character only acts as a comment when it appears in a place where a command can start. That's why you see end-of-line comments in expect/tcl code like this:
command arg arg ... ;# this is the comment
If that semi-colon were missing, the # would be handled as just another argument of the command.
A multi-pattern expect command looks like
expect pattern1 {body1} pattern2 {body2} ...
or with line continuations
expect \
pattern1 {body1} \
pattern2 {body2} \
...
Or in braces (best style, and as you've written)
expect {
pattern1 {body1}
pattern2 {body2}
...
}
The pattern may be optionally preceded with -exact, -regexp, -glob and --
When you put a comment in there, like this:
expect {
pattern1 {body1}
# this is a comment
pattern2 {body2}
...
}
Expect is not looking for a new command there: it will interpret the block like this
expect {
pattern1 {body1}
# this
is a
comment pattern2
{body2} ...
}
When you put the comment inside an action body, as I've done above, then you're safe because the body is evaluated according to the rules of Tcl (spelled out in the 12 whole rules here).
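For instance, this placement is safe (pattern1/body1 are placeholders):
expect {
pattern1 {
# comments are fine here, inside the action body
body1
}
pattern2 {body2}
}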
Phew. Hope that helps. I highly recommend that you check out the book for all the details.
As I commented on Glenn's answer, I found that the problem wasn't the terminal window but the way the script is called.
My expect script is called several times by another BASH script with the bare line "/path/expect-script-name.exp [parameters]". Opening a terminal window (in any desktop environment), I can execute the caller script with "/path/bash-script-name.sh". This way everything runs well, because the shebang is used to pick the right interpreter (in this case EXPECT).
I then added the BASH script (i.e. the caller of the EXPECT script) to the system start-up list, where it runs without a focused terminal window. Run this way, it gives errors.
The solution is to call the EXPECT script explicitly from the BASH script, as in: "expect /path/expect-script-name.exp".
I found that without this explicit call, the shell DASH handles all the scripts (including the EXPECT scripts).
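A minimal sketch of that fix, reusing the placeholder paths above and the five parameters the expect script takes:
#!/bin/bash
# caller script: name the interpreter explicitly so the .exp file is never
# handed to dash when this runs from the start-up list
expect /path/expect-script-name.exp "$password" "$user_ip" "$dir1" "$dir2" "$target"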

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, I have a lot of data. The output nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
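For example (a sketch; disown is a builtin in bash, ksh, and zsh):
nohup some_command >/dev/null 2>&1 &
disown    # drop the job from the shell's job table; no more signal forwarding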
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
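For instance (make is just a stand-in for any command with noisy standard error):
make 2>&1 | less    # stderr joins stdout, so both streams flow through the pipe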
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
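A trivial way to see that behaviour:
cat </dev/null    # exits immediately: the first read returns end-of-file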
nohup some_command > /dev/null 2>&1&
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command
with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
The following command lets you run something in the background without getting nohup.out:
nohup command |tee &
In this way, you will be able to see console output while running the script on the remote server.
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to re-ask for the password; thus, an awkward mechanism like this is needed for this variant.
If you have a bash shell on your Mac or Linux machine in front of you, try out the steps below to understand redirection practically.
Create a 2-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection:
./zz.sh > zfile.txt
In the above, "echo" (STDOUT) goes into zfile.txt, whereas "error" (STDERR) is displayed on the screen.
The above is the same as:
./zz.sh 1> zfile.txt
Now you can try the opposite and redirect "error" (STDERR) into the file; the STDOUT from the "echo" command goes to the screen.
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT 1 to zfile.txt
THEN, send STDERR 2 to STDOUT 1 itself (by using the &1 pointer).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Eventually, you can pack the whole thing inside nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below (note that the trailing & comes after the redirections):
nohup <your command> > <outputfile> 2>&1 &
e.g., I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1
