moving from one server to another in a shell script - unix

Here is the scenario,
$hostname
server1
I have the below script in server1,
#!/bin/ksh
echo "Enter server name:"
read server
rsh -n ${server} -l mquser "/opt/hd/ca/scripts/envscripts.ksh"
qdisplay
# script ends.
In the above script I am logging into another server, say server2, and executing the script "envscripts.ksh", which sets a few aliases (including the alias "qdisplay") defined in it.
I am able to successfully log in to server2 but unable to use the aliases set by the script "envscripts.ksh".
Getting the below error,
-bash: qdisplay: command not found
Can someone please point out what needs to be corrected here.
Thanks,
Vignesh

The other responses and comments are correct. Your rsh command needs to execute both the ksh script and the subsequent command in the same invocation. However, I thought I'd offer an additional suggestion.
It appears that you are writing custom instrumentation for WebSphere MQ. Your approach is to remote shell to the WMQ server and execute a command to display queue attributes (probably depth).
The objective of writing your own instrumentation is admirable, however attempting to do it as remote shell is not an optimal approach. It requires you to maintain a library of scripts on each MQ server and in some cases to maintain these scripts in different languages.
I would suggest that a MUCH better approach is to use the MQSC client available in SupportPac MO72. This allows you to write the scripts once, and then execute them from a central server. Since the MQSC commands are all done via MQ client, the same script handles Windows, UNIX, Linux, iSeries, etc.
For example, you could write a script that remotely queried queue depths and printed a list of all queues with depth > 0. You could then either execute this script directly against a given queue manager or write a script to iterate through a list of queue managers and collect the same report for the entire network. Since the scripts are all running on the one central server, you do not have to worry about getting $PATH right, differences in commands like tr or grep, where ksh or perl are installed, etc., etc.
Ten years ago I wrote the scripts you are working on when my WMQ network was small. When the network got bigger, these platform differences ate me alive and I was unable to keep the automation up and running. When I switched to using WMQ client and had only one set of scripts I was able to keep it maintained with far less time and effort.
The following script assumes that the QMgr name is the same as the host name except in UPPER CASE. You could instead pass QMgr name, hostname, port and channel on the command line to make the script useful where QMgr names do not match the host name.
#!/usr/bin/perl -w
#-------------------------------------------------------------------------------
# mqsc.pl
#
# Wrapper for MO72 SupportPac mqsc executable
# Supply parm file name on command line and host names via STDIN.
# Program attempts to connect to hostname on SYSTEM.AUTO.SVRCONN and port 1414
# redirecting parm file into mqsc.
#
# Intended usage is...
#
# mqsc.pl parmfile.mqsc
# host1
# host2
#
# -- or --
#
# mqsc.pl parmfile.mqsc < nodelist
#
# -- or --
#
# cat nodelist | mqsc.pl parmfile.mqsc
#
#-------------------------------------------------------------------------------
use strict;
$SIG{ALRM} = sub { die "timeout" };
$ENV{PATH} =~ s/:$//;
my $File = shift;
die "No mqsc parm file name supplied!" unless $File;
die "File '$File' does not exist!\n" unless -e $File;
while (<>) {
  my @Results;
  chomp;
  next if /^\s*[#*]/; # Allow comments using # or *
  s/^\s+//;           # Delete leading whitespace
  s/\s+$//;           # Delete trailing whitespace
  # Do not accept hosts with embedded spaces in the name
  die "ERROR: Invalid host name '$_'\n" if /\s/;
  # Silently skip blank lines
  next unless ($_);
  my $QMgrName = uc($_);
  #----------------------------------------------------------------------------
  # Run the parm file in
  eval {
    alarm(10);
    @Results = `mqsc -E -l -h $_ -p detmsg=1,prompt="",width=512 -c SYSTEM.AUTO.SVRCONN < $File 2>&1 | grep -v "^MQSC Ended"`;
  };
  if ($@) {
    if ($@ =~ /timeout/) {
      print "Timed out connecting to $_\n";
    } else {
      print "Unexpected error connecting to $_: $!\n";
    }
  }
  alarm(0);
  if (@Results) {
    print join("\t", @Results, "\n");
  }
}
exit;
The parmfile.mqsc is any valid MQSC script. One that gathers all the queue depths looks like this:
DISPLAY QL(*) CURDEPTH
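Along the same lines as the depth report described earlier, a parm file that lists only the non-empty queues might look like the line below (a sketch; double-check the WHERE filter syntax against your MQ version):
DISPLAY QL(*) WHERE(CURDEPTH GT 0)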

I think the real problem is that the r(o)sh cmd only executes the remote envscripts.ksh file and that your script is then trying to execute qdisplay on your local machine.
You need to 'glue' the two commands together so they are both executed remotely.
EDITED per comment from Gilles (He is correct)
rosh -n ${server} -l mquser ". /opt/hd/ca/scripts/envscripts.ksh ; qdisplay"
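Putting that back into the original ksh script from the question (a sketch, assuming the same paths and user):
#!/bin/ksh
echo "Enter server name:"
read server
# Source the env script and run the alias in the same remote invocation,
# so the alias definitions are still in effect when qdisplay runs.
rsh -n ${server} -l mquser ". /opt/hd/ca/scripts/envscripts.ksh ; qdisplay"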
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, or give it a + (or -) as a useful answer

Related

rsync : how to copy only latest file from target to source

We have a main Linux server, say M, where we have files like below (for 2 months, and new files arriving daily)
Folder1
PROCESS1_20211117.txt.gz
PROCESS1_20211118.txt.gz
..
..
PROCESS1_20220114.txt.gz
PROCESS1_20220115.txt.gz
We want to copy only the latest file to our processing server, say P.
So as of now, we have been using the below command on our processing server.
rsync --ignore-existing -azvh -rpgoDe ssh user@M:${TargetServerPath}/${PROCSS_NAME}_*txt.gz ${SourceServerPath}
This process worked fine until now, but going forward we can keep files on the processing server for only up to 3 days, whereas on our main server we can keep files for 2 months.
So when we remove older files from the processing server, the rsync command copies all the files from the main server to the processing server again.
How can I change the rsync command to copy only the latest file from the main server?
*Note: the example above is only for one file. We have multiple files on which we have to use the same command. Hence we cannot hardcode any filename.
What I tried:
There are multiple solutions, but all seem to be for copying the latest file from the server I am running rsync on, not from the remote server.
I also tried running the command below to get the latest file from the main server, but I cannot pass variables to SSH in my company, as it is not allowed. So the command below works if I pass an individual path/file name, but it does not work with variables.
ssh M 'ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz|tail -1'
Would really appreciate any suggestions on how to implement this solution.
OS: Linux 3.10.0-1160.31.1.el7.x86_64
ssh quoting is confusing - to quote it properly, you have to double-quote the command locally.
The handy printf %q trick helps here - use it to quote the relevant parts.
file=$(
  ssh M "ls -1 $(printf "%q" "${TargetServerPath}/${PROCSS_NAME}")_*.txt.gz" |
  tail -1
)
rsync --ignore-existing -azvh -rpgoDe ssh user@M:"$file" "${SourceServerPath}"
or, maybe nicer, run tail -n1 on the remote side, so that the minimum amount of data is transferred (we only need one filename, not all of them); invoke an explicit shell and pass the variables as shell arguments:
file=$(ssh M "$(printf "%q " bash -c \
  'ls -1 "$1"_*.txt.gz | tail -n1' \
  '_' "${TargetServerPath}/${PROCSS_NAME}"
)")
Overall, I recommend defining a function and using declare -f:
sshqfunc() { echo "bash -c $(printf "%q" "$(declare -f "$1"); $1 \"\$@\"")"; };
work() {
ls -1 "$1"_*txt.gz | tail -1
}
tmp=$(ssh M "$(sshqfunc work)" _ "${TargetServerPath}/${PROCSS_NAME}")
or you can also use the mighty declare to transfer variables to remote - then run your command inside single quotes:
ssh M "
$(declare -p TargetServerPath PROCSS_NAME);
"'
ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz | tail -1
'
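Putting the pieces together, an end-to-end sketch (assuming the same M, TargetServerPath, PROCSS_NAME and SourceServerPath variables as in the question) first fetches the newest matching filename and then lets rsync copy just that one file:
latest=$(ssh M "
$(declare -p TargetServerPath PROCSS_NAME);
"'
ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz | tail -n1
')
# copy only that file; skip the transfer entirely if nothing matched
[ -n "$latest" ] && rsync --ignore-existing -azvh -e ssh "user@M:$latest" "${SourceServerPath}"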

Running ssh script with background process

Below is a simple example of what I'm trying to accomplish. I'm trying to force an ssh script to not wait for all child processes to exit before returning. The purpose is to launch a daemon process on a remote host via ssh.
test.sh
#!/bin/bash
(
sleep 2
echo "done"
) &
When I run the script on the console it returns immediately, with "done" appearing 2 seconds later.
When I run the script via ssh, the ssh command does not return immediately. It appears to wait until all child processes have terminated before ssh exits.
ssh example
$ ssh mike@127.0.0.1 /home/mike/test.sh
(2 seconds)
done
standard terminal example
$ ./test.sh
$
(2 seconds)
done
How can I make ssh return when the parent/main process has terminated?
EDIT:
I'm aware of the -f option to ssh to run the process in the background. However, it leaves the ssh process and connection open on the source host. For my purposes this is unsuitable.
ssh mike@127.0.0.1 /home/mike/test.sh
When you run ssh in this fashion, the remote ssh server will create a set of pipes (or socketpairs) which become the standard input, output, and error for the process which you requested it to run, in this case the script process. The ssh server doesn't end the session based on when the script process exits. Instead, it ends the session when it reads an end-of-file indication on the script process's standard output and standard error.
In your case, the script process creates a child process which inherits the script's standard input, output, and error. A pipe (or socketpair) only returns EOF when all possible writers have exited or closed their end of the pipe. As long as the child process is running and has a copy of the standard output/error file descriptors, the ssh server won't read an EOF indication on those descriptors and it won't close the session.
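A quick way to observe this EOF behaviour with the same host used above: the first command blocks for about 5 seconds because the backgrounded sleep still holds copies of the session's stdout/stderr, while the second returns immediately because those copies are redirected away.
ssh mike@127.0.0.1 'sleep 5 & echo started'                    # returns after ~5 seconds
ssh mike@127.0.0.1 'sleep 5 >/dev/null 2>&1 & echo started'    # returns immediately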
You can get around this by redirecting standard input and standard output in the command that you pass to the remote server:
ssh mike@127.0.0.1 '/home/mike/test.sh > /dev/null 2>&1'
(note the quotes are important)
This avoids passing the standard output and standard error created by the ssh server to the script process or the subprocesses that it creates.
Alternately, you could add a redirection to the script:
#!/bin/bash
(
exec > /dev/null 2>&1
sleep 2
echo "done"
) &
This causes the script's child process to close its copies of the original standard output and standard error.

Using file locks with rsync

From the rsync manual documentation I see that by using the option rsync-path, it is possible to specify what program is to be run on the remote machine to start up rsync. In particular, the program could be a wrapper script which calls the actual rsync command in the middle, but which does some actions before and/or after the rsync invocation. One possible interesting use would be to acquire/release a lock (e.g., a flock), so that the operations of rsync at the remote end could be co-ordinated with another process at the far end which is contending for write access to the same files. There could be multiple rsync processes simultaneously holding the shared lock (I am aware of potential for starvation but am not concerned about that right now). The 'writer' process I'm dealing with would just be changing a few hard-links, so it would not block the rsync process for any significant length of time.
I have looked at other co-ordination approaches, e.g., implementing a custom remote locking protocol between the client and server, but they all involve more development work and/or are unsatisfactory for other reasons, which is why I am interested in the wrapper/(f)lock approach.
My questions are:
1) Is this a reasonable way to solve the problem of co-ordinating rsync 'readers' with another, 'writer' process accessing the same directory?
2) Can you also put a wrapper around rsync when using the inetd (or xinetd) daemon approach to running rsync, by adding a line something like the following to /etc/inetd.conf (as per the rsyncd.conf man page):
rsync stream tcp nowait root /usr/bin/rsync rsyncd --daemon
but replacing /usr/bin/rsync with the path to your rsync-lookalike wrapper, which in this case would be a C/C++ program which seizes a lock, forks off rsync, waits for rsync to complete, then releases the lock.
Thanks,
Tom
One potential catch with the wrapper approach: the remote process seems to be called with extra arguments, which are appended to whatever command line you specify with --rsync-path. So if you need to pass arguments, something like the following style is needed.
#! /bin/sh
lock_target=$1
shift
if ! lockfile ${lock_target}.lock ; then exit 1 ; fi
trap "rm -f ${lock_target}.lock" EXIT HUP TERM INT
/usr/bin/rsync "$@"
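For completeness, a sketch of how the client side might invoke such a wrapper (the wrapper path and lock target below are illustrative): the lock target is passed as the first argument via --rsync-path, and rsync's own --server arguments then land in "$@" for the wrapper to hand on.
rsync -a --rsync-path="/usr/local/bin/rsync-lock-wrapper /data/shared" \
    user@remotehost:/data/shared/ /local/copy/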
Thanks to the question and the comments. Armed with your ideas I solved it (for me) using --rsync-path but without any wrapper scripts on the remote host, simply by putting the whole payload script into --rsync-path, with a few tricks.
This particular example uses rsync to pull data from a remote host while holding a flock on that host; e.g. the remote host dumps data periodically while also holding the flock, so the dump and the pull must not be interleaved.
Points to note
rsync will append its arguments to the end of whatever command you specify in "--rsync-path", so the command needs to cope with that; for this I rely on bash shell features on both the pulling and remote hosts.
any pre- and post-processing on the remote host must not write to STDOUT because that will corrupt the rsync protocol and rsync will bail. Any error output should go to STDERR, and it will turn up on the pulling host as rsync STDERR output. This is why '1>&2' appears in all the error handling.
this probably relies on the remote command spawned by rsync being run by bash, because I think good old sh does not support arrays. This works for me between RHEL7 boxes. A possible workaround is proposed at the end.
With that in mind, here is my simplified, concept-only rehash (I've not run this particular script; my full solution has extra layers that would distract from the main point).
The script on the pulling host:
#!/bin/bash
function rsync_wrap() {
  {
    flock --exclusive --timeout ${LOCK_TIMEOUT} 100 || {
      echo "Failed to lock: ${LOCK_TIMEOUT}" 1>&2
      return 1
    }
    # call real rsync with original arguments
    rsync "$@"
    exit_code=$?
    if [ ${exit_code} -eq 0 ]; then
      # Do clean up when success
      # rm -f "${LOCK_FILE}"
      # rm -rf /eg/purge/data
      :  # no-op placeholder; a branch containing only comments is a syntax error
    else
      # Do clean up when failed
      :  # no-op placeholder
    fi
    # Note, return is important, do not let it fall out
    return ${exit_code}
  } 100<"${LOCK_FILE}"
  echo "Failed to open lock file: ${LOCK_FILE}" 1>&2
  return 1
}
# Define vars
LOCK_FILE=/var/somedir/name.lock; # or /dev/shm/name.lock
LOCK_TIMEOUT=600; #in seconds
# Build remote command, define vars and functions inside the command
remote_cmd="
# this approach deals with crazy chars in variables and function code
$( declare -p LOCK_FILE )
$( declare -p LOCK_TIMEOUT )
$( declare -f rsync_wrap )
rsync_wrap "
local_cmd=(
  rsync
  -a
  --rsync-path="${remote_cmd}"
  # I want to handle network timeouts in SSH, not in rsync,
  # because rsync does not know that waiting for lock is expected
  -e "ssh -o BatchMode=yes -o ServerAliveCountMax=3 -o ServerAliveInterval=30 ${IDENTITY_FILE:+ -i '${IDENTITY_FILE}'}"
  # source is on the remote host so that --rsync-path takes effect there
  remote_host:/remote/source/path
  /local/destination/path/
)
# Do it
"${local_cmd[@]}"
If the remote side executes --rsync-path with something other than bash, then maybe the whole remote command could be wrapped in something like:
remote_cmd="bash -c '${remote_cmd//\'/\'\\\'\'}'"
As per the comments to the original post, it is indeed feasible to use the wrapper approach to implement (f)locks around rsync at the server end.

TCP network communication security risks

I am developing an application that can establish a server-client connection using QTcp*
The client sends the server a number.
The received string is checked for its length and validity (is it really a number?).
If everything is OK, then the server replies back with a file path (which depends on the sent number).
The client checks if the file exists and if it is a valid image. If the file complies with the rules, it executes a command on the file.
What security concerns exist on this type of connection?
The program is designed for Linux systems and the external command on the image file is executed using QProcess. If the string sent contained something like (do not run the following command):
; rm -rf /
then it would be blocked by the file-not-found security check (because it isn't a file path). If there weren't any check on the validity of the sent string, then the following command would be executed:
command_to_run_on_image ; rm -rf /
which would cause panic! But this cannot happen.
So, is there anything I should take into consideration?
If you open a console and type command ; rm -rf /*, something bad would likely happen. It's because commands are processed by the shell. It parses the text input, e.g. splits commands at the ; delimiter and splits arguments by spaces, then it executes the parsed commands with the parsed arguments using the system API.
However, when you use process->start("command", QStringList() << "; rm -rf /*");, there is no such danger. QProcess will not execute a shell. It will execute the command directly using the system API. The result will be similar to running command "; rm -rf /*" in the shell, i.e. the whole string is passed as a single quoted argument.
So you can be sure that only your command will be executed and the parameter will be passed to it as-is. The only danger is the possibility for an attacker to call the command with any file path he could construct. The consequences depend on what the command does.

Expect scripts need a focused window session to work?

I have the following expect script to sync a local folder with a remote one:
#!/usr/bin/expect -f
# Expect script to interact with password based commands. It synchronizes a local
# folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
set password [lrange $argv 0 0]
set user_ip [lrange $argv 1 1]
set dir1 [lrange $argv 2 2]
set dir2 [lrange $argv 3 3]
set target [lrange $argv 4 4]
set timeout 10
# now connect to remote UNIX box (ipaddr) with given script to execute
spawn rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
spawn ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target
match_max 100000
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\n"
exp_continue
}
# Look for password prompt
"*?assword*" {
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
interact
}
}
It works properly if I execute it in a gnome-terminal window, but it stops at the password request if I execute it without one (such as via the ALT+F2 run dialog, from cron, or from a startup script).
I couldn't find any information on whether expect needs an active terminal window to interact correctly.
Has somebody else experienced this strange behaviour? Is it a feature or a bug? Any solution?
Thank you.
Your script has several errors. A quick re-write:
#!/usr/bin/expect -f
# Expect script to interact with password based commands. It synchronizes a local
# folder with a remote one in both directions.
# This script needs 5 arguments to work:
# password = Password of remote UNIX server, for root user.
# user_ip = user@server format
# dir1=directory in remote server with / final
# dir2=local directory with / final
# target=target directory
# set Variables
lassign $argv password user_ip dir1 dir2 target
set timeout 10
spawn /bin/sh
set sh_prompt {\$ $}
expect -re $sh_prompt
match_max 100000
# now connect to remote UNIX box (ipaddr) with given script to execute
send "rsync -ruvzt -e ssh $user_ip:$dir1$target $dir2\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
# Look for password prompt
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
}
-re $sh_prompt
}
send "rsync -ruvzt -e ssh $dir2$target $user_ip:$dir1\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
send "ssh $user_ip /home/pi/bash/cerca_del.sh $dir1$target\r"
expect {
-re ".*Are.*.*yes.*no.*" {
send "yes\r"
exp_continue
}
"*?assword*" {
send -- "$password\r"
send -- "\r"
}
-re $sh_prompt
}
Main points:
you were spawning several commands instead of spawning a shell and sending the commands to it
you put a comment outside of an action block (more details below)
the interact command gives control back to the user, which you don't want in a cron script
Why a comment in a multi-pattern expect block is bad:
Tcl doesn't treat comments like other languages do: the comment character only acts like a comment when it appears in a place where a command can go. That's why you see end-of-line comments in expect/tcl code like this
command arg arg ... ;# this is the comment
If that semi-colon were missing, the # would be handled as just another argument for the command.
A multi-pattern expect command looks like
expect pattern1 {body1} pattern2 {body2} ...
or with line continuations
expect \
pattern1 {body1} \
pattern2 {body2} \
...
Or in braces (best style, and as you've written)
expect {
pattern1 {body1}
pattern2 {body2}
...
}
The pattern may be optionally preceded with -exact, -regexp, -glob and --
When you put a comment in there like this:
expect {
pattern1 {body1}
# this is a comment
pattern2 {body2}
...
}
Expect is not looking for a new command there: it will interpret the block like this
expect {
pattern1 {body1}
# this
is a
comment pattern2
{body2} ...
}
When you put the comment inside an action body, as I've done above, then you're safe because the body is evaluated according to the rules of Tcl (spelled out in the 12 whole rules here).
Phew. Hope that helps. I highly recommend that you check out the book for all the details.
As I commented on Glenn's answer, I saw that the problem wasn't the terminal window but the way the script is called.
My expect script is called several times by another BASH script with the crude line "/path/expect-script-name.exp [parameters]". Opening a terminal window (in any desktop environment), I can execute the caller script with "/path/bash-script-name.sh". In this way everything runs well, because the shebang is used to call the right interpreter (in this case EXPECT).
I then added the BASH script (i.e. the caller of the EXPECT script) to the system start-up list, so that it runs without a focused terminal window. Run this way, it gives errors.
The solution is to call the EXPECT script explicitly from the BASH script, as in "expect /path/expect-script-name.exp".
I found that without this explicit call, the shell DASH manages all the scripts (including the EXPECT scripts).
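A minimal sketch of such a caller script (the script path and parameter values below are illustrative placeholders), invoking the interpreter explicitly so that dash never tries to run the .exp file itself:
#!/bin/bash
# caller added to the start-up list; values below are placeholders for illustration
password='secret'
user_ip='pi@remotehost'
dir1='/remote/dir/'
dir2='/local/dir/'
target='subdir'
expect /path/expect-script-name.exp "$password" "$user_ip" "$dir1" "$dir2" "$target"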
