The following does not work for me:
ssh user@remote.server "k=5; echo $k;"
it just returns an empty line.
How can I assign a variable on a remote session (ssh)?
Note: My question is not about how to pass local variables into my ssh session, but rather how to create and assign remote variables (it should be a pretty straightforward task, right?).
Edit:
In more detail I am trying to do this:
bkp=/some/path/to/backups
ssh user#remote.server "bkps=( $(find $bkp/* -type d | sort) );
echo 'number of backups: '${#bkps[#]};
while [ ${#bkps[#]} -gt 5 ]; do
echo ${bkps[${#bkps[#]}-1]};
#rm -rf $bkps[${#bkps[#]}-1];
unset bkps[${#bkps[#]}-1];
done;"
The find command works fine, but for some reason $bkps does not get populated.
So my guess was that it would be a variable assignment issue, since I think I have checked everything else...
Given this invocation:
ssh user@remote.server "k=5; echo $k;"
the local shell expands $k (which most likely isn't set) before it ever executes ssh. So the command that actually gets passed to the remote shell once the connection is made is k=5; echo ; (or k=5; echo something_else_entirely; if k happens to be set locally).
To avoid this, escape the dollar sign like this:
ssh user@remote.server "k=5; echo \$k;"
Alternatively, use single quotes instead of double quotes to prevent the local expansion. While that would work in this simple example, you may actually want local expansion of some variables in the command that gets sent to the remote side, so the backslash-escaping is probably the better route.
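Applied to the backup snippet from the question, this gives the sketch below: $bkp is expanded locally on purpose, while every remote-side expansion is escaped with a backslash so that the remote shell is the one to evaluate it.
bkp=/some/path/to/backups
ssh user@remote.server "bkps=( \$(find $bkp/* -type d | sort) );
echo 'number of backups: '\${#bkps[@]};
while [ \${#bkps[@]} -gt 5 ]; do
echo \${bkps[\${#bkps[@]}-1]};
#rm -rf \${bkps[\${#bkps[@]}-1]};
unset bkps[\${#bkps[@]}-1];
done;"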
For future reference, you can also type set -x in your shell to echo the actual commands that are being executed as a help for troubleshooting.
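For instance, a hypothetical session might look like this (the exact trace format varies by shell):
$ set -x
$ ssh user@remote.server "k=5; echo $k;"
+ ssh user@remote.server 'k=5; echo ;'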
I need to programmatically check whether a file has been distributed across an array of servers. I log in to a master server and then want to check for the file on the workers using simple ssh. So far I have:
ssh $HOSTNAME "[ -e '$HOSTNAME:/directory/filename' ] && echo 'Exists'"
Based on some of the logging output, I know the ssh is successful, but how can I get the test to return a message to the master server? Running the above returns nothing.
SSH will exit with the same exit code as the command that you run on the remote host. If that command is a test, then the exit code will match what you would normally expect from a test.
I would suggest the following:
Simplify your command to only run the test over SSH
Run the echo on your local machine
It doesn't seem correct that you have $HOSTNAME: in front of your path.
ssh "$HOSTNAME" "test -e '/directory/filename'" && echo 'Exists'
I personally find if statements much easier to read; switching to one is optional if you are willing to go that route:
if ssh "$HOSTNAME" "test -e '/directory/filename'"; then
echo "Exists"
else
echo "Does not exist" >&2
exit 1
fi
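If you need the same check on several workers, as the question suggests, a simple loop over the hostnames works. Here is a minimal sketch, where WORKERS is a hypothetical stand-in for however you enumerate your hosts:
WORKERS="worker1 worker2 worker3"   # hypothetical list of worker hostnames
for host in $WORKERS; do
if ssh "$host" "test -e '/directory/filename'"; then
echo "$host: exists"
else
echo "$host: missing" >&2
fi
done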
I am running a tunnel like this:
socat TCP-LISTEN:9090,fork TCP:192.168.1.3:9090
I would like to run a script to execute code with the strings passing through the tunnel.
The script should not change the strings; it only processes each string independently and lets everything pass unchanged between the two ends.
Is this possible?
Using this approach, you should even be able to alter the communication:
Prepare a helper script helper.sh which gets executed for each connection:
#!/bin/bash
./inFilter.sh | socat - TCP:192.168.1.3:9090 | ./outFilter.sh
And start listening by using:
socat TCP-LISTEN:9090,fork EXEC:"./helper.sh"
The scripts inFilter.sh and outFilter.sh are processing the client and the server parts of the communication.
Example inFilter.sh:
#!/bin/bash
while read -r l ; do echo "IN(${l})" ; done
Example outFilter.sh:
#!/bin/bash
while read -r l ; do echo "OUT(${l})" ; done
This method should work for line-based text communication (as almost everything is line-buffered).
To make it work for binary protocols, wrapping all the processes with stdbuf -i0 -o0 might help, and getting rid of the shell in the filter scripts is probably a good idea in that case.
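For example, the helper could become something like this (only a sketch, assuming GNU coreutils stdbuf is available on the machine):
#!/bin/bash
# unbuffered variant of helper.sh for binary traffic
stdbuf -i0 -o0 ./inFilter.sh | stdbuf -i0 -o0 socat - TCP:192.168.1.3:9090 | stdbuf -i0 -o0 ./outFilter.sh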
Good luck!
I am trying to get a number of files from a Unix machine using an MS-DOS ftp script (Windows 7). I am new to this, so I have been trying to modify an online example. The code is as follows:
@echo off
SETLOCAL
REM ##################################
REM Change these parameters
set FTP_HOST=host
set FTP_USER=user
set FTP_REMOTE_DIR=/users/myAcc/logFiles
set FTP_REMOTE_FILE=*.log
set FTP_LOCAL_DIR=C:\Temp
set FTP_TRANSFER_MODE=ascii
REM ##################################
set FTP_PASSWD=password
set SCRIPT_FILE=%TEMP%\ftp.txt
(
echo %FTP_USER%
echo %FTP_PASSWD%
echo %FTP_TRANSFER_MODE%
echo lcd %FTP_LOCAL_DIR%
echo cd %FTP_REMOTE_DIR%
echo prompt
echo mget %FTP_REMOTE_FILE%
) > %SCRIPT_FILE%
ftp -s:%SCRIPT_FILE% %FTP_HOST%
del %SCRIPT_FILE%
ENDLOCAL
However, when I run this the mget command fails and the following output is given:
Note: the output from the rest of the script shows that all of the previous steps are working as expected. I have even added ls commands to verify the script is in the correct directory.
...
ftp> mget *.log
200 Type set to A; form set to N.
mget logFile1_SystemOut_22-01-13.log?
mget logFile2_SystemOut_22-01-13.log?
mget logFile3_SystemOut_22-01-13.log?
ftp>
I have run through this manually repeating the exact same steps and it works fine - no problems and the files are successfully transferred to the C:\Temp directory.
I have checked numerous forums and other websites and I can't see any reason why it should behave like this. Any pointers as to why this doesn't work in the script would be great!
Thanks
The usual option for turning off the prompt generated by ftp mget is
ftp -i
By default, ftp stops and prompts for confirmation for each file matched by the mget wildcard string you generate in your script.
I call ftp scripts on Windows like this:
ftp -i -s:%SCRIPT_FILE% %FTP_HOST%
This is because ftp -si:%SCRIPT_FILE% %FTP_HOST% doesn't work.
I guess it's the same on Unix: the switches have to be separated.
ftp -i worked for me.
Change ftp -s:%SCRIPT_FILE% %FTP_HOST% to ftp -i -s:%SCRIPT_FILE% %FTP_HOST% in your script, as @jim mcnamara suggested.
ftp -i -s:%SCRIPT_FILE% %FTP_HOST% worked for me, too.
Another option is to toggle prompt in the ftp script before you invoke mget (which you already do); I've also read about mget -i somewhere.
But note: prompt in the ftp script switches prompting back on if it was already off, so use either ftp -i OR prompt, but not both!
You can check whether your script otherwise works by echoing a few y's in the ftp script after mget, so that it answers yes to the prompts as they come up.
It appears that in this question, the answer was to separate statements with semicolons. However, I would think that could become cumbersome once we get into complex scripting with multiple if statements and complex quoted strings.
I imagine another alternative would be to simply issue multiple SSH commands one after the other, but again that would be cumbersome; plus, I'm not set up for public/private key authentication, so this would ask for a password a bunch of times.
What I'd ideally like is something much like the interactive shell experience: at one point in the script you ssh into the_remote_server and it prompts for the password, which you type in (interactively); from that point on, until your script issues the "exit" command, all commands in the script are interpreted on the remote machine.
Of course this doesn't work:
ssh user@host.com
cd some/dir/on/remote/machine
tar -xzf my_tarball.tgz
cd some/other/dir/on/remote
cp -R some_directory somewhere_else
exit
Is there another alternative? I suppose I could take that part right out of my script and stick it into a script on the remote host. Meh. Now I'm maintaining two scripts. Plus I want a little configuration file to hold defaults and other stuff and I don't want to be maintaining that in two places either.
Is there another solution?
Use a heredoc.
ssh user@host.com << EOF
cd some/dir/on/remote/machine
tar -xzf my_tarball.tgz
cd some/other/dir/on/remote
cp -R some_directory somewhere_else
EOF
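One caveat: with an unquoted delimiter, the heredoc body is still expanded by the local shell, exactly like the double-quoted strings discussed earlier. Quoting the delimiter defers all expansion to the remote shell; a minimal sketch:
ssh user@host.com <<'EOF'
cd some/dir/on/remote/machine
echo "$PWD"   # expands on the remote side because the delimiter is quoted
EOF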
Use heredoc syntax, like
ssh user@host.com <<EOD
cd some/dir/on/remote/machine
...
EOD
or pipe, like
echo "ls -al" | ssh user#host.com
Here is the scenario,
$hostname
server1
I have the below script in server1,
#!/bin/ksh
echo "Enter server name:"
read server
rsh -n ${server} -l mquser "/opt/hd/ca/scripts/envscripts.ksh"
qdisplay
# script ends.
In the above script I log in to another server, say server2, and execute the script "envscripts.ksh", which sets a few aliases (for example the alias "qdisplay") defined in it.
The login succeeds, but I am unable to use the alias set by "envscripts.ksh".
I get the error below:
-bash: qdisplay: command not found
Can someone please point out what needs to be corrected here?
Thanks,
Vignesh
The other responses and comments are correct. Your rsh command needs to execute both the ksh script and the subsequent command in the same invocation. However, I thought I'd offer an additional suggestion.
It appears that you are writing custom instrumentation for WebSphere MQ. Your approach is to remote shell to the WMQ server and execute a command to display queue attributes (probably depth).
The objective of writing your own instrumentation is admirable, however attempting to do it as remote shell is not an optimal approach. It requires you to maintain a library of scripts on each MQ server and in some cases to maintain these scripts in different languages.
I would suggest that a MUCH better approach is to use the MQSC client available in SupportPac MO72. This allows you to write the scripts once, and then execute them from a central server. Since the MQSC commands are all done via MQ client, the same script handles Windows, UNIX, Linux, iSeries, etc.
For example, you could write a script that remotely queried queue depths and printed a list of all queues with depth > 0. You could then either execute this script directly against a given queue manager or write a script to iterate through a list of queue managers and collect the same report for the entire network. Since the scripts are all running on the one central server, you do not have to worry about getting $PATH right, differences in commands like tr or grep, where ksh or perl are installed, etc., etc.
Ten years ago I wrote the scripts you are working on when my WMQ network was small. When the network got bigger, these platform differences ate me alive and I was unable to keep the automation up and running. When I switched to using WMQ client and had only one set of scripts I was able to keep it maintained with far less time and effort.
The following script assumes that the QMgr name is the same as the host name except in UPPER CASE. You could instead pass QMgr name, hostname, port and channel on the command line to make the script useful where QMgr names do not match the host name.
#!/usr/bin/perl -w
#-------------------------------------------------------------------------------
# mqsc.pl
#
# Wrapper for M072 SupportPac mqsc executable
# Supply parm file name on command line and host names via STDIN.
# Program attempts to connect to hostname on SYSTEM.AUTO.SVRCONN and port 1414
# redirecting parm file into mqsc.
#
# Intended usage is...
#
# mqsc.pl parmfile.mqsc
# host1
# host2
#
# -- or --
#
# mqsc.pl parmfile.mqsc < nodelist
#
# -- or --
#
# cat nodelist | mqsc.pl parmfile.mqsc
#
#-------------------------------------------------------------------------------
use strict;
$SIG{ALRM} = sub { die "timeout" };
$ENV{PATH} =~ s/:$//;
my $File = shift;
die "No mqsc parm file name supplied!" unless $File;
die "File '$File' does not exist!\n" unless -e $File;
while (<>) {
my @Results;
chomp;
next if /^\s*[#*]/; # Allow comments using # or *
s/^\s+//; # Delete leading whitespace
s/\s+$//; # Delete trailing whitespace
# Do not accept hosts with embedded spaces in the name
die "ERROR: Invalid host name '$_'\n" if /\s/;
# Silently skip blank lines
next unless ($_);
my $QMgrName = uc($_);
#----------------------------------------------------------------------------
# Run the parm file in
eval {
alarm(10);
@Results = `mqsc -E -l -h $_ -p detmsg=1,prompt="",width=512 -c SYSTEM.AUTO.SVRCONN < $File 2>&1 | grep -v "^MQSC Ended"`;
};
if ($@) {
if ($@ =~ /timeout/) {
print "Timed out connecting to $_\n";
} else {
print "Unexpected error connecting to $_: $!\n";
}
}
alarm(0);
if (@Results) {
print join("\t", @Results, "\n");
}
}
exit;
The parmfile.mqsc is any valid MQSC script. One that gathers all the queue depths looks like this:
DISPLAY QL(*) CURDEPTH
I think the real problem is that the r(o)sh command only executes the remote envscripts.ksh file, and your script then tries to execute qdisplay on your local machine.
You need to 'glue' the two commands together so that both are executed remotely: source the script into the remote shell, then run the alias in that same shell invocation.
EDITED per comment from Gilles (He is correct)
rosh -n ${server} -l mquser ". /opt/hd/ca/scripts/envscripts.ksh ; qdisplay"
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, or give it a + (or -) as a useful answer.