I'm reading host information from a text file and passing it to an ssh command.
The text file contains the host, user and password for the ssh command:
while read LINE
do
R_USER=$(echo $LINE | cut -d ',' -f 1)
R_HOST=$(echo $LINE | cut -d ',' -f 2)
PY_SCRIPT=$(echo $LINE | cut -d ',' -f 4)
ssh $R_USER@$R_HOST 'touch /home/user/file_name.txt'
done </path_name/file_name
As it turns out, the while loop is only executed once, even if the host text file contains multiple lines of host information.
When I remove the ssh command, the while loop is executed once for every line in the host information text file.
Not sure why this is so.
Any information on this?
Roland
The default standard input handling of ssh drains the remaining lines from the while loop.
To avoid this problem, alter where the problematic command reads standard input from. If no standard input need be passed to the command, read standard input from the special /dev/null device:
while read LINE
do
R_USER=$(echo $LINE | cut -d ',' -f 1)
R_HOST=$(echo $LINE | cut -d ',' -f 2)
PY_SCRIPT=$(echo $LINE | cut -d ',' -f 4)
ssh $R_USER@$R_HOST 'touch /home/user/file_name.txt' < /dev/null
done </path_name/file_name
Alternatively, try using ssh -n, which prevents ssh from reading from standard input. For instance:
ssh -n $R_USER@$R_HOST 'touch /home/user/file_name.txt'
If the file is whitespace-separated
host1 user password
host2 user password
then a simple read loop:
while read -r Server User Password
do
/usr/bin/ssh -n $User@$Server touch /home/user/file_name.txt
done </path/to/file.list
But you will be prompted for the password. You cannot pass the "Password" to ssh, so I'd suggest creating passwordless ssh keys and placing them on each host for the user you are connecting as. If you are running this command from a script, you can ssh as these users on each host by placing your public key in the user's ~/.ssh/authorized_keys (or authorized_keys2) file. If you have the ssh-copy-id command, you can do this with:
ssh-copy-id user@hostname
which copies YOUR ssh key to their authorized_keys file so you can then ssh as them. This assumes you have permission, but then again you have their password, so what permission do you need?
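As a rough sketch (assuming the third comma-separated field really is the password and that sshpass is available; both are assumptions on my part), you could seed your key onto every host listed in the file in one pass:
while IFS=',' read -r R_USER R_HOST R_PASS PY_SCRIPT
do
    # hypothetical: use the password from the file once, to install your public key
    sshpass -p "$R_PASS" ssh-copy-id "$R_USER@$R_HOST" < /dev/null
done </path_name/file_name
The < /dev/null keeps ssh-copy-id from draining the rest of the host list, the same issue described above; once this has run, the main loop no longer needs a password at all.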
When I need to copy a file from the local server (server A) to a remote server (server B) via SSH, using a user with enough privileges, I do this successfully like below:
localpath='/this/is/local/path/file1.txt'
remotepath='/this/is/remote/path/'
mypass='MyPassword123'
sshpass -p $mypass scp username@hostname:$localpath $remotepath
Now, I have to transfer a file from server A to server C with a user that doesn't have enough privileges to copy. So once
I'm connected to Server C, I need to run su in order to be able to run commands like cd, ls, etc.
Manually, I access the server C via SSH like this:
[root@ServerA ~]# ssh username@hostname
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
Last login: Sat Jun 13 10:17:40 2020 from XXX.XXX.XXX.XXX
ServerC ~ $
ServerC ~ $ su
Password:
ServerC /home/myuser #
ServerC /home/myuser # cd /documents/backups/
ServerC /documents/backups #
At this moment myuser has superuser privileges and I can send commands.
Then, how can I automate the task to copy files from server A to server C with the need to send su once I'm connected to Server C?
I've tried so far doing like this:
sshpass -p $mypass ssh -t username@hostname "su -c \"cd /documents/backups/ && ls\""
It requests the password for su and I'm able to run cd and ls, but with this command I'm not copying files from Server A to Server C, only semi-automating the access to Server C and running su on Server C.
Thanks in advance for any help.
UPDATE
# $TAR | ssh $username@$hostname "$COMMAND"
+ tar -cv -C /this/is/local/path/file1.txt .
+ ssh username@X.X.X.X 'set -x; rm -f /tmp/copy && mknod /tmp/copy p; su - <<< "su_password
set -x; tar -xv -C /this/is/remote/path/ . < /tmp/copy" & cat > /tmp/copy'
tar: /this/is/local/path/file1.txt: Cannot chdir: Not a directory
tar: Error is not recoverable: exiting now
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
+ rm -f /tmp/copy
+ mknod /tmp/copy p
+ su -
+ cat
Password:
Editorial note: the previous version of this answer used sudo, the current version uses su as requested in the question.
You could use tar and pipes, like so:
TAR="tar -cv -C $localpath ."
UNTAR="tar -xv -C $remotepath ."
PREPARE_PIPE="rm -f /tmp/copy && mknod /tmp/copy p"
NEWLINE=$'\n' # that's the easiest way to get a literal newline
ROOT_PASSWORD=rootpasswordverydangerous
COMMAND="set -x; $PREPARE_PIPE; su - <<< \"${ROOT_PASSWORD}${NEWLINE} set -x; $UNTAR < /tmp/copy\" & cat > /tmp/copy"
$TAR | ssh username@hostname "$COMMAND"
Explanation:
tar -c . archives the current directory into a single file. We aren't passing -f to tar, so that single file is standard output.
tar -x . extracts the content of a single tar archive file to the current directory. We aren't passing -f to tar, so that single file is standard input.
-C <path> tells tar to cd into <path> so that it will be the current directory in which files are copied from/to.
-v just tells tar to list the files tar archives/extracts, for debugging purposes.
Likewise, set -x is just to have bash emit trace information, for debugging purposes.
So we're archiving $localpath into stdout, and piping it to ssh, which will pipe it to $COMMAND.
If there was a way to give su the password in the command line, we would have used something like:
$TAR | ssh ... su --password ${ROOT_PASSWORD} -c "$UNTAR"
and things would have been simple.
But su doesn't have that. su runs like a shell, reading from stdin. So it will first read the password, and once the password is read and su has established a root session, it reads commands from stdin. That's why we have su - <<< \"${ROOT_PASSWORD}${NEWLINE}${UNTAR}\".
But now stdin is used by the password and command, so we can't use it as the archive. We could use another file descriptor, but I prefer not to, because then the solution can be more easily ported to work with sudo instead of su. sudo closes all file descriptors, and sudo -C 200 (only close file descriptors above 200) may not work (didn't work on my test machine).
If we went that direction, we would have used something like:
$TAR | ssh ... 'exec 9<&2 && sudo -S <<< $mypass bash -c "$UNTAR <&9"'
Our next option is to do something like cat > /tmp/archive.tar in order to write the entire archive into a file, and then have something like $UNTAR < /tmp/archive.tar. But the archive may be huge and we may run out of disk space.
So the idea is to create a dedicated pipe - that's PREPARE_PIPE. Pipes don't save anything to disk, and don't store the entire stream in memory, so the reader and the writer have to work concurrently (you know, like with a real pipe).
So having redirected su's stdin from $ROOT_PASSWORD, we pull ssh's stdin into our pipe with cat > /tmp/copy, and in parallel (&) having $UNTAR read from the pipe (< /tmp/copy).
Notes:
You could also pass -z to both tar commands to compress the stream, if your network is too slow.
tar will preserve the source's metadata, e.g. timestamps and ownership.
Passing $ROOT_PASSWORD to commands is not good practice, anyone who runs ps -ef can see the password. There are ways to pass the password to server C in a more secure way, I didn't include it in order to not further complicate this answer.
I would suggest asking the server's owner to install sudo, so that if the password is compromised via ps -ef, at least it's not the root password.
I have a few DDL queries written in multiple SQL files named 1.sql, 2.sql, etc. There are 1000 files containing 2000+ create table statements.
I have to use Sybase isql on a Unix box.
I want to prepare a single script which can call these scripts one by one.
How to do that?
Example:
1.sql has a create table command ending with go.
The script master.sh contains:
isql -S Server -D database password etc -i 1.sql
and the same up to 1000.sql.
Please let me know how to run this.
In your master.sh, try something along the following lines. The code assumes all the files exist, are named from 1 to 1000, and live in the same directory as your master.sh script. You can/must add additional sanity checks for whether your SQL files exist or not.
#!/bin/bash
for i in `seq 1 1000`
do
isql -S Server -D database password etc -i $i.sql
done
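As a minimal sketch of the sanity check mentioned above (the file-exists test is my addition, not part of the original answer):
#!/bin/bash
for i in `seq 1 1000`
do
    # skip missing files instead of passing a bad -i argument to isql
    [ -f "$i.sql" ] || continue
    isql -S Server -D database password etc -i "$i.sql"
done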
for (( i = 1; i <= 1000; ++i )); do
isql -S Server -D database password etc -i $i.sql
done
To avoid having a thousand invocations of isql, each setting up a network connection, authenticating etc. (which will take time):
for (( i = 1; i <= 1000; ++i )); do
cat $i.sql
done | isql -S Server -D database password etc
If the SQL in the various files is independent of each other (i.e. file 534.sql may run before 55.sql), you could even try
cat *.sql | isql -S Server -D database password etc
If you are not sure about the number of files, please move all of them to a single directory and make sure that they have the same extension (like '*.sql'). Then you can have a piece of code like below:
#!/bin/bash
for fn in *.sql
do
isql -S ServerName -D DatabaseName -U UserName -P PassWord -i "$fn"
done
I'm trying to fetch some data from a Unix-based server to my PC automatically, i.e. I want the data to be transferred to my PC, say, every 30 minutes. I have the Unix command for fetching the data, but I run it through PuTTY and the output is stored on the server only. I would like the data to be stored in a local folder on my PC instead.
tail -n 10000 conveyor2.log | grep -P 'curing result OK' | sed 's/FT\/FT//g' | awk '{print $5 $13}' | uniq | sort -n | uniq >> my_data.txt
If you are currently using putty to connect to the server then you could also use "pscp" or "plink" on the Windows side to execute a transfer to your PC.
You would need to first understand how to do that from a command line.
For example:
pscp -i mykey.ppk user@serverName:logfileName targetName
(Using "-i mykey.ppk" allows you to bypass password prompts. You will need to create "mykey.ppk" using puttygen.)
You could then put that in a .BAT file or PowerShell script and run it as a Windows "scheduled task", or get fancy and set up a service (which is well beyond the scope of this question).
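For instance, a rough sketch of the scheduled-task route (file names, paths and the 30-minute interval are placeholders, not from the original answer):
rem fetch_log.bat - hypothetical wrapper around pscp
pscp -i mykey.ppk user@serverName:/path/to/my_data.txt C:\data\my_data.txt
and then register it to run every 30 minutes:
schtasks /create /tn "FetchLog" /tr "C:\data\fetch_log.bat" /sc minute /mo 30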
For this, first of all you can create a mount point for your PC on the Unix server;
this is done with Samba/CIFS.
You need root access on both the Unix server and the Windows machine:
mount -t cifs //"ip-address of window system"/e$/ftp -o username="username",password="password" /"Mount point name"
After doing this you can write the log file directly on the Windows machine.
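For example (assuming the share was mounted at /mnt/winshare, a placeholder mount point name):
# write the extracted data straight onto the mounted Windows share
tail -n 10000 conveyor2.log | grep -P 'curing result OK' >> /mnt/winshare/my_data.txt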
My requirement is to attach all the .csv files in a folder and send them in a single mail.
Here is what have tried,
mutt -s "subject" -a *.csv -- abc#gmail.com < subject.txt
The above command is not working (it's not recognizing multiple files) and throws the error:
Error sending message, child exited 67 (User unknown.).
Could not send the message.
Then I tried using multiple -a option as follows,
mutt -s "subject" -a aaa.csv -a bbb.csv -- abc#gmail.com < subject.txt
This works as expected.
But this is not feasible for 100 files, for example. I should be able to use it with a file mask (like *.csv to take all CSV files). Is there any way to use something like *.csv in a single command?
Thanks
Mutt doesn't support such syntax, but it doesn't mean it's impossible. You just have to build the mutt command.
mutt -s "subject" $( printf -- '-a %q ' *.csv ) ...
The command in $( ... ) produces something like this:
-a aaa.csv -a bbb.csv -a ...
Here is an example of sending multiple files using a single command:
mutt -s "Subject" -i "Mail_body text" email_id#abc.com -c email_cc_id#abc.com -a attachment1.pdf -a attachment2.pdf
At the end of the command line, use -a for the attachments.
Some Linux systems have an attachment size limit, and usually only fairly small attachments are supported.
I'm additionally getting backslashes ( \ ):
Daily_Batch_Status{20131003}.PDF
Daily_System_Monitoring{20131003}.PDF
printf -- '-a %q ' *.PDF
-a Daily_Batch_Status\{20131003\}.PDF -a Daily_System_Monitoring\{20131003\}.PDF
#!/bin/bash
from="me#address.com"
to="target#address.com"
subject="pdfs $(date +%B) $(date +%Y)"
body="You can find the pdfs from $(date +%B) $(date +%Y)"
# here comes the attachments
mutt -s "$subject" $( printf -- ' -a %q' $PWD/*.pdf ) -- $to <<EOF
Dear Mr and Ms,
$(echo $body)
$(cat ~/.signature)
EOF
But it does not work with escape characters in the file name, like "\[5\]", which can occur on macOS.
I created this as a script; I collect the needed PDFs in a folder and just run the script from that location. So the monthly reports are sent. It does not matter how many PDFs there are (the number can vary), but the file names should contain no whitespace.
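If the file names may contain whitespace, a sketch of an alternative (my own variation, not part of the original answer) is to build the -a arguments as a bash array instead of relying on printf %q:
#!/bin/bash
to="target@address.com"
subject="pdfs $(date +%B) $(date +%Y)"
# collect the attachment arguments safely, even for names with spaces or brackets
args=()
for f in "$PWD"/*.pdf; do
    args+=(-a "$f")
done
mutt -s "$subject" "${args[@]}" -- "$to" <<EOF
Dear Mr and Ms,
You can find the pdfs from $(date +%B) $(date +%Y)
EOF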
I'm using plink to run a command on a Unix remote machine.
The command is:
ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';
The way I'm sending this command is by using a file with a set of commands like:
plink.exe -ssh -t -l user -pw pwd tst.url.pt -m commands.out
When I run the command this way, plink does not do anything; it seems to be waiting for input.
But if I run:
plink.exe -ssh -t -l user -pw pwd tst.url.pt "ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';"
I get the expected result.
I'm not using plink with a command file because I choose to. I'm using test automation software that allows me to run tests on remote hosts, and this is the way the tool works.
Any thoughts on what is going wrong?
I tested the command you provided and it worked without problems.
Maybe the problem is related to:
The server's host key is not cached in the registry.
The path to the file is not correct.
The file is empty.
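For example, commands.out should contain nothing but the command itself:
ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';
and you can cache the server's host key once before relying on -m (a one-off, interactive step, assuming the same credentials as in the question; answer y when prompted to store the key):
plink.exe -ssh -l user -pw pwd tst.url.pt exit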
Include the server host key.
Most importantly, you need to include the Unix profile using the -m parameter.
You can also include all your commands in the same file where the profile is kept.
$Output = ((plink.exe -hostkey hostkey -l UNAME -i SSHKEY -P 22 -ssh server -batch -m PROFILE) | ? {$_ -ne ""})