multiple sql queries using single script in isql - unix

I have some DDL queries written in multiple SQL files named 1.sql, 2.sql, etc. There are 1000 files containing 2000+ create table statements.
I have to run them with Sybase isql on a Unix box.
I want to prepare a single script that calls these scripts one by one.
How can I do that?
Example
1.sql has a create table command ending with go.
Script master.sh
It contains
isql -S Server -D database password etc -i 1.sql
and so on, up to 1000.sql.
Please let me know how to run this.

In your master.sh, try something along the following lines. The code assumes the files are named 1.sql through 1000.sql and sit in the same directory as your master.sh script. You should add a sanity check to verify that each sql file actually exists (see the sketch after the loop).
#!/bin/bash
for i in $(seq 1 1000)
do
isql -S Server -D database password etc -i "$i.sql"
done
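For example, a minimal sketch of such a sanity check (the isql options are placeholders copied from the question):
#!/bin/bash
for i in $(seq 1 1000)
do
    # skip (and report) any missing file instead of letting isql fail on it
    if [ ! -f "$i.sql" ]; then
        echo "missing $i.sql, skipping" >&2
        continue
    fi
    isql -S Server -D database password etc -i "$i.sql"
done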

for (( i = 1; i <= 1000; ++i )); do
isql -S Server -D database password etc -i $i.sql
done
To avoid having a thousand invocations of isql, each setting up a network connection, authenticating etc. (which will take time):
for (( i = 1; i <= 1000; ++i )); do
cat $i.sql
done | isql -S Server -D database password etc
If the SQL in the various files is independent of each other (i.e. file 534.sql may run before 55.sql), you could even try
cat *.sql | isql -S Server -D database password etc

If you are not sure about the number of files, move all of them to a single directory and make sure they share the same extension (e.g. '.sql'). Then you can use something like the code below:
#!/bin/bash
# iterate over the glob directly instead of parsing `ls -l` output, which breaks on unusual filenames
for fn in *.sql
do
isql -S ServerName -D DatabaseName -U UserName -P PassWord -i "$fn"
done

Related

rsync : how to copy only latest file from target to source

We have a main Linux server, say M, where we have files like below (for 2 months, and new files arriving daily)
Folder1
PROCESS1_20211117.txt.gz
PROCESS1_20211118.txt.gz
..
..
PROCESS1_20220114.txt.gz
PROCESS1_20220115.txt.gz
We want to copy only the latest file on our processing server, say P.
So as of now, we were using the below command, on our processing server.
rsync --ignore-existing -azvh -rpgoDe ssh user@M:${TargetServerPath}/${PROCSS_NAME}_*txt.gz ${SourceServerPath}
This process worked fine until now, but going forward we can keep files on the processing server for only 3 days, while on the main server we keep them for 2 months.
So when we remove older files from the processing server, the rsync command copies all the files from the main server to the processing server again.
How can I change rsync command to copy only latest file from Main server?
*Note: the example above is only for one file. We have multiple files on which we have to use the same command. Hence we cannot hardcode any filename.
What I tried:
There are multiple solutions, but all of them seem to cover copying the latest file from the server I am running rsync on, not from the remote server.
I also tried running the command below to get the latest file from the main server, but I cannot pass variables to ssh in my company, as it is not allowed. So the command below works if I pass an individual path/file name, but not with variables.
ssh M 'ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz|tail -1'
Would really appreciate any suggestions on how to implement this solution.
OS: Linux 3.10.0-1160.31.1.el7.x86_64
ssh quoting is confusing - to quote the remote command properly, you have to quote it twice, once for the local shell and once for the remote one.
The handy printf %q trick helps here - use it to quote the relevant parts.
file=$(
ssh M "ls -1 $(printf "%q" "${TargetServerPath}/${PROCSS_NAME}")_*.txt.gz" |
tail -1
)
rsync --ignore-existing -azvh -rpgoDe ssh user@M:"$file" "${SourceServerPath}"
or maybe nicer to run tail -n1 on the remote, so that minimum amount of data are transferred (we only need one filename, not them all), invoke explicit shell and pass the variables as shell arguments:
file=$(ssh M "$(printf "%q " bash -c \
'ls -1 "$1"_*.txt.gz | tail -n1'
'_' "${TargetServerPath}/${PROCSS_NAME}"
)")
Overall, I recommend doing a function and using declare -f :
sshqfunc() { echo "bash -c $(printf "%q" "$(declare -f "$1"); $1 \"\$@\"")"; };
work() {
ls -1 "$1"_*txt.gz | tail -1
}
tmp=$(ssh M "$(sshqfunc work)" _ "${TargetServerPath}/${PROCSS_NAME}")
or you can also use the mighty declare to transfer variables to remote - then run your command inside single quotes:
ssh M "
$(declare -p TargetServerPath PROCSS_NAME);
"'
ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz | tail -1
'
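Whichever variant you use, capture the filename it prints and hand it to rsync as in the first snippet. A sketch for the last variant, assuming the same variable names as in the question:
file=$(ssh M "
$(declare -p TargetServerPath PROCSS_NAME);
"'
ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz | tail -1
')
rsync --ignore-existing -azvh -rpgoDe ssh user@M:"$file" "${SourceServerPath}"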

How to copy files to remote server with a user without privileges?

When I need to copy a file from the local server (server A) to a remote server (server B) via SSH, using a user with enough privileges, I do this successfully like below:
localpath='/this/is/local/path/file1.txt'
remotepath='/this/is/remote/path/'
mypass='MyPassword123'
sshpass -p $mypass scp username@hostname:$localpath $remotepath
Now, I have to transfer a file from server A to server C with a user that doesn't have enough privileges to copy. Once connected to server C, I need to run su in order to be able to run commands like cd, ls, etc.
Manually, I access the server C via SSH like this:
[root@ServerA ~]# ssh username@hostname
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
Last login: Sat Jun 13 10:17:40 2020 from XXX.XXX.XXX.XXX
ServerC ~ $
ServerC ~ $ su
Password:
ServerC /home/myuser #
ServerC /home/myuser # cd /documents/backups/
ServerC /documents/backups #
At this moment myuser has superuser privileges and I can send commands.
So, how can I automate copying files from server A to server C, given that I need to run su once I'm connected to server C?
So far I've tried this:
sshpass -p $mypass ssh -t username@hostname "su -c \"cd /documents/backups/ && ls\""
It requests the password for su and I'm able to run cd and ls, but with this command I'm not copying files from server A to server C, only semi-automating the access to server C and running su there.
Thanks in advance for any help.
UPDATE
# $TAR | ssh $username@$hostname "$COMMAND"
+ tar -cv -C /this/is/local/path/file1.txt .
+ ssh username@X.X.X.X 'set -x; rm -f /tmp/copy && mknod /tmp/copy p; su - <<< "su_password
set -x; tar -xv -C /this/is/remote/path/ . < /tmp/copy" & cat > /tmp/copy'
tar: /this/is/local/path/file1.txt: Cannot chdir: Not a directory
tar: Error is not recoverable: exiting now
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
+ rm -f /tmp/copy
+ mknod /tmp/copy p
+ su -
+ cat
Password:
Editorial note: the previous version of this answer used sudo, the current version uses su as requested in the question.
You could use tar and pipes, like so:
TAR="tar -cv -C $localpath ."
UNTAR="tar -xv -C $remotepath ."
PREPARE_PIPE="rm -f /tmp/copy && mknod /tmp/copy p"
NEWLINE=$'\n' # that's the easiest way to get a literal newline
ROOT_PASSWORD=rootpasswordverydangerous
COMMAND="set -x; $PREPARE_PIPE; su - <<< \"${ROOT_PASSWORD}${NEWLINE} set -x; $UNTAR < /tmp/copy\" & cat > /tmp/copy"
$TAR | ssh username@hostname "$COMMAND"
Explanation:
tar -c . archives the current directory into a single file. We aren't passing -f to tar, so that single file is standard output.
tar -x . extracts the content of a single tar archive file to the current directory. We aren't passing -f to tar, so that single file is standard input.
-C <path> tells tar to cd into <path> so that it will be the current directory in which files are copied from/to.
-v just tells tar to list the files tar archives/extracts, for debugging purposes.
Likewise, set -x is just to have bash to emit trace information, for debugging purposes.
So we're archiving $localpath into stdout, and piping it to ssh, which will pipe it to $COMMAND.
If there was a way to give su the password in the command line, we would have used something like:
$TAR | ssh ... su --password ${ROOT_PASSWORD} -c "$UNTAR"
and things would have been simple.
But su doesn't have that. su runs like a shell, reading from stdin. So it will first read the password, and once the password is read and su has established a root session, it reads commands from stdin. That's why we have su - <<< \"${ROOT_PASSWORD}${NEWLINE}${UNTAR}.
But now stdin is used by the password and command, so we can't use it as the archive. We could use another file descriptor, but I prefer not to, because then the solution can be more easily ported to work with sudo instead of su. sudo closes all file descriptors, and sudo -C 200 (only close file descriptors above 200) may not work (didn't work on my test machine).
If we went in that direction, we would have used something like
$TAR | ssh ... 'exec 9<&0 && sudo -S <<< $mypass bash -c "$UNTAR <&9"'
Our next option is to do something like cat > /tmp/archive.tar in order to write the entire archive into a file, and then have something like $UNTAR < /tmp/archive.tar. But the archive may be huge and we may run out of disk space.
So the idea is to create a dedicated pipe - that's PREPARE_PIPE. Pipes don't save anything to disk, and don't store the entire stream in memory, so the reader and the writer have to work concurrently (you know, like with a real pipe).
So having redirected su's stdin from $ROOT_PASSWORD, we pull ssh's stdin into our pipe with cat > /tmp/copy, and in parallel (&) have $UNTAR read from the pipe (< /tmp/copy).
Notes:
You could also pass -z to both tar commands to compress the stream, if your network is too slow (see the sketch after these notes).
tar will preserve the source's metadata, e.g. timestamps and ownership.
Passing $ROOT_PASSWORD to commands is not good practice, anyone who runs ps -ef can see the password. There are ways to pass the password to server C in a more secure way, I didn't include it in order to not further complicate this answer.
I would suggest asking the server's owner to install sudo, so that if the password is compromised via ps -ef, at least it's not the root password.
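For example, a minimal sketch of the compressed variant (only the two variable definitions change; everything else stays the same):
TAR="tar -czv -C $localpath ."    # -z gzips the archive as it is written to stdout
UNTAR="tar -xzv -C $remotepath ." # -z gunzips it while extracting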

SSH between N number of servers using script

I have n servers like c0001.test.cloud.com, c0002.test.cloud.com, c0003.test.cloud.com and I want to ssh between these servers like this:
from server c0001, ssh to c0002 and then exit that server;
come back to c0001, ssh to c0003 and then exit that server.
This way the script should run without any input at runtime, and it should work for any number of servers.
I have written one script :
str1=c0001.test.cloud.com,c0002.test.cloud.com,c0003.test.cloud.com
string="$( cut -d ',' -f 2- <<< "$str1" )"
echo "$string"
for j in $(echo $string | sed "s/,/ /g")
do
ssh appAccount@$j
done
But this script is not running fine. I have also tried passing parameters
like -o StrictHostKeyChecking=no and a heredoc (<<'ENDSSH'), but it is not working.
Assuming the number of commands you want to run is small, you could:
Create a script of commands that will run from c0001.test.cloud.com against each of the other servers. For example, create a file on your local machine called commands.sh with:
hosts="c0002.test.cloud.com c0003.test.cloud.com"
for host in $hosts do
ssh -o StrictHostKeyChecking=no -q appAccount#$host <command 1> && <command 2>
done
On your local machine, ssh to c0001.test.cloud.com and execute the commands in commands.sh:
ssh -o StrictHostKeyChecking=no -q appAccount@c0001.test.cloud.com 'bash -s' < commands.sh
However, if your requirements become more complex, a more robust solution might be to use a cluster administration tool such as ClusterShell
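For instance, with ClusterShell installed, fanning one command out to the other hosts could look like this (a sketch; the clush options and node-range syntax are assumptions to verify against your installed version):
# run a single command as appAccount on c0002 and c0003 in one invocation
clush -l appAccount -w c00[02-03].test.cloud.com 'uname -a'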

Using mysqldump inside a loop works properly with shell script? [duplicate]

This question already has answers here:
ssh breaks out of while-loop in bash [duplicate]
(2 answers)
Closed 7 years ago.
I'm reading host information from a text file and passing it to an ssh command:
The text file contains the host, user and password for the ssh command
while read LINE
do
R_USER=$(echo $LINE | cut -d ',' -f 1)
R_HOST=$(echo $LINE | cut -d ',' -f 2)
PY_SCRIPT=$(echo $LINE | cut -d ',' -f 4)
ssh $R_USER@$R_HOST 'touch /home/user/file_name.txt'
done </path_name/file_name
As it turns out, the while loop is only executed once even if the host text file contains multiple host entries.
When I remove the ssh command, the while loop is executed as many times as there are lines in the host information text file.
Not sure why this is so.
Any information on this?
Roland
The default standard input handling of ssh drains the remaining lines from the while loop's input.
To avoid this problem, alter where the problematic command reads standard input from. If no standard input need be passed to the command, read standard input from the special /dev/null device:
while read LINE
do
R_USER=$(echo $LINE | cut -d ',' -f 1)
R_HOST=$(echo $LINE | cut -d ',' -f 2)
PY_SCRIPT=$(echo $LINE | cut -d ',' -f 4)
ssh $R_USER@$R_HOST 'touch /home/user/file_name.txt' < /dev/null
done </path_name/file_name
Or alternatively, try using ssh -n which will prevent ssh from reading from standard input. For instance:
ssh -n $R_USER@$R_HOST 'touch /home/user/file_name.txt'
If the file is white space separated
host1 user password
host2 user password
then a simple read loop:
while read -r Server User Password
do
/usr/bin/ssh -n $User@$Server touch /home/user/file_name.txt
done </path/to/file.list
But you will be prompted for the password. You cannot pass the password to ssh, so I'd suggest setting up passwordless ssh keys and placing them on each host for the user you are connecting as. If you are running this from a script, you can ssh as these users on each host by placing your public key in each user's ~/.ssh/authorized_keys (or authorized_keys2) file. If you have the ssh-copy-id command, you could do this with:
ssh-copy-id user@hostname
which would copy YOUR ssh-key to their authorized_keys file so you could then ssh as them. This is assuming you have permission but then again you have their password so what permission do you need?
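As a rough sketch, assuming you do not yet have a key pair and reusing the whitespace-separated host list from above (the key path is just an example):
# generate a key pair once; an empty passphrase allows unattended use
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# push the public key to every host; you are asked for each password one last time
while read -r Server User Password
do
    # < /dev/null keeps ssh from draining the loop's input, as explained above
    ssh-copy-id -i ~/.ssh/id_ed25519.pub "$User@$Server" < /dev/null
done </path/to/file.list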

Problem with plink output

I'm using plink to run a command on a Unix remote machine.
The command is:
ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';
The way I'm sending this command is by using a file with a set of commands like:
plink.exe -ssh -t -l user -pw pwd tst.url.pt -m commands.out
When I run the command this way, plink does not receive any input. It seems that it is waiting for input.
But if I run:
plink.exe -ssh -t -l user -pw pwd tst.url.pt "ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';"
I get the expected result.
I'm not using plink with a command file by choice. I'm using a test automation tool that runs tests on remote hosts, and this is the way the tool works.
Any thoughts on what is going wrong?
I tested the command you provided and it worked without problems.
Maybe the problem is related to:
The server's host key is not cached in the registry (see the sketch below the list).
The path to the file is not correct.
The file is empty.
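For the first point, a commonly used way to cache the host key is to open one throwaway plink session and accept the key when prompted; a non-interactive sketch (the assumption being that piping y answers the store-key prompt):
# cache the server's host key once; afterwards the -m invocation no longer stalls on the prompt
echo y | plink.exe -ssh -l user -pw pwd tst.url.pt exit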
Include the server host key (plink's -hostkey option).
Most importantly, you need to include the Unix profile in the file you pass with the -m parameter.
You can also include all your commands in the same file where the profile is loaded (see the sketch after the PowerShell example).
$Output = ((plink.exe -hostkey hostkey -l UNAME -i SSHKEY -P 22 -ssh server -batch -m PROFILE) | ? {$_ -ne ""})
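A sketch of what such a combined profile-plus-commands file could look like (assuming a POSIX login profile on the remote host; adjust the profile path to your environment):
# file passed via -m: load the login environment first, then run the actual commands
. ./.profile
ls -1trd testegrep.txt | tail -1 | xargs tail -f | grep 's'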
