My capifony deployment works great; however, the capifony cleanup command fails.
I'm using private keys over SSH, with sudo to gain write permissions on the deployment directories.
With extended logging, the result of cap deploy:cleanup is this:
$ cap deploy:cleanup
* 2013-07-19 15:44:42 executing `deploy:cleanup'
* executing "sudo -p 'sudo password: ' ls -1dt /var/www/html/releases/* | tail -n +4 | sudo -p 'sudo password: ' xargs rm -rf"
Modifying permissions so that the deployment user has full write access to this directory is not an option in this instance.
Has anyone seen/worked around this issue? (This is on a RHEL6 server)
Yep, there is currently a problem with the cleanup command when using sudo. Here is my solution; add this to your deploy.rb:
namespace :customtasks do
  task :customcleanup, :except => {:no_release => true} do
    count = fetch(:keep_releases, 5).to_i
    run "ls -1dt #{releases_path}/* | tail -n +#{count + 1} | #{try_sudo} xargs rm -rf"
  end
end
Then call that instead of the default cleanup:
after "deploy:update", "customtasks:customcleanup"
More info at https://github.com/capistrano/capistrano/issues/474
When I push and my project does not yet exist on my server, everything works fine (obviously):
rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
building file list ... done
created directory blog_symfony
[...]
sent 44,533,927 bytes received 5,523 bytes 5,239,935.29 bytes/sec
total size is 238,959,003 speedup is 5.37
The problem is when I push a second time; it goes haywire:
rsync: [generator] delete_file: rmdir(project/blog_symfony/project/blog_symfony) failed: Permission denied (13)
rsync: [generator] delete_file: rmdir(project/blog_symfony) failed: Permission denied (13)
deleting project/blog_symfony/translations/.gitignore
deleting project/blog_symfony/translations/
[...]
It creates, on my server side, a 'project' folder inside the blog_symfony folder:
cannot delete non-empty directory: project/blog_symfony
cannot delete non-empty directory: project
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
sent 13,924 bytes received 175 bytes 28,198.00 bytes/sec
total size is 238,959,004 speedup is 16,948.65
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 1
My .gitlab-ci.yml:
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" >> ~/.ssh/config'
script:
  - ls
  - apt-get update && apt-get install rsync -y
  - ssh $SSH_USER@$SSH_HOST "ls"
  - rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
  - ssh $SSH_USER@$SSH_HOST "cd blog_symfony && docker-compose build && docker-compose up"
In ls -l, I have a folder written by rsync that is impossible to remove from GitLab CI:
drwxrwxr-x 3 root root 4096 Dec 14 23:26 project
I don't think this is normal. This is the first time I have used GitLab CI for a Symfony project.
Thank you for your help
ls -l: I have a folder written by rsync which is impossible to remove from GitLab CI.
Check whether that folder is instead created after the first execution of your docker-compose up: if your Docker image runs internally as USER root and uses a bind mount, it will write files and folders as root.
That would impede normal operation on the server, outside the container (like your rsync), because root-owned files would be in the way.
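As a quick check, you can confirm which user the container writes as, and consider running the service as the deploy user; a minimal sketch (the placeholder container name and the 1000:1000 UID:GID are assumptions, not from the original post):
# <container> is a placeholder for your container name or ID
docker exec <container> id   # uid=0(root) means bind-mounted files are created as root
# One possible fix in docker-compose.yml: run the service as the deploy user,
# e.g. user: "1000:1000" (substitute the output of `id -u` / `id -g` on the server)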
I am currently working on a research tool that is supposed to be containerized using Docker, to hopefully run on as many different systems as possible. This works fine for the most part, but we have run into a permission problem because of the workflow: the tool takes an input file (which we mount into the container), evaluates it using R scripts, and is then supposed to generate a report on the input file exactly where the file was taken from on the host system.
The latter part is problematic: at least in our university context, the internal container user lacks write permissions in the (non-root) user home folders from which we are currently taking our testing data. This would obviously also be bad in a production context, since we don't know how a potential user's system is set up. This is why we are trying to dynamically and temporarily set the permissions of the container user to match the host user.
I have found different solutions that involve passing the UID/GID to the Docker daemon when building the container in some way or another:
docker build --build-arg USER_ID=$(id -u ${USER}) --build-arg GROUP_ID=$(id -g ${USER}) -t IMAGE .
I also changed the Dockerfile accordingly, following a tutorial that suggested replacing the internal www-data user:
[...Package installation steps that are supposed to be run as root...]
ARG USER_ID
ARG GROUP_ID
RUN if [ ${USER_ID:-0} -ne 0 ] && [ ${GROUP_ID:-0} -ne 0 ]; then \
        userdel -f www-data && \
        if getent group www-data; then groupdel www-data; fi && \
        groupadd -g ${GROUP_ID} www-data && \
        useradd -l -u ${USER_ID} -g www-data www-data && \
        install -d -m 0755 -o www-data -g www-data /work/ && \
        chown --changes --silent --no-dereference --recursive \
            --from=33:33 ${USER_ID}:${GROUP_ID} \
            /work \
    ; fi
USER www-data
WORKDIR /work
RUN mkdir files
COPY data/ /opt/MTB/data/
COPY helpers/ /opt/MTB/helpers/
COPY src/www/ /opt/MTB/www/
COPY tmp/ /opt/MTB/tmp/
COPY example_data/ /opt/MTB/example_data/
COPY src/ /opt/MTB/src/
EXPOSE 8080
ENTRYPOINT ["/opt/MTB/src/starter_s_c.sh"]
The entrypoint script starter_s_c.sh is a small shell script that feeds the trailing argument to the corresponding R script as an input file; the R script writes the report.
This works, but requires the container to be built again for every new user. What we are looking for is a solution that handles the dynamic permission setting at runtime, so that we only have to build the container once and can use it with many different user configurations.
I have found this, but I am not entirely sure how to implement it: it would replace our entrypoint script, and I'm not sure how to integrate that solution into our project.
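For what it's worth, here is a minimal sketch of how such a runtime wrapper could look, assuming gosu is installed in the image, the container starts as root, and HOST_UID/HOST_GID are hypothetical variables passed via docker run -e; it wraps the existing entrypoint rather than replacing it:
#!/bin/sh
# wrapper_entrypoint.sh -- a sketch, not part of the original project.
# Remap www-data to the invoking host user at runtime, hand over /work,
# then drop privileges and run the real entrypoint.
HOST_UID="${HOST_UID:-0}"
HOST_GID="${HOST_GID:-0}"
if [ "$HOST_UID" -ne 0 ] && [ "$HOST_GID" -ne 0 ]; then
    groupmod -o -g "$HOST_GID" www-data
    usermod -o -u "$HOST_UID" www-data
    chown -R www-data:www-data /work
    exec gosu www-data /opt/MTB/src/starter_s_c.sh "$@"
fi
exec /opt/MTB/src/starter_s_c.sh "$@"
It would be invoked with something like docker run -e HOST_UID=$(id -u) -e HOST_GID=$(id -g) ..., so the image itself only needs to be built once.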
Here is our current entrypoint script which already needs the permissions to be set so localmaster.r can generate the report in the host directory:
#!/bin/sh
file="$1"
cd "$(dirname "$0")/.."
if [ $# -eq 0 ]; then
    echo '.libPaths(c("~/lib/R/library", .libPaths())); library(shiny); library(shinyjs); runApp("src")' | R --vanilla
else
    echo "Rscript --vanilla /opt/MTB/src/localmaster.r \"$file\""
    Rscript --vanilla /opt/MTB/src/localmaster.r "$file"
fi
(If no arguments are given, it starts a shiny app, just to avoid confusion)
Any help or tips would be much appreciated! Thank you.
I know this issue has been reported multiple times, but I have tried every single solution and nothing seems to work.
I am running Symfony 3 on Debian 9 Stretch, and there's a permission issue that I can't fix:
cat /var/log/apache2/project_error.log
PHP Fatal error: Uncaught RuntimeException: Unable to create the cache directory (/var/www/mobileoutfitters.fr/public_html/var/cache/prod)\n in /var/www/mobileoutfitters.fr/public_html/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php:676\nStack
In /etc/apache2/envvars :
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
My user is actually part of this group. I tried every chmod -R 777 variant possible, and also, as suggested in the Symfony documentation, these two commands:
HTTPDUSER=$(ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1)
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX var
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX var
I tried deleting the var folder, deleting its contents, clearing the cache... but I still get this error.
It turns out that my Symfony database had been created but was empty.
After running bin/console doctrine:schema:update --force, it now works.
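If the cache error itself comes back, a common workaround (a sketch assuming Apache runs as www-data, as in the envvars above, and that you are in the project root) is to hand var/ to the web-server user and rebuild the cache as that user:
sudo chown -R www-data:www-data var/
sudo -u www-data bin/console cache:clear --env=prod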
I had about the same issue; for me, it worked after I disabled SELinux on my local CentOS VirtualBox VM with setenforce Permissive.
I'm setting up Tomcat on CentOS according to https://www.digitalocean.com/community/tutorials/how-to-install-apache-tomcat-8-on-centos-7, but with a twist: I put Tomcat in /opt/apache-tomcat-8.5.6 and then set up a symbolic link:
sudo ln -s /opt/apache-tomcat-8.5.6 /opt/tomcat
Now I change the group ownership of /opt/tomcat/conf to tomcat:
sudo chgrp -R tomcat /opt/tomcat/conf
Then I give the tomcat group write access to the configuration directory:
sudo chmod g+rwx /opt/tomcat/conf
But here is the problem: I try to give the tomcat group read access to all the configuration files:
sudo chmod g+r /opt/tomcat/conf/*
That gives me an error: chmod: cannot access ‘/opt/tomcat/conf/*’: No such file or directory
What? Does chmod not accept wildcards? Or does it not look inside symbolic links? What's going on?
Note that I got around it by doing this:
sudo chmod g+r -R /opt/tomcat/conf
Does that give me effectively the same thing? (I know that it additionally makes the directory readable by the group, but that seems inconsequential, since the group could already read the directory.) Why doesn't the wildcard version work?
Globs are expanded by the current shell. This happens before sudo and chmod are ever invoked.
If the current shell doesn't have access to list the files, the glob will be treated as unmatched and just left alone. This makes chmod try to access a file literally named *, which fails.
root# echo /root/.*
/root/.bash_history /root/.bashrc ...
user$ sudo echo /root/.*
/root/.*
The same is true for command substitution, process substitution and other expansions, which are similarly unaffected by sudo:
root# echo $(whoami)
root
user$ sudo echo $(whoami)
user
The shell is also responsible for pipes and redirects, which are also set up before sudo ever runs:
root# echo 60 > /proc/sys/vm/swappiness
(command exits successfully)
user$ sudo echo 60 > /proc/sys/vm/swappiness
bash: /proc/sys/vm/swappiness: Permission denied
In Unix terms, sudo is a wrapper for execve(2), and therefore can't help with anything that you can't do through an execve call. If you need shell functionality from the target user, you need to manually invoke that shell:
user$ sudo sh -c 'chmod g+r /opt/tomcat/conf/*'
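For the redirect case specifically, piping through sudo tee is a common alternative that avoids invoking a root shell (tee echoes what it writes, hence the output):
user$ echo 60 | sudo tee /proc/sys/vm/swappiness
60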
I have a simple example from the official guide on the Docker website.
I run the following:
sudo docker run -d ubuntu:latest /bin/sh -c "while true; do echo hello world; sleep 1; done"
a66asdasdhqie123...
Then I take some output from the created container:
sudo docker logs a66
hello
hello
hello
...
Then I look up the running processes of the container:
sudo docker top a66
UID PID PPID C STIME TTY TIME CMD
root 25055 15152 0 20:07 ? 00:00:00 /bin/sh -c while true; do echo hello world; sleep 1; done
root 25295 25055 0 20:10 ? 00:00:00 sleep 1
Next I try to kill the first process of the container:
sudo docker exec a66 kill -9 25055
However, after I do this, nothing changes. The process still runs and outputs "hello" every second. What am I doing wrong?
When I reproduce your situation, I see different PIDs between docker top <container> and docker exec -it <container> ps aux. When you do docker exec, the command is executed inside the container, so you should use the container's PID. Otherwise, you could do the kill without Docker, straight from the host; in your case: sudo kill -9 25055.
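A short sketch of the container-side route (assuming ps is available in the image):
sudo docker exec a66 ps aux   # lists container-local PIDs; the while loop is usually PID 1
sudo docker kill a66          # signals to PID 1 from inside its own namespace are typically
                              # ignored, so the main process is easiest to stop by killing
                              # the container itself from the host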
Check this:
ps | grep -i a66 | tr -s ' ' | cut -f2 -d' ' | {
    while read line; do
        kill -9 "$line"
    done
}
To understand this, execute the commands from left to right, one pipe (|) at a time.
Simpler option (note that pidof matches a process name, not a container ID):
kill $(pidof a66)
It took me a while to find the right answer, but you will have to manage this process from within the container. When you run docker top a66 from the host, the PIDs are from your host (although that's not quite the case if you're using Cygwin). Regardless, you will need to bash (or what have you) back into your container and use commands like ps aux and kill there to find and manage the different PIDs for the same processes.
I was looking for something like this but couldn't find it, so I did this:
[root@notebook ~]# docker exec -it tadeu_debian ps aux | grep ping | \
    awk '{ print $2 }' | xargs -I{} docker exec -i tadeu_debian kill -9 {}
That's two "execs" from Docker and one xargs.
Well, I hope this helps someone!
When you build the Docker image, use this command:
RUN apt-get update && apt-get install -y lsof
Then, in a Python file, you can use:
os.system("lsof /dev/nvidia* | awk '{print $2}' | xargs -I {} kill {}")
REMEMBER: this command kills all processes on the GPU.