AWS CodeDeploy not able to execute bash script - aws-code-deploy

All events succeeded, but the Flask app is not starting
appspec.yml
version: 0.0
os: linux
files:
  - source: /testServerRegadv.py
    destination: /path to folder in server/python
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: root
start_server.sh
#!/bin/bash
echo "In start server" >>results.txt 2> errors.log &
python /path in server/python/testServerRegadv.py > results.txt 2> errors.log &
stop_server.sh
#!/bin/bash
isExistApp=$(lsof -t -i:1515)
if [[ -n $isExistApp ]]; then
  kill -9 "$isExistApp"
fi
Also, I am using CodeCommit for storing the code, and before pushing to AWS CodeCommit I execute chmod +x scripts/* to make the scripts executable.
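Not part of the question, but a commonly suggested pattern for this symptom: a process backgrounded from an ApplicationStart hook can die along with the hook's shell. A hedged sketch of start_server.sh that detaches the app with nohup and records its pid for the stop hook (the APP path is a placeholder, not the asker's real path):

```shell
#!/bin/bash
# Sketch only, not the asker's verified fix. APP is a placeholder path.
APP=/path/to/python/testServerRegadv.py
# nohup detaches the process from the hook's shell so it survives the hook exiting;
# redirect both streams so the CodeDeploy agent is not left waiting on output.
nohup python "$APP" > /tmp/results.txt 2> /tmp/errors.log &
echo $! > /tmp/flask_app.pid   # let stop_server.sh kill by pid instead of by port
```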

Related

Using rsync with gitlab-ci how to deploy identically?

When I push for the first time and my project doesn't yet exist on the server, everything works fine (obviously):
rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
building file list ... done
created directory blog_symfony
[...]
sent 44,533,927 bytes received 5,523 bytes 5,239,935.29 bytes/sec
total size is 238,959,003 speedup is 5.37
The problem: when I push a second time, it goes haywire:
rsync: [generator] delete_file: rmdir(project/blog_symfony/project/blog_symfony) failed: Permission denied (13)
rsync: [generator] delete_file: rmdir(project/blog_symfony) failed: Permission denied (13)
deleting project/blog_symfony/translations/.gitignore
deleting project/blog_symfony/translations/
[...]
it creates for me, on my server side, a 'project' folder in the blog_symfony folder
cannot delete non-empty directory: project/blog_symfony
cannot delete non-empty directory: project
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
sent 13,924 bytes received 175 bytes 28,198.00 bytes/sec
total size is 238,959,004 speedup is 16,948.65
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 1
my gitlab-ci:
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" >> ~/.ssh/config'
script:
- ls
- apt-get update && apt-get install rsync -y
- ssh $SSH_USER@$SSH_HOST "ls"
- rsync --exclude=".git" -e ssh -avz --delete-after . $SSH_USER@$SSH_HOST:blog_symfony/
- ssh $SSH_USER@$SSH_HOST "cd blog_symfony && docker-compose build && docker-compose up"
In ls -l I have a folder written by rsync which is impossible to remove from GitLab CI:
drwxrwxr-x 3 root root 4096 Dec 14 23:26 project
I don't think this is normal. This is the first time that I use gitlab-ci for a symfony project.
Thank you for your help
ls -l: I have a folder written by rsync which is impossible to remove from GitLab CI.
Check whether that folder is instead created after the first run of your docker-compose up: if your Docker image runs internally as USER root and uses a bind mount, it will write files and folders as root.
That would impede normal operation on the server, outside the container, such as your rsync, because root-owned files would be in the way.
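A minimal sketch of the workaround this answer implies, assuming the image has no USER directive and the SSH deploy user is uid/gid 1000 (check with id on the server): run the compose service as that uid so bind-mounted writes stay deletable by the deploy user.

```yaml
# docker-compose.yml fragment (sketch; the uid/gid 1000 and paths are assumptions)
services:
  app:
    build: .
    user: "1000:1000"   # write bind-mounted files as the deploy user, not root
    volumes:
      - ./:/app
```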

Run a complex linux command in a docker Ansible container

After a long search I did not find the answer
There is a playbook Ansible.
- name: myscript
  hosts: myhost
  tasks:
    - name: myscript
      docker_container:
        name: myscript
        image: myimage
        detach: false
        working_dir: "/opt/R/project"
        command: Rscript $(find ./*_Modules -iname *_Script.R)
This command works: Rscript ./01_Modules/02_Script.R
This command does NOT work: Rscript $(find ./*_Modules -iname *_Script.R). It treats $(find not as a command substitution, but as a path.
At the same time, this line runs successfully in a Linux shell and finds the script.
How do I pass full-fledged shell commands, with && and similar features, to command?
Here is a simplified version of your problem:
- name: Create a test container
  docker_container:
    name: test
    image: busybox
    command: ls |grep var && echo 'it fails'
Output :
ls: |grep: No such file or directory
ls: &&: No such file or directory
ls: echo: No such file or directory
ls: it fails: No such file or directory
var:
spool
www
If I wrap it in quotes and use /bin/sh -c:
- name: Create a test container
  docker_container:
    name: test
    image: busybox
    command: /bin/sh -c "ls |grep var && echo 'it works!'"
Output :
var
it works!
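Applying the same fix back to the original task might look like this (a sketch; the image name and paths are taken from the question, and the quoting around the find pattern is an assumption to keep the shell from expanding it too early):

```yaml
- name: myscript
  hosts: myhost
  tasks:
    - name: myscript
      docker_container:
        name: myscript
        image: myimage
        detach: false
        working_dir: /opt/R/project
        # Wrapping in /bin/sh -c makes the container's shell evaluate $(find ...)
        command: /bin/sh -c "Rscript $(find ./*_Modules -iname '*_Script.R')"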

codedeploy erases content from directory not affected by deploy

I have 2 private repositories on GitHub for 2 different websites. Both websites run off the same set of auto-scaling servers on Amazon (EC2). I use CodeDeploy to pull the repositories from GitHub and deploy them to the servers, one at a time. This nearly works perfectly.
The issue is that when I deploy one website, the files from the other website are completely erased. Not the folder structure, just the files.
One website deploys to /var/www/website1 while the other deploys to /var/www/website2. The appspec files are:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/website1/
hooks:
  BeforeInstall:
    - location: /beforeinstall.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: /afterinstall.sh
      timeout: 300
      runas: root
And
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/website2/
hooks:
  BeforeInstall:
    - location: /beforeinstall.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: /afterinstall.sh
      timeout: 300
      runas: root
When I deploy "website1", it erases all files from "website2", and vice versa. I have no idea why. Any help would be greatly appreciated.
Before Install for the App (website1)
#!/bin/bash
sudo service php-fpm stop
sudo service nginx stop
sudo yum -y update
rm /var/www/app -Rf
rm /usr/share/nginx/html/status.php -Rf
After Install
#!/bin/bash
chown app:app /var/www/app/* -Rc
#
find /var/www/app/public_html/files/uploads -type d -exec chmod 777 {} \;
#
cd '/var/www/app'
su app -c 'composer update'
mv /var/www/fairwarning_app/status.php /usr/share/nginx/html/status.php
#
sudo service php-fpm start
sudo service nginx start
Before Install for the API (website2)
#!/bin/bash
sudo service php-fpm stop
sudo service nginx stop
sudo yum -y update
rm /var/www/api -Rf
After Install
#!/bin/bash
chown api:api /var/www/api/* -Rc
#
cd '/var/www/api'
su api -c 'composer install'
su api -c 'composer update'
#
sudo service php-fpm start
sudo service nginx start
It looks like AWS CodeDeploy doesn't support simultaneous deployments on the same deployment group. You can track progress and discussion of the issue on their GitHub issue tracker.

docker run, docker exec and logs

If I do :
docker run --name nginx -d nginx:alpine /bin/sh -c 'echo "Hello stdout" > /dev/stdout'
I can see "Hello stdout" when I do :
docker logs nginx
But when the container is running (docker run --name nginx -d nginx:alpine) and I do :
docker exec nginx /bin/sh -c 'echo "Hello stdout" > /dev/stdout'
or when I attach the container with :
docker exec -it nginx /bin/sh
and then :
echo "Hello stdout" > /dev/stdout
I can't see anything in docker logs. And since my Nginx access logs are redirected to /dev/stdout, I can't see them as well.
What is happening here with this stdout ?
When you docker exec, you can see that you have several processes:
/ # ps -ef
PID USER TIME COMMAND
1 root 0:00 nginx: master process nginx -g daemon off;
6 nginx 0:00 nginx: worker process
7 root 0:00 /bin/sh
17 root 0:00 ps -ef
/ #
and in Linux each process has its own stdin, stdout and stderr (and other file descriptors), exposed in /proc/<pid>/fd.
So your docker exec shell (pid 7) writes its output to
/proc/7/fd/1
If you do ls -ltr /proc/7/fd/1, it displays something like /proc/4608/fd/1 -> /dev/pts/2, which means output is being sent to a terminal,
while your nginx process (pid 1) writes its output to
/proc/1/fd/1
If you do ls -ltr /proc/1/fd/1, it displays something like /proc/1/fd/1 -> pipe:[184442508], which means output is being sent to the Docker logging driver.
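The /proc plumbing above can be checked directly. A quick sanity test using the current shell's own pid ($$), plus the trick that follows from the answer: writing to pid 1's fd 1 from inside the container lands in docker logs (assuming you run as root inside the container and can open /proc/1/fd/1):

```shell
# Writing to /proc/$$/fd/1 is the same as writing to this shell's own stdout:
sh -c 'echo "via /proc" > /proc/$$/fd/1'
# Inside the nginx container, pid 1's stdout is the pipe Docker captures, so this
# would show up in `docker logs nginx` (sketch, requires root in the container):
# docker exec nginx /bin/sh -c 'echo "Hello stdout" > /proc/1/fd/1'
```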

I'm stuck on logrotate mystery

I have two logrotate files:
/etc/logrotate.d/nginx-size
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
    missingok
    rotate 3
    size 2G
    dateext
    compress
    compresscmd /usr/bin/bzip2
    compressoptions -6
    compressext .bz2
    uncompresscmd /usr/bin/bunzip2
    notifempty
    create 640 nginx nginx
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
and
/etc/logrotate.d/nginx-daily
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
    missingok
    rotate 3
    dateext
    compress
    compresscmd /usr/bin/bzip2
    compressoptions -6
    compressext .bz2
    uncompresscmd /usr/bin/bunzip2
    notifempty
    create 640 nginx nginx
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
Output of logrotate -d -v /etc/logrotate.d/nginx-size:
reading config file /etc/logrotate.d/nginx-size
compress_prog is now /usr/bin/bzip2
compress_options is now -6
compress_ext is now .bz2
uncompress_prog is now /usr/bin/bunzip2
Handling 1 logs
rotating pattern: /var/log/nginx/*.log
/var/log/www/nginx/50x.log
2147483648 bytes (3 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/nginx/access.log
log does not need rotating
considering log /var/log/nginx/error.log
log does not need rotating
considering log /var/log/nginx/get.access.log
log does not need rotating
considering log /var/log/nginx/post.access.log
log needs rotating
considering log /var/log/www/nginx/50x.log
log does not need rotating
rotating log /var/log/nginx/post.access.log, log->rotateCount is 3
dateext suffix '-20141204'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
renaming /var/log/nginx/post.access.log to /var/log/nginx/post.access.log-20141204
creating new /var/log/nginx/post.access.log mode = 0640 uid = 497 gid = 497
running postrotate script
running script with arg /var/log/nginx/*.log
/var/log/www/nginx/50x.log
: "
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
"
compressing log with: /usr/bin/bzip2
Same (normal) output for nginx-daily.
If I run
logrotate -f /etc/logrotate.d/nginx-size
manually as root, it does everything correctly. BUT it doesn't run automatically!
crontab:
*/5 5-23 * * * root logrotate -f -v /etc/logrotate.d/nginx-size 2>&1 > /tmp/logrotate_size
00 04 * * * root logrotate -f -v /etc/logrotate.d/nginx-daily 2>&1 > /tmp/logrotate_daily
Also, the files /tmp/logrotate_daily and /tmp/logrotate_size are always empty.
Cron doesn't give me any errors in /var/log/cron:
Dec 4 14:45:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
Dec 4 14:50:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
What's wrong with this thing? CentOS 6.5 x86_64, logrotate 3.8.7 (built from source) + logrotate 3.7.8 (via rpm).
Thanks in advance.
The redirections in those cron lines are incorrect; they will not send error output to those files.
Redirection order matters: you want >/tmp/logrotate_size 2>&1 to get what you want.
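A runnable sketch of the difference (file names are arbitrary):

```shell
# Redirections are applied left to right.
# 2>&1 first: stderr is duplicated onto the CURRENT stdout (the terminal),
# and only then is stdout sent to the file, so the file misses stderr:
sh -c 'echo out; echo err 1>&2' 2>&1 > /tmp/wrong.log   # "err" still prints to the terminal
# File redirect first, then 2>&1: both streams land in the file:
sh -c 'echo out; echo err 1>&2' > /tmp/right.log 2>&1
```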
The underlying issue here is one of the things covered by the "Debugging crontab" section of the cron info page.
Namely "Making assumptions about the environment".
Making assumptions about the environment
Graphical programs (X11 apps), java programs, ssh and sudo are notoriously problematic to run as cron jobs. This is because they rely on things from interactive environments that may not be present in cron's environment.
To more closely model cron's environment interactively, run
env -i sh -c 'yourcommand'
This will clear all environment variables and run sh, which may be more meager in features than your current shell.
Common problems uncovered this way:
foo: Command not found or just foo: not found.
Most likely $PATH is set in your .bashrc or similar interactive init file. Try specifying all commands by full path (or put source ~/.bashrc at the start of the script you're trying to run).
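A runnable sketch of that check: env -i really does strip the environment, so variables like HOME disappear, approximating what your script sees under cron.

```shell
# env -i starts the command with an empty environment, approximating cron's:
env -i sh -c 'echo "HOME is: ${HOME:-unset}"'   # typically prints "HOME is: unset"
# So in crontab entries, prefer absolute paths (e.g. /usr/sbin/logrotate)
# instead of relying on an interactive shell's PATH.
```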
