Run a Symfony command with a Kubernetes CronJob - symfony

Every minute, I have a K8s CronJob that runs a Symfony command. My problem is the huge amount of time my pod spends warming up the Symfony cache before executing the command: 56s...
I am looking for a way to store the cache in the Docker container, but I can't execute cache:warmup during the Docker build, because the command needs the database and Redis to be up.
Do you have a solution for me?
Thanks!
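One pattern sometimes used for this (a sketch, not a guaranteed fix: the env var names and values below are placeholders, and whether it works depends on your bundles) is to give cache:warmup dummy connection strings at build time. Most Symfony bundles read connection parameters lazily and do not actually open connections during warmup, so the cache can often be baked into the image:

```dockerfile
# Sketch only — DATABASE_URL/REDIS_URL are placeholder values; verify that
# none of your bundles connects during warmup. Use app/console on older
# Symfony versions instead of bin/console.
ENV APP_ENV=prod \
    DATABASE_URL="mysql://user:pass@db:3306/app" \
    REDIS_URL="redis://redis:6379"
RUN php bin/console cache:warmup --no-debug
```

If some warmup step genuinely needs live services, another option is to run cache:warmup once in the container entrypoint rather than on every command invocation.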

Related

Cannot get persistent logs using KubernetesExecutor with an EFS volume using official airflow helm chart

I see the PVC is bound to the PV. But when I enable log.persistence.enabled: true, the scheduler and webserver keep crashing because they do not have permission to the logs folder. This is the error when I describe the pod:
PermissionError: [Errno 13] Permission denied: '/opt/airflow/logs/scheduler'
At the end of our custom Dockerfile for airflow we set the user as airflow and chown the entire airflow home dir so that these permissions are not an issue. This way the airflow pod itself runs as airflow and any pod spun up using this image by the kubernetes executor is running as airflow.
RUN chown -R airflow: ${AIRFLOW_USER_HOME}
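An alternative to baking a chown into the image (hedged sketch — the securityContext keys are standard Kubernetes pod-spec fields, but the 50000 uid/gid is an assumption you should verify against your airflow image, and some NFS/EFS CSI drivers do not honor fsGroup) is to let Kubernetes fix volume ownership at mount time:

```yaml
# Pod-level securityContext (sketch): fsGroup asks Kubernetes to make the
# mounted volume group-writable by this gid when the pod starts.
securityContext:
  runAsUser: 50000
  fsGroup: 50000
```

With the official helm chart, this kind of securityContext can usually be set through the chart's values rather than a custom Dockerfile.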

Cronjob in symfony running on docker

I am trying to run a Symfony command through cron, but it never executes. The application is running in Docker, and I can't find information on whether I need to specify roles or something else. Other standard Linux commands execute successfully, but it looks like cron doesn't want to start app/console. Here is my cronjob:
* * * * * /usr/local/bin/php /usr/lib/myApp/app/console myCommand --env=prod >> /usr/lib/myApp/testLog.txt 2>&1
Does anyone have suggestions on how to run a Symfony command in Docker using cron?
The philosophy of Docker is to have one process per container. That means you usually have no init system and thus no services running inside the container, e.g. dbus or cron.
There are ways to create your own Docker Image with such an init-system/background service. Images based on Alpine often use S6.
Another solution is to use the cron service on your host and rewrite the command to something like docker exec <container_name> /usr/local/bin/php ...
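Following that second suggestion, the host crontab entry could look roughly like this (a sketch — the container name is an assumption, paths are taken from the question):

```
* * * * * docker exec mycontainer /usr/local/bin/php /usr/lib/myApp/app/console myCommand --env=prod >> /usr/lib/myApp/testLog.txt 2>&1
```

Note that docker exec is used without -it here: cron has no TTY, and allocating one would make the command fail.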

Docker run start services

I need nginx-openresty and redis in a single Docker container. I have written a Dockerfile and it is working fine. But I need to start the redis service after logging into the container's bash; to automate this I have written a .sh file which contains instructions to start and stop the redis server and nginx: ENTRYPOINT ["./startup.sh"]
and .sh file is
cd /etc/redis-installation/utils
echo -n | ./install_server.sh
service redis_6379 stop
cd /
cp ./dump.rdb /var/lib/redis/6379/
service redis_6379 start
openresty
My problem is that the Docker container starts and then exits when the shell script completes. How can I keep the container running with nginx and redis in a running state?
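For reference, the usual reason a container like this exits is that every command in the script daemonizes and returns, so the shell (the container's main process) reaches the end and exits. Keeping the last service in the foreground avoids that. A sketch of the entrypoint, reusing the paths from the question:

```shell
#!/bin/sh
cd /etc/redis-installation/utils
echo -n | ./install_server.sh
service redis_6379 stop
cp /dump.rdb /var/lib/redis/6379/
service redis_6379 start
# exec replaces the shell with openresty; "daemon off;" keeps nginx in the
# foreground, so the container lives exactly as long as nginx does
exec openresty -g "daemon off;"
```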
Try using docker-compose with a link between your app container and your redis container. I suggest using the official redis image.
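That suggestion could look roughly like this (a hypothetical docker-compose.yml; the service names, ports, and the seeded dump path are assumptions):

```yaml
version: "2"
services:
  web:
    build: .          # your openresty image, now without redis baked in
    ports:
      - "80:80"
    links:
      - redis
  redis:
    image: redis      # official image; it runs in the foreground by itself
    volumes:
      - ./dump.rdb:/data/dump.rdb   # seed the dataset instead of copying in a script
```

Each container then has a single foreground process, so neither needs a startup script that juggles services.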

Run Symfony2 command in background

I have a queue of emails to send to customers. I'm spooling emails: http://symfony.com/doc/2.0/cookbook/email/spool.html
Using this command:
php app/console swiftmailer:spool:send --env=prod
The problem is how to run this command in the background, so I don't have to execute it from the console whenever I want to send the email queues.
I solved this using a crontab, as explained here: http://blog.servergrove.com/2012/04/27/spooling-emails-with-symfony2-on-vps-and-shared-hosting/
But using crontab doesn't seem like the best solution to me. I also read about RabbitMQ and its bundle for Symfony2, but with that I have to run another command to consume the queue:
./app/console rabbitmq:consumer -m 50 queue_email
What is the best solution for this?
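If you do stay with cron, one common refinement (a sketch — the lock file path, php path, and message limit are arbitrary choices, not values from the question) is to serialize runs with flock, so a slow send never overlaps the next minute's run:

```
* * * * * flock -n /tmp/spool.lock /usr/bin/php /path/to/app/console swiftmailer:spool:send --env=prod --message-limit=100 >> /dev/null 2>&1
```

flock -n exits immediately if the previous run still holds the lock, which makes a once-a-minute crontab behave like a simple single-worker queue consumer.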

Gitlab: Problems running Unicorn, Resque with Passenger/Nginx

I have installed a Gitlab on a brand new Ubuntu (10.04) and it is working almost correctly. Gitlab is reachable on HTTP, I can push/pull data via git to the server. There is one thing missing though, the activity feed is not updating. So I thought there is something wrong with the git hooks. I completely followed the installation process from Gitlab except I'd like to use Passenger to run Nginx in order to deploy multiple apps.
I ran sudo -u gitlab -H bundle exec rake gitlab:env:info RAILS_ENV=production to see if everything is set up correctly, but it said Redis is not running. ps aux says redis-server is up, so it is not the git hooks. The GitLab docs say to restart the gitlab service to solve that problem. In this case I get an error, which I think is the problem I need to solve:
$ sudo /etc/init.d/gitlab restart
Error, unicorn not running!
My question is, how can I get around this problem? How can I run unicorn? I thought the gitlab service would start it. Am I not using Nginx? Before I start reinstalling the whole thing, firstly without using Passenger, I thought I might ask the question here beforehand.
As mentioned by the OP pabera, nginx and mysql must be started, for the other components of GitLab (redis, unicorn, and now sidekiq) to run properly.
The official /etc/init.d/gitlab is here.
I have my own version of gitlabd (here), because I manage sidekiq in my own script, and I don't need to run the script as root.
You can see the run order for all the services in this script:
ssh
Apache and/or NGiNX
mysql
redis
GitLab (which will start unicorn and sidekiq)
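Spelled out as commands, that order would look something like this (a sketch — service names vary by distribution, and sshd is usually already running):

```shell
sudo service nginx start          # or apache2, depending on your setup
sudo service mysql start
sudo service redis-server start
sudo service gitlab start         # starts unicorn and sidekiq
```

Starting redis before gitlab matters here: the "Redis is not running"/"unicorn not running" errors above are what you see when the order is reversed.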
Kind of a poke in the dark...
In the GitLab installation.md README it states:
"
Start your GitLab instance:
sudo service gitlab start
# or
sudo /etc/init.d/gitlab restart
"
I did the first AND the second and got this exact error. However, I skipped the "or" and continued to the Nginx commands and it seems to work.
Hope this helps!