I have an issue: when I add a column to one of my entities and release it to production, I have to restart Apache in order to clear the Doctrine metadata APC/APCu cache.
I have tried all of the commands below, but none of them worked for me:
php -r "apc_clear_cache();"
php -r "apcu_clear_cache();"
sudo php app/console doctrine:cache:clear-metadata
sudo php app/console doctrine:cache:clear-query
sudo php app/console doctrine:cache:clear-result
I get this error message when I run them with --env=prod:
sudo php app/console doctrine:cache:clear-metadata --env=prod
sudo php app/console doctrine:cache:clear-query --env=prod
sudo php app/console doctrine:cache:clear-result --env=prod
[LogicException]
Cannot clear APC Cache from Console, its share in the Web server memory and not accessible from the CLI.
The only way I can get the Doctrine cache to refresh is to restart my Apache server, which can sometimes be an issue.
My cache settings for Doctrine in my Symfony project:
doctrine:
    orm:
        metadata_cache_driver: apc
        result_cache_driver: apc
        query_cache_driver: apc
        second_level_cache:
            enabled: true
            log_enabled: false
            region_cache_driver: apc
How can I clear the APC cache in this case without restarting Apache every time I release a new schema update to production? This is even worse if you have many servers behind a load balancer.
You must understand that the PHP running under Apache (or Nginx) is different from the PHP running via the command line: they are two separate Linux processes that don't share memory.
So even if you clear the cache via the CLI, it will not affect the PHP running under Apache.
The easiest way is to call apcu_clear_cache() inside a Symfony controller, or you can talk to the web server's PHP through its FastCGI socket from the CLI.
I recommend using a tool like http://gordalina.github.io/cachetool/, which does exactly that.
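For example, to clear APCu through PHP-FPM's FastCGI socket (a sketch; the address depends on your setup, and with mod_php you'd use CacheTool's web adapter instead):

php cachetool.phar apcu:cache:clear --fcgi=127.0.0.1:9000
# or, if PHP-FPM listens on a unix socket:
php cachetool.phar apcu:cache:clear --fcgi=/var/run/php5-fpm.sock

Because CacheTool executes inside the same PHP pool that serves web requests, the clear actually reaches the cache your Doctrine metadata lives in.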
Instead of trying to clear it from the console, try doing it from a controller or app.php.
I keep this line commented out in app.php:
//apcu_clear_cache();
When I do need to clear the cache, I just uncomment it and load any page. It works for me.
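For context, that line sits near the top of the stock Symfony 2 app.php front controller; roughly like this (a sketch, your file may differ slightly):

<?php

use Symfony\Component\HttpFoundation\Request;

$loader = require_once __DIR__.'/../app/bootstrap.php.cache';
require_once __DIR__.'/../app/AppKernel.php';

// Uncomment on release, load any page once, then comment it out again.
//apcu_clear_cache();

$kernel = new AppKernel('prod', false);
$kernel->loadClassCache();
$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);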
When you run apc_clear_cache() not from the CLI but from your app, the Doctrine cache will be cleared. You can add a button to your admin panel, or after each update send a curl request to a dedicated script on your server, for example:
http://example.com/apc_cache_clear.php
Inside that script, authorize the request and clear the cache.
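A minimal sketch of such a script (the token name and value are made up; use whatever authorization fits your app):

<?php
// apc_cache_clear.php -- hit this via curl (or a browser) after each deploy.
$secret = 'CHANGE_ME'; // hypothetical shared secret

if (!isset($_GET['token']) || $_GET['token'] !== $secret) {
    header('HTTP/1.0 403 Forbidden');
    exit('Forbidden');
}

// Clear whichever extension is loaded (APCu on newer PHP, APC on older).
if (function_exists('apcu_clear_cache')) {
    apcu_clear_cache();
} elseif (function_exists('apc_clear_cache')) {
    apc_clear_cache('user'); // Doctrine stores its entries in the user cache
}

echo 'cache cleared';

Usage after a deploy: curl "http://example.com/apc_cache_clear.php?token=CHANGE_ME"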
Take a look at Smart-Core/AcceleratorCacheBundle, which provides a command to clear the PHP accelerator cache from the CLI.
It lets you flush the cache with a CLI command:
php app/console cache:accelerator:clear
The docs also include a Capifony recipe.
Hope this helps.
Every minute, a K8s CronJob of mine runs a Symfony command. My problem is the huge amount of time my pod spends warming up the Symfony cache before executing the command: 56s ...
I'm looking for a way to bake the cache into the Docker image, but I can't run cache:warmup during docker build, because the command needs the database and Redis to be up.
Do you have a solution for me?
Thanks!
Our team is presently exploring the idea of service discovery for a Symfony2 application using Consul. Being in the relative frontier, there's very little out there in the way of discussion. So far we've discovered:
Runtime configuration has previously been shot down.
A bundle exists to handle such a use case, but it also hasn't seen a lot of activity as of late.
One of the contributors of said bundle suggested that External Parameters might be the answer to the problem.
Sensio has created its own Consul SDK. However, there seems to be little in the way of documentation/official blog articles re: Symfony2 integration.
Consul provides watches which can be triggered on various changes.
Current thoughts are to explore utilizing the Consul watchers to re-trigger a cache build along with external parameters. That said, there is some concern about the overhead of such an operation if services change semi-frequently.
Based on the above, and knowledge of Consul/Symfony internals, would that be an advisable approach? If not, why, and what alternatives are available?
In the company I work, we took a different route.
Instead of fighting against Symfony to make it accept runtime configuration (something it should support, like Spring Data Consul does, for example), we decided to make Consul update Symfony's configuration, an approach similar in concept, though different in implementation, to what Frank did.
We installed Consul and Consul Template. We created a K/V entry that contains the entire parameters.yml file. Example:
Key: eblock/config/parameters.yml
parameters:
    router.request_context.host: dev.eblock.ca
    router.request_context.scheme: http
    router.request_context.base_url: /
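For reference, one way to load that value into Consul's KV store is a PUT against the HTTP API (assuming the default agent address):

curl -X PUT --data-binary @parameters.yml http://localhost:8500/v1/kv/eblock/config/parameters.yml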
Then a consul template configuration file was added at location /opt/consul-template/config/eblock.cfg:
template {
    source = "/opt/consul-template/templates/eblock-parameters.yml.ctmpl"
    destination = "/var/www/eblock/app/config/parameters.yml"
    command = "/opt/eblock/scripts/parameters_updated.sh"
}
The contents of the ctmpl file are:
{{key "eblock/config/parameters.yml"}}
Finally, our parameters_updated.sh script does:
#!/bin/bash
readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=201

lock() {
    local prefix=$1
    local fd=${2:-$LOCK_FD}
    local lock_file=$LOCKFILE_DIR/$prefix.lock

    # create lock file
    eval "exec $fd>$lock_file"

    # acquire the lock
    flock -n $fd \
        && return 0 \
        || return 1
}

lock $PROGNAME || exit 0

export HOME=/root

logger "Starting composer install" && \
/usr/local/bin/composer install -d=/var/www/eblock/ --no-interaction && \
logger "Running composer dump-autoload" && \
/usr/local/bin/composer dump-autoload -d=/var/www/eblock/ --optimize && \
logger "Running app/console c:c/c:w" && \
/usr/bin/php /var/www/eblock/app/console c:c -e=prod --no-warmup && \
/usr/bin/php /var/www/eblock/app/console c:w -e=prod && \
logger "Running doctrine commands" && \
/usr/bin/php /var/www/eblock/app/console doctrine:database:create --env=prod --if-not-exists && \
/usr/bin/php /var/www/eblock/app/console doctrine:migrations:migrate -n --env=prod && \
logger "Restarting php-fpm" && \
/bin/systemctl restart php-fpm &
With both the consul and consul-template services up, as soon as the value under the watched key changes, consul-template will render the file into the configured destination and run the parameters-updated command.
It works like a charm. =)
A simple KV watcher that puts the value into parameters.yml and triggers a cache:clear is the simplest option in my opinion, and it also provides the benefit of compilation, so the application doesn't have to go to Consul each time to check whether values have been updated. Like you said, there is some overhead, but it seems fine if you don't change your parameters every 5 minutes.
We're exploring that option now but if you made any progress on this, an update would be appreciated.
[Update 2016-02-23] We've implemented the idea I mentioned above and it works as expected: well. Mind you, we change our parameters only on deploy of a new version (because we also use Consul for service discovery, there is no need to update service lists in parameters). We mostly did it because it saves us the boring job of changing parameters on several servers. As usual, this might not work for you, but I think you'd be safe if, like I said before, you don't change your parameters every 5 minutes :)
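For anyone implementing this, the key watch can live directly in the Consul agent configuration; a minimal sketch (the key and handler path are illustrative, reusing the layout from the answer above):

{
  "watches": [
    {
      "type": "key",
      "key": "eblock/config/parameters.yml",
      "handler": "/opt/eblock/scripts/parameters_updated.sh"
    }
  ]
}

The handler is invoked with the new value as JSON on stdin, so a script like the parameters_updated.sh above can rewrite parameters.yml and run cache:clear.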
I'm using docker-compose to set up a portable development environment for a bunch of symfony2 applications (though nothing I want to do is specific to symfony). I've decided to have the source files on the local machine exposed as a data volume, with all the other dependencies in Docker. This way developers can edit on the local filesystem.
Everything works great, except that after running the app, my cache and log files and the files created by Composer in the /vendor directory are owned by root.
I've read about this problem and some possible approaches here:
Changing permissions of added file to a Docker volume
But I can't quite tease out what changes I have to make in my docker-compose.yml file so that when my symfony container starts with docker-compose up, any files that are created have the permissions of the user on the host machine.
I'm posting the file for reference, worker is where php, etc. live:
source:
    image: symfony/worker-dev
    volumes:
        - $PWD:/var/www/app

mongodb:
    image: mongo:2.4
    ports:
        - "27017:27017"
    volumes_from:
        - source

worker:
    image: symfony/worker-dev
    ports:
        - "80:80"
    links:
        - mongodb
    volumes_from:
        - source
    volumes:
        - "tmp/:/var/log/nginx"
One of the solutions is to execute the commands inside your container. I've tried multiple workarounds for the same issue in the past, and I find executing the command inside the container the most user-friendly.
Example command: docker-compose run CONTAINER_NAME php bin/console cache:clear. You may use make, ant, or any modern tool to keep the commands short.
Example with Makefile:
all: | build run test

build: | docker-compose-build

run: | composer-install clear-cache

############## docker compose
docker-compose-build:
	docker-compose build

############## composer
composer-install:
	docker-compose run app composer install

composer-update:
	docker-compose run app composer update

############## cache
clear-cache:
	docker-compose run app php bin/console cache:clear

docker-set-permissions:
	docker-compose run app chown -R www-data:www-data var/logs
	docker-compose run app chown -R www-data:www-data var/cache

############## test
test:
	docker-compose run app php bin/phpunit
Alternatively, you may introduce a .env file which contains environment variables, and then use one of those variables to run a usermod command in the Docker container.
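A sketch of that approach (the HOST_UID variable, the entrypoint path, and the www-data user are assumptions; adjust to your images):

# .env on the host
HOST_UID=1000

# docker-compose.yml fragment: pass the variable into the container
worker:
    image: symfony/worker-dev
    environment:
        - HOST_UID=${HOST_UID}

# entrypoint.sh baked into the image: remap www-data to the host UID
#!/bin/sh
if [ -n "$HOST_UID" ]; then
    usermod -u "$HOST_UID" www-data
fi
exec "$@"

Files created by www-data inside the container then carry your host UID, so cache, logs, and vendor stay editable on the host.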
I have a queue of emails to send to customers. I'm spooling the emails: http://symfony.com/doc/2.0/cookbook/email/spool.html
Using this command:
php app/console swiftmailer:spool:send --env=prod
The problem is how to run this command in the background, so that I don't have to execute it from the console whenever I want to send the queued emails.
I solved this using a crontab, as explained here: http://blog.servergrove.com/2012/04/27/spooling-emails-with-symfony2-on-vps-and-shared-hosting/
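For reference, the crontab approach boils down to an entry like this (the path and message limit are illustrative):

* * * * * php /var/www/project/app/console swiftmailer:spool:send --message-limit=10 --env=prod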
But to me, using crontab doesn't seem like the best solution. I also read about RabbitMQ and its bundle for Symfony2, but with that I'd have to run another command to consume the queue:
./app/console rabbitmq:consumer -m 50 queue_email
What is the best solution for this?
I have installed GitLab on a brand new Ubuntu (10.04) and it is working almost correctly. GitLab is reachable over HTTP, and I can push/pull data via git to the server. There is one thing missing though: the activity feed is not updating. So I thought there was something wrong with the git hooks. I completely followed the GitLab installation process, except that I'd like to use Passenger to run Nginx in order to deploy multiple apps.
I ran sudo -u gitlab -H bundle exec rake gitlab:env:info RAILS_ENV=production to see if everything is set up correctly, but it said Redis is not running. ps aux says redis-server is up, so it is not the git hooks. The GitLab docs say to restart the gitlab service to solve that problem. When I do, I get an error, which I think is the problem I need to solve:
$ sudo /etc/init.d/gitlab restart
Error, unicorn not running!
My question is: how can I get around this problem? How can I run unicorn? I thought the gitlab service would start it. Am I not using Nginx? Before I start reinstalling the whole thing, this time without Passenger, I thought I'd ask the question here first.
As mentioned by the OP pabera, nginx and mysql must be started for the other components of GitLab (redis, unicorn, and now sidekiq) to run properly.
The official /etc/init.d/gitlab is here.
I have my own version of gitlabd (here), because I manage sidekiq in my own script, and I don't need to run the script as root.
You can see the run order for all the services in this script:
ssh
Apache and/or NGiNX
mysql
redis
GitLab (which will start unicorn and sidekiq)
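On a stock Ubuntu install, that order translates to roughly the following (service names vary between distributions and setups):

sudo service ssh start
sudo service nginx start        # and/or apache2
sudo service mysql start
sudo service redis-server start
sudo service gitlab start       # this in turn spawns unicorn and sidekiq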
Kind of a poke in the dark...
In the GitLab installation.md README it states:
"
Start your GitLab instance:
sudo service gitlab start
# or
sudo /etc/init.d/gitlab restart
"
I did the first AND the second and got this exact error. However, I skipped the "or" and continued to the Nginx commands and it seems to work.
Hope this helps!