Redis connection (in Doctrine cache) gets initialized on each command - Symfony

I have a Symfony 4.1 app where I use Doctrine and want to set up a Redis cache for Doctrine.
Here is a part of composer.json:
"snc/redis-bundle": "^2.1",
"symfony/doctrine-bridge": "^4.1",
"symfony/proxy-manager-bridge": "^4.1",
Here is the YAML config file:
snc_redis:
    clients:
        doctrine_cache:
            type: phpredis
            alias: doctrine_cache
            dsn: '%my_dsn%'
    doctrine:
        metadata_cache:
            client: doctrine_cache
            entity_manager: default
The problem: the very first time Symfony generates the cache for the DI container, it initializes the Redis connection. This means, for example, that when I run any console command, it tries to connect to Redis. Example:
// very first command after git clone and composer install
php bin/console about
Output:
In PhpredisClientFactory.php line 64:
php_network_getaddresses: getaddrinfo failed: No such host is known.
I expect the Redis cache service to be initialized lazily; otherwise I cannot run other build commands in an independent environment that has no Redis.
Can someone advise please?

If this happens after composer install, then Composer might be running the default scripts:
- "post-install-cmd"
- "post-update-cmd"
Try removing them and see if Redis works. If it does, add a script to your deployment entrypoint to run them at the end.
PS: pay attention to Doctrine and Redis if you are using migrations: in that case you should also clear the Doctrine cache.

Related

How to log request channel into a log file?

I am currently running a Symfony 5 project in dev environment.
I would like to output request logs (like 10:01:39 request.INFO Matched route "login_route") to a file.
I have the following config/packages/dev/monolog.yaml file:
monolog:
    handlers:
        main:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
            channels: [event]
With the YAML above, it logs correctly into the file /tmp/dev-logs/dev.log when I execute bin/console cache:clear.
But it does not log anything when I make requests to the application, no matter whether I set channels: [request], channels: ~, or no channels parameter at all.
How can I edit that monolog.yaml file so that request channel logs are written to the file?
I have found the answer! This is very specific to my configuration.
In fact, I have two Docker containers (which both mount the project directory as a volume) for development:
- one for code editing (with a linter, syntax checker, specific vim configuration, etc.)
- one to access the application over HTTP using PHP-FPM (the one that is used when I make HTTP requests on the app)
So, when I run bin/console cache:clear from the first container, the one I use for development, it logs into the /tmp/dev-logs/dev.log file of that first container; but when I make HTTP requests, it logs into the /tmp/dev-logs/dev.log file of the second container.
I was checking only the file in the first container while the logging was happening in the file of the second container. So I was simply not looking at the right file.
Everything works. :)

AWS credentials not found for celery-k8s deployment

I'm trying to run Dagster using celery-k8s, using examples/celery-k8s as a start. Upon running the pipeline from the playground I get:
Initialization of resources [s3, io_manager] failed.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have configured AWS credentials in environment variables as mentioned in the documentation:
deployments:
  - name: "user-code-deployment-test"
    image:
      repository: "somasays/dagster-usercode-example"
      tag: "0.5"
      pullPolicy: Always
    dagsterApiGrpcArgs:
      - "-f"
      - "/workspace/repo.py"
    port: 3030
    env:
      AWS_ACCESS_KEY_ID: AAAAAAAAAAAAAAAAAAAAAAAAA
      AWS_SECRET_ACCESS_KEY: qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
      AWS_DEFAULT_REGION: eu-central-1
I can also see that these values are set in the environment variables of the pod, and I can access the S3 location after pip install awscli and aws s3 ls. The job pod, however, throws Unable to locate credentials.
Please help
The deployment configuration applies to the user code servers. Meanwhile, the celery executor runs your pipeline code in separate Kubernetes jobs. To provide your secrets there, you will want to configure the env_secrets field of the celery-k8s executor in your pipeline run config.
See https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-k8s/dagster_k8s/job.py#L321-L327 for details on the config.
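For illustration, a minimal run-config sketch; the Secret name aws-credentials and the namespace are assumptions, not taken from the question, and the Secret would have to exist in the cluster and contain the AWS keys:
execution:
  celery-k8s:
    config:
      job_namespace: dagster
      # "aws-credentials" is a hypothetical Kubernetes Secret whose keys
      # (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) are injected as environment
      # variables into the Kubernetes jobs launched by the celery-k8s executor.
      env_secrets:
        - aws-credentials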

Missing encryption key to decrypt file with. Ask your team for your master ... it in the ENV['RAILS_MASTER_KEY']. Platform.sh deployment aborting

ERROR MESSAGE:
W: Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/master.key or put it in the ENV['RAILS_MASTER_KEY'].
When deploying my project on Platform.sh, the operation failed because the decryption key is missing. From my Google search, I found that the decryption key can be provided through the RAILS_MASTER_KEY environment variable.
My Ubuntu .bashrc:
export RAILS_MASTER_KEY='ad5e30979672cdcc2dd4f4381704292a'
Rails project configuration for Platform.sh:
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'
# The size of the persistent disk of the application (in MB).
disk: 5120
mounts:
    'web/uploads':
        source: local
        source_path: uploads
relationships:
    postgresdatabase: 'dbpostgres:postgresql'
hooks:
    build: |
        gem install bundler:2.2.5
        bundle install
        RAILS_ENV=production bundle exec rake assets:precompile
    deploy: |
        RACK_ENV=production bundle exec rake db:migrate
web:
    upstream:
        socket_family: "unix"
    commands:
        start: "\"unicorn -l $SOCKET -E production config.ru\""
    locations:
        '/':
            root: "\"public\""
            passthru: true
            expires: "24h"
            allow: true
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
type: upstream
upstream: "app:http"
"https://{default}/":
type: redirect
to: "https://www.{default}/"
services.yaml
# The name given to the PostgreSQL service (lowercase alphanumeric only).
dbpostgres:
    type: postgresql:13
    # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
    disk: 5120
db:
    type: postgresql:13
    disk: 5120
    configuration:
        extensions:
            - pgcrypto
            - plpgsql
            - uuid-ossp
environments/production.rb
config.require_master_key = true
I suspect that the master.key is not accessible during deployment, and I don't understand how to solve the problem.
From what I understand, your export is in your .bashrc on your local machine, so it won't be accessible when deploying on Platform.sh. (The logs you see in your terminal when building and deploying are streamed; none of this happens on your machine.)
You need to make the RAILS_MASTER_KEY accessible on Platform.sh. To do so, this variable needs to be declared in your project.
Given the nature of the variable, I would suggest using the Platform.sh CLI to create it.
If this variable should be accessible on all your environments, you can make it a project level variable.
$ platform variable:create --level project --sensitive true env:RAILS_MASTER_KEY <your_key>
If it should only be accessible for a specific environment, then you need an environment level variable:
$ platform variable:create --level environment --environment '<your_environment>' --inheritable false --sensitive true env:RAILS_MASTER_KEY '<your_key>'
The env: prefix in the variable name tells Platform.sh to expose the variable with the rest of the environment variables. More information about this can be found in the variables prefix section of the environment variables documentation page.
You could do the same via the management console if you prefer to avoid the command line.
Environment variables can also be configured directly in your .platform.app.yaml file, as described here. Keep in mind that, since this file is versioned, you should not use this method for sensitive information such as encryption keys, API keys, and other kinds of secrets.
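For non-sensitive values, such a declaration in .platform.app.yaml looks roughly like this (the variable name is purely illustrative; the actual RAILS_MASTER_KEY should stay in a sensitive CLI-created variable as shown above):
variables:
    env:
        # Hypothetical, non-secret setting; never commit the master key here.
        SOME_PUBLIC_SETTING: "value"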
The RAILS_MASTER_KEY environment variable should now be accessible during your Platform.sh deployment.

Deploy a project with DeployBundle

I'm trying to deploy my project with DeployBundle and made the following settings:
parameter.yml
jordi_llonch_deploy:
    config:
        project: delivve
        vcs: git
        servers_parameter_file: app/config/parameters_deployer_servers.yml
        local_repository_dir: /home/deploy/local_repository
        clean_max_deploys: 7
        ssh:
            proxy: cli
            user: user
            password: 'password'
            public_key_file: '/home/user/.ssh/id_rsa.pub'
            private_key_file: '/home/user/.ssh/id_rsa'
            private_key_file_pwd: 'password'
    zones:
        prod_myproj:
            deployer: delivve
            environment: prod
            checkout_url: 'https://user@bitbucket.org/user/project-webservice.git'
            checkout_branch: master
            repository_dir: /var/www/production/delivve/deploy
            production_dir: /var/www/production/delivve/code
parameters_deployer_servers.yml
prod_myproj:
    urls:
        - user@localhost:22
The service and its settings are also configured, and that part seems to be working.
My problem is that when I run the command:
sudo php app/console deployer:initialize --zones=prod_myproj
I get the following error:
[prod_myproj]
[2016-01-04 18:25:55] app.CRITICAL: Not implemented
ROLLBACK [prod_myproj]
[2016-01-04 18:25:55] app.CRITICAL: Not implemented
Does anyone know why this is happening and how to solve it, or how to deploy with this bundle?
This looks like it is coming from the password authentication (https://github.com/jordillonch/DeployBundle/blob/3f8e679eb2ac87d0cef9ea9dd4765afd24c6a266/SSH/CLISshProxy.php#L60).
Try removing jordi_llonch_deploy.config.ssh.password from your config.yml (https://github.com/jordillonch/DeployBundle/blob/3f8e679eb2ac87d0cef9ea9dd4765afd24c6a266/SSH/SshClient.php#L76).
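For illustration, here is the ssh section from the question's configuration with the password entry removed, a sketch of what the suggestion would look like; it assumes the public key is already authorized on the target server so key-based authentication can take over:
jordi_llonch_deploy:
    config:
        ssh:
            proxy: cli
            user: user
            # password removed so the keys below are used instead
            public_key_file: '/home/user/.ssh/id_rsa.pub'
            private_key_file: '/home/user/.ssh/id_rsa'
            private_key_file_pwd: 'password'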

Can one use different database credentials for Doctrine migrations in Symfony2?

How can one configure Symfony's DoctrineMigrationsBundle to use different database authentication credentials from those used by its DoctrineBundle, or at the very least a different DoctrineBundle connection from the one used elsewhere in the app?
We would like the app to connect to the database with only limited permissions, e.g. no ability to issue DDL commands such as CREATE, ALTER or DROP. However, migrations will need to execute such DDL commands and so should connect as a user with elevated permissions. Is this possible?
I know it's a very old post, but as it is the one that shows up in a Google search on the subject, I'm adding my solution, which works with Symfony 4.
First, you just have to define a new database connection in config/doctrine.yml (a new entity manager is NOT needed):
doctrine:
    dbal:
        default_connection: default
        connections:
            default:
                # This will be the connection used by the default entity manager
                url: '%env(resolve:DATABASE_URL)%'
                driver: 'pdo_pgsql'
                server_version: '11.1'
                charset: UTF8
            migrations:
                # This will be the connection used for playing the migrations
                url: '%env(resolve:DATABASE_MIGRATIONS_URL)%'
                driver: 'pdo_pgsql'
                server_version: '11.1'
                charset: UTF8
    orm:
        # As usual...
You also have to define DATABASE_MIGRATIONS_URL with the admin credentials in the .env file or in environment variables:
###> doctrine/doctrine-bundle ###
# Format described at http://docs.doctrine-project.org/projects/doctrine-dbal/en/latest/reference/configuration.html#connecting-using-a-url
DATABASE_URL=postgresql://app_user:app_user_pass@localhost:5432/db
# Database URL used for migrations (elevated rights)
DATABASE_MIGRATIONS_URL=postgresql://admin_user:admin_user_pass@localhost:5432/db
###< doctrine/doctrine-bundle ###
Then, just execute your migrations with the --db option, passing the name of your new connection:
php bin/console doctrine:migrations:migrate --db=migrations
Yes. Just define a new entity manager with the correct connection details and then use that entity manager when running migration commands:
$ php app/console doctrine:migrations:version --em=new_entity_manager
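For reference, a minimal Symfony2-style config.yml sketch of such a setup; the connection names, parameter names, and the new_entity_manager name are illustrative, not taken from the question:
# app/config/config.yml (sketch; names and parameters are illustrative)
doctrine:
    dbal:
        default_connection: default
        connections:
            default:
                driver:   pdo_mysql
                host:     '%database_host%'
                dbname:   '%database_name%'
                user:     '%database_user%'            # limited application user
                password: '%database_password%'
            migrations:
                driver:   pdo_mysql
                host:     '%database_host%'
                dbname:   '%database_name%'
                user:     '%database_admin_user%'      # user allowed to run DDL
                password: '%database_admin_password%'
    orm:
        default_entity_manager: default
        entity_managers:
            default:
                connection: default
            new_entity_manager:
                connection: migrations
With something like this in place, the application keeps using the default entity manager with limited permissions, while the doctrine:migrations:* commands are pointed at the elevated connection via --em=new_entity_manager.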
