How to set up Sylius to use Redis as a cache backend - symfony

According to https://github.com/doctrine/DoctrineCacheBundle#cache-providers, there are several parameters needed to use Redis instead of file_system as the cache backend.
In the main configuration file of Sylius there is only one place to put cache settings:
app/config/parameters.yml
sylius.cache:
    type: redis   # was file_system
Where to put the rest?
connection_id - Redis connection service id
host - redis host
port - redis port
Thanks!

You can use this syntax:
sylius.cache:
    type: redis
    namespace: sylius
    redis:
        host: localhost
        port: 6379
        database: 10
Sylius then creates a Doctrine cache provider from this configuration.
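Since the question points at app/config/parameters.yml, here is a sketch of the same block nested under the parameters key there; the host, port, and database values are placeholders:
parameters:
    sylius.cache:
        type: redis
        namespace: sylius
        redis:
            host: 127.0.0.1   # placeholder connection values
            port: 6379
            database: 10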

Related

Download a file by temp URL using Minio and nginx

I generate a file URL with Minio and return it from a RestController with a 302 HTTP code, but I need to use an external address via an nginx location. The Minio temp URL carries an X-Amz-Signature parameter, and the service URL is part of that signature, which is why I can't simply redirect the user through nginx.
For example:
host: minio-host
port: minio-port
bucket: file
filename: 333/test.jpg
Minio's URL: http://minio-host:minio-port/file/333/test.jpg
But I want to use an nginx location (http://my-host/minio) instead.
If I go through nginx, I can't get the file, because the X-Amz-Signature was computed for host = http://minio-host:minio-port.
What should I do to make this work through nginx?
I run Minio and nginx in Docker.
I tried disabling the header rewriting in nginx.
I had similar issues locally.
Solution 1: create the presigned URL through nginx, using your public URL, not one of the Docker network addresses.
Solution 2: generate the link through the mc client. This also worked, but you can hardly generate every link through mc; it depends on what you are doing.
You can also refer to my answer to a similar question: https://stackoverflow.com/questions/74656068/minio-how-to-get-right-link-to-display-image-on-html/74717273#74717273:%7E:text=If%20you%20have%20Minio%20running%20in%20a%20container%2C%20it%20is%20always%20a%20mess%20with%20127.0.0.1%20or%20localhost:~:text=Try%20to%20generate%20the%20link%20with%20the%20minioclient.
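For Solution 1, the important part is that nginx forwards the original Host header unchanged, because the host is part of the signature. A minimal sketch of such a proxy block, assuming a compose service named minio listening on port 9000 (as in the compose file further down):
location / {
    proxy_set_header Host $http_host;          # keep the Host the signature was computed for
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://minio:9000;              # internal compose service name
}
Note that a sub-path such as /minio also becomes part of the signed request, so the presigned URL must be generated against exactly the scheme, host, and path the client will later use.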
Last, I used a docker-compose sample with the quay.io image and nginx. There, even the links created through the browser UI didn't work.
I solved that issue by using the bitnami/minio image.
Try this:
version: '3.7'
services:
  minio:
    container_name: minio
    image: bitnami/minio
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - mynetwork
    volumes:
      - miniodata:/data
networks:
  mynetwork:
volumes:
  miniodata:
You can also put an entry in the hosts file on your host machine:
127.0.0.1 minio.local
and then use the UI at http://minio.local:9001.
This should generate the right presigned URL.
If that all works fine, you can create a multi-node installation and use nginx as a load balancer.

AKS Network Policy: why can the "host" not be resolved when a network policy is added?

I have a Kubernetes cluster deployed on Azure (AKS). On that cluster I have WordPress deployed with Helm, and an Azure MariaDB instance that is accessible to WordPress via an ExternalName Service object.
My external Service looks like:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: app
spec:
  type: ExternalName
  externalName: somename.mariadb.database.azure.com
I defined a network policy in my WordPress chart, and the values.yml looks like:
# Azure MariaDB info
externalDatabase:
  # host: 10.0.4.68     # it works fine when I put the MariaDB IP
  host: mysql           # it does not work with the external Service object name, nor with the endpoint somename.mariadb.database.azure.com
  port: 3306
  database: bitnami_wordpress
networkPolicy:
  enabled: true
  ingressRules:
    accessOnlyFrom:
      enabled: true
    customRules:
      - {}
  egressRules:
    customRules:
      - to:
          - ipBlock:
              cidr: 10.0.4.64/28   # the subnet of the MariaDB database
        ports:
          - protocol: TCP
            port: 3306
When I replace externalDatabase.host with the IP of MariaDB, it works fine. But when I replace it with the external Service object name (i.e. mysql, from the first manifest) or with the endpoint (i.e. somename.mariadb.database.azure.com), I get the following error:
wordpress 15:35:09.41 DEBUG ==> Executing SQL command:
SELECT 1
ERROR 2005 (HY000): Unknown server host 'somename.mariadb.database.azure.com' (-3)
wordpress 15:35:34.43 DEBUG ==> Executing SQL command:
PS: the above error is from setting externalDatabase.host to somename-dev.mariadb.database.azure.com; it is the same error as when externalDatabase.host is set to mysql.
Any help please?
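One detail that may explain the behaviour above: once an egress policy with custom rules is applied, all other outbound traffic from the WordPress pods is denied, including DNS lookups to kube-dns, so a raw IP keeps working while any hostname (the ExternalName Service or the Azure endpoint) fails to resolve. A hedged sketch of an additional egress rule that allows DNS; the kube-dns labels are an assumption and may differ per cluster:
egressRules:
  customRules:
    # the existing rule to the MariaDB subnet stays as-is
    - to:
        - namespaceSelector: {}          # any namespace; narrow to kube-system if preferred
          podSelector:
            matchLabels:
              k8s-app: kube-dns          # assumed CoreDNS/kube-dns label
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53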

Every exposed port is redirecting to 8080; how do I map the container port to something other than 8080?

Issue
I have a set of working WordPress Docker Compose containers consisting of a blog image and a db image.
version: '3'
services:
  wordpress-app-db:
    build:
      context: ./mysql
    image: wordpress-app-db:5.7-mysql
    restart: always
  php7-wordpress-app:
    depends_on:
      - wordpress-app-db
    build:
      context: ./wordpress
    image: wordpress-app:php7
    links:
      - wordpress-app-db
    ports:
      - "8080:80"
    restart: always
volumes:
  data:
The above YAML works with no issues at all, but when I want to change port 8080 to some other port, it simply won't work:
ports:
  - "<my-custom-port>:80"
Every URL still takes me to http://localhost:8080/.
I'm confused by this behaviour; I can't understand why it redirects to 8080 when the container has been mapped to <my-custom-port>.
For info, I've exposed port 80 in the Dockerfile.
Reason
I want to do this because I have to run this set in a Kubernetes cluster behind a NodePort service, and I can't assign port 8080 as the nodePort.
Verify that the siteurl and home wp_options in your database are set to the correct hostname and port. I recently migrated a WordPress site from a LAMP compute instance to Kubernetes using the official WordPress images, and I experienced exactly the same issue.
WordPress will perform a redirect if those values do not match your domain name.
In my case, the two fields had somehow mutated during the migration and been changed to localhost:8080, causing every request to redirect to :8080.
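If that is what happened, a hedged sketch of the fix, assuming the default wp_ table prefix; the URL is a placeholder for whatever address you actually serve the site from:
-- point both options at the public address (placeholder host and port)
UPDATE wp_options
SET option_value = 'http://<your-host>:<your-port>'
WHERE option_name IN ('siteurl', 'home');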
Have you cross-checked the target port in your Service file?
There are three port entries to keep straight (see the sketch after this list):
containerPort - the internal port, i.e. the custom port you use in your docker-compose file
targetPort - the actual port the application is listening on
nodePort - the one you can reach from outside via <http/s>://nodeIP:nodePort
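A minimal sketch of a NodePort Service wired to the compose setup above; the names and the 30080 value are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
  selector:
    app: wordpress            # must match the pod labels
  ports:
    - port: 80                # service port inside the cluster
      targetPort: 80          # port the container actually listens on (EXPOSE 80)
      nodePort: 30080         # external port on every node, must be in 30000-32767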

Manage a Symfony App on Heroku

I have a Symfony app on Heroku with the ClearDB add-on. I need to manage the app for test and prod, so I need two databases: one for test and one for production (the principal one).
I tried a Heroku Pipeline, but when I promote the app from staging to production, the production app is connected to the staging db. How can I solve this?
How do you manage it?
EDIT
I discovered the mistake. I was setting the parameters via:
$db = parse_url(getenv('CLEARDB_DATABASE_URL'));
$container->setParameter('database_host', $db['host']);
From a quick search for $container->setParameter I can see that this is a Symfony feature for interpolating values into the container; however, the docs mention the following warning:
NOTE: You can only set a parameter before the container is compiled:
not at run-time. To learn more about compiling the container see
Compiling the Container.
https://symfony.com/doc/current/service_container/parameters.html#getting-and-setting-container-parameters-in-php
Heroku runs Symfony apps only with the prod environment, so the staging app also has its environment set to "prod". How can I set parameters for different environments, or dynamically?
Thanks,
AlterB
I solved it with environment variables.
In app/config/config.yml I changed this:
doctrine:
    dbal:
        driver: pdo_mysql
        host: '%database_host%'
        port: '%database_port%'
        dbname: '%database_name%'
        user: '%database_user%'
        password: '%database_password%'
to this:
doctrine:
    dbal:
        driver: pdo_mysql
        url: '%env(CLEARDB_DATABASE_URL)%'
The app gets the db connection directly from the Heroku environment, and it's done!
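This also resolves the staging vs. production concern: config vars belong to each app individually and are not copied by a pipeline promotion, so each stage keeps its own CLEARDB_DATABASE_URL. A quick way to verify (the app names are placeholders):
heroku config:get CLEARDB_DATABASE_URL --app my-app-staging
heroku config:get CLEARDB_DATABASE_URL --app my-app-production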

HTTP communication between 2 Docker containers with Docker Compose

I have two docker containers (linux containers on Windows 10 host) that are built from the microsoft/aspnetcore base image. Both containers run fine when I start them individually. I am trying to use Docker Compose to start both containers up (one is an identity provider using IdentityServer4 and the other is an api resource protected by Identity Server). I have the following docker-compose.yml file:
version: '3'
services:
  identityserver:
    image: eventloom/identityserver
    build:
      context: ../Eventloom.Web.IdentityProvider/Eventloom.Web.IdentityProvider
      dockerfile: DockerFile
    ports:
      - 8888:80
  eventsite:
    image: eventloom/eventsite
    build:
      context: ./Eventloom.Web.Eventsite
      dockerfile: Dockerfile
    ports:
      - 8080:80
    links:
      - identityserver
    depends_on:
      - identityserver
    environment:
      IdentityServer: "http://identityserver"
The startup class for the "eventsite" container uses IdentityModel to hit the discovery endpoint of "identityserver". For some reason, startup is never able to successfully get the discovery information, even though I can log into the eventsite container and get ping responses from identityserver. Is there something else I need to do to allow eventsite to communicate with identityserver over port 80?
It turns out that the HTTP communication was working fine and using the internal DNS properly. The issue was with my IdentityModel.DiscoveryClient object: I had not configured it to allow plain HTTP. I had to attach the Visual Studio debugger while the app was starting inside the container to figure that out. Thanks.
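For reference, a hedged sketch of that configuration with the DiscoveryClient API from that generation of IdentityModel; the authority matches the compose service name above, and newer IdentityModel versions expose the same policy through GetDiscoveryDocumentAsync instead:
// sketch only: relax the HTTPS requirement so a plain-HTTP authority is accepted
var discoveryClient = new DiscoveryClient("http://identityserver")
{
    Policy = { RequireHttps = false }
};
var disco = await discoveryClient.GetAsync();
if (disco.IsError)
{
    throw new Exception(disco.Error);
}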
