Connect Doctrine to memcached pool - symfony

Does anybody know how to connect Doctrine to a memcached pool, to use it as a cache driver?
I've checked the official bundle documentation and a lot of other sources, but didn't find any examples of such a connection.
Also, going through the source code, I could not find any option to use a pool, but perhaps I missed something.

Didn't test, but the following should work:
In app/config/parameters.yml, set/add:
parameters:
    memcached.servers:
        - [ 127.0.0.1, 11211 ]
        - [ 127.0.0.2, 11211 ]
In app/config/config.yml, set/add:
services:
    memcache:
        # requires the 'Memcached' PHP extension (it provides addServers);
        # with the legacy 'Memcache' extension you would need Doctrine's MemcacheCache instead
        class: Memcached
        calls:
            - [ addServers, [ %memcached.servers% ]]
    doctrine.cache.memcached:
        class: Doctrine\Common\Cache\MemcachedCache
        calls:
            - [ setMemcached, [ "@memcache" ]]
In app/config/config_prod.yml, set:
doctrine:
    orm:
        metadata_cache_driver:
            type: service
            id: doctrine.cache.memcached
        query_cache_driver:
            type: service
            id: doctrine.cache.memcached
        result_cache_driver:
            type: service
            id: doctrine.cache.memcached
As I said, I can't test it, but this is the combination of several known-to-work techniques.
UPDATE: solution updated based on CrazySquirrel's findings.
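The drivers above are only wired in config_prod.yml, so the other environments keep Doctrine's default cache. If you want to make that explicit, a minimal sketch for app/config/config_dev.yml (my assumption, not part of the original answer) would be:
doctrine:
    orm:
        # in-memory cache per request; avoids hitting memcached during development
        metadata_cache_driver: array
        query_cache_driver: array
        result_cache_driver: array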

Thanks lxg for your ideas. I've built the right configuration based on them. Please find the correct service definition below:
application config:
result_cache_driver:
    type: service
    id: doctrine.cache.memcached
service.yml:
services:
    memcached:
        class: Memcached
        calls:
            - [ addServers, [ %memcached_servers% ]]
    doctrine.cache.memcached:
        class: Doctrine\Common\Cache\MemcachedCache
        calls:
            - [ setMemcached, [ "@memcached" ]]
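The %memcached_servers% parameter referenced above isn't shown; a sketch of what it could look like in parameters.yml, assuming the [host, port] tuple format that Memcached::addServers() expects:
parameters:
    memcached_servers:
        # each entry is [host, port] (an optional third element is the weight)
        - [ 127.0.0.1, 11211 ]
        - [ 127.0.0.2, 11211 ]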

Related

disabling discovery for k8s api client

Right now I'm using the approach from the first answer of:
Cannot read configmap with name: [xx] in namespace ['default'] Ignoring
But in the application logs:
2022-04-19 14:14:57.660 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] INFO i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Start listing and watching...
2022-04-19 14:14:57.662 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] ERROR i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Reflector loop failed unexpectedly
io.kubernetes.client.openapi.ApiException:
at io.kubernetes.client.openapi.ApiClient.handleResponse(ApiClient.java:974)
at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:886)
at io.kubernetes.client.informer.SharedInformerFactory$1.list(SharedInformerFactory.java:207)
at io.kubernetes.client.informer.cache.ReflectorRunnable.run(ReflectorRunnable.java:88)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
And it works properly, reading the ConfigMaps mounted in the deployment.
How can I disable all the other features?
Using implementation("org.springframework.cloud:spring-cloud-starter-kubernetes-client-config:2.1.1")
In bootstrap.yml:
spring:
  application:
    name: toast
  cloud:
    vault:
      enabled: false
    kubernetes:
      reload:
        enabled: true
        mode: event
        strategy: refresh
      config:
        sources:
          - name: ${spring.application.name}-common
          - name: ${spring.application.name}
        enabled: true
        paths:
          #- { { .Values.application } }-common-config/data.yml
          #- { { .Values.application } }-config/application.yml
          - /etc/${spring.application.name}-common/config/application.yml
          - /etc/${spring.application.name}/config/data.yml
      enabled: true
I need to be able to use it without RBAC resources in k8s.
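Didn't test this, but spring-cloud-kubernetes exposes per-feature toggles, so something along these lines in bootstrap.yml might disable discovery and the load balancer while keeping config loading (property names assumed from the project's documentation, to be checked against 2.1.1):
spring:
  cloud:
    kubernetes:
      discovery:
        enabled: false      # assumed property: turn off service discovery
      loadbalancer:
        enabled: false      # assumed property: turn off the Kubernetes load balancer integration
      config:
        enabled: true       # keep ConfigMap-based configuration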

FOSElasticaBundle & Propel

I have installed FOSElasticaBundle "friendsofsymfony/elastica-bundle": "^3.2" (with Symfony 2.8.8) and tried to define this simple configuration:
fos_elastica:
    clients:
        default: { host: %elastic_host%, port: %elastic_port% }
    indexes:
        project:
            types:
                item:
                    mappings:
                        id: { type: integer }
                        itemno: { type: string }
                    persistence:
                        driver: propel
                        model: Model\Item\Item
                        provider: ~
Unfortunately I'm getting the following error:
The parent definition "fos_elastica.listener.prototype.propel" defined for definition "fos_elastica.listener.project.item" does not exist.
As I read through the documentation, there is no "listener" available for Propel - that's why I'm a little confused by the error message. I already tried to define the missing definition, but with no result.
Is it necessary to define a class/service as provider for the type? I couldn't find anything about that in the documentation.
Thanks.
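For what it's worth, FOSElasticaBundle 3.x lets you register your own provider as a tagged service; a rough sketch (the service id and class are invented here, and the class would have to implement FOS\ElasticaBundle\Provider\ProviderInterface):
services:
    app.search.item_provider:
        # hypothetical class that populates the "item" type from Propel queries
        class: Acme\ItemBundle\Search\ItemProvider
        tags:
            - { name: fos_elastica.provider, index: project, type: item }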

Prevent creating multiple connections in RabbitMQ while using RabbitMQ Bundle for the Symfony2

I'm using the RabbitMQ Bundle for the Symfony2 web framework. My question is, how can I avoid creating multiple connections (to prevent overloading the broker) after running many workers in the terminal? In the example below, I've run two workers and ended up having two connections/channels.
config.yml
old_sound_rabbit_mq:
    connections:
        default:
            host: 127.0.0.1
            port: 5672
            user: guest
            password: guest
            vhost: /
            lazy: true
    producers:
        order_create_bmw:
            connection: default
            exchange_options: { name: order_create_ex, type: direct }
            queue_options:
                name: order_create_bmw_qu
                routing_keys:
                    - bmw
    consumers:
        order_create_bmw:
            connection: default
            exchange_options: { name: order_create_ex, type: direct }
            queue_options:
                name: order_create_bmw_qu
                routing_keys:
                    - bmw
            callback: application_frontend.consumer.order_create_bmw
services.yml
services:
    application_frontend.producer.order_create_bmw:
        class: Application\FrontendBundle\Producer\OrderCreateBmwProducer
        arguments:
            - "@old_sound_rabbit_mq.order_create_bmw_producer"
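The callback service application_frontend.consumer.order_create_bmw referenced in config.yml isn't shown; a hypothetical definition (the class name is invented, and it has to implement OldSound\RabbitMqBundle\RabbitMq\ConsumerInterface) would look like:
services:
    application_frontend.consumer.order_create_bmw:
        # hypothetical consumer class implementing ConsumerInterface::execute(AMQPMessage $msg)
        class: Application\FrontendBundle\Consumer\OrderCreateBmwConsumer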
Producer
namespace Application\FrontendBundle\Producer;

use Application\FrontendBundle\Entity\Order;
use OldSound\RabbitMqBundle\RabbitMq\ProducerInterface;

class OrderCreateBmwProducer
{
    private $producer;

    public function __construct(ProducerInterface $producer)
    {
        $this->producer = $producer;
    }

    public function add(Order $order)
    {
        $message = [
            'order_id' => $order->getId(),
            'car_model' => $order->getCarModel(),
            'timestamp' => date('Y-m-d H:i:s')
        ];

        $this->producer->publish(json_encode($message), 'bmw');
    }
}
Running workers
$ app/console rabbitmq:consumer order_create_bmw
$ app/console rabbitmq:consumer order_create_bmw
RabbitMQ Management
Every client (regardless of whether it is a publisher or subscriber) that connects to RabbitMQ will create a connection. Aside from using fewer clients, I can't think of any other way to achieve this. I also can't think of a reason to do so :) If it's performance you're worried about, then actually having more subscribers will help to "empty" the exchanges (and queues).

FosElasticaBundle: how to dump the actual JSON passed to ElasticSearch?

I am using FosElasticaBundle in a Symfony project. I configured my mappings, but I get the exception "expected a simple value for field [_id] but found [START_OBJECT]]".
I'd like to see the actual JSON created by FosElasticaBundle so I can test it directly against my ElasticSearch server and understand more about the exception.
According to the FosElastica documentation, everything should be logged when debug mode is enabled (i.e. in the DEV environment), but I can't see this happening; I only see Doctrine queries, but no JSON.
How can I dump the JSON created by FosElasticaBundle?
Update: mappings
# FOSElasticaBundle
fos_elastica:
    clients:
        default: { host: %elasticsearch_host%, port: %elasticsearch_port%, logger: false }
    indexes:
        app:
            types:
                user:
                    mappings:
                        name: ~
                        surname: ~
                    persistence:
                        driver: orm
                        model: AppBundle\Entity\User
                        provider: ~
                        listener: ~
                        finder: ~
I think you just need to set your logger to true instead of false:
fos_elastica:
    clients:
        default:
            host: %elasticsearch_host%
            port: %elasticsearch_port%
            logger: true   # <---- set true here
    ...
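As a side note (my assumption, not part of the answer above): the bundle logs its queries through a dedicated monolog channel, presumably named "elastica", so in config_dev.yml you could route that channel to its own file and read the generated JSON there:
monolog:
    handlers:
        elastica_queries:
            type: stream
            path: "%kernel.logs_dir%/elastica.log"
            level: debug
            # channel name assumed from the bundle's logger service
            channels: ["elastica"]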

Redis with Symfony2 causes problems between sites on my server

I'm using the Symfony2 snc-redis bundle for caching.
On my server, Redis is installed and working correctly.
My problem is: when I try to clear or flush the db with Redis, all sites on my server that use Redis crash, giving an internal server error because of the prod env.
I've tried to change the Redis configuration ports in my config.yml for every single site on my server, but I don't think it worked.
My sample snc-redis configuration:
snc_redis:
    clients:
        default:
            type: predis
            alias: default
            dsn: redis://localhost
            logging: %kernel.debug%
        cache:
            type: predis
            alias: cache
            dsn: redis://localhost/1
            logging: true
        cluster:
            type: predis
            alias: cluster
            dsn:
                - redis://127.0.0.1/5
                - redis://127.0.0.2/6
                - redis://pw@/var/run/redis/redis-1.sock/7
                - redis://127.0.0.1:6379/8
            options:
                profile: 2.4
                connection_timeout: 10
                connection_persistent: true
                read_write_timeout: 30
                iterable_multibulk: false
                throw_errors: true
                cluster: Snc\RedisBundle\Client\Predis\Connection\PredisCluster
        monolog:
            type: predis
            alias: monolog
            dsn: redis://localhost/1
            logging: false
            options:
                connection_persistent: true
    session:
        client: default
        prefix: foo
        use_as_default: true
    doctrine:
        metadata_cache:
            client: cache
            entity_manager: default
            document_manager: default
        result_cache:
            client: cache
            entity_manager: [default, read]
            document_manager: [default, slave1, slave2]
            namespace: "dcrc:"
        query_cache:
            client: cache
            entity_manager: default
    monolog:
        client: monolog
        key: monolog
    swiftmailer:
        client: default
        key: swiftmailer
monolog:
    handlers:
        main:
            type: service
            id: monolog.handler.redis
            level: debug
What am I doing wrong? How can I get this to work correctly without causing crashes?
My Redis Bundle for Symfony2:
Snc\RedisBundle\SncRedisBundle()
https://github.com/snc/SncRedisBundle
You can define a prefix for each site like this:
snc_redis:
    clients:
        default:
            dsn: "redis://localhost:6379"
            options:
                prefix: "site_name"
            type: phpredis
            alias: default
            logging: %kernel.debug%
Note: you have to make sure to put this prefix on all clients ;)
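For example (sketch only), the cache client from the question's configuration would get the same option, assuming the predis client accepts the prefix option as well:
snc_redis:
    clients:
        cache:
            type: predis
            alias: cache
            dsn: redis://localhost/1
            logging: true
            options:
                # per-site key prefix so a flush on one site doesn't touch the others' keys
                prefix: "site_name"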
Did you try to change the client alias for every site?
