OldSound RabbitMQ bundle multiple consumer configuration - Symfony

I'm trying to implement RabbitMQ with https://github.com/php-amqplib/RabbitMqBundle and the Symfony2 framework.
I've managed to make things work with one producer and one consumer, but the problem is when I use multiple consumers.
This is my configuration:
old_sound_rabbit_mq:
    connections:
        default:
            host: 'localhost'
            port: 5672
            user: 'guest'
            password: 'guest'
            vhost: '/'
            lazy: false
            connection_timeout: 3
            read_write_timeout: 3
            # requires php-amqplib v2.4.1+ and PHP5.4+
            keepalive: false
            # requires php-amqplib v2.4.1+
            heartbeat: 0
            # requires php_sockets.dll
            # use_socket: true # default false
    producers:
        soccer_team_stat:
            connection: default
            exchange_options: {name: 'soccer_team_stat_ex', type: direct}
            queue_options: {name: 'soccer_team_stat_qu'}
        soccer_team_stat_form:
            connection: default
            exchange_options: {name: 'soccer_team_stat_ex', type: direct}
            queue_options: {name: 'soccer_team_stat_form_qu'}
    consumers:
        soccer_team_stat:
            connection: default
            exchange_options: {name: 'soccer_team_stat_ex', type: direct}
            queue_options: {name: 'soccer_team_stat_qu'}
            callback: myapp.soccer_team_stat.consume
        soccer_team_stat_form:
            connection: default
            exchange_options: {name: 'soccer_team_stat_ex', type: direct}
            queue_options: {name: 'soccer_team_stat_form_qu'}
            callback: myapp.soccer_team_stat_form.consume
Service definitions:
<services>
    <service class="MyApp\EtlBundle\Producers\SoccerTeamStatProducer" id="myapp.soccer_team_stat.produce">
        <argument type="service" id="old_sound_rabbit_mq.soccer_team_stat_producer"/>
    </service>
    <service class="MyApp\EtlBundle\Producers\SoccerTeamStatProducer" id="myapp.soccer_team_stat_form.produce">
        <argument type="service" id="old_sound_rabbit_mq.soccer_team_stat_producer"/>
    </service>
    <service class="MyApp\EtlBundle\Consumers\SoccerTeamStatConsumer" id="myapp.soccer_team_stat.consume">
        <argument type="service" id="service_container"/>
    </service>
    <service class="MyApp\EtlBundle\Consumers\SoccerTeamStatFormConsumer" id="myapp.soccer_team_stat_form.consume">
        <argument type="service" id="service_container"/>
    </service>
</services>
And on php app/console rabbitmq:consumer -d soccer_team_stat_form I get:
[Symfony\Component\DependencyInjection\Exception\ServiceNotFoundException]
You have requested a non-existent service
"old_sound_rabbit_mq.soccer_team_stat_form_consumer".
I tried various combinations, including the multiple_consumers configuration key, but with no success. What am I missing?
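(For reference, the multiple_consumers key expects a different shape from consumers: per the RabbitMqBundle README, a single consumer serves several queues, each queue with its own callback, and is started with php app/console rabbitmq:multiple-consumer <name>. A sketch adapted from the README, untested, reusing the names from this question:
old_sound_rabbit_mq:
    multiple_consumers:
        soccer_team_stat:
            connection: default
            exchange_options: {name: 'soccer_team_stat_ex', type: direct}
            queues:
                # shape per the bundle's README; service ids reused from the question
                soccer_team_stat_qu:
                    name: 'soccer_team_stat_qu'
                    callback: myapp.soccer_team_stat.consume
                soccer_team_stat_form_qu:
                    name: 'soccer_team_stat_form_qu'
                    callback: myapp.soccer_team_stat_form.consume
)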

If neither routing_key nor binding_key is set, a direct exchange behaves like a fanout exchange and sends messages to all the queues it knows about. So, based on what I'm seeing in your configuration, you are better off using fanout explicitly, like below.
old_sound_rabbit_mq:
    connections:
        default:
            host: %rabbit_mq_host%
            port: %rabbit_mq_port%
            user: %rabbit_mq_user%
            password: %rabbit_mq_pswd%
            vhost: /
            lazy: true
    producers:
        soccer_team_stat:
            connection: default
            exchange_options: { name: 'soccer_team_stat_ex', type: fanout }
        soccer_team_stat_form:
            connection: default
            exchange_options: { name: 'soccer_team_stat_form_ex', type: fanout }
    consumers:
        soccer_team_stat:
            connection: default
            exchange_options: { name: 'soccer_team_stat_ex', type: fanout }
            queue_options: { name: 'soccer_team_stat_qu' }
            callback: myapp.soccer_team_stat.consume
        soccer_team_stat_form:
            connection: default
            exchange_options: { name: 'soccer_team_stat_form_ex', type: fanout }
            queue_options: { name: 'soccer_team_stat_form_qu' }
            callback: myapp.soccer_team_stat_form.consume
This RabbitMQ fanout example with Symfony, including 2 producers, 2 exchanges, 2 queues, N workers, and 2 consumers, is a full example (effectively an already-built version of what you want to do) that shows how this is done in Symfony apps. I suggest you follow the pattern used there; it is easy to follow and maintain. If you want more examples, just search for the RabbitMQ keyword on that blog.
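With this layout each producer publishes to its own exchange, and each consumer is started separately, e.g. php app/console rabbitmq:consumer soccer_team_stat and php app/console rabbitmq:consumer soccer_team_stat_form.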

Related

How to use nginx module on Filebeat in k8s

I am trying to use Filebeat with the nginx module to collect logs from nginx-ingress-controller and send them directly to Elasticsearch, but I keep getting an error:
Provided Grok expressions do not match field value: [172.17.0.1 - - [03/Dec/2022:00:05:01 +0000] \"GET /healthz HTTP/1.1\" 200 0 \"-\" \"kube-probe/1.24\" \"-\"]
This appears in Kibana under the error message field.
Note that I am running the latest Filebeat Helm chart (8.5) and the ingress controller is nginx-ingress-controller-9.2.15 1.2.1.
My Filebeat settings:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: false
      templates:
        - condition:
            contains:
              kubernetes.pod.name: redis
          config:
            - module: redis
              log:
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/log/containers/*${data.kubernetes.container.id}.log
        - condition:
            contains:
              kubernetes.pod.name: nginx
          config:
            - module: nginx
              access:
                enabled: true
                input:
                  type: container
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - /var/lib/docker/containers/${data.kubernetes.container.id}/*.log
output.elasticsearch:
  host: '${NODE_NAME}'
  hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
  username: '${ELASTICSEARCH_USERNAME}'
  password: '${ELASTICSEARCH_PASSWORD}'
  protocol: https
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]
setup.ilm:
  enabled: true
  overwrite: true
  policy_file: /usr/share/filebeat/ilm.json
setup.dashboards.enabled: true
setup.kibana.host: "http://kibana:5601"
ilm.json: |
  {
    "policy": {
      "phases": {
        "hot": {
          "actions": {
            "rollover": {
              "max_age": "1d"
            }
          }
        },
        "delete": {
          "min_age": "7d",
          "actions": {
            "delete": {}
          }
        }
      }
    }
  }
And the logs from the controller are:
172.17.0.1 - - [02/Dec/2022:23:43:49 +0000] "GET /healthz HTTP/1.1" 200 0 "-" "kube-probe/1.24" "-"
Can someone help me understand what I am doing wrong?
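(One detail worth checking: the failing field value ends with an extra "-" after the user agent, while the nginx module's Grok patterns are written for the standard combined log format; the ingress controller's default log format appends fields beyond combined, which would be consistent with the mismatch above.)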

Symfony Monolog using mailer instead of transport

I'm using Symfony 6.0 and I configured Monolog to send errors via email in prod.
However, it is using the async Messenger transport (doctrine) instead of the configured Gmail DSN.
I'm not sure how to make it use one config or the other; Messenger is configured by default, but I don't use it (yet).
Here are my config files:
monolog.yaml
monolog:
    channels:
        - grouped # Deprecations are logged in the dedicated "deprecation" channel when it exists

when@prod:
    monolog:
        handlers:
            main:
                type: fingers_crossed
                action_level: error
                handler: grouped
                excluded_http_codes: [404, 405]
                buffer_size: 50 # How many messages should be saved? Prevent memory leaks
            grouped:
                type: group
                members: [ streamed, deduplicated ]
            streamed:
                type: stream
                path: '%kernel.logs_dir%/%kernel.environment%.log'
                level: debug
            deduplicated:
                type: deduplication
                handler: symfony_mailer
            symfony_mailer:
                type: symfony_mailer
                from_email: 'email@email.com'
                to_email: 'email@email.com'
                subject: 'Company - An Error Occurred! %%message%%'
                level: debug
                formatter: monolog.formatter.html
                content_type: text/html
My mailer.yaml config:
framework:
    mailer:
        dsn: '%env(MAILER_DSN)%'
And my messenger.yaml config:
framework:
    messenger:
        failure_transport: failed

        transports:
            # https://symfony.com/doc/current/messenger.html#transport-configuration
            async:
                dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
                options:
                    use_notify: true
                    check_delayed_interval: 60000
                retry_strategy:
                    max_retries: 3
                    multiplier: 2
            failed: 'doctrine://default?queue_name=failed'
            # sync: 'sync://'

        routing:
            Symfony\Component\Mailer\Messenger\SendEmailMessage: async
            Symfony\Component\Notifier\Message\ChatMessage: async
            Symfony\Component\Notifier\Message\SmsMessage: async
            # Route your messages to the transports
            # 'App\Message\YourMessage': async
In my .env file:
MESSENGER_TRANSPORT_DSN=doctrine://default?auto_setup=0
MAILER_DSN=gmail://email@email.com:password@localhost
So for now it just saves the mail in the database, but I want it to be sent by mail directly.
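(Monolog's symfony_mailer handler hands the message to Mailer, and because Symfony\Component\Mailer\Messenger\SendEmailMessage is routed to the async doctrine transport above, the mail is queued in the database instead of being delivered. One way to get immediate delivery, sketched against the messenger.yaml shown, is to route that message class to a synchronous transport:
framework:
    messenger:
        transports:
            # ... existing async/failed transports ...
            sync: 'sync://'
        routing:
            # deliver mails immediately instead of queueing them
            Symfony\Component\Mailer\Messenger\SendEmailMessage: sync
)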

Envoy: REST gateway + multiple gRPC clusters

I'm trying to configure Envoy as a REST API gateway in front of multiple gRPC servers, and I have a problem with routing. The only way I've found to match an endpoint to a gRPC cluster is to match on a request header (an HTTP request to /first must be resolved by the first cluster, /second by the second):
...
routes:
  - match:
      prefix: "/"
      headers:
        - name: x-service
          exact_match: "first"
    route:
      cluster: first
  - match:
      prefix: "/"
      headers:
        - name: x-service
          exact_match: "second"
    route:
      cluster: second
...
But in this case I need to set the custom 'x-service' header on the client (frontend). This looks like a bad idea, because the frontend shouldn't know anything about the backend infrastructure.
Is there any other way to match an HTTP route to a gRPC service? Or can I set such headers somewhere in Envoy?
The Envoy configuration below registers an HTTP listener on port 51051 that proxies to the helloworld.Greeter service in cluster grpc1 on port 50051 and to the bookstore.Bookstore service in cluster grpc2 on port 50052, using the gRPC route as the match prefix.
This keeps responsibilities cleanly separated, since the client does not need to inject custom HTTP headers to make multi-cluster gRPC routing work.
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
    - name: listener1
      address:
        socket_address: { address: 0.0.0.0, port_value: 51051 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                access_log:
                  - name: envoy.access_loggers.file
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                      path: /dev/stdout
                stat_prefix: grpc_json
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        # NOTE: by default, matching happens based on the gRPC route, and not on the incoming request path.
                        # Reference: https://www.envoyproxy.io/docs/envoy/latest/configuration/http_filters/grpc_json_transcoder_filter#route-configs-for-transcoded-requests
                        - match: { prefix: "/helloworld.Greeter" }
                          route: { cluster: grpc1, timeout: 60s }
                        - match: { prefix: "/bookstore.Bookstore" }
                          route: { cluster: grpc2, timeout: 60s }

  clusters:
    - name: grpc1
      connect_timeout: 1.25s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      dns_lookup_family: V4_ONLY
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: grpc1
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 50051
    - name: grpc2
      connect_timeout: 1.25s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      dns_lookup_family: V4_ONLY
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: grpc2
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 50052
https://github.com/envoyproxy/envoy/blob/main/test/proto/helloworld.proto
syntax = "proto3";

package helloworld;

import "google/api/annotations.proto";

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello(HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      get: "/say"
    };
  }
}
https://github.com/envoyproxy/envoy/blob/main/test/proto/bookstore.proto
syntax = "proto3";

package bookstore;

import "google/api/annotations.proto";
import "google/api/httpbody.proto";
import "google/protobuf/empty.proto";
import "google/protobuf/struct.proto";

// A simple Bookstore API.
//
// The API manages shelves and books resources. Shelves contain books.
service Bookstore {
  // Returns a list of all shelves in the bookstore.
  rpc ListShelves(google.protobuf.Empty) returns (ListShelvesResponse) {
    option (google.api.http) = {
      get: "/shelves"
    };
  }
  ...
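Note that the routes above match on the gRPC method path; for plain REST calls such as GET /say or GET /shelves to be transcoded into those gRPC routes in the first place, the listener also needs the gRPC-JSON transcoder HTTP filter that the NOTE in the route_config refers to. A minimal sketch of that filter (the descriptor path /etc/envoy/proto.pb is an assumption for illustration):
# goes inside the http_connection_manager typed_config, alongside route_config
http_filters:
  - name: envoy.filters.http.grpc_json_transcoder
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
      # assumed path; build with protoc --include_imports --descriptor_set_out from the two protos above
      proto_descriptor: "/etc/envoy/proto.pb"
      services: ["helloworld.Greeter", "bookstore.Bookstore"]
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router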

Connecting to Cosmos DB Graph API from the Gremlin console: "exception=Keys must be scalars"

I'm trying to connect to Cosmos DB through the Gremlin console 3.3.4, following this guide. My remote-secure.yaml is as follows:
hosts: [*****.gremlin.cosmosdb.azure.com]
port: 443
username: /dbs/sample-database/colls/sample-collection
password: ******
connectionPool: {
enableSsl: true}
{ className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { serializeResultToString: true }}:
but when I run :remote connect tinkerpop.server conf/remote-secure.yaml I get the following error:
==>Error during 'connect' - Can't construct a java object
for tag:yaml.org,2002:org.apache.tinkerpop.gremlin.driver.Settings;
exception=Keys must be scalars but found:
<org.yaml.snakeyaml.nodes.MappingNode (tag=tag:yaml.org,2002:map,
values={ key=<org.yaml.snakeyaml.nodes.ScalarNode (tag=tag:yaml.org,2002:str, value=className)>;
value=<NodeTuple
keyNode=<org.yaml.snakeyaml.nodes.ScalarNode (tag=tag:yaml.org,2002:str,
value=className)>; valueNode=<org.yaml.snakeyaml.nodes.ScalarNode
(tag=tag:yaml.org,2002:str,
value=org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0)
>> }{ key=<org.yaml.snakeyaml.nodes.ScalarNode
(tag=tag:yaml.org,2002:str, value=config)>; value=828088650 })>
in 'reader', line 27, column 1:
hosts: [*****.gremlin.cosm ...
Any ideas what I am doing wrong?
Looks like your configuration is mangled. You are missing the serializer key on that last line:
hosts: [*****.gremlin.cosmosdb.azure.com]
port: 443
username: /dbs/sample-database/colls/sample-collection
password: ******
connectionPool: {
enableSsl: true}
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { serializeResultToString: true }}
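With the serializer key in place, :remote connect tinkerpop.server conf/remote-secure.yaml should parse the YAML cleanly and open the connection.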

Connect to a remote RabbitMQ server

I am using a RabbitMQ server to send messages in my Symfony2 application, via OldSoundRabbitMqBundle.
After a successful installation of the RabbitMQ server on my application server, it works fine.
But when I install the RabbitMQ server on a different machine and try to connect to it from my application server, it does not connect.
I have given the connection config as follows:
old_sound_rabbit_mq:
    connections:
        default:
            host: myrabbitserverIp
            port: 80
            user: 'test'
            password: 'test'
            vhost: '/'
            lazy: false
    producers:
        messages:
            connection: default
            exchange_options: {name: 'messages', type: direct}
    consumers:
        messages:
            connection: default
            exchange_options: {name: 'messages', type: direct}
            queue_options: {name: 'messages'}
            callback: message.amqp_consumer
Do I need to change any configuration for the RabbitMQ server?
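(One thing to check first: RabbitMQ's AMQP listener defaults to port 5672, while port 80 is plain HTTP, so the broker will normally not answer AMQP there unless it was deliberately reconfigured. A sketch of the connection block under that assumption, also assuming the firewall on the RabbitMQ machine allows the port:
old_sound_rabbit_mq:
    connections:
        default:
            host: myrabbitserverIp
            port: 5672 # RabbitMQ's default AMQP port; 80 is the HTTP port
            user: 'test'
            password: 'test'
            vhost: '/'
            lazy: false
)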
