Issue with queue priority on RabbitMqBundle - Symfony

I am trying to set up a priority queue. Without priority it works just fine, but I need to prioritize messages.
I am using RabbitMqBundle 1.14 and rabbitmq-supervisor-bundle 3.1 with RabbitMQ 3.5.7 (Erlang 18.3).
Here is the config.yml:
old_sound_rabbit_mq:
    connections:
        default:
            host: '127.0.0.1'
            port: 5672
            user: 'xxx'
            password: 'xxx'
            vhost: '/'
            lazy: false
            connection_timeout: 3
            read_write_timeout: 3
            # requires php-amqplib v2.4.1+ and PHP 5.4+
            keepalive: false
            # requires php-amqplib v2.4.1+
            heartbeat: 0
            # requires php_sockets.dll
            use_socket: true # default false
    producers:
        global:
            connection: default
            exchange_options: {name: 'global', type: direct}
            queue_options:
                name: global
    consumers:
        global:
            connection: default
            exchange_options: {name: 'global', type: direct}
            queue_options: {name: 'global', arguments: {'x-max-priority': ['I', 10]}}
            callback: rabbitmq_simu_service
And the message sent to the queue:
$msg = array();
$msg['id'] = $id;
$msg['action'] = 'simu';
$additionalProperties = ['priority' => 4];
$routing_key = '';
$this->container->get('old_sound_rabbit_mq.global_producer')->publish(serialize($msg), $routing_key, $additionalProperties);
I get the following error when sending the message:
PRECONDITION_FAILED - inequivalent arg 'x-max-priority' for queue 'global' in vhost '/': received none but current is the value '10' of type 'signedint'
I also tried this in config.yml:
queue_options: {name: 'global', arguments: {'x-max-priority': 10}}
In this case I got no error, but messages are not consumed.
Does anyone know how to send priority messages?

The message you have received is the error that occurs when you try to declare a queue that already exists with different parameters. You must delete the queue first (for example through the RabbitMQ management UI), then run your program again.

PRECONDITION_FAILED - inequivalent arg 'x-max-priority' for queue 'global' in vhost '/': received none but current is the value '10' of type 'signedint'
That message means that the global queue has already been created with a max priority of 10, but something else is now trying to declare it with no priority at all. Review both your producer and your consumer to ensure that whenever they declare this priority queue they use exactly the same x-max-priority argument. Note that in the config above, the producer's queue_options declare global without any arguments while the consumer declares it with x-max-priority, which is exactly this kind of mismatch; see the sketch below.
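For illustration, a sketch of the producer side with matching queue arguments, based on the config in the question (untested; treat it as an assumption rather than a verified fix):

old_sound_rabbit_mq:
    producers:
        global:
            connection: default
            exchange_options: {name: 'global', type: direct}
            queue_options:
                name: global
                arguments: {'x-max-priority': ['I', 10]}

Once both sides declare the same arguments (or once the queue has been deleted and redeclared consistently), the PRECONDITION_FAILED error should disappear.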
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Related

The AWS Lambda returns "Error: connect ETIMEDOUT **.****.***.***:443"

I have two services, an Admin Panel (Laravel) and an Online Shop (WooCommerce). The relation between these two services is implemented with AWS Lambda.
When I try to send a product-update request from my Admin Panel to the online shop, from time to time the lambda can't connect to the WooCommerce API.
When the system fails to update the product, the lambda returns the error "Error: connect ETIMEDOUT".
I originally thought that WordPress didn't have enough time for the update process, and decided to increase the lambda's timeout (60000 ms). But it didn't help; I still found ETIMEDOUT errors in the logs.
By the way, the time between sending the update request to WooCommerce and the error appearing is 2 minutes. If I understand correctly, the lambda had enough time to get the answer from WooCommerce.
Another strange thing: according to the lambda's logs, at the moment the lambda got the error, the WooCommerce API was available. It seems as if something cuts off the connection while the lambda is sending the request.
My question is: why can't the lambda send the request to the WooCommerce API, and why does this happen only from time to time?
P.S. Below I added an example of the lambda's logs.
The log at the start of sending the update request:
2021-08-14T18:23:48.692Z b228455b-45a8-5cbf-8160-1cc INFO Inside edit Online List {
status: '1',
*********
is_delete: 0,
name: 'Omega Speedmaster Moonwatch Chronograph 42mm ',
price_on_request: 0,
on_sale: 0
}
The log with the error:
2021-08-14T18:25:58.299Z b228455b-45a8-5cbf-8aae6 INFO WooCommerce editOnlineStock err::: { Error: connect ETIMEDOUT ***.****.***.***:443
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
errno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
address: '***.****.***.***',
port: 443,
config:
{ url:
'https://domain/wp-json/wc/v3/products/*****',
method: 'put',
params: {},
data:
'{"name":"Omega Speedmaster Moonwatch Chronograph 42mm ","type":"simple"***********',
headers:
{ Accept: 'application/json',
'Content-Type': 'application/json;charset=utf-8',
'User-Agent': 'WooCommerce REST API - JS Client/1.0.1',
'Content-Length': 681 },
auth:
{ username: 'ck_************',
password: 'cs_************' },
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 60000,
adapter: [Function: httpAdapter],
responseType: 'json',
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
****************************
Is your lambda associated with a VPC? If so, i) check whether the VPC has a route out to the internet via a NAT gateway/instance, and ii) examine the VPC flow logs for errors.

Error: connect ECONNREFUSED 127.0.0.1:8080

I am using a WordPress website with Local by Flywheel (url: xyz.local). I created a new Gatsby site and added gatsby-source-woocommerce. I also generated a consumer key and consumer secret from the WooCommerce settings and added them to the api_keys in the config file.
When I run gatsby develop, I get this error:
========== WARNING FOR FIELD products ===========
The following error status was produced: Error: connect ECONNREFUSED 127.0.0.1:8080
================== END WARNING ==================
08:19:23.204Z > gatsby-source-woocommerce: Fetching 0 nodes for field: products
08:19:23.206Z > gatsby-source-woocommerce: Completed fetching nodes for field: products
warn
========== WARNING FOR FIELD products/categories ===========
The following error status was produced: Error: connect ECONNREFUSED 127.0.0.1:8080
================== END WARNING ==================
08:19:23.213Z > gatsby-source-woocommerce: Fetching 0 nodes for field: products/categories
08:19:23.215Z > gatsby-source-woocommerce: Completed fetching nodes for field: products/categories
warn
========== WARNING FOR FIELD products/attributes ===========
The following error status was produced: Error: connect ECONNREFUSED 127.0.0.1:8080
================== END WARNING ==================
Can someone please tell me if I missed anything or did something wrong?
I solved it. The problem is with the plugin.
In the config options of gatsby-source-woocommerce, comment out everything after fields, i.e. after commenting it looks like this:
{
    resolve: "@pasdo501/gatsby-source-woocommerce",
    options: {
        // Base URL of WordPress site
        api: "wordpress.domain",
        // set to false to not see verbose output during build
        // default: true
        verbose: true,
        // true if using https, otherwise false
        https: false,
        api_keys: {
            consumer_key: <key>,
            consumer_secret: <secret>,
        },
        // Array of strings with fields you'd like to create nodes for...
        fields: ["products", "products/categories", "products/attributes"],
    },
},
Then head to the @pasdo501/gatsby-source-woocommerce folder (in node_modules) -> gatsby-node.js,
change api_version = "wc/v3" to "wc/v2" and
change wpAPIPrefix = null to "wp-json",
and save it.
Voila.
No need to change the package. You can do this instead:
add /index.php to the end of api.
set wpAPIPrefix to wp-json.
set query_string_auth to true (I'm not sure if this one is necessary).
{
    resolve: '@pasdo501/gatsby-source-woocommerce',
    options: {
        api: 'pro.com/index.php',
        https: true,
        verbose: true,
        api_keys: {
            consumer_key: `ck_...........`,
            consumer_secret: `cs_.................`,
        },
        fields: ['products', 'products/categories', 'products/attributes', 'products/tags'],
        wpAPIPrefix: 'wp-json',
        query_string_auth: true,
        api_version: 'wc/v3',
        // per_page: 100,
        // encoding: 'utf8',
        // axios_config: {}
    }
}

Symfony 4 enable logging with Monolog's Redis handler

I have a working ELK stack connected to Redis.
I also have a working stateless Symfony 4 application and I want to send all the production logs to my Redis.
I know Monolog has a Redis handler, but I don't know how I'm supposed to tweak the config/prod/monolog.yaml file to accomplish this, or if there's another approach.
This is how it looks right now:
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
            excluded_http_codes: [404]
        nested:
            type: stream
            path: "php://stderr"
            level: debug
        console:
            type: console
            process_psr_3_messages: false
            channels: ["!event", "!doctrine"]
        deprecation:
            type: stream
            path: "php://stderr"
        deprecation_filter:
            type: filter
            handler: deprecation
            max_level: info
            channels: ["php"]
The approach I took was to first install the Predis client:
composer require predis/predis
Then create a custom service class that extends the RedisHandler class that comes with the Monolog package:
namespace App\Service\Monolog\Handler;

use Monolog\Handler\RedisHandler;
use Monolog\Logger;
use Predis\Client as PredisClient;

class Redis extends RedisHandler
{
    public function __construct($host, $port = 6379, $level = Logger::DEBUG, $bubble = true, $capSize = false)
    {
        $predis = new PredisClient("tcp://$host:$port");
        $key = 'logstash';

        parent::__construct($predis, $key, $level, $bubble, $capSize);
    }
}
Next, activate the service we just created in the services.yml config file:
services:
    monolog.handler.redis:
        class: App\Service\Monolog\Handler\Redis
        arguments: ['%redis.host%']
Be sure the parameter redis.host is set and points to your Redis server. In my case, the parameter value is the IP of my Redis server.
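For reference, a minimal sketch of defining that parameter (the IP below is a hypothetical placeholder):

parameters:
    redis.host: '10.0.0.5' # hypothetical IP; point this at your own Redis server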
I added other parameters to the class, like the port and the log level. You can set them at the moment of instantiating your service, just like the host parameter.
Finally, configure your custom log handler service in your monolog.yaml config file. In my case, I need it only for the production logs, with the config as follows:
handlers:
    custom:
        type: service
        id: monolog.handler.redis
        level: debug
        channels: ['!event']

rabbitmq-bundle - symfony3 - how to configure a topic exchange and queues?

I can't find a great configuration for old sound rabbitmq bundle to deal with topics and wildcards.
All I want is a single exchange that posts to multiple queues using wildcards.
Let's say, for example, I have an exchange named user.update, and I want to post the same message on user.update.address and user.update.profile for a microservice strategy.
Do you know how to set this up in the configuration file?
Thanks for reading.
Just because you are looking for
... great configuration for old sound rabbitmq bundle ...
visit http://www.inanzzz.com/ and search for "rabbitmq" which will give you what you wish for.
To address your question, you can use the config below (I haven't tested it, but it should be fine). However, you still need to write the whole functionality (classes, consumers, producers, etc.), so follow this example: RabbitMQ topic example with Symfony including 1 Producer & 1 Exchange & 2 Queues & N Workers & 2 Consumers
old_sound_rabbit_mq:
    connections:
        default:
            host: %rabbitmq.host%
            port: %rabbitmq.port%
            user: %rabbitmq.user%
            password: %rabbitmq.pswd%
            vhost: /
            lazy: true
    producers:
        user_update_producer:
            connection: default
            exchange_options: { name: user.update, type: topic }
    consumers:
        user_update_consumer:
            connection: default
            exchange_options: { name: user.update, type: topic }
            queue_options:
                name: user_update_queue
                routing_keys:
                    - 'user.update.address'
                    - 'user.update.profile'
            callback: your_application.consumer.user_update_consumer
The flow is: user.update (P) -> user.update (E) -> [user.update.address & user.update.profile] -> user_update_queue (Q)
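For illustration, publishing into this setup might look like the sketch below (the payload is hypothetical; the service id follows the bundle's old_sound_rabbit_mq.<producer_name>_producer convention, matching the old_sound_rabbit_mq.global_producer service used earlier on this page):

// hypothetical payload for an address update
$msg = ['id' => 42, 'action' => 'update_address'];
// the routing key determines which topic bindings match
$this->container
    ->get('old_sound_rabbit_mq.user_update_producer_producer')
    ->publish(serialize($msg), 'user.update.address');

Note that with the config above both routing keys end up in the single user_update_queue; to fan the same message out to two separate queues, declare a second consumer with its own queue_options and give each queue only the routing key it cares about.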

Symfony & ELK: which encoding should I use with gelf?

I'm trying to monitor a Symfony app with the ELK stack.
I'm shipping my logs to Logstash with the following configuration:
monolog:
    handlers:
        main:
            type: gelf
            publisher:
                hostname: elk-host
                port: 10514
            formatter: monolog.formatter.gelf_message
            level: INFO
On Kibana, I see that I receive the logs, but the message is encoded in a strange way; here is an example of what Kibana displays:
x\x9CMP\xC1n\x830\f\xFD\u0015+\xA7V\xAAB\xA1\f(\xD7j;Nڴ\xDD\"Ui0`)\u0004D\(\x9A\xF6\xEF\v\x9B\xD6\xEDf\xBFg\xFB\xF9\xBD\u000F1\xE1\xE8\xA9w\xA2\u0014\xB1܋\x9Dh{ϡ\u0019\xFA\x915Y\xCF^\xDA\xDEh\e\u0018\xDF\u0006\xECܡ\xF7\xBA\xC10\xF2\x8A5\x8E\xE8\f\xB9\u0006\xB8EP\xC2\xF4#*\u0001xct\xEBQ\xB8,#\xEC\xC1\xE9\u000EaSaM\u000E\xAB\u0015l\x90\x9F\u0003\xB6\xD9n\x81
Here is my Logstash configuration file:
input {
    gelf {
        codec => "json"
    }
    syslog {
        port => 10514
        type => "syslog"
    }
}
filter {
}
output {
    elasticsearch {}
}
I tried to add an encoding option (charset => "UTF-8"), but it was no better.
Also, why are my logs displayed with the "syslog" type instead of the "gelf" type that I specified in the monolog config?
You're sending GELF (JSON) output to a syslog listener; you need to change the configuration so it goes to the GELF port rather than the syslog port.
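A minimal sketch of the fix, assuming the Logstash gelf input listens on its default UDP port 12201 (an assumption; use whatever port your gelf input is actually bound to):

input {
    # gelf gets its own port, distinct from the syslog listener
    gelf {
        port => 12201
    }
    syslog {
        port => 10514
        type => "syslog"
    }
}

And on the Symfony side, point the publisher at that port instead of 10514:

monolog:
    handlers:
        main:
            type: gelf
            publisher:
                hostname: elk-host
                port: 12201

The gelf input also decompresses the zlib-compressed GELF payload (the x\x9C prefix in your Kibana example is a zlib header), which is why the syslog listener showed it as garbled bytes.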
