Symfony2 and RabbitMqBundle. Can't publish a message

I am trying to use the Symfony2 framework with RabbitMqBundle from here.
I am sure that my RabbitMQ server is up and running, and I have written the configuration and the publisher code according to the docs delivered on GitHub. I also have a queue named according to the Symfony configuration file. Unfortunately, I can't add any message to the queue.
Does anyone have a clue what is wrong?
Thanks in advance for any suggestions.

Well... try this simple example:
# app/config/config.yml
old_sound_rabbit_mq:
    connections: '%rabbitmq_connections%'
    producers:   '%rabbitmq_producers%'
    consumers:   '%rabbitmq_consumers%'

parameters:
    # connection parameters
    rabbitmq_connections:
        default: { host: 'localhost', port: 5672, user: 'guest', password: 'guest', vhost: '/' }

    # define producers
    rabbitmq_producers:
        sample:
            connection: default
            exchange_options: { name: 'exchange_name', type: direct, auto_delete: false, durable: true }

    # define consumers
    rabbitmq_consumers:
        sample:
            connection: default
            exchange_options: { name: 'exchange_name', type: direct, auto_delete: false, durable: true }
            queue_options: { name: 'sample', auto_delete: false }
            callback: rabbitmq.callback.service
Then you should define your callback service. Feel free to put it in app/config/config.yml:
services:
    rabbitmq.callback.service:
        class: RabbitMQ\Callback\Service
And yes, you should write this callback service yourself. Here is a simple implementation; it should be enough to understand the flow and to check whether it works for you.
<?php

namespace RabbitMQ\Callback;

use OldSound\RabbitMqBundle\RabbitMq\ConsumerInterface;
use PhpAmqpLib\Channel\AMQPChannel;
use PhpAmqpLib\Message\AMQPMessage;

class Service implements ConsumerInterface
{
    public function execute(AMQPMessage $msg)
    {
        var_dump(unserialize($msg->body));
    }
}
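If messages reach the queue but never seem to be acknowledged, note that more recent versions of RabbitMqBundle let execute() return an acknowledgement flag. A minimal sketch of such an execute() method (replacing the one above), assuming your installed bundle version exposes the ConsumerInterface constants:
    public function execute(AMQPMessage $msg)
    {
        $payload = unserialize($msg->body);

        if (false === $payload) {
            // Malformed body: drop the message instead of requeueing it forever.
            return ConsumerInterface::MSG_REJECT;
        }

        var_dump($payload);

        // Acknowledge the message so RabbitMQ removes it from the queue.
        return ConsumerInterface::MSG_ACK;
    }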
Then you should start the RabbitMQ server, run the consumer, and check whether the new exchange and queue were added.
To run the test consumer:
app/console rabbitmq:consumer sample --route="sample"
In your controller (where you want to send a message to RabbitMQ), put the following code:
# get producer service
$producer = $this->get('old_sound_rabbit_mq.sample_producer');
# publish message
$producer->publish(serialize(array('foo'=>'bar','_FOO'=>'_BAR')), 'sample');
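For completeness, here is roughly how that looks inside a full controller action. This is just a sketch; the bundle, controller, and route names are made up for illustration, assuming a standard Symfony2 controller that pulls the producer from the container:
<?php

namespace Acme\DemoBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class PublishController extends Controller
{
    public function publishAction()
    {
        // get producer service (the name comes from the 'sample' producer key in the config)
        $producer = $this->get('old_sound_rabbit_mq.sample_producer');

        // publish message with the 'sample' routing key, matching the consumer's --route
        $producer->publish(serialize(array('foo' => 'bar', '_FOO' => '_BAR')), 'sample');

        return new Response('Message published');
    }
}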
Hope it's more or less clear and helps you with RabbitMQ.
PS: it's easier to debug if you have the RabbitMQ management plugin. If you don't, use console commands like rabbitmqctl to check queues/exchanges/consumers and so on.
It would also be nice to see your producer/consumer configuration, as well as your callback service code.

I also had some issues sending messages with this bundle; I recommend you try SonataNotificationBundle instead.
You can also install the RabbitMQ management plugin to see the queued messages.

Related

Enabling account deletion on nats server

I was trying to prune some users from my nats server by doing:
nsc push --system-account SYS -u nats://localhost:4222 -P
but I got the following error:
server nats-comm-2 responded with error: delete accounts request by SOME_KEY_VALUE failed - delete must be enabled in server config
The meaning of the error is pretty obvious when I examine the help documentation for nsc push -P:
Only works with nats-resolver enabled nats-server. Mutually exclusive of account-removal/diff
But I'm not sure how to enable this in my nats server config. How do I allow for account pruning?
I found documentation in the resolver section, here, showing that I could add allow_delete: true to the config, but since this YAML configuration uses camelCase, I had to write it as allowDelete: true instead.
nats:
  auth:
    enabled: true
  resolver:
    type: full
    allowDelete: true

Can you define a consumer command that only consumes a specific queue of a multi-queue transport?

I have the impression that I am missing something while implementing AMQP services with RabbitMQ and Symfony Messenger.
From a RabbitMQ perspective, consumers (also known as workers) consume from queues.
From the Symfony Messenger documentation, one "transport" is linked to one consumer. This is by design, as shown by the command bin/console messenger:consume transport. So, for each "handler", you have to configure a dedicated transport in messenger.yaml to be able to allocate a specific number of processes (via Supervisor, for instance, by configuring the numprocs variable).
As stated, I found a way to configure that use case using two different transports. Yet it looks a bit too complicated to me:
framework:
    messenger:
        transports:
            one_transport:
                dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
                options:
                    exchange:
                        name: my_exchange
                        type: direct
                    queues:
                        one_queue:
                            binding_keys:
                                - one_binding_key
            another_transport:
                dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
                options:
                    exchange:
                        name: my_exchange
                        type: direct
                    queues:
                        another_queue:
                            binding_keys:
                                - another_binding_key
        routing:
            'App\MessageBroker\Message\OneNotification': one_transport
            'App\MessageBroker\Message\AnotherNotification': another_transport
# In action 1
$this->dispatchMessage(
    new OneNotification(), [new AmqpStamp('one_binding_key')]
);

# In action 2
$this->dispatchMessage(
    new AnotherNotification(), [new AmqpStamp('another_binding_key')]
);
# SF consumer 1
[program:messenger-consume]
command=php bin/console messenger:consume one_transport
numprocs=4
# SF consumer 2
[program:messenger-consume]
command=php bin/console messenger:consume another_transport
numprocs=1
Is there no other way to achieve this?

Symfony how to use Mercure with Messenger + RabbitMQ?

I am looking to use Mercure with RabbitMQ. This is the first time I have used either Mercure or RabbitMQ, so I am not yet very good with them.
Here is where I am:
I've installed Mercure and Messenger.
messenger.yaml:
framework:
    messenger:
        # Uncomment this (and the failed transport below) to send failed messages to this transport for later handling.
        failure_transport: failed

        transports:
            # https://symfony.com/doc/current/messenger.html#transport-configuration
            async: '%env(MESSENGER_TRANSPORT_DSN)%'
            failed: '%env(MESSENGER_TRANSPORT_FAILED_DSN)%'
            # sync: 'sync://'

        routing:
            # Route your messages to the transports
            # 'App\Message\YourMessage': async
.env:
MERCURE_PUBLISH_URL=http://localhost:3000/.well-known/mercure
MERCURE_JWT_TOKEN=aVerySecretKey
MESSENGER_TRANSPORT_DSN=amqp://bastien:mypassword@localhost:5672/%2f/messages
MESSENGER_TRANSPORT_FAILED_DSN=amqp://bastien:mypassword@localhost:5672/%2f/failed
And in my controller I simulated 50 pings on a URL of my local app:
/**
 * @Route("/ping", name="ping", methods={"POST"})
 */
public function ping(MessageBusInterface $bus)
{
    for ($i = 0; $i <= 50; $i++) {
        $update = new Update("http://monsite.com/ping", "[]");
        $bus->dispatch($update);
    }

    return $this->redirectToRoute('home');
}
I have successfully started my Mercure instance as well as the Messenger worker, which is connected to my RabbitMQ.
But when I test sending the pings, it works, but without going through RabbitMQ. Did I miss something? I suspect the routing part of my messenger.yaml, but I don't know what to put there if that is the case.
By default, messages are handled synchronously in Messenger.
You will need to configure the Update message in messenger.yaml to use the async transport:
        routing:
            'Symfony\Component\Mercure\Update': async
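If you want to check whether a dispatched Update actually went through a transport rather than being handled synchronously, one option is to inspect the envelope returned by dispatch(). This is only a debugging sketch, assuming the standard Messenger SentStamp is available in your Symfony version and that $bus is the MessageBusInterface injected into the controller (as in the ping action above):
use Symfony\Component\Mercure\Update;
use Symfony\Component\Messenger\Stamp\SentStamp;

// $bus is the MessageBusInterface injected into the controller action
$envelope = $bus->dispatch(new Update("http://monsite.com/ping", "[]"));

// If routing worked, the envelope carries a SentStamp naming the transport ('async').
$sent = $envelope->last(SentStamp::class);
if (null === $sent) {
    // No SentStamp: the message was handled synchronously, so re-check the routing section.
}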

Jenkins Artifactory plugin: a script in "Declarative Pipeline Syntax" – how to bypass the configured proxy?

https://www.jfrog.com/confluence/display/RTF/Working+With+Pipeline+Jobs+in+Jenkins
This article attempts to describe all the details both for "Declarative Pipeline Syntax" and for "Scripted Pipeline Syntax",
but for "Declarative …" it does not describe how to bypass the proxy, whereas for "Scripted …" it does: server.bypassProxy = true.
So how would I specify bypassing the proxy in a Jenkins pipeline script with "Declarative Pipeline Syntax"?
In rtServer add bypassProxy: true:
rtServer (
    id: "Artifactory-1",
    url: "http://my-artifactory-domain/artifactory",
    username: "user",
    password: "password",
    bypassProxy: true
)

Why does Meteor Up (MUP) fail on authentication?

I am currently trying to deploy a Meteor project to an external server for the first time. The server is hosted by DigitalOcean, running Ubuntu 16.04, and has an SSH key set up for password-free access.
The error I am getting from MUP is:
[159.203.165.13] - Setup Docker
events.js:165
throw er; // Unhandled 'error' event
^
Error: All configured authentication methods failed
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:290:17)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
at SSH2Stream.emit (events.js:180:13)
at parsePacket (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:3647:10)
at SSH2Stream._transform (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:551:13)
at SSH2Stream.Transform._read (_stream_transform.js:185:10)
at SSH2Stream._read (/usr/lib/node_modules/mup/node_modules/ssh2-streams/lib/ssh.js:212:15)
at SSH2Stream.Transform._write (_stream_transform.js:173:12)
at doWrite (_stream_writable.js:410:12)
at writeOrBuffer (_stream_writable.js:396:5)
at SSH2Stream.Writable.write (_stream_writable.js:294:11)
at Socket.ondata (_stream_readable.js:651:20)
at Socket.emit (events.js:180:13)
at addChunk (_stream_readable.js:274:12)
at readableAddChunk (_stream_readable.js:261:11)
at Socket.Readable.push (_stream_readable.js:218:10)
Emitted 'error' event at:
at tryNextAuth (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:292:12)
at SSH2Stream.onUSERAUTH_FAILURE (/usr/lib/node_modules/mup/node_modules/nodemiral/node_modules/ssh2/lib/client.js:469:5)
[... lines matching original stack trace ...]
at Socket.Readable.push (_stream_readable.js:218:10)
At this point I have tried several solutions involving the mup file as per other recommendations such as:
1) Adding in a password - Gives the exact same error as though the change didn't occur.
2) Adding in the same SSH key that I use for authentication to the server as per digital ocean - Says 'privateKey value does not contain a (valid) private key'. I have tried both the key that is used for authentication to the server and every other key I could find short of generating a new one just for Meteor's use.
3) Leaving both blank and allowing it to 'try' ssh-agent - pretends it doesn't know what ssh-agent is and throws an error saying the same thing as when I use a password.
I have looked through and followed the same instructions in the following article: http://meteortips.com/deployment-tutorial/digitalocean-part-1/
This article assumes that there are only two possible states: one where an SSH key has NOT been used or set up, so it needs to be generated, and one where an SSH key exists and is set up exactly where they expect it. Unfortunately I seem to be in a different situation. I generated a key using PuTTY prior to setting up the D.O. server and created the droplet using that. After creation, the file did not exist. The only thing in the ~/.ssh/ directory was a single file named "authorized_keys" that held the key I would use to connect to the server. That file cannot be used, nor can any file on the server in the other SSH key locations. I also tried copying the file directly onto the server, to no avail.
In some vain hope of finding a solution I also tried running these same commands in both the Meteor build bundle and the source code folder. Neither worked. I should mention that although this is the only article I still have open to try for a solution, I have tried every one I could find using MUP.
If anyone can point me in the right direction with this so I can stop flailing wildly in the dark I would be incredibly grateful.
Edit: As requested, below is the current mup.js file with the credentials removed:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '111.111.111.11',
      username: 'root',
      // ssh-agent: '/home/Meteor/MeteorKey.pem'
      pem: '~/.ssh/id_rsa.pub'
      // password: 'password1'
      // or neither for authenticate from ssh-agent
    }
  },

  app: {
    // TODO: change app name and path
    name: 'app-name',
    path: '../',

    servers: {
      one: {},
    },

    buildOptions: {
      serverOnly: true,
    },

    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: 'http://www.app-name.com',
      MONGO_URL: 'mongodb://mongodb/meteor',
      MONGO_OPLOG_URL: 'mongodb://mongodb/local',
    },

    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: 'abernix/meteord:node-8.4.0-base',
    },

    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },

  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  },

  // (Optional)
  // Use the proxy to setup ssl or to route requests to the correct
  // app when there are several apps
  // proxy: {
  //   domains: 'mywebsite.com,www.mywebsite.com',
The error message you are receiving:
Error: All configured authentication methods failed
means that the SSH connection is failing, so the credentials you are using (a pity you removed them from the config) are not working. Try a command-line ssh using these same credentials and troubleshoot that; once you can ssh into the server, mup should be able to do its work.
You can get more information out of ssh by specifying one or more -v parameters, e.g.:
ssh -v -v my_user@remote.com
and it will give you information about the authentication methods it is trying as it goes through them. This will help you narrow down the problem.
