What are the Netty Channel state transitions?

Netty channels have multiple states but I am unable to find any
documentation on the actual state transitions. The closest to any
documentation on this that I could find for the Netty 3.2.x system is
here.
I was able to locate the possible states that a channel can be in
here.
However there is nothing that describes the normal transitions that a
channel can make from one state to another. It appears that not all
channels make all possible state transitions.

Different Netty channels do indeed have different state transitions.
In general the possible state transitions for TCP based server
channels are:
OPEN -> ( BOUND -> UNBOUND )* -> CLOSE
If you are using a SimpleChannelHandler subclass in your pipeline,
the equivalent methods for handling the upstream events when one of
these state changes occurs are:
channelOpen
channelBound
channelUnbound
channelClosed
Server channels never move into the CONNECTED state.
Server channels rarely move back into the BOUND state once they
move into the UNBOUND state; however, this appears to be dependent
on the application, so YMMV.
Note that server channels can fire events when a child channel is
opened or closed. These events can occur only after the server
channel is in the BOUND state. When these events are sent
upstream on behalf of the server channel, the following methods on
your SimpleChannelHandler subclass are called:
childChannelOpen
childChannelClosed
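For illustration, here is a minimal sketch of such a handler for a Netty 3.x server channel (the class name and log text are my own invention; the overridden method signatures are the real SimpleChannelHandler ones):

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.ChildChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

// Hypothetical handler that logs a server channel's state transitions.
public class ServerStateLogger extends SimpleChannelHandler {
    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("OPEN: " + e.getChannel());
        super.channelOpen(ctx, e); // keep forwarding the event upstream
    }
    @Override
    public void channelBound(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("BOUND: " + e.getChannel());
        super.channelBound(ctx, e);
    }
    @Override
    public void channelUnbound(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("UNBOUND: " + e.getChannel());
        super.channelUnbound(ctx, e);
    }
    @Override
    public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        System.out.println("CLOSED: " + e.getChannel());
        super.channelClosed(ctx, e);
    }
    @Override
    public void childChannelOpen(ChannelHandlerContext ctx, ChildChannelStateEvent e) throws Exception {
        System.out.println("CHILD OPEN: " + e.getChildChannel());
        super.childChannelOpen(ctx, e);
    }
    @Override
    public void childChannelClosed(ChannelHandlerContext ctx, ChildChannelStateEvent e) throws Exception {
        System.out.println("CHILD CLOSED: " + e.getChildChannel());
        super.childChannelClosed(ctx, e);
    }
}

Note that to observe the server channel's own events, the handler has to be attached to the server channel itself, for example via ServerBootstrap.setParentHandler(new ServerStateLogger()); a handler created by the bootstrap's pipeline factory only sees the child channels' events.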
The possible state transitions for TCP based child and client channels
are:
OPEN -> ( BOUND -> ( CONNECTED -> DISCONNECTED )* -> UNBOUND )* -> CLOSE
It appears that moving into the BOUND state first is not
enforced within the channel code; however, this event is invariably
fired first for both child and client channels within the Netty
framework before the channel is moved into the CONNECTED state.
If you are using SimpleChannelHandler or a subclass thereof in
your pipeline, the equivalent methods are:
channelOpen
channelBound
channelConnected
channelDisconnected
channelUnbound
channelClosed
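As a sketch of how you might watch these events fire in order on a client channel (the host, port and StateLogger handler are placeholders; StateLogger would be a handler like the one above, extended with channelConnected and channelDisconnected overrides):

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

public class ClientStateDemo {
    public static void main(String[] args) {
        ClientBootstrap bootstrap = new ClientBootstrap(
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),    // boss executor
                        Executors.newCachedThreadPool()));  // worker executor
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new StateLogger()); // hypothetical logging handler
            }
        });
        // Expect channelOpen -> channelBound -> channelConnected, in that order.
        ChannelFuture connect = bootstrap.connect(new InetSocketAddress("localhost", 8080));
        connect.awaitUninterruptibly();
    }
}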
A TCP based channel must be in the CONNECTED state before anything
can be read from or written to the channel. This includes server
channels, which can never be read from or written to; that is not much
of a surprise, as server channels are invariably used only for
accepting connections on behalf of the server.
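So if there is any doubt about a channel's current state, a write can be guarded, e.g. (a sketch; channel is an org.jboss.netty.channel.Channel and buffer a ChannelBuffer):

// Writing to a channel that is not in the CONNECTED state will fail,
// so check first when the state is uncertain.
if (channel.isConnected()) {
    channel.write(buffer);
} else {
    // not CONNECTED (or a server channel): reads and writes are impossible
}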
Datagram sockets operate differently from TCP based sockets in that
they can be used to read and write data without actually being
connected (though connecting a datagram socket can be faster, as you
avoid security checks). Datagram channels can effectively use
either of the state transition sequences listed above for TCP based
server and child/client channels.
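For example, here is a minimal sketch of an unconnected Netty 3.x datagram channel (the port and message are placeholders) that writes immediately after binding by supplying the remote address on each write:

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ConnectionlessBootstrap;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.DatagramChannel;
import org.jboss.netty.channel.socket.nio.NioDatagramChannelFactory;
import org.jboss.netty.util.CharsetUtil;

public class DatagramDemo {
    public static void main(String[] args) {
        ConnectionlessBootstrap bootstrap = new ConnectionlessBootstrap(
                new NioDatagramChannelFactory(Executors.newCachedThreadPool()));
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(/* your handlers */);
            }
        });
        // The channel goes OPEN -> BOUND here, but never CONNECTED...
        DatagramChannel channel =
                (DatagramChannel) bootstrap.bind(new InetSocketAddress(0));
        // ...yet it can still write by naming the remote address per write.
        channel.write(
                ChannelBuffers.copiedBuffer("ping", CharsetUtil.UTF_8),
                new InetSocketAddress("localhost", 9999));
    }
}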

Related

Synchronous consumption on Node-RED & MQTT

I am working on an orchestration system using Node-RED and MQTT.
I have decided to decouple event acquisition from event processing. The main objective is to quickly push events onto a queue and process them as soon as possible, in real time.
The system operates like this:
I receive an event on an HTTP REST API,
Push this event onto an MQTT topic,
On another flow, listen for and read events from the MQTT topic,
Launch several actions/processes from this event (taking up to 5-10 seconds).
But I am facing an issue: if I receive 2 related events too quickly, the second event could change the processing of the first event. To solve this, I would like to synchronize my event consumption/processing in order to keep events ordered.
MQTT QoS 2 messages will be delivered in order. How can I simply implement a synchronization paradigm in Node-RED? Is it possible to stop the MQTT client from listening while processing an event?
No, you can't turn the MQTT client off.
And no, there is no concept of synchronisation, mainly because all NodeJS apps are purely single threaded, so 2 things can't actually happen at once; tasks just yield normally when they get to something IO bound.
I'm not sure you actually gain anything by receiving the event via HTTP and then re-consuming it via MQTT.
If you want to queue the incoming events up, you could use the delay node to rate limit the input to something you are sure the processing can manage. The rate limit option has 2 modes, one that drops messages and one that queues them.
Place a simple message queue on the output of the MQTT client and get it to release the next message by sending it trigger=true (or you might be able to get it to release a message at a set rate). I am still looking at these.
https://flows.nodered.org/node/node-red-contrib-simple-message-queue
Here is my proposal to solve the problem: https://flows.nodered.org/flow/3003f194750b0dec19502e31ca234847.
Sync req/res REST API with async workers based on MQTT
The example implements one REST API endpoint, one inject node for making requests to that endpoint, and two workers doing their jobs asynchronously.
It's still in progress, so expect changes. Feel free to contact me and chat about this or other solutions for orchestration along with Node-RED.

How is logical channel separation implemented over one physical connection in RabbitMQ?

I am currently reading in depth about RabbitMQ, as I will have a task working with it. I've read about channels. From RabbitMQ Essentials:
"Channel: This is a logical connection between a publisher/consumer and
a broker. Multiple channels can be established within a single connection."
Can someone explain in a bit more detail how this is accomplished? I can imagine that a channel represents some kind of publisher/consumer, where they use the TCP connection in turns, for example in round-robin fashion: when a logical channel is using the physical connection, the other channels wait until the current one is finished, and then the connection is passed to the next logical channel. Is this assumption true, or am I missing something?
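For what it's worth, the multiplexing is done at the framing level rather than by handing the whole connection to one channel at a time: every AMQP frame on the wire carries the number of the channel it belongs to, so frames from different channels are interleaved on the single TCP connection. A minimal sketch with the RabbitMQ Java client (host and queue names are placeholders):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location
        try (Connection conn = factory.newConnection()) {
            // Two logical channels over the same TCP connection.
            Channel ch1 = conn.createChannel();
            Channel ch2 = conn.createChannel();
            // Each AMQP frame on the wire is tagged with its channel number,
            // so traffic from ch1 and ch2 is interleaved frame by frame
            // rather than one channel blocking the other wholesale.
            ch1.queueDeclare("q1", false, false, false, null);
            ch2.queueDeclare("q2", false, false, false, null);
            ch1.basicPublish("", "q1", null, "hello from ch1".getBytes());
            ch2.basicPublish("", "q2", null, "hello from ch2".getBytes());
        }
    }
}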

How can I have my ZeroMQ app reject additional connections?

I have a C++ 0MQ application that does a bind() and sends messages using a PUSH socket. I want to ensure that these messages get sent to no more than one client.
Is there a way to allow just one client to .connect(), and then reject connections from all subsequent clients?
If your server application uses a ROUTER socket instead of PUSH, it has more control over the connections. The first frame of each message contains the id of the sender, so the server can treat one connection specially.
To make this work, the protocol has to be a little more complicated than a simple PUSH/PULL. One way is for the connections to be DEALER sockets, whose first action is to send an "I'm here" message to the server. The server then knows the id of each connection, and treats the first one specially. Any other connection can be rejected with a "You shouldn't be here" message, which of course the other clients must understand and act on by disconnecting themselves.
After the first "I'm here" message, the clients do not need to send any more messages. They can just sit there waiting for messages from the server, exactly the same as PUSH/PULL.
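A rough sketch of that ROUTER-side logic (shown here in Java with JeroMQ to keep one code language in this write-up; the endpoint and the "WELCOME"/"GO AWAY" bodies are my own invention, and the C++ API has equivalent calls):

import java.util.Arrays;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class SingleClientRouter {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket router = ctx.createSocket(SocketType.ROUTER);
            router.bind("tcp://*:5555"); // assumed endpoint
            byte[] acceptedId = null;
            while (!Thread.currentThread().isInterrupted()) {
                byte[] id = router.recv(0);   // frame 1: sender identity (added by ROUTER)
                byte[] body = router.recv(0); // frame 2: payload ("I'm here", etc.)
                if (acceptedId == null) {
                    acceptedId = id;          // the first client wins
                    router.send(id, ZMQ.SNDMORE);
                    router.send("WELCOME".getBytes(), 0);
                } else if (!Arrays.equals(id, acceptedId)) {
                    router.send(id, ZMQ.SNDMORE);        // latecomers are told to leave;
                    router.send("GO AWAY".getBytes(), 0); // they must disconnect themselves
                }
                // ...otherwise handle traffic from the accepted client...
            }
        }
    }
}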
Yes, there is.
While the genuine ZeroMQ messaging framework has a lot of built-in features, it also allows you to integrate additional abstract layers that can solve this task and many other custom-specific needs. So do not worry that there is no direct API call for doing what you need.
How to do it?
Assuming your formal architecture is given, a viable approach would be to re-use the networking security trick known as "port-knocking".
This trick adds an "introduction" phase on a publicly known aPortToKnockAt. Upon having successfully met the condition(s) -- in your case, being the first client to have asked for / to have completed a .connect() -- another, working, port is used privately for the "transport" phase (and in your case, the original port is closed).
This way your application does not waste either local-side or remote-side resources, as aPortToKnockAt provides a means to restrict the handshake to a single client; forthcoming attempts to knock there will find just a .close()-ed door (and will have to handle that remotely), so a sort of very efficient passive reject is achieved.

What order do I get messages coming to MPI_Recv from MPI_ANY_SOURCE?

I am implementing a hub/servers MPI application. Each of the servers can get tied up waiting for some data, then they do an MPI Send to the hub. It is relatively simple for me to have the hub waiting around doing a Recv from ANY_SOURCE. The hub can get busy working with the data. What I'm worried about is skipping data from one of the servers. How likely is this scenario:
server 1 and server 2 do Sends
hub does a Recv and ends up getting data from server 1
while the hub is busy, server 1 gets more data and does another Send
when the hub does its next Recv, it gets the more recent server 1 data rather than the older server 2 data
I don't need a guarantee that the order the Sends occur is the order ANY_SOURCE processes them (though it would be nice), but if I knew that in practice it would be close to the order they are sent, I might go with the above. However, if it is likely I could skip over data from one of the servers, I need to implement something more complicated, which I think would be this pattern:
servers each do Send's
hub does an Irecv for each server
hub does a Waitany on all server requests
upon completion of one server request, hub does a Test on all the others
of all the Irecv's that have completed, hub selects the oldest server data (there is a timing tag in the server data)
hub communicates with the server it just chose, has it start a new Send, hub a new Irecv
This requires more complex code, and my first effort crashed inside the Waitany call in a way that I'm finding difficult to debug. I am using the Python bindings mpi4py, so I have less control over the buffers being used.
It is guaranteed by the MPI standard that messages between a given pair of processes are received in the order they are sent (messages are non-overtaking). See also this answer to a similar question.
However, there is no guarantee of fairness when receiving from ANY_SOURCE and when there are distinct senders. So yes, it is the responsibility of the programmers to design their own fairness system if the application requires it.

SignalR: managing connections in an external datastore

We are looking for a way to have a background process push out messages to the connected clients.
The approach we are taking is that whenever a new connection is established (OnConnected), we store the connectionId along with some request metadata (for later filtering) in our MongoDB. And when an event happens (triggered from a client or a backend process), a worker role (another background process) will listen for those events (via messaging or whatever), and then, based on the event detail, it will filter the connected clients using the captured metadata.
The approach seems to be OK, but we have a problem when:
the SignalR server goes down
before the server comes back up, the client disconnects (closes the browser or whatever)
the SignalR server goes back up
we are left with connections in MongoDB whose connection status we don't know
I am wondering if there is a better way to do this. The goal is to be able to target a specific connected client to push messages to from a backend service (worker role).
By the way, we are using the scaleout option with the Service Bus backplane.
The following guide on Mapping SignalR Users to Connections goes over several options you have for managing connections.
The approach you are currently taking falls under the "Permanent, external storage" option.
If you want/need to stick with that option, you could add some sort of cleanup procedure that periodically removes connections from your database that have been inactive for longer than a specified time. Of course, you can also proactively remove old entries when a client with matching metadata reconnects with a new connectionId.
I think the better alternative is to use an IUserIdProvider or (single-user?) groups, assuming your filtering requirements aren't too complex. Using either of these options should make it unnecessary to store connectionIds in your database. These options also make it fairly easy to send messages to multiple devices/tabs that a single user could have open simultaneously.
