I have previously used Tornado behind Nginx in production, even with multiple Tornado instances, but now that I am implementing WebSockets in Tornado there are a few things I am unable to understand.
NOTE: Simply running WebSockets with Tornado is not my problem; I can get it to work and have used it in many projects fairly easily, but always as a single Tornado instance, with or without a reverse proxy or load balancer (Nginx) in front, and both setups work fine.
With that said, my questions are about how to handle multiple Tornado WebSocket instances behind a load balancer. Let's look at a use case:
Use case: 2 Tornado WebSocket instances, T1 and T2, behind Nginx, and 3 clients (browsers), let's say C1, C2, and C3.
                                   |-------|
C1 ----------------|               |  T1   |
                   |               |_______|
C2 ----------------|---> Nginx --->|
                   |               |-------|
C3 ----------------|               |  T2   |
                                   |_______|
Step 1 - C1 initiated a WebSocket connection at location w.x.y.z:80. Nginx load balanced the connection to, let's say, T1, which triggered the open event in T1's (Tornado's) WebSocket handler. Now T1 knows of an open WebSocket connection object; let the object be w1.
Step 2 - Now C2 initiated a WebSocket connection, same as above, but this time Nginx load balanced it to T2, which triggered the open event of T2's (Tornado's) WebSocket handler. Now T2 knows of an open WebSocket connection object; let the object be w2.
Step 3 - Similarly, C3 initiated a WebSocket connection, and Nginx load balanced it to T1, which now has a newly opened WebSocket connection object; let the object be w3.
Step 4 - Now C1, our first client, sent a message via the browser's ws.send() method, where ws is the browser-side (client-side) WebSocket object used to create the WebSocket connection in Step 1, and the message reached Nginx.
Now here is my question.
Does Nginx load balance it to T2 or does it know that it should proxy
the request to T1 itself?
Suppose it sends it to T2. Now T2 doesn't have the w1 object, since it is with T1, so when T2's WebSocket handler tries to process the request, which would normally trigger the on_message handler,
what happens in this situation? Does T2 raise an exception, or what exactly happens?
Also, how should the WebSocket connections be managed when there are multiple Tornado instances running behind a load balancer? How can this be solved?
Suppose we use Redis as a solution: what would Redis actually store? The WebSocket objects? Or what exactly should it store to let the instances work together properly, if Redis is one of the solutions you are going to propose?
Messages are always attached to a connection; they are not load-balanced separately. So in step 4, nginx sends the message on the C1 connection to T1, not T2.
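Regarding the Redis part of the question: Redis would not (and cannot) store the WebSocket objects themselves; each instance keeps its own open connections in its own process memory, and Redis only carries the data that has to cross instances. A minimal sketch of that idea, assuming redis-py and a simple broadcast channel (the channel name, ports, and polling interval are illustrative, not prescribed by the answer above):

import redis
import tornado.ioloop
import tornado.web
import tornado.websocket

redis_client = redis.Redis()       # assumed local Redis instance
local_clients = set()              # connections opened on THIS instance only (w1/w3 on T1, w2 on T2)

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        local_clients.add(self)

    def on_message(self, message):
        redis_client.publish("broadcast", message)   # publish so every instance (including this one) sees it

    def on_close(self):
        local_clients.discard(self)

pubsub = redis_client.pubsub()
pubsub.subscribe("broadcast")

def pump_redis():
    # Relay anything published by any instance to the sockets held locally.
    msg = pubsub.get_message()
    while msg:
        if msg["type"] == "message":
            for client in list(local_clients):
                client.write_message(msg["data"].decode())
        msg = pubsub.get_message()

app = tornado.web.Application([(r"/ws", WSHandler)])
app.listen(8881)                   # e.g. T1 on 8881, T2 on 8882, both in the same nginx upstream
tornado.ioloop.PeriodicCallback(pump_redis, 50).start()
tornado.ioloop.IOLoop.current().start()

If an instance also needs to reach one specific client connected elsewhere, Redis would additionally hold (or publish) lightweight routing data such as "C1 is connected to T1", never the connection object itself.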
Related
I want to write a proxy server for SMB2 based on Asio. I am considering using a cumulative buffer to receive a full message before doing the business logic, and introducing a queue for multiple messages, which would force me to synchronize the following resource accesses:
the read and write operations on the queues, because the two upstream/downstream queues are shared between the frontend client and the backend server,
the backend connection state, because reads on the frontend won't wait for the completion of connect or write operations on the backend server before the next read, and
the resource release when an error occurs or a connection is closed normally, because the read and write handlers registered with the event loop on the same socket may not have completed yet, and an asynchronous connect operation can be initiated in worker threads while its partner socket has already been closed, and all of those may run concurrently.
If I don't use the two queues, only one handler (read, write, or connect) is registered with the event loop on the proxy flow for a request at a time, so there is no need to synchronize.
From the application level:
I think a cumulative buffer is generally a must in order to process a full message packet (e.g. a message in the format | length (4 bytes) | body (variable) |) assembled over multiple related API calls (system APIs: recv or read; library APIs: asio::read or asio::async_read).
And then, is it necessary to use a queue to save messages received from clients that are pending to be forwarded to the backend server?
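For reference, here is a minimal sketch of that cumulative-buffer idea (written in Python rather than Asio, purely for illustration; the 4-byte big-endian length prefix and the class/field names are assumptions made for the example): complete messages are peeled off the accumulated bytes and parked in a pending queue for forwarding.

import struct
from collections import deque

class MessageAssembler:
    HEADER = 4                                  # assumed 4-byte big-endian length prefix

    def __init__(self):
        self.buffer = bytearray()               # cumulative buffer across partial reads
        self.pending = deque()                  # complete messages waiting to be forwarded

    def feed(self, data: bytes):
        self.buffer.extend(data)
        while len(self.buffer) >= self.HEADER:
            (length,) = struct.unpack_from(">I", self.buffer, 0)
            if len(self.buffer) < self.HEADER + length:
                break                           # body not fully received yet
            body = bytes(self.buffer[self.HEADER:self.HEADER + length])
            del self.buffer[:self.HEADER + length]
            self.pending.append(body)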
Take the following diagram from http://www.partow.net/programming/tcpproxy/index.html; it turned out to reflect ideas similar to mine (the upstream concept is the same as in NGINX upstream servers).
                          ---> upstream --->
+----------+              +-------------+              +---------------+
|          |              |             |              |               |
|  Client [x]--->----->--[x] TCP Proxy [x]--->----->--[x] Remote Server|
|          |              |   Server    |              |               |
|         [x]<---<-----<-[x]           [x]<---<-----<-[x]              |
|          |              |             |              |               |
+----------+              +-------------+              +---------------+
                          <--- downstream <---
    Frontend                                                Backend
For a request-response protocol without a message ID field (useful for matching each reply message to the corresponding request message), such as HTTP, I can use one single buffer per connection for the two downstream and upstream flows and then continue processing the next request (note that for the first request a connection to the server has to be established, so it is slower than the subsequent ones), because clients always wait for the response after sending a request (they may block or get notified by an asynchronous callback function).
However, for a protocol in which clients don't wait for the response before sending the next request, a message ID field can be used to uniquely identify or distinguish request-reply pairs, for example JSON-RPC 2.0, SMB2, etc. If I strictly complete the two flows above before issuing the next read (i.e. without calling read, letting TCP data accumulate in the kernel), the subsequent requests from the same connection cannot be processed in a timely manner. After reading What happens if one doesn't call POSIX's recv "fast enough"? I think it can be done.
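For that pipelined case, the bookkeeping can be as small as a map from message ID to the pending request's context, so out-of-order replies from the backend can still be matched without serializing the two flows. A sketch under assumptions (plain Python, hypothetical names, not tied to SMB2's actual header layout):

import itertools

class PendingRequests:
    def __init__(self):
        self._next_id = itertools.count(1)
        self._pending = {}                 # message_id -> request context (client conn, send time, ...)

    def register(self, context):
        msg_id = next(self._next_id)       # stamp this ID into the outgoing request header
        self._pending[msg_id] = context
        return msg_id

    def resolve(self, msg_id):
        # Called when a reply carrying this ID arrives from the backend server.
        return self._pending.pop(msg_id, None)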
I also did an SMB2 proxy test using one single buffer for the two downstream and upstream flows, on Windows and Linux, using the Asio networking library (also included in Boost.Asio). I used smbclient as a client on Linux to create 251 connections (see the following command):
ft=$(date '+%Y%m%d_%H%M%S.%N%z'); for ((i = 2000; i <= 2250; ++i)); do smbclient //10.23.57.158/fromw19 user_password -d 5 -U user$i -t 100 -c "get 1.96M.docx 1.96M-$i.docx" >>smbclient_${i}_${ft}_out.txt 2>>smbclient_${i}_${ft}_err.txt & done
Occasionally, it printed several errors: "Connection to 10.23.57.158 failed (Error NT_STATUS_IO_TIMEOUT)". If I increased the number of connections, the number of errors increased as well, so is there some threshold? In fact, those connections were completed within 30 seconds, and I also set the timeout for smbclient to 100. What's wrong?
Now, I know those problems need to be resolved. But here, I just want to know "Is it necessary to use a queue to save messages received from clients that are pending to be forwarded to the backend server?" so I can settle on my goal, because it makes a great deal of difference.
Maybe they don't need to care about the application message format; the following examples request the next read only after completing the write operation to the peer:
HexDumpProxyFrontendHandler.java or tcpproxy based on C++ Asio.
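That pattern, shown here as a rough Python asyncio sketch purely for illustration (the backend address, port, and buffer size are placeholders), issues the next read on a direction only after the previous chunk has been written to the peer, so no explicit per-connection message queue is needed:

import asyncio

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()      # wait for the write before issuing the next read
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    # Each direction on its own is strictly read -> write -> read ...,
    # and the two directions run concurrently.
    backend_reader, backend_writer = await asyncio.open_connection("10.23.57.158", 445)
    await asyncio.gather(pipe(client_reader, backend_writer),
                         pipe(backend_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8445)
    async with server:
        await server.serve_forever()

asyncio.run(main())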
Other References
[Computer Networks: A Systems Approach] 5.3 Remote Procedure Call - Overcoming Network Limitations
[Computer Networks: A Systems Approach] 5.3 Remote Procedure Call - Overcoming Network Limitations at github
JSON RPC at wikipedia
I have a Spring MVC application, and I'm using WebSockets so that a physical device that sends data can communicate with an Angular 2 front end.
The architecture is like this:
device ----> Spring MVC <----- Angular 2 front end
I have a data source listener that publishes to a WebSocket topic every time a new message appears, and I consume that topic from Angular.
My problem is that this works properly in my local Tomcat install, but when I deploy it to a faster server it doesn't.
The main problem I'm having is that it buffers the messages, reaches the buffer limit, and closes the WebSocket session.
What I noticed when checking the logs is that on my local server each message arrives about 20 milliseconds after the previous one has finished, but on the other server they sometimes arrive at the same time, get buffered, and the session buffer limit exception is thrown.
I tried setting a higher buffer size.
I also tried a Thread.sleep, but it didn't work.
Do you have any ideas what I can do? Should I implement some kind of message queue?
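A sketch of the message-queue idea mentioned above (in Python only to keep the examples in this document consistent; send_to_session is a placeholder for whatever actually writes to the WebSocket session): a bounded per-session outbound queue with a single drainer, so bursts are serialized instead of piling up in the session buffer.

import queue
import threading

class OutboundQueue:
    def __init__(self, send_to_session, maxsize=1000):
        self._q = queue.Queue(maxsize=maxsize)
        self._send = send_to_session              # placeholder: wraps the real session send
        threading.Thread(target=self._drain, daemon=True).start()

    def publish(self, message):
        try:
            self._q.put_nowait(message)
        except queue.Full:
            pass                                  # overflow: drop rather than grow the session buffer

    def _drain(self):
        while True:
            self._send(self._q.get())             # one in-flight send at a time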
Thanks in advance
I have a setup where I have a chain of servers that I need to send messages between:
A -> B -> C
On A I have an application which puts a message on a local MSMQ queue (MSMQ-A) on A. This queue needs to forward the message to an MSMQ queue on B (MSMQ-B), which in turn should forward the message to an MSMQ queue on C (MSMQ-C). On C there is an application which listens for messages from MSMQ-C.
The messages do not need to be transactional.
How do I configure MSMQ-A and MSMQ-B for forwarding of messages?
UPDATE
Based on the suggested answer I have done this:
I've enabled HTTP support under the Windows Message Queuing feature.
I've added a mapping file under the System32/msmq/mappings folder looking like this:
<redirections xmlns="msmq-queue-redirections.xml">
<redirection>
<from>http://machineA/msmq/private$/logger</from>
<to>http://machineB/msmq/private$/logger</to>
</redirection>
</redirections>
and still the messages get stuck on machineA.
I am using PowerShell to send the messages to the queue on A like this: Get-MsmqQueue -Name logger | Send-MsmqQueue -Body "asdasd"
The design you are describing is not something that MSMQ provides.
MSMQ delivers a message from sender to receiver and that's it. You can't have a chain where the receiver automatically becomes the sender to the next receiver. You would need to write an application on each machine that receives the message from the queue and creates a NEW copy of it to send to the next.
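As a rough sketch of that "receive and re-send" relay (using the MSMQ COM objects through pywin32; the queue paths, machine name, and the lack of error handling or transactions are all simplifying assumptions, not a drop-in solution):

import win32com.client

MQ_RECEIVE_ACCESS, MQ_SEND_ACCESS, MQ_DENY_NONE = 1, 2, 0

def open_queue(format_name, access):
    qinfo = win32com.client.Dispatch("MSMQ.MSMQQueueInfo")
    qinfo.FormatName = format_name
    return qinfo.Open(access, MQ_DENY_NONE)

# Runs on machine B: drain the local queue and re-send each message towards machine C.
in_queue = open_queue("DIRECT=OS:.\\private$\\logger", MQ_RECEIVE_ACCESS)
out_queue = open_queue("DIRECT=OS:machineC\\private$\\logger", MQ_SEND_ACCESS)

while True:
    received = in_queue.Receive()                 # blocks until a message arrives
    forwarded = win32com.client.Dispatch("MSMQ.MSMQMessage")
    forwarded.Label = received.Label
    forwarded.Body = received.Body                # a NEW message carrying the same payload
    forwarded.Send(out_queue)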
MSMQ routing is a different concept. Compare it to parcel shipping where a parcel is routed through different depots across the country until it reaches the final destination. Each depot does nothing with the parcel except to pass it on. They don't open up the parcel, use the contents, and then repackage to send to the next depot.
You may instead want to redirect MSMQ messages. For example, if A is an Internet-based PC, B is an Internet-facing server, and C is a PC on an internal LAN (and you want to send A->B->C). If you have that sort of scenario then you need to look at Redirections.
Delivering Messages Sent over the Internet
HTTP Message Redirection
I am implementing a hub/servers MPI application. Each of the servers can get tied up waiting for some data, then they do an MPI Send to the hub. It is relatively simple for me to have the hub waiting around doing a Recv from ANY_SOURCE. The hub can get busy working with the data. What I'm worried about is skipping data from one of the servers. How likely is this scenario:
servers 1 and 2 do Sends
hub does Recv and ends up getting data from server 1
while hub is busy, server 1 gets more data and does another Send
when hub does its next Recv, it gets the more recent server 1 data rather than the older server 2 data
I don't need a guarantee that the order in which the Sends occur is the order in which the ANY_SOURCE Recv processes them (though it would be nice), but if I knew that in practice it will be close to the order they are sent, I may go with the above. However, if it is likely that I could skip over data from one of the servers, I need to implement something more complicated, which I think would be this pattern:
servers each do Sends
hub does an Irecv for each server
hub does a Waitany on all server requests
upon completion of one server request, hub does a Test on all the others
of all the Irecvs that have completed, hub selects the oldest server data (there is a timing tag in the server data)
hub communicates with the server it just chose, has it start a new Send, and posts a new Irecv for it
This requires more complex code, and my first effort crashed inside the Waitany call in a way that I'm finding difficult to debug. I am using the Python bindings (mpi4py), so I have less control over the buffers being used.
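For what it's worth, here is a stripped-down mpi4py sketch of the Irecv-per-server / Waitany part of that pattern (without the "pick the oldest by timestamp" step); the fixed-size [timestamp, value] buffers and the round counts are assumptions made only for the example:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
ROUNDS = 5

if rank == 0:
    # Hub: keep one outstanding Irecv per server so no server's data can be skipped.
    bufs = [np.empty(2, dtype='d') for _ in range(size - 1)]
    reqs = [comm.Irecv(bufs[i], source=i + 1, tag=0) for i in range(size - 1)]
    for _ in range(ROUNDS * (size - 1)):
        idx = MPI.Request.Waitany(reqs)                   # index of whichever request completed
        timestamp, value = bufs[idx]
        print(f"hub got {value} from server {idx + 1} (sent at {timestamp:.6f})")
        reqs[idx] = comm.Irecv(bufs[idx], source=idx + 1, tag=0)   # re-post for that server
else:
    # Server: tag each message with a send time so the hub could pick the oldest.
    for _ in range(ROUNDS):
        msg = np.array([MPI.Wtime(), float(rank)], dtype='d')
        comm.Send(msg, dest=0, tag=0)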
It is guaranteed by the MPI standard that messages between a given sender and receiver are received in the order they are sent (non-overtaking messages). See also this answer to a similar question.
However, there is no guarantee of fairness when receiving from ANY_SOURCE and when there are distinct senders. So yes, it is the responsibility of the programmers to design their own fairness system if the application requires it.
I have a defined number of servers that can locally process data in their own way. But after some time I want to synchronize some state that is common to all the servers. My idea was to establish a TCP connection from each server to every other server, like a mesh network.
My problem is in what order to make the connections, since there is no "master" server here and each server is responsible for creating its own connections to every other server.
My idea was to make each server connect, and if the server being connected to already has a connection to the connecting server, then just drop the new connection.
But how do I handle the case where 2 servers try to connect at the same time? Because then I get 2 TCP connections instead of 1.
Any ideas?
You will need a handshake protocol when you're connecting to a server, so you can verify whether it's OK to start sending/receiving data; otherwise you might end up with one of the endpoints connecting and starting to send data immediately, only to have the other end drop the connection.
To ensure only one connection is up to a given server, you just need something like this pseudocode:
remote_server = accept_connection()
lock mutex;
if (already_connected(remote_server)) {
    drop_connection(remote_server)
}
unlock mutex;
If your server isn't multithreaded you don't need any locks to guard you when you check whether you're already connected - as there won't be any "at the same time" issues.
You will also need to have a retry mechanism to connect to a server based on a small random grace period in the event the remote server closed the connection you just set up.
If the connection got closed, you wait a little while, check if you're already connected (maybe the other end set up a connection to you in the meantime), and try to connect again. This is to avoid the situation where both ends set up the connection at the same time but the other end closes it because of the above logic.
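A minimal sketch of that retry-with-a-random-grace-period idea (is_connected and try_connect are placeholders for your own connection registry and connect routine):

import random
import time

def ensure_connected(peer, is_connected, try_connect, max_attempts=5):
    for _ in range(max_attempts):
        if is_connected(peer):
            return True                           # the other end may have connected to us meanwhile
        if try_connect(peer):
            return True
        time.sleep(random.uniform(0.1, 1.0))      # random grace period breaks the retry lockstep
    return False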
Just as an idea: each server accepts a connection and then finds out that it has got two TCP connections between the same pair of servers. Then one connection is chosen to be closed; you just need to implement the rule for choosing which one. For example, both servers could compare their names (or their IP addresses, or their UIDs), and the connection initiated by the server whose name (or UID) is lower should be closed.
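A sketch of that deterministic tie-break: both peers apply the same rule to the same pair of IDs, so they independently agree on which duplicate connection to close (the function name and the "lower ID loses" rule are just the example choices from above):

def should_close(initiator_id, local_id, remote_id):
    # Close the duplicate connection that was initiated by the lower of the two IDs.
    return initiator_id == min(local_id, remote_id)

# Servers 3 and 7 connected to each other simultaneously:
# both ends close the link server 3 initiated and keep the one server 7 initiated.
assert should_close(3, local_id=3, remote_id=7) is True
assert should_close(7, local_id=3, remote_id=7) is False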
While a better solution would be a separate "load balancer" to which all your servers connect, here is a small suggestion to make sure that connections are not created simultaneously.
Your servers can start connections at different times by using
bool CreateConnection = (time() % i == 0)
if (CreateConnection){ ... }
where i is the ID of the particular server.
and time() could be in seconds or fractions of a second, depending on your requirements.
This makes it unlikely that two servers will try to connect to each other at the same time. If you do not have IDs for the servers, you can use a random value.