Handling multiple different requests and responses in client-server Chat Program - networking

I've successfully written a UDP client-server chat application but my way of handling requests and responses is hacky and not very scalable. The server basically listens for messages coming in and then runs some code depending on the message type:
if command == "CONN":
    # handle new connection from client, then send "OK"
elif command == "MSG":
    # send message to other connected clients
...
I'm happy with the design of the server but the client is really fiddly.
Here's a sample of the commands the client can send to the server:
Command Name | Argument | Outcome/Description
-------------|----------|---------------------------------------------
CONN         | username | OK, ERR, or timeout if server isn't running
MSG          | message  | -
USRS         | -        | ["username1", "username2"]
QUIT         | -        | -
And receive from the server:
Command Name | Argument          | Description
-------------|-------------------|-------------------------
USRC         | username          | new user connected
USRD         | username          | user disconnected
MSG          | username, message | print message from user
SHDW         | -                 | server shut down
Basically I'm having trouble building a system that will handle these different sets of commands and responses. I know I have a state machine of sorts and can conceptualize a solution in my head; I just don't seem to be able to translate this into anything other than:
socket.send("CONN username")
if response == "OK":
# connected to the server ok
if response == "ERR":
# oops, there was a problem of sorts
# otherwise handle timeout
socket.send("USRS")
if response == "":
# no other users connected
else:
# print users
# start main listening loop
while True:
# send typed text as MSG
# handle any messages received from the server on separate thread
Any help appreciated, and apologies for the weird python-esque pseudocode.

To make things more scalable, your application could benefit from using multiple threads on both the client side and the server side. Be sure to use locks when handling shared data.
First, the client side could certainly benefit from using three threads. The first thread can listen for input from the server (the recvfrom() call). The second thread can listen for input from the user and put those messages in a queue. The third thread can process messages from the queue and call socket.send() to send them to the server.
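For illustration, here is a minimal sketch of that three-thread client, using the message formats from the question; the server address, port 9999, and the username "alice" are assumptions:

import queue
import socket
import threading

SERVER_ADDR = ("127.0.0.1", 9999)    # assumed server address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))            # bind early so recvfrom works right away
outgoing = queue.Queue()             # Queue is already thread-safe

def listen_to_server():
    # Thread 1: handle everything the server pushes to us.
    while True:
        data, _ = sock.recvfrom(4096)
        command, _, payload = data.decode().partition(" ")
        if command == "MSG":
            user, _, message = payload.partition(" ")
            print(f"<{user}> {message}")
        elif command == "USRC":
            print(f"* {payload} connected")
        elif command == "USRD":
            print(f"* {payload} disconnected")
        elif command == "SHDW":
            print("* server shut down")
            break

def read_user_input():
    # Thread 2: blocking input() stays off the network threads.
    while True:
        outgoing.put("MSG " + input())

def send_from_queue():
    # Thread 3: a single sender serializes all writes to the socket.
    while True:
        sock.sendto(outgoing.get().encode(), SERVER_ADDR)

outgoing.put("CONN alice")           # handshake goes out first
for worker in (read_user_input, send_from_queue):
    threading.Thread(target=worker, daemon=True).start()
listener = threading.Thread(target=listen_to_server)
listener.start()
listener.join()                      # exit once the server shuts down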
Since the server is handling multiple clients, it could also benefit from having threads to listen for messages from the clients and to process them. Once again, you could use one thread to receive messages from clients and queue them, and a second thread to process the queued messages (make sure to store client information) and call sendto() to send responses; note that recvfrom() does provide the client's address.
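And a matching sketch of that two-thread server (again, the port and message formats are assumed; recvfrom() supplies each client's address, which is stored in a registry guarded by a lock):

import queue
import socket
import threading

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))         # same assumed port as the client

incoming = queue.Queue()
clients = {}                         # address -> username
clients_lock = threading.Lock()      # protects the shared registry

def receive_loop():
    # Thread 1: recvfrom() hands us both the datagram and the sender's address.
    while True:
        data, addr = sock.recvfrom(4096)
        incoming.put((addr, data.decode()))

def process_loop():
    # Thread 2: dispatch each queued message on its command word.
    while True:
        addr, text = incoming.get()
        command, _, payload = text.partition(" ")
        if command == "CONN":
            with clients_lock:
                clients[addr] = payload
            sock.sendto(b"OK", addr)
        elif command == "MSG":
            with clients_lock:
                sender = clients.get(addr, "?")
                others = [a for a in clients if a != addr]
            for other in others:
                sock.sendto(f"MSG {sender} {payload}".encode(), other)

threading.Thread(target=receive_loop, daemon=True).start()
process_loop()                       # run the consumer in the main thread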

Related

In TCP, how much data is buffered if the connection is not accepted by the server?

I wrote a simple server application. In that application, I created a server socket and put it into the listening state with a listen call.
After that, I did not write any code to accept the incoming connection request; I simply waited for termination with a pause call.
I want to figure out, practically, how many bytes are buffered on the server side if the connection is not accepted, and then validate that number against TCP theory.
To do that,
First, I started my server application.
Then I used "dd" and "netcat" to send the data from client to server. Here is the command:
$> dd if=/dev/zero count=1 bs=100000 | nc 127.0.0.1 45001
Then I opened Wireshark and waited for the zero-window message.
Judging from the last properly acknowledged TCP frame, the client side could successfully send 64559 bytes of data to the server.
Then I executed the above dd-netcat command to create another client and send data again.
In this case, I got the following Wireshark output:
From the last successfully acknowledged TCP frame, I understand that the client application could successfully send 72677 bytes to the server.
So it seems that the size of the related buffer can change at runtime. Or I am misinterpreting the Wireshark output.
How can I understand the size of the related receive buffer? What is the correct term for this receive buffer? How can I show its default size?
Note that the port number of the TCP server is 45001.
Thank you!
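For what it's worth, this buffer is usually just called the socket receive buffer, and its per-socket size is exposed via the SO_RCVBUF socket option (on Linux the TCP defaults come from net.ipv4.tcp_rmem). The run-to-run variation is consistent with Linux's receive-buffer autotuning, which can grow a connection's buffer at runtime. A minimal sketch of how one might inspect it:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_RCVBUF reports the kernel's current receive-buffer size for this socket;
# on Linux the TCP default is the middle value of net.ipv4.tcp_rmem.
print("receive buffer bytes:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()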

Is it necessary to use a queue to save messages received from clients and pending to be forwarded to the backend server?

I want to write a proxy server for SMB2 based on Asio, and am considering using a cumulative buffer to receive a full message so as to do the business logic, and introducing a queue for multiple messages, which will force me to synchronize the following resource accesses:
the read and write operations on the queues, because the two upstream/downstream queues are shared between the frontend client and the backend server;
the backend connection state, because reads on the frontend won't wait for the completion of a connect, or of writes on the backend server, before the next read; and
the resource release when an error occurs or a connection is normally closed, because both the read and write handlers on the same socket registered with the EventLoop may not yet have completed, and an asynchronous connect operation can be initiated in worker threads while its partner socket has been closed; these may run concurrently.
If I don't use the two queues, only one (read, write, or connect) handler is registered with the EventLoop on the proxy flow for a request, so there is no need to synchronize.
From the application level,
I think a cumulative buffer is generally a must in order to assemble a full message packet (e.g. a message in the format | length (4 bytes) | body (variable) |) across multiple related API calls (system APIs: recv or read; or library APIs: asio::sync_read).
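For example, a minimal sketch of such a cumulative buffer in Python (the framing logic is the same whether the bytes come from recv or from an Asio read handler; all names here are illustrative):

import struct

def feed(buffer: bytearray, data: bytes):
    # Append newly received bytes, then yield every complete
    # | length (4 bytes, big-endian) | body | frame now available.
    buffer.extend(data)
    while len(buffer) >= 4:
        (length,) = struct.unpack_from("!I", buffer, 0)
        if len(buffer) < 4 + length:
            break                    # body not fully received yet
        yield bytes(buffer[4:4 + length])
        del buffer[:4 + length]      # consume the frame

# usage: buf = bytearray()
#        for msg in feed(buf, sock.recv(4096)): handle(msg)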
And then, is it necessary to use a queue to save messages received from clients and pending to be forwarded to the backend server?
Using the following diagram from http://www.partow.net/programming/tcpproxy/index.html (which turned out to have thoughts similar to mine; "upstream" as in NGINX upstream servers):
                                  ---> upstream --->          +---------------+
                                                 +---->------>|               |
                           +-----------+         |            | Remote Server |
                 +-------->|          [x]--->----+   +---<---[x]              |
                 |         | TCP Proxy |             |        +---------------+
+-----------+    |  +--<--[x]  Server  |<-----<------+
|          [x]-->+  |      +-----------+
|  Client   |       |
|           |<---<--+
+-----------+
                <--- downstream <---
   Frontend                                                      Backend
For a request-response protocol without a message ID field (useful for matching each reply message to the corresponding request message), such as HTTP, I can use one single buffer per connection for the two downstream and upstream flows and then continue processing the next request, because clients always wait for the response after sending a request (they may block, or get notified by an asynchronous callback function). Note that for the first request, a connection to the server is attempted, so it's slower than the subsequent ones.
However, for a protocol in which clients don't wait for the response before sending the next request, a message ID field can be used to uniquely identify or distinguish request-reply pairs, for example JSON-RPC 2.0, SMB2, etc. If I strictly complete the two flows above before issuing the next read (that is, don't call read, and let TCP data accumulate in the kernel), the subsequent requests from the same connection cannot be processed in a timely manner. After reading "What happens if one doesn't call POSIX's recv 'fast enough'?" I think it can be done.
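As a sketch of the queued variant (Python asyncio rather than Asio, with made-up listen port; per direction, one producer task queues frames while one consumer task writes them, so a slow backend write never blocks the next frontend read; error handling and the connection-state issues listed above are elided):

import asyncio

async def pipe(reader, queue):
    # Producer: keep reading so the peer is never stalled by our writes.
    while data := await reader.read(4096):
        await queue.put(data)
    await queue.put(None)            # EOF marker

async def drain(queue, writer):
    # Consumer: one writer per direction serializes writes to that peer.
    while (data := await queue.get()) is not None:
        writer.write(data)
        await writer.drain()
    writer.close()

async def handle_client(client_r, client_w):
    backend_r, backend_w = await asyncio.open_connection("10.23.57.158", 445)
    up, down = asyncio.Queue(), asyncio.Queue()
    await asyncio.gather(
        pipe(client_r, up), drain(up, backend_w),      # upstream flow
        pipe(backend_r, down), drain(down, client_w),  # downstream flow
    )

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8445)
    async with server:
        await server.serve_forever()

asyncio.run(main())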
I also did an SMB2 proxy test using one single buffer for the two downstream and upstream flows, on Windows and Linux, using the ASIO networking library (also included in Boost.Asio). I used smbclient as a client on Linux to create 251 connections (see the following command):
ft=$(date '+%Y%m%d_%H%M%S.%N%z'); for ((i = 2000; i <= 2250; ++i)); do smbclient //10.23.57.158/fromw19 user_password -d 5 -U user$i -t 100 -c "get 1.96M.docx 1.96M-$i.docx" >>smbclient_${i}_${ft}_out.txt 2>>smbclient_${i}_${ft}_err.txt & done
Occasionally, it printed several errors: "Connection to 10.23.57.158 failed (Error NT_STATUS_IO_TIMEOUT)". If I increased the number of connections, the number of errors increased, so is there a threshold? In fact, those connections were completed within 30 seconds, and I had also set the timeout for smbclient to 100. What's wrong?
Now, I know those problems need to be resolved. But here, I just want to know "Is it necessary to use a queue to save messages received from clients and pending to be forwarded to the backend server?" so I can determine my goal, because it makes a great deal of difference.
Maybe because they don't care about the application message format, the following examples request the next read only after completing the write operation to the peer:
HexDumpProxyFrontendHandler.java, or tcpproxy based on C++ Asio.
Other References
[Computer Networks: A Systems Approach] 5.3 Remote Procedure Call - Overcoming Network Limitations
[Computer Networks: A Systems Approach] 5.3 Remote Procedure Call - Overcoming Network Limitations at github
JSON RPC at wikipedia

What order do I get messages coming to MPI_Recv from MPI_ANY_SOURCE?

I am implementing a hub/servers MPI application. Each of the servers can get tied up waiting for some data; then they do an MPI Send to the hub. It is relatively simple for me to have the hub waiting around doing a Recv from ANY_SOURCE. The hub can get busy working with the data. What I'm worried about is skipping data from one of the servers. How likely is this scenario:
server 1 and server 2 do Sends
hub does a Recv and ends up getting data from server 1
while the hub is busy, server 1 gets more data and does another Send
when the hub does its next Recv, it gets the more recent server 1 data rather than the older server 2 data
I don't need a guarantee that the order the Sends occur is the order ANY_SOURCE processes them (though it would be nice), but if I knew that in practice it would be close to the order they are sent, I might go with the above. However, if it is likely that I could skip over data from one of the servers, I need to implement something more complicated, which I think would be this pattern:
servers each do Sends
hub does an Irecv for each server
hub does a Waitany on all server requests
upon completion of one server request, hub does a Test on all the others
of all the Irecvs that have completed, the hub selects the oldest server data (there is a timing tag in the server data)
hub communicates with the server it just chose, has it start a new Send, and posts a new Irecv
This requires more complex code, and my first effort crashed inside the Waitany call in a way that I'm finding difficult to debug. I am using the Python bindings mpi4py - so I have less control over buffers being used.
It is guaranteed by the MPI standard that messages between a given pair of processes are received in the order they are sent (non-overtaking messages). See also this answer to a similar question.
However, there is no guarantee of fairness when receiving from ANY_SOURCE and when there are distinct senders. So yes, it is the responsibility of the programmers to design their own fairness system if the application requires it.
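As an illustration of that Irecv-per-sender pattern in mpi4py (the buffer size, tag, and fixed round count are made up; fixed-size byte buffers sidestep the buffer-control concern mentioned in the question):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n_servers = comm.Get_size() - 1      # rank 0 is the hub
BUF_SIZE = 1 << 16
ROUNDS = 3

if rank == 0:
    # One outstanding Irecv per server: no sender can be skipped, because
    # each has its own pending receive that only its messages can complete.
    bufs = [bytearray(BUF_SIZE) for _ in range(n_servers)]
    reqs = [comm.Irecv([bufs[i], MPI.BYTE], source=i + 1, tag=0)
            for i in range(n_servers)]
    for _ in range(ROUNDS * n_servers):
        idx = MPI.Request.Waitany(reqs)              # which server finished?
        print("hub got:", bytes(bufs[idx]).rstrip(b"\0").decode())
        # immediately re-arm the receive for that server
        reqs[idx] = comm.Irecv([bufs[idx], MPI.BYTE], source=idx + 1, tag=0)
else:
    # Servers send fixed-size messages so the hub's buffers always fit them.
    for i in range(ROUNDS):
        payload = f"rank {rank}, message {i}".encode().ljust(BUF_SIZE, b"\0")
        comm.Send([payload, MPI.BYTE], dest=0, tag=0)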

Is there any way to broadcast a single HTTP request to multiple servers at the same time?

Is there any way to send (broadcast) a single request to multiple servers?
My requirement is a module which can send a single request to multiple servers (generally speaking, broadcast the request). It then waits for the responses for a certain time (say 5 milliseconds), and either clubs the responses from the different servers together and sends them back to the client, or, based on a parameter in the response (suppose price), sends just one response to the client.
For example, a request (get the price) needs to be sent to server1, server2, server3, and server4 at the same time:
server1 response: price: $5
server2 response: price: $3
server3 does not respond within 5 ms
server4 response: price: $8
My module either sends the server4 response back to the client, as it has the highest price, or sends all the responses to the client clubbed together, and the client validates on price.
You can use the cURL multi interface to do this.
Set up a separate handle for each request, initialize a multi-handle, and add your individual request handles to it. Execute the multi-handle and go into a wait loop for the responses/timeouts. Afterwards, parse each response to gather all of the results.
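The same fan-out idea sketched in Python instead of cURL (the URLs and the JSON price field are invented; the 5 ms budget comes from the question and would likely be larger in practice):

import json
from concurrent.futures import ThreadPoolExecutor, TimeoutError, as_completed
from urllib.request import urlopen

# Hypothetical price endpoints; each is assumed to return JSON like {"price": 5}.
URLS = ["http://server%d.example/price" % i for i in range(1, 5)]
BUDGET = 0.005                       # the 5 ms from the question

def fetch(url):
    with urlopen(url, timeout=BUDGET) as resp:
        return json.load(resp)["price"]

results = {}
with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
    futures = {pool.submit(fetch, url): url for url in URLS}
    try:
        for fut in as_completed(futures, timeout=BUDGET):
            try:
                results[futures[fut]] = fut.result()
            except Exception:
                pass                 # this server failed; skip it
    except TimeoutError:
        pass                         # budget expired; keep what arrived

# Either club all responses together, or pick the highest price:
print("all responses:", results)
if results:
    best = max(results, key=results.get)
    print("winner:", best, results[best])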

Writing A Socket Server Application

I have to design a server socket program. The requirement is that each connection from a client will be in a different thread.
The challenge: suppose the server is now connected to two clients, client A and client B; they will be in two different threads.
My application requirement is that when the server gets some message from client A or client B, after processing the message it will send it to both client A and client B.
Can you please suggest the right approach for this? How do I know which clients are open at a given time?
Quite simple really: have data structures shared by the two threads and protected from concurrent access. You can design the sending based on a message-queue-like pattern.
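A minimal sketch of that approach in Python (the port and framing are assumptions; a lock protects the shared client registry, and each connection thread broadcasts what it receives):

import socket
import threading

clients = {}                         # socket -> address
clients_lock = threading.Lock()      # guards the shared registry

def broadcast(message):
    with clients_lock:
        targets = list(clients)      # snapshot under the lock, send outside it
    for sock in targets:
        try:
            sock.sendall(message)
        except OSError:
            pass                     # that client's own thread will clean up

def handle(conn, addr):
    with clients_lock:
        clients[conn] = addr
    try:
        while data := conn.recv(4096):
            broadcast(data)          # send to everyone, including A and B
    finally:
        with clients_lock:
            clients.pop(conn, None)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))       # assumed port
server.listen()
while True:
    conn, addr = server.accept()     # one thread per connection, as required
    threading.Thread(target=handle, args=(conn, addr), daemon=True).start()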
