I have a multi-process TCP server on Linux that creates (via fork()) one child process per client connection, while the parent keeps listening for further connection requests. So I have a one-to-one mapping between clients and server processes.
Suppose one client crashes: is it possible to reconnect it to the same child server process? In other words, is it possible to restore a pre-existing connection that has failed, or will the reconnection attempt always create a new connection (and therefore a new child server process)? Thank you.
Without the forking (parent) process having some knowledge of the session-related details inside each forked child, you have to rely on externally visible details being adequate to decide which incoming remote connection should be re-associated with which local connection endpoint.
You could change the way things work in your application, though. Oracle SQL*Net does this on some platforms (due to platform limitations).
The initial connection to the TCP server causes a fork; the child then opens a new listening socket and sends back a redirection instruction telling the client to connect to that new socket, together with identifying details (to prevent someone else from connecting and impersonating the original client). The client then connects to the new socket and uses it for any reconnections if it is disconnected prematurely.
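Purely as an illustration (not from the original answer), here is a minimal, untested C++/POSIX sketch of that redirect scheme on Linux. The rendezvous port 9000, the "RECONNECT port token" message format, and the token generation are all made-up placeholders:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <csignal>
#include <cstdio>
#include <ctime>

// Child: open a private listening socket on an ephemeral port, tell the
// client where (and with what token) to reconnect, then serve that port.
static void serve_client(int initial_fd) {
    int lst = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = 0;                                // kernel picks a free port
    bind(lst, (sockaddr*)&addr, sizeof addr);
    listen(lst, 1);
    socklen_t len = sizeof addr;
    getsockname(lst, (sockaddr*)&addr, &len);         // discover the chosen port
    unsigned short port = ntohs(addr.sin_port);
    unsigned long token = (unsigned long)getpid() ^ (unsigned long)time(nullptr); // placeholder secret
    char msg[64];
    int n = snprintf(msg, sizeof msg, "RECONNECT %hu %lu\n", port, token);
    write(initial_fd, msg, n);                        // redirect the client
    close(initial_fd);
    for (;;) {                                        // serve (re)connections
        int fd = accept(lst, nullptr, nullptr);
        if (fd < 0) break;
        // A real implementation would read and verify the token here
        // before resuming the existing session state.
        char buf[256];
        ssize_t r;
        while ((r = read(fd, buf, sizeof buf)) > 0)
            write(fd, buf, r);                        // placeholder session logic: echo
        close(fd);                                    // client dropped; wait for reconnect
    }
}

int main() {
    signal(SIGCHLD, SIG_IGN);                         // reap children automatically
    int lst = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    setsockopt(lst, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                      // assumed rendezvous port
    bind(lst, (sockaddr*)&addr, sizeof addr);
    listen(lst, 16);
    for (;;) {
        int fd = accept(lst, nullptr, nullptr);
        if (fd < 0) continue;
        if (fork() == 0) {                            // child owns this client
            close(lst);
            serve_client(fd);
            _exit(0);
        }
        close(fd);                                    // parent keeps listening
    }
}
The client would parse the RECONNECT line, connect to the advertised port, present the token, and reuse that same port for any later reconnects.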
I have done something very similar on the .NET platform. This can be done if you have something that is unique for every connection (for example, the IMEI of the connecting device). You keep a global two-dimensional array (a mapping) of process ID and IMEI pairs. When a device disconnects and later reconnects, you simply look up its IMEI in this mapping and you have the process that handles that device. You should be very careful with this global variable.
Edited:
I gave an example of a unique identifier. In my case that was the IMEI of the devices. In your case it could be something else that you know is unique.
I had to do this because I had a serious problem with devices dropping the connection. Every reconnecting device created a new connection, so I eventually ended up with very high CPU usage.
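The original answer was done in .NET; purely as an illustration, here is a minimal C++ sketch of the same bookkeeping, assuming a single multithreaded server process (with fork()ed children you would need shared memory or a broker process instead, since an in-process map is not shared across processes):
#include <sys/types.h>
#include <map>
#include <mutex>
#include <string>

// Maps a unique device identifier (here an IMEI, but any stable unique id
// works) to the process/handler that currently owns that device's session.
struct SessionRegistry {
    std::mutex m;
    std::map<std::string, pid_t> owner;          // IMEI -> owning process id

    void bind(const std::string& imei, pid_t pid) {
        std::lock_guard<std::mutex> g(m);
        owner[imei] = pid;                        // (re)register on every connect
    }

    pid_t lookup(const std::string& imei) {
        std::lock_guard<std::mutex> g(m);
        auto it = owner.find(imei);
        return it == owner.end() ? -1 : it->second;  // -1: no live session yet
    }
};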
You might refer to https://eternalterminal.dev/howitworks/. Both the client and the server need to change.
Related
Currently we receive on port 9020 from a client. That client uses port switching on their end, and sometimes we wind up with multiple "Established" connections on the one local port, all with different remote ports. We can manually end each established connection and our job will connect again after a few seconds. We can also run ENDTCPCNN for each remote port listed. We are looking for a way to programmatically see whether there are multiple remote ports connected to the local port and, if so, end those established connections (while leaving the listener running). Does anyone know of a way to get this information?
To answer your question, assuming you are on a supported version of the OS, take a look at the QSYS2.NETSTAT_INFO SQL view.
select local_port, remote_port, protocol, tcp_state, idle_time, network_connection_type
from qsys2.netstat_info
where local_port = 9020;
Otherwise, you'd need to use the List Network Connections and Retrieve Network Connection Data APIs yourself.
But are you sure you need to do this? It sounds like the client is leaving the connection open for re-use. That's a good thing for performance. Your server code should be automatically timing out and closing idle connections.
My job is to write a distributed client/server application with some concurrent tasks, so I decided to use Akka.NET for the concurrency issues. Akka.Remote is used to implement the IPC between server and client. For various reasons, more than one client of the same type may run on a single workstation, so I configured these clients to have their TCP port assigned dynamically. This works fine for sending messages to the server.
My problem is pushing information to the clients. For this purpose an actor exists on the client. The server now has to create a reference to this actor, and for that it needs the port the client is listening on. My idea is to send the TCP port the client uses to the server as part of some kind of connection procedure, using an actor on the server.
After searching for some hours I didn't find any hint about where to obtain the dynamically assigned TCP port. So how would the client find out which port it has been assigned?
OK, I could use Akka.Cluster, but that seems like breaking a butterfly on a wheel for this problem. And whether it would even solve my issue remains to be seen.
Two suggestions, assuming that it is your client that makes the first contact with the server.
I'd have the server keep track of which clients are connected. I'd probably have a heartbeat message that gets sent once every few seconds from each client system. This way you can store an IActorRef for each alive client and send messages back without the need for finding the port. IActorRefs are preferable wherever possible for location transparency.
If you actually need to explicitly find the port, you may be able to extract it from the Path property of the IActorRef of one of the actors on the client system.
Thanks to Patrick's suggestions, my issue is solved.
The solution is to extract the needed information from the sender's path, which is available while processing the hello message. With this information the server is able to maintain a list of all connected clients and their network addresses.
Thanks a lot, Patrick.
Regards Gregor
I want to send a message actively from the server, for example over UDP or TCP/IP, to a client running on an Arduino. This is known to be possible if the user has forwarded the specific port to the device on the local network. However, I don't want the user to have to set up port forwarding manually. Would this be possible, perhaps using another protocol?
1 Arduino Side
I think the closest you can get to this is opening a connection to the server from the Arduino, then using available() to wait for the server to stream some data to the Arduino. Your code will be polling the open connection, but you avoid all the back-and-forth communication needed to open and close the connection, pass headers back and forth, etc.
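For example (untested sketch, assuming an Ethernet shield; a WiFi client works the same way, and the host name, port and HELLO line are made up for illustration):
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
EthernetClient client;

const char* SERVER_HOST = "example.com";   // made-up server address
const uint16_t SERVER_PORT = 8080;         // made-up port

void connectToServer() {
  while (!client.connect(SERVER_HOST, SERVER_PORT)) {
    delay(5000);                           // back off and retry
  }
  client.println("HELLO device-42");       // identify this device once
}

void setup() {
  Ethernet.begin(mac);                     // DHCP; check the return value in real code
  connectToServer();
}

void loop() {
  if (!client.connected()) {               // connection dropped: re-establish it
    client.stop();
    connectToServer();
  }
  while (client.available()) {             // the server pushed data down the open socket
    char c = client.read();
    // ... act on the pushed message here ...
  }
}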
2 Server Side
This means the bulk of the work will be on the server side, where you will need to manage open connections so you can instantly write to them when a user triggers some event which requires a message to be pushed to the arduino. How to do this varies a bit depending on what type of server application you are running.
2.1 Node.js "walk-through" of main issues
In Node.js for example, you can res.write() on a connection, without closing it - this should give a similar effect as having an open serial connection to the arduino. That leaves you with the issue of managing the connection - should the server periodically check a database for messages for the arduino? That simply removes one link from the arduino -> server -> database polling link, so we should be able to do better.
We can attach a function triggered by the event of a message being added to the database. Node-orm2 is a database Object Relational Model driver for node.js, and it offers hooks such as afterSave and afterCreate which you can utilize for this type of thing. Depending on your application, you may be better off not using a database at all and simply using javascript objects.
The only remaining issue, then, is: once the hook is activated, how do we get the correct connection into scope so we can write to it? You can save all the relevant data you have about the request in some global data structure, maybe a dictionary keyed by an Arduino ID, and in the triggered function you fetch that data, i.e. the request context, and write to it.
See this blog post for a great example, including node.js code which manages open connections, closing them properly and clearing from memory on timeout etc.
3 Conclusion
I haven't tested this myself - but I plan to since I already have an existing application using arduino and node.js which is currently implemented using normal polling. Hopefully I will get around to it soon and return here with results.
Typically in long-polling (from what I've read) the connection is closed once data is sent back to the client (arduino), although I don't see why this would be necessary. I plan to try keeping the same connection open for multiple messages, only closing after a fixed time interval to re-establish the connection - and I hope to set this interval fairly high, 5-15 minutes maybe.
We use Pubnub to send notifications to a client web browser so a user can know immediately when they have received a "message" and stuff like that. It works great.
This seems to have the same constraints that you are looking at: No static IP, no port forwarding. User can theoretically just plug the thing in...
It looks like Pubnub has an Arduino library:
https://github.com/pubnub/arduino
The client logs in via SSH and starts up a server on a remote machine; then the client creates a TCP connection to that server.
The server needs to exit when the client has exited normally, has crashed, or the network has dropped.
So the question is how the server can detect that the client it is connected to has crashed.
My first try was using the error() signal and catching QAbstractSocket::NetworkError to determine that the network had dropped. But I don't receive the error() signal at all, even if I pull out the network cable.
My second try was using the socket state: whenever the state is UnconnectedState, the client may have exited normally and the server should exit too. This works fine for a "normal exit", but I don't know how to handle a "crash" or a "dead network".
Help me, thanks!
I'd recommend using TCP keep-alive. It is not exposed through the public QTcpSocket interface, but you can use setsockopt with QAbstractSocket::socketDescriptor to activate the SO_KEEPALIVE feature.
EDIT: It appears that keep-alive support was added to QAbstractSocket at some point. So, simply call QAbstractSocket::setSocketOption with QAbstractSocket::KeepAliveOption.
You can find information about adjusting the timeout of keep alive request here: http://www.gnugk.org/keepalive.html
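For example, a minimal sketch using the newer API mentioned in the edit above (the host and port are made up, and the option has to be set after the socket is connected, because it needs a native socket descriptor):
#include <QTcpSocket>

void connectWithKeepAlive()
{
    QTcpSocket socket;
    socket.connectToHost("server.example", 9020);      // made-up host and port
    if (socket.waitForConnected(3000)) {
        // Needs a live native descriptor, so set it only after connecting.
        socket.setSocketOption(QAbstractSocket::KeepAliveOption, 1);
    }
    // ... use the socket; the OS will eventually error out the connection
    // when keep-alive probes go unanswered ...
}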
Most of the time, the only way you will know there is a problem with a socket connection is when you try to read or write with it. There are some exceptions: Windows will change the state of sockets if the network cable is unplugged, Linux (in my experience) will not.
The most reliable way to detect connection problems is to have the client send a small message to the server at an agreed-upon interval. If the server does not see this message within a reasonable time, it should consider the client dead and drop the connection. This also gives both sides regular opportunities to detect a problem via reads and writes.
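For example, the server-side bookkeeping for such a heartbeat might look roughly like this (plain C++ sketch; the 15-second timeout is an arbitrary assumption, and the map is keyed by socket descriptor):
#include <chrono>
#include <map>

using Clock = std::chrono::steady_clock;

// Remember when each client (keyed by its socket descriptor) last sent anything.
std::map<int, Clock::time_point> last_heard;

// Call this whenever any message (including a heartbeat) arrives on fd.
void note_activity(int fd) { last_heard[fd] = Clock::now(); }

// Call this periodically for each client; true means "consider it dead".
bool is_dead(int fd, std::chrono::seconds timeout = std::chrono::seconds(15)) {
    auto it = last_heard.find(fd);
    return it == last_heard.end() || Clock::now() - it->second > timeout;
}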
I have a fixed number of servers that each process data locally in their own way. But after some time I want to synchronize some state that is common to all of the servers. My idea was to establish a TCP connection from each server to every other server, like a mesh network.
My problem is in what order to make the connections, since there is no "master" server here: each server is responsible for creating its own connections to every other server.
My idea was to have each server connect, and if the server being connected to already has a connection to the connecting server, it simply drops the new connection.
But how do I handle the case where two servers try to connect to each other at the same time? Then I end up with two TCP connections instead of one.
Any ideas?
You will need a handshake protocol when you connect to a server so you can verify whether it's OK to start sending/receiving data; otherwise you might end up with one endpoint connecting and starting to send data immediately, only to have the other end drop the connection.
To ensure that only one connection is kept up to a given server, you just need something like this pseudocode:
remote_server = accept_connection()
lock mutex
if (already_connected(remote_server)) {
    drop_connection(remote_server)
}
unlock mutex
If your server isn't multithreaded you don't need any locks to guard the check for whether you're already connected, as there won't be any "at the same time" issues.
You will also need to have a retry mechanism to connect to a server based on a small random grace period in the event the remote server closed the connection you just set up.
If the connection got closed, you wait a little while, check if you're already connected (maybe the other end set up a connection to you in the mean time) and try to connect again. This is to avoid the situation where both ends set up the connection at the same time but the other end closes it because of the above logic.
Just as an idea: each server accepts the connection, and then finds out that there are now two TCP connections between the same pair of servers. One of the connections is then chosen to be closed; you just need to implement the rule for choosing which one. For example, both servers could compare their names (or their IP addresses, or their UIDs) and close the connection that was initiated by the server whose name (or UID) is lower.
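In C++-flavoured pseudocode, the tie-break could be as simple as this (assuming the server IDs are comparable integers):
// Both servers run the same rule on a duplicate pair of connections:
// close the connection that was initiated by the server with the lower ID.
bool should_close_duplicate(int my_id, int peer_id, bool i_initiated_it) {
    return i_initiated_it ? (my_id < peer_id) : (peer_id < my_id);
}
Both ends evaluate the same rule, so they agree on which of the two duplicate connections survives.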
While a better solution would involve a separate "LoadBalancer" to which all your servers connect, here is a small suggestion to make sure that connections are not created simultaneously.
Your servers can start their connections at different times by using
bool CreateConnection = (time(NULL) % i == 0);   // i is this server's ID
if (CreateConnection) { /* initiate the connection to the peer */ }
where i is the ID of the particular server.
and time() could be in seconds or in fractions of a second, depending on your requirements.
This makes it unlikely that two servers will try to connect to each other at the same moment. If you do not have IDs for your servers, you can use a random value instead.