I've got a C++ server application which provides a listening TCP port for support personnel to connect to. They can issue commands and get responses. It's working fine from the application perspective.
My problem arises when they use telnet(1) to connect, and if they (for some reason) type a ^C. My server sees the parsed control characters that telnet sends me, and I can ignore or process them as I see fit. But the telnet client itself goes into some state where it stops outputting my server's responses to the client's screen.
I know I could either 1) tell them not to use telnet or 2) tell them to do a toggle autoflush inside the telnet app, or via ~/.telnetrc or whatever. But what I would prefer to do, if possible, is respond in the server with the correct protocol sequence to get their client to do the right thing with the text that follows. This just feels like it'd be a better UX for them. Their job sucks enough as it is.
Is this possible? I've been through the RFC and it's not clear. From my own past use of telnet, this feels like it's doable, but my memory may be fuzzy.
The idea behind IAC DO TIMING-MARK (FF FD 06) is to suppress the output of the process that is being interrupted by the IAC IP interrupt-process command (FF F4). The telnet client hides all output from the user until it receives a proper timing mark, or a notification that the timing mark is not supported by the server.
You may or may not respond to or act on the IAC IP, but you do have to respond to IAC DO TIMING-MARK. The easiest thing in your case is to reply that you ignored it, with IAC WONT TIMING-MARK (FF FC 06); the client should then continue displaying all output normally.
If you really do terminate the current job, then you should flush your buffers and respond with IAC WILL TIMING-MARK (FF FB 06), which means the client will discard all of the server's output from the moment the user pressed ^C up to the point in the stream where it finds the IAC WILL TIMING-MARK.
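In byte terms the reply is just three octets. A minimal sketch, assuming a plain blocking BSD-socket server (the function and variable names are mine, not from the question):

    // Sketch: reply to IAC DO TIMING-MARK on an accepted connection 'client_fd'.
    #include <sys/socket.h>

    static const unsigned char IAC  = 255;  // 0xFF
    static const unsigned char WILL = 251;  // 0xFB
    static const unsigned char WONT = 252;  // 0xFC
    static const unsigned char TIMING_MARK = 6;

    // Call this when the command parser has just consumed IAC DO TIMING-MARK.
    // WONT means "I ignored the interrupt, keep displaying my output";
    // WILL means "buffers are flushed, discard everything before this mark".
    void reply_timing_mark(int client_fd, bool interrupted_and_flushed)
    {
        const unsigned char reply[3] = {
            IAC, interrupted_and_flushed ? WILL : WONT, TIMING_MARK
        };
        send(client_fd, reply, sizeof(reply), 0);
    }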
As stated above, I'm trying to check the connection between server and client without using ping or some packet that is sent every second.
I already tried the ping method, but it causes flooding. I also tried a TCP method that acts like ICMP, sending a TCP packet every second to make sure the connection between server and client is still up, but that doesn't solve the flooding problem either.
Do you have any idea how to do this without causing flooding?
All I need is for the server to do something like the 3-way handshake once, so the connection is built, and then when the client goes offline, something triggers the server and tells it that this particular client is offline.
In short: how do I monitor the client/server connection without sending packets all the time?
Thank you.
Say some link halfway between the client and server stops passing traffic. It may or may not still be possible for the client and server to communicate, depending on whether there are alternate links. But there is no way to tell whether or not that communication is possible without doing something active. There is no passive way to tell whether or not a link failure has made a connection usable or unusable.
There is, in general, no easier or more efficient way to tell whether or not communication is possible than attempting that communication and seeing whether or not it works.
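One common way to make that active check cheap (not mentioned in the answer above, just one option) is to let the TCP stack do the probing via keep-alive: the kernel only sends probes when the connection is idle, so it doesn't add traffic while real data is flowing. A minimal sketch, assuming Linux and BSD sockets, with illustrative timing values:

    // Sketch: let the kernel probe an idle connection instead of sending an
    // application-level ping every second. 'fd' is an already-connected socket.
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void enable_keepalive(int fd)
    {
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    #ifdef __linux__
        int idle = 30, interval = 5, count = 3;       // illustrative values
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
    #endif
        // After ~30s of idleness the kernel sends up to 3 probes 5s apart;
        // if none is answered, reads/writes on 'fd' start failing.
    }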
I created a simple persistent socket connection for our game using TcpClient and NetworkStream. There's no problem connecting, sending messages, and disconnecting normally (quitting app/server shuts down/etc).
However, I'm having some problems where, in certain cases, the client isn't detecting a disconnection from the server. The easiest way I have of testing this is to pull out the network cable on the wifi box, or set the phone to airplane mode, but it's happened in the middle of a game on what should otherwise be a stable wifi.
Going through the docs for NetworkStream etc, it says that the only way to detect a disconnection is to try to write to the socket. Fair enough, except, when I try, the write passes as if nothing is wrong. I can write multiple messages like this, and everything seems fine. It's only when I plug the cable back in that it sees that it's disconnected (all messages are buffered?).
The TcpClient is set to NoDelay, and there's a Flush() called after every write anyway.
What I've tried:
Writing a message to the NetworkStream - no joy
CanWrite, Connected, etc all return true
TcpClient.Client.Poll( 1000, SelectMode.SelectWrite ); - returns true
TcpClient.Client.Poll( 1000, SelectMode.SelectRead ) && TcpClient.Client.Available == 0 - returns true
TcpClient.Client.Receive(buffer, SocketFlags.Peek) == 0 - when connected, blocks for about 10-20s, then returns true. When no server, blocks forever(?)
NetworkStream.Write() - doesn't throw an error
NetworkStream.BeginWrite() - doesn't throw an error (not even when calling EndWrite())
Setting a WriteTimeout - had no effect
Having a specific time where we haven't received a message from the server (normally there's a keep-alive) - I had this, but removed it, as we were getting a lot of false-positives due to lag etc (some clients would see between 10-20s of lag)
So am I doing something wrong here? Is there any way to get the NetworkStream to throw an error (like it should) when writing to a socket that should be disconnected?
I've no problem with a keep-alive (the default case is the server will notify the client that it hasn't received anything in a while, and the client will send a heartbeat), but at the minute, according to the NetworkStream everything's hunky-dory.
It's for a game, so ideally the detection should be quick enough (as the user can still move through the game until they need to make a server call, some of which will block the UI, so the game seems broken).
I'm using Unity, so it's .Net 2.0
is to pull out the network cable on the wifi box
That's a good test. If you do that the remote party is not notified. How could it possibly find out? It can't.
when I try, the write passes as if nothing is wrong
Writes can be (and are) buffered. They eventually disappear into a black hole... no reply ever comes back. The only way to detect this is a timeout.
So am I doing something wrong here?
You have tried a lot of things but fundamentally you cannot find out about disconnects if no reply comes back telling you that. Use a timeout.
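The question is .NET, but the timeout idea itself is language-agnostic; here is a minimal sketch of the "deadline since the last reply" pattern in C++ terms (all names and the 15-second deadline are mine, for illustration only):

    // Sketch: remember when the peer last said anything; if that's too long
    // ago, treat the connection as dead even though writes still "succeed".
    #include <chrono>

    struct PeerWatch {
        std::chrono::steady_clock::time_point last_heard =
            std::chrono::steady_clock::now();

        // Call whenever a read actually returns data (including keep-alives).
        void on_data() { last_heard = std::chrono::steady_clock::now(); }

        // Poll from the main loop; the 15s deadline is illustrative.
        bool looks_dead() const {
            return std::chrono::steady_clock::now() - last_heard >
                   std::chrono::seconds(15);
        }
    };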
We currently experience a problem with a self-written server application running on Windows (occurs on different versions). The server listens at a TCP port, accepts connections, exchanges some data and then closes the connections again. There are about 100 clients that connect from time to time.
Sometimes the server stops working: log files show that connections are still accepted, but that on the first read attempt a socket error (10054 - Connection reset by peer) occurs. I don't think it is a client issue, because it suddenly stops working for all clients.
Now we have found out that the same problem occurs with our old server software, which is even written in a different programming language. So it doesn't seem to be an error in our program; I think it has to be some kind of OS / firewall issue? Of course, firewalls have been deactivated, but that didn't solve the issue.
Any ideas where to look? Wireshark logs will follow soon...
Excerpt from the log (Timestamp, Thread Id, message)
11:37:56.137 T#3960 Connection from 10.21.13.3
11:37:56.138 T#3960 Client Exception: Socket Error # 10054
Connection reset by peer.
11:37:56.138 T#3960 ClientDisconnected
11:38:00.294 T#4144 Connection from 10.21.13.3
You can see that the exception occurs almost at the same moment the connection is accepted; in this case the client reconnects after a few seconds.
A "stateful" firewall or NAT keeps track of connections, and ought to send RSTs for connectiosn it doesn't know about. If the firewall loses track of connections for some reason, then you'll probably see random connections being reset.
Our router at work does this — it forgets about connections when the PPP connection dies, which is remarkably unhelpful when it rains and the DSL restart takes a bit too long. However, instead of resetting connections, it just drops packets (even more unhelpful!).
Sounds like a firewall or routing issue - maybe stale connections get disconnected after a timeout period. Are you using a ping/keepalive inside your protocol?
Otherwise you could use Wireshark to see what is going on.
First, thanks for the many hints - I'm afraid the problem was a completely different one, which you couldn't possibly have solved by reading my question.
The server application uses log4net, configured with a log file and ImmediateFlush = true. Because every log statement was written directly to the file, handling multiple socket connections slowed the whole application down.
The server needed about a minute to really accept the connection. This was far more than the timeout on the client side. So the log only showed "accepted" followed by "disconnected" - even the log output was delayed!
Sorry for the inconvenience...
Have you tried changing the backlog, and then seeing how much time passes, or how many clients are served, before this problem occurs?
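For reference, the backlog is just the second argument to listen(); a minimal sketch (the value 128 is illustrative, and the OS may silently clamp it):

    // Sketch: 'listen_fd' is a bound TCP socket; the backlog caps how many
    // completed connections the OS queues before the accept loop gets to them.
    #include <sys/socket.h>

    void start_listening(int listen_fd)
    {
        listen(listen_fd, 128);
    }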
You don't say what Windows versions you're using for the server, but you should be aware that the Windows TCP/IP stack behaves differently in server and client OSes. There are limits on how many simultaneous incoming connections a client OS will allow, and they are significantly lower than you might expect.
What do the logs look like from the client side?
Since the error states that the client is dropping the connection: if you see the same error on the client side, then it is a firewall or proxy that is dropping the connection (both sides seeing the opposite side drop the connection is indicative of a proxy/firewall).
If the error is not present on the client side, then I would say that the client side is where you will find the actual error.
The client logs in over ssh and starts a server on the remote machine, then the client creates a TCP connection to that server.
The server needs to exit when the client has exited normally, has crashed, or the network has dropped.
So the question is how to detect whether the client that the server is connected to has crashed.
My first try was using the error() signal, catching QAbstractSocket::NetworkError to determine that the network has dropped. But I don't receive the error() signal at all, even if I pull out the network cable.
My second try was using the SocketState: I figured that whenever the SocketState is UnconnectedState, the client may have exited normally and the server should exit too. This works fine for the "normal exit" case, but I don't know how to deal with "crash" and "dead network".
Help me, thanks!
I'd recommend using TCP keep alive. It is not exposed through the public QTcpSocket interface, but you can use setsockopt with QAbstractSocket::socketDescriptor to activate the SO_KEEPALIVE feature.
EDIT: It appears that keep alive was added to QAbstractSocket at some point. So, simply call QAbstractSocket::setSocketOption with QAbstractSocket::KeepAliveOption.
You can find information about adjusting the timeout of keep alive request here: http://www.gnugk.org/keepalive.html
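For the Qt route, a minimal sketch, assuming a Qt version that has QAbstractSocket::KeepAliveOption; call it once the socket is connected (the function name is mine):

    // Sketch: turn on TCP keep-alive for a connected QTcpSocket.
    #include <QTcpSocket>

    void enableKeepAlive(QTcpSocket *socket)
    {
        // Per the edit above; the OS-level probe timings still come from
        // system defaults unless tuned with setsockopt on socketDescriptor().
        socket->setSocketOption(QAbstractSocket::KeepAliveOption, 1);
    }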
Most of the time, the only way you will know there is a problem with a socket connection is when you try to read or write with it. There are some exceptions: Windows will change the state of sockets if the network cable is unplugged, Linux (in my experience) will not.
The most reliable way to detect connection problems is to have the client regularly send a small message at an agreed upon interval with the server. If the server does not see this message within a reasonable time, it should consider the client dead and drop the connection. This will also give both sides regular opportunities to detect a problem via reads and writes.
I'm writing a Comet-like app using Flex on the client and my own hand-written server.
I need to be able to send short bursts of data from the client at quite a high frequency (e.g. of the order of 10ms between sends).
I also need the server to push short bursts of data at a similarly high frequency.
I'm using NetConnection.call() to send the data to the server, and URLStream (with chunked encoding) to push the data from the server to the client.
What I've found is that the data isn't being sent/received as soon as it's available. For example, in IE, it seems the data is sent every 200ms rather than as soon as NetConnection.call() is called. Similarly, URLStream isn't making the data available as soon as the server is sending it.
Judging by the difference in behaviour between the browsers, it seems as though the Flash Player (version 10) is relying on the host browser to do all the comms. Can anyone confirm this? Update: This is very likely as only the host browser would know about the proxy settings that might be set.
I've tried using the Socket class and there's no problem with speed there: it works perfectly. However, I'd like to be able to use HTTP-based (port 80) connections so that my app can run in heavily fire-walled environments (I tried using a Socket over port 80, but that has its problems).
Incidentally, all development/testing has been done on an internal LAN, so bandwidth/latency is not an issue.
Update: The data being sent/received is in small packets and doesn't need to be in any particular format. For example, I might need to send a short array of Numbers, and this could either be encoded in AMF (e.g. via NetConnection.call()) or could be put into GET parameters (e.g. using sendToURL()). The main point of my question is really to see whether anyone else has experienced the same problem in calling NetConnection/URLStream frequently, and whether there is a workaround (it's also possible that the fault lies with my server code of course, rather than Flash).
Thanks.
Turns out the problem had nothing to do with Flash/Flex or any of the host browsers. The problem was in my server code (written in C++ on Linux), and without access to my source code the cause is hard to find (so I couldn't have hoped for an answer from this forum).
Still - thank you everyone who chipped in.
It was only after looking carefully at the output shown in Wireshark that I noticed the problem, which was twofold:
Nagle's algorithm
I was sending replies in multiple packets by calling write() multiple times (e.g. once for the HTTP response header, and again for the HTTP response body). The server's TCP/IP stack was waiting for an ACK for the first packet before sending the second, but because of Nagle's algorithm the client was waiting 200ms before sending back the ACK to the first packet, so the server took at least 200ms to send the full HTTP response.
The solution is to use send() with the flag MSG_MORE until all the logically connected blocks are written. I could also have used writev() or setsockopt() with TCP_CORK, but it suited my existing code better to use send().
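A minimal sketch of that first fix, assuming Linux (the function name is mine; error handling omitted):

    // Sketch: write the response header and body as one logical unit.
    // MSG_MORE tells the kernel more data follows, so it coalesces the pieces
    // instead of flushing a tiny header packet and stalling on the ACK.
    #include <sys/socket.h>
    #include <cstring>

    void send_http_response(int fd, const char *header, const char *body)
    {
        send(fd, header, std::strlen(header), MSG_MORE);  // hold back
        send(fd, body,   std::strlen(body),   0);         // last piece: flush
    }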
Chunk-encoded streams
I'm using a never-ending HTTP response with chunked encoding to push data back to the client. Nagle's algorithm needs to be turned off here, because even if each chunk is written as one packet (using MSG_MORE), the client OS's TCP/IP stack will still wait up to 200ms before sending back an ACK, and the server can't push a subsequent chunk until it gets that ACK.
The solution here is to ask the server not to wait for an ACK for each sent packet before sending the next packet, and this is done by calling setsockopt() with the TCP_NODELAY flag.
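And a minimal sketch of the second fix (again the function name is mine):

    // Sketch: disable Nagle's algorithm on the chunk-streaming socket so each
    // chunk is transmitted immediately instead of waiting for the previous
    // packet to be acknowledged.
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void disable_nagle(int fd)
    {
        int on = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    }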
The above solutions only work on Linux and aren't POSIX-compliant (I think), but that isn't a problem for me.
I'm almost 100% sure the player relies on the browser for such communications. Can't find an official page stating so atm, but check this out for example:
Applications hosting the Flash Player ActiveX control or Flash Player plug-in can use the EnforceLocalSecurity and DisableLocalSecurity API calls to control security settings.
Which I think somehow implies the idea. Also, I've suffered some network-related bugs on FF/IE only, which again points to the player using each browser for networking (otherwise there wouldn't be such differences).
And regarding your latency problem, I think that if speed is critical, your best bet is sockets. You have some work to do, but seems possible, check out the docs again:
This error occurs in SWF content. Dispatched if a call to Socket.connect() attempts to connect either to a server outside the caller's security sandbox or to a port lower than 1024. You can work around either problem by using a cross-domain policy file on the server.
HTH,
Juan