What is the difference between ACTIVE and PASSIVE connect in RFC 1006 TCP connections?
It's explained here: https://www.rfc-editor.org/rfc/rfc793
A passive OPEN request means that the process wants to accept incoming connection requests rather than attempting to initiate a connection.
In short: a passive OPEN is listen() and an active OPEN is connect().
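A minimal Python sketch of the two OPEN styles (the port here is arbitrary for illustration; RFC 1006 itself runs on TCP port 102):

    import socket

    # Passive OPEN: bind and listen, waiting to accept incoming connection requests.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 10102))   # illustrative port; RFC 1006 uses 102
    server.listen(5)                  # the socket is now in LISTEN state

    # Active OPEN: initiate the connection (this sends the SYN).
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 10102))

    conn, addr = server.accept()      # completes the passive side
    print("accepted from", addr)
    conn.close(); client.close(); server.close()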
Architecture:
We have a bunch of IoT devices connected via an AWS Network Load Balancer (NLB) to our backend servers.
This is a bidirectional channel (not request-response style; messages are passed from either party to the other).
Objective:
How to keep connections (on both sides of the NLB) alive during inactivity.
Description:
Frequently, clients go inactive and do not send (or receive) anything to (or from) the servers. If this state lasts longer than 350 seconds (the connection idle timeout of NLBs), the LB silently kills the connection. This is bad, because we see a lot of RST packets everywhere.
Questions:
I'm aware of the SO_KEEPALIVE feature and can enable it on our backend servers. This keeps the connection between the backend servers and the NLB alive. But what about the clients? Do NLBs forward TCP keep-alive packets to the other party? (Here it says they do not.) If they do not, how do we keep client connections open? (At the moment, I'm thinking of sending an empty message to keep the connection.)
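For reference, this is roughly how I enable it, a sketch in Python on Linux (the TCP_KEEP* socket options are Linux-specific names, and the endpoint is made up):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs, tuned to probe well before the NLB's 350 s idle timeout.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)  # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before drop
    sock.connect(("backend.example.com", 8883))                    # made-up endpoint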
Is this behavior specific to AWS NLBs, or do load balancers generally work this way?
AWS docs say that the NLB TCP listener has the ability to keep a connection alive with TCP keep-alive packets: link
For TCP listeners, clients or targets can use TCP keepalive packets to reset the idle timeout.
Based on my tests, the client receives the TCP keep-alive packets sent by the server and correctly responds back.
The server doesn't interrupt the connection, which means it receives the response from the client.
This means the NLB TCP listener actually forwards keep-alive packets.
Based on the same docs, the NLB TLS listener shouldn't react the same way to TCP keep-alive packets.
TCP keepalive packets are not supported for TLS listeners.
But actual test results surprised me when Wireshark showed keep-alive packets received on a client connected through a TLS listener.
My test results from 2 months ago don't correspond to what I'm experiencing now, so I think the behaviour may have changed.
(Previously, the server kept the connection open even after the client became unavailable in an unexpected manner.)
Not an answer, just to document what I found/did:
NLBs do not forward keep-alive packets, meaning you have to enable them on both servers and clients.
The NLB's idle timeout cannot be changed; it's 350 seconds.
I couldn't find any way to forge an empty TCP packet to fool the LB into forwarding it to the other side.
In the end, we implemented the keep-alive feature at the application layer (sending an empty message to clients periodically), as sketched below.
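A rough sketch of that application-layer keep-alive (the helper name and the empty-message encoding are ours, purely illustrative):

    import socket
    import threading
    import time

    KEEPALIVE_INTERVAL = 300  # seconds, safely under the NLB's 350 s idle timeout

    def app_keepalive(sock: socket.socket) -> None:
        # Periodically send an empty application-level message so the NLB sees traffic.
        while True:
            time.sleep(KEEPALIVE_INTERVAL)
            try:
                sock.sendall(b"\x00\x00")  # our protocol's "empty message" marker (illustrative)
            except OSError:
                break  # connection is gone; let the main loop handle reconnecting

    # usage: threading.Thread(target=app_keepalive, args=(conn,), daemon=True).start()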
I was discussing Websocket overhead with a fellow engineer and we were both uncertain of how a Websocket actually detects the status of a client connection.
Is there a "status" packet sent to the client/server periodically?
Does it have anything to do with ping or pong in the low level API?
What about frames?
How does the Websocket detect that it's disconnected on the client? the server?
I'm surprised I couldn't find this answered on SO, but that may be my mistake. I found this answer addresses scalability, but that's not what I'm asking here. This answer touches on implementation, but not at the depth I'm pursuing here.
A webSocket connection is a TCP connection that uses the webSocket protocol. By default, a server or client knows the connection has disappeared only when the underlying TCP layer realizes the connection has been closed; the webSocket layer listens for a close event on the connection and is informed that way.
The webSocket protocol does not, by itself, require heartbeat packets that regularly test whether the connection is still working. The TCP socket may still appear to be alive, but the connection may not actually work any more: the other end could have disappeared, or the link could have been interrupted in between, and one or both endpoints might not know that at any given time.
Socket.io which is built on top of webSocket, uses the ping and pong packets to implement a heartbeat which regularly tests the connection and will, in fact, detect a non-functioning connection at the client, close the socket and then reconnect automatically.
Is there a "status" packet sent to the client/server periodically?
Not by default for a regular webSocket connection.
Does it have anything to do with ping or pong in the low level API?
It is up to a client or server to send ping or pong packets themselves if they want to implement some sort of connection-liveness detection.
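For illustration, a non-browser client can send a ping explicitly; here is a sketch using the third-party Python websockets library (the endpoint URL is made up, and note that browsers do not expose ping/pong in their JavaScript API):

    import asyncio
    import websockets  # third-party: pip install websockets

    async def check_alive(uri: str) -> None:
        async with websockets.connect(uri) as ws:
            pong_waiter = await ws.ping()           # sends a ping control frame
            await asyncio.wait_for(pong_waiter, 5)  # TimeoutError if no pong within 5 s
            print("connection is alive")

    asyncio.run(check_alive("ws://echo.example.com"))  # made-up endpoint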
What about frames?
webSocket frames are the data format for sending data over a webSocket. They don't have anything to do with this issue.
How does the Websocket detect that it's disconnected on the client? the server?
Described above. Unless the client/server implement their own ping/pong system to detect when a connection has gone awry, they just rely on TCP signaling to know when a connection has been closed by the other end. A webSocket connection can be non-functional without the client or server knowing it until they try to send.
When a browser window/tab opens a webSocket connection and then the window/tab is retargeted to a new URL, the browser will close all resources associated with that window/tab including any webSocket connections. If the link between client and server is functional at that point, then the server will be told the underlying TCP connection (and thus the webSocket) has been closed. If the network link goes down and then the user moves that window/tab to a new URL, the server will not necessarily know that the connection is non-functional without relying on ping/pong type signalling to regularly test the connection.
If the browser crashes, the OS should close any sockets opened by that process.
If the computer/OS crashes, the sockets will likely not get closed normally (though this may be somewhat OS dependent and may be crash-dependent too).
HTTP header: Connection: Keep-Alive
After reading a lot about this, I still can't understand how it works.
wiki:
A keepalive signal can also be used to indicate to Internet infrastructure that the connection should be preserved. Without a keepalive signal, intermediate NAT-enabled routers can drop the connection after a timeout.
I don't understand:
A server can have 1,000,000 concurrent connections.
John sends a request to the server.
Paul's computer is on the same LAN, near John's. Paul also sends a request to the same server.
John's and Paul's organization is behind a router.
How the hell does the server know how to keep the connection alive for both Paul and John?
Also, when John sends a request the second time, it "doesn't open a new connection", so how is keep-alive applied here?
First of all, a TCP/IP connection is not some thin wire temporarily connecting two computers. At the end of the day, both TCP and UDP are just a series of packets; it's the operating system that pretends you have a connection by putting the IP packets back together in the correct order.
Now back to your question. Note that the problem is not really HTTP-specific; all of this works at the TCP/IP layer. Say Paul has 192.168.0.100 and John has 192.168.0.101 as internal IP addresses, while the NAT has the public address 1.2.3.4. When Paul connects to some server, his OS uses the address 192.168.0.100:54321 (the port is chosen randomly by the OS). This request hits the NAT, which remembers that address and forwards the request to the external server. The external server sees 1.2.3.4:4321 (notice the different port), because the user is behind the NAT, so the internal IP is not visible.
When the external server (let it be a web server) sends a reply, it sends it to 1.2.3.4:4321. The NAT, in turn, remembers that port 4321 should be forwarded to 192.168.0.100:54321 - and so it is.
Now imagine John sends a request to the same server. This TCP/IP connection is routed through the NAT, which remembers that a request from 192.168.0.101:32123 was made. This request is then forwarded using the public 1.2.3.4:4322 (notice the different port). When the response arrives, the NAT checks the port: if it is 4322, it routes to 192.168.0.101:32123 (John); otherwise (on port 4321) Paul gets his reply.
Note: do not confuse client ephemeral port with server port (80 in HTTP by default).
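You can see the OS picking the ephemeral port yourself; a quick Python sketch (the destination host is just an example):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("example.com", 80))
    print("local (client) side: ", sock.getsockname())  # e.g. ('192.168.0.100', 54321)
    print("remote (server) side:", sock.getpeername())  # e.g. ('93.184.216.34', 80)
    sock.close()

Behind a NAT, the server would instead see the NAT's public address and whatever port the NAT picked for the mapping.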
I've been studying RFC 1928 and the description of the BIND operation wasn't clear to me. The setup sequence is described as follows, as I understand it:
1. The client establishes a connection to the SOCKS5 server
2. The client performs the CONNECT request
3. The client establishes a new TCP connection to the SOCKS5 server and requests BIND
4. The server replies immediately with the result of the BIND operation
5. Upon receiving an incoming connection, the SOCKS5 server sends a notification to the client
What is not immediately clear to me is step 5. Do I have to re-request BIND afterwards to allow more incoming connections?
As far as I understand, the same TCP connection (established at step 3) is used for communication with the accepted peer. What if I need to keep accepting connections on the same address:port? Is that possible at all?
You need a separate BIND request for each connection you want to accept, as there is only 1 notification sent back by the SOCKS proxy when a client connects to the bound port. Whether or not the SOCKS5 proxy allows multiple BIND requests on the same IP/Port depends on the proxy's implementation.
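For reference, the BIND request wire format from RFC 1928 section 4 (VER CMD RSV ATYP DST.ADDR DST.PORT, with CMD = 0x02 for BIND) can be assembled like this Python sketch; the peer address and port are illustrative:

    import socket
    import struct

    def socks5_bind_request(ip: str, port: int) -> bytes:
        # VER=5, CMD=2 (BIND), RSV=0, ATYP=1 (IPv4), then DST.ADDR and DST.PORT.
        return (b"\x05\x02\x00\x01"
                + socket.inet_aton(ip)      # DST.ADDR, 4 bytes
                + struct.pack(">H", port))  # DST.PORT, network byte order

    # Expect two replies on this connection: one when the proxy binds the port,
    # and a second when a peer actually connects to the bound address.
    req = socks5_bind_request("203.0.113.7", 7000)  # illustrative peer address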
Some things look strange to me:
What is the distinction between 0.0.0.0, 127.0.0.1, and [::]?
How should each part of the foreign address be read (part1:part2)?
What do the states TIME_WAIT and CLOSE_WAIT mean?
etc.
Could someone give a quick overview of how to interpret these results?
0.0.0.0 usually refers to stuff listening on all interfaces.
127.0.0.1 = localhost (only your local interface)
I'm not sure about [::]
TIME_WAIT means both sides have agreed to close and TCP must now wait a prescribed time before taking the connection down.
CLOSE_WAIT means the remote system has finished sending and your system has yet to say it's finished.
I understand the answer has been accepted but here is some additional information:
If it says 0.0.0.0 on the Local Address column, it means that port is listening on all 'network interfaces' (i.e. your computer, your modem(s) and your network card(s)).
If it says 127.0.0.1 on the Local Address column, it means that port is ONLY listening for connections from your PC itself, not from the Internet or network. No danger there.
If it displays your online IP on the Local Address column, it means that port is ONLY listening for connections from the Internet.
If it displays your local network IP on the Local Address column, it means that port is ONLY listening for connections from the local network.
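To see where these local addresses come from, a quick Python sketch of the three bind variants (the ports are arbitrary):

    import socket

    # Listens on all interfaces: netstat shows 0.0.0.0:9000
    any_iface = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    any_iface.bind(("0.0.0.0", 9000))
    any_iface.listen(1)

    # Listens on loopback only: netstat shows 127.0.0.1:9001
    # (reachable only from this machine)
    loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    loopback.bind(("127.0.0.1", 9001))
    loopback.listen(1)

    # IPv6 wildcard: netstat shows [::]:9002
    v6_any = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    v6_any.bind(("::", 9002))
    v6_any.listen(1)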
Foreign Address - the IP address and port number of the remote computer to which the socket is connected. The names that correspond to the IP address and the port are shown unless the -n parameter is specified. If the port is not yet established, the port number is shown as an asterisk (*). (from Wikipedia)
What is the distinction between 0.0.0.0, 127.0.0.1, and [::]?
0.0.0.0 indicates something that is listening on all interfaces on the machine.
127.0.0.1 indicates your own machine.
[::] is the IPv6 version of 0.0.0.0
My machine also shows *:* for UDP, which shows that UDP connections don't really have a foreign address - they receive packets from anywhere. That is the nature of UDP.
How should each part of the foreign address be read (part1:part2)?
part1 is the hostname or IP address; part2 is the port
127.0.0.1 is your loopback address also known as 'localhost' if set in your HOSTS file. See here for more info: http://en.wikipedia.org/wiki/Localhost
0.0.0.0 means that an app has bound to all ip addresses using a specific port. MS info here: http://support.microsoft.com/default.aspx?scid=kb;en-us;175952
'::' is the IPv6 all-zeros shorthand - the IPv6 equivalent of IPv4's 0.0.0.0.
Send-Q is the amount of data sent by the application, but not yet acknowledged by the other side of the socket.
Recv-Q is the amount of data received from the NIC, but not yet consumed by the application.
Both of these queues reside in kernel memory.
There are guides to help you tweak these kernel buffers, if you are so inclined, although you may find the default parameters do quite well.
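If you'd rather check the per-socket buffer limits from code than tweak kernel-wide settings, a quick Python sketch (defaults and caps vary by OS):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Maximum sizes of the kernel buffers that back Send-Q and Recv-Q.
    print("SO_RCVBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    print("SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    # Raise per-socket (still subject to kernel caps, e.g. net.core.rmem_max on Linux):
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)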
This link has helped me a lot to interpret netstat -a
A copy from there -
TCP Connection States
Following is a brief explanation of this handshake. In this context the "client" is the peer requesting a connection and the "server" is the peer accepting a connection. Note that this notation does not reflect Client/Server relationships as an architectural principle.
Connection Establishment
The client sends a SYN message which contains the server's port and the client's Initial Sequence Number (ISN) to the server (active open).
The server sends back its own SYN and ACK (which consists of the client's ISN + 1).
The Client sends an ACK (which consists of the server's ISN + 1).
Connection Tear-down (modified three-way handshake).
The client sends a FIN (active close). This is now a half-closed connection: the client no longer sends data, but is still able to receive data from the server. Upon receiving this FIN, the server enters a passive close state.
The server sends an ACK (which is the client's FIN sequence + 1).
The server sends its own FIN.
The client sends an ACK (which is server's FIN sequence + 1). Upon receiving this ACK, the server closes the connection.
A half-closed connection can be used to terminate sending data while still receiving data. Socket applications can call shutdown with the second argument set to 1 to enter this state.
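In Python that half-close is shutdown(socket.SHUT_WR), where SHUT_WR is that second-argument value of 1; a sketch against a plain-HTTP server:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    sock.shutdown(socket.SHUT_WR)    # send FIN: done sending, still able to receive
    response = b""
    while chunk := sock.recv(4096):  # read until the server closes its side
        response += chunk
    sock.close()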
State explanations as shown in Netstat:
State Explanation
SYN_SEND Indicates active open.
SYN_RECEIVED Server just received SYN from the client.
ESTABLISHED Client received server's SYN and session is established.
LISTEN Server is ready to accept connection.
NOTE: See documentation for listen() socket call. TCP sockets in listening state are not shown - this is a limitation of NETSTAT. For additional information, please see the following article in the Microsoft Knowledge Base:
134404 NETSTAT.EXE Does Not Show TCP Listen Sockets
FIN_WAIT_1 Indicates active close.
TIMED_WAIT Client enters this state after active close.
CLOSE_WAIT Indicates passive close. Server just received first FIN from a client.
FIN_WAIT_2 Client just received acknowledgment of its first FIN from the server.
LAST_ACK Server is in this state when it sends its own FIN.
CLOSED Server received ACK from client and connection is closed.
For those seeing [::] in their netstat output, I'm betting your machine is running IPv6; that would be equivalent to 0.0.0.0, i.e. listen on any IPv6 address.