I start the RYU controller, which opens TCP listening port 6633. Then I connect my Mininet topology to the controller on port 6633. My Mininet topology consists of 6 switches, so there are 6 connections, one from each switch to the RYU controller port.
Now I bring down my controller and start the controller again.
I find that all the switches in my topology can talk to the controller as if the controller process were never killed.
This is not how I understand a TCP connection between a server and a client. If the server goes down, I would expect the connection to be torn down.
This set of connections seems to survive a server process restart. Can someone explain how this is happening? I am just curious.
When Ryu shuts down, it will close the active TCP connection with a FIN packet, upon receiving which the switches will also tear down the active TCP connection.
The reason the switches and Ryu start talking again is simply that the switches always attempt to re-establish a TCP session with the controller after the previous connection goes down.
Using Wireshark to capture the packets (by display-filtering on tcp.port==6633) will show you how this all happened.
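As a rough illustration (this is not Open vSwitch's actual code, and the controller address is a placeholder), the switch side behaves like a simple reconnect loop:

```python
import socket
import time

# Minimal sketch, not Open vSwitch's actual implementation: keep dialling the
# controller (placeholder address) and go back to dialling whenever the TCP
# session drops, just like an OpenFlow switch does.
CONTROLLER = ("127.0.0.1", 6633)

while True:
    try:
        with socket.create_connection(CONTROLLER, timeout=5) as conn:
            print("connected to controller")
            # ... exchange OpenFlow messages until the peer closes or errors out
            while conn.recv(4096):
                pass
    except OSError:
        pass
    print("connection lost, retrying in 1 s")
    time.sleep(1)
```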
The OpenFlow rules present in the switches have an expiry time. So if the controller can restart within the expiry time, the topology will be fine.
Turn the controller off completely for 30 seconds and you will see that the topology goes completely dead, that is, no host can ping any other.
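Those expiry times come from the controller itself: whatever Ryu app installs the rules chooses the timeouts. A minimal, hypothetical Ryu handler might look like this:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ExpiringFlows(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Flood everything, but let the entry expire: idle_timeout removes it
        # after 30 s without matching traffic, hard_timeout after 300 s no
        # matter what. Once expired, forwarding stops until a controller
        # reinstalls the rule. Values are illustrative.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=1,
                                            match=match, instructions=inst,
                                            idle_timeout=30, hard_timeout=300))
```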
The goal is to make an HTTP request from the client browser to my server. Simple stuff; however, I'm hitting a wall with the networking portion. In order to expose my server to WAN I have used one of my public IPs and NAT to translate to the private IP of my server on inbound traffic and to my public IP on outbound traffic.
The issue is that I can't make a connection. Specifically, I can't get the last part of the TCP handshake. Using a test setup with Wireshark on the client and server, I can see that the client sends the SYN -> the server receives the SYN -> the server sends a SYN/ACK -> the client receives the SYN/ACK -> the client sends an ACK -> the server DOES NOT receive the ACK. The server waits for a moment, then retransmits. Eventually it resets the connection.
I have tried adding various firewall rules even though I don't think it could be the firewall because the first packets make a successful round trip.
I've turned Windows Firewall off (the server)
I've tried disabling TCP checksum offloading
I've looked for network anti-virus settings on the server and on the SonicWall (the router)
I would expect the TCP connection to complete. I can't for the life of me think of a reason why the ACK would consistently go missing.
That is another thing. The behavior is consistent.
Pings also work just fine.
NOTE: The server is actually a VM and the physical server that manages it is in my network.
Any guidance on what to try and where to look would be very much appreciated. Thanks.
UPDATE: I can make a connection using port 5000 (it's another port I have opened on the firewall). Port 80 still doesn't work, though.
In my case this was caused by Cox not allowing inbound traffic to port 80. I'm not sure why the first portions of the TCP handshake were getting through. If anyone can explain that part, leave a comment.
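For anyone debugging something similar, here is a rough client-side check (placeholder address, the two ports discussed above). A successful connect() only proves the SYN/ACK came back, so send a minimal request and wait for a reply to confirm the server really saw the final ACK and the data:

```python
import socket

# Hypothetical quick check from an outside host. The address is a placeholder
# (TEST-NET); the ports are the ones discussed in the question.
HOST = "203.0.113.10"

for port in (80, 5000):
    try:
        with socket.create_connection((HOST, port), timeout=5) as s:
            s.sendall(b"GET / HTTP/1.0\r\nHost: example.test\r\n\r\n")
            s.settimeout(5)
            data = s.recv(1024)
            print(port, "reply received" if data else "connection closed without data")
    except OSError as exc:
        print(port, "failed:", exc)
```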
Can you describe the main process in terms of TCP connection states?
In fact, I'm more concerned about whether connections that have already been established can be closed after the client receives a proper reply from the server... That's part of a graceful shutdown, I think.
The accepted connections are totally independent of the listening socket. So the server can stop listening and the accepted sockets can still be used as if nothing happened. This means that each accepted socket has its own TCP connection state (diagram).
Often, though, servers stop listening when they are shut down, so they close all sockets at that time.
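A small Python sketch of that independence (loopback address and port are placeholders): the server accepts one connection, closes the listening socket, and the accepted socket keeps working:

```python
import socket

# Sketch: the accepted socket outlives the listening socket.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 6633))
listener.listen()

conn, addr = listener.accept()   # a dedicated socket for this client
listener.close()                 # stop listening; no new clients can connect

# The established connection still has its own TCP state and keeps working.
data = conn.recv(1024)
conn.sendall(data)               # echo the data back
conn.close()                     # only now is this connection's FIN sent
```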
Architecture:
We have a bunch of IoT devices connected via an AWS Network Load Balancer (NLB) to our backend servers.
This is a bidirectional channel (not a request response style, but messages passed from either party to the other).
Objective:
How to keep connections (both sides of NLB) alive during inactivity.
Description:
Frequently, clients go into an inactive mode and do not send (or receive) anything to (or from) the servers. If this state lasts longer than 350 seconds (the connection idle timeout value of NLBs), the LB silently kills the connection. This is bad, because we see a lot of RST packets everywhere.
Questions:
I'm aware of the SO_KEEPALIVE feature and can enable it on our backend servers. This keeps the connection between the backend servers and the NLB alive. But what about the clients? Do NLBs forward TCP keep-alive packets to the other party? (Here it says it does not.) If it does not, how do I keep the client connections open? (At the moment, I'm thinking of sending an empty message to keep the connection alive.)
Is this behavior specific to AWS NLBs, or do load balancers generally work this way?
AWS docs say that the NLB TCP listener has the ability to keep a connection alive with TCP keep-alive packets: link
For TCP listeners, clients or targets can use TCP keepalive packets to reset the idle timeout.
Based on my tests, the client receives the TCP keep-alive packets sent by the server and correctly responds to them.
The server doesn't interrupt the connection, which means it receives the response from the client.
This means that the NLB TCP listener actually forwards keep-alive packets.
Based on the same docs, the NLB TLS listener shouldn't react the same way to TCP keep-alive packets.
TCP keepalive packets are not supported for TLS listeners.
But the actual test results surprised me: Wireshark showed keep-alive packets received on a client connected through a TLS listener.
My previous test results from 2 months ago don't correspond to what I'm experiencing now, and I'm thinking the behaviour may have changed.
(Previously the server kept the connection even after the client became unavailable in an unexpected manner.)
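For completeness, this is roughly how the keep-alive was enabled on the server side in my tests (Python on Linux; the option names are Linux-specific and the values are illustrative only):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: start probing after 60 s of idle time, probe every
# 30 s, and give up after 5 unanswered probes. The idle time must stay below
# the NLB's 350 s idle timeout for the keep-alives to matter.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
```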
Not an answer, just to document what I found/did:
NLBs do not forward keep-alive packets, meaning you have to enable them on both the servers and the clients.
The NLB's idle timeout cannot be changed; it's 350 seconds.
I couldn't find any way to forge an empty TCP packet to fool the LB into forwarding it to the other side of the LB.
In the end, we implemented the keep-alive feature at the application layer (sending an empty message to clients periodically).
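A minimal sketch of that application-layer keep-alive (the no-op message and interval are our own choices, shown here as placeholders):

```python
import socket
import threading

HEARTBEAT_INTERVAL = 300  # seconds; must be below the NLB's 350 s idle timeout


def heartbeat(conn: socket.socket, stop: threading.Event) -> None:
    """Periodically send a no-op application message so the NLB sees traffic
    on the connection and does not expire it."""
    while not stop.is_set():
        try:
            conn.sendall(b"\n")  # placeholder message the peer simply ignores
        except OSError:
            return               # connection is gone; let the main code handle it
        stop.wait(HEARTBEAT_INTERVAL)

# Usage sketch: threading.Thread(target=heartbeat, args=(conn, stop), daemon=True).start()
```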
I was discussing Websocket overhead with a fellow engineer and we were both uncertain of how a Websocket actually detects the status of a client connection.
Is there a "status" packet sent to the client/server periodically?
Does it have anything to do with ping or pong in the low level API?
What about frames?
How does the Websocket detect that it's disconnected on the client? the server?
I'm surprised I couldn't find this answered on SO, but that may be my mistake. I found this answer addresses scalability, but that's not what I'm asking here. This answer touches on implementation, but not at the depth I'm pursuing here.
A webSocket connection is a TCP connection that uses the webSocket protocol. By default, a server or client knows the connection has disappeared only when the underlying TCP layer realizes the connection has been closed; the webSocket layer listens for a close event on the connection and is informed that way.
The webSocket protocol does not, by itself, require heartbeat packets that can regularly test if the connection is still working. The TCP socket may still appear to be alive, but the connection may not actually still work. The other end could have disappeared or been interrupted in between and one or both endpoints might not know that at any given time.
Socket.io which is built on top of webSocket, uses the ping and pong packets to implement a heartbeat which regularly tests the connection and will, in fact, detect a non-functioning connection at the client, close the socket and then reconnect automatically.
Is there a "status" packet sent to the client/server periodically?
Not by default for a regular webSocket connection.
Does it have anything to do with ping or pong in the low level API?
It is up to a client or server whether they want to send ping or pong packets themselves to implement some sort of connection validation.
What about frames?
webSocket frames are the data format for sending data over a webSocket. They don't have anything to do with this issue.
How does the Websocket detect that it's disconnected on the client? the server?
Described above. Unless the client/server implement their own ping/pong system to detect when a connection has gone awry, they just rely on TCP signaling to know when a connection has been closed by the other end. A webSocket connection can be non-functional without the client or server knowing it until they try to send.
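As an illustration only (assuming the Python websockets package and its ping() API, not anything built into the webSocket protocol itself), a client-side heartbeat could look like this:

```python
import asyncio
import websockets  # third-party library; ping() returns a waiter for the pong


async def monitor(url: str) -> None:
    # Application-level heartbeat: ping every 30 s and treat a missing pong
    # within 10 s as a dead connection. Intervals are illustrative.
    async with websockets.connect(url) as ws:
        while True:
            try:
                pong_waiter = await ws.ping()
                await asyncio.wait_for(pong_waiter, timeout=10)
            except (asyncio.TimeoutError, websockets.ConnectionClosed):
                print("connection considered dead")
                return
            await asyncio.sleep(30)

# asyncio.run(monitor("ws://example.test/socket"))  # placeholder URL
```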
When a browser window/tab opens a webSocket connection and then the window/tab is retargeted to a new URL, the browser will close all resources associated with that window/tab including any webSocket connections. If the link between client and server is functional at that point, then the server will be told the underlying TCP connection (and thus the webSocket) has been closed. If the network link goes down and then the user moves that window/tab to a new URL, the server will not necessarily know that the connection is non-functional without relying on ping/pong type signalling to regularly test the connection.
If the browser crashes, the OS should close any sockets opened by that process.
If the computer/OS crashes, the sockets will likely not get closed normally (though this may be somewhat OS dependent and may be crash-dependent too).
I have around 20 clients communicating with a central server in the same LAN. The clients can make transactions with the server simultaneously. The server forwards each transaction to an external appliance on the network. Sometimes it works, sometimes my application shows a "time out" message on a client screen (randomly).
I mirrored all traffic and found TCP Retransmission after TCP Reset packets for the first TCP Sequence. I immediately thought about packet loss but all my cables/NIC are fine, and I do not see DUP ACK in the capture.
It seems that RST packets can have different meanings.
What causes those TCP Reset?
Where should I focus my investigation: network or application design?
I would appreciate any help. Thanks in advance.
Judging by the capture, I assume your central server is 137.56.64.31. What's happening is the clients are initiating a connection to the server with a SYN packet and the server responds with a RST. This is typical if the server has no application listening on that particular port e.g. the webserver application isn't running and a client tries to connect to port 80.
The clients are all connecting to different ports on the server, which is unusual for a central server, but not unheard of. The destination ports the clients are connecting to on the server are: 11007, 11012, 11014, 11108, and 11115. Is that normal for the application? If not, the clients should be connecting to whatever port the application server is listening on.
The reason for the retransmits is that instead of giving up on the connection upon receiving a RST from the server, the client tries to initiate the connection again so Wireshark considers it a retransmission.
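As a quick illustration (placeholder address, one of the ports from the capture), a plain socket connect shows how a RST in response to the SYN surfaces on the client:

```python
import socket

# If nothing is listening on the target port, the server's OS answers the SYN
# with a RST and connect() fails with ConnectionRefusedError on the client.
try:
    socket.create_connection(("192.0.2.10", 11007), timeout=3)
    print("something is listening")
except ConnectionRefusedError:
    print("RST received: no application is listening on that port")
except socket.timeout:
    print("no answer at all (packets likely filtered)")
```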