Broken connection - TCP

I have a long-running PHP script on CentOS 7 that holds long-lived TCP connections to another machine.
When I disable the firewall on CentOS, everything works well. But when I enable the firewall, everything is fine at first; after a few hours I see errors like
Broken Pipe
Connection timed out
Why? I have a heartbeat mechanism in the script; I have tried both heartbeat = 15 and heartbeat = 60.
Why does the firewall affect my long-lived TCP connections? Why does it close them after hours?
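A likely explanation: firewalld/iptables track connections in the conntrack table and evict entries that sit idle past a timeout; once the entry is gone, packets on the old connection are dropped, which surfaces in the application as Broken pipe or Connection timed out. One workaround is OS-level TCP keepalive with probes set well below the firewall's idle timeout. A minimal sketch (Python for illustration, since the original script is PHP; host and port are placeholders):

    # Enable OS-level TCP keepalive so the firewall's conntrack entry
    # never expires while the connection sits idle.
    import socket

    sock = socket.create_connection(("other-machine.example", 9000))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs: start probing after 60 s idle, probe every
    # 30 s, give up after 5 failed probes.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

If your heartbeat already travels over the same TCP connection, it should be keeping the entry fresh, so it is also worth checking dmesg for "nf_conntrack: table full, dropping packet": an overflowing conntrack table drops state even for active connections.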

Related

IIS with secure socket connection keeps responding after stopping website

I'm running IIS 10.0 on Windows Server 2019 Standard with a simple ASP.NET Framework 4.7.2 application that uses long-running WebSocket connections and SignalR.
Stopping the website works as expected and the sockets are closed when I'm using non-secure sockets. However, with a secure connection (TLS/SSL), the worker process hangs for as long as the sockets stay open; the client continues to send and receive messages from the server. The only way to fix this is to have the client restart the connection.
Both direct WebSockets and SignalR cause this issue: the application keeps running after the website is stopped, transmitting and receiving messages over the secure socket; as soon as the socket closes, the worker process dies as expected.
Here is a similar issue with no answer: Server keeps sending ping messages to client after IIS site is stopped.
The connections do not time out after the app pool timeout (90 s).
Here is a screenshot of the active connections
What could be causing this, and how do I make sure these connections are dropped when a stop/recycle is requested by IIS?
Update:
If I change the port but still use a secure connection, the problem goes away. On website stop, the connections are dropped and the worker process dies as expected. So it seems to have something to do with port 443...
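One thing worth checking: on Windows, TLS on port 443 is usually terminated by HTTP.sys, a kernel-mode driver whose connections show up under PID 4 ("System") rather than under the w3wp.exe worker process, which could explain why changing the port changes the behavior. A hedged diagnostic sketch (Python with the third-party psutil package; needs to run elevated to see other processes' connections) to check which process owns the lingering connections:

    # List established TCP connections on port 443 and their owning
    # process, to see whether they belong to w3wp.exe or to HTTP.sys
    # (PID 4, "System").
    import psutil

    for conn in psutil.net_connections(kind="tcp"):
        if conn.laddr and conn.laddr.port == 443 and conn.status == "ESTABLISHED":
            name = psutil.Process(conn.pid).name() if conn.pid else "?"
            print(conn.laddr, "<-", conn.raddr, "owned by", conn.pid, name)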

WebSocket connection opens and closes immediately

I'm using WebSockets and have successfully deployed my WAR file on AWS Elastic Beanstalk. I am using Nginx as a proxy server and a Classic Load Balancer listening on port 80, with the protocol set to TCP instead of HTTP. Cross-zone load balancing is enabled, and connection draining is enabled with a draining timeout of 60 seconds. The logs show no errors.
I have not changed the default nginx.conf file.
The connection upgrade happens successfully, with the status code returning '101 Switching Protocols'. I just cannot figure out why the connection closes immediately.
Any help is appreciated. Thanks.
This wonderful answer helped me solve this problem.
It turned out that the corporate WiFi network I was on had a firewall that was immediately terminating my WebSocket connection. It worked perfectly fine when I tried a different WiFi network without such a firewall.
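A quick way to test whether the network path, rather than the server, is killing the socket: run the same minimal client from two different networks and compare. A sketch (Python with the third-party websockets package; the URL is a placeholder, and the endpoint is assumed to echo messages back):

    import asyncio
    import websockets

    async def probe(url: str) -> None:
        # ping_interval=None keeps the connection truly idle between
        # sends, so idle-timeout middleboxes are not masked by pings.
        async with websockets.connect(url, ping_interval=None) as ws:
            await ws.send("hello")
            print("reply:", await ws.recv())
            await asyncio.sleep(60)          # stay idle; does the socket survive?
            await ws.send("still there?")
            print("reply:", await ws.recv())

    asyncio.run(probe("ws://my-app.example.com/ws"))

If the connection dies on one network but survives on another, the server and load balancer configuration are off the hook.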

How to prevent ssh connection from timing out?

I have a problem with an SSH connection.
I execute commands on a remote Linux server via Python paramiko. The command runs for a long time (more than 40 minutes) and only returns the output I want to check at the end; nothing is returned while it is executing.
I have tried different ways:
If I don't change the sshd config on the remote server, the SSH connection fails before the result comes back.
With TCPKeepAlive/ClientAliveInterval/ClientAliveCountMax enabled on the remote server, the server periodically sends messages to the client during the idle period, which the client reports as unhandled keepalive requests. This prevents the connection from timing out, but this way I can't get the output returned by the command.
Help me, thank you!
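One approach that usually resolves this: have paramiko itself send keepalives from the client side, instead of relying on sshd's ClientAlive* settings; the connection stays alive and the command's output is still delivered when it finishes. A minimal sketch (host, credentials, and the command are placeholders):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("remote.example.com", username="user", password="secret")

    # Send an SSH-level keepalive every 30 s while the channel is idle.
    client.get_transport().set_keepalive(30)

    stdin, stdout, stderr = client.exec_command("long_running_command")
    exit_status = stdout.channel.recv_exit_status()   # blocks until the command ends
    print("exit:", exit_status)
    print(stdout.read().decode())                     # the output you were missing
    client.close()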

DB2 ODBC inconsistent connectivity

My ASP.NET application uses ODBC (64-bit) to connect to a DB2 database (9.5.3). I am using the 64-bit IBM DB2 Client 10.5 on Windows Server 2008 R2.
Connection pooling is turned on.
It works fine (immediate connectivity) most of the time, but occasionally it takes far too long to establish a connection: 10 to 11 minutes. No error is reported. There is no issue with the database itself, as it can be accessed from other servers at the same time.
All database requests issued via the DB2 client during this period are held, and once connectivity is established, all of them execute immediately.
While the issue is occurring, I tried connecting to the database from the Windows server's command line, and that too waits for around 10 minutes. No error is reported.
The network team says they don't see any traffic from the Windows server to the database server while the issue is occurring, and no errors on the network side, which suggests the DB2 client is not even issuing the connection request.
What could be causing the delay? The issue is intermittent and resolves itself after around 10 minutes. Is there an issue with the DB2 driver? Was some resource withheld during that time? I am closing all connections properly.
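Since no traffic reaches the network, the stall likely happens before any packet leaves the box; name resolution is a classic culprit. A diagnostic sketch that times DNS resolution and the raw TCP connect separately, to narrow down where the ~10 minutes go (hostname and port are placeholders; 50000 is a common DB2 default):

    import socket
    import time

    host, port = "db2server.example.com", 50000

    # Time the name lookup on its own.
    t0 = time.perf_counter()
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    t1 = time.perf_counter()
    print(f"DNS resolution took {t1 - t0:.2f} s -> {infos[0][4]}")

    # Then time a bare TCP connect to the DB2 port.
    t0 = time.perf_counter()
    with socket.create_connection((host, port), timeout=30):
        pass
    t1 = time.perf_counter()
    print(f"TCP connect took {t1 - t0:.2f} s")

If both of these are fast while the ODBC connect still hangs, the delay is inside the driver stack rather than on the wire.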

What may be the reason for JMS or the OS not recognizing a connection drop?

We have a system with servers communicating over JMS. Sometimes some of the servers cannot reconnect after losing their connection to JMS. The loss of connection happens when a server restarts, on bad network conditions, and so on. What prevents reconnecting is a "ClientID already in use" error.
Excerpt from the JMS log:
"A client on connection guest#10.0.0.106:2390 tried to use client id ABC which is already in use
In-conflict clientID ABC is owned by local connection guest#10.0.0.106:1098"
All servers have distinct client IDs. The connection on remote port 1098 existed before server 10.0.0.106 lost its connection to JMS. Port 1098 on server ABC is not even open anymore.
I checked with TCPView when the problem occurred: the old connection to the server on port 1098 still exists.
I have 2 questions:
Is it possible that JMS sends control packets to the nonexistent remote port 1098 without getting an error?
What may be the reason for the OS not recognizing the connection drop?
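On the second question: TCP is completely silent while idle, so if the peer disappears without sending a FIN or RST (crash, power loss, network outage), the other side's OS keeps the connection in ESTABLISHED indefinitely unless TCP keepalive is enabled. Even sends don't fail right away. A small local demo (Python; behavior observed on Linux, details vary by OS) of how the error only surfaces on a later send:

    import socket
    import time

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()

    conn.close()                            # the peer goes away (sends FIN)
    time.sleep(0.2)

    # The first send "succeeds": the data is accepted into the kernel
    # buffer, and only the peer's RST in response marks the socket dead.
    print("first send:", cli.send(b"x"))
    time.sleep(0.2)
    try:
        cli.send(b"y")                      # now the failure surfaces
    except (BrokenPipeError, ConnectionResetError) as e:
        print("second send failed:", e)

So yes, the broker can keep writing toward a remote port that is no longer open without an immediate error, which would explain why the stale connection still holds the client ID until the OS finally gives up on it.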
