ZEO server log output with multiple ZEO clients - Plone

We use a ZEO server and three ZEO clients (Zope 2.13.19).
Everything works, but I want to understand why, in the log below, some connections complete while others disconnect still marked as unconnected.
Ultimately only three connections show received handshakes, which accounts for the three clients.
2018-03-30T16:23:24 (23621) terminated by SIGTERM
2018-03-30T16:23:24 (23621) closing storage '1'
2018-03-30T16:23:24 (127.0.0.1:51272) disconnected
2018-03-30T16:23:24 (23621) removed PID file '/home/magikzope/var/zeo.pid'
2018-03-30T16:23:25 (9915) created PID file '/home/magikzope/var/zeo.pid'
2018-03-30T16:23:25 (9915) opening storage '1' using FileStorage
2018-03-30T16:23:25 StorageServer created RW with storages: 1:RW:/home/magikzope/var/filestorage/Data.fs
2018-03-30T16:23:25 (9915) listening on ('127.0.0.1', 9100)
2018-03-30T16:23:27 new connection ('127.0.0.1', 45414): <ManagedServerConnection ('127.0.0.1', 45414)>
2018-03-30T16:23:27 new connection ('127.0.0.1', 45416): <ManagedServerConnection ('127.0.0.1', 45416)>
2018-03-30T16:23:27 (127.0.0.1:45416) received handshake 'Z3101'
2018-03-30T16:23:27 (127.0.0.1:45414) received handshake 'Z3101'
2018-03-30T16:23:27 new connection ('127.0.0.1', 45418): <ManagedServerConnection ('127.0.0.1', 45418)>
2018-03-30T16:23:27 new connection ('127.0.0.1', 45420): <ManagedServerConnection ('127.0.0.1', 45420)>
2018-03-30T16:23:27 new connection ('127.0.0.1', 45422): <ManagedServerConnection ('127.0.0.1', 45422)>
2018-03-30T16:23:27 new connection ('127.0.0.1', 45424): <ManagedServerConnection ('127.0.0.1', 45424)>
2018-03-30T16:23:27 new connection ('127.0.0.1', 45426): <ManagedServerConnection ('127.0.0.1', 45426)>
2018-03-30T16:23:27 new connection ('127.0.0.1', 45428): <ManagedServerConnection ('127.0.0.1', 45428)>
2018-03-30T16:23:27 (127.0.0.1:45426) received handshake 'Z3101'
2018-03-30T16:23:27 (unconnected) disconnected
2018-03-30T16:23:27 (unconnected) disconnected
2018-03-30T16:23:27 (unconnected) disconnected
2018-03-30T16:23:27 (unconnected) disconnected
2018-03-30T16:23:27 (unconnected) disconnected
2018-03-30T16:23:27 new connection ('127.0.0.1', 45430): <ManagedServerConnection ('127.0.0.1', 45430)>
2018-03-30T16:23:27 (unconnected) disconnected

A SIGTERM can hint that an allowed max-RAM threshold was exceeded, as can happen on a virtual server; in that case you'll want to upgrade your hosting plan to have more RAM available.
If you rebooted the OS, or triggered the SIGTERM yourself in some other way, these messages can also appear, and the reasons may vary. One of them is cache verification, as described in the source code of ZEO-5.1.1-py2.7.egg/ZEO/cache.py at line 64:
# When the client is connected to the server, it receives
# invalidations every time an object is modified. When the client is
# disconnected then reconnects, it must perform cache verification to make
# sure its cached data is synchronized with the storage's current state.
So in general it's nothing to worry about: the clients connect and disconnect repeatedly until a valid connection is established.
However, it is always a good idea to start instances in a staging or development environment in the foreground before changing anything in production.
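The "(unconnected) disconnected" lines can be mimicked with a toy server: ZEO logs every accepted socket as a new connection, but a connection only counts as connected once the client sends the protocol handshake ('Z3101' in the log above). Sockets the client opens and then abandons before handshaking show up as unconnected. This is a minimal sketch using plain sockets, not ZEO's actual implementation:

```python
import socket
import threading
import time

log = []

def serve(listener):
    while True:
        try:
            conn, addr = listener.accept()
        except OSError:                 # listening socket was closed: stop
            return
        log.append("new connection %s:%s" % addr)
        data = b""
        while len(data) < 5:            # read the 5-byte handshake, if any
            chunk = conn.recv(5 - len(data))
            if not chunk:               # peer closed before handshaking
                break
            data += chunk
        if data == b"Z3101":
            log.append("(%s:%s) received handshake 'Z3101'" % addr)
        else:
            log.append("(unconnected) disconnected")
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))         # any free port
listener.listen(5)
port = listener.getsockname()[1]
t = threading.Thread(target=serve, args=(listener,))
t.start()

# Client behavior resembling the log: open two sockets, abandon one
# before handshaking, complete the handshake on the other.
probe = socket.create_connection(("127.0.0.1", port))
probe.close()                           # never handshakes -> "unconnected"
real = socket.create_connection(("127.0.0.1", port))
real.sendall(b"Z3101")                  # completes the handshake
real.close()

time.sleep(0.5)                         # let the server drain both sockets
listener.close()
t.join()
print("\n".join(log))
```

Run against the log above, the pattern is the same: six "new connection" lines, three handshakes, and the rest "(unconnected) disconnected".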

Related

TCP connection close automatically

My TCP server does not close the client socket, but the client reads EOF. The client then closes its socket, and my server's recv fails with error 64 (Winsock).
I want to know when this happens.

Apache-Http-Client : Ideal Value for Keep-Alive Timeout

I have a server-client setup, both being Java applications. I am using Spring's RestTemplate with PoolingHttpClientConnectionManager at the client end to call the server. The server (Tomcat) has the default keep-alive value of an HTTP connection, 60 seconds. This means that if the client reuses the same connection (lying idle in the pool) more than 60 seconds later for another request, the server will have closed the connection. The client then automatically creates another connection object and sends the request to the server over it.
Without going into the code and internal workings, I simply want to know: what is the best practice regarding the HTTP client's keep-alive time? Do I
keep the default value (the connection is kept alive indefinitely by the client), OR
set the keep-alive time to less than the server's? In that case the HTTP client will check, before leasing a connection, whether it has expired. If it has, it is discarded from the pool, and a new connection is created and leased to the requesting thread.
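The second option boils down to a staleness check at lease time. Here is a minimal, language-agnostic sketch of that pattern in Python; the class and method names are invented for illustration and are not the Apache HttpClient API (HttpClient itself has related knobs such as `setValidateAfterInactivity` on the pooling manager):

```python
import time

class IdlePool:
    """Toy connection pool illustrating a client-side keep-alive budget."""

    def __init__(self, keep_alive_s, clock=time.monotonic):
        self.keep_alive_s = keep_alive_s
        self.clock = clock
        self._idle = []                 # (connection, time it went idle)

    def release(self, conn):
        self._idle.append((conn, self.clock()))

    def lease(self, factory):
        """Reuse a still-fresh idle connection, else build a new one."""
        while self._idle:
            conn, idle_since = self._idle.pop()
            if self.clock() - idle_since < self.keep_alive_s:
                return conn             # fresh: server should still honor it
            # stale: the server has likely closed it already, so drop it
        return factory()

# Deterministic fake clock for the demo; keep-alive is set just under
# the server's 60s so the client always gives up first.
now = [0.0]
pool = IdlePool(keep_alive_s=55, clock=lambda: now[0])

pool.release("conn-A")
now[0] += 30
a = pool.lease(lambda: "conn-NEW")      # 30s idle < 55s: "conn-A" is reused

pool.release(a)
now[0] += 60
b = pool.lease(lambda: "conn-NEW")      # 60s idle >= 55s: replaced
print(a, b)
```

The design point is that the client's budget must be shorter than the server's, so the client never tries to write on a socket the server has already closed.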

I am getting Connection TIME OUT Error When Users Access .net Application

Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement.
This could be because the pre-login handshake failed or the server was unable to respond back in time.
It seems to be a firewall issue: the firewall is likely blocking connections to SQL Server on port 1433. You will need to add a rule allowing connections on that port.
You can find more on this here: SQL Server Pre-Login Handshake Acknowledgement Error
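Before touching any SQL drivers, you can verify raw TCP reachability of the port with a few lines of Python (the host name below is a placeholder for your database server):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("db.example.internal", 1433)
```

If this returns False from the application server but True from the database host itself, the firewall is the likely culprit rather than SQL Server.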

rabbitmq keepalive timeout error

Below is part of my rabbitmq log file.
=ERROR REPORT==== 22-Dec-2016::09:36:20 ===
closing MQTT connection "xxx.xxx.xxx.xxx:11030 -> xx.xx.xx.xx:1884" (keepalive timeout)
=ERROR REPORT==== 22-Dec-2016::09:36:20 ===
closing MQTT connection "xxx.xxx.xxx.xxx:14653 -> xx.xx.xx.xx:1884" (keepalive timeout)
=ERROR REPORT==== 22-Dec-2016::09:36:22 ===
closing MQTT connection "xxx.xxx.xxx.xxx:14494 -> xx.xx.xx.xx:1884" (keepalive timeout)
I think these MQTT connections are being closed because of a keepalive timeout.
Is this normal?
These messages appear repeatedly, so the log disk fills up.
Is there a way to keep the connections alive?
Your clients are failing to send something within the keep alive time
they specified when they connected to the broker. See MQTT specs
section 3.1.2.10 Keep Alive.
Compliant clients would send a PINGREQ to keep the connection alive.
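The PINGREQ obligation can be sketched as a small simulation. The classes below are simplified stand-ins, not a real MQTT stack; a real Python client such as paho-mqtt handles this for you via the `keepalive` argument to `connect()`:

```python
class Client:
    """Simplified stand-in for an MQTT client's keep-alive bookkeeping."""

    def __init__(self, keepalive_s):
        self.keepalive_s = keepalive_s
        self.last_send = 0.0            # time of last packet sent to broker
        self.sent = []

    def tick(self, now):
        # Compliant behavior: send PINGREQ once the keep-alive interval
        # has elapsed with no other outgoing traffic.
        if now - self.last_send >= self.keepalive_s:
            self.sent.append(("PINGREQ", now))
            self.last_send = now

def broker_would_drop(client, now, grace=1.5):
    # Per the MQTT spec, the server disconnects a client it hasn't heard
    # from within 1.5x the keep-alive interval ("keepalive timeout").
    return now - client.last_send > grace * client.keepalive_s

good = Client(keepalive_s=60)
for t in range(0, 201, 10):             # wakes up every 10 simulated seconds
    good.tick(float(t))

silent = Client(keepalive_s=60)         # never sends anything after CONNECT

print(broker_would_drop(good, 200.0))   # False: PINGREQs kept it alive
print(broker_would_drop(silent, 200.0)) # True: dropped, like the log above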

Firebase facebook login - very erratic

We are trying to use facebook login with Firebase in an Ionic app with the inappbrowser. It works fine on the browser (ionic serve) and fine in iOS (emulator and devices). However on android devices the login process is rather erratic. The callback doesn't get called on most tries (although strangely it works sometimes!). Upon enabling logging in firebase, I noticed that the browser tends to 'connect' and 'disconnect' multiple times. Here are the console logs from just clicking on the button that initiates the facebook login:
Facebook Popup initiation
p:0: Browser went online.
p:0: Making a connection attempt
c:0:4: Connection created
c:0:4:0 Websocket connecting to wss://s-dal5-nss-23.firebaseio.com/.ws?v=5&ns=crisscross
p:0: Browser went offline. Killing connection.
c:0:4: Closing realtime connection.
c:0:4: Shutting down all connections
c:0:4:0 WebSocket is being closed
WebSocket connection to 'wss://s-dal5-nss-23.firebaseio.com/.ws?v=5&ns=crisscross' failed: WebSocket is closed before the connection is established.
c:0:4:0 WebSocket error. Closing connection.
c:0:4:0 Websocket connection was disconnected.
p:0: data client disconnected
0: onDisconnectEvents
p:0: Browser went online.
p:0: Making a connection attempt
c:0:5: Connection created
c:0:5:0 Connecting via long-poll to https://s-dal5-nss-23.firebaseio.com/.lp?start=t&ser=19709246&cb=3&v=5&ns=crisscross
p:0: Browser went offline. Killing connection.
c:0:5: Closing realtime connection.
c:0:5: Shutting down all connections
c:0:5:0 Longpoll is being closed.
p:0: data client disconnected
0: onDisconnectEvents
Long-poll script failed to load: https://s-dal5-nss-23.firebaseio.com/.lp?start=t&ser=19709246&cb=3&v=5&ns=crisscross
p:0: Browser went online.
p:0: Making a connection attempt
c:0:6: Connection created
c:0:6:0 Connecting via long-poll to https://s-dal5-nss-23.firebaseio.com/.lp?start=t&ser=57829348&cb=4&v=5&ns=crisscross
c:0:6: Realtime connection established.
p:0: connection ready
p:0: {"r":37,"a":"auth","b":{"cred":"..."}}
p:0: Listen on /users/facebook:... for default
p:0: {"r":39,"a":"q","b":{"p":"/users/facebook:...","h":"pExrvNpPeLgfcNuQk/FRu4iWQrg="}}
...
...
c:0:6: Primary connection is healthy.
c:0:6:1 Websocket connecting to wss://s-dal5-nss-23.firebaseio.com/.ws?v=5&ns=crisscross&s=j9noKHDr1wjh5MiKjiKFIMxqNY2RvKoh
p:0: Browser went offline. Killing connection.
c:0:6: Closing realtime connection.
c:0:6: Shutting down all connections
c:0:6:0 Longpoll is being closed.
c:0:6:1 WebSocket is being closed
WebSocket connection to 'wss://s-dal5-nss-23.firebaseio.com/.ws?v=5&ns=crisscross&s=j9noKHDr1wjh5MiKjiKFIMxqNY2RvKoh' failed: WebSocket is closed before the connection is established.
c:0:6:1 WebSocket error. Closing connection.
c:0:6:1 Websocket connection was disconnected.
p:0: data client disconnected
0: onDisconnectEvents
Long-poll script failed to load: https://s-dal5-nss-23.firebaseio.com/.lp?id=1125523&pw=ma9Rk4SNwf&ser=53936…1qY3lPRFo5LlRaeTdjZXdMY09XRDZJWGFMOUZtNUFBcGQxTzBrVTVCNXloVHQ0cGVXTVkifX19
Long-poll script failed to load: https://s-dal5-nss-23.firebaseio.com/.lp?id=1125523&pw=ma9Rk4SNwf&ser=53936342&ns=crisscross
p:0: Authenticating using credential: eyJ0eX...fn81k
The logs don't have timestamps, but the whole sequence above happened without any major delays, over 3-4 seconds.
I understand that it's trying to switch between long polling and some realtime connection (limited knowledge here), and that could explain why it goes offline and online. But so many times? And in the end it just doesn't connect.
Any ideas, anyone?
