How to gracefully shut down MariaDB?

I have to shut down a MariaDB machine that is part of a Galera cluster, where secondaries have read-only access. How can I shut down MariaDB gracefully so that all in-flight read connections complete successfully?

Do you have a load-balancer or other proxy in front of the nodes? If so, turn that off, then wait for connections to terminate.
Do you have sufficient error checking in your clients? If so, simply "pull the plug". The clients will catch the error and recover.
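A minimal drain sequence along the load-balancer lines might look like this (a sketch, assuming the node sits behind a proxy and MariaDB runs as a systemd service; adapt to your setup):

```
# 1. Remove this node from the load balancer / proxy pool so no new
#    connections arrive.
# 2. Watch the remaining client connections drain:
#      SHOW PROCESSLIST;      -- repeat until only system threads remain
# 3. Stop the server; a normal stop is a graceful shutdown (InnoDB
#    flushes, and the node leaves the Galera cluster cleanly):
#      systemctl stop mariadb
```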

Related

Multiple HTTP active sessions connected to an application

Given a C++ application running on a Linux PC.
It has an integrated webserver associated with one thread.
Considering that, is it possible to have more than one client (active HTTP sessions) connected to the webserver?
In my mind, I always thought we would need something like a thread pool in the webserver task. Acceptor threads on a listening socket accept new connections and put them into a connection queue. The task then picks up connections and services requests.
Is there any possibility to have multiple sessions with one thread?
Many thanks

How do I solve a WSAECONNRESET error?

I am using Perforce as part of a small development team. Everyone was able to connect to the P4V client except for one person who gets the following error:
TCP receive failed.
read: socket: WSAECONNRESET
We have deactivated his McAfee firewall and virus scan, but the error persists. I really don't know what to do with this error, and it seems to be rather undocumented on the Perforce website. From what I gather, it's not a Perforce-specific issue, but rather a TCP communication problem that might be caused by something else.
Any tips?
a TCP communication problem that might be caused by something else.
This is possible, or it's possible that whenever this user connects it causes some sort of server fault.
https://msdn.microsoft.com/en-us/library/ms740668.aspx
WSAECONNRESET 10054 Connection reset by peer.
An existing connection was forcibly closed by the remote host. This normally results if the peer application on the remote host is suddenly stopped, the host is rebooted, the host or remote network interface is disabled, or the remote host uses a hard close (see setsockopt for more information on the SO_LINGER option on the remote socket). This error may also result if a connection was broken due to keep-alive activity detecting a failure while one or more operations are in progress. Operations that were in progress fail with WSAENETRESET. Subsequent operations fail with WSAECONNRESET.
Beyond the usual connection troubleshooting questions (is this user on the same subnet? same version of the client software? same exact P4PORT setting? is the user able to connect via the command line client and if not does it give a more helpful error? why is this user unlike all other users?) I'd look at the server logs to see if it's logging any sort of more helpful error when this user tries to connect.

gRPC client reconnect inside Kubernetes

If we define our microservices inside Kubernetes pods, do we need to implement gRPC client reconnection if the service pod restarts?
When the pod restarts, the hostname does not change, but we cannot guarantee the IP address remains the same. So will the gRPC client still be able to detect the new server and reconnect?
When the TCP connection is disconnected (because the old pod stopped) gRPC's channel will attempt to reconnect with exponential backoff. Each reconnect attempt implies resolving the DNS address, although it may not detect the new address immediately because of the TTL (time-to-live) of the old DNS entry. Also, I believe some implementations resolve the address when a failure is detected instead of before an attempt.
This process happens naturally without your application doing anything, although it may experience RPC failures until the connection is re-established. Enabling "wait for ready" on an RPC would reduce the chances the RPC fails during this period of transition, although such an RPC generally implies you don't care about response latency.
If the DNS address is not (eventually) re-resolved, then that would be a bug and you should file an issue.
You need client-side load balancing, as described here. You can watch the endpoints of a service with the Kubernetes API. I have created a package for the Go programming language; it is on GitHub. Sorry, but I haven't written documentation yet. The basic concept is to get the service endpoints at startup, then watch the service endpoints for changes.

What's the difference between ConnectTimeout and ServerAliveInterval in SSH?

I am SSHing to several remote servers. Some of the servers don't respond, and some of them might be down.
To handle such scenarios I used ConnectTimeout in the ssh command. It timed out as I configured it to.
My current way of doing ssh:
ssh -o LogLevel=Error -oConnectTimeout=5 -oBatchMode=yes -l becomeaccount servername './command.sh'
All was going well until one day I found a stale SSH connection on one of my servers. It had been open for more than 3 days.
So now I think I might have missed something. I tried to google it and found there is something called ServerAliveInterval. Would that solve my problem? How is it different from ConnectTimeout?
ConnectTimeout only limits how long the ssh client waits for the initial connection to be established; it has no effect once the session is up. "ServerAliveInterval", by contrast, specifies a periodic keepalive probe between the SSH client and server on an established session. The intent is twofold:
(1) To close down idle ssh sessions where either
[a] one side or the other crashes hard (i.e.: machine failure/poweroff)
[b] one side or the other changes IP addresses
(2) To MAINTAIN idle ssh sessions over a NAT that would tear down (or terminate) idle TCP sessions
ServerAliveInterval affects the "ssh" client. There's a corresponding parameter for the "sshd" server. (There is also a TCPKeepAlive option.) If you're seeing orphaned sshd sessions on your remote servers, you should consider making appropriate changes in the remote servers' sshd_config. If you can't change the remote server's sshd_config but still need idle logins to die, check whether your shell has an idle timeout ("bash" does).
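For example, to have the server side detect and drop dead clients, the remote sshd_config can set the corresponding ClientAlive options (the interval and count values here are illustrative, not recommendations):

```
# /etc/ssh/sshd_config on the remote server
ClientAliveInterval 60    # probe the client every 60 s over the encrypted channel
ClientAliveCountMax 3     # disconnect after 3 unanswered probes (~180 s)

# ~/.ssh/config on the client side (the option discussed above)
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```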

Can a Winsock TCP connection between server and client resume as-was after the server must restart?

Is there a way to save the "state" of Winsock so that the server program can be stopped and restarted and all the client TCP connections continue as though nothing happened, without the clients having to do anything special?
Or is it the case that once a Winsock server process terminates, client connections can only be reestablished through all the usual initialization calls?
A lost/closed connection must be re-established through a new connect handshake. So if you don't want the client to know the server is restarted, you will have to move the existing connection to another process first, then move it back after the restart. You can use WSADuplicateSocket() for that.
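A sketch of that handoff (pseudocode only; WSADuplicateSocket() and WSASocket() with the FROM_PROTOCOL_INFO constants are the real Winsock calls, while the inter-process plumbing is up to you):

```
// In the old server process, for each client socket s:
//   WSADuplicateSocket(s, helper_process_id, &protocolInfo)
//   send protocolInfo to the helper process (pipe, shared memory, ...)
//
// In the helper process:
//   dup = WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
//                   FROM_PROTOCOL_INFO, &protocolInfo, 0, 0)
//   keep dup open while the server restarts
//
// After the restart, duplicate each socket back into the new server
// process the same way, so it owns the live connections again.
```

Note this only preserves the TCP connections themselves; any application-level session state must be persisted and restored separately.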
