How to know about memory consumption in MySQL? - mysql-management

How can one find out how much memory each process or thread is consuming in MySQL?

Assuming you just want the memory usage of the MySQL server program:
On Windows you can use Process Explorer.
On Linux you can use the top command:
Use "ps -e" to find the PID of the mysql process.
Then use "top -p {pid}" where {pid} is the PID of the mysql process.

On Linux you can also use "top -b | grep mysql" (batch mode) to get a running report of the stats of the mysql process, one row per top refresh period.
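Putting those together, a minimal console session might look like this (the PID is just an example, and the server process is often named mysqld rather than mysql, depending on your installation):
ps -e | grep mysqld           # find the server's PID, e.g. 1234
top -p 1234                   # interactive view; watch the RES and VIRT columns
top -b -p 1234 | grep mysqld  # batch mode: one stats row per refresh, pipe-friendly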

Related

Seeking solution for implementing backups in only one DC

We are running a MariaDB Galera cluster across 3 datacenters and use the mariabackup tool to take backups in each one. Since the same data is replicated to all 3 datacenters, we want a solution where the backup script runs in only one DC, and if the DC currently taking backups goes down, the backups automatically run in another DC. Any solution for this approach is much appreciated.
You need some sort of "event & trigger" mechanism to accomplish this.
I use Zabbix to monitor my daily mariabackup runs, and I have had the problem that a node was down while mariabackup was running.
I don't really care if I lose one day's backup, since I also have ZFS snapshot backups.
But if you want to, you can set a trigger action in Zabbix to make the backup script run on another server.
Another solution I would choose would be saltstack's 'beacon & reactor': a beacon can be created to send an event, and a reactor can be triggered to take some action. Since I am running saltstack on all my servers, this would be the solution I prefer.
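As a rough illustration of the failover idea, without any of the tooling above, a cron job in the standby DC could check the primary backup host and only run mariabackup when it is unreachable. The hostname, path and user below are placeholders, not from the original setup:
#!/bin/sh
# Hypothetical failover wrapper, run from cron in the standby DC.
PRIMARY=backup-dc1.example.com   # placeholder: the host that normally backs up
if ping -c 3 "$PRIMARY" >/dev/null 2>&1; then
    exit 0   # primary DC is up; let it take the backup
fi
# Primary appears down: take the backup locally instead.
mariabackup --backup --user=backupuser \
    --target-dir=/backups/$(date +%F)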

How to cleanup sockets after mono-process crashes?

I am creating a chat server in Mono that should be able to have many sockets open. Before deciding on the architecture, I am doing a load test with Mono. Just as a test, I created a small Mono server and a Mono client that opens 100,000 sockets/connections, and it works pretty well.
I tried to hit the limit, and at some point the process crashes (of course).
But what worries me is that if I try to restart the process, it immediately fails with "Unhandled Exception: System.Net.Sockets.SocketException: Too many open files".
So I guess the file descriptors (sockets) are somehow kept open even after my process ends. Even several hours later it still gives this error; the only way I can deal with it is to reboot my computer. We cannot afford to run into this kind of problem in production without knowing how to handle it.
My question:
Is there anything in Mono that keeps running globally regardless of which Mono application is started, a kind of service I can restart without rebooting my computer?
Or is this not a Mono problem but a Unix problem that we would run into even if we programmed it in Java/C++?
I checked the following, but there are no mono processes alive, no open sockets and no files:
localhost:~ root# ps -ax | grep mono
1536 ttys002 0:00.00 grep mono
-
localhost:~ root# lsof | grep mono
(nothing)
-
localhost:~ root# netstat -a
Active Internet connections (including servers)
(no unusual ports are open)
For development I run under OSX 10.7.5. For production we can decide which platform to use.
It sounds like you need to set (or unset) the Linger option on the socket (using Socket.SetSocketOption). Depending on the actual API you're using, there might be better alternatives (TcpClient has a LingerState property, for instance).
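A minimal C# sketch of that suggestion (variable names are illustrative, and the LingerOption values should be tuned to your shutdown semantics):
using System.Net.Sockets;

class LingerExample
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp);
        // LingerOption(false, 0): Close() returns immediately and the OS
        // releases the descriptor instead of holding it in a linger state.
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.Linger,
                               new LingerOption(false, 0));

        // The TcpClient equivalent is the LingerState property:
        var client = new TcpClient();
        client.LingerState = new LingerOption(false, 0);
    }
}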

Amazon EC2 / RStudio: Is there a way to run a job without maintaining a connection?

I have a long-running job that I'd like to run using EC2 + RStudio. I set up the EC2 instance, then opened RStudio as a page in my web browser. I need to physically move the laptop I use to set up the connection and run the web browser throughout the day; when I do, my job gets terminated in RStudio even though the instance is still running on the EC2 dashboard.
Is there a way to keep a job running without maintaining an active connection?
Does it have to be started / controlled via RStudio?
If you make your task a "normal" R script, executed via Rscript or littler, then you can run it from the shell ... and get to
use old-school tools like nohup, batch or at to control running in the background
use tools like screen, tmux or byobu to maintain one or multiple sessions in which you launch the jobs, and connect / disconnect / reconnect at leisure.
RStudio Server works in similar ways but AFAICT limits you to a single session per user / machine -- which makes perfect sense for interactive work but is limiting if you need multiple sessions.
FWIW, I like byobu with tmux a lot for this.
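A rough sketch of both approaches (the script name analysis.R is just a placeholder):
nohup Rscript analysis.R > analysis.log 2>&1 &   # detached; survives logout/disconnect

tmux new -s job        # start a named session, then run: Rscript analysis.R
                       # press Ctrl-b d to detach; the job keeps running
tmux attach -t job     # reconnect later, from any connection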
My original concern that it needed to maintain a live connection was incorrect. It turns out the error was from running out of memory; it just coincided with being disconnected from the internet.
An instance is started from the AWS dashboard and stopped or terminated there as well. As long as it is running, it can be accessed from an RStudio tab by copying the public DNS into the browser's address bar and logging in again.

Is it possible to choose whether to generate heap dump or not on the fly?

We have an application which is deployed to a WebSphere server running on UNIX, and we are experiencing two issues:
1. a system hang which recovers after a few minutes - to investigate, we will need the thread dump (javacore).
2. a system hang which does not recover and requires WebSphere to be restarted - to investigate, we will need the thread dump and heap dump.
The problem is: when a system hang occurs, we do not know whether it is issue 1 or 2.
Ideally we would like to manually generate the thread dump first, and wait to see if the system recovers. If it does not, then we generate the thread dump and the heap dump, before restarting WebSphere.
I know about the kill -3 (or kill -QUIT) command. It generates a thread dump only (if IBM_HEAPDUMP=false), or a thread dump and a heap dump (if IBM_HEAPDUMP=true). However, IBM_HEAPDUMP has to be set before WebSphere is started and cannot be changed while WebSphere is running.
Is my understanding correct, regarding the IBM_HEAPDUMP parameter and the kill -3 command?
Also, is it possible to get the logs in the way I described? (i.e. when generating JVM diagnostics, choose on the fly whether to generate a heap dump)
Your understanding is consistent with everything I've read.
However, I believe you can accomplish what you want by using wsadmin scripting. This article describes how to force javacores and heapdumps on a Windows platform where kill -3 is not available, but the same commands can be run on any WebSphere system.
From within wsadmin or a wsadmin script, execute:
set jvm [$AdminControl completeObjectName type=JVM,process=server1,*]
$AdminControl invoke $jvm generateHeapDump
$AdminControl invoke $jvm dumpThreads
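To run these, start a wsadmin session first and paste the three commands at the wsadmin> prompt (the install path below is illustrative; adjust it to your profile):
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jacl
The javacore and heapdump files are typically written under the server profile's root directory.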

Get process occupying a port in Solaris 10 (alternative to pfiles)

I am currently using pfiles to find the process occupying a certain port on Solaris 10,
but it causes a problem when run in parallel:
pfiles can't be run in parallel for the same PID;
the second invocation returns with the error message
pfiles: process is traced
Is there any alternative to pfiles for finding the process occupying a port on Solaris?
Or any information on OS APIs to get port/process information on Solaris would help.
A workaround would be to use some lock mechanism to avoid the concurrent pfiles calls.
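For example, a minimal mkdir-based lock around each pfiles invocation (the wrapper and its paths are hypothetical, not from the original setup):
#!/bin/sh
# Serialize pfiles calls so two scripts never trace the same PID at once.
LOCKDIR=/tmp/pfiles.lock
until mkdir "$LOCKDIR" 2>/dev/null; do sleep 1; done   # spin until we own the lock
trap 'rmdir "$LOCKDIR"' 0                              # release on exit (Bourne-compatible)
pfiles "$1"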
Alternatively, you might install lsof from a freeware repository and see if it supports concurrency (I think it does).
I just tested Solaris 11 Express pfiles and it doesn't seem to exhibit this issue.
