Process kill in WHM - CPU usage

If I execute a process kill through WHM, how do I get those killed processes back? Can the killed processes be brought back, and what are the effects of the kill on my websites?
Thanks.
I have killed one or two processes. My server load is too high, which is leading to server shutdowns. My web host suggested I kill some of the running processes that are creating the high server load.
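Before killing anything, it can help to see which processes are actually responsible for the load; a quick check from the shell (assuming SSH access to the server) looks like this:

```shell
# List the ten processes using the most CPU, highest first.
ps aux --sort=-%cpu | head -n 10

# Load averages over 1, 5 and 15 minutes; sustained values above the
# number of CPU cores indicate the box is overloaded.
uptime
```

Note that killing a process only removes that one instance; whatever spawned it (a visitor request, a cron job) can start it again, so the cause of the load still needs to be found.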

Related

AWS EC2 Instance w/ WordPress keeps crashing from 25% CPU utilization spikes

I have an EC2 t2.medium instance i-0bf4623a779064e0a with a WordPress installation which keeps crashing (it can't be accessed via HTTP or SSH). It seems that whenever CPU utilization gets to about 25% or more (which I wouldn't think is very much), the server crashes. I have an alert set up to restart the server whenever Network Out is <= 50,000 bytes for 5 minutes, and tonight it has had to restart 10 times. It has been doing this nearly every day for weeks. Here is a screenshot of the monitoring: http://i.imgur.com/zQQ4oiy.png
What can I do to stop this crashing? Can I do some sort of server config optimization? I hope I do not need a larger instance, since I am already paying quite a bit for AWS, and I was previously using $10/mo shared hosting which rarely went down.
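One thing worth checking before resizing (a guess, not a diagnosis): small instances with no swap configured often "crash" because a memory spike lets the kernel's OOM killer terminate MySQL or PHP, which makes the site unreachable even though CPU looks modest. A quick check over SSH:

```shell
# How much memory and swap the instance actually has.
free -m

# Look for OOM-killer activity in the kernel log (may need sudo).
dmesg 2>/dev/null | grep -i -E 'out of memory|oom' || echo "no OOM events found"
```

If memory is the culprit, adding a swap file or tuning MySQL's buffer sizes may stabilize things without moving to a larger instance.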

Restart supervisor without stopping child processes

I am using Gearman and Supervisor to manage my worker processes. I have a case where I need to stop/restart Supervisor without affecting the currently running processes.
For example, I have configured the numprocs value in the Supervisor conf file as 2. I start Supervisor and have 2 processes running.
Now I want to restart Supervisor such that the number of processes becomes 4, where Supervisor controls only the two new processes. Is there a way this can be done?
I tried renaming the sock file before restarting; it didn't work. I am using Amazon Linux.
EDIT
I will just elaborate more on what I am doing. I have some worker processes for Gearman which are managed by Supervisor. It works fine, but when I restart Supervisor, the workers are killed and new workers are launched even if the workers are busy processing a job, so I lose all the work those processes had done.
So I am using php-pcntl to intercept the signals sent by Supervisor and exit only after the current job has finished. This seems to be working well.
I send the signal like the following:
supervisorctl signal SIGTERM all
and then in my worker code, I have the following.
public static function work(GearmanWorker $gw) {
    pcntl_signal(SIGTERM, function ($sig) {
        exit;
    });
    while (true) {
        $gw->work();
        // Calls signal handlers for pending signals
        pcntl_signal_dispatch();
    }
}
Now, when I update my worker code, I have to restart Supervisor. But if I restart directly, then even with the above implementation, all the processes are terminated. I want to avoid terminating the old processes, as they will exit by themselves once pcntl_signal_dispatch() is called.
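The pattern the PHP handler above implements can be exercised in miniature from the shell; this toy stand-in (not the actual worker) shows the same "note the signal, finish the current job, then exit" behaviour:

```shell
# Toy worker loop: a TERM signal only sets a flag, and the loop exits
# after the current "job" (a short sleep) has completed. Without a
# signal, it processes 20 jobs and exits on its own.
graceful=0
trap 'graceful=1' TERM
jobs_done=0
while [ "$graceful" -eq 0 ] && [ "$jobs_done" -lt 20 ]; do
    sleep 0.05            # stand-in for one unit of real work
    jobs_done=$((jobs_done + 1))
done
echo "worker exiting cleanly after $jobs_done jobs"
```

Because the trap only records the signal, a job in progress is never cut off mid-way, which is exactly what pcntl_signal_dispatch() achieves in the PHP loop.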

How to fork properly within an nginx/php-fpm environment?

I have a very simple PHP socket server that runs on my machine. I created a convenience class with simple methods like "restart" and "stop" to control the server once it is already running.
The restart function sends the server the command to stop, then forks the process and starts a new instance of the socket server in the child process, while the parent process returns to the caller.
This works great on the command line, but I am trying to make an admin web page which restarts the socket server, and the forking is causing problems in php-fpm. What appears to be happening is that the life of the "page load" does not end when the parent process ends, and nginx/php-fpm will not reassign the worker to new connections until the forked process also ends.
In my case, I never want the forked process to end, so I end up with a completely debilitated server. (In my test environment, for simplicity, I have the worker pool set to only 1; in a production environment this will be higher, but the issue would still leave one worker slot permanently occupied.)
I have tried a few things, including calling fastcgi_finish_request() just prior to forking, but this had no effect.
How can I restart my service from a php-fpm worker process without locking up an assignment in the nginx connection pool?
The solution was simple and elementary.
My child processes were not redirecting STDOUT and STDERR to /dev/null. Even though the parent process finished, the child still held active file descriptors, so php-fpm considered that connection in its pool still active and never reassigned it to new connections, because the child process runs continually.
Redirecting STDOUT and STDERR to /dev/null let php-fpm correctly reassign connections while still allowing the child process (the socket server) to run forever. Case closed.
./some/command >/dev/null 2>&1
Should have seen this one a mile off...
(I solved this problem months ago, but I haven't signed into this account in a long time... it didn't take me 7 months to figure out I needed to redirect output to /dev/null!)
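The same fix can be demonstrated from the shell, with sleep standing in for the long-running socket server: the child is launched with all output discarded and backgrounded, so no open file descriptor ties it to the launching process:

```shell
# Launch a long-running child (here `sleep 30` as a stand-in for the
# server) with stdout/stderr discarded, so nothing keeps the parent's
# connection alive; nohup also shields it from hangup signals.
nohup sleep 30 >/dev/null 2>&1 &
child=$!
echo "detached child pid: $child"
```

In a real deployment the command would be the socket server binary, and adding setsid puts the child in its own session as well.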
Sounds like you have your protocol design wrong. The server should be capable of restarting itself; there are quite a few examples of how to do that on the internet.
The client (your web page) should not need to do any forking.
The server should also not run inside php-fpm, but be a console application that uses a daemon(3)-style interface to detach itself from the console.

How do I kill a zeopack without restarting ZEO?

It seems that the zeopack script returns to the shell after signalling ZEO to pack a database. How can I kill a zeopack operation in ZEO without bringing down the database?
The only way to kill the pack is to restart the server:
bin/zeoctl restart
Clients should reconnect automatically.
The server runs the packing in a separate thread but offers no 'stop' hook.

SSH Equivalent on Windows

I am making an ASP.NET application which does the following on the client computer:
Establish a connection
Check the client's CPU usage to see whether it is idle
If the client is idle, start executing a C application
While the application is executing, if the client starts doing something (also detected by monitoring CPU usage), send a stop signal
Send the start signal again once the client is back in its idle state
If the client is Ubuntu, I use SSH and execute what I want. What is the way of doing this on Windows without root access?
Thanks in advance for replying.
This sounds a bit dodgy to me. However, what you are looking for is called PsExec (http://technet.microsoft.com/en-us/sysinternals/bb897553).
UPDATE
The only other way I can think of to do this is to use the built-in Task Scheduler in Windows.
With Task Scheduler you can set a task to start when a computer has been idle for a particular amount of time and to pause or stop it when it ceases to be idle.
Once the task is installed, just forget about it.
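As a sketch (the task name and program path here are hypothetical), such a task can be created from the Windows command line; the ONIDLE schedule starts the program once the machine has been idle for the given number of minutes:

```
schtasks /Create /TN "IdleWorker" /TR "C:\path\to\app.exe" /SC ONIDLE /I 10
```

Task Scheduler's idle condition then handles the stop/resume behaviour that would otherwise have to be scripted by hand.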
Try SSH FTP, or SFTP, which is analogous to SSH on Windows.
