Reduce CPU usage of a running process: Unix Command [closed] - unix

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 years ago.
I was trying to find a UNIX/Linux command to limit the memory and CPU usage of a process once I notice it is consuming 90% or more. Basically, I need a command that reduces the resource usage of a process that is already running, without restarting it. Thank you.

nice (built-in)
or
cpulimit http://cpulimit.sourceforge.net/

/usr/bin/ulimit, or the shell builtin ulimit, sets resource limits when you launch a process, not after it is running. Likewise, the nice command is run by the process owner before the program starts, not afterwards.
renice is the command to use after the process has already started.

I don't know how to set a hard limit on CPU usage, but you can force a process to be nicer to other processes with the renice command:
renice -n 10 -p PID
where PID is the process id of the process whose priority you want to reduce.
What this does is tell the OS scheduler to reduce the process's priority, i.e. other processes that want to run get more of the CPU. man 1 renice has the details.
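As a concrete sketch, the following session starts a throwaway background job, lowers its priority, and verifies the new nice value (the sleep job and the PID handling are just for illustration):

```shell
# Start a throwaway background job to demonstrate on.
sleep 60 &
pid=$!

# Raise its nice value by 10 (lower its priority). Unprivileged users
# may only raise a process's nice value; lowering it requires root.
renice -n 10 -p "$pid"

# Confirm the new nice value as seen by ps.
ps -o ni= -p "$pid"

# Clean up the demo job.
kill "$pid"
```

Note that renice only changes scheduling priority; it does not cap CPU usage when the system is otherwise idle, so a niced process can still use 100% of a free CPU. cpulimit is the tool for an actual percentage cap.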

Related

How to make sure that a process on Gunicorn3 runs continuously? [closed]

Closed 3 months ago.
I have a Flask app deployed on an Ubuntu server, using Nginx and Gunicorn3. I know from this StackOverflow post that one of the correct ways to keep the app running continuously on the server is to use something like:
gunicorn3 app:application --workers 3 --bind=127.0.0.1:1001 --daemon
But to be completely safe, since there are many other processes running on that server, I would like to find a way to check whether this process IS indeed running, and if it is NOT running (for whatever reasons) then to start it again.
In addition, to make the app start at reboot, I use the following cron job:
@reboot bash ~/restart_processes.sh
Where the .sh file executes the command line given above for starting Gunicorn3. Is this good practice or is there a better way to achieve the same result?
Thank you!
I always deploy it in production with supervisor (supervisorctl) + nginx. Check this tutorial. You can simply start, restart, or stop the app with a single command.
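As a sketch of that setup, a supervisor program definition might look like the following (the file path, program name, and directories are hypothetical; note that `--daemon` is dropped, because supervisor must manage the process in the foreground):

```ini
; /etc/supervisor/conf.d/myapp.conf  (hypothetical path and names)
[program:myapp]
command=/usr/bin/gunicorn3 app:application --workers 3 --bind=127.0.0.1:1001
directory=/home/deploy/myapp
autostart=true          ; start when supervisord starts (covers reboot)
autorestart=true        ; restart the app if it dies for any reason
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
```

After `supervisorctl reread` and `supervisorctl update`, the app can be controlled with `supervisorctl start myapp`, `supervisorctl stop myapp`, or `supervisorctl restart myapp`, and a separate reboot cron job becomes unnecessary.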

AutoIt exe gets killed by antivirus [closed]

Closed 3 years ago.
I compiled an AutoIt script to an .exe file. When I give this program to another user, their antivirus detects it as a virus and kills it.
How can I avoid this?
Assuming it does not actually classify as malicious:
1. White-list the executable (or the folder containing it) in the antivirus program.
2. File a false-positive report with the vendor of the antivirus program concerned.
3. Change antivirus software.
The resulting executable is the AutoIt script appended to an interpreter (no actual compiling takes place). Incompetent vendors flag the interpreter instead of the script. False-positive reports usually solve this (until that vendor flags the next malicious AutoIt script, hence #3).

Commands to start and stop Nginx and how many nginx processes will be created? [closed]

Closed 8 years ago.
I am new in using Nginx. I have two questions:
(a) I noticed that there are two sets of commands to start and stop Nginx:
$ sudo nginx
$ sudo nginx -s stop
and
$ sudo service nginx start
$ sudo service nginx stop
What are the differences between them?
(b) Once Nginx is started, there are a number of its processes running. So, how many copies will be created and how does the system determine the number of processes to create?
A - With the first form you are invoking the nginx executable directly to control it; the second form uses the operating system's service utility (I typically use it on CentOS), which runs the service's init script.
B - It is configured in nginx.conf via the worker_processes directive; by default it is (as far as I remember) set to 2 worker processes. Nginx always creates one master process as well.
http://nginx.org/en/docs/beginners_guide.html
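A minimal nginx.conf fragment illustrating the directive (the values here are illustrative, not defaults you must use):

```nginx
# nginx.conf (fragment)
worker_processes 2;    # number of worker processes; "auto" uses one per CPU core

events {
    worker_connections 1024;   # max simultaneous connections per worker
}
```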

Run a job after another [closed]

Closed 8 years ago.
In unix/linux, I have a job ./a.out, which is currently running in the background over ssh as
nohup ./a.out &
And I have logged out of the original shell. I want another job ./b.out to start running only after ./a.out finishes. How can I do that? The overall effect is equivalent to
./a.out && ./b.out
But I do not want to kill ./a.out.
Edit: clarify that ./a.out is running in the background using nohup. (Thanks to Marc B.)
One approach would be to find the process ID of a.out with top or ps, and have a launcher program that checks once a second to see whether that PID is still active. When it isn't, the launcher runs b.out.
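A minimal sketch of such a launcher in shell (the PID value is a placeholder you would fill in yourself; `kill -0` only tests whether the process exists and sends no signal):

```shell
#!/bin/sh
# Poll once a second until the watched PID is gone, then run b.out.
pid=12345              # placeholder: the PID of a.out, e.g. from pgrep a.out

while kill -0 "$pid" 2>/dev/null; do
    sleep 1
done

./b.out
```

Start the launcher itself with nohup so it also survives logout. One caveat: PIDs can be reused after a process exits, so on a busy system it is worth double-checking with ps that the PID still belongs to a.out before relying on this.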

Debian Wheezy Networking Spontaneously Shuts Down [closed]

Closed 8 years ago.
I have just recently upgraded to Wheezy. Since I updated, my server spontaneously kills networking. Which logs could I look at to find the issue? I have looked in /var/log, and no logs that look relevant have been updated in the past few days. The server runs headless, so re-enabling networking means turning it off and on again, as I can't ssh to it.
Any suggestions would be welcome.
Thanks
/var/log/syslog should have something. You can also run dmesg, which may pick it up if it's a kernel-module problem or something similar. To find the module name, use lspci -v | grep -i ethernet and look for the module name a few lines later (it could be e1000 or similar). Use that module name when grepping the dmesg output.
