I noticed that I have quite a few Symfony local web server workers registered (around 35), and the number keeps growing. I usually just start the server with symfony serve and then kill it (Ctrl + \) when it is no longer needed. Apparently killing it this way leaves a worker behind, as seen in symfony server:status. Running symfony serve again just creates a new worker.
symfony server:status output:
Local Web Server
Not Running
Workers
PID 6327: /usr/bin/php7.2 -S 127.0.0.1:43653 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 24596: /usr/bin/php7.2 -S 127.0.0.1:37789 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 6575: /usr/bin/php7.4 -S 127.0.0.1:42505 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 41550: /usr/bin/php7.4 -S 127.0.0.1:36313 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
...
Environment Variables
None
So my questions regarding this:
#1: Is there a quick way to kill the server? I assume symfony server:stop is the more correct way, but that requires an additional console window and typing the command.
#2: How do I get rid of the workers registered from previous sessions? Trying e.g. kill 6327 says there is no such process, and they are still listed after a system restart.
These extra workers bother me because the server log output in the console is duplicated for each of them, so right now every request to the server produces around 3k lines of log output, which makes it pretty useless.
I have the same problem after upgrading to Symfony CLI version v4.19.0...
My (very) bad workaround:
rm /home/myusername/.symfony/var/83247c3521c3ac3990bf3f823ef473db0a9445e1/*
Edit: this answer is not accurate, as hinted at by #CrSrr's answer above.
The symfony command adds data to both the ./log and ./var directories. Deleting entries in only one of those does not remove the appearance of non-existent workers in the project directory. I was fooled by checking the status in a directory where server:start had never been run.
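If you just want the stale entries gone, a less error-prone variant of the workaround above is to stop the server and clear both directories at once. This is only a sketch, under the assumption that the Symfony CLI keeps its per-project state under ~/.symfony/var and ~/.symfony/log (as in the paths shown above); it wipes the local server bookkeeping for every project on the machine, so make sure nothing you care about is still running:

symfony server:stop || true                  # ignore the error if nothing is running in this project
rm -rf ~/.symfony/var/* ~/.symfony/log/*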
A bug report is on file with symfony here.
I just faced a similar issue. The PIDs were nowhere to be found.
PS G:\workspace\joined> symfony server:status
Local Web Server
Not Running
Workers
PID 7732: C:\php\php-cgi.exe -b 63801 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 19324: C:\php\php-cgi.exe -b 62927 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 17968: C:\php\php-cgi.exe -b 50197 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 14040: C:\php\php-cgi.exe -b 55075 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
Environment Variables
None
On Windows, the log files are kept in %USERPROFILE%\.symfony. There is most likely a similar location in your home directory. Deleting all the contents of that directory allowed a new Windows Terminal app to show:
PS G:\workspace> symfony server:status
Local Web Server
Not Running
Workers
No Workers
Environment Variables
None
Run symfony server:stop to stop the server, then run symfony serve again to start it. Good luck.
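In other words, assuming a reasonably recent Symfony CLI, the stop/start cycle from the project directory is just:

symfony server:stop     # stop the local web server for this project
symfony serve           # start it again (add -d to keep it running in the background)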
Related
I have pulled the latest code from my repo onto my web server, and I try to restart nginx - this doesn't do anything.
So I try the command
sudo nginx -s stop, and get the response that it failed because there is no such file or directory "run/nginx.pid".
Trying to run the command ps aux | grep nginx gives me the response: unsupported option (BSD syntax) -- it actually comes out as ps aux > grep nginx in the DigitalOcean console.
Basically, it seems that even though there are apparently no nginx processes running (although the command to check isn't working), my website is still up and serving the old code. Is there a way for me to check the running processes more definitively?
Thanks if you can help.
Try sudo netstat -plunt to check whether any nginx process is running. See if anything is listening on port 80 or 443 and then look at the corresponding program name. You might have another server running, possibly Apache since it ships by default with most distributions, which may be why nginx failed to start.
Another reason it won't start might be a faulty config. Go to /etc/nginx/ and double-check that it's correct. You can also run sudo nginx -t to ensure that the config syntax is correct.
Alternatively, just check your nginx access log to see if it's actually serving any requests. You can also check the error log to see why it might have failed to start. These reside in /var/log/nginx by default, or check your nginx.conf for any custom log paths.
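Putting that together, a quick checklist might look like this (the log path is the usual default and may differ on your system):

sudo netstat -plunt | grep -E ':(80|443) '   # which program owns the web ports
sudo nginx -t                                # validate the nginx config syntax
sudo tail -n 50 /var/log/nginx/error.log     # recent errors, e.g. why it failed to start
ps aux | grep '[n]ginx'                      # list nginx processes without matching the grep itself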
I am trying to kill a process with pid 38456 using Symfony with this code:
$process1 = new Process('kill -9 38456');
$process1->run();
Unfortunately this doesn't work. I think this is due to permissions (Symfony can only kill its own processes), but I am not sure about it.
Try to understand which user runs your code:
$process = new Process('whoami');
$process->run();
echo $process->getOutput();
If you are apache or www-data, then most probably you have very limited rights.
Your options:
Run your script from the CLI (command line) as root. This is the way I recommend.
Run the process you want to kill from your own process.
Run the web server (Apache or PHP-FPM) as root - very insecure.
A more roundabout way - the web server does not kill the process directly but, for example, writes the process ID to kill into a file. A CLI process running as root then reads this file and kills that process; see the sketch below.
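A minimal sketch of that last option, assuming a queue directory at /var/run/kill-requests into which the web code writes one PID per *.pid file (both the path and the file layout are made up for illustration). The watcher runs as root from the CLI, e.g. started from cron or a systemd unit:

#!/usr/bin/env bash
# watch the queue directory and kill whatever PIDs the web app has requested
QUEUE=/var/run/kill-requests
mkdir -p "$QUEUE"
while true; do
    for f in "$QUEUE"/*.pid; do
        [ -e "$f" ] || continue                 # the glob matched nothing
        pid=$(cat "$f")
        # only act on numeric PIDs that still exist
        if [[ "$pid" =~ ^[0-9]+$ ]] && kill -0 "$pid" 2>/dev/null; then
            kill -9 "$pid"
        fi
        rm -f "$f"
    done
    sleep 5
done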
I am testing out OpenMPI, provided and compiled by another user (I am using soft links to his directories for bin, include, etc. - all the mandatory directories), but I ran into this weird thing:
First of all, if I run mpirun with an -n setting <= 10, I can run the example below. testrunmpi.py simply prints out "run." from each core.
# I am in serverA.
bash-3.2$ /home/karl/bin/mpirun -n 10 ./testrunmpi.py
run.
run.
run.
run.
run.
run.
run.
run.
run.
run.
However, when I try running with -n greater than 10, I run into this:
bash-3.2$ /home/karl/bin/mpirun -n 24 ./testrunmpi.py
karl@serverB's password: Could not chdir to home directory /home/karl: No such file or directory
bash: /home/karl/bin/orted: No such file or directory
--------------------------------------------------------------------------
A daemon (pid 19203) died unexpectedly with status 127 while attempting
to launch so we are aborting.
There may be more information reported by the environment (see above).
This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
bash-3.2$
bash-3.2$
Permission denied, please try again.
karl@serverB's password:
Permission denied, please try again.
karl@serverB's password:
I can see that the work is dispatched to serverB, even though I am on serverA. I don't have an account on serverB. But if I invoke mpirun with -n <= 10, the work stays on serverA.
This is strange, so I checked /home/karl/etc/openmpi-default-hostfile and tried setting the following:
serverA slots=24 max_slots=24
serverB slots=0 max_slots=32
But the problem persists and still gives the same error message as above. What must I do in order to have my program run on serverA only?
The default hostfile in Open MPI is system-wide, i.e. its location is determined while the library is being built and installed and there is no user-specific version of it. The actual location can be obtained by running the ompi_info command like this:
$ ompi_info --param orte orte | grep orte_default_hostfile
MCA orte: parameter "orte_default_hostfile" (current value: <LOOK HERE>, data source: default value)
You can override the list of hosts in several different ways. First, you can provide your own hostfile via the -hostfile option to mpirun. If so, you don't have to put hosts with zero slots inside it - simply omit machines that you have no access to. For example:
localhost slots=10 max_slots=10
serverA slots=24 max_slots=24
You can also change the path to the default hostfile by setting the orte_default_hostfile MCA parameter:
$ mpirun --mca orte_default_hostfile /path/to/your/hostfile -n 10 executable
Instead of passing the --mca option each time, you can set the value in an exported environment variable called OMPI_MCA_orte_default_hostfile. This could be set in your shell's dot-rc file, e.g. in .bashrc if using Bash.
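For example (the hostfile path is just a placeholder; point it at your own file):

# in ~/.bashrc
export OMPI_MCA_orte_default_hostfile=$HOME/my-hostfile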
You can also specify the list of nodes directly via the -H (or -host) option.
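So in your case something along these lines should keep the job on serverA only; the exact syntax varies between Open MPI versions, and newer ones may want a slot count (serverA:24) or the --oversubscribe flag:

/home/karl/bin/mpirun -H serverA -n 24 ./testrunmpi.py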
Below is my code for spawning an FCGI script for nginx.
spawn-fcgi -d /home/ubuntu/workspace -f /home/ubuntu/workspace/index.py -a 127.0.0.1 -p 9001
Now, let's say I want to make changes to the index.py script and reload it without bringing down the system. How do I reload the spawned program so that the next connections use the updated program while the existing ones finish? For now I am killing the spawned process and running the command again. I am hoping for something more graceful.
I tried this by the way.
sudo kill -1 `sudo lsof -t -i:9001`
I have recently made something similar for node.js.
The idea is to have index.py as a very simple bootstrap script (which doesn't actually change much over time). It should catch SIGHUP, and reload/reread the application files (which are expected to change frequently).
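Once the bootstrap traps SIGHUP, reloading is just a matter of sending that signal to the spawned process, for example (assuming it is still the process listening on port 9001):

sudo kill -HUP "$(sudo lsof -t -i:9001)"
# or record the PID at spawn time via spawn-fcgi's -P pidfile option and signal that instead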
I am writing a file-syncing application where I collect events from the filesystem whenever a file is modified and then later copy it over to a remote share via rsync over ssh. In my setup I have a slot connected to a QTimer. Every 5 seconds I pick a file from an SQLite db for synchronization and call QProcess::start with the following parameters:
/usr/bin/rsync -a /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023" user@myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
I have at most 2 rsync processes running in parallel. This results in a process tree:
MyApp
 \_ rsync
 |   \_ ssh
 \_ rsync
     \_ ssh
The problem is that sometimes the application hangs and ps says that the ssh processes have gone zombie. First I tried to kill MyApp with SIGKILL, but no luck. Then I moved on to killing rsync and ssh, but still no luck. The whole tree hangs. And if I try to start the daemon from another console or even try to ssh to another box, I can't. My idea here is that somewhere ssh is blocking some IO resources. Any idea how to solve this?
P.S. This happens randomly and not often