rsync over ssh hangs in a file sync daemon (Qt)

I am writing a file syncing application where I collect events from the filesystem whenever a file is modified and later copy it to a remote share via rsync over ssh. In my setup I have a slot connected to a QTimer. Every 5 seconds I pick a file from an SQLite database for synchronization and launch it with QProcess::start using the following parameters:
/usr/bin/rsync -a /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023" user@myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
I have at most 2 rsync processes running in parallel. This results in a process tree:
MyApp
 \_ rsync
 |   \_ ssh
 \_ rsync
     \_ ssh
The problem is that sometimes the application hangs and ps shows that the ssh processes have gone zombie. First I tried to kill MyApp with SIGKILL, but no luck. Then I moved on to killing rsync and ssh, but still no luck. The whole tree hangs. And if I try to start the daemon from another console, or even ssh to another box, I can't. My idea here is that ssh is blocking on some I/O resource somewhere. Any idea how to solve this?
P.S. This happens randomly and not often
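For what it's worth, one way to test the blocked-I/O theory is to look at what the stuck processes are waiting on in the kernel. This is only a diagnostic sketch; 1234 is a placeholder PID:
ps -o pid,ppid,stat,wchan:32,cmd -C rsync,ssh   # zombie children show state Z; wchan shows the kernel wait channel
sudo cat /proc/1234/stack                       # kernel stack of a stuck PID (root only)
sudo strace -p 1234                             # attach and watch which syscall it is blocked in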

Related

How to shut down a computer (host)

I know it sounds weird, but I have a case where Deno would need to shut down its own host (and therefore kill its own process). Is this possible?
I specifically need this on Linux (Lubuntu), if that's relevant. I guess this requires sudo rights, which sucks, but it would be an option.
For those interested in the details: I'm writing Minecraft server management software, and if the server has had no players for 30 minutes, it shuts itself down to save some power. The Raspberry Pi it runs on is on 24/7 anyway and has a wake-on-LAN feature, so it can boot again. After boot, the server manager software would automatically start as a Linux service.
You can create a subprocess to do this:
await Deno.run({ cmd: ["shutdown", "-h", "now"] }).status();
Concepts
Deno is capable of spawning a subprocess via Deno.run.
--allow-run permission is required to spawn a subprocess.
Spawned subprocesses do not run in a security sandbox.
Communicate with the subprocess via the stdin, stdout and stderr streams.
Use a specific shell by providing its path/name and its string input switch, e.g. Deno.run({cmd: ["bash", "-c", "ls -la"]});
See also "Shutdown from terminal without entering password?" on Ask Ubuntu for ideas on how to avoid needing sudo to call shutdown, or for alternative commands that you can invoke from Deno instead.
To extend on what @mfulton2 wrote, here is how I made it work so that I did not need to start the program with sudo rights, but was still able to shut down the computer without using sudo outside or within the app.
Open or create the following file: sudo nano /etc/sudoers
Add the line username ALL = NOPASSWD: /sbin/shutdown, or %admin ALL = NOPASSWD: /sbin/shutdown to cover the whole admin group
In your Deno script, write Deno.run({ cmd: ["shutdown", "-h", "now"] }).status();
Execute script!
Keep in mind that any experienced Linux user will probably tell you that this is very dangerous (it probably is) and that it might not be the best way. But IMHO the damage this can cause is minor, as it only affects the shutdown command.
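A hedged way to sanity-check the rule from a shell before calling it from Deno (using visudo instead of nano gives you syntax checking; username is a placeholder for your account):
sudo visudo                      # edits /etc/sudoers and validates it on save
sudo -l | grep shutdown          # the NOPASSWD rule for /sbin/shutdown should be listed
sudo -n /sbin/shutdown -k now    # -n: never prompt; -k: only send a warning, nothing powers off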

How to avoid duplicating workers with Symfony local server?

I found out that I have quite a few Symfony local web server workers registered (around 35), and the number keeps growing. I usually just start the server with symfony serve and then kill it (Ctrl + \) when it is no longer needed. Apparently killing it leaves a worker behind, as seen in symfony server:status. Running symfony serve again just creates a new worker.
symfony server:status output:
Local Web Server
Not Running
Workers
PID 6327: /usr/bin/php7.2 -S 127.0.0.1:43653 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 24596: /usr/bin/php7.2 -S 127.0.0.1:37789 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 6575: /usr/bin/php7.4 -S 127.0.0.1:42505 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
PID 41550: /usr/bin/php7.4 -S 127.0.0.1:36313 -d variables_order=EGPCS /home/mindaugas/.symfony/php/83247c3521c3ac3990bf3f823ef473db0a9445e1-router.php
...
Environment Variables
None
So my questions regarding this:
#1: Is it possible to quickly kill the server? I assume symfony server:stop is the more correct way, but that requires an additional console window and typing the command.
#2: How do I kill the workers registered from previous sessions? Trying e.g. kill 6327 says that there is no such process. They are also not gone after a system restart.
Those extra workers bother me because the server log output in the console is duplicated for each of them. Right now each request to the server produces around 3k lines of log output in the console, which makes it pretty useless.
I have the same problem after upgrading to Symfony CLI version v4.19.0...
My (very) bad workaround:
rm /home/myusername/.symfony/var/83247c3521c3ac3990bf3f823ef473db0a9445e1/*
Edit: this answer is not accurate, as hinted at by @CrSrr's answer above.
The symfony command adds data to both the ./log and ./var directories. Deleting entries in only one of those does not remove the appearance of non-existent workers in the project directory. I was fooled by checking status in a directory where the server:start had never been run.
A bug report is on file with symfony here.
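Given that edit, a hedged sketch of a cleanup that clears both locations (paths assume the older ~/.symfony layout shown in the question; newer CLI releases use ~/.symfony5 instead):
PROJECT_HASH=83247c3521c3ac3990bf3f823ef473db0a9445e1   # the project hash from server:status above
rm -f ~/.symfony/var/"$PROJECT_HASH"/*                  # worker/PID bookkeeping
rm -f ~/.symfony/log/"$PROJECT_HASH"*                   # matching server logs
symfony server:status                                   # should now report "No Workers"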
I just faced a similar issue. The PIDs were nowhere to be found.
PS G:\workspace\joined> symfony server:status
Local Web Server
Not Running
Workers
PID 7732: C:\php\php-cgi.exe -b 63801 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 19324: C:\php\php-cgi.exe -b 62927 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 17968: C:\php\php-cgi.exe -b 50197 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
PID 14040: C:\php\php-cgi.exe -b 55075 -d error_log=C:\Users\George\.symfony\log\e79ad2f4b30a2f0a35c3b5ab08772770b382a3d6.log
Environment Variables
None
On Windows, the log files are kept in %USERPROFILE%\.symfony. There is most likely a similar location in your home directory on other systems. Deleting all the contents of that directory allowed a new Windows Terminal session to show:
PS G:\workspace> symfony server:status
Local Web Server
Not Running
Workers
No Workers
Environment Variables
None
Run symfony server:stop to stop the server, then run symfony serve again to start it. Good luck!
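If it helps, a hedged sketch of a workflow that avoids killing the foreground process in the first place (these subcommands exist in current Symfony CLI releases, but check symfony help on your version):
symfony serve -d        # start the local server detached (daemon mode)
symfony server:log      # tail its logs when needed; Ctrl+C only stops the tailing
symfony server:stop     # clean shutdown, so nothing stale is left in server:status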

ssh to execute all commands in guest machine

I created a bash script my_vp.sh that uses two commands:
setterm -cursor off
setterm -powersave off
[...]
#execute video commands
[...]
The script is on computerA.
But when I execute it over ssh from a terminal on computerB:
ssh pi@192.168.1.1
the video commands run correctly on computerA (the machine where the script is), but the setterm commands take effect on computerB (the terminal where I run the ssh command).
Can somebody help me solve this?
Thank you very much!
I am not sure I understood the question:
to execute a local script, but on another machine:
scp /path/to/local/script.bash pi@192.168.1.1:/tmp/copy_of_script.bash
and then, if it's copied correctly, execute it:
ssh pi#192.168.1.1 "chmod +x /tmp/copy_of_script.bash"
ssh pi#192.168.1.1 "bash /tmp/copy_of_script.bash"
to have the remote video (Xwindows, etc) commands appear on the originating machine:
replace ssh with ssh -X (to allow X forwarding, which will allocate a DISPLAY automatically on the remote machine and tunnel it back to the originating machine)
for X forwarding to work, there are some requirements (usually OK by default, but YMMV): read more about those requirements in this Unix.SE answer
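As a hedged variation on the steps above, you can also feed the local script to a remote shell in one step, without copying it first (host and script name taken from the question; the graphical command in the second line is just a placeholder):
ssh pi@192.168.1.1 'bash -s' < my_vp.sh            # run the local script on computerA
ssh -X pi@192.168.1.1 some_graphical_command       # placeholder command; its window shows up on computerB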

Openmpi trouble with mpirun and ssh

Here's the thing. I've installed Open MPI on two different computers, and I have already compiled and run the hello_world example separately on each machine; it works well. But the problem is when I launch this command:
mpirun -hostfile hosts -n 3 hello_c
with localhost and the IP of my other machine in the hosts file. The program then asks for my ssh password, and after I enter it nothing happens, as if mpirun had just crashed. My real problem is that I can't run an MPI job across two different computers through ssh.
I want to point out that all the Open MPI binaries and libraries are correctly set in the PATH, including hello_world.
Update
I've already set up passwordless ssh with an RSA key, but that doesn't work either. I've launched mpirun in debug mode (-d) and I got this:
[baptiste@baptiste RE51]$ mpirun -d -hostfile hosts hello_c
[baptiste.thinkFed:02666] procdir: /tmp/openmpi-sessions-baptiste@baptiste.thinkFed_0/53471/0/0
[baptiste.thinkFed:02666] jobdir: /tmp/openmpi-sessions-baptiste@baptiste.thinkFed_0/53471/0
[baptiste.thinkFed:02666] top: openmpi-sessions-baptiste@baptiste.thinkFed_0
[baptiste.thinkFed:02666] tmp: /tmp
[roommateServer:01102] procdir: /tmp/openmpi-sessions-baptiste@roommateServer_0/53471/0/1
[roommateServer:01102] jobdir: /tmp/openmpi-sessions-baptiste@roommateServer_0/53471/0
[roommateServer:01102] top: openmpi-sessions-baptiste@roommateServer_0
[roommateServer:01102] tmp: /tmp
And nothing else; it stays there and I have to kill mpirun.
For information, I tried to launch mpirun hello_c through ssh on the remote node with this command:
ssh roomServer mpirun hello_c
This works well... I definitely can't understand why it doesn't work across all nodes.
Assuming your compiler and your hosts file are set up properly, your problem is that you need to set up passwordless ssh between the two computers; otherwise you will get the behaviour you described. This is because MPI needs to communicate quickly and efficiently and cannot have its connections blocked by a password prompt, which would cause the messages to stall and the program to crash.
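A hedged sketch of that setup, with the user and host names taken from the question (the update above says a key is already in place, so also verify it works for the exact hostname listed in the hosts file):
ssh-keygen -t rsa -b 4096            # accept the default path and leave the passphrase empty
ssh-copy-id baptiste@roomServer      # installs the public key on the remote node
ssh baptiste@roomServer true         # must return without any password prompt
mpirun -hostfile hosts -n 3 hello_c  # then re-run the original command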

How to reload a spawned script for nginx fast cgi

Below is the command I use for spawning an FCGI script for nginx.
spawn-fcgi -d /home/ubuntu/workspace -f /home/ubuntu/workspace/index.py -a 127.0.0.1 -p 9001
Now, let's say I want to make changes to the index.py script and reload it without bringing down the system. How do I reload the spawned program so that new connections use the updated program while the existing ones finish? For now I am killing the spawned process and running the command again. I am hoping for something more graceful.
I tried this by the way.
sudo kill -1 `sudo lsof -t -i:9001`
I have recently made something similar for node.js.
The idea is to have index.py as a very simple bootstrap script (which doesn't actually change much over time). It should catch SIGHUP, and reload/reread the application files (which are expected to change frequently).
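A hedged, shell-flavoured sketch of that idea (the real bootstrap would live inside index.py itself; hooking it into spawn-fcgi's socket handling is not shown here):
#!/usr/bin/env bash
# Placeholder wrapper: run the app, and whenever SIGHUP arrives restart it,
# so requests after the reload are served by the updated code.
app=/home/ubuntu/workspace/index.py
trap 'echo "SIGHUP received, restarting"; kill "$pid" 2>/dev/null' HUP
while true; do
    python3 "$app" & pid=$!
    wait "$pid"
done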
