Ravenscar Task / Program Termination in Native Compilation (Ada)

As I understand it, one restriction of the Ravenscar profile is that tasks should not terminate.
This certainly makes sense on bare metal; however, when testing on a native system (as an executable program) it has the side effect that pressing Control-C to exit the main task leaves the program running in the background.
I plan to move my program to bare metal eventually and would like to be able to use the Ravenscar profile -- how can one allow the program to exit correctly in a situation like this? Abort statements are forbidden. If the Ravenscar profile were not applied, I could easily make this work by allowing tasks to terminate. Right now I am doing a killall -9, which works, but doesn't seem very elegant.

As it turns out, the issue had to do with how I was executing the program. In my case I was doing it over a remote ssh command, e.g.:
ssh myhost "sudo su -c mycommand"
Adding a -t to allocate a tty fixes the issue, that is:
ssh -t myhost "sudo su -c mycommand"
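For anyone scripting this, the same invocation can be driven from code as well. A minimal Python sketch, reusing the placeholder host and command from above:

import subprocess

# -t forces pseudo-terminal allocation, so Control-C is delivered to the
# remote process group and the Ravenscar program exits with the session.
subprocess.run(["ssh", "-t", "myhost", "sudo su -c mycommand"], check=True)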

Related

How to shut down a computer (host)

I know it sounds weird, but I have a case where Deno would need to shut down its own host (and therefore kill its own process). Is this possible?
I specifically need this for Linux (Lubuntu), if that's relevant. I guess this requires sudo rights, which sucks but would be an option.
For those interested in details: I'm coding Minecraft server software, and if the server has no players for 30 minutes, it will shut itself down to save some power. A Raspberry Pi that runs 24/7 anyway has a wake-on-LAN feature, so it can boot again. After boot, the server manager software would automatically start as a Linux service.
You can create a subprocess to do this:
await Deno.run({ cmd: ["shutdown", "-h", "now"] }).status();
Concepts
Deno is capable of spawning a subprocess via Deno.run.
--allow-run permission is required to spawn a subprocess.
Spawned subprocesses do not run in a security sandbox.
Communicate with the subprocess via the stdin, stdout and stderr streams.
Use a specific shell by providing its path/name and its string input switch, e.g. Deno.run({cmd: ["bash", "-c", "ls -la"]});
See also "Shutdown from terminal without entering password?" on Ask Ubuntu for ideas on how to avoid needing sudo to call shutdown, or for alternative commands that you can invoke from Deno instead.
To extend on what @mfulton2 wrote, here is how I made it work so that I did not need to start the program with sudo rights, but was still able to shut down the computer without using sudo outside of or within the app.
Open or create the sudoers file: sudo nano /etc/sudoers (better: sudo visudo, which checks the syntax before saving)
Add the line username ALL = NOPASSWD: /sbin/shutdown, replacing username with your user, or grant it to the whole admin group with %admin ALL = NOPASSWD: /sbin/shutdown
In your Deno script, write Deno.run({ cmd: ["sudo", "/sbin/shutdown", "-h", "now"] }).status(); (the sudo is what the NOPASSWD rule applies to)
Execute script!
Keep in mind that any experienced Linux user would probably tell you that this is very dangerous (it probably is) and that it might not be the very best way. But IMHO the damage this can cause is minor enough, as it only affects the shutdown command.
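For comparison, the same call made from outside Deno; a minimal Python sketch that assumes the sudoers rule above is in place:

import subprocess

# The NOPASSWD rule for /sbin/shutdown means no password prompt appears here.
subprocess.run(["sudo", "/sbin/shutdown", "-h", "now"], check=True)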

paramiko and nohup

OK, so I have paramiko v2.2.1 and I am trying to log in to a machine and restart a service. Inside the service script it basically starts a process via nohup. However, if I allow paramiko to disconnect as soon as it is done, the started process terminates with a PIPE signal when it writes to stdout.
If I start the service by ssh'ing into the box and starting it manually, there is no issue and it runs in the background fine. Also, if I add a long sleep (10 seconds) before disconnecting (close) paramiko, it also seems to work just fine.
The service is started from an init.d script via a line like this:
env LD_LIBRARY_PATH=$bin_path nohup $bin_path/ServerLoop.sh \
"$bin_path/Service service args" "$@" &
Where ServerLoop.sh simply calls the service forever in a loop like this so it will never die:
SERVER=$1
shift
ARGS=$@
logger $ARGS
while true; do
$SERVER $ARGS
STATUS=$?   # capture the service's exit code for the log line below
logger "$SERVER terminated with exit code: $STATUS. Server has been restarted"
sleep 1
done
I have noticed that when I start the service by ssh'ing into the box, I get a nohup.out file written to the root directory (/). However, when I run through paramiko, no nohup.out is written anywhere on the system, i.e. this is after I manually ssh into the box and start the service:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
/nohup.out
And this is after I run through paramiko:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
As I understand it, nohup will only redirect the output to nohup.out "if standard output is a terminal" (from the manual); otherwise it assumes the output is already going to a file, so it does not redirect. Hence I tried the following:
In [43]: import paramiko
In [44]: paramiko.__version__
Out[44]: '2.2.1'
In [45]: ssh = paramiko.SSHClient()
In [46]: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
In [47]: ssh.connect(ip, username='root', password=not_for_so_sorry, look_for_keys=False, allow_agent=False)
In [48]: stdin, stdout, stderr = ssh.exec_command("tty")
In [49]: stdout.read()
Out[49]: 'not a tty\n'
So I am thinking that nohup is not redirecting to nohup.out when I run it through paramiko because tty is not returning a terminal. I don't know why adding a sleep(10) would fix this, though, as the service is quite verbose when run from the command line.
I have also noticed that if the service is started from a manual ssh, its tty in the ps ax output is still set to the ssh tty; however, if the process is started by paramiko, its tty in the ps ax output is set to "?". Since both processes are run through nohup, I would have expected this to be the same.
If the problem is that nohup is indeed not redirecting the output to nohup.out because of the missing tty, is there a way to force this to happen, or a better way to run this sort of command via paramiko?
Thanks all, any help with this would be great :)
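One way to sidestep nohup's tty detection entirely is to redirect the output streams yourself, or to ask paramiko for a pseudo-terminal. A minimal sketch; the host, credentials, and service path are placeholders, and paramiko's get_pty flag does the equivalent of ssh -t:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("myhost", username="root", password="...")  # placeholder credentials

# Option 1: redirect explicitly, so the process never writes to the closed SSH
# channel (no PIPE signal) and nohup's nohup.out heuristic never comes into play.
ssh.exec_command(
    "nohup /etc/init.d/myservice restart > /tmp/myservice.log 2>&1 < /dev/null &"
)  # "myservice" stands in for the real init.d script

# Option 2: allocate a pseudo-terminal, mirroring what a manual ssh login does.
stdin, stdout, stderr = ssh.exec_command("tty", get_pty=True)
print(stdout.read())  # now reports a real tty instead of "not a tty"

ssh.close()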

Phabricator Daemon: `phd` was unable to switch to the correct user with `sudo`

I am currently trying to install and run Phabricator on a Raspberry Pi for personal use (even though it's not recommended by Phacility, I thought I'd still give it a try). So far, I was able to set up everything except running the daemons as the phd user.
/etc/passwd
phd:x:1001:1001:,,,:/home/phd:/bin/bash
/etc/shadow
phd:NP:17107:0:99999:7:::
I created the user phd and gave it NP in shadow, but Phabricator is still unable to switch to phd when starting the daemons.
sudo ./bin/phd restart
Interrupting process 19517...
Process 19517 exited.
Freeing active task leases...
Freed 0 task lease(s).
Starting daemons as phd
Launching daemons:
(Logs will appear in "/var/tmp/phd/log/daemons.log".)
PhabricatorRepositoryPullLocalDaemon (Static)
PhabricatorTriggerDaemon (Static)
PhabricatorTaskmasterDaemon (Autoscaling: group=task, pool=4, reserve=0)
Usage Exception: Daemons are configured to run as user "phd" in
configuration option `phd.user`, but the current user is "root" and
`phd` was unable to switch to the correct user with `sudo`. Command output:
Command failed with error #255!
COMMAND
exec sudo -En -u 'phd' -- ./phd-daemon '--verbose'
STDOUT
(empty)
STDERR
[2016-11-04 08:54:54] EXCEPTION: (Exception) Specified daemon PID directory
('/var/tmp/phd/pid') does not exist or is not writable by the daemon user!
at [<phutil>/src/daemon/PhutilDaemonOverseer.php:115]
arcanist(head=master, ref.master=fad85844314b), phabricator(head=master,
ref.master=6982bded7124), phutil(head=master, ref.master=2b7b1007bf87)
#0 PhutilDaemonOverseer::__construct(array) called at
[<phabricator>/scripts/daemon/launch_daemon.php:13]
What I tried is starting it as the phd user via su phd -c "/home/phd/phabricator/bin/phd restart", but that prompts me for a password.
I kept close to this guide, https://secure.phabricator.com/book/phabricator/article/diffusion_hosting/, as well as this gist: https://gist.github.com/sparrc/b4eff48a3e7af8411fc1
Any help is really, really appreciated!
Thanks to @JSON, who just made me aware of a line that I apparently always missed, the solution was:
sudo chmod go+w /var/tmp/phd/pid
This makes the directory writable by everyone and lets me start the daemons without the error.
We usually run
sudo -u phd ./bin/phd restart
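If you want to verify the permissions before restarting, here is a quick pre-flight check, a small Python sketch using the pid directory path from the exception above:

import os

pid_dir = "/var/tmp/phd/pid"  # path taken from the exception message above
if not (os.path.isdir(pid_dir) and os.access(pid_dir, os.W_OK)):
    raise SystemExit(pid_dir + " does not exist or is not writable by this user")
print(pid_dir + " exists and is writable")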

How to reload a spawned script for nginx FastCGI

Below is my code for spawning a FastCGI script for nginx.
spawn-fcgi -d /home/ubuntu/workspace -f /home/ubuntu/workspace/index.py -a 127.0.0.1 -p 9001
Now, let's say I want to make changes to the index.py script and reload without bringing down the system. How do I reload the spawned program so that the next connections use the updated program while the others finish? For now I am killing the spawned process and running the command again. I am hoping for something more graceful.
I tried this, by the way:
sudo kill -1 `sudo lsof -t -i:9001`
I have recently made something similar for node.js.
The idea is to have index.py as a very simple bootstrap script (which doesn't actually change much over time). It should catch SIGHUP and reload/reread the application files (which are expected to change frequently).
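A minimal sketch of such a bootstrap in Python 3; app and its handle_request function are placeholder names for whatever the real application module exposes:

import importlib
import signal

import app  # the frequently-changing application module (placeholder name)

def reload_app(signum, frame):
    # Re-import the application code in place; the FastCGI socket stays open,
    # so in-flight connections finish and new ones see the updated code.
    importlib.reload(app)

signal.signal(signal.SIGHUP, reload_app)

while True:
    app.handle_request()  # hypothetical dispatch into the (possibly reloaded) module

With this in place, the kill -1 from the question becomes the graceful reload trigger rather than a full restart.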

rsync over ssh hangs in a file sync daemon

I am writing a file syncing application where I collect events from the filesystem whenever a file is modified, and then later I copy it over to a remote share via rsync over ssh. In my setup I have a slot which is connected to a QTimer. Every 5 seconds I pick a file from a sqlite db for synchronization and start a QProcess with the following parameters:
/usr/bin/rsync -a /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023" user@myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
I have at most 2 rsync processes running in parallel. This results in a process tree:
MyApp
 \_ rsync
 |   \_ ssh
 \_ rsync
     \_ ssh
The problem is that sometimes the application hangs, and ps says that the ssh processes have gone zombie. First I tried to kill MyApp with SIGKILL, but no luck. Then I moved on to killing rsync and ssh, but still no luck. The whole tree hangs. And if I try to start the daemon from another console, or even try to ssh to another box, I can't. My idea here is that somewhere ssh blocks some IO resources. Any idea how to solve this?
P.S. This happens randomly and not often.
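One defensive pattern, not a confirmed diagnosis of this particular hang, is to always drain the child's output and enforce a timeout, killing the whole process group so no ssh child is left behind. A Python sketch of the idea, with the rsync arguments taken from the question:

import os
import signal
import subprocess

cmd = [
    "/usr/bin/rsync", "-a", "/aufs/another-test-folder/testfile286.txt",
    "--rsh", "ssh -p 8023",
    "user@myserver.de:/home/neox/another-test-folder/testfile286.txt",
    "--rsync-path", "mkdir -p /home/neox/another-test-folder && rsync",
]

# start_new_session puts rsync and its ssh child into their own process
# group, so a timeout can kill both instead of leaving a zombie ssh behind.
proc = subprocess.Popen(
    cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, start_new_session=True
)
try:
    out, err = proc.communicate(timeout=300)  # drain the pipes so neither side blocks on IO
except subprocess.TimeoutExpired:
    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
    proc.wait()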
