How to shut down a computer (host) - deno

I know it sounds weird, but I have a case where Deno would need to shut down its own host (and therefore kill its own process). Is this possible?
I specifically need this for Linux (Lubuntu), if that's relevant. I guess this requires sudo rights, which sucks but would be an option.
For those interested in details: I'm writing Minecraft server software, and if the server has no players for 30 minutes, it will shut itself down to save some power. The Raspberry Pi it runs on is powered 24/7 anyway and has a Wake-on-LAN feature, so it can be booted again. After boot, the server manager software would automatically start as a Linux service.

You can create a subprocess to do this:
await Deno.run({ cmd: ["shutdown", "-h", "now"] }).status();
Concepts
Deno is capable of spawning a subprocess via Deno.run.
--allow-run permission is required to spawn a subprocess.
Spawned subprocesses do not run in a security sandbox.
Communicate with the subprocess via the stdin, stdout and stderr streams.
Use a specific shell by providing its path/name and its string input switch, e.g. Deno.run({cmd: ["bash", "-c", "ls -la"]});
See also command line - Shutdown from terminal without entering password? - Ask Ubuntu for ideas on how to avoid needing sudo to call shutdown or alternative commands that you can invoke from Deno instead.
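Note that Deno.run has since been deprecated in favor of Deno.Command. A minimal sketch of the same shutdown call with the newer API (assuming Deno 1.31+, still gated behind --allow-run):
const command = new Deno.Command("shutdown", { args: ["-h", "now"] });
// output() runs the process to completion and collects its exit status.
const { code } = await command.output();
console.log(`shutdown exited with code ${code}`);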

To extend on what @mfulton2 wrote, here is how I made it work so that I did not need to start the program with sudo rights, but was still able to shut down the computer without being prompted for a password, outside of or within the app.
Open the sudoers file: sudo nano /etc/sudoers (safer: sudo visudo, which syntax-checks your changes before saving).
Add the line username ALL = NOPASSWD: /sbin/shutdown (replace username with your user), or, to cover every member of the admin group, add %admin ALL = NOPASSWD: /sbin/shutdown instead.
In your Deno script, write await Deno.run({ cmd: ["sudo", "shutdown", "-h", "now"] }).status(); (going through sudo is what triggers the NOPASSWD rule).
Execute script!
Keep in mind that any experienced Linux user will probably tell you that this is very dangerous (it probably is) and that it might not be the best way. But IMHO the damage this can cause is minor, as the rule only affects the shutdown command.
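If you want to check the rule without actually powering off, you can list your non-interactive sudo rights (my addition, not part of the original answer):
sudo -n -l
If the entry took effect, the output lists /sbin/shutdown with a NOPASSWD tag.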

Related

Ravenscar Task / Program Termination in Native Compilation

As I understand it, one restriction of the Ravenscar profile is that tasks should not terminate.
This certainly makes sense on bare metal; however, when testing on a native system (as an executable program) it has the side effect that pressing Control-C to exit the main task leaves the program running in the background.
I plan to move my program to bare metal eventually and would like to be able to use the Ravenscar profile -- how can one allow the program to exit correctly when doing something like this? Abort statements are forbidden. If the Ravenscar profile was not applied, I could easily make this work by allowing tasks to terminate. Right now I am doing a killall -9, which works, but doesn't seem very elegant.
As it turns out, the issue had to do with how I was executing the program. In my case I was doing it over a remote ssh command, e.g.:
ssh myhost "sudo su -c mycommand"
Adding -t to allocate a tty fixes the issue: with a pseudo-terminal allocated, Control-C reaches the remote session, and the remote command is hung up when the connection closes, so the program terminates instead of lingering. That is:
ssh -t myhost "sudo su -c mycommand"

paramiko and nohup

OK, so I have paramiko v2.2.1 and I am trying to log in to a machine and restart a service. The service script basically starts a process via nohup. However, if I allow paramiko to disconnect as soon as it is done, the started process terminates with a PIPE signal when it writes to stdout.
If I start the service by ssh'ing into the box and starting it manually, there is no issue and it runs in the background fine. Also, if I add a long sleep(10) before disconnecting (close) paramiko, it also seems to work just fine.
The service is started via a init.d script via a line like this:
env LD_LIBRARY_PATH=$bin_path nohup $bin_path/ServerLoop.sh \
"$bin_path/Service service args" "$#" &
Where ServerLoop.sh simply calls the service forever in a loop like this so it will never die:
SERVER=$1
shift
ARGS=$@
logger $ARGS
while true; do
    $SERVER $ARGS
    STATUS=$?   # capture the exit code for the log message below
    logger "$SERVER terminated with exit code: $STATUS. Server has been restarted"
    sleep 1
done
I have noticed that when I start the service by ssh'ing into the box, I get a nohup.out file written to the filesystem root. However, when I run it through paramiko, no nohup.out is written anywhere on the system. I.e., this is after I manually ssh into the box and start the service:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
/nohup.out
And this is after I run through paramiko:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
As I understand it, nohup only redirects the output to nohup.out "if standard output is a terminal" (from the manual); otherwise it assumes the output is already going to a file and does not redirect. Hence I tried the following:
In [43]: import paramiko
In [44]: paramiko.__version__
Out[44]: '2.2.1'
In [45]: ssh = paramiko.SSHClient()
In [46]: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
In [47]: ssh.connect(ip, username='root', password=not_for_so_sorry, look_for_keys=False, allow_agent=False)
In [48]: stdin, stdout, stderr = ssh.exec_command("tty")
In [49]: stdout.read()
Out[49]: 'not a tty\n'
So I am thinking that nohup is not redirecting to nohup.out when I run it through paramiko because tty does not report a terminal. I don't know why adding a sleep(10) would fix this, though, as the service is quite verbose when run from the command line.
I have also noticed that if the service is started from a manual ssh, its tty in the ps ax output is still set to the ssh tty; however, if the process is started by paramiko, its tty in the ps ax output is set to "?". Since both processes are run through nohup, I would have expected this to be the same.
If the problem is that nohup is indeed not redirecting the output to nohup.out because of the missing tty, is there a way to force this to happen, or a better way to run this sort of command via paramiko?
Thanks all, any help with this would be great :)
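No answer is quoted above, but a common workaround is to stop depending on nohup's tty detection entirely: redirect stdout/stderr to a file (or /dev/null) yourself, so the daemon never writes to the soon-to-be-closed SSH channel, or request a pseudo-terminal with get_pty=True. A minimal sketch with paramiko; the host, credentials, and service path are placeholders:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("192.0.2.10", username="root", password="secret")

# Redirect all output explicitly so the backgrounded process holds no
# reference to the SSH channel and cannot get a SIGPIPE when it closes.
cmd = "nohup /etc/init.d/myservice start > /tmp/myservice.log 2>&1 < /dev/null &"
stdin, stdout, stderr = ssh.exec_command(cmd)
stdout.channel.recv_exit_status()  # wait for the launching shell to exit

# Alternative: allocate a pty so nohup sees a terminal and writes
# nohup.out itself:
# stdin, stdout, stderr = ssh.exec_command(cmd, get_pty=True)

ssh.close()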

Phabricator Daemon: `phd` was unable to switch to the correct user with `sudo`

I am currently trying to install and run Phabricator on a Raspberry Pi for personal use (even though it's not recommended by Phacility, I thought I'd still give it a try). So far, I was able to set everything up except running the daemons as the phd user.
/etc/passwd
phd:x:1001:1001:,,,:/home/phd:/bin/bash
/etc/shadow
phd:NP:17107:0:99999:7:::
I created the user phd and gave it NP in shadow, but Phabricator is still unable to switch to phd when starting the daemon.
sudo ./bin/phd restart
Interrupting process 19517...
Process 19517 exited.
Freeing active task leases...
Freed 0 task lease(s).
Starting daemons as phd
Launching daemons:
(Logs will appear in "/var/tmp/phd/log/daemons.log".)
PhabricatorRepositoryPullLocalDaemon (Static)
PhabricatorTriggerDaemon (Static)
PhabricatorTaskmasterDaemon (Autoscaling: group=task, pool=4, reserve=0)
Usage Exception: Daemons are configured to run as user "phd" in
configuration option `phd.user`, but the current user is "root" and
`phd` was unable to switch to the correct user with `sudo`. Command output:
Command failed with error #255!
COMMAND
exec sudo -En -u 'phd' -- ./phd-daemon '--verbose'
STDOUT
(empty)
STDERR
[2016-11-04 08:54:54] EXCEPTION: (Exception) Specified daemon PID directory
('/var/tmp/phd/pid') does not exist or is not writable by the daemon user!
at [<phutil>/src/daemon/PhutilDaemonOverseer.php:115]
arcanist(head=master, ref.master=fad85844314b), phabricator(head=master,
ref.master=6982bded7124), phutil(head=master, ref.master=2b7b1007bf87)
#0 PhutilDaemonOverseer::__construct(array) called at
[<phabricator>/scripts/daemon/launch_daemon.php:13]
What I tried is starting the daemon as the phd user via su phd -c "/home/phd/phabricator/bin/phd restart", but that prompts me for a password.
I kept close to this guide https://secure.phabricator.com/book/phabricator/article/diffusion_hosting/ as well as this https://gist.github.com/sparrc/b4eff48a3e7af8411fc1
Any help is really, really appreciated!
Thanks to @JSON, who made me aware of a line that I apparently always missed, the solution was:
sudo chmod go+w /var/tmp/phd/pid
This made the directory writable for everyone and let me start the daemon.
We usually run
sudo -u phd ./bin/phd restart
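If you would rather not make the directory world-writable, an alternative (my suggestion, not from the thread) is to give the configured daemon user ownership of the phd directories instead:
sudo mkdir -p /var/tmp/phd/pid /var/tmp/phd/log
sudo chown -R phd:phd /var/tmp/phd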

R Parallel - connecting to remote cores

Working in R 2.14.1, on Windows 7
Using the package parallel in R, I'm trying to take advantage of cores outside of my local machine available on my network, where all remote hosts I am connecting to are identical Windows machines.
The basic form of the commands are as such to make the connection.
library(parallel)
#assume 8 cores per machine
cl<-makePSOCKcluster(c(rep("localhost", 8), rep("otherhost", 8)))
Of course, trying to debug these things can be pretty tricky, but here is where I'm at with it.
If I specify the manual = TRUE flag as below
cl<-makePSOCKcluster(c(rep("localhost", 8), rep("otherhost", 8)), manual=TRUE)
there are no problems connecting to the remote host, and running a parallel process. The computers have identical setups to the one that I am working on. Yet, when this manual flag is not set, the connection command hangs.
This seems to indicate that, since the manual flag bypasses ssh when making the connection to the host, ssh is the problem when manual=FALSE.
It is not currently guaranteed that the remote computers have ssh on them. The question is: given that I have all the pertinent Windows login information for my remote hosts, and that I cannot change the settings on those computers, how would I connect to cores on the remote machines with the parallel package in R without specifying manual = TRUE?
Alternatively, if ssh must be installed for this to happen, let's assume all computers have ssh on them. How would I connect to cores on the remote machines without circumventing ssh?
If you need any more information please let me know, I appreciate the time.
UPDATE 1
8-26-14
Thanks to Steve Weston for his insights. I will provide an update with the exact tools and setup I use to get my system working when it's up and running.
Feel free to comment or post if you have anything else to add on the best route for connecting to a remote Windows machine from a Windows machine via makePSOCKcluster with the manual flag set to FALSE.
When creating a PSOCK cluster with manual=FALSE, the only way to start a worker on a remote machine is with "ssh", "rsh", or something command-line compatible, such as "plink" from PuTTY. The reason is that makePSOCKcluster starts the remote workers using the "system" function to execute commands of the form:
ssh -l user otherhost '/usr/lib/R/bin/Rscript' -e 'parallel:::.slaveRSOCK()' MASTER=myhost PORT=10187 OUT=/dev/null TIMEOUT=2592000 METHODS=TRUE XDR=TRUE
You can confirm this by looking at the source code for the newPSOCKnode function in the file snowSOCK.R from the parallel package.
For this to work, the ssh-compatible command must be available on the local machine and a corresponding ssh daemon must be running on each of the remote machines, otherwise makePSOCKcluster will simply hang. I've found that installing a good, working ssh daemon is the difficult part on Windows.
Unfortunately, manual=TRUE is generally the easiest way to create a PSOCK cluster on multiple Windows machines.
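A quick sanity check before calling makePSOCKcluster (my suggestion, not part of the answer above) is to run the same kind of command yourself from R and see whether it returns promptly:
system("ssh -l user otherhost Rscript --version")
If this hangs or prompts for a password, makePSOCKcluster with manual=FALSE will hang in the same way.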
Hello everyone, I had the same problem and I managed to solve it. It is June 2018 as I write this answer; my OS is Windows 10 and the R version is 3.2.2. It is surprising to see that this problem still exists after 4 years. I hope it can be fixed in a future release.
Before you move on, please make sure you can access the server via ssh in cmd. I didn't put any password in my code because I have the private key; you don't need to do that, and you will see the reason later.
Fixing The problem
File directory
Since the function makePSOCKcluster works when you start the workers manually, my first try was to set manual=TRUE and see the output. Here is my result:
machineAddresses <- list(list(host='192.168.1.220', user='jeff'))
cl <- makePSOCKcluster(machineAddresses, manual = TRUE)
> Manually start worker on 192.168.1.220 with
"C:/PROGRA~1/R/R-32~1.2/bin/x64/Rscript" -e
"parallel:::.slaveRSOCK()" MASTER=DESKTOP-U5JA32O PORT=11756
OUT=/dev/null TIMEOUT=2592000 METHODS=TRUE XDR=TRUE
OK, here is the first problem: the Rscript location is incorrect (it is the location of Rscript on the server). Generally it lives under C:\Program Files; on my server it is C:\Program Files\R\R-3.2.2\bin. So we need to correct it by adding an option that tells the code where Rscript is:
machineAddresses <-list(list(host='192.168.1.220',
user='jeff',rscript="C:/Program Files/R/R-3.3.2/bin/Rscript"))
CMD problem
Once you fix the directory problem, you will find that the code still hangs forever. Then we need to check whether we can manually access the server from R; my code is:
system("ssh jeff#192.168.1.220")
> GetConsoleMode on STD_INPUT_HANDLE failed with 6
I honestly don't know what this error means, but we just need to get around it. Inspired by @Steve Weston, I decided to use PuTTY, so I installed it and changed my code to:
machineAddresses <-list(list(host='192.168.1.220',user='jeff',rscript="C:/Program Files/R/R-3.3.2/bin/Rscript",rshcmd="plink -pw qwer"))
The option -pw supplies the password. Because I'm a newbie to PuTTY, I don't know how to make the private key work automatically in PuTTY, so I use the easiest way to deal with that: put in your password! The above code is equivalent to the following in cmd:
plink -pw qwer jeff@192.168.1.220 Rscript -e parallel:::.slaveRSOCK() MASTER=DESKTOP-U5JA32O PORT=11063 OUT=/dev/null TIMEOUT=2592000 METHODS=TRUE XDR=TRUE
And this is exactly what we would do if we created the workers manually. For those who are new like me: you need to add the PuTTY directory to PATH in your environment variables to run plink. Here is my final code:
machineAddresses <-list(list(host='192.168.1.220',user='jeff',rscript="C:/Program Files/R/R-3.3.2/bin/Rscript",rshcmd="plink -pw qwer"))
cl <- makePSOCKcluster(machineAddresses,manual = F)
I run it with no problem at all. In summary, the function makePSOCKcluster makes two mistakes:
Assuming a wrong Rscript directory on the server (at the very least it should assume the same directory as on my local computer, but it didn't; I don't know where that strange directory comes from).
Using the ssh command to start the connection, which does not work from R. It works well in cmd, but not in R; I don't know the reason.
If you are still not able to use makePSOCKcluster, here is one trick: try to connect to the server from R using the system function first. It can give you an error code that may indicate where the problem is. Here is my debugging code:
system("plink -pw qwer jeff@192.168.1.220 Rscript -e parallel:::.slaveRSOCK() MASTER=DESKTOP-U5JA32O PORT=11063 OUT=/dev/null TIMEOUT=2592000 METHODS=TRUE XDR=TRUE")

rsync over ssh hangs in a file sync daemon

I am writing a file-syncing application in which I collect events from the filesystem whenever a file is modified and then later copy it over to a remote share via rsync over ssh. In my setup I have a slot connected to a QTimer. Every 5 seconds I pick a file from an SQLite DB for synchronization and launch it via QProcess::start with the following parameters:
/usr/bin/rsync -a /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023" user@myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
I have at most 2 rsync processes running in parallel. This results in a process tree:
MyApp
\_rsync
| \_ssh
|_rsync
\_ssh
The problem is that sometimes the application hangs, and ps says that the ssh processes have gone zombie. First I tried to kill MyApp with SIGKILL, but no luck. Then I moved on to killing rsync and ssh, but still no luck. The whole tree hangs. And if I try to start the daemon from another console, or even try to ssh to another box, I can't. My idea here is that ssh is blocking some I/O resource somewhere. Any idea how to solve this?
P.S. This happens randomly and not often
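No answer is quoted above, but one common mitigation is to make both rsync and ssh give up on a dead connection instead of blocking forever, e.g. rsync's --timeout combined with ssh keepalive options (the values here are illustrative):
/usr/bin/rsync -a --timeout=60 /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023 -o ConnectTimeout=10 -o ServerAliveInterval=15 -o ServerAliveCountMax=3" user@myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
A QProcess whose child has exited can then be cleaned up with waitForFinished(), and a truly stuck one terminated with QProcess::kill().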
