Erlang modules whose functions are used in MapReduce jobs have to be loaded into Riak in order to work.
Since I have a cluster, I have to restart all the nodes every time I make a change to the source code of such a function. I thought I could use:
riak-admin erl-reload
I was wrong.
How can I do this quickly, without waiting for all the nodes to stop and start again every time? It takes about 20 seconds for five nodes, which doesn't feel right.
You could do that from a riak attach session using the network load nl/1 shell function:
$ sudo riak attach
Remote Shell: Use "Ctrl-C a" to quit. q() or init:stop() will terminate the riak node.
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:16:16] [async-threads:0] [kernel-poll:false]
Eshell V5.9.1 (abort with ^G)
1> nl(custom_mr).
abcast
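Building on that session, the full edit-and-reload cycle can look something like the sketch below. The module name and the target directory are assumptions; adjust them to wherever your custom MapReduce modules actually live (the directory has to be on Riak's code path, e.g. via the add_paths setting).
# Recompile the edited module into a directory that is on Riak's code path
# (path and module name are placeholders)
erlc -o /usr/lib/riak/lib/custom/ebin custom_mr.erl
# Attach to the local node and push the new object code to every connected node
sudo riak attach
#   1> nl(custom_mr).   %% distributes the freshly compiled beam cluster-wide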
I know it sounds weird, but I have a case where Deno would need to shut down its own host (and therefore kill its own process). Is this possible?
I specifically need this for Linux (Lubuntu), if that's relevant. I guess this requires sudo rights, which sucks but would be an option.
For those interested in the details: I'm coding Minecraft server software, and if the server has no players for 30 minutes, it shuts itself down to save some power. The Raspberry Pi it runs on, which runs 24/7 anyway, has a wake-on-LAN feature, so it can boot again. After boot, the server manager software automatically starts as a Linux service.
You can create a subprocess to do this:
await Deno.run({ cmd: ["shutdown", "-h", "now"] }).status();
Concepts
Deno is capable of spawning a subprocess via Deno.run.
--allow-run permission is required to spawn a subprocess (see the invocation sketch after this list).
Spawned subprocesses do not run in a security sandbox.
Communicate with the subprocess via the stdin, stdout and stderr streams.
Use a specific shell by providing its path/name and its string input switch, e.g. Deno.run({cmd: ["bash", "-c", "ls -la"]});
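As a minimal sketch of that permission requirement, this is how the script would be launched; the file name shutdown.ts is just a placeholder, and the allow-list form is optional:
# grant permission to spawn subprocesses (file name is a placeholder)
deno run --allow-run shutdown.ts
# optionally restrict which programs may be spawned
deno run --allow-run=shutdown shutdown.ts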
See also command line - Shutdown from terminal without entering password? - Ask Ubuntu for ideas on how to avoid needing sudo to call shutdown or alternative commands that you can invoke from Deno instead.
To extend on what mfulton2 wrote, here is how I made it work so that I did not need to start the program with sudo rights, but was still able to shut down the computer without using sudo outside of or within the app.
Open the sudoers file: sudo nano /etc/sudoers (or, more safely, sudo visudo, which checks the syntax before saving).
Add the line username ALL = NOPASSWD: /sbin/shutdown (replacing username with your user), or, to cover every member of the admin group, the line %admin ALL = NOPASSWD: /sbin/shutdown.
In your deno script, write Deno.run({ cmd: ["shutdown", "-h", "now"]}).status();
Execute the script! (A quick way to verify the sudoers rule is sketched below.)
Keep in mind that any experienced Linux user would potentially tell you that this is very dangerous (it probably is) and that it might not be the very best way. But IMHO the damage this can cause is minor enough, as it only affects the shutdown command.
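Before relying on it from the app, the rule can be checked from a terminal; a small sketch, assuming shutdown lives at /sbin/shutdown as in the rule above:
# list the sudo rules for the current user; the NOPASSWD entry should appear
sudo -l
# should now work without a password prompt (-n makes sudo fail instead of asking)
sudo -n /sbin/shutdown -h +5
# cancel the test shutdown again
sudo -n /sbin/shutdown -c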
As I understand it, one restriction of the Ravenscar profile is that tasks should not terminate.
This certainly makes sense on bare metal; however, when testing on a native system (as an executable program) it has the side effect that pressing Control-C to exit the main task leaves the program running in the background.
I plan to move my program to bare metal eventually and would like to be able to use the Ravenscar profile -- how can one allow the program to exit correctly when doing something like this? Abort statements are forbidden. If the Ravenscar profile was not applied, I could easily make this work by allowing tasks to terminate. Right now I am doing a killall -9, which works, but doesn't seem very elegant.
As it turns out, the issue had to do with how I was executing the program. In my case I was doing it over a remote ssh command, e.g.:
ssh myhost "sudo su -c mycommand"
Adding a -t to allocate a tty fixes the issue, that is:
ssh -t myhost "sudo su -c mycommand"
In the example below, if the shell script shell_script.sh sends a job to the cluster, is it possible to make snakemake aware of that cluster job's completion? That is, first, file a should be created by shell_script.sh, which submits its own job to the cluster, and then, once this cluster job is completed, file b should be created.
For simplicity, let's assume that snakemake is run locally, meaning that the only cluster job originates from shell_script.sh and not from snakemake.
localrules: that_job

rule all:
    input:
        "output_from_shell_script.txt",
        "file_after_cluster_job.txt"

rule that_job:
    output:
        a = "output_from_shell_script.txt",
        b = "file_after_cluster_job.txt"
    shell:
        """
        shell_script.sh {output.a}
        touch {output.b}
        """
PS - At the moment, I am using a sleep command to wait before the job is considered "completed", but this is an awful workaround that could give rise to several problems.
Snakemake can manage this for you with the --cluster argument on the command line.
You can supply a template for the jobs to be executed on the cluster.
As an example, here is how I use snakemake on an SGE-managed cluster:
First, the template that will encapsulate the jobs, which I called sge.sh:
#$ -S /bin/bash
#$ -cwd
#$ -V
{exec_job}
Then I run this directly on the login node:
snakemake -rp --cluster "qsub -e ./logs/ -o ./logs/" -j 20 --jobscript sge.sh --latency-wait 30
--cluster gives the command used to submit each job to the queuing system
--jobscript is the template in which jobs will be encapsulated
--latency-wait is important if the file system takes a bit of time to write the files. Your job might end and return before the output of the rules is actually visible to the filesystem, which will cause an error
Note that in the Snakefile you can mark rules that should not be submitted to the cluster nodes with the keyword localrules:
Otherwise, depending on your queuing system, some options exist to wait for a job sent to the cluster to finish (see the sketch after these links):
SGE:
Wait for set of qsub jobs to complete
SLURM:
How to hold up a script until a slurm job (start with srun) is completely finished?
LSF:
https://superuser.com/questions/46312/wait-for-one-or-all-lsf-jobs-to-complete
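If you control shell_script.sh itself, most schedulers also offer a blocking submission mode, so the script only returns once its cluster job is done. The flags below are from memory and the job script name is a placeholder, so double-check them against your scheduler's documentation:
# SGE: block until the submitted job finishes
qsub -sync y my_job.sh
# SLURM: sbatch returns only when the job completes (srun blocks by default)
sbatch --wait my_job.sh
# LSF: submit the job and wait for it to finish
bsub -K ./my_job.sh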
I use a cron task to regularly run an Rscript. Unfortunately, I need to do this on a small AWS instance, and the process may hang, piling more and more processes on top of each other until the whole system is lagging.
I would like to write a cron task to kill all R processes lasting longer than one minute. I found another answer on Stack Overflow that I've adapted and that I think would solve the problem. I came up with:
if [[ "$(uname)" = "Linux" ]];then killall --older-than 1m "/usr/lib/R/bin/exec/R --slave --no-restore --file=/home/ubuntu/script.R";fi
I copied the process string directly from htop, but it does not work as I expect: I get a No such file or directory error even though I've checked it a few times.
I need to kill all R processes that have lasted longer than a minute. How can I do this?
You may want to avoid killing processes that belong to other users, and to fall back to SIGKILL (kill -9) only after trying SIGTERM (kill -15). Here is a script you could execute every minute with a cron job (a sample crontab entry follows the script):
#!/bin/bash
# Process name to match exactly (the R interpreter process is named "R")
PROCESS="R"
# Maximum allowed age of a process, in seconds
MAXTIME=60

function killpids()
{
    # Only consider processes owned by the current user
    PIDS=$(pgrep -u "${USER}" -x "${PROCESS}")
    # Loop over all matching PIDs
    for pid in ${PIDS}; do
        # Elapsed (wall-clock) time of the process, in seconds
        ETIME=$(ps -o etimes= -p "${pid}" | tr -d ' ')
        # Check if the process should be killed
        if [ -n "${ETIME}" ] && [ "${ETIME}" -gt "${MAXTIME}" ]; then
            kill "${1}" "${pid}"
        fi
    done
}

# Leave a chance to kill processes properly (SIGTERM)
killpids "-15"
sleep 5
# Now kill remaining processes (SIGKILL)
killpids "-9"
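To run it every minute as suggested, a crontab entry along these lines works; the script path is a placeholder:
# m h dom mon dow command
* * * * * /home/ubuntu/kill_stale_R.sh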
Why involve an additional process every minute with cron?
Would it not be easier to start R with timeout from coreutils? The processes will then be killed automatically after the time you chose, as in the crontab example below.
timeout [option] duration command [arg]…
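For instance, the cron entry that starts the script could itself impose the limit. timeout sends SIGTERM once the duration expires, and -k adds a later SIGKILL for processes that ignore it; the schedule and the 30-second grace period here are only illustrative:
# run the script every 5 minutes; kill it after 60 s, SIGKILL 30 s later if it ignores SIGTERM
*/5 * * * * timeout -k 30 60 Rscript /home/ubuntu/script.R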
I think the best option is to do this with R itself. I am no expert, but it seems the future package will allow executing a function in a separate thread. You could run the actual task in a separate thread, and in the main thread sleep for 60 seconds and then stop().
Previous Update
user1747036's answer, which recommends timeout, is a better alternative.
My original answer
This question is more appropriate for superuser, but here are a few things wrong with
if [[ "$(uname)" = "Linux" ]];then
killall --older-than 1m \
"/usr/lib/R/bin/exec/R --slave --no-restore --file=/home/ubuntu/script.R";
fi
The name argument is either the name of the executable image or the path to it; you have included its command-line parameters as well (see the corrected invocation after these notes).
If -s signal is not specified, killall sends SIGTERM, which your process may ignore. Are you able to kill a long-running script with this on the command line? You may need SIGKILL / -9.
More at http://linux.die.net/man/1/killall
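Putting both points together, matching on the bare process name and escalating the signal would look roughly like this; the 1m threshold comes from the question, and --older-than needs a reasonably recent psmisc:
if [[ "$(uname)" = "Linux" ]]; then
    # killall matches the process name only, not the full command line
    killall --older-than 1m R
    sleep 5
    # escalate for anything that ignored SIGTERM
    killall --older-than 1m -s KILL R
fi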
I am using
ffmpeg.exe -f dshow -i audio=virtual-audio-capturer -q 4 -y -tune zerolatency outfile.mp3
to grab sound from the speakers. This command runs indefinitely, and I have to use
Process->kill();
to stop it in my application.
I want to know whether it is safe to kill it the way I am killing it, or whether there is a better way.
As the Qt documentation states, using the function kill will terminate the process immediately.
So, depending on what you count as 'safe', this may or may not be OK for you. If ffmpeg is in the middle of processing data, or saving to a file, calling this will stop it from finishing its task.
You could ask the process to stop politely by sending it a signal such as SIGTERM or SIGINT (e.g. kill(pid, SIGTERM)) if you're using Linux or OS X; ffmpeg handles these by finalizing the output file before exiting. On Unix, QProcess::terminate() sends SIGTERM for you instead of SIGKILL. Windows will likely have an equivalent method.