Is there any way to make sure zsh doesn't exit when cluster administrators auto-kill all idle processes after some fairly short amount of time?
I have already figured out how to do it with tmux, but my zsh sessions keep dying and the admins won't budge on the policy. I can always leave a dumb while loop running, but starting one by hand in every single zsh session is tedious. It is also very frustrating to come back and find that my process has exited with no idea why, because the shell that hosted it was killed and the output history is gone.
I use oh-my-zsh, so if there is a module in there that can do this, that would be great too.
I just added a simple script to my .zshrc. It is an inelegant solution, but it does work.
if [[ "$HOSTNAME" =~ "^myhost*" ]]; then
while true; do echo 'hi' > /dev/null; sleep 120; done &
I am sure there is a better way because this makes shell startup slower than I would like, but that is what I came up with for now.
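A slightly tidier variant I have been toying with (untested on the cluster, and KEEPALIVE_PID is just a name I made up) exports the loop's PID so that nested or child shells inherit it and don't each start their own loop:

# Hypothetical refinement: start at most one keep-alive loop per login.
# Child shells see KEEPALIVE_PID in the environment and skip the block.
if [[ "$HOSTNAME" =~ ^myhost && -z "$KEEPALIVE_PID" ]]; then
    while true; do echo 'hi' > /dev/null; sleep 120; done &
    export KEEPALIVE_PID=$!
fi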
I have a workflow with many sessions that run in parallel. When one of the sessions fails, the workflow waits for the other sessions to complete and only then does the entire workflow fail. We have selected the option "fail parent if this task fails", but we want the workflow to fail and stop immediately if any session fails, without waiting for the other sessions to finish.
PS: We have a Unix shell script that calls all the workflows one by one, so if we can solve it using Unix shell scripting that would be fine as well.
Does anyone have any solution for it?
The best thing you can do in Informatica is use a Control Task to abort the workflow, and connect it to all sessions with an OR condition. Something like:
start--S1--S2--S3
        \   \   \
         \---\---\--(OR)--CTL
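If you would rather tackle it from the shell side, here is a rough fail-fast sketch. It assumes each workflow can be launched as its own shell command (the run_workflow_*.sh names are placeholders for whatever your script currently calls, e.g. your pmcmd invocations), runs them in parallel, and kills the rest as soon as one fails:

#!/bin/bash
# Placeholder commands; substitute your real workflow invocations.
./run_workflow_1.sh & pids+=($!)
./run_workflow_2.sh & pids+=($!)
./run_workflow_3.sh & pids+=($!)

# wait -n (bash 4.3+) returns as soon as any one background job finishes;
# on the first failure, kill whatever is still running and fail the whole script.
for _ in "${pids[@]}"; do
    if ! wait -n; then
        echo "A workflow failed; aborting the others." >&2
        kill "${pids[@]}" 2>/dev/null
        exit 1
    fi
done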
I have started n slurm jobs and I would like to let a separate process wait until at least one of them has finished. The waiting process should use as little cpu time as possible so polling would not be ideal (unless there is no other way).
I know of the scontrol wait_job, but as far as I can see this can only wait for exactly one job.
If you have sufficient privileges, you can use strigger.
Otherwise, you could use a workflow manager (for instance Fireworks). They typically do polling but at a reasonable pace.
Note that if the action to take is to submit another job, you can also submit it right away and use the --dependency parameter to delay its execution until ready.
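For example, to hold a follow-up job until earlier ones succeed (12345 and 12346 stand in for your real job IDs, and followup_job.sh for your real batch script):

# The follow-up job sits in the queue and only becomes eligible to run
# once all listed jobs have completed successfully.
sbatch --dependency=afterok:12345:12346 followup_job.sh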
If you want to run something in the current session and wait for jobs you have already submitted, you can wait by submitting a job that depends on them and finishes almost immediately (it still has to wait for the scheduler, but it doesn't require installing anything new and doesn't involve a bunch of polling):
sbatch --time=<some short time> <usual account args> --wait --dependency afterok:<your jobs> /dev/stdin <<< $'#!/bin/bash\nsleep 1'
For example:
sbatch --time=0:15:00 --account my-account-abc --wait --dependency afterok:12345678:12345679 /dev/stdin <<< $'#!/bin/bash\nsleep 1'
I know this is kind of late, but hope this helps someone :)
I believe what I am going to ask will not be possible, but I am still trying to find out whether there is a way or approach I am not aware of.
I got a broken-pipe error where I have the PID of the destination process but not its name. Is there any way I can find out the process name (of a process which has possibly already terminated) using the PID?
As Barmar said in his comment, this isn't possible normally. The system forgets all information about processes as soon as they terminate.
But of course your processes might be able to comply in order to help you find out more. In case you can modify the processes in question you can let them log their PID into a special place where you can look up later which PID belonged once to which process.
This won't work for programs you cannot modify, though. In these cases it still might be possible to put a wrapper around them which first logs the PID and then execs to the wanted program.
#!/bin/bash
# Log this wrapper's PID together with the command line it was given,
# then replace the wrapper with the real program (the PID stays the same).
echo "$$: $*" >> /home/alfe/var/pid.log
exec "$@"
If you neither start the program in question yourself nor can modify it, you are out of options, I fear.
I created a client-server application and now I would like to deploy it.
During development I started the server in a terminal, and when I wanted to stop it I just typed Ctrl-C.
Now I want to be able to start it in the background and stop it whenever I want by simply typing:
/etc/init.d/my_service {start|stop}
I know how to write an init script, but the problem is how to actually stop the process.
I first thought to retrieve the PID with something like:
ps aux | grep "my_service"
Then I found a better idea, still based on the PID: storing it in a file in order to retrieve it when trying to stop the service.
Since that still seemed too dirty and unsafe, I eventually thought about using sockets so that the "stop" invocation can tell the running process to shut down.
I would like to know how this is usually done, or rather what the best way to do it is.
I checked some of the files in init.d and some of them use PID files, but via a particular command, start-stop-daemon. I am a bit suspicious about this method, which seems unsafe to me.
If you have a utility like start-stop-daemon available, use it.
start-stop-daemon is flexible and can use 4 different methods to find the process ID of the running service. It uses this information (1) to avoid starting a second copy of the same service when starting, and (2) to determine which process ID to kill when stopping the service.
--pidfile: check whether a process has created the file pid-file.
--exec: check for processes that are instances of this executable.
--name: check for processes with the name process-name.
--user: check for processes owned by the user specified by username or uid.
The best one to use in general is probably --pidfile. The others are mainly intended to be used in case the service does not create a PID file. --exec has the disadvantage that you cannot distinguish between two different services implemented by the same program (i.e. two copies of the same service). This disadvantage would typically apply to --name also, and, additionally, --name has a chance of matching an unrelated process that happens to share the same name. --user might be useful if your service runs under a dedicated user ID which is used by nothing else. So use --pidfile if you can.
For extra safety, the options can be combined. For example, you can use --pidfile and --exec together. This way, you can identify the process using the PID file, but don't trust it if the PID found in the PID file belongs to a process that is using the wrong executable (it's a stale/invalid PID file).
I have used the option names provided by start-stop-daemon to discuss the different possibilities, but you need not use start-stop-daemon: the discussion applies just as well if you use another utility or do the matching manually.
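As a rough sketch (the daemon path, the PID file location, and the choice of start-stop-daemon's --background/--make-pidfile options are my assumptions, not from your question), an init script along these lines covers the {start|stop} case:

#!/bin/sh
# Sketch of /etc/init.d/my_service using start-stop-daemon.
DAEMON=/usr/local/bin/my_service   # assumed install path
PIDFILE=/var/run/my_service.pid

case "$1" in
  start)
    # --make-pidfile writes the PID for us; --background detaches the process.
    start-stop-daemon --start --background --make-pidfile \
        --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    # Combine --pidfile and --exec so a stale PID file cannot kill the wrong process;
    # --retry sends TERM, waits 10s, then KILL if it is still alive.
    start-stop-daemon --stop --pidfile "$PIDFILE" --exec "$DAEMON" \
        --retry TERM/10/KILL/5
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac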
I have a few workflows where I would like R to halt the Linux machine it is running on after a script completes. I can think of two similar ways to do this:
run R as root and then call system("halt")
run R from a root shell script (could run the R script as any user) then have the shell script run halt after the R bit completes.
Are there other easy ways of doing this?
The use case here is for scripts running on AWS, where I would like the instance to stop after the script completes so that I don't get charged for machine time after the job runs. The instance I use for data analysis is EBS backed, so I don't want to terminate it, simply stop it. Issuing a halt command from inside the instance has the same effect as a stop/suspend from the AWS console.
I'm impressed that works. (For anyone else surprised that an instance can stop itself, see notes 1 & 2.)
You can also try "sudo halt", as you wouldn't need to run as a root user, as long as the user account running R is capable of running sudo. This is pretty common on a lot of AMIs on EC2.
Be careful about assuming that R will always quit cleanly: believe it or not, one can crash R. If you issue the command from inside R and R crashes, it never reaches the call to halt; calling it from within another wrapper script can be risky for the same reason. It may be better to have a separate script that watches the R PID and terminates the instance once that PID is no longer active. If you know Linux well, what you're looking for is the PID from starting R, which you can pass to another script that checks ps, say every second, and then terminates the instance once the PID is no longer running.
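Something along these lines, for instance (analysis.R and the one-second interval are just placeholders; it also assumes the account can run halt via sudo, as discussed below):

#!/bin/bash
# Start the R job and remember its PID.
Rscript analysis.R &
R_PID=$!

# Poll once a second; kill -0 only tests whether the PID is still alive.
while kill -0 "$R_PID" 2>/dev/null; do
    sleep 1
done

# R has exited (cleanly or by crashing), so stop the instance.
sudo halt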
I think a better solution is to use the EC2 API tools (see: http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ for documentation) to terminate OR stop instances. There's a difference between the two of these, and it matters if your instance is EBS backed or S3 backed. You needn't run as root in order to terminate the instance - the fact that you have the private key and certificate shows Amazon that you're the BOSS, way above the hoi polloi who merely have root access on your instance.
Because these credentials can be used for mischief, be careful about running the API tools from a given server: you'll need your certificate and private key on that server, which is a bad idea in the event that you have a security problem. It would be better to send a message to a master server and have it shut down the instance. If you have messaging set up in any way between instances, this can do all the work for you.
Note 1: Eric Hammond reports that halt will only suspend an EBS instance, so you still have storage fees. If you happen to start a lot of such instances, this can clutter things up. Your original question seems unclear about whether you mean to terminate or stop an instance. He has other good advice on this page.
Note 2: A short thread on the EC2 developers forum gives advice for Linux & Windows users.
Note 3: EBS instances are billed for partial hours, even when restarted. (See this thread from the developer forum.) Having an auto-suspend close to the hour mark can be useful, assuming the R process isn't working, in case one might re-task that instance (i.e. to save on not restarting). Other useful tools to consider: setTimeLimit and setSessionTimeLimit, and various checkpointing tools (I have a Q that mentions a couple). Using an auto-kill is useful if one has potentially badly behaved code.
Note 4: I recently learned of the shutdown command in package fun. This is multi-platform. See this blog post for commentary, and code is here. Dangerous stuff, but it could be useful if you want to adapt to Windows. I haven't tried it, though.
Update 1. Three more ideas:
You could use .Last() and runLast = TRUE for q() and quit(), which could shut down the instance.
If you use littler, or a shell script that invokes your code via Rscript, the same functions could be called from the command line.
My favorite package of the day, tcltk2, has a neat timer mechanism called tclTaskSchedule() that can be used to schedule the execution of an expression. You could then go crazy with executing stuff just before an hourly interval has elapsed.
system("echo 'rootpassword' | sudo halt")
However, the downside is having your root password in plain text in the script.
AFAIK those ways you mentioned are the only ones. In any case the script will have to run as root to be able to shut down the machine (if you find a way to do it without root that's possibly an exploit). You ask for an easier way but system("halt") is just an additional line at the end of your script.
sudo is an option -- it allows you to run certain commands without being prompted for a password. Just put something like this in /etc/sudoers:
<username> ALL=(ALL) PASSWD: ALL, NOPASSWD: /sbin/halt
(of course replacing <username> with the name of the user running R) and system('sudo halt') should just work.
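If you want to be careful while editing, a common pattern (the drop-in file name r-halt and the user name ruser below are just examples, and /etc/sudoers.d must be enabled on your distribution) is to add the rule through visudo so a syntax error cannot lock you out:

# Open a drop-in sudoers file with syntax checking
sudo visudo -f /etc/sudoers.d/r-halt
# and add the single line (replace ruser with the account that runs R):
#   ruser ALL=(ALL) NOPASSWD: /sbin/halt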