How can I run a shell script and immediately background it, but keep the ability to inspect its output at any time by tailing /tmp/output.txt?
It would also be nice if I could bring the process back to the foreground later.
P.S.
It would be really cool if you could also show me how to "send" the backgrounded process into a GNU screen session that may or may not have been initialized.
To 'background' a process when you start it
Simply add an ampersand (&) after the command.
If the program writes to standard output, it will still write to your console/terminal unless you redirect it.
To foreground the process
Simply use the fg command. You can see a list of jobs in the background with jobs.
For example:
sh -c 'sleep 3 && echo I just woke up' & jobs
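Putting this together for the original question, a minimal sketch might look like the following (my_script.sh is a placeholder name for your script):
./my_script.sh > /tmp/output.txt 2>&1 &   # start in the background, capturing all output
tail -f /tmp/output.txt                   # inspect the output at any time (Ctrl+C stops tail, not the script)
fg %1                                     # bring it back to the foreground later (assuming it is job 1; check with jobs)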
To background a currently running process
If you have already started the process in the foreground, but you want to move it to the background, you can do the following:
Press Ctrl+Z to suspend the current foreground process and return to your shell. The process will stay stopped until you resume it (or send it another signal).
Run the bg command to resume the process, but have it run in the background instead of the foreground.
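For example, a rough interactive session might look like this (long_task.sh is a placeholder):
./long_task.sh       # currently running in the foreground
# press Ctrl+Z -> the shell prints something like: [1]+  Stopped  ./long_task.sh
bg %1                # resume job 1, now in the background
jobs                 # list background jobs
fg %1                # bring it back to the foreground when needed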
Another way is to use the nohup command with & at the end of the line.
Something like this:
nohup whatevercommandyouwant whateverparameters &
This will run it in the background and send its output to a nohup.out file, unless you redirect the output elsewhere.
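If you want the output somewhere you can tail (as in the original question), redirect it yourself instead of relying on nohup.out; a sketch, again with my_script.sh as a placeholder:
nohup ./my_script.sh > /tmp/output.txt 2>&1 &
tail -f /tmp/output.txt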
One easy-to-use approach that allows managing multiple processes and has a nice terminal UI is the hapless utility.
Install with pip install hapless (or python3 -m pip install hapless) and just run
$ hap run my-command # e.g. hap run python my_long_running_script.py
$ hap status # check all the launched processes
$ hap logs 4 # output logs for your 4th background process
$ hap logs -f 2 # continuously stream logs for the 2nd process
See docs for more info.
We run an interactive Perl script which takes a set of inputs from the terminal (one of which is the time until which the script should run) and runs until completion. The job runs in the foreground. I have made the job run in the background by hitting CTRL-Z followed by the bg %- command.
Once the current job is running in the background, if I run the same Perl script with a different set of inputs and try to put it into the background as well, the first job gets terminated when I hit CTRL-Z, as follows.
^Z[1] Terminated scriptname
[2]+ Stopped scriptname
Please point out if I am making any mistake
When you press ^Z, the running script is stopped; after bg it continues running (in the background).
You get feedback about background jobs the next time you do something on the command line.
With [1] Terminated scriptname you see that the first script has finished. With [2]+ Stopped scriptname you see that the second job is paused and waiting for a bg/fg command.
Using ^Z + bg is about the same as appending & to your command (for non-interactive programs). You can try things with:
sleep 1 &
# now wait a few seconds
ps
# you will see that the first sleep has finished
sleep 6 &
sleep 3; ps
sleep 4; ps
# and now repeat these cases with ^Z / bg instead of &
try nohup
nohup nice <your_script> <parameters separated by spaces> &
It will give you the process ID. Hit Ctrl+C to get back to your prompt, and verify with ps -ef that the job is still running. I remember running multiple instances of the same script with different parameters like this.
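A sketch of that workflow with placeholder names, using $! to capture the process ID of the job you just started:
nohup nice ./my_script.sh arg1 arg2 &
echo $!                        # the PID of the background job
ps -ef | grep my_script.sh     # verify it is still running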
I am using ipmitool to get remote console output with SOL. This gets called from within a background process. When I call it in the foreground, it correctly logs the console output to the log file. But when called in the background, ipmitool doesn't work. Any idea why?
ipmitool writes the SOL data to the standard output (stdout) file descriptor. When called in the background, ipmitool can't write to stdout, which is why you are not seeing the console logs.
If you want to run it as a background process, then redirect the stdout to a file and tail that file.
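For example, a sketch of that approach ($ip, $username and $password are placeholders, and /tmp/sol.log is an arbitrary log file name):
ipmitool -H "$ip" -U "$username" -P "$password" -I lanplus sol activate > /tmp/sol.log 2>&1 &
tail -f /tmp/sol.log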
I had this issue. Redirecting stdout alone was not enough.
This ended up working:
tail -f /dev/null --pid="$$" \
| ipmitool -H "$ip" -U "$username" -P "$password" -I lanplus sol activate \
2>> stderr.txt >> stdout.txt &
The idea of using tail -f /dev/null came from this answer. There are a few other solutions listed there, but I didn't try them.
--pid="$$" means the tail (and therefore the whole pipeline) exits when the parent process dies, which is what I wanted, but it may or may not fit your needs. You will probably need some mechanism for avoiding tail -f zombies.
I have a problem with the nohup command.
When I run my job, I have a lot of data. The output nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
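A minimal sketch of that disown variant (some_command is a placeholder):
nohup some_command </dev/null >/dev/null 2>&1 &
disown    # remove the job from the shell's job table so the shell forwards it no further signals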
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
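For example, to include standard error in a pipe (some_command is a placeholder; grep is just an example consumer):
some_command 2>&1 | grep -i error    # both stdout and stderr flow into grep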
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
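A quick way to see the input side in action (cat normally waits for terminal input):
cat </dev/null    # returns immediately: standard input hits end-of-file at once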
nohup some_command > /dev/null 2>&1 &
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
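Going by the options listed in the man page above, capturing output to a file you can tail might look like this (an untested sketch; the file names and my_script.sh are placeholders):
detach -o /tmp/output.txt -e /tmp/error.txt -p /tmp/detach.pid -- ./my_script.sh
tail -f /tmp/output.txt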
The following command will let you run something in the background without creating nohup.out:
nohup command | tee &
In this way, you will be able to get console output while running the script on the remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to ask for the password again, so an awkward mechanism like this is needed for this variant.
If you have a Bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection in practice:
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT stream (file descriptor 1).
The error command's output goes to the STDERR stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection :
./zz.sh > zfile.txt
In the above, the "echo" output (STDOUT) goes into zfile.txt, whereas the "error" output (STDERR) is displayed on the screen.
The above is the same as :
./zz.sh 1> zfile.txt
Now you can try the opposite and redirect the "error" output (STDERR) into the file. The STDOUT from the "echo" command goes to the screen.
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT (1) to zfile.txt.
THEN, send STDERR (2) to wherever STDOUT (1) is going (using the &1 reference).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g.
I have a nohup command inside the script:
./Runjob.sh > sparkConcuurent.out 2>&1
I am trying to create a Jenkins job that restarts a program that runs all the time on one of our servers.
I specify the following as the command to run:
cd /usr/local/tool && ./tool stop && ./tool start
The script 'tool' contains a line like:
nohup java NameOfClass &
The output of that ends up in my build console instead of in nohup.out, so the job never terminates unless I terminate it manually, which terminates the program.
How can I cause nohup to behave the same way it does from a terminal?
If I understood the question correctly, Jenkins is killing all processes at the end of the build and you would like some process to be left running after the build has finished.
You should read https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
Essentially, Jenkins searches for processes with a secret value in the BUILD_ID environment variable. Just override it for the processes you want left alone.
In the new Pipeline jobs, setting BUILD_ID no longer prevents Jenkins from killing your processes once the job finishes. Instead, you need to set JENKINS_NODE_COOKIE:
sh 'JENKINS_NODE_COOKIE=dontKillMe nohup java NameOfClass &'
See the wiki on ProcessTreeKiller and this comment in the Jenkins Jira for more information.
Try adding the & in the Jenkins build step and redirecting the output using > nohup.out.
I had a similar problem with running a shell script from Jenkins as a background process. I fixed it by using the command below:
BUILD_ID=dontKillMe nohup ./start-fitnesse.sh &
In your case,
BUILD_ID=dontKillMe nohup java NameOfClass &