ipmitool gets stopped when called in background

I am using ipmitool to get remote console output with SOL. This gets called from within a background process. When I call it in the foreground, it correctly logs the console output to the log file. But when called in the background, ipmitool doesn't work. Any idea why?

ipmitool writes the SOL data to the standard output (stdout) file descriptor. When called in the background, ipmitool can't write to stdout, which is why you are not seeing the console logs.
If you want to run it as a background process, redirect stdout to a file and tail that file.
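For example, a minimal sketch of that approach (sol.log is a placeholder name; the ipmitool flags are the ones used later in this thread):
# run SOL in the background, appending the console output to a log file
ipmitool -H "$ip" -U "$username" -P "$password" -I lanplus sol activate >> sol.log 2>&1 &
# follow the log as new console output arrives
tail -f sol.log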

I had this issue. Redirecting stdout alone was not enough.
This ended up working:
tail -f /dev/null --pid="$$" \
| ipmitool -H "$ip" -U "$username" -P "$password" -I lanplus sol activate \
2>> stderr.txt >> stdout.txt &
The idea of using tail -f /dev/null came from this answer. There are a few other solutions listed there, but I didn't try them.
--pid="$$" makes tail exit when the parent process exits, which is what I wanted, but it may or may not fit your needs. You will probably need some mechanism for avoiding tail -f zombies.
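One possible mechanism for that, as an assumption on my part rather than something from the original answer, is an EXIT trap in the launching script that kills its own background jobs:
# when the script exits, kill any background children it started (here, the tail | ipmitool pipeline)
trap 'kill $(jobs -p) 2>/dev/null' EXIT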

Related

Dynamic exclusion list in lsyncd

Our cloud platform is powered by OpenNebula, so we have two instances of the frontend in "cold swap". We use the lsyncd daemon to try to keep the instances' datastores synced, but there is a catch: we don't want to sync VM images that have the .bak extension, because another script moves all the .bak files to other storage on a schedule. The sync script logic looks like this: find all the .bak files in /var/lib/one/datastores/, then create exclude.lst, and then start lsyncd. Seems OK until we take a look at the datastores:
oneadmin#nola:~/cluster$ dir /var/lib/one/datastores/1/
006e099c57061d87d4b8f78ec7199221
008a10fa0764c9ac8d6fb9206c9b69bd
069299977f2fea243a837efed271182f
0a73a9adf74d92b4f175abcb578cabac
0b1cc002e370e1acd880cf781df0a6fb
0b470b182ac6d554774a3615ce87e292
0c0d98d1e0aabc23ef548ddb564c578d
0c3fad9c92a8efc7e13a73d8ae85caa3
..and so on.
We solved it with this monstrous function:
function create_exclude {
    oneimage list -x | \
        xmlstarlet sel -t -m "IMAGE_POOL/IMAGE" -v "ID" -o ";" -v "NAME" -o ";" -v "SOURCE" -o ":" | \
        sed s/:/'\n'/g | \
        awk -F";" '/.bak;\/var\/lib/ {print $3}' | \
        cut -d / -f8 > /var/lib/one/cluster/exclude.lst
}
The result is a list of the IDs of VMs that have .bak images inside, so we can exclude the whole VM folder from syncing. That's not quite what we wanted, as the original image then also stays unsynced. But that could be solved by restarting the lsyncd script at the moment the other script moves all the .bak files to the other storage.
Now we get to the topic of the question.
It works until a new .bak is created. There is no way to add a new entry to exclude.lst "on the go" other than stopping lsyncd and restarting the script that re-creates exclude.lst. And there is also no way to detect the moment a new .bak is created, short of yet another script that monitors for it periodically.
I believe a less complicated solution exists. It depends on OpenNebula, of course, particularly on the way the /datastores/ folder stores the VMs.
Glad to know you are using OpenNebula to run your cloud :) Have you tried to use our Community Forum for support? I'm sure the rest of the Community will be happy to give a hand!
Cheers!

Run a process in background from a unix sh script

Ok, my first question so sorry for any errors in the way I've asked.
I am trying to create a script jenkins.sh which should call/launch another process (start_rhino.sh) in the background (if started normally it will take over the console and will not free the command line unless it is stopped).
So far I have tried:
./start-rhino.sh & -> starts the process the same as ./start-rhino.sh
nohup ./start-rhino.sh & -> Requires a key press to continue
and none of them immediately released the command line so the script could progress. In desperation I also tried a double && and /&, but with no success. I think nohup worked the best, but it required an 'enter' key press in order to return to the command line (I tried these directly from the command line, not by running the script).
nohup ./start-rhino.sh < /dev/null > /dev/null 2>&1 &
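Putting that together, jenkins.sh could look something like this sketch (rhino.log is an assumed log file name):
#!/bin/sh
# start Rhino detached from the terminal; the script continues immediately
nohup ./start-rhino.sh </dev/null >rhino.log 2>&1 &
echo "start-rhino.sh launched with PID $!"
# ... the rest of jenkins.sh carries on here without waiting ...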

Redirect not working correctly, 2> /dev/null becomes 2 > /dev/null and stderr doesn't get redirected

I am hoping someone can help me figure out what setting I might need to override. I am working on a Unix terminal server, using xterm with a Linux shell. Every time I use a command like grep "blah" 2> /dev/null at the shell prompt, the command is run as grep "blah" 2 > /dev/null and, needless to say, the redirection fails.
xterm version is X.Org 6.8.99.903(238)
I can not update or install anything, this is a locked down production server.
Thanks for any help and illumination on the topic; this is making my grep useless at high directory levels with recursion.
That's Bourne shell syntax, and it doesn't work in the C shell (csh).
The best you can do is
( command >stdout_file ) >&stderr_file
where stdout goes to one file and stderr to another. Redirecting just stderr on its own is not possible in csh.
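For example, applying that idiom to the recursive grep from the question (pattern, path, and file names are only illustrative), with the errors simply thrown away:
# csh: grep's stdout goes to grep.out inside the subshell, so only stderr reaches /dev/null
( grep -r "blah" . > grep.out ) >& /dev/null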
In a comment, you say "A minor note, this is csh". That's not a minor note, that's the cause of the problem. xterm is just a terminal emulator, not a shell; all it does is set up a window that provides textual input and output. csh (or bash, or ...) is the shell, the program that interprets the commands you type.
csh has different syntax for redirection, and doesn't let you redirect just stderr. command > file redirects stdout; command >& file redirects both stdout and stderr.
You say the system doesn't have bash, but it does have ksh. I suggest just using ksh; it will be a lot more familiar to you. Both bash and ksh are derived from the old Bourne shell.
All (?) Unix-like systems will have a Bourne-like shell installed as /bin/sh. Even if you're using csh (or tcsh?) as your interactive shell, you can still invoke sh, even in a one-liner. For example:
sh -c 'command 2>/dev/null'
will invoke sh, which in turn will invoke command and redirect just its stderr to /dev/null.
The purpose of an interactive shell is (mostly) to let you use other commands that are available on the system. sh, or any shell, can be used as just another command.
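Concretely, from a csh prompt the asker's recursive grep could be run as something like this (pattern and path are placeholders):
# only stderr is discarded; matches still appear on the terminal
sh -c 'grep -r "blah" . 2>/dev/null'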

Force line-buffering of stdout in a pipeline

Usually, stdout is line-buffered. In other words, as long as your printf argument ends with a newline, you can expect the line to be printed instantly. This does not appear to hold when using a pipe to redirect to tee.
I have a C++ program, a, that outputs strings, always \n-terminated, to stdout.
When it is run by itself (./a), everything prints correctly and at the right time, as expected. However, if I pipe it to tee (./a | tee output.txt), it doesn't print anything until it quits, which defeats the purpose of using tee.
I know that I could fix it by adding a fflush(stdout) after each printing operation in the C++ program. But is there a cleaner, easier way? Is there a command I can run, for example, that would force stdout to be line-buffered, even when using a pipe?
You can try stdbuf:
$ stdbuf --output=L ./a | tee output.txt
(big) part of the man page:
-i, --input=MODE adjust standard input stream buffering
-o, --output=MODE adjust standard output stream buffering
-e, --error=MODE adjust standard error stream buffering
If MODE is 'L' the corresponding stream will be line buffered.
This option is invalid with standard input.
If MODE is '0' the corresponding stream will be unbuffered.
Otherwise MODE is a number which may be followed by one of the following:
KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so on for G, T, P, E, Z, Y.
In this case the corresponding stream will be fully buffered with the buffer
size set to MODE bytes.
keep this in mind, though:
NOTE: If COMMAND adjusts the buffering of its standard streams ('tee' does
for e.g.) then that will override corresponding settings changed by 'stdbuf'.
Also some filters (like 'dd' and 'cat' etc.) don't use streams for I/O,
and are thus unaffected by 'stdbuf' settings.
You are not running stdbuf on tee, you're running it on a, so this shouldn't affect you, unless you set the buffering of a's streams in a's source.
Also, stdbuf is not POSIX, but part of GNU coreutils.
Try unbuffer (see its man page), which is part of the expect package. You may already have it on your system.
In your case you would use it like this:
unbuffer ./a | tee output.txt
The -p option is for pipeline mode where unbuffer reads from stdin and passes it to the command in the rest of the arguments.
You can use setlinebuf from stdio.h.
setlinebuf(stdout);
This should change the buffering to "line buffered".
If you need more flexibility you can use setvbuf.
You may also try to execute your command in a pseudo-terminal using the script command (which should enforce line-buffered output to the pipe)!
script -q /dev/null ./a | tee output.txt # Mac OS X, FreeBSD
script -c "./a" /dev/null | tee output.txt # Linux
Be aware the script command does not propagate back the exit status of the wrapped command.
The unbuffer command from the expect package did not work for me the way it was presented in the answer by Paused until further notice above.
Instead of using:
./a | unbuffer -p tee output.txt
I had to use:
unbuffer -p ./a | tee output.txt
(-p is for pipeline mode where unbuffer reads from stdin and passes it to the command in the rest of the arguments)
The expect package can be installed on:
MSYS2 with pacman -S expect
Mac OS with brew install expect
Update
I recently had buffering problems with Python inside a shell script (when trying to append a timestamp to its output). The fix was to pass the -u flag to Python: inside run.sh, the script is invoked as python -u script.py, and run.sh itself is run like this:
unbuffer -p /bin/bash run.sh 2>&1 | tee /dev/tty | ts '[%Y-%m-%d %H:%M:%S]' >> somefile.txt
This command will put a timestamp on the output and send it to a file and stdout at the same time.
The ts program (timestamp) can be installed with the moreutils package.
Update 2
Recently I also had problems with grep buffering its output; passing the --line-buffered argument to grep made it stop buffering the output.
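For example (the log file name and pattern are placeholders):
# without --line-buffered, grep would block-buffer its output here because it is writing to a pipe
tail -f app.log | grep --line-buffered "ERROR" | tee matches.txt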
If you use the C++ stream classes instead, every std::endl is an implicit flush. Using C-style printing, I think the method you suggested (fflush()) is the only way.
The best answer IMO is grep's --line-buffered option as stated here:
https://unix.stackexchange.com/a/53445/40003

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, it produces a lot of data. The output file nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
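A minimal sketch of that nohup-plus-disown pattern in bash (some_command stands in for your own command):
nohup some_command >/dev/null 2>&1 &   # start in the background, detached from the terminal's output
disown                                 # also drop it from the shell's job table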
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
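As an illustration of the point that 2>&1 lets you include standard error in a pipe (make and build.log are just example names):
# both regular output and error messages travel through the pipe into tee
make 2>&1 | tee build.log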
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
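You can see that immediate end-of-file behaviour with any command that reads standard input, for example:
wc -l </dev/null    # reads nothing, hits end-of-file at once, and prints 0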
nohup some_command > /dev/null 2>&1 &
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
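Based on the options in the man page above, a usage sketch might look like this (the command and file names are placeholders):
# run my_server detached from the terminal, with stdout and stderr logged to separate files
detach -o server.out -e server.err -p server.pid ./my_server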
The following command will let you run something in the background without getting nohup.out:
nohup command | tee &
This way, you will be able to get console output while running a script on the remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to re-ask for the password; thus an awkward mechanism like this is needed for this variant.
If you have a bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection in practice:
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection:
zz.sh > zfile.txt
In the above, the "echo" output (STDOUT) goes into zfile.txt, whereas the "error" output (STDERR) is displayed on the screen.
The above is the same as :
zz.sh 1> zfile.txt
Now you can try the opposite, and redirect the "error" output (STDERR) into the file. The STDOUT from the "echo" command goes to the screen.
zz.sh 2> zfile.txt
Combining the above two, you get:
zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT (1) to zfile.txt.
THEN, send STDERR (2) to STDOUT (1) itself (by using the &1 reference).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g., I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1
