Unix command 'tail' lost option '--line-buffered'

With the last update of our SuSE Enterprise Linux 11 (now bash 3.2.51(1)-release), the command "tail" seems to have lost its option to stream files:
tail: unrecognized option '--line-buffered'
Our tail is from "GNU coreutils 8.12, March 2013". Is there another, equivalent solution?

As far as can be told by simple googling, tail doesn't appear to have a --line-buffered option; grep does. --line-buffered is useful to force line buffering even when writing to a non-TTY, a typical idiom being:
tail -f FILE | grep --line-buffered REGEXP > output
Here the point of --line-buffered is to prevent grep from buffering its output in 8K chunks and to force the matched lines to appear in the output file immediately.
tail -f is unbuffered regardless of output type, so it doesn't need a --line-buffered option equivalent to the one in grep. This can be verified by running tail -f somefile | cat and appending a line to the file from another shell. One observes that, despite its standard output being a pipe, tail immediately flushes the newly arrived line.
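A quick way to check this yourself (the file name is just an example):

touch /tmp/demo.log
# terminal 1: watch the file through a pipe
tail -f /tmp/demo.log | cat
# terminal 2: append a line; it appears in terminal 1 at once
echo 'a new line' >> /tmp/demo.log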

Related

How can I add a newline in between head and tail in Unix?

In the following Unix command (I’m in zsh), I’d like to have a blank line appear between the head and tail of a long text file for readability.
Here’s the command:
cat LongTextFile.txt | tee >(head) >(tail) >/dev/null
I’m already aware of
(head; echo; tail) < LongTextFile.txt
but I’m wondering if it’s possible to use the tee command.
The process substitutions >(head) >(tail) are not sequenced; they run in parallel. head and tail are running concurrently. tee is reading its standard input and distributing it to those two processes. So there is no concept of "between them" where we could insert a newline.
You're just lucky that when the file is long enough, head has a chance to finish before tail starts outputting anything.
If the file is so small that head and tail overlap, you may get interleaved output, or reordered output, depending on the exact buffering going on.
Here you go, this works in Zsh:
print -l "$(head LongTextFile.txt)" '' "$(tail LongTextFile.txt)"
Try:
seq 100 | { s=$(cat); head <<<$s; echo; tail <<<$s; }
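With seq 100 standing in for the file, the whole input is captured into $s first, so the two halves cannot interleave; the expected output (abbreviated, since head and tail default to ten lines each) is:

$ seq 100 | { s=$(cat); head <<<$s; echo; tail <<<$s; }
1
...
10

91
...
100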

Stdout to both pipe and console?

Is there a way to output to both stdout and to the stdin of another process? That is, have the intermediate stdout be output before it reaches the pipe of the other process?
I know the tee command lets you write to a file and to stdout, but I don't want any files involved here.
This is a little "hacky" I guess, but you can have tee write a copy to stderr by giving it /dev/stderr as its file argument. Since the next program in the pipeline reads from stdin, the copy sent to stderr shows up on the console while the original output continues down the pipe.
For example,
cat file.txt | tee /dev/stderr | wc -l
Will output the entire contents of file.txt, and then output just the number of lines (wc -l) in file.txt.
Obviously this will only work if stderr outputs where you want it to (like the terminal/console).
Not an ideal solution, since it involves using stderr for something it's not necessarily made for, but it works.
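If you specifically want the copy on the terminal regardless of where stderr goes, a variant (assuming the process actually has a controlling terminal) is to tee to /dev/tty:

cat file.txt | tee /dev/tty | wc -l

This keeps stderr free for real errors, but it fails in environments without a terminal, such as cron jobs.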

Unix Pipes for Command Argument [duplicate]

I am looking for insight as to how pipes can be used to pass standard output as the arguments for other commands.
For example, consider this case:
ls | grep Hello
The structure of grep follows the pattern: grep SearchTerm PathOfFileToBeSearched. In the case I have illustrated, the word Hello is taken as the SearchTerm and the result of ls is used as the file to be searched. But what if I want to switch it around? What if I want the standard output of ls to be the SearchTerm, with the argument following grep being PathOfFileToBeSearched? In a general sense, I want to have control over which argument the pipe fills with the standard output of the previous command. Is this possible, or does it depend on how the script for the command (e.g., grep) was written?
Thank you so much for your help!
grep itself will be built such that if you've not specified a file name, it will open stdin (and thus get the output of ls). There's no real generic mechanism here - merely convention.
If you want the output of ls to be the search term, you can do this via the shell. Make use of a subshell and substitution thus:
$ grep $(ls) filename.txt
In this scenario ls is run in a subshell, and its stdout is captured and inserted in the command line as an argument for grep. Note that if the ls output contains spaces, this will cause confusion for grep.
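Quoting the substitution keeps ls's output as a single argument; since grep treats embedded newlines in a pattern as separating multiple patterns, this would search for lines matching any of the file names:

grep -- "$(ls)" filename.txt

(The -- guards against a file name that itself begins with a dash.)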
There are basically two options for this: shell command substitution and xargs. Brian Agnew has just written about the former. xargs is a utility which takes its stdin and turns it into arguments of a command to execute. So you could run
ls | xargs -n1 -J % grep -- % PathOfFileToBeSearched
and it would, for each file name output by ls, run grep -- filename PathOfFileToBeSearched to grep for that file name within the other file you specify. This is an unusual xargs invocation; usually it's used to add one or more arguments at the end of a command, while here it must add exactly one argument in a specific place, so I've used the -n and -J arguments to arrange that (-J comes from BSD xargs; the GNU spelling, -I, is shown with the find variant below). The more common usage would be something like
ls | xargs grep -- term
to search all of the files output by ls for term. Although of course if you just want files in the current directory, you can do this more simply without a pipeline:
grep -- term *
and likewise in your reversed arrangement,
for filename in *; do
    grep -- "$filename" PathOfFileToBeSearched
done
There's one important xargs caveat: whitespace characters in the filenames generated by ls won't be handled too well. To do that, provided you have GNU utilities, you can use find instead.
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 -n1 -J % grep -- % PathOfFileToBeSearched
to use NUL characters to separate file names instead of whitespace.
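With GNU xargs the placeholder option is -I rather than the BSD -J, and -I already implies one command invocation per input item, so a GNU equivalent would be:

find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 -I % grep -- % PathOfFileToBeSearched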

How to transfer data in unix pipeline in real time?

As far as I know, if we do for example ./program | grep someoutput, grep only processes the output after the program process has finished. Is there a way to have it process the output as it arrives?
The shell will spawn both processes at the same time and connect the output of the first to the input of the second. If you have two long-running processes then you'll see both in the process table. There's no intermediate step, no storage of data from the first program. It's a pipe, not a bucket.
Note - there may be some buffering involved.
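This is easy to observe with a slow producer (the tick text is arbitrary):

for i in 1 2 3; do echo "tick $i"; sleep 1; done | grep --line-buffered tick

The matched lines appear one per second, while the loop is still running, not in a burst at the end.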
You're wrong.
The pipe receives data immediately, but of course writing to the pipe by the source process can block if the other end (the sink) is not reading the data out fast enough.
Of course, this doesn't necessarily mean that pipes are suitable for "hard real-time" use, but perhaps that's not what you meant.
Actually, grep can be used "in real time", since a pipe passes data along as it is written rather than waiting for the writer to finish, so something like tail -f /var/log/messages | grep something will work.
If you're not getting the expected output, it's more likely that the preceding command is buffering its output. Take for instance the double grep:
tail -f /var/log/messages | grep something | grep another
The output won't appear immediately, since grep buffers its output when stdout is not connected to a terminal, which means the second grep won't see any data until that buffer is flushed. Forcing line-buffered mode solves this issue:
tail -f /var/log/messages | grep --line-buffered something | grep another
Depending on how buffering is done, it may be possible to modify the buffering mode of a command using stdbuf, e.g.:
stdbuf -oL ./program | grep something
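stdbuf works by preloading a small helper library that changes the default stdio buffering before the command starts, so it only helps programs that write through stdio. A simple interactive demonstration with tr:

# Type a few lines: with -oL each line is echoed back immediately;
# without stdbuf, nothing appears until tr exits or its buffer fills.
stdbuf -oL tr a-z A-Z | cat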

How can I grep for a string that begins with a dash/hyphen?

I want to grep for a string that starts with a dash/hyphen, like -X, in a file, but grep interprets it as a command-line option.
I've tried:
grep "-X"
grep \-X
grep '-X'
Use:
grep -- -X
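A quick sanity check (file name is illustrative):

printf 'foo\n-X bar\n' > /tmp/dash.txt
grep -- -X /tmp/dash.txt    # prints: -X bar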
Related: What does a bare double dash mean? (thanks to nutty about natty).
The dash is a special character in Bash, as noted at http://tldp.org/LDP/abs/html/special-chars.html#DASHREF. So escaping it once just gets you past Bash, but grep still has its own meaning for leading dashes (they introduce options).
So you really need to escape it twice (if you prefer not to use the other mentioned answers). The following will/should work:
grep \\-X
grep '\-X'
grep "\-X"
One way to try out how Bash passes arguments to a script/program is to create a .sh script that just echos all the arguments. I use a script called echo-args.sh to play with from time to time, all it contains is:
echo $*
I invoke it as:
bash echo-args.sh \-X
bash echo-args.sh \\-X
bash echo-args.sh "\-X"
You get the idea.
grep -e -X will do the trick.
To summarize, all of these work:
grep -- -X
grep \\-X
grep '\-X'
grep "\-X"
grep -e -X
grep [-]X
I don't have access to a Solaris machine, but grep "\-X" works for me on Linux.
The correct way would be to use "--" to stop processing arguments, as already mentioned. This is due to the use of getopt_long (a GNU C function from getopt.h) in the source of the tool.
This is why you notice the same phenomenon in other command-line tools: since most of them are GNU tools and use this call, they exhibit the same behavior.
As a side note, getopt_long is what gives us the choice between -rlo and --really_long_option, as well as the ability to combine short options.
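The same convention applies to file names that begin with a dash, which is a common place to run into it (the file name here is hypothetical):

touch -- -X.txt    # create a file literally named -X.txt
ls -- -X.txt       # without --, ls would try to parse -X as options
rm -- -X.txt       # the standard way to remove such a file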
If you're using another utility that passes a single argument to grep, you can use:
'[-]X'
You can use nawk:
$ nawk '/-X/{print}' file
None of the answers helped me (Ubuntu 20.04 LTS).
I found a slightly different option:
My case:
systemctl --help | grep -w -- --user
-w will match a whole word.
-- marks the end of options, so that --user is treated as the search pattern rather than as an option of grep.
ls -l | grep "^-"
Hope this one serves your purpose; since the pattern starts with ^, grep never sees a leading dash.
grep "^-X" file
It will grep and pick all the lines form the file.
^ in the grep"^" indicates a line starting with
