tail multiple files and grep the output

I would like to grep a pattern from multiple log files which are being constantly updated by some processes, and tail the output of this grep continuously.
The command below doesn't work, and I get
tail: warning: following standard input indefinitely is ineffective
tail -f | grep --line-buffered "Search this: " /var/links/proc2/id/myprocess*/Daily/myprocess*.log
Can someone help sort this out?

You should have a look at the multitail tool (install it with sudo apt-get install multitail).
In short, with multitail you use the --mergeall flag to view the output of all files in one place:
multitail --mergeall /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered "Search this: "
You can do the same without using grep:
multitail -E "Search this: " --mergeall /var/links/proc2/id/myprocess*/Daily/myprocess*.log
To view the output of each file individually, drop --mergeall; this will give the filename as well:
multitail -E "Search this: " /var/links/proc2/id/myprocess*/Daily/myprocess*.log

The mistake is that you give the files to the grep command and not to tail.
tail -f needs to get the files as input. Try:
tail -f /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered "Search this: "
To also get the file names (though not in grep's usual filename:match format; they appear as tail's block headers instead):
tail -f /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered -e'^==> .* <==$' -e'Search this: '
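For reference, when tail follows more than one file it prefixes each file's block of output with a header line, which is what the second grep pattern above matches. With two hypothetical files a.log and b.log, the output looks roughly like this:
$ tail -n 1 a.log b.log
==> a.log <==
last line of a.log

==> b.log <==
last line of b.log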

This is an interesting question, and the simple answer should be: use a prefix switch with tail. Unfortunately this is currently not implemented in most versions of tail.
As I see it, you have two options: adapt the standard tools to the task (see Udy's answer) or write your own tool in your favorite scripting/programming language.
Below is one way you could do it with the File::Tail::Multi module for Perl. Note that you may need to install the module from CPAN (cpan -i File::Tail::Multi).
Save the following script as e.g. mtail somewhere in your executable path and make the script executable.
#!/usr/bin/env perl
use File::Tail::Multi;

$| = 1;  # Enable autoflush

my $tail = File::Tail::Multi->new(RemoveDuplicate => 0,
                                  OutputPrefix    => 'f',  # prefix each line with the file name
                                  Files           => \@ARGV);

while (1) { $tail->read; $tail->print; sleep 2 }
Change OutputPrefix to 'p' if you prefer full path prefixes.
Run it like this:
mtail /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered "Search this: "
You do not need to specify --line-buffered when grep is the last command, so this is sufficient:
mtail /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep "Search this: "


How can I execute a script with xargs after a find command

find . -name "recovery_script" | xargs
I try to execute the scripts this finds, but it only prints them. How can I run them in parallel?
find . -name "recovery_script" | xargs -n1 -P8 sh
for 8 processes in parallel (provided there are at least 8 places where "recovery_script" can be found).
The -n1 argument is necessary to feed one argument at a time to sh. Otherwise, xargs will feed a reasonable number of arguments to sh all at once, meaning it would try to execute something like
sh dir1/recovery_script dir2/recovery_script dir3/recovery_script ...
instead of
sh dir1/recovery_script
sh dir2/recovery_script
sh dir3/recovery_script
...
in parallel.
Bonus: the command you give to xargs can be longer than a single command name and include options. I often use nice to allow other processes to still continue without problems:
find . -name "recovery_script" | xargs -n1 -P8 nice -n19
where -n19 is an option to nice, not to xargs.
(Aside: if you ever use wildcards with -name in find, use find's -print0 option and xargs's -0 option: they separate output and input by the null character instead of whitespace, since whitespace may be part of a filename. Because you search for the full name here, that is not a problem.)
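If you do need the null-delimited variant, the combination looks like this (the recovery_* wildcard is only for illustration):
find . -name "recovery_*" -print0 | xargs -0 -n1 -P8 sh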
From the xargs manual page:
SYNOPSIS: xargs ... [command [initial-arguments]]
and
... and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input.
The default behaviour is thus to echo whatever arguments you give to xargs. Providing a command such as sh (depending on what executable you are trying to run) then works.
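A quick demonstration of that default (echo runs because no command is given):
$ echo a b c | xargs
a b c
$ echo a b c | xargs -n1
a
b
c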
This solution does not use xargs, just a simple shell script; maybe it can help:
#!/bin/sh
# Start every recovery_script found below the current directory, in parallel.
# Note: $(find ...) word-splits, so this breaks on paths containing whitespace.
for i in $(find . -name recovery_script)
do
    {
        echo "Started $i"
        "$i"
        echo "Ended $i"
    } &
done
wait  # do not exit until all background jobs have finished

How can I highlight just one item from the ls output

I'm a real beginner with Unix commands, so I'm not sure whether the following is actually possible, but here goes.
Is it possible to highlight just one item in a ls output?
I.e., in a directory I use the following:
ls -l --color=auto
this lists 4 items in green
file1.xls
file2.xls
file3.xls
file4.xls
But I want to highlight a specific item, in this case file2.
Is this possible?
The ls program will not do this for you, but you could filter the results from ls through a custom script which modifies the text to highlight just one item. It would be simpler if no color was originally given; then you could match on the given filename (for example as the pattern in an awk or sed script) and modify just that one item, adding colors.
That is, it is certainly possible; writing a sample script is a different question.
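As a minimal sketch of such a filter, assuming a terminal that understands ANSI escape sequences (the name variable and the reverse-video highlight are illustrative choices, not the only way):
ls -l | awk -v name=file2 '
    $0 ~ name { printf "\033[7m%s\033[0m\n", $0; next }  # reverse-video the matching line
    { print }                                            # pass other lines through unchanged
'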
How you approach the problem depends on what you want from the output. If that is (literally) the output from ls with a single filename in color, then a script would be the normal approach. You could use grep as suggested in the other answer, which raises a few issues:
Commenting on ls -l --color=auto makes it sound as if you are using GNU ls, hence likely Linux; an appropriate tag for the question would be linux rather than unix (if you ask about unix in general, the answers should differ).
Supposing that you are using Linux, you likely have GNU grep, which can do colors. That would let you do something like this:
ls -l | grep --color=always file2 | less -R
However, there is a known bug in GNU grep's use of color (see the xterm FAQ, "grep --color" does not show the right output).
Also, using grep like this shows only the matching lines. For ls that might be a good choice; for matches in a manual page, definitely not.
Alternatively, less (which is found more often on Unix systems than GNU grep) also can highlight matches (not in color) and would show the file you are looking for in context. You could do this:
ls -l | less -p file2
(Both grep and less use patterns, aka regular expressions, but I left the example simple; read the documentation to learn more.)
If you're a beginner, I would strongly suggest you learn the grep command if you want to filter results: a Unix user's best friend (mine, anyway).
Use grep to display only the list items you want to see:
ls -l | grep "file2"
NOTE: this is no different from typing ls -l file2, by the way, but your pattern could be expanded based on what you actually want displayed on the screen.
So if you had a directory full of ".txt", ".xls" and ".doc" files and you wanted to see only the ".txt" files with the word "work" in the name (work1.txt), you could write:
ls -l | grep "work" | grep "txt"
This would list work1.txt, work2.txt, work3.txt and so on.
This is a very basic example but I use grep extensively whilst in the unix shell and would advise using this to filter all results instead of colours.
A little side note: grep -v will show you everything but the pattern you give it.
ls -l | grep -v ".txt" will show everything BUT .txt files.

GNU `ls` has a `--quoting-style` option; what's the equivalent in BSD `ls`?

I will use ls output as pipe input, so I need to escape the file names. When I use GNU ls, it works well. What's the equivalent in BSD ls? I'm hoping the output is like this:
$ gls --quoting-style escape t*1
text\ 1 text1
Why are/were you trying to use ls in a pipeline? You should probably be using find (maybe with -print0 and xargs -0, or -exec).
I suppose you could use ls -1f and then run the output through vis (or some similar filter) with appropriate options to add the quoting or escaping of your choice. But without knowing what you are feeding the filenames into, and what (if any) other ls options you want, it's impossible to give much better guidance.
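As a rough sketch of that find-based route, reusing the t*1 pattern from the question:
# NUL-delimited handoff: spaces or newlines embedded in names survive the pipe
find . -maxdepth 1 -name 't*1' -print0 | xargs -0 ls -ld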
The FreeBSD man page for ls shows no such option; however, you can try -m, which gives you comma-separated, streamed output:
-m      Stream output format; list files across the page, separated by commas.
I tried it on OS X and it gave me:
$ ls -m
Hello World, Hello World.txt, foo.txt
That is a lot easier to parse from a script.

Remove characters from a file

Using UNIX scripting, is it possible to remove all the first characters from a file until a specific character is found?
I have a file with "garbage" at the beginning. I want to remove that garbage, meaning that all the characters up to the first "{" must be removed. How can I do this?
cat file.txt | grep -A 1000000000 '{' | sed '1 s/^[^{]*//'
This will print the changed contents (i.e. without the garbage) to stdout. You can redirect this by appending > outfile.txt to the command:
cat file.txt | grep -A 1000000000 '{' | sed '1 s/^[^{]*//' > outfile.txt
And if you want to change the file in place, you can do so by renaming outfile.txt to the original name file.txt afterwards:
mv outfile.txt file.txt
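If you would rather avoid the arbitrary -A limit, a single-pass awk sketch does the same job: skip lines until the first "{", strip the prefix on that line, and print everything from there on.
awk 'found         { print; next }                         # after the first "{": print as-is
     index($0,"{") { sub(/^[^{]*/, ""); print; found=1 }   # first "{": drop the leading garbage
    ' file.txt > outfile.txt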

In Unix, which user has the highest UID?

Can someone please tell me how I can find the following:
List from /etc/passwd the UID and the user having the highest UID.
cat /etc/passwd | awk -F: '{print $3,$1}' | sort -n | tail -n 1
Instead of reading /etc/passwd, it would be better to get the output from
getent passwd
As you could be using another source of UIDs via nsswitch, such as LDAP.
/etc/passwd contains user information separated by colons. The user id is in the third column.
The sort command line tool can be used to sort the lines of a file. It has options, to choose which separator the columns are separated by, which column to sort by and whether to sort numerically or alphabetically.
So you can use sort to sort /etc/passwd by user id and then use tail to get the last line, which will contain the user with the highest id.
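That description translates into a pipeline like this (a sketch; field 3 is the UID, field 1 the user name):
sort -t : -k 3 -n /etc/passwd | tail -n 1 | cut -d : -f 1,3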
getent passwd | awk -F : '$3>h{h=$3;u=$1}END{print h " " u}'
The getent output needs to be sorted for the awk command below: with sorted input, the previous value of h is the second-highest UID when the loop ends.
In addition, I found that nfsnobody (on Linux) can be ignored, and the next highest UID is what is often needed. So this worked well:
getent passwd |sort -t: -k3 -n |awk -F: '$3>h{ph=h;pu=u;h=$3;u=$1}END{print h,u"\n"ph,pu}'
65534 nfsnobody
1002 user2
