Grep OR Operator Not Working - unix

I need help with Unix.
I am trying to see if one of two statements (printf and fprintf) is in a file. I used the command:
search=`cat $file | grep -w "fprintf\|printf"`
For some reason, it doesn't find either in files where one of those two exists. Why?

You have two problems.
First, standard grep doesn't support the | operator in its basic regular expressions. You need to use egrep or the -E flag to get extended regular expressions.
Second, inside double quotes, \| stays \|. The backslash is passed through to the grep command, and in an extended regex it would turn | into an ordinary character. (GNU grep does accept \| as alternation in a basic regex, as an extension, which is why this command can appear to work on Linux.)
Try this:
search=`cat $file | egrep -w "fprintf|printf"`
Or you can provide each alternative as a separate argument to grep:
search=`cat $file | grep -w -e fprintf -e printf`

grep -w "fprintf\|printf" $file
This works fine in my shell. Still, here are some alternatives:
egrep -w 'fprintf|printf' $file
grep -wE 'fprintf|printf' $file
grep -we 'fprintf\|printf' $file
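Whether \| is understood thus depends on your grep flavour. A quick way to check on your own system (the demo file name is made up):
printf 'printf\nfprintf\nsprintf\n' > demo.txt
grep -w 'fprintf\|printf' demo.txt      # GNU grep: \| alternation as a BRE extension
grep -wE 'fprintf|printf' demo.txt      # portable: explicit ERE
grep -w -e fprintf -e printf demo.txt   # portable: one pattern per -e
All three should print the printf and fprintf lines; sprintf is excluded by -w because the match is not a whole word.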

Related

Combine find, grep and xargs with printf

I have a find command combined with exec grep and a printf option :
find -L /home/blast/dirtest -maxdepth 3 -exec grep -q "pattern" {} \; -printf '%y/#/%TY-%Tm-%Td %TX/#/%s/#/%f/#/%l/#/%h\n' 2> /dev/null
Result :
f/#/2018-01-01 10:00:00/#/191/#/filee.xml/#//#//home/blast/dirtest/01/05
I need the printf to get all the desired file information at once (date, type, size, etc.).
The above command works fine, but the exec option is too slow compared to xargs.
I tried to do the same with xargs but did not succeed.
Any idea on how to achieve that using the xargs command, keeping the desired printf or similar?
Thanks
Your code is:
find -L /home/blast/dirtest -maxdepth 3 \
-exec grep -q "pattern" {} \; \
-printf '%y/#/%TY-%Tm-%Td %TX/#/%s/#/%f/#/%l/#/%h\n' 2> /dev/null
This invokes a new grep process for each file.
If you are using GNU utilities, you can reduce the number of grep processes by something like:
(
format=\''%y/#/%TY-%Tm-%Td %TX/#/%s/#/%f/#/%l/#/%h\n'\'
find -L /home/blast/dirtest -maxdepth 3 -print0 |\
xargs -0 grep -l -Z "pattern" |\
xargs -0 sh -c 'find "$@" -printf '"$format" --
) 2>/dev/null
- For clarity, store the format string in a variable.
- Use the -print0 / -0 / -Z options to pass null-delimited data between the stages.
- Generate the initial file list with find.
- Filter on "pattern" with grep (using xargs minimises the number of times grep gets called).
- Feed the filtered file list into another xargs to run a minimal number of find -printf invocations.
- In the second xargs, call a subshell so that extra arguments can be appended (find requires the paths to precede the operators).
- The dummy second argument (--) to the sh -c invocation prevents the first filename being lost to assignment to $0, as the illustration just below shows.
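A quick illustration of that last point, using echo as a stand-in command:
sh -c 'echo "$@"' -- a b c   # prints: a b c
sh -c 'echo "$@"' a b c      # prints: b c  ("a" was consumed as $0)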
To do it exactly how you want:
find -L /home/blast/dirtest/ -maxdepth 3 \
-printf '%p#%y/#/%TY-%Tm-%Td %TX/#/%s/#/%f/#/%l/#/%h\n' \
> tmp.out
cut -d# -f1 tmp.out \
| xargs grep -l "pattern" 2>/dev/null \
| sed 's/^/^/; s/$/#/' \
| grep -f /dev/stdin tmp.out \
| sed 's/^.*#//'
This operates under the assumption that you have no # characters in your file names.
What it does is avoid the grep at first and just dump all the files with the requested metadata to a temporary file.
But it also prefixes each line with the full path (%p#).
Then we extract (cut) the full paths out of this list and list the files which contain the pattern (xargs grep).
We then use sed to prefix each such file name with ^ and suffix it with #, which makes it a greppable pattern in our tmp.out file.
Then we use this pattern (grep -f /dev/stdin) to extract only those paths from the big list in tmp.out.
Now all that's left is to remove the artificial full path we prefixed using the last sed command.
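For example, with two hypothetical matching paths, the sed stage produces anchored patterns like this:
printf '%s\n' /home/blast/dirtest/01/DOG.xml /home/blast/dirtest/02/BIRD.xml \
| sed 's/^/^/; s/$/#/'
# ^/home/blast/dirtest/01/DOG.xml#
# ^/home/blast/dirtest/02/BIRD.xml#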
Seeing how you used /home, there's a good chance you're on Linux, which, if you're willing to accept some output format changes, allows you to do it somewhat more elegantly:
find -L /home/blast/dirtest/ -maxdepth 3 \
| xargs grep -l "pattern" 2>/dev/null \
| xargs stat --printf '%F/#/%y/#/%s/#/%n\n'
The output of stat --printf is different from that of find -printf (and from that of macOS's stat -f), but it's the same information.
Do note, however, that because you passed -L to find, and you're grepping the result:
The results are limited to file types which can be grepped, so they will never be directories, links, etc.
If you stumble upon a broken link, it will not be in the output, because it cannot be grepped.
I've found an interesting thing about the -exec option.
We can run grep just once by using -exec with the plus sign (+):
-exec command {} +
This variant of the -exec option runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of '{}' is allowed within the command. The command is executed in the starting directory.
That means if I change this:
-exec grep -l 'pattern' {} \;
to this (replacing the semicolon with the plus sign):
-exec grep -l 'pattern' {} +
the performance improves significantly.
Then I can pipe into a single xargs for the format-printing step only.
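Putting that together with the earlier null-delimited idea, a sketch (assuming GNU find and grep, with the paths from the question):
find -L /home/blast/dirtest -maxdepth 3 -exec grep -lZ "pattern" {} + 2>/dev/null \
| xargs -0 sh -c 'find "$@" -printf "%y/#/%TY-%Tm-%Td %TX/#/%s/#/%f/#/%l/#/%h\n"' --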

Append "/" to end of directory

Completely noob question, but: using ls piped to grep, I need to find files or directories that have all capitals in their name, and directories need to have "/" appended to indicate that they are directories. Appending the "/" is the only part I am stuck on. Again, I apologize for the amateur question. I currently have ls | grep [A-Z], and the example output should be: BIRD, DOG, DOGDIR/
It's an interesting question because it's a somewhat difficult thing to accomplish with a bash one-liner.
Here's what I came up with. It doesn't seem very elegant, but I'm not sure how to improve.
find /animals -type d -or -type f \
| grep '/[A-Z]*$' \
| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' \
| sed -e 's/, *$//'; echo
I'll break that down for you:
find /animals -type d -or -type f writes out, once per line, the directories and files it found in /animals (see below for my test environment Dockerfile; I created /animals to match your desired output). find can't do a regex match on the name, as far as I know, so...
grep '/[A-Z]*$' filters find's output so that only paths are shown where the last part of the file or directory name, after the final /, is all uppercase.
xargs -I + bash -c '...' when you're in a shell and you want a "for" loop, chances are what you should be using is xargs. Learn it, know it, love it. xargs takes its input, separated by default by $IFS, and runs the command you give it for each piece of input. So this is going to run a bash shell for each path that passed the grep filter. In my case, -I + makes xargs replace the literal + character with its current input filename; -I also makes xargs pass inputs through one at a time. For more information, see the xargs manual page.
'echo -n $(basename +)$( test -d + && echo -n /),\\ ' is the inner bash script that will be run by xargs for each path that got through grep.
basename + cuts the directory component off the path; from your example output you don't want e.g. /animals/DOGDIR/, you want DOGDIR/. basename is the program that trims the directories for us.
test -d + && echo -n / checks whether + (remember, xargs replaces it with the filename) is a directory, and if so runs echo -n /. The -n argument to echo suppresses the newline, which is important for the CSV output format you specified.
Now we can put it all together: we echo -n the output of basename +, with / appended if it's a directory, and then , appended to that. All the echos run with -n to suppress newlines and keep the output CSV-looking.
| sed -e 's/, *$//'; echo is purely for formatting. Adding , to each individual output was an easy way to get the CSV, but it leaves a final , at the end of the list. The sed invocation removes a , followed by any number of spaces at the end of the output so far, i.e. the entire output from all the xargs invocations. And since we never output a newline at the end, the final echo adds it.
Usually in unix shells, you probably wouldn't want a CSV style output. You'd probably instead want a newline-separated output in most cases, one matching file per line, and that would be somewhat simpler to do because you wouldn't need all that faffing with -n and , to make it CSV style. But, valid requirement if the need is there.
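For comparison, a newline-separated sketch of the same pipeline (it shares the caveat that filenames containing newlines will misbehave):
find /animals -type d -or -type f \
| grep '/[A-Z]*$' \
| while IFS= read -r path; do
    name=$(basename "$path")
    [ -d "$path" ] && name="$name/"   # append / to directories
    printf '%s\n' "$name"
  done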
FROM debian
RUN mkdir -p /animals
WORKDIR /animals
RUN mkdir -p DOGDIR lowerdir && touch DOGDIR/DOG DOGDIR/lowerDOG2 lowerdir/BIRD
ENTRYPOINT [ "/bin/bash" ]
CMD [ "-c" , "find /animals -type d -or -type f | grep '/[A-Z]*$'| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' | sed -e 's/, *$//'; echo"]
$ docker run --rm test
BIRD, DOGDIR/, DOG
You can start looking at
ls -F | grep -v "[[:lower:]]"
I did not add anything for a comma-separated line, because this is the wrong method: parsing ls should be avoided! It will go wrong for filenames like
I am a terribble filename,
with newlines inside me,
and the ls command combined with grep
will only show the last line
BECAUSE THIS LINE HAS NO LOWERCASE CHARACTERS
To get the files without a pipe, you can use
shopt -s extglob
ls -dp +([[:upper:]])
shopt -u extglob
An explanation of the extglob and uppercase can be found at https://unix.stackexchange.com/a/389071/57293
When you want the output on one line, you can run into trouble with filenames that have newlines or commas in their names. You might want something like
# parsing ls, yes wrong and failing for some files
ls -dp +([[:upper:]]) | tr "\n" "," | sed 's/,$/\n/'
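If you do want the one-line form without parsing ls, a bash sketch that builds the list from the same extglob pattern (a newline or comma inside a filename will still garble the single-line output, as noted above):
shopt -s extglob nullglob
out=""
for f in +([[:upper:]]); do
  [[ -d $f ]] && f+=/      # mark directories with a trailing /
  out+="$f, "
done
printf '%s\n' "${out%, }"  # drop the trailing separator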

What is the easiest way for grepping the 'man grep' for flags

I use grep a lot, but I would love to improve a bit.
Regarding the question: I wanted to narrow the man entry down to find the explanation of what the -v in grep -v 'pattern' filename stands for, mainly this:
-v, --invert-match
Selected lines are those not matching any of the specified patterns.
Thus, to find the next five lines after the line which contains -v I tried:
man grep | grep -A 5 -v
and
man grep | grep -A 5 '-v'
but they return:
usage: grep [-abcDEFGHhIiJLlmnOoqRSsUVvwxZ] [-A num] [-B num] [-C[num]]
[-e pattern] [-f file] [--binary-files=value] [--color=when]
[--context[=num]] [--directories=action] [--label] [--line-buffered]
[--null] [pattern] [file ...]
This confuses me since:
man grep | grep -A 5 'Selected'
and
man grep | grep -A 5 Selected
do work.
What is wrong in my approach? Is there any easier way to achieve what I need?
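For the record, the direct cause: the shell removes the quotes before grep ever runs, so grep always sees -v and parses it as its own option, leaving no pattern at all, hence the usage message. You can force grep to read it as a pattern:
man grep | grep -A 5 -- '-v'
man grep | grep -A 5 -e '-v'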
One approach is to parse the Info documents for the command directly. If you run info grep (or another command) you will often find much more detailed and better-structured documentation, which will let you pinpoint just the section you need.
Here's a function that will print out the relevant Info section for an option/variable/etc:
info_search() {
info --subnodes "$1" -o - 2>&- \
| awk -v RS='' "/(^|\n)(‘|'|\`)$2((,|\[| ).*)?(’|')\n/"
}
This should work on Linux/macOS/BSD. Output is like:
$ info_search grep -v
‘-v’
‘--invert-match’
Invert the sense of matching, to select non-matching lines. (‘-v’
is specified by POSIX.)
$ info_search gawk RS
'RS == "\n"'
Records are separated by the newline character ('\n'). In effect,
every line in the data file is a separate record, including blank
...
$ info_search bash -i
`-i'
Force the shell to run interactively. Interactive shells are
...

Possible to return number of matches per line of a regex file using `grep -f regexfile queryfile`?

I'm wondering if there is a simple way to transform a grep command such as
grep -c -f regex.txt file.txt
to return the total number of matched lines in file.txt for each line of regex.txt, instead of the sum of the matched lines found for all patterns in regex.txt as the above command does.
My current method of handling this is to use xargs (or GNU parallel interchangeably):
cat regex.txt | xargs -I{} grep -c {} file.txt
Can grep do this in one fell swoop?
grep -o -f regex.txt file.txt | sort | uniq -c
Note that this counts occurrences of each distinct matched string, which is not exactly the number of matching lines per pattern, but it is often close enough.
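If you need exactly one line-count per pattern, a plain loop over regex.txt (a sketch equivalent to the xargs version above) is still the straightforward way:
while IFS= read -r pattern; do
  printf '%s\t%s\n' "$pattern" "$(grep -c -- "$pattern" file.txt)"
done < regex.txt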

Passing output from one command as argument to another [duplicate]

This question already has answers here:
How to pass command output as multiple arguments to another command
(5 answers)
Closed 5 years ago.
I have this for:
for i in `ls -1 access.log*`; do tail $i |awk {'print $4'} |cut -d: -f 1 |grep - $i > $i.output; done
ls will give access.log, access.log.1, access.log.2 etc.
tail will give me the last line of each file, which looks like: 192.168.1.23 - - [08/Oct/2010:14:05:04 +0300] etc. etc. etc
awk+cut will extract the date (08/Oct/2010 - but different in each access.log), which will allow me to grep for it and redirect the output to a separate file.
But I cannot seem to pass the output of awk+cut to grep.
The reason for all this is that those access logs include lines with more than one date (06/Oct, 07/Oct, 08/Oct) and I just need the lines with the most recent date.
How can I achieve this?
Thank you.
As a side note, tail displays the last 10 lines.
A possible solution would be to grep this way:
for i in `ls -1 access.log*`; do grep $(tail $i |awk {'print $4'} |cut -d: -f 1| sed 's/\[/\\[/') $i > $i.output; done
why don't you break it up into steps??
for file in *access.log
do
what=$(tail "$file" |awk {'print $4'} |cut -d: -f 1)
grep "$what" "$file" >> output
done
You shouldn't use ls that way. Also, ls -l gives you information you don't need. The -f option to grep will allow you to pipe the pattern to grep. Always quote variables that contain filenames.
for i in access.log*; do awk 'END {sub(":.*","",$4); print substr($4,2)}' "$i" | grep -f - "$i" > "$i.output"; done
I also eliminated tail and cut since AWK can do their jobs.
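To see what that awk program extracts, you can feed it the sample line from the question:
printf '192.168.1.23 - - [08/Oct/2010:14:05:04 +0300] etc.\n' \
| awk 'END {sub(":.*","",$4); print substr($4,2)}'
# prints: 08/Oct/2010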
Umm...
Use xargs or backticks.
man xargs
or
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html , section 3.4.5. Command substitution
you can try:
grep "$(stuff to get piped over to be grep-ed)" file
I haven't tried this, but my answer applied here would look like this:
grep "$(for i in `ls -1 access.log*`; do tail $i |awk {'print $4'} |cut -d: -f 1 |grep - $i > $i.output; done)" $i
