grep -f maximum number of patterns? - unix

I'd like to use grep on a text file with -f to match a long list (10,000) of patterns. Turns out that grep doesn't like this (who knew?). After a day, it hadn't produced anything. Smaller lists work almost instantaneously.
I was thinking I might split my long list up and do it a few times. Any idea what a good maximum length for the pattern list might be?
Also, I'm rather new to unix. Alternative approaches are welcome. The list of patterns, or search terms, is in a plaintext file, one per line.
Thank you everyone for your guidance.

From comments, it appears that the patterns you are matching are fixed strings. If that is the case, you should definitely use -F. That will increase the speed of the matching considerably. (Using 479,000 strings to match on an input file with 3 lines using -F takes under 1.5 seconds on a moderately powered machine. Not using -F, that same machine is not yet finished after several minutes.)
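A minimal invocation along those lines (the file names here are just placeholders for your pattern list and the file being searched):
grep -F -f patternFile textFile > matches.txt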

I ran into the same problem with approx. 4 million patterns to search for in a file with 9 million lines. It seems to be a problem of RAM, so I used this neat little workaround, which may be slower than splitting and joining but needs just this one line:
while read -r line; do grep "$line" fileToSearchIn; done < patternFile
I needed the workaround since the -F flag is no solution for files that large...
EDIT: This seems to be really slow for large files. After some more research I found 'faSomeRecords' and other really awesome tools from Kent NGS-editing-Tools.
I tried it myself by extracting 2 million fasta records from a 5.5 million record file. It took approx. 30 sec.
cheers
EDIT: direct download link

Here is a bash script you can run on your files (or if you would like, a subset of your files). It will split the key file into increasingly large blocks, and for each block attempt the grep operation. The operations are timed - right now I'm timing each grep operation, as well as the total time to process all the sub-expressions.
Output is in seconds - with some effort you can get ms, but with the problem you are having it's unlikely you need that granularity.
Run the script in a terminal window with a command of the form
./timeScript keyFile textFile 100 > outputFile
This will run the script, using keyFile as the file where the search keys are stored, and textFile as the file where you are looking for keys, and 100 as the initial block size. On each loop the block size will be doubled.
In a second terminal, run the command
tail -f outputFile
which will let you keep track of the output that your other process writes into outputFile
I recommend that you open a third terminal window, and that you run top in that window. You will be able to see how much memory and CPU your process is taking - again, if you see vast amounts of memory consumed it will give you a hint that things are not going well.
This should allow you to find out when things start to slow down - which is the answer to your question. I don't think there's a "magic number" - it probably depends on your machine, and in particular on the file size and the amount of memory you have.
You could take the output of the script and put it through a grep:
grep entire outputFile
You will end up with just the summaries - block size, and time taken, e.g.
Time for processing entire file with blocksize 800: 4 seconds
If you plot these numbers against each other (or simply inspect the numbers), you will see when the algorithm is optimal, and when it slows down.
Here is the code: I did not do extensive error checking but it seemed to work for me. Obviously in your ultimate solution you need to do something with the outputs of grep (instead of piping it to wc -l which I did just to see how many lines were matched)...
#!/bin/bash
# script to look at the difference in timing
# when grepping a file with a large number of expressions
# assume first argument = name of file with list of expressions
# second argument = name of file to check
# optional third argument = initial block size (default 100)
#
# split f1 into chunks of expressions, doubling the block size each pass,
# and print out how long it took to process all the lines in f2
if (( $# < 2 )); then
  echo "Warning: need at least two parameters."
  echo "Usage: timeScript keyFile searchFile [initial blocksize]"
  exit 1
fi
f1_linecount=$(wc -l < "$1")
echo "linecount of file1 is $f1_linecount"
f2_linecount=$(wc -l < "$2")
echo "linecount of file2 is $f2_linecount"
echo
if (( $# < 3 )); then
  blockLength=100
else
  blockLength=$3
fi
while (( blockLength < f1_linecount ))
do
  echo "Using blocks of $blockLength"
  # split is a standard utility that splits the file
  # -l tells it to break after $blockLength lines
  # and the block$blockLength parameter is a prefix for the output files
  split -l "$blockLength" "$1" block$blockLength
  Tstart=$(date +%s)
  Tbefore=$Tstart
  for fn in block*
  do
    echo "grep -f $fn $2 | wc -l"
    echo "number of lines matched: $(grep -f "$fn" "$2" | wc -l)"
    Tnow=$(date +%s)
    echo "Time taken: $((Tnow - Tbefore)) s"
    Tbefore=$Tnow
  done
  echo "Time for processing entire file with blocksize $blockLength: $((Tnow - Tstart)) seconds"
  blockLength=$(( 2 * blockLength ))
  # remove the split files - no longer needed
  rm block*
  echo "block length is now $blockLength and f1 linecount is $f1_linecount"
done
exit 0

You could certainly give sed a try to see whether you get a better result, but it is a lot of work to do either way on a file of any size. You didn't provide any details on your problem, but if you have 10k patterns I would be trying to think about whether there is some way to generalize them into a smaller number of regular expressions.
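For instance, if the keys happened to share some structure, a single extended regex might stand in for thousands of fixed strings; the key format below is invented purely for illustration:
grep -E 'ID-20(13|14|15)-[0-9]{4}' textFile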

Here is a perl script "match_many.pl" which addresses a very common subset of the "large number of keys vs. large number of records" problem. Keys are accepted one per line from stdin. The two command line parameters are the name of the file to search and the field (white space delimited) which must match a key. This subset of the original problem can be solved quickly since the location of the match (if any) in the record is known ahead of time and the key always corresponds to an entire field in the record. In one typical case it searched 9400265 records with 42899 keys, matching 42401 of the keys and emitting 1831944 records in 41s. The more general case, where the key may appear as a substring in any part of a record, is a more difficult problem that this script does not address. (If keys never include white space and always correspond to an entire word the script could be modified to handle that case by iterating over all fields per record, instead of just testing the one, at the cost of running M times slower, where M is the average field number where the matches are found.)
#!/usr/bin/perl -w
use strict;
use warnings;
my $kcount;
my ($infile,$test_field) = @ARGV;
if(!defined($infile) || "$infile" eq "" || !defined($test_field) || ($test_field <= 0)){
    die "syntax: match_many.pl infile field"
}
my %keys; # hash of keys
$test_field--; # external range (1,N) to internal range (0,N-1)
$kcount=0;
while(<STDIN>) {
    my $line = $_;
    chomp($line);
    $keys{$line} = 1;
    $kcount++
}
print STDERR "keys read: $kcount\n";
my $records = 0;
my $emitted = 0;
open(INFILE, $infile ) or die "Could not open $infile";
while(<INFILE>) {
    if(substr($_,0,1) =~ /#/){ # skip comment lines
        next;
    }
    my $line = $_;
    chomp($line);
    $line =~ s/^\s+//;
    my @fields = split(/\s+/, $line);
    if(exists($keys{$fields[$test_field]})){
        print STDOUT "$line\n";
        $emitted++;
        $keys{$fields[$test_field]}++;
    }
    $records++;
}
$kcount=0;
while( my( $key, $value ) = each %keys ){
    if($value > 1){
        $kcount++;
    }
}
close(INFILE);
print STDERR "records read: $records, emitted: $emitted; keys matched: $kcount\n";
exit;
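A hypothetical invocation, matching field 2 of records.txt against the keys in keys.txt (both file names are placeholders), with the matching records going to stdout and the statistics to stderr:
./match_many.pl records.txt 2 < keys.txt > matched_records.txt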


/usr/xpg4/bin/grep -q [^0-9] does not always work as expected

I have a Unix ksh script that has been in daily use for years (kicked off at night by the crontab). Recently one function in the script has started behaving erratically, which never happened before. I have tried various ways to find out why, but without success.
The function validates an input string, which is supposed to be a string of 10 numeric characters. The function checks if the string length is 10, and whether it contains any non-numeric characters:
#! /bin/ksh
# The function:
is_valid_id () {
# Takes one argument, which is the ID being tested.
if [[ $(print ${#1}) -ne 10 ]] || print "$1" | /usr/xpg4/bin/grep -q [^0-9] ; then
return 1
else
return 0
fi
}
cat $input_file | while read line ; do
id=$(print $line | awk -F: '{print $5}')
# Calling the function:
is_valid_id $id
stat=$?
if [[ $stat -eq 1 ]] ; then
print "The ID $id is invalid. Request rejected.\n" >> $ERRLOG
continue
else
...
fi
done
The problem with the function is that, every night, out of scores or hundreds of requests, it finds the IDs in several requests as invalid. I visually inspected the input data and found that all the "invalid" IDs are actually strings of 10 numeric characters as should be. This error seems to be random, because it happens with only some of the requests. However, while the rejected requests persistently come back, it is consistently the same IDs that are picked out as invalid day after day.
I did the following:
The Unix machine has been running for almost a year and therefore might need a refresh, so the system admin rebooted it at my request. But the problem persists after the reboot.
I manually ran exactly the same two tests in the function at the command prompt, and the IDs that had been found invalid at night were all valid.
I know the same commands may behave differently when invoked manually versus in a script. To see how the function behaves in a script, I ran the code excerpt above as a small script to reproduce the problem. And indeed, some (though not all) of the IDs found to be invalid at night are also found invalid by this small troubleshooting script.
I then modified that troubleshooting script to run the two tests one at a time, and found it is the /usr/xpg4/bin/grep -q [^0-9] test that erroneously flags some of the IDs as containing non-numeric character(s). Yet the IDs are all numeric characters, at least visually.
I checked whether there is any problem with the xpg4 grep binary itself (ls -l /usr/xpg4/bin/grep), to see if it had been put there recently. But its timestamp is from 2005 (this machine runs Solaris 10).
The data comes from a central ERP system, into which data is entered from different locations using all kinds of terminal machines running all kinds of operating systems that support various character sets and encodings; the ERP system simply accepts them. Could characters from other encodings appear visually as numeric characters while their encoded values are not what the /usr/xpg4/bin/grep command expects on our Unix machine? I tried the od (octal dump) command, but it does not help me much as I am not familiar with it. Maybe I need to learn more about od to solve this problem.
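For what it's worth, a quick way to look at the raw bytes of one suspect ID (the value here is only an illustration) is to pipe it through od -c; any lookalike character that is not a plain ASCII digit shows up as escaped bytes instead of 0-9:
print "0123456789" | od -c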
My temporary work-around is omitting the /usr/xpg4/bin/grep -q [^0-9] test. But the problem has not been solved. What can I try next?
Your validity test function happens to be more complicated than it should be. E.g. why do you use a command substitution with print for ${#1}? Why don't you use ${#1} directly? Next, forking grep to test for a non-number is a slow and expensive operation. What about this equivalent function, 100% POSIX and blazingly fast:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    if test ${#1} -ne 10; then
        return 1 # ID length not exactly 10.
    fi
    case $1 in
        (*[!0-9]*) return 1;; # ID contains a non-digit.
        (*) return 0;; # ID is exactly 10 digits.
    esac
}
Or even simpler, if you don't mind repeating yourself:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    case $1 in
        ([0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) # 10 digits.
            return 0;;
        (*)
            return 1;;
    esac
}
This also avoids your unquoted use of a grep pattern, which is error-prone in the presence of one-character file names. Does this work better?
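As a quick sanity check, a small test loop like this one can exercise either version of the function; the sample IDs are made up, and the last one hides a capital letter O among the digits:
for id in 0123456789 123456789 12345678901 12345O7890; do
    if is_valid_id "$id"; then
        echo "$id: valid"
    else
        echo "$id: invalid"
    fi
done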

Parsing variable in loop incorrectly [duplicate]

I want to run certain actions on a group of lexicographically named files (01-09 before 10). I have to use a rather old version of FreeBSD (7.3), so I can't use yummies like echo {01..30} or seq -w 1 30.
The only working solution I found is printf "%02d " {1..30}. However, I can't figure out why I can't use $1 and $2 instead of 1 and 30. When I run my script (bash ~/myscript.sh 1 30), printf says {1..30}: invalid number
AFAIK, variables in bash are typeless, so why won't printf accept my argument as an integer?
Bash supports C-style for loops:
s=1
e=30
for ((i=s; i<=e; i++)); do printf "%02d " "$i"; done
The syntax you attempted doesn't work because brace expansion happens before parameter expansion, so when the shell tries to expand {$1..$2}, it's still literally {$1..$2}, not {1..30}.
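A quick way to see that ordering for yourself (purely illustrative; set -- just fakes the positional parameters):
set -- 1 30
echo {$1..$2}   # prints {1..30}: brace expansion ran first and saw no literal range
echo {1..3}     # prints 1 2 3: a literal range does expand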
The answer given by @Kent works because eval goes back to the beginning of the parsing process. I tend to suggest avoiding making habitual use of it, as eval can introduce hard-to-recognize bugs -- if your command were whitelisted to be run by sudo and $1 were, say, '$(rm -rf /; echo 1)', the C-style-for-loop example would safely fail, and the eval example... not so much.
Granted, 95% of the scripts you write may not be accessible to folks executing privilege escalation attacks, but the remaining 5% can really ruin one's day; following good practices 100% of the time avoids being in sloppy habits.
Thus, if one really wants to pass a range of numbers to a single command, the safe thing is to collect them in an array:
a=( )
for ((i=s; i<=e; i++)); do a+=( "$i" ); done
printf "%02d " "${a[@]}"
I guess you are looking for this trick:
#!/bin/bash
s=1
e=30
printf "%02d " $(eval echo {$s..$e})
Ok, I finally got it!
#!/bin/bash
#BSD-only iteration method
#for day in `jot $1 $2`
for ((day=$1; day<=$2; day++))
do
echo $(printf %02d $day)
done
I initially wanted to use the cycle iterator as a "day" in file names, but now I see that in my exact case it's easier to iterate through normal numbers (1,2,3 etc.) and process them into lexicographical ones inside the loop. While using jot, remember that $1 is the numbers amount, and the $2 is the starting point.
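For reference, if you do go the jot route on FreeBSD, something like the following (assuming jot's -w printf-style format option) should print the zero-padded sequence directly:
jot -w %02d 30 1   # 30 values starting at 1, formatted as 01 02 ... 30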

Getting highest extensions value in unix script

I need to create new files with extensions like file.1, file.2, file.3 and then check which numbered files exist and create file.(n+1), where n is the number of the highest existing file. I was trying to get the extensions using basename, but it doesn't work for more than one file:
file=`basename $file.*`
ext=${file##*.}
It only works when I give it a whole file name, like $file.3.
If the filenames are guaranteed not to have newline characters in them, you can, for example, use standard unix text processing tools:
printf '%s\n' file.* | #full list
sed 's/.*\.//' | #extensions
grep '^[0-9][0-9]*$' | #numerical extensions
awk '{ if($0>m) m=$0} END{ print m }' #get maximum
Here's my take on this.
You can do this entirely in standard awk.
$ awk '{ext=FILENAME; sub(/.*\./,"",ext)} ext~/^[0-9]+$/ && ext+0>n+0 {n=ext} {nextfile} END {print n}' *.*
Broken out for easier reading:
$ awk '
{
    # Capture the extension...
    ext=FILENAME
    sub(/.*\./,"",ext)
}
# Then, if we have a numeric extension that is bigger than "n"...
# (the +0 forces a numeric comparison, so "10" beats "9")
ext ~ /^[0-9]+$/ && ext+0 > n+0 {
    # let "n" be that extension.
    n=ext
}
{
    # We aren't actually interested in the contents of this file, so move on.
    nextfile
}
# No more files? Print our result.
END {print n}
' *.*
The idea here is that we'll step through the list of filenames and let awk do ALL the processing to capture and "sort" the extensions. (We're not really sorting, we're just recording the highest number as we pass through the files.)
There are a few provisos with this solution:
This only works if all the files have a non-zero length. Technically awk conditions are being compared on "lines of the file", so if there are no lines, awk will pass right by that file.
You don't really need to use the ext variable, you can modify FILENAME directly. I included it for improved readability.
The nextfile command is fairly standard, but not universal. If you have a very old machine, or are running an esoteric variety of unix, nextfile may not be included. (I don't expect this to be a problem.)
Another alternative, which might be easier for you, would be to implement the same logic directly in POSIX shell:
$ n=0; for f in *.*; do ext=${f##*.}; if expr "$ext" : '[0-9][0-9]*$' >/dev/null && [ "$ext" -gt "$n" ]; then n="$ext"; fi; done; echo "$n"
Or, again broken out for easier reading (or scripting):
n=0
for f in *.*; do
ext=${f##*.}
if expr "$ext" : '[0-9][0-9]*$' >/dev/null && [ "$ext" -gt "$n" ]; then
n="$ext"
fi
done
echo "$n"
This steps through all files using a for loop, captures the extension, makes sure it's numeric, determines whether it's greater than "n" and records it if it is, then prints its result.
It requires no pipes and no external tools except expr, which is a POSIX.1 tool available on every system.
One proviso for this solution is that if you have NO files with extensions (i.e. *.* returns no files), this script will erroneously report that the highest numbered extension is 0. You can of course handle that easily enough, but I thought I should mention it.
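For instance, one hedged way to handle that corner case is to check that the glob actually matched something before trusting n:
n=0
found=0
for f in *.*; do
    [ -e "$f" ] || continue   # the glob matched nothing; "$f" is the literal '*.*'
    found=1
    ext=${f##*.}
    if expr "$ext" : '[0-9][0-9]*$' >/dev/null && [ "$ext" -gt "$n" ]; then
        n="$ext"
    fi
done
if [ "$found" -eq 1 ]; then
    echo "$n"
else
    echo "no numbered files found"
fi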
Thanks for all the answers. I came up with a quite similar and slightly simpler idea, which I'd like to present:
n=0
for i in file.*; do
    #reading the extensions
    ext=${i##*.}
    if [ "$ext" -gt "$n" ]; then
        #increasing n
        n=$((n+1))
    fi
done
Then, if we want to get a number exceeding n by one:
until [[ $a -gt "$n" ]]; do
    a=$((a+1))
done
and finally a is one number bigger than the number of file extensions. So if there are three files, file.1 file.2 file.3, the returned value will be 4.

nested for loop too slow: 1MN record traversal

I have a huge file with around 200,000 records in it. I have been testing some cases in which I have to figure out whether the naming patterns of the files match some specific strings. Here's how I proceeded:
The test strings I stored in a file (let's say, for one case, there are 10 of them). The actual file contains the string records, separated by newlines, totaling up to 200,000 records. To check whether the test string patterns are present in the large file, I wrote a small nested for loop.
for i in `cat TestString.txt`
do
    for j in `cat LargeFile.txt`
    do
        if [[ $i == $j ]]
        then
            echo "Match" >> result.txt
        fi
    done
done
This nested loop (if I'm not wrong about the concept) has to do the traversal 10 x 200,000 times. Normally that doesn't look like too much of a load on the server, but the time it takes is enormous: the excerpt has been running for the past 4 hours, with of course some "Match" results.
Does anyone have any idea on speeding this up? I've found so many answers with a Python or Perl touch, but I'm honestly looking for something in plain Unix shell.
Thanks
Try the following:
grep -f TestString.txt LargeFile.txt >> result.txt
Check out grep
while read -r line
do
    grep "$line" LargeFile.txt >> result.txt
done < TestString.txt
grep will output any matching strings. This may be faster. Note that your TestString.txt file should not have any blank lines or grep will return everything from LargeFile.txt.
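If blank lines might sneak into TestString.txt, a guarded variant of the same loop (just a sketch) skips them before calling grep:
while read -r line
do
    [ -n "$line" ] || continue   # skip blank lines so grep "" never matches everything
    grep "$line" LargeFile.txt >> result.txt
done < TestString.txt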

grep -f alternative for huge files

grep -F -f file1 file2
file1 is 90 Mb (2.5 million lines, one word per line)
file2 is 45 Gb
That command doesn't actually produce anything whatsoever, no matter how long I leave it running. Clearly, this is beyond grep's scope.
It seems grep can't handle that many queries from the -f option. However, the following command does produce the desired result:
head file1 > file3
grep -F -f file3 file2
I have doubts about whether sed or awk would be appropriate alternatives either, given the file sizes.
I am at a loss for alternatives... please help. Is it worth it to learn some sql commands? Is it easy? Can anyone point me in the right direction?
Try using LC_ALL=C. It turns the search from UTF-8 pattern matching into plain ASCII byte matching, which sped things up by about 140 times for me: a 26 GB file that would have taken around 12 hours came down to a couple of minutes.
Source: Grepping a huge file (80GB) any way to speed it up?
So what I do is:
LC_ALL=C fgrep "pattern" <input >output
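Adapted to the -f form from the question, that would look something like this (a sketch, not benchmarked here):
LC_ALL=C grep -F -f file1 file2 > results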
I don't think there is an easy solution.
Imagine you wrote your own program to do what you want: you would end up with a nested loop, where the outer loop iterates over the lines in file2 and the inner loop iterates over file1 (or vice versa). The number of iterations grows with size(file1) * size(file2), which is a very large number when both files are large. Making one file smaller using head apparently resolves this issue, at the cost of no longer giving the correct result.
A possible way out is indexing (or sorting) one of the files. If you iterate over file2 and for each word you can determine whether or not it is in the pattern file without having to fully traverse the pattern file, then you are much better off. This assumes that you do a word-by-word comparison. If the pattern file contains not only full words, but also substrings, then this will not work, because for a given word in file2 you wouldn't know what to look for in file1.
Learning SQL is certainly a good idea, because learning something is always good. It will, however, not solve your problem, because SQL will suffer from the same quadratic effect described above. It may simplify indexing, should indexing be applicable to your problem.
Your best bet is probably taking a step back and rethinking your problem.
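As a rough sketch of that indexing idea, and only under the assumption that both files hold one entry per line and exact whole-line matches are wanted, sorting both files lets comm find the overlap in a single merge pass (note it reports each matching pattern once, not every matching record):
sort -u file1 > file1.sorted
sort file2 > file2.sorted
comm -12 file1.sorted file2.sorted > patterns_found_in_file2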
You could try ack; it is said to be faster than grep.
You can try parallel :
parallel --progress -a file1 'grep -F {} file2'
Parallel has got many other useful switches to make computations faster.
Grep can't handle that many queries, and at that volume, it won't be helped by fixing the grep -f bug that makes it so unbearably slow.
Are both file1 and file2 composed of one word per line? That means you're looking for exact matches, which we can do really quickly with awk:
awk 'NR == FNR { query[$0] = 1; next } query[$0]' file1 file2
NR (number of records, the line number) is only equal to the FNR (file-specific number of records) for the first file, where we populate the hash and then move onto the next line. The second clause checks the other file(s) for whether the line matches one saved in our hash and then prints the matching lines.
Otherwise, you'll need to iterate:
awk 'NR == FNR { query[$0]=1; next }
{ for (q in query) if (index($0, q)) { print; next } }' file1 file2
Instead of merely checking the hash, we have to loop through each query and see if it matches the current line ($0). This is much slower, but unfortunately necessary (though we're at least matching plain strings without using regexes, so it could be slower). The loop stops when we have a match.
If you actually wanted to evaluate the lines of the query file as regular expressions, you could use $0 ~ q instead of the faster index($0, q). Note that this uses POSIX extended regular expressions, roughly the same as grep -E or egrep but without bounded quantifiers ({1,7}) or the GNU extensions for word boundaries (\b) and shorthand character classes (\s,\w, etc).
These should work as long as the hash doesn't exceed what awk can store. This might be as low as 2.1B entries (a guess based on the highest 32-bit signed int) or as high as your free memory.
