This is a bit of a Hadoop & Unix mixed issue, and I'm really not sure which one is responsible for the error.
I have a bash script that validates a file in two ways:
Checks whether the total row count of the file matches the value mentioned in the footer of the file. (The footer is a metadata row containing the total number of data rows above it, and it is positioned at the end of the data file; a made-up example is shown just below.)
Checks whether the name of the file matches what it should be.
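For illustration only (entirely made-up contents), a data file with such a footer might look like this, where the last line holds the count of the three data rows above it:
ID|VALUE
101|A
102|B
103|C
3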
The function calculating the row count and extracting the footer count of the file is:
count() {
    rowCount=$(echo $(hdfs dfs -cat ${hdfspath}/${fileName} | wc -l) - 2 | bc -l)
    footer=$(hdfs dfs -tail ${hdfspath}/${fileName} | tail -1)
    footerRecordCount=`echo $footer | sed "s/[^0-9]//g"`
}
The code snippet calling the above function and performing the validation tests is below:
count()
if [[ ${footerRecordCount} -ne ${rowCount} ]]; then
    echo "error=Number of records in the file doesn't match with record count value mentioned in the footer of the file" >&2
    exit 1
else
    fn_logMessage "Footer Record Count $footerRecordCount matched with rows count $rowCount"
fi
if [ ${fileName} -ne ${actualFileName} ]; then
    echo "error=File name mismatch"
    exit 1
else
    echo "File name matched"
fi
The code looks fairly straightforward and simple; it is, and it works perfectly as well.
However, the issue comes up when I run this test on a huge file (>400 GB). I receive the output below:
Footer Record Count 00000003370000000002000082238384885577696960005044939533796567041020102349250692990597110000000000000000002222111111111111110200000003440000100013060089448361739204836173971223 matched with rows count 929901602
error=File name mismatch
Strange!!
The footer record count should actually be the number 929901602, but the number that comes up is some random number which doesn't exist anywhere in the file at all. And even though it obviously doesn't match, the output says "matched".
Meanwhile, the error from the next if block is shown.
Not sure which is the culprit here, Unix or Hadoop. I performed this test 3 times in a row, and every time the "huge number" that pops up is completely different from the previous one, so there isn't even a correlation between these large numbers.
Any idea what on earth is going wrong?
PS: Like I said, the code works perfectly for smaller files like 20 GB or 50 GB.
Thanks in advance.
Related
I have a Unix ksh script that has been in daily use for years (kicked off at night by the crontab). Recently one function in the script has been behaving erratically, as never happened before. I have tried various ways to find out why, but have had no success.
The function validates an input string, which is supposed to be a string of 10 numeric characters. The function checks if the string length is 10, and whether it contains any non-numeric characters:
#! /bin/ksh
# The function:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    if [[ $(print ${#1}) -ne 10 ]] || print "$1" | /usr/xpg4/bin/grep -q [^0-9] ; then
        return 1
    else
        return 0
    fi
}
cat $input_file | while read line ; do
    id=$(print $line | awk -F: '{print $5}')
    # Calling the function:
    is_valid_id $id
    stat=$?
    if [[ $stat -eq 1 ]] ; then
        print "The ID $id is invalid. Request rejected.\n" >> $ERRLOG
        continue
    else
        ...
    fi
done
The problem with the function is that, every night, out of scores or hundreds of requests, it finds the IDs in several requests to be invalid. I visually inspected the input data and found that all the "invalid" IDs are actually strings of 10 numeric characters, as they should be. The error seems random, because it happens with only some of the requests. However, while the rejected requests persistently come back, it is consistently the same IDs that are picked out as invalid day after day.
I did the following:
The Unix machine has been running for almost a year and therefore might need to be refreshed. The system admin rebooted the machine at my request, but the problem persists after the reboot.
I manually ran exactly the same two tests from the function at the command prompt, and the IDs that had been found invalid at night were all valid.
I know the same commands may behave differently when invoked manually versus in a script. To see how the function behaves in a script, the code excerpt above is the small script I ran to reproduce the problem. And indeed, some (though not all) of the IDs found to be invalid at night are also found invalid by this small troubleshooting script.
I then modified that troubleshooting script to run the two tests one at a time, and found that it is the /usr/xpg4/bin/grep -q [^0-9] test that erroneously finds some of the IDs as containing non-numeric character(s). Yet the IDs are all numeric characters, at least visually.
I checked whether there is any problem with the xpg4 grep command file itself (ls -l /usr/xpg4/bin/grep), to see if it was put there recently. But its timestamp is from 2005 (this machine runs Solaris 10).
The data comes from a central ERP system, into which data entry is performed from different locations using all kinds of terminal machines running all kinds of operating systems that support various character sets and encodings, and the ERP system simply accepts them. Could characters from other encodings visually appear as numeric characters while their encoded values are not what the /usr/xpg4/bin/grep command expects on our Unix machine? I tried the od (octal dump) command, but it did not help me much as I am not familiar with it. Maybe I need to know more about od to solve this problem.
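Perhaps a command along these lines would reveal such characters (just a sketch, with a made-up ID value): od -c prints each byte as a printable character or an octal escape, so a non-ASCII "look-alike" digit would show up as escape sequences rather than plain 0-9.
# Sketch: inspect the raw bytes of one suspect ID (the value here is made up).
print "1234567890" | od -c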
My temporary work-around is omitting the /usr/xpg4/bin/grep -q [^0-9] test. But the problem has not been solved. What can I try next?
Your validity test function is more complicated than it needs to be. For example, why do you use a command substitution with print for ${#1}? Why not use ${#1} directly? Next, forking grep to test for a non-number is a slow and expensive operation. What about this equivalent function, 100% POSIX and blazingly fast:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    if test ${#1} -ne 10; then
        return 1 # ID length not exactly 10.
    fi
    case $1 in
        (*[!0-9]*) return 1;; # ID contains a non-digit.
        (*) return 0;;        # ID is exactly 10 digits.
    esac
}
Or even simpler, if you don't mind repeating yourself:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    case $1 in
        ([0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) # 10 digits.
            return 0;;
        (*)
            return 1;;
    esac
}
This also avoids your unquoted use of a grep pattern, which is error-prone in the presence of one-character file names. Does this work better?
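To see why the unquoted grep pattern matters, here is a contrived illustration (run in an otherwise empty scratch directory): an unquoted [^0-9] is also a filename-generation pattern, so the shell may replace it with a matching one-character file name before grep ever sees it.
touch '^'                                            # a one-character file name
print '1234567890' | /usr/xpg4/bin/grep -q [^0-9]    # the glob expands, so grep receives the pattern ^ and matches every line
print '1234567890' | /usr/xpg4/bin/grep -q '[^0-9]'  # quoted: grep really receives [^0-9] and correctly finds no non-digit
If several such files exist, the extra words even become file name arguments and grep stops reading stdin altogether, so the result depends on whatever happens to be lying around in the cron job's working directory.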
I just started learning awk and sed after my first 2 questions on Stack Overflow. Thanks to Roman, Hek, Randomnir, Edmorton and the many others who corrected and helped me wholeheartedly.
Right now I can change awk commands to suit my data requirements, but I need some help from everyone out here. I'm taking baby steps towards fixing all Unix errors on my own, so some advice would be helpful.
My data -
ID | Passcode
41-1|10551
1-105|5569
4-7|10043
78-3|217631
3-1|19826
12-1|19818912
My output has to be
ID | Passcode
41-1|10551
4-7|10043
78-3|217631
3-1|19826
12-1|19818912
All records whose 2nd column (Passcode) is less than 5 characters long must be deleted or filtered out. My output file should contain only Passcodes that are 5 characters or longer.
It is pretty simple: just use the length() function to keep lines whose second field is 5 or more characters long, after setting the input and output field separators to |
awk 'BEGIN{FS=OFS="|"} NR==1 || length($2)>=5' file
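Here NR==1 keeps the header line, and length($2)>=5 is true only when the Passcode field has at least five characters; since awk prints a line whenever the condition is true and no action is given, shorter rows such as 1-105|5569 are simply dropped.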
I have a file with a huge record count, around 200,000 records. I have been testing some cases where I have to figure out whether the naming patterns of the files match some specific strings. Here's how I proceeded:
The test strings I stored in a file (let's say for one case there are 10 of them). The actual file contains string records separated by newlines, totaling up to 200,000 records. To check whether the test string patterns are present in the large file, I wrote a small nested for loop.
for i in `cat TestString.txt`
do
    for j in `cat LargeFile.txt`
    do
        if [[ $i == $j ]]
        then
            echo "Match" >> result.txt
        fi
    done
done
This nested loop has to do the traversal (if I'm not wrong about the concept) 10 x 200,000 times. Normally I wouldn't consider that too much load on the server, but the time taken is endless: the excerpt has been running for the past 4 hours, with of course some "Match" results.
Does anyone have any idea how to speed this up? I've found plenty of answers with a Python or Perl touch, but I'm honestly looking for something in plain Unix.
Thanks
Try the following:
grep -f TestString.txt LargeFile.txt >> result.txt
Check out grep
while read line
do
    cat LargeFile.txt | grep "$line" >> result.txt
done < TestString.txt
grep will output any matching strings. This may be faster. Note that your TestString.txt file should not have any blank lines or grep will return everything from LargeFile.txt.
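Since the original loop tests for exact equality rather than substring matches, a fixed-string, whole-line variant may be closer to what is wanted and is usually much faster (a sketch, assuming one pattern per line with no blank lines):
# -F: treat the patterns as fixed strings, -x: match whole lines only,
# -f: read the patterns from TestString.txt
grep -Fxf TestString.txt LargeFile.txt >> result.txt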
I have a data set where the max number of records in one file is ~130,000.
Here is a subset of the first file, 1.txt:
CID|UID|Key|sis_URL
1|D000108|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779
1|D000108|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622
1|D000644|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779
1|D000644|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622
1|D002331|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779
1|D002331|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622
11|C024565|WSLDOOZREJYCGB|http://sis.gov/regno=0000107062
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0000120821
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0063697187
Here is a subset of the second file, 2.txt:
CID|bro_URL
11|http://bro.gov/nmbr=0149
13|http://bro.gov/nmbr=0119
I am running GnuWin32 under Windows 7, 64-bit, with 8 GB memory; therefore I need to use double quotes for Windows. The join command is:
join -t"|" -1 1 -2 1 -a1 -a2 -e "NULL" -o "0,1.2,1.3,1.4,2.2" 1.txt 2.txt > 3_.txt
Here is the output file, 3.txt.
CID|UID|Key|sis_URL|bro_URL
1|D000108|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779|NULL
1|D000108|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622|NULL
1|D000644|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779|NULL
1|D000644|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622|NULL
1|D002331|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779|NULL
1|D002331|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622|NULL
11|NULL|NULL|NULL|http://bro.gov/nmbr=0149
13|NULL|NULL|NULL|http://bro.gov/nmbr=0119
11|C024565|WSLDOOZREJYCGB|http://sis.gov/regno=0000107062|NULL
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0000120821|NULL
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0063697187|NULL
For CID:11 and CID:13, I am expecting:
11|C024565|WSLDOOZREJYCGB|http://sis.gov/regno=0000107062|http://bro.gov/nmbr=0149
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0000120821|http://bro.gov/nmbr=0119
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0063697187|http://bro.gov/nmbr=0119
Why does the join on CID:11 and CID:13 fail?
Note: before posting this question I ran the subset above and it produced the proper results. When I run the complete set, I get the improper result (the subset of which is shown here).
Any idea why? Any recommended alternative?
When I've completed the join process, my final table will be 15 columns wide. But I'm already stymied at column 4.
Any proposed work-around, such as awk?
You can try the following command:
awk -f a.awk 2.txt 1.txt > 3.txt
where a.awk is:
BEGIN { FS=OFS="|" }
NR==FNR{
    a[$1]=$2
    next
}
{
    if ($1 in a)
        $(NF+1)=a[$1]
    else
        $(NF+1)="NULL"
    print
}
with output:
CID|UID|Key|sis_URL|bro_URL
1|D000108|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779|NULL
1|D000108|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622|NULL
1|D000644|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779|NULL
1|D000644|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622|NULL
1|D002331|RDHQFKQIGNGIED|http://sis.gov/regno=0000870779|NULL
1|D002331|RDHQFKQIGNGIED|http://sis.gov/regno=0014992622|NULL
11|C024565|WSLDOOZREJYCGB|http://sis.gov/regno=0000107062|http://bro.gov/nmbr=0149
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0000120821|http://bro.gov/nmbr=0119
13|C009947|PBKONEOXTCPAFI|http://sis.gov/regno=0063697187|http://bro.gov/nmbr=0119
Explanation
We read the data in 2.txt into the associative array a
The test NR==FNR is used to match only the first file on the command line, that is the file 2.txt
The next statement is important so that the remaining rules are not executed for the lines of 2.txt
The second rule (the one containing the if test) is then executed only for 1.txt, but the information from 2.txt is still available through the array a
If the first field matches a value in the first column of 2.txt, that is, if ($1 in a), then we append that stored value to the end of the line (NF is the number of fields, i.e. columns, read from 1.txt)
If there is no match, we append the string "NULL" instead
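As a side note on the original join attempt: join expects both of its inputs to be sorted on the join field, which may well be why the small subset joined correctly while the full, unsorted 130,000-record files produced unpaired lines. A sketch of pre-sorting first (file names as in the question; the header lines may need separate handling):
sort -t"|" -k1,1 1.txt > 1.sorted.txt
sort -t"|" -k1,1 2.txt > 2.sorted.txt
join -t"|" -1 1 -2 1 -a1 -a2 -e "NULL" -o "0,1.2,1.3,1.4,2.2" 1.sorted.txt 2.sorted.txt > 3.txt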
I'd like to use grep on a text file with -f to match a long list (10,000) of patterns. Turns out that grep doesn't like this (who knew?). After a day, it hadn't produced anything. Smaller lists work almost instantaneously.
I was thinking I might split my long list up and do it a few times. Any idea what a good maximum length for the pattern list might be?
Also, I'm rather new to Unix. Alternative approaches are welcome. The list of patterns, or search terms, is in a plaintext file, one per line.
Thank you everyone for your guidance.
From comments, it appears that the patterns you are matching are fixed strings. If that is the case, you should definitely use -F. That will increase the speed of the matching considerably. (Using 479,000 strings to match on an input file with 3 lines using -F takes under 1.5 seconds on a moderately powered machine. Not using -F, that same machine is not yet finished after several minutes.)
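For example (the file names here are placeholders):
# -F treats every line of patterns.txt as a fixed string rather than a regular expression.
grep -F -f patterns.txt data.txt > matches.txt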
I had the same problem with approx. 4 million patterns to search for in a file with 9 million lines. It seems to be a RAM problem, so I used this neat little workaround, which may be slower than splitting and joining but needs just this one line:
while read line; do grep "$line" fileToSearchIn; done < patternFile
I needed the workaround since the -F flag is no solution for files that large...
EDIT: This seems to be really slow for large files. After some more research I found 'faSomeRecords' and other really awesome tools from the Kent NGS-editing-Tools.
I tried it on my own by extracting 2 million FASTA records from a 5.5 million record file. It took approx. 30 seconds.
cheers
EDIT: direct download link
Here is a bash script you can run on your files (or if you would like, a subset of your files). It will split the key file into increasingly large blocks, and for each block attempt the grep operation. The operations are timed - right now I'm timing each grep operation, as well as the total time to process all the sub-expressions.
Output is in seconds - with some effort you can get ms, but with the problem you are having it's unlikely you need that granularity.
Run the script in a terminal window with a command of the form
./timeScript keyFile textFile 100 > outputFile
This will run the script, using keyFile as the file where the search keys are stored, and textFile as the file where you are looking for keys, and 100 as the initial block size. On each loop the block size will be doubled.
In a second terminal, run the command
tail -f outputFile
which will keep track of the output that your other process writes to the file outputFile
I recommend that you open a third terminal window, and that you run top in that window. You will be able to see how much memory and CPU your process is taking - again, if you see vast amounts of memory consumed it will give you a hint that things are not going well.
This should allow you to find out when things start to slow down - which is the answer to your question. I don't think there's a "magic number" - it probably depends on your machine, and in particular on the file size and the amount of memory you have.
You could take the output of the script and put it through a grep:
grep entire outputFile
You will end up with just the summaries - block size, and time taken, e.g.
Time for processing entire file with blocksize 800: 4 seconds
If you plot these numbers against each other (or simply inspect the numbers), you will see when the algorithm is optimal, and when it slows down.
Here is the code: I did not do extensive error checking but it seemed to work for me. Obviously in your ultimate solution you need to do something with the outputs of grep (instead of piping it to wc -l which I did just to see how many lines were matched)...
#!/bin/bash
# script to look at difference in timing
# when grepping a file with a large number of expressions
# assume first argument = name of file with list of expressions
# second argument = name of file to check
# optional third argument = initial block size (default 100)
#
# split f1 into chunks of 1, 2, 4, 8... expressions at a time
# and print out how long it took to process all the lines in f2
if (($# < 2 )); then
  echo Warning: need at least two parameters.
  echo Usage: timeScript keyFile searchFile [initial blocksize]
  exit 0
fi
f1_linecount=`cat $1 | wc -l`
echo linecount of file1 is $f1_linecount
f2_linecount=`cat $2 | wc -l`
echo linecount of file2 is $f2_linecount
echo
if (($# < 3 )); then
  blockLength=100
else
  blockLength=$3
fi
while (($blockLength < f1_linecount))
do
  echo Using blocks of $blockLength
  # split is a standard utility that splits a file into pieces
  # -l tells it to break after $blockLength lines
  # and the block$blockLength parameter is a prefix for the output files
  split -l $blockLength $1 block$blockLength
  Tstart="$(date +%s)"
  Tbefore=$Tstart
  for fn in block*
  do
    echo "grep -f $fn $2 | wc -l"
    echo number of lines matched: `grep -f $fn $2 | wc -l`
    Tnow="$(($(date +%s)))"
    echo Time taken: $(($Tnow - $Tbefore)) s
    Tbefore=$Tnow
  done
  echo Time for processing entire file with blocksize $blockLength: $(($Tnow - $Tstart)) seconds
  blockLength=$((2*$blockLength))
  # remove the split files - no longer needed
  rm block*
  echo block length is now $blockLength and f1 linecount is $f1_linecount
done
exit 0
You could certainly give sed a try to see whether you get a better result, but it is a lot of work either way on a file of any size. You didn't provide many details about your problem, but if you have 10k patterns I would try to think about whether there is some way to generalize them into a smaller number of regular expressions.
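For instance (an entirely made-up pattern set and file name), if thousands of the keys share one common shape, a single extended regular expression can stand in for all of them:
# Hypothetical: keys of the form ACC-0001 ... ACC-9999 collapse into one pattern.
grep -E 'ACC-[0-9]{4}' textFile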
Here is a perl script "match_many.pl" which addresses a very common subset of the "large number of keys vs. large number of records" problem. Keys are accepted one per line from stdin. The two command line parameters are the name of the file to search and the field (white space delimited) which must match a key. This subset of the original problem can be solved quickly since the location of the match (if any) in the record is known ahead of time and the key always corresponds to an entire field in the record. In one typical case it searched 9400265 records with 42899 keys, matching 42401 of the keys and emitting 1831944 records in 41s. The more general case, where the key may appear as a substring in any part of a record, is a more difficult problem that this script does not address. (If keys never include white space and always correspond to an entire word the script could be modified to handle that case by iterating over all fields per record, instead of just testing the one, at the cost of running M times slower, where M is the average field number where the matches are found.)
#!/usr/bin/perl -w
use strict;
use warnings;
my $kcount;
my ($infile,$test_field) = @ARGV;
if(!defined($infile) || "$infile" eq "" || !defined($test_field) || ($test_field <= 0)){
    die "syntax: match_many.pl infile field"
}
my %keys;       # hash of keys
$test_field--;  # external range (1,N) to internal range (0,N-1)
$kcount=0;
while(<STDIN>) {
    my $line = $_;
    chomp($line);
    $keys{$line} = 1;
    $kcount++;
}
print STDERR "keys read: $kcount\n";
my $records = 0;
my $emitted = 0;
open(INFILE, $infile ) or die "Could not open $infile";
while(<INFILE>) {
    if(substr($_,0,1) =~ /#/){ # skip comment lines
        next;
    }
    my $line = $_;
    chomp($line);
    $line =~ s/^\s+//;
    my @fields = split(/\s+/, $line);
    if(exists($keys{$fields[$test_field]})){
        print STDOUT "$line\n";
        $emitted++;
        $keys{$fields[$test_field]}++;
    }
    $records++;
}
$kcount=0;
while( my( $key, $value ) = each %keys ){
    if($value > 1){
        $kcount++;
    }
}
close(INFILE);
print STDERR "records read: $records, emitted: $emitted; keys matched: $kcount\n";
exit;
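A usage sketch (the file names are invented): the keys are read from stdin, and the record file plus the 1-based field number to test are the two arguments, matching the script's syntax message.
# Search field 2 of records.txt for the keys listed in keys.txt.
./match_many.pl records.txt 2 < keys.txt > matched_records.txt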