Is there a way to skip the first x lines of a bz2 file in Python without calling next()?

I'm trying to read the latest Wikidata dump while skipping the first, say, 100 lines.
Is there a better way to do this than calling next() repeatedly?
import bz2

WIKIDATA_JSON_DUMP = bz2.open('latest-all.json.bz2', 'rt')
for n in range(100):
    next(WIKIDATA_JSON_DUMP)
Alternatively, is there a way to split up the file in bash by, say, using bzcat to pipe select chunks to smaller files?

If it was compressed with something like bgzip, you can skip whole blocks, but each block will contain a variable number of lines, depending on the compression ratio. For a raw bzip2 file, which is a single stream, I don't think you have any choice but to read and throw away the lines to be skipped, due to the nature of the compression format. At best you can hide the loop (e.g. by iterating over itertools.islice(WIKIDATA_JSON_DUMP, 100, None) instead of calling next() yourself), but those lines still have to be decompressed and read.

You can try the following in bash, to skip the first 10 lines for example:
bzcat -d -c /tmp/myfile.bz2 | tail -n +11
Note that tail is given N+1, where N is the number of lines you want to skip.
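For the second part of the question (carving the dump into smaller files), a minimal sketch along the same lines; the line numbers and output names here are just illustrative:
bzcat latest-all.json.bz2 | tail -n +101 | head -n 1000000 > chunk1.json   # lines 101-1000100
bzcat latest-all.json.bz2 | tail -n +1000101 | head -n 1000000 > chunk2.json
Note that each chunk re-decompresses the file from the start, so if you want all the pieces anyway, a single pass through split is cheaper:
bzcat latest-all.json.bz2 | split -l 1000000 - chunk_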

Related

UNIX split command splitting this file, but what names are resulting?

We receive a big csv file from a client (500k lines, est) that we split into smaller chunks using the split command.
You can see how we're using the command below, but my bash knowledge is a bit rusty, could someone refresh me on the ${processFile}_ bit below, and how the files are being named in the end? Not recalling what the underscore does...
split -l 50000 $PROCESSING_CURRENT_DIR/$processFile ${processFile}_
This isn't anything to do with bash; it's how the split(1) command processes its arguments to split the input.
Syntax is:
split [OPTION]... [FILE [PREFIX]]
DESCRIPTION
Output pieces of FILE to PREFIXaa, PREFIXab, ...; default size is 1000 lines, and default PREFIX is 'x'.
With no FILE, or when FILE is -, read standard input.
So split appends 'aa', 'ab', 'ac', ... to the given prefix to name its output files; here the prefix is ${processFile}_, so the pieces come out as ${processFile}_aa, ${processFile}_ab, and so on.
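For example (the file name and line count here are just for illustration):
split -l 50000 bigexport.csv bigexport.csv_
ls
# bigexport.csv_aa  bigexport.csv_ab  bigexport.csv_ac  ...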

How can I tail -f but only in whole lines?

I have a constantly updating huge log file (MainLog).
I want to create another file which is only the last n lines of the log file BUT also updating.
If I use:
tail -f MainLog > RecentLog
I get ALMOST what I want except that RecentLog is written as MainLog becomes available and might at any point contain only part of the last MainLog line.
How can I specify to tail that I only want it to write when a WHOLE line is available?
By default, tail outputs whole lines unless you use the -c switch to count characters. Something like
tail -n 20 -f MainLog > RecentLog
(substituting the number of lines you want written to the second file for "20") should work as you want.
But if it doesn't, it is possible that using grep to line-buffer your output will fix this condition. See this question.
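A minimal sketch of that workaround, assuming GNU grep: the empty pattern matches every line, and --line-buffered makes grep flush each complete line as it is written.
tail -n 20 -f MainLog | grep --line-buffered '' > RecentLog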
After many attempts, the only solution for multiple files that worked (fantastically well) for me is the fdlinecombine command. It's a small binary that reads multiple file descriptors and prints data to stdout linewise.
My use case is spawning multiple long-running ssh commands in the background and following their output, without having the lines garbled or interrupted in between.

Split files linux and then grep

I'd like to split a file and grep each piece without writing them to individual files.
I've attempted a couple variations of split and grep and no such luck; any suggestions?
Something along the lines of:
split -b SIZE filename | grep "string"
I've attempted grep/fgrep to find the string but my shell complains that the files are too large. See: use fgrep instead
There is no point in splitting the file if you plan to [linearly] search each of the pieces anyway (assuming that's the only thing you are doing with it). Consider running grep on the entire file.
If however you plan to utilize the fact that the file is split later on, then the typical way would be:
Create a temporary directory and step into it
Run split/csplit on the original file
Use a for loop over the written fragments to do your processing (see the sketch below).
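A minimal sketch of that pattern; the file name, fragment size, and the grep standing in for your per-piece processing are all placeholders:
mkdir /tmp/pieces && cd /tmp/pieces
split -l 1000000 /path/to/filename piece_
for f in piece_*
do
    grep "string" "$f" > "$f.matches"
done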

grep -f alternative for huge files

grep -F -f file1 file2
file1 is 90 Mb (2.5 million lines, one word per line)
file2 is 45 Gb
That command doesn't actually produce anything whatsoever, no matter how long I leave it running. Clearly, this is beyond grep's scope.
It seems grep can't handle that many queries from the -f option. However, the following command does produce the desired result:
head file1 > file3
grep -F -f file3 file2
I have doubts about whether sed or awk would be appropriate alternatives either, given the file sizes.
I am at a loss for alternatives... please help. Is it worth it to learn some sql commands? Is it easy? Can anyone point me in the right direction?
Try using LC_ALL=C. It switches the search from UTF-8 locale rules to plain byte comparison, which in my case was about 140 times faster: a 26G file that would have taken around 12 hours went down to a couple of minutes.
Source: Grepping a huge file (80GB) any way to speed it up?
So what I do is:
LC_ALL=C fgrep "pattern" <input >output
I don't think there is an easy solution.
Imagine you write your own program which does what you want and you will end up with a nested loop, where the outer loop iterates over the lines in file2 and the inner loop iterates over file1 (or vice versa). The number of iterations grows with size(file1) * size(file2). This will be a very large number when both files are large. Making one file smaller using head apparently resolves this issue, at the cost of not giving the correct result anymore.
A possible way out is indexing (or sorting) one of the files. If you iterate over file2 and for each word you can determine whether or not it is in the pattern file without having to fully traverse the pattern file, then you are much better off. This assumes that you do a word-by-word comparison. If the pattern file contains not only full words, but also substrings, then this will not work, because for a given word in file2 you wouldn't know what to look for in file1.
Learning SQL is certainly a good idea, because learning something is always good. It will, however, not solve your problem, because SQL will suffer from the same quadratic effect described above. It may simplify indexing, should indexing be applicable to your problem.
Your best bet is probably taking a step back and rethinking your problem.
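One way to act on the sorting suggestion, as a rough sketch: it assumes both files really are one word per line, that you only need exact whole-line matches, and that collapsing duplicate output lines is acceptable.
# sort spills to temporary files, so it copes with files larger than RAM;
# comm -12 then prints only the lines common to both sorted files
LC_ALL=C sort -u file1 > file1.sorted
LC_ALL=C sort -u file2 > file2.sorted
LC_ALL=C comm -12 file1.sorted file2.sorted > matches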
You can try ack. It is said to be faster than grep.
You can try parallel:
parallel --progress -a file1 'grep -F {} file2'
Parallel has got many other useful switches to make computations faster.
Grep can't handle that many queries, and at that volume, it won't be helped by fixing the grep -f bug that makes it so unbearably slow.
Are both file1 and file2 composed of one word per line? That means you're looking for exact matches, which we can do really quickly with awk:
awk 'NR == FNR { query[$0] = 1; next } query[$0]' file1 file2
NR (number of records, the line number) is only equal to the FNR (file-specific number of records) for the first file, where we populate the hash and then move onto the next line. The second clause checks the other file(s) for whether the line matches one saved in our hash and then prints the matching lines.
Otherwise, you'll need to iterate:
awk 'NR == FNR { query[$0]=1; next }
{ for (q in query) if (index($0, q)) { print; next } }' file1 file2
Instead of merely checking the hash, we have to loop through each query and see if it matches the current line ($0). This is much slower, but unfortunately necessary (though we're at least matching plain strings rather than regexes, so it could be slower still). The loop stops when we have a match.
If you actually wanted to evaluate the lines of the query file as regular expressions, you could use $0 ~ q instead of the faster index($0, q). Note that this uses POSIX extended regular expressions, roughly the same as grep -E or egrep but without bounded quantifiers ({1,7}) or the GNU extensions for word boundaries (\b) and shorthand character classes (\s, \w, etc.).
These should work as long as the hash doesn't exceed what awk can store. This might be as low as 2.1B entries (a guess based on the highest 32-bit signed int) or as high as your free memory.

How do I split a log file with an offset value in unix?

I have a really big log file (9GB -- I know I need to fix that) on my box. I need to split into chunks so I can upload it to amazon S3 for backup. S3 has a max file size of 5GB. So I would like to split this into several chunks and then upload each one.
Here is the catch: I only have 5GB free on my server, so I can't just do a simple unix split. Here is what I want to do:
grab the first 4GB of the log file and spit it out into a separate file (call it segment 1)
Upload that segment1 to s3.
rm segment1 to free up space.
grab the middle 4GB from the log file and upload to s3. Cleanup as before
Grab the remaining 1GB and upload to S3.
I can't find the right unix command to split with an offset. Split only does things in equal chunks and csplit doesn't seem to have what I need either. Any recommendations?
One (convoluted) solution is to compress it first. A textual log file should easily go from 9G to well below 5G, then you delete the original, giving you 9G of free space.
Then you pipe that compressed file directly through split so as to not use up more disk space. What you'll end up with is a compressed file and the three files for upload.
Upload them, then delete them, then uncompress the original log.
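A rough sketch of that plan, assuming the log is called biglogfile and that gzip gets it comfortably under the 5GB limit:
gzip -9 biglogfile                     # replaces biglogfile with biglogfile.gz, freeing the 9G
split --bytes=2G biglogfile.gz part_   # a few pieces for upload, if the .gz is still too big
# upload part_* (or biglogfile.gz itself), delete them, then restore the log:
gunzip biglogfile.gz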
=====
A better solution is to just count the lines (say 3 million) and use an awk script to extract and send the individual parts:
awk 'NR==1,NR==1000000 {print}' biglogfile > bit1
# send and delete bit1
awk 'NR==1000001,NR==2000000 {print}' biglogfile > bit2
# send and delete bit2
awk 'NR==2000001,NR==3000000 {print}' biglogfile > bit3
# send and delete bit3
Then, at the other end, you can either process bit1 through bit3 individually, or recombine them:
mv bit1 whole
cat bit2 >>whole ; rm bit2
cat bit3 >>whole ; rm bit3
And, of course, this splitting can be done with any of the standard text processing tools in Unix: perl, python, awk, head/tail combo. It depends on what you're comfortable with.
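For instance, the head/tail combo version of the same split (line numbers as above):
head -n 1000000 biglogfile > bit1
head -n 2000000 biglogfile | tail -n 1000000 > bit2
tail -n +2000001 biglogfile > bit3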
First, gzip -9 your log file.
Then, write a small shell script to use dd:
#!/usr/bin/env sh
chunk_size=$((2048 * 1048576))        # 2GiB chunks, in bytes
input_file=$1
len=$(stat -c '%s' "$input_file")     # file size in bytes (GNU stat)
chunks=$((len / chunk_size + 1))
for i in $(seq 0 $((chunks - 1)))
do
    dd if="$input_file" skip=$i of="$input_file.part" count=1 bs=$chunk_size
    scp "$input_file.part" servername:path/"$input_file.part.$i"
done
I just plopped this in off the top of my head, so I don't know if it will work without modification, but something very similar to this is what you need.
You can use dd. You will need to specify bs (the block/buffer size), skip (the number of blocks to skip), and count (the number of blocks to copy) for each piece.
So using a buffer size of 10Meg, you would do:
# For the first 4Gig
dd if=myfile.log bs=10M skip=0 count=400 of=part1.logbit
<upload part1.logbit and remove it>
# For the second 4Gig
dd if=myfile.log bs=10M skip=400 count=400 of=part2.logbit
...
You might also benefit from compressing the data you are going to transfer:
dd if=myfile.log bs=10M skip=800 count=400 | gzip -c > part3.logbit.gz
There may be more friendly methods.
dd has some real shortcomings. If you use a small buffer size, it runs much more slowly. But you can only skip/seek in the file by multiples of bs. So if you want to start reading data from a prime offset, you're in a real fiddle. Anyway I digress.
Coreutils split creates equal-sized output sections, except for the last section.
split --bytes=4G bigfile chunks
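After downloading, the pieces can be stitched back together with cat, since the suffixes sort in order:
cat chunks* > bigfile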
