Gtar in Solaris, tar to multiple splits based on size - unix

I am using a gtar command that would create a 10 GB archive on a pen drive. After a little research I learned that the FAT32 file system supports a maximum file size of 4 GB. How can I make the running gtar command split its output into multiple files, each smaller than 4 GB?
Gtar should detect when the file it is creating is about to exceed 4 GB, stop writing that file, and continue with the next one.
I know we could create the 10 GB file in one location and then split that static file, but we do not want to do that.

You can use the split command like this:
cd /path/to/flash
tar cvf - /path/of/source/files | split -b 4000m
(split treats a plain number as bytes; the m suffix makes this 4000 MiB per piece, which stays safely under the FAT32 4 GB limit.)
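If you later need to restore from the pen drive, concatenate the pieces back into tar; a minimal sketch, assuming the default xaa, xab, ... names that split produces when no prefix is given:
cat xa* | tar xvf -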

Check the man page for these options.
[--tape-length=NUMBER] [--multi-volume]
Device selection and switching:
-f, --file=ARCHIVE
use archive file or device ARCHIVE
--force-local
archive file is local even if it has a colon
-F, --info-script=NAME, --new-volume-script=NAME
run script at end of each tape (implies -M)
-L, --tape-length=NUMBER
change tape after writing NUMBER x 1024 bytes
-M, --multi-volume
create/list/extract multi-volume archive
--rmt-command=COMMAND
use given rmt COMMAND instead of rmt
--rsh-command=COMMAND
use remote COMMAND instead of rsh
--volno-file=FILE
use/update the volume number in FILE
Device blocking:
-b, --blocking-factor=BLOCKS
BLOCKS x 512 bytes per record
-B, --read-full-records
reblock as we read (for 4.2BSD pipes)
-i, --ignore-zeros
ignore zeroed blocks in archive (means EOF)
--record-size=NUMBER
NUMBER of bytes per record, multiple of 512
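Putting -M and -L together for this case, a sketch (the 3900000 value, which is multiplied by 1024 and so stays under 4 GB, and the volume names are only examples; if your gtar accepts several -f options it uses them in turn as each volume fills, otherwise it will prompt you for the next volume name):
gtar -cvM -L 3900000 -f /pendrive/backup.tar-1 -f /pendrive/backup.tar-2 -f /pendrive/backup.tar-3 /path/of/source/files
Extraction works the same way in reverse: gtar -xvM -f /pendrive/backup.tar-1 -f /pendrive/backup.tar-2 -f /pendrive/backup.tar-3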

Related

Merge multiple files based on size: Limit resultant file size as well

Merging multiple files into one single file isn't an issue in unix. However, I want to combine many files into fewer files and limit the size of each resulting file.
Here's full explanation:
1) There are 200 files of varying sizes ranging from 1KB to 2 GB.
2) I wish to combine multiple files at random and create multiple files of 5 GB each.
3) So if there are 200 files ranging from 1KB to 2GB per file, the resultant set might be 10 files of 5GB each.
Below is the approach I'm trying, but I couldn't devise the logic and need some assistance:
for i in `ls /tempDir/`
do
    if [[ -r $i ]]
    then
        for files in `find /tempDir/ -size +2G`
            cat $files > combinedFile.csv
    fi
done
This will only create one file, combinedFile.csv, whatever its size may be. But I need to limit the size of combinedFile.csv to 5 GB and create multiple files combinedFile_1.csv, combinedFile_2.csv, etc.
I would also like to ensure that when these merged files are created, no row is broken across two files.
Any ideas how to achieve it?
I managed a workaround by concatenating and then splitting the files with the code below:
for files in `find ${dir}/ -size +0c -type f`
do
    if [[ -r $files ]]
    then
        cat $files >> ${workingDirTemp}/${fileName}
    else
        echo "Corrupt Files"
        exit 1
    fi
done
cd ${workingDir}
split --line-bytes=${finalFileSize} ${fileName} --numeric-suffixes -e --additional-suffix=.csv ${unserInputFileName}_
cat is a CPU-intensive operation for big files like 10+ GB. Does anyone have a solution that could reduce the CPU load or increase the processing speed?
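One way to skip the intermediate concatenated copy entirely is to stream the files straight into split; a rough sketch, assuming GNU split, reusing the variables from the loop above, and a ${finalFileSize} such as 5G (--line-bytes still keeps rows intact):
find ${dir}/ -size +0c -type f -print0 | xargs -0 cat | split --line-bytes=${finalFileSize} --numeric-suffixes -e --additional-suffix=.csv - ${unserInputFileName}_
The trailing - tells split to read from stdin, so nothing is written to ${workingDirTemp} at all.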

Viewing the full contents of a file Unix

I want to be able to see all lines of text in a file. Originally I only needed the top of the file and had been using
head -n 50 'filename.txt'
I could just do head -n 1000 as most files contain fewer lines than this, but I would prefer a better alternative.
Have you considered using a text editor? These are often installed by default on *nix systems; vi is usually available.
vi filename
nano filename
or
pico filename

How to get the folder size in BYTES or the smallest unit possible in SOLARIS

Is there any script or command to get the folder size in bytes (or the smallest unit possible) so that every small change to the files in the folder is reflected when checking the folder size in Solaris?
The directory size doesn't change when you add a few bytes to files; files are allocated in fragments/blocks.
Should you want the cumulative size of all files in a directory, you have to compute it yourself. See https://superuser.com/a/603302/19279
Note that this sum doesn't represent the disk space the files are actually using, which is usually larger but can also be smaller depending on various factors.
Edit:
Here is a simplified solution giving the size in bytes:
#!/bin/sh
# Sum the size column (field 5 of `ls -ln`) of every regular file under the
# directory given as $1 (default: the current directory).
find ${1:-.} -type f -exec ls -lnq {} \+ | awk '{sum+=$5} END{print sum}'
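For example, if you save it as dirbytes.sh (a name used here just for illustration):
sh dirbytes.sh /export/home/user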
du -sk foldername is a popular choice; just multiply the result by 1024 for the number of bytes.

Minimize disk usage while doing unix sort

I have a lot of files, say 1000 files of about 4 MB each, 4 GB in total. I would like to sort them using unix sort; here is my command:
sort -t ',' -k 1,1 -k 5,7 -k 22,22 -k 2,2r INPUT_UNSORTED_${current_time}.DAT -o INPUT_SORTED_${current_time}.DAT
where INPUT_UNSORTED is a big file created by appending the 1000 files, so that is another 4 GB. INPUT_SORTED is another 4 GB too.
I also discovered that unix sort uses a temp folder while sorting, and the temp files may reach 4 GB as well.
How can I reduce disk usage without losing performance?
Is your goal to get a single big sorted output file? Take a look at sort's --merge option. You can sort the small input files individually, and then merge them all into the large sorted output. If you delete each unsorted input file immediately after producing its sorted counterpart, you won't use more than 4MB of space on intermediate results.
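A rough sketch of that approach, reusing the sort keys from your command (the file_*.DAT glob is just a stand-in for however the 1000 small files are actually named):
# sort each small file on its own, removing the unsorted copy as soon as its sorted twin exists
for f in file_*.DAT
do
    sort -t ',' -k 1,1 -k 5,7 -k 22,22 -k 2,2r "$f" -o "sorted_$f" && rm "$f"
done
# merge the already-sorted pieces into the final output; merging streams the inputs,
# so it needs no big intermediate files
sort --merge -t ',' -k 1,1 -k 5,7 -k 22,22 -k 2,2r -o INPUT_SORTED.DAT sorted_*.DAT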

How do I split a log file with an offset value in unix?

I have a really big log file (9 GB -- I know I need to fix that) on my box. I need to split it into chunks so I can upload it to Amazon S3 for backup. S3 has a max file size of 5 GB, so I would like to split this into several chunks and then upload each one.
Here is the catch: I only have 5 GB free on my server, so I can't just do a simple unix split. Here is what I want to do:
Grab the first 4 GB of the log file and write it out to a separate file (call it segment1).
Upload segment1 to S3.
rm segment1 to free up space.
Grab the middle 4 GB from the log file and upload it to S3. Clean up as before.
Grab the remaining 1 GB and upload it to S3.
I can't find the right unix command to split with an offset. split only works in equal chunks, and csplit doesn't seem to have what I need either. Any recommendations?
One (convoluted) solution is to compress it first. A textual log file should easily go from 9G to well below 5G, then you delete the original, giving you 9G of free space.
Then you run that compressed file through split so as not to use up more disk space than necessary. What you'll end up with is the compressed file and the three pieces for upload.
Upload them, then delete them, then uncompress the original log.
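Roughly, that sequence looks like this (a sketch; the logpart_ prefix and the 4000m piece size are only illustrative):
gzip -9 biglogfile                      # replaces biglogfile with biglogfile.gz once it finishes, freeing the 9G
split -b 4000m biglogfile.gz logpart_   # cut the compressed file into upload-sized pieces
# upload logpart_*, then remove them
rm logpart_*
gunzip biglogfile.gz                    # restore the original log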
=====
A better solution is to just count the lines (say 3 million) and use an awk script to extract and send the individual parts:
awk 'NR==1,NR==1000000 {print}' biglogfile > bit1
# send and delete bit1
awk 'NR==1000001,NR==2000000 {print}' biglogfile > bit2
# send and delete bit2
awk 'NR==2000001,NR==3000000 {print}' biglogfile > bit3
# send and delete bit3
Then, at the other end, you can either process bit1 through bit3 individually, or recombine them:
mv bit1 whole
cat bit2 >>whole ; rm bit2
cat bit3 >>whole ; rm bit3
And, of course, this splitting can be done with any of the standard text processing tools in Unix: perl, python, awk, head/tail combo. It depends on what you're comfortable with.
First, gzip -9 your log file.
Then, write a small shell script to use dd:
#!/bin/sh
# Split the file named as the first argument into 2 GB chunks with dd,
# copying each chunk to a remote host as it is produced.
chunk_size=$((2048 * 1048576))          # 2 GB in bytes
input_file=$1
len=$(stat -c %s "$input_file")         # file size in bytes (GNU stat syntax)
chunks=$(( len / chunk_size + 1 ))
i=0
while [ "$i" -lt "$chunks" ]
do
    dd if="$input_file" of="$input_file.part" bs=$chunk_size skip=$i count=1
    scp "$input_file.part" servername:path/"$input_file.part.$i"
    i=$(( i + 1 ))
done
I just plopped this in off the top of my head, so I don't know if it will work without modification, but something very similar to this is what you need.
You can use dd. You will need to specify bs (the block size, which is also the memory buffer dd uses), skip (the number of blocks to skip), and count (the number of blocks to copy).
So using a buffer size of 10Meg, you would do:
# For the first 4Gig
dd if=myfile.log bs=10M skip=0 count=400 of=part1.logbit
<upload part1.logbit and remove it>
# For the second 4Gig
dd if=myfile.log bs=10M skip=400 count=400 of=part2.logbit
...
You might also benefit from compressing the data you are going to transfer:
dd if=myfile.log bs=10M skip=800 count=400 | gzip -c > part3.logbit.gz
There may be more friendly methods.
dd has some real shortcomings. If you use a small buffer size, it runs much more slowly. But you can only skip/seek in the file by multiples of bs. So if you want to start reading data from a prime offset, you're in a real fiddle. Anyway I digress.
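If the offset you need isn't a multiple of bs, GNU dd can work around this; a sketch, assuming a reasonably recent GNU coreutils where dd accepts iflag=skip_bytes,count_bytes (check your dd before relying on it):
# read the second 4 GiB of the log, counting skip and count in bytes rather than blocks
dd if=myfile.log iflag=skip_bytes,count_bytes bs=10M skip=4294967296 count=4294967296 of=part2.logbit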
Coreutils split creates equal-sized output sections, except for the last section.
split --bytes=4G bigfile chunks
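If your coreutils is new enough to have split's --filter option, you can also process each chunk as it is produced instead of leaving all of them on disk; a sketch that compresses each piece on the fly (inside the filter command, the FILE environment variable holds the name split would have given the chunk):
split --bytes=4G --filter='gzip > "$FILE.gz"' bigfile chunks
The filter command could just as well be anything that reads the chunk from stdin, such as an upload tool.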
