cut exact time range from mp3/ogg - http

I have a heap of audio files on a CDN. Each file exists both as an MP3 and as an Ogg Vorbis, and each is worth about an hour of playback. I need to extract arbitrary parts of those files: I am given a filename (I can choose whether to use the MP3 or the Ogg version) and two timestamps, and I need exactly the audio between those two time positions. I don't want to download the whole file, so I'm thinking of using the Range HTTP header.
I have complete control over the audio files, so I encoded them at a fixed bitrate, to be able to estimate which bytes I should reach for. However, both formats use frames (pages, in Vorbis's case), which must be decoded atomically.
The program I'm writing is in Perl. I tried downloading the part of the file where I believe the given window to be contained, and then parsing the fragments with Audio::Mad and Ogg::Vorbis::Decoder. However, both seem to fail on the fragments, and only succeed when I feed them a whole file.
So my question is: How can I get an exact span of an audio file without downloading the whole thing?

Answering "cut exact time range from mp3/ogg" - You can check out if the following fits Your needs:
ffmpeg -i InFile -vn -acodec copy -ss 00:00:00 -t 00:01:32 -threads 0 OutFile
where -ss is the start time and -t is the duration. It really does cut, with no re-encoding.
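If the point is also to avoid downloading the whole hour-long file, the Range idea from the question combines with this: fetch only the estimated byte span, then let ffmpeg trim inside the fragment. Below is a minimal sketch for the MP3 case, assuming 128 kbit/s CBR (16,000 bytes per second), a couple of seconds of slack around the estimated positions, and a decoder that resyncs on the next MP3 frame header; the URL, bitrate, and padding are placeholders, and this does not work on a mid-stream Ogg fragment, which lacks the stream headers.
start=600    # window start, in seconds
dur=92       # window length, in seconds
bps=16000    # bytes per second at 128 kbit/s CBR
pad=2        # seconds of slack around the estimated byte positions
from=$(( (start - pad) * bps ))
to=$(( (start + dur + pad) * bps ))
# fetch only the estimated span (plus slack) from the CDN
curl -s -r ${from}-${to} http://cdn.example.com/audio.mp3 -o fragment.mp3
# the fragment's local time zero is roughly (start - pad), so cut pad..pad+dur
ffmpeg -i fragment.mp3 -vn -acodec copy -ss ${pad} -t ${dur} out.mp3
With -acodec copy the cut lands on frame boundaries (about 26 ms per MP3 frame), which is usually close enough to "exact".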

Related

unix command to search contents from one file in a bzip file

I have file1.txt with 100 entries. I need to search for the contents of file1.txt in file2.bz2, which is a large bzip file.
bzgrep -f file1.txt file2.bz2 takes a long time.
There's not much you can do: the file is compressed, and the only way to search it is to decompress it.
One possible workaround is to keep an uncompressed version of the file.
You can do a lot, but it's a truly excessive amount of work.
bzip2 files are composed of chunks. You can cut the file up by chunks, full-text index each one, and save the indexes. If you have some idea of the keywords you can filter your indexes; otherwise you get a full index of all the text, which tends to be something like 10-100 times the size of the original uncompressed document.
You can make this work if the words to be indexed occur only in certain places, or you can limit the number of words to be indexed, and searches are much more frequent than new documents.
Idea blatantly stolen from here: https://www.thanassis.space/buildWikipediaOffline.html
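A lighter variant of the same chunking idea, skipping the index entirely: bzip2recover (shipped with bzip2) splits the archive into its individual compressed blocks, which can then be searched in parallel. A hedged sketch, assuming GNU xargs for -P and that bzgrep passes -F through to grep; the block file names follow bzip2recover's recNNNNN pattern.
bzip2recover file2.bz2     # writes rec00001file2.bz2, rec00002file2.bz2, ...
ls rec*file2.bz2 | xargs -n 1 -P 4 bzgrep -F -f file1.txt
This still decompresses everything; it only spreads the work across cores, whereas the index route above is what avoids repeated decompression.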

Gtar in Solaris, tar to multiple splits based on size

I am using a gtar command that creates a 10 GB file on a pen drive. I researched this a little and learned that the FAT32 file system supports a maximum file size of 4 GB. How can I make the running gtar command create multiple files, splitting them so each is less than 4 GB?
gtar should detect that the file it is creating is about to exceed 4 GB, stop writing that file, and continue with the next one.
I know we could create the 10 GB file in one location and split that static file afterwards, but we do not want that.
You can use the split command like this:
cd /path/to/flash
tar cvf - /path/of/source/files | split -b 4000m
Each piece is 4000 MiB, which stays under the FAT32 limit; reassemble later with cat x* | tar xvf -.
Check the man page for these options.
[--tape-length=NUMBER] [--multi-volume]
Device selection and switching:
-f, --file=ARCHIVE
use archive file or device ARCHIVE
--force-local
archive file is local even if it has a colon
-F, --info-script=NAME, --new-volume-script=NAME
run script at end of each tape (implies -M)
-L, --tape-length=NUMBER
change tape after writing NUMBER x 1024 bytes
-M, --multi-volume
create/list/extract multi-volume archive
--rmt-command=COMMAND
use given rmt COMMAND instead of rmt
--rsh-command=COMMAND
use remote COMMAND instead of rsh
--volno-file=FILE
use/update the volume number in FILE
Device blocking:
-b, --blocking-factor=BLOCKS
BLOCKS x 512 bytes per record
-B, --read-full-records
reblock as we read (for 4.2BSD pipes)
-i, --ignore-zeros
ignore zeroed blocks in archive (means EOF)
--record-size=NUMBER
NUMBER of bytes per record, multiple of 512
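Putting --multi-volume and --tape-length together, here is a hedged sketch of writing the archive directly as volumes that stay under the FAT32 limit; -L counts units of 1024 bytes, so 3900000 is roughly 3.7 GiB per volume, the paths and volume names are placeholders, and it relies on GNU tar accepting several -f options in multi-volume mode and using them in turn (otherwise it prompts for each new volume).
gtar -c -M -L 3900000 \
     -f /path/to/flash/backup.tar-1 \
     -f /path/to/flash/backup.tar-2 \
     -f /path/to/flash/backup.tar-3 \
     /path/of/source/files
Extraction needs the same -M with the volume files listed in the same order.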

Ring buffer log file on unix

I'm trying to come up with a unix pipeline of commands that will allow me to log only the most recent n lines of a program's output to a text file.
The text file should never be more than n lines long (it may be shorter while the file is first filling up).
It will be run on a device with limited memory/resources, so keeping the filesize small is a priority.
I've tried stuff like this (n=500):
program_spitting_out_text > output.txt
cat output.txt | tail -500 > recent_output.txt
rm output.txt
or
program_spitting_out_text | tee output.txt | tail -500 > recent_output.txt
Obviously neither works for my purposes...
Anyone have a good way to do this in a one-liner? Or will I have to write a script/utility?
Note: I don't want anything to do with dmesg and must use standard BSD unix commands. program_spitting_out_text prints about 60 lines per second, continuously.
Thanks in advance!
If program_spitting_out_text runs continuously and keeps its file open, there's not a lot you can do.
Even deleting the file won't help since it will still continue to write to the now "hidden" file (data still exists but there is no directory entry for it) until it closes it, at which point it will be really removed.
If it closes and reopens the log file periodically (every line or every ten seconds or whatever), then you have a relatively easy option.
Simply monitor the file until it reaches a certain size, then roll the file over, something like:
while true; do
    sleep 5
    lines=$(wc -l <file.log)
    if [[ $lines -ge 5000 ]]; then
        rm -f file2.log
        mv file.log file2.log
        touch file.log
    fi
done
This script will check the file every five seconds and, if it's 5000 lines or more, will move it to a backup file. The program writing to it will continue to write to that backup file (since it has the open handle to it) until it closes it, then it will re-open the new file.
This means you will always have (roughly) between five and ten thousand lines in the log file set, and you can search them with commands that combine the two:
grep ERROR file2.log file.log
Another possibility is if you can restart the program periodically without affecting its function. By way of example, a program which looks for the existence of a file once a second and reports on that, can probably be restarted without a problem. One calculating PI to a hundred billion significant digits will probably not be restartable without impact.
If it is restartable, then you can basically do the same trick as above. When the log file reaches a certain size, kill off the current program (which you will have started as a background task from your script), do whatever magic you need in rolling over the log files, then restart the program.
For example, consider the following (restartable) program prog.sh which just continuously outputs the current date and time:
#!/usr/bin/bash
while true; do
    date
done
Then, the following script will be responsible for starting and stopping the other script as needed, by checking the log file every five seconds to see if it has exceeded its limits:
#!/usr/bin/bash
exe=./prog.sh
log1=prog.log
maxsz=500
pid=-1
touch ${log1}
log2=${log1}-prev
while true; do
    if [[ ${pid} -eq -1 ]]; then
        lines=${maxsz}
    else
        lines=$(wc -l <${log1})
    fi
    if [[ ${lines} -ge ${maxsz} ]]; then
        if [[ $pid -ge 0 ]]; then
            kill $pid >/dev/null 2>&1
        fi
        sleep 1
        rm -f ${log2}
        mv ${log1} ${log2}
        touch ${log1}
        ${exe} >> ${log1} &
        pid=$!
    fi
    sleep 5
done
And this output (from an every-second wc -l on the two log files) shows what happens at the time of switchover, noting that it's approximate only, due to the delays involved in switching:
474 prog.log 0 prog.log-prev
496 prog.log 0 prog.log-prev
518 prog.log 0 prog.log-prev
539 prog.log 0 prog.log-prev
542 prog.log 0 prog.log-prev
21 prog.log 542 prog.log-prev
Now keep in mind that's a sample script. It's relatively intelligent but probably needs some error handling so that it doesn't leave the executable running if you shut down the monitor.
And, finally, if none of that suffices, there's nothing stopping you from writing your own filter program which takes standard input and continuously outputs that to a real ring buffer file.
Then you would simply do:
program_spitting_out_text | ringbuffer 4096 last4k.log
That program could be a true ring buffer in that it treats the 4k file as a circular character buffer but, of course, you'll need a special marker in the file to indicate the write-point, along with a program that can turn it back into a real stream.
Or, it could do much the same as the scripts above, rewriting the file so that it's always below the size desired.
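For what it's worth, here is a minimal sketch of that second, simpler kind of filter in plain shell. It is not a true circular buffer; it appends and periodically trims the file back to its last N lines. The script name, the trim interval, and the use of a temporary file are all my own choices here.
#!/bin/sh
# keeplast.sh N FILE - append stdin to FILE, trimming it back to its
# last N lines after every 100 input lines.
n=$1
f=$2
i=0
while IFS= read -r line; do
    printf '%s\n' "$line" >> "$f"
    i=$((i + 1))
    if [ "$i" -ge 100 ]; then
        tail -n "$n" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        i=0
    fi
done
You would run it as program_spitting_out_text | ./keeplast.sh 500 recent_output.txt; the file can briefly grow to N plus the trim interval before being cut back.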
Since apparently this basic feature (a circular file) does not exist on GNU/Linux, and because I needed it to track logs on my Raspberry Pi with limited storage, I just wrote the code as suggested above!
Behold: circFS
Unlike the other tools mentioned in this post and similar ones, the maximum size is arbitrary and limited only by the actual available storage.
It does not rotate across several files; everything is kept in a single file, which is rewritten on "release".
You can have as many log files as needed in the virtual directory.
It is a single C file (~600 lines including comments), and it builds with a single compile line after installing the FUSE development dependencies.
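The exact command is in the project's README, but a typical single compile line for a FUSE driver of this kind looks something like the following (the source file name and FUSE version are guesses on my part):
gcc -Wall circfs.c $(pkg-config fuse --cflags --libs) -o circfs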
This first version is very basic (see the README); if you want to improve it with some of the TODOs (see the TODO list), pull requests are welcome.
As a joke, this is my first "write only" fuse driver! :-)

Minimize disk usage while doing unix sort

I have a lot of files, say 1000 files of 4 MB each, 4 GB in total. I would like to sort them using unix sort; here is my command:
sort -t ',' -k 1,1 -k 5,7 -k 22,22 -k 2,2r INPUT_UNSORTED_${current_time}.DAT -o INPUT_SORTED_${current_time}.DAT
where INPUT_UNSORTED is a big file created by appending the 1000 files, so that's another 4 GB. INPUT_SORTED is another 4 GB too.
And I discovered that unix sort uses a temp folder to sort the files, and the temp files may reach 4 GB as well.
How can I reduce disk usage without losing performance?
Is your goal to get a single big sorted output file? Take a look at sort's --merge option. You can sort the small input files individually, and then merge them all into the large sorted output. If you delete each unsorted input file immediately after producing its sorted counterpart, you won't use more than 4MB of space on intermediate results.
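A hedged sketch of that flow, reusing the key options from the question; the INPUT_PART_* names stand in for the original 1000 small files.
# sort each small file on its own, deleting each unsorted original as soon
# as its sorted counterpart exists
for f in INPUT_PART_*.DAT; do
    sort -t ',' -k 1,1 -k 5,7 -k 22,22 -k 2,2r "$f" -o "$f.sorted" && rm "$f"
done
# merge the already-sorted pieces into the final output
sort -m -t ',' -k 1,1 -k 5,7 -k 22,22 -k 2,2r INPUT_PART_*.DAT.sorted -o INPUT_SORTED.DAT
This sidesteps both the appended 4 GB input file and most of sort's temporary space, since -m simply streams the pre-sorted pieces.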

How do I split a log file with an offset value in unix?

I have a really big log file (9GB -- I know I need to fix that) on my box. I need to split into chunks so I can upload it to amazon S3 for backup. S3 has a max file size of 5GB. So I would like to split this into several chunks and then upload each one.
Here is the catch, I only have 5GB on my server free so I can't just do a simple unix split. Here is what I want to do:
grab the first 4GB of the log file and spit it out into a separate file (call it segment 1)
Upload that segment1 to s3.
rm segment1 to free up space.
grab the middle 4GB from the log file and upload it to S3. Clean up as before.
Grab the remaining 1GB and upload to S3.
I can't find the right unix command to split with an offset. Split only does things in equal chunks and csplit doesn't seem to have what I need either. Any recommendations?
One (convoluted) solution is to compress it first. A textual log file should easily go from 9 GB to well below 5 GB; then you delete the original, giving you 9 GB of free space.
Then you pipe that compressed file directly through split so as not to use up more disk space. What you'll end up with is a compressed file and the three files for upload.
Upload them, then delete them, then uncompress the original log.
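A hedged sketch of that route; the file names and the 4000 MiB piece size are placeholders.
gzip -9 biglogfile                       # leaves biglogfile.gz and removes the 9 GB original
split -b 4000m biglogfile.gz logpart.    # pieces logpart.aa, logpart.ab, ... each under 5 GB
# upload logpart.*, delete them, then restore the log with: gunzip biglogfile.gz
At the other end, cat logpart.* > biglogfile.gz followed by gunzip rebuilds the original.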
=====
A better solution is to just count the lines (say 3 million) and use an awk script to extract and send the individual parts:
awk 'NR==1,NR==1000000 {print}' biglogfile > bit1
# send and delete bit1
awk 'NR==1000001,NR==2000000 {print}' biglogfile > bit2
# send and delete bit2
awk 'NR==2000001,NR==3000000 {print}' biglogfile > bit3
# send and delete bit3
Then, at the other end, you can either process bit1 through bit3 individually, or recombine them:
mv bit1 whole
cat bit2 >>whole ; rm bit2
cat bit3 >>whole ; rm bit3
And, of course, this splitting can be done with any of the standard text processing tools in Unix: perl, python, awk, head/tail combo. It depends on what you're comfortable with.
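For example, the head/tail combo mentioned above produces the same three pieces as the awk version (the line counts are placeholders):
head -n 1000000 biglogfile                   > bit1
head -n 2000000 biglogfile | tail -n 1000000 > bit2
tail -n +2000001 biglogfile                  > bit3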
First, gzip -9 your log file.
Then, write a small shell script to use dd:
#!/usr/bin/env sh
chunk_size=$((2048 * 1048576))      # 2 GiB per chunk, in bytes
input_file=$1
len=$(stat -c '%s' "$input_file")   # GNU stat; use stat -f '%z' on BSD
chunks=$(( len / chunk_size + 1 ))
i=0
while [ "$i" -lt "$chunks" ]; do
    dd if="$input_file" skip="$i" of="$input_file.part" count=1 bs="$chunk_size"
    scp "$input_file.part" servername:path/"$input_file.part.$i"
    i=$(( i + 1 ))
done
I just plopped this in off the top of my head, so I don't know if it will work without modification, but something very similar to this is what you need.
You can use dd. You will need to specify bs (the memory buffer size), skip (the number of buffers to skip), and count (the number of buffers to copy) in each block.
So using a buffer size of 10Meg, you would do:
# For the first 4Gig
dd if=myfile.log bs=10M skip=0 count=400 of=part1.logbit
<upload part1.logbit and remove it>
# For the second 4Gig
dd if=myfile.log bs=10M skip=400 count=400 of=part2.logbit
...
You might also benefit from compressing the data you are going to transfer:
dd if=myfile.log bs=10M skip=800 count=400 | gzip -c > part3.logbit.gz
There may be more friendly methods.
dd has some real shortcomings. If you use a small buffer size, it runs much more slowly. But you can only skip/seek in the file by multiples of bs. So if you want to start reading data from a prime offset, you're in a real fiddle. Anyway I digress.
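One way around the multiple-of-bs restriction, if you have a reasonably recent GNU dd, is iflag=skip_bytes, which makes skip count bytes instead of blocks (the byte offset below is an arbitrary example):
dd if=myfile.log bs=10M iflag=skip_bytes skip=4000000001 count=400 of=part.logbit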
Coreutils split creates equal-sized output sections, except for the last one.
split --bytes=4G bigfile chunks
