I am using a Vicidial-based Asterisk server, version 1.2. I want my Asterisk server to record incoming and outgoing calls in MP3 format rather than the default WAV format.
Please help.
Install sox and libsox-fmt-mp3; on Debian/Ubuntu:
apt-get install sox libsox-fmt-mp3
Put the command below in your crontab:
nice find /tmp -iname "*.wav" -type f -exec bash \
-c 'WAV="$1"; MP3="${WAV%.wav}.mp3"; sox -r 8000 -c 1 "$WAV" "$MP3"' _ {} \;
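As written, every run of the cron job re-converts any WAV files that are still present, so you may also want to delete (or move) each WAV once its conversion succeeds. A possible variant, assuming the recordings really do live under /tmp as above:
nice find /tmp -iname "*.wav" -type f -exec bash \
-c 'WAV="$1"; MP3="${WAV%.wav}.mp3"; sox -r 8000 -c 1 "$WAV" "$MP3" && rm "$WAV"' _ {} \;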
My solution for FreeBSD: run the script once, then set up its periodic launch from the crontab:
#!/bin/sh
find /usr/asterisk/recording -name '*.wav' -type f -mmin +180 | while read -r filename; do
nice -19 lame -h -v -b 32 "$filename" "${filename%.wav}.mp3" && touch -r "$filename" "${filename%.wav}.mp3" && rm "$filename"
done
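A crontab entry for the periodic launch could look something like this (the script path and the half-hourly schedule are just examples):
# convert finished recordings to MP3 every 30 minutes
*/30 * * * * /usr/local/scripts/convert_recordings.sh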
I wrote a script in R that has several arguments. I want to iterate over 20 directories and execute my script on each while passing in a substring from the file path as my -n argument using sed. I ran the following:
find . -name 'xray_data' -exec sh -c 'Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f {} -b "{}/SEM_images" -c "{}/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "`sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/' "{}"`"' sh {} \;
which results in this error:
ubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f {} -b "{}/SEM_images" -c "{}/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "`sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/' "{}"`"' sh {} \;
sh: command substitution: line 0: syntax error near unexpected token `('
sh: command substitution: line 0: `sed -e s/.*DeMMO.*[/](.*)_.*[/]xray_data/1/ "./DeMMO1/D1T3rep_Dec2019_Ellison/xray_data"'
When I try to use sed with my pattern on an example file path, it works:
echo "./DeMMO1/D1T1exp_Dec2019_Poorman/xray_data" | sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/'
which produces the correct substring:
D1T1exp_Dec2019
I think there's an issue with trying to use single quotes inside the interpreted string, but I don't know how to deal with this. I have tried replacing the single quotes around the sed pattern with double quotes, as well as removing the single quotes; both result in this error:
sed: RE error: illegal byte sequence
How should I extract the substring from the file path dynamically in this case?
Here is a safe way to loop through the output of find:
while IFS= read -ru "$fd" -d '' files; do
  echo "$files"  ## do whatever you want to do with the files here.
done {fd}< <(find . -type f -name 'xray_data' -print0)
This avoids embedding commands inside quotes.
It uses a dynamically allocated fd just in case something inside the loop is eating/slurping stdin.
Also, -print0 delimits the files with null bytes, so it should be safe enough to handle spaces, tabs, and newlines in the paths and file names.
A good start is to always put an echo in front of every command you want to run on the files, so you have an idea of what's going to be executed, just in case...
This is the solution that ultimately worked for me due to issues with quotes in sed:
for dir in `find . -name 'xray_data'`;
do sampleID="`basename $(dirname $dir) | cut -f1 -d'_'`";
Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f "$dir" -b "$dir/SEM_images" -c "$dir/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "$sampleID";
done
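If this ever needs to cope with spaces in the paths, the same idea can be combined with the null-delimited loop from the other answer; a rough sketch, assuming the xray_data entries are directories and reusing the Rscript flags from above:
while IFS= read -r -d '' dir; do
  sampleID="$(basename "$(dirname "$dir")" | cut -f1 -d'_')"
  Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R \
    -f "$dir" -b "$dir/SEM_images" -c "$dir/../coordinates.txt" \
    -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" \
    --overview "overview.*tif" -p FALSE -n "$sampleID"
done < <(find . -type d -name 'xray_data' -print0)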
I want to know if ffmpeg has built-in encryption. I have grabbed the frames from the camera and now I encode the video from these frames using ffmpeg.
Is it possible to encrypt the frames (AES), just like we specify the encoding format?
Yes, ffmpeg supports AES.
You can create encrypted HLS segments for example using:
ffmpeg -i <input> -hls_time 10 -hls_key_info_file key_info playlist.m3u8
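The key_info file named on the command line is a small text file: its first line is the key URI that gets written into the playlist, the second line is the path to the key file on disk, and an optional third line is a hex IV. A sketch with placeholder names:
# generate a 16-byte key
openssl rand 16 > enc.key
# build the key-info file: URI, local key path, optional IV
cat > key_info <<EOF
https://example.com/enc.key
enc.key
$(openssl rand -hex 16)
EOF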
The same lib is also used for SRTP and possibly other formats.
If you want to encrypt just the I-frames you'll most likely need to write a custom program using the ffmpeg libs.
Yes, AES encryption is supported.
#!/bin/bash
# usage: ./shellname.sh NameOfTheFolder
mkdir -p /opt/FFMPEG/rawContent/
cd /opt/FFMPEG/rawContent/
mkdir -p "/opt/FFMPEG/$1/processed/"
mkdir -p "/opt/FFMPEG/$1/encrypted/"
var=$(ls | grep -i 'mp4')
for z in ${var}
do
cd "/opt/FFMPEG/$1/encrypted/"
fname=$(echo "${z}" | awk -F "." '{print $1}')
# generate a 16-byte AES key and the key-info file ffmpeg expects:
# line 1: key URI for the playlist, line 2: local key path, line 3: IV
openssl rand 16 > "${fname}.key"
BASE_URL="${fname}.key"
echo "$BASE_URL" > "${fname}.keyinfo"
echo "${fname}.key" >> "${fname}.keyinfo"
echo "$(openssl rand -hex 16)" >> "${fname}.keyinfo"
sleep 1
# segment the source MP4 into AES-128 encrypted HLS
ffmpeg -i "/opt/FFMPEG/rawContent/${fname}.mp4" -profile:v baseline -level 4.0 -start_number 0 -hls_time 10 -hls_list_size 0 -hls_key_info_file "${fname}.keyinfo" -f hls "${fname}.m3u8"
# archive the processed source file
mv "/opt/FFMPEG/rawContent/${fname}.mp4" "/opt/FFMPEG/$1/processed/"
done
cd -
Run the above shell script as:
./shellname.sh NameOfTheFolder
(for example: ./test.sh TestOne)
The Unix commands below are used to get the list of files modified in the last 30 minutes, and they work perfectly:
touch -t 02231249.00 /tmp/last30min
find /mydirectory -type f -newer /tmp/last30min
rm /tmp/last30min
Can someone please provide the commands to gzip those files and move them to the home or tmp directory?
Thanks for your help!
Pipe the file names from your find command, separated by null characters (important if your file names include whitespace), to xargs to do the job:
find /mydirectory -type f -newer /tmp/last30min -print0 | xargs -0 -I{} sh -c 'gzip "$1" && mv "$1.gz" ~' _ {}
Here -I{} tells xargs to substitute each file name found by find for the {}, which is handed to sh as a positional parameter rather than spliced into the command string.
If you are using the Z shell (zsh), it's much simpler; everything can be done in a one-liner:
for i (/mydirectory/**/*(mm-30)) { gzip $i && mv $i.gz ~ }
Here ** searches recursively, and (mm-30) means modified in the last (-) 30 minutes.
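To preview which files the glob will match before gzipping anything, something like this may help (the extra . qualifier restricts the match to plain files):
print -l /mydirectory/**/*(.mm-30)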
Your touch command doesn't work correctly: I checked the time stamps, and in my timezone they correspond to the following date:
$ touch -t 02231249.00 /tmp/last30min
$ perl -e'print scalar localtime((stat("/tmp/last30min"))[9])'
Sat Feb 23 12:49:00 2013
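That is a fixed February date, not a point 30 minutes in the past. If GNU touch is available, a reference file that really is 30 minutes old can be created with, for example:
touch -d '30 minutes ago' /tmp/last30min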
I think this command will do what you are asking for
for f in $(find . -type f -mmin -30 -print); do echo "$f"; gzip -c "$f" > "$HOME/$(basename "$f").gz"; done
I'm currently running successful mysql backups, and, on some sites, I'm deleting files older than 7 days using this command:
find /path/to/file -mtime +7 -exec rm -f {} \;
What I'd like to do, because I'm paranoid and would still like some archived information, is delete files older than 31 days, but maintain at least one file from each previous month, perhaps spare any file that was created on the 1st of the month.
Any ideas?
You can also write a script containing something like this, using xargs:
find /path/to/files -mtime +7 | xargs -I{} rm {}
then add the script to your cron job.
The grep is almost right; it only has one space too many. This works (at least for me; I use Debian):
rm `find /path/to/file -type f -mtime +7 -exec ls -l {} + | grep -v ' [A-S][a-z][a-z] 1 ' | sed -e 's:.* /path/to/file:/path/to/file:g'`
You can create a file with these commands:
SRC_DIR=/home/USB-Drive
TIME_STAMP=$(/bin/date --date 'now' +%s)
TIME_CAL=$((TIME_STAMP - 2592000 + 25200)) # 2592000 s = 30 days; 25200 s compensates for my GMT+7 timezone
TIME_LAST=$(/bin/date --date "1970-01-01 $TIME_CAL sec" "+%Y%m%d%H%M")
/bin/touch -t ${TIME_LAST} /tmp/lastmonth
/usr/bin/find -P -H ${SRC_DIR} ! -newer /tmp/lastmonth -type d -exec rm -r {} \;
You can modify the last command based on what you want to delete; in this case I want to delete sub-folders in SRC_DIR whose time attribute is more than 1 month old.
Kind of ugly, but you can try parsing the output of ls -l
rm `find /path/to/file -type f -mtime +7 -exec ls -l {} + | grep -v ' [A-S][a-z][a-z] 1 ' | sed -e 's:.* /path/to/file:/path/to/file:g'`
Or write a script to get the list then run rm on them one at a time.
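If the goal is specifically to delete anything older than 31 days except files last modified on the 1st of the month, a sketch along these lines (assuming GNU date, which accepts -r FILE) avoids parsing ls:
find /path/to/file -type f -mtime +31 -print | while read -r f; do
    # keep anything whose modification date falls on the 1st of a month
    [ "$(date -r "$f" +%d)" = "01" ] || rm -f "$f"
done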
G'day,
I need to see if a specific file is more than 58 minutes old from a sh shell script. I'm talking straight vanilla Solaris shell with some POSIX extensions it seems.
I've thought of doing a
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
where the timestamp is 58 minutes ago and then doing a
find ./logs_dir \! -newer /var/tmp/toto -print
We need to postprocess some log files that have been retrieved from various servers using mirror. Waiting for the files to be stable is the way this team decides if the mirror is finished and hence that day's logs are now complete and ready for processing.
Any suggestions gratefully received.
cheers,
I needed something to test the age of a specific file, so as not to re-download it too often. Using GNU date and bash:
# if the file's modtime hour is less than the current hour:
[[ $(date +%k -r GPW/mstall.zip) -lt $(date +%k) ]] && \
wget -S -N \
http://bossa.pl/pub/metastock/mstock/mstall.zip
Update: this version works much better for me, and is more accurate and understandable:
[[ $(date +%s -r mstall.zip) -lt $(date +%s --date="77 min ago") ]] && echo File is older than 1hr 17min
The BSD variant (tested on a Mac) is:
[[ $(stat -f "%m" mstall.zip) -lt $(date -j -v-77M +%s) ]] && echo File is older than 1hr 17min
With BSD find you can use different time units in the -mtime argument, for example:
find . -mtime +0h55m
will return any files whose modification time is more than 55 minutes ago. (GNU find doesn't accept these unit suffixes; there you'd use -mmin +55 instead.)
This is now an old question, sorry, but for the sake of others searching for a good solution as I was...
The best method I can think of is to use the find(1) command, which is the only Un*x command I know of that can directly test file age:
if [ "$(find "$file" -mmin +58)" != "" ]
then
... regenerate the file ...
fi
The other option is to use the stat(1) command to return the file's modification time in seconds since the epoch and the date command to return the current time in the same units. Combined with bash shell arithmetic, working out the age of the file becomes quite easy:
age=$(stat -c %Y "$file")
now=$(date +"%s")
if (( (now - age) > (58 * 60) ))
then
... regenerate the file ...
fi
You could do the above without the two variables, but they make things clearer, as does use of bash math (which could also be replaced). I've used the find(1) method quite extensively in scripts over the years and recommend it unless you actually need to know age in seconds.
A piece of the puzzle might be using stat. You can pass -r or -s to get a parseable representation of all file metadata.
find . -print -exec stat -r '{}' \;
AFAICR, the 10th column will show the mtime.
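For example, pulling that field out with awk and comparing it against "now minus 58 minutes" could look like this rough sketch (BSD stat assumed):
mtime=$(stat -r /path/to/file | awk '{print $10}')
now=$(date +%s)
[ $((now - mtime)) -gt $((58 * 60)) ] && echo "file is older than 58 minutes"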
Since you're looking to test the time of a specific file, you can start by using test and comparing it to your specially created file:
test /path/to/file -nt /var/tmp/toto
or:
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
if [ /path/to/file -nt /var/tmp/toto ]
...
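Putting those two pieces together, a rough sketch of the 58-minute check (this assumes GNU date for the relative timestamp; with plain Solaris date you would have to build the YYYYMMDDHHmm stamp yourself):
touch -t "$(date -d '58 minutes ago' +%Y%m%d%H%M.%S)" /var/tmp/toto
if [ /path/to/file -nt /var/tmp/toto ]; then
    echo "file changed within the last 58 minutes"
else
    echo "file is at least 58 minutes old"
fi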
You can use ls and awk to get what you need as well. Awk has a C-ish printf that will allow you to format the columns any way you want.
I tested this in bash on Linux and ksh on Solaris.
Fiddle with the ls options to get the best values for your application, especially --full-time (GNU ls on Linux) and -E (Solaris ls).
bash
ls -l foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37
ls --full-time foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37:51.211982332
ksh
ls -l bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
May 3 11:19
ls -E bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
2011-05-03 11:19:23.723044000 -0400