Asterisk: use say with new mode

How do I use the new mode with say.conf?
I put same => n,Playback(num:55|say) in my dialplan, but I get the following errors:
[Aug 24 15:12:28] WARNING[2054461][C-000001a6]: file.c:789 ast_openstream_full: File num:55|say does not exist in any format
[Aug 24 15:12:28] WARNING[2054461][C-000001a6]: file.c:1262 ast_streamfile: Unable to open num:55|say (format (ulaw)): No such file or directory
I took the example from here https://www.voip-info.org/asterisk-config-sayconf/
How can i get it to work?

You are using '|', which was the delimiter before version 1.4 (a very, very long time ago).
All modern versions use a comma (',') instead.
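Applied to the dialplan line from the question, that becomes:
same => n,Playback(num:55,say)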

Looping through the content of a file in Zsh

I'm trying to loop through the contents of a file in zsh. In my loop I want to get user input. Going off of this answer for Bash, I'm attempting to do:
while read -u 10 line; do
echo $line;
# TODO read from stdin here, etc.
done 10<myfile.txt
However I get an error:
zsh: parse error near `10'
The error refers to the 10 after done. Obviously I'm not getting the file descriptor syntax right, but I'm having trouble figuring it out from the docs.
Use a file descriptor number less than 10. If you want to hard-code file descriptor numbers, stick to the range 3-9 (0-2 being stdin, stdout, and stderr). When zsh needs file descriptors for its own internal use, it takes them from the 10+ range.
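So the loop from the question works as-is once the descriptor is moved into that range:
while read -u 3 line; do
echo $line;
# TODO read from stdin here, etc.
done 3<myfile.txt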
If you're even getting close to needing more than the 7 available hard-coded file descriptors, you should really think about using variables to name them. Syntax like exec {myfd}<myfile.txt will open the file with zsh allocating a file descriptor (10 or greater) and assigning it to $myfd.
Bourne shell syntax is ambiguous for file descriptors numbered 10 and over, and even in bash I'd advise against using them. I'm not entirely sure how bash avoids conflicts if it needs to open any for internal use - I guess it just never leaves any open. This may look like a zsh limitation at first sight, but it is actually a sensible feature.
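Putting that together, a named-descriptor version of the loop (a minimal sketch; zsh stores the fd it allocates in $myfd):
exec {myfd}<myfile.txt
while read -u $myfd line; do
echo $line;
done
exec {myfd}<&-    # close the descriptor when finished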

View JSON file in Midnight Commander using jq

So there is this awesome tool for working with JSON data called jq.
And there is this awesome Linux file manager called mc.
One day (today) I hit upon the idea of integrating the two, so I could easily preview JSON files in a pretty, formatted way using the F3 keyboard shortcut in Midnight Commander.
I opened the MC extension file via the Command → Edit extension file menu and added the following to it:
# json
regex/\.json$
View=%view{ascii} jq < %f
I thought it was straightforward, but unexpectedly it does not work: trying to view the JSON (F3) results in an error popup containing jq's help page (the same as when you run jq by itself), starting with: "jq - commandline JSON processor [version 1.5]..."
Can anybody tell me why this configuration is incorrect?
Two minutes after I submitted my question, the answer revealed itself.
I thought that maybe jq does not produce standard output... That led me to this question: How to use jq in a shell pipeline? and so I modified the extension file to look like:
# json
regex/\.json$
View=%view{ascii} jq '.' < %f
And now it works as expected, piping the result of jq into the internal mc viewer.
Thank you, me ;)
You don't have to use the redirection < here; you can use just the plain filename %f:
# json
regex/\.json$
View=%view{ascii} jq '.' %f
and, as you mentioned, you have to supply a filter; the simple identity filter . is enough.
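You can see the same behaviour outside mc; with a stand-in file.json:
jq < file.json        # no filter argument: jq just prints its usage/help text
jq '.' file.json      # identity filter: pretty-prints the JSON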
For anyone wondering why this no longer works: in version 4.8.29, MC switched from mc.ext to the new mc.ext.ini file, which has a slightly different syntax. The new entry should look like this:
[JSON]
Regex=\.json$
View=%view{ascii} jq '.' < %f
The [JSON] line is necessary.

cel_pgsql Inserting duplicate columns

The log shows duplicated columns, so the INSERT will fail.
[Aug 29 08:14:42] DEBUG[8683] cel_pgsql.c: Inserting a CEL record: [INSERT INTO cel ("id","eventtype","eventtime","userdeftype","cid_name","cid_num","cid_ani","cid_rdnis","cid_dnid","exten","context","channame","appname","appdata","amaflags","accountcode","peeraccount","uniqueid","linkedid","userfield","peer","id","eventtype","eventtime","userdeftype","cid_name","cid_num","cid_ani","cid_rdnis","cid_dnid","exten","context","channame","appname","appdata","amaflags","accountcode","peeraccount","uniqueid","linkedid","userfield","peer") VALUES (DEFAULT,'CHAN_END','2017-08-29 08:14:42.167195','','9004','9004','9004','','9001','9001','public','SIP/9004-00000008','','',3,'','','1503994474.12','1503994474.12','','',DEFAULT,'CHAN_END','2017-08-29 08:14:42.167195','','9004','9004','9004','','9001','9001','public','SIP/9004-00000008','','',3,'','','1503994474.12','1503994474.12','','')].
alp-test*CLI> core show version
Asterisk 14.4.1 built by buildozer @ build-3-6-x86_64 on a x86_64 running Linux on 2017-05-22 06:13:12 UTC
There has been no update to cel_pgsql since 2015.
cel_pgsql.conf
[global]
hostname=127.0.0.1
port=5432
dbname=ast
password=ast
user=ast
table=cel
cel.conf
[general]
enable=yes
apps=dial,park
events=APP_START,CHAN_START,CHAN_END,ANSWER,HANGUP,BRIDGE_ENTER,BRIDGE_EXIT
It turned out I was loading cel_pgsql.so twice; after removing the preload => cel_pgsql.so line, everything is fine.
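In other words, every CEL column was registered twice because the module was loaded twice. A sketch of the likely modules.conf situation (the autoload line is an assumption):
[modules]
autoload=yes              ; already pulls in cel_pgsql.so
preload => cel_pgsql.so   ; loading it a second time duplicated the columns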

Unix SQLLDR script gives 'unexpected end of file' error

All, I am running the following script to load data onto the Oracle server using a Unix box and sqlldr. Earlier it gave me an error saying sqlldr: command not found. I added the "sqlplus << EOF" heredoc, but it still gives me an 'unexpected end of file' syntax error on line 12, even though the script is only 11 lines of code. What do you think the problem is?
#!/bin/bash
FILES=`ls *.txt`
CTL='/blah/blah1/blah2/name/filename.ctl'
for f in $FILES
do
cat $CTL | sed "s/:FILE/$f/g" >$f.ctl
sqlplus ID/'PASSWORD'@SERVERNAME << EOF
sqlldr SCHEMA_NAME/SCHEMA_PASSWORD control=$f.ctl data=$f
EOF
done
sqlplus will never know what to do with the command sqlldr. They are two complementary command-line utilities for interfacing with an Oracle DB.
Note that NO sqlplus or EOF etc. is required to load data into a schema:
#!/bin/bash
#you don't want this: FILES=`ls *.txt`
CTL_PATH='/blah/blah1/blah2/name'
CTL_FILE="$CTL_PATH/filename.ctl"
SCHEMA_NM=SCHEMA_NAME
SCHEMA_PSWD=SCHEMA_PASSWORD
SERVER_NAME=SERVERNAME   # hypothetical: fill in the server from your sqlplus line
for f in *.txt
do
# don't need cat! cat $CTL | sed "s/:FILE/$f/g" >"$f".ctl
sed "s/:FILE/$f/g" "$CTL_FILE" > "$CTL_PATH/$f.ctl"
#myBad sqlldr "$SCHEMA_NAME/$SCHEMA_PASSWORD" control="$CTL_PATH/$f.ctl" data="$f"
sqlldr "$SCHEMA_NM/$SCHEMA_PSWD@$SERVER_NAME" control="$CTL_PATH/$f.ctl" data="$f" rows=10000 direct=true errors=999
done
Without getting too philosophical, using assignments like FILES=$(ls *.txt) is a bad habit to get into. By contrast, for f in *.txt deals correctly with files that have odd characters in their names (like spaces or other syntax-breaking values). BUT the other habit you do want to get into is to quote all variable references (like $f) with double quotes: "$f", OK? ;-) This is the other side of protection for files with spaces etc. embedded in them. See the comparison below.
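A quick way to see the difference, assuming the directory contains a file named my file.txt:
FILES=$(ls *.txt)
for f in $FILES; do echo "[$f]"; done   # word-splitting: prints [my] and [file.txt]
for f in *.txt; do echo "[$f]"; done    # globbing: prints [my file.txt] intact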
In the edit update, I've turned your CTL path and file into variables (CTL_PATH and CTL_FILE). I think I understand your intent: you have one standard CTL_FILE that you pass through sed to create a table-specific .ctl file (a good approach, in my experience). Note that you don't need cat to send a file to sed, but your use of redirection (> $f.ctl) to create the altered file is very shell-like too.
In the 2nd edit update, I looked here on S.O., found an example sqlldr command line with the correct syntax, and modified it to work with your variable names.
To finish up:
A. Are you sure the Oracle Client package is installed on the machine that you are running your script on?
B. Is the /path/to/oracle/client/tools/bin included in your working $PATH?
C. Try which sqlldr. If you don't get anything, either it's not installed or it's not in the path.
D. If not installed, you'll have to get it installed.
E. Once installed, note the directory that contains the sqlldr command. find / -name 'sqlldr*' will take a long time to run, but it will print out the path you want to use.
F. Take the "path" part of what is returned (like /opt/oracle/11.2/client/bin/, but not the sqlldr at the end), and edit the script at the 2nd line with:
(Txt added to appease the S.O. Formatter ;-) )
export ORCL_PATH="/path/you/found/to/oracle/client"
export PATH="$ORCL_PATH:$PATH"
These steps should solve any remaining issues. If this doesn't work, see if there is someone where you work who understands your local computing environment and can help explain any missing or different steps.
IHTH

How do I convert my 5GB one-liner file into lines based on a pattern?

I have a 5GB one-liner file with JSON data, and each record starts with the pattern "{"created". I need to be able to use Unix commands on my Mac to convert this monster of a one-liner into as many lines as it deserves. Any commands?
file reports it as: ASCII English text, with very long lines, with no line terminators
If you have enough memory, you can open the file once with the TextWrangler application (the free BBEdit cousin) and use regular search/replace on the whole file. Use \r in the replacement to add a return. It will be very slow at opening the file, and may even hang if memory is low, but in the end it will probably work. No scripting, no commands, etc. I did this with big SQL files and sometimes it did the job.
You have to replace your line-start string with the same string with \n or \r or \r\n in front of it.
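If you'd rather stay on the command line, that replacement is a one-liner in perl (not mentioned in the thread, but it ships with macOS; the file names are stand-ins, and the whole 5GB line is held in memory as a single record):
perl -pe 's/\{"created/\n{"created/g' one_liner.json > split.json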
Unclear how it can be a "one liner" file if each record starts with "{"created", but perhaps python -mjson.tool can help you get started:
cat your_source_file.json | python -mjson.tool > nicely_formatted_file.json
Piping raw JSON through python -mjson.tool will cleanly format the JSON to be more human-readable. More info here.
OS X ships with both flex and bison; you can use those to write a parser for your data.
You can use PHP as a shell command (if PHP is installed): just save a text file named myscript with the appropriate code. (I cannot test the code right now, but the idea is as follows.)
UNTESTED CODE
#!/usr/bin/php
<?php
$REPLACE_STRING = '{"created'; // anything you like

// open input file with fopen() in read mode
$inFp = fopen('big_in_file.txt', 'r');
// open output file with fopen() in write mode
$outFp = fopen('big_out_file.txt', 'w+');

$carry = ''; // tail of the previous chunk, in case the pattern straddles a boundary
while (!feof($inFp)) {
    // read the file in chunks, prepending whatever was held back last time
    $chunk = $carry . fread($inFp, 8192);
    // add a return before every occurrence (use "\r\n" for Windows line endings)
    $chunk = str_replace($REPLACE_STRING, "\r" . $REPLACE_STRING, $chunk);
    // hold back the last strlen-1 bytes: a pattern split across two chunks
    // is then reassembled and matched on the next iteration
    $keep = strlen($REPLACE_STRING) - 1;
    if (!feof($inFp) && strlen($chunk) > $keep) {
        $carry = substr($chunk, -$keep);
        $chunk = substr($chunk, 0, -$keep);
    } else {
        $carry = '';
    }
    // write the processed chunk to the output file
    fwrite($outFp, $chunk);
}
fclose($inFp);
fclose($outFp);
?>
After you save it, you must make it executable with sudo chmod a+x ./myscript
and then launch it as ./myscript in the terminal.
After this, the myscript file works like any other Unix command.
