Asterisk Realtime pattern matching - odbc

I'm using the realtime db lookup feature to generate my extensions.conf
This works fine if I specify a complete number in the exten field; however, if I try to use a pattern such as _0. it doesn't work (I get an error that the extension isn't found).
If I use that pattern directly in extensions.conf though it works fine.
Thanks

You probably need to read this:
http://www.voip-info.org/wiki/view/Asterisk+RealTime+Extensions
You also need to consider this, from the sample extensions.conf:
; While most dynamic realtime engines are automatically used when defined in
; this file, 'extensions', distinctively, is not. To activate dynamic realtime
; extensions, you must turn them on in each respective context within
; extensions.conf with a switch statement. The syntax is:
;     switch => Realtime/[[db_context#]tablename]/
; The only option available currently is the 'p' option, which disallows
; extension pattern queries to the database. If you have no patterns defined
; in a particular context, this will save quite a bit of CPU time. However,
; note that using dynamic realtime extensions is not recommended anymore as a
; best practice; instead, you should consider writing a static dialplan with
; proper data abstraction via a tool like func_odbc.
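
For example, a minimal sketch of such a context (the context name and realtime family here are assumptions; match them to your extconfig.conf mapping). Note that the 'p' option is deliberately omitted, since it would disable exactly the pattern queries (_0. and the like) you need:

[from-trunk]
; look up extensions (including patterns) in the realtime 'extensions' family
switch => Realtime/extensions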

Related

Saxon XQuery and collection: is it possible in a FLWOR query to retrieve the file names of returned items?

I would like to execute this kind of FLWOR query (I am using Saxon):
for $baseItem in collection('file:/xmlDir?select=*.xml;recurse=yes')/item
let $itemToRetrieve := xs:string($baseItem/item)
let $itemFilter := xs:string($baseItem/filter)
let $fileName := tokenize(xmlPath($baseItem), '/')[last()] (: xmlPath is pseudo-code :)
where $itemFilter = 'test'
return ($itemToRetrieve, $fileName)
This way I could quickly find, when working on a large collection, where the returned items were found by the processor, without having to use an external program like the find command.
I have tried to use the document-uri() and base-uri() functions, but without success.
Is there a way to achieve this?
The document-uri() function should give you what you want. I just tried
collection($someURI)!document-uri(.)
and it works for me, provided the items in the collection are all document nodes (it fails with a type error if the collection includes non-XML resources that are retrieved as items other than document nodes).
Another approach is to use uri-collection() which gives you the URIs of the resources rather than the resources themselves; you can then fetch the particular resources you want using the doc() function (or json-doc() or unparsed-text() depending on the type of resource).
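For example, a sketch of the question's query rewritten over uri-collection() (same directory URI and element names as in the question; untested against the actual data):

for $uri in uri-collection('file:/xmlDir?select=*.xml;recurse=yes')
let $baseItem := doc($uri)/item
where xs:string($baseItem/filter) = 'test'
return (xs:string($baseItem/item), tokenize($uri, '/')[last()])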

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. That's fine using the this.added function in the publish function.
My problem is that I want to treat the bogus doc as if it were a real document, and this gets troublesome when I want to update it. For the real docs I run RealDocs.update, but doing that on the bogus doc fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement, and I'm not too fond of modifying the core code.
Right now I'm stuck at creating BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult, since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server-side collection, so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site) it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection. MyCollection.update() would thus become MyCollection._collection.update().

So, using a simple wrapper, one can pass in the usual arguments to an update call to update the collection as usual (which will call the server, which in turn will trigger your allow/deny rules), or add 'local' as the last argument to perform the update only in the client collection. Something like this should do it.
DocsUpdateWrapper = function () {
  var lastIndex = arguments.length - 1;
  if (arguments[lastIndex] === 'local') {
    // Drop the 'local' flag and update only the client-side minimongo collection
    var args = Array.prototype.slice.call(arguments, 0, lastIndex);
    Docs._collection.update.apply(Docs._collection, args);
  } else {
    // Normal update: goes through the server and your allow/deny rules
    Docs.update.apply(Docs, arguments);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too. I haven't tried this function yet, but it should serve well as an example.)
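A hypothetical call might look like this (the selector and modifier are made up for illustration):

// updates only the client-side minimongo copy; no server round-trip
DocsUpdateWrapper({ _id: docId }, { $set: { flagged: true } }, 'local');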
The biggest benefit of this, in my opinion, is that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are local or also live on the server. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not, so we know how to update them. (An improved DocsWrapper could check for that boolean, so we could even omit passing the 'local' argument.)
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give each document the correct update method (real/bogus) during the transform, then call it on the document rather than a general one. You can put the code from this.added into the transform process; a sketch follows.
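Here is a sketch of that idea (the isBogus flag and save method are made-up names, and the code is untested):

// client-side: each fetched document carries its own update method
RealDocs = new Meteor.Collection('realDocs', {
  transform: function (doc) {
    // route updates to the right collection based on a per-doc flag
    var target = doc.isBogus ? BogusDocs : RealDocs;
    doc.save = function (modifier) { target.update(doc._id, modifier); };
    return doc;
  }
});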
You can also set up a local minimongo collection and insert into it from a method callback:

FoundAgents = new Meteor.Collection(null, Agent.transformData)
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify err, null, 2
  else
    _.each data, (item) ->
      FoundAgents.insert item
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at MeteorPad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().

How to use MarkLogic xquery to tell if a document is 'in-memory'

I want to tell if an XML document has been constructed (e.g. using xdmp:unquote) or has been retrieved from a database. One method I have tried is to check the document-uri property
declare variable $doc as document-node() external;
if (fn:exists(fn:document-uri($doc))) then
'on database'
else
'in memory'
This seems to work well enough but I can't see anything in the MarkLogic documentation that guarantees this. Is this method reliable? Is there some other technique I should be using?
I think that behavior has been stable for a while. You could always check for the URI too, as long as you expect it to be from the current database:
xdmp:exists(fn:doc(fn:document-uri($doc)))
Or if you are in an update context and need ACID guarantees, use fn:exists.
The real test would be to try to call xdmp:node-replace or similar, and catch the expected error. Those node-level update functions do not work on constructed nodes. But that requires an update context, and might be tricky to implement in a robust way.
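A sketch of that probe (this assumes an update context; it replaces the first child of the document with a copy of itself, and treats any error as "not in the database", which is coarser than catching the specific error):

declare function local:on-database($doc as document-node()) as xs:boolean {
  try {
    xdmp:node-replace(($doc/node())[1], ($doc/node())[1]),
    fn:true()
  } catch ($e) {
    fn:false()
  }
};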
If your XML document is in-memory, you can use the in-mem-update API:
import module namespace mem = "http://xqdev.com/in-mem-update" at "/MarkLogic/appservices/utils/in-mem-update.xqy";
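For example, a minimal sketch (this assumes the library's mem:node-replace function, which returns a modified copy of the in-memory document rather than touching the database):

let $doc := xdmp:unquote('<root><a>1</a></root>')
return mem:node-replace($doc/root/a, <a>2</a>)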
If your XML document exists in your database, you can use fn:exists() or fn:doc-available() instead.
The real test of in-memory vs. in-database is xdmp:node-replace: if you are able to replace, update, or delete a node, then it is in the database; if it throws an exception, it is not. Now there are two situations:
1. Your document is not created at all: you can use fn:empty() to check whether it exists.
2. Your document is created and is in memory: if fn:empty() returns false and xdmp:node-replace throws an exception, then it's in memory.

Proper LINQ to Lucene Index<T> usage pattern for ASP.NET?

What is the proper usage pattern for LINQ to Lucene's Index<T>?
It implements IDisposable, so I figured wrapping it in a using statement would make the most sense:
IEnumerable<MyDocument> documents = null;
using (Index<MyDocument> index = new Index<MyDocument>(new System.IO.DirectoryInfo(IndexRootPath)))
{
    documents = index.Where(d => d.Name.Like("term")).ToList();
}
I am occasionally experiencing unwanted deletion of the index on disk. It seems to happen 100% of the time if multiple instances of the Index exist at the same time. I wrote a test using PLINQ to run 2 searches in parallel, and one search works while the other returns 0 results because the index is emptied.
Am I supposed to use a single static instance instead?
Should I wrap it in a Lazy<T>?
Am I then opening myself up to other issues when multiple users access the static index at the same time?
I also want to re-index periodically as needed, likely using another process like a Windows service. Am I also going to run into issues if users are searching while the index is being rebuilt?
The code looks like LINQ to Lucene.
Most cases of completely cleared Lucene indexes come from new IndexWriters created with the create parameter set to true. The code in the question does not show the indexing side, so debugging this further is difficult.
Lucene.Net is thread-safe, and I expect LINQ to Lucene to exhibit the same behavior. A single static index instance would cache things in memory, but I guess you'll need to handle reloading the index after changes yourself (I do not know if LINQ to Lucene does this for you).
There should be no problem using several searchers/readers while reindexing; Lucene is built to support that scenario. However, there can only be one writer per directory, so no other process can write documents to the index while your Windows service optimizes it.
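If you do try the single shared instance, a minimal sketch might look like this (Index<T> and IndexRootPath are taken from the question; the wrapper class and its names are made up, and Lazy<T> handles the thread-safe initialization):

public static class SearchIndex
{
    // One shared, lazily created index: parallel searches reuse it instead of
    // each opening (and possibly re-creating) the index directory.
    private static readonly System.Lazy<Index<MyDocument>> _index =
        new System.Lazy<Index<MyDocument>>(
            () => new Index<MyDocument>(new System.IO.DirectoryInfo(IndexRootPath)),
            System.Threading.LazyThreadSafetyMode.ExecutionAndPublication);

    public static Index<MyDocument> Instance
    {
        get { return _index.Value; }
    }
}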

Convenient way to use LIKE case sensitivity in SQLite

As I see it, the LIKE operator can use an index if I switch PRAGMA case_sensitive_like=ON. I measured it, and it really works: "LIKE 'someth%'" queries become ten times faster on comparatively large tables with binary-collated indexes.
But the problem is that my library is implemented as an add-on to my application, and it maintains its own tables in whatever db it is connected to. So the problems are:
I cannot read case_sensitive_like, since it is only supported to be set, not read. So I cannot temporarily save the state and restore it after the query.
As an add-on that should defer to the main functionality of the db, I should not change the setting permanently to suit my needs, since it can affect other routines.
As I see it, there is no case-sensitive internal equivalent of LIKE for me to call the optimized query directly (for example, LIKECASESENSITIVE instead of LIKE).
I can call sqlite3_create_function, but I don't know whether or not I can call LIKE (case-sensitive) internally.
I can not read case_sensitive_like since it is only supported to be set, not read. So I can not temporarily read the state and return it after the query
You can get the state of case_sensitive_like with a query like this one:
select case when 'a' like 'A' then 0 else 1 end
which will return 1 if case_sensitive_like = ON and 0 if it's OFF.
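Putting it together, an add-on could probe the current state, switch it for its own query, and then restore it (a sketch; the table, column, and saved state are placeholders):

-- probe: 1 means case_sensitive_like is currently ON, 0 means OFF
select case when 'a' like 'A' then 0 else 1 end;

-- switch on for the fast indexed prefix search
PRAGMA case_sensitive_like = ON;
select * from mytable where mycol like 'someth%';

-- restore whatever state the probe reported
PRAGMA case_sensitive_like = OFF;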
