I followed this guide to output CDR records to my call logging server via SFTP: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/service/9_1_1/car/CUCM_BK_C28A807A_00_cdr-analysis-reporting-admin-guide91_chapter_01.html
Both publisher and subscriber were configured to send data to the call logging server.
The call logging server received 19674 records from the Call Manager, but only 25 of them are of CDR type; the rest are of CMR type.
From my experience, I would expect a much higher number of CDR-type records. Additionally, there were certainly more than 25 calls made/received on the CUCM extensions/gateways.
Are there any settings that need to be configured on the CUCM in order to generate the rest of the CDRs? Should I configure only the Publisher or only the Subscriber to generate CDRs? Is there a way of switching off CMR-type records?
Assuming that CDRs are in fact enabled, you likely have a very long "CDR File Time Interval". You could set this to once a day, which would only send over one CDR file a day (it would be huge, though).
Look here:
Callmanager -> Cisco Unified CM Administration -> System -> Enterprise Parameters
CDR File Time Interval - Set this to 1 minute (which is the default)
If you look under the service parameters, there is a "CDR Enabled Flag", which is disabled by default. The CDR database is updated across all servers in the cluster, so you do not need to set up the offload on multiple servers. CMR records are records of call quality (average jitter, MOS scores, etc.) and would be referenced by call ID.
I am working with the Zabbix monitoring tool.
Could anyone advise whether there is any tool to generate reports?
Not to my knowledge, out of the box.
Zabbix is tricky because the MySQL backend history tables grow extremely fast and they don't have primary keys. Our current history tables have 440+ million records, and we monitor 6000 servers with Zabbix. A single table scan takes 40 minutes on the active server.
So your challenge could be split into three smaller challenges:
History
Denormalization is the key: joins don't perform on huge history tables, because you would have to join the history, items, functions, triggers and hosts tables.
Besides that, you want to evaluate global and host macros, and replace {ITEM.VALUE} and {HOST.NAME} in trigger and item names/descriptions.
By the way, there is an experimental version of Zabbix which uses Elasticsearch for storing history, and it makes sorting and selecting item values by intervals possible: Zabbix using Elasticsearch for History Tables
My approach is to generate structures like the one below for every Zabbix record from the history tables and dump them to a document database; a minimal sketch of such a dump loop follows the example structure. Make sure you don't use buffered cursors.
{'dns_name': '',
'event_clock': 1512501556,
'event_tstamp': '2017-12-05 19:19:16',
'event_value': 1,
'host_id': 10084,
'host_name': 'Zabbix Server',
'ip_address': '10.13.37.82',
'item_id': 37335,
'item_key': 'nca.test.backsync.multiple',
'item_name': 'BackSync - Test - Multiple',
'trig_chg_clock': 1512502800,
'trig_chg_tstamp': '2017-12-05 19:40:00',
'trig_id': 17206,
'trig_name': 'BackSync - TEST - Multiple - Please Ignore',
'trig_prio': 'Average',
'trig_value': 'OK'
}
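A minimal sketch of such a dump loop, assuming a MySQL backend, MongoDB as the document database and history_uint as the source table (the connection details, reporting database name and time window below are placeholders; the table and column names follow the default Zabbix schema, so verify them against your version):

# Stream Zabbix history rows with an unbuffered (server-side) cursor and
# dump denormalized documents into MongoDB. All connection details below
# are placeholders.
from datetime import datetime

import pymysql
import pymysql.cursors
from pymongo import MongoClient

mysql = pymysql.connect(host="zabbix-db", user="zabbix_ro", password="secret",
                        database="zabbix",
                        cursorclass=pymysql.cursors.SSCursor)  # unbuffered cursor
mongo = MongoClient("mongodb://reporting-db:27017")
docs = mongo.reporting.zabbix_history

# Denormalize in SQL so the consumer never has to join the huge history
# table again; history_uint/items/hosts follow the default Zabbix schema.
query = """
    SELECT h.itemid, h.clock, h.value,
           i.name AS item_name, i.key_ AS item_key,
           ho.hostid, ho.host AS host_name
    FROM history_uint h
    JOIN items i  ON i.itemid = h.itemid
    JOIN hosts ho ON ho.hostid = i.hostid
    WHERE h.clock BETWEEN %s AND %s
"""

batch = []
with mysql.cursor() as cur:
    cur.execute(query, (1512432000, 1512518400))  # example one-day window
    for itemid, clock, value, item_name, item_key, hostid, host_name in cur:
        batch.append({
            "item_id": itemid,
            "item_key": item_key,
            "item_name": item_name,
            "host_id": hostid,
            "host_name": host_name,
            "event_clock": clock,
            "event_tstamp": datetime.utcfromtimestamp(clock).strftime("%Y-%m-%d %H:%M:%S"),
            "event_value": value,
        })
        if len(batch) >= 1000:  # insert in chunks to keep memory flat
            docs.insert_many(batch)
            batch = []
if batch:
    docs.insert_many(batch)
mysql.close()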
Current Values
The Zabbix APIs are documented pretty well, and JSON is handy for dumping a structure like the one proposed for the history. Don't expect the Zabbix APIs to return more than about 500 metrics per second; we currently pull around 350 metrics per second.
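A rough sketch of pulling the latest item values over the Zabbix JSON-RPC API (the URL and credentials are placeholders, and note that the login parameter is "user" on older Zabbix versions and "username" on newer ones):

# Pull current item values through the Zabbix JSON-RPC API.
import requests

API = "http://zabbix.example.com/zabbix/api_jsonrpc.php"  # placeholder URL

def call(method, params, auth=None):
    # Small JSON-RPC helper around the Zabbix API endpoint.
    payload = {"jsonrpc": "2.0", "method": method, "params": params,
               "id": 1, "auth": auth}
    resp = requests.post(API, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

token = call("user.login", {"user": "api_reader", "password": "secret"})

# Latest value per monitored item; keep the output minimal so the call stays light.
items = call("item.get", {
    "output": ["hostid", "name", "key_", "lastvalue", "lastclock"],
    "monitored": True,
    "limit": 500,
}, auth=token)

for item in items:
    print(item["key_"], item["lastvalue"])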
And finally, reporting... There are many options, but you have to integrate them:
Jasper
Kibana (Elasticsearch)
Tableau
Operations Bridge Reporter (Vertica)
..
JasperReports - IMHO a good "framework" for reports:
connect it with the SQL data connector => you have to be familiar with the SQL structure of your Zabbix DB
a more generic solution would be a Zabbix API data connector for JasperReports, but you would have to code this data connector yourself, because it doesn't exist
You can use the Event API to export data for reporting.
Quoting from the main reference page:
Events: Retrieve events generated by triggers, network discovery and other Zabbix systems for more flexible situation management or third-party tool integration.
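As a rough sketch, an export of recent trigger events via event.get over the JSON-RPC API could look like this (the endpoint, credentials and time range are assumptions; it reuses the same small JSON-RPC helper idea as in the history answer above):

# Export the last 24 hours of trigger events for reporting.
import time
import requests

API = "http://zabbix.example.com/zabbix/api_jsonrpc.php"  # placeholder URL

def call(method, params, auth=None):
    payload = {"jsonrpc": "2.0", "method": method, "params": params,
               "id": 1, "auth": auth}
    resp = requests.post(API, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

token = call("user.login", {"user": "api_reader", "password": "secret"})

now = int(time.time())
events = call("event.get", {
    "output": "extend",
    "source": 0,                 # 0 = events generated by triggers
    "time_from": now - 24 * 3600,
    "time_till": now,
    "selectHosts": ["host"],     # include the host name with each event
    "sortfield": ["clock"],
    "sortorder": "DESC",
}, auth=token)

for ev in events:
    host = ev["hosts"][0]["host"] if ev.get("hosts") else ""
    print(ev["clock"], host, ev["value"])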
Additionally, if you've set up IT Services & SLA, you can use the Service API to extract service availability percentages.
What is WSM Buffer in IBM DataPower's context?
As per my understanding, the WSM buffer is used to monitor incoming requests in DataPower. Statistics for all incoming requests, such as 'completed records', 'pending records', etc., are captured under 'WSM Agent Status' in DataPower.
Can you please provide a detailed description of the WSM buffer and explain how it helps in logging and monitoring?
The WSM buffer is a relatively easy setting to configure in DataPower XI52. You need to enable it as follows:
Use the DataPower Web Interface
Navigate to Object > Device Management
Enable data retention in the WSM buffer
Select XML Management Interface
Check the WS-Management Endpoint checkbox
Click Apply
Choose to retain data when data collection is stopped
Select Web Services Management Agent
Select Buffering Mode = Buffer
Click Apply
Then you need to use a product like ITCAM for SOA, which has the capability to read the WSM buffer and make sense of it; you need to configure ITCAM for SOA accordingly. Once that is done, you can test your WSM buffer configuration as below:
Send four requests to your DataPower device.
Check the WSM buffer:
1. Choose Status > Web Service > WSM Agent Status
2. The status shows, for example, 4 records seen and 4 complete records
The parameters have the following meaning:
a.) Records seen: total lost + complete
b.) Records lost: discarded when the buffer was full
c.) Complete records: ready for data collection
d.) Pending records: requests awaiting a server response
See if it helps you!
I would like to know the Process Integration steps.
Through Outbound ports
Whenever any such event occurs in Dynamics AX, we just want to be notified of that event in the form of XML (Process Integration).
Examples: Sales Order Creation, Customer Creation, Purchase Order Creation...
Outbound ports are only useful for asynchronous communication.
See AX 2012 Export Data with Outbound ports for an example (using the file system).
The steps to initiate sending data are in the AIF_SendCustomer.
As this is no lightweight operation, you may consider logging the records which need integration in a custom integration table, then doing the processing in batch.
This logging is done in the insert and/or update (and maybe delete) methods.
Deletes require you to store the RecId field value in the external system, to be used for delete requests. The following does not cover this.
For the logged table, make the following method:
void syncRecord()
{
    XXXRecordLog log;   // custom log table holding references to changed records

    log.RefTableId = this.TableId;  // the table of the changed record
    log.RefRecId   = this.RecId;    // the RecId of the changed record
    log.insert();
}
Then call this.syncRecord() in the insert and update methods.
In the query for the outbound service, be sure to exists-join your table and the log table. This way only changed records are exported.
Make a batch job to do the transfer using the AIF_SendCustomer as a template.
After a synchronous (AifSendMode::Sync) transfer of the records, delete the log records (or mark them transferred).
Finally call AIFoutboundProcessingService to flush the file:
new AIFoutboundProcessingService().run();
Try to keep things simple. It might be simpler to do a comma-separated file export of the changed records!
I have 2 Asterisk servers (server_A and server_B); on both I store CDR records in MySQL.
My question:
Is there any way to join the CDR records from both servers when a user on server_A calls a user on server_B?
In this case, you can automatically prepend a system identifier to the unique IDs by setting systemname in the asterisk.conf file (for each of your boxes):
[options]
systemname=server_A
Further info:
http://ofps.oreilly.com/titles/9780596517342/asterisk-DB.html#setting_systemname
For each SIP device, either ensure that you define:
accountcode=the_user_id+the_server_id
... or ...
setvar=user_server=the_user_id+the_server_id
... where you would naturally replace "the_user_id+the_server_id" with meaningful data.
You can then tweak your MySQL CDRs so that you are storing either "accountcode" or "user_server" as a field. If you want to get very clever -- and it's likely a good idea for data fault tolerance anyway -- set up MySQL replication between the two servers so you're actually writing to the same Database/Table for your CDR data.
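As a rough illustration of the "same database" idea, and assuming both servers write (or replicate) into a single MySQL cdr table, that systemname is set as in the first answer (so unique IDs are prefixed with the server name), and that the stock Asterisk CDR column names are in use, a script to pair the two legs of a cross-server call could match on the dialled number within a small time window (the database name and the 30-second window are assumptions):

# Pair A-leg and B-leg CDRs of cross-server calls from one merged cdr table.
import pymysql

conn = pymysql.connect(host="cdr-db", user="cdr_ro", password="secret",
                       database="asteriskcdrdb")  # placeholder connection details

sql = """
    SELECT a.calldate, a.src, a.dst,
           a.uniqueid AS a_leg, b.uniqueid AS b_leg,
           b.disposition, b.billsec
    FROM cdr a
    JOIN cdr b
      ON  b.dst = a.dst
      AND a.uniqueid LIKE 'server_A-%'
      AND b.uniqueid LIKE 'server_B-%'
      AND b.calldate BETWEEN a.calldate AND a.calldate + INTERVAL 30 SECOND
"""

with conn.cursor() as cur:
    cur.execute(sql)
    for row in cur.fetchall():
        print(row)
conn.close()

If you store accountcode (or the user_server variable) in the CDR table as suggested above, matching on that field instead of the dialled number may be more robust.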
Further reading:
http://www.voip-info.org/wiki/view/Asterisk+variables
http://onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
My receive port uses sqlBinding with typed polling. It invokes a stored procedure to fetch a record, and based on a filter condition the corresponding orchestration kicks off. The BizTalk group consists of 2 servers, and thus 2 receive host instances. If both host instances are running, at some point the same request is read twice, causing a duplicate at the receiver's end. But why is the receive port reading the same record more than once? The proc which reads the record also updates it so that it won't be fetched again.
I observed this scenario while submitting 10 requests; the receive port read 11 times and 11 orchestrations started.
I tried the same (10 requests) with one host instance (as in my dev environment), and the receive shows 10 only. Any clues?
The quick answer is that you have two options to fix this problem:
Fix your stored procedure so that it behaves correctly in concurrent situations.
Place your SQL polling receive handler within a clustered BizTalk host.
Below is an explanation of what is going on, and under that I give details of how to implement each fix:
Explanation
This is due to the way BizTalk receive locations work when running on multiple host instances (that is, that the receive handler for the adapter specified in the receive location is running on a host that has multiple host instances).
In this situation both of the host instances will run their receive handler.
This is usually not a problem - most of the receive adapters can manage this and give you the behaviour you would expect. For example, the file adapter places a lock on files while they are being read, preventing double reads.
The main place where this is a problem is exactly what you are seeing - when a polling SQL receive location is hitting a stored procedure. In this case BizTalk has no option other than to trust the SQL procedure to give the correct results.
It is hard to tell without seeing your procedure, but the way you are querying your records is not guaranteeing unique reads.
Perhaps you have something like:
Select * From Record
Where Status = 'Unread'
Update Record
Set Status = 'Read'
Where Status = 'Unread'
The above procedure can give duplicate records because between the select and the update, another call of the select is able to sneak in and select the records that have not been updated yet.
Implementing a solution
Fixing the procedure
One simple fix to the procedure is to update with a unique id first:
Update Record
Set UpdateId = @@SPID, Status = 'Reading'
Where Status = 'Unread'

Select * From Record
Where UpdateId = @@SPID
And Status = 'Reading'

Update Record
Set Status = 'Read'
Where UpdateId = @@SPID
And Status = 'Reading'
@@SPID should be unique, but if it proves not to be, you could use newid() instead.
Using a clustered host
Within the BizTalk Server admin console, when creating a new host, it is possible to specify that the host is clustered. Details on doing this are in this post by Kent Weare.
Essentially, you create a host as normal, with host instances on each server, then right-click the host and select cluster.
You then create a SQL receive handler for the polling that works under that host and use this handler in your receive location.
A BizTalk clustered host ensures that all items that are members of that host will run on one and only one host instance at a time. This will include your SQL receive location, so you will not have any chance of race conditions when calling your procedure.