I have two Asterisk servers (server_A and server_B), and on both I store CDR records in MySQL.
My question:
Is there any way to join the CDR records from both servers when a user on server_A calls a user on server_B?
In this case, you can automatically prepend a system identifier to the unique IDs by setting systemname in the asterisk.conf file on each of your boxes:
[options]
systemname=server_A
Further info:
http://ofps.oreilly.com/titles/9780596517342/asterisk-DB.html#setting_systemname
For each SIP device, ensure that you define either:
accountcode=the_user_id+the_server_id
... or ...
setvar=user_server=the_user_id+the_server_id
... where you would naturally replace "the_user_id+the_server_id" with meaningful data.
You can then tweak your MySQL CDRs so that you are storing either "accountcode" or "user_server" as a field. If you want to get very clever -- and it's likely a good idea for data fault tolerance anyway -- set up MySQL replication between the two servers so you're actually writing to the same Database/Table for your CDR data.
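As a rough sketch of that tweak (the cdr table and column names here are assumptions; match them to your own CDR schema), the extra field plus the systemname-prefixed unique IDs make per-server reporting from the shared table straightforward:
-- Sketch only: add the channel-variable column to the shared CDR table.
ALTER TABLE cdr ADD COLUMN user_server VARCHAR(80) NOT NULL DEFAULT '';

-- With systemname set, each uniqueid is prefixed with the box name,
-- so filtering by originating server is a simple LIKE:
SELECT calldate, src, dst, billsec, accountcode, user_server
FROM   cdr
WHERE  uniqueid LIKE 'server_A-%';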
Further reading:
http://www.voip-info.org/wiki/view/Asterisk+variables
http://onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
I have a specific architecture to set up with PostgreSQL.
I have a system that is based on two databases N and N+1.
Database N is available to clients in read-only mode, and database N+1 is available to clients for modification.
The client can also send two commands to the system:
An "apply" command: all the modifications made on the N+1 db are kept and the new state of the system is a readonly db with N+1 data and a N+2 db with same data available for writes.
A "reset" command: the N+1 db is dropped and a new copy of the N database is made for writes access for the users.
My first idea was to keep two databases in a single PostgreSQL instance and run pg_dump and pg_restore on an apply or reset command, renaming the database for the apply (N+1 -> N). The db can reach a size of about 8 GB, so I am currently testing such a dump & restore on a CentOS 6.7 VM.
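Concretely, I imagine the apply step would boil down to something like the following (using CREATE DATABASE ... TEMPLATE instead of a dump/restore to make the writable copy; the database names are placeholders, and all of these statements require that nobody is connected to the databases involved):
-- Hypothetical "apply", run from a separate maintenance database:
DROP DATABASE db_n;                   -- retire the old read-only copy
ALTER DATABASE db_n1 RENAME TO db_n;  -- promote N+1 to the new read-only N
CREATE DATABASE db_n1 TEMPLATE db_n;  -- fresh writable copy for N+2

-- Hypothetical "reset": drop the writable copy and clone N again:
DROP DATABASE db_n1;
CREATE DATABASE db_n1 TEMPLATE db_n;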
Then I looked at the pg_basebackup command, to set up a hot-standby database that would be the read-only one. The problem is that such an architecture is based on the idea of replicating data from a master to a slave, and that is something I don't want, since the client can issue a reset command that drops the N+1 db.
The thing is, I don't know whether a system based on a daily dump/restore is viable, or whether PostgreSQL offers a simple way to handle two databases with the same schema and "detect and apply" the differences between them: with that ability, an apply command would only need to copy the differences from N+1 to N, and a reset command the other way around.
Any ideas?
I want to transfer table data from SQL Server to Informix and vice versa.
The transfer should run on a schedule, and sometimes when the user performs a specific action.
I currently do this operation with delete and insert transactions, and over the web it takes a long time, between 15 and 30 minutes.
How can I do this in an easy way, taking performance into consideration?
Say I have
a Vacation table in SQL Server and want to transfer all the updated data to the Vacation table in Informix.
and
a Permission table in Informix and want to transfer all the updated data to the Permission table in SQL Server.
DISCLAIMER: I am not an SQL Server DBA. However, I have been an Informix DBA for over ten years and can make some recommendations as to its performance.
Disclaimer aside, it sounds like you already have a functional application, but the performance is a show-stopper and that is where you are mainly looking for advice.
There are some technical pieces of information that would be helpful to know, but in their absence, I'm going to make the following assumptions about your environment and application. Please comment or edit your question if I am wrong on any of these.
Database server versions. From the tags, it appears you are using SQL Server 2012. However, I cannot determine the Informix server version, so I will assume you are running at least IDS 11.50.
How the data is being exchanged currently. Are you connecting directly from your .NET application to Informix? I would assume that is the case with SQL Server and will make the same assumption for your Informix connection as well.
Table structures. I assume you have proper indexing on the tables. On the Informix side, dbschema -d dbname -t tablename will give the basic schema.
If you haven't tried it already, and as long as you don't have any compliance concerns, I would suggest exporting the data to a delimited file and loading it from that. (Informix normally deals with pipe-delimited files, so you'll either need to adjust the delimiter on the SQL Server side to a pipe | or adjust it on the Informix import side.) On the Informix end, this would be a
LOAD FROM 'source_file_from_sql_server' DELIMITER '|' INSERT INTO vacation (field1, field2, ..)
For reusability, I would recommend putting this in a stored procedure. Just wrap that load statement inside a BEGIN WORK; and COMMIT WORK; to keep your transactional integrity. Michał Niklas suggested some ways to track changes. If there is any correlation between the transfer of data to the vacation table in Informix and the permission table back in SQL Server, I would propose another option: adding a trigger to the vacation table so that you write all new values to a staging table.
With the import logic in a stored procedure, you can fire the import on demand:
EXECUTE PROCEDURE vacation_import();
You also mentioned the need to schedule the import; that can be accomplished with Informix's "dbcron" scheduler. Using this feature, you'll create a scheduled task that executes vacation_import() periodically. If you haven't used this feature before, OAT (the OpenAdmin Tool) will be helpful. You will also want to do some housekeeping with the CSV files; this can be addressed with the SYSTEM call, which you can make from stored procedures in Informix.
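As a small sketch of that housekeeping piece (the procedure name and file paths are made up; the point is only that SPL can shell out via SYSTEM):
-- Hypothetical helper: move the processed CSV out of the import directory.
CREATE PROCEDURE archive_import_file()
    SYSTEM 'mv /data/import/source_file_from_sql_server /data/import/done/';
END PROCEDURE;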
Some ideas:
Add a was_transferred column to the source tables, with a default value of 0 (you can use 0/1 instead of false/true); see the sketch after this list.
From the source table, select rows with was_transferred = 0.
After transferring the data, update each selected source row, setting was_transferred to 1.
Create a syncro_info table with fields like date_start and date_stop. If you find a record with date_stop IS NULL, it means a transfer is already in progress; this protects you against synchronizing the data twice.
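A rough sketch of the flag idea (table and column names are illustrative, and the exact ALTER TABLE syntax differs slightly between SQL Server and Informix):
-- Illustrative only: mark-and-sweep transfer flag on the source table.
ALTER TABLE vacation ADD was_transferred SMALLINT DEFAULT 0 NOT NULL;

-- Rows still waiting to be transferred:
SELECT * FROM vacation WHERE was_transferred = 0;

-- Once the transfer of those rows has succeeded, mark them:
UPDATE vacation SET was_transferred = 1 WHERE was_transferred = 0;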
I followed this link to output CDR records to my call logging server via SFTP: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/service/9_1_1/car/CUCM_BK_C28A807A_00_cdr-analysis-reporting-admin-guide91_chapter_01.html
Both publisher and subscriber were configured to send data to the call logging server.
The call logging server received 19674 records from the Call Manager, but only 25 records are of CDR type and the rest are of CMR type.
From my experience, I would expect a much higher number of CDR-type records. Additionally, certainly more than 25 calls were made/received on the CUCM extensions/gateways.
Are there any settings that need to be configured on the CUCM in order to generate the rest of the CDRs? Should I configure only the Publisher or only the Subscriber to generate CDRs? Is there a way of switching off CMR-type records?
Assuming that CDRs are in fact enabled, you likely have a very long "CDR File Time Interval". If it were set to once a day, for example, only one CDR file would be sent per day (though it would be huge).
Look here:
Callmanager -> Cisco Unified CM Administration -> System -> Enterprise Parameters
CDR File Time Interval - Set this to 1 minute (which is the default)
If you look under the system parameters there is a "CDR Enabled Flag", which is disabled by default. The CDR database is replicated across all nodes, so you do not need to set up the offload on multiple servers. CMR records capture call-quality data (average jitter, MOS scores, etc.) and are referenced by the call ID.
I have a project that requires us to maintain several MySQL databases on multiple computers. They will have identical schemas.
Periodically, each of those databases must send their contents to a master server, which will aggregate all of the incoming data. The contents should be dumped to a file that can be carried via flash drive to an internet-enabled computer to send.
Keys will be namespaced, so there shouldn't be any conflict there, but I'm not totally sure of an elegant way to design this. I'm thinking of timestamping every row and running the query "SELECT * FROM [table] WHERE timestamp > last_backup_time" on each table, then dumping this to a file and bulk-loading it at the master server.
The distributed computers will NOT have internet access. We're in a very rural part of a 3rd-world country.
Any suggestions?
Your
SELECT * FROM [table] WHERE timestamp > last_backup_time
will miss rows that have been DELETEd.
What you probably want to do is use MySQL replication via USB stick. That is, enable the binlog on your source servers and make sure the binlog is not thrown away automatically. Copy the binlog files to the USB stick, then run PURGE MASTER LOGS TO ... to erase them on the source server.
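On a source server that sequence might look roughly like this (the binlog file name is illustrative; check SHOW MASTER LOGS for your real ones, and only purge files you have already copied):
FLUSH LOGS;                               -- rotate, so the current binlog is closed and copyable
SHOW MASTER LOGS;                         -- list the binlog files still on disk
-- ...copy the closed files to the USB stick, then:
PURGE MASTER LOGS TO 'mysql-bin.000042';  -- drop everything before this file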
On the aggregation server, turn the binlog into an executable script using the mysqlbinlog command, then import that data as an SQL script.
The aggregation server must have a copy of each source server's database, but it can keep that copy under a different schema name as long as your SQL always uses unqualified table names (never schema.table syntax to refer to a table). Importing the mysqlbinlog-generated script (with a proper USE command prefixed) will then mirror the source server's changes on the aggregation server.
Aggregation across all databases can then be done using fully qualified table names (i.e. using schema.table syntax in JOINs or INSERT ... SELECT statements).
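For example, once each site's binlog has been replayed into its own schema, the aggregation might look like this (all schema and table names here are made up):
-- Illustrative only: combine rows from two replayed site schemas.
INSERT INTO aggregate.readings
SELECT * FROM site_a.readings
UNION ALL
SELECT * FROM site_b.readings;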
My receive port uses sqlBinding with typed polling. It invokes a stored procedure to fetch a record, and based on a filter condition the corresponding orchestration kicks off. The BizTalk group consists of 2 servers, and thus 2 receive host instances. If both host instances are running, at some point the same request is read twice, causing a duplicate at the receiver's end. But why is the receive port reading the same record more than once? The proc that reads the record also updates it so that it won't be fetched again.
I observed this while submitting 10 requests: the receive port read 11 times and 11 orchestrations started.
I tried the same 10 requests with one host instance (as in my dev environment), and the receive shows only 10. Any clues?
The quick answer is that you have two options to fix this problem:
Fix your stored procedure so that it behaves correctly in concurrent situations.
Place your SQL polling receive handler within a clustered BizTalk host.
Below is an explanation of what is going on, and under that I give details of implementations to fix the issue:
Explanation
This is due to the way BizTalk receive locations work when running on multiple host instances (that is, when the receive handler for the adapter specified in the receive location runs on a host that has multiple host instances).
In this situation both of the host instances will run their receive handler.
This is usually not a problem - most of the receive adapters can manage this and give you the behaviour you would expect. For example, the file adapter places a lock on files while they are being read, preventing double reads.
The main place where this is a problem is exactly what you are seeing - when a polling SQL receive location is hitting a stored procedure. In this case BizTalk has no option other than to trust the SQL procedure to give the correct results.
It is hard to tell without seeing your procedure, but the way you are querying your records is not guaranteeing unique reads.
Perhaps you have something like:
Select * From Record
Where Status = 'Unread'
Update Record
Set Status = 'Read'
Where Status = 'Unread'
The above procedure can give duplicate records because between the select and the update, another call of the select is able to sneak in and select the records that have not been updated yet.
Implementing a solution
Fixing the procedure
One simple fix to the procedure is to update with a unique id first:
Update Record
Set UpdateId = @@SPID, Status = 'Reading'
Where Status = 'Unread'
Select * From Record
Where UpdateId = @@SPID
And Status = 'Reading'
Update Record
Set Status = 'Read'
Where UpdateId = @@SPID
And Status = 'Reading'
@@SPID should be unique, but if it proves not to be you could use newid()
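If you did go the newid() route, the pattern stays the same, just keyed on a per-batch GUID instead of the session id (a sketch, assuming UpdateId is changed to a uniqueidentifier column):
-- Same mark / select / complete pattern, keyed on a per-batch GUID
Declare @batch uniqueidentifier = NEWID()

Update Record
Set UpdateId = @batch, Status = 'Reading'
Where Status = 'Unread'

Select * From Record
Where UpdateId = @batch
And Status = 'Reading'

Update Record
Set Status = 'Read'
Where UpdateId = @batch
And Status = 'Reading'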
Using a clustered host
Within the BizTalk server admin console when creating a new host it is possible to specify that that host is clustered. Details on doing this are in this post by Kent Weare.
Essentially you create a host as normal, with host instances on each server, then right click the host and select cluster.
You then create a SQL receive handler for the polling that works under that host and use this handler in your receive location.
A BizTalk clustered host ensures that all items that are members of that host will run on one and only one host instance at a time. This will include your SQL receive location, so you will not have any chance of race conditions when calling your procedure.