Cognos Security with the #prompt# Token - cognos-10

I am working on Cognos 10.2. Our company has 4 servers located in different places; all of them are OLTP servers with the same schema and tables, capturing sales transaction data for the different locations.
Question
Using the #prompt# macro token I am able to create a single package from which I can connect to the different servers and fetch data from a chosen server at execution time. I want to add security at this point: when I run the Cognos report it prompts for the data source connection, and at that moment it should check whether the user belongs to the matching group or has the required permissions.
For example:
A, B, C, D are the different servers / data sources
A1, A2, A3 users/groups
B1, B2 users/groups
C1, C2 users/groups
D1 users/groups
When A1 runs the Cognos report and selects data source A, the report should execute; if he selects data source B, it should not execute, or should give an error message.
What steps or functions do I need to follow to create such a package/report?
Thanks very much
Anil RG
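One possible pattern (a minimal sketch, not a confirmed solution for this thread) is to pair the #prompt# macro with a security mapping table and the CSVIdentityNameList() macro function, which expands to the session user's account, group, and role names as a quoted, comma-separated list. Assuming a hypothetical table SEC_GROUP_SERVER(GROUP_NAME, SERVER_CODE) in each database that maps Cognos groups to the servers they may query, an embedded filter in the Framework Manager model could look like this:

/* Hypothetical mapping table SEC_GROUP_SERVER(GROUP_NAME, SERVER_CODE);
   rows survive only when the prompted server code is one that the
   signed-on user's groups are mapped to */
[Security].[SEC_GROUP_SERVER].[SERVER_CODE] = #sq(prompt('pServer', 'string'))#
and [Security].[SEC_GROUP_SERVER].[GROUP_NAME] in (#CSVIdentityNameList()#)

If the user selects a server his groups are not mapped to, the filter returns no rows; to raise an explicit error message instead, the same check could drive a report-level conditional render variable.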

Related

A problem about federated tables in different clusters

I have a MySQL server (node A) in Galera cluster 1 and a MySQL server (node B) in Galera cluster 2.
Node A and node B are in different clusters, so they have completely different write nodes.
I'm trying to join tables in node A with tables in node B using the FederatedX engine.
But I found these words in MariaDB's Knowledge Base about FederatedX's limitations:
There is no way for the handler to know if the foreign database or table has changed. The reason for this is that this database has to work like a data file that would never be written to by anything other than the database. The integrity of the data in the local table could be breached if there was any change to the foreign database.
These nodes are in different clusters, so the remote table will necessarily be updated by other write nodes.
But I need to figure out a way to join tables in different nodes.
Some websites say that federated tables use the remote server's data.
It sounds like I don't have to worry about it... but I'm not quite sure.
So... my question is: will federated tables across different clusters work?
Thanks a lot.
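For reference, this is roughly how such a link is declared in MariaDB (a minimal sketch; the host, credentials, and table names are all hypothetical). FederatedX forwards statements to the remote server at query time, so every read sees the remote cluster's current data; the limitation quoted above concerns the local server's inability to cache or guarantee integrity, not reads returning stale rows:

-- On node A: define the connection to node B (assumed host/credentials)
CREATE SERVER cluster2
FOREIGN DATA WRAPPER mysql
OPTIONS (HOST 'nodeB.example.com', PORT 3306, DATABASE 'sales', USER 'fed_user', PASSWORD 'secret');

-- Local federated table; its definition must match the remote 'orders' table
CREATE TABLE orders_remote (
  id INT NOT NULL,
  amount DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE=FEDERATED CONNECTION='cluster2/orders';

-- Joins against local tables then work as usual
SELECT l.id, l.customer, r.amount
FROM local_orders l
JOIN orders_remote r ON r.id = l.id;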

Migrating all the Kusto database principals to a different database on a different Kusto cluster

I have a database D1 on the cluster C1 and I have a bunch of (maybe 100) principals defined on D1. Now assume that I also have another database D2 on the cluster C2. Also I am admin on both the databases. Is there any way I can just script out all my service principals and execute that script in one shot against D2 and thus achieve migration of all the principals? If not, is there any other way of achieving this, other than explicitly granting permission to each one of the principals against the database D2? (that will be like executing a hundred commands)
There's no current way of exporting/importing database-level principals from one database to another.
One option for you to consider is writing a simple app/script (using Kusto's API) that:
1. Lists all principals using .show database ['database_name'] principals
2. Generates a list of .add database ['database_name'] <role> (<principal>) commands based on the output of step 1 (or a single command per role, where <principal> is a comma-separated list of that role's principals)
3. Runs all the commands from step 2
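For illustration, the generated script might contain commands like the following (database names from the question; the principal identities are hypothetical):

.add database ['D2'] admins ('aaduser=admin1@contoso.com')
.add database ['D2'] viewers ('aaduser=reader1@contoso.com', 'aadapp=00000000-1111-2222-3333-444444444444;contoso.com')

Since these are ordinary control commands, the whole script can be replayed in one batch against C2 with any Kusto client.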

PostgreSQL 9.1 - Find difference between two databases

I have a specific architecture to set up with PostgreSQL.
I have a system that is based on two databases, N and N+1.
Database N is available to clients in read-only mode, and database N+1 is available to clients for modification.
The client can also send two commands to the system:
An "apply" command: all the modifications made on the N+1 db are kept and the new state of the system is a readonly db with N+1 data and a N+2 db with same data available for writes.
A "reset" command: the N+1 db is dropped and a new copy of the N database is made for writes access for the users.
My first idea was to keep two databases in an instance of postgresql and perform pg_dump and pg_restore command on apply or reset command, and rename the database for the apply(N+1 -> N). The db can possibly reach the size of 8 Go, so I am currently performing test of such dump&restore on a Centos 6.7 vm.
Then I looked to the pg_basebackup command, to set up a hot standby database that will be the readonly one. The problem is that such an architecture is based on the idea of data replication from a master to a slave, and that is something I don't want since the client can ask a reset command that will drop the N+1 db.
The thing is I don't know if a system based on daily dump/restore is viable or not, or if there is with postgresql a simple way to handle two databases with the same schema and "detect and apply" the differences between the two: with that ability, I will be able , on apply command, to copy from N+1 to N only the difference, and the contrary with a reset command.
Any idea ?
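One alternative to a full dump/restore cycle (a minimal sketch; db_n and db_n1 are hypothetical database names) is to use PostgreSQL's template cloning and rename facilities, both available in 9.1. Cloning from a template is a file-level copy, so it is typically much faster than pg_dump/pg_restore, though it requires that no sessions are connected to the databases involved:

-- "reset": throw away client changes and re-clone the read-only database
DROP DATABASE IF EXISTS db_n1;
CREATE DATABASE db_n1 TEMPLATE db_n;

-- "apply": promote N+1 to the read-only role, then hand out a fresh writable copy
DROP DATABASE db_n;
ALTER DATABASE db_n1 RENAME TO db_n;
CREATE DATABASE db_n1 TEMPLATE db_n;

This does not diff the two databases; it replaces one with a copy of the other, which matches the apply/reset semantics described above without shipping 8 GB through a dump.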

How to synchronize local db values to server (hosting) db values in SQL Server 2008 R2?

How do I synchronize a local database's data to the server (hosting) database in SQL Server 2008 R2? For example: our client has a PC, and when they enter a record it is inserted into the local database; but we also want to keep a backup on our own hosting server (an IBM server). When the client has an internet connection, clicking a single button should transfer the data to our own server database as well.
Is there any such option? Please let me know.
Thank you in advance!
I suggest using the Redgate SQL Data Compare tool to synchronize data from one database to another.
You can also use a query such as the following to find the records that differ between the two databases and sync them:
USE Database1;
-- Copy over the rows from Database2 that do not yet exist in Database1
INSERT INTO Schema1.Table1 (Columns)
SELECT t1.Columns
FROM Database2.Schema2.Table2 AS t1
LEFT JOIN Schema1.Table1 AS t2 ON t1.KeyColumn = t2.KeyColumn
WHERE t2.KeyColumn IS NULL;
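If existing rows can change as well (not only new rows being added), a single MERGE statement, available since SQL Server 2008, can insert the missing rows and update the changed ones in one pass. A minimal sketch with the same placeholder names (KeyColumn and SomeColumn are assumed column names):

USE Database1;
-- One-pass sync: update matching rows, insert missing ones
MERGE Schema1.Table1 AS target
USING Database2.Schema2.Table2 AS source
    ON target.KeyColumn = source.KeyColumn
WHEN MATCHED THEN
    UPDATE SET target.SomeColumn = source.SomeColumn
WHEN NOT MATCHED BY TARGET THEN
    INSERT (KeyColumn, SomeColumn) VALUES (source.KeyColumn, source.SomeColumn);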
The Redgate tools may be costly for you. You can instead use an open source database compare and sync tool such as
Open DBDiff
I have used it to compare my live databases with a local database, and to date I have not found any issue.
Hope it will help you.
Open DBDiff is an open source database schema comparison tool for SQL Server 2005/2008.
It reports differences between two database schemas and provides a synchronization script to upgrade a database from one to the other.

Generating CDR records from CUCM 9.x

I followed this guide to output CDR records to my call logging server via SFTP: http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/service/9_1_1/car/CUCM_BK_C28A807A_00_cdr-analysis-reporting-admin-guide91_chapter_01.html
Both publisher and subscriber were configured to send data to the call logging server.
The call logging server received 19,674 records from the Call Manager, but only 25 of them are of CDR type; the rest are of CMR type.
From my experience I would expect a much higher number of CDR-type records. Additionally, there were certainly more than 25 calls made/received on the CUCM extensions/gateways.
Are there any settings that need to be configured on the CUCM in order to generate the rest of the CDRs? Should I configure only the Publisher or only the Subscriber to generate CDRs? Is there a way of switching off CMR-type records?
Assuming that CDRs are in fact enabled, you likely have a very long "CDR File Time Interval". You could set it to once a day, which would send over only one CDR file per day (it would be huge, though).
Look here:
CallManager -> Cisco Unified CM Administration -> System -> Enterprise Parameters
CDR File Time Interval - Set this to 1 minute (which is the default)
If you look under the service parameters there is a "CDR Enabled Flag", which is disabled by default. The CDR DB is updated across all systems, so you do not need to set up the offload on multiple servers. CMR records capture call quality data (average jitter, MOS scores, etc.) and are referenced by call ID.
