How can I use multiple databases in a SiteScope monitor?

We have two databases. One is an Oracle 11g DB and the other is a DB2 database. I need to run a query against the Oracle database to get a list of accounts and feed that as parameters into another DB2 query. If the DB2 query returns any results, I need to send an alert. Is this in any way possible with SiteScope (I am fairly new to SiteScope, so be gentle)? It looks like there is only room for one connection string in the SiteScope monitors. Can I create two monitors (one for DB2 and one for Oracle) and use the results of one query as a parameter in the other monitor? It looks like there are some monitor-to-monitor capabilities, but I am still trying to understand what is possible. Thanks!

It is possible to extract the results from the first query using a custom script alert, but it is not possible to then reuse that data in another monitor.
The SiteScope (SiS) Database Query monitor has no feature to include dynamically changing data in its query. Generally speaking, monitor configurations are static and can only be updated by an external agent (a user or an integration).
Staying inside one vendor's toolbox, HP Operations Orchestration (OO) would be an option to achieve your goal. You could either use OO to run the checks and send you alerts in case of a problem, or run the checks and dump the result to a file somewhere, which can then be read by SiS using a Script monitor or Logfile monitor.
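To make the external-check idea concrete, here is a rough sketch of the two-step query pattern such a script would implement. This is only an illustration: sqlite3 in-memory databases stand in for the Oracle and DB2 connections (a real script would open connections with drivers such as cx_Oracle and ibm_db instead), and the accounts/violations tables and column names are invented for the example.

```python
import sqlite3

# Stand-ins for the two databases; a real script would open
# cx_Oracle / ibm_db connections here instead.
oracle = sqlite3.connect(":memory:")
db2 = sqlite3.connect(":memory:")

oracle.execute("CREATE TABLE accounts (account_id TEXT)")
oracle.executemany("INSERT INTO accounts VALUES (?)", [("A1",), ("A2",)])

db2.execute("CREATE TABLE violations (account_id TEXT, detail TEXT)")
db2.execute("INSERT INTO violations VALUES ('A2', 'overdue')")

# Step 1: fetch the account list from the "Oracle" side.
accounts = [row[0] for row in oracle.execute("SELECT account_id FROM accounts")]

# Step 2: feed the list into the "DB2" query as bind parameters.
placeholders = ",".join("?" * len(accounts))
hits = db2.execute(
    f"SELECT account_id, detail FROM violations WHERE account_id IN ({placeholders})",
    accounts,
).fetchall()

# Step 3: format any hits as lines a SiteScope Logfile monitor could watch
# once the script writes them to a file.
alert_lines = [f"ALERT {acct}: {detail}" for acct, detail in hits]
```

Writing alert_lines to a log file that SiS tails would then close the loop: the Logfile monitor fires whenever the script finds matching rows.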

Related

How to transfer data from SQL Server to Informix and vice versa

I want to transfer table data from SQL Server to Informix and vice versa.
The transfer should run on a schedule, and also on demand when the user performs a specific action.
I currently do this operation through delete and insert transactions, and over the web it takes a long time, between 15 and 30 minutes.
How can I do this operation in an easy way, taking performance into consideration?
Say I have
Vacation table in SQL Server and want to transfer all the updated data to the Vacation table in Informix.
and
Permission table in Informix and want to transfer all the updated data to the Permission table in SQL Server.
DISCLAIMER: I am not an SQL Server DBA. However, I have been an Informix DBA for over ten years and can make some recommendations as to its performance.
Disclaimer aside, it sounds like you already have a functional application, but the performance is a show-stopper and that is where you are mainly looking for advice.
There are some technical pieces of information that would be helpful to know, but in their absence, I'm going to make the following assumptions about your environment and application. Please comment or edit your question if I am wrong on any of these.
Database server versions. From the tags, it appears you are using SQL Server 2012. However, I cannot determine the Informix server edition and version, so I will assume you are running at least IDS 11.50 or greater.
How the data is being exchanged currently. Are you connecting directly from your .NET application to Informix? I would assume that is the case with SQL Server and will make the same assumption for your Informix connection as well.
Table structures. I assume you have proper indexing on the tables. On the Informix side, dbschema -d dbname -t tablename will give the basic schema.
If you haven't tried exporting the data to a flat file, and as long as you don't have any compliance concerns doing this, I would suggest loading the data from a delimited file. (Informix normally deals with pipe-delimited files, so you'll either need to change the delimiter on the SQL Server side to a pipe | or adjust it on the Informix import side.) On the Informix end, this would be a statement like:
LOAD FROM 'source_file_from_sql_server' DELIMITER '|' INSERT INTO vacation (field1, field2, ..)
For reusability, I would recommend putting this in a stored procedure. Just wrap that load statement inside a BEGIN WORK; and COMMIT WORK; to keep your transactional integrity. MichaƂ Niklas suggested some ways to track changes. If there is any correlation between the transfer of data to the vacation table in Informix and the permission table back in SQL Server, I would propose another option, which is adding a trigger to the vacation table so that you write all new values to a staging table.
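The trigger-to-staging-table idea can be sketched briefly. This is only an illustration of the pattern, not Informix syntax: sqlite3 stands in for the database, and the vacation table's emp_id/days columns are invented for the example (a real Informix trigger would use its own CREATE TRIGGER dialect).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE vacation (emp_id INTEGER, days INTEGER);
CREATE TABLE vacation_staging (emp_id INTEGER, days INTEGER);

-- Mirror every new row into the staging table for later transfer.
CREATE TRIGGER vacation_ins AFTER INSERT ON vacation
BEGIN
    INSERT INTO vacation_staging VALUES (NEW.emp_id, NEW.days);
END;
""")

con.execute("INSERT INTO vacation VALUES (1, 20)")
con.execute("INSERT INTO vacation VALUES (2, 15)")

# The staging table now holds exactly the rows written since it was
# last emptied, ready to be exported to the other server.
staged = con.execute("SELECT emp_id, days FROM vacation_staging").fetchall()
```

The transfer job then reads (and clears) vacation_staging instead of scanning the whole vacation table.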
With the import logic in a stored procedure, you can fire the import on demand:
EXECUTE PROCEDURE vacation_import();
You also mentioned the need to schedule the import, which can be accomplished with Informix's "dbcron" (the built-in task scheduler). Using this feature, you'll create a scheduled task that executes vacation_import() periodically. If you haven't used this feature before, the OpenAdmin Tool (OAT) will be helpful. You will also want to do some housekeeping with the delimited files; this can be addressed with the system() call, which you can make from stored procedures in Informix.
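The load-inside-a-transaction pattern (the LOAD FROM ... wrapped in BEGIN WORK / COMMIT WORK above) can be sketched like this. Again only an illustration: sqlite3 stands in for Informix, and the file contents and vacation columns are invented for the example.

```python
import csv
import io
import sqlite3

# A pipe-delimited export, as Informix's LOAD would expect.
export = io.StringIO("1|Alice|20\n2|Bob|15\n")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vacation (emp_id INTEGER, name TEXT, days INTEGER)")

# Load the whole file inside one transaction, like wrapping
# LOAD FROM ... in BEGIN WORK / COMMIT WORK: either every row
# lands, or none do.
rows = list(csv.reader(export, delimiter="|"))
with con:  # commits on success, rolls back on any error
    con.executemany("INSERT INTO vacation VALUES (?, ?, ?)", rows)

count = con.execute("SELECT COUNT(*) FROM vacation").fetchone()[0]
```

The single-transaction wrapper is what keeps a half-failed load from leaving a partially imported table behind.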
Some ideas:
Add a was_transferred column to the source tables, setting its default value to 0 (you can use 0/1 instead of false/true).
From the source table, select data with was_transferred=0.
After transferring the data, update each selected source row, setting its was_transferred to 1.
Make a table syncro_info with fields like date_start and date_stop. If you discover that there is a record with date_stop IS NULL, it means a transfer is already in progress. This will protect you against synchronizing the same data twice.
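The flag-based approach above can be sketched end to end. This is only an illustration: sqlite3 databases stand in for SQL Server and Informix, and the vacation table's columns are invented for the example.

```python
import sqlite3

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute(
    "CREATE TABLE vacation"
    " (emp_id INTEGER, days INTEGER, was_transferred INTEGER DEFAULT 0)"
)
target.execute("CREATE TABLE vacation (emp_id INTEGER, days INTEGER)")

source.executemany(
    "INSERT INTO vacation (emp_id, days) VALUES (?, ?)", [(1, 20), (2, 15)]
)

def sync():
    # Select only rows not yet transferred ...
    pending = source.execute(
        "SELECT emp_id, days FROM vacation WHERE was_transferred = 0"
    ).fetchall()
    target.executemany("INSERT INTO vacation VALUES (?, ?)", pending)
    # ... then flag them so the next run skips them.
    source.execute(
        "UPDATE vacation SET was_transferred = 1 WHERE was_transferred = 0"
    )
    return len(pending)

first = sync()   # transfers both rows
second = sync()  # nothing left to transfer
```

Because only the unflagged rows move, repeated runs are cheap and idempotent; the syncro_info lock table would sit around the sync() call to stop two runs overlapping.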

How to validate data in Teradata from Oracle

My source data is in Oracle and my target data is in Teradata. Can you please suggest a quick and easy way to validate the data? There are 900 tables. If possible, please provide syntax too.
There is a product known as the Teradata Gateway that works with Oracle and allows you to access Teradata in a "heterogeneous" manner. However, this may not be the most effective way to compare the data.
Ultimately, your requirements sound more process driven; to do this effectively, the source data should be landed in stage tables on the Teradata environment after your ETL/ELT process has completed, and compared/validated there.
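For a first-pass validation across many tables, one common approach is to compare cheap per-table profiles (row count plus a checksum-style aggregate) on both sides and only dig into tables that disagree. The sketch below is only an illustration: sqlite3 stands in for both Oracle and Teradata (real code would use drivers such as cx_Oracle and teradatasql), the accounts table and balance column are invented, and for 900 tables you would drive the loop from the catalog views (e.g. Oracle's ALL_TABLES, Teradata's DBC.TablesV) rather than a hard-coded list.

```python
import sqlite3

# Stand-ins for the Oracle source and Teradata target.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")

for con in (src, tgt):
    con.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
    con.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 250)])

# Drift one row on the target so the check has something to catch.
tgt.execute("UPDATE accounts SET balance = 999 WHERE id = 2")

def profile(con, table):
    # Cheap validation signature: row count plus a numeric column sum.
    return con.execute(
        f"SELECT COUNT(*), COALESCE(SUM(balance), 0) FROM {table}"
    ).fetchone()

tables = ("accounts",)  # in practice, driven from the catalog views
mismatches = [t for t in tables if profile(src, t) != profile(tgt, t)]
```

Matching profiles don't prove the tables are identical, but mismatching ones reliably flag the tables worth a full row-by-row comparison.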

is it possible to insert rows from a local table into a remote table in postgresql?

I have two postgresql databases, one on my local machine and one on a remote machine. If I'm connected to the local database (with psql), how do I execute a statement that inserts rows into a table on the remote database by selecting rows from a table on the local database? (I've seen this asked a handful of times, like here, here and here, but I haven't yet found a satisfactory answer or anyone saying definitively that it's not possible).
Specifically, assume I have tables remotetable and localtable, each with a single column columnA. I can run this statement successfully:
select dblink_exec('myconnection', 'insert into remotetable (columnA) values (1);');
but what I want to do is this:
select dblink_exec('myconnection', 'insert into remotetable (columnA) select columnA from localtable;');
But this fails with: relation "localtable" does not exist, presumably because localtable does not exist on the remote database.
Is it possible to do what I'm trying to do? If so, how do I indicate that localtable is, in fact, local? All of the examples I've seen for dblink-exec show inserts with static values, not with the results of a local query.
Note: I know how to query data from a remote table and insert into a local table, but I'm trying to move data in the other direction.
If so, how do I indicate that localtable is, in fact, local?
It's not possible, because dblink acts as an SQL client to the remote server. That's why the queries sent through dblink_exec must be self-contained: they can do no more than any other query sent by any SQL application, and every object they name is resolved on the remote server.
That is, unless you use another functionality: a Foreign Data Wrapper with the postgres_fdw driver. This is a more sophisticated way to achieve server-to-server querying, in which the SQL engine itself has a notion of foreign and local tables; once remotetable is mapped as a foreign table on the local side, a plain INSERT INTO remotetable SELECT columnA FROM localtable works.

Balance reads in a MongoDB replica set with rmongodb

I have a MongoDB as a replica set with one master and one slave. I am using RmongoDB and I want to explicitly send a query to each machine using a parallelized for loop.
I successfully created a connection to all the hosts:
mongo <- mongo.create(host=c("mastermng01:27001","slavemng01:27001"),
name="myRS",
username="user",
password="pass",
db="myDB")
ns_actual <- "myDB.MyCollection"
Then, I run a query like this:
cursor <- mongo.find(mongo,ns=ns_actual,query=list(var1="value"),
options=mongo.find.slave.ok)
So far, R knows the slave host and is allowed to query it. But when is it actually going to do that? Can I force R to balance the queries among the hosts?
Sorry, no solution so far. The underlying C connector does not support this functionality. There is a newer mongo C library available that supports it, but moving rmongodb to that library will take a lot of time, which is currently not available.

Aggregating multiple distributed MySQL databases

I have a project that requires us to maintain several MySQL databases on multiple computers. They will have identical schemas.
Periodically, each of those databases must send their contents to a master server, which will aggregate all of the incoming data. The contents should be dumped to a file that can be carried via flash drive to an internet-enabled computer to send.
Keys will be namespaced, so there shouldn't be any conflicts there, but I'm not totally sure of an elegant way to design this. I'm thinking of timestamping every row and running the query "SELECT * FROM [table] WHERE timestamp > last_backup_time" on each table, then dumping the results to a file and bulk-loading them at the master server.
The distributed computers will NOT have internet access. We're in a very rural part of a 3rd-world country.
Any suggestions?
Your
SELECT * FROM [table] WHERE timestamp > last_backup_time
will miss DELETEd rows.
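A minimal sketch makes the gap obvious. This is only an illustration: sqlite3 stands in for MySQL, and the table t with an integer ts timestamp column is invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, ts INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 100), (2, 200)])

last_backup_time = 150

# A row inserted after the last backup is picked up ...
con.execute("INSERT INTO t VALUES (3, 300)")
# ... but a row deleted after the last backup leaves no trace at all.
con.execute("DELETE FROM t WHERE id = 1")

changed = con.execute(
    "SELECT id FROM t WHERE ts > ?", (last_backup_time,)
).fetchall()
```

The incremental query returns the new and recent rows, but nothing in the result records that id 1 was deleted, so the master's copy silently keeps it.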
What you probably want to do is use MySQL replication via USB stick. That is, enable the binlog on your source servers and make sure the binlog is not thrown away automatically. Copy the binlog files to USB stick, then PURGE MASTER LOGS TO ... to erase them on the source server.
On the aggregation server, turn the binlog into an executable script using the mysqlbinlog command, then import that data as an SQL script.
The aggregation server must have a copy of each source server's database, but it can keep that copy under a different schema name, as long as your SQL uses only unqualified table names (never the schema.table syntax to refer to a table). The import of the mysqlbinlog-generated script (with a proper USE command prefixed) will then mirror the source server's changes on the aggregation server.
Aggregation across all databases can then be done using fully qualified table names (i.e. using schema.table syntax in JOINs or INSERT ... SELECT statements).
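The per-server-schema layout and the fully qualified aggregation query can be sketched as follows. This is only an illustration: sqlite3's ATTACH DATABASE stands in for MySQL's multiple schemas on one server, and the site_a/site_b schemas and sales table are invented for the example.

```python
import os
import sqlite3
import tempfile

# Two "source server" copies, each kept under its own schema name,
# like the per-server schemas on the MySQL aggregation server.
con = sqlite3.connect(":memory:")
tmpdir = tempfile.mkdtemp()

for name, amount in (("site_a", 10), ("site_b", 32)):
    path = os.path.join(tmpdir, f"{name}.db")
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE sales (amount INTEGER)")
    db.execute("INSERT INTO sales VALUES (?)", (amount,))
    db.commit()
    db.close()
    con.execute(f"ATTACH DATABASE '{path}' AS {name}")

# Fully qualified schema.table names let one query span all sources.
total = con.execute(
    "SELECT SUM(amount) FROM ("
    "  SELECT amount FROM site_a.sales"
    "  UNION ALL"
    "  SELECT amount FROM site_b.sales"
    ")"
).fetchone()[0]
```

The replicated per-schema copies stay untouched by the aggregation; only the qualified cross-schema queries combine them.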
