is it possible to insert rows from a local table into a remote table in postgresql? - postgresql-9.1

I have two PostgreSQL databases, one on my local machine and one on a remote machine. If I'm connected to the local database (with psql), how do I execute a statement that inserts rows into a table on the remote database by selecting rows from a table on the local database? (I've seen this asked a handful of times, but I haven't yet found a satisfactory answer or anyone saying definitively that it's not possible.)
Specifically, assume I have tables remotetable and localtable, each with a single column columnA. I can run this statement successfully:
select dblink_exec('myconnection', 'insert into remotetable (columnA) values (1);');
but what I want to do is this:
select dblink_exec('myconnection', 'insert into remotetable (columnA) select columnA from localtable;');
But this fails with: relation "localtable" does not exist, presumably because localtable does not exist on the remote database.
Is it possible to do what I'm trying to do? If so, how do I indicate that localtable is, in fact, local? All of the examples I've seen for dblink_exec show inserts with static values, not with the results of a local query.
Note: I know how to query data from a remote table and insert into a local table, but I'm trying to move data in the other direction.

If so, how do I indicate that localtable is, in fact, local?
It's not possible, because dblink acts as an SQL client to the remote server. That's why the queries sent through dblink_exec must be self-contained: they can do no more than any other query sent by any SQL application, and every object they reference is resolved on the remote server.
That is, unless you use a different piece of functionality: a Foreign Data Wrapper with the postgres_fdw driver. This is a more sophisticated way to achieve server-to-server querying, in which the SQL engine itself has a notion of foreign and local tables.
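For concreteness, here is a minimal sketch of the postgres_fdw route. Note that postgres_fdw ships as a contrib extension starting with PostgreSQL 9.3 (the question is tagged 9.1); the server name, credentials and column type below are assumptions:

CREATE EXTENSION postgres_fdw;

CREATE SERVER remoteserver
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', port '5432', dbname 'remotedb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remoteserver
    OPTIONS (user 'remoteuser', password 'secret');

CREATE FOREIGN TABLE remotetable_ft (columnA integer)
    SERVER remoteserver
    OPTIONS (table_name 'remotetable');

-- The remote table can now be the target of a plain INSERT ... SELECT:
INSERT INTO remotetable_ft (columnA)
SELECT columnA FROM localtable;

As a side note, since dblink_exec just sends self-contained SQL strings, those strings can also be built from local rows, one statement per row (a sketch using format(), available since 9.1):

SELECT dblink_exec('myconnection',
       format('insert into remotetable (columnA) values (%L);', columnA))
FROM localtable;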

Related

How to transfer data from SQL Server to Informix and vice versa

I want to transfer table data from SQL Server to Informix and vice versa.
The transfer should run on a schedule, and sometimes when the user performs a specific action.
I currently do this operation through delete and insert transactions, and it takes a long time over the web, between 15 and 30 minutes.
How can I do this operation in an easy way, taking performance into consideration?
Say I have
a Vacation table in SQL Server and want to transfer all the updated data to the Vacation table in Informix,
and
a Permission table in Informix and want to transfer all the updated data to the Permission table in SQL Server.
DISCLAIMER: I am not an SQL Server DBA. However, I have been an Informix DBA for over ten years and can make some recommendations as to its performance.
Disclaimer aside, it sounds like you already have a functional application, but the performance is a show-stopper and that is where you are mainly looking for advice.
There are some technical pieces of information that would be helpful to know, but in their absence, I'm going to make the following assumptions about your environment and application. Please comment or edit your question if I am wrong on any of these.
Database server versions. From the tags, it appears you are using SQL Server 2012. However, I cannot determine the Informix server version. I will assume you are running IDS 11.50 or greater.
How the data is being exchanged currently. Are you connecting directly from your .NET application to Informix? I would assume that is the case with SQL Server and will make the same assumption for your Informix connection as well.
Table structures. I assume you have proper indexing on the tables. On the Informix side, dbschema -d dbname -t tablename will give the basic schema.
If you haven't tried exporting the data to CSV, and as long as you don't have any compliance concerns with doing so, I would suggest loading the data from a delimited file. (Informix normally deals with pipe-delimited files, so you'll either need to switch the delimiter on the SQL Server export side to a pipe |, or adjust it on the Informix import side.) On the Informix end, this would be a
LOAD FROM 'source_file_from_sql_server' DELIMITER '|' INSERT INTO vacation (field1, field2, ..)
For reusability, I would recommend putting this in a stored procedure. Just wrap that load statement inside a BEGIN WORK; and COMMIT WORK; to keep your transactional integrity; a sketch follows below. MichaƂ Niklas suggested some ways to track changes. If there is any correlation between the transfer of data to the Vacation table in Informix and the Permission table back in SQL Server, I would propose another option: add a trigger to the Vacation table so that all new values are also written to a staging table.
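As a sketch, the transactional load could look like the following DB-Access script (the file name and column list are assumptions; note that LOAD itself is a DB-Access statement, so it runs from a script, while a pure stored-procedure variant would typically use INSERTs or an external table instead):

-- vacation_load.sql: run with  dbaccess yourdb vacation_load.sql
BEGIN WORK;

LOAD FROM '/data/transfer/vacation.unl' DELIMITER '|'
    INSERT INTO vacation (field1, field2);

COMMIT WORK;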
With the import logic in a stored procedure, you can fire the import on demand:
EXECUTE PROCEDURE vacation_import();
You also mentioned the need to schedule the import, which can be accomplished with Informix's "dbcron" (the built-in task scheduler). Using this feature, you'll create a scheduled task that executes vacation_import() periodically. If you haven't used this feature before, the OpenAdmin Tool (OAT) will be helpful. You will also want to do some housekeeping with the transfer files. This can be addressed with the SYSTEM call, which you can make from stored procedures in Informix.
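For the housekeeping part, a minimal SPL sketch might look like this (the paths are hypothetical):

-- Archive the processed transfer file after a successful import
CREATE PROCEDURE vacation_housekeeping()
    SYSTEM 'mv /data/transfer/vacation.unl /data/transfer/archive/';
END PROCEDURE;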
Some ideas:
Add a was_transferred column to the source tables, with a default value of 0 (you can use 0/1 instead of false/true).
From the source table, select the rows with was_transferred=0.
After transferring the data, update the selected source rows, setting was_transferred to 1 (a sketch of this pattern follows below).
Make a syncro_info table with fields like date_start and date_stop. If you find a record with date_stop IS NULL, it means a transfer is already in progress. This will protect you against synchronizing data twice.
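A rough SQL sketch of the flag-column pattern (table and column names are hypothetical):

-- 1. Pick up the rows that have not been transferred yet
SELECT id, emp_id, start_date, end_date
FROM vacation
WHERE was_transferred = 0;

-- 2. After those rows have been copied to the other server, mark them
UPDATE vacation
SET was_transferred = 1
WHERE was_transferred = 0;

In practice you would mark only the ids that were actually transferred, and check syncro_info first so two transfers cannot overlap.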

Invalid column definition error when using four part name to access Oracle DB as SQL Server linked server

I have setup a linked server in SQL Server 2008 R2 in order to access an Oracle 11g database. The MSDASQL provider is used to connect to the linked server through the Oracle Instant Client ODBC driver. The connection works well when using the OPENQUERY with the below syntax:
SELECT *
FROM OPENQUERY(LINKED_SERVER, 'SELECT * FROM SCHEMA.TABLE')
However, when I try to use a four part name using the below syntax:
SELECT *
FROM LINKED_SERVER..SCHEMA.TABLE
I receive the following error:
Msg 7318, Level 16, State 1, Line 1
The OLE DB provider "MSDASQL" for linked server "LINKED_SERVER" returned an invalid column definition for table ""SCHEMA"."TABLE"".
Does anyone have any insight into what may be causing the four part name query to fail while the OPENQUERY one works without any problems?
The correct path to follow is to use the OPENQUERY function, because your linked server is Oracle: the four part name syntax will work fine for MSSQL servers, essentially because they understand T-SQL.
With very simple queries, a four part name can accidentally work, but not often in a real scenario. In your case, the SELECT * is returning all the columns, and one of the column definitions is not compatible with SQL Server. Try another table, or try selecting a single simple column (e.g. a CHAR or a NUMBER); maybe it will work without problems.
In any case, using distributed queries can be tricky sometimes. The database itself does some optimizations before executing commands, so it is important for the database to know what it can and cannot do. If the DB thinks the linked server is MSSQL, it may take some action that does not work with Oracle.
When using four part name syntax with a linked DB other than MSSQL, you will have other problems as well, for example with database built-in functions (e.g. Oracle's to_date() function will not work, because MSSQL would want to use its own convert() function, and so on).
So again, if the linked server is not MSSQL, the right choice is to use OPENQUERY, passing it a query whose syntax is valid in the linked server's SQL dialect.
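For example, the string passed to OPENQUERY is executed by Oracle as-is, so Oracle functions are fine inside it (the column names and date format below are assumptions):

SELECT *
FROM OPENQUERY(LINKED_SERVER,
    'SELECT employee_id,
            TO_CHAR(hire_date, ''YYYY-MM-DD'') AS hire_date
       FROM SCHEMA.EMPLOYEES
      WHERE hire_date >= TO_DATE(''2020-01-01'', ''YYYY-MM-DD'')');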
If you use the OLE DB provider for Oracle, you can query without using OPENQUERY.

How to create a small and simple database using Oracle 11g and SQL Developer?

How to create a small and simple database using Oracle 11g and SQL Developer?
I am seeing too many errors and I cannot find any way to make a simple database.
For example
create database company;
Caused the following error:
Error starting at line 1 in command:
create database company
Error at Command Line:1 Column:0
Error report:
SQL Error: ORA-01501: CREATE DATABASE failed
ORA-01100: database already mounted
01501. 00000 - "CREATE DATABASE failed"
*Cause: An error occurred during create database
*Action: See accompanying errors.
EDIT-
This is completely different from MySQL and MS-SQL that I am familiar with.
Not as intuitive as I was expecting.
First off, what Oracle calls a "database" is generally different than what most other database products call a "database". A "database" in MySQL or SQL Server is much closer to what Oracle calls a "schema" which is the set of objects owned by a particular user. In Oracle, you would generally only have one database per server (a large server might have a handful of databases on it) where each database has many different schemas. If you are using the express edition of Oracle, you are only allowed to have 1 database per server. If you are connected to Oracle via SQL Developer, that indicates that you already have the Oracle database created.
Assuming that you really want to create a schema, not a database (using Oracle terminology), you would create the user
CREATE USER company
IDENTIFIED BY <<password>>
DEFAULT TABLESPACE <<tablespace to use for objects by default>>
TEMPORARY TABLESPACE <<temporary tablespace to use>>
You would then assign the user whatever privileges you wanted
GRANT CREATE SESSION TO company;
GRANT CREATE TABLE TO company;
GRANT CREATE VIEW TO company;
...
Once that is done, you can connect to the (existing) database as COMPANY and create objects in the COMPANY schema.
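For example, after connecting as COMPANY (e.g. CONNECT company/<<password>> in SQL*Plus), any objects you create land in the COMPANY schema; the table below is just an illustration:

CREATE TABLE employees (
    id   NUMBER PRIMARY KEY,
    name VARCHAR2(100) NOT NULL
);

INSERT INTO employees (id, name) VALUES (1, 'Alice');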
Actually, the answer from Justin above could not be more incorrect. SQL Server and MySQL are for smallish databases; Oracle is for large enterprise databases, thus the difference in its structure. And it is common to have more than one Oracle database on a server, provided that the server is robust enough to handle the load. If you received the error posted above, then you obviously are trying to create a new Oracle database, and if you are doing that, then you probably already understand the structure of an Oracle database. The likely scenario is that you attempted to create a database using dbca, it initially failed, but the binaries were created. You then adjusted your initial parameters and re-tried creating the database using dbca. However, the utility sees the binaries and folder structure for the database that you are creating, so it thinks that the database already exists but is not mounted. Dropping the database and removing the binaries and folders, as well as any other cleanup of the initial attempt, should be done first; then try again.
From your question description, I think you meant to create a database schema, not a database instance. In Oracle terminology, a database instance is backed by a set of files in the file system, more like the data files in MySQL, whereas a database in MySQL is somewhat equivalent to Oracle's schema.
To create a schema in Oracle: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_6014.htm
To create a database instance in Oracle (I personally prefer DBCA):
https://docs.oracle.com/cd/E11882_01/server.112/e25494/create.htm#ADMIN11068
Note that the Oracle Express edition does not support mounting more than one database instance at a time.

Trace the Cause for Update of SQL Table

I have a table Product which has a Quantity column. This table gets updated through a .NET application using a stored procedure, based on a flag variable. Now I have a problem reported by a user: even though the flag variable is not set, the table is getting updated with new values.
Now I need to isolate the cause of the issue. How will I check which update, and through which application, is modifying this table? I have no idea about it.
What is the best approach to resolve this issue?
Assuming you are using SQL Server:
You can monitor calls to SQL Server using SQL Server Profiler. You can set up a filter to monitor queries affecting the Product table. The log will show what the query looked like, when the query was executed, the database user executing the query, the name of the application (if that is specified in the connection string) and a bunch of other things.
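If running a Profiler trace on a busy production server is impractical, a lightweight alternative is an audit trigger that records who and what changed the quantity. This is only a sketch; all table and column names are assumptions:

CREATE TABLE ProductAudit (
    AuditId     INT IDENTITY PRIMARY KEY,
    ProductId   INT,
    OldQuantity INT,
    NewQuantity INT,
    LoginName   SYSNAME       DEFAULT SUSER_SNAME(),
    AppName     NVARCHAR(128) DEFAULT APP_NAME(),
    HostName    NVARCHAR(128) DEFAULT HOST_NAME(),
    ChangedAt   DATETIME      DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Product_Quantity_Audit
ON Product
AFTER UPDATE
AS
BEGIN
    -- Log old and new quantity for every row whose Quantity changed
    IF UPDATE(Quantity)
        INSERT INTO ProductAudit (ProductId, OldQuantity, NewQuantity)
        SELECT d.ProductId, d.Quantity, i.Quantity
        FROM deleted d
        JOIN inserted i ON i.ProductId = d.ProductId;
END;

Once a spurious update shows up in ProductAudit, the AppName and HostName columns point to the offending application.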

Aggregating multiple distributed MySQL databases

I have a project that requires us to maintain several MySQL databases on multiple computers. They will have identical schemas.
Periodically, each of those databases must send their contents to a master server, which will aggregate all of the incoming data. The contents should be dumped to a file that can be carried via flash drive to an internet-enabled computer to send.
Keys will be namespace'd, so there shouldn't be any conflict there, but I'm not totally sure of an elegant way to design this. I'm thinking of timestamping every row and running the query "SELECT * FROM [table] WHERE timestamp > last_backup_time" on each table, then dumping this to a file and bulk-loading it at the master server.
The distributed computers will NOT have internet access. We're in a very rural part of a 3rd-world country.
Any suggestions?
Your
SELECT * FROM [table] WHERE timestamp > last_backup_time
will miss DELETEd rows.
What you probably want to do is use MySQL replication via USB stick. That is, enable the binlog on your source servers and make sure the binlog is not thrown away automatically. Copy the binlog files to USB stick, then PURGE MASTER LOGS TO ... to erase them on the source server.
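The source-side steps might look like this (the binlog file name is just an example):

-- On each source server:
FLUSH LOGS;                               -- close the current binlog and start a new one
SHOW BINARY LOGS;                         -- list the closed files to copy to the USB stick
-- after the files have been copied off the machine:
PURGE MASTER LOGS TO 'mysql-bin.000042';  -- erase everything before the named file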
On the aggregation server, turn the binlog into an executable script using the mysqlbinlog command, then import that data as an SQL script.
The aggregation server must have a copy of each source server's database, but it can keep that copy under a different schema name, as long as your SQL always uses unqualified table names (never uses schema.table syntax to refer to a table). The import of the mysqlbinlog-generated script (with a proper USE command prefixed) will then mirror the source server's changes on the aggregation server.
Aggregation across all databases can then be done using fully qualified table names (i.e. using schema.table syntax in JOINs or INSERT ... SELECT statements).
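For instance, an aggregate report on the master could combine the per-source schemas with fully qualified names (schema and table names are hypothetical):

SELECT 'site_a' AS source, s.id, s.amount FROM site_a.sales AS s
UNION ALL
SELECT 'site_b' AS source, s.id, s.amount FROM site_b.sales AS s;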