Multiple SQL statements in flyway.initSql - flyway

Is something like this possible, and if yes, is initSql executed each time a connection is established? For example, I want to make sure that the time zone is set to UTC on each connection. Or is there a better way to execute connection-init SQL statements?
flyway:
  datasources:
    default:
      initSql: SET ROLE test_role; SET time zone 'UTC';  # <--- is this second statement executed?
Thanks!

Yes, initSql supports multiple statements. It is run "to initialize a new database connection immediately after opening it," so yes, it is executed each time a connection is established.
For example, against a test SQL Server database, I ran:
flyway -initSql="select 1; select 2" info
This printed the normal content of flyway info, and after the usual connection info header it also included the results of both statements.
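As for a better way: if you'd rather have the connection pool own this behavior instead of Flyway, most pools expose a similar hook. Here is a minimal sketch assuming HikariCP and its connectionInitSql property; the JDBC URL and credentials are made-up placeholders, and the two statements are the ones from the question:

import java.sql.Connection;
import java.sql.SQLException;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class InitSqlExample {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // hypothetical URL
        config.setUsername("test_user");                            // hypothetical credentials
        config.setPassword("secret");
        // Runs against every new physical connection before it enters the pool.
        // Whether several semicolon-separated statements work depends on the driver.
        config.setConnectionInitSql("SET ROLE test_role; SET time zone 'UTC'");
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            // Every connection handed out here already has the role and time zone set.
        }
    }
}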

Related

Specify a database name in Databricks SQL connection parameters

I am using Airflow 2.0.2 to connect to Databricks using the airflow-databricks-operator. The SQL operator doesn't let me specify the database where the query should be executed, so I have to prefix each table_name with the database_name. I tried reading through the databricks-sql-connector doc here -- https://docs.databricks.com/dev-tools/python-sql-connector.html -- and still couldn't figure out whether I could pass the database name as a parameter in the connection string itself.
I tried setting database/schema/namespace in the **kwargs, but no luck. The query executor keeps saying the table is not found, because the query keeps getting executed against the default database.
Right now it's not supported. The primary reason is that if you have multiple statements, the connector could reconnect between executing them, and the effect of the use statement would be lost. databricks-sql-connector also doesn't allow setting a default database.
For now you can work around that by adding an explicit use <database> statement to the list of SQL statements to execute -- the sql parameter can be a list of strings, not only a single string, e.g. sql=["use my_database", "select * from my_table"] (the database and table names here are just examples).
P.S. I'll have a look; maybe I'll add setting of the default catalog/database in one of the next versions.

How do I properly encode `extra` parameters while using `airflow connections add`?

Problem Statement
When editing the connection in the UI, I can modify the extra field to contain {"no_host_key_check": true}.
But when I attempt to add this connection from the CLI with the following command, which follows the connections documentation format:
airflow connections add local_sftp --conn-uri "sftp://test_user:test_pass@local_spark_sftp_server_1:22/schema?extra='{\"no_host_key_check\": true}"
the connection is added as {"extra": "'{\"no_host_key_check\": true}"}
How do I need to modify my airflow connections add command to properly format this connection configuration?
This is fundamentally a misunderstanding: all query parameters in the connection URI are treated as extra parameters by default by the airflow connections add command, so there is no need to wrap them in an explicit extra field.
airflow connections add local_sftp --conn-uri "sftp://test_user:test_pass@local_spark_sftp_server_1:22/schema?no_host_key_check=true"
This correctly sets the desired parameter.

Specifying a default schema in WebSphere 8.5 Server

I am working on a migration project where we are migrating an application from WebLogic to WebSphere 8.5 server.
In WebLogic, we can specify a default schema while creating a datasource, but I don't see the same option in WebSphere 8.5.
Is there any custom property through which we can set it? I tried currentSchema=MySchema but it did not work.
This answer requires significantly more work, but I'm including it because it's the designed solution for customizing pretty much anything about a connection, including the schema. WebSphere Application Server allows you to provide/extend a DataStoreHelper.
Knowledge Center document on providing a custom DataStoreHelper
In this case, you can extend com.ibm.websphere.rsadapter.Oracle11gDataStoreHelper.
JavaDoc for Oracle11gDataStoreHelper
The following methods will be of interest:
doConnectionSetup, which performs one-time initialization on a connection when it is first created
doConnectionCleanup, which resets connection state before returning it to the connection pool.
When you override doConnectionSetup, you are supplied with the newly created connection, upon which you can do,
super.doConnectionSetup(connection);
Statement stmt = connection.createStatement();
try {
stmt.execute(sqlToUpdateSchema);
} finally {
stmt.close();
}
doConnectionCleanup lets you account for the possibility that application code using the connection might switch the schema to something else; it gives you the opportunity to reset it. Again, you are supplied with a connection, upon which you can do,
super.doConnectionCleanup(connection);
Statement stmt = connection.createStatement();
try {
stmt.execute(sqlToUpdateSchema);
} finally {
stmt.close();
}
Note that in both cases, invoking the corresponding super class method is important to ensure you don't wipe out the database-specific initialization/cleanup code that WebSphere Application Server has built in based on the database.
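Putting those pieces together, a minimal sketch of such a helper might look like the following. The class name and schema are hypothetical, and the exact constructor and method signatures should be verified against the Oracle11gDataStoreHelper JavaDoc; you would register the helper's fully qualified class name as the data store helper class name on the data source:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

import com.ibm.websphere.rsadapter.Oracle11gDataStoreHelper;

// Hypothetical helper class; MySchema is the schema name from the question.
public class MySchemaDataStoreHelper extends Oracle11gDataStoreHelper {

    private static final String SET_SCHEMA_SQL =
            "ALTER SESSION SET CURRENT_SCHEMA=MySchema";

    public MySchemaDataStoreHelper(Properties props) {
        super(props);
    }

    @Override
    public void doConnectionSetup(Connection connection) throws SQLException {
        // Keep WebSphere's built-in Oracle setup, then set the schema once
        // when the physical connection is first created.
        super.doConnectionSetup(connection);
        setSchema(connection);
    }

    @Override
    public boolean doConnectionCleanup(Connection connection) throws SQLException {
        // Keep the built-in cleanup, then reset the schema in case the
        // application switched it before the connection returns to the pool.
        boolean result = super.doConnectionCleanup(connection);
        setSchema(connection);
        return result;
    }

    private void setSchema(Connection connection) throws SQLException {
        Statement stmt = connection.createStatement();
        try {
            stmt.execute(SET_SCHEMA_SQL);
        } finally {
            stmt.close();
        }
    }
}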
As far as I know, WebLogic only allows setting a default schema by setting Init SQL to a SQL string which sets the current schema in the database, such as ALTER SESSION SET CURRENT_SCHEMA=MySchema. So this answer assumes the only way to set the current schema of a data source is via SQL.
In WebSphere, the closest thing to WebLogic's Init SQL is the preTestSQLString property.
The idea of the preTestSQLString property is that WebSphere will execute a very simple SQL statement to verify that it can connect to your database properly when the server is starting. Typical values for this property are really basic things like select 1 from dual, but since you can put in whatever SQL you want, you could set preTestSQLString to ALTER SESSION SET CURRENT_SCHEMA=MySchema.
Steps from the WebSphere documentation (link):
In the administrative console, click Resources > JDBC providers.
Select a provider and click Data Sources under Additional properties.
Select a data source and click WebSphere Application Server data source properties under Additional properties.
Select the PreTest Connections check box.
Type a value for the PreTest Connection Retry Interval, which is measured in seconds. This property determines the frequency with which a new connection request is made after a pretest operation fails.
Type a valid SQL statement for the PreTest SQL String. Use a reliable SQL command, with minimal performance impact; this statement is processed each time a connection is obtained from the free pool.
For example, "select 1 from dual" in Oracle or "select 1" in SQL Server.
Universal Connection Pool (UCP) is a Java connection pool, and the whitepaper "UCP with WebSphere" shows how to set up UCP as a datasource.
For a JDBC datasource the steps are similar, but you can choose the default JDBC driver option.
Check out the paper for reference.
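For completeness, here is a minimal sketch of programmatic UCP setup; the URL, credentials, and schema are made-up placeholders, and as far as I know UCP has no default-schema property of its own, so the ALTER SESSION approach from the other answers still applies:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpSchemaExample {
    public static void main(String[] args) throws SQLException {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//localhost:1521/ORCLPDB1"); // hypothetical URL
        pds.setUser("test_user");                                  // hypothetical credentials
        pds.setPassword("secret");
        try (Connection conn = pds.getConnection();
             Statement stmt = conn.createStatement()) {
            // Set the schema per connection, as in the other answers.
            stmt.execute("ALTER SESSION SET CURRENT_SCHEMA=MySchema");
        }
    }
}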

Is restarting the host instance mandatory when changing stored procedures?

In BizTalk 2010, I am using the SQL Adapter to poll a table, create a message, and initiate the orchestration process.
I modified the stored procedure without changing the schema, but after modifying it I started getting errors and SQL polling stopped. After I restarted the host instance, it started working again.
So my question is: is restarting the host instance mandatory after changing stored procedures?
Error is "The adapter "WCF-Custom" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.AdapterException: The ResultSet returned as part of the Typed Stored Procedure or Typed Polling invocation did not match the metadata available. If this Stored Procedure or Polling Statement can return a variable number of result sets, consider using the un-typed Stored Procedure or un-typed Polling operation instead."
Can anyone please suggest what could be the root cause?
Thanks,
Sasikumar.S
Yes, you will need to restart the Host Instance of the Host configured for your WCF-SQL Handler.
Under the hood, the first time a particular stored proc is called, the WCF-SQL adapter first executes it with the SET FMTONLY ON flag. This causes SQL Server to return just the datatypes of the expected result sets without executing the procedure body itself (you can see the same behavior by running SET FMTONLY ON; EXEC yourProc; in SSMS). The adapter caches these datatypes for the lifetime of the host process.
If you change the shape of the data returned by the stored procedure, the next time it executes the cached metadata will be out of sync, and the adapter will be unable to coerce the results into the expected type. Hence the need to restart the host instance(s).
TL;DR - If you change a stored procedure, you need to restart the WCF-SQL Host Instance.

SQLite multiple insert issue

I'm working with SQLite for my Android application, and after some research I figured out how to insert multiple rows with a single statement using UNION.
But this is quite inefficient. From what I see at http://www.sqlite.org/speed.html, and in a lot of other forums, I can speed up the process using BEGIN/COMMIT statements. But when I use them I get this error:
Cannot start a transaction within a transaction.
Why? What is the most efficient way of doing multiple insert?
Which JDBC driver are you using? Is there only one that's built into the Android distribution?
The problem is most likely with java.sql.Connection#setAutoCommit(). If the connection already has auto-commit enabled (which you can check with Connection#getAutoCommit()), then your JDBC driver is already issuing the SQL commands to start a transaction before your manual attempt to do so, which renders your manual BEGIN redundant and invalid.
If you're looking to control transaction extent, you need to disable auto-commit mode for the Connection by calling
connection.setAutoCommit(false);
and then later, after your individual DML statements have all been issued, either commit or roll back the active transaction via Connection#commit() or Connection#rollback().
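A minimal sketch of that pattern, assuming the Xerial SQLite JDBC driver (the table and data are made up for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchInsertExample {
    public static void main(String[] args) throws SQLException {
        // Hypothetical database file; any SQLite JDBC URL behaves the same way.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db")) {
            conn.setAutoCommit(false); // take over transaction control: one BEGIN for everything below
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS items (name TEXT)");
            }
            try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO items (name) VALUES (?)")) {
                for (int i = 0; i < 1000; i++) {
                    ps.setString(1, "item-" + i);
                    ps.addBatch();
                }
                ps.executeBatch();
                conn.commit(); // a single COMMIT instead of one per INSERT
            } catch (SQLException e) {
                conn.rollback(); // undo the partial batch on failure
                throw e;
            }
        }
    }
}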
I have noticed that some JDBC drivers have a hard time coordinating auto-commit mode with PreparedStatement's batch-related methods. In particular, the Xerial JDBC driver and the Zentus driver on which it's based both fight against a user controlling the auto-commit mode with batch statement execution.
