I am using Flyway to execute Oracle SQL. I can get debug output using -X, but not for SELECT statements.
For example, INSERT queries produce log entries like the following:
Insert into xxx
Update count: x
But SELECT queries don't produce any log output. I tried writing a custom SLF4J logger on top of Flyway, but to no avail. Any thoughts on this, please?
I am using Airflow 2.0.2 to connect to Databricks using the airflow-databricks-operator. The SQL operator doesn't let me specify the database where the query should be executed, so I have to prefix the table_name with database_name. I tried reading through the docs of databricks-sql-connector as well (https://docs.databricks.com/dev-tools/python-sql-connector.html) and still couldn't figure out whether I could pass the database name as a parameter in the connection string itself.
I tried setting database/schema/namespace in the **kwargs, but no luck. The query executor keeps saying that the table is not found, because the query keeps getting executed in the default database.
Right now it's not supported. The primary reason is that if you have multiple statements, the connector could reconnect between their execution, and the effect of the USE would be lost. databricks-sql-connector also doesn't allow setting a default database.
Right now you can work around that by adding an explicit USE <database> statement to the list of SQLs to execute (the sql parameter accepts a list of strings, not only a single string), as sketched below.
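For example, a rough sketch (the import path, operator, and parameter names assume the apache-airflow-providers-databricks package; my_database and my_table are placeholders, and the connection is expected to carry the endpoint details):

from airflow.providers.databricks.operators.databricks_sql import DatabricksSqlOperator

select_data = DatabricksSqlOperator(
    task_id="select_data",
    databricks_conn_id="databricks_default",  # assumed connection id
    sql=[
        "USE my_database",         # runs first, in the same session
        "SELECT * FROM my_table",  # now resolves against my_database
    ],
)

Because both statements go out in one execution, the USE is still in effect when the SELECT runs.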
P.S. I'll take a look; maybe I'll add a setting for the default catalog/database in an upcoming version.
I am experimenting with flyway repair, but I'm curious exactly what commands Flyway is sending to the target database. Is there a way to configure it to echo the commands?
The -X command line option gives a lot more verbose information but doesn't echo the exact SQL used to repair the history table. All the information is there, though, as the log contains entries like
Repairing Schema History table for version 1.0.1 (Description: create table foo, Type: SQL, Checksum: 1456329846) ...
which translates, in the obvious way, into
UPDATE flyway_schema_history SET description=#p0, type=#p1, checksum=#p2 where version=#p3
(at least on SQL Server, where I've just run the profiler).
You could submit it as a feature request: https://github.com/flyway/flyway/issues
We want to be able to pin all SQL executions to a particular schema_version table in schema A. We need this so that we can run SQLs as SYSDBA while Flyway always references A.schema_version to validate checksums and record the results of SQL runs. We tried adding the following settings:
flyway.schemas=A
flyway.table=schema_version
However, we find that if we run info as user B, Flyway shows that it cannot read A.schema_version. What are we missing?
Found the solution. flyway.schemas is case-sensitive, as specified in the Flyway docs. We needed to set flyway.schemas=SCHEMA_NAME with the schema name in caps.
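For example (assuming Oracle here, given the SYSDBA mention, since Oracle stores unquoted identifiers in upper case):

# Works: matches the upper-case name stored in the data dictionary
flyway.schemas=A
# Fails: Flyway looks for a schema literally named "a"
flyway.schemas=a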
I have a long script with SQL statements to run on Teradata. I want the script to keep running until the end, saving the errors in a log file rather than stopping on every error. How can I do it?
Thanks
Assuming you are using Teradata SQL Assistant:
Click on Tools in the menu bar, then Options, then Query. There is a checkbox that says "Stop query execution if an SQL error occurs"; uncheck it to keep the script running past errors.
To get the most recent error, hit F11. Otherwise, from the menu bar click Tools, then Show History. Double-click the row number on the left side of one of the history records and it will bring up a screen with the result messages for each statement. You can also query this sort of info directly from one of the QryLog views in DBC.
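For example, a rough sketch against the DBQL views (this assumes query logging is enabled for your user, and exact column availability varies by Teradata version):

SELECT StartTime, ErrorCode, ErrorText, QueryText
FROM DBC.QryLogV
WHERE UserName = USER
  AND ErrorCode <> 0
ORDER BY StartTime DESC;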
Errors come in multiple types: some can be bypassed and some cannot. For example, with native Teradata Tools and Utilities you can make a script ignore run-time errors, or even syntax errors, but it is generally impossible to ignore network connectivity errors and still get the remaining part of your queries executed.
In such scenarios you generally want to use the BTEQ tool to execute the SQL, since it lets you ignore execution errors. BTEQ is a standard Teradata tool that can be downloaded from the Teradata website for free and is commonly installed by users querying Teradata through plain SQL.
To create a workable BTEQ script, simply copy-paste all of your queries into a plain text file, separate the queries with semicolons, and at the very top of the file add a logon statement in the form below:
.logon Teradata_IP_Address/your_UserName,your_Password;
example script:
.logon 127.0.0.1/dbc,dbc;
/*Some sample queries. Replace these with your actual queries*/
SELECT Current_Timestamp;
CREATE TABLE My_Table (Dummy INTEGER) PRIMARY INDEX (Dummy);
So BTEQ gets you through the execution errors. To avoid network connectivity issues, you ideally want to execute the script on a server that has a constant connection to Teradata and has Teradata Tools and Utilities installed. Such a server may be called an ETL server, landing server, edge node, or managed server (or something else, depending on your environment). You will definitely need login credentials for that server (if you don't already have access). The preferred commands to execute a BTEQ script are:
Windows: bteq < yourscriptname >routine_logfile 2>error_logfile
Linux (bash/ksh): nohup bteq < yourscriptname >routine_logfile 2>error_logfile &
Make sure not to close the command prompt if you are on Windows. On Linux, you can close the current window or even terminate your network session with your ETL server if you use the recommended command.
If you see a warning about an EOL found at the end of your logs, just ignore it; it appears because, for simplicity, I omitted some optional BTEQ statements that ensure a cleaner exit.
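If you want that cleaner exit, the omitted statements are just the standard logoff/quit pair at the very end of the script:

.LOGOFF;
.QUIT;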
I am very new to PL/SQL programming. I tried to write a PL/SQL procedure with some DML statements (INSERT) inside the code. I am not doing any explicit commit after performing the insert operations in the PL/SQL code, but the transaction is getting committed after executing the procedure.
Is this the default behaviour?
How can I control this?
DML statements (INSERT/DELETE/UPDATE/MERGE) don't auto-commit in PL/SQL.
DDL statements (ALTER/CREATE etc.) do commit, and this happens even if the statement fails. If you run a dynamic statement via EXECUTE IMMEDIATE that performs DDL, this will also commit your transaction. And it's been like that [and will remain so] since 2000.
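A minimal sketch of the difference (demo_tab and the procedure name are made-up placeholders):

CREATE OR REPLACE PROCEDURE demo_no_autocommit AS
BEGIN
  INSERT INTO demo_tab (id) VALUES (1);  -- DML: stays uncommitted here
  -- EXECUTE IMMEDIATE 'TRUNCATE TABLE demo_tab';
  -- uncommenting the DDL above would implicitly commit the INSERT
END;
/
-- From a client with autocommit off:
-- EXEC demo_no_autocommit
-- ROLLBACK;  -- undoes the INSERT, showing nothing was committed by the call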
Client interfaces like SQL*Plus have an autocommit feature that can be turned on/off; look for it in the client documentation. Something like:
SET AUTOCOMMIT OFF
You can see the current status of this variable with
SHOW AUTOCOMMIT
which will tell you whether it's on or off.
Go through the SQL*Plus documentation for more variations of autocommit (for example, SET AUTOCOMMIT IMMEDIATE, or SET AUTOCOMMIT n to commit after every n statements).
In the PL/SQL Developer client, you control the autocommit of SQL Window transactions via Preferences.