Airflow - SQL Server connection

I have a question about changing the Airflow backend connection from SQLite to SQL Server. After passing in the correct connection string for sql_alchemy_conn, I run this command: airflow initdb. I get the following error:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]A table can only have one timestamp column. Because table 'task_reschedule' already has one, the column 'start_date' cannot be added. (2738) (SQLExecDirectW)") [SQL: '\nCREATE TABLE task_reschedule (\n\tid INTEGER NOT NULL IDENTITY(1,1), \n\ttask_id VARCHAR(250) NOT NULL, \n\tdag_id VARCHAR(250) NOT NULL, \n\texecution_date TIMESTAMP NOT NULL, \n\ttry_number INTEGER NOT NULL, \n\tstart_date TIMESTAMP NOT NULL, \n\tend_date TIMESTAMP NOT NULL, \n\tduration INTEGER NOT NULL, \n\treschedule_date TIMESTAMP NOT NULL, \n\tPRIMARY KEY (id), \n\tCONSTRAINT task_reschedule_dag_task_date_fkey FOREIGN KEY(task_id, dag_id, execution_date) REFERENCES task_instance (task_id, dag_id, execution_date)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/f405)

So this works for me:
In the file 0a2a5b66e19d_add_task_reschedule_table.py, add this:
def mysql_datetime():
    return mysql.DATETIME(timezone=True)
and replace any line that uses timestamp(), such as the one below:
sa.Column('execution_date', timestamp(), nullable=False, server_default=None),
with this:
sa.Column('execution_date', mysql_datetime(), nullable=False, server_default=None),
Once I made this change, the above error disappears, but I am not sure whether there are any other unintended consequences. If there are, I will update here or just resort to using a MySQL database.
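For context, TIMESTAMP in SQL Server is a synonym for ROWVERSION rather than a date/time type, which is why the table is limited to one such column. A minimal sketch of the task_reschedule DDL that SQL Server would accept, assuming the date/time columns (and the task_instance columns they reference) are created as DATETIME2 instead:
CREATE TABLE task_reschedule (
    id INTEGER NOT NULL IDENTITY(1,1),
    task_id VARCHAR(250) NOT NULL,
    dag_id VARCHAR(250) NOT NULL,
    execution_date DATETIME2 NOT NULL,  -- DATETIME2 instead of TIMESTAMP/ROWVERSION
    try_number INTEGER NOT NULL,
    start_date DATETIME2 NOT NULL,
    end_date DATETIME2 NOT NULL,
    duration INTEGER NOT NULL,
    reschedule_date DATETIME2 NOT NULL,
    PRIMARY KEY (id),
    CONSTRAINT task_reschedule_dag_task_date_fkey
        FOREIGN KEY (task_id, dag_id, execution_date)
        REFERENCES task_instance (task_id, dag_id, execution_date)
);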

Related

Kafka JDBC source connector timestamp mode failing for sqlite3

I tried to set up a database with two tables in SQLite. One of my tables has a timestamp column. I am trying to implement timestamp mode to capture incremental changes in the DB. Kafka Connect is failing with the below error:
ERROR Failed to get current time from DB using Sqlite and query 'SELECT
CURRENT_TIMESTAMP'
(io.confluent.connect.jdbc.dialect.SqliteDatabaseDialect:471)
java.sql.SQLException: Error parsing time stamp
Caused by: java.text.ParseException: Unparseable date: "2019-02-05 02:05:29"
does not match (\p{Nd}++)\Q-\E(\p{Nd}++)\Q-\E(\p{Nd}++)\Q
\E(\p{Nd}++)\Q:\E(\p{Nd}++)\Q:\E(\p{Nd}++)\Q.\E(\p{Nd}++)
Many thanks for the help
Config:
name=test-query-sqlite-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlite:employee.db
query=SELECT users.id, users.name, transactions.timestamp, transactions.payment_type FROM users JOIN transactions ON (users.id = transactions.user_id)
mode=timestamp
timestamp.column.name=timestamp
topic.prefix=test-joined
DDL:
CREATE TABLE transactions(id integer primary key not null,
payment_type text not null,
timestamp DATETIME DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
user_id int not null,
constraint fk foreign key(user_id) references users(id)
);
CREATE TABLE users (id integer primary key not null,name text not null);
The Kafka Connect JDBC connector easily detects the changes in the timestamp if the values of the 'timestamp' column are stored as UNIX timestamps.
sqlite> CREATE TABLE transact(timestamp TIMESTAMP DEFAULT (STRFTIME('%s', 'now')) not null,
...> id integer primary key not null,
...> payment_type text not null);
sqlite>
The values can be inserted as:
sqlite> INSERT INTO transact(timestamp,payment_type,id) VALUES (STRFTIME('%s', 'now'),'cash',1);
The timestamp-related changes are then detected by the Kafka JDBC source connector, and the records can be consumed as follows:
kafka-console-consumer --bootstrap-server localhost:9092 --topic jdbc-transact --from-beginning
{"timestamp":1562321516,"id":2,"payment_type":"card"}
{"timestamp":1562321790,"id":1,"payment_type":"online"}
I've reproduced this, and it is already logged as an issue for the JDBC Source connector. You can monitor it here: https://github.com/confluentinc/kafka-connect-jdbc/issues/219
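For reference, the pattern in the error above requires a fractional-seconds component, while SQLite's CURRENT_TIMESTAMP returns only second precision; a quick sketch to compare the two in the sqlite3 shell:
SELECT CURRENT_TIMESTAMP;                      -- e.g. 2019-02-05 02:05:29 (no fractional part)
SELECT STRFTIME('%Y-%m-%d %H:%M:%f', 'now');   -- e.g. 2019-02-05 02:05:29.123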

Executing SQL script in server ERROR: Error 1064

I'm trying to create a database from the E/R diagram I just created using MySQL Workbench. Can someone please help?
Executing SQL script in server
ERROR: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near
CONSTRAINT `IDProveedor`
FOREIGN KEY (`IDproveedor`)
REFERENCES `Repu
at line 11
SQL Code:
-- -----------------------------------------------------
-- Table `Repuestos`.`Articulo`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `Repuestos`.`Articulo` (
`IDArticulo` INT NOT NULL AUTO_INCREMENT,
`Nombre` VARCHAR(45) NOT NULL,
`IDproveedor` INT NOT NULL,
`Articulocol` VARCHAR(45) NOT NULL,
`Valor_Unitario` INT NOT NULL,
PRIMARY KEY (`IDArticulo`),
INDEX `NitProveedor_idx` (`IDproveedor` ASC) VISIBLE,
CONSTRAINT `IDProveedor`
FOREIGN KEY (`IDproveedor`)
REFERENCES `Repuestos`.`Proveedor` (`IDProveedor`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB
SQL script execution finished: statements: 6 succeeded, 1 failed
Fetching back view definitions in final form.
Nothing to fetch.
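One common cause with scripts forward-engineered by MySQL Workbench 8.x is the VISIBLE index attribute, which MariaDB does not recognize. Assuming that is the culprit here, a sketch of the index definition with the keyword removed (the rest of the statement stays the same):
INDEX `NitProveedor_idx` (`IDproveedor` ASC),  -- VISIBLE removed; MariaDB does not support it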

ERROR: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server

So I keep getting this error in my SQL:
ERROR: Error 1064: You have an error in your SQL syntax; check the
manual that corresponds to your MariaDB server version for the right
syntax to use near ' CONSTRAINT fk_examinee_user1
FOREIGN KEY (userName)
REFERENCES `q' at line 12
SQL Code:
-- -----------------------------------------------------
-- Table `questionnaire`.`examinee`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `questionnaire`.`examinee` (
`examineeNumber` INT NOT NULL,
`userName` VARCHAR(45) NOT NULL,
`examineeID` VARCHAR(45) NOT NULL,
`startDate` INT NOT NULL,
`endDate` INT NOT NULL,
`Active` VARCHAR(45) NOT NULL,
PRIMARY KEY (`examineeID`),
INDEX `fk_examinee_user1_idx` (`userName` ASC) VISIBLE,
CONSTRAINT `fk_examinee_user1`
FOREIGN KEY (`userName`)
REFERENCES `questionnaire`.`user` (`userName`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
I've tried everything, but the code seems right to me. Please help.
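As in the previous question, if the server is MariaDB, the VISIBLE index attribute emitted by MySQL Workbench is a likely suspect; a sketch of that line with the keyword dropped:
INDEX `fk_examinee_user1_idx` (`userName` ASC),  -- VISIBLE removed; MariaDB does not support it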

How can I use IF statements in Teradata without using BTEQ

I'm trying to create some deployment tools, and I don't want to use BTEQ. I've been trying to work with the Teradata.Client.Provider in PowerShell, but I'm getting syntax errors on the creation of a table.
[Teradata Database] [3706] Syntax error: expected something between
';' and the 'IF' keyword.
SELECT * FROM DBC.TablesV WHERE DatabaseName = DATABASE AND TableName = 'MyTable';
IF ACTIVITYCOUNT > 0 THEN GOTO EndStep1;
CREATE MULTISET TABLE MyTable ,
NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
MyColId INTEGER GENERATED ALWAYS AS IDENTITY
(START WITH 1
INCREMENT BY 1
MINVALUE 0
MAXVALUE 2147483647
NO CYCLE)
NOT NULL,
MyColType VARCHAR(50) NULL,
MyColTarget VARCHAR(128) NULL,
MyColScriptName VARCHAR(256) NULL,
MyColOutput VARCHAR(64000) NULL,
isMyColException BYTEINT(1) NULL,
ExceptionOutput VARCHAR(64000) NULL,
MyColBuild VARCHAR(128) NULL,
MyColDate TIMESTAMP NOT NULL
)
PRIMARY INDEX PI_MyTable_MyColLogId(MyColLogId);
LABEL EndStep1;
I would rather not use BTEQ, as I've not found it works well in other deployment tools we have created, and it requires a bit of hacking. Is there anything I can use that would avoid that tool?
What parse error?
The CREATE will fail due to the double INTEGER in MyColId and the VARCHAR(max) in ExceptionOutput; the latter is an unknown datatype in Teradata.
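One way to keep the conditional logic out of BTEQ is to move it into a stored procedure and call that from the .NET provider. A rough sketch under that assumption; the database name MyDb, the procedure name, and the one-column stub DDL are placeholders rather than anything from the original question:
REPLACE PROCEDURE MyDb.CreateMyTableIfMissing()
BEGIN
  DECLARE cnt INTEGER;
  -- Count matching rows instead of relying on BTEQ's ACTIVITYCOUNT
  SELECT COUNT(*) INTO :cnt
    FROM DBC.TablesV
    WHERE DatabaseName = 'MyDb' AND TableName = 'MyTable';
  IF cnt = 0 THEN
    -- Dynamic SQL stands in for the real CREATE MULTISET TABLE statement
    CALL DBC.SysExecSQL('CREATE MULTISET TABLE MyDb.MyTable (MyColId INTEGER NOT NULL) PRIMARY INDEX (MyColId)');
  END IF;
END;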

flyway unable to init or migrate after a clean

I've just performed a clean operation. I added a new schema to the properties file, then ran clean again to remove that schema from the DB. When I then try to do a migrate, it doesn't work and I get the following:
Creating Metadata table: [xx].[schema_version]
Error executing statement at line 17: CREATE TABLE [xx].[schema_version] (
[version_rank] INT NOT NULL,
[installed_rank] INT NOT NULL,
[version] NVARCHAR(50) NOT NULL,
[description] NVARCHAR(200),
[type] NVARCHAR(20) NOT NULL,
[script] NVARCHAR(1000) NOT NULL,
[checksum] INT,
[installed_by] NVARCHAR(30) NOT NULL,
[installed_on] DATETIME NOT NULL DEFAULT GETDATE(),
[execution_time] INT NOT NULL,
[success] BIT NOT NULL
);
CREATE INDEX [schema_version_vr_idx] ON [xx].[schema_version] ([version_rank]);
CREATE INDEX [schema_version_ir_idx] ON [xx].[schema_version] ([installed_rank]);
CREATE INDEX [schema_version_s_idx] ON [xx].[schema_version] ([success]);
Also, when I tried to initialize it using init, I get the following:
Creating Metadata table: [xx].[schema_version]
Error executing statement at line 17: CREATE TABLE [xx].[schema_version] (
[version_rank] INT NOT NULL,
[installed_rank] INT NOT NULL,
[version] NVARCHAR(50) NOT NULL,
[description] NVARCHAR(200),
[type] NVARCHAR(20) NOT NULL,
[script] NVARCHAR(1000) NOT NULL,
[checksum] INT,
[installed_by] NVARCHAR(30) NOT NULL,
[installed_on] DATETIME NOT NULL DEFAULT GETDATE(),
[execution_time] INT NOT NULL,
[success] BIT NOT NULL
);
CREATE INDEX [schema_version_vr_idx] ON [xx].[schema_version] ([version_rank]);
CREATE INDEX [schema_version_ir_idx] ON [xx].[schema_version] ([installed_rank]);
CREATE INDEX [schema_version_s_idx] ON [xx].[schema_version] ([success]);
ERROR: Occured in com.googlecode.flyway.core.dbsupport.SqlScript.execute() at line 91
ERROR: Caused by com.microsoft.sqlserver.jdbc.SQLServerException: The specified schema name "xx" either does not exist or you do not have permission to use it.
ERROR: Occured in com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError() at line 197
Please note that I've dropped all the objects, and I can confirm they do not exist on my DB instance.
How can I overcome this? I am concerned that, when using the tool in dev and prod environments, we can't just delete the DB instance and start again. At this point I can't use the tool to do the migration, and I don't want to delete the DB to get around this issue.
Flyway's schema creation is currently an all-or-nothing deal: either all schemas are missing and all of them will be created, or at least one schema is present and none will be created.
For you this means that you must either create xx yourself or drop the other schemas first.
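Since the failure is that the schema itself does not exist, one way forward under that answer is to create it by hand before re-running migrate. A minimal T-SQL sketch, where xx stands for the schema name masked in the log above:
IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = 'xx')
    EXEC('CREATE SCHEMA [xx]');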
