Couldn't insert a record with text fields longer than 257 characters via pyodbc

I have a SQL Server instance, version 10.50.4000 (SQL Server 2008 R2). I connect to it from Linux via pyodbc with the SQL Server Native Client 11.0 driver.
Here is the table definition.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [ctm].[services](
[id] [char](36) NOT NULL,
[Name] [varchar](45) NOT NULL,
[ServiceDescription] [varchar](256) NULL,
[Version] [varchar](45) NULL,
[Status] [varchar](45) NOT NULL,
[StatusDescription] [varchar](256) NULL,
[WSDL] [text] NULL,
[WADL] [text] NULL,
[XSD] [text] NULL,
[CreatedBy] [varchar](100) NULL,
[CreatedOn] [datetime] NOT NULL,
[CreatedAt] [varchar](45) NULL,
[UpdatedBy] [varchar](100) NULL,
[UpdatedOn] [datetime] NULL,
[UpdatedAt] [varchar](45) NULL,
[deleted] [bit] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
The 'WSDL' field is used to keep the schema of a web service. I read the schema file and stored it in a string, but when I tried to insert the record into the database server, I got the error message below:
Traceback (most recent call last):
File "", line 1, in
pyodbc.DataError: ('22001', '[22001] [Microsoft][SQL Server Native Client 11.0]String data, right truncation (0) (SQLExecDirectW)')
Here is the command I executed:
cursor.execute(
    """insert into ctm.services
         (id, Name, ServiceDescription, Version, Status, StatusDescription,
          WSDL, WADL, XSD, CreatedBy, CreatedOn, CreatedAt,
          UpdatedBy, UpdatedOn, UpdatedAt, deleted)
       values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, current_timestamp, ?, null, null, null, ?)""",
    'abcdefghijklmn', 'whatservice', 'testing', '1.0.0', 'active', '',
    'NOTES!!!This should be a schema file context. But it could not be shown for some reason. So I input garbage message here',
    'null', 'null', getpass.getuser(), socket.gethostname(), '0')
When the string bound to WSDL is shorter than 257 characters, I can insert the record into the database server; if it is longer than that, it fails. But as we can see, the type of the WSDL field is text, which allows 2147483647 characters. Please check the debug output below:
>>> for row in cursor.columns(table='services'):
... print row.column_name + " : " + str(row.data_type) + " size:" + str(row.column_size)
...
id : 1 size:36
Name : 12 size:45
ServiceDescription : 12 size:256
Version : 12 size:45
Status : 12 size:45
StatusDescription : 12 size:256
WSDL : -1 size:2147483647
WADL : -1 size:2147483647
XSD : -1 size:2147483647
CreatedBy : 12 size:100
CreatedOn : 93 size:23
CreatedAt : 12 size:45
UpdatedBy : 12 size:100
UpdatedOn : 93 size:23
UpdatedAt : 12 size:45
deleted : -7 size:1
name : -9 size:128
service_id : 4 size:10
principal_id : 4 size:10
service_queue_id : 4 size:10
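(For reference, the data_type values printed by cursor.columns() are standard ODBC SQL data type codes; the annotation below is mine, not part of the original output. The last four rows presumably come from another table also named services, e.g. sys.services, since the call was not restricted to the ctm schema.)
# Standard ODBC SQL data type codes matching the data_type values above.
ODBC_TYPE_CODES = {
    1:  "SQL_CHAR",             # id
    12: "SQL_VARCHAR",          # Name, ServiceDescription, Version, ...
    -1: "SQL_LONGVARCHAR",      # WSDL, WADL, XSD -- the text columns
    93: "SQL_TYPE_TIMESTAMP",   # CreatedOn, UpdatedOn
    -7: "SQL_BIT",              # deleted
    -9: "SQL_WVARCHAR",         # name (an nvarchar column)
    4:  "SQL_INTEGER",          # service_id, principal_id, service_queue_id
}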
I used Wireshark to debug and found that the SQL command was never sent to the SQL server, so the error message must come from the SQL Server Native Client driver.
I tried to Google for a solution but couldn't find anything. Has anyone come across the same issue before?
P.S.
I tried the same command in Microsoft SQL Server Management Studio and it works without any problems, so the issue could be in the ODBC driver or a mismatch between the ODBC driver and pyodbc.
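In case it helps anyone hitting the same 22001 error: one workaround often suggested for this class of problem (my own suggestion, not something confirmed in the original post) is to cast the long parameter server-side, on the theory that the driver is binding it as a short VARCHAR. A minimal sketch, with a placeholder connection string and dummy values:
import getpass
import socket
import pyodbc

# Placeholder DSN/credentials -- adjust to the real environment.
conn = pyodbc.connect("DSN=mysqlserver;UID=user;PWD=secret")
cursor = conn.cursor()

wsdl_text = "x" * 5000   # stand-in for the real WSDL contents (well over 257 chars)

cursor.execute(
    """insert into ctm.services
         (id, Name, ServiceDescription, Version, Status, StatusDescription,
          WSDL, WADL, XSD, CreatedBy, CreatedOn, CreatedAt,
          UpdatedBy, UpdatedOn, UpdatedAt, deleted)
       values (?, ?, ?, ?, ?, ?,
               cast(? as varchar(max)), ?, ?, ?, current_timestamp, ?,
               null, null, null, ?)""",
    'abcdefghijklmn', 'whatservice', 'testing', '1.0.0', 'active', '',
    wsdl_text, 'null', 'null', getpass.getuser(), socket.gethostname(), '0')
conn.commit()
Newer pyodbc releases also expose cursor.setinputsizes(), which lets you tell the driver explicitly how to bind a parameter, and upgrading pyodbc and the ODBC driver is often the real fix; whether either applies here depends on the versions in use.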

Related

Need help - Teradata TPT script failing

TPT SCRIPT:
DEFINE JOB LD_CAN_IN_TRANSIT
DESCRIPTION 'Load canada retail in transit'
(
DEFINE SCHEMA schema_Canada_Retail_Intransit_Wrk
(
VOUCHER_DATE VARCHAR(50),
VOU_NO VARCHAR(50),
PO_NO VARCHAR(50),
DOC_QTY VARCHAR(50),
DESCRIP VARCHAR(50),
UPC VARCHAR(50),
STORE_ADDR6 VARCHAR(50),
COUNTRY VARCHAR(50),
VENDOR_CODE VARCHAR(50),
LINENUMBER VARCHAR(50)
);
DEFINE OPERATOR DDL_OPERATOR
TYPE DDL
ATTRIBUTES
(
VARCHAR PrivateLogName = 'ddl_log',
VARCHAR TdpId = '******',
VARCHAR LogonMech = 'LDAP',
VARCHAR UserName = '*************',
VARCHAR UserPassword = '*************',
VARCHAR ErrorList = '3807'
);
DEFINE OPERATOR dml_canada_retail_int
TYPE UPDATE
SCHEMA *
ATTRIBUTES
(
VARCHAR LogonMech = 'LDAP',
VARCHAR TdpId = '*******',
VARCHAR UserName = '************',
VARCHAR UserPassword = '********',
VARCHAR TargetTable = 'ODM_SCD_STG_T.abc' ,
VARCHAR LogTable = 'EIS_AUX_T.log_table',
VARCHAR ErrorTable1 = 'EIS_AUX_T.err_ET',
VARCHAR ErrorTable2 = 'EIS_AUX_T.err_RL',
VARCHAR DeleteTask = 'Y'
);
DEFINE OPERATOR prod_can_ret_int
TYPE DATACONNECTOR PRODUCER
SCHEMA schema_Canada_Retail_Intransit_Wrk
ATTRIBUTES
(
VARCHAR DirectoryPath= '<path_to_file>',
VARCHAR FileName = #data_file,
VARCHAR Format = 'Delimited',
VARCHAR OpenMode = 'Read',
VARCHAR TextDelimiter =',',
INTEGER SkipRows = 1
);
DEFINE OPERATOR load_can_ret_int
TYPE LOAD
SCHEMA *
ATTRIBUTES
(
VARCHAR LogonMech = 'LDAP',
VARCHAR TdpId = '********',
VARCHAR UserName = '*****************',
VARCHAR UserPassword = '***************',
VARCHAR TargetTable = 'ODM_SCD_STG_T.abc' ,
VARCHAR LogTable = 'EIS_AUX_T.log_table',
VARCHAR ErrorTable1 = 'EIS_AUX_T.err_ET',
VARCHAR ErrorTable2 = 'EIS_AUX_T.err_RL'
);
STEP Setup_Tables
(
APPLY
('DROP TABLE EIS_AUX_T.log_table;'),
('DROP TABLE EIS_AUX_T.err_ET;'),
('DROP TABLE EIS_AUX_T.err_RL;')
TO OPERATOR (DDL_OPERATOR);
);
STEP stSetup_Tables
(
APPLY
('DELETE ODM_SCD_STG_T.abc;')
TO OPERATOR (dml_canada_retail_int);
);
STEP stLOAD_CAN_RET_INT
(
APPLY
('<insert statement>')
TO OPERATOR (load_can_ret_int)
SELECT * FROM OPERATOR(prod_can_ret_int);
);
);
ERROR:
Teradata Parallel Transporter Version 16.20.00.14 64-Bit
The global configuration file '/opt/teradata/client/16.20/tbuild/twbcfg.ini' is used.
Log Directory: /opt/teradata/client/16.20/tbuild/logs
Checkpoint Directory: /opt/teradata/client/16.20/tbuild/checkpoint
TPT_INFRA: TPT03624: Warning: tbuild -s option argument specifies the first job step;
no job steps will be skipped (unless this is a restarted job).
Job log: /opt/teradata/client/16.20/tbuild/logs/ec2-user-207.out
Job id is ec2-user-207, running on ip-10-179-114-26.us-west-2.compute.internal
Teradata Parallel Transporter SQL DDL Operator Version 16.20.00.14
DDL_OPERATOR: private log specified: ddl_log
DDL_OPERATOR: connecting sessions
DDL_OPERATOR: sending SQL requests
DDL_OPERATOR: TPT10508: RDBMS error 3807: Object 'EIS_AUX_T.err_ET' does not exist.
DDL_OPERATOR: TPT18046: Error is ignored as requested in ErrorList
DDL_OPERATOR: TPT10508: RDBMS error 3807: Object 'EIS_AUX_T.err_RL' does not exist.
DDL_OPERATOR: TPT18046: Error is ignored as requested in ErrorList
DDL_OPERATOR: disconnecting sessions
DDL_OPERATOR: Total processor time used = '0.012241 Second(s)'
DDL_OPERATOR: Start : Thu Jan 9 20:22:28 2020
DDL_OPERATOR: End : Thu Jan 9 20:22:28 2020
Job step Setup_Tables completed successfully
Teradata Parallel Transporter Update Operator Version 16.20.00.14
dml_canada_retail_int: private log not specified
dml_canada_retail_int: connecting sessions
dml_canada_retail_int: preparing target table(s)
dml_canada_retail_int: TPT10508: RDBMS error 3524: The user does not have CREATE TABLE access to database ODM_SCD_STG_T.
dml_canada_retail_int: disconnecting sessions
dml_canada_retail_int: Performance metrics:
dml_canada_retail_int: MB/sec in Acquisition phase: 0
dml_canada_retail_int: Elapsed time from start to Acquisition phase: 2 second(s)
dml_canada_retail_int: Elapsed time in Acquisition phase: 0 second
dml_canada_retail_int: Elapsed time in Application phase: 0 second
dml_canada_retail_int: Elapsed time from Application phase to end: < 1 second
dml_canada_retail_int: Total processor time used = '0.0397 Second(s)'
dml_canada_retail_int: Start : Thu Jan 9 20:22:28 2020
dml_canada_retail_int: End : Thu Jan 9 20:22:30 2020
Job step stSetup_Tables terminated (status 12)
Job ec2-user terminated (status 12)
Job start: Thu Jan 9 20:22:28 2020
Job end: Thu Jan 9 20:22:30 2020
Question:
The goal is to create the error tables in the EIS_AUX_T schema, and the user that I'm using has CREATE TABLE access to it. But I'm not sure why I need CREATE TABLE access to the database in which the target table exists (ODM_SCD_STG_T in this case). The process is failing at the Load operator step.
1. The user has CREATE TABLE access in the EIS_AUX_T database.
2. The user does NOT have CREATE TABLE access in the ODM_SCD_STG_T database.
3. The user has DML privileges in the ODM_SCD_STG_T database.
4. The error and log tables should be created in the EIS_AUX_T database.
The UPDATE operator also uses a work table in addition to the log and error tables; since you didn't specify the WorkTable or WorkingDatabase attributes, TPT is trying to create the work table in the target table's database.
But using the UPDATE operator's DeleteTask just to truncate a table is extra overhead; use the DDL operator for that instead.

Airflow - SQL Server connection

I have a question about changing the backend connection from SQLite to SQL Server. After setting the correct connection string for sql_alchemy_conn, I run airflow initdb and get the following error:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]A table can only have one timestamp column. Because table 'task_reschedule' already has one, the column 'start_date' cannot be added. (2738) (SQLExecDirectW)") [SQL: '\nCREATE TABLE task_reschedule (\n\tid INTEGER NOT NULL IDENTITY(1,1), \n\ttask_id VARCHAR(250) NOT NULL, \n\tdag_id VARCHAR(250) NOT NULL, \n\texecution_date TIMESTAMP NOT NULL, \n\ttry_number INTEGER NOT NULL, \n\tstart_date TIMESTAMP NOT NULL, \n\tend_date TIMESTAMP NOT NULL, \n\tduration INTEGER NOT NULL, \n\treschedule_date TIMESTAMP NOT NULL, \n\tPRIMARY KEY (id), \n\tCONSTRAINT task_reschedule_dag_task_date_fkey FOREIGN KEY(task_id, dag_id, execution_date) REFERENCES task_instance (task_id, dag_id, execution_date)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/f405)
So this works for me:
In the file 0a2a5b66e19d_add_task_reschedule_table.py, add this (the mysql dialect import is needed for it to run):
from sqlalchemy.dialects import mysql

def mysql_datetime():
    return mysql.DATETIME(timezone=True)
and replace every line that uses timestamp(), such as:
sa.Column('execution_date', timestamp(), nullable=False, server_default=None),
with this:
sa.Column('execution_date', mysql_datetime(), nullable=False, server_default=None),
Once I made this change, the error above disappeared, but I am not sure whether there are other unintended consequences. If so, I will update here or just fall back to using a MySQL database.
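As a side note, before rerunning airflow initdb it can be worth confirming that the sql_alchemy_conn URL itself connects; a minimal sketch, where the host, credentials, database name and driver name are placeholder assumptions rather than values from the post:
from sqlalchemy import create_engine

# Same style of URL that would go into sql_alchemy_conn in airflow.cfg.
url = (
    "mssql+pyodbc://airflow_user:secret@sqlserver-host/airflow_db"
    "?driver=ODBC+Driver+13+for+SQL+Server"
)

engine = create_engine(url)
with engine.connect() as conn:
    # A trivial round trip fails fast on bad credentials or a missing driver.
    # (Raw-string execute is SQLAlchemy 1.x style, matching Airflow of that era.)
    print(conn.execute("SELECT 1").scalar())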

ERROR: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server

So I keep getting this error in my SQL:
ERROR: Error 1064: You have an error in your SQL syntax; check the
manual that corresponds to your MariaDB server version for the right
syntax to use near ' CONSTRAINT fk_examinee_user1
FOREIGN KEY (userName)
REFERENCES `q' at line 12
SQL Code:
-- -----------------------------------------------------
-- Table `questionnaire`.`examinee`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `questionnaire`.`examinee` (
`examineeNumber` INT NOT NULL,
`userName` VARCHAR(45) NOT NULL,
`examineeID` VARCHAR(45) NOT NULL,
`startDate` INT NOT NULL,
`endDate` INT NOT NULL,
`Active` VARCHAR(45) NOT NULL,
PRIMARY KEY (`examineeID`),
INDEX `fk_examinee_user1_idx` (`userName` ASC) VISIBLE,
CONSTRAINT `fk_examinee_user1`
FOREIGN KEY (`userName`)
REFERENCES `questionnaire`.`user` (`userName`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
I've tried everything but the code looks right to me. Please help.

How can I use IF statements in Teradata without using BTEQ

I'm trying to create some deployment tools and I don't want to use BTEQ. I've been trying to work with the Teradata.Client.Provider in PowerShell but I'm getting syntax errors on the creation of a table.
[Teradata Database] [3706] Syntax error: expected something between
';' and the 'IF' keyword.
SELECT * FROM DBC.TablesV WHERE DatabaseName = DATABASE AND TableName = 'MyTable';
IF ACTIVITYCOUNT > 0 THEN GOTO EndStep1;
CREATE MULTISET TABLE MyTable ,
NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
MyColId INTEGER GENERATED ALWAYS AS IDENTITY
(START WITH 1
INCREMENT BY 1
MINVALUE 0
MAXVALUE 2147483647
NO CYCLE)
NOT NULL,
MyColType VARCHAR(50) NULL,
MyColTarget VARCHAR(128) NULL,
MyColScriptName VARCHAR(256) NULL,
MyColOutput VARCHAR(64000) NULL,
isMyColException BYTEINT(1) NULL,
ExceptionOutput VARCHAR(64000) NULL,
MyColBuild VARCHAR(128) NULL,
MyColDate TIMESTAMP NOT NULL
)
PRIMARY INDEX PI_MyTable_MyColLogId(MyColLogId);
LABEL EndStep1;
I would rather not use BTEQ, as I've found it hasn't worked well in other deployment tools we have created and requires a few hacks. Is there anything I can use that would avoid that tool?
Which parse error, exactly?
The CREATE will fail due to the double INTEGER in MyColId and the VARCHAR(max) in ExceptionOutput; the latter is an unknown data type in Teradata.
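On the original question: IF, ACTIVITYCOUNT and GOTO are BTEQ control statements rather than SQL the database can parse, so outside BTEQ the usual approach is to do the branching in the host language. A minimal sketch using the teradatasql Python driver purely for illustration (the question uses PowerShell, and the host, credentials and simplified CREATE TABLE below are placeholder assumptions):
import teradatasql

# Placeholder connection details.
with teradatasql.connect(host="tdhost", user="deploy_user", password="secret") as con:
    with con.cursor() as cur:
        # Same existence check as the original script's first statement.
        cur.execute(
            "SELECT 1 FROM DBC.TablesV "
            "WHERE DatabaseName = DATABASE AND TableName = 'MyTable'")
        if cur.fetchone() is None:
            # Only run the DDL when the table does not exist yet
            # (simplified column list, for illustration only).
            cur.execute(
                "CREATE MULTISET TABLE MyTable "
                "(MyColId INTEGER NOT NULL) PRIMARY INDEX (MyColId)")
The same pattern works from PowerShell with Teradata.Client.Provider: run the SELECT, inspect the number of rows returned, and only submit the CREATE TABLE when nothing came back.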

flyway unable to init or migrate after a clean

I've just performed a clean operation. I added a new schema to the properties file, then ran clean again to remove that schema from the DB. Now when I try to do a migrate it doesn't work and I get the following:
Creating Metadata table: [xx].[schema_version]
Error executing statement at line 17: CREATE TABLE [xx].[schema_version] (
[version_rank] INT NOT NULL,
[installed_rank] INT NOT NULL,
[version] NVARCHAR(50) NOT NULL,
[description] NVARCHAR(200),
[type] NVARCHAR(20) NOT NULL,
[script] NVARCHAR(1000) NOT NULL,
[checksum] INT,
[installed_by] NVARCHAR(30) NOT NULL,
[installed_on] DATETIME NOT NULL DEFAULT GETDATE(),
[execution_time] INT NOT NULL,
[success] BIT NOT NULL
);
CREATE INDEX [schema_version_vr_idx] ON [xx].[schema_version] ([version_rank]);
CREATE INDEX [schema_version_ir_idx] ON [xx].[schema_version] ([installed_rank]);
CREATE INDEX [schema_version_s_idx] ON [xx].[schema_version] ([success]);
Also, when I tried to initialize it using init, I get the following:
Creating Metadata table: [xx].[schema_version]
Error executing statement at line 17: CREATE TABLE [xx].[schema_version] (
[version_rank] INT NOT NULL,
[installed_rank] INT NOT NULL,
[version] NVARCHAR(50) NOT NULL,
[description] NVARCHAR(200),
[type] NVARCHAR(20) NOT NULL,
[script] NVARCHAR(1000) NOT NULL,
[checksum] INT,
[installed_by] NVARCHAR(30) NOT NULL,
[installed_on] DATETIME NOT NULL DEFAULT GETDATE(),
[execution_time] INT NOT NULL,
[success] BIT NOT NULL
);
CREATE INDEX [schema_version_vr_idx] ON [xx].[schema_version] ([version_rank]);
CREATE INDEX [schema_version_ir_idx] ON [xx].[schema_version] ([installed_rank]);
CREATE INDEX [schema_version_s_idx] ON [xx].[schema_version] ([success]);
ERROR: Occured in com.googlecode.flyway.core.dbsupport.SqlScript.execute() at line 91
ERROR: Caused by com.microsoft.sqlserver.jdbc.SQLServerException: The specified schema name "xx" either does not exist or you do not have permission to use it.
ERROR: Occured in com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError() at line 197
Please note that I've dropped all the objects and I can confirm they do not exist on my DB instance.
How can I overcome this? I am concerned that when using the tool in dev and prod environments we can't just delete the DB instance and start again. At this point I can't use the tool to do the migration, and I don't want to delete the DB to overcome this issue.
Flyway's schema creation is currently an all-or-nothing deal: either all schemas are missing and all of them will be created, or at least one schema is present and none will be created.
For you this means that you must either create xx yourself or drop the other schemas first.
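For the "create xx yourself" option, the schema only needs to exist before migrate runs. A minimal sketch that creates it via pyodbc, shown in Python purely for illustration (any SQL client works; the connection string is a placeholder, the schema name xx is taken from the log, and the connecting login is assumed to have permission to create schemas):
import pyodbc

# Placeholder connection string -- point it at the database Flyway migrates.
conn = pyodbc.connect("DSN=mymssql;UID=user;PWD=secret")
cursor = conn.cursor()

# Create the schema only if it is really missing.
if not cursor.execute("select 1 from sys.schemas where name = 'xx'").fetchone():
    # CREATE SCHEMA must be the only statement in its batch,
    # which each execute() call already is.
    cursor.execute("CREATE SCHEMA xx")
conn.commit()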
