Debezium Source - JDBC Sink - Precision Mode Connect and Infinity result in date out of range - datetime

I have Debezium in a container, capturing all changes to PostgreSQL database records. In addition I have a Confluent JDBC container to write all changes to another database.
While the operation is in progress, after a while I receive the following error:
ERROR: timestamp out of range: "290303-12-10 19:59:27.6+00 BC"
My source connector configuration for datetime handling is:
"time.precision.mode": "connect",
After a careful investigation, I saw that the failing table contains dates with the value '-infinity'.
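They can be confirmed with a query like this (table and column names are placeholders, not the real schema):
-- run against the source PostgreSQL database; names are hypothetical
SELECT *
FROM my_failing_table
WHERE my_timestamp_column = '-infinity'::timestamptz
   OR my_timestamp_column = 'infinity'::timestamptz;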
Based on the PostgreSQL documentation, infinity and -infinity exist:
PostgreSQL supports several special date/time input values for convenience, as shown in Table 8.13. The values infinity and -infinity are specially represented inside the system and will be displayed unchanged; but the others are simply notational shorthands that will be converted to ordinary date/time values when read. (In particular, now and related strings are converted to a specific time value as soon as they are read.) All of these values need to be enclosed in single quotes when used as constants in SQL commands.
I also see that this was fixed in Debezium 1.5.0-Beta per issue https://issues.redhat.com/browse/DBZ-2450, but I have Debezium 2.0.0. Lastly, I see that this now looks like a JDBC sink issue rather than a Debezium one, so I tried updating JDBC from 10.4.0 to 10.6.0, but nothing changed.
Any ideas how I can fix that?

Related

How to interpret or decode the Hostdata column from Teradata error table

The use case is that there is an Informatica Cloud mapping which loads from SQL Server to a Teradata database. If there are any failures during the run time of the mapping, the mapping writes all the failed rows to a table in the Teradata database. The key column in this error table, I assume, is HOSTDATA. I am trying to decode the HOSTDATA column so that, if a similar ETL failure happens in production, it would help identify the root cause much more quickly. By default HOSTDATA is a column of type VARBYTE.
To decode the HOSTDATA column, I converted the column to ASCII and to base-16 (hex) format. Neither was of any use.
Then I tried the approach below from the Teradata forum.
I tried to extract the data from the error table using a BTEQ script: the data is exported into a .err file and then loaded back into the Teradata database using a FastLoad script. FastLoad is unable to load the data because there is no specific delimiter in the data, and the data in the .err file looks like gibberish. Snapshot of the data from the .err file:
My end goal is to interpret the Hostdata column in a more human readable way. Any suggestions in this direction are also welcome.
The Error Table Extractor command twbertbl, which is part of the "Teradata Parallel Transporter Base" software, is designed to extract and format HOSTDATA from the error table's VARBYTE column.
Based on the screenshot in your question, I suspect you will need to specify FORMATTED as the record format option for twbertbl (default is DELIMITED).

How to implement INSERT where not exists for ORACLE in Mule4

I am trying to implement a use-case in Mule4 where a tour needs to be assigned to a user if it has not already been assigned.
I was hoping that I could implement it using the Mule db:insert component and an INSERT ... WHERE NOT EXISTS SQL script as below.
INSERT INTO TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM) select :tourno,:tlid,:system from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO=:tourno and TLID=:tlid and SYSTEM=:system))
However, this results in a Mule exception:
Message : ORA-01722: invalid number
Error type : DB:BAD_SQL_SYNTAX
The TL_MAPPING_TOUR table has an id column (primary key), but that is auto-generated by a sequence.
The same script, modified to run directly in SQL Developer as shown below, works fine.
INSERT into TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM)
select 'CLLO001474','123456789','AS400'
from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO='CLLO001474' and TLID='123456789' and SYSTEM='AS400'));
Clearly the Mule db:insert component doesn't like the syntax, but it's not clear to me what is wrong here. I can't find any INSERT ... WHERE NOT EXISTS example implementation for the Mule 4 Database component either.
The Stack Overflow page https://stackoverflow.com/questions/54910330/insert-record-into-sql-server-when-it-does-not-already-exist-using-mule leads to a "page not found" error.
Any idea what is wrong here and how to implement this in Mule 4 without using a separate db:select component before db:insert?
I don't know "mule4", but this:
Message : ORA-01722: invalid number
doesn't mean that the syntax is wrong (as you already tested: the same statement works OK in another tool).
Cause: You executed a SQL statement that tried to convert a string to a number, but it was unsuccessful.
Resolution:
The option(s) to resolve this Oracle error are:
Option #1: Only numeric fields or character fields that contain numeric values can be used in arithmetic operations. Make sure that all expressions evaluate to numbers.
Option #2: If you are adding or subtracting from dates, make sure that you added/subtracted a numeric value from the date.
In other words, it seems that one of the columns is declared as NUMBER, while you passed something that is a string. Oracle performed an implicit conversion when you tested the statement in SQL Developer, but it seems that Mule 4 didn't, hence the error.
The most obvious cause (based on what you posted) is putting '123456789' into TLID, as the other values are obviously strings. Therefore, pass 123456789 (a number, no single quotes around it) and see what happens. It should work.
SQL Developer is too forgiving. It will convert strings to numbers and vice versa automatically when it can, and it can do so in a lot of cases.
The MuleSoft DB connector tries the same, but it is not as successful as the native tools. It fails to convert pretty often, especially on dates, but that is not your case.
In short, do not put too much trust in MuleSoft's data-type guessing. If it works, great! Otherwise, try to take that intelligence out of the picture and do all conversions in the query, preferably from strings. Usually a plain number works fine, but if it doesn't, use the TO_NUMBER function to state explicitly that the value is a number.
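For example, as a sketch only (assuming TLID really is the NUMBER column), the original statement could become:
INSERT INTO TL_MAPPING_TOUR (TOURNO, TLID, SYSTEM)
SELECT :tourno, TO_NUMBER(:tlid), :system FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM TL_MAPPING_TOUR
                  WHERE TOURNO = :tourno
                    AND TLID = TO_NUMBER(:tlid)
                    AND SYSTEM = :system)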
More about this is here https://simpleflatservice.com/mule4/AvoidCoversionsOrMakeThemNative.html

Delphi - ClientDataSet SQL calculated field causing "Invalid field type" error at runtime [duplicate]

Using Delphi 10.2, SQLite and TeeChart. My SQLite table has two fields, created with:
CREATE TABLE HistoryRuntime ('DayTime' DateTime, Device1 INTEGER DEFAULT (0));
I access the table using a TFDQuery called qryGrpahRuntime with the following SQL:
SELECT DayTime AS TheDate, Sum(Device1) As DeviceTotal
FROM HistoryRuntime
WHERE (DayTime >= "2017-06-01") and (DayTime <= "2017-06-26")
Group by Date(DayTime)
Using the Field Editor in the Delphi IDE, I can add two persistent fields, getting TheDate as a TDateTimeField and DeviceTotal as a TLargeIntField.
I run this query in a program to create a TeeChart, which I created at design time. As long as the query returns some records, all this works. However, if there are no records for the requested dates, I get an EDatabaseError exception with the message:
qryGrpahRuntime: Type mismatch for field 'DeviceTotal', expecting: LargeInt actual: Widestring
I have done plenty of searching for solutions on the web on how to prevent this error on an empty query, but have had no luck with anything I found. From what I can tell, SQLite defaults to the wide string field type when no data is returned. I have tried using CAST in the query and it did not seem to make any difference.
If I remove the persistent fields, the query will open without problems on an empty return set. However, in order to use the TeeChart editor in the IDE, it appears I need persistent fields.
Is there a way I can make this work with persistent fields, or am I going to have to throw out the persistent fields and then add the TeeChart Series at runtime?
This behavior is described in the Adjusting FireDAC Mapping chapter of FireDAC's SQLite manual:
For an expression in a SELECT list, SQLite avoids type name information. When the result set is not empty, FireDAC uses the value data types from the first record. When empty, FireDAC describes those columns as dtWideString. To explicitly specify the column data type, append ::<type name> to the column alias:
SELECT count(*) as "cnt::INT" FROM mytab
So modify your command e.g. this way (I used BIGINT, but you can use any pseudo data type that maps to a 64-bit signed integer data type and is not auto incrementing, which corresponds to your persistent TLargeIntField field):
SELECT
DayTime AS "TheDate",
Sum(Device1) AS "DeviceTotal::BIGINT"
FROM
HistoryRuntime
WHERE
DayTime BETWEEN {d 2017-06-01} AND {d 2017-06-26}
GROUP BY
Date(DayTime)
P.S. I made a small optimization by using the BETWEEN operator (which evaluates the column value only once), and I used an escape sequence for the date constants (which, in real code, you would replace with parameters, I guess; so that part is just for curiosity).
This data type hinting is parsed by the FDSQLiteTypeName2ADDataType procedure, which takes a column name in the format <column name>::<type name> in its AColName parameter.

Having trouble with FIELDPROC on a database (column encryption on iSeries)

I used Listing 3 in the following link, Field Encryption in DB2 for i, to create a FIELDPROC program QGPL/MOBHOMEPAS which should encrypt a variable-length character column.
I compiled the RPGLE program and created a separate file DBMLIB/UMAAAP00 as follows:
A R UMAAAF00 TEXT('-
A TEST ENCRYPTION')
A*
A IPIAAA 20A VARLEN(20)
A KYGAAA 11S 2 COLHDG('SALARY')
I then use STRSQL to alter the table and protect IPIAAA:
ALTER TABLE DBMLIB/UMAAAP00 alter column IPIAAA set FIELDPROC
QGPL.MOBHOMEPAS
ALTER COMPLETED FOR TABLE UMAAAP00 IN DBMLIB.
For some reason, when I go in to add entries through UPDDTA directly to the file itself and then do a WRKQRY to query the file and view them, I don't see them as encrypted.
Is this not how it's supposed to work? Is anyone able to assist me with the logic? Ultimately, I'd like to create a simple table from scratch that has a single 20-character-or-so password column stored encrypted.
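For reference, the "simple table from scratch" I have in mind might look roughly like this in STRSQL (the table and column names here are made up; the library and FieldProc program are the ones already used above):
-- hypothetical example table
CREATE TABLE DBMLIB/PASSTAB
  (USERID   VARCHAR(20) NOT NULL,
   USERPASS VARCHAR(20) NOT NULL)
ALTER TABLE DBMLIB/PASSTAB
  ALTER COLUMN USERPASS SET FIELDPROC QGPL.MOBHOMEPAS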
If the code used for the named FieldProc program QGPL.MOBHOMEPAS was modeled after the source code found at the URL in the question, then that code is implemented using the base level of the DB2 for IBM i 7.1 SQL FieldProc support, not the later, enhanced level of support in which the masking feature was added. That is, every invocation other than function-code=8 will always be an Encode or a Decode operation, for which masking of the data is unsupported, because changing the data (with that level of support) would corrupt the data in the table.
Note (from http://www.mcpressonline.com/rpg/db2-field-procedures-finally-support-conditional-masking.html) the difference in the coding requirements between the pre-masking support (eight parameters) and the masking support (nine parameters), which is the prerequisite for having the Run Query (RUNQRY) and Update Data (UPDDTA) features mask the data presented to the user:
The new FieldProc Masking support revolves around two main components. The first component is a new parameter that was added to the parameter lists that the DB2 engine passes to the FieldProc program on each decode call. This new parameter controls whether or not the FieldProc program can return a masked value. There are some DB2 operations, such as the RGZPFM (Reorganize Physical File Member) command and trigger processing, that always require the clear-text version of the data to be returned. The second component is a new special SQLState value ('09501') that is to be returned by the FieldProc program whenever it is passed a masked value on the encode call. This prevents the masked value from being encoded, which would result in the original data value being lost. When this special SQLState value is returned, DB2 will ignore the encoded value that is passed back by the FieldProc program and instead use the value that's currently stored in the record image for that column.
For some reason when I go in to add entries through UPDDTA directly to the file itself and then do a WRKQRY to query the file and view them I don't see them as encrypted. Is this not how it's supposed to work?
No, that's not how it's supposed to work. The data will be encoded on disk only.
When you view the data it will be decoded automatically by the FIELDPROC program no matter what you're using to view it (WRKQRY [yuck], DFU, STRSQL, whatever). This is how it works regardless of field masking (which is different/additional functionality).

Updating an SQLite database via an ODBC linked table in Access

I am having an issue with an SQLite database. I am using the SQLite ODBC driver from http://www.ch-werner.de/sqliteodbc/. I installed the 64-bit version and created the ODBC DSN with these settings:
I open my Access database and link to the datasource. I can open the table, add records, but cannot delete or edit any records. Is there something I need to fix on the ODBC side to allow this? The error I get when I try to delete a record is:
The Microsoft Access database engine stopped the process because you and another user are attempting to change the same data at the same time.
When I edit a record I get:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made.
Saving the record is disabled; only "Copy to Clipboard" or "Drop Changes" is available.
My initial attempt to recreate your issue was unsuccessful. I used the following on my 32-bit test VM:
Access 2010
SQLite 3.8.2
SQLite ODBC Driver 0.996
I created and populated the test table [tbl1] as documented here. I created an Access linked table and when prompted I chose both columns ([one] and [two]) as the Primary Key. When I opened the linked table in Datasheet View I was able to add, edit, and delete records without incident.
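The test table was along these lines (a minimal sketch; the actual DDL came from the page linked above, so treat these types as illustrative):
CREATE TABLE tbl1 (one VARCHAR(10), two SMALLINT);
INSERT INTO tbl1 VALUES ('hello', 10);
INSERT INTO tbl1 VALUES ('goodbye', 20);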
The only difference I can see between my setup and yours (apart from the fact that I am on 32-bit and you are on 64-bit) is that in the ODBC DSN settings I left the Sync.Mode setting at its default value of NORMAL, whereas yours appears to be set to OFF.
Try setting your Sync.Mode to NORMAL and see if that makes a difference.
Edit re: comments
The solution in this case was the following:
One possible workaround would be to create a new SQLite table with all the same columns plus a new INTEGER PRIMARY KEY column, which Access will "see" as AutoNumber. You can create a unique index on (what are currently) the first four columns to ensure that they remain unique, but the new "identity" (ROWID) column is what Access would use to identify rows for CRUD operations.
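A sketch of that workaround (the table and column names are placeholders for the existing four columns):
CREATE TABLE MyTableNew (
    ID   INTEGER PRIMARY KEY,  -- ROWID alias; Access sees it as AutoNumber
    Col1 TEXT, Col2 TEXT, Col3 TEXT, Col4 TEXT
);
CREATE UNIQUE INDEX ux_MyTableNew ON MyTableNew (Col1, Col2, Col3, Col4);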
I had this problem too. I have a table with a primary key on a VARCHAR(30) (TEXT) field.
Adding an INTEGER PRIMARY KEY column didn't help at all. After lots of testing I found the issue was with a DATETIME field I had in the table. I removed the DATETIME field and I was able to update record values in MS-Access datasheet view.
So for any DATETIME fields I need in SQLite, I now declare them as VARCHAR(19) so they come into Access via ODBC as text. Not perfect, but it works. (And of course SQLite doesn't have a real DATETIME field type anyway, so TEXT is just fine and will convert OK.)
I confirmed it's a number conversion issue. With an empty DATETIME field, I can add a time of 01-01-2014 12:01:02 via Access's datasheet view; if I then look at the value in SQLite, the seconds have been rounded off:
sqlite> SELECT three from TEST where FLoc='1020';
2014-01-01 12:01:00.000
Sync.Mode should also be NORMAL, not OFF.
Update:
If you have any text fields with a defined length (e.g. foo VARCHAR(10)) and the field contents contain more characters than the field definition allows (which SQLite permits), MS Access will also barf when trying to update any of the fields on that row.
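For example (a contrived illustration with a made-up table), SQLite will happily accept the following, and Access will then refuse to update that row:
CREATE TABLE t (foo VARCHAR(10));
-- well past the declared length; SQLite does not enforce VARCHAR sizes
INSERT INTO t (foo) VALUES ('this value is much longer than ten characters');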
I've searched all similar posts, as I had a similar issue with SQLite linked via ODBC to Access. I had three tables; two of them allowed edits, but the third didn't. The third one had a DATETIME field, and when I changed the data type to a TEXT field in the original SQLite database and relinked to Access, I could edit the table. So for me it was confirmed as an issue with the DATETIME field.
After running into this problem, not finding a satisfactory answer, and wasting a lot of time trying other solutions, I eventually discovered that what others have mentioned about DATETIME fields is accurate but another solution exists that lets you keep the proper data type. The SQLite ODBC driver can convert Julian day values into the ODBC SQL_TIMESTAMP / SQL_TYPE_TIMESTAMP types by looking for floating point values in the column, if you have that option enabled in the driver. Storing dates in this manner gives the ODBC timestamp value enough precision to avoid the write conflict error, as well as letting Access see the column as a date/time field.
Even storing sub-second precision in the date string doesn't work, which is possibly a bug in the driver because the resulting TIMESTAMP_STRUCT contains the same values, but the fractional seconds must be lost elsewhere.
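A minimal sketch of the Julian-day approach, reusing the TEST table and column names that appear earlier in this thread (the exact schema is assumed, not taken from the original database):
-- julianday() stores the timestamp as a REAL (Julian day number) value
CREATE TABLE TEST (FLoc TEXT PRIMARY KEY, three REAL);
INSERT INTO TEST (FLoc, three) VALUES ('1020', julianday('2014-01-01 12:01:02'));
-- read it back as text to verify the stored value
SELECT datetime(three) FROM TEST WHERE FLoc = '1020';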
