mysqli_fetch_field_direct wrong result - MariaDB

I was using mysqli_fetch_field_direct on MySQL to get the length of fields, like this:
$tab_field = mysqli_fetch_field_direct($result_fields, $j);
$long = $tab_field->length;
After creating a varchar(100) in phpMyAdmin, I got back 100 as my varchar length, which was correct.
Now I'm using MariaDB, and the same call to mysqli_fetch_field_direct for the same field gives me 300. I accept that, depending on the encoding, this may be the "internal size", but I need to know the number of characters I can store, so I need to get back 100.
I notice that phpMyAdmin returns 100 when it shows the "structure" of the table, but it seems to use a SHOW query rather than fetch_field_direct.
Any idea?

Let's see SHOW CREATE TABLE. I'll guess that the column is declared CHAR(100), that the old table was CHARACTER SET latin1, and that the MariaDB one is CHARACTER SET utf8. In utf8 each character can take up to 3 bytes, so the reported length is 3 × 100 = 300.
Generally VARCHAR is preferred over CHAR. (You found one reason.)
Generally utf8 is preferred these days.
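If you need the declared length in characters rather than bytes, one charset-independent option is to query information_schema instead of reading the result metadata. A sketch, where the schema, table, and column names are placeholders to fill in:

SELECT CHARACTER_MAXIMUM_LENGTH, CHARACTER_OCTET_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_db'     -- placeholder: your database name
  AND TABLE_NAME = 'your_table'    -- placeholder: your table name
  AND COLUMN_NAME = 'your_column'; -- placeholder: the varchar(100) column

CHARACTER_MAXIMUM_LENGTH should come back as 100, while CHARACTER_OCTET_LENGTH would show 300 under utf8.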

Related

How to implement INSERT where not exists for ORACLE in Mule4

I am trying to implement a use-case in Mule4 where a tour needs to be assigned to a user if it has not already been assigned.
I was hoping that I could implement it using the Mule db:insert component and an INSERT ... WHERE NOT EXISTS SQL script, as below.
INSERT INTO TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM) select :tourno,:tlid,:system from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO=:tourno and TLID=:tlid and SYSTEM=:system))
However, this is resulting in a Mule exception:
Message : ORA-01722: invalid number
Error type : DB:BAD_SQL_SYNTAX
The TL_MAPPING_TOUR table has an id column (the primary key), but that is auto-generated by a sequence.
The same script, modified to run directly in SQL Developer as shown below, works fine.
INSERT into TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM)
select 'CLLO001474','123456789','AS400'
from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO='CLLO001474' and TLID='123456789' and SYSTEM='AS400'));
Clearly the Mule db:insert component doesn't like the syntax, but it's not clear to me what is wrong here. I can't find any INSERT ... WHERE NOT EXISTS example implementation for the Mule4 Database component either.
The Stack Overflow page https://stackoverflow.com/questions/54910330/insert-record-into-sql-server-when-it-does-not-already-exist-using-mule leads to a page not found.
Any idea what is wrong here, and how to implement this in Mule4 without using another db:select component before db:insert?
I don't know Mule4, but this:
Message : ORA-01722: invalid number
doesn't mean that the syntax is wrong (as you already tested it: the same statement works fine in another tool).
Cause: You executed a SQL statement that tried to convert a string to a number, but it was unsuccessful.
Resolution:
The option(s) to resolve this Oracle error are:
Option #1: Only numeric fields or character fields that contain numeric values can be used in arithmetic operations. Make sure that all expressions evaluate to numbers.
Option #2: If you are adding or subtracting from dates, make sure that you added/subtracted a numeric value from the date.
In other words, it seems that one of the columns is declared as NUMBER, while you passed a string. Oracle performed an implicit conversion when you tested the statement in SQL Developer, but it seems that Mule4 didn't, hence the error.
The most obvious cause (based on what you posted) is putting '123456789' into TLID, as the other values are obviously strings. Therefore, pass 123456789 (a number, with no single quotes around it) and see what happens. It should work.
SQL Developer is too forgiving. It will convert strings to numbers and vice versa automatically when it can, and it can a lot.
The Mulesoft DB connector tries to do the same, but it is not as successful as the native tools. Pretty often it fails to convert, especially on dates, but that is not your case.
In short: do not put too much trust in Mulesoft's type inference. If it works, great! Otherwise, try to eliminate any guessing on its part and do all conversions in the query itself. Usually a plain number works fine, but if it doesn't, use the TO_NUMBER function to state explicitly that the value is a number, as in the sketch below.
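A minimal sketch of that advice applied to the statement from the question, assuming TLID is the NUMBER column (the bind names are the ones from the original script):

INSERT INTO TL_MAPPING_TOUR (TOURNO, TLID, SYSTEM)
SELECT :tourno, TO_NUMBER(:tlid), :system
FROM DUAL
WHERE NOT EXISTS (
    SELECT * FROM TL_MAPPING_TOUR
    WHERE TOURNO = :tourno
      AND TLID = TO_NUMBER(:tlid)
      AND SYSTEM = :system
);

With the conversion spelled out in the query, the connector no longer has to guess the bind type.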
More about this is here https://simpleflatservice.com/mule4/AvoidCoversionsOrMakeThemNative.html

Update query to append zeroes into blob field with SQLiteStudio

I'm trying to append zeroes to a blob field in an SQLite3 database.
What I tried is this:
UPDATE Logs
SET msg_buffer = msg_buffer || zeroblob(1)
WHERE msg_id = 'F0'
As you can see, the field name is "msg_buffer", and what I want is to append a zero byte. It seems that the concatenation operator || doesn't work.
How could I achieve this?
Reading the doc link posted by Googie (3.2. Affinity Of Expressions), I managed to find the way:
UPDATE Logs
SET msg_buffer = CAST(msg_buffer || zeroblob(1) AS blob)
WHERE msg_id = 'F0'
The CAST operator can take the expression with the concatenation operator and force blob affinity.
SQLite3 does support datatypes. See https://www.sqlite.org/datatype3.html
They are not strictly linked to the declared type of a column; instead, each individual cell value carries its own type, determined by how it was created or modified. For example, if you insert 5 it will be INTEGER; if you insert 5.5 it will be REAL; if you insert 'test' it will be TEXT; if you insert zeroblob(1) it will be BLOB; and if you insert null it will be NULL.
Now, what you are doing is concatenating the current value with a BLOB. The || operator converts any operand to TEXT, so the result is TEXT, concatenated with the byte \x00, which marks the end of a string. In other words, you are adding yet another string terminator to the one the TEXT value already has.
There will be no visible change from this operation: TEXT always ends with a zero byte, and that byte is always excluded from the result, as it's a meta character, not part of the value itself.
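A quick way to see the affinity difference directly (a sketch against the table from the question; typeof() is built into SQLite):

SELECT typeof(msg_buffer || zeroblob(1)) AS concat_type,
       typeof(CAST(msg_buffer || zeroblob(1) AS blob)) AS cast_type
FROM Logs
WHERE msg_id = 'F0';

Assuming the value is not NULL, the first column comes back 'text' and the second 'blob', which is why the CAST in the accepted answer makes the append stick.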
Additional information from http://sqlite.1065341.n5.nabble.com/Append-data-to-a-BLOB-field-td46003.html - appending binary data to a BLOB field is not possible. You can, however, modify a preallocated blob:
Append is not possible. But if you preallocate space using
zeroblob() or similar, you can write to it using the incremental
blob API:
http://www.sqlite.org/c3ref/blob_open.html
Finally, please see the accepted answer, as the author of the question found an interesting solution.

How to extract a Teradata .TPT file with UTF-8 encoding

We are currently extracting several Teradata .TPT files that we will upload to AWS S3, but the files are coming out with ANSI encoding.
I need them to be encoded in UTF-8.
You must specify the character set in your TPT script. At the top add:
USING CHARACTER SET UTF8
The tricky part is that UTF8 here has 3 bytes per character, so in your DEFINE SCHEMA you must triple the size of each field.
For example if your schema looks like:
DEFINE SCHEMA s_some_export
(
status VARCHAR(20),
userid VARCHAR(20),
firstname VARCHAR(64)
);
You'll have to triple the values to accommodate your UTF8 characters:
DEFINE SCHEMA s_some_export
(
status VARCHAR(60),
userid VARCHAR(60),
firstname VARCHAR(192)
);
Sometimes, because I'm lazy, I define my TPT with USING CHARACTER SET UTF16 so that I only need to double each field size (the math is easier). BUT it means I have to convert the output to UTF8 after extraction. On Linux this is just iconv -f UTF-16LE -t UTF-8 myoutputfile.csv > myoutputfile.utf8.csv
Some caveats:
If your table's field is defined as CHAR with CHARACTER SET LATIN, then you may run into column size issues with your schema (see here).
Dates and timestamps can get weird, as they don't need to be doubled, so defining them as VARCHAR in your schema can get you into trouble. You may have to fuss around a bit here. My suggestion would be to change the view from which you are selecting the data for your TPT so it does CAST(yourdate AS VARCHAR(10)) AS yourdate (sketched below), and then use VARCHAR(30) in your schema, so you don't have to think about field types while defining it. This means extra CPU overhead in your extraction, but unless you are running tight on resources I think it's worth it. I'm also very lazy that way and always happy to just get the damned TPT to extract data without much debugging.
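A sketch of that view-level cast, with v_some_export, some_table, and yourdate as hypothetical names:

REPLACE VIEW v_some_export AS
SELECT status,
       userid,
       CAST(yourdate AS VARCHAR(10)) AS yourdate
FROM some_table;

With the date already a fixed-width string, the matching DEFINE SCHEMA entry can simply be yourdate VARCHAR(30).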

SQLite/FoxPro Update can't find record using = but finds it using LIKE

I'm converting a FoxPro database to SQLite and migrating the update statements, and I found a problem.
If inside FoxPro I use Update Fact01 set Motivo = 'asdfgh' where TipoDoc='FV', the rows are not updated.
But if I use Update Fact01 set Motivo = 'asdfgh' where TipoDoc Like 'FV', the rows are changed.
If I run the first statement inside the SQLite engine, the rows are also changed. The field type for TipoDoc is NChar(2).
Also, if I do a select * from Fact01 where TipoDoc ='FV' statement inside FoxPro, it works OK.
Any idea what's happening here?
I'm not sure if it's due to the fact that NChar can store Unicode data, or to the way the data in general is stored. Wrapping ALLTRIM around the field in the WHERE clause may correct the problem.
Update Fact01 set Motivo = 'asdfgh' where ALLTRIM(TipoDoc)='FV'
This happened to me because I stored the column value as a BLOB instead of TEXT. It turns out a BLOB value compares as less than a TEXT value, so LIKE found matches but = did not. You might be having the same issue.
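One way to check which of the two explanations applies, if you can query the SQLite database directly (a diagnostic sketch; typeof, length, and hex are built-in SQLite functions):

SELECT TipoDoc,
       typeof(TipoDoc) AS stored_type, -- 'text' vs 'blob'
       length(TipoDoc) AS len,         -- more than 2 suggests padding
       hex(TipoDoc) AS raw_bytes       -- trailing 20s are spaces
FROM Fact01
LIMIT 5;

If stored_type says 'blob', you are in the situation described above; if len is greater than 2, the values are padded and trimming is the fix.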

Storing French (decimal values) in database?

I have my form set in French as well, and it automatically changes the text format to use ','. However, when I try to insert my values into the database, it says it cannot convert nvarchar to decimal.
Worst case, is there a way I can stop the numbers from changing to use ',' and just always use '.', regardless of the language?
My working language is VB.NET.
Thanks,
Robert
If you're passing the values down to the database as nvarchar, then you'll need to convert them to a string using yourDecimalValue.ToString(Globalization.CultureInfo.InvariantCulture) or similar. SQL Server will always expect a decimal to be in 1.23 format: you can imagine the trouble that would result if queries including WHERE myvalue IN (1,25, 1,33, 1,45) were submitted!
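On the SQL Server side you can see why the dot is required (a quick T-SQL sketch; the variable is hypothetical):

DECLARE @amount DECIMAL(10,2);
SET @amount = CAST('1.25' AS DECIMAL(10,2)); -- works
-- SET @amount = CAST('1,25' AS DECIMAL(10,2)); -- fails: error converting varchar to numeric

The safer route is to skip the string round-trip entirely and pass the value as a decimal-typed parameter in a parameterized query.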
