Oracle string in DB larger than on UI - asp.net

There is a string that comes from a text field with a 200-character limit. The corresponding column in the Oracle table also has a maximum length of 200 characters. The application crashes, saying it can't write 212 characters into a field with a maximum of 200 characters. The problem is clearly at the DB level, because against another database with an identical table and CRUD code everything works fine.
Suspecting that the problem might be caused by encoding differences, I ran
SELECT * FROM NLS_DATABASE_PARAMETERS;
on both databases. The results are identical; NLS_CHARACTERSET shows AL32UTF8 in both cases. What might be the problem?
P.S. It's an ASP.NET application, if that helps.

If the NLS_LENGTH_SEMANTICS parameter is also the same, maybe the columns are defined differently: VARCHAR2(200 BYTE) vs VARCHAR2(200 CHAR)?
HTH.
Alessandro
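A quick way to check is the CHAR_USED flag in the data dictionary; this is just a sketch, and the table and column names below are placeholders for your own:

SELECT column_name, data_length, char_length, char_used
FROM all_tab_columns
WHERE table_name = 'YOUR_TABLE'        -- placeholder table name
AND column_name = 'YOUR_COLUMN';       -- placeholder column name

CHAR_USED = 'B' means byte semantics and 'C' means character semantics. Switching the column to character semantics would look like:

ALTER TABLE your_table MODIFY (your_column VARCHAR2(200 CHAR));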

Related

How to implement INSERT where not exists for ORACLE in Mule4

I am trying to implement a use-case in Mule4 where a tour needs to be assigned to a user if it has not already been assigned.
I was hoping to implement it with the Mule db:insert component and an INSERT WHERE NOT EXISTS SQL script, as below.
INSERT INTO TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM) select :tourno,:tlid,:system from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO=:tourno and TLID=:tlid and SYSTEM=:system))
However, this results in a Mule exception:
Message : ORA-01722: invalid number
Error type : DB:BAD_SQL_SYNTAX
TL_MAPPING_TOUR table has an id column (Primary Key), but that is auto-generated by a sequence.
The same script, modified to run directly in SQL Developer as shown below, works fine.
INSERT into TL_MAPPING_TOUR(TOURNO,TLID,SYSTEM)
select 'CLLO001474','123456789','AS400'
from DUAL
where not exists(select * from TL_MAPPING_TOUR where (TOURNO='CLLO001474' and TLID='123456789' and SYSTEM='AS400'));
Clearly the Mule db:insert component doesn't like the syntax, but it's not clear to me what is wrong here. I can't find any INSERT WHERE NOT EXISTS example implementation for the Mule4 Database component either.
The Stack Overflow page https://stackoverflow.com/questions/54910330/insert-record-into-sql-server-when-it-does-not-already-exist-using-mule leads to a page-not-found error.
Any idea what is wrong here and how to implement this in Mule4 without using another Mule4 db:select component before db:insert?
I don't know Mule4, but this:
Message : ORA-01722: invalid number
doesn't mean that the syntax is wrong (as you already tested: the same statement works fine in another tool).
Cause: You executed a SQL statement that tried to convert a string to a number, but it was unsuccessful.
Resolution:
The option(s) to resolve this Oracle error are:
Option #1: Only numeric fields or character fields that contain numeric values can be used in arithmetic operations. Make sure that all expressions evaluate to numbers.
Option #2: If you are adding or subtracting from dates, make sure that you added/subtracted a numeric value from the date.
In other words, it seems that one of the columns is declared as NUMBER, while you passed a string. Oracle performed an implicit conversion when you tested the statement in SQL Developer, but it seems that Mule4 didn't, hence the error.
The most obvious cause (based on what you posted) is putting '123456789' into TLID, as the other values are obviously strings. Therefore, pass 123456789 (a number, with no single quotes around it) and see what happens. It should work.
SQL Developer is too forgiving. It will convert strings to numbers and vice versa automatically whenever it can, and it can do so in a lot of cases.
The MuleSoft DB connector tries to do the same, but it is not as successful as the native tools. Quite often it fails to convert, especially dates, but that is not your case.
In short, do not rely too heavily on MuleSoft's type inference. If it works, great! Otherwise, try to take that intelligence out of the picture and do all conversions in the query itself, preferably starting from the string. Usually a plain number works fine, but if it doesn't, use the TO_NUMBER function to state explicitly that the value is a number.
More about this is here https://simpleflatservice.com/mule4/AvoidCoversionsOrMakeThemNative.html
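Assuming TLID is the NUMBER column (an assumption based on the values posted), a sketch of the statement with the conversion made explicit in SQL, rather than left to the connector, could look like this:

INSERT INTO TL_MAPPING_TOUR (TOURNO, TLID, SYSTEM)
SELECT :tourno, TO_NUMBER(:tlid), :system
FROM DUAL
WHERE NOT EXISTS (
    SELECT 1
    FROM TL_MAPPING_TOUR
    WHERE TOURNO = :tourno
      AND TLID = TO_NUMBER(:tlid)    -- explicit conversion; TLID assumed to be NUMBER
      AND SYSTEM = :system
);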

PartitionKey extracted from document doesn't match the one specified in the header

I have created a partitioned collection on a long field (playerId) and also added a hash index on that field (DataType.Number). When I insert records it works most of the time, but sometimes it gives me a "PartitionKey extracted from document doesn't match the one specified in the header" error.
After I tested this in the Azure Data Explorer I found out there's a rounding problem with long numbers. If I insert 183548146777950021 through Data Explorer it will save it, but then return that same record to me as 183548146777950000. Is this a known issue?
I'm using the latest 1.23.2 of the .NET client, in Direct/TCP mode.
If I insert 183548146777950021 through Data Explorer it will save it, but then return that same record to me as 183548146777950000. Is this a known issue?
As far as I know, Azure DocumentDB uses the IEEE 754 standard for numbers, which can cause truncation or loss of precision for large integers or high-precision decimals. If possible, you could modify the model and store the playerId field as the string "183548146777950021".
And you could refer to this similar issue: Azure DocumentDB decimal truncation.
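The root cause is that an IEEE 754 double has only 53 bits of mantissa, so integers above 2^53 (9,007,199,254,740,992) cannot all be represented exactly. As an illustration only (Oracle shown here, not DocumentDB itself), casting the value to a double-precision type loses the low-order digits:

SELECT 183548146777950021                        AS exact_number,
       CAST(183548146777950021 AS BINARY_DOUBLE) AS as_ieee754_double
FROM DUAL;
-- the NUMBER literal keeps every digit; the BINARY_DOUBLE value is rounded
-- to the nearest representable double, so the trailing digits are lost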

Are there issues with an Oracle username starting with a number? - username in quotes

Using Oracle 11gR2
You can't create a username starting with a number:
SQL> create user 123 identified by temp;
create user 123 identified by temp
*
ERROR at line 1:
ORA-01935: missing user or role name
However, you can create it as:
SQL> create user "123" identified by temp;
User created.
Does anybody know of possible problems with this kind of user?
Does anybody know the Oracle rules/reasons why you can't create it without quotes, i.e. why usernames can't start with a number?
Thanks in advance
Problems with quoted identifiers
Quoted identifiers can be successfully used for almost any Oracle object, including users. In theory, they work everywhere. In practice, you will run into many inconveniences and problems with quoted identifiers.
From the SQL Language Reference:
"Note: Oracle does not recommend using quoted identifiers for database object names. These quoted identifiers are accepted by SQL*Plus, but they may not be valid when using other tools that manage database objects."
Once you use double quotes, every reference to that object must use double quotes and the correct case. You'll find lots of problems with tools that don't always use double quotes, and with scripts that look at metadata and don't always add them. Quoted identifiers are just asking for trouble.
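For example, with the "123" user created above, every later statement has to quote the name and match its case exactly (a sketch; the grants shown are arbitrary):

GRANT CREATE SESSION TO "123";
-- GRANT CREATE SESSION TO 123;   -- fails, the unquoted name is invalid
ALTER USER "123" QUOTA UNLIMITED ON users;
-- connecting also needs the quoted name, e.g. in SQL*Plus: CONNECT "123"/temp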
Why does Oracle have quoted identifiers?
This question is harder to answer, but I would guess limiting the types of characters used by objects makes parsing much easier. SQL already has a lot of keywords, and has many weird language ambiguities. If object names started with numbers it would make it difficult to differentiate between real numbers and objects.
For example, without quoted identifiers, this simple statement could be a mess:
select 1.1 + 2.2 from some_table;
Without restricting object names, 1.1 could mean any number of things, and the parser would have to look for an object named "1", then a dependent object also named "1", and then determine whether that takes precedence over the number 1.1.
Weird names are possible in languages, but I assume when someone wrote the first SQL compiler 40 years ago they decided not to make their lives so complicated just to accommodate a few weird names.
Check that the user name is not present in the reserved words and doesn't start with a number:
SELECT *
FROM v$reserved_words
ORDER BY keyword
If you are creating a user, try this:
alter session set "_ORACLE_SCRIPT"=true;
CREATE USER oe IDENTIFIED BY oe;
Check whether your connection type is CDB or not. If it is a CDB, use the prefix c## before the username in the command for creating the user.
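For instance, when connected to the root container of a CDB, creating a common user would look like this (assuming the default C## common-user prefix):

CREATE USER c##oe IDENTIFIED BY oe;      -- common user, prefixed with c##
GRANT CREATE SESSION TO c##oe;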

Updating an SQLite database via an ODBC linked table in Access

I am having an issue with an SQLite database. I am using the SQLite ODBC driver from http://www.ch-werner.de/sqliteodbc/ and installed the 64-bit version, then created the ODBC data source with these settings:
I open my Access database and link to the data source. I can open the table and add records, but I cannot delete or edit any records. Is there something I need to fix on the ODBC side to allow this? The error I get when I try to delete a record is:
The Microsoft Access database engine stopped the process because you and another user are attempting to change the same data at the same time.
When I edit a record I get:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made.
Saving the record is disabled; only copying to the clipboard or dropping the changes is available.
My initial attempt to recreate your issue was unsuccessful. I used the following on my 32-bit test VM:
Access 2010
SQLite 3.8.2
SQLite ODBC Driver 0.996
I created and populated the test table [tbl1] as documented here. I created an Access linked table and when prompted I chose both columns ([one] and [two]) as the Primary Key. When I opened the linked table in Datasheet View I was able to add, edit, and delete records without incident.
The only difference I can see between my setup and yours (apart from the fact that I am on 32-bit and you are on 64-bit) is that in the ODBC DSN settings I left the Sync.Mode setting at its default value of NORMAL, whereas yours appears to be set to OFF.
Try setting your Sync.Mode to NORMAL and see if that makes a difference.
Edit re: comments
The solution in this case was the following:
One possible workaround would be to create a new SQLite table with all the same columns plus a new INTEGER PRIMARY KEY column, which Access will "see" as AutoNumber. You can create a unique index on (what are currently) the first four columns to ensure that they remain unique, but the new "identity" (ROWID) column is what Access would use to identify rows for CRUD operations.
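A sketch of that workaround; the column names are placeholders, since the actual table definition wasn't posted:

CREATE TABLE mytable_new (
    id   INTEGER PRIMARY KEY,   -- alias for ROWID; Access sees it as AutoNumber
    col1 TEXT,
    col2 TEXT,
    col3 TEXT,
    col4 TEXT
);
CREATE UNIQUE INDEX ux_mytable_new ON mytable_new (col1, col2, col3, col4);
INSERT INTO mytable_new (col1, col2, col3, col4)
SELECT col1, col2, col3, col4 FROM mytable;   -- copy the existing data across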
I had this problem too. I have a table with a primary key on a VARCHAR(30) (TEXT) field.
Adding an INTEGER PRIMARY KEY column didn't help at all. After lots of testing I found the issue was with a DATETIME field I had in the table. I removed the DATETIME field and I was able to update record values in MS-Access datasheet view.
So now any DATETIME fields I need in SQLite, I declare as VARCHAR(19) so they come into Access via ODBC as text. Not perfect, but it works. (And of course SQLite doesn't have a real DATETIME field type anyway, so TEXT is just fine and will convert OK.)
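A sketch of that declaration, reusing the TEST/FLoc/three names that appear just below (the real schema wasn't posted):

CREATE TABLE TEST (
    FLoc  TEXT PRIMARY KEY,
    three VARCHAR(19)            -- 'YYYY-MM-DD HH:MM:SS', stored as TEXT by SQLite
);
INSERT INTO TEST (FLoc, three) VALUES ('1020', '2014-01-01 12:01:02');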
I confirmed it's a number conversion issue. With an empty DATETIME field, I can add a time of 01-01-2014 12:01:02 via Access's datasheet view; if I then look at the value in SQLite, the seconds have been rounded off:
sqlite> SELECT three from TEST where FLoc='1020';
2014-01-01 12:01:00.000
Sync.Mode should also be NORMAL, not OFF.
Update:
If you have any text fields with a defined length (e.g. foo VARCHAR(10)) and the field contents contain more characters than the declared length (which SQLite allows), MS Access will also barf when trying to update any of the fields on that row.
I've searched all the similar posts, as I had a similar issue with SQLite linked via ODBC to Access. I had three tables; two of them allowed edits, but the third didn't. The third one had a DATETIME field, and when I changed the data type to a TEXT field in the original SQLite database and relinked to Access, I could edit the table. So for me it was confirmed as an issue with the DATETIME field.
After running into this problem, not finding a satisfactory answer, and wasting a lot of time trying other solutions, I eventually discovered that what others have mentioned about DATETIME fields is accurate but another solution exists that lets you keep the proper data type. The SQLite ODBC driver can convert Julian day values into the ODBC SQL_TIMESTAMP / SQL_TYPE_TIMESTAMP types by looking for floating point values in the column, if you have that option enabled in the driver. Storing dates in this manner gives the ODBC timestamp value enough precision to avoid the write conflict error, as well as letting Access see the column as a date/time field.
Even storing sub-second precision in the date string doesn't work, which is possibly a bug in the driver because the resulting TIMESTAMP_STRUCT contains the same values, but the fractional seconds must be lost elsewhere.
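A sketch of the Julian-day approach (table and column names are placeholders, and the driver option mentioned above still has to be enabled in the DSN):

CREATE TABLE events (
    id    INTEGER PRIMARY KEY,
    stamp REAL                   -- Julian day number instead of a date string
);
INSERT INTO events (stamp) VALUES (julianday('2014-01-01 12:01:02'));
SELECT datetime(stamp) FROM events;   -- convert back to readable form in SQLite itself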

storing and retrieving large strings in SQLite with ADO and VBScript

I am using VBScript, ADO and the SQLite ODBC driver to store and retrieve large strings (~5 KB). Storing them works fine, maybe because I am able to specify a size when I bind the parameters of the insert statement. When I try to retrieve those strings, however, I correctly get the first 256 (or 255) characters, but the rest seems to come from a random memory area. What am I doing wrong (besides using VBScript and ADO...)?
I'm open to the idea of storing the text as binary data, but the functions I tried for retrieving it later didn't work.
GetChunk will not work on every record field, as noted on MSDN; the adFldLong field attribute states whether GetChunk can be used on that field.
For some fields you must use a SQL query to retrieve the length of the data instead of using the ActualSize attribute.
There is a good example here: http://kek.ksu.ru/eos/ecommerce/masteringasp/18-06.html
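A sketch of the SQL-side approach, assuming a table named docs with a TEXT column named body (both names are placeholders): read the length first, then pull the text back in fixed-size slices with substr, so no single fetch depends on ADO reporting the field size correctly.

SELECT length(body) FROM docs WHERE id = 1;           -- total length of the stored text
SELECT substr(body, 1, 255)   FROM docs WHERE id = 1; -- first slice (substr offsets are 1-based)
SELECT substr(body, 256, 255) FROM docs WHERE id = 1; -- next slice, and so on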
