Numbers shown in exponential notation ("power mode") in PL/SQL Developer - how to switch to the plain digit sequence

In PL/SQL Developer, I have a table column with format NUMBER(19).
Longer numbers are displayed in exponential notation instead of as a plain digit sequence, e.g.
1,2345678901234E15
instead of
1234567890123456
So the last digits are lost, and they are also lost when I export the results as .csv or .xls.
How can I change the display to the full digit sequence?

A colleague found the solution in https://forums.allroundautomations.com/ubb/ubbthreads.php?ubb=showflat&Number=61009:
[Configure >] Preferences > [Window Types >] SQL Window > (Checkbox:) Number fields to_char
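Independently of that preference, you can force the full digit sequence in the query itself by converting the column to text with TO_CHAR. A minimal sketch, assuming a hypothetical table big_table with a NUMBER(19) column big_id:

    -- TO_CHAR returns all digits as text, so nothing is rounded
    -- on display or on export to .csv/.xls.
    SELECT TO_CHAR(big_id) AS big_id
    FROM   big_table;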

Related

Add leading zeros to a character variable in progress 4gl

I am trying to import a .csv file to match the records in the database. However, the database records have leading zeros. This is a character field, and the amount of data is fairly large.
The format of the field in the database is x(15).
The problem I am facing is that the .csv file contains data like AB123456789, whereas the database field has "00000AB123456789".
I am importing the .csv into a character variable.
Could someone please let me know how to add the leading zeros in a Progress query?
Thank you.
You need to FILL() the input string with "0" in order to pad it to a specific length. You can do that with code similar to this:

    define variable inputText as character no-undo format "x(15)".
    define variable n         as integer   no-undo.

    input from "input.csv".
    repeat:
        import inputText.
        /* number of zeros needed to reach 15 characters */
        n = 15 - length( inputText ).
        if n > 0 then
            inputText = fill( "0", n ) + inputText.
        display inputText.
    end.
    input close.
Substitute your actual field name for inputText and use whatever mechanism you are actually using for importing the CSV data.
FYI - the "length of the field in the database" is NOT "x(15)". That is a display format string. The data dictionary has a default format string that was created when the schema was defined, but it has absolutely no impact on what is actually stored in the database. ALL Progress data is stored as variable length. It is not padded to fit the display format and, in fact, it can be "overstuffed"; it is very, very common for applications to do so. This is a source of great frustration to SQL reporting tools that think the display format is some sort of length limit. It is not.

Pasting SQL decimal columns into Excel

I have an issue with the data formats of Excel and SQL.
I have a column in SQL with datatype DECIMAL(18,0), and when I paste the result from SQL into Excel, the last 3 digits get replaced by 0.
Example:
In SQL, the result set has a column called session id containing decimal numbers like
119,597,417,242,309,670
329,621,151,415,350,454
134,460,940,261,658,890
but when I paste them into Excel, the trailing digits are replaced with zeros (screenshot omitted).
I tried changing the format in Excel to paste as text; however, the whole format of the result set gets distorted (and only the first column gets pasted properly, without the 0's).
I can't keep casting all columns in SQL from decimal to int, as there are way too many columns.
Can you please guide me as to what I can do?
Numeric fields in Excel are limited to 15 digits of precision.
In SQL Assistant under Tools / Options / Data Format you can ask to have large Decimal (and BIGINT) fields displayed as text for just this sort of copy / paste. Or you can tell SQL Assistant to Save As or Export to Excel format.
For other tools you can explicitly FORMAT and CAST the data to VARCHAR in your SELECT so it is retrieved as text.
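For example, a sketch of the CAST approach, with placeholder table/column names (cast syntax may vary slightly by SQL dialect):

    -- Retrieve the 18-digit value as text so Excel cannot round it.
    SELECT CAST(session_id AS VARCHAR(20)) AS session_id_txt
    FROM   my_sessions;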
Several things you can do; I'll list 4. Pick whatever suits you best.
1. First paste into a text editor (like Notepad), search/replace there, and paste that.
2. Set the data range where you're going to paste to "text", and then paste. After that you can search/replace and change to the correct format.
3. Change the regional settings of Windows to match the data that you have.
4. You can generate formulas from your SQL query instead of floating point numbers. So generate a text like =5/10 instead of 0.5 or 0,5. Excel will pick it up correctly regardless of your regional settings. A sketch of this trick follows below.
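One hedged sketch of option 4, generating a text formula so Excel keeps every digit; my_sessions and session_id are placeholder names, and the || concatenation is ANSI syntax (SQL Server would use + instead):

    -- Emits cells like ="119597417242309670"; Excel stores the
    -- quoted value as text instead of rounding it to 15 digits.
    SELECT '="' || CAST(session_id AS VARCHAR(20)) || '"' AS excel_cell
    FROM   my_sessions;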

Get the correct Hexadecimal for strange symbol

I have this strange symbol in my PL/SQL Developer client (see the image; it's the symbol between P and B).
In the past, and for a different symbol, I was able to update my DB and remove it with:

    update table set ent_name = replace(ent_name, UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('C29B')), ' ');

The problem is that I don't remember how I translated the symbol (that I had at that time) to C29B.
Can you help me understand how to translate the current symbol to HEX format, so I can use the command to remove it from my database?
Thanks
As long as it's in your table, you can use the DUMP function to find it.
Use DUMP to get the byte representation of any data you wish to inspect for weirdness.
A good overview: Oracle / PLSQL: DUMP Function
Here's some text with plain ASCII:

    select dump('Dashes-and "smart quotes"') from dual;

    Typ=96 Len=25:
    68,97,115,104,101,115,45,97,110,100,32,34,115,109,97,114,116,32,113,117,111,116,101,115,34

Now introduce funny characters:

    select dump('Dashes—and “smart quotes”') from dual;

    Typ=96 Len=31:
    68,97,115,104,101,115,226,128,148,97,110,100,32,226,128,156,115,109,97,114,116,32,113,117,111,116,101,115,226,128,157

In this case, the number of bytes increased because my DB is using UTF8. Numbers outside the valid range for ASCII stand out and can be inspected further.
The ASCIISTR function provides an even more convenient way to see the special characters:

    select asciistr('Dashes—and “smart quotes”') from dual;

    Dashes\2014and \201Csmart quotes\201D

This one converts non-ASCII characters into backslashed Unicode hex.
The DUMP function takes an additional format argument that can be used to format the output in a nicer way:

    select DUMP('Thumbs 👍', 1017) from dual;

    Typ=96 Len=11 CharacterSet=AL32UTF8: T,h,u,m,b,s, ,f0,9f,91,8d

    select DUMP('Smiley 😊 Face', 17) from dual;

    Typ=96 Len=16: S,m,i,l,e,y, ,f0,9f,98,8a, ,F,a,c,e
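Once DUMP (or ASCIISTR) has revealed the bytes, you can plug them into the REPLACE pattern from the question. A sketch using the left smart quote from the example above (bytes e2,80,9c; my_table is a placeholder name):

    -- Replace the UTF-8 bytes of U+201C (left smart quote) with a plain quote.
    update my_table
    set    ent_name = replace(ent_name,
                              UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('E2809C')),
                              '"');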

Why does one AL32UTF8 character not display the I-acute, yet the other one displays the tilde-N?

My Oracle 11g is configured with AL32UTF8
NLS_CHARACTERSET AL32UTF8
Why does the tilde-N display as tilde-N in the second record, but the acute-I and K do not display as acute-I and K in the first record?
Additional information:
The hex code for the acute-I is CD.
When I take the HEX code from the dump and convert it using UNISTR(), the character displays with the accent.
    select unistr('\0052\0045\0059\004B\004A\0041\0056\00CD\004B') as hex_to_unicode
    from dual;
This is more likely an issue with whatever client you are using to display the results than with your database. What are you using?
You can check if the database results are correct using the DUMP function. If the value in your table has the correct byte sequence for your database character set, you're good.
Edit:
OK, I'm pretty sure your data is bad. You're talking about LATIN CAPITAL LETTER I WITH ACUTE, which is Unicode code point U+00CD. That is not the same as the byte 0xCD. You're using database character set AL32UTF8, which uses UTF-8 encoding. The correct UTF-8 encoding for the U+00CD character is the two-byte sequence 0xC3 0x8D.
What you have is the UTF-8 byte sequence 0xCD 0x4B, which I'm pretty sure is invalid.
The Oracle UNISTR function takes the code point in UCS-2 encoding, which is roughly the same as UTF-16, not UTF-8.
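You can verify the encoding by dumping the output of UNISTR; a minimal sketch, with the expected output assuming an AL32UTF8 database (DUMP format 16 prints the bytes in hex):

    -- U+00CD is stored as the two UTF-8 bytes c3,8d, not as the single byte cd.
    select dump(unistr('\00CD'), 16) as utf8_bytes from dual;

    Typ=1 Len=2: c3,8d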
Demonstration here: http://sqlfiddle.com/#!4/7e9d1f/1

Maximum Length of Value in R Data Frame, RODBC

I am trying to do a simple query of a DB2 database using the RODBC package in R (myQuery <- sqlQuery(channel, paste0("..."))). One of the columns is a VARCHAR of length 3000. The resulting data frame shows NA in that column when there should be text. Exporting it to .csv also only shows NA. A query in Access shows an odd character encoding (only after clicking on the cell). Is there a maximum length of a value in an R data frame, or a maximum length of a field that can be pulled using RODBC? Or is it the encoding of the field that causes the NA to appear?
I did an end to end test on DB2 (LUW 9.7) and R (3.2.2 Windows) and it worked fine for me.
SQL code:

    create table test (foo varchar(3000));
    -- actual insert is 3000 chars
    insert into test values ('aaaaaa .... a');
    -- this select worked fine in my normal SQL client
    select * from test;

R code:

    long = sqlQuery(connection, "select * from test");
    # Displays the 3000-character value.
    long;
My guess is that the problem is for some other reason than simply the size of the field:
1. Character encoding issues. If you are seeing something funny in Access, perhaps the content of the field is something not acceptable in the character encoding R is using, so it is being discarded. (I'm not familiar with character encoding in R in particular, but it is in general a thorny issue for software development.)
2. Overall size of the results. Maybe the problem is due to the overall length of a row rather than the length of a single field. Is the query also returning lots of other stuff? Have you tried a simple test of just this field? (See the sketch below.)
3. Problem in another version. Maybe you are using a different version than I was, and there is indeed a problem with your version. If you think so, update your question with more information.
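One way to run that isolation test, sketched with placeholder names (my_table / long_col); SUBSTR, LENGTH, and FETCH FIRST are standard DB2 SQL:

    -- Pull only a prefix and the length of the suspect column, so the
    -- overall row width and the full 3000-char value are ruled out separately.
    select substr(long_col, 1, 100) as first_100,
           length(long_col)         as len
    from   my_table
    fetch first 5 rows only;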
