I have a piece of code that reads data from an Excel file and stores it in a database.
`decimal num = Convert.ToDecimal(rowOptions["excel_cl_name"]);`
But it does not work properly for the NL culture. Say the value is 878,90: it should become 878.9000, but it becomes 87890.0000. When the value is 123,5 it should become 123.5000, but it becomes 1235.0000. When the value is 123 it correctly becomes 123.0000.
I cannot just replace the ',' with a space and divide the number by 100, because that fails for the other two cases.
Note: the database column stores four decimal places.
Is there any way I can force the culture to always be en-US?
Is there a better way to handle this?
I can also have values like 12.345,90, which should be read as 12,345.9000.
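A minimal sketch of one way to handle this, assuming the Excel data is always Dutch-formatted (the helper name is illustrative): parse with the nl-NL culture explicitly instead of relying on the thread's current culture.

using System.Globalization;

static class DutchNumbers
{
    // Parse a Dutch-formatted number string ("," is the decimal separator,
    // "." the thousands separator) into a decimal.
    // "878,90" -> 878.90m; "12.345,90" -> 12345.90m; "123" -> 123m
    public static decimal ParseDutchDecimal(string raw)
    {
        CultureInfo nl = CultureInfo.GetCultureInfo("nl-NL");
        return decimal.Parse(raw, NumberStyles.Number, nl);
    }
}

Usage with your lookup: decimal num = DutchNumbers.ParseDutchDecimal(rowOptions["excel_cl_name"].ToString());. Note that forcing the culture to en-US would actually make things worse here, since the input uses Dutch separators; passing the source culture to Parse keeps the conversion independent of whatever culture the server runs under.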
I am working on an existing script where the format of one of the columns is set as:
column A BYTEINT Compress (0,1,2,3,4,5,6,7,8,9,NULL)
The possible values for this field lie between 1 and 899, or NULL. However, when I try to insert the data into the table, I get error 2616, 'Numeric Overflow'.
As a possible solution, I changed the type to INT and it seems to work. However, I am not sure what the impact of changing it to INT would be. In particular, I am not sure about:
What COMPRESS does to the data
Why BYTEINT would have been set in the first place
Thanks
Note: it is a pre-existing script of over 5,000 lines of code.
I am trying to import a .csv file to match records in the database. However, the database records have leading zeros. This is a character field, and the amount of data is on the higher side.
Here the length of the field in the database is x(15).
The problem I am facing is that the .csv file contains data like AB123456789, whereas the database field has "00000AB123456789".
I am importing the .csv to a character variable.
Could someone please let me know what I should do to get the prefixed zeros using a Progress query?
Thank you.
You need to FILL() the input string with "0" in order to pad it to a specific length. You can do that with code similar to this:
define variable inputText as character no-undo format "x(15)".
define variable n         as integer   no-undo.

input from "input.csv".
repeat:
    import inputText.

    /* pad with leading zeros to a total length of 15 */
    n = 15 - length( inputText ).
    if n > 0 then
        inputText = fill( "0", n ) + inputText.

    display inputText.
end.
input close.
Substitute your actual field name for inputText and use whatever mechanism you are actually using for importing the CSV data.
FYI - the "length of the field in the database" is NOT "x(15)". That is a display formatting string. The data dictionary has a default format string that was created when the schema was defined, but it has absolutely no impact on what is actually stored in the database. ALL Progress data is stored as variable-length data. It is not padded to fit the display format and, in fact, it can be "overstuffed"; it is very, very common for applications to do so. This is a source of great frustration to SQL reporting tools that think the display format is some sort of length limit. It is not.
I have a positional flat file schema with a field of type Date. The format is ddMMyy. We have a requirement that 000000 must be allowed in the date field.
When 000000 is passed in the flat file, we get "Date is not in valid Gregorian date format".
To resolve this I tried setting the pad character to 0 and Min Occurs to 0. This makes 000000 a valid value, but then real, valid date values are not accepted.
Apart from a regex expression, is there any way I can resolve this?
If the field might contain "000000" then you can't use a date/datetime type on it.
Instead, treat it as a String for the Flat File.
You should do the conversion from/to the six-character value in a Map. The Flat File properties don't give you enough options.
If you can change the data type, you could create a new type with xsd:union that accepts any date plus a string restricted to "000000".
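A minimal sketch of such a union type in plain XSD (the type name is illustrative, and whether the flat file disassembler then applies the ddMMyy format on top of it is a separate concern):

<!-- accepts either a normal xs:date or the literal "000000" -->
<xs:simpleType name="DateOrAllZeros">
  <xs:union>
    <xs:simpleType>
      <xs:restriction base="xs:date"/>
    </xs:simpleType>
    <xs:simpleType>
      <xs:restriction base="xs:string">
        <xs:enumeration value="000000"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:union>
</xs:simpleType>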
I'm trying to do a query like this on a table with a DATETIME column.
SELECT * FROM table WHERE the_date =
'2011-03-06T15:53:34.890-05:00'
I have the following as a string input from an external source:
2011-03-06T15:53:34.890-05:00
I need to perform a query on my database table and extract the row which contains this same date. In my database it gets stored as a DATETIME and looks like the following:
2011-03-06 15:53:34.89
I can probably manipulate the outside input slightly (like stripping off the -05:00), but I can't figure out how to do a simple SELECT against the DATETIME column.
I found the convert function, and style 123 seems to match my needs, but I can't get it to work. Here is the reference link for style 123:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.blocks/html/blocks/blocks125.htm
I think convert is slightly mis-documented in that version of the docs.
Because this format always has the century, I think you only need to use style 23. Normally the 100 range for convert adds the century to the year format.
What's more, that format only goes down to seconds.
If you want more you'll need to paste together two converts: that is, paste a y-m-d part onto a convert(varchar, datetime-column, 14) and compare with your trimmed string. Milliseconds are likely to be a problem, though, depending on where your big time string came from: the Sybase binary stored form has a granularity of 1/300 of a second, I think, so if your source string is from somewhere else it is not likely to compare equal. In other words, strip the milliseconds and compare as strings.
So maybe:
SELECT * FROM table WHERE convert(varchar,the_date,23) =
'2011-03-06T15:53:34'
But the convert on the column would prevent the use of an index, if that's a problem.
If you compare as datetimes then the convert goes on the right-hand side instead, but you have to know exactly what the milliseconds are in the_date. That way an index can be used.
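A sketch of that index-friendly variant, assuming you know the stored milliseconds (here .890, matching the stored value above):

-- convert the input string, not the column, so an index on the_date can be used;
-- the milliseconds in the literal must match the stored 1/300-second tick exactly
SELECT * FROM table
WHERE the_date = convert(datetime, '2011-03-06 15:53:34.890')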
I have a string variable that receives a value of approximately 1184 characters from a query, but only 263 characters are displayed in the SQR report, which is in CSV format. Please tell me how to get the whole value into my variable. Please help, I am new to SQR reports.
There is not enough info in the question to answer. My best guess is that when the SQR is writing the output to the CSV formatted file, the results are being truncated by the file properties.
In SQR, files are opened with parameters:
!-- File is opened but will write only 300 characters
Open $myFile as 10 For-Writing Record=300
!-- other code
Write 10 From $var1 $comma $var2
!-- other code
If the file is opened for 300 characters, then if the total length of $var1 and $var2 are over 300, you'll have truncated results in the output.
This is my best guess since the truncation didn't happen on a binary number boundary like 255/256 or 511/512.
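If that is the cause, the fix is simply to open the file with a record length large enough for your longest line (the 2000 here is an arbitrary assumption):

!-- allow output lines of up to 2000 characters
Open $myFile as 10 For-Writing Record=2000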
You could potentially increase the string variable size by modifying the SQR initialization file (aka "sqlsize") to raise the maximum allowed size of text string variables, up to 64K-1 bytes if necessary.
The limiting factor in this particular case, however, is not the size of the variable but the maximum allowed length of a single LET command; the default for that is only 2048 bytes, and I believe your environment is set to use the default.
Try increasing that size as described above.
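If raising the limit is not an option, a workaround sketch is to build the long value across several shorter LET statements (variable names are illustrative):

!-- build the long CSV line in pieces so no single LET exceeds the limit
Let $line = $var1 || $comma || $var2
Let $line = $line || $comma || $var3
Write 10 From $line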