Numeric overflow on using exponent function - Teradata

I have written a SELECT statement that uses the EXP function to return the exponential of a value. However, it fails with an error: SELECT Failed. [2616] Numeric overflow occurred during computation.
The raw number itself is a FLOAT type, and when I select a few records there are no problems. But when I run it for all records it fails. It obviously has to do with either EXP not liking some of the raw values, or the resulting exponential exceeding the range of the data type.
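Most likely some rows hold values large enough that EXP overflows the FLOAT result: Teradata's FLOAT is an IEEE 754 double, so EXP(x) overflows once x exceeds roughly 709.78 (the natural log of the largest representable value). A sketch with hypothetical table and column names (my_table, raw_value) to find the offending rows and to guard the call:

-- Find the rows whose EXP() would overflow (threshold is LN of the FLOAT maximum, ~709.78)
SELECT raw_value
FROM my_table
WHERE raw_value > 709.78;

-- Guard the call so those rows return NULL instead of failing the whole query
SELECT CASE WHEN raw_value <= 709.78 THEN EXP(raw_value) END AS exp_value
FROM my_table;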

How does subtracting .as_ptr() values work?

Looking at a code snippet for parsing HTTP requests provided as part of the Tokio examples, I see the following code:
let toslice = |a: &[u8]| {
    let start = a.as_ptr() as usize - src.as_ptr() as usize;
    assert!(start < src.len());
    (start, start + a.len())
};
As I understand the above code snippet, it is getting the pointer location of the input slice a and the pointer location of a variable outside the scope of the closure and subtracting them. Then it returns a tuple containing this calculated value and the calculated value plus the length of the input slice.
What is this trying to accomplish? The subtraction could underflow (the result would be negative, which a usize cannot represent) and panic. In fact, when I compile the example, this is exactly what happens when the input is the bytes for the string GET or POST, but not for other values. Is this a performance optimization for doing some sort of substring operation on a slice?
Yes, this is just subtracting pointers. The missing context is that the closure clearly intends a to be a subslice (substring) of the closed-over src slice. Thus toslice(a) returns the start and end indices of a inside src. More explicitly, let (start, end) = toslice(a); means that src[start] is a[0] (not just equal in value, they are the same address), and src[end - 1] is the last byte of a.
Violating this assumption will likely produce panics. That's fine, because this closure is a local variable that is only used locally and not exposed to any unknown callers, so the only calls to it are in the example you linked, and they evidently all satisfy the constraint.
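To make that concrete, here is a minimal, self-contained sketch (not taken from the Tokio example) showing that when a really is a subslice of src, the pointer subtraction recovers its index range:

fn main() {
    let src: &[u8] = b"GET /index.html HTTP/1.1";
    let a: &[u8] = &src[4..15]; // "/index.html", a true subslice of src

    // Same shape as the closure in the example: recover a's position inside src.
    let toslice = |a: &[u8]| {
        let start = a.as_ptr() as usize - src.as_ptr() as usize;
        assert!(start < src.len());
        (start, start + a.len())
    };

    let (start, end) = toslice(a);
    assert_eq!(&src[start..end], a); // the recovered range is exactly a
    println!("a occupies src[{}..{}]", start, end);
}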

OpenCL convert_long from NaN Producing Incorrect Result on RTX 2060

OpenCL's convert_T (OpenCL 1.2, others are similar) function for long is producing an odd result.
In particular the function definition states:
Conversions to integer type may opt to convert using the optional
saturated mode by appending the _sat modifier to the conversion
function name. When in saturated mode, values that are outside the
representable range shall clamp to the nearest representable value in
the destination format. (NaN should be converted to 0).
However, on an NVIDIA RTX 2060 I am getting the most negative integral value for NaN inputs. For instance, consider the following kernel given NaN inputs such as 0x7FC00001 and 0xFFC00001.
kernel void test(
    const global uint *srcF)
{
    uint id = get_global_id(0);
    float x = ((global float *)srcF)[id];
    long x_rte = convert_long_sat_rte(x);
    if (isnan(x) && x_rte == 0x8000000000000000) {
        printf("0x%016llx: oops! 0x%08X fails to generate zero\n", x_rte, srcF[id]);
    }
}
On an NVIDIA RTX 2060 I see:
0x8000000000000000: oops! 0x7FC00001 fails to generate zero
0x8000000000000000: oops! 0xFFC00001 fails to generate zero
It seems to generate 0x8000000000000000 (the most negative long) instead of the expected value of 0. On an Intel HD 630 I get 0's as expected. I also noticed that some conversions from double to other integral types with convert_T_sat fail in the same way (returning the most negative integral value).
My question: am I missing something here? Am I misunderstanding the above spec? I know a typical (unsaturated) conversion has ill-defined behavior outside the representable range, but this explicit saturated conversion seems to clearly say NaNs must be converted to 0. Still, this seems like an obvious conformance test that the driver must have gone through, and I suspect myself of screwing up here.
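A practical workaround (a sketch, not a fix for the driver) is to make the NaN-to-zero mapping explicit in the kernel so that it no longer depends on the implementation; isnan() and the ternary operator are core OpenCL C, and the dst output buffer is added here just for illustration:

kernel void test_workaround(const global uint *srcF, global long *dst)
{
    uint id = get_global_id(0);
    float x = ((global float *)srcF)[id];
    // Map NaN to 0 ourselves instead of relying on convert_long_sat_rte to do it.
    dst[id] = isnan(x) ? (long)0 : convert_long_sat_rte(x);
}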

SQLite Affinity Bug? [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
EDIT:
The answer here: Is floating point math broken? assists in understanding this question. However, this question is not language agnostic. It is specific to the documented behavior and affinity of floating point numbers as handled by SQLite. Having a very similar answer to a different question != duplicate question.
QUESTION:
I have a rather complex SQLite WHERE clause comparing numerical values. I have read, and "think" I understand, the datatype documentation here: https://www.sqlite.org/datatype3.html
Still, I am confused as to the logic SQLite uses to determine datatypes in comparison clauses such as =, >, <, <>, etc. I can narrow my example down to this bit of test SQL, whose results make little sense to me.
SELECT
CAST(10 AS NUMERIC) + CAST(254.53 AS NUMERIC) = CAST(264.53 AS NUMERIC) AS TestComparison1,
CAST(10 AS NUMERIC) + CAST(254.54 AS NUMERIC) = CAST(264.54 AS NUMERIC) AS TestComparison2
Result: "1" "0"
The second expression in the select statement (TestComparison2) is converting the left-hand side of the equation to a TEXT value. I can prove this by casting the right-hand side of the equation to TEXT, and the result is then 1.
Obviously I'm missing something in the way SQLite computes Affinity. These are values coming from columns in a large/complex query. Should I be casting both sides of the equations in WHERE/Join Clauses to TEXT to avoid these issues?
The reason you are not getting the expected result is that the underlying values are floating point: 10 + 254.54 does not evaluate to exactly the same double as 264.54, whereas 10 + 254.53 happens to.
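You can see the discrepancy directly by subtracting the two sides (a quick sketch; the exact tiny value shown depends on how SQLite formats REAL values):

SELECT (10 + 254.53) - 264.53 AS diff1,  -- 0.0: the two sides are the same double
       (10 + 254.54) - 264.54 AS diff2;  -- a tiny non-zero value, so the = comparison is false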
Although DataTypes in SQLite3 covers much, you should also consider the following section from Expressions :-
Affinity of type-name Conversion Processing
NONE
Casting a value to a type-name with no affinity causes the value to be converted into a BLOB. Casting to a BLOB consists of first
casting the value to TEXT in the encoding of the database connection,
then interpreting the resulting byte sequence as a BLOB instead of as
TEXT.
TEXT
To cast a BLOB value to TEXT, the sequence of bytes that make up the BLOB is interpreted as text encoded using the database
encoding.
Casting an INTEGER or REAL value into TEXT renders the value as if via
sqlite3_snprintf() except that the resulting TEXT uses the encoding of
the database connection.
REAL
When casting a BLOB value to a REAL, the value is first converted to TEXT.
When casting a TEXT value to REAL, the longest possible prefix of the
value that can be interpreted as a real number is extracted from the
TEXT value and the remainder ignored. Any leading spaces in the TEXT
value are ignored when converting from TEXT to REAL.
If there is no prefix that can be interpreted as a real number, the
result of the conversion is 0.0.
INTEGER
When casting a BLOB value to INTEGER, the value is first converted to TEXT.
When casting a TEXT value to INTEGER, the longest possible prefix of the value that can be interpreted as an integer number is extracted
from the TEXT value and the remainder ignored. Any leading spaces in
the TEXT value when converting from TEXT to INTEGER are ignored.
If there is no prefix that can be interpreted as an integer number,
the result of the conversion is 0.
If the prefix integer is greater than +9223372036854775807 then the
result of the cast is exactly +9223372036854775807.
Similarly, if the
prefix integer is less than -9223372036854775808 then the result of
the cast is exactly -9223372036854775808.
When casting to INTEGER, if the text looks like a floating point value with an exponent, the exponent will be ignored because it is no
part of the integer prefix. For example, CAST('123e+5' AS INTEGER)
results in 123, not in 12300000.
The CAST operator understands decimal integers only — conversion of hexadecimal integers stops at the "x" in the "0x" prefix of the
hexadecimal integer string and thus the result of the CAST is always zero.
A cast of a REAL value into an INTEGER results in the integer between the REAL value and zero that is closest to the REAL value. If
a REAL is greater than the greatest possible signed integer
(+9223372036854775807) then the result is the greatest possible signed
integer and if the REAL is less than the least possible signed integer
(-9223372036854775808) then the result is the least possible signed
integer.
Prior to SQLite version 3.8.2 (2013-12-06), casting a REAL value greater than +9223372036854775807.0 into an integer resulted in the
most negative integer, -9223372036854775808. This behavior was meant
to emulate the behavior of x86/x64 hardware when doing the equivalent
cast.
NUMERIC
Casting a TEXT or BLOB value into NUMERIC first does a forced conversion into REAL but then further converts the result into
INTEGER if and only if the conversion from REAL to INTEGER is lossless
and reversible. This is the only context in SQLite where the NUMERIC
and INTEGER affinities behave differently.
Casting a REAL or INTEGER value to NUMERIC is a no-op, even if a real
value could be losslessly converted to an integer.
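To illustrate the NUMERIC rules above, a small sketch (typeof() just reports the resulting storage class):

SELECT typeof(CAST('10'   AS NUMERIC)) AS t1,  -- 'integer': TEXT -> REAL -> INTEGER, lossless
       typeof(CAST('10.5' AS NUMERIC)) AS t2,  -- 'real': 10.5 cannot be converted losslessly
       typeof(CAST(10.0   AS NUMERIC)) AS t3;  -- 'real': REAL -> NUMERIC is a no-op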
NOTE
Before this section there is a section on Literal Values (i.e. casting probably only needs to be applied to values extracted from columns).
Try :-
SELECT
round(CAST(10 AS NUMERIC) + CAST(254.53 AS NUMERIC),2) = round(CAST(264.53 AS NUMERIC),2) AS TestComparison1,
round(CAST(10 AS NUMERIC) + CAST(254.54 AS NUMERIC),2) = round(CAST(264.54 AS NUMERIC),2) AS TestComparison2
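If rounding both sides feels fragile, another option (a sketch, in the same spirit as the usual floating-point comparison advice) is to compare with a small tolerance rather than for exact equality:

SELECT abs((CAST(10 AS NUMERIC) + CAST(254.54 AS NUMERIC)) - CAST(264.54 AS NUMERIC)) < 0.005
       AS TestComparison2;  -- 1: the two sides differ only by floating-point noise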

Difference between number and integer datatype in oracle dictionary views

I used Oracle dictionary views to find column differences, if any, between two schemas. While syncing data type discrepancies I found that both the NUMBER and INTEGER data types are stored in all_tab_columns/user_tab_columns/dba_tab_columns as NUMBER only, so it is difficult to sync discrepancies where one schema/column has the NUMBER data type and the other schema/column has the INTEGER data type.
When comparing the schemas it shows a data type mismatch. Please suggest whether there is any alternative apart from using the dictionary views, or whether any specific properties of the dictionary views can be used to identify that a data type is INTEGER.
The best explanation I've found is this:
What is the difference between INTEGER and NUMBER? When should we use NUMBER and when should we use INTEGER? I just wanted to update my comments here...
NUMBER always stores values exactly as entered; its scale ranges from -84 to 127. INTEGER, however, rounds to a whole number; its scale is 0. INTEGER is equivalent to NUMBER(38,0), i.e. a constrained NUMBER: decimal places are rounded away, whereas NUMBER is not constrained.
INTEGER(12.2) => 12
INTEGER(12.5) => 13
INTEGER(12.9) => 13
INTEGER(12.4) => 12
NUMBER(12.2) => 12.2
NUMBER(12.5) => 12.5
NUMBER(12.9) => 12.9
NUMBER(12.4) => 12.4
INTEGER is always slower than NUMBER, since INTEGER is a NUMBER with an added constraint and it takes additional CPU cycles to enforce that constraint. I never observed any difference, but there might be one when we load several million records into an INTEGER column. If we need to ensure that the input is whole numbers, then INTEGER is the best option. Otherwise, we can stick with the NUMBER data type.
Here is the link
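To see the same rounding behavior at the table level, a quick sketch (hypothetical table name demo_num):

CREATE TABLE demo_num (i INTEGER, n NUMBER);
INSERT INTO demo_num VALUES (12.5, 12.5);
SELECT i, n FROM demo_num;
-- i = 13   (INTEGER is NUMBER(38,0), so the value is rounded to scale 0)
-- n = 12.5 (NUMBER stores the value as entered)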
INTEGER is only there for the SQL standard, i.e. it is effectively deprecated by Oracle.
You should use NUMBER instead.
Integers get stored as NUMBER anyway by Oracle behind the scenes.
Most commonly, when ints are stored for IDs and such, they are defined with no parameters - so in theory you could look at the scale and precision columns of the metadata views to see whether decimal values can be stored - however 99% of the time this will not help.
As was commented above, you could look for NUMBER(38,0) columns or similar (i.e. columns that allow no decimal places), but this will only tell you which columns cannot take decimals, not which columns were actually defined as INTEGER.
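For what it's worth, a sketch of that metadata check (it flags columns that cannot hold decimals; as noted, it cannot tell you which of them were literally declared as INTEGER):

SELECT table_name, column_name, data_type, data_precision, data_scale
FROM   user_tab_columns
WHERE  data_type = 'NUMBER'
AND    data_scale = 0;   -- scale 0 means no decimal places are allowed; INTEGER columns appear here too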
Suggestion:
do a data profile on the number columns. Something like this:
select max( case when trunc(column_name,0)=column_name then 0 else 1 end ) as has_dec_vals
from table_name
This is what I got from the Oracle documentation, although it is for Oracle 10g Release 2:
When you define a NUMBER variable, you can specify its precision (p) and scale (s) so that it is sufficiently, but not unnecessarily, large. Precision is the number of significant digits. Scale can be positive or negative. Positive scale identifies the number of digits to the right of the decimal point; negative scale identifies the number of digits to the left of the decimal point that can be rounded up or down.
The NUMBER data type is supported by Oracle Database standard libraries and operates the same way as it does in SQL. It is used for dimensions and surrogates when a text or INTEGER data type is not appropriate. It is typically assigned to variables that are not used for calculations (like forecasts and aggregations), and it is used for variables that must match the rounding behavior of the database or require a high degree of precision. When deciding whether to assign the NUMBER data type to a variable, keep the following facts in mind in order to maximize performance:
Analytic workspace calculations on NUMBER variables are slower than on other numerical data types because NUMBER values are calculated in software (for accuracy) rather than in hardware (for speed).
When data is fetched from an analytic workspace to a relational column that has the NUMBER data type, performance is best when the data already has the NUMBER data type in the analytic workspace because a conversion step is not required.

Issues with checking the equality of two doubles in .NET -- what's wrong with this method?

So I'm just going to dive into this issue... I've got a heavily used web application that, for the first time in 2 years, failed doing an equality check on two doubles using the equality function a colleague said he'd also been using for years.
The goal of the function I'm about to paste in here is to compare two double values to 4 digits of precision and return the comparison results. For the sake of illustration, my values are:
Dim double1 As Double = 0.14625000000000002 ' The result of a calculation
Dim double2 As Double = 0.14625 ' A value that was looked up in a DB
If I pass them into this function:
Public Shared Function AreEqual(ByVal double1 As Double, ByVal double2 As Double) As Boolean
    Return (CType(double1 * 10000, Long) = CType(double2 * 10000, Long))
End Function
the comparison fails. After the multiplication and cast to Long, the comparison ends up being:
Return 1463 = 1462
I'm kind of answering my own question here, but I can see that double1 is within the precision of a double (17 digits) and the cast is working correctly.
My first real question is: If I change the line above to the following, why does it work correctly (returns True)?
Return (CType(CType(double1, Decimal) * 10000, Long) = _
CType(CType(double2, Decimal) * 10000, Long))
Doesn't Decimal have even more precision, so that the cast to Long should still be 1463 and the comparison should still return False? I think I'm having a brain fart on this stuff...
Secondly, if one were to change this function to make the comparison I'm looking for more accurate or less error prone, would you recommend changing it to something much simpler? For example:
Return (Math.Abs(double1 - double2) < 0.0001)
Would I be crazy to try something like:
Return (double1.ToString("N5").Equals(double2.ToString("N5")))
(I would never do the above, I'm just curious about your reactions. It would be horribly inefficient in my application.)
Anyway, if someone could shed some light on the difference I'm seeing between casting Doubles and Decimals to Long, that would be great.
Thanks!
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Relying on a cast in this situation is error prone, as you have discovered - depending upon the rules used when casting, you may not get the number you expect.
I would strongly advise you to write the comparison code without a cast. Your Math.Abs line is perfectly fine.
Regarding your first question:
My first real question is: If I change
the line above to the following, why
does it work correctly (returns True)?
The reason is that the cast from Double to Decimal is losing precision, resulting in a comparison of 0.14625 to 0.14625 (and hence 1462 = 1462 after the multiply and cast to Long).
When you use CType, you're telling your program "I don't care how you round the numbers; just make sure the result is this other type". That's not exactly what you want to say to your program when comparing numbers.
Comparing floating-point numbers is a pain, and I wouldn't ever trust a Round function in any language unless you know exactly how it behaves (e.g. sometimes it rounds .5 up and sometimes down, depending on the preceding digit, as with banker's rounding... it's a mess).
In .NET, I might actually use Math.Truncate() after multiplying out my double value. So, Math.Truncate(.14625 * 10000) (which is Math.Truncate(1462.5)) is going to equal 1462 because it gets rid of all decimal values. Using Truncate() with the data from your example, both values would end up being equal because 1) they remain doubles and 2) you made sure the decimal was removed from each.
I actually don't think String comparison is very bad in this situation since floating point comparison is pretty nasty in itself. Granted, if you're comparing numbers, it's probably better to stick with numeric types, but using string comparison is another option.
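A sketch of the two approaches discussed above, in the same VB.NET shape as the original function (names are illustrative):

' 1. Tolerance-based comparison, as in the Math.Abs line from the question.
Public Shared Function AreEqualTolerance(ByVal d1 As Double, ByVal d2 As Double) As Boolean
    Return Math.Abs(d1 - d2) < 0.0001
End Function

' 2. Truncate after scaling: both 1462.5 and 1462.5000000000002 become 1462.
Public Shared Function AreEqualTruncate(ByVal d1 As Double, ByVal d2 As Double) As Boolean
    Return Math.Truncate(d1 * 10000) = Math.Truncate(d2 * 10000)
End Function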
