I have an unbound vgrid control. One field's unbound expression is like this:
Iif([NETSAL]=0, 0, [GP] / [NETSAL] * 100 )
The unbound type is decimal, the format type is numeric, and the format string is n1.
The problem is that I don't get the correctly formatted values. For example, if GP=200 and NETSAL=1500, I should get 13,3, but I get 0,0. I checked the computed value, and it is 0,0 too.
But if GP=2500 and NETSAL=1000, then the value is 200, so it seems that the value is rounded.
But why?
Thanks.
The expression result type depends on the types of the expression's operands. In your case, all operands of the expression [GP] / [NETSAL] are integer values, so the division is performed as integer division and the fractional part is discarded. This is why 200 / 1500 evaluates to 0, and 2500 / 1000 to 2.
Adding a decimal constant to the expression will change the type of the expression result to decimal. According to the Criteria Language Syntax, the type of a numeric constant can be declared using special suffix literals; for the decimal type, the literal is 'm'.
Try the following expression; it should work as you expect:
Iif([NETSAL]=0, 0, 1m * [GP] / [NETSAL] * 100 )
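If you would rather not add a constant, the Criteria Language also offers conversion functions. Assuming your DevExpress version provides ToDecimal() (check the Criteria Language Syntax reference for your version), converting one operand should have the same effect:
Iif([NETSAL]=0, 0, ToDecimal([GP]) / [NETSAL] * 100 )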
Related
In SQLite, dynamic typing is used and implicit conversions are done in expressions. For example:
SELECT (3 < 2); -- false
SELECT (3 < '2'); -- true (what's happening here?)
SELECT ('3' < '2'); -- false
SELECT (3 < 20); -- true
SELECT (3 < '20'); -- true (what's happening here?)
SELECT ('3' < '20'); -- false
But neither the official documentation nor the O'Reilly book Using SQLite says anything about how operands are cast in implicit conversions.
In C++, the Standard strictly defines (i.e. explicitly explains) how implicit conversions occur. For example, if either operand is of type long double, the other operand is converted to long double.
Is there such a rule in SQLite?
From Datatypes In SQLite Version 3, section 4.1. Sort Order:
An INTEGER or REAL value is less than any TEXT or BLOB value.
Obviously:
SELECT TYPEOF(3);
returns integer
and
SELECT TYPEOF('2');
SELECT TYPEOF('20');
return text.
So there is no conversion here:
SELECT 3 < '2';
which returns true.
But there is an implicit conversion in an expression like this:
SELECT 3 < '2' + 0;
which returns false. This conversion is forced by the + operator, which applies a numeric operation to '2', thus converting it to an integer.
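You can watch the conversion happen by checking the result type directly; a quick sketch in the sqlite3 shell:
SELECT TYPEOF('2');     -- text
SELECT TYPEOF('2' + 0); -- integer: the + operator forces a numeric conversion
SELECT 3 < '2' + 0;     -- 0 (false): both operands are now integers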
Edit to clarify:
This behavior applies only to literal values like 3 and '2'.
When it comes to expressions or column values, an implicit conversion does happen.
So if you define a table like:
create table test(id integer);
insert into test(id) values (1), (2), (3);
a statement like:
select * from test where id > '1'
will return:
| id |
| --- |
| 2 |
| 3 |
see the demo.
Implicit conversions sometimes occur and sometimes don't. The conditions that determine whether conversions are done before comparisons are described in 4.2. Type Conversions Prior To Comparison. According to that section,
Affinity is applied to operands of a comparison operator prior to the comparison according to the following rules in the order shown:
If one operand has INTEGER, REAL or NUMERIC affinity and the other operand has TEXT or BLOB or no affinity, then NUMERIC affinity is applied to the other operand.
If one operand has TEXT affinity and the other has no affinity, then TEXT affinity is applied to the other operand.
Otherwise, no affinity is applied and both operands are compared as is.
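For example, the second rule means that a numeric literal compared against a TEXT column is converted to text, which can give surprising results. A quick sketch (table and values are illustrative):
CREATE TABLE t(x TEXT);
INSERT INTO t VALUES ('10');
SELECT x > 9 FROM t; -- 0: 9 gets TEXT affinity and becomes '9', and '10' < '9' as text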
But how is the type affinity of an expression (including literals) defined? It is explained in 3.2. Affinity Of Expressions as
Every table column has a type affinity (one of BLOB, TEXT, INTEGER, REAL, or NUMERIC) but expressions do not necessarily have an affinity.
Expression affinity is determined by the following rules:
The right-hand operand of an IN or NOT IN operator has no affinity if the operand is a list and has the same affinity as the affinity of the result set expression if the operand is a SELECT.
When an expression is a simple reference to a column of a real table (not a VIEW or subquery) then the expression has the same affinity as the table column.
Parentheses around the column name are ignored. Hence if X and Y.Z are column names, then (X) and (Y.Z) are also considered column names and have the affinity of the corresponding columns.
Any operators applied to column names, including the no-op unary "+" operator, convert the column name into an expression which always has no affinity. Hence even if X and Y.Z are column names, the expressions +X and +Y.Z are not column names and have no affinity.
An expression of the form "CAST(expr AS type)" has an affinity that is the same as a column with a declared type of "type".
A COLLATE operator has the same affinity as its left-hand side operand.
Otherwise, an expression has no affinity.
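The unary + rule can be seen in action by reusing the test table from forpas's answer above; stripping the column's INTEGER affinity changes the outcome of the same comparison:
SELECT id > '1' FROM test;  -- 0, 1, 1: id has INTEGER affinity, so '1' is converted to 1
SELECT +id > '1' FROM test; -- 0, 0, 0: +id has no affinity, and any INTEGER is less than any TEXT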
So in the OP's examples, the literals have no affinity and are thus compared as-is. Since
An INTEGER or REAL value is less than any TEXT or BLOB value.
as pointed out in forpas's answer, 3 < '2' returns true.
These rules correctly describe the apparently strange behavior referred to in this comment. CAST('1' AS INTEGER) does have the type affinity INTEGER, so the >= '1' is interpreted as >= 1; thus CAST('1' AS INTEGER) >= '1' returns true, whereas 1 >= '1' returns false.
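Both cases are easy to verify directly in the sqlite3 shell:
SELECT CAST('1' AS INTEGER) >= '1'; -- 1: the CAST expression has INTEGER affinity, so '1' becomes 1
SELECT 1 >= '1';                    -- 0: neither literal has an affinity, and INTEGER < TEXT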
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 4 years ago.
EDIT:
The answer here: Is floating point math broken? assists in understanding this question. However, this question is not language agnostic. It is specific to the documented behavior and affinity of floating point numbers as handled by SQLite. Having a very similar answer to a different question != duplicate question.
QUESTION:
I have a rather complex SQLite Where Clause comparing numerical values. I have read and "think" I understand the Datatype Documentation here: https://www.sqlite.org/datatype3.html
I am still confused as to the logic SQLite uses to determine datatypes in comparison clauses such as =, >, <, <>, etc. I can narrow my example down to this bit of test SQL, whose results make little sense to me:
SELECT
CAST(10 AS NUMERIC) + CAST(254.53 AS NUMERIC) = CAST(264.53 AS NUMERIC) AS TestComparison1,
CAST(10 AS NUMERIC) + CAST(254.54 AS NUMERIC) = CAST(264.54 AS NUMERIC) AS TestComparison2
Result: "1" "0"
The second expression in the select statement (TestComparison2) is converting the left side of the equation to a TEXT value. I can prove this by casting the right side of the equation to TEXT, after which the result = 1.
Obviously I'm missing something in the way SQLite computes Affinity. These are values coming from columns in a large/complex query. Should I be casting both sides of the equations in WHERE/Join Clauses to TEXT to avoid these issues?
The reason you are not getting the expected result is that the underlying values are floating point.
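You can make the difference visible by printing more digits than SQLite's default text conversion shows, for example with the built-in printf() SQL function (the exact trailing digits depend on IEEE-754 double rounding):
SELECT printf('%.20f', CAST(10 AS NUMERIC) + CAST(254.54 AS NUMERIC));
SELECT printf('%.20f', CAST(264.54 AS NUMERIC));
The two expansions differ in their trailing digits, which is why the = comparison returns 0.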
Although DataTypes in SQLite3 covers much, you should also consider the following section from Expressions :-
For each affinity of type-name, the conversion processing is as follows:
NONE
Casting a value to a type-name with no affinity causes the value to be converted into a BLOB. Casting to a BLOB consists of first casting the value to TEXT in the encoding of the database connection, then interpreting the resulting byte sequence as a BLOB instead of as TEXT.
TEXT
To cast a BLOB value to TEXT, the sequence of bytes that make up the BLOB is interpreted as text encoded using the database encoding.
Casting an INTEGER or REAL value into TEXT renders the value as if via sqlite3_snprintf() except that the resulting TEXT uses the encoding of the database connection.
REAL
When casting a BLOB value to a REAL, the value is first converted to TEXT.
When casting a TEXT value to REAL, the longest possible prefix of the value that can be interpreted as a real number is extracted from the TEXT value and the remainder ignored. Any leading spaces in the TEXT value are ignored when converting from TEXT to REAL.
If there is no prefix that can be interpreted as a real number, the result of the conversion is 0.0.
INTEGER
When casting a BLOB value to INTEGER, the value is first converted to TEXT.
When casting a TEXT value to INTEGER, the longest possible prefix of the value that can be interpreted as an integer number is extracted from the TEXT value and the remainder ignored. Any leading spaces in the TEXT value when converting from TEXT to INTEGER are ignored.
If there is no prefix that can be interpreted as an integer number, the result of the conversion is 0.
If the prefix integer is greater than +9223372036854775807 then the result of the cast is exactly +9223372036854775807. Similarly, if the prefix integer is less than -9223372036854775808 then the result of the cast is exactly -9223372036854775808.
When casting to INTEGER, if the text looks like a floating point value with an exponent, the exponent will be ignored because it is not part of the integer prefix. For example, "CAST('123e+5' AS INTEGER)" results in 123, not in 12300000.
The CAST operator understands decimal integers only; conversion of hexadecimal integers stops at the "x" in the "0x" prefix of the hexadecimal integer string, and thus the result of the CAST is always zero.
A cast of a REAL value into an INTEGER results in the integer between the REAL value and zero that is closest to the REAL value. If a REAL is greater than the greatest possible signed integer (+9223372036854775807) then the result is the greatest possible signed integer, and if the REAL is less than the least possible signed integer (-9223372036854775808) then the result is the least possible signed integer.
Prior to SQLite version 3.8.2 (2013-12-06), casting a REAL value greater than +9223372036854775807.0 into an integer resulted in the most negative integer, -9223372036854775808. This behavior was meant to emulate the behavior of x86/x64 hardware when doing the equivalent cast.
NUMERIC
Casting a TEXT or BLOB value into NUMERIC first does a forced conversion into REAL, but then further converts the result into INTEGER if and only if the conversion from REAL to INTEGER is lossless and reversible. This is the only context in SQLite where the NUMERIC and INTEGER affinities behave differently.
Casting a REAL or INTEGER value to NUMERIC is a no-op, even if a real value could be losslessly converted to an integer.
NOTE
Before this section there is a section on Literal Values (i.e. casting probably only needs to be applied to values extracted from columns).
Try :-
SELECT
round(CAST(10 AS NUMERIC) + CAST(254.53 AS NUMERIC),2) = round(CAST(264.53 AS NUMERIC),2) AS TestComparison1,
round(CAST(10 AS NUMERIC) + CAST(254.54 AS NUMERIC),2) = round(CAST(264.54 AS NUMERIC),2) AS TestComparison2
which returns 1 for both comparisons, because rounding both sides to two decimal places removes the tiny floating-point difference.
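An alternative to rounding both sides is the usual floating-point technique of comparing against a small tolerance instead of testing exact equality; a sketch, with 1e-9 as an arbitrary threshold for values of this magnitude:
SELECT abs((CAST(10 AS NUMERIC) + CAST(254.54 AS NUMERIC)) - CAST(264.54 AS NUMERIC)) < 1e-9 AS TestComparison2;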
I was scanning some stylesheets when I noticed one which used a linear-gradient with rgba() color-stops in which the rgba numbers used multiple instances of 0 instead of just a single 0:
background-image:linear-gradient(to top left, rgba(000,000,000,0.1),rgba(100,100,100,1));
I hadn't seen multiple zeroes (instead of a single zero) occupying a single slot in the rgb/a color space before, but confirmed on CodePen this is valid. I then looked up the W3C definition of number here.
To make a long story short, after some more poking and digging, I realized I could prepend an indeterminate number of zeroes to a length and get the same result as with no zeroes prepended, like this:
/* The two squares generated have equivalent width and height of 100px - for giggles, I also extended the same idea to the transition-duration time */
<style>
div.aaa {
width:00000000100px;
height:100px;
background-image:linear-gradient(to top left,rgba(000,000,000,0.1),rgba(100,100,100,1));
transition:1s cubic-bezier(1,1,1,1)
}
div.bbb {
width:100px;
height:000000000000000000000000000000000100px;
background-color:green;
transition:0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001s cubic-bezier(1,1,1,1)
}
div:hover { background-color:red }
</style>
<div class="aaa"></div>
<div class="bbb"></div>
It's difficult to directly verify that these numbers are equivalent representations, because scripting languages treat such literals differently:
/* PHP */
$x = 100;
$y = 00000000000100; // problem is PHP treats this as an octal literal (= 64 decimal)
echo ($x == $y) ? 'true' : 'false'; // echoes the string ---> false
/* Javascript */
var x = 100;
var y = 00000000000100; // also treated as a legacy octal literal (= 64) in non-strict mode
var res = (x == y) ? 'true' : 'false';
alert(res); // alerts ---> false
These examples suggest to me that CSS does not treat e.g. 0000100 as an octal number, but rather as a decimal (or at least a non-octal) number, since the magnitudes of the width, height, and transition-duration for the HTML elements generated above appear to be identical.
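A closer analogue to what a CSS parser does is to parse the characters as a base-10 string rather than evaluate them as a source-code literal. In JavaScript, for example:
/* Javascript */
var z = parseFloat('00000000000100'); // string parsing here is always base-10
alert(z === 100 ? 'true' : 'false');  // alerts ---> true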
Extending this CSS approach to any property and any unit, e.g., time,
Is any unit-containing CSS property value prepended with any positive number of zeroes syntactically equivalent to the same value without any prepended zeroes?
I have to admit I found this question interesting.
https://www.w3.org/TR/CSS21/syndata.html
The css 2 syntax spec says:
num [0-9]+|[0-9]*\.[0-9]+
Note that 000000000000000037.3 meets this rule and definition: a series of digits from 0 to 9, optionally followed by a . and a further series of digits from 0 to 9.
The css 3 spec goes on:
https://www.w3.org/TR/css3-values/#numbers
4.2. Real Numbers: the <number> type
Number values are denoted by <number>, and represent real numbers, possibly with a fractional component.
When written literally, a number is either an integer, or zero or more decimal digits followed by a dot (.) followed by one or more decimal digits and optionally an exponent composed of "e" or "E" and an integer. It corresponds to the <number-token> production in the CSS Syntax Module [CSS3SYN]. As with integers, the first character of a number may be immediately preceded by - or + to indicate the number's sign.
https://www.w3.org/TR/css-syntax-3/#convert-a-string-to-a-number
This I believe roughly explains how a css parser is supposed to take the css value and convert it to a number:
4.3.13. Convert a string to a number
This section describes how to convert a string to a number. It returns a number.
Note: This algorithm does not do any verification to ensure that the
string contains only a number. Ensure that the string contains only a
valid CSS number before calling this algorithm.
Divide the string into seven components, in order from left to right:
A sign: a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), or the empty string. Let s be the number -1 if the sign is U+002D HYPHEN-MINUS (-); otherwise, let s be the number 1.
An integer part: zero or more digits. If there is at least one digit, let i be the number formed by interpreting the digits as a base-10 integer; otherwise, let i be the number 0.
A decimal point: a single U+002E FULL STOP (.), or the empty string.
A fractional part: zero or more digits. If there is at least one digit, let f be the number formed by interpreting the digits as a base-10 integer and d be the number of digits; otherwise, let f and d be the number 0.
An exponent indicator: a single U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e), or the empty string.
An exponent sign: a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), or the empty string. Let t be the number -1 if the sign is U+002D HYPHEN-MINUS (-); otherwise, let t be the number 1.
An exponent: zero or more digits. If there is at least one digit, let e be the number formed by interpreting the digits as a base-10 integer; otherwise, let e be the number 0.
Return the number s·(i + f·10^(-d))·10^(t·e).
I think the key term there is base-10 integer.
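As a worked example, applying the algorithm to the width value 00000000100px from the question: the sign is empty, so s = 1; the integer part "00000000100" interpreted as a base-10 integer gives i = 100; there is no decimal point, fractional part, or exponent, so f = d = e = 0; the result is 1·(100 + 0)·10^0 = 100, and the leading zeros contribute nothing.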
Note that in other situations where a starting 0 is meaningful, you have to escape it for it to function as something other than a simple number, I believe, if I read this spec right:
https://www.w3.org/TR/css-syntax-3/#escaping
Any Unicode code point can be included in an identifier or quoted string by escaping it. CSS escape sequences start with a backslash (\), and continue with:
Any Unicode code point that is not a hex digit or a newline. The escape sequence is replaced by that code point.
Or one to six hex digits, followed by an optional whitespace. The escape sequence is replaced by the Unicode code point whose value is given by the hexadecimal digits. This optional whitespace allows hexadecimal escape sequences to be followed by "real" hex digits.
An identifier with the value "&B" could be written as \26 B or \000026B.
A "real" space after the escape sequence must be doubled.
However, even here it appears the leading 0s are optional, though it's not crystal clear.
The CSS specs, while obtuse, were fairly clear, which isn't always the case. So yes, numbers are made from strings of digits, can have fractional parts, and are base 10, which means the leading zeros are simply nothing.
I speculate as well that, because the specs further state that no units are required when the number value is 0, a leading zero may internally mean null, nothing, though obviously you'd have to look at CSS parsing code itself to see how browsers actually handle it.
So that's kind of interesting. I think that, probably because CSS is a very simple language, it doesn't do 'clever' things with leading zeros the way PHP or JavaScript do; it simply does what you'd expect and treats them as zeros, nothing.
Thanks for asking though, sometimes it's nice to go back and read the raw specs just to see how the stuff works.
Say I've got the value xxx, which in hex is 007800780078.
How can I convert back the hex value to characters using bitwise operations?
Can I?
I suppose you could do it using "bitwise" operations, but it'd probably be a horrendous mess of code as well as being totally unnecessary since ILE RPG can do it easily using appropriate built-in functions.
First, you don't exactly have what's usually thought of as a "hex" value. That is, you're showing a hexadecimal representation of a value, but basic "hex" conversion will not give a useful result. What you're showing appears to be a UCS-2 value for "xxx".
Here's a trivial example that shows a conversion of that hexadecimal string into a standard character value:
d ds
d charField 6 inz( x'007800780078' )
d UCSField1 3c overlay( charField )
d TargetField s 6
d Length s 10i 0
/free
Length = %len( %trim( UCSField1 ));
TargetField = %trim( %char( UCSField1 ));
*inlr = *on;
return;
/end-free
The code has a DS that includes two sub-fields. The first is a simple character field that declares six bytes of memory initialized to x'007800780078'. The second sub-field is declared as data type 'C' to indicate UCS-2, and it overlays the first sub-field. Because it's UCS-2, its size is given as "3" to allow for three characters. (Each character is 16-bits wide.)
The executable statements don't do much, just enough to let you test the converted values. Using debug, you should see that Length comes out to be (3) and TargetField becomes 'xxx'.
The %CHAR() built-in function can be used to convert from UCS-2 to the character encoding used by the program. To go in the opposite direction, use the %UCS2() built-in function.
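Going the other way is just as short; a minimal sketch, reusing the fields declared above:
/free
UCSField1 = %ucs2( TargetField );
/end-free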
I want to convert a decimal number to hexadecimal in Embedded C.
Actually, in my project the input to the controller is decimal, and it will be subtracted from a hexadecimal value, so I need a conversion.
I tried to write one, but after converting a number (75, for example) to its hexadecimal equivalent I get 411 (the digits 4 and 11). The trouble is that I don't know how to convert a digit like 11 to b; as you know, there is no 11 in hexadecimal, it is b. So please help.
I want to save the converted value in a flag (for subtracting), not for printing. I can print a hex digit by simply putting in a condition like:
if (a > 10) printf("b");
but this is not a solution for Embedded.
So please give me a complete solution.
I am not sure what you mean, but an integer's base is just a matter of interpretation. In memory it's just a bunch of 0s and 1s, and you can therefore present that data in decimal or hexadecimal form.
If you need it as input for a register, you can just pass it into it.
Or do I misunderstand your problem?
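To make this concrete: the value in memory has no base at all; a base only appears when you format the value as text. If you really do need a hex string (for display or logging), here is a minimal sketch in C using shifts and a digit lookup table (the function and buffer names are illustrative, not from the original post):

#include <stdint.h>
#include <stdio.h>

/* Convert an unsigned 32-bit value to an 8-character hex string using only
   shifts and masks; 'digits' maps each 4-bit nibble to its character,
   which handles 11 -> 'b' without any special case. */
static void to_hex(uint32_t value, char *buf)
{
    const char digits[] = "0123456789abcdef";
    for (int i = 7; i >= 0; i--) {
        buf[7 - i] = digits[(value >> (i * 4)) & 0xF]; /* extract each nibble */
    }
    buf[8] = '\0';
}

int main(void)
{
    char buf[9];
    to_hex(75, buf);     /* 75 decimal is 0x4B */
    printf("%s\n", buf); /* prints 0000004b */
    return 0;
}

For the subtraction itself, though, no conversion is needed at all: an expression like 0x4B - input operates on plain integer values regardless of the notation used in the source code.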