Why can't R handle inequalities between negative numbers in quotes?

This is a weird problem, with an easy workaround, but I'm just so curious why R is behaving this way.
> "-1"<"-2"
[1] TRUE
> -1<"-2"
[1] TRUE
> "-1"< -2
[1] TRUE
> -1< -2
[1] FALSE
> as.numeric("-1")<"-2"
[1] TRUE
> "-1"<as.numeric("-2")
[1] TRUE
> as.numeric("-1")<as.numeric("-2")
[1] FALSE
What is happening? Please, for my own sanity...

A "number in quotes" is not a number at all, it is a string of characters. Those characters happen to be displayed with the same drawing on your screen as the corresponding number, but they are fundamentally not the same object.
The behavior you are seeing is consistent with the following:
A pair of numbers (numeric in R) is compared the way you would expect: numerically, with the natural ordering. So -1 < -2 is indeed FALSE.
A pair of strings (character in R) is compared in lexicographic order, meaning roughly that they are compared alphabetically, character by character, from left to right. Since "-1" and "-2" start with the same character, we move to the second; "2" comes after "1", so "-2" comes after "-1", and therefore "-1" < "-2" is TRUE.
When comparing objects of mismatched types, there are two basic choices: either raise an error, or convert one type to the other and fall back on the two facts above. R takes the second route and chooses to convert numeric to character, which explains the results you got above (all your mismatched examples give TRUE).
Note that it makes more sense to convert numeric to character, rather than the other way around, because most character strings can't be automatically converted to numeric in a meaningful way.
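To make the coercion explicit, you can apply it by hand: once the numeric side is converted with as.character, the mixed comparison is exactly the all-character one (a small illustration along the lines of the answer above, not part of the original):
> as.character(-1)
[1] "-1"
> as.character(-1) < "-2"
[1] TRUE
This is the same comparison as "-1" < "-2" above, which is why every mismatched example returns TRUE.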

I've always thought this is because the default behavior is to treat the values in quotes as character, and the values without quotes as double. Without expressly declaring the data types, you get this:
> typeof(-1)
[1] "double"
> typeof("-1")
[1] "character"
> typeof(as.numeric("-1"))
[1] "double"
It's only when the negative numbers are put in quotes that it orders them alphabetically, because they are characters.

Related

How does R compare version strings with the inequality operators?

Could someone explain this behavior in R?
> '3.0.1' < '3.0.2'
[1] TRUE
> '3.0.1' > '3.0.2'
[1] FALSE
What process is R doing to make the comparison?
It's making a lexicographic comparison in this case, as opposed to converting to numeric, as calling as.numeric('3.0.1') returns NA.
The logic here would be something like, "the strings '3.0.1' and '3.0.2' are equivalent until their final characters, and since 1 precedes 2 in an alphanumeric alphabet, '3.0.1' is less than '3.0.2'." You can test this with some toy examples:
'a' < 'b' # TRUE
'ab' < 'ac' # TRUE
'ab0' < 'ab1' # TRUE
Per the note in the manual in the post that @rawr linked in the comments, this will get hairy in different locales, where the alphanumeric alphabet may be sorted differently.
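As a side note (not from the original answer): if the strings really are version numbers, base R's numeric_version() parses the components and compares them numerically, which avoids the lexicographic trap entirely:
> '3.0.10' < '3.0.2'                                     # lexicographic: '1' < '2' at the first differing character
[1] TRUE
> numeric_version('3.0.10') < numeric_version('3.0.2')   # component-wise numeric comparison: 10 > 2
[1] FALSE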

Comparing integers with characters in R [duplicate]

This question already has answers here: Why TRUE == "TRUE" is TRUE in R? and Why does "one" < 2 equal FALSE in R?
It appears that as.character() of a number still behaves like a number, which I find counterintuitive. Consider this example:
1 > "2"
[1] FALSE
2 > "1"
[1] TRUE
Even if I try to use as.character() or paste()
as.character(2)
[1] "2"
as.character(2) > 1
[1] TRUE
as.character(2) < 1
[1] FALSE
Why is that? Can't I have R return an error when I am comparing numbers with strings?
As explained in the comments, the problem is that the numeric 1 is coerced to character.
The operation < still works for characters. A character string is "smaller" than another if it comes first in alphabetical order.
> "a" < "b"
[1] TRUE
> "z" < "b"
[1] FALSE
So in your case as.character(2) > 1 is transformed to as.character(2) > as.character(1), and because of the "alphabetical" order of numbers, TRUE is returned.
To prevent this you would have to check for the class of an object manually.
The documentation of ?Comparison states that
If the two arguments are atomic vectors of different types, one is coerced to the type of the other, the (decreasing) order of precedence being character, complex, numeric, integer, logical and raw.
So in your case the number is automatically coerced to string and the comparison is made based on the respective collation.
In order to prevent it, the only option I know of is to manually compare the class first.
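A minimal sketch of such a manual check (the helper name strict_lt is made up for illustration, not from either answer): wrap the comparison and refuse to proceed unless both sides are numeric.
strict_lt <- function(x, y) {
  # stop with an error instead of silently coercing numeric to character
  stopifnot(is.numeric(x), is.numeric(y))
  x < y
}
strict_lt(1, 2)     # TRUE
strict_lt(1, "2")   # Error: is.numeric(y) is not TRUE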

Why "<some string>" >= <a number> is TRUE?

Maybe it is a silly question, but while playing with subsetting I ran into this and I can't understand why it happens.
For example, consider a string, say "a", and an integer, say 3. Why does this expression return TRUE?
"a" >= 3
[1] TRUE
When you try to compare a string to an integer, R will coerce the number into a string, so 3 becomes "3".
Using comparison operators on strings will check if the condition is true or false given their alphabetical order. For example:
> "a" < "b"
[1] TRUE
> "b" > "c"
[1] FALSE
These results happen because, for R, the ascending order is a, b, c. Numbers usually come before letters in this kind of ordering (just look at how file names that start with a number sort in a directory listing). This is why you get
"a" >= 3
[1] TRUE
Finally, note that your result can vary depending on your locale and how the alphabetical order is defined on it. The manual says:
Comparison of strings in character vectors is lexicographic within the strings using the collating sequence of the locale in use: see locales. The collating sequence of locales such as en_US is normally different from C (which should use ASCII) and can be surprising. Beware of making any assumptions about the collation order: e.g. in Estonian Z comes between S and T, and collation is not necessarily character-by-character – in Danish aa sorts as a single letter, after z. In Welsh ng may or may not be a single sorting unit: if it is it follows g. Some platforms may not respect the locale and always sort in numerical order of the bytes in an 8-bit locale, or in Unicode code-point order for a UTF-8 locale (and may not sort in the same order for the same language in different character sets). Collation of non-letters (spaces, punctuation signs, hyphens, fractions and so on) is even more problematic.
This is important and should be kept in mind whenever the comparison operators are used on strings (whether or not they are being compared to numbers).
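A practical footnote (an addition, not part of the original answer): you can inspect which collation rules are in effect with Sys.getlocale(), and force plain byte/ASCII ordering by switching LC_COLLATE to the C locale. The locale string shown is just an example; yours will differ by system.
> Sys.getlocale("LC_COLLATE")
[1] "en_US.UTF-8"
> Sys.setlocale("LC_COLLATE", "C")   # string comparisons and sort() now use byte order
[1] "C"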

Why does "one" < 2 equal FALSE in R?

I'm reading Hadley Wickham's Advanced R section on coercion, and I can't understand the result of this comparison:
"one" < 2
# [1] FALSE
I'm assuming that R coerces 2 to a character, but I don't understand why R returns FALSE instead of returning an error. This is especially puzzling to me since
-1 < "one"
# TRUE
So my question is two-fold: first, why this answer, and second, is there a way of seeing how R converts the individual elements within a logical vector like these examples?
From help("<"):
If the two arguments are atomic vectors of different types, one is
coerced to the type of the other, the (decreasing) order of precedence
being character, complex, numeric, integer, logical and raw.
So in this case, the numeric is of lower precedence than the character. So 2 is coerced to the character "2". Comparison of strings in character vectors is lexicographic which, as I understand it, is alphabetic but locale-dependent.
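Regarding the second part of the question, one way to see what R is effectively comparing is to perform the coercion yourself (a sketch added here; the results simply mirror the output shown above):
> as.character(2)
[1] "2"
> "one" < as.character(2)    # the comparison R actually performs for "one" < 2
[1] FALSE
> as.character(-1)
[1] "-1"
> as.character(-1) < "one"   # the comparison R actually performs for -1 < "one"
[1] TRUE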
It coerces 2 into a character, then does an alphabetical comparison, and numeric characters are assumed to come before alphabetic ones.
To get a general idea of the behavior, try:
'a'<'1'
'1'<'.'
'b'<'B'
'a'<'B'
'A'<'B'
'C'<'B'

Is it okay to use floating-point numbers as indices or when creating factors in R?

I don't mean numbers with decimal parts (that would clearly be odd), but numbers which really are integers (to the user, that is), yet are being stored as floating point numbers.
For example, I've often used constructs like (1:3)*3 or seq(3,9,by=3) as indices, but you'll notice that they're actually being represented as floating point numbers, not integers, even though to me, they're really integers.
Another time this could come up is when reading data from a file; if the file represents the integers as 1.0, 2.0, 3.0, etc, R will store them as floating-point numbers.
(I posted an answer below with an example of why one should be careful, but it doesn't really address if simple constructs like the above can cause trouble.)
(This question was inspired by this question, where the OP created integers to use as coding levels of a factor, but they were being stored as floating point numbers.)
It's always better to use an integer representation when you can, for instance (1L:3L)*3L or seq(3L,9L,by=3L).
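You can confirm the difference with typeof() (a quick check added for illustration):
> typeof((1:3) * 3)      # the literal 3 is a double, so the product is double
[1] "double"
> typeof((1L:3L) * 3L)   # the L suffix keeps everything integer
[1] "integer"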
I can come up with an example where floating representation gives an unexpected answer, but it depends on actually doing floating point arithmetic (that is, on the decimal part of a number). I don't know if storing an integer directly in floating point and possibly then doing multiplication, as in the two examples in the original post, could ever cause a problem.
Here's my somewhat forced example to show that floating points can give funny answers. I make two 3's that are different in floating point representation; the first element isn't quite exactly equal to three (on my system with R 2.13.0, anyway).
> (a <- c((0.3*3+0.1)*3,3L))
[1] 3 3
> a[1] == a[2]
[1] FALSE
Creating a factor directly works as expected because factor calls as.character on the values, which gives the same result for both.
> as.character(a)
[1] "3" "3"
> factor(a, levels=1:3, labels=LETTERS[1:3])
[1] C C
Levels: A B C
But using them as indices doesn't work as expected, because when they're forced to integer they are truncated, so they become 2 and 3.
> trunc(a)
[1] 2 3
> LETTERS[a]
[1] "B" "C"
Constructs such as 1:3 are really integers:
> class(1:3)
[1] "integer"
Apparently, using a float as an index entails some truncation:
> foo <- 1:3
> foo
[1] 1 2 3
> foo[1.0]
[1] 1
> foo[1.5]
[1] 1
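To see the truncation rule itself (a small addition): fractional indices are coerced as if by as.integer(), i.e. truncated towards zero rather than rounded:
> foo[2.9]   # truncated to index 2, not rounded to 3
[1] 2
> as.integer(2.9)
[1] 2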
