Creating a vector that contains all even numbers - r

Using R, I need to create a vector containing all even numbers. The numbers must be displayed in ascending order.
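A minimal sketch (the range 2 to 100 is just an illustration; adjust the bounds as needed):

```r
# Even numbers from 2 to 100; seq() already returns them in ascending order
evens <- seq(2, 100, by = 2)

# Equivalent: filter a full sequence with the modulo operator
v <- 1:100
evens2 <- v[v %% 2 == 0]

all(evens == evens2)  # TRUE
```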

Related

How to access the entry value with unrecognised number of decimal in data frame in R?

I have a data frame in R that I want to analyse. I want to know how often a specific number appears in a data frame column. For example, I want to know the frequency of the number 0.9998558 by using
sum(deviation_multiple_regression_3cell_types_all_spots_all_intersection_genes_exclude_50_10dec_rowSums_not_0_for_moran_scaled[,3]== 0.9998558)
However, it seems that the decimal shown is not the actual stored value (it must be 0.9998558xxxxx), since the result I got from the command above is 0 (the correct one should be 3468). How can I count that number without knowing its exact decimal digits?
The code below gives the number of occurrences in the column.
x <- 0.9998558
length(which(df$a==x))
If you are looking for numbers starting with 0.9998558, I think you can do it in two different ways: working with the data as numeric or as character.
Let x be your variable:
Data as character
This way counts exactly what you are looking for
sum(substr(as.character(x),1,9)=="0.9998558")
Data as numeric
This will include all values whose difference from the reference value is less than 1e-7; it may therefore include values not starting exactly with 0.9998558
sum(abs(x-0.9998558)<1e-7)
You can also "truncate" the numbers in your vector and compare them with the number you want. Here we use 10^7 because 7 is the number of decimal places you want to compare.
sum(trunc(x*10^7)/10^7 == 0.9998558)
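The three approaches can be compared side by side on a toy vector (the values below are made up for illustration):

```r
x <- c(0.99985581, 0.99985584, 0.5, 0.9998001, 0.99985589)

# 1. Character: printed form starts with "0.9998558"
sum(substr(as.character(x), 1, 9) == "0.9998558")   # 3

# 2. Numeric: within 1e-7 of the reference value
sum(abs(x - 0.9998558) < 1e-7)                      # 3

# 3. Truncation to 7 decimal places
sum(trunc(x * 10^7) / 10^7 == 0.9998558)            # 3
```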

Using SumIF for first 6 values

I have the following array of numbers.
Overall, the numbers in the array will always change slightly; the only commonality is that the positive numbers will eventually become negative. I would like to build a function/equation that sums only the negative numbers of this array, and only the first 6 negative numbers at that.
How can I do this?
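The thread has no answer, but the logic can be sketched in R (the language used elsewhere on this page), with a made-up vector v standing in for the array:

```r
v <- c(5, 3, -2, 1, -4, -1, -7, -3, -9, -6, -8, 2)

# Keep the negatives in order of appearance, then sum the first six
negatives <- v[v < 0]
sum(head(negatives, 6))   # -2 - 4 - 1 - 7 - 3 - 9 = -26
```

In a spreadsheet this needs more than a plain SUMIF, since SUMIF cannot stop after the first six matches; the filter-then-take-head idea above is the underlying logic.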

How does the Between operator work in dynamodb with strings

I was not expecting to get back a value from the query below. 1574208000#W2 is not between 1574207999 and 1574208001. But the records are still returned. Can anyone shed light on how the between comparison is done?
The DynamoDB BETWEEN operator on strings uses the lexicographic order of the strings (i.e., the order in which they would appear in a dictionary). Using this order, 1574208000#W2 does fall between 1574207999 and 1574208001.
Two strings are lexicographically equal if they are the same length and contain the same characters in the same positions.
Apart from that, to determine which string comes first, compare corresponding characters of the two strings from left to right. The first character where the two strings differ determines which string comes first. Characters are compared using the Unicode character set. All uppercase letters come before lower case letters. If two letters are the same case, then alphabetic order is used to compare them.
If two strings contain the same characters in the same positions, then the shorter string comes first.
To check this, you can run a simple example in Java:
String a = "1574207999", b = "1574208000#W2", c = "1574208001";
System.out.println(a.compareTo(b)); // prints negative number, indicating a < b
System.out.println(b.compareTo(c)); // prints negative number, indicating b < c

Integer overflow when using variables but not when using integers

I am trying to multiply 2 numbers together. I am aware that R has a limit for integer size, but from the console, when I manually multiply the numbers, I get a result.
However, when I use variables containing those exact numbers and multiply them together, I get a "NAs produced by integer overflow" warning:
Why is this? Are the variables somehow not properly resolving before being multiplied? I need to be able to use variables, so would it be possible to make it work? Floating point numbers are not an option since precision is needed.
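A likely explanation, sketched below under the assumption that the variables were created as integer vectors (e.g. via `:` or `as.integer()`), while literals typed at the console are doubles:

```r
# Literals are doubles, so this succeeds:
2147483647 * 2                    # 4294967294

# Integer vectors overflow past .Machine$integer.max (2^31 - 1):
a <- as.integer(2147483647)
b <- as.integer(2)
a * b                             # NA, warning: NAs produced by integer overflow

# Fix: convert to double before multiplying
as.numeric(a) * as.numeric(b)     # 4294967294
```

Doubles represent every integer exactly up to 2^53, so no precision is lost here; for products beyond that, the bit64 package offers 64-bit integers.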

R version 3.3.1, preventing coercion of decimals into factors using read.csv()

I am attempting to use read.csv() to read in a .csv file, but three of my columns contain floating point values. R coerces these into factors, but I would like them to retain their original values so I can accurately compare them to one another. I've tried to read the documentation, but the only thing I see there is the option to set stringsAsFactors = FALSE. Then I retain the decimal places in my column elements, but they are not numbers that I can compare to one another.
For example, if my column contained the values 3.1, 4.2, 5.3, R would coerce these into factors. If I call as.numeric() on them, they are squashed to 3, 4, 5. How can I keep them as floating point values when I read them in?
My experience is that you need as.numeric(as.character(data)). This is because a factor stores its values as integer level codes with character labels; all letters, numbers and symbols in a factor are treated as characters.
Going straight to a number, as.numeric() on a factor returns those internal level codes rather than the original values, which is why the decimals are lost. Drop out of factor into character first, then convert to numeric!
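A small sketch of the round trip (toy values; the same applies to a column read by read.csv()):

```r
f <- factor(c("3.1", "4.2", "5.3"))

as.numeric(f)                 # 1 2 3 -- the internal level codes, not the values
as.numeric(as.character(f))   # 3.1 4.2 5.3 -- go through character first

# Better yet, avoid the factor at read time, e.g.
# (assuming three numeric columns; adjust to your file):
# df <- read.csv("file.csv", colClasses = c("numeric", "numeric", "numeric"))
```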
