Logically this code should make sense. I'm primarily a Python programmer, and I'm unsure why this is not working; it does not return any errors. What I want is for this vector of zeros to be changed to a vector of only 1's and -1's (hence using the sample function). My issue is that the values of the vector are not being updated: they just stay 0 and I'm not sure why.
Y = numeric(100)
for (i in 100){
x <- sample(1:2, 1)
if (x == 2){
Y[i] = 1
}
else{
Y[i] = -1
}
}
I've also changed the Y[i] = 1 to Y[i] <- 1, but this has not helped. I also know that x is either 1 or 2 because I tested it manually using x == 2, etc.
The only other issue I could think of is that x is an integer while the numbers sample returns are not, but per checking this (note that x was 2L after the loop exited):
> typeof(x)
[1] "integer"
> typeof(2)
[1] "double"
> x == 2
[1] TRUE
I don't think it is the problem.
Any suggestions?
Because the loop is run just once, i.e. only for the last iteration (i = 100). That element did change in the output vector Y:
tail(Y)
#[1] 0 0 0 0 0 -1
Instead, it should loop over 1:100:
for(i in 1:100)
The second issue raised is with the typeof of x. Here, we are sampling from an integer vector (1:2) instead of a numeric vector, and that returns the same type as the input. According to ?':':
For numeric arguments, a numeric vector. This will be of type integer if from is integer-valued and the result is representable in the R integer type, otherwise of type "double"
typeof(1:2)
#[1] "integer"
typeof(c(1, 2))
#[1] "double"
Another option, if it is a range created with :, is to wrap it in as.numeric:
for (i in 1:100){
x <- sample(as.numeric(1:2), 1)
if (x == 2){
Y[i] = 1
}
else{
Y[i] = -1
}
}
Check the types:
typeof(Y)
#[1] "double"
typeof(x)
#[1] "double"
Also, R is a vectorized language so this:
x <- sample(1:2, 100, replace = TRUE)
Y <- ifelse(x == 2, 1, -1)
will run about 1000 times faster than your loop.
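If you want to verify the speed difference yourself, a rough timing sketch along these lines should work (n and the helper expressions are just for illustration; the actual speedup depends on your machine):
n <- 1e6
system.time({                       # loop version
  Y <- numeric(n)
  for (i in 1:n) Y[i] <- if (sample(1:2, 1) == 2) 1 else -1
})
system.time({                       # vectorized version
  x <- sample(1:2, n, replace = TRUE)
  Y <- ifelse(x == 2, 1, -1)
})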
To exclude elements from a vector x,
x <- c(1, 4, 3, 2)
we can subtract a vector of positions:
excl <- c(2, 3)
x[-excl]
# [1] 1 2
This also works dynamically,
(excl <- which(x[-which.max(x)] > quantile(x, .25)))
# [1] 2 3
x[-excl]
# [1] 1 2
until excl is of length zero:
excl.nolength <- which(x[-which.max(x)] > quantile(x, .95))
length(excl.nolength)
# [1] 0
x[-excl.nolength]
# integer(0)
I could kind of reformulate that, but I have many objects to which excl is applied, say:
letters[1:4][-excl.nolength]
# character(0)
I know I could use setdiff, but that's rather long and hard to read:
x[setdiff(seq(x), excl.nolength)]
# [1] 1 4 3 2
letters[1:4][setdiff(seq(letters[1:4]), excl.nolength)]
# [1] "a" "b" "c" "d"
Now, I could exploit the fact that nothing is excluded if the element number is greater than the number of elements:
length(x)
# [1] 4
x[-5]
# [1] 1 4 3 2
To generalize that I should probably use .Machine$integer.max:
tmp <- which(x[-which.max(x)] > quantile(x, .95))
excl <- if (!length(tmp) == 0) tmp else .Machine$integer.max
x[-excl]
# [1] 1 4 3 2
Wrapped into a function,
e <- function(x) if (!length(x) == 0) x else .Machine$integer.max
that's quite handy and clear:
x[-e(excl)]
# [1] 1 2
x[-e(excl.nolength)]
# [1] 1 4 3 2
letters[1:4][-e(excl.nolength)]
# [1] "a" "b" "c" "d"
But it seems a little fishy to me...
Is there a better equally concise way to deal with a subset of length zero in base R?
Edit
excl comes out as the dynamic result of an earlier function call (as shown with which above) and might be of length zero or not. If length(excl) == 0, nothing should be excluded. Subsequent lines of code, e.g. x[-excl], should ideally not have to be changed at all, or only as little as possible.
You can overwrite [ with your own function.
"[" <- function(x,y) {if(length(y)==0) x else .Primitive("[")(x,y)}
x <- c(1, 4, 3, 2)
excl <- c(2, 3)
x[-excl]
#[1] 1 2
excl <- integer()
x[-excl]
#[1] 1 4 3 2
rm("[") #Go back to normal mode
I would argue this is somewhat opinion-based.
For example, I find:
x <- x[-if(length(excl <- which(x[-which.max(x)] > quantile(x, .95))) == 0) .Machine$integer.max else excl]
very unreadable, but some people like one-liners. Reading package code, you'll often find this instead split up into something like one of the many suggestions you gave:
excl <- which(x[-which.max(x)] > quantile(x, .95))
if(length(excl) != 0)
x <- x[-excl]
Alternatively, you could avoid which and simply use the logical vector for subsetting; this would likely be considered cleaner by most:
x <- x[!x[-which.max(x)] > quantile(x, .95)]
This avoids the zero-length index problem, at the cost of some loss of efficiency.
As a side note, the very example used above and in the question seems somewhat off. First, which.max only returns the first index that is equal to the max value, and in addition the indices will be offset for every value removed. More likely, the intended example would be:
x <- x[!(x > quantile(x, .95))[-which(x == max(x))]]
How about this?
a <- letters[1:3]
excl1 <- c(1,3)
excl2 <- c()
a[!(seq_along(a) %in% excl1)]
a[!(seq_along(a) %in% excl2)]
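For reference, these should print (my own check of the expected output):
# [1] "b"
# [1] "a" "b" "c"
Since seq_along(a) %in% c() is FALSE for every position, nothing gets excluded when the exclusion vector is empty.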
I have to create a function ans(x) which returns the value 2*abs(x) if x is negative, and the value x otherwise. What command could I use?
Thanks
ans <- function(x){
ifelse(x < 0, 2*abs(x), x)
}
will do.
> ans(2)
[1] 2
> ans(-2)
[1] 4
Explanation:
We can use the built-in base R function ifelse(). The logic is pretty simple:
ifelse(condition, output if condition is TRUE, output if condition is FALSE)
Therefore, ifelse(x < 0, 2*abs(x), x) will do the following:
evaluate whether value x is negative (<0)
if TRUE, return 2*abs(x)
if FALSE, return x
The advantage of ifelse() over a traditional if() is the vectorization: if() can only handle a single value, whereas ifelse() will evaluate any vector given as input.
Comparison:
ans_if <- function(x){
if(x < 0){2*abs(x)}else{x}
}
This is the same function, using a traditional if() structure. Giving a single value as input will result in the same output for both functions:
> ans(-2)
[1] 4
> ans_if(-2)
[1] 4
But if you want to input multiple values, let's say
test <- c(-1, -2, 3, -4)
the ifelse() variant will evaluate every element of the vector and generate the correct output as a vector of the same length:
> ans(test)
[1] 2 4 3 8
whereas the if() variant will throw a warning
> ans_if(test)
[1] 2 4 6 8
Warning message:
In if (x < 0) { :
the condition has length > 1 and only the first element will be used
and return the wrong output, as only the first value was used for evaluation (-1) and the operation over the whole vector was based on this evaluation.
I am working in R. I have a series of coordinates in decimal degrees, and I would like to sort these coordinates by how many decimal places these numbers have (i.e. I will want to discard coordinates that have too few decimal places).
Is there a function in R that can return the number of decimal places a number has, that I would be able to incorporate into function writing?
Example of input:
AniSom4 -17.23300000 -65.81700
AniSom5 -18.15000000 -63.86700
AniSom6 1.42444444 -75.86972
AniSom7 2.41700000 -76.81700
AniLac9 8.6000000 -71.15000
AniLac5 -0.4000000 -78.00000
I would ideally write a script that would discard AniLac9 and AniLac 5 because those coordinates were not recorded with enough precision. I would like to discard coordinates for which both the longitude and the latitude have fewer than 3 non-zero decimal values.
You could write a small function for the task with ease, e.g.:
decimalplaces <- function(x) {
if ((x %% 1) != 0) {
nchar(strsplit(sub('0+$', '', as.character(x)), ".", fixed=TRUE)[[1]][[2]])
} else {
return(0)
}
}
And run:
> decimalplaces(23.43234525)
[1] 8
> decimalplaces(334.3410000000000000)
[1] 3
> decimalplaces(2.000)
[1] 0
Update (Apr 3, 2018) to address @owen88's report of an error due to rounding of double precision floating point numbers -- replacing the x %% 1 check:
decimalplaces <- function(x) {
if (abs(x - round(x)) > .Machine$double.eps^0.5) {
nchar(strsplit(sub('0+$', '', as.character(x)), ".", fixed = TRUE)[[1]][[2]])
} else {
return(0)
}
}
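To connect this back to the coordinate data in the question, here is a minimal sketch of the filtering step (the data frame and column names coords, id, lat, lon are my own; the rule keeps a row unless both coordinates have fewer than 3 decimal places):
coords <- data.frame(
  id  = c("AniSom4", "AniSom5", "AniSom6", "AniSom7", "AniLac9", "AniLac5"),
  lat = c(-17.233, -18.15, 1.42444444, 2.417, 8.6, -0.4),
  lon = c(-65.817, -63.867, -75.86972, -76.817, -71.15, -78)
)
lat_dec <- sapply(coords$lat, decimalplaces)  # decimalplaces() is not vectorized, hence sapply
lon_dec <- sapply(coords$lon, decimalplaces)
coords[!(lat_dec < 3 & lon_dec < 3), ]        # drops AniLac9 and AniLac5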
Here is one way. It checks the first 20 places after the decimal point, but you can adjust the number 20 if you have something else in mind.
x <- pi
match(TRUE, round(x, 1:20) == x)
Here is another way.
nchar(strsplit(as.character(x), "\\.")[[1]][2])
Following up on Roman's suggestion:
num.decimals <- function(x) {
stopifnot(class(x)=="numeric")
x <- sub("0+$","",x)
x <- sub("^.+[.]","",x)
nchar(x)
}
x <- 5.2300000
num.decimals(x)
If your data isn't guaranteed to be of the proper form, you should do more checking to ensure other characters aren't sneaking in.
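For example, one simple guard (my own suggestion, not part of the original answer) is to keep only strings that look like plain decimal numbers before converting:
vals <- c("5.23", "7.1000", "3.2.1", "abc")
ok <- grepl("^-?[0-9]+(\\.[0-9]+)?$", vals)   # drops "3.2.1" and "abc"
num.decimals(as.numeric(vals[ok]))
# [1] 2 1  -- note the trailing zeros of "7.1000" are lost in the numeric conversion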
Not sure why this simple approach was not used above (load the pipe from tidyverse/magrittr).
count_decimals = function(x) {
#length zero input
if (length(x) == 0) return(numeric())
#count decimals
x_nchr = x %>% abs() %>% as.character() %>% nchar() %>% as.numeric()
x_int = floor(x) %>% abs() %>% nchar()
x_nchr = x_nchr - 1 - x_int
x_nchr[x_nchr < 0] = 0
x_nchr
}
> #tests
> c(1, 1.1, 1.12, 1.123, 1.1234, 1.1, 1.10, 1.100, 1.1000) %>% count_decimals()
[1] 0 1 2 3 4 1 1 1 1
> c(1.1, 12.1, 123.1, 1234.1, 1234.12, 1234.123, 1234.1234) %>% count_decimals()
[1] 1 1 1 1 2 3 4
> seq(0, 1000, by = 100) %>% count_decimals()
[1] 0 0 0 0 0 0 0 0 0 0 0
> c(100.1234, -100.1234) %>% count_decimals()
[1] 4 4
> c() %>% count_decimals()
numeric(0)
So R does not seem to distinguish internally between an input of 1.000 and 1. If one has a vector of various decimal numbers, one can see how many decimal digits the values were recorded with (at least) by taking the max of the number of decimals.
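A small illustration of that "take the max" idea (my own example values):
c(1.10, 2.345, 7) %>% count_decimals() %>% max()
# [1] 3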
Edited: fixed bugs
If someone here needs a vectorized version of the function provided by Gergely Daróczi above:
decimalplaces <- function(x) {
ifelse(abs(x - round(x)) > .Machine$double.eps^0.5,
nchar(sub('^\\d+\\.', '', sub('0+$', '', as.character(x)))),
0)
}
decimalplaces(c(234.1, 3.7500, 1.345, 3e-15))
#> 1 2 3 0
I have tested some solutions and I found this one robust to the bugs reported in the others.
countDecimalPlaces <- function(x) {
if ((x %% 1) != 0) {
strs <- strsplit(as.character(format(x, scientific = F)), "\\.")
n <- nchar(strs[[1]][2])
} else {
n <- 0
}
return(n)
}
# example to test the function with some values
xs <- c(1000.0, 100.0, 10.0, 1.0, 0, 0.1, 0.01, 0.001, 0.0001)
sapply(xs, FUN = countDecimalPlaces)
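On my reading of the function, this should print:
# [1] 0 0 0 0 0 1 2 3 4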
In R there is no difference between 2.30000 and 2.3; both end up as the same number, so the one is not more precise than the other, if that is what you want to check. On the other hand, if that is not what you meant and you really want to do this, you can 1) multiply by 10, 2) use the floor() function, 3) divide by 10, and 4) check for equality with the original, repeating with higher powers of 10 until equality holds. (However, be aware that comparing floats for equality is bad practice; make sure this is really what you want.)
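A minimal sketch of that multiply/floor/divide/compare idea (the function name n_decimals and the 15-digit cap are my own; all.equal is used instead of == to sidestep the float-equality caveat above):
n_decimals <- function(x, max_digits = 15) {
  for (k in 0:max_digits) {
    y <- floor(x * 10^k) / 10^k          # keep only the first k decimal places
    if (isTRUE(all.equal(x, y))) return(k)
  }
  max_digits
}
n_decimals(2.3)      # 1
n_decimals(2.30000)  # 1 -- same number as 2.3
n_decimals(0.125)    # 3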
For the common application, here's a modification of daroczig's code to handle vectors:
decimalplaces <- function(x) {
y = x[!is.na(x)]
if (length(y) == 0) {
return(0)
}
if (any((y %% 1) != 0)) {
info = strsplit(sub('0+$', '', as.character(y)), ".", fixed=TRUE)
info = info[sapply(info, FUN=length) == 2]
dec = nchar(unlist(info))[seq(2, 2 * length(info), 2)]
return(max(dec, na.rm=T))
} else {
return(0)
}
}
In general, there can be issues with how a floating point number is stored as binary. Try this:
> sprintf("%1.128f", 0.00000000001)
[1] "0.00000000000999999999999999939458150688409432405023835599422454833984375000000000000000000000000000000000000000000000000000000000"
How many decimals do we now have?
Interesting question. Here is another tweak on the above respondents' work, vectorized, and extended to handle the digits on the left of the decimal point. It has been tested against negative numbers, which would give an incorrect result with the previous strsplit() approach.
If it's desired to only count the ones on the right, the trailingonly argument can be set to TRUE.
nd1 <- function(xx,places=15,trailingonly=F) {
xx<-abs(xx);
if(length(xx)>1) {
fn<-sys.function();
return(sapply(xx,fn,places=places,trailingonly=trailingonly))};
if(xx %in% 0:9) return(!trailingonly+0);
mtch0<-round(xx,nds <- 0:places);
out <- nds[match(TRUE,mtch0==xx)];
if(trailingonly) return(out);
mtch1 <- floor(xx*10^-nds);
out + nds[match(TRUE,mtch1==0)]
}
Here is the strsplit() version.
nd2 <- function(xx,trailingonly=F,...) if(length(xx)>1) {
fn<-sys.function();
return(sapply(xx,fn,trailingonly=trailingonly))
} else {
sum(c(nchar(strsplit(as.character(abs(xx)),'\\.')[[1]][ifelse(trailingonly, 2, T)]),0),na.rm=T);
}
The string version cuts off at 15 digits (actually, I'm not sure why the other one's places argument is off by one... the reason it can be exceeded, though, is that it counts digits in both directions, so the result can be up to twice places if the number is sufficiently large). There is probably some formatting option to as.character() that could give nd2() an option equivalent to the places argument of nd1().
nd1(c(1.1,-8.5,-5,145,5,10.15,pi,44532456.345243627,0));
# 2 2 1 3 1 4 16 17 1
nd2(c(1.1,-8.5,-5,145,5,10.15,pi,44532456.345243627,0));
# 2 2 1 3 1 4 15 15 1
nd1() is faster.
rowSums(replicate(10,system.time(replicate(100,nd1(c(1.1,-8.5,-5,145,5,10.15,pi,44532456.345243627,0))))));
rowSums(replicate(10,system.time(replicate(100,nd2(c(1.1,-8.5,-5,145,5,10.15,pi,44532456.345243627,0))))));
I don't mean to hijack the thread; I'm just posting this here, as it might help someone dealing with the task I tried to accomplish with the proposed code.
Unfortunately, even @daroczig's updated solution didn't work for me for checking whether a number has fewer than 8 decimal digits.
@daroczig's code:
decimalplaces <- function(x) {
if (abs(x - round(x)) > .Machine$double.eps^0.5) {
nchar(strsplit(sub('0+$', '', as.character(x)), ".", fixed = TRUE)[[1]][[2]])
} else {
return(0)
}
}
In my case it produced the following results (number / number of decimal digits as produced by the code above):
[1] "0.0000437 7"
[1] "0.000195 6"
[1] "0.00025 20"
[1] "0.000193 6"
[1] "0.000115 6"
[1] "0.00012501 8"
[1] "0.00012701 20"
etc.
So far I was able to accomplish the required check with the following clumsy code:
if (abs(x*10^8 - floor(as.numeric(as.character(x*10^8)))) > .Machine$double.eps*10^8)
{
print("The number has more than 8 decimal digits")
}
PS: I might be missing something with regard to not taking the square root of .Machine$double.eps, so please use caution.
Another contribution, working entirely with numeric representations without converting to character:
countdecimals <- function(x)
{
n <- 0
while (!isTRUE(all.equal(floor(x),x)) & n <= 1e6) { x <- x*10; n <- n+1 }
return (n)
}
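A quick check with a few values of my own (note that, as discussed above, trailing zeros are not recoverable from a numeric):
countdecimals(0.125)   # 3
countdecimals(2.30000) # 1
countdecimals(7)       # 0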
Vector solution based on daroczig's function (can also deal with dirty columns containing strings and numerics):
decimalplaces_vec <- function(x) {
vector <- c()
for (i in 1:length(x)){
if(!is.na(as.numeric(x[i]))){
if ((as.numeric(x[i]) %% 1) != 0) {
vector <- c(vector, nchar(strsplit(sub('0+$', '', as.character(x[i])), ".", fixed=TRUE)[[1]][[2]]))
}else{
vector <- c(vector, 0)
}
}else{
vector <- c(vector, NA)
}
}
return(max(vector, na.rm = TRUE))
}
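A quick check of the intended behaviour (my own example; the non-numeric entry triggers a coercion warning, and the function returns the maximum number of decimals found in the column):
decimalplaces_vec(c("0.5", "abc", "2.25"))
# [1] 2   (with a warning about NAs introduced by coercion)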
as.character uses scientific notation for numbers that are between -1e-4 and 1e-4 but not zero:
> as.character(0.0001)
[1] "1e-04"
You can use format(scientific=F) instead:
> format(0.0001,scientific=F)
[1] "0.0001"
Then do this:
nchar(sub("^-?\\d*\\.?","",format(x,scientific=F)))
Or in vectorized form:
> nplaces=function(x)sapply(x,function(y)nchar(sub("^-?\\d*\\.?","",format(y,scientific=F))))
> nplaces(c(0,-1,1.1,0.123,1e-8,-1e-8))
[1] 0 0 1 3 8 8