R - strtoi strange behavior to get week of year

I use strtoi to determine the week of year in the following function:
to.week <- function(x) strtoi(format(x, "%W"))
It works fine for most dates:
> to.week(as.Date("2015-01-11"))
[1] 1
However, when I'm trying dates between 2015-02-23 and 2015-03-08, I get NA as a result:
> to.week(as.Date("2015-02-25"))
[1] NA
Could you please explain to me what causes the problem?

Here is an implementation that works:
to.week <- function(x) as.integer(format(x, "%W"))
The reason strtoi fails is that, by default, it tries to interpret numbers as octal when they are preceded by a "0". Since "%W" returns "08" for those dates, and the digit 8 doesn't exist in octal, you get the NA. From ?strtoi:
Convert strings to integers according to the given base using the C function strtol, or choose a suitable base following the C rules.
...
For decimal strings as.integer is equally useful.
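You can see the octal rule at work by passing explicit bases, which makes the behaviour unambiguous:
strtoi("08", base = 10L)  # [1] 8  -- decimal parse works
strtoi("08", base = 8L)   # [1] NA -- 8 is not a valid octal digit
strtoi("10", base = 8L)   # [1] 8  -- octal "10" is decimal 8
as.integer("08")          # [1] 8  -- always decimal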
Also, you can use week() (e.g., from lubridate):
week(as.Date("2015-02-25"))
Though you may have to offset the result of that by 1 to match your expectations.

Alternatively, you can slightly modify your code like this
to.week <- function(x) strtoi(format(x, "%W"), 10)
and use base 10.

Related

lubridate::floor_date returns NA

I am trying to write a function that, given a date start and an integer n, adds n months to the date and then takes the last day of the resulting month.
The following piece of code, however, returns NA for n=8+12k, but seems to work in other cases.
library(lubridate)
start <- ymd("2020-06-29")
n <- 8
floor_date(start + months(n), "month") + months(1) - days(1)
[1] NA
I guess this is somewhat due to the existence of leap years, but I still find it puzzling. Plus, I couldn't find anything in the docs about this.
Can someone please explain what is going on, or suggest a better way to do the job?
What happens is that start + months(8) lands on 2021-02-29, a date that doesn't exist (2021 is not a leap year), so lubridate returns NA. The clock package is really good for this, as it has an explicit argument specifying what you want done with invalid dates, and also a date_end() function.
I think you want (R >= 4.1 for the native pipe):
library(clock)
start <- date_parse("2020-06-29")
n <- 8
start |> add_months(n, invalid = "previous") |> date_end('month')
Which gives:
[1] "2021-02-28"

Trying to extract a date from a 5 or 6-digit number

I am trying to extract a date from a number. The date is stored as the first 6 digits of an 11-digit personal ID number (day-month-year). Unfortunately, the cloud-based database (REDCap) outputs this as a number, so the leading zero is dropped for those born in the first nine days of the month, leaving them with a 10-digit ID number instead of an 11-digit one. I managed to extract the 6- or 5-digit number corresponding to the date, i.e. 311230 for 31 December 1930, or 11230 for 1 December 1930. I end up with two problems that I have not been able to solve.
Let's say we use the following numbers:
dato <- c(311230, 311245, 311267, 311268, 310169, 201104, 51230, 51269, 51204)
I convert these into string, and then apply the as.Date() function:
datostr <- as.character(dato)
datofinal <- as.Date(datostr, "%d%m%y")
datofinal
The problems I have are:
Five-digit numbers (e.g. 11230) get reported as NA.
Six-digit numbers are recognized, but those born before 1.1.1969 get reported with 100 years added, i.e. 010160 gets converted to 2060-01-01.
I am sure this must be easy for those who are more knowledgeable about R, but I struggle a bit with solving it. Any help is greatly appreciated.
Greetings
Bjorn
If your 5-digit numbers really just need to be zero-padded, then
dato_s <- sprintf("%06d", dato)
dato_s
# [1] "311230" "311245" "311267" "311268" "310169" "201104" "051230" "051269" "051204"
From there, for your question about dates before 1969, take a look at ?strptime for the '%y' pattern:
'%y' Year without century (00-99). On input, values 00 to 68 are
prefixed by 20 and 69 to 99 by 19 - that is the behaviour
specified by the 2018 POSIX standard, but it does also say
'it is expected that in a future version the default century
inferred from a 2-digit year will change'.
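You can see the pivot directly:
as.Date("010168", format = "%d%m%y")  # [1] "2068-01-01" -- 68 is prefixed with 20
as.Date("010169", format = "%d%m%y")  # [1] "1969-01-01" -- 69 is prefixed with 19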
So if you have specific alternate years for those, you need to add the century before sending to as.Date (which uses strptime-patterns).
dato_d <- as.Date(gsub("([0-4][0-9])$", "20\\1",
                       gsub("([5-9][0-9])$", "19\\1", dato_s)),
                  format = "%d%m%Y")
dato_d
# [1] "2030-12-31" "2045-12-31" "1967-12-31" "1968-12-31" "1969-01-31" "2004-11-20"
# [7] "2030-12-05" "1969-12-05" "2004-12-05"
In this case, I'm assuming 50-99 will be 1900, everything else 2000. If you need 40s or 30s, feel free to adjust the pattern: add digits to the second pattern (e.g., [3-9]) and remove from the first pattern (e.g., [0-2]), ensuring that all decades are included in exactly one pattern, not "neither" and not "both".
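For instance, a hypothetical variant that pivots at 1930 instead of 1950 (30-99 become 19xx, 00-29 become 20xx):
dato_d <- as.Date(gsub("([0-2][0-9])$", "20\\1",
                       gsub("([3-9][0-9])$", "19\\1", dato_s)),
                  format = "%d%m%Y")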
Borrowing from Allan's answer, I like that assumption of now() (since you did mention "born on"). Without lubridate, try this:
dato_s <- sprintf("%06d", dato)
dato_d <- as.Date(dato_s, format = "%d%m%y")
dato_d[ dato_d > Sys.Date() ] <-
  as.Date(sub("([0-9]{2})$", "19\\1", dato_s[ dato_d > Sys.Date() ]), format = "%d%m%Y")
dato_d
# [1] "1930-12-31" "1945-12-31" "1967-12-31" "1968-12-31" "1969-01-31" "2004-11-20"
# [7] "1930-12-05" "1969-12-05" "2004-12-05"
You can make this a bit easier using lubridate, and noting that no-one can have a date of birth that is in the future of the current time:
library(lubridate)
dato <- dmy(sprintf("%06d", dato))
dato[dato > now()] <- dato[dato > now()] - years(100)
dato
#> [1] "1930-12-31" "1945-12-31" "1967-12-31" "1968-12-31" "1969-01-31"
#> [6] "2004-11-20" "1930-12-05" "1969-12-05" "2004-12-05"
Of course, without further information, this method will not (nor will any other method) be able to pick out the edge cases of people who are aged over 100. This might be easy to determine from the context.
Created on 2020-06-29 by the reprex package (v0.3.0)
Converting five digit "numbers" to six digits is straightforward: x <- stringr::str_pad(x, 6, pad="0") or similar will do the trick.
Your problem with years is the Millennium bug revisited. You'll have to consult with whoever compiled your data to see what assumptions they used.
I suspect the affected range is wider than just those before 01Jan1960: per the POSIX rule quoted above, as.Date maps two-digit years 00-68 to 20xx and 69-99 to 19xx, so birth years like 30 or 45 come out as 2030 and 2045. The fix is to make the century explicit during the conversion, as in the answers above. And then start using four-digit years in the future! ;)

Remove all leading zeros and turn number into a positive

I'm trying to convert .000278 into 278 in R but all functions I see can only move it to .278. It needs to be a whole, positive number.
Is there a function that will remove the .0+ preceding a number?
I assume you want to apply this to many numbers at once (otherwise I might not understand the question).
a <- c(0.003, 0.0056, 0.000278)  # Example data
a*(10^(nchar(a)-2))
[1] 3 56 278
Make sure scientific notation is disabled with scipen, as discussed in this post (e.g., by doing options(scipen = 999)). This is important because nchar counts the number of characters in each number (minus 2 for the leading "0.").
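A quick illustration of the failure mode this guards against: for very small numbers, the character conversion that nchar relies on switches to scientific notation and the count goes wrong.
b <- 0.0000001
as.character(b)                # [1] "1e-07"
nchar(b)                       # [1] 5 -- not the count the formula expects
format(b, scientific = FALSE)  # [1] "0.0000001"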
Another approach: use the stringr package.
a <- c(0.003, 0.0056, 0.000278)  # Example data
library(stringr)
as.numeric(str_replace(a, "0.", ""))
[1] 3 56 278
Note that "0." here is a regular expression (the dot matches any character), so stringr::fixed("0.") would be stricter, and you need to convert the output of str_replace back to numeric with as.numeric (not ideal).
Or use substr and regexpr, which gives exactly what you wanted:
x <- 0.000278
as.numeric(substr(x, regexpr("[^0.]", x), nchar(x)))
[1] 278
And this also works for different numbers; just set:
options("scipen"=100, "digits"=10) # Force R not to use exponential notation (e.g. e-8)
and you could try for example:
z <- 0.000000588
as.numeric(substr(z, regexpr("[^0.]", z), nchar(z)))
[1] 588
Try this (len is adjustable):
a <- 0.000278
a*(10^match(T, round(a, 1:(len=10)) == a)) # counts the number of decimal places
# 278
Use as.numeric(a) in case a is of type character.
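To see how the match() trick counts decimal places, rerunning the answer's own example step by step:
a <- 0.000278
round(a, 1:10) == a
#  [1] FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
match(T, round(a, 1:10) == a)  # [1] 6 -- rounding first becomes lossless at 6 digits
a * 10^6                       # [1] 278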
> as.numeric(gsub("^[0.]*", "", paste(0.000278)))
[1] 278

Forcing full weeks with apply.weekly()

I'm trying to figure out what xts (or zoo) uses as the timestamp after applying a period function such as apply.weekly. Consider the following:
> myTs = xts(1:10, as.Date(1:10, origin = '2012-12-1'))
> apply.weekly(myTs, colSums)
           [,1]
2012-12-02    1
2012-12-09   35
2012-12-11   19
I think the '2012-12-02' means "for the week ending 2012-12-02, the sum is 1". So basically the time is the end of the week.
But the problem is with that "2012-12-11": I think it's taking the last day of that week present in the data, and reporting that as the time.
Is there any way to force it to give the Sunday on which the week ends, even if that day was not included in the data set?
Try this:
nextsun <- function(x) 7 * ceiling(as.numeric(x-0+4) / 7) + as.Date(0-4)
aggregate(myTs, nextsun, sum)
where nextsun was derived from the nextfri code given in the zoo quick reference by replacing 5 (for Friday) with 0 (for Sunday).
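A quick sanity check that nextsun maps each day to the Sunday ending its week:
nextsun(as.Date("2012-12-09"))  # [1] "2012-12-09" -- a Sunday maps to itself
nextsun(as.Date("2012-12-10"))  # [1] "2012-12-16"
nextsun(as.Date("2012-12-11"))  # [1] "2012-12-16"
so aggregate(myTs, nextsun, sum) stamps the final partial week as 2012-12-16.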
Those are full weeks. It's only showing you the date of the very last observation. See ?endpoints (apply.weekly is essentially a thin wrapper around endpoints):
apply.weekly
function (x, FUN, ...)
{
    ep <- endpoints(x, "weeks")
    period.apply(x, ep, FUN, ...)
}
<environment: namespace:xts>
From ?endpoints
endpoints returns a numeric vector corresponding to the last
observation in each period specified by on, with a zero added to the
beginning of the vector, and the index of the last observation in x at
the end.
Valid values for the argument on include: “us” (microseconds),
“microseconds”, “ms” (milliseconds), “milliseconds”, “secs” (seconds),
“seconds”, “mins” (minutes), “minutes”, “hours”, “days”, “weeks”,
“months”, “quarters”, and “years”.
The answer to your second question is no, there is no option to do so. But you could always edit the last date manually; if you're going to present all the data wrapped up anyway, I don't see any harm in it.
No, you can't force it to give you the Sunday.
Because the index of the result of period.apply is given by
ep <- endpoints(myTs, 'weeks')
myTs[ep]
           [,1]
2012-12-02    1
2012-12-09    8
2012-12-11   10
So you need to shift the last date. Unfortunately, xts doesn't offer this option; you can't shift a single value of the index. I don't know why (maybe a design choice to keep a unique index).
For example, you can do the following:
ts.weeks <- apply.weekly(myTs, colSums)
# shift only the final timestamp; the aggregated values stay as they are
index(ts.weeks)[length(ts.weeks)] <- last(index(myTs)) + 7 - last(diff(ep))
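which should now print as:
ts.weeks
#            [,1]
# 2012-12-02    1
# 2012-12-09   35
# 2012-12-16   19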

Removing zero lines from dataframe yields dataframe of zero lines

I have a script with a bunch of quality-control checks, and it got caught on a dataset where no samples (rows) needed to be removed. However, the script gave me an unexpected result: a dataframe with zero rows. With example data, why does this work:
data(iris)
##get rid of those pesky factors
iris$Species <- NULL
med <- which(iris[, 1] < 4.9)
medtemp <- iris[-med, ]
dim(medtemp)
[1] 134 4
but this returns a dataframe of zero rows:
small <- which(iris[, 1] < 4.0)
smalltemp <- iris[-small, ]
dim(smalltemp)
[1] 0 4
As does this:
x <- 0
zerotemp <- iris[-x, ]
dim(zerotemp)
[1] 0 4
It seems that the smalltemp dataframe should be the same size as iris since there are no rows to remove at all. Why is this?
Copied verbatim from Patrick Burns's R Inferno p. 41 (I hope this constitutes "fair use" -- if someone objects I'll remove it)
negative nothing is something
> x2 <- 1:4
> x2[-which(x2 == 3)]
[1] 1 2 4
The command above returns all of the values in x2 not equal to 3.
> x2[-which(x2 == 5)]
numeric(0)
The hope is that the above command returns all of x2 since no elements are
equal to 5. Reality will dash that hope. Instead it returns a vector of length
zero.
There is a subtle difference between the two following statements:
x[]
x[numeric(0)]
Subtle difference in the input, but no subtlety in the difference in the output.
There are at least three possible solutions for the original problem.
out <- which(x2 == 5)
if(length(out)) x2[-out] else x2
Another solution is to use logical subscripts:
x2[!(x2 %in% 5)]
Or you can, in a sense, work backwards:
x2[ setdiff(seq_along(x2), which(x2 == 5)) ]
Could it be that in your second example, small evaluates to an empty vector (and in your third, to 0)?
Taking the zeroth element of a vector will always return the empty vector:
> foo <- 1:3
> foo
[1] 1 2 3
> foo[0]
integer(0)
>
Instead of using which to get your indices, I would use a boolean vector and negate it. That way you can do this:
small <- iris[, 1] < 4.0
smalltemp <- iris[!small, ]
dim(smalltemp)
[1] 150 4
EDIT: A negative index of 0 (as in your case) doesn't exclude anything, since there is no 0th element for R to drop. Negative indexing can be interpreted as: "give me back all rows except those with these indices", and -0 names no row.
It is because of the rules of what to do with an index that is zero. Only strictly positive or strictly negative indices are allowed. [0] returns nothing, and since
R> -0 == 0
[1] TRUE
Hence you get nothing where you expected it to drop nothing.
The integer(0) case is treated as indexing by NULL, and this is documented to work as if indexing by 0, hence the same behaviour. This is discussed in the R Language Definition manual.
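A compact demonstration of these rules in base R:
x <- 1:3
x[0]           # integer(0)
x[-0]          # integer(0) -- -0 is identical to 0
x[integer(0)]  # integer(0)
x[NULL]        # integer(0) -- NULL indexing behaves like integer(0)
So the robust idioms are the ones shown above: guard the which() with a length() check, or subset with a negated logical vector.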
