Table by row with R

I would like to tabulate by row within a data frame. I can obtain adequate results using table within apply in the following example:
df.1 <- read.table(text = '
state county city year1 year2 year3 year4 year5
1 2 4 0 0 0 1 2
2 5 3 10 20 10 NA 10
2 7 1 200 200 NA NA 200
3 1 1 NA NA NA NA NA
', na.strings = "NA", header=TRUE)
tdf <- t(df.1)
apply(tdf[4:nrow(tdf),1:nrow(df.1)], 2, function(x) {table(x, useNA = "ifany")})
Here are the results:
[[1]]
x
0 1 2
3 1 1
[[2]]
x
10 20 <NA>
3 1 1
[[3]]
x
200 <NA>
3 2
[[4]]
x
<NA>
5
However, in the following example each row consists of a single value.
df.2 <- read.table(text = '
state county city year1 year2 year3 year4 year5
1 2 4 0 0 0 0 0
2 5 3 1 1 1 1 1
2 7 1 2 2 2 2 2
3 1 1 NA NA NA NA NA
', na.strings = "NA", header=TRUE)
tdf.2 <- t(df.2)
apply(tdf.2[4:nrow(tdf.2),1:nrow(df.2)], 2, function(x) {table(x, useNA = "ifany")})
The output I obtain is:
# [1] 5 5 5 5
As such, I cannot tell from this output that the first 5 is for 0, the second 5 is for 1, the third 5 is for 2 and the last 5 is for NA. Is there a way I can have R return the value represented by each 5 in the second example?

You can use lapply, which always returns a list; you just have to loop over the row indices:
sub.df <- as.matrix(df.2[grepl("year", names(df.2))])
lapply(seq_len(nrow(sub.df)),
function(i)table(sub.df[i, ], useNA = "ifany"))
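Run on df.2 this always returns a list, so each count keeps its value label (output abridged):
[[1]]
0
5
[[2]]
1
5
[[3]]
2
5
[[4]]
<NA>
5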

Alternatively, protect the result from simplification by wrapping each table in list():
apply(tdf.2[4:nrow(tdf.2),1:nrow(df.2)], 2,
function(x) {list(table(x, useNA = "ifany")) })
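apply then returns a list of one-element lists, so nothing gets simplified away; a small sketch of unwrapping it back into a flat list of tables:
res <- apply(tdf.2[4:nrow(tdf.2), 1:nrow(df.2)], 2,
             function(x) { list(table(x, useNA = "ifany")) })
res <- lapply(res, `[[`, 1)  # drop the protective outer list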

Here's a table solution:
table(
rep(rownames(df.1),5),
unlist(df.1[,4:8]),
useNA="ifany")
This gives
0 1 2 10 20 200 <NA>
1 3 1 1 0 0 0 0
2 0 0 0 3 1 0 1
3 0 0 0 0 0 3 2
4 0 0 0 0 0 0 5
...and for your df.2:
0 1 2 <NA>
1 5 0 0 0
2 0 5 0 0
3 0 0 5 0
4 0 0 0 5
Well, this is a solution unless you really like having a list of tables for some reason.

I think the problem is stated in apply's help:
... If n equals 1, apply returns a vector if MARGIN has length 1 and
an array of dimension dim(X)[MARGIN] otherwise ...
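A quick check makes the inconsistency concrete (using df.1 and df.2 from above): the rows of df.1 yield tables of different lengths, which forces a list, while every row of df.2 yields a length-1 table, which apply simplifies to a bare, unnamed vector:
r1 <- apply(t(df.1)[4:8, ], 2, function(x) table(x, useNA = "ifany"))
r2 <- apply(t(df.2)[4:8, ], 2, function(x) table(x, useNA = "ifany"))
is.list(r1)  # TRUE  -- value labels survive
is.list(r2)  # FALSE -- simplified to the unnamed vector 5 5 5 5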
The inconsistencies of the return values of base R's apply family are the reason why I shifted completely to plyr's **ply functions. So this works as desired:
library(plyr)
alply( df.2[ 4:8 ], 1, function(x) table( unlist(x), useNA = "ifany" ) )

Related

Drop columns with many missing values in R

I am trying to drop columns that have fewer than 5 valid values. Here is an example dataset.
df <- data.frame(id = c(1,2,3,4,5,6,7,8,9,10),
i1 = c(0,1,1,1,1,0,0,1,NA,1),
i2 = c(1,0,0,1,0,1,1,0,0,NA),
i3 = c(NA,NA,NA,NA,NA,NA,NA,NA,NA,0),
i4 = c(NA,1,NA,NA,NA,NA,NA,NA,1,NA))
> df
id i1 i2 i3 i4
1 1 0 1 NA NA
2 2 1 0 NA 1
3 3 1 0 NA NA
4 4 1 1 NA NA
5 5 1 0 NA NA
6 6 0 1 NA NA
7 7 0 1 NA NA
8 8 1 0 NA NA
9 9 NA 0 NA 1
10 10 1 NA 0 NA
In this case, columns i3 and i4 need to be dropped from the data frame.
How can I get the desired dataset below:
> df
id i1 i2
1 1 0 1
2 2 1 0
3 3 1 0
4 4 1 1
5 5 1 0
6 6 0 1
7 7 0 1
8 8 1 0
9 9 NA 0
10 10 1 NA
You can keep cols with at least 5 non-missing values with:
df[colSums(!is.na(df)) >= 5]
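To see why this works: !is.na(df) is a logical matrix, and colSums() counts the TRUEs (the non-missing values) per column; comparing against 5 then gives the logical vector used to subset the columns:
colSums(!is.na(df))
#> id i1 i2 i3 i4
#> 10  9  9  1  2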
You can use discard from the purrr package:
library(purrr)
df <- data.frame(id = c(1,2,3,4,5,6,7,8,9,10),
i1 = c(0,1,1,1,1,0,0,1,NA,1),
i2 = c(1,0,0,1,0,1,1,0,0,NA),
i3 = c(NA,NA,NA,NA,NA,NA,NA,NA,NA,0),
i4 = c(NA,1,NA,NA,NA,NA,NA,NA,1,NA))
df %>%
discard(~ sum(!is.na(.))<5)
#> id i1 i2
#> 1 1 0 1
#> 2 2 1 0
#> 3 3 1 0
#> 4 4 1 1
#> 5 5 1 0
#> 6 6 0 1
#> 7 7 0 1
#> 8 8 1 0
#> 9 9 NA 0
#> 10 10 1 NA
Created on 2022-11-10 with reprex v2.0.2
While this is likely slower than base R methods (for datasets with very many columns, say more than 1000), I generally feel the readability of the code is far superior. In addition, it is easy to write more complicated conditions.
Using base R, another approach (this keeps columns with fewer than 5 NAs, which coincides with the desired result for this 10-row data):
> df[, sapply(df, function(x) sum(is.na(x))) < 5]
id i1 i2
1 1 0 1
2 2 1 0
3 3 1 0
4 4 1 1
5 5 1 0
6 6 0 1
7 7 0 1
8 8 1 0
9 9 NA 0
10 10 1 NA
A performance comparison of the different answers given in this post (dplyr and purrr are needed for the tidyverse entries):
library(dplyr)
library(purrr)
funs = list(
  colSums = function(df){df[colSums(!is.na(df)) >= nrow/10]},
  sapply = function(df){df[, sapply(df, function(x) sum(!is.na(x))) >= nrow/10]},
  discard = function(df){df %>% discard(~ sum(!is.na(.)) < nrow/10)},
  mutate = function(df){df %>% mutate(across(where(~ sum(!is.na(.)) < nrow/10), ~ NULL))},
  select = function(df){df %>% select(where(~ sum(!is.na(.)) >= nrow/10))})
ncol = 10000
nrow = 100
df = replicate(ncol, sample(c(1:9, NA), nrow, TRUE)) %>% as_tibble()
avrtime = map_dbl(funs, function(f){
duration = c()
for(i in 1:10){
t1 = Sys.time()
f(df)
t2 = Sys.time()
duration[i] = as.numeric(t2 - t1)}
return(mean(duration))})
avrtime[order(avrtime)]
The average time taken by each (in seconds):
colSums sapply discard select mutate
0.04510500 0.04692972 0.29207475 0.29451160 0.31755514
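As an aside, Sys.time() deltas are fairly coarse for sub-second operations; a sketch using the microbenchmark package (an alternative not used in the comparison above) would give more stable numbers:
library(microbenchmark)
microbenchmark(
  colSums = df[colSums(!is.na(df)) >= nrow/10],
  discard = df %>% discard(~ sum(!is.na(.)) < nrow/10),
  times = 10
)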
Using select
library(dplyr)
df %>%
select(where(~ sum(complete.cases(.x)) >=5))
-output
id i1 i2
1 1 0 1
2 2 1 0
3 3 1 0
4 4 1 1
5 5 1 0
6 6 0 1
7 7 0 1
8 8 1 0
9 9 NA 0
10 10 1 NA
Or in base R
Filter(\(x) sum(complete.cases(x)) >= 5, df)

A handy means of reordering columns in large data frame using R

This question concerns reordering the columns of a large data frame, with about 800 columns. The data frame contains, for each id (the first column), repeated blocks of measurement columns, each block introduced by a date column. Similar questions appear online (e.g. Reordering columns in data frame once again and Reordering columns in large data frame) but their specifics do not fit my case. A sample of the data set is
df <-
structure(
list(
id = c(1L, 2L, 3L, 4L,5L),
date1 = c("1/4/2004", "3/8/2004", "NA", "13/10/2004","11/3/2003"),
ax=c(1,2,1,"NA",5),
am=c(1,0,1,0,0),
aq=c(0,0,1,1,1),
date2 = c("8/6/2002", "11/5/2004", "3/5/2004",
"25/11/2004","21/1/2004"),
bx=c(3,2,6,1,5),
bm=c(1,1,0,1,1),
bq=c(1,0,1,0,0),
date3=c("23/6/2006", "24/12/2006", "18/2/2006", "NA","NA"),
cx=c(1,2,4,1,0),
cm=c(1,1,0,1,1),
cq=c(1,0,1,0,0)
),
.Names = c("id",
"date1","ax","am","aq","date2","bx","bm","bq","date3","cx","cm","cq"),
class = "data.frame",
row.names = c(NA,-5L)
)
I want to reorder the columns such that we have "am","aq","ax"; "bm","bq","bx" and "cm","cq","cx" following the date1; date2 and date3, respectively. For this small scenario example, I have tried
df1<-df[,c(1,2,4,5,3,6,8,9,7,10,12,13,11)]
This code works well and it produces the expected results below
df1
id date1 am aq ax date2 bm bq bx date3 cm cq cx
1 1 1/4/2004 1 0 1 8/6/2002 1 1 3 23/6/2006 1 1 1
2 2 3/8/2004 0 0 2 11/5/2004 1 0 2 24/12/2006 1 0 2
3 3 NA 1 1 1 3/5/2004 0 1 6 18/2/2006 0 1 4
4 4 13/10/2004 0 1 NA 25/11/2004 1 0 1 NA 1 0 1
5 5 11/3/2003 0 1 5 21/1/2004 1 0 5 NA 1 0 0
However, I am looking for a handier approach that scales to large data. Any help is greatly appreciated.
If your complete data follows the pattern you've outlined you can recycle a vector of position adjustments like so:
df[c(1, (2:ncol(df) + c(0,1,1,-2)))]
id date1 am aq ax date2 bm bq bx date3 cm cq cx
1 1 1/4/2004 1 0 1 8/6/2002 1 1 3 23/6/2006 1 1 1
2 2 3/8/2004 0 0 2 11/5/2004 1 0 2 24/12/2006 1 0 2
3 3 NA 1 1 1 3/5/2004 0 1 6 18/2/2006 0 1 4
4 4 13/10/2004 0 1 NA 25/11/2004 1 0 1 NA 1 0 1
5 5 11/3/2003 0 1 5 21/1/2004 1 0 5 NA 1 0 0
Explanation:
The pattern is to keep the date in place, fill the next two slots with the columns one position ahead, and fill the fourth slot with the column two positions back. We can create a vector of this:
adj.pattern <- c(0,1,1,-2)
Because R recycles shorter vectors to match the length of longer ones, we can apply it to the index of column positions from position 2 to the last column, 2:ncol(df), which gives
col.index <- 2:ncol(df) + adj.pattern
col.index
[1] 2 4 5 3 6 8 9 7 10 12 13 11
Then we use this index to order the data frame (adding 1 at the start for the ID column):
df[c(1, col.index)]
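Because adj.pattern is recycled across all of 2:ncol(df), the same one-liner scales to the full ~800-column frame. If the full data follow the same 4-column date/x/m/q pattern, a hypothetical guard before reordering might look like:
# the recycled pattern is only valid if the blocks repeat exactly
stopifnot((ncol(df) - 1) %% length(adj.pattern) == 0)
df.reordered <- df[c(1, 2:ncol(df) + adj.pattern)]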
If you want to keep the id and date columns fixed and sort the remaining columns within themselves based on name, we can do
#1:ncol(df)
all_cols <- seq_len(ncol(df))
#Get indices of fixed columns
fixed_columns <- c(1, grep("date", names(df)))
#Get the name of columns apart from fixed ones
cols <- names(df)[-fixed_columns]
#Sort and match them and update the new order in all_cols
all_cols[-fixed_columns] <- match(sort(cols), names(df))
df[all_cols]
# id date1 am aq ax date2 bm bq bx date3 cm cq cx
#1 1 1/4/2004 1 0 1 8/6/2002 1 1 3 23/6/2006 1 1 1
#2 2 3/8/2004 0 0 2 11/5/2004 1 0 2 24/12/2006 1 0 2
#3 3 NA 1 1 1 3/5/2004 0 1 6 18/2/2006 0 1 4
#4 4 13/10/2004 0 1 NA 25/11/2004 1 0 1 NA 1 0 1
#5 5 11/3/2003 0 1 5 21/1/2004 1 0 5 NA 1 0 0

Deleting unnecessary rows after column shuffling in a data frame in R

I have a data frame as below. The Status of each ID is recorded at different time points. 0 means the person is alive and 1 means dead.
ID Status
1 0
1 0
1 1
2 0
2 0
2 0
3 0
3 0
3 0
3 1
I want to shuffle the column Status such that each ID can have a status of 1 at most once; the rows after that 1 should be NA. For instance, I want my data frame to look like the one below after shuffling:
ID Status
1 0
1 0
1 0
2 0
2 1
2 NA
3 0
3 1
3 NA
3 NA
From the data you posted and your example output, it looks like you want to randomly sample df$Status and then do the replacement. To get what you want in one step you could do:
set.seed(3)
df$Status <- ave(sample(df$Status), df$ID, FUN = function(x) replace(x, which(cumsum(x)>=1)[-1], NA))
df
# ID Status
#1 1 0
#2 1 0
#3 1 0
#4 2 1
#5 2 NA
#6 2 NA
#7 3 0
#8 3 0
#9 3 1
#10 3 NA
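To unpack the one-liner (a sketch on a single shuffled group x): cumsum(x) >= 1 is TRUE from the first 1 onward, so dropping the first such index with [-1] leaves exactly the positions after the first 1, which replace() sets to NA:
x <- c(0, 1, 0, 1)            # one shuffled group
which(cumsum(x) >= 1)         # 2 3 4: the first 1 and everything after it
which(cumsum(x) >= 1)[-1]     # 3 4: everything after the first 1
replace(x, which(cumsum(x) >= 1)[-1], NA)
# [1]  0  1 NA NA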
One option is to use cumsum of cumsum to find the first 1 for each ID.
Note that I have modified the OP's sample data frame to illustrate the reshuffling logic.
library(dplyr)
df %>% group_by(ID) %>%
mutate(Sum = cumsum(cumsum(Status))) %>%
mutate(Status = ifelse(Sum > 1, NA, Status)) %>%
select(-Sum)
# # A tibble: 10 x 2
# # Groups: ID [3]
# ID Status
# <int> <int>
# 1 1 0
# 2 1 0
# 3 1 1
# 4 2 0
# 5 2 1
# 6 2 NA
# 7 3 0
# 8 3 1
# 9 3 NA
# 10 3 NA
Data
df <- read.table(text =
"ID Status
1 0
1 0
1 1
2 0
2 1
2 0
3 0
3 1
3 0
3 0", header = TRUE)

sapply with function(x) where x is a subsetted argument

So, I want to generate a new vector from the information in two existing numerical ones: one giving the participant id, the other indicating the observation number. Each participant has been observed a different number of times.
Now, the new vector should state: 0 when obs_no = 1; 1 when obs_no = the last observation for that id; NA for cases in between.
id obs_no new_vector
1 1 0
1 2 NA
1 3 NA
1 4 NA
1 5 1
2 1 0
2 2 1
3 1 0
3 2 NA
3 3 1
I figure I could do this separately for every id using code like this:
new_vector <- c(0, rep(NA, times=length(obs_no[id==1])-2), 1)
Or I guess just using max() but it wouldn't make any difference.
But adding each participant manually is really inconvenient since I have a lot of cases. I can't figure out how to write a generic function. I tried to define a function(x) using sapply but can't get it to work since x is positioned within the subsetting brackets.
Any advice would be helpful. Thanks.
ave to the rescue:
dat$newvar <- NA
dat$newvar <- with(dat,
ave(newvar, id, FUN=function(x) replace(x, c(length(x),1), c(1,0)) )
)
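The inner function writes 1 at the last index and 0 at the first of each id's group, leaving the NAs in between; for example, for an id with three observations:
replace(rep(NA, 3), c(3, 1), c(1, 0))
# [1]  0 NA  1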
Or use a bit of duplicated() fun:
dat$newvar <- NA
dat$newvar[!duplicated(dat$id, fromLast=TRUE)] <- 1
dat$newvar[!duplicated(dat$id)] <- 0
Both giving:
# id obs_no new_vector newvar
#1 1 1 0 0
#2 1 2 NA NA
#3 1 3 NA NA
#4 1 4 NA NA
#5 1 5 1 1
#6 2 1 0 0
#7 2 2 1 1
#8 3 1 0 0
#9 3 2 NA NA
#10 3 3 1 1
You can also do this with dplyr
str <- "
id obs_no new_vector
1 1 0
1 2 NA
1 3 NA
1 4 NA
1 5 1
2 1 0
2 2 1
3 1 0
3 2 NA
3 3 1
"
dt <- read.table(textConnection(str), header = T)
library(dplyr)
dt %>% group_by(id) %>%
mutate(newvar = if_else(obs_no==1,0L,if_else(obs_no==max(obs_no),1L,as.integer(NA))))
We can use data.table
library(data.table)
i1 <- setDT(df1)[, .I[seq_len(.N) %in% c(1, .N)], id]$V1
df1[i1, newvar := c(0, 1)]
df1
# id obs_no new_vector newvar
# 1: 1 1 0 0
# 2: 1 2 NA NA
# 3: 1 3 NA NA
# 4: 1 4 NA NA
# 5: 1 5 1 1
# 6: 2 1 0 0
# 7: 2 2 1 1
# 8: 3 1 0 0
# 9: 3 2 NA NA
#10: 3 3 1 1
Use split:
result = lapply(split(obs_no, id), function (x) c(0, rep(NA, length(x) - 2), 1))
This gives you a list of vectors. You can paste them back together like this:
do.call(c, result)
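One caveat: split() names the list by id, so the concatenated vector picks up mangled names such as 11, 12, ...; unlist() with use.names = FALSE gives a clean unnamed vector instead:
unlist(result, use.names = FALSE)
# [1]  0 NA NA NA  1  0  1  0 NA  1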

R aggregate all possible combinations incl. "don't cares"

Say we've got a dataframe with 3 columns representing 3 different cases, and each can be of state 0 or 1. A fourth column contains a measurement.
set.seed(123)
df <- data.frame(round(runif(25)),
round(runif(25)),
round(runif(25)),
runif(25))
colnames(df) <- c("V1", "V2", "V3", "x")
head(df)
V1 V2 V3 x
1 0 1 0 0.2201189
2 1 1 0 0.3798165
3 0 1 1 0.6127710
aggregate(df$x, by=list(df$V1, df$V2, df$V3), FUN=mean)
Group.1 Group.2 Group.3 x
1 0 0 0 0.1028646
2 1 0 0 0.5081943
3 0 1 0 0.4828984
4 1 1 0 0.5197925
5 0 0 1 0.4571073
6 1 0 1 0.3219217
7 0 1 1 0.6127710
8 1 1 1 0.6029213
The aggregate function calculates the mean for all possible combinations. However, in my research I also need to know the outcome of combinations where certain columns may have any state. For example, the mean of all observations with V1==1 & V2==1, regardless of the contents of V3. The result should look like this, with the asterisk representing "don't care":
Group.1 Group.2 Group.3 x
1 * * * 0.1234567 (this is the mean of all rows)
2 0 * * 0.1234567
3 1 * * 0.1234567
4 * 0 * 0.1224567
5 * 1 * 0.1234567
[ all other possible combinations follow, for a total of 27 rows ]
Is there an easy way to achieve this?
Here is the ldply-ddply method:
library(plyr)
ldply(list(.(V1,V2,V3),.(V1),.(V2),.()), function(y) ddply(df,y,summarise,x=mean(x)))
V1 V2 V3 x .id
1 0 0 0 0.1028646 <NA>
2 0 0 1 0.4571073 <NA>
3 0 1 0 0.4828984 <NA>
4 0 1 1 0.6127710 <NA>
5 1 0 0 0.5081943 <NA>
6 1 0 1 0.3219217 <NA>
7 1 1 0 0.5197925 <NA>
8 1 1 1 0.6029213 <NA>
9 0 NA NA 0.4436400 <NA>
10 1 NA NA 0.4639997 <NA>
11 NA 0 NA 0.4118793 <NA>
12 NA 1 NA 0.5362985 <NA>
13 NA NA NA 0.4566702 <NA>
Essentially you create a list of all the variable combinations you are interested in, and iterate over it with ldply, using ddply to perform the aggregation. The magic of plyr puts it all into a compact data frame for you. All that remains is to remove the spurious .id column introduced by the grand mean (.()) and to replace the NAs in the groups with "*" if needed.
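A sketch of that cleanup, assuming the combined result above is saved as res:
res <- ldply(list(.(V1,V2,V3), .(V1), .(V2), .()),
             function(y) ddply(df, y, summarise, x = mean(x)))
res$.id <- NULL                 # drop the spurious .id column
grp <- c("V1", "V2", "V3")
res[grp] <- lapply(res[grp], function(x) { x[is.na(x)] <- "*"; x })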
To get all combinations you can use combn and lapply to generate a list with the relevant combinations to plug into ldply:
all.combs <- unlist(lapply(0:3,combn,x=c("V1","V2","V3"),simplify=FALSE),recursive=FALSE)
ldply(all.combs, function(y) ddply(df,y,summarise,x=mean(x)))
.id x V1 V2 V3
1 <NA> 0.4566702 NA NA NA
2 <NA> 0.4436400 0 NA NA
3 <NA> 0.4639997 1 NA NA
4 <NA> 0.4118793 NA 0 NA
5 <NA> 0.5362985 NA 1 NA
6 <NA> 0.4738541 NA NA 0
7 <NA> 0.4380543 NA NA 1
8 <NA> 0.3862588 0 0 NA
9 <NA> 0.5153666 0 1 NA
10 <NA> 0.4235250 1 0 NA
11 <NA> 0.5530440 1 1 NA
12 <NA> 0.3878900 0 NA 0
13 <NA> 0.4882400 0 NA 1
14 <NA> 0.5120604 1 NA 0
15 <NA> 0.4022073 1 NA 1
16 <NA> 0.4502901 NA 0 0
17 <NA> 0.3820042 NA 0 1
18 <NA> 0.5013455 NA 1 0
19 <NA> 0.6062045 NA 1 1
20 <NA> 0.1028646 0 0 0
21 <NA> 0.4571073 0 0 1
22 <NA> 0.4828984 0 1 0
23 <NA> 0.6127710 0 1 1
24 <NA> 0.5081943 1 0 0
25 <NA> 0.3219217 1 0 1
26 <NA> 0.5197925 1 1 0
27 <NA> 0.6029213 1 1 1
(Nice reproducible code, btw, well-stated question.)
Perhaps the best way to attack this would be to create (and later
discard) another column indicating a grouping. Starting with your
data:
set.seed(123)
df <- data.frame(round(runif(25)),
round(runif(25)),
round(runif(25)),
runif(25))
colnames(df) <- c("V1", "V2", "V3", "x")
Let's first form a data.frame of all possible combinations, using a fourth column to provide a unique group id.
allpossibles <- expand.grid(V1=unique(df$V1), V2=unique(df$V2), V3=unique(df$V3))
allpossibles$id <- 1:nrow(allpossibles)
head(allpossibles, n=3)
## V1 V2 V3 id
## 1 0 1 0 1
## 2 1 1 0 2
## 3 0 0 0 3
With this data.frame, change the id for rows where you have the desired commonality. For instance, the following two combinations, (1,1,0) and (1,1,1), are identical as far as you care, so set the id variable to be the same:
subset(allpossibles, V1==1 & V2==1)
## V1 V2 V3 id
## 2 1 1 0 2
## 6 1 1 1 6
allpossibles$id[6] <- 2
From here, merge the two data.frames so that id is incorporated into
the original:
df2 <- merge(df, allpossibles, by=c('V1','V2','V3'))
head(df2, n=3)
## V1 V2 V3 x id
## 1 0 0 0 0.1028646 3
## 2 0 0 1 0.1750527 7
## 3 0 0 1 0.3435165 7
From here, it's a simple matter of aggregating the data and remerging
with allpossibles (to regain V1, V2, and V3):
df3 <- aggregate(df2$x, by=list(df2$id), FUN=mean)
colnames(df3) <- c('id','x')
(df4 <- merge(allpossibles, df3, by='id'))
## id V1 V2 V3 x
## 1 1 0 1 0 0.4828984
## 2 2 1 1 0 0.5530440
## 3 2 1 1 1 0.5530440
## 4 3 0 0 0 0.1028646
## 5 4 1 0 0 0.5081943
## 6 5 0 1 1 0.6127710
## 7 7 0 0 1 0.4571073
## 8 8 1 0 1 0.3219217
If you can accept the data with semi-duplicate rows (see rows 2 and 3
above), then just remove the $id column and have at it. If you must
unique-ify the rows, something like the following might work:
df5 <- do.call(rbind, by(df4, df4$id, function(ldf) {
if (nrow(ldf) > 1) {
uniqlen <- apply(ldf, 2, function(x) length(unique(x)))
ldf[,which(uniqlen > 1)] <- NA
ldf <- ldf[1,]
}
ldf
}))
df5 <- df5[, ! 'id' == names(df5)]
df5
## V1 V2 V3 x
## 1 0 1 0 0.4828984
## 2 1 1 NA 0.5530440
## 3 0 0 0 0.1028646
## 4 1 0 0 0.5081943
## 5 0 1 1 0.6127710
## 7 0 0 1 0.4571073
## 8 1 0 1 0.3219217
(Slightly cleaner-looking code can be used if you replace
do.call(rbind, by( with ddply( using the plyr package. The
internal function and its results are the same. ddply in this case
is a little slower, but that could likely be improved with a better
internal function.)
First, let me define a helper function to create all possible combinations of columns:
allcomb <- function(v, addnone = TRUE) {
  x <- do.call(c, lapply(length(v):1, function(n) combn(v, n, simplify = FALSE)))
  if (addnone) x <- c(x, 0)
  x
}
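For the three group columns this returns the seven non-empty subsets, largest first, plus 0 (used below for the grand-mean row):
allcomb(c("V1", "V2", "V3"))
# [[1]]
# [1] "V1" "V2" "V3"
#
# [[2]]
# [1] "V1" "V2"
# ...
# [[8]]
# [1] 0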
Now we can use this to aggregate over the different subsets
v<-names(df)[1:3]
vv<-allcomb(v)
dd<-lapply(vv, function(cols) aggregate(df$x, df[, cols, drop=F], mean))
This actually returns a list of data.frames for all the different combinations; to merge them all together, we can use rbind.fill from plyr:
library(plyr)
dd<-do.call(rbind.fill, dd)
This actually leaves the "any" values as NA rather than "*". If you want to turn those into asterisks (and consequently convert your group columns to strings rather than numeric values) you can do
dd[1:3]<-lapply(dd[1:3], function(x) {x[is.na(x)]<-"*";x})
which finally gives
V1 V2 V3 x
1 0 0 0 0.1028646
2 1 0 0 0.5081943
3 0 1 0 0.4828984
4 1 1 0 0.5197925
5 0 0 1 0.4571073
6 1 0 1 0.3219217
7 0 1 1 0.6127710
8 1 1 1 0.6029213
9 0 0 * 0.3862588
10 1 0 * 0.4235250
11 0 1 * 0.5153666
12 1 1 * 0.5530440
13 0 * 0 0.3878900
14 1 * 0 0.5120604
15 0 * 1 0.4882400
16 1 * 1 0.4022073
17 * 0 0 0.4502901
18 * 1 0 0.5013455
19 * 0 1 0.3820042
20 * 1 1 0.6062045
21 0 * * 0.4436400
22 1 * * 0.4639997
23 * 0 * 0.4118793
24 * 1 * 0.5362985
25 * * 0 0.4738541
26 * * 1 0.4380543
27 * * * 0.4566702
