I would like to order a data frame based on an alphanumeric variable. Here is how my dataset looks:
sample.data <- data.frame(Grade=c(4,4,4,4,3,3,3,3,3,3,3,3),
ItemID = c(15,15,15,15,17,17,17,17,16,16,16,16),
common.names = c("15_AS_SA1_Correct","15_AS_SA10_Correct","15_AS_SA2_Correct","15_AS_SA3_Correct",
"17_AS_2_B2","17_AS_2_B1","17_AS_5_C1","17_AS_4_D1",
"16_AS_SA1_Negative","16_AS_SA11_Prediction","16_AS_SA12_UnitMeaning","16_AS_SA3_Complete"))
> sample.data
Grade ItemID common.names
1 4 15 15_AS_SA1_Correct
2 4 15 15_AS_SA10_Correct
3 4 15 15_AS_SA2_Correct
4 4 15 15_AS_SA3_Correct
5 3 17 17_AS_2_B2
6 3 17 17_AS_2_B1
7 3 17 17_AS_5_C1
8 3 17 17_AS_4_D1
9 3 16 16_AS_SA1_Negative
10 3 16 16_AS_SA11_Prediction
11 3 16 16_AS_SA12_UnitMeaning
12 3 16 16_AS_SA3_Complete
I need to order by Grade and ItemID, then by the common.names variable, which contains alphanumeric values.
I used this:
sample.data.ordered <- sample.data %>%
arrange(Grade, ItemID,common.names)
but it did not work for the whole set: arrange sorts common.names as plain character strings, so for example "SA10" comes before "SA2".
My desired output is:
> sample.data.ordered
Grade ItemID common.names
1 3 16 16_AS_SA1_Negative
2 3 16 16_AS_SA3_Complete
3 3 16 16_AS_SA11_Prediction
4 3 16 16_AS_SA12_UnitMeaning
5 3 17 17_AS_2_B1
6 3 17 17_AS_2_B2
7 3 17 17_AS_4_D1
8 3 17 17_AS_5_C1
9 4 15 15_AS_SA1_Correct
10 4 15 15_AS_SA2_Correct
11 4 15 15_AS_SA3_Correct
12 4 15 15_AS_SA10_Correct
Any thoughts?
Thanks!
A base R solution uses order, with a more involved treatment of common.names: gsub with a regular expression and multiple backreferences extracts the numbers from each string, and the column can then be ordered by them:
sample.data[order(sample.data$Grade,
sample.data$ItemID,
as.numeric(gsub(".*(SA|AS_)(\\d+)_(\\w)?(\\d)?.*", "\\2\\4", sample.data$common.names))),]
Grade ItemID common.names
9 3 16 16_AS_SA1_Negative
12 3 16 16_AS_SA3_Complete
10 3 16 16_AS_SA11_Prediction
11 3 16 16_AS_SA12_UnitMeaning
6 3 17 17_AS_2_B1
5 3 17 17_AS_2_B2
8 3 17 17_AS_4_D1
7 3 17 17_AS_5_C1
1 4 15 15_AS_SA1_Correct
3 4 15 15_AS_SA2_Correct
4 4 15 15_AS_SA3_Correct
2 4 15 15_AS_SA10_Correct
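If you'd rather keep dplyr's arrange, the same numeric key can be computed inline (a sketch reusing the regex above):
library(dplyr)

sample.data %>%
  arrange(Grade, ItemID,
          as.numeric(gsub(".*(SA|AS_)(\\d+)_(\\w)?(\\d)?.*", "\\2\\4",
                          common.names)))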
I want to apply a conditional statement to consecutive values in a sliding manner.
For example, I have a dataset like this:
data <- data.frame(ID = rep.int(c("A","B"), times = c(24, 12)),
                   time = c(1:24,1:12),
                   visit = as.integer(runif(36, min = 0, max = 20)))
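# Note: runif() is random; without set.seed() beforehand, the table
# below will not reproduce exactly.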
and I got the table below:
> data
ID time visit
1 A 1 7
2 A 2 0
3 A 3 6
4 A 4 6
5 A 5 3
6 A 6 8
7 A 7 4
8 A 8 10
9 A 9 18
10 A 10 6
11 A 11 1
12 A 12 13
13 A 13 7
14 A 14 1
15 A 15 6
16 A 16 1
17 A 17 11
18 A 18 8
19 A 19 16
20 A 20 14
21 A 21 15
22 A 22 19
23 A 23 5
24 A 24 13
25 B 1 6
26 B 2 6
27 B 3 16
28 B 4 4
29 B 5 19
30 B 6 5
31 B 7 17
32 B 8 6
33 B 9 10
34 B 10 1
35 B 11 13
36 B 12 15
I want to flag each ID based on runs of consecutive "visit" values.
If "visit" stays below 10 for 6 consecutive time points, I'd flag the ID as "empty", and "busy" otherwise.
In the data above, "A" is continuously below 10 from rows 1 to 6, so it is "empty". "B", on the other hand, never has 6 consecutive values below 10, so it is "busy".
If the condition is not fulfilled in one segment of 6 values, I want to apply it to the next segment of 6 values.
I'd like to achieve this in R. Any advice will be appreciated.
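Under one reading of the requirement (flag "empty" when any run of at least 6 consecutive visits stays below 10), a per-ID check with rle() would do; a minimal sketch, assuming that interpretation:
library(dplyr)

# "empty" if any run of 6+ consecutive visits is below 10, else "busy"
flag_id <- function(visit) {
  runs <- rle(visit < 10)
  if (any(runs$values & runs$lengths >= 6)) "empty" else "busy"
}

data %>%
  group_by(ID) %>%
  mutate(flag = flag_id(visit)) %>%
  ungroup()
If you need strict non-overlapping segments of 6 instead, you could group by (time - 1) %/% 6 within each ID and test each block the same way.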
The following randomly splits a data frame into halves.
df <- read.csv("https://raw.githubusercontent.com/HirokiYamamoto2531/data/master/data.csv")
head(df, 3)
# dv iv subject item
#1 562 -0.5 1 7
#2 790 0.5 1 21
#3 NA -0.5 1 19
r <- seq_len(nrow(df))
first <- sample(r, 240)
second <- r[!r %in% first]
df_1 <- df[first, ]
df_2 <- df[second, ]
However, in this way, each data frame (df_1 and df_2) is not balanced on subject and item. For example:
table(df_1$subject)
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
# 7 8 3 5 5 3 8 1 5 7 7 6 7 7 9 8 8 9 6 7 8 5 4 4 5 2 7 6 9
# 30 31 32 33 34 35 36 37 38 39 40
# 7 5 7 7 7 3 5 7 5 3 8
table(df_1$item)
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
# 12 11 12 12 9 11 11 8 11 12 10 8 14 7 14 10 8 7 9 9 7 11 9 8
# There are 40 subjects and 24 items, and each subject is assigned to 12 items and each item to 20 subjects.
I would like to know how to split the data frame into halves that are balanced on subject and item (i.e., exactly 6 data points from each subject and 10 data points from each item).
You can use the createDataPartition function from the caret package to create a balanced partition of one variable.
The code below creates a balanced partition of the dataset according to the variable subject:
df <- read.csv("https://raw.githubusercontent.com/HirokiYamamoto2531/data/master/data.csv")
partition <- caret::createDataPartition(df$subject, p = 0.5, list = FALSE)
first.half <- df[partition, ]
second.half <- df[-partition, ]
table(first.half$subject)
table(second.half$subject)
I'm not sure whether it's possible to balance two variables at once. You can try balancing for one variable and checking if you're happy with the partition of the second variable.
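If exact per-subject balance is required, one workaround (a sketch using dplyr's slice_sample, and assuming each subject-item pair occurs only once, as the 40 x 24 design above implies) is to sample exactly half of each subject's rows and then inspect the item margin:
library(dplyr)

set.seed(1)                      # reproducible draw
df_1 <- df %>%
  group_by(subject) %>%
  slice_sample(n = 6) %>%        # exactly half of each subject's 12 rows
  ungroup()
df_2 <- anti_join(df, df_1, by = c("subject", "item"))

table(df_1$subject)              # exactly 6 per subject
table(df_1$item)                 # item counts still vary; re-draw until acceptable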
My data looks like this:
x y
1 1
2 2
3 2
4 4
5 5
6 6
7 6
8 8
9 9
10 9
11 11
12 12
13 13
14 13
15 14
16 15
17 14
18 16
19 17
20 18
y is a grouping variable, and I would like to see how well this grouping went. To that end, I want to extract a sample of n pairs of cases that are grouped together by variable y, and a sample of n pairs of cases that are not grouped together by variable y, so that I can count the false positives and false negatives (cases falsely grouped or falsely not grouped). How do I extract a sample of grouped pairs and a sample of not-grouped pairs?
I would like the samples to look like this (for n=6) :
Grouped sample:
x y
2 2
3 2
9 9
10 9
15 14
17 14
Not-grouped sample:
x y
1 1
2 2
6 8
6 8
11 11
19 17
How would I go about this in R?
I'm not entirely clear on what you'd like to do, partly because I feel some context is missing as to what you're trying to achieve. I also don't quite understand your expected output (for example, the not-grouped sample contains an entry 6 8 that does not exist in your original data...)
That aside, here is a possible approach.
# Maximum number of samples per group
n <- 3;
# Set fixed RNG seed for reproducibility
set.seed(2017);
# Grouped samples
df.grouped <- do.call(rbind.data.frame, lapply(split(df, df$y),
function(x) if (nrow(x) > 1) x[sample(min(n, nrow(x))), ]));
df.grouped;
# x y
#2.3 3 2
#2.2 2 2
#6.6 6 6
#6.7 7 6
#9.10 10 9
#9.9 9 9
#13.13 13 13
#13.14 14 13
#14.15 15 14
#14.17 17 14
# Ungrouped samples
df.ungrouped <- df[sample(nrow(df.grouped)), ];
df.ungrouped;
# x y
#7 7 6
#1 1 1
#9 9 9
#4 4 4
#3 3 2
#2 2 2
#5 5 5
#6 6 6
#10 10 9
#8 8 8
Explanation: Split df based on y, then draw min(n, nrow(x)) samples from every subset x containing more than one row; rbinding the results gives the grouped df.grouped. For the ungrouped sample, note that sample(nrow(df.grouped)) merely returns a permutation of 1 to nrow(df.grouped), so df.ungrouped is a shuffle of the first nrow(df.grouped) rows of df rather than a random draw from the whole data frame.
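If a random draw from all of df was the intention (an assumption on my part), the call would instead be:
# Draw nrow(df.grouped) random rows from the full data frame
df.ungrouped <- df[sample(nrow(df), nrow(df.grouped)), ];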
Sample data
df <- read.table(text =
"x y
1 1
2 2
3 2
4 4
5 5
6 6
7 6
8 8
9 9
10 9
11 11
12 12
13 13
14 13
15 14
16 15
17 14
18 16
19 17
20 18", header = T)
I have a data frame d and I'd like to add a Value_Group column that looks at the value field and returns the upper limit of the bucket the value falls into:
Value              Value_Group
0 < value <= 5     5
5 < value <= 10    10
10 < value <= 15   15
15 < value <= 20   20
You can see that Value_Group is the upper end of the bucket, i.e. for a value greater than 0 and up to 5, Value_Group = 5.
d <- data.frame(group = rep("A", 20), value = seq(1, 20, 1))
d
d$Value_Group = ??
Value_group can be added using multiple ifelse() statements but is there a better way?
The result would be:
group value Value_Group
1 A 1 5
2 A 2 5
3 A 3 5
4 A 4 5
5 A 5 5
6 A 6 10
7 A 7 10
8 A 8 10
9 A 9 10
10 A 10 10
11 A 11 15
12 A 12 15
13 A 13 15
14 A 14 15
15 A 15 15
16 A 16 20
17 A 17 20
18 A 18 20
19 A 19 20
20 A 20 20
Thank you.
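For reference, a compact alternative that matches the desired output above (note that a value of exactly 5 maps to 5) is to round each value up to the next multiple of 5; cut() with explicit breaks is an equivalent, more general option:
# Round value up to the next multiple of 5
d$Value_Group <- ceiling(d$value / 5) * 5

# Equivalent, with explicit (left-open, right-closed] buckets:
d$Value_Group <- as.numeric(as.character(
  cut(d$value, breaks = seq(0, 20, by = 5), labels = seq(5, 20, by = 5))))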
I have a data frame like this:
Date Amount Category
1 02.07.15 1 1
2 02.07.15 2 1
3 02.07.15 3 1
4 02.07.15 4 2
5 03.07.15 5 2
6 04.07.15 6 3
7 05.07.15 7 3
8 06.07.15 8 3
9 07.07.15 9 4
10 08.07.15 10 5
11 09.07.15 11 6
12 10.07.15 12 4
13 11.07.15 13 4
14 12.07.15 14 5
15 13.07.15 15 5
16 14.07.15 16 6
17 15.07.15 17 6
18 16.07.15 18 5
19 17.07.15 19 4
I would like to calculate the sum of Amount for each single day within each category. My attempts below are both insufficient.
summarise(group_by(testData, Category), sum(Amount))
Wrong output --> here the sum is calculated per category across all days
Category sum(Amount)
1 1 6
2 2 9
3 3 21
4 4 53
5 5 57
6 6 44
summarise(group_by(testData, Date), sum(Amount), categories = toString(Category))
Wrong output --> here the sum is calculated per day, but the categories are lumped together
Date sum(Amount) categories
1 02.07.15 10 1, 1, 1, 2
2 03.07.15 5 2
3 04.07.15 6 3
4 05.07.15 7 3
5 06.07.15 8 3
6 07.07.15 9 4
7 08.07.15 10 5
8 09.07.15 11 6
9 10.07.15 12 4
10 11.07.15 13 4
11 12.07.15 14 5
12 13.07.15 15 5
13 14.07.15 16 6
14 15.07.15 17 6
15 16.07.15 18 5
16 17.07.15 19 4
So far I did not succeed in combining both statements.
How can I nest both group_by statements to calculate the sum of the amount for each single day in each category?
Nesting the groups like:
summarise(group_by(group_by(testData, Date), Category), sum(Amount), dates = toString(Date))
Category sum(Amount) dates
1 1 6 02.07.15, 02.07.15, 02.07.15
2 2 9 02.07.15, 03.07.15
3 3 21 04.07.15, 05.07.15, 06.07.15
4 4 53 07.07.15, 10.07.15, 11.07.15, 17.07.15
5 5 57 08.07.15, 12.07.15, 13.07.15, 16.07.15
6 6 44 09.07.15, 14.07.15, 15.07.15
does not work as intended, because the second group_by() replaces the first grouping instead of adding to it.
I have also heard of summarise_each (from "dplyr - summarise weighted data") but could not get it to work:
summarise_each(testData, funs(Category))
Error: could not find function "Category"
You can try
testData %>%
  group_by(Date, Category) %>%
  summarise(Amount = sum(Amount))
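Grouping by both columns produces one row per Date-Category combination. For the sample data, the first rows of the result would be:
# Date     Category Amount
# 02.07.15        1      6
# 02.07.15        2      4
# 03.07.15        2      5
# 04.07.15        3      6
# (remaining rows omitted)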