I have a dataframe like this:
ID1 ID2 Position Grade Day
234 756 2 87 27
245 486 4 66 26
321 275 1 54 20
768 656 6 51 7
421 181 1 90 14
237 952 8 68 23
237 553 4 32 30
And I have another dataframe like this:
ID1 ID2 Day Count
234 756 2 3
245 486 2 1
209 706 2 1
124 554 2 2
237 553 2 4
I need to add the Counts to the first dataframe where ID1, ID2 and Day match. However, if there is no match (no Count in the second dataframe for a given combination of ID1, ID2 and Day in the first dataframe), a zero should go in that place. So the final dataframe would be something like:
ID1 ID2 Position Grade Day Count
234 756 2 87 27 3
245 486 4 66 26 1
321 275 1 54 20 0
768 656 6 51 7 0
421 181 1 90 14 0
237 952 8 68 23 0
237 553 4 32 30 4
This can be done with merge():
> # First, left-join df2 onto df1 by the shared IDs (all.x = TRUE keeps unmatched rows of df1)
> df3 <- merge(df1, df2, by = c("ID1", "ID2"), all.x = TRUE)
> # Drop df2's Day column (Day.y, column 6) and replace the NA Counts with 0
> transform(df3[, -6], Count = ifelse(is.na(Count), 0, Count))
ID1 ID2 Position Grade Day.x Count
1 234 756 2 87 27 3
2 237 553 4 32 30 4
3 237 952 8 68 23 0
4 245 486 4 66 26 1
5 321 275 1 54 20 0
6 421 181 1 90 14 0
7 768 656 6 51 7 0
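If you prefer dplyr, a left_join plus replace_na sketch gives the same result. Note it joins on ID1 and ID2 only, as in the merge above, since the Day values differ between the two example frames; add "Day" to by = if Day must really match too.

```r
library(dplyr)
library(tidyr)

df1 <- data.frame(ID1 = c(234, 245, 321, 768, 421, 237, 237),
                  ID2 = c(756, 486, 275, 656, 181, 952, 553),
                  Position = c(2, 4, 1, 6, 1, 8, 4),
                  Grade = c(87, 66, 54, 51, 90, 68, 32),
                  Day = c(27, 26, 20, 7, 14, 23, 30))
df2 <- data.frame(ID1 = c(234, 245, 209, 124, 237),
                  ID2 = c(756, 486, 706, 554, 553),
                  Day = c(2, 2, 2, 2, 2),
                  Count = c(3, 1, 1, 2, 4))

df1 %>%
  left_join(select(df2, ID1, ID2, Count), by = c("ID1", "ID2")) %>%  # keep every row of df1
  mutate(Count = replace_na(Count, 0))                               # unmatched rows get 0
```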
df <- data.frame(items=sample(LETTERS,replace= T),quantity=sample(1:100,26,replace=FALSE),price=sample(100:1000,26,replace=FALSE))
I want to assign rows to groups so that each group's summed quantity is about 500 (ballpark).
When the running count gets close to 500, the rows up to that point should go into the same group, like below.
Any help would be appreciated.
Updated
Because the condition needed to change, I reset the threshold to 250.
I summarised to find the max total value for each group, and then:
How can I move group 6, whose total is < 200, into group 5?
I thought about using ifelse but couldn't get it to work.
set.seed(123)
df <- data.frame(items = sample(LETTERS, replace = TRUE),
                 quantity = sample(1:100, 26, replace = FALSE),
                 price = sample(100:1000, 26, replace = FALSE))
df$group <- cumsum(c(1, ifelse(diff(cumsum(df$quantity) %% 250) < 0, 1, 0)))
df$total <- ave(df$quantity, df$group, FUN = cumsum)
library(dplyr)
df %>% group_by(group) %>% summarise(max = max(total, na.rm = TRUE))
# A tibble: 6 × 2
group max
<dbl> <int>
1 1 238
2 2 254
3 3 256
4 4 246
5 5 237
6 6 101
I want to get something like:
> df
items quantity price group total
1 O 36 393 1 36
2 S 78 376 1 114
3 N 81 562 1 195
4 C 43 140 1 238
5 J 76 530 2 76
6 R 15 189 2 91
7 V 32 415 2 123
8 K 7 322 2 130
9 E 9 627 2 139
10 T 41 215 2 180
11 N 74 705 2 254
12 V 23 873 3 23
13 Y 27 846 3 50
14 Z 60 555 3 110
15 E 53 697 3 163
16 S 93 953 3 256
17 Y 86 138 4 86
18 Y 88 258 4 174
19 I 38 851 4 212
20 C 34 308 4 246
21 H 69 473 5 69
22 Z 72 917 5 141
23 G 96 133 5 237
24 J 63 615 5 300
25 I 13 112 5 376
26 S 25 168 5 477
Thank you for any help.
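One way to handle the update's request is to check the last group's total after the 250-threshold grouping and, if it falls under 200, reassign that group to the previous one. A sketch, assuming only the final group can fall short of the threshold:

```r
set.seed(123)
df <- data.frame(items = sample(LETTERS, replace = TRUE),
                 quantity = sample(1:100, 26, replace = FALSE),
                 price = sample(100:1000, 26, replace = FALSE))
df$group <- cumsum(c(1, ifelse(diff(cumsum(df$quantity) %% 250) < 0, 1, 0)))

# If the last group's total quantity is below 200, fold it into the previous group
totals <- tapply(df$quantity, df$group, sum)
last_grp <- max(df$group)
if (totals[as.character(last_grp)] < 200) {
  df$group[df$group == last_grp] <- last_grp - 1
}

df$total <- ave(df$quantity, df$group, FUN = cumsum)  # running totals restart per group
```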
Base R
set.seed(123)
df <- data.frame(items = sample(LETTERS, replace = TRUE),
                 quantity = sample(1:100, 26, replace = FALSE),
                 price = sample(100:1000, 26, replace = FALSE))
df$group <- cumsum(c(1, ifelse(diff(cumsum(df$quantity) %% 500) < 0, 1, 0)))
df$total <- ave(df$quantity, df$group, FUN = cumsum)
items quantity price group total
1 O 36 393 1 36
2 S 78 376 1 114
3 N 81 562 1 195
4 C 43 140 1 238
5 J 76 530 1 314
6 R 15 189 1 329
7 V 32 415 1 361
8 K 7 322 1 368
9 E 9 627 1 377
10 T 41 215 1 418
11 N 74 705 1 492
12 V 23 873 2 23
13 Y 27 846 2 50
14 Z 60 555 2 110
15 E 53 697 2 163
16 S 93 953 2 256
17 Y 86 138 2 342
18 Y 88 258 2 430
19 I 38 851 2 468
20 C 34 308 2 502
21 H 69 473 3 69
22 Z 72 917 3 141
23 G 96 133 3 237
24 J 63 615 3 300
25 I 13 112 3 313
26 S 25 168 3 338
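The modulo trick works because the running total wraps around each time it crosses a multiple of the threshold, so diff(...) < 0 flags exactly the rows where a crossing happens. A tiny sketch:

```r
q <- c(200, 200, 200)
cumsum(q)                   # 200 400 600: the running total
cumsum(q) %% 500            # 200 400 100: wraps after crossing 500
diff(cumsum(q) %% 500) < 0  # FALSE TRUE: TRUE marks the row whose cumulative sum crossed 500
cumsum(c(1, ifelse(diff(cumsum(q) %% 500) < 0, 1, 0)))  # group ids: 1 1 2
```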
You could use Reduce(..., accumulate = TRUE) to find where the first cumulative quantity >= 500.
set.seed(123)
df <- data.frame(items = sample(LETTERS, replace = TRUE),
                 quantity = sample(1:100, 26, replace = FALSE),
                 price = sample(100:1000, 26, replace = FALSE))
library(dplyr)
df %>%
  group_by(group = lag(cumsum(Reduce(\(x, y) {
    z <- x + y
    if (z < 500) z else 0
  }, quantity, accumulate = TRUE) == 0) + 1, default = 1)) %>%
  mutate(total = sum(quantity)) %>%
  ungroup()
# A tibble: 26 × 5
items quantity price group total
<chr> <int> <int> <dbl> <int>
1 O 36 393 1 515
2 S 78 376 1 515
3 N 81 562 1 515
4 C 43 140 1 515
5 J 76 530 1 515
6 R 15 189 1 515
7 V 32 415 1 515
8 K 7 322 1 515
9 E 9 627 1 515
10 T 41 215 1 515
11 N 74 705 1 515
12 V 23 873 1 515
13 Y 27 846 2 548
14 Z 60 555 2 548
15 E 53 697 2 548
16 S 93 953 2 548
17 Y 86 138 2 548
18 Y 88 258 2 548
19 I 38 851 2 548
20 C 34 308 2 548
21 H 69 473 2 548
22 Z 72 917 3 269
23 G 96 133 3 269
24 J 63 615 3 269
25 I 13 112 3 269
26 S 25 168 3 269
Here is a base R solution. The groups break after the cumulative sum passes a threshold. The output of aggregate shows that all group totals are above thres except for the last one.
set.seed(2022)
df <- data.frame(items = sample(LETTERS, replace = TRUE),
                 quantity = sample(1:100, 26, replace = FALSE),
                 price = sample(100:1000, 26, replace = FALSE))
f <- function(x, thres) {
  grp <- integer(length(x))
  run <- 0
  current_grp <- 0L
  for (i in seq_along(x)) {
    run <- run + x[i]
    grp[i] <- current_grp
    if (run > thres) {
      current_grp <- current_grp + 1L
      run <- 0
    }
  }
  grp
}
thres <- 500
group <- f(df$quantity, thres)
aggregate(quantity ~ group, df, sum)
#> group quantity
#> 1 0 552
#> 2 1 513
#> 3 2 214
ave(df$quantity, group, FUN = cumsum)
#> [1] 70 133 155 224 235 327 347 409 481 484 552 29 95 129 224 263 294 377 433
#> [20] 434 453 513 50 91 182 214
Created on 2022-09-06 by the reprex package (v2.0.1)
Edit
Assigning the groups and total quantities to the data frame can be done as follows.
df$group <- f(df$quantity, thres)
df$total_quantity <- ave(df$quantity, df$group, FUN = cumsum)
head(df)
#> items quantity price group total_quantity
#> 1 D 70 731 0 70
#> 2 S 63 516 0 133
#> 3 N 22 710 0 155
#> 4 W 69 829 0 224
#> 5 K 11 887 0 235
#> 6 D 92 317 0 327
Created on 2022-09-06 by the reprex package (v2.0.1)
Edit 2
To assign only the total quantity per group use sum instead of cumsum.
df$total_quantity <- ave(df$quantity, df$group, FUN = sum)
I'm dealing with the following dataset
animal protein herd sire dam
6 416 189.29 2 15 236
7 417 183.27 2 6 295
9 419 193.24 3 11 268
10 420 198.84 2 12 295
11 421 205.25 3 3 251
12 422 204.15 2 2 281
13 423 200.20 2 3 248
14 424 197.22 2 11 222
15 425 201.14 1 10 262
17 427 196.20 1 11 290
18 428 208.13 3 9 294
19 429 213.01 3 14 254
21 431 203.38 2 4 273
22 432 190.56 2 8 248
25 435 196.59 3 9 226
26 436 193.31 3 10 249
27 437 207.89 3 7 272
29 439 202.98 2 10 260
30 440 177.28 2 4 291
31 441 182.04 1 6 282
32 442 217.50 2 3 265
33 443 190.43 2 11 248
35 445 197.24 2 4 256
37 447 197.16 3 5 240
42 452 183.07 3 5 293
43 453 197.99 2 6 293
44 454 208.27 2 6 254
45 455 187.61 3 12 271
46 456 173.18 2 6 280
47 457 187.89 2 6 235
48 458 191.96 1 7 286
49 459 196.39 1 4 275
50 460 178.51 2 13 262
52 462 204.17 1 6 253
53 463 203.77 2 11 273
54 464 206.25 1 13 249
55 465 211.63 2 13 222
56 466 211.34 1 6 228
57 467 194.34 2 1 217
58 468 201.53 2 12 247
59 469 198.01 2 3 251
60 470 188.94 2 7 290
61 471 190.49 3 2 220
62 472 197.34 2 3 224
63 473 194.04 1 15 229
64 474 202.74 2 1 287
67 477 189.98 1 6 300
69 479 206.37 3 2 293
70 480 183.81 2 10 274
72 482 190.70 2 12 265
74 484 194.25 3 2 262
75 485 191.15 3 10 297
76 486 193.23 3 15 255
77 487 193.29 2 4 266
78 488 182.20 1 15 260
81 491 195.89 2 12 294
82 492 200.77 1 8 278
83 493 179.12 2 7 281
85 495 172.14 3 13 252
86 496 183.82 1 4 264
88 498 195.32 1 6 249
89 499 197.19 1 13 274
90 500 178.07 1 8 293
92 502 209.65 2 7 241
95 505 199.66 3 5 220
96 506 190.96 2 11 259
98 508 206.58 3 3 230
100 510 196.60 2 5 231
103 513 193.25 2 15 280
104 514 181.34 2 3 227
I'm interested in the animal indices and the dam indices corresponding to them. Using the table function I was able to check that some dams are matched to several different animals. In fact I got the following output:
217 220 222 224 226 227 228 229 230 231 235 236 240 241 247 248 249 251 252 253 254 255 256 259 260 262
1 2 2 1 1 1 1 1 1 1 1 1 1 1 1 3 3 2 1 1 2 1 1 1 2 3
264 265 266 268 271 272 273 274 275 278 280 281 282 286 287 290 291 293 294 295 297 300
1 2 1 1 1 1 2 2 1 1 2 2 1 1 1 2 1 4 2 2 1 1
Using the length function I checked that there are only 48 distinct dams in this dataset.
I would like to 'reindex' them with the integers 1, ..., 48 instead of the values given in my set. Is there a method for doing this?
You can use match and unique.
df$index <- match(df$dam, unique(df$dam))
Or convert to a factor and then to integer:
df$index <- as.integer(factor(df$dam))
Another option is group_indices from dplyr.
df$index <- dplyr::group_indices(df, dam)
We can use .GRP in data.table:
library(data.table)
setDT(df)[, index := .GRP, dam]
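Note that these options do not all number the dams the same way: match(df$dam, unique(df$dam)) numbers groups in order of first appearance, while factor() numbers them in sorted order of the values. Either gives a valid 1..48 reindexing for this question. A small sketch with made-up dam values:

```r
dam <- c(236, 295, 268, 295, 251)

match(dam, unique(dam))   # 1 2 3 2 4: numbered in order of first appearance
as.integer(factor(dam))   # 1 4 3 4 2: numbered in sorted order of the values
```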
Let's say you have two data frames, both of which contain some, but not all of the same records. Where they are the same records, the id variable in both data frames matches. There is a particular variable in each data frame that needs to be checked for consistency across the data frames, and any discrepancies need to be printed:
d1 <- ## first dataframe
d2 <- ## second dataframe
colnames(d1) #column headings for dataframe 1
[1] "id" "variable1" "variable2" "variable3"
colnames(d2) #column headings for dataframe 2 are identical
[1] "id" "variable1" "variable2" "variable3"
length(d1$id) #there are 200 records in dataframe 1
[1] 200
length(d2$id) #there are not the same number in dataframe 2
[1] 150
##Some function that takes d1$id, matches with d2$id, then compares the values of the matched, returning any discrepancies
I constructed an elaborate loop for this, but feel as though this is not the right way of going about it. Surely there is some better way than this for-if-for-if-if statement.
for (i in seq(d1$id)){ ## Sets up counter for loop
  if (d1$id[i] %in% d2$id){ ## Searches for a common id, saves it and its variable
    index <- d1$id[i];
    variable_d1 <- d1$variable1[i];
    for (p in seq(d2$id)){
      if (d2$id[p] == index){ ## saves the corresponding value in the second dataframe
        variable_d2 <- d2$variable1[p];
        if (variable_d2 != variable_d1) { ## prints if they are not equal
          print(index);
        }
      }
    }
  }
}
Here's a solution, using random input data with a 50% chance that a given cell will be discrepant between d1 and d2:
set.seed(1);
d1 <- data.frame(id=sample(300,200),variable1=sample(2,200,replace=T),variable2=sample(2,200,replace=T),variable3=sample(2,200,replace=T));
d2 <- data.frame(id=sample(300,150),variable1=sample(2,150,replace=T),variable2=sample(2,150,replace=T),variable3=sample(2,150,replace=T));
head(d1);
## id variable1 variable2 variable3
## 1 80 1 2 2
## 2 112 1 1 2
## 3 171 2 2 1
## 4 270 1 2 2
## 5 60 1 2 2
## 6 266 2 2 2
head(d2);
## id variable1 variable2 variable3
## 1 258 1 2 1
## 2 11 1 1 1
## 3 290 2 1 2
## 4 222 2 1 2
## 5 81 2 1 1
## 6 200 1 2 1
com <- intersect(d1$id,d2$id); ## derive common id values
d1com <- match(com,d1$id); ## find indexes of d1 that correspond to common id values, in order of com
d2com <- match(com,d2$id); ## find indexes of d2 that correspond to common id values, in order of com
v1diff <- com[d1$variable1[d1com]!=d2$variable1[d2com]]; ## get ids of variable1 discrepancies
v1diff;
## [1] 60 278 18 219 290 35 107 4 237 131 50 210 29 168 6 174 61 127 99 220 247 244 157 51 84 122 196 125 265 115 186 139 3 132 223 211 268 102 155 207 238 41 199 200 231 236 172 275 250 176 248 255 222 59 100 33 124
v2diff <- com[d1$variable2[d1com]!=d2$variable2[d2com]]; ## get ids of variable2 discrepancies
v2diff;
## [1] 112 60 18 198 219 290 131 50 210 29 168 258 215 291 127 161 99 220 110 293 87 164 84 122 196 125 186 139 81 132 82 89 223 268 98 14 155 241 207 231 172 62 275 176 248 255 59 298 100 12 156
v3diff <- com[d1$variable3[d1com]!=d2$variable3[d2com]]; ## get ids of variable3 discrepancies
v3diff;
## [1] 278 219 290 35 4 237 131 168 202 174 215 220 247 244 261 293 164 13 294 84 196 125 265 115 186 81 3 89 223 211 268 98 14 155 241 207 38 191 200 276 250 45 269 255 298 100 12 156 124
Here's a proof that all variable1 values for ids in v1diff are really discrepant between d1 and d2:
d1$variable1[match(v1diff,d1$id)]; d2$variable1[match(v1diff,d2$id)];
## [1] 1 2 2 1 1 2 2 1 1 1 2 2 2 2 1 2 2 1 2 2 1 1 2 1 1 2 1 1 1 1 1 1 1 1 1 2 2 2 1 2 2 1 1 2 1 1 2 1 2 1 2 2 1 2 2 1 1
## [1] 2 1 1 2 2 1 1 2 2 2 1 1 1 1 2 1 1 2 1 1 2 2 1 2 2 1 2 2 2 2 2 2 2 2 2 1 1 1 2 1 1 2 2 1 2 2 1 2 1 2 1 1 2 1 1 2 2
Here's a proof that all variable1 values for ids not in v1diff are not discrepant between d1 and d2:
with(subset(d1,id%in%com&!id%in%v1diff),variable1[order(id)]); with(subset(d2,id%in%com&!id%in%v1diff),variable1[order(id)]);
## [1] 1 1 2 1 1 1 2 2 1 2 2 1 2 2 1 1 2 1 2 1 2 1 1 1 1 1 1 2 2 2 2 1 1 1 2 2 2 1 1 1 1
## [1] 1 1 2 1 1 1 2 2 1 2 2 1 2 2 1 1 2 1 2 1 2 1 1 1 1 1 1 2 2 2 2 1 1 1 2 2 2 1 1 1 1
Here, I wrapped this solution in a function which returns the vectors of discrepant id values in a list, with each component named for the variable it represents:
compare <- function(d1, d2, cols=setdiff(intersect(colnames(d1), colnames(d2)), 'id')) {
  com <- intersect(d1$id, d2$id);
  d1com <- match(com, d1$id);
  d2com <- match(com, d2$id);
  setNames(lapply(cols, function(col) com[d1[[col]][d1com]!=d2[[col]][d2com]]), cols);
};
compare(d1,d2);
## $variable1
## [1] 60 278 18 219 290 35 107 4 237 131 50 210 29 168 6 174 61 127 99 220 247 244 157 51 84 122 196 125 265 115 186 139 3 132 223 211 268 102 155 207 238 41 199 200 231 236 172 275 250 176 248 255 222 59 100 33 124
##
## $variable2
## [1] 112 60 18 198 219 290 131 50 210 29 168 258 215 291 127 161 99 220 110 293 87 164 84 122 196 125 186 139 81 132 82 89 223 268 98 14 155 241 207 231 172 62 275 176 248 255 59 298 100 12 156
##
## $variable3
## [1] 278 219 290 35 4 237 131 168 202 174 215 220 247 244 261 293 164 13 294 84 196 125 265 115 186 81 3 89 223 211 268 98 14 155 241 207 38 191 200 276 250 45 269 255 298 100 12 156 124
Here is an approach using merge.
First, merge the dataframes, keeping all columns.
x <- merge(d1, d2, by="id")
Then, find all rows which do not match:
x[x$variable1.x != x$variable1.y | x$variable2.x != x$variable2.y |
x$variable3.x != x$variable3.y, ]
> head(m)
X id1 q_following topic_followed topic_answered nfollowers nfollowing
1 1 1 80 80 100 180 180
2 2 1 76 76 95 171 171
3 3 1 72 72 90 162 162
4 4 1 68 68 85 153 153
5 5 1 64 64 80 144 144
6 6 1 60 60 75 135 135
> head(d)
X id1 q_following topic_followed topic_answered nfollowers nfollowing
1 1 1 63 735 665 949 146
2 2 1 89 737 666 587 185
3 3 1 121 742 670 428 264
4 4 1 277 750 706 622 265
5 5 1 339 765 734 108 294
6 6 1 363 767 766 291 427
matcher <- function(x,y){ return(na.omit(m[which(d[,y]==x),y])) }
max_matcher <- function(x) { return(sum(matcher(x,3:13))) }
result <- foreach(1:1000, function(x) {
if(max(max_matcher(1:1000)) == max_matcher(x)) return(x)
})
I want to compute result across each group, grouped by id1 of dataframe m.
m %>% group_by(id1) %>% summarise(result) #doesn't work
by(m, m[,"id1"], result) #doesn't work
How should I proceed?
I'm getting this error, but the fixes in related posts don't seem to apply. I'm using ungroup, though it's no longer needed (see "Can I switch the grouping variable in a single dplyr statement?", but also "Format column within dplyr chain"). Also, I have no quotes in my group_by call, and I'm not applying any functions that act on the grouped-by columns (see "R dplyr summarize_each --> 'Error: cannot modify grouping variable'"), but I'm still getting the error:
> games2 = baseball %>%
+ ungroup %>%
+ group_by(id, year) %>%
+ summarize(total=g+ab, a = ab+1, id = id)%>%
+ arrange(desc(total)) %>%
+ head(10)
Error: cannot modify grouping variable
This is the baseball set that comes with plyr:
id year stint team lg g ab r h X2b X3b hr rbi sb cs bb so ibb hbp sh sf gidp
4 ansonca01 1871 1 RC1 25 120 29 39 11 3 0 16 6 2 2 1 NA NA NA NA NA
44 forceda01 1871 1 WS3 32 162 45 45 9 4 0 29 8 0 4 0 NA NA NA NA NA
68 mathebo01 1871 1 FW1 19 89 15 24 3 1 0 10 2 1 2 0 NA NA NA NA NA
99 startjo01 1871 1 NY2 33 161 35 58 5 1 1 34 4 2 3 0 NA NA NA NA NA
102 suttoez01 1871 1 CL1 29 128 35 45 3 7 3 23 3 1 1 0 NA NA NA NA NA
106 whitede01 1871 1 CL1 29 146 40 47 6 5 1 21 2 2 4 1 NA NA NA NA NA
I loaded plyr before dplyr. Other bugs to check for? Thanks for any corrections/suggestions.
It's not clear what you are doing. I think the following is what you are looking for:
games2 = baseball %>%
group_by(id, year) %>%
mutate(total=g+ab, a = ab+1)%>%
arrange(desc(total)) %>%
head(10)
> games2
Source: local data frame [10 x 24]
Groups: id, year
id year stint team lg g ab r h X2b X3b hr rbi sb cs bb so ibb hbp sh sf gidp total a
1 aaronha01 1954 1 ML1 NL 122 468 58 131 27 6 13 69 2 2 28 39 NA 3 6 4 13 590 469
2 aaronha01 1955 1 ML1 NL 153 602 105 189 37 9 27 106 3 1 49 61 5 3 7 4 20 755 603
3 aaronha01 1956 1 ML1 NL 153 609 106 200 34 14 26 92 2 4 37 54 6 2 5 7 21 762 610
4 aaronha01 1957 1 ML1 NL 151 615 118 198 27 6 44 132 1 1 57 58 15 0 0 3 13 766 616
5 aaronha01 1958 1 ML1 NL 153 601 109 196 34 4 30 95 4 1 59 49 16 1 0 3 21 754 602
6 aaronha01 1959 1 ML1 NL 154 629 116 223 46 7 39 123 8 0 51 54 17 4 0 9 19 783 630
7 aaronha01 1960 1 ML1 NL 153 590 102 172 20 11 40 126 16 7 60 63 13 2 0 12 8 743 591
8 aaronha01 1961 1 ML1 NL 155 603 115 197 39 10 34 120 21 9 56 64 20 2 1 9 16 758 604
9 aaronha01 1962 1 ML1 NL 156 592 127 191 28 6 45 128 15 7 66 73 14 3 0 6 14 748 593
10 aaronha01 1963 1 ML1 NL 161 631 121 201 29 4 44 130 31 5 78 94 18 0 0 5 11 792 632
The problem is that you are trying to edit id in the summarize call, but you have grouped on id.
From your example, it looks like you want mutate anyway. You would use summarize if you were looking to apply a function that would return a single value like sum or mean.
games2 = baseball %>%
dplyr::group_by(id, year) %>%
dplyr::mutate(
total = g + ab,
a = ab + 1
) %>%
dplyr::select(id, year, total, a) %>%
dplyr::arrange(desc(total)) %>%
head(10)
Source: local data frame [10 x 4]
Groups: id, year
id year total a
1 aaronha01 1954 590 469
2 aaronha01 1955 755 603
3 aaronha01 1956 762 610
4 aaronha01 1957 766 616
5 aaronha01 1958 754 602
6 aaronha01 1959 783 630
7 aaronha01 1960 743 591
8 aaronha01 1961 758 604
9 aaronha01 1962 748 593
10 aaronha01 1963 792 632
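For contrast, summarise itself works fine when each expression collapses the group to a single value and no grouping column is reassigned. A sketch on a toy frame shaped like baseball (toy and its values are made up here):

```r
library(dplyr)

toy <- data.frame(id = c("aaronha01", "aaronha01", "ansonca01"),
                  year = c(1954, 1955, 1871),
                  g = c(122, 153, 25),
                  ab = c(468, 602, 120))

toy %>%
  group_by(id, year) %>%
  summarise(total = sum(g + ab), .groups = "drop")  # one row per id/year group
```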