I have the following data frames:
a = sample(letters[1:10])
b = sample(1:1000000, 10)
c = sample(1:100000000, 10)
d = sample(letters[1:26], 10)
e = sample(1:1000000, 10)
f = sample(1:100000000, 10)
g = sample(letters[1:26], 10)
h = sample(1:1000000, 10)
i = sample(1:100000000, 10)
data = data.frame(a,b,c)
data2 = data.frame(d,e,f)
data3 = data.frame(g,h,i)
data
Col1 Col2 Col3
1 a 626275 52114901
2 j 26543 70683919
3 c 8953 284605
4 h 822415 35245405
5 f 595095 81093354
6 i 812429 71119567
7 g 100678 87776459
8 e 54772 9709717
9 d 19375 43611618
10 b 174711 7254034
data2
Col1 Col2 Col3
1 y 12495 78654339
2 p 423945 79628767
3 k 378461 36729002
4 x 795469 98196961
5 h 240119 71903172
6 v 691621 74276314
7 d 702074 64715230
8 n 718401 21247081
9 s 580166 52888162
10 b 194630 92287079
data3
Col1 Col2 Col3
1 m 391166 98761754
2 v 321615 71765127
3 g 959452 80114937
4 w 380126 25877104
5 f 655875 69610059
6 s 267364 7113484
7 h 391116 6801473
8 i 663616 73956544
9 o 936505 94244449
10 c 514173 82174024
I also have a table with all of the contents of column Col1 (this table is called k). What I would like to do is write a function that subsets each data frame to the rows whose Col1 value matches an entry in table k.
Table K:
k
Col1
1 a
2 j
3 c
4 h
5 f
6 i
7 g
8 e
9 d
10 b
11 y
12 p
13 k
14 x
15 h
16 v
17 d
18 n
19 s
20 b
21 m
22 v
23 g
24 w
25 f
26 s
27 h
28 i
29 o
30 c
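(For reference, k can be rebuilt from the letter vectors sampled above, assuming that is how it was created:)
k = data.frame(Col1 = c(a, d, g))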
I then want the function to print only the contents of column Col2 as its output. Since I have multiple data frames, I know I have to put them in a list and then use lapply once I create the function. I have gotten this far, but my code does not work.
get_tair = function(df, col1, col2){
df[df[[col1]] %in% k$$Col1,]
print(df[[col2]])
}
Any help appreciated. Thanks.
We can place the 'data' objects in a list and use lapply
out_lst <- lapply(list(data, data2, data3),
function(dat) get_tair(dat, col1 = 'Col1', col2 = 'Col2'))
Function used:
get_tair = function(df, col1, col2){
df[df[[col1]] %in% k$Col1,]
}
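If only the Col2 values are needed, as the question asks, they can be pulled out of each element of out_lst afterwards (a small sketch, assuming the columns really are named Col1/Col2 as in the printed output):
lapply(out_lst, function(x) x[['Col2']])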
You can use merge :
get_tair = function(df, col){
unique(merge(df, k, by.x = col, by.y = 'Col1'))
}
list_data <- list(data, data2, data3)
lapply(list_data, function(x) get_tair(x, names(x)[1]))
Note that the sample data generated for the 3 data frames has different column names (a, b, c and so on) than the ones you have shown (Col1, Col2, Col3).
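If you would rather keep the column names from the question, the generated frames can be renamed first (a small sketch):
list_data <- lapply(list(data, data2, data3),
                    setNames, c("Col1", "Col2", "Col3"))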
I am attempting to write a simulation that involves randomly re-assigning items to categories with some restrictions.
Let's say I have a collection of pebbles 1 to N distributed across buckets A through J:
set.seed(100)
df1 <- data.frame(pebble = 1:100,
bucket = sample(LETTERS[1:10], 100, T),
stringsAsFactors = F)
head(df1)
#> pebble bucket
#> 1 1 D
#> 2 2 C
#> 3 3 F
#> 4 4 A
#> 5 5 E
#> 6 6 E
I want to randomly re-assign pebbles to buckets. Without restrictions I could do it like so:
random.permutation.df1 <- data.frame(pebble = df1$pebble, bucket = sample(df1$bucket))
colSums(table(random.permutation.df1))
#> A B C D E F G H I J
#> 4 7 13 14 12 11 11 10 9 9
colSums(table(df1))
#> A B C D E F G H I J
#> 4 7 13 14 12 11 11 10 9 9
Importantly this re-assigns pebbles while ensuring that each bucket retains the same number (because we are sampling without replacement).
However, I have a set of restrictions such that certain pebbles cannot be assigned to certain buckets. I encode the restrictions in df2:
df2 <- data.frame(pebble = sample(1:100, 10),
bucket = sample(LETTERS[1:10], 10, T),
stringsAsFactors = F)
df2
#> pebble bucket
#> 1 33 I
#> 2 39 I
#> 3 5 A
#> 4 36 C
#> 5 55 J
#> 6 66 A
#> 7 92 J
#> 8 95 H
#> 9 2 C
#> 10 49 I
The logic here is that pebbles 33 and 39 cannot be placed in bucket I, or pebble 5 in bucket A, etc. I would like to permute which pebbles are in which bucket subject to these restrictions.
So far, I've thought of tackling it in a loop as below, but this does not result in buckets retaining the same number of pebbles:
perms <- character(0)
cnt <- 1
for (p in df1$pebble) {
perms[cnt] <- sample(df1$bucket[!df1$bucket %in% df2$bucket[df2$pebble==p]], 1)
cnt <- cnt + 1
}
table(perms)
#> perms
#> A B C D E F G H I J
#> 6 7 12 22 15 1 14 7 7 9
I then tried sampling positions, and then removing that position from the available buckets and the available remaining positions. This is also not working, and I suspect it is because I am sampling my way into branches of the tree that do not yield solutions.
set.seed(42)
perms <- character(0)
cnt <- 1
ids <- 1:nrow(df1)
bckts <- df1$bucket
for (p in df1$pebble) {
id <- sample(ids[!bckts %in% df2$bucket[df2$pebble==p]], 1)
perms[cnt] <- bckts[id]
bckts <- bckts[-id]
ids <- ids[ids!=id]
cnt <- cnt + 1
}
table(perms)
#> perms
#> A B C D E F G J
#> 1 1 4 1 2 1 2 2
Any thoughts or advice much appreciated (and apologies for the length).
EDIT:
I forgot to clarify that I was previously solving this by simply resampling until I got a draw that didn't violate any of the conditions in df2, but I now have so many conditions that this makes my code take too long to run. I am still open to brute-forcing it if I can find a way to make it faster.
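(A violation check of that kind can be written as an inner merge against df2, which should return zero rows; a minimal sketch using the unrestricted permutation from above:)
# zero rows means no (pebble, bucket) pair hits a restriction in df2
nrow(merge(random.permutation.df1, df2, by = c("pebble", "bucket"))) == 0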
I have a solution. I managed to write it in base R, but the data.table version below is easier to understand and write:
random.permutation.df2 <- data.frame(pebble = df1$pebble, bucket = rep(NA,length(df1$pebble)))
for(bucket in unique(df1$bucket)){
N <- length( random.permutation.df2$bucket[is.na(random.permutation.df2$bucket) &
!random.permutation.df2$pebble %in% df2$pebble[df2$bucket == bucket] ] )
random.permutation.df2$bucket[is.na(random.permutation.df2$bucket) &
!random.permutation.df2$pebble %in% df2$pebble[df2$bucket == bucket] ] <-
sample(c(rep(bucket,sum(df1$bucket == bucket)),rep(NA,N-sum(df1$bucket == bucket))))
}
The idea is to sample the authorised pebbles for each bucket: those that are not forbidden for that bucket in df2 and those that are not already assigned. You then sample a vector of the right length, choosing between NAs (placeholders for the buckets still to come) and the current bucket's value, and voilà.
The same idea is easier to read with data.table:
library(data.table)
random.permutation.df2 <- setDT(random.permutation.df2)
df2 <- setDT(df2)
for( bucketi in unique(df1$bucket)){
random.permutation.df2[is.na(bucket) & !pebble %in% df2[bucket == bucketi, pebble],
bucket := sample(c(rep(bucketi, sum(df1$bucket == bucketi)), rep(NA, .N - sum(df1$bucket == bucketi))))]
}
It satisfies both conditions:
> colSums(table(df1))
A B C D E F G H I J
4 7 13 14 12 11 11 10 9 9
> colSums(table(random.permutation.df2))
A B C D E F G H I J
4 7 13 14 12 11 11 10 9 9
To verify that there isn't any contradiction with df2
> df2
pebble bucket
1: 37 D
2: 95 H
3: 90 C
4: 80 C
5: 31 D
6: 84 G
7: 76 I
8: 57 H
9: 7 E
10: 39 A
> random.permutation.df2[pebble %in% df2$pebble,.(pebble,bucket)]
pebble bucket
1: 7 D
2: 31 H
3: 37 J
4: 39 F
5: 57 B
6: 76 E
7: 80 F
8: 84 B
9: 90 H
10: 95 D
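The same check can be done programmatically: an inner merge of the permutation with df2 on both columns should return zero rows (a short sketch, assuming the objects above):
# 0 means no forbidden (pebble, bucket) pair was used
nrow(merge(random.permutation.df2, df2, by = c("pebble", "bucket")))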
Here is a brute-force approach where one simply keeps trying until a valid solution is found:
set.seed(123)
df1 <- data.frame(pebble = 1:100,
bucket = sample(LETTERS[1:10], 100, T),
stringsAsFactors = F)
df2 <- data.frame(pebble = sample(1:100, 10),
bucket = sample(LETTERS[1:10], 10, T),
stringsAsFactors = F)
random.permutation.df1 <- data.frame(pebble = df1$pebble, bucket = sample(df1$bucket))
This random permutation does not satisfy the condition, so try new ones:
merge(random.permutation.df1, df2)
#> pebble bucket
#> 1 60 J
while(TRUE) {
random.permutation.df1 <- data.frame(pebble = df1$pebble, bucket = sample(df1$bucket))
if(nrow(merge(random.permutation.df1, df2)) == 0)
break;
}
New permutation matches the condition:
merge(random.permutation.df1, df2)
#> [1] pebble bucket
#> <0 rows> (or 0-length row.names)
colSums(table(random.permutation.df1))
#> A B C D E F G H I J
#> 7 12 11 9 14 7 11 11 11 7
colSums(table(df1))
#> A B C D E F G H I J
#> 7 12 11 9 14 7 11 11 11 7
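If rejections become frequent (as in the edit to the question), capping the number of attempts keeps the brute force from looping forever. A minimal sketch along the same lines; the attempt budget is an arbitrary choice:
max_tries <- 10000  # arbitrary budget; tune as needed
found <- FALSE
for (i in seq_len(max_tries)) {
  random.permutation.df1 <- data.frame(pebble = df1$pebble, bucket = sample(df1$bucket))
  if (nrow(merge(random.permutation.df1, df2)) == 0) { found <- TRUE; break }
}
if (!found) warning("no valid permutation found within the attempt budget")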
Say I have a dataframe like this:
library(dplyr)
set.seed(1)
n <- 20
df <- data.frame(ID = sample(1:5, n, replace = TRUE),
Fac1 = sample(letters[1:5], n, replace = TRUE),
Fac2 = sample(LETTERS[10:15], n, replace = TRUE),
Val1 = sample(1:10, n, replace = TRUE)) %>%
arrange(ID) %>% group_by(ID,Fac1) %>%
summarise(Val1 = sum(Val1),Fac2 = first(Fac2)) %>%
group_by(ID,Fac2) %>%
mutate(Val2 = sum(Val1))
df
ID Fac1 Val1 Fac2 Val2
1 1 b 9 N 9
2 1 c 9 O 9
3 2 a 4 K 4
4 2 b 10 M 18
5 2 c 4 L 4
6 2 d 8 M 18
7 2 e 10 N 10
8 3 d 14 N 14
9 4 b 8 L 22
10 4 c 14 L 22
11 4 d 9 K 9
12 4 e 6 N 6
13 5 a 13 M 13
14 5 b 3 N 3
ID is a grouping variable. Rows with an Fac1 value of e should have their Fac2 value changed to match the other row in the group where Fac1 is either b or c, provided the sum of Val2 for the two rows is greater than 20. (I've simplified this to the point where you probably don't get why, but just work with me.)
This is what I have tried so far:
result <- df %>% group_by(ID) %>%
mutate(Fac2 = case_when(
Fac1 == "e" &
sum(Val2,ifelse(Fac1 %in% c("b","c"), Val2, 0)) > 20 ~
ifelse(sum(Val2,ifelse(Fac1 %in% c("b","c"),Val2,0)) > 20,
as.character(Fac2),
NA_character_),
TRUE ~ as.character(Fac2)
))
It doesn't work properly because it is summing the first value of Val2 in the group rather than only doing so when Fac1 is b or c.
Any ideas?
Adding desired outcome:
ID Fac1 Val1 Fac2 Val2
1 1 b 9 N 9
2 1 c 9 O 9
3 2 a 4 K 4
4 2 b 10 M 18
5 2 c 4 L 4
6 2 d 8 M 18
7 2 e 10 M 10 **Changed to M b/c row 4 is M and 10 + 18 > 20
8 3 d 14 N 14
9 4 b 8 L 22
10 4 c 14 L 22
11 4 d 9 K 9
12 4 e 6 L 6 **Changed to L b/c row 10 is L and 6 + 22 > 20
13 5 a 13 M 13
14 5 b 3 N 3
I'm having a hard time following what you want the values to be changed to.
But when I have multiple conditions or decisions that need to be made in a sequence, I use a loop and a series of if statements to go through the data frame. I prefer while loops, so that's what I'll use in the example.
counter <- 1
stopper <- nrow(df)
while (counter <= stopper) {
fac1 <- df$Fac1[counter]
if (fac1 == 'e') {
if ([INSERT NEXT CONDITION]) # Change whichever value you're trying to change, using the counter to reference the correct row.
else # Change whichever value you're trying to change, using the counter to reference the correct row.
}
counter <- counter + 1
}
For me, simplifying the code makes it a lot easier for me to keep track of what decisions are being made. It also allows for complex decisions that are difficult to get functions to work with.
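For completeness, here is a minimal concrete sketch of that loop idea applied to the rule in the question. It picks the b/c row with the largest Val2 in the same ID group; this is my reading of the rule, checked only against the posted example:
res <- as.data.frame(df)
res$Fac2 <- as.character(res$Fac2)
for (i in seq_len(nrow(res))) {
  if (res$Fac1[i] == "e") {
    bc <- which(res$ID == res$ID[i] & res$Fac1 %in% c("b", "c"))
    if (length(bc) > 0) {
      j <- bc[which.max(res$Val2[bc])]
      if (res$Val2[i] + res$Val2[j] > 20) res$Fac2[i] <- res$Fac2[j]
    }
  }
}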
I was able to get the desired result with this code. I made a new column containing the result of the test for what value to replace Fac2 with, which wasn't entirely necessary but makes it more readable and debuggable.
The key thing was to use first(na.omit()) to get the value from a different row in the same group which met the condition.
result <- df %>% group_by(ID) %>%
mutate(Max_bc_Val = ifelse(Val2 == max(ifelse(Fac1 %in% c("b","c"),
Val2,0)),
ifelse(Fac1 %in% c("b","c"),
as.character(Fac2),NA),NA)) %>%
mutate(Fac2 = case_when(
Fac1 == "e" ~ ifelse(is.na(first(na.omit(Max_bc_Val))),
NA_character_,
first(na.omit(Max_bc_Val))),
TRUE ~ as.character(Fac2)))
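For reference, the first(na.omit()) idiom simply picks the first non-missing value from a vector, which is what pulls the matching row's Fac2 into the e row within each group:
first(na.omit(c(NA, "M", NA)))  # returns "M"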
This works but doesn't seem like the best solution. Any other ideas?
I would like to split a data.frame into a list based on row values/characters across all columns of the data.frame.
I wrote lists of data.frames to file using write.list {erer}
So now when I read them in again, they look like this:
Dummy data:
set.seed(1)
df <- cbind(data.frame(col1=c(sample(LETTERS, 4),"col1",sample(LETTERS, 7))),
data.frame(col2=c(sample(LETTERS, 4),"col2",sample(LETTERS, 7))),
data.frame(col3=c(sample(LETTERS, 4),"col3",sample(LETTERS, 7))))
col1 col2 col3
1 G E Q
2 J R D
3 N J G
4 U Y I
5 col1 col2 col3
6 F M A
7 W R J
8 Y X U
9 P I H
10 N Y K
11 B T M
12 E E Y
And I would like to split into lists by c("col1","col2","col3") producing
[[1]]
col1 col2 col3
1 G E Q
2 J R D
3 N J G
4 U Y I
[[2]]
col1 col2 col3
1 F M A
2 W R J
3 Y X U
4 P I H
5 N Y K
6 B T M
7 E E Y
Feels like it should be straightforward using split, but my attempts so far have failed. Also, as you see, I can't split by a certain row interval.
Any pointers would be highly appreciated, thanks!
Try
lapply(split(df, cumsum(grepl(names(df)[1], df$col1))), function(x) x[!grepl(names(df)[1], x$col1),])
#$`0`
# col1 col2 col3
#1 G E Q
#2 J R D
#3 N J G
#4 U Y I
#$`1`
# col1 col2 col3
#6 F M A
#7 W R J
#8 Y X U
#9 P I H
#10 N Y K
#11 B T M
#12 E E Y
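If you also want the row numbers to restart within each chunk, as in the desired output, the row names can be reset afterwards (a small addition, storing the split result in out first):
out <- lapply(split(df, cumsum(grepl(names(df)[1], df$col1))),
              function(x) x[!grepl(names(df)[1], x$col1), ])
lapply(out, function(x) { rownames(x) <- NULL; x })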
This should be general if you want to split whenever a row is exactly equal to the column names:
dfSplit<-split(df,cumsum(Reduce("&",Map("==",df,colnames(df)))))
for (i in 2:length(dfSplit)) dfSplit[[i]]<-dfSplit[[i]][-1,]
The second line can be written in a slightly more R-like style, as @DavidArenburg suggested in the comments:
dfSplit[-1] <- lapply(dfSplit[-1], function(x) x[-1, ])
It also has the added benefit of doing nothing if dfSplit has length 1 (unlike my original second line, which would throw an error).
Given a df in semi-long format with id variables a and b and measured data in columns m1 and m2. The type of data is specified by the variable v (values var1 and var2).
set.seed(8)
df_l <-
data.frame(
a = rep(sample(LETTERS,5),2),
b = rep(sample(letters,5),2),
v = c(rep("var1",5),rep("var2",5)),
m1 = sample(1:10,10,F),
m2 = sample(20:40,10,F))
It looks like this:
a b v m1 m2
1 W r var1 3 40
2 N l var1 6 32
3 R a var1 9 28
4 F g var1 5 21
5 E u var1 4 38
6 W r var2 1 35
7 N l var2 8 33
8 R a var2 10 29
9 F g var2 7 30
10 E u var2 2 23
If I want to make a wide format of the values in m1, using id a as rows and the values in v as columns, I do:
> reshape2::dcast(df_l, a~v, value.var="m1")
a var1 var2
1 E 4 2
2 F 5 7
3 N 6 8
4 R 9 10
5 W 3 1
How do I write a function that does this, where the arguments to dcast (row, column and value.var) are supplied as function arguments, something like:
fun <- function(df,row,col,val){
require(reshape2)
res <-
dcast(df, row~col, value.var=val)
return(res)
}
I checked SO here and here to try variations of match.call and eval(substitute()) in order to "get" the arguments inside the function, and also tried the lazyeval package. No success.
What am I doing wrong here ? How to get dcast to recognize variable names?
The formula argument also accepts character input.
foo <- function(df, id, measure, val) {
dcast(df, paste(paste(id, collapse = " + "), "~",
paste(measure, collapse = " + ")),
value.var = val)
}
require(reshape2)
foo(df_l, "a", "v", "m1")
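An equivalent route is to build the formula object yourself, e.g. with as.formula; a small sketch with the same arguments as foo:
foo2 <- function(df, id, measure, val) {
  f <- as.formula(paste(paste(id, collapse = " + "), "~",
                        paste(measure, collapse = " + ")))
  dcast(df, f, value.var = val)
}
foo2(df_l, "a", "v", "m1")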
Note that data.table's dcast (current development) can also cast multiple value.var columns directly. So, you can also do:
require(data.table) # v1.9.5
foo(setDT(df_l), "a", "v", c("m1", "m2"))
# a m1_var1 m1_var2 m2_var1 m2_var2
# 1: F 1 6 28 21
# 2: H 9 2 38 29
# 3: M 5 10 24 35
# 4: O 8 3 23 26
# 5: T 4 7 31 39
Consider any data frame:
col1 col2 col3 col4
row.name11 A 23 x y
row.name12 A 29 x y
row.name13 B 17 x y
row.name14 A 77 x y
I have a list of row names which I want to return from this data frame. Let's say I have row.name12 and row.name13 in a list. I can easily return these rows from the data frame, but I also want to return the 4 rows above and the 4 rows below them. That means I want to return row.name8 through row.name17. I think it is similar to grep -A -B in the shell.
Probable solution: is there any way to return the row number by row name? If I have the row number, then I can easily subtract 4 from it and add 4 to it and return those rows.
Note: Here rownames are just examples. Rownames could be anything like RED, BLUE, BLACK, etc.
Try this:
extract.with.context <- function(x, rows, after = 0, before = 0) {
match.idx <- which(rownames(x) %in% rows)
span <- seq(from = -before, to = after)
extend.idx <- c(outer(match.idx, span, `+`))
extend.idx <- Filter(function(i) i > 0 & i <= nrow(x), extend.idx)
extend.idx <- sort(unique(extend.idx))
return(x[extend.idx, , drop = FALSE])
}
dat <- data.frame(x = 1:26, row.names = letters)
extract.with.context(dat, c("a", "b", "j", "y"), after = 3, before = 1)
# x
# a 1
# b 2
# c 3
# d 4
# e 5
# i 9
# j 10
# k 11
# l 12
# m 13
# x 24
# y 25
# z 26
Perhaps a combination of which() and %in% would help you:
dat[which(rownames(dat) %in% c("row.name13")) + c(-1, 1), ]
# col1 col2 col3 col4
# row.name12 A 29 x y
# row.name14 A 77 x y
In the above, we are trying to identify which row names in "dat" are "row.name13" (using which()), and the + c(-1, 1) tells R to return the row before and the row after. If you wanted to include the row, you could do something like + c(-1:1).
To get the range of rows, switch the comma to a colon:
dat[which(rownames(dat) %in% c("row.name13")) + c(-1:1), ]
# col1 col2 col3 col4
# row.name12 A 29 x y
# row.name13 B 17 x y
# row.name14 A 77 x y
Update
Matching a list is a little bit trickier, but without thinking about it too much, here is a possibility:
myRows <- c("row.name12", "row.name13")
rowRanges <- lapply(which(rownames(dat) %in% myRows), function(x) x + c(-1:1))
# [[1]]
# [1] 1 2 3
#
# [[2]]
# [1] 2 3 4
#
lapply(rowRanges, function(x) dat[x, ])
# [[1]]
# col1 col2 col3 col4
# row.name11 A 23 x y
# row.name12 A 29 x y
# row.name13 B 17 x y
#
# [[2]]
# col1 col2 col3 col4
# row.name12 A 29 x y
# row.name13 B 17 x y
# row.name14 A 77 x y
This outputs a list of data.frames which might be handy since you might have duplicated rows (as there are in this example).
Update 2: Using grep if it is more appropriate
Here is a variation of your question, one which would be less convenient to solve using the which()...%in% approach.
set.seed(1)
dat1 <- data.frame(ID = 1:25, V1 = sample(100, 25, replace = TRUE))
rownames(dat1) <- paste("rowname", sample(apply(combn(LETTERS[1:4], 2),
2, paste, collapse = ""),
25, replace = TRUE),
sprintf("%02d", 1:25), sep = ".")
head(dat1)
# ID V1
# rowname.AD.01 1 27
# rowname.AB.02 2 38
# rowname.AD.03 3 58
# rowname.CD.04 4 91
# rowname.AD.05 5 21
# rowname.AD.06 6 90
Now, imagine you wanted to identify the rows with AB and AC, but you don't have a list of the numeric suffixes.
Here's a little function that can be used in such a scenario. It borrows a little from @Spacedman to make sure that the rows returned are within the range of the data (as per @flodel's suggestion).
getMyRows <- function(data, matches, range) {
rowMatches = lapply(unlist(lapply(matches, function(x)
grep(x, rownames(data)))), function(y) y + range)
rowMatches = lapply(rowMatches, function(x) x[x > 0 & x <= nrow(data)])
lapply(rowMatches, function(x) data[x, ])
}
You can use it as follows (but I won't print the results here). First, specify the dataset, then the pattern(s) you want matched, then the range (in this example, three rows before and four rows after).
getMyRows(dat1, c("AB", "AC"), -3:4)
Applying it to the earlier example of matching row.name12 and row.name13, you can use it as follows: getMyRows(dat, c(12, 13), -1:1).
You can also modify the function to make it more general (for example, to specify matching with a column instead of row names).
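For example, here is a sketch of that generalisation with a hypothetical column argument, matching patterns against a chosen column instead of the row names:
getMyRowsByCol <- function(data, column, matches, range) {
  rowMatches <- lapply(unlist(lapply(matches, function(x)
    grep(x, data[[column]]))), function(y) y + range)
  rowMatches <- lapply(rowMatches, function(x) x[x > 0 & x <= nrow(data)])
  lapply(rowMatches, function(x) data[x, ])
}
# e.g. getMyRowsByCol(dat, "col1", c("A", "B"), -4:4)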
Create some sample data:
> dat=data.frame(col1=letters,col2=sample(26),col3=sample(letters))
> dat
col1 col2 col3
1 a 26 x
2 b 12 i
3 c 15 v
...
Set our target vector (note that I chose an edge case and overlapping cases), and find the matching rows:
> target=c("a","e","g","s")
> match = which(dat$col1 %in% target)
Create sequences from -2 to +2 of the matches (adjust for your needs) and merge:
> getThese = unique(as.vector(mapply(seq,match-2,match+2)))
> getThese
[1] -1 0 1 2 3 4 5 6 7 8 9 17 18 19 20 21
Fix the edge cases:
> getThese = getThese[getThese > 0 & getThese <= nrow(dat)]
> dat[getThese,]
col1 col2 col3
1 a 26 x
2 b 12 i
3 c 15 v
4 d 22 d
5 e 2 j
6 f 9 l
7 g 1 w
8 h 21 n
9 i 17 p
17 q 18 a
18 r 10 m
19 s 24 o
20 t 13 e
21 u 3 k
>
Remember our targets were a, e, g and s. You've now got those plus two rows above and two rows below for each, with no duplicates.
If you are using row names, just create 'match' from those. I was using a column.
I'd write a bunch more tests using the testthat package if this were my problem.
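A minimal sketch of what such a check could look like with testthat, assuming the match and getThese objects from above:
library(testthat)
test_that("context rows are in range and include every target", {
  expect_true(all(getThese >= 1 & getThese <= nrow(dat)))
  expect_true(all(match %in% getThese))
})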
Another option is to use filter. If stats::filter is masked, e.g. by dplyr::filter, you have to call stats::filter explicitly.
dat <- data.frame(x = seq_along(letters), row.names = letters)
i <- rownames(dat) %in% c("a", "b", "j", "y") #Get the matches
nAfter <- 3
nBefore <- 1
fi <- seq(-nBefore, nAfter)
n <- max(abs(fi)) # longest offset, used as the padding length
fi <- seq(-n, n) %in% fi
# pad the match vector, run the filter, then trim the padding again
dat[head(tail(filter(c(rep(FALSE, n), i, rep(FALSE, n)), fi), -n), -n) > 0,, drop = FALSE]
# x
#a 1
#b 2
#c 3
#d 4
#e 5
#i 9
#j 10
#k 11
#l 12
#m 13
#x 24
#y 25
#z 26
I would simply proceed as follows:
dat[(grep("row.name12",row.names(dat))-4):(grep("row.name13",row.names(dat))+4),]
grep("row.name12",row.names(dat)) gives you the row number that have "row.name12" as name, so
(grep("row.name12",row.names(dat))-4):(grep("row.name13",row.names(dat))+4)
gives you a serie of row numbers ranging from the 4th row preceding the row named "row.name12" to the 4th row after the one named "row.name13".