Duplicate rows and create a new data frame in R

I have an R data frame called intraPByGroup as follows:
group, week1, week2, week3, week4
kiwi,23,43,54,23
eggplant,22,32,33,63
jasmine,23,454,12,654
coconut,32,56,22,31
What I want to do is create a new data frame that looks like the following:
user,week1,week2,week3,week4
eggplantA,22,32,33,63
eggplantB,22,32,33,63
eggplantC,22,32,33,63
jasmineA,23,454,12,654
jasmineB,23,454,12,654
jasmineC,23,454,12,654
Basically, the idea is: from the original data set I select two groups (eggplant and jasmine) and create a new data frame. This new data frame has a "user" variable instead of "group". Each user name is the group name plus A, B, or C, and all the remaining values are duplicated for every user in the same group.
How should I do that in R?
My first thought is to drop the group name, select a row, compose a new row from it, and then repeat this for each selected group.
eggFrame <- intraPByGroup[intraPByGroup$group=="eggplant",-1]
eggFrame1 <- eggFrame
eggFrame1["user"] <- "Eggplant-A"
eggFrame2 <- eggFrame
eggFrame2["user"] <- "Eggplant-B"
total <- rbind(eggFrame1,eggFrame2)
Repeatedly calling rbind like this seems clumsy; is there a faster way to do it?

You can do something like this:
data <- subset(data, group %in% c("eggplant", "jasmine"))[rep(1:2, each = 3), ]
data$group <- factor(paste0(data$group, LETTERS[1:3]))
data
## group week1 week2 week3 week4
## 2 eggplantA 22 32 33 63
## 2.1 eggplantB 22 32 33 63
## 2.2 eggplantC 22 32 33 63
## 3 jasmineA 23 454 12 654
## 3.1 jasmineB 23 454 12 654
## 3.2 jasmineC 23 454 12 654
If for any reason you don't like these row names, or you want to rename "group" to "user":
rownames(data) <- NULL
names(data)[1] <- "user"
data
## user week1 week2 week3 week4
## 1 eggplantA 22 32 33 63
## 2 eggplantB 22 32 33 63
## 3 eggplantC 22 32 33 63
## 4 jasmineA 23 454 12 654
## 5 jasmineB 23 454 12 654
## 6 jasmineC 23 454 12 654
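For reference, here is a self-contained sketch of the same approach, with the number of copies per group pulled out as a parameter (data and names taken from the question):

```r
# Rebuild the example data from the question
intraPByGroup <- data.frame(
  group = c("kiwi", "eggplant", "jasmine", "coconut"),
  week1 = c(23, 22, 23, 32),
  week2 = c(43, 32, 454, 56),
  week3 = c(54, 33, 12, 22),
  week4 = c(23, 63, 654, 31),
  stringsAsFactors = FALSE
)

n <- 3  # copies per group (A, B, C)
sel <- subset(intraPByGroup, group %in% c("eggplant", "jasmine"))

# Repeat each selected row n times, then append A, B, C to the group name;
# paste0 recycles LETTERS[1:n] across the repeated rows
out <- sel[rep(seq_len(nrow(sel)), each = n), ]
out$group <- paste0(out$group, LETTERS[seq_len(n)])
names(out)[1] <- "user"
rownames(out) <- NULL
```

Because the rows for one group sit next to each other after `rep(..., each = n)`, the recycled A, B, C labels line up with the right group.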

Related

Using a function and mapply in R to create new columns that sums other columns

Suppose, I have a dataframe, df, and I want to create a new column called "c" based on the addition of two existing columns, "a" and "b". I would simply run the following code:
df$c <- df$a + df$b
But I also want to do this for many other columns. So why won't my code below work?
# Reproducible data:
martial_arts <- data.frame(gym_branch=c("downtown_a", "downtown_b", "uptown", "island"),
day_boxing=c(5,30,25,10),day_muaythai=c(34,18,20,30),
day_bjj=c(0,0,0,0),day_judo=c(10,0,5,0),
evening_boxing=c(50,45,32,40), evening_muaythai=c(50,50,45,50),
evening_bjj=c(60,60,55,40), evening_judo=c(25,15,30,0))
# Creating a list of the new column names of the columns that need to be added to the martial_arts dataframe:
pattern<-c("_boxing","_muaythai","_bjj","_judo")
d<- expand.grid(paste0("martial_arts$total",pattern))
# Creating lists of the columns that will be added to each other:
e<- names(martial_arts %>% select(day_boxing:day_judo))
f<- names(martial_arts %>% select(evening_boxing:evening_judo))
# Writing a function and using mapply:
kick_him <- function(d,e,f){d <- rowSums(martial_arts[ , c(e, f)], na.rm=T)}
mapply(kick_him,d,e,f)
Now, mapply produces the correct results in terms of the addition:
> mapply(kick_him, d, e, f)
Var1 <NA> <NA> <NA>
[1,] 55 84 60 35
[2,] 75 68 60 15
[3,] 57 65 55 35
[4,] 50 80 40 0
But it doesn't add the new columns to the martial_arts dataframe. The function in theory should do the following
martial_arts$total_boxing <- martial_arts$day_boxing + martial_arts$evening_boxing
...
...
martial_arts$total_judo <- martial_arts$day_judo + martial_arts$evening_judo
and add four new total columns to martial_arts.
So what am I doing wrong?
The assignment is the problem: instead of the string "martial_arts$total_boxing", the name should be "total_boxing" alone, and it belongs on the left-hand side of the Map/mapply call. Since the OP already baked the 'martial_arts$' prefix into the 'd' dataset, we strip that prefix and then do the assignment:
kick_him <- function(e,f){rowSums(martial_arts[ , c(e, f)], na.rm=TRUE)}
martial_arts[sub(".*\\$", "", d$Var1)] <- Map(kick_him, e, f)
Check the dataset now:
> martial_arts
gym_branch day_boxing day_muaythai day_bjj day_judo evening_boxing evening_muaythai evening_bjj evening_judo total_boxing total_muaythai total_bjj total_judo
1 downtown_a 5 34 0 10 50 50 60 25 55 84 60 35
2 downtown_b 30 18 0 0 45 50 60 15 75 68 60 15
3 uptown 25 20 0 5 32 45 55 30 57 65 55 35
4 island 10 30 0 0 40 50 40 0 50 80 40 0
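Putting it together as one runnable sketch (data from the question; the total/day/evening column names are derived with paste0 instead of the "martial_arts$" strings):

```r
# Reproducible data from the question
martial_arts <- data.frame(
  gym_branch = c("downtown_a", "downtown_b", "uptown", "island"),
  day_boxing = c(5, 30, 25, 10), day_muaythai = c(34, 18, 20, 30),
  day_bjj = c(0, 0, 0, 0), day_judo = c(10, 0, 5, 0),
  evening_boxing = c(50, 45, 32, 40), evening_muaythai = c(50, 50, 45, 50),
  evening_bjj = c(60, 60, 55, 40), evening_judo = c(25, 15, 30, 0)
)

pattern <- c("_boxing", "_muaythai", "_bjj", "_judo")
day_cols <- paste0("day", pattern)
evening_cols <- paste0("evening", pattern)

# Sum the matching day and evening columns row-wise
kick_him <- function(e, f) rowSums(martial_arts[, c(e, f)], na.rm = TRUE)

# Map returns a list of four numeric vectors; assigning it to a set of
# column names adds all four total_* columns at once
martial_arts[paste0("total", pattern)] <- Map(kick_him, day_cols, evening_cols)
```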

R: many nested loops to remove rows in multiple data frames

I have 18 data frames called regular55, regular56, regular57, collar55, collar56, etc. In each data frame, I want to delete the first row of each nest.
Each data frame looks like this:
nest interval
1 17 -8005
2 17 183
3 17 186
4 17 221
5 17 141
6 17 30
7 17 158
8 17 23
9 17 199
10 17 51
11 17 169
12 17 176
13 31 905
14 31 478
15 31 40
16 31 488
17 31 16
18 31 203
19 31 54
20 31 341
21 31 54
22 50 -14164
23 50 98
24 50 1438
25 71 240
26 71 725
27 71 819
28 85 -13935
29 85 45
30 85 589
31 85 47
32 85 161
33 85 67
The solution I came up with to avoid writing out the function for each one of the 18 data frames includes many nested loops:
for (i in 5:7){
for (j in 5:7) {
for (k in c("regular","collar")){
for (l in c(unique(paste0(k,i,j,"$nest")))){
paste0(k,i,j)=paste0(k,i,j)[(-c(which((paste0(k,i,j,"$nest")) == l )
[1])),]
}}}}
I'm basically selecting the first value at "which" there is a "unique" value of nest. However, I get:
Error in paste0(k, i, j)[(-c(which((paste0(k, i, j, "$nest")) == l)[1])), :
incorrect number of dimensions
It might be because "paste0(k,i,j)" only produces a character string and is not recognized as the name of a data frame.
Any ideas on how to fix this? Or any other ways to delete the first rows for each nest in every data frame?
Thanks to help from the comments, my problem was solved.
Originally, I divided my data frame using a for loop and then grouped it into one list:
for (i in 5:7) {
for (j in 5:7) {
for (k in c("regular","collar")){
assign(paste0(k,i,j),
df[df$x == i & df$y == j & df$z == k,])
}}}
df.list = mget(ls(pattern = "(regular|collar)[5-7][5-7]"))
I later found a way to split my data frame directly into a list based on multiple columns (R subsetting a data frame into multiple data frames based on multiple column values):
df.list= split(df, with(df, interaction(df$x, df$y, df$z)), drop = TRUE)
Finally, I was able to apply the function to remove the first rows of each nest:
df.list.updated = lapply(df.list, function(d) d %>% group_by(nest) %>%
slice(2:n()))
It is definitely easier to work from a list of data frames.
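For anyone avoiding the dplyr dependency, the per-nest step can also be done in base R: duplicated() is FALSE exactly at the first row of each nest, so keeping only the duplicated rows drops that first row. A sketch on toy data assumed from the question:

```r
# Toy data shaped like one of the 18 data frames
d <- data.frame(nest = c(17, 17, 17, 31, 31, 50),
                interval = c(-8005, 183, 186, 905, 478, -14164))

# duplicated() marks every row after the first within each nest value
drop_first <- function(x) x[duplicated(x$nest), ]

d2 <- drop_first(d)  # first row of nests 17, 31, and 50 removed

# Applied over a list of data frames, as in the accepted approach
df.list <- list(regular55 = d, collar55 = d)
df.list.updated <- lapply(df.list, drop_first)
```

Note that a nest with a single row (nest 50 here) loses its only row, which matches "delete the first row of each nest".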

Filter rows based on values of multiple columns in R

Here is the data set, say name is DS.
Abc Def Ghi
1 41 190 67
2 36 118 72
3 12 149 74
4 18 313 62
5 NA NA 56
6 28 NA 66
7 23 299 65
8 19 99 59
9 8 19 61
10 NA 194 69
How can I get a new data set DSS where the value of column Abc is greater than 25 and the value of column Def is greater than 100? It should also drop any row where at least one of those columns is NA.
I have tried a few options but wasn't successful. Your help is appreciated.
There are multiple ways of doing it. Below are five methods; the first four are generally faster than the subset function.
R Code:
# Method 1:
DS_Filtered <- na.omit(DS[(DS$Abc > 25 & DS$Def > 100), ])
# Method 2: which() also drops NA
DS_Filtered <- DS[which(DS$Abc > 25 & DS$Def > 100), ]
# Method 3:
DS_Filtered <- na.omit(DS[(DS$Abc > 25) & (DS$Def > 100), ])
# Method 4: using the dplyr package
library(dplyr)
DS_Filtered <- filter(DS, Abc > 25, Def > 100)
DS_Filtered <- DS %>% filter(Abc > 25 & Def > 100)
# Method 5: subset() drops NA by default
DS_Filtered <- subset(DS, Abc > 25 & Def > 100)
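As a check, here is a runnable sketch with the data from the question (thresholds as stated there: Abc > 25, Def > 100); complete.cases() is one more base-R option that keeps NA out of the logical index:

```r
# Data from the question
DS <- data.frame(
  Abc = c(41, 36, 12, 18, NA, 28, 23, 19, 8, NA),
  Def = c(190, 118, 149, 313, NA, NA, 299, 99, 19, 194),
  Ghi = c(67, 72, 74, 62, 56, 66, 65, 59, 61, 69)
)

# complete.cases() is FALSE wherever Abc or Def is NA, and FALSE & NA
# evaluates to FALSE, so the index contains no NA
keep <- complete.cases(DS[c("Abc", "Def")]) & DS$Abc > 25 & DS$Def > 100
DSS <- DS[keep, ]
```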

How to obtain a new table after filtering only one column in an existing table in R?

I have a data frame with 20 columns. I need to filter out noise from one column; after filtering with the convolve function I get a new vector of values, and many values of the original column are lost in the process. The problem is that I need the whole table (for later analysis) restricted to the rows where the filtered column has values, but I can't bind the filtered column to the original table because the numbers of rows differ. Let me illustrate using the 'age' column of the 'Orange' data set in R:
> head(Orange)
Tree age circumference
1 1 118 30
2 1 484 58
3 1 664 87
4 1 1004 115
5 1 1231 120
6 1 1372 142
The convolve filter used:
smooth <- function (x, D, delta){
z <- exp(-abs(-D:D/delta))
r <- convolve (x, z, type='filter')/convolve(rep(1, length(x)),z,type='filter')
r <- head(tail(r, -D), -D)
r
}
Filtering the 'age' column:
age2 <- smooth(Orange$age, 5,10)
data.frame(age2)
The age column has 35 rows and age2 has only 15. The original data set has two more columns that I also want to work with, so I need the 15 rows of each column corresponding to the 15 rows of age2; the filter removed the first and last ten values of the age column. How can I apply the filter so that I get a truncated data set with all columns but only the filtered rows?
You would need to figure out how the variables line up. If you can pad age2 with NAs, then do Orange$age2 <- age2 followed by na.omit(Orange), you should have what you want. Or, equivalently, perhaps this is what you are looking for?
df <- tail(head(Orange, -10), -10) # chop off the first and last 10 observations
df$age2 <- age2
df
Tree age circumference age2
11 2 1004 156 915.1678
12 2 1231 172 876.1048
13 2 1372 203 841.3156
14 2 1582 203 911.0914
15 3 118 30 948.2045
16 3 484 51 1008.0198
17 3 664 75 955.0961
18 3 1004 108 915.1678
19 3 1231 115 876.1048
20 3 1372 139 841.3156
21 3 1582 140 911.0914
22 4 118 32 948.2045
23 4 484 62 1008.0198
24 4 664 112 955.0961
25 4 1004 167 915.1678
Edit: If you know the first and last x observations will be removed then the following works:
x <- 2
df <- tail(head(Orange, -x), -x) # chop off the first and last x observations
df$age2 <- age2
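Here is a runnable sketch of the NA-padding route mentioned above, using the smooth() function from the question (Orange ships with R; the number lost at each end is computed rather than hard-coded):

```r
# Convolve filter from the question
smooth <- function(x, D, delta) {
  z <- exp(-abs(-D:D / delta))
  r <- convolve(x, z, type = "filter") /
    convolve(rep(1, length(x)), z, type = "filter")
  head(tail(r, -D), -D)
}

age2 <- smooth(Orange$age, 5, 10)           # 15 values from 35 rows
pad <- (nrow(Orange) - length(age2)) / 2    # 10 observations lost per end

# Pad the filtered vector back to full length, attach it, drop NA rows
oranges <- Orange
oranges$age2 <- c(rep(NA, pad), age2, rep(NA, pad))
trimmed <- na.omit(oranges)                 # the 15 complete rows remain
```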

Add scale column to data frame by factor

I'm attempting to add a column to a data frame containing values normalized within groups defined by a factor.
For example:
'data.frame': 261 obs. of 3 variables:
$ Area : Factor w/ 29 levels "Antrim","Ards",..: 1 1 1 1 1 1 1 1 1 2 ...
$ Year : Factor w/ 9 levels "2002","2003",..: 1 2 3 4 5 6 7 8 9 1 ...
$ Arrests: int 18 54 47 70 62 85 96 123 99 38 ...
I'd like to add a column that are the Arrests values normalized in groups by Area.
The best I've come up with is:
data$Arrests.norm <- unlist(unname(by(data$Arrests,data$Area,function(x){ scale(x)[,1] } )))
This command runs, but the data comes out scrambled, i.e., the normalized values don't line up with the correct Areas in the data frame.
Appreciate your tips.
EDIT: Just to clarify what I mean by scrambled data: subsetting the data frame after running my code gives output like the following, where the normalized values clearly belong to another factor group.
Area Year Arrests Arrests.norm
199 Larne 2002 92 -0.992843957
200 Larne 2003 124 -0.404975825
201 Larne 2004 89 -1.169204397
202 Larne 2005 94 -0.581336264
203 Larne 2006 98 -0.228615385
204 Larne 2007 8 0.006531868
205 Larne 2008 31 0.418039561
206 Larne 2009 25 0.947120880
207 Larne 2010 22 2.005283518
Following up your by attempt:
df <- data.frame(A = factor(rep(c("a", "b"), each = 4)),
B = sample(1:4, 8, TRUE))
ll <- by(data = df, df$A, function(x){
x$B_scale <- scale(x$B)
x
}
)
df2 <- do.call(rbind, ll)
Alternatively, the one-liner
data <- transform(data, Arrests.norm = ave(Arrests, Area, FUN = scale))
will do the trick.
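A quick toy-data check of the ave() route; within each Area the scaled values come out with mean 0 and standard deviation 1, in the original row order:

```r
# Toy data shaped like the question's data frame
data <- data.frame(Area = factor(rep(c("Antrim", "Ards"), each = 3)),
                   Arrests = c(18, 54, 47, 70, 62, 85))

# ave() applies scale() within each Area and puts the results back
# in the positions of the original rows
data <- transform(data, Arrests.norm = ave(Arrests, Area, FUN = scale))
```

This is why ave() avoids the scrambling: unlike unlisting the result of by(), it never reorders the groups relative to the data frame.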
