From the following data frame, I am trying to output two tables, one for PASS and another for FAIL. The condition is that each output table should contain only the ID and the Score. Can anyone help me with this? I am still getting to know the full capabilities of the table function, so if anyone could suggest other alternatives I would greatly appreciate it, as long as the conditions for the output are met.
> df <- data.frame(
ID <- as.factor(c(20260, 11893, 54216, 11716, 53368, 46196, 40007, 20970, 11802, 46166, 23615, 11865, 16138, 64789, 43211, 66539));
Score <- c(9,7,6,2,10,7,8,10,6,7,7,9,9,9,10,8)
Remark<- as.factor(c("PASS","PASS","FAIL","FAIL","PASS","PASS","PASS","PASS","FAIL","PASS","PASS","PASS","PASS","PASS","PASS","PASS"))
)
> df
ID Score Remark
1 20260 9 PASS
2 11893 7 PASS
3 54216 6 FAIL
4 11716 2 FAIL
5 53368 10 PASS
6 46196 7 PASS
7 40007 8 PASS
8 20970 10 PASS
9 11802 6 FAIL
10 46166 7 PASS
11 23615 7 PASS
12 11865 9 PASS
13 16138 9 PASS
14 64789 9 PASS
15 43211 10 PASS
16 66539 8 PASS
Something like this?
df <- data.frame(
ID = as.factor(c(20260, 11893, 54216, 11716, 53368, 46196, 40007, 20970, 11802, 46166, 23615, 11865, 16138, 64789, 43211, 66539)),
Score = c(9,7,6,2,10,7,8,10,6,7,7,9,9,9,10,8),
Remark = as.factor(c("PASS","PASS","FAIL","FAIL","PASS","PASS","PASS","PASS","FAIL","PASS","PASS","PASS","PASS","PASS","PASS","PASS"))
)
df[df$Remark == "PASS", 1:2]
ID Score
1 20260 9
2 11893 7
5 53368 10
6 46196 7
7 40007 8
8 20970 10
10 46166 7
11 23615 7
12 11865 9
13 16138 9
14 64789 9
15 43211 10
16 66539 8
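The FAIL table works the same way, and if you want both tables in one call, split() returns a named list of data frames, one per Remark level (this is just an alternative sketch using the same df as above):
df[df$Remark == "FAIL", 1:2]
# or both tables at once, as a list named FAIL and PASS:
split(df[, c("ID", "Score")], df$Remark)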
I am not too proficient with the apply functions, or with R in general, but I know I overuse for loops, which makes my code slow. How can the following code be sped up with apply functions, or in any other way?
sum_store = NULL
for (col in 1:ncol(cazy_fams)){ # for each column in cazy_fams (each master class, e.g. GH, AA, etc.)
  for (row in 1:nrow(cazy_fams)){ # for each row in cazy_fams (the specific family number, e.g. GH1, AA7, etc.)
    # Isolate the row for the current cazy family from every dataframe in the list
    filt_fam = lapply(family_summary, function(sample){
      sample[as.character(sample$Family) %in% paste(colnames(cazy_fams[col]), cazy_fams[row, col], sep = ""), ]
    })
    row_cat = do.call(rbind, filt_fam) # concatenate the lapply list output into a dataframe
    if (nrow(row_cat) > 0){
      fam_sum = aggregate(proteins ~ Family, data = row_cat, FUN = sum) # collapse the dataframe into one row, summing the protein counts
      sum_store = rbind(sum_store, fam_sum) # store the results for that family
    } else if (grepl("NA", paste(colnames(cazy_fams[col]), cazy_fams[row, col], sep = "")) == FALSE) {
      Family = paste(colnames(cazy_fams[col]), cazy_fams[row, col], sep = "")
      proteins = 0
      sum_store = rbind(sum_store, data.frame(Family, proteins))
    } else {
      next
    }
  }
}
family_summary is just a list of 18 two-column dataframes that look like this:
Family proteins
CE0 2
CE1 9
CE4 15
CE7 1
CE9 1
CE14 10
GH0 5
GH1 1
GH3 4
GH4 1
GH8 1
GH9 2
GH13 2
GH15 5
GH17 1
with different cazy families.
cazy_fams is just a dataframe where each column is a cazy class (e.g. GH, AA, etc.) and each row is a family number, all taken from the linked website:
GH GT PL CE AA CBM
1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 3 3 3
4 4 4 4 4 4
5 5 5 5 5 5
6 6 6 6 6 6
7 7 7 7 7 7
8 8 8 8 8 8
9 9 9 9 9 9
10 10 10 10 10 10
11 11 11 11 11 11
12 12 12 12 12 12
13 13 13 13 13 13
14 14 14 14 14 14
15 15 15 15 15 15
The reason behind the else if (grepl("NA", paste(colnames(cazy_fams[col]), cazy_fams[row, col], sep = "")) == FALSE) statement is to deal with the fact that not all classes have the same number of families, so when looping over my dataframe I end up with some GHNA and AANA names with NA on the end.
The output sum_store is this:
Family proteins
GH1 54
GH2 51
GH3 125
GH4 29
GH5 40
GH6 25
GH7 0
GH8 16
GH9 25
GH10 19
GH11 5
GH12 5
GH13 164
GH14 3
GH15 61
A dataframe with all listed cazy families and the total number of appearances across the family_summary list.
Please let me know if you need anything else to help answer my question.
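For what it's worth, here is one way the nested loops could be avoided. This is only a sketch: the helper names (all_fams, combined, totals) are mine, and it assumes family_summary and cazy_fams look exactly as described above. The idea is to build the full set of family names once, stack the per-sample tables with a single rbind, aggregate in one pass, and merge so that families that never appear get a count of 0:
# Build every class + number name, dropping the "GHNA"-style combinations
all_fams <- unlist(lapply(colnames(cazy_fams),
                          function(cl) paste0(cl, cazy_fams[[cl]])))
all_fams <- all_fams[!grepl("NA", all_fams)]

# Stack the 18 per-sample tables and sum the protein counts per family in one pass
combined <- do.call(rbind, family_summary)
totals <- aggregate(proteins ~ Family, data = combined, FUN = sum)

# Merge against the full family list so families that never appear get 0
sum_store <- merge(data.frame(Family = all_fams), totals,
                   by = "Family", all.x = TRUE, sort = FALSE)
sum_store$proteins[is.na(sum_store$proteins)] <- 0
# (row order may differ from the loop version)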
I have a dataset of customer ages and I want to make a frequency distribution with 9-year age bins.
Ages=c(83,51,66,61,82,65,54,56,92,60,65,87,68,64,51,
70,75,66,74,68,44,55,78,69,98,67,82,77,79,62,38,88,76,99,
84,47,60,42,66,74,91,71,83,80,68,65,51,56,73,55)
My desired outcome would be similar to the table shared below; the variable names can differ (as you wish).
Could I use binCounts for this? If yes, could you help me out with the code, as I'm not sure what bx and idxs are in this signature?
binCounts(x, idxs = NULL, bx, right = FALSE)
Age Count
38-46 3
47-55 7
56-64 7
65-73 14
74-82 10
83-91 6
92-100 3
Much Appreciated!
I don't know about binCounts or even the package it is in, but here is a base R solution:
data.frame(table(cut(Ages,0:7*9+37)))
Var1 Freq
1 (37,46] 3
2 (46,55] 7
3 (55,64] 7
4 (64,73] 14
5 (73,82] 10
6 (82,91] 6
7 (91,100] 3
To exactly duplicate your results:
lowerlimit=c(37,46,55,64,73,82,91,101)
Labels=paste(head(lowerlimit,-1)+1,lowerlimit[-1],sep="-")#I add one to have 38 47 etc
group=cut(Ages,lowerlimit,Labels)#Determine which group the ages belong to
tab=table(group)#Form a frequency table
as.data.frame(tab)# transform the table into a dataframe
group Freq
1 38-46 3
2 47-55 7
3 56-64 7
4 65-73 14
5 74-82 10
6 83-91 6
7 92-100 3
All this can be combined as:
data.frame(table(cut(Ages,s<-0:7*9+37,paste(head(s+1,-1),s[-1],sep="-"))))
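As for binCounts itself: as far as I know it comes from the matrixStats package. If that is the one you mean, bx is the vector of bin boundaries, idxs can stay NULL to use all elements, and right = TRUE makes each bin right-closed, matching the cut() intervals above. A sketch along those lines:
library(matrixStats)
breaks <- 0:7*9 + 37                                        # 37 46 55 ... 100
counts <- binCounts(sort(Ages), bx = breaks, right = TRUE)  # sorted first, just to be safe
data.frame(Age = paste(head(breaks, -1) + 1, breaks[-1], sep = "-"),
           Count = counts)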
I have read an Excel file into R, where sheet 1 has 51,500 rows and 5 columns and sheet 2 has the user IDs of buyers (only one column). Objective: extract the users in sheet 1 whose user IDs occur in sheet 2.
Here is the two example input files and desired output:
df <- data.frame(User.ID=c(12: 17), Group="Test", Spend=c(15:20), Purchase=c(5:10))
df
User.ID Group Spend Purchase
1 12 Test 15 5
2 13 Test 16 6
3 14 Test 17 7
4 15 Test 18 8
5 16 Test 19 9
6 17 Test 20 10
hash.ID <- data.frame(User.ID= c(13:16))
User.ID
1 13
2 14
3 15
4 16
desired output :
User.ID Group Spend Purchase Redem_Status
1 12 Test 15 5 Test_NonRedeemer
2 13 Test 16 6 Test_Redeemer
3 14 Test 17 7 Test_Redeemer
4 15 Test 18 8 Test_Redeemer
5 16 Test 19 9 Test_Redeemer
6 17 Test 20 10 Test_NonRedeemer
Based on the above example, we can see that if a user ID from df exists in the hash.ID table, then we add a new column and label it Test_Redeemer; otherwise we label it Test_NonRedeemer. Is there any straightforward approach that can do this task? Thanks a lot!
The test case you presented helped, thanks. As mentioned in the comments, you need to subset the rows you're interested in and assign them a value. By placing ! in front of the statement (note the parentheses!) you negate the statement and thus select all records not selected in the previous call.
df[df$User.ID %in% hash.ID$User.ID, "Redem_Status"] <- "Test_Redeemer"
df[!(df$User.ID %in% hash.ID$User.ID), "Redem_Status"] <- "Test_NonRedeemer"
df
User.ID Group Spend Purchase Redem_Status
1 12 Test 15 5 Test_NonRedeemer
2 13 Test 16 6 Test_Redeemer
3 14 Test 17 7 Test_Redeemer
4 15 Test 18 8 Test_Redeemer
5 16 Test 19 9 Test_Redeemer
6 17 Test 20 10 Test_NonRedeemer
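If you prefer a single assignment, ifelse() does the same thing in one line, using the df and hash.ID from the question:
df$Redem_Status <- ifelse(df$User.ID %in% hash.ID$User.ID,
                          "Test_Redeemer", "Test_NonRedeemer")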
Currently, my dataframe is in wide format and I want to do a factorial repeated-measures analysis with two between-subject factors (sex & org) and a within-subject factor (tasktype). Below I've illustrated how my data look with a sample (the actual dataset has a lot more variables). The variables starting with '1_' and '2_' belong to measurements during task 1 and task 2 respectively; this means that 1_FD_H_org and 2_FD_H_org are the same measurement, but for tasks 1 and 2 respectively.
id sex org task1 task2 1_FD_H_org 1_FD_H_text 2_FD_H_org 2_FD_H_text 1_apv 2_apv
2 F T Correct 2 69.97 68.9 116.12 296.02 10 27
6 M T Correct 2 53.08 107.91 73.73 333.15 16 21
7 M T Correct 2 13.82 30.9 31.8 78.07 4 9
8 M T Correct 2 42.96 50.01 88.81 302.07 4 24
9 F H Correct 3 60.35 102.9 39.81 96.6 15 10
10 F T Incorrect 3 78.61 80.42 55.16 117.57 20 17
I want to analyze whether there is a difference between the two tasks on e.g. FD_H_org for the different groups/conditions (sex & org).
How do I reshape my data so I can analyze it with a model like this?
ezANOVA(data=df, dv=.(FD_H_org), wid=.(id), between=.(sex, org), within=.(task))
I think that the correct format of my data should look like this:
id sex org task outcome FD_H_org FD_H_text apv
2 F T 1 Correct 69.97 68.9 10
2 F T 2 2 116.12 296.02 27
6 M T 1 Correct 53.08 107.91 16
6 M T 2 2 73.73 333.15 21
But I'm not sure. I tried to achieve this with the reshape2 package but couldn't figure out how to do it. Can anybody help?
I think you probably need to rebuild it by binding the two subsets of columns together with rbind(). The only issue here was that your outcome columns imply different data types, so I forced them both to text:
require(plyr)
dt<-read.table(file="dt.txt",header=TRUE,sep=" ") # this was to bring in your data
newtab=rbind(
ddply(dt,.(id,sex,org),summarize, task=1, outcome=as.character(task1), FD_H_org=X1_FD_H_org, FD_H_text=X1_FD_H_text, apv=X1_apv),
ddply(dt,.(id,sex,org),summarize, task=2, outcome=as.character(task2), FD_H_org=X2_FD_H_org, FD_H_text=X2_FD_H_text, apv=X2_apv)
)
newtab[order(newtab$id),]
id sex org task outcome FD_H_org FD_H_text apv
1 2 F T 1 Correct 69.97 68.90 10
7 2 F T 2 2 116.12 296.02 27
2 6 M T 1 Correct 53.08 107.91 16
8 6 M T 2 2 73.73 333.15 21
3 7 M T 1 Correct 13.82 30.90 4
9 7 M T 2 2 31.80 78.07 9
4 8 M T 1 Correct 42.96 50.01 4
10 8 M T 2 2 88.81 302.07 24
5 9 F H 1 Correct 60.35 102.90 15
11 9 F H 2 3 39.81 96.60 10
6 10 F T 1 Incorrect 78.61 80.42 20
12 10 F T 2 3 55.16 117.57 17
EDIT - obviously you don't need plyr for this (and it may slow it down) unless you're doing further transformations. This is the code with no non-standard dependencies:
newcolnames <- c("id","sex","org","task","outcome","FD_H_org","FD_H_text","apv")
t1 <- dt[, c(1,2,3,3,4,6,7,10)]    # id, sex, org (duplicated), task1 and the task-1 measurement columns
t1$org.1 <- 1                      # the duplicated org column becomes the task indicator
colnames(t1) <- newcolnames
t2 <- dt[, c(1,2,3,3,5,8,9,11)]    # id, sex, org (duplicated), task2 and the task-2 measurement columns
t2$org.1 <- 2
t2$task2 <- as.character(t2$task2) # match the type of the task-1 outcome column
colnames(t2) <- newcolnames
newt <- rbind(t1,t2)
newt[order(newt$id),]
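Since you mentioned reshaping packages: base R's reshape() can also do this in one call. A sketch, assuming dt was read with read.table as above (so the numeric-prefixed columns come in as X1_... and X2_...):
dt$task1 <- as.character(dt$task1)   # make both outcome columns the same type
dt$task2 <- as.character(dt$task2)
long <- reshape(dt, direction = "long",
                varying = list(c("task1", "task2"),
                               c("X1_FD_H_org", "X2_FD_H_org"),
                               c("X1_FD_H_text", "X2_FD_H_text"),
                               c("X1_apv", "X2_apv")),
                v.names = c("outcome", "FD_H_org", "FD_H_text", "apv"),
                timevar = "task", times = 1:2, idvar = "id")
long[order(long$id), ]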
I have a data frame named df:
number value
1 5
2 5
3 5
4 6
5 6
6 6
7 6
8 7
9 7
10 7
11 7
12 7
13 8
14 9
15 9
I want to remove specific rows based on a min and a max level. I tried these separately, first this:
df[df$value>5 , ]
and after that this:
df[df$value>8 , ]
After that I tried this:
df[df$value>5 & df$value>8, ]
but it effectively applies only the df$value>8 condition.
Another problem I observed is that when I type
df[df$value>5, ]
it removes those values, but when I type df again it still contains the values I tried to remove before. What could be wrong, and why don't I get a clean data frame without the removed values?
An example of the output data:
number value
4 6
5 6
6 6
7 6
8 7
9 7
10 7
11 7
12 7
If you want to remove rows with a level lower than the min or higher than the max, try this:
df[df$value<5 | df$value>8, ]
Edit
Here is the corrected code, which keeps only the rows strictly between the limits and assigns the result back, so df no longer contains the removed values:
df <- df[df$value>5 & df$value<8,]
It works for me.
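Equivalently, subset() reads a little more naturally; as above, assigning the result back to df is what makes the removal stick between calls:
df <- subset(df, value > 5 & value < 8)  # keep only rows strictly between the limits
df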