How to use ddply to add a column to a data frame?

I have a data frame that looks like this:
site date var dil
1 A 7.4 2
2 A 6.5 2
1 A 7.3 3
2 A 7.3 3
1 B 7.1 1
2 B 7.7 2
1 B 7.7 3
2 B 7.4 3
I need to add a column called wt to this data frame that contains the weighting factor needed to calculate the weighted mean. This weighting factor has to be derived for each combination of site and date.
The approach I'm using is to first build a function that calculates the weighting factor:
> weight <- function(dil){
dil/sum(dil)
}
then apply the function for each combination of site and date
> df$wt <- ddply(df,.(date,site),.fun=weight)
but I get this error message:
Error in FUN(X[[1L]], ...) :
only defined on a data frame with all numeric variables

You are almost there. The error occurs because ddply passes each site/date subset to your function as a data frame, so dil/sum(dil) is being attempted on a data frame that still contains the non-numeric site and date columns. Modify your code to use the transform function instead; this lets you add a column to the data frame inside ddply:
weight <- function(x) x/sum(x)
ddply(df, .(date,site), transform, weight=weight(dil))
site date var dil weight
1 1 A 7.4 2 0.40
2 1 A 7.3 3 0.60
3 2 A 6.5 2 0.40
4 2 A 7.3 3 0.60
5 1 B 7.1 1 0.25
6 1 B 7.7 3 0.75
7 2 B 7.7 2 0.40
8 2 B 7.4 3 0.60
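For completeness, here is a minimal reproducible sketch of the same approach, reconstructing the example data frame from the printout in the question (the final weighted-mean call is only an illustration of what the weights are for, not part of the original answer):
library(plyr)

# Rebuild the example data from the printout in the question
df <- data.frame(
  site = c(1, 2, 1, 2, 1, 2, 1, 2),
  date = c("A", "A", "A", "A", "B", "B", "B", "B"),
  var  = c(7.4, 6.5, 7.3, 7.3, 7.1, 7.7, 7.7, 7.4),
  dil  = c(2, 2, 3, 3, 1, 2, 3, 3)
)

weight <- function(x) x/sum(x)

# transform() adds the weight column within each date/site subset
df <- ddply(df, .(date, site), transform, wt = weight(dil))

# The weighted mean the weights are intended for, per date/site
ddply(df, .(date, site), summarise, wmean = weighted.mean(var, dil))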

Related

(RStudio) How to convert a data frame to a matrix?

I have a dataset in data frame format, but I need to convert it to a matrix for a recommender system.
my data format:
col1 col2 col3
1 name 1 5.9
2 name 1 7.9
3 name 1 10
4 name 1 9
5 name 1 8.4
1 name 2 6
2 name 2 8.5
3 name 2 10
4 name 2 9.3
This is what I want:
name 1 name 2
1 5.9 6
2 7.9 8.5
3 10 10
4 9 9.3
5 8.4 NA (missing value, autofill "NA")
For the data you shared, the following base R solution works (assuming your data frame is called df and the columns are named Hotel_Name and Hotel_Rating, as used in the code):
do.call(cbind, lapply(split(df$Hotel_Rating, df$Hotel_Name), `[`,
seq(max(table(df$Hotel_Name)))))
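For reference, a self-contained sketch of that one-liner with a toy data frame, using the column names the code assumes (Hotel_Name and Hotel_Rating; the generic col1/col2/col3 names in the question are placeholders):
# Hypothetical reconstruction of the data shown in the question
df <- data.frame(
  Hotel_Name   = c(rep("name 1", 5), rep("name 2", 4)),
  Hotel_Rating = c(5.9, 7.9, 10, 9, 8.4, 6, 8.5, 10, 9.3)
)

ratings <- split(df$Hotel_Rating, df$Hotel_Name)      # one vector of ratings per hotel
n <- max(table(df$Hotel_Name))                        # length of the longest group (5 here)
do.call(cbind, lapply(ratings, `[`, seq(n)))          # shorter groups are padded with NA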

Complex aggregate function construction in R? [duplicate]

Probably this is not that complex, but I couldn't figure out how to write a concise title explaining it:
I'm trying to use the aggregate function in R to return (1) the lowest value of a given column (val) by category (cat.2) in a data frame and (2) the value of another column (cat.1) on the same row. I know how to do part #1, but I can't figure out part #2.
The data:
cat.1<-c(1,2,3,4,5,1,2,3,4,5)
cat.2<-c(1,1,1,2,2,2,2,3,3,3)
val<-c(10.1,10.2,9.8,9.7,10.5,11.1,12.5,13.7,9.8,8.9)
df<-data.frame(cat.1,cat.2,val)
> df
cat.1 cat.2 val
1 1 1 10.1
2 2 1 10.2
3 3 1 9.8
4 4 2 9.7
5 5 2 10.5
6 1 2 11.1
7 2 2 12.5
8 3 3 13.7
9 4 3 9.8
10 5 3 8.9
I know how to use aggregate to return the minimum value for each cat.2:
> aggregate(df$val, by=list(df$cat.2), FUN=min)
Group.1 x
1 1 9.8
2 2 9.7
3 3 8.9
The second part of it, which I can't figure out, is to return the value in cat.1 on the same row of df where aggregate found min(df$val) for each cat.2. Not sure I'm explaining it well, but this is the intended result:
> ...
Group.1 x cat.1
1 1 9.8 3
2 2 9.7 4
3 3 8.9 5
Any help much appreciated.
If we need the output after the aggregate, we can do a merge with the original dataset:
merge(aggregate(df$val, by=list(df$cat.2), FUN=min),
df, by.x = c('Group.1', 'x'), by.y = c('cat.2', 'val'))
# Group.1 x cat.1
#1 1 9.8 3
#2 2 9.7 4
#3 3 8.9 5
But this can be done more easily with dplyr by using slice to keep the row with the minimum value of 'val' after grouping by 'cat.2':
library(dplyr)
df %>%
group_by(cat.2) %>%
slice(which.min(val))
# A tibble: 3 x 3
# Groups: cat.2 [3]
# cat.1 cat.2 val
# <dbl> <dbl> <dbl>
#1 3 1 9.8
#2 4 2 9.7
#3 5 3 8.9
Or with data.table
library(data.table)
setDT(df)[, .SD[which.min(val)], cat.2]
Or in base R, this can be done with ave
df[with(df, val == ave(val, cat.2, FUN = min)),]
# cat.1 cat.2 val
#3 3 1 9.8
#4 4 2 9.7
#10 5 3 8.9

Assigning rank of values within groups with NAs

I have a data frame (df), which is just a sample:
group value
1 12.1
1 10.3
1 NA
1 11.0
1 13.5
2 11.7
2 NA
2 10.4
2 9.7
Namely,
df<-data.frame(group=c(1,1,1,1,1,2,2,2,2), value=c(12.1, 10.3, NA, 11.0, 13.5, 11.7, NA, 10.4, 9.7))
Desired output is:
group value order
1 12.1 3
1 10.3 1
1 NA NA
1 11.0 2
1 13.5 4
2 11.7 3
2 NA NA
2 10.4 2
2 9.7 1
Namely, I want to find the rank of the values, starting from the smallest value, within each group.
How can I do that with R? I will be very glad for any help. Thanks a lot.
We could use ave from base R to create the rank column ("order1") of "value" by "group". If we need NAs in "order1" wherever "value" is NA, that can be set afterwards (df$order1[is.na(df$value)] <- NA):
df$order1 <- with(df, ave(value, group, FUN=rank))
df$order1[is.na(df$value)] <- NA
Or using data.table (NA^is.na(value) is NA for missing values and 1 otherwise, so it blanks out the rank exactly where value is NA):
library(data.table)
setDT(df)[, order1:=rank(value)* NA^(is.na(value)), by = group][]
# group value order1
#1: 1 12.1 3
#2: 1 10.3 1
#3: 1 NA NA
#4: 1 11.0 2
#5: 1 13.5 4
#6: 2 11.7 3
#7: 2 NA NA
#8: 2 10.4 2
#9: 2 9.7 1
You can use the rank() function applied to one group at a time to get your desired result. My solution is to write a small helper function and call it in a for loop. I'm sure there are more elegant approaches using various R libraries, but here is a solution using only base R.
df <- read.table('~/Desktop/stack_overflow28283818.csv', sep = ',', header = T)
#helper function
rankByGroup <- function(df = NULL, grp = 1)
{
rank(df[df$group == grp, 'value'])
}
# Remove NAs
df.na <- df[is.na(df$value),]
df.0 <- df[!is.na(df$value),]
# For loop over groups to list the ranks
for(grp in unique(df.0$group))
{
df.0[df.0$group == grp, 'order'] <- rankByGroup(df.0, grp)
print(grp)
}
# Append NAs
df.na$order <- NA
df.out <- rbind(df.0,df.na)
#re-sort for ordering given in OP (probably not really required)
df.out <- df.out[order(as.numeric(rownames(df.out))),]
This gives exactly the output desired, although I suspect that maintaining the position of the NAs in the data may not be necessary for your application.
> df.out
group value order
1 1 12.1 3
2 1 10.3 1
3 1 NA NA
4 1 11.0 2
5 1 13.5 4
6 2 11.7 3
7 2 NA NA
8 2 10.4 2
9 2 9.7 1
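As a side note, rank() can keep NAs in place via na.last = "keep", so the answers above can be condensed into a single base R line (a minimal sketch using the df defined in the question):
# rank within each group, leaving NA where value is NA
df$order <- ave(df$value, df$group,
                FUN = function(x) rank(x, na.last = "keep"))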

Rotating variable in ddply

I am trying to get means from a column in a data frame based on a unique value. So I am trying to get the mean of column b and column c in this example, based on the unique values in column a. I thought the .(a) would make it calculate by unique value in a (it gives the unique values of a), but it just gives a mean of the whole column b or c.
df2<-data.frame(a=seq(1:5),b=c(1:10), c=c(11:20))
simVars <- c("b", "c")
for ( var in simVars ){
print(var)
dat = ddply(df2, .(a), summarize, mean_val = mean(df2[[var]])) ## my script
assign(var, dat)
}
c
a mean_val
1 15.5
2 15.5
3 15.5
4 15.5
5 15.5
How can I have it take an average of each column based on the unique values in column a? Thanks.
You don't need a loop. Just calculate the means of b and c within a single call to ddply and the means will be calculated separately for each value of a. And, as #Gregor said, you don't need to re-specify the data frame name inside mean():
ddply(df2, .(a), summarise,
mean_b=mean(b),
mean_c=mean(c))
a mean_b mean_c
1 1 3.5 13.5
2 2 4.5 14.5
3 3 5.5 15.5
4 4 6.5 16.5
5 5 7.5 17.5
UPDATE: To get separate data frames for each column of means:
# Add a few additional columns to the data frame
df2 = data.frame(a=seq(1:5),b=c(1:10), c=c(11:20), d=c(21:30), e=c(31:40))
# New data frame with means by each level of column a
library(dplyr)
dfmeans = df2 %>%
group_by(a) %>%
summarise_each(funs(mean))
# Separate each column of means into a separate data frame and store it in a list:
means.list = lapply(names(dfmeans)[-1], function(x) {
cbind(dfmeans[,"a"], dfmeans[,x])
})
means.list
[[1]]
a b
1 1 3.5
2 2 4.5
3 3 5.5
4 4 6.5
5 5 7.5
[[2]]
a c
1 1 13.5
2 2 14.5
3 3 15.5
4 4 16.5
5 5 17.5
[[3]]
a d
1 1 23.5
2 2 24.5
3 3 25.5
4 4 26.5
5 5 27.5
[[4]]
a e
1 1 33.5
2 2 34.5
3 3 35.5
4 4 36.5
5 5 37.5
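Note that summarise_each()/funs() have since been deprecated in dplyr; in current releases (dplyr >= 1.0) the same per-group means can be written with across(), as a sketch:
library(dplyr)  # already loaded above

dfmeans <- df2 %>%
  group_by(a) %>%
  summarise(across(everything(), mean))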

Calculate mean value of sets of 4 sub-locations from multiple locations in a larger matrix

I am doing a data analysis on wall thickness measurements of circular tubes. I have the following matrix:
> head(datIn, 12)
Component Tube.number Measurement.location Sub.location Interval Unit Start
1 In 1 1 A 121 U6100 7/25/2000
2 In 1 1 A 122 U6100 5/24/2001
3 In 1 1 A 222 U6200 1/19/2001
4 In 1 1 A 321 U6300 6/1/2000
5 In 1 1 A 223 U6200 5/22/2002
6 In 1 1 A 323 U6300 6/18/2002
7 In 1 1 A 21 U6200 10/1/1997
8 In 1 1 A 221 U6200 6/3/2000
9 In 1 1 A 322 U6300 12/11/2000
10 In 1 1 B 122 U6100 5/24/2001
11 In 1 1 B 322 U6300 12/11/2000
12 In 1 1 B 21 U6200 10/1/1997
End Measurement Material.loss Material.loss.interval Run.hours.interval
1 5/11/2001 7.6 0.4 NA 6653.10
2 2/7/2004 6.1 1.9 1.5 15484.82
3 3/7/2002 8.5 -0.5 -0.5 8826.50
4 12/1/2000 7.8 0.2 0.2 4170.15
5 4/30/2003 7.4 0.6 1.1 6879.73
6 9/30/2003 7.9 0.1 -0.1 9711.56
7 4/20/2000 7.6 0.4 NA 15159.94
8 1/5/2001 8.0 0.0 -0.4 4728.88
9 5/30/2002 7.8 0.2 0.0 9829.75
10 2/7/2004 5.9 2.1 0.9 15484.82
11 5/30/2002 7.0 1.0 0.7 9829.75
12 4/20/2000 8.2 -0.2 NA 15159.94
Run.hours.prior.to.interval Total.run.hours.end.interval
1 0.00 6653.10
2 6653.10 22137.92
3 19888.82 28715.32
4 0.00 4170.15
5 28715.32 35595.05
6 30039.58 39751.14
7 0.00 15159.94
8 15159.94 19888.82
9 20209.83 30039.58
10 6653.10 22137.92
11 20209.83 30039.58
12 0.00 15159.94
Straight.or.In.Out.Middle.bend.1 Straight.or.In.Out.Middle.bend.2
1 Out Out
2 Out Out
3 Out Out
4 Out Out
5 Out Out
6 Out Out
7 Out Out
8 Out Out
9 Out Out
10 Middle Out
11 Middle Out
12 Middle Out
The Sub.location column has values A, B, C, D. They are measurements at the same measurement location but at different positions around the cross section, i.e. at 0, 90, 180, and 270 degrees around the tube.
I would like to make a plot that makes it clear which measurement location has the biggest wall thickness decrease over time.
To do this I first want to calculate the mean value of the wall thickness of a tube at each measurement location at each unique interval (the running hours are coupled to the interval).
I tried doing this with the following code:
par(mfrow=c(1,2))
myfunction <- function(mydata1) { return(mean(mydata1,na.rm=TRUE))}
AVmeasloc <- tapply(datIn$Measurement,list(as.factor(datIn$Sub.location),as.factor(datIn$Measurement.location), myfunction))
AVmeasloc
This doesn't seem to work. I would like to keep the tapply function, as I also calculated the standard deviation of some values with it, and it lets me make plots easily.
Does anyone have any advice how to tackle this problem?
From the code you've posted, there is a parenthesis error around list(); it should read:
AVmeasloc <- tapply(datIn$Measurement,list(as.factor(datIn$Sub.location),as.factor(datIn$Measurement.location)), myfunction)
This can now be cleaned up to (columns 3 and 4 of datIn are Measurement.location and Sub.location, and na.rm = TRUE is passed straight through to mean):
AVmeasloc <- tapply(datIn$Measurement,datIn[,c(3,4)],mean,na.rm=TRUE)
Here's a working example:
test.data <- data.frame(cat1 = c("A","A","A","B","B","B","C","C","D"),
cat2 = c(1,1,2,2,1,NA,2,1,1),
val = c(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9))
tapply(test.data$val, test.data[,c(1,2)],mean,na.rm=TRUE)
cat2
cat1 1 2
A 0.15 0.3
B 0.50 0.4
C 0.80 0.7
D 0.90 NA
