Calculating percentiles across certain rows in a data frame in R

My data has a temperature measurement for each day in a year, plus other variables needed for the analysis, by villageID. I would like to create a new variable holding the 95th percentile of all 365 temperature measurements for each village.
My data is in wide format and looks like this:
villageID temp1 temp2 temp3.... temp365 otherVars
1 1 70 86 98 79 x
2 2 73 89 99 86 x
3 3 71 82 96 75 x
4 4 78 79 94 81 x
5 5 90 91 89 85 x
I would like to create this 95th percentile variable, i.e. the temperature at which the 95th percentile starts. I would like to compute it across all temperature columns (columns 2:366) and keep all other variables unchanged.
Like this:
villageID temp1 temp2 temp3 .....temp365 otherVars 95per
1 1 70 86 98 79 x 81
2 2 73 89 99 86 x 90
3 3 71 82 96 75 x 86
4 4 78 79 94 81 x 82
5 5 90 91 89 85 x 99

Although I think you should keep your data in long format, here is some code that will compute it and put the data back into the wide format that you have. Just know that wide format is often not the best way to go about things, especially if you want to plot your data later:
library(tidyverse)
dat <- tribble(~"villageID", ~"temp1", ~"temp2", ~"temp3", ~"temp365",
               1, 70, 86, 98, 79,
               2, 73, 89, 99, 86,
               3, 71, 82, 96, 75,
               4, 78, 79, 94, 81,
               5, 90, 91, 89, 85)
dat %>%
  gather(key = "day", value = "temp", -villageID) %>%
  group_by(villageID) %>%
  mutate(perc_95 = quantile(temp, probs = .95)) %>%
  spread(day, temp)
#> # A tibble: 5 x 6
#> # Groups: villageID [5]
#> villageID perc_95 temp1 temp2 temp3 temp365
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 96.2 70 86 98 79
#> 2 2 97.5 73 89 99 86
#> 3 3 93.9 71 82 96 75
#> 4 4 92.0 78 79 94 81
#> 5 5 90.8 90 91 89 85
Created on 2019-02-27 by the reprex package (v0.2.1)
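As a side note, gather() and spread() have since been superseded by pivot_longer() and pivot_wider() in tidyr 1.0+. A sketch of the same computation with the newer verbs (same dat as above):
dat %>%
  pivot_longer(-villageID, names_to = "day", values_to = "temp") %>%
  group_by(villageID) %>%
  mutate(perc_95 = quantile(temp, probs = .95)) %>%
  pivot_wider(names_from = day, values_from = temp)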

In base R it would just be (assuming that only the temperature columns have the string "temp" in their names):
dfrm$temp95perc <-
  apply(dfrm[, grep("temp", names(dfrm))], # select just the `tempNNN` columns
        1,                                 # row-wise calcs
        quantile, probs = 0.95)            # pass `probs` on to `quantile`
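If any day's temperature can be missing, quantile() errors by default; a variant of the same call, assuming NAs should simply be skipped, forwards na.rm through apply()'s dots:
dfrm$temp95perc <-
  apply(dfrm[, grep("temp", names(dfrm))],    # temperature columns only
        1,                                    # one row at a time
        quantile, probs = 0.95, na.rm = TRUE) # extra arguments are forwarded to quantile()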

Related

Filtering multiple conditions with multiple variables using filter function in R

I am working on a dataset where I am trying to filter down some data before I start running operations on it. However, I am having the following issues:
Up until using the select() I get all the data from the selected variables.
Once I attempt to apply filters on one variable it works showing the filtered data. However, as soon as I attempt to do a second condition it prints out 0 observations.
Any help would be much appreciated. I am trying to figure out whether this is just semantically wrong or whether there is some syntax issue I'm missing. I have been searching and cannot find the fix. I feel like the issue has something to do with the logical operators, but I cannot figure out what needs to go there.
The not equal operators are for specific "coded" values inside the data set.
The code:
select(X1, X2, X3, X4) %>%
  filter(X1 != "97" &
           X1 != "98" &
           X1 != "99" &
           X2 != "88" &
           X2 != "77" &
           X2 != "99" &
           X3 != "88" &
           X3 != "77" &
           X4 != "99" &
           !is.na(X1)
           !is.na(X2)
           !is.na(X3)
           !is.na(X4))
As pointed out by Rui Barradas in the comments, use !x %in% y instead of multiple (in)equality conditions.
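For instance, on a single variable these two filters select the same rows (a minimal illustration; df stands in for your data):
# many inequality tests chained with &
df %>% filter(X1 != "97" & X1 != "98" & X1 != "99")
# the same rows via a set-membership test
df %>% filter(!X1 %in% c("97", "98", "99"))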
You replied in a comment that it was still not "printing any variables". If the function runs without errors, could it be that there are no observations left after filtering? Maybe you meant to use some OR operators between the variable chunks; see my second example to check whether that works for you.
For the missing filter, I assume you mean to keep only the complete cases.
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(tidyr) # for drop_na
vec <- c(NA, 80:99)
set.seed(42)
df <- data.frame(
  X1 = sample(vec, 20, replace = T),
  X2 = sample(vec, 20, replace = T),
  X3 = sample(vec, 20, replace = T),
  X4 = sample(vec, 20, replace = T)) %>% as_tibble()
# filtering with AND between each variable
df %>%
  filter(!(X1 %in% c(97, 98, 99)) &
           !(X2 %in% c(77, 88, 99)) &
           !(X3 %in% c(77, 88, 99)) &
           !(X4 %in% c(77, 88, 99))) %>%
  drop_na() # keeps only complete cases, with valid obs in every column
#> # A tibble: 10 x 4
#> X1 X2 X3 X4
#> <int> <int> <int> <int>
#> 1 83 98 96 98
#> 2 88 86 84 89
#> 3 82 81 80 81
#> 4 85 93 80 83
#> 5 82 86 84 80
#> 6 92 82 86 95
#> 7 93 83 95 92
#> 8 81 82 85 86
#> 9 87 80 82 86
#> 10 91 95 87 82
# filtering with OR between each variable
df %>%
  filter(!(X1 %in% c(77, 88, 99)) |
           !(X2 %in% c(77, 88, 99)) |
           !(X3 %in% c(77, 88, 99)) |
           !(X4 %in% c(77, 88, 99))) %>%
  drop_na() # keeps only complete cases, with valid obs in every column
#> # A tibble: 17 x 4
#> X1 X2 X3 X4
#> <int> <int> <int> <int>
#> 1 95 83 99 90
#> 2 83 98 96 98
#> 3 88 86 84 89
#> 4 82 81 80 81
#> 5 95 88 81 83
#> 6 93 89 99 92
#> 7 85 93 80 83
#> 8 82 86 84 80
#> 9 83 82 88 96
#> 10 92 82 86 95
#> 11 98 96 83 96
#> 12 93 83 95 92
#> 13 81 82 85 86
#> 14 87 80 82 86
#> 15 82 96 91 99
#> 16 83 81 88 82
#> 17 91 95 87 82
Created on 2021-06-26 by the reprex package (v2.0.0)
To add to Marcelo Avila's answer, there are also if_any and if_all in {dplyr}. Note that you can also include NA inside the set, as in !.x %in% c(NA, 99, 88).
library(dplyr)
# data taken from Marcelo's answer
vec <- c(NA, 80:99)
set.seed(42)
df <- data.frame(
  X1 = sample(vec, 20, replace = T),
  X2 = sample(vec, 20, replace = T),
  X3 = sample(vec, 20, replace = T),
  X4 = sample(vec, 20, replace = T)) %>% as_tibble()
df %>%
  filter(!if_any(X1:X4, is.na),
         !if_any(X1:X2, ~ .x %in% c(97, 98, 99)),
         !X3 %in% c(88, 77),
         X4 != 99)
#> # A tibble: 12 x 4
#> X1 X2 X3 X4
#> <int> <int> <int> <int>
#> 1 95 83 99 90
#> 2 88 86 84 89
#> 3 82 81 80 81
#> 4 95 88 81 83
#> 5 93 89 99 92
#> 6 85 93 80 83
#> 7 82 86 84 80
#> 8 92 82 86 95
#> 9 93 83 95 92
#> 10 81 82 85 86
#> 11 87 80 82 86
#> 12 91 95 87 82
Created on 2021-06-26 by the reprex package (v0.3.0)
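As a small aside, !if_any(cols, pred) can equivalently be written with if_all and the negated predicate; both of these keep only the rows with no missing values (illustrative sketch using the df above):
# negated if_any ...
df %>% filter(!if_any(X1:X4, is.na))
# ... is the same as if_all with the negated predicate
df %>% filter(if_all(X1:X4, ~ !is.na(.x)))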

Rowwise Column Count in Dataframe

Let's say I have the following dataframe
library(tibble)
country_df <- tibble(
  population = c(328, 38, 30, 56, 1393, 126, 57),
  population2 = c(133, 12, 99, 83, 1033, 101, 33),
  population3 = c(89, 39, 33, 56, 193, 126, 58),
  pop = 45
)
All I need is a concise way inside the mutate function to get the number of columns (population to population3) that are greater than the value of the pop column within each row.
So what I need is the following results (more specifically the GreaterTotal column). Note: I can get the answer by working through each column, but that would take a while with more columns.
population population2 population3 pop GreaterThan0 GreaterThan1 GreaterThan2 GreaterTotal
<dbl> <dbl> <dbl> <dbl> <lgl> <lgl> <lgl> <int>
1 328 133 89 45 TRUE TRUE TRUE 3
2 38 12 39 45 FALSE FALSE FALSE 0
3 30 99 33 45 FALSE TRUE FALSE 1
4 56 83 56 45 TRUE TRUE TRUE 3
5 1393 1033 193 45 TRUE TRUE TRUE 3
6 126 101 126 45 TRUE TRUE TRUE 3
7 57 33 58 45 TRUE FALSE TRUE 2
I've tried using apply with the row index, but I can't get at it. Can somebody please point me in the right direction?
You can select the 'population' columns, compare those columns with pop, and use rowSums to count how many of them are greater in each row.
cols <- grep('population', names(country_df))
country_df$GreaterTotal <- rowSums(country_df[cols] > country_df$pop)
# population population2 population3 pop GreaterTotal
# <dbl> <dbl> <dbl> <dbl> <dbl>
#1 328 133 89 45 3
#2 38 12 39 45 0
#3 30 99 33 45 1
#4 56 83 56 45 3
#5 1393 1033 193 45 3
#6 126 101 126 45 3
#7 57 33 58 45 2
In dplyr 1.0.0, you can do this with rowwise and c_across:
country_df %>%
  rowwise() %>%
  mutate(GreaterTotal = sum(c_across(population:population3) > pop))
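Note that rowwise() can be slow on large data; a vectorized sketch of the same result (still assuming dplyr >= 1.0) uses across() together with rowSums:
country_df %>%
  mutate(GreaterTotal = rowSums(across(population:population3) > pop))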
Using tidyverse, we can do
library(dplyr)
country_df %>%
  mutate(GreaterTotal = rowSums(select(., starts_with('population')) > .$pop))
Output:
# A tibble: 7 x 5
# population population2 population3 pop GreaterTotal
# <dbl> <dbl> <dbl> <dbl> <dbl>
#1 328 133 89 45 3
#2 38 12 39 45 0
#3 30 99 33 45 1
#4 56 83 56 45 3
#5 1393 1033 193 45 3
#6 126 101 126 45 3
#7 57 33 58 45 2

Simplify multiple rowSums looping through columns

I'm currently in R trying to create, for a data frame, multiple columns each holding the cumulative sum of the previous ones. Imagine I have a DF like this:
df =
sep-2016 oct-2016 nov-2016 dec-2016 jan-2017
1 70 153 NA 28 19
2 57 68 73 118 16
3 29 NA 19 32 36
4 177 36 3 54 53
and I want to append, for each month, the sum of all rows up to and including that month, so for October you end up with the sum of sep and oct, for November the sum of sep, oct and nov, and so on, ending up with something like this:
df=
sep-2016 oct-2016 nov-2016 dec-2016 jan-2017 status-Oct2016 status-Nov 2016
1 70 153 NA 28 19 223 223
2 57 68 73 118 16 105 198
3 29 NA 19 32 36 29 48
4 177 36 3 54 53 213 93
I want to know an efficient way to do this instead of writing lots of lines of rowSums(), and if I can also get the label on the iteration for each month that would be amazing!
Thanks!
We can use lapply to loop through the columns and apply rowSums.
dat2 <- as.data.frame(lapply(2:ncol(dat), function(i) {
  rowSums(dat[, 1:i], na.rm = TRUE)
}))
names(dat2) <- paste0("status-", names(dat[, -1]))
dat3 <- cbind(dat, dat2)
dat3
# sep-2016 oct-2016 nov-2016 dec-2016 jan-2017 status-oct-2016 status-nov-2016 status-dec-2016 status-jan-2017
# 1 70 153 NA 28 19 223 223 251 270
# 2 57 68 73 118 16 125 198 316 332
# 3 29 NA 19 32 36 29 48 80 116
# 4 177 36 3 54 53 213 216 270 323
DATA
dat <- read.table(text = " 'sep-2016' 'oct-2016' 'nov-2016' 'dec-2016' 'jan-2017'
1 70 153 NA 28 19
2 57 68 73 118 16
3 29 NA 19 32 36
4 177 36 3 54 53",
header = TRUE, stringsAsFactors = FALSE)
names(dat) <- c("sep-2016", "oct-2016", "nov-2016", "dec-2016", "jan-2017")
Honestly I have no idea why you would want your data in this format, but here is a tidyverse method of accomplishing it. It involves transforming the data to a tidy format before spreading it back out into your wide format. The key thing to note is that in a tidy format, where month is a variable in a single column instead of spread across multiple columns, you can simply use group_by(rowid) and cumsum to calculate all the values you want. The last few lines are constructing the status- column names and spreading the data back out into a wide format.
library(tidyverse)
df <- read_table2(
  "sep-2016 oct-2016 nov-2016 dec-2016 jan-2017
  70 153 NA 28 19
  57 68 73 118 16
  29 NA 19 32 36
  177 36 3 54 53"
)
df %>%
  rowid_to_column() %>%
  gather("month", "value", -rowid) %>%
  arrange(rowid) %>%
  group_by(rowid) %>%
  mutate(
    value = replace_na(value, 0),
    status = cumsum(value)
  ) %>%
  gather("vartype", "number", value, status) %>%
  mutate(colname = ifelse(vartype == "value", month, str_c("status-", month))) %>%
  select(rowid, number, colname) %>%
  spread(colname, number)
#> # A tibble: 4 x 11
#> # Groups: rowid [4]
#> rowid `dec-2016` `jan-2017` `nov-2016` `oct-2016` `sep-2016`
#> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 28.0 19.0 0 153 70.0
#> 2 2 118 16.0 73.0 68.0 57.0
#> 3 3 32.0 36.0 19.0 0 29.0
#> 4 4 54.0 53.0 3.00 36.0 177
#> # ... with 5 more variables: `status-dec-2016` <dbl>,
#> # `status-jan-2017` <dbl>, `status-nov-2016` <dbl>,
#> # `status-oct-2016` <dbl>, `status-sep-2016` <dbl>
Created on 2018-02-16 by the reprex package (v0.2.0).
A clean way to do it is by converting your data to long format.
library(tibble)
library(tidyr)
library(dplyr)
your_data <- tribble(~"sep_2016", ~"oct_2016", ~"nov_2016", ~"dec_2016", ~"jan_2017",
                     70, 153, NA, 28, 19,
                     57, 68, 73, 118, 16,
                     29, NA, 19, 32, 36,
                     177, 36, 3, 54, 53)
You can change the format of your data.frame with gather from the tidyr package.
your_data_long <- your_data %>%
  rowid_to_column() %>%
  gather(key = month_year, value = the_value, -rowid)
head(your_data_long)
#> # A tibble: 6 x 3
#> rowid month_year the_value
#> <int> <chr> <dbl>
#> 1 1 sep_2016 70
#> 2 2 sep_2016 57
#> 3 3 sep_2016 29
#> 4 4 sep_2016 177
#> 5 1 oct_2016 153
#> 6 2 oct_2016 68
Once your data.frame is in long format, you can compute the cumulative sum with cumsum and the dplyr functions mutate and group_by.
result <- your_data_long %>%
  group_by(rowid) %>%
  mutate(cumulative_value = cumsum(the_value))
result
#> # A tibble: 20 x 4
#> # Groups: rowid [4]
#> rowid month_year the_value cumulative_value
#> <int> <chr> <dbl> <dbl>
#> 1 1 sep_2016 70 70
#> 2 2 sep_2016 57 57
#> 3 3 sep_2016 29 29
#> 4 4 sep_2016 177 177
#> 5 1 oct_2016 153 223
#> 6 2 oct_2016 68 125
#> 7 3 oct_2016 NA NA
#> 8 4 oct_2016 36 213
#> 9 1 nov_2016 NA NA
#> 10 2 nov_2016 73 198
#> 11 3 nov_2016 19 NA
#> 12 4 nov_2016 3 216
#> 13 1 dec_2016 28 NA
#> 14 2 dec_2016 118 316
#> 15 3 dec_2016 32 NA
#> 16 4 dec_2016 54 270
#> 17 1 jan_2017 19 NA
#> 18 2 jan_2017 16 332
#> 19 3 jan_2017 36 NA
#> 20 4 jan_2017 53 323
If you want to retrieve the starting form, you can do it with spread.
My preferred solution would be:
# library(matrixStats)
DF <- as.matrix(df)
DF[is.na(DF)] <- 0                 # treat missing months as zero
RES <- matrixStats::rowCumsums(DF) # cumulative sums along each row
colnames(RES) <- paste0("status-", colnames(DF))
cbind.data.frame(df, RES)
This is closest to what you are looking for with the rowSums.
One option could be using the spread and gather functions from tidyverse.
Note: a status column has been added even for the first month, and the status columns are not in order, but the values are correct.
The approach is:
# Data
df <- read.table(text = "sep-2016 oct-2016 nov-2016 dec-2016 jan-2017
70 153 NA 28 19
57 68 73 118 16
29 NA 19 32 36
177 36 3 54 53", header = T, stringsAsFactors = F)
library(tidyverse)
# Just add a row number as sl
df <- df %>% mutate(sl = row_number())
# Calculate the cumulative sum after gathering and arranging by date
mod_df <- df %>%
  gather(key, value, -sl) %>%
  mutate(key = as.Date(paste("01", key, sep = "."), format = "%d.%b.%Y")) %>%
  arrange(sl, key) %>%
  group_by(sl) %>%
  mutate(status = cumsum(ifelse(is.na(value), 0L, value))) %>%
  select(-value) %>%
  mutate(key = paste("status", as.character(key, format = "%b.%Y"))) %>%
  spread(key, status)
# Finally, join the cumulative sum columns back to the original df and
# remove the sl column
inner_join(df, mod_df, by = "sl") %>% select(-sl)
# sep.2016 oct.2016 nov.2016 dec.2016 jan.2017 status Dec.2016 status Jan.2017 status Nov.2016 status Oct.2016 status Sep.2016
#1 70 153 NA 28 19 251 270 223 223 70
#2 57 68 73 118 16 316 332 198 125 57
#3 29 NA 19 32 36 80 116 48 29 29
#4 177 36 3 54 53 270 323 216 213 177
Another base solution, where we build a matrix accumulating the row sums:
status <- setNames(
  as.data.frame(t(apply(dat, 1, function(x)
    # zero out NAs, then accumulate the running row sum with Reduce
    Reduce(sum, '[<-'(x, is.na(x), 0), accumulate = TRUE)))),
  paste0("status-", names(dat)))
status
# status-sep-2016 status-oct-2016 status-nov-2016 status-dec-2016 status-jan-2017
# 1 70 223 223 251 270
# 2 57 125 198 316 332
# 3 29 29 48 80 116
# 4 177 213 216 270 323
Then bind it to your original data if needed :
cbind(dat,status[-1])

Filter the values of 15 columns by 3 SD with 100+ rows

I have a dataset with 15 numeric columns, col1 to col15, and 100 rows of data with a name attached to each row as a factor. I want to do a summary for each row across all 15 columns.
head(df2phcl[,c(1:16)])
col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 col12 col13 col14 col15 NAME
78 95 101 100 84 93 93 85 81 97 80 94 81 79 87 R04-001
100 61 96 75 98 92 99 99 102 83 84 NA 101 93 96 R04-002
81 84 82 83 77 86 90 92 92 78 86 91 59 80 84 R04-003
91 84 87 95 103 93 92 95 86 92 107 96 94 87 97 R04-004
72 79 66 98 84 75 85 83 75 80 91 65 90 81 73 R04-005
72 75 68 44 79 64 83 71 81 82 85 63 87 94 60 R04-006
My code for this is:
library(dplyr)
####Rachis
SUMCL <- df2phcl %>%
  group_by(name) %>%
  summarise(CL = mean(c(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15), na.rm=T),
            CLMAX = max(c(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15), na.rm=T),
            CLMIN = min(c(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15), na.rm=T),
            CLSTD = sd(c(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15), na.rm=T),
            OUT = outliers(c(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15), na.rm=T))
head(SUMCL)
tail(SUMCL)
My resulting analysis comes out as...
Error:
Evaluation error: missing value where TRUE/FALSE needed.
I've also tried this...
df2phcl$col1+col2+col3+col4+col5+col6+col7+col8+col9+col10+col11+col12+col13+co114+col15[!df2phcl$col1+col2+col3+col4+col5+col6+col7+col8+col9+col10+col11+col12+col13+col14+col15%in%boxplot.stats(df2phcl$col1+col2+col3+col4+col5+col6+col7+col8+col9+col10+co111+col12+col13+col14+col15)$out]
This returns ....
Error: object 'col2' not found
Not sure what I'm doing wrong; this works with mean, max, min, and sd.
> head(SUMCL)
# A tibble: 6 x 11
# Groups: ENTRY, NAME, HEADCODE, RHTGENES, HEAD, PL [6]
ENTRY NAME HEADCODE RHTGENES HEAD PL PH CL CLMAX CLMIN CLSTD
<int> <fctr> <fctr> <fctr> <fctr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 R04-001 CAW Rht1 Club 319 83 88.53333 101 78 7.989875
2 2 R04-002 LBW Wildtype Common 330 102 91.35714 102 61 11.770936
3 3 R04-003 CBW Rht2 Club 230 82 83.00000 92 59 8.220184
4 4 R04-004 LBW Rht1 Common 328 117 93.26667 107 84 6.192930
5 5 R04-005 CBW Rht1 Club 280 97 79.80000 98 65 9.182281
6 6 R04-006 LAW Rht1 Common 310 92 73.86667 94 44 12.749603
I just want to filter out the outliers at 3 SD or more and then use the dplyr package to do my statistics...
I'm not exactly sure what you're trying to do, so let me know if the code below is on the right track.
The approach below is to convert the data from wide to long format, which makes it much easier to do the summaries for each level of name.
library(tidyverse)
# Fake data
set.seed(2)
dat = as.data.frame(replicate(15, rnorm(100)))
names(dat) = paste0("col", 1:15)
dat$name = paste0(rep(LETTERS[1:10], each=10), rep(letters[1:10], 10))
# Convert data to long format, remove outliers and summarize
dat %>%
  gather(column, value, -name) %>% # reshape from wide to long
  group_by(name) %>%               # summarize by name
  mutate(value = replace(value, abs(value - mean(value)) > 2*sd(value), NA)) %>% # set outliers to NA
  summarise(mean = mean(value, na.rm=TRUE),
            max = max(value, na.rm=TRUE),
            sd = sd(value, na.rm=TRUE))
name mean max sd
1 Aa 0.007848188 1.238744 0.8510016
2 Ab -0.208536464 1.980401 1.2764606
3 Ac -0.152986713 1.587845 0.8443106
4 Ad -0.413543054 0.965692 0.7225872
5 Ae -0.112648322 1.178716 0.7269527
6 Af 0.442268890 2.048040 1.0350119
7 Ag 0.390627994 1.978260 0.8716681
8 Ah 0.080505879 2.396349 1.3128403
9 Ai 0.257925059 1.984474 1.0196722
10 Aj 0.137469703 1.470177 0.7192616
# ... with 90 more rows
I managed to get some of the column standard deviations changed; however, I'm not sure how many observations it took out. I wanted to take out an even amount from the top and the bottom of the distribution. A trimmed mean, for example, would take out 20% of the obs. from the top and bottom; what I was curious about was leaving out just the observations beyond +-3 SD of the distribution.
> SUMCL <- df2phcl %>%
+ gather(column, value, -c(ENTRY, NAME, HEADCODE, RHTGENES, HEAD,PL,PH)) %>% # reshape from wide to long
+ group_by(ENTRY, NAME, HEADCODE, RHTGENES, HEAD,PL,PH) %>% # summarize by name
+ mutate(value = replace(value, abs(value - mean(value)) > 2*sd(value), NA)) %>% # set outliers to NA
+ summarise(CL = mean(value, na.rm=TRUE),
+ CLMAX = max(value, na.rm=TRUE),
+ CLMIN = min(value, na.rm=TRUE),
+ N = sum(!is.na(value), na.rm=TRUE),
+ CLSTD= sd(value, na.rm=TRUE),
+ CLSE = (CLSTD / sqrt(N)))
> head(SUMCL)
# A tibble: 6 x 13
# Groups: ENTRY, NAME, HEADCODE, RHTGENES, HEAD, PL [6]
ENTRY NAME HEADCODE RHTGENES HEAD PL PH CL CLMAX CLMIN N CLSTD CLSE
<int> <fctr> <fctr> <fctr> <fctr> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <dbl> <dbl>
1 1 R04-001 CAW Rht1 Club 319 83 88.53333 101 78 15 7.989875 2.062977
2 2 R04-002 LBW Wildtype Common 330 102 91.35714 102 61 14 11.770936 3.145915
3 3 R04-003 CBW Rht2 Club 230 82 84.71429 92 77 14 5.029583 1.344213
4 4 R04-004 LBW Rht1 Common 328 117 92.28571 103 84 14 5.075258 1.356420
5 5 R04-005 CBW Rht1 Club 280 97 79.80000 98 65 15 9.182281 2.370855
6 6 R04-006 LAW Rht1 Common 310 92 76.00000 94 60 14 10.076629 2.693093
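Since the stated cutoff was 3 SD rather than 2, here is a minimal tweak of the approach above (a sketch assuming the same df2phcl and grouping columns as in the question, and that NAs should be skipped when computing the group mean and SD):
df2phcl %>%
  gather(column, value, -c(ENTRY, NAME, HEADCODE, RHTGENES, HEAD, PL, PH)) %>%
  group_by(ENTRY, NAME, HEADCODE, RHTGENES, HEAD, PL, PH) %>%
  # drop only the values more than 3 SD from the group mean
  mutate(value = replace(value, abs(value - mean(value, na.rm = TRUE)) > 3*sd(value, na.rm = TRUE), NA)) %>%
  summarise(CL = mean(value, na.rm = TRUE),
            CLSTD = sd(value, na.rm = TRUE))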

Create vector by matching vector to a dataframe [R]

I have the following dataframe:
> zCode <- sample(50:150, size = 10, replace = TRUE)
> x <- sample(50:150, size = 10, replace = TRUE)
> test <- data.frame(x,zCode )
> test
zCode x
1 110 114
2 108 150
3 57 100
4 53 98
5 114 67
6 143 126
7 110 95
8 106 101
9 103 70
10 149 73
I also have this vector:
> z <- c(53, 57, 110)
> z
[1] 53 57 110
I want to create a new dataframe based on vector Z, that pulls the maximum x value associated with that z-code, like so:
Z x
53 98
57 100
110 114
Here are some possibilities. They do not use any packages.
1) For each element of z, compute the subset of rows in test with that zCode and then take the maximum of its x values:
data.frame(z, x = sapply(z, function(z) max(subset(test, z == zCode)$x)))
giving:
z x
1 53 98
2 57 100
3 110 114
2) Another approach is to use aggregate to find all the maxima and the merge with z to get just those:
merge(data.frame(z), aggregate(x ~ zCode, test, max), by = 1, all.x = TRUE)
giving:
z x
1 53 98
2 57 100
3 110 114
Note: The input used, in reproducible form, is:
Lines <- "
zCode x
1 110 114
2 108 150
3 57 100
4 53 98
5 114 67
6 143 126
7 110 95
8 106 101
9 103 70
10 149 73"
test <- read.table(text = Lines)
z <- c(53, 57, 110)
Here is a data.table solution:
# Original data
library(data.table)
dt <- data.table(zCode = c(110, 108, 57, 53, 114, 143, 110, 106, 103, 149),
                 x = c(114, 150, 100, 98, 67, 126, 95, 101, 70, 73))
z <- c(53, 57, 110)
# a new dataframe based on vector z
dt[zCode %in% z, max(x), by = zCode]
zCode V1
1: 110 114
2: 57 100
3: 53 98
EDIT:
# Keeps the columns names unchanged
dt[zCode %in% z, .(x = max(x)), by = zCode]
zCode x
1: 110 114
2: 57 100
3: 53 98
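For completeness, a dplyr version of the same lookup (a sketch using the test and z objects defined above):
library(dplyr)
test %>%
  filter(zCode %in% z) %>% # keep only the z-codes of interest
  group_by(zCode) %>%
  summarise(x = max(x))    # max x per z-code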
