This question already has answers here:
How to select the rows with maximum values in each group with dplyr? [duplicate]
(6 answers)
Closed 5 years ago.
I have a similar problem with my own dataset and decided to practice on an example dataset. I'm trying to select the TailNumbers associated with the Max Air Time by Carrier.
Here's my solution thus far:
library(dplyr)
library(hflights)

hflights %>%
  group_by(UniqueCarrier, TailNum) %>%
  summarise(maxAT = max(AirTime)) %>%
  arrange(desc(maxAT))
This gives three columns from which I can eyeball the max AirTime values and then narrow them down with filter() statements. However, I feel like there's a more elegant way to do so.
You can use which.max to find the row with the maximum AirTime within each group and then slice it out:
hflights %>%
  select(UniqueCarrier, TailNum, AirTime) %>%
  group_by(UniqueCarrier) %>%
  slice(which.max(AirTime))
# A tibble: 15 x 3
# Groups: UniqueCarrier [15]
# UniqueCarrier TailNum AirTime
# <chr> <chr> <int>
# 1 AA N3FNAA 161
# 2 AS N626AS 315
# 3 B6 N283JB 258
# 4 CO N77066 549
# 5 DL N358NB 188
# 6 EV N716EV 173
# 7 F9 N905FR 190
# 8 FL N176AT 186
# 9 MQ N526MQ 220
#10 OO N744SK 225
#11 UA N457UA 276
#12 US N950UW 212
#13 WN N256WN 288
#14 XE N11199 204
#15 YV N907FJ 150
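If you're on a newer dplyr (1.0 or later), the same result can be obtained with slice_max, which also lets you control tie handling. This is a sketch assuming the same hflights columns as above:

```r
library(dplyr)
library(hflights)

# one row per carrier: the flight with the longest AirTime
hflights %>%
  group_by(UniqueCarrier) %>%
  slice_max(AirTime, n = 1, with_ties = FALSE) %>%
  select(UniqueCarrier, TailNum, AirTime)
```

With with_ties = TRUE (the default) you would instead get every flight that shares the per-carrier maximum.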
My question refers to the following (simplified) panel data, for which I would like to create some sort of xrd_stock.
#Setup data
library(tidyverse)
firm_id <- c(rep(1, 5), rep(2, 3), rep(3, 4))
firm_name <- c(rep("Cosco", 5), rep("Apple", 3), rep("BP", 4))
fyear <- c(seq(2000, 2004, 1), seq(2003, 2005, 1), seq(2005, 2008, 1))
xrd <- c(49,93,121,84,37,197,36,154,104,116,6,21)
df <- data.frame(firm_id, firm_name, fyear, xrd)
#Define variables
growth = 0.08
depr = 0.15
For a new variable called xrd_stock I'd like to apply the following mechanics:
each firm_id should be handled separately: group_by(firm_id)
where fyear is at minimum, calculate xrd_stock as: xrd/(growth + depr)
otherwise, calculate xrd_stock as: xrd + (1-depr) * [xrd_stock from previous row]
With the following code, I have already succeeded with steps 1 and 2 and parts of step 3.
df2 <- df %>%
  group_by(firm_id) %>%
  arrange(firm_id, fyear) %>% # ensure ascending fyear order within firm; not strictly needed here as df is already sorted
  mutate(xrd_stock = ifelse(fyear == min(fyear),
                            xrd / (growth + depr),
                            xrd + (1 - depr) * lag(xrd_stock)))
Difficulties occur in the else part of the function, such that R returns:
Error: Problem with `mutate()` input `xrd_stock`.
x object 'xrd_stock' not found
i Input `xrd_stock` is `ifelse(...)`.
i The error occurred in group 1: firm_id = 1.
Run `rlang::last_error()` to see where the error occurred.
From this error message, I understand that R cannot refer to the just-created xrd_stock in the previous row (which makes sense if mutate() does not evaluate the column strictly from top to bottom). Notably, when I simply put a 9 in the else part, the code above runs without any errors.
Can anyone help me with this problem so that the results eventually look as shown below? I am happy to answer additional questions if required. Thank you very much in advance to everyone who looks at my question :-)
Target results (Excel-calculated):
id name fyear xrd xrd_stock Calculation for xrd_stock
1 Cosco 2000 49 213 =49/(0.08+0.15)
1 Cosco 2001 93 274 =93+(1-0.15)*213
1 Cosco 2002 121 354 …
1 Cosco 2003 84 385 …
1 Cosco 2004 37 364 …
2 Apple 2003 197 857 =197/(0.08+0.15)
2 Apple 2004 36 764 =36+(1-0.15)*857
2 Apple 2005 154 803 …
3 BP 2005 104 452 …
3 BP 2006 116 500 …
3 BP 2007 6 431 …
3 BP 2008 21 388 …
Arrange the data by fyear so that the minimum year is always the first row of each group; you can then use purrr::accumulate to carry the calculation forward.
library(dplyr)
df %>%
  arrange(firm_id, fyear) %>%
  group_by(firm_id) %>%
  mutate(xrd_stock = purrr::accumulate(xrd[-1], ~ .y + (1 - depr) * .x,
                                       .init = first(xrd) / (growth + depr)))
# firm_id firm_name fyear xrd xrd_stock
# <dbl> <chr> <dbl> <dbl> <dbl>
# 1 1 Cosco 2000 49 213.
# 2 1 Cosco 2001 93 274.
# 3 1 Cosco 2002 121 354.
# 4 1 Cosco 2003 84 385.
# 5 1 Cosco 2004 37 364.
# 6 2 Apple 2003 197 857.
# 7 2 Apple 2004 36 764.
# 8 2 Apple 2005 154 803.
# 9 3 BP 2005 104 452.
#10 3 BP 2006 116 500.
#11 3 BP 2007 6 431.
#12 3 BP 2008 21 388.
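To see how accumulate threads the running result (.x) into each new element (.y), here is a minimal standalone sketch using the first three Cosco values from the example:

```r
library(purrr)

growth <- 0.08
depr <- 0.15
x <- c(49, 93, 121)

# s[1] = x[1] / (growth + depr); s[i] = x[i] + (1 - depr) * s[i-1]
s <- accumulate(x[-1], ~ .y + (1 - depr) * .x,
                .init = x[1] / (growth + depr))
round(s)  # 213 274 354 -- the first three xrd_stock values above
```

.init seeds the recursion with the minimum-year formula, and each later element only needs the previous accumulated value, which is exactly what lag(xrd_stock) inside mutate() cannot provide.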
I have data similar to this Sample Data:
Cities Country Date Cases
1 BE A 2/12/20 12
2 BD A 2/12/20 244
3 BF A 2/12/20 1
4 V 2/12/20 13
5 Q 2/13/20 2
6 D 2/14/20 4
7 GH N 2/15/20 6
8 DA N 2/15/20 624
9 AG J 2/15/20 204
10 FS U 2/16/20 433
11 FR U 2/16/20 38
I want to organize the data by date and country and then sum each country's daily cases. However, when I try something like the following, it returns the overall total instead:
my_data %>%
  group_by(Country, Date) %>%
  summarize(Cases = sum(Cases))
Your summarize function is likely being masked by one from another package (plyr?). Try calling dplyr::summarize explicitly, like this:
my_data %>%
  group_by(Country, Date) %>%
  dplyr::summarize(Cases = sum(Cases))
# A tibble: 7 x 3
# Groups: Country [7]
Country Date Cases
<fct> <fct> <int>
1 A 2/12/20 257
2 D 2/14/20 4
3 J 2/15/20 204
4 N 2/15/20 630
5 Q 2/13/20 2
6 U 2/16/20 471
7 V 2/12/20 13
I sympathize with you; this can be very frustrating. I have gotten into the habit of always using dplyr::select, dplyr::filter and dplyr::summarize. Otherwise you spend needless time frustrated about why your code isn't working.
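To check which package's summarize wins on your search path, base R's find() lists every attached environment that defines the name, in masking order (the package names in the comment are only an illustration of what you might see):

```r
# everything on the search path that defines `summarize`, highest priority first
find("summarize")
# e.g. c("package:plyr", "package:dplyr") would mean plyr's version masks dplyr's
```

If the first entry is not "package:dplyr", the unprefixed call goes to the masking package.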
We can also use aggregate from base R:
aggregate(Cases ~ Country + Date, my_data, sum)
This question already has answers here:
How to sum a variable by group
(18 answers)
Closed 3 years ago.
I have a number of observations from the same unit, and I need to merge the rows. So a data frame like
data.frame(
  fir = c("001", "006", "001", "006", "062"),
  sec = c(10, 5, 6, 7, 8),
  thd = c(45, 67, 84, 54, 23))
fir sec thd
001 10 45
006 5 67
001 6 84
006 7 54
062 8 23
The first column has a 3 digit number representing a unit. I need to add the rows together to get a total for each unit. The other columns are numeric values that need adding together. So the dataframe would look like,
fir sec thd
001 16 129
006 12 121
062 8 23
I need it to work for any unique number in the first column.
Any ideas? Thank you for any help!
Welcome! This is a classic group-by operation: we group by fir and then, within each group, take the sum of the sec and thd columns.
library(tidyverse)
df <- data.frame(
  fir = c("001", "006", "001", "006", "062"),
  sec = c(10, 5, 6, 7, 8),
  thd = c(45, 67, 84, 54, 23))
df %>%
  group_by(fir) %>%
  summarise(sec_sum = sum(sec),
            thd_sum = sum(thd))
We can do a group-by sum:
library(dplyr)
df1 %>%
  group_by(fir) %>%
  summarise_all(sum)
# A tibble: 3 x 3
# fir sec thd
# <fct> <dbl> <dbl>
#1 001 16 129
#2 006 12 121
#3 062 8 23
Or with aggregate from base R
aggregate(. ~ fir, df1, sum)
data
df1 <- data.frame(
  fir = c("001", "006", "001", "006", "062"),
  sec = c(10, 5, 6, 7, 8),
  thd = c(45, 67, 84, 54, 23))
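As a side note, in current dplyr (1.0+) summarise_all is superseded by across(); with the same df1 the equivalent would be:

```r
library(dplyr)

df1 <- data.frame(
  fir = c("001", "006", "001", "006", "062"),
  sec = c(10, 5, 6, 7, 8),
  thd = c(45, 67, 84, 54, 23))

# sum every listed column within each fir group
df1 %>%
  group_by(fir) %>%
  summarise(across(c(sec, thd), sum))
```

across() also scales to selections like where(is.numeric) when you don't want to name columns explicitly.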
I am very, very new to any type of coding language. I am used to Pivot tables in Excel, and trying to replicate a pivot I have done in Excel in R. I have spent a long time searching the internet/ YouTube, but I just can't get it to work.
I am looking to produce a table in which I the left hand side column shows a number of locations, and across the top of the table it shows different pages that have been viewed. I want to show in the table the number of views per location which each of these pages.
The data frame 'specificreports' shows all views over the past year for different pages on an online platform. I want to filter for the month of October, and then pivot the different Employee Teams against the number of views for different pages.
specificreports <- readxl::read_excel("Multi-Tab File - Dashboard Usage.xlsx",
                                      sheet = "Specific Reports")
specificreportsLocal <- tbl_df(specificreports)

specificreportsLocal %>%
  filter(Month == "October") %>%
  group_by(`Employee Team`)
This bit works, in that it filters entries for the month of October and groups the different team names. After this I have tried using the summarise function to count the number of hits, but I can't get it to work at all: I keep getting errors about data types, and I get confused because the solutions I look up keep using different packages.
I would appreciate any help, using the simplest way of doing this as I am a total newbie!
Thanks in advance,
Holly
Let's see if I can help a bit. It's hard to know what your data looks like from the info you gave us, so I'm going to guess and make some fake data for us to play with. It's worth noting that having field names with spaces in them is going to make your life really hard; you should start by renaming your fields to something more manageable. Since I'm just making data up, I'll give my fields names without spaces:
library(tidyverse)
## this makes some fake data
## a data frame with 3 fields: month, team, value
n <- 100
specificreportsLocal <-
data.frame(
month = sample(1:12, size = n, replace = TRUE),
team = letters[1:5],
value = sample(1:100, size = n, replace = TRUE)
)
That's just a data frame called specificreportsLocal with three fields: month, team, value
Let's do some things with it:
# This will give us total values by team when month = 10
specificreportsLocal %>%
  filter(month == 10) %>%
  group_by(team) %>%
  summarize(total_value = sum(value))
#> # A tibble: 4 x 2
#> team total_value
#> <fct> <int>
#> 1 a 119
#> 2 b 172
#> 3 c 67
#> 4 d 229
I think that's sort of like what you already did, except I added the summarize to show how it works.
Now let's use all months and reshape it from 'long' to 'wide'
# if I want to see all months I leave out the filter and
# add a group_by month
specificreportsLocal %>%
  group_by(team, month) %>%
  summarize(total_value = sum(value)) %>%
  head(5) # this just shows the first 5 values
#> # A tibble: 5 x 3
#> # Groups: team [1]
#> team month total_value
#> <fct> <int> <int>
#> 1 a 1 17
#> 2 a 2 46
#> 3 a 3 91
#> 4 a 4 69
#> 5 a 5 83
# to make this 'long' data 'wide', we can use the `spread` function
specificreportsLocal %>%
  group_by(team, month) %>%
  summarize(total_value = sum(value)) %>%
  spread(team, total_value)
#> # A tibble: 12 x 6
#> month a b c d e
#> <int> <int> <int> <int> <int> <int>
#> 1 1 17 122 136 NA 167
#> 2 2 46 104 158 94 197
#> 3 3 91 NA NA NA 11
#> 4 4 69 120 159 76 98
#> 5 5 83 186 158 19 208
#> 6 6 103 NA 118 105 84
#> 7 7 NA NA 73 127 107
#> 8 8 NA 130 NA 166 99
#> 9 9 125 72 118 135 71
#> 10 10 119 172 67 229 NA
#> 11 11 107 81 NA 131 49
#> 12 12 174 87 39 NA 41
Created on 2018-12-01 by the reprex package (v0.2.1)
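Since this answer was written, spread has been superseded by tidyr's pivot_wider; with the same fake data, the reshape step could also be written as (a sketch, not run against the original random seed):

```r
library(dplyr)
library(tidyr)

specificreportsLocal %>%
  group_by(team, month) %>%
  summarize(total_value = sum(value), .groups = "drop") %>%
  pivot_wider(names_from = team, values_from = total_value)
```

names_from picks the column whose values become the new column headers, and values_from picks the column that fills the cells; missing team/month combinations come through as NA, just as with spread.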
Now I'm not really sure if that's what you want. So feel free to make a comment on this answer if you need any of this clarified.
Welcome to Stack Overflow!
I'm not sure I correctly understand your need without a data sample, but this may work for you:
library(rpivotTable)
specificreportsLocal <- specificreportsLocal %>% filter(Month == "October")
rpivotTable(specificreportsLocal, rows = "Employee Team", cols = "page",
            vals = "views", aggregatorName = "Sum")
Otherwise, if you do not need it interactive (as the Pivot Tables in Excel), this may work as well:
specificreportsLocal %>%
  filter(Month == "October") %>%
  group_by_at(c("Employee Team", "page")) %>%
  summarise(nr_views = sum(views, na.rm = TRUE))
I have got a data frame of Germany from 2012 with 8187 rows for 8187 postal codes (and about 10 variables listed as columns), but with no coordinates. Additionally, I have got coordinates of a different shapefile with 8203 rows (also including mostly the same postal codes).
I need the correct coordinates of the 8203 cases to be assigned to the 8187 cases of the initial data frame.
The problem: the number of correct assignments needed is not simply 8187 with 16 cases left over (8203 - 8187 = 16); it is more complicated. Some towns (with postal codes) from 2012 are not listed in the more recent shapefile, and vice versa.
(I) Perhaps the easiest solution would be to obtain the coordinates from 2012 (unprojected: CRS("+init=epsg:4326")). --> Does anybody know an open-source platform for this purpose, and would it cover exactly the 8187 postal codes?
(II) Or: does anybody have experience with assigning coordinates from one year to a data set of a different year? Or should this be avoided altogether because of slightly shifting borders and coordinates (especially when the data are to be mapped in polygons from 2012), and because some towns are missing from the older and from the newer data set?
I would appreciate your expert advice on how to approach (and hopefully solve) this issue!
EDIT - MWE:
# data set from 2012
> df1
# A tibble: 9 x 4
ID PLZ5 Name Var1
<dbl> <dbl> <chr> <dbl>
1 1 1067 Dresden 01067 40
2 2 1069 Dresden 01069 110
3 224 4571 Rötha 0
4 225 4574 Deutzen 120
5 226 4575 Neukieritzsch 144
6 262 4860 Torgau 23
7 263 4862 Mockrehna 57
8 8186 99996 Menteroda 0
9 8187 99998 Körner 26
# coordinates of recent shapefile
> df2
# A tibble: 9 x 5
ID PLZ5 Name Longitude Latitude
<dbl> <dbl> <chr> <dbl> <dbl>
1 1 1067 Dresden-01067 13.71832 51.06018
2 2 1069 Dresden-01069 13.73655 51.03994
3 224 4571 Roetha 12.47311 51.20390
4 225 4575 Neukieritzsch 12.41355 51.15278
5 260 4860 Torgau 12.94737 51.55790
6 261 4861 Bennewitz 13.00145 51.51125
7 262 4862 Mockrehna 12.83097 51.51125
8 8202 99996 Obermehler 10.59146 51.28864
9 8203 99998 Koerner 10.55294 51.21257
Hence,
4 225 4574 Deutzen 120
--> is not listed in df2 and:
6 261 4861 Bennewitz 13.00145 51.51125
--> is not listed in df1.
Any ideas concerning (I) and (II)?
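Whatever the coordinate source, a dplyr sketch of the matching step may help frame the problem: left_join attaches coordinates wherever PLZ5 matches, and anti_join in both directions lists exactly the postal codes that exist in only one source. The tiny stand-in frames below follow the MWE's column names:

```r
library(dplyr)

# tiny stand-ins for df1 (2012 data) and df2 (recent shapefile)
df1 <- data.frame(PLZ5 = c(1067, 4574, 4860), Var1 = c(40, 120, 23))
df2 <- data.frame(PLZ5 = c(1067, 4860, 4861),
                  Longitude = c(13.71832, 12.94737, 13.00145),
                  Latitude  = c(51.06018, 51.55790, 51.51125))

# coordinates attached where PLZ5 matches; NA for 2012 codes absent from the shapefile
merged <- left_join(df1, df2, by = "PLZ5")

# codes only in the 2012 data (e.g. Deutzen) / only in the shapefile (e.g. Bennewitz)
only_2012  <- anti_join(df1, df2, by = "PLZ5")
only_shape <- anti_join(df2, df1, by = "PLZ5")
```

The two anti_join results give you the exact lists of unmatched codes to resolve by hand (or via a crosswalk of postal-code changes), which is the part no automated join can decide for you.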