I have the following data
county <- c("a", "a", "a", "b", "b", "c")
id <- c(1, 2, 3, 4, 5, 6)
data <- data.frame(county, id)
I need to convert from long to wide and get the following output
county <- c("a", "b", "c")
id__0 <- c(1, 4, 6)
id__1 <- c(2, 5, NA)
id__2 <- c(3, NA, NA)
data2 <- data.frame(county, id__0, id__1, id__2)
My main problem is not in converting from long to wide, but how to make the columns start with id__0.
You could create an intermediate variable by grouping by county and using mutate() to build a sequence starting at 0 within each county, then pivot_wider() on that:
library(tidyr)
library(dplyr)
data %>%
  group_by(county) %>%
  mutate(id_count = seq(n()) - 1) %>%
  pivot_wider(id_cols = county, names_from = id_count, values_from = id, names_prefix = "id_")
#> # A tibble: 3 x 4
#> # Groups: county [3]
#> county id_0 id_1 id_2
#> <chr> <dbl> <dbl> <dbl>
#> 1 a 1 2 3
#> 2 b 4 5 NA
#> 3 c 6 NA NA
Created on 2022-02-10 by the reprex package (v2.0.1)
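If you want the column names to match the exact id__0 pattern from the question (double underscore), the same call works with a different prefix; a minimal variation:

data %>%
  group_by(county) %>%
  mutate(id_count = seq(n()) - 1) %>%
  pivot_wider(id_cols = county, names_from = id_count, values_from = id, names_prefix = "id__")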
I want to use the scale function, but on each pair of columns: the mean should be calculated over a pair of columns rather than over each single column.
In detail:
This is my data, for example:
phone  phone1_X  phone2  phone2_X  phone3  phone3_X
    1         2       3         4       5         6
    2         4       6         8      10        12
I want to use the scale function on each pair: phone1 + phone1_X, phone2 + phone2_X, etc.
Each pair shares the same base name ("phone1"), but the second column always carries an additional "_X" (a different condition in the experiment).
In the end, I wish to have the original table but in z-scores (as mentioned above, with the mean calculated per pair of columns rather than per single column).
Thank you so much!
There might be a more elegant way, but this is how I'd do it.
library(dplyr)
library(tidyr)
# example data from the question's table
df <- data.frame(phone = c(1, 2), phone1_X = c(2, 4), phone2 = c(3, 6),
                 phone2_X = c(4, 8), phone3 = c(5, 10), phone3_X = c(6, 12))

df %>%
  pivot_longer(cols = -phone) %>%
  group_by(phone, name = stringr::str_extract(name, 'phone[0-9]?')) %>%
  summarise(mean_value = mean(value), .groups = 'drop') %>%
  pivot_wider(names_from = name, values_from = mean_value)
#> # A tibble: 2 × 4
#> phone phone1 phone2 phone3
#> <dbl> <dbl> <dbl> <dbl>
#> 1 1 2 3.5 5.5
#> 2 2 4 7 11
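If you then need the original wide table expressed as z-scores rather than per-pair means, one possible extension of the same idea (a sketch, not part of the original answer, assuming the pairs are identified by the phoneN prefix) is to standardise within each pair in long format and pivot back:

df %>%
  pivot_longer(cols = -phone) %>%
  group_by(pair = stringr::str_extract(name, 'phone[0-9]+')) %>%
  mutate(value = (value - mean(value)) / sd(value)) %>%   # z-score within each column pair
  ungroup() %>%
  select(-pair) %>%
  pivot_wider(names_from = name, values_from = value)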
My data comes from a database which, depending on when I run my SQL query, could contain different values for POS from one week to the next.
Not knowing which values will be in a variable makes it very hard to automate the creation of a report.
My data looks as follows:
sample <- data.frame(DRUG = c("A", "A", "B"),
                     POS = c("Hospital", "Physician", "Home"),
                     GROSS_COST = c(50, 100, 60),
                     NET_COST = c(45, 80, 40))
I need to pivot this data frame wider so that there's a column for each pos by cost (gross & net).
This can be easily achieved using pivot_wider:
x <- sample %>% pivot_wider(names_from = POS, values_from = c(GROSS_COST,NET_COST))
Objective
I would like to keep the columns for each POS together, i.e. GROSS_COST_Hospital and NET_COST_Hospital would be side by side, and similarly for all other POS columns.
Is there an elegant way to group columns using string matching?
Unfortunately, I don't think there is a direct solution to this (yet!); see https://github.com/tidyverse/tidyr/issues/839 .
For now, you can get the data in long format so that you can control the ordering the way you want.
library(tidyr)
sample %>%
  pivot_longer(cols = c(GROSS_COST, NET_COST)) %>%
  pivot_wider(names_from = c(name, POS), values_from = value)
# DRUG GROSS_COST_Hosp… NET_COST_Hospit… GROSS_COST_Phys… NET_COST_Physic…
# <chr> <dbl> <dbl> <dbl> <dbl>
#1 A 50 45 100 80
#2 B NA NA NA NA
# … with 2 more variables: GROSS_COST_Home <dbl>, NET_COST_Home <dbl>
We can do the ordering in a select step:
library(dplyr)
library(tidyr)
library(stringr)
sample %>%
  pivot_wider(names_from = POS, values_from = c(GROSS_COST, NET_COST)) %>%
  select(DRUG, names(.)[-1][order(str_extract(names(.)[-1], '[^_]+$'))])
# A tibble: 2 x 7
# DRUG GROSS_COST_Home NET_COST_Home GROSS_COST_Hospital NET_COST_Hospital GROSS_COST_Physician NET_COST_Physician
# <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#1 A NA NA 50 45 100 80
#2 B 60 40 NA NA NA NA
A data.table option using dcast + melt:
library(data.table)
dcast(melt(setDT(sample), id.vars = c("DRUG", "POS")), DRUG ~ variable + POS)
DRUG GROSS_COST_Home GROSS_COST_Hospital GROSS_COST_Physician NET_COST_Home
1: A NA 50 100 NA
2: B 60 NA NA 40
NET_COST_Hospital NET_COST_Physician
1: 45 80
2: NA NA
With the advent of tidyr 1.2.0, the issue is finally resolved; you can now do this directly using the names_vary argument:
library(tidyr)
sample <- data.frame(DRUG = c("A","A","B"),POS = c("Hospital","Physician","Home"),GROSS_COST = c(50,100,60), NET_COST = c(45,80,40))
sample %>%
  pivot_wider(names_from = POS, values_from = c(GROSS_COST, NET_COST), names_vary = 'slowest')
#> # A tibble: 2 x 7
#> DRUG GROSS_COST_Hospital NET_COST_Hospital GROSS_COST_Physi~ NET_COST_Physic~
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 A 50 45 100 80
#> 2 B NA NA NA NA
#> # ... with 2 more variables: GROSS_COST_Home <dbl>, NET_COST_Home <dbl>
Created on 2022-02-18 by the reprex package (v2.0.1)
I have a tibble, df. I would like to group it and then use dplyr::pull to create vectors from the grouped data frame. I have provided a reprex below.
df is the base tibble, and my desired output is reflected by df2; I just don't know how to get there programmatically. I tried to use pull to achieve this output, but pull did not seem to respect the group_by() and instead created a vector out of the whole column. Is what I'm trying to achieve possible with dplyr or base R? Note: new_col is supposed to be a vector created from the name column.
library(tidyverse)
library(reprex)
df <- tibble(group = c(1,1,1,1,2,2,2,3,3,3,3,3),
             name = c('Jim','Deb','Bill','Ann','Joe','Jon','Jane','Jake','Sam','Gus','Trixy','Don'),
             type = c(1,2,3,4,3,2,1,2,3,1,4,5))
df
#> # A tibble: 12 x 3
#> group name type
#> <dbl> <chr> <dbl>
#> 1 1 Jim 1
#> 2 1 Deb 2
#> 3 1 Bill 3
#> 4 1 Ann 4
#> 5 2 Joe 3
#> 6 2 Jon 2
#> 7 2 Jane 1
#> 8 3 Jake 2
#> 9 3 Sam 3
#> 10 3 Gus 1
#> 11 3 Trixy 4
#> 12 3 Don 5
# Desired Output - New Col is a column of vectors
df2 <- tibble(group = c(1, 2, 3),
              name = c("Jim", "Jane", "Gus"),
              type = c(1, 1, 1),
              new_col = c("'Jim','Deb','Bill','Ann'", "'Joe','Jon','Jane'", "'Jake','Sam','Gus','Trixy','Don'"))
df2
#> # A tibble: 3 x 4
#> group name type new_col
#> <dbl> <chr> <dbl> <chr>
#> 1 1 Jim 1 'Jim','Deb','Bill','Ann'
#> 2 2 Jane 1 'Joe','Jon','Jane'
#> 3 3 Gus 1 'Jake','Sam','Gus','Trixy','Don'
Created on 2020-11-14 by the reprex package (v0.3.0)
Maybe this is what you are looking for:
library(dplyr)
df <- tibble(group = c(1,1,1,1,2,2,2,3,3,3,3,3),
             name = c('Jim','Deb','Bill','Ann','Joe','Jon','Jane','Jake','Sam','Gus','Trixy','Don'),
             type = c(1,2,3,4,3,2,1,2,3,1,4,5))

df %>%
  group_by(group) %>%
  mutate(new_col = name, name = first(name, order_by = type), type = first(type, order_by = type)) %>%
  group_by(name, type, .add = TRUE) %>%
  summarise(new_col = paste(new_col, collapse = ","))
#> `summarise()` regrouping output by 'group', 'name' (override with `.groups` argument)
#> # A tibble: 3 x 4
#> # Groups: group, name [3]
#> group name type new_col
#> <dbl> <chr> <dbl> <chr>
#> 1 1 Jim 1 Jim,Deb,Bill,Ann
#> 2 2 Jane 1 Joe,Jon,Jane
#> 3 3 Gus 1 Jake,Sam,Gus,Trixy,Don
EDIT: If new_col should be a list of vectors, then you could use summarise(new_col = list(c(new_col))):
df %>%
  group_by(group) %>%
  mutate(new_col = name, name = first(name, order_by = type), type = first(type, order_by = type)) %>%
  group_by(name, type, .add = TRUE) %>%
  summarise(new_col = list(c(new_col)))
Another option would be to use tidyr::nest:
df %>%
  group_by(group) %>%
  mutate(new_col = name, name = first(name, order_by = type), type = first(type, order_by = type)) %>%
  nest(new_col = new_col)
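Since the question mentions dplyr::pull, a further option (a sketch, not part of the answer above) is to skip the summary columns entirely and build a plain list of name vectors, one per group, with group_split() and pull():

library(dplyr)
library(purrr)

name_vectors <- df %>%
  group_split(group) %>%
  map(~ pull(.x, name))   # list of character vectors, one element per group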
Is there a way to create the following output (assuming a lot of IDs and a lot more attributes)?
I am stuck after calculating the % of total by ATT1 within ID, then ATT2, etc. I'm not sure how to go about turning the rows into column headers and aggregating.
Input File (df in R):
ID ATT1 ATT2 ATT3 ATT4 Value
1 a x d i 10
1 a y d j 10
1 a y d k 10
1 b y c k 10
1 b y c l 10
2 a x c k 20
…
And I want the output file to look like (ATT4_l is cut off):
ID ATT1_a ATT1_b ATT2_x ATT2_y ATT3_d ATT3_c ATT4_i ATT4_j ATT4_k
1 0.6 0.4 0.2 0.8 0.6 0.4 0.2 0.2 0.4
...
I tried using dplyr:
df %>% group_by(ID, ATT1) %>% mutate(proc = (Value/sum(Value) * 100))
But I am not sure what to do once I have all the ATT values calculated to get them into columns, aggregated so that each ID has only one row of data.
You can do this with the two main workhorses of the tidyverse: dplyr for calculations and tidyr for reshaping data. Some of the reshaping is convoluted so I'm breaking it into steps.
library(dplyr)
library(tidyr)
...
If you gather the data from its original wide format into a long format, you'll have a column of IDs, a column of ATTx values, a column of letters (I don't know the contextual meaning of these, so I'm literally calling them letters), and a column of values. From this format, you can group observations by combinations of ID, ATT, and letter, and you can later stick ATTs and letters together in the way you've laid out.
df %>%
  gather(key = att, value = letter, -ID, -Value) %>%
  head()
#> # A tibble: 6 x 4
#> ID Value att letter
#> <int> <int> <chr> <chr>
#> 1 1 10 ATT1 a
#> 2 1 10 ATT1 a
#> 3 1 10 ATT1 a
#> 4 1 10 ATT1 b
#> 5 1 10 ATT1 b
#> 6 2 20 ATT1 a
After grouping, calculate total values for each ID/ATT/letter combo:
df %>%
  gather(key = att, value = letter, -ID, -Value) %>%
  group_by(ID, att, letter) %>%
  summarise(group_val = sum(Value)) %>%
  head()
#> # A tibble: 6 x 4
#> # Groups: ID, att [3]
#> ID att letter group_val
#> <int> <chr> <chr> <int>
#> 1 1 ATT1 a 30
#> 2 1 ATT1 b 20
#> 3 1 ATT2 x 10
#> 4 1 ATT2 y 40
#> 5 1 ATT3 c 20
#> 6 1 ATT3 d 30
Using mutate, you can calculate the share of each observation within its larger group. The preceding summarise() dropped one layer of the grouping hierarchy, so this is the share of values for each letter within a given ID and ATT. Since you no longer need the total values, just their shares, drop that column, and stick the ATTs and letters back together with unite.
df %>%
  gather(key = att, value = letter, -ID, -Value) %>%
  group_by(ID, att, letter) %>%
  summarise(group_val = sum(Value)) %>%
  mutate(share = group_val / sum(group_val)) %>%
  select(-group_val) %>%
  unite(group, att, letter, sep = "_") %>%
  head()
#> # A tibble: 6 x 3
#> # Groups: ID [1]
#> ID group share
#> <int> <chr> <dbl>
#> 1 1 ATT1_a 0.6
#> 2 1 ATT1_b 0.4
#> 3 1 ATT2_x 0.2
#> 4 1 ATT2_y 0.8
#> 5 1 ATT3_c 0.4
#> 6 1 ATT3_d 0.6
Now you have all the information you're looking for; it just needs to go into a wide format, turning the values in the group column into individual columns. You do this with spread:
df %>%
  gather(key = att, value = letter, -ID, -Value) %>%
  group_by(ID, att, letter) %>%
  summarise(group_val = sum(Value)) %>%
  mutate(share = group_val / sum(group_val)) %>%
  select(-group_val) %>%
  unite(group, att, letter, sep = "_") %>%
  spread(key = group, value = share)
#> # A tibble: 2 x 11
#> # Groups: ID [2]
#> ID ATT1_a ATT1_b ATT2_x ATT2_y ATT3_c ATT3_d ATT4_i ATT4_j ATT4_k
#> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 1 0.6 0.4 0.2 0.8 0.4 0.6 0.2 0.2 0.4
#> 2 2 1 NA 1 NA 1 NA NA NA 1
#> # ... with 1 more variable: ATT4_l <dbl>
Note that there are NAs filled in here where there aren't observations for a given combination of ID/ATT/letter. I'm assuming you'll have more complete data than in the sample you posted.
Created on 2018-10-03 by the reprex package (v0.2.1)
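gather() and spread() have since been superseded by pivot_longer() and pivot_wider(); a sketch of an equivalent pipeline with the newer verbs (same logic, just different reshaping functions, assuming the attribute columns are the ones named ATT*):

df %>%
  pivot_longer(cols = starts_with("ATT"), names_to = "att", values_to = "letter") %>%
  group_by(ID, att, letter) %>%
  summarise(group_val = sum(Value), .groups = "drop_last") %>%
  mutate(share = group_val / sum(group_val)) %>%
  select(-group_val) %>%
  unite(group, att, letter, sep = "_") %>%
  pivot_wider(names_from = group, values_from = share)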
I believe you are looking for the reshape2 package:
library(reshape2)
df.new <- dcast(df,
                formula = ID ~ ATT1,
                value.var = "proc",
                fun.aggregate = mean)
This will not completely fix your problem, though. I recommend first making your data tidy:
df.tidy <- melt(df,
                id.vars = c("ID", "Value"),
                variable.name = "ATT1_4",
                value.name = "att.factor")
library(dplyr)
df.tidy <- df.tidy %>% group_by(ID, att.factor) %>% mutate(proc = (Value / sum(Value) * 100))
df.new <- dcast(df.tidy,
                formula = ID ~ att.factor,
                value.var = "proc",
                fun.aggregate = mean)
NaN will be returned for any combination that isn't represented in df.tidy; you can use the fill argument to assign a value to those.
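For example (0 is just an illustrative placeholder; choose whatever value suits the report):

df.new <- dcast(df.tidy,
                formula = ID ~ att.factor,
                value.var = "proc",
                fun.aggregate = mean,
                fill = 0)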
I would like to know how I can use the dplyr mutate function when I don't know the column names. Here is my example code:
library(dplyr)
w<-c(2,3,4)
x<-c(1,2,7)
y<-c(1,5,4)
z<-c(3,2,6)
df <- data.frame(w,x,y,z)
df %>% rowwise() %>% mutate(minimum = min(x,y,z))
Source: local data frame [3 x 5]
Groups: <by row>
# A tibble: 3 x 5
w x y z minimum
<dbl> <dbl> <dbl> <dbl> <dbl>
1 2 1 1 3 1
2 3 2 5 2 2
3 4 7 4 6 4
This code finds the minimum value row-wise. Yes, df %>% rowwise() %>% mutate(minimum = min(x,y,z)) works because I typed the column names x, y, z. But let's assume that I have a really big data.frame with several hundred columns and I don't know all of the column names. Or I have multiple data frames that all have different column names, and I just want to find the minimum value from the 10th column to the 20th column in each row of each data.frame.
In the example data.frame I provided above, let's assume that I don't know the column names but just want to get the minimum value from the 2nd column to the 4th column in each row. Of course, this doesn't work, because mutate doesn't work with vectors like this:
df %>% rowwise() %>% mutate(minimum=min(df[,2],df[,3], df[,4]))
Source: local data frame [3 x 5]
Groups: <by row>
# A tibble: 3 x 5
w x y z minimum
<dbl> <dbl> <dbl> <dbl> <dbl>
1 2 1 1 3 1
2 3 2 5 2 1
3 4 7 4 6 1
These two attempts below also don't work:
df %>% rowwise() %>% mutate(average=min(colnames(df)[2], colnames(df)[3], colnames(df)[4]))
df %>% rowwise() %>% mutate(average=min(noquote(colnames(df)[2]), noquote(colnames(df)[3]), noquote(colnames(df)[4])))
I know that I can get the minimum value by using apply or other methods when I don't know the column names. But I would like to know whether the dplyr mutate function is able to do this without knowing the column names.
Thank you,
With apply:
library(dplyr)
library(purrr)
df %>%
  mutate(minimum = apply(df[, 2:4], 1, min))
or with pmap:
df %>%
  mutate(minimum = pmap_dbl(.[2:4], min))
Also with by_row from purrrlyr:
df %>%
  purrrlyr::by_row(~min(.[2:4]), .collate = "rows", .to = "minimum")
Output:
# tibble [3 x 5]
w x y z minimum
<dbl> <dbl> <dbl> <dbl> <dbl>
1 2 1 1 3 1
2 3 2 5 2 2
3 4 7 4 6 4
A vectorized option would be pmin. Convert the column names to symbols with syms and evaluate (!!!) them to return the values of the columns to which pmin is applied:
library(dplyr)
df %>%
  mutate(minimum = pmin(!!! rlang::syms(names(.)[2:4])))
# w x y z minimum
#1 2 1 1 3 1
#2 3 2 5 2 2
#3 4 7 4 6 4
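In more recent dplyr versions (1.0.0 and later), c_across() inside a rowwise mutate also covers the by-position case directly; a sketch (not from the answers above):

library(dplyr)

df %>%
  rowwise() %>%
  mutate(minimum = min(c_across(2:4))) %>%   # columns selected by position, no names needed
  ungroup()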
Here is a tidyeval approach along the lines of the suggestion from aosmith. If you don't know the column names, you can make a function that accepts the desired positions as inputs and finds the column names itself. Here, rlang::syms() takes the column names as strings and turns them into symbols, and !!! unquotes and splices the symbols into the function.
library(dplyr)
w<-c(2,3,4)
x<-c(1,2,7)
y<-c(1,5,4)
z<-c(3,2,6)
df <- data.frame(w,x,y,z)
rowwise_min <- function(df, min_cols){
  cols <- df[, min_cols] %>% colnames() %>% rlang::syms()
  df %>%
    rowwise() %>%
    mutate(minimum = min(!!!cols))
}
rowwise_min(df, 2:4)
#> Source: local data frame [3 x 5]
#> Groups: <by row>
#>
#> # A tibble: 3 x 5
#> w x y z minimum
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 2 1 1 3 1
#> 2 3 2 5 2 2
#> 3 4 7 4 6 4
rowwise_min(df, c(1, 3))
#> Source: local data frame [3 x 5]
#> Groups: <by row>
#>
#> # A tibble: 3 x 5
#> w x y z minimum
#> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 2 1 1 3 1
#> 2 3 2 5 2 3
#> 3 4 7 4 6 4
Created on 2018-09-04 by the reprex package (v0.2.0).