I am trying to melt/stack/gather multiple specific columns of a dataframe into 2 columns, retaining all the others.
I have tried many, many answers on stackoverflow without success (some below). I basically have a situation similar to this post here:
Reshaping multiple sets of measurement columns (wide format) into single columns (long format)
only with many more columns to retain and combine. It is important to mention that my year columns are factors, and I have many more columns than in the sample below, so I want to refer to columns by name rather than by position.
>df
ID Code Country year.x value.x year.y value.y year.x.x value.x.x
1 A USA 2000 34.33422 2001 35.35241 2002 42.30042
1 A Spain 2000 34.71842 2001 39.82727 2002 43.22209
3 B USA 2000 35.98180 2001 37.70768 2002 44.40232
3 B Peru 2000 33.00000 2001 37.66468 2002 41.30232
4 C Argentina 2000 37.78005 2001 39.25627 2002 45.72927
4 C Peru 2000 40.52575 2001 40.55918 2002 46.62914
I tried pivot_longer in tidyr based on the post above, which seemed very similar, but got various errors depending on what I did:
pivot_longer(df,
cols = -c(ID, Code, Country),
names_to = c(".value", "group"),
names_sep = ".")
I also played with melt in reshape2 in various ways which either melted only the values columns or only the years columns. Such as:
new.df <- reshape2:::melt(df, id.var = c("ID", "Code", "Country"), measure.vars=c("value.x", "value.y", "value.x.x", "value.y.y", "value.x.x.x", "value.y.y.y"), value.name = "value", variable.vars=c('year.x','year.y', "year.x.x", "year.y.y", "year.x.x.x", "year.y.y.y", "value.x", variable.name = "year")
I also tried dplyr gather based on other posts but I find it extremely difficult to understand the help page and posts.
To be clear what I am looking to achieve:
ID Code Country year value
1 A USA 2000 34.33422
1 A Spain 2000 34.71842
3 B USA 2000 35.98180
3 B Peru 2000 33.00000
4 C Argentina 2000 37.78005
4 C Peru 2000 40.52575
1 A USA 2001 35.35241
1 A Spain 2001 39.82727
3 B USA 2001 37.70768
3 B Peru 2001 37.66468
4 C Argentina 2001 39.25627
4 C Peru 2001 40.55918
1 A USA 2002 42.30042
etc.
I really appreciate the help here.
We can specify the names_pattern
library(tidyr)
library(dplyr)
df %>%
pivot_longer(cols = -c(ID, Code, Country),
names_to = c(".value", "group"),names_pattern = "(.*)\\.(.*)")
Or use names_sep with an escaped ., since according to ?pivot_longer:
names_sep - names_sep takes the same specification as separate(), and can either be a numeric vector (specifying positions to break on), or a single string (specifying a regular expression to split on).
which implies that the string is treated as a regular expression, and in a regex . matches any character, not just the literal dot. To match the literal dot, either escape it or place it inside square brackets:
pivot_longer(df,
cols = -c(ID, Code, Country),
names_to = c(".value", "group"),
names_sep = "\\.")
# A tibble: 18 x 6
# ID Code Country group year value
# <int> <chr> <chr> <chr> <int> <dbl>
# 1 1 A USA x 2000 34.3
# 2 1 A USA y 2001 35.4
# 3 1 A USA z 2002 42.3
# 4 1 A Spain x 2000 34.7
# 5 1 A Spain y 2001 39.8
# 6 1 A Spain z 2002 43.2
# 7 3 B USA x 2000 36.0
# 8 3 B USA y 2001 37.7
# 9 3 B USA z 2002 44.4
#10 3 B Peru x 2000 33
#11 3 B Peru y 2001 37.7
#12 3 B Peru z 2002 41.3
#13 4 C Argentina x 2000 37.8
#14 4 C Argentina y 2001 39.3
#15 4 C Argentina z 2002 45.7
#16 4 C Peru x 2000 40.5
#17 4 C Peru y 2001 40.6
#18 4 C Peru z 2002 46.6
Update
For the updated dataset
library(stringr)
df2 %>%
rename_at(vars(matches("year|value")), ~
str_replace(., "^([^.]+\\.[^.]+)\\.([^.]+)$", "\\1\\2")) %>%
pivot_longer(cols = -c(ID, Code, Country),
names_to = c(".value", "group"),names_pattern = "(.*)\\.(.*)")
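The extra rename step is needed because the df2 names contain two dots (year.x.x, value.x.x): the greedy (.*)\\.(.*) pattern splits on the last dot, so the ".value" part would come out as e.g. "year.x". A quick illustration with base sub():
sub("(.*)\\.(.*)", "\\1", "year.x.x")
#[1] "year.x"
After the rename the names have a single dot again ("year.xx" etc.), so the same names_pattern works.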
Or without the rename, use regex lookaround
df2 %>%
pivot_longer(cols = -c(ID, Code, Country),
names_to = c(".value", "group"),
names_sep = "(?<=year|value)\\.")
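To see where this pattern splits, here is a quick check with base strsplit() (perl = TRUE enables the lookbehind); the names are split only at a dot immediately preceded by "year" or "value":
strsplit(c("year.x", "value.x.x"), "(?<=year|value)\\.", perl = TRUE)
#[[1]]
#[1] "year" "x"
#
#[[2]]
#[1] "value" "x.x"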
data
df <- structure(list(ID = c(1L, 1L, 3L, 3L, 4L, 4L), Code = c("A",
"A", "B", "B", "C", "C"), Country = c("USA", "Spain", "USA",
"Peru", "Argentina", "Peru"), year.x = c(2000L, 2000L, 2000L,
2000L, 2000L, 2000L), value.x = c(34.33422, 34.71842, 35.9818,
33, 37.78005, 40.52575), year.y = c(2001L, 2001L, 2001L, 2001L,
2001L, 2001L), value.y = c(35.35241, 39.82727, 37.70768, 37.66468,
39.25627, 40.55918), year.z = c(2002L, 2002L, 2002L, 2002L, 2002L,
2002L), value.z = c(42.30042, 43.22209, 44.40232, 41.30232, 45.72927,
46.62914)), class = "data.frame", row.names = c(NA, -6L))
df2 <- structure(list(ID = c(1L, 1L, 3L, 3L, 4L, 4L), Code = c("A",
"A", "B", "B", "C", "C"), Country = c("USA", "Spain", "USA",
"Peru", "Argentina", "Peru"), year.x = c(2000L, 2000L, 2000L,
2000L, 2000L, 2000L), value.x = c(34.33422, 34.71842, 35.9818,
33, 37.78005, 40.52575), year.y = c(2001L, 2001L, 2001L, 2001L,
2001L, 2001L), value.y = c(35.35241, 39.82727, 37.70768, 37.66468,
39.25627, 40.55918), year.x.x = c(2002L, 2002L, 2002L, 2002L,
2002L, 2002L), value.x.x = c(42.30042, 43.22209, 44.40232, 41.30232,
45.72927, 46.62914)), class = "data.frame", row.names = c(NA,
-6L))
Related
I have a table such as this one:
Year Type Value
1991 A 4945
1991 B 525
1991 C 764
1992 A 640
1992 B 3935
1992 D 49
1993 K 49
I would like to generate a new column that calculates the percentage of each type for each year. The types may change per year, and some years only have one type.
E.g. the first percentage should be 4945/(4945+525+764).
Any help would be very welcome. Thank you very much!
Do a group by 'Year' and get the proportions of 'Value'
library(dplyr)
df1 %>%
group_by(Year) %>%
mutate(new = proportions(Value) * 100) %>%
ungroup
-output
# A tibble: 6 × 4
Year Type Value new
<int> <chr> <int> <dbl>
1 1991 A 4945 79.3
2 1991 B 525 8.42
3 1991 C 764 12.3
4 1992 A 640 13.8
5 1992 B 3935 85.1
6 1992 D 49 1.06
Or use base R with ave
df1$new <- with(df1, ave(Value, Year, FUN = proportions) * 100)
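proportions() is a fairly recent base function (added in R 4.0.1 as a more intuitively named prop.table()); on older R versions the same per-group percentage can be computed directly, for example:
df1$new <- with(df1, ave(Value, Year, FUN = function(x) x/sum(x)) * 100)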
data
df1 <- structure(list(Year = c(1991L, 1991L, 1991L, 1992L, 1992L, 1992L
), Type = c("A", "B", "C", "A", "B", "D"), Value = c(4945L, 525L,
764L, 640L, 3935L, 49L)), class = "data.frame", row.names = c(NA,
-6L))
I've posted this as another question, but realised I've got my sample data wrong.
I've got two separate datasets. df1 looks like this:
loc_ID year observations
nin212 2002 90
nin212 2003 98
nin212 2004 102
cha670 2001 18
cha670 2002 19
cha670 2003 21
df2 looks like this:
loc_ID start_year end_year
nin212 2002 2003
nin212 2003 2004
cha670 2001 2002
cha670 2002 2003
I want to calculate the number of observations in the time intervals (start_year to end_year) per loc_ID. In the example above, I would like to achieve this final dataset:
loc_ID start_year end_year observations
nin212 2002 2003 188
nin212 2003 2004 200
cha670 2001 2002 37
cha670 2002 2003 40
How could I do this?
We can do a non-equi join
library(data.table)
setDT(df2)[, observations := setDT(df1)[df2, sum(observations),
on = .(loc_ID, year >= start_year, year <= end_year),
by = .EACHI]$V1]
-output
df2
# loc_ID start_year end_year observations
#1: nin212 2002 2003 188
#2: nin212 2003 2004 200
#3: cha670 2001 2002 37
#4: cha670 2002 2003 40
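For reference, a more verbose route without non-equi joins is to join on loc_ID alone, keep the years inside each interval, and then sum. This is a dplyr sketch using the original df1 and df2 from the data section below; it expands every loc_ID pairing before filtering, so it is less efficient on large data:
library(dplyr)
df2 %>%
  left_join(df1, by = "loc_ID") %>%
  filter(year >= start_year, year <= end_year) %>%
  group_by(loc_ID, start_year, end_year) %>%
  summarise(observations = sum(observations), .groups = "drop")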
data
df1 <- structure(list(loc_ID = c("nin212", "nin212", "nin212", "cha670",
"cha670", "cha670"), year = c(2002L, 2003L, 2004L, 2001L, 2002L,
2003L), observations = c(90L, 98L, 102L, 18L, 19L, 21L)),
class = "data.frame", row.names = c(NA,
-6L))
df2 <- structure(list(loc_ID = c("nin212", "nin212", "cha670", "cha670"
), start_year = c(2002L, 2003L, 2001L, 2002L), end_year = c(2003L,
2004L, 2002L, 2003L)), class = "data.frame", row.names = c(NA,
-4L))
I've got a long dataframe like this:
year value town
2001 0.15 ny
2002 0.19 ny
2002 0.14 ca
2001 NA ny
2002 0.15 ny
2002 0.12 ca
2001 NA ny
2002 0.13 ny
2002 0.1 ca
I want to calculate a mean value per year and per town. Like this:
df %>% group_by(year, town) %>% summarise(mean_year = mean(value, na.rm=T))
However, I only want to summarise those town values which have more than 2 non-NA values. In the example above, I don't want to summarise year 2001 for ny because it only has 1 non-NA value.
So the output would be like this:
town year mean_year
ny 2001 NA
ny 2002 0.156
ca 2002 0.45
Try this:
df %>% group_by(year, town) %>%
summarise(mean_year = ifelse(sum(!is.na(value))>=2, mean(value, na.rm = T), NA))
# A tibble: 3 x 3
# Groups: year [2]
year town mean_year
<int> <chr> <dbl>
1 2001 ny NA
2 2002 ca 0.12
3 2002 ny 0.157
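The condition works because sum(!is.na(value)) counts the non-NA entries in each group, e.g.
sum(!is.na(c(0.15, NA, NA)))
#[1] 1
so the 2001/ny group, with only one non-NA value, fails the >= 2 check and is set to NA.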
dput
> dput(df)
structure(list(year = c(2001L, 2002L, 2002L, 2001L, 2002L, 2002L,
2001L, 2002L, 2002L), value = c(0.15, 0.19, 0.14, NA, 0.15, 0.12,
NA, 0.13, 0.1), town = c("ny", "ny", "ca", "ny", "ny", "ca",
"ny", "ny", "ca")), class = "data.frame", row.names = c(NA, -9L
))
I need to write a function in R to return the first date in a series for which the value of a column is greater than 0. I would like to identify that date for each year in the dataframe.
For example, given this example data...
Date Year Catch
3/12/2001 2001 0
3/19/2001 2001 7
3/24/2001 2001 9
4/6/2002 2002 12
4/9/2002 2002 0
4/15/2002 2002 5
4/27/2002 2002 0
3/18/2003 2003 0
3/22/2003 2003 0
3/27/2003 2003 15
I would like R to return the first date for each year with catch > 0
Year Date
2001 3/19/2001
2002 4/6/2002
2003 3/27/2003
I had been working with the min function below, but it only returns the row number, and I was unable to return a value for each year in the dataframe.
min(which(data$Catch > 0))
I'm new to writing my own functions in R. Any help would be appreciated. Thanks.
library(dplyr)
df1 %>%
group_by(Year) %>%
slice(which.max(Catch > 0))
# # A tibble: 3 x 3
# # Groups: Year [3]
# Date Year Catch
# <date> <int> <int>
# 1 2001-03-19 2001 7
# 2 2002-04-06 2002 12
# 3 2003-03-27 2003 15
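The trick here is that which.max() on a logical vector returns the position of the first TRUE:
which.max(c(FALSE, TRUE, TRUE))
#[1] 2
One caveat: if a year had no positive catch at all, which.max() would still return 1 (i.e. the group's first row), so guard against that case if it can occur in your data.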
Data:
df1 <-
structure(list(Date = structure(c(11393, 11400, 11405, 11783,
11786, 11792, 11804, 12129, 12133, 12138), class = "Date"), Year = c(2001L,
2001L, 2001L, 2002L, 2002L, 2002L, 2002L, 2003L, 2003L, 2003L
), Catch = c(0L, 7L, 9L, 12L, 0L, 5L, 0L, 0L, 0L, 15L)), .Names = c("Date",
"Year", "Catch"), row.names = c(NA, -10L), class = "data.frame")
Here is an option with data.table
library(data.table)
setDT(df1)[, .SD[which.max(Catch > 0)], Year]
# Year Date Catch
#1: 2001 2001-03-19 7
#2: 2002 2002-04-06 12
#3: 2003 2003-03-27 15
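A slightly faster variant of the same idea computes the row indices per group with .I and subsets once (a sketch of a common data.table idiom; df1 is already a data.table after the setDT() call above):
df1[df1[, .I[which.max(Catch > 0)], by = Year]$V1]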
data
df1 <- structure(list(Date = structure(c(11393, 11400, 11405, 11783,
11786, 11792, 11804, 12129, 12133, 12138), class = "Date"), Year = c(2001L,
2001L, 2001L, 2002L, 2002L, 2002L, 2002L, 2003L, 2003L, 2003L
), Catch = c(0L, 7L, 9L, 12L, 0L, 5L, 0L, 0L, 0L, 15L)), row.names = c(NA,
-10L), class = "data.frame")
Here is a dplyr solution.
df1 %>%
group_by(Year) %>%
mutate(Inx = first(which(Catch > 0))) %>%
filter(Inx == row_number()) %>%
select(-Inx)
## A tibble: 3 x 3
## Groups: Year [3]
# Date Year Catch
# <date> <int> <int>
#1 2001-03-19 2001 7
#2 2002-04-06 2002 12
#3 2003-03-27 2003 15
Data.
df1 <- read.table(text = "
Date Year Catch
3/12/2001 2001 0
3/19/2001 2001 7
3/24/2001 2001 9
4/6/2002 2002 12
4/9/2002 2002 0
4/15/2002 2002 5
4/27/2002 2002 0
3/18/2003 2003 0
3/22/2003 2003 0
3/27/2003 2003 15
", header = TRUE)
df1$Date <- as.Date(df1$Date, "%m/%d/%Y")
df <- data.frame(Date = as.Date(c("3/12/2001", "3/19/2001", "3/24/2001",
"4/6/2002", "4/9/2002", "4/15/2002", "4/27/2002",
"3/18/2003", "3/22/2003", "3/27/2003"), "%m/%d/%Y"),
Year = c(2001, 2001, 2001, 2002, 2002, 2002, 2002, 2003, 2003, 2003),
Catch = c(0, 7, 9, 12, 0, 5, 0, 0, 0, 15))
If you do not need a function, you can try
library(dplyr)
df %>% group_by(Date) %>% filter(Catch > 0 ) %>% group_by(Year) %>% summarize(date = min(Date))
If you specifically want to write a function, perhaps:
firstcatch <- function(yr) {
dd <- subset(df, yr == Year)
withcatches <- dd[which(dd$Catch > 0), ]
min(as.character(withcatches$Date))
}
yrs <- c(2001, 2002, 2003)
dates <- unlist(lapply(yrs, firstcatch))
ndt <- data.frame(Year = yrs, Date = dates)
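Note that min(as.character(...)) returns a character string, so the Date column of ndt ends up as character (e.g. "2001-03-19"); convert it back if you need Date class:
ndt$Date <- as.Date(ndt$Date)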
You can try something like this:
df <- data %>%
group_by(Year) %>%
mutate(newCol=Date[Catch>0][1]) %>%
distinct(Year, newCol)
I have two data sets (one for each country) that look like this:
dfGermany
Country Sales Year Code
Germany 2000 2000 221
Germany 1500 2001 150
Germany 2150 2002 270
dfJapan
Country Sales Year Code
Japan 500 2000 221
Japan 750 2001 221
Japan 800 2001 270
Japan 1000 2002 270
Code here is the "name" of the product. What I want to do is take half of the Japanese sales and add it to the Germany data frame where the code and the year match.
So for instance, half of the sales values for products 221 and 270 in dfJapan (250 € and 500 €) should be added to dfGermany for 2000 and 2002. But nothing should happen to the 2001 value, since the codes do not match for that year.
I tried merge, but that did not work since the data frames are of different sizes and I need to match on both year and code.
We can do a join on 'Year', 'Code' and then update the 'dfGermany' 'Sales' column
library(data.table)
setDT(dfGermany)[dfJapan, Sales := Sales + i.Sales/2, on = .(Year, Code)]
dfGermany
# Country Sales Year Code
#1: Germany 2250 2000 221
#2: Germany 1500 2001 150
#3: Germany 2650 2002 270
data
dfGermany <- structure(list(Country = c("Germany", "Germany", "Germany"),
Sales = c(2000, 1500, 2150), Year = 2000:2002, Code = c(221L,
150L, 270L)), row.names = c(NA, -3L), class = "data.frame")
dfJapan <- structure(list(Country = c("Japan", "Japan", "Japan", "Japan"
), Sales = c(500L, 750L, 800L, 1000L), Year = c(2000L, 2001L,
2001L, 2002L), Code = c(221L, 221L, 270L, 270L)),
class = "data.frame", row.names = c(NA, -4L))
Using dplyr and #akrun's provided data:
library(dplyr)
dfGermany %>%
left_join(dfJapan %>%
select(Year, Code, sales_japan = Sales),
by = c('Year', 'Code')) %>%
mutate(Sales = Sales + coalesce(sales_japan / 2, 0)) %>%
select(-sales_japan)
-output
Country Sales Year Code
1 Germany 2250 2000 221
2 Germany 1500 2001 150
3 Germany 2650 2002 270
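The coalesce() call is what keeps non-matching rows unchanged: the left join produces an NA sales_japan wherever the Year/Code pair has no counterpart in dfJapan (the 2001 row here), and coalesce() turns that NA into 0 before adding:
coalesce(c(NA, 125, 500), 0)
#[1]   0 125 500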