How to combine two variables from different data frames in R?

I'm new to R programming, so this question might be simple.
Anyway, I've tried to find an answer to this specific problem and didn't get one.
I'm trying to import new data into my old data.frame.
The problem is that this data has to replace previous NA values in variables that already exist.
Also, my data has different individuals (companies) over different periods (years), and my new data set only has the companies and years that were missing, plus some observations that I already had.
I tried to simulate the problem with the data frames below:
Data frame with NAs:
df1 <- data.frame(company = c(rep("A", 3), rep("B", 3), rep("C", 3)),
                  year = rep(2016:2018, times = 3),
                  income = c(95, 87, 93, NA, NA, 58, 102, 80, NA),
                  debt = c(43, 50, 51, NA, 37, 37, 53, NA, NA),
                  stringsAsFactors = FALSE)
To search for the new data, I created a data set with only the rows that have missing values, as my data had too many observations:
df_NA <- df1[is.na(df1$income) | is.na(df1$debt), ]
So after searching, I was able to find the missing data, and now I have something like this:
df2 <- data.frame(company = c("A", "B", "C", "C"),
                  year = c(2018, 2016, 2017, 2018),
                  income = c(60, 55, 80, 82),
                  debt = c(32, 37, 53, 48),
                  stringsAsFactors = FALSE)
Now I'm trying to put this data together so that I have a complete data.frame to work with.
The problem is that I couldn't find a way to do it yet. I've tried merge and join, indexing by company and year, but the variables that have the same name in both data.frames get duplicated with a suffix.
In my real data I have many more observations and variables to fill, so I want a way to do it with a single command. Also, this is going to happen again in the future, so it will be very helpful.
I'm sorry if this was already answered. Thank you!

Here is an option using data.table:
library(data.table)
setDT(df1)
setDT(df2)
df1[df2, on = c("company", "year"), c("income", "debt") := list(i.income, i.debt)]
# company year income debt
#1: A 2016 95 43
#2: A 2017 87 50
#3: A 2018 60 32
#4: B 2016 55 37
#5: B 2017 NA 37
#6: B 2018 58 37
#7: C 2016 102 53
#8: C 2017 80 53
#9: C 2018 82 48
Or another option using dplyr:
library(dplyr)
full_join(df1, df2, by = c("year", "company")) %>%
  mutate(income = coalesce(income.x, income.y),
         debt = coalesce(debt.x, debt.y)) %>%
  select(company, year, income, debt)
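A third option, assuming dplyr >= 1.0.0 is available, is `rows_patch()`, which replaces only the NA values in the first data frame with matching values from the second, so no `.x`/`.y` suffixes ever appear. Note that this mirrors the coalesce semantics (NAs filled, existing values kept), which differs from the data.table update join above, which overwrites every matched row:

```r
library(dplyr)

df1 <- data.frame(company = c(rep("A", 3), rep("B", 3), rep("C", 3)),
                  year = rep(2016:2018, times = 3),
                  income = c(95, 87, 93, NA, NA, 58, 102, 80, NA),
                  debt = c(43, 50, 51, NA, 37, 37, 53, NA, NA))
df2 <- data.frame(company = c("A", "B", "C", "C"),
                  year = c(2018, 2016, 2017, 2018),
                  income = c(60, 55, 80, 82),
                  debt = c(32, 37, 53, 48))

# rows_patch() matches rows on the key columns and fills in NA values
# only; existing non-NA values in df1 are kept as they are.
patched <- rows_patch(df1, df2, by = c("company", "year"))
patched
```

Because `rows_patch()` errors by default when df2 contains a company/year pair not present in df1, it also acts as a safety check that the new data really belongs to the old frame.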

Related

Select columns from a data frame

I have a data frame made up of several columns, each corresponding to a different industry per country. I have 56 industries and 43 countries, and I'd like to select only industries 5 to 22 for each country (18 industries). The big issue is that each industry-per-country column is named like AUS1, AUS2, ..., AUS56. What I need to select is AUS5 to AUS22, AUT5 to AUT22, and so on.
A viable solution could be to select columns according to the following algorithm: the first column of interest, AUS5, corresponds to column 10, and I select up to AUS22 (column 27). Then I skip all the remaining columns for AUS (AUS23 to AUS56) and the first 4 columns for the next country (AUT1 to AUT4), and again select industries 5 to 22 for AUT. Basically, starting from column 10, the algorithm should select 18 columns (including column 10), then skip the next 38 columns, then select the next 18 columns. This process should be repeated for all 43 countries.
How can I code that?
UPDATE, Example:
df = data.frame(industry = c("C10", "C11", "C12", "C13"),
                country = c("USA"),
                AUS3 = runif(4),
                AUS4 = runif(4),
                AUS5 = runif(4),
                AUS6 = runif(4),
                DEU5 = runif(4),
                DEU6 = runif(4),
                DEU7 = runif(4),
                DEU8 = runif(4))
# I'm interested only in C10-C11:
library(dplyr)
df_a = df %>% filter(grepl('C10|C11', industry))
df_a
# Thus, how can I select columns AUS10, AUS11, DEU10, DEU11 efficiently, considering that I have a huge dataset?
Demonstrating the paste0 approach.
ctr <- unique(gsub('\\d', '', names(df[-(1:2)])))
# ctr <- c("AUS", "DEU") ## alternatively hard-coded
ind <- c(10, 11)
subset(df, industry %in% paste0('C', 10:11),
       select = c('industry', 'country', paste0(rep(ctr, each = length(ind)), ind)))
# industry country AUS10 AUS11 DEU10 DEU11
# 1 C10 USA 0.3376674 0.1568496 0.5033433 0.7327734
# 2 C11 USA 0.7421840 0.6808892 0.9050158 0.3689741
Or, since you appear to like grep, you could do:
df[grep('10|11', df$industry), grep('industry|country|[A-Z]{3}1[01]', names(df))]
# industry country AUS10 AUS11 DEU10 DEU11
# 1 C10 USA 0.3376674 0.1568496 0.5033433 0.7327734
# 2 C11 USA 0.7421840 0.6808892 0.9050158 0.3689741
If you have a big data set in memory, data.table could be ideal and much faster than alternatives. Something like the following could work, though you will need to play with select_ind and select_ctr as desired on the real dataset.
It might be worth giving us a slightly larger toy example, if possible.
library(data.table)
setDT(df)
select_ind <- paste0("C", c("10", "11"))
select_ctr <- paste0(rep(c("AUS", "DEU"), each = 2), c("10", "11"))
df[grepl(paste0(select_ind, collapse = "|"), industry),  # select rows
   ..select_ctr]                                         # select columns
AUS10 AUS11 DEU10 DEU11
1: 0.9040223 0.2638725 0.9779399 0.1672789
2: 0.6162678 0.3095942 0.1527307 0.6270880
For more information, see Introduction to data.table.
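If dplyr is an option, the tidyselect helper num_range() was built for exactly this prefix-plus-number naming scheme. A small sketch on toy data (I use ranges that exist in this example; on the real data it would be num_range("AUS", 5:22) and so on per country):

```r
library(dplyr)

# Toy data with the <country><number> naming pattern from the question
df <- data.frame(industry = c("C10", "C11", "C12", "C13"),
                 country = "USA",
                 AUS5 = runif(4), AUS6 = runif(4),
                 DEU5 = runif(4), DEU6 = runif(4))

# num_range("AUS", 5:6) expands to AUS5, AUS6; the country prefixes can
# also be generated programmatically, as in the paste0 answer above.
res <- df %>%
  filter(industry %in% c("C10", "C11")) %>%
  select(industry, country, num_range("AUS", 5:6), num_range("DEU", 5:6))
res
```

num_range() fails loudly if a requested column is absent, which is often preferable to a regex silently matching nothing.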

Refining my code for data frame extraction from excel

Looking for advice on refining my code and also trimming to a date range.
The spreadsheet itself is pulled from another system, so the structure of the excel file cannot be changed. When you pull the data, it basically starts at E2, with the first date column in F2 and the first item in E3. The data will continue to populate to the right for as long as it goes on. I have replicated the structure below.
And I want it to look like:
I have come up with the below, which works, but I was looking for advice on refining it down to fewer individual steps.
In the below code:
1 = extracting data
2 = pulling the dates out
3 = formatting from excel number to an actual date
4 = grabbing the item names
5 = transposing data and skipping some parts
6 = adding in dates to the row names
#1
gtb <- data.frame(read_excel("C:/example.xlsx",
                             sheet = "Sheet1"))
#2
dfdate <- gtb[1, -c(1, 2, 3, 4, 5)]
#3
dfdate <- format(as.Date(as.numeric(dfdate),
                         origin = "1899-12-30"), "%d/%m/%Y")
#4
rownames(gtb) <- gtb[, 1]
#5
gtb <- as.data.frame(t(gtb[, -c(1, 2, 3, 4, 5)]))
#6
rownames(gtb) <- dfdate
After the row names have been added the structure is such that I am happy to start creating the visuals where needed.
thanks for your advice
David
Here is one suggestion. I don't really have easy access to your data, but I am including code to remove those columns, as you do, based on their names, which can be nicer than removing by index.
library(dplyr)
library(tibble)
library(lubridate)

df <- read.table(text =
"Item_Code 01/01/2018 01/02/2018 01/03/2018 01/04/2018
Item 99 51 60 69
Item2 42 47 88 2
Item3 36 81 42 48
", header = TRUE, check.names = FALSE) %>%
  rename(`Item Code` = Item_Code)

x <- df %>%
  select(-matches("Code \\d|Internal Code")) %>%
  column_to_rownames("Item Code") %>%
  t %>% as.data.frame %>%
  rownames_to_column("Item Code") %>%
  mutate(`Item Code` = dmy(`Item Code`))
x
x
Output:
> x
Item Code Item Item2 Item3
1 2018-01-01 99 42 36
2 2018-02-01 51 47 81
3 2018-03-01 60 88 42
4 2018-04-01 69 2 48
I went back and forth a bit with this solution, but it is nice to also showcase how to remove columns by a regex on their column names, since you are removing several similarly named columns.
The t trick, which you also use, works because there is really only one other column that would cause problems with it, and, as others have commented, it can be temporarily stowed away as rownames. If that weren't the case, you would be looking at a more complex solution involving pivot_wider and pivot_longer, or splitting the data.frame and transposing only one of the halves.
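A sketch of that pivot-based alternative, assuming tidyr >= 1.0.0 and reusing a cut-down version of the table from above; it reshapes without t() and therefore never risks coercing everything through a matrix:

```r
library(tidyr)
library(dplyr)
library(lubridate)

df <- read.table(text =
"Item_Code 01/01/2018 01/02/2018
Item 99 51
Item2 42 47
Item3 36 81
", header = TRUE, check.names = FALSE)

# pivot_longer stacks the date columns into rows; pivot_wider then
# spreads the items back out as columns, giving one row per date.
x <- df %>%
  pivot_longer(-Item_Code, names_to = "date", values_to = "value") %>%
  mutate(date = dmy(date)) %>%
  pivot_wider(names_from = Item_Code, values_from = value)
x
```

The date ends up as a proper column rather than rownames, which tends to play better with later plotting and filtering by date range.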

efficient way to match and sum variables of two data frames based on two criteria [duplicate]

This question already has answers here:
How to sum a variable by group
(18 answers)
Closed 4 years ago.
I have a data frame df1 of import data for 397 different industries over 17 years and several different exporting countries/regions.
> head(df1)
year importer exporter imports sic87dd
2300 1991 USA CAN 9.404848e+05 2011
2301 1991 USA CAN 2.259720e+04 2015
2302 1991 USA CAN 5.459608e+02 2021
2303 1991 USA CAN 1.173237e+04 2022
2304 1991 USA CAN 2.483033e+04 2023
2305 1991 USA CAN 5.353975e+00 2024
However, I want the sum of all imports for a given industry and a given year, regardless of where they came from. (The importer is always the US, sic87dd is a code that uniquely identifies the 397 industries)
So far I have tried the following code, which works correctly but is terribly inefficient and takes ages to run.
sic87dd <- unique(df1$sic87dd)
year <- unique (df1$year)
df2 <- data.frame("sic87dd" = rep(sic87dd, each = 17), "year" = rep(year, 397), imports = rep(0, 6749))
i <- 1
j <- 1
while (i <= nrow(df2)) {
  while (j <= nrow(df1)) {
    if (df1$sic87dd[j] == df2$sic87dd[i] & df1$year[j] == df2$year[i]) {
      df2$imports[i] <- df2$imports[i] + df1$imports[j]
    }
    j <- j + 1
  }
  i <- i + 1
  j <- 1
}
Is there a more efficient way to do this? I have seen some questions here that were somewhat similar and suggested the use of the data.table package, but I can't figure out how to make it work in my case.
Any help is appreciated.
There is a simple solution using dplyr.
Since you want the total per industry and per year, group by both columns and then summarise:
df1 %>%
  group_by(sic87dd, year) %>%
  summarise(total_imports = sum(imports))
(Converting sic87dd to a factor first, with df1$sic87dd <- as.factor(df1$sic87dd), is optional; group_by works on numeric or character codes as well.)
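Since the question also asks about data.table: the same grouped sum is a one-liner there, and base R's aggregate() works too. A minimal sketch on made-up data (the column names follow the question; the values are illustrative only):

```r
library(data.table)

# Illustrative data shaped like the question's df1
df1 <- data.frame(year = rep(1991:1992, each = 4),
                  importer = "USA",
                  exporter = rep(c("CAN", "MEX"), 4),
                  imports = c(10, 5, 20, 5, 7, 3, 1, 9),
                  sic87dd = rep(c(2011, 2015), times = 4))

dt <- as.data.table(df1)

# Group by industry and year, summing imports over all exporters
totals <- dt[, .(total_imports = sum(imports)), by = .(sic87dd, year)]
totals

# Base-R equivalent:
# aggregate(imports ~ sic87dd + year, data = df1, FUN = sum)
```

Both replace the nested while loops with a single vectorised pass, which is where the speedup comes from.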

Using paste for Dynamic addition

I have a report that I need to do on a quarterly basis that involves adding various components of revenue together to formulate a trailing 12 month and trailing 24 month total.
Rather than retyping a bunch of column names to add each column together on a rolling basis, I was hoping to create a function where I could declare variables for the trailing months so I can sum them together more easily.
My dataframe all_rel contains all the data I need to sum together. It contains the following fields (unfortunately I just inherited this report and it isn't exactly in tidy format):
Total_Processing_Revenue
Ancillary_Revenue
In the data frame I have T24 months of these data points in separate columns.
The script that someone put together that I inherited uses the following to add the columns together:
all_rel$anci_rev_cy_ytd = all_rel$X201701Ancillary_Revenue+all_rel$X201702Ancillary_Revenue+all_rel$X201703Ancillary_Revenue+...+all_rel$X201712Ancillary_Revenue
I was hoping to do something with paste but can't seem to get it to work:
dfname <- 'all_rel$X'
revmonth1 <- '01'
revmonth2 <- '02'
revmonth3 <- '03'
revmonth4 <- '04'
revmonth5 <- '05'
revmonth6 <- '06'
revmonth7 <- '07'
revmonth8 <- '08'
revmonth9 <- '09'
revmonth10 <- '10'
revmonth11 <- '11'
revmonth12 <- '12'
cy <- '2017'
py <- '2016'
rev1 <- 'Total_Processing_Revenue'
rev2 <- 'Ancillary_Revenue'
all_rel$anci_rev_py_ytd = paste(dfname,py,revmonth1,rev2, sep ='')+paste(dfname,py,revmonth2,rev2, sep ='')+...paste(dfname,py,revmonth12,rev2, sep ='')
When I try to sum these fields together I get a "non-numeric argument to binary operator" error. Is there something else I can do instead of what I've been trying?
paste(dfname, py, revmonth1, rev2, sep = '') returns "all_rel$X201601Ancillary_Revenue"
Is there a way that I can tell R that the reason I'm pasting these names is to reference the data within them, rather than the text I'm pasting?
I'm fairly new to R (I've been learning on the fly to try to make my life easier).
Ultimately I need to figure out how to convert this mess to a tidy data format where each of the revenue columns has a month and year, but I was hoping to use this issue to understand how to use substitution logic to better automate processes. Maybe I just worded my searches incorrectly, but I was struggling to find the exact issue I'm trying to solve.
Any help is greatly appreciated.
Edit: added dput(head)
structure(list(Chain = c("000001", "000029", "000060", "000064","000076", "000079"), X201601Net_Revenue = c(-2.92, 25005.14,55787.59, 3996.69, 14229.41, 3455.85),X201601Total_Processing_Revenue = c(0,16140.48, 23238.89, 3574.17, 4093.51, 641.1), X201601Ancillary_Revenue = c(-2.92,8864.66, 32548.7, 422.52, 10135.9, 2814.75), X201602Net_Revenue = c(0,41918.84, 56696.34, 4789.57, 13113.2, 5211.27), X201602Total_Processing_Revenue = c(0,13253.19, 24733.04, 4395.69, 4102.79, 546.68), X201602Ancillary_Revenue = c(0,28665.65, 31963.3, 393.88, 9010.41, 4664.59), X201603Net_Revenue = c(0,23843.76, 62494.51, 5262.87, 20551.79, 7646.75), X201603Total_Processing_Revenue = c(0,15037.39, 27523.19,4792.63,4805.61,2134.72)),.Names=c("Chain","X201601Net_Revenue","X201601Total_Processing_Revenue","X201601Ancillary_Revenue","X201602Net_Revenue","X201602Total_Processing_Revenue","X201602Ancillary_Revenue","X201603Net_Revenue", "X201603Total_Processing_Revenue"), row.names = c(NA,6L), class = "data.frame")
Here's how to tidy your data (calling your data dd):
library(tidyr)
library(dplyr)
gather(dd, key = key, value = value, -Chain) %>%
  mutate(year = substr(key, start = 2, 5),
         month = substr(key, 6, 7),
         metric = substr(key, 8, nchar(key))) %>%
  select(-key) %>%
  spread(key = metric, value = value)
# Chain year month Ancillary_Revenue Net_Revenue Total_Processing_Revenue
# 1 000001 2016 01 -2.92 -2.92 0.00
# 2 000001 2016 02 0.00 0.00 0.00
# 3 000001 2016 03 NA 0.00 0.00
# 4 000029 2016 01 8864.66 25005.14 16140.48
# 5 000029 2016 02 28665.65 41918.84 13253.19
# 6 000029 2016 03 NA 23843.76 15037.39
# 7 000060 2016 01 32548.70 55787.59 23238.89
# 8 000060 2016 02 31963.30 56696.34 24733.04
# 9 000060 2016 03 NA 62494.51 27523.19
# 10 000064 2016 01 422.52 3996.69 3574.17
# 11 000064 2016 02 393.88 4789.57 4395.69
# 12 000064 2016 03 NA 5262.87 4792.63
# 13 000076 2016 01 10135.90 14229.41 4093.51
# 14 000076 2016 02 9010.41 13113.20 4102.79
# 15 000076 2016 03 NA 20551.79 4805.61
# 16 000079 2016 01 2814.75 3455.85 641.10
# 17 000079 2016 02 4664.59 5211.27 546.68
# 18 000079 2016 03 NA 7646.75 2134.72
With that done, you can use whatever grouped operations you want - sums, rolling sums or averages, etc. You might be interested in the yearmon class provided in the zoo package, this question on rolling sums by group, and of course the R-FAQ on grouped sums.
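As a side note on the literal paste() question: paste() only builds a string, and R will not evaluate "all_rel$X..." as code. The usual bridge from a constructed name to the data is `[[` for a single column, or character indexing plus rowSums() for several. A minimal sketch on a made-up two-month frame with the same naming scheme:

```r
# Toy frame mimicking the X<year><month><metric> columns
all_rel <- data.frame(Chain = c("000001", "000029"),
                      X201601Ancillary_Revenue = c(1, 2),
                      X201602Ancillary_Revenue = c(3, 4))

# One column: [[ ]] accepts a character name, unlike $
one <- all_rel[[paste0("X2016", "01", "Ancillary_Revenue")]]

# Several columns: build the vector of names, then rowSums()
cols <- paste0("X2016", sprintf("%02d", 1:2), "Ancillary_Revenue")
all_rel$anci_rev_py_ytd <- rowSums(all_rel[cols])
all_rel
```

sprintf("%02d", 1:12) also removes the need for twelve separate revmonth variables, since it zero-pads the month numbers for you.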

Merge two dataframes with repeated columns

I have several .csv files, each one corresponding to a monthly list of customers and some information about them. Each file consists of the same information about customers such as:
names(data.jan)
ID AGE CITY GENDER
names(data.feb)
ID AGE CITY GENDER
To simplify, I will consider only two months, January and February, but my real set of csv files goes from January to November.
Considering a "customer X", I have three possible scenarios:
1 - Customer X is listed in the January database, but he left and is not listed in February
2 - Customer X is listed in both the January and February databases
3 - Customer X entered the database in February, so he is not listed in January
I am stuck on the following problem: I need to create a single database with all customers and their respective information from both dataframes. However, for a customer that is listed in both dataframes, I want to pick his information from his first entry, that is, January.
When I use merge, I have four options, according to http://www.dummies.com/how-to/content/how-to-use-the-merge-function-with-data-sets-in-r.html
data <- merge(data.jan,data.feb, by="ID", all=TRUE)
Regardless of which of all, all.x or all.y I choose, I get the same undesired output, called data:
data[1,]
ID AGE.x CITY.x GENDER.x AGE.y CITY.y GENDER.y
123 25 NY M 25 NY M
I think that what would work here is to first merge both databases with a join that keeps only the customers present in February but not in January (the original post illustrated this with a join diagram), then merge the resulting dataframe with data.jan using a full outer join. But I don't know how to code this in R.
Thanks,
Bernardo
d1 <- data.frame(x = 1:9,  y = 1:9,   z = 1:9)
d2 <- data.frame(x = 1:10, y = 11:20, z = 21:30)  # example data
d3 <- merge(d1, d2, by = "x", all = TRUE)         # merge
# keep the original columns from January (i.e. y.x, z.x)
# but replace the NAs in those columns with the data from February (i.e. y.y, z.y)
d3[is.na(d3[, 2]), ][, 2:3] <- d3[is.na(d3[, 2]), ][, 4:5]
#> d3[, 1:3]
# x y.x z.x
#1 1 1 1
#2 2 2 2
#3 3 3 3
#4 4 4 4
#5 5 5 5
#6 6 6 6
#7 7 7 7
#8 8 8 8
#9 9 9 9
#10 10 20 30
This may be tiresome for more than 2 months though; perhaps you should consider @flodel's comments. Also note there are pitfalls when your original Jan data has NAs (and you still want the first month's data, NA or not, retained), although you never mentioned them in your question.
Try:
data <- merge(data.jan, data.frame(ID = data.feb$ID), by = "ID")
although I haven't tested it since there is no data; but if you just join the ID column from Feb, it should keep only the rows that appear in both frames.
@user1317221_G's solution is excellent. If your tables are large (lots of customers), data.table might be faster:
library(data.table)
# some sample data
jan <- data.table(id=1:10, age=round(runif(10,25,55)), city=c("NY","LA","BOS","CHI","DC"), gender=rep(c("M","F"),each=5))
new <- data.table(id=11:16, age=round(runif(6,25,55)), city=c("NY","LA","BOS","CHI","DC","SF"), gender=c("M","F"))
feb <- rbind(jan[6:10,],new)
new <- data.table(id=17:22, age=round(runif(6,25,55)), city=c("NY","LA","BOS","CHI","DC","SF"), gender=c("M","F"))
mar <- rbind(jan[1:5,],new)
setkey(jan,id)
setkey(feb,id)
join <- data.table(merge(jan, feb, by="id", all=T))
join[is.na(age.x) , names(join)[2:4]:= join[is.na(age.x),5:7,with=F]]
Edit: This adds processing for multiple months.
f <- function(x, y) {
  setkey(x, id)
  setkey(y, id)
  join <- data.table(merge(x, y, by = "id", all = TRUE))
  join[is.na(age.x), names(join)[2:4] := join[is.na(age.x), 5:7, with = FALSE]]
  join[, names(join)[5:7] := NULL]                 # get rid of extra columns
  setnames(join, 2:4, c("age", "city", "gender"))  # rename columns that remain
  return(join)
}
Reduce(f, list(jan, feb, mar))
Reduce(...) applies the function f(...) to the elements of the list in turn: first to jan and feb, then to that result and mar, and so on.
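As one more sketch of the "first entry wins" idea, this time with dplyr (my own suggestion, not taken from the answers above): stack the months in chronological order and keep the first row seen for each ID:

```r
library(dplyr)

# Toy monthly snapshots; ID 2 and 3 appear in both months,
# ID 4 is new in February
jan <- data.frame(ID = 1:3, AGE = c(25, 30, 41), CITY = c("NY", "LA", "BOS"))
feb <- data.frame(ID = 2:4, AGE = c(31, 41, 22), CITY = c("LA", "BOS", "CHI"))

# bind_rows() stacks the months in order; distinct(.keep_all = TRUE)
# keeps the FIRST occurrence of each ID, i.e. the earliest month's record
customers <- bind_rows(jan, feb) %>%
  distinct(ID, .keep_all = TRUE)
customers
```

This scales to any number of months with bind_rows(list_of_months) and needs no column renaming, though unlike the merge approach it cannot fill individual NA fields from later months.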
