Combine dplyr mutate function with a search through the whole table - r

I'm quite new to R and especially to the tidyverse. I'm trying to write a script with which we can rewrite a list of taxa. We already have one that uses quite a lot of for loops and if statements, and I want to try to simplify it with the tidyverse, but I'm stuck on how to do that.
What I have is a table that looks something like this (really simplified):
taxon_list <- tibble(
  name = c("cockroach", "cockroach2", "grasshopper", "spider", "lobster", "insect", "crustacea", "arachnid"),
  Id = c(445, 448, 446, 778, 543, 200, 400, 300),
  parent_ID = c(200, 200, 200, 300, 400, 200, 400, 300),
  rank = c("genus", "genus", "genus", "genus", "genus", "order", "order", "order")
)
+-------------+-----+-----------+----------+
| name | Id | parent_ID | rank |
+=============+=====+===========+==========+
| cockroach | 445 | 200 | genus |
| cockroach2 | 448 | 200 | genus |
| grasshopper | 446 | 200 | genus |
| spider | 778 | 300 | genus |
| lobster | 543 | 400 | genus |
| insect | 200 | 200 | order |
| crustacea | 400 | 400 | order |
| arachnid | 300 | 300 | order |
+-------------+-----+-----------+----------+
Now I want to rearrange it so that I get a new column containing the order name that matches the parent_ID (i.e., when parent_ID == Id, write that row's name into an order column). The end result should look kinda like this:
+-------------+------------+------+-----------+
| name | order | Id | parent_ID |
+=============+============+======+===========+
| cockroach | insect | 445 | 200 |
| cockroach2 | insect | 448 | 200 |
| grasshopper | insect | 446 | 200 |
| spider | arachnid | 778 | 300 |
| lobster | crustacea | 543 | 400 |
+-------------+------------+------+-----------+
I tried to combine mutate with an ifelse statement, but this just adds NAs to the whole order column. The tibble is named taxon_list:
taxon_list %>%
  mutate(order = ifelse(parent_ID == Id, name, NA))
I know this will not work because it doesn't search the whole dataset for the correct row (that's what I did before with all the for loops). Maybe someone can point me in the right direction?

One way is to filter each rank type into two separate data frames, subset using select, and merge the two.
library(tidyverse)

df <- tibble(
  name = c("cockroach", "cockroach2", "grasshopper", "spider", "lobster", "insect", "crustacea", "arachnid"),
  Id = c(445, 448, 446, 778, 543, 200, 400, 300),
  parent_ID = c(200, 200, 200, 300, 400, 200, 400, 300),
  rank = c("genus", "genus", "genus", "genus", "genus", "order", "order", "order")
)

df_order <- df %>%
  filter(rank == "order") %>%
  select(order = name, parent_ID)

df_genus <- df %>%
  filter(rank == "genus") %>%
  select(name, Id, parent_ID) %>%
  merge(df_order, by = "parent_ID")
Result:
parent_ID name Id order
1 200 cockroach 445 insect
2 200 cockroach2 448 insect
3 200 grasshopper 446 insect
4 300 spider 778 arachnid
5 400 lobster 543 crustacea
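Since the question asks for a tidyverse approach, the same result can also be produced with dplyr's left_join() instead of base merge(). A minimal sketch, using the df and the tidyverse already loaded above:

df %>%
  filter(rank == "genus") %>%
  select(name, Id, parent_ID) %>%
  left_join(df %>% filter(rank == "order") %>% select(order = name, parent_ID),
            by = "parent_ID")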

Related

How do I merge 2 dataframes without a corresponding column to match by?

I'm trying to use the merge() function in R. Basically I have two tables with 5000+ rows each; both have the same number of rows, but there are no corresponding columns to merge by. However, the rows are in order and correspond: the first row of dataframe1 should merge with the first row of dataframe2, the 2nd row of dataframe1 with the 2nd row of dataframe2, and so on.
Here's an example of what they could look like:
Dataframe1(df1):
+-------+-------+----------+
| Name  | Sales | Location |
+-------+-------+----------+
| Rod   | 123   | USA      |
| Kelly | 142   | CAN      |
| Sam   | 183   | USA      |
| Joyce | 99    | NED      |
+-------+-------+----------+
Dataframe2(df2):
+-----+-----+
| Sex | Age |
+-----+-----+
| M   | 23  |
| M   | 33  |
| M   | 31  |
| F   | 45  |
+-----+-----+
NOTE: this is a downsized example only.
I've tried to use the merge function; here's what I've done:
DFMerged <- merge(df1, df2)
This, however, increases both the rows and columns: with no common columns to match on, merge() performs a cross join, returning 16 rows and 5 columns for this example.
What am I missing from this function? I know there is a merge(x, y, by=) argument, but I'm unable to use a column to match them.
The output I would like is:
+-------+-------+----------+-----+-----+
| Name  | Sales | Location | Sex | Age |
+-------+-------+----------+-----+-----+
| Rod   | 123   | USA      | M   | 23  |
| Kelly | 142   | CAN      | M   | 33  |
| Sam   | 183   | USA      | M   | 31  |
| Joyce | 99    | NED      | F   | 45  |
+-------+-------+----------+-----+-----+
I've considered making an extra column in each data frame, say row#, and matching them by that.
You could use cbind:
cbind(df1, df2)
If you want to use merge, you could merge by the row names:
merge(df1, df2, by = 0)
You could use:
cbind(df1,df2)
This only works when the two data frames have the same number of rows.
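The row-number idea from the question also works, and sidesteps a subtlety of merge(df1, df2, by = 0): merging by row names matches on character strings, so with larger data the result comes back ordered "1", "10", "11", "2", ... unless re-sorted. A sketch with an explicit numeric index column:

df1$row_id <- seq_len(nrow(df1))
df2$row_id <- seq_len(nrow(df2))
DFMerged <- merge(df1, df2, by = "row_id")  # numeric key, so row order is kept
DFMerged$row_id <- NULL                     # drop the helper column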

Removing duplicated data based on each group using R

I have a dataset which contains employee id, name and bank account information. Some employees have duplicate names, with either the same employee id or different employee ids for the same name. A few also share the same bank account information under the same name, while some have different bank account numbers under the same name. The aim is to find those employees who have the same name but different bank account numbers. Here's a sample of the data:
| Emp_id | Name | Bank Account |
|--------|:-------:|-------------:|
| 123 | Joan | 6758 |
| 134 | Karyn | 1244 |
| 143 | Larry | 4900 |
| 143 | Larry | 5201 |
| 235 | Larry | 5201 |
| 433 | Larry | 5201 |
| 231 | Larry | 5201 |
| 120 | Amy | 7890 |
| 135 | Amy | 7890 |
| 150 | Chris | 1280 |
| 150 | Chris | 6565 |
| 900 | Cassy | 1280 |
| 900 | Cassy | 9873 |
I had to find the employees who were duplicates based on their names, which I could do successfully. Once that was done, I had to identify the employees with the same name but different bank account numbers. Right now the issue is that my code is not grouping the employees by name before searching for different bank accounts. Instead, it compares account numbers across different individuals, and if it finds a match it removes one of the duplicate values. For example, Chris and Cassy share the bank account number 1280, so it is treated as a duplicate and one of Chris's records (bank account 1280) is automatically removed. The output that I'm getting is shown below:
| Emp_id | Name | Bank Account |
|--------|:-----:|-------------:|
| 120 | Amy | 7890 |
| 900 | Cassy | 1280 |
| 900 | Cassy | 9873 |
| 150 | Chris | 6565 |
| 143 | Larry | 4900 |
| 143 | Larry | 5201 |
Here is the code I used:
library(dplyr)

sample <- data.frame(
  Id = c("123", "134", "143", "143", "235", "433", "231", "120", "135", "150", "150", "900", "900"),
  Name = c("Joan", "Karyn", "Larry", "Larry", "Larry", "Larry", "Larry", "Amy", "Amy", "Chris", "Chris", "Cassy", "Cassy"),
  Bank_Account = c("6758", "1244", "4900", "5201", "5201", "5201", "5201", "7890", "7890", "1280", "6565", "1280", "9873")
)

n_occur <- data.frame(table(sample$Name))
n_occur <- n_occur[n_occur$Freq > 1, ]
Duplicates <- sample[sample$Name %in% n_occur$Var1, ]
Duplicates <- Duplicates %>% arrange(Name)
Duplicates <- Duplicates[!duplicated(Duplicates$Bank_Account), ]
The actual output, however, should consider the bank account numbers within each name. It should look something like this:
| Emp_id | Name | Bank Account |
|--------|:-------:|-------------:|
| 900    | Cassy   | 1280         |
| 900    | Cassy   | 9873         |
| 150 | Chris | 1280 |
| 150 | Chris | 6565 |
| 143 | Larry | 4900 |
| 143 | Larry | 5201 |
Can someone please direct me towards right code?
We can use n_distinct to filter
library(dplyr)
sample %>%
  group_by(Name) %>%
  filter(n() > 1) %>%
  group_by(Id, add = TRUE) %>%
  filter(n_distinct(Bank_Account) > 1) %>%
  arrange(desc(Id))
# A tibble: 6 x 3
# Groups: Name, Id [3]
# Id Name Bank_Account
# <fct> <fct> <fct>
#1 900 Cassy 1280
#2 900 Cassy 9873
#3 150 Chris 1280
#4 150 Chris 6565
#5 143 Larry 4900
#6 143 Larry 5201
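For comparison, a base R sketch of the same logic (using the sample data frame above): first keep the duplicated names, then keep only the Name/Id pairs that map to more than one distinct account.

dup_names <- names(which(table(sample$Name) > 1))
d <- sample[sample$Name %in% dup_names, ]

key <- paste(d$Name, d$Id)                    # one key per Name/Id pair
n_acct <- tapply(d$Bank_Account, key, function(x) length(unique(x)))

result <- d[n_acct[key] > 1, ]                # pairs with >1 distinct account
result[order(result$Name, result$Bank_Account), ]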
Step 1 - Identifying duplicate names:
step_1 <- sample %>%
  arrange(Name) %>%
  mutate(dup = duplicated(Name)) %>%
  filter(Name %in% unique(as.character(Name[dup == TRUE])))
Step 2 - Identifying duplicate accounts for these names:
step_2 <- step_1 %>%
  group_by(Name, Bank_Account) %>%
  mutate(dup = duplicated(Bank_Account)) %>%
  filter(dup == FALSE)

Grouping data based on repetitive records using R

I have a dataset which contains repeated/common records. It looks something like this:
| Vendor | Buyer | Amount |
|--------|:-----:|-------:|
| A | P | 100 |
| B | P | 150 |
| C | Q | 300 |
| A | P | 290 |
I need to group similar records together, but I do not want to summarize my amounts; I want each amount kept as an individual row. The output should look something like this:
| Vendor | Buyer | Amount |
|--------|:-----:|-------:|
| A | P | 100 |
| A | P | 290 |
| | | |
| B | P | 150 |
| | | |
| C | Q | 300 |
I thought of using split(), but since my original data has too many records, the split function creates too many lists and it becomes tedious to create new datasets from them. How can I achieve the output above with some other method?
EDIT:
Let us assume that we have an additional column called date and the dataset now looks like this:
| Vendor | Buyer | Amount | Date |
|--------|:-----:|-------:|-----------|
| A | P | 100 | 3/6/2019 |
| B | P | 150 | 7/6/2018 |
| C | Q | 300 | 4/21/2018 |
| A | P | 290 | 6/5/2018 |
Once each vendor and buyer pair is grouped together, I need to arrange the dates in ascending order within each group, so that it looks something like this:
| Vendor | Buyer | Amount | Date |
|--------|:-----:|-------:|-----------|
| A | P | 290 | 6/5/2018 |
| A | P | 100 | 3/6/2019 |
| | | | |
| B | P | 150 | 7/6/2018 |
| | | | |
| C | Q | 300 | 4/21/2018 |
and then remove the single transactions to get a final table containing only:
| Vendor | Buyer | Amount | Date |
|--------|:-----:|-------:|----------|
| A | P | 290 | 6/5/2018 |
| A | P | 100 | 3/6/2019 |
In the following we sort the data frame and add a group column which allows easy subsequent processing of individual groups. For example, to process the groups without creating a large split of DF:
for(g in unique(DFout$group)) {
  DFsub <- subset(DFout, group == g)
  ... process DFsub ...
}
1) Base R Sort the data and then assign the group column using cumsum on the non-duplicated elements.
o <- with(DF, order(Vendor, Buyer))
DFo <- DF[o, ]
DFout <- transform(DFo, group = cumsum(!duplicated(data.frame(Vendor, Buyer))))
DFout
giving:
Vendor Buyer Amount group
1 A P 100 1
4 A P 290 1
2 B P 150 2
3 C Q 300 3
I am not sure this is such a good idea to do in the first place, but if you really want to add a row of NAs after each group:
ix <- unname(unlist(tapply(DFout$group, DFout$group, function(x) c(x, NA))))
ix[!is.na(ix)] <- seq_len(nrow(DFout))
DFout[ix, ]
2) data.table Convert to data.table, set the key (which sorts it) and use rleid to assign the group number.
library(data.table)
DT <- data.table(DF)
setkey(DT, Vendor, Buyer)
DT[, group := rleid(Vendor, Buyer)]
3) sqldf Another approach is to use SQL. This requires the development version of RSQLite on GitHub. Here dense_rank() acts similarly to rleid above.
library(sqldf)
sqldf("select *, dense_rank() over (order by Vendor, Buyer) as [group]
from DF
order by Vendor, Buyer")
giving:
Vendor Buyer Amount group
1 A P 100 1
2 A P 290 1
3 B P 150 2
4 C Q 300 3
Note
DF <- structure(list(Vendor = structure(c(1L, 2L, 3L, 1L), .Label = c("A",
"B", "C"), class = "factor"), Buyer = structure(c(1L, 1L, 2L,
1L), .Label = c("P", "Q"), class = "factor"), Amount = c(100L,
150L, 300L, 290L)), class = "data.frame", row.names = c(NA, -4L
))
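For the edited requirement (a Date column, ascending dates within each Vendor/Buyer group, and single transactions dropped), here is a dplyr sketch. It assumes a DF that carries a Date column in month/day/year string form as in the edited example:

library(dplyr)

DF %>%
  mutate(Date = as.Date(Date, format = "%m/%d/%Y")) %>%
  group_by(Vendor, Buyer) %>%
  filter(n() > 1) %>%                 # drop single transactions
  arrange(Vendor, Buyer, Date) %>%    # ascending dates within each group
  ungroup()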

Creating a new table that shows the percent change between two different categories from a single column in R

I'm trying to learn how to use some of the functions in the R "reshape2" package, specifically dcast. I want to create a table that shows an aggregate sum (the sum of one category of data for all files, divided by the max "RepNum" in one "Case") for two software versions, plus the percent change between the two.
Here's what my data set looks like (example data):
| FileName | Version | Category | Value | TestNum | RepNum | Case |
|:--------:|:-------:|:---------:|:-----:|:-------:|:------:|:-----:|
| File1 | 1.0.18 | Category1 | 32.5 | 11 | 1 | Case1 |
| File1 | 1.0.18 | Category1 | 31.5 | 11 | 2 | Case1 |
| File1 | 1.0.18 | Category2 | 32.3 | 11 | 1 | Case1 |
| File1 | 1.0.18 | Category2 | 31.4 | 11 | 2 | Case1 |
| File2 | 1.0.18 | Category1 | 34.6 | 11 | 1 | Case1 |
| File2 | 1.0.18 | Category1 | 34.7 | 11 | 2 | Case1 |
| File2 | 1.0.18 | Category2 | 34.5 | 11 | 1 | Case1 |
| File2 | 1.0.18 | Category2 | 34.6 | 11 | 2 | Case1 |
| File1 | 1.0.21 | Category1 | 31.7 | 12 | 1 | Case1 |
| File1 | 1.0.21 | Category1 | 32.0 | 12 | 2 | Case1 |
| File1 | 1.0.21 | Category2 | 31.5 | 12 | 1 | Case1 |
| File1 | 1.0.21 | Category2 | 32.4 | 12 | 2 | Case1 |
| File2 | 1.0.21 | Category1 | 31.5 | 12 | 1 | Case1 |
| File2 | 1.0.21 | Category1 | 34.6 | 12 | 2 | Case1 |
| File2 | 1.0.21 | Category2 | 31.7 | 12 | 1 | Case1 |
| File2 | 1.0.21 | Category2 | 32.4 | 12 | 2 | Case1 |
| File1 | 1.0.18 | Category1 | 32.0 | 11 | 1 | Case2 |
| File1 | 1.0.18 | Category1 | 34.6 | 11 | 2 | Case2 |
| File1 | 1.0.18 | Category2 | 34.6 | 11 | 1 | Case2 |
| File1 | 1.0.18 | Category2 | 34.7 | 11 | 2 | Case2 |
| File2 | 1.0.18 | Category1 | 32.3 | 11 | 1 | Case2 |
| File2 | 1.0.18 | Category1 | 34.7 | 11 | 2 | Case2 |
| File2 | 1.0.18 | Category2 | 31.4 | 11 | 1 | Case2 |
| File2 | 1.0.18 | Category2 | 32.3 | 11 | 2 | Case2 |
| File1 | 1.0.21 | Category1 | 32.4 | 12 | 1 | Case2 |
| File1 | 1.0.21 | Category1 | 34.7 | 12 | 2 | Case2 |
| File1 | 1.0.21 | Category2 | 31.5 | 12 | 1 | Case2 |
| File1 | 1.0.21 | Category2 | 34.6 | 12 | 2 | Case2 |
| File2 | 1.0.21 | Category1 | 31.7 | 12 | 1 | Case2 |
| File2 | 1.0.21 | Category1 | 31.4 | 12 | 2 | Case2 |
| File2 | 1.0.21 | Category2 | 34.5 | 12 | 1 | Case2 |
| File2 | 1.0.21 | Category2 | 31.5 | 12 | 2 | Case2 |
The actual data set has 6 unique files, the two most recent "TestNums & Versions", 2 unique categories, and 4 unique cases.
Using the magic of the internet, I was able to cobble together a table that looks like this for a different need (but the code should be similar):
| FileName | Category | 1.0.1 | 1.0.2 | PercentChange |
|:--------:|:---------:|:-----:|:-----:|:-------------:|
| File1 | Category1 | 18.19 | 18.18 | -0.0045808520 |
| File1 | Category2 | 18.05 | 18.06 | -0.0005075721 |
| File2 | Category1 | 19.27 | 18.83 | -0.0224913494 |
| File2 | Category2 | 19.13 | 18.69 | -0.0231780146 |
| File3 | Category1 | 26.02 | 26.91 | 0.0342729019 |
| File3 | Category2 | 25.88 | 26.75 | 0.0335598775 |
| File4 | Category1 | 31.28 | 28.70 | -0.0823371327 |
| File4 | Category2 | 31.13 | 28.56 | -0.0826670833 |
| File5 | Category1 | 31.77 | 25.45 | -0.1999731215 |
| File5 | Category2 | 31.62 | 25.30 | -0.0117180458 |
| File6 | Category1 | 46.23 | 45.68 | -0.0119578545 |
| File6 | Category2 | 46.08 | 45.53 | -0.0045808520 |
This is the code that made that table (vLatest and vPrevious are variables holding the latest and second-latest version numbers):
deviations <- subset(df, Version %in% c(vLatest, vPrevious))
deviationsCast <- dcast(deviations[, 1:4], FileName + Category ~ Version,
                        value.var = "Value", fun.aggregate = mean)
deviationsCast$PercentChange <- (deviationsCast[, ncol(deviationsCast)] -
  deviationsCast[, ncol(deviationsCast) - 1]) / deviationsCast[, ncol(deviationsCast) - 1]
I'm really just hoping someone can help me understand the syntax of dcast. The initial generation of deviationsCast is where I'm fuzziest on how everything works together. Instead of getting this per file, I want the sum of all files for each category within a unique "Case", and the percent change between the two versions.
| Case | Measure | 1.0.18 | 1.0.21 | PercentChange |
|:------:|:----------:|:------:|:------:|:-------------:|
| Case 1 | Category 1 | 110 | 100 | 9.09% |
| Case 2 | Category 1 | 95 | 89 | 9.32% |
| Case 3 | Category 1 | 92 | 84 | 8.70% |
| Case 4 | Category 1 | 83 | 75 | 9.64% |
| Case 1 | Category 2 | 112 | 101 | 9.82% |
| Case 2 | Category 2 | 96 | 89 | 7.29% |
| Case 3 | Category 2 | 94 | 86 | 8.51% |
| Case 4 | Category 2 | 83 | 76 | 8.43% |
Note: the rounding and percent sign are a nice-to-have, but a strongly preferred one.
The numbers do not reflect actual maths done correctly; they are just random numbers I put in as an example. I hope I have explained the math I'm trying to do sufficiently.
Example dataset to test with
FileName <- rep(c("File1", "File2", "File3", "File4", "File5", "File6"), times = 8, each = 6)
Version <- rep(c("1.0.18", "1.0.21"), times = 4, each = 36)
Category <- rep(c("Category1", "Category2"), times = 48, each = 3)
Value <- rpois(n = 288, lambda = 32)
TestNum <- rep(11:12, times = 4, each = 36)
RepNum <- rep(1:3, times = 96)
Case <- rep(c("Case1", "Case2", "Case3", "Case4"), each = 72)
df <- data.frame(FileName, Version, Category, Value, TestNum, RepNum, Case)
It's worth noting that the df here is essentially what the deviations data frame is in the code above (already subset with vLatest and vPrevious).
EDIT:
MrFlick's answer is almost perfect, but when I try to implement it on my actual dataset I run into problems. The issue is due to using vLatest and vPrevious as my versions instead of just writing the strings. Here's the code that I use to get those two variables:
vLatest <- unique(df[df$TestNum == max(df$TestNum), "Version"])
vPrevious <- unique(df[df$TestNum == sort(unique(df$TestNum), TRUE)[2], "Version"])
And when I tried this:
pc <- function(a, b) (b - a) / a

summary <- df %>%
  group_by(Case, Category, Version) %>%
  summarize(Value = mean(Value)) %>%
  spread(Version, Value) %>%
  mutate(Change = scales::percent(pc(vPrevious, vLatest)))
I received this error: Error: non-numeric argument to binary operator
2nd EDIT:
I tried creating new variables for the two TestNum values (since they are numeric and wouldn't need to be factors).
maxTestNum <- max(df$TestNum)
prevTestNum <- sort(unique(df$TestNum), TRUE)[2]
(The reason I don't use prevTestNum <- maxTestNum - 1 is that versions are sometimes omitted from the data results.)
However, when I put those two variables into the code, the "Change" column is all the same value.
With the sample data set supplied by the OP, and from analysing the edits, I believe the following code should produce the desired result even with the OP's production data set.
My understanding is that the OP has a data.frame with many test results but wants to show only the relative change between the two most recent versions.
The OP has asked for help in using the dcast() function. This function is available from two packages, reshape2 and data.table; here the data.table version is used for speed and concise code. In addition, functions from the forcats and formattable packages are used.
library(data.table) # CRAN version 1.10.4 used
# coerce to data.table object
DT <- data.table(df)
# reorder factor levels of Version according to TestNum
DT[, Version := forcats::fct_reorder(Version, TestNum)]
# determine the two most recent Versions
# trick: pick 1st and 2nd entry of the _reversed_ levels
vLatest <- DT[, rev(levels(Version))[1L]]
vPrevious <- DT[, rev(levels(Version))[2L]]
# filter DT, reshape from long to wide format,
# compute change for the selected columns using get(),
# use formattable package for pretty printing
summary <- dcast(
  DT[Version %in% c(vLatest, vPrevious)],
  Case + Category ~ Version, mean, value.var = "Value"
)[, PercentChange := formattable::percent(get(vLatest) / get(vPrevious) - 1.0)]
summary
Case Category 1.0.18 1.0.21 PercentChange
1: Case1 Category1 33.00000 31.94444 -3.20%
2: Case1 Category2 31.83333 31.83333 0.00%
3: Case2 Category1 33.05556 33.61111 1.68%
4: Case2 Category2 30.77778 32.94444 7.04%
5: Case3 Category1 33.16667 31.94444 -3.69%
6: Case3 Category2 33.44444 33.72222 0.83%
7: Case4 Category1 30.83333 34.66667 12.43%
8: Case4 Category2 32.27778 33.44444 3.61%
Explanations
Sorting Version
The OP has recognized that simply sorting Version alphabetically doesn't ensure the proper order. This can be demonstrated by
sort(paste0("0.0.", 0:12))
[1] "0.0.0" "0.0.1" "0.0.10" "0.0.11" "0.0.12" "0.0.2" "0.0.3" "0.0.4" "0.0.5"
[10] "0.0.6" "0.0.7" "0.0.8" "0.0.9"
where 0.0.10 comes before 0.0.2.
This is crucial because data.frame() turns character variables into factors by default (in R versions before 4.0.0).
Fortunately, TestNum is associated with Version, so TestNum is used to reorder the factor levels of Version with the help of the fct_reorder() function from the forcats package.
This also ensures that dcast() creates the new columns in the appropriate order.
Accessing columns through variables
Using vLatest / vPrevious in an expression returns the error message
Error in vLatest/vPrevious : non-numeric argument to binary operator
This is to be expected because vLatest and vPrevious contain the character values "1.0.21" and "1.0.18", respectively, which can't be divided. What is meant here is: take the values of the columns whose names are given by vLatest and vPrevious, and divide those. This is achieved by using get().
Formatting as percent
While scales::percent() returns a character vector, formattable::percent() returns a numeric vector with a percent representation, i.e., we're still able to do numeric calculations on it.
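A quick illustration of that difference (a sketch; the exact printed formatting depends on the package versions):

x <- 0.125
scales::percent(x)            # "12.5%" -- a character string
p <- formattable::percent(x)
p                             # 12.50%
p * 2                         # 25.00% -- still numeric, so arithmetic works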
Data
As given by the OP:
FileName <- rep(c("File1", "File2", "File3", "File4", "File5", "File6"),
times = 8, each = 6)
Version <- rep(c("1.0.18", "1.0.21"), times = 4, each = 36)
Category <- rep(c("Category1", "Category2"), times = 48, each = 3)
Value <- rpois(n = 288, lambda = 32)
TestNum <- rep(11:12, times = 4, each = 36)
RepNum <- rep(1:3, times = 96)
Case <- rep(c("Case1", "Case2", "Case3", "Case4"), each = 72)
df <- data.frame(FileName, Version, Category, Value, TestNum, RepNum, Case)

Copy column data when function unaggregates a single row into multiple in R

I need help taking an annual total (for each of many initiatives) and breaking it down into months using a simple division formula. I need to do this for each distinct combination of a few columns, while copying down the columns that go with each annual total to every monthly row. The loop will apply the formula to two columns and iterate over each distinct group. I've tried to explain with the example below, as it's somewhat complex.
What I have:
| Init | Name | Date | Total Savings | Total Costs |
|------|:----:|:----:|--------------:|------------:|
| A    | John | 2015 | TotalD        | TotalD      |
| A    | Mike | 2015 | TotalE        | TotalE      |
| A    | Rob  | 2015 | TotalF        | TotalF      |
| B    | John | 2015 | TotalG        | TotalG      |
| B    | Mike | 2015 | TotalH        | TotalH      |
......
| Init | Name | Date | Total Savings | Total Costs |
|------|:----:|:----:|--------------:|------------:|
| A    | John | 2016 | TotalI        | TotalI      |
| A    | Mike | 2016 | TotalJ        | TotalJ      |
| A    | Rob  | 2016 | TotalK        | TotalK      |
| B    | John | 2016 | TotalL        | TotalL      |
| B    | Mike | 2016 | TotalM        | TotalM      |
I'm going to loop a function over each row that takes the "Total Savings" and "Total Costs" and divides by 12 where Date = 2015 and by 9 where Date = 2016 (YTD to September), creating an individual row for each month. I'm essentially breaking an annual total out into one row per month of the year. I need help running that loop so it also copies the "Init" and "Name" columns down for each distinct "Init"/"Name" combination. Note that the divisor differs by year; I suppose I could separate the 2015 and 2016 datasets, use two different functions, and merge, if that would be easier. Below should be the output:
| Init | Name | Date       | Monthly Savings | Monthly Costs |
|------|:----:|:----------:|----------------:|--------------:|
| A    | John | 01-01-2015 | TotalD/12*      | MonthD        |
| A    | John | 02-01-2015 | MonthD          | MonthD        |
| A    | John | 03-01-2015 | MonthD          | MonthD        |
...
| A    | Mike | 01-01-2016 | TotalE/9*       | TotalE        |
| A    | Mike | 02-01-2016 | TotalE          | TotalE        |
| A    | Mike | 03-01-2016 | TotalE          | TotalE        |
...
| B    | John | 01-01-2015 | TotalG/12*      | MonthD        |
| B    | John | 02-01-2015 | MonthG          | MonthD        |
| B    | John | 03-01-2015 | MonthG          | MonthD        |
TotalD/12* = MonthD - this is the formula for 2015
TotalE/9* = MonthE - this is the formula for 2016
Any help would be appreciated...
As a start, here are some reproducible data, with the columns described:
myData <- data.frame(
  Init = rep(LETTERS[1:3], each = 4),
  Name = rep(c("John", "Mike"), each = 2),
  Date = 2015:2016,
  Savings = (1:12) * 1200,
  Cost = (1:12) * 2400
)
Next, set the divisor to be used for each year:
toDivide <- c("2015" = 12, "2016" = 9)
Then I use the magrittr pipe: I split the data into single rows, loop through them with lapply to expand each row into the appropriate number of rows (9 or 12) with the savings and costs divided by the number of months, and finally dplyr's bind_rows stitches the rows back together.
library(dplyr)

myData %>%
  split(1:nrow(.)) %>%
  lapply(function(x){
    data.frame(
      Init = x$Init,
      Name = x$Name,
      Date = as.Date(paste(x$Date,
                           formatC(1:toDivide[as.character(x$Date)],
                                   width = 2, flag = "0"),
                           "01", sep = "-")),
      Savings = x$Savings / toDivide[as.character(x$Date)],
      Cost = x$Cost / toDivide[as.character(x$Date)]
    )
  }) %>%
  bind_rows()
The head of this looks like:
Init Name Date Savings Cost
1 A John 2015-01-01 100.0000 200.0000
2 A John 2015-02-01 100.0000 200.0000
3 A John 2015-03-01 100.0000 200.0000
4 A John 2015-04-01 100.0000 200.0000
5 A John 2015-05-01 100.0000 200.0000
6 A John 2015-06-01 100.0000 200.0000
with similar entries for each expanded row.
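A tidyr-based sketch of the same expansion (assuming the myData and toDivide objects defined above): uncount() replicates each row according to its divisor, and its .id argument supplies the month number.

library(dplyr)
library(tidyr)

myData %>%
  mutate(months = toDivide[as.character(Date)]) %>%
  uncount(months, .id = "month") %>%           # one row per month
  mutate(
    Savings = Savings / toDivide[as.character(Date)],
    Cost    = Cost    / toDivide[as.character(Date)],
    Date    = as.Date(sprintf("%d-%02d-01", Date, month))
  ) %>%
  select(-month)                               # drop the helper column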
