Converting raw data into long format - count

I am importing data that is neither long nor wide:
clear
input str1 id purchased sold
A 2017 .
B . .
C 2016 2019
C 2018 .
D 2018 2019
D 2018 .
end
My goal is to get the data in the following long format, reflecting the count in each year:
Identifier Year Inventory
A 2016 0
A 2017 1
A 2018 1
A 2019 1
B 2016 0
B 2017 0
B 2018 0
B 2019 0
C 2016 1
C 2017 1
C 2018 2
C 2019 1
D 2016 0
D 2017 0
D 2018 2
D 2019 1
My initial approach would be to transform the data into wide format first, that is, one row per identifier with a column for each year between 2016 and 2019, and then convert that into the desired long format. However, this seems inefficient.
Is there any shorter and more efficient method to do this, as I have a much larger dataset?

This needs several small tricks. The most crucial are reshape long and fillin.
The inventory is essentially a running sum of purchases minus sales.
clear
input str1 Identifier Purchased Sold
A 2017 .
B . .
C 2016 2019
C 2018 .
D 2018 2019
D 2018 .
end
generate long id = _n
rename (Purchased Sold) year=
reshape long year, i(id) j(Event) string
drop id
fillin Identifier year
drop _fillin
drop if missing(year)
bysort Identifier (year Event) : generate inventory = sum((Event == "Purchased") - (Event == "Sold"))
drop Event
bysort Identifier year : keep if _n == _N
list, sepby(Identifier)
+----------------------------+
| Identi~r year invent~y |
|----------------------------|
1. | A 2016 0 |
2. | A 2017 1 |
3. | A 2018 1 |
4. | A 2019 1 |
|----------------------------|
5. | B 2016 0 |
6. | B 2017 0 |
7. | B 2018 0 |
8. | B 2019 0 |
|----------------------------|
9. | C 2016 1 |
10. | C 2017 1 |
11. | C 2018 2 |
12. | C 2019 1 |
|----------------------------|
13. | D 2016 0 |
14. | D 2017 0 |
15. | D 2018 2 |
16. | D 2019 1 |
+----------------------------+
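For readers who work in Python rather than Stata, the same idea (melt to long, fill in the full identifier-by-year grid, then take a running sum of purchases minus sales) can be sketched with pandas. This is not part of the original answer; the variable names are my own:

```python
import pandas as pd

# Raw events: each row may record a purchase year and/or a sale year.
raw = pd.DataFrame({
    "Identifier": ["A", "B", "C", "C", "D", "D"],
    "Purchased": [2017, None, 2016, 2018, 2018, 2018],
    "Sold": [None, None, 2019, None, 2019, None],
})

# Melt to long: one row per (Identifier, Event, year), dropping empty cells.
long = raw.melt(id_vars="Identifier", var_name="Event",
                value_name="year").dropna()
long["year"] = long["year"].astype(int)

# Net change per identifier-year: +1 per purchase, -1 per sale.
long["delta"] = long["Event"].map({"Purchased": 1, "Sold": -1})
net = long.groupby(["Identifier", "year"])["delta"].sum()

# Reindex onto the full Identifier x year grid (the analogue of fillin),
# then cumulate within each identifier.
grid = pd.MultiIndex.from_product(
    [raw["Identifier"].unique(), range(2016, 2020)],
    names=["Identifier", "year"])
inventory = net.reindex(grid, fill_value=0).groupby(level="Identifier").cumsum()
print(inventory.reset_index(name="Inventory"))
```

The reindex-then-cumsum step mirrors the `fillin` plus running-`sum()` combination in the Stata answer.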

Related

Weekly Weight Based on a category using dplyr in R

I have the following data and am looking to create the "Final Col" shown below using dplyr in R. I would appreciate your ideas.
| Year | Week | MainCat|Qty |Final Col |
|:----: |:------: |:-----: |:-----:|:------------:|
| 2017 | 1 | Edible |69 |69/(69+12) |
| 2017 | 2 | Edible |12 |12/(69+12) |
| 2017 | 1 | Flowers|88 |88/(88+47) |
| 2017 | 2 | Flowers|47 |47/(88+47) |
| 2018 | 1 | Edible |90 |90/(90+35) |
| 2018 | 2 | Edible |35 |35/(90+35) |
| 2018 | 1 | Flowers|78 |78/(78+85) |
| 2018 | 2 | Flowers|85 |85/(78+85) |
This can be done with a group_by operation: grouped by 'Year' and 'MainCat', divide 'Qty' by the sum of 'Qty' to create the 'Final' column.
library(dplyr)
df1 <- df1 %>%
group_by(Year, MainCat) %>%
mutate(Final = Qty/sum(Qty))
You can use prop.table:
library(dplyr)
df %>% group_by(Year, MainCat) %>% mutate(Final = prop.table(Qty))
# Year Week MainCat Qty Final
# <int> <int> <chr> <int> <dbl>
#1 2017 1 Edible 69 0.852
#2 2017 2 Edible 12 0.148
#3 2017 1 Flowers 88 0.652
#4 2017 2 Flowers 47 0.348
#5 2018 1 Edible 90 0.72
#6 2018 2 Edible 35 0.28
#7 2018 1 Flowers 78 0.479
#8 2018 2 Flowers 85 0.521
You can also do this in base R:
df$Final <- with(df, ave(Qty, Year, MainCat, FUN = prop.table))
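The same within-group proportion can be written in Python with pandas; dividing by a grouped `transform("sum")` plays the role of `prop.table` inside `group_by`. A sketch, not from the original answers, using the question's data:

```python
import pandas as pd

# Frame mirroring the question's table.
df = pd.DataFrame({
    "Year":    [2017, 2017, 2017, 2017, 2018, 2018, 2018, 2018],
    "Week":    [1, 2, 1, 2, 1, 2, 1, 2],
    "MainCat": ["Edible", "Edible", "Flowers", "Flowers",
                "Edible", "Edible", "Flowers", "Flowers"],
    "Qty":     [69, 12, 88, 47, 90, 35, 78, 85],
})

# Divide each Qty by its (Year, MainCat) group total.
df["Final"] = df["Qty"] / df.groupby(["Year", "MainCat"])["Qty"].transform("sum")
print(df)
```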

Swap a value with last year's value in the same month if the month-year combination is not equal to the system's month-year

I've a table as under
+----+-------+---------+
| ID | VALUE | DATE |
+----+-------+---------+
| 1 | 10 | 2019-09 |
| 1 | 12 | 2018-09 |
| 2 | 13 | 2019-10 |
| 2 | 14 | 2018-10 |
| 3 | 67 | 2019-01 |
| 3 | 78 | 2018-01 |
+----+-------+---------+
I want to be able to swap the VALUE column for all IDs where DATE != the year-month of the system date,
and if DATE == the year-month of the system date, then just keep this year's value.
the resulting table I need is as under
+----+-------+---------+
| ID | VALUE | DATE |
+----+-------+---------+
| 1 | 12 | 2019-09 |
| 2 | 13 | 2019-10 |
| 3 | 78 | 2019-01 |
+----+-------+---------+
As Jon and Maurits noted, your example is unclear: you show no line with what you consider a wrong format, and you mention the "current year" but do not describe the expected output for the next year, for instance.
Here is an attempt at code to actually answer your question:
library(dplyr)
x = read.table(text = "
ID VALUE DATE
1 10 2019-09
1 12 2018-09
1 12 2018-09-04
1 12 2018-99
2 13 2019-10
2 14 2018-10
3 67 2019-01
3 78 2018-01
", header=T)
x %>%
mutate(DATE = paste0(DATE, "-01") %>% as.Date("%Y-%m-%d")) %>%
group_by(ID) %>%
filter(DATE==max(DATE, na.rm=T))
I inserted two lines with a "wrong" format (by my reading) and treated the "current year" as the maximum year found in the column for each ID.
These may be wrong assumptions, but I would need more information to answer better.
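The answer's filter-to-latest-date interpretation has a close pandas equivalent: parse the year-month strings and keep the row with the maximum date per ID. A sketch under the same assumptions, not from the original answer:

```python
import pandas as pd

# The question's table (well-formed rows only).
df = pd.DataFrame({
    "ID":    [1, 1, 2, 2, 3, 3],
    "VALUE": [10, 12, 13, 14, 67, 78],
    "DATE":  ["2019-09", "2018-09", "2019-10",
              "2018-10", "2019-01", "2018-01"],
})

# Parse the year-month strings, then keep the most recent row per ID
# (the analogue of group_by(ID) %>% filter(DATE == max(DATE))).
df["DATE"] = pd.to_datetime(df["DATE"], format="%Y-%m")
latest = df.loc[df.groupby("ID")["DATE"].idxmax()].reset_index(drop=True)
print(latest)
```

Note that, like the R answer, this keeps the latest row's own VALUE; it does not implement the swap described in the question, which the answer flags as ambiguous.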

Rank df by values and sum by unique variables [duplicate]

This question already has answers here:
Can dplyr summarise over several variables without listing each one? [duplicate]
(2 answers)
Aggregate / summarize multiple variables per group (e.g. sum, mean)
(10 answers)
Closed 6 years ago.
I have a large dataset containing the names of hospitals, the hospital groups and then the number of presenting patients by month. I'm trying to use dplyr to create a summary that contains the total number of presenting patients each month, aggregated by hospital group. The data frame looks like this:
Hospital | Hospital_group | Jan 03 | Feb 03 | Mar 03 | Apr 03 | .....
---------------------------------------------------------------
Hosp 1 | Group A | 5 | 5 | 6 | 4 | .....
---------------------------------------------------------------
Hosp 2 | Group A | 6 | 3 | 8 | 2 | .....
---------------------------------------------------------------
Hosp 3 | Group B | 5 | 5 | 6 | 4 | .....
---------------------------------------------------------------
Hosp 4 | Group B | 3 | 7 | 2 | 1 | .....
---------------------------------------------------------------
I'm trying to create a new dataframe that looks like this:
Hospital_group |Jan 03 | Feb 03 | Mar 03 | Apr 03 | .....
----------------------------------------------------------
Group A | 11 | 8 | 14 | 6 | .....
----------------------------------------------------------
Group B | 8 | 12 | 8 | 5 | .....
----------------------------------------------------------
I'm trying to use dplyr to summarise the data but am a little stuck (I am very new at this, as you might have guessed). I've managed to filter out the first column (hospital name) and group_by the hospital group, but am not sure how to get a cumulative sum for each month and year (there are many date columns, so I'm hoping there is a quick and easy way to do this).
Sorry about posting such a basic question - any help or advice would be greatly appreciated.
Greg
Use summarize_all:
Example:
df <- tibble(name=c("a","b", "a","b"), colA = c(1,2,3,4), colB=c(5,6,7,8))
df
# A tibble: 4 × 3
name colA colB
<chr> <dbl> <dbl>
1 a 1 5
2 b 2 6
3 a 3 7
4 b 4 8
df %>% group_by(name) %>% summarize_all(sum)
Result:
# A tibble: 2 × 3
name colA colB
<chr> <dbl> <dbl>
1 a 4 12
2 b 6 14
Edit: In your case, your data frame contains one column that you do not want to aggregate (the hospital name). You might have to either deselect the hospital name column first, or use summarize_at(vars(-Hospital), funs(sum)) instead of summarize_all.
We can do this using base R
We split the dataframe by Hospital_group and then sum it column-wise.
do.call(rbind, lapply(split(df[-c(1, 2)], df$Hospital_group), colSums))
# Jan_03 Feb_03 Mar_03 Apr_03
#Group_A 11 8 14 6
#Group_B 8 12 8 5
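The same group-and-sum-every-column pattern translates directly to pandas: drop the per-hospital identifier, group by the remaining key, and sum. A sketch using the question's data, not part of the original answers:

```python
import pandas as pd

df = pd.DataFrame({
    "Hospital": ["Hosp 1", "Hosp 2", "Hosp 3", "Hosp 4"],
    "Hospital_group": ["Group A", "Group A", "Group B", "Group B"],
    "Jan 03": [5, 6, 5, 3],
    "Feb 03": [5, 3, 5, 7],
    "Mar 03": [6, 8, 6, 2],
    "Apr 03": [4, 2, 4, 1],
})

# Drop the column we do not want to aggregate, then sum every month
# column within each group (the analogue of summarize_all(sum)).
out = df.drop(columns="Hospital").groupby("Hospital_group").sum()
print(out)
```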

Making Pivot table with Multiple Columns and Aggregating by Unique Occurences

I'm having a tough time wrapping my head around this or finding a guideline online.
I have membership data. I want to see how long members last before dropping their membership. I can see which month they joined, and I can see how long they have been active by looking at their transaction number (it increases by 1 each month). So if I track transaction numbers for each month, I can get a waterfall of how many people joined that month and what the drop-off was.
The kicker is that sometimes there are multiple transactions within a month by the same member, but I would like to count that member only once.
Name | Joined Month | Transaction no
Adam | Jan | 1
Adam | Jan | 2
Adam | Jan | 2
Ben | Jan | 1
Ben | Jan | 2
Ben | Jan | 3
Ben | Jan | 4
Cathy| Jan | 1
Donna| Feb | 1
Donna| Feb | 2
Donna| Feb | 3
Evan | Mar | 1
Evan | Mar | 1
Frank | Mar | 1
Frank | Mar | 2
Aggregating for distinct members with months as columns, the result would look something like this:
Transaction# | Jan | Feb | March
1 | 3 | 1 | 2
2 | 2 | 1 | 1
3 | 1 | 1 | 0
4 | 1 | 0 | 0
Any tips or pointers in the right direction would be very helpful. Should I be using reshape2 or a similar package? Hopefully I did not butcher the explanation or the formatting; please feel free to ask any questions.
Thank you!
Below is a reproducible example that uses the tidyverse functions dplyr::n_distinct and tidyr::spread.
I have first represented your data as a tibble (a data frame would work equally well).
Next we group by Transactionno and JoinedMonth before counting distinct Names. To get the table format you request, we use tidyr::spread. If you want the resulting columns in month order, ensure your data frame has JoinedMonth as an ordered factor.
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(tibble)
library(tidyr)
x <- tribble(
~Name , ~JoinedMonth, ~Transactionno,
"Adam" , "Jan" , 1,
"Adam" , "Jan" , 2,
"Adam" , "Jan" , 2,
"Ben" , "Jan" , 1,
"Ben" , "Jan" , 2,
"Ben" , "Jan" , 3,
"Ben" , "Jan" , 4,
"Cathy", "Jan" , 1,
"Donna", "Feb" , 1,
"Donna", "Feb" , 2,
"Donna", "Feb" , 3,
"Evan" , "Mar" , 1,
"Evan" , "Mar" , 1,
"Frank" , "Mar" , 1,
"Frank" , "Mar" , 2
)
x %>%
group_by(Transactionno, JoinedMonth) %>%
summarise(ct = n_distinct(Name)) %>%
tidyr::spread(JoinedMonth, ct, fill = 0)
#> # A tibble: 4 x 4
#> # Groups: Transactionno [4]
#> Transactionno Feb Jan Mar
#> <dbl> <dbl> <dbl> <dbl>
#> 1 1. 1. 3. 2.
#> 2 2. 1. 2. 1.
#> 3 3. 1. 1. 0.
#> 4 4. 0. 1. 0.
1) xtabs This one-liner uses base R and the input DF shown reproducibly in the Note below. Note that we assume that Joined.Month is a factor with levels Jan, Feb, Mar to ensure that the output is sorted in that order (rather than alphabetically).
xtabs(~ Transaction.no + Joined.Month, unique(DF))
giving:
Joined.Month
Transaction.no Jan Feb Mar
1 1 3 2
2 1 2 1
3 1 1 0
4 0 1 0
2) table Another base R approach.
with(unique(DF), table(Transaction.no, Joined.Month))
giving:
Joined.Month
Transaction.no Jan Feb Mar
1 3 1 2
2 2 1 1
3 1 1 0
4 1 0 0
2a) This would also work and is shorter but not quite as clear:
table(unique(DF)[3:2])
3) tapply This also uses only base R:
u <- unique(DF)
tapply(u[[1]], u[3:2], length, default = 0)
giving:
Joined.Month
Transaction.no Jan Feb Mar
1 3 1 2
2 2 1 1
3 1 1 0
4 1 0 0
Note
DF in reproducible form is assumed to be:
Lines <- "Name | Joined Month | Transaction no
Adam | Jan | 1
Adam | Jan | 2
Adam | Jan | 2
Ben | Jan | 1
Ben | Jan | 2
Ben | Jan | 3
Ben | Jan | 4
Cathy| Jan | 1
Donna| Feb | 1
Donna| Feb | 2
Donna| Feb | 3
Evan | Mar | 1
Evan | Mar | 1
Frank | Mar | 1
Frank | Mar | 2"
DF <- read.table(text = Lines, header = TRUE, sep = "|",
strip.white = TRUE, as.is = TRUE)
DF$Joined.Month <- factor(DF$Joined.Month, lev = month.abb[1:3])
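The distinct-count cross-tabulation above also has a compact pandas form: a pivot table with `nunique` as the aggregator, using an ordered categorical to fix the month order (the analogue of the factor trick in the base R answers). A sketch, not from the original answers:

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Adam", "Adam", "Adam", "Ben", "Ben", "Ben", "Ben", "Cathy",
             "Donna", "Donna", "Donna", "Evan", "Evan", "Frank", "Frank"],
    "JoinedMonth": ["Jan"] * 8 + ["Feb"] * 3 + ["Mar"] * 4,
    "Transactionno": [1, 2, 2, 1, 2, 3, 4, 1, 1, 2, 3, 1, 1, 1, 2],
})

# Ordered categorical keeps the columns in Jan, Feb, Mar order.
df["JoinedMonth"] = pd.Categorical(
    df["JoinedMonth"], categories=["Jan", "Feb", "Mar"], ordered=True)

# Count distinct members per (transaction number, month); duplicates of the
# same member within a month count once, as the question requires.
tab = df.pivot_table(index="Transactionno", columns="JoinedMonth",
                     values="Name", aggfunc="nunique", fill_value=0,
                     observed=False)
print(tab)
```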

How to use Row_Number based on Number of Days?

How can I group and rank records based on 7-day windows?
Call 1 - 06-Jun-14 16.39.14 Rank 1
Call 7 - 10-Jun-14 14.28.40 Rank 7
After 7 days, whenever the next call date occurs,
I need to watch the next 7 days and rank accordingly:
Call 1 - 27-Jun-14 11.44.35 Rank 1
Call 4 - 03-Jul-14 14.23.39 Rank 4
CALL_DATE ROW_NUMBER
06-Jun-14 16.39.14 1
06-Jun-14 17.29.27 2
07-Jun-14 09.13.18 3
07-Jun-14 14.45.52 4
08-Jun-14 13.05.44 5
08-Jun-14 13.14.49 6
10-Jun-14 14.28.40 7
27-Jun-14 11.44.35 1
27-Jun-14 11.46.27 2
27-Jun-14 12.00.21 3
03-Jul-14 14.23.39 4
You can calculate the day number within the range by using the first_value() analytic function and taking the difference from the first call date; divide that by seven to get a week number within the data; then use row_number() to rank each call within its calculated week.
select call_date,
row_number() over (partition by week_num order by call_date) as row_num
from (
select call_date,
ceil((trunc(call_date)
- trunc(first_value(call_date) over (order by call_date))
+ 1) / 7) as week_num
from t42
)
order by call_date;
Which gives:
| CALL_DATE | ROW_NUM |
|-----------------------------|---------|
| June, 06 2014 16:39:14+0000 | 1 |
| June, 06 2014 17:29:27+0000 | 2 |
| June, 07 2014 09:13:18+0000 | 3 |
| June, 07 2014 14:45:52+0000 | 4 |
| June, 08 2014 13:05:44+0000 | 5 |
| June, 08 2014 13:14:49+0000 | 6 |
| June, 10 2014 14:28:40+0000 | 7 |
| June, 27 2014 11:44:35+0000 | 1 |
| June, 27 2014 11:46:27+0000 | 2 |
| June, 27 2014 12:00:21+0000 | 3 |
| July, 03 2014 14:23:39+0000 | 4 |
SQL Fiddle showing some of the intermediate steps and the final result.
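The same bucketing logic (day offset from the first call, divided into 7-day windows, then numbering within each window) can be checked outside the database with a short Python sketch. This is illustrative only, not part of the original answer:

```python
import math
from datetime import datetime

calls = [
    "06-Jun-14 16.39.14", "06-Jun-14 17.29.27", "07-Jun-14 09.13.18",
    "07-Jun-14 14.45.52", "08-Jun-14 13.05.44", "08-Jun-14 13.14.49",
    "10-Jun-14 14.28.40", "27-Jun-14 11.44.35", "27-Jun-14 11.46.27",
    "27-Jun-14 12.00.21", "03-Jul-14 14.23.39",
]
dates = sorted(datetime.strptime(s, "%d-%b-%y %H.%M.%S") for s in calls)

# Bucket each call into a 7-day window counted from the first call's date
# (ceil((day_offset + 1) / 7), as in the SQL), then number the calls
# within each window in date order.
first_day = dates[0].date()
counts = {}
result = []
for d in dates:
    week_num = math.ceil(((d.date() - first_day).days + 1) / 7)
    counts[week_num] = counts.get(week_num, 0) + 1
    result.append((d, counts[week_num]))
```

Here 27-Jun falls on day 22, so ceil(22/7) starts a new window (week 4) and the numbering restarts at 1, matching the query output above.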
