How can I do this split process with this sequence in R?

I'm trying to build a corpus in the BILOU format, and I want to reuse a table in which each sentence is split across columns, with each column holding one entity. How can I split each string so that every word in a row is tagged with the sequence B_xxx, I_xxx, L_xxx, with the sequence restarting whenever the entity (column) changes?
Old dataframe:
first_entity <- c("Product and Other","Product2 and Second", "Product")
second_entity <- c("Price and Prices","Price2", "Price3 and example")
df <- data.frame(first_entity, second_entity)
df
----------------------------------------
         first_entity      second_entity
1   Product and Other   Price and Prices
2 Product2 and Second             Price2
3             Product Price3 and example
Desired dataframe:
Word Ent
1 Product B_pro
2 and I_pro
3 Other L_pro
4 Price B_pri
5 and I_pri
6 Prices L_pri
7 Product2 B_pro
8 and I_pro
9 Second L_pro
10 Price2 B_pri
11 Product B_pro
12 Price3 B_pri
13 and I_pri
14 example L_pri
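
One way to get there (a sketch, not a canonical answer): split each cell on spaces, tag the first token B_, the last token L_, and everything in between I_, then stack the results row by row. The suffix mapping below (first_entity to "pro", second_entity to "pri") is an assumption read off the desired output.
# Hypothetical mapping from column name to tag suffix (inferred from the desired output)
suffix <- c(first_entity = "pro", second_entity = "pri")
# Tag a vector of words: B_ for the first, L_ for the last, I_ in between.
# A single-word entity keeps its B_ tag, matching rows 10-12 of the desired output.
tag_tokens <- function(words, suf) {
  n <- length(words)
  tags <- rep(paste0("I_", suf), n)
  tags[1] <- paste0("B_", suf)
  if (n > 1) tags[n] <- paste0("L_", suf)
  tags
}
# Walk the rows, then the entity columns, and bind everything together
out <- do.call(rbind, lapply(seq_len(nrow(df)), function(i) {
  do.call(rbind, lapply(names(suffix), function(col) {
    words <- strsplit(df[[col]][i], " ", fixed = TRUE)[[1]]
    data.frame(Word = words, Ent = tag_tokens(words, suffix[[col]]))
  }))
}))
out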

Related

How to iterate one dataframe based on a mapping file in R?

Serial No.  Company 1  Company 2  Company 3
01          NA         2          NA
02          2          NA         5
03          NA         NA         4
04          1          NA         NA
05          NA         4          NA
I have a data structure like this, where the column headings represent companies and the row headings represent consumers who buy the products. NA means the consumer made no purchase from that company.
I have a second mapping file where the companies are represented as row headings, as follows:
Company    Country  Category
Company 1  UK       FMCG
Company 2  UK       FMCG
Company 3  India    FMCG
Company 4  US       Nicotine
The data set covers over 10000 consumers and 1000 companies. I'm getting the market share for different countries and categories using the aggregate function and the mapping file.
I want to make a loop that iterates over the values in the first data frame to change the share for different countries and categories. The idea is to choose which country's (or category's) share needs to be changed, along with the new share, and then use the mapping file to update the values for companies in that country (or category). The values need to be changed only for those consumers who buy products from companies belonging to that country (or category).
Can someone suggest how this can be done in R (preferably) or Python?
Edit:
Before iteration I will use the aggregate function in R to get the shares for a country (or category) like this -
Country  Share
UK       0.33
US       0.02
IN       0.41
IR       0.11
PK       0.13
In the loop I want to be able to set the share for some country (say, UK) to whatever is required (say, 0.5). The mapping file will be used to update the values in the first data structure for consumers who have bought products from companies in the UK.
The final output will be something like this.
Country  Share
UK       0.50
US       0.00
IN       0.38
IR       0.11
PK       0.01
Here's a guess: ultimately, this is a combination of reshaping from wide to long, then a merge/join, and finally aggregation/summarizing by group. If you need more information on any of those operations, searching those keywords (on SO) will turn up very useful material.
base R (and reshape2)
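The snippets below assume dat1 and dat2 hold the two tables from the question; a minimal construction for reference:
dat1 <- data.frame(
  `Serial No.` = c("01", "02", "03", "04", "05"),
  `Company 1`  = c(NA, 2, NA, 1, NA),
  `Company 2`  = c(2, NA, NA, NA, 4),
  `Company 3`  = c(NA, 5, 4, NA, NA),
  check.names = FALSE
)
dat2 <- data.frame(
  Company  = paste("Company", 1:4),
  Country  = c("UK", "UK", "India", "US"),
  Category = c("FMCG", "FMCG", "FMCG", "Nicotine")
)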
## reshape
dat1melted <- reshape2::melt(dat1, "Serial No.", variable.name = "Company")
dat1melted$Company <- as.character(dat1melted$Company)
dat1melted <- dat1melted[!is.na(dat1melted$value),]
dat1melted
# Serial No. Company value
# 2 02 Company 1 2
# 4 04 Company 1 1
# 6 01 Company 2 2
# 10 05 Company 2 4
# 12 02 Company 3 5
# 13 03 Company 3 4
## merge
dat1merged <- merge(dat1melted, dat2, by = "Company", all.x = TRUE)
dat1merged
# Company Serial No. value Country Category
# 1 Company 1 02 2 UK FMCG
# 2 Company 1 04 1 UK FMCG
# 3 Company 2 01 2 UK FMCG
# 4 Company 2 05 4 UK FMCG
# 5 Company 3 02 5 India FMCG
# 6 Company 3 03 4 India FMCG
## aggregate by group
aggregate(value ~ Country, data = dat1merged, FUN = sum)
# Country value
# 1 India 9
# 2 UK 9
dplyr
library(dplyr)
# library(tidyr) # pivot_longer
dat1 %>%
  ## reshape
  tidyr::pivot_longer(-`Serial No.`, names_to = "Company") %>%
  filter(!is.na(value)) %>%
  ## merge
  left_join(., dat2, by = "Company") %>%
  ## aggregate by group
  group_by(Country) %>%
  summarize(value = sum(value))
# # A tibble: 2 x 2
# Country value
# <chr> <int>
# 1 India 9
# 2 UK 9
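Since the question ultimately wants shares rather than raw totals, one hedged extra step (assuming a country's share is its total over the grand total, and reusing dat1merged from the base R section) is:
dat1merged %>%
  group_by(Country) %>%
  summarize(value = sum(value)) %>%
  mutate(Share = value / sum(value))
# # A tibble: 2 x 3
# #   Country value Share
# # 1 India       9   0.5
# # 2 UK          9   0.5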

Remove all rows of a category if zero in a % of cases

I have the following data set of weekly retail data, ordered by Category (e.g. Chocolate), Brand (e.g. Cadbury's), and Week (1-208). CBX is a unique global identifier for each brand.
Category Brand Week Sales Price CBX
33 2 1 167650. 2.20 33 - 2
33 2 2 168044. 2.18 33 - 2
33 2 3 160770 2.24 33 - 2
I now want to remove the brands that have zero sales in more than 25% of the weeks (thus keeping only brands with positive sales in at least 156 of the 208 weeks).
At first I deleted all brands with any zero sales using dplyr, but it deleted too much of the data. This was the code I used:
library(dplyr)
Final_df_ <- Final_df %>%
  group_by(CBX) %>%
  filter(!any(Sales == 0 & Price == 0))
Now I'm trying to change the code so it only deletes all rows belonging to a brand (CBX) if the sales of that brand are zero in more than 25% of the cases.
This is how far I've come:
Final_df_ <- Final_df %>%
  group_by(CBX) %>%
  filter(!((Sales == 0) > 0.75))
Thank you!
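
No answer was recorded for this one, but a plausible completion (a sketch, assuming the columns shown above) is to compute the fraction of zero-sales weeks per CBX group with mean() and filter on it:
library(dplyr)
# Keep a brand (CBX) only if its sales are zero in at most 25% of its weeks,
# i.e. positive in at least 75% of them
Final_df_ <- Final_df %>%
  group_by(CBX) %>%
  filter(mean(Sales == 0) <= 0.25) %>%
  ungroup()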

How to Have a COUNTIF Function dependent on the dates of the same row in R

My main problem is figuring out a way to count the number of days on which a particular item was sold. For example, given the following data frame, I would like to count the number of days on which item A or B was sold: item A was sold on only one day during our sample, while item B was sold 3 times but on only 2 different days. My goal is a function that outputs the number of days on which each item was sold, here (A, B) = (1, 2).
row item_name date
1 A 2016-03-04 3:49
2 B 2016-05-31 16:15
3 B 2016-05-31 16:35
4 B 2016-06-08 16:05
Try this
library(dplyr)
df1 %>% group_by(item_name) %>% summarise(days_sold = n_distinct(as.Date(date)))
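For reference, a base R equivalent (under the same assumption that df1 holds the data above) counts unique calendar dates per item with tapply():
# as.Date() drops the time of day, so repeated sales on the same day collapse into one date
tapply(as.Date(df1$date), df1$item_name, function(x) length(unique(x)))
# A B
# 1 2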

Creating a vector containing total quantities sold per delivery term

Have a look at the simplified table below. For each product I want a vector containing the quantities sold within each delivery term, where a delivery term is defined as 4 days. Looking at product A: it starts at 03/12/15, and within the first delivery term (until 07/12/15) it has sold a quantity of 4. The second delivery term starts at 08/12/15 and ends at 12/12/15; for this period there is 1 unit sold. The following delivery term starts at 13/12/15 and ends at 17/12/15; during this period no quantities are sold, so for this period the vector must have a value of 0. In the last period, finally, 2 products are sold. So basically the problem is that information about the periods in which no products are sold is missing.
Any ideas on how the vector I want can be created in R? For and while loops do not seem to give the requested results. Note that the code must be applicable to a real dataset containing over 1000 product categories, so it has to be automated in some way.
I would be very grateful if somebody could point me in the right direction.
Product Quantity Date
A 1 03/12/15
A 2 04/12/15
A 1 05/12/15
A 1 08/12/15
A 1 17/12/16
A 1 18/12/16
B 1 19/12/15
B 2 10/05/15
B 2 11/05/15
C 1 01/06/15
C 1 02/06/15
C 1 12/06/15
Assume that dt is the dataset you provided. You'll get a better understanding of the process if you run it step by step (and maybe with an even simpler dataset).
library(lubridate)
library(dplyr)
# convert the date column to Date
dt$Date = dmy(dt$Date)

dt %>%
  group_by(Product) %>%
  # create all combinations of product and day
  do(data.frame(days = seq(min(.$Date), max(.$Date), by = "1 day"))) %>%
  # distance of each day from the product's first date
  mutate(dist = as.numeric(difftime(days, min(days), units = "days"))) %>%
  ungroup() %>%
  # join the original data to get the quantities sold on each day
  left_join(dt, by = c("Product" = "Product", "days" = "Date")) %>%
  mutate(Quantity = ifelse(is.na(Quantity), 0, Quantity),  # replace NAs with 0s
         id = floor(dist / 5 + 1)) %>%  # delivery-term id (5-day blocks, matching the example periods)
  group_by(Product, id) %>%
  summarise(Sum = sum(Quantity),
            min_date = min(days),
            max_date = max(days)) %>%
  ungroup
# Product id Sum min_date max_date
# 1 A 1 4 2015-12-03 2015-12-07
# 2 A 2 1 2015-12-08 2015-12-12
# 3 A 3 0 2015-12-13 2015-12-17
# 4 A 4 0 2015-12-18 2015-12-22
# 5 A 5 0 2015-12-23 2015-12-27
# 6 A 6 0 2015-12-28 2016-01-01
# 7 A 7 0 2016-01-02 2016-01-06
# 8 A 8 0 2016-01-07 2016-01-11
# 9 A 9 0 2016-01-12 2016-01-16
# 10 A 10 0 2016-01-17 2016-01-21
# .. ... .. ... ... ...
The first row of the output tells you that for product A, in the first delivery period (id = 1), 4 units were sold in total, over the period from 3/12 to 7/12.
I would suggest {dplyr}'s summarise(), mutate() and group_by() functions. group_by() groups your data by the desired variables (in your case, product and delivery term), mutate() allows operations on grouped columns, and summarise() applies a summarising function over these groups (in your case sum(Quantity)).
So this is how it will look:
convert the date into the proper format:
library(dplyr)
df <- tbl_df(df)
df$Date <- as.Date(df$Date, format = "%d/%m/%y")
calculate the delivery terms:
df <- group_by(df, Product) %>% arrange(Date)
df <- mutate(df, term = 1 + unclass(Date - min(Date)) %/% 4)
group by product and term and calculate the sum of quantity:
df <- group_by(df, Product, term)
summarise(df, sum = sum(Quantity))
Here's a base R way:
df$groups <- ave(as.numeric(df$Date), df$Product, FUN = function(x) {
  intrvl <- findInterval(x, seq(min(x), max(x), 4))
  as.numeric(factor(intrvl))
})
df
# Product Quantity Date groups
# 1 A 1 2015-12-03 1
# 2 A 2 2015-12-04 1
# 3 A 1 2015-12-05 1
# 4 A 1 2015-12-08 2
# 5 A 1 2016-12-17 3
# 6 A 1 2016-12-18 3
# 7 B 1 2015-12-19 2
# 8 B 2 2015-05-10 1
# 9 B 2 2015-05-11 1
# 10 C 1 2015-06-01 1
# 11 C 1 2015-06-02 1
# 12 C 1 2015-06-12 2
The dates should be converted to one of the date classes. I chose as.Date. When it converts to numeric, the output will be the number of days from a specified date. From there, we are able to group by 4 day increments.
Data
df$Date <- as.Date(df$Date, format="%d/%m/%y")

Assign rows to a group based on spatial neighborhood and temporal criteria in R

I have an issue that I just cannot seem to sort out. I have a dataset that was derived from a raster in arcgis. The dataset represents every fire occurrence during a 10-year period. Some raster cells had multiple fires within that time period (and, thus, will have multiple rows in my dataset) and some raster cells will not have had any fire (and, thus, will not be represented in my dataset). So, each row in the dataset has a column number (sequential integer) and a row number assigned to it that corresponds with the row and column ID from the raster. It also has the date of the fire.
I would like to assign a unique ID (fire_ID) to all of the fires that are within 4 days of each other and in adjacent pixels from one another (within the 8-cell neighborhood) and put this into a new column.
To clarify, if there were an observation from row 3, col 3, Jan 1, 2000 and another from row 2, col 4, Jan 4, 2000, those observations would be assigned the same fire_ID.
Below is a sample dataset with "rows", which are the row IDs of the raster, "cols", which are the column IDs of the raster, and "dates" which are the dates the fire was detected.
rows<-sample(seq(1,50,1),600, replace=TRUE)
cols<-sample(seq(1,50,1),600, replace=TRUE)
dates<-sample(seq(from=as.Date("2000/01/01"), to=as.Date("2000/02/01"), by="day"),600, replace=TRUE)
fire_df<-data.frame(rows, cols, dates)
I've tried sorting the data by "row", then "column", then "date" and looping through to create a new fire_ID whenever the row and column IDs were within one value and the date was within 4 days, but this obviously doesn't work: fires that should share a fire_ID end up with different fire_IDs whenever observations belonging to a different fire_ID sit between them in the sorted list.
fire_df2 <- fire_df[order(fire_df$rows, fire_df$cols, fire_df$dates), ]
fire_ID <- numeric(length = nrow(fire_df2))
fire_ID[1] <- 1
for (i in 2:nrow(fire_df2)) {
  fire_ID[i] <- ifelse(
    abs(fire_df2$rows[i] - fire_df2$rows[i - 1]) <= 1 &
      abs(fire_df2$cols[i] - fire_df2$cols[i - 1]) <= 1 &
      abs(fire_df2$dates[i] - fire_df2$dates[i - 1]) <= 4,
    fire_ID[i - 1],
    i
  )
}
length(unique(fire_ID))
fire_df2$fire_ID <- fire_ID
Please let me know if you have any suggestions.
I think this task requires something along the lines of hierarchical clustering.
Note, however, that there will be necessarily some degree of arbitrariness in the ids. This is because it is entirely possible that the cluster of fires itself is longer than 4 days yet every fire is less than 4 days away from some other fire in that cluster (and thus should have the same id).
library(dplyr)
# Create the distances
fire_dist <- fire_df %>%
  # Normalize dates so that 4 days correspond to a distance of 1
  mutate(norm_dates = as.numeric(dates) / 4) %>%
  # Only keep the three variables of interest
  select(rows, cols, norm_dates) %>%
  # Compute distance using the L-infinity norm (maximum)
  dist(method = "maximum")

# Do hierarchical clustering with the "single" agglomeration method
fire_clust <- hclust(fire_dist, method = "single")

# Cut the tree at height 1 and obtain the groups
group_id <- cutree(fire_clust, h = 1)

# First attach the group ids back to the data frame
fire_df2 <- cbind(fire_df, group_id) %>%
  # Then sort the data
  arrange(group_id, dates, rows, cols)

# Print the first 10 records
fire_df2[1:10, ]
(Make sure you have dplyr library installed. You can run install.packages("dplyr",dep=TRUE) if not installed. It is a really good and very popular library for data manipulations)
A couple of simple tests:
Test #1. The same forest fire moving.
rows<-1:6
cols<-1:6
dates<-seq(from=as.Date("2000/01/01"), to=as.Date("2000/01/06"), by="day")
fire_df<-data.frame(rows, cols, dates)
gives me this:
rows cols dates group_id
1 1 1 2000-01-01 1
2 2 2 2000-01-02 1
3 3 3 2000-01-03 1
4 4 4 2000-01-04 1
5 5 5 2000-01-05 1
6 6 6 2000-01-06 1
Test #2. 6 different random forest fires.
set.seed(1234)
rows<-sample(seq(1,50,1),6, replace=TRUE)
cols<-sample(seq(1,50,1),6, replace=TRUE)
dates<-sample(seq(from=as.Date("2000/01/01"), to=as.Date("2000/02/01"), by="day"),6, replace=TRUE)
fire_df<-data.frame(rows, cols, dates)
output:
rows cols dates group_id
1 6 1 2000-01-10 1
2 32 12 2000-01-30 2
3 31 34 2000-01-10 3
4 32 26 2000-01-27 4
5 44 35 2000-01-10 5
6 33 28 2000-01-09 6
Test #3: one expanding forest fire
dates <- seq(from = as.Date("2000/01/01"), to = as.Date("2000/01/06"), by = "day")
rows_start <- 50
cols_start <- 50

fire_df <- data.frame(dates = dates) %>%
  rowwise() %>%
  do({
    diff <- as.numeric(.$dates - as.Date("2000/01/01"))
    expand.grid(rows = seq(rows_start - diff, rows_start + diff),
                cols = seq(cols_start - diff, cols_start + diff),
                dates = .$dates)
  })
gives me:
rows cols dates group_id
1 50 50 2000-01-01 1
2 49 49 2000-01-02 1
3 49 50 2000-01-02 1
4 49 51 2000-01-02 1
5 50 49 2000-01-02 1
6 50 50 2000-01-02 1
7 50 51 2000-01-02 1
8 51 49 2000-01-02 1
9 51 50 2000-01-02 1
10 51 51 2000-01-02 1
and so on. (All records identified correctly to belong to the same forest fire.)
